
MIMIX®

Version 8.0
MIMIX Administrator Reference
Conceptual, Configuration, and Reference Information
Notices

MIMIX Administrator Reference User Guide


June 2017
Version: 8.0.18.00
© Copyright 1999, 2017 Vision Solutions®, Inc. All rights reserved.
The information in this document is subject to change without notice and is furnished under a license
agreement. This document is proprietary to Vision Solutions, Inc., and may be used only as authorized in our
license agreement. No portion of this manual may be copied or otherwise reproduced without the express
written consent of Vision Solutions, Inc.
Vision Solutions provides no express or implied warranty with this manual.
The following are trademarks or registered trademarks of their respective organizations or companies:
• MIMIX and Vision Solutions are registered trademarks and Data Manager, Director, Dynamic Apply,
ECS/400, IntelliStart, Integrator, iOptimize, iTERA, iTERA Availability, MIMIX AutoNotify, MIMIX Availability,
MIMIX Availability Manager, MIMIX Availability for AIX, MIMIX DB2 Replicator, MIMIX Director, MIMIX dr1,
MIMIX Enterprise, MIMIX Global, MIMIX Monitor, MIMIX Object Replicator, MIMIX for PowerHA, MIMIX
Professional, MIMIX Promoter, OMS/ODS, MIMIX DR for AIX, MIMIX Share, RJ Link, SAM/400, Switch
Assistant, Vision AutoValidate, and Vision Suite are trademarks of Vision Solutions, Inc.
• AIX, AIX 5L, AS/400, DB2, eServer, IBM, Informix, i5/OS, iSeries, OS/400, Power, PowerHA, System i,
System i5, System p, System x, System z, and WebSphere—International Business Machines Corporation.
• Adobe and Acrobat Reader—Adobe Systems, Inc.
• HP-UX—Hewlett-Packard Company.
• Teradata—Teradata Corporation.
• Intel—Intel Corporation.
• Linux—Linus Torvalds.
• Excel, Internet Explorer, Microsoft, Windows, and Windows Server—Microsoft Corporation.
• Mozilla and Firefox—Mozilla Foundation.
• Java, Solaris, Oracle—Oracle Corporation.
• Red Hat—Red Hat, Inc.
• Sybase—Sybase, Inc.
• UNIX and UNIXWare—the Open Group.
All other brands and product names are trademarks or registered trademarks of their respective owners.
If you need assistance, contact Vision Solutions’ CustomerCare team at:
CustomerCare
Vision Solutions, Inc.
Telephone: 1.800.337.8214 or 1.949.724.5465
Email: [email protected]
Web Site: www.visionsolutions.com/Support/Contact-CustomerCare.aspx
Contents
Who this book is for................................................................................................... 19
What is in this book ............................................................................................. 19
The MIMIX documentation set ............................................................................ 19
MIMIX DR Limitations................................................................................................ 20
Sources for additional information............................................................................. 22
How to contact us...................................................................................................... 23
Chapter 1 MIMIX overview 24
MIMIX concepts......................................................................................................... 26
System roles and relationships ........................................................................... 26
Application groups: the unit of coordination ........................................................ 27
Data groups: the unit of replication...................................................................... 27
Switching: moving the production environment to another system ..................... 28
Journaling and object auditing introduction ......................................................... 28
Multi-part naming convention .............................................................................. 31
The MIMIX environment ............................................................................................ 33
The product library .............................................................................................. 33
IFS directories ............................................................................................... 33
The MIMIXQGPL library ...................................................................................... 33
MIMIXSBS subsystem................................................................................... 34
Data libraries ....................................................................................................... 34
System managers ............................................................................................... 34
Journal managers................................................................................................ 35
Target journal inspection ..................................................................................... 35
Collector services ................................................................................................ 36
Cluster services ................................................................................................... 36
Named definitions for configuration objects ........................................................ 36
Data group entries ............................................................................................... 37
Procedures and steps ......................................................................................... 38
Log spaces .......................................................................................................... 38
Job descriptions and job classes......................................................................... 38
User profiles .................................................................................................. 40
System value settings for MIMIX............................................................................... 41
System values for installing software .................................................................. 41
System values for MIMIX operation .................................................................... 41
Operational overview................................................................................................. 44
Support for starting and ending replication.......................................................... 44
Support for checking installation status ............................................................... 44
Support for automatically detecting and resolving problems ............................... 45
Support for working with data groups .................................................................. 45
Support for resolving problems ........................................................................... 46
Support for switching ........................................................................................... 47
Support for advanced analysis ............................................................................ 48
Support for working with messages .................................................................... 48
Chapter 2 Replication process overview 50
Replication job and supporting job names ................................................................ 51
Cooperative processing introduction ......................................................................... 53
MIMIX Dynamic Apply ......................................................................................... 53
Legacy cooperative processing ........................................................................... 54

Advanced journaling ............................................................................................ 54
System journal replication ......................................................................................... 55
Activity entry processing...................................................................................... 56
Processing self-contained activity entries...................................................... 57
Processing data-retrieval activity entries ....................................................... 57
Processes with shared jobs................................................................................. 59
Processes with multiple asynchronous jobs ........................................................ 59
Tracking object replication................................................................................... 60
Managing object auditing .................................................................................... 60
User journal replication.............................................................................................. 63
What is remote journaling?.................................................................................. 63
Benefits of using remote journaling with MIMIX .................................................. 63
Restrictions of MIMIX Remote Journal support ................................................... 64
Overview of IBM processing of remote journals .................................................. 65
Synchronous delivery .................................................................................... 65
Asynchronous delivery .................................................................................. 67
User journal replication processes ...................................................................... 68
The RJ link .......................................................................................................... 68
Sharing RJ links among data groups............................................................. 69
RJ links within and independently of data groups ......................................... 69
Differences between ENDDG and ENDRJLNK commands .......................... 69
RJ link monitors ................................................................................................... 71
RJ link monitors - operation........................................................................... 71
RJ link monitors in complex configurations ................................................... 71
Support for unconfirmed entries during a switch ................................................. 73
RJ link considerations when switching ................................................................ 73
User journal replication of IFS objects, data areas, data queues.............................. 75
Benefits ............................................................................................................... 75
Processes used................................................................................................... 76
Tracking entries ................................................................................................... 77
IFS object file identifiers (FIDs) ........................................................................... 78
Older source-send user journal replication processes .............................................. 79
Chapter 3 Preparing for MIMIX 81
Checklist: pre-configuration....................................................................................... 82
New configuration default environment ..................................................................... 83
Data that should not be replicated............................................................................. 84
Planning for journaled IFS objects, data areas, and data queues............................. 87
Is user journal replication appropriate for your environment? ............................. 87
Serialized transactions with database files.......................................................... 88
Converting existing data groups .......................................................................... 88
Conversion examples .................................................................................... 89
Database apply session balancing ...................................................................... 90
User exit program considerations........................................................................ 90
Starting the MIMIXSBS subsystem ........................................................................... 92
Accessing the MIMIX Main Menu.............................................................................. 93
Chapter 4 Planning choices and details by object class 95
Replication choices by object type ............................................................................ 97
Configured object auditing value for data group entries............................................ 98

Identifying library-based objects for replication ....................................................... 100
How MIMIX uses object entries to evaluate journal entries for replication ........ 101
Replication of implicitly defined parents of library-based objects ................ 102
Identifying spooled files for replication .............................................................. 103
Additional choices for spooled file replication.............................................. 103
Replicating user profiles and associated message queues .............................. 104
Identifying logical and physical files for replication.................................................. 106
Considerations for LF and PF files .................................................................... 106
Files with LOBs............................................................................................ 108
Configuration requirements for LF and PF files................................................. 109
Requirements and limitations of MIMIX Dynamic Apply.................................... 111
Requirements and limitations of legacy cooperative processing....................... 112
Identifying data areas and data queues for replication............................................ 113
Configuration requirements - data areas and data queues ............................... 113
Restrictions - user journal replication of data areas and data queues .............. 114
Identifying IFS objects for replication ...................................................................... 116
Supported IFS file systems and object types .................................................... 116
Considerations when identifying IFS objects..................................................... 117
MIMIX processing order for data group IFS entries..................................... 117
Long IFS path names .................................................................................. 117
Upper and lower case IFS object names..................................................... 117
Replication of implicitly defined IFS parent objects ..................................... 118
Configured object auditing value for IFS objects ......................................... 119
Support for multiple hard links ..................................................................... 119
Configuration requirements - IFS objects .......................................................... 119
Restrictions - user journal replication of IFS objects ......................................... 120
Identifying DLOs for replication ............................................................................... 122
How MIMIX uses DLO entries to evaluate journal entries for replication .......... 122
Replication of implicitly defined DLO parent objects ................................... 122
Sequence and priority order for documents ................................................ 123
Sequence and priority order for folders ....................................................... 124
Processing of newly created files and objects......................................................... 126
Newly created files ............................................................................................ 126
New file processing - MIMIX Dynamic Apply............................................... 126
New file processing - legacy cooperative processing.................................. 127
Newly created IFS objects, data areas, and data queues ................................. 127
Determining how an activity entry for a create operation was replicated .... 128
Processing variations for common operations ........................................................ 129
Move/rename operations - journaled replication ............................................... 129
Move/rename operations - user journaled data areas, data queues, IFS objects .... 130
Delete operations - files configured for legacy cooperative processing ............ 133
Delete operations - user journaled data areas, data queues, IFS objects ........ 134
Restore operations - user journaled data areas, data queues, IFS objects ...... 134
Chapter 5 Configuration checklists 135
Checklist: New remote journal (preferred) configuration ......................................... 137
Checklist: New MIMIX source-send configuration................................................... 141
Checklist: converting to application groups ............................................................. 145
Checklist: Converting to remote journaling.............................................................. 146

Converting to MIMIX Dynamic Apply....................................................................... 148
Converting using the Convert Data Group command ....................................... 148
Checklist: manually converting to MIMIX Dynamic Apply.................................. 149
Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling .................... 151
Checklist: Converting IFS entries to user journaling using the CVTDGIFSE command 154
Requirements for using the CVTDGIFSE command ......................................... 154
Create a list of IFS objects eligible for converting to user journaling................. 155
Running the CVTDGIFSE command................................................................. 155
Responding to CVTDGIFSE command messages ........................................... 156
Checklist: Converting to legacy cooperative processing ......................................... 157
Chapter 6 System-level communications 159
Configuring for native TCP/IP.................................................................................. 159
Port aliases-simple example ............................................................................. 160
Port aliases-complex example .......................................................................... 161
Creating port aliases ......................................................................................... 162
Configuring APPC/SNA........................................................................................... 163
Configuring OptiConnect ......................................................................................... 164
Chapter 7 Configuring system definitions 165
Tips for system definition parameters ..................................................................... 166
Creating system definitions ..................................................................................... 169
Changing a system definition .................................................................................. 170
Limiting internal communications to a network system ........................................... 170
Multiple network system considerations.................................................................. 172
Chapter 8 Configuring transfer definitions 174
Tips for transfer definition parameters..................................................................... 176
Finding the system database name for RDB directory entries .......................... 180
Using IBM i commands to work with RDB directory entries ........................ 180
Using contextual (*ANY) transfer definitions ........................................................... 181
Search and selection process ........................................................................... 181
Considerations for remote journaling ................................................................ 182
Considerations for MIMIX source-send configurations...................................... 182
Naming conventions for contextual transfer definitions ..................................... 183
Additional usage considerations for contextual transfer definitions................... 183
Creating a transfer definition ................................................................................... 184
Changing a transfer definition ................................................................................. 185
Changing a transfer definition to support remote journaling.............................. 185
Starting the DDM TCP/IP server ............................................................................. 187
Verifying that the DDM TCP/IP server is running .............................................. 187
Checking the DDM password validation level ......................................................... 188
Option 1: Manually update MIMIXOWN user profile for DDM environment ...... 188
Option 2: Force MIMIX to change password for MIMIXOWN user profile ......... 189
Option 3: Allow user profiles without passwords ............................................... 189
Starting the TCP/IP server ...................................................................................... 191
Using autostart job entries to start the TCP server ................................................. 192
Identifying the current autostart job entry information ....................................... 192
Changing an autostart job entry and its related job description ........................ 193
Using a different job description for an autostart job entry .......................... 193

Updating host information for a user-managed autostart job entry ............. 194
Updating port information for a user-managed autostart job entry .............. 194
Verifying a communications link for system definitions ........................................... 196
Verifying the communications link for a data group................................................. 197
Verifying all communications links..................................................................... 197
Chapter 9 Configuring journal definitions 198
Configuration processes that create journal definitions........................................... 200
Journals and journal definitions for internal use ................................................ 200
Tips for journal definition parameters ...................................................................... 201
Journal definition considerations ............................................................................. 206
Journal definition naming conventions .................................................................... 207
Preferred target journal definition naming convention ....................................... 207
Example journal definitions for three management nodes .......................... 209
Target journal definition names generated by ADDRJLNK command .............. 210
Example journal definitions for a switchable data group ............................. 211
Journal receiver management................................................................................. 213
Interaction with other products that manage receivers...................................... 214
Processing from an earlier journal receiver ....................................................... 215
Considerations when journaling on target ......................................................... 216
Journal receiver size for replicating large object data ............................................. 217
Verifying journal receiver size options ............................................................... 217
Changing journal receiver size options ............................................................. 217
Creating a journal definition..................................................................................... 218
Changing a journal definition................................................................................... 220
Building the journaling environment ........................................................................ 221
Changing the journaling environment to use *MAXOPT3 ....................................... 222
Changing the remote journal environment .............................................................. 225
Adding a remote journal link.................................................................................... 227
Changing a remote journal link................................................................................ 228
Temporarily changing from RJ to MIMIX processing .............................................. 229
Changing from remote journaling to MIMIX processing .......................................... 230
Removing a remote journaling environment............................................................ 231
Chapter 10 Configuring data group definitions 233
Tips for data group parameters ............................................................................... 234
Additional considerations for data groups ......................................................... 245
Creating a data group definition .............................................................................. 246
Changing a data group definition ............................................................................ 250
Changing a data group to use a shared object send job......................................... 250
Fine-tuning backlog warning thresholds for a data group ....................................... 251
Optimizing performance for a shared object send process ..................................... 254
Identifying which data groups share an object send process ............................ 255
Moving a data group to a different object send job ........................................... 255
Chapter 11 Additional options: working with definitions 257
Copying a definition................................................................................................. 257
Deleting a definition................................................................................................. 258
Renaming definitions............................................................................................... 259
Renaming a system definition ........................................................................... 260
Renaming a transfer definition .......................................................................... 266

Renaming a journal definition with considerations for RJ link ........................... 267
Renaming a data group definition ..................................................................... 268
Chapter 12 Configuring data group entries 270
Creating data group object entries .......................................................................... 271
Loading data group object entries ..................................................................... 271
Adding or changing a data group object entry................................................... 272
Creating data group file entries ............................................................................... 275
Loading file entries ............................................................................................ 275
Loading file entries from a data group’s object entries ................................ 276
Loading file entries from a library ................................................................ 278
Loading file entries from a journal definition ................................................ 279
Loading file entries from another data group’s file entries........................... 280
Adding a data group file entry ........................................................................... 281
Changing a data group file entry ....................................................................... 282
Creating data group IFS entries .............................................................................. 284
Adding or changing a data group IFS entry....................................................... 284
Loading tracking entries .......................................................................................... 286
Loading IFS tracking entries.............................................................................. 286
Loading object tracking entries.......................................................................... 287
Adding a library to an existing data group ............................................................... 288
Adding an IFS directory to an existing data group .................................................. 293
Creating data group DLO entries ............................................................................ 297
Loading DLO entries from a folder .................................................................... 297
Adding or changing a data group DLO entry ..................................................... 298
Additional options: working with DG entries ............................................................ 300
Copying a data group entry ............................................................................... 300
Removing a data group entry ............................................................................ 301
Displaying a data group entry............................................................................ 302
Chapter 13 Additional supporting tasks for configuration 303
Accessing the Configuration Menu.......................................................................... 305
Starting the system and journal managers.............................................................. 306
Manually deploying configuration changes ............................................................. 307
Setting data group auditing values manually........................................................... 309
Examples of changing an IFS object’s auditing value ........................................ 310
Checking file entry configuration manually.............................................................. 313
Starting data groups for the first time ...................................................................... 315
Identifying data groups that use an RJ link ............................................................. 316
Using file identifiers (FIDs) for IFS objects .............................................................. 317
Configuring restart times for MIMIX jobs ................................................................. 318
Configurable job restart time operation ............................................................. 318
Affected jobs...................................................................................................... 318
Examples: job restart time ................................................................................. 320
Restart time examples: system definitions .................................................. 320
Restart time examples: system and data group definition combinations..... 321
Configuring the restart time in a system definition ............................................ 323
Configuring the restart time in a data group definition....................................... 324
Setting the system time zone and time ................................................................... 325
Creating an application group definition .................................................................. 326
Loading data resource groups into an application group ........................................ 327
Specifying the primary node for the application group ............................................ 327
Manually adding resource group and node entries to an application group............ 328
Starting, ending, or switching an application group................................................. 330
Starting an application group............................................................................. 331
Ending an application group .............................................................................. 332
Switching an application group.......................................................................... 332
Performing target journal inspection........................................................................ 334
Automatic correction of errors found by target journal inspection ..................... 337
Enabling target journal inspection ..................................................................... 338
Determining which data groups use a journal definition .................................... 339
Disabling target journal inspection .................................................................... 340
Chapter 14 Starting, ending, and verifying journaling 342
What objects need to be journaled.......................................................................... 343
Authority requirements for starting journaling.................................................... 344
MIMIX commands for starting journaling................................................................. 345
Forcing objects to use the configured journal.................................................... 346
Journaling for physical files ..................................................................................... 347
Displaying journaling status for physical files .................................................... 347
Starting journaling for physical files ................................................................... 347
Ending journaling for physical files .................................................................... 348
Verifying journaling for physical files ................................................................. 349
Journaling for IFS objects........................................................................................ 350
Displaying journaling status for IFS objects ...................................................... 350
Starting journaling for IFS objects ..................................................................... 350
Ending journaling for IFS objects ...................................................................... 351
Verifying journaling for IFS objects.................................................................... 352
Journaling for data areas and data queues............................................................. 354
Displaying journaling status for data areas and data queues............................ 354
Starting journaling for data areas and data queues .......................................... 354
Ending journaling for data areas and data queues............................................ 355
Verifying journaling for data areas and data queues ......................................... 356
Chapter 15 Configuring for improved performance 358
Minimized journal entry data ................................................................................... 359
Restrictions of minimized journal entry data...................................................... 359
Configuring for minimized journal entry data ..................................................... 360
Configuring database apply caching ....................................................................... 361
Configuring for high availability journal performance enhancements...................... 362
Journal standby state ........................................................................................ 362
Minimizing potential performance impacts of standby state ........................ 363
Journal caching ................................................................................................. 363
MIMIX processing of high availability journal performance enhancements....... 363
Requirements of high availability journal performance enhancements ............. 364
Restrictions of high availability journal performance enhancements................. 364
Configuring journal standby state ...................................................................... 365
Configuring journal caching ............................................................................... 365
Immediately applying committed transactions......................................................... 367
Changing the specified commit mode ............................................................... 368
Caching extended attributes of *FILE objects ......................................................... 369
Optimizing access path maintenance...................................................................... 370
Optimizing access path maintenance on service pack 7.1.15.00 or higher ...... 370
Eligible files and limitations.......................................................................... 370
Enabling the access path maintenance function ......................................... 371
Operation..................................................................................................... 371
Error recovery.............................................................................................. 372
Behavior during a switch ............................................................................. 372
Job status .................................................................................................... 373
Using parallel access path maintenance on earlier service packs .................... 374
Increasing data returned in journal entry blocks by delaying RCVJRNE calls ........ 377
Understanding the data area format.................................................................. 377
Determining if the data area should be changed............................................... 378
Configuring the RCVJRNE call delay and block values .................................... 378
Configuring high volume objects for better performance......................................... 380
Improving performance of the #MBRRCDCNT audit .............................................. 381
Chapter 16 Configuring advanced replication techniques 383
Keyed replication..................................................................................................... 385
Keyed vs positional replication .......................................................................... 385
Requirements for keyed replication ................................................................... 385
Restrictions of keyed replication........................................................................ 386
Implementing keyed replication ......................................................................... 386
Changing a data group configuration to use keyed replication.................... 386
Changing a data group file entry to use keyed replication........................... 387
Verifying key attributes ...................................................................................... 389
Data distribution and data management scenarios ................................................. 390
Configuring for bi-directional flow ...................................................................... 390
Bi-directional requirements: system journal replication ............................... 390
Bi-directional requirements: user journal replication.................................... 391
Configuring for file routing and file combining ................................................... 392
Configuring for cascading distributions ............................................................. 395
Trigger support ........................................................................................................ 397
How MIMIX handles triggers ............................................................................. 397
Considerations when using triggers .................................................................. 397
Enabling trigger support .................................................................................... 398
Synchronizing files with triggers ........................................................................ 398
Constraint support ................................................................................................... 399
Referential constraints with delete rules............................................................ 399
Replication of constraint-induced modifications .......................................... 400
Handling SQL identity columns ............................................................................... 401
The identity column problem explained ............................................................. 401
When the SETIDCOLA command is useful....................................................... 402
SETIDCOLA command limitations .................................................................... 402
Alternative solutions .......................................................................................... 403
SETIDCOLA command details .......................................................................... 404
Usage notes ................................................................................................ 405
Examples of choosing a value for INCREMENTS....................................... 405
Checking for replication of tables with identity columns .................................... 406
Setting the identity column attribute for replicated files ..................................... 406
Collision resolution .................................................................................................. 408
Additional methods available with CR classes .................................................. 408
Requirements for using collision resolution ....................................................... 409
Working with collision resolution classes .......................................................... 410
Creating a collision resolution class ............................................................ 410
Changing a collision resolution class........................................................... 411
Deleting a collision resolution class............................................................. 411
Displaying a collision resolution class ......................................................... 411
Printing a collision resolution class.............................................................. 412
Changing target side locking for DBAPY processes ............................................... 413
Omitting T-ZC content from system journal replication ........................................... 415
Configuration requirements and considerations for omitting T-ZC content ....... 416
Omit content (OMTDTA) and cooperative processing................................. 417
Omit content (OMTDTA) and comparison commands ................................ 417
Selecting an object retrieval delay........................................................................... 419
Object retrieval delay considerations and examples ......................................... 419
Configuring to replicate SQL stored procedures and user-defined functions.......... 421
Requirements for replicating SQL stored procedure operations ....................... 421
To replicate SQL stored procedure operations ................................................. 422
Using Save-While-Active in MIMIX.......................................................................... 423
Considerations for save-while-active................................................................. 423
Types of save-while-active options ................................................................... 424
Example configurations ..................................................................................... 424
Chapter 17 Object selection for Compare and Synchronize commands 425
Object selection process ......................................................................................... 426
Order precedence ............................................................................................. 429
Parameters for specifying object selectors.............................................................. 429
Object selection examples ...................................................................................... 434
Processing example with a data group and an object selection parameter ...... 435
Example subtree ............................................................................................... 438
Example Name pattern...................................................................................... 441
Example subtree for IFS objects ....................................................................... 442
Report types and output formats ............................................................................. 444
Spooled files ...................................................................................................... 444
Outfiles .............................................................................................................. 445
Chapter 18 Comparing attributes 446
About the Compare Attributes commands .............................................................. 446
Choices for selecting objects to compare.......................................................... 447
Unique parameters ...................................................................................... 447
Choices for selecting attributes to compare ...................................................... 448
CMPFILA supported object attributes for *FILE objects .............................. 449
CMPOBJA supported object attributes for *FILE objects ............................ 449
Comparing file and member attributes .................................................................... 450
Comparing object attributes .................................................................................... 453
Comparing IFS object attributes.............................................................................. 456
Comparing DLO attributes....................................................................................... 459
Chapter 19 Comparing file record counts and file member data 462
Comparing file record counts .................................................................................. 462
To compare file record counts ........................................................................... 463
Significant features for comparing file member data ............................................... 465
Repairing data ................................................................................................... 465
Active and non-active processing...................................................................... 465
Processing members held due to error ............................................................. 465
Additional features............................................................................................. 466
Considerations for using the CMPFILDTA command ............................................. 466
Recommendations and restrictions ................................................................... 466
Using the CMPFILDTA command with firewalls................................................ 467
Security considerations ..................................................................................... 467
Comparing allocated records to records not yet allocated ................................ 467
Comparing files with unique keys, triggers, and constraints ............................. 468
Avoiding issues with triggers ....................................................................... 468
Referential integrity considerations ............................................................. 469
Job priority .................................................................................................... 469
CMPFILDTA and network inactivity................................................................... 470
Specifying CMPFILDTA parameter values.............................................................. 470
Specifying file members to compare ................................................................. 470
Tips for specifying values for unique parameters .............................................. 471
Specifying the report type, output, and type of processing ............................... 474
System to receive output ............................................................................. 474
Interactive and batch processing................................................................. 474
Using the additional parameters........................................................................ 474
Advanced subset options for CMPFILDTA.............................................................. 476
Ending CMPFILDTA requests ................................................................................. 479
Comparing file member data - basic procedure (non-active) .................................. 481
Comparing and repairing file member data - basic procedure ................................ 484
Comparing and repairing file member data - members on hold (*HLDERR) .......... 487
Comparing file member data using active processing technology .......................... 490
Comparing file member data using subsetting options ........................................... 493
Chapter 20 Synchronizing data between systems 497
Considerations for synchronizing using MIMIX commands..................................... 499
Limiting the maximum sending size .................................................................. 499
Synchronizing user profiles ............................................................................... 499
Synchronizing user profiles with SYNCnnn commands .............................. 500
Missing system distribution directory entries ............................................... 500
Synchronizing large files and objects ................................................................ 501
Status changes caused by synchronizing ......................................................... 501
Synchronizing objects in an independent ASP.................................................. 501
About MIMIX commands for synchronizing objects, IFS objects, and DLOs .......... 503
About synchronizing data group activity entries (SYNCDGACTE).......................... 504
About synchronizing file entries (SYNCDGFE command) ...................................... 505
About synchronizing tracking entries....................................................................... 507
Performing the initial synchronization...................................................................... 508
Establish a synchronization point ...................................................................... 508
Resources for synchronizing ............................................................................. 509
Using SYNCDG to perform the initial synchronization ............................................ 510
To perform the initial synchronization using the SYNCDG command defaults . 511
Verifying the initial synchronization ......................................................................... 512
Synchronizing database files................................................................................... 514
Synchronizing objects ............................................................................................. 516
To synchronize library-based objects associated with a data group ................. 516
To synchronize library-based objects without a data group .............................. 517
Synchronizing IFS objects....................................................................................... 520
To synchronize IFS objects associated with a data group ................................ 520
To synchronize IFS objects without a data group ............................................. 522
Synchronizing DLOs................................................................................................ 524
To synchronize DLOs associated with a data group ......................................... 524
To synchronize DLOs without a data group ...................................................... 525
Synchronizing data group activity entries................................................................ 528
Synchronizing tracking entries ................................................................................ 530
To synchronize an IFS tracking entry ................................................................ 530
To synchronize an object tracking entry ............................................................ 530
Chapter 21 Introduction to programming 531
Support for customizing........................................................................................... 532
User exit points.................................................................................................. 532
Collision resolution ............................................................................................ 532
Completion and escape messages for comparison commands ............................. 534
CMPFILA messages ......................................................................................... 534
CMPOBJA messages........................................................................................ 535
CMPIFSA messages ......................................................................................... 535
CMPDLOA messages ....................................................................................... 536
CMPRCDCNT messages .................................................................................. 536
CMPFILDTA messages..................................................................................... 537
Adding messages to the MIMIX message log ......................................................... 541
Output and batch guidelines.................................................................................... 542
General output considerations .......................................................................... 542
Output parameter ........................................................................................ 542
Display output.............................................................................................. 543
Print output .................................................................................................. 543
File output.................................................................................................... 545
General batch considerations............................................................................ 546
Batch (BATCH) parameter .......................................................................... 546
Job description (JOBD) parameter .............................................................. 546
Job name (JOB) parameter ......................................................................... 546
Displaying a list of commands in a library ............................................................... 547
Running commands on a remote system................................................................ 548
Benefits - RUNCMD and RUNCMDS commands ............................................. 548
Procedures for running commands RUNCMD, RUNCMDS.................................... 549
Running commands using a specific protocol ................................................... 549
Running commands using a MIMIX configuration element ............................... 551
Using lists of retrieve commands ............................................................................ 555
Changing command defaults................................................................................... 556
Chapter 22 Customizing procedures 557
Procedure components and concepts..................................................................... 557
Procedure types ................................................................................................ 558
Procedure job processing.................................................................................. 558
Attributes of a step ............................................................................................ 559
Operational control ............................................................................................ 560
Current status and run history ........................................................................... 561
Customizing user application handling for switching............................................... 561
Customize the step programs for user applications .......................................... 562
Working with procedures......................................................................................... 563
Accessing the Work with Procedures display.................................................... 563
Displaying the procedures for an application group .................................... 564
Displaying the procedures for a node.......................................................... 564
Displaying all procedures ............................................................................ 565
Creating a procedure of type *NODE ................................................................ 565
Creating a procedure of type *USER ................................................................ 565
Creating a procedure of type *END, *START, *SWTPLAN, *SWTUNPLAN ..... 566
Deleting a procedure ......................................................................................... 566
Working with the steps of a procedure .................................................................... 567
Displaying the steps within a procedure ............................................................ 567
Displaying step status for the last started run of a procedure ........................... 568
Adding a step to a procedure ............................................................................ 568
Changing attributes of a step ............................................................................ 569
Enabling or disabling a step .............................................................................. 569
Removing a step from a procedure ................................................................... 570
Working with step programs.................................................................................... 570
Accessing step programs .................................................................................. 570
Creating a custom step program ....................................................................... 570
Changing a step program .................................................................................. 571
Step program format STEP0100 ....................................................................... 571
Working with step messages................................................................................... 573
Accessing the Work with Step Messages display ............................. 573
Adding or changing a step message ................................................................. 573
Removing a step message ................................................................................ 574
Additional programming support for procedures and steps..................................... 574
Chapter 23 Shipped procedures and step programs 576
Values for procedures and steps............................................................................. 576
Shipped procedures for application groups............................................................. 578
END ................................................................................................................... 579
ENDTGT............................................................................................................ 579
ENDIMMED ....................................................................................................... 579
PRECHECK ...................................................................................................... 580
START............................................................................................................... 581
SWTPLAN ......................................................................................................... 581
SWTUNPLAN .................................................................................................... 583
Shipped procedures for data protection reports ...................................................... 585
CRTDPRDIR ..................................................................................................... 585
CRTDPRFLR..................................................................................................... 585
CRTDPRLIB ...................................................................................................... 586
Shipped default procedures for IBM i cluster type application groups .................... 586
END for clustering ............................................................................................. 587
START for clustering ......................................................................................... 587
SWTPLAN for clustering ................................................................................... 588
SWTUNPLAN for clustering .............................................................................. 590
Shipped user procedures for cluster type application groups ................................. 592
APP_END.......................................................................................................... 592
APP_FAIL.......................................................................................................... 593
APP_STR .......................................................................................................... 593
APP_SWT ......................................................................................................... 594
Shipped user procedures for *GMIR resource groups ............................................ 594
GMIR_END ....................................................................................................... 594
GMIR_FAIL ....................................................................................................... 595
GMIR_JOIN ....................................................................................................... 595
GMIR_STR ........................................................................................................ 596
GMIR_SWT ....................................................................................................... 596
Shipped user procedures for *LUN resource groups .............................................. 597
LUN_FAIL.......................................................................................................... 597
LUN_SWT ......................................................................................................... 598
Shipped user procedures for Peer resource groups ............................................... 598
PEER_END ....................................................................................................... 598
PEER_STR ....................................................................................................... 598
Shipped user procedures for *PPRC resource groups............................................ 599
PPRC_END ....................................................................................................... 599
PPRC_FAIL ....................................................................................................... 599
PPRC_JOIN ...................................................................................................... 600
PPRC_STR ....................................................................................................... 600
PPRC_SWT ...................................................................................................... 601
Steps for application groups.................................................................................... 602
Steps for application groups included in procedures......................................... 602
Step programs not included in shipped MIMIX procedures............................... 609
Steps for data protection report procedures............................................................ 611
Steps for clustering environments ........................................................................... 613
Steps for MIMIX for MQ........................................................................................... 623
Chapter 24 Customizing with exit point programs 625
Summary of exit points............................................................................................ 625
MIMIX user exit points ....................................................................................... 625
MIMIX Monitor user exit points .......................................................................... 626
MIMIX Promoter user exit points ....................................................................... 627
Working with journal receiver management user exit points ................................... 628
Journal receiver management exit points.......................................................... 628
Change management exit points................................................................. 628
Delete management exit points ................................................................... 629
Requirements for journal receiver management exit programs................... 629
Journal receiver management exit program example ................................. 632
Appendix A Supported object types for system journal replication 635
Appendix B MIMIX product-level security 638
Authority levels for MIMIX commands..................................................................... 639
Substitution values for command authority ....................................................... 647
Appendix C Copying configurations 648
Supported scenarios ............................................................................................... 648

Checklist: copy configuration................................................................................... 649
Copying configuration procedure ............................................................................ 653
Appendix D Configuring Intra communications 654
Manually configuring Intra using TCP ..................................................................... 654
Manually configuring Intra using SNA ..................................................................... 656
Appendix E MIMIX support for independent ASPs 658
Benefits of independent ASPs................................................................................. 659
Auxiliary storage pool concepts at a glance ............................................................ 659
Requirements for replicating from independent ASPs ............................................ 662
Limitations and restrictions for independent ASP support....................................... 662
Configuration planning tips for independent ASPs.................................................. 663
Journal and journal receiver considerations for independent ASPs .................. 664
Configuring IFS objects when using independent ASPs ................................... 664
Configuring library-based objects when using independent ASPs .................... 664
Avoiding unexpected changes to the library list ................................................ 665
Detecting independent ASP overflow conditions..................................................... 667
Appendix F Advanced auditing topics 668
What are rules and how they are used by auditing ................................................. 669
Using a different job scheduler for audits ................................................................ 670
Considerations for rules .......................................................................................... 671
Creating user-generated notifications ..................................................................... 673
Example of a user-generated notification .......................................................... 674
Running rules and rule groups manually................................................................. 675
Running rules .................................................................................................... 675
Running rule groups .......................................................................................... 676
MIMIX rule groups ................................................................................................... 677
Appendix G Interpreting audit results 678
Resolving auditing problems ................................................................................... 679
Resolving audit runtime status problems .......................................................... 679
Checking the job log of an audit ........................................................................ 684
Resolving audit compliance status problems .................................................... 684
When the difference is “not found” .......................................................................... 686
Interpreting results for configuration data - #DGFE audit........................................ 687
Interpreting results of audits for record counts and file data ................................... 689
What differences were detected by #FILDTA.................................................... 689
What differences were detected by #MBRRCDCNT ......................................... 691
Interpreting results of audits that compare attributes .............................................. 692
What attribute differences were detected .......................................................... 692
Where was the difference detected................................................................... 695
What attributes were compared ........................................................................ 695
Attributes compared and expected results - #FILATR, #FILATRMBR audits.... 696
Attributes compared and expected results - #OBJATR audit ............................ 701
Attributes compared and expected results - #IFSATR audit ............................. 710
Attributes compared and expected results - #DLOATR audit ........................... 713
Comparison results for journal status and other journal attributes .................... 715
How configured journaling settings are determined .................................... 718
Comparison results for auxiliary storage pool ID (*ASP)................................... 719

Comparison results for user profile status (*USRPRFSTS) .............................. 722
How configured user profile status is determined........................................ 723
Comparison results for user profile password (*PRFPWDIND)......................... 725
Appendix H Journal Codes and Error Codes 727
Journal entry codes for user journal transactions.................................................... 727
Journal entry codes for files .............................................................................. 727
Error codes for files in error ............................................................................... 730
Journal codes and entry types for journaled IFS objects .................................. 732
Journal codes and entry types for journaled data areas and data queues........ 733
Journal entry codes for system journal transactions ............................................... 735
Appendix I Outfile formats 737
Work panels with outfile support ............................................................................. 738
MCAG outfile (WRKAG command) ......................................................................... 739
MCDTACRGE outfile (WRKDTARGE command) ................................................... 742
MCNODE outfile (WRKNODE command)............................................................... 745
MXCDGFE outfile (CHKDGFE command) .............................................................. 747
MXCMPDLOA outfile (CMPDLOA command)......................................................... 749
MXCMPFILA outfile (CMPFILA command) ............................................................. 751
MXCMPFILD outfile (CMPFILDTA command) ........................................................ 753
MXCMPFILR outfile (CMPFILDTA command, RRN report).................................... 756
MXCMPRCDC outfile (CMPRCDCNT command)................................................... 757
MXCMPIFSA outfile (CMPIFSA command) ............................................................ 759
MXCMPOBJA outfile (CMPOBJA command) ......................................................... 761
MXAUDHST outfile (WRKAUDHST command) ...................................................... 763
MXAUDOBJ outfile (WRKAUDOBJ, WRKAUDOBJH commands) ......................... 766
MXDGACT outfile (WRKDGACT command)........................................................... 769
MXDGACTE outfile (WRKDGACTE command)...................................................... 771
MXDGDFN outfile (WRKDGDFN command) .......................................................... 778
MXDGDLOE outfile (WRKDGDLOE command) ..................................................... 786
MXDGFE outfile (WRKDGFE command)................................................................ 787
MXDGIFSE outfile (WRKDGIFSE command) ......................................................... 790
MXDGSTS outfile (WRKDG command) .................................................................. 791
WRKDG outfile SELECT statement examples .................................................. 813
WRKDG outfile example 1........................................................................... 814
WRKDG outfile example 2........................................................................... 814
WRKDG outfile example 3........................................................................... 814
WRKDG outfile example 4........................................................................... 815
MXDGOBJE outfile (WRKDGOBJE command) ...................................................... 816
MXDGTSP outfile (WRKDGTSP command) ........................................................... 819
MXJRNDFN outfile (WRKJRNDFN command) ....................................................... 821
MXRJLNK outfile (WRKRJLNK command) ............................................................. 824
MXSYSDFN outfile (WRKSYSDFN command)....................................................... 826
MXSYSSTS outfile (WRKSYS command) .............................................................. 829
MXJRNINSP outfile (WRKJRNINSP command) ..................................................... 830
MXTFRDFN outfile (WRKTFRDFN command) ....................................................... 832
MXDGIFSTE outfile (WRKDGIFSTE command)..................................................... 834
MXDGOBJTE outfile (WRKDGOBJTE command).................................................. 837
MXPROC outfile (WRKPROC command) ............................................................... 839

MXPROCSTS outfile (WRKPROCSTS command) ................................................. 840
MXSTEPPGM outfile (WRKSTEPPGM command)................................................. 841
MXSTEP outfile (WRKSTEP command) ................................................................. 842
MXSTEPMSG outfile (WRKSTEPMSG command)................................................. 843
MXSTEPSTS outfile (WRKSTEPSTS command) ................................................... 844
Index 846

Who this book is for


The MIMIX Administrator Reference book is a tool for MIMIX administrators who
configure and maintain a MIMIX® DR, MIMIX® Enterprise™, or MIMIX® Professional™
replication environment. Some topics in this book may not apply to all products.

What is in this book


The MIMIX Administrator Reference book provides these distinct types of information:
• Descriptions of MIMIX concepts and replication processes
• Configuration planning information, including details about replication choices for
classes of objects
• Checklists and supporting procedures for implementing common configurations
• Detailed information for customizing configurations for improved performance and
to support advanced replication techniques
• Detailed information about comparison commands and their results, as well as
synchronization commands. Compare commands are the basis of all MIMIX
audits and synchronize commands are the basis for automatic recoveries.
• Descriptions of available support for customizing through the use of exit programs
• Reference material such as lists of supported object types and possible journal
codes and error codes, values that can be returned in output files (outfiles), and
attributes that can be compared.

The MIMIX documentation set


The following documents about MIMIX® products are available:
Using License Manager
License Manager currently supports MIMIX®, iOptimize™, and iTERA®
Availability™. This book describes software requirements, system security, and
other planning considerations for installing software and software fixes for Vision
Solutions products that are supported through License Manager. The preferred
way to obtain license keys and install software is by using Vision AutoValidate™
and the product’s Installation Wizard. However, if you cannot use the wizard or
AutoValidate, this book provides instructions for obtaining licenses and installing
software from a native user interface. This book also describes how to use the
additional security functions from Vision Solutions which are available for License
Manager and MIMIX and implemented through License Manager.
MIMIX Administrator Reference
This book provides detailed conceptual, configuration, and programming
information for MIMIX products. It includes checklists for setting up several
common configurations, information for planning what to replicate, and detailed
advanced configuration topics for custom needs. It also identifies what information
can be returned in outfiles if used in automation. Some topics may not be
applicable to all MIMIX products.

MIMIX Operations with IBM i Clustering
This book is for administrators and operators in an IBM i clustering environment
who use MIMIX® for PowerHA® to integrate cluster management with MIMIX
logical replication or supported hardware-based replication techniques. This book
focuses on addressing problems reported in MIMIX status and basic operational
procedures such as starting, ending, and switching.
MIMIX Operations - 5250
This book provides high level concepts and operational procedures for managing
your high availability environment using MIMIX products from a native user
interface. This book focuses on tasks typically performed by an operator, such as
checking status, starting or stopping replication, performing audits, and basic
problem resolution.
Using MIMIX Monitor
This book describes how to use the MIMIX Monitor user and programming
interfaces available with MIMIX products. This book also includes programming
information about MIMIX Model Switch Framework.
Using MIMIX Promoter
This book describes how to use MIMIX commands for copying and reorganizing
active files. MIMIX Promoter functionality is included with MIMIX® Enterprise™.
MIMIX for IBM WebSphere MQ
This book identifies requirements for the MIMIX for MQ feature which supports
replication in IBM WebSphere MQ environments. This book describes how to
configure MIMIX for this environment and how to perform the initial
synchronization and initial startup. Once configured and started, all other
operations are performed as described in the MIMIX Operations - 5250 book.

MIMIX DR Limitations
Environments that replicate in one direction between only two systems may not
require the features offered by a high availability product. Instead, a disaster recovery
solution that enables data to be readily available is all that is needed. MIMIX® DR
features many of the capabilities found in Vision Solutions high availability products,
with the following limitations that make it strictly a disaster recovery solution:
• MIMIX Instance - Only one MIMIX Instance is allowed on each participating
system. A MIMIX DR installation comprises two systems that transfer data and
objects between them. Both systems within the instance use the same name for
the library in which the product is installed.
• Switching - MIMIX DR does not support switching. Switching is the process by
which a production environment is automatically moved from one system to
another system so that replication of data can be done in either direction.
• Priority based auditing - MIMIX DR supports only scheduled object auditing.
Priority based object auditing is an advanced configuration of auditing that is only
available with other products within the MIMIX family.

• Multi-management - One system in every MIMIX installation is designated as a
management system where operations such as configuration are performed.
Multi-management allows certain MIMIX products to be managed from multiple
systems in the MIMIX instance. Because MIMIX DR is a simple, two system
environment, multi-management is not necessary.
• Keyed replication - MIMIX DR uses positional file replication. In positional file
replication, data on the target system is identified by position, or relative record
number (RRN), in the file member. When the file on the source system is updated,
MIMIX DR finds the data in the exact location on the target system and updates
that data with the changes. Keyed replication updates files based on key values
within data instead of by the position of the data within the file. Although keyed
replication is not supported in MIMIX DR, it may be appropriate in complex
environments.
• Cluster Resource Services - MIMIX DR does not support Cluster Resource
Services, which is part of the base IBM i operating system. Cluster resource
services provides the integrated services and application programming interfaces
(APIs) necessary to create and manage an IBM cluster.
• MIMIX for WebSphere MQ - MIMIX DR does not support the separately licensed
feature, MIMIX for WebSphere MQ. This feature enables an IBM WebSphere MQ
environment to be included as part of a managed availability solution. IBM
WebSphere MQ is an application that transmits messages between systems
running on a variety of platforms.
• Copy While Active/Reorganize While Active - Copy/Reorganize while active
capabilities are not supported by MIMIX DR. These capabilities copy or
reorganize database files while the files are in active production using the Copy
Active File (CPYACTF) and Reorganize Active File (RGZACTF) commands.
• Name mapping - Name mapping is not supported by MIMIX DR. Name mapping is
an advanced configuration technique that allows files to be renamed as they are
replicated. Typically this capability is used by more complex environments where
objects exist in different libraries or paths.
• INTRA - INTRA configurations are not supported in MIMIX DR because they do
not provide disaster recovery. In an INTRA configuration, replication occurs within
a single system. Intra is a special configuration that allows certain MIMIX products
to function fully within a single system environment.
• Processor group - MIMIX DR is limited to P05 and P10 systems with 1 to 4 CPUs
(cores). The backup system can be any P-Group model.
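The positional versus keyed distinction described in the list above can be sketched in a few lines of Python. This is a conceptual illustration only; the records, field names, and helper functions are hypothetical and do not represent actual MIMIX processing.

```python
# Conceptual sketch: positional (RRN) vs. keyed update of a target file member.
# Records are modeled as dicts; field names ("id", "qty") are hypothetical.

def apply_positional(target, rrn, new_record):
    """Positional replication: locate the target record by relative
    record number (RRN) and overwrite it in place."""
    target[rrn - 1] = new_record  # RRNs are 1-based
    return target

def apply_keyed(target, key_field, new_record):
    """Keyed replication: locate the target record by its key value,
    regardless of where the record sits in the file member."""
    for i, rec in enumerate(target):
        if rec[key_field] == new_record[key_field]:
            target[i] = new_record
            return target
    target.append(new_record)  # no matching key: treat as an insert
    return target

member = [{"id": 101, "qty": 5}, {"id": 102, "qty": 9}]
apply_positional(member, 2, {"id": 102, "qty": 12})  # update record at RRN 2
apply_keyed(member, "id", {"id": 101, "qty": 7})     # update record with id 101
```

The positional lookup is a direct index into the member, which is why it requires the data to sit in the exact same location on both systems; the keyed lookup tolerates differing record positions at the cost of a key search.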

Sources for additional information
This book refers to other published information. The following information, plus
additional technical information, can be located in the IBM Knowledge Center.

From the Knowledge Center you can access these IBM Power™ Systems topics,
books, and redbooks:
• Backup and Recovery
• Journal management
• DB2 Universal Database for IBM Power™ Systems Database Programming
• Integrated File System Introduction
• Independent disk pools
• TCP/IP Setup
• IBM redbook Striving for Optimal Journal Performance on DB2 Universal
Database for iSeries, SG24-6286
• IBM redbook AS/400 Remote Journal Function for High Availability and Data
Replication, SG24-5189
• IBM redbook Power™ Systems iASPs: A Guide to Moving Applications to
Independent ASPs, SG24-6802
The following information may also be helpful if you replicate journaled data areas,
data queues, or IFS objects:
• DB2 UDB for iSeries SQL Programming Concepts
• DB2 Universal Database for iSeries SQL Reference
• IBM redbook AS/400 Remote Journal Function for High Availability and Data
Replication, SG24-5189

How to contact us
For contact information, visit our Contact CustomerCare web page.
If you are current on maintenance, support for MIMIX products is also available when
you log in to Support Central.
It is important to include product and version information whenever you report
problems.

CHAPTER 1 MIMIX overview

This book provides concepts, configuration procedures, and reference information for
using MIMIX® Enterprise™, MIMIX® Professional™, or MIMIX® DR. For simplicity, this
book uses the term MIMIX to refer to the functionality provided unless a more specific
name is necessary. Concepts and reference information also apply to MIMIX®
Global™. Some topics may not apply to all products.
MIMIX version 8 provides high availability for your critical data in a production
environment on IBM Power™ Systems through real-time replication of changes and
the ability to quickly switch your production environment to a ready backup system.
These capabilities allow your business operations to continue when you have planned
or unplanned outages in your System i environment. MIMIX also provides advanced
capabilities that can help ensure the integrity of your MIMIX environment.
Replication: MIMIX continuously captures changes to critical database files and
objects on a production system, sends the changes to a backup system, and applies
the changes to the appropriate database file or object on the backup system. The
backup system stores exact duplicates of the critical database files and objects from
the production system.
MIMIX uses two replication paths to address different pieces of your replication
needs. These paths operate with configurable levels of cooperation or can operate
independently.
• The user journal replication path captures changes to critical files and objects
configured for replication through a user journal. When configuring this path,
shipped defaults use the remote journaling function of the operating system to
simplify sending data to the remote system. In previous versions, MIMIX DB2
Replicator provided this function.
• The system journal replication path handles replication of critical system objects
(such as user profiles, program objects, or spooled files), integrated file system
(IFS) objects, and document library objects (DLOs) using the system journal. In
previous versions, MIMIX Object Replicator provided this function.
Configuration choices determine the degree of cooperative processing used between
the system journal and user journal replication paths when replicating database files,
IFS objects, data areas, and data queues.
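The capture, send, and apply stages described above can be sketched as a simple pipeline. This is a conceptual illustration only; the entry format, queue names, and object names (such as FILEA) are hypothetical assumptions and do not represent the actual MIMIX processes or journal APIs.

```python
from collections import deque

# Conceptual capture -> send -> apply pipeline. Journal entries are modeled
# as plain dicts; in a real environment they would come from an IBM i user
# journal or system journal, and "send" would cross a communications link.
source_journal = deque([
    {"object": "FILEA", "operation": "update", "data": "rec1-v2"},
    {"object": "FILEA", "operation": "insert", "data": "rec2-v1"},
])
comm_queue = deque()              # stands in for the link to the backup system
target_objects = {"FILEA": []}    # replicated copy on the backup system

def capture():
    """Read the next change from the source journal, if any."""
    return source_journal.popleft() if source_journal else None

def send(entry):
    """Ship a captured change toward the backup system."""
    comm_queue.append(entry)

def apply_pending():
    """Apply shipped changes to the replicated object on the target."""
    while comm_queue:
        entry = comm_queue.popleft()
        target_objects[entry["object"]].append(entry["data"])

entry = capture()
while entry is not None:
    send(entry)
    entry = capture()
apply_pending()
```

After the run, the backup copy of FILEA reflects both changes in the order they were journaled, mirroring how applied entries preserve the ordering captured on the source system.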
Switching: One common use of MIMIX is to support a hot backup system to which
operations can be switched in the event of a planned or unplanned outage. If a
production system becomes unavailable, its backup is already prepared for users. In
the event of an outage, you can quickly switch users to the backup system where they
can continue using their applications. MIMIX captures changes on the backup system
for later synchronization with the original production system. When the original
production system is brought back online, MIMIX assists you with analysis and
synchronization of the database files and other objects.

Automatic verification and correction: MIMIX enables earlier and easier detection
of problems known to adversely affect maintaining availability and switch-readiness of
your replication environment. MIMIX automatically detects and corrects potential
problems during replication and auditing. MIMIX also helps to ensure the integrity of
your MIMIX configuration by automatically verifying that the files and objects being
replicated are what is defined to your configuration.
MIMIX is shipped with these capabilities enabled. Incorporated best practices for
maintaining availability and switch-readiness are key to ensuring that your MIMIX
environment is in tip-top shape for protecting your data. User interfaces allow you to
fine-tune these capabilities to the needs of your environment.
Analysis: MIMIX also provides advanced analysis capabilities through the MIMIX
portal application for Vision Solutions Portal (VSP). When using the VSP user
interface, you can see what objects are configured for replication as well as what
replicated objects on the target system have been changed by people or programs
other than MIMIX. (Objects changed on the target system affect your data integrity.)
You can also check historical arrival and backlog rates for replication to help you
identify trends in your operations that may affect MIMIX performance.
Uses: MIMIX is typically used among systems in a network to support a hot backup
system. Simple environments have one production system and one backup system.
More complex environments have multiple production systems or backup systems.
MIMIX can also be used on a single system.
You can view the replicated data on the backup system at any time without affecting
productivity. This allows you to generate reports, submit (read-only) batch jobs, or
perform backups to tape from the backup system. In addition to real-time backup
capability, replicated databases and objects can be used for distributed processing,
allowing you to off-load applications to a backup system.
The topics in this chapter include:
• “MIMIX concepts” on page 26 describes concepts and terminology that you need
to know about MIMIX.
• “The MIMIX environment” on page 33 describes components of the MIMIX
operating environment.
• “System value settings for MIMIX” on page 41 identifies the system value settings
that MIMIX requires for installing or upgrading software and for operation, and
identifies which system values are changed by MIMIX.
• “Operational overview” on page 44 provides information about day to day MIMIX
operations.

MIMIX concepts
This topic identifies concepts and terminology that are fundamental to how MIMIX
performs replication. You should be familiar with the relationships between systems,
the concepts of data groups and switching, and role of the IBM i journaling function in
replication.

System roles and relationships


Usually, replication occurs between two or more systems. The most common
scenario for replication is a two-system environment in which one system is used for
production activities and the other system is used as a backup system.
The terms production system and backup system are used to describe the role of a
system relative to the way applications are used on that system. In an availability
management context, a production system is the system currently running the
production workload for the applications. In normal operations, the production system
is the system on which the principal copy of the data and objects associated with the
application exist. A backup system is the system that is not currently running the
production workload for the applications. In normal operations, the backup system is
the system on which you maintain a copy of the data and objects associated with the
application. These roles are not always associated with a specific system. For
example, if you switch application processing to the backup system, the backup
system temporarily becomes the production system.
Typically, for normal operations in a basic two-system environment, replicated data
flows from the system running the production workload to the backup system. In a
more complex environment, the terms production system and backup system may not
be sufficient to clearly identify a specific system or its current role in the replication
process. For example, if a payroll application on system CHICAGO is backed up on
system LONDON and another application on system LONDON is backed up to the
CHICAGO system, both systems are acting as production systems and as backup
systems at the same time.
The terms source system and target system identify the direction in which an
activity occurs between two participating systems. A source system is the system
from which MIMIX replication activity between two systems originates. In replication,
the source system contains the journal entries used for replication. Information from
the journal entries is either replicated to the target system or used to identify objects
to be replicated to the target system. A target system is the system on which MIMIX
replication activity between two systems completes.
Because multiple instances of MIMIX can be installed on any system, it is important to
correctly identify the instance to which you are referring. It is helpful to consider each
installation of MIMIX on a system as being part of a separate network that is referred
to as a MIMIX installation. A MIMIX installation is a network of systems that transfer
data and objects among each other using functions of a common MIMIX product. A
MIMIX installation is defined by the way in which you configure the MIMIX product for
each of the participating systems. A system can participate in multiple independent
MIMIX installations.

The terms management system and network system define the role of a system
relative to how the products interact within a MIMIX installation. These roles remain
associated with the system within the MIMIX installation to which they are defined.
Typically one system in the MIMIX installation is designated as the management
system and the remaining one or more systems are designated as network systems.
A management system is the system in a MIMIX installation that is designated as the
control point for all installations of the product within the MIMIX installation. The
management system is the location from which work to be performed by the product
is defined and maintained. Often the system defined as the management system also
serves as the backup system during normal operations. A network system is any
system in a MIMIX installation that is not designated as the management system
(control point) of that MIMIX installation. Work definitions are automatically distributed
from the management system to a network system. Often a system defined as a
network system also serves as the production system during normal operations.

Application groups: the unit of coordination


The concept of an application group provides the ability to group and control
resources in a way that maintains relationships between them. The use of application
groups is best practice for MIMIX® Professional™ and MIMIX® Enterprise™ and
required for MIMIX® for PowerHA®.
In a non-clustering environment, an application group provides the ability to control
operations for switching, starting, and stopping all data replication associated with an
application in a single request.
In an IBM i clustering environment, an application group also identifies an application
and its release level, its IP takeover address, and an exit program to control actions.
An application represents commercial software or proprietary programs that will be
managed as one unit. When a cluster event occurs, the application group provides a
coordinated response for all of its associated resources and maintains relationships
between resources when responding to cluster events. Application groups also
integrate and simplify cluster management and data replication management.

Data groups: the unit of replication


The concept of a data group is used to perform replication activities. A data group is
a logical grouping of database files, library-based objects, IFS objects, DLOs, or a
combination thereof that defines a unit of work by which MIMIX replication activity is
controlled. A data group may represent an application, a set of one or more libraries,
or all of the critical data on a given system. Application environments may define a
data group as a specific set of files and objects. For example, the R/3 environment
defines a data group as a set of SQL tables that all use the same journal and which
are all replicated to the same system. Users can start and stop replication activity by
data group, switch the direction of replication for a data group, and display replication
status by data group.
By default, data groups support replication from both the system journal and the user
journal. Optionally, you can limit a data group to replicate using only one replication
path. The parameters in the data group definition identify the direction in which data is
allowed to flow between systems and whether to allow the flow to switch directions.

You also define the data to be replicated and many other characteristics the
replication process uses on the defined data. The replication process is started and
ended by operations on a data group.
A data group entry identifies a source of information that can be replicated. Once a
data group definition is created, you can define data group entries. MIMIX uses the
data group entries that you create during configuration to determine whether a journal
entry should be replicated. If you are using both user journal and system journal
replication, a data group can have any combination of entries for files, IFS objects,
library-based objects, and DLOs.

Switching: moving the production environment to another system


Switching a production environment to the backup system involves much more than
just replication. A Certified MIMIX Consultant can help you identify these additional requirements and incorporate them into your switching environment. Although MIMIX has configuration-dependent variations in how switching activities are performed and controlled,
switching the direction of data replication fundamentally occurs at the data group
level.
Switching is not supported in environments licensed for MIMIX DR.
When you configure a data group definition, you specify which of the two systems in
the data group is the source for replicated data. In normal operation, data flows
between two systems in the direction defined within the data group. When you need
to switch the direction of replication, for example, when a production system is
removed from the network for planned downtime, default values in the data group definition allow the same data group to be used for replication in either direction.
Note: A switchable data group is different than bi-directional data flow. Bi-directional
data flow is a data sharing technique described in “Configuring advanced
replication techniques” on page 383.
In a planned switch, you are purposely changing the direction of replication for any of
a variety of reasons. You may need to take the system offline to perform maintenance
on its hardware or software, or you may be testing your disaster recovery plan. In a
planned switch, the production system (the source of replication) is available. When
you perform a planned switch, data group processing is ended on both the source and
target systems. The next time you start the data group, it will be set to replicate in the
opposite direction.
In an unplanned switch, you are changing the direction of replication as a response to
a problem. Most likely the production system is no longer available. When you
perform an unplanned switch, you must initiate the switch from the target system.
Data group processing is ended on the target system. The next time you start the data
group, it will be set to replicate in the opposite direction.

Journaling and object auditing introduction


MIMIX relies on data recorded by the IBM i functions of journaling, remote journaling,
and object auditing. Each of these functions record information in a journal.
Variations in the replication process are optimized according to characteristics of the
information provided by each of these functions.


Journaling is the process of recording information about changes to user-identified objects, including those made by a system or user function, for a limited number of object types. Events are logged in a user journal. Optionally, events logged in a user journal can also be recorded on a remote system using remote journaling, whereby the journal and journal receiver exist on a remote system or on a different logical partition.
Object auditing is the process by which the system creates audit records for
specified types of access to objects. Object auditing logs events in a specialized
system journal (the security audit journal, QAUDJRN).
When an event occurs to an object or database file for which journaling is enabled, or
when a security-relevant event occurs, the system logs identifying information about
the event as a journal entry, a record in a journal receiver. The journal receiver is
associated with a journal and contains the log of all activity for objects defined to the
journal or all objects for which an audit trail is kept.
Journaling must be active before MIMIX can perform replication. MIMIX uses the
recorded journal entries to replicate activity to a designated system. Data group
entries and other data group configuration settings determine whether MIMIX
replicates activity for objects and whether replication is performed based on entries
logged to the system journal or to a user journal. For some configurations, MIMIX
uses entries from both journals.
Journal entries deposited into the system journal (on behalf of an audited object) contain only an indication of a change to an object. Some of these entries contain enough information for MIMIX to apply the change directly to the replicated object on the target system; however, many require MIMIX to gather additional information about the object from the source system before the change can be applied.
Journal entries deposited into a user journal (on behalf of a journaled file, data area,
data queue, or IFS object) contain images of the data which was changed. This
information is needed by MIMIX in order to apply the change directly to the replicated
object on the target system.
When replication is started, the start request (STRDG command) identifies a
sequence number within a journal receiver at which MIMIX processing begins. In data
groups configured with remote journaling, the specified sequence number and
receiver name are the starting point for MIMIX processing from the remote journal. The
IBM i remote journal function controls where it starts sending entries from the source
journal receiver to the remote journal receiver.
IBM i requires that journaled objects reside in the same auxiliary storage pool (ASP)
as the user journal. The journal receivers can be in a different ASP. If the journal is in
a primary independent ASP, the journal receivers must reside in the same primary
independent ASP or a secondary independent ASP within the same ASP group.
MIMIX fully supports the IBM i maximum limit of 10,000,000 objects to one user
journal. User journaling will not start if the number of objects associated with the
journal exceeds the journal maximum. The maximum includes:
• Objects for which changes are currently being journaled
• Objects for which journaling was ended while the current receiver is attached
• Journal receivers that are, or were, associated with the journal while the current
journal receiver is attached.
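As a rough illustration only (the function and names below are invented for this sketch, not a MIMIX interface), all three counted categories are summed against the single-journal maximum before journaling can start:

```python
# Illustrative sketch: the three categories described above all count
# toward the 10,000,000-object maximum of a single user journal.

JOURNAL_OBJECT_LIMIT = 10_000_000

def journaling_can_start(journaled_objects, ended_while_attached, receivers):
    """True when the combined count stays within the journal maximum."""
    return (journaled_objects + ended_while_attached + receivers) <= JOURNAL_OBJECT_LIMIT

print(journaling_can_start(9_999_000, 500, 400))  # True
```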
Remote journaling requires unique considerations for journaling and journal receiver
management. For additional information, see “Journal receiver management” on
page 213.

Multi-part naming convention
MIMIX uses named definitions to identify related user-defined configuration
information. A multi-part, qualified naming convention uniquely describes certain
types of definitions. This includes a two-part name for journal definitions and a three-
part name for transfer definitions and data group definitions. Newly created data
groups use remote journaling as the default configuration, which has unique
requirements for naming data group definitions. For more information, see “Target
journal definition names generated by ADDRJLNK command” on page 210.
The multi-part name consists of a name followed by one or two participating system
names (actually, names of system definitions). Together the elements of the multi-part
name define the entire environment for that definition. As a whole unit, a fully-qualified
two-part or three-part name must be unique. The first element, the name, does not
need to be unique. In a three-part name, the order of the system names is also
important, since two valid definitions may share the same three elements but with the
system names in different orders.
For example, MIMIX automatically creates a journal definition for the security audit
journal when you create a system definition. Each of these journal definitions is
named QAUDJRN, so the name alone is not unique. The name must be qualified with
the name of the system to which the journal definition applies, such as QAUDJRN
CHICAGO or QAUDJRN NEWYORK. Similarly, the data group definitions
INVENTORY CHICAGO HONGKONG and INVENTORY HONGKONG CHICAGO
are unique because of the order of the system names.
When using command interfaces which require a data group definition, MIMIX can
derive the fully-qualified name of a data group definition if a partial name provided is
sufficient to determine the unique name. If the first part of the name is unique, it can
be used by itself to designate the data group definition. For example, if the data group
definition INVENTORY CHICAGO HONGKONG is the only data group with the name
INVENTORY, then specifying INVENTORY on any command requiring a data group
name is sufficient. However, if a second data group named INVENTORY NEWYORK
LONDON is created, the name INVENTORY by itself no longer describes a unique
data group. INVENTORY CHICAGO would be the minimum portion of the first data group definition's name necessary to identify it uniquely. If a third data group named
INVENTORY CHICAGO LONDON was added, then the fully qualified name would be
required to uniquely identify the data group. The order in which the systems are
identified is also important. The system HONGKONG appears in only one of the data group definitions. However, specifying INVENTORY HONGKONG will generate a
“not found” error because HONGKONG is not the first system in any of the data group
definitions. This applies to all external interfaces that reference multi-part definition
names.
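The matching behavior described above can be pictured with a short sketch (illustrative only; this is not a MIMIX interface, and the function name is invented). Each definition is treated as an ordered tuple of name and system names, and a partial name matches on its leading parts:

```python
# Hypothetical sketch of multi-part name resolution; definitions are
# ordered tuples of the form (name, system1, system2).

def resolve_definition(partial, definitions):
    """Return the one definition whose leading parts match the partial name."""
    partial = tuple(partial)
    matches = [d for d in definitions if d[:len(partial)] == partial]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        # e.g. INVENTORY HONGKONG: HONGKONG is never the first system name
        raise LookupError("not found")
    raise LookupError("ambiguous; more parts of the name are needed")

definitions = [
    ("INVENTORY", "CHICAGO", "HONGKONG"),
    ("INVENTORY", "NEWYORK", "LONDON"),
    ("INVENTORY", "CHICAGO", "LONDON"),
]

# With all three data groups defined, only the fully qualified name is unique.
print(resolve_definition(["INVENTORY", "CHICAGO", "HONGKONG"], definitions))
```

With only the first definition present, INVENTORY alone would resolve; once the second is added, INVENTORY CHICAGO suffices; with the third, the fully qualified name is required, matching the example in the text.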
MIMIX can also derive a fully qualified name for a transfer definition. Data group
definitions and system definitions include parameters that identify associated transfer
definitions. When a subsequent operation requires the transfer definition, MIMIX uses
the context of the operation to determine the fully qualified name. For example, when
starting a data group, MIMIX uses information in the data group definition, the
systems specified in the data group name, and the specified transfer definition name
to derive the fully qualified transfer definition name. If MIMIX cannot find the transfer definition, it reverses the order of the system names and checks again, avoiding the
need for redundant transfer definitions.
You can also use contextual system support (*ANY) to configure transfer definitions.
When you specify *ANY in a transfer definition, MIMIX uses information from the
context in which the transfer definition is called to resolve to the correct system.
Unlike the conventional configuration case, a specific search order is used if MIMIX is
still unable to find an appropriate transfer definition. For more information, see “Using
contextual (*ANY) transfer definitions” on page 181.

The MIMIX environment


A variety of product-defined operating elements and user-defined configuration
elements collectively form an operational environment on each system. A MIMIX
environment can be composed of one or more MIMIX installations. Each system that
participates in the same MIMIX environment must have the same operational
environment. This topic describes each of the components of the MIMIX operating
environment.

The product library


The name of the product library into which MIMIX is installed defines the connection
among systems in the same MIMIX installation. The default name of the product
installation library is MIMIX.
Several items are shipped as part of the product library. The IFS directory structure is
associated with the product library for the MIMIX installation and is created during the
installation process for License Manager and MIMIX. Each MIMIX installation also
contains journals and journal receivers used by internal processes as well as several
default job descriptions and job classes within its library.
Note: Do not replicate the library in which MIMIX is installed or any other libraries
created by MIMIX. Do not place user-created objects in these libraries unless
they are explicitly for MIMIX. For more information, see “Data that should not
be replicated” on page 84.

IFS directories
Vision Solutions products have an IFS directory structure used in replication for each
family of products. The IFS directory structure is created during the installation
process for License Manager and for each product.
The following two root directories contain all IFS-based objects:
/LakeviewTech
/VisionSolutions
There is a unique sub-directory structure for each product installation by instance
name. Streamfiles for the install wizard are stored in the following subdirectory:
/LakeviewTech/Upgrades

The MIMIXQGPL library


When a MIMIX product is installed, a library named MIMIXQGPL is restored on the
system. The MIMIXQGPL library includes work management objects used by all
MIMIX products. Many of these objects are customized and shipped with default
settings designed to streamline operations for the products which use them. These
objects include the MIMIXSBS subsystem and a variety of job descriptions and job
classes.
Note: If you have previous releases of MIMIX products on a system, you may find
additional objects in the MIMIXQGPL library; however, you should not place objects in this library. If you place objects in these libraries, they may be
deleted during the next installation process. Also, do not replicate the
MIMIXQGPL library. For additional information, see “Data that should not be
replicated” on page 84.

MIMIXSBS subsystem
The MIMIXSBS subsystem is the default subsystem used by nearly all MIMIX-related
processing. This subsystem is shipped with the proper job queue entries and routing
entries for correct operation of the MIMIX jobs.

Data libraries
MIMIX uses the concept of data libraries. Currently there are two series of data
libraries:
• MIMIX uses data libraries for storing the contents of the object cache. MIMIX
creates the first data library when needed and may create additional data libraries.
The names of data libraries are of the form product-library_n (where n is a number
starting at 1).
• For system journal replication, MIMIX creates libraries named product-library_x,
where x is derived from the ASP. For example, A for ASP 1, B for ASP 2. These
ASP-specific data libraries are created when needed and are not deleted until the
product is uninstalled.
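The ASP-specific naming scheme can be sketched as follows (illustrative only; the letter mapping beyond the two examples given in the text, A for ASP 1 and B for ASP 2, is an assumption, as is the function name):

```python
# Illustrative sketch of the ASP-derived data library naming scheme,
# e.g. MIMIX_A for ASP 1, MIMIX_B for ASP 2. Mapping beyond these two
# examples is assumed, not documented here.
import string

def asp_data_library(product_library, asp_number):
    """Derive the ASP-specific data library name from the product library."""
    return f"{product_library}_{string.ascii_uppercase[asp_number - 1]}"

print(asp_data_library("MIMIX", 1))  # MIMIX_A
print(asp_data_library("MIMIX", 2))  # MIMIX_B
```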

System managers
System manager processes automatically distribute configuration and status changes
among systems in a MIMIX installation. There are multiple system manager
processes associated with each system.
Each system manager process consists of a remote journal link (RJ Link) that
transmits journal entries in one direction between a pair of systems and a system
manager processing job that applies the entries to MIMIX files on the target system of
the RJ link. Between each pair of communicating systems there are always two
system manager processes.
Figure 1 shows a MIMIX installation with a management system and two network
systems. Each arrow represents one system manager process and the direction in
which it transmits data.
In environments with more than two systems, MIMIX restricts which systems can
communicate based on their designation as a management (*MGT) or network
(*NET) system. By default, each management system communicates with all
network systems and all other management systems. Network systems communicate
only with management systems. When licensed for multiple management systems,
you can limit which management systems communicate with each network system.

Figure 1. System manager jobs in a MIMIX installation with one management system and two network systems.

Journal managers
MIMIX uses journal managers to maintain journal receivers used by replication
processes and system manager processes. A journal manager job runs on each
system in a MIMIX installation.
By default, MIMIX performs both change management and delete management for
journal receivers. Parameters in a journal definition allow you to customize details of
how the change and delete operations are performed.
The Journal manager delay parameter in the system definition determines how
frequently the journal manager looks for work.
Journal manager jobs are included in a group of jobs that MIMIX automatically
restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to
restart these MIMIX jobs at midnight (12:00 a.m.). The Job restart time parameter in
the system definition determines when the journal manager for that system restarts.

Target journal inspection


Target journal inspection consists of a set of jobs that inspect a journal on a system
when the system is the target system for replication. When configured and active,
each job checks a specific journal for activity indicating users or programs other than MIMIX modified replicated objects on the target system and sends a notification when
activity is detected. These jobs provide environment analysis functions but they are
not required to perform replication.
MIMIX also automatically corrects objects identified by target journal inspection as
"changed on target by user". Depending on the details of the problem, the correction
may be performed by a separate job or by the next audit to compare the object.
Target journal inspection jobs are started on the target system when MIMIX is started,
or as necessary when MIMIX managers or data groups are started. Target journal
inspection jobs are included in a group of jobs that MIMIX automatically restarts daily
to maintain the MIMIX environment. The default operation of MIMIX is to restart these
MIMIX jobs at midnight (12:00 a.m.). MIMIX determines when to restart the journal
inspection jobs based on the value of the Job restart time parameter in the system
definitions for the network and management systems.
For more information, see “Performing target journal inspection” on page 334.

Collector services
Collector services refers to a group of jobs that are necessary for MIMIX operation on
the native user interface as well as for the MIMIX portal application within the Vision
Solutions Portal. One or more collector service jobs collect and combine MIMIX status
from all systems. Collector services submits a cleanup job on each system at
midnight for that system’s local time.

Cluster services
When MIMIX is configured and licensed for IBM i clustering, MIMIX uses the cluster
services function provided by IBM i to integrate the system management functions
needed for clustering. Cluster services must be active in order for a cluster node to be
recognized by the other nodes in the cluster. MIMIX integrates starting and stopping
cluster services into status and commands for controlling processes that run at the
system level.

Named definitions for configuration objects


MIMIX uses named definitions to identify related user-defined configuration
information. You create named definitions for system information, communication
(transfer) information, journal information, replication (data group) information, and
coordinated control (application group) information. Any definitions you create can be used by both user
journal and system journal replication processes.
One or more of each of the following definitions is required to perform replication:
A system definition identifies the characteristics of a system that participates in a
MIMIX installation.
A transfer definition identifies the communications path and protocol to be used
between two systems. MIMIX supports the Transmission Control Protocol/Internet
Protocol (TCP/IP) protocol.


A journal definition identifies a journal environment on a particular system. MIMIX uses the journal definition to manage the journal receiver environment used by the
replication process.
A data group definition identifies the characteristics of how replication occurs
between two systems. A data group definition determines the direction in which
replication occurs between the systems, whether that direction can be switched, and
the default processing characteristics to use when processing the database and
object information associated with the data group.
An application group identifies whether the replication environment does or does not
use IBM i clustering. When clustering is used, the application group defines
information about an application or proprietary programs necessary for controlling
operations in the clustering environment.
A remote journal link (RJ link) is a MIMIX configuration element that identifies an
IBM i remote journaling environment. Newly created data groups use remote
journaling as the default configuration. An RJ link identifies journal definitions that
define the source and target journals, primary and secondary transfer definitions for
the communications path used by MIMIX, and whether the IBM i remote journal
function sends journal entries asynchronously or synchronously. When a data group
is added, the ADDRJLNK command is run automatically, using the transfer definition
defined in the data group.
The naming conventions used within definitions are described in “Multi-part naming
convention” on page 31.

Data group entries


Data group entries are part of the MIMIX environment that must exist on each system
in a MIMIX installation. MIMIX uses the data group entries that you create during
configuration to determine whether or not a journal entry should be replicated.
• Data group file entry: This type of data group entry identifies the location of a
database file to be replicated and what its name and location will be on the target
system. Within a file entry, you can override the default file entry options defined
for the data group. MIMIX only replicates transactions for physical files because a
physical file contains the actual data stored in members. MIMIX supports both
positional and keyed access paths for accessing records stored in a physical file.
• Data group object entries: This type of entry allows you to identify library-based
objects for replication. Examples of library-based objects include programs, user
profiles, message queues, and non-journaled database files. To select these
types of objects for replication, you select individual objects or groups of objects
by generic or specific object and library name, and object type. Optionally, for
files, you can specify an extended object attribute such as PF-DTA or DSPF.
• Data group IFS entries: This type of entry allows you to identify integrated file
system (IFS) objects for replication. IFS objects include directories, stream files,
and symbolic links. They reside in directories, similar to DOS or UNIX files. You
can select IFS objects for replication by specific or generic path name.
• Data group DLO entries: This type of entry allows you to identify document
library objects (DLOs) for replication. DLOs are documents and folders. They are contained in folders (except for first-level folders). To select DLOs for replication
you select individual DLOs by specific or generic folder and DLO name, and
owner.
A single data group can contain any combination of these types of data group entries.
If your license is for only one of the MIMIX products rather than for MIMIX®
Enterprise™ or MIMIX® Professional™, only the entries associated with the product to
which you are licensed will be processed for replication.

Procedures and steps


Procedures and steps are a highly customizable means of performing operations. A
set of default procedures is shipped with MIMIX for frequently used operations. These
default procedures include the ability to start, end, and switch application groups, perform pre-switch check activity, and run data protection reports for nodes. Each
operation is performed by a procedure that consists of a sequence of steps and
multiple jobs. Each step calls a predetermined step program to perform a specific
sub-task of the larger operation. Steps also identify runtime attributes that determine
how the procedure will start processing the step and how it will respond if the step
ends in error.

Log spaces
Based on user space objects (*USRSPC), a log space is a MIMIX object that
provides an efficient storage and manipulation mechanism for replicated data that is
temporarily stored on the target system during the receive and apply processes. All
internal structures and objects that make up a log space are created and manipulated
by MIMIX.

Job descriptions and job classes


MIMIX uses a customized set of job descriptions and job classes. Customized job
descriptions optimize characteristics for a category of jobs, including the user profile,
job queue, message logging level, and routing data for the job. Customized job
classes optimize runtime characteristics such as the job priority and CPU time slice
for a category of jobs. All of the shipped job descriptions and job classes are
configured with recommended default values.
Job descriptions control batch processing. MIMIX features use a set of default job
descriptions, MXAUDIT, MXSYNC, and MXDFT. When MIMIX is installed, these job
descriptions are automatically restored in the product library. These job descriptions
exist in the product library of each MIMIX installation. Jobs and related output are
associated with the user profile submitting the request. Commands such as Compare
File Attributes (CMPFILA), Compare File Data (CMPFILDTA), Synchronize Object
(SYNCOBJ), as well as numerous others, support this standard.
Older commands that provide job description support for batch processing use
different job descriptions that are located in the MIMIXQGPL library. The MIMIXQGPL
library, along with these job descriptions, is automatically restored on the system
when a MIMIX product is installed. Installing additional MIMIX installations on the
same system does not create additional copies of these job descriptions.


Table 1 shows a combined list of MIMIX job descriptions, grouped by the library in which they are shipped.

Table 1. Job descriptions used by MIMIX

Shipped in the installation library:
• MXAPM - Access Path Maintenance (APM). Used for APM jobs in the installation library when the APMNT policy has been enabled.
• MXAUDIT - MIMIX Auditing. Used for MIMIX compare commands, such as those called by MIMIX audits, as the default value on the Job description (JOBD) parameter.
• MXDFT - MIMIX Default. Used for MIMIX load commands and by other commands that do not have a specific job description as the default value on the JOBD parameter.
• MXSYNC - MIMIX Synchronization. Used for MIMIX synchronization commands, such as those called by MIMIX audits, as the default value on the JOBD parameter.
• PORTnnnnn or alias name - MIMIX TCP Server, where characters nnnnn in the name identify the server port number or alias. A job description exists for each transfer definition which uses TCP protocol and enables MIMIX to create and manage autostart job entries. These job descriptions are created in the installation library when transfer definitions which specify PROTOCOL(*TCP) and MNGAJE(*YES) are created or changed. The associated autostart job entries are added to the subsystem description for the MIMIXSBS subsystem in library MIMIXQGPL.

Shipped in the MIMIXQGPL library:
• MIMIXAPY - MIMIX Apply. Used for MIMIX apply process jobs.
• MIMIXCLU - MIMIX Cluster Manager. Used by application groups which support IBM i clustering to route jobs to the QCTL subsystem.
• MIMIXCMN - MIMIX Communications. Used for all target communication jobs.
• MIMIXDFT - MIMIX Default. Used for all MIMIX jobs that do not have a specific job description.
• MIMIXMGR - MIMIX Manager. Used for MIMIX system manager and journal manager jobs.
• MIMIXMON - MIMIX Monitor. Used for most jobs submitted by the MIMIX Monitor product.
• MIMIXPRM - MIMIX Promoter. Used for jobs submitted by the MIMIX Promoter product.
• MIMIXRGZ - MIMIX Reorganize File. Used for file reorganization jobs submitted by the database apply job.
• MIMIXSND - MIMIX Send. Used for database send, object send, object retrieve, container send, and status send jobs in MIMIX.
• MIMIXSYNC - MIMIX Synchronization. Used for MIMIX file synchronization. This is valid for synchronize commands that do not have a JOBD parameter on the display.
• MIMIXVFY - MIMIX Verify. Used for MIMIX verify and compare command processes. This is valid for verify and compare commands that do not have a JOBD parameter on the display.

User profiles
All of the MIMIX job descriptions are configured to run jobs using the MIMIXOWN user
profile. This profile owns all MIMIX objects, including the objects in the MIMIX product
libraries and in the MIMIXQGPL library. The profile is created with sufficient authority
to run all MIMIX products and perform all the functions provided by the MIMIX
products. The authority of this user profile can be reduced, if business practices
require, but this is not recommended. Reducing the authority of the MIMIXOWN profile
requires significant effort by the user to ensure that the products continue to function
properly and to avoid adversely affecting the performance of MIMIX products. See the
Using License Manager book for additional security information for the MIMIXOWN
user profile.
Note: Do not replicate the MIMIXOWN or LAKEVIEW user profiles. For additional
information, see “Data that should not be replicated” on page 84.


System value settings for MIMIX


System value settings affect the ability to install or upgrade MIMIX as well as the
ability to perform MIMIX operations.

System values for installing software


Before you install a Vision Solutions product, you need to ensure that the following
system values are set as indicated on each system:
• QALWOBJRST - Allow object restore option
Setting: *ALWPGMADP or *ALL
Required on each system in the product instance so that software installation and
fix installation processes function correctly.
• QALWUSRDMN - Allow user domain objects in libraries
Setting: *ALL
Required for MIMIX to function when QSECURITY is 30 or higher.
Note: If the value is not *ALL, the installation process will add the product library
and data library names to the list of libraries for this system value.
• QLIBLCKLVL - Library locking level
Setting: 1
Required on each system in the MIMIX product instance in order for MIMIX
processes to complete successfully.
• QSECURITY - System security level
Setting: 30 or higher
Strongly recommended on each system in the product instance.
• QSYSLIBL - System part of the library list
Setting: cannot include library QTEMP.
If present before installing or upgrading, remove library QTEMP from the system
library list on each system in the instance.
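
As an illustration, the following standard IBM i commands can be used to review or adjust these values before installing. The values shown are examples of the required settings described above:

    DSPSYSVAL SYSVAL(QALWOBJRST)
    CHGSYSVAL SYSVAL(QALWOBJRST) VALUE('*ALWPGMADP')
    CHGSYSVAL SYSVAL(QLIBLCKLVL) VALUE('1')
    WRKSYSVAL SYSVAL(QSYSLIBL)

Use WRKSYSVAL to verify that QTEMP is not present in the system library list before installing or upgrading.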

System values for MIMIX operation


In addition to the system values identified above for software installation, the following
system value settings are required for MIMIX to function properly. System values that
are changed by MIMIX are identified.
Note: The system value checks listed below that occur when system manager or
replication processes start will occur when any procedure or command that
starts MIMIX runs. This includes when MIMIX is started as a result of default
options within the MIMIX Installation Wizard.
• QAUDCTL - Auditing control
Setting: *OBJAUD and *AUDLVL
Required for system journal (QAUDJRN) replication. Set by MIMIX when starting
replication processes and when MIMIX commands are used to build the journaling
environment for system journal replication.
• QAUDLVL - Security auditing level

Setting: Multiple values set by MIMIX as described below.
Required for system journal (QAUDJRN) replication. Set by MIMIX when starting
replication processes and when MIMIX commands are used to build the journaling
environment for system journal replication. Set as follows:
– MIMIX adds the values *CREATE, *DELETE, *OBJMGT, and *SAVRST.
– MIMIX checks for the values *SECURITY, *SECCFG, *SECRUN, and *SECVLDL. If the value *SECURITY is set, no change is made. If *SECURITY is not set, MIMIX adds the values *SECCFG, *SECRUN, and *SECVLDL.
– If any data group is configured to replicate spooled files, MIMIX adds the values *SPLFDTA and *PRTDTA.
• QMLTTHDACN - Multithreaded job action
Setting: cannot be set to 3
Affects only environments licensed for MIMIX for PowerHA, which cannot have
the value 3 set on any node in the cluster.
• QPWDLVL and other QPWDnnnn system values
Setting: Strongly recommend using the same settings on all systems in the
instance.
When a data group is configured to replicate user profiles, MIMIX replication
enforces the QPWD system value settings on each system. If values on the target
system are more restrictive, replication failures can occur for user profiles with
replicated passwords. These system values are not set by MIMIX.
Note: Changes to QPWDLVL require an IPL to become effective and should be
made only with careful consideration.
• QRETSVRSEC - Retain server security data
Setting: 1
Required on each system in the MIMIX product instance for MIMIX operations that
use remote journaling. If the value is not 1, MIMIX sets this value to 1 when MIMIX
system manager processes start and when a transfer definition is created or
changed.
• QTIME - Time of day
Setting: Correct value for time zone in which the partition runs.
All systems in an instance must be properly set to prevent issues when running
procedures. Not set by MIMIX.
• QTIMZON - Time zone
Setting: Time zone in which the partition runs
All systems in an instance must be properly set to prevent issues when running
procedures. Not set by MIMIX.
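
Because several of these values are set automatically when MIMIX processes start, it is often enough to verify them. For example, the current security auditing values and any individual system value can be displayed with standard IBM i commands:

    DSPSECAUD
    DSPSYSVAL SYSVAL(QAUDCTL)
    DSPSYSVAL SYSVAL(QAUDLVL)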
For additional information, see these topics:
• “System journal replication” on page 55
• “Identifying spooled files for replication” on page 103
• “Replicating user profiles and associated message queues” on page 104
• “Setting the system time zone and time” on page 325

Operational overview
Before replication can begin, the following requirements must be met through the
installation and configuration processes:
• MIMIX software must be installed on each system in the MIMIX installation.
• At least one communications link must be in place for each pair of systems
between which replication will occur.
• The MIMIX operating environment must be configured and be available on each
system.
• Journaling must be active for the database files and objects configured for user
journal replication.
• For objects to be replicated from the system journal, the object auditing
environment must be set up.
• The files and objects must be initially synchronized between the systems
participating in replication.
Once MIMIX is configured and files and objects are synchronized, day-to-day
operations for MIMIX can be performed.
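
For the journaling requirement above, MIMIX commands can build the journaling environment for you. Journaling for a database file can also be verified or started with standard IBM i commands; the library, file, and journal names below are illustrative:

    WRKJRNA JRN(JRNLIB/DGJRN)
    STRJRNPF FILE(APPLIB/ORDERS) JRN(JRNLIB/DGJRN) IMAGES(*BOTH)

Both before and after images (*BOTH) are typically required for replication.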

Support for starting and ending replication


The Start MIMIX (STRMMX) and End MIMIX (ENDMMX) commands provide the
ability to start and end all elements of a MIMIX environment. These commands
include MIMIX services and manager jobs, all replication jobs for all data groups, as
well as the master monitor and jobs that are associated with it. While other commands
are available to perform these functions individually, the STRMMX and ENDMMX
commands are preferred because they ensure that processes are started or ended in
the appropriate order.
The Start Data Group (STRDG) and End Data Group (ENDDG) commands operate at
the data group level to control replication processes. These commands provide the
flexibility to start or end selected processes and apply sessions associated with a data
group, which can be helpful for balancing workload or resolving problems.
For more information about both sets of commands, see the MIMIX Operations book.
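
For example, assuming a data group named INVENTORY defined between systems SYSA and SYSB (an illustrative three-part data group definition name), the commands might be used as follows; see the MIMIX Operations book for the full parameter lists:

    STRMMX
    ENDMMX
    STRDG DGDFN(INVENTORY SYSA SYSB)
    ENDDG DGDFN(INVENTORY SYSA SYSB)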

Support for checking installation status


If you use the MIMIX portal application for Vision Solutions Portal (VSP), a single icon shows you at a glance whether a problem exists in a MIMIX installation. Status flyover text in VSP provides significant guidance about what to do to resolve problems. VSP also supports subscriptions, so it can automatically notify you when a problem occurs.
For both the VSP and the native interface on System i, MIMIX is designed so that any
problems are rolled up into status values shown on the portlets or displays you use
most often. You can drill deeper into each user interface when necessary to resolve
problems.


The primary areas in which status surfaces are: system-level processes, replication activity, replication processes, and auditing.
System-level processes are reported in the Nodes portlet in VSP and on the Work
with Systems (WRKSYS) display.
In environments configured with application groups, application group status includes rolled-up status of replication errors, replication processes, and auditing. Application group status is found in the Application Groups portlet in VSP and on the Work with Application Groups (WRKAG) display.
In environments configured with only data groups, data group status includes
replication errors and replication processing. Data group status is found in the Data
Groups portlet in VSP and on the Work with Data Groups (WRKDG) display.
Auditing status is reflected in the application group and data group level interfaces, as well as on the Audits portlet in VSP and the Work with Audits (WRKAUD) display.

Support for automatically detecting and resolving problems


MIMIX user interfaces fully integrate support for the following functions that
automatically detect and correct problems.
Audits: MIMIX ships with a set of audits that are automatically scheduled to run.
These audits check for common problems and automatically correct any detected
problems within data groups. Audits that automatically check all configured objects
associated with their audit class run on a weekly basis. Audits that automatically
check a subset of replicated objects selected by priorities run every day during a
specific time. Details of scheduling and priority eligibility can be customized. Audits
can be invoked manually as well. You also have control over other aspects of audit
runtime behavior, including optionally disabling automatic recovery. In the native user
interface, the Work with Audits display (WRKAUD command) is the primary user
interface for audits. In Vision Solutions Portal, the primary interface is the Audits
portlet.
Error recovery during replication: MIMIX also provides the ability to check for and
correct common problems during user journal and system journal replication that
would otherwise cause a replication error. Automatic recovery can be optionally
disabled. Problems that cannot be resolved are reported like any other replication
error.
For detailed information about audits and automatic recovery during replication, see
the MIMIX Operations book.

Support for working with data groups


Data groups are central to performing day-to-day operations. The Work with Data Groups (WRKDG) display provides the status of replication jobs and flags any replication errors for the data groups within an installation. Highlighted text indicates
whether problems exist. Many options are available for taking action at the data group
level and for drilling into detailed status information.
Detailed status: The command DSPDGSTS (option 8 from the Work with Data
Groups display) accesses the Data Group Status display. The initial merged view
summarizes replication errors and the status of user journal (database) and system
journal (object) processes for both source and target systems. By using function keys,
you can display additional detailed views of only database or only object status.
Database views - These views provide information about replication performed by
user journal replication processes, including journaled files, IFS objects, data
areas, and data queues. They also include information about the replication of
user journal transactions, including journal progress, performance, and recent
activity.
Object views - These views provide information about replication performed by
system journal replication processes, including journal progress, performance,
and recent activity.
When a data group is experiencing replication problems, you can use these options
from the Work with Data Groups display to view problems grouped by type of activity:
12=Files not active, 13=Objects in error, 51=IFS trk entries not active, and 53=Obj trk
entries not active.

Support for resolving problems


MIMIX includes functions that can assist you in resolving a variety of problems.
Depending on the type of problem, some problem resolution tasks may need to be
performed from the system where the problem occurs, such as on the source system
where the journal resides or on the target system if the problem is related to the apply
process. MIMIX will direct you to the correct system when this is required.
Object activity: The Work with Data Group Activity (WRKDGACT) command allows
you to track system journal replication activity associated with a data group. You can
see the object, DLO, IFS, and spooled file activity, which can help you determine the
cause of an error. You can also see an error view that identifies the reason why the
object is in error. Options on the Work with Data Group Activity display allow you to
see messages associated with an entry, synchronize the entry between systems, and
remove a failed entry with or without related entries.
Failed requests: During normal processing, system journal replication processes
may encounter object requests that cannot be processed due to an error. Often the
error is due to a transient condition, such as when an object is in use by another
process at the time the object retrieve process attempts to gather the object data.
Although MIMIX will attempt some automatic retries, requests may still result in a
Failed status. In many cases, failed entries can be resubmitted and they will succeed.
Some errors may require user intervention, such as a never-ending process that
holds a lock on the object.
When the Automatic object recovery policy is enabled, MIMIX will attempt a third retry
cycle using the settings from the Number of third delay/retries (OBJRTY) and Third
retry interval (min.) (OBJRTYITV) policies. These policies can be set for the
installation or adjusted for a specific data group.
You can manually request that MIMIX retry processing for a data group activity entry
that has a status of *FAILED. These entries can be viewed using the Work with Data
Group Activity (WRKDGACT) command. From the Work with Data Group Activity or
Work with Data Group Activity Entries displays, you can use the retry option to
resubmit individual failed entries or all of the entries for an object. This option calls the
Retry Data Group Activity Entries (RTYDGACTE) command. From the Work with
Data Group Activity display, you can also specify a time at which to start the request,
thereby delaying the retry attempt until a time when it is more likely to succeed.
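
For example, failed entries for a data group can be located and retried from the native interface. The data group name below is illustrative; prompting the command with F4 shows the full set of retry parameters:

    WRKDGACT DGDFN(INVENTORY SYSA SYSB)

From the resulting display, use the retry option (which calls RTYDGACTE) against an individual failed entry or against all entries for an object.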
Files on hold: When the database apply process detects a data synchronization
problem, it places the file (individual member) on “error hold” and logs an error. File
entries are in held status when an error is preventing them from being applied to the
target system. You need to analyze the cause of the problem in order to determine
how to correct and release the file and ensure that the problem does not occur again.
An option on the Work with Data Groups display provides quick access to the subset
of file entries that are in error for a data group. From the Work with DG File Entries
display, you can see the status of an entry and use a number of options to assist in
resolving the error. An alternative view shows the database error code and journal
code. Available options include access to the Work with DG Files on Hold
(WRKDGFEHLD) command. The WRKDGFEHLD command allows you to work with
file entries that are in a held status. When this option is selected from the target
system, you can view and work with the entry for which the error was detected and
work with all other entries following the entry in error.
Journal analysis: With user journal replication, when the system that is the source of
replicated data fails, it is possible that some of the generated journal entries may not
have been transmitted to or received by the target system. However, it is not always
possible to determine this until the failed system has been recovered. Even if the
failed system is recovered, damage to a disk unit or to the journal itself may prevent
an accurate analysis of any missed data. Once the source system is available again,
if there is no damage to the disk unit or journal and its associated journal receivers,
you can use the journal analysis function to help determine what journal entries may
have been missed and to which files the data belongs. You can only perform journal
analysis on the system where a journal resides.
Missed transactions for IFS objects, data areas and data queues that are replicated
through the user journal will not be detected by journal analysis.

Support for switching


MIMIX provides support for switching due to planned and unplanned events in
environments licensed for MIMIX® products. The mechanism by which you perform a
switch depends on your configuration.
In environments configured with application groups, switching activity is initiated at the
application group level and performed by procedures. Data groups are controlled and
switched by those procedures. When there are more than two nodes in an instance,
MIMIX Global is used to handle the switch.
In environments that have data groups that are not associated with application
groups, switching is typically performed using the MIMIX Switch Assistant™, which
calls your default MIMIX Model Switch Framework to control the switching process.
Both methods of switching can be customized. Your authorized MIMIX representative
can assist you in implementing advanced switching scenarios.

To enable a switchable data group to function properly for default user journal
replication processes, four journal definitions (two RJ links) are required. “Journal
definition considerations” on page 206 contains examples of how to set up these
journal definitions.
You can specify whether to end the RJ link during a switch. Default behavior for a
planned switch is to leave the RJ link running. Default behavior during an unplanned
switch is to end the RJ link. Once you have a properly configured data group that
supports switching, you should be aware of how MIMIX supports unconfirmed entries
and the state of the RJ link following a switch. For more information, see “Support for
unconfirmed entries during a switch” on page 73 and “RJ link considerations when
switching” on page 73.
For additional information about switching, see the MIMIX Operations book. For
additional information about MIMIX Model Switch Framework, see the Using MIMIX
Monitor book.

Support for advanced analysis


MIMIX provides support for several advanced analysis functions. Some of these
functions may require the use of the MIMIX portal application for Vision Solutions
Portal (VSP).
The data integrity of replicated objects can be affected if they are changed on the
target system by programs or users other than MIMIX. Default configuration values
allow MIMIX to perform target journal inspection to check for such actions and notify
you when any are found so that you can take appropriate action. When using MIMIX
through Vision Solutions Portal, you can also use the Replicated Objects portlet to
easily view a list of all the objects changed by a particular user, program, or job.
Other analysis functions are only available when using Vision Solutions Portal. The
Replicated Objects portlet also allows you to see a list of what objects are configured
for replication and display information about the most recent audit of the object,
including whether the audit was checked by a priority audit or a scheduled audit.
The Arrivals and Backlog portlet allows you to check recent history of the rate at
which replication work arrived and the size of any replication backlogs. By charting
historical arrival rates and backlogs together, it is easier to identify how trends and
anomalies in arriving journal activity correlate to when and why replication backlogs
occurred. Identification is key to reducing or eliminating the exposure that backlogs
create.

Support for working with messages


MIMIX sends a variety of system messages based on the status of MIMIX jobs and
processes. You can view messages generated by MIMIX from either the Message
Log window or from the Work with Message Log (WRKMSGLOG) display.
These messages are sent to both the primary and secondary message queues that
are specified for the system definition.
In addition to these message queues, message entries are recorded in a MIMIX
message log file. The MIMIX message log provides a powerful tool for problem
determination. Maintaining a message log file allows you to keep a record of
messages issued by MIMIX as an audit trail. In addition, the message log provides
robust subset and filter capabilities, the ability to locate and display related job logs,
and a powerful debug tool. When messages are issued, they are initially sent to the
specified primary and secondary message queues. In the event that these message
queues are erased, placing messages into the message log file secures a second
level of information concerning MIMIX operations.
The message log on the management system contains messages from the
management system and each network system defined within the installation. The
system manager is responsible for collecting messages from all network systems. On
a network system, the message log contains only those messages generated by
MIMIX activity on that system.
MIMIX automatically performs cleanup of the message log on a regular basis. The
system manager deletes entries from the message log file based on the value of the
Keep system history parameter in the system definition. However, if you process an
unusually high volume of replicated data, you may want to also periodically delete
unnecessary message log entries since the file grows in size depending on the
number of messages issued in a day.
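
For example, the message log and the message queues named in the system definition can be reviewed from the native interface. The message queue name below is illustrative; use the queues specified in your system definition:

    WRKMSGLOG
    DSPMSG MSGQ(MIMIX)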


CHAPTER 2 Replication process overview

In general terms, a replication path is a series of processes that, together, represent the critical path on which data to be replicated moves from its origin to its destination.
MIMIX uses two replication paths to accommodate differences in how replication
occurs for databases and objects. These paths operate with configurable levels of
cooperation or can operate independently.
• The user journal replication path captures changes to critical files and objects
configured for replication through the user journal using the IBM i remote
journaling function. In previous versions, MIMIX DB2 Replicator provided this
function.
• The system journal replication path handles replication of critical system objects
(such as user profiles or spooled files), integrated file system (IFS) objects, and
document library object (DLOs) using the IBM i system journal. In previous
versions MIMIX Object Replicator provided this function.
Configuration choices determine the degree of cooperative processing used between
the system journal and user journal replication paths when replicating files, IFS
objects, data areas, and data queues.
Within each replication path, MIMIX uses a series of processes. This chapter
describes the replication paths and the processes used in each.
The topics in this chapter include:
• “Replication job and supporting job names” on page 51 describes the replication
paths for database and object information. Included is a table which identifies the
replication job names for each of the processes that make up the replication path.
• “Cooperative processing introduction” on page 53 describes three variations
available for performing replication activities using a coordinated effort between
user journal processing and system journal processing.
• “System journal replication” on page 55 describes the system journal replication
path which is designed to handle the object-related availability needs of your
system through system journal processing.
• “User journal replication” on page 63 describes remote journaling and the benefits
of using remote journaling with MIMIX.
• “User journal replication of IFS objects, data areas, data queues” on page 75
describes a technique which allows replication of changed data for certain object
types through the user journal.
• “Older source-send user journal replication processes” on page 79 describes the
lesser used replication process called MIMIX source-send processing for
database replication.


Replication job and supporting job names


The replication path for database information includes the IBM i remote journal
function, the MIMIX database reader process, and one or more database apply
processes. If MIMIX source-send processes are used instead of remote journaling,
then the processes include the database send process, the database receive
process, and one or more database apply processes.
The replication path for object information includes the object send process, the
object receive process, and the object apply process. When a data retrieval request is
replicated, the replication path also includes the object retrieve, container send, and
container receive processes. A data retrieval request is an operation that creates or
changes the content of an object. A self-contained request is an operation that
deletes, moves, or renames an object, or that changes the authority or ownership of
an object.
Table 2 identifies the job names for each of the processes that make up the replication
path as well as names of supporting jobs.

Table 2. MIMIX processes and their corresponding job names

Abbreviation Description Runs on Job name Notes

APMNT Access path maintenance Target sdnAPM 1, 3

CNRRCV Container receive process Target sdn_CNRRCV 3

CNRSND Container send process Source sdn_CNRSND 3

DAPOLL Data area polling Source sdn_DAPOLL 3

DBAPY Database apply process Target sdn_DBAPYs 3, 4

DBRCV Database receive process Target sdn_DBRCV 3

DBRDR Database reader Target sdn_DBRDR 3

DBSND Database send process Source sdn_DBSND 3

JRNMGR Journal manager System JRNMGR --

MXCOMMD MIMIX Communications Daemon System MXCOMMD --

MXIFCMGR Collector service process System MXIFCMGR 9

MXIFCHIST History collection process System MXIFCHIST 9

MXOBJSELPR Object selection process System MXOBJSELPR --

MXREPMGR Replication manager Target MXREPMGR 8

OBJAPY Object apply process Target sdn_OBJAPY 3

OBJRTV Object retrieve process Source sdn_OBJRTV 3

OBJSND Object send process Source nnn_OBJSND 6

OBJRCV Object receive process Target nnn_OBJRCV 6


STSSND Status send Target sdn_STSSND 3

STSRCV Status receive Source sdn_STSRCV 3

SYSMGR System manager Local SM******** 2

SYSMGRRCV System manager processing job Remote SR******** 2

TEUPD Tracking entry update process Source or Target sdn_TEUPD 3, 5

TGTJRNINSP Target journal inspection Target ********** 7


Note:
1. When access path maintenance (APM) has been enabled using the APMNT policy, there is at least one
APM job running. Additional jobs are started and ended dynamically as required by the workload.
2. System manager processes run between a pair of systems. Each system manager process operates in
one direction between the local and remote systems of a system manager RJ link. The SM******** job is
used only to start remote journaling, then ends. The ******** in the job name format of either job indicates
the name of the system definition that is the other side of the RJ link for the job.
3. The characters sdn in a job name indicate the short data group name.
4. The character s is the apply session letter.
5. The job is used only for replication with advanced journaling and is started only when needed.
6. The name of object send and receive jobs indicate whether the object send job is shared among data
groups. When a data group uses the default shared job for the system, the characters nnn are MX$. (The
third character will be changed to a unique prefix if MX$ is already used as a data group short name.)
The characters nnn can also be an explicitly named shared job, or it can be the data group short name
when the job is dedicated to that data group.
7. The job name ********** for a target journal inspection job is the name of the journal definition. A job exists
for each journal on a system that is a target system for replication. Jobs do not exist when target journal
inspection is not configured, when the system is not a target system, or when all data groups using target
journal have not started. Also, jobs do not exist for the MXCFGJRN journal definition and journal
definitions that identify the remote journal used in RJ configurations (whose names typically end with
@R).
8. The replication manager starts as needed to resolve problems detected by target journal inspection. This
process is also started by the system manager to perform daily cleanup activities.
9. The collector service and history collection jobs provide information to internal MIMIX functions as well as
to the portal application in Vision Solutions Portal. Both jobs must be active.
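
As an illustration of the notes above, a data group with the short name DG1 that uses the shipped defaults would produce job names such as these (DG1 is an example name):

    DG1_DBRDR     database reader on the target system
    DG1_DBAPYA    database apply, apply session A
    DG1_OBJRTV    object retrieve on the source system
    MX$_OBJSND    default shared object send job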


Cooperative processing introduction


Cooperative processing is when the MIMIX user journal processes and system
journal processes work in a coordinated effort to perform replication activities for
certain object types.
When configured, cooperative processing enables MIMIX to perform replication in the
most efficient way by evaluating the object type and the MIMIX configuration to
determine whether to use the system journal replication processes, user journal
replication processes, or a combination of both. Cooperative processing also provides
a greater level of data protection, data management efficiency, and high availability by
ensuring the complete replication of newly created or redefined files and objects.
Object types that can be journaled to a user journal are eligible to be processed
cooperatively when properly configured to MIMIX. MIMIX supports the following
variations of cooperative processing for these object types:
• MIMIX Dynamic Apply (files)
• Legacy cooperative processing (files)
• Advanced journaling (IFS objects, data areas, and data queues).

When a data group definition meets the requirements for MIMIX Dynamic Apply, any
logical files and physical (source and data) files properly identified for cooperative
processing will be processed via MIMIX Dynamic Apply unless a known restriction
prevents it.
When a data group definition does not meet the requirements for MIMIX Dynamic
Apply but still meets legacy cooperative processing requirements, any PF-DTA or
PF38-DTA files properly configured for cooperative processing will be replicated using
legacy cooperative processing. All other types of files are processed using system
journal replication.
IFS objects, data areas, or data queues that can be journaled are not configured for advanced journaling by default. These object types must be manually configured to use advanced journaling.
In all variations of cooperative processing, the system journal is used to replicate the
following operations:
• The creation of new objects that do not deposit an entry in a user journal when
they are created.
• Restores of objects on the source system
• Move and rename operations from a non-replicated library or path into a library or path that is configured for replication.

MIMIX Dynamic Apply


Most environments can take advantage of cooperatively processed operations for
journaled *FILE objects that are journaled primarily through a user (database) journal.
MIMIX Dynamic Apply is the most efficient way to perform cooperative processing of
logical and physical files. MIMIX Dynamic Apply intelligently handles files with
relationships by assigning them to the same or appropriate apply sessions. It is also
much better at maintaining data integrity of replicated objects which previously
needed legacy cooperative processing in order to replicate some operations such as
creates, deletes, moves, and renames. Another benefit of MIMIX Dynamic Apply is
more efficient hold log processing by enabling multiple files to be processed through a
hold log instead of just one file at a time.
New data groups created with the shipped default configuration values are configured
to use MIMIX Dynamic Apply. This configuration requires data group object entries
and data group file entries.
For more information, see “Identifying logical and physical files for replication” on
page 106 and “Requirements and limitations of MIMIX Dynamic Apply” on page 111.

Legacy cooperative processing


In legacy cooperative processing, record and member operations of *FILE objects are
replicated through user journal processes, while all other transactions are replicated
through system journal processes. Legacy cooperative processing supports only data
files (PF-DTA and PF38-DTA).
Data groups that existed prior to upgrading to MIMIX version 5 are typically configured
with legacy cooperative processing, which requires data group object entries and data
group file entries.
It is recommended to use MIMIX Dynamic Apply for cooperative processing. Existing
data groups configured to use legacy cooperative processing can be converted to use
MIMIX Dynamic Apply. For more information, see “Requirements and limitations of
legacy cooperative processing” on page 112.

Advanced journaling
The term advanced journaling refers to journaled IFS objects, data areas, or data
queues that are configured for cooperative processing. When these objects are
configured for cooperative processing, replication of changed bytes of the journaled
objects’ data occurs through the user journal. This is more efficient than replicating an
entire object through the system journal each time changes occur.
Such a configuration also allows for the serialization of updates to IFS objects, data
areas, and data queues with database journal entries. In addition, processing time for
these object types may be reduced, even for equal amounts of data, as user journal
replication eliminates the separate save, send, and restore processes necessary for
system replication.
The phrase “user journal replication of IFS objects, data areas, and data queues” is
frequently used interchangeably with the term advanced journaling.
For more information, see “User journal replication of IFS objects, data areas, data
queues” on page 75 and “Planning for journaled IFS objects, data areas, and data
queues” on page 87.


System journal replication


The system journal replication path is designed to handle the object-related
availability needs of your system. You identify the critical system objects that you want
to replicate, such as user profiles, programs, and DLOs. MIMIX uses the journal
entries generated by the operating system’s object auditing function to identify the
changes to objects on production systems and replicates the changes to backup
systems.
Because system journal replication relies on the presence of journal entries in the
system’s security audit journal (the system journal QAUDJRN), MIMIX requires that
certain values be specified for system values related to auditing. MIMIX checks and
changes these system values if necessary when MIMIX commands are used to build
the journaling environment and when starting replication processes for data groups
configured to perform system journal replication. The following system values are
affected:
• QAUDLVL (Security auditing level) system value. MIMIX sets the values
*CREATE, *DELETE, *OBJMGT, and *SAVRST. MIMIX checks for values
*SECURITY, *SECCFG, *SECRUN and *SECVLDL. If the value *SECURITY is
set, no change is made. If *SECURITY is not set, MIMIX adds the values
*SECCFG, *SECRUN and *SECVLDL. If any data group is configured to replicate
spooled files, MIMIX also sets *SPLFDTA and *PRTDTA.
• QAUDCTL (Auditing control) system value. MIMIX adds the values *OBJAUD and
*AUDLVL.
These system value settings, along with the object audit value of each object, control
what journal entries are created in the system journal (QAUDJRN) for an object.
If an operation on an object is not represented by an entry in the system journal,
MIMIX is not aware of the operation and cannot replicate it.
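The QAUDLVL handling described above amounts to a simple set operation. The following Python sketch is a conceptual illustration of that documented rule only; the function and its parameters are not part of MIMIX.

```python
# Conceptual model of the documented QAUDLVL adjustment rules.
# The value names come from the text above; this is not MIMIX code.

REQUIRED = {"*CREATE", "*DELETE", "*OBJMGT", "*SAVRST"}
SECURITY_SUBSET = {"*SECCFG", "*SECRUN", "*SECVLDL"}

def adjust_qaudlvl(current, replicates_spooled_files=False):
    """Return the QAUDLVL values after the documented MIMIX checks."""
    values = set(current) | REQUIRED
    # If *SECURITY is already set, the security-related values are covered
    # and no change is made; otherwise the subset values are added.
    if "*SECURITY" not in values:
        values |= SECURITY_SUBSET
    if replicates_spooled_files:
        values |= {"*SPLFDTA", "*PRTDTA"}
    return values
```

For example, a system that already audits with *SECURITY gains only the four required values, while a system with no auditing values set also gains the *SECCFG, *SECRUN, and *SECVLDL subset.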
The system objects you want to replicate are defined to a data group through data
group object entries, data group DLO entries, and data group IFS entries. The term
name space refers to this collection of objects that are identified for replication by
MIMIX using the system journal replication processes.
MIMIX uses the security audit journal to monitor for activity on objects within the name
space. When activity occurs on an object, such as when it is accessed or changed, a
corresponding journal entry is created in the security audit journal.
When MIMIX system journal replication processes are active, objects are replicated
when they are created, restored, moved, or renamed into the MIMIX name space.
While in the MIMIX name space, changes to the object or to the authority settings of
the object are also replicated.
Replication through the system journal is event-driven. When a data group is started,
each process used in the replication path waits for its predetermined event to occur
then begins its activity. The processes are interdependent and run concurrently. The
system journal replication path in MIMIX uses the following processes:
• Object send process: This process reads the journal entries added to the
security audit journal. When an object identified in a journal entry is within the

name space, the object send process creates a MIMIX construct called an activity
entry. The process also determines whether any additional information is needed
for replication, and transmits the activity entry to the target system. Data groups
can be configured to use a shared object send job or to use a dedicated job.
• Object receive process: This process receives the activity entry and waits for
notification that any additional source system processing is complete before
passing the activity entry to the object apply process.
• Object retrieve process: If any additional information is needed for replication,
the object retrieve process obtains it and places it in a container within a holding
area. This process is also used when additional processing is required on the
source system prior to transmission to the target system. The object retrieve
process uses multiple asynchronous jobs. The minimum and maximum number of
jobs is configurable for a data group.
• Container send process: When any needed additional information has been
retrieved, the container send process transmits it from a holding area to the target
system. This process also updates the activity entry and notifies the object send
process that the additional information is on the target system and the activity
entry is ready to be applied. The container send and receive processes use
multiple asynchronous jobs. The minimum and maximum number of jobs is
configurable for a data group.
• Container receive process: This process receives any needed additional
information, places it into a holding area on the target system, and notifies the
container send process when it completes these operations.
• Object apply process: This process uses the information in the activity entry as
well as any additional information that was transmitted to the target system to
replicate the operation represented by the entry. The object apply process uses
multiple asynchronous jobs. The minimum and maximum number of jobs is
configurable for a data group.
• Status send process: This process notifies the source system of the status of the
replication.
• Status receive process: This process updates the status on the source system
and, if necessary, passes control information back to the object send process.
MIMIX uses a collection of structures and customized functions for controlling these
structures during replication. Collectively the customized functions and structures are
referred to as the work log. The structures in the work log consist of log spaces, work
lists (implemented as user queues), and a distribution status file.

Activity entry processing


For each journal entry for an object within the name space, the object send process
creates an activity entry in the work log. Creation of an activity entry includes adding
the entry to the log space and adding a record to the distribution status file. An activity
entry includes a copy of the journal entry and any related information associated with
a replication operation for an object, including the status of the entry. User interaction
with activity entries is through the Work with Data Group Activity display and the Work
with DG Activity Entries display.


There are two categories of activity entries: those that are self-contained and those
that require the retrieval of additional information. “Processing self-contained activity
entries” on page 57 describes the simplest object replication scenario. “Processing
data-retrieval activity entries” on page 57 describes the object replication scenario in
which additional data must be retrieved from the source system and sent to the target
system.

Processing self-contained activity entries


For a self-contained activity entry, the copied journal entry contains all of the
information required to replicate the object. Examples of such journal entries include
Change Authority (T-CA), Object Move or Rename (T-OM), and Object Delete (T-DO).
After the object send process determines that an entry is to be replicated, it performs
the following actions:
• Sets the status of the entry to PA (pending apply)
• Adds the “sent” date and time to the activity entry
• Writes the activity entry to the log space and adds a record to the distribution
status file
• Transmits the activity entry to a corresponding object receive process job on the
target system.
The object receive process adds the “received” date and time to the activity entry,
writes the activity entry to the log space, adds a record to the distribution status file,
and places the activity entry on the object apply work list. Now each system has a
copy of the activity entry.
The next available object apply process job for the data group retrieves the activity
entry from the object apply work list and replicates the operation represented by the
entry. The object apply process adds the “applied” date and time to the activity entry,
changes the status of the entry to CP (completed processing), and adds the entry to
the status send work list.
The status send process retrieves the activity entry from the status send work list
and transmits the updated entry to a corresponding status receive process on the
source system. The status receive process updates the activity entry in the work log
and the distribution status file.
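The status transitions above can be summarized as a small state progression. The following Python sketch is an illustration only: the status codes PA (pending apply) and CP (completed processing) come from the text, but the class, field names, and timestamps are hypothetical, not MIMIX internals.

```python
# Illustrative model of the self-contained activity entry flow described
# above. Status codes PA and CP are from the text; everything else is
# an assumption for the sake of the example.
from datetime import datetime, timezone

class ActivityEntry:
    def __init__(self, journal_entry):
        self.journal_entry = journal_entry
        self.status = None
        self.timestamps = {}

    def stamp(self, event):
        self.timestamps[event] = datetime.now(timezone.utc)

def object_send(entry):
    entry.status = "PA"        # pending apply
    entry.stamp("sent")        # "sent" date and time added

def object_receive(entry):
    entry.stamp("received")    # target system now holds a copy

def object_apply(entry):
    entry.stamp("applied")
    entry.status = "CP"        # completed processing

entry = ActivityEntry("T-OM")  # e.g., an Object Move or Rename entry
for step in (object_send, object_receive, object_apply):
    step(entry)
```

After the three steps run, the entry carries “sent,” “received,” and “applied” timestamps and a final status of CP, mirroring the sequence described above.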

Processing data-retrieval activity entries


For a data-retrieval activity entry, additional data must be gathered from the object on
the source system in order to replicate the operation. The copied journal entry
indicates that changes to an object affect the attributes or data of the object. The
actual content of the change is not recorded in the journal entry. To properly replicate
the object, its content, attributes, or both, must be retrieved and transmitted to the
target system. MIMIX may retrieve this data by using APIs or by using the appropriate
save command for the object type. APIs store the data in one or more user spaces
(*USRSPC) in a data library associated with the MIMIX installation. Save commands
store the object data in a save file (*SAVF) in the data library. Collectively, these
objects in the data library are known as containers.

After the object send process determines that an entry is to be replicated and that
additional processing or information on the source system is required, it performs the
following actions:
• Sets the status of the entry to PR (pending retrieve)
• Adds the “sent” date and time to the activity entry
• Writes the activity entry to the log space and adds a record to the distribution
status file
• Transmits the activity entry to a corresponding object receive process on the
target system.
• Adds the entry to the object retrieve work list on the source system.
The object receive process adds the “received” date and time to the activity entry,
writes the activity entry to the log space, and adds a record to the distribution status
file. Now each system has a copy of the activity entry. The object receive process
waits until the source system processing is complete before it adds the activity entry
to the object apply work list.
Concurrently, the object send process reads the object send work list. When the
object send process finds an activity entry in the object send work list, the object send
process performs one or more of the following additional steps on the entry:
• If an object retrieve job packaged the object, the activity entry is routed to the
container send work list.
• The activity entry is transmitted to the target system, its status is updated, and a
“retrieved” date and time is added to the activity entry.
On the source system the next available object retrieve process for the data group
retrieves the activity entry from the object retrieve work list and processes the
referenced object. In addition to retrieving additional information for the activity entry,
additional processing may be required on the source system. The object retrieve
process may perform some or all of the following steps:
• Retrieve the extended attribute of the object. This may be one step in retrieving
the object or it may be the primary function required of the retrieve process.
• If necessary, cooperative processing activities, such as adding or removing a data
group file entry, are performed.
• The object identified by the activity entry is packaged into a container in the data
library. The object retrieve process adds the “retrieved” date and time to the
activity entry and changes the status of the entry to “pending send.”
• The activity entry is added to the object send work list. From there the object send
job takes the appropriate action for the activity, which may be to send the entry to
the target system, add the entry to the container send work list, or both.
The container send and receive processes are only used when an activity entry
requires information in addition to what is contained within the journal entry. The next
available job for the container send process for the data group retrieves the activity
entry from the container send work list and retrieves the container for the packaged
object from the data library. The container send job transmits the container to a
corresponding job of the container receive process on the target system. The


container receive process places the container in a data library on the target system.
The container send process waits for confirmation from the container receive job, then
adds the “container sent” date and time to the activity entry, changes the status of the
activity entry to PA (pending apply), and adds the entry to the object send work list.
The next available object apply process job for the data group retrieves the activity
entry from the object apply work list, locates the container for the object in the data
library, and replicates the operation represented by the entry. The object apply
process adds the “applied” date and time to the activity entry, changes the status of
the entry to CP (completed processing), and adds the entry to the status send work
list.
The status send process retrieves the activity entry from the status send work list
and transmits the updated entry to a corresponding status receive process job
on the source system. The status receive process updates the activity entry in the log
space and the distribution status file. If the activity entry requires further processing,
such as if an updated container is needed on the target system, the status receive job
adds the entry to the object send work list.

Processes with shared jobs


Data groups can be configured so that the object send process uses either a job
shared with other data groups or a dedicated job for the data group.
Environments with multiple data groups that perform system journal replication or any
form of cooperative processing can benefit from a shared object send process.
Sharing an object send job among data groups decreases the total number of jobs
reading from the system journal (QAUDJRN), thereby reducing the CPU used by
MIMIX on the source system. Once read, journal entries are routed to each of the
sharing data groups to be evaluated for replication.
When shipped default values are used, new data groups are configured using the
default shared job for the system. You can also use a shared job that is limited to only
the data groups that you explicitly set to use the same prefix for the object send job.
You also have the option to use a dedicated job for a data group.

Processes with multiple asynchronous jobs


The object retrieve, container send and receive, and object apply processes all
consist of one or more asynchronous jobs. You can specify the minimum and
maximum number of asynchronous jobs you want to allow MIMIX to run for each
process and a threshold for activating additional jobs. The minimum number indicates
how many permanent jobs should be started for the process. These jobs stay active
as long as the data group is active.
During periods of peak activity, if more requests are backlogged than are specified in
the threshold, additional temporary jobs, up to the maximum number, may also be
started. This load leveling feature allows system journal replication processes to react
automatically to periodic heavy workloads. By doing this, the replication process stays
current with production system activity. When system activity returns to a reduced
level, the temporary jobs end after a period of inactivity elapses.
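The load-leveling behavior above can be sketched as a simple sizing rule: keep the minimum number of permanent jobs, and add temporary jobs, never exceeding the maximum, once the backlog passes the threshold. The exact ramp-up policy MIMIX uses is not documented here, so the one-extra-job-per-threshold rule in this Python sketch is an assumption for illustration.

```python
# Conceptual sketch of threshold-based load leveling as described above.
# The parameter names and the ramp-up rule are illustrative assumptions,
# not actual MIMIX configuration keywords or behavior.
def jobs_needed(backlog, min_jobs, max_jobs, threshold):
    """Return how many process jobs should be active for a given backlog."""
    if backlog <= threshold:
        return min_jobs                      # permanent jobs only
    # Assume one temporary job per threshold's worth of excess backlog,
    # capped at the configured maximum.
    extra = -(-(backlog - threshold) // threshold)  # ceiling division
    return min(min_jobs + extra, max_jobs)
```

With a minimum of 2 jobs, a maximum of 6, and a threshold of 100 backlogged requests, a backlog of 150 would run 3 jobs, and a very large backlog would be capped at 6.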

Tracking object replication
After you start a data group, you need to monitor the status of the replication
processes and respond to any error conditions. Regular monitoring and timely
responses to error conditions significantly reduce the amount of time and effort
required in the event that you need to switch a data group.
MIMIX provides a high-level status indication of the processes used in object
replication and of error conditions. You can access detailed status information through
the Data Group Status window.
When an operation cannot complete on either the source or target system (such as
when the object is in use by another process and cannot be accessed), the activity
entry may go to a failed state. MIMIX attempts to rectify many failures automatically,
but some failures require manual intervention. Objects with at least one failed entry
outstanding are considered to be “in error.” You should periodically review the objects
in error, and the associated failed entries, and determine the appropriate action. You
may retry or delete one or all of the failed entries for an object. You can check the
progress of activity entries and take corrective action through the Work with Data
Group Activity display and the Work with DG Activity Entries display. You can also
subset directly to the activity entries in error from the Work with Data Groups display.
If you have new objects to replicate that are not within the MIMIX name space, you
need to add data group entries for them. Before any new data group entries can be
replicated, you must end and restart the system journal replication processes in order
for the changes to take effect.
The system manager removes old activity entries from the work log on each system
after the time specified in the system definition passes. The Keep data group history
(days) parameter (KEEPDGHST) indicates how long the activity entries remain on the
system. You can also manually delete activity entries. Containers in the data libraries
are deleted after the time specified in the Keep MIMIX data (days) parameter
(KEEPMMXDTA).
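The retention rule above reduces to a cutoff comparison. The KEEPDGHST and KEEPMMXDTA parameters are real, but the following Python function is only a conceptual illustration of the rule, not MIMIX code.

```python
# Illustrative model of the documented retention behavior: entries older
# than the configured number of days are removed.
from datetime import datetime, timedelta, timezone

def prune(items, keep_days, now=None):
    """Keep only (name, timestamp) pairs within keep_days of now."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=keep_days)
    return [(name, ts) for name, ts in items if ts >= cutoff]

now = datetime(2017, 6, 30, tzinfo=timezone.utc)
entries = [("old-entry", now - timedelta(days=10)),
           ("recent-entry", now - timedelta(days=2))]
kept = prune(entries, keep_days=7, now=now)   # only "recent-entry" remains
```

The same comparison applies whether the retained items are activity entries in the work log (KEEPDGHST) or containers in the data libraries (KEEPMMXDTA).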

Managing object auditing


The system journal replication path within MIMIX relies on entries placed in the
system journal by IBM i object auditing functions. To ensure that objects configured
for this replication path retain an object auditing value that supports replication, MIMIX
evaluates and changes the objects’ auditing value when necessary.
To do this, MIMIX employs a configuration value that is specified on the Object
auditing value (OBJAUD) parameter of data group entries (object, IFS, DLO)
configured for the system journal replication path. When MIMIX determines that an
object’s auditing value is lower than the configured value, it changes the object to
have the higher configured value specified in the data group entry that is the closest
match to the object. The OBJAUD parameter supports object audit values of *ALL,
*CHANGE, or *NONE.
MIMIX evaluates and may change an object’s auditing value when specific conditions
exist during object replication or during processing of a Start Data Group (STRDG)
request. This evaluation process can also be invoked manually for all objects
identified for replication by a data group.


During replication - MIMIX may change the auditing value during replication when
an object is replicated because it was created, restored, moved, or renamed into the
MIMIX name space (the group of objects defined to MIMIX).
While starting a data group - MIMIX may change the auditing value while
processing a STRDG request if the request specified processes that cause object
send (OBJSND) jobs to start and the request occurred after a data group switch or
after a configuration change to one or more data group entries (object, IFS, or DLO).
Shipped command defaults for the STRDG command allow MIMIX to set object
auditing if necessary. If you would rather set the auditing level for replicated objects
yourself, you can specify *NO for the Set object auditing level (SETAUD) parameter
when you start data groups.
Invoking manually - The Set Data Group Auditing (SETDGAUD) command provides
the ability to manually set the object auditing level of existing objects identified for
replication by a data group. When the command is invoked, MIMIX checks the audit
value of existing objects identified for system journal replication. Shipped default
values on the command cause MIMIX to change the object auditing value of objects
to match the configured value when an object’s actual value is lower than the
configured value.
The SETDGAUD command is used during initial configuration of a data group.
Otherwise, it is not necessary for normal operations and should only be used under
the direction of a trained MIMIX support representative.
The SETDGAUD command also supports optionally forcing a change to a configured
value that is lower than the existing value through its Force audit value (FORCE)
parameter.
Evaluation processing - Regardless of how the object auditing evaluation is
invoked, MIMIX may find that an object is identified by more than one data group
entry within the same class of object (IFS, DLO, or library-based). It is important to
understand the order of precedence for processing data group entries.
Data group entries are processed in order from most generic to most specific. IFS
entries are processed using the Unicode character set; object entries and DLO entries
are processed using the EBCDIC character set. The first entry (more generic) found
that matches the object is used until a more specific match is found.
The entry that most specifically matches the object is used to process the object. If
the object has a lower audit value, it is set to the configured auditing value specified in
the data group entry that most specifically matches the object.
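The precedence rule above can be sketched as a matching function: the most specific matching entry supplies the configured value, and the object's audit value is raised only if it is lower. In the following Python sketch, ranking values with a dictionary, matching generic names with fnmatch, and choosing specificity by pattern length are all illustrative assumptions, not MIMIX's actual matching algorithm.

```python
# Conceptual model of data group entry precedence for object auditing,
# as described above. The matching and ranking mechanics are assumptions.
from fnmatch import fnmatch

AUDIT_RANK = {"*NONE": 0, "*CHANGE": 1, "*ALL": 2}

def resolve_audit(object_name, current_audit, entries):
    """entries: list of (generic_pattern, configured_audit) pairs."""
    matches = [(pat, aud) for pat, aud in entries
               if fnmatch(object_name, pat)]
    if not matches:
        return current_audit
    # The most specific match (modeled here as the longest pattern) wins.
    _, configured = max(matches, key=lambda m: len(m[0]))
    # Raise the audit value only when the configured value is higher.
    if AUDIT_RANK[configured] > AUDIT_RANK[current_audit]:
        return configured
    return current_audit
```

For example, with entries ("APP*", *CHANGE) and ("APPDTA", *ALL), object APPDTA resolves to *ALL because the specific entry overrides the generic one, while an object already at *ALL is never lowered.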
When MIMIX processes a data group IFS entry and changes the auditing level of
objects which match the entry, the object is checked and, if necessary, changed to the
new auditing value. In the case of an IFS entry with a generic name, all descendants
of the IFS object may also have their auditing value changed.
When you change a data group entry, MIMIX updates all objects identified by the
same type of data group entry in order to ensure that auditing is set properly for
objects identified by multiple entries with different configured auditing values. For
example, if a new DLO entry is added to a data group, MIMIX sets object auditing for
all objects identified by the data group’s DLO entries, but not for its object entries or
IFS entries.

For more information and examples of setting auditing values with the SETDGAUD
command, see “Setting data group auditing values manually” on page 309.


User journal replication


MIMIX Remote Journal support enables MIMIX to take advantage of the cross-journal
communications capabilities provided by the IBM i remote journal function instead of
using internal communications. Newly created data groups use remote journaling as
the default configuration.

What is remote journaling?


Remote journaling is a function of the IBM i operating system that allows you to establish journals and
journal receivers on a target system and associate them with specific journals and
journal receivers on a source system. After the journals and journal receivers are
established on both systems, the remote journal function can replicate journal entries
from the source system to the journals and journal receivers located on the target
system.
The remote journal function supports both synchronous and asynchronous modes of
operation. More information about the benefits and implications of each mode can be
found in topic “Overview of IBM processing of remote journals” on page 65.
You should become familiar with the terminology used by the IBM i remote journal
function. The Backup and Recovery and Journal management books are good
sources for terminology and for information about considerations you should be
aware of when you use remote journaling. The IBM redbooks AS/400 Remote Journal
Function for High Availability and Data Replication (SG24-5189) and Striving for
Optimal Journal Performance on DB2 Universal Database for iSeries (SG24-6286)
provide an excellent overview of remote journaling in a high availability environment.
You can find these books online at the IBM eServer iSeries Information Center.

Benefits of using remote journaling with MIMIX


MIMIX has internal send and receive processing as part of its architecture. The MIMIX
Remote Journal support allows MIMIX to take advantage of the cross-journal
communications functions provided by the IBM i remote journal function instead of
using the internal communications provided by MIMIX. As stated in the AS/400
Remote Journal Function for High Availability and Data Replication redbook,
“The benefits of remote journal function include:
• It lowers the CPU consumption on the source machine by shifting
the processing required to receive the journal entries from the
source system to the target system. This is true when
asynchronous delivery is selected.
• It eliminates the need to buffer journal entries to a temporary area
before transmitting them from the source machine to the target
machine. This translates into less disk writes and greater DASD
efficiency on the source system.
• Since it is implemented in microcode, it significantly improves the
replication performance of journal entries and allows database
images to be sent to the target system in realtime. This realtime
operation is called the synchronous delivery mode. If the
synchronous delivery mode is used, the journal entries are
guaranteed to be in main storage on the target system prior to
control being returned to the application on the source machine.
• It allows the journal receiver save and restore operations to be
moved to the target system. This way, the resource utilization on
the source machine can be reduced.”

Restrictions of MIMIX Remote Journal support


The IBM i remote journal function does not allow writing journal entries directly to the
target journal receiver. This restriction severely limits the usefulness of cascading
remote journals in a managed availability environment.
MIMIX user journal replication does not support a cascading environment in which
remote journal receivers on the target system are also source journal receivers for a
third system.
Users who require this type of environment may use multiple installations of MIMIX,
implementing apply side journaling in one installation and using remote journaling to
replicate the applied transactions to a third system.

Overview of IBM processing of remote journals
Several key concepts within the IBM i remote journal function are important to
understanding its impact on MIMIX replication.
A local-remote journal pair refers to the relationship between a configured source
journal and target journal. The key point about a local-remote journal pair is that data
flows only in one direction within the pair, from source to target.
When the remote journal function is activated and all journal entries from the source
are requested, existing journal entries for the specified journal receiver on the source
system which have not already been replicated are replicated as quickly as possible.
This is known as catchup mode. Once the existing journal entries are delivered to
the target system, the system begins sending new entries in continuous mode
according to the delivery mode specified when the remote journal function was
started. New journal entries can be delivered either synchronously or asynchronously.

Synchronous delivery
In synchronous delivery mode the target system is updated in real time with journal
entries as they are generated by the source applications. The source applications do
not continue processing until the journal entries are sent to the target journal.
Each journal entry is first replicated to the target journal receiver in main memory on
the target system (1 in Figure 2). When the source system receives notification of the
delivery to the target journal receiver, the journal entry is placed in the source journal
receiver (2) and the source database is updated (3).
With synchronous delivery, journal entries that have been written to memory on the
target system are considered unconfirmed entries until they have been written to
auxiliary storage on the source system and confirmation of this is received on the
target system (4).

Figure 2. Synchronous mode sequence of activity in the IBM remote journal feature.
[Diagram: (1) the journal entry is replicated to the target journal receiver in main
memory on the target system; (2) the entry is written to the source journal receiver;
(3) the source database is updated; (4) confirmation of the I/O to auxiliary storage on
the source system is returned to the target system.]

Unconfirmed journal entries are entries replicated to a target system but the state of
the I/O to auxiliary storage for the same journal entries on the source system is not
known. Unconfirmed entries only pertain to remote journals that are maintained
synchronously. They are held in the data portion of the target journal receiver. These
entries are not processed with other journal entries unless specifically requested or
until confirmation of the I/O for the same entries is received from the source system.
Confirmation typically is not immediately sent to the target system for performance
reasons.
Once the confirmation is received, the entries are considered confirmed journal
entries. Confirmed journal entries are entries that have been replicated to the target
system and the I/O to auxiliary storage for the same journal entries on the source
system is known to have completed.
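The unconfirmed-to-confirmed transition described above can be modeled as marking entries up to the sequence number for which the source has reported completed I/O. The following Python sketch is an illustration of that rule only; the data structure is not part of MIMIX or IBM i.

```python
# Illustrative model of confirming synchronously delivered journal
# entries once the source reports its auxiliary-storage I/O completed.
def confirm(entries, confirmed_through):
    """Mark entries with sequence <= confirmed_through as confirmed."""
    return [
        (seq, "confirmed" if seq <= confirmed_through else "unconfirmed")
        for seq, _ in entries
    ]

received = [(1, "unconfirmed"), (2, "unconfirmed"), (3, "unconfirmed")]
state = confirm(received, confirmed_through=2)  # entry 3 stays unconfirmed
```

This mirrors the behavior described above: entries held in the target journal receiver remain unconfirmed, and are not processed with other journal entries, until confirmation arrives from the source system.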
With synchronous delivery, the most recent copy of the data is on the target system. If
the source system becomes unavailable, you can recover using data from the target
system.
Since delivery is synchronous to the application layer, there are application
performance and communications bandwidth considerations. There is some
performance impact to the application when it is moved from asynchronous mode to
synchronous mode for high availability purposes. This impact can be minimized by
ensuring efficient data movement. In general, a minimum of a dedicated 100 megabit
Ethernet connection is recommended for synchronous remote journaling.
MIMIX includes special switch processing for unconfirmed entries to ensure that the
most recent transactions are preserved in the event of a source system failure. For
more information, see “Support for unconfirmed entries during a switch” on page 73.

Asynchronous delivery
In asynchronous delivery mode, the journal entries are placed in the source journal
first (A in Figure 3) and then applied to the source database (B). An independent job
sends the journal entries from a buffer (C) to the target system journal receiver (D) at
some time after control is returned to the source applications that generated the
journal entries.
Because the journal entries on the target system may lag behind the source system’s
database, in the event of a source system failure, entries may become trapped on the
source system.

Figure 3. Asynchronous mode sequence of activity in the IBM remote journal feature.
(The figure shows applications on the source system depositing entries into the local
source journal receiver (A), the entries being applied to the production database (B),
an independent job sending entries from a buffer (C) to the remote target journal
receiver (D) on the target system, with journal message queues on both systems.)

With asynchronous delivery, the most recent copy of the data is on the source system.
Performance-critical applications frequently use asynchronous delivery.
Default values used in configuring MIMIX for remote journaling use asynchronous
delivery. This delivery mode is most similar to the MIMIX database send and receive
processes.

67
User journal replication processes
Data groups created using default values are configured to use remote journaling
support for user journal replication.
The replication path for database information includes the IBM i remote journal
function, the MIMIX database reader process, and one or more database apply
processes.
The IBM i remote journal function transfers journal entries to the target system.
The database reader (DBRDR) process reads journal entries from the
target journal receiver of a remote journal configuration and places those journal
entries that match replication criteria for the data group into a log space.
All journal entries deposited into the source journal will be transmitted to the target
system. The database reader process performs the filtering that is identified in the
data group definition parameters and file and tracking entry options.
The database apply process applies the changes stored in the target log space to the
appropriate database file or replicated object on the target system. MIMIX uses multiple
apply processes in parallel for maximum efficiency. Transactions that are not part of a
commit cycle are immediately applied to the target system. For transactions that are
part of a commit cycle, processing varies depending on how the data group is
configured. With default configuration values, MIMIX processes transactions that are
part of a commit cycle but does not apply those transactions until an open commit
cycle completes. Optionally, data groups can be configured to immediately apply
transactions that are part of a commit cycle.

The RJ link
To simplify tasks associated with remote journaling, MIMIX implements the concept of
a remote journal link. A remote journal link (RJ link) is a configuration element that
identifies an IBM i remote journaling environment. An RJ link identifies:
• A “source” journal definition that identifies the system and journal from which
journal entries are replicated.
• A “target” journal definition that defines a remote journal.
• Primary and secondary transfer definitions for the communications path for use by
MIMIX.
• Whether the IBM i remote journal function sends journal entries asynchronously or
synchronously.
Once an RJ link is defined and other configuration elements are properly set, user
journal replication processes will use the IBM i remote journaling environment within
their replication path.
The concept of an RJ link is integrated into existing commands. The Work with RJ
Links display makes it easy to identify the state of the IBM i remote journaling
environment defined by the RJ link.

68
Sharing RJ links among data groups
It is possible to configure multiple data groups to use the same RJ link. However, data
groups should only share an RJ link if they are intended to be switched together or if
they are non-switchable data groups. Otherwise, there is additional communications
overhead from data groups replicating in opposite directions and the potential for
journal entries for database operations to be routed back to their originating system.
See “Support for unconfirmed entries during a switch” on page 73 and “RJ link
considerations when switching” on page 73 for more details.

RJ links within and independently of data groups


The RJ link is integrated into commands for starting and ending data group replication
(STRDG and ENDDG). The STRDG and ENDDG commands automatically determine
whether the data group uses remote journaling and select the appropriate replication
path processes, including the RJ link, as needed.
Two MIMIX commands provide the ability to use an RJ link without performing data
replication. The Start Remote Journal Link (STRRJLNK) and the End Remote Journal
Link (ENDRJLNK) commands provide this capability.
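
For example, an administrator might activate the remote journaling environment ahead
of replication and later end just that environment. The sketch below omits the
identifying parameters, which depend on your configuration; prompt the command (F4)
to supply the RJ link used in your installation:

```
STRRJLNK                  /* Start only the IBM i remote journaling        */
                          /* environment defined by an RJ link; no         */
                          /* replication processes are started.            */
```

Because STRDG and ENDDG select the RJ link automatically for data groups that use
remote journaling, STRRJLNK and ENDRJLNK are needed only when you want the link
active or inactive independently of replication.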

Differences between ENDDG and ENDRJLNK commands


You should be aware of differences between ending data group replication (ENDDG
command) and ending only the remote journal link (ENDRJLNK command). You will
primarily use the End Data Group (ENDDG) command to end replication processes
and to optionally end the RJ link when necessary. The End Remote Journal Link
(ENDRJLNK) command ends only the RJ link.
Both commands include an end option (ENDOPT parameter) to specify whether to
end immediately or in a controlled manner. These options on the ENDRJLNK
command do not have the same meaning as on the ENDDG command. For
ENDRJLNK, the ENDOPT parameter has the following values:

Table 3. End option values on the End Remote Journal Link (ENDRJLNK) command.

*IMMED
The target journal is deactivated immediately. Journal entries that are already
queued for transmission are not sent before the target journal is deactivated.
The next time the remote journal function is started, the journal entries that
were queued but not sent are prepared again for transmission to the target
journal.

*CNTRLD
Any journal entries that are queued for transmission to the target journal will
be transmitted before the IBM i remote journal function is ended. At any time,
the remote journal function may have one or more journal entries prepared for
transmission to the target journal. If an asynchronous delivery mode is used
over a slow communications line, it may take a significant amount of time to
transmit the queued entries before actually ending the target journal.

The ENDRJLNK command’s ENDOPT parameter is ignored and an immediate end is
performed when either of the following conditions is true:
• When the remote journal function is running in synchronous mode
(DELIVERY(*SYNC)).
• When the remote journal function is performing catch-up processing.
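
As a sketch, the two end options described in Table 3 look like this (identifying
parameters are omitted; prompt the command to supply the RJ link for your
installation):

```
ENDRJLNK ENDOPT(*CNTRLD)  /* Transmit any queued journal entries, then     */
                          /* end the IBM i remote journal function.        */
ENDRJLNK ENDOPT(*IMMED)   /* Deactivate the target journal immediately;    */
                          /* queued entries are prepared again at the      */
                          /* next start of the remote journal function.    */
```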

70
RJ link monitors
User journal replication processes monitor the journal message queues of the
journals identified by the RJ link. Two RJ link monitors are created automatically, one
on the source system and one on the target system. These monitors provide added
value by allowing MIMIX to automatically monitor the state of the remote journal link,
to notify the user of problems, and to automatically recover the link when possible.

RJ link monitors - operation


The RJ link monitors are automatically started when the master monitor is started. If
for some reason the monitors are not already started, they will be started when you
start a remote journal link. The monitors are created if they do not already exist. The
source RJ link monitor is named after the source journal definition and the target RJ
link monitor is named after the target journal definition.
The RJ link monitors are MIMIX message queue monitors. They monitor messages
put on the message queues associated with the source and target journals. The
operating system issues messages to these journal message queues when a failure
is detected in IBM i remote journal processing. Each RJ link monitor uses information
provided in the messages to determine which remote journal link is affected and to try
to automatically recover that remote journal link. (The state of a remote journal link
can be seen by using the Work with RJ Links (WRKRJLNK) command.) There is a
limit on the number of times that a link will be recovered in a particular time period; a
continually failing link will eventually be marked failed and recovery will end. Typically
this occurs when there are communications problems. Once the problem is resolved,
you can start the RJ link monitors again by using the Work with Monitors (WRKMON)
command and selecting the Start option.
The RJ link monitor for the source does not end once it is started, since more than
one remote journal link can use a source monitor. Users can end the monitors by
using the Work with Monitors (WRKMON) command and selecting the End option.
MIMIX Monitor commands can be used to see the status of your RJ link monitors. The
WRKMON command lists all monitors for a MIMIX installation and displays whether
the monitor is active or inactive. You can also view the status of your RJ link monitors
on the DSPDGSTS status display (option 8 from the Work with Data Groups display).
Both the source and target RJ link monitor processes appear on this display. The
display shows whether or not the monitor processes are active. If MIMIX Monitor is
not installed as recommended, the RJ link monitor status appears as unknown on the
Display Data Group Status display.
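
As a quick reference for the commands described above (any optional parameters are
omitted here):

```
WRKRJLNK   /* Work with RJ Links: shows the state of each      */
           /* remote journal link.                             */
WRKMON     /* Work with Monitors: lists all monitors for the   */
           /* installation; use the Start or End option        */
           /* against an RJ link monitor.                      */
```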

RJ link monitors in complex configurations


In a broadcast scenario, a single source journal definition can link to multiple target
journal definitions, each over its own remote journal link. One source RJ link monitor
handles this broadcast, since there is one source RJ monitor per source journal
definition communicating via a remote journal link.
Alternately, in a cascade scenario an intermediate system can have both a source RJ
link monitor and a target RJ link monitor running on it for the same journal definition.
This intermediate system has the target journal definition for the system that
originated the replication and holds the source journal definition for the next system
in the cascade.
For more information about configuring for these environments, see “Data distribution
and data management scenarios” on page 390.

72
Support for unconfirmed entries during a switch
The MIMIX Remote Journal support implements synchronous mode processing in a
way that reduces data latency in the movement of journal entries from the source to
the target system. This reduces the potential for and the degree of manual
intervention when an unplanned outage occurs.
Whenever an RJ link failure is detected, MIMIX saves any unconfirmed entries on the
target system so they can be applied to the backup database if an unplanned switch
is required. The unconfirmed entries are the most recent changes to the data.
Maintaining this data on the target system is critical to your managed availability
solution.
In the event of an unplanned switch, the unconfirmed entries are routed to the MIMIX
database apply process to be applied to the backup database. As a result, you will
see the database apply process jobs run longer than they would under standard
switch processing. If the apply process is ended by a user before the switch, MIMIX
will restart the apply jobs to preserve these entries.
As part of the unplanned switch processing, MIMIX checks whether the apply jobs are
caught up. Then, unconfirmed entries are applied to the target database and added to
a journal that will be transferred to the source system when that system is brought
back up. When the backup system is brought online as the temporary source
system, the unconfirmed entries are processed before any new journal entries
generated by the application are processed. Furthermore, to ensure full data integrity,
once the original source system is operational these unconfirmed entries are the first
entries replicated back to that system.

RJ link considerations when switching


By default, when a data group is ended or a planned switch occurs, the RJ link
remains active. You need to consider whether to keep the original RJ link active after
a planned switch of a data group. If the RJ link is used by another application or data
group, the RJ link must remain active. Sharing an RJ link among multiple data groups
is only recommended for the conditions identified in “Sharing RJ links among data
groups” on page 69.
If the RJ link is not used by any other application or data group, the link should be
ended to prevent communications and processing overhead. When you are
temporarily running production applications on the backup system after a planned
switch, journal entries generated on the backup system are transmitted to the remote
journal receiver (which is on the production system). MIMIX applies the entries to the
original production database. If journaling is still active on the original production
database, new journal entries are created for the entries that were just applied. These
new journal entries are essentially a repeat of the same operation just performed
against the database. Remote journaling causes the entries to be transmitted back to
the backup system. MIMIX prevents these repeat entries from being reapplied;
however, the repeated entries still consume additional processing resources within
MIMIX and in communications.
MIMIX Model Switch Framework considerations - When remote journaling is used
in an environment in which MIMIX Model Switch Framework is implemented, you
need to consider the implications of sharing an RJ link. In addition, default values
used during a planned switch cause the RJ link to remain active. You may need to
end the RJ link after a planned switch.

74
User journal replication of IFS objects, data areas, data queues

MIMIX supports the ability to replicate IFS objects, data areas, and data queues
either by user journal (database) replication or by system journal (object) replication
processes. Replicating these objects through the user journal can be more efficient
than replication processes based on the system journal. Each time a journaled IFS
object, data area, or data queue changes, only the changed bytes are recorded in the
journal entry.
Default values for new installations and new data group object entries automatically
configure data group object entries that identify *DTAARA and *DTAQ objects for user
journal replication. Default values for IFS objects configure these objects to use
system journal replication; however, you can configure data group IFS entries to allow
user journal replication. When object or IFS entries allow user journal replication,
MIMIX uses tracking entries to uniquely identify each configured object. User journal
replication of these objects is sometimes referred to as advanced journaling.
A data group that replicates some or all configured IFS objects, data areas, or data
queues through a user journal may also replicate database files from the same journal
as well as replicate objects from the system journal. For example, a data group could
be configured to support MIMIX Dynamic Apply for *FILE objects, advanced
journaling for IFS objects and data areas, and system journal processes for data
queues and other library-based objects. For more information, see “Replication
choices by object type” on page 97.
You may need to consider how much data is replicated through the same apply
session for user journal replication processes and whether any transactions need to
be serialized with database files. For more information, see “Planning for journaled
IFS objects, data areas, and data queues” on page 87.

Benefits
One of the most significant benefits of journaling through the user journal is that IFS
objects, data areas, and data queues are processed by replicating only changed
bytes.
Another key advantage for IFS support is that environments performing many create,
move, rename, and delete operations, where all objects are journaled at birth and
remain within the replication namespace, will replicate robustly and without timing
issues related to QAUDJRN latency.
Another significant benefit of user journaling for IFS objects, data areas, and data
queues is that transactions can be applied in lock-step with a database file. This
requires that the objects and database files are configured to the same data group
and the same database apply session.
For example, assume that a hotel uses a database application to reserve rooms.
Within the application, a data area contains a counter to indicate the number of rooms
reserved for a particular day and a database file contains detailed information about
reservations. Each time a room is reserved, both the counter and the database file are
updated. If these updates do not occur in the same order on the target system, the
hotel risks reserving too many or too few rooms. When using system journal
replication, serialization of these transactions cannot be guaranteed on the target
system due to inherent differences in MIMIX processing from the user journal
(database file) and the system journal (default for objects). With user journal
processing, MIMIX serializes these transactions on the target system by updating
both the file and the data area. Thus, as long as both the database file and data area
are configured to be processed by the same apply session and processing of an
object is not held due to an error, updates occur on the target system in the same
order they were originally made on the source system.
Additional benefits of replicating IFS objects, data areas, and data queues from the
user journal include:
• Replication is less intrusive. In system-based object replication, the save/restore
process places locks on the replicated object on the source system. Database
replication touches the user journal only, leaving the source object alone.
• More robust handling of environments with a high volume of move, rename,
create, and delete operations.
• Changes to objects replicated from the user journal may be replicated to the target
system in a more timely manner. In traditional object replication, system journal
replication processes must contend with potential locks placed on the objects by
user applications.
• Processing time may be reduced, even for equal amounts of data. Database
replication eliminates the separate save, send, and restore processes necessary
for object replication.
• The objects replicated from the user journal can reduce burden on object
replication processes when there is a lot of activity being replicated through the
system journal.
• Commitment control is supported for B journal entry types for IFS objects
journaled to a user journal.
• Support for multiple hard links to a single stream file.
Restrictions and configuration requirements vary for IFS objects and data area or data
queue objects. For detailed information, including supported journal entry types, see
“Identifying data areas and data queues for replication” on page 113 and “Identifying
IFS objects for replication” on page 116.

Processes used
When IFS objects, data areas, and data queues are properly configured, replication
occurs through the user journal replication path. Processing occurs through the IBM i
remote journal function, the MIMIX database reader process,¹ and one database
apply process (session A).

1. Data groups can also be configured for MIMIX source-send processing instead of MIMIX RJ
support.

76

Tracking entries
A tracking entry is associated with each IFS object, data area, and data queue that is
replicated through the user journal.
The collection of data group IFS entries for a data group determines the subset of
existing IFS objects on the source system that are eligible for user journal replication
techniques. Similarly, the collection of data group object entries determines the subset
of existing data areas and data queues on the source system that are eligible for user
journal replication techniques. MIMIX requires a tracking entry for each of the eligible
objects to identify how it is defined for replication and to assist with tracking status
when it is replicated. IFS tracking entries identify IFS stream files, symbolic links and
directories, including the source and target file ID (FID), while object tracking entries
identify data areas or data queues.
When you initially configure a data group you must load tracking entries, start
journaling for the objects which they identify, and synchronize the objects with the
target system. The same is true when you add new or change existing data group IFS
entries or object entries.
It is also possible for tracking entries to be automatically created. After creating or
changing data group IFS entries or object entries that are configured for replication
through the user journal, tracking entries are created the next time the data group is
started. However, this method has disadvantages: it can significantly increase
amount of time needed to start a data group. If the objects you intend to replicate
through the user journal are not journaled before the start request is made, MIMIX
places the tracking entries in *HLDERR state. Error messages indicate that journaling
must be started and the objects must be synchronized between systems.
Once a tracking entry exists, it remains until one of the following occurs:
• The object identified by the tracking entry is deleted from the source system and
replication of the delete action completes on the target system.
• The object identified by the tracking entry is moved or renamed to a name that is
not configured for replication.
• The data group configuration changes so that an object is no longer identified for
replication through the user journal.

77
Figure 4 shows an IFS user directory structure, the include and exclude processing
selected for objects within that structure, and the resultant list of tracking entries
created by MIMIX.

Figure 4. IFS tracking entries produced by MIMIX

The status of tracking entries is included with other data group status. You also can
see what objects they identify, whether the objects are journaled, and their replication
status. You can also perform operations on tracking entries, such as holding and
releasing, to address replication problems.

IFS object file identifiers (FIDs)


Normally, when dealing with objects and database files, you can see the name of the
object (file name, library name, and member name) in the journal entries. For IFS
objects, it is impractical to put the name of the IFS object in the
header of the journal entry due to potentially long path names.
Each IFS object on a system has a unique 16-byte file ID (FID). The FID is used to
identify IFS objects in journal entries. The FID is machine-specific, meaning that IFS
objects with the same path name may have different FIDs on different systems.
MIMIX tracks the FIDs for all IFS objects and the parent directory of each object
configured for replication with advanced journaling via IFS tracking entries. When the
data group is switched, the source and target FID associations are reversed, allowing
MIMIX to successfully replicate transactions to IFS objects.

78
Older source-send user journal replication processes


This topic describes the older, less used method of replicating from a user journal
known as MIMIX source-send processing. In this method, data groups are configured
to use MIMIX source-send processes.
Note: New data groups are created to use remote journaling support for user journal
replication when shipped default values on commands are used. Using remote
journaling support offers many benefits over using MIMIX source-send
processes.
MIMIX uses journaling to identify changes to database files and other journaled
objects to be replicated. As journal entries are added to the journal receiver, the
database send process collects data from journal entries on the source system and
compares them to the data group file entries defined for the data group.
Journal entries for which a match is found for the file and library are then transported
to the target system for replication according to the DB journal entry processing
parameter (DBJRNPRC) filtering specified in the data group definition. The Data
group file entries (FEOPT) parameter, specified either at the data group level or on
individual data group file entries, also indicates whether to send only the after-image
of the change or both before-image and after-images.
Alternatively, if all journal entries are sent to the target system, the journal entries are
filtered there by the apply process. The matching for the apply process is at the file,
library, and member level.
Note: If an application program adds or removes members and all members within
the file are to be processed by MIMIX, it is better to use *ALL as the member
name in that data group file entry. If individual members are specified, only
those members you identify are processed.
On the target system, the database receive process transfers the data received over
the communications line from the source system into a log space on the target
system.
The database apply process applies replicated database transactions from the log
space to the appropriate database or replicated object on the target system. MIMIX
uses multiple apply processes in parallel for maximum efficiency. Transactions that
are not part of a commit cycle are immediately applied to the target system. For
transactions that are part of a commit cycle, processing varies depending on how the
data group is configured. With default configuration values, MIMIX processes
transactions that are part of a commit cycle but does not apply those transactions until
an open commit cycle completes. Optionally, MIMIX can immediately apply
transactions that are part of a commit cycle.
Throughout this process, MIMIX manages the journal receiver unless you have
specified otherwise. The journal definition default operation specifies that MIMIX
automatically create the next journal receiver when the journal receiver reaches the
threshold size you specified in the journal definition. After MIMIX finishes reading the
entries from the current journal receiver, it deletes this receiver (if configured to do so)
and begins reading entries from the next journal receiver. This eliminates excessive
use of disk storage and allows valuable system resources to be available for other
processing.
Besides indicating the mapping between source and target file names, data group file
entries identify additional information used by database processes. The data group
file entry can also specify a particular apply session to use for processing on the
target system.
A status code in the data group file entry also stores the status of the file or member in
the MIMIX process. If a replication problem is detected, MIMIX puts the member in
hold error (*HLDERR) status so that no further transactions are applied. Files can
also be put on hold (*HLD) manually.
Putting a file on hold causes MIMIX to retain all journal entries for the file in log
spaces on the target system. If you expect to synchronize files at a later time, it is
better to put the file in an ignored state. By setting files to an ignored state, journal
entries for the file in the log spaces are deleted and additional entries received from
the source system are discarded. This keeps the log spaces to a minimal size and
improves efficiency for the apply process.
The file entry option Lock member during apply indicates whether or not to allow only
restricted access (read-only) to the file on the backup system. This file entry option
can be specified on the data group definition or on individual data group entries.

80
CHAPTER 3 Preparing for MIMIX

This chapter outlines what you need to do to prepare for using MIMIX.
Preparing for the installation and use of MIMIX is a very important step towards
meeting your availability management requirements. Because of their shared
functions and their interaction with other MIMIX products, it is best to determine IBM
System i requirements for user journal and system journal processing in the context of
your total MIMIX environment.
Give special attention to planning and implementing security for MIMIX. General
security considerations for all MIMIX products can be found in the Using License
Manager book. In addition, you can make your systems more secure with MIMIX
product-level and command-level security. Each product has its own product-level
security, but you must also consider the security implications of common functions
used by each product. Information about setting security for common functions is also
found in the Using License Manager book.
The topics in this chapter include:
• “Checklist: pre-configuration” on page 82 provides a procedure to follow to
prepare to configure MIMIX on each system that participates in a MIMIX
installation.
• “Data that should not be replicated” on page 84 describes how to consider what
data should not be replicated.
• “Planning for journaled IFS objects, data areas, and data queues” on page 87
describes considerations when planning to use advanced journaling for IFS
objects, data areas, or data queues.
• “Starting the MIMIXSBS subsystem” on page 92 describes how to start the
MIMIXSBS subsystem which all MIMIX products run in.
• “Accessing the MIMIX Main Menu” on page 93 describes the MIMIX Main Menu
and its two assistance levels, basic and intermediate which provide options to help
simplify daily interactions with MIMIX.

81
Checklist: pre-configuration
You need to configure MIMIX on each system that participates in a MIMIX installation.
Do the following:
1. By now, you should have completed the following tasks:
• The checklist for installing MIMIX software in the Using License Manager book
• Turning on product-level security and granting authority to user profiles to
control access to the MIMIX products.
2. At this time, you should review the information in “Data that should not be
replicated” on page 84.
3. Decide what replication choices are appropriate for your environment. Review the
following topics:
• “New configuration default environment” on page 83
• “Planning for journaled IFS objects, data areas, and data queues” on page 87
• For detailed information see the chapter “Planning choices and details by
object class” on page 95.
4. If it is not already active, start the MIMIXSBS subsystem using topic “Starting the
MIMIXSBS subsystem” on page 92.
5. Configure each system in the MIMIX installation, beginning with the management
system. The chapter “Configuration checklists” on page 135 identifies the primary
options you have for configuring MIMIX.
6. Once you complete the configuration process you choose, you may also need to
do one or more of the following:
• If you plan to use MIMIX Monitor in conjunction with MIMIX, you may need to
write exit programs for monitoring activity and you may want to ensure that
your monitor definitions are replicated. See the MIMIX Operations book for
more information.
• Verify the configuration.
• Verify any exit programs that are called by MIMIX.
• Update any automation programs you use with MIMIX and verify their
operation.
• If you plan to use switching support, you or your Certified MIMIX Consultant
may need to take additional action to set up and test switching. Customization
of the procedures and steps for application groups may be appropriate and
should be considered. In environments that do not use application groups, a
default model switch framework must be configured and identified in MIMIX
policies. For more information about switching and policies, see the MIMIX
Operations book.

82
New configuration default environment


Default options in MIMIX result in a new configuration that replicates data in the
following ways:
• Both the system journal and a user journal are used to cooperatively process
logical and physical files, IFS objects, data areas, and data queues.
• The user journal employs a remote journal environment to transmit journal entries
to the target system.
• The system journal is used to replicate other types of library-based objects.
Multiple data groups share a job that reads and sends the journal entries to be
replicated to the target system.
• Application groups provide operational control for starting, stopping, and switching
replication.
Data that should not be replicated
There are some considerations to keep in mind when defining data for replication. Not
only do you need to determine what is critical to replicate, but you also need to
consider data that should not be replicated.
As you identify your critical data, consider the following:
• Do not place user created objects or programs in the LAKEVIEW, MIMIXQGPL, or
VSI001LIB libraries or in the IFS location /visionsolutions/http/vsisvr.
Any user created objects or programs in these locations will be deleted during the
installation process. Move any such objects or programs to a different location
before installing software. The one exception is that job descriptions, such as the
MIMIX Port job, can continue to be placed into the MIMIXQGPL library.
• Only user created objects or programs that are related to a product installation
should be placed within the product’s installation library or a data library.
Examples of related objects for MIMIX products include user created step
programs, user exit programs, and programs created as part of a MIMIX Model
Switch Framework implementation.

• Certain types of information must not be replicated. Also, some temporary data
associated with applications may not need to be replicated. Table 4 identifies what
data to exclude from replication.

Table 4. Data to exclude from replication

Application Environment
  Temporary objects or files: You may not need to replicate temporary files,
  work files, and temporary objects, including DLOs and stream files. Evaluate
  how your applications use such files to determine if they need to be
  replicated.

System Environment
  Libraries: IBM-supplied libraries, files, and other objects for System i,
  which typically begin with the prefix Q.
  User profiles: System user profiles, such as QSYSOPR and QSECOFR.

MIMIX Environment
  Libraries (and contents): LAKEVIEW, MIMIXQGPL, MIMIX product installation
  libraries, MIMIX data libraries, MXIWIZ*, VSI001LIB
  Note: MIMIX is the default name for the MIMIX installation library -- the
  library in which MIMIX® Enterprise™ or MIMIX® Professional™ is installed.
  MIMIX data libraries are associated with a MIMIX installation library and
  have names in the format installation-library-name_x, where x is a letter
  or number.
  IFS directories (and contents): /LakeviewTech, /VISIONSOLUTIONS,
  /VisionISOImages
  User profiles: LAKEVIEW, MIMIXOWN, MIMIXCLU

iOptimize Environment
  If iOptimize is installed on the same system or in the same partition as
  MIMIX, do not replicate the following:
  Libraries (and contents): IOPT, IOPT71, IOPTSPLARC, IOPTOBJARC
  Note: IOPT is the default name for the iOptimize installation library -- the
  library in which iOptimize is installed. iOptimize data libraries are
  associated with an iOptimize installation library and begin with the
  default name.
  Authorization lists: IOAUTLST71
  User profiles and corresponding message queues: IOPTOWNER, ITIDGUI,
  VSI001LIB
  Device description: VSINSVRT
  IFS directories (and contents): /VISIONSOLUTIONS, /VisionISOImages

MIMIX Director™ Environment
  For MIMIX Director, 8n is the release level. For example, n=1 in release
  8.1. If MIMIX Director is installed on the same system or in the same
  partition as MIMIX, do not replicate the following:
  Libraries (and contents): ITID8n, IDSPLARC8n, IDOBJARC8n
  Authorization lists: IDAUTLST8n
  User profiles and corresponding message queues: ITIDOWNER, ITIDGUI,
  VSI001LIB
  Device description: VSINSVRT
  IFS directories (and contents): /VISIONSOLUTIONS, /VisionISOImages
Planning for journaled IFS objects, data areas, and data queues
You can choose to use the cooperative processing support within MIMIX to replicate
any combination of journaled IFS objects, data queue objects, or data area objects
using user journal replication processes.
In addition to configuration and journaling requirements and the restrictions that apply,
you need to address several other considerations when planning to replicate
journaled IFS objects, data areas, or data queues. These considerations affect
whether journals should be shared, whether objects should be replicated in a data
group shared with database files, whether configuration changes are needed to
change apply sessions for database files, and whether exit programs need to be
updated.

Is user journal replication appropriate for your environment?


While user journal replication has significant advantages, it may not be appropriate for
all of the files in your environment. Or, it may be appropriate for only some of the
supported object types. Consider the following:
• IFS objects that meet the following criteria are better suited for replication through
the system journal:
• written or re-written entirely at one time
• are rarely modified for changes or additions to only a small portion of their data
• do not perform many rapid move/rename/create/delete operations involving
the same names
For example, applications that store scanned images for archival purposes and
repositories for office documents typically meet these criteria.
• User journal replication is recommended if any of the following are true:
• Small modifications are made to files by adding or changing a subset of the file
Note: An application like Microsoft Word or Microsoft Excel will typically re-
write the entire file each time the file is saved, so minor end-user edits to
these files do not fall into this category.
• Files are moved, renamed, deleted, and/or recreated rapidly
• If multiple hard links to a single stream file are used, user journal replication is
required.
The benefits of user journal replication are described in “User journal replication of
IFS objects, data areas, data queues” on page 75. For restrictions and limitations, see
“Identifying data areas and data queues for replication” on page 113 and “Identifying
IFS objects for replication” on page 116.
Serialized transactions with database files
Transactions completed for database files and objects (IFS objects, data areas, or
data queues) can be serialized with one another when they are applied on the target
system. If you require serialization, these objects and database files must share the
same data group as well as the same database apply session, session A. For
example, when a database record contains a reference to a corresponding stream file
that is associated with the record, serialization may be desired.
Since MIMIX uses apply session A for all objects configured for user journal
replication, serialization may require that you change the configuration for database
files to ensure that they use the same apply session. Load balancing may also
become a concern. See “Database apply session balancing” on page 90.

Converting existing data groups


When converting an existing data group consider the following:
• The Convert Data Group IFS Entries (CVTDGIFSE) command provides the most
efficient way to convert IFS entries from system journal (object) replication to user
journal (database) replication. For more information, see “Checklist: Converting
IFS entries to user journaling using the CVTDGIFSE command” on page 154.
• You may have previously used data groups with a Data group type (TYPE) value
of *OBJ to separate replication of IFS, data area, or data queue objects from other
activity. Converting these data groups requires the Data group type (TYPE) to be
*ALL. The data group definition and existing data group entries must be changed
to the values required to allow cooperative processing primarily through the user
journal. When using the CVTDGIFSE command to convert IFS entries to user
journal replication, these configuration changes are handled for you.
• When converting an existing data group, some objects in the IFS path may be
better suited for system journal replication. To achieve desired results, you may
need to create an additional data group for IFS entries. This may include creating
IFS entries that replicate some objects in an IFS directory via system journal
replication and other objects in the same IFS directory via user journal replication.
For more information, see example 3 in “Conversion examples” on page 89.
When using the CVTDGIFSE command to convert IFS entries you can choose to
convert all or some of the IFS entries currently configured for system journaling
within a data group.
• Adding IFS, data area, or data queue objects configured for user journal
replication to an existing database replication environment may increase
replication activity and affect performance. If a large amount of data is to be
replicated, consider the overall replication performance and throughput
requirements when choosing a configuration.
• Changing the replication mechanism of IFS objects, data areas, or data queues
from system journal replication to user journal replication generally reduces
bandwidth consumption, improves replication latency, and eliminates the locking
contention associated with the save and restore process. However, if these
objects have never been replicated, the addition of IFS byte stream files, data
areas, or data queues to the replication environment will increase bandwidth
consumption and processing workload.

Conversion examples
To illustrate a simple conversion, assume that the systems defined to data group
KEYAPP are running on an IBM i. You use this data group for system journal
replication of the objects in library PRODLIB. The data group has one data group
object entry which has the following values:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*FILE)
Example 1 - You decide to use user journal replication for all *DTAARA and *DTAQ
objects replicated with data group KEYAPP. You have confirmed that the data group
definition specifies TYPE(*ALL) and does not need to change. After performing a
controlled end of the data group, you change the data group object entry to have the
following values:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*DFT)
Note: COOPTYPE(*DFT) is equivalent to specifying COOPTYPE(*FILE *DTAARA
*DTAQ).
When the data group is started, object tracking entries are loaded for the data area
and data queue objects in PRODLIB. Those objects will now be replicated from a user
journal. Any other object types in PRODLIB continue to be replicated from the system
journal.
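Such an entry change is typically made with the Change Data Group Object Entry (CHGDGOBJE) command while the data group is ended. The following invocation is an illustrative sketch only: the data group name KEYAPP comes from this example, the DGDFN value is shown without its system qualifiers, and the exact parameter list should be verified by prompting the command on your MIMIX release.

```
CHGDGOBJE DGDFN(KEYAPP) LIB1(PRODLIB) OBJ1(*ALL) COOPTYPE(*DFT)
```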
Example 2 - You want to use user journal replication for data group KEYAPP but one
data area, XYZ, must remain replicated from the system journal. You will need the
data group object entry described in Example 1.
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*DFT)
You will also need a new data group object entry that specifies the following so that
data area XYZ can be replicated from the system journal:
LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD)
COOPDB(*NO)
Example 3 - You want to use user journal replication for objects with the IFS Directory
IFSDIR but one object, abcXYZ, must remain replicated from the system journal. You
will need to do the following:
1. Run the Convert Data Group IFS Entries (CVTDGIFSE) command. See "Running
the CVTDGIFSE command" on page 147. A tracking entry (IFSTE) is created for
all IFS objects.
2. Add the data group IFS entry for the one object, abcXYZ, that you do not want to
replicate through the user journal:
OBJ1('/ifsdir/abcXYZ') PRCTYPE(*INCLD) COOPDB(*NO)
3. If the object previously was not journaled, but is now because CVTDGIFSE
journaled it, end journaling for the object’s tracking entry:
ENDJRNIFSE OBJ(('/ifsdir/abcXYZ'))
4. Remove the object’s tracking entry:
RMVDGIFSTE OBJ1('/ifsdir/abcXYZ')

Database apply session balancing


In each data group, one database apply session, session A, is used for all IFS
objects, data areas, and data queues replicated from a user journal. If you also
replicate database files in the same data group, the way in which files are configured
for replication can also affect how much data is processed by apply session A. In
some cases, you may need to adjust the configured apply session in data group object
and file entries to either ensure that files that should be serialized remain in the same
apply session or to move files to another apply session to manually balance loads.
Consider the following:
• In MIMIX Dynamic Apply configurations, newly created database files are
distributed evenly across database apply sessions by default. This ensures that
the files are distributed in a way that will not overload any one apply session.
When a data group is initially started or when a start request specifies to clear
pending entries, MIMIX assigns data group file entries to an apply session.
Default behavior is to use the target system to determine the database file
network relationships when assigning apply sessions.
• In configurations using legacy cooperative processing, newly created database
files are distributed to apply session A by default. In data groups that also
replicate IFS objects, data areas or data queues through the user journal, it may
be necessary to change the apply session to which cooperatively processed files
are directed when the database files are created to prevent apply session A from
becoming overloaded. The apply session can be changed in the file entry options
(FEOPT) on the data group object and file entries.
• Logical files and physical files with referential constraints also have apply session
requirements to consider. For more information see “Considerations for LF and PF
files” on page 106.

User exit program considerations


When new or different journaled object types are added to an existing data group,
user exit programs may be affected. Be aware of the following exit program
considerations when changing an existing configuration to include IFS objects, data
areas, or data queues configured for replication processing from a user journal.
• When IFS objects, data areas, or data queues are journaled to a user journal, new
journal entry codes are provided to the user exit program. If the user exit program
interprets the journal code, changes may be required.
• The path name for IFS objects cannot be interpreted in the same way as it can for
database files. MIMIX uses the file ID (FID) to identify the IFS object being
replicated. User exit programs that rely on the library and file names in the journal
entry may need to be changed to either ignore IFS journal entries or process them
by resolving the FID to a path name using the IBM-supplied APIs.
• Journaled IFS objects and data queues can have incomplete journal entries. For
incomplete journal entries, MIMIX provides two or more journal entries with
duplicate journal entry sequence numbers and journal codes and types to the user
exit program when the data for the incomplete entry is retrieved. Programs need
to correctly handle these duplicate entries representing the single, original journal
entry.
• Journal entries for journaled IFS objects, data areas, and data queues will be
routed to the user exit program. This may be a performance consideration relative
to user exit program design.
Contact your Certified MIMIX Consultant for assistance with user exit programs.
Starting the MIMIXSBS subsystem
By default, all MIMIX products run in the MIMIXSBS subsystem that is created when
you install the product. This subsystem must be active before you can use the MIMIX
products.
If the MIMIXSBS is not already active, start the subsystem by typing the command
STRSBS SBSD(MIMIXQGPL/MIMIXSBS) and pressing Enter.
Any autostart job entries listed in the MIMIXSBS subsystem will start when the
subsystem is started.
Note: You can ensure that the MIMIX subsystem is started after each IPL by adding
this command to the end of the startup program for your system. Due to the
unique requirements and complexities of each MIMIX implementation, it is
strongly recommended that you contact your Certified MIMIX Consultant to
determine the best way in which to design and implement this change.
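As an illustration only, the startup-program change described in the note might resemble the following CL fragment. The placement and program names are assumptions that vary by system; the MONMSG command ignores the escape message signaled if the subsystem is already active.

```
/* Hypothetical addition near the end of the system startup program */
STRSBS     SBSD(MIMIXQGPL/MIMIXSBS)
MONMSG     MSGID(CPF1010)  /* Subsystem already active; ignore */
```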
Accessing the MIMIX Main Menu


The MIMIX command accesses the main menu for a MIMIX installation. The MIMIX
Main Menu has two assistance levels, basic and intermediate. The command defaults
to the basic assistance level, shown in Figure 5, with its options designed to simplify
day-to-day interaction with MIMIX. Figure 6 shows the intermediate assistance level.
The options on the menu vary with the assistance level. In either assistance level, the
available options also depend on the MIMIX products installed in the installation
library and their licensing. The products installed and the licensing also affect
subsequent menus and displays.
Accessing the menu - If you know the name of the MIMIX installation you want, you
can use the name to library-qualify the command, as follows:
Type the command library-name/MIMIX and press Enter. The default name of
the installation library is MIMIX.
If you do not know the name of the library, do the following:
1. Type the command LAKEVIEW/WRKPRD and press Enter.
2. Type a 9 (Display product menu) next to the product in the library you want on the
Vision Solutions Installed Products display and press Enter.
Changing the assistance level - The F21 key (Assistance level) on the main menu
toggles between basic and intermediate levels of the menu. You can also specify
the Assistance Level (ASTLVL) parameter on the MIMIX command.

Figure 5. MIMIX Basic Main Menu

MIMIX Basic Main Menu


System: SYSTEM1
MIMIX

Select one of the following:

1. Work with application groups WRKAG


2. Start MIMIX
3. End MIMIX
4. Switch all application groups
5. Start or complete switch using Switch Asst.
6. Work with data groups WRKDG

10. Availability status WRKMMXSTS


11. Configuration menu
12. Work with monitors WRKMON
13. Work with messages WRKMSGLOG
14. Cluster menu
More...
Selection or command
===>__________________________________________________________________________
______________________________________________________________________________
F3=Exit F4=Prompt F9=Retrieve F21=Assistance level F12=Cancel
(C) Copyright Vision Solutions, Inc., 1990, 2014.
Note: On the MIMIX Basic Main Menu, options 5 (Start or complete switch using
Switch Asst.) and 10 (Availability Status) are not recommended for
installations that use application groups.

Figure 6. MIMIX Intermediate Main Menu

MIMIX Intermediate Main Menu


System: SYSTEM1
MIMIX
Select one of the following:

1. Work with data groups WRKDG


2. Work with systems WRKSYS
3. Work with messages WRKMSGLOG
4. Work with monitors WRKMON
5. Work with application groups WRKAG
6. Work with audits WRKAUD
7. Work with procedures WRKPROC

11. Configuration menu


12. Compare, verify, and synchronize menu
13. Utilities menu
14. Cluster menu
More...
Selection or command
===>__________________________________________________________________________
______________________________________________________________________________
F3=Exit F4=Prompt F9=Retrieve F21=Assistance level F12=Cancel
(C) Copyright Vision Solutions, Inc., 1990, 2014.
CHAPTER 4  Planning choices and details by object class

This chapter describes the replication choices available for objects and identifies
critical requirements, limitations, and configuration considerations for those choices.
Many MIMIX processes are customized to provide optimal handling for certain
classes of related object types and differentiate between database files, library-based
objects, integrated file system (IFS) objects, and document library objects (DLOs).
Each class of information is identified for replication by a corresponding class of data
group entries. A data group can have any combination of data group entry classes.
Some classes even support multiple choices for replication.
In each class, a data group entry identifies a source of information that can be
replicated by a specific data group. When you configure MIMIX, each data group
entry you create identifies one or more objects to be considered for replication or to
be explicitly excluded from replication. When determining whether to replicate a
journaled transaction, MIMIX evaluates all of the data group entries for the class to
which the object belongs. If the object is within the name space determined by the
existing data group entries, the transaction is replicated.
When configuring installations that are licensed for MIMIX DR, name mapping is not
supported in data group entries.
The topics in this chapter include:
• “Replication choices by object type” on page 97 identifies the available replication
choices for each object class.
• “Configured object auditing value for data group entries” on page 98 describes
how MIMIX uses a configured object auditing value that is identified in data group
entries and when MIMIX will change an object’s auditing value to match this
configuration value.
• “Identifying library-based objects for replication” on page 100 includes information
that is common to all library-based objects, such as how MIMIX interprets the data
group object entries defined for a data group. This topic also provides examples
and additional detail about configuring entries to replicate spooled files and user
profiles.
• “Identifying logical and physical files for replication” on page 106 identifies the
replication choices and considerations for *FILE objects with logical or physical file
extended attributes. This topic identifies the requirements, limitations, and
configuration requirements of MIMIX Dynamic Apply and legacy cooperative
processing.
• “Identifying data areas and data queues for replication” on page 113 identifies the
replication choices and configuration requirements for library-based objects of
type *DTAARA and *DTAQ. This topic also identifies restrictions for replication of
these object types when user journal processes (advanced journaling) are used.
• “Identifying IFS objects for replication” on page 116 identifies supported and
unsupported file systems, replication choices, and considerations such as long
path names and case sensitivity for IFS objects. This topic also identifies
restrictions and configuration requirements for replication of these object types
when user journal processes (advanced journaling) are used.
• “Identifying DLOs for replication” on page 122 describes how MIMIX interprets the
data group DLO entries defined for a data group and includes examples for
documents and folders.
• “Processing of newly created files and objects” on page 126 describes how new
IFS objects, data areas, data queues, and files that have journaling implicitly
started are replicated from the user journal.
• “Processing variations for common operations” on page 129 describes
configuration-related variations in how MIMIX replicates move/rename, delete,
and restore operations.
Replication choices by object type


A new configuration of MIMIX that uses shipped defaults for all configuration choices
will use remote journaling support for replication from user journals. Default
configuration choices will result in physical files (data and source) as well as logical
files, data areas, and data queues being processed through user journal replication.
All other supported object types and classes will be replicated using system journal
replication. You can optionally use other replication processes as described in
Table 5.

Table 5. Replication choices by object class

Objects of type *FILE, extended attributes PF (data, source) and LF
  Default: user journal with MIMIX Dynamic Apply (1)
    Required entries: Object entries and File entries
  Other: For PF data files, legacy cooperative processing (2). (For PF source
    and LF files, system journal.)
    Required entries: Object entries and File entries
  More information: "Identifying logical and physical files for replication"
  on page 106

*FILE, other extended attributes
  Default: system journal
    Required entries: Object entries
  More information: "Identifying library-based objects for replication" on
  page 100

Objects of type *DTAARA
  Default: advanced journaling (2)
    Required entries: Object entries and Object tracking entries
  Other: system journal
    Required entries: Object entries
  More information: "Identifying data areas and data queues for replication"
  on page 113

Objects of type *DTAQ
  Default: advanced journaling (2)
    Required entries: Object entries and Object tracking entries
  Other: system journal
    Required entries: Object entries
  More information: "Identifying data areas and data queues for replication"
  on page 113

Other library-based objects
  Default: system journal
    Required entries: Object entries
  More information: "Identifying library-based objects for replication" on
  page 100

IFS objects
  Default: system journal
    Required entries: IFS entries
  Other: advanced journaling (2)
    Required entries: IFS entries and IFS tracking entries
  More information: "Identifying IFS objects for replication" on page 116

DLOs
  Default: system journal
    Required entries: DLO entries
  More information: "Identifying DLOs for replication" on page 122

1. New data groups are created to use remote journaling and to cooperatively
   process files using MIMIX Dynamic Apply. Existing data groups can be
   converted to this method of cooperative processing.
2. User journal replication can be configured for either remote journaling or
   MIMIX source-send processes.
Configured object auditing value for data group entries
When you create data group entries for library-based objects, IFS objects, or DLOs,
you can specify an object auditing value within the configuration. This configured
object auditing value affects how MIMIX handles changes to attributes of objects. It is
particularly important for, but not limited to, objects configured for system journal
replication.
The Object auditing value (OBJAUD) parameter defines a configured object auditing
level for use by MIMIX. This configured value is associated with all objects identified
for processing by the data group entry. An object’s actual auditing level determines
the extent to which changes to the object are recorded in the system journal and
replicated by MIMIX. The configured value is used during initial configuration and
during processing of requests to compare objects that are identified by configuration
data.
In specific scenarios, MIMIX evaluates whether an object’s auditing value matches
the configured value of the data group entry that most closely matches the object
being processed. If the actual value is lower than the configured value, MIMIX sets
the object to the configured value so that future changes to the object will be recorded
as expected in the system journal and therefore can be replicated.
Note: MIMIX only considers changing an object’s auditing value when the data
group object entry is configured for system journal replication. MIMIX does not
change the object’s value for files that are configured for MIMIX Dynamic
Apply or legacy cooperative processing or for data areas and data queues that
are configured for user journal replication.
The configured value specified in data group entries can affect replication of some
journal entries generated when an object attribute changes. Specifically, the
configured value can affect replication of T-ZC journal entries for files and IFS objects
and T-YC entries for DLOs. Changes that generate other types of journal entries are
not affected by this parameter.
When MIMIX changes the audit level, the possible values have the following results:
• The default value, *CHANGE, ensures that all changes to the object by all users
are recorded in the system journal.
• The value *ALL ensures that all changes or read accesses to the object by all
users are recorded in the system journal. The journal entries generated by read
accesses to objects are not used for replication and their presence can adversely
affect replication performance.
• The value *NONE results in no entries recorded in the system journal when the
object is accessed or changed.
The values *CHANGE and *ALL result in replication of T-ZC and T-YC journal entries.
The value *NONE prevents replication of attribute and data changes for the identified
object or DLO because T-ZC and T-YC entries are not recorded in the system journal.
For files configured for MIMIX Dynamic Apply and any IFS objects, data areas, or
data queues configured for user journal replication, the value *NONE can improve
MIMIX performance by preventing unneeded entries from being written to the system
journal.
When a compare request includes an object with a configured object auditing value of
*NONE, any differences found for attributes that could generate T-ZC or T-YC journal
entries are reported as *EC (equal configuration).
You may also want to read the following:
• For more information about when MIMIX sets an object’s auditing value, see
“Managing object auditing” on page 60.
• For more information about manually setting values and examples, see “Setting
data group auditing values manually” on page 309.
• To see what attributes can be compared and replicated, see the following topics:
– “Attributes compared and expected results - #FILATR, #FILATRMBR audits”
on page 696
– “Attributes compared and expected results - #OBJATR audit” on page 701
– “Attributes compared and expected results - #DLOATR audit” on page 713.
– “Attributes compared and expected results - #IFSATR audit” on page 710
Identifying library-based objects for replication
MIMIX uses data group object entries to identify whether to process transactions for
library-based objects. Collectively, the object entries identify which library-based
objects can be replicated by a particular data group.
Each data group object entry identifies one or more library-based objects. An object
entry can specify either a specific or a generic name for the library and object. In
addition, each object entry also identifies the object types and extended object
attributes (for *FILE and *DEVD objects) to be selected, defines a configured object
auditing level for the identified objects, and indicates whether the identified objects
are to be included in or excluded from replication.
For most supported object types which can be identified by data group object entries,
only the system journal replication path is available. For a list of object types, see
“Supported object types for system journal replication” on page 635. This list includes
information about what can be specified for the extended attributes of *FILE objects.
A limited number of object types which use the system journal replication path have
unique configuration requirements. These are described in are described in
“Identifying spooled files for replication” on page 103 and “Replicating user profiles
and associated message queues” on page 104.
For detailed procedures, see “Configuring data group entries” on page 270.
Replication options for object types journaled to a user journal - For objects of
type *FILE, *DTAARA, and *DTAQ, MIMIX supports multiple replication methods. For
these object types, additional configuration data is evaluated when determining what
replication path to use for the identified objects.
For *FILE objects, the extended attribute and other configuration data are considered
when MIMIX determines what replication path to use for identified objects.
• For logical and physical files, MIMIX supports several methods of replication.
Each method varies in its efficiency, in its supported extended attributes, and in
additional configuration requirements. See “Identifying logical and physical files
for replication” on page 106 for additional details.
• For other extended attribute types, MIMIX supports only system journal
replication. Only data group object entries are required to identify these files for
replication.
For *FILE objects configured for replication through the system journal, MIMIX caches
extended file attribute information for a fixed set of *FILE objects. Also, the Omit
content (OMTDTA) parameter provides the ability to omit a subset of data-changing
operations from replication. For more information, see “Caching extended attributes of
*FILE objects” on page 369 and “Omitting T-ZC content from system journal
replication” on page 415.
For *DTAARA and *DTAQ object types, MIMIX supports replication using either
system journal or user journal replication processes. A configuration that uses the
user journal is also called an advanced journaling configuration. Additional
information, including configuration requirements are described in “Identifying data
areas and data queues for replication” on page 113.


How MIMIX uses object entries to evaluate journal entries for replication
The following information and example can help you determine whether the objects
you specify in data group object entries will be selected for replication. MIMIX
determines which replication process will be used only after it determines whether the
library-based object will be replicated.
When determining whether to process a journal entry for a library-based object,
MIMIX looks for a match between the object information in the journal entry and one
of the data group object entries. The library name is the first search element,
followed by the object type, the attribute (for files and device descriptions), and the
object name. The most significant match found (if any) is checked to determine whether to
include or exclude the journal entry in replication.
Table 6 shows how MIMIX checks a journal entry for a match with a data group object
entry. The columns are arranged to show the priority of the elements within the object
entry, with the most significant (library name) at left and the least significant (object
name) at right.

Table 6. Matching order for library-based object names.


Search Order Library Name Object Type Attribute1 Object Name
1 Exact Exact Exact Exact
2 Exact Exact Exact Generic*
3 Exact Exact Exact *ALL
4 Exact Exact *ALL Exact
5 Exact Exact *ALL Generic*
6 Exact Exact *ALL *ALL
7 Exact *ALL Exact Exact
8 Exact *ALL Exact Generic*
9 Exact *ALL Exact *ALL
10 Exact *ALL *ALL Exact
11 Exact *ALL *ALL Generic*
12 Exact *ALL *ALL *ALL
13 Generic* Exact Exact Exact
14 Generic* Exact Exact Generic*
15 Generic* Exact Exact *ALL
16 Generic* Exact *ALL Exact
17 Generic* Exact *ALL Generic*
18 Generic* Exact *ALL *ALL
19 Generic* *ALL Exact Exact
20 Generic* *ALL Exact Generic*
21 Generic* *ALL Exact *ALL
22 Generic* *ALL *ALL Exact
23 Generic* *ALL *ALL Generic*
24 Generic* *ALL *ALL *ALL
1. The extended object attribute is only checked for objects of type *FILE and *DEVD.

When configuring data group object entries, the flexibility of the generic support
allows a variety of include and exclude combinations for a given library or set of
libraries. However, generic name support can also cause unexpected results if it is not
well planned. Consider the search order shown in Table 6 when configuring data group
object entries to ensure that objects are not unexpectedly included in or excluded from
replication.
Example - For example, say that you have a data group configured with data
group object entries like those shown in Table 8. The journal entries MIMIX is
evaluating for replication are shown in Table 7.

Table 7. Sample journal transactions for objects in the system journal


Object Type Library Object
*PGM FINANCE BOOKKEEP
*FILE FINANCE ACCOUNTG
*DTAARA FINANCE BALANCE
*DTAARA FINANCE ACCOUNT1

A transaction is received from the system journal for program BOOKKEEP in library
FINANCE. MIMIX will replicate this object since it fits the criteria of the first data group
object entry shown in Table 8.
A transaction for file ACCOUNTG in library FINANCE would also be replicated since it
fits the third entry.
A transaction for data area BALANCE in library FINANCE would not be replicated
since it fits the second entry, an Exclude entry.

Table 8. Sample of data group object entries, arranged in order from most to least specific
Entry Source Library Object Type Object Name Attribute Process Type
1 Finance *PGM *ALL *ALL *INCLD
2 Finance *DTAARA *ALL *ALL *EXCLD
3 Finance *ALL acc* *ALL *INCLD

Likewise, a transaction for data area ACCOUNT1 in library FINANCE would not be
replicated. Although the transaction fits both the second and third entries shown in
Table 8, the second entry determines whether to replicate because it provides a more
significant match in the second criterion checked (object type). The second entry
provides an exact match for the library name, an exact match for the object type, and
an object name match to *ALL.
In order for MIMIX to process the data area ACCOUNT1, an additional data group
object entry with process type *INCLD could be added for object type of *DTAARA
with an exact name of ACCOUNT1 or a generic name ACC*.

Replication of implicitly defined parents of library-based objects


The values specified for System 1 library (LIB1) and System 1 object (OBJ1) in a data
group object entry explicitly identify one or more library-based objects for
consideration and implicitly identify the parent library for the specified object.
When evaluating journal transactions for objects identified by data group object
entries, object replication (QAUDJRN) does not process the implicitly identified
parent library of those objects. Replication of objects of type *LIB is determined
solely by the existence of data group object entries which specify the *LIB objects.

Identifying spooled files for replication


MIMIX supports spooled file replication on an output queue basis. When an output
queue (*OUTQ) is identified for replication by a data group object entry, its spooled
files are not automatically replicated when default values are used. Table 9 identifies
the values required for spooled file replication. When MIMIX processes an output
queue that is identified by an object entry with the appropriate settings, all spooled
files for the output queue (*OUTQ) are replicated by system journal replication
processes. The target job name associated with the replicated spooled file is QPRTJOB
if you are using standard policies.

Table 9. Data group object entry parameter values for spooled file replication

Parameter Value

Object type (OBJTYPE) *ALL or *OUTQ

Replicate spooled files (REPSPLF) *YES

It is important to consider which spooled files must be replicated and which should
not. Some output queues contain a large number of non-critical spooled files and
probably should not be replicated. Most likely, you want to limit the spooled files that
you replicate to mission-critical information. It may be useful to direct important
spooled files that should be replicated to specific output queues instead of defining a
large number of output queues for replication.
When an output queue is selected for replication and the data group object entry
specifies *YES for Replicate spooled files, MIMIX ensures that the values *SPLFDTA
and *PRTDTA are included in the system value for the security auditing level
(QAUDLVL). This causes the system to generate spooled file (T-SF) entries in the
system journal. When a spooled file is created, moved, deleted, or its attributes are
changed, the resulting entries in the system journal are processed by a MIMIX object
send job and are replicated.

Additional choices for spooled file replication


MIMIX provides additional options to customize your choices for spooled file
replication.
Keeping deleted spooled files: You can also specify to keep spooled files on the
target system after they have been deleted from the source system by using the Keep
deleted spooled files parameter on the data group definition. The parameter is also
available on commands to add and change data group object entries.
Options for spooled file status: You can specify additional options for processing
spooled files. The Spooled file options (SPLFOPT) parameter is only available on
commands to add and change data group object entries. The following values support
choosing how status of replicated spooled files is handled on the target system:
*NONE This is the shipped default value. Spooled files on the target system will
have the same status as on the source system.
*HLD All replicated spooled files are put on hold on the target system regardless
of their status on the source system.
*HLDONSAV All replicated spooled files that have a saved status on the source
system will be put on hold on the target system. Spooled files on the source
system which have other status values will have the same status on the target
system.
This parameter can be helpful if your environment includes programs which
automatically process spooled files on the target system. For example, if you have a
program that automatically prints spooled files, you may want to use one of these
values to control what is printed after replication when printer writers are active.
If you move a spooled file between output queues which have different configured
values for the SPLFOPT parameter, consider the following:
• Spooled files moved from an output queue configured with SPLFOPT(*NONE) to
an output queue configured with SPLFOPT(*HLD) are placed in a held state on
the target system.
• Spooled files moved from an output queue configured with SPLFOPT(*HLD) to an
output queue configured with SPLFOPT(*NONE) or SPLFOPT(*HLDONSAV)
remain in a held state on the target system until you take action to release them.
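Assuming simplified status strings, the effect of the SPLFOPT values on a replicated spooled file's target-side status can be summarized in a small decision function. This is an illustrative sketch, not MIMIX code; the function name and status strings are assumptions:

```python
def target_splf_status(source_status: str, splfopt: str) -> str:
    """Status a replicated spooled file receives on the target system,
    per the configured SPLFOPT value (illustrative sketch only)."""
    if splfopt == "*HLD":
        return "*HLD"                 # always held on the target
    if splfopt == "*HLDONSAV" and source_status == "*SAV":
        return "*HLD"                 # files with saved status are held
    return source_status              # *NONE (default): mirror the source status
```

For example, a ready (*RDY) spooled file replicated through an output queue configured with SPLFOPT(*HLDONSAV) keeps its ready status, while a saved (*SAV) file is placed on hold.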

Replicating user profiles and associated message queues


When user profile objects (*USRPRF) are identified by a data group object entry
which specifies *ALL or *USRPRF for the Object type parameter, MIMIX replicates the
objects using system journal replication processes.
Consider the following:
• For user profiles to be replicated, decide how their status will be set on the target
system. The replicated profiles can always use their status from the source
system, always be enabled, always be disabled, or be enabled for new profiles on
the target system and remain as is when existing profiles change. The value for
user profile status can be specified for the data group or overridden in individual
data group object entries. Also, depending on your choice, you may need to
change the configuration values following a switch so that users can sign on to the
new production system.
• The user profile password rules defined in system values which begin with the
characters QPWD are enforced on each system. Ideally, these system values
should be set to the same values on each system. If the values are more
restrictive on the target system than on the source system, replication failures can
occur for user profiles with replicated passwords.
When MIMIX replicates user profiles, the message queue (*MSGQ) objects
associated with the *USRPRF objects may also be created automatically on the target
system as a result of replication. If the *MSGQ objects are not also configured for
replication, the private authorities for the *MSGQ objects may not be the same
between the source and target systems. If it is necessary for the private authorities of
the *MSGQ objects to be identical between the source and target systems, it is
recommended that the *MSGQ objects associated with *USRPRF objects be configured
for replication.

For example, Table 10 shows the data group object entries required to replicate user
profiles beginning with the letter A and maintain identical private authorities on
associated message queues. In this example, the user profile ABC and its associated
message queue are excluded from replication.

Table 10. Sample data group object entries for maintaining private authorities of message
queues associated with user profiles
Entry Source Library Object Type Object Name Process Type
1 QSYS *USRPRF A* *INCLD
2 QUSRSYS *MSGQ A* *INCLD
3 QSYS *USRPRF ABC *EXCLD
4 QUSRSYS *MSGQ ABC *EXCLD
Identifying logical and physical files for replication
MIMIX supports multiple ways of replicating *FILE objects with extended attributes of
LF, PF-DTA, PF38-DTA, PF-SRC, PF38-SRC. MIMIX configuration data determines
the replication method used for these logical and physical files. The following
configurations are possible:
• MIMIX Dynamic Apply - MIMIX Dynamic Apply is strongly recommended. In this
configuration, logical files and physical files (source and data) are replicated
primarily through the user (database) journal. This configuration is the most
efficient way to replicate LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC files. In
this configuration, files are identified by data group object entries and file entries.
• Legacy cooperative processing - Legacy cooperative processing supports only
data files (PF-DTA and PF38-DTA). It does not support source physical files or
logical files. In legacy cooperative processing, record data and member data
operations are replicated through user journal processes, while all other file
transactions such as creates, moves, renames, and deletes are replicated
through system journal processes. The database processes can use either
remote journaling or MIMIX source-send processes, making legacy cooperative
processing the recommended choice for physical data files when the remote
journaling environment required by MIMIX Dynamic Apply is not possible. In this
configuration, files are identified by data group object entries and file entries.
• User journal (database) only configurations - Environments that do not meet
MIMIX Dynamic Apply requirements but which have data group definitions that
specify TYPE(*DB) can only replicate data changes to physical files. These
configurations may not be able to replicate other operations such as creates,
restores, moves, renames, and some copy operations. In this configuration, files
are identified by data group file entries.
• System journal (object) only configurations - Data group definitions which
specify TYPE(*OBJ) are less efficient at processing logical and physical files. The
entire member is updated with each replicated transaction. Members must be
closed in order for replication to occur. In this configuration, files are identified by
data group object entries.
You should be aware of common characteristics of replicating library-based objects,
such as when the configured object auditing value is used and how MIMIX interprets
data group entries to identify objects eligible for replication. For this information, see
“Configured object auditing value for data group entries” on page 98 and “How MIMIX
uses object entries to evaluate journal entries for replication” on page 101.
Some advanced techniques may require specific configurations. See “Configuring
advanced replication techniques” on page 383 for additional information.
For detailed procedures, see “Creating data group object entries” on page 271.

Considerations for LF and PF files


Newly created data groups are automatically configured to use MIMIX Dynamic Apply
when its requirements and restrictions are met and shipped command defaults are
used. With this configuration, logical and physical files are processed primarily from
the user journal.
Cooperative journal - The value specified for the Cooperative journal (COOPJRN)
parameter in the data group definition is critical to determining how files are
cooperatively processed. When creating a new data group, you can explicitly specify
a value or you can allow MIMIX to automatically change the default value (*DFT) to
either *USRJRN or *SYSJRN based on whether operating system and configuration
requirements for MIMIX Dynamic Apply are met. When requirements are met, MIMIX
changes the value *DFT to *USRJRN. When the MIMIX Dynamic Apply requirements
are not met, MIMIX changes *DFT to *SYSJRN.
Note: Data groups set to *SYSJRN will retain the value until you take action as
described in “Converting to MIMIX Dynamic Apply” on page 148.
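The way MIMIX resolves the *DFT value when a data group is created can be pictured as a simple decision, sketched below. The function name and boolean parameter are hypothetical; the actual checks cover operating system and configuration requirements for MIMIX Dynamic Apply.

```python
def resolve_coopjrn(coopjrn: str, meets_dynamic_apply_reqs: bool) -> str:
    """Resolve the Cooperative journal (COOPJRN) value at data group
    creation (illustrative sketch; not MIMIX code)."""
    if coopjrn != "*DFT":
        return coopjrn                 # an explicitly specified value is kept
    # *DFT resolves based on whether MIMIX Dynamic Apply requirements are met
    return "*USRJRN" if meets_dynamic_apply_reqs else "*SYSJRN"
```

A data group that resolves to *SYSJRN retains that value until it is explicitly converted, as noted above.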

When a data group definition meets the requirements for MIMIX Dynamic Apply, any
logical files and physical (source and data) files properly identified for cooperative
processing will be processed via MIMIX Dynamic Apply unless a known restriction
prevents it.
When a data group definition does not meet the requirements for MIMIX Dynamic
Apply but still meets legacy cooperative processing requirements, any PF-DTA or
PF38-DTA files properly configured for cooperative processing will be replicated using
legacy cooperative processing. All other types of files are processed using system
journal replication.

Logical file considerations - Consider the following for logical files.


• Logical files are replicated through the user journal when MIMIX Dynamic Apply
requirements are met. Otherwise, they are replicated through the system journal.
• It is strongly recommended that logical files reside in the same data group as all of
their associated physical files.
Physical file considerations - Consider the following for physical files
• Physical files (source and data) are replicated through the user journal when
MIMIX Dynamic Apply requirements are met. Otherwise, data files are replicated
using legacy cooperative processing if those requirements are met, and source
files are replicated through the system journal.
• Name mapping is not supported for physical files with associated logical files.
• If a data group definition specifies TYPE(*DB) and the configuration meets other
MIMIX Dynamic Apply requirements, source files need to be identified by both
data group object entries and data group file entries.
• If a data group is configured for only user journal replication (TYPE is *DB) and
does not meet other configuration requirements for MIMIX Dynamic Apply, source
files should be identified by only data group file entries.
• If a data group is configured for only system journal replication (TYPE is *OBJ), any
source files should be identified by only data group object entries. Files identified by
data group object entries configured for cooperative processing will be replicated
through the system journal and should not have any corresponding data group file
entries.
• Physical files with referential constraints require a field in another physical file to
be valid. All physical files in a referential constraint structure must be in the same
database apply session. See “Requirements and limitations of MIMIX Dynamic
Apply” on page 111 and “Requirements and limitations of legacy cooperative
processing” on page 112 for additional information. For more information about
load balancing apply sessions, see “Database apply session balancing” on
page 90.
Commitment control - This database technique allows multiple updates to one or
more files to be considered a single transaction. When used, commitment control
maintains database integrity by not exposing a part of a database transaction until the
whole transaction completes. This ensures that there are no partial updates when the
process is interrupted prior to the completion of the transaction. This technique is also
useful in the event that a partially updated transaction must be removed, or rolled
back, from the files or when updates identified as erroneous need to be removed.
MIMIX provides two modes of processing transactions that are part of a commit cycle.
The default mode, delayed commit, processes the transactions but does not apply
them until the open commit cycle is complete. You can also use immediate mode,
where the transactions are applied immediately, before the commit cycle completes.
Changing commit mode is considered an advanced technique. The benefits and
limitations of each mode are described in “Immediately applying committed
transactions” on page 367.
If your application dynamically creates database files that are subsequently used in a
commitment control environment, use MIMIX Dynamic Apply for replication.
Without MIMIX Dynamic Apply, replication of the create operation may fail if a commit
cycle is open when MIMIX tries to save the file. The save operation will be delayed
and may fail if the file being saved has uncommitted transactions.
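The delayed commit mode described above can be pictured as buffering each open commit cycle until it completes. The following Python sketch is purely illustrative (the class and method names are hypothetical, not MIMIX internals): entries are held per commit cycle and applied on the target only when the cycle commits, while a rollback discards them so no partial transaction is ever exposed.

```python
from collections import defaultdict

class DelayedCommitApplier:
    """Sketch of delayed-commit processing: journal entries within a
    commit cycle are buffered and applied only when the cycle commits."""

    def __init__(self, apply_fn):
        self.apply_fn = apply_fn
        self.pending = defaultdict(list)   # commit cycle id -> buffered entries

    def entry(self, cycle_id, entry):
        self.pending[cycle_id].append(entry)   # hold until the cycle resolves

    def commit(self, cycle_id):
        for e in self.pending.pop(cycle_id, []):
            self.apply_fn(e)               # apply the whole cycle at once

    def rollback(self, cycle_id):
        self.pending.pop(cycle_id, None)   # nothing was applied on the target
```

In immediate mode, by contrast, each entry would be passed to the apply function as it arrives, before the commit cycle completes.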

Files with LOBs


Large objects (LOBs) in files that are configured for either MIMIX Dynamic Apply or
legacy cooperative processing are automatically replicated.
LOBs can greatly increase the amount of data being replicated. As a result, you may
see some degradation in your replication activity. The amount of degradation you see
is proportionate to the amount of journal entries with LOBs that are applied per hour.
This is also true during switch processing if you are using remote journaling and have
unconfirmed entries with LOB data.
Since the volume of data to be replicated can be very large, you should consider
using the minimized journal entry data function along with LOB replication. IBM
support for minimized journal entry data can be extremely helpful when database
records contain static, very large objects. If minimized journal entry data is enabled,
journal entries for database files containing unchanged LOB data may be complete
and therefore processed like any other complete journal entry. This can significantly
improve performance, throughput, and storage requirements. If minimized journal
entry is used with files containing LOBs, keyed replication is not supported. For more
information, see “Minimized journal entry data” on page 359.
User exit programs may be affected when journaled LOB data is added to an existing
data group. Non-minimized LOB data produces incomplete entries. For incomplete
journal entries, two or more entries with duplicate journal sequence numbers and
journal codes and types will be provided to the user exit program when the data for
the incomplete entry is retrieved and segmented. Programs need to correctly handle
these duplicate entries representing the single, original journal entry.
You should also be aware of the following restrictions:
• When using the Compare File Data (CMPFILDTA) command to compare and
repair files with LOBs, you must specify a data group when you specify a value
other than *NONE for Repair on system (REPAIR).
• Copy Active File (CPYACTF) and Reorganize Active File (RGZACTF) do not work
against database files with LOB fields.
• There is no collision detection for LOB data. Most collision detection classes
compare the journal entries with the content of the record on the target system.
• Journaled changes cannot be removed for files with LOBs that are replicated by a
data group that does not use remote journaling (RJLNK(*NO)). In this scenario,
the F-RC entry generated by the IBM command Remove Journaled Changes
(RMVJRNCHG) cannot be applied on the target system.

Configuration requirements for LF and PF files


MIMIX Dynamic Apply and legacy cooperative processing have unique requirements
for data group definitions as well as many common requirements for data group object
entries and file entries, as indicated in Table 11. In both configurations, you must
have:
• A data group definition which specifies the required values.
• One or more data group object entries that specify the required values. These
entries identify the items within the name space for replication. You may need to
create additional entries to achieve the desired results, including entries which
specify a Process type of *EXCLD.
• The identified existing objects must be journaled to the journal defined for the data
group.
• Data group file entries for the items identified by data group object entries.
Processing cannot occur without these corresponding data group file entries.
Table 11. Key configuration values required for MIMIX Dynamic Apply and legacy cooperative processing

Critical Parameters | MIMIX Dynamic Apply Required Values | Legacy Cooperative Processing Required Values | Configuration Notes

Data Group Definition

Data group type (TYPE) | *ALL or *DB | *ALL | See “Requirements and limitations of MIMIX Dynamic Apply” on page 111.
Use remote journal link (RJLNK) | *YES | any value |
Cooperative journal (COOPJRN) | *DFT or *USRJRN | *DFT or *SYSJRN | *DFT is the shipped default.
File and tracking ent. opts (FEOPT), Replication type | *POSITION | any value | See “Requirements and limitations of MIMIX Dynamic Apply” on page 111.

Data Group Object Entries

Object type (OBJTYPE) | *ALL or *FILE | *ALL or *FILE |
Attribute (OBJATR) | *ALL or one of the following: LF, LF38, PF-DTA, PF-SRC, PF38-DTA, PF38-SRC | *ALL, PF-DTA, or PF38-DTA |
Cooperate with database (COOPDB) | *YES | *YES | Corresponding data group file entries are required.
Cooperating object types (COOPTYPE) | *FILE | *FILE |
File and tracking ent. opts (FEOPT), Replication type | *POSITION | any value | See “Requirements and limitations of MIMIX Dynamic Apply” on page 111.

Corresponding data group file entries - Both MIMIX Dynamic Apply and legacy
cooperative processing require that existing files identified by a data group object
entry which specifies *YES for the Cooperate with DB (COOPDB) parameter must
also be identified by data group file entries.
When a file is identified by both a data group object entry and a data group file entry,
the following are also required:
• The object entry must enable the cooperative processing of files by specifying
COOPDB(*YES) and COOPTYPE(*FILE).
• If name mapping is used between systems, the data group object entry and file
entry must have the same name mapping defined.
• If the data group object entry and file entry specify different values for the File and
tracking ent. opts (FEOPT) parameter, the values specified in the data group file
entry take precedence.
• Files defined by data group file entries must have journaling started and must be
synchronized. If journaling is not started, MIMIX cannot replicate activity for the
file.
Typically, data group object entries are created during initial configuration and are
then used as the source for loading the data group file entries. The #DGFE audit can
be used to determine whether corresponding data group file entries exist for the files
identified by data group object entries.

Requirements and limitations of MIMIX Dynamic Apply


MIMIX Dynamic Apply requires that user journal replication be configured to use
remote journaling. Specific data group definition and data group entry requirements
are listed in Table 11.
MIMIX Dynamic Apply configurations have the following limitations.
Files in library - It is recommended that files within a single library be replicated
using the same user journal.
Data group file entries for members - Data group file entries (DGFE) for specific
member names are not supported unless they are created by MIMIX. MIMIX may
create these for error hold processing.
Name mapping - MIMIX Dynamic Apply configurations support name mapping at the
library level only. Entries with object name mapping are not supported. For example,
MYLIB/MYOBJ mapped to MYLIB/OTHEROBJ is not supported. If you require object
name mapping, it is supported in legacy cooperative processing configurations when
there are no associated logical files.
TYPE(*DB) data groups - MIMIX Dynamic Apply configurations that specify
TYPE(*DB) in the data group definition will not be able to replicate the following
actions:
• Files created using CPYF CRTFILE(*YES) on OS V5R3 into a library configured
for replication
• Files restored into a source library configured for replication
• Files moved or renamed from a non-replicated library into a replicated library
• Files created which are not otherwise journaled upon creation into a library
configured for replication
Files created by these actions can be added to the MIMIX configuration by running
the #DGFE audit. The audit recovery will synchronize the file as part of adding the file
entry to the configuration. In data groups that specify TYPE(*ALL), the above actions
are fully supported.
Referential constraints - The following restrictions apply:
• If using referential constraints with *CASCADE or *SETNULL actions you must
specify *YES for the Journal on target (JRNTGT) parameter in the data group
definition.
• Physical files with referential constraints require a field in another physical file to
be valid. All physical files in a referential constraint structure must be in the same
database apply session. If a particular preferred apply session has been specified
in file entry options (FEOPT), MIMIX may ignore the specification in order to
satisfy this restriction.

Requirements and limitations of legacy cooperative processing


Legacy cooperative processing requires that data groups be configured for both
database (user journal) and object (system journal) replication. While remote
journaling is recommended, MIMIX source-send processing for database replication
is also supported. Specific data group definition and data group entry requirements
are listed in Table 11.
Legacy cooperative processing configurations have the following limitations.
Supported extended attributes - Legacy cooperative processing supports only data
files (PF-DTA and PF38-DTA).
When a *FILE object is configured for legacy cooperative processing, only file and
member attribute changes identified by T-ZC journal entries with a subclass of
7=Change are logged and replicated through system journal replication processes. All
member and data changes are logged and replicated through user journal replication
processes.
File entry options - If a file is moved or renamed and both names are defined by a
data group file entry, the file entry options must be the same in both data group file
entries.
Referential constraints - Physical files with referential constraints require a field in
another physical file to be valid. All physical files in a referential constraint structure
must be in the same apply session. If this is not possible, contact CustomerCare.


Identifying data areas and data queues for replication


MIMIX uses data group object entries to determine whether to process transactions
for data area (*DTAARA) and data queue (*DTAQ) object types. Object entries can be
configured so that these object types are replicated from journal entries recorded in
a user journal (default) or in the system journal (optional).
While user journal replication, also called advanced journaling, has significant
advantages, you must decide whether it is appropriate for your environment. For more
information, see “Planning for journaled IFS objects, data areas, and data queues” on
page 87.
For detailed procedures, see “Configuring data group entries” on page 270.

Configuration requirements - data areas and data queues


For any data group object entries you create for data areas or data queues, consider
the following:
• You must have at least one data group object entry which specifies a Process
type of *INCLD. You may need to create additional entries to achieve the desired
results. This may include entries which specify a Process type of *EXCLD.
• When specifying objects in data group object entries, specify only the objects that
need to be replicated. Specifying *ALL or a generic name for the System 1 object
(OBJ1) parameter will select multiple objects within the library specified for
System 1 library (LIB1).
• When you create data group object entries, you can specify an object auditing
value within the configuration. The configured object auditing value affects how
MIMIX handles changes to attributes of library-based objects. It is particularly
important for, but not limited to, objects configured for system journal replication.
For objects configured for user journal replication, the configured value can affect
MIMIX performance. For detailed information, see “Configured object auditing
value for data group entries” on page 98.
Additional requirements for user journal replication - The following additional
requirements must be met before data areas or data queues identified by data group
object entries can be replicated with user journal processes.
• The data group definition and data group object entries must specify the values
indicated in Table 12 for critical parameters.
• Object tracking entries must exist for the objects identified by properly configured
object entries. Typically these are created automatically when the data group is
started.
• Journaling must be started on both the source and target systems for the objects
identified by object tracking entries.

Table 12. Critical configuration parameters for replicating *DTAARA and *DTAQ objects
from a user journal

Critical Parameters                    Required Configuration   Notes
                                       Values

Data Group Definition
  Data group type (TYPE)               *ALL

Data Group Object Entry
  Cooperate with database (COOPDB)     *YES
  Cooperating object types             *DFT, or                 The value *DFT includes
  (COOPTYPE)                           *DTAARA and *DTAQ        *FILE, *DTAARA, and *DTAQ.

Additionally, if any of the following apply, see “Planning for journaled IFS objects, data
areas, and data queues” on page 87 for additional details:
• Converting existing configurations - When converting an existing data group to
use or add advanced journaling, you must consider whether journals should be
shared and whether data area or data queue objects should be replicated in a
data group that also replicates database files.
• Serialized transactions - If you need to serialize transactions for database files
and data area or data queue objects replicated from a user journal, you may need
to adjust the configuration for the replicated files.
• Apply session load balancing - One database apply session, session A, is used
for all data area and data queue objects that are replicated from a user journal. Other
replication activity can use this apply session, and may cause it to become
overloaded. You may need to adjust the configuration accordingly.
• User exit programs - If you use user exit programs that process user journal
entries, you may need to modify your programs.

Restrictions - user journal replication of data areas and data queues


For operating systems V5R4 and above, changes to data area and data queue
content, as well as changes to structure (such as moves and renames) and number
(such as creates and deletes), are recognized and supported through user journal
replication.

Be aware of the following restrictions when replicating data areas and data queues
using MIMIX user journal replication processes:
• MIMIX does not support before-images for data updates to data areas, and
cannot perform data integrity checks on the target system to ensure that data
being replaced on the target system is an exact match to the data replaced on the
source system. Furthermore, MIMIX does not provide a mechanism to prevent
users or applications from updating replicated data areas on the target system
accidentally. To guarantee the data integrity of replicated data areas between the
source and target systems, you should run audits on a regular basis.
• The apply of data area and data queue objects is restricted to a single database
apply job (DBAPYA). If a data group has too much replication activity, this job may
fall behind in the processing of journal entries. If this occurs, you should load-level
the apply sessions by moving some or all of the database files to another
database apply job.
• Pre-existing data areas and data queues to be selected for replication must have
journaling started on both the source and target systems before the data group is
started.
• The ability to replicate Distributed Data Management (DDM) data areas and data
queues is not supported. If you need to replicate DDM data areas and data
queues, use standard system journal replication methods.

• The subset of E and Q journal code entry types supported for user journal
replication are listed in “Journal codes and entry types for journaled data areas
and data queues” on page 733.

Identifying IFS objects for replication
MIMIX uses data group IFS entries to determine whether to process transactions for
objects in the integrated file system (IFS), and what replication process is used. IFS
entries can be configured so that the identified objects can be replicated from journal
entries recorded in the system journal (default) or in a user journal (optional).
The most efficient way to convert IFS entries for an enabled data group from system
journal (object) replication to user journal (database) replication is using the Convert
Data Group IFS Entries (CVTDGIFSE) command. For more information, see
“Checklist: Converting IFS entries to user journaling using the CVTDGIFSE
command” on page 154.
One of the most important decisions in planning for MIMIX is determining which IFS
objects you need to replicate. Most likely, you want to limit the IFS objects you
replicate to mission-critical objects and the directories that contain them.
User journal replication, also called advanced journaling, is well suited to the dynamic
environments of IFS objects. While user journal replication has significant
advantages, you must decide whether it is appropriate for your environment. For more
information, see “Planning for journaled IFS objects, data areas, and data queues” on
page 87.
For detailed procedures, see “Creating data group IFS entries” on page 284.
Objects configured for user journal replication may have create, restore, delete,
move, and rename operations. Differences in implementation details are described in
“Processing variations for common operations” on page 129.

Supported IFS file systems and object types


The IFS objects to be replicated must be in the Root (‘/’) or QOpenSys file systems.
The following object types are supported:
• Directories (*DIR)
• Stream Files (*STMF)
• Stream files can have multiple hard links. The use of multiple hard links is only
supported by user journal replication. For more information, see “Support for
multiple hard links” on page 119.
• Symbolic Links (*SYMLNK)
Table 13 identifies the IFS file systems that are not supported by MIMIX and cannot
be specified for either the System 1 object prompt or the System 2 object prompt in
the Add Data Group IFS Entry (ADDDGIFSE) command.

Table 13. IFS file systems that are not supported by MIMIX

/QDLS /QLANSrv /QOPT

/QFileSvr.400 /QNetWare /QSYS.LIB

/QFPNWSSTG /QNTC /QSR


Journaling is not supported for files in network server storage spaces (NWSS), which
are used as virtual disks by IXS and IXA technology. Therefore, IFS objects
configured to be replicated from a user journal must be in the Root (‘/’) or QOpenSys
file systems.
Refer to the IBM book OS/400 Integrated File System Introduction for more
information about IFS.

Considerations when identifying IFS objects


The following considerations for IFS objects apply regardless of whether replication
occurs through the system journal or user journal.

MIMIX processing order for data group IFS entries


Data group IFS entries are processed using the unicode character set. Only the name
specified in the IFS entries is evaluated. The entry with the most specific match is
used. This is also true when multiple entries with generic names match the object. For
example, IFS entries with the name /A* and /AB* both match the file name /ABACUS,
but the entry used to determine replication is /AB* because it is the most specific
match.
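The most-specific-match rule can be sketched as follows. This is a Python illustration of the behavior described above; `most_specific_entry` is a hypothetical helper for this example, not a MIMIX interface.

```python
def most_specific_entry(path, entries):
    """Return the configured entry name that best matches 'path':
    an exact name wins outright; among generic names (ending in *),
    the longest matching prefix wins. Illustrative helper only."""
    best = None
    for entry in entries:
        if entry.endswith("*"):
            prefix = entry[:-1]
            if path.startswith(prefix) and (best is None or len(entry) > len(best)):
                best = entry
        elif entry == path:
            return entry  # an exact match is always the most specific
    return best

# /AB* is chosen over /A* because it is the more specific match
print(most_specific_entry("/ABACUS", ["/A*", "/AB*"]))  # -> /AB*
```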

Long IFS path names


MIMIX currently replicates IFS path names of up to 512 characters. However, any MIMIX
command that takes an IFS path name as input may be susceptible to a 506
character limit. This character limit may be reduced even further if the IFS path name
contains embedded apostrophes (') or trailing blanks. In this case, the supported IFS
path name length is reduced by four characters for every apostrophe the path name
contains.
For information about IFS path name naming conventions, refer to the IBM book,
Integrated File System.
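The reduction described above can be expressed as simple arithmetic. The helper below is illustrative only, showing the stated rule (four characters lost per embedded apostrophe, starting from the 506-character command limit); it is not a MIMIX interface.

```python
def effective_path_limit(path, base_limit=506):
    """Estimate the usable length for an IFS path name that is subject
    to the 506-character command limit: the limit shrinks by four
    characters for every embedded apostrophe in the path name."""
    return base_limit - 4 * path.count("'")

# One apostrophe reduces the supported length from 506 to 502
print(effective_path_limit("/dir/o'brien/file"))  # -> 502
```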

Upper and lower case IFS object names


When you create data group IFS entries, be aware of the following information about
character case sensitivity for specifying IFS object names.
• The root file system on the System i is generally not case sensitive. Character
case is preserved when creating objects, but otherwise character case is ignored.
For example, you can create /AbCd or /ABCD, but not both. You can refer to the
object by any mix of character case, such as /AbCd, /abcd, or /ABCD.
• The QOpenSys file system on the System i is generally case sensitive. Except for
"QOpenSys" in a path name, all characters in a path name are case sensitive. For
example, you can create both /QOpenSys/AbCd and /QOpenSys/ABCD. You
must specify the correct character case when referring to an object.
During replication, MIMIX preserves the character case of IFS object names. For
example, the creation of /AbCd on the source system will be replicated as /AbCd on
the target system.

Replication will not alter the character case of objects that already exist on the target
system (unless the object is deleted and recreated). In the root file system, /AbCd and
/ABCD are equivalent names. If /ABCD exists as such on the target system, changes
to /AbCd will be replicated to /ABCD, but the object name will not be changed to
/AbCd on the target system.
When character case is not a concern (root file system), MIMIX may present path
names as all upper case or all lower case. For example, the WRKDGACTE display
shows all lower case, while the WRKDGIFSE display shows all upper case. Names
can be entered in either case. For example, subsetting WRKDGACTE by /AbCd and
/ABCD will produce the same result.
When character case does matter (QOpenSys file system), MIMIX presents path
names in the appropriate case. For example, the WRKDGACTE display and the
WRKDGIFSE display would show /QOpenSys/AbCd, if that is the actual object path.
Names must be entered in the appropriate character case. For example, subsetting
the WRKDGACTE display by /QOpenSys/ABCD will not find /QOpenSys/AbCd.
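The case rules above can be sketched as a comparison function. This is an illustration of the described behavior, not how the system or MIMIX actually resolves names, and it handles only the simple root and QOpenSys cases.

```python
def ifs_names_equal(path_a, path_b):
    """Compare two IFS path names under the case rules described above:
    QOpenSys paths are case sensitive; root paths preserve character
    case when created but ignore it when compared. Illustrative only."""
    if path_a.lower().startswith("/qopensys"):
        return path_a == path_b                # case sensitive
    return path_a.lower() == path_b.lower()    # case preserved but ignored

print(ifs_names_equal("/AbCd", "/ABCD"))                    # True
print(ifs_names_equal("/QOpenSys/AbCd", "/QOpenSys/ABCD"))  # False
```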

Replication of implicitly defined IFS parent objects


The value specified for System 1 object (OBJ1) in a data group IFS entry explicitly
identifies one or more IFS objects for consideration and implicitly identifies the parent
objects within the path to the specified object. Table 14 shows an example of implicitly
and explicitly identified objects.
When evaluating journal transactions for objects identified by data group IFS entries,
object replication (QAUDJRN) will process the implicitly identified parent objects as
follows:
• Create operations for an implicitly defined parent object are replicated when an
explicitly defined object below that parent is created.
• Move/rename operations of an implicitly defined parent object that is within the
configured namespace or that would cause the parent object to be moved into the
configured namespace are replicated.
• Move/rename operations that would cause an implicitly defined parent object to
no longer be part of (moved out of) the configured namespace are not replicated.
• Delete operations for an implicitly defined parent object are not replicated.

Table 14. Example of a data group IFS entry with implicit and explicit objects

Data Group IFS Entry                Objects within    Description
                                    Namespace

/A/B/C/D/*                          /A                Implicitly defined parent object
Only the objects identified by      /A/B              Implicitly defined parent object
what follows the right-most /       /A/B/C            Implicitly defined parent object
are explicitly defined.             /A/B/C/D          Implicitly defined parent object
                                    /A/B/C/D/E        Explicitly defined object
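The derivation of implicitly identified parent objects shown in Table 14 can be sketched as follows. `implicit_parents` is a hypothetical helper for this example, assuming a generic System 1 object value such as /A/B/C/D/*; it is not a MIMIX interface.

```python
import posixpath

def implicit_parents(obj1):
    """List the parent directories implicitly identified by a data
    group IFS entry's System 1 object (OBJ1) value. Everything up to
    the right-most / is an implicitly defined parent; what follows
    it is explicitly defined. Illustrative helper only."""
    parent = posixpath.dirname(obj1)   # drop the explicit element
    parents = []
    while parent not in ("/", ""):
        parents.insert(0, parent)
        parent = posixpath.dirname(parent)
    return parents

print(implicit_parents("/A/B/C/D/*"))
# -> ['/A', '/A/B', '/A/B/C', '/A/B/C/D']
```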


Configured object auditing value for IFS objects


When you create data group IFS entries, you can specify an object auditing value
within the configuration. The configured object auditing value affects how MIMIX
handles changes to attributes of IFS objects. It is particularly important for, but not
limited to, objects configured for system journal replication. For IFS objects configured
for user journal replication, the configured value can affect MIMIX performance. For
detailed information, see “Configured object auditing value for data group entries” on
page 98.

Support for multiple hard links


A hard link associates a name (or path) with a stream file. Each stream file has a
single file identifier (FID) and typically has only one name. Additional names can be
associated with a stream file by creating multiple hard links to that stream file. This
results in multiple names for the same FID and stream file. When changes are made
for one name, the changes will also be available when the stream file is viewed by its
other names. Since the FID, owner, authorities, journaling attributes, and permissions
are associated with a stream file, these attributes will be identical when viewed for any
one of the multiple hard links for a stream file. The FID and stream file exist until all
hard links that reference them are removed.
For journaled IFS objects, MIMIX supports the replication of multiple hard links. The
replication of multiple hard links requires IFS objects to be configured for user journal
replication. The following IBM commands and APIs are used to create multiple hard
links:
• CL Command: ADDLNK LNKTYPE(*HARD)
• Shell Command: ln (without the -s option )
• QlgLink() API
• link() API
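The one-FID, many-names relationship can be demonstrated with Python's os.link, which corresponds to the link() API listed above. This is a minimal sketch run on a local file system, where the inode plays the role of the FID; it is not specific to MIMIX or IBM i.

```python
import os
import tempfile

# Create a stream file, then add a second hard link to it. Both names
# share one file identifier (the inode here, the FID on IBM i), and
# the data remains until the last hard link is removed.
d = tempfile.mkdtemp()
original = os.path.join(d, "data.txt")
alias = os.path.join(d, "alias.txt")

with open(original, "w") as f:
    f.write("hello")

os.link(original, alias)  # comparable to ADDLNK LNKTYPE(*HARD) or ln without -s

print(os.stat(original).st_ino == os.stat(alias).st_ino)  # True: one FID, two names

os.remove(original)        # removing one name does not remove the data
with open(alias) as f:
    print(f.read())        # the stream file is still reachable by its other name
```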

Configuration requirements - IFS objects


For any data group IFS entry you create, consider the following:
• Create, delete, rename, and move operations should not be performed for the
highest level directory being replicated. Instead, the highest level directory being
replicated should remain static to ensure move and rename operations for objects
below the directory are properly journaled. Objects are journaled on behalf of the
parent directory.
• You must have at least one data group IFS entry which specifies a Process type
of *INCLD. You may need to create additional entries to achieve the desired
results. This may include entries which specify a Process type of *EXCLD.
• When specifying IFS objects in data group IFS entries, specify only the IFS
objects that need to be replicated. The System 1 object (OBJ1) parameter selects
all IFS objects within the path specified.
• Consider whether replication of implicitly defined parent objects described in
“Replication of implicitly defined IFS parent objects” on page 118 addresses your
replication needs.
• You can specify an object auditing value within the configuration. For details, see
“Configured object auditing value for data group entries” on page 98.
Additional requirements for user journal replication - The following additional
requirements must be met before IFS objects identified by data group IFS entries can
be replicated with user journal processes.
• IFS tracking entries must exist for the objects identified by properly configured IFS
entries. Typically these are created automatically when the data group is started.
• Journaling must be started on both the source and target systems for the objects
identified by IFS tracking entries.

Table 15. Critical configuration parameters for replicating IFS objects from a user journal

Critical Parameters                    Required Configuration   Notes
                                       Values

Data Group Definition
  Data group type (TYPE)               *ALL

Data Group IFS Entry
  Cooperate with database (COOPDB)     *YES                     The default, *NO, results in
                                                                system journal replication.

Additionally, see “Planning for journaled IFS objects, data areas, and data queues” on
page 87 for additional details if any of the following apply:
• Converting existing configurations - When converting an existing data group to
use or add advanced journaling, you must consider whether journals should be
shared and whether IFS objects should be replicated in a data group that also
replicates database files.
• Serialized transactions - If you need to serialize transactions for database files
and IFS objects replicated from a user journal, you may need to adjust the
configuration for the replicated files.
• Apply session load balancing - One database apply session, session A, is used
for all IFS objects that are replicated from a user journal. Other replication activity
can use this apply session, and may cause it to become overloaded. You may
need to adjust the configuration accordingly.
• User exit programs - If you use user exit programs that process user journal
entries, you may need to modify your programs.

Restrictions - user journal replication of IFS objects

When considering replicating IFS objects using MIMIX user journal replication
processes, be aware of the following restrictions:
• The apply of IFS objects is restricted to a single database apply job (DBAPYA). If
a data group has too much replication activity, this job may fall behind in the
processing of journal entries. If this occurs, you should load-level the apply
sessions by moving some or all of the database files to another database apply
job.
• The ability to prevent unauthorized updates from occurring on the target system
by configuring the “Lock member during apply” file entry option (FEOPT) is not
supported when user journal replication is configured.
• The ability to use the Remove Journaled Changes (RMVJRNCHG) command for
removing journaled changes for IFS tracking entries is not supported.
• It is recommended that option 14 (Remove related) on the Work with Data Group
Activity (WRKDGACT) display not be used for failed activity entries representing
actions against cooperatively processed IFS objects. Because this option does
not remove the associated tracking entries, orphan tracking entries can
accumulate on the system.

• It is recommended that option 4 (Remove) on the Work with DG IFS Trk. Entries
display only be used under the guidance of your Certified MIMIX Consultant as
replication will be affected.
• When moving or renaming a directory within the namespace of IFS objects
configured for user journal replication, MIMIX will move or rename the directory
without regard for include, exclude, and name mapping characteristics of items
beneath the directory being moved or renamed. This applies in both their 'from' or
'to' path names. If non-corresponding include, exclude, or name mapping
configuration entries exist for the 'from' and 'to' locations, the result may be
excess, missing, or incorrectly named objects on the target. Any differences will
be detected during the next full IFS attribute audit.
• Most B journal code entry types are supported for user journal replication and are
listed in “Journal codes and entry types for journaled IFS objects” on page 732.

121
Identifying DLOs for replication
MIMIX uses data group DLO entries to determine whether to process system journal
transactions for document library objects (DLOs). Each DLO entry for a data group
includes a folder path, document name, owner, an object auditing level, and an
include or exclude indicator. In addition to specific names, MIMIX supports generic
names for DLOs. In a data group DLO entry, the folder path and document can be
generic or *ALL.
When you create data group DLO entries, you can specify an object auditing value
within the configuration. The configured object auditing value affects how MIMIX
handles changes to attributes of DLOs. For detailed information, see “Configured
object auditing value for data group entries” on page 98.
For detailed procedures, see “Creating data group DLO entries” on page 297.

How MIMIX uses DLO entries to evaluate journal entries for replication
How items are specified within a data group DLO entry determines whether MIMIX selects or omits
them from processing. This information can help you understand what is included or
omitted.
When determining whether to process a journal entry for a DLO, MIMIX looks for a
match between the DLO information in the journal entry and one of the data group
DLO entries. The folder path is the most significant search element, followed by the
document name, then the owner. The most significant match found (if any) is checked
to determine whether to process the entry.
An exact or generic folder path name in a data group DLO entry applies to folder
paths that match the entry as well as to any unnamed child folders of that path which
are not covered by a more explicit entry. For example, a data group DLO entry with a
folder path of “ACCOUNT” would also apply to a transaction for a document in folder
path ACCOUNT/JANUARY. If a second data group DLO entry with a folder path of
“ACCOUNT/J*” were added, it would take precedence because it is more specific.
For a folder path with multiple elements (for example, A/B/C/D), the exact checks and
generic checks against data group DLO entries are performed on the path. If no
match is found, the lowest path element is removed and the process is repeated. For
example, A/B/C/D is reduced to A/B/C and is rechecked. This process continues until
a match is found or until all elements of the path have been removed. If there is still no
match, then checks for folder path *ALL are performed.
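The folder-path reduction described above can be sketched as follows. `folder_search_paths` is a hypothetical helper for this example showing only the order in which path values are checked; it is not a MIMIX interface.

```python
def folder_search_paths(folder_path):
    """Order in which a DLO folder path is checked against data group
    DLO entries: the full path first, then with the lowest path element
    removed each time, and finally *ALL. Illustrative helper only."""
    elements = folder_path.split("/")
    paths = ["/".join(elements[:i]) for i in range(len(elements), 0, -1)]
    return paths + ["*ALL"]

print(folder_search_paths("A/B/C/D"))
# -> ['A/B/C/D', 'A/B/C', 'A/B', 'A', '*ALL']
```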

Replication of implicitly defined DLO parent objects


The values specified for System 1 folder (FLR1) and System 1 document (DOC1) in a
data group DLO entry explicitly identify one or more DLO objects for consideration
and implicitly identify the parent objects within the path to the specified object. Table
16 shows an example of implicitly and explicitly identified DLO objects.
When evaluating journal transactions for objects identified by data group DLO entries,
object replication (QAUDJRN) will process the implicitly identified parent objects as
follows:


• Create or change operations for an implicitly defined parent object are replicated.
• Move/rename operations of an implicitly defined parent object that is within the
configured namespace or that would cause the parent object to be moved into the
configured namespace are replicated.
• Move/rename operations that would cause an implicitly defined parent object to
no longer be part of (moved out of) the configured namespace are not replicated.
• Delete operations for an implicitly defined parent object are not replicated.

Table 16. Example of a data group DLO entry with implicit and explicit objects

Folder Value   Document Value   Objects within    Description
                                Namespace

A/B/C/D        *ALL             A                 Implicitly defined parent object
                                A/B               Implicitly defined parent object
Only the objects identified     A/B/C             Implicitly defined parent object
by what follows the right-      A/B/C/D           Explicitly defined object (folder)
most / in FLR1 and the          A/B/C/D/E.doc     Explicitly defined object (document)
objects specified in DOC1       A/B/C/D/MYFLR     Explicitly defined object (folder)
are explicitly defined.

Sequence and priority order for documents


Table 17 illustrates the sequence in which MIMIX checks DLO entries for a match.

Table 17. Matching order for document names


Search Order Folder Path Document Name Owner
1 Exact Exact Exact
2 Exact Exact *ALL
3 Exact Generic* Exact
4 Exact Generic* *ALL
5 Exact *ALL Exact
6 Exact *ALL *ALL
7 Generic* Exact Exact
8 Generic* Exact *ALL
9 Generic* Generic* Exact
10 Generic* Generic* *ALL
11 Generic* *ALL Exact
12 Generic* *ALL *ALL
13 *ALL Exact Exact
14 *ALL Exact *ALL
15 *ALL Generic* Exact
16 *ALL Generic* *ALL
17 *ALL *ALL Exact
18 *ALL *ALL *ALL

Document example - Table 18 illustrates some sample data group DLO entries. For
example, a transaction for any document in a folder named FINANCE would be

blocked from replication because it matches entry 6. A transaction for document
ACCOUNTS in FINANCE1 owned by JONESB would be replicated because it
matches entry 4. If SMITHA owned ACCOUNTS in FINANCE1, the transaction would
be blocked by entry 3. Likewise, documents LEDGER.JUL and LEDGER.AUG in
FINANCE1 would be blocked by entry 2 and document PAYROLL in FINANCE1
would be blocked by entry 1. A transaction for any document in FINANCE2 would be
blocked by entry 6. However, transactions for documents in FINANCE2/Q1, or in a
child folder of that path, such as FINANCE2/Q1/FEB, would be replicated because of
entry 5.

Table 18. Sample data group DLO entries, arranged in order from most to least specific
Entry Folder Path Document Owner Process Type
1 FINANCE1 PAYROLL *ALL *EXCLD
2 FINANCE1 LEDGER* *ALL *EXCLD
3 FINANCE1 *ALL SMITHA *EXCLD
4 FINANCE1 *ALL *ALL *INCLD
5 FINANCE2/Q1 *ALL *ALL *INCLD
6 FIN* *ALL *ALL *EXCLD
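The selection of the most significant matching entry can be sketched against the Table 18 samples. This is an illustrative simplification: `select_entry` mirrors the folder-then-document-then-owner significance order of Table 17 but ignores the unnamed-child-folder rule and folder-path reduction steps; it is not a MIMIX interface.

```python
def match_rank(value, pattern):
    """Rank a match: 0 = exact, 1 = generic, 2 = *ALL; None = no match."""
    if pattern == "*ALL":
        return 2
    if pattern.endswith("*"):
        return 1 if value.startswith(pattern[:-1]) else None
    return 0 if value == pattern else None

def select_entry(folder, document, owner, entries):
    """Pick the most significant matching DLO entry. Folder is the most
    significant search element, then document, then owner (compare the
    rank tuples lexicographically, lower is more significant)."""
    best_key, best = None, None
    for entry in entries:
        ranks = (match_rank(folder, entry["folder"]),
                 match_rank(document, entry["doc"]),
                 match_rank(owner, entry["owner"]))
        if None in ranks:
            continue
        if best_key is None or ranks < best_key:
            best_key, best = ranks, entry
    return best

# The sample entries from Table 18
entries = [
    {"folder": "FINANCE1",    "doc": "PAYROLL", "owner": "*ALL",   "type": "*EXCLD"},
    {"folder": "FINANCE1",    "doc": "LEDGER*", "owner": "*ALL",   "type": "*EXCLD"},
    {"folder": "FINANCE1",    "doc": "*ALL",    "owner": "SMITHA", "type": "*EXCLD"},
    {"folder": "FINANCE1",    "doc": "*ALL",    "owner": "*ALL",   "type": "*INCLD"},
    {"folder": "FINANCE2/Q1", "doc": "*ALL",    "owner": "*ALL",   "type": "*INCLD"},
    {"folder": "FIN*",        "doc": "*ALL",    "owner": "*ALL",   "type": "*EXCLD"},
]

# ACCOUNTS in FINANCE1 owned by JONESB is replicated (entry 4)
print(select_entry("FINANCE1", "ACCOUNTS", "JONESB", entries)["type"])    # *INCLD
# LEDGER.JUL in FINANCE1 is blocked by the generic document entry (entry 2)
print(select_entry("FINANCE1", "LEDGER.JUL", "ANYONE", entries)["type"])  # *EXCLD
```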

Sequence and priority order for folders


Folders are treated somewhat differently than documents. Folders are replicated
based on whether there are any data group DLO entries with a process type of
*INCLD that would require the folder to exist on the target system. If a folder needs to
exist to satisfy the folder path of an include entry, the folder will be replicated even if a
different exclude entry prevents replication of the contents of the folder.
Some exceptions exist concerning the requirement of replicating folders to satisfy the
folder path for an include entry.
• A folder will not be replicated when the only include entry that would cause its
replication specifies *ALL for its folder path and the folder matches an exclude
entry with an exact or a generic folder path name, a document value of *ALL and
an owner of *ALL.
• Also, because folders are implicitly identified parent objects, certain transactions
for them are not replicated regardless of sequence or priority order matches. For
details, see “Replication of implicitly defined DLO parent objects” on page 122.
Table 18 and Table 19 illustrate the differences in matching folders to be replicated.
In Table 18, above, a transaction for a folder named FINANCE would be blocked from
replication because it matches entry 6. This would also affect all folders within
FINANCE. A transaction for folder FINANCE1 would be replicated because of entry 4.
Likewise, a transaction for folder FINANCE2 would be replicated because of entry 5.
Note that any transactions for documents in FINANCE2 or any child folders other than
those in the path that includes Q1 would be blocked by entry 6; only FINANCE2 itself
must exist to satisfy entry 5.
In Table 19, although entry 5 is an include entry, a transaction for folder ACCOUNT
would be blocked from replication because it matches entry 2. This is because of the
exception described above. ACCOUNT matches an exclude entry with an exact folder

124
Identifying DLOs for replication

path, document value of *ALL, and an owner of *ALL, and the only include entry that
would cause it to be replicated specifies folder path *ALL. The exception also affects
all child folders in the ACCOUNT folder path. Note that the exception holds true even
if ACCOUNT is owned by user profile JONESB (entry 4) because the more specific
folder name match takes precedence.

Table 19. Sample data group DLO entries, folder example


Entry Folder Path Document Owner Process Type
1 ACCOUNT2 LEDGER* *ALL *EXCLD
2 ACCOUNT *ALL *ALL *EXCLD
3 *ALL ABC* *ALL *INCLD
4 *ALL *ALL JONESB *INCLD
5 *ALL *ALL *ALL *INCLD

A transaction for folder ACCOUNT2 would be replicated even though it is an exact
path name match for exclude entry 1. The exception does not apply because entry 1
does not specify document *ALL. Entry 5 requires that ACCOUNT2 exist on the target
system to satisfy the folder path requirements for document names other than
LEDGER* and for child folders of ACCOUNT2.

Processing of newly created files and objects
Your production environment is dynamic. New objects continue to be created after
MIMIX is configured and running. When properly configured, MIMIX automatically
recognizes entries in the user journal that identify new create operations and
replicates any that are eligible for replication. Optionally, MIMIX can also notify you of
newly created objects not eligible for replication so that you can choose whether to
add them to the configuration.
Configurations that replicate files, data areas, data queues, or IFS objects from user
journal entries require journaling to be started on the objects before replication can
occur. When a configuration enables journaling to be implicitly started on new objects,
a newly created object is already journaled. When the journaled object falls within the
group of objects identified for replication by a data group, MIMIX replicates the create
operation. Processing variations exist based on how the data group and the data
group entry with the most specific match to the object are configured. These
variations are described in the following subtopics.
The MMNFYNEWE monitor is a shipped journal monitor that watches the security
audit journal (QAUDJRN) for newly created libraries, folders, or directories that are
not already included or excluded for replication by a data group and sends warning
notifications when its conditions are met. This monitor is shipped disabled. User
action is required to enable this monitor on the source system within your MIMIX
environment. Once enabled, the monitor will automatically start with the master
monitor. For more information about the conditions that are checked, see topic
‘Notifications for newly created objects’ in the MIMIX Operations book.
For more information about requirements and restrictions for implicit starting of
journaling as well as examples of how MIMIX determines whether to replicate a new
object, see “What objects need to be journaled” on page 343.

Newly created files


When newly created *FILE objects are implicitly journaled and are eligible for
replication, the replication processes used depend on how the data group definition is
configured and how the data group entry with the most specific match to the file is
configured.

New file processing - MIMIX Dynamic Apply


When a data group definition meets configuration requirements for MIMIX Dynamic
Apply and data group object and file entries are properly configured, new files created
on the source system that are eligible for replication will be re-created on the target
system by MIMIX. The following briefly describes the events that occur for newly
created files on the source system which are configured for MIMIX Dynamic Apply:
• System journal replication processes ignore the creation entry, knowing that user
journal replication processes will get a create entry as well.
• User journal replication processes dynamically add a file entry for a file when a file
create is seen in the user journal. The file entry is added with a status of *ACTIVE.
• User journal replication processes create the file on the target system. Replication
proceeds normally after the file has been created.


• All subsequent file changes including moves or renames, member operations
(adds, changes, and removes), member data updates, file changes, authority
changes, and file deletes are replicated through the user journal.
• For MIMIX Dynamic Apply configurations, MIMIX always attempts to place files
that are related due to referential constraints into the same apply session. This
eliminates the possibility of constraint violations that would otherwise occur if
apply sessions processed the files independently. However, there are some
situations where constraints are added dynamically between two files already
assigned to different apply sessions. In this case, the constraint may need to be
disabled to avoid the constraint violations. In the case of cascading constraints,
where a modification to one file cascades operations to related files, MIMIX will
always attempt to apply the cascading entries, whether the constraint is enabled
or disabled, to ensure that the modification is done.

New file processing - legacy cooperative processing


When a data group definition meets configuration requirements for legacy cooperative
processing and data group object and file entries are properly configured, files
created on the source system will be saved and restored to the target system by
system journal replication processes. The following briefly describes the events that
occur when files are created that have been defined for legacy cooperative
processing:
• System journal replication processes communicate with user journal replication
processes to add a data group file entry for the file (ADDDGFE command). The
file entry is added with the status of *HLD.
• A user journal transaction is created on the source system and is transferred to
the target system to dynamically add the file to active user journal processes.
• Journaling on the file is started if it is not already active.
• System journal replication processes save the created file, restore it on the target
system, then communicate with user journal replication processes to issue a
release wait request against the file. The status of the file entry changes to
*RLSWAIT.
• The database apply process waits for the save point in the journal, and then
makes the file active. The status of the file entry changes to *ACTIVE.
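The file-entry status progression in the steps above can be modeled as a small state machine. The following Python fragment is an illustrative sketch only: the *HLD, *RLSWAIT, and *ACTIVE statuses come from MIMIX, but the event names and the function are our own and are not part of the product.

```python
# Illustrative sketch (not MIMIX code) of the legacy cooperative processing
# file-entry status lifecycle described above.
TRANSITIONS = {
    ("*HLD", "release wait issued"): "*RLSWAIT",        # after save/restore
    ("*RLSWAIT", "apply reaches save point"): "*ACTIVE", # database apply
}

def next_status(status, event):
    """Return the next file-entry status, or the current one if the event
    does not apply in that status."""
    return TRANSITIONS.get((status, event), status)

status = "*HLD"  # ADDDGFE adds the file entry as held
status = next_status(status, "release wait issued")
status = next_status(status, "apply reaches save point")
assert status == "*ACTIVE"
```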

Newly created IFS objects, data areas, and data queues


When journaling is implicitly started for IFS objects, data areas, and data queues,
newly created objects that are eligible for replication are automatically replicated.
Configuration values specified in the data group IFS entry or object entry that most
specifically matches the new object determine which replication processes are used.
Note: Non-journaled objects are replicated through the system journal.
For data areas and data queues, automatic journaling of new *DTAARA or *DTAQ
objects is supported. MIMIX configurations can be enabled to permit the automatic
start of journaling for newly created data areas and data queues in libraries journaled
to a user journal. New MIMIX installations that are configured for MIMIX Dynamic
Apply of files automatically have this behavior.
For requirements for implicitly starting journaling on new objects, see “What objects
need to be journaled” on page 343.
If the object is journaled to the user journal, MIMIX user journal replication processes
can fully replicate the create operation. The user journal entries contain all the
information necessary for replication without needing to retrieve information from the
object on the source system. MIMIX creates a tracking entry for the newly created
object and an activity entry representing the T-CO (create) journal entry for data areas
and data queues.
If the object is not journaled to the user journal, then the create of the object is
processed with system journal processing and an activity entry is created which
represents the T-CO journal entry.
If the values specified in the data group entry that identified the object as eligible for
replication do not allow the object type to be cooperatively processed, the create of
the object and subsequent operations are replicated through system journal
processes.
When MIMIX replicates a create operation through the user journal, the create
timestamp (*CRTTSP) attribute may differ between the source and target systems.
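The routing described above can be summarized in a short sketch. This Python fragment is illustrative only; the function name and return strings are ours, not part of MIMIX.

```python
# Illustrative sketch (not MIMIX code): which journal replicates a create of an
# IFS object, data area, or data queue, per the rules above.
def create_replication_journal(journaled_to_user_journal, coop_allowed):
    """journaled_to_user_journal: the new object is journaled to the user journal.
    coop_allowed: the matching data group entry allows cooperative processing
    for the object type."""
    if journaled_to_user_journal and coop_allowed:
        return "user journal"    # tracking entry plus T-CO activity entry
    return "system journal"      # activity entry representing the T-CO entry
```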

Determining how an activity entry for a create operation was replicated


To determine whether a create operation of a given object is being replicated through
user journal processes or through system journal processes, do the following:
1. On the Data Group Activity Entries (WRKDGACTE) display, locate the entry for a
create operation that you want to check. Create operations have a value of T-CO
in the Code column.
2. Use option 5 (Display) next to an activity entry for a create operation.
3. On the resulting details display, check the value of the Requires container send
field.
If *YES appears for an activity entry representing a create operation, the create
operation is being replicated through the system journal.
If *NO appears in the field, the create operation is being replicated through the
user journal.
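The check in step 3 amounts to a simple mapping. This Python helper is illustrative only; the function name is ours, and only the *YES/*NO field values come from the display.

```python
# Illustrative helper (not MIMIX code): interprets the "Requires container send"
# field of a T-CO activity entry, per steps 1-3 above.
def replication_path(requires_container_send):
    """*YES means the create is replicated through the system journal;
    *NO means it is replicated through the user journal."""
    if requires_container_send == "*YES":
        return "system journal"
    return "user journal"
```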


Processing variations for common operations


Some variation exists in how MIMIX performs common operations such as moves,
renames, deletes, and restores. The variations are based on the configuration of the
data group entry used for replication.
Configurations specify whether these operations are processed through the system
journal, user journal, or a combination of both journals. Most operations for journaled
files, objects, and IFS objects can be processed through the user journal except for a
few operations, such as non-journaled creates and objects moved or restored into the
replication name space.
For IFS objects, user journal replication offers full support of create, delete, move, and
rename operations that occur entirely within the replicated name space. User journal
replication also offers full support of these operations for data area and data queue
objects that occur entirely within the replicated name space.

Move/rename operations - journaled replication


Table 20 describes how MIMIX processes a move or rename journal entry. MIMIX
uses system journal replication processes for DLOs, IFS objects, and library-based
objects which are not explicitly identified for user journal replication. Not all object
types are capable of being journaled to a user journal. The Original Source Object
and New Name or Location columns indicate whether the object is identified within
the name space for replication. The Action column indicates the operation that MIMIX
will attempt on the target system.

Table 20. Current object move actions

Original Source Object                          | New Name or Location                            | MIMIX Action on Target System
Excluded from or not identified for replication | Within name space of objects to be replicated   | Create Object (1)
Identified for replication                      | Excluded from or not identified for replication | Delete Object (2)
Identified for replication                      | Within name space of objects to be replicated   | Move Object
Excluded from or not identified for replication | Excluded from or not identified for replication | None

1. If the source system object is not defined to MIMIX or if it is defined by an Exclude entry,
it is not guaranteed that an object with the same name exists on the backup system or
that it is really the same object as on the source system. To ensure the integrity of the
target (backup) system, a copy of the source object must be brought over from the
source system.
2. If the target object is not defined to MIMIX or if it is defined by an Exclude entry, there is
no guarantee that the target library exists on the target system. Further, the customer is
assumed not to care whether the target object is replicated, since it is not defined with an
Include entry, so deleting the object on the target is the most straightforward approach.
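The decision rules in Table 20 can be sketched as a small function. This Python fragment is illustrative only and is not MIMIX code; "in name space" means the name is identified for replication rather than excluded.

```python
# Illustrative sketch (not MIMIX code) of the Table 20 decision rules.
def move_action_on_target(source_in_name_space, new_name_in_name_space):
    """Return the MIMIX action column of Table 20 for a move/rename entry."""
    if source_in_name_space and new_name_in_name_space:
        return "Move Object"
    if source_in_name_space:
        return "Delete Object"   # new name is outside the name space (note 2)
    if new_name_in_name_space:
        return "Create Object"   # copy the object from the source (note 1)
    return "None"
```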

Move/rename operations - user journaled data areas, data queues, IFS
objects
IFS, data area, and data queue objects replicated by user journal replication
processes can be moved or renamed while maintaining the integrity of the data. If the
new location or new name on the source system remains within the set of objects
identified as eligible for replication, MIMIX will perform the move or rename operation
on the object on the target system.
When a move or rename operation starts with or results in an object that is not within
the name space for user journal replication, MIMIX may need to perform additional
operations in order to replicate the operation. MIMIX may use a create or delete
operation and may need to add or remove tracking entries.
Each row in Table 21 summarizes a move/rename scenario and identifies the action
taken by MIMIX.

Table 21. MIMIX actions when processing moves or renames of objects when user journal
replication processes are involved

1. Source object: identified for replication with user journal processing.
   New name or location: within name space of objects to be replicated with user
   journal processing.
   MIMIX action: Moves or renames the object on the target system and renames the
   associated tracking entry. See example 1.

2. Source object: not identified for replication.
   New name or location: not identified for replication.
   MIMIX action: None. See example 2.

3. Source object: identified for replication with user journal processing.
   New name or location: not identified for replication.
   MIMIX action: Deletes the target object and deletes the associated tracking
   entry. The object will no longer be replicated. See example 3.

4. Source object: identified for replication with user journal processing.
   New name or location: within name space of objects to be replicated with system
   journal processing.
   MIMIX action: Moves or renames the object using system journal processes and
   removes the associated tracking entry. See example 4.

5. Source object: identified for replication with system journal processing.
   New name or location: within name space of objects to be replicated with user
   journal processing.
   MIMIX action: Creates a tracking entry for the object using the new name or
   location and moves or renames the object using user journal processes. If the
   object is a library or directory, MIMIX creates tracking entries for those
   objects within the library or directory that are also within the name space for
   user journal replication and synchronizes those objects. See example 5.

6. Source object: not identified for replication.
   New name or location: within name space of objects to be replicated with user
   journal processing.
   MIMIX action: Creates a tracking entry for the object using the new name or
   location. If the object is a library or directory, MIMIX creates tracking
   entries for those objects within the library or directory that are also within
   the name space for user journal replication. Synchronizes all of the objects
   identified by these new tracking entries. See example 6.
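The decision rules summarized in Table 21 can be sketched as a small function. This Python fragment is illustrative only and is not MIMIX code; each name is classified as "user" (user journal name space), "system" (system journal name space), or None (not identified for replication), and the return strings are our own shorthand for the actions in the table.

```python
# Illustrative sketch (not MIMIX code) of the Table 21 decision rules.
def move_rename_action(source_ns, new_ns):
    if source_ns == "user" and new_ns == "user":
        return "rename object and tracking entry"                  # example 1
    if source_ns == "user" and new_ns == "system":
        return "rename via system journal, remove tracking entry"  # example 4
    if source_ns == "user" and new_ns is None:
        return "delete object and tracking entry"                  # example 3
    if new_ns == "user":
        # Source was in the system journal name space or not identified:
        # both scenarios end with new tracking entries and a synchronize
        # (examples 5 and 6).
        return "create tracking entries and synchronize"
    # Combinations Table 21 does not list (for example, both names outside
    # any name space) result in no user journal action. See example 2.
    return "none"
```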

The following examples use IFS objects and directories to illustrate the MIMIX
operations in move/rename scenarios that involve user journal replication (advanced
journaling). The MIMIX behavior described is the same as that for data areas and
data queues that are within the configured name space for advanced journaling. Table
22 identifies the initial set of source system objects, data group IFS entries, and IFS
tracking entries before the move/rename operation occurs.

Table 22. Initial data group IFS entries, IFS tracking entries, and source IFS objects for
examples

Configuration Supports     | Data Group IFS Entries | Source System IFS Objects in Name Space | Associated Data Group IFS Tracking Entries
user journal replication   | /TEST/STMF*            | /TEST/stmf1                             | /TEST/stmf1
user journal replication   | /TEST/DIR*             | /TEST/dir1/doc1                         | /TEST/dir1, /TEST/dir1/doc1
system journal replication | /TEST/NOTAJ*           | /TEST/notajstmf1, /TEST/notajdir1/doc1  | (none)

Example 1, moves/renames within advanced journaling name space: The most
common move and rename operations occur within advanced journaling name space.
For example, MIMIX encounters user journal entries indicating that the source system
IFS directory /TEST/dir1 was renamed to /TEST/dir2, and that the IFS stream file
/TEST/stmf1 was renamed to /TEST/stmf2. In both cases, the old and new names fall
within advanced journaling name space, as indicated in Table 21. The rename
operations are replicated and names are changed on the target system objects. The
tracking entries for these objects are also renamed. The resulting changes on the
target system objects and MIMIX configuration are shown in Table 23.

Table 23. Results of move/rename operations within name space for advanced journaling

Resulting Target IFS objects | Resulting data group IFS tracking entries
/TEST/stmf2                  | /TEST/stmf2
/TEST/dir2/doc1              | /TEST/dir2, /TEST/dir2/doc1

Example 2, moves/renames outside name space: When MIMIX encounters a
journal entry for a source system object outside of the name space that has been
renamed or moved to another location also outside of the name space, MIMIX ignores
the transaction. The object is not eligible for replication.
Example 3, moves/renames from advanced journaling name space to outside
name space: In this example, MIMIX encounters user journal entries indicating that
the source system IFS directory /TEST/dir1 was renamed to /TEST/xdir1 and IFS
stream file /TEST/stmf1 was renamed to /TEST/xstmf1. MIMIX is aware of only the
original names, as indicated in Table 21. Thus, the old name is eligible for replication,
but the new name is not. MIMIX treats this as a delete operation during replication
processing. MIMIX deletes the IFS directory and IFS stream file from the target
system. MIMIX also deletes the associated IFS tracking entries.
Example 4, moves/renames from advanced journaling to system journal name
space: In this example, MIMIX encounters user journal entries indicating that the
source system IFS directory /TEST/dir1 was renamed to /TEST/notajdir1 and that IFS
stream file /TEST/stmf1 was renamed to /TEST/notajstmf1. MIMIX is aware that both
the old names and new names are eligible for replication as indicated in Table 21.
However, the new names fall within the name space for replication through the
system journal. As a result, MIMIX removes the tracking entries associated with the
original names and performs the rename operation for the objects on the target
system. Table 24 shows these results.

Table 24. Results of move/rename operations from advanced journaling to system journal
name space

Resulting target IFS objects | Resulting data group IFS tracking entries
/TEST/notajstmf1             | (removed)
/TEST/notajdir1/doc1         | (removed)

Example 5, moves/renames from system journal to advanced journaling name
space: In this example, MIMIX encounters journal entries indicating that source
system IFS directory /TEST/notajdir1 was renamed to /TEST/dir1 and that IFS
stream file /TEST/notajstmf1 was renamed to /TEST/stmf1. MIMIX is aware that the
old names are within the system journal name space and that the new names are
within the advanced journaling name space. MIMIX creates tracking entries for the
names and then performs the rename operation on the target system using advanced
journaling.
MIMIX also creates tracking entries for any objects that reside within the moved or
renamed IFS directory (or library in the case of data areas or data queues). The
objects identified by these tracking entries are individually synchronized from the
source to the target system. Table 25 illustrates the results on the target system.

Table 25. Results of move/rename operations from system journal to advanced journaling
name space

Resulting target IFS objects | Resulting data group IFS tracking entries
/TEST/stmf1                  | /TEST/stmf1
/TEST/dir1/doc1              | /TEST/dir1, /TEST/dir1/doc1

Example 6, moves/renames from outside to within advanced journaling name
space: In this example, MIMIX encounters journal entries indicating that the source
system IFS directory /TEST/xdir1 was renamed to /TEST/dir1 and that IFS stream file
/TEST/xstmf1 was renamed to /TEST/stmf1. The original names are outside of the
name space and are not eligible for replication. However, the new names are within
the name space for advanced journaling as indicated in Table 21. Because the
objects were not previously replicated, MIMIX processes the operations as creates
during replication. See “Newly created files” on page 126.
MIMIX also creates tracking entries for any objects that reside within the moved or
renamed IFS directory (or library in the case of data areas or data queues). The
objects identified by these tracking entries are individually synchronized from the
source to the target system. Table 26 illustrates the results.

Table 26. Results of move/rename operations from outside to within advanced journaling
name space

Resulting target IFS objects | Resulting data group IFS tracking entries
/TEST/stmf1                  | /TEST/stmf1
/TEST/dir1/doc1              | /TEST/dir1, /TEST/dir1/doc1

Delete operations - files configured for legacy cooperative processing


The following briefly describes the events that occur in MIMIX when a database file
that is defined for legacy cooperative processing is deleted:
• System journal replication processes communicate with user journal replication
processes that a file has been deleted on the source system and indicate that the
file should be deleted from the target system.
• A journal transaction which identifies the deleted file is created on the source
system. The transaction is transferred dynamically.
• If the data group file entry is set to use the option to dynamically update active
replication processes, the file and associated file entry will be dynamically
removed from the replication processes. If the dynamic update option is not used,
the data group changes are not recognized until all data group processes are
ended and restarted.
• MIMIX system journal replication processes delete the file on the target system.

Delete operations - user journaled data areas, data queues, IFS objects
When a T-DO (delete) journal entry for an IFS, data area, or data queue object is
encountered in the system journal and advanced journaling is not being used, MIMIX
system journal replication processes generate an activity entry representing the
delete operation and handle the delete of the object from the target system. The user
journal replication processes remove the corresponding tracking entry.

Restore operations - user journaled data areas, data queues, IFS objects
When an IFS, data area, or data queue object is restored, any pre-existing object is
replaced by a save from the source system. With user journal replication, restores of
IFS, data area, and data queue objects on the source system are supported through
cooperative processing between MIMIX system journal and user journal replication
processes.
Provided the object was journaled when it was saved, a restored IFS, data area, or
data queue object is also journaled.
During cooperative processing, system journal replication processes generate an
activity entry representing the T-OR (restore) journal entry from the system journal
and perform a save and restore operation on the IFS, data area, or data queue object.
Meanwhile, user journal replication processes handle the management of the
corresponding IFS or object tracking entry. MIMIX may also start journaling, or end
and restart journaling on the object so that the journaling characteristics of the IFS,
data area, or data queue object match the data group definition.

CHAPTER 5 Configuration checklists

MIMIX can be configured in a variety of ways to support your replication needs. Each
configuration requires a combination of definitions and data group entries. Definitions
identify systems, journals, communications, and data groups that make up the
replication environment. Data group entries identify what to replicate and the
replication option to be used. For available options, see “Replication choices by object
type” on page 97. Also, advanced techniques, such as keyed replication, have
additional configuration requirements. For additional information see “Configuring
advanced replication techniques” on page 383.
New installations: Before you start configuring MIMIX, system-level configuration
for communications (lines, controllers, IP interfaces) must already exist between the
systems that you plan to include in the MIMIX installation. Choose one of the following
checklists to configure a new installation of MIMIX.
• “Checklist: New remote journal (preferred) configuration” on page 137 uses
shipped default values to create a new installation. Unless you explicitly configure
them otherwise, new data groups will use the IBM i remote journal function as part
of user journal replication processes.
• “Checklist: New MIMIX source-send configuration” on page 141 configures a new
installation and is appropriate when your environment cannot use remote
journaling. New data groups will use MIMIX source-send processes in user journal
replication.
• To configure a new installation that is to use the integrated MIMIX support for IBM
WebSphere MQ (MIMIX for MQ), refer to the MIMIX for IBM WebSphere MQ
book.
Upgrades and conversions: You can use any of the following topics, as appropriate,
to change a configuration:
• “Checklist: converting to application groups” on page 145 provides the instructions
needed to change your environment to implement application groups. Application
groups are best practice and provide the ability to group and control multiple data
groups as one entity.
• “Checklist: Converting to remote journaling” on page 146 changes an existing
data group to use remote journaling within user journal replication processes.
• “Converting to MIMIX Dynamic Apply” on page 148 provides checklists for two
methods of changing the configuration of an existing data group to use MIMIX
Dynamic Apply for logical and physical file replication. Data groups that existed
prior to installing version 5 must use this information in order to use MIMIX
Dynamic Apply.
• “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on
page 151 changes the configuration of an existing data group to use user journal
replication processes for these objects.
• To add integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ) to an
existing installation, use topic ‘Choosing the correct checklist for MIMIX for MQ’ in
the MIMIX for IBM WebSphere MQ book.
• “Checklist: Converting to legacy cooperative processing” on page 157 changes
the configuration of an existing data group so that logical and physical source files
are processed from the system journal and physical data files use legacy
cooperative processing.
Other checklists: The following configuration checklist employs less frequently used
configuration tools and is not included in this chapter.
• Use “Checklist: copy configuration” on page 649 if you need to copy configuration
data from an existing product library into another MIMIX installation.


Checklist: New remote journal (preferred) configuration


Use this checklist to configure a new installation of MIMIX. This checklist creates the
preferred configuration that uses IBM i remote journaling and uses MIMIX Dynamic
Apply to cooperatively process logical and physical files.
To configure your system manually, perform the following steps on the system that
you want to designate as the management system of the MIMIX installation:
1. Communications between the systems must be configured and operational
before you start configuring MIMIX.
a. If communications is not configured, refer to “System-level communications”
on page 159 for more information.
b. If you have TCP configured and plan to use it for your transfer protocol, verify
that it is operational using the PING command.
2. Create system definitions for the management system and each of the network
systems for the MIMIX installation. Use topic “Creating system definitions” on
page 169.
3. Create transfer definitions to define the communications protocol used between
pairs of systems. A pair of systems consists of a management system and a
network system. Use topic “Creating a transfer definition” on page 184.
4. If you are using the TCP protocol, ensure that the Lakeview TCP server is running
on each system defined in the transfer definition. You can use the Work with
Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS
subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not
active on a system, use topic “Starting the TCP/IP server” on page 191.
Note: Default values for transfer definitions enable MIMIX to create and manage
autostart job entries for the server. If your transfer definitions prevent this,
you can create and manage your own autostart job entries. For more
information see “Using autostart job entries to start the TCP server” on
page 192.
5. Start the MIMIX managers using topic “Starting the system and journal managers”
on page 306. When the system manager is running, configuration information for
data groups will be automatically replicated to the other system as you create it.
6. Verify that the communications link defined in each transfer definition is
operational using topic “Verifying a communications link for system definitions” on
page 196.
7. If you are using the TCP protocol, ensure that the DDM TCP server is running
using topic “Starting the DDM TCP/IP server” on page 187.
8. If you have implemented DDM password validation, verify that your environment
will allow MIMIX RJ support to work properly. Use topic “Checking the DDM
password validation level” on page 188.
9. Create the data group definitions that you need using topic “Creating a data group
definition” on page 246. The referenced topic creates a data group definition with
appropriate values to support MIMIX Dynamic Apply.

10. Confirm that the journal definitions which have been automatically created have
the values you require. For information, see “Configuration processes that create
journal definitions” on page 200, “Tips for journal definition parameters” on
page 201, and “Journal definition considerations” on page 206.
11. Build the necessary journaling environments for the RJ links using “Building the
journaling environment” on page 221. If the data group is switchable, be sure to
build the journaling environments for both directions: source system A to target
system B (target journal @R) and source system B to target system A (target
journal @R).
Note: The use of application groups is considered best practice. Step 12 through
Step 14 create the additional configuration needed for application groups. If
you are not using application groups, skip to Step 15.
12. Create the application groups to which you will associate the data groups using
topic “Creating an application group definition” on page 326.
13. Load the data resource group entries and nodes that define the association
between application groups and data groups. “Loading data resource groups into
an application group” on page 327.
14. Identify what node (system) will be the primary node for each application group,
using “Specifying the primary node for the application group” on page 327.
15. Use Table 27 to create data group entries for this configuration. This configuration
requires object entries and file entries for LF and PF files. For other object types or
classes, any replication options identified in planning topic “Replication choices by
object type” on page 97 are supported.

Table 27. How to configure data group entries for the remote journal (preferred) configuration

Library-based objects
   Do the following:
   1. Create object entries using “Creating data group object entries” on page 271.
   2. After creating object entries, load file entries for LF and PF (source and
      data) *FILE objects using “Loading file entries from a data group’s object
      entries” on page 276.
      Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data
      files, you should still create file entries for PF data files to ensure that
      legacy cooperative processing can be used.
   3. After creating object entries, load object tracking entries for any *DTAARA
      and *DTAQ objects to be replicated from a user journal. Use “Loading object
      tracking entries” on page 287.
   Planning and requirements information:
   “Identifying library-based objects for replication” on page 100
   “Identifying logical and physical files for replication” on page 106
   “Identifying data areas and data queues for replication” on page 113

IFS objects
   Do the following:
   1. Create IFS entries using “Creating data group IFS entries” on page 284.
   2. After creating IFS entries, load IFS tracking entries for IFS objects to be
      replicated from a user journal. Use “Loading IFS tracking entries” on
      page 286.
   Planning and requirements information:
   “Identifying IFS objects for replication” on page 116

DLOs
   Do the following:
   Create DLO entries using “Creating data group DLO entries” on page 297.
   Planning and requirements information:
   “Identifying DLOs for replication” on page 122

16. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
“Resolving auditing problems” on page 679 and “Interpreting results for
configuration data - #DGFE audit” on page 687.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
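The status check in step 16e amounts to a simple membership test. This Python helper is illustrative only; the status values come from the audit display, and the helper name is ours, not part of MIMIX.

```python
# Illustrative helper (not MIMIX code): step 16e accepts the #DGFE audit only
# when its status shows no differences (*NODIFF) or that differences were
# automatically recovered (*AUTORCVD); any other status must be resolved.
ACCEPTABLE_AUDIT_STATUSES = {"*NODIFF", "*AUTORCVD"}

def audit_needs_attention(status):
    """True when the audit ended in a status that requires follow-up."""
    return status not in ACCEPTABLE_AUDIT_STATUSES
```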
17. Optionally, you can manually deploy data group configuration within MIMIX.
Although MIMIX will automatically deploy configuration information when data
groups are started, manually deploying is recommended for new data groups.
Manual deploying allows you the opportunity to validate the list of objects to be
replicated and the initial start of the data groups will be faster. Use the procedure
“Manually deploying configuration changes” on page 307.
18. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
“Setting data group auditing values manually” on page 309. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
19. Start journaling using the following procedures as needed for your configuration.

Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands
to start journaling.
• For user journal replication, use “Journaling for physical files” on page 347 to
start journaling on both source and target systems.
• For IFS objects, configured for user journal replication, use “Journaling for IFS
objects” on page 350.
• For data areas or data queues configured for user journal replication, use
“Journaling for data areas and data queues” on page 354.
20. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 508
identifies options available for synchronizing and identifies how to establish a
synchronization point that identifies the journal location that will be used later to
initially start replication.
21. Confirm that the systems are synchronized by checking that the libraries, folders,
and directories contain the expected objects on both systems.
22. Start the data group using “Starting data groups for the first time” on page 315.
23. For configurations that use application groups, after you have started data groups
as described in Step 22, start the application groups using “Starting an application
group” on page 331.
24. Customize the step programs that end and start user applications before and
following a switch using “Customizing user application handling for switching” on
page 561.
25. Verify the configuration. Topic “Verifying the initial synchronization” on page 512
identifies the additional aspects of your configuration that are necessary for
successful replication.

Checklist: New MIMIX source-send configuration

Best practice for MIMIX is to use MIMIX Remote Journal support for database
replication. However, in cases where you cannot use remote journaling, this checklist
configures a new installation that uses MIMIX source-send processes for database
replication. System journal replication is also configured.
To configure a source-send environment, perform the following steps on the system
that you want to designate as the management system of the MIMIX installation:
1. Communications between the systems must be configured and operational
before you start configuring MIMIX.
a. If communications is not configured, refer to “System-level communications”
on page 159 for more information.
b. If you have TCP configured and plan to use it for your transfer protocol, verify
that it is operational using the PING command.
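As a quick check, you can ping each remote system by the host name or address used in your transfer definition. The host name shown here is a placeholder:

```
PING RMTSYS('SYSTEM2.EXAMPLE.COM')
```

If the PING fails, resolve the TCP/IP configuration before continuing.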
2. Create system definitions for the management system and each of the network
systems for the MIMIX installation. Use topic “Creating system definitions” on
page 169.
3. Create transfer definitions to define the communications protocol used between
pairs of systems. A pair of systems consists of a management system and a
network system. Use topic “Creating a transfer definition” on page 184.
4. If you are using the TCP protocol, ensure that the Lakeview TCP server is running
on each system defined in the transfer definition. You can use the Work with
Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS
subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not
active on a system, use topic “Starting the TCP/IP server” on page 191.
Note: Default values for the transfer definition enable MIMIX to create and manage
autostart job entries for the server. If your transfer definitions prevent this,
you can create and manage your own autostart job entries. For more
information see “Using autostart job entries to start the TCP server” on
page 192.
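For example, you could check for the server job with the following command, limiting the display to the MIMIXSBS subsystem named above:

```
WRKACTJOB SBS(MIMIXSBS)
```

Look for a job whose Function column shows PGM-LVSERVER.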
5. Start the MIMIX managers using topic “Starting the system and journal managers”
on page 306. When the system manager is running, configuration information for
data groups will be automatically replicated to the other system as you create it.
6. Verify that the communications link defined in each transfer definition is
operational using topic “Verifying a communications link for system definitions” on
page 196.
7. Create the data group definitions that you need using topic “Creating a data group
definition” on page 246. Be sure to specify *NO for the Use remote journal link
prompt.
8. Confirm that the journal definitions which have been automatically created have
the values you require. For information, see “Configuration processes that create
journal definitions” on page 200, “Tips for journal definition parameters” on
page 201, and “Journal definition considerations” on page 206.

9. If the journaling environment does not exist, use topic “Building the journaling
environment” on page 221 to create the journaling environment.
Note: The use of application groups is considered best practice. Step 10 through
Step 12 create the additional configuration needed for application groups. If
you are not using application groups, skip to Step 13.
10. Create the application groups to which you will associate the data groups using
topic “Creating an application group definition” on page 326.
11. Load the data resource group entries and nodes that define the association
between application groups and data groups. Use “Loading data resource groups
into an application group” on page 327.
12. Identify what node (system) will be the primary node for each application group,
using “Specifying the primary node for the application group” on page 327.
13. Use Table 28 to create data group entries for this configuration. This configuration
requires object entries and file entries for legacy cooperative processing of PF
data files. For other object types or classes, any replication options identified in
planning topic “Replication choices by object type” on page 97 are supported.

Table 28. How to configure data group entries for a new MIMIX source-send configuration.

Library-based objects:
   1. Create object entries using “Creating data group object entries” on page 271.
   2. After creating object entries, load file entries for PF (data) *FILE objects
      using “Loading file entries from a data group’s object entries” on page 276.
   3. After creating object entries, load object tracking entries for *DTAARA and
      *DTAQ objects to be replicated from a user journal. Use “Loading object
      tracking entries” on page 287.
   Planning and requirement information: “Identifying library-based objects for
   replication” on page 100; “Identifying logical and physical files for replication”
   on page 106; “Identifying data areas and data queues for replication” on
   page 113.

IFS objects:
   1. Create IFS entries using “Creating data group IFS entries” on page 284.
   2. After creating IFS entries, load IFS tracking entries for IFS objects to be
      replicated from a user journal. Use “Loading IFS tracking entries” on
      page 286.
   Planning and requirement information: “Identifying IFS objects for replication”
   on page 116.

DLOs:
   Create DLO entries using “Creating data group DLO entries” on page 297.
   Planning and requirement information: “Identifying DLOs for replication” on
   page 122.

14. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)


b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
“Resolving auditing problems” on page 679 and “Interpreting results for
configuration data - #DGFE audit” on page 687.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
15. Optionally, you can manually deploy data group configuration within MIMIX.
Although MIMIX will automatically deploy configuration information when data
groups are started, manually deploying is recommended for new data groups.
Manually deploying gives you the opportunity to validate the list of objects to be
replicated, and the initial start of the data groups will be faster. Use the procedure
“Manually deploying configuration changes” on page 307.
16. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
“Setting data group auditing values manually” on page 309. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
17. Start journaling using the following procedures as needed for your configuration.
Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands
to start journaling.
• For user journal replication, use “Journaling for physical files” on page 347 to
start journaling on both source and target systems.
• For IFS objects configured for user journal replication, use “Journaling for IFS
objects” on page 350.
• For data areas or data queues configured for user journal replication, use
“Journaling for data areas and data queues” on page 354.
18. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 508
identifies options available for synchronizing and identifies how to establish a
synchronization point that identifies the journal location that will be used later to
initially start replication.
19. Confirm that the systems are synchronized by checking that the libraries, folders,
and directories contain the expected objects on both systems.
20. Start the data group using “Starting data groups for the first time” on page 315.
21. For configurations that use application groups, after you have started data groups
as described in Step 20, start the application groups using “Starting an application
group” on page 331.
22. Customize the step programs that end and start user applications before and
following a switch using “Customizing user application handling for switching” on
page 561.
23. Verify your configuration. Topic “Verifying the initial synchronization” on page 512
identifies the additional aspects of your configuration that are necessary for
successful replication.

Checklist: Converting to application groups


Use this checklist to change an existing configuration so that data groups will be
associated with one or more application groups. The use of application groups is
considered best practice.
Note: Procedures and steps control start, end, and switch operations for application
groups and their associated data groups. Application groups do not support
switching through implementations of MIMIX Model Switch Framework and
cannot use model switch framework programs. Any automation included
within model switch framework programs should be evaluated and considered
for inclusion within corresponding procedures for application groups.
To convert an existing environment so that one or more data groups will be controlled
by application groups, do the following:
1. Analyze your existing model switch framework implementation. You may need to
customize or create additional switching procedures or steps. For more
information, see “Customizing procedures” on page 557. Contact your Certified
MIMIX Consultant if you need assistance setting up your switching environment.
2. Create the application groups to which you will associate the data groups using
“Creating an application group definition” on page 326.
3. Load the data resource group entries and nodes that define the association
between application groups and data groups using “Loading data resource groups
into an application group” on page 327.
4. Identify what node (system) will be the primary node for each application group,
using “Specifying the primary node for the application group” on page 327.
5. If you have automation programs, evaluate them for any needed changes. If you
use automation programs to start and end individual data groups, you will need to
update those automation programs. Best practice is to convert programs to use
STRAG and ENDAG commands to switch application groups and their associated
data groups. However, if you need to allow a data group associated with an
application group to be started or ended individually through automation, you may
need to update automation programs to specify DTACRG(*YES) in the STRDG or
ENDDG commands1. To gain the benefits of procedures and steps, this type of
customized automation should be converted to user-defined procedures.
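As a sketch only, automation that must start or end an individual data group belonging to an application group might issue commands like the following; the data group name is a placeholder, and the ENDDG values other than DTACRG(*YES) are the controlled-end values used elsewhere in this book:

```
STRDG DGDFN(name system1 system2) DTACRG(*YES)
ENDDG DGDFN(name system1 system2) PRC(*ALL) ENDOPT(*CNTRLD) DTACRG(*YES)
```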
6. Customize the step programs that end and start user applications before and
following a switch using “Customizing user application handling for switching” on
page 561.
7. Once you have completed the preceding steps, start the application groups
using “Starting an application group” on page 331.

1. This is required in versions 7.1.05.00 and earlier. In 7.1.06.00 and higher, the DTACRG
parameter on these commands defaults to *DFT, which allows the requested command to
run when the data group belongs to a data resource group with two nodes. *DFT prevents
the requested command from running when there are three or more nodes, where it is
particularly important to treat all members of an application group as one entity.

Checklist: Converting to remote journaling
Use this checklist to convert an existing data group from using MIMIX source-send
processes to using MIMIX Remote Journal support for user journal replication.
Note: This checklist does not change values specified in data group entries that
affect how files are cooperatively processed or how data areas, data queues,
and IFS objects are processed. For example, files configured for legacy
processing prior to this conversion will continue to be replicated with legacy
cooperative processing.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. If you use startup programs, make any changes necessary to ensure that they will
start the TCP/IP server and the DDM server on all systems before starting
replication.
2. Do the following to ensure that you have a functional transfer definition:
a. Modify the transfer definition to identify the RDB directory entry. Use topic
“Changing a transfer definition to support remote journaling” on page 185.
b. If you have implemented DDM password validation, verify that your
environment will allow MIMIX RJ support to work properly. Use topic “Checking
the DDM password validation level” on page 188.
c. Verify the communications link using “Verifying the communications link for a
data group” on page 197.
3. If you are using the TCP protocol, ensure that the DDM TCP server is running
using topic “Starting the DDM TCP/IP server” on page 187.
4. Connect the journal definitions for the local and remote journals using “Adding a
remote journal link” on page 227. This procedure also creates the target journal
definition.
5. Build the journaling environment on each system defined by the RJ pair using
“Building the journaling environment” on page 221.
6. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *YES for the Use remote journal link prompt.
d. When you are ready to accept the changes, press Enter.
7. To make the configuration changes effective, you need to end the data group you
are converting to remote journaling and start it again as follows:
a. Perform a controlled end of the data group (ENDDG command), specifying
*ALL for Process and *CNTRLD for End process. Refer to topic “Ending all
replication in a controlled manner” in the MIMIX Operations book.


b. Start data group replication using the procedure “Starting selected data group
processes” in the MIMIX Operations book. Be sure to specify *ALL for the Start
processes prompt (PRC parameter) and *LASTPROC as the value for the
Database journal receiver and Database large sequence number prompts.
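Assuming the data group name shown is a placeholder, the end-and-restart sequence in this step might look like the following. Prompt the STRDG command with F4 so that you can specify *LASTPROC for the Database journal receiver and Database large sequence number prompts, whose parameter keywords are not shown here:

```
ENDDG DGDFN(name system1 system2) PRC(*ALL) ENDOPT(*CNTRLD)
STRDG DGDFN(name system1 system2) PRC(*ALL)
```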

Converting to MIMIX Dynamic Apply
Use either procedure in this topic to change a data group configuration to use MIMIX
Dynamic Apply. In a MIMIX Dynamic Apply configuration, objects of type *FILE (LF,
PF source and data) are replicated using primarily user journal replication processes.
This configuration is the most efficient way to process these files.
• “Converting using the Convert Data Group command” on page 148 automatically
converts a data group configuration.
• “Checklist: manually converting to MIMIX Dynamic Apply” on page 149 enables
you to perform the conversion yourself.
It is recommended that you contact your Certified MIMIX Consultant for assistance
before performing this procedure.
Requirements: Before starting, consider the following:
• Any data groups set with *SYSJRN must use one of these procedures in order to
use MIMIX Dynamic Apply. Newly created data groups are automatically
configured to use MIMIX Dynamic Apply when its requirements and restrictions
are met and shipped command defaults are used.
• Any data group to be converted must already be configured to use remote
journaling.
• Any data group to be converted must have *SYSJRN specified as the value of
Cooperative journal (COOPJRN).
• A minimum level of IBM i PTFs are required on both systems. For a complete list
of required and recommended IBM PTFs, log in to Support Central and refer to
the Technical Documents page.
• The conversion must be performed from the management system. The data group
must be active when starting the conversion.
For additional information about configuration requirements and limitations of MIMIX
Dynamic Apply, see “Identifying logical and physical files for replication” on page 106.

Converting using the Convert Data Group command


The Convert Data Group (CVTDG) command automatically converts the
configuration of specified data groups to enable MIMIX Dynamic Apply. The
command attempts to perform the steps described in the manual procedure and
issues diagnostic messages if a step cannot be performed.
Perform the following steps from the management system on an active data group:
1. From a command line enter the command:
CVTDG DGDFN(name system1 system2)
2. Watch for diagnostic messages in the job log and take any recovery action
indicated.
The conversion is complete when you see message LVI321A.


Checklist: manually converting to MIMIX Dynamic Apply


Perform the following steps from the management system to enable an existing data
group to use MIMIX Dynamic Apply:
1. Verify the environment meets the requirements and restrictions. See
“Requirements and limitations of MIMIX Dynamic Apply” on page 111.
2. Apply any IBM PTFs (or their supersedes) associated with IBM i releases as they
pertain to your environment. Log in to Support Central and refer to the Technical
Documents page for a list of required and recommended IBM PTFs.
3. Verify that the System Manager jobs are active. See “Starting the system and
journal managers” on page 306.
4. Verify that data group is synchronized by running the MIMIX audits. See “Verifying
the initial synchronization” on page 512.
5. Use the Work with Data Groups display to ensure that there are no files on hold
and no failed or delayed activity entries. Refer to topic “Preparing for a controlled
end of a data group” in the MIMIX Operations book.
Note: Topic “Ending a data group in a controlled manner” in the MIMIX
Operations book includes subtask “Preparing for a controlled end of a data
group” and the other subtasks needed for Step 6 and Step 7.
6. Perform a controlled end of the data group you are converting. Follow the
procedure for “Performing the controlled end” in the MIMIX Operations book.
7. Ensure that there are no open commit cycles for the database apply process.
Follow the steps for “Confirming the end request completed without problems” in
the MIMIX Operations book.
8. From the management system, change the data group definition so that the
Cooperative journal (COOPJRN) parameter specifies *USRJRN. Use the
command:
CHGDGDFN DGDFN(name system1 system2) COOPJRN(*USRJRN)
9. Ensure that you have one or more data group object entries that specify the
required values. These entries identify the items within the name space for
replication. You may need to create additional entries to achieve desired results.
For more information, see “Identifying logical and physical files for replication” on
page 106.
10. To ensure that new files created while the data group is inactive are automatically
journaled, ensure that MIMIX starts library journaling or creates QDFTJRN data
areas as needed in the libraries configured for replication of cooperatively
processed files, data areas, and data queues. This can be done by running the
following command from the source system:
SETDGAUD DGDFN(name system1 system2) OBJTYPE(*AUTOJRN)
Note: The libraries are subject to some limitations. For a list of restricted libraries
and other details of requirements for implicitly starting journaling, see
“What objects need to be journaled” on page 343.
11. From the management system, use the following command to load the data group
file entries from the target system. Ensure that the value you specify (*SYS1 or
*SYS2) for the LODSYS parameter identifies the target system.
LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE)
UPDOPT(*ADD) LODSYS(value) SELECT(*NO)
For additional information about loading file entries, see “Loading file entries from
a data group’s object entries” on page 276.
12. Start journaling for all files not previously journaled. See “Starting journaling for
physical files” on page 347.
13. Start the data group specifying the command as follows:
STRDG DGDFN(name system1 system2) CLRPND(*YES)
14. Verify that data groups are synchronized by running the MIMIX audits. See
“Verifying the initial synchronization” on page 512.

Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling
Use this checklist to change the configuration of an existing data group so that IFS
objects, *DTAARA and *DTAQ objects can be replicated from entries in a user journal.
(This environment is also called advanced journaling.) The procedure in this checklist
assumes that the data group already includes user journal replication for files.
Note: The most efficient way to convert IFS entries in a data group from system
journal (object) replication to user journal (database) replication is to use the
Convert Data Group IFS Entries (CVTDGIFSE) command as documented in
“Checklist: Converting IFS entries to user journaling using the CVTDGIFSE
command” on page 154.
Topic “User journal replication of IFS objects, data areas, data queues” on page 75
describes the benefits and restrictions of replicating these objects from user journal
entries. It also identifies the MIMIX processes used for replication and the purpose of
tracking entries.
To convert existing data groups to user journaling, do the following:
1. Determine if IFS objects, data areas, and data queues should be replicated in a
data group shared with other objects undergoing database replication, or if these
objects should be in a separate data group. Topic “Planning for journaled IFS
objects, data areas, and data queues” on page 87 provides guidelines for the
following planning considerations:
• Serializing transactions with database files
• Converting existing data groups, including examples
• Database apply session balancing
• User exit program considerations
2. Perform a controlled end of the data groups that will include objects to be
replicated using advanced journaling. See the MIMIX Operations book for how to
end a data group in a controlled manner (ENDOPT(*CNTRLD)).
3. Ensure that all pending activity for objects and IFS objects has completed. Use
the command WRKDGACTE STATUS(*ACTIVE) to display any pending activity
entries. Any activities that are still in progress will be listed.
4. The data group definitions used for user journal replication of IFS objects, data
areas, and data queues must specify *ALL as the value for Data group type
(TYPE). Verify the value in the data group definition is correct. If necessary,
change the value.
Note: If you have to change the Data group type, the journal definitions and
journaling environment for user journal replication may not exist. If
necessary, create the journal definitions (“Creating a journal definition” on
page 218) and build the journaling environment (“Building the journaling
environment” on page 221).
5. Add or change data group IFS entries for the IFS objects you want to replicate. Be
sure to specify *YES for the Cooperate with database prompt in procedure
“Adding or changing a data group IFS entry” on page 284. For additional
information, see “Restrictions - user journal replication of IFS objects” on
page 120.
6. Add or change data group object entries for the data areas and data queues you
want to replicate using the procedure “Adding or changing a data group object
entry” on page 272. For additional information, see “Restrictions - user journal
replication of data areas and data queues” on page 114.
Note: New data group object entries created in MIMIX version 7.0 or higher
automatically default to values that result in user journal replication of
*DTAARA and *DTAQ objects.
7. Load the tracking entries associated with the data group IFS entries and data
group object entries you configured. Use the procedures in “Loading tracking
entries” on page 286.
8. Optionally, you can manually deploy data group configuration within MIMIX.
Although MIMIX will automatically deploy configuration information when data
groups are started, manually deploying is recommended for data groups with
large amounts of configured IFS objects. Manually deploying gives you the
opportunity to validate the list of objects to be replicated, and the subsequent start
of the data groups will be faster. Use the procedure “Manually deploying
configuration changes” on page 307.
9. Start journaling using the following procedures as needed for your configuration. If
you ever plan to switch the data groups, you must start journaling on both the
source system and on the target system.
• For IFS objects, use “Starting journaling for IFS objects” on page 350
• For data areas or data queues, use “Starting journaling for data areas and data
queues” on page 354
10. Verify that journaling is started correctly. This step is important to ensure the IFS
objects, data areas and data queues are actually replicated. For IFS objects, see
“Verifying journaling for IFS objects” on page 352. For data areas and data
queues, see “Verifying journaling for data areas and data queues” on page 356.
11. If you anticipate a delay between configuring data group IFS, object, or file entries
and starting the data group, use the SETDGAUD command before synchronizing
data between systems. Doing so will ensure that replicated objects are properly
audited and that any transactions for the objects that occur between configuration
and starting the data group are replicated. Use the procedure “Setting data group
auditing values manually” on page 309.
12. Synchronize the IFS objects, data areas and data queues between the source and
target systems. For IFS objects, follow the Synchronize IFS Object (SYNCIFS)
procedures. For data areas and data queues, follow the Synchronize Object
(SYNCOBJ) procedures. Refer to chapter “Synchronizing data between systems”
on page 497 for additional information.
13. If you are replicating large amounts of data, you should specify IBM i journal
receiver size options that provide large journal receivers and large journal entries.
Journals created by MIMIX are configured to allow maximum amounts of data.
Journals that already exist may need to be changed.


a. After IFS objects are configured, perform the steps in “Verifying journal receiver
size options” on page 217 to ensure journaling is configured appropriately.
b. Change any journal receiver size options necessary using “Changing journal
receiver size options” on page 217.
14. If you have database replication user exit programs, changes may need to be
made. See “User exit program considerations” on page 90.
15. Once you have completed the preceding steps, start the data groups. For more
information about starting data groups, see the MIMIX Operations book.

Checklist: Converting IFS entries to user journaling using the CVTDGIFSE command
Use the procedures in this topic to convert IFS entries from system journal (object)
replication to user journal (database) replication using the Convert Data Group IFS
Entries (CVTDGIFSE) command. The CVTDGIFSE command provides the most
efficient way to convert IFS entries to user journaling.
Topic “User journal replication of IFS objects, data areas, data queues” on page 75
describes the benefits and restrictions of replicating these objects from user journal
entries. It also identifies the MIMIX processes used for replication and the purpose of
tracking entries.
You can choose to convert all or some of the IFS entries currently configured for
system journaling within a data group. The CVTDGIFSE command uses a temporary
data group that allows the specified data group to remain active during most of the
conversion process.
The IFS entries are copied to the temporary data group, changed to allow cooperative
processing, and IFS tracking entries are created. Journaling is then started for the IFS
objects on the source system and, if specified, also on the target system. The copied
IFS entries replace the existing entries, and IFS tracking entries are moved to the
existing data group. If necessary, the data group is changed to the values specified on
this command and to a data group type of *ALL. The existing data group is ended and
restarted to make all the changes effective and the temporary data group is removed.
If requested, an #IFSATR audit request is submitted after the conversion completes.
The CVTDGIFSE command runs interactively and requires your response to inquiry
messages while the conversion is in progress. You may also be prompted to provide
input for additional commands.

Requirements for using the CVTDGIFSE command


In order to use the Convert Data Group IFS Entries (CVTDGIFSE) command, the
following is required:
• The user ID running the command must have *MGT authority if product level
security is enabled.
• The command must be run from a management system.
• The data group must be enabled and can be active.
• In order for the optional IFS audit to be performed, the Audit level (AUDLVL) policy
in effect at the time processing completes must be a value other than *DISABLED.
• For MIMIX DR installations, the Audit after conversion (AUDIT) parameter of the
CVTDGIFSE command cannot be *PRIORITY.


Create a list of IFS objects eligible for converting to user journaling


To create a list of existing IFS entries for a data group and check whether they are
configured to replicate through the system journal or user journal, do the following:
1. From the MIMIX Basic Main Menu select option 6 (Work with data groups) and
press Enter.
2. The Work with Data Groups (WRKDG) display appears. Type 22 (IFS entries)
next to the data group you want and press Enter.
The Work with DG IFS Entries (WRKDGIFSE) display appears, showing the IFS
entries configured for the data group.
3. Press F10 twice to access the CPD view.
4. The values shown in the Coop with DB column indicate how objects identified by
the data group IFS entries will be replicated.
• Entries with the value *YES are already configured for user journal replication.
• Entries with the value *NO are configured for system journal replication.
To view additional information for a data group IFS entry, type 5 (Display) next to
the entry and press Enter.
5. Use option 6 (Print) to print a list of the existing IFS entries for the data group to
refer to during the conversion process.

Running the CVTDGIFSE command


Running the Convert Data Group IFS Entries (CVTDGIFSE) command will convert
IFS entries for an enabled data group from system journal (object) replication to user
journal (database) replication.
Important! Do not make changes to existing IFS entries for the data group or
perform a switch for the data group while this command is running. An inquiry
message is issued immediately before the existing data group is changed so
that, before you respond and the conversion continues, you can ensure that
transaction processing is caught up.
This process may take some time. The time required depends on a number of factors
including the number of specified IFS objects (IFS entries), how many objects they
identify, and your environment. The CVTDGIFSE command runs interactively and
requires your response to inquiry messages while the conversion is in progress.
Additionally, you may also be prompted to provide input for additional commands.
To convert existing IFS entries for an enabled data group to user journal replication,
do the following from a management system:
1. If possible, ensure that all pending activity for IFS objects has completed. Use the
command WRKDGACTE STATUS(*ACTIVE) to display any pending activity
entries. Any activities that are still in progress will be listed.
2. From a command line type the following and press F4 (prompt):
CVTDGIFSE DGDFN(name system1 system2)
3. From the Convert Data Group IFS Entries prompt, specify the parameters as
desired and press Enter to run the command.
Note: For MIMIX DR installations, the Audit after conversion (AUDIT) parameter
cannot be *PRIORITY.
4. The Confirm Convert DG IFS Entry display appears. To start the conversion,
press Enter. The display will remain until a message is issued which requires a
response.
5. Respond to any messages issued as appropriate for your environment. See
“Responding to CVTDGIFSE command messages” on page 156.
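
For example, a non-interactive start of the conversion with default parameter values might look like the following. The data group name INVDG and the system names PROD and BACKUP are hypothetical; substitute your own three-part data group name. Even when the command is entered this way, you must still respond to the inquiry messages described above.

CVTDGIFSE DGDFN(INVDG PROD BACKUP)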

Responding to CVTDGIFSE command messages


Messages may be encountered when running the Convert Data Group IFS Entries
(CVTDGIFSE) command which require action. If the command ends before the
Confirm Convert DG IFS Entry display appears, an escape message is issued which
identifies the reason and the recovery action required before trying the command
again. If a situation is encountered that requires your input before the command can
continue, an inquiry message is issued to indicate the situation that caused the
message and possible responses.


Checklist: Converting to legacy cooperative processing


If you find that you cannot use MIMIX Dynamic Apply for logical and physical files, use
this checklist to change the configuration of an existing data group so that user journal
replication (MIMIX Dynamic Apply) is no longer used. This checklist changes the
configuration so that physical data files can be processed using legacy cooperative
processing. Logical files and physical source files will be processed using the system
journal. For more information, see “Requirements and limitations of legacy
cooperative processing” on page 112.
Important! Before you use this checklist, consider the following:
• As of version 5 and higher, newly created data groups are configured for MIMIX
Dynamic Apply when default values are taken and configuration requirements are
met.
• This checklist does not convert user journal replication processes from using
remote journaling to MIMIX source-send processing.
• This checklist only affects the configuration of *FILE objects. The configuration of
any other *DTAARA, *DTAQ, or IFS objects that are replicated through the user
journal are not affected.
Perform the following steps to enable legacy cooperative processing and system
journal replication:
1. Verify that data group is synchronized by running the MIMIX audits. See “Verifying
the initial synchronization” on page 512.
2. Use the Work with Data Groups display to ensure that there are no files on hold
and no failed or delayed activity entries. Refer to topic “Preparing for a controlled
end of a data group” in the MIMIX Operations book.
Note: Topic “Ending a data group in a controlled manner” in the MIMIX
Operations book includes subtask “Preparing for a controlled end of a data
group” and the subtask needed for Step 3.
3. End the data group you are converting by performing a controlled end. Follow the
procedure for “Performing the controlled end” in the MIMIX Operations book.
4. From the management system, change the data group definition so that the
Cooperative journal (COOPJRN) parameter specifies *SYSJRN. Use the
command:
CHGDGDFN DGDFN(name system1 system2) COOPJRN(*SYSJRN)
5. Save the data group file entries to an outfile. Use the command:
WRKDGFE DGDFN(name system1 system2) OUTPUT(*OUTFILE)
6. From the management system, use the following command to load the data group
file entries from the target system. Ensure that the value you specify (*SYS1 or
*SYS2) for the LODSYS parameter identifies the target system.
LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE)
UPDOPT(*REPLACE) LODSYS(value) SELECT(*NO)
For additional information about loading file entries, see “Loading file entries from
a data group’s object entries” on page 276.
7. Compare the data group file entries with those saved in the outfile created in
Step 5. Manually update any differences.
8. If you replicate journaled *DTAARA or *DTAQ objects with this data group, skip to
Step 10.
9. Optional step: This step prevents journaling from starting on new files, which
may not be desired because the journal image (JRNIMG) value for these files may
be different than the value specified in the MIMIX configuration. Such a difference
will be detected by the file attributes (#FILATR) audit.
For the libraries that you want to prevent journaling from starting on new files, do
one of the following:
• For systems running IBM i 5.4, to delete the QDFTJRN data areas use the
command:
DLTDTAARA DTAARA(library/QDFTJRN)
• For systems running IBM i 6.1 or higher, to end library journaling use the
command:
ENDJRNLIB LIB(library)
10. Start the data group specifying the command as follows:
STRDG DGDFN(name system1 system2) CRLPND(*YES)
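
Taken together, the command-level portion of this checklist might look like the following sketch. The data group name INVDG and system names PROD and BACKUP are hypothetical, and the LODSYS value assumes system 2 (BACKUP) is the target system; adjust as needed for your environment.

CHGDGDFN DGDFN(INVDG PROD BACKUP) COOPJRN(*SYSJRN)
WRKDGFE DGDFN(INVDG PROD BACKUP) OUTPUT(*OUTFILE)
LODDGFE DGDFN(INVDG PROD BACKUP) CFGSRC(*DGOBJE) UPDOPT(*REPLACE) LODSYS(*SYS2) SELECT(*NO)
STRDG DGDFN(INVDG PROD BACKUP) CRLPND(*YES)

Remember that the data group must first be ended in a controlled manner (Step 3) and the loaded file entries compared against the saved outfile (Step 7) before starting the data group.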


CHAPTER 6 System-level communications

This information is provided to assist you with configuring the IBM Power™ Systems
communications that are necessary before you can configure MIMIX.
MIMIX supports the Transmission Control Protocol/Internet Protocol (TCP/IP)
communications protocol.
Note: MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) or OptiConnect/400 for communications protocols. Vision
Solutions will only assist customers in determining possible workarounds if
communication-related issues arise when using SNA or OptiConnect. If you
create transfer definitions for MIMIX to use these protocols, be certain that your
business can accept this limitation.
MIMIX should have a dedicated communications line that is not shared with other
applications, jobs, or users on the production system. A dedicated path will make it
easier to fine-tune your MIMIX environment and to determine the cause of problems.
For TCP/IP, it is recommended that the TCP/IP host name or interface used be in its
own subnet. For SNA, it is recommended that MIMIX have its own communication line
instead of sharing an existing SNA device.
Your Certified MIMIX Consultant can assist you in determining your communications
requirements and ensuring that communications can efficiently handle peak volumes
of journal transactions.
If you plan to use system journal replication processes, you need to consider
additional aspects that may affect the communications speed. These aspects include
the type of objects being transferred and the size of data queues, user spaces, and
files defined to cooperate with user journal replication processes.
MIMIX IntelliStart can help you determine your communications requirements.
The topics in this chapter include:
• “Configuring for native TCP/IP” on page 159 describes using native TCP/IP
communications and provides steps to prepare and configure your system for it.
• “Configuring APPC/SNA” on page 163 describes basic requirements for SNA
communications.
• “Configuring OptiConnect” on page 164 describes basic requirements for
OptiConnect communications and identifies MIMIX limitations when this
communications protocol is used.

Configuring for native TCP/IP


MIMIX has the ability to use native TCP/IP communications over sockets. This allows
users with TCP communications on their networks to use MIMIX without requiring the
use of IBM ANYNET through SNA.


Using TCP/IP communications may or may not improve your CPU usage, but if your
primary communications protocol is TCP/IP, this can simplify your network
configuration.
Native TCP/IP communications allow MIMIX users greater flexibility and provide
another option in the communications available for use on their Power™ Systems.
MIMIX users can also continue to use IBM ANYNET support to run SNA protocols
over TCP networks.
Preparing your system to use TCP/IP communications with MIMIX requires the
following:
1. Configure both systems to use TCP/IP. The procedure for configuring a system to
use TCP/IP is documented in the information included with the IBM i software.
Refer to the IBM TCP/IP Fastpath Setup book, SC41-5430, and follow the
instructions to configure the system to use TCP/IP communications.
2. If you need to use port aliases, do the following:
a. Refer to the examples “Port aliases-simple example” on page 160 and “Port
aliases-complex example” on page 161.
b. Create the port aliases for each system using the procedure in topic “Creating
port aliases” on page 162.
3. Once the system-level communication is configured, you can begin the MIMIX
configuration process.

Port aliases-simple example


Before using the MIMIX TCP/IP support, you must first configure the system to
recognize the feature. This involves identifying the ports that will be used by MIMIX to
communicate with other systems. The port identifiers used depend on the
configuration of the MIMIX installations. MIMIX installations vary according to the
needs of each enterprise. At a minimum, a MIMIX installation consists of one
management system and one network system. A more complex MIMIX installation
may consist of one management system and multiple network systems. A large
enterprise may even have multiple MIMIX installations that are interconnected.
Figure 7 shows a simple MIMIX installation in which the management system
(LONDON) and a network system (HONGKONG) use the TCP communications
protocol through the port number 50410. Figure 8 shows a MIMIX installation with two
network systems.

Figure 7. Creating Ports. In this example, the MIMIX installation consists of two systems.

Figure 8. Creating Ports. In this example, the MIMIX installation consists of three systems,
two of which are network systems.

In both Figure 7 and Figure 8, if you need to use port aliases for port 50410, you need
to have a service table entry on each system that equates the port number to the port
alias. For example, you might have a service table entry on system LONDON that
defines an alias of MXMGT for port number 50410. Similarly, you might have service
table entries on systems HONGKONG and CHICAGO that define an alias of MXNET
for port 50410. You would use these aliases in the PORT1 and PORT2 parameters in
the transfer definition.
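
The service table entries in this example could be created from a command line with the IBM i Add Service Table Entry (ADDSRVTBLE) command rather than through the menus. This is a sketch using the alias and port values above; service names are case sensitive, so the quoted alias preserves the recommended uppercase. On LONDON:

ADDSRVTBLE SERVICE('MXMGT') PORT(50410) PROTOCOL('TCP') TEXT('MIMIX native TCP port alias')

On HONGKONG and CHICAGO:

ADDSRVTBLE SERVICE('MXNET') PORT(50410) PROTOCOL('TCP') TEXT('MIMIX native TCP port alias')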

Port aliases-complex example


If a network system communicates with more than one management system (it
participates with multiple MIMIX installations), it must have a different port for each
management system with which it communicates. Figure 9 shows an example of such
an environment with two MIMIX installations. In the LIBA cluster, the port 50410 is
used to communicate between LONDON (the management system) and
HONGKONG and CHICAGO (network systems). In the LIBB cluster, the port 50411 is
used to communicate between CHICAGO (the management system for this cluster)
and MEXICITY and CAIRO. The CHICAGO system has two port numbers defined,
one for each MIMIX installation in which it participates.

Figure 9. Creating Port Aliases. In this example, the system CHICAGO participates in two
MIMIX installations and uses a separate port for each MIMIX installation.

If you need to use port aliases in an environment such as Figure 9, you need to have
a service table entry on each system that equates the port number to the port alias. In
this example, CHICAGO would require two port aliases and two service table entries.
For example, you might use a port alias of LIBAMGT for port 50410 on LONDON and
an alias of LIBANET for port 50410 on both HONGKONG and CHICAGO. You might
use an alias of LIBBMGT for port 50411 on CHICAGO and an alias of LIBBNET for
port 50411 on both CAIRO and MEXICITY. You would use these port aliases in the
PORT1 and PORT2 parameters on the transfer definitions.
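
Because CHICAGO participates in both MIMIX installations, it needs two service table entries, one per port. A sketch using the aliases from this example (the TEXT descriptions are illustrative):

ADDSRVTBLE SERVICE('LIBANET') PORT(50410) PROTOCOL('TCP') TEXT('MIMIX LIBA network port alias')
ADDSRVTBLE SERVICE('LIBBMGT') PORT(50411) PROTOCOL('TCP') TEXT('MIMIX LIBB management port alias')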

Creating port aliases


The following procedure describes the steps for creating port aliases which allow
MIMIX installations to communicate through TCP/IP.
Notes:
• Perform this procedure on each system in the MIMIX installation that will use
the TCP protocol.
• To allow communications in both directions between a pair of systems, such as
between a management system and a network system, you need to add port
aliases for both systems in the pair on each system.
• If you are using more than one MIMIX installation, define a different set of
aliases for each MIMIX installation.
Do the following to create a port alias on a system:
1. From a command line, type the command CFGTCP and press Enter.
2. The Configure TCP/IP menu appears. Select option 21 (Configure related tables)
and press Enter.


3. The Configure Related Tables display appears. Select option 1 (Work with
service table entries) and press Enter.
4. The Work with Service Table Entries display appears. Do the following:
a. Type a 1 in the Opt column next to the blank lines at the top of the list.
b. In the blank at the top of the Service column, use uppercase characters to
specify the alias that the System i will use to identify this port as a MIMIX native
TCP port.

Attention: MIMIX requires that you restrict the length of port aliases to 14 or fewer characters and suggests that you specify the alias in uppercase characters.

Note: Port alias names are case sensitive and must be unique to the system
on which they are defined. For environments that have only one MIMIX
installation, Vision Solutions recommends that you use the same port
number or same port alias on each system in the MIMIX installation.
c. In the blank at the top of the Port column, specify the number of an unused port
ID to be associated with the alias. The port ID can be any number greater than
1024 and less than 55534 that is not being used by another application. You
can page down through the list to ensure that the number is not being used by
the system.
d. In the blank at the top of the Protocol column, type TCP to identify this entry as
using TCP/IP communications.
e. Press Enter.
5. The Add Service Table Entry (ADDSRVTBLE) display appears. Verify that the
information shown for the alias and port is what you want. At the Text 'description'
prompt, type a description of the port alias, enclosed in apostrophes, and then
press Enter.

Configuring APPC/SNA
Before you create a transfer definition that uses the SNA protocol, a functioning SNA
(APPN or APPC) line, controller, and device must exist between the systems that will
be identified by the transfer definition. If a line, controller, and device do not exist,
consult your network administrator before continuing.
Note: MIMIX no longer fully supports the SNA protocol. Vision Solutions will only
assist customers to determine possible workarounds if communication related
issues arise when using SNA. If you create transfer definitions that specify
*SNA for protocol, be certain that your business environment can accept this
limitation.


Configuring OptiConnect
If you plan to use the OptiConnect protocol, a functioning OptiConnect line must exist
between the two systems that you identify in the transfer definition.
Note: MIMIX no longer fully supports the OptiConnect/400 protocol. Vision Solutions
will only assist customers to determine possible workarounds if
communication-related issues arise when using OptiConnect. If you create transfer
definitions that specify *OPTI for protocol, be certain that your business
environment can accept this limitation.
You can use the OptiConnect® product from IBM for all communication for most1
MIMIX processes. Use the IBM book OptiConnect for OS/400 to install and verify
OptiConnect communications. Then you can do the following:
• Ensure that the QSOC library is in the system portion of the library list. Use the
command DSPSYSVAL SYSVAL(QSYSLIBL) to verify whether the QSOC library
is in the system portion of the library list. If it is not, use the CHGSYSVAL command
to add this library to the system library list.
• When you create the transfer definition, specify *OPTI for the transfer protocol.
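
The library-list check in the first bullet above could be performed as follows. The VALUE string in the CHGSYSVAL command is illustrative only: QSYSLIBL holds the complete system portion of the library list, so first display the current value and then re-enter it with QSOC appended rather than copying this list.

DSPSYSVAL SYSVAL(QSYSLIBL)
CHGSYSVAL SYSVAL(QSYSLIBL) VALUE('QSYS QSYS2 QHLPSYS QUSRSYS QSOC')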

1. The #FILDTA audit and the Compare File Data (CMPFILDTA) command require TCP/IP
communications.

CHAPTER 7 Configuring system definitions

By creating a system definition, you identify to MIMIX the characteristics of the IBM
Power™ Systems that participate in a MIMIX installation.
Only two system definitions can be created in environments licensed for MIMIX DR.
When you create a system definition, MIMIX automatically creates a journal definition
for the security audit journal (QAUDJRN) for the associated system. This journal
definition is used by MIMIX system journal replication processes.
MIMIX also automatically creates the MXCFGJRN and MXSYSMGR journal
definitions for the system, which are used by MIMIX.
The topics in this chapter include:
• “Tips for system definition parameters” on page 166 provides tips for using the
more common options for system definitions.
• “Creating system definitions” on page 169 provides the steps to follow for creating
system definitions.
• “Changing a system definition” on page 170 provides the steps to follow for
changing a system definition.
• “Limiting internal communications to a network system” on page 170 describes
how multi-management environments can change system definitions to potentially
reduce communications required for system managers.
• “Multiple network system considerations” on page 172 describes
recommendations when configuring an environment that has multiple network
systems.

Tips for system definition parameters
This topic provides tips for using the more common options for system definitions.
Context-sensitive help is available online for all options on the system definition
commands.
System definition (SYSDFN) This parameter is a single-part name that represents a
system within a MIMIX installation. This name is a logical representation and does not
need to match the system name that it represents. It is recommended that you avoid
naming system definitions based on their roles. System roles such as source, target,
production, and backup change upon switching.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_).
System type (TYPE) This parameter indicates the role of this system within the
MIMIX installation. A system can be a management (*MGT) system or a network
(*NET) system. Only one system in the MIMIX installation can be a management
system.
Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the
primary and secondary transfer definitions used for communicating with the system.
The communications path and protocol are defined in the transfer definitions. For
MIMIX to be operational, the transfer definition names you specify must exist. MIMIX
does not automatically create transfer definitions. If you accept the default value
primary for the Primary transfer definition, create a transfer definition by that name.
If you specify a Secondary transfer definition, it will be used by MIMIX if the
communications path specified by the primary transfer definition is not available.
Cluster member (CLUMBR) You can specify if you want this system definition to be
a member of a cluster. The system (node) will not be added to the cluster until the
system manager is started the first time.
Cluster transfer definition (CLUTFRDFN) You can specify the transfer definition
that cluster resource services will use to communicate to the node and for the node to
communicate with other nodes in the cluster. You must specify *TCP as the transfer
protocol.
Message handling (PRIMSGQ, SECMSGQ) MIMIX uses the centralized message
log facility which is common to all MIMIX products. These parameters provide
additional flexibility by allowing you to identify the message queues associated with
the system definition and define the message filtering criteria for each message
queue. By default, the primary message queue, MIMIX, is located in the MIMIXQGPL
library. You can specify a different message queue or optionally specify a secondary
message queue. You can also control the severity and type of messages that are sent
to each message queue.
Communicate with mgt systems (MGTSYS) This parameter is ignored for system
definitions of type *MGT. For system definitions of type *NET, MIMIX uses this
parameter to determine which management systems will communicate with the
network system via system manager processes. The default value, *ALL, allows all
management systems to communicate with the specified network system. In


environments licensed for multiple management systems which also have numerous
network systems, you may want to limit the number of management systems that
communicate with a network system to reduce the amount of communications
resources used by MIMIX system managers. It is recommended that you try the
default environment first. In environments with multiple management systems, system
manager processes between a management system and a network system must
exist before data groups can be created between the systems. If you need to limit the
communication resources, contact your Certified MIMIX Consultant for assistance in
balancing MIMIX communication needs with available resources.
Manager delay times (JRNMGRDLY, SYSMGRDLY) Two parameters define the
delay times used for all journal management and system management jobs. The
value of the journal manager delay parameter determines how often the journal
manager process checks for work to perform. The value of the system manager delay
parameter determines how often the system manager process checks for work to
perform.
Output queue values (OUTQ, HOLD, SAVE) These parameters identify an output
queue used by this system definition and define characteristics of how the queue is
handled. Any MIMIX functions that generate reports use this output queue. You can
hold spooled files on the queue and save spooled files after they are printed.
Keep history (KEEPSYSHST, KEEPDGHST) Two parameters specify the number of
days to retain MIMIX system history and data group history. MIMIX system history
includes the system message log. Data group history includes time stamps and
distribution history. You can keep both types of history information on the system for
up to a year.
Keep notifications (KEEPNEWNFY, KEEPACKNFY) Two parameters specify the
number of days to retain new and acknowledged notifications. The Keep new
notifications (days) parameter specifies the number of days to retain new notifications
in the MIMIX data library. The Keep acknowledged notifications (days) parameter
specifies the number of days to retain acknowledged notifications in the MIMIX data
library.
MIMIX data library, storage limit (KEEPMMXDTA, DTALIBASP, DSKSTGLMT)
Three parameters define information about MIMIX data libraries on the system. The
Keep MIMIX data (days) parameter specifies the number of days to retain objects in
the MIMIX data library, including the container cache used by system journal
replication processes. The MIMIX data library ASP parameter identifies the auxiliary
storage pool (ASP) from which the system allocates storage for the MIMIX data
library. For libraries created in a user ASP, all objects in the library must be in the
same ASP as the library. The Disk storage limit (GB) parameter specifies the
maximum amount of disk storage that may be used for the MIMIX data libraries.
User profile and job descriptions (SBMUSR, MGRJOBD, DFTJOBD) MIMIX runs
under the MIMIXOWN user profile and uses several job descriptions to optimize
MIMIX processes. The default job descriptions are stored in the MIMIXQGPL library.
Job restart time (RSTARTTIME) System-level MIMIX jobs, including the system
manager and journal manager, restart daily to maintain the MIMIX environment. You
can change the time at which these jobs restart. The management or network role of

the system affects the results of the time you specify on a system definition. Changing
the job restart time is considered an advanced technique.
Printing (CPI, LPI, FORMLEN, OVRFLW, COPIES) These parameters control
characteristics of printed output.
Product library (PRDLIB) This parameter is used for installing MIMIX into a
switchable independent ASP, and allows you to specify a MIMIX installation library
that does not match the library name of the other system definitions. The only time
this parameter should be used is in the case of an INTRA system or in replication
environments where it is necessary to have extra MIMIX system definitions that will
“switch locations” along with the switchable independent ASP. Due to its complexity,
changing the product library is considered an advanced technique and should not be
attempted without the assistance of a Certified MIMIX Consultant.
Note: For INTRA environments, the PRDLIB parameter must be manually
specified and must be a name ending in “I”. For more information, see
Appendix D, “Configuring Intra communications.”
ASP group (ASPGRP) This parameter is used for installing MIMIX into a switchable
independent ASP, and defines the ASP group (independent ASP) in which the
product library exists. Again, this parameter should only be used in replication
environments involving a switchable independent ASP. Due to its complexity,
changing the ASP group is considered an advanced technique and should not be
attempted without the assistance of a Certified MIMIX Consultant.


Creating system definitions


To create a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
2. The Work with System Definitions display appears. Type a 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create System Definition (CRTSYSDFN) display appears. Specify a name at
the System definition prompt. Once created, the name can only be changed by
using the Rename System Definition command.
4. Specify the appropriate value for the system you are defining at the System type
prompt.
5. Specify the names of the transfer definitions you want at the Primary transfer
definition and, if desired, the Secondary transfer definition prompts.
6. If the system definition is for a cluster environment, do the following:
a. Specify *YES at the Cluster member prompt.
b. Verify that the value of the Cluster transfer definition is what you want. If
necessary, change the value.
7. If you want to use a secondary message queue, at the prompts for Secondary
message handling specify the name and library of the message queue and values
indicating the severity and the Information type of messages to be sent to the
queue.
8. At the Description prompt, type a brief description of the system definition.
9. If you want to verify or change values for additional parameters, press F10
(Additional parameters).
10. To create the system definition, press Enter.
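
Alternatively, a system definition can be created directly from a command line. This sketch uses hypothetical values: the names LONDON and PRIMARY are illustrative, the parameter keywords SYSDFN, TYPE, and PRITFRDFN are described in “Tips for system definition parameters,” and the keyword for the description (TEXT) is an assumption based on typical MIMIX command conventions.

CRTSYSDFN SYSDFN(LONDON) TYPE(*MGT) PRITFRDFN(PRIMARY) TEXT('London management system')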

Changing a system definition
To change a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
2. The Work with System Definitions display appears. Type a 2 (Change) next to the
system definition you want and press Enter.
3. The Change System Definition (CHGSYSDFN) display appears. Press F10
(Additional parameters).
4. Locate the prompt for the parameter you need to change and specify the value
you want. Press F1 (Help) for more information about the values for each
parameter.
5. To save the changes press Enter.

Limiting internal communications to a network system


These instructions identify how to change a network system in a multimanagement
environment so that it communicates with fewer management systems. Doing so may
reduce the amount of communications resources used by MIMIX system managers
when the multimanagement environment also has a large number of network
systems. The MIMIX environment must be licensed for multiple management
systems.
Important!
Before using these instructions, we strongly recommend contacting your Certified
MIMIX Consultant for assistance in assessing your communications resources
while ensuring that MIMIX systems have adequate essential internal
communications. Distance between systems can be a factor which may increase
communications traffic when fewer management systems are allowed to
communicate with network systems.
Do the following:
1. Choose one management system from which to perform all steps in these
instructions.
2. End MIMIX using the command:
ENDMMX ENDSMRJ(*YES)
3. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
4. For each network (*NET) system that you want to change on the Work with
System Definitions display, do the following:
a. Type a 2 (Change) next to the *NET system definition and press Enter.
b. At the Communicate with mgt systems prompt, specify the names of the
management systems allowed to communicate with the selected network
system. You can specify as many as three management systems.


c. Press Enter.
5. When you have completed changing the network system definitions, start MIMIX
using the command:
STRMMX
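
For example, for a hypothetical network system CHICAGO that should communicate only with management systems LONDON and NEWYORK, the procedure above amounts to the following sequence (all names are illustrative):

ENDMMX ENDSMRJ(*YES)
CHGSYSDFN SYSDFN(CHICAGO) MGTSYS(LONDON NEWYORK)
STRMMX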

Multiple network system considerations
When configuring an environment that has multiple network systems, it is
recommended that each system definition in the environment specify the same name
for the Primary transfer definition prompt. This configuration is necessary for the
MIMIX system managers to communicate between the management system and all
systems in the network. Data groups can use the same transfer definitions that the
system managers use, or they can use differently named transfer definitions.
Similarly, if you use secondary transfer definitions, it is recommended that each
system definition in the multiple network environment specifies the same name for the
Secondary transfer definition prompt. (The value of the Secondary transfer definition
should be different than the value of the Primary transfer definition.)
Figure 10 shows system definitions in a multiple network system environment. The
management system (LONDON) specifies the value PRIMARY for the primary
transfer definition in its system definition. The management system can communicate
with the other systems using any transfer definition named PRIMARY that has a value
for System 1 or System 2 that resolves to its system name (LONDON). Figure 11
shows the recommended transfer definition configuration which uses the value *ANY
for both systems identified by the transfer definition.
The management system LONDON could also use any transfer definition that
specified the name LONDON as the value for either System 1 or System 2.
The default value for the name of a transfer definition is PRIMARY. If you use a
different name, you need to specify that name as the value for the Primary transfer
definition prompt in all system definitions in the environment.

Figure 10. Example of system definition values in a multiple network system environment.

Work with System Definitions


System: LONDON
Type options, press Enter.
1=Create 2=Change 3=Copy 4=Delete 5=Display 6=Print 7=Rename
11=Verify communications link 12=Journal definitions
13=Data group definitions 14=Transfer definitions

                        -Transfer Definitions-   Cluster
Opt  System    Type   Primary     Secondary      Member
__ _______
__ CHICAGO *NET PRIMARY *NONE *NO
__ NEWYORK *NET PRIMARY *NONE *NO
__ LONDON *MGT PRIMARY *NONE *NO

Figure 11. Example of a contextual (*ANY) transfer definition in use for a multiple network
system environment.

Work with Transfer Definitions


System: LONDON
Type options, press Enter.
1=Create 2=Change 3=Copy 4=Delete 5=Display 6=Print 7=Rename
11=Verify communications link

---------Definition--------- Threshold
Opt Name System 1 System 2 Protocol (MB)
__ __________ _______ ________
__ PRIMARY    *ANY    *ANY     *TCP     *NOMAX


CHAPTER 8 Configuring transfer definitions

By creating a transfer definition, you identify to MIMIX the communications path and
protocol to be used between two systems. You need at least one transfer definition for
each pair of systems between which you want to perform replication. A pair of
systems consists of a management system and a network system. If you want to be
able to use different transfer protocols between a pair of systems, create a transfer
definition for each protocol.
System-level communication must be configured and operational before you can use
a transfer definition.
You can also define an additional communications path in a secondary transfer
definition. If configured, MIMIX can automatically use a secondary transfer definition if
the path defined in your primary transfer definition is not available.
In an Intra environment, a transfer definition defines a communications path and
protocol to be used between the two product libraries used by Intra. For detailed
information about configuring an Intra environment, refer to “Configuring Intra
communications” on page 654.
Once transfer definitions exist for MIMIX, they can be used for other functions, such
as the Run Command (RUNCMD), or by other MIMIX products for their operations.
The topics in this chapter include:
• “Tips for transfer definition parameters” on page 176 provides tips for using the
more common options for transfer definitions.
• “Using contextual (*ANY) transfer definitions” on page 181 describes using the
value (*ANY) when configuring transfer definitions.
• “Creating a transfer definition” on page 184 provides the steps to follow for
creating a transfer definition.
• “Changing a transfer definition” on page 185 provides the steps to follow for
changing a transfer definition. This topic also includes a sub-task for
changing a transfer definition when converting to a remote journaling
environment.
• “Starting the DDM TCP/IP server” on page 187 describes how to start the DDM
server that is required in configurations that use remote journaling.
• “Checking the DDM password validation level” on page 188 describes how to
check whether the DDM communications infrastructure used by MIMIX Remote
Journal support requires a password. This topic also describes options for
ensuring that systems in a MIMIX configuration have the same password and
describes implications of these options.
• “Starting the TCP/IP server” on page 191 provides the steps to follow if you need
to start the Lakeview TCP/IP server.
• “Using autostart job entries to start the TCP server” on page 192 provides the
steps to configure the Lakeview TCP server to start automatically every time the

MIMIX subsystem is started.
• “Verifying a communications link for system definitions” on page 196 provides the
steps to verify that the communications link defined for each system definition is
operational.
• “Verifying the communications link for a data group” on page 197 provides a
procedure to verify the primary transfer definition used by the data group.

Tips for transfer definition parameters
This topic provides tips for using the more common options for transfer definitions.
Context-sensitive help is available online for all options on the transfer definition
commands.
Transfer definition (TFRDFN) This parameter is a three-part name that identifies a
communications path between two systems. The first part of the name identifies the
transfer definition. The second and third parts of the name identify two different
system definitions which represent the systems between which communication is
being defined. It is recommended that you use PRIMARY as the name of one transfer
definition. To support replication, a transfer definition must identify the two systems
that will be used by the data group. You can explicitly specify the two systems, or you
can allow MIMIX to resolve the names of the systems. For more information about
allowing MIMIX to resolve the system names, see “Using contextual (*ANY) transfer
definitions” on page 181.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_).
For more information, see “Target journal definition names generated by ADDRJLNK
command” on page 210.
Short transfer definition name (TFRSHORTN) This parameter specifies the short
name of the transfer definition to be used in generating a relational database (RDB)
directory name. The short transfer definition name must be a unique, four-character
name if you specify to have MIMIX manage your RDB directory entries. It is
recommended that you use the default value *GEN to generate the name. The
generated name is a concatenation of the first character of the transfer definition
name, the last character of the system 1 name, and the last character of the system 2
name; the fourth character is either a blank, a letter (A - Z), or a single digit (0 - 9).
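As a worked illustration, for a transfer definition named PRIMARY CHICAGO
NEWYORK (a hypothetical name used only for this example), *GEN would produce a
short name beginning with POK (P from PRIMARY, O from CHICAGO, K from
NEWYORK), with the fourth character assigned by MIMIX to keep the name unique.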
Transfer protocol (PROTOCOL) This parameter specifies the communications
protocol to be used. Each protocol has a set of related parameters. If you change the
protocol specified after you have created the transfer definition, MIMIX saves
information about both protocols.
Notes:
• MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) or OptiConnect/400 for communications protocols. Vision
Solutions will only assist customers in determining possible workarounds when
communication-related issues arise while using SNA or OptiConnect. If you
create transfer definitions for MIMIX to use these protocols, be certain that your
business can accept this limitation.
• TCP/IP is the only communications protocol that is supported by MIMIX in an IBM
i clustering environment.
For the *TCP protocol the following parameters apply:
• System x host name or address (HOST1, HOST2) These two parameters


specify the host name or address of system 1 and system 2, respectively. The
name is a mixed-case host alias name or a TCP address (nnn.nnn.nnn.nnn) and
can be up to 256 characters in length. For the HOST1 parameter, the special
value *SYS1 indicates that the host name is the same as the name specified for
System 1 in the Transfer definition parameter. Similarly, for the HOST2 parameter,
the special value *SYS2 indicates that the host name is the same as the name
specified for System 2 in the Transfer definition parameter.
Note: The specified value is also used when starting the Lakeview TCP Server
(STRSVR command). The HOST parameter on the STRSVR command is
limited to 80 or fewer characters.
• System x port number or alias (PORT1, PORT2) These two parameters specify
the port number or port alias of system 1 and system 2, respectively. The value of
each parameter can be a 14-character mixed-case TCP port number or port alias
with a range from 1000 through 55534. To avoid potential conflicts with
designations made by the operating system, it is recommended that you use
values between 40000 and 55500. By default, the PORT1 parameter uses the
port 50410. For the PORT2 parameter, the default special value *PORT1
indicates that the value specified on the System 1 port number or alias (PORT1)
parameter is used. If you configured TCP using port aliases in the service table,
specify the alias name instead of the port number.
Note: If you have transfer definitions for multiple MIMIX installations, ensure that
there is a gap of at least 10 between the port numbers specified in the transfer
definitions. For example, if port 40000 is used in the transfer definition for
the MIMIXA installation, then the transfer definition for the MIMIXB installation
should use 40010 or higher.
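To illustrate the recommended gap, the transfer definitions created within two
installations might specify ports as follows (system names are hypothetical; each
command is run within its own installation's library environment):
CRTTFRDFN TFRDFN(PRIMARY SYSA SYSB) PORT1(40000)  /* MIMIXA installation */
CRTTFRDFN TFRDFN(PRIMARY SYSA SYSB) PORT1(40010)  /* MIMIXB installation */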
The Relational database (RDB) parameter also applies to *TCP protocol.
For the *SNA protocol the following parameters apply:
• System x location name (LOCNAME1, LOCNAME2) These two parameters
specify the location name or address of system 1 and system 2, respectively. The
value of each parameter is the unique location name that identifies the system to
remote devices. For the LOCNAME1 parameter, the special value *SYS1
indicates that the location name is the same as the name specified for System 1
on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2
parameter, the special value *SYS2 indicates that the location name is the same
as the name specified for System 2 on the Transfer definition (TFRDFN)
parameter.
• System x network identifier (NETID1, NETID2) These two parameters specify
the name of the network for system 1 and system 2, respectively. The default value
*LOC indicates that the network identifier for the location name associated with
the system is used. The special value *NETATR indicates that the value specified
in the system network attributes is used. The special value *NONE indicates that
the network has no name. For the NETID2 parameter, the special value *NETID1
indicates that the network identifier specified on the System 1 network identifier
(NETID1) parameter is used.
• SNA mode (MODE) This parameter specifies the name of the mode description
used for communication. The default name is MIMIX. The special value *NETATR
indicates that the value specified in the system network attributes is used.
The following parameters apply for the *OPTI protocol:
• System x location name (LOCNAME1, LOCNAME2) These two parameters
specify the location name or address of system 1 and system 2, respectively. The
value of each parameter is the unique location name that identifies the system to
remote devices. For the LOCNAME1 parameter, the special value *SYS1
indicates that the location name is the same as the name specified for System 1
on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2
parameter, the special value *SYS2 indicates that the location name is the same
as the name specified for System 2 on the Transfer definition (TFRDFN)
parameter.
Threshold size (THLDSIZE) This parameter is accessible when you press F10
(Additional parameters). It specifies the maximum size of files and objects that are
sent; if a file or object exceeds the threshold, it is not sent. Valid values range from 1
through 9999999. The special value *NOMAX indicates that no maximum is set.
Transmitting large files and objects can consume excessive communications
bandwidth and negatively impact communications performance, especially over slow
communication lines.
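For example, to limit an existing definition to sending files and objects of 100 MB or
less (a sketch; the definition name is hypothetical and the value should suit your line
speed), the command might look like this:
CHGTFRDFN TFRDFN(PRIMARY SYSA SYSB) THLDSIZE(100)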
Manage autostart job entries (MNGAJE) This parameter is accessible when you
press F10 (Additional parameters). This determines whether MIMIX will use this
transfer definition to manage an autostart job entry for starting the TCP server for the
MIMIXQGPL/MIMIXSBS subsystem description. The shipped default is *YES,
whereby MIMIX will add, change, or remove an autostart job entry based on changes
to this transfer definition. This parameter only affects transfer definitions for TCP
protocol which have host names of 80 or fewer characters. For a given port number or
alias, only one autostart job entry will be created regardless of how many transfer
definitions use that port number or alias. An autostart job entry is created on each
system related to the transfer definition.
When configuring a new installation, transfer definitions and MIMIX-added autostart
job entries do not exist on other systems until after the first time the MIMIX managers
are started. Therefore, during initial configuration you may need to manually start the
TCP server on the other systems using the STRSVR command.
Relational database (RDB) This parameter is accessible when you press F10
(Additional parameters) and is valid in transfer definitions used internally by system
manager processes and in transfer definitions used in environments configured to
use remote journaling (the default) for user journal replication processes. This parameter
consists of four relational database values which identify the communications path
used by the IBM i remote journal function to transport journal entries: a relational
database directory entry name, two system database names, and a management
indicator for directory entries. This parameter creates two RDB directory entries, one
on each system identified in the transfer definition. Each entry identifies the other
system’s relational database.
Note: If you use the value *ANY for both system 1 and system 2 on the transfer
definition, *NONE is used for the directory entry name, and no directory entry
is generated.

If MIMIX is managing your RDB directory entries, a directory entry is
generated if you use the value *ANY for only one of the systems on the
transfer definition. This directory entry is generated for the system that is
specified as something other than *ANY. For more information about the use
of the value *ANY on transfer definitions, see “Using contextual (*ANY)
transfer definitions” on page 181.
The four elements of the relational database parameter are:
• Directory entry This element specifies the name of the relational database entry.
The default value *GEN causes MIMIX to create an RDB entry and add it to the
relational database. The generated name is in the format MX_nnnnnnnnnn_ssss,
where nnnnnnnnnn is the 10-character installation name, and ssss is the transfer
definition short name. If you specify a value for the RDB parameter, it is
recommended that you limit its length to 18 characters. When you specify the
special value *NONE, the directory entry is not added or changed by MIMIX.
• System 1 relational database This element specifies the name of the relational
database for System 1. The default value *SYSDB specifies that MIMIX will
determine the relational database name.
• System 2 relational database This element specifies the name of the relational
database for System 2. The default value *SYSDB specifies that MIMIX will
determine the relational database name.
Note: For the System 1 relational database and System 2 relational database
elements, if the remote journaling environment uses an independent ASP,
specify the database name for the independent ASP. If you are managing
the RDB directory entries and need to determine the system database
name associated with either of these elements, see “Finding the system
database name for RDB directory entries” on page 180.
• Manage directory entries This element specifies whether MIMIX manages the
relational database directory entries associated with the transfer definition,
regardless of whether the directory entry name is specified or generated by
MIMIX. Management of the relational database directory entries
consists of adding, changing, and deleting the directory entries on both systems,
as needed, when the transfer definition is created, changed, or deleted. The
special value *DFT indicates that MIMIX manages the relational database
directory entries only when the name is generated using the special value *GEN
on the Directory entry element of this parameter. The special value *YES indicates
that the directory entries on each system are managed by MIMIX. If the relational
database directory entries do not exist, MIMIX adds them and sets any needed
system values. If they do exist, MIMIX changes them to match the values
specified by the Relational database (RDB) parameter. When any of the transfer
definition relational database values change, the directory entry is also changed.
When the transfer definition is deleted, the directory entries are also deleted.
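As a sketch combining the four elements (the definition name is hypothetical; with
*GEN and *DFT, MIMIX both generates the directory entry name and manages the
entries), the command might look like this:
CHGTFRDFN TFRDFN(PRIMARY SYSA SYSB) RDB(*GEN *SYSDB *SYSDB *DFT)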

Finding the system database name for RDB directory entries
If you are managing the RDB directory entries and you need to determine the system
database name, do the following:
1. Log in to the system that was specified for System 1 in the transfer definition.
2. From the command line type DSPRDBDIRE and press Enter. Look for the
relational database directory entry that has a corresponding remote location name
of *LOCAL.
3. Repeat steps 1 and 2 to find the system database name for System 2.
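For example, on a system whose local database name is CHICAGO (a hypothetical
name), the entry to note in the DSPRDBDIRE output would pair the relational
database name CHICAGO with the remote location *LOCAL; CHICAGO would be the
system database name to use.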

Using IBM i commands to work with RDB directory entries


The Manage directory entries element of the Relational Database (RDB) parameter in
the transfer definition determines whether MIMIX manages RDB directory entries. If
you did not accept default values of *GEN for the Directory entry element and *DFT
for the Manage directory entries element of the RDB parameter when you created
your transfer definition, or if you specified *NO for the Manage directory entries
element, you can use IBM i commands to add (ADDRDBDIRE) and change
(CHGRDBDIRE) RDB directory entries. You can also use these options from the
Work with Relational Database Directory entries display (WRKRDBDIRE command):
1=Add, 2=Change, and 5=Display details.


Using contextual (*ANY) transfer definitions


When the three-part name of a transfer definition specifies the value *ANY for System 1
or System 2 instead of a system name, MIMIX uses information from the context in which
the transfer definition is called to resolve to the correct system. Such a transfer
definition is called a contextual transfer definition.
For remote journaling environments, best practice is to use transfer definitions that
identify specific system definitions in the three-part transfer definition name. Although
you can use contextual transfer definitions with remote journaling, they are not
recommended. For more information, see “Considerations for remote journaling” on
page 182.
In MIMIX source-send configurations, a contextual transfer definition may be an aid in
configuration. For example, you can create a transfer definition named PRIMARY SYSA
*ANY. This definition can be used to provide the necessary parameters for
establishing communications between SYSA and any other system.
The *ANY value represents several transfer definitions, one for each system
definition. For example, a transfer definition PRIMARY SYSA *ANY in an installation
that has three system definitions (SYSA, SYSB, INTRA) represents three transfer
definitions:
• PRIMARY SYSA SYSA
• PRIMARY SYSA SYSB
• PRIMARY SYSA INTRA
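Such a contextual definition could be created with a command like the following (a
sketch using SYSA from the example above and command defaults for all other
parameters):
CRTTFRDFN TFRDFN(PRIMARY SYSA *ANY) TEXT('Contextual definition for SYSA')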

Search and selection process


Data group definitions and system definitions include parameters that identify
associated transfer definitions. When an operation requires a transfer definition,
MIMIX uses the context of the operation to determine the fully qualified name. For
example, when starting a data group, MIMIX uses information in the data group
definition, the systems specified in the data group name and the specified transfer
definition name, to derive the fully qualified transfer definition name. If MIMIX is still
unable to find an appropriate transfer definition, the following search order is used:
1. PRIMARY SYSA SYSB
2. PRIMARY *ANY SYSB
3. PRIMARY SYSA *ANY
4. PRIMARY SYSB SYSA
5. PRIMARY *ANY SYSA
6. PRIMARY SYSB *ANY
7. PRIMARY *ANY *ANY
When you specify *ANY in the three-part name of a transfer definition, and you have
specified *TFRDFN for the Protocol parameter on such commands as RUNCMD or
VFYCMNLNK, MIMIX searches your system and selects those systems with a

transfer definition that matches the transfer definition that you specified, for example,
(PRIMARY SYSA SYSB).

Considerations for remote journaling


Best practice for a remote journaling environment is to use a transfer definition that
identifies specific system definitions in the three-part transfer definition name. By
specifying both systems, the transfer definition can be used for replication from either
direction.
If you do use a contextual transfer definition in a remote journaling environment, the
value *ANY can be used for the system where the local journal (source) resides. This
value can be either the second or third parts of the three-part name. For example, a
transfer definition of PRIMARY name *ANY is valid in a remote journaling
environment, where name identifies the system definition for the system where the
remote journal (target) resides. A transfer definition of PRIMARY *ANY name is also
valid. The command would look like this:
CRTTFRDFN TFRDFN(PRIMARY name *ANY) TEXT('description')
MIMIX Remote Journal support requires that each transfer definition that will be used
has a relational database (RDB) directory entry to properly identify the remote
system. An RDB directory entry cannot be added to a transfer definition using the
value *ANY for the remote system.
To support a switchable data group when using contextual transfer definitions, each
system in the remote journaling environment must be defined by a contextual transfer
definition. For example, in an environment with systems NEWYORK and CHICAGO,
you would need a transfer definition named PRIMARY NEWYORK *ANY as well as a
transfer definition named PRIMARY CHICAGO *ANY.
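For this example environment, the two definitions could be created as follows (a
sketch using command defaults for the remaining parameters):
CRTTFRDFN TFRDFN(PRIMARY NEWYORK *ANY) TEXT('Contextual definition for NEWYORK')
CRTTFRDFN TFRDFN(PRIMARY CHICAGO *ANY) TEXT('Contextual definition for CHICAGO')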

Considerations for MIMIX source-send configurations


When creating a transfer definition for a MIMIX source-send configuration that uses
contextual system capability (*ANY) and the TCP protocol, take the default values for
other parameters on the CRTTFRDFN command. For example, using the naming
conventions for contextual systems, the command would look like this:
CRTTFRDFN TFRDFN(PRIMARY *ANY *ANY) TEXT('Recommended
configuration')
Note: Ensure that you consult with your site TCP administrator before making these
changes.
For an Intra environment, an additional transfer definition is needed. If there is an
Intra system definition defined, the transfer definition must specify a unique port
number to communicate with Intra. The following is an example of an additional
transfer definition that uses port number 42345 to establish communications with the
Intra system:
CRTTFRDFN TFRDFN(PRIMARY *ANY INTRA) PORT2(42345)
TEXT('Recommended configuration')


Naming conventions for contextual transfer definitions


The following suggested naming conventions make the contextual (*ANY) transfer
definitions more useful in your environment.
*TCP protocol: The MIMIX system definition names should correspond to DNS or
host table entries that tie the names to a specific TCP address.
*SNA protocol: The MIMIX system definition names must match SNA environment
(controller names) for the respective systems. The MIMIX system definitions should
match the net attribute system name (DSPNETA). For example, with two MIMIX
systems called SYSA and SYSB, on the SYSA system there would have to be a
controller called SYSB that is used for SYSA to SYSB communications. Conversely,
on SYSB, a SYSA controller would be necessary.
*OPTI protocol: The MIMIX system definition names must match the OptiConnect
names for the systems (DSPOPCLNK).
Note: MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) or OptiConnect/400 for communications protocols. Vision
Solutions will only assist customers in determining possible workarounds when
communication-related issues arise while using SNA or OptiConnect. If you
create transfer definitions for MIMIX to use these protocols, be certain that your
business can accept this limitation.

Additional usage considerations for contextual transfer definitions


The Run Command (RUNCMD) and the Verify Communications Link (VFYCMNLNK)
commands require specific system names to verify communications between
systems. These commands do not handle transfer definitions that specify *ANY in the
three-part name.
When the VFYCMNLNK command is called from option 11 on the Work with System
Definitions display or option 11 on the Work with Data Groups display, MIMIX
determines the specific system names. However, when the command is called from
option 11 on the Work with Transfer Definitions display, entered from a command line,
or included in automation programs, you will receive an error message if the transfer
definition has the value *ANY for either system 1 or system 2.

Creating a transfer definition
System-level communication must be configured and operational before you can use
a transfer definition.
To create a transfer definition, do the following:
1. Access the Work with Transfer Definitions display by doing one of the following:
• From the MIMIX Configuration Menu, select option 2 (Work with transfer
definitions) and press Enter.
• From the MIMIX Cluster Menu, select option 21 (Work with transfer definitions)
and press Enter.
2. The Work with Transfer Definitions display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create Transfer Definition display appears. Do the following:
a. At the Transfer definition prompts, specify a name and the two system
definitions between which communications will occur.
b. At the Short transfer definition name prompt, accept the default value *GEN to
generate a short transfer definition name. This short transfer definition name is
used in generating relational database directory entry names if you specify to
have MIMIX manage your RDB directory entries.
c. At the Transfer protocol prompt, specify the communications protocol you
want, then press Enter. The value *TCP is strongly recommended for all
environments and is the only protocol supported by MIMIX in an IBM i
clustering environment.
Note: MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) or OptiConnect/400 for communications protocols.
Vision Solutions will only assist customers in determining possible
workarounds when communication-related issues arise while using SNA
or OptiConnect. If you create transfer definitions for MIMIX to use these
protocols, be certain that your business can accept this limitation.
4. Additional parameters for the protocol you selected appear on the display. Verify
that the values shown are what you want. Make any necessary changes.
5. At the Description prompt, type a text description of the transfer definition,
enclosed in apostrophes.
6. Optional step: If you need to set a maximum size for files and objects to be
transferred, press F10 (Additional parameters). At the Threshold size (MB)
prompt, specify a valid value.
7. Optional step: If you need to change the relational database information, press
F10 (Additional parameters). See “Tips for transfer definition parameters” on
page 176 for details about the Relational database (RDB) parameter. If MIMIX is
not managing the RDB directory entries, it may be necessary to change the RDB
values.
8. To create the transfer definition, press Enter.
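As a command-level sketch of this procedure (the definition name, systems, and port
shown are hypothetical; HOST1 and HOST2 default to the system definition names),
option 1 corresponds to a command like:
CRTTFRDFN TFRDFN(PRIMARY SYSA SYSB) PROTOCOL(*TCP) PORT1(50410)
TEXT('Primary transfer definition')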


Changing a transfer definition


To change a transfer definition, do the following:
1. Access the Work with Transfer Definitions display: from the MIMIX Configuration
Menu, select option 2 (Work with transfer definitions) and press Enter.
2. The Work with Transfer Definitions display appears. Type 2 (Change) next to the
definition you want and press Enter.
3. The Change Transfer Definition (CHGTFRDFN) display appears. If you want to
change which protocol is used between the specified systems, specify the value
you want for the Transfer protocol prompt.
Note: MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) or OptiConnect/400 for communications protocols.
Vision Solutions will only assist customers in determining possible
workarounds when communication-related issues arise while using SNA or
OptiConnect. If you create transfer definitions for MIMIX to use these
protocols, be certain that your business can accept this limitation.
4. Press Enter to display the parameters for the specified transfer protocol. Locate
the prompt for the parameter you need to change and specify the value you want.
Press F1 (Help) for more information about the values for each parameter.
5. If you need to set a maximum size for files and objects to be transferred, press
F10 (Additional parameters). At the Threshold size (MB) prompt, specify a valid
value.
6. If you need to create or remove an autostart job entry for the TCP server, press
F10 (Additional parameters). At the Manage autostart job entries prompt, specify
the value you want. When *YES is specified, MIMIX will add, change, or remove
the autostart entry based on changes to the transfer definition. For a given port
number or alias, only one autostart job entry will be created regardless of how
many transfer definitions use that port number or alias. An autostart job entry is
created on each system related to the transfer definition.
7. If you need to change your relational database information, press F10 (Additional
parameters). At the Relational database (RDB) prompt, specify the desired values
for each of the four elements and press Enter. For special considerations when
changing your transfer definitions that are configured to use RDB directory entries
see “Tips for transfer definition parameters” on page 176.
8. To save changes to the transfer definition, press Enter.
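For example, to have MIMIX manage the autostart job entry from the command line
(a sketch; the definition name is hypothetical), option 2 corresponds to a command
like:
CHGTFRDFN TFRDFN(PRIMARY SYSA SYSB) MNGAJE(*YES)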

Changing a transfer definition to support remote journaling


If the value *ANY is specified for either system in the transfer definition, before you
complete this procedure refer to “Using contextual (*ANY) transfer definitions” on
page 181. Contextual transfer definitions are not recommended in a remote journaling
environment.

To support remote journaling, modify the transfer definition you plan to use as follows:
1. From the MIMIX Configuration menu, select option 2 (Work with transfer
definitions) and press Enter.
2. The Work with Transfer Definitions display appears. Type a 2 (Change) next to the
definition you want and press Enter.
3. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10
(Additional parameters), then press Page Down.
4. At the Relational database (RDB) prompt, specify the desired values for each of
the four elements and press Enter.
Note: See “Tips for transfer definition parameters” on page 176 for detailed
information about the Relational database (RDB) parameter and “Finding
the system database name for RDB directory entries” on page 180 for
information when changing transfer definitions configured to use RDB
directory entries.
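A command-level sketch of this change (the definition name is hypothetical; *YES
has MIMIX manage the directory entries on both systems) might look like this:
CHGTFRDFN TFRDFN(PRIMARY SYSA SYSB) RDB(*GEN *SYSDB *SYSDB *YES)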


Starting the DDM TCP/IP server


Use this procedure if you need to start the DDM TCP/IP server in an environment
configured for MIMIX RJ support.
From the system on which you want to start the TCP server, do the following:
1. Ensure that the DDM TCP/IP attributes allow the DDM server to be automatically
started when the TCP/IP server is started (STRTCP). Do the following:
a. Type the command CHGDDMTCPA and press F4 (Prompt).
b. Check the value of the Autostart server prompt. If the value is *YES, it is set
appropriately. Otherwise, change the value to *YES and press Enter.
2. To prevent install problems due to locks on the library name, ensure that the
MIMIX product library is not in your user library list.
3. To start the DDM server, type the command STRTCPSVR(*DDM) and press Enter.

Verifying that the DDM TCP/IP server is running


Do the following:
1. Enter the command NETSTAT OPTION(*CNN)
2. The Work with TCP/IP Connection Status display appears. Look for these servers
in the Local Port column:
• ddm
• ddm-ssl
3. These servers should exist and should have a value of Listen in the State column.
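When the servers are active, the display includes lines similar to the following.
(The remote address and idle time values shown here are illustrative only.)

   Remote           Remote     Local
   Address          Port       Port       Idle Time  State
   *                *          ddm        001:23:45  Listen
   *                *          ddm-ssl    000:45:02  Listen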

Checking the DDM password validation level
MIMIX Remote Journal support uses the DDM communications infrastructure. This
infrastructure can be configured to require a password when a server connection is
made. The MIMIXOWN user profile, which establishes the remote journal connection,
ships with a preset password so that it is consistent on all systems.
If you have implemented DDM password validation on any systems where MIMIX will
be used, you should verify the DDM level. If the MIMIXOWN password is not the
same on all systems in the MIMIX environment, you may need to change the
MIMIXOWN user profile or the DDM security level to allow MIMIX Remote Journal
support to function properly. These changes have security implications of which you
should be aware.
If the MIMIXOWN password has not been changed from its shipped, preset value, no
action is necessary.
If the MIMIXOWN password has been changed from its shipped value, do the
following on both systems to check the DDM password validation level in use:
1. From a command line, type CHGDDMTCPA and press F4 (Prompt).
2. Check the value of the Lowest authentication method (PWDRQD) field:
• If the value is *NO, *USRID, or *VLDONLY, no further action is required. Press
F12 (Cancel).
• If the field contains any other value, you must take further action to enable
MIMIX RJ support to function in your environment. Press F12, then continue
with the next step.
3. Use one of the following options to change your environment to enable MIMIX RJ
support to function. Each option has security implications. You must decide which
option is best for your environment.
• “Option 1: Manually update MIMIXOWN user profile for DDM environment” on
page 188
• “Option 2: Force MIMIX to change password for MIMIXOWN user profile” on
page 189.
• “Option 3: Allow user profiles without passwords” on page 189.

Option 1: Manually update MIMIXOWN user profile for DDM environment


This option requires you to manually change the MIMIXOWN user profile password
and adds server authentication entries to recognize the MIMIXOWN user profile.
MIMIX must be installed and transfer definitions must exist before you can make
these changes.
Note: The MIMIXOWN user profile is automatically created with a password when
MIMIX is installed. You should not need to do these steps unless you are
having authority issues when building or starting RJ links.

Use the License Manager command CHGMMXPRF to change the MIMIXOWN user
profile.


Do the following from each system in your MIMIX installation:


1. Use the CHGMMXPRF command to change the MIMIXOWN password and
update existing authentication entries for MIMIXOWN. Enter the following
command:
LAKEVIEW/CHGMMXPRF PASSWORD(user-defined-password)
Note: The password is case sensitive and must be the same on all systems in
the MIMIX network. If the password does not match on all systems, some
MIMIX functions will fail with security error message LVE0127 when the
system manager is started.
2. Be sure to repeat this procedure on all nodes where MIMIX is installed.

Option 2: Force MIMIX to change password for MIMIXOWN user profile


This option causes MIMIX to change the password for MIMIXOWN and update all
necessary server authentication entries associated with RJ links on all systems in the
MIMIX installation. You may need to use this option if you manually changed the
password for the MIMIXOWN user profile and you have added an RJ link that uses
either a new transfer definition or a transfer definition in which you changed the
values specified for the RDB parameter.
By manually changing the password, you cause a password mismatch. When the
system manager is started, MIMIX automatically resolves the mismatch by changing
the password and updating the authentication entries.
Note: The password generated by MIMIX is a randomly generated 20-character
string using mixed case letters, numbers, and special characters. This
password is not stored by MIMIX.
Do the following from a management system:
1. To access all RJ links used by MIMIX system managers, enter the command:
WRKRJLNK PROCESS(*INTERNAL)
2. Use option 10 (End) to end all of the displayed RJ links.
3. On the local system only, use the following command to change the MIMIXOWN
user profile to have a password and to prevent signing on with the profile:
CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(user-defined-password)
INLMNU(*SIGNOFF)
4. To start the system managers and RJ links, enter the following command:
STRMMXMGR SYSDFN(*ALL)

Option 3: Allow user profiles without passwords


This option changes DDM TCP attributes to allow user profiles without passwords to
function in environments that use DDM password validation. Be aware that this option
reduces the security of the DDM TCP connection. You can use this option before or
after MIMIX is installed. However, this option should be performed before configuring
or starting MIMIX.
Note: The CHGMMXPRF command used in Option 1 is available in the version of
License Manager shipped with service pack 8.0.05.00 and later.
Do the following from a command line on each system in the installation:
Specify either *VLDONLY or *USRID as the value for PWDRQD in the following
command and press Enter:
CHGDDMTCPA PWDRQD(value)
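For example, to require only that a valid user ID be sent on the connection request:

CHGDDMTCPA PWDRQD(*USRID)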


Starting the TCP/IP server


Use this procedure if you need to manually start the TCP/IP server.
Once the TCP communication connections have been defined in a transfer definition,
the TCP server must be started on each of the systems identified by the transfer
definition.
You can also start the TCP/IP server automatically through an autostart job entry.
Either you can change the transfer definition to allow MIMIX to create and manage
the autostart job entry for the TCP/IP server, or you can add your own autostart job
entry. MIMIX only manages entries for the server when they are created by transfer
definitions.
When configuring a new installation, transfer definitions and MIMIX-added autostart
job entries do not exist on other systems until after the first time the MIMIX managers
are started. Therefore, during initial configuration you may need to manually start the
TCP server on the other systems using the STRSVR command.
Note: Use the host name and port number (or port alias) defined in the transfer
definition for the system on which you are running this command.
Do the following on the system on which you want to start the TCP server:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and
press Enter.
2. The Utilities Menu appears. Select option 51 (Start TCP server) and press Enter.
3. The Start Lakeview TCP Server display appears. At the Host name or address
prompt, specify the host name or address for the local system as defined in the
transfer definition.
4. At the Port number or alias prompt, specify the port number or alias as defined in
the transfer definition for the local system.
Note: If you specify an alias, you must have an entry in the service table on this
system that equates the alias to the port number.
5. Press Enter.
6. Verify that the server job is running under the MIMIX subsystem on that system.
You can use the Work with Active Jobs (WRKACTJOB) command to look for a job
under the MIMIXSBS subsystem with a function of PGM-LVSERVER.
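For example, the following command limits the Work with Active Jobs display to jobs
running in the MIMIXSBS subsystem:

WRKACTJOB SBS(MIMIXSBS)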

Using autostart job entries to start the TCP server
To use TCP/IP communications, the MIMIX TCP/IP server must be started each time
the MIMIX subsystem (MIMIXSBS) is started. Because this can become a
time-consuming task that is easily forgotten, MIMIX supports automatically
creating and managing autostart job entries for the TCP server with the MIMIXSBS
subsystem. MIMIX does this when transfer definitions for TCP protocol specify *YES
for the Manage autostart job entries (MNGAJE) parameter.
The autostart job entry uses a job description that contains the STRSVR command
which will automatically start the Lakeview TCP server when the MIMIXSBS
subsystem is started. The STRSVR command is defined in the Request data or
command (RQSDTA) parameter of the job description.
When configuring a new installation, transfer definitions and MIMIX-added autostart
job entries do not exist on other systems until after the first time the MIMIX managers
are started. Therefore, during initial configuration you may need to manually start the
TCP server on the other systems using the STRSVR command.
If you prefer, you can create and manage autostart job entries yourself. The transfer
definition must specify MNGAJE(*NO) and you must have an autostart job entry on
each system that can use the transfer definition.

Identifying the current autostart job entry information


This procedure enables you to identify the autostart job entry for the STRSVR
command in the MIMIXSBS subsystem and display the current information within the
job description associated with the entry.
To display the autostart job entry information, do the following:
1. Type the command DSPSBSD MIMIXQGPL/MIMIXSBS and press Enter. The
Display Subsystem Description display appears.
2. Type 3 (Autostart job entries) and press Enter. The Display Autostart Job Entries
display appears.
3. The columns Job, Job Description, and Library identify autostart job names and
their job description information. Locate the name and library of the job description
for the autostart job entry for the STRSVR command. Typically, this job description name is
either the port alias name or PORTnnnnn where nnnnn is the port number and the
library name is the name of the MIMIX installation library. Press Enter.
4. To display the STRSVR details specified in the job description, do the following:
a. Using the job description information identified in Step 3, type the command
DSPJOBD library/job_description and press Enter.
b. The Display Job Description display appears. Page down to view the
Request data field. The information in this field shows the current values of the
STRSVR command used by the autostart job entry.
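For example, if the installation library is named MIMIX and the autostart job entry
uses port 50410 (both names are illustrative), the command would be:

DSPJOBD MIMIX/PORT50410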

Changing an autostart job entry and its related job description


When the host or port information for a system identified in a transfer definition
changes, those changes must also be reflected in autostart job entries for the
STRSVR command and in their associated job descriptions. MIMIX automatically
updates this information for MIMIX-managed autostart job entries when the transfer
definition is updated.
However, if the transfer definition specifies MNGAJE(*NO) and you are managing the
autostart job entries for the STRSVR command and their associated job descriptions
yourself, you must update them when the host or port information for a system in the
MIMIX environment changes. Specifically, the following changes to a transfer
definition require changing a user-managed autostart job entry or its associated job
description on the local system:
• A change to the port number or alias identified in the PORT1 or PORT2
parameters requires replacing the job description and autostart job entry.
• A change to the host name or address identified in the HOST1 or HOST2
parameters requires changing the job description.
• If the transfer definition was renamed or copied so that the value of
HOST1(*SYS1) or HOST2(*SYS2) no longer resolves to the same system
definition, the job description must be changed.

Using a different job description for an autostart job entry


When MIMIX manages autostart job entries for the STRSVR command, the default
job description used to submit the job is named MIMIXCMN in library MIMIXQGPL. If
you want the STRSVR request to run using a different job description, you can do the
following:
1. Identify the job description and library for the autostart job entry using the
procedure in “Identifying the current autostart job entry information” on page 192.
2. Type CHGJOBD and press F4 (Prompt). The Change Job Description display
appears. Do the following:
a. For the Job description and Library prompts, specify the job description and
library names identified in Step 1.
b. Press F10 (Additional parameters), then Page Down.
c. The Request data or command prompt shows the current values of the
STRSVR command. Change the JOBD parameter shown to specify the library
and job description you want.
Important! Change only the JOBD information for the STRSVR command
specified within the RQSDTA parameter. Do not change the HOST or PORT
values when the autostart job entry is managed by MIMIX.
d. Press Enter.

Updating host information for a user-managed autostart job entry
Use this procedure to update a user-managed autostart job entry which starts the
STRSVR command with the MIMIXSBS subsystem so that the request is submitted
with the correct host information. Autostart job entries for the server are user-
managed when the transfer definition specifies MNGAJE(*NO).
Important! Do not use this procedure for MIMIX-managed autostart job entries.
Perform this procedure from the local system, which is the system for which
information changed within the transfer definition. Do the following:
1. Identify the job description and library for the autostart job entry using the
procedure in “Identifying the current autostart job entry information” on page 192.
This information is needed in the following step.
2. Type CHGJOBD and press F4 (Prompt). The Change Job Description display
appears. Do the following:
a. For the Job description and Library prompts, specify the job description and
library names identified in Step 1.
b. Press F10 (Additional parameters), then Page Down to locate Request data or
command (RQSDTA).
c. The Request data or command prompt shows the current values of the
STRSVR command in the following format. Change the value specified for
HOST so that local_host_name is the host name or address specified
for the local system in the transfer definition.
'installation_library/STRSVR HOST(''local_host_name'')
PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)'
d. Press Enter.
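For example, if the job description PORT50410 is in installation library MIMIX and the
new host name for the local system is SYSTEMA (all names here are illustrative), the
complete command would be similar to:

CHGJOBD JOBD(MIMIX/PORT50410) RQSDTA('MIMIX/STRSVR
HOST(''SYSTEMA'') PORT(50410) JOBD(MIMIXQGPL/MIMIXCMN)')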

Updating port information for a user-managed autostart job entry


This procedure identifies how to update the port information for a user-managed
autostart job entry that starts the Lakeview TCP server with the MIMIXSBS
subsystem. Autostart job entries for the server are user-managed when the transfer
definition specifies MNGAJE(*NO).
Important! Do not use this procedure for MIMIX-managed autostart job entries.
Perform this procedure from the local system, which is the system for which
information changed within the transfer definition. Do the following:
1. Identify the job name, job description, and library for the autostart job entry using
the procedure in “Identifying the current autostart job entry information” on
page 192. This information is needed in the following steps.
2. Remove the old autostart job entry by specifying the job name from Step 1 for
job_name in the following command:
RMVAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(job_name)
3. Remove the old job description by specifying the job description name and library
from Step 1 in the following command:

DLTJOBD JOBD(library/job_description)
4. Create a new job description for the autostart job entry using the following
command:
CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD)
TOLIB(installation_library) NEWOBJ(job_description_name)
where installation_library is the name of the library for the MIMIX
installation and where job_description_name follows the recommendation to
identify the port for the local system by specifying the port number in the format
PORTnnnnn or the port alias.
5. Type CHGJOBD and press F4 (Prompt). The Change Job Description display
appears. Do the following:
a. For the Job description and Library prompts, specify the job description and
library you created in Step 4.
b. Press F10 (Additional parameters).
c. Page Down to locate Request data or command (RQSDTA).
d. At the Request data or command prompt, specify the STRSVR command in
the following format:
'installation_library/STRSVR HOST(''local_host_name'')
PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)'
Where the values to specify are:
• installation_library is the name of the library for the MIMIX
installation
• local_host_name is the host name or address from the transfer definition
for the local system
• nnnnn is the new port information from the transfer definition for the local
system, specified as either the port number or the port alias.
e. Press Enter. The job description is changed.
6. Create a new autostart job entry using the following command:
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(autostart_job_name)
JOBD(installation_library/job_description_name)
Where installation_library/job_description_name specifies the job
description from Step 4 and autostart_job_name specifies the same port
information and format as specified for the job description name.
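For example, assuming the installation library is named MIMIX, the local host name is
SYSTEMA, and the port number changed from 50410 to 50411 (all values here are
illustrative), the complete sequence of commands would be similar to:

RMVAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50410)
DLTJOBD JOBD(MIMIX/PORT50410)
CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD)
TOLIB(MIMIX) NEWOBJ(PORT50411)
CHGJOBD JOBD(MIMIX/PORT50411) RQSDTA('MIMIX/STRSVR
HOST(''SYSTEMA'') PORT(50411) JOBD(MIMIXQGPL/MIMIXCMN)')
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50411)
JOBD(MIMIX/PORT50411)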

Verifying a communications link for system definitions
Do the following to verify that the communications link defined for each system
definition is operational:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, type a 1 (Work with system definitions) and
press Enter.
3. From the Work with System Definitions display, type an 11 (Verify
communications link) next to the system definition you want and press Enter. You
should see a message indicating the link has been verified.
Note: If the system manager is not active, this process only verifies that
communications to the remote system are successful. You will also see a
message in the job log indicating that “communications link failed after 1
request.” This indicates that the remote system could not return
communications to the local system.
4. Repeat this procedure for all system definitions. If the communications link
defined for a system definition uses SNA protocol, do not check the link from the
local system.
Note: If your transfer definition uses the *TCP communications protocol, then
MIMIX uses the Verify Communications Link command to validate the
information that has been specified for the Relational database (RDB)
parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and
System 2 relational database names exist and are available on each
system.

Verifying the communications link for a data group


Before you synchronize data between systems, ensure that the communications link
for the data group is active. This procedure verifies the primary transfer definition
used by the data group. If your configuration requires multiple data groups, be sure to
check communications for each data group definition.
Do the following:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, type a 4 (Work with data group definitions)
and press Enter.
3. From the Work with Data Group Definitions display, type an 11 (Verify
communications link) next to the data group you want and press F4.
4. The Verify Communications Link display appears. Ensure that the values shown
for the prompts are what you want.
5. To start the check, press Enter.
6. You should see a message "VFYCMNLNK command completed successfully."
If your data group definition specifies a secondary transfer definition, use the following
procedure to check all communications links.

Verifying all communications links


The Verify Communications Link (VFYCMNLNK) command requires specific system
names to verify communications between systems. When the command is called from
option 11 on the Work with System Definitions display or option 11 on the Work with
Data Group Definitions display, MIMIX identifies the specific system names.
For transfer definitions using TCP protocol: MIMIX uses the Verify
Communications Link (VFYCMNLNK) command to validate the values specified for
the Relational database (RDB) parameter. MIMIX also uses VFYCMNLNK to verify
that the System 1 and System 2 relational database names exist and are available on
each system.
When the command is called from option 11 on the Work with Transfer Definitions
display or when entered from a command line, you will receive an error message if
the transfer definition specifies the value *ANY for either system 1 or system 2.
1. From the Work with Transfer Definitions display, type an 11 (Verify
communications link) next to all transfer definitions and press Enter.
2. The Verify Communications Link display appears. If you are checking a transfer
definition that specifies the value *ANY for either system, you need to specify a
value for the System 1 or System 2 prompt. Ensure that the values shown for the prompts are what you
want and then press Enter.
You will see the Verify Communications Link display for each transfer definition
you selected.
3. You should see a message "VFYCMNLNK command completed successfully."

CHAPTER 9 Configuring journal definitions

By creating a journal definition you identify to MIMIX a journal environment that can
be used in the replication process. MIMIX uses the journal definition to manage the
journaling environment, including journal receiver management.
A journal definition does not automatically build the underlying journal environment
that it defines. If the journal environment does not exist, it must be built. This can be
done after the journal definition is created. Configuration checklists indicate when to
build the journal environment.
The topics in this chapter include:
• “Configuration processes that create journal definitions” on page 200 describes
the security audit journal (QAUDJRN) and other journal definitions that are
automatically created by MIMIX.
• “Tips for journal definition parameters” on page 201 provides tips for using the
more common options for journal definitions.
• “Journal definition considerations” on page 206 provides things to consider when
creating journal definitions for remote journaling.
• “Journal definition naming conventions” on page 207 describes the naming
conventions used for journal definitions and the specific conventions used by
processes that create target journal definitions for data groups which use remote
journaling.
• “Journal receiver management” on page 213 describes how MIMIX performs
change management and delete management for replication processes.
• “Journal receiver size for replicating large object data” on page 217 provides
procedures to verify that a journal receiver is large enough to accommodate large
IFS stream files and files containing LOB data, and if necessary, to change the
receiver size options.
• “Creating a journal definition” on page 218 provides the steps to follow for creating
a journal definition.
• “Changing a journal definition” on page 220 provides the steps to follow for
changing a journal definition.
• “Building the journaling environment” on page 221 describes the journaling
environment and provides the steps to follow for building it.
• “Changing the journaling environment to use *MAXOPT3” on page 222 describes
considerations and provides procedures for changing the journaling environment
to use the *MAXOPT3 receiver size option.
• “Changing the remote journal environment” on page 225 provides steps to follow
when changing an existing remote journal configuration. The procedure is
appropriate for changing a journal receiver library for the target journal in a remote
journaling environment or for any other changes that affect the target journal.
• “Adding a remote journal link” on page 227 describes how to create a MIMIX RJ
link, which will in turn create a target journal definition with appropriate values to
support remote journaling. In most configurations, the RJ link is automatically
created for you when you follow the steps of the configuration checklists.
• “Changing a remote journal link” on page 228 describes how to change an
existing RJ link.
• “Temporarily changing from RJ to MIMIX processing” on page 229 describes how
to change a data group configured for remote journaling to temporarily use MIMIX
send processing.
• “Changing from remote journaling to MIMIX processing” on page 230 describes
how to change a data group that uses remote journaling so that it uses MIMIX
send processing. Remote journaling is preferred.
• “Removing a remote journaling environment” on page 231 describes how to
remove a remote journaling environment that you no longer need.

Configuration processes that create journal definitions
You can explicitly create journal definitions using the Create Journal Definition
(CRTJRNDFN) command. However, other configuration processes may automatically
create them for you. Journal definitions created by other processes can be changed if
necessary.
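For example, a journal definition could be created explicitly with a command similar to
the following, where the definition name APPJRN and system definition name
SYSTEMA are illustrative and the remaining parameters take their default values:

CRTJRNDFN JRNDFN(APPJRN SYSTEMA) JRN(*JRNDFN *DFT)
JRNRCVPFX(*GEN *DFT)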
When you create system definitions, MIMIX automatically creates a journal definition
named QAUDJRN for the security audit journal (QAUDJRN) on that system. The
QAUDJRN journal, also called the system journal, is used by MIMIX system journal
replication processes. If you do not already have a journaling environment for the
security audit journal, it will be created when the first data group that replicates from
the system journal is started.
When you create a data group definition, MIMIX automatically creates a user journal
definition if one does not already exist. Any journal definitions that are created in this
manner will be named with the value specified in the data group definition.
If the data group was created using default values, the user journal created will be
used with IBM i remote journaling support. Creating a data group definition also
creates a remote journal link which in turn creates the journal definition for the target
journal. The target journal definition is created using values appropriate for remote
journaling.
When system manager processes are started, MIMIX will create all the internal
journal definitions and remote journal links necessary for all system managers in the
installation if they do not already exist.

Journals and journal definitions for internal use


When you create a system definition, MIMIX also automatically creates a journal
definition named MXCFGJRN for that system. This journal definition and its
associated journaling environment are required for internal use by MIMIX. (The
MXCFGJRN journal is created when MIMIX software is installed.)
The journal definitions, remote journal (RJ) links, and journaling environments created
by system manager processes are required for internal use by MIMIX. Each journal
definition for the source system of a system manager RJ link is named MXSYSMGR.
Each journal definition for the target system of a system manager RJ link is named
MXSYS_nn where nn is the remote journal ID (RJ ID) of its source system.
These internal journal definitions and their associated journaling environments should
not be used for other purposes and cannot be deleted.
Journal definitions for internal use by MIMIX are not shown in the default view of the
Work with Journal Definitions display (WRKJRNDFN command). However, you can
access them when needed by subsetting.

Tips for journal definition parameters


This topic provides tips for using the more common options for journal definitions.
Context-sensitive help is available online for all options on the journal definition
commands.
Journal definition (JRNDFN) This parameter is a two-part name that identifies a
journaling environment on a system. The first part of the name identifies the journal
definition. The second part of the name identifies a system definition which represents
the system on which you want the journal to reside.
Journal definitions created by other MIMIX configuration operations may assign
specific names for the first part of the name. See “Journal definition naming
conventions” on page 207.
Journal (JRN) This parameter specifies the qualified name of a journal to which
changes to files or objects to be replicated are journaled. For the journal name, the
default value *JRNDFN uses the name of the journal definition for the name of the
journal.
For the journal library, the default value *DFT allows MIMIX to determine the library
name based on the ASP in which the journal library is allocated, as specified in the
Journal library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses
#MXJRNIASP for the default journal library name; otherwise, the default library name
is #MXJRN.
Journal library ASP (JRNLIBASP) This parameter specifies the auxiliary storage
pool (ASP) from which the system allocates storage for the journal library. You can
use the default value *CRTDFT or you can specify the number of an ASP in the range
1 through 32.
The value *CRTDFT indicates that the command default value for the IBM i Create
Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP)
from which the system allocates storage for the library.
For libraries that are created in a user ASP, all objects in the library must be in the
same ASP as the library.
Journal receiver prefix (JRNRCVPFX) This parameter specifies the prefix to be
used in the name of journal receivers associated with the journal used in the
replication process and the library in which the journal receivers are located.
The prefix must be unique to the journal definition and cannot end in a numeric
character. The default value *GEN for the name prefix indicates that MIMIX will
generate a unique prefix, which usually is the first six characters of the journal
definition name with any trailing numeric characters removed. If that prefix is already
used in another journal definition, a unique six character prefix name is derived from
the definition name. If the journal definition will be used in a configuration which
broadcasts data to multiple systems, there are additional considerations. See “Journal
definition considerations” on page 206.
The value *DFT for the journal receiver library allows MIMIX to determine the library
name based on the ASP in which the journal receiver is allocated, as specified in the
Journal receiver library ASP parameter. If that parameter specifies *ASPDEV, MIMIX
uses #MXJRNIASP for the default journal receiver library name. Otherwise, the
default library name is #MXJRN. You can specify a different name or specify the value
*JRNLIB to use the same library that is used for the associated journal.
Journal receiver library ASP (RCVLIBASP) This parameter specifies the auxiliary
storage pool (ASP) from which the system allocates storage for the journal receiver
library. You can use the default value *CRTDFT or you can specify the number of an
ASP in the range 1 through 32.
The value *CRTDFT indicates that the command default value for the IBM i Create
Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP)
from which the system allocates storage for the library.
For libraries that are created in a user ASP, all objects in the library must be in the
same ASP as the library.
Target journal state (TGTSTATE) This parameter specifies the requested status of
the target journal, and can be used with active journaling support or journal standby
state. Use the default value *ACTIVE to set the target journal state to active when the
data group associated with the journal definition is journaling on the target system
(JRNTGT(*YES)). Use the value *STANDBY to journal objects on the target system
while preventing most journal entries from being deposited into the target journal.
Note: Journal standby state requires that the IBM feature for High Availability
Journal Performance be installed. For more information, see “Configuring for
high availability journal performance enhancements” on page 362.
Target journal inspection (TGTJRNINSP) This parameter specifies whether to
enable target journal inspection on the specified journal. The shipped default value,
*YES, allows the journal to be inspected when the system identified in the journal
definition is the target system for data group replication. Target journal inspection
checks the specified journal for changes to replicated objects that were initiated on
the target system by users or processes other than MIMIX and reports the activity.
When *YES is specified and the specified journal is a user journal, the value *ACTIVE
must be specified for the Target journal state (TGTSTATE). (The IBM feature for High
Availability Journal Performance is not required.) Also, any data group definitions
using this journal definition must allow the data groups to journal on the target system.
Because inspection occurs at the journal level on a system, enabling inspection for a
system journal (QAUDJRN) affects all data groups using the system identified in the
journal definition as their target system. Similarly, enabling inspection for a user
journal affects any data groups using the journal definition as their target system.
Note: To allow full inspection to occur for a specific data group, both its target system
journal definition and its target user journal definition must specify *YES for the
TGTJRNINSP parameter.
Journal caching (JRNCACHE) This parameter specifies whether the system should
cache journal entries in main storage before writing them to disk. This option is only
available if a separately chargeable feature from IBM (Option 42) is available on the
system. The default value, *NONE, prevents unintentional use of this feature. The
value *BOTH results in journal caching on both the source and the target systems.
You can also specify values *SRC or *TGT to perform journal caching on only the
source or target system.
Note: Journal caching requires that the IBM feature for High Availability Journal
Performance be installed. For more information, see “Configuring for high
availability journal performance enhancements” on page 362.
Receiver change management (CHGMGT, THRESHOLD, TIME, RESETTHLD2 or
RESETTHLD) Several parameters control how journal receivers associated with the
replication process are changed.
The Receiver change management (CHGMGT) parameter controls whether MIMIX
performs change management operations for the journal receivers used in the
replication process. The shipped default value of *TIMESIZE results in MIMIX
changing journal receivers by both threshold size and time of day.
The following parameters specify conditions that must be met before change
management can occur.
• Receiver threshold size (MB) (THRESHOLD) You can specify the size, in
megabytes, of the journal receiver at which it is changed. The default value is
6600 MB. This value is used when MIMIX or the system changes the receivers.
If you decide to decrease the Receiver threshold size, you will need to manually
change your journal receiver to reflect this change.
If you change the journal receiver threshold size in the journal definition, the
change is effective with the next receiver change.
• Time of day to change receiver (TIME) You can specify the time of day at which
MIMIX changes the journal receiver. The time is based on a 24 hour clock and
must be specified in HHMMSS format.
• Reset large sequence threshold (RESETTHLD2) You can specify the sequence
number (in millions) at which to reset the receiver sequence number. When the
threshold is reached, the next receiver change resets the sequence number to 1.
Note: RESETTHLD2 accepts larger sequence number values than
RESETTHLD. You can specify a value for only one of these parameters.
RESETTHLD2 is recommended.
For information about how change management occurs in a remote journal
environment and about using other change management choices, see “Journal
receiver management” on page 213.
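
The change-management conditions above can be pictured with a small sketch (illustrative Python only; the function and argument names are invented for the example, and MIMIX's actual evaluation is not shown):

```python
def receiver_change_due(size_mb, threshold_mb, now_hhmmss, change_hhmmss):
    """Sketch of CHGMGT(*TIMESIZE): a receiver change is triggered when
    the attached receiver reaches the THRESHOLD size or when the TIME of
    day (24-hour HHMMSS) arrives."""
    return size_mb >= threshold_mb or now_hhmmss == change_hhmmss

def sequence_reset_due(sequence_millions, resetthld2_millions):
    """Sketch of RESETTHLD2: once the sequence number (in millions)
    reaches the threshold, the next receiver change resets it to 1."""
    return sequence_millions >= resetthld2_millions
```

With the shipped defaults, for instance, a change is requested once the attached receiver reaches 6600 MB.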
Receiver delete management (DLTMGT, KEEPUNSAV, KEEPRCVCNT,
KEEPJRNRCV) Four parameters control how MIMIX handles deleting the journal
receivers associated with the replication process.
The Receiver delete management (DLTMGT) parameter specifies whether or not
MIMIX performs delete management for the journal receivers. By default, MIMIX
performs the delete management operations. MIMIX operations can be adversely
affected if you allow the system or another process to handle delete management.
For example, if another process deletes a journal receiver before MIMIX is finished
with it, replication can be adversely affected.
All of the requirements that you specify in the following parameters must be met
before MIMIX deletes a journal receiver:
• Keep unsaved journal receivers (KEEPUNSAV) You can specify whether or not to
have MIMIX retain any unsaved journal receivers. Retaining unsaved receivers
allows you to back out (roll back) changes in the event that you need to recover
from a disaster. The default value *YES causes MIMIX to keep unsaved journal
receivers until they are saved.
• Keep journal receiver count (KEEPRCVCNT) You can specify the number of
detached journal receivers to retain. For example, if you specify 2 and there are
10 journal receivers including the attached receiver (which is number 10), MIMIX
retains two detached receivers (8 and 9) and deletes receivers 1 through 7.
• Keep journal receivers (days) (KEEPJRNRCV) You can specify the number of
days to retain detached journal receivers. For example, if you specify to keep the
journal receiver for 7 days and the journal receiver is eligible for deletion, it will be
deleted after 7 days have passed from the time of its creation. The exact time of
the deletion may vary. For example, the deletion may occur within a few hours
after the 7 days have passed.
For more information, see “Journal receiver management” on page 213.
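
Taken together, the delete-management criteria behave like the following sketch (illustrative Python; the dict fields and argument names are invented for the example, and the additional requirement that MIMIX has finished using the receiver is omitted):

```python
def receiver_delete_eligible(receiver, detached_newer_count,
                             keepunsav="*YES", keeprcvcnt=2, keepjrnrcv=7):
    """Sketch of the DLTMGT criteria: every condition must be met before
    a detached journal receiver is deleted.  `receiver` is a dict with
    'attached', 'saved', and 'age_days'; `detached_newer_count` is how
    many detached receivers are newer than this one."""
    if receiver.get("attached", False):
        return False            # only detached receivers are candidates
    if keepunsav == "*YES" and not receiver.get("saved", False):
        return False            # KEEPUNSAV: keep unsaved receivers
    if detached_newer_count < keeprcvcnt:
        return False            # KEEPRCVCNT: keep the newest N detached
    if receiver.get("age_days", 0) < keepjrnrcv:
        return False            # KEEPJRNRCV: keep for the given days
    return True
```

In the KEEPRCVCNT example above, receiver 7 has two newer detached receivers (8 and 9), so it passes that check, while receivers 8 and 9 do not.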
Journal receiver ASP (JRNRCVASP) This parameter specifies the auxiliary storage
pool (ASP) from which the system allocates storage for the journal receivers. The
default value *LIBASP indicates that the storage space for the journal receivers is
allocated from the same ASP that is used for the journal receiver library.
Threshold message queue (MSGQ) This parameter specifies the qualified name of
the threshold message queue to which the system sends journal-related messages
such as threshold messages. The default value *JRNDFN for the queue name
indicates that the message queue uses the same name as the journal definition. The
value *JRNLIB for the library name indicates that the message queue uses the library
for the associated journal.
Exit program (EXITPGM) This parameter allows you to specify the qualified name of
an exit program to use when journal receiver management is performed by MIMIX.
The exit program will be called when a journal receiver is changed or deleted by the
MIMIX journal manager. For example, you might want to use an exit program to save
journal receivers as soon as MIMIX finishes with them so that they can be removed
from the system immediately.
Receiver size option (RCVSIZOPT) This parameter specifies what option to use for
determining the maximum size of sequence numbers in journal entries written to the
attached journal receiver. Changing this value requires that you change to a new
journal receiver. In order for a change to take effect, the journaling environment must
be built. When the value *MAXOPT3 is used, the journal receivers cannot be saved
and restored to systems with operating system releases earlier than V5R3M0.
To support a switchable data group, a change to this parameter requires more than
one journal definition to be changed. For additional information, see “Changing the
journaling environment to use *MAXOPT3” on page 222.
Minimize entry specific data (MINENTDTA) This parameter specifies which object
types allow journal entries to have minimized entry-specific data. For additional
information about improving journaling performance with this capability, see
“Minimized journal entry data” on page 359.


Reset sequence threshold (RESETTHLD) You can specify the sequence number
(in millions) at which to reset the receiver sequence number. When the threshold is
reached, the next receiver change resets the sequence number to 1. You can specify
a value for this parameter or for the RESETTHLD2 parameter, but not both.
RESETTHLD2 is recommended.

Journal definition considerations
Consider the following as you create journal definitions for user journal replication
environments that implement remote journaling:
• The source journal definition identifies the local journal and the system on
which the local journal exists. Similarly, the target journal definition identifies
the remote journal and the system on which the remote journal exists.
Therefore, the source journal definition identifies the source system of the
remote journal process and the target journal definition identifies the target
system of the remote journal process.
• You can use an existing journal definition as the source journal definition to
identify the local journal. However, using an existing journal definition for the
target journal definition is not recommended. The existing definition is likely to
be used for journaling and therefore is not appropriate as the target journal
definition for a remote journal link.
• MIMIX recognizes the receiver change management parameters (CHGMGT,
THRESHOLD, TIME, RESETTHLD2 or RESETTHLD) specified in the source
journal definition and ignores those specified in the target journal definition.
When a new receiver is attached to the local journal, a new receiver with the
same name is automatically attached to the remote journal. The receiver prefix
specified in the target journal definition is ignored.
• Each remote journal link defines a local-remote journal pair that functions in
only one direction. Journal entries flow from the local journal to the remote
journal. The direction of a defined pair of journals cannot be switched. If you
want to use the RJ process in both directions for a switchable data group, you
need to create journal definitions for two remote journal links (four journal
definitions). For more information, see “Journal definition naming conventions”
on page 207.
• After the journal environment is built for a target journal definition, MIMIX
cannot change the value of the target journal definition’s Journal receiver prefix
(JRNRCVPFX) or Threshold message queue (MSGQ), and several other
values. To change these values, see the procedure in the IBM topic “Library
Redirection with Remote Journals” in the IBM eServer iSeries Information
Center.
• If you are configuring MIMIX for a scenario in which you have one or more
target systems, there are additional considerations for the names of journal
receivers. Each source journal definition must specify a unique value for the
Journal receiver prefix (JRNRCVPFX) parameter. MIMIX ensures that the
same prefix is not used more than once on the same system but cannot
determine if the prefix is used on a target journal while it is being configured. If
the prefix defined by the source journal definition is reused by target journals
that reside in the same library and ASP, attempts to start the remote journals
will fail with message CPF699A (Unexpected journal receiver found).
When you create a target journal definition instead of having it generated using
the Add Remote Journal Link (ADDRJLNK) command, use the default value
*GEN for the prefix name for the JRNRCVPFX on a target journal definition.


The receiver names for the source and target journals will be the same on the
systems, but they will not be the same in the two journal definitions. For the target
journal, the prefix used is the one specified in the source journal definition.

Journal definition naming conventions


Journal definitions for user journal replication must follow these naming conventions.
• The first character must be either A - Z, $, #, or @.
• The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_).
• Journal definition names cannot be UPSMON or begin with the characters MM.
The second part of the name always identifies a system definition which represents
the system on which the journal resides.
MIMIX uses the first six characters of the journal definition name to generate the
journal receiver prefix. MIMIX prevents the last character of the prefix from being
numeric: if the prefix derived from the journal definition name ended in a numeric
character, that character could become part of the receiver number, and the receiver
name would no longer match the journal name.
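
The naming rules can be expressed as a simple check (illustrative Python assuming uppercase names, the IBM i convention; MIMIX performs its own validation):

```python
import re

# First character A-Z, $, #, or @; remaining characters alphanumeric
# or $, #, @, period, underscore.
_NAME_PATTERN = re.compile(r"^[A-Z$#@][A-Z0-9$#@._]*$")

def valid_jrndfn_name(name):
    """Sketch of the journal definition naming rules, including the
    reserved name UPSMON and the reserved MM prefix."""
    if name == "UPSMON" or name.startswith("MM"):
        return False
    return bool(_NAME_PATTERN.match(name))
```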
Remote journal environments: When a data group uses remote journaling, the local
(or source) journal definition is named as described. However, there are additional
naming conventions required for the target journal definition name. The convention
used varies based on how the target journal definition is created.
• “Preferred target journal definition naming convention” on page 207 describes the
convention used when target journal definitions are created by the CRTDGDFN
command.
• “Target journal definition names generated by ADDRJLNK command” on
page 210 describes the convention used when target journal definitions are
created by the ADDRJLNK command.

Preferred target journal definition naming convention


When target journal definitions are created by the Create Data Group Definition
(CRTDGDFN) command, they are created using the format sourcenn@R, where:
• source is the first six characters from the journal name of the source journal
definition.
• nn is the two-character remote journal ID of the source system. MIMIX assigns
each system in an installation a remote journal ID that cannot be changed.
• @R identifies the name as being for the remote (target) journal of a local-remote
journal pair.
If a journal definition name is already in use, the source is shortened to five characters
and the sixth character will be assigned a numeric value.
The nn@R suffix is also used in the names of the target libraries for the journal,
receivers, and threshold message queue.
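
The sourcenn@R convention can be sketched as follows (illustrative Python; the collision handling when a name is already in use is simplified here to trying ascending digits in the sixth position):

```python
def preferred_target_jrndfn_name(source_jrndfn_name, source_rj_id, in_use=()):
    """Sketch of the CRTDGDFN naming convention: the first six characters
    of the source journal definition name, the source system's
    two-character remote journal ID, then '@R'."""
    name = source_jrndfn_name[:6] + source_rj_id + "@R"
    digit = 0
    while name in in_use:
        # Fallback: shorten the source part to five characters and
        # assign a numeric sixth character (simplified).
        name = source_jrndfn_name[:5] + str(digit) + source_rj_id + "@R"
        digit += 1
    return name
```

For a source journal definition named ABC on a system whose remote journal ID is 01, this yields ABC01@R, matching Table 29.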

This preferred naming convention for target journal definitions ensures that local and
remote journal receivers will have unique names, as required by the IBM i remote
journal function, and that target journal names are unique.
Changing or manually creating target journal definitions: If you manually create
target journal definitions with the CRTJRNDFN command, it is recommended that you
use the preferred naming convention. Implementing this convention in two-node
environments simplifies any future transition to a three-or-more node environment
and avoids having conflicting journal names.
If you change library values in a source journal definition before creating a data group
which uses the journal definition, the target journal definition is created with the
correct library names. Similarly, if you want multiple journals with the same name in
different libraries, change the source journal name before creating a data group which
uses that journal definition. However, if you change the journal name or any of the
library values in a source journal definition after the data group which uses it exists, you
must also change the library names in the target journal definition.
When implementing the naming convention, it is helpful to consider one source node
at a time and create all the journal definitions necessary for replication from that
source, as shown in Table 29.
You can find the remote journal ID for a system by displaying the details of its system
definition. System definitions that existed before MIMIX 7.1 were assigned an ID
during the 7.1 installation. All new system definitions are assigned an ID in the order
they are created.
Multimanagement environments: In environments that use multimanagement
functions1, it is possible that each node that is a management system is also both a
source and target for replication activity. The preferred naming convention helps you
keep track of all the journaling environments needed for a switchable implementation
of MIMIX. The following is strongly recommended:
• Limit the data group name to six characters. This will simplify keeping an
association between the data group name and the names of associated journal
definitions by allowing space for the source node identifier within those names.
• Allow the CRTDGDFN command to create the target journal definitions.
• Once the appropriately named journal definitions are created for source and target
systems, manually create the remote journal links between them (ADDRJLNK
command).

1. Either a MIMIX Global or MIMIX for PowerHA license key is required for multimanagement
functions.


Example journal definitions for three management nodes


The following example illustrates the preferred naming
convention for journal definitions. In this example, all
three nodes, CHUCK, HENRY, and OSCAR, are
designated as management systems in a
multimanagement environment. The data group name is
ABC.
When a node is the replication source, the arrows from
that node point to its possible target nodes. For each
source node, a corresponding target journal definition is
necessary for each potential target node.
Table 29 shows the remote journal ID (RJ ID) that MIMIX
assigned to each system and how that ID is used in the names of the journal
definitions, journal libraries, and journal receiver libraries. To support a fully
switchable environment for this example, all nine journal definitions shown are
needed.

Table 29. Example showing journal definitions needed to replicate from each source node

Source Node    Journal   Journal           Journal              Receiver
and RJ ID      Role      Definition        Name / Library       Prefix / Library
-------------  --------  ----------------  -------------------  -------------------
HENRY (01)     Local     ABC HENRY         ABC / #MXJRN         ABC / #MXJRN
               Remote    ABC01@R OSCAR     ABC / #MXJRN01@R     ABC / #MXJRN01@R
               Remote    ABC01@R CHUCK     ABC / #MXJRN01@R     ABC / #MXJRN01@R
OSCAR (02)     Local     ABC OSCAR         ABC / #MXJRN         ABC / #MXJRN
               Remote    ABC02@R HENRY     ABC / #MXJRN02@R     ABC / #MXJRN02@R
               Remote    ABC02@R CHUCK     ABC / #MXJRN02@R     ABC / #MXJRN02@R
CHUCK (03)     Local     ABC CHUCK         ABC / #MXJRN         ABC / #MXJRN
               Remote    ABC03@R HENRY     ABC / #MXJRN03@R     ABC / #MXJRN03@R
               Remote    ABC03@R OSCAR     ABC / #MXJRN03@R     ABC / #MXJRN03@R

Figure 12 shows the RJ links needed for this example.

Figure 12. Example of RJ links for a switchable three-node example.

Work with RJ Links


System: HENRY
Type options, press Enter.
1=Add 2=Change 4=Remove 5=Display 6=Print 9=Start 10=End
14=Build 15=Remove RJ connection 17=Work with jrn attributes
24=Delete target jrn environment

---Source Jrn Def--- ---Target Jrn Def---


Opt Name System Name System Priority Dlvry State
__ __________ ________ __________ ________
__ ABC CHUCK ABC03@R HENRY *SYSDFT *ASYNC *INACTIVE
__ ABC CHUCK ABC03@R OSCAR *SYSDFT *ASYNC *INACTIVE
__ ABC HENRY ABC01@R CHUCK *SYSDFT *ASYNC *ACTIVE
__ ABC HENRY ABC01@R OSCAR *SYSDFT *ASYNC *ACTIVE
__ ABC OSCAR ABC02@R CHUCK *SYSDFT *ASYNC *INACTIVE
__ ABC OSCAR ABC02@R HENRY *SYSDFT *ASYNC *INACTIVE

Bottom
Parameters or command
===> _________________________________________________________________________
F3=Exit F4=Prompt F5=Refresh F6=Add F9=Retrieve F11=View 2
F12=Cancel F13=Repeat F16=Jrn Definitions F18=Subset F21=Print list

Target journal definition names generated by ADDRJLNK command


If you allow MIMIX to generate the target journal definition when you create a remote
journal link, MIMIX implements the following naming conventions for the target journal
definition and for the objects in its associated journaling environment. You may have
target journal definitions with this naming convention that were created on previous
releases of MIMIX.
The two-part name of the target journal definition is generated as follows:
• The Name is the first eight characters from the name of the source journal
definition followed by the characters @R when the journal definition is created for
MIMIX RJ support. If a journal definition name is already in use, the name may
instead include @S, @T, @U, @V, or @W.
Note: Journal definition names cannot be UPSMON or begin with the characters
MM.
• The System is the value entered in the target journal definition system field.
For example, if the source journal definition name is MYJRN and you specified
TGTJRNDFN(*GEN CHICAGO), the target journal definition will be named
MYJRN@R CHICAGO.
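
The generated-name rule can be sketched like this (illustrative Python; the assumption that the alternate suffixes are tried in the order @S through @W follows the order they are listed above):

```python
def addrjlnk_target_name(source_jrndfn_name, in_use=()):
    """Sketch of the ADDRJLNK-generated name: the first eight characters
    of the source journal definition name followed by '@R', falling back
    to @S, @T, @U, @V, or @W if the name is already in use."""
    base = source_jrndfn_name[:8]
    for suffix in ("@R", "@S", "@T", "@U", "@V", "@W"):
        candidate = base + suffix
        if candidate not in in_use:
            return candidate
    raise ValueError("no available @-suffix for " + base)
```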
The target journal definition will have the following characteristics and associated new
objects:
• The Journal name will have the same name as the source journal.
• The Journal library will use the first eight characters of the name of the source
journal library followed by the characters @R.


• The Journal library ASP will be copied from source journal definition.
• The Journal receiver prefix will be copied from the source journal definition.
• The Journal receiver library will use the first eight characters of the name of the
source journal receiver library followed by the characters @R.
• The Message queue library will use the first eight characters of the name of the
source message queue library followed by the characters @R.
• The value for the Receiver change management (CHGMGT) parameter will be
*NONE.

Example journal definitions for a switchable data group


To support a switchable data group in a remote journaling environment, you need to
have four journal definitions configured: two for the RJ link used for normal
production-to-backup operations, and two for the RJ link used for replication in the
opposite direction.
In this example, a switchable data group named PAYABLES is created between
systems CHICAGO and NEWYORK. System 1 (CHICAGO) is the data source. The
data group definition specifies *YES to Use remote journal link. Command defaults
create the data group using a generated short data group name and using the data
group name for the system 1 and system 2 journal definitions.
To create the RJ link and associated journal definitions for normal operations, option
10 (Add RJ link) on the Work with Journal Definitions display is used on an existing
journal definition named PAYABLES CHICAGO (the first entry listed in Figure 13).
This is the source journal definition for normal operations. The process of adding the
link creates the target journal definition PAYABLES@R NEWYORK (the last entry
listed in Figure 13).
To create the RJ link and associated definitions for replication in the opposite
direction, a new source journal definition, PAYABLES NEWYORK, is created (the
second entry listed in Figure 13). Then that definition is used to create the second RJ link,
which in turn generates the target journal definition PAYABLES@R CHICAGO (the
third entry listed in Figure 13).

Figure 13. Example journal definitions for a switchable data group.

Work with Journal Definitions


CHICAGO
Type options, press Enter.
1=Create 2=Change 3=Copy 4=Delete 5=Display 6=Print 7=Rename
10=Add RJ link 12=Work with RJ links 14=Build
17=Work with jrn attributes 24=Delete jrn environment

---- Definition ---- ------ Journal ------- - Management - RJ


Opt Name System Name Library Change Delete Link
__ PAYABLES CHICAGO PAYABLES MIMIXJRN *SYSTEM *YES *SRC
__ PAYABLES NEWYORK PAYABLES MIMIXJRN *SYSTEM *YES *SRC
__ PAYABLES@R CHICAGO PAYABLES MIMIXJRN@R *NONE *YES *TGT
__ PAYABLES@R NEWYORK PAYABLES MIMIXJRN@R *NONE *YES *TGT

Bottom
Parameters or command
===> _________________________________________________________________________
F3=Exit F4=Prompt F5=Refresh F6=Create F9=Retrieve
F10=View receivers F12=Cancel F13=Repeat F16=RJ Links F24=More keys

Identifying the correct journal definition on the Work with Journal Definition display
can be confusing. Fortunately, the Work with RJ Links display (Figure 14) shows the
association between journal definitions much more clearly.

Figure 14. Example of RJ links for a switchable data group.

Work with RJ Links


System: CHICAGO
Type options, press Enter.
1=Add 2=Change 4=Remove 5=Display 6=Print 9=Start 10=End
14=Build 15=Remove RJ connection 17=Work with jrn attributes
24=Delete target jrn environment

---Source Jrn Def--- ---Target Jrn Def---


Opt Name System Name System Priority Dlvry State
__ __________ ________ __________ ________
__ PAYABLES CHICAGO PAYABLES@R NEWYORK *SYSDFT *ASYNC *INACTIVE
__ PAYABLES NEWYORK PAYABLES@R CHICAGO *SYSDFT *ASYNC *INACTIVE

Bottom
Parameters or command
===> _________________________________________________________________________
F3=Exit F4=Prompt F5=Refresh F6=Add F9=Retrieve F11=View 2
F12=Cancel F13=Repeat F16=Jrn Definitions F18=Subset F21=Print list


Journal receiver management


Parameters in journal definition commands determine how change management and
delete management are performed on the journal receivers used by the replication
process. Shipped default values allow MIMIX to perform change management and
delete management.
Change management - The Receiver change management (CHGMGT) parameter
controls how the journal receivers are changed. The shipped default value
*TIMESIZE results in MIMIX changing the journal receiver by both threshold size and
time of day.
Additional parameters in the journal definition control the size at which to change
(THRESHOLD), the time of day to change (TIME), and when to reset the receiver
sequence number (RESETTHLD2 or RESETTHLD). The conditions specified in these
parameters must be met before change management can occur. For additional
information, see “Tips for journal definition parameters” on page 201.
If you do not use the default value *TIMESIZE for CHGMGT, consider the following:
• When you specify *TIMESYS, the system manages the receiver by size and
during IPLs and MIMIX manages changing the receiver at a specified time.
Note: The value *TIME can be specified with *SIZE or *SYSTEM to achieve the
same results as *TIMESIZE or *TIMESYS, respectively.
• When you specify *NONE, MIMIX does not handle changing the journal receivers.
You must ensure that the system or another application performs change
management to prevent the journal receivers from overflowing.
• When you allow the system to perform change management (*SYSTEM) and the
attached journal receiver reaches its threshold, the system detaches the journal
receiver and creates and attaches a new journal receiver. During an initial
program load (IPL) or the vary on of an independent ASP, the system performs a
CHGJRN command to create and attach a new journal receiver and to reset the
journal sequence number of journals that are not needed for commitment control
recovery for that IPL or vary on, unless the receiver size option (RCVSIZOPT) is
*MAXOPT3. When the RCVSIZOPT is *MAXOPT3, the sequence number will not
be reset and a new journal receiver will not be attached unless the sequence
number exceeds the sequence number threshold.
In a remote journaling configuration, MIMIX recognizes remote journals and ignores
change management for the remote journals. The remote journal receiver is changed
automatically by the IBM i remote journal function when the receiver on the source
system is changed. You can specify in the source journal definition whether to have
receiver change management performed by the system or by MIMIX. Any change
management values you specify for the target journal definition are ignored.
You can also customize how MIMIX performs journal receiver change management
through the use of exit programs. For more information, see “Working with journal
receiver management user exit points” on page 628.
Delete management - The Receiver delete management (DLTMGT) parameter
controls how the journal receivers used for replication are deleted. It is strongly
recommended that you use the value *YES to allow MIMIX to perform delete
management.
When MIMIX performs delete management, the journal receivers are only deleted
after MIMIX is finished with them and all other criteria specified on the journal
definition are met. The criteria includes how long to retain unsaved journal receivers
(KEEPUNSAV), how many detached journal receivers to keep (KEEPRCVCNT), and
how long to keep detached journal receivers (KEEPJRNRCV).
Note: If more than one MIMIX installation uses the same journal, the journal
manager for each installation can delete the journal regardless of whether the
other installations are finished with it. If you have this scenario, you need to
use the journal receiver delete management exit points to control deleting the
journal receiver. For more information, see “Working with journal receiver
management user exit points” on page 628.
Delete management of the source and target receivers occur independently from
each other. MIMIX operations can be affected if you allow the system to handle delete
management. The system may delete a journal receiver before MIMIX has completed
its use. It is highly recommended that you configure the journal definitions to have
MIMIX perform journal delete management. By default, the IBM i remote journal
function does not allow a receiver to be deleted until it is sent from the local journal
(source) to the remote journal (target). When MIMIX manages deletion, a remote
journal receiver on the target system, and the corresponding local journal receiver on
the source system, cannot be deleted until the receiver is processed by the database
reader (DBRDR) and database apply (DBAPY) processes and meets the other criteria
defined in the journal definition.

Attention: If you need to delete journal receivers manually and MIMIX is managing
receiver deletion, contact CustomerCare.

Interaction with other products that manage receivers


If you run other products that use the same journals on the same system as MIMIX,
such as Double-Take® Share™, iOptimize™, or MIMIX® Director™, there may be
considerations for journal receiver management.
Double-Take® Share™: Although both Double-Take® Share™ and MIMIX® support
receiver change management, you need to choose only one product to perform
change management activities for a specific journal. If you choose Double-Take®
Share™, your MIMIX journal definition should specify CHGMGT(*NONE). If you
choose MIMIX, see change management for available options that can be specified in
the journal definition, including system managed receivers.
If both products scrape from the same journal, perform delete management only from
Double-Take® Share™. This will prevent MIMIX from deleting receivers before
Double-Take® Share™ is finished with them. The journal definition within MIMIX should
specify DLTMGT(*NO).
iOptimize™ or MIMIX® Director™: Both MIMIX® and either iOptimize™ or MIMIX
Director read journal receiver entries from the system (QAUDJRN) journal. Shipped
default settings in journal definitions allow MIMIX to perform receiver delete
management. When both products are used, it is recommended that you change the
journal definition for QAUDJRN to specify a higher number for the Keep journal
receiver count (KEEPRCVCNT) parameter. If the journal definition for QAUDJRN is
set to prevent MIMIX from performing change or delete management, you must
ensure that journal receivers are retained long enough for both products to complete
their use.

Processing from an earlier journal receiver


It is possible to have a situation where the operating system attempts to retransmit
journal receivers that already exist on the target system. When this situation occurs,
the remote journal function ends with an error and transmission of entries to the target
system stops. This can occur in the following scenarios:
• When performing a clear pending start of the data group while also specifying a
sequence number that is earlier in the journal stream than the last processed
sequence number
• When starting a data group while specifying a database journal receiver that is
earlier in the receiver chain than the last processed receiver.
For example, refer to Figure 15. Replication ended while processing journal entries in
target receiver 2. Target journal receiver 1 is deleted through the configured delete
management options. If the data group is started (STRDG) with a starting journal
sequence number for an entry that is in journal receiver 1, the remote journal function
attempts to retransmit source journal receivers 1 through 4, beginning with receiver 1.
However, receiver 2 already exists on the target system. When the operating system
encounters receiver 2, an error occurs and the transmission to the target system
ends.
You can prevent this situation by deleting, before starting the data group, any target
journal receivers that follow the receiver that will be used as the starting point. If you
encounter the problem, recover by removing the target journal receivers and letting
remote journaling resend them. In this example, deleting target receiver 2 would
prevent or resolve the problem.

Figure 15. Example of processing from an earlier journal receiver.

Source Journal Receivers          Target Journal Receivers
           4                                 2
           3                                 1
           2
           1
Considerations when journaling on target
The default behavior for MIMIX is to have journaling enabled on the target systems for
the target files. After a transaction is applied to the target system, MIMIX writes the
journal entry to a separate journal on the target system. This journaling on the target
system makes it easier and faster to start replication from the backup system
following a switch. As part of the switch processing, the journal receiver is changed
before the data group is started.
In a remote journaling environment, these additional journal receivers can become
stranded on the backup system following a switch. When starting a data group after a
switch, the IBM i remote journal function begins transmitting journal entries from the
just-changed journal receiver. Because the backup system is now temporarily acting
as the source system, the remote journal function interprets any earlier receivers as
unprocessed source journal receivers and prevents them from being deleted.
To remove these stranded journal receivers, you need to use the IBM command
DLTJRNRCV with *IGNTGTRCV specified as the value of the DLTOPT parameter.
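For example, assuming a stranded receiver named JRNRCV0100 in library JRNLIB
(both names are hypothetical), the deletion would look like:

```
DLTJRNRCV JRNRCV(JRNLIB/JRNRCV0100) DLTOPT(*IGNTGTRCV)
```

The *IGNTGTRCV option tells the operating system to delete the receiver even
though it was attached to a remote journal as a target receiver.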

Journal receiver size for replicating large object data


For potentially large IFS stream files and files containing LOB data, it is important that
your journal receivers are large enough to accommodate the data; you may need to
change your journal receiver size options.
For data groups that can be switched, the journal receivers on both the source and
target systems must be large enough to accommodate the data.

Verifying journal receiver size options


To display the current journal receiver size options for journals used by MIMIX, do the
following from the system where the source journal definition is located:
1. Enter the command installation-library/WRKJRNDFN.
2. Next to the journal definition for the system you are on, type a 17 (Work with
journal attributes).
3. View the Receiver size options field to see how the journal is configured. The
value should indicate support for large journal entries. The values *MAXOPT2 and
*MAXOPT3 support journal entries up to 4 GB.

Changing journal receiver size options


To change the journal receiver size, do the following on the *MGT system:
1. Enter the command installation-library/WRKJRNDFN.
2. Next to the journal definition for the system you want to change, type a 2
(Change).
3. Press F10 to display additional parameters on the Change Journal Definition
(CHGJRNDFN) display.
4. At the Receiver size option prompt, specify a value that indicates support for large
journal entries, such as *MAXOPT2 or *MAXOPT3. Press Enter.
Note: Make sure the receiver size options of the journals on the other systems in
your environment are compatible. For more information, see “Changing the
journaling environment to use *MAXOPT3” on page 222.
5. Build the journal environment specifying *JRNDFN for the Source for values
parameter.
a. From the Work with Journal Definitions display, type 14 (Build) next to the
journal definition you want to build and press F4.
b. Change the Source for values parameter value to *JRNDFN.
c. Press Enter.
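For example, the change and rebuild for a hypothetical journal definition named
INVJRN on SYSTEM1 might be entered directly as follows; the JRNDFN parameter
keyword is an assumption based on the prompts above:

```
installation-library/CHGJRNDFN JRNDFN(INVJRN SYSTEM1) RCVSIZOPT(*MAXOPT3)
installation-library/BLDJRNENV JRNDFN(INVJRN SYSTEM1) JRNVAL(*JRNDFN)
```

Specifying *JRNDFN on the build step ensures the journal environment picks up the
changed receiver size option from the definition.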

Creating a journal definition
Do the following to create a journal definition:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu select option 3 (Work with journal definitions)
and press Enter.
3. The Work with Journal Definitions display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
4. The Create Journal Definition display appears. At the Journal definition prompts,
specify a two-part name.
Note: Journal definition names cannot be UPSMON or begin with the characters
MM.
5. Verify that the following prompts contain the values that you want. If you have not
journaled before, the default values are appropriate. If you need to identify an
existing journaling environment to MIMIX, specify the information you need.
Journal
Library
Journal library ASP
Journal receiver prefix
Library
Journal receiver library ASP
6. At the Target journal state prompt, specify the requested status of the target
journal. The shipped default value, *ACTIVE, is required for target journal
inspection. The value *ACTIVE can be used with active journaling support or
journal standby state.
Note: Journal standby state requires the IBM feature for High Availability Journal
Performance. For more information see “Configuring for high availability
journal performance enhancements” on page 362.
7. At the Target journal inspection prompt, the shipped default value, *YES, allows
the specified journal to be inspected for activity by users or programs other than
MIMIX. Inspection occurs at the journal level when the system on which the
specified journal exists is the target system for replication by one or more enabled
data groups. To prevent journal inspection, specify *NO.
8. At the Journal caching prompt, the shipped default value, *NONE, prevents
caching of journal entries in main storage before they are written to disk. Caching
is only possible if a separate, chargeable feature from IBM (Option 42) is available
on the system. To use journal caching on both systems, only the source system,
or only the target system, specify *BOTH, *SRC, or *TGT, respectively.
Note: Journal caching requires the IBM feature for High Availability Journal
Performance. For more information see “Configuring for high availability
journal performance enhancements” on page 362.


9. Set the values you need to manage changing journal receivers, as follows:
a. At the Receiver change management prompt, specify the value you want. The
default values are recommended. For more information about valid
combinations of values, press F1 (Help).
b. Press Enter.
c. One or more additional prompts related to receiver change management
appear on the display. Verify that the values shown are what you want and, if
necessary, change the values.
Receiver threshold size (MB)
Time of day to change receiver
Reset large sequence threshold
d. Press Enter.
10. Set the values you need to manage deleting journal receivers, as follows:
a. It is recommended that you accept the default value *YES for the Receiver
delete management prompt to allow MIMIX to perform delete management.
b. Press Enter.
c. One or more additional prompts related to receiver delete management appear
on the display. If necessary, change the values.
Keep unsaved journal receivers
Keep journal receiver count
Keep journal receivers (days)
11. At the Description prompt, type a brief text description of the journal definition.
12. This step is optional. If you want to access additional parameters that are
considered advanced functions, press F10 (Additional parameters). Make any
changes you need to the additional prompts that appear on the display.
13. To create the journal definition, press Enter.
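If you prefer the command line, the display in this procedure presumably prompts a
Create Journal Definition (CRTJRNDFN) command, so a minimal definition could be
created in one step as in this sketch. The command name, the JRNDFN and TEXT
keywords, and the definition name are assumptions; all other values are left at
shipped defaults:

```
installation-library/CRTJRNDFN JRNDFN(INVJRN SYSTEM1) +
    TEXT('Journal definition for inventory library on SYSTEM1')
```

With defaults, MIMIX performs receiver change and delete management as
described in the steps above.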

Changing a journal definition
Before making changes, review the naming convention requirements for a remote
journaling environment. Some changes may require changing additional journal
definitions. See “Journal definition naming conventions” on page 207.
To change a journal definition, do the following:
1. Access the Work with Journal Definitions display according to your configuration
needs:
• In a clustering environment, from the MIMIX Cluster Menu select option 20
(Work with system definitions) and press Enter. When the Work with System
Definitions display appears, type 12 (Journal Definitions) next to the system
name you want and press Enter.
• In a standard MIMIX environment, from the MIMIX Configuration Menu select
option 3 (Work with journal definitions) and press Enter.
2. The Work with Journal Definitions display appears. Type 2 (Change) next to the
definition you want and press Enter.
3. The Change Journal Definition (CHGJRNDFN) display appears. Press Enter twice
to see all prompts for the display.
4. Make any changes you need to the prompts. Press F1 (Help) for more information
about the values for each parameter.
5. If you need to access advanced functions, press F10 (Additional parameters).
When the additional parameters appear on the display, make the changes you
need.
6. To accept the changes, press Enter.
Note: Changes to the Receiver threshold size (MB) (THRESHOLD) are effective
with the next receiver change. Before a change to any other parameter is
effective, you must rebuild the journal environment. Rebuilding the journal
environment ensures that it matches the journal definition and prevents
problems starting the data group.


Building the journaling environment


Before replication for a data group can occur, the journal environment for all journal
definitions used by that data group must be created on each system. A journaling
environment includes the following objects: library, journal, journal receiver, and
threshold message queue on the system specified in the journal definition. The Build
Journal Environment (BLDJRNENV) command is used to build the journal
environment objects for a journal definition. When the BLDJRNENV command is run,
if the objects do not exist, they are created based on what is specified in the journal
definition. If the journal exists, the Source for values (JRNVAL) parameter of the
BLDJRNENV command is used to determine the source for the values of these
objects. The journal receiver prefix and library, message queue and library, and
threshold parameters are updated from the source specified in the JRNVAL
parameter.
Specifying *JRNENV for the JRNVAL parameter changes the values of the objects in
the journal definition to match the values in the existing journal environment objects.
Specifying *JRNDFN for the JRNVAL parameter changes the values of the journal
environment objects to match the values of the objects in the journal definition. In a
remote journal environment, the values specified in the journal definition (*JRNDFN)
are only applicable to the source journal.
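As a sketch, the two directions of the JRNVAL parameter can be run from a
command line as follows; the journal definition name is hypothetical and the JRNDFN
keyword is assumed:

```
/* Copy values FROM the existing journal environment INTO the definition */
installation-library/BLDJRNENV JRNDFN(INVJRN SYSTEM1) JRNVAL(*JRNENV)

/* Create or update the journal environment FROM the definition's values */
installation-library/BLDJRNENV JRNDFN(INVJRN SYSTEM1) JRNVAL(*JRNDFN)
```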
If the data group definition specifies to journal on the target system, the journal
environment must be built on each system that will be a target system for replication
of that data group. If you do not build either source or target journal environments, the
first time the data group starts MIMIX will automatically build the journal environments
for you.
Note: When building a journal environment, ensure that the journal receiver prefix in
the specified library is not already in use. If it is, change the prefix to an
unused value.
For switchable data groups not specified to journal on the target system, it is
recommended to build the source journaling environments for both directions of
replication so the environments exist for data group replication after switching.
All previous steps in your configuration checklist must be complete before you use
this procedure.
To build the journaling environment, do the following:
Note: If you are journaling on the target system, perform this procedure for both
the source and target systems.
1. From the MIMIX Main Menu, select 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select one of the following and press Enter:
• Select 8 (Work with remote journal links) to build the journaling environments
for remote journaling.
• Select 3 (Work with journal definitions) to build all other journaling
environments.
3. From the Work with display, type 14 (Build) next to the journal definition you want
to build and press Enter.
Option 14 calls the Build Journal Environment (BLDJRNENV) command. For
environments using remote journaling, the command is called twice (first for the
source journal definition and then for the target journal definition). A status
message is issued indicating that the journal environment was created for each
system.
4. To verify that the source journals have been created for a data group, do the
following from each system in the data group:
a. Enter the command WRKDGDFN.
b. From the Work with DG Definitions display, type 12 (Journal definitions) next
to the data group and press Enter.
c. The Work with Journal Definitions display is subsetted to the journal definitions
for the data group. Type 17 (Work with jrn attributes) next to the definition that
is the source for the local system.

Changing the journaling environment to use *MAXOPT3


This procedure changes journal definitions and builds the journaling environments
necessary in order to use a journal with a receiver size option of *MAXOPT3.
Before you use this procedure, consider the following:
• Determine which journal definitions must be changed. Table 30 identifies
requirements according to the data group configuration.
• Switchable data groups require that journal definitions be changed for both source
and target journals.
• A journal definition that is changed to use *MAXOPT3 support affects all data
groups which use the journal definition.
• When a journal definition for the system journal (QAUDJRN) is changed to use
*MAXOPT3 support, any additional MIMIX installations on the same system must
also use *MAXOPT3 support for the system journal. Doing so prevents sequence
numbers from being reset unexpectedly. The additional MIMIX installations must
be running version 6 or higher software and must have their journal definitions for
the system journal changed to use *MAXOPT3 support.
• The default value for the journal sequence reset threshold changes when using
*MAXOPT3. If your sequence numbers will exceed 10 digits, updates must be
made to use the MIMIX command and outfile fields that support sequence
numbers with more than 10 digits. Updates should be made to any automation
that uses journal sequence numbers with MIMIX and any journal receiver
management exit programs or monitors with an event class (EVTCLS) of *JRN.
• When the value *MAXOPT3 is used, the journal receivers cannot be saved and
restored to systems with operating system releases earlier than V5R3M0.

Table 30. Journal definitions to change when converting to *MAXOPT3.

Replicates From        Switchable   Journal Definitions to Change

User journal with      Yes          Journal definition for normal source system (local)
remote journaling                   Journal definition for normal target system (remote, @R)
                                    Journal definition for switched source system (local)
                                    Journal definition for switched target system (remote, @R)

                       No           Journal definition for source system (local)
                                    Journal definition for target system (remote, @R)

User journal with      Yes          Journal definition for source system
MIMIX source-send                   Journal definition for target system
processing
                       No           Journal definition for source system

System journal         Yes          QAUDJRN journal definition for source system
(QAUDJRN)                           QAUDJRN journal definition for target system

                       No           QAUDJRN journal definition for source system
Do the following:
1. For data groups which use the journal definitions that will be changed, do the
following:
a. If commitment control is used, ensure that there are no open commit cycles.
b. End replication in a controlled manner using topic “Ending a data group in a
controlled manner” in the MIMIX Operations book. Procedures within this topic
will direct how to:
• Prepare for a controlled end of a data group
• Perform the controlled end - When ending, specify *ALL for the Process
prompt and *CNTRLD for the End process prompt.
• Confirm the end request completed without problems - This includes how to
check for and resolve any open commits.
Note: Resolve any open commits before continuing.
2. From the management system, select option 11 (Configuration menu) on the
MIMIX Main Menu. Then select option 3 (Work with journal definitions) to access
the Work with Journal Definitions display.
3. From the Work with Journal Definitions display, do the following to a journal
definition:
a. Type option 2 (Change) next to a journal definition and press Enter.

b. Optionally, specify a value for the Reset large sequence threshold prompt. If no
new value is specified, MIMIX will automatically use the default value
associated with the value you specify for the receiver size option in Step 3d.
c. Press F10 (Additional parameters).
d. At the Receiver size option prompt, specify *MAXOPT3.
e. Press Enter.
f. Repeat Step 3 for each of the journal definitions you need to change, as
indicated in Table 30. After all the necessary journal definitions are changed,
continue with the next step.
4. From the Work with Journal Definitions display, type a 14 (Build) next to the
journal definitions you changed and press Enter.
Note: For remote journaling environments, only perform this step for a source
journal definition. Building the environment for the source journal will
automatically result in the building of the environment for the associated
target journal definition.
5. Verify that the changed journal definitions have appropriate values. Do the
following:
a. From the Work with Journal Definitions display, type a 5 (Display) next to each
changed journal definition and press Enter.
b. Verify that *MAXOPT3 is specified for the Receiver size option.
c. Verify that the Reset large sequence threshold prompt contains the value you
specified for Step 3b. If you did not specify a value, the value should be
between 9901 and 18446640000000.
6. Verify that the journals have been changed and now have appropriate values. Do
the following:
a. From the appropriate system (source or target), access the Work with Journal
Definitions display. Then do the following:
• From the source system, type 17 (Work with jrn attributes) next to a changed
source journal definition and press Enter.
• From the target system, type 17 (Work with jrn attributes) next to a changed
target journal definition and press Enter.
b. Verify that *MAXOPT3 is specified as one of the values for the Receiver size
options field.
7. Update any automation programs. Any programs that include journal sequence
numbers must be changed to use the Reset large sequence threshold
(RESETTHLD2) and the Receiver size option (RCVSIZOPT) parameters.
8. Start the data groups using default values. Refer to topic “Starting selected data
group processes” in the MIMIX Operations book.


Changing the remote journal environment


Use the following checklist to guide you through the process of changing an existing
remote journal configuration. For example, this procedure is appropriate for changing
a journal receiver library for the target journal in a remote journaling (RJ) environment
or for any other changes that affect the target journal. These steps can be used for
synchronous or asynchronous remote journals.
Important! Changing the RJ environment must be done in the correct sequence.
Failure to follow the proper sequence can introduce errors in replication and journal
management. Also, before making changes, review the naming convention
requirements for a remote journaling environment. Some changes may require
changing additional journal definitions. See “Journal definition naming conventions”
on page 207.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. Verify that no other data groups use the RJ link using topic “Identifying data
groups that use an RJ link” on page 316.
2. Use topic “Ending a data group in a controlled manner” in the MIMIX Operations
book to prepare for and perform a controlled end of the data group and end the RJ
link. Specify the following on the ENDDG command:
• *ALL for the Process prompt
• *CNTRLD for the End process prompt
• *YES for the End remote journaling prompt.
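For a hypothetical data group named INVENTORY, the controlled end with the values
above might resolve to a command similar to the following; the DGDFN, PRC,
ENDOPT, and ENDRJLNK keywords are assumptions based on the prompt names:

```
ENDDG DGDFN(INVENTORY) PRC(*ALL) ENDOPT(*CNTRLD) ENDRJLNK(*YES)
```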
3. Verify that the remote journal link is not in use on both systems. Use topic
“Displaying status of a remote journal link” in the MIMIX Operations book. The
remote journal link should have a state value of *INACTIVE before you continue.
4. Remove the connection to the remote journal as follows:
a. Access the journal definitions for the data group whose environment you want
to change. From the Work with Data Groups display, type a 45 (Journal
definitions) next to the data group that you want and press Enter.
b. Type a 12 (Work with RJ links) next to either journal definition you want and
press Enter. You can select either the source or target journal definition.
Note: The target journal definition will end with @R.
c. From the Work with RJ Links display, choose the link based on the name in the
Target Jrn Def column. Type a 15 (Remove RJ connection) next to the link with
the target journal definition you want and press Enter.
d. A confirmation display appears. To continue removing the connections for the
selected links, press Enter.
5. From the Work with RJ Links display, do the following to delete the target system
objects associated with the RJ link:
Note: The target journal definition will end with @R.

a. Type a 24 (Delete target jrn environment) next to the link that you want and
press Enter.
b. A confirmation display appears. To continue deleting the journal, its associated
message queue, and the journal receiver, press Enter.
6. Make the changes you need for the target journal.
For example, to change the target (remote) journal definition to a new receiver
library, do the following:
a. Press F12 to return to the Work with Journal Definitions display.
b. Type option 2 (Change) next to the journal definition for the target system you
want and press Enter.
7. From the Work with Journal Definitions display, type a 14 (Build) next to the target
journal definition and press Enter.
Note: The target journal definition will end with @R.
8. Return to the Work with Data Groups display. Then do the following:
a. Type an 8 (Display status) next to the data group you want and press Enter.
b. Locate the name of the receiver in the Last Read field for the Database
process.
9. Do the following to start the RJ link:
a. From the Work with Data Groups display, type a 44 (RJ links) next to the data
group you want and press Enter.
b. Locate the link you want based on the name in the Target Jrn Def column. Type
a 9 (Start) next to the link with the target journal definition and press F4
(Prompt).
c. The Start Remote Journal Link (STRRJLNK) display appears. Specify the
receiver name from Step 8b as the value for the Starting journal receiver
(STRRCV) prompt and press Enter.
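For example, if Step 8b showed receiver RCV0000123, the prompted command
might resolve to something like the following; only the STRRCV keyword is named
above, so the link identification and the JRNDFN keyword are assumptions:

```
STRRJLNK JRNDFN(INVJRN@R SYSTEM2) STRRCV(RCV0000123)
```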
10. Start the data group using default values. Refer to topic “Starting selected data
group processes” in the MIMIX Operations book.


Adding a remote journal link


This procedure requires that a source journal definition exists. The process of creating
an RJ link will create the target journal definition with appropriate values for remote
journaling.
Before you create the RJ link you should be familiar with the “Journal definition
considerations” on page 206.
To create a link between journal definitions, do the following:
1. From the MIMIX Configuration menu, select option 3 (Work with journal
definitions) and press Enter.
2. The Work with Journal Definitions display appears. Type a 10 (Add RJ link) next
to the journal definition you want and press Enter.
3. The Add Remote Journal Link (ADDRJLNK) display appears. The journal
definition you selected in the previous step appears in the prompts for the Source
journal definition. Verify that this is the definition you want as the source for RJ
processing.
4. At the Target journal definition prompts, specify *GEN as the Name and specify
the value you want for System.
Note: If you specify the name of a journal definition, the definition must exist and
you are responsible for ensuring that its values comply with the
recommended values. Refer to the related topic on considerations for
creating journal definitions for remote journaling for more information.
5. Verify that the values for the following prompts are what you want. If necessary,
change the values.
• Delivery
• Sending task priority
• Primary transfer definition
• Secondary transfer definition
• If you are using an independent ASP in this configuration you also need to
identify the auxiliary storage pools (ASPs) from which the journal and journal
receiver used by the remote journal are allocated. Verify and change the
values for Journal library ASP, Journal library ASP device, Journal receiver
library ASP, and Journal receiver lib ASP dev as needed.
6. At the Description prompt, type a text description of the link, enclosed in
apostrophes.
7. To create the link between journal definitions, press Enter.

Changing a remote journal link
Changes to the delivery and sending task priority take effect only after the remote
journal link has been ended and restarted.
To change characteristics of the link between source and target journal definitions, do
the following:
1. Before you change a remote journal link, end activity for the link. The MIMIX
Operations book describes how to end only the RJ link.
Notes:
• If you plan to change the primary transfer definition or secondary transfer
definition to a definition that uses a different RDB directory entry, you also need
to remove the existing connection between objects. Use topic “Removing a
remote journaling environment” on page 231 before changing the remote
journal link.
• Before making changes, review the naming convention requirements for a
remote journaling environment. Some changes may require changing
additional journal definitions. See “Journal definition naming conventions” on
page 207.
2. From the Work with RJ Links display, type a 2 (Change) next to the entry you want
and press Enter.
3. The Change Remote Journal Link (CHGRJLNK) display appears. Specify the
values you want for the following prompts:
• Delivery
• Sending task priority
• Primary transfer definition
• Secondary transfer definition
• Description
4. When you are ready to accept the changes, press Enter.
5. To make the changes effective, do the following:
a. If you removed the RJ connection in Step 1, you need to use topic “Building the
journaling environment” on page 221.
b. Start the data group which uses the RJ link.

Temporarily changing from RJ to MIMIX processing

Temporarily changing from RJ to MIMIX processing


This procedure is appropriate for when you plan to continue using remote journaling
as your primary means of transporting data to the target system but, for some reason,
temporarily need to revert to MIMIX send processing.
Important! If the data group is configured for MIMIX Dynamic Apply, you must
complete the procedure in “Checklist: Converting to legacy cooperative
processing” on page 157 before you remove remote journaling.
For the data group you want to change, do the following:
1. Use the procedure “Ending a data group in a controlled manner” in the MIMIX
Operations book to prepare for and perform a controlled end of the data group
and end the RJ link. Specify the following on the ENDDG command:
• *ALL for the Process prompt
• *CNTRLD for the End process prompt
• *YES for the End remote journaling prompt.
2. Verify that the process is ended. On the Work with Data Groups display, the data
group should change to show a red “L” in the Source DB column.
3. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *NO for the Use remote journal link prompt.
d. To accept the change press Enter.
4. Use the procedure “Starting selected data group processes” in the MIMIX
Operations book, specifying *ALL for the Start Process prompt.
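Taken together, the temporary reversion can be sketched as the following command
sequence for a hypothetical data group named INVENTORY; the DGDFN, PRC,
ENDOPT, ENDRJLNK, and RJLNK keywords are assumptions based on the prompt
names above:

```
/* Step 1: controlled end of the data group, including the RJ link */
ENDDG DGDFN(INVENTORY) PRC(*ALL) ENDOPT(*CNTRLD) ENDRJLNK(*YES)

/* Step 3: switch the data group to MIMIX send processing */
CHGDGDFN DGDFN(INVENTORY) RJLNK(*NO)

/* Step 4: restart all data group processes */
STRDG DGDFN(INVENTORY) PRC(*ALL)
```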

Changing from remote journaling to MIMIX processing
Use this procedure when you no longer want to use remote journaling for a data
group and want to permanently change the data group to use MIMIX send
processing.
Important! If the data group is configured for MIMIX Dynamic Apply, you must
complete the procedure in “Checklist: Converting to legacy cooperative
processing” on page 157 before you remove remote journaling.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. Perform a controlled end for the data group that you want to change using topic
“Ending a data group in a controlled manner” in the MIMIX Operations book. On
the ENDDG command, specify the following:
• *ALL for the Process prompt
• *CNTRLD for the End process prompt
Note: Do not end the RJ link at this time. Step 2 verifies that the RJ link is not
in use by any other processes or data groups before ending and
removing the RJ environment.
2. Perform the procedure in topic “Removing a remote journaling environment” on
page 231.
3. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *NO for the Use remote journal link prompt.
d. To accept the change, press Enter.
4. Start data group replication using the procedure “Starting selected data group
processes” in the MIMIX Operations book and specify *ALL for the Start
processes prompt (PRC parameter).


Removing a remote journaling environment


Use this procedure when you want to remove a remote journaling environment that
you no longer need. This procedure removes configuration elements and system
objects necessary for data group replication with remote journaling.
1. Verify that the remote journal link is not used by any data group. Use “Identifying
data groups that use an RJ link” on page 316.
If you identify a data group that uses the remote journal link, check with your
MIMIX administrator and determine how to proceed. Possible courses of action
are:
• If the data group is being converted to use MIMIX send processing or if the
data group will no longer be used, perform a controlled end of the data group.
When the data group is ended, continue with Step 2 of this procedure.
• If the data group needs to remain operable using remote journaling, do not
continue with this procedure.

Attention: Do not continue with this procedure if you identified a data group that
uses the remote journal link and the data group must continue to be operational.
This procedure removes configuration elements and system objects necessary for
replication with remote journaling.

2. End the remote journal link and verify that it has a state value of *INACTIVE
before you continue. Refer to topics “Ending a remote journal link independently”
and “Checking status of a remote journal link” in the MIMIX Operations book.
3. From the management system, do the following to remove the connection to the
remote journal:
a. Access the journal definitions for the data group whose environment you want
to change. From the Work with Data Groups display, type a 45 (Journal
definitions) next to the data group that you want and press Enter.
b. Type a 12 (Work with RJ links) next to either journal definition you want and
press Enter. You can select either the source or target journal definition.
c. From the Work with RJ Links display, type a 15 (Remove RJ connection) next
to the link that you want and press Enter.
Note: If more than one RJ link is available for the data group, ensure that you
choose the link you want.
d. A confirmation display appears. To continue removing the connections for the
selected links, press Enter.
4. From the Work with RJ Links display, do the following to delete the target system
objects associated with the RJ link:
a. Type a 24 (Delete target jrn environment) next to the link that you want and
press Enter.

b. A confirmation display appears. To continue deleting the journal, its associated
message queue, the journal receiver, and to remove the connection to the
source journal receiver, press Enter.
5. Delete the target journal definition using topic “Deleting a definition” on page 258.
When you delete the target journal definition, its link to the source journal
definition is removed.
6. Use option 4 (Delete) on the Work with Monitors display to delete the RJLNK
monitors which have the same name as the RJ link.

CHAPTER 10 Configuring data group definitions

By creating a data group definition, you identify to MIMIX the characteristics of how
replication occurs between two systems. You must have at least one data group
definition in order to perform replication.
In an Intra environment, a data group definition defines how replication occurs
between the two product libraries used by INTRA.
Once data group definitions exist for MIMIX, they can also be used by the MIMIX
Promoter product.
The topics in this chapter include:
• “Tips for data group parameters” on page 234 provides tips for using the more
common options for data group definitions.
• “Creating a data group definition” on page 246 provides the steps to follow for
creating a data group definition.
• “Changing a data group definition” on page 250 provides the steps to follow for
changing a data group definition.
• “Fine-tuning backlog warning thresholds for a data group” on page 251 describes
what to consider when adjusting the values at which the backlog warning
thresholds are triggered.

Tips for data group parameters
This topic provides tips for using the more common options for data group definitions.
Context-sensitive help is available online for all options on the data group definition
commands. Refer to “Additional considerations for data groups” on page 245 for more
information.
Shipped default values for the Create Data Group Definition (CRTDGDFN) command
result in data groups configured for MIMIX Dynamic Apply. For additional information
see Table 11 in “Considerations for LF and PF files” on page 106.
Data group names (DGDFN, DGSHORTNAM) These parameters identify the data
group.
The Data group definition (DGDFN) is a three-part name that uniquely identifies
a data group. The three-part name must be unique to a MIMIX installation. The
first part of the name identifies the data group. The second and third parts of the
name (System 1 and System 2) specify system definitions representing the
systems between which the files and objects associated with the data group are
replicated.
Notes:
• In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_). Data group names cannot be UPSMON or
begin with the characters MM.
• For Clustering environments only, MIMIX recommends using the value
*RCYDMN in System 1 and System 2 fields for Peer CRGs.
One of the system definitions specified must represent a management system.
Although you can specify the system definitions in any order, you may find it
helpful if you specify them in the order in which replication occurs during normal
operations. For many users normal replication occurs from a production system to
a backup system, where the backup system is defined as the management
system for MIMIX. For example, if you normally replicate data for an application
from a production system (MEXICITY) to a backup system (CHICAGO) and the
backup system is the management system for the MIMIX cluster, you might name
your data group SUPERAPP MEXICITY CHICAGO.
The Short data group name (DGSHORTNAM) parameter indicates an
abbreviated name used as a prefix to identify jobs associated with a data group.
MIMIX will generate this prefix for you when the default *GEN is used. The short
name must be unique to the MIMIX cluster and cannot be changed after the data
group is created.
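Using the SUPERAPP example above, a data group for an application that is normally replicated from MEXICITY to CHICAGO could be created as follows. This is a sketch; all other parameters take their shipped default values.

```
CRTDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) DGSHORTNAM(*GEN)
```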
Data resource group entry (DTARSCGRP) This parameter identifies the data
resource group entry in which you want the data group to participate. The data
resource group entry provides the association to an application group. When the
specified value is a name or resolves to a name, operations to start, end, or switch are
typically performed at the level of the application group instead of the data group. The
default value, *DFT, will check for the existence of application groups in the
installation library to determine behavior. If there are application groups, the first part
of the three-part data group name is used for the name of the data resource group
entry. When application groups exist, the data resource group entry that is specified,
or to which *DFT resolves, must exist. If application groups do not exist, *DFT is the
same as *NONE and the data group is not associated with a data resource group
entry. You can also specify the name of an existing data resource group entry.
Data source (DTASRC) This parameter indicates which of the systems in the data
group definition is used as the source of data for replication.
Allow to be switched (ALWSWT) This parameter determines whether the direction
in which data is replicated between systems can be switched. If you plan to use the
data group for high availability purposes, use the default value *YES. This allows you
to use one data group for replicating data in either direction between the two systems.
If you do not allow switching directions, you need a second data group with similar
attributes, in which the roles of source and target are reversed, in order to support
high availability.
Data group type (TYPE) The default value *ALL indicates that the data group can be
used by both user journal and system journal replication processes. This enables you
to use the same data group for all of the replicated data for an application. The value
*ALL is required for user journal replication of IFS objects, data areas, and data
queues. MIMIX Dynamic Apply also supports the value *DB. For additional
information, see “Requirements and limitations of MIMIX Dynamic Apply” on page 111.
Note: In Clustering environments only, the data group value of *PEER is available.
This provides support for replicating system values and other system attributes
that MIMIX does not otherwise support.
Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the
transfer definitions used to communicate between the systems defined by the data
group. The name you specify in these parameters must match the first part of a
transfer definition name. By default, MIMIX uses the name PRIMARY for a value of
the primary transfer definition (PRITFRDFN) parameter and for the first part of the
name of a transfer definition.
If you specify a secondary transfer definition (SECTFRDFN), it is used if the
communications path specified in the primary transfer definition is not available.
Once MIMIX starts using the secondary transfer definition, it continues to use it even
after the primary communication path becomes available again.
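For example, the hypothetical SUPERAPP data group from the naming example could use a transfer definition named PRIMARY and fall back to a hypothetical secondary transfer definition named BACKUP, as in this sketch:

```
CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) +
         PRITFRDFN(PRIMARY) SECTFRDFN(BACKUP)
```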
Reader wait time (seconds) (RDRWAIT) You can specify the maximum number of
seconds that the send process waits when there are no entries available to process.
Jobs go into a delay state when there are no entries to process. Jobs wait for the time
you specify even when new entries arrive in the journal. A value of 0 uses more
system resources.
Common database parameters (JRNTGT, JRNDFN1, JRNDFN2, ASPGRP1,
ASPGRP2, RJLNK, COOPJRN, NBRDBAPY, DBJRNPRC) These parameters apply
to data groups that can include database files or tracking entries. Data group types of
*ALL or *DB include database files. Data group types of *ALL may also include
tracking entries.
Journal on target (JRNTGT) The default value *YES enables journaling on the
target system, which allows you to switch the direction of a data group more
quickly. For data groups that perform user journal replication, the value *YES is
required to allow target journal inspection.
Replication of files with some types of referential constraint actions may require a
value of *YES. For more information, see “Considerations for LF and PF files” on
page 106.
If you specify *NO, you must ensure that, in the event of a switch to the direction
of replication, you manually start journaling on the target system before allowing
users to access the files. Otherwise, activity against those files may not be
properly recorded for replication.
System 1 journal definition (JRNDFN1) and System 2 journal definition
(JRNDFN2) parameters identify the user journal definitions associated with the
systems defined as System 1 and System 2, respectively, of the data group. The
value *DGDFN indicates that the journal definition has the same name as the data
group definition.
The DTASRC, ALWSWT, JRNTGT, JRNDFN1, and JRNDFN2 parameters interact
to automatically create as much of the journaling environment as possible. The
DTASRC parameter determines whether system 1 or system 2 is the source
system for the data group. When you create the data group definition, if the
journal definition for the source system does not exist, a journal definition is
created. If you specify to journal on the target system and the journal definition for
the target system does not exist, that journal definition is also created. The
names of journal definitions created in this way are taken from the values of the
JRNDFN1 and JRNDFN2 parameters according to which system is considered
the source system at the time they are created. You may need to build the
journaling environment for these journal definitions.
System 1 ASP group (ASPGRP1) and System 2 ASP group (ASPGRP2)
parameters identify the name of the primary auxiliary storage pool (ASP) device
within an ASP group on each system. The value *NONE allows replication from
libraries in the system ASP and basic user ASPs 2-32. Specify a value when you
want to replicate IFS objects from a user journal or when you want to replicate
objects from ASPs 33 or higher. For more information see “Benefits of
independent ASPs” on page 659.
Use remote journal link (RJLNK) This parameter identifies how journal entries
are moved to the target system. The default value, *YES, uses remote journaling
to transfer data to the target system. This value results in the automatic creation of
the journal definitions (CRTJRNDFN command) and the RJ link (ADDRJLNK
command), if needed. The RJ link defines the source and target journal definitions
and the connection between them. When ADDRJLNK is run during the creation of
a data group, the data group transfer definition names are used for the
ADDRJLNK transfer definition parameters.
MIMIX Dynamic Apply requires the value *YES. The value *NO is appropriate
when MIMIX source-send processes must be used.
Cooperative journal (COOPJRN) This parameter determines whether
cooperatively processed operations for journaled objects are performed primarily
by user (database) journal replication processes or system (audit) journal
replication processes. Cooperative processing through the user journal is
recommended and is called MIMIX Dynamic Apply. For newly created data
groups, the shipped default value *DFT resolves to *USRJRN (user journal) when
configuration requirements for MIMIX Dynamic Apply are met. If those
requirements are not met, *DFT resolves to *SYSJRN and cooperative processing
is performed through system journal replication processes.
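To force cooperative processing through the user journal rather than relying on *DFT resolution, the value can be set explicitly, as in this sketch using the hypothetical SUPERAPP data group:

```
CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) COOPJRN(*USRJRN)
```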
Number of DB apply sessions (NBRDBAPY) You can specify the number of
apply sessions allowed to process the data for the data group.
DB journal entry processing (DBJRNPRC) This parameter allows you to
specify several criteria that MIMIX will use to filter user journal entries before they
are sent to the database apply (DBAPY) process. Entries that are filtered out are
not replicated. In data groups that use remote journaling, the filtering is performed
by the database reader (DBRDR) process. In data groups configured to use
MIMIX source-send processes, filtering is performed by the database send
(DBSND) process.
Each element of the parameter identifies a criterion that can be set to either *SEND
or *IGNORE. The value *SEND causes the journal entries to be processed and
sent to the database apply process. The value *IGNORE prevents the entries from
being sent to the database apply process. Certain database techniques, such as
keyed replication, may require that an element be set to a specific value.
For data groups which use the DBSND process, the value *IGNORE can minimize
the amount of data sent over a communications path.
The following available elements describe how journal entries are handled by the
database reader (DBRDR) or the database send (DBSND) processes.
• Before images This criterion determines whether before-image journal entries
are filtered out and are not sent to the database apply process. Even if *IGNORE
is specified, before-images are still processed and sent to the database apply
process when *IMMED is specified for the Commit mode element of Database
apply processing (DBAPYPRC). If you use keyed replication, the before-images
are often required and you should specify *SEND. The value *SEND is also
required for the IBM RMVJRNCHG (Remove Journal Change) command. See
“Additional considerations for data groups” on page 245 for more information.
• For files not in data group This criterion determines whether journal entries for
files that are not configured for replication by the data group are filtered out and
are not sent to the database apply process.
• Generated by MIMIX activity This criterion determines whether journal entries
resulting from the MIMIX database apply process are filtered out and are not
sent to the database apply process. Filtering out these entries may be
necessary in environments which perform bi-directional replication.
• Not used by MIMIX This criterion determines whether journal entries not used by
MIMIX are filtered out and are not sent to the database apply process.
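As a sketch, a data group could filter out before-images and entries not used by MIMIX while still sending the other two categories. The element order shown is an assumption based on the order in which the elements are described above; prompt the command to confirm element positions.

```
CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) +
         DBJRNPRC(*IGNORE *SEND *SEND *IGNORE)
/* Assumed element order: before images, files not in data group, */
/* generated by MIMIX activity, not used by MIMIX                 */
```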
Additional parameters: Use F10 (Additional parameters) to access the following
parameters. These parameters are considered advanced configuration topics.

Remote journaling threshold (RJLNKTHLD) This parameter specifies the backlog
threshold criteria for the remote journal function. When the backlog reaches any of the
specified criteria, the threshold exceeded condition is indicated in the status of the
RJ link. The threshold can be specified as a time difference, a number of journal
entries, or both. When a time difference is specified, the value is the amount of time,
in minutes, between the timestamp of the last source journal entry and the timestamp
of the last remote journal entry. When a number of journal entries is specified, the
value is the number of journal entries that have not been sent from the local journal to
the remote journal. If *NONE is specified for a criterion, that criterion is not considered
when determining whether the backlog has reached the threshold.
Synchronization check interval (SYNCCHKITV) This parameter, which is only valid
for database processing, allows you to specify how many before-image entries to
process between synchronization checks. For MIMIX to use this feature, the journal
image file entry option (FEOPT parameter) must allow before-image journaling
(*BOTH). When you specify a value for the interval, a synchronization check entry is
sent to the apply process on the target system. The apply process compares the
before-image to the image in the file (the entire record, byte for byte). If there is a
synchronization problem, MIMIX puts the data group file entry on hold and stops
applying journal entries. The synchronization check transactions still occur even if
you specify to ignore before-images in the DB journal entry processing (DBJRNPRC)
parameter.
Time stamp interval (TSPITV) This parameter, which is only valid for database
processing, allows you to specify the number of entries to process before MIMIX
creates a time stamp entry. Time stamps are used to evaluate performance.
Note: The TSPITV parameter does not apply for remote journaling (RJ) data groups.
Verify interval (VFYITV) This parameter allows you to specify the number of journal
transactions (entries) to process before MIMIX performs additional processing.
When the value specified is reached, MIMIX verifies that the communications path
between the source system and the target system is still active and that the send and
receive processes are successfully processing transactions. A higher value uses less
system resources. A lower value provides more timely reaction to error conditions.
Larger, high-volume systems should have higher values. This value also affects how
often the status is updated with the “Last read” entries. A lower value results in more
accurate status information.
Data area polling interval (DTAARAITV) This parameter specifies the number of
seconds that the data area poller waits between checks for changes to data areas.
The poller process is only used when configured data group data area entries exist.
The preferred methods of replicating data areas require that data group object entries
be used to identify data areas. When object entries identify data areas, the value
specified in them for cooperative processing (COOPDB) determines whether the data
areas are processed through the user journal with advanced journaling, or through
the system journal.
Journal at creation (JRNATCRT) This parameter specifies whether to start
journaling on new objects of type *FILE, *DTAARA, and *DTAQ when they are
created. The decision to start journaling for a new object is based on whether the data
group is configured to cooperatively process any object of that type in a library. All
new objects of the same type are journaled, including those not replicated by the data
group.
If multiple data groups include the same library in their configurations, only allow one
data group to use journal at object creation (*YES or *DFT). The default for this
parameter is *DFT which allows MIMIX to determine the objects to journal at creation.
Note: There are some IBM library restrictions identified within the requirements for
implicit starting of journaling described in “What objects need to be journaled”
on page 343. For additional information, see “Processing of newly created
files and objects” on page 126.
Parameters for automatic retry processing: MIMIX may use delay retry cycles
when performing system journal replication to automatically retry processing an object
that failed due to a locking condition or an in-use condition. It is normal for some
pending activity entries to undergo delay retry processing—for example, when a
conflict occurs between replicated objects in MIMIX and another job on the system.
The following parameters define the scope of two retry cycles:
Number of times to retry (RTYNBR) This parameter specifies the number of
attempts to make during a delay retry cycle.
First retry delay interval (RTYDLYITV1) This parameter specifies the amount of
time, in seconds, to wait before retrying a process in the first (short) delay retry
cycle.
Second retry delay interval (RTYDLYITV2) This parameter specifies the amount of
time, in seconds, to wait before retrying a process in the second (long) delay retry
cycle. This cycle is used only after all the retries for the RTYDLYITV1 parameter
have been attempted.
After the initial failed save attempt, MIMIX delays for the number of seconds specified
for the First retry delay interval (RTYDLYITV1) before retrying the save operation.
This is repeated for the specified number of times (RTYNBR).
If the object cannot be saved after all attempts in the first cycle, MIMIX enters the
second retry cycle. In the second retry cycle, MIMIX uses the number of seconds
specified in the Second retry delay interval (RTYDLYITV2) parameter and repeats the
save attempt for the specified number of times (RTYNBR).
If the object identified by the entry is in use (*INUSE) after the first and second retry
cycle attempts have been exhausted, a third retry cycle is attempted if the Automatic
object recovery policy is enabled. The values in effect for the Number of third
delay/retries policy and the Third retry interval (min.) policy determine the scope of the
third retry cycle. After all attempts have been performed, if the object still cannot be
processed because of contention with other jobs, the status of the entry will be
changed to *FAILED.
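For example, with the hypothetical values below, the first cycle retries 5 times at 10-second intervals (up to 50 seconds of delay), and the second cycle retries 5 more times at 300-second intervals (up to an additional 25 minutes) before the third retry cycle or a *FAILED status is considered:

```
CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) +
         RTYNBR(5) RTYDLYITV1(10) RTYDLYITV2(300)
```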
File and tracking entry options (FEOPT) This parameter specifies default options
that determine how MIMIX handles file entries and tracking entries for the data group.
All database file entries, object tracking entries, and IFS tracking entries defined to
the data group use these options unless they are explicitly overridden by values
specified in data group file or object entries. File entry options in data group object
entries enable you to set values for files and tracking entries that are cooperatively
processed.

The options are as follows:
• Journal image This option allows you to control the kinds of record images that
are written to the journal when data updates are made to database file records,
IFS stream files, data areas or data queues. The default value *AFTER causes
only after-images to be written to the journal. The value *BOTH causes both
before-images and after-images to be written to the journal. Some database
techniques, such as keyed replication, may require the use of both before-image
and after-images. *BOTH is also required for the IBM RMVJRNCHG (Remove
Journal Change) command. See “Additional considerations for data groups” on
page 245 for more information.
• Omit open/close entries This option allows you to specify whether open and close
entries are omitted from the journal. The default value *YES indicates that open
and close operations on file members or IFS tracking entries defined to the data
group do not create open and close journal entries and are therefore omitted from
the journal. If you specify *NO, journal entries are created for open and close
operations and are placed in the journal.
• Replication type This option allows you to specify the type of replication to use for
database files defined to the data group. The default value *POSITION indicates
that each file is replicated based on the position of the record within the file.
Positional replication uses the values of the relative record number (RRN) found
in the journal entry header to locate a database record that is being updated or
deleted. MIMIX Dynamic Apply requires the value *POSITION.
The value *KEYED indicates that each file is replicated based on the value of the
primary key defined to the database file. The value of the key is used to locate a
database record that is being deleted or updated. MIMIX strongly recommends
that any file configured for keyed replication also be enabled for both before-
image and after-image journaling. Files defined using keyed replication must have
at least one unique access path defined. For additional information, see “Keyed
replication” on page 385.
• Lock member during apply This option allows you to choose the type of lock the
database apply process will use for the data of replicated file members on the
target node. The default value allows the database apply process to obtain an
exclusive, allow read (*EXCLRD) lock on the data of file members being
processed to prevent other jobs from performing updates, thereby ensuring
access to complete replication. Locking occurs when the apply process is started
and affects members whose file entry status is active. Database apply processing
will also lock objects as needed to replicate changes from the source node. This
option does not apply to cooperatively processed data areas, data queues, or IFS
objects.
Note: If the value *NONE is specified, the apply process may hold *SHRUPD
locks on data for improved performance.
• Apply session With this option, you can assign a specific apply session for
processing files defined to the data group. The default value *ANY indicates that
MIMIX determines which apply session to use and performs load balancing.
Notes:


• Any changes made to the apply session option are not effective until the data
group is started with *YES specified for the clear pending and clear error
parameters.
• For IFS and object tracking entries, only apply session A is valid. For additional
information see “Database apply session balancing” on page 90.
• Collision resolution This option determines how data collisions are resolved. The
default value *HLDERR indicates that a file is put on hold if a collision is detected.
The value *AUTOSYNC indicates that MIMIX will attempt to automatically
synchronize the source and target file. You can also specify the name of the
collision resolution class (CRCLS) to use. A collision resolution class allows you to
specify how to handle a variety of collision types, including calling exit programs to
handle them. See the online help for the Create Collision Resolution Class
(CRTCRCLS) command for more information.
Note: The *AUTOSYNC value should not be used if the Automatic database
recovery policy is enabled.
• Disable triggers during apply This option determines if MIMIX should disable any
triggers on physical files during the database apply process. The default value
*YES indicates that triggers should be disabled by the database apply process
while the file is opened.
• Process trigger entries This option determines if MIMIX should process any
journal entries that are generated by triggers. The default value *YES indicates
that journal entries generated by triggers should be processed.
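As a sketch, a data group that requires both before-images and after-images (for example, to support the RMVJRNCHG command) could change the Journal image element of FEOPT. Only the first element is shown; whether the remaining elements retain their current values when omitted is an assumption, so prompt the command to confirm.

```
CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) FEOPT(*BOTH)
```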
Database reader/send threshold (DBRDRTHLD) This parameter specifies the
backlog threshold criteria for the database reader (DBRDR) process. When the
backlog reaches any of the specified criteria, the threshold exceeded condition is
indicated in the status of the DBRDR process. If the data group is configured for
MIMIX source-send processing instead of remote journaling, this threshold applies to
the database send (DBSND) process. The threshold can be specified as time, journal
entries, or both. When time is specified, the value is the amount of time, in minutes,
between the timestamp of the last journal entry read by the process and the
timestamp of the last journal entry in the journal. When a journal entry quantity is
specified, the value is the number of journal entries that have not been read from the
journal. If *NONE is specified for a criterion, that criterion is not considered when
determining whether the backlog has reached the threshold.
Database apply processing (DBAPYPRC) This parameter allows you to specify
defaults for operations associated with the database apply processes. Each
configured apply session uses the values specified in this parameter. The areas for
which you can specify defaults are as follows:
• Force data interval You can specify the number of records that are processed
before MIMIX forces the apply process information to disk from cache memory. A
lower value provides easier recovery for major system failures. A higher value
provides for more efficient processing.
• Maximum open members You can specify the maximum number of members
(with journal transactions to be applied) that the apply process can have open at
one time. Once the limit specified is reached, the apply process selectively closes
one file before opening a new file. A lower value reduces disk usage by the apply
process. A higher value provides more efficient processing because MIMIX does
not open and close files as often.
• Threshold warning (1000s) You can specify the number of entries, in thousands,
that the apply process can have waiting to be applied before a warning message
is sent. When the threshold is reached, the threshold exceeded condition is
indicated in the status of the database apply process and a message is sent to the
primary and secondary message queues.
• Apply history log spaces You can specify the maximum number of history log
spaces that are kept after the journal entries are applied. Any value other than
zero (0) affects performance of the apply processes.
• Keep journal log user spaces You can specify the maximum number of journal log
spaces to retain after the journal entries are applied. Log user spaces are
automatically deleted by MIMIX. Only the number of user spaces you specify are
kept.
• Size of log user spaces (MB) You can specify the size of each log space (in
megabytes) in the log space chain. Log spaces are used as a staging area for
journal entries before they are applied. Larger log spaces provide better
performance.
• Commit mode You can specify when to apply journal entries that are under
commitment control. Default configuration values result in delaying the apply of
transactions under commitment control until the journal entry that indicates the
commit cycle completed is processed. Many users can benefit from using
immediate commit mode. For detailed information, see “Immediately applying
committed transactions” on page 367.
Object processing (OBJPRC) This parameter allows you to specify defaults for
object replication. The areas for which you can specify defaults are as follows:
• Object default owner You can specify the name of the default owner for objects
whose owning user profile does not exist on the target system. The product
default uses QDFTOWN for the owner user profile.
• DLO transmission method You can specify the method used to transmit the DLO
content and attributes to the target system. The value *OPTIMIZED uses IBM i
APIs and does not support doclists. The value *SAVRST uses IBM i save and
restore commands.
• IFS transmission method You can specify the method used to transmit IFS object
content and attributes to the target system. The default value *OPTIMIZED uses
IBM i APIs for better performance. The value *SAVRST uses IBM i save and
restore commands.
Note: It is recommended that you use the *OPTIMIZED method of IFS
transmission only in environments in which the high volume of IFS activity
results in persistent replication backlogs. The IBM i save and restore
method guarantees that all attributes of an IFS object are replicated.

1. Prior to service pack 7.1.07.00, the specified value was not multiplied by 1000.


• User profile status You can specify the user profile Status value for user profiles
when they are replicated. This allows you to replicate user profiles with the same
status as the source system in either an enabled or disabled status for normal
operations. If operations are switched to the backup system, user profiles can
then be enabled or disabled as needed as part of the switching process.
• Keep deleted spooled files You can specify whether to retain replicated spooled
files on the target system after they have been deleted from the source system.
When you specify *YES, the replicated spooled files are retained on the target
system after they are deleted from the source system. MIMIX does not perform
any clean-up of these spooled files. You must delete them manually when they
are no longer needed. If you specify *NO, the replicated spooled files are deleted
from the target system when they are deleted from the source system.
• Keep DLO system object name You can specify whether the DLO on the target
system is created with the same system object name as the DLO on the source
system. The system object name is only preserved if the DLO is not being
redirected during the replication process. If the DLO from the source system is
being directed to a different name or folder on the target system, then the system
object name will not be preserved.
• Object retrieval delay You can specify the amount of time, in seconds, to wait after
an object is created or updated before MIMIX packages the object. This delay
provides time for your applications to complete their access of the object before
MIMIX begins packaging the object.
• Object send prefix The prefix specified determines whether the data group uses a
dedicated job for the object send process or a job shared by multiple data groups.
All data groups that specify the value *SHARED will share the same job on the
source system. If you specify a three-character prefix, only other data groups that
specify the same prefix will share the object send job with this prefix. To change
this value, end the data group, change the value specified, and restart the data
group.
Note: A shared object send job can process journal entries for objects within
SYSBAS or within only one independent ASP. In environments with
independent ASPs, each system within the set of data groups sharing the
same object send job must be identified consistently within those data
groups in the appropriate ASP group parameter (ASPGRP1 or
ASPGRP2). This is required regardless of whether a system is currently
the source or target for the data groups.
Object send threshold (OBJSNDTHLD) This parameter specifies the backlog
threshold criteria for the object send (OBJSND) process. When the backlog reaches
any of the specified criteria, the threshold exceeded condition is indicated in the
status of the OBJSND process. The threshold can be specified as time, journal
entries, or both. When time is specified, the value is the amount of time, in minutes,
between the timestamp of the last journal entry read by the process and the
timestamp of the last journal entry in the journal. When a journal entry quantity is
specified, the value is the number of journal entries that have not been read from the
journal. If *NONE is specified for a criterion, that criterion is not considered when
determining whether the backlog has reached the threshold.
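The evaluation described above can be sketched as follows. This is an illustrative model, not MIMIX code; the function and parameter names are invented, and NONE stands in for the *NONE special value.

```python
# Illustrative model of the OBJSNDTHLD evaluation (not MIMIX code; names are
# invented). NONE stands in for the *NONE special value. Timestamps are in
# seconds; the time criterion is expressed in minutes, as on the command.

NONE = None

def threshold_exceeded(last_read_ts, last_journal_ts, unread_entries,
                       time_limit_min=10, entry_limit=NONE):
    """Return True when any specified criterion has been reached."""
    if time_limit_min is not NONE:
        # Time between the last entry read and the last entry in the journal.
        backlog_minutes = (last_journal_ts - last_read_ts) / 60.0
        if backlog_minutes >= time_limit_min:
            return True
    if entry_limit is not NONE and unread_entries >= entry_limit:
        return True
    return False
```

Specifying both criteria means whichever is reached first triggers the threshold condition; a criterion set to *NONE is simply never consulted.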

Object retrieve processing (OBJRTVPRC) This parameter allows you to specify the
minimum and maximum number of jobs allowed to handle object retrieve requests
and the threshold at which the number of pending requests queued for processing
causes additional temporary jobs to be started. The specified minimum number of
jobs will be started when the data group is started. During periods of peak activity, if
the number of pending requests exceeds the backlog jobs threshold, additional jobs,
up to the maximum, are started to handle the extra work. When the backlog is
handled and activity returns to normal, the extra jobs will automatically end. If the
backlog reaches the warning message threshold, the threshold exceeded condition is
indicated in the status of the object retrieve (OBJRTV) process. If *NONE is specified
for the warning message threshold, the process status will not indicate that a backlog
exists.
Container send processing (CNRSNDPRC) This parameter allows you to specify
the minimum and maximum number of jobs allowed to handle container send
requests and the threshold at which the number of pending requests queued for
processing causes additional temporary jobs to be started. The specified minimum
number of jobs will be started when the data group is started. During periods of peak
activity, if the number of pending requests exceeds the backlog jobs threshold,
additional jobs, up to the maximum, are started to handle the extra work. When the
backlog is handled and activity returns to normal, the extra jobs will automatically end.
If the backlog reaches the warning message threshold, the threshold exceeded
condition is indicated in the status of the container send (CNRSND) process. If
*NONE is specified for the warning message threshold, the process status will not
indicate that a backlog exists.
Object apply processing (OBJAPYPRC) This parameter allows you to specify the
minimum and maximum number of jobs allowed to handle object apply requests and
the threshold at which the number of pending requests queued for processing triggers
additional temporary jobs to be started. The specified minimum number of jobs will be
started when the data group is started. During periods of peak activity, if the number
of pending requests exceeds the backlog threshold, additional jobs, up to the
maximum, are started to handle the extra work. When the backlog is handled and
activity returns to normal, the extra jobs will automatically terminate. You can also
specify a threshold for warning message that indicates the number of pending
requests waiting in the queue for processing before a warning message is sent. When
the threshold is reached, the threshold exceeded condition is indicated in the status of
the object apply process and a message is sent to the primary and secondary
message queues.
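The object retrieve, container send, and object apply parameters all share the same job-scaling model. The following sketch illustrates that model; it is not MIMIX code, and the function names are invented for the example.

```python
# Sketch of the min/max job scaling shared by the object retrieve, container
# send, and object apply processes (illustrative only, not MIMIX code).

def jobs_needed(pending, active_jobs, min_jobs, max_jobs, backlog_jobs_thld):
    """Return how many jobs should be running for the current backlog."""
    if pending == 0:
        return min_jobs            # backlog handled; extra jobs end
    target = max(active_jobs, min_jobs)
    # Start extra jobs while the backlog per active job exceeds the threshold.
    while target < max_jobs and pending / target > backlog_jobs_thld:
        target += 1
    return target

def warning_condition(pending, warn_thld):
    """True when the backlog reaches the warning message threshold;
    a threshold of None models *NONE (never report a backlog)."""
    return warn_thld is not None and pending >= warn_thld
```

Note the two thresholds are independent: the backlog jobs threshold controls automatic scaling, while the warning message threshold only controls when the process status reports a threshold condition.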
User profile for submit job (SBMUSR) This parameter allows you to specify the
name of the user profile used to submit jobs. The default value *JOBD indicates that
the user profile named in the specified job description is used for the job being
submitted. The value *CURRENT indicates that the same user profile used by the job
that is currently running is used for the submitted job.
Send job description (SNDJOBD) This parameter allows you to specify the name
and library of the job description used to submit send jobs. The product default uses
MIMIXSND in library MIMIXQGPL for the send job description.


Apply job description (APYJOBD) This parameter allows you to specify the name
and library of the job description used to submit apply requests. The product default
uses MIMIXAPY in library MIMIXQGPL for the apply job description.
Reorganize job description (RGZJOBD) This parameter, used by database
processing, allows you to specify the name and library of the job description used to
submit reorganize jobs. The product default uses MIMIXRGZ in library MIMIXQGPL
for the reorganize job description.
Synchronize job description (SYNCJOBD) This parameter, used by database
processing, allows you to specify the name and library of the job description used to
submit synchronize jobs. The product default uses MIMIXSYNC in library
MIMIXQGPL for the synchronize job description. This is valid for any synchronize
command that does not have a JOBD parameter on the display.
Job restart time (RSTARTTIME) MIMIX data group jobs restart daily to maintain the
MIMIX environment. You can change the time at which these jobs restart. The source
or target role of the system affects the results of the time you specify on a data group
definition. Results may also be affected if you specify a value that uses the job restart
time in a system definition defined to the data group. Changing the job restart time is
considered an advanced technique.

Recovery window (RCYWIN) Configuring a recovery window¹ for a data group
specifies the minimum amount of time, in minutes, that a recovery window is available
and identifies the replication processes that permit a recovery window. A recovery
window introduces a delay in the specified processes to create a minimum time
during which you can set a recovery point. Once a recovery point is set, you can react
to anticipated problems and take action to prevent a corrupted object from reaching
the target system. When the processes reach the recovery point, they are suspended
so that any corruption in the transactions after that point will not automatically be
processed.
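Conceptually, a recovery window acts as a time gate on the affected replication processes. The following sketch illustrates the idea; it is not MIMIX code, and all names are invented for the example.

```python
# Conceptual model of a recovery window and recovery point (illustrative
# only; all names are invented, and timestamps are in seconds).

def eligible_to_process(entry_ts, now, window_min, recovery_point_ts=None):
    """An entry may be processed only after it has aged past the recovery
    window, and never at or beyond a set recovery point (the processes
    suspend there so suspect transactions are not applied automatically)."""
    if recovery_point_ts is not None and entry_ts >= recovery_point_ts:
        return False                       # suspended at the recovery point
    return (now - entry_ts) >= window_min * 60
```

The deliberate delay is what creates the reaction time: corrupted transactions sit in the window, and setting a recovery point before they are processed keeps them off the target system.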
By its nature, a recovery window can affect the data group's recovery time objective
(RTO). Consider the effect of the duration you specify on the data group's ability to
meet your required RTO. You should also disable auditing for any data group that has
a configured recovery window. For more information, see “Preventing audits from
running” in the MIMIX Operations book.

Additional considerations for data groups


If unwanted changes are recorded to a journal but not realized until a later time, you
can backtrack to a time prior to when the changes were made by using the Remove
Journal Changes (RMVJRNCHG) command provided by IBM. In order to use this
command, your configuration must meet certain criteria including specific values for
some of the data group definition parameters. For more information, see “Removing
journaled changes” in the MIMIX Operations book.

1. Recovery windows and recovery points are supported with the MIMIX CDP™ feature, which
requires an additional access code.

Creating a data group definition
Shipped default values for the Create Data Group Definition (CRTDGDFN) command
result in data groups configured for MIMIX Dynamic Apply. These data groups use
remote journaling as an integral part of the user journal replication processes. For
additional information see Table 11 in “Considerations for LF and PF files” on
page 106. For information about command parameters, see “Tips for data group
parameters” on page 234.
To create a data group, do the following:
1. To access the appropriate command, do the following:
a. From the MIMIX Basic Main Menu, type 11 (Configuration menu) and press
Enter.
b. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.
c. From the Work with Data Group Definitions display, type a 1 (Create) next to
the blank line at the top of the list area and press Enter.
2. The Create Data Group Definition (CRTDGDFN) display appears. Specify a valid
three-part name at the Data group definition prompts.
Note: Data group names cannot be UPSMON or begin with the characters MM.
3. For the remaining prompts on the display, verify the values shown are what you
want. If necessary, change the values.
a. If you want a specific prefix to be used for jobs associated with the data group,
specify a value at the Short data group name prompt. Otherwise, MIMIX will
generate a prefix.
b. The default value for the Data resource group entry prompt will use the data
group name to create an association, through a data resource group entry,
between the data group and an application group when application groups are
configured within the installation. To have the data group associated with a
different data resource group entry, specify a name. When application groups
exist but you want to prevent the data group from participating in them, specify
*NONE.
c. Ensure that the value of the Data source prompt represents the system that
you want to use as the source of data to be replicated.
d. Verify that the value of the Allow to be switched prompt is what you want.
e. Verify that the value of the Data group type prompt is what you need. MIMIX
Dynamic Apply requires either *ALL or *DB. Legacy cooperative processing
and user journal replication of IFS objects, data areas, and data queues
require *ALL.
f. Verify that the value of the Primary transfer definition prompt is what you want.
g. If you want MIMIX to have access to an alternative communications path,
specify a value for the Secondary transfer definition prompt.


h. Verify that the value of the Reader wait time (seconds) prompt is what you
want.
i. Press Enter.
4. If you specified *OBJ for the Data group type, skip to Step 9.
5. The Journal on target prompt appears on the display. Verify that the value shown
is what you want and press Enter.
Note: If you specify *YES and you require that the status of journaling on the
target system is accurate, you should perform a save and restore
operation on the target system prior to loading the data group file entries. If
you are performing your initial configuration, however, it is not necessary
to perform a save and restore operation. You will synchronize as part of
the configuration checklist.
6. More prompts appear on the display that identify journaling information for the
data group. You may need to use the Page Down key to see the prompts. Do the
following:
a. Ensure that the values of System 1 journal definition and System 2 journal
definition identify the journal definitions you need.
Notes:
• If you have not journaled before, the value *DGDFN is appropriate. If you
have an existing journaling environment that you have identified to MIMIX in
a journal definition, specify the name of the journal definition.
• If you only see one of the journal definition prompts, you have specified *NO
for both the Allow to be switched prompt and the Journal on target prompt.
The journal definition prompt that appears is for the source system as
specified in the Data source prompt.
b. If any objects to replicate are located in an auxiliary storage pool (ASP) group
on either system, specify values for System 1 ASP group and System 2 ASP
group as needed. The ASP group name is the name of the primary ASP device
within the ASP group.
c. The default for the Use remote journal link prompt is *YES, which is required
MIMIX Dynamic Apply and preferred for other configurations. MIMIX creates a
transfer definition and an RJ link, if needed. To create a data group definition
for a source-send configuration, change the value to *NO.
d. At the Cooperative journal (COOPJRN) prompt, specify the journal for
cooperative operations. For new data groups, the value *DFT automatically
resolves to *USRJRN when Data group type is *ALL or *DB and Remote
journal link is *YES. The value *USRJRN processes through the user
(database) journal while the value *SYSJRN processes through the system
(audit) journal.
7. At the Number of DB apply sessions prompt, specify the number of apply sessions
you want to use.
8. Verify that the values shown for the DB journal entry processing prompts are what
you want.

Note: *SEND is required for the IBM RMVJRNCHG (Remove Journal Changes)
command. See “Additional considerations for data groups” on page 245
for more information.
9. At the Description prompt, type a text description of the data group definition,
enclosed in apostrophes.
10. Do one of the following:
• To accept the basic data group configuration, press Enter. Most users can
accept the default values for the remaining parameters. The data group is
created when you press Enter.
• To access prompts for advanced configuration, press F10 (Additional
Parameters) and continue with the next step.
Advanced Data Group Options: The remaining steps of this procedure are only
necessary if you need to access options for advanced configuration topics. The
prompts are listed in the order they appear on the display. Because IBM i does not
allow additional parameters to be prompt-controlled, you will see all parameters
regardless of the value specified for the Data group type prompt.
11. Specify the values you need for the following prompts associated with user journal
replication:
• Remote journaling threshold
• Synchronization check interval
• Time stamp interval
• Verify interval
• Journal at creation
12. Specify the values you need for the following prompts associated with system
journal replication:
• Number of times to retry
• First retry delay interval
• Second retry delay interval
13. Specify the values you need for each of the prompts on the File and tracking ent.
opts (FEOPT) parameter.
Notes:
• Replication type must be *POSITION for MIMIX Dynamic Apply.
• Apply session A is used for IFS objects, data areas, and data queues that are
configured for user journal replication. For more information see “Database
apply session balancing” on page 90.
• The journal image value *BOTH is required for the IBM RMVJRNCHG
(Remove Journal Changes) command. See “Additional considerations for data
groups” on page 245 for more information.
14. Specify the values you need for each element of the following parameters:


• Database reader/send threshold


• Database apply processing
• Object processing
• Object send threshold
• Object retrieve processing
• Container send processing
• Object apply processing
15. If necessary, change the values for the following prompts:
• User profile for submit job
• Send job description and its Library
• Apply job description and its Library
• Reorganize job description and its Library
• Synchronize job description and its Library
• Job restart time
16. When you are sure that you have defined all of the values that you need, press
Enter to create the data group definition.

Changing a data group definition
For information about command parameters, see “Tips for data group parameters” on
page 234.
To change a data group definition, do the following:
1. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
2. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to
see additional prompts.
3. Make any changes you need for the values of the prompts. Page Down to see
more of the prompts.
Note: If you change the Number of DB apply sessions prompt (NBRDBAPY),
you need to start the data group specifying *YES for the Clear pending
prompt (CLRPND).
4. If you need to access advanced functions, press F10 (Additional parameters).
Make any changes you need for the values of the prompts.
5. When you are ready to accept the changes, press Enter.

Changing a data group to use a shared object send job


In earlier versions of MIMIX, data groups used a dedicated job for the object send
process. This task changes a data group to use a shared object send job. Data
groups that use a shared object send job must also be configured for the same ASP
group.
Note: The data group that you want to change should not have a significant backlog
for the object send process. If there is a significant backlog, allow processing
to catch up before changing the configuration. A significant backlog can cause
other data groups using the shared job to appear to have no replication activity
while the shared job addresses the backlog. When the data group is started
after the configuration change, MIMIX determines whether the specified
starting object sequence number is earlier than the currently read sequence
number. If the specified starting point for the data group is earlier, the shared
job completes its current block of entries, then returns to the earlier point for
the data group being added. The shared job reads the earlier entries and
routes the transactions to the data group being started. When the shared job
reaches the last entry it read at the time of the STRDG request, it resumes
routing transactions to all active data groups using the shared job.
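The catch-up behavior described in the note can be sketched as follows. This is an illustrative model only, not MIMIX code; the routing is simplified to sequence numbers, and the data group names are invented for the example.

```python
# Sketch of the shared send job's catch-up behavior (illustrative only; data
# group names and sequence numbers are invented). `journal` is a list of
# (sequence_number, data) pairs; `current_seq` is the entry the shared job
# had read when the STRDG request was made.

def route_entries(journal, active_groups, current_seq, new_group, new_start_seq):
    """Return (group, sequence) routing decisions made when a data group
    whose starting point is earlier than the current read position joins."""
    routed = []
    if new_start_seq < current_seq:
        # Rewind: the earlier entries go only to the group being added.
        for seq, _data in journal:
            if new_start_seq <= seq < current_seq:
                routed.append((new_group, seq))
    # From the caught-up point on, route to every active group.
    for seq, _data in journal:
        if seq >= current_seq:
            for group in active_groups + [new_group]:
                routed.append((group, seq))
    return routed
```

This is why a large backlog on the joining data group matters: during the rewind phase, the other data groups sharing the job receive nothing.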
To change an existing data group to use a shared object send job, do the following:
1. End the data group with the dedicated object send process that you plan to
change. The data groups currently using the shared object send job do not need
to be ended.
2. From a management system on the Work with DG Definitions display, type a 2
(Change) next to the data group you want and press Enter.


3. The Change Data Group Definition (CHGDGDFN) display appears. Press F9 (All
parameters), then Page Down multiple times to locate the Object processing
parameter.
4. At the Object send prefix prompt, do one of the following:
• To use the default shared job, specify *SHARED. The data group will use the
default shared job on its current source system.
• To use a shared job that is limited to a subset of data groups, specify a three-
character prefix. Only the data groups that you explicitly set to use the same
prefix will share the same object send job.
5. Press Enter.
6. Ensure the following parameters have the same value for all data groups that
share the same object send job:
• System 1 ASP group
• System 2 ASP group
7. Start the data group.

Fine-tuning backlog warning thresholds for a data group


MIMIX supports the ability to set a backlog threshold on each of the replication jobs
used by a data group. When a job has a backlog that reaches or exceeds the
specified threshold, the threshold condition is indicated in the job status and reflected
in user interfaces.
Threshold settings are meant to inform you that, while normal replication processes
are active, a condition exists that could become a problem. What is an acceptable risk
for some data groups may not be acceptable for other data groups or in some
environments. For example, a threshold condition which occurs after starting a
process that was temporarily ended or while processing an unusually large object
which rarely changes may be an acceptable risk. However, a process that is
continuously in a threshold condition or having multiple processes frequently in
threshold conditions may indicate a more serious exposure that requires attention.
Ultimately, each threshold setting must be a balance between allowing normal
fluctuations to occur while ensuring that a job status is highlighted when a backlog
approaches an unacceptable level of risk to your recovery time objectives (RTO) or
risk of data loss.
Important! When evaluating whether threshold settings are compatible with your
RTO, you must consider all of the processes in the replication paths for which the
data group is configured and their thresholds. Each threshold represents only one
process in either the user journal replication path or the system journal replication
path. If the threshold for one process is set higher than its shipped value, a
backlog for that process may not result in a threshold condition while being
sufficiently large to cause subsequent processes to have backlogs which exceed
their thresholds. Consider the cumulative effect that having multiple processes in
threshold conditions would have on RTO and your tolerance for data loss in the
event of a failure.
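The cumulative evaluation suggested here can be sketched as a simple check. This is an illustrative model, not MIMIX code; the process names and minute values below are invented for the example.

```python
# Sketch of the cumulative check suggested above: the combined backlog of
# every process in one replication path is what matters for RTO (illustrative
# only; the process names and backlog figures are invented).

def path_backlog_ok(backlog_minutes_by_process, rto_minutes):
    """True when the summed backlog across all processes in a replication
    path still fits within the recovery time objective."""
    return sum(backlog_minutes_by_process.values()) <= rto_minutes

# Example: each process is under its own 10-minute threshold, yet the
# path as a whole may still threaten a tighter RTO.
user_journal_path = {"remote_journal": 4, "db_reader": 6, "db_apply": 12}
```

In this example no single process would show a threshold condition, yet the path's total backlog of 22 minutes would exceed a 20-minute RTO.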

Table 31 lists the shipped values for thresholds available in a data group definition,
identifies the risk associated with a backlog for each replication process, and
identifies available options to address a persistent threshold condition. For each data
group, you may need to use multiple options or adjust one or more threshold values
multiple times before finding an appropriate setting.

Table 31. Shipped threshold values for replication processes and the risk associated with a backlog

Remote journaling threshold (shipped default: 10 minutes)
Risk associated with a backlog: All journal entries in the backlog for the remote
journaling function exist only in the source system journal and are waiting to be
transmitted to the remote journal. These entries cannot be processed by MIMIX user
journal replication processes and are at risk of being lost if the source system fails.
After the source system becomes available again, journal analysis may be required.
Options for resolving persistent threshold conditions: Option 3, Option 5

Database reader/send threshold (shipped default: 10 minutes)
Risk associated with a backlog: For data groups that use remote journaling, all
journal entries in the database reader backlog are physically located on the target
system but MIMIX has not started to replicate them. If the source system fails, these
entries need to be read and applied before switching. For data groups that use
MIMIX source-send processing, all journal entries in the database send backlog are
waiting to be read and to be transmitted to the target system. The backlogged journal
entries exist only in the source system and are at risk of being lost if the source
system fails. After the source system becomes available again, journal analysis may
be required.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 5

Database apply threshold warning (1000s) (shipped default: 100,000 entries¹)
Risk associated with a backlog: All of the entries in the database apply backlog are
waiting to be applied to the target system. If the source system fails, these entries
need to be applied before switching. A large backlog can also affect performance.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 5

Object send threshold (shipped default: 10 minutes)
Risk associated with a backlog: All of the journal entries in the object send backlog
exist only in the system journal on the source system and are at risk of being lost if
the source system fails. MIMIX may not have determined all of the information
necessary to replicate the objects associated with the journal entries. As this backlog
clears, subsequent processes may have backlogs as replication progresses. If the
object send process is shared among multiple data groups and the backlog is
persistent, it may be necessary to reduce the number of data groups sharing the
same object send process.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 4,
Option 5

Object retrieve warning message threshold (shipped default: 100 entries)
Risk associated with a backlog: All of the objects associated with journal entries in
the object retrieve backlog are waiting to be packaged so they can be sent to the
target system. The latest changes to these objects exist only in the source system
and are at risk of being lost if the source system fails. As this backlog clears,
subsequent processes may have backlogs as replication progresses.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3,
Option 5

Container send warning message threshold (shipped default: 100 entries)
Risk associated with a backlog: All of the packaged objects associated with journal
entries in the container send backlog are waiting to be sent to the target system. The
latest changes to these objects exist only in the source system and are at risk of
being lost if the source system fails. As this backlog clears, subsequent processes
may have backlogs as replication progresses.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3,
Option 5

Object apply warning message threshold (shipped default: 100 requests)
Risk associated with a backlog: All of the entries in the object apply backlog are
waiting to be applied to the target system. If the source system fails, these entries
need to be applied before switching. Any related objects for which an automatic
recovery action was collecting data may be lost.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3,
Option 5

1. This appears as 100 on the CRTDGDFN command beginning with service pack
7.1.07.00 and higher, where the database apply threshold is specified as a number
which MIMIX multiplies by 1000.

The following options are available, listed in order of preference. Some options are
not available for all thresholds.
Option 1 - Adjust the number of available jobs. This option is available only for the
object retrieve, container send, and object apply processes. Each of these processes
has a configurable minimum and maximum number of jobs, a threshold at which

more jobs are started, and a warning message threshold. If the number of entries in a
backlog divided by the number of active jobs exceeds the job threshold, extra jobs are
automatically started in an attempt to address the backlog. If the backlog reaches the
higher value specified in the warning message threshold, the process status reflects
the threshold condition. If the process frequently shows a threshold status, the
maximum number of jobs may be too low or the job threshold value may be too high.
Adjusting either value in the data group configuration can result in more throughput.
Option 2 - Temporarily increase job performance. This option is available for all
processes except the RJ link. Use work management functions to increase the
resources available to a job by increasing its run priority or its timeslice (CHGJOB
command). These changes are effective only for the current instance of the job. The
changes do not persist if the job is ended manually or by nightly cleanup operations
resulting from the configured job restart time (RSTARTTIME) on the data group
definition.
Option 3 - Change threshold values or add criterion. All processes support
changing the threshold value. In addition, if the quantity of entries is more of a
concern than time, some processes support specifying additional threshold criteria
not used by shipped default settings. For the remote journal, database reader (or
database send), and object send processes, you can adjust the threshold so that a
number of journal entries is used as a criterion instead of, or in conjunction with, a
time value. If both time and entries are specified, the first criterion reached will trigger the
threshold condition. Changes to threshold values are effective the next time the
process status is requested.
Option 4 - Adjust the number of object send jobs. This option is only available for
the object send process. Determine if the data group uses a shared object send job. If
the threshold is persistent, it may be necessary to reduce the number of data groups
sharing the same object send process. For details, see “Optimizing performance for a
shared object send process” on page 254.
Option 5 - Get assistance. If you tried the other options and threshold conditions
persist, contact your Certified MIMIX Consultant for assistance. It may be necessary
to change configurations to adjust what is defined to each data group or to make
permanent work management changes for specific jobs.

Optimizing performance for a shared object send process
In a new configuration, default options for data groups result in all data groups using
the default shared object send (OBJSND) process for the system. Sharing an object
send job typically reduces CPU usage on the source system.
However, if performance is slow or the object send process has a persistent backlog,
it may be necessary to reduce the number of data groups sharing the object send
process so that the process can keep up with the volume from its shared data groups.
You can change a data group configuration to use a different shared object send job
or to use a dedicated job.


Conversely, a consistently low difference between the last read entry and the current
journal entry can indicate that more data groups may be able to share the object send
job.
The optimal number of data groups sharing an object send process is unique to every
environment. Determining the optimal number of data groups that can share an object
send process in your environment may require incremental adjustments. Add or
remove small numbers of data groups at a time to or from a shared object send
process and monitor the impact on performance and throughput.
Factors that affect performance of a shared object send process include:
• The number of data groups sharing the same object send job
• The type of data replicated by the data groups sharing the object send job. It may
be beneficial to share an object send process among data groups that replicate
only IFS objects or only DLO objects.

Identifying which data groups share an object send process


Do the following:
1. From the MIMIX Main Menu, type 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, type 4 (Work with data group definitions)
and press Enter.
3. The Work with DG Definitions display appears. Press F10 (View RJ links).
4. The Object Send Prefix column identifies whether the data group uses the default
shared job for the system (*SHARED), a named shared job, or a job that is unique
to the data group (*DGDFN).
5. To adjust the number of data groups using a shared object send process, use one
of the following:
• To add data groups to a shared object send job, use “Changing a data group to
use a shared object send job” on page 250.
• To change a data group to a dedicated object send job or to a different shared
job, use “Moving a data group to a different object send job” on page 255.

Moving a data group to a different object send job


Do the following:
1. Evaluate the data groups sharing an object send process and determine the
following:
• Which of the data groups to remove from the current shared job
• Whether each data group to be changed will use a different shared job or a
dedicated job
2. From a management system on the Work with DG Definitions display, type a 2
(Change) next to the data group you want to remove from the shared object send
job and press Enter.

3. The Change Data Group Definition (CHGDGDFN) display appears. Press F9 (All
parameters), then Page Down multiple times to locate the Object processing
parameter.
4. At the Object send prefix prompt, do one of the following:
• To use a shared job that is limited to a subset of data groups, specify a three-
character prefix. Only the data groups that you explicitly set to use the same
prefix will share the same object send job.
• To use a job dedicated to the data group, type *DGDFN.
5. Press Enter.
6. To make the change effective, end and re-start the data group.
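Step 4 through Step 6 can also be performed from a command line. The following sketch is illustrative only: the data group name DGNAME, the system names SYS1 and SYS2, and the OBJSNDPFX keyword for the Object send prefix prompt are assumptions made here, not values documented in this book. Prompt CHGDGDFN with F4 to confirm the actual keyword, and verify the end/start command parameters the same way, before using this in your environment.

```
/* Hedged sketch. DGNAME, SYS1, SYS2, and the OBJSNDPFX keyword  */
/* are assumptions; prompt CHGDGDFN with F4 to confirm.          */

/* Move the data group to a shared job limited to prefix ABC.    */
CHGDGDFN DGDFN(DGNAME SYS1 SYS2) OBJSNDPFX(ABC)

/* Or give the data group a dedicated object send job.           */
CHGDGDFN DGDFN(DGNAME SYS1 SYS2) OBJSNDPFX(*DGDFN)

/* End and restart the data group to make the change effective.  */
ENDDG DGDFN(DGNAME SYS1 SYS2)
STRDG DGDFN(DGNAME SYS1 SYS2)
```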

CHAPTER 11 Additional options: working with definitions

The procedures for performing common functions, such as copying, displaying, and
renaming, are very similar for all types of definitions used by MIMIX. The generic
procedures in this topic can be used for copying, deleting, displaying, and printing
definitions. Specific procedures are included for renaming each type of definition.
The topics in this chapter include:
• “Copying a definition” on page 257 provides a procedure for copying a system
definition, transfer definition, journal definition, or a data group definition.
• “Deleting a definition” on page 258 provides a procedure for deleting a system
definition, transfer definition, journal definition, or a data group definition.
• “Renaming definitions” on page 259 provides procedures for renaming definitions,
such as renaming a system definition, which is typically done as a result of a
change in hardware.

Copying a definition
Use this procedure on a management system to copy a system definition, transfer
definition, journal definition, or a data group definition.
Notes for data group definitions:
• The data group entries associated with a data group definition are not copied.
• Before you copy a data group definition, ensure that activity is ended for the
definition to which you are copying.
Notes for journal definitions:
• The journal definition identified in the From journal definition prompt must exist
before it can be copied. The journal definition identified in the To journal definition
prompt cannot exist when you specify *NO for the Replace definition prompt.
• If you specify *YES for the Replace definition prompt, the To journal definition
prompt must exist. It is possible to introduce conflicts in your configuration when
replacing an existing journal definition. These conflicts are automatically resolved
or an error message is sent when the journal environment for the definition is built.
To copy a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 93 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.

3. The "Work with" display for the definition type appears. Type a 3 (Copy) next to
the definition you want and press Enter.
4. The Copy display for the definition type you selected appears. At the To definition
prompt, specify a name for the definition to which you are copying information.
5. If you are copying a journal definition or a data group definition, the display has
additional prompts. Verify that the values of prompts are what you want.
6. If you are copying a system definition, specify the type of the new system
definition at the To system type prompt.
7. The value *NO for the Replace definition prompt prevents you from replacing an
existing definition. If you want to replace an existing definition, specify *YES.
8. To copy the definition, press Enter.

Deleting a definition
Use this procedure on a management system to delete a system definition, transfer
definition, journal definition, or a data group definition.

Attention:
When you delete a system or data group definition, information
associated with the definition is also deleted. Ensure that the
definition you delete is not being used for replication and be aware
of the following:
• If you delete a system definition, all other configuration
elements associated with that definition are deleted. This
includes journal definitions, transfer definitions, and data group
definitions with their associated data group entries. Journal
definitions and RJ links associated with system managers are
also deleted.
• If you delete a data group definition, all of its associated data
group entries are also deleted.
• The delete function does not clean up any records for files in
the error/hold file.
When you delete a journal definition, only the definition is deleted.
The objects being journaled, the journal, and the journal receivers
are not deleted. Journal definitions for internal MIMIX processes
cannot be deleted by users.

To delete a definition, do the following:


Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 93 for information about using these.
1. Ensure that the definition you want to delete is not being used for replication. Do
the following:

a. From the Work with Systems (WRKSYS) display, type option 8 (Work with data
groups) next to the system that you want to delete and press Enter.
The result is a list of data groups for the system you selected.
b. Optional: Type a 17 (File entries) next to the data group and press Enter. On
the Work with DG File Entries display, use option 10 (End journaling) to end
journaling for files associated with the data group.
c. Repeat Step b for additional data groups with files to end journaling on.
d. From the Work with Data Groups display, use option 10 (End data group) next
to all data groups for that system.
e. Before deleting a system definition, ensure all managers for the system
definition are ended. From the Work with Systems display, type a 10 (End) next
to the system definition you want and press Enter.
f. The End MIMIX Managers display appears. Specify the value for the type of
manager you want to end at the Manager prompt and press Enter. The
selected managers are ended.
g. If using application groups, remove the node from the recovery domain. From
the Work with application groups (WRKAG) display, take option 12 (Work with
node entries) and then option 4 (Remove) to remove the system.
2. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
3. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
4. The "Work with" display for the definition type appears. Type a 4 (Delete) next to
the definition you want and press Enter.
5. When deleting system definitions, transfer definitions, or journal definitions, a
confirmation display appears with a list of definitions to be deleted. To delete the
definitions, press F16.

Renaming definitions
The procedures for renaming a system definition, transfer definition, journal definition,
or data group definition must be run from a management system.

Attention:
Before you rename any definition, ensure that all other
configuration elements related to it are not active.

This section includes the following procedures:


• “Renaming a system definition” on page 260
• “Renaming a transfer definition” on page 266
• “Renaming a journal definition with considerations for RJ link” on page 267
• “Renaming a data group definition” on page 268

Renaming a system definition


System definitions are typically renamed as a result of a change in hardware. When
you rename a system definition, all other configuration information that references the
system definition is automatically modified to include the updated system name. This
includes journal definitions, transfer definitions, data group definitions, and associated
data group entries.

Attention:
Before you rename a system definition, ensure that MIMIX activity
is ended, that remote journal links used by the MIMIX environment
are ended, and that VSP servers that run on or have instances
which connect to the affected system are ended.

These instructions use MIMIX menus. See “Accessing the MIMIX Main Menu” on
page 93 for how to use them.
Do not attempt to rename multiple system definitions at the same time. If you
have multiple system definitions to rename, these instructions identify when it is
safe to begin renaming the next system definition.
Use these instructions to rename a single system definition. Variations in steps
needed for a management system or a network system are identified. Do the
following:
1. If you are using Vision Solutions Portal (VSP), you must end any VSP server that
runs on the system whose system definition will be renamed, or any VSP server
that connects to a product instance in which a system definition will be renamed.
Use the appropriate command, as follows:
• If the VSP server runs on an IBM i platform, use the following command:
VSI001LIB/ENDVSISVR
• If the VSP server runs on a Windows platform, from the Windows Start menu,
select:
All Programs > Vision Solutions Portal > Stop Server and click Stop Server.
2. From a management system, use the following command to perform a controlled
end of the MIMIX installation:
ENDMMX
It may take some time for all processes to end.
3. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems)
and press Enter.
When all processes have ended you should see the value *INACTIVE in the
System Manager, Journal Manager, Journal Inspection, and Collector Services
columns.
4. From the Work with Systems display, select option 8 (Work with data groups) for
the system whose definition you are renaming, and press Enter.
5. For each data group listed on the Work with Data Groups display, do the following
to ensure that replication activity is quiesced and to record information you will
need later for verifying the starting points in the journals.
a. Select option 8 (Display status) and press Enter.
b. Record the Last Read Receiver name and Sequence # for both database and
object. You will need this information to verify the starting points in a later step.
Note: We strongly recommend you also record the full three-part name of the
data group and identify which system will be renamed. This will be
useful when verifying that each data group has the correct journal
starting points after the system definition has been renamed. This will
be particularly useful if you will be attempting to rename more than one
system definition (when directed) or when renaming within a multi-node
environment.
c. Repeat Step 5a and Step 5b for each data group that includes the system
definition to be renamed.
d. When you have addressed all of the data groups, press F12 (Cancel) to return
to the Work with Systems display.
6. To determine which transfer definitions need to be changed, do the following:
a. From the Work with Systems display, press F16 (System definitions).
b. From the Work With System Definitions display, locate the system to be
renamed and select option 14 (Transfer definition) for that system.
c. On the Work with Transfer Definitions display, check the following:
• Each transfer definition in the list that has the system to be renamed
identified in the System 1 or System 2 columns must be changed using
Step 7.
• If there is a transfer definition with *ANY specified for the System 1 or
System 2 columns, you will need to restart the system port jobs with the new
host names when directed.
7. Perform this step for each transfer definition that includes the system to be
renamed. To change a transfer definition, do the following:
a. Select option 2 (Change) and press Enter.
b. Press F10 to access additional parameters.
c. If the system to be renamed is System 1 and *SYS1 is shown for the System 1
host name or address prompt, specify the actual host name or IP address
currently used for that system.
d. If the system to be renamed is System 2 and *SYS2 is shown for the System 2
host name or address prompt, specify the actual host name or IP address
currently used for that system.
e. Press Enter.
Note: Many installations will have an autostart entry for the STRSVR command.
Autostart entries must be reviewed for possible updates of a new system
name or IP address. For more information, see “Identifying the current
autostart job entry information” on page 188 and “Changing an autostart
job entry and its related job description” on page 189.
8. From the system that will be renamed, do the following for each transfer defintion
that was changed and for any transfer definition that specified *ANY for System 1
or System 2:
a. End the port job on the system, specifying the old value from the transfer
defintion in the command:
ENDSVR HOST(host-name-or-address) PORT(port-number)

b. Re-start the port job on the system, specifying the new value from the transfer
definition in the command:
STRSVR HOST(host-name-or-address) PORT(port-number)

c. For every transfer definition that you changed in Step 7, verify that
communication links start by using the command:
VFYCMNLNK PROTOCOL(*TFRDFN) TFRDFN(name system1 system2)
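As a worked illustration of Step 8a through Step 8c, suppose a transfer definition named PRIMARY between systems SYS1 and SYS2 previously used host name OLDHOST, now uses NEWHOST, and listens on port 50410. All of these names and the port number are hypothetical examples, not values from your environment:

```
ENDSVR HOST(OLDHOST) PORT(50410)     /* end port job on old host name */
STRSVR HOST(NEWHOST) PORT(50410)     /* restart with new host name    */
VFYCMNLNK PROTOCOL(*TFRDFN) TFRDFN(PRIMARY SYS1 SYS2)  /* verify link */
```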

IMPORTANT!
Perform only one of the next two steps. Perform only the step that is for the type of
system definition you are renaming (Step 9 to rename a network system or Step 10 to
rename a management system.) Never attempt to perform both steps.

9. If the system to be renamed is a network system, the total number of nodes in
the MIMIX instance affects where you need to perform the action to rename the
system definition.
• If the instance has only one management system and one network system,
skip to Step 9d.
• If the instance has three or more nodes, the network system to be renamed
must be renamed from a management system with which it is allowed to
communicate. (In an environment with three or more nodes, each network
system may be configured to limit the number of management systems with
which it is allowed to communicate.) Continue with Step 9a.
a. Use the command WRKSYSDFN to access the Work with System Definitions
display.
b. Select option 5 (Display) for the network system definition that will be renamed.
c. On the Display System Definition display, press Page Down to locate the
Communicate with mgt systems field.
• If the value is *ALL, you can rename the network system definition from any
one of the management systems.
• If one or more management systems are identified, you can rename the
network system from one of the identified management systems.

d. Go to the management system of a two-node environment, or to a
management system that you identified in Step 9c, and enter the command:
WRKSYSDFN
e. From the Work with System Definitions display, select option 7 (Rename) for
the network system definition to be renamed and press Enter.
f. On the Rename System Definitions (RNMSYSDFN) display, specify the new
name of the network system at the To system definition prompt. Then press
Enter.
g. Processing may take some time. Once the rename operation is complete,
press F12.
h. Go to Step 11 and continue.
10. If the system to be renamed is a management system, go to that system and
do the following:
a. Enter the command:
WRKSYSDFN
b. From the Work with System Definitions display, select option 7 (Rename) for
the management system definition to be renamed and press Enter.
c. On the Rename System Definitions (RNMSYSDFN) display, specify the new
name of the management system at the To system definition prompt. Then
press Enter.
d. Processing may take some time. Once the rename operation is complete,
press F12.
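In both Step 9 and Step 10, the rename itself is performed by the Rename System Definitions (RNMSYSDFN) command behind option 7, so it can also be submitted directly once you are on the correct system. The following is a hedged sketch only: the parameter keywords shown are assumptions, not confirmed here, so prompt RNMSYSDFN with F4 to see the actual prompts before use.

```
/* Assumed keywords; prompt RNMSYSDFN with F4 to confirm. */
RNMSYSDFN SYSDFN(OLDNAME) TOSYSDFN(NEWNAME)
```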
11. From the system where you performed the rename operation, enter the following
command:
WRKSYS
12. The Work with Systems display appears. Type a 9 (Start) next to the system that
is identified as the local system and press Enter.
13. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. At the Manager prompt, specify *SYS.
b. Press Enter.
14. From the Work with Systems display (WRKSYS command), type 7 (System
manager status) next to the system that is the local system and press Enter.
15. On the resulting Work with System Pairs display, all of the system manager
processes associated with the selected system definition are shown. Each row
identifies a pair of systems between which a system manager process exists and
the direction (source to target) in which it transfers data. Do the following:
a. Check each applicable system manager process to ensure there is no backlog.
b. After the system pair status shows an unprocessed count of zero, continue
with the following steps.
16. End MIMIX using the command:

ENDMMX
Wait for all processes to end before continuing.
17. From a command line on a management system, enter the following command to
start the system managers:
STRMMXMGR SYSDFN(*ALL) MGR(*SYS)
Perform Step 18 through Step 21 from the system that was renamed.
18. From the system that was renamed, enter the following command:
WRKSYS
19. On the Work with Systems display, select option 8 (Work with data groups) for the
system that was renamed and press Enter.
In the resulting list of data groups, one of the systems in each data group has
been renamed. For the following step, you will need to know the original names of
both systems in each data group and the new name of the renamed system. The
information you recorded in Step 5 to verify the starting point shows the old
system name. In multi-node environments, the verify step can become confusing
if you are not diligent about keeping track of what name has changed.
20. For each data group listed, do the following:
a. From the Work with Data Groups display, select option 9 (Start DG) and press
Enter.
b. The Start Data Group (STRDG) display appears. Press F10 to display
additional parameters.
c. At the Show confirmation screen prompt, specify *YES.
d. If the data group being started is controlled by an application group, press
PageDown. Then specify *YES for the Override if in data rsc. group prompt.
e. Press Enter.
f. The Confirmation display appears. Use the information you recorded in
Step 5b to verify the information displayed has the correct starting point for the
journal receivers.

Field on Confirmation display   Expected Value
Database journal receiver       Database receiver name recorded in Step 5b.
Database sequence number        Equal to 1 + the last read sequence number
                                recorded for Database in Step 5b.
                                Note: A value that is more than 1 larger than the
                                previously recorded value is not correct.
Object journal receiver         Object receiver name recorded in Step 5b.
Object sequence number          Equal to 1 + the last read sequence number
                                recorded for Object in Step 5b.
                                Note: A value that is more than 1 larger than the
                                previously recorded value is not correct.

Do one of the following:


• To confirm the starting point and start the data group, press Enter.
• If the receiver name or sequence number for either database or object does
not match the expected values shown in the table above, press F12
(Cancel). Determine whether the information you are using to verify the
starting point is for the data group you are attempting to start. Then try
Step 20 again. For Step 20b, specify the recorded values for the Database
large sequence number and Object large sequence number prompts, then
press F10 and continue with the instructions.
g. Repeat Step 20 for each data group listed on the Work with Data Groups
display.
21. Verify that all of the data groups on the Work with Data Groups display are active
before continuing with the next step. Refer to the MIMIX Operations book for more
information.
22. If you are renaming only one system definition, continue with the next step. If you
have more system definitions to be renamed, you can now safely return to Step 2
to begin renaming the next system definition.
Data groups not affected by the renamed system definition must be restarted
using Step 23 through Step 26 from a management system.
23. From a management system, enter the command:
WRKSYS
24. From the Work with Systems display, select option 8 (Work with data groups) on
the management system and press Enter.
25. From the Work with Data Groups display, for each data group that is not active, do
the following:
a. Select option 9 (Start DG) for a data group (highlighted red) that is not active
and press Enter.
b. The Start Data Group (STRDG) display appears. Press Enter. Additional
parameters are displayed.
c. If the data group being started is controlled by an application group, press F10,
then press PageDown. At the Override if in data rsc. group prompt, specify
*YES.
d. Press Enter to start the data group.
26. The Work with data groups display reappears. Ensure all data groups are active.
Press F5 to refresh data. Refer to the MIMIX Operations book for more
information.
27. Start the VSP server using the appropriate command:
• For a VSP server on an IBM i platform, use the following command:
VSI001LIB/STRVSISVR

• If the VSP server runs on a Windows platform, from the Windows Start menu,
select:
All Programs > Vision Solutions Portal > Start.

Renaming a transfer definition


When you rename a transfer definition, other configuration information which
references it is not updated with the new name. You must manually update other
information which references the transfer definition. The following procedure renames
the transfer definition and includes steps to update the other configuration information
that references the transfer definition including the system definition, data group
definition, and remote journal link. All of the steps must be completed.
To rename a transfer definition, do the following from the management system:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 93 for information about using these.
1. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
2. From the MIMIX Configuration Menu, select option 2 (Work with transfer
definitions) and press Enter.
3. From the Work with Transfer Definitions menu, type a 7 (Rename) next to the
definition you want to rename and press Enter.
4. The Rename Transfer Definition display for the definition type you selected
appears. At the To transfer definition prompt, specify the values you want for the
new name and press Enter.
5. Press F12 to return to the MIMIX Configuration Menu.
6. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
7. From the Work with System Definitions menu, type a 2 (Change) next to the
system name whose transfer definition needs to be changed and press Enter.
8. From the Change System Definition display, specify the new name for the
transfer definition and press Enter.
9. Press F12 to return to the MIMIX Configuration Menu.
10. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.
11. From the Work with DG Definitions menu, type a 2 (Change) next to the data
group name whose transfer definition needs to be changed and press Enter.
12. From the Change Data Group Definition display, specify the new name for the
transfer definition and press Enter until the Work with DG Definitions display
appears.
13. Press F12 to return to the MIMIX Configuration Menu.
14. From the MIMIX Configuration Menu, select option 8 (Work with remote journal
links) and press Enter.

15. From the Work with RJ Links menu, press F11 to display the transfer definitions.
16. Type a 2 (Change) next to the RJ link where you changed the transfer definition
and press Enter.
17. From the Change Remote Journal Link display, specify the new name for the
transfer definition and press Enter.
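Because the rename is not propagated automatically, every reference must be updated by hand, as the steps above show. The same chain of changes could, in principle, be driven from a command line. The sketch below assumes command and keyword names that follow the pattern of RNMSYSDFN and CHGDGDFN used elsewhere in this book; RNMTFRDFN, CHGSYSDFN, and the TFRDFN keyword are assumptions, so prompt each command with F4 to confirm before use. The remote journal link must still be updated as described in the steps above.

```
/* Assumed command and keyword names; prompt each with F4 to confirm. */
RNMTFRDFN TFRDFN(OLDTDF SYS1 SYS2) TOTFRDFN(NEWTDF SYS1 SYS2)

/* Update the references that are not changed automatically.          */
CHGSYSDFN SYSDFN(SYS1) TFRDFN(NEWTDF)
CHGDGDFN DGDFN(DGNAME SYS1 SYS2) TFRDFN(NEWTDF)
```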

Renaming a journal definition with considerations for RJ link


When you rename a journal definition, other configuration information which
references it is not updated with the new name. This procedure includes steps for
renaming the journal definition in the data group definition, including considerations
when an RJ link is used.
If you rename a journal definition, the journal name will also be renamed if you used
the default value of *JRNDFN when configuring the journal definition. If you do not
want the journal name to be renamed, you must specify the journal name rather than
the default of *JRNDFN for the journal (JRN) parameter.
To rename a journal definition, do the following from the management system:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 93 for information about using these.
1. Perform a controlled end for the data group in your remote journaling
environment. Use topic “Ending all replication in a controlled manner” in the
MIMIX Operations book.
2. If using remote journaling, do the following. Otherwise, continue with Step 3:
a. End the remote journal link in a controlled manner. Use topic “Ending a remote
journal link independently” in the MIMIX Operations book.
b. Verify that the remote journal link is not in use on both systems. Use topic
“Displaying status of a remote journal link” in the MIMIX Operations book. The
remote journal link should have a state value of *INACTIVE before you
continue.
c. From the MIMIX Intermediate Main Menu, select option 11 (Configuration
menu) and press Enter.
d. From the MIMIX Configuration Menu, select option 8 (Work with remote
journal links) and press Enter.
e. Remove the remote journal connection (the RJ link). From the Work with RJ
Links display, type a 15 (Remove RJ connection) next to the link that you want
and press Enter. A confirmation display appears. To continue removing the
connections for the selected links, press Enter.
f. Press F12 to return to the MIMIX Configuration Menu.
3. From the MIMIX Configuration Menu, select option 3 (Work with journal
definitions) and press Enter.
4. From the Work with Journal Definitions menu, type a 7 (Rename) next to the
journal definition names you want to rename and press Enter.

5. The Rename Journal Definition display for the definition you selected appears. At
the To journal definition prompts, specify the values you want for the new name.
a. If the journal name is *JRNDFN, ensure that there are no journal receivers in
the specified library whose names start with the journal receiver prefix. See
“Building the journaling environment” on page 221 for more information.
6. Press Enter. The Work with Journal Definitions display appears.
7. If using remote journaling, do the following to change the corresponding definition
for the remote journal. Otherwise, continue with Step 8:
a. Type a 2 (Change) next to the corresponding remote journal definition name
you changed and press Enter.
b. Specify the values entered in Step 5 and press Enter.
8. From the Work with Journal Definitions menu, type a 14 (Build) next to the journal
definition names you changed and press F4.
9. The Build Journaling Environment display appears. At the Source for values
prompt, specify *JRNDFN.
10. Press Enter. You should see a message that indicates the journal environment
was created.
11. Press F12 to return to the MIMIX Configuration Menu. From the MIMIX
Configuration Menu, select option 4 (Work with data group definitions) and press
Enter.
12. From the Work with DG Definitions menu, type a 2 (Change) next to the data
group name that uses the journal definition you changed and press Enter.
13. Press F10 to access additional parameters.
14. From the Change Data Group Definition display, specify the new name for the
System 1 journal definition and System 2 journal definition and press Enter twice.

Renaming a data group definition


Do the following to rename a data group definition:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 93 for information about using these.

Attention:
Before you rename a data group definition, ensure that the data
group has a status of *INACTIVE.

1. Ensure that the data group is ended. If the data group is active, end it using the
procedure “Ending a data group in a controlled manner” in the MIMIX Operations
book.
2. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
3. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.


4. From the Work with DG Definitions menu, type a 7 (Rename) next to the data
group name you want to rename and press Enter.
5. From the Rename Data Group Definition display, specify the new name for the
data group definition and press Enter.

CHAPTER 12 Configuring data group entries

Data group entries can identify one or many objects to be replicated or excluded from
replication. You can add individual data group entries, load entries from an existing
source, and change entries as needed.
The topics in this chapter include:
• “Creating data group object entries” on page 271 describes data group object
entries which are used to identify library-based objects for replication. Procedures
for creating these are included.
• “Creating data group file entries” on page 275 describes data group file entries
which are required for user journal replication of *FILE objects. Procedures for
creating these are included.
• “Creating data group IFS entries” on page 284 describes data group IFS entries
which identify IFS objects for replication. Procedures for creating these are
included.
• “Loading tracking entries” on page 286 describes how to manually load tracking
entries for IFS objects, data areas, and data queues that are configured for user
journal replication.
• “Adding a library to an existing data group” on page 288 describes how to add a
new library to an existing data group configuration, start journaling, and
synchronize its contents.
• “Adding an IFS directory to an existing data group” on page 293 describes how to
add a new directory to an existing data group configuration, start journaling, and
synchronize its contents.
• “Creating data group DLO entries” on page 297 describes data group DLO entries
which identify document library objects (DLOs) for replication by MIMIX system
journal replication processes. Procedures for creating these are included.
• “Additional options: working with DG entries” on page 300 provides procedures for
performing data group entry common functions, such as copying, removing, and
displaying.
The appendix “Supported object types for system journal replication” on page 635
lists IBM i object types and indicates whether each object type is replicated by MIMIX.
In environments where multiple data groups exist within a single resource group,
changes to data group configuration entries that identify objects to replicate are
propagated to the data groups within a resource group entry as follows:
• If the configuration entries are created or changed from an enabled data group,
they are propagated to all data groups within the resource group entry, including
disabled data groups.
• If configuration entries are created or changed from a disabled data group, they
are not propagated to the other data groups in the resource group entry.


Creating data group object entries


Data group object entries are used to identify library-based objects for replication.
How replication is performed for the objects identified depends on the object type and
configuration settings. For object types that cannot be journaled to a user journal,
system journal replication processes are used. For object types that can be journaled
(*FILE, *DTAARA, and *DTAQ), values specified in the object entry and other
configuration information determine how replication is performed. For these object
types, default values in the object entry are appropriate for user journal replication;
however, user journal replication of these object types also requires file entries (for
*FILE) and object tracking entries (for *DTAARA and *DTAQ).
For detailed concepts and requirements for supported configurations, see the
following topics:
• “Identifying library-based objects for replication” on page 100
• “Identifying logical and physical files for replication” on page 106
• “Identifying data areas and data queues for replication” on page 113
When you configure MIMIX, you can create data group object entries by adding
individual object entries or by using the custom load function for library-based objects.
The custom load function can simplify creating data group entries. This function
generates a list of objects that match your specified criteria, from which you can
selectively create data group object entries. For example, if you want to replicate all
but a few of the data areas in a specific library, you could use the Add Data Group
Object Entry (ADDDGOBJE) command to create a single data group object entry that
includes all data areas in the library. Then, using the same object selection criteria
with the custom load function, you can select from a list of data areas in the library to
create exclude entries for the objects you do not want replicated.
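For example (DGDFN1 and MYLIB are placeholder names, and the parameter keywords
shown are assumptions based on the prompt names on these displays; prompt each
command with F4 to confirm its syntax), a single include entry for all data areas
in a library could be created with:
ADDDGOBJE DGDFN(DGDFN1) LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*DTAARA) PRCTYPE(*INCLD)
The custom load could then be run with the same selection criteria to select the
exclude entries from the resulting list:
LODDGOBJE DGDFN(DGDFN1) LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*DTAARA) PRCTYPE(*EXCLD)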
Once you have created data group object entries, you can tailor them to meet your
requirements. You can also use the #DGFE audit or the Check Data Group File
Entries (CHKDGFE) command to ensure that the correct file entries exist for the
object entries configured for the specified data group.
Note: There is a logical limit to how many object entries are supported. Contact a
Certified MIMIX Consultant before adding additional object entries in a
configuration that already has a large number of them.

Loading data group object entries


In this procedure, you specify selection criteria that result in a list of objects with
similar characteristics. From the list, you can select multiple objects for which MIMIX
will create appropriate data group object entries. You can customize individual entries
later, if necessary.
From the management system, do the following to create a custom load of object
entries:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.

2. From the Work with Data Groups display, type a 20 (Object entries) next to the
data group you want and press Enter.
3. The Work with DG Object Entries display appears. Press F19 (Load).
4. The Load DG Object Entries (LODDGOBJE) display appears. Do the following to
specify the selection criteria:
a. Identify the library and objects to be considered. Specify values for the System
1 library and System 1 object prompts.
b. If necessary, specify values for the Object type, Attribute, System 2 library, and
System 2 object prompts.
c. At the Process type prompt, specify whether resulting data group object entries
should include or exclude the identified objects.
d. Specify appropriate values for the Cooperate with database and Cooperating
object types prompts. These prompts determine how *FILE, *DTAARA, and
*DTAQ objects are replicated. Change the values if you want to explicitly
replicate from the system journal or if you want to limit which object types are
cooperatively processed with the user journal.
e. Ensure that the remaining prompts contain the values you want for the data
group object entries that will be created. Press Page Down to see all of the
prompts.
5. To specify file entry options that will override those set in the data group definition,
do the following:
a. Press F9 (All parameters).
b. Press Page Down until you locate the File entry options prompt.
c. Specify the values you need on the elements of the File entry options prompt.
6. To generate the list of objects, press Enter.
Note: If you skipped Step 5, you may need to press Enter multiple times.
7. The Load DG Object Entries display appears with the list of objects that matched
your selection criteria. Either type a 1 (Select) next to the objects you want or
press F21 (Select all). Then press Enter.
8. If necessary, you can use “Adding or changing a data group object entry” on
page 272 to customize values for any of the data group object entries.
Synchronize the objects identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.

Adding or changing a data group object entry


Note: If you are converting a data group to use user journal replication for data areas
or data queues, use this procedure when directed by “Checklist: Change
*DTAARA, *DTAQ, IFS objects to user journaling” on page 151.


From the management system, do the following to add a new data group object entry
or change an existing entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the
data group you want and press Enter.
3. The Work with DG Object Entries display appears. Do one of the following:
• To add a new entry, type a 1 (Add) next to the blank line at the top of the list
and press Enter.
• To change an existing entry, type a 2 (Change) next to the entry you want and
press Enter.
4. The appropriate Data Group Object Entry display appears. When adding an entry,
you must specify values for the System 1 library and System 1 object prompts.
Note: When changing an existing object entry to enable replication of data areas
or data queues from a user journal (COOPDB(*YES)), make sure that you
specify only the objects you want to enable for the System 1 object
prompt. Otherwise, all objects in the library specified for System 1 library
will be enabled.
5. If necessary, specify a value for the Object type prompt.
6. Press F9 (All parameters).
7. If necessary, specify values for the Attribute, System 2 library, System 2 object,
and Object auditing value prompts.
8. At the Process type prompt, specify whether resulting data group object entries
should include (*INCLD) or exclude (*EXCLD) the identified objects.
9. Specify appropriate values for the Cooperate with database and Cooperating
object types prompts.
Note: These prompts determine how *FILE, *DTAARA, and *DTAQ objects are
replicated. Change the values if you want to explicitly replicate from the
system journal or if you want to limit which object types are cooperatively
processed with the user journal.
10. Ensure that the remaining prompts contain the values you want for the data group
object entries that will be created. Press Page Down to see more prompts.
11. If there are library (*LIB) objects to replicate and you do not want them replicated
to the same auxiliary storage pool (ASP) or independent ASP device on each
system, specify values for System 1 library ASP number, System 1 library ASP
device, System 2 library ASP number, and System 2 library ASP device prompts.
12. To specify file entry options that will override those set in the data group definition,
do the following:
a. If necessary, press Page Down to locate the File and tracking entry options
(FEOPT) prompts.

b. Specify the values you need for the elements of the File and tracking entry
options prompts.
13. If the changes you specify on the command result in adding objects into the
replication namespace, those objects need to be synchronized between systems.
At the Synchronize on start prompt, specify the value for how synchronization will
occur:
• The default value is *NO. Use this value when you will use save and restore
processes to manually synchronize the objects. If you do not synchronize
before replication starts, the next audit that checks all objects (scheduled or
manually invoked) will attempt to synchronize the objects if recoveries are
enabled and differences are found.
• The value *YES will request to synchronize any objects added to the
replication namespace through the system journal replication processes. This
may temporarily cause threshold conditions in replication processes.
14. Press Enter.
15. For object entries configured for user journal replication of data areas or data
queues, if you were directed to this procedure from “Checklist: Change *DTAARA,
*DTAQ, IFS objects to user journaling” on page 151, return to Step 7 to proceed
with the additional steps necessary to complete the conversion.
16. Manually synchronize the objects identified by this data group object entry before
starting replication processes. You can skip this step if you specified *YES in
Step 13. The entries will be available to replication processes after the data group
is ended and restarted. This includes after the nightly restart of MIMIX jobs. The
next time an audit that checks all objects runs, the entries will be available and the
MIMIX audits will attempt to synchronize the objects they identify if recoveries are
enabled.
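As an illustration, enabling user journal replication for a single data area
might be requested with a command like the following sketch. The CHGDGOBJE
command name and parameter keywords are assumed from the naming conventions used
elsewhere in this book, and DGDFN1, MYLIB, and MYDTAARA are placeholder names;
confirm the syntax by prompting with F4:
CHGDGOBJE DGDFN(DGDFN1) LIB1(MYLIB) OBJ1(MYDTAARA) COOPDB(*YES)
Specifying the single object name rather than *ALL avoids enabling every object
in the library, as cautioned in the note in Step 4.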


Creating data group file entries


Data group file entries are required for user journal replication of *FILE objects.
When you configure MIMIX, you can create data group file entry information by
creating data group file entries individually or by loading entries from another source.
Once you have created the file entries, you can tailor them to meet your requirements.
Note: If you plan to use either MIMIX Dynamic Apply or legacy cooperative
processing, files must be defined by both data group object entries and data
group file entries. It is strongly recommended that you create data group
object entries first. Then, load the data group file entries from the object entry
information defined for the files. You can use the #DGFE audit or the Check
Data Group File Entries (CHKDGFE) command to ensure that the correct file
entries exist for the object entries configured for the specified data group.
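For example, to check the file entries for a data group named DGDFN1 (a
placeholder name; the keyword shown follows the usage in the command examples
elsewhere in this chapter), the check could be requested as:
CHKDGFE DGDFN(DGDFN1)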
For detailed concepts and requirements for supported configurations, see the
following topics:
• “Identifying library-based objects for replication” on page 100
• “Identifying logical and physical files for replication” on page 106

Loading file entries


If you need to create data group file entries for many files, you can have MIMIX create
the entries for you using the Load Data Group File Entries (LODDGFE) command.
The Configuration source (CFGSRC) parameter supports loading from a variety of
sources, listed below in order most commonly used:
• *DGOBJE - File entry information is loaded from the information in data group
object entries configured for the data group. If you are configuring to use MIMIX
Dynamic Apply or legacy cooperative processing, this value is recommended.
• *NONE - File entry information is loaded from a library on either the source or
target system, as determined by the values specified for System 1 library (LIB1),
System 2 library (LIB2), and Load from system (LODSYS) parameters. If the data
group is configured for COOPJRN(*USRJRN), then an object entry must also be
configured which includes the file and is cooperatively processed.
• *JRNDFN - File entry information is loaded from a journal specified in the journal
definition associated with the specified data group. File entries will be created for
all files currently journaled to the journal specified in the journal definition.
• *DGFE - File entry information is loaded from data group file entries defined to
another data group. This option supports loading from data groups at the previous
release or the current release on the same system. This value is typically used
when loading file entries from a data group in a different installation of MIMIX.
When loading from a data group, you can also specify the source from which file entry
options are loaded, and override elements if needed. The Default FE options source
(FEOPTSRC) parameter determines whether file entry options are loaded from the
specified configuration source (*CFGSRC) or from the data group definition
(*DGDFT). Any file entry option with a value of *DFT is loaded from the specified
source. Any values specified on elements of the File entry options (FEOPT)
parameter override the values loaded from the FEOPTSRC parameter for all data
group file entries created by a load request.
Regardless of where the configuration source and file entry option source are located,
the Load Data Group File Entries (LODDGFE) command must be used from a system
designated as a management system.
Note: The Load Data Group File Entries (LODDGFE) command performs a journal
verification check on the file entries using the Verify Journal File Entries
(VFYJRNFE) command. In order to accurately determine whether files are
being journaled to the target system, you should first perform a save and
restore operation to synchronize the files to the target system before loading
the data group file entries.

Loading file entries from a data group’s object entries


This topic contains examples and a procedure. The examples illustrate the flexibility
available for loading file entry options.
Example - Load from the same data group This example illustrates how to create
file entries when converting a data group to use MIMIX Dynamic Apply. In this
example, data group DGDFN1 is being converted. The data group definition specifies
*SYS1 as its data source (DTASRC). However, in this example, file entries will be
loaded from the target system to take advantage of a known synchronization point at
which replication will later be started.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) UPDOPT(*ADD) LODSYS(*SYS2)
SELECT(*NO)
Since no value was specified for FROMDGDFN, its default value *DGDFN causes the
file entries to load from existing object entries for DGDFN1. The value *SYS2 for
LODSYS causes this example configuration to load from its target system. Entries are
added (UPDOPT(*ADD)) to the existing configuration. Since all files identified by
object entries are wanted, SELECT(*NO) bypasses the selection list. The data group
file entries for DGDFN1 created have file entry options which match those found in the
object entries because no values were specified for FEOPTSRC or FEOPT
parameters.
Example - Load from another data group with mixed sources for file entry
options The file entries for data group DGDFN1 are created by loading from the
object entries for data group DGDFN2, with file entry options loaded from multiple
sources.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) FROMDGDFN(DGDFN2) FEOPT(*CFGSRC
*DGDFT *CFGSRC *DGDFT)
The data group file entries created for DGDFN1 are loaded from the configuration
information in the object entries for DGDFN2, with file entry options coming from
multiple sources. Because the command specified the first element (Journal image)
and third element (Replication type) of the file entry options (FEOPT) as *CFGSRC,
the resulting file entries have the same values for those elements as the data group
object entries for DGDFN2. Because the command specified the second element
(Omit open/close entries) and the fourth element (Lock member during apply) as
*DGDFT, these elements are loaded from the data group definition. The rest of the file
entry options are loaded from the configuration source (object entries for DGDFN2).


Procedure: Use this procedure to create data group file entries from the object
entries defined to a data group.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears. The name of the
data group for which you are creating file entries and the Configuration source
value of *DGOBJE are pre-selected. Press Enter.
5. The following prompts appear on the display. Specify appropriate values.
a. From data group definition - To load from entries defined to a different data
group, specify the three-part name of the data group.
b. Load from system - Ensure that the value specified is appropriate. For most
environments, files should be loaded from the source system of the data group
you are loading. (This value should be the same as the value specified for Data
source in the data group definition.)
c. Update option - If necessary, specify the value you want.
d. Default FE options source - Specify the source for loading values for default file
entry options. Each element in the file entry options is loaded from the
specified location unless you explicitly specify a different value for an element
in Step 6.
6. Optionally, you can specify a file entry option value to override those loaded from
the configuration source. Do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
7. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
8. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
9. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use “Changing a data group file entry” on page 282 to customize
values for any of the data group file entries.

Loading file entries from a library
Example: The data group file entries are created by loading from a library named
TESTLIB on the source system. This example assumes the configuration is set up so
that system 1 in the data group definition is the source for replication.
LODDGFE DGDFN(DGDFN1) CFGSRC(*NONE) LIB1(TESTLIB)
Since the FEOPT parameter was not specified, the resulting data group file entries
are created with a value of *DFT for all of the file entry options. Because there is no
MIMIX configuration source specified, the value *DFT results in the file entry options
specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from a library on
either the source system or the target system.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of
the data group for which you are creating file entries. At the Configuration source
prompt, specify *NONE and press Enter.
5. Identify the location of the files to be used for loading. For common configurations,
you can accomplish this by specifying a library name at the System 1 library
prompt and accepting the default values for the System 2 library, Load from
system, and File prompts.
If you are using system 2 as the data source for replication or if you want the
library name to be different on each system, then you need to modify these values
to appropriately reflect your data group defaults. If the data group is configured for
COOPJRN(*USRJRN), then an object entry must also be configured which
includes the file and is cooperatively processed.
6. If necessary, specify the values you want for the following:
Update option prompt
Add entry for each member prompt
7. The value of the Default FE options source prompt is ignored when loading from a
library. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.


8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. If necessary, you can use “Changing a data group file entry” on
page 282 to customize values for any of the data group file entries.

Loading file entries from a journal definition


Example: The data group file entries are created by loading from the journal
associated with system 1 of the data group. This example assumes the configuration is set
up so that system 1 in the data group definition is the source for replication. The
journal definition 1 specified in the data group definition identifies the journal.
LODDGFE DGDFN(DGDFN1) CFGSRC(*JRNDFN) LODSYS(*SYS1)
Since the FEOPT parameter was not specified, the resulting data group file entries
are created with a value of *DFT for all of the file entry options. Because there is no
MIMIX configuration source specified, the value *DFT results in the file entry options
specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from the journal
associated with a journal definition specified for the data group.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of
the data group for which you are creating file entries. At the Configuration source
prompt, specify *JRNDFN and press Enter.
File and library names on the source and target systems are set to the same
names for the load operation.
5. At the Load from system prompt, ensure that the value specified represents the
appropriate system. The journal definition associated with the specified system is
used for loading. For common configurations, the value that corresponds to the
source system of the data group you are loading should be used. (This value
should match the value specified for Data source in the data group definition.)
6. If necessary, specify the value you want for the Update option prompt.
7. The value of the Default FE options source prompt is ignored when loading from a
journal definition. To optionally specify file entry options, do the following:

a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use “Changing a data group file entry” on page 282 to customize
values for any of the data group file entries.

Loading file entries from another data group’s file entries


Example 1: The data group file entries are created by loading from file entries for
another data group, DGDFN2.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) FROMDGDFN(DGDFN2)
Since the FEOPT parameter was not specified, the resulting data group file entries for
DGDFN1 are created with a value of *DFT for all of the file entry options. Because the
configuration source is another data group, the value *DFT results in file entry options
which match those specified in DGDFN2.
Example 2: The data group file entries are created by loading from file entries for
another data group, DGDFN2, in another installation, MXTEST.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) PRDLIB(MXTEST) FROMDGDFN(DGDFN2)
Since the FEOPT parameter was not specified, the resulting data group file entries for
DGDFN1 are created with a value of *DFT for all of the file entry options. Because the
configuration source is another data group in another installation, the value *DFT
results in file entry options which match those specified in DGDFN2 in installation
MXTEST.
Procedure: Use this procedure to create data group file entries from the file entries
defined to another data group.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of
the data group for which you are creating file entries. At the Configuration source
prompt, specify *DGFE and press Enter.


5. At the Production library prompt, either accept *CURRENT or specify the name of
the installation library that contains the data group from which you are loading.
6. At the From data group definition prompts, specify the three-part name of the data
group from which you are loading.
7. If necessary, specify the value you want for the Update option prompt.
8. Specify the source for loading values for default file entry options at the Default FE
options source prompt. Each element in the file entry options is loaded from the
specified location unless you explicitly specify a different value for an element in
Step 9.
9. If necessary, do the following to specify a file entry option value to override those
loaded from the configuration source:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
10. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
11. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
12. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use “Changing a data group file entry” on page 282 to customize
values for any of the data group file entries.

Adding a data group file entry


When you add a single data group file entry to a data group definition, the
configuration is dynamically updated and MIMIX automatically starts journaling of the
file on the source system if the file exists and is not already journaled. Special entries
are inserted into the journal data stream to enable the dynamic update. The added
data group file entry is recognized by MIMIX as soon as each active process receives
the special entries. For each MIMIX process, there may be a delay before the addition
is recognized. This is true especially for very active data groups.
Use this procedure to add a data group file entry to a data group.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. From the Work with DG File Entries display, type a 1 (Add) next to the blank line at
the top of the list and press Enter.

4. The Add Data Group File Entry (ADDDGFE) display appears. At the System 1 File
and Library prompts, specify the file that you want to replicate.
5. By default, all members in the file are replicated. If you want to replicate only a
specific member, specify its name at the Member prompt.
Note: All replicated members of a file must be in the same database apply
session. For data groups configured for multiple apply sessions, specify
the apply session on the File entry options prompt. See Step 7.
6. Verify that the values of the remaining prompts on the display are what you want.
If necessary, change the values as needed.
Notes:
• If you change the value of the Dynamically update prompt to *NO, you need to
end and restart the data group before the addition is recognized.
• If you change the value of the Start journaling of file prompt to *NO and the file
is not already journaled, MIMIX will not be able to replicate changes until you
start journaling the file.
7. Optionally, you can specify file entry options that will override those defined for the
data group. Do the following:
a. Press F10 (Additional parameters), then press Page Down.
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter to create the data group file entry.
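The same file entry can be created without the menus by running the Add Data Group File Entry (ADDDGFE) command directly. A minimal sketch follows; the DGDFN keyword matches the other MIMIX commands shown in this book, but the keywords used for the file, library, and member are assumptions here. Prompt ADDDGFE with F4 to confirm the actual parameter names on your installation.

```
/* Sketch only: FILE1 and MBR are assumed keyword names --        */
/* prompt the command with F4 to verify them on your system.      */
ADDDGFE DGDFN(dgname) FILE1(library-name/file-name) MBR(*ALL)
```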

Changing a data group file entry


Use this procedure to change an existing data group file entry.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. Locate the file entry you want on the Work with DG File Entries display. Type a 2
(Change) next to the entry you want and press Enter.
4. The Change Data Group File Entry (CHGDGFE) display appears. Press F10
(Additional parameters) to see all available prompts. You can change any of the
values shown on the display.
Notes:
• If the file is currently being journaled and transactions are being applied, do not
change the values specified for To system 1 file (TOFILE1) and To member
(TOMBR1).


• All replicated members of a file must be in the same database apply session.
For data groups configured for multiple apply sessions, specify the apply
session on the File entry options prompt.
5. To accept your changes, press Enter.
The replication processes do not recognize the change until the data group has been
ended and restarted.

Creating data group IFS entries
Data group IFS entries identify IFS objects for replication. The identified objects are
replicated through the system journal unless the data group IFS entries are explicitly
configured to allow the objects to be replicated through the user journal.
Topic “Identifying IFS objects for replication” on page 116 provides detailed concepts
and identifies requirements for configuration variations for IFS objects. Supported file
systems are included, as well as examples of the effect that multiple data group IFS
entries have on object auditing values.

Adding or changing a data group IFS entry


Note: If you are converting a data group to use user journal replication for IFS
objects, use this procedure when directed by “Checklist: Change *DTAARA,
*DTAQ, IFS objects to user journaling” on page 151.
Changes become effective for replication after the data group or MIMIX jobs are
ended and restarted.
Changes to IFS entries become effective for subsequent audits at the start of the next
audit.
From the management system, do the following to add a new data group IFS entry or
change an existing IFS entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 22 (IFS entries) next to the data
group you want and press Enter.
3. The Work with Data Group IFS Entries display appears. Do one of the following:
• To add a new entry, type a 1 (Add) next to the blank line at the top of the
display and press Enter.
• To change an existing entry, type a 2 (Change) next to the entry you want and
press Enter.
4. The appropriate Data Group IFS Entry display appears. When adding an entry,
you must specify a value for the System 1 object prompt.
Notes:
• The object name must begin with the '/' character and can be up to 512
characters in total length. The object name can be a simple name, a name that
is qualified with the name of the directory in which the object is located, or a
generic name that contains one or more characters followed by an asterisk (*),
such as /ABC*. Any component of the object name contained between two '/'
characters cannot exceed 255 characters in length.
• All objects in the specified path are selected. When changing an existing IFS
entry to enable replication from a user journal (COOPDB(*YES)), make sure
that you specify only the IFS objects you want to enable.
5. If necessary, specify values for the System 2 object and Object auditing value
prompts.
6. At the Process type prompt, specify whether resulting data group object entries
should include (*INCLD) or exclude (*EXCLD) the identified objects.
7. Specify the appropriate value for the Cooperate with database prompt. To ensure
that journaled IFS objects can be replicated from the user journal, specify *YES.
To replicate from the system journal, specify *NO.
8. If necessary, specify a value for the Object retrieval delay prompt.
9. If the changes you specify on the command result in adding objects into the
replication namespace, those objects need to be synchronized between systems.
At the Synchronize on start prompt, specify the value for how synchronization will
occur:
• The default value is *NO. Use this value when you will use save and restore
processes to manually synchronize the objects. If you do not synchronize
before replication starts, the next audit that checks all objects (scheduled or
manually invoked) will attempt to synchronize the objects if recoveries are
enabled and differences are found.
• The value *YES will request to synchronize any objects added to the
replication namespace through the system journal replication processes. This
may temporarily cause threshold conditions in replication processes.
10. Press Enter to create the IFS entry.
11. For IFS entries configured for user journal replication, if you were directed to this
procedure from “Checklist: Change *DTAARA, *DTAQ, IFS objects to user
journaling” on page 151, return to Step 7 to proceed with the additional steps
necessary to complete the conversion.
12. Manually synchronize the IFS objects identified by this data group IFS entry
before starting replication processes. You can skip this step if you specified *YES
in Step 9. The entries will be available to replication processes after the data
group is ended and restarted. This includes after the nightly restart of MIMIX jobs.
The next time an audit that checks all objects runs, the entries will be available
and the MIMIX audits will attempt to synchronize the objects they identify if
recoveries are enabled.
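As a command-line alternative to the prompted display, a data group IFS entry can be added with the Add Data Group IFS Entry (ADDDGIFSE) command. The sketch below uses only keywords that appear elsewhere in this book (DGDFN, OBJ1, and COOPDB); prompt the command with F4 to see the remaining parameters, such as process type and synchronize on start.

```
/* Include an entire directory subtree and enable replication     */
/* of the selected objects from the user journal (cooperative    */
/* processing with the database).                                */
ADDDGIFSE DGDFN(dgname) OBJ1('/directory-name/*') COOPDB(*YES)
```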

Loading tracking entries
Tracking entries are associated with the replication of IFS objects, data areas, and
data queues with advanced journaling techniques. A tracking entry must exist for
each existing IFS object, data area, or data queue identified for replication.
IFS tracking entries identify existing IFS stream files on the source system that have
been identified as eligible for replication with advanced journaling by the collection of
data group IFS entries defined to a data group. Similarly, object tracking entries
identify existing data areas and data queues on the source system that have been
identified as eligible for replication using advanced journaling by the collection of data
group object entries defined to a data group.
When you initially configure a data group, you must load tracking entries and start
journaling for the objects which they identify. Similarly, if you add new or change
existing data group IFS entries or object entries, tracking entries for any additional IFS
objects, data areas, or data queues must be loaded and journaling must be started on
the objects which they identify.

Loading IFS tracking entries


After you have configured the data group IFS entries for advanced journaling, use this
procedure to load IFS tracking entries which match existing IFS objects. This
procedure uses the Load DG IFS Tracking Entries (LODDGIFSTE) command. Default
values for the command will load IFS tracking entries from objects on the system
identified as the source for replication without duplicating existing IFS tracking entries.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading tracking entries are not effective until the data
group is restarted.
From the management system, do the following:
1. Ensure that the data group is ended. If the data group is active, end it using the
procedure “Ending a data group in a controlled manner” in the MIMIX Operations
book.
2. On a command line, type LODDGIFSTE and press F4 (Prompt). The Load DG IFS
Tracking Entries (LODDGIFSTE) command appears.
3. At the prompts for Data group definition, specify the three-part name of the data
group for which you want to load IFS tracking entries.
4. Verify that the value specified for the Load from system prompt is appropriate for
your environment. If necessary, specify a different value.
5. Verify that the value specified for the Update option prompt is appropriate for your
environment. If necessary, specify a different value.
6. At the Submit to batch prompt, specify the value you want.
7. Press Enter.
8. If you specified *NO for batch processing, the request is processed. If you
specified *YES, you will see additional prompts for Job description and Job
name. If necessary, specify different values and press Enter.


9. You should receive message LVI3E2B indicating the number of tracking entries
loaded for the data group.
Note: The command used in this procedure does not start journaling on the tracking
entries. Start journaling for the tracking entries when indicated by your
configuration checklist.

Loading object tracking entries


After you have configured the data group object entries for advanced journaling, use
this procedure to load object tracking entries which match existing data areas and
data queues. This procedure uses the Load DG Obj Tracking Entries
(LODDGOBJTE) command. Default values for the command will load object tracking
entries from objects on the system identified as the source for replication without
duplicating existing object tracking entries.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading tracking entries are not effective until the data
group is restarted.
From the management system, do the following:
1. Ensure that the data group is ended. If the data group is active, end it using the
procedure “Ending a data group in a controlled manner” in the MIMIX Operations
book.
2. On a command line, type LODDGOBJTE and press F4 (Prompt). The Load DG Obj
Tracking Entries (LODDGOBJTE) command appears.
3. At the prompts for Data group definition, specify the three-part name of the data
group for which you want to load object tracking entries.
4. Verify that the value specified for the Load from system prompt is appropriate for
your environment. If necessary, specify a different value.
5. Verify that the value specified for the Update option prompt is appropriate for your
environment. If necessary, specify a different value.
6. At the Submit to batch prompt, specify the value you want.
7. Press Enter.
8. If you specified *NO for batch processing, the request is processed. If you
specified *YES, you will see additional prompts for Job description and Job
name. If necessary, specify different values and press Enter.
9. You should receive message LVI3E2B indicating the number of tracking entries
loaded for the data group.
Note: The command used in this procedure does not start journaling on the tracking
entries. Start journaling for the tracking entries when indicated by your
configuration checklist.
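Both load procedures above reduce to a single command each when run from a command line. The keyword names below follow the LODDGOBJTE example shown later in "Adding a library to an existing data group" (DGDFN, LODSYS, UPDOPT) plus BATCH; the same keywords are assumed to apply to LODDGIFSTE. The defaults load tracking entries from the source system without duplicating existing entries. End the data group first.

```
/* Load IFS tracking entries for existing IFS stream files        */
LODDGIFSTE DGDFN(dgname) LODSYS(*SRC) UPDOPT(*ADD) BATCH(*YES)

/* Load object tracking entries for data areas and data queues    */
LODDGOBJTE DGDFN(dgname) LODSYS(*SRC) UPDOPT(*ADD) BATCH(*YES)
```

Remember that neither command starts journaling on the tracking entries it loads.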

Adding a library to an existing data group
These instructions describe how to create the configuration necessary to add a library
to replication for a data group, synchronize the library and its contents, and make the
configuration changes effective.
If you started the process of adding selection rules for a new library from Vision
Solutions Portal and chose the option to manually synchronize, you can also use
these instructions to complete configuration and synchronize the library.
Notes:
• Perform instructions from a management system unless otherwise directed. These
instructions are intended to be used without delays between steps.
• These instructions assume the following:
– The library to be added is located on the system specified as system 1 in the
three-part data group name.
– The data group to which the library will be added is configured using best
practices. Specifically, the data group should specify the following values: *ALL
for Data group type (TYPE), *YES for Use remote journal link (RJLNK), and
*USRJRN for Cooperative journal (COOPJRN).
• For some configurations, you may be able to skip steps in these instructions. If the
data group type is *OBJ, you do not need data group file entries or object tracking
entries. For data groups of type *ALL, if the object entries you create in Step 5
specify COOPDB(*NO), you do not need file entries or object tracking entries. If
either of these scenarios apply, you can skip Step 6 through Step 13 and Step 21
through Step 23.
• Some steps in these instructions use an advanced user technique that combines
specifying an option on a display, using a repeat function key, and specifying
parameters on the command line to be passed to the option so that the same
action is performed for all items in the list. Be sure to read each step in its entirety
before taking action.
Do the following from the management system:
Steps to ensure replication is ended.
1. Perform a controlled end of the data group using the command:
ENDDG DGDFN(dgname) ENDOPT(*CNTRLD) DTACRG(*YES)
2. Display the data group and verify that its replication processes become inactive
(red I) using the command:
WRKDG DGDFN(dgname)
The RJ link, reported in Source DB column of the Work with Data Groups display,
can remain active.
3. When replication processes have ended, do the following from the Work with Data
Groups display to check for any open commit cycles.


a. Type 8 (Display status) next to the data group name and press Enter. Then
press F8 (Database).
b. Check the value of the Open Commit column for all listed database apply
sessions.
• If *YES is displayed for any apply session, you must complete Step 4.
• If *NO is displayed for all apply sessions, continue with Step 5.
4. If open commit cycles exist, this step is necessary to prevent data loss for
currently replicated objects. Do the following:
a. Use the following command to start the data group:
STRDG DGDFN(dgname) DTACRG(*YES)
b. Take action to resolve the open commit cycles, such as ending or quiescing the
application or closing the commit cycle.
c. Repeat the controlled end again using Step 1.
If you are unable to end the data group without open commits, you may need to
use these instructions when the data group is not as busy.
Steps to create configuration.
5. Use the following commands to create two data group object entries, one for the
library itself, and one for its contents. (If you were directed here from Vision
Solutions Portal when creating library selection rules and chose to synchronize
manually, you can skip Step 5.)
ADDDGOBJE DGDFN(dgname) LIB1(QSYS) OBJ1(library-name)
TYPE(*LIB)
ADDDGOBJE DGDFN(dgname) LIB1(library-name) OBJ1(*ALL)
TYPE(*ALL)
6. Create data group file entries for any objects of type *FILE in the library using the
command:
LODDGFE DGDFN(dgname) CFGSRC(*NONE) LIB1(library-name)
BATCH(*YES) SELECT(*NO)
7. Before continuing, confirm the job ran successfully using the command:
WRKJOB LODDGFE
8. Create object tracking entries for any objects of type *DTAARA or *DTAQ in the
library using the command:
LODDGOBJTE DGDFN(dgname) LODSYS(*SRC) UPDOPT(*ADD)
9. Before continuing, confirm the job ran successfully using the command:
WRKJOB LODDGOBJTE
Steps to start journaling on source.
10. This step must be performed from the source system of the data group.
a. Use the following command to display object tracking entries created for
*DTAARA and *DTAQ objects in the library:

WRKDGOBJTE DGDFN(dgname) OBJ1(library-name/*ALL)
b. Type 9 (Start journaling) next to the first tracking entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*SRC) FORCE(*YES)
e. Press Enter.
11. This step must be performed from the source system of the data group.
a. Use the following command to display a list of data group file entries created
for *FILE objects in the library.
WRKDGFE DGDFN(dgname) LIB1(library-name)
b. Type 9 (Start journaling) next to the first file entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*SRC) FORCE(*YES)
e. Press Enter.
Steps that temporarily change environment while synchronizing. These steps
prevent MIMIX from attempting to use audits and recoveries that would otherwise
attempt actions that may not be desired when manually synchronizing the library.
12. Display the data group file entries created for *FILE objects in the library using the
command.
WRKDGFE DGDFN(dgname) LIB1(library-name)
13. Do the following to change the status of the listed file entries to *HLD.
a. Type 23 (Hold file) next to the first file entry. Do not press Enter.
b. Press F13 (Repeat).
c. Press Enter.
d. Press F5 (Refresh) and verify that all file entries listed show *HLD as their
requested status.
14. Identify and record current policy values for the data group by doing the following:
a. Type the following command and press F4 (Prompt):
SETMMXPCY DGDFN(dgname)
b. Press Enter to display the current values. Record the values displayed for the
following fields:
• Automatic object recovery
• Automatic database recovery
• Automatic audit recovery
15. Disable automatic recoveries for the data group using the following command:
SETMMXPCY DGDFN(dgname) OBJRCY(*DISABLED) DBRCY(*DISABLED)
AUDRCY(*DISABLED)
16. Start the data group and clear pending entries using the command:
STRDG DGDFN(dgname) CLRPND(*YES) SETAUD(*YES) DTACRG(*YES)
DBRSYS(*SRC)
Steps to synchronize the library.
17. Verify that there is no user activity on the library on the source system. There
should be no locks on the objects in the library on the source or target system.
18. From the target system, do the following:
• If the library exists on the target system, use the CLRLIB command to clear it.
• If the library does not exist on the target system, use the CRTLIB command to
create it.
19. From the source system, synchronize the library using the following command:
SYNCOBJ OBJ(library-name/*ALL) SYS2(target-system-name)
20. Verify the job has completed using the command:
WRKJOB SYNCOBJ
Do not continue until the job has completed. If the library is too large for the
SYNCOBJ command, you will need to use media to save/restore the library from
one system to another.
Steps to start journaling on target.
21. This step must be performed from a management (*MGT) system:
a. Display object tracking entries for *DTAARA and *DTAQ objects in the library
using the command:
WRKDGOBJTE DGDFN(dgname) OBJ1(library-name/*ALL)
b. Type 9 (Start journaling) next to the first tracking entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*TGTC) FORCE(*YES)
e. Press Enter.
22. This step must be performed from the management (*MGT) system.
a. Display data group file entries for *FILE objects in the library using the
command.
WRKDGFE DGDFN(dgname) LIB1(library-name)
b. Type 9 (Start journaling) next to the first file entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*TGT) FORCE(*YES)

e. Press Enter.
23. From the Work with DG File Entries display, do the following to change the status
of the file entries for the library to *RLSWAIT.
a. Type 25 (Release file) next to the first file entry. Do not press Enter.
b. Press F13 (Repeat).
c. Press Enter.
d. Press F5 (Refresh) and verify that all file entries listed show *ACTIVE as their
current status.
Steps to return environment for normal operations. Perform these steps from the
management system when replication activity for that data group is caught up.
24. End the data group using the command:
ENDDG DGDFN(dgname) DTACRG(*YES)
25. Set automatic recovery policies for the data group back to their previous values.
Use the values recorded in Step 14 in the following command:
SETMMXPCY DGDFN(dgname) OBJRCY(value) DBRCY(value)
AUDRCY(value)
26. Display the data group and verify that its replication processes become inactive
(red I) using the command:
WRKDG DGDFN(dgname)
The RJ link, reported in Source DB column of the Work with Data Groups display,
can remain active.
27. Start the data group using the command:
STRDG DGDFN(dgname) DTACRG(*YES)
28. Do the following to address expected notifications associated with actions
performed in Step 18.
a. Display notifications for the data group using the command:
WRKNFY DGDFN(dgname)
b. If you see a notification from target journal inspection for the CRTLIB or
CLRLIB command, type 46 (Acknowledge) next to it and press Enter.


Adding an IFS directory to an existing data group


These instructions describe how to create the configuration necessary to add an IFS
directory to replication for a data group, synchronize the directory and its contents,
and make the configuration changes effective.
If you started the process of adding selection rules for a new directory from Vision
Solutions Portal and chose the option to manually synchronize, you can also use
these instructions to complete configuration and synchronize the directory.
Notes:
• Before you begin these instructions, consider the types of objects within the
directory to be added and how frequently they change. This will help you in
selecting the appropriate value for the Cooperate with database (COOPDB)
parameter in the commands in Step 3.
• Perform the instructions from a management system unless otherwise directed.
These instructions are intended to be used without delays between steps.
• These instructions assume the following:
– The directory to be added is located on the system specified as system 1 in the
three-part data group name.
– The data group to which the directory will be added is configured using best
practices in which IFS objects are cooperatively processed through the user
journal. Specifically, the data group should specify the following values: *ALL
for Data group type (TYPE), *YES for Use remote journal link (RJLNK), and
*USRJRN for Cooperative journal (COOPJRN).
• For some configurations, you may be able to skip steps in these instructions. If the
data group type is *OBJ, you do not need IFS tracking entries. For data groups of
type *ALL, if the IFS entries you create in Step 3 specify COOPDB(*NO), you do
not need IFS tracking entries. If either of these scenarios apply, you can skip
Step 4, Step 5, Step 6, and Step 13.
• Some steps in these instructions use an advanced user technique that combines
specifying an option on a display, using a repeat function key, and specifying
parameters on the command line to be passed to the option so that the same
action is performed for all items in the list. Be sure to read each step in its entirety
before taking action.

Do the following from the management system:


Steps to ensure replication is ended.
1. Perform a controlled end of the data group using the command:
ENDDG DGDFN(dgname) ENDOPT(*CNTRLD) DTACRG(*YES)
2. Display the data group and verify that its replication processes become inactive
(red I) using the command:
WRKDG DGDFN(dgname)

The RJ link, reported in Source DB column of the Work with Data Groups display,
can remain active.
Steps to create configuration.
3. Use the following commands to create two data group IFS entries, one for the
directory itself, and one for its contents. (If you were directed here from Vision
Solutions Portal when creating directory selection rules and chose to synchronize
manually, you can skip Step 3.)
Note: Consider the types of objects you have within the directory and how
frequently they change when selecting a value for the COOPDB
parameter in the following commands.
*YES - The directories and objects are journaled, which allows more
efficient replication processing for frequent changes. This is best suited for
use with objects that change frequently.
*NO - Processing occurs only through system journal replication. This is
appropriate for objects that do not change frequently, such as images. This
is the default. If you use this value, you should skip the steps below that
are associated with IFS tracking entries.
ADDDGIFSE DGDFN(dgname) OBJ1('/directory-name')
COOPDB(value)
ADDDGIFSE DGDFN(dgname) OBJ1('/directory-name/*')
COOPDB(value)
4. Create data group IFS tracking entries for the objects in the directory using the
command:
LODDGIFSTE DGDFN(dgname) BATCH(*YES)
5. Before continuing, confirm the job ran successfully using the command:
WRKJOB LODDGIFSTE
Steps to start journaling on source.
6. This step must be performed from the source system of the data group.
a. Use the following command to display IFS tracking entries created for objects
and subdirectories in the directory:
WRKDGIFSTE DGDFN(dgname) OBJ1('/directory-name*')
b. Type 9 (Start journaling) next to the first tracking entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*SRC) FORCE(*YES)
e. Press Enter.
Steps that temporarily change environment while synchronizing. These steps
prevent MIMIX from attempting to use audits and recoveries that would otherwise
attempt actions that may not be desired when manually synchronizing the directory.
7. Identify and record current policy values for the data group by doing the following:
a. Type the following command and press F4 (Prompt):
SETMMXPCY DGDFN(dgname)
b. Press Enter to display the current values. Record the values displayed for the
following fields:
• Automatic object recovery
• Automatic database recovery
• Automatic audit recovery
8. Disable automatic recoveries for the data group using the following command:
SETMMXPCY DGDFN(dgname) OBJRCY(*DISABLED) DBRCY(*DISABLED)
AUDRCY(*DISABLED)
Steps to synchronize the directory. If the directory is too large for the SYNCIFS
command, you will need to use media to save/restore the directory from one system
to another instead of the steps in this subsection.
9. Verify that there is no user activity on the IFS directory on the source system.
There should be no locks on the objects in the directory on the source or target
system.
10. From the source system, synchronize the directory using the following command:
SYNCIFS OBJ(('/directory-name' *ALL)) SYS2(target-system-name)
11. Verify the job has completed using the command:
WRKJOB SYNCIFS
Do not continue until the job has completed.
Steps to start journaling on target.
12. Start the data group using the command:
STRDG DGDFN(dgname) SETAUD(*YES) DTACRG(*YES)
13. This step must be performed from a management (*MGT) system:
a. Use the following command to display IFS tracking entries created for objects
and subdirectories in the directory:
WRKDGIFSTE DGDFN(dgname) OBJ1('/directory-name*')
b. Type 9 (Start journaling) next to the first tracking entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*TGTC) FORCE(*YES)
e. Press Enter.
Steps to return environment for normal operations. Perform these steps from the
management system when replication activity for that data group is caught up.
14. End the data group using the command:
ENDDG DGDFN(dgname) DTACRG(*YES)

15. Set automatic recovery policies for the data group back to their previous values.
Use the values recorded in Step 7 in the following command:
SETMMXPCY DGDFN(dgname) OBJRCY(value) DBRCY(value)
AUDRCY(value)
16. Display the data group and verify that its replication processes become inactive
(red I) using the command:
WRKDG DGDFN(dgname)
The RJ link, reported in Source DB column of the Work with Data Groups display,
can remain active.
17. Start the data group using the command:
STRDG DGDFN(dgname) DTACRG(*YES)


Creating data group DLO entries


Data group DLO entries identify document library objects (DLOs) for replication by
MIMIX system journal replication processes.
When you configure MIMIX, you can create data group DLO entries by loading from a
generic entry and selecting from documents in the list, or by creating individual DLO
entries. Once you have created the DLO entries, you can tailor them to meet your
requirements.
For detailed concepts and requirements, see “Identifying DLOs for replication” on
page 122.

Loading DLO entries from a folder


If you need to create data group DLO entries for a group of documents within a folder,
you can specify information so that MIMIX will create the data group DLO entries for
you. (You can customize individual entries later, if necessary.)
The user profile you use to perform this task must be enrolled in the system
distribution directory on the management system.
Note: The MIMIXOWN user profile is automatically added to the system directory
when MIMIX is installed. This entry is required for DLO replication and should
not be removed.
From the management system, do the following to create DLO entries by loading from
a list.
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data
group you want and press Enter.
3. The Work with DG DLO Entries display appears. Press F19 (Load).
4. The Load DG DLO Entries (LODDGDLOE) display appears. Do the following to
specify the selection criteria:
a. Identify the documents to be considered. Specify values for the System 1 folder
and System 1 document prompts.
b. If necessary, specify values for the Owner, System 2 folder, System 2 object,
and Object auditing value prompts.
c. At the Process type prompt, specify whether resulting data group DLO entries
should include or exclude the identified documents.
d. If necessary, specify a value for the Object retrieval delay prompt.
e. Press Enter.
5. Additional prompts appear to optionally use batch processing and to load
entries without selecting them from a list. Press Enter.
6. The Load DG DLO Entries display appears with the list of documents that matched
your selection criteria. Either type a 1 (Select) next to the documents you want or
press F21 (Select all). Then press Enter.
7. If necessary, you can use “Adding or changing a data group DLO entry” on
page 298 to customize values for any of the data group DLO entries.
Synchronize the DLOs identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
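The load described above can also be started directly from a command line with the Load DG DLO Entries (LODDGDLOE) command. Only the command name and the DGDFN keyword are taken from this book; the folder, document, and process type keywords in the sketch below are assumptions. Prompt the command with F4 to see the actual parameter names.

```
/* Sketch only: FLR1, DOC1, and PRCTYPE are assumed keyword names */
/* -- prompt LODDGDLOE with F4 to verify them before use.         */
LODDGDLOE DGDFN(dgname) FLR1(folder-name) DOC1(*ALL) PRCTYPE(*INCLD)
```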

Adding or changing a data group DLO entry


The data group must be ended and restarted before any changes can become
effective.
From the management system, do the following to add or change a DLO entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data
group you want and press Enter.
3. The Work with DG DLO Entries display appears. Do one of the following:
• To add a new entry, type a 1 (Add) next to the blank line at the top of the list
and press Enter.
• To change an existing entry, type a 2 (Change) next to the entry you want and
press Enter. Then skip to Step 5.
4. If you are adding a new DLO entry, the Add Data Group DLO Entry display
appears. Identify the folder and documents to be considered. Specify values for
the System 1 folder and System 1 document prompts.
5. If necessary, specify values for the Owner, System 2 folder, System 2 object, and
Object auditing value prompts.
6. At the Process type prompt, specify whether resulting data group DLO entries
should include or exclude the identified documents.
7. If necessary, specify a value for the Object retrieval delay prompt.
8. If the changes you specify on the command result in adding objects into the
replication namespace, those objects need to be synchronized between systems.
At the Synchronize on start prompt, specify the value for how synchronization will
occur:
• The default value is *NO. Use this value when you will use save and restore
processes to manually synchronize the objects. If you do not synchronize
before replication starts, the next audit that checks all objects (scheduled or
manually invoked) will attempt to synchronize the objects if recoveries are
enabled and differences are found.
• The value *YES will request to synchronize any objects added to the
replication namespace through the system journal replication processes. This
may temporarily cause threshold conditions in replication processes.


9. Press Enter.
10. Manually synchronize the DLOs identified by this data group DLO entry before
starting replication processes. You can skip this step if you specified *YES in
Step 8. The entries will be available to replication processes after the data group
is ended and restarted. This includes after the nightly restart of MIMIX jobs. The
next time an audit that checks all objects runs, the entries will be available and the
MIMIX audits will attempt to synchronize the objects they identify if recoveries are
enabled.
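The display-based steps above can also be expressed as a single command request. In this sketch the command name ADDDGDLOE and all parameter keywords are assumptions inferred from the prompts on the Add Data Group DLO Entry display; MYDG, SYS1, SYS2, and ACCOUNTS are placeholder names:

```
ADDDGDLOE DGDFN(MYDG SYS1 SYS2)  /* assumed command and keywords */
          FLR1(ACCOUNTS)         /* System 1 folder              */
          DOC1(*ALL)             /* System 1 document            */
          PRCTYPE(*INCLD)        /* Process type: include        */
          SYNCSTS(*YES)          /* Synchronize on start, Step 8 */
```

As described in Step 8, specifying *YES for the synchronize-on-start value requests that MIMIX synchronize the added documents through system journal replication when the data group is restarted.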

Additional options: working with DG entries
The procedures for performing common functions, such as copying, removing, and
displaying, are very similar for all types of data group entries used by MIMIX. Each
generic procedure in this topic indicates the type of data group entry for which it can
be used.

Copying a data group entry


Use this procedure from the management system to copy a data group entry from one
data group definition to another data group definition. The data group definition to
which you are copying must exist.
To copy a data group entry to another data group definition, do the following:
1. From the Work with DG Definitions display, type the option you want next to the
data group from which you are copying and press Enter. Any of these options will
allow an entry to be copied:
Option 17 (File entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 3 (Copy) next
to the entry you want and press Enter.
3. The Copy display for the entry appears. Specify a name for the To definition
prompt.
4. Additional prompts appear on display that are specific for the type of entry. The
values of these prompts define the data to be replicated by the definition to which
you are copying. Ensure that the prompts identify the necessary information.

Table 32. Values to specify for each type of data group entry.

For file entries, provide:    To File 1, To Member, To File 2

For object entries, provide:  System 1 library, System 1 object, Object type,
                              Attribute

For DLO entries, provide:     System 1 folder, System 1 document, Owner

For IFS entries, provide:     To system 1 object

5. The value *NO for the Replace definition prompt prevents you from replacing an
existing entry in the definition to which you are copying. If you want to replace an
existing entry, specify *YES.
6. To copy the entry, press Enter.
7. For file entries, end and restart the data group being copied.

Removing a data group entry


Use this procedure from the management system to remove a data group entry from
a data group definition. You may want to remove an entry when you no longer need to
replicate the information that the entry identifies.
Note: For all data group entries except file entries, the change is not recognized until
after the send, receive, and apply processes for the associated data group
are ended and restarted.
Data group file entries support dynamic removals if you prompt the
RMVDGFE command and specify Dynamically update (*YES). If you specify
Dynamically update (*YES), you do not need to end the processes for the data
group when you use the default. The change is recognized as soon as each
active process receives the update. If a file is on hold and you want to delete
the data group file entry, it is best to use *YES. This forces all currently held
entries to be deleted, all current entries to be ignored, and prevents additional
entries from accumulating.
If you accept the default of Dynamically update (*NO), the change is not
recognized until after the send, receive, and apply processes for the
associated data group are ended and restarted. When you specify
Dynamically update (*NO), the remove function does not clean up any records
in the error/hold log. If an entry is held when you delete it, its information
remains in the error/hold log. Additional transactions for the file or member may
continue to accumulate in the error/hold log or may be applied to the file.
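For example, a held file entry might be removed dynamically with a request of this general form. The RMVDGFE command and the *YES dynamic update behavior are described in the note above; the parameter keywords and the names MYDG, SYS1, SYS2, MYLIB, and ORDERS are illustrative assumptions:

```
RMVDGFE DGDFN(MYDG SYS1 SYS2)   /* data group (assumed keyword)      */
        FILE1(MYLIB/ORDERS)     /* file identified by the entry      */
        UPDDG(*YES)             /* Dynamically update (assumed kwd)  */
```

Because the dynamic update value is *YES, the data group processes do not need to be ended, and any currently held entries for the file are deleted.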
To remove an entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want
next to the data group and press Enter. Any of these options will allow an entry to
be removed:
Option 17 (File entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 4 (Remove)
next to the entry you want and press Enter.
3. For data group file entries, a display with additional prompts appears. Specify the
values you want and press Enter.
4. A confirmation display appears with a list of entries to be deleted. To delete the
entries, press Enter.

Displaying a data group entry
Use this procedure to display a data group entry for a data group definition.
To display a data group entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want
next to the data group and press Enter. Any of these options will allow an entry to
be displayed:
Option 17 (File entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 5 (Display)
next to the entry you want and press Enter.
3. The appropriate data group entry display appears. Page Down to see all of the
values.

CHAPTER 13 Additional supporting tasks for configuration

This chapter provides supplemental configuration tasks. Always use the
configuration checklists to guide you through the steps of standard configuration
scenarios.
• “Accessing the Configuration Menu” on page 305 describes how to access the
menu of configuration options from the native user interface.
• “Starting the system and journal managers” on page 306 provides procedures for
starting these jobs. System and journal manager jobs must be running before
replication can be started.
• “Manually deploying configuration changes” on page 307 describes when
configuration is automatically deployed and when you may want to manually
deploy it. Instructions for manually deploying configuration are included.
• “Setting data group auditing values manually” on page 309 describes when to
manually set the object auditing level for objects defined to MIMIX and provides a
procedure for doing so.
• “Checking file entry configuration manually” on page 313 provides a procedure
using the CHKDGFE command to check the data group file entries defined to a
data group.
Note: The preferred method of checking is to use automatic scheduling for the
#DGFE audit, which calls the CHKDGFE command and can automatically
correct detected problems. For additional information, see “Interpreting results
for configuration data - #DGFE audit” on page 687.
• “Starting data groups for the first time” on page 315 describes how to start
replication once configuration is complete and the systems are synchronized. Use
this only when directed to by a configuration checklist.
• “Identifying data groups that use an RJ link” on page 316 describes how to
determine which data groups use a particular RJ link.
• “Using file identifiers (FIDs) for IFS objects” on page 317 describes the use of FID
parameters on commands for IFS tracking entries. When IFS objects are
configured for replication through the user journal, commands that support IFS
tracking entries can specify a unique FID for the object on each system. This topic
describes the processing resulting from combinations of values specified for the
object and FID prompts.
• “Configuring restart times for MIMIX jobs” on page 318 describes how to change
the time at which MIMIX jobs automatically restart. MIMIX jobs restart daily to
ensure that the MIMIX environment remains operational.
• “Setting the system time zone and time” on page 325 describes how to set time
zone values so that the timestamps used within status of application group
procedures will display correctly on all systems.


• “Creating an application group definition” on page 326 describes how to create an
application group that will not participate in a cluster controlled by the IBM i
operating system.
• “Loading data resource groups into an application group” on page 327 describes
how to load data resource groups with existing data group definitions and specify
the relationship between the name spaces of the data groups within each data
resource group.
• “Specifying the primary node for the application group” on page 327 describes
how to ensure that a primary node is defined for an application group.
• “Performing target journal inspection” on page 334 describes the benefits and
restrictions of performing journal inspection on target system journals. This topic
also describes how to enable or disable target journal inspection, and how to
identify which data groups use a particular journal definition.


Accessing the Configuration Menu


The MIMIX Configuration Menu provides access to the options you need for
configuring MIMIX.
To access the MIMIX Configuration Menu, do the following:
1. Access the MIMIX Basic Main Menu. See “Accessing the MIMIX Main Menu” on
page 93.
2. From the MIMIX Basic Main Menu, select option 11 (Configuration menu)
and press Enter.

Starting the system and journal managers
This procedure starts all the system managers, journal managers, target journal
inspection jobs, and collector services. If the system managers are running, they will
automatically send configuration information to the network system as you complete
configuration tasks.
System and journal managers must be active to support replication. Journal
inspection jobs support analysis functionality, and collector services is needed to use
MIMIX from within the Vision Solutions Portal, and to allow collection of historical
statistics. For systems participating in an IBM i cluster with a MIMIX Global license,
this procedure also starts cluster services, which is needed to start replication.
Do the following:
1. Access the MIMIX Basic Main Menu. See “Accessing the MIMIX Main Menu” on
page 93.
2. From the MIMIX Basic Main Menu press the F21 key (Assistance level) to access
the MIMIX Intermediate Main Menu.
3. Select option 2 (Work with Systems) and press Enter.
4. The Work with Systems display appears with a list of the system definitions. Type
a 9 (Start) next to each of the system definitions you want and press Enter. This
will start all managers on all of these systems in the MIMIX environment.
5. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. Verify that *ALL appears as the value for the Manager prompt.
b. Verify that *YES appears as the value for the Target journal inspection and
Collector services prompts.
c. If you are configuring a cluster environment, press F10 (Additional parameters)
and accept the value *YES for the Start cluster services prompt. If the specified
system definition is not associated with an IBM i cluster or if the cluster does
not exist, this value has no effect.
d. Press Enter to complete this request.
6. If you selected more than one system definition in Step 4, the Start MIMIX
Managers (STRMMXMGR) display will be shown for each system definition that
you selected. Repeat Step 5 for each system definition that you selected.
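When the start request is scripted rather than prompted, the command for Step 5 might take the following form. STRMMXMGR and the values *ALL and *YES come from the procedure; the parameter keywords and the system name SYS1 are assumed spellings of the prompts:

```
STRMMXMGR SYSDFN(SYS1)      /* system definition (assumed keyword) */
          MGR(*ALL)         /* Manager prompt                      */
          TGTJRNINSP(*YES)  /* Target journal inspection           */
          COLSRV(*YES)      /* Collector services                  */
```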


Manually deploying configuration changes


You or a MIMIX Services representative can use these instructions to manually
deploy configuration changes within MIMIX. You may want to manually deploy
configuration to verify that new or changed data group entries for a data group will
include all expected objects or to avoid a potentially lengthy delay when starting
replication processes for a data group.
MIMIX automatically deploys configuration information during requests to start data
group replication. This occurs the first time a data group is started and on start
requests when MIMIX has detected configuration changes for a data group that affect
the scope of objects configured for replication (the name space). However, this may
take an extended amount of time. For environments that replicate large quantities of
IFS objects, this may be significant and may take hours. Manually deploying some or
all of the configuration may avoid delays if performed in these scenarios:
• Before the first time you start new data groups. This is recommended.
• Before starting existing data groups after upgrading to MIMIX version 7.1. This is
especially true for data groups that replicate large quantities of IFS objects.
• Before starting a data group whose data group IFS entries have changed.
MIMIX uses the deployed information to create an internal list of the current objects
being replicated that is used as input for other functions, such as the advanced
analysis functions available through Vision Solutions Portal. After the configuration is
deployed, MIMIX processes keep the internal list up to date as objects are deleted,
moved in or out of the name space, or as other cooperative processing activities
affecting the name space are performed.
To manually deploy configuration for a data group, do the following:
1. If the data group is active, do the following:
a. Perform a controlled end of the data group. See the MIMIX Operations book for
how to end a data group in a controlled manner.
b. Ensure that all pending activity for objects and IFS objects has completed. Use
the command WRKDGACTE STATUS(*ACTIVE) to display any pending
activity entries. Any activities that are still in progress will be listed.
2. From the source system of the data group, type DPYDGCFG and press F4
(Prompt).
3. The Deploy Data Grp. Configuration (DPYDGCFG) display appears. To identify
what will be deployed, do the following:
a. Specify the data group in the Data group definition prompts.
b. At the Include entries prompt, specify the type of data group entries you want
to deploy.
4. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start deploying.

• To submit the job for batch processing, accept the default. Press Enter to
continue with the next step.
5. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
6. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
7. To start deploying, press Enter.
If you want to validate the list of objects to be replicated resulting from deploying
configuration, use the Replicated Objects portlet in Vision Solutions Portal.
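For example, deploying all entry types for one data group as a batch job might be requested as follows. DPYDGCFG is the command named in Step 2; the parameter keywords and the data group name MYDG are illustrative assumptions:

```
DPYDGCFG DGDFN(MYDG SYS1 SYS2)  /* Data group definition (assumed kwd) */
         ENTRIES(*ALL)          /* Include entries prompt              */
         BATCH(*YES)            /* Submit to batch                     */
```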


Setting data group auditing values manually


Default behavior for MIMIX is to change the auditing value of IFS, DLO, and library-
based objects configured for system journal replication as needed when starting data
groups with the Start Data Group (STRDG) command.
To manually set the system auditing level of replicated objects, or to force a change to
a lower configured level, you can use the Set Data Group Auditing (SETDGAUD)
command.
The SETDGAUD command allows you to set the object auditing level for all existing
objects that are defined to MIMIX by data group object entries, data group DLO
entries, and data group IFS entries. The SETDGAUD command can be used for data
groups configured for replicating object information (type *OBJ or *ALL).
When to set object auditing values manually - If you anticipate a delay between
configuring data group entries and starting the data group, you should use the
SETDGAUD command before synchronizing data between systems. Doing so will
ensure that replicated objects will be properly audited and that any transactions for
the objects that occur between configuration and starting the data group will be
replicated.
You can also use the SETDGAUD command to reset the object auditing level for all
replicated objects if a user has changed the auditing level of one or more objects to a
value other than what is specified in the data group entries.
Processing options - MIMIX checks for existing objects identified by data group
entries for the specified data group. The object auditing level of an existing object is
set to the auditing value specified in the data group entry that most specifically
matches the object. Default behavior is that MIMIX only changes an object’s auditing
value if the configured value is higher than the object’s existing value. However, you
can optionally force a change to a configured value that is lower than the existing
value through the command’s Force audit value (FORCE) parameter.
• The default value *NO for the FORCE parameter prevents MIMIX from reducing
the auditing level of an object. For example, if the SETDGAUD command
processes a data group entry with a configured object auditing value of *CHANGE
and finds an object identified by that entry with an existing auditing value of *ALL,
MIMIX does not change the value.
• If you specify *YES for the FORCE parameter, MIMIX will change the auditing
value even if it is lower than the existing value.
For IFS objects, it is particularly important that you understand the ramifications of the
value specified for the FORCE parameter. For more information see “Examples of
changing an IFS object’s auditing value” on page 310.
Procedure - To set the object auditing value for a data group, do the following on each
system defined to the data group:
1. Type the command SETDGAUD and press F4 (Prompt).
2. The Set Data Group Auditing (SETDGAUD) display appears. Specify the name of
the data group you want.

3. At the Object type prompt, specify the type of objects for which you want to set
auditing values.
4. If you want to allow MIMIX to force a change to a configured value that is lower
than the object’s existing value, specify *YES for the Force audit value prompt.
Note: This may affect the operation of your replicated applications. We
recommend that you force auditing value changes only when you have
specified *ALLIFS for the Object type.
5. Press Enter.
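For instance, to force the configured auditing values for all object types of a data group, the procedure above corresponds to a command of this general form. SETDGAUD and the FORCE parameter are documented in this topic; the DGDFN and OBJTYPE keywords and the name MYDG are assumptions:

```
SETDGAUD DGDFN(MYDG SYS1 SYS2)  /* data group (assumed keyword)      */
         OBJTYPE(*ALL)          /* Object type prompt                */
         FORCE(*YES)            /* allow lowering of auditing values */
```

With FORCE(*NO), the same request would only raise auditing values that are lower than the configured values.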

Examples of changing an IFS object’s auditing value


The following examples show the effect of the value of the FORCE parameter when
manually changing the object auditing values of IFS objects configured for system
journal replication.
The auditing values resulting from the SETDGAUD command can be confusing when
your environment has multiple data group IFS entries, each with different auditing
levels, and more than one entry references objects sharing common parent
directories. The following examples illustrate how these conditions affect the results of
setting object auditing for IFS objects.
Data group entries are processed in order from most generic to most specific. IFS
entries are processed using the Unicode character set. The first (more generic) entry
found that matches the object is used until a more specific match is found.
When MIMIX processes a data group IFS entry, each object which matches the entry
is checked and, if necessary, changed to the new auditing value. In the case of an IFS
entry with a generic name, all descendants of the IFS object may also have their
auditing value changed.
Example 1: This scenario illustrates why you may need to force the configured values
to take effect after changing the existing data group IFS entries from *ALL to lower
values. The current auditing value on the object may be from the previously
configured value or may be the result of changes by an IBM command. Table 33
identifies a set of data group IFS entries and their configured auditing values. The
entries are listed in the order in which they are processed by the SETDGAUD
command.

Table 33. Example 1: configuration of data group IFS entries

Order processed Specified object Object auditing value Process type

1 /DIR1/* OBJAUD(*CHANGE) PRCTYPE(*INCLD)

2 /DIR1/DIR2/* OBJAUD(*NONE) PRCTYPE(*INCLD)

3 /DIR1/STMF OBJAUD(*NONE) PRCTYPE(*INCLD)

For this scenario, running the SETDGAUD command with FORCE(*NO) does not
change the auditing values on any existing IFS objects because the configured values
from the data group IFS entries are lower than the existing values.


In this scenario, SETDGAUD FORCE(*YES) must be run to have the configured


auditing values take effect. Table 34 shows the intermediate values as each entry is
processed by the force request and the final results of the change.

Table 34. Intermediate audit values which occur during FORCE(*YES) processing for example 1.

Existing objects    Existing   Changed by   Changed by   Changed by   Final results of
                    value      1st entry    2nd entry    3rd entry    FORCE(*YES)

/DIR1               *ALL                                              *ALL

/DIR1/STMF          *ALL       *CHANGE                   *NONE        *NONE

/DIR1/STMF2         *ALL       *CHANGE                                *CHANGE

/DIR1/DIR2          *ALL       *CHANGE                                *CHANGE

/DIR1/DIR2/STMF     *ALL       *CHANGE      *NONE                     *NONE

Example 2: This example begins with the same set of data group IFS entries used in
example 1 (Table 33) and uses the results of the forced change in example 1 as the
auditing values for the existing objects in Table 35.
Table 35 shows how running the SETDGAUD command with FORCE(*NO) causes
changes to auditing values. This scenario is quite possible as a result of a normal
STRDG request. Complex data group IFS entries and multiple configured values
cause these potentially undesirable results.
Note: Any addition or change to the data group IFS entries can cause these results
to occur.

Table 35. Example 2: comparison of object’s actual values

Existing objects    Existing values   After SETDGAUD   After SETDGAUD
                                      FORCE(*NO)       FORCE(*YES)

/DIR1               *NONE             *NONE            *NONE

/DIR1/STMF          *NONE             *CHANGE          *NONE

/DIR1/STMF2         *CHANGE           *CHANGE          *CHANGE

/DIR1/DIR2          *NONE             *CHANGE          *CHANGE

/DIR1/DIR2/STMF     *NONE             *CHANGE          *NONE

There is no way to maintain the existing values in Table 35 without ensuring that a
forced change occurs every time SETDGAUD is run, which may be undesirable. In
this example, the next time data groups are started, the objects’ auditing values will
be set to those shown in Table 35 for FORCE(*NO).

Any addition or change to the data group IFS entries can potentially cause similar
results the next time the data group is started. To avoid this situation, we recommend
that you configure a consistent auditing value of *CHANGE across data group IFS
entries which identify objects with common parent directories.

Example 3: This scenario illustrates the results of the SETDGAUD command when
the object’s auditing value is determined by the user profile which accesses the object
(value *USRPRF). Table 36 shows the configured data group IFS entry.

Table 36. Example 3 configuration of data group IFS entries

Order processed Specified Object Object auditing value Process type

1 /DIR1/STMF OBJAUD(*NONE) PRCTYPE(*INCLD)

Table 37 compares the results running the SETDGAUD command with FORCE(*NO)
and FORCE(*YES).
Running the command with FORCE(*NO) does not change the value. The value
*USRPRF is not in the range of valid values for MIMIX. Therefore, an object with an
auditing value of *USRPRF is not considered for change.
Running the command with FORCE(*YES) does force a change because the existing
value and the configured value are not equal.

Table 37. Example 3: comparison of object’s actual values

Existing objects    Existing values   After SETDGAUD   After SETDGAUD
                                      FORCE(*NO)       FORCE(*YES)

/DIR1/STMF          *USRPRF           *USRPRF          *NONE


Checking file entry configuration manually


The Check DG File Entries (CHKDGFE) command provides a means to detect
whether the correct data group file entries exist with respect to the data group object
entries configured for a specified data group in your MIMIX configuration. When file
entries and object entries are not properly matched, your replication results can be
affected.
Note: The preferred method of checking is to use automatic scheduling for the
#DGFE audit, which calls the CHKDGFE command and can automatically
correct detected problems. For additional information, see “Interpreting results
for configuration data - #DGFE audit” on page 687.
To check your file entry configuration manually, do the following:
1. On a command line, type CHKDGFE and press Enter. The Check Data Group File
Entries (CHKDGFE) command appears.
2. At the Data group definition prompts, select *ALL to check all data groups or
specify the three-part name of the data group.
3. At the Options prompt, you can specify that the command be run with special
options. The default, *NONE, uses no special options. If you do not want an error
to be reported if a file specified in a data group file entry does not exist, specify
*NOFILECHK.
4. At the Output prompt, specify where the output from the command should be
sent—to print, to an outfile, or to both. See Step 6.
5. At the User data prompt, you can assign your own 10-character name to the
spooled file or choose not to assign a name to the spooled file. The default, *CMD,
uses the CHKDGFE command name to identify the spooled file.
6. At the File to receive output prompts, you can direct the output of the command to
the name and library of a specific database file. If the database file does not exist,
it will be created in the specified library with the name MXCDGFE.
7. At the Output member options prompts, you can direct the output of the command
to the name of a specific database file member. You can also specify how to
handle new records if the member already exists. Do the following:
a. At the Member to receive output prompt, accept the default *FIRST to direct
the output to the first member in the file. If it does not exist, a new member is
created with the name of the file specified in Step 6. Otherwise, specify a
member name.
b. At the Replace or add records prompt, accept the default *REPLACE if you
want to clear the existing records in the file member before adding new
records. To add new records to the end of existing records in the file member,
specify *ADD.
8. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to check data group file entries.

• To submit the job for batch processing, accept *YES. Press Enter and continue
with the next step.
9. At the Job description prompts, specify the name and library of the job description
used to submit the batch request. Accept MXAUDIT to submit the request using
the default job description, MXAUDIT.
10. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
11. To start the data group file entry check, press Enter.
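A batch invocation of this procedure might resemble the following. CHKDGFE, the *NOFILECHK option, the MXCDGFE output file, and the MXAUDIT job description come from the steps above; the parameter keywords and the library name MYLIB are assumed:

```
CHKDGFE DGDFN(*ALL)             /* check all data groups              */
        OPTION(*NOFILECHK)      /* do not report missing files        */
        OUTPUT(*OUTFILE)        /* send results to a database file    */
        OUTFILE(MYLIB/MXCDGFE)  /* file to receive output             */
        BATCH(*YES)             /* submit using the MXAUDIT job desc. */
```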


Starting data groups for the first time


Use this procedure when a configuration checklist directs you to start a newly
configured data group for the first time. You should have identified the starting point in
the journals with “Establish a synchronization point” on page 508 when you
synchronized the systems.
Note: To avoid a potentially long delay while starting replication for the first time, you
can manually deploy the configuration before starting the data group. For
more information, see “Manually deploying configuration changes” on
page 307.
1. From the Work with Data Groups display, type a 9 (Start DG) next to the data
group that you want to start and press Enter.
2. The Start Data Group (STRDG) display appears. Press Enter to access additional
prompts. Do the following:
a. Specify the starting point for user journal replication. For the Database
journal receiver and Database large sequence number prompts, specify the
information you recorded in Step 5 of “Establish a synchronization point” on
page 508.
b. Specify the starting point for system journal replication. For the Object
journal receiver and Object large sequence number prompts, specify the
information you recorded in Step 6 of “Establish a synchronization point” on
page 508.
c. Specify *YES for the Clear pending prompt.
3. If the data group participates in an application group, do the following:
a. Press F10 (Additional parameters).
b. At the Override if in data rsc. group prompt, specify *YES.
4. Press Enter.
5. A confirmation display appears. Press Enter.
6. A second confirmation display appears. Press Enter to start the data group.
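As a sketch, a first-time start request might look like the following, with the receiver names and sequence numbers replaced by the values recorded when you established the synchronization point. STRDG is the command behind option 9; the parameter keywords and all values shown are illustrative assumptions:

```
STRDG DGDFN(MYDG SYS1 SYS2)  /* data group (assumed keyword)        */
      DBJRNRCV(RCV0001)      /* Database journal receiver (Step 2a) */
      DBSEQNBR(123456)       /* Database large sequence number      */
      OBJJRNRCV(AUDRCV01)    /* Object journal receiver (Step 2b)   */
      OBJSEQNBR(654321)      /* Object large sequence number        */
      CLRPND(*YES)           /* Clear pending (Step 2c)             */
```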

Identifying data groups that use an RJ link
Use this procedure to determine which data groups use a remote journal link before
you end a remote journal link or remove a remote journaling environment.
1. Enter the command WRKRJLNK and press Enter.
2. Make a note of the name indicated in the Source Jrn Def column for the RJ Link
you want.
3. From the command line, type WRKDGDFN and press Enter.
4. For all data groups listed on the Work with DG Definitions display, check the
Journal Definition column for the name of the source journal definition you
recorded in Step 2.
• If you do not find the name from Step 2, the RJ link is not used by any data
group. The RJ link can be safely ended or can have its remote journaling
environment removed without affecting existing data groups.
• If you find the name from Step 2 associated with any data groups, those data
groups may be adversely affected if you end the RJ link. A request to remove
the remote journaling environment removes configuration elements and
system objects that need to be created again before the data group can be
used. Continue with the next step.
5. Press F10 (View RJ links). Consider the following and contact your MIMIX
administrator before taking action that will end the RJ link or remove the remote
journaling environment.
• When *NO appears in the Use RJ Link column, the data group will not be
affected by a request to end the RJ link or to end the remote journaling
environment.
Note: If you allow applications other than MIMIX to use the RJ link, they will be
affected if you end the RJ link or remove the remote journaling
environment.
• When *YES appears in the Use RJ Link column, the data group may be
affected by a request to end the RJ link. If you use the procedure for ending a
remote journal link independently in the MIMIX Operations book, ensure that
any data groups that use the RJ link are inactive before ending the RJ link.


Using file identifiers (FIDs) for IFS objects


Commands used for user journal replication of IFS objects use file identifiers (FIDs) to
uniquely identify the correct IFS tracking entries to process. The System 1 file
identifier and System 2 file identifier prompts ensure that IFS tracking entries are
accurately identified during processing. These prompts can be used alone or in
combination with the System 1 object prompt.
These prompts enable the following combinations:
• Processing by object path: A value is specified for the System 1 object prompt
and no value is specified for the System 1 file identifier or System 2 file identifier
prompts.
When processing by object path, a tracking entry is required for all commands
with the exception of the SYNCIFS command. If no tracking entry exists, the
command cannot continue processing. If a tracking entry exists, a query is
performed using the specified object path name.
• Processing by object path and FIDs: A value is specified for the System 1
object prompt and a value is specified for either or both of the System 1 file
identifier or System 2 file identifier prompts.
When processing by object path and FIDs, a tracking entry is required for all
commands. If no tracking entry exists, the command cannot continue processing.
If a tracking entry exists, a query is performed using the specified FID values. If
the specified object path name does not match the object path name in the
tracking entry, the command cannot continue processing.
• Processing by FIDs: A value is specified for either or both of the System 1 file
identifier or System 2 file identifier prompts and, with the exception of the
SYNCIFS command, no value is specified for the System 1 object prompt. In the
case of SYNCIFS, the default value *ALL is specified for the System 1 object
prompt.
When processing by FIDs, a tracking entry is required for all commands. If no
tracking entry exists, the command cannot continue processing. If a tracking entry
exists, a query is performed using the specified FID values.

Configuring restart times for MIMIX jobs
Certain MIMIX jobs are restarted on a regular basis in order to maintain the MIMIX
environment. The ability to configure this activity can ease conflicts with your
scheduled workload by changing when the MIMIX jobs restart to a more convenient
time for your environment.
You can configure the job restart time on system definitions to affect when system-
level jobs are restarted, on data group definitions to affect when replication-level jobs
are restarted, or both. To make effective use of this capability, you may need to set the
job restart time in more than one location.

Configurable job restart time operation


The default operation of MIMIX is to restart affected MIMIX jobs at midnight (12:00
a.m.). However, you can change the restart time by setting a different value for the
Job restart time parameter (RSTARTTIME) on system definitions and data group
definitions. The time is based on a 24 hour clock. The values specified in the system
definitions and data group definitions are retrieved at the time the MIMIX jobs are
started. Changes to the specified values have no effect on jobs that are currently
running. Changes are effective the next time the affected MIMIX jobs are started.
For a data group definition you can also specify either *SYSDFN1 or *SYSDFN2 for
the Job restart time (RSTARTTIME) parameter. Respectively, these values use the
restart time specified in the system definition identified as System 1 or System 2 for
the data group.
Both system and data group definition commands support the special value *NONE,
which prevents the affected jobs at that level from automatically restarting.

Attention: The value *NONE for the Job restart time parameter is not
recommended.
If not restarted every day, target journal inspection becomes less effective
because reporting results per user would no longer occur every day.
If you specify *NONE in a system definition or a data group definition, you
need to develop and implement alternative procedures to ensure that the
affected MIMIX jobs are periodically restarted. Restarting the jobs ensures
that long running MIMIX jobs are not ended by the system due to resource
constraints and refreshes the job log to avoid overflow and abnormal job
termination.

Affected jobs
Results of what you specify are also affected by the following:
• The time zone in which each system exists.
• The replication role (source or target) of the system within a data group affects
which data group-level jobs are started on a system. Also, target journal
inspection jobs run at the system-level based on the system’s current role for
replication processes.

Note: Each system has a nightly cleanup job (SM_CLEANUP) that is not affected by
the configurable restart time. These cleanup jobs run shortly after midnight on
the local system.
MIMIX system-level jobs restart when they detect that the time specified in the
system definition has passed. The affected system level jobs are listed in Table 38.

Table 38. System level jobs that restart and the effect of the value specified in a system definition

Restarted System Level Jobs    Where Jobs Run                    Effect of Specified Restart Time

Journal managers               Each system                       Job on each system restarts at the time
(JRNMGR)                                                         specified in its system definition.

Target journal inspection      Only on systems that are          Jobs running on a current target system
(TGTJRNINSP)                   currently target systems for      restart at the time specified in the system
                               data group replication.           definition for the target system.

MIMIX data group-level jobs have a delay of 2 to 35 minutes from the specified time
built into the job restart processing. The actual delay is unique to each job. By
distributing the jobs within this range, the load on systems and communications is
more evenly distributed, reducing bottlenecks caused by many jobs simultaneously
attempting to end, start, and establish communications.

Table 39. Data group level jobs that restart and the effect of the value specified in a data group definition

Restarted Data Group           Where Jobs Run       Effect of Specified Restart Time
Level Jobs

Object send (OBJSND)           Replication Source   The actual restart time is based on the timestamp
Object retrieve (OBJRTV)                            of the system on which the OBJSND job runs.
Container send (CNRSND)                             Restart occurs within the allowed delay following
Status receive (STSRCV)                             the time specified in the data group definition.
                                                    When an object send job is shared by multiple
Object receive (OBJRCV)        Replication Target   data groups, the restart times of all data groups
Container receive (CNRRCV)                          which share that job are evaluated for restart
Status send (STSSND)                                times other than *NONE. The data group with the
                                                    earliest configured restart time is used to restart
                                                    the object send job and related object replication
                                                    jobs for all of the sharing data groups. If all of
                                                    the sharing data groups have a restart time of
                                                    *NONE, then none of those data groups restart the
                                                    shared object send job and related object
                                                    replication jobs.

Database reader (DBRDR)        Replication Target   Restarts when the time specified in the data group
                                                    definition occurs on the target system.

Database send (DBSND)          Replication Source   The actual restart time is based on the timestamp
Database receive (DBRCV)       Replication Target   of the source system where the DBSND job runs.
                                                    Restart occurs within the allowed delay following
                                                    the time specified in the data group definition.
                                                    These jobs only run in data groups configured for
                                                    source-send replication.

Object apply (OBJAPY)          Replication Target   The actual restart time is based on the timestamp
                                                    of the target system. Restart occurs within the
                                                    allowed delay following the time specified in the
                                                    data group definition.

Examples: job restart time


“Restart time examples: system definitions” on page 320 and “Restart time examples:
system and data group definition combinations” on page 321 illustrate the effect of
using the Job restart time (RSTARTTIME) parameter. These examples assume that
the system configured as the management system for MIMIX operations is also the
target system for replication during normal operation. For each example, consider the
effect it would have on nightly backups that complete between midnight and 1 a.m. on
the target system.

Restart time examples: system definitions


These examples show the effect of changing the job restart time only in system
definitions.
Example 1: MIMIX is running Monday noon when you change the job restart time to
013000 in system definition NEWYORK, which is the management system (and
target system). The network system’s system definition uses the default value 000000
(midnight). MIMIX remains up the rest of the day. Because the current jobs use values
that existed prior to your change, system-level jobs on NEWYORK automatically
restart at midnight. As a result of your change, system-level jobs on NEWYORK
restart at 1:30 a.m. Tuesday and thereafter. The journal manager on CHICAGO
restarts when midnight occurs on that system.
Example 2: It is Friday evening and all MIMIX processes on the system CHICAGO
are ended while you perform planned maintenance. During that time you change the
job restart time to 040000 in system definition CHICAGO, which is a network system
(and source system). You start MIMIX processing again at 11:07 p.m. so your
changes are in effect. The journal manager job on CHICAGO restarts Saturday and
thereafter at 4 a.m.
Because the management system is also the target for replication and its system
definition uses the default restart value of midnight, the journal manager and target
journal inspection jobs on that system restart when midnight occurs on that system.
Example 3: Friday afternoon you change system definition HONGKONG to have a
job restart time value of *NONE. HONGKONG is the management system and the
target for replication. LONDON is the associated network system and its system
definition uses the default setting 000000 (midnight). You end and restart the MIMIX
jobs to make the change effective. The journal manager and target journal inspection
on HONGKONG are no longer restarted. In your runbook you document the new
procedures to manually restart the journal manager on HONGKONG and to restart
target journal inspection on HONGKONG when that system is the target for
replication.
Example 4: Wednesday evening you change the system definitions for LONDON and
HONGKONG to both have a job restart time of *NONE. HONGKONG is the
management system and the target for replication. You restart the MIMIX jobs to
make the change effective. In your runbook you document the new procedures to
manually restart the journal managers on HONGKONG and LONDON and to restart
target journal inspection on the system that is currently the target system.

Restart time examples: system and data group definition combinations


These examples show the effect of changing the job restart time in various
combinations of system definitions and data group definitions.
Example 5: You have a data group that operates between SYSTEMA and
SYSTEMB, which are both in the same time zone. Both the system definitions and the
data group definition use the default value 000000 (midnight) for the job restart time.
The journal managers restart at midnight on both systems, and the target journal
inspection jobs restart on the system that is the current target for replication. The data
group jobs on both systems restart between midnight and 35 minutes after midnight.
Example 6: 10:30 Tuesday morning you change data group definition APP1 to have a
job restart time value of 013500 (1:35 a.m.). The data group operates between
SYSTEMA and SYSTEMB, which are both in the same time zone. Both system
definitions use the default restart time of midnight. MIMIX jobs remain up and running.
At midnight, the appropriate system-level jobs on both systems restart using the
values from the preexisting configuration; the data group-level jobs restart on both
systems between midnight and 35 minutes after midnight. On Wednesday and
thereafter, APP1 data group-level jobs restart between 1:37 and 2:10 a.m. while the
system-level jobs and jobs for other data groups restart at midnight.
Example 7: You have a data group that operates between SYSTEMA and
SYSTEMB, where SYSTEMB is specified as system 2 of the data group definition
name and is the target for replication. Both systems are in the same time zone. The
data group definition specifies a job restart time value of *SYSDFN2. The system
definition for SYSTEMA specifies the default job restart time of 000000 (midnight).
SYSTEMB system definition specifies the value *NONE for the job restart time. The
journal manager and target journal inspection on SYSTEMB do not restart and the
data group jobs do not restart on either system because of the *NONE value specified
for SYSTEMB. The journal manager on SYSTEMA restarts at midnight.
Example 8: You have a data group defined between CHICAGO and NEWYORK
(System 1 and System 2, respectively) and the data group’s job restart time is set to
030000 (3 a.m.). CHICAGO is the source system as well as a network system; its
system definition uses the default job restart time of midnight. NEWYORK is the target
system as well as the management system; its system definition uses a job restart
time of 020000 (2 a.m.). There is a one hour time difference between the two
systems; said another way, NEWYORK is an hour ahead of CHICAGO.
Figure 16 and Figure 17 show the effect of the time zone difference and replication
processes used by the data group.
The journal manager on CHICAGO restarts at midnight Chicago time. The journal
manager and target journal inspection on NEWYORK restart at 2 a.m. New York time.
Figure 16 shows the data group as being configured with MIMIX Remote Journal
support. The database reader (DBRDR) and object apply (OBJAPY) job restart based
on the time on NEWYORK, the target system. The remaining replication processes
restart on the system where they run based on the time on CHICAGO, the source
system.

Figure 16. The data group in this environment uses MIMIX Remote Journal support.

Figure 17 shows the data group as configured to use source-send processing for user
journal replication. With the exception of the object apply jobs (OBJAPY), the data
group jobs restart during the same 2 to 35 minute timeframe based on Chicago time
(between 2 and 35 minutes after 3 a.m. in Chicago; after 4 a.m. in New York).
Because the OBJAPY jobs are based on the time on the target system, which is an
hour ahead of the source system time used for the other jobs, the OBJAPY jobs
restart between 3:02 and 3:35 a.m. New York time.

Figure 17. The data group in this environment is configured for source-send replication.

Configuring the restart time in a system definition


To configure the restart time for MIMIX system-level jobs in an existing environment,
do the following:
1. On the Work with System Definitions display, type a 2 (Change) next to the
system definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want.
Notes:
• The time is based on a 24 hour clock, and must be specified in HHMMSS
format. Although seconds are ignored, the complete time format must be
specified. Valid values range from 000000 to 235959. The value 000000 is the
default and is equivalent to midnight.
• Consider the effect of any time zone differences between the management
system and the network system.
4. To accept the change, press Enter.
The change has no effect on jobs that are currently running. The value for the Job
restart time is retrieved from the system definition at the time the jobs are started.
The change is effective the next time the jobs are started.
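The same change can be made from a command line. This sketch assumes the MIMIX
Change System Definition command is named CHGSYSDFN and that its first keyword is
SYSDFN; verify both by prompting (F4) in your installation before use:

```
CHGSYSDFN SYSDFN(NEWYORK) RSTARTTIME(013000)
```

As with the display-based steps, the new time takes effect the next time the affected
system-level jobs are started.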

Configuring the restart time in a data group definition
To configure the restart time for MIMIX data group-level jobs in an existing
environment, do the following:
1. On the Work with Data Group Definitions display, type a 2 (Change) next to the
data group definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want.
Notes:
• The time is based on a 24 hour clock, and must be specified in HHMMSS
format. Although seconds are ignored, the complete time format must be
specified. Valid values range from 000000 to 235959. The value 000000 is the
default and is equivalent to midnight.
• Consider the effect of any time zone differences between the management
system and the network system.
4. To accept the change, press Enter.
Changes have no effect on jobs that are currently running. The value for the Job
restart time is retrieved at the time the jobs are started. The change is effective the
next time the jobs are started.
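The same change can also be made with the Change Data Group Definition (CHGDGDFN)
command shown elsewhere in this book. This sketch sets the restart time used in
Example 6; the data group name is illustrative:

```
CHGDGDFN DGDFN(APP1) RSTARTTIME(013500)
```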

Setting the system time zone and time


Each MIMIX system must have the correct time zone (QTIMZON) and time (QTIME)
system values set. If the time zone and time are not set correctly, it may cause issues
when running procedures for application groups. For example, the procedure status
time may display in the wrong order with incorrect times, which can make it difficult to
work with the procedure, or a switch may be unable to complete.
Note: These system values are updated immediately, so timed jobs may be
triggered when the values are updated. Therefore, you may want to schedule
this change, if necessary, during a time with minimum scheduled jobs or
during a planned outage when the system is in restricted state.
Verify that the QTIMZON system value is set with the correct value for the time zone
in which the LPAR is intended to run. If a change is needed, you should immediately
change the QTIME system value since the time of day is updated based on the new
value entered in the QTIMZON system value. To change the system values, do the
following:
1. Set the correct time zone in QTIMZON.
To determine the correct time zone when updating QTIMZON, you need to know:
• The time zone name.
• If Daylight Savings Time is observed. If Daylight Savings Time is observed, you
must also know when Daylight Savings Time starts.
In the TIME ZONE field in QTIMZON, you can press F4 for a list of time zones
included with the system. A description of the time zones included with the system
can be found at:
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/rzati/rzatitimezone.htm
For additional information about time zones, see the IBM InfoCenter topic time
management concepts.
Once set, the QTIME *SYSVAL immediately changes to reflect the new QTIMZON
as if the previous QTIME value was the time in GMT.
2. Set the system time (QTIME) to the correct time so that previously scheduled jobs
do not repeat or get bypassed by the change in the QTIMZON value.
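Both system values can be set with the IBM i Change System Value (CHGSYSVAL)
command. The time zone description below (QN0500UTCS, U.S. Eastern observing
Daylight Savings Time) is only an example; substitute the IBM-supplied or
user-created time zone description appropriate for the LPAR:

```
CHGSYSVAL SYSVAL(QTIMZON) VALUE(QN0500UTCS)
CHGSYSVAL SYSVAL(QTIME) VALUE('130000')
```

The QTIME value uses the same HHMMSS format described earlier in this chapter.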

Creating an application group definition
Use this topic to create an application group. Application groups are best practice and
provide the ability to group and control multiple data groups as one entity. Default
procedures for starting, switching, and ending the application group are also created.
To create an application group definition, do the following:
1. From the MIMIX Basic Main Menu, type 1 (Work with application groups) and
press Enter.
2. The Work with Application Groups display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create Application Group Def. (CRTAGDFN) display appears. Do the following:
a. At the Application group definition prompt, specify a name.
b. The Application group type prompt defaults to *NONCLU. This indicates that the
application group will not participate in a cluster controlled by the IBM i
operating system.
c. Press Enter.
4. An additional prompt appears. Specify a description of the application group.
5. Press Enter.
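The equivalent definition can be created directly with the CRTAGDFN command. The
AGDFN, TYPE, and TEXT keywords shown here are assumed from the prompt names;
verify them by prompting (F4) the command:

```
CRTAGDFN AGDFN(APPGRP1) TYPE(*NONCLU) TEXT('Application group for order entry')
```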

Loading data resource groups into an application group

Data resource groups identify the data to be replicated within the application group.
Use this topic to load data resource groups into an application group by selecting data
group definitions.
The Load Data Rsc. Grp. Ent. (LODDTARGE) command uses the names of the
specified data groups when determining the names of the data resource group entries
created. Data groups that have the same name for their three-part name will be
assigned to the same data resource group entry. The name of each data resource
group must be unique and will use the same name as its data groups. (For data
groups of type *PEER, the resource group entry will be named ADMDMN.) If a data
resource group entry already exists with the data group name or the name ADMDMN,
a unique name is generated by concatenating up to the first five characters of the data
group name, or ADMDMN, followed by the characters RGE. If necessary, a two
character alphanumeric suffix is added to ensure its uniqueness.
Note: Most environments can use these instructions. However, some environments,
such as those that perform bi-directional replication or that broadcast
replicated data to multiple systems with data groups that do not have the same
name, need to use the process described in “Manually adding resource group
and node entries to an application group” on page 328.
Do the following to load data resource group entries for an application group:
1. Enter the following command, specifying the name of the installation library:
installation_library/LODDTARGE
The Load Data Rsc. Grp. Ent. (LODDTARGE) display appears.
2. At the Application group definition prompt, specify the name of the application
group.
3. At the Data group definition prompt, specify the value you want. The value *ALL
selects all available data groups within the installation. To have a smaller set of
data groups associated with the application group, specify the name of one or
more data groups. To see a list of the available data group names, press F4.
4. To load the entries, press Enter.
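For example, the following sketch loads all available data groups in the installation
into application group APPGRP1. The DGDFN keyword is assumed from the Data group
definition prompt; verify it by prompting the command:

```
installation_library/LODDTARGE AGDFN(APPGRP1) DGDFN(*ALL)
```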

Specifying the primary node for the application group


Use this topic to specify the correct primary node when you are configuring and have
associated existing data groups to an application group by using “Loading data
resource groups into an application group” on page 327.
Do the following:
1. From the MIMIX Basic Main Menu, type 1 (Work with application groups) and
press Enter.
2. The Work with Application Groups display appears. Type 12 (Node entries) next
to the application group you want and press Enter.

3. The Work with Node Entries display appears. Press F10 to toggle between
configured view and status view.
Note: While configuring, the status view of this display will show the Current Role
and Data Provider with values of *UNDEFINED until the application group
is started.
4. From the configured view, type 2 (Change) next to the node that you want to be
the primary node and press Enter.
5. The Change Node Entry (CHGNODE) command appears. Specify *PRIMARY at
the Role prompt.
6. Press Enter.
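From a command line, the equivalent change might look like the following. The ROLE
keyword is confirmed above; the AGDFN and NODE keywords on the CHGNODE command are
assumed from the prompt names and should be verified by prompting (F4):

```
CHGNODE AGDFN(APPGRP1) NODE(SYSA) ROLE(*PRIMARY)
```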

Manually adding resource group and node entries to an application group
For some environments, you need to manually update the configuration information
needed for application groups. This includes environments that perform bi-directional
replication and environments where data groups broadcast the same replicated data
to multiple systems using data groups that do not have the same name. For these
environments, the following instructions replace Step 3 and Step 4 of the “Checklist:
converting to application groups” on page 145.
Do the following to manually add a data resource group entry and identify it within the
data groups, and assign node roles within an application group:
1. Use the ADDDTARGE command to add a data resource group entry of type *DTA
to an application group.
2. For each data group to be associated in the same data resource group, use the
CHGDGDFN command and specify the name of the data resource group entry
added in Step 1 as the value for DTARSCGRP.
3. For each node (system) included in the data groups, use the ADDNODE
command to specify the node’s role (ROLE) and add the node to the application
group.
Example: simple three-node broadcast environment
This example manually sets up configuration information for a simple three-node
broadcast environment controlled by an application group. In this example, all objects
and changes on SYSA will be replicated to SYSB and SYSC.
The following data groups to accomplish this exist:
DTA1 SYSA SYSB
DTA2 SYSA SYSC
DTA3 SYSB SYSC
Also, the application group, APPGRP1, has already been created.

For this example, the following steps create a data resource group entry, add the data
groups to the resource group entry, and define the node role of each system within the
application group.
1. Add a data resource group entry to the application group APPGRP1.
ADDDTARGE AGDFN(APPGRP1) DTARSCGRP(RSCGRP1) TYPE(*DTA)
2. Change the data group definitions, specifying RSCGRP1 as the value for the Data
resource group entry.
CHGDGDFN DGDFN(DTA1) DTARSCGRP(RSCGRP1)
CHGDGDFN DGDFN(DTA2) DTARSCGRP(RSCGRP1)
CHGDGDFN DGDFN(DTA3) DTARSCGRP(RSCGRP1)
3. Define the correct node role for each node as you add the nodes to the application
group.
ADDNODE AGDFN(APPGRP1) RSCGRP(*AGDFN) NODE(SYSA)
ROLE(*PRIMARY)
ADDNODE AGDFN(APPGRP1) RSCGRP(*AGDFN) NODE(SYSB)
ROLE(*BACKUP) POSITION(1)
ADDNODE AGDFN(APPGRP1) RSCGRP(*AGDFN) NODE(SYSC)
ROLE(*BACKUP) POSITION(2)

Starting, ending, or switching an application group
Application group commands that start (STRAG), end (ENDAG), or switch (SWTAG)
the replication environment invoke procedures to perform the requested operation.
For the purpose of describing their use, these commands are quite similar.
This topic describes behavior of the commands for application groups that do not
participate in a cluster controlled by the IBM i operating system (*NONCLU
application groups).
The following parameters are available on all of the commands unless otherwise
noted.
What is the scope of the request? The following parameters identify the scope of
the requested operation:
Application group definition (AGDFN) - Specifies the requested application group.
You can either specify a name or the value *ALL.
Resource groups (TYPE) - Specifies the types of resource groups to be
processed for the requested application group.
Data resource group entry (DTARSCGRP) - Specifies the data resource groups to
include in the request. The default is *ALL or you can specify a name. This
parameter is ignored when TYPE is *ALL or *APP.
What is the expected behavior? The following parameters, when available, define
the expected behavior:
Switch type (SWTTYP) - Only available on the SWTAG command, this specifies
the reason the application group is being switched. The procedure called to
perform the switch and the actions performed during the switch differ based on
whether the current primary node (data source) is available at the start of the
switch procedure. The default value, *PLANNED, indicates that the primary node
is still available and the switch is being performed for normal business processes
(such as to perform maintenance on the current source system or as part of a
standard switch procedure). The value *UNPLANNED indicates that the switch is
an unplanned activity and the data source system may not be available.
Current node roles (ROLE) - Only available on the STRAG command, this
parameter is ignored for non-cluster application groups.
Node roles (ROLE) - Only available on the SWTAG command, this specifies
which set of node roles will determine the node that becomes the new primary
node as a result of the switch. The default value *CURRENT uses the current
order of node roles. If the application group participates in a cluster, the current
roles defined within the CRGs will be used. If *CONFIG is specified, the
configured primary node will become the new primary node and the new role of
other nodes in the recovery domain will be determined from their current roles. If
you specify a name of a node within the recovery domain for the application
group, the node will be made the new primary node and the new role of other
nodes in the recovery domain will be determined from their current roles.
What procedure will be used? The following parameters identify the procedure to
use and its starting point:

Begin at step (STEP) - Specifies where the request will start within the specified
procedure. This parameter is described in detail below.
Procedure (PROC) - Specifies the name of the procedure to run to perform the
requested operation when starting from its first step. The value *DFT will use the
procedure designated as the default for the application group. The value
*LASTRUN uses the same procedure used for the previous run of the command.
You can also specify the name of a procedure that is valid for the specified
application group and type of request.

Where should the procedure begin? The value specified for the Begin at step
(STEP) parameter on the request to run the procedure determines the step at which
the procedure will start. The status of the last run of the procedure determines which
values are valid.
The default value, *FIRST, will start the specified procedure at its first step. This value
can be used when the procedure has never been run, when its previous run
completed (*COMPLETED or *COMPERR), or when a user acknowledged the status
of its previous run which failed, was canceled, or completed with errors
(*ACKFAILED, *ACKCANCEL, or *ACKERR respectively).
Other values are for resolving problems with a failed or canceled procedure. When a
procedure fails or is canceled, subsequent attempts to run the same procedure will
fail until user action is taken. You will need to determine the best course of action for
your environment based on the implications of the canceled or failed steps and any
steps which completed.
The value *RESUME will start the last run of the procedure beginning with the step at
which it failed, the step that was canceled in response to an error, or the step
following where the procedure was canceled. The value *RESUME may be
appropriate after you have investigated and resolved the problem which caused the
procedure to end. Optionally, if the problem cannot be resolved and you want to
resume the procedure anyway, you can override the attributes of a step before
resuming the procedure.
The value *OVERRIDE will override the status of all runs of the specified procedure
that did not complete. The *FAILED or *CANCELED status of these procedures are
changed to acknowledged (*ACKFAILED or *ACKCANCEL) and a new run of the
procedure begins at the first step.

The MIMIX Operations book describes the operational level of working with
procedures and steps in detail.
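For example, after investigating and resolving the problem behind a failed start
procedure, a command like the following resumes the last run at the step where it
failed. This is a sketch using the parameter keywords described above; other
parameters are left at their defaults:

```
STRAG AGDFN(APPGRP1) STEP(*RESUME)
```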

Starting an application group


For an application group, a procedure for only one operation (start, end, or switch)
can run at a time.
To start an application group, do the following:
1. From the Work with Application Groups display, type 9 (Start) next to the
application group you want and press F4 (Prompt).
2. Verify that the values you want are specified for Resource groups and Data
resource group entry.

3. If you are starting after addressing problems with the previous start request,
specify the value you want for Begin at step. Be certain that you understand the
effect the value you specify will have on your environment.
4. Press Enter.
5. The Procedure prompt appears. Do one of the following:
• To use the default start procedure, press Enter.
• To use a different start procedure for the application group, specify its name.
Then press Enter.

Ending an application group


For an application group, a procedure for only one operation (start, end, or switch)
can run at a time.
To end an application group, do the following:
1. From the Work with Application Groups display, type 10 (End) next to the
application group you want and press F4 (Prompt).
2. Verify that the values you want are specified for Resource groups and Data
resource group entry.
3. If you are starting the procedure after addressing problems with the previous end
request, specify the value you want for Begin at step. Be certain that you
understand the effect the value you specify will have on your environment.
4. Press Enter.
5. The Procedure prompt appears. Do one of the following:
• To use the default end procedure, press Enter.
• To use a different end procedure for the application group, specify its name.
Then press Enter.

Switching an application group


For an application group, a procedure for only one operation (start, end, or switch)
can run at a time.
To switch an application group, do the following:
1. From the Work with Application Groups display, type 15 (Switch) next to the
application group you want and press Enter.
The Switch Application Group (SWTAG) display appears.
2. Verify that the values you want are specified for Resource groups and Data
resource group entry.
3. Specify the type of switch to perform at the Switch type prompt.
4. Verify that the default value *CURRENT for the Node roles prompt is valid for the
switch you need to perform. If necessary, specify a different value.
5. If you are starting the procedure after addressing problems with the previous
switch request, specify the value you want for Begin at step. Be certain that you
understand the effect the value you specify will have on your environment.
6. Press Enter.
7. The Procedure prompt appears. Do one of the following:
• To use the default switch procedure for the specified switch type, press Enter.
• To use a different switch procedure for the application group, specify its name.
Then press Enter.
8. A switch confirmation panel appears. To perform the switch, press F16.

Performing target journal inspection
The data integrity of replicated objects can be affected if they are changed on the
target system by programs or users other than MIMIX. For new installations, shipped
default values for journal definitions and data group definitions allow MIMIX to
automatically perform target journal inspection to check for such actions. On any
given target system, both the system journal (QAUDJRN) and user journals are
inspected. MIMIX also notifies you so that you can take appropriate action.
Target journal inspection consists of a set of processes that run on a system only
when that system is currently the target system for replication. Each process reads a
journal to check for users or programs other than MIMIX that have modified replicated
objects. The number of inspection processes on a system depends on how many user
journals on the target system are defined to the data groups replicating to that system.
There is one inspection process for the system journal (QAUDJRN) regardless of how
many data groups use the system as a target system. Each user journal on the target
system also has an inspection process, which may be used by one or more data
groups.
Any detected modifications are logged in an internal database of replicated objects.
The example below shows the relationships between data groups, journals, and
configuration in a simple switchable replication environment.
Each target journal inspection process sends a notification once per day per user that
changed objects on the target node. Only the first object changed by the user is
identified in the notification. However, additional objects changed by the same user
are marked in the replicated objects database with the unique ID of the already sent
notification.
When using MIMIX through Vision Solutions Portal, you can use the Replicated
Objects portlet to easily view a list of all the objects changed by a particular user,
program, or job, or a list of those that have the same notification ID. This capability is
only available through Vision Solutions Portal.
Notes:
• Target journal inspection does not occur for the journals identified in the
MXCFGJRN journal definition and journal definitions that identify the remote
journal used in RJ configurations (whose names typically end with @R).
• In environments that perform bi-directional replication, target journal inspection
does not report a target object as changed by user when that object is also
replicated by a different data group using that system as its source.
• MIMIX automatically creates journal definitions for the target system for
QAUDJRN and user journals. In environments where only user journal replication
is configured, the QAUDJRN journal definition on the target system is still needed
so that target journal inspection can check for all transactions.
Target journal inspection is started and ended when MIMIX starts or ends. Starting
data groups will start inspection jobs on the target system if necessary. You can also
manually start or end the inspection processes for a system with commands that act
on system-level processes (STRMMXMGR, ENDMMXMGR). Inspection processes


are included with other system-level jobs that restart daily. The name of each
inspection process job is the name of the journal definition.
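As a sketch, the system-level commands named above could be used to manually control inspection for a system. The STRMMXMGR form follows the example shown later in this chapter; the ENDMMXMGR parameters shown assume that command mirrors STRMMXMGR, and the system definition name HENRY is hypothetical:

```
STRMMXMGR SYSDFN(HENRY) TGTJRNINSP(*YES)
ENDMMXMGR SYSDFN(HENRY) TGTJRNINSP(*YES)
```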
The first time that a target journal inspection process is started, it begins with the last
sequence number in the currently attached journal receiver on the target system.1
(The starting point is not associated with the location in the source journal from which
replication is started.)
When a target journal inspection process ends, MIMIX retains information about the
last sequence number it processed. On subsequent start requests, the target journal
inspection process starts at the next journal sequence number following the last
sequence number it processed if the journal receiver is still available. If the receiver is
no longer available, processing starts with the last sequence number in the currently
attached journal receiver.1 Any time a target journal inspection process starts,
message LVI3901 is issued to the MIMIX message log and to the job log, identifying
where the journal inspection process started.
When starting target journal inspection after enabling target journal inspection in a
journal definition where it was previously disabled, processing begins with the last
sequence number in the currently attached journal receiver on the target system.1
For each data group, status of target journal inspection is included with other target
system manager processes. At the system level, the status reported is the combined
status of all target inspection processes currently running on that system. More
status-related information is available in the MIMIX Operations book.
1. This behavior applies to service pack 7.1.06.00 and higher. In earlier version 7.1 service
packs, processing begins at the first entry in the currently attached journal receiver on the
target system, which can result in false target journal inspection notifications being reported
on initial startup.

Example: An application group controls two switchable data groups. Both data
groups perform system and user journal replication, but only one data group is
configured for remote journaling. Figure 18 shows the journals associated with this
configuration.

Figure 18. Example: journals present in a switchable environment.

Note that the remote journals used by the remote journaling environment of data
group ABC will never be used for target journal inspection.
To enable target journal inspection for this example environment requires:
• Data group definitions ABC and DEF must specify *YES for Journal on target
(JRNTGT). Also, because these data groups perform user journal replication, the
values of System 1 journal definition (JRNDFN1) and System 2 journal definition
(JRNDFN2) must identify the journal definitions for the systems identified as
system 1 or system 2, respectively, in the data group name (DGDFN). Often the
journal definition names use the same name as the data group. However, it is
possible that a data group may be using a journal definition with a different name
or sharing a journal definition with a different data group.
• All of the journal definitions in Table 40 must specify *ACTIVE for the Target
journal state (TGTSTATE) and *YES for Target journal inspection (TGTJRNINSP).


Table 40. Example: inspected target journals and their associated journal definitions

Target System     Inspected Target   In        Associated Journal
of Replication    Journal            Library   Definition
OSCAR             ABC                #MXJRN    ABC OSCAR
                  DEF                #MXJRN    DEF OSCAR
                  QAUDJRN            QSYS      QAUDJRN OSCAR
HENRY             ABC                #MXJRN    ABC HENRY
                  DEF                #MXJRN    DEF HENRY
                  QAUDJRN            QSYS      QAUDJRN HENRY

Automatic correction of errors found by target journal inspection


MIMIX supports automatic correction of objects identified by target journal inspection
as “changed on target by user”.
Target journal inspection will not identify or automatically correct any operations for
implicitly defined parent objects on the target system.
When target journal inspection identifies a problem with a target system object, the
replication manager process starts and evaluates how to resolve the problem. The
replication manager (MXREPMGR) is a transient process that runs in the MIMIXSBS
subsystem as needed.
Many of the automatic corrections rely on auditing. For problems that are corrected by
auditing, the replication manager updates the internal priority for auditing the object
so that the object or member becomes eligible for the next run of a priority object
audit. The correction can occur in the next run of the appropriate audit (priority or
scheduled) only if the automatic audit recovery policy is enabled.
The following types of problems are addressed:
• Replicated object or member was moved, renamed, or an attribute was
changed on target system. For an object, the next run of an audit which checks
the object type will correct the problem. For a physical file member, the next run of
the #FILATRMBR audit will correct the problem.
• Object or member data was changed on the target system. For an object, the
next run of an audit which checks the object type will correct the problem. For
changes to a physical file member, the replication manager considers how the file
member is replicated. For user journal replication, the member is included in the
next run of the #FILDTA audit. For system journal replication, the SYNCOBJ
command is used to synchronize the member from the source to the target
system.
• Object or file member was created on target system within replication
scope. The replication manager uses the value of the Object only on target
(OBJONTGT) policy in effect to determine how to recover the object or member. If

the value is *DELETE, the object or member is deleted from the target system. If
the value is *DISABLED, no recovery action is taken.
• Replicated object or member was deleted on target system. The replication
manager determines if the object or member still exists on the source system. If
the source object exists and is within the replication scope, it is synchronized to
the target system by the next run of an audit which checks the object type. For
physical file members, if the source member exists and is within the replication
scope, the member is synchronized to the target system by the next run of the
#FILATRMBR audit.
MIMIX tracks error conditions for three days. Once an error condition is corrected, the
object will no longer be identified as being “changed on target by user” in the
Replicated Objects portlet.
If a recovery that was submitted into replication processes fails, the replication
manager sends an error notification.

Enabling target journal inspection


The shipped default values for journal definitions and data group definitions
automatically allow target journal inspection to occur. Journal definitions that existed
before upgrading to MIMIX version 7.1 do not automatically allow target journal
inspection. These instructions describe how to change an existing configuration to
enable target journal inspection.
Target journal inspection is performed for journals on systems that are the target for
replication. Results are reported through data groups. To get complete results for a
data group about inappropriate access to its replicated objects on the target system,
two journal definitions must be enabled for inspection: one for the user journal on the
target system, and one for the system journal (QAUDJRN) on the target system. The
needs of switchable data groups and data groups that share journals must also be
considered. Each journal definition enabled for target journal inspection can report
results for multiple data groups.
Do the following to enable target journal inspection:
1. Of the systems that can be target systems for replication, determine the systems
on which you want target journal inspection to run when those systems are the
target for replication.
2. On those systems, determine which journals you want to be inspected. It is
recommended that you allow inspection for the system journal (QAUDJRN) and
any user journals on a system.
Inspection does not occur for the journals in the following journal definitions:
JRNMMX, MXCFGJRN, and those that identify the remote journal used in RJ
configurations (whose names typically end with @R, as shown in Figure 18.)
3. From a management system, do the following to change the appropriate journal
definitions:
a. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.


b. From the MIMIX Configuration Menu, type a 3 (Work with journal definitions)
and press Enter.
4. The Work with Journal Definitions display appears. For each journal definition you
need to change, do the following:
a. Type a 2 (Change) next to the journal definition for the system you want and
press Enter. The Change Journal Definition (CHGJRNDFN) display appears.
b. Verify that the value of the Target journal state prompt is *ACTIVE.
c. Specify *YES for the Target journal inspection prompt.
d. Press Enter.
5. The next step to perform depends on your configuration. Do one of the following:
• If you have only data groups of type *OBJ, skip to Step 7.
• If you have data groups of type *ALL or *DB, those data groups which include
the system identified in a user journal definition must be verified and changed if
necessary. Type 13 (Data group definitions) next to the journal definition you
changed and press Enter.
6. The Work with Data Group Definitions display appears with a list of the data
groups that use the selected journal definition. For each of the data groups on the
display do the following:
Note: If the selected journal definition was for a system journal (QAUDJRN), a
target journal of an RJ environment, or is no longer used by a data group,
the list will be blank.
a. Type a 2 (Change) next to the data group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press F9
(All parameters).
c. Check the value specified for Journal on target (JRNTGT). Change the value to
*YES if necessary.
d. Press Enter.
7. To make the changes effective, do one of the following:
• If you changed data group definitions, end and restart the data groups.
• If you changed only journal definitions (you did not perform Step 6), specify the
name of the target system in the following command and press Enter:
STRMMXMGR SYSDFN(name) TGTJRNINSP(*YES)

Determining which data groups use a journal definition


When changing which systems are enabled for target journal inspection, it is useful to
know which data groups use a particular journal definition.
Do the following:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, type a 4 (Work with data group
definitions) and press Enter.
3. The Work with Data Group Definitions display appears. Press F18 (Subset).
4. The Subset DG Definitions display appears. Do the following:
• To display a list of all data groups that use the system journal, specify *ALL
and *OBJ for the Data group type prompt and press Enter.
• To display a list of data groups that use a specific user journal, specify *ALL
and *DB for the Data group type prompt, specify the name of the journal
definition at the Journal definition prompt, and press Enter.
5. The resulting list includes the data groups that use the journal definition on either
its source or target system. The value displayed in the Data Source column
identifies which system is the current source system.
6. To identify whether a user journal is currently being used as a source or a target
journal, type a 5 (Display) next to the data group you want and press Enter.
7. Journal definition names for user journals are often the same name as the data
group. Therefore, to determine with certainty whether the journal definition is
being used as a source or target journal, evaluate whether the value specified for
Data Source resolves to System 1 or System 2 of the data group. Then check the
name specified in the appropriate System journal definition prompt (JRNDFN1 or
JRNDFN2).

Disabling target journal inspection


The shipped default values for new journal definitions and data group definitions
automatically allow target journal inspection to occur. These instructions describe how
to change an existing configuration to disable target journal inspection for a journal on
a system.
Target system performance may be a reason to disable target journal inspection.
Target journal inspection is performed for journals on systems that are the target for
replication. Disabling inspection for a journal definition affects all data groups using
the identified journal as a target journal. For example:
• If multiple data groups of type *ALL or *OBJ have the same target system, those
data groups are using the same QAUDJRN journal definition on the target system.
Disabling inspection in that journal definition affects all of those data groups.
• If multiple data groups of type *ALL or *DB have the same target system, you
must determine if the data groups are using the same journal definition on the
target system. Any data groups using the same target journal are affected when
inspection is disabled in the journal definition.
From a management system, do the following to disable target journal inspection for
one or more journals on a system:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.


2. From the MIMIX Configuration Menu, type a 3 (Work with journal definitions) and
press Enter.
3. The Work with Journal Definitions display appears. Do the following:
a. Press F18 (Subset). The Subset Journal Definitions display appears.
b. At the System prompt, specify the name of the system on which you want to
disable target journal inspection and press Enter.
4. The resulting list includes only the journal definitions for the specified system. For
each journal definition you want to change, do the following:
a. Type 2 (Change) next to the journal definition and press Enter.
b. The Change Journal Definition (CHGJRNDFN) display appears. Specify *NO
for the Target journal inspection prompt and press Enter.
Notes:
• You do not need to change journal definitions for journals that are excluded
from inspection when the system is the target for replication. Inspection does
not occur for the journals identified in the MXCFGJRN journal definition and
journal definitions that identify the remote journal used in RJ configurations
(whose names typically end with @R).
• When you change a QAUDJRN journal definition, all data groups that perform
system journal replication or any form of cooperative processing with a user
journal and are using that system as their target system are affected. When
you change a journal definition for a user journal, any data groups that perform
database replication or any form of cooperative processing and are using that
system as their target system are affected.
Any active journal inspection jobs are ended when the configuration change is
made. Inspection processes with status of *ACTIVE, *INACTIVE, and *NEWDG
will change to a status of not configured (*NOTCFG).
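As a sketch of the journal definition change made in Step 4, and assuming the CHGJRNDFN command accepts the two-part journal definition name (name, system) on a JRNDFN parameter, disabling inspection for a hypothetical journal definition ABC on system HENRY would resemble:

```
CHGJRNDFN JRNDFN(ABC HENRY) TGTJRNINSP(*NO)
```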

CHAPTER 14 Starting, ending, and verifying journaling

This chapter describes procedures for starting and ending journaling. Journaling must
be active on all files, IFS objects, data areas and data queues that you want to
replicate through a user journal. Normally, journaling is started during configuration.
However, there are times when you may need to start or end journaling on items
identified to a data group.
The topics in this chapter include:
• “What objects need to be journaled” on page 343 describes, for supported
configuration scenarios, what types of objects must have journaling started before
replication can occur. It also describes when journaling is started implicitly, as well
as the authority requirements necessary for user profiles that create the objects to
be journaled when they are created.
• “MIMIX commands for starting journaling” on page 345 identifies the MIMIX
commands available for starting journaling and describes the checking performed
by the commands. It also includes information for specifying journaling to the
configured journal.
• “Journaling for physical files” on page 347 includes procedures for displaying
journaling status, starting journaling, ending journaling, and verifying journaling for
physical files identified by data group file entries.
• “Journaling for IFS objects” on page 350 includes procedures for displaying
journaling status, starting journaling, ending journaling, and verifying journaling for
IFS objects replicated cooperatively (advanced journaling). IFS tracking entries
are used in these procedures.
• “Journaling for data areas and data queues” on page 354 includes procedures for
displaying journaling status, starting journaling, ending journaling, and verifying
journaling for data area and data queue objects replicated cooperatively
(advanced journaling). Object tracking entries are used in these procedures.


What objects need to be journaled


A data group can be configured in a variety of ways that involve a user journal in the
replication of files, data areas, data queues and IFS objects. Journaling must be
started for any object to be replicated through a user journal or to be replicated by
cooperative processing between a user journal and the system journal.
Requirements for system journal replication - System journal replication
processes use a special journal, the security audit (QAUDJRN) journal. Events are
logged in this journal to create a security audit trail. When data group object entries,
IFS entries, and DLO entries are configured, each entry specifies an object auditing
value that determines the type of activity on the objects to be logged in the journal.
Object auditing is automatically set for all objects defined to a data group when the
data group is first started, or any time a change is made to the object entries, IFS
entries, or DLO entries for the data group. Because security auditing logs the object
changes in the system journal, no special action is needed.
Requirements for user journal replication - User journal replication processes
require that journaling is started for the objects identified by data group file entries,
object tracking entries, and IFS tracking entries. Starting journaling ensures that
changes to the objects are recorded in the user journal, and are available for MIMIX to
replicate.
During initial configuration, the configuration checklists direct you when to start
journaling for objects identified by data group file entries, IFS tracking entries, and
object tracking entries. The MIMIX commands STRJRNFE, STRJRNIFSE, and
STRJRNOBJE simplify the process of starting journaling. If the journal the objects are
currently to is different than the journal defined in the data group definition, these
MIMIX commands provide a prompt to change journaling to the configured journal.
For more information about these commands, see “MIMIX commands for starting
journaling” on page 345.
Although MIMIX commands for starting journaling are preferred, you can also use
IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling if you have
the appropriate authority for starting journaling.
Requirements for implicit starting of journaling - Journaling can be automatically
started for newly created database files, data areas, data queues, or IFS objects
when certain requirements are met.
The user ID creating the new objects must have the required authority to start
journaling and the following requirements must be met:
• IFS objects - A new IFS object is automatically journaled if the directory in which it
is created is journaled as a result of a request that enabled journaling inheritance
for new objects. Typically, if MIMIX started journaling on the parent directory,
inheritance is enabled. If you manually start journaling on the parent directory
using the IBM command STRJRN, specify INHERIT(*YES) and SUBTREE(*ALL).
This will cause IFS objects created within the journaled directory to inherit
journal options and the journal state of the parent directory and also cause all
objects within the directory's subtree to be journaled.
• Database files created by SQL statements - A new file created by a CREATE

TABLE statement is automatically journaled if the library in which it is created
contains a journal named QSQJRN or if the library is journaled with appropriate
inherit rules.
• New *FILE, *DTAARA, *DTAQ objects - The default value (*DFT) for the Journal at
creation (JRNATCRT) parameter in the data group definition enables MIMIX to
automatically start journaling for physical files, data areas, and data queues when
they are created.
– On systems running IBM i 6.1 or higher releases, MIMIX uses the support
provided by the IBM i command Start Journal Library (STRJRNLIB).
Customers are advised not to re-create the QDFTJRN data area on systems
running IBM i 6.1 or higher.
When configuration requirements are met, MIMIX will start library journaling for
the appropriate libraries as well as enable automatic journaling for the configured
cooperatively processed object types. When journal at creation configuration
requirements are met, all new objects of that type are journaled, not just those
which are eligible for replication.
When the data group is started, MIMIX evaluates all data group object entries for
each object type. (Entries for *FILE objects are only evaluated when the data
group specifies COOPJRN(*USRJRN).) Entries properly configured to allow
cooperative processing of the object type determine whether MIMIX will enforce
library journaling. MIMIX uses the data group entry with the most specific match to
the object type and library that also specifies *ALL for its System 1 object (OBJ1)
and Attribute (OBJATR).
Note: MIMIX prevents library journaling from starting in the following libraries:
QSYS*, QRECOVERY, QRCY*, QUSR*, QSPL*, QRPL*, QRCL*, QGPL,
QTEMP, and SYSIB*.
For example, if MIMIX finds only the following data group object entries for library
MYLIB, it would use the first entry when determining whether to enforce library
journaling because it is the most specific entry that also meets the OBJ1(*ALL)
and OBJATR(*ALL) requirements. The second entry is not considered in the
determination because its OBJ1 and OBJATR values do not meet these
requirements.
LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*FILE) OBJATR(*ALL) COOPDB(*YES)
PRCTYPE(*INCLD)
LIB1(MYLIB) OBJ1(MYAPP) OBJTYPE(*FILE) OBJATR(DSPF) COOPDB(*YES)
PRCTYPE(*INCLD)
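The manual start of journaling on a parent directory described in the IFS objects bullet above can be sketched with the IBM i STRJRN command; the directory, library, and journal names used here are hypothetical:

```
STRJRN OBJ(('/myapp/data')) JRN('/QSYS.LIB/MYLIB.LIB/MYJRN.JRN') SUBTREE(*ALL) INHERIT(*YES)
```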

Authority requirements for starting journaling


Normal MIMIX processes run under the MIMIXOWN user profile, which ships with
*ALLOBJ special authority. Therefore, it is not necessary for other users to account
for journaling authority requirements when using MIMIX commands (STRJRNFE,
STRJRNIFSE, STRJRNOBJE) to start journaling.
When the MIMIX journal managers are started, or when the Build Journaling
Environment (BLDJRNENV) command is used, MIMIX checks the public authority
(*PUBLIC) for the journal. If necessary, MIMIX changes public authority so the user ID
in use has the appropriate authority to start journaling.


Authority requirements must be met to enable the automatic journaling of newly
created objects and if you use IBM commands to start journaling instead of MIMIX
commands.
• If you create database files, data areas, or data queues for which you expect
automatic journaling at creation, the user ID creating these objects must have the
required authority to start journaling.
• If you use the IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start
journaling, the user ID that performs the start journaling request must have the
appropriate authority requirements.
For journaling to be successfully started on an object, one of the following authority
requirements must be satisfied:
• The user profile of the user attempting to start journaling for an object must have
*ALLOBJ special authority.
• The user profile of the user attempting to start journaling for an object must have
explicit *ALL object authority for the journal to which the object is to be journaled.
• Public authority (*PUBLIC) must have *OBJALTER, *OBJMGT, and *OBJOPR
object authorities for the journal to which the object is to be journaled.
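For example, the public authorities in the last bullet could be granted with the IBM i GRTOBJAUT command; the library and journal names used here are hypothetical:

```
GRTOBJAUT OBJ(MYLIB/MYJRN) OBJTYPE(*JRN) USER(*PUBLIC) AUT(*OBJALTER *OBJMGT *OBJOPR)
```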

MIMIX commands for starting journaling


Before you use any of the MIMIX commands for starting journaling, the data group file
entries, IFS tracking entries, or object tracking entries associated with the command’s
object class must be loaded.
The MIMIX commands for starting journaling are:
• Start Journaling File Entries (STRJRNFE) - This command starts journaling for
files identified by data group file entries.
• Start Journaling IFS Entries (STRJRNIFSE) - This command starts journaling of
IFS objects configured for user journal replication. Data group IFS entries must be
configured and IFS tracking entries must be loaded (LODDGIFSTE command) before
running the STRJRNIFSE command to start journaling.
• Start Journaling Obj Entries (STRJRNOBJE) - This command starts journaling of
data area and data queue objects configured for user journal replication. Data
group object entries must be configured and object tracking entries must be loaded
(LODDGOBJTE command) before running the STRJRNOBJE command to start
journaling.
If you attempt to start journaling for files or objects that are already journaled, MIMIX
checks that the physical file, IFS object, data area, or data queue is journaled to the
journal configured in the data group definition. If the file or object is journaled to the
configured journal, the journaling status of the data group file entry, IFS tracking or
object tracking entry is changed to *YES. If the file or object is journaled using a
different journal than the configured journal, the journaling status is changed to
*DIFFJRN. If the attempt to start journaling fails for any other reason, the journaling
status is changed to *NO.

Forcing objects to use the configured journal
Journaled objects must use the journal defined in the data group definition in order for
replication to occur. Objects that are journaled to a journal that is different than the
journal defined in the data group definition can result in data integrity issues. MIMIX
identifies these objects with a journaling status of *DIFFJRN. A journal status of
*DIFFJRN should be investigated to determine the reason for using the different
journal. See “Resolving a problem for a journal status of *DIFFJRN” on page 139.
The MIMIX commands STRJRNFE, STRJRNIFSE, and STRJRNOBJE provide a Force to configured journal (FORCE) prompt, which determines whether to end journaling for selected objects that are currently journaled to a journal other than the configured journal (*DIFFJRN) and then start journaling to the configured journal.
To force journaled objects to use the journal configured in the data group definition,
specify *YES for the FORCE prompt in the MIMIX commands for starting journaling.
FORCE(*NO) is the command default for the STRJRNFE, STRJRNIFSE and
STRJRNOBJE commands when run from the native interface. See “MIMIX
commands for starting journaling” on page 345.
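For example, a command like the following sketch, entered from a command line, would force journaling to the configured journal for the files defined to a data group. The data group name (APPDG SYSA SYSB) is a placeholder; prompt the command with F4 to review the remaining parameters and their defaults:

```
STRJRNFE DGDFN(APPDG SYSA SYSB) FORCE(*YES)
```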


Journaling for physical files


Data group file entries identify physical and logical files to be replicated. When data
group file entries are added to a configuration, they may have an initial status of
*ACTIVE and a journaling status of *NO even though the file may be journaled. For
the journaling status to be updated and accurately reflect journaling of the file, you
must verify journaling. In order for replication to occur, journaling must be started for
the files on the source system using the journal defined in the data group definition.
This topic includes procedures to display journaling status, and to start, end, or verify
journaling for physical files.

Displaying journaling status for physical files


Use this procedure to display journaling status for physical files identified by data
group file entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the
Work with Data Groups display.
2. On the Work with Data Groups display, type 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. The initial view shows the current
and requested status of the data group file entry. Press F10 (Journaled view).
At the right side of the display, the Journaled System 1 and System 2 columns
indicate whether the physical file associated with the file entry is journaled to the
configured journal (*YES), is not journaled (*NO), is journaled to a journal that is
different than the configured journal (*DIFFJRN), or whether journaling is not
permitted or not required for replication (*NA). When the column displays the
value *DIFFJRN, further investigation is required to determine the reason for
journaling to a different journal than the configured journal. See “Resolving a
problem for a journal status of *DIFFJRN” on page 139.
Note: Logical files will have a status of *NA. Data group file entries exist for
logical files only in data groups configured for MIMIX Dynamic Apply.

Starting journaling for physical files


Use this procedure to start journaling for physical files identified by data group file
entries. In order for replication to occur, journaling must be started for the file on the
source system using the journal configured in the data group definition.
This procedure invokes the Start Journaling File Entries (STRJRNFE) command. The
command can also be entered from a command line.
Do the following:
1. Access the journaled view of the Work with DG File Entries display as described
in “Displaying journaling status for physical files” on page 347.
2. From the Work with DG File Entries display, type a 9 (Start journaling) next to the
file entries you want. Then do one of the following:

• To start journaling using the command defaults, press Enter.
• To modify command defaults, press F4 (Prompt) then continue with the next
step.
3. The Start Journaling File Entries (STRJRNFE) display appears. The Data group definition and System 1 file prompts identify your selection.
4. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is started on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will start journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. Optional: If the file is journaled to a journal that is different than the configured journal and you have determined that it is acceptable to force journaling to the configured
journal, press F10 (Additional parameters) to display the Force to configured
journal (FORCE) prompt.
Change the FORCE prompt to *YES to end journaling to a journal that is different
than the configured journal and then start journaling using the configured journal.
This value will also attempt to start journaling for objects not currently journaled.
Journaling will not be ended for objects already journaled to the configured
journal. For more information, see “Forcing objects to use the configured journal”
on page 346.
7. To start journaling for the physical file associated with the selected data group,
press Enter.
The system returns a message to confirm the operation was successful.

Ending journaling for physical files


Use this procedure to end journaling for files defined to a data group. Once journaling
for a file is ended, any changes to that file are not captured and are not replicated.
You may need to end journaling if a file no longer needs to be replicated or to correct
an error.
This procedure invokes the End Journaling File Entries (ENDJRNFE) command. The
command can also be entered from a command line.
To end journaling, do the following:
1. Access the journaled view of the Work with DG File Entries display as described
in “Displaying journaling status for physical files” on page 347.
2. From the Work with DG File Entries display, type a 10 (End journaling) next to the
file entry you want and do one of the following:
• To end journaling using command defaults, press Enter. Journaling is ended.
• To modify additional prompts for the command, press F4 (Prompt) and
continue with the next step.


3. The End Journaling File Entries (ENDJRNFE) display appears. If you want to end
journaling for all files in the library, specify *ALL for the System 1 file prompt.
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is ended on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will end journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. To end journaling, press Enter.
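Entered from a command line, the request might look like the following sketch. The data group name is a placeholder, and the BATCH keyword is assumed here to correspond to the Submit to batch prompt; prompt the command with F4 to confirm the parameter names on your system:

```
ENDJRNFE DGDFN(APPDG SYSA SYSB) BATCH(*YES)
```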

Verifying journaling for physical files


Use this procedure to verify that files defined by a data group file entry are journaled
to the journal defined in the data group definition. When these conditions are met, the
journal status on the Work with DG File Entries display is set to *YES.
This procedure invokes the Verify Journaling File Entry (VFYJRNFE) command. The
command can also be entered from a command line.
To verify journaling for a physical file, do the following:
1. Access the journaled view of the Work with DG File Entries display as described
in “Displaying journaling status for physical files” on page 347.
2. From the Work with DG File Entries display, type 11 (Verify journaling) next to the file entry you want and do one of the following:
• To verify journaling using command defaults, press Enter.
• To modify additional parameters for the command, press F4 (Prompt) and
continue with the next step.
3. The Verify Journaling File Entry (VFYJRNFE) display appears. The Data group
definition prompt and the System 1 file prompt identify your selection.
4. Specify the value you want for the Verify journaling on system prompt. When
*DGDFN is specified, MIMIX considers whether the data group is configured for
journaling on the target system (JRNTGT) when determining where to verify
journaling.
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. To verify journaling, press Enter.
7. Press F5 (Refresh) to update and view the current journaling status.
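As a command-line alternative, a request similar to this sketch verifies journaling for the file entries of a data group. The data group name is a placeholder and the remaining parameters are left at their defaults:

```
VFYJRNFE DGDFN(APPDG SYSA SYSB)
```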

Journaling for IFS objects
IFS tracking entries are loaded for a data group after its IFS objects are configured for replication through the user journal and the data group has been started. However, loading IFS tracking entries does not automatically start journaling on the IFS objects they identify. In order for replication to occur, journaling must be started on the source system for the IFS objects identified by IFS tracking entries, using the journal defined in the data group definition.
This topic includes procedures to display journaling status, and to start, end, or verify
journaling for IFS objects identified for replication through the user journal.
You should be aware of the information in “Long IFS path names” on page 117.

Displaying journaling status for IFS objects


Use this procedure to display journaling status for IFS objects identified by IFS
tracking entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the
Work with Data Groups display.
2. On the Work with Data Groups display, type 50 (IFS trk entries) next to the data
group you want and press Enter.
3. The Work with DG IFS Trk. Entries display appears. The initial view shows the
object type and status at the right of the display. Press F10 (Journaled view).
At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the IFS object identified by the tracking entry is journaled to the configured journal (*YES), is not journaled (*NO), or is journaled to a journal that is different than the configured journal (*DIFFJRN). When the column displays the value *DIFFJRN, further investigation is required to determine the reason for journaling to a different journal than the configured journal. See “Resolving a problem for a journal status of *DIFFJRN” on page 139.

Starting journaling for IFS objects


Use this procedure to start journaling for IFS objects identified by IFS tracking entries.
This procedure invokes the Start Journaling IFS Entries (STRJRNIFSE) command.
The command can also be entered from a command line.
To start journaling for IFS objects, do the following:
1. If you have not already done so, load the IFS tracking entries for the data group.
Use the procedure in “Loading IFS tracking entries” on page 286.
2. Access the journaled view of the Work with DG IFS Trk. Entries display as
described in “Displaying journaling status for IFS objects” on page 350.


3. From the Work with DG IFS Trk. Entries display, type a 9 (Start journaling) next to
the IFS tracking entries you want. Then do one of the following:
• To start journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
4. The Start Journaling IFS Entries (STRJRNIFSE) display appears. The Data group
definition and IFS objects prompts identify the IFS object associated with the
tracking entry you selected. You cannot change the values shown for the IFS
objects prompts (see note 1).
5. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is started on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will start journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
6. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
7. Optional: If the IFS object is journaled to a journal that is different than the configured journal and you have determined that it is acceptable to force journaling to the configured
journal, press F10 (Additional parameters) to display the Force to configured
journal (FORCE) prompt.
Change the FORCE prompt to *YES to end journaling to a journal that is different
than the configured journal and then start journaling using the configured journal.
This value will also attempt to start journaling for objects not currently journaled.
Journaling will not be ended for objects already journaled to the configured
journal. For more information, see “Forcing objects to use the configured journal”
on page 346.
8. The System 1 file identifier and System 2 file identifier prompts identify the file
identifier (FID) of the IFS object on each system. You cannot change the values (see note 2).
9. To start journaling on the IFS objects specified, press Enter.
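When run from a command line, the command might look like this sketch, which forces the selected IFS objects to the configured journal. The data group name is a placeholder; use F4 to prompt for the IFS objects prompts and the remaining parameters:

```
STRJRNIFSE DGDFN(APPDG SYSA SYSB) FORCE(*YES)
```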

Ending journaling for IFS objects


Use this procedure to end journaling for IFS objects identified by IFS tracking entries.
This procedure invokes the End Journaling IFS Entries (ENDJRNIFSE) command.
The command can also be entered from a command line.

Notes:
1. When the command is invoked from a command line, you can change values specified for the IFS objects prompts. Also, you can specify as many as 300 object selectors by using the + for more values prompt.
2. When the command is invoked from a command line, use F10 to see the FID prompts. Then you can optionally specify the unique FID for the IFS object on either system. The FID values can be used alone or in combination with the IFS object path name.

To end journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as
described in “Displaying journaling status for IFS objects” on page 350.
2. From the Work with DG IFS Trk. Entries display, type a 10 (End journaling) next to
the IFS tracking entries you want. Then do one of the following:
• To end journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The End Journaling IFS Entries (ENDJRNIFSE) display appears. The Data group
definition and IFS objects prompts identify the IFS object associated with the
tracking entry you selected. You cannot change the values shown for the IFS
objects prompts (see note 1).
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is ended on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will end journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown (see note 2).
7. To end journaling on the IFS objects specified, press Enter.

Verifying journaling for IFS objects


Use this procedure to verify if an IFS object identified by an IFS tracking entry is
journaled correctly. This procedure invokes the Verify Journaling IFS Entries
(VFYJRNIFSE) command to determine whether the IFS object is journaled, whether it
is journaled to the journal defined in the data group definition, and whether it is
journaled with the attributes defined in the data group definition. The command can
also be entered from a command line.
To verify journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as
described in “Displaying journaling status for IFS objects” on page 350.
2. From the Work with DG IFS Trk. Entries display, type 11 (Verify journaling) next to the IFS tracking entries you want. Then do one of the following:
• To verify journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The Verify Journaling IFS Entries (VFYJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with
the tracking entry you selected. You cannot change the values shown for the IFS
objects prompts (see note 1).
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN is specified, MIMIX considers whether the data group is
configured for journaling on the target system (JRNTGT) and verifies journaling on
the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown (see note 2).
7. To verify journaling on the IFS objects specified, press Enter.
For more information, see “Using file identifiers (FIDs) for IFS objects” on page 317.
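From a command line, the verification could be requested with a sketch like the following. The data group name is a placeholder and the BATCH keyword for the Submit to batch prompt is an assumption; prompt with F4 to confirm:

```
VFYJRNIFSE DGDFN(APPDG SYSA SYSB) BATCH(*YES)
```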

Journaling for data areas and data queues
Object tracking entries are loaded for a data group after its data areas and data queues are configured for replication through the user journal and the data group has been started. However, loading object tracking entries does not automatically start journaling on the objects they identify. In order for replication to occur, journaling must be started on the source system for the objects identified by tracking entries, using the journal defined in the data group definition.
This topic includes procedures to display journaling status, and to start, end, or verify
journaling for data areas and data queues identified for replication through the user
journal.

Displaying journaling status for data areas and data queues


Use this procedure to display journaling status for data areas and data queues identified by object tracking entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the
Work with Data Groups display.
2. On the Work with Data Groups display, type 52 (Obj trk entries) next to the data
group you want and press Enter.
3. The Work with DG Obj. Trk. Entries display appears. The initial view shows the
object type and status at the right of the display. Press F10 (Journaled view).
At the right side of the display, the Journaled System 1 and System 2 columns
indicate whether the objects associated with the tracking entry are journaled to the
configured journal (*YES), are not journaled (*NO), or are journaled to a journal
that is different than the configured journal (*DIFFJRN). When the column
displays the value *DIFFJRN, further investigation is required to determine the
reason for journaling to a different journal than the configured journal. See
“Resolving a problem for a journal status of *DIFFJRN” on page 139.

Starting journaling for data areas and data queues


Use this procedure to start journaling for data areas and data queues identified by
object tracking entries.
This procedure invokes the Start Journaling Obj Entries (STRJRNOBJE) command.
The command can also be entered from a command line.
To start journaling for data areas and data queues, do the following:
1. If you have not already done so, load the object tracking entries for the data
group. Use the procedure in “Loading object tracking entries” on page 287.
2. Access the journaled view of the Work with DG Obj. Trk. Entries display as
described in “Displaying journaling status for data areas and data queues” on
page 354.
3. From the Work with DG Obj. Trk. Entries display, type a 9 (Start journaling) next to
the object tracking entries you want. Then do one of the following:


• To start journaling using the command defaults, press Enter.


• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
4. The Start Journaling Obj Entries (STRJRNOBJE) display appears. The Data
group definition and Objects prompts identify the object associated with the
tracking entry you selected. Although you can change the values shown for these
prompts, it is not recommended unless the command was invoked from a
command line.
5. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is started on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will start journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
6. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
7. Optional: If the object is journaled to a journal that is different than the configured journal and you have determined that it is acceptable to force journaling to the configured
journal, press F10 (Additional parameters) to display the Force to configured
journal (FORCE) prompt.
Change the FORCE prompt to *YES to end journaling to a journal that is different
than the configured journal and then start journaling using the configured journal.
This value will also attempt to start journaling for objects not currently journaled.
Journaling will not be ended for objects already journaled to the configured
journal. For more information, see “Forcing objects to use the configured journal”
on page 346.
8. To start journaling on the objects specified, press Enter.
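A command-line sketch of this request, using a placeholder data group name and forcing journaling to the configured journal (prompt with F4 to review the Objects and remaining parameters):

```
STRJRNOBJE DGDFN(APPDG SYSA SYSB) FORCE(*YES)
```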

Ending journaling for data areas and data queues


Use this procedure to end journaling for data areas and data queues identified by
object tracking entries.
This procedure invokes the End Journaling Obj Entries (ENDJRNOBJE) command.
The command can also be entered from a command line.
To end journaling for data areas and data queues, do the following:
1. Access the journaled view of the Work with DG Obj. Trk. Entries display as
described in “Displaying journaling status for data areas and data queues” on
page 354.
2. From the Work with DG Obj. Trk. Entries display, type a 10 (End journaling) next
to the object tracking entries you want. Then do one of the following:
• To end journaling using the command defaults, press Enter.

• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The End Journaling Obj Entries (ENDJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking
entry you selected. Although you can change the values shown for these prompts,
it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is ended on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will end journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. To end journaling on the objects specified, press Enter.

Verifying journaling for data areas and data queues


Use this procedure to verify if an object identified by an object tracking entry is
journaled correctly. This procedure invokes the Verify Journaling Obj Entries
(VFYJRNOBJE) command to determine whether the object is journaled, whether it is
journaled to the journal defined in the data group definition, and whether it is journaled
with the attributes defined in the data group definition. The command can also be
entered from a command line.
To verify journaling for objects, do the following:
1. Access the journaled view of the Work with DG Obj. Trk. Entries display as
described in “Displaying journaling status for data areas and data queues” on
page 354.
2. From the Work with DG Obj. Trk. Entries display, type 11 (Verify journaling) next to the object tracking entries you want. Then do one of the following:
• To verify journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The Verify Journaling Obj Entries (VFYJRNOBJE) display appears. The Data
group definition and Objects prompts identify the object associated with the
tracking entry you selected. Although you can change the values shown for these
prompts, it is not recommended unless the command was invoked from a
command line.
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN is specified, MIMIX considers whether the data group is
configured for journaling on the target system (JRNTGT) and verifies journaling on
the appropriate systems as required.


5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. To verify journaling on the objects specified, press Enter.
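The equivalent command-line sketch, with a placeholder data group name and default values for the remaining parameters:

```
VFYJRNOBJE DGDFN(APPDG SYSA SYSB)
```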

CHAPTER 15
Configuring for improved performance

This chapter describes how to modify your configuration to use advanced techniques
to improve journal performance and MIMIX performance.
Journal performance: The following topics describe how to improve journal
performance:
• “Minimized journal entry data” on page 359 describes benefits of and restrictions
for using minimized user journal entries for *FILE and *DTAARA objects. A
discussion of large object (LOB) data in minimized entries and configuration
information are included.
• “Configuring database apply caching” on page 361 describes benefits of and how
to configure MIMIX functionality for database apply caching.
• “Configuring for high availability journal performance enhancements” on page 362
describes journal caching and journal standby state within MIMIX to support IBM’s
High Availability Journal Performance IBM i option 42, Journal Standby feature
and Journal caching. Requirements and restrictions are included.
MIMIX performance: The following topics describe how to improve MIMIX
performance:
• “Immediately applying committed transactions” on page 367 describes the
benefits and limitations of both immediate and delayed commit modes for the
database apply process.
• “Optimizing access path maintenance” on page 370 describes methods available
and how each can be used to improve performance for database apply
processes.
• “Caching extended attributes of *FILE objects” on page 369 describes how to
change the maximum size of the cache used to store extended attributes of *FILE
objects replicated from the system journal.
• “Increasing data returned in journal entry blocks by delaying RCVJRNE calls” on
page 377 describes how you can improve object send performance by changing
the size of the block of data from a receive journal entry (RCVJRNE) call and
delaying the next call based on a percentage of the requested block size.
• “Configuring high volume objects for better performance” on page 380 describes
how to change your configuration to improve system journal performance.
• “Improving performance of the #MBRRCDCNT audit” on page 381 describes how
to use the CMPRCDCNT commit threshold policy to limit comparisons and
thereby improve performance of this audit in environments which use commitment
control.


Minimized journal entry data


MIMIX supports the ability to process minimized journal entries placed in a user
journal for object types of file (*FILE) and data area (*DTAARA).
The IBM i operating system provides the ability to create journal entries using an internal format that
minimizes the data specific to these object types that are stored in the journal entry.
This support is enabled in the MIMIX create or change journal definitions commands
and built using the Build Journal Environment (BLDJRNENV) command.
When a journal entry for one of these object types is generated, the system compares
the size of the minimized format to the standard format and places whichever is
smaller in the journal. For database files, only update journal entries (R-UP and R-
UB) and rollback-type update entries (R-BR and R-UR) can be minimized.
If MINENTDTA(*FILE) or MINENTDTA(*FLDBDY) is in effect and a database record
includes LOB fields, LOB data is journaled only when that LOB is changed. Changes
to other fields in the record will not cause the LOB data to be journaled unless the
LOB is also changed. When database files have records with static LOB values,
minimized journal entries can produce considerable savings.
The benefit of using minimized journal entries is that less data is stored in the journal.
In a MIMIX replication environment, you also benefit by having less data sent over
communications lines and saved in MIMIX log spaces. Factors in your environment
such as the percentage of journal entries that are updates (R-UP), the size of
database records, the number of bytes typically changed in an update, may influence
how much benefit you achieve.

Restrictions of minimized journal entry data


The following MIMIX and operating system restrictions apply:
• If you plan to use keyed replication, do not use minimized journal entry data.
Minimized journal entries cannot be used when MIMIX support for keyed
replication is in use, since the key may not be present in a minimized journal entry.
• Minimized before-images cannot be selected for automatic before-image
synchronization checking.
Your environment may impose additional restrictions:
• If you rely on full image captures in the receiver as part of your auditing rules, do
not configure for minimized entry data.
• Even if you do not rely on full image captures for auditing purposes, consider the
effect of how data is minimized. The minimizing that results from specifying *FILE does not occur on field boundaries. Therefore, the entry-specific data may not be
viewable and may not be used for auditing purposes. When *FLDBDY is
specified, file data for modified fields is minimized on field boundaries. With
*FLDBDY, entry-specific data is viewable and may be used for auditing purposes.
• Configuring for minimized journal entry data may affect your ability to use the
Work with Data Group File Entries on Hold (WRKDGFEHLD) command. For
example, using option 2 (Change) on WRKDGFEHLD to convert a minimized
record update (RUP) to a record put (RPT) will result in failure when applied, because an RPT requires the presence of a full, non-minimized record.
See the IBM book, Backup and Recovery, for restrictions and usage of journal entries with minimized entry-specific data.

Configuring for minimized journal entry data


By default, MIMIX user journal replication processes use complete journal entry data.
To enable MIMIX to use minimized journal entry data for specific object types, do the
following:
1. From the Work with Journal Definitions display, use option 2 (Change) to access
the journal definition you want.
2. On the following display, press Enter twice to see all prompts for the display. Page
down to the bottom of the display.
3. Press F10 (Additional parameters) to access the Minimize entry specific data
prompt.
4. Specify the values you want at the Minimize entry specific data prompt and press
Enter.
5. In order for the changes to be effective, you must build the journaling environment
using the updated journal definition. To do this, type 14 (Build) next to the
definition you just modified on the Work with Journal Definitions display and press
Enter.
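For example, the equivalent change could be made directly from the command line. The journal definition name and system name below are placeholders, and the MINENTDTA keyword is an assumption borrowed from IBM's CHGJRN command; prompting with F4 shows the actual parameter names:

    CHGJRNDFN JRNDFN(MYJRNDFN SYSA) MINENTDTA(*FLDBDY)
    BLDJRNENV JRNDFN(MYJRNDFN SYSA)

The second command rebuilds the journaling environment so the change takes effect, as described in step 5.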


Configuring database apply caching


Customers who need faster performance from database apply processes can take
advantage of the functionality made available through the DB apply cache
(DBAPYCACHE) policy on the Set MIMIX Policies (SETMMXPCY) command. The
DBAPYCACHE policy results in significant, general improvement in database apply
performance. This functionality is ideal for customers who have allocated a highly
active file to its own apply session and need more performance but do not want to
purchase the Journal Caching feature from IBM (High Availability Journal
Performance IBM i option 42, Journal Standby feature and Journal caching).
Notes:
• Database apply caching within MIMIX cannot be used in conjunction with IBM
option 42. For more information about MIMIX support for IBM option 42, see
“Configuring for high availability journal performance enhancements” on
page 362.
• Environments configured for cascading should not use database apply caching on
the intermediate system. For more information about cascading, see “Configuring
for cascading distributions” on page 395.
When the DBAPYCACHE policy is enabled, before and after journal images are sent
to the local journal on the target system. This will increase the amount of storage
needed for journal receivers on the target system if before images were not previously
being sent to the journal.
As of MIMIX 8.0, the DBAPYCACHE policy is shipped so that it is enabled at the
installation level. Upgrading an existing MIMIX 7.1 installation to MIMIX 8.0 or later
does not change DBAPYCACHE. If the installation you are upgrading did not use the
DBAPYCACHE policy, follow the instructions below to enable it.
To enable the DBAPYCACHE policy, do the following from the management system:
1. From the command line type SETMMXPCY and press F4 (Prompt).
2. For the Data group definition, do one of the following:
• To set the policy for the installation, verify that the value specified for Data
group definition is *INST.
• To set the policy for a specific data group, specify the full three-part name.
3. Press Enter. You will see all the policies and their current values for the level you
specified in Step 2.
4. Use the Page Down key to locate the DB apply cache policy. Specify *ENABLED.
5. To accept the changes, press Enter.
Changes to this policy are not effective until the database apply processes for the
affected data groups have been ended and restarted.
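For example, assuming the policy keyword on SETMMXPCY matches the DBAPYCACHE policy name (an assumption; prompting with F4 confirms the keyword), enabling the policy for the whole installation might look like:

    SETMMXPCY DGDFN(*INST) DBAPYCACHE(*ENABLED)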

Configuring for high availability journal performance
enhancements
MIMIX supports IBM’s High Availability Journal Performance IBM i option 42, Journal
Standby feature and Journal caching. These high availability performance
enhancements improve replication performance on the target system and provide
significant performance improvement by eliminating the need to start journaling at
switch time.
MIMIX support of IBM’s high availability performance enhancements consists of two
independent components: journal standby state and journal caching. These
components work individually or together, but are enabled separately.
Journal standby state minimizes replication impact on the target system by providing
the benefits of an active journal without writing the journal entries to disk. This is
particularly helpful in saving disk space in environments that do not rely on journal
entries for other purposes.
Journal caching enables the system to cache journal entries and their corresponding
database records into main storage and write to disks only as necessary. Journal
caching is particularly helpful during batch operations when large numbers of add,
update, and delete operations against journaled objects are performed.
Journal standby state and journal caching can be used in source send configuration
environments as well as in environments where remote journaling is enabled. For
restrictions of MIMIX support of IBM’s high availability performance enhancements,
see “Restrictions of high availability journal performance enhancements” on
page 364.
Note: For more information, also see the topics on journal management and system
performance in the IBM eServer iSeries Information Center.

Journal standby state


Journal standby state minimizes replication impact by providing the benefits of an
active journal without writing the journal entries to disk. As such, journal standby state
is particularly helpful in saving disk space in environments that do not rely on journal
entries for other purposes. Moreover, if you are journaling on apply, journal standby
state can provide a performance improvement on the apply session.
Generally, using journal standby state increases switch times. However, if your data
group is configured to not journal on target (JRNTGT(*NO)) and you switch, changing
the data group to journal on target (JRNTGT(*YES)) along with using journal standby
state will improve switch times.
You can start or stop journaling while the journal standby state is enabled. However,
commitment control cannot be used for files that are journaled to any journal in
standby state. Most referential constraints cannot be used when the journal is in
standby state. When journal standby state is not an option because of these
restrictions, journal caching can be used as an alternative. See “Journal caching” on
page 363.


Minimizing potential performance impacts of standby state


It is possible to experience degraded performance of database apply (DBAPY)
processing after enabling journal standby state. You can reduce potential impacts by
using the Change Recovery for Access Paths (CHGRCYAP) command, which allows
you to change the target access path recovery time for the system.
Note: While this procedure improves performance, it can cause potentially longer
initial program loads (IPLs). Deciding to use standby state is a trade-off
between run-time performance and IPL duration.
Do the following:
1. On a command line, type the following and press Enter:
CHGRCYAP
2. At the Include access paths prompt, specify *ELIGIBLE to include only eligible
access paths in the recovery time specification.
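The same change might be entered as a single command, assuming the Include access paths prompt corresponds to the INCACCPTH keyword (an assumption; prompting with F4 shows the actual keyword and any other parameters you may need to set):

    CHGRCYAP INCACCPTH(*ELIGIBLE)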

Journal caching
Journal caching can be used in replication environments as well as by journals used
internally by MIMIX. Journal caching is an attribute of the journal that is defined in the
journal definition. When journal caching is enabled, the system caches journal entries
and their corresponding database records into main storage. This means that neither
the journal entries nor their corresponding database records are written to disk until
an efficient disk write can be scheduled. This usually occurs when the buffer is full or
at the first commit, close, or force end of data. Because most database transactions
must no longer wait for a synchronous write of the journal entries to disk, the
performance gain can be significant.
For example, batch operations must usually wait for each new journal entry to be
written to disk. Journal caching can be helpful during batch operations when large
numbers of add, update, and delete operations against journaled objects are
performed.
For more information about journal caching, see the IBM Redbooks Technote
“Journal Caching: Understanding the Risk of Data Loss”.

MIMIX processing of high availability journal performance enhancements


You can enable both journal standby state and journal caching using a combination of
MIMIX and IBM commands. For example, the Journal state (JRNSTATE) parameter,
available on the IBM command Change Journal (CHGJRN), offers equivalent and
complementary function to the MIMIX parameter Target journal state (TGTSTATE).
Note: For purposes of this document, only MIMIX parameters are described in detail.
In a MIMIX environment, two parameters are used to enable journal standby state or
journal caching: Target journal state (TGTSTATE) and Journal caching (JRNCACHE).
When creating a new journal definition, these parameters can be specified on either
the Create Journal Definition (CRTJRNDFN) command or the Change Journal
Definition (CHGJRNDFN) command. See “Creating a journal definition” on page 218,
“Configuring journal standby state” on page 365, and “Configuring journal caching” on
page 365.
When journaling is used on the target system, the TGTSTATE parameter specifies the
requested status of the target journal. Valid values for the TGTSTATE parameter are
*ACTIVE and *STANDBY. When *ACTIVE is specified and the data group associated
with the journal definition is journaled on the target system (JRNTGT(*YES)), the
target journal state is set to active when the data group is started. When *STANDBY is
specified, objects are journaled on the target system, but most journal entries are
prevented from being deposited into the target journal.
The JRNCACHE parameter specifies whether the system should cache journal
entries in main storage before writing them to disk. Valid values for the JRNCACHE
parameter are *TGT, *BOTH, *NONE, or *SRC. The default value is *NONE, which
prevents unintentional use of this chargeable feature.

Requirements of high availability journal performance enhancements


Feature 5117, i5/OS Option 42 - HA Journal Performance, is required in order to use
MIMIX support of IBM’s high availability performance enhancements. Each system in
the replication environment must have this software installed and be up to date with
the latest PTFs and service packs applied.

Restrictions of high availability journal performance enhancements


MIMIX support of IBM’s high availability performance enhancements has a unique set
of restrictions and high availability considerations. Make sure that you are aware of
these restrictions before using journal standby state or journal caching in your MIMIX
environment.
When using journal standby state or journal caching, be aware of the following
restrictions documented by IBM:
• Do not use these high availability performance enhancements in conjunction with
commitment control. For journals in standby mode, commitment control entries
are not sent to or deposited in the journal.
Note: MIMIX does not use commitment control on the target system. As such,
MIMIX support of IBM’s high availability performance enhancements can
be configured on the target system even if commitment control is being
used on the source system.
• Do not use these high availability performance enhancements in conjunction with
referential constraints, with the exception of referential constraint types of
*RESTRICT.
Also be aware of the following additional restrictions:
• Do not change journal standby state or journal caching on IBM-supplied journals.
These journal names begin with “Q” and reside in libraries whose names also
begin with “Q” (not QGPL). Attempting to change these journals results in an error
message.
• Do not place a remote journal in journal standby state. Journal caching is also not
allowed on remote journals.


• Do not use MIMIX support of IBM’s high availability performance enhancements in
a cascading environment.

Configuring journal standby state


Journal standby state is only available if the separate, chargeable feature from IBM
(Option 42) is installed on the system. If you have this feature, you can enable MIMIX
to use journal standby state. To enable journal standby state in an existing
environment, do the following:
1. From the Work with Journal Definitions display, use option 2 (Change) to access
the journal definition you want.
2. From the Change Journal Definition display, specify one of the following values for
the TGTSTATE prompt and press Enter:
• Specify *ACTIVE when the data group associated with the journal definition
allows journaling on the target system (*YES for Journal on target (JRNTGT)
prompt).
• Specify *STANDBY to prevent most journal entries from being entered into the
target journal. Objects are journaled on the target system, but most journal
entries are not deposited.
3. In order for the changes to be effective, you must build the journaling environment
using the updated journal definition. Do the following:
a. Type 14 (Build) next to the definition you just modified on the Work with
Journal Definitions display and press F4 (Prompt).
b. The Build Journaling Environment (BLDJRNENV) panel is displayed. Specify
*JRNDFN for the Source for values (JRNVAL) parameter and press Enter.
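Steps 2 and 3 might look like the following from the command line; the TGTSTATE and JRNVAL keywords are those documented above, while the journal definition and system names are placeholders:

    CHGJRNDFN JRNDFN(MYJRNDFN SYSA) TGTSTATE(*STANDBY)
    BLDJRNENV JRNDFN(MYJRNDFN SYSA) JRNVAL(*JRNDFN)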

Configuring journal caching


Journal caching is only available if the separate, chargeable feature from IBM (Option
42) is available on the system. If you have this feature, you can enable the system to
cache journal entries in main storage before writing them to disk. To enable journal
caching in an existing environment, do the following:
1. From the Work with Journal Definitions display, use option 2 (Change) to access
the journal definition you want.
2. Specify one of the following values for the JRNCACHE prompt and press Enter:
• Specify *BOTH to enable journal caching for both the source and target
journals.
• Specify *TGT to enable journal caching for the target journal only.
• Specify *SRC to enable journal caching for the source journal only.
3. In order for the changes to be effective, you must build the journaling environment
using the updated journal definition. Do the following:
a. Type 14 (Build) next to the definition you just modified on the Work with
Journal Definitions display and press F4 (Prompt).

b. The Build Journaling Environment (BLDJRNENV) panel is displayed. Specify
*JRNDFN for the Source for values (JRNVAL) parameter and press Enter.
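As with journal standby state, the change can be made from the command line using the documented JRNCACHE and JRNVAL keywords (names shown are placeholders):

    CHGJRNDFN JRNDFN(MYJRNDFN SYSA) JRNCACHE(*BOTH)
    BLDJRNENV JRNDFN(MYJRNDFN SYSA) JRNVAL(*JRNDFN)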


Immediately applying committed transactions


In user journal replication, when commitment control is used on a source system,
MIMIX supports two modes of applying transactions under commitment control:
immediate apply and delayed apply on the target system. It is important to understand
the implications of both modes of commit processing.
In a data group definition, the value specified for the Commit mode element of the
Database apply processing (DBAPYPRC) parameter determines how the apply
process will handle journal entries that are under commitment control. Supported
values are *DLY and *IMMED.
Considerations when choosing which commit mode to use include:
• Does your environment have long-running commit cycles?
• Does your environment use data on the target system for other activities such as
running reports?
Long-running open commit cycles often occur when performing large clean-up
operations, such as when deleting all entries older than a certain date. They can also
occur when a user forgets to close a transaction for a length of time.
Delayed commit (*DLY) - This is the default value and preserves the behavior of
previous releases. MIMIX delays applying journal entries that are part of a commit
transaction until a C-CM journal entry (set of record changes committed) or C-RB (set
of record changes rolled back) for the commit transaction is processed.
When delayed commit mode is used, MIMIX maintains the integrity of the database
on the target system by preventing partial transactions from being applied until the
whole transaction completes. If the source system becomes unavailable, MIMIX will
not have applied incomplete transactions on the target system. In the event of an
incomplete (or uncommitted) commitment cycle, the integrity of the database is
maintained.
Delayed commit mode is preferred if a large number of commitment cycles will be
rolled back. This mode will also prevent reports run on the target system from
showing uncommitted data.
However, in environments where long-running open commit cycles occur regularly,
the performance of the database apply process can be affected when delayed commit
mode is used.
Immediate commit (*IMMED) - MIMIX immediately applies journal entries that are
part of a commit transaction without waiting for the outcome of the commit cycle to be
determined. Many users can benefit from using this mode. Most will see improved
performance for the database apply process. This can be significant in environments
where long-running open commit cycles occur regularly.
Immediate commit mode is preferred when the environment has long-running
commitment cycles that are eventually committed. With this mode, audits can more
accurately compare data while commit cycles are open because fewer transactions
are held until the commit cycle closes.

In immediate commit mode, it is possible that applied entries may be rolled back once
all the journal entries in the commit cycle are applied. At any time while entries in the
commit cycle are being processed, the target system may contain partial data or extra
data that would not be available in delayed mode. This can be a concern if you use
data on the target system for more than high availability or disaster recovery, such as
for running backups or reports or for supporting cascading environments.

Changing the specified commit mode


To change how the database apply process handles transactions under commitment
control, do the following from a management system:
1. Perform a controlled end of the data group that you want to change.
2. When the controlled end completes, verify that there are no open commit cycles
using the command:
DSPDGSTS DGDFN(name system1 system2) VIEW(*DBVIEW1)
Note: Resolve any open commit cycles before continuing. If any open commit
cycles exist, you will not be able to start the data group after changing the
configuration.
3. Do the following to change the data group:
a. Specify the name of the data group you want to change in the following
command and press F4 (Prompt)
CHGDGDFN DGDFN(name system1 system2)
b. Page down to locate the Database apply processing (DBAPYPRC) parameter.
c. Specify either *DLY or *IMMED for the Commit mode element.
d. Press Enter.
4. Start the data group.
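The sequence might look like the following for a hypothetical data group MYDG between SYS1 and SYS2 (the names and the ENDDG parameters for a controlled end are placeholders; prompt each command with F4):

    ENDDG DGDFN(MYDG SYS1 SYS2)
    DSPDGSTS DGDFN(MYDG SYS1 SYS2) VIEW(*DBVIEW1)
    CHGDGDFN DGDFN(MYDG SYS1 SYS2)
    STRDG DGDFN(MYDG SYS1 SYS2)

Because Commit mode is one element of the multi-part DBAPYPRC parameter, prompt CHGDGDFN with F4 and change only that element rather than guessing element positions on the command string.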


Caching extended attributes of *FILE objects


In order to accurately replicate actions against *FILE objects, it is sometimes
necessary to retrieve the extended attribute of a *FILE object, such as PF, LF or
DSPF. Whenever large volumes of journal entries for *FILE objects are replicated
from the security audit journal (system journal), MIMIX caches this information for a
fixed set of *FILE objects to prevent unnecessary retrievals of the extended attribute.
The result is a potential reduction of CPU consumption by the object send job and a
significant performance improvement.
This function can be tailored to suit your environment. The maximum size of the
cache is controlled though the use of a data area in the MIMIX product library. The
cache size indicates the number of entries that can be contained in the cache. If the
data area is not created or does not exist in the MIMIX product library, the size of the
cache defaults to 15.
To configure the extended attribute cache, do the following:
1. Create the data area on the systems on which the object send jobs are running.
Type the following command:
CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR)
LEN(2)
2. Specify the cache size (xx). Valid cache values are numbers 00 through 99. Type
the following command:
CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('xx,
RCVJRNE_delay_values')
Notes:
• The four RCVJRNE delay values are specified in this string along with the
cache size. See topic “Increasing data returned in journal entry blocks by
delaying RCVJRNE calls” on page 377 for more information.
• Using 00 for the cache size value disables the extended attribute cache.
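Conceptually, the cache behaves like a small fixed-size, least-recently-used lookup table keyed by object name. The following Python sketch illustrates the idea only; the class name, eviction policy, and interfaces are assumptions for illustration, not MIMIX internals:

```python
from collections import OrderedDict

class ExtAttrCache:
    """Fixed-size cache of extended attributes (PF, LF, DSPF) for *FILE
    objects, evicting the least recently used entry when full.
    Illustration only; not MIMIX's actual implementation."""

    def __init__(self, size=15):          # 15 mirrors the documented default
        self.size = size
        self.entries = OrderedDict()      # (library, file) -> attribute

    def get(self, lib, file, retrieve):
        if self.size == 0:                # cache size 00 disables caching
            return retrieve(lib, file)
        key = (lib, file)
        if key in self.entries:
            self.entries.move_to_end(key) # refresh recency on a hit
            return self.entries[key]
        attr = retrieve(lib, file)        # the expensive system retrieval
        self.entries[key] = attr
        if len(self.entries) > self.size:
            self.entries.popitem(last=False)  # evict oldest entry
        return attr
```

A hit avoids calling the retrieval routine again, which is the CPU saving the object send job gains from the cache.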

Optimizing access path maintenance
MIMIX provides the ability to improve the performance of database apply processes
by delaying access path maintenance. Leveraging the ability to change the Access
path maintenance (MAINT) attribute on files removes responsibility for access path
maintenance from the regular jobs for database apply sessions, thereby allowing
those jobs to process journal entries more efficiently.
The service pack level installed determines which of the following optimization
methods is available for use:
• For installations running service pack 7.1.15.00 or higher, the available method is
“Optimizing access path maintenance on service pack 7.1.15.00 or higher” on
page 370.
• For installations running earlier version 7.1 service packs, the available method is
“Using parallel access path maintenance on earlier service packs” on page 374.

Optimizing access path maintenance on service pack 7.1.15.00 or higher


MIMIX supports access path maintenance (APM) for installations running service
pack 7.1.15.00 or higher through processing initiated by the database apply process.
The Access path maintenance (APMNT) policy controls whether MIMIX can optimize
access path maintenance during database apply processing. When the APMNT
policy is enabled, database apply processes are allowed to temporarily change the
value of the access path maintenance attribute on eligible replicated files and their
associated logical files on the target system. This policy also controls the maximum
number of jobs that can be used by the access path maintenance function for a data
group.

Eligible files and limitations


Physical files, logical files, and join logical files with keyed access paths that are not
unique and which specify *IMMED for their access path maintenance (MAINT)
attribute are eligible for optimized access path maintenance. This includes files with
shared access paths that are not uniquely keyed.
For files that are otherwise eligible but which specify delayed (*DLY) access path
maintenance, MIMIX will track record changes to the physical file and issue periodic
updates to perform delayed maintenance. MIMIX will honor the *DLY value if it was
not set by MIMIX and will not attempt to change it to *IMMED when delayed
maintenance completes.
Limitations: The following are not eligible due to their nature:
• Files with uniquely keyed access paths
• Files that have non-keyed access paths (RRN access)
• Vector indexes
• Files with access paths that do not support insert, update, and delete operations,
such as those for data dictionary files (*DTADCT).


Enabling the access path maintenance function


Do the following:
1. From the command line type SETMMXPCY and press F4 (Prompt).
2. For the Data group definition, do one of the following:
• To set the default policy for the installation, verify that the value specified for
Data group definition is *INST.
• To set the policy for a specific data group, specify the full three-part name.
3. Press Enter.
4. You will see all the policies and their current values for the level you specified in
Step 2. Use the Page Down key to locate the Access path maintenance policy.
5. Do the following:
a. At the Optimize for DB apply prompt, specify *ENABLED.
b. At the Maximum number of jobs prompt, specify the value you want.
6. To accept the changes, press Enter.
These changes are not in effect until the affected data groups are ended and
restarted.
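For the installation level, the steps above might reduce to a single command, assuming the policy keyword is APMNT and that its two elements are specified in the order shown on the display (both assumptions; prompting with F4 confirms them):

    SETMMXPCY DGDFN(*INST) APMNT(*ENABLED 3)

Here 3 is an arbitrary example for the maximum number of jobs.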

Operation
When the APMNT policy is enabled, starting the database apply process for a data
group also starts an asynchronous access path maintenance job that remains active
when the apply process is active. When the database apply process opens a physical
file to apply a replicated transaction, the apply process also checks whether the file
and any associated logical files affected by the transaction are eligible for access path
maintenance optimization.
For eligible files, the apply process changes the access path maintenance (MAINT)
attribute from *IMMED to *DLY and keeps track of the number of record changes to
the physical file. When the record count exceeds 100 records and a predetermined
threshold (five percent of the file records being applied), the apply process requests
that access path maintenance job “catch up” on the delayed maintenance associated
with that physical file. The access path maintenance job performs delayed
maintenance on eligible files, using additional transient jobs if needed. The file’s
MAINT attribute is changed back to *IMMED when the apply process closes the file or
when the apply process ends.
Note: Any eligible files that were already set to *DLY before being opened by the
apply process will remain set to *DLY after the apply process closes the files.
MIMIX tracks any failed attempts to change a file’s MAINT attribute back to *IMMED
but does not report these as errors on the associated data group file entry while the
data group is active.
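The catch-up trigger described above can be sketched as a simple predicate. This illustrates the documented thresholds only; interpreting “five percent of the file records being applied” as five percent of the records applied so far is an assumption, and this is not MIMIX code:

```python
def needs_catch_up(changes_since_catch_up: int, records_applied: int) -> bool:
    """Return True when the apply process should ask the access path
    maintenance job to catch up on delayed maintenance for a file.
    Sketch of the documented rule: more than 100 record changes AND at
    least five percent of the records being applied."""
    if changes_since_catch_up <= 100:
        return False
    return changes_since_catch_up >= 0.05 * records_applied
```

Requiring both conditions keeps the maintenance job from being woken for small files or for large files where only a trickle of changes has accumulated.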
When the database apply process ends, the last apply session to end notifies the
access path maintenance job, then ends. The access path maintenance job uses
additional jobs, if needed, to change the access path maintenance attribute to
*IMMED on all files that MIMIX had previously changed to *DLY. Any failed requests
to change the MAINT attribute are retried. Before the maintenance jobs end, any files
that could not be changed to *IMMED are identified as having an access path
maintenance failure on the associated data group file entry.

Error recovery
If any access path maintenance errors exist when a data group (or the database apply
process) ends, MIMIX attempts to recover any access path maintenance errors the
next time the data group (or database apply process) is started.
If these attempts fail during start operations, MIMIX will attempt to change the
maintenance attribute to *IMMED the next time the data group (or the database apply
process) ends.
If errors exist for a physical file and associated logical files, the physical files are
addressed first.
For persistent access path maintenance errors, you can also manually retry changing
the MAINT attribute using option 40 from the Work with DG File Entries display or the
Retry Access Path Maint. Files (RTYAPMNT) command.

Behavior during a switch


Application group environments: The MXCHKAPMNT step program is included in
the default PRECHECK, SWTPLAN, and SWTUNPLAN procedures. This step
attempts to correct any access path maintenance errors for the data groups that
belong to the application group being switched.
The error action specified for the MXCHKAPMNT step within each procedure
determines the behavior if any access path maintenance errors still exist after the step
completes.
• In the PRECHECK procedure, the shipped default behavior allows the procedure
to continue to run. If the PRECHECK procedure completes, it will be “with errors”.
• However, in the SWTPLAN and SWTUNPLAN procedures, if any access path
maintenance errors still exist, the shipped default behavior would cause the
MXCHKAPMNT step and these procedures to fail.
If the MXCHKAPMNT successfully completes and the data groups are switched, file
entries on the new source system will have an access path maintenance status of
*AVAILABLE.
Data group-only environments: The Conditions that end switch (ENDSWT)
parameter on the Switch Data Group (SWTDG) command supports the value
*APMNT, which checks for and attempts to correct any access path maintenance
errors. When the ENDSWT value specifies *APMNT (or a value which includes
*APMNT) and any errors cannot be corrected, the switch request will fail.
If the switch request completes, file entries on the new source system will have an
access path maintenance status of *AVAILABLE.


Job status
The persistent access path maintenance job is included in the summary of processes
whose status is reported in the Target DB column on the Work with Data Groups
display. When all other processes reported in this column are active but access path
maintenance is enabled and does not have at least one active job, Partial status is
displayed.
In detailed status for a data group, status for access path maintenance is displayed in
the AP Maint field on the merged view and database views 1 and 2 when the APMNT
policy is enabled. The following status values are possible:
A (Active) - One or more access path maintenance jobs exist.
I (Inactive) - No access path maintenance jobs exist. The APMNT policy is enabled.
U (Unknown) - An unknown error occurred.
Error status
The number of logical (LF) and physical (PF) files that have access path maintenance
failures for a data group is included in the number of errors specified in the DB Errors
column on the Work with Data Groups display. Option 12 (Files needing attention)
provides access to detailed information for the file entries associated with replication
errors and access path maintenance errors.
Only replication errors appear on the initial view of the Work with DG File Entries
display. Therefore, you must use F10 multiple times to see the view showing the AP
Maint. Status column. The value in this column identifies status of access path
maintenance processing for the file identified by the data group file entry. The APMNT
policy in effect determines whether the database apply process can optimize access
path maintenance.
If an access path maintenance error exists for a physical file or a logical file that is
identified by a data group file entry, the error status is *FAILED. If an access path
maintenance error exists on a logical file which does not have a data group file entry,
the error status *FAILEDLF is reported on the file entry for its associated physical
file.Therefore, a file entry for a physical file may have errors for itself and for multiple
associated logical files which are not identified by file entries. When this scenario
occurs, the *FAILED status takes precedence for the PF is displayed, and when
resolved, the *FAILEDLF status will be displayed.
When one of the files included in a join logical is not associated with a file entry and
an access path maintenance error occurs, all of the file entries associated with the
join logical are tracked as errors if they are not already in error. The error cannot be
reported on the join files that are not represented by file entries.

Table 41. Possible access path maintenance status values for data group file entries

Value Description

*AVAILABLE The file is eligible for access path maintenance. The policy in effect
allows the database apply process to temporarily delay access path
maintenance for the file on the target system.

*DISABLED Access path maintenance is disabled by the policy in effect.


*FAILED Access path maintenance failed for the file. The failure occurred while
resetting access path maintenance for the file from delayed (*DLY) to
immediate (*IMMED).

*FAILEDLF Access path maintenance failed for a logical file associated with the file.
The failure occurred while resetting access path maintenance for the
logical file from delayed (*DLY) to immediate (*IMMED).

*NOTALW MIMIX cannot perform access path maintenance for the file because the
operating system does not allow it.

Using parallel access path maintenance on earlier service packs


Parallel access path maintenance (PAPM) is available when the software level
installed is lower than service pack 7.1.15.00. PAPM is not available when running
service pack 7.1.15.00 or higher.
PAPM support uses multiple parallel monitor jobs to maintain access paths
associated with logical files. A set of automatically created *INTERVAL monitors are
responsible for the access path maintenance of non-uniquely keyed logical file access
paths affected by database record operations such as inserts, updates and deletes.
Eligible files: The logical files that are eligible are those in which:
1. A data group file entry that specifies MBR(*ALL) exists for the logical file, and it is
active.
2. The file is MAINT(*IMMED) on the source system.
3. The file is keyed.
4. The file is not uniquely keyed.
Operation: PAPM is enabled and disabled by the Parallel AP maintenance
(PRLAPMNT) policy on the Set MIMIX Policies (SETMMXPCY) command.
When it is enabled, PAPM sets the MAINT attribute on eligible logical files to
MAINT(*DLY) on the target system. This relieves the access path maintenance
responsibility from the database apply sessions. To avoid letting the delayed
maintenance log grow too large, PAPM also creates *INTERVAL monitors which
periodically open each file member. It is during this open operation that the access
path maintenance operations are performed, under the monitor job.
When the monitors are inactive, the MAINT attribute is reset back to its original state
(normally *IMMED). The monitors are responsible for periodically opening the logical
files to ensure that the access path stays “caught up.”
Monitors: Two or more PAPM monitors run on the target system of each data group
for which the parallel access path maintenance function has been enabled and
activated. The monitors are created and started on the target system as a result of
running the Start Data Group (STRDG) command for a data group after the function
has been enabled by setting the parallel access path maintenance policy. The policy
can be set for the installation or for a specific data group.


The following monitors are associated with this function:


• The parallel access path maintenance group monitor (short-data-group-
name_PAPM) provides ease of control over associated monitors that perform
parallel access path maintenance for a data group.
• When this monitor exists, there are always associated monitors of one of the
following types:
– The parallel access path maint monitor nnn (short-data-group-namePAPMnnn)
monitors are created and used for the automatic method of performing parallel
access path maintenance for a data group. The value of nnn identifies a unique
monitor, with values ranging from 000 through 999. The number of monitors is
determined by the policy setting.
– The parallel access path maint monitor job-name (short-data-group-
nameJobname) monitors are created and used for the manual method of
performing parallel access path maintenance for a data group. The Jobname
identifies a unique monitor as determined by the policy setting and the
configuration file.
Enabling PAPM: Do the following to enable PAPM:
1. From the command line type SETMMXPCY and press F4 (Prompt).
2. For the Data group definition, do one of the following:
• To set the default policy for the installation, verify that the value specified for
Data group definition is *INST.
• To set the policy for a specific data group, specify the full three-part name.
3. Press Enter.
You will see all the policies and their current values for the level you specified in
Step 2.
4. Use the Page Down key to locate the Parallel AP maintenance policy, then specify
the values you want for the element prompts on the Parallel AP maintenance
parameter. Possible values are identified in Table 42.
5. To accept the changes, press Enter.
These changes are not effective until the affected data groups are ended and
restarted.

Table 42. Parallel AP maintenance (PRLAPMNT) policy. This policy is available only on
installations running service packs below 7.1.15.00.

Parameter Element

Method


Specifies the method by which the parallel access path maintenance function is
implemented. The shipped default for the installation level policy is *NONE.
• *NONE—The parallel access path maintenance function is not used. The values
specified for all other elements are ignored.
• *AUTO—All eligible access paths are automatically assigned to access path
maintenance jobs and are applied in parallel.
• *MANUAL—The access paths to be maintained in parallel are specified manually.
Use this method only under the direction of a certified MIMIX representative.

Number of jobs

Specifies the number of parallel jobs to use for access path maintenance. The shipped
default for the installation level policy is *CALC.
• *CALC—MIMIX calculates the number of parallel access path maintenance jobs to
use, with a minimum of two jobs.
• number-of-jobs—Specifies the number of parallel access path maintenance jobs to
use. Valid values range from 1 through 1000.

Delay interval (sec)

Specifies the number of seconds to wait between iterations of access path maintenance
operations. The shipped default for the installation level policy is 60 seconds.
• number-of-seconds—Specifies the number of seconds to wait between iterations.
Valid values range from 5 through 900 seconds.

Log retention (days)

Specifies the number of days to retain log records for the parallel access path
maintenance function. The shipped default for the installation level policy is 1 day.
• *NONE—No logging is performed.
• number-of-days—Specifies the number of days to retain log records for parallel
access path maintenance jobs. Valid values range from 1 through 365 days.
Note: All elements of the PRLAPMNT parameter support the value *INST (use the value for the
installation). For data group level policies, this value is the shipped default. You can specify
this value when a data group name or a value other than *INST is specified for the data group
definition on the SETMMXPCY command.
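As a quick reference, the value ranges documented above can be summarized in a small validation sketch. This is illustrative only: the function name and parameter handling are assumptions, and the SETMMXPCY command performs its own validation.

```python
# Illustrative check of the PRLAPMNT element ranges documented above.
# (Hypothetical helper; SETMMXPCY itself enforces these rules.)

def validate_prlapmnt(method, jobs="*CALC", delay_sec=60, log_days=1):
    """Return True if the element values fall within the documented ranges."""
    if method not in ("*NONE", "*AUTO", "*MANUAL"):     # Method
        return False
    if jobs != "*CALC" and not 1 <= jobs <= 1000:       # Number of jobs
        return False
    if not 5 <= delay_sec <= 900:                       # Delay interval (sec)
        return False
    if log_days != "*NONE" and not 1 <= log_days <= 365:  # Log retention (days)
        return False
    return True

print(validate_prlapmnt("*AUTO"))             # shipped-style defaults are in range
print(validate_prlapmnt("*AUTO", jobs=2000))  # jobs above 1000 is invalid
```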

Status reporting: The monitors associated with parallel access path maintenance
report monitor status on the target node of data group replication processes. When
the target node is the local node on which you view the Work with Application Groups
display, the summary of monitor status for the local system that is shown in the
Monitors field includes status of the monitors associated with parallel access path
maintenance.
Detailed status displays for the data group show the process status in the Prl AP Mnt
field in the Target Statistics section.

Increasing data returned in journal entry blocks by delaying RCVJRNE calls
Enhancements have been made to MIMIX to increase the performance of the object
send job when a small number of journal entries are present during the Receive
Journal Entry (RCVJRNE) call. Journal entries are received in configurable-sized
blocks that have a default size of 99,999 bytes. When multiple RCVJRNE calls are
performed and each block retrieved is less than 99,999 bytes, unnecessary overhead
is created.
Through additional controls added to the MXOBJSND *DTAARA objects within the
MIMIX installation library, you can now specify the size of the block of data received
from RCVJRNE and delay the next RCVJRNE call based on a percentage of the
requested block size. Doing so increases the probability of receiving a full journal
entry block and improves object send performance—reducing the number of
RCVJRNE calls while simultaneously increasing the quantity of data returned in each
block. This delay, along with the extended file attribute cache capability, also reduces
CPU consumption by the object send job. See “Caching extended attributes of *FILE
objects” on page 369 for related information.

Understanding the data area format


This enhancement allows you to provide byte values for the block size to receive data
from RCVJRNE, as well as specify the percentage of that block size to use for both a
small delay block and a medium delay block in the data area. These values are added
in segments to the string of characters used by the file attribute cache size. Each
block segment is followed by a multiplier value, which determines how long the
previously specified journal entry block is delayed. The duration of the delay is the
multiplier value multiplied by the value specified on the Reader wait time (seconds)
(RDRWAIT) parameter in the data group definition. The RDRWAIT default value is 1
second. The RCVJRNE block size is specified in kilobytes, ranging from 32 Kb to
4000 Kb. If not specified, the default size is 99,999 bytes (100 Kb -1).
The following defines each segment and includes the number of characters that
particular segment can contain:
DTAARA VALUE('cache_size2, small_block_percentage2,
small_multiplier2, medium_block_percentage2,
medium_multiplier2, block_size4')
To illustrate the effect of specific delay and multiplier values, let us assume the
following:
DTAARA VALUE('15,10,02,30,01,0200')
In this example, a small block is defined as any journal entry block consisting of 10
percent of the RCVJRNE block size of 200 Kb, or 20,000 bytes. Assuming the
RDRWAIT default is in effect, small journal entry blocks will be delayed for 2 seconds
before the next RCVJRNE call. Similarly, a medium block is defined as any journal
entry block containing between 10 and 30 percent of the RCVJRNE block size,
between 20,001 and 60,000 bytes. Medium blocks are then delayed for 1 second
assuming the default RDRWAIT value is used.

Note: Delays are not applied to blocks larger than the specified medium block
percentage. In the previous example, no delays will be applied to blocks larger
than 30 percent of the RCVJRNE block size, or 60,000 bytes.
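The delay arithmetic described above can be sketched as follows. This is a conceptual model only, assuming the example data area value '15,10,02,30,01,0200', a Kb of 1,000 bytes, and the RDRWAIT default of 1 second; the function and variable names are illustrative, not part of MIMIX.

```python
# Sketch of the RCVJRNE delay rules described above (illustrative only;
# MIMIX's actual implementation is internal to the product).

RDRWAIT = 1  # Reader wait time (seconds); data group default

def block_delay(block_bytes, dtaara="15,10,02,30,01,0200", rdrwait=RDRWAIT):
    """Return the delay (seconds) applied before the next RCVJRNE call.

    Data area segments: cache_size, small_block_percentage, small_multiplier,
    medium_block_percentage, medium_multiplier, block_size (Kb; 1 Kb = 1,000
    bytes here).
    """
    (_cache, small_pct, small_mult,
     med_pct, med_mult, blk_kb) = (int(s) for s in dtaara.split(","))
    block_size = blk_kb * 1000                    # 0200 -> 200,000 bytes
    small_limit = block_size * small_pct // 100   # 10% -> 20,000 bytes
    medium_limit = block_size * med_pct // 100    # 30% -> 60,000 bytes
    if block_bytes <= small_limit:
        return small_mult * rdrwait               # small block: 2 * 1 = 2 s
    if block_bytes <= medium_limit:
        return med_mult * rdrwait                 # medium block: 1 * 1 = 1 s
    return 0                                      # larger blocks: no delay

print(block_delay(15_000))   # small  -> 2
print(block_delay(45_000))   # medium -> 1
print(block_delay(150_000))  # large  -> 0
```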

Determining if the data area should be changed


Before changing the data area, it is recommended that you contact a Certified MIMIX
Consultant for assistance with running object send processing with diagnostic
messages enabled. Review the set of LVI0001 messages returned as a result.
By default, the RCVJRNE block size is 99,999 bytes, with the small block value set to
5,000 bytes and the medium block value set to 20,000 bytes. If the resulting
messages indicate that you are processing full journal entry blocks, there is no need
to add a delay to the RCVJRNE call. In this case, the object send job is already
running as efficiently as possible. Note that a block is considered full when the next
journal entry in the sequence cannot fit within the size limitations of the block currently
being processed.
Note: Reviewing these messages can also be helpful once you have changed the
default values, to ensure that the object send job is operating efficiently.
The following are examples of LVI0001 messages:
LVI0001 OM2120 Block Sizes (in Kb): Small=20; Medium=60
LVI0001 OM2120 Block Counts: Small=129; Medium=461; Large=46;
Full=1
LVI0001 OM2120 Using RCVJRNE Block Size (in Kb): 200
LVI0001 OM2120 - Range Counts: 0%=80; 2%=28; 5%=21; 10%=23;
15%=56; 20%=161; 25%=221; 30%=23
LVI0001 OM2120 - Range Counts: 40%=10; 50%=4; 60%=5; 70%=3;
80%=0; 90%=1; Full=1
OM2120 File Attr Cache: Size= 30, no cache lookup attempts
In the above example, 636 blocks were sent but only one of the sent blocks was full.
Making changes to the delay multiplier or altering the small or medium block size
specification would probably make sense in this scenario. Recommendations for
changing the block size values are provided in “Configuring the RCVJRNE call delay
and block values” on page 378.
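One way to read such counts is to compute the fraction of retrieved blocks that were full and consider adding a delay only when that fraction is low. The helper and the 50 percent threshold below are assumptions for illustration; the counts are taken from the sample messages above.

```python
# Sketch: reading the sample LVI0001 block counts shown above.
# The counts come from the example messages; the helper and threshold
# are illustrative assumptions, not part of MIMIX.

counts = {"Small": 129, "Medium": 461, "Large": 46, "Full": 1}

def mostly_full(block_counts, threshold=0.5):
    """True if enough retrieved blocks were full that no delay is needed."""
    total = sum(block_counts.values())
    return block_counts.get("Full", 0) / total >= threshold

if mostly_full(counts):
    print("Blocks are mostly full; an RCVJRNE delay is unnecessary")
else:
    print("Few full blocks; consider adding an RCVJRNE delay")
```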

Configuring the RCVJRNE call delay and block values


To configure the delay and block values when retrieving journal entry blocks, do the
following:
Note: Prior to configuring the RCVJRNE call delay, carefully read the information
provided in “Understanding the data area format” on page 377 and
“Determining if the data area should be changed” on page 378.
1. Create the data area on the systems on which the object send jobs are running.
Type the following command:
CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR)
LEN(20)
Note: Although you will see improvements from the file attribute cache with the
default character value (LEN(2)), enhancements are maximized by recreating
the MXOBJSND data area as a LEN(20) to use the RCVJRNE call delays.
2. Change the data area to specify the delay and block values you want. For
example:
CHGDTAARA DTAARA(installation_library/MXOBJSND)
VALUE('cache_size,10,02,30,01,0100')
Note: For information about the cache size, see “Caching extended attributes of
*FILE objects” on page 369.

Configuring high volume objects for better performance
Some objects, such as data areas and data queues, can have significant activity
against them and can cause MIMIX to use significant CPU resource.
One or several programs can use the QSNDDTAQ and QRCVDTAQ APIs to generate
thousands of journal entries for a single *DTAQ. For each journal entry, system journal
replication processes package all of the entries of the *DTAQ and sends it to the apply
system. MIMIX then individually applies each *DTAQ entry using the QSNDDTAQ
API.
If the data group is configured for multiple Object retrieve processing (OBJRTVPRC)
jobs, then several object retrieve jobs could be started (up to the maximum
configured) to handle the activity against the *DTAQ.
MIMIX contains redundancy logic that eliminates multiple journal entries for the same
object when the entire object is replicated. When you configure a data group for
system journal replication, you should:
• Place all *DTAQs in the same object-only data group
• Limit the maximum number of object retrieve jobs for the data group to one.
Defaults can be used for the other object data group jobs.

Improving performance of the #MBRRCDCNT audit


Environments that use commitment control may find that, in some conditions, a
request to run the #MBRRCDCNT audit or the Compare Record Count
(CMPRCDCNT) command can be extremely long-running. This is possible in
environments that use commitment control with long-running commit transactions that
include large numbers (tens of thousands) of record operations within one
transaction. In such an environment, the compare request can be long running when
the number of members to be compared is very large and there are uncommitted
changes present at the time of the request.
The Set MIMIX Policies (SETMMXPCY) command includes the policy CMPRCDCNT
commit threshold policy (CMPRCDCMT parameter) that provides the ability to specify
a threshold at which requests to compare record counts will no longer perform the
comparison due to commit cycle activity on the source system.
The shipped default values for this policy (CMPRCDCMT parameter) permit record
count comparison requests without regard to commit cycle activity on the source
system. These policy default values are suitable for environments that do not have
the commitment control environment indicated, or that can tolerate a long-running
comparison.
If your environment cannot tolerate a long-running request, you can specify a numeric
value for the CMPRCDCMT parameter for either the MIMIX installation or for a
specific data group. This will change the behavior of MIMIX by affecting what is
compared, and can improve performance of #MBRRCDCNT and CMPRCDCNT
requests.
Note: Equal record counts suggest but do not guarantee that files are synchronized.
When a threshold is specified for the CMPRCDCNT commit threshold policy,
record count comparisons can have a higher number of file members that are
not compared. This must be taken into consideration when using the
comparison results to gauge whether systems are synchronized.
A numeric value for the CMPRCDCMT parameter defines the maximum number of
uncommitted record operations that can exist for files waiting to be applied in an apply
session at the time a compare record count request is invoked. The number specified
must be representative of the number of uncommitted record operations.
When a numeric value is specified, MIMIX recognizes whether the number of
uncommitted record operations for an apply session exceeds the threshold at the time
a compare request is invoked. If an apply session has not reached the threshold, the
comparison is performed. If the threshold is exceeded, MIMIX will not attempt to
compare members from that apply session. Instead, the results will display the *CMT
value for the difference indicator, indicating that commit cycle activity on the source
system prevented active processing from comparing counts of current records and
deleted records in the selected member.
Each database apply session is evaluated against the threshold independently. As a
result, it is possible for record counts to be compared for files in one apply session but
not be compared in another apply session, as illustrated in the following example.

Example: This example shows the result of setting the policy for a data group to a
value of 10,000. Table 43 shows the files replicated by each of the apply sessions
used by the data group and the result of comparison. Because of the number of
uncommitted record operations present at the time of the request, files processed by
apply sessions A and C are not compared.

Table 43. Sample results with a policy threshold value of 10,000.

  Apply     Files   Uncommitted Record Operations        Result
  Session           Per File    Apply Session Total
  A         A01     11,000      > 10,000                 Not compared, *CMT
            A02     0                                    Not compared, *CMT
  B         B01     5,000       < 10,000                 Compared
            B02     0                                    Compared
  C         C01     7,000       > 10,000                 Not compared, *CMT
            C02     6,000                                Not compared, *CMT
  D         D01     50          < 10,000                 Compared
            D02     500                                  Compared
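The per-session evaluation illustrated in Table 43 can be sketched as follows, using the same threshold of 10,000. The function and data layout are illustrative assumptions; MIMIX performs this check internally.

```python
# Sketch of the per-apply-session threshold check described above, using
# the data from Table 43. Names are illustrative; MIMIX's logic is internal.

THRESHOLD = 10_000  # CMPRCDCMT policy value from the example

# Uncommitted record operations per file, grouped by apply session
sessions = {
    "A": {"A01": 11_000, "A02": 0},
    "B": {"B01": 5_000, "B02": 0},
    "C": {"C01": 7_000, "C02": 6_000},
    "D": {"D01": 50, "D02": 500},
}

def compare_results(apply_sessions, threshold):
    """Each session is evaluated independently: if its total uncommitted
    operations exceed the threshold, none of its files are compared (*CMT)."""
    results = {}
    for files in apply_sessions.values():
        outcome = "*CMT" if sum(files.values()) > threshold else "Compared"
        for name in files:
            results[name] = outcome
    return results

for file_name, result in sorted(compare_results(sessions, THRESHOLD).items()):
    print(file_name, result)
```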

CHAPTER 16 Configuring advanced replication techniques

This chapter describes how to modify your configuration to support advanced
replication techniques for user journal (database) and system journal (object)
replication.
User journal replication: The following topics describe advanced techniques for
user journal replication:
• “Keyed replication” on page 385 describes the requirements and restrictions of
replication that is based on key values within the data. This topic also describes
how to configure keyed replication at the data group or file entry level as well as
how to verify key attributes.
• “Data distribution and data management scenarios” on page 390 defines and
identifies configuration requirements for the following techniques: bi-directional
data flow, file combining, file sharing, file merging, broadcasting, and cascading.
• “Trigger support” on page 397 describes how MIMIX handles triggers and how to
enable trigger support. Requirements and considerations for replication of
triggers, including considerations for synchronizing files with triggers, are
included.
• “Constraint support” on page 399 identifies the types of constraints MIMIX
supports. This topic also describes delete rules for referential constraints that can
cause dependent files to change and MIMIX considerations for replication of
constraint-induced modifications.
• “Handling SQL identity columns” on page 401 describes the problem of duplicate
identity column values and how the Set Identity Column Attribute (SETIDCOLA)
command can be used to support replication of SQL tables with identity columns.
Requirements and limitations of the SETIDCOLA command as well as alternative
solutions are included.
• “Collision resolution” on page 408 describes available support within MIMIX to
automatically resolve detected collisions without user intervention and its
requirements. This topic also describes how to define and work with collision
resolution classes.
• “Changing target side locking for DBAPY processes” on page 413 describes how
to change the type of lock used by the database apply process for the data of
replicated file members on the target node and the types of locks supported.
System journal replication: The following topics describe advanced techniques for
system journal replication:
• “Omitting T-ZC content from system journal replication” on page 415 describes
considerations and requirements for omitting content of T-ZC journal entries from
replicated transactions for logical and physical files.
• “Selecting an object retrieval delay” on page 419 describes how to set an object
retrieval delay value so that a MIMIX lock on an object does not interfere with your
applications. This topic includes several examples.
• “Configuring to replicate SQL stored procedures and user-defined functions” on
page 421 describes the requirements for replicating these constructs and how to
configure MIMIX to replicate them.
• “Using Save-While-Active in MIMIX” on page 423 describes how to change the
type of save-while-active option to be used when saving objects. You can view and
change these configuration values for a data group through an interface such as
SQL or DFU.


Keyed replication
By default, MIMIX user journal replication processes use positional replication. You
can change from positional replication to keyed replication for database files.
Keyed replication is not supported in environments licensed for MIMIX DR.

Keyed vs positional replication


In data groups that are configured for user journal replication, default values use
positional replication. In positional file replication, data on the target system is
identified by position, or relative record number (RRN), in the file member. If data
exists in a file on the source system, an exact copy must exist in the same position in
a file on the target system. When the file on the source system is updated, MIMIX
finds the data in the exact location on the target system and updates that data with the
changes.
User journal replication processes support the update of files by key, allowing
replication to be based on key values within the data instead of by the position of the
data within the file. Key replication support is subject to the requirements and
restrictions described.
Positional file replication provides the best performance. Keyed file replication offers a
greater level of flexibility, but you may notice greater CPU usage when MIMIX must
search each file for the specified key. You also need to be aware that data “collisions”
can occur when an attempt is made to simultaneously update the same data from two
different sources.
Positional replication is recommended for most high availability requirements. Keyed
replication is best used for more flexible scenarios, such as file sharing, file routing, or
file combining.
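The distinction can be illustrated with a minimal sketch: positional replication addresses a record by its slot (relative record number), while keyed replication looks the record up by a unique key. This models the concepts only; it says nothing about how MIMIX stores or applies data.

```python
# Conceptual contrast of positional (RRN-based) and keyed updates.
# Purely illustrative; not how MIMIX represents replicated data.

members = ["anna", "bert", "carl"]      # target member, addressed by RRN
by_key = {"A1": "anna", "B2": "bert"}   # same data, addressed by unique key

# Positional replication: the change names the slot directly -- fast, but
# both systems must hold records in identical positions.
members[1] = "bertha"                   # update RRN 2 (0-based index 1)

# Keyed replication: the change names a key value; the target record is
# located by key lookup, so positions may differ between systems.
by_key["B2"] = "bertha"

print(members[1], by_key["B2"])
```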

Requirements for keyed replication


Journal images - MIMIX may need to be configured so that both before and after
images of the journal transaction are placed in the journal.
The Journal image element of the File and tracking entry options (FEOPT) parameter
controls which journal images are placed in the journal. Default values result in only
an after-image of the record. However, some configurations require both before-
images and after-images. The Journal image value specified in the data group
definition is in effect unless a different value is specified for the FEOPT parameter in a
file entry or object entry.
It is recommended that you use the Journal image value of *BOTH whenever there
are file entries with keyed replication to prevent before images from being filtered out
by the database send process. If the unique key fields of the database file are
updated by applications, you must use the value *BOTH.
Unique access path - At least one unique access path must exist for the file being
replicated. The access path can be either part of the physical file itself or it can be
defined in a logical file dependent on the physical file.
You can use the Verify Key Attributes (VFYKEYATR) command to determine whether
a physical file is eligible for keyed replication. See “Verifying key attributes” on
page 389.

Restrictions of keyed replication


The Compare File Data (CMPFILDTA) command cannot compare files that are
configured for keyed replication. If you run the #FILDTA audit or the CMPFILDTA
command against keyed files, the files are excluded from the comparison and a
message indicates that files using *KEYED replication were not processed.
When keyed replication is in use, the journal and journal definition cannot be
configured to allow object types to support minimized entry specific data. For more
information, see “Minimized journal entry data” on page 359.

Implementing keyed replication


You can implement keyed replication for an entire data group or for individual data
group file entries. If you configure a data group for keyed replication, MIMIX uses
keyed replication as the default for all processing of all associated data group file
entries. If you configure individual data group file entries for keyed replication, the
values you define in the data group file entry override the defaults used by the data
group for the associated file.

Attention: If you attempt to change the file replication from *KEYED to
*POSITION, a warning message will be returned indicating that the position of
the file may not match the position of the file on the backup system. Attempting
to change from keyed to positional replication can result in a mismatch of the
relative record numbers (RRN) between the target system and source system.

Changing a data group configuration to use keyed replication


You can define keyed replication for a data group when you are initially configuring
MIMIX or you can change the configuration later. To use keyed replication for all
database replication defined for a data group, the following requirements must be
met:
1. Before you change a data group definition to support keyed replication, do the
following:
a. Verify that the files defined to the data group are journaled correctly. Do not
continue until this is verified.
b. If the files are not currently journaled correctly, you need to end journaling for
the file entries defined to the data group. Use topic “Ending Journaling” in the
MIMIX Operations book.
2. In the data group definition used for replication you must specify the following:
• Data group type of *ALL or *DB.
• DB journal entry processing must have Before images as *SEND for source
send configurations. When using remote journaling, all journal entries are sent.


• Verify that you have the value you need specified for the Journal image
element of the File and tracking ent. options. *BOTH is recommended.
• File and tracking ent. options must specify *KEYED for the Replication type
element.
3. The files identified by the data group file entries for the data group must be eligible
for keyed replication. See topic “Verifying Key Attributes” in the MIMIX Operations
book.
4. If you have modified file entry options on individual data group file entries, you
need to ensure that the values used are compatible with keyed replication.
5. Start journaling for the file entries using “Starting journaling for physical files” on
page 347.

Changing a data group file entry to use keyed replication


By default, data group file entries use the same file entry options as specified in the
data group definition. If you configure individual data group file entries for keyed
replication, the values you define in the data group file entry override the defaults
used by the data group for the associated file.
If you want to use keyed replication for one or more individual data group file entries
defined for a data group, you need the following:
1. Before you change a data group file entry to support keyed replication, if the
file is not being journaled correctly (for example the data group file entry is not set
as described in Step 4), you will need to end journaling for the file entries.
2. The data group definition used for replication must have a Data group type of
*ALL or *DB.
3. DB journal entry processing must have Before images as *SEND for source send
configurations. When using remote journaling, all journal entries are sent.
4. The data group file entry must have File and tracking ent. options set as follows:
• To override the defaults from the data group definition to use keyed replication
on only selected data group file entries, verify that you have the value you need
specified for the Journal image (*BOTH is recommended) and specify *KEYED
for the Replication type.
• If you are using keyed replication at the data group level, the data group file
entries can use the default value *DGDFT for both Journal image and
Replication type.
Note: You can use any of the following ways to configure data group file entries
for keyed replication:
• Use either procedure in topic “Loading file entries” on page 275 to add or
modify a group of data group file entries. If you are modifying existing file
entries in this way, you should specify *UPDADD for the Update option
parameter.
• Use topic “Adding a data group file entry” on page 281 to create a new file
entry.

• Use topic “Changing a data group file entry” on page 282 to modify an
existing file entry.
5. The files identified by the data group file entries for the data group must be eligible
for keyed replication. See topic “Verifying Key Attributes” in the MIMIX Operations
book.
6. After you have changed individual data group file entries, you need to start
journaling for the file entries using “Starting journaling for physical files” on
page 347.

Verifying key attributes
Before you configure for keyed replication, verify that the file or files for which you
want to use keyed replication are actually eligible.
Do the following to verify that the attributes of a file are appropriate for keyed
replication:
1. On a command line, type VFYKEYATR (Verify Key Attributes). The Verify Key
Attributes display appears.
2. Do one of the following:
• To verify a file in a library, specify a file name and a library.
• To verify all files in a library, specify *ALL and a library.
• To verify files associated with the file entries for a data group, specify
*MIMIXDFN for the File prompt and press Enter. Prompts for the Data group
definition appear. Specify the name of the data group that you want to check.
3. Press Enter.
4. A spooled file is created that indicates whether you can use keyed replication for
the files in the library or data group you specified. Display the spooled file
(WRKSPLF command) or use your standard process for printing. You can use
keyed replication for the file if *BOTH appears in the Replication Type Allowed
column. If a value appears in the Replication Type Defined column, the file is
already defined to the data group with the replication type shown.

Data distribution and data management scenarios
MIMIX supports a variety of scenarios for data distribution and data management
including bi-directional data flow, file combining, file sharing, and file merging. MIMIX
also supports data distribution techniques such as broadcasting, and cascading.
Often, this support requires a combination of advanced replication techniques as well
as customizing. These techniques require additional planning before you configure
MIMIX. You may need to consider the technical aspects of implementing a technique
as well as how your business practices may be affected. Consider the following:
• Can each system involved modify the data?
• Do you need to filter data before sending it to another system?
• Do you need to implement multiple techniques to accomplish your goal?
• Do you need customized exit programs?
• Do any potential collision points exist and how will each be resolved?
MIMIX user journal replication provides filtering options within the data group
definition. Also, MIMIX provides options within the data group definition and for
individual data group file entries for resolving most collision points. Additionally,
collision resolution classes allow you to specify different resolution methods for each
collision point.

Configuring for bi-directional flow


Both MIMIX user journal and system journal replication processes allow data to flow
bi-directionally, but their implementations and configuration requirements are
distinct.
• In user journal replication processing, bi-directional data flow is a data sharing
technique in which the same named database file can be replicated between
databases on two systems in two directions at the same time. When MIMIX user
journal replication processes are configured for bi-directional data flow, each
system is both a source system and a target system.
• System journal replication processing supports the bi-directional flow of objects
between a pair of systems. However, when objects on each system are updated
within the same time frame, data loss is likely. In this case, the object data on
each system reflects the last update replicated to that system.
File sharing is a scenario in which a file can be shared among a group of systems
and can be updated from any of the systems in the group. MIMIX implements file
sharing among systems defined to the same MIMIX installation. To enable file
sharing, MIMIX must be configured to allow bi-directional data flow. An example of file
sharing is when an enterprise maintains a single database file that must be updated
from any of several systems.

Bi-directional requirements: system journal replication


To configure system journal replication processes to support bi-directional flow of
objects, you need the following:


• A data group (DG) definition is unique to its three-part name (Name, System 1,
System 2). This allows two DG definitions to be configured to share the same data
group name with system 1 and system 2 reversed. You must specify both DG
definitions to use the same Data source (DTASRC) parameter value. For
example, in the following table both DG definitions use DataGroup1 as the data
group name and both specify *SYS1 (System 1) as their DTASRC. This results in
one DG definition that replicates from A to B, while the other replicates from B to
A.

Table 44. Example of DG Definitions with reversed system names for bi-directional replication.

Data Group Name System 1 System 2

DataGroup1 A B

DataGroup1 B A

• Each data group definition should specify *NO for the Allow to be switched
(ALWSWT) parameter.
Note: In system journal replication, MIMIX does not support simultaneous updates to
the same object on multiple systems and does not support conflict resolution
for objects. Once an object is replicated to a target system, system journal
replication processes prevent looping by not allowing the same object,
regardless of name mapping, to be replicated back to its original source
system.
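The two definitions in Table 44 could be created with commands along these lines. This is a sketch only: the data group and system names are examples, and parameters not shown take their defaults.

```cl
/* One definition replicates from A to B, the other from B to A.     */
/* Both specify *SYS1 as the data source and do not allow switching. */
CRTDGDFN DGDFN(DATAGROUP1 A B) DTASRC(*SYS1) ALWSWT(*NO)
CRTDGDFN DGDFN(DATAGROUP1 B A) DTASRC(*SYS1) ALWSWT(*NO)
```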

Bi-directional requirements: user journal replication


To configure user journal replication processes to support bi-directional data flow, you
need the following:
• A data group (DG) definition is unique to its three-part name (Name, System 1,
System 2). This allows two DG definitions to be configured to share the same data
group name with system 1 and system 2 reversed. You must specify both DG
definitions to use the same Data source (DTASRC) parameter value. For
example, in the following table both DG definitions use DataGroup1 as the data
group name and both specify *SYS1 (System 1) as their DTASRC. This results in
one DG definition that replicates from A to B, while the other replicates from B to
A.

Table 45. Example of DG Definitions with reversed system names for bi-directional replication.

Data Group Name System 1 System 2

DataGroup1 A B

DataGroup1 B A

• For each data group definition, set the DB journal entry processing (DBJRNPRC)
parameter so that its Generated by MIMIX element is set to *IGNORE. This
prevents any journal entries that are generated by MIMIX from being sent to the
target system and prevents looping.
• The files defined to each data group must be configured for keyed replication. Use
topics “Keyed replication” on page 385 and “Verifying key attributes” on page 389
to determine if files can use keyed replication.
Note: In order for bi-directional keyed replication to work correctly, the data
group names must be the same with the System 1 and System 2 values
reversed.
• Analyze your environment to determine the potential collision points in your data.
You need to understand how each collision point will be resolved. Consider the
following:
– Can the collision be resolved using the collision resolution methods provided in
MIMIX or do you need customized exit programs? See “Collision resolution” on
page 408.
– How will your business practices be affected by collision scenarios?
For example, say that you have an order entry application that updates shared
inventory records such as Figure 19. If two locations attempt to access the last item in
stock at the same time, which location will be allowed to fill the order? Does the other
location automatically place a backorder or generate a report?

Figure 19. Example of bi-directional configuration to implement file sharing.
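The DBJRNPRC requirement above might be sketched as follows. This is an assumption-laden sketch: the DBJRNPRC parameter has several elements and the position of the Generated by MIMIX element is not shown in this document, so the placement below is illustrative. Prompting the command (F4) is the reliable way to set the correct element.

```cl
/* One of the paired definitions; repeat with DGDFN(DATAGROUP1 B A).  */
/* Set the "Generated by MIMIX" element of DBJRNPRC to *IGNORE so     */
/* MIMIX-generated entries are not sent back and looping is avoided.  */
/* Element position shown is illustrative only.                       */
CHGDGDFN DGDFN(DATAGROUP1 A B) DBJRNPRC(*IGNORE)
```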

Configuring for file routing and file combining


File routing and file combining are data management techniques supported by MIMIX
user journal replication processes. The way in which data is used can affect the
configuration requirements for a file routing or file combining operation. Evaluate the
needs for each pair of systems (source and target) separately. Consider the following:
• Does the data need to be updated in both directions between the systems? If you
need bi-directional data flow, see topic “Configuring for bi-directional flow” on
page 390.
• Will users update the data from only one or both systems? If users can update
data from both systems, you need to prevent the original data from being returned
to its original source system (recursion).
• Is the file routing or file combining scenario a complete solution or is it part of a
larger solution? Your complete solution may be a combination of multiple data
management and data distribution techniques. Evaluate the requirements for
each technique separately for a pair of systems (source and target). Each
technique that you need to implement may have different configuration
requirements.


File combining is a scenario in which all or partial information from files on multiple
systems can be sent to and combined in a single file on a target system. In its user
journal replication processes, MIMIX implements file combining between multiple
source systems and a target system that are defined to the same MIMIX installation.
MIMIX determines what data from the multiple source files is sent to the target system
based on the contents of a journal transaction. An example of file combining is when
many locations within an enterprise update a local file and the updates from all local
files are sent to one location to update a composite file. The example in Figure 20
shows file combining from multiple source systems onto a composite file on the
management system.

Figure 20. Example of file combining

To enable file combining between two systems, MIMIX user journal replication must
be configured as follows:
• Configure the data group definition for keyed replication. See topic “Keyed
replication” on page 385.
• If only part of the information from the source system is to be sent to the target
system, you need an exit program to filter out transactions that should not be sent
to the target system.
• If you allow the data group to be switched (by specifying *YES for Allow to be
switched (ALWSWT) parameter) and a switch occurs, the file combining operation
effectively becomes a file routing operation. To ensure that the data group will
perform file combining operations after a switch, you need an exit program that
allows the appropriate transactions to be processed regardless of which system is
acting as the source for replication.
• After the combining operation is complete, if the combined data will be replicated
or distributed again, you need to prevent it from returning to the system on which it
originated.
File routing is a scenario in which information from a single file can be split and sent
to files on multiple target systems. In user journal replication processes, MIMIX
implements file routing between a source system and multiple target systems that are
defined to the same MIMIX installation. To enable file routing, MIMIX calls a user exit
program that makes the file routing decision. The user exit program determines what
data from the source file is sent to each of the target systems based on the contents
of a journal transaction. An example of file routing is when one location within an
enterprise performs updates to a file for all other locations, but only updated
information relevant to a location is sent back to that location. The example in Figure
21 shows the management system routing only the information relevant to each
network system to that system.

Figure 21. Example of file routing

To enable file routing, MIMIX user journal replication processes must be configured as
follows:
• Configure the data group definition for keyed replication. See topic “Keyed
replication” on page 385.
• The data group definition must call an exit program that filters transactions so that
only those transactions which are relevant to the target system are sent to it.
• If you allow the data group to be switched (by specifying *YES for Allow to be
switched (ALWSWT) parameter) and a switch occurs, the file routing operation
effectively becomes a file combining operation. To ensure that the data group will
perform file routing operations after a switch, you need an exit program that allows
the appropriate transactions to be processed regardless of which system is acting
as the source for replication.

Configuring for cascading distributions


Cascading is a distribution technique in which data passes through one or more
intermediate systems before reaching its destination. MIMIX supports cascading in
both its user journal and system journal replication paths. However, the paths differ in
their implementation.
Data can pass through one intermediate system within a MIMIX installation. Additional
MIMIX installations allow you to support cascading in scenarios that require data
to flow through two or more intermediate systems before reaching its destination.
Figure 22 shows the basic cascading configuration that is possible within one MIMIX
installation.

Figure 22. Example of a simple cascading scenario

To enable cascading you must have the following:


• Within a MIMIX installation, the management system must be the intermediate
system.
• Configure a data group between the originating system (a network system) to the
intermediate (management) system. Configure another data group for the flow
from the intermediate (management) system to the destination system.
• Specify *NO for the Lock on apply parameter of a data group definition whose
target is the intermediate system. This is especially important for *SYSJRN data
groups or for files that are being replicated by object only.
• For user journal replication, you also need the following:
– The data groups should be configured to send journal entries that are
generated by MIMIX. To do this, specify *SEND for the Generated by MIMIX
element of the DB journal entry processing (DBJRNPRC) parameter. When
this is the case, MIMIX performs the database updates.
– If it is possible for the data to be routed back to the originating or any
intermediate systems, you need to use keyed replication.
Note: Once an object is replicated to a target system, MIMIX system journal
replication processes prevent looping by not allowing the same object,
regardless of name mapping, to be replicated back to its original source
system.
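A minimal sketch of the two data groups described above follows. The data group and system names are examples, and the DBJRNPRC element placement is illustrative; the Lock on apply parameter is referenced in a comment rather than guessed as a keyword.

```cl
/* Originating (network) system to intermediate (management) system.   */
/* Also set the Lock on apply parameter to *NO on the definition whose */
/* target is the intermediate system.                                  */
CRTDGDFN DGDFN(CASCADE1 NETSYS MGTSYS) DTASRC(*SYS1)

/* Intermediate (management) system to destination system. For user    */
/* journal replication, set the "Generated by MIMIX" element of        */
/* DBJRNPRC to *SEND so MIMIX-generated entries flow onward.           */
CRTDGDFN DGDFN(CASCADE2 MGTSYS DESTSYS) DTASRC(*SYS1)
```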
Cascading may be used with other data management techniques to accomplish a
specific goal. Figure 23 shows an example where the Chicago system is a
management system in a MIMIX installation that collects data from the network
systems and broadcasts the updates to the other participating systems. The network
systems send unfiltered data to the management system. Figure 23 is a cascading
scenario because changes that originate on the Hong Kong system pass through an
intermediate system (Chicago) before being distributed to the Mexico City system and
other network systems in the MIMIX installation. Exit programs are required for the
data groups acting between the management system and the destination systems
and need to prevent updates from flowing back to their system of origin.

Figure 23. Bi-directional example that implements cascading for file distribution.


Trigger support
A trigger program is a user exit program that is called by the database when a
database modification occurs. Trigger programs can be used to make other database
modifications which are called trigger-induced database modifications.

How MIMIX handles triggers


The method used for handling triggers is determined by settings in the data group
definition and file entry options. MIMIX supports database trigger replication using
one of the following ways:
• Using IBM i trigger support to prevent the triggers from firing on the target system
and replicating the trigger-induced modifications.
• Ignoring trigger-induced modifications found in the replication stream and allowing
the triggers to fire on the target system.

Considerations when using triggers


You should choose only one of these methods for each data group file entry. Which
method you use depends on a variety of considerations:
• The default replication type for data group file entry options is positional
replication. With positional replication, each file is replicated based on the position
of the record within the file. The value of the relative record number used in the
journal entry is used to locate a database record being updated or deleted. When
positional replication is used and triggers fire on the target system they can cause
trigger-induced modifications to the files being replicated. These trigger-induced
modifications can change the relative record number of the records in the file
because the relative record numbers of the trigger-induced modifications are not
likely to match the relative record numbers generated by the same triggers on the
source system. Because of this, triggers should not be allowed to fire on the target
system. You should prevent the triggers from firing on the target system and
replicate the trigger-induced modifications from the source to the target system.
• When triggers on replicated files modify files that are not replicated by MIMIX,
you may want the triggers to fire on the target system. This
will ensure that the files that are not replicated receive the same trigger-induced
modifications on the target system as they do on the source system.
• When triggers do not cause database record changes, you may choose to allow
them to fire on the target system. However, if non-database changes occur and
you are using object replication, the object replication will replicate trigger-induced
object changes from the source system. In this case, the triggers should not be
permitted to fire.
• When triggers are allowed to fire on the target system, the files being updated by
these triggers should be replicated using the same apply session as the parent
files to avoid lock contention.
• A slight performance advantage may be achieved by replicating the trigger-
induced modifications instead of ignoring them and allowing the triggers to fire.
This is because the database apply process checks each transaction before
processing to see if filtering is required, and firing the trigger adds additional
overhead to database processing.

Enabling trigger support


Trigger support is enabled for user journal replication by specifying the appropriate file
entry option values for parameters on the Create Data Group Definition (CRTDGDFN)
and Change Data Group Definition (CHGDGDFN) commands. You can also enable
trigger support at a file level by specifying the appropriate file entry options associated
with the file.
If you already have a trigger solution in place you can continue to use that
implementation or you can use the MIMIX trigger support.

Synchronizing files with triggers


When you are synchronizing a file with triggers and you are using MIMIX trigger
support, you must specify *DATA on the Sending mode parameter on the Synchronize
DG File Entry (SYNCDGFE) command.
On the Disable triggers on file parameter, you can specify if you want the triggers
disabled on the target system during file synchronization. The default is *DGFE, which
will use the value indicated for the data group file entry. If you specify *YES, triggers
will be disabled on the target system during synchronization. A value of *NO will leave
triggers enabled.
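For example, a synchronization that sends data only and disables triggers on the target might look like the following. This is a hedged sketch: the keywords SNDMODE and DSBTRG are assumptions inferred from the parameter names above, and the data group and file names are illustrative; prompt the command (F4) to confirm the actual keywords.

```cl
/* Synchronize a file that has triggers, sending data only and       */
/* disabling triggers on the target during the synchronization.      */
/* SNDMODE and DSBTRG keyword spellings are assumptions.             */
SYNCDGFE DGDFN(DATAGROUP1 A B) FILE(MYLIB/MYFILE) +
         SNDMODE(*DATA) DSBTRG(*YES)
```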
For more information on synchronizing files with triggers, see “About synchronizing
file entries (SYNCDGFE command)” on page 505.


Constraint support
A constraint is a restriction or limitation placed on a file. There are four types of
constraints: referential, unique, primary key, and check. Unique, primary key, and
check constraints are single-file operations that are transparent to MIMIX. If a constraint is met
for a database operation on the source system, the same constraint will be met for the
replicated database operation on the target. Referential constraints, however, ensure
the integrity between multiple files. For example, you could use a referential constraint
to:
• Ensure when an employee record is added to a personnel file that it has an
associated department from a company organization file.
• Empty a shopping cart and remove the order records if an internet shopper exits
without placing an order.
When constraints are added, removed or changed on files replicated by MIMIX, these
constraint changes will be replicated to the target system. With the exception of files
that have been placed on hold, MIMIX always enables constraints and applies
constraint entries. MIMIX tolerates mismatched before images or minimized journal
entry data CRC failures when applying constraint-generated activity. Because the
parent record was already applied, entries with mismatched before images are
applied and entries with minimized journal entry data CRC failures are ignored. To
use this support:
• Ensure that your target system is at the same or a later release level than the
source system, so that the target system can use all of the IBM i function that is
available on the source system. If an earlier IBM i level is installed on the
target system, the operation is ignored.
• You must have your MIMIX environment configured for either MIMIX Dynamic
Apply or legacy cooperative processing.

Referential constraints with delete rules


Referential constraints can cause changes to dependent database files when the
parent file is changed. Referential constraints defined with the following delete rules
cause dependent files to change:
• *CASCADE: Record deletion in a parent file causes records in the dependent file
to be deleted when the parent key value matches the foreign key value.
• *SETNULL: Record deletion in a parent file updates those records in the
dependent file where the value of the parent non-null key matches the foreign key
value. For those dependent records that meet the preceding criteria, all null
capable fields in the foreign key are set to null. Foreign key fields with the non-null
attribute are not updated.
• *SETDFT: Record deletion in a parent file updates those records in the dependent
file where the value of the parent non-null key matches the foreign key value. For
those dependent records that meet the preceding criteria, the foreign key field or
fields are set to their corresponding default values.

Referential constraint handling for these dependent files is supported through the
replication of constraint-induced modifications.
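As a point of reference, a *CASCADE delete rule corresponds to SQL such as the following, using hypothetical tables based on the personnel example earlier in this section.

```sql
-- Deleting a department row also deletes its dependent personnel rows
ALTER TABLE personnel
  ADD CONSTRAINT dept_fk
  FOREIGN KEY (dept_id) REFERENCES organization (dept_id)
  ON DELETE CASCADE;
```

The *SETNULL and *SETDFT rules correspond to ON DELETE SET NULL and ON DELETE SET DEFAULT, respectively.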
MIMIX does not provide the ability to disable constraints because IBM i would check
every record in the file to ensure constraints are met once the constraint is re-
enabled. This would cause a significant performance impact on large files and could
impact switch performance. If the need exists, this can be done through automation.

Replication of constraint-induced modifications


MIMIX always attempts to apply constraint-induced modifications. Earlier levels of
MIMIX provided the Process constraint entries element in the File entry options
(FEOPT) parameter, which has since been removed.1 Any previously specified value is now
mapped to *YES so that processing always occurs.
The considerations for replication of constraint-induced modifications are:
• Files with referential constraints and any dependent files must be replicated by the
same apply session.
• When referential constraints cause changes to dependent files not replicated by
MIMIX, enabling the same constraints on the target system will allow changes to
be made to the dependent files.

1. This element was removed in version 5 service pack 5.0.08.00.


Handling SQL identity columns


MIMIX replicates identity columns in SQL tables and checks for scenarios that can
cause duplicate identity column values after switching and, if possible, prevents the
problem from occurring. In some cases, identity columns will need to be processed by
manually running the Set Identity Column Attribute (SETIDCOLA) command.
This command is useful for handling scenarios that would otherwise result in errors
caused by duplicate identity column values when inserting rows into tables.

The identity column problem explained


In SQL, a table may have a single numeric column which is designated an identity
column. When rows are inserted into the table, the database automatically generates
a value for this column, incrementing the value with each insertion. Several attributes
define the behavior of the identity column, including: Minimum value, Maximum value,
Increment amount, Start value, Cycle/No Cycle, Cache amount. This discussion is
limited to the following attributes:
• Increment amount - the amount by which each new row’s identity column differs
from the previously inserted row. This can be a positive or negative value.
• Start value - the value used for the next row added. This can be any value,
including one that is outside of the range defined by the minimum and maximum
values.
• Cycle/No Cycle - indicates whether or not values cycle from maximum back to
minimum, or from minimum to maximum if the increment is negative.
Nothing prevents identity column values from being generated more than once.
However, in typical usage, the identity column is also a primary, unique key and set to
not cycle.
The value generator for the identity column is stored internally with the table.
Following certain actions which transfer table data from one system to another, the
next identity column value generated on the receiving system may not be as
expected. This can occur after a MIMIX switch and after other actions such as certain
save/restore operations on the backup system. Similarly, other actions such as
applying journaled changes (APYJRNCHG), also do not keep the value generator
synchronized.
Any SQL table with an identity column that is replicated by a switchable data group
can potentially experience this problem. Journal entries used to replicate inserted
rows on the production system do not contain information that would allow the value
generator to remain synchronized. The result is that after a switch to the backup
system, rows can be inserted on the backup system using identity column values
other than the next expected value. The starting value for the value generator on the
backup system is used instead of the next expected value based on the table’s
content. This can result in the reuse of identity column values which in turn can cause
a duplicate key exception.

Detailed technical descriptions of all attributes are available in the IBM eServer
iSeries Information Center. Look in the Database section for the SQL Reference for
CREATE TABLE and ALTER TABLE statements.

When the SETIDCOLA command is useful


Important! The SETIDCOLA command should not be used in all environments. Its
use is subject to the limitations described in “SETIDCOLA command limitations” on
page 402. If you cannot use the SETIDCOLA command, see “Alternative solutions”
on page 403.
When the MIMIX apply job is running, the next value setting for identity columns is
retained internally within the apply job. The next value setting is correctly adjusted
when the apply job ends normally. SETIDCOLA can be used in situations when this
setting needs to be manually corrected.
Examples of when you may need to run the SETIDCOLA command are:
• The SETIDCOLA command can be used to determine whether a data group
replicates tables which contain identity columns and report the results. To do so,
specify ACTION(*CHECKONLY) on the command. It is recommended that you
initially use this capability before setting values. You may want to perform this type
of check whenever new tables are created that might contain identity columns.
See “Checking for replication of tables with identity columns” on page 406.
• If a Save While Active (SAVACT) is performed on the target system, the image on
the save media will not have the correct settings for the identity column’s next
value. If these files are restored to be used as production files, SETIDCOLA
should be performed to assure the next value settings are correct.
• If a target apply job fails unexpectedly (or is ended using the ENDJOB command)
and the files on the target are subsequently used for production, SETIDCOLA
should be performed prior to starting production activity. For example, if the apply
job ends abnormally and the source system becomes unavailable, the target
system may be required for production.
Also, the SETIDCOLA command is needed in any environment in which you are
attempting to restore from a save that was created while replication processes were
running.

SETIDCOLA command limitations


In general, SETIDCOLA only works correctly for the most typical scenario where all
values for identity columns have been generated by the system, and no cycles are
allowed. In other scenarios, it may not restart the identity column at a useful value.
Limited support for unplanned switch - Following an unplanned switch, the backup
system may not be caught up with all the changes that occurred on the production
system. Using the SETIDCOLA command on the backup system may result in the
generation of identity column values that were used on the production system but not
yet replicated to the backup system. Careful selection of the value of the
INCREMENTS parameter can minimize the likelihood of this problem, but the value
chosen must be valid for all tables in the data group. See “Examples of choosing a
value for INCREMENTS” on page 405.


Not supported - The following scenarios are known to be problematic and are not
supported. If you cannot use the SETIDCOLA command in your environment,
consider the “Alternative solutions” on page 403.
• Columns that have cycled - If an identity column allows cycling and adding a row
increments its value beyond the maximum range, the restart value is reset to the
beginning of the range. Because cycles are allowed, the assumption is that
duplicate keys will not be a problem. However, unexpected behavior may occur
when cycles are allowed and old rows are removed from the table with a
frequency such that the identity column values never actually complete a cycle. In
this scenario, the ideal starting point would be wherever there is the largest gap
between existing values. The SETIDCOLA command cannot address this
scenario; it must be handled manually.
• Rows deleted on production table - An application may require that an identity
column value never be generated twice. For example, the value may be stored in
a different table, data area or data queue, given to another application, or given to
a customer. The application may also require that the value always locate either
the original row or, if the row is deleted, no row at all. If rows with values at the end
of the range are deleted and you perform a switch followed by the SETIDCOLA
command, the identity column values of the deleted rows will be re-generated for
newly inserted rows. The SETIDCOLA command is not recommended for this
environment. This must be handled manually.
• No rows in backup table - If there are no rows in the table on the backup system,
the restart value will be set to the initial start value. Running the SETIDCOLA
command on the backup system may result in re-generating values that were
previously used. The SETIDCOLA command cannot address this scenario; it
must be handled manually.
• Application generated values - Optionally, applications can supply identity column
values at the time they insert rows into a table. These application-generated
identity values may be outside the minimum and maximum values set for the
identity column. For example, a table’s identity column range may be from 1
through 100,000,000 but an application occasionally supplies values in the range
of 200,000,000 through 500,000,000. If cycling is permitted and the SETIDCOLA
command is run, the command would recognize the higher values from the
application and would cycle back to the minimum value of 1. Because the result
would be problematic, the SETIDCOLA command is not recommended for tables
which allow application-generated identity values. This must be handled manually.

Alternative solutions
If you cannot use the SETIDCOLA command because of its known limitations, you
have these options.
Manually reset the identity column starting point: Following a switch to the
backup system, you can manually reset the restart value for tables with identity
columns. The SQL statement ALTER TABLE name ALTER COLUMN can be used for
this purpose.
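For example, if the highest identity value already present in the table is known, the generator can be restarted just beyond it; the table, column, and value below are illustrative.

```sql
-- Restart the generator above the highest value already used
ALTER TABLE orders ALTER COLUMN order_id RESTART WITH 500001;
```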
Convert to SQL sequence objects: To overcome the limitations of identity column
switching and to avoid the need to use the SETIDCOLA command, SQL sequence
objects can be used instead of identity columns. Sequence objects are implemented
using a data area which can be replicated by MIMIX. The data area for the sequence
object must be configured for replication through the user journal (cooperatively
processed).
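A sequence object and its use might look like this; the names are illustrative.

```sql
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 NO CYCLE;

-- Applications obtain values with NEXT VALUE FOR
INSERT INTO orders (order_id, item)
  VALUES (NEXT VALUE FOR order_seq, 'widget');
```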

SETIDCOLA command details


The Set Identity Column Attribute (SETIDCOLA) command performs a RESTART
WITH alteration on the identity column of any SQL tables defined for replication in the
specified data group. For each table, the new restart value determines the identity
column value for the next row added to the table. Careful selection of values can
ensure that, when applications are started, the identity column starting values exceed
the last values used prior to the switch or save/restore operation.
If you use Lakeview-provided product-level security, the minimum authority level for
this command is *OPR.
The Data group definition (DGDFN) parameter identifies the data group against
which the specified action is taken. Only tables that are identified for replication by the
specified data group are addressed.
The Action (ACTION) parameter specifies what action is to be taken by the
command. Only tables which can be replicated by the specified data group are acted
upon. Possible values are:
*SET The command checks and sets the attribute of the identity column of each
table which meets the criteria. This is the default value.
*CHECKONLY The command checks for tables which have identity columns. It
does not set the attributes of the identity columns. The result of the check is
reported in the job log. If there are affected tables, message LVE3E2C will be
issued. If no tables are affected, message LVI3E26 will be issued.
The Number of jobs (JOBS) parameter specifies the number of jobs to use to
process tables which meet the criteria for processing by the command. A table will
only be updated by one job; each job can update multiple tables. The default value,
*DFT, is currently set to one job. You can specify as many as 30 jobs.
The Number of increments to skip (INCREMENTS) parameter specifies how many
increments of the counter which generates the starting value for the identity column to
skip. The value specified is used for all tables which meet the criteria for processing
by the command. Be sure to read the information in “Examples of choosing a value for
INCREMENTS” on page 405. Possible values are:
*DFT Skips the default number of increments, currently set to 1 increment.
Following a planned switch where tables are synchronized, you can usually use
*DFT.
number-of-increments-to-skip Specify the number of increments to skip. Valid
values are 1 through 2,147,483,647. Following an unplanned switch, use a larger
value to ensure that you skip any values used on the production system that may
not have been replicated to the backup system.


Usage notes
• The reason you are using this command determines which system you should run
it from. See “When the SETIDCOLA command is useful” on page 402 for details.
• The command can be invoked manually or as part of a MIMIX Model Switch
Framework custom switching program. Evaluation of your environment to
determine an appropriate increment value is highly recommended before using
the command.
• This command can be long running when many files defined for replication by the
specified data group contain identity columns. This is especially true when
affected identity columns do not have indexes over them or when they are
referenced by constraints. Specifying a higher number of jobs (JOBS) can reduce
this time.
• This command creates a work library named SETIDCOLA which is used by the
command. The SETIDCOLA library is not deleted so that it can be used for any
error analysis.
• Internally, the SETIDCOLA command builds RUNSQLSTM scripts (one for each
job specified) and uses RUNSQLSTM in spawned jobs to execute the scripts.
RUNSQLSTM produces spooled files showing the ALTER TABLE statements
executed, along with any error messages received. If any statement fails, the
RUNSQLSTM command also fails, returning the failing status to the job where
SETIDCOLA is running, and an escape message is issued.

Examples of choosing a value for INCREMENTS


When choosing a value for INCREMENTS, consider the rate at which each table
consumes its available identity values. Account for the needs of the table which
consumes numbers at the highest rate, as well as any backlog in MIMIX processing
and the activity causing you to run the command. If you have available numbers to
use, add a safety factor of at least 100 percent. For example, if the rate of the fastest
file is 1,000 numbers per hour and MIMIX is 15 minutes behind (0.25 hours), the value
you specify for INCREMENTS needs to result in at least 250 numbers (1000 x 0.25)
being skipped. Adding 100% to 250 results in an increment of 500.
Note: The MIMIX backlog, sometimes called the latency of changes being
transferred to the backup system, is the amount of time from when an
operation occurs on the production system until it is successfully sent to the
backup system by MIMIX. It does not include the time it takes for MIMIX to
apply the entry. Use the DSPDGSTS command to view the Unprocessed entry
count for the DB Apply process; this value is the size of the backlog. You need
to approximate how long it would take for this value to become zero (0) if
application activity were to be stopped on the production system.
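The sizing rule above (numbers consumed during the backlog, plus a safety factor of at least 100 percent) can be sketched as follows. This is an illustrative calculation only; it is not part of the SETIDCOLA command:

```python
import math

def recommended_increments(rows_per_hour, backlog_hours, safety_factor=2.0):
    """Estimate an INCREMENTS value: identity numbers consumed while MIMIX
    catches up on its backlog, scaled by a safety factor (2.0 = add 100%)."""
    consumed = rows_per_hour * backlog_hours
    return math.ceil(consumed * safety_factor)

# 1,000 numbers per hour with a 15-minute (0.25 hour) backlog -> 500
```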
For example, data group ORDERS contains tables A and B. Each row added to table
A increases the identity value by 1 and each row added to table B increases the
identity value by 1,000. Rows are inserted into table A at a rate of approximately 600
rows per hour. Rows are inserted into table B at a rate of approximately 20 rows per
hour. Prior to a switch, on the production system the latest value for table A was 75
and the latest value for table B was 30,000. Consider the following scenarios:

• Scenario 1. You performed a planned switch for test purposes. Because
replication of all transactions completed before the switch and no users have been
allowed on the backup system, the backup system has the same values as the
production. Before starting replication in the reverse direction you run the
SETIDCOLA command with an INCREMENTS value of 1. The next rows added to
table A and B will have values of 76 and 31,000, respectively.
• Scenario 2. You performed an unplanned switch. From previous experience, you
know that the latency of changes being transferred to the backup system is
approximately 15 minutes. Rows are inserted into Table A at the highest rate. In
15 minutes, approximately 150 rows will have been inserted into Table A (600
rows/hour * 0.25 hours). This suggests an INCREMENTS value of 150. However,
since all measurements are approximations or based on historical data, this
amount should be adjusted by a factor of at least 100% to 300 to ensure that
duplicate identity column values are not generated on the backup system. The
next rows added to tables A and B will have values of 75 + (300 * 1) = 375 and
30,000 + (300 * 1,000) = 330,000, respectively.
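Continuing the scenarios, the next identity value produced after running SETIDCOLA can be checked with a small sketch (values taken from the examples above; this is illustrative code, not MIMIX code):

```python
def next_identity(last_value, column_increment, increments_to_skip):
    """Identity value of the next inserted row after skipping the given
    number of increments of the column's counter."""
    return last_value + increments_to_skip * column_increment

# Scenario 2, INCREMENTS(300):
table_a = next_identity(75, 1, 300)          # table A increments by 1
table_b = next_identity(30_000, 1_000, 300)  # table B increments by 1,000
```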

Checking for replication of tables with identity columns


To determine whether any files being replicated by a data group have identity
columns, do the following.
1. From the production system, specify the data group to check in the following
command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*CHECKONLY)
2. Check the job log for the following messages. Message LVE3E2C identifies the
number of tables found with identity columns. Message LVI3E26 indicates that no
tables were found with identity columns.
3. If the results found tables with identity columns, you need to evaluate the tables
and determine whether you can use the SETIDCOLA command to set values.

Setting the identity column attribute for replicated files


At a high level, the steps you need to perform to set the identity columns of files being
replicated by a data group are listed below. You may want to plan for the time required
for investigation steps and time to run the command to set values.
1. Run the SETIDCOLA command in check only mode first to determine if you need
to set values. See “Checking for replication of tables with identity columns” on
page 406.
2. Determine whether limitations exist in the replicated tables that would prevent you
from running the command to set values. See “SETIDCOLA command limitations”
on page 402.
3. Determine what increment value is appropriate for use for all tables replicated by
the data group. Consider the needs of each table. Also consider the MIMIX
backlog at the time you plan to use the command. See “Examples of choosing a
value for INCREMENTS” on page 405.
4. From the appropriate system, as defined in “When the SETIDCOLA command is
useful” on page 402, specify a data group and the number of increments to skip in
the command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*SET)
INCREMENTS(number)

Collision resolution
Collision resolution is a function within MIMIX user journal replication that
automatically resolves detected collisions without user intervention. MIMIX supports
the following choices for collision resolution that you can specify in the file entry
options (FEOPT) parameter in either a data group definition or in an individual data
group file entry:
• Held due to error: (*HLDERR) This is the default value for collision resolution in
the data group definition and data group file entries. MIMIX flags file collisions as
errors and places the file entry on hold. Any data group file entry for which a
collision is detected is placed in a "held due to error" state (*HLDERR). This
results in the journal entries being replicated to the target system but they are not
applied to the target database. If the file entry specifies member *ALL, a
temporary file entry is created for the member in error and only that file entry is
held. Normal processing will continue for all other members in the file. You must
take action to apply the changes and return the file entry to an active state. When
held due to error is specified in the data group definition or the data group file
entry, it is used for all 12 of the collision points.
• Automatic synchronization: (*AUTOSYNC) MIMIX attempts to automatically
synchronize file members when an error is detected. The member is put on hold
while the database apply process continues with the next transaction. The file
member is synchronized using copy active file processing, unless the collision
occurred at the compare attributes collision point. In the latter case, the file is
synchronized using save and restore processing. When automatic
synchronization is specified in the data group definition or data group file entry, it
is used for all 12 of the collision points.
• Collision resolution class: A collision resolution class is a named definition
which provides more granular control of collision resolution. Some collision points
also provide additional methods of resolution that can only be accessed by using
a collision resolution class. With a defined collision resolution class, you can
specify how to handle collision resolution at each of the 12 collision points. You
can specify multiple methods of collision resolution to attempt at each collision
point. If the first method specified does not resolve the problem, MIMIX uses the
next method specified for that collision point.

Additional methods available with CR classes


Automatic synchronization (*AUTOSYNC) and held due to error (*HLDERR) are
essentially predefined resolution methods. When you specify *HLDERR or
*AUTOSYNC in a data group definition or a data group file entry, that method is used
for all 12 of the collision points. If you specify a named collision resolution class in a
data group definition or data group file entry, you can customize what resolution
method to use at each collision point.
Within a collision resolution class, you can specify one or more resolution methods to
use for each collision point. *AUTOSYNC and *HLDERR are available for use at each
collision point. Additionally, the following resolution methods are also available:
• Exit program: (*EXITPGM) A specified user exit program is called to handle the
data collision. This method is available for all collision points.


The MXCCUSREXT service program dynamically links your exit program. The
MXCCUSREXT service program is shipped with MIMIX and runs on the target
system.
The exit program is called on three occasions. The first occasion is when the data
group is started. This call allows the exit program to handle any initialization or set
up you need to perform.
The MXCCUSREXT service program (and your exit program) is called if a
collision occurs at a collision point for which you have indicated that an exit
program should perform collision resolution actions.
Finally, the exit program is called when the data group is ended.
• Field merge: (*FLDMRG) This method is only available for the update collision
point 3, used with keyed replication. If certain rules are met, fields from the after-
image are merged with the current image of the file to create a merged record that
is written to the file. Each field within the record is checked using the series of
algorithms below.
In the following algorithms, these abbreviations are used:
RUB = before-image of the source file
RUP = after-image of the source file
RCD = current record image of the target file
a. If the RUB equals the RUP and the RUB equals the RCD, do not change the
RUP field data.
b. If the RUB equals the RUP and the RUB does not equal the RCD, copy the
RCD field data into the RUP record.
c. If the RUB does not equal the RUP and the RUB equals the RCD, do not
change the RUP field data.
d. If the RUB does not equal the RUP and the RUB does not equal the RCD, fail
the field-level merge.
• Applied: (*APPLIED) This method is only available for the update collision point 3
and the delete collision point 1. For update collision point 3, the transaction is
ignored if the record to be updated already equals the data in the updated record.
For delete collision point 1, the transaction is ignored because the record does not
exist.
If multiple collision resolution methods are specified and do not resolve the problem,
MIMIX will always use *HLDERR as the last resort, placing the file on hold.
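The *FLDMRG rules (a through d) can be expressed per field as follows. This Python rendering is purely illustrative of the algorithm described above; it is not MIMIX code:

```python
def merge_field(rub, rup, rcd):
    """Merge one field, where rub is the source before-image, rup the source
    after-image, and rcd the current record image on the target."""
    if rub == rup:
        # Rules a and b: the source did not change this field, so keep the
        # target's current value (identical to rup in rule a's case).
        return rup if rub == rcd else rcd
    if rub == rcd:
        # Rule c: the source changed the field and the target still holds
        # the old value, so take the source's after-image.
        return rup
    # Rule d: both sides changed the field; the field-level merge fails.
    raise ValueError("field-level merge fails")
```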

Requirements for using collision resolution


To use a collision resolution method other than the default *HLDERR, you must have the
following:
• The data group definition used for replication must specify a data group type of
*ALL or *DB.

• You must specify either *AUTOSYNC or the name of a collision resolution class
for the Collision resolution element of the File entry option (FEOPT) parameter.
Specify the value as follows:
– If you want to implement collision resolution for all files processed by a data
group, specify a value in the parameter within the data group definition.
– If you want to implement collision resolution for only specific files, specify a
value in the parameter within an individual data group file entry.
Note: Ensure that data group activity is ended before you change a data group
definition or a data group file entry.
• If you plan to use an exit program for collision resolution, you must first create a
named collision resolution class. In the collision resolution class, specify
*EXITPGM for each of the collision points that you want to be handled by the exit
program and specify the name of the exit program.

Working with collision resolution classes


Do the following to access options for working with collision resolution:
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select option 5 (Work with collision
resolution classes) and press Enter. The Work with CR Classes display appears.

Creating a collision resolution class


To create a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 1 (Create) next to the blank line at
the top of the display and press Enter.
2. The Create Collision Res. Class (CRTCRCLS) display appears. Specify a name at
the Collision resolution class prompt.
3. At each of the collision point prompts on the display, specify the value for the type
of collision resolution processing you want to use. Press F1 (Help) to see a
description of the collision point.
Note: You can specify more than one method of collision resolution for each
prompt by typing a + (plus sign) at the prompt. With the exception of the
*HLDERR method, the methods are attempted in the order you specify. If
the first method you specify does not successfully resolve the collision,
then the next method is run. *HLDERR is always the last method
attempted. If all other methods fail, the member is placed on hold due to
error.
4. Press Page Down to see additional prompts.
5. At each of the collision point prompts on the second display, specify the value for
the type of collision resolution processing you want to use.
6. If you specified *EXITPGM at any of the collision point prompts, specify the name
and library of the program to use at the Exit point prompt.


7. At the Number of retry attempts prompt, specify the number of times to try to
automatically synchronize a file. If this number is exceeded in the time specified in
the Retry time limit, the file will be placed on hold due to error.
8. At the Retry time limit prompt, specify the maximum number of hours to
retry a process if a failure occurs due to a locking condition or an in-use condition.
Note: If a file encounters repeated failures, an error condition that requires
manual intervention is likely to exist. Allowing excessive synchronization
requests can cause communications bandwidth degradation and
negatively impact communications performance.
9. To create the collision resolution class, press Enter.

Changing a collision resolution class


To change an existing collision resolution class, do the following:
1. From the Work with CR Classes display, type a 2 (Change) next to the collision
resolution class you want and press Enter.
2. The Change CR Class Details display appears. Make any changes you need.
Page Down to see all of the prompts.
3. Provide the required values in the appropriate fields. Inspect the default values
shown on the display and either accept the defaults or change the value.
4. You can specify as many as 3 values for each collision point prompt. To expand
this field for multiple entries, type a plus sign (+) in the entry field opposite the
phrase "+ for more" and press Enter.
5. To accept the changes, press Enter.

Deleting a collision resolution class


To delete a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 4 (Delete) next to the collision
resolution class you want and press Enter.
2. A confirmation display appears. Verify that the collision resolution class shown on
the display is what you want to delete.
3. Press Enter.

Displaying a collision resolution class


To display a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 5 (Display) next to the collision
resolution class you want and press Enter.
2. The Display CR Class Details display appears. Press Page Down to see all of the
values.

Printing a collision resolution class
Use this procedure to create a spooled file of a collision resolution class which you
can print.
1. From the Work with CR Classes display, type a 6 (Print) next to the collision
resolution class you want and press Enter.
2. A spooled file is created with the name MXCRCLS on which you can use your
standard printing procedure.


Changing target side locking for DBAPY processes


To ensure access needed to complete replication, MIMIX defaults allow the database
apply process (DBAPY) to obtain an exclusive, allow read (*EXCLRD) lock on the
data of file members being processed on the target node.
Locking occurs when the apply process is started and affects members whose file
activity status is active. Database apply processing will also lock objects as needed to
replicate changes from the source node.
The type of lock obtained by MIMIX affects the type of lock that can be obtained by
other applications on the target node. You can also configure MIMIX to use a shared,
no update (*SHRNUP) lock or to use no lock at all (*NONE). This may be useful when
only read or query access is needed to replicated files on the target node or for an
instance that was migrated from another product which obtained different locks for
replication activity. However, changing the type of lock used may cause contention for
access to the data being replicated, which may affect performance of the apply
process.
Target side locking can be changed for a data group or for a specific data group object
entry or file entry.

Changing target side locking for a data group


To change target side locking for a data group, do the following from a management
system:
1. From the Work with Data Group Definitions display, use option 2 (Change) to
access the data group definition you want.
2. On the resulting display, press F10 (Additional parameters), then press Page
Down multiple times to locate the Lock member during apply prompt. (It is an
element of the File and tracking ent. opts (FEOPT) parameter.)
3. Specify the value you want for the Lock member during apply prompt.
4. Press Enter.
The change is not effective until replication processes for the data group are ended
and started again.

Changing target side locking for a data group object entry or file entry
To change target side locking for a data group object entry or file entry, do the following from a
management system:
1. From the Work with Data Group Definitions display, do one of the following to
access configured entries in the data group you want:
• Use option 20 (Object entries) to access configured data group object entries
for library-based objects.
• Use option 17 (File entries) to access configured data group file entries. Use
this only for data groups configured to use database-only replication
processes.
2. On the resulting display, press F10 (Additional parameters), then press Page
Down multiple times to locate the Lock member during apply prompt. (It is an
element of the File and tracking ent. opts (FEOPT) parameter.)
3. Specify the value you want for the Lock member during apply prompt.
4. Press Enter.
The change is not effective until replication processes for the data group are ended
and started again.


Omitting T-ZC content from system journal replication


For logical and physical files configured for replication solely through the system
journal, MIMIX provides the ability to prevent replication of predetermined sets of T-
ZC journal entries associated with changes to object attributes or content changes.
Default T-ZC processing: Files that have an object auditing value of *CHANGE or
*ALL will generate T-ZC journal entries whenever changes to the object attributes or
contents occur. The access type field within the T-ZC journal entry indicates what type
of change operation occurred. Table 46 lists the T-ZC journal entry access types that
are generated by PF-DTA, PF38-DTA, PF-SRC, PF38-SRC, LF, and LF-38 file types.

Table 46. T-ZC journal entry access types generated by file objects. These T-ZC journal entries are eligible
for replication through the system journal.

Access  Access Type        Operation Type       Operations that Generate T-ZC Access Type
Type    Description        File  Member  Data

1       Add                      X              Add member for physical files and logical files (ADDPFM)
7       Change (1)         X     X              Change Physical File (CHGPF), Change Logical File (CHGLF),
                                                Change Physical File Member (CHGPFM), Change Logical
                                                File Member (CHGLFM), Change Object Description (CHGOBJD)
10      Clear                            X      Clear member for physical files (CLRPFM)
25      Initialize                       X      Initialize member for physical files (INZPFM)
30      Open                             X      Opening member for write for physical files
36      Reorganize                       X      Reorganize member for physical files (RGZPFM)
37      Remove                   X              Remove member for physical files and logical files (RMVM)
38      Rename                   X              Rename member for physical files and logical files (RNMM)
62      Add constraint     X                    Adding constraint for physical files (ADDPFCST)
63      Change constraint  X                    Changing constraint for physical files (CHGPFCST)
64      Remove constraint  X                    Removing constraint for physical files (RMVPFCST)

(1) These T-ZC journal entries may or may not have a member name associated with them. If a member name is
associated with the journal entry, the T-ZC is a member operation. If no member name is associated with the
journal entry, the T-ZC is assumed to be a file operation.

By default, MIMIX replicates file attributes and file member data for all T-ZC entries
generated for logical and physical files configured for system journal replication. While
MIMIX recreates attribute changes on the target system, member additions and data
changes require MIMIX to replicate the entire object using save, send, and restore
processes. This can cause unnecessary replication of data and can impact
processing time, especially in environments where the replication of file data
transactions is not necessary.
Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data
group object entry commands, you can specify a predetermined set of access types
for *FILE objects to be omitted from system journal replication. T-ZC journal entries
with access types within the specified set are omitted from processing by MIMIX.
The OMTDTA parameter is useful when a file or member’s data does not need to be
replicated. For example, when replicating work files and temporary files, it may be
desirable to replicate the file layout but not the file members or data. The OMTDTA
parameter can also help you reduce the number of transactions that require
substantial processing time to replicate, such as T-ZC journal entries with access type
30 (Open).
Each of the following values for the OMTDTA parameter define a set of access types
that can be omitted from replication:
*NONE - No T-ZCs are omitted from replication. All file, member, and data
operations in transactions for the access types listed in Table 46 are replicated.
This is the default value.
*MBR - Data operations are omitted from replication. File and member operations
in transactions for the access types listed in Table 46 are replicated. Access type
7 (Change) for both file and member operations are replicated.
*FILE - Member and data operations are omitted from replication. Only file
operations in transactions for the access types listed in Table 46 are replicated.
Only file operations in transactions with access type 7 (Change) are replicated.
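The effect of the three OMTDTA values on the access types in Table 46 can be sketched as a decision function. The operation-type sets below are inferred from the table and the descriptions above; treat this as an illustrative model, not MIMIX code:

```python
FILE_OPS = {7, 62, 63, 64}    # file operations (type 7 only without a member name)
MEMBER_OPS = {1, 7, 37, 38}   # member operations (type 7 only with a member name)
DATA_OPS = {10, 25, 30, 36}   # data operations

def replicated(access_type, omtdta, has_member_name=False):
    """Whether a T-ZC entry with this access type is replicated under OMTDTA."""
    if omtdta == "*NONE":
        return access_type in FILE_OPS | MEMBER_OPS | DATA_OPS
    if omtdta == "*MBR":      # omit data operations only
        return access_type in FILE_OPS | MEMBER_OPS
    if omtdta == "*FILE":     # omit member and data operations
        if access_type == 7:  # a file operation only when no member name is attached
            return not has_member_name
        return access_type in FILE_OPS
    raise ValueError(f"unknown OMTDTA value: {omtdta}")
```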

Configuration requirements and considerations for omitting T-ZC content


To omit transactions, logical and physical files must be configured for system journal
replication and meet these configuration requirements:
• The data group definition must specify *ALL or *OBJ for the Data group type
(TYPE).
• The file for which you want to omit transactions must be identified by a data group
object entry that specifies the following:
– Cooperate with database (COOPDB) must be *NO when Cooperating object
types (COOPTYPE) specifies *FILE. If COOPDB is *YES, then COOPTYPE
cannot specify *FILE.
– Omit content (OMTDTA) must be either *FILE or *MBR.
Object auditing value considerations - The file must have an object auditing value
of *CHANGE or *ALL in order for any T-ZC journal entry resulting from a change
operation to be created in the system journal. To ensure that changes to the file
continue to be journaled and replicated, the data group object entry should also
specify *CHANGE or *ALL for the Object auditing value (OBJAUD) parameter.


For all library-based objects, MIMIX evaluates the object auditing level when starting
a data group after a configuration change. If the configured value specified for the
OBJAUD parameter is higher than the object’s actual value, MIMIX will change the
object to use the higher value. If you use the SETDGAUD command to force the
object to have an auditing level of *NONE and the data group object entry also
specifies *NONE, any changes to the file will no longer generate T-ZC entries in the
system journal. For more information about object auditing, see “Managing object
auditing” on page 60.
Object attribute considerations - When MIMIX evaluates a system journal entry
and finds a possible match to a data group object entry which specifies an attribute in
its Attribute (OBJATR) parameter, MIMIX must retrieve the attribute from the object in
order to determine which object entry is the most specific match.
If the object attribute is not needed to determine the most specific match to a data
group object entry, it is not retrieved.
After determining which data group object entry has the most specific match, MIMIX
evaluates that entry to determine how to proceed with the journal entry. When the
matching object entry specifies *FILE or *MBR for OMTDTA, MIMIX does not need to
consider the object attribute in any other evaluations. As a result, the performance of
the object send job may improve.

Omit content (OMTDTA) and cooperative processing


For file processing, MIMIX allows only a value of *NONE for OMTDTA when a data
group object entry specifies cooperative processing of files with COOPDB(*YES) and
COOPTYPE(*FILE).
When using MIMIX Dynamic Apply for cooperative processing, logical files and
physical files (source and data) are replicated primarily through the user journal.
Legacy cooperative processing replicates only physical data files. When using legacy
cooperative processing, system journal replication processes select only file attribute
transactions. File attribute transactions are T-ZC journal entries with access types 7
(Change), 62 (Add constraint), 63 (Change constraint), and 64 (Remove constraint).
These transactions are replicated by system journal replication during legacy
cooperative processing, while most other transactions are replicated by user journal
replication.

Omit content (OMTDTA) and comparison commands


All T-ZC journal entries for files are replicated when *NONE is specified for the
OMTDTA parameter. However, when OMTDTA is enabled by specifying *FILE or
*MBR, some T-ZC journal entries for file objects are omitted from system journal
replication. This may affect whether replicated files on the source and target systems
are identical.
For example, recall how a file with an object auditing attribute value of *NONE is
processed. After MIMIX replicates the initial creation of the file through the system
journal, the file on the target system reflects the original state of the file on the source
system when it was retrieved for replication. However, any subsequent changes to file
data are not replicated to the target system. According to the configuration
information, the files are synchronized between source and target systems, but the
files are not the same.
A similar situation can occur when OMTDTA is used to prevent replication of
predetermined types of changes. For example, if *MBR is specified for OMTDTA, the
file and member attributes are replicated to the target system but the member data is
not. The file is not identical between source and target systems, but it is synchronized
according to configuration. Comparison commands will report these attributes as *EC
(equal configuration) even though member data is different. MIMIX audits, which call
comparison commands with a data group specified, will have the same results.
Running a comparison command without specifying a data group will report all the
synchronized-but-not-identical attributes as *NE (not equal) because no configuration
information is considered.
Consider how the following comparison commands behave when faced with non-
identical files that are synchronized according to the configuration.
• The Compare File Attributes (CMPFILA) command has access to configuration
information from data group object entries for files configured for system journal
replication. When a data group is specified on the command, files that are
configured to omit data will report those omitted attributes as *EC (equal
configuration). When CMPFILA is run without specifying a data group, the
synchronized-but-not-identical attributes are reported as *NE (not equal).
• The Compare File Data (CMPFILDTA) command uses data group file entries for
configuration information. As a result, when a data group is specified on the
command, any file objects configured for OMTDTA will not be compared. When
CMPFILDTA is run without specifying a data group, the synchronized-but-not-
identical file member attributes are reported as *NE (not equal).
• The Compare Object Attributes (CMPOBJA) command can be used to check for
the existence of a file on both systems and to compare its basic attributes (those
which are common to all object types). This command never compares file-
specific attributes or member attributes and should not be used to determine
whether a file is synchronized.


Selecting an object retrieval delay


When replicating objects, particularly documents (*DOC) and stream files (*STMF),
MIMIX will obtain a lock on the object that can prevent your applications from
accessing the object in a timely manner.
Some of your applications may be unable to recover from this condition and may fail
in an unexpected manner.
You can reduce, or eliminate, contention for an object between MIMIX and your
applications if the object retrieval processing is delayed for a predetermined amount
of time before obtaining a lock on the object to retrieve it for replication.
You can use the Object retrieval delay element within the Object processing
parameter on the change or create data group definition commands to set the delay
time between the time the object was last changed on the source system and the time
MIMIX attempts to retrieve the object on the source system.
Although you can specify this value at the data group level, you can override the data
group value at the object level by specifying an Object retrieval delay value on the
commands for creating or changing data group entries.
You can specify a delay time from 0 through 999 seconds. The default is 0.
If the object retrieval latency time (the difference between when the object was last
changed and the current time) is less than the configured delay value, then MIMIX will
delay its object retrieval processing until the difference between the time the object
was last changed and the current time exceeds the configured delay value.
If the object retrieval latency time is greater than the configured delay value, MIMIX
will not delay and will continue with the object retrieval processing.
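The delay decision described above amounts to a simple time comparison, sketched below in illustrative Python (not actual MIMIX code; the function name is an assumption for this example):

```python
from datetime import datetime, timedelta

def retrieval_delay(last_changed: datetime, now: datetime,
                    configured_delay_secs: int) -> timedelta:
    """Return how long object retrieval should wait before locking the object.

    Rule described above: wait until (last changed + configured delay) has
    passed. A zero result means retrieval can proceed immediately.
    """
    earliest_retrieval = last_changed + timedelta(seconds=configured_delay_secs)
    remaining = earliest_retrieval - now
    return max(remaining, timedelta(0))

# Changed at 9:05:10, checked at 9:05:14, delay of 3 seconds -> no wait needed
t = datetime(2017, 6, 1, 9, 5, 10)
print(retrieval_delay(t, datetime(2017, 6, 1, 9, 5, 14), 3))   # 0:00:00

# Changed at 10:45:51, checked at 10:45:52, delay of 2 seconds -> wait 1 second
t = datetime(2017, 6, 1, 10, 45, 51)
print(retrieval_delay(t, datetime(2017, 6, 1, 10, 45, 52), 2)) # 0:00:01
```

These two calls correspond to Example 1 and Example 2 in the next topic.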

Object retrieval delay considerations and examples


You should use care when choosing the object retrieval delay. A long delay may
impact the ability of system journal replication processes to move data from a system
in a timely manner. Too short a delay may allow MIMIX to retrieve an object before an
application is finished with it. You should make the value large enough to reduce or
eliminate contention between MIMIX and applications, but small enough to allow
MIMIX to maintain a suitable high availability environment.
Example 1 - The object retrieval delay value is configured to be 3 seconds:
• Object A is created or changed at 9:05:10.
• The Object Retrieve job encounters the create/change journal entry at 9:05:14. It
retrieves the “last change date/time” attribute from the object and determines that
the delay time (object last changed date/time of 9:05:10 + configured delay value
of :03 = 9:05:13) is less than the current date/time (9:05:14). Because the object
retrieval delay time has already been exceeded, the object retrieve job continues
normal processing and attempts to package the object.
Example 2 - The object retrieval delay value is configured to be 2 seconds:
• Object A is created or changed at 10:45:51.

• The Object Retrieve job encounters the create/change journal entry at 10:45:52. It
retrieves the “last change date/time” attribute from the object and determines that
the delay time (object last changed date/time of 10:45:51 + configured delay value
of :02 = 10:45:53) exceeds the current date/time (10:45:52). Because the object
retrieval delay value has not been met, the object retrieve job delays for
1 second to satisfy the configured delay value.
• After the delay (at time 10:45:53), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 10:45:51 + configured delay value of :02 =
10:45:53) is equal to the current date/time (10:45:53). Because the object retrieval
delay value has been met, the object retrieve job continues with normal
processing and attempts to package the object.
Example 3 - The object retrieval delay value is configured to be 4 seconds:
• Object A is created or changed at 13:20:26.
• The Object Retrieve job encounters the create/change journal entry at 13:20:27. It
retrieves the “last change date/time” attribute from the object and determines that
the delay time (object last changed date/time of 13:20:26 + configured delay value
of :04 = 13:20:30) exceeds the current date/time (13:20:27) and delays for 3
seconds to satisfy the configured delay value.
• While the object retrieve job is waiting to satisfy the configured delay value, the
object is changed again at 13:20:28.
• After the delay (at time 13:20:30), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 13:20:28 + configured delay value of :04 =
13:20:32) again exceeds the current date/time (13:20:30) and delays for 2
seconds to satisfy the configured delay value.
• After the delay (at time 13:20:32), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 13:20:28 + configured delay value of :04 =
13:20:32) is equal to the current date/time (13:20:32). Because the object retrieval
delay value has now been met, the object retrieve job continues with normal
processing and attempts to package the object.
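The re-check behavior in Example 3, where a change during the wait extends the delay, can be reproduced with a short simulation (illustrative Python; a sketch of the rules described above, not actual MIMIX code):

```python
from datetime import datetime, timedelta

def simulate_retrieve(change_times, first_check, delay_secs):
    """Simulate the re-check loop: after each wait, the 'last changed' time is
    read again, so a change during the wait extends the delay. Returns the
    time at which retrieval finally proceeds."""
    now = first_check
    while True:
        # Most recent change at or before the current check time
        last_changed = max(t for t in change_times if t <= now)
        target = last_changed + timedelta(seconds=delay_secs)
        if target <= now:
            return now
        now = target  # wait until the configured delay is satisfied, then re-check

# Example 3: changed at 13:20:26 and again at 13:20:28 (during the first wait),
# first checked at 13:20:27, configured delay of 4 seconds
changes = [datetime(2017, 6, 1, 13, 20, 26), datetime(2017, 6, 1, 13, 20, 28)]
print(simulate_retrieve(changes, datetime(2017, 6, 1, 13, 20, 27), 4))
# 2017-06-01 13:20:32 -- the retrieval time reached in the walkthrough above
```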


Configuring to replicate SQL stored procedures and user-defined functions
DB2 UDB for IBM Power™ Systems supports external stored procedures and SQL
stored procedures. This information is specifically for replicating SQL stored
procedures and user-defined functions. SQL stored procedures are defined entirely in
SQL and may contain SQL control statements. MIMIX can replicate operations related
to stored procedures that are written in SQL (SQL stored procedures), such as
CREATE PROCEDURE (create), DROP PROCEDURE (delete), GRANT PRIVILEGES
ON PROCEDURE (authority), and REVOKE PRIVILEGES ON PROCEDURE (authority).
An SQL procedure is a program created and linked to the database as the result of a
CREATE PROCEDURE statement that specifies the language SQL and is called using
the SQL CALL statement. For example, the following statements create program
SQLPROC in LIBX and establish it as a stored procedure associated with LIBX:
CREATE PROCEDURE LIBX/SQLPROC(OUT NUM INT) LANGUAGE SQL
SELECT COUNT(*) INTO NUM FROM FILEX
For SQL stored procedures, an independent program object is created by the system
and contains the code for the procedure. The program object usually shares the name
of the procedure and resides in the same library with which the procedure is
associated. A DROP PROCEDURE statement for an SQL procedure removes the
procedure from the catalog and deletes the external program object.
Procedures are associated with a particular library. Because information about the
procedure is stored in the database catalog and not the library, it cannot be seen by
looking at the library. Use System i Navigator to view the stored procedures
associated with a particular library (select Databases > Libraries).

Requirements for replicating SQL stored procedure operations


The following configuration requirements and restrictions must be met:
• Apply any IBM PTFs (or their supersedes) associated with IBM i releases as they
pertain to your environment. Log in to Support Central and refer to the Technical
Documents page for a list of required and recommended IBM PTFs.
• To correctly replicate a create operation, name mapping cannot be used for either
the library or program name.
• GRANT and REVOKE only affect the associated program object. MIMIX
replicates these operations correctly.
• The COMMENT statement cannot be replicated.
• An appropriately configured data group object entry must identify the object to
which the stored procedure is associated.
Stored procedures or other system table concepts that have non-deterministic ties to
a library-based object cannot be replicated.

To replicate SQL stored procedure operations
Do the following:
1. Ensure that the replication requirements for the various operations are followed.
See “Requirements for replicating SQL stored procedure operations” on
page 421.
2. Ensure that you have a data group object entry that includes the associated
program object. For example:
ADDDGOBJE DGDFN(name system1 system2) LIB1(library)
OBJ1(*ALL) OBJTYPE(*PGM)


Using Save-While-Active in MIMIX


MIMIX system journal replication processes use save/restore when replicating most
types of objects. If there is a conflict for the use of an object between MIMIX and some
other process, the initial save of the object may fail. When such a failure occurs,
MIMIX will attempt to process the object by automatically starting delay or retry
processing using the values configured in the data group definition.
For the initial save of *FILE objects, save-while-active capabilities will be used unless
it is disabled. By default, save-while-active is only used when saving *FILE objects; it
is not used when saving other library-based object types, DLOs, or IFS objects.
However, you can specify to have MIMIX attempt saves of DLOs and IFS objects
using save-while-active.
Values for retry processing are specified in the First retry delay interval
(RTYDLYITV1) and Number of times to retry (RTYNBR) parameters in the data group
definition. After the initial failed save attempt, MIMIX delays for the number of
seconds specified in the RTYDLYITV1 value, before retrying the save operation. This
is repeated for the number of times that is specified for the RTYNBR value in the data
group definition. If the object cannot be saved after the attempts specified in
RTYNBR, then MIMIX uses the delay interval value which is specified in the
RTYDLYITV2 parameter. The save is then attempted for the number of retries
specified in the RTYNBR parameter. For the initial default values for a data group, this
calculates to be 7 save attempts (1 initial attempt, 3 attempts using the first delay
value of 5 seconds, and 3 attempts using the second delay value of 300 seconds), in
a time frame of approximately 20 minutes. For more information on retry processing,
see the parameters for automatic retry processing in “Tips for data group parameters”
on page 234.
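The attempt schedule just described can be sketched as follows (illustrative Python, not actual MIMIX code; assumes the shipped default values of RTYDLYITV1=5, RTYDLYITV2=300, and RTYNBR=3 cited above):

```python
def retry_schedule(rtydlyitv1, rtydlyitv2, rtynbr):
    """Build the list of delays (in seconds) before each save attempt.

    Attempt 1 is immediate; RTYNBR retries follow at RTYDLYITV1-second
    intervals, then RTYNBR more at RTYDLYITV2-second intervals.
    """
    delays = [0]                       # initial attempt
    delays += [rtydlyitv1] * rtynbr    # first retry cycle
    delays += [rtydlyitv2] * rtynbr    # second retry cycle
    return delays

sched = retry_schedule(5, 300, 3)      # shipped default values
print(len(sched))                      # 7 save attempts
print(sum(sched))                      # 915 seconds of delay between attempts
```

This accounts for roughly 15 minutes of delay between attempts; the approximately 20-minute window cited above presumably also includes time spent in the save attempts themselves, such as the save-while-active wait time.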

Considerations for save-while-active


If a file is being saved and it shares a journal with another file that has uncommitted
transactions, then the file may be successfully saved by using a normal (non save-
while-active) save. This assumes that the file being saved does not have
uncommitted transactions. If you disable save-while-active, attempts to save any type
of object will use a normal save.
In addition to providing the ability to enable the use of save-while-active for object
types other than *FILE, MIMIX provides the abilities to control the wait time when
using save-while-active or to disable the use of save-while-active for all object types.
Save-while-active wait time
For the default (*FILE objects), MIMIX uses save-while-active with a wait time of 120
seconds on the initial save attempt. MIMIX then uses normal (non save-while-active)
processing on all subsequent save attempts if the initial save attempt fails.
You can configure the save-while-active wait time when specifying to use save-while-
active for the initial save attempt of a *FILE, a DLO, or an IFS object. When
specifying to use save-while-active, the first attempt to save the object after delaying
the amount of time configured for the Second retry delay interval (RTYDLYITV2)
value will also use save-while-active. All other attempts to save the object will use a
normal save.
Note: Although MIMIX has the capability to replicate DLOs using save/restore
techniques, it is recommended that DLOs be replicated using optimized
techniques, which can be configured using the DLO transmission method
under Object processing in the data group definition.

Types of save-while-active options


MIMIX uses the configuration value (DGSWAT) to select the type of save-while-active
option to be used when saving objects. You can view and change these configuration
values for a data group through an interface such as SQL or DFU.
DGSWAT: Save-while-active type. You can specify the following values:
• A value of 0 (the default) indicates that save-while-active is to be used when
saving files, with a save-while-active wait time of 120 seconds. For DLOs and IFS
objects, a normal save will be attempted.
• A value of 1 through 99999 indicates that save-while-active is to be used when
saving files, DLOs and IFS objects. The value specified will be used as the save-
while-active wait time, such as when passed to the SAVACTWAIT parameter on
the SAVOBJ and SAVDLO commands.
• A value of -1 indicates that save-while-active is disabled and is not to be used
when saving files, DLOs or IFS objects. Normal saves will always be used to save
any type of object.

Example configurations
The following examples describe the SQL statements that could be used to view or
set the configuration settings for a data group definition (data group name, system 1
name, system 2 name) of MYDGDFN, SYS1, SYS2.
Example - Viewing: Use this SQL statement to view the values for the data group
definition:
SELECT DGDGN, DGSYS, DGSYS2, DGSWAT FROM MIMIX/DM0200P WHERE
DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'
Example - Disabling: If you want to modify the values for a data group definition to
disable use of save-while-active for a data group and use a normal save, you could
use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=-1 WHERE DGDGN='MYDGDFN' AND
DGSYS='SYS1' AND DGSYS2='SYS2'
Example - Modifying: If you want to modify a data group definition to enable use of
save-while-active with a wait time of 30 seconds for files, DLOs and IFS objects, you
could use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=30 WHERE DGDGN='MYDGDFN' AND
DGSYS='SYS1' AND DGSYS2='SYS2'
Note: You only have to make this change on the management system; the network
system will be automatically updated by MIMIX.

CHAPTER 17
Object selection for Compare and Synchronize commands

Many of the Compare and Synchronize commands, which provide underlying support
for auditing, use an enhanced set of common parameters and a common processing
methodology that is collectively referred to as ‘object selection.’ Object selection
provides powerful, granular capability for selecting objects by data group, object
selection parameters, or a combination.
Table 47 identifies the commands and audits that use this object selection capability.

Table 47. Commands and audits that use MIMIX object selection.

  Commands                              Audits
  Compare File Attributes (CMPFILA)     #FILATR, #FILATRMBR
  Compare Object Attributes (CMPOBJA)   #OBJATR
  Compare IFS Attributes (CMPIFSA)      #IFSATR
  Compare DLO Attributes (CMPDLOA)      #DLOATR
  Compare File Data (CMPFILDTA)         #FILDTA
  Compare Record Count (CMPRCDCNT)      #MBRRCDCNT
  Synchronize Object (SYNCOBJ)
  Synchronize IFS Object (SYNCIFS)
  Synchronize DLO (SYNCDLO)

  Note: Audits use object selection when submitted manually or as an
  automatically scheduled audit. Prioritized auditing does not use this
  method of object selection.

The topics in this chapter include:


• “Object selection process” on page 426 describes object selection which interacts
with your input from a command so that the objects you expect are selected for
processing.
• “Parameters for specifying object selectors” on page 429 describes object
selectors and elements which allow you to work with classes of objects
• “Object selection examples” on page 434 provides examples and graphics with
detailed information about object selection processing, object order precedence,
and subtree rules.
• “Report types and output formats” on page 444 describes the output of compare
commands: spooled files and output files (outfiles).


Object selection process


It is important to be able to predict the manner in which object selection interacts with
your input from a command so that the objects you expect are selected for
processing.
The object selection capability provides you with the option to select objects by data
group, object selection parameter, or a combination. Object selection supports four
classes of objects: files, objects, IFS objects, and DLOs.


The object selection process takes a candidate group of objects, subsets them as
defined by a list of object selectors, and produces a list of objects to be processed.
Figure 24 illustrates the process flow for object selection.

Figure 24. Object selection process flow

Candidate objects are those objects eligible for selection. They are input to the
object selection process. Initially, candidate objects consist of all objects on the
system. Based on the command, the set of candidate objects may be narrowed down
to objects of a particular class (such as IFS objects).


The values specified on the command determine the object selectors used to further
refine the list of candidate objects in the class. An object selector identifies an object
or group of objects. Object selectors can come from the configuration information for
a specified data group, from items specified in the object selector parameter, or both.
MIMIX processing for object selection consists of two distinct steps. Depending on
what is specified on the command, one or both steps may occur.
The first major selection step is optional and is performed only if a data group
definition is entered on the command. In that case, data group entries are the source
for object selectors. Data group entries represent one of four classes of objects: files,
library-based objects, IFS objects, and DLOs. Only those entries that correspond to
the class associated with the command are used. The data group entries subset the
list of candidate objects for the class to only those objects that are eligible for
replication by the data group.
Note: Only explicitly identified IFS objects and DLOs that are eligible for
replication are included. The audits and commands which use this method of
object selection do not include any implicitly identified parent objects for IFS or
DLO objects.
If the command specifies a data group and items on the object selection parameter,
the data group entries are processed first to determine an intermediate set of
candidate objects that are eligible for replication by the data group. That intermediate
set is input to the second major selection step. The second step then uses the input
specified on the object selection parameter to further subset the objects selected by
the data group entries.
If no data group is specified on the data group definition parameter, the object
selection parameter can be used independently to select from all objects on the
system.
The second major object selection step subsets the candidate objects based on
Object selectors from the command’s object selector parameter (file, object, IFS
object, or DLO). Up to 300 object selectors may be specified on the parameter. If
none are specified, the default is to select all candidate objects.
Note: A single object selector can select multiple objects through the use of generic
names and special values such as *ALL, so the resulting object list can easily
exceed the limit of 300 object selectors that can be entered on a command.
The selection parameter is separate and distinct from the data group
configuration entries. If a data group is specified, the possible object selectors are 1
to N, where N is defined by the number of data group entries. The remaining
candidate objects make up the resultant list of objects to be processed.
Each object selector consists of multiple object selector elements, which serve as
filters on the object selector. The object selector elements vary by object class.
Elements provide information about the object such as its name, an indicator of
whether the objects should be included in or omitted from processing, and name
mapping for dual-system and single-system environments. See Table 48 for a list of
object selector elements by object class.


Order precedence
Object selectors are always processed in a well-defined sequence, which is important
when an object matches more than one selector.
Selectors from a data group follow data group rules and are processed in most- to
least-specific order. Selectors from the object selection parameter are always
processed last to first. If a candidate object matches more than one object selector,
the last matching selector in the list is used.
As a general rule when specifying items on an object selection parameter, first specify
selectors that have a broad scope and then gradually narrow the scope in subsequent
selectors. In an IFS-based command, for example, include /A/B* and then omit /A/B1.
“Object selection examples” on page 434 illustrates the precedence of object
selection.
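The last-match-wins rule and the include/omit interaction can be sketched as follows (illustrative Python with simplified generic-name matching; not actual MIMIX code):

```python
def matches(name, pattern):
    """Generic-name match: 'B*' matches names beginning with B; otherwise exact."""
    if pattern.endswith('*'):
        return name.startswith(pattern[:-1])
    return name == pattern

def is_selected(path, selectors):
    """Selectors are (pattern, 'include'|'omit') pairs in the order entered on
    the command; the LAST matching selector wins."""
    decision = None
    for pattern, action in selectors:   # later entries override earlier ones
        if matches(path, pattern):
            decision = action
    return decision == 'include'

# Broad include first, then a narrower omit (the IFS example above):
selectors = [('/A/B*', 'include'), ('/A/B1', 'omit')]
print(is_selected('/A/B2', selectors))  # True  - matches only the include
print(is_selected('/A/B1', selectors))  # False - the omit is the last match
```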
For each object selector, the elements are checked according to a priority defined for
the object class. The most specific element is checked for a match first, then the
subsequent elements are checked according to their priority. For additional, detailed
information about order precedence and priority of elements, see the following topics:
• “How MIMIX uses object entries to evaluate journal entries for replication” on
page 101
• “Identifying IFS objects for replication” on page 116
• “How MIMIX uses DLO entries to evaluate journal entries for replication” on
page 122
• “Processing variations for common operations” on page 129

Parameters for specifying object selectors


The object selectors and elements allow you to work with classes of objects. These
objects can be library-based, directory-based, or folder-based. An object selector
consists of several elements that identify an object or group of objects, indicates if
those objects should be included in or omitted from processing, and may describe
name mapping for those objects. The elements vary, depending on the class of
objects with which a particular command works.
Library-based selection allows you to work with files or objects based on object name,
library name, member name, object type, or object attribute. Directory-based
selection allows you to work with objects based on an IFS object path name and
includes a subtree option that determines the scope of directory-based objects to
include. Folder-based selection allows you to work with objects based on DLO path
name. Folder-based selection also includes a subtree object selector.
Object selection supports generic object name values for all object classes. A generic
name is a character string that contains one or more characters followed by an
asterisk (*). When a generic name is specified, all candidate objects that match the
generic name are selected.


For all classes of objects, you can specify as many as 300 object selectors. However,
the specific object selector elements that you can specify on the command are
determined by the class of object.
Object selector elements provide four functions:
• Object identification elements define the selected object by name, including
generic name specifications.
• Filtering elements provide additional filtering capability for candidate objects.
• Name mapping elements are required primarily for environments where objects
exist in different libraries or paths.
• Include or omit elements identify whether the object should be processed or
explicitly excluded from processing.
Table 48 lists object selection elements by function and identifies which elements are
available on the commands.

Table 48. Object selection parameters and parameter elements by class

  Class:          File                  Library-based object  IFS object          DLO

  Commands:       CMPFILA,              CMPOBJA,              CMPIFSA,            CMPDLOA,
                  CMPFILDTA,            SYNCOBJ               SYNCIFS             SYNCDLO
                  CMPRCDCNT (1)

  Parameter:      FILE                  OBJ                   OBJ                 DLO

  Identification  File                  Object                Path                Path
  elements:       Library               Library               Subtree             Subtree
                  Member                                      Name pattern        Name pattern

  Filtering       Attribute (1)         Type                  Type                Type
  elements:                             Attribute                                 Owner

  Processing      Include/Omit          Include/Omit          Include/Omit        Include/Omit
  elements:

  Name mapping    System 2 file (1)     System 2 object       System 2 path       System 2 path
  elements:       System 2 library (1)  System 2 library      System 2 name       System 2 name
                                                              pattern             pattern

  1. The Compare Record Count (CMPRCDCNT) command does not support elements for
     attributes or name mapping.

File name and object name elements: The File name and Object name elements
allow you to identify a file or object by name. These elements allow you to choose a
specific name, a generic name, or the special value *ALL.
Using a generic name, you can select a group of files or objects based on a common
character string. If you want to work with all objects beginning with the letter A, for
example, you would specify A* for the object name.
To process all files within the related selection criteria, select *ALL for the file or object
name. When a data group is also specified on the command, a value of *ALL results
in the selection of files and objects defined to that data group by the respective data
group file entries or data group object entries. When no data group is specified on the
command, specifying *ALL with a library name selects only the objects that reside
within the given library.
Library name element: The library name element specifies the name of the library
that contains the files or objects to be included or omitted from the resultant list of
objects. Like the file or object name, this element can be a specific name, a generic
name, or the special value *ALL.
Note: The library value *ALL is supported only when a data group is specified.
Member element: For commands that support the ability to work with file members,
the Member element provides a means to select specific members. The Member
element can be a specific name, a generic name, or the special value *ALL.
Refer to the individual commands for detailed information on member processing.
Object path name (IFS) and DLO path name elements: The Object path name
(IFS) and DLO path name elements identify an object or DLO by path name. They
allow a specific path, a generic path, or the special value *ALL.
Traditionally, DLOs are identified by a folder path and a DLO name. Object selection
uses an element called DLO path, which combines the folder path and the DLO
name.
If you specify a data group, only those objects explicitly defined to that data group by
the respective data group IFS entries or data group DLO entries are selected. The
implicitly defined parent objects within the object path are not selected.
Directory subtree and folder subtree elements: The Directory subtree and Folder
subtree elements allow you to expand the scope of selected objects and include the
descendants of objects identified by the given object or DLO path name. By default,
the subtree element is *NONE, and only the named objects are selected. However, if
*ALL is used, all descendants of the named objects are also selected.
Figure 25 illustrates the hierarchical structure of folders and directories prior to
processing, and is used as the basis for the path, pattern, and subtree examples
shown later in this document. For more information, see the graphics and examples
beginning with “Example subtree” on page 438.

Figure 25. Directory or folder hierarchy

Directory subtree elements for IFS objects: When selecting IFS objects, only the
objects in the file system specified will be included. Object selection will not cross file
system boundaries when processing subtrees with IFS objects. Objects from other file
systems do not need to be explicitly excluded, however you will need to specify if you
want to include objects from other file systems. For more information, see the graphic
and examples beginning with “Example subtree for IFS objects” on page 442.
Name pattern element: The Name pattern element provides a filter on the last
component of the object path name. The Name pattern element can be a specific
name, a generic name, or the special value *ALL.
If you specify a pattern of $*, for example, only those candidate objects with names
beginning with $ that reside in the named DLO path or IFS object path are selected.
Keep in mind that improper use of the Name pattern element can have undesirable
results. Let us assume you specified a path name of /corporate, a subtree of *NONE,
and pattern of $*. Since the path name, /corporate, does not match the pattern of $*,
the object selector will identify no objects. Thus, the Name pattern element is
generally most useful when subtree is *ALL.
For more information, see the “Example Name pattern” on page 441.
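The /corporate pitfall above can be illustrated with a simplified sketch (illustrative Python; real MIMIX pattern handling is more extensive):

```python
import posixpath

def select_by_pattern(candidates, path, subtree, pattern):
    """Apply a Name pattern filter to a path selector. With subtree *NONE only
    the named object itself is a candidate, so the pattern must match the last
    component of the named path itself."""
    def matches(name):
        return name.startswith(pattern[:-1]) if pattern.endswith('*') else name == pattern
    if subtree == '*NONE':
        pool = [p for p in candidates if p == path]
    else:  # '*ALL' - the named object plus all of its descendants
        pool = [p for p in candidates if p == path or p.startswith(path + '/')]
    return [p for p in pool if matches(posixpath.basename(p))]

objs = ['/corporate', '/corporate/$sales', '/corporate/plans']
print(select_by_pattern(objs, '/corporate', '*NONE', '$*'))  # [] - 'corporate' fails $*
print(select_by_pattern(objs, '/corporate', '*ALL', '$*'))   # ['/corporate/$sales']
```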
Object type element: The Object type element provides the ability to filter objects
based on an object type. The object type is valid for library-based objects, IFS
objects, or DLOs, and can be a specific value or *ALL. The list of allowable values
varies by object class.


When you specify *ALL, only those object types which MIMIX supports for replication
are included. For a list of replicated object types, see “Supported object types for
system journal replication” on page 635.
Supported object types for CMPIFSA and SYNCIFS are listed in Table 49.

Table 49. Supported object types for CMPIFSA and SYNCIFS

  Object type   Description
  *ALL          All directories, stream files, and symbolic links are selected
  *DIR          Directories
  *STMF         Stream files
  *SYMLNK       Symbolic links

Supported object types for CMPDLOA and SYNCDLO are listed in Table 50.

Table 50. Supported DLO types for CMPDLOA and SYNCDLO

  DLO type   Description
  *ALL       All documents and folders are selected
  *DOC       Documents
  *FLR       Folders

For unique object types supported by a specific command, see the individual
commands.
Object attribute element: The Object attribute element provides the ability to filter
based on the extended object attribute. For example, file attributes include PF, LF, SAVF,
and DSPF, and program attributes include CLP and RPG. The attribute can be a
specific value, a generic value, or *ALL.
Although any value can be entered on the Object attribute element, a list of supported
attributes is available on the command. Refer to the individual commands for the list
of supported attributes.
Owner element: The Owner element allows you to filter DLOs based on DLO owner.
The Owner element can be a specific name or the special value *ALL. Only candidate
DLOs owned by the designated user profile are selected.
Include or omit element: The Include or omit element determines if candidate
objects are included in or omitted from the resultant list of objects to be processed by
the command.
Included entries are added to the resultant list and become candidate objects for
further processing. Omitted entries are not added to the list and are excluded from
further processing.
System 2 file and system 2 object elements: The System 2 file and System 2
object elements provide support for name mapping. Name mapping is useful when
working with multiple sets of files or objects in a dual-system or single-system
environment.
This element may be a specific name or the special value *FILE1 for files or *OBJ1 for
objects. If the File or Object element is not a specific name, then you must use the
default value of *FILE1 or *OBJ1. This specification indicates that the name of the file
or object on system 2 is the same as on system 1 and that no name mapping occurs.
Generic values are not supported for the system 2 value if a generic value was
specified on the File or Object parameter.
System 2 library element: The System 2 library element allows you to specify a
system 2 library name that differs from the system 1 library name, providing name
mapping between files or objects in different libraries.
This element may be a specific name or the special value *LIB1. If the System 2
library element is not a specific name, then you must use the default value of *LIB1.
This specification indicates that the name of the library on system 2 is the same as on
system 1 and that no name mapping occurs. Generic values are not supported for the
system 2 value if a generic value was specified on the Library object selector.
System 2 object path name and system 2 DLO path name elements: The System
2 object path name and System 2 DLO path name elements support name mapping
for the path specified in the Object path name or DLO path name element. Name
mapping is useful when working with two sets of IFS objects or DLOs in different
paths in either a dual-system or single-system environment.
Generic values are not supported for the system 2 value if you specified a generic
value for the IFS Object or DLO element. Instead, you must choose the default values
of *OBJ1 for IFS objects or *DLO1 for DLOs. These values indicate that the name of
the file or object on system 2 is the same as that value on system 1. The default
provides support for a two-system environment without name mapping.
System 2 name pattern element: The System 2 name pattern provides support for
name mapping for the descendents of the path specified for the Object path name or
DLO path name element.
The System 2 name pattern element may be a specific name or the special value
*PATTERN1. If the Object path name or DLO path name element is not a specific
name, then you must use the default value of *PATTERN1. This specification
indicates that no name mapping occurs. Generic values are not supported for the
System 2 name pattern element if you specified a generic value for the Name pattern
element.

Object selection examples


In this section, examples and graphics provide you with detailed information about
object selection processing, object order precedence, and subtree rules. These
illustrations show how objects are selected based on specific selection criteria.


Processing example with a data group and an object selection parameter


Using the CMPOBJA command, let us assume you want to compare the objects
defined to data group DG1. For simplicity, all candidate objects in this example are
defined to library LIBX.
Table 51 lists all candidate objects on your system.

Table 51. Candidate objects on system

Object Library Object type

ABC LIBX *FILE

AB LIBX *SBSD

A LIBX *OUTQ

DEF LIBX *PGM

DE LIBX *DTAARA

D LIBX *CMD

Next, Table 52 represents the object selectors based on the data group object entry
configuration for data group DG1. Objects are evaluated against data group entries in
the same order of precedence used by replication processes.

Table 52. Object selectors from data group entries for data group DG1

Order Processed Object Library Object type Include or omit

3 A* LIBX *ALL *INCLUDE

2 ABC* LIBX *FILE *OMIT

1 DEF LIBX *JOBQ *INCLUDE

The object selectors from the data group subset the candidate object list, resulting in
the list of objects defined to the data group shown in Table 53. This list is internal to
MIMIX and not visible to users.

Table 53. Objects selected by data group DG1

Object Library Object type

A LIBX *OUTQ

AB LIBX *SBSD

DEF LIBX *JOBQ

Note: Although job queue DEF in library LIBX did not appear in Table 51, it would be
added to the list of candidate objects when you specify a data group for some
commands that support object selection. These commands are required to
identify or report candidate objects that do not exist.


Perhaps you now want to include or omit specific objects from the filtered candidate
objects listed in Table 53. Table 54 shows the object selectors to be processed based
on the values specified on the object selection parameter. These object selectors
serve as an additional filter on the candidate objects.

Table 54. Object selectors for CMPOBJA object selection parameter

Order Processed Object Library Object type Include or omit

1 *ALL LIBX *OUTQ *INCLUDE

2 *ALL LIBX *SBSD *INCLUDE

3 *ALL LIBX *JOBQ *OMIT

The objects compared by the CMPOBJA command are shown in Table 55. These are
the result of the candidate objects selected by the data group (Table 53) that were
subsequently filtered by the object selectors specified for the Object parameter on the
CMPOBJA command (Table 54).

Table 55. Resultant list of objects to be processed

Object Library Object type

A LIBX *OUTQ

AB LIBX *SBSD

In this next example, the CMPOBJA command is used to compare a set of objects.
The input source is an object selection parameter; no data group is specified.
The data in the following tables show how candidate objects would be processed in
order to achieve a resultant list of objects.
Table 56 lists all the candidate objects on your system.

Table 56. Candidate objects on system

Object Library Object type

ABC LIBX *FILE

AB LIBX *SBSD

A LIBX *OUTQ

DEFG LIBX *PGM

DEF LIBX *PGM

DE LIBX *DTAARA

D LIBX *CMD

Table 57 represents the object selectors chosen on the object selection parameter.
The sequence column identifies the order in which object selectors were entered. The
object selectors serve as filters to the candidate objects listed in Table 56.


The last object selector entered on the command is the first one used when
determining whether or not an object matches a selector. Thus, generic object
selectors with the broadest scope, such as A*, should be specified ahead of more
specific generic entries, such as ABC*. Specific entries should be specified last.

Table 57. Object selectors entered on CMPOBJA selection parameter

Sequence Entered Object Library Object type Include or omit

1 A* LIBX *ALL *INCLUDE

2 D* LIBX *ALL *INCLUDE

3 ABC* LIBX *ALL *OMIT

4 *ALL LIBX *PGM *OMIT

5 DEFG LIBX *PGM *INCLUDE

Table 58 illustrates how the candidate objects are selected.

Table 58. Candidate objects selected by object selectors

Sequence Processed Object Library Object type Include or omit Selected candidate objects

5 DEFG LIBX *PGM *INCLUDE DEFG

4 *ALL LIBX *PGM *OMIT DEF

3 ABC* LIBX *ALL *OMIT ABC

2 D* LIBX *ALL *INCLUDE D, DE

1 A* LIBX *ALL *INCLUDE A, AB

Table 59 represents the included objects from Table 58. This filtered set of candidate
objects is the resultant list of objects to be processed by the CMPOBJA command.

Table 59. Resultant list of objects to be processed

Object Library Object type

A LIBX *OUTQ

AB LIBX *SBSD

D LIBX *CMD

DE LIBX *DTAARA

DEFG LIBX *PGM
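The selection logic illustrated by these tables can be sketched in code. The following Python fragment is an illustration of the documented rules only, not MIMIX internals: selectors are checked from last entered to first, and the first selector that matches a candidate object decides whether it is included or omitted.

```python
import fnmatch

def select(candidates, selectors):
    """candidates: (name, library, type) tuples.
    selectors: (pattern, library, type, action) tuples, in the order
    they were entered on the command."""
    result = []
    for name, lib, otype in candidates:
        # The last selector entered is the first one checked.
        for pat, slib, stype, action in reversed(selectors):
            name_ok = fnmatch.fnmatch(name, "*" if pat == "*ALL" else pat)
            if name_ok and slib == lib and stype in ("*ALL", otype):
                if action == "*INCLUDE":
                    result.append((name, lib, otype))
                break  # first matching selector decides; an *OMIT drops the object
    return result
```

Given the candidates in Table 56 and the selectors in Table 57, this sketch selects A, AB, D, DE, and DEFG, matching the resultant list in Table 59.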


Example subtree
In the following graphics, the shaded area shows the objects identified by the
combination of the Object path name and Subtree elements of the Object parameter
for an IFS command. Circled objects represent the final list of objects selected for
processing.
Figure 26 illustrates a path name value of /corporate/accounting, a subtree
specification of *ALL, a pattern value of *ALL, and an object type of *ALL. The
candidate objects selected include /corporate/accounting and all descendants.

Figure 26. Directory of /corporate/accounting/

Figure 27 shows a path name of /corporate/accounting/*, a subtree specification of
*NONE, a pattern value of *ALL, and an object type of *ALL. In this case, no additional
filtering is performed on the objects identified by the path and subtree. The candidate
objects selected consist of the specified objects only.

Figure 27. Subtree *NONE for /corporate/accounting/*


Figure 28 displays a path name of /corporate/accounting/*, a subtree specification of
*ALL, a pattern value of *ALL, and an object type of *ALL. All descendants of
/corporate/accounting/* are selected.

Figure 28. Subtree *ALL for /corporate/accounting/*


Figure 29 is a subset of Figure 28. It shows a path name of /corporate/accounting, a
subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL;
only the specified directory is selected.

Figure 29. Subtree *NONE for /corporate/accounting

Example Name pattern


The Name pattern element acts as a filter on the last component of the object path
name. Figure 30 specifies a path name of /corporate/accounting, a subtree
specification of *ALL, a pattern value of $*, and an object type of *ALL. In this
scenario, only those candidate objects that match the generic pattern ($123, $236,
and $895) are selected for processing.

Figure 30. Pattern $* for /corporate/accounting
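As described above, the pattern filters on the final path component only. A minimal Python sketch of that filtering behavior (an illustration based on this description, not MIMIX code):

```python
import fnmatch
import posixpath

def filter_by_name_pattern(paths, pattern):
    """Keep only objects whose last path component matches the
    generic pattern; the special value *ALL keeps everything."""
    if pattern == "*ALL":
        return list(paths)
    return [p for p in paths
            if fnmatch.fnmatch(posixpath.basename(p), pattern)]
```

With the objects under /corporate/accounting and a pattern of $*, only the $-prefixed names survive the filter.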

Example subtree for IFS objects


In the following graphic, the shaded areas show file systems containing IFS objects.
When selecting objects in file systems that contain IFS objects, only the objects in the
file system specified will be included. The non-generic part of a path name indicates
the file system to be searched. Object selection does not cross file system boundaries
when processing subtrees with IFS objects.


Figure 31 illustrates a directory with a subtree that contains IFS objects. The shaded
areas are the file systems. Table 60 contains examples showing what file systems
would be selected with the path names specified and a subtree specification of *ALL.

Figure 31. Directory with a subtree containing IFS objects.

Table 60. Examples of specified paths and objects selected for Figure 31

Path specified File system Objects selected

/qsy* Root file system /qsyabc

/PARIS/* Root file system in independent ASP PARIS /PARIS/qsyabc

/PARIS* Root file system None
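The rule that subtree processing stays within one file system is analogous to a directory walk that prunes any subdirectory residing on a different device. A rough Python sketch of that general technique follows (an analogy only, not MIMIX code):

```python
import os

def walk_one_filesystem(root):
    """Yield file paths under root, pruning any subdirectory that
    resides on a different file system (device) than root."""
    root_dev = os.stat(root).st_dev
    for dirpath, dirnames, filenames in os.walk(root):
        # Drop subdirectories that cross a file system boundary.
        dirnames[:] = [d for d in dirnames
                       if os.stat(os.path.join(dirpath, d)).st_dev == root_dev]
        for name in filenames:
            yield os.path.join(dirpath, name)
```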

Report types and output formats
The following compare commands support output in spooled files and in output files
(outfiles): the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA,
CMPDLOA), the Compare Record Count (CMPRCDCNT) command, the Compare
File Data (CMPFILDTA) command, and the Check DG File Entries (CHKDGFE)
command.
The spooled output is a human-readable print format that is intended to be delivered
as a report. The output file, on the other hand, is primarily intended for automated
purposes such as automatic synchronization. It is also a format that is easily
processed using SQL queries.
The level of information in the output is determined by the value specified on the
Report type parameter. These values vary by command.
For the CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA commands, the levels of
output available are *DIF, *SUMMARY, *OPTIMIZED, and *ALL.
• The report type of *DIF includes information on objects with detected differences.
• A report type of *SUMMARY provides a summary of all objects compared as well
as an object-level indication of whether differences were detected. *SUMMARY
does not, however, include details about specific attribute differences.
• Specifying *ALL for the report type will provide you with information found on both
*DIF and *SUMMARY reports.
• The value *OPTIMIZED creates a combined report that indicates at an object level
when the objects are equal. For objects that are not equal, the individual attributes
that are not equal are included in the report. Audits based on the compare
attribute commands use this report type to return results.
The CMPRCDCNT command supports the *DIF and *ALL report types. The report
type of *DIF includes information on objects with detected differences. Specifying
*ALL for the report type will provide you with information found on all objects and
attributes that were compared.
The CMPFILDTA command supports the *DIF and *ALL report types, as well as *RRN. The
*RRN value allows you to output, using the MXCMPFILR outfile format, the relative
record number of the first 1,000 objects that failed to compare. Using this value can
help resolve situations where a discrepancy is known to exist, but you are unsure
which system contains the correct data. In this case, the *RRN value provides
information that enables you to display the specific records on the two systems and to
determine the system on which the file should be repaired.

Spooled files
The spooled output is generated when a value of *PRINT is specified on the Output
parameter. The spooled output consists of four main sections—the input or header
section, the object selection list section, the differences section, and the summary
section.
First, the header section of the spooled report includes all of the input values specified
on the command, including the data group value (DGDFN), comparison level
(CMPLVL), report type (RPTTYPE), attributes to compare (CMPATR), actual
attributes compared, number of files, objects, IFS objects or DLOs compared, and
number of detected differences. It also includes a legend that describes the special
values used throughout the report.
The second section of the report is the object selection list. This section lists all of the
object selection entries specified on the comparison command. Similar to the header
section, it provides details on the input values specified on the command.
The detail section is the third section of the report, and provides details on the objects
and attributes compared. The level of detail in this section is determined by the report
type specified on the command. A report type value of *ALL will list all objects
compared, and will begin with a summary status that indicates whether or not
differences were detected. The summary row indicates the overall status of the object
compared. Following the summary row, each attribute compared is listed—along with
the status of the attribute and the attribute value. In the event the attribute compared
is an indicator, a special value of *INDONLY will be displayed in the value columns.
A report type value of *DIF will list details only for those objects with detected
attribute differences. A value of *SUMMARY will not include the detail section for any
object.
The fourth section of the report is the summary, which provides a one-row summary
for each object compared. Each row includes an indicator showing whether or not
attribute differences were detected.

Outfiles
The output file is generated when a value of *OUTFILE is specified on the Output
parameter. Similar to the spooled output, the level of output in the output file is
dependent on the report type value specified on the Report type parameter.
Each command is shipped with an outfile template that uses a normalized database
to deliver a self-defined record, or row, for every attribute you compare. Key
information, including the attribute type, data group name, timestamp, command
name, and system 1 and system 2 values, helps define each row. A summary row
precedes the attribute rows. The normalized database feature ensures that new
object attributes can be added to the audit capabilities without disruption to current
automation processing.
The template files for the various commands are located in the MIMIX product library.


CHAPTER 18 Comparing attributes

This chapter describes the commands that compare attributes: Compare File
Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS
Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA). These commands
are designed to audit the attributes, or characteristics, of the objects within your
environment and report on the status of replicated objects. Collectively, these
commands are referred to as the compare attributes commands.
You are already using the compare attributes commands when they are called by
audits. When called by an audit and used in combination with the automatic recovery
capabilities of audits, the compare attributes commands provide robust functionality to
help you determine whether your system is in a state to ensure a successful rollover
for planned events or failover for unplanned events.
The topics in this chapter include:
• “About the Compare Attributes commands” on page 446 describes the unique
features of the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA,
and CMPDLOA).
• “Comparing file and member attributes” on page 450 includes the procedure to
compare the attributes of files and members.
• “Comparing object attributes” on page 453 includes the procedure to compare
object attributes.
• “Comparing IFS object attributes” on page 456 includes the procedure to compare
IFS object attributes.
• “Comparing DLO attributes” on page 459 includes the procedure to compare DLO
attributes.

About the Compare Attributes commands


With the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and
CMPDLOA), you have significant flexibility in selecting objects for comparison, the
attributes to be compared, and the format in which the resulting report is created.
Each command generates a candidate list of objects on both systems and can detect
objects missing from either system. For each object compared, the command checks
for the existence of the object on the source and target systems and then compares
the attributes specified on the command. The results from the comparisons performed
are placed in a report.
Each command offers several unique features as well.
• CMPFILA provides significant capability to audit file-based attributes such as
triggers, constraints, ownership, authority, database relationships, and the like.
Although the CMPFILA command does not specifically compare the data within
the database file, it does check attributes such as record counts, deleted records,


and others that check the size of data within a file. Comparing these attributes
provides you with assurance that files are most likely synchronized.
• The CMPOBJA command supports many attributes important to other library-
based objects, including extended attributes. Extended attributes are attributes
unique to given objects, such as auto-start job entries for subsystems.
• The CMPIFSA and CMPDLOA commands provide enhanced audit capability for
IFS objects and DLOs, respectively.

Choices for selecting objects to compare


You can select objects to compare by using a data group, the object selection
parameters, or both. The compare attributes commands do not require active data
groups to run.
• By data group only: If you specify only by data group, all of the objects of the
same class as the command that are within the name space configured for the
data group are compared. For example, specifying a data group on the CMPIFSA
command would compare all IFS objects in the name space created by data group
IFS entries associated with the data group.
• By object selection parameters only: You can compare objects that are not
replicated by a data group. By specifying *NONE for the data group and specifying
objects on the object selection parameters, you define a name space—the library
for CMPFILA or CMPOBJA, or the directory path for CMPIFSA or CMPDLOA.
Detailed information about object selection is available in “Object selection for
Compare and Synchronize commands” on page 425.
• By data group and object selection parameters: When you specify a data
group name as well as values on the object selection parameters, the values
specified in object selection parameters act as a filter for the items defined to the
data group.

Unique parameters
The following parameters for object selection are unique to the compare attributes
commands and allow you to specify an additional level of detail when comparing
objects or files.
Unique File and Object elements: The following are unique elements on the File
parameter (CMPFILA command) and Objects parameter (CMPOBJA command):
• Member: On the CMPFILA command, the value specified on the Member
element is only used when *MBR is also specified on the Comparison level
parameter.
• Object attribute: The Object attribute element enables you to select particular
characteristics of an object or file, and provides a level of filtering. For details, see
“CMPFILA supported object attributes for *FILE objects” on page 449 and
“CMPOBJA supported object attributes for *FILE objects” on page 449.
System 2: The System 2 parameter identifies the remote system name, and
represents the system to which objects on the local system are compared.
This parameter is ignored when a data group is specified, since the system 2


information is derived from the data group. A value is required if no data group is
specified.
Comparison level (CMPFILA only): The Comparison level parameter indicates
whether attributes are compared at the file level or at the member level.
System 1 ASP group and System 2 ASP group (CMPFILA and CMPOBJA only):
The System 1 ASP group and System 2 ASP group parameters identify the name of
the auxiliary storage pool (ASP) group where objects configured for replication may
reside. The ASP group name is the name of the primary ASP device within the ASP
group. This parameter is ignored when a data group is specified.

Choices for selecting attributes to compare


The Attributes to compare parameter allows you to select which combination of
attributes to compare.
Each compare attribute command supports an extensive list of attributes. Each
command provides the ability to select pre-determined sets of attributes (basic or
extended), all supported attributes, as well as any other unique combination of
attributes that you require.
The basic set of attributes is intended to compare attributes that provide an indication
that the objects compared are the same, while avoiding attributes that may be
different but do not provide a valid indication that objects are not synchronized, such
as the create timestamp (CRTTSP) attribute. Some objects, for example, cannot be
replicated using IBM's save and restore technology. Therefore, the creation date
established on the source system is not maintained on the target system during the
replication process. The comparison commands take this factor into consideration
and check the creation date for only those objects whose values are retained during
replication.
The extended set of attributes includes the basic set of attributes and some additional
attributes.
The following topics list the supported attributes for each command:
• “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on
page 696
• “Attributes compared and expected results - #OBJATR audit” on page 701
• “Attributes compared and expected results - #IFSATR audit” on page 710
• “Attributes compared and expected results - #DLOATR audit” on page 713
All comparison attributes supported by a specific compare attribute command may not
be applicable for all object types supported by the command. For example,
CMPOBJA supports a large number of object types and related comparison
attributes. There are many cases where a specific comparison attribute is only
valid for a particular object type.
Comparison attributes not supported by a given object type are ignored. For example,
auto-start job entries are a valid comparison attribute for subsystem descriptions
(*SBSD); for all other object types selected by the request, the auto-start job entry
attribute is ignored.
If a data group is specified on a compare request, configuration data is used when
comparing objects that are identified for replication through the system journal. If an
object’s configured object auditing value (OBJAUD) is *NONE, its attribute changes
are not replicated. When differences are detected on attributes of such an object, they
are reported as *EC (equal configuration) instead of being reported as *NE (not
equal).
For *FILE objects configured for replication through the system journal and configured
to omit T-ZC journal entries, also see “Omit content (OMTDTA) and comparison
commands” on page 417.

CMPFILA supported object attributes for *FILE objects


When you specify a data group to compare, the CMPFILA command obtains
information from the configured data group entries for all PF and LF files and their
subtypes. Those files that are within the name space created by data group entries
are compared.
Table 61 lists the extended attributes for objects of type *FILE that are supported as
values on the Object attribute element.

Table 61. CMPFILA supported extended attributes for *FILE objects

Object attribute Description

*ALL All physical and logical file types are selected for processing

LF Logical file

LF38 Files of type LF38

PF Physical file types, including PF, PF-SRC, and PF-DTA

PF-DTA Files of type PF-DTA

PF-SRC Files of type PF-SRC

PF38 Files of type PF38, including PF38, PF38-SRC, and PF38-DTA

PF38-DTA Files of type PF38-DTA

PF38-SRC Files of type PF38-SRC

CMPOBJA supported object attributes for *FILE objects


When you specify a data group to compare, the CMPOBJA command obtains data
group information from the data group object entries. Those objects defined to the
data group object entries are compared.
The default value on the Object attribute element is *ALL, which represents the entire
list of supported attributes. Any value is supported, but a list of recommended
attributes is available in the online help.

Comparing file and member attributes
You can compare file attributes to ensure that files and members needed for
replication exist on both systems or any time you need to verify that files are
synchronized between systems. You can optionally specify that results of the
comparison are placed in an outfile.
Note: If you have automation programs monitoring escape messages for differences
in file attributes, be aware that differences due to active replication (Step 16)
are signaled via a new difference indicator (*UA) and escape message. See
the auditing and reporting topics in this book.
To compare the attributes of files and members, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 1
(Compare file attributes) and press Enter.
3. The Compare File Attributes (CMPFILA) command appears. At the Data group
definition prompts, do one of the following:
• To compare attributes for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare files by name only, specify *NONE and continue with the next step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.


5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Comparison level prompt, accept the default to compare files at a file level
only. Otherwise, specify *MBR to compare files at a member level.
Note: If *FILE is specified, the Member prompt is ignored (see Step 4b).
7. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes based on whether the comparison is at a file or member level or
press F4 to see a valid list of attributes.
8. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 7, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Report type prompt, specify the level of detail for the output report.
12. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 14.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 14.
13. The User data prompt appears if you selected *PRINT or *BOTH in Step 12.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 18.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the Maximum replication lag prompt, specify the maximum amount of time
between when a file in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.

Note: This parameter is only valid when a data group is specified in Step 3.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter
to continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, specify *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.


Comparing object attributes


You can compare object attributes to ensure that objects needed for replication exist
on both systems or any time you need to verify that objects are synchronized between
systems. You can optionally specify that results of the comparison are placed in an
outfile.
Note: If you have automation programs monitoring escape messages for differences
in object attributes, be aware that differences due to active replication
(Step 15) are signaled via a new difference indicator (*UA) and escape
message. See the auditing and reporting topics in this book.
To compare the attributes of objects, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 2
(Compare object attributes) and press Enter.
3. The Compare Object Attributes (CMPOBJA) command appears. At the Data
group definition prompts, do one of the following:
• To compare attributes for all objects defined by the data group object entries for
a particular data group definition, specify the data group name and skip to
Step 6.
• To compare objects by object name only, specify *NONE and continue with the
next step.
• To compare a subset of objects defined to a data group, specify the data group
name and continue with the next step.
4. At the Object prompts, you can specify elements for one or more object selectors
that either identify objects to compare or that act as filters to the objects defined to
the data group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the Object and library prompts, specify the name or the generic value you
want.
b. At the Object type prompt, accept *ALL or specify a specific object type to
compare.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the object and library
names on system 2 are equal to system 1, accept the defaults. Otherwise,
specify the name of the object and library to which objects on the local system
are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing objects not defined
to a data group. If necessary, specify the name of the remote system to which
objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Report type prompt, specify the level of detail for the output report.
11. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 13.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 13.
12. The User data prompt appears if you selected *PRINT or *BOTH in Step 11.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 17.
13. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
15. At the Maximum replication lag prompt, specify the maximum amount of time
between when an object in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.

16. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
17. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
20. To start the comparison, press Enter.
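As with the other compare commands, the prompts above correspond to parameters that can be entered directly. The following sketch is illustrative only: the parameter keywords and the element order within the object selector are assumptions based on the prompt names, so verify them with F4 (Prompt).

```
/* Hedged sketch: compare basic attributes of all *FILE objects in   */
/* library APPLIB for data group MYDG, reporting differences only.   */
/* Keywords and selector element order are assumed.                  */
installation_library/CMPOBJA DGDFN(MYDG) +
    OBJ((APPLIB/*ALL *FILE *ALL *INCLUDE)) +
    CMPATR(*BASIC) RPTTYPE(*DIF) OUTPUT(*PRINT)
```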

Comparing IFS object attributes
You can compare IFS object attributes to ensure that IFS objects needed for
replication exist on both systems or any time you need to verify that IFS objects are
synchronized between systems. You can optionally specify that results of the
comparison are placed in an outfile.
Note: If you have automation programs monitoring for differences in IFS object
attributes, be aware that differences due to active replication (Step 13) are
signaled via a new difference indicator (*UA) and escape message. See the
auditing and reporting topics in this book.
To compare the attributes of IFS objects, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 3
(Compare IFS attributes) and press Enter.
3. The Compare IFS Attributes (CMPIFSA) command appears. At the Data group
definition prompts, do one of the following:
• To compare attributes for all IFS objects defined by the data group IFS object
entries for a particular data group definition, specify the data group name and
skip to Step 6.
• To compare IFS objects by object path name only, specify *NONE and continue
with the next step.
• To compare a subset of IFS objects defined to a data group, specify the data
group name and continue with the next step.
4. At the IFS objects prompts, you can specify elements for one or more object
selectors that either identify IFS objects to compare or that act as filters to the IFS
objects defined to the data group indicated in Step 3. For more information, see
“Object selection for Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the Object path name prompt, accept *ALL or specify the name or the
generic value you want.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
Note: The *ALL default is not valid if a data group is specified on the Data
group definition prompts.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
compare.
e. At the Include or omit prompt, specify the value you want.

f. At the System 2 object path name and System 2 name pattern prompts, if the
IFS object path name and name pattern on system 2 are equal to system 1,
accept the defaults. Otherwise, specify the name of the path name and pattern
to which IFS objects on the local system are compared.
Note: The System 2 object path name and System 2 name pattern values are
ignored if a data group is specified on the Data group definition prompts.
g. Press Enter.
5. The System 2 parameter prompt appears if you are comparing IFS objects not
defined to a data group. If necessary, specify the name of the remote system to
which IFS objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 11.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept
the default to use the command name to identify the spooled output or specify a
unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time
between when an IFS object in the data group changes and when replication of
the change is expected to be complete, or accept *DFT to use the default
maximum time of 300 seconds (5 minutes). You can also specify *NONE, which
indicates that comparisons should occur without consideration for replication in
progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To start the comparison, press Enter.
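The equivalent direct-entry request might look like the following sketch. The keywords and the element order within the IFS object selector are assumptions inferred from the prompt names; verify them with F4 (Prompt) before use.

```
/* Hedged sketch: compare basic attributes of all stream files under */
/* the /home/app directory subtree for data group MYDG, writing the  */
/* results to an outfile. Keywords and element order are assumed.    */
installation_library/CMPIFSA DGDFN(MYDG) +
    OBJ(('/home/app' *ALL *ALL *STMF *INCLUDE)) +
    CMPATR(*BASIC) OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPIFS)
```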

Comparing DLO attributes

You can compare DLO attributes to ensure that DLOs needed for replication exist on
both systems or any time you need to verify that DLOs are synchronized between
systems. You can optionally specify that results of the comparison are placed in an
outfile.
Note: If you have automation programs monitoring escape messages for differences
in DLO attributes, be aware that differences due to active replication (Step 13)
are signaled via a new difference indicator (*UA) and escape message. See
the auditing and reporting topics in this book.
To compare the attributes of DLOs, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 4
(Compare DLO attributes) and press Enter.
3. The Compare DLO Attributes (CMPDLOA) command appears. At the Data group
definition prompts, do one of the following:
• To compare attributes for all DLOs defined by the data group DLO entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare DLOs by path name only, specify *NONE and continue with the
next step.
• To compare a subset of DLOs defined to a data group, specify the data group
name and continue with the next step.
4. At the Document library objects prompts, you can specify elements for one or
more object selectors that either identify DLOs to compare or that act as filters to
the DLOs defined to the data group indicated in Step 3. For more information, see
“Object selection for Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the DLO path name prompt, accept *ALL or specify the name or the generic
value you want.
b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the
scope of DLOs to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the DLO path name.
Note: The *ALL default is not valid if a data group is specified on the Data
group definition prompts.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to
compare.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.

f. At the Include or omit prompt, specify the value you want.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if
the DLO path name and name pattern on system 2 are equal to system 1,
accept the defaults. Otherwise, specify the name of the path name and pattern
to which DLOs on the local system are compared.
Note: The System 2 DLO path name and System 2 DLO name pattern values
are ignored if a data group is specified on the Data group definition
prompts.
h. Press Enter.
5. The System 2 parameter prompt appears if you are comparing DLOs not defined
to a data group. If necessary, specify the name of the remote system to which
DLOs on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 11.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept
the default to use the command name to identify the spooled output or specify a
unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time
between when a DLO in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To start the comparison, press Enter.
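A direct-entry request corresponding to the steps above might look like the following sketch. The keywords and the element order within the DLO selector are assumptions inferred from the prompt names; verify them with F4 (Prompt).

```
/* Hedged sketch: compare basic attributes of all documents in       */
/* folder MYFOLDER for data group MYDG, reporting all compared DLOs. */
/* Keywords and selector element order are assumed.                  */
installation_library/CMPDLOA DGDFN(MYDG) +
    DLO((MYFOLDER *NONE *ALL *DOC *ALL *INCLUDE)) +
    CMPATR(*BASIC) RPTTYPE(*ALL) OUTPUT(*PRINT)
```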

CHAPTER 19
Comparing file record counts and file member data

This chapter describes the features and capabilities of the Compare Record Counts
(CMPRCDCNT) command and the Compare File Data (CMPFILDTA) command.
The topics in this chapter include:
• “Comparing file record counts” on page 462 describes the CMPRCDCNT
command and provides a procedure for performing the comparison.
• “Significant features for comparing file member data” on page 465 identifies
enhanced capabilities available for use when comparing file member data.
• “Considerations for using the CMPFILDTA command” on page 466 describes
recommendations and restrictions of the command. This topic also describes
considerations for security, use with firewalls, comparing records that are not
allocated, as well as comparing records with unique keys, triggers, and
constraints.
• “Specifying CMPFILDTA parameter values” on page 470 provides additional
information about the parameters for selecting file members to compare and using
the unique parameters of this command.
• “Advanced subset options for CMPFILDTA” on page 476 describes how to use the
capability provided by the Advanced subset options (ADVSUBSET) parameter.
• “Ending CMPFILDTA requests” on page 479 describes how to end a CMPFILDTA
request that is in progress and describes the results of ending the job.
• “Comparing file member data - basic procedure (non-active)” on page 481
describes how to compare file data in a data group that is not active.
• “Comparing and repairing file member data - basic procedure” on page 484
describes how to compare and repair file data in a data group that is not active.
• “Comparing and repairing file member data - members on hold (*HLDERR)” on
page 487 describes how to compare and repair file members that are held due to
error using active processing.
• “Comparing file member data using active processing technology” on page 490
describes how to use active processing to compare file member data.
• “Comparing file member data using subsetting options” on page 493 describes
how to use the subset feature of the CMPFILDTA command to compare a portion
of member data at one time.

Comparing file record counts


The Compare Record Counts (CMPRCDCNT) command allows you to compare the
record counts of members of a set of physical files between two systems. This
command compares the number of current records (*CURRDS) and the number of
deleted records (*NBRDLTRCDS) for members of physical files that are defined for
replication by an active data group. In resource-constrained environments, this
capability provides a less-intensive means to gauge whether files are likely to be
synchronized.
Note: Equal record counts suggest but do not guarantee that members are
synchronized. To check for file data differences, use the Compare File Data
(CMPFILDTA) command. To check for attribute differences, use the Compare
File Attributes (CMPFILA) command.
Replication processes must be active for the data group when this command is used.
Members on both systems can be actively modified by applications and by MIMIX
apply processes while this command is running.
For information about the results of a comparison, see “What differences were
detected by #MBRRCDCNT” on page 691.
The #MBRRCDCNT audit calls the CMPRCDCNT command during its compare phase.
Unlike other audits, the #MBRRCDCNT audit does not have an associated recovery
phase. Differences detected by this audit appear as not recovered in the Audit
Summary user interfaces. Any repairs must be undertaken manually, in the following
ways:
• When using Vision Solutions Portal, repair actions are available for specific errors
when viewing the output file for the audit.
• Run the #FILDTA audit for the data group to detect and correct problems.
• Run the Synchronize DG File Entry (SYNCDGFE) command to correct problems.

To compare file record counts


Do the following to compare record counts for an active data group:
1. From a command line, type installation_library/CMPRCDCNT and press
F4 (Prompt).
2. The Compare Record Counts (CMPRCDCNT) display appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 4.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
3. At the File prompts, you can specify elements for one or more object selectors to
act as filters to the files defined to the data group indicated in Step 2. For more
information, see “Object selection for Compare and Synchronize commands” on
page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.

b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Include or omit prompt, specify the value you want.
4. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
5. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 9.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
6. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
7. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
8. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
9. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To start the comparison, press Enter.
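The same request entered directly might look like the following sketch. The parameter keywords are assumptions inferred from the prompt names; verify them with F4 (Prompt).

```
/* Hedged sketch: compare current and deleted record counts for all  */
/* file entries of active data group MYDG, keeping only differences  */
/* in both the report and the outfile. Keywords are assumed.         */
installation_library/CMPRCDCNT DGDFN(MYDG) RPTTYPE(*DIF) +
    OUTPUT(*BOTH) OUTFILE(MYLIB/RCDCNT) BATCH(*YES)
```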

Significant features for comparing file member data


The Compare File Data (CMPFILDTA) command provides the ability to compare data
within members of physical files. The CMPFILDTA command can help you determine
whether files are synchronized and whether your MIMIX environment is prepared for
switching. You can use the CMPFILDTA command interactively or call it from a
program.
Unique features of the CMPFILDTA command include active server technology and
isolated data correction capability. Together, these features enable the detection and
correction of file members that are not synchronized while applications and replication
processes remain active. File members that are held due to an error can also be
compared and repaired.

Repairing data
You can optionally choose to have the CMPFILDTA command repair differences it
detects in member data between systems.
When files are not synchronized, the CMPFILDTA command provides the ability to
resynchronize the file at the record level by sending only the data for the incorrect
member to the target system. (In contrast, the Synchronize DG File Entry
(SYNCDGFE) command would resynchronize the file by transferring all data for the
file from the source system to the target system.)

Active and non-active processing


The Process while active (ACTIVE) parameter determines whether a requested
comparison can occur while application and replication activity is present.
Two modes of operation are available: active and non-active. In non-active mode,
CMPFILDTA assumes that all files are quiesced and performs file comparisons and
repairs without regard to application or replication activity. In active mode, processing
begins in the same manner, performing an internal compare and generating a list of
records that are not synchronized. This list is not reported, however. Instead,
CMPFILDTA checks the mismatched records against the activity that is happening on
the source system and the apply activity that is occurring on the target. If there is a
member that needs repair, CMPFILDTA will then report the error. At that time, the
command will also repair the target file member if *YES was specified on the Repair
parameter.
During active processing of a member, the DB apply threshold (DBAPYTHLD)
parameter can be used to specify what action CMPFILDTA should take if the
database apply session backlog exceeds the threshold warning value configured for
the database apply process.
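The ACTIVE, REPAIR, and DBAPYTHLD parameters described above might be combined as in the following sketch. The parameter names come from the text; the FILE selector format and the DBAPYTHLD value shown are assumptions to illustrate the shape of the request, not documented defaults. Verify all values with F4 (Prompt).

```
/* Hedged sketch: compare and repair member data for data group MYDG */
/* while applications and replication remain active. REPAIR(*YES) is */
/* taken from the text; FILE and DBAPYTHLD values are assumed.       */
installation_library/CMPFILDTA DGDFN(MYDG) +
    FILE((APPLIB/*ALL *ALL *INCLUDE)) +
    REPAIR(*YES) ACTIVE(*YES) DBAPYTHLD(*END)
```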

Processing members held due to error


The CMPFILDTA command also provides the ability to compare and repair members
being held due to error (*HLDERR). When members in *HLDERR status are
processed, the CMPFILDTA command works cooperatively with the database apply
(DBAPY) process to compare and repair the file members—and when possible,
restore them to an active state. To repair members in *HLDERR status, you must also
specify that the repair be performed on the target system and request that active
processing be enabled.
To support the cooperative efforts of CMPFILDTA and DBAPY, the following
transitional states are used for file entries undergoing compare and repair processing:
• *CMPRLS - The file in *HLDERR status has been released. DBAPY will clear the
journal entry backlog by applying the file entries in catch-up mode.
• *CMPACT - The journal entry backlog has been applied. CMPFILDTA and DBAPY
are cooperatively repairing the member previously in *HLDERR status, and
incoming journal entries continue to be applied in forgiveness mode.
When a member held due to error is being processed by the CMPFILDTA command,
the entry transitions from *HLDERR status to *CMPRLS to *CMPACT. The member
then changes to *ACTIVE status if compare and repair processing is successful. In
the event that compare and repair processing is unsuccessful, the member-level entry
is set back to *HLDERR.

Additional features
The CMPFILDTA command incorporates many other features to increase
performance and efficiency.
Subsetting and advanced subsetting options provide a significant degree of flexibility
for performing periodic checks of a portion of the data within a file.
Parallel processing uses multi-threaded jobs to break up file processing into smaller
groups for increased throughput. Rather than having a single-threaded job on each
system, multiple “thread groups” break up the file into smaller units of work. This
technology can benefit environments with multiple processors as well as systems with
a single processor.

Considerations for using the CMPFILDTA command


Before you use the CMPFILDTA command, you should be aware of the information in
this topic.

Recommendations and restrictions


It is recommended that the CMPFILDTA command be used in tandem with the
CMPFILA command. Use the CMPFILA command to determine whether you have a
matching set of files and attributes on both systems and use the CMPFILDTA
command to compare the actual data within the files.
When comparing files with LOBs, you must specify a data group when the
CMPFILDTA request specifies a value other than *NONE for REPAIR.
Keyed replication - Although you can run the CMPFILDTA command on keyed files,
the command only supports files configured for *POSITIONAL replication. The
CMPFILDTA command cannot compare files configured for *KEYED replication.

SNA environments - CMPFILDTA requires a TCP/IP transfer definition—you cannot
use SNA. You can be configured for SNA, but then you must override CMPFILDTA to
refer to a transfer definition that specifies *TCP as the communications protocol. For
more information, see “System-level communications” on page 159.
Apply threshold and apply backlog - Do not compare data using active processing
technology if the apply process is 180 seconds or more behind, or has exceeded a
threshold limit.

Using the CMPFILDTA command with firewalls


The CMPFILDTA command uses a communications port based on the port number
specified in the transfer definition. If you need to run simultaneous CMPFILDTA jobs,
you must open the equivalent number of ports in your firewall. For example, if the port
number in your transfer definition is 5000 and you want to run 10 CMPFILDTA jobs at
once, you should open at least 10 ports in your firewall—minimally, ports 5001
through 5010. If you attempt to run more jobs than there are open ports, those jobs
will fail.

Security considerations
You should take extra precautions when using CMPFILDTA’s repair function, as it is
capable of accessing and modifying data on your system.
To compare file data, you must have read access to the files on both systems. When
using the repair function without active processing, you must also have write access
on the system to be repaired.
CMPFILDTA builds upon the RUNCMD support in MIMIX. CMPFILDTA starts a
remote process using RUNCMD, which requires two conditions to be true. First, the
user profile of the job that invokes CMPFILDTA must exist on the remote system
and have the same password on the remote system as it does on the local system.
Second, the user profile must have appropriate read or update access to the
members to be compared or repaired. If active processing and repair are requested,
only read access is needed; in this case, the repair processing is done by the
database apply process.

Comparing allocated records to records not yet allocated


In some situations, members differ in the number of records allocated. One member
may have allocated records, while the corresponding records of the other member are
not yet allocated. If the member to be repaired is the smaller of the two members,
records are added to make the members the same size.
If the member to be repaired is the larger of the two members, however, the excess
records are deleted. When MIMIX replication encounters these situations, no error is
generated nor is the member placed on error hold.
If one or more members differ in the manner described above, a distinct escape
message is issued. If you use CMPFILDTA in a CL program, you may wish to monitor
these escape messages specifically.
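In a CL program, such monitoring might look like the following sketch. The data
group, library, and file names are examples, and the message ID shown is only a
placeholder, not an actual MIMIX message ID; substitute the escape message
documented for your MIMIX version.

```
/* Run the compare, then monitor for the escape message issued    */
/* when members differed only in allocated size. LVE0000 is a     */
/* generic placeholder; replace it with the specific message ID   */
/* from your MIMIX documentation.                                 */
CMPFILDTA DGDFN(MYDG) FILE((APPLIB/HISTORY *ALL)) REPAIR(*TGT)
MONMSG MSGID(LVE0000) EXEC(DO)
   /* Handle the allocated-size difference condition here */
ENDDO
```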

467
Comparing file record counts and file member data

Comparing files with unique keys, triggers, and constraints


If members being repaired have unique keys, active triggers, or constraints, special
care should be taken. An update or insert repair action that results in one or more
duplicate key exceptions automatically results in the deletion of the records with
duplicate keys.
Note: The records that could be deleted include those outside the subset of records
being compared. Deletion of records with duplicate keys is not recorded in the
outfile statistics.
If triggers are enabled, any compare or repair action causes the applicable trigger to
be invoked; disable the triggers before running the command if this is not desired.
When a compare is specified, read triggers are invoked as records are read. If repair
action is specified, update, insert, and delete triggers are invoked as records are
repaired.
Table 62 describes the interaction of triggers with CMPFILDTA repair and active
processing.

Attention: If an attempt is made to use one of the unsupported


situations listed in Table 62, the job that invokes the trigger will end
abruptly. You will see a CEE0200 information message in the job
log shortly before the job ends. You may also see an MCH2004
message.

Table 62. CMPFILDTA and trigger support

Trigger type                Trigger activation   Repair on system           Process while     CMPFILDTA
                            group (ACTGRP)       (REPAIR)                   active (ACTIVE)   support

Read                        *NEW                 Any value                  Any value         Not supported
Read                        NAMED or *CALLER     Any value                  Any value         Supported
Update, insert, and delete  *NEW                 *NONE                      Any value         Supported
Update, insert, and delete  *NEW                 Any value other than      *NO               Not supported
                                                 *NONE
Update, insert, and delete  *NEW                 Any value other than      *YES              Supported
                                                 *NONE
Update, insert, and delete  NAMED or *CALLER     Any value                  Any value         Supported

Avoiding issues with triggers


It is possible to avoid potential trigger restrictions. You can use any one of the
following techniques, which are listed in the preferred order:
• Recreate the trigger program, specifying ACTGRP(*CALLER) or ACTGRP(NAMED)
• Use the Update Program (UPDPRG) command to change to ACTGRP(NAMED)
• Disable trigger programs on the file
• Use the Synchronize Objects (SYNCOBJ) command rather than CMPFILDTA
• Use the Synchronize Data Group File Entries (SYNCDGFE) command rather than
CMPFILDTA
• Use the Copy Active File (CPYACTF) command rather than CMPFILDTA
• Save and restore outside of MIMIX
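The first two techniques can be sketched as follows. The program and activation
group names are examples, and the sketch assumes a CL trigger program; use the
corresponding create command for trigger programs written in other languages.

```
/* Preferred: recreate the trigger program in a named activation  */
/* group (or specify ACTGRP(*CALLER) instead).                    */
CRTBNDCL PGM(APPLIB/ORDTRG) SRCFILE(APPLIB/QCLSRC) +
         DFTACTGRP(*NO) ACTGRP(TRGGRP)

/* Alternative: use the Update Program (UPDPRG) command to move   */
/* the existing program into a named activation group.            */
UPDPRG PGM(APPLIB/ORDTRG) ACTGRP(TRGGRP)
```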

Referential integrity considerations


Referential integrity enforcement can present complex CMPFILDTA repair scenarios.
Like triggers, a delete rule of “cascade”, “set null”, or “set default” can cause records
in other tables to be modified or deleted as a result of a repair action. In other
situations, a repair action may be prevented due to referential integrity constraints.
Consider the case where a foreign key is defined between a “department” table and
an “employee” table. The referential integrity constraint requires that records in the
employee table only be permitted if the department number of the employee record
corresponds to a row in the department table with the same department number.
It will not be possible for CMPFILDTA repair processing to add a row to the employee
table if the corresponding parent row is not present in the department table. Because
of this, you should use CMPFILDTA to repair parent tables before using CMPFILDTA
to repair dependent tables. Note that the order in which you specify the tables on the
CMPFILDTA command is not necessarily the order in which they will be processed,
so you must issue the command once for the parent table, and then again for the
dependent table.
Repairing the parent department table first may present its own problems. If
CMPFILDTA attempts to delete a row in the department table and the delete rule for
the constraint is “restrict”, the row deletion may fail if the employee table still contains
records corresponding to the department to be deleted. Such constraints should use a
delete rule of “cascade”, “set null”, or “set default”. Otherwise, CMPFILDTA may not
be able to make all repairs.
See the IBM Database Programming manual (SC41-5701) for more information on
referential integrity.
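Using the department and employee example above, the repair order can be
sketched as two separate requests. The data group, library, and file names are
examples, and the exact FILE element format is an assumption based on typical
MIMIX object selection syntax; check the command prompt (F4) in your release.

```
/* Repair the parent table first...                               */
CMPFILDTA DGDFN(MYDG) FILE((HRLIB/DEPT *ALL)) REPAIR(*TGT)

/* ...then repair the dependent table in a second request.        */
CMPFILDTA DGDFN(MYDG) FILE((HRLIB/EMPLOYEE *ALL)) REPAIR(*TGT)
```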

Job priority
When run, the remote CMPFILDTA job uses the run priority of the local CMPFILDTA
job. However, the run priority of either CMPFILDTA job is superseded if a
CMPFILDTA class object (*CLS) exists in the installation library of the system on
which the job is running.
Note: Use the Change Job (CHGJOB) command on the local system to modify the
run priority of the local job. CMPFILDTA uses the priority of the local job to set
the priority of the remote job, so that both jobs have the same run priority. To
set the remote job to run at a different priority than the local job, use the
Create Class (CRTCLS) command to create a *CLS object for the job you
want to change.
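For example, a class object created in the MIMIX installation library on the remote
system could set that job's run priority. The library name MIMIX and priority 35 are
examples only; substitute your installation library and desired priority.

```
/* Create a CMPFILDTA *CLS object so the CMPFILDTA job on this    */
/* system runs at priority 35 instead of inheriting the priority  */
/* of the local job.                                              */
CRTCLS CLS(MIMIX/CMPFILDTA) RUNPTY(35)
```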

CMPFILDTA and network inactivity


When the CMPFILDTA command processes large object selection lists, there may be
an extended period of communications inactivity. If the period of inactivity exceeds the
timeout value of any network inactivity timer in effect, the network timeout will
terminate the communications session, causing the CMPFILDTA job to end. To
prevent this from occurring, you can use the Change TCP/IP Attributes (CHGTCPA)
command to change the TCP Keep Alive (TCPKEEPALV) value so that it is lower than
the network inactivity timeout value.
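For example, if the network inactivity timer in effect is 10 minutes, a keep-alive
interval of 5 minutes keeps the session active. The value shown is illustrative only;
TCPKEEPALV is specified in minutes.

```
/* Send TCP keep-alive probes after 5 minutes of inactivity so    */
/* that idle CMPFILDTA sessions are not dropped by the network.   */
CHGTCPA TCPKEEPALV(5)
```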

Specifying CMPFILDTA parameter values


This topic provides information about specific parameters of the CMPFILDTA
command.

Specifying file members to compare


The CMPFILDTA command allows you to work with physical file members only. You
can select the files to compare by using a data group, the object selection
parameters, or both.
• By data group only: If you specify only by data group, the list of candidate
objects to compare is determined by the data group configuration.
• By object selection parameters only: You can compare file members that are
not replicated by a data group. By specifying *NONE for the data group and
specifying file and member information on the object selection parameters, you
define a name space on each system from which a list of candidate objects is
created.
The Object attribute element on the File parameter enables you to select
particular characteristics of a file. Table 63 lists the extended attributes for objects
of type *FILE that are supported as values for the Object attribute element.
• By data group and object selection parameters: When you specify a data
group name as well as values on the object selection parameters, the values
specified in object selection parameters act as a filter for the items defined to the
data group.
Detailed information about object selection is available in “Object selection for
Compare and Synchronize commands” on page 425.
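For example, the selection styles might look like the following sketch. The data
group, library, and file names are examples, and the FILE element format is an
assumption based on typical MIMIX object selection syntax.

```
/* By data group only: the candidate files come from the data     */
/* group configuration.                                           */
CMPFILDTA DGDFN(MYDG) FILE(*ALL)

/* By data group plus object selection: the FILE values act as a  */
/* filter for the items defined to the data group.                */
CMPFILDTA DGDFN(MYDG) FILE((APPLIB/ORD* *ALL))
```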

Table 63. CMPFILDTA supported extended attributes for *FILE objects

Object attribute   Description
PF                 Physical file types, including PF, PF-SRC, and PF-DTA
PF-DTA             Files of type PF-DTA
PF-SRC             Files of type PF-SRC
PF38               Files of type PF38, including PF38, PF38-SRC, and PF38-DTA
PF38-DTA           Files of type PF38-DTA
PF38-SRC           Files of type PF38-SRC

Tips for specifying values for unique parameters


The CMPFILDTA command includes several parameters that are unique among
MIMIX commands.
Repair on system: When you choose to repair files that do not match, CMPFILDTA
allows you to select the system on which the repair should be made.
File repairs can be performed on system 1, system 2, local, target, source, or you can
specify the system definition name.
Note: *TGT and *SRC are only valid when a data group is specified. However, you
cannot select *SRC when *YES is specified for the Process while active
parameter. Refer to the “Process while active” section.
Process while active: CMPFILDTA includes while-active support. This parameter
allows you to indicate whether compares should be made while file activity is taking
place. For efficiency’s sake, it is always best to perform active repairs during a period
of low activity. CMPFILDTA, however, uses a mechanism that retries comparison
activity until it detects no interference from active files.
Three values are allowed on the Process while active parameter—*DFT, *NO, and
*YES. The *NO option should be used when the files being compared are not actively
being updated by either application activity or MIMIX replication activity. All file repairs
are handled directly by CMPFILDTA. *YES is only allowed when a data group is
specified and should be used when the files being compared are actively being
updated by application activity or MIMIX replication activity. In this case, all file repairs
are routed through the data group and require that the data group is active. If a data
group is specified, the default value of *DFT is equivalent to *YES. If a data group is
not specified, *DFT is the same as *NO.
Specifying *NO for the Process while active parameter is the recommended option for
running in a quiesced environment. When used in combination with an active data
group, it assumes there is no application activity and MIMIX replication is current. If
you specify *NO for the Process while active parameter in combination with repairing
the file, the data group apply process must be configured not to lock the files on the
apply system. This configuration can be accomplished by specifying *NO on the Lock
on apply parameter of the data group definition.
Note: Do not compare data using active processing technology if the apply
process is 180 seconds or more behind, or has exceeded a threshold limit.
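A typical invocation while replication is active might look like the following sketch.
The data group, library, and file names are examples; REPAIR and ACTIVE are the
parameter keywords described above.

```
/* Compare a replicated file while application and replication    */
/* activity continue; repairs are routed through the active data  */
/* group and applied on the target system.                        */
CMPFILDTA DGDFN(MYDG) FILE((APPLIB/ORDERS *ALL)) REPAIR(*TGT) ACTIVE(*YES)
```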
File entry status: The File entry status parameter provides options for selecting
members with specific statuses, including members held due to error (*HLDERR).


When members in *HLDERR status are processed, the CMPFILDTA command works
cooperatively with the database apply (DBAPY) process to compare and repair
members held due to error—and when possible, restore them to an active state.
Valid values for the File entry status parameter are *ALL, *ACTIVE, and *HLDERR. A
data group must also be specified on the command or the parameter is ignored. The
default value, *ALL, indicates that all supported entry statuses (*ACTIVE and
*HLDERR) are included in compare and repair processing. The value *ACTIVE
processes only those members that are active1. When *HLDERR is specified, only
member-level entries being held due to error are selected for processing. To repair
members held due to error using *ALL or *HLDERR, you must also specify that the
repair be performed on the target system and request that active processing be used.
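For example, to repair only members held due to error, repairing on the target
system with active processing (the data group and file names are examples;
STATUS is the keyword for the File entry status parameter):

```
/* Select only members in *HLDERR status, repair them on the      */
/* target system, and let active processing restore them to an    */
/* active state where possible.                                   */
CMPFILDTA DGDFN(MYDG) FILE((APPLIB/*ALL *ALL)) STATUS(*HLDERR) +
          REPAIR(*TGT) ACTIVE(*YES)
```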
System 1 ASP group and System 2 ASP group: The System 1 ASP group and
System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP)
group where objects configured for replication may reside. The ASP group name is
the name of the primary ASP device within the ASP group. These parameters are
ignored when a data group is specified. You must be running on OS V5R2 or greater
to use these parameters.
Subsetting option: The Subsetting option parameter provides a robust means by
which to compare a subset of the data within members. In some instances, the value
you select will determine which additional elements are used when comparing data.
Several options are available on this parameter: *ALL, *ADVANCED, *ENDDTA, or
*RANGE. If *ALL is specified, all data within all selected files is compared, and no
additional subsetting is performed. The other options compare only a subset of the
data.
The following are common scenarios in which comparing a subset of your data is
preferable:
• If you only need to check a specific range of records, use *RANGE.
• When a member, such as a history file, is primarily modified with insert operations,
only recently inserted data needs to be compared. In this situation, use *ENDDTA.
• If time does not permit a full comparison, you can compare a random sample
using *ADVANCED.
• If you do not have time to perform a full comparison all at once but you want all
data to be compared over a number of days, use *ADVANCED.
*RANGE indicates that the Subset range parameter will be used to specify the subset
of records to be compared. For more information, see the “Subset range” section.
If you select *ENDDTA, the Records at end of file parameter specifies how many
trailing records are compared. This value allows you to compare a selected number of
records at the end of all selected members. For more information, see the section
titled “Records at end of file.”
Advanced subsetting can be used to audit your entire database over a number of
days or to request that a random subset of records be compared. To specify
advanced subsetting, select *ADVANCED. For more information see “Advanced
subset options for CMPFILDTA” on page 476.

1. The File entry status parameter was introduced in V4R4 SPC05SP2. If you want to preserve
previous behavior, specify STATUS(*ACTIVE).
Subset range: Subset range is enabled when *RANGE is specified on the Subsetting
option parameter, as described in the “Subsetting option” section.
Two elements are included, First record and Last record. These elements allow you to
specify a range of records to compare. If more than one member is selected for
processing, all members are compared using the same relative record number range.
Thus, using the range specification is usually only useful for a single member or a set
of members with related records.
The First record element can be specified as *FIRST or as a relative record
number. In the case of *FIRST, records in the member are compared beginning
with the first record.
The Last record element can be specified as *LAST or as a relative record
number. In the case of *LAST, records in the member are compared up to, and
including, the last record.
Advanced subset options: The Advanced subset options (ADVSUBSET) provides
the ability to use sophisticated comparison techniques. For detailed information and
examples, see “Advanced subset options for CMPFILDTA” on page 476.
Records at end of file: The Records at end of file (ENDDTA) parameter allows you to
compare recently inserted data without affecting the other subsetting criteria. If you
specified *ENDDTA in the Subsetting option parameter, as indicated in the
“Subsetting option” section, only those records specified in the Records at end of file
parameter will be processed.
This parameter is also valid if values other than *ENDDTA were specified in the
Subsetting option. In this case, both records at the end of the file as well as any
additional subsetting options factor into the compare. If some records are selected by
both by the ENDDTA parameter and another subsetting option, those records are only
processed once.
The Records at end of file parameter can be specified as *NONE or number-of-
records. When *NONE is specified, records at the end of the members are not
compared unless they are selected by other subset criteria. To compare particular
records at the end of each member, you must specify the number of records.
The ENDDTA value is always applied to the smaller of the System 1 and System 2
members, and continues through until the end of the larger member. Let us assume
that you specify 200 for the ENDDTA value. If one system has 1000 records while the
other has 1100, relative records 801-1100 would be checked. The relative record
numbers of the last 200 records of the smaller file are compared as well as the
additional 100 relative record numbers due to the difference in member size.
Using the Records at end of file parameter in daily processing can keep you from
missing records that were inserted recently.
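For a history file that grows mainly by inserts, a daily run might compare only the
trailing records. The data group and file names are examples; the SUBSET keyword
form for the Subsetting option parameter is an assumption, while ENDDTA is the
documented keyword.

```
/* Compare only the last 200 records of each selected member.     */
CMPFILDTA DGDFN(MYDG) FILE((APPLIB/HISTORY *ALL)) SUBSET(*ENDDTA) +
          ENDDTA(200)
```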


Specifying the report type, output, and type of processing


The options for selecting processing method, output format, and the contents of the
reported differences are similar to that provided for other MIMIX compare commands.
For additional details, see “Report types and output formats” on page 444.

System to receive output


The System to receive output (OUTSYS) parameter indicates the system on which
the output will be created. By default, the output is created on the local system.
When Output is *OUTFILE and Process while active is *YES, complete outfile
information is only available if the System to receive output parameter indicates that
the output file is on the data group target system. In this case, the outfile will be
updated as the database apply encounters journal entries relating to possible
mismatched records.
The Wait time (seconds) parameter can be used to ensure that all such outfile
updates are complete before the command completes.

Interactive and batch processing


On the Submit to batch parameter, the *YES default submits a multi-thread capable
batch job. When *NO is specified for the parameter, CMPFILDTA generates a batch
immediate job to do the bulk of the processing. A batch immediate job is not
processed through a job queue and is identified with a job type of BCI on the
WRKACTJOB screen. Similarly, if CMPFILDTA is issued from a batch job whose
ALWMLTTHD attribute is *NO, a batch immediate job will also be spawned.
In cases where a batch immediate job is generated, the original job waits for the batch
immediate job to complete and re-issues any messages generated by CMPFILDTA.
Interactive jobs are not permitted to have multiple threads, which are required for
CMPFILDTA processing. Thus, you need to be aware of the following issues when a
batch immediate job is generated:
• The identity of the job will be issued in a message in the original job.
• Since the batch immediate job cannot access the interactive job’s QTEMP library,
outfiles and files to be compared may not reside in QTEMP, even when
CMPFILDTA is issued from a multi-thread capable batch job.
• Re-issued messages will not have the original “from” and “to” program
information. Instead, you must view the job log of the generated job to determine
this information.
• Escape messages created prior to the final message will be converted to
diagnostic messages.
• Canceling the interactive request will not cancel the batch immediate job.

Using the additional parameters


The following parameters allow you to specify an additional level of detail regarding
CMPFILDTA command processing. These parameters are available by pressing F10
(Additional parameters).


Transfer definition: The default for the Transfer definition parameter is *DFT. If a
data group was specified, the default uses the transfer definition associated with the
data group. If no data group was specified, the transfer definition associated with
system 2 is used.
The CMPFILDTA command requires that you have a TCP/IP transfer definition for
communication with the remote system. If your data group is configured for SNA,
override the SNA configuration by specifying the name of the transfer definition on the
command.
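For example (the transfer definition and system names are examples; TFRDFN is
assumed to be the keyword for the Transfer definition parameter, which takes a
qualified three-part name in MIMIX):

```
/* Override an SNA-configured data group with a TCP/IP transfer   */
/* definition for this compare request.                           */
CMPFILDTA DGDFN(MYDG) FILE((APPLIB/*ALL *ALL)) TFRDFN(TCPDFN SYSA SYSB)
```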
Number of thread groups: The Number of thread groups parameter indicates how
many thread groups should be used to perform the comparison. You can specify from
1 to 100 thread groups.
When using this parameter, it is important to balance the time required for processing
against the available resources. If you increase the number of thread groups in order
to reduce processing time, for example, you also increase processor and memory
use. The default, *CALC, will determine the number of thread groups automatically. To
maximize processing efficiency, the value *CALC does not calculate more than 25
thread groups.
The actual number of threads used in the comparison is based on the result of the
formula 2x + 1, where x is the value specified or the value calculated internally as the
result of specifying *CALC. When *CALC is specified, the CMPFILDTA command
displays a message showing the value calculated as the number of thread groups.
Note: Thread groups are created for primary compare processing only. During
setup, multiple threads may be utilized to improve performance, depending on
the number of members selected for processing. The number of threads used
during setup will not exceed the total number of threads used for primary
compare processing. During active processing, only one thread will be used.
Wait time (seconds): The Wait time (seconds) value is only valid when active
processing is in effect and specifies the amount of time to wait for active processing to
complete. You can specify from 0 to 3600 seconds, or the default *NOMAX.
If active processing is enabled and a wait time is specified, CMPFILDTA processing
waits the specified time for all pending compare operations processed through the
MIMIX replication path to complete. In most cases, the *NOMAX default is highly
recommended.
DB apply threshold: The DB apply threshold parameter is only valid during active
processing and requires that a data group be specified. The parameter specifies what
action CMPFILDTA should take if the database apply session backlog exceeds the
threshold warning value configured for the database apply process. The default value
*END stops the requested compare and repair action when the database apply
threshold is reached; any repair actions that have not been completed are lost. The
value *NOMAX allows the compare and repair action to continue even when the
database apply threshold has been reached. Continuing processing when the apply
process has a large backlog may adversely affect performance of the CMPFILDTA
job and its ability to compare a file with an excessive number of outstanding entries.
Therefore, *NOMAX should only be used in exceptional circumstances.


Change date: The Change date parameter provides the ability to compare file
members based on the date they were last changed or restored on the source
system. This parameter specifies the date and time that MIMIX will use in determining
whether to process a file member. Only members changed or restored after the
specified date and time will be processed.
Members that have not been updated or restored since the specified timestamp will
not be compared. These members are identified in the output by a difference indicator
value of *EQ (DATE), which is omitted from results when the requested report type is
*DIF.
The shipped default value is *ALL, which selects all file members regardless of the
date they were last changed or restored; the last-changed and last-restored
timestamps are ignored by the decision process.
When *AUDIT is specified, the compare start timestamp of the #FILDTA audit is used
in the determination. The command must specify a data group when this value is
used. The *AUDIT value can only be used if audit level *LEVEL30 was in effect at the
time the last audit was performed. If the audit level is lower, an error message is
issued. The audit level is available by displaying details for the audit (WRKAUD
command).
When *ALL or *AUDIT is specified for Date, the value specified for Time is ignored.
Note: Exercise caution when specifying actual date and time values. A specified
timestamp that is later than the start of the last audit can result in one or more
file members not being compared. Any member changed between the time of
its last audit and the specified timestamp will not be compared and therefore
cannot be reported if it is not synchronized. The recommended values for this
parameter are either *ALL or *AUDIT.

Advanced subset options for CMPFILDTA


You can use the Advanced subset options (ADVSUBSET) parameter on the Compare
File Data (CMPFILDTA) command for advanced techniques such as comparing
records over time and comparing a random sample of data. These techniques provide
additional assurance that files are replicated correctly.
For example, let us assume you have a limited batch window. You do not have time to
run a total compare every day, but have the requirement to assure that all data is
compared over the course of a week. Using the advanced CMPFILDTA capability, you
can divide this work over a number of days.
Advanced subsetting makes it simple to accomplish this task by comparing 10
percent of your data each weeknight and completing the remaining 50 percent over
the weekend. However, as the following example demonstrates, it is always best to
compare a random representative sampling of data. The Advanced subset options
also provides this capability.
For example, if a member contains 1000 records on Monday, records 1 through 100
will be compared on Monday. By Tuesday, perhaps the member has grown to 1500
records. The second 10 percent, to be processed on Tuesday, will contain records
151 through 300. Records 101 through 150 will not get checked at all. Advanced
subsetting provides you with an alternative that does not skip records when members
are growing.
Advanced subset options are applied independently for each member processed. The
advanced subset function assigns the data in each member to multiple non-
overlapping subsets in one of two ways. It allows a specified range of these subsets
to be compared, which permits a representative sample of the data to be compared,
and it permits a full compare to be partitioned into multiple CMPFILDTA requests
that, in combination, assure that all data that existed at the time of the first request
is compared.
To use advanced subsetting, you will need to identify the following:
• The number of subsets or “bins” to define for the compare
• The manner in which records are assigned to bins
• The specific bins to process
Number of subsets: The first issue to consider when performing advanced subset
options is how many subsets or bins to establish. The Number of subsets element is
the number of approximately equal-sized bins to define. These bins are numbered
from 1 up to the number specified (N). You must specify at least one bin. Each record
is assigned to one of these bins.
The Interleave element specifies the manner in which records are assigned to bins.
Interleave: The Interleave factor specifies the mapping between the relative record
number and the bin number. There are two approaches that can be used.
If you specify *NONE, records in each member are divided on a percentage basis. For
example:

Table 64. Interleave *NONE

                             Member A on Monday    Member A on Tuesday
Total records in member:     30                    45
Number of subsets (bins):    3                     3
Interleave:                  *NONE                 *NONE
Records assigned to bin 1:   1-10                  1-15
Records assigned to bin 2:   11-20                 16-30
Records assigned to bin 3:   21-30                 31-45

Note that when the total number of records in a member changes, the mapping also
changes. Records that were once assigned to bin 2 may in the future be assigned to
bin 1. If you wish to compare all records over the course of a few days, the changing
mapping may cause you to miss records. A specific Interleave value is preferable in
this case.
When specified in bytes, the Interleave value determines how many contiguous
records are assigned to each bin before assignment moves to the next bin; the
number of records per group is the interleave value divided by the record length.
Once the last bin is filled, assignment restarts at the first bin. Let us assume you
have specified an interleave value of 20 bytes. The following example is based on
the one provided in Table 64:

Table 65. Interleave(20)

                             Member A on Monday      Member A on Tuesday
Total records in member:     30                      45
Record length:               10 bytes                10 bytes
Number of subsets (bins):    3                       3
Interleave (bytes):          20                      20
Interleave (records):        2                       2
Records assigned to bin 1:   1-2, 7-8, 13-14,        1-2, 7-8, 13-14, 19-20,
                             19-20, 25-26            25-26, 31-32, 37-38, 43-44
Records assigned to bin 2:   3-4, 9-10, 15-16,       3-4, 9-10, 15-16, 21-22,
                             21-22, 27-28            27-28, 33-34, 39-40, 45
Records assigned to bin 3:   5-6, 11-12, 17-18,      5-6, 11-12, 17-18, 23-24,
                             23-24, 29-30            29-30, 35-36, 41-42

If the Interleave and Number of Subsets is constant, the mapping of relative record
numbers to bins is maintained, despite the growth of member size. Because every bin
is eventually selected, comparisons made over several days will compare every
record that existed on the first day.
In most circumstances, *CALC is recommended for the interleave specification. When
you select *CALC, the system determines how many contiguous bytes are assigned
to each bin before subsequent bytes are placed in the next bin. This calculated value
will not change due to member size changes.

Specifying *NONE or a very large interleave factor maximizes processing efficiency,
since data in each bin is processed sequentially. Specifying a very small interleave
factor can greatly reduce efficiency, as little sequential processing can be done before
the file must be repositioned. If you wish to compare a random sample, a smaller
interleave factor provides a more random, or scattered, sample to compare.
The next elements, First subset and Last subset, allow you to specify which bins to
process.
First and last subset: The First subset and Last subset values work in combination
to determine a range of bins to compare. For the First subset, the possible values are
*FIRST and subset-number. If you select *FIRST, the range to compare will start with
bin 1. Last subset has similar values, *LAST and subset-number. When you specify
*LAST, the highest numbered bin is the last one processed.
To compare a random sample of your data, specify a range of subsets that represents
the size of the sample. For example, suppose you wish to compare seven percent of
your data. If the number of subsets is 100, the first subset is 1, and the last subset is
7, seven percent of the data is compared. A first subset value of 21 and a last subset
value of 27 would also compare seven percent of your data, but it would compare a
different seven percent than the first example.
To compare all your data over the course of several days, specify a number of
subsets and an interleave factor that allow you to size each day’s workload as your
needs require. For example, you would keep the number of subsets and the
interleave factor constant, but vary the First and Last subset values each day. The
following settings could be used over the course of a week to compare all of your
data:

Table 66. Using First and last subset to compare data

Day of week   Number of        Interleave   First    Last     Percentage
              subsets (bins)                subset   subset   compared
Monday        100              *CALC        1        10       10
Tuesday       100              *CALC        11       20       10
Wednesday     100              *CALC        21       30       10
Thursday      100              *CALC        31       40       10
Friday        100              *CALC        41       50       10
Saturday      100              *CALC        51       65       15
Sunday        100              *CALC        66       100      35

Note: You can automate these tasks using MIMIX Monitor. Refer to the MIMIX
Monitor documentation for more information.
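A weekly schedule like this one can be sanity-checked with simple arithmetic on the
table values: the seven subset ranges should cover all 100 subsets exactly once, and
the daily percentages should total 100. A quick sketch:

```python
# Subset ranges from Table 66 (100 subsets total, interleave *CALC).
schedule = {
    "Monday":    (1, 10),
    "Tuesday":   (11, 20),
    "Wednesday": (21, 30),
    "Thursday":  (31, 40),
    "Friday":    (41, 50),
    "Saturday":  (51, 65),
    "Sunday":    (66, 100),
}

# Every subset number covered by some day's range.
covered = []
for first, last in schedule.values():
    covered.extend(range(first, last + 1))

# Percentage compared each day: range size out of 100 subsets.
pct = {day: last - first + 1 for day, (first, last) in schedule.items()}

print(sorted(covered) == list(range(1, 101)))  # True: full coverage, no overlap
print(sum(pct.values()))                       # 100
print(pct["Saturday"], pct["Sunday"])          # 15 35
```

The same check applies to any schedule you design: the ranges must be contiguous and
non-overlapping if each record is to be compared exactly once over the cycle.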

Ending CMPFILDTA requests


The Compare File Data (CMPFILDTA) command, or a rule that calls it, can be long
running and may exceed the time you have available for it to run.


The CMPFILDTA command recognizes requests to end the job in a controlled manner
(ENDJOB OPTION(*CNTRLD)). Messages indicate the step within CMPFILDTA
processing at which the end was requested. The report and output file contain as
much information as possible with the data available at the step in progress when the
job ended. The output may not be accurate because the full CMPFILDTA request did
not complete.
The content of the report and output file is most valuable if the command completed
processing through the end of phase 1 compare. The output may be incomplete if the
end occurred earlier. If processing did not complete to a point where MIMIX can
accurately determine the result of the compare, the value *UN (unknown) is placed in
the Difference Indicator.
Note: If the CMPFILDTA command has been long running or has encountered many
errors, you may need to specify more time on the ENDJOB command’s Delay
time, if *CNTRLD (DELAY) parameter. The default value of 30 seconds may
not be adequate in these circumstances. For example, ENDJOB JOB(job-name)
OPTION(*CNTRLD) DELAY(300) allows the job five minutes to end in a
controlled manner.


Comparing file member data - basic procedure (non-active)
You can use the CMPFILDTA command to ensure that data required for replication
exists on both systems, or any time you need to verify that files are synchronized
between systems. You can optionally specify that results of the comparison are
placed in an outfile.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 466. You
should also read “Specifying CMPFILDTA parameter values” on page 470 for
additional information about parameters and values that you can specify.
To perform a basic data comparison, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare data by file name only, specify *NONE and continue with the next
step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, accept *NONE to indicate that no repair action is
done.
7. At the Process while active prompt, specify *NO to indicate that active processing
technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
12. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
• If you want to include the member details and relative record number (RRN) of
the first 1,000 objects that have differences, specify *RRN.
Notes:
• The *RRN value can only be used when *NONE is specified for the Repair
on system prompt and *OUTFILE is specified for the Output prompt.
• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN
can help resolve situations where a discrepancy is known to exist but you are
unsure which system contains the correct data. This value provides the
information that enables you to display the specific records on the two
systems and determine the system on which the file should be repaired.
13. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.


• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 18.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.

Comparing and repairing file member data - basic procedure
You can use the CMPFILDTA command to repair data on the local or remote system.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 466. You
should also read “Specifying CMPFILDTA parameter values” on page 470 for
additional information about parameters and values that you can specify.
To compare and repair data, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare data by file name only, specify *NONE and continue with the next
step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.


5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, specify *SYS1, *SYS2, *LOCAL, *TGT, *SRC, or
the system definition name to indicate the system on which repair action should
be performed.
Note: *TGT and *SRC are only valid if you are comparing files defined to a data
group. *SRC is not valid if active processing is in effect.
7. At the Process while active prompt, specify *NO to indicate that active processing
technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
12. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
13. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 18.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.


Comparing and repairing file member data - members on hold (*HLDERR)
Members that are being held due to error (*HLDERR) can be repaired with the
Compare File Data (CMPFILDTA) command during active processing. When
members in *HLDERR status are processed, the CMPFILDTA command works
cooperatively with the database apply (DBAPY) process to compare and repair the
members—and when possible, restore them to an active state.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 466. You
should also read “Specifying CMPFILDTA parameter values” on page 470 for
additional information about parameters and values that you can specify.
The following procedure repairs a member without transmitting the entire member. As
such, this method is generally faster than other methods of repairing members in
*HLDERR status that transmit the entire member or file. However, if significant activity
has occurred on the source system that has not been replicated on the target system,
it may be faster to synchronize the member using the Synchronize Data Group File
Entry (SYNCDGFE) command.
To repair a member with a status of *HLDERR, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, you must specify a data group name.
Note: If you want to compare data for all files defined by the data group file
entries for a particular data group definition, skip to Step 5.
4. At the File prompts, you can optionally specify elements for one or more object
selectors that act as filters to the files defined to the data group indicated in
Step 3. For more information, see “Object selection for Compare and Synchronize
commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. Press Enter.
Note: The System 2 file and System 2 library values are ignored when a data
group is specified on the Data group definition prompts.

5. At the Repair on system prompt, specify *TGT to indicate that repair action be
performed on the target system.
6. At the Process while active prompt, specify *YES to indicate that active
processing technology should be used in the comparison.
7. At the File entry status prompt, specify *HLDERR to process members being held
due to error only.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 15.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the System to receive output prompt, specify the system on which the output
should be created.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.


• To submit the job for batch processing, accept the default. Press Enter.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To compare and repair the file, press Enter.

Comparing file member data using active processing technology
You can set the CMPFILDTA command to use active processing technology when a
data group is specified on the command.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 466. You
should also read “Specifying CMPFILDTA parameter values” on page 470 for
additional information about parameters and values that you can specify.
Note: Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
To compare data using active processing, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, accept the defaults.
f. Press Enter.
5. At the Repair on system prompt, specify *TGT to indicate that repair action be
performed on the target system of the data group.
6. At the Process while active prompt, specify *YES or *DFT to indicate that active
processing technology be used in the comparison. Since a data group is specified
on the Data group definition prompts, *DFT will render the same results as *YES.
7. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
11. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
12. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 17.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
13. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
15. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *OUTFILE was specified on the Outfile prompt, it is recommended that
you select *SYS2 for the System to receive output prompt.
16. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used when the command is invoked from outside of
shipped audits. When used as part of shipped audits, the default value is *OMIT
since the results are already placed in an outfile.
17. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
20. To start the comparison, press Enter.


Comparing file member data using subsetting options


You can use the CMPFILDTA command to audit your entire database over a number
of days.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 466. You
should also read “Specifying CMPFILDTA parameter values” on page 470 for
additional information about parameters and values that you can specify.
Note: Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
To compare data using the subsetting options, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare data by file name only, specify *NONE and continue with the next
step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.

f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, specify a value if you want repair action
performed.
Note: To process members in *HLDERR status, you must specify *TGT. See
Step 8.
7. At the Process while active prompt, specify whether active processing technology
should be used in the comparison.
Notes:
• To process members in *HLDERR status, you must specify *YES. See
Step 8.
• If you are comparing files associated with a data group, *DFT uses active
processing. If you are comparing files not associated with a data group,
*DFT does not use active processing.
• Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
8. At the File entry status prompt, you can select files with specific statuses for
compare and repair processing. Do one of the following:
a. To process active members only, specify *ACTIVE.
b. To process both active members and members being held due to error
(*ACTIVE and *HLDERR), specify the default value *ALL.
c. To process members being held due to error only, specify *HLDERR.
Note: When *ALL or *HLDERR is specified for the File entry status prompt,
*TGT must also be specified for the Repair on system prompt (Step 6)
and *YES must be specified for the Process while active prompt
(Step 7).
9. At the Subsetting option prompt, you must specify a value other than *ALL to use
additional subsetting. Do one of the following:
• To compare a fixed range of data, specify *RANGE then press Enter to see
additional prompts. Skip to Step 10.
• To define how many subsets should be established, how member data is
assigned to the subsets, and which range of subsets to compare, specify
*ADVANCED and press Enter to see additional prompts. Skip to Step 11.
• To indicate that only data specified on the Records at end of file prompt is
compared, specify *ENDDTA and press Enter to see additional prompts. Skip to
Step 12.
10. At the Subset range prompts, do the following:
a. At the First record prompt, specify the relative record number of the first record
to compare in the range.


b. At the Last record prompt, specify the relative record number of the last record
to compare in the range.
c. Skip to Step 12.
11. At the Advanced subset options prompts, do the following:
a. At the Number of subsets prompt, specify the number of approximately
equal-sized subsets to establish. Subsets are numbered beginning with 1.
b. At the Interleave prompt, specify the interleave factor. In most cases, the
default *CALC is highly recommended.
c. At the First subset prompt, specify the first subset in the sequence of subsets
to compare.
d. At the Last subset prompt, specify the last subset in the sequence of subsets to
compare.
12. At the Records at end of file prompt, specify the number of records at the end of
the member to compare. These records are compared regardless of other
subsetting criteria.
Note: If *ENDDTA is specified on the Subsetting option prompt, you must specify
a value other than *NONE.
13. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
• If you want to include the member details and relative record number (RRN) of
the first 1,000 objects that have differences, specify *RRN.
Notes:
• The *RRN value can only be used when *NONE is specified for the Repair
on system prompt and *OUTFILE is specified for the Output prompt.
• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN
can help resolve situations where a discrepancy is known to exist but you are
unsure which system contains the correct data. This value provides the
information that enables you to display the specific records on the two
systems and determine the system on which the file should be repaired.
14. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 19.

• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
15. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
16. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace the existing records in the member or be added to the existing list.
17. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Output prompt, you must select *SYS2 for the System to receive output prompt.
18. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
19. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
20. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
21. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
22. To start the comparison, press Enter.
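Taken together, these prompts correspond to a single command request. The following sketch assumes the Compare File Data (CMPFILDTA) command, to which these prompts belong; the parameter keywords shown (REPAIR, ACTIVE, RPTTYPE, OUTPUT) and the data group name MYDGDFN are illustrative, so prompt the command with F4 to confirm the exact keywords on your system:

```
CMPFILDTA DGDFN(MYDGDFN) REPAIR(*TGT) ACTIVE(*YES)
          RPTTYPE(*DIF) OUTPUT(*OUTFILE)
```

A request of this form repairs detected differences on the target system while files remain active and writes only the members with differences to an outfile.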

CHAPTER 20 Synchronizing data between systems

This chapter contains information about support provided by MIMIX commands for
synchronizing data between two systems. The data that MIMIX replicates must be
synchronized on several occasions.
• During initial configuration of a data group, you need to ensure that the data to be
replicated is synchronized between both systems defined in a data group.
• If you change the configuration of a data group to add new data group entries, the
objects must be synchronized.
• You may also need to synchronize a file or object if an error occurs that causes
the two systems to no longer be synchronized.
• Automatic recovery features also use synchronize commands to recover
differences detected during replication and audits. If automatic recovery policies
are disabled, you may need to use synchronize commands to correct a file or
object in error or to correct differences detected by audits or compare commands.
The synchronize commands provided with MIMIX can be loosely grouped by common
characteristics and the level of function they provide. Topic “Considerations for
synchronizing using MIMIX commands” on page 499 describes subjects that apply to
more than one group of commands, such as the maximum size of an object that can
be synchronized, how large objects are handled, and how user profiles are
addressed.
Initial synchronization: Initial synchronization can be performed manually with a
variety of MIMIX and IBM commands, or by using the Synchronize Data Group
(SYNCDG) command. The SYNCDG command is intended especially for performing
the initial synchronization of one or more data groups. The command can be long-
running. For information about initial synchronization, see these topics:
• “Performing the initial synchronization” on page 508 describes how to establish a
synchronization point and identifies other key information.
• Environments using MIMIX support for IBM WebSphere MQ have additional
requirements for the initial synchronization of replicated queue managers. For
more information, see the MIMIX for IBM WebSphere MQ book.
Synchronize commands: The commands Synchronize Object (SYNCOBJ),
Synchronize IFS Object (SYNCIFS), and Synchronize DLO (SYNCDLO) provide
robust support in MIMIX environments, for synchronizing library-based objects, IFS
objects, and DLOs, as well as their associated object authorities. Each command has
considerable flexibility for selecting objects associated with or independent of a data
group. Additionally, these commands are often called by other functions and by
options to synchronize objects identified in tracking entries used for journaling. For
additional information, see:
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 503
• “About synchronizing tracking entries” on page 507
Synchronize Data Group Activity Entry: The Synchronize DG Activity Entry
(SYNCDGACTE) command provides the ability to synchronize library-based objects,
IFS objects, and DLOs that are associated with data group activity entries which have
specific status values. The contents of the object and its attributes and authorities are
synchronized. For additional information, see “About synchronizing data group activity
entries (SYNCDGACTE)” on page 504.
Synchronize Data Group File Entry: The Synchronize DG File Entry (SYNCDGFE)
command provides the means to synchronize database files associated with a data
group by data group file entries. Additional options provide the means to address
triggers, referential constraints, logical files, and related files. For more information
about this command, see “About synchronizing file entries (SYNCDGFE command)”
on page 505.
Procedures: The procedures in this chapter are for commands that are accessible
from the MIMIX Compare, Verify, and Synchronize menu. Typically, when you need to
synchronize individual items in your configuration, the best approach is to use the
options provided on the displays where they are appropriate to use. The options call
the appropriate command and, in many cases, pre-select some of the fields. The
following procedures are included:
• “Synchronizing database files” on page 514
• “Synchronizing objects” on page 516
• “Synchronizing IFS objects” on page 520
• “Synchronizing DLOs” on page 524
• “Synchronizing data group activity entries” on page 528
• “Synchronizing tracking entries” on page 530

Considerations for synchronizing using MIMIX commands
For discussion purposes, the synchronize commands are grouped as follows:
• Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO)
• Synchronize Data Group Activity Entry (SYNCDGACTE)
• Synchronize Data Group File Entry (SYNCDGFE)
The following subtopics apply to more than one group of commands. Before you
synchronize you should be aware of information in the following topics:
• “Limiting the maximum sending size” on page 499
• “Synchronizing user profiles” on page 499
• “Synchronizing large files and objects” on page 501
• “Status changes caused by synchronizing” on page 501
• “Synchronizing objects in an independent ASP” on page 501

Limiting the maximum sending size


The Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO) and the
Synchronize Data Group File Entry (SYNCDGFE) command provide the ability to limit
the size of files or objects transmitted during synchronization with the Maximum
sending size (MAXSIZE) parameter. By default, no maximum value is specified. You
can also specify the value *TFRDFN to use the threshold size from the transfer
definition associated with the data group (see note 1), or specify a value between 1 and
9,999,999 megabytes (MB). On the SYNCDGFE command, the value *TFRDFN is
only allowed when the Sending mode (METHOD) parameter specifies *SAVRST.
When automatic recovery actions initiate a Synchronize or SYNCDGFE command,
the policies in effect determine the value used for the command’s MAXSIZE
parameter. The Set MIMIX Policies (SETMMXPCY) command sets policies for
automatic recovery actions and for the synchronize threshold used by the commands
MIMIX invokes to perform recovery actions. When any of the automatic recovery
policies are enabled (DBRCY, OBJRCY, or AUDRCY), the value of the Sync.
threshold size (SYNCTHLD) policy is used for the MAXSIZE value on the command.
You can adjust the SYNCTHLD policy value for the installation or optionally set a
value for a specific data group.
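For example, to lower the synchronize threshold for one data group, a request of the following form could be used. The SYNCTHLD keyword comes from the policy name above; the remaining syntax and names are assumptions, so prompt SETMMXPCY with F4 to confirm:

```
SETMMXPCY DGDFN(MYDGDFN) SYNCTHLD(2048)
```

With this setting, automatic recovery actions would pass 2048 MB as the MAXSIZE value on the Synchronize or SYNCDGFE commands they invoke.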

Synchronizing user profiles


User profile objects (*USRPRF) can be synchronized explicitly or implicitly using the
Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO). The following
information describes slight variations in processing.

Note 1: To preserve behavior prior to changes made in V4R4 service pack SPC05SP4, specify *TFRDFN.

Synchronizing user profiles with SYNCnnn commands
The SYNCOBJ command explicitly synchronizes user profiles when you specify
*USRPRF for the object type on the command. The status of the user profile on the
target system is affected as follows:
• If you specified a data group and a user profile which is configured for replication,
the status of the user profile on the target system is the value specified in the
configured data group object entry.
• If you specified a user profile but did not specify a data group, the following
occurs:
– If the user profile exists on the target system, its status on the target system
remains unchanged.
– If the user profile does not exist on the target system, it is synchronized and its
status on the target system is set to *DISABLED.
When synchronizing other object types, the SYNCOBJ, SYNCIFS, and SYNCDLO
commands implicitly synchronize user profiles associated with the object if they do
not exist on the target system. Although only the requested object type, such as
*PGM, is specified on these commands, the owning user profile, the primary group
profile, and user profiles that have private authorities to an object are implicitly
synchronized, as follows:
• When the Synchronize command specifies a data group and that data group has
a data group object entry which includes the user profile, the object and the user
profile are synchronized. The status of the user profile on the target system is set
to match the value from the data group object entry.
• If a data group object entry excludes the user profile from replication, the object is
synchronized and its owner is changed to the default owner indicated in the data
group definition. The user profile is not synchronized.
• When the Synchronize command specifies a data group and that data group does
not have a data group object entry for the user profile, the object and the
associated user profile are synchronized. The status of the user profile on the
target system is set to *DISABLED.

Missing system distribution directory entries


If automatic object recovery is enabled for a data group, MIMIX automatically adds a
system distribution directory entry when a replication or synchronization request for a
DLO (document library object) determines that the user profile being set as the owner
of the DLO does not have a system directory entry on the target system. For
synchronization requests, this capability is provided by the SYNCDLO (Synchronize
DLO) command. MIMIX adds the system distribution directory entry for the user
profile on the target system and specifies these values:
• User ID: same value as retrieved from the source system
• Description: same value as retrieved from the source system
• Address: local-system name
• User profile: user-profile name

• All other directory entry fields are blank

Synchronizing large files and objects


When configured for advanced journaling, large objects (LOBs) can be synchronized
through the user (database) journal. You can synchronize a database file that
contains LOB data using the Synchronize Data Group File Entry (SYNCDGFE)
command.
If advanced journaling is not used in your environment, you may want to consider
synchronizing large files or objects (over 1 GB) outside of MIMIX. During traditional
synchronization, large files or objects can negatively impact performance by
consuming too much bandwidth. Certain commands for synchronizing provide the
ability to limit the size of files or objects transmitted during synchronization. See
“Limiting the maximum sending size” on page 499 for more information.
On certain commands, it is possible to control the size of files and objects sent to
another system. The Threshold size (THLDSIZE) parameter on the transfer definition
can be used to limit the size of objects transmitted with the Send Network Object
commands.

Status changes caused by synchronizing


In some circumstances the Synchronize Data Group Activity Entry (SYNCDGACTE)
command changes the status of activity entries when the command completes. For
additional details, see “About synchronizing data group activity entries
(SYNCDGACTE)” on page 504.
The Synchronize commands (SYNCOBJ, SYNCIFS and SYNCDLO) do not change
the status of activity entries associated with the objects being synchronized. Activity
entries retain the same status after the command completes.
Note: The SYNCIFS command will change the status of an activity entry for an
IFS object configured for advanced journaling.
When advanced journaling is configured, each replicated activity has associated
tracking entries. When you use the SYNCOBJ or SYNCIFS commands to
synchronize an object that has a corresponding tracking entry, the status of the
tracking entry will change to *ACTIVE upon successful completion of the
synchronization request. If the synchronization is not successful, the status of the
tracking entry will remain in its original status or have a status of *HLD. If the data
group is not active, the status of the tracking entry will be updated once the data
group is restarted.

Synchronizing objects in an independent ASP


When synchronizing data that is located in an independent ASP, be aware of the
following:
• In order for MIMIX to access objects located in an independent ASP, do one of the
following on the Synchronize Object (SYNCOBJ) command:
– Specify the data group definition.

– If no data group is specified, you must specify values for the System 1 ASP
group or device and System 2 ASP device number parameters.

About MIMIX commands for synchronizing objects, IFS objects, and DLOs
The Synchronize Object (SYNCOBJ), Synchronize IFS (SYNCIFS), and Synchronize
DLO (SYNCDLO) commands provide versatility for synchronizing objects and their
authority attributes.
Where to run: The synchronize commands can be run from either system. However,
if you run these commands from a target system, you must specify the name of a data
group to avoid overwriting the objects on the source system.
Identifying what to synchronize: On each command, you can identify objects to
synchronize by specifying a data group, a subset of a data group, or by specifying
objects independently of a data group.
• When you specify a data group, its source system determines the objects to
synchronize. The objects to be synchronized by the command are the same as
those identified for replication by the data group. For example, specifying a data
group on the SYNCOBJ command will synchronize the same library-based
objects as those configured for replication by the data group.
• If you specify a data group as well as specify additional object information in
command parameters, the additional parameter information is used to filter the list
of objects identified for the data group.
• When no data group is specified, the local system becomes the source system
and a target system must be identified. The list of objects to synchronize is
generated on the local system. For more information about the object selection
criteria used when no data group is specified on these commands, see “Object
selection for Compare and Synchronize commands” on page 425.
Each command has a Synchronize authorities parameter to indicate whether authority
attributes are synchronized. By default, the object and all authority-related attributes
are synchronized. You can also synchronize only the object or only the authority
attributes of an object. Authority attributes include ownership, authorization list,
primary group, public and private authorities.
When you use the SYNCOBJ command to synchronize only the authorities for an
object without specifying a data group name, the command could fail if any of the
files it processes are cooperatively processed by an active data group and the
database apply job holds a lock on those files.
When to run: Each command can be run whether the data group is active or
inactive. Using the SYNCOBJ, SYNCIFS, and SYNCDLO commands during off-peak
usage, or when the objects being synchronized are in a quiesced state, reduces
contention for object locks.
When using the SYNCIFS command for a data group configured for user journal
replication, the data group can be active but it should not have a backlog of
unprocessed entries.

Additional parameters: On each command, the following parameters provide
additional control of the synchronization process.
• The Save active parameter provides the ability to save the object in an active
environment using IBM's save while active support. Values supported are the
same as those used in related IBM commands. When you use this capability, the
following parameters further qualify save while active operations:
– On the SYNCOBJ and SYNCDLO commands, the Save active wait time
parameter specifies the amount of time to wait for a commit boundary or for a
lock on an object. If a lock is not obtained in the specified time, the object is not
saved. If a commit boundary is not reached in the specified time, the save
operation ends and the synchronization attempt fails.
– On the SYNCIFS command, the Save active option parameter defaults to
*NONE, which is appropriate for most users. Optionally, you can specify that
the IBM command (SAV) should use the value *ALWCKPWRT for its
SAVACTOPT parameter. This allows the object being saved to be opened for
write when the checkpoint is achieved.
Note: The SYNCIFS and SYNCDLO commands ignore the Save active
parameter and their respective qualifying parameter when the synchronize
request specifies a data group which is configured to use the value
*OPTIMIZED (default) for the respective transmission method element of
the Object processing (OBJPRC) parameter.
• The Maximum sending size (MB) parameter specifies the maximum size that an
object can be in order to be synchronized. For more information, see “Limiting the
maximum sending size” on page 499.
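Combining these parameters, a size-limited synchronization request might be sketched as follows. Only the MAXSIZE keyword is named above; the object-selection keywords and names are illustrative assumptions, so prompt SYNCOBJ with F4 to verify:

```
SYNCOBJ DGDFN(MYDGDFN) OBJ1(MYLIB/MYPGM) TYPE(*PGM) MAXSIZE(2048)
```

With this request, any selected object larger than 2048 MB would be skipped rather than transmitted.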

About synchronizing data group activity entries (SYNCDGACTE)
The Synchronize Data Group Activity Entry (SYNCDGACTE) command supports the
ability to synchronize library-based objects, IFS objects, or DLOs associated with data
group activity entries. Activity entries whose status falls in the following categories can
be synchronized: *ACTIVE, *COMPLETED, *DELAYED, or *FAILED. The contents of
the object, its attributes, and its authorities are synchronized between the source and
target systems.
Note: From the native user interface, data group activity and the status category of
the represented object are listed on the Work with Data Group Activity display
(WRKDGACT command). The specific status of individual activity entries
appear on the Work with DG Activity Entries display (WRKDGACTE
command).
The data group can either be active or inactive during the synchronization request.
If the item you are synchronizing has multiple activity entries with varying statuses (for
example, an entry with a status of completed, followed by a failed entry, and
subsequent delayed entries), the SYNCDGACTE command will find the first non-
completed activity entry and synchronize it. The same SYNCDGACTE request will

then find the next non-completed entry and synchronize it. The SYNCDGACTE
request will continue to synchronize these non-completed entries until all entries for
that object have been synchronized.
Any existing active, delayed, or failed activity entries for the specified object are
processed and set to ‘completed by synchronization’ (PZ) when the synchronization
request completes successfully.
When all activity entries for the specified object are already completed, a
successful synchronization request changes only the status of the most recent
completed entry from complete (CP) to ‘completed by synchronization’ (CZ).
Not supported: Spooled files and cooperatively processed files are not eligible to be
synchronized using the SYNCDGACTE command.
Status changes during synchronization: During synchronization processing, if the
data group is active, the status of the activity entries being synchronized are set to a
status of ‘pending synchronization’ (PZ) and then to ‘pending completion’ (PC). When
the synchronization request completes, the status of the activity entries is set to either
‘completed by synchronization’ (CZ) or to ‘failed synchronization’ (FZ).
If the data group is inactive, the status of the activity entries remains either ‘pending
synchronization’ (PZ) or ‘pending completion’ (PC) when the synchronization request
completes. When the data group is restarted, the status of the activity entries is set to
either ‘completed by synchronization’ (CZ) or to ‘failed synchronization’ (FZ).
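A minimal SYNCDGACTE request might be sketched as follows. Only the command name and the DGDFN parameter appear in this chapter; the object-identification keyword and names shown are placeholders, so prompt the command with F4 for the actual parameters:

```
SYNCDGACTE DGDFN(MYDGDFN) OBJ(MYLIB/MYFILE)
```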

About synchronizing file entries (SYNCDGFE command)


The Synchronize Data Group File Entry (SYNCDGFE) command synchronizes
database files associated with a data group by data group file entries.
Active data group required: Because the SYNCDGFE command runs through a
database apply job, the data group must be active when the command is used.
Choice of what to synchronize: The Sending mode (METHOD) parameter provides
granularity in specifying what is synchronized. Table 67 describes the choices.

Table 67. Sending mode (METHOD) choices on the SYNCDGFE command.

*DATA     This is the default value. Only the physical file data is replicated using
          MIMIX Copy Active File processing. File attributes are not replicated
          using this method.
          If the file exists on the target system, MIMIX refreshes its contents. If
          the file format is different on the target system, the synchronization
          will fail. If the file does not exist on the target system, MIMIX uses
          save and restore operations to create the file on the target system and
          then uses copy active file processing to fill it with data from the file
          on the source system.

*ATR      Only the physical file attributes are replicated and synchronized.

*AUT      Only the authorities for the physical file are replicated and
          synchronized.

*SAVRST   The content and attributes are replicated using the IBM i save and
          restore commands. This method allows save-while-active operations.
          This method also has the capability to save associated logical files.

Files with triggers: The SYNCDGFE command provides the ability to optionally
disable triggers during synchronization processing and enable them again when
processing is complete. The Disable triggers on file (DSBTRG) parameter specifies
whether the database apply process (used for synchronization) disables triggers
when processing a file.
The default value *DGFE uses the data group file entry to determine whether triggers
should be disabled. The value *YES disables triggers on the target system during
synchronization.
If configuration options for the data group, or optionally for a data group file entry,
allow MIMIX to replicate trigger-generated entries and disable the triggers, when
synchronizing a file with triggers you must specify *DATA as the sending mode.
Including logical files: The Include logical files (INCLF) parameter allows you to
include any attached logical files in the synchronization request. Logical files that are
explicitly excluded from replication are not sent. This parameter is only valid when
*SAVRST is specified for the Sending mode prompt.
Physical files with referential constraints: Physical files with referential constraints
require a field in another physical file to be valid. When synchronizing physical files
with referential constraints, ensure all files in the referential constraint structure are
synchronized concurrently during a time of minimal activity on the source system.
Doing so will ensure the integrity of synchronization points.
Including related files: You can optionally choose whether the synchronization
request will include files related to the file specified by specifying *YES for the Include
related (RELATED) parameter. Related files are those physical files which have a
relationship with the selected physical file by means of one or more join logical files.
Join logical files are logical files attached to fields in two or more physical files.
The Include related (RELATED) parameter defaults to *NO. In some environments,
specifying *YES could result in a high number of files being synchronized and could
potentially strain available communications and take a significant amount of time to
complete.
A physical file being synchronized cannot be name mapped if it has logical files
associated with it. Logical files may be name mapped by using object entries.
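Pulling these options together, a save/restore-based file synchronization might be sketched as follows. The METHOD, INCLF, RELATED, and DSBTRG keywords are named above; the FILE keyword and the names shown are illustrative assumptions, so prompt SYNCDGFE with F4 to confirm:

```
SYNCDGFE DGDFN(MYDGDFN) FILE(MYLIB/MYFILE) METHOD(*SAVRST)
         INCLF(*YES) RELATED(*NO) DSBTRG(*DGFE)
```

This form saves the file with its attached logical files, without pulling in join-related physical files, and lets the file entry configuration decide whether triggers are disabled.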

About synchronizing tracking entries


Tracking entries provide status of IFS objects, data areas, and data queues that are
replicated using MIMIX advanced journaling. Object tracking entries represent data
areas or data queues. IFS tracking entries represent IFS objects. IFS tracking entries
also track the file identifier (FID) of the object on the source and target systems.
You can synchronize the object represented by a tracking entry by using the
synchronize option available on the Work with DG Object Tracking Entries display or
the Work with DG IFS Tracking Entries display. For object tracking entries, the option
calls the Synchronize Object (SYNCOBJ) command. For IFS tracking entries, the
option calls the Synchronize IFS Object (SYNCIFS) command.
The contents, attributes, and authorities of the item are synchronized between the
source and target systems.
Notes:
• Before starting data groups for the first time, any existing objects to be replicated
from the source system must be synchronized to the target system.
• If tracking entries do not exist, you must create them by doing one of the following:
• Change the data group IFS entry or object entry configuration as needed and
end and restart the data groups.
• Load tracking entries using the Load DG IFS Tracking Entries (LODDGIFSTE)
or Load DG Obj Tracking Entries (LODDGOBJTE) commands. See “Loading
tracking entries” on page 286.
• Tracking entries may not exist for existing IFS objects, data areas, or data queues
that have been configured for replication with advanced journaling since the last
start of the data group.
• For status changes to be effective for a tracking entry that is being synchronized,
the data group must be active. When the apply session receives notification that
the object represented by the tracking entry is synchronized successfully, the
tracking entry status changes to *ACTIVE.
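If tracking entries must first be created, the load commands named above can be run for the data group. This sketch assumes DGDFN is the only required parameter, which may not be the case; prompt each command with F4 to see additional selection options:

```
LODDGIFSTE DGDFN(MYDGDFN)
LODDGOBJTE DGDFN(MYDGDFN)
```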

Performing the initial synchronization
Ensuring that data is synchronized before you begin replication is crucial to
successful replication. How you perform the initial synchronization can be influenced
by the available communications bandwidth, the complexity of describing the data,
the size of the data, as well as time.
Note: If you have configured or migrated a MIMIX configuration to use integrated
support for IBM WebSphere MQ, you must use the procedure ‘Initial
synchronization for replicated queue managers’ in the MIMIX for IBM
WebSphere MQ book. Large IBM WebSphere MQ environments should plan
to perform this during off-peak hours.

Establish a synchronization point


Just before you start the initial synchronization, establish a known start point for
replication by changing journal receivers. The information gathered in this procedure
will be used when you start replication for the first time.
From the source system, do the following:
1. Quiesce your applications before continuing with the next step.
2. For each data group that will replicate from a user journal, use the following
command to change the user journal receiver. Record the new receiver names
shown in the posted message. On a command line, type:
(installation-library-name)/CHGDGRCV DGDFN(data-group-name)
TYPE(*DB)
3. Change the system journal receiver and record the new receiver name shown in
the posted message. On a command line, type:
CHGJRN JRN(QAUDJRN) JRNRCV(*GEN)
4. When you synchronize the database files and objects between systems, record
the time at which you submit the synchronization requests as this information is
needed when determining the journal location at which to initially start replication.
“Resources for synchronizing” on page 509 identifies available options.
5. Identify the synchronization starting point in the source user journal. This
information will be needed when starting replication.
a. Specify the source user journal for library/journal_name, specify the date of the
first synchronize request for mm/dd/yyyy, and specify a time just before the first
synchronize request for hh:mm:ss in the following command:
DSPJRN JRN(library/journal_name) RCVRNG(*CURRENT)
FROMTIME('mm/dd/yyyy' 'hh:mm:ss')
Note: You can also specify values for the ENTTYP parameter to narrow the
search. Table 68 shows values which identify save actions associated with synchronizing.

Table 68. Common values for using ENTTYP

Journaled Object Type    Journal Code    Common ENTTYP Values
File                     F               MS, SS
Data Area                E               ES, EW
Data Queue               Q               QX, QY
IFS object               B               FS, FW

b. Record the exact time and the sequence number of the journal entry
associated with the first synchronize request. Typically, a synchronize request
is represented by a journal entry for a save operation.
c. Type 5 (Display entire entry) next to the entry and press Enter.
d. Press F10 (Display only entry details).
e. The Display Journal Entry Details display appears. Page down to locate the
Receiver name. This should be the same name as recorded in Step 2.
6. Identify the synchronization starting point in the source system journal. This
information will be needed when starting replication.
a. Specify the date from Step 5a for mm/dd/yyyy and specify the time from
Step 5b for hh:mm:ss in the following command:
DSPJRN JRN(QSYS/QAUDJRN) RCVRNG(*CURRENT)
FROMTIME('mm/dd/yyyy' 'hh:mm:ss')
b. Record the sequence number associated with the first journal entry with the
specified time stamp.
c. Type 5 (Display entire entry) next to the entry and press Enter.
d. Press F10 (Display only entry details).
e. The Display Journal Entry Details display appears. Page down to locate the
Receiver name. This should be the same name as recorded in Step 3.

Resources for synchronizing


The available choices for synchronizing are, in order of preference:
• IBM Save and Restore commands: IBM save and restore commands are best
suited for initial synchronization and are used when performing a manual
synchronization. While MIMIX SYNCDG and SYNC commands can be used, the
communications bandwidth required for the size and quantity of objects may
exceed capacity.
• SYNC commands: The Synchronize commands (SYNCOBJ, SYNCIFS,
SYNCDLO) should be your starting point. These commands provide significantly
more flexibility in object selection and also provide the ability to synchronize object
authorities. By specifying a data group on any of these commands, you can
synchronize the data defined by its data group entries.
You can also use the Synchronize Data Group File Entry (SYNCDGFE) command to
synchronize database files and members. This command provides the ability to
choose between MIMIX copy active file processing and save/restore processing
and provides choices for handling trigger programs during synchronization.
If you have configured or migrated to integrated advanced journaling, follow the
SYNCIFS procedures for IFS objects, SYNCOBJ procedures for data areas and
data queues, and SYNCDGFE procedures for files containing LOB data. You can
also use options to synchronize objects associated with tracking entries from the
Work with DG IFS Trk. Entries display and the Work with DG Obj. Trk. Entries
display.
• SYNCDG command: The SYNCDG command is intended especially for
performing the initial synchronization of one or more data groups by MIMIX
IntelliStart™. The SYNCDG command synchronizes by using the auditing and
automatic recovery support provided by MIMIX AutoGuard. This command can be
long-running. Because this command requires that journaling and data group
replication processes be started before synchronization starts, it may not be
appropriate for some environments.
This chapter (“Synchronizing data between systems” on page 497) includes additional
information about the MIMIX SYNC commands.

Using SYNCDG to perform the initial synchronization


This topic describes the procedure for performing the initial synchronization using the
Synchronize Data Group (SYNCDG) command prior to beginning replication. The
initial synchronization ensures that data is the same on each system and reduces the
time and complexity involved with starting replication for the first time.
The SYNCDG command uses the auditing and automatic recovery functions of
MIMIX® AutoGuard™ to synchronize an enabled data group between the source
system and the target system. The SYNCDG command is intended to be used for
initial synchronization of a data group and can be used in other situations where data
groups are not synchronized. The SYNCDG command can only be run on the
management system, and only one instance of the command per data group can be
running at any time. This command submits a batch program that can run for several
days. The SYNCDG command can be performed automatically through MIMIX
IntelliStart.
Note: The SYNCDG command will not process a request to synchronize a data
group that is currently using the MIMIX CDP™ feature. This feature is in use if
a recovery window is configured or when a recovery point is set for a data
group. Also, do not configure a recovery window or set a recovery point if a
SYNCDG request is in progress for the data group. The MIMIX CDP feature
may not protect data under these circumstances.


Ensure the following conditions are met for each data group that you want to
synchronize, before running this command:
• Apply any IBM PTFs (or their supersedes) associated with IBM i releases as
they pertain to your environment. Log in to Support Central and access the
Technical Documents page for a list of required and recommended IBM PTFs.
• Journaling is started on the source system for everything defined to the data
group.
• All replication processes are active.
• The user ID submitting the SYNCDG has *MGT authority in product level
security if it is enabled for the installation.
• No other audits (comparisons or recoveries) are in progress when the
SYNCDG is requested.
• Collector services has been started.
• If DLOs are identified for replication, before running the SYNCDG command,
ensure that the DLOs exist only on the source system.
While the synchronization is in progress, other audits for the data group are prevented
from running.

To perform the initial synchronization using the SYNCDG command defaults


Do the following:
1. Start all data groups by entering the command STRDG DGDFN(*ALL).
2. Type the command SYNCDG and press Enter. Specify the following values,
pressing F4 for valid options on each parameter:
• Data group definition (DGDFN).
• Job description (JOBD).
3. Press Enter to perform the initial synchronization.
4. Verify your configuration is using MIMIX AutoGuard. This step includes performing
audits to verify that journaling and other aspects of your environment are ready to
use. Audits automatically check for and attempt to correct differences found
between the source system and the target system. See “Verifying the initial
synchronization” on page 512 for more information.
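As a sketch, Steps 1 through 3 run from a command line might look like the following. The data group name, system names, and job description are placeholders for your own values; prompt each command with F4 to verify the parameters for your environment:

```
STRDG DGDFN(*ALL)
SYNCDG DGDFN(MYDGDFN SYSTEM1 SYSTEM2) JOBD(MIMIXQGPL/MIMIXSYNC)
```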

Verifying the initial synchronization
This procedure uses audits to ensure your environment is ready to start replication.
Shipped policy settings for MIMIX allow audits to automatically attempt recovery
actions for any problems they detect. You should not use this procedure if you have
already synchronized your systems using the Synchronize Data Group (SYNCDG)
command or the automatic synchronization method in MIMIX IntelliStart.
The audits used in this procedure will:
• Verify that journaling is started on the source and target systems for the items you
identified in the deployed replication patterns. Without journaling, replication will
not occur.
• Verify that data is synchronized between systems. Audits will detect potential
problems with synchronization and attempt to automatically recover differences
found.
Do the following:
1. Check whether all necessary journaling is started for each data group. Enter the
following command:
(installation-library-name)/DSPDGSTS DGDFN(data-group-name)
VIEW(*DBFETE)
On the File and Tracking Entry Status display, the File Entries column identifies
how many file entries were configured from your replication patterns and indicates
whether any file entries are not journaled on the source or target systems. If your
configuration permits user journal replication of IFS objects, data areas, or data
queues, the Tracking Entries columns provide similar information.
2. Audit your environment. To access the audits, enter the following command:
(installation-library-name)/WRKAUD
3. Each audit listed on the Work with Audits display is a unique combination of data
group and MIMIX rule. When verifying an initial configuration, you need to perform
a subset of the available audits for each data group in a specific order, shown in
Table 69. Do the following:
a. To change the number of active audits at any one time, enter the following
command:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY)
MAXACT(*NOMAX)
b. Use F18 (Subset) to subset the audits by the name of the rule you want to run.
c. Type a 9 (Run rule) next to the audit for each data group and press Enter.
Repeat Step 3b and Step 3c for each rule in Table 69 until you have started all the
listed audits for all data groups.

Table 69. Rules for initial validation, listed in the order to be performed.

Rule Name
1. #DGFE
2. #OBJATR
3. #FILATR
4. #IFSATR
5. #FILATRMBR
6. #DLOATR

d. Reset the number of active audit jobs to values consistent with regular
auditing:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY)
MAXACT(5)
4. Wait for all audits to complete. Some audits may take time to complete. Then
check the results and resolve any problems. You may need to change subsetting
values again so you can view all rule and data group combinations at once. On
the Work with Audits display, check the Audit Status column for the following
value:
*NOTRCVD - The comparison performed by the rule detected differences. Some
of the differences were not automatically recovered. Action is required. View
notifications for more information and resolve the problem.
Note: For more information about resolving reported problems, see “Interpreting
audit results” on page 678.

Synchronizing database files
The procedures in this topic use the Synchronize DG File Entry (SYNCDGFE)
command to synchronize selected database files associated with a data group,
between two systems. If you use this command when performing the initial
synchronization of a data group, use the procedure from the source system to send
database files to the target system.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About synchronizing file entries (SYNCDGFE command)” on page 505.
To synchronize a database file between two systems using the SYNCDGFE
command defaults, do the following or use the alternative process described below:
1. From the Work with DG Definitions display, type 17 (File entries) next to the data
group to which the file you want to synchronize is defined and press Enter.
2. The Work with DG File Entries display appears. Type 16 (Sync DG file entry) next
to the file entry for the file you want to synchronize and press Enter.
Note: If you are synchronizing file entries as part of your initial configuration, you
can type 16 next to the first file entry and then press F13 (Repeat). When
you press Enter, all file entries will be synchronized.
Alternative Process:
You will need to identify the data group and data group file entry in this procedure. In
Step 8 and Step 9, you will need to make choices about the sending mode and trigger
support. For additional information, see “About synchronizing file entries
(SYNCDGFE command)” on page 505.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 41
(Synchronize DG File Entry) and press Enter.
3. The Synchronize DG File Entry (SYNCDGFE) display appears. At the Data group
definition prompts, specify the name of the data group to which the file is
associated.
4. At the System 1 file and Library prompts, specify the name of the database file
you want to synchronize and the library in which it is located on system 1.
5. If you want to synchronize only one member of a file, specify its name at the
Member prompt.
6. At the Data source prompt, ensure that the value matches the system that you
want to use as the source for the synchronization.
7. The default value *YES for the Release wait prompt indicates that MIMIX will hold
the file entry in a release-wait state until a synchronization point is reached. Then
it will change the status to active. If you want to hold the file entry for your
intervention, specify *NO.


8. At the Sending mode prompt, specify the value for the type of data to be
synchronized.
9. At the Disable triggers on file prompt, specify whether the database apply process
should disable triggers when processing the file. Accept *DGFE to use the value
specified in the data group file entry or specify another value. Skip to Step 14.
10. At the Save active prompt, accept *SYSDFN so that objects in use are saved while
in use, or specify another value.
11. At the Save active wait time prompt, specify the number of seconds to wait for a
commit boundary or a lock on the object before continuing the save.
12. At the Allow object differences prompt, accept the default value *ALL.
13. If you specified *SAVRST for Step 8, at the Include logical files prompt, indicate
whether you want to include attached logical files when sending the file. The
default, *YES, includes attached logical files that are not explicitly excluded from
replication.
14. To change any of the additional parameters, press F10 (Additional parameters).
Verify that the values shown for Include related files, Maximum sending file size
(MB) and Submit to batch are what you want.
15. To synchronize the file, press Enter.
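As a sketch, the alternative process above corresponds to a command like the following. The data group, library, and file names are placeholders, and the parameter keywords are assumptions based on the prompts described in the steps; prompt SYNCDGFE with F4 to confirm them for your release:

```
SYNCDGFE DGDFN(MYDGDFN SYSTEM1 SYSTEM2) FILE1(MYLIB/MYFILE)
  MBR(*ALL) RLSWAIT(*YES)
```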

Synchronizing objects
The procedures in this topic use the Synchronize Object (SYNCOBJ) command to
synchronize library-based objects between two systems. The objects to be
synchronized can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503

To synchronize library-based objects associated with a data group


To synchronize objects between two systems that are identified for replication by data
group object entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42
(Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ)
command appears.
3. At the Data group definition prompts, specify the data group for which you want to
synchronize objects.
Note: If you run this command from a target system, you must specify the name
of a data group to avoid overwriting the objects on the source system.
4. To synchronize all objects identified by data group object entries for this data
group, skip to Step 5. To synchronize a subset of objects defined to the data
group, at the Object prompts specify elements for one or more object selectors to
act as filters to the objects defined to the data group. For more information, see
“Object selection for Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the Object and library prompts, specify the name or the generic value you
want.
b. At the Object type prompt, accept *ALL or specify a specific object type to
synchronize.
c. At the Object attribute prompt, accept *ALL to synchronize the entire list of
supported attributes or press F4 to select from a list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object and System 2 library prompts are ignored when a
data group is specified.
e. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.


6. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *SYSDFN to allow saving objects in use, or
specify another value.
b. At the Save active wait time prompt, specify the number of seconds to wait for
a commit boundary or a lock on the object before continuing the save. This
parameter is ignored when *NO is specified for Save active.
7. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
Note: When a data group is specified the following parameters are ignored:
System 1 ASP group or device, System 2 ASP device number, and
System 2 ASP device name.
8. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
9. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
10. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
11. To start the synchronization, press Enter.
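For example, a minimal sketch that synchronizes everything identified by the data group object entries, accepting the command defaults. The data group and system names are placeholders, and the SYNCAUT keyword for the Synchronize authorities prompt is an assumption to confirm by prompting with F4:

```
SYNCOBJ DGDFN(MYDGDFN SYSTEM1 SYSTEM2) SYNCAUT(*YES)
```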

To synchronize library-based objects without a data group


To synchronize objects between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42
(Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ)
command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the Object prompts, specify elements for one or more object selectors that
identify objects to synchronize.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For more information, see “Object selection for Compare and
Synchronize commands” on page 425.
For each selector, do the following:

a. At the Object and library prompts, specify the name or the generic value you
want.
b. At the Object type prompt, accept *ALL or specify a specific object type to
synchronize.
c. At the Object attribute prompt, accept *ALL to synchronize the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
e. At the System 2 object and System 2 library prompts, if the object and library
names on system 2 are equal to the system 1 names, accept the defaults.
Otherwise, specify the name of the object and library on system 2 to which you
want to synchronize the objects.
f. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system to
which to synchronize the objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
Note: When you specify *ONLY and a data group name is not specified, if any
files that are processed by this command are cooperatively processed and
the data group that contains these files is active, the command could fail if
the database apply job has a lock on these files.
7. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *SYSDFN to allow saving objects in use, or
specify another value.
b. At the Save active wait time prompt, specify the number of seconds to wait for
a commit boundary or a lock on the object before continuing the save. This
parameter is ignored when *NO is specified for Save active.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. At the System 1 ASP group or device prompt, specify the name of the auxiliary
storage pool (ASP) group or device where objects configured for replication may
reside on system 1. Otherwise, accept the default to use the current job’s ASP
group name.
10. At the System 2 ASP device number prompt, specify the number of the auxiliary
storage pool (ASP) where objects configured for replication may reside on system
2. Otherwise, accept the default to use the same ASP number from which the
object was saved (*SAVASP). Only the libraries in the system ASP and any basic
user ASPs from system 2 will be in the library name space.
11. At the System 2 ASP device name prompt, specify the name of the auxiliary
storage pool (ASP) device where objects configured for replication may reside on
system 2. Otherwise, accept the default to use the value specified for the system
1 ASP group or device (*ASPGRP1).


12. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
13. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
14. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
15. To start the synchronization, press Enter.
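As a sketch, synchronizing a single library-based object without a data group could look like the following. All names are placeholders, and the OBJ selector format and the SYS2 keyword are assumptions based on the prompts above; prompt the command with F4 to confirm them:

```
SYNCOBJ DGDFN(*NONE) OBJ((MYLIB/MYOBJ *PGM)) SYS2(SYSTEM2)
```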

Synchronizing IFS objects
The procedures in this topic use the Synchronize IFS Object (SYNCIFS) command to
synchronize IFS objects between two systems. The IFS objects to be synchronized
can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503

To synchronize IFS objects associated with a data group


To synchronize IFS objects between two systems that are identified for replication by
data group IFS entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43
(Synchronize IFS object) and press Enter. The Synchronize IFS Object
(SYNCIFS) command appears.
3. At the Data group definition prompts, specify the data group for which you want to
synchronize objects.
Note: If you run this command from a target system, you must specify the name
of a data group to avoid overwriting the objects on the source system.
4. To synchronize all IFS objects identified by data group IFS entries for this data
group, skip to Step 5. To synchronize a subset of IFS objects defined to the data
group, at the IFS objects prompts specify elements for one or more object
selectors to act as filters to the objects defined to the data group. For more
information, see “Object selection for Compare and Synchronize commands” on
page 425.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the
name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID
values. See Step 11.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
synchronize.


e. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object path name and System 2 name pattern values are
ignored when a data group is specified.
f. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
6. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *YES to allow saving objects in use, or
specify another value.
b. At the Save active option prompt, accept *NONE as the default, or specify
another value. This parameter is ignored when *NO is specified for Save
active.
Note: Both parameters are ignored if the data group specifies *OPTIMIZED for
the IFS transmission method.
7. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
8. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. Continue with Step 11.
9. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
10. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
11. To optionally specify a file identifier (FID) for the object on either system, do the
following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 1. Values for System 1 file identifier prompt can be used
alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 2. Values for System 2 file identifier prompt can be used
alone or in combination with the IFS object path name.
Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on
page 317.
12. To start the synchronization, press Enter.
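For example, a minimal sketch that synchronizes all IFS objects identified by the data group IFS entries, accepting the command defaults (the data group and system names are placeholders):

```
SYNCIFS DGDFN(MYDGDFN SYSTEM1 SYSTEM2)
```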

To synchronize IFS objects without a data group
To synchronize IFS objects not associated with a data group between two systems,
do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43
(Synchronize IFS object) and press Enter. The Synchronize IFS Object
(SYNCIFS) command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the IFS objects prompts, specify elements for one or more object selectors that
identify IFS objects to synchronize. You can specify as many as 300 object
selectors by using the + for more prompt for each selector. For more information,
see the topic on object selection in the MIMIX Administrator Reference book.
For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the
name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID
values. See Step 12.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
synchronize.
e. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
f. At the System 2 object path name and System 2 name pattern prompts, if the
IFS object path name and name pattern on system 2 are equal to the system 1
names, accept the defaults. Otherwise, specify the path name and pattern on
system 2 to which you want to synchronize the IFS objects.
g. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on
which to synchronize the IFS objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
7. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *YES to allow saving objects in use, or
specify another value.


b. At the Save active option prompt, accept *NONE as the default, or specify
another value. This parameter is ignored when *NO is specified for Save
active.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. Continue with Step 12.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To optionally specify a file identifier (FID) for the object on either system, do the
following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 1. Values for System 1 file identifier prompt can be used
alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 2. Values for System 2 file identifier prompt can be used
alone or in combination with the IFS object path name.
Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on
page 317.
13. To start the synchronization, press Enter.
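As a sketch, synchronizing a directory without a data group could look like the following. The path and system names are placeholders, and the OBJ selector format and the SYS2 keyword are assumptions based on the prompts above; prompt the command with F4 to confirm them:

```
SYNCIFS DGDFN(*NONE) OBJ(('/mydir' *ALL)) SYS2(SYSTEM2)
```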

Synchronizing DLOs
The procedures in this topic use the Synchronize DLO (SYNCDLO) command to
synchronize document library objects (DLOs) between two systems. The DLOs to be
synchronized can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503

To synchronize DLOs associated with a data group


To synchronize DLOs between two systems that are identified for replication by data
group DLO entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44
(Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO)
command appears.
3. At the Data group definition prompts, specify the data group for which you want to
synchronize DLOs.
Note: If you run this command from a target system, you must specify the name
of a data group to avoid overwriting the objects on the source system.
4. To synchronize all objects identified by data group DLO entries for this data group,
skip to Step 5. To synchronize a subset of objects defined to the data group, at the
Document library objects prompts specify elements for one or more object
selectors to act as filters to DLOs defined to the data group. For more information,
see “Object selection for Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the DLO path name prompt, accept *ALL or specify the name or the generic
value you want.
b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the
scope of DLOs to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the DLO path name.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to
synchronize.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 DLO path name and System 2 DLO name pattern values
are ignored when a data group is specified.
g. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
6. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *YES to allow saving objects in use, or
specify another value.
b. At the Save active wait time prompt, if needed, you can change the number of
seconds to wait for a lock on the object before continuing the save. This
parameter is ignored when *NO is specified for Save active.
Note: Both values are ignored if the data group specifies *OPTIMIZED for the
DLO transmission method.
7. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
8. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
9. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
10. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
11. To start the synchronization, press Enter.
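
The same request can be entered directly on a command line. The following sketch submits a batch request for a data group named INVENTORY defined between systems SYSA and SYSB; all names are placeholders, and the parameter keywords are assumptions based on the prompt text above, so prompt SYNCDLO with F4 to confirm them on your release.

```
/* Synchronize all DLOs in folder ACCTG, including its subtree,
   that are defined to data group INVENTORY.  Names and keyword
   spellings are illustrative assumptions.                      */
SYNCDLO DGDFN(INVENTORY SYSA SYSB) +
        DLO((ACCTG *ALL *ALL *ALL *ALL *INCLUDE)) +
        SYNCAUT(*YES) BATCH(*YES)
```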

To synchronize DLOs without a data group


To synchronize DLOs between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44
(Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO)
command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the Document library objects prompts, specify elements for one or more object
selectors that identify DLOs to synchronize.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For more information, see “Object selection for Compare and
Synchronize commands” on page 425.
For each selector, do the following:
a. At the DLO path name prompt, accept *ALL or specify the name or the generic
value you want.
b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the
scope of DLOs to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the DLO path name.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to
synchronize.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if
the DLO path name and name pattern on system 2 are equal to the system 1
names, accept the defaults. Otherwise, specify the path name and pattern on
system 2 to which you want to synchronize the DLOs.
h. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on
which to synchronize the DLOs.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
7. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *YES to allow saving objects in use, or
specify another value.
b. At the Save active wait time prompt, if needed, you can change the number of
seconds to wait for a lock on the object before continuing the save. This
parameter is ignored when *NO is specified for Save active.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To start the synchronization, press Enter.
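
As with the data group variant, the prompts map to command parameters. A hedged sketch of a direct request between two systems follows; the system name SYSB, folder ACCTG, and the keywords shown are placeholders or assumptions, so verify them by prompting the command with F4.

```
/* Synchronize folder ACCTG to remote system SYSB without a
   data group.  Because DGDFN(*NONE) is specified, SYS2 is
   required.  Keywords are illustrative assumptions.        */
SYNCDLO DGDFN(*NONE) +
        DLO((ACCTG *ALL *ALL *ALL *ALL *INCLUDE)) +
        SYS2(SYSB) SYNCAUT(*YES) BATCH(*NO)
```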

Synchronizing data group activity entries
The procedures in this topic use the Synchronize DG Activity Entry (SYNCDGACTE)
command to synchronize an object that is identified by a data group activity entry with
any status value—*ACTIVE, *DELAYED, *FAILED, or *COMPLETED.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About synchronizing data group activity entries (SYNCDGACTE)” on page 504
To synchronize an object identified by a data group activity entry, do the following:
1. From the Work with Data Group Activity Entry display, type 16 (Synchronize) next
to the activity entry that identifies the object you want to synchronize and press
Enter.
2. The Confirm Synchronize of Object display appears. Press Enter to confirm the
synchronization.
Alternative Process:
You will need to identify the data group and data group activity entry in this procedure.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 45
(Synchronize DG Activity Entry) and press Enter.
3. At the Data group definition prompts, specify the data group name.
4. At the Object type prompt, specify a specific object type to synchronize or press
F4 to see a valid list.
5. Additional parameters appear based on the object type selected. Do one of the
following:
• For files, you will see the Object, Library, and Member prompts. Specify the
object, library and member that you want to synchronize.
• For objects, you will see the Object and Library prompts. Specify the object and
library of the object you want to synchronize.
• For IFS objects, you will see the IFS object prompt. Specify the IFS object that
you want to synchronize.
• For DLOs, you will see the Document library object and Folder prompts.
Specify the folder path and DLO name of the DLO you want to synchronize.
6. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.


7. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
8. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
9. To start the synchronization, press Enter.
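
The alternative process can also be scripted. A minimal sketch for a file identified by an activity entry follows; the data group INVENTORY, library APPLIB, file CUSTMAST, and the keyword names are assumptions based on the prompts above — prompt SYNCDGACTE with F4 to confirm them.

```
/* Synchronize the object identified by an activity entry for
   file APPLIB/CUSTMAST.  Keyword names are illustrative
   assumptions.                                               */
SYNCDGACTE DGDFN(INVENTORY SYSA SYSB) +
           OBJTYPE(*FILE) OBJ(CUSTMAST) LIB(APPLIB) +
           BATCH(*YES)
```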

Synchronizing tracking entries
Tracking entries are MIMIX constructs which identify IFS objects, data areas, or data
queues configured for replication with MIMIX advanced journaling. You can use a
tracking entry to synchronize the contents, attributes, and authorities of the item it
represents.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503
• “About synchronizing tracking entries” on page 507

To synchronize an IFS tracking entry


To synchronize an object represented by an IFS tracking entry, do the following:
1. From the Work with DG IFS Tracking Entries (WRKDGIFSTE) display, type option
16 (Synchronize) next to the IFS tracking entry you want to synchronize. If you
want to change options on the SYNCIFS command, press F4 (Prompt).
2. To synchronize the associated IFS object, press Enter.
3. When the apply session has been notified that the object has been synchronized,
the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).
4. If the synchronization fails, correct the errors and repeat the previous steps.

To synchronize an object tracking entry


To synchronize an object represented by an object tracking entry, do the following:
1. From the Work with DG Object Tracking Entries (WRKDGOBJTE) display, type
option 16 (Synchronize) next to the object tracking entry you want to synchronize.
If you want to change options on the SYNCOBJ command, press F4 (Prompt).
2. To synchronize the associated data area or data queue, press Enter.
3. When the apply session has been notified that the object has been synchronized,
the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).
4. If the synchronization fails, correct the errors and repeat the previous steps.
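
Option 16 simply prompts the underlying synchronize command, so the same request can be issued directly. A hedged sketch for a data area replicated with advanced journaling follows; the names are placeholders and the object selector format is an assumption.

```
/* Synchronize data area APPLIB/TOTALS, which is represented
   by an object tracking entry in data group INVENTORY.
   Names and selector format are illustrative assumptions.  */
SYNCOBJ DGDFN(INVENTORY SYSA SYSB) +
        OBJ((APPLIB TOTALS *DTAARA))
```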

CHAPTER 21 Introduction to programming

MIMIX includes a variety of functions that you can use to extend MIMIX capabilities
through automation and customization.
The topics in this chapter include:
• “Support for customizing” on page 532 describes several functions you can use to
customize your replication environment.
• “Completion and escape messages for comparison commands” on page 534 lists
completion, diagnostic, and escape messages generated by comparison
commands.
• The MIMIX message log provides a common location to see messages from all
MIMIX products. “Adding messages to the MIMIX message log” on page 541
describes how you can include your own messaging from automation programs in
the MIMIX message log.
• MIMIX supports batch output jobs on numerous commands and provides several
forms of output, including outfiles. For more information, see “Output and batch
guidelines” on page 542.
• “Displaying a list of commands in a library” on page 547 describes how to display
the superset of all commands known to License Manager or to subset the list by a
particular library.
• “Running commands on a remote system” on page 548 describes how to run a
single command or multiple commands on a remote system.
• “Procedures for running commands RUNCMD, RUNCMDS” on page 549
provides procedures for using run commands with a specific protocol or by
specifying a protocol through existing MIMIX configuration elements.
• “Using lists of retrieve commands” on page 555 identifies how to use MIMIX list
commands to include retrieve commands in automation.
• Commands are typically set with default values that reflect the recommendation of
Vision Solutions. “Changing command defaults” on page 556 provides a method
for customizing default values should your business needs require it.

Support for customizing
MIMIX includes several functions that you can use to customize processing within
your replication environment.

User exit points


User exit points are predefined points within a MIMIX process at which you can call
customized programs. User exit points allow you to insert customized programs at
specific points in an application process to perform additional processing before
continuing with the application's processing.
MIMIX provides user exit points for journal receiver management. For more
information, see Chapter 24, “Customizing with exit point programs.”

Collision resolution
In the context of high availability, a collision is a clash of data that occurs when a
target object and a source object are both updated at the same time. When the
change to the source object is replicated to the target object, the data does not match
and the collision is detected.
With MIMIX user journal replication, the definition of a collision is expanded to include
any condition where the status of a file or a record is not what MIMIX determines it
should be when MIMIX applies a journal transaction. Examples of these detected
conditions include the following:
• Updating a record that does not exist
• Deleting a record that does not exist
• Writing to a record that already exists
• Updating a record for which the current record information does not match the
before image
The database apply process contains 12 collision points at which MIMIX can attempt
to resolve a collision.
When a collision is detected, by default the file is placed on hold due to an error
(*HLDERR) and user action is needed to synchronize the files. MIMIX provides
additional ways to automatically resolve detected collisions without user intervention.
This process is called collision resolution. With collision resolution, you can specify
different resolution methods to handle these different types of collisions. If a collision
does occur, MIMIX attempts the specified collision resolution methods until either the
collision is resolved or the file is placed on hold.
You can specify collision resolution methods for a data group or for individual data
group file entries. If you specify *AUTOSYNC for the collision resolution element of
the file entry options, MIMIX attempts to fix any problems it detects by synchronizing
the file.
You can also specify a named collision resolution class. A collision resolution class
allows you to define what type of resolution to use at each of the collision points.
Collision resolution classes allow you to specify several methods of resolution to try


for each collision point and support the use of an exit program. These additional
choices for resolving collisions allow customized solutions for resolving collisions
without requiring user action. For more information, see “Collision resolution” on
page 408.
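
As a hypothetical illustration of specifying *AUTOSYNC for an individual file entry, a request might look like the sketch below. The element positions within FEOPT and the FILE1 format are assumptions, so prompt CHGDGFE with F4 to locate the collision resolution element on your release.

```
/* Hypothetical sketch: use automatic synchronization as the
   collision resolution method for one data group file entry.
   Element positions and name formats are assumptions.       */
CHGDGFE DGDFN(INVENTORY SYSA SYSB) +
        FILE1(APPLIB/CUSTMAST) +
        FEOPT(*DGDFT *DGDFT *AUTOSYNC)
```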

Completion and escape messages for comparison commands
When the comparison commands finish processing, a completion or escape message
is issued. In the event of an escape message, a diagnostic message is issued prior to
the escape message. The diagnostic message provides additional information
regarding the error that occurred.
All completion or escape messages are sent to the MIMIX message log. To find
messages for comparison commands, specify the name of the command as the
process type. For more information about using the message log, see the MIMIX
Operations book.

CMPFILA messages
The following are the messages for CMPFILA, with a comparison level specification of
*FILE:
• Completion LVI3E01 – This message indicates that all files were compared
successfully.
• Diagnostic LVE3E0D – This message indicates that a particular attribute
compared differently.
• Diagnostic LVE3385 – This message indicates that differences were detected for
an active file.
• Diagnostic LVE3E12 – This message indicates that a file was not compared. The
reason the file was not compared is included in the message.
• Escape LVE3E05 – This message indicates that files were compared with
differences detected. If the cumulative differences include files that were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3381 – This message indicates that compared files were different but
active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E09 – This message indicates that the CMPFILA command ended
abnormally.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
The following are the messages for CMPFILA, with a comparison level specification of
*MBR:
• Completion LVI3E05 – This message indicates that all members compared
successfully.
• Diagnostic LVE3388 – This message indicates that differences were detected for
an active member.


• Escape LVE3E16 – This message indicates that members were compared with
differences detected. If the cumulative differences include members that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
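
Because differences are reported through escape messages, automation programs can monitor for them. A minimal CL sketch follows; the data group name and the recovery actions are placeholders, and any additional CMPFILA parameters your environment needs are omitted for brevity.

```
PGM
  /* Compare file attributes for data group INVENTORY. */
  CMPFILA DGDFN(INVENTORY SYSA SYSB)
  /* LVE3E05: files compared with differences detected */
  MONMSG MSGID(LVE3E05) EXEC(SNDPGMMSG +
    MSG('CMPFILA detected differences'))
  /* LVE3E09: CMPFILA ended abnormally                 */
  MONMSG MSGID(LVE3E09) EXEC(SNDPGMMSG +
    MSG('CMPFILA ended abnormally'))
ENDPGM
```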

CMPOBJA messages
The following are the messages for CMPOBJA:
• Completion LVI3E02 – This message indicates that objects were compared but no
differences were detected.
• Diagnostic LVE3384 – This message indicates that differences were detected for
an active object.
• Escape LVE3E06 – This message indicates that objects were compared and
differences were detected. If the cumulative differences include objects that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3380 – This message indicates that compared objects were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
The LVI3E02 message includes message data containing the number of objects
compared, the system 1 name, and the system 2 name. The LVE3E06 message
includes the same message data as LVI3E02, and also includes the number of
differences detected.

CMPIFSA messages
The following are the messages for CMPIFSA:
• Completion LVI3E03 – This message indicates that all IFS objects were compared
successfully.
• Diagnostic LVE3E0F – This message indicates that a particular attribute was
compared differently.
• Diagnostic LVE3386 – This message indicates that differences were detected for
an active IFS object.
• Diagnostic LVE3E14 – This message indicates that an IFS object was not
compared. The reason the IFS object was not compared is included in the
message.
• Escape LVE3E07 – This message indicates that IFS objects were compared with
differences detected. If the cumulative differences include IFS objects that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3382 – This message indicates that compared IFS objects were

different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Escape LVE3E0B – This message indicates that the CMPIFSA command ended
abnormally.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.

CMPDLOA messages
The following are the messages for CMPDLOA:
• Completion LVI3E04 – This message indicates that all DLOs were compared
successfully.
• Diagnostic LVE3E11 – This message indicates that a particular attribute
compared differently.
• Diagnostic LVE3387 – This message indicates that differences were detected for
an active DLO.
• Diagnostic LVE3E15 – This message indicates that a DLO was not compared.
The reason the DLO was not compared is included in the message.
• Escape LVE3E08 – This message indicates that DLOs were compared and
differences were detected. If the cumulative differences include DLOs that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3383 – This message indicates that compared objects were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Escape LVE3E0C – This message indicates that the CMPDLOA command ended
abnormally.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.

CMPRCDCNT messages
The following are the messages for CMPRCDCNT:
• Escape LVE3D4D – This message indicates that ACTIVE(*YES) outfile
processing failed and identifies the reason code.
• Escape LVE3D5A – This message indicates that system journal replication is not
active.
• Escape LVE3D5F – This message indicates that an apply session exceeded the
unprocessed entry threshold.


• Escape LVE3D6D – This message indicates that user journal replication is not
active.
• Escape LVE3D6F – This message identifies the number of members compared
and how many compared members had differences.
• Escape LVE3D72 – This message identifies a child process that ended
unexpectedly.
• Escape LVE3E17 – This message indicates that no object was found for the
specified selection criteria.
• Informational LVI306B – This message identifies a child process that started
successfully.
• Informational LVI306D – This message identifies a child process that completed
successfully.
• Informational LVI3D45 – This message indicates that active processing
completed.
• Informational LVI3D50 – This message indicates that work files are not deleted.
• Informational LVI3D5A – This message indicates that system journal replication is
not active.
• Informational LVI3D5F – This message identifies an apply session that has
exceeded the unprocessed entry threshold.
• Informational LVI3D6D – This message indicates that user journal replication is
not active.
• Informational LVI3E05 – This message identifies the number of members
compared. No differences were detected.
• Informational LVI3E06 – This message indicates that no object was selected for
processing.

CMPFILDTA messages
The following are the messages for CMPFILDTA:
• Completion LVI3D59 – This message indicates that all members compared were
identical or that one or more members differed but were then completely repaired.
• Diagnostic LVE3031 – This message indicates that the name of the local system
was entered on the System 2 (SYS2) prompt. Using the name of the local system
on the SYS2 prompt is not valid.
• Diagnostic LVE3D40 – This message indicates that a record in one of the
members cannot be processed. In this case, another job is holding an update lock
on the record and the wait time has expired.
• Diagnostic LVE3D42 - This message indicates that a selected member cannot be
processed and provides a reason code.
• Diagnostic LVE3D46 – This message indicates that a file member contains one or
more field types that are not supported for comparison. These fields are excluded
from the data compared.

• Diagnostic LVE3D50 – This message indicates that a file member contains one or
more large object (LOB) fields and a value of *NONE was specified on the data
group definition (DGDFN) parameter or the process while active (ACTIVE)
parameter is *NO. In this case, files containing LOB fields cannot be repaired and
the request to process the file member is ignored.
• Diagnostic LVE3D64 – This message indicates that the compare detected minor
differences in a file member. In this case, one member has more records
allocated. Excess allocated records are deleted. This difference does not affect
replication processing, however.
• Diagnostic LVE3D65 – This message indicates that processing failed for the
selected member. The member cannot be compared. Error message LVE0101 is
returned.
• Escape LVE3358 – This message indicates that the compare has ended
abnormally, and is shown only when the conditions of messages LVI3D59,
LVE3D5D, and LVE3D59 do not apply.
• Escape LVE3D5D – This message indicates that insignificant differences were
found or remain after repair. The message provides a statistical summary of the
differences found. Insignificant differences may occur when a member has
deleted records while the corresponding member has no records yet allocated at
the corresponding positions. It is also possible that one or more selected
members contains excluded fields, such as large objects (LOBs).
• Escape LVE3D5E – This message indicates that the compare request ended
because the data group was not fully active. The request included active
processing (ACTIVE), which requires a fully active data group. Output may not be
complete or accurate.
• Escape LVE3D5F – This message indicates that the apply session exceeded the
specified threshold for unprocessed entries. The DB apply threshold
(DBAPYTHLD) parameter determines what action should be taken when the
threshold is exceeded. In this case, the value *END was specified for
DBAPYTHLD, thereby ending the requested compare and repair action.
• Escape LVE3D59 – This message indicates that significant differences were
found or remain after repair, or that one or more selected members could not be
compared. The message provides a statistical summary of the differences found.
• Escape LVE3D56 – This message indicates that no member was selected by the
object selection criteria.
• Escape LVE3D60 – This message indicates that the status of the data group
could not be determined. The WRKDG (MXDGSTS) outfile returned a value of
*UNKNOWN for one or more fields used in determining the overall status of the
data group.


• Escape LVE3D62 – This message indicates the number of mismatches that will
not be fully processed for a file due to the large number of mismatches found for
this request. The compare will stop processing the affected file and will continue to
process any other files specified on the same request.
• Escape LVE3D67 – This message indicates that the value specified for the File
entry status (STATUS) parameter is not valid. To process members in *HLDERR
status, a data group must be specified on the command and *YES must be
specified for the Process while active parameter.
• Escape LVE3D68 – This message indicates that a switch cannot be performed
due to members undergoing compare and repair processing.
• Escape LVE3D69 – This message indicates that the data group is not configured
for database. Data groups used with the CMPFILDTA command must be
configured for database, and all processes for that data group must be active.
• Escape LVE3D6C – This message indicates that the CMPFILDTA command
ended before it could complete the requested action. The processing step in
progress when the end was received is indicated. The message provides a
statistical summary of the differences found.
• Escape LVE3E41 – This message indicates that a database apply job cannot
process a journal entry with the indicated code, type, and sequence number
because a supporting function failed. The journal information and the apply
session for the data group are indicated. See the database apply job log for details
of the failed function.
• Informational LVI3727 – This message indicates that the database apply process
(DBAPY) is currently processing a repair request for a specific member. The
member was previously being held due to error (*HLDERR) and is now in
*CMPRLS state.
• Informational LVI3728 – This message indicates that the database apply process
(DBAPY) is currently processing a repair request for a specific member. The
member was previously being held due to error (*HLDERR) and has been
changed from *CMPRLS to *CMPACT state.
• Informational LVI3729 – This message indicates that the repair request for a
specific member was not successful. As a result, the CMPFILDTA command has
changed the data group file entry for the member back to *HLDERR status.
• Informational LVI372C – The CMPFILDTA command is ending controlled because
of a user request. The command did not complete the requested compare or
repair. Its output may be incomplete or incorrect.
• Informational LVI372D – The CMPFILDTA command exceeded the maximum rule
recovery time policy and is ending. The command did not complete the requested
compare or repair. Its output may be incomplete or incorrect.
• Informational LVI372E – The CMPFILDTA command is ending unexpectedly. It
received an unexpected request from the remote CMPFILDTA job to shut down
and is ending. The command did not complete the requested compare or repair.
Its output may be incomplete or incorrect.
• Informational LVI3D4B – This message indicates that work files are not
automatically deleted because the time specified on the Wait time (seconds)
(ACTWAIT) prompt expired or an internal error occurred.
• Informational LVI3D59 – This message indicates that the CMPFILDTA command
completed successfully. The message also provides a statistical summary of
compare processing.
• Informational LVI3D5E - This message indicates that the compare request ended
because the request required Active processing and the data group was not
active. Results of the comparison may not be complete or accurate.
• Informational LVI3D5F – This message indicates that the apply session exceeded
the specified threshold for unprocessed entries, thereby ending the requested
compare and repair action. In this case, the value *END was specified for the DB
apply threshold (DBAPYTHLD) parameter, which determines what action should
be taken when the threshold is exceeded.
• Informational LVI3D60 - This message indicates that the status of the data group
could not be determined. The MXDGSTS outfile returned a value of *UNKNOWN
for one or more status fields associated with systems, journals, system managers,
journal managers, system communications, remote journal link, and database
send and apply processes.
• Informational LVI3E06 – This message indicates that the data group specified
contains no data group file entries.
When active processing is requested with ACTWAIT(*NONE), or when the active wait
time expires, some members will have unconfirmed differences if none of the
differences initially found was verified by the MIMIX database apply process.
The CMPFILDTA outfile contains more detail on the results of each member compare,
including information on the types of differences that are found and the number of
differences found in each member.
Messages LVI3D59, LVE3D5D, LVE3D59, and LVE3D6C include message data
containing the number of members selected on each system, the number of members
compared, the number of members with confirmed differences, the number of
members with unconfirmed differences, the number of members successfully
repaired, and the number of members for which repair was unsuccessful.

Adding messages to the MIMIX message log


The Add Message Log Entry (ADDMSGLOGE) command allows you to add an entry
to the MIMIX message log. This is helpful when you want to include messages from
your automation programs into the MIMIX message log for easier tracking. To see the
parameters for this command, type the command and press F4 (Prompt). Help text for
the parameters describe the options available.
The message is written to the message log file. The message is also sent to the
primary and secondary message queues if the message meets the filter criteria for
those queues. The message can also be sent to a program message queue.
Messages generated on a network system will be automatically sent to the
management system. However, messages generated on a management system may
not be sent to any network systems. The system manager on the management
system does not send messages to network systems when it cannot determine which
system should receive the message.

Output and batch guidelines
This topic provides guidelines for display, print, and file output. In addition, the user
interface, the mechanics of selecting and producing output, and content issues such
as formatting are described.
Batch job submission guidelines are also provided. These guidelines address the
user interface as well as the mechanics of submitting batch jobs that are not part of
the mainline replication process.

General output considerations


Commands can produce many forms of output, including messages, display output
(interactive panels), printer output (spooled files), and file output. This section focuses
primarily on display, print, and file-related output. In most cases, the output
information can be selectively directed to a display, a printer, or an outfile. Messages,
on the other hand, are intended to provide diagnostic or status-related information, or
an indication of error conditions. Messages are not intended for general output.
Several commands support display, print, output files, or some combination thereof.
The Work (WRK) and Display (DSP) commands are the most common classes of
commands that support various forms of output. Other classes of commands, such as
Compare (CMP) and Verify (VFY), also support various forms of output in many
cases. As part of an ongoing effort to ensure consistent capabilities across similar
classes of commands, most commands in the same class support the same output
formats. For example, all Work (WRK) commands typically support display, print, and
output formats. This section describes the general guidelines used throughout the
product. However, there are some exceptions, which are described in the sections
about specific commands.
Display support is intended primarily for Display (DSP) commands for displaying
detailed information about a specific entry, or for Work (WRK) related commands that
display lists of entries. Audit-based commands, such as Compare (CMP) and Verify
(VFY), are often long-running requests and do not typically provide display support.
Spooled output support provides a more easily readable form of output for print or
distribution purposes. Output is generated in the form of spooled output files that can
easily be printed or distributed. Nearly all Display (DSP) or Work (WRK) commands
support this form of output. In some cases, other command-specific options may
affect the contents of the spooled output file.
Output files are intended primarily for automation purposes, providing MIMIX-related
information in a manner that facilitates programming automation for various
purposes—such as additional monitoring support, auditing support, automatic
detection, and the correction of error conditions. Output files are also beneficial as
intermediate data for advanced reporting using SQL query support.

Output parameter
Some commands can produce output of more than one type—display, print, or output
file. In these cases, the selection is made on the Output parameter. Table 70 lists the
values supported by the Output parameter.

Note: Not all values are supported for all commands. For some commands, a
combination of values is supported.

Table 70. Values supported by the Output parameter

*         Display only.
*NONE     No output is generated.
*PRINT    Spooled output is generated.
*OUTFILE  An output file is generated.
*BOTH     Both spooled output and an output file are generated.

Commands that support OUTPUT(*) and can also run in batch are required to support
the other forms of output as well.
Commands called from a program or submitted to batch with a specification of
OUTPUT(*) default to OUTPUT(*PRINT). Displaying a panel during batch processing
or when called from another program would otherwise fail.
With the exception of messages generated as a result of running a command,
commands that support OUTPUT(*NONE) will generate no other forms of output.
Commands that support combinations of output values do not support OUTPUT(*) in
combination with other output values.
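For example, the Work with Data Group Definitions (WRKDGDFN) command, referenced
later in this section, accepts several of these values. The following is a minimal
sketch, with all other parameters left at their defaults:

   WRKDGDFN OUTPUT(*)          (display the list interactively)
   WRKDGDFN OUTPUT(*PRINT)     (generate spooled output)
   WRKDGDFN OUTPUT(*OUTFILE)   (generate an output file; the OUTFILE
                                parameter is also required)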

Display output
Commands that support OUTPUT(*) provide the ability to display information
interactively. Display (DSP) and Work (WRK) commands commonly use display
support. Display commands typically display detailed information for a specific entity,
such as a data group definition. Work commands display a list of entries and provide a
summary view of the entries in the list. Display support is required to work interactively
with the MIMIX product.
Work commands often provide subsetting capabilities that allow you to select a
subset of information. Rather than viewing all configuration entries for all data groups,
for example, subsetting allows you to view the configuration entries for a specific data
group. This ability allows you to easily view data that is important or relevant to you at
a given time.

Print output
Spooled output is generated by specifying OUTPUT(*PRINT), and is intended to
provide a readable form of output for print or distribution purposes. The output is
generated as spooled files that can easily be printed or distributed. Most Display
(DSP) and Work (WRK) commands support this form of output. Other commands,
such as Compare (CMP) and Verify (VFY), also support spooled output in most
cases.

The Work (WRK) and Display (DSP) commands support different categories of
reports. The following are standard categories of reports available from these
commands:
• The detail report contains information for one item, such as an object, definition,
or entry. A detail report is usually obtained by using option 6 (Print) on a Work
(WRK) display, or by specifying *PRINT on the Output parameter on a Display
(DSP) command.
• The list summary report contains summary information for multiple objects,
definitions, or entries. A list summary is usually obtained by pressing F21 (Print)
on a Work (WRK) display. You can also get this report by specifying *BASIC on
the Detail parameter on a Work (WRK) command.
• The list detail report contains detailed information for multiple objects,
definitions, or entries. A list detail report is usually obtained by specifying *PRINT
on the Output parameter of a Work (WRK) command.
Certain parameters, which vary from command to command, can affect the contents
of spooled output. The following list represents a common set of parameters that
directly impact spooled output:
• EXPAND(*YES or *NO) - The expand parameter is available on the Work with
Data Group Object Entries (WRKDGOBJE), the Work with Data Group IFS
Entries (WRKDGIFSE), and the Work with Data Group DLO Entries
(WRKDGDLOE) commands. Configuration for objects, IFS objects, and DLOs can
be accomplished using generic entries, which represent one or more actual
objects on the system. The object entry ABC*, for example, can represent many
entries on a system. Expand support provides a means to determine which actual
objects on a system are represented by a MIMIX configuration. Specifying *NO on
the EXPAND parameter prints the configured data group entries.
• DETAIL(*FULL or *BASIC) - Available on the Work (WRK) commands, the detail
option determines the level of detail in the generated spool file. Specifying
DETAIL(*BASIC) prints a summary list of entries. For example, this specification
on the Work with Data Group Definitions (WRKDGDFN) command will print a
summary list of data group definitions. Specifying DETAIL(*FULL) prints each data
group definition in detail, including all attributes of the data group definition.
Note: This parameter is ignored when OUTPUT(*) or OUTPUT(*OUTFILE) is
specified.
• RPTTYPE(*DIF, *ALL, *SUMMARY or *RRN, depending on command) - The
Report Type (RPTTYPE) parameter controls the amount of information in the
spooled file. The values available for this parameter vary, depending on the
command.
The values *DIF, *ALL, and *SUMMARY are available on the Compare File
Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS
Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands.
Specifying *DIF reports only detected differences. A value of *SUMMARY reports
a summary of objects compared, including an indication of differences detected.
*ALL provides a comprehensive listing of objects compared as well as difference
detail.

The Compare File Data (CMPFILDTA) command supports *DIF and *ALL values,
as well as the value *RRN. Specifying *RRN allows you to output the relative
record number of the first 1,000 records that failed to compare. Using the *RRN
value can help resolve situations where a discrepancy is known to exist, but you
are unsure which system contains the correct data. In this case, *RRN provides
the information that enables you to display the specific records on the two
systems and to determine the system on which the file should be repaired.
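For example, a compare request could limit its spooled output to detected
differences only. This is a sketch; the three-part data group name shown is
hypothetical, and all other parameters are left at their defaults:

   CMPFILA DGDFN(MYAPP SYS1 SYS2) RPTTYPE(*DIF) OUTPUT(*PRINT)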

File output
Output files can be generated by specifying OUTPUT(*OUTFILE). Having full outfile
support across the MIMIX product is important for a number of reasons. Outfile
support is a key enabler for advanced automation purposes. The support also allows
MIMIX customers and qualified MIMIX consultants to develop and deliver solutions
tailored to the individual needs of the user.
As with the other forms of output, output files are commonly supported across certain
classes of commands. The Work (WRK) commands commonly support output files. In
addition, many audit-based reports, such as Comparison (CMP) commands, also
provide output file support. Output file support for Work (WRK) commands provides
access to the majority of MIMIX configuration and status-related data. The Compare
(CMP) commands also provide output files as a key enabler for automatic error
detection and correction capabilities.
When you specify OUTPUT(*OUTFILE), you must also specify the OUTFILE and
OUTMBR parameters. The OUTFILE parameter requires a qualified file and library
name. As a result of running the command, the specified output file will be used. If the
file does not exist, it will automatically be created.
Note: If a new file is created for CMPFILA, for example, the record format used is
from the supplied model database file MXCMPFILA, found in the installation
library. The text description of the created file is “Output file for CMPFILA.” The
file cannot reside in the product library.
The Outmember (OUTMBR) parameter allows you to specify which member to use in
the output file. If no member exists, the default value of *FIRST will create a member
name with the same name as the file name. A second element on the Outmember
parameter indicates the way in which information is stored for an existing member. A
value of *REPLACE will clear the current contents of the member and add the new
records. A value of *ADD will append the new records to the existing data.
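For example, a hypothetical compare request could direct its results to an output
file and append them to any records already in the member. The data group, library,
and file names shown are illustrative:

   CMPFILA DGDFN(MYAPP SYS1 SYS2) OUTPUT(*OUTFILE) +
     OUTFILE(MYLIB/CMPRESULTS) OUTMBR(*FIRST *ADD)

If MYLIB/CMPRESULTS does not exist, it is created automatically from the supplied
model database file MXCMPFILA.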
Expand support: Expand support was developed specifically as a feature for
data group configuration entries that support generic specifications. Data group object
entries, IFS entries, and DLO entries can all be configured using generic name
values. If you specify an object entry with an object name of ABC* in library XYZ and
accept the default values for all other fields, for example, all objects in library XYZ are
replicated. Specifying EXPAND(*NO) will write the specific configuration entries to the
output files. Using EXPAND(*YES) will list all objects from the local system that match
the configuration specified. Thus, if object name ABC* for library XYZ represented
1000 actual objects on the system, EXPAND(*YES) would add 1000 rows to the
output file. EXPAND(*NO) would add a single generic entry.
Note: EXPAND(*YES) support locates all objects on the local system.
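For example, assuming the ABC* object entry described above, the following
hypothetical requests illustrate the difference. The data group, library, and
output file names are illustrative:

   WRKDGOBJE DGDFN(MYAPP SYS1 SYS2) OUTPUT(*OUTFILE) +
     OUTFILE(MYLIB/CFGOUT) EXPAND(*NO)
   WRKDGOBJE DGDFN(MYAPP SYS1 SYS2) OUTPUT(*OUTFILE) +
     OUTFILE(MYLIB/OBJOUT) EXPAND(*YES)

The first request writes the single generic ABC* entry to the output file; the
second writes one row for each actual object in library XYZ that the entry
represents.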

General batch considerations
MIMIX functions that are identified as long-running processes typically allow you to
submit the requests to batch and avoid the unnecessary use of interactive resources.
Parameters typically associated with the Batch (BATCH) parameter include Job
description (JOBD) and Job name (JOB).

Batch (BATCH) parameter


Values supported on the Batch (BATCH) parameter include *YES and *NO. A value of
*YES indicates that the request will be submitted to batch. A value of *NO will cause
the request to run interactively. The default value varies from command to command,
and is based on the general usage of the command. If a command usually requires
significant resource to run, the default will likely be *YES.
Some commands, such as Start Data Group (STRDG), perform a number of
interactive tasks and start numerous jobs by submitting the requests to batch.
Likewise, some jobs, such as the data group apply process, run on a continuous basis
and do not end until specifically requested. These jobs represent the various
processes required to support an active data group. Commands of this type do not
provide a Batch (BATCH) parameter because batch processing is the only method
available.
For commands that are called from other programs, it is important to understand the
difference between BATCH(*YES) and BATCH(*NO). Implementing automatic audit
detection and correction support is easier to accomplish using BATCH(*NO). Let us
assume you are running the Compare File Attributes (CMPFILA) command as part of
an audit. If differences are detected, specifying BATCH(*NO) allows you to monitor for
specific exceptions and implement automatic correction procedures. This capability
would not be available if you submitted the request to BATCH(*YES).
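The approach described above can be sketched in CL. The message ID and correction
program shown are hypothetical placeholders; monitor for the escape messages that
the command actually produces in your environment:

   CMPFILA DGDFN(MYAPP SYS1 SYS2) BATCH(*NO)
   MONMSG MSGID(LVE0000) EXEC(CALL PGM(MYLIB/FIXDIFF))

Because the command runs in the invoking job, the Monitor Message (MONMSG)
command can trap the escape message and start the correction program immediately.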

Job description (JOBD) parameter


The Job Description (JOBD) parameter allows the user of the command to specify
which job description to use when submitting the batch request. Newer MIMIX
commands use the job descriptions MXAUDIT, MXSYNC, and MXDFT, which are
automatically created in the MIMIX installation library when MIMIX is installed. Jobs
and related output are associated to the user profile submitting the request. Older
commands that provided job description support for batch processing have not been
altered. Refer to individual commands for default values.

Job name (JOB) parameter


The Job name (JOB) parameter allows the user of the command to specify the job
name used for the submitted job request. By default, the job name is the name of the
command. The job name parameter is intended to make it easier to
identify the active job as well as the spooled files generated as a result of running the
command. For spooled files, the job name is also used for the user data information.
Only newer features provide this capability.

Displaying a list of commands in a library


You can use the IBM Select Command (SLTCMD) command to display a list of all
commands contained within a particular library on the system. This list includes any
commands you have added to the associated library, including copies of other
commands.
Note: This list does not indicate whether you are licensed to the command or if
authority to the command exists.
Do the following:
1. From the library you want, access the MIMIX Intermediate Main Menu.
2. Select option 13 (Utilities menu) and press Enter.
3. When the MIMIX Utilities Menu is displayed, select option 1 (Select all
commands).

Running commands on a remote system
The Run Command (RUNCMD) and Run Commands (RUNCMDS) commands
provide a convenient way to run a single command or multiple commands on a
remote system. The RUNCMD and RUNCMDS commands replace and extend the
capabilities available in the IBM commands, Submit Remote Command
(SBTRMTCMD) and Run Remote Command (RUNRMTCMD).
The MIMIX commands provide a protocol-independent way of running commands
using MIMIX constructs such as system definitions, data group definitions, and
transfer definitions. The MIMIX commands enable you to run commands and receive
messages from the remote system.
In addition, the RUNCMD and RUNCMDS commands use the current data group
direction to determine where the command is to be run. This capability simplifies
automation by eliminating the need to manually enter source and target information at
the time a command is run.
Note: Do not change the RUNCMD or RUNCMDS commands to
PUBLIC(*EXCLUDE) without giving MIMIXOWN proper authority.

Benefits - RUNCMD and RUNCMDS commands


Individually, the RUNCMD command can be used as a convenient tool to debug base
communications problems. The RUNCMD command also provides the ability to
prompt on any command. The RUNCMDS command, while supporting up to 300
commands, does not allow command prompting. When multiple commands are run
on a single RUNCMDS command, only one communications session is established.
The target program environment, including QTEMP and the local data area, is also
kept intact. Additionally, the RUNCMDS command has options for monitoring escape
and completion messages. All messages are sent to the same program level as the
program or command line running the command, enabling you to program remote
commands in the same manner as local commands.
Both RUNCMD and RUNCMDS allow you to specify commands to be sent through
the journal stream and run by the database apply process. With this protocol, the
request is sent through the journal stream using MIMIX U-MX journal entry codes.
The value *DGJRN on the Protocol prompt enables this capability, thereby replacing
conventional U-EX support. In addition, the When to run (RUNOPT) prompt can be
used to specify when the journal entry associated with the command is processed by
the target system for the specified data group. See “Procedures for running
commands RUNCMD, RUNCMDS” on page 549 for additional details about the
RUNOPT parameter.
Benefits of the RUNCMD and RUNCMDS commands also include the following:
• Provides a convenient and consistent interface to automate tasks across a
network.
• Centralizes the management and control of networked systems.
• Enables protocol-independent testing and verification of MIMIX communications
setups.

• Supports sending and receiving local data area (LDA) data.


• Allows commands to be run under other user profiles as long as the user ID and
password are the same on both systems. The password is validated before the
command is run on the remote system, thus the user must have authority to the
user profile being used.
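For example, the following hypothetical request runs two commands on the current
target system of a data group using a single communications session. The parameter
keywords shown correspond to the prompts described below but should be confirmed
by prompting the command with F4; all names are illustrative:

   RUNCMDS CMD((STRSBS SBSD(QBATCH)) (DLTF FILE(QTEMP/WORK))) +
     PROTOCOL(*DGTGT) DGDFN(MYAPP SYS1 SYS2)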

Procedures for running commands RUNCMD, RUNCMDS


There are two ways to use the RUNCMD or RUNCMDS commands. You can use
them with a specific protocol, or you can use them by specifying a protocol through
existing MIMIX configuration elements. To use the commands with a specific protocol,
use the procedure “Running commands using a specific protocol” on page 549. To
use the commands using an existing MIMIX configuration, use the procedure
“Running commands using a MIMIX configuration element” on page 551.

Running commands using a specific protocol


1. From the MIMIX Main Menu, select option 13 (Utilities menu). The MIMIX Utilities
Menu appears.
2. From the MIMIX Utilities Menu, select option 1 (Select all commands). The Select
Command display appears.
3. Page down and do one of the following:
• To run a single command on a remote system, type a 1 next to RUNCMD. The
Run Command (RUNCMD) display appears.
• To run multiple commands on a remote system, type a 1 next to RUNCMDS.
The Run Commands (RUNCMDS) display appears.
4. Specify the commands to run or messages to monitor for the command as follows:
a. At the Command prompt, specify the command to run on the remote system.
When using the RUNCMDS command, you can specify up to 300 commands.
b. If you are using the RUNCMDS command, you can specify as many as ten
escape, notify, or status messages to be monitored for each command. Specify
these at the Monitor for messages prompt.
5. Specify the protocol and protocol-specific implementation using Table 71.

Table 71. Specific protocols and specifications used for RUNCMD and RUNCMDS

Run on local system (*LOCAL)
   At the Protocol prompt, specify *LOCAL.

Run using TCP/IP (*TCP)
   Do the following:
   1. At the Protocol prompt, specify *TCP to run the commands using
      Transmission Control Protocol/Internet Protocol (TCP/IP)
      communications. Press Enter for additional prompts.
   2. At the Host name or address prompt, specify the host alias or address
      of the TCP protocol.
   3. At the Port number or alias prompt, specify the port number or port
      alias on the local system to communicate with the remote system. This
      value is a 14-character mixed-case TCP port alias or port number.

Run using SNA (*SNA)
   Do the following:
   1. At the Protocol prompt, specify *SNA to run the commands using
      Systems Network Architecture (SNA) communications. Press Enter for
      additional prompts.
   2. At the Remote location prompt, specify the name or address of the
      remote location.
   3. At the Local location prompt, specify the unique location name that
      identifies the system to remote devices.
   4. At the Remote network identifier prompt, specify the network
      identifier of the remote location.
   5. At the Mode prompt, specify the name of the mode description used
      for communications. The product default for this parameter is MIMIX.

Run using OptiConnect (*OPTI)
   Do the following:
   1. At the Protocol prompt, specify *OPTI to run the commands using
      OptiConnect fiber optic network communications. Press Enter for
      additional prompts.
   2. At the Remote location prompt, specify the name or address of the
      remote location.

6. Do one of the following:


• To access additional options, skip to Step 7.
• To run the commands or monitor for messages, press Enter.
7. Press F10 (Additional parameters).
8. At the Check syntax prompt, specify whether to check the syntax of the command
only. If *YES is specified, the syntax is checked but the command is not run.
9. At the Local data area length prompt, specify the amount of the current local data
area (LDA) to copy. This is useful for automating application processing that is
dependent on the local data area and for passing binary information to command
programs.
10. At the Return LDA prompt, specify whether to return the contents of the local data
area (LDA) from the remote system after the commands are run. The value
specified in the Local data area length prompt in Step 9 determines how much
data is returned.

11. At the User prompt, specify the user profile to use when the command is run on
the remote system.
12. To run the commands or monitor for messages, press Enter.

Running commands using a MIMIX configuration element


To use RUNCMD or RUNCMDS using a MIMIX configuration element, do the
following:
1. From the MIMIX Main Menu, select option 13 (Utilities menu). The MIMIX Utilities
Menu appears.
2. From the MIMIX Utilities Menu, select option 1 (Select all commands). The Select
Command display appears.
3. Page down and do one of the following:
• To run a single command on a remote system, type a 1 next to RUNCMD. The
Run Command (RUNCMD) display appears.
• To run multiple commands on a remote system, type a 1 next to RUNCMDS.
The Run Commands (RUNCMDS) display appears.
4. Specify the commands to run or messages to monitor for the command as follows:
a. At the Command prompt specify the command to run on the remote system.
When using the RUNCMDS command, you can specify up to 300 commands.
b. If you are using the RUNCMDS command, you can specify as many as ten
escape, notify, or status messages to be monitored for each command. Specify
these at the Monitor for messages prompt.
5. Specify the MIMIX configuration element using Table 72.

Table 72. MIMIX configuration protocols and specifications

Run on system defined by the default transfer definition
   Protocol prompt value: *SYSDFN
   System definition prompt:
   • Specify the name of the system definition or press F4 for a list of
     valid definitions.
   • Press Enter for additional prompts.

Run on the system specified in the transfer definition (TFRDFN parameter)
that is not the local system
   Protocol prompt value: *TFRDFN
   Transfer definition prompt:
   • Press F1 Help for assistance in specifying the three-part qualified
     name of the transfer definition.
   • Press Enter for additional prompts.

Run on the system specified in the data group definition that is not the
local system
   Protocol prompt value: *DGDFN
   Data group definition prompt:
   • Press F1 Help for assistance in specifying the three-part qualified
     name of the data group definition.

Run on the current source system defined for the data group
   Protocol prompt value: *DGSRC
   Data group definition prompt:
   • Press F1 Help for assistance in specifying the three-part qualified
     name of the data group definition.

Run on the current target system defined for the data group
   Protocol prompt value: *DGTGT
   Data group definition prompt:
   • Press F1 Help for assistance in specifying the three-part qualified
     name of the data group definition.

Run by the database apply process when the journal entry is processed
   Protocol prompt value: *DGJRN
   Data group definition prompt:
   • Press F1 Help for assistance in specifying the three-part qualified
     name of the data group definition.

Run on the system defined as System 1 for the data group
   Protocol prompt value: *DGSYS1
   Data group definition prompt:
   • Press F1 Help for assistance in specifying the three-part qualified
     name of the data group definition.

Run on the system defined as System 2 for the data group
   Protocol prompt value: *DGSYS2
   Data group definition prompt:
   • Press F1 Help for assistance in specifying the three-part qualified
     name of the data group definition.

6. Do one of the following:


• To access additional options, skip to Step 7.
• To run the commands or monitor for messages, press Enter.
7. Press F10 (Additional parameters).
8. At the Check syntax only prompt, specify whether to check the syntax of the
command only. If *YES is specified, the syntax is checked but the command is not
run.
9. At the Local data area length prompt, specify the amount of the current local data
area (LDA) to copy. This is useful for automating application processing that is
dependent on the local data area and for passing binary information to command
programs.
10. At the Return LDA prompt, specify whether to return the contents of the local data
area (LDA) from the remote system after the commands are run. The value
specified in the Local data area length prompt in Step 9 determines how much
data is returned.
11. At the User prompt, specify the user profile to use when the command is run on
the remote system.
12. If you specified *DGJRN for the Protocol prompt, you will see the File prompts. Do
the following:
a. At the File name prompt, specify the name of the file to use when the journal
entry generated by the commands is sent.
Note: Use these prompts if you want the command to run in the database
apply job associated with the named file. If a file is not specified,
database apply (DBAPY) session A is selected.
b. At the Library prompt, specify the name of the library associated with the file.
13. If you specified a file name for the File prompt, you will see the When to run
prompt. Using Table 73, specify when the journal entry associated with the
command is processed by the target system for the specified data group.
14. To run the commands or monitor for messages, press Enter.

Table 73. Options for processing journal entries with MIMIX *DGJRN protocol

Run when the database apply job for the specified file receives the
journal entry (*RCV)
   Do the following:
   1. At the Protocol prompt, specify *DGJRN.
   2. At the When to run prompt, specify *RCV.

Run in sequence with all other entries for the file (*APY)
   Do the following:
   1. At the Protocol prompt, specify *DGJRN.
   2. At the When to run prompt, specify *APY.

Using lists of retrieve commands

Using lists of retrieve commands


The following additional commands make working with retrieve (RTVnnnnnn)
commands easier:
• Open MIMIX List (OPNMMXLST). This command allows you to open a list of
specified MIMIX definitions or data group entries for use with the MIMIX retrieve
commands. You specify the type of definitions or data group entries to include in
the list, a CL variable to receive the list identifier, and a data group definition. The
CL variable for the list identifier is needed for the MIMIX retrieve commands.
• Close MIMIX List (CLOMMXLST). This command allows you to close a list of
specified MIMIX definitions or data group entries opened by the Open MIMIX List
(OPNMMXLST) command. A close is necessary in order to free resources. You
specify the list identifier to close.
Note: The retrieve commands are primarily intended to handle retrieving information
for a specific entry. The OPNMMXLST, CLOMMXLST, and RTV commands
continue to be supported and maintained. However, additional RTV
commands will not be provided. You are encouraged to use the extensive
outfile support available. Outfile support provides the means to generate a list
of entries. For more information, see “Output and batch guidelines” on
page 542.
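The open/retrieve/close pattern can be sketched in CL. The parameter keywords
shown are illustrative placeholders, not confirmed syntax; prompt each command
with F4 for its actual parameters:

   DCL VAR(&LISTID) TYPE(*CHAR) LEN(10)
   OPNMMXLST LISTID(&LISTID)      /* open the list and receive its    */
                                  /* identifier (keyword illustrative) */
   /* call a RTVnnnnnn command for each entry, passing &LISTID         */
   CLOMMXLST LISTID(&LISTID)      /* close the list to free resources  */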

Changing command defaults
Nearly all MIMIX processes are based on commands that have been shipped with
default values that reflect best practices recommendations. This ensures the easiest
and best use of each command. MIMIX implements named configuration definitions
through which you can customize your configuration by using options on commands
without resorting to changing command defaults.
If you wish to customize command defaults to fit a specific business need, use the
IBM Change Command Default (CHGCMDDFT) command. Be aware that by
changing a command default, you may be affecting the operation of other MIMIX
processes. Also, each update of MIMIX software will cause any changes to be lost.
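For example, the following request changes the shipped default of the Batch
(BATCH) parameter on the CMPFILA command. The installation library name shown is
illustrative, and the change must be reapplied after each MIMIX software update:

   CHGCMDDFT CMD(MIMIX/CMPFILA) NEWDFT('BATCH(*NO)')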

CHAPTER 22 Customizing procedures

This chapter describes how to customize the configuration of procedures. Procedures
provide greater flexibility when performing operations such as starting, ending, or
switching application groups, and running data protection reports for systems.
Understanding how procedures function as well as what actions their steps perform is
important when configuring procedures.
Detailed information about the effect that operational commands for procedures and
steps can have on your environment is available in the MIMIX Operations book.
This chapter includes:
• “Procedure components and concepts” on page 557 describes the functionality
provided by procedures and related configuration components, the types of
procedures available, jobs used in processing procedures and runtime attributes
of steps. This section also includes topics describing operational control, status,
and capabilities for retaining historical information about completed runs of
procedures.
• “Customizing user application handling for switching” on page 561 describes the
action required to avoid problems when attempting to switch and describes
options for customizing step programs intended for starting and ending user
applications within switch procedures.
• “Working with procedures” on page 563 includes how to use the Work with
Procedures display and topics for creating and deleting procedures.
• “Working with the steps of a procedure” on page 567 includes how to use the
Work with Steps display and topics for displaying the configured steps of a
procedure, changing runtime attributes of steps, as well as topics for adding,
removing, and enabling or disabling steps.
• “Working with step programs” on page 570 describes how to access a list of the
available step programs and includes topics for changing step programs and
creating custom step programs. The step program format for custom programs is
included.
• “Working with step messages” on page 573 describes step messages and
includes topics for adding and removing them.
• “Additional programming support for procedures and steps” on page 574 identifies
the available commands for retrieving information and commands with outfile
support for procedures and steps.

Procedure components and concepts


Each procedure is associated with an application group or a node. A set of default
procedures is shipped with MIMIX for frequently used operations. For example, when
an application group is created, copies of the shipped default procedures for that
application group are also created to provide the ability to start, end, perform
pre-check activity for switching, and switch the application group. For node procedures,
the shipped defaults provide the ability to automatically run data protection reports to
help you ensure your environment is protected the way you want.
Each operation is performed by a procedure that consists of a sequence of steps.
Each step calls a predetermined step program to perform a specific subtask of the
larger operation. Steps also identify runtime attributes for handling before and after
the program call within the context of the procedure.
Each step program is a reusable configuration element that identifies a task-
performing program and its attributes, which determine where it runs and what type of
work it performs. A step program can perform work on an application group, its data
resource groups, their respective data groups, or on specified nodes. A set of shipped
step programs provides functionality for the default procedures created for application
groups and nodes.
In addition, you can copy or create your own procedures and step programs to
perform custom activity, change which procedure is the default of its type for an
application group, and change attributes of steps within a procedure.
You can also optionally create step messages. These are configuration elements that
define the error action to be taken for a specific error message identifier. A step
message provides the ability to determine the error action taken by a step based on
attributes defined in the error message identifier. Each step message is defined for an
installation so it can be used by multiple steps or by steps in multiple procedures.

Procedure types
Procedures have a type (TYPE) value which determines the operations for which the
procedure can be used. The following types are supported:
*END - The procedure is usable with the End Application Group (ENDAG)
command.
*NODE - The procedure only runs on a single node and is not associated with an
application group.
*START - The procedure is usable with the Start Application Group (STRAG)
command.
*SWTPLAN - The procedure is usable with the Switch Application Group
(SWTAG) command for a *PLANNED switch type.
*SWTUNPLAN - The procedure is usable with the Switch Application Group
(SWTAG) command for an *UNPLANNED switch type.
*USER - The procedure is user defined and is associated with an application
group.

Procedure job processing


It is important to understand how multiple jobs are used to process steps for a
procedure. A procedure uses multiple asynchronous jobs to run the programs
identified within its steps. Starting a procedure starts one job for the application group
or node. For procedures associated with application groups, an additional job is


started for each of its data resource groups. These jobs operate independently and
persist until the procedure ends. Each persistent job evaluates each step in sequence
for work to be performed within the scope of the step program type. When a job for a
data resource group encounters a step that acts on data groups, it spawns an
additional job for each of its associated data groups. Each spawned data group job
performs the work for that data group and then ends.
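The job fan-out described above can be pictured as follows; the names are illustrative only, for a procedure whose application group has two data resource groups:

```
procedure job (application group)                  persists until procedure ends
├── data resource group job 1                      persists until procedure ends
│   ├── data group job 1A   (spawned for a step acting on data groups; ends after the step)
│   └── data group job 1B   (spawned for a step acting on data groups; ends after the step)
└── data resource group job 2                      persists until procedure ends
    └── data group job 2A   (spawned for a step acting on data groups; ends after the step)
```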

Attributes of a step
A step defines attributes to be used at runtime for a specified step program in the
context of the specified procedure for an application group or node. The following
parameters identify the attributes of a step:
Sequence number (SEQNBR) - The sequence number determines the order in which
the step will be performed.
Action before step (BEFOREACT) - This parameter identifies what action is taken
by all jobs for the procedure before starting the step. The default value *NONE
indicates that the step will begin without additional action. Users can also specify
*WAIT so that jobs wait for all asynchronous jobs to complete processing previous
steps before starting the step. The value *MSGW will cause the step to be
started, then wait until all asynchronous jobs have completed processing all previous
steps and an operator has responded to an inquiry message from the procedure
indicating that the step has been waiting to run. Then the step takes the action
indicated by the operator’s response to the message. A response of G (Go) will run
the program specified in the step. A response of C (Cancel) will cancel the procedure.
Action on error (ERRACT) - This parameter identifies what action to take for a job
used in processing the step when the job ends in error.
• The default value *QUIT will set the status of the job that ended in error to *FAILED,
as indicated in the expanded view of step status. The type of step program used
by this step determines what happens to other jobs for the step and whether
subsequent steps are prevented from starting, as follows:
– If the step program is of type *DGDFN, jobs that are processing other data
groups within the same data resource group continue. When they complete,
the data resource group job ends. No subsequent steps that apply to that data
resource group or its data groups will be started. However, subsequent steps
will still be processed for other data resource groups and their data groups.
– If the step program is of type *DTARSCGRP, no subsequent steps that apply to
that data resource group or its data groups will be started. Jobs for other data
resource groups may still be running and will process subsequent steps that
apply to their data resource groups and data groups.
– If the step program is of type *AGDFN or *NODE, subsequent steps will not be
started. Jobs for data resource group or data group steps may still be running
and will process subsequent steps that apply to their data resource groups and
data groups.
• For the value *CONTINUE, the job continues processing as if the job had not
ended in error. The status of the job in error is set to *IGNERR and is indicated in
the expanded view of step status.


• For the value *MSGID, error processing is determined by what is specified in a
predefined step message identifier for the installation (see Step messages); if a
step message is not found for the error message ID, the error action defaults to
*QUIT.
• For the value *MSGW, an inquiry message issued by the job requires a response
before any additional processing for the job can occur. A response of R (Retry) will
retry processing the step program within the same job. A response of C (Cancel)
will set the job status to *CANCEL as indicated in the expanded view of step
status and any other jobs and subsequent steps are handled in the same manner
described for the value *QUIT. A response of I (Ignore) will set the job’s status to
*IGNERR as indicated in the expanded view of step status, and processing
continues as if the job had not ended in error.
State (STATE) - The state determines whether the step runs when the procedure is
invoked. The value *ENABLED indicates that a step is enabled to run. For user-
defined steps and optional steps, users can specify *DISABLED to prevent a step
from running. Steps shipped with a state value of *REQUIRED are always enabled
and cannot be disabled.

Operational control
Procedures of type *USER or *NODE can be invoked by the Run Procedure
(RUNPROC) command. For procedures of type *NODE, the RUNPROC command
always runs on the local system. For procedures of other types, the application group
command which corresponds to the procedure type must be used to invoke the
procedure. For example, a procedure of type *START must be invoked by the Start
Application Group (STRAG) command.
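For example, a shipped *NODE procedure could be run on the local system with a command like the following sketch; the PROC keyword is an assumed name for the procedure prompt:

```
installation_library/RUNPROC PROC(CRTDPRLIB)
```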
Where should the procedure begin? The value specified for the Begin at step
(STEP) parameter on the request to run the procedure determines the step at which
the procedure will start. The status of the last run of the procedure determines which
values are valid.
The default value, *FIRST, will start the specified procedure at its first step. This value
can be used when the procedure has never been run, when its previous run
completed (*COMPLETED or *COMPERR), or when a user acknowledged the status
of its previous run which failed, was canceled, or completed with errors
(*ACKFAILED, *ACKCANCEL, or *ACKERR respectively).
Other values are for resolving problems with a failed or canceled procedure. When a
procedure fails or is canceled, subsequent attempts to run the same procedure will
fail until user action is taken. You will need to determine the best course of action for
your environment based on the implications of the canceled or failed steps and any
steps which completed.
The value *RESUME will start the last run of the procedure beginning with the step at
which it failed, the step that was canceled in response to an error, or the step
following where the procedure was canceled. The value *RESUME may be
appropriate after you have investigated and resolved the problem which caused the
procedure to end. Optionally, if the problem cannot be resolved and you want to
resume the procedure anyway, you can override the attributes of a step before
resuming the procedure.


The value *OVERRIDE will override the status of all runs of the specified procedure
that did not complete. The *FAILED or *CANCELED status of these runs is
changed to acknowledged (*ACKFAILED or *ACKCANCEL) and a new run of the
procedure begins at the first step.
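For example, after investigating and resolving the problem behind a failed planned switch, the run might be resumed with a command like this sketch; STEP is the Begin at step parameter described above, while the AGDFN keyword for the application group is assumed:

```
SWTAG AGDFN(SAMPLEAG) STEP(*RESUME)
```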

The MIMIX Operations book describes the operational level of working with
procedures and steps in detail.

Current status and run history


When a procedure is invoked for an application group, the status of that run of the
procedure is reported at the application group level. When a procedure is invoked for
a node, the status of that run of the procedure is reported at the node level. The
overall status for all procedures rolls up within MIMIX. You can also view the status of
specific steps and jobs run by a step when viewing a procedure’s status.
Timestamps are in the local job time. If you have not already ensured that the systems
in your installation use coordinated universal time, see the topic for setting system
time.
The Work with Procedure Status display provides status of the most recent run of
each procedure and a retained history of previously completed runs of procedures.
The Work with Step Status display provides access to detailed information about
status of steps for a specific run of a procedure. Steps are listed in sequence number
order as defined by steps in the procedure. The default view collapses status to a
summary record for each step. The expanded view shows the status of each step
expanded to show the status of each job used to process each step.
The Procedure history retention (PROCHST) policy specifies criteria for retaining
historical information about procedure runs that completed, completed with errors, or
that failed or were canceled and then acknowledged. Timestamps for the procedure
and detailed information about each step are kept for each run of a procedure. Each
run is evaluated separately and its information is retained until the policy criteria are
met. When a run exceeds the policy criteria, system cleanup jobs will remove the
historical information for that procedure run from all systems. The policy values
specified at the time the cleanup jobs run are used for evaluation.
The MIMIX Operations book describes the policy and status interfaces and values in
detail, including how to resolve problems with status.

Customizing user application handling for switching


After installing MIMIX and configuring an environment that uses application groups,
you will need to customize the step programs identified in Table 74 to handle ending
and starting user applications during switching. If these step programs have not been
customized or otherwise addressed, any attempt to switch using default procedures
will result in an error message (LVEE936 or LVEE938). These error messages
indicate that the identified step program has not been customized but it is called by an
enabled step within a switch procedure for the identified application group. The switch
procedure will not continue running until you have taken action.


Note: Any procedure with a step that invokes the step programs identified in Table
74 will issue the same error messages if action is not taken.

Table 74. Step programs that need customizing.

Step Program   Description

ENDUSRAPP Customize to end user applications on the current primary node before a
switch occurs.
Where used: Procedures of type *SWTPLAN that use shipped default
steps.
Source code template: ENDUSRAPP in source physical file
MCTEMPLSRC in the installation library.

STRUSRAPP Customize to start user applications on the new primary system following
a switch.
Where used: Procedures of type *SWTPLAN and *SWTUNPLAN that
use shipped default steps.
Source code template: STRUSRAPP in source physical file
MCTEMPLSRC in the installation library.

You have the following options:


• Option 1. Customize the step programs so that actions to start and end user
applications are performed as part of the switch procedure. All procedures that
use the step programs will be updated. Use “Customize the step programs for
user applications” on page 562.
• Option 2. Allow a procedure with steps that reference the step programs to run by
changing the Action on error attribute of those steps. Use “Changing attributes of
a step” on page 569 to change the steps, specifying *CONTINUE as the value of
the Action on error attribute. If all other steps of a changed procedure run
successfully, the procedure will end with a status of Completed with error
(*COMPERR). This option assumes that you will start or end user applications
outside of running the procedures.
Note: With this option, you must address each affected step in each procedure
for each application group separately.
• Option 3. You have other processes that will end the user applications before
running the switch procedure and start them after the switch procedure
completes. For the steps referencing these step programs, either disable the
steps using “Enabling or disabling a step” on page 569 or remove the step using
“Removing a step from a procedure” on page 570.
Note: With this option, you must address each affected step in each procedure
for each application group separately.

Customize the step programs for user applications


Use this topic to customize the step programs so that actions to start and end user
applications are performed as part of the switch procedure. These instructions identify
how to create and compile a custom version of the program identified within the step


program. All procedures that use the step programs listed in Table 74 will use the
customization.
Do the following:
1. Copy the source code template for the step program from the location indicated in
Table 74.
2. Create and compile a custom version of the program that will perform the
necessary activity for your applications. See “Step program format STEP0100” on
page 571 for details.
3. Copy the compiled step program to all systems in the installation. Ensure that it
has the same name and location on all systems.
Note: To prevent having your custom program replaced when a service pack is
installed, either the name of the program object or the library where it is
located must be different than the name and location specified in the
shipped default step program.
4. From the management system, enter the command:
installation_library/WRKSTEPPGM
5. Type 2 (Change) next to the step program you want and press Enter.
6. The Change Step Program (CHGSTEPPGM) command appears. Specify the
name and library of your custom program and press Enter.
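As an illustration, steps 1 and 2 might look like the following for the ENDUSRAPP template, assuming a CL implementation, a hypothetical library MYLIB, and the program name ENDAPPS (chosen to differ from the shipped name, as the note in step 3 requires):

```
CPYSRCF FROMFILE(installation_library/MCTEMPLSRC) TOFILE(MYLIB/QCLSRC) +
        FROMMBR(ENDUSRAPP) TOMBR(ENDAPPS)
CRTCLPGM PGM(MYLIB/ENDAPPS) SRCFILE(MYLIB/QCLSRC) SRCMBR(ENDAPPS)
```

The compiled program must then be copied to every system in the installation before the step program is changed to reference it.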

Working with procedures


Procedures are used to perform operations for application groups or nodes. A
procedure consists of steps, which call a predetermined step program in a particular
sequence to complete a task for the procedure. Steps also identify runtime attributes
that determine how the procedure will start processing the step and how the
procedure will respond if the step ends in error.
The sequence of steps and their runtime attributes can be changed from the Work
with Steps display.

Accessing the Work with Procedures display


The Work with Procedures display shows a list of procedures that can be subsetted
by procedure name, application group, or procedure type. This display is used


primarily for configuring and modifying procedures. Only procedures of type *USER or
*NODE can be run from this display.

Figure 32. Example of the Work with Procedures display.

Work with Procedures


System: SYSTEMA
Type options, press Enter.
1=Create 2=Change 3=Copy 4=Delete 5=Display 6=Print 7=Rename
8=Work with steps 9=Run 13=Last started status 14=Procedure status

Opt Procedure Type App Group Dft Description


__ __________ __________
__ CRTDPRDIR *NODE CREATE DIRECTORY DATA PROTEC >
__ CRTDPRFLR *NODE CREATE FOLDER DATA PROTECTION
__ CRTDPRLIB *NODE CREATE LIBRARY DATA PROTECTI >
__ END *END SAMPLEAG *YES END APPLICATION GROUP PROCED >
__ ENDTGT *END SAMPLEAG *NO END APPLICATION GROUP PROCED >
__ NODE *NODE PROCEDURE DEFINED FOR NODE >
__ PRECHECK *USER SAMPLEAG OPTIONAL PRECHECK FOR SWITCH
__ START *START SAMPLEAG *YES START APPLICATION GROUP PROC >
__ SWTPLAN *SWTPLAN SAMPLEAG *YES PLANNED APPLICATION GROUP SW >
__ SWTUNPLAN *SWTUNPLAN SAMPLEAG *YES UNPLANNED APPLICATION GROUP >

Bottom
Parameters or command
===> _________________________________________________________________________
F3=Exit F4=Prompt F5=Refresh F6=Create F9=Retrieve F12=Cancel
F13=Repeat F14=Procedure status F18=Subset F21=Print list

For detailed information about status for steps and procedures, see the MIMIX
Operations book.

Displaying the procedures for an application group


Do the following:
1. From the MIMIX Basic Main Menu, type 1 (Work with application groups) and
press Enter.
2. From the Work with Application Groups display, type 20 (Procedures) next to the
application group you want and press Enter.
The Work with Procedures display appears, listing all procedures for the selected
application group.

Displaying the procedures for a node


Do the following:
1. From the MIMIX Intermediate Main Menu, type 2 (Work with systems) and press
Enter.
2. From the Work with Systems display, do one of the following:
• To see a list of the procedures for nodes, type 20 (Procedures) next to the
system you want and press Enter.


• To see the status of procedures that run on a node, type 21 (Procedure Status)
next to the system you want and press Enter.

Displaying all procedures


Do one of the following:
• From the MIMIX Intermediate Main Menu, type 7 (Work with procedures) and
press Enter.
• From the Work with Application Groups display, press F17 (Procedures).
• Type the following command and press Enter:
installation_library/WRKPROC

Creating a procedure of type *NODE


Procedures of type *NODE do not automatically include steps. You will need to
customize the procedure to add steps.
Do the following from a management system:
1. On the Work with Procedures display, type 1 (Create) next to the blank line at the
top of the list and press Enter.
2. The Create Procedure (CRTPROC) display appears. Specify a name for the
Procedure prompt. The name must be unique among procedures of type *NODE.
3. At the Type prompt, specify *NODE and press Enter.
4. At the Description prompt, specify text that describes the procedure and press
Enter.
5. Customize the procedure by adding or removing steps and adjusting step
attributes as needed. See “Working with step programs” on page 570.

Creating a procedure of type *USER


Procedures of type *USER do not automatically include steps. You will need to
customize the procedure to add steps.
Do the following from a management system:
1. On the Work with Procedures display, type 1 (Create) next to the blank line at the
top of the list and press Enter.
2. The Create Procedure (CRTPROC) display appears. Specify a name for the
Procedure prompt. The name must be unique for the application group you will
specify in Step 4, and must not be used by a procedure of type *NODE in the
installation.
3. At the Type prompt, specify *USER and press Enter.
4. At the Application group definition prompt, specify the name of the application
group with which the procedure will be associated.
5. Leave *NO specified for the Default for type prompt.
6. At the Description prompt, specify text that describes the procedure and press


Enter.
7. Customize the procedure by adding or removing steps and adjusting step
attributes as needed using the topics within “Working with the steps of a
procedure” on page 567 and “Working with step programs” on page 570.

Creating a procedure of type *END, *START, *SWTPLAN, *SWTUNPLAN


When you create a new procedure of type *END, *START, *SWTPLAN, or
*SWTUNPLAN, it is a copy of the shipped default procedure and includes its steps.
Do the following from a management system:
1. On the Work with Procedures display, type 1 (Create) next to the blank line at the
top of the list and press Enter.
2. The Create Procedure (CRTPROC) display appears. Specify a name for the
Procedure prompt. The name must be unique for the application group you will
specify in Step 4, and must not be used by a procedure of type *NODE in the
installation.
3. At the Type prompt, specify the applicable type (*END, *START, *SWTPLAN, or
*SWTUNPLAN) and press Enter.
4. At the Application group definition prompt, specify the name of the application
group with which the procedure will be associated.
5. To make the procedure be the default of its type for the specified application
group, specify *YES for the Default for type prompt.
6. At the Description prompt, specify text that describes the procedure and press
Enter.
7. Optional step: Customize the procedure by adding or removing steps and
adjusting step attributes as needed using the topics within “Working with the steps
of a procedure” on page 567 and “Working with step programs” on page 570.
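In command form, creating such a procedure might look like this sketch; the TYPE values are as documented above, while the PROC, AGDFN, DFT, and TEXT keywords are assumed names for the Procedure, Application group definition, Default for type, and Description prompts:

```
CRTPROC PROC(MYSWTPLAN) TYPE(*SWTPLAN) AGDFN(SAMPLEAG) DFT(*YES) +
        TEXT('Planned switch with customized steps')
```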

Deleting a procedure
Use these instructions to delete a procedure, including the runtime attributes of steps
within the procedure. The step programs referenced by the steps of the procedure are
not deleted.
The procedure cannot be in use. The default *USER procedure, PRECHECK, and
*NODE procedures CRTDPRDIR, CRTDPRFLR, and CRTDPRLIB cannot be
deleted.
Do the following from the management system:
1. On the Work with Procedures display, type 4 (Delete) next to the procedure you
want and press Enter.
2. A confirmation display appears. To delete the procedure, press Enter.


Working with the steps of a procedure


A step defines attributes to be used at runtime for a specified step program in the
context of the specified procedure. You can also add steps that reference your own
custom step programs. Steps are controlled by the procedure for which they are
defined.
When an application group is created, each of the automatically created procedures
has a default set of predefined steps.
When a system definition is created, procedure status records associated with the
new system definition for each *NODE procedure are created with *NEW status.

Displaying the steps within a procedure


The Work with Steps display shows the steps within a procedure.
To access the display do the following:
1. Display a list of procedures. See “Accessing the Work with Procedures display” on
page 563.
2. Type 2 (Work with steps) next to the procedure you want and press Enter.
The Work with Steps display appears, listing all steps that have been added to the
procedure according to their sequence numbers.

Figure 33. Example of the Work with Steps display.

Work with Steps


SYSTEM: SYSTEMA
Procedure: SWTPLAN App. group: SAMPLEAG Type: *SWTPLAN

Type options, press Enter.


1=Add 2=Change 4=Remove 5=Display 6=Print 20=Enable 21=Disable

Step Before Error Step Pgm Node


Opt Program Seq. Action Action State Type Type
__ __________ _______
__ MXCHKCOM 100 *NONE *QUIT *REQUIRED *AGDFN *LOCAL
__ MXCHKCFG 200 *NONE *QUIT *REQUIRED *DGDFN *NEWPRIM
__ ENDUSRAPP 300 *WAIT *MSGW *ENABLED *AGDFN *PRIMARY
__ MXENDDG 400 *NONE *QUIT *REQUIRED *DGDFN *NEWPRIM
__ MXENDRJLNK 500 *WAIT *QUIT *ENABLED *DGDFN *NEWPRIM
__ MXAUDACT 600 *NONE *QUIT *ENABLED *DGDFN *NEWPRIM
__ MXAUDCMPLY 700 *NONE *QUIT *ENABLED *DGDFN *NEWPRIM
__ MXAUDDIFF 800 *NONE *QUIT *ENABLED *DGDFN *NEWPRIM
MORE...
Parameters or command
===> _________________________________________________________________________
F3=Exit F4=Prompt F5=Refresh F6=Add F9=Retrieve F14=Step programs
F15=Step messages F18=Subset F21=Print list F24=More keys


Displaying step status for the last started run of a procedure


To display the step status for the most recently started (last run) of a procedure, do
the following:
1. Display a list of procedures. See “Accessing the Work with Procedures display” on
page 563.
2. From the Work with Procedures display, type 14 (Procedure status) next to the
procedure you want and press Enter.
3. The Work with Procedure Status display appears. Type 8 (Step status) next to the
procedure you want.
Note: The last run procedure is at the top of the list. For *NODE procedures, all
nodes are included so the *NODE procedure at the top of the list may not
be for the node you want.
For detailed information about status for steps and procedures, see the MIMIX
Operations book.

Adding a step to a procedure


Use these instructions to add a defined step program as a step within a procedure.
You can specify the sequence in which the step is performed within the procedure and
other runtime attributes for the step. The procedure to which a step is being added
cannot be active when adding a step.
A required step program can be added as a step only once within a procedure. Step
programs that are required steps for shipped default procedures of type *SWTPLAN
or *SWTUNPLAN cannot be added as steps in procedures of type *USER. Step
programs shipped for data protection report procedures (CRTDPRDIR, CRTDPRFLR,
and CRTDPRLIB) cannot be added to any other procedure.
For more information about adding and customizing step programs, see “Working
with step programs” on page 570.
For more information about procedures and step programs that are shipped with
MIMIX, see “Shipped procedures and step programs” on page 576.
Do the following from the management system:
1. Display the existing steps of the procedure. See “Displaying the steps within a
procedure” on page 567.
2. The Work with Steps display appears. Type 1 (Add) next to the blank line at the
top of the list and press Enter.
3. The Add Step (ADDSTEP) command appears with the procedure and application
group preselected. Do the following:
a. At the Step program name prompt, specify the step program that you want this
step to run.
b. The default value *LAST for the Sequence number prompt will add the step at
the end of the procedure using a number that is 100 greater than the current


last sequence number in the procedure. If you want the step to run in a
different relative order within the procedure, specify a different value.
c. Specify the values you want for other runtime attributes in the remaining
prompts. Default values will allow asynchronous jobs to process the step
without waiting for other jobs to reach the step, and will quit if a job ends in error.
For details about the resulting behavior of other values for Action before step,
Action on error, and State see “Attributes of a step” on page 559.
d. To add the step, press Enter.
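In command form, adding a step might look like this sketch; SEQNBR, BEFOREACT, ERRACT, and STATE are the attributes described in “Attributes of a step” on page 559, while the PROC and PGM keywords are assumed names for the procedure and step program prompts:

```
ADDSTEP PROC(MYPROC) PGM(MYSTEPPGM) SEQNBR(*LAST) +
        BEFOREACT(*NONE) ERRACT(*QUIT) STATE(*ENABLED)
```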

Changing attributes of a step


These instructions use the Change Step (CHGSTEP) command to change runtime
attributes for a step. Attributes of a required step cannot be changed. The procedure
cannot be active when using the CHGSTEP command.
Some step attributes of a step in an active procedure can be overridden to change
their configured value for the current run of the procedure, by using the Override Step
(OVRSTEP) command. For details and instructions to change the attributes of an
active step, see “Overriding the attributes of a step” in the MIMIX Operations book.
Do the following from the management system:
1. Display the existing steps of the procedure. See “Displaying the steps within a
procedure” on page 567.
2. The Work with Steps display appears. Type 2 (Change) next to the step you want
and press Enter.
3. The Change Step (CHGSTEP) command appears. Make the changes you want.
• To change the relative order in which the step is performed, specify a different
value for the To sequence number prompt.
• Specify the values you want for the Action before step, Action on error, and
State prompts. For information, see “Attributes of a step” on page 559.
4. To change the step, press Enter.
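For example, to let a step continue past errors, its error action might be changed with a command like this sketch; ERRACT is the Action on error attribute, while the PROC and PGM keywords are assumed names for the procedure and step program prompts:

```
CHGSTEP PROC(MYPROC) PGM(MYSTEPPGM) ERRACT(*CONTINUE)
```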

Enabling or disabling a step


Use these instructions to enable or disable a step within a procedure. Disabling a step
prevents it from running but the step remains in the procedure. Enabling a step that
was disabled allows the step to be performed in sequence within the procedure.
Required steps do not support being enabled or disabled.
The procedure cannot be active when a step is enabled or disabled.
Do the following from the management system:
1. Display the existing steps of the procedure. See “Displaying the steps within a
procedure” on page 567.
2. The Work with Steps display appears. Do one of the following:
• To enable a step, type 20 (Enable) next to the step you want and press Enter.


• To disable a step, type 21 (Disable) next to the step you want and press Enter.

Removing a step from a procedure


Use these instructions to remove a step from a procedure. The step program
referenced by the step will remain available for use by other procedures within the
installation. Required steps cannot be removed.
The procedure cannot be active when a step is removed.
Do the following from the management system:
1. Display the existing steps of the procedure. See “Displaying the steps within a
procedure” on page 567.
2. The Work with Steps display appears. Type 4 (Remove) next to the step you want
and press Enter.
3. A confirmation display appears. To remove the step, press Enter.

Working with step programs


Step programs are configuration elements that allow programs performing unique
actions to be reused by multiple procedures. Each step program identifies the
name and location of a program and attributes which identify the type of node on
which the program can run as well as whether the program will run at the level of the
application group, data resource group, data group, or node.
MIMIX ships default step programs that are used as steps within shipped procedures.
Shipped step programs cannot be changed or removed.

Accessing step programs


Do one of the following to access the Work with Step Programs display:
• From the Work with Steps display, press F14 (Step programs)
• Enter the command installation_library/WRKSTEPPGM
The list displayed identifies step programs defined within the MIMIX installation. Both
shipped step programs and user-defined step programs are listed.
For more information about shipped step programs, see “Shipped procedures and
step programs” on page 576 and “Steps for application groups” on page 602.

Creating a custom step program


This interface supports programs written in C, RPG and CL.
1. Create and compile the program that will be invoked by the step program when it
is called by a procedure. Use “Step program format STEP0100” on page 571.
2. Copy the compiled program to all systems in the installation. Ensure that the
name and location are the same on all systems.
3. Do the following from the management system to add a step program to the
installation:
a. Type ADDSTEPPGM and press F4 (Prompt).
b. Specify a name for the step program.
c. Specify the name of the program object and the library in which it is located.
d. Specify the type of step program. This indicates the operational level at which
the program will run.
e. Specify the type of node on which the program will run.
f. Specify a description of the step program. This will be displayed when you view
details of a step which uses the step program.
g. To add the step program, press Enter.

Changing a step program


You can change the attributes of a step program. The changes you make will affect all
procedures with steps that invoke the step program.
Procedures whose steps reference the specified step program cannot be active when
these instructions are performed.
To change a step program, do the following:
1. Type WRKSTEPPGM and press Enter.
2. Type 2 (Change) next to the step program you want and press Enter.
3. The Change Step Program (CHGSTEPPGM) command appears. Specify values
for the attributes you want to change.
4. Press Enter.

Step program format STEP0100


You can create your own program that can be identified in and called by a procedure
by using format STEP0100.
The program should identify a specific task to be performed. The step program
identifies the program to MIMIX and specifies the type of node on which the program
can run and the type of step program. The step program type determines the
configuration level at which jobs for the procedure will process the program. When a
step is added to a procedure, attributes of the step define runtime attributes within the
context of the procedure, including the action to take when a job used to run the
program ends in error.
Note: For steps that run at data group level, the program object will be called
regardless of whether the data group state is enabled or disabled. Therefore, if
you want your program logic to be performed only for data groups in a
particular state, you must check the state at the beginning of the program. This
is a requirement to allow steps to operate on disabled data groups, which are
frequently used in environments that have three or more nodes.


Programs can be written in C, RPG, or CL. Source code templates ENDUSRAPP and
STRUSRAPP, in source physical file MCTEMPLSRC in the installation library, can be
used as the starting point for any custom step program; however, avoid using these
names for your own program.
A step program is called with the following parameters.
Application Group Name
INPUT; CHAR (10)
The name that identifies the application group definition. If the step program is of type
*NODE, this parameter contains all blanks.
Resource Group Name
INPUT; CHAR (10)
The name that identifies the resource group. If the resource group is not applicable or
if the step program is of type *NODE, this parameter contains all blanks.
Data Group Name
INPUT; CHAR (26)
The name that identifies the data group definition (name system1 system2). If the
data group is not applicable or if the step program is of type *NODE, this parameter
contains all blanks.
Data Group Source
INPUT; CHAR (1)
The value that identifies the data source as configured in the data group definition. If
the step program is of type *NODE, this parameter contains all blanks.

1 System1 is the source for the data group.


2 System2 is the source for the data group.

New Primary Node
INPUT; CHAR (8)
The name that identifies the node that becomes the new primary node during a switch
operation. This is the system to which production is being switched. If used in a
procedure of a type other than *SWTPLAN or *SWTUNPLAN, the node name is the
primary node. For procedures of type *NODE, the current local node is specified.
Old Primary Node
INPUT; CHAR (8)
The name that identifies the node that is the old primary node during a switch
operation. This is the system from which production is being switched. For
procedures of type *NODE, the current local node is specified.
Current Node
INPUT; CHAR (8)
The name that identifies the current local node. This is the node on which the step
program is running.


Returned Message Identifier


OUTPUT; CHAR (7)
The value is the message identifier returned for an error message. If there is no error,
the step program should return all blanks.
Returned Message Data Length
OUTPUT; DECIMAL (5, 0)
The value identifies the length of the message data being returned.

0 No message data is returned.


value Identifies the length of the data returned in the Returned Message Data parameter.

Returned Message Data


OUTPUT; CHAR (900)
The text returned as message data for the returned message ID.

Working with step messages


A step message is an optional, user-created configuration element that defines the
error action to be taken for a specific error message identifier. A step message
provides the ability to determine the error action taken by a step based on the error
message identifier. Each step message is defined for an installation but can be used
by multiple steps or by steps in multiple procedures.
When a step with a specified error action of *MSGID fails, MIMIX will check the
installation for a defined step message that matches the step's error message ID. If a
matching step message exists, it determines the error action used for the step. If a
step message is not found for the error, processing quits without running any
subsequent steps in the procedure.
Note: Any step with an error action of *MSGID that can encounter the specified
message ID will take the error action specified in the step message.
No step messages are shipped with MIMIX.

Accessing the Work with Step Messages display


Do one of the following to access the Work with Step Messages display:
• From the Work with Steps display, press F15 (Step messages).
• Enter the command installation_library/WRKSTEPMSG
The list displayed identifies step messages defined within the MIMIX installation. The
messages are listed in alphabetical order.

Adding or changing a step message


Step messages can only be added or changed from the management system.
From the management system, do the following:
1. Access the Work with Step Messages display.


2. Do one of the following and press Enter:


• To add a message, type 1 (Add) next to the blank line at the top of the list.
• To change a message, type 2 (Change) next to the message you want.
3. If you are adding a message, specify the Message identifier.
4. Specify a value for the Action on error. Press F1 (Help) to see details for possible
options.
5. Specify a description of the message.
6. Press Enter.
The added or changed step message is effective immediately. Any step within the
installation which specifies an error action of *MSGID will use the step message if the
step ends in error with the indicated message ID.

Removing a step message


Step messages can only be removed from the management system.
From the management system, do the following:
1. Access the Work with Step Messages display.
2. Type 4 (Remove) next to the message you want and press Enter.
3. A confirmation display appears. Press Enter.
The change is effective immediately and can affect the behavior of procedures in the
installation. After a step message is removed, any steps that could potentially use the
step message error action will no longer have the error action available and
processing of the procedure will quit if the error message identifier is encountered.

Additional programming support for procedures and steps


The following additional capabilities facilitate programming in environments that use
procedures and steps:
• The Open MIMIX List (OPNMMXLST) command supports procedures and steps.
The Type of request (TYPE) parameter includes values for *PROC, *STEP,
*STEPPGM, and *STEPMSG. The parameters Procedure (PROC) and
Application group definition (AGDFN) qualify the type *STEP.
• The following retrieve commands are available:
– Retrieve Procedure (RTVPROC)
– Retrieve Step (RTVSTEP)
– Retrieve Step Message (RTVSTEPMSG)
– Retrieve Step Program (RTVSTEPPGM)
• Outfile support is available for the following commands:


– Work with Procedure (WRKPROC)


– Work with Procedure Status (WRKPROCSTS)
– Work with Steps (WRKSTEP)
– Work with Step Messages (WRKSTEPMSG)
– Work with Step Programs (WRKSTEPPGM)
– Work with Step Status (WRKSTEPSTS)


CHAPTER 23 Shipped procedures and step programs

This chapter includes information about procedures and step programs that are
shipped with MIMIX. Information is provided for application groups that use IBM i
clustering and those that do not. Procedures and steps are different depending on the
type of application group.
“Values for procedures and steps” on page 576 describes the values documented
throughout this section for shipped procedures and steps.
Shipped Procedures
Information for shipped procedures is provided in the following sections. The steps
within each procedure are listed in the order in which they will be performed.
• “Shipped procedures for application groups” on page 578
• “Shipped procedures for data protection reports” on page 585
• “Shipped default procedures for IBM i cluster type application groups” on
page 586
• “Shipped user procedures for cluster type application groups” on page 592
Shipped Steps
Information for shipped steps is provided in the following sections. The steps are
listed in alphabetical order. Details for the steps include a description of the step, the
procedures where the step runs, and an indication whether the step is required or can
be changed.
• “Steps for application groups” on page 602
• “Steps for data protection report procedures” on page 611
• “Steps for clustering environments” on page 613
• “Steps for MIMIX for MQ” on page 623
For information about how to add a step program to a procedure, see “Adding a step
to a procedure” on page 552.

Values for procedures and steps


The following information describes the values for shipped procedures and steps that
can appear within the tables documented in this chapter.
Sequence number — The sequence number of the step within the procedure
indicating the order in which the step will be performed. This is determined when the
step is added to a procedure.
Type — The type of step program and the level at which it runs. This is determined
when the step program is created.
Application Group - The step runs in the persistent job for the application group.


Resource Group - The step runs in a spawned job for each data resource group
within the specified application group.
Data Group - The step runs in a spawned job for each data group within the
specified application group.
Node - The step runs in the persistent job for the *NODE procedure.
Node — Identifies the type of node where the step program runs. This is determined
when the step program is created.
All - The step runs on all nodes.
Primary - The step runs only on the primary node.
Backup - The step runs on all backup nodes.
New Primary - If the step is added to a switch procedure, it runs on the new
primary node. Steps that are not part of a switch procedure run on the primary
node.
Local - The step runs only on the node on which the procedure started.
Peer - The step runs on all peer nodes.
Replicate - The step runs on all replicate nodes.
Before Action — Identifies what action is taken by all jobs for the procedure before
starting the step. This is determined for the step when it is added to a procedure.
None (*NONE) - No special action is taken. Processing continues with this step.
Message Wait (*MSGW) - The step is started, then waits until all asynchronous
jobs have completed processing all previous steps and a user has responded to
an inquiry message from the procedure indicating that the step is waiting to run.
Then the step takes the action indicated by the user’s response to the
message. The action can be specified in the MIMIX portal application using the
Message Details dialog from the Procedures portlet, Step Status portlet, or
Procedure History window.
Wait (*WAIT) - The step is started only after all asynchronous jobs have
completed processing all previous steps.
Error Action — Identifies what action to take when a job used in processing the step
ends in error. This is determined for the step when it is added to a procedure.
Quit (*QUIT) - The status of the job that ended in error is set to Failed (*FAILED).
The type of step program determines what happens to other jobs for the step and
whether subsequent steps are prevented from starting. These behaviors are:
• Application Group - For step programs that run at the application group level,
subsequent steps that apply to the application group will not be started. Jobs
for data resource group or data group steps may still be running and will
process subsequent steps that apply to their data resource groups and data
groups.
• Data Resource Group - For step programs that run at the data resource
group level, no subsequent steps that apply to that data resource group or
its data groups will be started. Jobs for other data resource groups may still
be running and will process subsequent steps that apply to their data
resource groups and data groups.
• Data Group - For step programs that run at the data group level, jobs that are
processing other data groups within the same data resource group continue.
When they complete, the data resource group job ends. No subsequent
steps that apply to that data resource group or its data groups will be started.
However, subsequent steps will still be processed for other data resource
groups and their data groups.
Continue (*CONTINUE) - The job continues processing as if the job had not
ended in error. The status of the job in error is set to Ignored Error (*IGNERR) and
is indicated in the expanded view of step status.
Error Message Identifier (*MSGID) - Error processing is determined by whether
the error message ID has been predefined as a step message within the
installation. If a step message for the error ID exists, the step message
determines the action taken. If a step message is not found for the error message
ID, the error action defaults to Quit.
Message Wait (*MSGW) - The step ran and an inquiry message issued by the job
requires a response before any additional processing for the job can occur. The
possible responses and their resulting behaviors are:
• Retry the step and continue - This response will retry processing the step
program within the same job.
• Ignore the error and continue - This response will set the job’s status to Ignored
Error (*IGNERR), as indicated in the expanded view of step status, and
processing continues as if the job had not ended in error.
• Cancel the procedure - This response cancels the step and sets the status for
the step and procedure to Failed. Subsequent steps are handled in the same
manner described for the value Quit.

Shipped procedures for application groups


The shipped procedures for application groups that do not participate in an IBM i
cluster are included in this section. The steps within each procedure are listed in the
order in which they will be performed. For more information about a step, click on a
step name or see “Steps for application groups” on page 602.
• “END” on page 579
• “ENDTGT” on page 579
• “ENDIMMED” on page 579
• “PRECHECK” on page 580
• “START” on page 581
• “SWTPLAN” on page 581
• “SWTUNPLAN” on page 583


END
When END is created for a new application group, it is set as the default end
procedure. The End Application Group (ENDAG) command automatically uses the
application group’s currently set default END procedure unless you specify a different
procedure. Steps in the END procedure are included in Table 75.

Table 75. Steps in END procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXENDDG            Data Group            Local       None     Quit           Yes

ENDTGT
The ENDTGT procedure ends replication processes that run on the target system.
Steps in the ENDTGT procedure are included in Table 76.

Table 76. Steps in ENDTGT procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXENDDGTGT         Data Group            New Primary None     Quit           Yes

ENDIMMED
Steps in the ENDIMMED procedure are included in Table 77.

Table 77. Steps in ENDIMMED procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXENDDGIM          Data Group            New Primary None     Quit           Yes
MXCHKDGEND         Data Group            New Primary None     Message Wait   Yes


PRECHECK
Steps in the PRECHECK user procedure are included in Table 78.

Table 78. Steps in PRECHECK procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXCHKCOM           Application Group     Local       None     Quit           No
MXCHKCFG           Data Group            New Primary None     Continue       Yes
MXAUDACT           Data Group            New Primary None     Continue       Yes
MXAUDCMPLY         Data Group            New Primary None     Continue       Yes
MXAUDDIFF          Data Group            New Primary None     Continue       Yes
MXDBERR            Data Group            New Primary None     Continue       Yes
MXCHKAPMNT         Data Group            New Primary None     Message Wait   Yes
MXDBPND            Data Group            New Primary None     Continue       Yes
MXNFYERR           Data Group            New Primary None     Continue       Yes
MXOBJERR           Data Group            New Primary None     Continue       Yes
MXOBJPND           Data Group            New Primary None     Continue       Yes


START
Steps in the START procedure are included in Table 79.
Note: If you are using the MIMIX for MQ feature, you will have to add customized
steps to these procedures. For more information, see the MIMIX for IBM
WebSphere MQ book.

Table 79. Steps in START procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXCHGDG            Data Group            New Primary Wait     Message Wait   Yes
MXSTRDG            Data Group            Local       Wait     Quit           No

SWTPLAN
Steps in the planned switch (SWTPLAN) procedure are included in Table 80.
Note: If you are using the MIMIX for MQ feature, you will have to add customized
steps to these procedures. For more information, see the MIMIX for IBM
WebSphere MQ book.

Table 80. Steps in SWTPLAN procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXCHKCOM           Application Group     Local       None     Quit           No
MXCHKCFG           Data Group            New Primary None     Message Wait   Yes
ENDUSRAPP          Application Group     Primary     Wait     Message Wait   Yes
MXSETSWTP          Data Group            New Primary None     Quit           No
MXENDDG            Data Group            Local       Wait     Message Wait   Yes
MXCHKAPMNT         Data Group            New Primary None     Message Wait   Yes
MXENDRJLNK         Data Group            New Primary Wait     Message Wait   Yes
MXAUDACT           Data Group            New Primary None     Message Wait   Yes
MXAUDCMPLY         Data Group            New Primary None     Message Wait   Yes
MXAUDDIFF          Data Group            New Primary None     Quit           Yes
MXDBERR            Data Group            New Primary None     Message Wait   Yes
MXDBPND            Data Group            New Primary None     Message Wait   Yes
MXNFYERR           Data Group            New Primary None     Message Wait   Yes
MXOBJERR           Data Group            New Primary None     Message Wait   Yes
MXOBJPND           Data Group            New Primary None     Message Wait   Yes
MXENBTRG           Data Group            New Primary Wait     Quit           No
MXCHGDGDIR         Data Group            New Primary None     Quit           No
MXCHGUSRJ          Data Group            New Primary None     Message Wait   Yes
MXCHGSYSJ          Data Group            New Primary None     Message Wait   Yes
MXSETSTRJ          Data Group            New Primary None     Quit           No
MXSTRJRNS          Data Group            New Primary None     Quit           No
MXENDJRNT          Data Group            New Primary None     Message Wait   Yes
MXSWTCMP           Data Group            New Primary Wait     Quit           No
MXCHGDG            Data Group            New Primary Wait     Message Wait   Yes
MXUPDNOD           Application Group     New Primary Wait     Quit           No
STRUSRAPP          Data Group            New Primary Wait     Quit           No

SWTUNPLAN
Steps in the unplanned switch (SWTUNPLAN) procedure are included in Table 81.
Note: If you are using the MIMIX for MQ feature, you will have to add customized
steps to these procedures. For more information, see the MIMIX for IBM
WebSphere MQ book.

Table 81. Steps in SWTUNPLAN procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXCHKCFG           Data Group            New Primary None     Quit           No
MXSETSWTUN         Data Group            New Primary None     Quit           No
MXENDDGUNP         Data Group            New Primary None     Quit           No
MXCHKAPMNT         Data Group            New Primary None     Message Wait   Yes
MXAUDACT           Data Group            New Primary None     Message Wait   Yes
MXAUDCMPLY         Data Group            New Primary None     Message Wait   Yes
MXAUDDIFF          Data Group            New Primary None     Quit           Yes
MXDBERR            Data Group            New Primary None     Message Wait   Yes
MXDBPND            Data Group            New Primary None     Message Wait   Yes
MXNFYERR           Data Group            New Primary None     Message Wait   Yes
MXOBJERR           Data Group            New Primary None     Message Wait   Yes
MXOBJPND           Data Group            New Primary None     Message Wait   Yes
MXUNCNFE           Data Group            New Primary None     Quit           No
MXENBTRG           Data Group            New Primary Wait     Quit           No
MXCHGDGDIR         Data Group            New Primary None     Quit           No
MXCHGUSRJ          Data Group            New Primary None     Message Wait   Yes
MXCHGSYSJ          Data Group            New Primary None     Message Wait   Yes
MXSETSTRJ          Data Group            New Primary None     Quit           No
MXSTRJRNS          Data Group            New Primary None     Quit           No
MXENDJRNT          Data Group            New Primary None     Message Wait   Yes
MXSWTCMP           Data Group            New Primary Wait     Quit           No
MXCHGDG            Data Group            New Primary Wait     Message Wait   Yes
MXUPDNOD           Application Group     New Primary Wait     Quit           No
STRUSRAPP          Data Group            New Primary Wait     Quit           No


Shipped procedures for data protection reports


The shipped procedures for data protection reports are included in this section. The
steps within each procedure are listed in the order in which they will be performed.
For more information about a step, click on a step name or see “Steps for data
protection report procedures” on page 611.
• “CRTDPRDIR” on page 585
• “CRTDPRFLR” on page 585
• “CRTDPRLIB” on page 586

CRTDPRDIR
The CRTDPRDIR procedure creates a data protection report for directories. Steps in
the CRTDPRDIR procedure are included in Table 82.

Table 82. Steps in CRTDPRDIR procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXGETDIR           Node                  Local       None     Quit           No
MXUPDDIR           Node                  Local       Wait     Quit           No

CRTDPRFLR
The CRTDPRFLR procedure creates a data protection report for folders. Steps in the
CRTDPRFLR procedure are included in Table 83.

Table 83. Steps in CRTDPRFLR procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXGETFLR           Node                  Local       None     Quit           No
MXUPDFLR           Node                  Local       Wait     Quit           No


CRTDPRLIB
The CRTDPRLIB procedure creates a data protection report for libraries. Steps in the
CRTDPRLIB procedure are included in Table 84.

Table 84. Steps in CRTDPRLIB procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXGETLIB           Node                  Local       None     Quit           No
MXUPDLIB           Node                  Local       Wait     Quit           No

Shipped default procedures for IBM i cluster type application groups
The shipped default procedures for cluster type application groups are included in this
section. The steps within each procedure are listed in the order in which they will be
performed. The following procedures have additional steps included for use with IBM i
clustering. Procedures ENDTGT, ENDIMMED, and PRECHECK are the same
regardless of the application group type. Clicking on a step name provides more
details for the step.
• “END for clustering” on page 587
• “START for clustering” on page 587
• “SWTPLAN for clustering” on page 588
• “SWTUNPLAN for clustering” on page 590
• “ENDTGT” on page 579
• “ENDIMMED” on page 579
• “PRECHECK” on page 580


END for clustering


Steps in the END procedure for IBM i cluster type application groups are included in
Table 85.

Table 85. Steps in END procedure for IBM i cluster application groups. Click on a step name to see
more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXENDDG            Data Group            Local       None     Quit           Yes
MCENDICRG (1)      Data Resource Group   Primary     None     Quit           Yes
MCODAAVL (2)       Application Group     All         None     Quit           No

1. This is a default step to support switchable iASP environments.
2. This is a default step for all cluster environments.

START for clustering


Steps in the START procedure for IBM i cluster type application groups are included in
Table 86.

Table 86. Steps in START procedure. Click on a step name to see more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MCIASPON (1)       Data Resource Group   New Primary Wait     Quit           Yes
MXCHGDG            Data Group            New Primary Wait     Quit           Yes
MXSTRDG            Data Group            Local       Wait     Quit           No
MCSTRICRG (1)      Data Group            Primary     Wait     Quit           No
MCODAINUSE (2)     Application Group     All         None     Continue       No
MCODAPAVL (2)      Application Group     New Primary None     Quit           No

1. This is a default step to support switchable iASP environments.
2. This is a default step for all IBM i cluster environments.


SWTPLAN for clustering


Steps in the planned switch (SWTPLAN) procedure for IBM i cluster type application
groups are included in Table 87.

Table 87. Steps in shipped default SWTPLAN procedure. Click on a step name to see
more information.

Step/Step          Where Step Runs                   Before   Error          Step Can Be
Program Name       Type                  Node        Action   Action         Disabled/Removed

MXCHKCOM           Application Group     Local       None     Quit           No
MXCHKCFG           Data Group            New Primary None     Message Wait   Yes
MCWAITIDA (1)      Application Group     Primary     None     Quit           No
MXSETSWTP (1)      Data Group            New Primary None     Quit           No
MXENDDG            Data Group            Local       Wait     Message Wait   Yes
MXENDRJLNK         Data Group            New Primary Wait     Message Wait   Yes
MXAUDACT           Data Group            New Primary None     Message Wait   Yes
MXAUDCMPLY         Data Group            New Primary None     Message Wait   Yes
MXAUDDIFF          Data Group            New Primary None     Quit           Yes
MXDBERR            Data Group            New Primary None     Message Wait   Yes
MXDBPND            Data Group            New Primary None     Message Wait   Yes
MXNFYERR           Data Group            New Primary None     Message Wait   Yes
MXOBJERR           Data Group            New Primary None     Message Wait   Yes
MXOBJPND           Data Group            New Primary None     Message Wait   Yes
MCSWTICRG (2)      Data Resource Group   New Primary Wait     Quit           Yes
MCWAITICRG (2)     Data Resource Group   New Primary Wait     Quit           Yes
MCIASPON (2)       Data Resource Group   New Primary Wait     Quit           Yes
MCSTRIJOB (2)      Data Resource Group   New Primary Wait     Quit           Yes
MXENBTRG           Data Group            New Primary None     Quit           No
MXCHGDGDIR         Data Group            New Primary None     Quit           No
MXCHGUSRJ          Data Group            New Primary None     Message Wait   Yes
MXCHGSYSJ          Data Group            New Primary None     Message Wait   Yes
MXSETSTRJ          Data Group            New Primary None     Quit           No
MXSTRJRNS          Data Group            New Primary None     Quit           No
MXENDJRNT          Data Group            New Primary None     Message Wait   Yes
MXSWTCMP           Data Group            New Primary Wait     Quit           No
MXCHGDG            Data Group            New Primary Wait     Message Wait   Yes
MCODAINUSE (1)     Application Group     All         Wait     Quit           No
MCODAPAVL (1)      Application Group     New Primary Wait     Quit           No
MXSTRDG (1)        Data Group            Local       Wait     Quit           No
MXUPDNOD           Application Group     New Primary Wait     Quit           No

1. This is a default step for all cluster environments.
2. This is a default step to support switchable iASP environments.


SWTUNPLAN for clustering


Steps in the unplanned switch (SWTUNPLAN) procedure for IBM i cluster type
application groups are included in Table 88.

Table 88. Steps in SWTUNPLAN procedure

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MXCHKCFG                 Data Group            New Primary   None           Quit           No
MXSETSWTUN               Data Group            New Primary   None           Quit           No
MXENDDGUNP               Data Group            New Primary   None           Quit           No
MXAUDACT                 Data Group            New Primary   None           Message Wait   Yes
MXAUDCMPLY               Data Group            New Primary   None           Message Wait   Yes
MXAUDDIFF                Data Group            New Primary   None           Quit           Yes
MXDBERR                  Data Group            New Primary   None           Message Wait   Yes
MXDBPND                  Data Group            New Primary   None           Message Wait   Yes
MXNFYERR                 Data Group            New Primary   None           Message Wait   Yes
MXOBJERR                 Data Group            New Primary   None           Message Wait   Yes
MXOBJPND                 Data Group            New Primary   None           Message Wait   Yes
MXUNCNFE                 Data Group            New Primary   None           Quit           No
MCSWTICRG (1)            Data Resource Group   New Primary   Wait           Quit           Yes
MCWAITICRG (1)           Data Resource Group   New Primary   Wait           Quit           Yes
MCIASPON (1)             Data Resource Group   New Primary   Wait           Quit           Yes

MCSTRIJOB (1)            Data Resource Group   New Primary   Wait           Quit           Yes
MXENBTRG                 Data Group            New Primary   Wait           Quit           No
MXCHGDGDIR               Data Group            New Primary   None           Quit           No
MXCHGUSRJ                Data Group            New Primary   None           Message Wait   Yes
MXCHGSYSJ                Data Group            New Primary   None           Message Wait   Yes
MXSETSTRJ                Data Group            New Primary   None           Quit           No
MXSTRJRNS                Data Group            New Primary   None           Quit           No
MXENDJRNT                Data Group            New Primary   None           Message Wait   Yes
MXSWTCMP                 Data Group            New Primary   Wait           Quit           No
MXCHGDG                  Data Group            New Primary   Wait           Message Wait   Yes
MCODAINUSE (2)           Application Group     New Primary   Wait           Continue       No
MCODAPAVL (2)            Application Group     New Primary   Wait           Quit           No
MXSTRDGUNP (2)           Data Group            New Primary   Wait           Quit           No
MXUPDNOD                 Application Group     New Primary   Wait           Quit           No

1. This is a default step to support switchable iASP environments.
2. This is a default step for all IBM i cluster environments.
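The Before Action and Error Action attributes in these tables follow a common pattern: a step with Before Action of Wait holds until preceding activity completes, and a failing step either stops the procedure (Quit), proceeds past the error (Continue), or holds on an inquiry message until an operator replies (Message Wait). The following sketch is illustrative only, not MIMIX code; `run_step` and `ask_operator` are hypothetical stand-ins for the real step execution and inquiry-message handling.

```python
# Illustrative model of how a procedure could interpret the "Before Action"
# and "Error Action" attributes shown in the tables above. Not MIMIX code.

def run_procedure(steps, run_step, ask_operator):
    """steps: list of dicts with 'name', 'before', and 'on_error' keys.
    run_step(name) raises RuntimeError on failure; ask_operator(name)
    returns 'retry', 'continue', or 'quit' for a Message Wait inquiry."""
    for step in steps:
        if step["before"] == "Wait":
            # Synchronization point: hold until prior activity completes.
            pass  # placeholder for the real wait
        while True:
            try:
                run_step(step["name"])
                break
            except RuntimeError:
                action = step["on_error"]
                if action == "Quit":
                    return False      # the procedure fails at this step
                if action == "Continue":
                    break             # ignore the error, go to the next step
                # "Message Wait": hold on an inquiry message for a reply
                reply = ask_operator(step["name"])
                if reply == "retry":
                    continue
                if reply == "continue":
                    break
                return False
    return True
```

A step whose failure is tolerable (Error Action of Continue) never stops the procedure, while a required step (Quit) ends it immediately; this is why the tables pair Quit with steps that cannot be disabled or removed.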


Shipped user procedures for cluster type application groups

The shipped user procedures for cluster type application groups are included in
this section. Cluster user-defined procedures are automatically created for
application-level procedures when the application group definition is type
cluster (*CLU) or when a data resource group entry of one of the following
types is created: *GMIR, *LUN, Peer, or *PPRC. The steps within each procedure
are listed in the order in which they are performed.
• “APP_END” on page 592
• “APP_FAIL” on page 593
• “APP_STR” on page 593
• “APP_SWT” on page 594
• “Shipped user procedures for *GMIR resource groups” on page 594
• “Shipped user procedures for *LUN resource groups” on page 597
• “Shipped user procedures for Peer resource groups” on page 598
• “Shipped user procedures for *PPRC resource groups” on page 599
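The lists above can be summarized as a lookup from resource group entry type to the user procedures that are created for it. The sketch below is purely illustrative (the dictionary and function names are hypothetical); the procedure names themselves are the ones documented in this section.

```python
# Summary of the shipped user procedures documented in this section, keyed
# by the application group type (*CLU) or data resource group entry type
# that causes them to be created. Illustrative only, not MIMIX code.
USER_PROCEDURES = {
    "*CLU":  ["APP_END", "APP_FAIL", "APP_STR", "APP_SWT"],
    "*GMIR": ["GMIR_END", "GMIR_FAIL", "GMIR_JOIN", "GMIR_STR", "GMIR_SWT"],
    "*LUN":  ["LUN_FAIL", "LUN_SWT"],
    "Peer":  ["PEER_END", "PEER_STR"],
    "*PPRC": ["PPRC_END", "PPRC_FAIL", "PPRC_JOIN", "PPRC_STR", "PPRC_SWT"],
}

def procedures_for(entry_type):
    """Return the shipped user procedures created for the given type."""
    return USER_PROCEDURES.get(entry_type, [])
```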

APP_END
Steps for the cluster user procedure, application end, are included in Table 89.

Table 89. Steps in cluster user procedure application end

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

ENDUSRAPP                Data Group            Primary       None           Message Wait   Yes
MCENDAGMON               Data Resource Group   All           None           Quit           Yes
MCIDAAVL                 Data Resource Group   Primary       None           Quit           No

APP_FAIL
Steps for the cluster user procedure, application failover, are included in Table 90.

Table 90. Steps in cluster user procedure application failover

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCODAFAIL/MCWAITODA      Application Group     Local         None           Quit           No
MCIDAFAIL                Application Group     Local         None           Quit           No
MCSTRAGMON               Application Group     All           None           Continue       Yes

APP_STR
Steps for the cluster user procedure, application start, are included in Table 91.

Table 91. Steps in cluster user procedure application start

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

STRUSRAPP                Application Group     New Primary   None           Message Wait   Yes
MCIDAINUSE               Application Group     New Primary   None           Quit           No
MCSTRAGMON               Application Group     All           None           Continue       Yes

APP_SWT
Steps for the cluster user procedure, application switch, are included in Table 92.

Table 92. Steps in the cluster user procedure application switch

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

ENDUSRAPP                Application Group     Primary       None           Message Wait   Yes
MCIDAAVL                 Application Group     Primary       None           Continue       No
MCWAITODA                Application Group     New Primary   None           Quit           No
MCIDAINUSE               Application Group     New Primary   None           Quit           No
MCSTRAGMON               Application Group     All           None           Continue       Yes

Shipped user procedures for *GMIR resource groups


The default user procedures for Global Mirror (GMIR) resource groups include the
following:
• “GMIR_END” on page 594
• “GMIR_FAIL” on page 595
• “GMIR_JOIN” on page 595
• “GMIR_STR” on page 596
• “GMIR_SWT” on page 596

GMIR_END
Steps in the Global Mirror (GMIR) user procedure, end, are included in Table 93.

Table 93. Steps in GMIR user procedure end

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCPSGMIR                 Data Resource Group   Local         None           Quit           No
MCUPDSTS                 Data Resource Group   Local         None           Quit           Yes

GMIR_FAIL
Steps in the shipped Global Mirror (GMIR) user procedure, failover, are included in
Table 94.

Table 94. Steps in GMIR user procedure failover

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCENDIJOB                Data Resource Group   Local         None           Continue       No
MCRMVVOL                 Data Resource Group   Local         None           Quit           No
MCPSGMIR                 Data Resource Group   Local         None           Quit           No
MCFOGMIR                 Data Resource Group   Local         None           Quit           No
MCRCRFLASH               Data Resource Group   Local         None           Quit           No
MCRVSFLASH               Data Resource Group   Local         None           Quit           No
MCRMVGMIR                Data Resource Group   Local         None           Quit           No

GMIR_JOIN
Steps in the Global Mirror (GMIR) user procedure, rejoin, are included in Table 95.

Table 95. Steps in GMIR user procedure rejoin

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCRSMGMIR                Data Resource Group   Local         None           Continue       No
MCFBGMIR                 Data Resource Group   Local         None           Continue       No
MCMKFLASH                Data Resource Group   Local         None           Continue       No
MCADDVOL                 Data Resource Group   Local         None           Continue       No
MCMKGMIR                 Data Resource Group   Local         None           Continue       No
MCIASPON                 Data Resource Group   New Primary   None           Quit           Yes
MCUPDSTS                 Data Resource Group   Local         None           Quit           Yes

GMIR_STR
Steps in the Global Mirror (GMIR) user procedure, start, are included in Table 96.

Table 96. Steps in GMIR user procedure start

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCRSMGMIR                Data Resource Group   Local         None           Quit           No
MCIASPON                 Data Resource Group   New Primary   None           Quit           Yes
MCUPDSTS                 Data Resource Group   Local         None           Quit           Yes

GMIR_SWT
Steps in the Global Mirror (GMIR) user procedure, switch, are included in Table 97.

Table 97. Steps in GMIR user procedure switch

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCENDIJOB                Data Resource Group   Local         None           Quit           No
MCIASPOFF                Data Resource Group   Local         None           Quit           No
MCRMVVOL                 Data Resource Group   Local         None           Quit           No
MCPSGMIR                 Data Resource Group   Local         None           Quit           No
MCFOGMIR                 Data Resource Group   Local         None           Quit           No
MCRCRFLASH               Data Resource Group   Local         None           Quit           No
MCRVSFLASH               Data Resource Group   Local         None           Quit           No
MCFBGMIR                 Data Resource Group   Local         None           Quit           No
MCMKFLASH                Data Resource Group   Local         None           Quit           No
MCADDVOL                 Data Resource Group   Local         None           Quit           No
MCRMVGMIR                Data Resource Group   Local         None           Quit           No
MCMKGMIR                 Data Resource Group   Local         None           Quit           No

Shipped user procedures for *LUN resource groups


The default user procedures for LUN resource groups include the following:
• “LUN_FAIL” on page 597
• “LUN_SWT” on page 598

LUN_FAIL
Steps in the LUN user procedure, fail, are included in Table 98.

Table 98. Steps in LUN user procedure fail

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCENDIJOB                Data Resource Group   Local         None           Continue       No
MCRMVVOLGP               Data Resource Group   Local         None           Quit           No
MCADDVOLGP               Data Resource Group   Local         None           Quit           No

LUN_SWT
Steps in the LUN user procedure, switch, are included in Table 99.

Table 99. Steps in LUN user procedure switch

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCENDIJOB                Data Resource Group   Local         None           Quit           No
MCIASPOFF                Data Resource Group   Local         None           Quit           No
MCRMVVOLGP               Data Resource Group   Local         None           Quit           No
MCADDVOLGP               Data Resource Group   Local         None           Quit           No

Shipped user procedures for Peer resource groups

The default user procedures for Peer resource groups include the following:
• “PEER_END” on page 598
• “PEER_STR” on page 598

PEER_END
Steps in the PEER user procedure, end, are included in Table 100.

Table 100. Steps in PEER user procedure end

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCENDPEER                Data Resource Group   Peer          None           Continue       No

PEER_STR
Steps in the PEER user procedure, start, are included in Table 101.

Table 101. Steps in PEER user procedure start

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCSTRPEER                Data Resource Group   Peer          None           Continue       No

Shipped user procedures for *PPRC resource groups


The default user procedures for Peer to Peer Remote Copy (PPRC) resource groups
include the following:
• “PPRC_END” on page 599
• “PPRC_FAIL” on page 599
• “PPRC_JOIN” on page 600
• “PPRC_STR” on page 600
• “PPRC_SWT” on page 601

PPRC_END
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, end, are included in
Table 102.

Table 102. Steps in PPRC user procedure end

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCPSPPRC                 Data Resource Group   Local         None           Quit           No
MCUPDSTS                 Data Resource Group   Local         None           Quit           Yes

PPRC_FAIL
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, failover, are included
in Table 103.

Table 103. Steps in PPRC user procedure failover

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCENDIJOB                Data Resource Group   Local         None           Quit           No
MCFOPPRC                 Data Resource Group   Local         None           Quit           Yes

PPRC_JOIN
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, rejoin, are included
in Table 104.

Table 104. Steps in PPRC user procedure rejoin

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCRSMPPRC                Data Resource Group   Local         None           Continue       No
MCFBPPRC                 Data Resource Group   Local         None           Continue       No
MCIASPON                 Data Resource Group   New Primary   None           Quit           Yes
MCUPDSTS                 Data Resource Group   Local         None           Quit           Yes

PPRC_STR
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, start, are included in
Table 105.

Table 105. Steps in PPRC user procedure start

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCRSMPPRC                Data Resource Group   Local         None           Quit           No
MCIASPON                 Data Resource Group   New Primary   None           Quit           Yes
MCUPDSTS                 Data Resource Group   Local         None           Quit           Yes

PPRC_SWT
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, switch, are included
in Table 106.

Table 106. Steps in PPRC user procedure switch

Step/Step Program Name   Type                  Node          Before Action  Error Action   Disable/Remove

MCENDIJOB                Data Resource Group   Local         None           Quit           No
MCIASPOFF                Data Resource Group   Local         None           Quit           No
MCFOPPRC                 Data Resource Group   Local         None           Quit           No
MCFBPPRC                 Data Resource Group   Local         None           Quit           No

Steps for application groups


This section includes step programs for application groups that are shipped with MIMIX.
• “Steps for application groups included in procedures” on page 602
• “Step programs not included in shipped MIMIX procedures” on page 609

Steps for application groups included in procedures


Table 107 includes step programs for application groups that are shipped with MIMIX and identifies the procedures where they are used.
The values for the Used in Procedure column indicate the following:
• ‘R’ - The step is required and cannot be changed or disabled.
• ‘C’ - The step is included and can be changed or disabled.
• Blank - The step is not used in the procedure.

Table 107. Step programs for data and application groups
The Used in Procedure markers apply to the END, ENDIMMED, ENDTGT, PRECHECK,
START, SWTPLAN, and SWTUNPLAN procedures.

Step/Step Program Name   Type                Node          Used in Procedure

ENDUSRAPP                Application Group   Primary       C
    End user application. This step can be customized to perform any actions
    necessary to end the production application.

MXAUDACT                 Data Group          New Primary   C C C
    Audit activity verification. This step checks for active audits.

MXAUDCMPLY               Data Group          New Primary   C C C
    Audit compliance verification. This step checks for audits that are out of
    compliance.

MXAUDDIFF                Data Group          New Primary   C C C
    Audit difference verification. This step checks for audits that have
    detected differences that have not been corrected.

MXCHGDG                  Data Group          New Primary   C C C
    Change data group. This step enables and disables data groups in
    multi-management environments based on the recovery domain and current node
    roles. New starting points are created for data groups that are enabled.

MXCHGDGDIR               Data Group          New Primary   R R
    Change data group direction. This step changes the direction of replication
    for the data group.

MXCHGSYSJ                Application Group   New Primary   C C
    Changes audit journal receivers. This step changes the system journal to a
    new receiver.

MXCHGUSRJ                Data Group          New Primary   C C
    Changes user journal receivers. This step changes the user journal to a new
    receiver.

MXCHKAPMNT               Data Group          New Primary   C C C
    Check AP maintenance status. This step checks the status of access path
    maintenance files and attempts to repair errors if found.

MXCHKCFG                 Data Group          New Primary   C C C
    Configuration verification. This step verifies that data groups are
    switchable. Data groups with replicate nodes are not checked.

MXCHKCOM                 Application Group   Local         R R
    Communications verification. This step verifies that the communication
    links are active among all nodes in the recovery domain for the application
    group.

MXCHKDGEND               Data Group          New Primary   C
    Checks for ended data group, maximum of 30 checks. This step periodically
    checks the data group status to verify that replication processes are
    ended. The RJ link status is not checked. If processes are not ended, a
    10-second delay occurs before status is checked again. The step fails if
    replication processes are not ended after 30 checks. This step must be
    manually enabled.

MXDBERR                  Data Group          New Primary   C C C
    Database error verification. This step checks all enabled data groups for
    objects or files in error or in the process of being repaired.

MXDBPND                  Data Group          New Primary   C C C
    Ensures there are no pending database transactions. This step checks for
    active commit cycles or unprocessed entries in the data group.

MXDSBCST                 Data Group          Backup        C C C C C
    Disables foreign key constraints. This step disables all foreign key
    constraints on the target node if the data group is configured for disabled
    constraints. This step is available to be manually added.

MXENBCST                 Data Group          Backup        C C C C
    Enable foreign key constraints. This step enables all foreign key
    constraints that were previously disabled by MIMIX. This step is available
    to be manually added.

MXENBTRG                 Data Group          New Primary   R R
    Enables triggers on target. This step enables triggers on the target node
    that were previously disabled by MIMIX.

MXENDDG                  Data Group          Local         C R
    End data group controlled. This step ends the data group in a controlled
    manner. A timeout of 3600 seconds is allowed for the data group to end.

MXENDDGIM                Data Group          New Primary   C
    End data group immediate. This step ends the data group immediately.

MXENDDGTGT               Data Group          New Primary   C
    End target data group controlled. This step ends the processes running on
    the target node for the data group in a controlled manner.

MXENDDGUNP               Data Group          New Primary   R
    End data group controlled for an unplanned switch. This step ends the
    processes running on the target node for the data group in a controlled
    manner during an unplanned switch using a timeout of 3600 seconds. If the
    timeout is reached before the controlled end is completed, an inquiry
    message is issued. Errors are ignored.

MXENDJRNT                Data Group          New Primary   C C
    End journaling on the new target node. This step ends journaling on the new
    target node when the data group is not configured for journaling on the
    target node.

MXENDRJLNK               Data Group          New Primary   C
    End remote journal link. This step ends the remote journaling link from the
    old source node to the old target node.

MXNFYERR                 Data Group          New Primary   C C C
    Notification error verification. This step checks for unacknowledged error
    notifications for the data group.

MXOBJERR                 Data Group          New Primary   C C C
    Object error verification. This step checks if there are failed object
    activity entries for the data group.

MXOBJPND                 Data Group          New Primary   C C C
    Ensures there are no pending object transactions. This step checks for
    pending object activity entries for the data group.

MXSETSTRJ                Data Group          New Primary   R R
    Set journal starting points. This step sets the starting point for the data
    group that is in switch mode. A journal entry is sent to notify Target
    Journal Inspection about this event.

MXSETSWTP                Data Group          New Primary   R
    Sets switch type to planned. This step sets the switch type used by other
    step programs to planned.

MXSETSWTUN               Data Group          New Primary   R
    Sets switch type to unplanned. This step sets the switch type used by other
    step programs to unplanned.

MXSTRDG                  Data Group          Local         R
    Start data group. This step starts the data group using the last processed
    receiver and sequence number. If a data group cannot be started because it
    requires pending entries to be cleared, a second start request is issued
    which clears pending entries.

MXSTRJRNS                Data Group          New Primary   R R
    Starts journaling on the new source node. This step starts journaling for
    all applicable objects on the new source node of the data group if the data
    group does not allow journaling on the target. This includes files and any
    data areas, data queues, and IFS objects configured for user journal
    replication.

MXSWTCMP                 Data Group          New Primary   R R
    Denotes the end of a switch procedure. This step sets the switch to
    complete for a data group when the switch has completed.

MXUNCNFE                 Data Group          New Primary   R
    Processes unconfirmed entries. This step processes all unconfirmed journal
    entries for a data group that uses synchronous remote journaling.

MXUPDNOD                 Application Group   New Primary   R R
    Update node roles after switch. This step updates node roles after a switch
    has completed.

STRUSRAPP                Application Group   New Primary   C C
    Start user application. This step can be customized to perform any actions
    necessary to start the production application.
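The documented behavior of MXCHKDGEND, up to 30 status checks with a 10-second delay between them, is a bounded polling loop. A hypothetical sketch follows; `check_ended` is an illustrative stand-in for the real data group status query, not a MIMIX API.

```python
import time

def wait_for_dg_ended(check_ended, max_checks=30, delay_seconds=10):
    """Bounded polling modeled on the MXCHKDGEND description: return True
    once the data group's replication processes report ended, or False
    (step failure) after max_checks attempts. The RJ link status is
    deliberately not part of the check, matching the documentation."""
    for attempt in range(max_checks):
        if check_ended():
            return True
        if attempt < max_checks - 1:
            time.sleep(delay_seconds)   # 10-second delay between checks
    return False
```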


Step programs not included in shipped MIMIX procedures


Table 108 includes step programs for application groups that are shipped with MIMIX but not included in a procedure. These steps can be
added to a procedure as needed:

Table 108. Step programs for application groups

Step/Step Program Name   Type                Node

MXDLY60                  Application Group   Local
    Delay job for 60 seconds. This step delays the procedure for 60 seconds.

MXSTRDBS                 Data Group          Local
    Starts data group database source jobs. This step starts the database
    replication processes on the source node starting with the last processed
    entry for the receiver and sequence number. If a data group cannot be
    started because it requires pending entries to be cleared, a second start
    request is issued which clears pending entries.

MXSTRDBT                 Data Group          Local
    Starts data group database target jobs. This step starts the database
    replication processes on the target node starting with the last processed
    entry for the receiver and sequence number. If a data group cannot be
    started because it requires pending entries to be cleared, a second start
    request is issued which clears pending entries.

MXSTRDGDB                Data Group          Local
    Starts data group database jobs. This step starts the database replication
    processes starting with the last processed entry for the receiver and
    sequence number. If a data group cannot be started because it requires
    pending entries to be cleared, a second start request is issued which
    clears pending entries.

MXSTRDGUNP               Data Group          New Primary
    Starts data groups during an unplanned cluster switch. This step starts the
    data group at the last processed entry for both receiver and sequence
    number for an unplanned cluster switch. Communication errors are ignored.
    If a data group cannot be started because it requires pending entries to be
    cleared, a second start request is issued which clears pending entries.

MXSTRDGOBJ               Data Group          Local
    Starts data group object jobs. This step starts the object replication
    processes starting with the last processed entry for the receiver and
    sequence number. If a data group cannot be started because it requires
    pending entries to be cleared, a second start request is issued which
    clears pending entries.

MXSTROBJS                Data Group          Local
    Starts data group object source jobs. This step starts the object
    replication processes on the source node starting with the last processed
    entry for the receiver and sequence number. If a data group cannot be
    started because it requires pending entries to be cleared, a second start
    request is issued which clears pending entries.

MXSTROBJT                Data Group          Local
    Starts data group object target jobs. This step starts the object
    replication processes on the target node starting with the last processed
    entry for the receiver and sequence number. If a data group cannot be
    started because it requires pending entries to be cleared, a second start
    request is issued which clears pending entries.
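Every MXSTRDG* step above repeats the same two-request pattern: a normal start is tried first, and only if it fails because pending entries must be cleared is a second start issued that clears them. A hypothetical sketch, with `start_dg` and `PendingEntriesError` as illustrative stand-ins for the real start request and failure condition:

```python
# Illustrative sketch of the two-request start pattern described for the
# MXSTRDG* step programs. Not MIMIX code; start_dg is a stand-in.

class PendingEntriesError(Exception):
    """Raised when a start fails because pending entries must be cleared."""

def start_data_group(start_dg, name):
    try:
        # First request: a normal start at the last processed entry.
        start_dg(name, clear_pending=False)
    except PendingEntriesError:
        # Second request: start again, this time clearing pending entries.
        start_dg(name, clear_pending=True)
```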


Steps for data protection report procedures


Table 109 lists the step programs that are shipped for data protection reports. The steps for data protection report procedures are
predefined and cannot be changed. Also, these steps cannot be added to any other procedures. These steps are used in procedures of
type *NODE. The values for the Used in Procedure column indicate the following:
• ‘R’ - The step is required and cannot be changed or disabled.
• Blank - The step is not used in the procedure.

Table 109. Data protection reports step programs

Step/Step Program Name   Type   Node    Used in Procedure

MXGETDIR                 Node   Local   CRTDPRDIR (R)
    Get directories. This step collects information about the protection state
    of directories on the specified node. Specific IBM and Vision directories
    are omitted from the list.

MXGETFLR                 Node   Local   CRTDPRFLR (R)
    Get folders. This step collects information about the protection state of
    document library object (DLO) folders on the specified node.

MXGETLIB                 Node   Local   CRTDPRLIB (R)
    Get libraries. This step collects information about the protection state of
    libraries on the specified node. Specific IBM and Vision libraries are
    omitted from the list.

MXUPDDIR                 Node   Local   CRTDPRDIR (R)
    Update directory replication information. This step processes the
    information collected about the protection state of directories for a data
    protection report. See the Data Protection Report portlet on the Analysis
    page for results.

MXUPDFLR                 Node   Local   CRTDPRFLR (R)
    Update folder replication information. This step processes the information
    collected about the protection state of document library object (DLO)
    folders for a data protection report. See the Data Protection Report
    portlet on the Analysis page for results.

MXUPDLIB                 Node   Local   CRTDPRLIB (R)
    Update library replication information. This step processes the information
    collected about the protection state of libraries for a data protection
    report. See the Data Protection Report portlet on the Analysis page for
    results.
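The collection pass described for MXGETDIR amounts to walking the node's directory tree, skipping specific product directories, and recording each directory's protection state. The sketch below is purely illustrative: the exclusion prefixes and the `is_replicated` check are hypothetical stand-ins, not the directories or logic MIMIX actually uses.

```python
import os

# Illustrative exclusion list only; the real omitted IBM and Vision
# directories are determined by the product, not by this sketch.
EXCLUDED_PREFIXES = ("/QIBM",)

def collect_directories(root, is_replicated):
    """Return {path: 'protected' | 'not protected'} for directories under
    root, skipping excluded product directory trees."""
    states = {}
    for dirpath, dirnames, _files in os.walk(root):
        if dirpath.startswith(EXCLUDED_PREFIXES):
            dirnames[:] = []   # do not descend into excluded trees
            continue
        state = "protected" if is_replicated(dirpath) else "not protected"
        states[dirpath] = state
    return states
```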


Steps for clustering environments


Table 110 includes step programs for clustering environments that are shipped with MIMIX and the procedures where they are used. The
values for the Used in Procedure column indicate the following:
• ‘R’ - The step is required and cannot be changed or disabled.
• ‘C’ - The step is included and can be changed or disabled.
• Blank - The step is not used in the procedure.
Table 110. Step programs for clustering environments
The Used in Procedure markers apply to the following procedures: APP_END,
APP_FAIL, APP_STR, APP_SWT, END, GMIR_END, GMIR_FAIL, GMIR_JOIN, GMIR_STR,
GMIR_SWT, LUN_FAIL, LUN_SWT, PEER_END, PEER_STR, PPRC_END, PPRC_FAIL,
PPRC_JOIN, PPRC_STR, PPRC_SWT, START, SWTPLAN, and SWTUNPLAN.

Step/Step Program Name   Type                  Node          Used in Procedure

MCADDVOL                 Data Resource Group   Local         R R R R
    Global Mirror - add volumes to session. This step adds a volume to the
    Global Mirror session.

MCADDVOLGP               Data Resource Group   Local         R R
    LUN - add volume group. This step adds the LUN from the previous primary
    node to the new primary.

MCENDAGMON               Application Group     All           C
    End the application group monitor. This step ends and disables the
    application group status monitor.

MCENDICRG                Data Resource Group   Primary       C
    Ends the iASP CRG. This step ends the switchable iASP cluster resource
    group.
MCENDIJOB                Data Resource Group   Local         R R R R R R
    End iASP MIMIX system jobs. This step ends the jobs associated with the
    switchable iASP.

MCENDPEER                Data Resource Group   Peer          R R
    End processes related to PEER. This step ends processes related to PEER
    resource groups. For admin domains, the admin domain monitor is ended.
    Otherwise, the associated data groups are ended.

MCFBGMIR                 Data Resource Group   Local         R R
    Fail back Global Mirror. This step starts Global Mirror replication using
    the fail back PPRC (failbackpprc) API.

MCFBPPRC                 Data Resource Group   Local         R R
    Fail back Peer to Peer Remote Copy (PPRC). This step starts Metro Mirror
    replication using the fail back PPRC (failbackpprc) API.

MCFOGMIR                 Data Resource Group   Local         R R
    Switches direction of mirroring for Global Mirror. This step switches the
    direction of mirroring for Global Mirror CRGs using the fail over PPRC
    (failoverpprc) API.
MCFOPPRC - Peer to Peer Remote Copy (PPRC) failover. Type: Data Resource
Group. Node: Local. Used in Procedure: R R.
This step switches the direction of mirroring using the failover PPRC
(failoverpprc) API.

MCIASPOFF - Varies off the iASP. Type: Data Resource Group. Node: Local.
Used in Procedure: R R R.
This step varies off the switchable iASP.

MCIASPON - Varies on the iASP. Type: Data Resource Group. Node: New Primary.
Used in Procedure: C C.
This step varies on the switchable iASP.

MCIDAAVL - Changes the IDA status on all nodes to *AVAILABLE. Type:
Application Group. Node: Primary. Used in Procedure: R R.
This step changes the QCSTHAAPPI status on all nodes to *AVAILABLE. This is
used by the application CRG exit program to inform the data CRG exit program
that the application has been ended and replication can now be ended.

MCIDAFAIL - Changes the IDA status to be *INUSE. Type: Application Group.
Node: Local. Used in Procedure: R.
This step changes the QCSTHAAPPI status to be *INUSE. This is used by the
CRG exit programs to indicate that the application has been successfully
started on the new primary node following a cluster failover scenario.

MCIDAINUSE - Sets the IDA status on all nodes to *INUSE. Type: Application
Group. Node: New Primary. Used in Procedure: R R R.
This step sets the QCSTHAAPPI status on the new primary node to *INUSE. This
is used by the CRG exit programs to indicate that the application has been
successfully started on the new primary node.

MCMKFLASH - Make flash copy for Global Mirror. Type: Data Resource Group.
Node: Local. Used in Procedure: R R.
This step makes the flash copy relationship for Global Mirror replication
(mkflash).

MCMKGMIR - Make Global Mirror. Type: Data Resource Group. Node: Local.
Used in Procedure: R R.
This step makes the Global Mirror relationship for Global Mirror replication
(mkgmir).

MCODAAVL - Sets the ODA status on all nodes to *AVAILABLE. Type: Application
Group. Node: All. Used in Procedure: R.
This step sets the QCSTHAAPPO status on all nodes to *AVAILABLE. This is
used by the application CRG exit program to recognize that all data and
device CRGs have successfully completed their switch processing.

MCODAFAIL - Waits for the ODA status to be *AVAILABLE in an unplanned
switch. Type: Application Group. Node: Local. Used in Procedure: R.
This step waits for the QCSTHAAPPO status to be *AVAILABLE. This is used by
the application CRG exit program in a cluster failover scenario to know that
all data and device CRGs have successfully completed their failover
processing.

MCODAINUSE - Sets the ODA status on all nodes to *INUSE. Type: Application
Group. Node: All. Used in Procedure: R R R.
This step sets the QCSTHAAPPO status on all nodes to *INUSE to indicate that
the data CRG is in an *ACTIVE state and replication is also active.

MCODAPAVL - Sets the ODA status on the primary node to *AVAILABLE. Type:
Application Group. Node: New Primary. Used in Procedure: R R R.
This step sets the QCSTHAAPPO status on the new primary node to *AVAILABLE
as the last step of a switch or failover.

MCPSGMIR - Pause Global Mirror. Type: Data Resource Group. Node: Local.
Used in Procedure: R R R.
This step pauses the Global Mirror replication using the pause GMIR
(pausegmir) API.

MCPSPPRC - Pause Peer to Peer Remote Copy (PPRC). Type: Data Resource
Group. Node: Local. Used in Procedure: R.
This step pauses the Metro Mirror replication using the Pause PPRC
(pausepprc) API.

MCRCRFLASH - Recover the flash for Global Mirror. Type: Data Resource
Group. Node: Local. Used in Procedure: R R.
This step checks status to determine the state of the Global Mirror using
the list flash (lsflash) API. If the status is revert, the flash is reverted
(revertflash). Otherwise the flash is committed (commitflash).

MCRMVGMIR - Remove Global Mirror. Type: Data Resource Group. Node: Local.
Used in Procedure: R R.
This step removes the Global Mirror relationship using the remove GMIR
(rmgmir) API.

MCRMVVOL - Remove Global Mirror session. Type: Data Resource Group.
Node: Local. Used in Procedure: R R.
This step removes a volume from the Global Mirror session (chsession -action
remove).

MCRMVVOLGP - Remove LUN volume group. Type: Data Resource Group.
Node: Local. Used in Procedure: R R.
This step removes the LUN from the previous primary node using the change
host connection (chhostconnect) API.

MCRSMGMIR - Resume Global Mirror. Type: Data Resource Group. Node: Local.
Used in Procedure: R R.
This step resumes the Global Mirror replication using the resume GMIR
(resumegmir) API.

MCRSMPPRC - Resume Peer to Peer Remote Copy (PPRC). Type: Data Resource
Group. Node: Local. Used in Procedure: R R.
This step resumes the Metro Mirror replication using the Resume PPRC
(resumepprc) API.

MCRVSFLASH - Reverse flash for Global Mirror. Type: Data Resource Group.
Node: Local. Used in Procedure: R R.
This step reverses the flash copy relationship for Global Mirror during a
switch.

MCSTRAGMON - Start the AG status monitor. Type: Application Group.
Node: All. Used in Procedure: C C C.
This step enables and starts the application group status monitor.

MCSTRICRG - Start iASP CRG. Type: Data Resource Group. Node: Primary.
Used in Procedure: C.
This step starts the iASP CRG using the STRAG command.

MCSTRIJOB - Starts the iASP MIMIX system jobs. Type: Data Resource Group.
Node: New Primary. Used in Procedure: C C.
This step calls MXXPREG to register the MIMIX exit point and starts the port
job for the iASP system definition, the MIMIX manager for the iASP system,
and the master monitor.

MCSTRPEER - Start processes related to PEER. Type: Data Resource Group.
Node: Peer. Used in Procedure: R.
This step starts processes for the PEER resource group. For admin domains,
the admin domain monitor is started. Otherwise, the associated data groups
are started.

MCSWTICRG - Switch iASP CRG. Type: Data Resource Group. Node: New Primary.
Used in Procedure: C C.
This step switches the iASP using the SWTAG command.

MCUPDSTS - Update Peer to Peer Remote Copy (PPRC) status. Type: Data
Resource Group. Node: Local. Used in Procedure: C C C C C C.
This step updates the internal data area for the Global or Metro Mirror
status.

MCWAITICRG - Wait for iASP CRG switch to complete. Type: Data Resource
Group. Node: New Primary. Used in Procedure: C C.
This step waits for the iASP CRG switch to complete.

MCWAITIDA - Wait for IDA status to be *AVAILABLE. Type: Application Group.
Node: Primary. Used in Procedure: R.
This step waits for the QCSTHAAPPI data area status flag to be available.
This is used by the data CRG exit program to know when the application has
been ended and it can proceed with ending replication.

MCWAITODA - Wait for ODA status on all nodes to be *AVAILABLE. Type:
Application Group. Node: New Primary. Used in Procedure: R R.
This step waits for the QCSTHAAPPO status on the new primary node to become
*AVAILABLE. This is used by the application CRG exit program to know when
the data CRG has ended all replication and is ready to proceed with the
switch processing.


Steps for MIMIX for MQ


This section includes programs that are shipped with MIMIX for use with MQ-specific application groups. MIMIX for MQ requires a
separate license key. Configuration for the initial synchronization and the initial start of data groups must be completed prior to using these
step programs. For more information, see the MIMIX for IBM WebSphere MQ book. Once MIMIX for MQ is operational, the step
programs can be used.
The MQ step programs need to be modified prior to adding them into the procedures for the MQ application group. To modify the MQ step
programs, do the following:
1. Add the queue manager names to the MQPRESWT and MQPOSTSWT programs. The two source templates are shipped in file
MCTEMPLSRC in the MIMIX installation library. Ensure the queue manager name variable (#MQVAR#) is updated in MQPRESWT
and MQPOSTSWT to the name of the queue manager.
2. Add the MIMIX installation library to your library list.
3. Compile MQPRESWT and MQPOSTSWT. Use the MIMIX installation library as the resulting library.
4. Copy the MQPRESWT and MQPOSTSWT programs to the other system(s) in the installation.
5. Define the programs as step programs.
The following commands show an example of defining the programs as step programs (step 5):
ADDSTEPPGM STEPPGM(MQCHGJRNB) PGM(installation-library-name/MQCHGJRN) TYPE(*DGDFN) NODETYP(*BACKUP)
ADDSTEPPGM STEPPGM(MQCHGJRNP) PGM(installation-library-name/MQCHGJRN) TYPE(*DGDFN) NODETYP(*NEWPRIM)
ADDSTEPPGM STEPPGM(MQPRESWT) PGM(installation-library-name/MQPRESWT) TYPE(*AGDFN) NODETYP(*PRIMARY)
ADDSTEPPGM STEPPGM(MQPOSTSWT) PGM(installation-library-name/MQPOSTSWT) TYPE(*AGDFN) NODETYP(*NEWPRIM)
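Steps 2 and 3 could be carried out with the standard IBM i ADDLIBLE and CRTCLPGM commands, as in the following sketch. This assumes the templates in MCTEMPLSRC are CL source members named after the programs; installation-library-name is a placeholder for your MIMIX installation library.

```cl
/* Step 2: add the MIMIX installation library to the library list.  */
ADDLIBLE   LIB(installation-library-name)

/* Step 3: compile the edited templates, placing the resulting      */
/* programs in the MIMIX installation library.                      */
CRTCLPGM   PGM(installation-library-name/MQPRESWT) +
             SRCFILE(installation-library-name/MCTEMPLSRC) +
             SRCMBR(MQPRESWT)
CRTCLPGM   PGM(installation-library-name/MQPOSTSWT) +
             SRCFILE(installation-library-name/MCTEMPLSRC) +
             SRCMBR(MQPOSTSWT)
```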

Shipped procedures and step programs

Table 111 lists the step programs that are shipped for MIMIX for MQ and the
procedures where they are used. The values for the Used in Procedure column
indicate the following:
• ‘R’ - The step is required and cannot be changed or disabled.
• ‘C’ - The step is included and can be changed or disabled.
• Blank - The step is not used in the procedure.

Table 111. MIMIX for MQ programs

Columns: Step/Step Program Name and Description | Type | Node (Where Step
Runs) | Used in Procedure: START(1), SWTPLAN(2), SWTUNPLAN(3). The numbers
refer to the notes that follow the table.

MQCHGJRN (MQ - change journal). Type: Data Resource Group. Node: New
Primary. Used in Procedure: C C.
If the associated journal is AMQAJRN, the source system journal definition
change management is set to *SIZE and the target system journal definition
change management is set to None.

MQPRESWT (MQ pre-switch). Type: Application Group. Node: Primary. Used in
Procedure: C.
Calls the switch program that ends the queue manager.

MQPOSTSWT (MQ post-switch). Type: Application Group. Node: New Primary.
Used in Procedure: C C.
Calls the switch program that enables the queue manager.
1. The MQ step programs need to be added to the START procedure before the step that starts the data groups
(MXSTRDG) associated with the application group.
2. The MQ step programs need to be added to the planned switch procedure (SWTPLAN) using the ADDSTEP
command. MQPRESWT should be added after the user application is ended (ENDUSRAPP), and
MQPOSTSWT should be added after the user application is started on the new primary system (STRUSRAPP).
3. The MQ step programs need to be added to the unplanned switch procedure (SWTUNPLAN) using the
ADDSTEP command. Both MQPRESWT and MQCHGJRNP should be added, in that order, after the application is
started on the new primary system (STRUSRAPP).


CHAPTER 24 Customizing with exit point programs

The MIMIX family of products provides a variety of exit points that enable you to
extend and customize your operations.
The topics in this chapter include:
• “Summary of exit points” on page 625 provides tables that summarize the exit
points available for use.
• “Working with journal receiver management user exit points” on page 628
describes how to use user exit points safely.

Summary of exit points


The following tables summarize the exit points available for use.
If you need a specialized user exit program designed for your applications or
assistance with creating programs for journal management exit points, services are
available through your Certified MIMIX Consultant. Our personnel will ask about your
requirements for custom code and design a customized program to work with your
applications.

MIMIX user exit points


MIMIX provides the exit points identified in Table 112 for journal receiver
management. For additional information, see “Working with journal receiver
management user exit points” on page 628.

Table 112. MIMIX exit points for journal receiver management

Journal receiver management exit points:
  Receiver change management pre-change
  Receiver change management post-change
  Receiver delete management pre-check
  Receiver delete management pre-delete
  Receiver delete management post-delete

MIMIX also supports a generic interface to existing database and object replication
process exit points that provides enhanced filtering capability on the source system.
This generic user exit capability is only available through a Certified MIMIX
Consultant.


MIMIX Monitor user exit points


Table 113 identifies the user exit points available in MIMIX Monitor. You can use the
exit points through programs controlled by a monitor. Monitors can be set up to
operate with other products, including MIMIX. You can also use the MIMIX Monitor
User Access API (MMUSRACCS) for all interfaces to MIMIX Monitor.
MIMIX Monitor also contains the MIMIX Model Switch Framework. This support
provides powerful customization opportunities through a set of programs and
commands that are designed to provide a consistent switch framework for you to use
in your switching environment.
The Using MIMIX Monitor book documents the user exit points, the API, and MIMIX
Model Switch Framework.

Table 113. MIMIX Monitor exit points

Interface exit points:
  Pre-create / Post-create        Pre-work with information / Post-work with information
  Pre-change / Post-change        Pre-hold / Post-hold
  Pre-copy / Post-copy            Pre-release / Post-release
  Pre-delete / Post-delete        Pre-status / Post-status
  Pre-display / Post-display      Pre-change status / Post-change status
  Pre-print / Post-print          Pre-run / Post-run
  Pre-rename / Post-rename        Pre-export / Post-export
  Pre-start / Post-start          Pre-import / Post-import
  Pre-end / Post-end

Condition program exit point:
  After pre-defined condition check

Event program exit point:
  After condition check (pre-defined and user-defined)


MIMIX Promoter user exit points


Table 114 identifies the exit points within MIMIX Promoter. If you perform concurrent
operations between MIMIX Promoter and MIMIX, you might consider using these exit
points within automation.

Table 114. MIMIX Promoter exit points

Control exit points (the control exit service program supports these exit
points):
  Transfer complete
  Lock failure
  After lock
  Copy failure
  Copy finalize
  After temporary journal delete

Data exit points (the data exit service program supports these exit points):
  Data initialize
  Data transfer
  Data finalize

Working with journal receiver management user exit points
User exit points in critical processing areas enable you to incorporate specialized
processing with MIMIX to extend function to meet additional needs for your
environment. Access to user exit processing is provided through the use of an exit
program that can be written in any language supported by IBM i.
Since user exit programming allows for user code to be run within MIMIX processes,
great care must be exercised to prevent the user code from interfering with the proper
operation of MIMIX. For example, a user exit program that inadvertently causes an
entry to be discarded that is needed by MIMIX could result in a file not being available
in case of a switch. Use caution in designing a configuration for use with user exit
programming. You can safely use user exit processing with proper design,
programming, and testing. Services are also available to help customers implement
specialized solutions.

Journal receiver management exit points


MIMIX includes support that allows user exit programming in the journal receiver
change management and journal receiver delete management processes. With this
support, you can customize change management and delete management of journal
receivers according to the needs of your environment.
Journal receiver management exit points are enabled when you specify an exit
program to use in a journal definition.

Change management exit points


MIMIX can change journal receivers when a specified time is reached, when the
receiver reaches a specified size, or when the sequence number reaches a specified
threshold. You specify these values when you create a journal definition. MIMIX also
changes the journal receiver at other times, such as during a switch and when a user
requests a change with the Change Data Group Receiver (CHGDGRCV) command.
The following user exit points are available for customizing change management
processing:
• Receiver Change Management Pre-Change User Exit Point. This exit point is
located immediately before the point in processing where MIMIX changes a
journal receiver. Either the user forced a journal receiver change (CHGDGRCV
command) or MIMIX processing determined that the journal receiver needs to
change. The return code from the exit program can prevent MIMIX from changing
the journal receiver, which can be useful when the exit program changes the
receiver.
• Receiver Change Management Post-Change User Exit Point. This exit point is
located immediately after the point in processing where MIMIX changes a journal
receiver. MIMIX ignores the return code from the exit program. This exit point is
useful for processing that does not affect MIMIX processing, such as saving the
journal receiver to media. (The example program in Table 115 on page 632 shows
how you can determine the name of the previously attached journal receiver by
retrieving the name of the first entry in the currently attached journal
receiver.)
Restrictions for Change Management Exit Points: The following restrictions apply
when the exit program is called from either of the change management exit points:
• Do not include the Change Data Group Receiver (CHGDGRCV) command in your
exit program.
• Do not submit batch jobs for journal receiver change or delete management from
the exit program. Submitting a batch job would allow the in-line exit point
processing to continue and potentially return to normal MIMIX journal
management processing, thereby conflicting with journal manager operations. By
not submitting journal receiver change management to a batch job, you prevent a
potential problem where the journal receiver is locked when it is accessed by a
batch program.

Delete management exit points


MIMIX can delete journal receivers when the send process has completed processing
the journal receiver and other configurable conditions are met. When you create a
journal definition you specify whether unsaved journal receivers can be deleted, the
number of receivers that must be retained, and how many days to retain the
receivers.
The following user exit points are available for customizing delete management
processing:
• Receiver Delete Management Pre-Check User Exit Point. This exit point is
located before MIMIX determines whether to delete a journal receiver. When
called at this exit point, actions specified in a user exit program can affect
conditions that MIMIX processing checks before the pre-delete exit point. For
example, an exit program that saves the journal receiver may make the journal
receiver eligible for deletion by MIMIX processing. The return code from the exit
program can prevent MIMIX from deleting the journal receiver and any other
journal receiver in the chain.
• Receiver Delete Management Pre-Delete User Exit Point. This exit point is
located immediately before the point in processing where MIMIX deletes a journal
receiver. MIMIX processing determined that the journal receiver is eligible for
deletion. The return code from the exit program can prevent MIMIX from deleting
the journal receiver, which is useful when the receiver is being used by another
application.
• Receiver Delete Management Post-Delete User Exit Point. This exit point is
immediately after the point in processing where MIMIX deletes a journal receiver.
The return code from the exit program can prevent MIMIX from deleting any other
(newer) journal receivers attached to the journal.
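As an illustration, a minimal pre-delete (D1) exit program could prevent MIMIX from deleting receivers for journals in a particular library by setting its return code to '0'. The sketch below follows the parameter interface documented under “Requirements for journal receiver management exit programs”; the library name HOLDLIB is illustrative only, not a shipped name.

```cl
PGM        PARM(&RETURN &FUNCTION &JRNDEF &SYSTEM +
             &RESERVED1 &JRNNAME &JRNLIB &RCVNAME +
             &RCVLIB &SEQOPT &THRESHOLD &RESERVED2 +
             &RESERVED3)
DCL        VAR(&RETURN)    TYPE(*CHAR) LEN(1)
DCL        VAR(&FUNCTION)  TYPE(*CHAR) LEN(2)
DCL        VAR(&JRNDEF)    TYPE(*CHAR) LEN(10)
DCL        VAR(&SYSTEM)    TYPE(*CHAR) LEN(8)
DCL        VAR(&RESERVED1) TYPE(*CHAR) LEN(10)
DCL        VAR(&JRNNAME)   TYPE(*CHAR) LEN(10)
DCL        VAR(&JRNLIB)    TYPE(*CHAR) LEN(10)
DCL        VAR(&RCVNAME)   TYPE(*CHAR) LEN(10)
DCL        VAR(&RCVLIB)    TYPE(*CHAR) LEN(10)
DCL        VAR(&SEQOPT)    TYPE(*CHAR) LEN(6)
DCL        VAR(&THRESHOLD) TYPE(*DEC)  LEN(15 5)
DCL        VAR(&RESERVED2) TYPE(*CHAR) LEN(1)
DCL        VAR(&RESERVED3) TYPE(*CHAR) LEN(1)

/* Default: allow MIMIX journal management processing to continue. */
CHGVAR     &RETURN '1'

/* At the pre-delete (D1) exit point, keep receivers for journals  */
/* in HOLDLIB by telling MIMIX not to delete this receiver.        */
IF         COND(&FUNCTION *EQ 'D1' *AND &JRNLIB *EQ 'HOLDLIB') +
             THEN(CHGVAR &RETURN '0')
ENDPGM
```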

Requirements for journal receiver management exit programs


This exit program allows you to include specialized processing in your MIMIX
environment at points that handle journal receiver management. The exit program
runs with the authority of the user profile that owns the exit program. If your exit
program fails and signals an exception to MIMIX, MIMIX processing continues as if
the exit program was not specified.

Attention: It is possible to cause long delays in MIMIX processing that are
undesirable when you use this exit program. When the exit program is called,
MIMIX passes control to the exit program. MIMIX will not continue change
management or delete management processing until the exit program returns.
Consider placing long-running processes that will not affect journal
management in a batch job that is called by the exit program.
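For example, rather than saving a receiver inline at a change or delete management exit point, an exit program could submit the save to batch so control returns to MIMIX promptly. This is a sketch only; the job name SAVJRNRCV and tape device TAP01 are illustrative, and &RCVNAME/&RCVLIB are the receiver parameters passed to the exit program as documented below.

```cl
/* Hand the long-running save off to batch so MIMIX journal     */
/* management is not held up while the save runs.               */
SBMJOB     CMD(SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB) +
             DEV(TAP01) OBJTYPE(*JRNRCV)) +
             JOB(SAVJRNRCV)
```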

Return Code
OUTPUT; CHAR (1)
This value indicates how to continue processing the journal receiver when the exit
program returns control to the MIMIX process. This parameter must be set. When the
exit program is called from Function C2, the value of the return code is ignored.
Possible values are:

0 Do not continue with MIMIX journal management processing for this journal
receiver.
1 Continue with MIMIX journal management processing.

Function
INPUT; CHAR (2)
The exit point from which this exit program is called. Possible values are:

C1 Pre-change exit point for receiver change management.


C2 Post-change exit point for receiver change management.
D0 Pre-check exit point for receiver delete management.
D1 Pre-delete exit point for receiver delete management.
D2 Post-delete exit point for receiver delete management.

Note: Restrictions for exit programs called from the C1 and C2 exit points are
described within topic “Change management exit points” on page 628.
Journal Definition
INPUT; CHAR (10)
The name that identifies the journal definition.
System
INPUT; CHAR (8)
The name of the system defined to MIMIX on which the journal is defined.
Reserved1
INPUT; CHAR (10)
This field is reserved and contains blank characters.
Journal Name
INPUT; CHAR (10)
The name of the journal that MIMIX is processing.


Journal Library
INPUT; CHAR (10)
The name of the library in which the journal is located.
Receiver Name
INPUT; CHAR (10)
The name of the journal receiver associated with the specified journal. This is the
journal receiver on which journal management functions will operate. For receiver
change management functions, this always refers to the currently attached journal
receiver. For receiver delete management functions, this always refers to the same
journal receiver.
Receiver Library
INPUT; CHAR (10)
The library in which the journal receiver is located.
Sequence Option
INPUT; CHAR (6)
The value of the Sequence option (SEQOPT) parameter on the CHGJRN command
that MIMIX processing would have used to change the journal receiver. It is
recommended that you specify this parameter to prevent synchronization problems if you
change the journal receiver. This parameter is only used when the exit program is
called at the C1 (pre-change) exit point. Possible values are:

*CONT The journal sequence number of the next journal entry created is 1 greater than
the sequence number of the last journal entry in the currently attached journal
receiver.
*RESET The journal sequence number of the first journal entry in the newly attached
journal receiver is reset to 1. The exit program should either reset the sequence
number or set the return code to 1 to allow MIMIX to change the journal receiver
and reset the sequence number.

Threshold Value
INPUT; DECIMAL(15, 5)
The value to use for the THRESHOLD parameter on the CRTJRNRCV command.
This parameter is only used when the exit program is called at the C1 (pre-change)
exit point. Possible values are:

0 Do not change the threshold value. The exit program must not change the
threshold size for the journal receiver.
value The exit program must create a journal receiver with this threshold value, specified
in kilobytes. The exit program must also change the journal to use that receiver, or
send a return code value of 1 so that MIMIX processing can change the journal
receiver.

Reserved2
INPUT; CHAR (1)
This field is reserved and contains blank characters.

Reserved3
INPUT; CHAR (1)
This field is reserved and contains blank characters.

Journal receiver management exit program example


The following example shows how an exit program can customize changing and
deleting journal receivers. This exit program only processes journal receivers when it
is called at the pre-change exit point (C1), the post-change exit point (C2), or the
pre-check exit point (D0).
When called at the pre-change exit point, the sample exit program handles changing
any journal receiver in library MYLIB. For any other journal library, MIMIX handles
change management processing.
When called at the post-change exit point, the exit program saves the recently
detached journal receiver if the journal is in library ABCLIB. (The recently detached
journal receiver was the attached receiver at the pre-change exit point.)
When called at the pre-check exit point, if the journal library is TEAMLIB, the exit
program saves the journal receiver to tape and allows MIMIX receiver delete
management to continue processing.

Table 115. Sample journal receiver management exit program

/*--------------------------------------------------------------*/
/* Program....: DMJREXIT */
/* Description: Example user exit program using CL */
/*--------------------------------------------------------------*/

PGM PARM(&RETURN &FUNCTION &JRNDEF &SYSTEM +


&RESERVED1 &JRNNAME &JRNLIB &RCVNAME +
&RCVLIB &SEQOPT &THRESHOLD &RESERVED2 +
&RESERVED3)

DCL VAR(&RETURN) TYPE(*CHAR) LEN(1)


DCL VAR(&FUNCTION) TYPE(*CHAR) LEN(2)
DCL VAR(&JRNDEF) TYPE(*CHAR) LEN(10)
DCL VAR(&SYSTEM) TYPE(*CHAR) LEN(8)
DCL VAR(&RESERVED1) TYPE(*CHAR) LEN(10)
DCL VAR(&JRNNAME) TYPE(*CHAR) LEN(10)
DCL VAR(&JRNLIB) TYPE(*CHAR) LEN(10)
DCL VAR(&RCVNAME) TYPE(*CHAR) LEN(10)
DCL VAR(&RCVLIB) TYPE(*CHAR) LEN(10)
DCL VAR(&SEQOPT) TYPE(*CHAR) LEN(6)
DCL VAR(&THRESHOLD) TYPE(*DEC) LEN(15 5)
DCL VAR(&RESERVED2) TYPE(*CHAR) LEN(1)
DCL VAR(&RESERVED3) TYPE(*CHAR) LEN(1)


/*--------------------------------------------------------------*/
/* Constants and misc. variables */
/*--------------------------------------------------------------*/
DCL VAR(&STOP) TYPE(*CHAR) LEN(1) VALUE('0')
DCL VAR(&CONTINUE) TYPE(*CHAR) LEN(1) VALUE('1')
DCL VAR(&PRECHG) TYPE(*CHAR) LEN(2) VALUE('C1')
DCL VAR(&POSTCHG) TYPE(*CHAR) LEN(2) VALUE('C2')
DCL VAR(&PRECHK) TYPE(*CHAR) LEN(2) VALUE('D0')
DCL VAR(&PREDLT) TYPE(*CHAR) LEN(2) VALUE('D1')
DCL VAR(&POSTDLT) TYPE(*CHAR) LEN(2) VALUE('D2')
DCL VAR(&RTNJRNE) TYPE(*CHAR) LEN(165)
DCL VAR(&PRVRCV) TYPE(*CHAR) LEN(10)
DCL VAR(&PRVRLIB) TYPE(*CHAR) LEN(10)

/*--------------------------------------------------------------*/
/* MAIN */
/*--------------------------------------------------------------*/
CHGVAR &RETURN &CONTINUE /* Continue processing receiver*/
/*--------------------------------------------------------------*/

/* Handle processing for the pre-change exit point. */


/*--------------------------------------------------------------*/
IF (&FUNCTION *EQ &PRECHG) THEN(DO)
/*--------------------------------------------------------------*/
/* If the journal library is my library(MYLIB), exit program */
/* will do the changing of the receivers. */

/*--------------------------------------------------------------*/
IF (&JRNLIB *EQ 'MYLIB') THEN(DO)
IF (&THRESHOLD *GT 0) THEN(DO)
CRTJRNRCV JRNRCV(&RCVLIB/NEWRCV0000) +
THRESHOLD(&THRESHOLD)
CHGJRN JRN(&JRNLIB/&JRNNAME) +
JRNRCV(&RCVLIB/NEWRCV0000) SEQOPT(&SEQOPT)
ENDDO /* There has been a threshold change */
ELSE (CHGJRN JRN(&JRNLIB/&JRNNAME) JRNRCV(*GEN) +
SEQOPT(&SEQOPT)) /* No threshold change */
CHGVAR &RETURN &STOP /* Stop processing entry */
ENDDO /* &JRNLIB is MYLIB */
ENDDO /* &FUNCTION *EQ &PRECHG */

/*--------------------------------------------------------------*/
/* At the post-change user exit point if the journal library is */
/* ABCLIB, save the just detached journal receiver. */
/*--------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &POSTCHG) THEN(DO)
IF COND(&JRNLIB *EQ 'ABCLIB') THEN(DO)
RTVJRNE JRN(&JRNLIB/&JRNNAME) +
RCVRNG(&RCVLIB/&RCVNAME) FROMENTLRG(*FIRST) +
RTNJRNE(&RTNJRNE)


/*----------------------------------------------------------*/
/* Retrieve the journal entry, extract the previous receiver*/
/* name and library to do the save with. */
/*----------------------------------------------------------*/
CHGVAR &PRVRCV (%SUBSTRING(&RTNJRNE 126 10))
CHGVAR &PRVRLIB (%SUBSTRING(&RTNJRNE 136 10))
SAVOBJ OBJ(&PRVRCV) LIB(&PRVRLIB) DEV(TAP02) +
OBJTYPE(*JRNRCV) /* Save detached receiver */
ENDDO /* &JRNLIB is ABCLIB */
ENDDO /* &FUNCTION is &POSTCHG */

/*--------------------------------------------------------------*/
/* Handle processing for the pre-check exit point. */
/*--------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &PRECHK) THEN(DO)
IF (&JRNLIB *EQ 'TEAMLIB') THEN( +
SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB) DEV(TAP01) +
OBJTYPE(*JRNRCV))
ENDDO /* &FUNCTION is &PRECHK */
ENDPGM

APPENDIX A Supported object types for system journal replication

This list identifies IBM i object types and indicates whether MIMIX can replicate these
through the system journal.
Note: Not all object types exist in all releases of IBM i.

Object Type Description Replicated


*ALRTBL Alert table Yes
*AUTL Authorization list Yes
*BLKSF Block special file No
*BNDDIR Binding directory Yes
*CFGL Configuration list No6
*CHTFMT Chart format No9
*CLD C locale description Yes
*CLS Class Yes
*CMD Command Yes
*CNNL Connection list Yes
*COSD Class-of-service description Yes
*CRG Cluster resource group No9
*CRQD Change request description Yes
*CSI Communications side information Yes
*CTLD Controller description Yes1
*DDIR Distributed file directory No2
*DEVD Device description Yes1,12
*DEVNWSH Device network server host adapter Yes
*DIR Directory Yes2
*DOC Document Yes
*DSTMF Distributed stream file No2
*DTAARA Data area Yes
*DTADCT Data dictionary No14
*DTAQ Data queue Yes
*EDTD Edit description Yes
*EXITRG Exit registration Yes
*FCT Forms control table Yes
*FILE File Yes3
*FLR Folder Yes
*FNTRSC Font resource Yes
*FNTTBL Font mapping table No9
*FORMDF Form definition Yes
*FTR Filter Yes
*GSS Graphics symbol set Yes
*IGCDCT Double-byte character set conversion dictionary No9
*IGCSRT Double-byte character set sort table No9
*IGCTBL Double-byte character set font table No9
*IPXD Internetwork packet exchange description Yes
*JOBD Job description Yes

*JOBQ Job queue Yes4
*JOBSCD Job schedule Yes
*JRN Journal No7
*JRNRCV Journal receiver No7
*LIB Library Yes4
*LIND Line description Yes1
*LOCALE Locale space Yes
*M36 AS/400 Advanced 36 machine No8
*M36CFG AS/400 Advanced 36 machine configuration No8
*MEDDFN Media definition Yes
*MENU Menu Yes
*MGTCOL Management collection Yes
*MODD Mode description Yes
*MODULE Module Yes
*MSGF Message file Yes
*MSGQ Message queue Yes4
*NODGRP Node group No9
*NODL Node list Yes
*NTBD NetBIOS description Yes
*NWID Network interface description Yes1
*NWSD Network server description Yes
*OOPOOL Persistent pool (for OO objects) No
*OUTQ Output queue Yes4, 5
*OVL Overlay Yes
*PAGDFN Page definition Yes
*PAGSEG Page segment Yes
*PDFMAP PDF Map Yes
*PDG Print descriptor group Yes
*PGM Program Yes11
*PNLGRP Panel group Yes
*PRDAVL Product availability No6
*PRDDFN Product definition No6
*PRDLOD Product load No6
*PSFCFG Print Services Facility (PSF) configuration Yes
*QMFORM Query management form Yes
*QMQRY Query management query Yes
*QRYDFN Query definition Yes
*RCT Reference code translate table No9
*S36 System/36 machine description No9
*SBSD Subsystem description Yes
*SCHIDX Search index Yes
*SOCKET Local socket No
*SOMOBJ System Object Model (SOM) object No
*SPADCT Spelling aid dictionary Yes
*SPLF Spool file Yes
*SQLPKG Structured query language package Yes
*SQLUDT User-defined SQL type Yes
*SRVPGM Service program Yes
*SSND Session description Yes
*STMF Bytestream file Yes2
*SVRSTG Server storage space No8

*SYMLNK Symbolic link Yes2
*TBL Table Yes
*USRIDX User index Yes
*USRPRF User profile Yes13
*USRQ User queue Yes4
*USRSPC User space Yes10
*VLDL Validation list Yes
*WSCST Workstation customizing object Yes
Notes:
1. Replicating configuration objects to a previous version of IBM i may cause unpredictable
results.
2. Objects in QDLS, QSYS.LIB, QFileSvr.400, QLANSrv, QOPT, QNetWare, QNTC, QSR,
and QFPNWSSTG file systems are not currently supported via Data Group IFS Entries.
Objects in QSYS.LIB and QDLS are supported via Data Group Object Entries and Data
Group DLO Entries. Excludes stream files associated with a server storage space.
3. File attribute types include: DDMF, DSPF, DSPF36, DSPF38, ICFF, LF, LF38, MXDF38,
PF-DTA, PF-SRC, PF38-DTA, PF38-SRC, PRTF, PRTF38, and SAVF.
4. Content is not replicated.
5. Spooled files are replicated separately from the output queue.
6. These objects are system specific. Duplicating them could cause unpredictable results on
the target system.
7. Duplicating these objects can potentially cause problems on the target system.
8. These objects are not duplicated due to size and IBM recommendation.
9. These object types can be supported by MIMIX for replication through the system journal,
but are not currently included. Contact CustomerCare if you need support for these object
types.
10. Changes made through external interfaces such as APIs and commands are replicated. Direct update of the content through a pointer is not supported.
11. To replicate *PGM objects to an earlier release of IBM i, you must be able to save them to that earlier release of IBM i.
12. Device description attributes include: APPC, ASC, ASP, BSC, CRP, DKT, DSPLCL, DSPRMT, DSPVRT, FNC, HOST, INTR, MLB, NET, OPT, PRTLAN, PRTLCL, PRTRMT, PRTVRT, RTL, SNPTUP, SNPTDN, SNUF, and TAP.
13. The MIMIX-supplied user profiles MIMIXOWN and LAKEVIEW, as well as IBM-supplied user profiles, should not be replicated.
14. Files linked to a data dictionary, such as files starting with QIDCT, are not supported.

APPENDIX B MIMIX product-level security

License Manager provides the capability to enable additional security to protect your
MIMIX environment and limit access to the product. These functions provide an
additional level of security beyond that available with IBM i.
When enabled, product-level security enforces the additional Vision-provided product
authority and command authority functions:
• Product authority allows an administrator to set or change the product authority
level needed for a user profile or for public access to a specific MIMIX product.
These authority levels are in addition to the standard IBM i security levels.
• Command authority allows an administrator to change the authority level of
specific MIMIX commands. When product-level security is enabled, you can use
the command authority function to raise or lower the authority level for a command
or to reset it to the shipped authority values.
Any authorization levels that you set for specific user profiles to control access to a
product or command are not enabled or enforced unless you take explicit action to set
product-level security to "On" for each product. It is recommended that you take
advantage of this additional security.
This appendix lists the authority level of MIMIX commands when product-level
security is turned on. For more information about authority levels for License Manager
commands, setting product-level security, and using the product authority and
command authority functions, see the Using License Manager book.

Authority levels for MIMIX commands



Table 116 shows the commands and menu interfaces within MIMIX products that can
be controlled with security functions provided by Vision Solutions. The left side of the
table indicates the products in which the commands are available; to use the
command from within a product, you must first have a valid license key that includes
the product. The right side of the table shows the minimum authority level needed for
the command when you use the provided product authority or command authority.

Before using this information, you should note that:


• Product-level security must be enabled to enforce your choices for product
authority and command authority.
• The product authority function does not apply to the security officer user profile
(QSECOFR). As long as valid license keys exist, the QSECOFR user profile can
perform all functions. This allows the security officer to access a product when all
other user profiles are excluded from access.
• Commands that are not listed are not protected by product authority and cannot
be modified with command authority.
• All users with *ADM authority to a product in a library have access to the grant and
revoke authority commands (GRTPRDAUT and RVKPRDAUT) for that instance of
the product. These users have the ability to grant and revoke authority to that
product even though they do not have *MGT authority to License Manager.
• Vision Solutions Portal uses the shipped authority level to control actions available
to users when product-level security is enabled.

Be aware of the security considerations for commands and interfaces that are used
by more than one product in the same library. When multiple products share a library,
set the command authority in each product to the same product-security level. The
same is true of product-level authority to commands for individual user profiles.

Table 116. Commands and menu interfaces available by product, showing their shipped minimum authority
level settings when the provided security functions are used.
MIMIX DR, MIMIX Enterprise, or MIMIX MIMIX for Command and Minimum Authority Level
MIMIX Professional for PowerHA Menu Interfaces
Replication MIMIX MIMIX SAP/R3 or *ADM *MGT *OPR *DSP
U=DB only Monitor Promoterb MIMIX
O=Obj only Globala
X X ABOUT X
O ADDDGDLOE X
U ADDDGFE X
U ADDDGFEALS X
O ADDDGIFSE X
O X ADDDGOBJE X
X X ADDDTARGE c X
X ADDEXITPGM X

X ADDMMXDMNE d X
X X ADDMONINF X
X X X X ADDMSGLOGE X
X X ADDNFYE X
X X ADDNODE X
U ADDRJLNK X
X X ADDSTEP X
X X ADDSTEPMSG X
X X ADDSTEPPGM X
X ADDSWTDEVE X
X X BLDAGENV X
X X BLDCLUOBJ X
X BLDJRNENV X
X X CHGAGDFN X
X X CHGCLUOBJ X
X CHGDG X
X X CHGDGDFN X
O CHGDGDLOE X
U CHGDGFE X
U CHGDGFEALS X
O CHGDGIFSE X
O X CHGDGOBJE X
X CHGDGRCV X
X CHGDTARGE X
X CHGJRNDFN X
X X CHGMMXCLU X
X X CHGMONINF X
X X CHGMONOBJ X
X X CHGMONSTS X
X X CHGNODE X
X X CHGNODSTS X
U X CHGPRMGRP X
X X CHGPROC X
X X CHGPROCSTS X
U CHGRJLNK X
X X CHGSTEP X
X X CHGSTEPMSG X
X X CHGSTEPPGM X
X CHGSWTDEVE X
X CHGSWTDFN X
X X CHGSWTFWK X
X X X X CHGSYSDFN e X
X X X X CHGTFRDFN X
X CHKDGFE X
X X CHKR3PRF X
X X CHKSWTFWK X

O CLRCNRCCH X
X CLRDGRCYP X
X X CLOMMXLST X
X CMPDLOA X
X CMPFILA X
X CMPFILDTA X
O CMPIFSA X
O X CMPOBJA X
X CMPRCDCNT X
O CNLDGACTE X
X X CNLPROC X
U X CPYACTF X
X X CPYCFGDTA X
X X CPYDGDFN X
O CPYDGDLOE X
U CPYDGFE X
O CPYDGIFSE X
O X CPYDGOBJE X
X CPYJRNDFN X
X X CPYMONOBJ X
X X CPYPROC X
X X X X CPYSYSDFN X
X X X X CPYTFRDFN X
X X CRTAGDFN X
U CRTCRCLS X
X X CRTDGDFN X
U CRTDGTSP X
X CRTJRNDFN X
X X CRTMMXCLU X
X X CRTMMXDFN X
X X CRTMONOBJ X
X X CRTPROC X
X CRTSWTDFN X
X X CRTSWTFWK X
X X X X CRTSYSDFN e X
X X X X CRTTFRDFN X
O CVTDG X
X CVTDGIFSE X
X X DLTAGDFN X
U DLTCRCLS X
X X DLTDGDFN X
U DLTDGTSP X
X DLTJRNDFN X
X DLTJRNENV X
X X DLTMMXCLU X
X X DLTMONOBJ X

X X DLTPROC X
X DLTSWTDFN X
X X DLTSWTFWK X
X X X X DLTSYSDFN X
X X X X DLTTFRDFN X
U DMLOGOUT X
X DPYDGCFG X
X X DSPAGDFN X
O DSPPATH X
X DSPCPYDTL X
X X DSPDGDFN X
O DSPDGDLOE X
U DSPDGFE X
U DSPDGFEALS X
O DSPDGIFSE X
X DSPDGIFSTE X
O X DSPDGOBJE X
X DSPDGOBJTE X
X X DSPDGSTS X
X DSPDTARGE X
X DSPJRNDFN X
X DSPJRNSTC X
X X X X DSPMMXMSGQ X
X X DSPMONINF X
X X DSPMONOBJ X
X X DSPMONSTS X
X X DSPNODE X
U DSPRJLNK X
X DSPSWTDEVE X
X DSPSWTDFN X
X DSPSWTSTS X
X X X X DSPSYSDFN X
X X X X DSPTFRDFN X
X X ENDAG X
X X ENDCOLSRV X
X X ENDDG X
U ENDJRNFE X
X ENDJRNIFSE X
X ENDJRNOBJE X
X X X X ENDMMXf X
X X ENDMMXMGR X
X X ENDMON X
X X ENDMSTMON X
U ENDRJLNK X
X X X ENDSVR X
X ENDSWT X

X ENDSWTSCN X
X X EXPMONOBJ X
X X X X X EXPPRDINF X
X X X X X GRTPRDAUT X
U HLDDGLOG X
X X HLDMON X
X X IMPMONOBJ X
X X INZR3SWT X
X X LODAG X
O LODDGDLOE X
U LODDGFE X
U LODDGIFSTE X
O X LODDGOBJE X
U LODDGOBJTE X
X X LODDTARGE X
X LODSWTDEVE X
X X X X MIMIX X
X MIMIXPRM X
U MMXSNDJRNE X
X X MOVCLUMSG X
X X OPNMMXLST X
X X OVRSTEP X
X RGZACTF X
U RLSDGLOG X
X X RLSMON X
O RMVDGACTE X
O RMVDGDLOE X
U RMVDGFE X
U RMVDGFEALS X
O RMVDGIFSE X
X RMVDGIFSTE X
O X RMVDGOBJE X
X RMVDGOBJTE X
X X RMVDTARGE X
X RMVMMXDMNE d X
X X RMVMMXNOD X
X X X X RMVMSGLOGE X
X X RMVNODE X
U RMVRJCNN X
U RMVRJLNK X
X X RMVSTEP X
X X RMVSTEPMSG X
X X RMVSTEPPGM X
X RMVSWTDEVE X
X X RNMDGDFN X
X RNMJRNDFN X

X X RNMMONOBJ X
X X RNMPROC X
X X RNMSTEPPGM X
X X X X RNMSYSDFN X
X X X X RNMTFRDFN X
X RTVAPYSTS X
X X RTVDGDFN X
X X RTVDGDFN2 X
O RTVDGDLOE X
U RTVDGEXIT X
U RTVDGFE X
X RTVDGIFSTE X
O X RTVDGOBJE X
X X RTVDGSTS X
X RTVJRNDFN X
X RTVJRNSTS X
X X RTVMMXLSTE X
X X RTVPROC X
X RTVPRMGRP X
U RTVRJLNK X
X X RTVSPSTS X
X X RTVSTEP X
X X RTVSTEPMSG X
X X RTVSTEPPGM X
X X RTVSWTFWK X
X X X X RTVSYSDFN X
X X RTVSYSSTS X
X X X X RTVTFRDFN X
X RTYAPMNT X
O RTYDGACTE X
X X RSMPROC X
X X X X RUNCMD X
X X X X RUNCMDS X
X X RUNMON X
X X RUNPROC X
X X RUNRULE X
X X RUNRULEGRP X
X X RUNSWTFWK X
X X X X X RVKPRDAUT X
O SETDGAUD X
U SETDGEXIT X
U SETDGFE X
X SETDGIFSTE X
X SETDGOBJTE X
X SETDGRCYP X
X SETEXTPCY X

X SETIDCOLA X
X X X SETLCLSYS X
X X SETODASTS X
X X SETMMXPCY X
X X SETMMXSCD X
X SETSWTSRC X
X X SNDCLUOBJ X
X X STRAG g X
X X STRCOLSRV X
X X STRCVTAG X
X X STRDG X
U STRJRNFE X
X STRJRNIFSE X
X STRJRNOBJE X
X X X X STRMMXf X
X X X X STRMMXMGR X
X X STRMON X
X X STRMSTMON X
U STRRJLNK X
X X X STRSVR X
X STRSWT X
X STRSWTSCN X
X X SWTAG X
X SWTDG X
X X SWTR3PRF X
X SYNCDG X
O SYNCDGACTE X
U SYNCDGFE X
O SYNCDLO X
O SYNCIFS X
O X SYNCOBJ X
X X TSTPWRCHG X
X X X X VFYCMNLNK X
U VFYDGFE X
U VFYJRNFE X
X VFYJRNIFSE X
X VFYJRNOBJE X
U VFYKEYATR X
X X WRKAG X
X X WRKAUD X
X X WRKAUDHST X
X X WRKAUDOBJ X
X X WRKAUDOBJH X
U X WRKCPYSTS X
U WRKCRCLS X
X X WRKDG X

O WRKDGACT X
O WRKDGACTE X
X X WRKDGDFN X
O WRKDGDLOE X
U WRKDGFE X
U WRKDGFEALS X
U WRKDGFEHLD X
O WRKDGIFSE X
X WRKDGIFSTE X
O X WRKDGOBJE X
X WRKDGOBJTE X
U WRKDGTSP X
X X WRKDTARGE X
X WRKIFSREF X
X WRKJRNDFN X
X X WRKMON X
X X WRKMONINF X
X X X X WRKMSGLOG X
X X WRKNFY Xh X
X X WRKNODE X
X X WRKPROC X
X X WRKPROCSTS X
X WRKRCY X
X WRKRJLNK X
X X WRKSPSTS X
X X WRKSTEP X
X X WRKSTEPMSG X
X X WRKSTEPPGM X
X X WRKSTEPSTS X
X WRKSWT X
X WRKSWTDEVE X
X X X X WRKSYS X
X X X X WRKSYSDFN X
X X X X WRKTFRDFN X
a. Includes licenses for MIMIX Global or MIMIX for PowerHA unless otherwise noted.
b. MIMIX Promoter is not included with MIMIX Professional or MIMIX DR.
c. Supported values for the resource group type (TYPE) parameter vary depending on which license keys are present.
When MIMIX DR is licensed, the only supported type is *DTA. When MIMIX Enterprise or MIMIX Professional is
licensed, the supported types are *DTA and *PEER. When MIMIX for PowerHA is licensed, the supported types are
*DEV, *GMIR, *LUN, *PEER, *PPRC, and *XSM.
d. Supported for MIMIX for PowerHA licenses only.
e. A license for MIMIX Global (C1) requires either a MIMIX Enterprise or a MIMIX Professional license.
f. This command is not protected by product level security. Authority to use this command is controlled by the product level
security assigned to any commands used by this command.


g. Supported values for the resource group (TYPE) parameter vary depending on which license keys are present.
When MIMIX DR is licensed, the only supported type is *DTA. When MIMIX Enterprise or MIMIX Professional is licensed,
the supported types are *APP, *DTA, and *PEER. When MIMIX for PowerHA is licensed, the supported types are *DEV,
*GMIR, *LUN, *PEER, *PPRC, and *XSM.
h. The following options available from the Work with Notifications (WRKNFY) display require *OPR or higher authority:
4=Delete, 46=Acknowledge, 47=Mark as New. To specify authority for these options, see “Substitution values for
command authority” on page 647.

Substitution values for command authority


Table 117 is only needed if you change authority to individual commands using the
Change Command Authority (CHGCMDAUT) command. When you change the
command authority for any of these commands, you need to specify the value shown
in the Substitution Entry column for the Command (CMD) parameter.

Table 117. Change command authority substitutes

Command Substitution Entry

DLTDGDFN DLTDGDFN2

RTVDGDFN RTVDGDFN2

SETLCLSYS SETLOCSYS

WRKDGDFN WRKDGDFN2

WRKDGDLOE WRKDGDLOE2

WRKDGFE WRKDGFE2

WRKDGOBJE WRKDGOBJE2

WRKJRNDFN WRKJRNDFN3

WRKMSGLOG WRKMSGLOG2

WRKNFY To control authority for option 4=Delete, specify WRKNFYDLT.
To control authority for options 46 and 47, specify WRKNFYACK.

WRKSYSDFN WRKSYSDFN2

WRKTFRDFN WRKTFRDFN2
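
For example, changing the authority level of the Work with Message Log command would specify its substitution entry, not the command name itself, on the CMD parameter. The following is only a sketch; the AUT parameter name and the *OPR value shown are assumptions, so prompt the Change Command Authority (CHGCMDAUT) command or see the Using License Manager book for the actual syntax:

```
CHGCMDAUT CMD(WRKMSGLOG2) AUT(*OPR)  /* Substitution entry for WRKMSGLOG */
```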


APPENDIX C Copying configurations

This section provides information about how you can copy configuration data between
systems.
• “Supported scenarios” on page 648 identifies the scenarios supported in version 8
of MIMIX.
• “Checklist: copy configuration” on page 649 directs you through the correct order
of steps for copying a configuration and completing the configuration.
• “Copying configuration procedure” on page 653 documents how to use the Copy
Configuration Data (CPYCFGDTA) command.

Supported scenarios
The Copy Configuration Data (CPYCFGDTA) command supports copying
configuration data from one library to another library on the same system. After MIMIX
is installed, you can use the CPYCFGDTA command.
The supported scenarios are as follows:

Table 118. Supported scenarios for copying configuration

From To

MIMIX version 8 MIMIX version 8 (see note 1)

MIMIX version 7.1 MIMIX version 8

1. The installation you are copying to must be at the same or a higher level service pack.

Note: The CPYCFGDTA command is not supported in INTRA environments.


Checklist: copy configuration


Use this checklist when you have installed MIMIX in a new library and you want to
copy an existing configuration into the new library.
To configure MIMIX with configuration information copied from one or more existing
product libraries, do the following:
1. Review “Supported scenarios” on page 648.
2. Use the procedure “Copying configuration procedure” on page 653 to copy the
configuration information from one or more existing libraries.
3. Verify that the system definitions created by the CPYCFGDTA command have the
correct message queue, output queues, and job descriptions required. Be sure to
check system definitions for the management system and all of the network
systems.
4. Verify that transfer definitions created have the correct three-part name and that
the values specified for each transfer protocol are correct. For *TCP, verify the
port number. For *SNA, verify that the SNA mode is what is defined for SNA
configuration.
Note: One of the transfer definitions should be named PRIMARY if you intend to
create additional data group definitions or system definitions that will use
the default value PRIMARY for the Primary transfer definition PRITFRDFN
parameter.
5. Verify that the journal definitions created have the information you want for the
journal receiver prefix name, auxiliary storage pool, and journal receiver change
management and delete management. The default journal receiver prefix for the
user journal is generated; for the system journal, the default journal receiver prefix
is AUDRCV. If you want to use a prefix other than these defaults, you will need to
modify the journal definition using topic “Changing a journal definition” on
page 220.
6. If you change the names of any of the system, transfer, or journal definitions
created by the copy configuration command, ensure that you also update that
name in other locations within the configuration.

Table 119. Changing named definitions after copying a configuration

If you change this name Also change the name in this location

System definition, SYSDFN parameter • Transfer definition, TFRDFN parameter


• Data group definition, DGDFN
parameter

Transfer definition, TFRDFN parameter • System definition, PRITFRDFN and


SECTFRDFN parameters
• Data group definition, PRITFRDFN and
SECTFRDFN parameters

Journal definition, JRNDFN parameter Data group definition, JRNDFN1 and


JRNDFN2 parameters

7. Verify the data group definitions created have the correct job descriptions. Verify
that the values of parameters for job descriptions are what you want to use.
MIMIX provides default job descriptions that are tailored for their specific tasks.
Note: You may have multiple data groups created that you no longer need.
Consider whether or not you can combine information from multiple data
groups into one data group. For example, it may be simpler to have both
database files and objects for an application be controlled by one data
group.
8. Verify that the options which control data group file entries are set appropriately.
a. For data group definitions, ensure that the values for file entry options (FEOPT)
are what you want as defaults for the data group.
b. Check the file entry options specified in each data group file entry. Any file
entry options (FEOPT) specified in a data group file entry will override the
default FEOPT values specified in the data group definition. You may need to
modify individual data group file entries.
9. Check the data group entries for each data group. Ensure that all of the files and
objects that you need to replicate are represented by entries for the data group.
Be certain that you have checked the data group entries for your critical files and
objects. Use the procedures in the MIMIX Operations book to verify your
configuration.
10. Check how the apply sessions are mapped for data group file entries. You may
need to adjust the apply sessions.
11. Use Table 120 to configure entries for any additional database files or objects that
you need to add to the data group.

Table 120. How to configure data group entries for the preferred configuration.

Class Do the following: Planning and Requirements Information

Library- 1. Create object entries using “Creating data group object “Identifying library-based
based entries” on page 271. objects for replication” on
objects 2. After creating object entries, load file entries for LF and page 100
PF (source and data) *FILE objects using “Loading file “Identifying logical and physical
entries from a data group’s object entries” on page 276. files for replication” on
Note: If you cannot use MIMIX Dynamic Apply for logical files or page 106
PF data files, you should still create file entries for PF “Identifying data areas and data
source files to ensure that legacy cooperative processing queues for replication” on
can be used. page 113
3. After creating object entries, load object tracking entries
for *DTAARA and *DTAQ objects that are journaled to a
user journal. Use “Loading object tracking entries” on
page 287.


IFS 1. Create IFS entries using “Creating data group IFS “Identifying IFS objects for
objects entries” on page 284. replication” on page 116
2. After creating IFS entries, load IFS tracking entries for
IFS objects that are journaled to a user journal. Use
“Loading IFS tracking entries” on page 286.

DLOs Create DLO entries using “Creating data group DLO “Identifying DLOs for
entries” on page 297. replication” on page 122

12. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
“Resolving auditing problems” on page 679 and “Interpreting results for
configuration data - #DGFE audit” on page 687.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
13. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
“Setting data group auditing values manually” on page 309. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
14. Verify that system-level communications are configured correctly.
a. If you are using SNA as a transfer protocol, verify that the MIMIX mode and
that the communications entries are added to the MIMIXSBS subsystem.
b. If you are using TCP as a transfer protocol, verify that the MIMIX TCP server is
started on each system (on each "side" of the transfer definition). You can use
the WRKACTJOB command for this. Look for a job under the MIMIXSBS
subsystem with a function of LV-SERVER.

c. Use the Verify Communications Link (VFYCMNLNK) command to ensure that
a MIMIX installation on one system can communicate with a MIMIX installation
on another system. Refer to topic “Verifying the communications link for a data
group” on page 197.
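
For step 14b, the server check can be run from a command line; the subsystem name MIMIXSBS comes from the text above:

```
WRKACTJOB SBS(MIMIXSBS)  /* Look for a job with function LV-SERVER on each system */
```

The VFYCMNLNK command in step 14c is typically prompted (F4) because its parameters depend on your transfer definitions; see the referenced topic for the exact invocation.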
15. Ensure that there are no users on the system that will be the source for replication
for the rest of this procedure. Do not allow users onto the source system until you
have successfully completed the last step of this procedure.
16. Start journaling using the following procedures as needed for your configuration.
Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands
to start journaling.
• For user journal replication, use “Journaling for physical files” on page 347 to
start journaling on both source and target systems
• For IFS objects, configured for user journal replication, use “Journaling for IFS
objects” on page 350.
• For data areas or data queues configured for user journal replication, use
“Journaling for data areas and data queues” on page 354.
17. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 508
includes instructions for how to establish a synchronization point and identifies the
options available for synchronizing.
18. Start the system managers using topic “Starting the system and journal managers”
on page 306.
19. Start the data group using “Starting data groups for the first time” on page 315.


Copying configuration procedure


This procedure addresses only some of the tasks needed to complete your
configuration. Use this procedure only when directed from the “Checklist: copy
configuration” on page 649.
Note: By default, the CPYCFGDTA command replaces all MIMIX configuration data
in the current product library with the information from the specified library.
Any configuration created in the product library will be replaced with data from
the specified library. This may not be desirable.
To copy existing configuration data to the new MIMIX product, do the following:
1. The products in the installation library that will receive the copied configuration
data must be shut down for the duration of this procedure. Use topic “Choices
when ending replication” in the MIMIX Operations book to end activity for the
appropriate products.
2. Sign on to the system with the security officer (QSECOFR) user profile or with a
user profile that has security officer class and all special authorities.
3. Access the MIMIX Basic Main Menu in the product library that will receive the
copied configuration data. From the command line, type the command
CPYCFGDTA and press F4 (Prompt).
4. At the Copy from library prompt, specify the name of the library from which you
want to copy data.
5. To start copying configuration data, press Enter.
6. When the copy is complete, return to topic “Checklist: copy configuration” on
page 649 to verify your configuration.
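
Steps 3 through 5 amount to a single invocation of the command. As a sketch, copying configuration data from a hypothetical existing library named MIMIX71 could look like the following; the FROMLIB keyword shown here is an assumption inferred from the "Copy from library" prompt text, so press F4 on the command to confirm the actual parameter name:

```
CPYCFGDTA FROMLIB(MIMIX71)  /* Replaces configuration data in the current product library */
```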


APPENDIX D Configuring Intra communications

The MIMIX set of products supports a unique configuration called Intra. Intra is a
special configuration that allows the MIMIX products to function fully within a single-
system environment. Intra support replicates database and object changes to other
libraries on the same system by using system facilities that allow for communications
to be routed back to the same system. This provides an excellent way to have a test
environment on a single machine that is similar to a multiple-system configuration.
The Intra environment can also be used to perform backups while the system remains
active. You can have multiple Intra configurations in a MIMIX installation.
Intra is supported only in environments licensed for MIMIX Enterprise.
In an Intra configuration, the product is installed into two libraries on the same system
and configured in a special way. An Intra configuration uses these libraries to replicate
data to additional disk storage on the same system. The second library in effect
becomes a “backup” library.
By using an Intra configuration you can reduce or eliminate your downtime for routine
operations such as performing daily and weekly backups. When replicating changes
to another library, you can suspend the application of the replicated changes. This
enables you to concurrently back up the copied library to tape while your application
remains active. When the backup completes, you can resume operations that apply
replicated changes to the "backup" library.
An Intra configuration enables you to have a "live" copy of data or objects that can be
used to offload queries and report generations. You can also use an Intra
configuration as a test environment prior to installing MIMIX on another system or
connecting your applications to another system.
Because both libraries exist on the same system, an Intra configuration does not
provide protection from disaster.
Database replication within an Intra configuration requires that the source and target
files either have different names or reside in different libraries. Similarly, an object
cannot be replicated to an identically named object in the same library, folder, or
directory.

This section includes the following procedures:


• “Manually configuring Intra using TCP” on page 654
• “Manually configuring Intra using SNA” on page 656

Manually configuring Intra using TCP


In an Intra environment, MIMIX communicates between two product libraries on the
same system instead of between a local system and a remote system. The libraries
must have the same name, except that the product library (PRDLIB) for the Intra system
definition (SYSDFN) must be specified manually and must have an 'I' appended to
the end of the library name. For example, a library named ABC would need to be
named ABCI (or ABCII, ABCIII, and so on, for additional Intra configurations) in order
to be valid for an Intra configuration.
Important! We recommend that these steps be performed by MIMIX Services
personnel. Also, the system name for Intra must be ‘INTRAnnn’ where ‘INTRA’ is
required for the first part of the system name and the second part of the name,
nnn, can be up to 3 valid system definition characters.
In this example, the MIMIX library is the management system and the MIMIXI library
is the network system. If you manually configure the communications necessary for
Intra, consider the MIMIX library as the local system and the MIMIXI library as the
remote system. You may already have a management system defined and need to
add an Intra network system. All the configuration should be done in the MIMIX library
on the management system.
Note: If you have multiple network systems, configure your transfer definitions to
have the same name, with different values for System 1 and System 2. For
more information, see “Multiple network system considerations” on
page 172.
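For example, two transfer definitions that share the name PRIMARY but differ in System 2 might look like the following. The network system NET1 and its port number are hypothetical; the parameter set mirrors the CRTTFRDFN example shown later in this procedure:

```
MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE NET1) HOST1(SOURCE)
  HOST2(NET1) PORT1(50410) PORT2(50410) MNGAJE(*YES)
MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE INTRA) HOST1(SOURCE)
  HOST2(INTRA) PORT1(55501) PORT2(55502) MNGAJE(*YES)
```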
To add an entry in the host name table, use the Configure TCP/IP (CFGTCP)
command to access the Configure TCP/IP menu.
Select option 10 (Work with TCP/IP Host Table Entries) from the menu. From the
Work with TCP/IP Host Table display, type a 2 (Change) next to the LOOPBACK
entry and add 'INTRA' to that entry.
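The same change can be made from the command line with the IBM Change TCP/IP Host Table Entry (CHGTCPHTE) command. This sketch assumes the LOOPBACK entry is at address 127.0.0.1 with the names LOOPBACK and LOCALHOST already defined; all existing host names must be respecified along with the new INTRA name:

```
CHGTCPHTE INTNETADR('127.0.0.1')
  HOSTNAME((LOOPBACK) (LOCALHOST) (INTRA))
```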
For this example, the host name of the management system is Source and the host
name for the network or target system is Intra.
1. Create the system definitions for the product libraries used for Intra as follows:
a. For the MIMIX library (local system) enter the following command:
MIMIX/CRTSYSDFN SYSDFN(source) TYPE(*MGT) TEXT(‘management
system’)
Note: You may have already configured this system.
b. For the MIMIXI library (remote system), use the following command:
MIMIX/CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT(‘network
system’) PRDLIB(MIMIXI)
2. Create the transfer definition between the two product libraries with the following
command. Note that the values for PORT1 and PORT2 must be unique.
MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE INTRA) HOST1(SOURCE)
HOST2(INTRA) PORT1(55501) PORT2(55502) MNGAJE(*YES)
3. Start the server for the management system (source) by entering the following
command:
MIMIX/STRSVR HOST(SOURCE) PORT(55501)
4. Start the server for the network system (Intra) by entering the following command:
MIMIXI/STRSVR HOST(INTRA) PORT(55502)
5. Start the system managers from the management system by entering the
following command:
MIMIX/STRMMXMGR SYSDFN(*ALL)
Start the remaining managers normally.
Note: You will still need to configure journal definitions and data group definitions on
the management system.

Manually configuring Intra using SNA


Note: MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) for communications protocols. Vision Solutions will only
assist customers with determining possible workarounds for communication-
related issues that arise when using SNA. If you create transfer definitions for
MIMIX to use these protocols, be certain that your business can accept this
limitation.
In an Intra environment, MIMIX communicates between two product libraries on the
same system instead of between a local system and a remote system. If you manually
configure the communications necessary for Intra, consider the default product library
(MIMIX) to be the local system and the second product library (in this example,
MIMIXI) to be the remote system.
Important! We recommend that these steps be performed by MIMIX Services
personnel. Also, the system name for Intra should be named 'INTRA' as described
in this example.
If you need to manually configure SNA communications for an Intra environment, do
the following:
1. Create the system definitions for the product libraries used for Intra as follows:
a. For the MIMIX library (local system), use the local location name in the
following command:
CRTSYSDFN SYSDFN(local-location-name) TYPE(*MGT)
TEXT(‘Manual creation’)
b. For the MIMIXI library (remote system), use the following command:
CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT(‘Manual creation’)
PRDLIB(MIMIXI)
2. Create the transfer definition between the two product libraries with the following
command:
CRTTFRDFN TFRDFN(PRIMARY INTRA local-location-name)
PROTOCOL(*SNA) LOCNAME1(INTRA1) LOCNAME2(INTRA2)
NETID1(*LOC) TEXT(‘Manual creation’)
3. Create the MIMIX mode description using the following command:
CRTMODD MODD(MIMIX) MAXSSN(100) MAXCNV(100) LCLCTLSSN(12)
TEXT('MIMIX INTRA MODE DESCRIPTION – Manual creation.')
4. Create a controller description for MIMIX Intra using the following command:

CRTCTLAPPC CTLD(MIMIXINTRA) LINKTYPE(*LOCAL) TEXT('MIMIX
INTRA – Manual creation.')
5. Create a local device description for MIMIX using the following command:
CRTDEVAPPC DEVD(MIMIX) RMTLOCNAME(INTRA1) LCLLOCNAME(INTRA2)
CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES)
TEXT('MIMIX INTRA – Manual creation.')
6. Create a remote device description for MIMIX using the following command:
CRTDEVAPPC DEVD(MIMIXI) RMTLOCNAME(INTRA2)
LCLLOCNAME(INTRA1) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO)
SECURELOC(*YES) TEXT('MIMIX REMOTE INTRA SUPPORT.')
7. Add a communication entry to the MIMIXSBS subsystem for the local location
using the following command:
ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA2)
JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
8. Add a communication entry to the MIMIXSBS subsystem for the remote location
using the following command:
ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA1)
JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
9. Vary on the controller, local device, and remote device using the following
commands:
VRYCFG CFGOBJ(MIMIXINTRA) CFGTYPE(*CTL) STATUS(*ON)
VRYCFG CFGOBJ(MIMIX) CFGTYPE(*DEV) STATUS(*ON)
VRYCFG CFGOBJ(MIMIXI) CFGTYPE(*DEV) STATUS(*ON)
10. Start the MIMIX system manager in both product libraries using the following
commands:
MIMIX/STRMMXMGR SYSDFN(*INTRA) MGR(*ALL)
MIMIX/STRMMXMGR SYSDFN(*LOCAL) MGR(*JRN)
Note: You still need to configure journal definitions and data group definitions.

APPENDIX E MIMIX support for independent ASPs

MIMIX has always supported replication of library-based objects and IFS objects to
and from the system auxiliary storage pool (ASP 1) and basic storage pools (ASPs 2-
32). Now, MIMIX also supports replication of library-based objects and IFS objects,
including journaled IFS objects, data areas, and data queues, located in independent
ASPs¹ (33-255).
The system ASP and basic ASPs are collectively known as SYSBAS. Figure 34
shows that MIMIX supports replication to and from SYSBAS and to and from
independent ASPs. Figure 35 shows that MIMIX also supports replication from
SYSBAS to an independent ASP and from an independent ASP to SYSBAS.

Figure 34. MIMIX supports replication to and from an independent ASP as well as standard
replication to and from SYSBAS (the system ASP and basic ASPs).

Figure 35. MIMIX also supports replication between SYSBAS and an independent ASP.

Restrictions: There are several permanent and temporary restrictions that pertain to
replication when an independent ASP is included in the MIMIX configuration. See

“Requirements for replicating from independent ASPs” on page 662 and “Limitations
and restrictions for independent ASP support” on page 662.

1. An independent ASP is an iSeries construct introduced by IBM in V5R1 and extended in
V5R2 of IBM i.

Benefits of independent ASPs


The key characteristic of an independent ASP is its ability to function independently
from the rest of the storage on a server. Independent ASPs can also be made
available and unavailable at the time of your choosing. The benefits of using
independent ASPs in your environment can be significant. You can isolate
infrequently used data that does not always need to be available when the system is
up and running. If you have a lot of data that is unnecessary for day-to-day business
operations, for example, you can isolate it and leave it offline until it is needed. This
allows you to shorten processing time for other tasks, such as IPLs, reclaim storage,
and system start time.
Additional benefits of independent ASPs allow you to do the following:
• Consolidate applications and data from multiple servers into a single IBM System
i, allowing for simpler system management and application maintenance.
• Decrease downtime, enabling data on your system to be made available or
unavailable without an IPL.
• Add storage as necessary, without having to make the system unavailable.
• Avoid the need to recover all data in the event of a system failure, since the data is
isolated.
• Streamline naming conventions, since multiple instances of data with the same
object and library names can coexist on a single System i in separate independent
ASPs.
• Protect data that is unique to a specific environment by isolating data associated
with specific applications from other groups of users.
Using MIMIX provides a robust solution for high availability and disaster recovery for
data stored in independent ASPs.

Auxiliary storage pool concepts at a glance


An independent ASP is actually a part of the larger construct of an auxiliary storage
pool (ASP). Each ASP on your system is a group of disk units that can be used to
organize data for single-level storage to limit storage device failure and recovery time.
The system spreads data across the disk units within an ASP.
Figure 36 shows the types and subtypes of ASPs. The system ASP (ASP 1) is
defined by the system and consists of disk unit 1 and any other configured storage not
assigned to a basic or independent ASP. The system ASP contains the system
objects for the operating system and any user objects not defined to a basic or
independent ASP.
User ASPs are additional ASPs defined by the user. A user ASP can either be a
basic ASP or an independent ASP.

One type of user ASP is the basic ASP. Data that resides in a basic ASP is always
accessible whenever the server is running. Basic ASPs are identified as ASPs 2
through 32. Attributes, such as those for spooled files, authorization, and ownership
of an object, stored in a basic ASP reside in the system ASP. When storage for a
basic ASP is filled, the data overflows into the system ASP.
Collectively, the system ASP and the basic ASPs are called SYSBAS.
Another type of user ASP is the independent ASP. Identified by device name and
numbered 33 through 255, an independent ASP can be made available or
unavailable to the server without restarting the system. Unlike basic ASPs, data in an
independent ASP cannot overflow into the system ASP. Independent ASPs are
configured using iSeries Navigator.

Figure 36. Types of auxiliary storage pools.

Subtypes of independent ASPs consist of primary, secondary, and user-defined file
system (UDFS) independent ASPs¹. Subtypes can be grouped together to function as
a single entity known as an ASP group. An ASP group consists of a primary
independent ASP and zero or more secondary independent ASPs. For example, if
you make one independent ASP unavailable, the others in the ASP group are made
unavailable at the same time.
A primary independent ASP defines a collection of directories and libraries and may
have associated secondary independent ASPs. A primary independent ASP defines a
database for itself and other independent ASPs belonging to its ASP group. The
primary independent ASP name is always the name of the ASP group in which it
resides.
A secondary independent ASP defines a collection of directories and libraries and
must be associated with a primary independent ASP. One common use for a
secondary independent ASP is to store the journal receivers for the objects being
journaled in the primary independent ASP.
Before an independent ASP is made available (varied on), all primary and secondary
independent ASPs in the ASP group undergo a process similar to a server restart.

1. MIMIX does not support UDFS independent ASPs. UDFS independent ASPs contain only
user-defined file systems and cannot be a member of an ASP group unless they are
converted to a primary or secondary independent ASP.

While this processing occurs, the ASP group is in an active state and recovery steps
are performed. The primary independent ASP is synchronized with any secondary
independent ASPs in the ASP group, and journaled objects are synchronized with
their associated journal.
While being varied on, several server jobs are started in the QSYSWRK subsystem to
support the independent ASP. To ensure that their names remain unique on the
server, server jobs that service the independent ASP are given their own job name
when the independent ASP is made available.
Once the independent ASP is made available, it is ready to use. Completion message
CPC2605 (vary on completed for device name) is sent to the history log.

Requirements for replicating from independent ASPs
The following requirements must be met before MIMIX can support your independent
ASP environment:
• License Program 5722-SS1 option 12 (Host Server) must be installed in order for
MIMIX to properly replicate objects in an independent ASP on the source and
target systems.
• Any PTFs for IBM i that are identified as being required need to be installed on
both the source and target systems. Log in to Support Central and check the
Technical Documents page for a list of IBM i PTFs that may be required.
• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must
be installed into *SYSBAS.

Limitations and restrictions for independent ASP support
Limitations: Before using independent ASP support, be aware that independent
ASPs do not protect against disk failure. If the disks in the independent ASP are
damaged and the data is unrecoverable, data is available only up to the last backup
copy. A replication solution such as MIMIX is still required for high-availability and
disaster recovery. In addition, be aware of the following limitations:
• Although you can use the same library name between independent ASPs, an
independent ASP cannot share a library name with a library in the system ASP or
basic ASPs (SYSBAS). SYSBAS is a component of every name space, so the
presence of a library name in SYSBAS precludes its use in any independent ASP.
This will affect how you configure objects for replication with MIMIX, especially for
IFS objects. See “Configuring library-based objects when using independent
ASPs” on page 664.
• Unlike basic ASPs, when an independent ASP fills, no new objects can be created
into the device. Also, updates to existing objects in the independent ASP, such as
adding records to a file, may not be successful. If an independent ASP attached to
the target system fills, your high-availability and disaster recovery solutions are
compromised.
• IBM restricts the object types that can be stored in an independent ASP. For
example, DLOs cannot reside in an independent ASP.
Restrictions in MIMIX support for independent ASPs include the following:
• MIMIX supports the replication of objects in primary and secondary independent
ASPs only. Replication of IFS objects that reside in user-defined file system
(UDFS) independent ASPs is not supported.
• You should not place libraries in independent ASPs within the system portion of a
library list. MIMIX commands automatically call the IBM command SETASPGRP,
which can result in significant changes to the library list for the associated user
job. See “Avoiding unexpected changes to the library list” on page 665.

• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must
be installed into SYSBAS. These libraries cannot exist in an independent ASP.
• Any *MSGQ libraries, *JOBD libraries, and *OUTFILE libraries specified on MIMIX
commands must reside in SYSBAS.
• For successful replication, ASP devices in ASP groups that are configured in data
group definitions must be made available (varied on). Objects in independent
ASPs attached to the source system cannot be journaled if the device is not
available. Objects cannot be applied to an independent ASP on the target system
if the device is not available.
• Planned switchovers of data groups that include an ASP group must take place
while the ASP devices on both the source and target systems are available. If the
ASP device for the data group on either the source or target system is unavailable
at the time the planned switchover is attempted, the switchover will not complete.
• To support an unplanned switch (failover), the independent ASP device on the
backup system (which will become the temporary production system) must be
available in order for the failover to complete successfully.
• In order for MIMIX to access objects located in an independent ASP, do one of the
following on the Synchronize Object (SYNCOBJ) command:
– Specify the data group definition.
– If no data group is specified, you must specify values for the System 1 ASP
group or device and System 2 ASP device number parameters.
Also be aware of the following temporary restrictions:
• MIMIX does not perform validity checking to determine if the ASP group specified
in the data group definition actually exists on the systems. This may cause error
conditions when running commands.
• Any monitors configured for use with MIMIX must specify the ASP group. Monitors
of type *JRN or *MSGQ that watch for events in an independent ASP must specify
the name of the ASP group where the journal or message queue exists. This is
done with the ASPGRP parameter of the CRTMONOBJ command.
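Because replication requires the ASP devices to be available, a typical precaution is to vary on the ASP group before starting data groups. A sketch using the IBM Vary Configuration (VRYCFG) command, with a hypothetical device name WILLOW:

```
/* Make the independent ASP available before replication starts */
VRYCFG CFGOBJ(WILLOW) CFGTYPE(*DEV) STATUS(*ON)
```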

Configuration planning tips for independent ASPs


A job can only reference one independent ASP at a time. Storing applications and
programs in SYSBAS ensures that they are accessible by any job. Data stored in an
independent ASP is not accessible for replication when the independent ASP is
varied off.
For database replication and replication of objects through Advanced Journaling
support, due to the requirement for one user journal per data group, it is not possible
for a single data group to replicate both SYSBAS data and ASP group data.
For object replication of library-based objects through the system journal, you should
configure related objects in SYSBAS and an ASP group to be replicated by the same
data group. Objects in SYSBAS and an ASP group that are not related should be

separated into different data groups. This precaution ensures that the data group will
start and that objects residing in SYSBAS will be replicated when the independent
ASP is not available.
Note: To avoid replicating an object by more than one data group, carefully plan
what generic library names you use when configuring data group object
entries in an environment that includes independent ASPs. Make every
attempt to avoid replicating both SYSBAS data and independent ASP data for
objects within the same data group. See the example in “Configuring library-
based objects when using independent ASPs” on page 664.

Journal and journal receiver considerations for independent ASPs


For database replication and replication of objects through Advanced Journaling
support, data to be replicated and the journal used for its replication must exist in the
same ASP. When you configure replication for independent ASP, consider what data
you store there and the location of the journal and journal receivers needed to
replicate the data.
With independent ASPs, you have the option of placing journal receivers in an
associated secondary independent ASP. When you create an independent ASP, an
ASP group is automatically created that uses the same name you gave the primary
independent ASP.

Configuring IFS objects when using independent ASPs


Replication of IFS objects in an independent ASP is supported through default
replication processes and through MIMIX Advanced Journaling support. However,
there are differences in how to configure for these different environments.
For IFS replication by default object replication processes, you do not need to identify
an ASP group in a data group definition because an IFS object’s path includes the
independent ASP device name.
However, for IFS replication through Advanced Journaling support, you must specify
the ASP group name in the data group definition so that MIMIX can locate the
appropriate user journal.
If you are using Advanced Journaling support and want to limit a data group to only
replicate IFS objects from SYSBAS, specify *NONE for the ASP group parameters in
the data group definition.

Configuring library-based objects when using independent ASPs


Use care when creating generic data group object entries; otherwise you can create
situations where the same object is replicated by multiple data groups. This applies
for replication between independent ASPs as well as replication between an
independent ASP and SYSBAS.
For example, data group APP1 defines replication between ASP groups named
WILLOW on each system. Similarly, data group APP2 defines replication between ASP
groups named OAK on each system. Both data groups have a generic data group
object entry that includes object XYZ from library names beginning with LIB*. If object

LIBASP/XYZ exists in both independent ASPs and matches the generic data group
object entry defined in each data group, both data groups replicate the corresponding
object. This is considered normal behavior for replication between independent ASPs,
as shown in Figure 37.
However, in this example, if SYSBAS contains an object that matches the generic
data group object entry defined for each data group, the same object is replicated by
both data groups. Figure 37 shows that object LIBBAS/XYZ meets the criteria for
replication by both data groups, which is not desirable.

Figure 37. Object XYZ in library LIBBAS is replicated by both data groups APP1 and APP2
because the data groups contain the same generic data group object entry. As a result, this
presents a problem if you need to perform a switch.
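The overlapping selection shown in this example could result from data group object entries such as the following. The Add Data Group Object Entry (ADDDGOBJE) command is shown with only the parameters relevant to the example, and the system names SYS1 and SYS2 are hypothetical:

```
/* Both entries match object XYZ in any library named LIB* */
MIMIX/ADDDGOBJE DGDFN(APP1 SYS1 SYS2) LIB1(LIB*) OBJ1(XYZ)
MIMIX/ADDDGOBJE DGDFN(APP2 SYS1 SYS2) LIB1(LIB*) OBJ1(XYZ)
```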

Avoiding unexpected changes to the library list


It is recommended that the system portion of your library list does not include any
libraries that exist in an ASP group.
Whenever you run a MIMIX command, MIMIX automatically determines whether the
job requires a call to the IBM command Set ASP Group (SETASPGRP). The
SETASPGRP command changes the current job's ASP group environment and
enables MIMIX to access objects that reside in independent ASP libraries. MIMIX
resets the job's ASP group to its initial value as needed before processing is
completed.
The SETASPGRP command may modify the library list of the current job. If the library
list contains libraries for ASP groups other than those used by the ASP group for
which the command was called, the SETASPGRP removes the extra libraries from
the library list. This can affect the system and user portions of the library list as well as
the current library in the library list.
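For reference, the IBM command that MIMIX calls takes the ASP group name directly; running it manually in a job has the same library list effects described here (ASP group WILLOW is the one used in the figures that follow):

```
SETASPGRP ASPGRP(WILLOW)
```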

When a MIMIX command runs the SETASPGRP command during processing, MIMIX
resets the user portion of the library list and the current library in the library list to their
initial values. The system portion of the library list is not restored to its initial value.
Figure 38, Figure 39, and Figure 40 show how the system portion of the library list is
affected on the Display Library List (DSPLIBL) display when the SETASPGRP
command is run.

Figure 38. Before a MIMIX command runs. The library list contains three independent ASP
libraries, including a library in independent ASP WILLOW in the system portion of the library
list.

Display Library List


System: CHICAGO
Type options, press Enter.
5=Display objects in library

Opt Library Type ASP device Text


___ LIBSYS1 SYS WILLOW :
___ LIBSYS2 SYS :
___ LIBSYS3 SYS :
___ LIBCUR1 CUR WILLOW :
___ LIBUSR1 USR OAK :
___ LIBUSR2 USR :
Bottom
F3=Exit F12=Cancel F17=Top F18=Bottom

Figure 39. During the running of a MIMIX command. The independent ASP libraries are
removed from the library list.

Display Library List


System: CHICAGO
Type options, press Enter.
5=Display objects in library

Opt Library Type ASP device Text


___ LIBSYS1 SYS :
___ LIBSYS2 SYS :
___ LIBSYS3 SYS :
___ LIBCUR1 CUR :
___ LIBUSR1 USR :
___ LIBUSR2 USR :
Bottom
F3=Exit F12=Cancel F17=Top F18=Bottom

Figure 40. After the MIMIX command runs. The library in independent ASP WILLOW in the
system portion of the library list is removed. The libraries in independent ASP OAK in the user

portion of the library list and the current library are restored.

Display Library List


System: CHICAGO
Type options, press Enter.
5=Display objects in library

Opt Library Type ASP device Text


___ LIBSYS1 SYS :
___ LIBSYS2 SYS :
___ LIBSYS3 SYS :
___ LIBCUR1 CUR WILLOW :
___ LIBUSR1 USR OAK :
___ LIBUSR2 USR :
Bottom
F3=Exit F12=Cancel F17=Top F18=Bottom

The SETASPGRP command can return escape message LVE3786 if License
Program 5722-SS1 option 12 (Host Server) is not installed.

Detecting independent ASP overflow conditions


You can take advantage of the independent ASP threshold monitor to detect
independent ASP overflow conditions that put your high availability solution at risk
due to insufficient storage.
The independent ASP threshold monitor, MMIASPTHLD, monitors the QSYSOPR
message queue in library QSYS for messages indicating that the amount of storage
used by an independent ASP exceeds a defined threshold. When this condition is
detected, the monitor sends a warning notification that the threshold is exceeded. The
status of warning notifications is incorporated into overall MIMIX status. Notifications
can be displayed with the Work with Notifications (WRKNFY) command.
Each ASP defaults to 90% as the threshold value. To change the threshold value, you
must use IBM's iSeries Navigator.
The independent ASP threshold monitor is shipped with MIMIX. The monitor is not
automatically started after MIMIX is installed. If you want to use this monitor, you must
start it. The monitor is controlled by the master monitor.


APPENDIX F Advanced auditing topics

MIMIX provides the capability to create user-defined rules and integrate the status of
those rules into status reporting for MIMIX. This can be useful to perform specialized
checks of your environment that augment your regularly scheduled audits. This
appendix describes how to create user-defined rules and notifications.
This appendix also describes advanced topics associated with auditing. Auditing and
the policies which control audit are described fully in the MIMIX Operations book.
Topics in this appendix include:
• “What are rules and how they are used by auditing” on page 669 defines the
differences between MIMIX rules used for auditing and user-defined rules.
• “Using a different job scheduler for audits” on page 670 identifies what is needed if
you choose to not use the automatic auditing job scheduling support in MIMIX.
• “Considerations for rules” on page 671 identifies considerations for using the Run
Rule command and replacement variables with user-defined rules.
• “Creating user-generated notifications” on page 673 describes how to create a
notification that can be used with custom automation.
• “Running rules and rule groups manually” on page 675 describes how to use the
Run Rule and Run Rule Group commands.
• “Running user rules and rule groups programmatically” on page 676 describes
running rules when initiated by a job scheduling task.
• “MIMIX rule groups” on page 677 lists the pre-configured sets of MIMIX rules that
are shipped with MIMIX.

What are rules and how they are used by auditing


A rule defines a command to be invoked by the MIMIX Run Rule (RUNRULE)
command and options for notifying you of the result.
MIMIX uses rules as the mechanism for defining and invoking audits. Each shipped
audit is a rule that pre-defines a command invoked by the compare phase of an audit
and the possible actions that can be initiated, if needed, in the recovery phase of an
audit. MIMIX audits have names which begin with the pound sign (#) character. While
audit rules cannot be changed, you have considerable control over audits through
policies and scheduling. The MIMIX scheduler job (MXSCHED) provides support to
automatically submit auditing requests.
Two commands, Run Rule (RUNRULE) and Run Rule Group (RUNRULEGRP),
enable programmatic scheduling of rule activity. MIMIX invokes the RUNRULE
command when submitting audits automatically. You can also run rules on demand by
using user interface options for audits and rules or by using these commands
interactively.

Using a different job scheduler for audits
If you do not want to use the job scheduling capabilities within MIMIX to schedule
audits, you need to ensure that all of the MIMIX rules are scheduled to run on a
regular basis using your preferred scheduling mechanism.
Note: Only scheduled audits can be run using a different job scheduler. Scheduled
audits select all configured objects associated with the class of the audit.
Prioritized audits cannot be run with a different job scheduler. Prioritized audits
select only those replicated objects that are eligible for auditing based on their
eligibility category and category frequency.
It is recommended that you do the following:
• Schedule the audits to run from the management system.
• Schedule all audits to run every day in the same order described for shipped
default schedules in the MIMIX Operations book.
• Specify the same Run Rule command that is displayed when you use prompt
option 9 (Run rule) on the Work with Audits display.
• Address starting and ending the scheduling jobs in your operations at points
where you need to start or end MIMIX. The Start MIMIX (STRMMX) and End
MIMIX (ENDMMX) commands only address audits scheduled by MIMIX.
• Put appropriate checks in place to prevent scheduled jobs from starting when
MIMIX would otherwise need to be ended (such as during installation).
• Disable MIMIX scheduling for all audits. For installations running service pack
7.1.12.00 or higher, specify *DISABLED for the State element in the Audit
schedule policy of every audit. For installations running earlier service pack levels,
specify *NONE for the Frequency element in the Audit schedule policy of every
audit.

Considerations for rules


Audit rules shipped with MIMIX are automatically submitted using the run rule
commands and variables. Typically users do not interact with auditing infrastructure at
the run rule command and variable levels. However, if interaction is required, consider
the following:
General recommendations:
• Run MIMIX rules from a management system. For most environments, the
management system is also the target system. If you cannot run rules from the
management system due to physical constraints or because of complex
configurations, you can change the Run rule on system policy to meet your needs.
See the policies chapter of the MIMIX Operations - 5250 book for more details.
• When choosing the value for the Run rule on system policy, consider your
switching needs.
Considerations for the run rule commands: The RUNRULE command allows you
to run multiple rules concurrently, with each specified rule running in an independent
process. Up to 100 unique rules can be specified per RUNRULE request.
The RUNRULEGRP command only allows you to specify one rule group at a time.
Otherwise, this command is like the RUNRULE command.
When prompting the RUNRULE or RUNRULEGRP commands, consider the
following:
• For the Data group definition prompts, the default value, *NONE, means the
rule will not be run against a data group. If *NONE is specified on the
command when the rule uses the &DGDFN replacement variable, running the
RUNRULE command results in an error condition in the audit status and a
message log entry. When a data group name or *ALL is specified, any instance
of the &DGDFN replacement variable is replaced with the data group name
and each data group is run in a separate process.
• For the Job description and Library prompts, the default value, MXAUDIT,
submits the request using the default job description, MXAUDIT.
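For illustration, the following sketch runs two shipped rules concurrently against a single data group. The data group name (INVENTORY AS01 AS02) is an example, and the keyword names shown are assumptions based on the prompts described above; verify the actual keywords by pressing F4 to prompt the command:

```
RUNRULE RULE(#FILATR #OBJATR) DGDFN(INVENTORY AS01 AS02)
```

Because each specified rule runs in an independent process, both comparisons are submitted at the same time. Specifying *ALL for the data group instead would replace any &DGDFN variable with each data group name and run each data group in a separate process.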
Rule-generated messages and notifications: For audits, the primary interface for
checking results is the Audit Summary interface (Work with Audits display or Audits
portlet). This topic describes additional, secondary messaging for rules.
When the action identified in a rule is started, an informational message appears in
the message log. An informational message also appears when a rule action
completes successfully.
When an action initiated by a rule ends in error or runs successfully but detects
differences, an escape message appears in the message log and an error notification
is sent to the notifications user interface (Work with Notifications display or
Notifications portlet).
Rules that call MIMIX commands may result in an error notification and a message
log entry if you do not have a valid access code for the MIMIX product or if the
access code has expired.

Rule-related messages are marked with a Process value of *NOTIFY to facilitate the
filtering of rule- and notification-related messages.

Creating user-generated notifications


MIMIX supports the ability to create user-generated notifications for user-defined
events and have their status and severity reflected within overall MIMIX status. User-
generated notifications can be created interactively from a command line or by
automation programs when user-defined events are detected.
User-generated notifications are created with a status of *NEW. User-generated
notifications appear on the Work with Notifications display and their severity is
reflected in MIMIX status on higher-level displays. The systems from which you can
view a notification depend on the role of the system on which the notification was
created and the value specified for the DGDFN parameter.
To create a user-generated notification, do the following:
1. Enter the following from a command line:
installation_library/ADDNFYE
2. The Add Notification Entry (ADDNFYE) display appears. Specify values for the
following prompts:
a. Notification description (TEXT) - Specify a short description with no more than
132 characters of text, enclosed in apostrophes. This text will appear on the
Work with Notifications display.
b. Notification severity (SEVERITY) - Specify the severity assigned to the
notification. The specified value determines how the notification is prioritized in
overall MIMIX status. Use the default value *ERROR to indicate an error was
detected; typically action is required to resolve the problem. The value
*WARNING identifies that action may be required and the value *INFO informs
of a successful operation.
c. Data group definition (DGDFN) - If necessary, specify the three-part name for a
data group. When a data group is specified, the notification is available on
either system defined to the data group. When the value *NONE is specified,
the role of the system (management or network) determines where the
notification will be available. A notification with a value of *NONE added from a
management system will not be available on any network systems. A
notification with a value of *NONE added from a network system will not be
available on any other network systems.
d. Notification details (DETAIL) - Specify information to identify what caused the
notification and what users are expected to do if action is needed. This field
must be no more than 512 characters of text, enclosed in apostrophes. This
information is visible when the notification details are displayed.
3. You can optionally specify values for Job name details (JOB) and File details
(FILE) to identify the job which generated the notification and an associated
output file and library. In order to have this information available to users, you
must specify it now. When specified, this information is available for the
notification from the system on which the notification was sent.
4. To add the notification, press Enter.

Example of a user-generated notification
A MIMIX administrator wants to see a notification reflected in MIMIX status when TCP
communications fails. A message queue monitor on a specific system can check for a
message indicating a communications failure and issue a notification when the
message occurs.
Note: The administrator in this example must use care when determining where to
create the monitor. A monitor runs only on a single system but the notification
it will generate may be available on multiple systems. The role of the system
(management or network) on which the monitor runs and the values specified
for the Add Notification Entry command in the monitor’s event program
determine where the notification will be available. (For details, see the DGDFN
information in “Creating user-generated notifications” on page 673.) Because
the communications problem being monitored may also prevent the
notification from reaching the appropriate systems, the administrator chose to
create this monitor on multiple systems in the installation.
The following command creates a message queue monitor named COMPROB to
check for message LVE0113 (TCP communications request failed with error &1) in
the MIMIX message queue in the MIMIXQGPL library:
CRTMONOBJ MONITOR(COMPROB) EVTCLS(*MSGQ)
EVTPGM(user_library/COMPROB) MSGQ(MIMIXQGPL/MIMIX)
MSGID(LVE0113) AUTOSTR(*YES) TEXT('Issue notification
entry for TCP communication problem')
The event program includes the instruction to issue the following command, which will
add a notification to MIMIX in the specified installation library:
installation_library/ADDNFYE TEXT('comm failure')
SEVERITY(*ERROR) DGDFN(*NONE) DETAIL('TCP communications
failed. Investigation needed.')
Once the monitor is enabled and started, the event program COMPROB will run when
the message LVE0113 is detected. For additional information about creating monitors
and writing event programs, see the Using MIMIX Monitor book.

Running rules and rule groups manually


User interfaces for auditing provide options to run audits which invoke the Run Rule
(RUNRULE) command. There may be times when you need to run the
RUNRULE or the Run Rule Group (RUNRULEGRP) command manually. You can run
the RUNRULE and RUNRULEGRP commands by typing them on a command line.
Any notifications or recoveries that result from the rule are displayed in the user
interface.
Notes:
• Before running rules, you should be familiar with the information in
“Considerations for rules” on page 671.
• You can verify that your rules are running by checking the message log
available from the notifications interface.

Running rules
This procedure outlines the steps required to run a rule.
Typically, this procedure should be performed from the system and installation where
you want the rule to run. Do the following:
1. On a command line, type RUNRULE and press F4 (Prompt). The Run Rule
(RUNRULE) display appears.
2. At the Rule name prompt, specify the rule names for the rules you want to run.
You can specify up to 100 rules to run from the command.
3. At the Data group definition prompt, specify the value you want. The default is
*NONE, but you can specify that rules be run against an individual data group or all
data groups.
4. Press F10 for additional parameters.
5. At the Notification severity prompt, specify the severity level to assign to the
notification that is sent if the rule ends in error. This value overrides values
specified in policies or in the rule itself.
For a MIMIX rule, the default value *DFT is the same as the value *POLICY,
where the Notification severity policy in effect determines the severity of the
notification. For a user rule, *DFT is the same as the value *RULE, where the rule
determines the severity of the notification.
6. At the Notification on success prompt, specify whether you want the rule to
generate a notification when the specified rule ends successfully. This value
overrides values specified in policies or in the rule itself.
For a MIMIX rule, the default value *DFT is the same as *POLICY, where the Audit
notify on success policy in effect determines whether a notification is sent. If the
policy is set for both the installation and for the data group, the data group value is
used. For a user rule, *DFT is the same as the value *RULE, where the value
specified in the rule determines whether a notification is sent.
7. At the Use run rule on system policy prompt, specify whether the rule should use
the policy in effect when run. This value is only used when a data group is
selected. The default value *NO will run the rule on the local system.
8. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request. The default value, MXAUDIT,
submits the request using the default job description, MXAUDIT.
9. To run the rule, press Enter.
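The same request can also be made without prompting by typing the command directly on a command line. This sketch assumes a data group named INVENTORY defined between systems AS01 and AS02; the keyword names follow the prompts described in the steps above and should be verified by prompting the command with F4. All other parameters take their defaults:

```
RUNRULE RULE(#DGFE) DGDFN(INVENTORY AS01 AS02)
```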

Running rule groups


This procedure outlines the steps required to run a rule group.
Do the following:
1. On a command line, type RUNRULEGRP and press F4 (Prompt). The Run Rule
Group (RUNRULEGRP) display appears.
2. At the Rule group name prompt, specify the rule group name for the rule group
you want to run. You can only run individual rule groups.
3. At the Data group definition prompt, specify the value you want. The default is
*NONE, but you can specify that rule groups be run against an individual data
group or all data groups.
4. Press F10 for additional parameters.
5. At the Notification severity prompt, specify the severity level to assign to the
notification that is sent if a rule in the group ends in error. This value overrides
values specified in policies or in the rule itself.
For a MIMIX rule group, the default value *DFT is the same as the value *POLICY,
where the Notification severity policy in effect determines the severity of the
notification. For a user rule, *DFT is the same as the value *RULE, where the rule
determines the severity of the notification.
6. At the Notification on success prompt, specify whether you want the rule to
generate a notification when the specified rules end successfully. This value
overrides values specified in policies or in the rule itself.
For a MIMIX rule, the default value *DFT is the same as *POLICY, where the Audit
notify on success policy in effect determines whether a notification is sent. If the
policy is set for both the installation and for the data group, the data group value is
used. For a user rule, *DFT is the same as the value *RULE, where the value
specified in the rule determines whether a notification is sent.
7. At the Use run rule on system policy prompt, specify whether each rule should use
the policy in effect when run. This value is only used when a data group is
selected. The default value *NO will run the rule on the local system.
8. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request. The default value, MXAUDIT,
submits the request using the default job description, MXAUDIT.
9. To run the rule group, press Enter.
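As with RUNRULE, the prompted values can be typed directly on the command line. This sketch assumes the shipped rule group #FILALL is run against all data groups; the RULEGRP keyword name is an assumption based on the prompt described above and should be verified with F4:

```
RUNRULEGRP RULEGRP(#FILALL) DGDFN(*ALL)
```

Because the command otherwise behaves like RUNRULE, the rules within the specified group each run in an independent process, but only one rule group can be specified per request.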

MIMIX rule groups


Each MIMIX rule group consists of a predetermined set of MIMIX rules. Table 121 lists
the pre-configured rule groups shipped with MIMIX. For a description of each MIMIX
rule used by each rule group, see topic How audits are scheduled automatically in the
MIMIX Operations book.

Table 121. Pre-configured MIMIX rule groups

Rule group name  Description                            Individual rules included

#ALL             Set of all shipped DLO, file, IFS,     #DGFE, #DLOATR, #FILATR,
                 and object rules.                      #FILATRMBR, #FILDTA,
                                                        #IFSATR, #MBRRCDCNT,
                                                        #OBJATR

#ALLATR          Set of shipped attribute               #DLOATR, #FILATR,
                 comparisons for files, objects,        #FILATRMBR, #IFSATR,
                 IFS objects, and DLOs.                 #OBJATR

#ALLDTA          Set of data comparisons for files      #FILDTA, #IFSATR
                 and IFS objects.

#FILALL          Set of shipped file rules that         #DGFE, #FILATR,
                 compares file and member               #FILATRMBR, #FILDTA,
                 attributes and file data, and          #MBRRCDCNT
                 checks configuration for files
                 using cooperative processing.

#FILATR          Set of shipped file rules that         #FILATR, #FILATRMBR
                 compares file and member
                 attributes.

#IFSALL          Set of shipped IFS rules.              #IFSATR

APPENDIX G Interpreting audit results

Audits use commands that compare and synchronize data. The results of the audits
are placed in output files associated with the commands. The following topics provide
supporting information for interpreting data returned in the output files.
• “Resolving auditing problems” on page 679 describes how to check the status of
an audit and resolve any problems that occur.
• “Checking the job log of an audit” on page 684 describes how to use an audit’s job
log to determine why an audit failed.
• “When the difference is “not found”” on page 686 provides additional
considerations for interpreting results of not found in priority audits.
• “Interpreting results for configuration data - #DGFE audit” on page 687 describes
the #DGFE audit which verifies the configuration data defined to your
configuration using the Check Data Group File Entries (CHKDGFE) command.
• “Interpreting results of audits for record counts and file data” on page 689
describes the audits and commands that compare file data or record counts.
• “Interpreting results of audits that compare attributes” on page 692 describes the
Compare Attributes commands and their results.

Resolving auditing problems


Audits run for individual data groups. To check whether auditing problems exist for
data groups in an installation, do the following:
1. To access the Work with Data Groups display, either select option 6 from the
MIMIX Basic Main menu or select option 1 from the MIMIX Intermediate Main
menu and press Enter.
2. The Audits field located in the upper right of the display identifies the total number
of audits in the installation that require action to correct a problem or to prevent a
situation from becoming a problem.
Work with Data Groups CHICAGO
11:02:05
Type options, press Enter. Audits/Recov./Notif.: 079 / 002 / 003
5=Display definition 8=Display status 9=Start DG
10=End DG 12=Files needing attention 13=Objects in error
14=Active objects 15=Planned switch 16=Unplanned switch ...
---------Source--------- --------Target-------- -Errors-
Opt Data Group System Mgr DB Obj DA System Mgr DB Obj DB Obj

Highlighting on the Audits field indicates the severity of the errors.


Red — At least one audit either failed, ended, has unresolved
differences, did not run and *NOTRUN status was considered an error
by policies in effect at runtime, or is out of compliance.
Yellow — At least one audit either did not run and *NOTRUN status
was considered a warning by policies in effect at runtime, ignored some
objects and the *IGNOBJ status was considered a warning by policies in effect at
runtime, is out of compliance and currently active, or is approaching out of
compliance.
3. Press F7 (Audits) to access the Work with Audits display.
4. The display opens to the view that includes the highest severity errors, which may
be runtime or compliance.
• If the Audit Status column is displayed, there is a problem with the runtime
status of an audit. Continue with “Resolving audit runtime status problems” on
page 679.
• If the Compliance column is displayed, there is a problem with compliance for
an audit. Continue with “Resolving audit compliance status problems” on
page 684.

Resolving audit runtime status problems


Audit runtime status is displayed on the Audit summary view of the Work with Audits
display. Audits with potential problems are at the top of the list.

You may also need to view the output file or the job log, which are only available from
the system where the audits ran. In most cases, this is the management system.
Work with Audits
System: AS01
Type options, press Enter.
5=Display 6=Print 7=History 8=Recoveries 9=Run rule 10=End
14=Audited objects 46=Mark recovered ...

Audit Audit ---------Definition--------- Object Objects


Opt Status Rule DG Name System 1 System 2 Diff Selected
__ *NOTRUN #OBJATR EMP AS01 AS02 0 *PTY

Do the following from the management system:


1. If necessary, do one of the following to access the Work with Audits display.
• From the MIMIX Intermediate Main Menu, select option 6 (Work with audits)
and press Enter. Then use F10 as needed to access the Audit summary view.
• From a command line, enter WRKAUD VIEW(*AUDSTS)
2. Check the Audit Status column for values shown in Table 122 and take the
indicated action.

Table 122. Addressing problems with audit runtime status

Status Action

*FAILED The rule called by the audit failed or ended abnormally. If the failed audit
selected objects by priority and its timeframe for starting has not passed, the audit
will automatically attempt to run again.
• To run the rule for the audit again, select option 9 (Run rule). This will check all objects
regardless of how the failed audit selected objects to audit.
• To check the job log, see “Checking the job log of an audit” on page 684.


*ENDED The #FILDTA audit or the #MBRRCDCNT audit ended either because of a policy in effect
or the data group status. The recovery varies according to why the audit ended.
To determine why the audit ended:
1. Select option 5 (Display) for the audit and press Enter.
2. Check the value indicated in the Reason for ending field. Then perform the appropriate
recovery, below.
If the reason for ending is *DGINACT, the data group status became inactive while the
audit was in progress.
1. From the command line, type WRKDG and press Enter.
• If all processes for the data group are active, skip to Step 2.
• If processes for the data group show a red I, L, or P in the Source and Target columns,
use option 9 (Start DG). Note that if the data group was inactive for some time, it may
have a threshold condition after being started. Wait for the threshold condition to clear
before continuing with Step 2.
2. When the data group is active and does not have a threshold condition, return to the
Work with Audits display and use option 9 (Run rule) to run the audit. This will check all
objects regardless of how the ended audit had selected objects to audit.
If the reason for ending is *MAXRUNTIM, the Maximum rule runtime (MAXRULERUN)
policy in effect was exceeded. Do one of the following:
• Wait for the next priority-based audit to run according to its timeframe for starting.
• Change the value specified for the MAXRULERUN policy using “Setting policies -
general” on page 30. Then, run the audit again, either by using option 9 (Run rule) or by
waiting for the next scheduled audit or priority-based audit to run.
If the reason for ending is *THRESHOLD, the DB apply threshold action (DBAPYTACT)
policy in effect caused the audit to end.
1. Determine if the data group still has a threshold condition. From the command line, type
WRKDG and press Enter.
2. If the data group shows a turquoise T in the Target DB column, the threshold exceeded
condition is still present. Wait for the threshold to resolve. If the threshold persists for an
extended time, you may need to contact your MIMIX administrator.
3. When the data group no longer has a threshold condition, return to the Work with Audits
display and use option 9 (Run rule) to run the audit. (This will check all objects.)


*DIFFNORCY The comparison performed by the audit detected differences. No recovery actions were
attempted because of a policy in effect when the audit ran. Either the Automatic audit
recovery policy is disabled or the Action for running audits policy prevented recovery
actions while the data group was inactive or had a replication process which exceeded its
threshold.
If policy values were not changed since the audit ran, checking the current settings will
indicate which policy was the cause. Use option 36 to check data group level policies and
F16 to check installation level policies.
• If the Automatic audit recovery policy was disabled, the differences must be manually
resolved.
• If the Action for running audits policy was the cause, either manually resolve the
differences or correct any problems with the data group status. You may need to start
the data group and wait for threshold conditions to clear. Then run the audit again.
To manually resolve differences do the following:
1. Type 7 (History) next to the audit with *DIFFNORCY status and press Enter.
2. The Work with Audit History display appears with the most recent run of the audit at the
top of the list. Type 8 (Display difference details) next to an audit to see its results in the
output file.
3. Check the Difference Indicator column. All differences shown for an audit with
*DIFFNORCY status need to be manually resolved. For more information about the
possible values, see “Interpreting audit results” on page 678.
To have MIMIX always attempt to recover differences on subsequent audits, change the
value of the automatic audit recovery policy.

*NOTRCVD The comparison performed by the audit detected differences. Either some attempts to
recover differences failed, or the audit job ended before recoveries could be attempted on
all differences.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits,
such as #FILDTA, may correct the detected differences.
Do the following:
1. Type 7 (History) next to the audit with *NOTRCVD status and press Enter.
2. The Work with Audit History display appears with the most recent run of the audit at the
top of the list. Type 8 (Display difference details) next to an audit to see its results in the
output file.
3. Check the Difference Indicator column. Any objects with a value of *RCYFAILED must
be manually resolved. For any objects with values other than *RCYFAILED or
*RECOVERED, run the audit again. For more information about the possible values,
see “Interpreting audit results” on page 678.


*NOTRUN The audit request was submitted but did not run. Either the audit was prevented from
running by the Action for running audits policy in effect, or the data group was inactive
when a #FILDTA or #MBRRCDCNT audit was requested.
Note: An audit with *NOTRUN status may or may not be considered a problem in your environment.
The value of the Audit severity policy in effect determines whether the audit status *NOTRUN
has been assigned an error, warning, or informational severity. The severity determines
whether the *NOTRUN status rolls up into the overall status of MIMIX and affects the order in
which audits are displayed in interfaces.
A status of *NOTRUN may be expected during periods of peak activity or when data group
processes have been ended intentionally. However, if the audit is frequently not run due to
the Action for running audits policy, action may be needed to resolve the cause of the
problem.
To resolve this status:
1. From the command line, type WRKDG and press Enter.
2. Check the data group status for inactive or partially active processes and for processes
with a threshold condition.
3. When the data group no longer has a threshold condition and all processes are active,
return to the Work with Audits display and use option 9 (Run rule) to run the audit. (This
will check all objects.)

*IGNOBJ The audit ignored one or more objects because they were considered active or could not
be compared because of locks or authorizations that prevented access. All other selected
objects were compared and any detected differences were recovered.
Note: An audit with *IGNOBJ status may or may not be considered a problem in your environment.
The value of the Audit severity policy in effect determines whether the audit status *IGNOBJ
has been assigned a warning or informational severity. The severity determines whether the
*IGNOBJ status rolls up into the overall status of MIMIX, determines whether the ignored
objects are counted as differences, and affects the order in which audits are displayed in
interfaces.
To resolve this status:
1. From the Work with Audits display on the target system, type 7 (History) next to the audit
with *IGNOBJ status and press Enter.
2. The Work with Audit History display appears with the most recent run of the audit at the
top of the list. Type 8 (Display difference details) next to an audit to see its results in the
output file.
3. Check the Difference Indicator column to identify the objects with a status of *UN or *UA.
4. When the locks are released or replication activity for the object completes, do one of
the following:
• Wait for the next prioritized audit to run.
• Run the audit manually using option 9 (Run rule) from the Work with Audits display.

For more information about the values displayed in the audit results, see “Interpreting
results for configuration data - #DGFE audit” on page 687, “Interpreting results of
audits for record counts and file data” on page 689, and “Interpreting results of audits
that compare attributes” on page 692.

Checking the job log of an audit
An audit’s job log can provide more information about why an audit failed. If it still
exists, the job log is available on the system where the audit ran. Typically, this is the
management system.
You must display the history of an audit in order to view the job log. Do the
following:
1. From the Work with Audits display, type 7 (History) next to the audit and press
Enter.
2. The Work with Audit History display appears with the most recent run of the audit
at the top of the list.
3. Use option 12 (Display job) next to the audit you want and press Enter.
4. The Display Job menu opens. Select option 4 (Display spooled files). Then use
option 5 (Display) from the Display Job Spooled Files display.
5. Look for messages from the job log for the audit in question. Usually the most
recent messages are at the bottom of the display.
Message LVE3197 is issued when errors remain after an audit completed.
Message LVE3358 is issued when an audit failed. Check for the following
messages in the job log, which indicate a communications problem (LVE3D5E,
LVE3D5F, or LVE3D60) or a problem with data group status (LVI3D5E,
LVI3D5F, or LVI3D60).

Resolving audit compliance status problems


Audit compliance status is displayed on the Compliance summary view of the Work
with Audits display. Audits with potential problems are at the top of the list.
Compliance is determined for each individual audit based on the date when the audit
last completed its compare phase.
Work with Audits
System: AS01
Type options, press Enter.
5=Display 6=Print 7=History 8=Recoveries 9=Run rule 10=End
14=Audited objects 36=Change DG policies 37=Change audit schedule

Audit ---------Definition--------- ---Compare End---


Opt Compliance Rule DG Name System 1 System 2 Date Time
__ *ATTN #DGFE EMP AS01 AS02 09/25/08 12:15:34

Do the following from the management system:


1. If necessary, do one of the following to access the Work with Audits display.
• From the MIMIX Intermediate Main Menu, select option 6 (Work with audits)
and press Enter. Then use F10 as needed to access the Compliance view.
• Enter the command: installation-library/WRKAUD VIEW(*COMPLY)
2. Check the Compliance column for values of *ATTN and *ACTREQ.


*ATTN - The audit is approaching an out of compliance state as determined by the
Audit warning threshold policy. Attention is required to prevent the audit from
becoming out of compliance.
*ACTREQ - The audit is out of compliance with the Audit action threshold policy.
Action is required. Perform an audit of the data group.
3. To resolve a problem with audit compliance, the audit in question must be run and
complete its compare phase.
• To see when the scheduled run of the audit will occur, press F11.
• To see when both scheduled and prioritized audits will run, press F10 to
access the Audit summary view, then use F11 to toggle between views.
• To run the audit now, select option 9 (Run rule) and press Enter. This action will
select all replicated objects associated with the class of the audit. For more
information, see “Running an audit immediately” on page 164.

When the difference is “not found”
For audits that compare replicated data, a difference indicating the object was not
found requires additional explanation. This difference can be returned for these
audits:
• For the #FILDTA and #MBRRCDCNT audits, a value of *NF1 or *NF2 for the
difference indicator (DIFIND) indicates the object was not found on one of the
systems in the data group. The 1 and 2 in these values refer to the system as
identified in the three-part name of the data group.
• For the #FILATR, #FILATRMBR, #IFSATR, #OBJATR, and #DLOATR audits, a
not found condition is indicated by a value of *NOTFOUND in either the system 1
indicator (SYS1IND) or system 2 indicator (SYS2IND) fields. Typically, the DIFIND
field result is *NE.
Audits can report not found conditions for objects that have been deleted from the
source system. A not found condition is reported when a delete transaction is in
progress for an object eligible for selection when the audit runs. This is more likely to
occur when there are replication errors or backlogs, and when policy settings do not
prevent audits from comparing when a data group is inactive or in a threshold
condition.
A scheduled audit will not identify a not found condition for an object that does not
exist on either system because it selects existing objects based on whether they are
configured for replication by the data group. This is true regardless of whether the
audit is automatically submitted or run immediately.
Because a priority audit selects already replicated objects, it will not audit objects for
which a create transaction is in progress.
Prioritized audits will not identify a not found condition when the object is not found on
the target system because prioritized auditing selects objects based on the replicated
objects database. Only objects that have been replicated to the target system are
identified in the database.
Priority audits can be more likely to report not found conditions when replication errors
or backlogs exist.

Interpreting results for configuration data - #DGFE audit


The #DGFE audit verifies the configuration data that is defined for replication in your
environment. This audit invokes the Check Data Group File Entries (CHKDGFE)
command for the audit’s comparison phase. The CHKDGFE command collects data
on the source system and generates a report in a spooled file or an outfile.
Table 123 shows the possible Result values and corresponding recovery actions. If
the Automatic audit recovery policy is enabled when the audit runs, the audit attempts
the indicated recovery action and the status becomes either *RECOVERED or
*RCYFAILED. If the recovery action could not be performed, or if the Automatic audit
recovery policy is disabled, you must manually correct the reported problem.

Table 123. CHKDGFE - possible results and actions for resolving errors

Result       Description and Recovery Actions

*NODGFE      No file entry exists. The object is identified by an object entry
             which specifies COOPDB(*YES) but the file entry necessary for
             cooperative processing is missing.
             Create the DGFE or change the DGOBJE to COOPDB(*NO).
             Note: Changing the object entry affects all objects using the object
             entry. If you do not want all objects changed to this value, copy
             the existing DGOBJE to a new, specific DGOBJE with the appropriate
             COOPDB value.

*EXTRADGFE   An extra file entry exists. The object is identified by a file entry
             and an object entry which specifies COOPDB(*NO). The file entry is
             extra when cooperative processing is not used.
             Delete the DGFE or change the DGOBJE to COOPDB(*YES).
             Note: Changing the object entry affects all objects using the object
             entry. If you do not want all objects changed to this value, copy
             the existing DGOBJE to a new, specific DGOBJE with the appropriate
             COOPDB value.

*NOFILE      No file exists for the existing file entry.
             Delete the DGFE, re-create the missing file, or restore the missing
             file.

*NOMBR       No file member exists for the existing file entry.
             Delete the DGFE for the member or add the member to the file.

*RCYFAILED   Automatic audit recovery actions were attempted but failed to
             correct the detected error.
             Run the audit again.

*RECOVERED   Recovered by automatic recovery actions.
             No action is needed.

*UA          File entries are in transition and cannot be compared.
             Run the audit again.

The Option column of the report provides supplemental information about the
comparison. Possible values are:
• *NONE - No options were specified on the comparison request.
• *NOFILECHK - The comparison request included an option that prevented an
  error from being reported when a file specified in a data group file entry does not
  exist.
• *DGFESYNC - The data group file entry was not synchronized between the
  source and target systems. This may have been resolved by automatic recovery
  actions for the audit.
Actual configuration data in your environment may not match what is defined to your
configuration if, for example, a file was deleted but the associated data group file
entries were left intact, or if a data group file entry specifies a member name but that
member is no longer defined to the file. If you use the #DGFE audit with automatic
scheduling and automatic audit recovery enabled, these configuration problems can
be automatically detected and recovered for you. Table 124 provides examples of
when various configuration errors might occur.

Table 124. CHKDGFE - possible error conditions

Result       File exists  Member exists  DGFE exists  DGOBJE exists

*NODGFE      Yes          Yes            No           COOPDB(*YES)
*EXTRADGFE   Yes          Yes            Yes          COOPDB(*NO)
*NOFILE      No           No             Yes          Exclude
*NOMBR       Yes          No             Yes          No entry
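
The conditions in Table 124 can be expressed as a small decision function. This is an
illustrative sketch only, not MIMIX code; the function name and parameters are
hypothetical:

```python
def classify_dgfe(file_exists, member_exists, dgfe_exists, coopdb):
    """Classify a CHKDGFE result from the conditions in Table 124.

    coopdb is the COOPDB() value of the matching data group object
    entry, for example "*YES" or "*NO".
    """
    if dgfe_exists and not file_exists:
        return "*NOFILE"      # file entry exists, but the file does not
    if dgfe_exists and not member_exists:
        return "*NOMBR"       # file entry exists, but the member does not
    if not dgfe_exists and coopdb == "*YES":
        return "*NODGFE"      # cooperative processing expects a file entry
    if dgfe_exists and coopdb == "*NO":
        return "*EXTRADGFE"   # file entry is present but not needed
    return "*NONE"            # configuration is consistent

# Example: object entry says COOPDB(*YES) but no file entry exists.
print(classify_dgfe(True, True, False, "*YES"))  # -> *NODGFE
```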

Interpreting results of audits for record counts and file data
The audits and commands that compare file data or record counts are as follows:
• #FILDTA audit or Compare File Data (CMPFILDTA) command
• #MBRRCDCNT audit or Compare Record Count (CMPRCDCNT) command
Each record in the output files for these audits or commands identifies a file member
that has been compared and indicates whether a difference was detected for that
member.
You can see the full set of fields in each output file by viewing it from the native user
interface.
The type of data included in the output file is determined by the report type specified
on the compare command. The data included for each report type is as follows:
• Difference reports (RPTTYPE(*DIF)) return information about detected
differences. Difference reports are the default for these compare commands.
• Full reports (RPTTYPE(*ALL)) return information about all objects and attributes
compared. Full reports include both differences and objects that are considered
synchronized. Audits use this report type.
• Relative record number reports (RPTTYPE(*RRN)) return the relative record
number of the first 1,000 records of a member that fail to compare. Relative record
number reports apply only to the Compare File Data command.

What differences were detected by #FILDTA


The Difference Indicator (DIFIND) field identifies the result of the comparison. Table
125 identifies the values for the Compare File Data command that can appear in this
field.

Table 125. Possible values for Compare File Data (CMPFILDTA) output file field
Difference Indicator (DIFIND)

Values       Description

*APY         The database apply (DBAPY) job encountered a problem processing a
             U-MX journal entry for this member.
*CMT         Commit cycle activity on the source system prevents active
             processing from comparing records or record counts in the selected
             member.
*CO          Unable to process selected member. Cannot open file.
*CO (LOB)    Unable to process selected member containing a large object (LOB).
             The file or the MIMIX-created SQL view cannot be opened.
*DT          Unable to process selected member. The file uses an unsupported
             data type.
*EQ          Data matches. No differences were detected within the data
             compared. Global difference indicator.
*EQ (DATE)   Member excluded from comparison because it was not changed or
             restored after the timestamp specified for the CHGDATE parameter.
*EQ (OMIT)   No difference was detected. However, fields with unsupported types
             were omitted.
*FF          The file feature is not supported for comparison. Examples of file
             features include materialized query tables.
*FMC         Matching entry not found in database apply table.
*FMT         Unable to process selected member. File formats differ between
             source and target files. Either the record length or the null
             capability is different.
*HLD         Indicates that a member is held or an inactive state was detected.
*IOERR       Unable to complete processing on selected member. Messages
             preceding LVE0101 may be helpful.
*NE          Indicates a difference was detected.
*NF1         Member not found on system 1.
*NF2         Member not found on system 2.
*REP         The file member is being processed for repair by another job
             running the Compare File Data (CMPFILDTA) command.
*SJ          The file is not journaled on the source node.
*SP          Unable to process selected member. See messages preceding message
             LVE3D42 in job log.
*SW          The source file is journaled but not to the journal specified in
             the journal definition.
*SYNC        The file or member is being processed by the Synchronize DG File
             Entry (SYNCDGFE) command.
*UE          Unable to process selected member. Reason unknown. Messages
             preceding message LVE3D42 in job log may be helpful.
*UN          Indicates that the member’s synchronization status is unknown.

See “When the difference is “not found”” on page 686 for additional information.

What differences were detected by #MBRRCDCNT


Table 126 identifies values for the Compare Record Count command that can appear
in the Difference Indicator (DIFIND) field.

Table 126. Possible values for Compare Record Count (CMPRCDCNT) output file field
Difference Indicator (DIFIND)

Values       Description

*APY         The database apply (DBAPY) job encountered a problem processing a
             U-MX journal entry for this member.
*CMT         Commit cycle activity on the source system prevents active
             processing from comparing records or record counts in the selected
             member.
*EC          The attribute compared is equal to configuration.
*EQ          Record counts match. No difference was detected within the record
             counts compared. Global difference indicator.
*FF          The file feature is not supported for comparison. Examples of file
             features include materialized query tables.
*FMC         Matching entry not found in database apply table.
*HLD         Indicates that a member is held or an inactive state was detected.
*LCK         Lock prevented access to member.
*NE          Indicates a difference was detected.
*NF1         Member not found on system 1.
*NF2         Member not found on system 2.
*SJ          The file is not journaled on the source node.
*SW          The source file is journaled but not to the journal specified in
             the journal definition.
*UE          Unable to process selected member. Reason unknown. Messages
             preceding LVE3D42 in job log may be helpful.
*UN          Indicates that the member’s synchronization status is unknown.

See “When the difference is “not found”” on page 686 for additional information.

Interpreting results of audits that compare attributes
Each audit that compares attributes does so by calling a Compare Attributes
command (1) and places the results in an output file. Each row in an output file for a
Compare Attributes command can contain either a summary record format or a
detailed record format. Each summary row identifies a compared object and includes
a prioritized object-level summary of whether differences were detected. Each detail
row identifies a specific attribute compared for an object and the comparison results.
The type of data included in the output file is determined by the report type specified
on the Compare Attributes command. The data included for each report type is as
follows:
• Difference reports (RPTTYPE(*DIF)) return information about detected
differences. Only summary rows for objects that had detected differences are
included. Detail rows for all compared attributes are included. Difference reports
are the default for the Compare Attributes commands.
• Full reports (RPTTYPE(*ALL)) return information about all objects and attributes
compared. For each object compared there is a summary row as well as a detail
row for each attribute compared. Full reports include both differences and objects
that are considered synchronized.
• Summary reports (RPTTYPE(*SUMMARY)) return only a summary row for each
object compared. Specific attributes compared are not included.
For difference and full reports of compare attribute commands, several of the attribute
selectors return an indicator (*INDONLY) rather than an actual value. Attributes that
return indicators are usually variable in length, so an indicator is returned to conserve
space. In these instances, the attributes are checked thoroughly, but the report only
contains an indication of whether they are synchronized.
For example, an authorization list can contain a variable number of entries. When
comparing authorization lists, the CMPOBJA command first determines whether both
lists have the same number of entries. If they do, it then determines whether both lists
contain the same entries. If the number of entries differs or the entries within the
authorization lists are not equal, the report indicates that differences are detected.
The report does not provide the list of entries; it only indicates that they are not equal
in terms of count or content.
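
The authorization list example can be sketched as follows. This is a simplified
illustration of the indicator-only approach, not actual MIMIX code; the function and
entry names are hypothetical:

```python
def compare_autl(entries1, entries2):
    """Indicator-only comparison of two authorization lists.

    Returns *EQ or *NE; the entry lists themselves are never reported.
    """
    # First check: do both lists have the same number of entries?
    if len(entries1) != len(entries2):
        return "*NE"
    # Second check: do both lists contain the same entries?
    if sorted(entries1) != sorted(entries2):
        return "*NE"
    return "*EQ"

# Order does not matter; only count and content are compared.
print(compare_autl(["USERA *USE", "USERB *CHANGE"],
                   ["USERB *CHANGE", "USERA *USE"]))  # -> *EQ
```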
You can see the full set of fields in the output file by viewing it from the native user
interface.

What attribute differences were detected


The Difference Indicator (DIFIND) field identifies the result of the comparison. Table
127 identifies values that can appear in this field. Not all values may be valid for every
Compare command.

1. The Compare Attribute commands are: Compare File Attributes (CMPFILA), Compare
Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO
Attributes (CMPDLOA).

When the output file is viewed from the native user interface, the summary row is the
first record for each compared object and is indicated by an asterisk (*) in the
Compared Attribute (CMPATR) field. The summary row’s Difference Indicator value is
the prioritized summary of the status of all attributes checked for the object. When
included, detail rows appear below the summary row for the compared object and
show the actual result for each compared attribute.

The Priority column in Table 127 indicates the order of precedence MIMIX uses when
determining the prioritized summary value for the compared object.

Table 127. Possible values for output file field Difference Indicator (DIFIND)

Values (1)    Description                                           Summary Record
                                                                    Priority (2)

*EC           The values are based on the MIMIX configuration       5
              settings. The actual values may or may not be
              equal.
*EQ           Record counts match. No differences were detected.    5
              Global difference indicator.
*EM           Established mapping for file identifier (FID).        6
              Attribute indicator only for CMPIFSA *FID
              attribute.
*NA           The values are not compared. The actual values may    5
              or may not be equal.
*NC           The values are not equal based on the MIMIX           3
              configuration settings. The actual values may or
              may not be equal.
*NE           Indicates differences were detected.                  2
*NS           Indicates that the attribute is not supported on      5
              one of the systems. Will not cause a global not
              equal condition.
*NM           Not mapping consistently for file identifier (FID).   2
              Attribute indicator only for CMPIFSA *FID
              attribute.
*RCYSBM       Indicates that MIMIX submitted an automatic audit
              recovery action that must be processed through the
              user journal replication processes. The database
              apply (DBAPY) will attempt the recovery and send
              an *ERROR or *INFO notification to indicate the
              outcome of the recovery attempt.
*RCYFAILED    Used to indicate that automatic recovery attempts
              via automatically submitted audits failed to
              recover the detected difference.
*RECOVERED    Indicates that recovery for this object was           1 (3)
              successful.
*UA           Object status is unknown due to object activity.      2
              If an object difference is found and the comparison
              has a value specified on the Maximum replication
              lag prompt, the difference is seen as unknown due
              to object activity. This status is only displayed
              in the summary record.
              Note: The Maximum replication lag prompt is only
              valid when a data group is specified on the
              command.
*UN           Indicates that the object’s synchronization status    4
              is unknown.

1. Not all values may be possible for every Compare command.
2. Priorities are used to determine the value shown in output files for Compare
   Attribute commands.
3. The value *RECOVERED can only appear in an output file modified by a recovery
   action. The object was initially found to be *NM, *NE or *NC but MIMIX
   automatic recovery actions recovered the object.
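
The precedence rules in the Priority column can be sketched as a lookup in which the
value with the lowest priority number wins when detail rows are summarized. This is
illustrative only; the priorities are copied from Table 127 and the helper name is
hypothetical:

```python
# Lower number = higher precedence when summarizing (from Table 127).
DIFIND_PRIORITY = {
    "*RECOVERED": 1,
    "*NE": 2, "*NM": 2, "*UA": 2,
    "*NC": 3,
    "*UN": 4,
    "*EC": 5, "*EQ": 5, "*NA": 5, "*NS": 5,
    "*EM": 6,
}

def summary_difind(detail_values):
    """Return the prioritized summary indicator for an object's detail rows."""
    return min(detail_values, key=DIFIND_PRIORITY.get)

# One *NE among many *EQ rows makes the whole object *NE.
print(summary_difind(["*EQ", "*EQ", "*NE", "*UN"]))  # -> *NE
```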

For most attributes, when the outfile is viewed from the native user interface and a
detail row contains a blank in either the System 1 Indicator or the System 2 Indicator
field, MIMIX determines the value of the Difference Indicator field according to Table
128. For example, if the System 1 Indicator is *NOTFOUND and the System 2
Indicator is blank (object found), the resulting Difference Indicator is *NE.

Table 128. Difference Indicator values that are derived from System Indicator values

                                       System 1 Indicator
System 2         Object Found  *NOTCMPD   *NOTFOUND  *NOTSPT    *RTVFAILED  *DAMAGED
Indicator        (blank value)

Object Found     *EQ / *NE /   *NA        *NE        *NS        *UN         *NE
(blank value)    *UA / *EC /
                 *NC
*NOTCMPD         *NA           *NA        *NE        *NS        *UN         *NE
*NOTFOUND        *NE / *UA     *NE / *UA  *EQ        *NE / *UA  *NE / *UA   *NE
*NOTSPT          *NS           *NS        *NE        *NS        *UN         *NE
*RTVFAILED       *UN           *UN        *NE        *UN        *UN         *NE
*DAMAGED         *NE           *NE        *NE        *NE        *NE         *NE
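
Table 128 can be read as a lookup matrix, sketched below. This is illustrative
pseudologic, not MIMIX code; the placeholder "cmp" stands for the actual comparison
result used when both objects are found:

```python
SYS1_COLS = ["", "*NOTCMPD", "*NOTFOUND", "*NOTSPT", "*RTVFAILED", "*DAMAGED"]

# Rows are keyed by the System 2 Indicator; "" means the object was found
# (blank indicator value). Cell values are taken from Table 128.
DERIVED_DIFIND = {
    "":           ["cmp",     "*NA",      "*NE", "*NS",      "*UN",      "*NE"],
    "*NOTCMPD":   ["*NA",     "*NA",      "*NE", "*NS",      "*UN",      "*NE"],
    "*NOTFOUND":  ["*NE/*UA", "*NE/*UA",  "*EQ", "*NE/*UA",  "*NE/*UA",  "*NE"],
    "*NOTSPT":    ["*NS",     "*NS",      "*NE", "*NS",      "*UN",      "*NE"],
    "*RTVFAILED": ["*UN",     "*UN",      "*NE", "*UN",      "*UN",      "*NE"],
    "*DAMAGED":   ["*NE",     "*NE",      "*NE", "*NE",      "*NE",      "*NE"],
}

def derive_difind(sys1_ind, sys2_ind):
    """Derive the Difference Indicator from the two system indicators."""
    return DERIVED_DIFIND[sys2_ind][SYS1_COLS.index(sys1_ind)]

# The example from the text: system 1 *NOTFOUND, system 2 blank -> *NE.
print(derive_difind("*NOTFOUND", ""))  # -> *NE
```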

When viewed through Vision Solutions Portal, data group directionality is
automatically resolved so that differences are viewed as Source and Target instead of
System 1 and System 2.
For a small number of specific attributes, the comparison is more complex. The
results returned vary according to parameters specified on the compare request and
MIMIX configuration values. For more information see the following topics:
• “Comparison results for journal status and other journal attributes” on page 715
• “Comparison results for auxiliary storage pool ID (*ASP)” on page 719
• “Comparison results for user profile status (*USRPRFSTS)” on page 722
• “Comparison results for user profile password (*PRFPWDIND)” on page 725

Where was the difference detected


The System 1 Indicator (SYS1IND) and System 2 Indicator (SYS2IND) fields show
the status of the attribute on each system as determined by the compare request.
Table 129 identifies the possible values. These fields are available in both summary
and detail rows in the output file.

Table 129. Possible values for output file fields SYS1IND and SYS2IND

Value        Description                                           Summary Record
                                                                   Priority (1)

<blank>      No special conditions exist for this object.          5
*DAMAGED     Object damaged condition.                             3
*MBRNOTFND   Member not found.                                     2
*NOTCMPD     Attribute not compared. Due to MIMIX configuration    N/A (2)
             settings, this attribute cannot be compared.
*NOTFOUND    Object not found.                                     1
*NOTSPT      Attribute not supported. Not all attributes are       N/A (2)
             supported on all IBM i releases. This is the value
             that is used to indicate an unsupported attribute
             has been specified.
*RTVFAILED   Unable to retrieve the attributes of the object.      4
             Reason for failure may be a lock condition.

1. The priority indicates the order of precedence MIMIX uses when setting the
   system indicator fields in the summary record.
2. This value is not used in determining the priority of summary level records.

For comparisons which include a data group, the Data Source (DTASRC) field
identifies which system is configured as the source for replication.

What attributes were compared


In each detailed row, the Compared Attribute (CMPATR) field identifies a compared
attribute. The following topics identify the attributes that can be compared by each
command and the possible values returned.
• “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on
page 696
• “Attributes compared and expected results - #OBJATR audit” on page 701
• “Attributes compared and expected results - #IFSATR audit” on page 710
• “Attributes compared and expected results - #DLOATR audit” on page 713

Attributes compared and expected results - #FILATR, #FILATRMBR audits
The Compare File Attribute (CMPFILA) command supports comparisons at the file
and member level. Most of the attributes supported are for file-level comparisons. The
#FILATR audit and the #FILATRMBR audit each invoke the CMPFILA command for
the comparison phase of the audit.
Some attributes are common file attributes such as owner, authority, and creation
date. Most of the attributes, however, are file-specific attributes. Examples of file-
specific attributes include triggers, constraints, database relationships, and journaling
information.
The Difference Indicator (DIFIND) returned after comparing file attributes may depend
on whether the file is defined by file entries or object entries. For instance, an attribute
could be equal (*EC) to the database configuration but not equal (*NC) to the object
configuration. See “What attribute differences were detected” on page 692.
Table 130 lists the attributes that can be compared and the value shown in the
Compared Attribute (CMPATR) field in the output file. The Returned Values column
lists the values you can expect in the System1 Value (SYS1VAL) and System 2 Value
(SYS2VAL) columns as a result of running the comparison.

Table 130. Compare File Attributes (CMPFILA) attributes

Attribute        Description            Returned Values (SYS1VAL, SYS2VAL)

*ACCPTH (1)      Access path            AR - Arrival sequence access path
                                        EV - Encoded vector with a 1-, 2-, or 4-byte
                                        vector.
                                        KC - Keyed sequence access path with duplicate
                                        keys allowed. Duplicate keys are accessed in
                                        first-changed-first-out (FCFO) order.
                                        KF - Keyed sequence access path with duplicate
                                        keys allowed. Duplicate keys are accessed in
                                        first-in-first-out (FIFO) order.
                                        KL - Keyed sequence access path with duplicate
                                        keys allowed. Duplicate keys are accessed in
                                        last-in-first-out (LIFO) order.
                                        KN - Keyed sequence access path with duplicate
                                        keys allowed. No order is guaranteed when
                                        accessing duplicate keys.
                                        KU - Keyed sequence access path with no
                                        duplicate keys allowed (UNIQUE).

*ACCPTHVLD (2,3) Access path valid      *YES, *NO

*ACCPTHSIZ (1)   Access path size       *MAX4GB, *MAX1TB

*ALWDLT          Allow delete operation *YES, *NO

*ALWOPS          Allow operations       Group which checks attributes *ALWDLT, *ALWRD,
                                        *ALWUPD, *ALWWRT

*ALWRD           Allow read operation   *YES, *NO

*ALWUPD          Allow update operation *YES, *NO

*ALWWRT          Allow write operation  *YES, *NO

*ASP             Auxiliary storage      1-16 (pre-V5R2)
                 pool ID                1-255 (V5R2)
                                        1 = System ASP
                                        See “Comparison results for auxiliary storage
                                        pool ID (*ASP)” on page 719 for details.

*AUDVAL          Object audit value     *NONE, *CHANGE, *ALL

*AUT             File authorities       Group which checks attributes *AUTL, *PGP,
                                        *PRVAUTIND, *PUBAUTIND

*AUTL            Authority list name    *NONE, list name

*BASEDONPF (2)   Name of based-on       33 character name in the format:
                 physical file member   library/file(member)

*BASIC           Pre-determined set     Group which checks a pre-determined set of
                 of basic attributes    attributes.
                                        When *FILE is specified for the Comparison
                                        level (CMPLVL), these attributes are compared:
                                        *CST (group), *NBRMBR, *OBJATR, *RCDFMT,
                                        *TEXT, and *TRIGGER (group).
                                        When *MBR is specified for the Comparison level
                                        (CMPLVL) for a file identified by a data group
                                        file entry, these attributes are compared:
                                        *EXPDATE, *OBJATR, *SHARE, and *TEXT.
                                        When *MBR is specified for the Comparison level
                                        (CMPLVL) for a file that is not identified by a
                                        data group file entry, these attributes are
                                        compared: *CURRCDS, *EXPDATE, *NBRDLTRCD,
                                        *OBJATR, *SHARE, and *TEXT.

*CCSID (1)       Coded character set    1-65535

*CST             Constraint attributes  Group which checks attributes *CSTIND, *CSTNBR

*CSTIND (4)      Constraint equal       No value, indicator only (7)
                 indicator              When this attribute is returned in output, its
                                        Difference Indicator value indicates if the
                                        number of constraints, constraint names,
                                        constraint types, and the check pending
                                        attribute are equal. For referential and check
                                        constraints, the constraint state as well as
                                        whether the constraint status is enabled or
                                        disabled is also compared.

*CSTNBR (4)      Number of constraints  Numeric value

*CURRCDS         Current number of      0-4294967295
                 records

*DBCSCAP         DBCS capable           *YES, *NO

*DBR                                    Group which checks *DBRIND, *OBJATR

*DBRIND (4)      Database relations     No value, indicator only (7)
                                        When this attribute is returned in output, its
                                        Difference Indicator value indicates if the
                                        number of database relations and the dependent
                                        file names are equal.

*EXPDATE (1)     Expiration date for    Blank for *NONE or date in CYYMMDD format,
                 member                 where C equals the century. Value 0 is 19nn and
                                        1 is 20nn.

*EXTENDED        Pre-determined,        Valid only for Comparison level of *FILE, this
                 extended set           group compares the basic set of attributes
                                        (*BASIC) plus an extended set of attributes.
                                        The following attributes are compared: *ACCPTH,
                                        *AUT (group), *CCSID, *CST (group), *CURRCDS,
                                        *DBR (group), *MAXKEYL, *MAXMBRS, *MAXRCDL,
                                        *NBRMBR, *OBJATR, *OWNER, *PFSIZE (group),
                                        *RCDFMT, *REUSEDLT, *SELOMT, *SQLTYP, *TEXT,
                                        and *TRIGGER (group).

*FIRSTMBR (1,5)  Name of member *FIRST  10 character name
                                        *NONE if the file has no members.

*FRCKEY (1)      Force keyed access     *YES, *NO
                 path

*FRCRATIO (1)    Records to force a     *NONE, 1-32767
                 write

*INCRCDS (1)     Increment number of    0-32767
                 records

*JOIN            Join logical file      *YES, *NO
                                        Add, update, and delete authorities are not
                                        checked. Differences in these authorities do
                                        not result in an *NE condition.

*JOURNAL         Journal attributes     Group which checks *JOURNALED, *JRN, *JRNLIB,
                                        *JRNIMG, *JRNOMIT. Results are described in
                                        “Comparison results for journal status and
                                        other journal attributes” on page 715.

*JOURNALED       File is currently      *YES, *NO
                 journaled

*JRN             Current or last        10 character name, blank if never journaled
                 journal

*JRNIMG          Record images          *AFTER, *BOTH

*JRNLIB          Current or last        10 character name, blank if never journaled
                 journal library

*JRNOMIT         Journal entries to     *OPNCLO, *NONE
                 be omitted

*LANGID (1)      Language ID            3 character ID

*LASTMBR (1,5)   Name of member *LAST   10 character name
                                        *NONE if the file has no members.

*LONGNAME        SQL long name          long SQL name (128 char value)

*LVLCHK (1)      Record format level    *YES, *NO
                 check

*MAINT (1)       Access path            *IMMED, *REBLD, *DLY (6)
                 maintenance

*MAXINC (1)      Maximum increments     0-32767

*MAXKEYL (1)     Maximum key length     1-2000

*MAXMBRS (1)     Maximum members        *NOMAX, 1-32767

*MAXPCT (1)      Max % deleted          *NONE, 1-100
                 records allowed

*MAXRCDL (1)     Maximum record         1-32766
                 length

*NBRDLTRCD (1)   Current number of      0-4294967295
                 deleted records

*NBRMBR (1)      Number of members      0-32767

*NBRRCDS (1)     Initial number of      *NOMAX, 1-2147483646
                 records

*OBJCTLLVL (1)   Object control level   8 character user-defined value

*OWNER           File owner             User profile name

*PFSIZE          File size attributes   Group which checks *CURRCDS, *INCRCDS,
                                        *MAXINC, *NBRDLTRCD, *NBRRCDS

*PGP             Primary group          *NONE, user profile name

*PRVAUTIND       Private authority      No value, indicator only (7)
                 indicator              When this attribute is returned in output, its
                                        Difference Indicator value indicates if the
                                        number of private authorities and private
                                        authority values are equal.

*PUBAUTIND       Public authority       No value, indicator only (7)
                 indicator              When this attribute is returned in output, its
                                        Difference Indicator value indicates if public
                                        authority values are equal.

*RCDFMT          Number of record       1-32
                 formats

*RECOVER (1)     Access path recovery   *IPL, *AFTIPL, *NO

*REUSEDLT (1)    Reuse deleted records  *YES, *NO

*SELOMT          Select / omit file     *YES, *NO

*SHARE (1)       Share open data path   *YES, *NO

*SQLTYP          SQL file type          PF Types - NONE, TABLE
                                        LF Types - INDEX, VIEW, NONE

*SRCMBRTYP       Source physical file   10 character value
                 member type

*TEXT (1)        Text description       50 character value

*TRIGGER                                Group which checks *TRGIND, *TRGNBR,
                                        *TRGXSTIND

*TRGIND (4)      Trigger equal          No value, indicator only (7)
                 indicator              When this attribute is returned in output, its
                                        Difference Indicator value indicates whether it
                                        is enabled or disabled, and if the number of
                                        triggers, trigger names, trigger time, trigger
                                        event, and trigger condition with an event type
                                        of ‘update’ are equal.

*TRGNBR (4)      Number of triggers     Numeric value

*TRGXSTIND (4)   Trigger existence      No value, indicator only (7)
                 indicator              When this attribute is returned in output, its
                                        Difference Indicator value indicates if a
                                        trigger program exists on the system.

*USRATR          User-defined           10 character user-defined value
                 attribute

*WAITFILE (1)    Maximum file wait      *IMMED, *CLS, 1-32767
                 time

*WAITRCD (1)     Maximum record wait    *IMMED, *NOMAX, 1-32767
                 time

1. Differences detected for this attribute are marked as *EC (equal configuration)
   when the compare request specified a data group and the object is configured
   for system journal replication with a configured object auditing value of
   *NONE.
2. This attribute is only compared for logical file members by the #FILATRMBR
   audit.
3. Differences detected for this attribute are marked as *EC (equal configuration)
   when the source is set to MAINT(*DLY) and the value for ACCPTHVLD is *NO.
4. This attribute cannot be specified as input for comparing but it is included
   in a group attribute. When the group attribute is checked, this value may
   appear in the output.
5. Differences detected for this attribute are marked as *EC (equal configuration)
   when the compare request specified a data group and the file is configured for
   system journal replication with a configured Omit content (OMTDTA) value of
   *FILE.
6. Differences detected for this attribute are marked as *EC (equal configuration)
   when the source is set to *IMMED and the target is set to *DLY by Access Path
   Maintenance in installations running 7.1.15.00 and higher or by Parallel
   Access Path Maintenance running on earlier levels.
7. If *PRINT is specified in the comparison, an indicator appears in the system 1
   and system 2 columns. If *OUTFILE is specified, however, these values are
   blank.

Attributes compared and expected results - #OBJATR audit
The #OBJATR audit calls the Compare Object Attributes (CMPOBJA) command and
places the results in an output file. Table 131 lists the attributes that can be compared
by the CMPOBJA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The command supports attributes that are common
among most library-based objects as well as extended attributes which are unique to
specific object types, such as subsystem descriptions, user profiles, and data areas.
The Returned Values column lists the values you can expect in the System1 Value
(SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the
compare.

Table 131. Compare Object Attributes (CMPOBJA) attributes

Attribute        Description            Returned Values (SYS1VAL, SYS2VAL)

*ACCPTHSIZ (1,2) Access path size       *MAX4GB and *MAX1TB
                 Valid for logical
                 files only.

*AJEIND          Auto start job         No value, indicator only (3)
                 entries.               When this attribute is returned in output, its
                 Valid for subsystem    Difference Indicator value indicates if the
                 descriptions only.     number of auto start job entries, job entry and
                                        associated job description, and library entry
                                        values are equal.

*ASP             Auxiliary storage      1-16 (pre-V5R1)
                 pool ID                1-32 (V5R1)
                                        1-255 (V5R2), 1 = System ASP
                                        See “Comparison results for auxiliary storage
                                        pool ID (*ASP)” on page 719 for details.

*ASPNBR          Number of defined      Numeric value
                 storage pools.
                 Valid for subsystem
                 descriptions only.

*ATTNPGM (2)     Attention key          *SYSVAL, *NONE, *ASSIST, attention program
                 handling program       name
                 Valid for user
                 profiles only.

*AUDVAL          Object audit value     *NONE, *USRPRF, *CHANGE, *ALL

*AUT             Authority attributes   Group which checks *AUTL, *PGP, *PRVAUTIND,
                                        *PUBAUTIND

*AUTCHK (2)      Authority to check.    *OWNER, *DTAAUT
                 Valid for job queues
                 only.

*AUTL            Authority list name    *NONE, list name

*BASIC           Pre-determined set     Group which checks a pre-determined set of
                 of basic attributes    attributes. These attributes are compared:
                                        *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and
                                        *USRATR.

*CCSID2 Character identifier *SYSVAL, ccsid-value


control.
Valid for user profiles
only.

*CNTRYID2 Country ID *SYSVAL, country-id


Valid for user profiles
only.

*COMMEIND Communications entries No value, indicator only3


Valid for subsystem When this attribute is returned in output, its Difference
descriptions only. Indicator value indicates if the number of communication
entries, maximum number of active jobs, communication
device, communication mode, associated job description
and library, and the default user entry values are equal.

*CRTAUT2 Authority given to users *SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE, *SYSVAL,
who do not have specific *CHANGE, *ALL, *USE, *EXCLUDE
authority to the object.
Valid for libraries only.

*CRTOBJAUD2 Auditing value for objects *SYSVAL, *NONE, *USRPRF, *CHANGE, *ALL
created in this library
Valid for libraries only.

*CRTOBJOWN Profile that owns objects *USRPRF, *GRPPRF, profile-name


created by user
Valid for user profiles
only.

*CRTTSP Object creation date YYYY-MM-DD-HH.MM.SS.mmmmmm

*CURLIB Current library *CRTDFT, current-library


Valid for user profiles
only.

*DATACRC Data cyclic redundancy 10 character value


check (CRC)
Valid for data queues2,
query definitions,
validation list entries, and
query management
forms4 only.

*DDMCNV2 DDM conversation *KEEP, *DROP


Valid for job descriptions
only.

*DECPOS Decimal positions 0-9


Valid for data areas only.

*DOMAIN Object Domain *SYSTEM, *USER

*DTAARAEXT Data area extended Group which checks *DECPOS, *LENGTH, *TYPE, *VALUE
attributes

*EXTENDED Pre-determined, Group which compares the basic set of attributes (*BASIC)
extended set plus an extended set of attributes. The following attributes
are compared: *AUT, *CRTTSP, *DOMAIN, *INFSTS,
*OBJATR, *TEXT, and *USRATR.

*FRCRATIO1 2 Records to force a write *NONE, 1 - 32,767


Valid for logical files only.

*GID Group profile ID number 1 - 4294967294


Valid for user profiles
only.

*GRPAUT Group authority to *NONE, *ALL, *CHANGE, *USE, *EXCLUDE


created objects
Valid for user profiles
only.

*GRPAUTTYP Group authority type *PGP, *PRIVATE


Valid for user profiles
only.

*GRPPRF Group profile name *NONE, profile-name


Valid for user profiles
only.

*INFSTS Information status *OK (No errors occurred), *RTVFAILED (No information
returned - insufficient authority or object is locked),
*DAMAGED (Object is damaged or partially damaged).

*INLMNU Initial menu Menu - *SIGNOFF, menu name


Valid for user profiles Library - *LIBL, library name
only.

*INLPGM Initial program Program - *NONE, program name


Valid for user profiles Library - *LIBL, library name
only.

*JOBDEXT Job description extended Group which checks *DDMCNV, *JOBQ, *JOBQLIB,
attributes *JOBQPRI, *LIBLIND, *LOGOUTPUT, *OUTQ, *OUTQLIB,
*OUTQPRI, *PRTDEV

*JOBQ2 Job queue 10 character name


Valid for job descriptions
only.

*JOBQEIND Job queue entries No value, indicator only3


Valid for subsystem When this attribute is returned in output, its Difference
descriptions only. Indicator value indicates if the number of job queue entries,
job queue names, job queue libraries, and order of entries
are the same

*JOBQEXT Job queue extended Group which checks *AUTCHK, *JOBQSBS, *JOBQSTS,
attributes *OPRCTL

*JOBQLIB2 Job queue library 10 character name


Valid for job descriptions
only.

*JOBQPRI2 Job queue priority 1 (highest) - 9 (lowest)


Valid for job descriptions
only.

*JOBQSBS2 Subsystem that receives Subsystem name


jobs from this queue
Valid for job queues only.

*JOBQSTS2 Job queue status HELD, RELEASED


Valid for job queues only.

*JOURNAL Journal attributes Group which checks *JOURNALED, *JRN, *JRNLIB,


*JRNIMG, *JRNOMIT5.
Results are described in “Comparison results for journal
status and other journal attributes” on page 715.

*JOURNALED Object is currently *YES, *NO


journaled

*JRN Current or last journal 10 character name

*JRNIMG Record images *AFTER, *BOTH

*JRNLIB Current or last journal 10 character name


library

*JRNOMIT Journal entries to be *OPNCLO, *NONE


omitted

*LANGID2 Language ID *SYSVAL, language-id


Valid for user profiles
only.

*LENGTH Data area length 1-2000 (character), 1-24 (decimal), 1 (logical)


Valid for data areas only

*LIBEXT Extended library Group which checks *CRTAUT, *CRTOBJAUD


information attributes

*LIBLIND Initial library list No value, indicator only3


Valid for job descriptions When this attribute is returned in output, its Difference
only. Indicator value indicates if the number of library list entries
and entry list values are equal. The comparison is order
dependent.

*LMTCPB Limit capabilities *PARTIAL, *YES, *NO


Valid for user profiles
only.

*LOGOUTPUT2 Job log output *SYSVAL, *JOBLOGSVR, *JOBEND, *PND


Valid for job descriptions
only.

*LVLCHK1 2 Record format level *YES, *NO


check
Valid for logical files only.

*MAINT1 2 6 Access path *DLY, *IMMED, *REBLD


maintenance
Valid for logical files only.

*MAXACT 2 Maximum active jobs Numeric value, *NOMAX (32,767)


Valid for subsystem
descriptions only.

*MAXMBRS1 2 Maximum members *NOMAX, 1 - 32,767


Valid for logical files only.

*MAXSTG7 Maximum allowed Numeric value, *NOMAX (2,147,483,647KB for IBM i 7.1
storage and earlier releases, 9,223,372,036,854,775,807KB for IBM
Valid for user profiles i 7.2 and higher releases)
only. Not compared for
QSECOFR or QTCM
user profiles.

*MSGQ2 Message queue Message queue - message queue name


Valid for user profiles Library - *LIBL, library name
only.

*NBRMBR1 2 Number of logical file 0 - 32,767


members
Valid for logical files only.

*NWSUSRA Network server user No value, indicator only3


attribute
Valid for user profiles
only.

*OBJATR Object attribute 10 character object extended attribute

*OBJCTLLVL2 Object control level 8 character user-defined value


Valid for object types that
support this attribute8.

*OPRCTL2 Operator controlled *YES, *NO


Valid for job queues only.

*OUTQ2 Output queue *USRPRF, *DEV, *WRKSTN, output queue name


Valid for job descriptions
only.

*OUTQLIB2 Output queue library 10 character name


Valid for job descriptions
only.

*OUTQPRI2 Output queue priority 1 (highest) - 9 (lowest)


Valid for job descriptions
only.

*OWNER Object owner 10 character name

*PGP Primary group *NONE, user profile name

*PRESTIND Pre-start job entries No value, indicator only3


Valid for subsystem When this attribute is returned in output, its Difference
descriptions only. Indicator value indicates if the number of prestart jobs,
program, user profile, start job, wait for job, initial jobs,
maximum jobs, additional jobs, threshold, maximum users,
job name, job description, first and second class, and
number of first and second class jobs values are equal.

*PRFOUTQ2 Output queue *LIBL/*WRKSTN, *DEV


Valid for user profiles
only.

*PRFPWDIND User profile password See “Comparison results for user profile password
indicator (*PRFPWDIND)” on page 725 for details.

*PRTDEV2 Printer device *USRPRF, *SYSVAL, *WRKSTN, printer device name


Valid for job descriptions
only.

*PRVAUTIND Private authority indicator No value, indicator only3


When this attribute is returned in output, its Difference
Indicator value indicates if the number of private authorities
and private authority values are equal

*PUBAUTIND Public authority indicator No value, indicator only3


When this attribute is returned in output, its Difference
Indicator value indicates if the public authority values are
equal.

*PWDEXPITV Password expiration *SYSVAL, *NOMAX, 1-366 days


interval
Valid for user profiles
only.

*PWDIND No password indicator *YES (no password), *NO (password)


Valid for user profiles
only.

*QUEALCIND Job queue allocation No value, indicator only3


indicator When this attribute is returned in output, its Difference
Valid for subsystem Indicator value indicates if the job queue entries for a
descriptions only. subsystem are in the same order and have the same queue
names and queue library names. It also compares the
allocation indicator values

*RLOCIND Remote location entries No value, indicator only3


Valid for subsystem When this attribute is returned in output, its Difference
descriptions only. Indicator value indicates if the number of remote location
entries, remote location, mode, job description and library,
maximum active jobs, and default user entry values are
equal.

*RTGEIND Routing entries No value, indicator only3


Valid for subsystem When this attribute is returned in output, its Difference
descriptions only. Indicator value indicates if the number of routing entries,
sequence number, maximum active, steps, compare start,
entry program, class, and compare entry values are equal

*SBSDEXT Subsystem description Group which checks *AJEIND, *ASPNBR, *COMMEIND,


extended attributes *JOBQEIND, *MAXACT, *PRESTIND, *RLOCIND,
*RTGEIND, *SBSDSTS

*SBSDSTS2 Subsystem status *ACTIVE, *INACTIVE


Valid for subsystem
descriptions only.

*SIZE Object size Numeric value

*SPCAUTIND Special authorities No value, indicator only3


Valid for user profiles When this attribute is returned in output, its Difference
only. Indicator value indicates if special authority values are equal

*SQLSP SQL stored procedures *NONE, or indicator only3


Valid for programs and *NONE is returned when there are no stored procedures
service programs only. associated with the program or service program.
When the indicator only is returned in output, the Difference
Indicator value identifies whether SQL stored procedures
associated with the object are equal.

*SQLUDF SQL user defined *NONE, or indicator only3


functions *NONE is returned when there are no user defined functions
Valid for programs and associated with the program or service program.
service programs only. When the indicator only is returned in output, the Difference
Indicator value identifies whether SQL user defined
functions associated with the object are equal.

*SUPGRPIND Supplemental Groups No value, indicator only3


Valid for user profiles When this attribute is returned in output, its Difference
only. Indicator value indicates if supplemental group values are
equal

*TEXT2 Text description 50 character description

*TYPE Data area type - data *CHAR, *DEC, *LGL


area types of DDM
resolved to actual data
area types
Valid for data areas only.

*UID User profile ID number 1 - 4294967294


Valid for user profiles
only.

*USRATR2 User-defined attribute 10 character user-defined value

*USRCLS User Class *SECOFR, *SECADM, *PGMR, *SYSOPR, *USER


Valid for user profiles
only.

*USREXPDAT User expiration date Date (in job format of the job running CMPOBJA), *NONE,
Valid for user profiles *USREXPITV
only. This attribute is only available on systems running IBM i 7.1
and higher.

*USREXPITV User expiration interval 1-366 when the user profile specifies
Valid for user profiles USREXPDATE(*USREXPITV), otherwise 0 is returned.
only. This attribute is only available on systems running IBM i 7.1
and higher.

*USRPRFEXT User profile extended Group which checks *ATTNPGM, *CCSID, *CNTRYID,
attributes *CRTOBJOWN, *CURLIB, *GRPAUT, *GRPAUTTYP,
*GRPPRF, *INLMNU, *INLPGM, *JOBD, *LANGID,
*LMTCPB, *MAXSTG *MSGQ, *PRFOUTQ, *PWDEXPITV,
*PWDIND, *SPCAUTIND, *SUPGRPIND, *USRCLS,
*USREXPDAT, *USREXPITV.

*USRPRFSTS User profile status *ENABLED, *DISABLED9


For details, see “Comparison results for user profile status
(*USRPRFSTS)” on page 722.

*VALUE2 Data area value Character value of data


Valid for data areas only.
1. This attribute only applies to logical files. Use the Compare File Attributes (CMPFILA) command to compare or omit
physical file attributes.
2. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a
data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
3. If *PRINT is specified for the output format on the compare request, an indicator appears in the System 1 and System
2 columns. If *OUTFILE is specified, these values are blank.
4. A more thorough check of this attribute is performed for *QRYDFN object types when the IBM i release is the same on
both systems.
5. These attributes are compared for object types of *FILE, *DTAQ, and *DTAARA. These are the only objects supported
by IBM's user journals.
6. Differences detected for this attribute are marked *EC (equal configuration) when the source is set to *IMMED and the
target is set to *DLY by Access Path Maintenance in installations running 7.1.15.00 and higher.
7. On systems running IBM i 7.2 or higher, the MAXSTGLRG field supports a larger value than the MAXSTG field. When
comparing attributes between a system that supports only MAXSTG and a system that supports both MAXSTG and
MAXSTGLRG, the numeric values are compared directly unless the MAXSTGLRG value is greater than the maximum
supported value for MAXSTG on the system running an earlier release. When the MAXSTGLRG value is greater, the
MAXSTG value on the other system must be *NOMAX for the attribute to be marked as equal (*EC).
8. The *OBJCTLLVL attribute is only supported on the following object types: *AUTL, *CNNL, *COSD, *CTLD, *DEVD,
*DTAARA, *DTAQ, *FILE, *IPXD, *LIB, *LIND, *MODD, *NTBD, *NWID, *NWSD, and *USRPRF.
9. The profile status is only compared if no data group is specified or the USRPRFSTS has a value of *SRC for the
specified data group. If a data group is specified on the CMPOBJA command and the USRPRFSTS value on the object
entry has a value of *TGT, *ENABLED, or *DISABLED, the user profile status is not compared.

Attributes compared and expected results - #IFSATR audit
The #IFSATR audit calls the Compare IFS Attributes (CMPIFSA) command and
places the results in an output file. Table 132 lists the attributes that can be compared
by the CMPIFSA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The Returned Values column lists the values you
can expect in the System1 Value (SYS1VAL) and System 2 Value (SYS2VAL)
columns as a result of running the compare.

Table 132. Compare IFS Attributes (CMPIFSA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

*ALWSAV1 Allow save *YES, *NO

*ASP Auxiliary storage pool 1-16 (pre-V5R1)


1-32 (V5R1)
1-255 (V5R2), 1 = System ASP
See “Comparison results for auxiliary storage pool ID
(*ASP)” on page 719 for details.

*AUDVAL Object auditing value *ALL, *CHANGE, *NONE, *USRPRF

*AUT Authority attributes Group which checks attributes *AUTL, *PGP,


*PUBAUTIND, *PRVAUTIND

*AUTL Authority list name *NONE, list name

*BASIC Pre-determined set of Group which checks a pre-determined set of attributes. The
basic attributes following set of attributes are compared: *CCSID,
*DATASIZE, *OBJTYPE, and the group *PCATTR.

*CCSID1 Coded character set 1-65535

*CRTTSP2 Create timestamp SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)

*DATACRC3 Data cyclic redundancy 8 character value


check (CRC)

*DATASIZE1 Data size 0-4294967295

*EXTENDED Pre-determined, Group which checks a pre-determined set of attributes.


extended set Compares the basic set of attributes (*BASIC) plus an
extended set of attributes. The following attributes are
compared: *AUT (group), *CCSID, *DATASIZE,
*OBJTYPE, *OWNER, and *PCATTR (group).

*EXTAUT4 Extended authority for A bit string indicates the permissions and privileges of the
permissions to IFS file.
objects in QSHELL.

*FID5 File ID 20 character hex value

*JOURNAL Journal information Groups which checks attributes *JOURNALED, *JRN,


*JRNLIB, *JRNIMG, *JRNOPT. Results are described in
“Comparison results for journal status and other journal
attributes” on page 715.

*JOURNALED File is currently journaled *YES, *NO

*JRN Current or last journal 10 character name

*JRNIMG Record images *AFTER, *BOTH

*JRNLIB Current or last journal 10 character name


library

*JRNOPT Journal optional entries *YES, *NO

*OBJTYPE Object type *STMF, *DIR, *SYMLNK

*OWNER File owner 10 character name

*PCARCHIVE1 Archived file *YES, *NO

*PCATTR PC Attributes Group which checks *PCARCHIVE, *PCHIDDEN,


*PCREADO, *PCSYSTEM

*PCHIDDEN1 Hidden file *YES, *NO

*PCREADO1 Read only attribute *YES, *NO

*PCSYSTEM1 System file *YES, *NO

*PGP Primary group *NONE, user profile name

*PRVAUTIND Private authority indicator No value, indicator only6


When this attribute is returned in output, its Difference
Indicator value indicates if the number of private authorities
and private authority values are equal.

*PUBAUTIND Public authority indicator No value, indicator only6


When this attribute is returned in output, its Difference
Indicator value indicates if the public authority values are
equal.

*SETGID Set effective group ID *YES, *NO


(GID)

*SETUID Set effective user ID *YES, *NO


(UID)
1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a
data group and the object is configured for system journal replication with a configured object auditing value of
*NONE.
2. The *CRTTSP attribute does not compare directories (*DIR) or symbolic links (*SYMLNK). For stream files (*STMF),
the #IFSATR audit omits the *CRTTSP attribute from comparison since creation timestamps are not preserved during
replication. Running the CMPIFSA command will detect differences in the creation timestamps for stream files.
3. When a stream file has Storage Freed *YES on either the source system or target system, the status of this attribute is
reflected as not supported (*NS) and the data has not been compared.
4. When *EXTAUT is selected, the read, write, and search/execute permissions for owner, group, and public
authority are selected.

5. The *FID attribute checks to ensure that stream files referenced by multiple hard links are the same on each system. It
does not compare the actual FIDs; instead it checks to ensure that objects with the equivalent FID on one system
have matching FIDs on the other system.
6. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is
specified, these values are blank.

Attributes compared and expected results - #DLOATR audit
The #DLOATR audit calls the Compare DLO Attributes (CMPDLOA) command and
places the results in an output file. Table 133 lists the attributes that can be compared
by the CMPDLOA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The Returned Values column lists the values you
can expect in the System1 Value (SYS1VAL) and System 2 Value (SYS2VAL)
columns as a result of running the compare.

Table 133. Compare DLO Attributes (CMPDLOA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

*ASP Auxiliary storage pool 1-16 (pre-V5R1)


1-32 (V5R1)
1 = System ASP
See “Comparison results for auxiliary storage pool ID
(*ASP)” on page 719 for details.

*AUDVAL Object audit value *NONE, *USRPRF, *CHANGE, *ALL

*AUT Authority attributes Group which checks *AUTL, *PGP, *PUBAUTIND,


*PRVAUTIND

*AUTL Authority list name *NONE, list name

*BASIC Pre-determined set of basic Group which checks a pre-determined set of


attributes attributes. The following set of attributes are
compared: *CCSID, *DATASIZE, *OBJTYPE,
*PCATTR, and *TEXT.

*CCSID Coded character set 1-65535

*CRTTSP Create timestamp SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)

*DATACRC1 Data cyclic redundancy check 8 character value


(CRC)

*DATASIZE2 Data size 0-4294967295

*EXTENDED Pre-determined, extended set Group which checks a pre-determined set of


attributes. Compares the basic set of attributes
(*BASIC) plus an extended set of attributes. The
following attributes are compared *AUT, *CCSID,
*DATASIZE, *OBJTYPE, *OWNER, *PCATTR, and
*TEXT.

*MODTSP Modify timestamp SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)

*OBJTYPE3 Object type *DOC, *FLR

*OWNER File owner 10 character name

*PCARCHIVE Archived file *YES, *NO

*PCATTR PC Attributes Group which checks *PCARCHIVE, *PCHIDDEN,


*PCREADO, *PCSYSTEM

*PCHIDDEN Hidden file *YES, *NO

*PCREADO Read only attribute *YES, *NO

*PCSYSTEM System file *YES, *NO

*PGP Primary group *NONE, user profile name

*PRVAUTIND Private authority indicator No value, indicator only4


When this attribute is returned in output, its Difference
Indicator value indicates if the number of private authorities
and private authority values are equal.

*PUBAUTIND Public authority indicator No value, indicator only4


When this attribute is returned in output, its Difference
Indicator value indicates if the public authority values are equal.

*TEXT Text description 50 character description


1. When a DLO object has Storage Freed *YES on either the source system or target system, the status of this attribute
is reflected as not supported (*NS) and the data has not been compared.
2. This attribute is not supported for DLOs with an object type of *FLR.
3. This attribute is always compared.
4. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is
specified, these values are blank.

Comparison results for journal status and other journal attributes
The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA),
and Compare IFS Attributes (CMPIFSA) commands support comparing the journaling
attributes listed in Table 134 for objects replicated from the user journal. These
commands function similarly when comparing journaling attributes.
When a compare is requested, MIMIX determines the result displayed in the
Differences Indicator field by considering whether the file is journaled, whether the
request includes a data group, and the data group’s configured settings for journaling.
Regardless of which journaling attribute is specified on the command, MIMIX always
checks the journaling status first (*JOURNALED attribute). If the file or object is
journaled on both systems, MIMIX then considers whether the command specified a
data group definition before comparing any other requested attribute.

Table 134. Journaling attributes

When specified on the CMPOBJA command, these values apply only to files, data areas,
or data queues. When specified on the CMPFILA command, these values apply only to
PF-DTA and PF38-DTA files.

*JOURNAL Object journal information attributes. This value acts as a group


selection, causing all other journaling attributes to be selected

*JOURNALED Journal Status. Indicates whether the object is currently being


journaled. This attribute is always compared when any of the other
journaling attributes are selected.

*JRN 1 Journal. Indicates the name of the current or last journal. If blank, the
object has never been journaled.

*JRNIMG 1 2 Journal Image. Indicates the kinds of images that are written to the
journal receiver for changes to objects.

*JRNLIB 1 Journal Library. Identifies the library that contains the journal. If blank,
the object has never been journaled.

*JRNOMIT 1 Journal Omit. Indicates whether file open and close journal entries
are omitted.
1. When these values are specified on a Compare command, the journal status (*JOURNALED)
attribute is always evaluated first. The result of the journal status comparison determines whether the
command will compare the specified attribute.
2. Although *JRNIMG can be specified on the CMPIFSA command, it is not compared even when the
journal status is as expected. The journal image status is reflected as not supported (*NS) because
the operating system only supports after (*AFTER) images.

Compares that do not specify a data group - When no data group is specified on
the compare request, MIMIX compares the journaled status (*JOURNALED attribute).
Table 135 shows the result displayed in the Differences Indicator field. If the file or

object is not journaled on both systems, the compare ends. If the object is journaled on both the
source and target systems, MIMIX then compares any other specified journaling attribute.

Table 135. Difference indicator values for *JOURNALED attribute when no data group is
specified

                                 Difference Indicator
                                 Target Journal Status 1
Source Journal Status 1       Yes        No         *NOTFOUND
  Yes                         *EQ        *NE        *NE
  No                          *NE        *EQ        *NE
  *NOTFOUND                   *NE        *NE        *UN
1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
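The decision that Table 135 describes can be summarized in a few lines. The following Python sketch is purely an illustrative model of the documented table, not MIMIX code; the function name and the exact status-string spellings are assumptions made for this example:

```python
# Illustrative model of Table 135 (not MIMIX code): the Difference
# Indicator for the *JOURNALED attribute when no data group is specified.
def journaled_diff_indicator(source: str, target: str) -> str:
    """source and target are the journal statuses returned in the
    SYS1VAL and SYS2VAL fields: "YES", "NO", or "*NOTFOUND"."""
    if source == "*NOTFOUND" and target == "*NOTFOUND":
        return "*UN"  # neither object found; the result is undetermined
    if source == target:
        return "*EQ"  # journal status matches on both systems
    return "*NE"      # statuses differ, including a one-sided *NOTFOUND
```

As in the table, any combination where exactly one side is *NOTFOUND compares as *NE.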

Compares that specify a data group - When a data group is specified on the
compare request, MIMIX compares the journaled status (*JOURNALED attribute) to
the configuration values. If both source and target systems are journaled according to
the expected configuration settings, then MIMIX compares any other specified
journaling attribute against the configuration settings.
The Compare commands vary slightly in which configuration settings are checked.
• For CMPFILA requests, if the journaled status is as configured, any other
specified journal attributes are compared. Possible results from comparing the
*JOURNALED attribute are shown in Table 136.
• For CMPOBJA and CMPIFSA requests, if the journaled status is as configured
and the configuration specifies *YES for Cooperate with database (COOPDB),
then any other specified journal attributes are compared. Possible results from
comparing the *JOURNALED attribute are shown in Table 136 and Table 137. If
the configuration specifies COOPDB(*NO), only the journaled status is compared;
possible results are shown in Table 138.
Table 136, Table 137, and Table 138 show results for the *JOURNALED attribute that
can appear in the Difference Indicator field when the compare request specified a
data group and considered the configuration settings.

Table 136 shows results when the configured settings for Journal on target and
Cooperate with database are both *YES.

Table 136. Difference indicator values for *JOURNALED attribute when a data group is
specified and the configuration specifies *YES for JRNTGT and COOPDB

                                 Difference Indicator
                                 Target Journal Status 1
Source Journal Status 1       Yes        No         *NOTFOUND
  Yes                         *EC        *EC        *NE
  No                          *NC        *NC        *NE
  *NOTFOUND                   *NE        *NE        *UN
1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.

Table 137 shows results when the configured settings are *NO for Journal on target
and *YES for Cooperate with database.

Table 137. Difference indicator values for *JOURNALED attribute when a data group is
specified and the configuration specifies *NO for JRNTGT and *YES for COOPDB.

                                 Difference Indicator
                                 Target Journal Status 1
Source Journal Status 1       Yes        No         *NOTFOUND
  Yes                         *NC        *EC        *NE
  No                          *NC        *NC        *NE
  *NOTFOUND                   *NE        *NE        *UN
1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.

Table 138 shows results when the configured setting for Cooperate with database is
*NO. In this scenario, you may want to investigate further. Even though the Difference
Indicator shows values marked as configured (*EC), the object can be not journaled

on one or both systems. The actual journal status values are returned in the System 1
Value (SYS1VAL) and System 2 Value (SYS2VAL) fields.

Table 138. Difference indicator values for *JOURNALED attribute when a data group is
specified and the configuration specifies *NO for COOPDB.

                                 Difference Indicator
                                 Target Journal Status 1
Source Journal Status 1       Yes        No         *NOTFOUND
  Yes                         *EC        *EC        *NE
  No                          *EC        *EC        *NE
  *NOTFOUND                   *NE        *NE        *UN
1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
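The three data-group cases above (Tables 136, 137, and 138) can be combined into one model. The Python sketch below is illustrative only; the function name and status-string spellings are assumptions for this example, and the logic simply encodes the printed tables:

```python
# Illustrative model of Tables 136-138 (not MIMIX code): the Difference
# Indicator for *JOURNALED when a data group IS specified, so observed
# statuses are judged against the configured JRNTGT and COOPDB settings.
def journaled_diff_with_dg(source: str, target: str,
                           jrntgt: str, coopdb: str) -> str:
    """source/target: "YES", "NO", or "*NOTFOUND".
    jrntgt/coopdb: "*YES" or "*NO" (data group configuration values)."""
    if source == "*NOTFOUND" and target == "*NOTFOUND":
        return "*UN"
    if "*NOTFOUND" in (source, target):
        return "*NE"
    if coopdb == "*NO":
        # Table 138: every found/found combination is marked *EC, even
        # if the object is not journaled on one or both systems.
        return "*EC"
    if jrntgt == "*YES":
        # Table 136: a journaled source matches the expected configuration.
        return "*EC" if source == "YES" else "*NC"
    # Table 137 (JRNTGT=*NO): the expected state is a journaled source
    # and a not-journaled target.
    return "*EC" if (source == "YES" and target == "NO") else "*NC"
```

This also illustrates why Table 138 warrants further investigation: with COOPDB(*NO), *EC is returned even when neither object is journaled.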

How configured journaling settings are determined


When a data group is specified on a compare request, MIMIX also considers
configuration settings when comparing journaling attributes. For comparison
purposes, MIMIX assumes that the source system is journaled and that the target
system is journaled according to configuration settings.
Depending on the command used, there are slight differences in what configuration
settings are checked. The CMPFILA, CMPOBJA, and CMPIFSA commands retrieve
the following configurable journaling attributes from the data group definition:
• The Journal on target (JRNTGT) parameter identifies whether activity replicated
through the user journal is journaled on the target system. The default value is
*YES.
• The System 1 journal definition (JRNDFN1) and System 2 journal definition
(JRNDFN2) values are retrieved and used to determine the source journal, source
journal library, target journal, and target journal library.
• Values for elements Journal image and Omit open/close entries specified in the
File entry options (FEOPT) parameter are retrieved. The default values are
*AFTER and *YES, respectively.
Because the data group’s values for Journal image and Omit open/close entries can
be overridden by a data group file entry or a data group object entry, the CMPFILA
and CMPOBJA commands also retrieve these values from the entries. The values
determined after the order of precedence is resolved, sometimes called the overall
MIMIX configuration values, are used for the compare.
For CMPOBJA and CMPIFSA requests, the value of the Cooperate with database
(COOPDB) parameter is retrieved from the data group object entry or data group IFS
entry. The default value in object entries is *YES, while the default value in IFS entries
is *NO.
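The order-of-precedence resolution described above can be sketched as follows. This is a hypothetical model, not a MIMIX interface: the dictionary keys and the *DGDFT placeholder (meaning "use the data group default") are assumptions made for this example.

```python
# Hypothetical sketch (not MIMIX code) of resolving the overall MIMIX
# configuration values: FEOPT elements from the data group definition can
# be overridden by a data group file entry or data group object entry.
DG_FEOPT = {"journal_image": "*AFTER", "omit_open_close": "*YES"}

def resolve_feopt(dg_feopt: dict, entry_feopt: dict) -> dict:
    """Any element the file or object entry leaves at the assumed
    *DGDFT placeholder falls back to the data group definition."""
    resolved = {}
    for element, dg_value in dg_feopt.items():
        entry_value = entry_feopt.get(element, "*DGDFT")
        resolved[element] = dg_value if entry_value == "*DGDFT" else entry_value
    return resolved
```

For example, a file entry that overrides only Journal image to *BOTH resolves to *BOTH for Journal image and the data group's *YES for Omit open/close entries.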

Comparison results for auxiliary storage pool ID (*ASP)
The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA),
Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA)
commands support comparing the auxiliary storage pool (*ASP) attribute for objects
replicated from the user journal. These commands function similarly.
When a compare is requested, MIMIX determines the result displayed in the
Differences Indicator field by considering whether a data group was specified on the
compare request.
Compares that do not specify a data group - When no data group is specified on
the compare request, MIMIX compares the *ASP attribute for all files or objects that
match the selection criteria specified in the request. Table 139 shows the possible
results in the Difference Indicator field.

Table 139. Difference Indicator values when no data group is specified

                              Difference Indicator
                              Target ASP Value 1
Source ASP Value 1         ASP1       ASP2       *NOTFOUND
  ASP1                     *EQ        *NE        *NE
  ASP2                     *NE        *EQ        *NE
  *NOTFOUND                *NE        *NE        *EQ
1. The returned values for *ASP attribute on the Source and Target systems are shown in the SYS-
1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of
the DTASRC field.

Compares that specify a data group - When a data group is specified on the
compare request (CMPFILA, CMPDLOA, and CMPIFSA commands), MIMIX does not
compare the *ASP attribute. Likewise, when a data group is specified on a CMPOBJA
request for any object type other than libraries (*LIB), MIMIX does not compare the
*ASP attribute. Table 140 shows the possible results in the Difference Indicator field.

Table 140. Difference Indicator values for non-library objects when the request specified a
data group

                              Difference Indicator
                              Target ASP Value 1
Source ASP Value 1         ASP1         ASP2         *NOTFOUND
  ASP1                     *NOTCMPD     *NOTCMPD     *NE
  ASP2                     *NOTCMPD     *NOTCMPD     *NE
  *NOTFOUND                *NE          *NE          *EQ
1. The returned values for *ASP attribute on the Source and Target systems are shown in the SYS-
1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of
the DTASRC field.
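The non-library *ASP behavior in Tables 139 and 140 reduces to a short rule. The Python sketch below is an illustrative model of the documented tables only (the function name and value spellings are assumptions, and MIMIX itself is not Python):

```python
# Illustrative model of Tables 139 and 140 (not MIMIX code): the *ASP
# Difference Indicator with and without a data group on the request,
# for non-library objects.
def asp_diff_indicator(source: str, target: str, data_group: bool) -> str:
    """source/target are the *ASP values from SYS1VAL/SYS2VAL,
    or "*NOTFOUND" when the object does not exist on that system."""
    if source == "*NOTFOUND" and target == "*NOTFOUND":
        return "*EQ"        # both tables mark the double-miss as *EQ
    if "*NOTFOUND" in (source, target):
        return "*NE"
    if data_group:
        return "*NOTCMPD"   # Table 140: *ASP is not compared
    return "*EQ" if source == target else "*NE"
```

Note that, unlike the *JOURNALED tables, the case where neither object is found returns *EQ rather than *UN.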

For CMPOBJA requests which specify a data group and an object type of *LIB,
MIMIX considers configuration settings for the library. Values for the System 1 library
ASP number (LIB1ASP), System 1 library ASP device (LIB1ASPD), System 2 library
ASP number (LIB2ASP), and System 2 library ASP device (LIB2ASPD) are retrieved
from the data group object entry and used in the comparison. Table 141, Table 142,
and Table 143 show the possible results in the Difference Indicator field.
Note: For Table 141, Table 142, and Table 143, the results are the same even if the
system roles are switched.
Table 141 shows the expected values for the ASP attribute when the request specifies
a data group and the configuration specifies *SRCLIB for the System 1 library ASP
number and the data source is system 2. .

Table 141. Difference Indicator values for libraries when a data group is specified and
configured values are LIB1ASP(*SRCLIB) and DTASRC(*SYS2)

                            Target ASP value (1)
  Source ASP value (1)      ASP1    ASP2    *NOTFOUND
  ASP1                      *EC     *NC     *NE
  ASP2                      *NC     *EC     *NE
  *NOTFOUND                 *NE     *NE     *EQ

  1. The returned values for the *ASP attribute on the source and target systems are
  shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is
  target is determined by the value of the DTASRC field.

Table 142 shows the expected values for the ASP attribute when the request specifies
a data group, the configuration specifies 1 for the System 1 library ASP number, and
the data source is system 2.

Table 142. Difference Indicator values for libraries when a data group is specified and
configured values are LIB1ASP(1) and DTASRC(*SYS2)

                            Target ASP value (1)
  Source ASP value (1)      1       2       *NOTFOUND
  1                         *EC     *NC     *NE
  2                         *EC     *NC     *NE
  *NOTFOUND                 *NE     *NE     *EQ

  1. The returned values for the *ASP attribute on the source and target systems are
  shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is
  target is determined by the value of the DTASRC field.

Table 143 shows the expected values for the ASP attribute when the request specifies
a data group, the configuration specifies *ASPDEV for the System 1 library ASP
number, DEVNAME is specified for the System 1 library ASP device, and the data
source is system 2.

Table 143. Difference Indicator values for libraries when a data group is specified and
configured values are LIB1ASP(*ASPDEV), LIB1ASPD(DEVNAME), and DTASRC(*SYS2)

                            Target ASP value (1)
  Source ASP value (1)      DEVNAME    2       *NOTFOUND
  1                         *EC        *NC     *NE
  2                         *EC        *NC     *NE
  *NOTFOUND                 *NE        *NE     *EQ

  1. The returned values for the *ASP attribute on the source and target systems are
  shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is
  target is determined by the value of the DTASRC field.
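The library (*LIB) cases in Tables 141 through 143 all follow one rule: the configured entry determines which ASP the target library is expected to use, and the compare reports *EC or *NC against that expectation. The following is a hypothetical sketch of that rule; the function names and value spellings are illustrative, not MIMIX internals:

```python
def expected_target_asp(cfg_asp, cfg_aspdev, src_asp):
    # Hypothetical resolution of the configured library ASP entries
    # (LIB1ASP/LIB1ASPD or LIB2ASP/LIB2ASPD, whichever names the target).
    if cfg_asp == "*SRCLIB":   # Table 141: mirror the source library's ASP
        return src_asp
    if cfg_asp == "*ASPDEV":   # Table 143: a specific ASP device is expected
        return cfg_aspdev
    return cfg_asp             # Table 142: an explicit ASP number such as "1"

def lib_diff_indicator(cfg_asp, cfg_aspdev, src_asp, tgt_asp):
    # Missing libraries behave as in the other Difference Indicator tables.
    if "*NOTFOUND" in (src_asp, tgt_asp):
        return "*EQ" if src_asp == tgt_asp else "*NE"
    expected = expected_target_asp(cfg_asp, cfg_aspdev, src_asp)
    return "*EC" if tgt_asp == expected else "*NC"
```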

Comparison results for user profile status (*USRPRFSTS)
When comparing the attribute *USRPRFSTS (user profile status) with the Compare
Object Attributes (CMPOBJA) command, MIMIX determines the result displayed in
the Differences Indicator field by considering the following:
• The status values of the object on both the source and target systems
• Configured values for replicating user profile status, at the data group and object
entry levels
• The value of the Data group definition (DGDFN) parameter specified on the
CMPOBJA command.
Compares that do not specify a data group - When the CMPOBJA command does
not specify a data group, MIMIX compares the status values between source and
target systems. The result is displayed in the Differences Indicator field, according to
Table 127 in “Interpreting results of audits that compare attributes” on page 692.
Compares that specify a data group - When the CMPOBJA command specifies a
data group, MIMIX checks the configuration settings and the values on one or both
systems. (For additional information, see “How configured user profile status is
determined” on page 723.)
When the configured value is *SRC, the CMPOBJA command compares the values
on both systems. The user profile status on the target system must be the same as
the status on the source system, otherwise an error condition is reported. Table 144
shows the possible values.

Table 144. Difference Indicator values when configured user profile status is *SRC

                            Target user profile status
  Source status             *ENABLED    *DISABLED    *NOTFOUND
  *ENABLED                  *EC         *NC          *NE
  *DISABLED                 *NC         *EC          *NE
  *NOTFOUND                 *NE         *NE          *UN

When the configured value is *ENABLED or *DISABLED, the CMPOBJA command
checks the target system value against the configured value. If the user profile status
on the target system does not match the configured value, an error condition is
reported. The source system user profile status is not relevant. Table 145 and Table
146 show the possible values when the configured value is *ENABLED or *DISABLED,
respectively.

Table 145. Difference Indicator values when configured user profile status is *ENABLED

                            Target user profile status
  Source status             *ENABLED    *DISABLED    *NOTFOUND
  *ENABLED                  *EC         *NC          *NE
  *DISABLED                 *EC         *NC          *NE
  *NOTFOUND                 *NE         *NE          *UN

Table 146. Difference Indicator values when configured user profile status is *DISABLED

                            Target user profile status
  Source status             *ENABLED    *DISABLED    *NOTFOUND
  *ENABLED                  *NC         *EC          *NE
  *DISABLED                 *NC         *EC          *NE
  *NOTFOUND                 *NE         *NE          *UN

When the configured value is *TGT, the CMPOBJA command does not compare the
values because the result is indeterminate. Any differences in user profile status
between systems are not reported. Table 147 shows possible values.

Table 147. Difference Indicator values when configured user profile status is *TGT

                            Target user profile status
  Source status             *ENABLED    *DISABLED    *NOTFOUND
  *ENABLED                  *NA         *NA          *NE
  *DISABLED                 *NA         *NA          *NE
  *NOTFOUND                 *NE         *NE          *UN
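The four configured values drive the Difference Indicator as sketched below. This is an illustrative model of Tables 144 through 147 with a hypothetical function name, not MIMIX code:

```python
def usrprf_sts_indicator(configured, src_sts, tgt_sts):
    # Illustrative model of Tables 144-147 (hypothetical helper).
    if "*NOTFOUND" in (src_sts, tgt_sts):
        # Missing on both sides is undetermined (*UN); on one side, *NE.
        return "*UN" if src_sts == tgt_sts else "*NE"
    if configured == "*SRC":
        # Table 144: the target status must match the source status.
        return "*EC" if tgt_sts == src_sts else "*NC"
    if configured in ("*ENABLED", "*DISABLED"):
        # Tables 145-146: only the target status matters; it must match
        # the configured value. The source status is not relevant.
        return "*EC" if tgt_sts == configured else "*NC"
    # Table 147: *TGT - the result is indeterminate, so nothing is compared.
    return "*NA"
```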

How configured user profile status is determined

The data group definition determines the behavior for replicating user profile status
unless it is explicitly overridden by a non-default value in a data group object entry.
The value determined after the order of precedence is resolved is sometimes called
the overall MIMIX configuration value. Unless specified otherwise in the data group
definition or in an object entry, the default is to use the value *SRC from the data
group definition. Table 148 shows the possible values at both the data group and
object entry levels.

Table 148. Configuration values for replicating user profile status

  *DGDFT        Only available for data group object entries, this indicates that the
                value specified in the data group definition is used for the user
                profile status. This is the default value for object entries.

  *DISABLE (1)  The status of the user profile is set to *DISABLED when the user
                profile is created or changed on the target system.

  *ENABLE (1)   The status of the user profile is set to *ENABLED when the user
                profile is created or changed on the target system.

  *SRC          This is the default value in the data group definition. The status of
                the user profile on the source system is always used when the user
                profile is created or changed on the target system.

  *TGT          If a new user profile is created, the status is set to *DISABLED. If an
                existing user profile is changed, the status of the user profile on the
                target system is not altered.

  1. Data group definitions use these values. In data group object entries, the values
  *DISABLED and *ENABLED are used but have the same meaning.
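The precedence described above can be sketched as follows. The function name and the normalization map are illustrative assumptions, not MIMIX internals:

```python
def resolve_usrprf_status_cfg(dg_value, object_entry_value="*DGDFT"):
    # Hypothetical sketch of the precedence in Table 148: a non-default
    # object entry value overrides the data group definition; the object
    # entry default *DGDFT defers to the data group value (*SRC by default).
    value = dg_value if object_entry_value == "*DGDFT" else object_entry_value
    # *DISABLE/*ENABLE (data group spelling) mean the same as
    # *DISABLED/*ENABLED (object entry spelling); normalize to one form.
    return {"*DISABLE": "*DISABLED", "*ENABLE": "*ENABLED"}.get(value, value)
```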

Comparison results for user profile password (*PRFPWDIND)
When comparing the attribute *PRFPWDIND (user profile password indicator) with
the Compare Object Attributes (CMPOBJA) command, MIMIX assumes that the user
profile names are the same on both systems. User profile passwords are only
compared if the user profile name is the same on both systems and the user profile of
the local system is enabled and has a defined password.
If the local or remote user profile has a password of *NONE, or if the local user profile
is disabled or expired, the user profile password is not compared. The System
Indicator fields will indicate that the attribute was not compared (*NOTCMPD). The
Difference Indicator field will also return a value of not compared (*NA).
The CMPOBJA command does not support name mapping while comparing the
*PRFPWDIND attribute. If the user profile names are different, or if you attempt name
mapping, the System Indicator fields will indicate that comparing the attribute is not
supported (*NOTSPT). The Difference Indicator field will also return a value of not
supported (*NS).
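The eligibility rules above can be summarized in a short sketch. The function and its return tokens are hypothetical, chosen to mirror the indicator values described in the text:

```python
def prfpwdind_compare_action(same_name, name_mapping,
                             local_enabled, local_expired,
                             local_pwd_none, remote_pwd_none):
    # Hypothetical summary of the *PRFPWDIND eligibility rules.
    if name_mapping or not same_name:
        return "*NS"        # name mapping is not supported (*NOTSPT)
    if local_pwd_none or remote_pwd_none or not local_enabled or local_expired:
        return "*NA"        # passwords are not compared (*NOTCMPD)
    return "COMPARE"        # passwords are actually compared
```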
The following tables identify the expected results when user profile password is
compared. Note that the local system is the system on which the command is being
run, and the remote system is defined as System 2.
Table 149 shows the possible Difference Indicator values when the user profile
passwords are the same on the local and remote systems and are not defined as
*NONE.

Table 149. Difference Indicator values when user profile passwords are the same, but not
*NONE

                            Remote system user profile
  Local system              *ENABLED    *DISABLED    Expired    Not found
  *ENABLED                  *EQ         *EQ          *EQ        *NE
  *DISABLED                 *NA         *NA          *NA        *NE
  Expired                   *NA         *NA          *NA        *NE
  Not found                 *NE         *NE          *NE        *EQ

Table 150 shows the possible Difference Indicator values when the user profile
passwords are different on the local and remote systems and are not defined as
*NONE.

Table 150. Difference Indicator values when user profile passwords are different, but not
*NONE

                            Remote system user profile
  Local system              *ENABLED    *DISABLED    Expired    Not found
  *ENABLED                  *NE         *NE          *NE        *NE
  *DISABLED                 *NA         *NA          *NA        *NE
  Expired                   *NA         *NA          *NA        *NE
  Not found                 *NE         *NE          *NE        *EQ

Table 151 shows the possible Difference Indicator values when the user profile
passwords are defined as *NONE on the local and remote systems.

Table 151. Difference Indicator values when user profile passwords are *NONE

                            Remote system user profile
  Local system              *ENABLED    *DISABLED    Expired    Not found
  *ENABLED                  *NA         *NA          *NA        *NE
  *DISABLED                 *NA         *NA          *NA        *NE
  Expired                   *NA         *NA          *NA        *NE
  Not found                 *NE         *NE          *NE        *EQ


APPENDIX H Journal Codes and Error Codes

This appendix lists journal codes and error codes associated with replication activity,
including:
• “Journal entry codes for files” on page 727 identifies journal codes supported for
files, IFS objects, data areas, and data queues configured for replication through
the user journal. This section also includes a list of error codes associated with
files held due to error.
• “Journal entry codes for system journal transactions” on page 735 identifies
journal codes associated with objects replicated through the system journal.

Journal entry codes for user journal transactions


The following sections identify journal codes associated with user journal replication.
• “Journal entry codes for files” on page 727 lists journal codes associated with files
replicated through the user journal.
• “Error codes for files in error” on page 730 lists the error codes that can be
associated with journal entries for files held due to error.
• “Journal codes and entry types for journaled IFS objects” on page 732 identifies
what B entry types are supported for IFS objects configured for user journal
replication.
• “Journal codes and entry types for journaled data areas and data queues” on
page 733 identifies what E and Q entry types are supported for data area and data
queue objects configured for user journal replication.

Journal entry codes for files

Table 152 lists journal entry codes for user journal transactions that may appear with
a status of on hold due to error (*HLDERR). Journal codes for cooperatively
processed physical files are listed in Table 157.

Table 152. Journal entry codes and types supported for files

  Code  Type  Description                                                         Notes
  C     CM    Set of record changes committed                                     1
  C     CN    Rollback ended early
  C     EB    Internal entry for constraint handling
  C     RB    Set of record changes rolled back                                   1
  C     SC    Commit transaction started                                          1
  D     AC    Add constraint
  D     CG    Change file
  D     CT    Create file
  D     DC    Remove constraint
  D     DF    File was deleted
  D     DH    File saved
  D     DJ    Change journaled object attribute
  D     DT    Delete file
  D     DW    Start of save while active
  D     DZ    File restored
  D     EF    Journaling of physical file (PF) ended
  D     FM    Move file
  D     FN    Rename file
  D     GC    Change constraint
  D     GO    Change owner
  D     GT    Grant file
  D     JF    Journaling of physical file (PF) started
  D     MA    Member added to file
  D     RV    Revoke file
  D     TC    Add trigger
  D     TD    Delete trigger
  D     TG    Change trigger
  D     TQ    Refresh table
  D     ZB    Object attribute change
  F     CB    Physical file member change
  F     CE    Change end of data for physical file (PF)
  F     CR    Physical file member cleared
  F     DM    Delete member
  F     EJ    Journaling for a physical file member ended
  F     IT    Identity value
  F     IZ    Physical file member initialized
  F     JM    Journaling for a physical file member started
  F     MC    Create member
  F     MD    Physical file member deleted
  F     MM    Physical file containing the member moved to different library
  F     MN    Physical file containing the member renamed
  F     MS    Physical file member saved
  F     RC    Journaled changes removed from a physical file member
  F     RG    Physical file member reorganized (RGZPFM)
  F     RM    Member reorganized
  F     SS    Start of save of a physical file member using save-while-active function
  J     NR    Identifier for the next journal receiver
  J     PR    Identifier for the previous journal receiver
  R     BR    Before-image of record updated for rollback operation
  R     DL    Record deleted in the physical file member
  R     DR    Record deleted for rollback operation
  R     PT    Record added to a physical file member
  R     PX    Record added directly to RRN (relative record number) physical file member
  R     UB    Before-image of a record that is updated in the physical file member    1
  R     UP    After-image of a record that is updated in the physical file member
  R     UR    After-image of a record that is updated for rollback information
  R     PP    MIMIX-generated pre-apply of RPT entry
  U     MX    MIMIX-generated entry

  Note:
  1. This journal code is not supported by the Display Journal Statistics (DSPJRNSTC)
  command.


Error codes for files in error


Table 153 lists error codes that identify the internal reason a file replicated through the
user journal is on hold due to an error.

Table 153. Error codes for files

  Error code  Description
  01    Generic record not found
  02    Before image record not found
  03    After image record not found
  04    Record in use
  05    Allocation error
  A     Error attempting to place a data group file entry on *ACTIVE status
  AD    Data group initialization for apply session failed
  AF    *ALL missing for file in FMC
  AI    Apply initialization error
  AK    Minimized journal entry found for keyed replication
  AM    Minimized journal entry cannot be applied
  AM    Minimized journal entry cannot be converted to R-PX or R-PT
  AO    Apply already active error
  AP    Minimized journal entry applied, full image needed
  C1    The database apply process found the entry in *CMPACT, *CMPRLS, or *CMPRPR
        state during a request to start data groups (STRDG command)
  C2    A request to compare file data (CMPFILDTA command) ended abnormally while
        attempting to repair the entry
  CC    Error creating cursor
  CE    Change end of data operation failed
  DE    Data mismatch for delete request
  DL    Delete of record failed
  FE    File level error
  FO    File open for minimized data error
  GE    General error message
  GL    Get log space entry failed
  I     Error attempting to place a data group file entry on *HLDIGN status
  IG    Database replicator is ignoring entries for this file entry
  JS    Error attempting to retrieve the journal information for the target journal
        and the JS OS/400 HA performance feature is installed
  LE    Length of record retrieved is not the same as the transaction's
  LK    A lock on the file caused the operation to fail
  LM    Error locking member
  NF    The record was not found
  OE    Error opening member
  OF    Error opening data group file entry file
  R     Error attempting to place a data group file entry on *RLS status
  RE    Error reading record
  RI    Non-restrictive referential constraint exists on the file and the target
        journal is in standby state
  R1    Error with data group file entry after the database apply reorganized a file
  R2    Error removing file from *HLDRGZ
  R3    Error applying held entries after the database apply reorganized a file
  R4    Error occurred while the database apply was reorganizing a file
  SE    Compare data group file entry mismatch error
  TG    Error while attempting to disable triggers
  UD    Apply of data area failed
  UE    Error on update (record not updated)
  UF    Error updating the data group file entry file
  W     Error attempting to place a data group file entry on *RLSWAIT status
  W1    Write failed (record not written)
  W2    Record written to wrong location
  W3    Write of deleted record failed
  WF    Error writing record to data group file entry file
  XX    An unexpected error occurred
  X0    Apply exception encountered
  X1    Generic exception encountered
  X2    Could not create a needed apply object
  X3    Could not add record to hold log
  X4    Could not add record to commit index
  X5    Error opening timestamp file
  X6    Force of apply objects failed

Journal codes and entry types for journaled IFS objects


The system uses journal code B to indicate that the journal entry deposited is related
to an IFS operation. Table 154 shows the currently supported IFS entry types that
MIMIX can replicate for IFS objects configured for user journal replication.

Table 154. Journal entry codes and types supported for IFS objects

  Code  Type  Description                                     Notes
  B     AA    Change audit attributes
  B     B1    Create files, directories, or symbolic links
  B     B3    Move/rename object
  B     B2    Add link
  B     B5    Remove link (unlink)
  B     B6    Bytes cleared, after-image
  B     B7    Apply create object authorities
  B     BD    Delete object
  B     ET    End journaling for object
  B     FA    Change object attribute
  B     FR    Restore object                                  1
  B     FS    Saved IFS object                                1
  B     FW    Start of save-while-active                      1
  B     JA    Change journal attributes
  B     JT    Start journaling for object
  B     OA    Change object authority
  B     OG    Change primary group
  B     OO    Change object owner
  B     RN    Rename file identifier
  B     TR    Truncated IFS object
  B     WA    Write after-image

  Note:
  1. The actions identified in these entries are replicated cooperatively through the
  security audit journal.

Journal codes and entry types for journaled data areas and data queues
The operating system uses journal codes E and Q to indicate that journal entries are
related to operations on data areas and data queues, respectively. When configured
for user journal replication, MIMIX recognizes specific E and Q journal entry types as
eligible for replication from a user journal.
Table 155 shows the currently supported journal entry types for data areas.

Table 155. Journal entry codes and types supported for data areas

  Code  Type  Description
  E     EA    Update data area, after image
  E     EB    Update data area, before image
  E     ED    Data area deleted
  E     EE    Create data area
  E     EG    Start journal for data area
  E     EH    End journal for data area
  E     EK    Change journaled object attribute
  E     EL    Data area restored
  E     EM    Data area moved
  E     EN    Data area renamed
  E     ES    Data area saved
  E     EW    Start of save for data area
  E     ZA    Change authority
  E     ZB    Change object attribute
  E     ZO    Ownership change
  E     ZP    Change primary group
  E     ZT    Auditing change

Table 156 shows the currently supported journal entry types for data queues.

Table 156. Journal entry codes and types supported for data queues

  Code  Type  Description
  Q     QA    Create data queue
  Q     QB    Start data queue journaling
  Q     QC    Data queue cleared, no key
  Q     QD    Data queue deleted
  Q     QE    End data queue journaling
  Q     QG    Data queue attribute changed
  Q     QJ    Data queue cleared, has key
  Q     QK    Send data queue entry, has key
  Q     QL    Receive data queue entry, has key
  Q     QM    Data queue moved
  Q     QN    Data queue renamed
  Q     QR    Receive data queue entry, no key
  Q     QS    Send data queue entry, no key
  Q     QX    Start of save for data queue
  Q     QY    Data queue saved
  Q     QZ    Data queue restored
  Q     ZA    Change authority
  Q     ZB    Change object attribute
  Q     ZO    Ownership change
  Q     ZP    Change primary group
  Q     ZT    Auditing change


For more information about journal entries, see Journal Entry Information (Appendix
D) in the iSeries Backup and Recovery guide in the IBM eServer iSeries Information
Center.

Journal entry codes for system journal transactions


Table 157 lists the system journal entry codes and subtypes that may be selected by
system journal replication. Transactions for cooperatively processed object types are
selected only from the user journal.

Table 157. Journal entry codes and subtypes for replicated system journal entries

  Journal entry | MIMIX-generated activity | Subtypes
  T-AD | Auditing change | Entry: U (User auditing) for *USRPRF objects
  T-CA | Change authority | Command: GRT (Grant), RPL (Replace), RVK (Revoke), USR (Grant user authority)
  T-CO | Create object | Entry: N (Create), R (Replace)
  T-CP | User profile | Command: CHG (Change), CRT (Create), DST (Reset using DST), RPA (Reset IBM-supplied), RST (Restore)
  T-DO | Delete object | Entry: A (Object was deleted not under commitment control), D (Pending object create rolled back), P (Pending delete under commitment control), R (Pending delete rolled back)
  T-JD | Job description | Command: CHG (Change), CRT (Create)
  T-LD | Link, unlink, or lookup directory | Entry: U (Unlink directory)
  T-OM | Object management change | Entry: M (Move), R (Rename)
  T-OR | Object restore | Entry: N (New object restored), E (Existing object restored)
  T-OW | Object ownership changed | Entry: A (Change of object owner)
  T-PA | Program adopt authority | Entry: A (Adopt owner authority), J (Java adopt owner), M (Change to S_USUID)
  T-PG | Change of an object’s primary group | Entry: A (Change primary group)
  T-RA | Authority change during restore | Entry: A (Changes to authority for object restored)
  T-RO | Change of object owner during restore | Entry: A (Restoring objects that had ownership changed when restored)
  T-SE | Subsystem routing entry | Command: ADD (Add), CHG (Change), RMV (Remove)
  T-SF | Spooled file change | Access: A (Read), C (Created), D (Deleted), H (Held), I (Created inline), R (Released), S (Spooled file restored), T (Spooled file saved), U (Changed)
  T-VO | Validation list change | Entry: A (Add), C (Change), F (Find), R (Remove), U (Unsuccessful verify), V (Successful verify)
  T-YC | DLO object change | Access: Various access types are supported.
  T-ZC | Object change | Access: Various access types are supported.
  U-MX | Synchronize object request via replication (Synchronize on start) | NA

APPENDIX I Outfile formats

This appendix contains the output file (outfile) formats for those MIMIX commands
that provide outfile support.
For each command that can produce an outfile, MIMIX provides a model database file
that defines the record format for the outfile. These database files can be found in the
product installation library.
Public authority to the created outfile is the same as the create authority of the library
in which the file is created. Use the Display Library Description (DSPLIBD) command
to see the create authority of the library.
You can use the Run Query (RUNQRY) command to display outfiles with column
headings and data type formatting if you have the licensed program 5722QU1, Query,
installed.
Otherwise, you can use the Display File Field Description (DSPFFD) command to see
detailed outfile information, such as the field length, type, starting position, and
number of bytes.

Work panels with outfile support
The following table lists the work panels with outfile support.

Table 158. Work panels with outfile support

  Panel        Description
  WRKDGDFN     Work with DG Definitions
  WRKJRNDFN    Work with Journal Definitions
  WRKTFRDFN    Work with Transfer Definitions
  WRKSYSDFN    Work with System Definitions
  WRKJRNINSP   Work with Journal Inspection
  WRKSYS       Work with Systems
  WRKDGFE      Work with DG File Entries
  WRKDGOBJE    Work with DG Object Entries
  WRKDGDLOE    Work with DG DLO Entries
  WRKDGIFSE    Work with DG IFS Entries
  WRKDGACT     Work with DG Activity
  WRKDGACTE    Work with DG Activity Entries
  WRKDGIFSTE   Work with DG IFS Tracking Entries
  WRKDGOBJTE   Work with DG Object Tracking Entries
  WRKPROC      Work with Procedures
  WRKPROCSTS   Work with Procedure Status
  WRKSTEPPGM   Work with Step Programs
  WRKSTEP      Work with Step
  WRKSTEPMSG   Work with Step Messages
  WRKSTEPSTS   Work with Step Status


MCAG outfile (WRKAG command)


The following fields are available if you specify *OUTFILE on the Output parameter of
the Work with Application Groups (WRKAG) command.

Table 159. MCAG outfile (WRKAG command)

  Field | Description | Type, length | Valid values | Column headings
  AGDFN | Application group definition | CHAR(10) | User-defined name | AGDFN NAME
  USRPRF | User profile | CHAR(10) | Any valid user profile | USER PROFILE
  APP | Application name | CHAR(10) | *AGDFN, user-defined name | APP NAME
  APPLIB | Application library | CHAR(10) | *APP, user-defined name | APP LIBRARY
  RLSLVL | Application release level | CHAR(10) | User-defined value | APP RELEASE LEVEL
  TYPE | Application group type | CHAR(7) | *CLU, *NONCLU | TYPE
  APPCRG | Application CRG | CHAR(6) | *AGDFN, *NONE | APP CRG
  DTACRG | Data CRG | CHAR(10) | *NONE, user-defined name | DTA CRG
  EXITPGM | Application CRG exit program | CHAR(10) | User-defined name | APP CRG EXIT PGM
  EXITPGMLIB | Application CRG exit program library | CHAR(10) | *APPLIB, user-defined name | CRG EXIT PGM LIB
  JOB | Exit program job name | CHAR(10) | *APP, *JOBD, user-defined name | CRG EXIT PGM JOB NAME
  EXITDTA | Exit program data | CHAR(256) | User-defined value | CRG EXIT PGM DATA
  NBRRESTART | Number of restarts | PACKED(5 0) | 0-3 | NUMBER OF RESTARTS
  HOST | Takeover IP address | CHAR(256) | User-defined value | TAKEOVER IP ADDRESS
  TEXT | Description | CHAR(50) | User-defined value | DESCRIPTION
  UPDENV | Update cluster environment | CHAR(10) | *YES, *NO | UPDATE CLUSTER ENV
  IDA | Input data area name | CHAR(10) | BLANK, name of the input data area | INPUT DATA AREA NAME
  AGSTS | Application CRG status | CHAR(10) | BLANK, *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND | APP CRG STATUS
  AGNODS | Application CRG nodes status | CHAR(10) | BLANK, *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *NOTAVAIL | APP CRG NODES STATUS
  DCSTS | Data CRGs status | CHAR(10) | BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND | DATA CRG STATUS
  DCNODS | Data CRG nodes status | CHAR(10) | BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL | DATA CRG NODES STATUS
  REPSTS | Replication status | CHAR(10) | BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *ATTN_PPRC, *AUTHORITY, *SUSPENDED, *STGMGTSVR | DG STATUS
  PROCSTS | Procedure status | CHAR(10) | *ACTIVE, *ATTN, *COMP, *NONE | PROCEDURE
  FMSGQL | Failover message queue library | CHAR(10) | *NONE, user-defined name | FAILOVER MSGQ LIBRARY
  FMSGQN | Failover message queue name | CHAR(10) | *NONE, user-defined name | FAILOVER MSGQ NAME
  FWTIME | Failover wait time | PACKED(5 0) | *NOMAX, 1-32767 | FAILOVER WAIT TIME
  FDFTACT | Failover default action | PACKED(5 0) | *CANCEL, *PROCEED | FAILOVER DFT ACTION


MCDTACRGE outfile (WRKDTARGE command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Data CRG Entries (WRKDTARGE)
command.

Table 160. MCDTACRGE outfile (WRKDTARGE command)


Field Description Type, length Valid values Column headings
DTACRGE Data CRG CHAR(10) User-defined name DATA CRG
DGDFN Data group name CHAR(10) *DTACRG, user-defined name DGDFN NAME
AGDFN Application group definition CHAR(10) User-defined name AGDFN NAME
JRN Journal name CHAR(10) *DGDFN, user-defined name JOURNAL
JRNLIB Journal library CHAR(10) User-defined name JOURNAL
LIBRARY
OSF Object specifier file CHAR(10) *DTACRG, user-defined name OBJECT
SPECIFIER FILE
(OSF)
OSFLIB Object specifier file library CHAR(10) *AGDFN, user-defined name OSF LIBRARY
OSFMBR Object specifier file member CHAR(10) *DTACRG, user-defined name OSF MEMBER
DELIVERY RJ mode CHAR(10) *NONE, *ASYNC, *SYNC RJ MODE
(DELIVER)
EXITPGM Data CRG exit program CHAR(10) MMXDTACRG, user-defined name DATA CRG EXIT
PGM
EXITPGMLIB Data CRG exit program library CHAR(10) *MIMIX, user-defined name DATA CRG EXIT
PGM LIBRARY
DCSTS Data CRGs status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, DATA CRG
*NONE, *NOTAVAIL, *INDOUBT, *RESTORED, STATUS
*ADDNODPND, *DLTPND, *DLTCMDPND,
*CHGPND, *CRTPND, *ENDCRGPND,
*RMVNODPND, *STRCRGPND, *SWTPND
DCNODS Data CRG nodes status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, DATA CRG
*NONE, *NOTAVAIL STATUS
REPSTS Replication status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, REPLICATION
*NONE, *NOTAVAIL STATUS

742
MCDTACRGE outfile (WRKDTARGE command)

Table 160. MCDTACRGE outfile (WRKDTARGE command)


Field Description Type, length Valid values Column headings
DEVCRG Device CRG name CHAR(10) User-defined name DEVICE CRG
ASPGRP ASP Group CHAR(10) *NONE, User-defined name ASP
GROUP
DTATYPE Data resource group type CHAR(10) *DEV, *DTA, *PEER, *XSM DATA RESOURCE
TYPE
FMSGQL Failover message queue library CHAR(10) *AGDFN, *NONE, User-defined name FAILOVER MSGQ
LIBRARY
FMSGQN Failover message queue name CHAR(10) *AGDFN, *NONE, User-defined name FAILOVER MSGQ
NAME
FWTIME Failover wait time PACKED(5 0) *AGDFN, *NOMAX, 1-32767 FAILOVER WAIT
TIME
FDFTACT Failover default action PACKED(5 0) *AGDFN, *CANCEL, *PROCEED FAILOVER DFT
ACTION
ADMDMN Cluster administrative domain CHAR(10) *NONE, User-defined value CLUSTER
ADMINISTRATIVE
DOMAIN
SYNCOPT Synchronization option PACKED(10 5) *LASTCHG, *ACTDMN SYNCHRONIZATI
ON DOMAIN
SANUSER SAN user CHAR(16) *NONE, User-defined name SAN CONSOLE
USER
PPRCNOD1 PPRC node 1 CHAR(8) *NONE, User-defined value PPRC NODE 1
PPRCDEV1 PPRC device 1 CHAR(20) User-defined value PPRC
DEVICE 1
PPRCIP1 PPRC IP address 1 CHAR(16) User-defined value PPRC CONSOLE
IP 1
PPRCNOD2 PPRC node 2 CHAR(8) *NONE, User-defined value PPRC NODE 2
PPRCDEV2 PPRC device 2 CHAR(20) User-defined value PPRC DEVICE 2
PPRCIP2 PPRC IP address 2 CHAR(16) User-defined value PPRC CONSOLE
IP 2
PPRCLUN PPRC logical unit name CHAR(1000) *NONE, User-defined value PPRC LUNS

743
MCDTACRGE outfile (WRKDTARGE command)

Table 160. MCDTACRGE outfile (WRKDTARGE command)


Field Description Type, length Valid values Column headings
LUNDEV LUN device CHAR(20) User-defined value LUN DEVICE
LUNIP LUN IP address CHAR(16) User-defined value LUN CONSOLE IP
ADDRESS
LUNNOD1 LUN node 1 CHAR(8) *NONE, User-defined value LUN NODE 1
LUNCONID1 LUN connection ID 1 CHAR(4) User-defined value LUN CONN ID 1
LUNNOD2 LUN node 2 CHAR(8) *NONE, User-defined value LUN NODE 2
LUNCONID2 LUN connection ID 2 CHAR(4) User-defined value LUN CONN ID 2
LUNVOLID LUN volume ID CHAR(5) *NONE, User-defined value LUN VOLUME ID
GMNOD1 GM node 1 CHAR(8) *NONE, User-defined value GM NODE 1
GMDEV1 GM device 1 CHAR(20) User-defined value GM DEVICE 1
GMIP1 GM IP address 1 CHAR(16) User-defined value GM CONSOLE IP 1
GMNOD2 GM node 2 CHAR(8) *NONE, User-defined value GM NODE 2
GMDEV2 GM device 2 CHAR(20) User-defined value GM DEVICE 2
GMIP2 GM IP address 2 CHAR(16) User-defined value GM CONSOLE IP 2
GMLUN GM logical unit name CHAR(2500) *NONE, User-defined value GM LUNS

MCNODE outfile (WRKNODE command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Node Entries (WRKNODE) command.
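Because an outfile is an ordinary database file, its fields can be queried like any table. The following sketch simulates a small subset of the documented MCNODE layout in an in-memory SQLite table to show a typical check: finding nodes whose current role no longer matches their configured role. The data values and table name are illustrative assumptions; on IBM i you would run the equivalent SELECT against the actual outfile.

```python
import sqlite3

# Simulated MCNODE outfile rows (a subset of the documented fields).
rows = [
    ("DGAPP", "SYSA", "*PRIMARY",   "*PRIMARY"),
    ("DGAPP", "SYSB", "*BACKUP",    "*BACKUP"),
    ("DGAPP", "SYSC", "*REPLICATE", "*BACKUP"),  # current role drifted from configured
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nodes (DGDFN TEXT, NODE TEXT, CURROLE TEXT, CFGROLE TEXT)")
con.executemany("INSERT INTO nodes VALUES (?, ?, ?, ?)", rows)

# Nodes whose current role no longer matches the configured role.
drifted = con.execute(
    "SELECT NODE, CURROLE, CFGROLE FROM nodes WHERE CURROLE <> CFGROLE"
).fetchall()
print(drifted)  # [('SYSC', '*REPLICATE', '*BACKUP')]
```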

Table 161. MCNODE outfile (WRKNODE command)


Field Description Type, length Valid values Column
headings
AGDFN Data CRG CHAR(10) User-defined name AGDFN
NAME
CRG CRG name CHAR(10) *AGDFN, user-defined name CRG NAME
NODE System name CHAR(8) User-defined name NODE
CURROLE Current role CHAR(10) *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED CURRENT
ROLE
CURSEQ Current sequence PACKED(5 0) -2, -1, 0-127 CURRENT
(-2= *UNDEFINED) SEQUENCE
(-1 = *REPLICATE)
(0 = *PRIMARY)
(1-127 = *BACKUP sequence)
CURDTAPVD Current data provider CHAR(10) *PRIMARY, *BACKUP, *UNDEFINED, user-defined name CURRENT
DATA
PROVIDER
PREFROLE Preferred role CHAR(10) Blank PREFERRED
ROLE
PREFSEQ Preferred sequence PACKED(5 0) 0 PREFERRED
SEQUENCE
CFGROLE Configured role CHAR(10) *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED CONFIGURED ROLE
CFGSEQ Configured sequence PACKED(5 0) -2, -1, 0-127 CONFIGURED SEQUENCE
(-2 = *UNDEFINED)
(-1 = *REPLICATE)
(0 = *PRIMARY)
(1-127 = *BACKUP sequence)

CFGDTAPVD Configured data provider CHAR(10) *PRIMARY, *BACKUP, *UNDEFINED, user-defined name CONFIGURED DATA PROVIDER
STATUS CRG node status CHAR(10) *ACTIVE, *INACTIVE, *ATTN, *NONE, *NOTAVAIL, CRG NODE
*UNKNOWN STATUS

MXCDGFE outfile (CHKDGFE command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Check Data Group File Entries (CHKDGFE)
command. The command is also called by audits which run the #DGFE rule. For additional information, see “Interpreting results for
configuration data - #DGFE audit” on page 687.
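The RESULT field (documented below) carries the per-entry outcome, so a common task is summarizing how many entries fell into each category. The sketch below is a hypothetical example using simulated rows and the RESULT values from Table 162; the file and library names are not real.

```python
from collections import Counter

# Simulated MXCDGFE result rows: (file, library, RESULT) using values from Table 162.
results = [
    ("ORDERS",  "APPLIB", "*NODGFE"),     # file exists but has no data group file entry
    ("ITEMS",   "APPLIB", "*EXTRADGFE"),  # entry exists but the file does not
    ("HISTORY", "APPLIB", "*RECOVERED"),  # fixed by automatic audit recovery
    ("ORDERS2", "APPLIB", "*NODGFE"),
]

counts = Counter(result for _, _, result in results)
print(counts["*NODGFE"], counts["*EXTRADGFE"])  # 2 1
```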

Table 162. MXCDGFE outfile (CHKDGFE command)


Field Description Type, length Valid values Column
headings
TIMESTAMP Timestamp (YYYY-MM-DD.HH.MM.SSmmmm) TIMESTAMP SAA timestamp TIMESTAMP
COMMAND Command name CHAR(10) CHKDGFE COMMAND
NAME
DGSHRTNM Data group short name CHAR(3) Short data group name DGDFN
SHORT
NAME
DGDFN Data group definition name CHAR(10) User-defined data group name DGDFN
NAME
DGSYS1 System 1 CHAR(8) User-defined system name SYSTEM 1
DGSYS2 System 2 CHAR(8) User-defined system name SYSTEM 2
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA
SOURCE
FILE System 1 file name CHAR(10) User-defined name SYSTEM 1
OBJECT
LIB System 1 library name CHAR(10) User-defined name SYSTEM 1
LIBRARY
MBR System 1 member name CHAR(10) User-defined name SYSTEM 1
MEMBER
OBJTYPE Object type CHAR(10) *FILE OBJECT
TYPE

RESULT Result CHAR(10) *NODGFE, *EXTRADGFE, *NOFILE, RESULT
*NOMBR, *RCYFAILED, *RECOVERED,
*UA
Note: The values *RCYFAILED and
*RECOVERED may be placed in the
outfile as a result of automatic audit
recovery actions.
OPTION Option CHAR(100) *NONE, *NOFILECHK, *DGFESYNC OPTION
FILE2 System 2 file name CHAR(10) User-defined name SYSTEM 2
OBJECT
LIB2 System 2 library name CHAR(10) User-defined name SYSTEM 2
LIBRARY
MBR2 System 2 member name CHAR(10) User-defined name SYSTEM 2
MEMBER
ASPDEV Source ASP device CHAR(10) *UNKNOWN - if object not found or an API ASP DEVICE
error
*SYSBAS - if object in ASP 1-32
User-defined name - if object in ASP 33-255
OBJATR Object attribute CHAR(10) PF-DTA, PF-SRC, LF, OBJECT
PF38-DTA, PF38-SRC, LF38 ATTRIBUTE

MXCMPDLOA outfile (CMPDLOA command)


For additional supporting information, see “Interpreting results of audits that compare attributes” on page 692.

Table 163. CMPDLOA Output file (MXCMPDLOA)


Field Description Type, length Valid values Column head-
ings
TIMESTAMP Timestamp (CCCC-YY-MM- CHAR(26) SAA timestamp TIMESTAMP
DD.HH.MM.SSmmmm)
COMMAND Command name CHAR(10) CMPDLOA COMMAND
NAME
DGSHRTNM Data group short name CHAR(3) Short data group name DGDFN
SHORT NAME
DGNAME Data group definition name CHAR(10) User-defined data group name DGDFN NAME
Note: Blank if no data group is specified on the command.
SYSTEM1 System 1 CHAR(8) User-defined system name SYSTEM 1
Note: Local system name if no DG specified.
SYSTEM2 System 2 CHAR(8) User-defined system name SYSTEM 2
Note: Local system name if no DG specified.
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA
SOURCE
SYS1DLO System 1 DLO name CHAR(76) User-defined name SYSTEM 1
DLO
SYS2DLO System 2 DLO name CHAR(76) User-defined name SYSTEM 2
DLO
CCSID DLO name CCSID BIN(5) User-defined name CCSID
CNTRYID DLO name country ID CHAR(2) System-defined name CNTRYID
LANGID DLO name language ID CHAR(3) System-defined name LANGID
CMPATR Compared attribute CHAR(10) See “Attributes compared and expected results - COMPARED
#DLOATR audit” on page 713 ATTRIBUTE
SYS1IND System 1 file indicator CHAR(10) See Table 129 in “Where was the difference SYSTEM 1
detected” on page 695 INDICATOR

SYS2IND System 2 file indicator CHAR(10) See Table 129 in “Where was the difference SYSTEM 2
detected” on page 695 INDICATOR
DIFIND Differences indicator CHAR(10) See “What attribute differences were detected” on DIFFERENCE
page 692 INDICATOR
SYS1VAL System 1 value of the specified VARCHAR(2048) See “Attributes compared and expected results - SYSTEM 1
attribute MINLEN(50) #DLOATR audit” on page 713 VALUE
SYS1CCSID System 1 value CCSID BIN(5) 1-65535 SYSTEM 1
CCSID
SYS2VAL System 2 value of the specified VARCHAR(2048) See “Attributes compared and expected results - SYSTEM 2
attribute MINLEN(50) #DLOATR audit” on page 713 VALUE
SYS2CCSID System 2 value CCSID BIN(5) 1-65535 SYSTEM 2
CCSID
SYS1OBJTYP System 1 object type CHAR(10) User-defined name OBJECT TYPE
SYS2OBJTYP System 2 object type CHAR(10) User-defined name OBJECT TYPE

MXCMPFILA outfile (CMPFILA command)


For additional supporting information, see “Interpreting results of audits that compare attributes” on page 692.

Table 164. CMPFILA Output file (MXCMPFILA)


Field Description Type, length Valid values Column head-
ings
TIMESTAMP Timestamp (YYYY-MM- TIMESTAMP SAA timestamp TIMESTAMP
DD.HH.MM.SSmmmmmm)
COMMAND Command name CHAR(10) CMPFILA COMMAND
NAME
DGSHRTNM Data group short name CHAR(3) Short data group name DGDFN
SHORT NAME
DGNAME Data group definition name CHAR(10) User-defined data group name DGDFN NAME
*blank if no DG specified on the command.
SYSTEM1 System 1 CHAR(8) User-defined system name SYSTEM 1
*local system name if no DG specified.
SYSTEM2 System 2 CHAR(8) User-defined system name SYSTEM 2
*remote system name if no DG specified.
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA
SOURCE
SYS1OBJ System 1 object name CHAR(10) User-defined name SYSTEM 1
FILE
SYS1LIB System 1 library name CHAR(10) User-defined name SYSTEM 1
LIBRARY
MBR Member name CHAR(10) User-defined name MEMBER
SYS2OBJ System 2 object name CHAR(10) System-defined name SYSTEM 2
FILE
SYS2LIB System 2 library name CHAR(10) System-defined name SYSTEM 2
LIBRARY
OBJTYPE Object type CHAR(10) *FILE OBJECT TYPE

CMPATR Compared attribute CHAR(10) See “Attributes compared and expected results - COMPARED
#FILATR, #FILATRMBR audits” on page 696. ATTRIBUTE
SYS1IND System 1 file indicator CHAR(10) See Table 129 in “Where was the difference SYSTEM 1
detected” on page 695. INDICATOR
SYS2IND System 2 file indicator CHAR(10) See Table 129 in “Where was the difference SYSTEM 2
detected” on page 695. INDICATOR
DIFIND Differences indicator CHAR(10) See “What attribute differences were detected” on DIFFERENCE
page 692. INDICATOR
SYS1VAL System 1 value of the specified VARCHAR(2048) See “Attributes compared and expected results - SYSTEM 1
attribute MINLEN(50) #FILATR, #FILATRMBR audits” on page 696. VALUE
SYS1CCSID System 1 value CCSID BIN(5) 1-65535 SYSTEM 1
CCSID
SYS2VAL System 2 value of the specified VARCHAR(2048) See “Attributes compared and expected results - SYSTEM 2
attribute MINLEN(50) #FILATR, #FILATRMBR audits” on page 696. VALUE
SYS2CCSID System 2 value CCSID BIN(5) 1-65535 SYSTEM 2
CCSID
ASPDEV1 System 1 ASP device CHAR(10) *NONE, User-defined name SYSTEM 1
ASP
DEVICE
ASPDEV2 System 2 ASP device CHAR(10) *NONE, User-defined name SYSTEM 2
ASP
DEVICE

MXCMPFILD outfile (CMPFILDTA command)


For additional information for interpreting this outfile, see “Interpreting results of audits for record counts and file data” on page 689.
The following fields require additional explanation:
Major mismatches before - Indicates the number of mismatched records found. A value other than 0 (zero) indicates that records are
missing or that data within records does not match.
Major mismatches after - Indicates the number of mismatched records remaining. If repair was requested, this value should be 0 (zero);
otherwise, the value should equal that shown in the Major mismatches before column.
Minor mismatches after - Indicates the number of differences remaining that do not affect data integrity.
Apply pending - Indicates the number of records for which the database apply process has not yet performed repair processing.
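The relationships among these count fields can be expressed as a simple classification. The helper below is a hypothetical illustration (not part of MIMIX) that encodes the rules just described: with repair requested, the "after" count should drop to zero, while records still queued for the database apply process show up as apply pending rather than as repaired.

```python
def repair_outcome(repair_requested, majmismbef, majmismaft, apypending):
    """Classify one CMPFILDTA outfile row from its count fields (illustrative only)."""
    if majmismbef == 0:
        return "equal"
    if not repair_requested:
        # Without repair, the "after" count should equal the "before" count.
        return "mismatched" if majmismaft == majmismbef else "unexpected"
    if majmismaft == 0:
        # Repair done, unless the apply process still has records pending.
        return "repaired" if apypending == 0 else "repair pending apply"
    return "repair incomplete"

print(repair_outcome(True, 5, 0, 0))   # repaired
print(repair_outcome(False, 5, 5, 0))  # mismatched
```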

Table 165. Compare File Data (CMPFILDTA) output file (MXCMPFILD)


Field Description Type, length Valid values Column head-
ings
TIMESTAMP Timestamp (YYYY-MM- TIMESTAMP SAA timestamp TIMESTAMP
DD.HH.MM.SSmmmmmm)
COMMAND Command name CHAR(10) “CMPFILDTA” COMMAND
NAME
DGSHRTNM Data group short name CHAR(3) Short data group name DGDFN SHORT
NAME
DGNAME Data group definition name CHAR(10) User-defined data group name DGDFN NAME
* blank if not DG specified on the command
SYSTEM1 System 1 CHAR(8) User-defined system name SYSTEM 1
*local system name if no DG specified
SYSTEM2 System 2 CHAR(8) User-defined system name SYSTEM 2
*remote system name if no DG specified
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA SOURCE
SYS1OBJ System 1 object name CHAR(10) User-defined name SYSTEM 1
OBJECT

SYS1LIB System 1 library name CHAR(10) User-defined name SYSTEM 1
LIBRARY
MBR Member name CHAR(10) User-defined name MEMBER
SYS2OBJ System 2 object name CHAR(10) User-defined name SYSTEM 2
OBJECT
SYS2LIB System 2 library name CHAR(10) User-defined name SYSTEM 2
LIBRARY
OBJTYPE Object type CHAR(10) *FILE OBJECT TYPE
DIFIND Differences indicator CHAR(10) “What differences were detected by #FILDTA” DIFFERENCE
on page 689 INDICATOR
REPAIRSYS Repair system CHAR(10) *SYS1, *SYS2 REPAIR
SYSTEM
FILEREP File repair successful CHAR(10) Blank, *YES, *NO FILE REPAIR
SUCCESSFUL
TOTRCDS Total records compared DECIMAL(20) 0 - 99999999999999999999 TOTAL
RECORDS
COMPARED
MAJMISMBEF Major mismatches before processing DECIMAL(20) 0 - 99999999999999999999 MAJOR
MISMATCHES
BEFORE
PROCESSING
MAJMISMAFT Major mismatches after processing DECIMAL(20) 0 - 99999999999999999999 MAJOR
MISMATCHES
AFTER
PROCESSING
MINMISMAFT Minor mismatches after processing DECIMAL(20) 0 - 99999999999999999999 MINOR
MISMATCHES
AFTER
PROCESSING

APYPENDING Apply pending records DECIMAL(20) 0 - 99999999999999999999 ACTIVE
RECORDS
PENDING
ASPDEV1 System 1 ASP device CHAR(10) *NONE, User-defined name SYSTEM 1 ASP
DEVICE
ASPDEV2 System 2 ASP device CHAR(10) *NONE, User-defined name SYSTEM 2 ASP
DEVICE
TMPSQLVIEW Temporary target system SQL view CHAR(33) IBM i-format path name or blanks TEMPORARY
pathname TARGET
SQL VIEW

MXCMPFILR outfile (CMPFILDTA command, RRN report)


This output file format is the result of specifying *RRN for the report type on the Compare File Data command. Output in this format
enables you to see the relative record number (RRN) of the first 1,000 records that failed to compare. This value is useful when resolving
situations where a discrepancy is known to exist, but you are unsure which system contains the correct data. Viewing the RRN value
provides information that enables you to display the specific records on the two systems and to determine the system on which the file
should be repaired.
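Grouping the reported RRNs by member makes it easy to display each record on both systems before deciding where to repair. The sketch below simulates a few MXCMPFILR rows (the names and RRN values are assumptions, not real output).

```python
from collections import defaultdict

# Simulated MXCMPFILR rows: (library, object, member, RRN).
rows = [
    ("APPLIB", "ORDERS", "ORDERS", 101),
    ("APPLIB", "ORDERS", "ORDERS", 257),
    ("APPLIB", "ITEMS",  "ITEMS",  12),
]

# Collect the mismatched relative record numbers per member so each record
# can be displayed on both systems before deciding which one holds the
# correct data.
rrns = defaultdict(list)
for lib, obj, mbr, rrn in rows:
    rrns[(lib, obj, mbr)].append(rrn)

print(sorted(rrns[("APPLIB", "ORDERS", "ORDERS")]))  # [101, 257]
```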

Table 166. Compare File Data (CMPFILDTA) relative record number (RRN) output file (MXCMPFILR)
Field Description Type, length Valid values Column head-
ings
SYSTEM 1 System 1 CHAR(8) User-defined system name SYSTEM 1
*local system name if no DG specified
SYSTEM 2 System 2 CHAR(8) User-defined system name SYSTEM 2
*local system name if no DG specified
SYS1OBJ System 1 object name CHAR(10) User-defined name SYSTEM 1
OBJECT
SYS1LIB System 1 library name CHAR(10) User-defined name SYSTEM 1
LIBRARY
MBR Member name CHAR(10) User-defined name MEMBER
SYS2OBJ System 2 object name CHAR(10) User-defined name SYSTEM 2
OBJECT
SYS2LIB System 2 library name CHAR(10) User-defined name SYSTEM 2
LIBRARY
RRN Relative record number DECIMAL(10) Number RRN
ASPDEV1 System 1 ASP device CHAR(10) *NONE, User-defined name SYSTEM 1 ASP
DEVICE
ASPDEV2 System 2 ASP device CHAR(10) *NONE, User-defined name SYSTEM 2 ASP
DEVICE

MXCMPRCDC outfile (CMPRCDCNT command)


For additional information for interpreting this outfile, see “Interpreting results of audits for record counts and file data” on page 689.

Table 167. Compare Record Count (CMPRCDCNT) output file (MXCMPRCDC)


Field Description Format Valid values Column head-
ings
TIMESTAMP Timestamp (YYYY-MM- TIMESTAMP SAA timestamp ‘TIMESTAMP’
DD.HH.MM.SS.mmmmmm)
COMMAND Command name CHAR(10) “CMPRCDCNT” ‘COMMAND’
‘NAME’
DGSHRTNM Data group short name CHAR(3) short data group name ‘DGDFN’
‘SHORT’
‘NAME’
DGNAME Data group definition name CHAR(10) user-defined data group name ‘DGDFN’
*blank if no DG specified on the command ‘NAME’
SYSTEM1 System 1 CHAR(8) user-defined system name ‘SYSTEM 1’
* local system name if no DG specified
SYSTEM2 System 2 CHAR(8) user-defined system name ‘SYSTEM 2’
* remote system name if no DG specified
DTASRC Data source CHAR(10) *SYS1, *SYS2 ‘DATA’
‘SOURCE’
SYS1OBJ System 1 object name CHAR(10) user-defined name ‘SYSTEM 1’
‘OBJECT’
SYS1LIB System 1 library name CHAR(10) user-defined name ‘SYSTEM 1’
‘LIBRARY’
MBR Member name CHAR(10) user-defined name ‘MEMBER’
DIFIND Differences indicator CHAR(10) “What differences were detected by #MBRRCDCNT” ‘DIFFERENCE’
on page 691 ‘INDICATOR’

SYS1CURCNT System 1 current records DECIMAL(20) 0 - 99999999999999999999 ‘SYSTEM 1’
‘CURRENT’
‘RECORDS’
SYS2CURCNT System 2 current records DECIMAL(20) 0 - 99999999999999999999 ‘SYSTEM 2’
‘CURRENT’
‘RECORDS’
SYS1DLTCNT System 1 deleted records DECIMAL(20) 0 - 99999999999999999999 ‘SYSTEM 1’
‘DELETED’
‘RECORDS’
SYS2DLTCNT System 2 deleted records DECIMAL(20) 0 - 99999999999999999999 ‘SYSTEM 2’
‘DELETED’
‘RECORDS’
ASPDEV1 System 1 ASP device CHAR(10) *NONE, user-defined name ‘SYSTEM 1’
‘ASP’
‘DEVICE’
ASPDEV2 System 2 ASP device CHAR(10) *NONE, user-defined name ‘SYSTEM 2’
‘ASP’
‘DEVICE’
ACTRCDPND Active records pending DECIMAL(20) 0 - 99999999999999999999 ‘ACTIVE’
‘RECORDS’
‘PENDING’

MXCMPIFSA outfile (CMPIFSA command)


For additional supporting information, see “Interpreting results of audits that compare attributes” on page 692.

Table 168. CMPIFSA Output file (MXCMPIFSA)


Field Description Type, length Valid values Column head-
ings
TIMESTAMP Timestamp (YYYY-MM- TIMESTAMP SAA timestamp TIMESTAMP
DD.HH.MM.SSmmmmmm)
COMMAND Command name CHAR(10) CMPIFSA COMMAND
NAME
DGSHRTNM Data group short name CHAR(3) Short data group name DGDFN
SHORT NAME
DGNAME Data group definition name CHAR(10) User-defined data group name DGDFN NAME
*blank if no DG specified on the command.
SYSTEM1 System 1 CHAR(8) User-defined system name SYSTEM 1
*local system name if no DG specified.
SYSTEM2 System 2 CHAR(8) User-defined system name SYSTEM 2
*remote system name if no DG specified.
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA
SOURCE
SYS1OBJ System 1 object name CHAR(10) User-defined name SYSTEM 1
OBJECT
SYS2OBJ System 2 object name CHAR(10) User-defined name SYSTEM 2
OBJECT
CCSID IFS object name CCSID BIN(5) User-defined name CCSID
CNTRYID IFS object name country ID CHAR(2) System-defined name CNTRYID
LANGID IFS object name language ID CHAR(3) System-defined name LANGID
CMPATR Compared attribute CHAR(10) See “Attributes compared and expected results - COMPARED
#IFSATR audit” on page 710. ATTRIBUTE
SYS1IND System 1 file indicator CHAR(10) See Table 129 in “Where was the difference SYSTEM 1
detected” on page 695. INDICATOR

SYS2IND System 2 file indicator CHAR(10) See Table 129 in “Where was the difference SYSTEM 2
detected” on page 695. INDICATOR
DIFIND Differences indicator CHAR(10) “What attribute differences were detected” on DIFFERENCE
page 692. INDICATOR
SYS1VAL System 1 value of the specified VARCHAR(2048) See “Attributes compared and expected results - SYSTEM 1
attribute MINLEN(50) #IFSATR audit” on page 710. VALUE
SYS1CCSID System 1 value CCSID BIN(5) 1-65535 SYSTEM 1
CCSID
SYS2VAL System 2 value of the specified VARCHAR(2048) See “Attributes compared and expected results - SYSTEM 2
attribute MINLEN(50) #IFSATR audit” on page 710. VALUE
SYS2CCSID System 2 value CCSID BIN(5) 1-65535 SYSTEM 2
CCSID
SYS1OBJTYP System 1 object type CHAR(10) User-defined name OBJECT TYPE
SYS2OBJTYP System 2 object type CHAR(10) User-defined name OBJECT TYPE

MXCMPOBJA outfile (CMPOBJA command)


For additional supporting information, see “Interpreting results of audits that compare attributes” on page 692.

Table 169. CMPOBJA Output file (MXCMPOBJA)


Field Description Type, length Valid values Column head-
ings
TIMESTAMP Timestamp (YYYY-MM- TIMESTAMP SAA timestamp TIMESTAMP
DD.HH.MM.SSmmmm)
COMMAND Command name CHAR(10) CMPOBJA COMMAND
NAME
DGSHRTNM Data group short name CHAR(3) Short data group name DGDFN
SHORT NAME
DGNAME Data group definition name CHAR(10) User-defined data group name DGDFN NAME
*blank if no DG specified on the command.
SYSTEM1 System 1 CHAR(8) User-defined system name SYSTEM 1
*local system name if no DG specified.
SYSTEM2 System 2 CHAR(8) User-defined system name SYSTEM 2
*remote system name if no DG specified.
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA
SOURCE
SYS1OBJ System 1 object name CHAR(10) User-defined name SYSTEM 1
FILE
SYS1LIB System 1 library name CHAR(10) User-defined name SYSTEM 1
LIBRARY
MBR Member name CHAR(10) User-defined name MEMBER
SYS2OBJ System 2 object name CHAR(10) User-defined name SYSTEM 2
OBJECT
SYS2LIB System 2 library name CHAR(10) User-defined name SYSTEM 2
LIBRARY
OBJTYPE Object type CHAR(10) User-defined name OBJECT TYPE

CMPATR Compared attribute CHAR(10) See “Attributes compared and expected results - COMPARED
#OBJATR audit” on page 701 ATTRIBUTE
SYS1IND System 1 file indicator CHAR(10) See Table 129 in “Where was the difference SYSTEM 1
detected” on page 695 INDICATOR
SYS2IND System 2 file indicator CHAR(10) See Table 129 in “Where was the difference SYSTEM 2
detected” on page 695 INDICATOR
DIFIND Differences indicator CHAR(10) “What attribute differences were detected” on DIFFERENCE
page 692 INDICATOR
SYS1VAL System 1 value of the specified VARCHAR(2048) See “Attributes compared and expected results - SYSTEM 1
attribute MINLEN(50) #OBJATR audit” on page 701 VALUE
SYS1CCSID System 1 value CCSID BIN(5) 1-65535 SYSTEM 1
CCSID
SYS2VAL System 2 value of the specified VARCHAR(2048) See “Attributes compared and expected results - SYSTEM 2
attribute MINLEN(50) #OBJATR audit” on page 701 VALUE
SYS2CCSID System 2 value CCSID BIN(5) 1-65535 SYSTEM 2
CCSID
ASPDEV1 System 1 ASP device CHAR(10) *NONE, User-defined name SYSTEM 1
ASP
DEVICE
ASPDEV2 System 2 ASP device CHAR(10) *NONE, User-defined name SYSTEM 2
ASP
DEVICE

MXAUDHST outfile (WRKAUDHST command)

Table 170. MXAUDHST outfile (WRKAUDHST command)


Field Description Type, length Valid values Column head-
ings
DGDFN Data group name (Data group CHAR(10) User-defined data group name DGDFN
definition) NAME
DGSYS1 System 1 name (Data group CHAR(8) User-defined system name DGDFN
definition) SYSTEM 1
DGSYS2 System 2 name (Data group CHAR(8) User-defined system name DGDFN
definition) SYSTEM 2
RULE Audit rule CHAR(10) #DLOATR, #FILATR, #FILATRMBR, #FILDTA, AUDIT
#IFSATR, #MBRRCDCNT, #OBJATR RULE
CMPSTRTSP Compare start timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm COMPARE
START
TIMESTAMP
CMPENDTSP Compare end timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm COMPARE
END
TIMESTAMP
AUDLVL Audit level CHAR(10) *DISABLED, *LEVEL10, *LEVEL20, *LEVEL30 AUDIT
LEVEL
STATUS Audit status CHAR(10) *AUTORCVD, *CMPACT, *DIFFNORCY, AUDIT
*DISABLED, *ENDED, *FAILED, *NEW, *NODIFF, STATUS
*NOTRCVD, *NOTRUN, *IGNOBJ, *QUEUED,
*RCYACT, *USRRCVD
TTLSELECT Total selected PACKED(9 0) 0-999999999 TOTAL
OBJECTS
SELECTED

NOTRCVD Not recovered PACKED(9 0) 0-999999999 OBJECTS
NOT
RECOVERED
RCVD Recovered PACKED(9 0) 0-999999999 OBJECTS
RECOVERED
NOTCMP Not compared PACKED(9 0) 0-999999999 OBJECTS
NOT COMPARED
CMP Compared PACKED(9 0) 0-999999999 OBJECTS
COMPARED
DETECTNE Detected Not Equal PACKED(9 0) 0-999999999 DETECTED
NOT EQUAL
DURATION Audit Duration TIME HH.MM.SS AUDIT
DURATION
RCYSTRTSP Recovery start timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm RECOVERY
START
TIMESTAMP
RCYENDTSP Recovery end timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm RECOVERY
END
TIMESTAMP
AUDRCY Audit Recovery status CHAR(10) *DISABLED, *LEVEL10, *LEVEL20, *LEVEL30 AUDIT
RECOVERY
STATUS
OBJSEL Objects selected CHAR(10) *ALL, *DIF, *PTY OBJECTS
SELECTED
SEVNOTRUN Not run audit status severity CHAR(10) *ERROR, *WARNING, *INFO NOT RUN
AUDIT
SEVERITY

SEVIGNOBJ Ignored object audit status CHAR(10) *WARNING, *INFO IGNORED
severity OBJECTS
SEVERITY
ENDREASON Reason for *ENDED audit status CHAR(10) *THRESHOLD, *DGINACT, *MAXRUNTIM ENDED
AUDIT REASON

MXAUDOBJ outfile (WRKAUDOBJ, WRKAUDOBJH commands)


This outfile is used by the Work with Audited Objects (WRKAUDOBJ) and the Work with Audited Obj. History (WRKAUDOBJH) command.
When created by the WRKAUDOBJ command, the outfile may include objects from multiple audits; however, only information from the
most recent audit that compared an object is included.
When created by the WRKAUDOBJH command, the outfile includes the available audit history for a single object that was audited for a
specific data group. The outfile records are sorted in reverse chronological order so that the audit history having the most recent audit start
date is at the top. For a given object of type *FILE, there can be records from multiple audits (#FILATR, #FILDTA, #MBRRCDCNT).
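The reverse chronological ordering described above can be reproduced with a plain string sort, because the fixed-width timestamp format sorts correctly as text. The rows below are simulated history records for one *FILE object, not real audit output.

```python
# Simulated MXAUDOBJ history rows for one *FILE object: (RULE, CMPSTRTSP).
history = [
    ("#FILATR",    "2017-05-01-08.00.00.000000"),
    ("#FILDTA",    "2017-06-01-08.00.00.000000"),
    ("#MBRRCDCNT", "2017-05-15-08.00.00.000000"),
]

# WRKAUDOBJH returns records in reverse chronological order; the fixed-width
# timestamp format sorts correctly as plain text, so a string sort suffices.
history.sort(key=lambda rec: rec[1], reverse=True)
print([rule for rule, _ in history])  # ['#FILDTA', '#MBRRCDCNT', '#FILATR']
```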

Table 171. MXAUDOBJ outfile (WRKAUDOBJ and WRKAUDOBJH commands)


Field Description Type, length Valid values Column head-
ings
DGDFN Data group name (Data group CHAR(10) User-defined data group name DGDFN
definition) NAME
DGSYS1 System 1 name (Data group CHAR(8) User-defined system name DGDFN
definition) SYSTEM 1
DGSYS2 System 2 name (Data group CHAR(8) User-defined system name DGDFN
definition) SYSTEM 2
RULE Audit rule CHAR(10) #DLOATR, #FILATR, #FILATRMBR, #FILDTA, AUDIT
#IFSATR, #MBRRCDCNT, #OBJATR RULE
CMPSTRTSP Compare start timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm COMPARE
START
TIMESTAMP
TYPE Object type CHAR(10) Refer to the OM5100P file for the list of valid object OBJECT
types. TYPE
OBJLIB Library name CHAR(10) User-defined name, BLANK OBJECT
LIBRARY
OBJ Object name CHAR(10) User-defined name, BLANK OBJECT
OBJMBR Member name CHAR(10) User-defined name, BLANK MEMBER
DLO DLO name CHAR(12) User-defined name, BLANK DLO

FLR Folder name CHAR(63) User-defined name, BLANK FOLDER
IFS Object IFS name CHAR(1024) User-defined name, BLANK IFS
VARLEN(100) OBJECT
CCSID IFS name CCSID BINARY(5) numeric (0-65535) CCSID
RMTOBJLIB Remote Library name CHAR(10) User-defined name, BLANK REMOTE
OBJECT
LIBRARY
RMTOBJ Remote Object name CHAR(10) User-defined name, BLANK REMOTE
OBJECT
RMTOBJMBR Remote Member name CHAR(10) User-defined name, BLANK REMOTE
MEMBER
RMTDLO Remote DLO name CHAR(12) User-defined name, BLANK REMOTE
DLO
RMTFLR Remote Folder name CHAR(63) User-defined name, BLANK REMOTE
FOLDER
RMTIFS Remote Object IFS name CHAR(1024) User-defined name, BLANK REMOTE
VARLEN(100) IFS
OBJECT
AUDSTS Audited status CHAR(10) *EQ, *NE, *NS, *RCVD, *SYNC, *UN OVERALL
AUDITED
STATUS
CMPSTS Compare status CHAR(10) *APY, *CMT, *CO, *CO (LOB), *DT, *EQ, *EQ OBJECT
(DATE), *EQ (OMIT), *EC, *FF, *FMC, *FMT, *HLD, COMPARE
*IOERR, *LCK, *NA, *NC, *NE, *NF1, *NF2, *NS, STATUS
*REP, *SJ, *SP, *SYNC, *UA, *UE, *UN, *EN, *NM
(Any status issued by an audit's compare phase)
RCYSTRTSP Recovery start timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm RECOVERY
START
TIMESTAMP

RCYSTS Recovery status CHAR(10) *RECOVERED, *RCYFAILED, *RCYSBM, or BLANK RECOVERY
STATUS

MXDGACT outfile (WRKDGACT command)

Table 172. MXDGACT outfile (WRKDGACT command)


Field Description Type, length Valid values Column
headings
DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name DGDFN
NAME
DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name DGDFN
SYSTEM 1
DGSYS2 System 2 name CHAR(8) User-defined system name DGDFN
(Data group definition) SYSTEM 2
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA
SOURCE
STATUS Object status category CHAR(10) *COMPLETED, *FAILED, *DELAYED, *ACTIVE OBJECT
STATUS
CATEGORY
TYPE Object type CHAR(10) Refer to the OM5100P file for the list of valid object types OBJECT
TYPE
OBJATR Object attribute CHAR(10) Refer to the OM5200P file for the list of valid object OBJECT
attributes ATTRIBUTE
REASON Failure reason CHAR(11) *INUSE, *RESTRICTED, *NOTFOUND, *OTHER, blank FAILURE
REASON
COUNT Entry count PACKED(5 0) 0-9999 (9999 = maximum value supported) ENTRY
COUNT
OBJCAT Object category CHAR(10) *DLO, *IFS, *SPLF, *LIB OBJECT
CATEGORY
OBJLIB Object library CHAR(10) User-defined name, BLANK OBJECT
LIBRARY
OBJ Object name CHAR(10) User-defined name, BLANK OBJECT
OBJMBR Member name CHAR(10) User-defined name, BLANK MEMBER
DLO DLO name CHAR(12) User-defined name, BLANK DLO

FLR Folder name CHAR(63) User-defined name, BLANK FOLDER
SPLFJOB Spooled file job name CHAR(26) Three part spooled file name, BLANK SPLF JOB
SPLF Spooled file name CHAR(10) User-defined name, BLANK SPLF NAME
SPLFNBR Spooled file number PACKED(7 0) 1-99999, BLANK SPLF
NUMBER
OUTQ Output queue CHAR(10) User-defined name, *NONE, BLANK OUTQ
OUTQLIB Output queue library CHAR(10) User-defined name, *NONE, BLANK OUTQ
LIBRARY
IFS Object IFS name CHAR(1024) User-defined name, BLANK IFS OBJECT
VARLEN(100)
CCSID Object CCSID BIN(5 0) Default to job CCSID. If unable to convert to job's CCSID CCSID
or job CCSID is 65535, related fields will be written in
Unicode
IFSUCS IFS Object (UNICODE) Graphic(512) User-defined name (Unicode), BLANK IFS Object
VARLEN(75) (UNICODE)
CCSID(13488)

MXDGACTE outfile (WRKDGACTE command)

Table 173. MXDGACTE outfile (WRKDGACTE command)


Field Description Type, length Valid values Column head-
ings
DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name DGDFN NAME
DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name DGDFN
SYSTEM 1
DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name DGDFN
SYSTEM 2
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA SOURCE
STATUS Object status category CHAR(10) *COMPLETED, *FAILED, *DELAYED, OBJECT
*ACTIVE STATUS
CATEGORY
OBJSTATUS Object status CHAR(2) Refer to on-line help for complete list OBJECT
STATUS
TYPE Object type CHAR(10) Refer to the OM5100P file for the list of OBJECT TYPE
valid object types
OBJATR Object attribute CHAR(10) Refer to the OM5200P file for the list of OBJECT
valid object attributes ATTRIBUTE
REASON Failure reason CHAR(11) *INUSE, *RESTRICTED, *NOTFOUND, FAILURE
*OTHER, blank REASON
OBJCAT Object category CHAR(10) *DLO, *IFS, *SPLF, *LIB OBJECT
CATEGORY
SEQJRN Journal sequence number ZONED(20 0) 1-99999999999999999999 JOURNAL
SEQUENCE
NUMBER
SEQNBR Journal sequence number PACKED(10 0) 1-9999999999 JOURNAL
SEQUENCE
NUMBER

JRNCODE Journal entry code CHAR(1) Valid journal codes JOURNAL
ENTRY CODE
JRNTYPE Journal entry type CHAR(2) Valid journal types JOURNAL
ENTRY TYPE
JRNTSP Journal entry timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm JOURNAL
ENTRY
TIMESTAMP
JRNSNDTSP Journal entry send timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm JOURNAL
ENTRY SEND
TIMESTAMP
JRNRCVTSP Journal entry receive timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm JOURNAL
ENTRY RCV
TIMESTAMP
JRNRTVTSP Journal entry retrieve timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm JOURNAL
ENTRY RTV
TIMESTAMP
CNRSNDTSP Container send timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm CONTAINER
SEND
TIMESTAMP
JRNAPYTSP Journal entry apply timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm JOURNAL
ENTRY APY
TIMESTAMP
REQCNRSND Requires container CHAR(10) *YES, *NO REQUIRES
CONTAINER
SEND
RTYWAIT Waiting for retry CHAR(10) *YES, *NO WAITING FOR
RETRY
RTYATTEMPT Number of retries attempted PACKED(5 0) 0-1998 NUMBER OF
RETRIES
ATTEMPTED

RTYREMAIN  Number of retries remaining  PACKED(5 0)  0-1998  NUMBER OF RETRIES REMAINING
DLYITV  Delay interval  PACKED(5 0)  1-7200  DELAY INTERVAL
NXTRTYTSP  Next retry timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm in outfile on system of the process in delay/retry; 0001-01-01-00.00.00.000000 for Pending in outfile on system of the process in delay/retry; 1928-08-23-12.03.06.315000 (system start of time) in outfile that is remote to the system with process in delay/retry  NEXT RETRY TIMESTAMP
MSGID  Message ID  CHAR(7)  Valid message ID, BLANK  MESSAGE ID
MSGDTA  Message data  CHAR(256) VARLEN(50)  Valid message data, BLANK  MESSAGE DATA
FAILEDJOB  Failed job name  CHAR(26)  Job name, BLANK  FAILED JOB NAME
JRNENT  Journal entry  CHAR(400)  Journal entry  JOURNAL ENTRY
OBJLIB  Object library  CHAR(10)  User-defined name, BLANK  OBJECT LIBRARY
OBJ  Object name  CHAR(10)  User-defined name, BLANK  OBJECT
OBJMBR  Member name  CHAR(10)  User-defined name, BLANK  MEMBER
DLO  DLO name  CHAR(12)  User-defined name, BLANK  DLO
FLR  Folder name  CHAR(63)  User-defined name, BLANK  FOLDER
SPLFJOB  Spooled file job name  CHAR(26)  Three part spooled file name, BLANK  SPLF JOB
SPLF  Spooled file name  CHAR(10)  User-defined name, BLANK  SPLF NAME
SPLFNBR  Spooled file number  PACKED(7 0)  1-99999, BLANK  SPLF NUMBER

OUTQ  Output queue  CHAR(10)  User-defined name, *NONE, BLANK  OUTQ
OUTQLIB  Output queue library  CHAR(10)  User-defined name, *NONE, BLANK  OUTQ LIBRARY
IFS  Object IFS name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  IFS OBJECT
CCSID  Object CCSID  BIN(5 0)  Default to job CCSID. If unable to convert to job's CCSID or job CCSID is 65535, related fields will be written in Unicode.  CCSID
TGTOBJLIB  Target system object library name  CHAR(10)  User-defined name, BLANK  TARGET OBJECT LIBRARY
TGTOBJ  Target system object name  CHAR(10)  User-defined name, BLANK  TARGET OBJECT
TGTOBJMBR  Target system object member name  CHAR(10)  User-defined name, BLANK  TARGET MEMBER
TGTDLO  Target system DLO name  CHAR(12)  User-defined name, BLANK  TARGET DLO
TGTFLR  Target system object folder name  CHAR(63)  User-defined name, BLANK  TARGET FOLDER
TGTSPLFJOB  Target system spooled file job name  CHAR(26)  Three part spooled file name, BLANK  TARGET SPLF JOB
TGTSPLF  Target system spooled file name  CHAR(10)  User-defined name, BLANK  TARGET SPLF NAME
TGTSPLFNBR  Target system spooled file job number  PACKED(7 0)  1-999999, BLANK  TARGET SPLF NUMBER
TGTOUTQ  Target system output queue  CHAR(10)  User-defined name, BLANK  TARGET OUTQ
TGTOUTQLIB  Target system output queue library  CHAR(10)  User-defined name, BLANK  TARGET OUTQ LIBRARY
TGTIFS  Target system IFS name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  TARGET IFS OBJECT

RNMOBJLIB  Renamed object library name  CHAR(10)  User-defined name, BLANK  RENAMED OBJECT LIBRARY
RNMOBJ  Renamed object name  CHAR(10)  User-defined name, BLANK  RENAMED OBJECT
RNMOBJMBR  Renamed object member name  CHAR(10)  User-defined name, BLANK  RENAMED MEMBER
RNMDLO  Renamed DLO name  CHAR(12)  User-defined name, BLANK  RENAMED DLO
RNMFLR  Renamed object folder name  CHAR(63)  User-defined name, BLANK  RENAMED FOLDER
RNMSPLFJOB  Renamed spooled file job name  CHAR(26)  Three part spooled file name, BLANK  RENAMED SPLF JOB
RNMSPLF  Renamed spooled file name  CHAR(10)  User-defined name, BLANK  RENAMED SPLF NAME
RNMSPLFNBR  Renamed spooled file number  PACKED(7 0)  1-999999, BLANK  RENAMED SPLF NUMBER
RNMOUTQ  Renamed output queue  CHAR(10)  User-defined name, BLANK  RENAMED OUTQ
RNMOUTQLIB  Renamed output queue library  CHAR(10)  User-defined name, BLANK  RENAMED OUTQ LIBRARY
RNMIFS  Renamed IFS object name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  RENAMED IFS OBJECT
RNMOBJLIB  Renamed target object library name  CHAR(10)  User-defined name, BLANK  RENAMED TGT OBJECTS LIBRARY
RNMTGTOBJ  Renamed target object name  CHAR(10)  User-defined name, BLANK  RENAMED TARGET OBJECT

RNMTOBJMBR  Renamed target object member name  CHAR(10)  User-defined name, BLANK  RENAMED TARGET OBJ MEMBER
RNMTGTDLO  Renamed target object DLO name  CHAR(12)  User-defined name, BLANK  RENAMED TARGET DLO
RNMTGTFLR  Renamed target object folder name  CHAR(63)  User-defined name, BLANK  RENAMED TARGET FOLDER
RNMTSPLFJ  Renamed target spooled file job name  CHAR(26)  Three part spooled file name, BLANK  RENAMED TARGET SPLF JOB
RNTTGTSPLF  Renamed target spooled file name  CHAR(10)  User-defined name, BLANK  RENAMED TARGET SPLF NAME
RNMTSPLFN  Renamed target spooled file number  PACKED(7 0)  1-999999, BLANK  RENAMED TARGET SPLF NUMBER
RNMTGTOUTQ  Renamed target output queue  CHAR(10)  User-defined name, BLANK  RENAMED TARGET OUTQ
RNMTOUTQL  Renamed target output queue library  CHAR(10)  User-defined name, BLANK  RENAMED TARGET OUTQ LIBRARY
RNMTGTIFS  Renamed target object IFS name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  RENAMED TARGET IFS OBJECT
COOPDB  Cooperate with DB  CHAR(10)  *YES, *NO, BLANK  COOPERATE WITH DATABASE
OBJFID  IFS object file identifier (binary format)  BIN(16)  Binary representation of file identifier  IFS OBJECT FID (Binary)

OBJFIDHEX  IFS object file identifier (character format)  CHAR(32)  Character representation of file identifier  IFS OBJECT FID (Hex)
IFSUCS  IFS Object (UNICODE)  GRAPHIC(512) VARLEN(75) CCSID(13488)  User-defined name (Unicode), BLANK  IFS Object (UNICODE)
TGTIFSUCS  TGT IFS Object (UNICODE)  GRAPHIC(512) VARLEN(75) CCSID(13488)  User-defined name (Unicode), BLANK  TGT IFS Object (UNICODE)
RNMIFSUCS  RNM IFS Object (UNICODE)  GRAPHIC(512) VARLEN(75) CCSID(13488)  User-defined name (Unicode), BLANK  RNM IFS Object (UNICODE)
RNMTGTIFSU  RNM TGT IFS Object (UNICODE)  GRAPHIC(512) VARLEN(75) CCSID(13488)  User-defined name (Unicode), BLANK  RNM TGT IFS Object (UNICODE)
MSGCCSID  CCSID of *CCHAR Message Data  BIN(5 0)  Valid CCSID or 0 (zero)  MSG DATA CCSID


MXDGDFN outfile (WRKDGDFN command)

Table 174. MXDGDFN outfile (WRKDGDFN command)


Field Description Type, length Valid values Column Headings
DGDFN  Data group definition name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2
DGSHRTNM  Data group short name  CHAR(3)  Short data group name  DGDFN SHORT NAME
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
ALWSWT  Allow to be switched  CHAR(10)  *YES, *NO  ALLOW SWITCH
DGTYPE  Data group type  CHAR(10)  *ALL, *OBJ, *DB  DG TYPE
PRITFRDFN  Configured primary transfer definition  CHAR(10)  User-defined name, *DGDFN  CONFIGURED PRITFRDFN
SECTFRDFN  Secondary transfer definition  CHAR(10)  User-defined name, *NONE  CONFIGURED SECTFRDFN
RDRWAIT  Reader wait time (seconds)  PACKED(5 0)  0-600  DB READER WAITTIME
JRNTGT  Journal on target  CHAR(10)  *YES, *NO  JOURNAL ON TARGET
JRNDFN1  Configured system 1 journal definition  CHAR(10)  *DGDFN, user-defined name, *NONE  CONFIGURED SYSTEM 1 JRNDFN
JRNDFN1NM  Actual system 1 journal definition name  CHAR(10)  User-defined name, blank  ACTUAL SYSTEM 1 JRNDFN
JRNDFN1SYS  System 1 journal definition system name  CHAR(8)  User-defined name, blank  JRNDFN SYSTEM 1

JRNDFN2  Configured system 2 journal definition  CHAR(10)  *DGDFN, user-defined name, *NONE  CONFIGURED SYSTEM 2 JRNDFN
JRNDFN2NM  Actual system 2 journal definition name  CHAR(10)  User-defined name, blank  ACTUAL SYSTEM 2 JRNDFN
JRNDFN2SYS  System 2 journal definition system name  CHAR(8)  User-defined name, blank  JRNDFN SYSTEM 2
RJLNK  Use remote journal link  CHAR(10)  *YES, *NO  RJ LINK
NBRDBAPY  Number of DB apply sessions  PACKED(3 0)  1-6  CURRENT NUMBER OF DB APPLIES
RQSDBAPY  Requested number of DB apply sessions  PACKED(3 0)  1-6  REQUESTED NUMBER OF DB APPLIES
DBBFRIMG  Before images (DB journal entry processing)  CHAR(10)  *IGNORE, *SEND  DBJRNPRC BEFORE IMAGES
DBNOTINDG  For files not in data group (DB journal entry processing)  CHAR(10)  *IGNORE, *SEND  DBJRNPRC FILES NOT IN DG
DBMMXGEN  Generated by MIMIX activity (DB journal entry processing)  CHAR(10)  *IGNORE, *SEND  DBJRNPRC GEN’D BY MIMIX ACT
DBNOTUSED  Not used by MIMIX (DB journal entry processing)  CHAR(10)  *IGNORE, *SEND  DBJRNPRC NOT USED BY MIMIX
TEXT  Description  CHAR(50)  *BLANK, user-defined text  DESCRIPTION
SYNCCHKITV  Synchronization check interval  PACKED(5 0)  0-999999 (0 = *NONE)  SYNC CHECK INTERVAL
TSPITV  Time stamp interval  PACKED(5 0)  0-999999 (0 = *NONE)  TIME STAMP INTERVAL

VFYITV  Verify interval  PACKED(5 0)  1000-999999  VERIFICATION INTERVAL
DTAARAITV  Data area polling interval  PACKED(5 0)  1-7200  DATA AREA POLLING INTERVAL
RTYNBR  Number of times to retry  PACKED(3 0)  0-999  NUMBER OF RETRIES
RTYDLYITV1  First retry delay interval  PACKED(5 0)  1-3600  FIRST RETRY INTERVAL
RTYDLYITV2  Second retry delay interval  PACKED(5 0)  10-7200  SECOND RETRY INTERVAL
ADPCHE  Adaptive cache  CHAR(10)  *YES, *NO (Version 7.0 and higher, field is always *NO)  USE ADAPTIVE CACHE
DATACRG  Data cluster resource group  CHAR(10)  User-defined name, blank, *NONE  DATA CRG
DFTJRNIMG  Journal image (File entry options)  CHAR(10)  *AFTER, *BOTH  FEOPT JOURNAL IMAGES
DFTOPNCLO  Omit open / close entries (File entry options)  CHAR(10)  *NO, *YES  FEOPT OMIT OPEN CLOSE
DFTREPTYPE  Replication type (File entry options)  CHAR(10)  *POSITION, *KEYED  FEOPT REPLICATION TYPE
DFTAPYLOCK  Lock member during apply (File entry options)  CHAR(10)  *EXCLRD, *SHRNUP, *NONE (see note 1)  FEOPT LOCK MBR ON APPLY
DFTAPYSSN  Configured apply session (File entry options)  CHAR(10)  *ANY, A-F  FEOPT CFG APPY SESSION
DFTCRCLS  Collision resolution (File entry options)  CHAR(10)  *HLDERR, *AUTOSYNC, user-defined name  FEOPT COLLISION RESOLUTION

DFTSBTRG  Disable triggers during apply (File entry options)  CHAR(10)  *YES, *NO  FEOPT DISABLE TRIGGERS
DFTPRCCST  Process constraint entries (File entry options)  CHAR(10)  *YES  FEOPT PROCESS CONSTRAINT
DBFRCITV  Force data interval (Database apply processing)  PACKED(5 0)  1-99999  DBAPYPRC FORCE DATA
DBMAXOPN  Maximum open members (Database apply processing)  PACKED(5 0)  50-32767  DBAPYPRC MAX OPEN MEMBERS
DBAPYTWRN  Threshold warning (1000s) (Database apply processing)  PACKED(7 0)  0, 1-999999 (0 = *NONE)  DBAPYPRC THRESHOLD WARNING
DBAPYHST  Apply history log spaces (Database apply processing)  PACKED(5 0)  0-9999  DBAPYPRC HISTORY
DBKEEPLOG  Keep journal log spaces (Database apply processing)  PACKED(5 0)  0-9999  DBAPYPRC KEEP JRN
DBLOGSIZE  Size of log spaces (MB) (Database apply processing)  PACKED(5 0)  1-16  DBAPYPRC SIZE OF LOG SPACES
OBJDFTOWN  Object default owner (Object processing)  CHAR(10)  User-defined name  OBJPRC DEFAULT OWNER
OBJDLOMTH  DLO transmission method (Object processing)  CHAR(10)  *OPTIMIZED, *SAVRST  OBJPRC DLO TRANSFER METHOD
OBJIFSMTH  IFS transmission method (Object processing)  CHAR(10)  *SAVRST, *OPTIMIZED  OBJPRC IFS TRANSFER METHOD

OBJUSRSTS  User profile status (Object processing)  CHAR(10)  *SRC, *TGT, *ENABLE, *DISABLE  OBJPRC USER PROFILE STATUS
OBJKEEPSPL  Keep deleted spooled files (Object processing)  CHAR(10)  *YES, *NO  OBJPRC KEEP DELETED SPLF
OBJKEEPDLO  Keep DLO system name (Object processing)  CHAR(10)  *YES, *NO  OBJPRC KEEP DLO SYS NAME
OBJRTVDLY  Retrieve delay (Object retrieve processing)  PACKED(3 0)  0-999  OBJRTVPRC DELAY
OBJRTVMINJ  Minimum number of jobs (Object retrieve processing)  PACKED(3 0)  1-99  OBJRTVPRC MIN NUMBER OF JOBS
OBJRTVMAXJ  Maximum number of jobs (Object retrieve processing)  PACKED(3 0)  1-99  OBJRTVPRC MAX NUMBER OF JOBS
OBJRTVTHLD  Threshold for more jobs (Object retrieve processing)  PACKED(5 0)  1-99999  OBJRTVPRC THLD FOR MORE JOBS
CNRSNDMINJ  Minimum number of jobs (Container send processing)  PACKED(3 0)  1-99  CNRSNDPRC MIN NUMBER OF JOBS
CNRSNDMAXJ  Maximum number of jobs (Container send processing)  PACKED(3 0)  1-99  CNRSNDPRC MAX NUMBER OF JOBS
CNRSNDTHLD  Threshold for more jobs (Container send processing)  PACKED(5 0)  1-99999  CNRSNDPRC THLD FOR MORE JOBS

OBJAPYMINJ  Minimum number of jobs (Object apply processing)  PACKED(3 0)  1-99  OBJAPYPRC MIN NUMBER OF JOBS
OBJAPYMAXJ  Maximum number of jobs (Object apply processing)  PACKED(3 0)  1-99  OBJAPYPRC MAX NUMBER OF JOBS
OBJAPYTHLD  Threshold for more jobs (Object apply processing)  PACKED(5 0)  1-99999  OBJAPYPRC THLD FOR MORE JOBS
OBJAPYTWRN  Threshold for warning messages (Object apply processing)  PACKED(5 0)  0, 50-99999 (0 = *NONE)  OBJAPYPRC THLD FOR WARNING MSGS
SBMUSR  User profile for submit job  CHAR(10)  *JOBD, *CURRENT  USRPRF FOR SUBMIT JOB
SNDJOBD  Send job description  CHAR(10)  Job description name  SEND JOBD
SNDJOBDLIB  Send job description library  CHAR(10)  Job description library  SEND JOBD LIBRARY
APYJOBD  Apply job description  CHAR(10)  Job description name  APPLY JOBD
APYJOBDLIB  Apply job description library  CHAR(10)  Job description library  APPLY JOBD LIBRARY
RGZJOBD  Reorganize job description  CHAR(10)  Job description name  REORGANIZE JOBD
RGZJOBDLIB  Reorganize job description library  CHAR(10)  Job description library  REORGANIZE JOBD LIBRARY
SYNJOBD  Synchronize job description  CHAR(10)  Job description name  SYNC JOBD
SYNJOBDLIB  Synchronize job description library  CHAR(10)  Job description library  SYNC JOBD LIBRARY

SAVACT  Save while active (seconds)  PACKED(5 0)  -1, 0, 1-999999 (0 = save while active for files only, with a 120 second wait time; -1 = no save while active; 1-999999 = save while active for all object types with the specified wait time)  SAVE WHILE ACTIVE (SEC)
RSTARTTIME  Restart Time  CHAR(8)  000000-235959, *NONE, *SYSDFN1, *SYSDFN2 (000000 = midnight, the default)  RESTART TIME
ASPGRP1  System 1 ASP group  CHAR(10)  *NONE, user-defined name  SYSTEM 1 ASP GROUP
ASPGRP2  System 2 ASP group  CHAR(10)  *NONE, user-defined name  SYSTEM 2 ASP GROUP
COOPJRN  Cooperative Journal  CHAR(10)  *SYSJRN, *USRJRN  COOPERATIVE JOURNAL
RCYWINPRC  Recovery Window Process  CHAR(7)  *NONE, *ALLAPY  RECOVERY PROCESS
RCYWINDUR  Recovery Window Duration  PACKED(5 0)  0-99999  RECOVERY DURATION
JRNATCRT  Journal at creation  CHAR(10)  *DFT, *YES, *NO  JOURNAL AT CREATION
RJLNKTHLDM  RJ Link Threshold (Time in minutes)  PACKED(4 0)  0-9999 (0 = *NONE)  RJLNK THRESHOLD (TIME IN MIN)
RJLNKTHLDE  RJ Link Threshold (Number of journal entries)  PACKED(7 0)  0, 1000-9999999 (0 = *NONE)  RJLNK THRESHOLD (NBR OF JRNE)
DBSNDTHLDM  DB Send/Reader Threshold (Time in minutes)  PACKED(4 0)  0-9999 (0 = *NONE)  DBSND/DBRDR THRESHOLD (TIME IN MIN)

DBSNDTHLDE  DB Send/Reader Threshold (Number of journal entries)  PACKED(7 0)  0, 1000-9999999 (0 = *NONE)  DBSND/DBRDR THRESHOLD (NBR OF JRNE)
OBJSNDTHDM  Object Send Threshold (Time in minutes)  PACKED(4 0)  0-9999 (0 = *NONE)  OBJSND THRESHOLD (TIME IN MIN)
OBJSNDTHDE  Object Send Threshold (Number of journal entries)  PACKED(7 0)  0, 1000-9999999 (0 = *NONE)  OBJSND THRESHOLD (NBR OF JRNE)
OBJRTVTHDE  Object Retrieve Threshold (Number of activity entries)  PACKED(5 0)  0, 50-99999 (0 = *NONE)  OBJRTV THRESHOLD
CNRSNDTHDE  Container Send Threshold (Number of activity entries)  PACKED(5 0)  0, 50-99999 (0 = *NONE)  CNRSND THRESHOLD
OBJSNDPFX  Object send prefix  CHAR(10)  name, *DGDFN, *SHARED  OBJECT SEND PREFIX
DBCMTMODE  Commit mode (Database apply processing)  CHAR(10)  *DLY, *IMMED  DBAPYPRC COMMIT MODE

1. These values are supported on installations running MIMIX service pack 8.0.08.00 or higher. Previous software levels supported values *YES and *NO, which are mapped to *EXCLRD and *NONE, respectively, on 8.0.08.00 or higher systems.


MXDGDLOE outfile (WRKDGDLOE command)

Table 175. MXDGDLOE outfile (WRKDGDLOE command)


Field Description Type, length Valid values Column headings
DGDFN  Data group name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2
FLR1  System 1 folder  CHAR(63)  User-defined name  SYSTEM 1 FOLDER
DOC1  System 1 document  CHAR(12)  User-defined name, *ALL  SYSTEM 1 DLO
OWNER  Owner  CHAR(10)  User-defined name, *ALL  OWNER
FLR2  System 2 folder  CHAR(63)  *FLR1, user-defined name  SYSTEM 2 FOLDER
DOC2  System 2 document  CHAR(12)  *DOC1, user-defined name  SYSTEM 2 DLO
OBJAUD  Object auditing value  CHAR(10)  *CHANGE, *ALL, *NONE  OBJECT AUDITING VALUE
PRCTYPE  Process type  CHAR(10)  *INCLD, *EXCLD  PROCESS TYPE
OBJRTVDLY  Retrieve delay (Object retrieve processing)  PACKED(3 0)  0-999, *DGDFT  OBJRTVPRC DELAY
TEXT  Description  CHAR(50)  *BLANK, user-defined text  DESCRIPTION


MXDGFE outfile (WRKDGFE command)

Table 176. MXDGFE outfile (WRKDGFE command)


Field Description Type, length Valid values Column headings
DGDFN  Data group name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2
FILE1  System 1 file name  CHAR(10)  User-defined name  SYSTEM 1 FILE
LIB1  System 1 library name  CHAR(10)  User-defined name  SYSTEM 1 LIBRARY
MBR1  System 1 member name  CHAR(10)  User-defined name  SYSTEM 1 MEMBER
FILE2  System 2 file name  CHAR(10)  User-defined name  SYSTEM 2 FILE
LIB2  System 2 library name  CHAR(10)  User-defined name  SYSTEM 2 LIBRARY
MBR2  System 2 member name  CHAR(10)  User-defined name  SYSTEM 2 MEMBER
TEXT  Description  CHAR(50)  User-defined text  DESCRIPTION
JRNIMG  Journal image (File entry options)  CHAR(10)  *AFTER, *BOTH, *DGDFT  FEOPT JOURNAL IMAGE
OPNCLO  Omit open/close entries (File entry options)  CHAR(10)  *YES, *NO, *DGDFT  FEOPT OMIT OPEN CLOSE

REPTYPE  Replication type (File entry options)  CHAR(10)  *POSITION, *KEYED, *DGDFT  FEOPT REPLICATION TYPE
APYLOCK  Lock member during apply (File entry options)  CHAR(10)  *EXCLRD, *SHRNUP, *NONE (see note 1), *DGDFT  FEOPT LOCK MBR ON APPLY
FTRBFRIMG  Filter before image (File entry options)  CHAR(10)  *YES, *NO, *DGDFT  FEOPT FILTER BFR IMAGE
APYSSN  Current apply session (File entry options)  CHAR(10)  A-F, *DGDFT  FEOPT CURRENT APYSSN
RQSAPYSSN  Configured or requested apply session (File entry options)  CHAR(10)  A-F, *DGDFT  FEOPT REQUESTED APYSSN
CRCLS  Collision resolution class (File entry options)  CHAR(10)  *HLDERR, *AUTOSYNC, user-defined name  FEOPT COLLISION RESOLUTION
DSBTRG  Disable triggers during apply (File entry options)  CHAR(10)  *YES, *NO, *DGDFT  FEOPT DISABLE TRIGGERS
PRCTRG  Process trigger entries (File entry options)  CHAR(10)  *YES, *NO, *DGDFT  FEOPT PROCESS TRIGGERS
PRCCST  Process constraint entries (File entry options)  CHAR(10)  *YES  FEOPT PROCESS CONSTRAINTS
STATUS  File status  CHAR(10)  *ACTIVE, *RLSWAIT, *RLSCLR, *HLD, *HLDIGN, *RLS, *HLDRGZ, *HLDPRM, *HLDRNM, *HLDSYNC, *HLDRTY, *HLDERR, *HLDRLTD, *CMPACT, *CMPRLS, *CMPRPR  CURRENT STATUS

RQSSTS  Requested file status  CHAR(10)  *ACTIVE, *HLD, *HLDIGN, *RLS, *RLSWAIT  REQUESTED STATUS
JRN1STS  System 1 journaled  CHAR(10)  *YES, *NO, *NA, *DIFFJRN  SYSTEM 1 JOURNALED
JRN2STS  System 2 journaled  CHAR(10)  *YES, *NO, *NA, *DIFFJRN  SYSTEM 2 JOURNALED
ERRCDE  Error code  CHAR(2)  Valid error codes  ERROR CODE
JECDE  Journal entry code  CHAR(1)  Valid journal entry code  JOURNAL ENTRY CODE
JETYPE  Journal entry type  CHAR(2)  Valid journal entry type  JOURNAL ENTRY TYPE
APMNTSTS  Access path maintenance status  CHAR(10)  *AVAILABLE, *DISABLED, *FAILED, *FAILEDLF, *NOTALW  AP MAINT STATUS

1. These values are supported on installations running MIMIX service pack 8.0.08.00 or higher. Previous software levels supported values *YES and *NO, which are mapped to *EXCLRD and *NONE, respectively, on 8.0.08.00 or higher systems.


MXDGIFSE outfile (WRKDGIFSE command)

Table 177. MXDGIFSE outfile (WRKDGIFSE command)


Field Description Type, length Valid values Column headings
DGDFN  Data group name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2
OBJ1  System 1 object  CHAR(1024)  User-defined name  SYSTEM 1 IFS OBJECT
OBJ2  System 2 object  CHAR(1024)  *OBJ1, user-defined name  SYSTEM 2 IFS OBJECT
CCSID  Object CCSID  BIN(5 0)  Defaults to job CCSID. If job CCSID is 65535 or data cannot be converted to job CCSID, OBJ1 and OBJ2 values remain in Unicode.  CCSID
PRCTYPE  Process type  CHAR(10)  *INCLD, *EXCLD  PROCESS TYPE
TYPE  Object type  CHAR(10)  *DIR, *STMF, *SYMLNK  OBJECT TYPE
OBJRTVDLY  Retrieve delay (Object retrieve processing)  CHAR(10)  0-999, *DGDFT  OBJRTVPRC DELAY
COOPDB  Cooperate with database  CHAR(10)  *YES, *NO, blank  COOPERATE WITH DATABASE
OBJAUD  Object auditing  CHAR(10)  *NONE, *CHANGE, *ALL  OBJECT AUDITING VALUE
TEXT  Description  CHAR(50)  *BLANK, user-defined text  DESCRIPTION


MXDGSTS outfile (WRKDG command)


The MXDGSTS outfile contains status information which corresponds to fields available on the Work with Data Groups (WRKDG)
command.
The Work with Data Groups (WRKDG) command generates new outfiles based on the MXDGSTSF record format from the MXDGSTS
model database file supplied with MIMIX. The content of the outfile is based on the criteria specified on the command. If there are no data
groups that match the criteria specified, the file is empty.
Usage notes:
• When the value *UNKNOWN is returned for either the Data group source system status (DTASRCSTS) field or the Data group target system status (DTATGTSTS) field, status information is not available from the system that is remote relative to where the request was made. For example, if you requested the report from the target system and the value returned for DTASRCSTS is *UNKNOWN, the WRKDG request could not communicate with the source system. Fields that rely on data collected from the remote system will be blank.
• If a data group is configured for only database or only object replication, any fields associated with processes not used by the
configured type of replication will be blank.
• See “WRKDG outfile SELECT statement examples” on page 813 for examples of how to query the contents of this output file.
• You can automate the process of gathering status. If you use MIMIX Monitor to create a synchronous interval monitor, the monitor can
specify the command to generate the outfile. Through exit programs, you can program the monitor to take action based on the status
returned in the outfile. For information about creating interval monitors, see the Using MIMIX Monitor book.

Table 178. MXDGSTS outfile (WRKDG command)


Field Description Type, length Valid values Column headings
ENTRYTSP  Entry timestamp  TIMESTAMP  SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu  TIME REQUEST PROCESSED
DGDFN  Data group definition name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2

STSTIME  Elapsed time for data group status (seconds)  PACKED(10 0)  Calculated, 0-9999999999  ELAPSED TIME
STSTIMF  Elapsed time for data group status (HHH:MM:SS)  CHAR(10)  Calculated, 0-9999999  ELAPSED TIME (HHH:MM:SS)
STSAVAIL  Data group status retrieved from these systems  CHAR(10)  *ALL, *SOURCE, *TARGET, *NONE  SYS STATUS RETRIEVED FROM
DTASRC  Data group source system  CHAR(8)  User-defined system name  DG SOURCE SYSTEM
DTASRCSTS  Data group source system status  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN  DG SOURCE STATUS
DTATGT  Data group target system  CHAR(8)  User-defined system name  DG TARGET SYSTEM
DTATGTSTS  Data group target system status  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN  DG TARGET STATUS
SWTSTS1  Switch mode status for system 1  CHAR(10)  *NONE, *SWITCH  SYSTEM 1 SWITCH STATUS
SWTSTS2  Switch mode status for system 2  CHAR(10)  *NONE, *SWITCH  SYSTEM 2 SWITCH STATUS
DGSTS  Data group status summary  CHAR(10)  BLANK, *ERROR, *WARNING, *DISABLED  OVERALL DG STATUS
DBCFG  Data group configured for database replication  CHAR(10)  *YES, *NO  CONFIGURED FOR DB REPLICATION
OBJCFG  Data group configured for object replication  CHAR(10)  *YES, *NO  CONFIGURED FOR OBJECT REPLICATION

SRCSYSSTS  Source system manager status summation (system manager plus journal manager status)  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN  SOURCE MANAGER SUMMATION
DBSNDSTS  Database send process status summation (DBSNDPRC)  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD  DB SEND STATUS
OBJSNDSTS  Object send process status summation (OBJSNDPRC)  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD  OBJECT SEND STATUS
DTAPOLLSTS  Data area polling process status (DTAPOLLPRC)  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN, *NONE  DATA AREA POLLER STATUS
TGTSYSSTS  Target system manager status summation (system manager plus journal manager status)  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN  TARGET MANAGER SUMMATION
DBAPYSTS  Database apply status summation (Apply sessions A-F)  CHAR(10)  *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD  DB APPLY SUMMATION
   Note: In service pack 7.1.15.00 or higher, access path maintenance, if used, is considered part of the database apply status. The value *PARTIAL in this field can indicate that DBAPY jobs are running, but access path maintenance jobs are not running. Check the value returned in the APMNTSTS field.
OBJAPYSTS  Object apply status summation  CHAR(10)  *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD  OBJECT APPLY SUMMATION
FECNT  Total database file entries  PACKED(5 0)  0-99999  TOTAL DB FILE ENTRIES
FEACTIVE  Active database file entries (FEACT)  PACKED(5 0)  0-99999  ACTIVE DB FILE ENTRIES
FENOTACT  Inactive database file entries  PACKED(5 0)  0-99999  INACTIVE DB FILE ENTRIES

FENOTJRNS  Database file entries not journaled on source  PACKED(5 0)  0-99999  FILES NOT JOURNALED ON SOURCE
FENOTJRNT  Database file entries not journaled on target  PACKED(5 0)  0-99999  FILES NOT JOURNALED ON TARGET
FEHLDERR  Database file entries held due to error  PACKED(5 0)  0-99999  FILES HELD FOR ERRORS
FEHLDOTHR  Database file entries held for other reasons (FEHLD)  PACKED(5 0)  0-99999  FILES HELD FOR OTHER
OBJPENDSRC  Objects in pending status, source system  PACKED(5 0)  0-99999  OBJECTS PENDING ON SOURCE SYSTEM
OBJPENDAPY  Objects in pending status, target system  PACKED(5 0)  0-99999  OBJECTS PENDING ON TARGET SYSTEM
OBJDELAY  Objects in delayed status  PACKED(5 0)  0-99999  TOTAL OBJECTS DELAYED
OBJERR  Objects in error  PACKED(5 0)  0-99999  TOTAL OBJECTS IN ERROR
DLOCFGCHG  DLO configuration changed  CHAR(10)  *YES, *NO  DLO CONFIG CHANGED
IFSCFGCHG  IFS configuration changed  CHAR(10)  *YES, *NO  IFS CONFIG CHANGED
OBJCFGCHG  Object configuration changed  CHAR(10)  *YES, *NO  OBJECT CONFIG CHANGED

PRITFRDFN  Primary transfer definition  CHAR(10)  User-defined transfer definition name  PRIMARY TFRDFN
SECTFRDFN  Secondary transfer definition  CHAR(10)  User-defined transfer definition name  SECONDARY TFRDFN
TFRDFN  Current transfer definition  CHAR(10)  User-defined transfer definition name  LAST USED TFRDFN
TFRSTS  Current transfer definition communications status  CHAR(10)  *ACTIVE, *INACTIVE  LAST USED TFRDFN STATUS
SRCMGRSTS  Source system manager status  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN  SOURCE SYS MANAGER STATUS
SRCJRNSTS  Source journal manager status  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN  SOURCE JRN MANAGER STATUS
CNRSNDSTS  Container send process status  CHAR(10)  *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD  CONTAINER SEND STATUS
OBJRTVSTS  Object retrieve process status  CHAR(10)  *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD  OBJECT RETRIEVE STATUS
TGTMGRSTS  Target system manager status  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN  TARGET SYS MANAGER STATUS
TGTJRNSTS  Target journal manager status  CHAR(10)  *ACTIVE, *INACTIVE, *UNKNOWN  TARGET JRN MANAGER STATUS
CURDBRCV  Current database journal entry receiver name  CHAR(10)  User-defined value  DB JRNRCV
CURDBLIB  Current database journal entry receiver library name  CHAR(10)  User-defined value  DB JRNRCV LIBRARY

CURDBCODE  Current database journal code and entry type  CHAR(3)  Valid journal entry types and codes  DB ENTRY TYPE AND CODE
CURDBSEQ  Current database journal entry sequence number  PACKED(10 0)  0-9999999999  DB ENTRY SEQUENCE
CURDBTSP  Current database journal entry timestamp  TIMESTAMP  SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu  DB ENTRY TIMESTAMP
CURDBTPH  Current database journal entry transactions per hour  PACKED(15 0)  Calculated, 0-9999999999999  DB ARRIVAL RATE
RDDBRCV  Last read database journal entry receiver name (DBSNTRCV)  CHAR(10)  User-defined value  DB READER JRNRCV
RDDBLIB  Last read database journal entry receiver library name  CHAR(10)  User-defined value  DB READER JRNRCV LIBRARY
RDDBCODE  Last read database journal code and entry type  CHAR(3)  Valid journal entry types and codes  DB READER TYPE AND ENTRY CODE
RDDBSEQ  Last read database journal entry sequence number (DBSNTSEQ)  PACKED(10 0)  0-9999999999  DB READER ENTRY SEQUENCE
RDDBTSP  Last read database journal entry timestamp (DBSNTDATE, DBSNTTIME)  TIMESTAMP  SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu  DB READER ENTRY TIMESTAMP
RDDBTPH  Last read database journal entry transactions per hour  PACKED(15 0)  Calculated, 0-999999999999999  DB READER READ RATE
DBSNDBKLG  Number of database entries not sent  PACKED(15 0)  Calculated, 0-999999999999999  DB SEND BACKLOG
DBSNBKTIME  Estimated time to process database entries not sent (seconds)  PACKED(10 0)  Calculated, 0-9999999999  DB SEND BACKLOG SECONDS

DBSNBKTIMF Estimated time to process database CHAR(10) Calculated, 0-999:99:99 DB SEND
entries not sent (HHH:MM:SS) BACKLOG
HHH:MM:SS
RCVDBRCV Last received database journal entry CHAR(10) User-defined value DB LAST
receiver name RECEIVED
JRNRCV
RCVDBLIB Last received database journal entry CHAR(10) User-defined value DB LAST
receiver library name RECEIVED
JRNRCV LIB
RCVDBCODE Last received database journal code CHAR(3) See the IBM OS/400 Backup and Recovery Guide DB LAST RCV
and entry type for journal codes and entry types TYPE AND
ENTRY
RCVDBSEQ Last received database journal entry PACKED(10 0) 0-9999999999 DB LAST
sequence number RECEIVED
SEQUENCE
RCVDBTSP Last received database journal entry TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu DB LAST
timestamp RECEIVED
TIMESTAMP
RCVDBTPH Last received database journal entry PACKED(15 0) Calculated, 0-999999999999999 DB RECEIVE
transactions per hour ARRIVAL RATE
DBAPYREQ Number of database apply sessions PACKED(5 0) 1-6 REQUESTED
requested DB APPLY
SESSIONS
DBAPYMAX Number of database apply sessions PACKED(5 0) 1-6 CONFIGURED
configured DB APPLY
SESSIONS
DBAPYACT Number of database apply sessions PACKED(5 0) 1-6 ACTIVE DB
currently active (DBAPYPRC) APPLY
SESSIONS
DBAPYBKLG Number of database entries not applied PACKED(15 0) Calculated, 0-999999999999999 DB APPLY
BACKLOG

DBAPBKTIME Estimated time to process database PACKED(10 0) Calculated, 0-9999999999 DB APPLY TIME
entries not applied (seconds) SECONDS
DBAPBKTIMF Estimated time to process database CHAR(10) Calculated, 0-999:99:99 DB APPLY TIME
entries not applied (HHH:MM:SS) HHH:MM:SS
DBAPYTPH Database apply total transactions per PACKED(15 0) Calculated, 0-999999999999999 DB APPLY
hour PROCESSING
RATE
DBASTS Database apply session A status CHAR(10) *ACTIVE, *INACTIVE, *THRESHOLD, DB APPLY A
*UNKNOWN STATUS
DBARCVSEQ Database apply session A last received PACKED(10 0) 0-9999999999 DB APPLY A
sequence number LAST
RECEIVED
DBAPRCSEQ Database apply session A last PACKED(10 0) 0-9999999999 DB APPLY A
processed sequence number LAST
PROCESSED
DBABKLG Database apply session A number of PACKED(15 0) Calculated, 0-999999999999999 DB APPLY A
unprocessed entries BACKLOG
DBABKTIME Database apply session A estimated PACKED(10 0) Calculated, 0-9999999999 DB APPLY A
time to apply unprocessed transactions TIME SECONDS
(seconds)
DBABKTIMF Database apply session A estimated CHAR(10) Calculated, 0-999:99:99 DB APPLY A
time to apply unprocessed transactions TIME
(HHH:MM:SS) HHH:MM:SS
DBATPH Database apply session A number of PACKED(15 0) Calculated, 0-999999999999999 DB APPLY A
transactions per hour PROCESSING
RATE
DBAOPNCMT Database apply session A open commit CHAR(10) *YES, *NO DB APPLY A
indicator COMMIT
INDICATOR

DBACMTID Database apply session A oldest open CHAR(10) Journal-defined commit ID DB APPLY A
commit ID CURRENT
COMMIT ID
DBAAPYCODE Database apply session A last applied CHAR(3) See the IBM OS/400 Backup and Recovery Guide DB APPLY A
journal code and entry type for journal codes and entry types. TYPE AND
ENTRY
DBAAPYSEQ Database apply session A last applied PACKED(10 0) 0-9999999999 DB APPLY A
sequence number LAST APPLIED
DBAAPYTSP Database apply session A last applied TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu DB APPLY A
journal entry timestamp LAST
TIMESTAMP
DBAAPYOBJ Database apply session A object to CHAR(10) User-defined object name DB APPLY A
which last transaction was applied OBJECT NAME
DBAAPYLIB Database apply session A library of CHAR(10) User-defined object library name DB APPLY A
object to which last transaction was LIBRARY NAME
applied
DBAAPYMBR Database apply session A member of CHAR(10) User-defined object member name DB APPLY A
object to which last transaction was MEMBER NAME
applied.
DBAAPYTIME Database apply session A last applied PACKED(10 0) Calculated, 0-9999999999 DB APPLY A
journal entry clock time difference TIME DIFF
(seconds) SECONDS
DBAAPYTIMF Database apply session A last applied CHAR(10) Calculated, 0-999:99:99 DB APPLY A
journal entry clock time difference TIME DIFF
(HHH:MM:SS) HHH:MM:SS
DBAHLDSEQ Database apply session A hold MIMIX PACKED(10 0) 0-9999999999 DB APPLY A
log sequence number HOLD
SEQUENCE

DBxSTS through Reserved for up to 5 additional 885 bytes (5 x All DBx field values match the DBA field values. All DBx headings
DBxHLDSEQ, database apply sessions (‘B’ - ‘F’). 177) match the DBA
where x is Contains fields for each additional apply headings, with ‘x’
database apply session which correspond to fields for
session ‘B’ - ‘F’ apply session A (DBASTS through
DBAHLDSEQ).
CUROBJRCV Current object journal entry receiver CHAR(10) User-defined value OBJECT
name JRNRCV
CUROBJLIB Current object journal entry receiver CHAR(10) User-defined value OBJECT
library name JRNRCV
LIBRARY
CUROBJCODE Current object journal code and entry CHAR(3) See the IBM OS/400 Backup and Recovery Guide OBJECT TYPE
type for journal codes and entry types. AND ENTRY
CODES
CUROBJSEQ Current object journal entry sequence PACKED(10 0) 0-9999999999 OBJECT
number JOURNAL
SEQUENCES
CUROBJTSP Current object journal entry timestamp TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu OBJECT JRN
ENTRY
TIMESTAMP
CUROBJTPH Current object journal entry transactions PACKED(15 0) 0-999999999999999 OBJECT
per hour ARRIVAL PER
HOUR
RDOBJRCV Last read object journal entry receiver CHAR(10) User-defined value OBJRDRPRC
name (OBJSNTRCV) JRNRCV
RDOBJLIB Last read object journal entry receiver CHAR(10) User-defined value OBJRDRPRC
library name JRNRCV
LIBRARY
RDOBJCODE Last read object journal code and entry CHAR(3) See the IBM OS/400 Backup and Recovery Guide OBJRDRPRC
type for journal entry codes and entry types. TYPE AND
ENTRY CODE

RDOBJSEQ Last read object journal entry sequence PACKED(10 0) 0-9999999999 OBJRDRPRC
number (OBJSNTSEQ) JOURNAL
SEQUENCE
RDOBJTSP Last read object journal entry timestamp TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu OBJRDRPRC
(OBJSNTDATE, OBJSNTTIME) JRN ENTRY
TIMESTAMP
RDOBJTPH Last read object journal entry PACKED(15 0) Calculated, 0-999999999999999 OBJRDRPRC
transactions per hour READ RATE
OBJSNDBKLG Object entries not processed PACKED(15 0) Calculated, 0-999999999999999 OBJSNDPRC
BACKLOG
OBJSNDNUM Number of object entries sent PACKED(15 0) Calculated, 0-999999999999999 OBJSNDPRC
SENT IN TIME
SLICE
OBJSBKTIME Estimated time to process object entries PACKED(10 0) Calculated, 0-9999999999 OBJSNDPRC
not sent (seconds) BACKLOG
SECONDS
OBJSBKTIMF Estimated time to process entries not CHAR(10) Calculated, 0-999:99:99 OBJSNDPRC
sent (HHH:MM:SS) BACKLOG
HHH:MM:SS
RCVOBJRCV Last received object journal entry CHAR(10) User-defined value OBJRCVPRC
receiver name LAST RCVD
JRNRCV
RCVOBJLIB Last received object journal entry CHAR(10) User-defined value OBJRCVPRC
receiver library name LAST RCVD
JRNRCV LIB
RCVOBJCODE Last received object journal code and CHAR(3) See the IBM OS/400 Backup and Recovery Guide OBJRCVPRC
entry type for journal codes and entry types. LAST TYPE
AND ENTRY
RCVOBJSEQ Last received object journal entry PACKED(10 0) 0-9999999999 OBJRCVPRC
sequence number LAST ENTRY
SEQUENCE

RCVOBJTSP Last received object journal entry TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu OBJRCVPRC
timestamp LAST ENTRY
TIMESTAMP
RCVOBJTPH Last received object journal entry PACKED(15 0) 0-999999999999999 OBJRCVPRC
transactions per hour RECEIVE RATE
OBJRTVMIN Minimum number of object retriever PACKED(3 0) 1-99 OBJRTVPRC
processes MIN NUMBER
OF JOBS
OBJRTVACT Active number of object retriever PACKED(3 0) 1-99 OBJRTVPRC
processes (OBJRTVPRC) NUMBER OF
JOBS
OBJRTVMAX Maximum number of object retriever PACKED(3 0) 1-99 OBJRTVPRC
processes MAX NUMBER
OF JOBS
OBJRTVBKLG Number of object retriever entries not PACKED(15 0) 0-999999999999999 OBJRTVPRC
processed BACKLOG
OBJRTVCODE Last processed object retrieve journal CHAR(3) See the IBM OS/400 Backup and Recovery Guide OBJRTVPRC
code and entry type for journal codes and entry types. LAST TYPE
AND ENTRY
OBJRTVSEQ Last processed object retrieve journal PACKED(10 0) 0-9999999999 OBJRTVPRC
sequence number LAST
SEQUENCE
OBJRTVTSP Last processed object retrieve journal TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu OBJRTVPRC
entry timestamp (OBJRTVDATE, LAST
OBJRTVTIME) TIMESTAMP
OBJRTVTYPE Type of object last processed by object CHAR(10) Object type of user-defined object OBJRTVPRC
retrieve LAST OBJ TYPE
OBJRTVOBJ Qualified name of object last processed CHAR(1024) User-defined object name and path OBJRTVPRC
by object retrieve Note: Variable length of 75. LAST OBJ
NAME

CNRSNDMIN Minimum number of container send PACKED(3 0) 1-99 CNRSNDPRC
processes MIN NUMBER
OF JOBS
CNRSNDACT Active number of container send PACKED(3 0) 1-99 CNRSNDPRC
processes (CNRSNDPRC) NUMBER OF
JOBS
CNRSNDMAX Maximum number of container send PACKED(3 0) 1-99 CNRSNDPRC
processes MAX NUMBER
OF JOBS
CNRSNDBKLG Number of container send entries not PACKED(15 0) 0-999999999999999 CNRSNDPRC
processed BACKLOG
CNRSNDNUM Number of containers sent PACKED(15 0) 0-999999999999999 CNRSNDPRC
NUMBER SENT
CNRSNDCPH Containers per hour PACKED(15 0) 0-999999999999999 CNRSNDPRC
RATE
CNRSNDCODE Last processed container send journal CHAR(3) See the IBM OS/400 Backup and Recovery Guide CNRSNDPRC
code and entry type for journal codes and entry types. LAST TYPE
AND ENTRY
CNRSNDSEQ Last processed container send journal PACKED(10 0) 0-9999999999 CNRSNDPRC
sequence number (CNRSNTSEQ) LAST
SEQUENCE
CNRSNDTSP Last processed container send journal TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu CNRSNDPRC
entry timestamp (CNRSNTDATE, LAST
CNTRSNTTIME) TIMESTAMP
CNRSNDTYPE Type of object last processed by CHAR(10) Object type of user-defined object CNRSNDPRC
container send LAST OBJ TYPE
CNRSNDOBJ Qualified name of object last processed CHAR(1024) User-defined object name and path CNRSNDPRC
by container send Note: Variable length of 75. LAST OBJ
NAME

OBJAPYMIN Minimum number of object apply PACKED(3 0) 1-99 OBJAPYPRC
processes MIN NUMBER
OF JOBS
OBJAPYACT Active number of object apply PACKED(3 0) 1-99 OBJAPYPRC
processes (OBJAPYPRC) NUMBER OF
JOBS
OBJAPYMAX Maximum number of object apply PACKED(3 0) 1-99 OBJAPYPRC
processes MAX NUMBER
OF JOBS
OBJAPYBKLG Number of object apply entries not PACKED(15 0) Calculated, 0-999999999999999 OBJAPYPRC
processed BACKLOG
OBJAPYACTA Number of active objects PACKED(15 0) Calculated, 0-999999999999999 OBJAPYPRC
ACTIVE
BACKLOG
OBJAPYNUM Number of object entries applied PACKED(15 0) Calculated, 0-999999999999999 OBJAPYPRC
APPLIED IN
TIME SLICE
OBJABKTIME Estimated time to process object entries PACKED(10 0) Calculated, 0-9999999999 OBJAPYPRC
not applied (seconds) BACKLOG
SECONDS
OBJABKTIMF Estimated time to process object entries CHAR(10) Calculated, 0-999:99:99 OBJAPYPRC
not applied (HHH:MM:SS) BACKLOG
HHH:MM:SS
OBJAPYTPH Number of object entries applied per PACKED(15 0) Calculated, 0-999999999999999 OBJAPYPRC
hour RATE
OBJAPYCODE Last applied object journal code and CHAR(3) See the IBM OS/400 Backup and Recovery Guide OBJAPYPRC
entry type for journal codes and entry types. LAST TYPE
AND ENTRY
OBJAPYSEQ Last applied object journal sequence PACKED(10 0) 0-9999999999 OBJAPYPRC
number (OBJAPYSEQ) LAST
SEQUENCE

OBJAPYTSP Last applied object journal entry TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu OBJAPYPRC
timestamp (OBJAPYDATE, LAST
OBJAPYTIME) TIMESTAMP
OBJAPYTYPE Type of object last processed by object CHAR(10) Object type of user-defined object OBJAPYPRC
apply LAST OBJ TYPE
OBJAPYOBJ Qualified name of object last processed CHAR(1024) User-defined object name and path OBJAPYPRC
by object apply Note: Variable length of 75. LAST OBJ
NAME
RJINUSE Remote journal (RJ) link used by data CHAR(10) *YES, *NO RJ LINK USED
group BY DG
RJSRCDFN RJ link source journal definition CHAR(10) User-defined journal definition name RJ LINK
SOURCE
JRNDFN
RJSRCSYS RJ link source system CHAR(8) User-defined system name RJ LINK
SOURCE
SYSTEM
RJTGTDFN RJ link target journal definition CHAR(10) User-defined journal definition name RJ LINK
TARGET
JRNDFN
RJTGTSYS RJ link target system CHAR(8) User-defined system name RJ LINK
TARGET
SYSTEM
RJPRIRDB RJ link primary RDB entry CHAR(18) User-defined or MIMIX generated RDB name RJ PRIMARY
RDB ENTRY
RJPRITFR RJ link primary transfer definition name CHAR(10) User-defined transfer definition name RJ PRIMARY
TFRDFN
RJSECRDB RJ link secondary RDB entry CHAR(18) User-defined or MIMIX generated RDB name RJ
SECONDARY
RDB ENTRY

RJSECTFR RJ link secondary transfer definition CHAR(10) User-defined transfer definition name RJ
name SECONDARY
TFRDFN
RJSTATE RJ link state CHAR(10) BLANK, *FAILED, *CTLINACT, *INACTPEND, RJ LINK STATE
*ASYNC, *SYNC, *ASYNPEND, *SYNCPEND,
*NOTBUILT, *UNKNOWN
RJDLVRY RJ link delivery mode CHAR(10) *ASYNC, *SYNC, BLANK RJ LINK
DELIVERY
MODE
RJSNDPTY RJ link send task priority PACKED(3 0) 0-99 0=*SYSDFT RJ LINK SEND
PRIORITY
RJRDRSTS RJ reader task status CHAR(10) BLANK, *UNKNOWN, *ACTIVE, *INACTIVE, RJREADER
*THRESHOLD STATUS
RJSMONSTS RJ link source monitor status CHAR(10) BLANK, *UNKNOWN, *ACTIVE, *INACTIVE RJ SOURCE
MONITOR
RJTMONSTS RJ link target monitor status CHAR(10) BLANK, *UNKNOWN, *ACTIVE, *INACTIVE RJ TARGET
MONITOR
ITECNT Total IFS tracking entries PACKED(10 0) 0-999999 TOTAL IFS
TRACKING
ENTRIES
ITEACTIVE Active IFS tracking entries PACKED(10 0) 0-999999 ACTIVE IFS
TRACKING
ENTRIES
ITENOTACT Inactive IFS tracking entries PACKED(10 0) 0-999999 INACT IFS
TRACKING
ENTRIES
ITENOTJRNS IFS tracking entries not journaled on PACKED(10 0) 0-999999 IFS TE NOT
source JOURNALED
ON SOURCE

ITENOTJRNT IFS tracking entries not journaled on PACKED(10 0) 0-999999 IFS TE NOT
target JOURNALED
ON TARGET
ITEHLDERR IFS tracking entries held due to error PACKED(10 0) 0-999999 IFS TE HELD
FOR ERRORS
ITEHLDOTHR IFS tracking entries held for other PACKED(10 0) 0-999999 IFS TE HELD
reasons FOR OTHER
OTECNT Total object tracking entries PACKED(10 0) 0-999999 TOTAL OBJ
TRACKING
ENTRIES
OTEACTIVE Active object tracking entries PACKED(10 0) 0-999999 ACTIVE OBJ
TRACKING
ENTRIES
OTENOTACT Inactive object tracking entries PACKED(10 0) 0-999999 INACT OBJ
TRACKING
ENTRIES
OTENOTJRNS Object tracking entries not journaled on PACKED(10 0) 0-999999 OBJ TE NOT
source JOURNALED
ON SOURCE
OTENOTJRNT Object tracking entries not journaled on PACKED(10 0) 0-999999 OBJ TE NOT
target JOURNALED
ON TARGET
OTEHLDERR Object tracking entries held due to error PACKED(10 0) 0-999999 OBJ TE HELD
FOR ERRORS
OTEHLDOTHR Object tracking entries held for other PACKED(10 0) 0-999999 OBJ TE HELD
reasons FOR OTHER
JRNCACHETA Journal cache target CHAR(10) *YES, *NO, *UNKNOWN JOURNAL
CACHE
TARGET

JRNCACHESA Journal cache source CHAR(10) *YES, *NO, *UNKNOWN JOURNAL
CACHE
SOURCE
JRNSTATETA Journal state target CHAR(10) *ACTIVE, *STANDBY, *INACTIVE JOURNAL
STATE TARGET
JRNSTATESA Journal state source CHAR(10) *ACTIVE, *STANDBY, *INACTIVE JOURNAL
STATE SOURCE
JRNCACHETS Journal cache status - target CHAR(10) *ERROR, *NONE, *OK, *WARNING, JRN CACHE
*NOFEATURE, *UNKNOWN TARGET
STATUS
JRNCACHESS Journal cache status - source CHAR(10) *ERROR, *NONE, *OK, *WARNING, JRN CACHE
*NOFEATURE, *UNKNOWN SOURCE
STATUS
JRNSTATETS Journal state target status CHAR(10) *ERROR, *NONE, *OK, *WARNING, JOURNAL
*NOFEATURE, *UNKNOWN STATE TARGET
JRNSTATESS Journal state source status CHAR(10) *ERROR, *NONE, *OK, *WARNING, JOURNAL
*NOFEATURE, *UNKNOWN STATE SOURCE
RJTGTRCV Last RJ target journal entry receiver CHAR(10) User-defined value RJ TGT
name JRNRCV
RJTGTLIB Last RJ target journal entry receiver CHAR(10) User-defined value RJ TGT
library name JRNRCV
LIBRARY
RJTGTCODE Last RJ target journal code and entry CHAR(3) Valid journal entry types and codes RJ TGT TYPE
type AND ENTRY
CODE
RJTGTSEQ Last RJ target journal entry sequence PACKED(10 0) 0-9999999999 RJ TGT ENTRY
number SEQUENCE
RJTGTTSP Last RJ target journal entry timestamp TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu RJ TGT ENTRY
TIMESTAMP

OBJRTVUCS Qualified name of object last processed GRAPHIC(512) User-defined object name and path LAST OBJ
by object retrieve - Unicode VARLEN(75) RETRIEVED
CCSID(13488) (UNICODE)
CNRSNDUCS Qualified name of object last processed GRAPHIC(512) User-defined object name and path LAST OBJ SENT
by container send - Unicode VARLEN(75) (UNICODE)
CCSID(13488)
OBJAPYUCS Qualified name of object last processed GRAPHIC(512) User-defined object name and path LAST OBJ
by object apply - Unicode VARLEN(75) APPLIED
CCSID(13488) (UNICODE)
FECNT2 Total database file entries PACKED(10 0) 0-9999999999 TOTAL
DB FILE
ENTRIES2
FEACTIVE2 Active database file entries (FEACT) PACKED(10 0) 0-9999999999 ACTIVE
DB FILE
ENTRIES2
FENOTACT2 Inactive database file entries PACKED(10 0) 0-9999999999 INACTIVE
DB FILE
ENTRIES2
FENOTJRNS2 Database file entries not journaled on PACKED(10 0) 0-9999999999 FILES NOT
source JOURNALED
ON SOURCE2
FENOTJRNT2 Database file entries not journaled on PACKED(10 0) 0-9999999999 FILES NOT
target JOURNALED
ON TARGET2
FEHLDERR2 Database file entries held due to error PACKED(10 0) 0-9999999999 FILES
HELD FOR
ERRORS2
FEHLDOTHR2 Database file entries held for other PACKED(10 0) 0-9999999999 FILES HELD
reasons (FEHLD) FOR OTHERS2
FECMPRPR2 Database file entries being repaired PACKED(10 0) 0-9999999999 FILES
BEING
REPAIRED2

RJLNKTHLDM RJ Link Threshold Exceeded (Time in PACKED(4 0) 0-9999 RJLNK
minutes) THRESHOLD
(TIME IN MIN)
RJLNKTHLDE RJ Link Threshold Exceeded (Number PACKED(7 0) 0-9999999 RJLNK
of journal entries) THRESHOLD
(NBR OF JRNE)
DBRDRTHLDM DB Send/Reader Threshold Exceeded PACKED(4 0) 0-9999 DBSND/DBRDR
(Time in minutes) THRESHOLD
(TIME IN MIN)
DBRDRTHLDE DB Send/Reader Threshold Exceeded PACKED(7 0) 0-9999999 DBSND/DBRDR
(Number of journal entries) THRESHOLD
(NBR OF JRNE)
DBAPYATHLD DB Apply A Threshold Exceeded PACKED(5 0) 0-99999 DB APPLY A
(Number of journal entries) THRESHOLD
DBAPYBTHLD DB Apply B Threshold Exceeded PACKED(5 0) 0-99999 DB APPLY B
(Number of journal entries) THRESHOLD
DBAPYCTHLD DB Apply C Threshold Exceeded PACKED(5 0) 0-99999 DB APPLY C
(Number of journal entries) THRESHOLD
DBAPYDTHLD DB Apply D Threshold Exceeded PACKED(5 0) 0-99999 DB APPLY D
(Number of journal entries) THRESHOLD
DBAPYETHLD DB Apply E Threshold Exceeded PACKED(5 0) 0-99999 DB APPLY E
(Number of journal entries) THRESHOLD
DBAPYFTHLD DB Apply F Threshold Exceeded PACKED(5 0) 0-99999 DB APPLY F
(Number of journal entries) THRESHOLD
OBJSNDTHDM Object Send Threshold Exceeded (Time PACKED(4 0) 0-9999 OBJSND
in minutes) THRESHOLD
(TIME IN MIN)
OBJSNDTHDE Object Send Threshold Exceeded PACKED(7 0) 0-9999999 OBJSND
(Number of journal entries) THRESHOLD
(NBR OF JRNE)

OBJRTVTHDE Object Retrieve Threshold Exceeded PACKED(5 0) 0-99999 OBJRTV
(Number of activity entries) THRESHOLD
CNRSNDTHDE Container Send Threshold Exceeded PACKED(5 0) 0-99999 CNRSND
(Number of activity entries) THRESHOLD
OBJAPYTHDE Object Apply Threshold Exceeded PACKED(5 0) 0-99999 OBJAPY
(Number of activity entries) THRESHOLD
RJBKLG RJ Backlog PACKED(15 0) Calculated, 0-999999999999999 RJ BACKLOG
CURDBSEQ2 Current database journal entry large ZONED(20 0) 0-99999999999999999999 (twenty 9s) DB ENTRY
sequence number LARGE
SEQUENCE
RDDBSEQ2 Last read database journal entry large ZONED(20 0) 0-99999999999999999999 (twenty 9s) DB READER
sequence number ENTRY LARGE
SEQUENCE
RCVDBSEQ2 Last received database journal entry ZONED(20 0) 0-99999999999999999999 (twenty 9s) DB LAST
large sequence number RECEIVED
LARGE
SEQUENCE
DBARCVSEQ2 Database apply session A last received ZONED(20 0) 0-99999999999999999999 (twenty 9s) DB APPLY A
large sequence number LAST
RECEIVED
LARGE
SEQUENCE
DBAPRCSEQ2 Database apply session A last ZONED(20 0) 0-99999999999999999999 (twenty 9s) DB APPLY A
processed large sequence number LAST
PROCESSED
LARGE
SEQUENCE
DBACMTID2 Database apply session A oldest open CHAR(20) 0-99999999999999999999 (twenty 9s) DB APPLY A
large commit ID CURRENT
LARGE
COMMIT ID

DBAAPYSEQ2 Database apply session A last applied ZONED(20 0) 0-99999999999999999999 (twenty 9s) DB APPLY A
large sequence number LAST APPLIED
LARGE
SEQUENCE
DBAHLDSEQ2 Database apply session A hold MIMIX ZONED(20 0) 0-99999999999999999999 (twenty 9s) DB APPLY A
log large sequence number HOLD LARGE
SEQUENCE
DBxRCVSEQ2 Reserved for up to 5 additional 600 bytes (5 x All DBx field values match the DBA field values. All DBx headings
through database apply sessions (‘B’ - ‘F’). 120) match the DBA
DBxHLDSEQ2 Contains fields for each additional apply headings, with ‘x’
where x is session which correspond to fields for
database apply apply session A (DBARCVSEQ2
session ‘B’ - ‘F’ through DBAHLDSEQ2)
CUROBJSEQ2 Current object journal entry large ZONED(20 0) 0-99999999999999999999 (twenty 9s) OBJECT
sequence number JOURNAL
LARGE
SEQUENCE
RDOBJSEQ2 Last read object journal entry large ZONED(20 0) 0-99999999999999999999 (twenty 9s) OBJRDRPRC
sequence number JOURNAL
LARGE
SEQUENCE
RCVOBJSEQ2 Last received object journal entry large ZONED(20 0) 0-99999999999999999999 (twenty 9s) OBJRCVPRC
sequence number LAST ENTRY
LARGE
SEQUENCE
OBJRTVSEQ2 Last processed object retrieve journal ZONED(20 0) 0-99999999999999999999 (twenty 9s) OBJRTVPRC
entry large sequence number LAST ENTRY
LARGE
SEQUENCE
CNRSNDSEQ2 Last processed container send journal ZONED(20 0) 0-99999999999999999999 (twenty 9s) CNRSNDPRC
entry large sequence number LAST ENTRY
LARGE
SEQUENCE

OBJAPYSEQ2 Last applied object journal entry large ZONED(20 0) 0-99999999999999999999 (twenty 9s) OBJAPYPRC
sequence number LAST ENTRY
LARGE
SEQUENCE
RJTGTSEQ2 Last RJ target journal entry large ZONED(20 0) 0-99999999999999999999 (twenty 9s) RJ TARGET
sequence number LAST ENTRY
LARGE
SEQUENCE
PRLAPMNTST Parallel Access Path Maintenance CHAR(10) For service pack 7.1.15.00 or higher: *NONE PARALLEL AP
Status For earlier service packs: *NONE, *UNKNOWN, MAINT STATUS
*ACTIVE, *INACTIVE, *PARTIAL
OBJSNDPFX Object send prefix CHAR(10) name, *DGDFN, *SHARED OBJECT SEND
PREFIX
APMNTSTS Access path maintenance job status CHAR(10) *NONE, *UNKNOWN, *ACTIVE, *INACTIVE AP MAINT
Note: Available in service pack 7.1.15.00 STATUS
or higher.
APMNTERR Access path maintenance error count PACKED (5 0) 0-99999 AP MAINT
Note: Available in service pack 7.1.15.00 ERROR COUNT
or higher.

WRKDG outfile SELECT statement examples


Following are some example SELECT statements that query a WRKDG outfile and produce various outfile reports. The first three
examples show how to use wildcards to produce reports about specific data groups in the outfile.
The last example adds a few fields, in request time sequence, to produce outfile reports with additional data group-related
information.
These are basic examples; there may be additional formatting options that you want to apply to your output.


WRKDG outfile example 1


This SELECT statement uses a single wildcard character to query the outfile to retrieve and display all of the data group names that start
with an ‘A’ and have 0 or more characters following the ‘A’. The records are listed in record arrival order. The statement would be entered
as follows:
SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN like 'A%'
The outfile report produced follows:
DGN SYS SYS
ACCTPAY CHICAGO LONDON
ACCTREC CHICAGO LONDON
APP1 CHICAGO LONDON
APP2 CHICAGO LONDON

WRKDG outfile example 2


This SELECT statement uses wildcard characters to query the outfile for all data group names that are in the outfile. The records are listed
in record arrival order. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN like '%%'
The outfile report produced follows:
DGN SYS SYS
INVENTORY CHICAGO LONDON
PAYROLL CHICAGO LONDON
ACCTPAY CHICAGO LONDON
ORDERS CHICAGO LONDON
ACCTREC CHICAGO LONDON
APP1 CHICAGO LONDON
APP2 CHICAGO LONDON
SUPERAPP CHICAGO LONDON

WRKDG outfile example 3


This SELECT statement uses wildcard characters to find all data groups with names that contain an ‘A’. The records are listed in record
arrival order. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN like '%A%'
The outfile report produced follows:
DGN SYS SYS
PAYROLL CHICAGO LONDON
ACCTPAY CHICAGO LONDON
ACCTREC CHICAGO LONDON
APP1 CHICAGO LONDON
APP2 CHICAGO LONDON
SUPERAPP CHICAGO LONDON

WRKDG outfile example 4


This SELECT statement selects all records that have a data group name containing an ‘A’. These records are listed in data group name
order with all duplicate data group names listed by the time the entry was placed in the outfile. All records for a data group are listed
together in ascending time sequence. Additionally, the timestamp when the entry was placed in the file and the current top sequence
number of the object journal are listed with each entry. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2, ENTRYTSP, CUROBJSEQ FROM library/filename WHERE DGDFN like '%A%'
ORDER BY DGDFN, DGSYS1, DGSYS2, ENTRYTSP
The outfile report produced follows:
DGN SYS SYS ENTRYTSP SEQN
PAYROLL CHICAGO LONDON 2001-02-06-11.09.59.842000 29,034,877
ACCTPAY CHICAGO LONDON 2001-02-06-11.24.05.851000 29,035,093
ACCTREC CHICAGO LONDON 2001-02-06-11.09.59.842000 29,034,879
APP1 CHICAGO LONDON 2001-02-06-11.24.05.851000 29,035,095
APP2 CHICAGO LONDON 2001-02-06-14.24.49.793000 29,051,130
SUPERAPP CHICAGO LONDON 2001-02-06-11.09.59.842000 0
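The ENTRYTSP value selected in example 4, like the other *TSP fields in this outfile, uses the SAA timestamp layout YYYY-MM-DD-hh.mm.ss.mmmuuu. When post-processing query results outside of SQL, that layout can be parsed with a standard datetime format string, as in the following Python sketch. The processing is illustrative only and not part of MIMIX; the helper names are invented for the example.

```python
from datetime import datetime

SAA_FMT = "%Y-%m-%d-%H.%M.%S.%f"   # YYYY-MM-DD-hh.mm.ss.mmmuuu

def parse_saa(ts: str) -> datetime:
    """Parse an SAA-format timestamp string taken from an outfile field."""
    return datetime.strptime(ts.strip(), SAA_FMT)

def lag_seconds(earlier: str, later: str) -> float:
    """Elapsed seconds between two outfile timestamps,
    for example between two ENTRYTSP values from successive requests."""
    return (parse_saa(later) - parse_saa(earlier)).total_seconds()

# Using two ENTRYTSP values from the example 4 report:
print(lag_seconds("2001-02-06-11.09.59.842000",
                  "2001-02-06-11.24.05.851000"))   # 846.009
```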


MXDGOBJE outfile (WRKDGOBJE command)

Table 179. MXDGOBJE outfile (WRKDGOBJE command)


Field Description Type, length Valid values Column head-
ings
DGDFN Data group name (Data group CHAR(10) User-defined data group name DGDFN NAME
definition)
DGSYS1 System 1 name (Data group CHAR(8) User-defined system name DGDFN SYSTEM
definition) 1
DGSYS2 System 2 name (Data group CHAR(8) User-defined system name DGDFN SYSTEM
definition) 2
OBJ1 System 1 folder CHAR(10) User-defined name, *ALL SYSTEM 1
OBJECT
LIB1 System 1 library CHAR(10) User-defined name, generic* SYSTEM 1
LIBRARY
TYPE Object type CHAR(10) Refer to the OM5100P file for the list of valid values OBJECT TYPE
OBJATR Object attribute CHAR(10) Refer to the OM5200P file for the list of valid object OBJECT
attributes ATTRIBUTE
OBJ2 System 2 object CHAR(10) User-defined name, *ALL, generic*, *OBJ1 SYSTEM 2
OBJECT
LIB2 System 2 library CHAR(10) User-defined name, generic*, *LIB1 SYSTEM 2
LIBRARY
OBJAUD Object auditing value (configured CHAR(10) *CHANGE, *ALL, *NONE OBJECT
value) AUDITING
VALUE
PRCTYPE Process type CHAR(10) *INCLD, *EXCLD PROCESS TYPE
COOPDB Cooperate with database CHAR(10) *YES, *NO COOPERATE
WITH DATABASE
REPSPLF Replicate spooled files CHAR(10) *YES, *NO REPLICATE
SPOOLED FILES

KEEPSPLF Keep deleted spooled files CHAR(10) *YES, *NO KEEP DLTD
SPOOLED FILES
OBJRTVDLY Retrieve delay (Object retrieve CHAR(10) 0-999, *DGDFT OBJRTVPRC
processing) DELAY
USRPRFSTS User profile status CHAR(10) *DGDFT, *DISABLED, *ENABLED, *SRC, *TGT USER PROFILE
STATUS
JRNIMG Journal image (File entry options) CHAR(10) *DGDFT, *AFTER, *BOTH FEOPT
JOURNAL IMAGE
OPNCLO Omit open and close entries (File CHAR(10) *DGDFT, *YES, *NO FEOPT OMIT
entry options) OPEN CLOSE
REPTYPE Replication type (File entry CHAR(10) *DGDFT, *POSITION, *KEYED FEOPT
options) REPLICATION
TYPE
APYLOCK Lock member during apply (File CHAR(10) *DGDFT, *EXCLRD, *SHRNUP, *NONE1 FEOPT LOCK
entry options) MBR ON APPLY
APYSSN Apply session (File entry options) CHAR(10) A-F, *DGDFT, *ANY FEOPT
CURRENT
APYSSN
CRCLS Collision resolution (File entry CHAR(10) User-defined name, *DGDFT, *HLDERR, FEOPT
options) *AUTOSYNC COLLISION
RESOLUTION
DSBTRG Disable triggers during apply (File CHAR(10) *YES, *NO, *DGDFT FEOPT DISABLE
entry options) TRIGGERS
PRCTRG Process trigger entries (File entry CHAR(10) *YES, *NO, *DGDFT FEOPT
options) PROCESS
TRIGGERS
PRCCST Process constraint entries (File CHAR(10) *YES FEOPT
entry options) PROCESS
CONSTRAINTS
LIB1ASP System 1 library ASP number PACKED(3 0) 0 = *SRCLIB, 1-32, SYSTEM 1
-1 = *ASPDEV LIBRARY ASP

LIB1ASPD System 1 library ASP device CHAR(10) *LIB1ASP, User-defined name SYSTEM 1
(File entry options) LIBRARY ASP
DEV
LIB2ASP System 2 library ASP number PACKED(3 0) 0 = *SRCLIB, 1-32, SYSTEM 2
-1 = *ASPDEV LIBRARY ASP
LIB2ASPD System 2 library ASP device CHAR(10) *LIB2ASP, User-defined name SYSTEM 2
(File entry options) LIBRARY ASP
DEV
NBROMTDTA Number of omit content PACKED(3 0) 1-10 NUMBER OF
(OMTDTA) values OMIT CONTENT
VALUES
OMTDTA Omit content values (File entry CHAR(100) *NONE, *FILE, *MBR (10 characters each) OMIT CONTENT
options)
SPLFOPT Spooled file options CHAR(10) *NONE, *HLD, *HLDONSAV SPOOLED FILE
OPTIONS
NUMCOOPTYP Number of cooperating object PACKED(3 0) 0-999 NUMBER OF
types COOPERATING
OBJECT TYPES
COOPTYPE Cooperating object types CHAR(100) *FILE, *DTAARA, *DTAQ COOPERATING
OBJECT TYPES
NBRATROPT Number of attribute options PACKED(3 0) -1, 1-50 NUMBER OF
ATTRIBUTE
ATROPT Attribute options CHAR(500) *ALL ATTRIBUTE
TEXT Description CHAR(50) *BLANK, user-defined text DESCRIPTION
1. These values are supported on installations running MIMIX service pack 8.0.08.00 or higher. Previous software levels supported values *YES and *NO, which are
mapped to *EXCLRD and *NONE, respectively, on 8.0.08.00 or higher systems.


MXDGTSP outfile (WRKDGTSP command)

Table 180. MXDGTSP outfile (WRKDGTSP command)


Field Description Type, length Valid values Column head-
ings
DGDFN Data group name (Data group CHAR(10) User-defined data group name DGDFN NAME
definition)
DGSYS1 System 1 name (Data group CHAR(8) User-defined system name DGDFN SYSTEM
definition) 1
DGSYS2 System 2 name (Data group CHAR(8) User-defined system name DGDFN SYSTEM
definition) 2
DTASRC Data source CHAR(10) *SYS1, *SYS2 DATA SOURCE
APYSSN Apply session CHAR(10) A-F APPLY SESSION
CRTTSP Create Timestamp (YYYY-MM- TIMESTAMP SAA timestamp - normalized to the target system CREATE
DD.HH.MM.SS.mmmmmm) (Timestamp when the journal entry is created.) TIMESTAMP
SNDTSP Send Timestamp (YYYY-MM- TIMESTAMP SAA timestamp - normalized to the target system SEND
DD.HH.MM.SS.mmmmmm) (Timestamp value is set equal to the create TIMESTAMP
timestamp (CRTTSP) when using remote journaling.
For non-remote journaling, this is the time the journal
entry is read on the source system and is sent by the
MIMIX send process.)
RCVTSP Receive Timestamp (YYYY-MM- TIMESTAMP SAA timestamp - normalized to the target system RECEIVE
DD.HH.MM.SS.mmmmmm) (Timestamp when the journal entry is received by the TIMESTAMP
journal reader on the target system when using
remote journaling, or received by the target system by
the MIMIX send process for non-remote journaling.)
APYTSP Apply Timestamp (YYYY-MM- TIMESTAMP SAA timestamp - normalized to the target system APPLY
DD.HH.MM.SS.mmmmmm) (Timestamp when the journal entry is applied on the TIMESTAMP
target system.)

CRTSNDET Elapsed time between create and PACKED(10 0) Calculated, 0-9999999999 SEND ELAPSED
send process (milliseconds) (Elapsed time between generation of the timestamp TIME
and the time the entry sent by the MIMIX send
process is received on the target system for non-
remote journaling. For remote journaling, the create
and send times are set equal, so the elapsed time will
be a value of 0.)
SNDRCVET Elapsed time between send and PACKED(10 0) Calculated, 0-9999999999 RECEIVE
receive process (milliseconds) (Elapsed time between the send time and the receive ELAPSED TIME
time.)
RCVAPYET Elapsed time between receive and PACKED(10 0) Calculated, 0-9999999999 APPLY ELAPSED
apply process (milliseconds) (Elapsed time between the receive time and the apply TIME
time.)
CRTAPYET Elapsed time between create and PACKED(10 0) Calculated, 0-9999999999 TOTAL ELAPSED
apply timestamps (milliseconds) (Elapsed time between generation of the timestamp TIME
to the time when the journal entry is applied on the
target system.)
SYSTDIFF The time differential between the PACKED(10 0) -9999999999-0, 0-9999999999 TIME
source and target systems, where DIFFERENCE
time differential = source time -
target time
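The four elapsed-time fields are simple differences between the timestamp columns above, expressed in milliseconds, with all timestamps normalized to the target system's clock. A sketch of the arithmetic in Python, assuming the standard SAA timestamp layout yyyy-mm-dd-hh.mm.ss.uuuuuu used elsewhere in these outfiles (the function name is illustrative):

```python
from datetime import datetime

SAA_FMT = "%Y-%m-%d-%H.%M.%S.%f"

def elapsed_ms(earlier: str, later: str) -> int:
    """Milliseconds between two SAA timestamps (e.g. CRTTSP and APYTSP)."""
    delta = datetime.strptime(later, SAA_FMT) - datetime.strptime(earlier, SAA_FMT)
    return int(delta.total_seconds() * 1000)

crttsp = "2017-06-01-10.15.30.000000"
apytsp = "2017-06-01-10.15.31.250000"
print(elapsed_ms(crttsp, apytsp))   # 1250, i.e. CRTAPYET for this entry
```

Under remote journaling, CRTTSP and SNDTSP are equal, so the same arithmetic yields the documented CRTSNDET value of 0.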


MXJRNDFN outfile (WRKJRNDFN command)

Table 181. MXJRNDFN outfile (WRKJRNDFN command)


Field Description Type, length Valid values Column head-
ings
JRNDFN Journal definition name (Journal CHAR(10) User-defined journal definition name JRNDFN NAME
definition)
JRNSYS System name (Journal definition) CHAR(8) User-defined system name JRNDFN
SYSTEM
JRN Journal name (Journal) CHAR(10) Journal, *JRNDFN JOURNAL
JRNLIB Journal library (Journal) CHAR(10) Journal library JOURNAL
LIBRARY
JRNLIBASP Journal library ASP PACKED(3 0) Numeric value JOURNAL
0 = *CRTDFT LIBRARY ASP
1-32
- 1 = *ASPDEV
JRNRCVPFX Journal receiver prefix (Journal receiver CHAR(10) *GEN, user-defined name JRNRCV
prefix) PREFIX
JRNRCVLIB Journal receiver library (Journal CHAR(10) User-defined name, *JRNLIB JRNRCV
receiver prefix) LIBRARY
RCVLIBASP Journal receiver library ASP PACKED(3 0) Numeric value JRNRCV
0 = *CRTDFT LIBRARY ASP
1-32
- 1 = *ASPDEV
CHGMGT Receiver change management CHAR(20) 2 x CHAR(10) - *NONE, *TIME, *SIZE, RECEIVER
*SYSTEM The only valid combinations are: CHANGE
*TIME *SIZE MANAGEMENT
*TIME *SYSTEM
THRESHOLD Receiver threshold size (MB) PACKED(7 0) 10-1000000 RECEIVER
THRESHOLD
SIZE (MB)

RCVTIME Time of day to change receiver ZONED(6 0) Time RECEIVER
CHANGE TIME
RESETTHLD Reset sequence threshold PACKED(5 0) 10-1000000 RESET
SEQUENCE
THRESHOLD
DLTMGT Receiver delete management CHAR(10) *YES, *NO RECEIVER
DELETE
MANAGEMENT
KEEPUNSAV Keep unsaved journal receivers CHAR(10) *YES, *NO KEEP
UNSAVED
JRNRCV
KEEPRCVCNT Keep journal receiver count PACKED(3 0) 0-999 KEEP JRNRCV
COUNT
KEEPJRNRCV Keep journal receivers (days) PACKED(3 0) 0-999 KEEP JRNRCV
(DAYS)
TEXT Description CHAR(50) *BLANK, User-defined text DESCRIPTION
JRNRCVASP Journal receiver ASP PACKED(3 0) Numeric value (0 = *LIBASP) JRNRCV ASP
MSGQ Threshold message queue CHAR(10) User-defined name, *JRNDFN MSGQ
THRESHOLD
MSGQ
MSGQLIB Threshold message queue library CHAR(10) *JRNLIB, user-defined name (See field JRNLIB if MSGQ
this field contains *JRNLIB) THRESHOLD
MSGQ
LIBRARY
RJLNK Remote journal link CHAR(10) *NONE, *SOURCE, *TARGET RJ LINK
EXITPGM Exit program CHAR(10) *NONE, user-defined name EXIT
PROGRAM
EXITPGMLIB Exit program library CHAR(10) User-defined name EXIT
PROGRAM
LIBRARY

MINENTDTA Minimal journal entry data CHAR(100) Array of 10 CHAR(10) fields *DTAARA, MIN JRN
*FLDBDY, *FILE, *NONE ENTRY DATA
REQTHLDSIZ Requested threshold size PACKED(7 0) Numeric value REQUESTED
THRESHOLD
SIZE
SAVTYPE Save type CHAR(10) SAVE TYPE
JRNLAGLMT Journaling lag limit (seconds) PACKED(3 0) JOURNALING
LAG LIMIT
(SEC)
JRNLIBASPD Journal library ASP device CHAR(10) *JRNLIBASP, user-defined name JOURNAL
LIBRARY ASP
DEV
RCVLIBASPD Journal receiver library ASP device CHAR(10) *RCVLIBASP, user-defined name JRNRCV
LIBRARY ASP
DEV
TGTSTATE Target journal state CHAR(10) *ACTIVE, *STANDBY TARGET
JOURNAL
STATE
JRNCACHE Journal cache option CHAR(10) *SRC, *TGT, *BOTH, *NONE JOURNAL
CACHING
RCVSIZOPT Receiver size option CHAR(10) *MAXOPT2, *MAXOPT3 RECEIVER
SIZE OPTION
RESETTHLD2 Reset large sequence threshold PACKED(15, 0) 10-100,000,000,000,000 RESET
SEQUENCE
THRESHOLD2
TGTJRNINSP Target journal inspection CHAR(10) *YES, *NO TARGET
JOURNAL
INSPECTION
PROCESS Used by process CHAR(10) *REP, *INTERNAL USED BY
PROCESS
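CHGMGT above is a CHAR(20) column built from two CHAR(10) sub-fields, and only certain pairs are documented as valid (*TIME *SIZE or *TIME *SYSTEM). A hedged sketch of splitting and checking the column; the helper names are illustrative, and treating any one of the listed single values as valid on its own is an inference from the value list, not a MIMIX rule:

```python
VALID_PAIRS = {("*TIME", "*SIZE"), ("*TIME", "*SYSTEM")}
SINGLE_VALUES = {"*NONE", "*TIME", "*SIZE", "*SYSTEM"}

def decode_chgmgt(field: str) -> list[str]:
    """Split the CHAR(20) column into its two CHAR(10) values, dropping blanks."""
    return [v for v in (field[:10].strip(), field[10:20].strip()) if v]

def is_valid_chgmgt(values) -> bool:
    if len(values) == 1:
        return values[0] in SINGLE_VALUES
    return tuple(values) in VALID_PAIRS

pair = decode_chgmgt("*TIME     *SIZE     ")
print(pair, is_valid_chgmgt(pair))   # ['*TIME', '*SIZE'] True
```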


MXRJLNK outfile (WRKRJLNK command)

Table 182. MXRJLNK outfile (WRKRJLNK command)


Field Description Type, length Valid values Column head-
ings
SRCJRNDFN Journal definition name on source CHAR(10) Journal definition name SOURCE
JOURNAL
DEFINITION
SRCSYS Source system name of journal definition CHAR(8) System name SOURCE
SYSTEM
SRCJEJRNA Source Journal Library ASP DEC(3) 0 = *CRTDFT SRC JRN
-1 = *ASPDEV LIBRARY
ASP
SRCJEJLAD Source Journal Library ASP Device CHAR(10) *JRNLIBASP, *ASPDEV, ASP Primary SRC JRN
Group name LIBRARY
ASP DEV
SRCJERCVA Source Journal Receiver Library ASP DEC(3) 0 = *CRTDFT SRC JRNRCV
-1 = *ASPDEV LIBRARY
ASP
SRCJERLAD Source Journal Receiver Library ASP Device CHAR(10) *RCVLIBASP, *ASPDEV, ASP Primary SRC JRNRCV
Group name LIBRARY
ASP DEV
TGTJRNDFN Journal definition name on target CHAR(10) Journal definition name TARGET
JOURNAL
DEFINITION
TGTSYS Target system name of journal definition CHAR(8) System name TARGET
SYSTEM
TGTJEJRNA Target Journal Library ASP DEC(3) 0 = *CRTDFT TGT JRN
-1 = *ASPDEV LIBRARY
ASP

TGTJEJLAD Target Journal Library ASP Device CHAR(10) *JRNLIBASP, *ASPDEV, ASP Primary TGT JRN
Group name LIBRARY
ASP DEV
TGTJERCVA Target Journal Receiver Library ASP DEC(3) 0 = *CRTDFT TGT JRNRCV
-1 = *ASPDEV LIBRARY
ASP
TGTJERLAD Target Journal Receiver Library ASP Device CHAR(10) *RCVLIBASP, *ASPDEV, ASP Primary TGT JRNRCV
Group name LIBRARY
ASP DEV
RJMODE Delivery mode of remote journaling CHAR(10) *ASYNC, *SYNC, blank RJ MODE
(DELIVERY)
RJSTATE Remote journal state CHAR(10) *ASYNC, *ASYNCPEND, *SYNC, STATE
*SYNCPEND, *INACTIVE, *CTLINACT,
*FAILED, *NOTBUILT, *UNKNOWN
PRITFRDFN Primary transfer definition CHAR(10) Transfer definition name, *SYSDFN PRIMARY
TFRDFN
SECTFRDFN Secondary transfer definition CHAR(10) Transfer definition name, *SYSDFN, SECONDARY
*NONE TFRDFN
PRIORITY Async process priority Packed(3 0) 0=*SYSDFN, 1-99 PRIORITY
TEXT Text description CHAR(50) Plain text TEXT
PROCESS Used by process CHAR(10) *REP, *INTERNAL USED BY
PROCESS


MXSYSDFN outfile (WRKSYSDFN command)

Table 183. MXSYSDFN outfile (WRKSYSDFN command)


Field Description Type, length Valid values Column head-
ings
SYSDFN System definition CHAR(8) User-defined name SYSDFN NAME
TYPE System type CHAR(10) *MGT, *NET SYSTEM TYPE
PRITFRDFN Configured primary transfer CHAR(10) User-defined name CONFIGURED
definition PRITFRDFN
SECTFRDFN Configured secondary transfer CHAR(10) User-defined name CONFIGURED
definition SECTFRDFN
CLUMBR Cluster member CHAR(10) *YES, *NO CLUSTER
MEMBER
CLUTFRDFN Cluster transfer definition CHAR(20) User-defined name, *PRITFRDFN, *SECTFRDFN CLUSTER
(Refer to the PRITFRNAME, PRITFRSYS1 and TFRDFN
PRITFRSYS2 fields if this field contains
*PRITFRDFN)
PRIMSGQ Primary message queue (Primary CHAR(10) User-defined name PRIMARY MSGQ
message handling)
PRIMSGQLIB Primary message queue library CHAR(10) User-defined name, *LIBL PRIMARY MSGQ
(Primary message handling) LIB
PRISEV Primary message queue severity CHAR(10) *SEVERE, *INFO, *WARNING, *ERROR, *TERM, PRIMARY MSGQ
(Primary message handling) *ALERT, *ACTION, 0-99 SEV
PRISEVNBR Primary message queue severity PACKED(3 0) 0-99 PRIMARY MSGQ
number (Primary message SEV NBR
handling)
PRIINFLVL Primary message queue CHAR(10) *SUMMARY, *ALL PRIMARY MSGQ
information level (Primary INFO LEVEL
message handling)
SECMSGQ Secondary message queue CHAR(10) User-defined name SECONDARY
(Secondary message handling) MSGQ

SECMSGQLIB Secondary message queue library CHAR(10) User-defined name, *LIBL SECONDARY
(Secondary message handling) MSGQ LIB
SECSEV Secondary message queue CHAR(10) *SEVERE, *INFO, *WARNING, *ERROR, *TERM, SECONDARY
severity (Secondary message *ALERT, *ACTION, 0-99 MSGQ SEV
handling)
SECSEVNBR Secondary message queue PACKED(3 0) 0-99 SECONDARY
severity number (Secondary MSGQ SEV NBR
message handling)
SECINFLVL Secondary message queue CHAR(10) *SUMMARY, *ALL SECONDARY
information level (Secondary MSGQ INFO
message handling) LEVEL
TEXT Description CHAR(50) *BLANK, user-defined text DESCRIPTION
JRNMGRDLY Journal manager delay (seconds) PACKED(3 0) 5-900 JRNMGR DELAY
(SEC)
SYSMGRDLY System manager delay (seconds) PACKED(3 0) 5-900 SYSMGR DELAY
(SEC)
OUTQ Output queue (Output queue) CHAR(10) User-defined name OUTQ
OUTQLIB Output queue library (Output CHAR(10) User-defined name OUTQ LIBRARY
queue)
HOLD Hold on output queue CHAR(10) *YES, *NO HOLD ON OUTQ
SAVE Save on output queue CHAR(10) *YES, *NO SAVE ON OUTQ
KEEPSYSHST Keep system history (days) PACKED(3 0) 1-365 KEEP SYS
HISTORY (DAYS)
KEEPDGHST Keep data group history (days) PACKED(3 0) 0-365 KEEP DG
HISTORY (DAYS)
KEEPMMXDTA Keep MIMIX data (days) PACKED(3 0) 1-365, 0 = *NOMAX KEEP MIMIX
DATA (DAYS)
DTALIBASP MIMIX data library ASP PACKED(3 0) Numeric value, 0 = *CRTDFT MIMIX DATA LIB
ASP

DSKSTGLMT Disk storage limit (GB) PACKED(5 0) 1-9999, 0 = *NOMAX DISK STORAGE
LIMIT (GB)
SBMUSR User profile for submit job CHAR(10) *JOBD, *CURRENT USRPRF FOR
SUBMIT JOB
MGRJOBD Manager job description (Manager CHAR(10) User-defined name MANAGER JOBD
job description)
MGRJOBDLIB Manager job description library CHAR(10) User-defined name MANAGER JOBD
(Manager job description) LIBRARY
DFTJOBD Default job description (Default job CHAR(10) User-defined name DEFAULT JOBD
description)
DFTJOBDLIB Default job description library CHAR(10) User-defined name DEFAULT JOBD
(Default job description) LIBRARY
PRDLIB MIMIX product library CHAR(10) User-defined name MIMIX PRODUCT
LIBRARY
RSTARTTIME Job restart time CHAR(8) 000000 - 235959, *NONE RESTART TIME
(Values are returned left-justified)
KEEPNEWNFY Keep new notification (days) PACKED(3 0) 1-365, 0 = *NOMAX KEEP NEW
NFY (DAYS)
KEEPACKNFY Keep acknowledged notification PACKED(3 0) 1-365, 0 = *NOMAX KEEP ACK
(days) NFY (DAYS)
ASPGRP ASP Group CHAR(10) *NONE, User-defined name ASP GROUP
DEVDMN Cluster device domain CHAR(10) *NONE, User-defined name CLUSTER
DEVICE DOMAIN
RJLIBLID Remote journal library ID PACKED(5 0) 1-99 RJ LIBRARY ID
MGTSYS Communicate with management CHAR(24) *ALL, User-defined name COMMUNICATE
systems WITH MGT
SYSTEMS
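RSTARTTIME above is returned as a left-justified CHAR(8) containing either a 6-digit HHMMSS value (000000-235959) or *NONE. One way to turn it into a usable time, sketched in Python; the function name is illustrative:

```python
from datetime import time

def parse_rstarttime(field: str):
    """Return a datetime.time for a left-justified HHMMSS value, or None for *NONE."""
    value = field.strip()
    if value == "*NONE":
        return None
    return time(int(value[0:2]), int(value[2:4]), int(value[4:6]))

print(parse_rstarttime("031500  "))   # 03:15:00
print(parse_rstarttime("*NONE   "))   # None
```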


MXSYSSTS outfile (WRKSYS command)


If the system on which the WRKSYS outfile request is run cannot communicate with a requested system definition, all status fields in the
outfile will have a value of *UNKNOWN.

Table 184. MXSYSSTS outfile (WRKSYS command)


Field Description Type, length Valid values Column head-
ings
SYSDFN System definition CHAR(8) User-defined system name SYSDFN NAME
TYPE System type CHAR(10) *MGT, *NET SYSTEM TYPE
SYSMGRSTS System manager status CHAR(10) *ACTIVE, *ACTREQ, *INACTIVE, *INACTRJ, SYSTEM
*UNKNOWN, *NONE MANAGER
STATUS
JRNMGRSTS Journal manager status CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN, *NONE JOURNAL
MANAGER
STATUS
COLSRVSTS Collector services status CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN COLLECTOR
SERVICES
STATUS
JRNINSPSTS Journal inspection status CHAR(10) *ACTIVE, *INACTIVE, *NEWDG, *NONE, *NOTCFG, JOURNAL
*NOTTGT, *PARTIAL, *UNKNOWN INSPECTION
STATUS
CLUSRVSTS Cluster services status CHAR(10) *ACTIVE, *ACTPEND, *INACTIVE, *INACTPEND, CLUSTER
*FAILED, *NEW, *NONE, *NOTAVAIL, *PARTITION, SERVICES
*RMVPEND, *UNKNOWN STATUS
PROCSTS *NODE procedure type status CHAR(10) *ACTIVE, *ATTN, *COMP, *NONE, *UNKNOWN PROCEDURE
STATUS
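Because every status column in this outfile collapses to *UNKNOWN when the requested system cannot be contacted, a consumer can treat that pattern as "unreachable" rather than as a real state. A minimal sketch; the row is a plain dict here purely for illustration:

```python
STATUS_FIELDS = ("SYSMGRSTS", "JRNMGRSTS", "COLSRVSTS",
                 "JRNINSPSTS", "CLUSRVSTS", "PROCSTS")

def is_unreachable(row: dict) -> bool:
    """True when all status fields report *UNKNOWN (communication failure)."""
    return all(row.get(f, "").strip() == "*UNKNOWN" for f in STATUS_FIELDS)

row = {f: "*UNKNOWN" for f in STATUS_FIELDS}
row.update(SYSDFN="NODE2", TYPE="*NET")
print(is_unreachable(row))   # True
```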


MXJRNINSP outfile (WRKJRNINSP command)

Table 185. MXJRNINSP outfile (WRKJRNINSP command)


Field Description Type, length Valid values Column head-
ings
JRNDFN Journal definition name (journal CHAR(10) User-defined journal definition name JRNDFN NAME
definition)
JRNSYS System name (journal definition) CHAR (8) User-defined system name JRNDFN
SYSTEM
JRN journal name (journal) CHAR(10) Journal name, *JRNDFN JOURNAL
JRNLIB Journal library (journal) CHAR(10) Journal library JOURNAL
LIBRARY
JRNRCV Journal receiver CHAR(10) User-defined value JOURNAL
RECEIVER
JRNRCVLIB Journal receiver library CHAR (10) User-defined value JOURNAL
RECEIVER
LIBRARY
ENTSEQNBR Last sequence number ZONED(20 0) 0-99999999999999999999 LAST
SEQUENCE
NUMBER
ENTTSP Last sequence timestamp TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu LAST
Default value is 0001-01-01-00.00.00.000000 SEQUENCE
TIMESTAMP
JRNINSPSTS Target journal inspection status CHAR(10) *ACTIVE, *INACTIVE, *NEWDG, *NOTCFG, JOURNAL
*NOTTGT INSPECTION
STATUS
INSPJRNRCV Last target journal inspection CHAR(10) User-defined value LAST TGT
receiver JRNINSP
RECEIVER
INSPRCVLIB Last target journal inspection CHAR(10) User-defined value LAST TGT
receiver library JRNINSP RCVLIB

INSPSEQNBR Last target journal inspection ZONED(20 0) 0-99999999999999999999 LAST TGT
sequence number JRNINSP
SEQUENCE
INSPTSP Last target journal inspection TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu LAST TGT
timestamp Default value is 0001-01-01-00.00.00.000000 JRNINSP
TIMESTAMP
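ENTTSP and INSPTSP use the SAA layout YYYY-MM-DD-hh.mm.ss.mmmuuu and default to 0001-01-01-00.00.00.000000 when no entry has been inspected yet, so that sentinel is best mapped to "never" rather than parsed as a real time. A sketch; the function name is illustrative:

```python
from datetime import datetime

SAA_FMT = "%Y-%m-%d-%H.%M.%S.%f"
NEVER = "0001-01-01-00.00.00.000000"

def parse_saa_timestamp(field: str):
    """Parse an SAA timestamp, returning None for the 'never inspected' default."""
    value = field.strip()
    return None if value == NEVER else datetime.strptime(value, SAA_FMT)

print(parse_saa_timestamp(NEVER))                          # None
print(parse_saa_timestamp("2017-06-01-23.59.59.000001"))   # 2017-06-01 23:59:59.000001
```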


MXTFRDFN outfile (WRKTFRDFN command)


The Work with Transfer Definitions (WRKTFRDFN) command generates new outfiles based on the MXTFRDFN record format.

Table 186. MXTFRDFN outfile (WRKTFRDFN command)


Field Description Type, length Valid values Column
headings
TFRDFN Transfer definition name (Transfer CHAR(10) User-defined transfer definition name TFRDFN
definition) NAME
TFRSYS1 System 1 name (Transfer CHAR(8) User-defined system name TFRDFN
definition) NAME
SYSTEM 1
TFRSYS2 System 2 name (Transfer CHAR(8) User-defined system name TFRDFN
definition) NAME
SYSTEM 2
PROTOCOL Transfer protocol CHAR(10) *TCP, *SNA, *OPTI TRANSFER
PROTOCOL
HOST1 System 1 host name or address CHAR(256) *SYS1, user-defined name (Refer to the TFRSYS1 field if SYSTEM 1
this field contains *SYS1) HOST OR
ADDRESS
HOST2 System 2 host name or address CHAR(256) *SYS2, user-defined name (Refer to the TFRSYS2 field if SYSTEM 2
this field contains *SYS2) HOST OR
ADDRESS
PORT1 System 1 port number or alias CHAR(14) User-defined port number SYSTEM 1
PORT NBR
OR ALIAS
PORT2 System 2 port number or alias CHAR(14) User-defined port number SYSTEM 2
PORT NBR
OR ALIAS
LOCNAME1 System 1 location name CHAR(8) *SYS1, user-defined name SYSTEM 1
LOCATION
LOCNAME2 System 2 location name CHAR(8) *SYS2, user-defined name SYSTEM 2
LOCATION

NETID1 System 1 network identifier CHAR(8) *LOC, user-defined name, *NETATR, *NONE SYSTEM 1
NETWORK
IDENTIFIER
NETID2 System 2 network identifier CHAR(8) *LOC, User-defined name, *NETATR, *NONE SYSTEM 2
NETWORK
IDENTIFIER
MODE SNA mode CHAR(8) User-defined name, *NETATR SNA MODE
TEXT Description CHAR(50) *BLANK, user-defined text DESCRIPTION
THLDSIZE Reset sequence threshold PACKED(7 0) 0-9999999 THRESHOLD SIZE
RDB Relational database CHAR(18) *GEN, user-defined name RELATIONAL DATABASE
RDBSYS1 System 1 Relational database name CHAR(18) *SYS1, User-defined name RELATIONAL DATABASE
RDBSYS2 System 2 Relational database name CHAR(18) *SYS2, User-defined name RELATIONAL DATABASE
MNGRDB Manage RDB Directory Entries CHAR(10) *DFT, *YES, *NO MANAGE
Indicator DIRECTORY
ENTRIES
TFRSHORTN Transfer definition short name CHAR(4) Name TFRDFN
SHORT
NAME
MNGAJE Manage Autostart Job Entry CHAR(10) *YES, *NO MANAGE
AJE


MXDGIFSTE outfile (WRKDGIFSTE command)

Table 187. MXDGIFSTE outfile (WRKDGIFSTE command)


Field Description Type, length Valid values Column head-
ings
DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name DGDFN NAME
DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name DGDFN
SYSTEM 1
DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name DGDFN
SYSTEM 2
OBJ1 System 1 object name (unicode) GRAPHIC(512) User-defined name SYSTEM 1 IFS
VARLEN(75) OBJECT
(UNICODE)
FID1 System 1 file identifier (binary) BIN(16 0) IBM i-defined file identifier SYSTEM 1
FILE ID
(BINARY)
FID1HEX System 1 file identifier (hexadecimal- CHAR(32) IBM i-defined file identifier SYSTEM 1
readable) FILE ID (HEX)
OBJ2 System 2 object name (unicode) GRAPHIC(512) User-defined name SYSTEM 2 IFS
VARLEN(75) OBJECT
(UNICODE)
FID2 System 2 file identifier (binary) BIN(16 0) IBM i-defined file identifier SYSTEM 2
FILE ID
(BINARY)
FID2HEX System 2 file identifier (hexadecimal- CHAR(32) IBM i-defined file identifier SYSTEM 2
readable) FILE ID (HEX)
CCSID Object CCSID BIN(5 0) Defaults to job CCSID. If job CCSID is 65535 or data CCSID
cannot be converted to job CCSID, OBJ1 and OBJ2
values remain in Unicode.
OBJ1CVT System 1 object name (converted to job CHAR(512) User-defined name converted using CCSID value. SYSTEM 1 IFS
CCSID) VARLEN(75) Zero length if conversion not possible. OBJECT
CONVERTED

OBJ2CVT System 2 object name (converted to job CHAR(512) User-defined name converted using CCSID value. SYSTEM 2 IFS
CCSID) VARLEN(75) Zero length if conversion not possible. OBJECT
CONVERTED
TYPE Object type CHAR(10) *DIR, *STMF, *SYMLNK OBJECT TYPE
STSVAL Entry status CHAR(10) *ACTIVE, *HLD, *HLDERR, *HLDIGN, *HLDRNM, CURRENT
*HLDRLTD, *RLSWAIT STATUS
JRN1STS Journaled on system 1 CHAR(10) *YES, *NO, *DIFFJRN SYSTEM 1
JOURNALED
JRN2STS Journaled on system 2 CHAR(10) *YES, *NO, *DIFFJRN SYSTEM 2
JOURNALED
APYSSN Apply session CHAR(10) ‘A’ (only supported apply session) APPLY
SESSION
PFID1 System 1 parent file identifier (binary) BIN(16 0) OS/400-defined file identifier SYSTEM 1
PARENT FID
(BINARY)
PFID1HEX System 1 parent file identifier CHAR(32) OS/400-defined file identifier SYSTEM 1
(hexadecimal - readable) PARENT FID
(HEX)
LNKNAM1 System 1 link name (unicode) VAR User-defined name SYSTEM 1
GRAPHIC(255) LINK NAME
VARLEN(75) (UNICODE)
LNKNAM1CVT System 1 link name (converted to job CHAR(255) User-defined name converted using CCSID value, SYSTEM 1
CCSID) VARLEN(75) length equal to 0 if conversion not supported LINK NAME
(CONVERTED)
PFID2 System 2 parent file identifier (binary) BIN(16 0) OS/400-defined file identifier SYSTEM 2
PARENT FILE
ID
(BINARY)

PFID2HEX System 2 parent file identifier CHAR(32) OS/400-defined file identifier SYSTEM 2
(hexadecimal - readable) PARENT
FILE ID
(HEX)

LNKNAM2 System 2 link name (unicode) VAR User-defined name SYSTEM 2
GRAPHIC(255) LINK NAME
VARLEN(75) (UNICODE)
LNKNAM2CVT System 2 link name (converted to job CHAR(255) User-defined name converted using CCSID value, SYSTEM 2
CCSID) VARLEN(75) length equal to 0 if conversion not supported LINK NAME
(CONVERTED)


MXDGOBJTE outfile (WRKDGOBJTE command)

Table 188. MXDGOBJTE outfile (WRKDGOBJTE command)


Field Description Type, length Valid values Column head-
ings
DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name DGDFN
NAME
DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name DGDFN
SYSTEM 1
DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name DGDFN
SYSTEM 2
OBJ1 System 1 object CHAR(10) User-defined name SYSTEM 1
OBJECT
LIB1 System 1 library CHAR(10) User-defined name SYSTEM 1
LIBRARY
TYPE Object type CHAR(10) *DTAARA, *DTAQ OBJECT
TYPE
OBJ2 System 2 object CHAR(10) User-defined name SYSTEM 2
OBJECT
LIB2 System 2 library CHAR(10) User-defined name SYSTEM 2
LIBRARY
STSVAL Entry status CHAR(10) *ACTIVE, *HLD, *HLDERR, *HLDIGN, *RLSWAIT CURRENT
STATUS
JRN1STS Journaled on system 1 CHAR(10) *YES, *NO, *DIFFJRN SYSTEM 1
JOURNALED
JRN2STS Journaled on system 2 CHAR(10) *YES, *NO, *DIFFJRN SYSTEM 2
JOURNALED
APYSSN Current apply session CHAR(10) ‘A’ (only supported apply session) CURRENT
APYSSN
RQSAPYSSN Requested apply session CHAR(10) ‘A’ (only supported apply session) REQUESTED
APYSSN

OBJ1APY System 1 object (known by apply) CHAR(10) User-defined name SYSTEM 1
OBJECT
(APPLY)
LIB1APY System 1 library (known by apply) CHAR(10) User-defined name SYSTEM 1
LIBRARY
(APPLY)
OBJ2APY System 2 object (known by apply) CHAR(10) User-defined name SYSTEM 2
OBJECT
(APPLY)
LIB2APY System 2 library (known by apply) CHAR(10) User-defined name SYSTEM 2
LIBRARY
(APPLY)


MXPROC outfile (WRKPROC command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Procedures (WRKPROC) command.

Table 189. MXPROC outfile (WRKPROC command)


Field Description Type, length Valid values Column head-
ings
PROC Procedure name CHAR(10) Procedure name PROCEDURE
AGDFN Application group definition CHAR(10) Application group definition name AGDFN
TYPE Type CHAR(10) *END, *NODE, *START, *SWTPLAN, TYPE
*SWTUNPLAN, *USER
DFT Default for type CHAR(10) *NO, *YES DEFAULT
FOR TYPE
TEXT Description CHAR(50) Description DESCRIPTION


MXPROCSTS outfile (WRKPROCSTS command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Procedure Status (WRKPROCSTS)
command.

Table 190. MXPROCSTS outfile (WRKPROCSTS command)


Field Description Type, length Valid values Column head-
ings
PROC Procedure name CHAR(10) Procedure name PROCEDURE
AGDFN Application group definition CHAR(10) Application group definition name AGDFN
TYPE Type CHAR(10) *END, *NODE, *START, *SWTPLAN, *SWTUNPLAN, TYPE
*USER
STATUS Status CHAR(10) *ACKCANCEL, *ACKFAILED, *ACTIVE, *ATTN, STATUS
*CANCELED, *COMPLETED, *COMPERR, *FAILED,
*MSGW, *QUEUED
DURATION Duration TIME HH.MM.SS DURATION
STRTSP Begin Timestamp CHAR(26) timestamp START
TIMESTAMP
ENDTSP End Timestamp CHAR(26) timestamp END
TIMESTAMP
NOD Started on node CHAR(8) system name NODE
JOBNAME Job name CHAR(10) job name JOB
NAME
JOBUSER Job user CHAR(10) job user JOB
USER
JOBNUM Job number CHAR(6) job number JOB
NUMBER


MXSTEPPGM outfile (WRKSTEPPGM command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Step Programs (WRKSTEPPGM)
command.

Table 191. MXSTEPPGM outfile (WRKSTEPPGM command)


Field Description Type, length Valid values Column head-
ings
STEPPGM Step program name CHAR(10) step name STEP
PROGRAM
PGM Program name CHAR(10) program name PROGRAM
PGMLIB Program library CHAR(10) library name PROGRAM
LIBRARY
TYPE Type CHAR(10) *AGDFN, *DTARSCGRP, *DGDFN, *NODE TYPE
NODETYP Run step on node type CHAR(10) *ALLNOD, *PRIMARY, *BACKUP, *NEWPRIM, *LOCAL, NODE
*PEER, *REPLICATE ENTRY
TYPE
USERDEF User defined CHAR(10) *NO, *YES USER
DEFINED
TEXT Description CHAR(50) description DESCRIPTION


MXSTEP outfile (WRKSTEP command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Step (WRKSTEP) command.

Table 192. MXSTEP outfile (WRKSTEP command)


Field Description Type, length Valid values Column head-
ings
PROC Procedure name CHAR(10) Procedure name PROCEDURE
AGDFN Application group definition CHAR(10) Application group definition name AGDFN
SEQNBR Sequence number PACKED (7 0) 1-9999999 SEQUENCE
NUMBER
STEPPGM Step program name CHAR(10) step name STEP
PROGRAM
BEFOREACT Action before step CHAR(10) *NONE, *WAIT, *MSGW BEFORE
ACTION
ERRACT Action on error CHAR(10) *QUIT, *CONTINUE, *MSGID, *MSGW ERROR
ACTION
STATE State CHAR(10) *REQUIRED, *ENABLED, *DISABLED STATE


MXSTEPMSG outfile (WRKSTEPMSG command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Step Messages (WRKSTEPMSG) command.

Table 193. MXSTEPMSG outfile (WRKSTEPMSG command)


Field Description Type, length Valid values Column head-
ings
MSGID Step message CHAR(7) message ID MESSAGE
ERRACT Action on error CHAR(10) *QUIT, *CONTINUE, *MSGID, *MSGW ERROR
ACTION
TEXT Description CHAR(50) description DESCRIPTION


MXSTEPSTS outfile (WRKSTEPSTS command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Step Status (WRKSTEPSTS)
command.

Table 194. MXSTEPSTS Output file (WRKSTEPSTS)


Field Description Type, length Valid values Column head-
ings
PROCNAME Procedure name CHAR(10) Procedure name PROCEDURE
AGDFN Application group definition CHAR(10) Application group definition name AGDFN
STEPPGM Step program name CHAR(10) step name STEP
PROGRAM
SEQNBR Sequence number DEC(7 0) sequence number as char SEQUENCE
NUMBER
DTARSCGRP Data resource group CHAR(10) Data resource group name DATA
*blank if not specified RESOURCE
GROUP
DGDFN Data group name CHAR(10) user-defined data group name DGDFN
*blank if no DG specified on the command
DGSYS1 System 1 name CHAR(8) user-defined system name DGDFN
*blank if no DG specified SYSTEM 1
DGSYS2 System 2 name CHAR(8) user-defined system name DGDFN
*blank if no DG specified SYSTEM 2
STATE Step state CHAR(10) *REQUIRED, *ENABLED, *DISABLED STEP
STATE
STATUS Step status CHAR(7) *ACTIVE, *CANCEL, *COMP, *IGNERR, *DSBLD, STATUS
*FAILED, *MSGW
DURATION Step duration TIME HH.MM.SS DURATION
STRTIME Step start timestamp CHAR(26) timestamp START
TIMESTAMP
ENDTIME Step end timestamp CHAR(26) timestamp END
TIMESTAMP

NOD Node CHAR(8) node entry name NODE
MSGWTIME Last message wait timestamp CHAR(26) timestamp LAST
MSGW
TIMESTAMP
JOBNAME Job name CHAR(10) job name JOB
NAME
JOBUSER Job user CHAR(10) job user JOB
USER
JOBNUM Job number CHAR(6) job number JOB
NUMBER
PGM Program name CHAR(10) program name PROGRAM
PGMLIB Program library CHAR(10) program library PROGRAM
LIBRARY
TYPE Type CHAR(10) *AGDFN, *DTARSCGRP, *DGDFN, *NODE TYPE
NODETYP Run step on node type CHAR(10) *ALLNOD, *PRIMARY, *BACKUP, *NEWPRIM, NODE
*LOCAL, *PEER, *REPLICATE ENTRY
TYPE
BEFOREACT Action before step CHAR(10) *NONE, *WAIT, *MSGW ACTION
BEFORE
STEP
ERRACT Action on error CHAR(10) *QUIT, *CONTINUE, *MSGID, *MSGW ACTION
ON
ERROR

Index
Symbols concepts 659
*FAILED activity entry 46 group 660
*HLD, files on hold 104 independent 660
*HLDERR, held due to error 408 independent, benefits 659
*HLDERR, hold error status 80 independent, configuration tips 663
*MAXOPT3 sequence number size 222 independent, configuring 663
*MSGQ, maintaining private authorities 104 independent, configuring IFS objects 664
independent, configuring library-based ob-
A jects 664
access types (file) for T-ZC entries 415 independent, effect on library list 665
accessing independent, journal receiver considerations
MIMIX Main Menu 93 664
active server technology 465 independent, limitations 662
additional resources 22 independent, primary 660
advanced journaling independent, replication 658
add to existing data group 88 independent, requirements 662
apply session balancing 90 independent, restrictions 662
conversion examples 89 independent, secondary 660
convert data group to 88 SYSBAS 658
loading tracking entries 286 system 659
planning for 87 user 659
replication process 76 asynchronous delivery 67
serialized transactions with database 88 attributes of a step, changing 569
advanced journaling, data areas and data attributes, supported
queues CMPDLOA command 713
synchronizing 530 CMPFILA command 696
advanced journaling, IFS objects CMPIFSA command 710
journal receiver size 217 CMPOBJA command 701
restrictions 120 audit
synchronizing 530 check for reported problems 679
advanced journaling, large objects (LOBs) differences, resolving 679
journal receiver size 217 displaying compliance status 684
synchronizing 501 improve performance of #MBRRCDCNT 381
APPC/SNA, configuring 163 job log 684
application group 27 last performed 684
conversion checklist 145 scheduler, alternative 670
create resource groups for a 327 status
define node roles manually 328 runtime 679
define primary node 327 audit results 679
application group definition 37 #DGFE rule 687, 747
creating 326 #DLOATR rule 713, 749
apply session #DLOATR rule, ASP attributes 719
constraint induced changes 400 #FILATR rule 696, 751
default value 240 #FILATR rule, ASP attributes 719
specifying 237 #FILATR rule, journal attributes 715
apply session, database #FILATRMBR rule 696, 751
load balancing 90 #FILATRMBR rule, ASP attributes 719
ASP #FILATRMBR rule, journal attributes 715
basic 660 #FILDTA rule 689, 753
#IFSATR rule 710, 759

846
#IFSATR rule, ASP attributes 719 batch output 546
#IFSATR rule, journal attributes 715 benefits
#MBRRCDCNT rule 689, 757 independent ASPs 659
#OBJATR rule 701, 761 LOB replication 108
#OBJATR rule, ASP attributes 719 bi-directional data flow 390
#OBJATR rule, journal attributes 715 broadcast configuration 71
#OBJATR rule, user profile password attribute 725 converting to application group 328
build journal environment
#OBJATR rule, user profile status attribute 722 after changing receiver size option 204
interpreting, attribute comparisons 692 C
interpreting, file data comparisons 689 candidate objects
resolving problems 679, 687 defined 427
timestamp difference 128 cascade configuration 71
troubleshooting 684 cascading distributions, configuring 395
auditing and reporting, compare commands catchup mode 65
DLO attributes 459 change management
file and member attributes 450 overview 213
file data using active processing 490 remote journal environment 213
file data using subsetting options 493 change management, journal receivers 203
file data with repair capability 484 changing
file data without active processing 481 RJ link 228
files on hold 487 startup programs, remote journaling 146
IFS object attributes 456 changing from RJ to MIMIX processing
object attributes 453 permanently 230
auditing level, object temporarily 229
used for replication 343 checklist
auditing value, i5/OS object convert *DTAARA, *DTAQ to user journaling 151
set by MIMIX 60
auditing, i5/OS object 29 convert IFS objects to user journaling 151
performed by MIMIX 309 convert to application groups 145
audits 512 converting to remote journaling 146
authorities, private 104 copying configuration data 649
authorization lists legacy cooperative processing 157
to exclude from replication 85 manual configuration (source-send) 141
automation 531 MIMIX Dynamic Apply 148
autostart job entry 178 new preferred configuration 137
changing job description 193 pre-configuration 82
changing port information 194 cluster services 36
created by MIMIX 192 collision points 532
identifying 192 collision resolution 532
when to change 193 default value 241
requirements 409
B working with 408
backlog command authority 638
comparing file data restriction 467 commands
backup system 26 changing defaults 556
restricting access to files 240 displaying a list of 547
basic ASP 660 commands, by mnemonic

ADDMSGLOGE 541 STRJRNIFSE 350
ADDRJLNK 227 STRJRNOBJE 354
ADDSTEP 568 STRMMXMGR 306
CHGJRNDFN 220 STRSVR 191
CHGRJLNK 228 SYNCDGACTE 498, 504
CHGSYSDFN 170 SYNCDGFE 498, 505, 514
CHGTFRDFN 185 SYNCDLO 497, 503, 524
CHKDGFE 313, 687 SYNCIFS 497, 503, 520, 530
CLOMMXLST 555 SYNCOBJ 497, 503, 516, 530
CMPDLOA 446 VFYCMNLNK 196, 197
CMPFILA 446 VFYJRNFE 349
CMPFILDTA 465, 481 VFYJRNIFSE 352
CMPIFSA 446 VFYJRNOBJE 356
CMPOBJA 446 VFYKEYATR 389
CMPRCDCNT 462 WRKCRCLS 410
CPYCFGDTA 648 WRKDGDFN 257
CPYDGFE 300 WRKDGDLOE 300
CPYDGIFSE 300 WRKDGFE 300
CRTAGDFN 326 WRKDGIFSE 300
CRTCRCLS 410 WRKDGOBJE 300
CRTDGDFN 246, 250 WRKJRNDFN 257
CRTJRNDFN 218 WRKRJLNK 316
CRTSYSDFN 169 WRKSYSDFN 257
CRTTFRDFN 184 WRKTFRDFN 257
DLTCRCLS 411 commands, by name
DLTDGDFN 258 Add Message Log Entry 541
DLTJRNDFN 258 Add Remote Journal Link 227
DLTSYSDFN 258 Add Step 568
DLTTFRDFN 258 Change Journal Definition 220
DPYDGCFG 307 Change RJ Link 228
DSPDGFE 302 Change System Definition 170
DSPDGIFSE 302 Change Transfer Definition 185
ENDJRNFE 348 Check Data Group File Entries 313, 687
ENDJRNIFSE 351 Close MIMIX List 555
ENDJRNOBJE 355 Compare DLO Attributes 446
LODDGFE 275 Compare File Attributes 446
LODDGOBJE 272 Compare File Data 465, 481
LODDTARGE 327 Compare IFS Attributes 446
MIMIX 93 Compare Object Attributes 446
OPNMMXLST 555 Compare Record Counts 462
RMVDGFE 301 Copy Configuration Data 648
RMVDGIFSE 301 Copy Data Group File Entry 300
RMVRJCNN 231 Copy Data Group IFS Entry 300
RUNCMD 548 Create Application Group Definition 326
RUNCMDS 548 Create Collision Resolution Class 410
RUNRULE 669, 675 Create Data Group Definition 246, 250
RUNRULEGRP 669, 675 Create Journal Definition 218
SETDGAUD 309 Create System Definition 169
SETIDCOLA 401 Create Transfer Definition 184
STRJRNFE 347 Delete Collision Resolution Class 411

Delete Data Group Definition 258 Work with System Definition 257
Delete Journal Definition 258 Work with Transfer Definition 257
Delete System Definition 258 commands, run on remote system 548
Delete Transfer Definition 258 commit cycles
Deploy Data Group Configuration 307 effect on audit comparison 689, 691
Display Data Group File Entry 302 policy effect on compare record count 381
Display Data Group IFS Entry 302 commit mode 367
End Journaling File Entries 348 commitment control 108
End Journaling IFS Entries 351 #MBRRCDCNT audit performance 381
End Journaling Obj Entries 355 journal standby state, journal cache 362, 364
Load Data Group File Entries 275 journaled IFS objects 76
Load Data Group Object Entries 272 communications
Load Data Resource Group Entry 327 APPC/SNA 163
MIMIX 93 configuring system level 159
Open MIMIX List 555 native TCP/IP 159
Remove Data Group File Entry 301 OptiConnect 164
Remove Data Group IFS Entry 301 protocols 159
Remove Remote Journal Connection 231 starting TCP sever 191
Run Command 548 compare commands
Run Commands 548 completion and escape messages 534
Run Rule 669, 675 outfile formats 445
Run Rule Group 669, 675 report types and outfiles 444
Set Data Group Auditing 309 spooled files 444
Set Identity Column Attribute 401 comparing
Start Journaling File Entries 347 DLO attributes 459
Start Journaling IFS Entries 350 file and member attributes 450
Start Journaling Obj Entries 354 IFS object attributes 456
Start Lakeview TCP Server 191 object attributes 453
Start MIMIX Managers 306 when file content omitted 417
Synchronize Data Group Activity Entry 504 comparing attributes
Synchronize Data Group File Entry 505, 514 attributes to compare 448
Synchronize DG Activity Entry 498 overview 446
Synchronize DG File Entry 498 supported object attributes 447, 470
Synchronize DLO 497, 503, 524 comparing file data 465
Synchronize IFS 503 active server technology 465
Synchronize IFS Object 497, 520, 530 advanced subsetting 476
Synchronize Object 497, 503, 516, 530 allocated and not allocated records 467
Verify Communications Link 196, 197 comparing a random sample 476
Verify Journaling File Entry 349 comparing a range of records 473
Verify Journaling IFS Entries 352 comparing recently inserted data 473
Verify Journaling Obj Entries 356 comparing records over time 476
Verify Key Attributes 389 data correction 465
Work with Collision Resolution Classes 410 excluding unchanged members 476
Work with Data Group Definition 257 first and last subset 479
Work with Data Group DLO Entries 300 interleave factor 477
Work with Data Group File Entries 300 job ends due to network timeout 470
Work with Data Group IFS Entries 300 keys, triggers, and constraints 468
Work with Data Group Object Entries 300 multi-threaded jobs 466
Work with Journal Definition 257 network inactivity considerations 470
Work with RJ Links 316 number of subsets 477

parallel processing 466 configuring, collision resolution 409
processing with DBAPY 465, 487 confirmed journal entries 66
referential integrity considerations 469 considerations
repairing files in *HLDERR 465 journal for independent ASP 664
restrictions 466 what to not replicate 84
security considerations 467 constraints
thread groups 475 apply session for dependent files 400
transfer definition 475 auditing with CMPFILA 446
transitional states 466 comparing file data 468
using active processing 490 omit content and legacy cooperative processing 417
using subsetting options 493
wait time 475 referential integrity considerations 469
with repair capability 484 requirements 399
with repair capability when files are on hold 487 requirements when synchronizing 506
restrictions with high availability journal performance enhancements 364
without active processing 481
comparing file record counts 462 support 399
concepts when journal is in standby state 362
procedures and steps 557 constraints, CMPFILA file-specific attribute 696
configuration constraints, physical files with
adding a directory to existing 293 apply session ignored 112
adding a library to existing 288 configuring 108
additional supporting tasks 303 legacy cooperative processing 112
copying existing data 653 constraints, referential 111
determining, IFS objects 155 contacting Vision Solutions 23
manually complete selection rule 288, 293 considerations 183
results of #DGFE audit after changing 687 description 56
configuration, deploying the 307 container send process
configuring defaults 244
advanced replication techniques 383 description 56
bi-directional data flow 390 threshold 244
cascading distributions 395 contextual transfer definitions
choosing the correct checklist 135 considerations 183
classes, collision resolution 410 RJ considerations 182
data areas and data queues 113 continuous mode 65
database apply commit mode 368 convert data group
DLO documents and folders 122 to advanced journaling 151
file routing, file combining 392 to application group environment 145
for improved performance 358 COOPDB (Cooperate with database) 114, 120
IFS objects 116 cooperative journal (COOPJRN)
independent ASP 663 behavior 107
Intra communications 656 cooperative processing
job restart time 318 and omitting content 417
keyed replication 386 configuring files 106
library-based objects 100 file, preferred method for 53
message queue objects for user profiles 104 introduction 53
omitting T-ZC journal entry content 416 journaled objects 54
spooled file replication 103 legacy 54
to replicate SQL stored procedures 421 legacy limitations 112
unique key replication 386 MIMIX Dynamic Apply limitations 111

cooperative processing, legacy description 28
limitations 112 object 271
requirements and limitations 112 procedures for configuring 270
COOPJRN 107 data group file entry 275
COOPJRN (Cooperative journal) 236 adding individual 281
COOPTYPE (Cooperating object types) 114 changing 282
copying loading from a journal definition 279
data group entries 300 loading from a library 278, 279
definitions 257 loading from FEs from another data group 280
create operation, how replicated 128
CustomerCare 23 loading from object entries 276
customize sources for loading 275
switch procedures 562 data group IFS entry 284
customizing 531 with independent ASPs 664
replication environment 532 data group object entry
adding individual 273
D custom loading 271
data area independent ASP 663
restrictions of journaled 114 with independent ASP 664
data areas data library 34, 167
journaling 75 data management techniques 390
polling interval 238 data queue
synchronizing an object tracking entry 530 restrictions of journaled 114
data areas and data queues data queues
verifying journaling 356 journaling 75
data distribution techniques 390 synchronizing journaled objects 530
data group 27 data resource group entry
convert to remote journaling 146 in data group definition 234
database only 111 data resource group entry, adding 327
determining if RJ link used 316 data resource group entry, adding manually 328
ending 44, 69 data source 235
journal definitions used by a 339 database apply
RJ link differences 69 caching 361
sharing an RJ link 69 serialization 88
short name 234 with compare file data (CMPFILDTA) 465, 487
starting 44
starting the first time 315 database apply caching 361
switching 28 database apply process 79
switching, RJ link considerations 73 description 68
timestamps, automatic 238 target side locking 413
type 235 threshold warning 242
data group definition 37, 233 database apply processing
creating 246 entries under commitment control 367
parameter tips 234 database reader process 68
data group DLO entry 297 description 68
adding individual 298 threshold 241
loading from a folder 297 database receive process 79
data group entry 428 database send process 79
defined 95 description 79
filtering 237

threshold 241 generic name support 122
DDM implicit parent object replication 122
password validation 188 keeping same name 243
server in startup programs 146 object processing 122
server, starting 187 documents, MIMIX 19
defaults, command 556 duplicate identity column values 401
definitions dynamic updates
application group 37 adding data group entries 281
data group 37 removing data group entries 301
journal 37
named 36 E
remote journal link 37 ending CMPFILDTA jobs 479
renaming 259 ending journaling
RJ link 37 data areas and data queues 355
system 36 files 348
transfer 36 IFS objects 351
delay times 167 IFS tracking entry 351
delay/retry processing object tracking entry 355
first and second 239 error code, files in error 730
third 239 error messages
delayed commit 367 switch procedures 561
delete management examples
journal receivers 203 convert to advanced journaling 89
overview 213 DLO entry matching 123
remote journal environment 214 IFS object selection, subtree 442
delete operations job restart time 320, 321
journaled *DTAARA, *DTAQ, IFS objects 134 journal definitions for multimanagement environment 209
legacy cooperative processing 133
deleting journal definitions for switchable data group 211
data group entries 301
definitions 258 journal receiver exit program 632
procedure 566 load file entries for MIMIX Dynamic Apply 276
delivery mode object entry matching 102
asynchronous 67 object retrieval delay 419
synchronous 65 object selection process 435
deploy configuration 307 object selection, order precedence in 436
detail report 544 object selection, subtree 438
device description port alias, complex 161
to exclude from replication 85, 86 port alias, simple 160
directory entries querying content of an output file 813
managing 179 SETIDCOLA command increment values 405
RDB 178 target journal inspection 335
directory, IFS user-generated notification 674
adding to existing data group 293 WRKDG SELECT statements 813
display output 543 exit points 532
displaying journal receiver management 625, 628
data group entries 302 MIMIX Monitor 626
distribution request, data-retrieval 57 MIMIX Promoter 627
DLOs exit programs
example, entry matching 123

journal receiver management 204, 629 IFS objects 116
requesting customized programs 625 determining configuration 155
expand support 545 file ID (FID) use with journaling 78
extended attribute cache 369 file IDs (FIDs) 317
configuring 369 implicit parent object replication 118
journaled entry types, commitment control and 76
F
failed request resolution 46 journaling 75
FEOPT (file and tracking entry options) 239 not supported 116
file path names 117
new 343 supported object types 116
file id (FID) 78 verifying journaling 352
file identifiers (FIDs) 317 IFS objects, journaled
files restrictions 120
combining 393 supported operations 129
omitting content 415 synchronizing 507, 530
output 545 immediate commit 367
routing 394 implicit parent object replication
sharing 390 DLO object 122
synchronizing 505 IFS object 118
temporary 84 independent ASP 660
filtering limitations 662
database replication 79 primary 660
messages 49 replication 658
on database send 237 requirements 662
on source side 237 restrictions 662
remote journal environment 68 secondary 660
firewall, using CMPFILDTA with 467 synchronizing data within an 501
folder path names 122 independent ASP threshold monitor 667
independent ASP, journal receiver change 213
G information and additional resources 22
generic name support 429 inspecting of journals on target system 334
DLOs 122 installations, multiple MIMIX 26
generic user exit 625 interleave factor 477
Intra configuration 654
IPL, journal receiver change 213
H
history retention 167
hot backup 24 J
job classes 38
job description parameter 546
I
job descriptions 38, 167
IBM i5/OS option 42 362
in data group definition 244
IFS directory
in product library 38
created during installation 33
list of MIMIX 39
exclude from replication 85, 86
job log
IFS file systems 116
for audit 684
unsupported 116
job name parameter 546
IFS object selection
job names 51
examples, subtree 442
job restart time 318
subtree 432

data group definition procedure 324 for data area and data queues 733
examples 320 supported by MIMIX user journal processing 732
overview 318
parameter 167, 245 journal image 240, 385
shared object send jobs 319 journal inspection, target 334
system definition procedure 323 journals not checked 334
jobs journal manager 35
procedures, used in 558 journal receiver 29
jobs, restarted automatically 318 change management 203, 213
journal 28 delete management 203, 213, 214
improving performance of 358 prefix 201
maximum number of objects in 29 RJ processing earlier receivers 215
MXCFGJRN 200 size for advanced journaling 217
security audit (system) 55 starting point 29
system (security audit) 55 stranded on target 216
journal analysis 47 journal receiver management
journal at create 126, 238 interaction with other products 214
requirements 343 recommendations 213
requirements and restrictions 344 journal sequence number, change during IPL 213
journal caching 202, 363
configuring 365 journal standby state 362
journal caching alternative 361 configuring 365
journal code journaled data areas, data queues
failed objects 735 planning for 87
files in error 727 journaled IFS objects
system journal transactions 735 planning for 87
journal codes journaled object types
user journal transactions 727 user exit program considerations 90
journal definition 37 journaling 29
configuring 198 data areas and data queues 75
created by other processes 200 ending for data areas and data queues 355
creating 218 ending for files defined to a data group 348
fields on data group definition 236 ending for IFS objects 351
MXCFGJRN 200 IFS objects 75
parameter tips 201 IFS objects and commitment control 76
remote journal environment considerations 206 implicitly started 343
requirements for starting 343
remote journal naming convention 210 starting for data areas and data queues 354
remote journal naming convention, default 208 starting for IFS objects 350
starting for physical files 347
remote journaling example 211 starting, ending, and verifying 342
used by a data group 339 verifying 512
journal entries 29 verifying for data areas and data queues 356
confirmed 66 verifying for IFS objects 352
filtering on database send 237 verifying for physical files 349
minimized data 359 journaling environment
OM journal entry 129 automatically creating 236
receive journal entry (RCVJRNE) 377 building 221
unconfirmed 66, 73 changing to *MAXOPT3 222
journal entry codes 735 removing 231

source for values (JRNVAL) 221 M
journaling on target, RJ environment considerations 216 manage directory entries 179
management system 27
journaling status maximum size transmitted 178
data areas and data queues 354 MAXOPT3
files 347 change receiver size value 217
IFS objects 350 receiver size option 204
member data, locks on target side 413
K menu
keyed replication 385 MIMIX Configuration 305
comparing file data restriction 466 MIMIX Main 93
file entry option defaults 240 message handling 166
preventing before-image filtering 237 message log 541
verifying file attributes 389 message queues
associated with user profiles 104
L journal-related threshold 204
large object (LOB) support message, step 573
user exit program 108 messages 48
large objects (LOBs) CMPDLOA 536
minimized journal entry data 359 CMPFILA 534
legacy cooperative processing CMPFILDTA 537
configuring 109 CMPIFSA 535
limitations 112 CMPOBJA 535
requirements 112 CMPRCDCNT 536
libraries comparison completion and escape 534
iOptimize, to not replicate 85 MIMIX Dynamic Apply
MIMIX Availability, to not replicate 85 configuring 106, 109
MIMIX Director to not replicate 86 recommended for files 106
objects in installation libraries 84 requirements and limitations 111
system, to not replicate 84 MIMIX environment 33
library MIMIX installation 26
adding to existing data group 288 MIMIX jobs, restart time for 318
library list MIMIX Model Switch Framework 626
adding QSOC to 164 MIMIX performance, improving 358
library list, effect of independent ASP 665 MIMIX rules 669
library-based objects, configuring 100 command prompting 671
limitations MIMIXOWN user profile 40, 188
database only data group 111 MIMIXQGPL library 33
list detail report 544 MIMIXSBS subsystem 34, 92
list summary report 544 minimized journal entry data 359
load leveling 59 LOBs 108
loading MMNFYNEWE monitor 126
tracking entries 286 monitor
LOB replication 108 new objects not configured to MIMIX 126
local-remote journal pair 65 move/rename operations
locks, database apply process 413 system journal replication 129
log space 38 user journal replication 130
logical files 106, 107 multimanagement
long IFS path names 117 journal definition naming 208

limiting internal communications 170 set by MIMIX 60, 309
multi-threaded jobs 466 object auditing value
data areas, data queues 113
N DLOs 122
name pattern 432 IFS objects 119
name space 55 library-based objects 98
names, displaying long 117 omit T-ZC entry considerations 416
naming conventions object entry, data group
data group definitions 234 creating 271
journal definitions 201, 207, 210 object locking retry interval 239
multi-part 31 object processing
transfer definitions 176 data areas, data queues 113
transfer definitions, contextual (*ANY) 183 defaults 242
transfer definitions, multiple network systems 172 DLOs 122
high volume objects 380
network inactivity IFS objects 116
comparing file data 470 retry interval 239
network systems 27 spooled files 103
multiple 172 object receive process
new objects description 56
automatically journal 238 object retrieval delay
automatically replicate 126 considerations 419
files 126 examples 419
files processed by legacy cooperative processing 127 selecting 419
object retrieve process
files processed with MIMIX Dynamic Apply defaults 244
126 description 56
IFS object journal at create requirements 343 threshold 244
IFS objects, data areas, data queues 127 with high volume objects 380
journal at create selection criteria 344 object selection 425
notification of objects not in configuration 126 audits 425
notification retention 167 commands which use 425
notifications examples, order precedence 436
user-defined 673 examples, process 435
user-generated 668 examples, subtree 438
name pattern 432
O order precedence 429
object parameter 428
changed on target system 334, 337 process 426
journal entry codes 735 subtree 431
object apply process object selector elements 428
defaults 244 by function 430
description 56 object selectors 428
threshold 244 object send job
object attributes, comparing 448 job restart time for shared 319
object auditing object send process
used for replication 343 description 55
object auditing level, i5/OS shared 59, 243
manually set for a data group 309 threshold 243
object types supported 97, 635

objects output
new 343 batch 546
Omit content (OMTDTA) parameter 416 considerations 542
and comparison commands 417 display 543
and cooperative processing 417 expand support 545
open commit cycles file 545
audit results 689, 691 parameter 542
OptiConnect, configuring 164 print 543
outfiles 737 output file
MCAG 739 querying content, examples of 813
MCDTACRGE 742 output file fields
MCNODE 745 Difference Indicator 689, 692
MXAUDHST 763 System 1 Indicator field 695
MXAUDOBJ 766 System 2 Indicator field 695
MXCDGFE 747 output queues 167
MXCMPDLOA 749 overview
MXCMPFILA 751 MIMIX operations 44
MXCMPFILD 753 remote journal support 63
MXCMPFILR 756 starting and ending replication 44
MXCMPIFSA 759 support for resolving problems 46
MXCMPOBJA 761 support for switching 28, 47
MXCMPRCDC 757 working with messages 48
MXDGACT 769
MXDGACTE 771 P
MXDGDFN 778 parallel processing 466
MXDGDLOE 786 path names, IFS 117
MXDGFE 787 implicit parent object replication 118
MXDGIFSE 790 path names, implicit DLO parent object replication 122
MXDGIFSTE 834
MXDGOBJE 816 performance
MXDGOBJTE 837 improved record count compare 381
MXDGSTS 791 policy, CMPRCDCNT commit threshold 381
MXDGTSP 819 polling interval 238
MXJRNDFN 821 port alias 160
MXJRNINSP 830 complex example 161
MXPROC 839 creating 162
MXPROCSTS 840 simple example 160
MXSTEP 842 primary node
MXSTEPMSG 843 configure for application group 327
MXSTEPPGM 841 print output 543
MXSTEPSTS 844 printing
MXSYSDFN 826 controlling characteristics of 168
MXSYSSTS 829 private authorities, *MSGQ replication of 104
MXTFRDFN 832 problems, journaling
user profile password 725 data areas and data queues 354
user profile status 722 files 347
WRKRJLNK 824 IFS objects 350
outfiles, supporting information problems, resolving
record format 737 audit results 687
work with panels 738

auditing 679 QAUDLVL system value 41, 55, 103
procedure QDFTJRN data area 238
begin at step 331, 560 QLIBLCKLVL system value 41
displaying steps 567 QMLTTHDACN system value 42
procedures 38, 557, 563 QRETSVRSEC system value 42
adding a step 568 QSECURITY system value 41
components 557 QSOC
creating type *NODE 565 library 164
creating type *USER 565 QTIME system value 42
creating types *END, *START, *SWSTPLAN, *SWTUNPLAN 566 QTIMZON system value 42
customizing user application steps 562 R
displaying available 563 RCVJRNE (Receive Journal Entry) 377
history 561 configuring values 378
invoking 560 determining whether to change the value of 378
job processing 558
last started run 568 understanding its values 377
programming support 571, 574 RDB
removing a step 570 directory entries 178, 180
status 561 reader wait time 235
step attributes 559 receiver library, changing for RJ target journal 225
step error processing 559
switch customizing 561 receivers
types of 558 change management 203
process delete management 203
database apply 79 recommendations
database reader 68 multimanagement journal definitions 208
database receive 79 relational database (RDB) 178
database send 79 entries 178, 186
names 51 remote journal
object send 55 i5/OS function 29, 63
process, object selection 426 i5/OS function, asynchronous delivery 67
processing defaults i5/OS function, synchronous delivery 65
container send 244 MIMIX support 63
database apply 241 relational database 178
file entry options 239 remote journal environment
object apply 244 changing 225
object retrieve 244 contextual transfer definitions 182
user journal entry 237 receiver change management 213
product authority receiver delete management 214
overview 638 restrictions 64
production system 26 RJ link 68
programs, step 570 security implications 188
publications, IBM 22 switch processing changes 48
remote journal ID 207, 208
Q remote journal link 37, 68
QALWOBJRST system value 41 remote journal link, See also RJ link
QALWUSRDMN system value 41 remote journaling
QAUDCTL system value 41, 55 data group definition 236

repairing queues 113
file data 484 restore operations, journaled *DTAARA, *DTAQ, IFS objects 134
files in *HLDERR 465
files on hold 487 restrictions
replication bi-directional environments 334
advanced topic parameters 237 comparing file data 466
by object type 97 data areas and data queues 114
configuring advanced techniques 383 independent ASP 662
constraint-induced modifications 400 journal at create 344
defaults for object types 97 journal receiver management 214
direction of 26 journal receiver size *MAXOPT3 204
ending data group 44 journaled *DTAARA, *DTAQ objects 114
ending MIMIX 44 journaled IFS objects 120
implicitly identified parent objects 118, 122 legacy cooperative processing 112
independent ASP 658 LOBs 109
maximum size threshold 178 MIMIX Dynamic Apply 111
positional vs. keyed 385 number of objects in journal 29
process, remote journaling environment 68 QDFTJRN data area 344
retrieving extended attributes 369 remote journaling 64
spooled files 103 standby journaling 364
SQL stored procedures 421 target journal inspection 334
starting data group 44 retrying, data group activity entries 46
starting MIMIX 44 RJ link 37
supported paths 24 adding 227
system journal 24 changing 228
system journal process 55 data group definition parameter 236
unit of work for 27 description 68
user journal 24 end options 69
user profiles 500 identifying data groups that use 316
user-defined functions 421 sharing among data groups 69
what to exclude 84 switching considerations 73
replication manager 51, 337 threshold 238
replication path 50 RJ link monitors
reports description 71
detail 544 displaying status of 71
list detail 544 ending 71
list summary 544 not installed, status when 71
types for compare commands 444 operation 71
requirement rule groups
objects and journal in same ASP 29 MIMIX 677
requirements rules 669
independent ASP 662 messages from 671
journal at create 343 MIMIX 669
journaling 343 notifications from 671
keyed replication 385 relationship with rules 669
legacy cooperative processing 112 run command considerations 671
MIMIX Dynamic Apply 111 run on management system 671
standby journaling 364 user-defined 668
system values for installing 41 running
user journal replication of data areas and data rule groups 675

rules 675 journal caching 363
journal standby state 362
S MIMIX processing with 363
save-while-active 423 overview 362
considerations 423 requirements 364
examples 424 restrictions 364
options 424 starting
wait time 423 data groups initially 315
search process, *ANY transfer definitions 181 procedure at step 331, 560
security procedures 560
considerations, CMPFILDTA command 467 system and journal managers 306
functions provided by Vision Solutions 638 TCP server 191
general information 81 TCP server automatically 192
remote journaling implications 188 starting journaling
security audit journal 55 data areas and data queues 354
security class table, product 639 file entry 347
sequence number files 347
maximum size option 204 IFS objects 350
sequence number size option, *MAXOPT3 222 IFS tracking entry 350
serialization object tracking entry 354
database files and journaled objects 88 startup programs
object changes with database 75 changes for remote journaling 146
servers MIMIX subsystem 92
starting DDM 187 status
starting TCP 191 audit compliance 684
services journaling data areas and data queues 354
cluster 36 journaling files 347
short transfer definition name 176 journaling IFS objects 350
source physical files 106, 107 journaling tracking entries 350, 354
source system 26 procedures and steps 561
spooled files 103 status receive process
compare commands 444 description 56
keeping deleted 103 status send process
options 103 description 56
retaining on target system 243 status, values affecting updates to 238
SQL stored procedures 421 step
replication requirements 421 begin procedure at 331, 560
SQL table identity columns 401 step messages 573
alternatives to SETIDCOLA 403 adding 573
check for replication of 406 list available 573
problem 401 removing 574
SETIDCOLA command details 404 step program
SETIDCOLA command examples 405 changing 571
SETIDCOLA command limitations 402 creating a custom program 570
SETIDCOLA command usage notes 405 custom, for switching 561
setting attribute 406 ENDUSRAPP 562
when to use SETIDCOLA 402 format STEP0100 571
standby journaling STRUSRAPP 562
IBM i5/OS option 42 362 step programs 570
display available 570

steps 38, 567
  adding to procedure 568
  changing attributes 569
  enabling and disabling 569
  remove from procedure 570
  runtime attributes 559
storage, data libraries 167
stranded journal on target, journal entries 216
subsystem
  MIMIXSBS, starting 92
subtree 431
  IFS objects 432
switch procedure customization 561
switch procedure error messages 561
switching
  allowing 235
  data group 28
  enabling journaling on target system 235
  example RJ journal definitions for 211
  independent ASP restriction 663
  MIMIX Model Switch Framework with RJ link 73
  preventing identity column problems 401
  remote journaling changes to 48
  removing stranded journal receivers 216
  RJ link considerations 73
synchronization check, automatic 238
synchronizing 497
  activity entries overview 504
  commands for 499
  considerations 499
  data group activity entries 528
  database files 514
  database files overview 505
  DLOs 524
  DLOs in a data group 524
  DLOs without a data group 525
  establish a start point 508
  file entry overview 505
  files with triggers 506
  IFS objects 520
  IFS objects by path name only 522
  IFS objects in a data group 520
  IFS objects without a data group 522
  IFS tracking entries 530
  including logical files 506
  independent ASP, data in an 501
  initial 510
  initial configuration 508
  initial configuration MQ environment 508
  limit maximum size 499
  LOB data 501
  object tracking entries 530
  object, IFS, DLO overview 503
  objects 516
  objects in a data group 516
  objects without a data group 517
  related file 506
  resources for 509
  status changes caused by 501
  tracking entries 507
  user profiles 499, 500
synchronous delivery 65
  unconfirmed entries 66
SYSBAS 658, 660
system ASP 659
system definition 36, 165
  changing 170
  creating 169
  parameter tips 166
system journal 55
system journal replication 24
  advanced techniques 383
  journaling requirements 343
  omitting content 415
system library list 164, 665
system manager 34
  multimanagement environment 170
system value
  QALWOBJRST (Allow object restore option) 41
  QALWUSRDMN (Allow user domain objects in libraries) 41
  QAUDCTL 55
  QAUDCTL (Auditing control) 41
  QAUDLVL 55, 103
  QAUDLVL (Security auditing level) 41
  QLIBLCKLVL (Library locking level) 41
  QMLTTHDACN (Multithreaded job action) 42
  QRETSVRSEC (Retain server security data) 42
  QSECURITY (System security level) 41
  QSYSLIBL 164
  QSYSLIBL (System part of the library list) 41
  QTIME (Time of day) 42
  QTIMZON (Time zone) 42
system, RJ identifier for a 207, 208
system, roles 26

T
target journal inspection 35, 334
  automatic corrections 337
  disabling 340
  enabling 338
  example 335
  false errors 335
  journals not inspected 334
  restriction 334
target journal state 202
target system 26
TCP server, autostart job entry for 178
TCP/IP
  adding to startup program 146
  configuring native 159
  creating port aliases for 160
temporary files to not replicate 84
thread groups 475
threshold, backlog
  adjusting 251
  container send 244
  database apply 242
  database reader/send 241
  object apply 244
  object retrieve 244
  object send 243
  remote journal link 238
threshold, CMPRCDCNT commit 381
timestamps, automatic 238
tracking entries
  loading 286
  loading for data areas, data queues 287
  loading for IFS objects 286
  purpose 77
tracking entry
  file identifiers (FIDs) 317
transfer definition 36, 174, 475
  changing 185
  contextual system support (*ANY) 32, 181
  fields in data group definition 235
  fields in system definition 166
  multiple network system environment 172
  other uses 174
  parameter tips 176
  short name 176
transfer protocols
  OptiConnect parameters 178
  SNA parameters 177
  TCP parameters 176
trigger programs
  defined 397
  synchronizing files 398
triggers
  avoiding problems 468
  comparing file data 468
  disabling during synchronization 506
  read 468
  update, insert, and delete 468
T-ZC journal entries
  access types 415
  configuring to omit 416
  omitting 415

U
unconfirmed journal entries 66, 73
unique key
  comparing file data restriction 466
  file entry options for replicating 240
  replication of 385
user ASP 659
user exit points 628
user exit program
  data areas and data queues 90
  IFS objects 90
  large objects (LOBs) 108
user exit, generic 625
user journal replication 24
  advanced techniques 383
  journaling requirements 343
  requirements for data areas and data queues 113
  supported journal entries for data areas, data queues 733
  tracking entry 77
user profiles
  default 167
  exclude from replication 84, 85, 86
  MIMIXOWN 188
  password indicator attribute 725
  replication of 104
  specifying status 243
  status attribute 722
  synchronizing 499
  system distribution directory entries 500
  Vision-supplied 40
user-defined functions 421

V
verifying
  communications link 196, 197
  initial synchronization 512
  journaling, IFS tracking entries 352
  journaling, object tracking entries 356
  journaling, physical files 349
  key attributes 389
  send and receive processes automatically 238

W
wait time
  comparing file data 475
  reader 235
WRKDG SELECT statement 813

