MIMIX Administrator Reference
Version 8.0
Conceptual, Configuration, and Reference Information
Notices
Advanced journaling ............................................................................................ 54
System journal replication ......................................................................................... 55
Activity entry processing...................................................................................... 56
Processing self-contained activity entries...................................................... 57
Processing data-retrieval activity entries ....................................................... 57
Processes with shared jobs................................................................................. 59
Processes with multiple asynchronous jobs ........................................................ 59
Tracking object replication................................................................................... 60
Managing object auditing .................................................................................... 60
User journal replication.............................................................................................. 63
What is remote journaling?.................................................................................. 63
Benefits of using remote journaling with MIMIX .................................................. 63
Restrictions of MIMIX Remote Journal support ................................................... 64
Overview of IBM processing of remote journals .................................................. 65
Synchronous delivery .................................................................................... 65
Asynchronous delivery .................................................................................. 67
User journal replication processes ...................................................................... 68
The RJ link .......................................................................................................... 68
Sharing RJ links among data groups............................................................. 69
RJ links within and independently of data groups ......................................... 69
Differences between ENDDG and ENDRJLNK commands .......................... 69
RJ link monitors ................................................................................................... 71
RJ link monitors - operation........................................................................... 71
RJ link monitors in complex configurations ................................................... 71
Support for unconfirmed entries during a switch ................................................. 73
RJ link considerations when switching ................................................................ 73
User journal replication of IFS objects, data areas, data queues.............................. 75
Benefits ............................................................................................................... 75
Processes used................................................................................................... 76
Tracking entries ................................................................................................... 77
IFS object file identifiers (FIDs) ........................................................................... 78
Older source-send user journal replication processes .............................................. 79
Chapter 3 Preparing for MIMIX 81
Checklist: pre-configuration....................................................................................... 82
New configuration default environment ..................................................................... 83
Data that should not be replicated............................................................................. 84
Planning for journaled IFS objects, data areas, and data queues............................. 87
Is user journal replication appropriate for your environment? ............................. 87
Serialized transactions with database files.......................................................... 88
Converting existing data groups .......................................................................... 88
Conversion examples .................................................................................... 89
Database apply session balancing ...................................................................... 90
User exit program considerations........................................................................ 90
Starting the MIMIXSBS subsystem ........................................................................... 92
Accessing the MIMIX Main Menu.............................................................................. 93
Chapter 4 Planning choices and details by object class 95
Replication choices by object type ............................................................................ 97
Configured object auditing value for data group entries............................................ 98
Identifying library-based objects for replication ....................................................... 100
How MIMIX uses object entries to evaluate journal entries for replication ........ 101
Replication of implicitly defined parents of library-based objects ................ 102
Identifying spooled files for replication .............................................................. 103
Additional choices for spooled file replication.............................................. 103
Replicating user profiles and associated message queues .............................. 104
Identifying logical and physical files for replication.................................................. 106
Considerations for LF and PF files .................................................................... 106
Files with LOBs............................................................................................ 108
Configuration requirements for LF and PF files................................................. 109
Requirements and limitations of MIMIX Dynamic Apply.................................... 111
Requirements and limitations of legacy cooperative processing....................... 112
Identifying data areas and data queues for replication............................................ 113
Configuration requirements - data areas and data queues ............................... 113
Restrictions - user journal replication of data areas and data queues .............. 114
Identifying IFS objects for replication ...................................................................... 116
Supported IFS file systems and object types .................................................... 116
Considerations when identifying IFS objects..................................................... 117
MIMIX processing order for data group IFS entries..................................... 117
Long IFS path names .................................................................................. 117
Upper and lower case IFS object names..................................................... 117
Replication of implicitly defined IFS parent objects ..................................... 118
Configured object auditing value for IFS objects ......................................... 119
Support for multiple hard links ..................................................................... 119
Configuration requirements - IFS objects .......................................................... 119
Restrictions - user journal replication of IFS objects ......................................... 120
Identifying DLOs for replication ............................................................................... 122
How MIMIX uses DLO entries to evaluate journal entries for replication .......... 122
Replication of implicitly defined DLO parent objects ................................... 122
Sequence and priority order for documents ................................................ 123
Sequence and priority order for folders ....................................................... 124
Processing of newly created files and objects......................................................... 126
Newly created files ............................................................................................ 126
New file processing - MIMIX Dynamic Apply............................................... 126
New file processing - legacy cooperative processing.................................. 127
Newly created IFS objects, data areas, and data queues ................................. 127
Determining how an activity entry for a create operation was replicated .... 128
Processing variations for common operations ........................................................ 129
Move/rename operations - journaled replication ............................................... 129
Move/rename operations - user journaled data areas, data queues, IFS objects .... 130
Delete operations - files configured for legacy cooperative processing ............ 133
Delete operations - user journaled data areas, data queues, IFS objects ........ 134
Restore operations - user journaled data areas, data queues, IFS objects ...... 134
Chapter 5 Configuration checklists 135
Checklist: New remote journal (preferred) configuration ......................................... 137
Checklist: New MIMIX source-send configuration................................................... 141
Checklist: converting to application groups ............................................................. 145
Checklist: Converting to remote journaling.............................................................. 146
Converting to MIMIX Dynamic Apply....................................................................... 148
Converting using the Convert Data Group command ....................................... 148
Checklist: manually converting to MIMIX Dynamic Apply.................................. 149
Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling .................... 151
Checklist: Converting IFS entries to user journaling using the CVTDGIFSE command .... 154
Requirements for using the CVTDGIFSE command ......................................... 154
Create a list of IFS objects eligible for converting to user journaling................. 155
Running the CVTDGIFSE command................................................................. 155
Responding to CVTDGIFSE command messages ........................................... 156
Checklist: Converting to legacy cooperative processing ......................................... 157
Chapter 6 System-level communications 159
Configuring for native TCP/IP.................................................................................. 159
Port aliases - simple example ............................................................................. 160
Port aliases - complex example .......................................................................... 161
Creating port aliases ......................................................................................... 162
Configuring APPC/SNA........................................................................................... 163
Configuring OptiConnect ......................................................................................... 164
Chapter 7 Configuring system definitions 165
Tips for system definition parameters ..................................................................... 166
Creating system definitions ..................................................................................... 169
Changing a system definition .................................................................................. 170
Limiting internal communications to a network system ........................................... 170
Multiple network system considerations.................................................................. 172
Chapter 8 Configuring transfer definitions 174
Tips for transfer definition parameters..................................................................... 176
Finding the system database name for RDB directory entries .......................... 180
Using IBM i commands to work with RDB directory entries ........................ 180
Using contextual (*ANY) transfer definitions ........................................................... 181
Search and selection process ........................................................................... 181
Considerations for remote journaling ................................................................ 182
Considerations for MIMIX source-send configurations...................................... 182
Naming conventions for contextual transfer definitions ..................................... 183
Additional usage considerations for contextual transfer definitions................... 183
Creating a transfer definition ................................................................................... 184
Changing a transfer definition ................................................................................. 185
Changing a transfer definition to support remote journaling.............................. 185
Starting the DDM TCP/IP server ............................................................................. 187
Verifying that the DDM TCP/IP server is running .............................................. 187
Checking the DDM password validation level ......................................................... 188
Option 1: Manually update MIMIXOWN user profile for DDM environment ...... 188
Option 2: Force MIMIX to change password for MIMIXOWN user profile ......... 189
Option 3: Allow user profiles without passwords ............................................... 189
Starting the TCP/IP server ...................................................................................... 191
Using autostart job entries to start the TCP server ................................................. 192
Identifying the current autostart job entry information ....................................... 192
Changing an autostart job entry and its related job description ........................ 193
Using a different job description for an autostart job entry .......................... 193
Updating host information for a user-managed autostart job entry ............. 194
Updating port information for a user-managed autostart job entry .............. 194
Verifying a communications link for system definitions ........................................... 196
Verifying the communications link for a data group................................................. 197
Verifying all communications links..................................................................... 197
Chapter 9 Configuring journal definitions 198
Configuration processes that create journal definitions........................................... 200
Journals and journal definitions for internal use ................................................ 200
Tips for journal definition parameters ...................................................................... 201
Journal definition considerations ............................................................................. 206
Journal definition naming conventions .................................................................... 207
Preferred target journal definition naming convention ....................................... 207
Example journal definitions for three management nodes .......................... 209
Target journal definition names generated by ADDRJLNK command .............. 210
Example journal definitions for a switchable data group ............................. 211
Journal receiver management................................................................................. 213
Interaction with other products that manage receivers...................................... 214
Processing from an earlier journal receiver ....................................................... 215
Considerations when journaling on target ......................................................... 216
Journal receiver size for replicating large object data ............................................. 217
Verifying journal receiver size options ............................................................... 217
Changing journal receiver size options ............................................................. 217
Creating a journal definition..................................................................................... 218
Changing a journal definition................................................................................... 220
Building the journaling environment ........................................................................ 221
Changing the journaling environment to use *MAXOPT3 ....................................... 222
Changing the remote journal environment .............................................................. 225
Adding a remote journal link.................................................................................... 227
Changing a remote journal link................................................................................ 228
Temporarily changing from RJ to MIMIX processing .............................................. 229
Changing from remote journaling to MIMIX processing .......................................... 230
Removing a remote journaling environment............................................................ 231
Chapter 10 Configuring data group definitions 233
Tips for data group parameters ............................................................................... 234
Additional considerations for data groups ......................................................... 245
Creating a data group definition .............................................................................. 246
Changing a data group definition ............................................................................ 250
Changing a data group to use a shared object send job......................................... 250
Fine-tuning backlog warning thresholds for a data group ....................................... 251
Optimizing performance for a shared object send process ..................................... 254
Identifying which data groups share an object send process ............................ 255
Moving a data group to a different object send job ........................................... 255
Chapter 11 Additional options: working with definitions 257
Copying a definition................................................................................................. 257
Deleting a definition................................................................................................. 258
Renaming definitions............................................................................................... 259
Renaming a system definition ........................................................................... 260
Renaming a transfer definition .......................................................................... 266
Renaming a journal definition with considerations for RJ link ........................... 267
Renaming a data group definition ..................................................................... 268
Chapter 12 Configuring data group entries 270
Creating data group object entries .......................................................................... 271
Loading data group object entries ..................................................................... 271
Adding or changing a data group object entry................................................... 272
Creating data group file entries ............................................................................... 275
Loading file entries ............................................................................................ 275
Loading file entries from a data group’s object entries ................................ 276
Loading file entries from a library ................................................................ 278
Loading file entries from a journal definition ................................................ 279
Loading file entries from another data group’s file entries........................... 280
Adding a data group file entry ........................................................................... 281
Changing a data group file entry ....................................................................... 282
Creating data group IFS entries .............................................................................. 284
Adding or changing a data group IFS entry....................................................... 284
Loading tracking entries .......................................................................................... 286
Loading IFS tracking entries.............................................................................. 286
Loading object tracking entries.......................................................................... 287
Adding a library to an existing data group ............................................................... 288
Adding an IFS directory to an existing data group .................................................. 293
Creating data group DLO entries ............................................................................ 297
Loading DLO entries from a folder .................................................................... 297
Adding or changing a data group DLO entry ..................................................... 298
Additional options: working with DG entries ............................................................ 300
Copying a data group entry ............................................................................... 300
Removing a data group entry ............................................................................ 301
Displaying a data group entry............................................................................ 302
Chapter 13 Additional supporting tasks for configuration 303
Accessing the Configuration Menu.......................................................................... 305
Starting the system and journal managers.............................................................. 306
Manually deploying configuration changes ............................................................. 307
Setting data group auditing values manually........................................................... 309
Examples of changing an IFS object’s auditing value ......................................... 310
Checking file entry configuration manually.............................................................. 313
Starting data groups for the first time ...................................................................... 315
Identifying data groups that use an RJ link ............................................................. 316
Using file identifiers (FIDs) for IFS objects .............................................................. 317
Configuring restart times for MIMIX jobs ................................................................. 318
Configurable job restart time operation ............................................................. 318
Affected jobs...................................................................................................... 318
Examples: job restart time ................................................................................. 320
Restart time examples: system definitions .................................................. 320
Restart time examples: system and data group definition combinations..... 321
Configuring the restart time in a system definition ............................................ 323
Configuring the restart time in a data group definition....................................... 324
Setting the system time zone and time ................................................................... 325
Creating an application group definition .................................................................. 326
Loading data resource groups into an application group ........................................ 327
Specifying the primary node for the application group ............................................ 327
Manually adding resource group and node entries to an application group............ 328
Starting, ending, or switching an application group................................................. 330
Starting an application group............................................................................. 331
Ending an application group .............................................................................. 332
Switching an application group.......................................................................... 332
Performing target journal inspection........................................................................ 334
Automatic correction of errors found by target journal inspection ..................... 337
Enabling target journal inspection ..................................................................... 338
Determining which data groups use a journal definition .................................... 339
Disabling target journal inspection .................................................................... 340
Chapter 14 Starting, ending, and verifying journaling 342
What objects need to be journaled.......................................................................... 343
Authority requirements for starting journaling.................................................... 344
MIMIX commands for starting journaling................................................................. 345
Forcing objects to use the configured journal.................................................... 346
Journaling for physical files ..................................................................................... 347
Displaying journaling status for physical files .................................................... 347
Starting journaling for physical files ................................................................... 347
Ending journaling for physical files .................................................................... 348
Verifying journaling for physical files ................................................................. 349
Journaling for IFS objects........................................................................................ 350
Displaying journaling status for IFS objects ...................................................... 350
Starting journaling for IFS objects ..................................................................... 350
Ending journaling for IFS objects ...................................................................... 351
Verifying journaling for IFS objects.................................................................... 352
Journaling for data areas and data queues............................................................. 354
Displaying journaling status for data areas and data queues............................ 354
Starting journaling for data areas and data queues .......................................... 354
Ending journaling for data areas and data queues............................................ 355
Verifying journaling for data areas and data queues ......................................... 356
Chapter 15 Configuring for improved performance 358
Minimized journal entry data ................................................................................... 359
Restrictions of minimized journal entry data...................................................... 359
Configuring for minimized journal entry data ..................................................... 360
Configuring database apply caching ....................................................................... 361
Configuring for high availability journal performance enhancements...................... 362
Journal standby state ........................................................................................ 362
Minimizing potential performance impacts of standby state ........................ 363
Journal caching ................................................................................................. 363
MIMIX processing of high availability journal performance enhancements....... 363
Requirements of high availability journal performance enhancements ............. 364
Restrictions of high availability journal performance enhancements................. 364
Configuring journal standby state ...................................................................... 365
Configuring journal caching ............................................................................... 365
Immediately applying committed transactions......................................................... 367
Changing the specified commit mode ............................................................... 368
Caching extended attributes of *FILE objects ......................................................... 369
Optimizing access path maintenance...................................................................... 370
Optimizing access path maintenance on service pack 7.1.15.00 or higher ...... 370
Eligible files and limitations.......................................................................... 370
Enabling the access path maintenance function ......................................... 371
Operation..................................................................................................... 371
Error recovery.............................................................................................. 372
Behavior during a switch ............................................................................. 372
Job status .................................................................................................... 373
Using parallel access path maintenance on earlier service packs .................... 374
Increasing data returned in journal entry blocks by delaying RCVJRNE calls ........ 377
Understanding the data area format.................................................................. 377
Determining if the data area should be changed............................................... 378
Configuring the RCVJRNE call delay and block values .................................... 378
Configuring high volume objects for better performance......................................... 380
Improving performance of the #MBRRCDCNT audit .............................................. 381
Chapter 16 Configuring advanced replication techniques 383
Keyed replication..................................................................................................... 385
Keyed vs positional replication .......................................................................... 385
Requirements for keyed replication ................................................................... 385
Restrictions of keyed replication........................................................................ 386
Implementing keyed replication ......................................................................... 386
Changing a data group configuration to use keyed replication.................... 386
Changing a data group file entry to use keyed replication........................... 387
Verifying key attributes ...................................................................................... 389
Data distribution and data management scenarios ................................................. 390
Configuring for bi-directional flow ...................................................................... 390
Bi-directional requirements: system journal replication ............................... 390
Bi-directional requirements: user journal replication.................................... 391
Configuring for file routing and file combining ................................................... 392
Configuring for cascading distributions ............................................................. 395
Trigger support ........................................................................................................ 397
How MIMIX handles triggers ............................................................................. 397
Considerations when using triggers .................................................................. 397
Enabling trigger support .................................................................................... 398
Synchronizing files with triggers ........................................................................ 398
Constraint support ................................................................................................... 399
Referential constraints with delete rules............................................................ 399
Replication of constraint-induced modifications .......................................... 400
Handling SQL identity columns ............................................................................... 401
The identity column problem explained ............................................................. 401
When the SETIDCOLA command is useful....................................................... 402
SETIDCOLA command limitations .................................................................... 402
Alternative solutions .......................................................................................... 403
SETIDCOLA command details .......................................................................... 404
Usage notes ................................................................................................ 405
Examples of choosing a value for INCREMENTS....................................... 405
Checking for replication of tables with identity columns .................................... 406
Setting the identity column attribute for replicated files ..................................... 406
Collision resolution .................................................................................................. 408
Additional methods available with CR classes .................................................. 408
Requirements for using collision resolution ....................................................... 409
Working with collision resolution classes .......................................................... 410
Creating a collision resolution class ............................................................ 410
Changing a collision resolution class........................................................... 411
Deleting a collision resolution class............................................................. 411
Displaying a collision resolution class ......................................................... 411
Printing a collision resolution class.............................................................. 412
Changing target side locking for DBAPY processes ............................................... 413
Omitting T-ZC content from system journal replication ........................................... 415
Configuration requirements and considerations for omitting T-ZC content ....... 416
Omit content (OMTDTA) and cooperative processing................................. 417
Omit content (OMTDTA) and comparison commands ................................ 417
Selecting an object retrieval delay........................................................................... 419
Object retrieval delay considerations and examples ......................................... 419
Configuring to replicate SQL stored procedures and user-defined functions.......... 421
Requirements for replicating SQL stored procedure operations ....................... 421
To replicate SQL stored procedure operations ................................................. 422
Using Save-While-Active in MIMIX.......................................................................... 423
Considerations for save-while-active................................................................. 423
Types of save-while-active options ................................................................... 424
Example configurations ..................................................................................... 424
Chapter 17 Object selection for Compare and Synchronize commands 425
Object selection process ......................................................................................... 426
Order precedence ............................................................................................. 429
Parameters for specifying object selectors.............................................................. 429
Object selection examples ...................................................................................... 434
Processing example with a data group and an object selection parameter ...... 435
Example subtree ............................................................................................... 438
Example Name pattern...................................................................................... 441
Example subtree for IFS objects ....................................................................... 442
Report types and output formats ............................................................................. 444
Spooled files ...................................................................................................... 444
Outfiles .............................................................................................................. 445
Chapter 18 Comparing attributes 446
About the Compare Attributes commands .............................................................. 446
Choices for selecting objects to compare.......................................................... 447
Unique parameters ...................................................................................... 447
Choices for selecting attributes to compare ...................................................... 448
CMPFILA supported object attributes for *FILE objects .............................. 449
CMPOBJA supported object attributes for *FILE objects ............................ 449
Comparing file and member attributes .................................................................... 450
Comparing object attributes .................................................................................... 453
Comparing IFS object attributes.............................................................................. 456
Comparing DLO attributes....................................................................................... 459
Chapter 19 Comparing file record counts and file member data 462
Comparing file record counts .................................................................................. 462
To compare file record counts ........................................................................... 463
Significant features for comparing file member data ............................................... 465
Repairing data ................................................................................................... 465
Active and non-active processing...................................................................... 465
Processing members held due to error ............................................................. 465
Additional features............................................................................................. 466
Considerations for using the CMPFILDTA command ............................................. 466
Recommendations and restrictions ................................................................... 466
Using the CMPFILDTA command with firewalls................................................ 467
Security considerations ..................................................................................... 467
Comparing allocated records to records not yet allocated ................................ 467
Comparing files with unique keys, triggers, and constraints ............................. 468
Avoiding issues with triggers ....................................................................... 468
Referential integrity considerations ............................................................. 469
Job priority .................................................................................................... 469
CMPFILDTA and network inactivity................................................................... 470
Specifying CMPFILDTA parameter values.............................................................. 470
Specifying file members to compare ................................................................. 470
Tips for specifying values for unique parameters .............................................. 471
Specifying the report type, output, and type of processing ............................... 474
System to receive output ............................................................................. 474
Interactive and batch processing................................................................. 474
Using the additional parameters........................................................................ 474
Advanced subset options for CMPFILDTA.............................................................. 476
Ending CMPFILDTA requests ................................................................................. 479
Comparing file member data - basic procedure (non-active) .................................. 481
Comparing and repairing file member data - basic procedure ................................ 484
Comparing and repairing file member data - members on hold (*HLDERR) .......... 487
Comparing file member data using active processing technology .......................... 490
Comparing file member data using subsetting options ........................................... 493
Chapter 20 Synchronizing data between systems 497
Considerations for synchronizing using MIMIX commands..................................... 499
Limiting the maximum sending size .................................................................. 499
Synchronizing user profiles ............................................................................... 499
Synchronizing user profiles with SYNCnnn commands .............................. 500
Missing system distribution directory entries ............................................... 500
Synchronizing large files and objects ................................................................ 501
Status changes caused by synchronizing ......................................................... 501
Synchronizing objects in an independent ASP.................................................. 501
About MIMIX commands for synchronizing objects, IFS objects, and DLOs .......... 503
About synchronizing data group activity entries (SYNCDGACTE).......................... 504
About synchronizing file entries (SYNCDGFE command) ...................................... 505
About synchronizing tracking entries....................................................................... 507
Performing the initial synchronization...................................................................... 508
Establish a synchronization point ...................................................................... 508
Resources for synchronizing ............................................................................. 509
Using SYNCDG to perform the initial synchronization ............................................ 510
To perform the initial synchronization using the SYNCDG command defaults . 511
Verifying the initial synchronization ......................................................................... 512
Synchronizing database files................................................................................... 514
Synchronizing objects ............................................................................................. 516
To synchronize library-based objects associated with a data group ................. 516
To synchronize library-based objects without a data group .............................. 517
Synchronizing IFS objects....................................................................................... 520
To synchronize IFS objects associated with a data group ................................ 520
To synchronize IFS objects without a data group ............................................. 522
Synchronizing DLOs................................................................................................ 524
To synchronize DLOs associated with a data group ......................................... 524
To synchronize DLOs without a data group ...................................................... 525
Synchronizing data group activity entries................................................................ 528
Synchronizing tracking entries ................................................................................ 530
To synchronize an IFS tracking entry ................................................................ 530
To synchronize an object tracking entry ............................................................ 530
Chapter 21 Introduction to programming 531
Support for customizing........................................................................................... 532
User exit points.................................................................................................. 532
Collision resolution ............................................................................................ 532
Completion and escape messages for comparison commands ............................. 534
CMPFILA messages ......................................................................................... 534
CMPOBJA messages........................................................................................ 535
CMPIFSA messages ......................................................................................... 535
CMPDLOA messages ....................................................................................... 536
CMPRCDCNT messages .................................................................................. 536
CMPFILDTA messages..................................................................................... 537
Adding messages to the MIMIX message log ......................................................... 541
Output and batch guidelines.................................................................................... 542
General output considerations .......................................................................... 542
Output parameter ........................................................................................ 542
Display output.............................................................................................. 543
Print output .................................................................................................. 543
File output.................................................................................................... 545
General batch considerations............................................................................ 546
Batch (BATCH) parameter .......................................................................... 546
Job description (JOBD) parameter .............................................................. 546
Job name (JOB) parameter ......................................................................... 546
Displaying a list of commands in a library ............................................................... 547
Running commands on a remote system................................................................ 548
Benefits - RUNCMD and RUNCMDS commands ............................................. 548
Procedures for running commands RUNCMD, RUNCMDS.................................... 549
Running commands using a specific protocol ................................................... 549
Running commands using a MIMIX configuration element ............................... 551
Using lists of retrieve commands ............................................................................ 555
Changing command defaults................................................................................... 556
Chapter 22 Customizing procedures 557
Procedure components and concepts..................................................................... 557
Procedure types ................................................................................................ 558
Procedure job processing.................................................................................. 558
Attributes of a step ............................................................................................ 559
Operational control ............................................................................................ 560
Current status and run history ........................................................................... 561
Customizing user application handling for switching............................................... 561
Customize the step programs for user applications .......................................... 562
Working with procedures......................................................................................... 563
Accessing the Work with Procedures display.................................................... 563
Displaying the procedures for an application group .................................... 564
Displaying the procedures for a node.......................................................... 564
Displaying all procedures ............................................................................ 565
Creating a procedure of type *NODE ................................................................ 565
Creating a procedure of type *USER ................................................................ 565
Creating a procedure of type *END, *START, *SWTPLAN, *SWTUNPLAN ..... 566
Deleting a procedure ......................................................................................... 566
Working with the steps of a procedure .................................................................... 567
Displaying the steps within a procedure ............................................................ 567
Displaying step status for the last started run of a procedure ........................... 568
Adding a step to a procedure ............................................................................ 568
Changing attributes of a step ............................................................................ 569
Enabling or disabling a step .............................................................................. 569
Removing a step from a procedure ................................................................... 570
Working with step programs.................................................................................... 570
Accessing step programs .................................................................................. 570
Creating a custom step program ....................................................................... 570
Changing a step program .................................................................................. 571
Step program format STEP0100 ....................................................................... 571
Working with step messages................................................................................... 573
Accessing the Work with Step Messages display ............................................. 573
Adding or changing a step message ................................................................. 573
Removing a step message ................................................................................ 574
Additional programming support for procedures and steps..................................... 574
Chapter 23 Shipped procedures and step programs 576
Values for procedures and steps............................................................................. 576
Shipped procedures for application groups............................................................. 578
END ................................................................................................................... 579
ENDTGT............................................................................................................ 579
ENDIMMED ....................................................................................................... 579
PRECHECK ...................................................................................................... 580
START............................................................................................................... 581
SWTPLAN ......................................................................................................... 581
SWTUNPLAN .................................................................................................... 583
Shipped procedures for data protection reports ...................................................... 585
CRTDPRDIR ..................................................................................................... 585
CRTDPRFLR..................................................................................................... 585
CRTDPRLIB ...................................................................................................... 586
Shipped default procedures for IBM i cluster type application groups .................... 586
END for clustering ............................................................................................. 587
START for clustering ......................................................................................... 587
SWTPLAN for clustering ................................................................................... 588
SWTUNPLAN for clustering .............................................................................. 590
Shipped user procedures for cluster type application groups ................................. 592
APP_END.......................................................................................................... 592
APP_FAIL.......................................................................................................... 593
APP_STR .......................................................................................................... 593
APP_SWT ......................................................................................................... 594
Shipped user procedures for *GMIR resource groups ............................................ 594
GMIR_END ....................................................................................................... 594
GMIR_FAIL ....................................................................................................... 595
GMIR_JOIN ....................................................................................................... 595
GMIR_STR ........................................................................................................ 596
GMIR_SWT ....................................................................................................... 596
Shipped user procedures for *LUN resource groups .............................................. 597
LUN_FAIL.......................................................................................................... 597
LUN_SWT ......................................................................................................... 598
Shipped user procedures for Peer resource groups ............................................... 598
PEER_END ....................................................................................................... 598
PEER_STR ....................................................................................................... 598
Shipped user procedures for *PPRC resource groups............................................ 599
PPRC_END ....................................................................................................... 599
PPRC_FAIL ....................................................................................................... 599
PPRC_JOIN ...................................................................................................... 600
PPRC_STR ....................................................................................................... 600
PPRC_SWT ...................................................................................................... 601
Steps for application groups.................................................................................... 602
Steps for application groups included in procedures......................................... 602
Step programs not included in shipped MIMIX procedures............................... 609
Steps for data protection report procedures............................................................ 611
Steps for clustering environments ........................................................................... 613
Steps for MIMIX for MQ........................................................................................... 623
Chapter 24 Customizing with exit point programs 625
Summary of exit points............................................................................................ 625
MIMIX user exit points ....................................................................................... 625
MIMIX Monitor user exit points .......................................................................... 626
MIMIX Promoter user exit points ....................................................................... 627
Working with journal receiver management user exit points ................................... 628
Journal receiver management exit points.......................................................... 628
Change management exit points................................................................. 628
Delete management exit points ................................................................... 629
Requirements for journal receiver management exit programs................... 629
Journal receiver management exit program example ................................. 632
Appendix A Supported object types for system journal replication 635
Appendix B MIMIX product-level security 638
Authority levels for MIMIX commands..................................................................... 639
Substitution values for command authority ....................................................... 647
Appendix C Copying configurations 648
Supported scenarios ............................................................................................... 648
Checklist: copy configuration................................................................................... 649
Copying configuration procedure ............................................................................ 653
Appendix D Configuring Intra communications 654
Manually configuring Intra using TCP ..................................................................... 654
Manually configuring Intra using SNA ..................................................................... 656
Appendix E MIMIX support for independent ASPs 658
Benefits of independent ASPs................................................................................. 659
Auxiliary storage pool concepts at a glance ............................................................ 659
Requirements for replicating from independent ASPs ............................................ 662
Limitations and restrictions for independent ASP support....................................... 662
Configuration planning tips for independent ASPs.................................................. 663
Journal and journal receiver considerations for independent ASPs .................. 664
Configuring IFS objects when using independent ASPs ................................... 664
Configuring library-based objects when using independent ASPs .................... 664
Avoiding unexpected changes to the library list ................................................ 665
Detecting independent ASP overflow conditions..................................................... 667
Appendix F Advanced auditing topics 668
What are rules and how they are used by auditing ................................................. 669
Using a different job scheduler for audits ................................................................ 670
Considerations for rules .......................................................................................... 671
Creating user-generated notifications ..................................................................... 673
Example of a user-generated notification .......................................................... 674
Running rules and rule groups manually................................................................. 675
Running rules .................................................................................................... 675
Running rule groups .......................................................................................... 676
MIMIX rule groups ................................................................................................... 677
Appendix G Interpreting audit results 678
Resolving auditing problems ................................................................................... 679
Resolving audit runtime status problems .......................................................... 679
Checking the job log of an audit ........................................................................ 684
Resolving audit compliance status problems .................................................... 684
When the difference is “not found” .......................................................................... 686
Interpreting results for configuration data - #DGFE audit........................................ 687
Interpreting results of audits for record counts and file data ................................... 689
What differences were detected by #FILDTA.................................................... 689
What differences were detected by #MBRRCDCNT ......................................... 691
Interpreting results of audits that compare attributes .............................................. 692
What attribute differences were detected .......................................................... 692
Where was the difference detected................................................................... 695
What attributes were compared ........................................................................ 695
Attributes compared and expected results - #FILATR, #FILATRMBR audits.... 696
Attributes compared and expected results - #OBJATR audit ............................ 701
Attributes compared and expected results - #IFSATR audit ............................. 710
Attributes compared and expected results - #DLOATR audit ........................... 713
Comparison results for journal status and other journal attributes .................... 715
How configured journaling settings are determined .................................... 718
Comparison results for auxiliary storage pool ID (*ASP)................................... 719
Comparison results for user profile status (*USRPRFSTS) .............................. 722
How configured user profile status is determined........................................ 723
Comparison results for user profile password (*PRFPWDIND)......................... 725
Appendix H Journal Codes and Error Codes 727
Journal entry codes for user journal transactions.................................................... 727
Journal entry codes for files .............................................................................. 727
Error codes for files in error ............................................................................... 730
Journal codes and entry types for journaled IFS objects .................................. 732
Journal codes and entry types for journaled data areas and data queues........ 733
Journal entry codes for system journal transactions ............................................... 735
Appendix I Outfile formats 737
Work panels with outfile support ............................................................................. 738
MCAG outfile (WRKAG command) ......................................................................... 739
MCDTACRGE outfile (WRKDTARGE command) ................................................... 742
MCNODE outfile (WRKNODE command)............................................................... 745
MXCDGFE outfile (CHKDGFE command) .............................................................. 747
MXCMPDLOA outfile (CMPDLOA command)......................................................... 749
MXCMPFILA outfile (CMPFILA command) ............................................................. 751
MXCMPFILD outfile (CMPFILDTA command) ........................................................ 753
MXCMPFILR outfile (CMPFILDTA command, RRN report).................................... 756
MXCMPRCDC outfile (CMPRCDCNT command)................................................... 757
MXCMPIFSA outfile (CMPIFSA command) ............................................................ 759
MXCMPOBJA outfile (CMPOBJA command) ......................................................... 761
MXAUDHST outfile (WRKAUDHST command) ...................................................... 763
MXAUDOBJ outfile (WRKAUDOBJ, WRKAUDOBJH commands) ......................... 766
MXDGACT outfile (WRKDGACT command)........................................................... 769
MXDGACTE outfile (WRKDGACTE command)...................................................... 771
MXDGDFN outfile (WRKDGDFN command) .......................................................... 778
MXDGDLOE outfile (WRKDGDLOE command) ..................................................... 786
MXDGFE outfile (WRKDGFE command)................................................................ 787
MXDGIFSE outfile (WRKDGIFSE command) ......................................................... 790
MXDGSTS outfile (WRKDG command) .................................................................. 791
WRKDG outfile SELECT statement examples .................................................. 813
WRKDG outfile example 1........................................................................... 814
WRKDG outfile example 2........................................................................... 814
WRKDG outfile example 3........................................................................... 814
WRKDG outfile example 4........................................................................... 815
MXDGOBJE outfile (WRKDGOBJE command) ...................................................... 816
MXDGTSP outfile (WRKDGTSP command) ........................................................... 819
MXJRNDFN outfile (WRKJRNDFN command) ....................................................... 821
MXRJLNK outfile (WRKRJLNK command) ............................................................. 824
MXSYSDFN outfile (WRKSYSDFN command)....................................................... 826
MXSYSSTS outfile (WRKSYS command) .............................................................. 829
MXJRNINSP outfile (WRKJRNINSP command) ..................................................... 830
MXTFRDFN outfile (WRKTFRDFN command) ....................................................... 832
MXDGIFSTE outfile (WRKDGIFSTE command)..................................................... 834
MXDGOBJTE outfile (WRKDGOBJTE command).................................................. 837
MXPROC outfile (WRKPROC command) ............................................................... 839
MXPROCSTS outfile (WRKPROCSTS command) ................................................. 840
MXSTEPPGM outfile (WRKSTEPPGM command)................................................. 841
MXSTEP outfile (WRKSTEP command) ................................................................. 842
MXSTEPMSG outfile (WRKSTEPMSG command)................................................. 843
MXSTEPSTS outfile (WRKSTEPSTS command) ................................................... 844
Index 846
Who this book is for
MIMIX Operations with IBM i Clustering
This book is for administrators and operators in an IBM i clustering environment
who use MIMIX® for PowerHA® to integrate cluster management with MIMIX
logical replication or supported hardware-based replication techniques. This book
focuses on addressing problems reported in MIMIX status and basic operational
procedures such as starting, ending, and switching.
MIMIX Operations - 5250
This book provides high level concepts and operational procedures for managing
your high availability environment using MIMIX products from a native user
interface. This book focuses on tasks typically performed by an operator, such as
checking status, starting or stopping replication, performing audits, and basic
problem resolution.
Using MIMIX Monitor
This book describes how to use the MIMIX Monitor user and programming
interfaces available with MIMIX products. This book also includes programming
information about MIMIX Model Switch Framework.
Using MIMIX Promoter
This book describes how to use MIMIX commands for copying and reorganizing
active files. MIMIX Promoter functionality is included with MIMIX® Enterprise™.
MIMIX for IBM WebSphere MQ
This book identifies requirements for the MIMIX for MQ feature which supports
replication in IBM WebSphere MQ environments. This book describes how to
configure MIMIX for this environment and how to perform the initial
synchronization and initial startup. Once configured and started, all other
operations are performed as described in the MIMIX Operations - 5250 book.
MIMIX DR Limitations
Environments that replicate in one direction between only two systems may not
require the features offered by a high availability product. Instead, a disaster recovery
solution that enables data to be readily available is all that is needed. MIMIX® DR
features many of the capabilities found in Vision Solutions high availability products,
with the following specifications that make it strictly a disaster recovery solution:
• MIMIX Instance - Only one MIMIX Instance is allowed on each participating
system. A MIMIX DR installation comprises two systems that transfer data and
objects between them. Both systems within the instance use the same name for
the library in which the product is installed.
• Switching - MIMIX DR does not support switching. Switching is the process by
which a production environment is automatically moved from one system to
another system, after which replication can run in either direction.
• Priority based auditing - MIMIX DR supports only scheduled object auditing.
Priority based object auditing is an advanced configuration of auditing that is only
available with other products within the MIMIX family.
Sources for additional information
This book refers to other published information. The following information, plus
additional technical information, can be located in the IBM Knowledge Center.
From the Knowledge Center you can access these IBM Power™ Systems topics,
books, and redbooks:
• Backup and Recovery
• Journal management
• DB2 Universal Database for IBM Power™ Systems Database Programming
• Integrated File System Introduction
• Independent disk pools
• TCP/IP Setup
• IBM redbook Striving for Optimal Journal Performance on DB2 Universal
Database for iSeries, SG24-6286
• IBM redbook AS/400 Remote Journal Function for High Availability and Data
Replication, SG24-5189
• IBM redbook Power™ Systems iASPs: A Guide to Moving Applications to
Independent ASPs, SG24-6802
The following information may also be helpful if you replicate journaled data areas,
data queues, or IFS objects:
• DB2 UDB for iSeries SQL Programming Concepts
• DB2 Universal Database for iSeries SQL Reference
• IBM redbook AS/400 Remote Journal Function for High Availability and Data
Replication, SG24-5189
How to contact us
For contact information, visit our Contact CustomerCare web page.
If you are current on maintenance, support for MIMIX products is also available when
you log in to Support Central.
It is important to include product and version information whenever you report
problems.
MIMIX overview
This book provides concepts, configuration procedures, and reference information for
using MIMIX® Enterprise™, MIMIX® Professional™, or MIMIX® DR. For simplicity, this
book uses the term MIMIX to refer to the functionality provided unless a more specific
name is necessary. Concepts and reference information also apply to MIMIX®
Global™. Some topics may not apply to all products.
MIMIX version 8 provides high availability for your critical data in a production
environment on IBM Power™ Systems through real-time replication of changes and
the ability to quickly switch your production environment to a ready backup system.
These capabilities allow your business operations to continue when you have planned
or unplanned outages in your System i environment. MIMIX also provides advanced
capabilities that can help ensure the integrity of your MIMIX environment.
Replication: MIMIX continuously captures changes to critical database files and
objects on a production system, sends the changes to a backup system, and applies
the changes to the appropriate database file or object on the backup system. The
backup system stores exact duplicates of the critical database files and objects from
the production system.
MIMIX uses two replication paths to address different pieces of your replication
needs. These paths operate with configurable levels of cooperation or can operate
independently.
• The user journal replication path captures changes to critical files and objects
configured for replication through a user journal. When configuring this path,
shipped defaults use the remote journaling function of the operating system to
simplify sending data to the remote system. In previous versions, MIMIX DB2
Replicator provided this function.
• The system journal replication path handles replication of critical system objects
(such as user profiles, program objects, or spooled files), integrated file system
(IFS) objects, and document library objects (DLOs) using the system journal. In
previous versions MIMIX Object Replicator provided this function.
Configuration choices determine the degree of cooperative processing used between
the system journal and user journal replication paths when replicating database files,
IFS objects, data areas, and data queues.
Switching: One common use of MIMIX is to support a hot backup system to which
operations can be switched in the event of a planned or unplanned outage. If a
production system becomes unavailable, its backup is already prepared for users. In
the event of an outage, you can quickly switch users to the backup system where they
can continue using their applications. MIMIX captures changes on the backup system
for later synchronization with the original production system. When the original
production system is brought back online, MIMIX assists you with analysis and
synchronization of the database files and other objects.
Automatic verification and correction: MIMIX enables earlier and easier detection
of problems known to adversely affect maintaining availability and switch-readiness of
your replication environment. MIMIX automatically detects and corrects potential
problems during replication and auditing. MIMIX also helps to ensure the integrity of
your MIMIX configuration by automatically verifying that the files and objects being
replicated are what is defined to your configuration.
MIMIX is shipped with these capabilities enabled. The incorporated best practices for
maintaining availability and switch-readiness are key to ensuring that your MIMIX
environment is ready to protect your data. User interfaces allow you to fine-tune
these capabilities to the needs of your environment.
Analysis: MIMIX also provides advanced analysis capabilities through the MIMIX
portal application for Vision Solutions Portal (VSP). When using the VSP user
interface, you can see what objects are configured for replication as well as what
replicated objects on the target system have been changed by people or programs
other than MIMIX. (Objects changed on the target system affect your data integrity.)
You can also check historical arrival and backlog rates for replication to help you
identify trends in your operations that may affect MIMIX performance.
Uses: MIMIX is typically used among systems in a network to support a hot backup
system. Simple environments have one production system and one backup system.
More complex environments have multiple production systems or backup systems.
MIMIX can also be used on a single system.
You can view the replicated data on the backup system at any time without affecting
productivity. This allows you to generate reports, submit (read-only) batch jobs, or
perform backups to tape from the backup system. In addition to real-time backup
capability, replicated databases and objects can be used for distributed processing,
allowing you to off-load applications to a backup system.
The topics in this chapter include:
• “MIMIX concepts” on page 26 describes concepts and terminology that you need
to know about MIMIX.
• “The MIMIX environment” on page 33 describes components of the MIMIX
operating environment.
• “System value settings for MIMIX” on page 41 identifies the system value settings
that MIMIX requires for installing or upgrading software and for operation, and
identifies which system values are changed by MIMIX.
• “Operational overview” on page 44 provides information about day to day MIMIX
operations.
MIMIX concepts
This topic identifies concepts and terminology that are fundamental to how MIMIX
performs replication. You should be familiar with the relationships between systems,
the concepts of data groups and switching, and the role of the IBM i journaling function in
replication.
The terms management system and network system define the role of a system
relative to how the products interact within a MIMIX installation. These roles remain
associated with the system within the MIMIX installation to which they are defined.
Typically one system in the MIMIX installation is designated as the management
system and the remaining one or more systems are designated as network systems.
A management system is the system in a MIMIX installation that is designated as the
control point for all installations of the product within the MIMIX installation. The
management system is the location from which work to be performed by the product
is defined and maintained. Often the system defined as the management system also
serves as the backup system during normal operations. A network system is any
system in a MIMIX installation that is not designated as the management system
(control point) of that MIMIX installation. Work definitions are automatically distributed
from the management system to a network system. Often a system defined as a
network system also serves as the production system during normal operations.
You also define the data to be replicated and many other characteristics that the
replication process uses for the defined data. The replication process is started and
ended by operations on a data group.
A data group entry identifies a source of information that can be replicated. Once a
data group definition is created, you can define data group entries. MIMIX uses the
data group entries that you create during configuration to determine whether a journal
entry should be replicated. If you are using both user journal and system journal
replication, a data group can have any combination of entries for files, IFS objects,
library-based objects, and DLOs.
• Journal receivers that are, or were, associated with the journal while the current
journal receiver is attached.
Remote journaling requires unique considerations for journaling and journal receiver
management. For additional information, see “Journal receiver management” on
page 213.
Multi-part naming convention
MIMIX uses named definitions to identify related user-defined configuration
information. A multi-part, qualified naming convention uniquely describes certain
types of definitions. This includes a two-part name for journal definitions and a three-
part name for transfer definitions and data group definitions. Newly created data
groups use remote journaling as the default configuration, which has unique
requirements for naming data group definitions. For more information, see “Target
journal definition names generated by ADDRJLNK command” on page 210.
The multi-part name consists of a name followed by one or two participating system
names (actually, names of system definitions). Together the elements of the multi-part
name define the entire environment for that definition. As a whole unit, a fully-qualified
two-part or three-part name must be unique. The first element, the name, does not
need to be unique. In a three-part name, the order of the system names is also
important, since two valid definitions may share the same three elements but with the
system names in different orders.
For example, MIMIX automatically creates a journal definition for the security audit
journal when you create a system definition. Each of these journal definitions is
named QAUDJRN, so the name alone is not unique. The name must be qualified with
the name of the system to which the journal definition applies, such as QAUDJRN
CHICAGO or QAUDJRN NEWYORK. Similarly, the data group definitions
INVENTORY CHICAGO HONGKONG and INVENTORY HONGKONG CHICAGO
are unique because of the order of the system names.
When using command interfaces which require a data group definition, MIMIX can
derive the fully-qualified name of a data group definition if a partial name provided is
sufficient to determine the unique name. If the first part of the name is unique, it can
be used by itself to designate the data group definition. For example, if the data group
definition INVENTORY CHICAGO HONGKONG is the only data group with the name
INVENTORY, then specifying INVENTORY on any command requiring a data group
name is sufficient. However, if a second data group named INVENTORY NEWYORK
LONDON is created, the name INVENTORY by itself no longer describes a unique
data group. INVENTORY CHICAGO would be the minimum parts of the name of the
first data definition necessary to determine its uniqueness. If a third data group named
INVENTORY CHICAGO LONDON was added, then the fully qualified name would be
required to uniquely identify the data group. The order in which the systems are
identified is also important. The system HONGKONG appears in only one of the data
group definitions. However, specifying INVENTORY HONGKONG will generate a
“not found” error because HONGKONG is not the first system in any of the data group
definitions. This applies to all external interfaces that reference multi-part definition
names.
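For example, a command that requires a data group name accepts any of the
following forms. This is a sketch using the Start Data Group (STRDG) command
referenced later in this book; the DGDFN parameter keyword shown here is an
assumption for illustration:

    STRDG DGDFN(INVENTORY)                  /* Valid only while INVENTORY is unique by name      */
    STRDG DGDFN(INVENTORY CHICAGO)          /* Unique after INVENTORY NEWYORK LONDON is created  */
    STRDG DGDFN(INVENTORY CHICAGO HONGKONG) /* Fully qualified; always unique                    */
    STRDG DGDFN(INVENTORY HONGKONG)         /* Fails; HONGKONG is not the first system name      */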
MIMIX can also derive a fully qualified name for a transfer definition. Data group
definitions and system definitions include parameters that identify associated transfer
definitions. When a subsequent operation requires the transfer definition, MIMIX uses
the context of the operation to determine the fully qualified name. For example, when
starting a data group, MIMIX uses information in the data group definition, the
systems specified in the data group name, and the specified transfer definition name
to derive the fully qualified transfer definition name. If MIMIX cannot find the transfer
definition, it reverses the order of the system names and checks again, avoiding the
need for redundant transfer definitions.
You can also use contextual system support (*ANY) to configure transfer definitions.
When you specify *ANY in a transfer definition, MIMIX uses information from the
context in which the transfer definition is called to resolve to the correct system.
Unlike the conventional configuration case, a specific search order is used if MIMIX is
still unable to find an appropriate transfer definition. For more information, see “Using
contextual (*ANY) transfer definitions” on page 181.
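As a sketch of what such a configuration might look like, a transfer definition named
PRIMARY could be created with *ANY in place of both system names. The CRTTFRDFN
command name and TFRDFN parameter keyword are assumptions for illustration;
PROTOCOL(*TCP) is the keyword used elsewhere in this book:

    CRTTFRDFN TFRDFN(PRIMARY *ANY *ANY) PROTOCOL(*TCP)  /* Systems resolved from the calling context */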
The MIMIX environment
IFS directories
Vision Solutions products have an IFS directory structure used in replication for each
family of products. The IFS directory structure is created during the installation
process for License Manager and for each product.
The following two root directories contain all IFS-based objects:
/LakeviewTech
/VisionSolutions
There is a unique sub-directory structure for each product installation by instance
name. Stream files for the install wizard are stored in the following subdirectory:
/LakeviewTech/Upgrades
objects in this library. If you place objects in these libraries, they may be
deleted during the next installation process. Also, do not replicate the
MIMIXQGPL library. For additional information, see “Data that should not be
replicated” on page 84.
MIMIXSBS subsystem
The MIMIXSBS subsystem is the default subsystem used by nearly all MIMIX-related
processing. This subsystem is shipped with the proper job queue entries and routing
entries for correct operation of the MIMIX jobs.
Data libraries
MIMIX uses the concept of data libraries. Currently there are two series of data
libraries:
• MIMIX uses data libraries for storing the contents of the object cache. MIMIX
creates the first data library when needed and may create additional data libraries.
The names of data libraries are of the form product-library_n (where n is a number
starting at 1).
• For system journal replication, MIMIX creates libraries named product-library_x,
where x is derived from the ASP. For example, A for ASP 1, B for ASP 2. These
ASP-specific data libraries are created when needed and are not deleted until the
product is uninstalled.
System managers
System manager processes automatically distribute configuration and status changes
among systems in a MIMIX installation. There are multiple system manager
processes associated with each system.
Each system manager process consists of a remote journal link (RJ Link) that
transmits journal entries in one direction between a pair of systems and a system
manager processing job that applies the entries to MIMIX files on the target system of
the RJ link. Between each pair of communicating systems there are always two
system manager processes.
Figure 1 shows a MIMIX installation with a management system and two network
systems. Each arrow represents one system manager process and the direction in
which it transmits data.
In environments with more than two systems, MIMIX restricts which systems can
communicate based on their designation as a management (*MGT) or network
(*NET) system. By default, each management systems communicates with all
network systems and all other management systems. Network systems communicate
only with management systems. When licensed for multiple management systems,
you can limit which management systems communicate with each network system.
Figure 1. System manager jobs in a MIMIX installation with one management system and
two network systems.
Journal managers
MIMIX uses journal managers to maintain journal receivers used by replication
processes and system manager processes. A journal manager job runs on each
system in a MIMIX installation.
By default, MIMIX performs both change management and delete management for
journal receivers. Parameters in a journal definition allow you to customize details of
how the change and delete operations are performed.
The Journal manager delay parameter in the system definition determines how
frequently the journal manager looks for work.
Journal manager jobs are included in a group of jobs that MIMIX automatically
restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to
restart these MIMIX jobs at midnight (12:00 a.m.). The Job restart time parameter in
the system definition determines when the journal manager for that system restarts.
Target journal inspection
Target journal inspection jobs detect when a user or program other than MIMIX
modified replicated objects on the target system and send a notification when
activity is detected. These jobs provide environment analysis functions but they are
not required to perform replication.
MIMIX also automatically corrects objects identified by target journal inspection as
"changed on target by user". Depending on the details of the problem, the correction
may be performed by a separate job or by the next audit to compare the object.
Target journal inspection jobs are started on the target system when MIMIX is started,
or as necessary when MIMIX managers or data groups are started. Target journal
inspection jobs are included in a group of jobs that MIMIX automatically restarts daily
to maintain the MIMIX environment. The default operation of MIMIX is to restart these
MIMIX jobs at midnight (12:00 a.m.). MIMIX determines when to restart the journal
inspection jobs based on the value of the Job restart time parameter in the system
definitions for the network and management systems.
For more information, see “Performing target journal inspection” on page 334.
Collector services
Collector services refers to a group of jobs that are necessary for MIMIX operation on
the native user interface as well as for the MIMIX portal application within the Vision
Solutions Portal. One or more collector service jobs collect and combine MIMIX status
from all systems. Collector services submits a cleanup job on each system at
midnight for that system’s local time.
Cluster services
When MIMIX is configured and licensed for IBM i clustering, MIMIX uses the cluster
services function provided by IBM i to integrate the system management functions
needed for clustering. Cluster services must be active in order for a cluster node to be
recognized by the other nodes in the cluster. MIMIX integrates starting and stopping
cluster services into status and commands for controlling processes that run at the
system level.
contained in folders (except for first-level folders). To select DLOs for replication,
you select individual DLOs by specific or generic folder and DLO name, and by
owner.
A single data group can contain any combination of these types of data group entries.
If your license is for only one of the MIMIX products rather than for MIMIX®
Enterprise™ or MIMIX® Professional™, only the entries associated with the product to
which you are licensed will be processed for replication.
Log spaces
Based on user space objects (*USRSPC), a log space is a MIMIX object that
provides an efficient storage and manipulation mechanism for replicated data that is
temporarily stored on the target system during the receive and apply processes. All
internal structures and objects that make up a log space are created and manipulated
by MIMIX.
Table 1. Job descriptions used by MIMIX

MIMIXDFT — MIMIX Default. Used for all MIMIX jobs that do not have a specific job
description.

MIMIXSND — MIMIX Send. Used for database send, object send, object retrieve,
container send, and status send jobs in MIMIX.

PORTnnnnn or alias name — MIMIX TCP Server, where nnnnn identifies the server port
number or alias. A job description exists for each transfer definition which uses TCP
protocol and enables MIMIX to create and manage autostart job entries. Characters
nnnnn in the name identify the server port. (See the note below.)

Note: These job descriptions are created in the installation library when transfer
definitions which specify PROTOCOL(*TCP) and MNGAJE(*YES) are created or changed.
The associated autostart job entries are added to the subsystem description for the
MIMIXSBS subsystem in library MIMIXQGPL.
User profiles
All of the MIMIX job descriptions are configured to run jobs using the MIMIXOWN user
profile. This profile owns all MIMIX objects, including the objects in the MIMIX product
libraries and in the MIMIXQGPL library. The profile is created with sufficient authority
to run all MIMIX products and perform all the functions provided by the MIMIX
products. The authority of this user profile can be reduced, if business practices
require, but this is not recommended. Reducing the authority of the MIMIXOWN profile
requires significant effort by the user to ensure that the products continue to function
properly and to avoid adversely affecting the performance of MIMIX products. See the
Using License Manager book for additional security information for the MIMIXOWN
user profile.
Note: Do not replicate the MIMIXOWN or LAKEVIEW user profiles. For additional
information, see “Data that should not be replicated” on page 84.
System value settings for MIMIX
• QAUDLVL - Security auditing level
Setting: Multiple values set by MIMIX as described below.
Required for system journal (QAUDJRN) replication. Set by MIMIX when starting
replication processes and when MIMIX commands are used to build the journaling
environment for system journal replication. Set as follows:
– MIMIX adds the values *CREATE, *DELETE, *OBJMGT, and *SAVRST.
– MIMIX checks for the values *SECURITY, *SECCFG, *SECRUN, and *SECVLDL.
If the value *SECURITY is set, no change is made. If *SECURITY is not set,
MIMIX adds the values *SECCFG, *SECRUN, and *SECVLDL.
– If any data group is configured to replicate spooled files, MIMIX adds the
values *SPLFDTA and *PRTDTA.
• QMLTTHDACN - Multithreaded job action
Setting: cannot be set to 3
Affects only environments licensed for MIMIX for PowerHA, which cannot have
the value 3 set on any node in the cluster.
• QPWDLVL and other QPWDnnnn system values
Setting: Strongly recommend using the same settings on all systems in the
instance.
When a data group is configured to replicate user profiles, MIMIX replication
enforces the QPWD system value settings on each system. If values on the target
system are more restrictive, replication failures can occur for user profiles with
replicated passwords. These system values are not set by MIMIX.
Note: Changes to QPWDLVL require an IPL to become effective and should be
made only with careful consideration.
• QRETSVRSEC - Retain server security data
Setting: 1
Required on each system in the MIMIX product instance for MIMIX operations that
use remote journaling. If the value is not 1, MIMIX sets this value to 1 when MIMIX
system manager processes start and when a transfer definition is created or
changed.
• QTIME - Time of day
Setting: Correct value for time zone in which the partition runs.
All systems in an instance must be properly set to prevent issues when running
procedures. Not set by MIMIX.
• QTIMZON - Time zone
Setting: Time zone in which the partition runs
All systems in an instance must be properly set to prevent issues when running
procedures. Not set by MIMIX.
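You can verify and correct these settings with standard IBM i commands. For example,
to check QRETSVRSEC and set it to the required value (MIMIX also sets this value
itself when its system manager processes start):

    DSPSYSVAL SYSVAL(QRETSVRSEC)              /* Display the current value              */
    CHGSYSVAL SYSVAL(QRETSVRSEC) VALUE('1')   /* Required for remote journaling support */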
For additional information, see these topics:
• “System journal replication” on page 55
• “Identifying spooled files for replication” on page 103
• “Replicating user profiles and associated message queues” on page 104
• “Setting the system time zone and time” on page 325
Operational overview
Before replication can begin, the following requirements must be met through the
installation and configuration processes:
• MIMIX software must be installed on each system in the MIMIX installation.
• At least one communications link must be in place for each pair of systems
between which replication will occur.
• The MIMIX operating environment must be configured and be available on each
system.
• Journaling must be active for the database files and objects configured for user
journal replication.
• For objects to be replicated from the system journal, the object auditing
environment must be set up.
• The files and objects must be initially synchronized between the systems
participating in replication.
Once MIMIX is configured and files and objects are synchronized, day-to-day
operations for MIMIX can be performed.
The primary areas for which status can surface are: system-level processes,
replication activity, replication processes, and auditing.
System-level processes are reported in the Nodes portlet in VSP and on the Work
with Systems (WRKSYS) display.
In environments configured with application groups, application group status includes
roll up status of replication errors, replication processes, and auditing. Application
group status is found in the Application Groups portlet in VSP and on the Work with
Application Groups (WRKAG) display.
In environments configured with only data groups, data group status includes
replication errors and replication processing. Data group status is found in the Data
Groups portlet in VSP and on the Work with Data Groups (WRKDG) display.
Auditing status is reflected at the application group and data group level interfaces, as
well as on the Audits portlet in VSP and the Work with Audits (WRKAUD) display.
The Work with Data Groups display summarizes replication errors and the status of
user journal (database) and system journal (object) processes for both source and
target systems. By using function keys,
you can display additional detailed views of only database or only object status.
Database views - These views provide information about replication performed by
user journal replication processes, including journaled files, IFS objects, data
areas, and data queues. They also include information about the replication of
user journal transactions, including journal progress, performance, and recent
activity.
Object views - These views provide information about replication performed by
system journal replication processes, including journal progress, performance,
and recent activity.
When a data group is experiencing replication problems, you can use these options
from the Work with Data Groups display to view problems grouped by type of activity:
12=Files not active, 13=Objects in error, 51=IFS trk entries not active, and 53=Obj trk
entries not active.
resubmit individual failed entries or all of the entries for an object. This option calls the
Retry Data Group Activity Entries (RTYDGACTE) command. From the Work with
Data Group Activity display, you can also specify a time at which to start the request,
thereby delaying the retry attempt until a time when it is more likely to succeed.
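The same request can be made from the command line. This is a sketch; the DGDFN
parameter keyword is an assumption for illustration, and the selection parameters
that identify which entries to retry are omitted:

    RTYDGACTE DGDFN(INVENTORY CHICAGO HONGKONG)   /* Retry failed activity entries for this data group */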
Files on hold: When the database apply process detects a data synchronization
problem, it places the file (individual member) on “error hold” and logs an error. File
entries are in held status when an error is preventing them from being applied to the
target system. You need to analyze the cause of the problem in order to determine
how to correct and release the file and ensure that the problem does not occur again.
An option on the Work with Data Groups display provides quick access to the subset
of file entries that are in error for a data group. From the Work with DG File Entries
display, you can see the status of an entry and use a number of options to assist in
resolving the error. An alternative view shows the database error code and journal
code. Available options include access to the Work with DG Files on Hold
(WRKDGFEHLD) command. The WRKDGFEHLD command allows you to work with
file entries that are in a held status. When this option is selected from the target
system, you can view and work with the entry for which the error was detected and
work with all other entries following the entry in error.
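For example, the command can be run directly for a specific data group. This is a
sketch; the DGDFN parameter keyword is an assumption for illustration:

    WRKDGFEHLD DGDFN(INVENTORY CHICAGO HONGKONG)   /* Work with file entries held due to errors */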
Journal analysis: With user journal replication, when the system that is the source of
replicated data fails, it is possible that some of the generated journal entries may not
have been transmitted to or received by the target system. However, it is not always
possible to determine this until the failed system has been recovered. Even if the
failed system is recovered, damage to a disk unit or to the journal itself may prevent
an accurate analysis of any missed data. Once the source system is available again,
if there is no damage to the disk unit or journal and its associated journal receivers,
you can use the journal analysis function to help determine what journal entries may
have been missed and to which files the data belongs. You can only perform journal
analysis on the system where a journal resides.
Missed transactions for IFS objects, data areas, and data queues that are replicated
through the user journal are not detected by journal analysis.
To enable a switchable data group to function properly for default user journal
replication processes, four journal definitions (two RJ links) are required. “Journal
definition considerations” on page 206 contains examples of how to set up these
journal definitions.
You can specify whether to end the RJ link during a switch. Default behavior for a
planned switch is to leave the RJ link running. Default behavior during an unplanned
switch is to end the RJ link. Once you have a properly configured data group that
supports switching, you should be aware of how MIMIX supports unconfirmed entries
and the state of the RJ link following a switch. For more information, see “Support for
unconfirmed entries during a switch” on page 73 and “RJ link considerations when
switching” on page 73.
For additional information about switching, see the MIMIX Operations book. For
additional information about MIMIX Model Switch Framework, see the Using MIMIX
Monitor book.
messages issued by MIMIX as an audit trail. In addition, the message log provides
robust subset and filter capabilities, the ability to locate and display related job logs,
and a powerful debug tool. When messages are issued, they are initially sent to the
specified primary and secondary message queues. In the event that these message
queues are erased, placing messages into the message log file secures a second
level of information concerning MIMIX operations.
The message log on the management system contains messages from the
management system and each network system defined within the installation. The
system manager is responsible for collecting messages from all network systems. On
a network system, the message log contains only those messages generated by
MIMIX activity on that system.
MIMIX automatically performs cleanup of the message log on a regular basis. The
system manager deletes entries from the message log file based on the value of the
Keep system history parameter in the system definition. However, if you process an
unusually high volume of replicated data, you may want to also periodically delete
unnecessary message log entries since the file grows in size depending on the
number of messages issued in a day.
Replication process overview
Replication job and supporting job names
Table 2. MIMIX processes and their corresponding job names
Cooperative processing introduction
When a data group definition meets the requirements for MIMIX Dynamic Apply, any
logical files and physical (source and data) files properly identified for cooperative
processing will be processed via MIMIX Dynamic Apply unless a known restriction
prevents it.
When a data group definition does not meet the requirements for MIMIX Dynamic
Apply but still meets legacy cooperative processing requirements, any PF-DTA or
PF38-DTA files properly configured for cooperative processing will be replicated using
legacy cooperative processing. All other types of files are processed using system
journal replication.
By default, IFS objects, data areas, and data queues that can be journaled are not
configured for advanced journaling. These object types must be manually
configured to use advanced journaling.
In all variations of cooperative processing, the system journal is used to replicate the
following operations:
• The creation of new objects that do not deposit an entry in a user journal when
they are created.
• Restores of objects on the source system.
• Move and rename operations from a non-replicated library or path into a library or
path that is configured for replication.
relationships by assigning them to the same or appropriate apply sessions. It is also
much better at maintaining data integrity of replicated objects which previously
needed legacy cooperative processing in order to replicate some operations such as
creates, deletes, moves, and renames. Another benefit of MIMIX Dynamic Apply is
more efficient hold log processing by enabling multiple files to be processed through a
hold log instead of just one file at a time.
New data groups created with the shipped default configuration values are configured
to use MIMIX Dynamic Apply. This configuration requires data group object entries
and data group file entries.
For more information, see “Identifying logical and physical files for replication” on
page 106 and “Requirements and limitations of MIMIX Dynamic Apply” on page 111.
Advanced journaling
The term advanced journaling refers to journaled IFS objects, data areas, or data
queues that are configured for cooperative processing. When these objects are
configured for cooperative processing, replication of changed bytes of the journaled
objects’ data occurs through the user journal. This is more efficient than replicating an
entire object through the system journal each time changes occur.
Such a configuration also allows for the serialization of updates to IFS objects, data
areas, and data queues with database journal entries. In addition, processing time for
these object types may be reduced, even for equal amounts of data, as user journal
replication eliminates the separate save, send, and restore processes necessary for
system replication.
The phrase “user journal replication of IFS objects, data areas, and data queues” is
frequently used interchangeably with the term advanced journaling; the terms are
synonymous.
For more information, see “User journal replication of IFS objects, data areas, data
queues” on page 75 and “Planning for journaled IFS objects, data areas, and data
queues” on page 87.
System journal replication
name space, the object send process creates a MIMIX construct called an activity
entry. The process also determines whether any additional information is needed
for replication, and transmits the activity entry to the target system. Data groups
can be configured to use a shared object send job or to use a dedicated job.
• Object receive process: This process receives the activity entry and waits for
notification that any additional source system processing is complete before
passing the activity entry to the object apply process.
• Object retrieve process: If any additional information is needed for replication,
the object retrieve process obtains it and places it in a container within a holding
area. This process is also used when additional processing is required on the
source system prior to transmission to the target system. The object retrieve
process uses multiple asynchronous jobs. The minimum and maximum number of
jobs is configurable for a data group.
• Container send process: When any needed additional information has been
retrieved, the container send process transmits it from a holding area to the target
system. This process also updates the activity entry and notifies the object send
process that the additional information is on the target system and the activity
entry is ready to be applied. The container send and receive processes use
multiple asynchronous jobs. The minimum and maximum number of jobs is
configurable for a data group.
• Container receive process: This process receives any needed additional
information, places it into a holding area on the target system, and notifies the
container send process when it completes these operations.
• Object apply process: This process uses the information in the activity entry as
well as any additional information that was transmitted to the target system to
replicate the operation represented by the entry. The object apply process uses
multiple asynchronous jobs. The minimum and maximum number of jobs is
configurable for a data group.
• Status send process: This process notifies the source system of the status of the
replication.
• Status receive process: This process updates the status on the source system
and, if necessary, passes control information back to the object send process.
MIMIX uses a collection of structures and customized functions for controlling these
structures during replication. Collectively the customized functions and structures are
referred to as the work log. The structures in the work log consist of log spaces, work
lists (implemented as user queues), and a distribution status file.
There are two categories of activity entries: those that are self-contained and those
that require the retrieval of additional information. “Processing self-contained activity
entries” on page 57 describes the simplest object replication scenario. “Processing
data-retrieval activity entries” on page 57 describes the object replication scenario in
which additional data must be retrieved from the source system and sent to the target
system.
After the object send process determines that an entry is to be replicated and that
additional processing or information on the source system is required, it performs the
following actions:
• Sets the status of the entry to PR (pending retrieve)
• Adds the “sent” date and time to the activity entry
• Writes the activity entry to the log space and adds a record to the distribution
status file
• Transmits the activity entry to a corresponding object receive process on the
target system.
• Adds the entry to the object retrieve work list on the source system.
The object receive process adds the “received” date and time to the activity entry,
writes the activity entry to the log space, and adds a record to the distribution status
file. Now each system has a copy of the activity entry. The object receive process
waits until the source system processing is complete before it adds the activity entry
to the object apply work list.
Concurrently, the object send process reads the object send work list. When the
object send process finds an activity entry in the object send work list, the object send
process performs one or more of the following additional steps on the entry:
• If an object retrieve job packaged the object, the activity entry is routed to the
container send work list.
• The activity entry is transmitted to the target system, its status is updated, and a
“retrieved” date and time is added to the activity entry.
On the source system the next available object retrieve process for the data group
retrieves the activity entry from the object retrieve work list and processes the
referenced object. In addition to retrieving additional information for the activity entry,
additional processing may be required on the source system. The object retrieve
process may perform some or all of the following steps:
• Retrieve the extended attribute of the object. This may be one step in retrieving
the object or it may be the primary function required of the retrieve process.
• If necessary, cooperative processing activities, such as adding or removing a data
group file entry, are performed.
• The object identified by the activity entry is packaged into a container in the data
library. The object retrieve process adds the “retrieved” date and time to the
activity entry and changes the status of the entry to “pending send.”
• The activity entry is added to the object send work list. From there the object send
job takes the appropriate action for the activity, which may be to send the entry to
the target system, add the entry to the container send work list, or both.
The container send and receive processes are only used when an activity entry
requires information in addition to what is contained within the journal entry. The next
available job for the container send process for the data group retrieves the activity
entry from the container send work list and retrieves the container for the packaged
object from the data library. The container send job transmits the container to a
corresponding job of the container receive process on the target system. The
container receive process places the container in a data library on the target system.
The container send process waits for confirmation from the container receive job, then
adds the “container sent” date and time to the activity entry, changes the status of the
activity entry to PA (pending apply), and adds the entry to the object send work list.
The next available object apply process job for the data group retrieves the activity
entry from the object apply work list, locates the container for the object in the data
library, and replicates the operation represented by the entry. The object apply
process adds the “applied” date and time to the activity entry, changes the status of
the entry to CP (completed processing), and adds the entry to the status send work
list.
The status send process retrieves the activity entry from the status send work list
and transmits the updated entry to a corresponding job of the status receive process
on the source system. The status receive process updates the activity entry in the log
space and the distribution status file. If the activity entry requires further processing,
such as if an updated container is needed on the target system, the status receive job
adds the entry to the object send work list.
Tracking object replication
After you start a data group, you need to monitor the status of the replication
processes and respond to any error conditions. Regular monitoring and timely
responses to error conditions significantly reduce the amount of time and effort
required in the event that you need to switch a data group.
MIMIX provides a high-level indication of the status of the processes used in object
replication and of any error conditions. You can access detailed status information
through the Data Group Status window.
When an operation cannot complete on either the source or target system (such as
when the object is in use by another process and cannot be accessed), the activity
entry may go to a failed state. MIMIX attempts to rectify many failures automatically,
but some failures require manual intervention. Objects with at least one failed entry
outstanding are considered to be “in error.” You should periodically review the objects
in error, and the associated failed entries, and determine the appropriate action. You
may retry or delete one or all of the failed entries for an object. You can check the
progress of activity entries and take corrective action through the Work with Data
Group Activity display and the Work with DG Activity Entries display. You can also
subset directly to the activity entries in error from the Work with Data Groups display.
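For example, assuming a data group named KEYAPP (the name is illustrative) and the DGDFN keyword used elsewhere in this book, you could review activity and failed entries with commands such as:
    WRKDGACT DGDFN(KEYAPP)
    WRKDGACTE DGDFN(KEYAPP)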
If you have new objects to replicate that are not within the MIMIX name space, you
need to add data group entries for them. Before objects identified by new data group
entries can be replicated, you must end and restart the system journal replication
processes for the changes to take effect.
The system manager removes old activity entries from the work log on each system
after the time specified in the system definition passes. The Keep data group history
(days) parameter (KEEPDGHST) indicates how long the activity entries remain on the
system. You can also manually delete activity entries. Containers in the data libraries
are deleted after the time specified in the Keep MIMIX data (days) parameter
(KEEPMMXDTA).
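As an illustrative sketch only, these retention values could be set when changing a system definition. The CHGSYSDFN command name, its acceptance of these keywords, and the system definition name SYSTEMA are assumptions:
    CHGSYSDFN SYSDFN(SYSTEMA) KEEPDGHST(30) KEEPMMXDTA(10)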
Managing object auditing
During replication - MIMIX may change the auditing value when an object is
replicated because it was created, restored, moved, or renamed into the MIMIX name
space (the group of objects defined to MIMIX).
While starting a data group - MIMIX may change the auditing value while
processing a STRDG request if the request specified processes that cause object
send (OBJSND) jobs to start and the request occurred after a data group switch or
after a configuration change to one or more data group entries (object, IFS, or DLO).
Shipped command defaults for the STRDG command allow MIMIX to set object
auditing if necessary. If you would rather set the auditing level for replicated objects
yourself, you can specify *NO for the Set object auditing level (SETAUD) parameter
when you start data groups.
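For example, the following request starts a data group without changing object auditing; the data group name KEYAPP is illustrative and the DGDFN keyword is an assumption consistent with other commands shown in this book:
    STRDG DGDFN(KEYAPP) SETAUD(*NO)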
Invoking manually - The Set Data Group Auditing (SETDGAUD) command provides
the ability to manually set the object auditing level of existing objects identified for
replication by a data group. When the command is invoked, MIMIX checks the audit
value of existing objects identified for system journal replication. Shipped default
values on the command cause MIMIX to change the object auditing value of objects
to match the configured value when an object’s actual value is lower than the
configured value.
The SETDGAUD command is used during initial configuration of a data group.
Otherwise, it is not necessary for normal operations and should only be used under
the direction of a trained MIMIX support representative.
The SETDGAUD command also supports optionally forcing a change to a configured
value that is lower than the existing value through its Force audit value (FORCE)
parameter.
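A sketch of such a request, to be run only under the direction of MIMIX support, might look like the following; the data group name is illustrative and the DGDFN keyword is an assumption, while FORCE is the parameter named above:
    SETDGAUD DGDFN(KEYAPP) FORCE(*YES)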
Evaluation processing - Regardless of how the object auditing evaluation is
invoked, MIMIX may find that an object is identified by more than one data group
entry within the same class of object (IFS, DLO, or library-based). It is important to
understand the order of precedence for processing data group entries.
Data group entries are processed in order from most generic to most specific. IFS
entries are processed using the Unicode character set; object entries and DLO entries
are processed using the EBCDIC character set. The first entry (more generic) found
that matches the object is used until a more specific match is found.
The entry that most specifically matches the object is used to process the object. If
the object has a lower audit value, it is set to the configured auditing value specified in
the data group entry that most specifically matches the object.
When MIMIX processes a data group IFS entry, each object that matches the entry is
checked and, if necessary, changed to the new auditing value. In the case of an IFS
entry with a generic name, all descendants
of the IFS object may also have their auditing value changed.
When you change a data group entry, MIMIX updates all objects identified by the
same type of data group entry in order to ensure that auditing is set properly for
objects identified by multiple entries with different configured auditing values. For
example, if a new DLO entry is added to a data group, MIMIX sets object auditing for
all objects identified by the data group’s DLO entries, but not for its object entries or
IFS entries.
For more information and examples of setting auditing values with the SETDGAUD
command, see “Setting data group auditing values manually” on page 309.
User journal replication
synchronous delivery mode is used, the journal entries are
guaranteed to be in main storage on the target system prior to
control being returned to the application on the source machine.
• It allows the journal receiver save and restore operations to be
moved to the target system. This way, the resource utilization on
the source machine can be reduced.”
Overview of IBM processing of remote journals
Several key concepts within the IBM i remote journal function are important to
understanding its impact on MIMIX replication.
A local-remote journal pair refers to the relationship between a configured source
journal and target journal. The key point about a local-remote journal pair is that data
flows only in one direction within the pair, from source to target.
When the remote journal function is activated and all journal entries from the source
are requested, existing journal entries for the specified journal receiver on the source
system which have not already been replicated are replicated as quickly as possible.
This is known as catchup mode. Once the existing journal entries are delivered to
the target system, the system begins sending new entries in continuous mode
according to the delivery mode specified when the remote journal function was
started. New journal entries can be delivered either synchronously or asynchronously.
Synchronous delivery
In synchronous delivery mode the target system is updated in real time with journal
entries as they are generated by the source applications. The source applications do
not continue processing until the journal entries are sent to the target journal.
Each journal entry is first replicated to the target journal receiver in main memory on
the target system (1 in Figure 2). When the source system receives notification of the
delivery to the target journal receiver, the journal entry is placed in the source journal
receiver (2) and the source database is updated (3).
With synchronous delivery, journal entries that have been written to memory on the
target system are considered unconfirmed entries until they have been written to
auxiliary storage on the source system and confirmation of this is received on the
target system (4).
Figure 2. Synchronous mode sequence of activity in the IBM remote journal feature.
(The figure shows the source system, with its applications, source journal receiver
(local), production database, and source journal message queue, and the target
system, with its target journal receiver (remote) and target journal message queue;
the numbered steps 1 through 4 correspond to the sequence described in the text.)
Unconfirmed journal entries are entries replicated to a target system but the state of
the I/O to auxiliary storage for the same journal entries on the source system is not
known. Unconfirmed entries only pertain to remote journals that are maintained
synchronously. They are held in the data portion of the target journal receiver. These
entries are not processed with other journal entries unless specifically requested or
until confirmation of the I/O for the same entries is received from the source system.
Confirmation typically is not immediately sent to the target system for performance
reasons.
Once the confirmation is received, the entries are considered confirmed journal
entries. Confirmed journal entries are entries that have been replicated to the target
system and the I/O to auxiliary storage for the same journal entries on the source
system is known to have completed.
With synchronous delivery, the most recent copy of the data is on the target system. If
the source system becomes unavailable, you can recover using data from the target
system.
Since delivery is synchronous to the application layer, there are application
performance and communications bandwidth considerations. There is some
performance impact to the application when it is moved from asynchronous mode to
synchronous mode for high availability purposes. This impact can be minimized by
ensuring efficient data movement. In general, a minimum of a dedicated 100
megabit Ethernet connection is recommended for synchronous remote journaling.
MIMIX includes special switch processing for unconfirmed entries to ensure that the
most recent transactions are preserved in the event of a source system failure. For
more information, see “Support for unconfirmed entries during a switch” on page 73.
Asynchronous delivery
In asynchronous delivery mode, the journal entries are placed in the source journal
first (A in Figure 3) and then applied to the source database (B). An independent job
sends the journal entries from a buffer (C) to the target system journal receiver (D) at
some time after control is returned to the source applications that generated the
journal entries.
Because the journal entries on the target system may lag behind the source system’s
database, in the event of a source system failure, entries may become trapped on the
source system.
Figure 3. Asynchronous mode sequence of activity in the IBM remote journal feature.
(The figure shows the source system, with its applications, source journal receiver
(local), and production database, and the target system, with its target journal
receiver (remote) and target journal message queue; the lettered steps A through D
correspond to the sequence described in the text.)
With asynchronous delivery, the most recent copy of the data is on the source system.
Performance critical applications frequently use asynchronous delivery.
Default values used in configuring MIMIX for remote journaling use asynchronous
delivery. This delivery mode is most similar to the MIMIX database send and receive
processes.
User journal replication processes
Data groups created using default values are configured to use remote journaling
support for user journal replication.
The replication path for database information includes the IBM i remote journal
function, the MIMIX database reader process, and one or more database apply
processes.
The IBM i remote journal function transfers journal entries to the target system.
The database reader (DBRDR) process reads journal entries from the
target journal receiver of a remote journal configuration and places those journal
entries that match replication criteria for the data group into a log space.
All journal entries deposited into the source journal will be transmitted to the target
system. The database reader process performs the filtering that is identified in the
data group definition parameters and file and tracking entry options.
The database apply process applies the changes stored in the target log space to
the appropriate database file or replicated object on the target system. MIMIX uses multiple
apply processes in parallel for maximum efficiency. Transactions that are not part of a
commit cycle are immediately applied to the target system. For transactions that are
part of a commit cycle, processing varies depending on how the data group is
configured. With default configuration values, MIMIX processes transactions that are
part of a commit cycle but does not apply those transactions until an open commit
cycle completes. Optionally, data groups can be configured to immediately apply
transactions that are part of a commit cycle.
The RJ link
To simplify tasks associated with remote journaling, MIMIX implements the concept of
a remote journal link. A remote journal link (RJ link) is a configuration element that
identifies an IBM i remote journaling environment. An RJ link identifies:
• A “source” journal definition that identifies the system and journal that are the
source of the journal entries being replicated.
• A “target” journal definition that defines a remote journal.
• Primary and secondary transfer definitions for the communications path for use by
MIMIX.
• Whether the IBM i remote journal function sends journal entries asynchronously or
synchronously.
Once an RJ link is defined and other configuration elements are properly set, user
journal replication processes will use the IBM i remote journaling environment within
their replication path.
The concept of an RJ link is integrated into existing commands. The Work with RJ
Links display makes it easy to identify the state of the IBM i remote journaling
environment defined by the RJ link.
Sharing RJ links among data groups
It is possible to configure multiple data groups to use the same RJ link. However, data
groups should only share an RJ link if they are intended to be switched together or if
they are non-switchable data groups. Otherwise, there is additional communications
overhead from data groups replicating in opposite directions and the potential for
journal entries for database operations to be routed back to their originating system.
See “Support for unconfirmed entries during a switch” on page 73 and “RJ link
considerations when switching” on page 73 for more details.
Table 3. End option values on the End Remote Journal Link (ENDRJLNK) command.
*IMMED - The target journal is deactivated immediately. Journal entries that are
already queued for transmission are not sent before the target journal is deactivated.
The next time the remote journal function is started, the journal entries that were
queued but not sent are prepared again for transmission to the target journal.
*CNTRLD - Any journal entries that are queued for transmission to the target journal
are transmitted before the IBM i remote journal function is ended. At any time, the
remote journal function may have one or more journal entries prepared for
transmission to the target journal. If an asynchronous delivery mode is used over a
slow communications line, it may take a significant amount of time to transmit the
queued entries before the target journal is actually ended.
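As an illustrative sketch only, a controlled end of an RJ link might be requested as follows. The link-identifier and end-option keywords shown (RJLNK and ENDOPT) are assumptions, as is the link name, since the full command syntax is not shown in this section:
    ENDRJLNK RJLNK(JRNDFN1) ENDOPT(*CNTRLD)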
(DELIVERY(*SYNC)).
• When the remote journal function is performing catch-up processing.
RJ link monitors
User journal replication processes monitor the journal message queues of the
journals identified by the RJ link. Two RJ link monitors are created automatically, one
on the source system and one on the target system. These monitors provide added
value by allowing MIMIX to automatically monitor the state of the remote journal link,
to notify the user of problems, and to automatically recover the link when possible.
originated the replication and holds the source journal definition for the next system in
the cascade.
For more information about configuring for these environments, see “Data distribution
and data management scenarios” on page 390.
Support for unconfirmed entries during a switch
The MIMIX Remote Journal support implements synchronous mode processing in a
way that reduces data latency in the movement of journal entries from the source to
the target system. This reduces the potential for and the degree of manual
intervention when an unplanned outage occurs.
Whenever an RJ link failure is detected, MIMIX saves any unconfirmed entries on the
target system so they can be applied to the backup database if an unplanned switch
is required. The unconfirmed entries are the most recent changes to the data.
Maintaining this data on the target system is critical to your managed availability
solution.
In the event of an unplanned switch, the unconfirmed entries are routed to the MIMIX
database apply process to be applied to the backup database. As a result, you will
see the database apply process jobs run longer than they would under standard
switch processing. If the apply process is ended by a user before the switch, MIMIX
will restart the apply jobs to preserve these entries.
As part of the unplanned switch processing, MIMIX checks whether the apply jobs are
caught up. Then, unconfirmed entries are applied to the target database and added to
a journal that will be transferred to the source system when that system is brought
back up. When the backup system is brought online as the temporary source
system, the unconfirmed entries are processed before any new journal entries
generated by the application are processed. Furthermore, to ensure full data integrity,
once the original source system is operational these unconfirmed entries are the first
entries replicated back to that system.
used during a planned switch cause the RJ link to remain active. You may need to
end the RJ link after a planned switch.
User journal replication of IFS objects, data areas, data queues
Benefits
One of the most significant benefits of journaling through the user journal is that IFS
objects, data areas, and data queues are processed by replicating only changed
bytes.
Another key advantage of IFS support is that environments performing many create,
move, rename, and delete operations, where all objects are journaled at birth and
remain within the replication namespace, will replicate robustly and without timing
issues related to QAUDJRN latency.
Another significant benefit of user journaling for IFS objects, data areas, and data
queues is that transactions can be applied in lock-step with a database file. This
requires that the objects and database files are configured to the same data group
and the same database apply session.
For example, assume that a hotel uses a database application to reserve rooms.
Within the application, a data area contains a counter to indicate the number of rooms
reserved for a particular day and a database file contains detailed information about
reservations. Each time a room is reserved, both the counter and the database file are
updated. If these updates do not occur in the same order on the target system, the
hotel risks reserving too many or too few rooms. When using system journal
replication, serialization of these transactions cannot be guaranteed on the target
system due to inherent differences in MIMIX processing from the user journal
(database file) and the system journal (default for objects). With user journal
processing, MIMIX serializes these transactions on the target system by updating
both the file and the data area. Thus, as long as both the database file and data area
are configured to be processed by the same apply session and processing of an
object is not held due to an error, updates occur on the target system in the same
order they were originally made on the source system.
Additional benefits of replicating IFS objects, data areas, and data queues from the
user journal include:
• Replication is less intrusive. In system-based object replication, the save/restore
process places locks on the replicated object on the source system. Database
replication touches the user journal only, leaving the source object alone.
• More robust handling of environments with a high volume of move, rename,
create, and delete operations.
• Changes to objects replicated from the user journal may be replicated to the target
system in a more timely manner. In traditional object replication, system journal
replication processes must contend with potential locks placed on the objects by
user applications.
• Processing time may be reduced, even for equal amounts of data. Database
replication eliminates the separate save, send, and restore processes necessary
for object replication.
• The objects replicated from the user journal can reduce the burden on object
replication processes when there is a lot of activity being replicated through the
system journal.
• Commitment control is supported for B journal entry types for IFS objects
journaled to a user journal.
• Support for multiple hard links to a single stream file.
Restrictions and configuration requirements vary for IFS objects and data area or data
queue objects. For detailed information, including supported journal entry types, see
“Identifying data areas and data queues for replication” on page 113 and “Identifying
IFS objects for replication” on page 116.
Processes used
When IFS objects, data areas, and data queues are properly configured, replication
occurs through the user journal replication path. Processing occurs through the IBM i
remote journal function, the MIMIX database reader process, and one database
apply process (session A).
Note: Data groups can also be configured for MIMIX source-send processing instead
of MIMIX RJ support.
Tracking entries
A tracking entry is associated with each IFS object, data area, and data queue that is
replicated through the user journal.
The collection of data group IFS entries for a data group determines the subset of
existing IFS objects on the source system that are eligible for user journal replication
techniques. Similarly, the collection of data group object entries determines the subset
of existing data areas and data queues on the source system that are eligible for user
journal replication techniques. MIMIX requires a tracking entry for each of the eligible
objects to identify how it is defined for replication and to assist with tracking status
when it is replicated. IFS tracking entries identify IFS stream files, symbolic links and
directories, including the source and target file ID (FID), while object tracking entries
identify data areas or data queues.
When you initially configure a data group you must load tracking entries, start
journaling for the objects which they identify, and synchronize the objects with the
target system. The same is true when you add new or change existing data group IFS
entries or object entries.
It is also possible for tracking entries to be automatically created. After creating or
changing data group IFS entries or object entries that are configured for replication
through the user journal, tracking entries are created the next time the data group is
started. However, this method has disadvantages: it can significantly increase the
amount of time needed to start a data group. If the objects you intend to replicate
through the user journal are not journaled before the start request is made, MIMIX
places the tracking entries in *HLDERR state. Error messages indicate that journaling
must be started and the objects must be synchronized between systems.
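If journaling was not started before the start request, a command like the following could start journaling for the object identified by an IFS tracking entry. The STRJRNIFSE command name is an assumption patterned on the ENDJRNIFSE example shown later in this book, and the path is illustrative:
    STRJRNIFSE OBJ(('/ifsdir/mydir'))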
Once a tracking entry exists, it remains until one of the following occurs:
• The object identified by the tracking entry is deleted from the source system and
replication of the delete action completes on the target system.
• The object identified by the tracking entry is moved or renamed to a name that is
not configured for replication.
• The data group configuration changes so that an object is no longer identified for
replication through the user journal.
Figure 4 shows an IFS user directory structure, the include and exclude processing
selected for objects within that structure, and the resultant list of tracking entries
created by MIMIX.
The status of tracking entries is included with other data group status. You also can
see what objects they identify, whether the objects are journaled, and their replication
status. You can also perform operations on tracking entries, such as holding and
releasing, to address replication problems.
Older source-send user journal replication processes
use of disk storage and allows valuable system resources to be available for other
processing.
Besides indicating the mapping between source and target file names, data group file
entries identify additional information used by database processes. The data group
file entry can also specify a particular apply session to use for processing on the
target system.
A status code in the data group file entry also stores the status of the file or member in
the MIMIX process. If a replication problem is detected, MIMIX puts the member in
hold error (*HLDERR) status so that no further transactions are applied. Files can
also be put on hold (*HLD) manually.
Putting a file on hold causes MIMIX to retain all journal entries for the file in log
spaces on the target system. If you expect to synchronize files at a later time, it is
better to put the file in an ignored state. By setting files to an ignored state, journal
entries for the file in the log spaces are deleted and additional entries received on
the target system are discarded. This keeps the log spaces to a minimal size and
improves efficiency for the apply process.
The file entry option Lock member during apply indicates whether to restrict access
to the file on the backup system to read-only. This file entry option can be specified
on the data group definition or on individual data group entries.
CHAPTER 3 Preparing for MIMIX
This chapter outlines what you need to do to prepare for using MIMIX.
Preparing for the installation and use of MIMIX is a very important step towards
meeting your availability management requirements. Because of their shared
functions and their interaction with other MIMIX products, it is best to determine IBM
System i requirements for user journal and system journal processing in the context of
your total MIMIX environment.
Give special attention to planning and implementing security for MIMIX. General
security considerations for all MIMIX products can be found in the Using License
Manager book. In addition, you can make your systems more secure with MIMIX
product-level and command-level security. Each product has its own product-level
security, but you must also consider the security implications of common functions
used by each product. Information about setting security for common functions is also
found in the Using License Manager book.
The topics in this chapter include:
• “Checklist: pre-configuration” on page 82 provides a procedure to follow to
prepare to configure MIMIX on each system that participates in a MIMIX
installation.
• “Data that should not be replicated” on page 84 describes how to consider what
data should not be replicated.
• “Planning for journaled IFS objects, data areas, and data queues” on page 87
describes considerations when planning to use advanced journaling for IFS
objects, data areas, or data queues.
• “Starting the MIMIXSBS subsystem” on page 92 describes how to start the
MIMIXSBS subsystem which all MIMIX products run in.
• “Accessing the MIMIX Main Menu” on page 93 describes the MIMIX Main Menu
and its two assistance levels, basic and intermediate which provide options to help
simplify daily interactions with MIMIX.
Checklist: pre-configuration
You need to configure MIMIX on each system that participates in a MIMIX installation.
Do the following:
1. By now, you should have completed the following tasks:
• Installing MIMIX software according to the checklist in the Using License
Manager book
• Turning on product-level security and granting authority to user profiles to
control access to the MIMIX products
2. At this time, you should review the information in “Data that should not be
replicated” on page 84.
3. Decide what replication choices are appropriate for your environment. Review the
following topics:
• “New configuration default environment” on page 83
• “Planning for journaled IFS objects, data areas, and data queues” on page 87
• For detailed information see the chapter “Planning choices and details by
object class” on page 95.
4. If it is not already active, start the MIMIXSBS subsystem using topic “Starting the
MIMIXSBS subsystem” on page 92.
5. Configure each system in the MIMIX installation, beginning with the management
system. The chapter “Configuration checklists” on page 135 identifies the primary
options you have for configuring MIMIX.
6. Once you complete the configuration process you choose, you may also need to
do one or more of the following:
• If you plan to use MIMIX Monitor in conjunction with MIMIX, you may need to
write exit programs for monitoring activity and you may want to ensure that
your monitor definitions are replicated. See the MIMIX Operations book for
more information.
• Verify the configuration.
• Verify any exit programs that are called by MIMIX.
• Update any automation programs you use with MIMIX and verify their
operation.
• If you plan to use switching support, you or your Certified MIMIX Consultant
may need to take additional action to set up and test switching. Customization
of the procedures and steps for application groups may be appropriate and
should be considered. In environments that do not use application groups, a
default model switch framework must be configured and identified in MIMIX
policies. For more information about switching and policies, see the MIMIX
Operations book.
New configuration default environment
Data that should not be replicated
There are some considerations to keep in mind when defining data for replication. Not
only do you need to determine what is critical to replicate, but you also need to
consider data that should not be replicated.
As you identify your critical data, consider the following:
• Do not place user created objects or programs in the LAKEVIEW, MIMIXQGPL, or
VSI001LIB libraries or in the IFS location /visionsolutions/http/vsisvr.
Any user created objects or programs in these locations will be deleted during the
installation process. Move any such objects or programs to a different location
before installing software. The one exception is that job descriptions, such as the
MIMIX Port job, can continue to be placed into the MIMIXQGPL library.
• Only user created objects or programs that are related to a product installation
should be placed within the product’s installation library or a data library.
Examples of related objects for MIMIX products include user created step
programs, user exit programs, and programs created as part of a MIMIX Model
Switch Framework implementation.
• Certain types of information must not be replicated. Also, some temporary data
associated with applications may not need to be replicated. Table 4 identifies what
data to exclude from replication.
Table 4. Data to exclude from replication
Application Environment - Temporary objects or files: You may not need to replicate
temporary files, work files, and temporary objects, including DLOs and stream files.
Evaluate how your applications use such files to determine if they need to be
replicated.
iOptimize Environment - If iOptimize is installed on the same system or in the same
partition as MIMIX, do not replicate the following libraries (and their contents):
IOPT, IOPT71, IOPTSPLARC, IOPTOBJARC.
Note: IOPT is the default name for the iOptimize installation library -- the library in
which iOptimize is installed. iOptimize data libraries are associated with an iOptimize
installation library and begin with the default name.
Table 4. Data to exclude from replication (Continued)
MIMIX Director™ Environment - For MIMIX Director, 8n is the release level. For
example, n=1 in release 8.1. If MIMIX Director is installed on the same system or in
the same partition as MIMIX, do not replicate the following:
Planning for journaled IFS objects, data areas, and data queues
Serialized transactions with database files
Transactions completed for database files and objects (IFS objects, data areas, or
data queues) can be serialized with one another when they are applied on the target
system. If you require serialization, these objects and database files must share the
same data group as well as the same database apply session, session A. For
example, when a database record contains a reference to a corresponding stream file
that is associated with the record, serialization may be desired.
Since MIMIX uses apply session A for all objects configured for user journal
replication, serialization may require that you change the configuration for database
files to ensure that they use the same apply session. Load balancing may also
become a concern. See “Database apply session balancing” on page 90.
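As a hedged sketch, a data group file entry could be directed to apply session A with a command like the following; the CHGDGFE command name and the DGDFN and APYSSN keywords are assumptions, and the data group, library, and file names are illustrative:
    CHGDGFE DGDFN(KEYAPP) FILE1(PRODLIB/ORDERS) APYSSN(A)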
Conversion examples
To illustrate a simple conversion, assume that the systems defined to data group
KEYAPP are running on an IBM i. You use this data group for system journal
replication of the objects in library PRODLIB. The data group has one data group
object entry which has the following values:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*FILE)
Example 1 - You decide to use user journal replication for all *DTAARA and *DTAQ
objects replicated with data group KEYAPP. You have confirmed that the data group
definition specifies TYPE(*ALL) and does not need to change. After performing a
controlled end of the data group, you change the data group object entry to have the
following values:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*DFT)
Note: COOPTYPE(*DFT) is equivalent to specifying COOPTYPE(*FILE *DTAARA
*DTAQ).
When the data group is started, object tracking entries are loaded for the data area
and data queue objects in PRODLIB. Those objects will now be replicated from a user
journal. Any other object types in PRODLIB continue to be replicated from the system
journal.
Example 2 - You want to use user journal replication for data group KEYAPP but one
data area, XYZ, must remain replicated from the system journal. You will need the
data group object entry described in Example 1.
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*DFT)
You will also need a new data group object entry that specifies the following so that
data area XYZ can be replicated from the system journal:
LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD)
COOPDB(*NO)
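For instance, such an entry might be added with the following command; the ADDDGOBJE command name and DGDFN keyword are assumptions, while the remaining keywords match the entry shown above:
    ADDDGOBJE DGDFN(KEYAPP) LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD) COOPDB(*NO)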
Example 3 - You want to use user journal replication for objects with the IFS Directory
IFSDIR but one object, abcXYZ, must remain replicated from the system journal. You
will need to do the following:
1. Run the Convert Data Group IFS Entries (CVTDGIFSE) command. See "Running
the CVTDGIFSE command" on page 147. A tracking entry (IFSTE) is created for
all IFS objects.
2. Add the data group IFS entry for the one object, abcXYZ, that you do not want to
replicate through the user journal:
OBJ1('/ifsdir/abcXYZ') PRCTYPE(*INCLD) COOPDB(*NO)
3. If the object previously was not journaled, but is now because CVTDGIFSE
journaled it, end journaling for the object’s tracking entry:
ENDJRNIFSE OBJ(('/ifsdir/abcXYZ'))
4. Remove the object’s tracking entry:
RMVDGIFSTE OBJ1('/ifsdir/abcXYZ')
duplicate journal entry sequence numbers and journal codes and types to the user
exit program when the data for the incomplete entry is retrieved. Programs need
to correctly handle these duplicate entries representing the single, original journal
entry.
• Journal entries for journaled IFS objects, data areas, and data queues will be
routed to the user exit program. This may be a performance consideration relative
to user exit program design.
Contact your Certified MIMIX Consultant for assistance with user exit programs.
Starting the MIMIXSBS subsystem
By default, all MIMIX products run in the MIMIXSBS subsystem that is created when
you install the product. This subsystem must be active before you can use the MIMIX
products.
If the MIMIXSBS is not already active, start the subsystem by typing the command
STRSBS SBSD(MIMIXQGPL/MIMIXSBS) and pressing Enter.
Any autostart job entries listed in the MIMIXSBS subsystem will start when the
subsystem is started.
Note: You can ensure that the MIMIX subsystem is started after each IPL by adding
this command to the end of the startup program for your system. Due to the
unique requirements and complexities of each MIMIX implementation, it is
strongly recommended that you contact your Certified MIMIX Consultant to
determine the best way in which to design and implement this change.
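A minimal sketch of such an addition to a startup CL program follows; the MONMSG simply ignores the error that is signaled if the subsystem is already active:
    STRSBS SBSD(MIMIXQGPL/MIMIXSBS)
    MONMSG MSGID(CPF0000) /* Ignore error if subsystem is already active */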
Accessing the MIMIX Main Menu
Note: On the MIMIX Basic Main Menu, options 5 (Start or complete switch using
Switch Asst.) and 10 (Availability Status) are not recommended for
installations that use application groups.
CHAPTER 4 Planning choices and details by
object class
This chapter describes the replication choices available for objects and identifies
critical requirements, limitations, and configuration considerations for those choices.
Many MIMIX processes are customized to provide optimal handling for certain
classes of related object types and differentiate between database files, library-based
objects, integrated file system (IFS) objects, and document library objects (DLOs).
Each class of information is identified for replication by a corresponding class of data
group entries. A data group can have any combination of data group entry classes.
Some classes even support multiple choices for replication.
In each class, a data group entry identifies a source of information that can be
replicated by a specific data group. When you configure MIMIX, each data group
entry you create identifies one or more objects to be considered for replication or to
be explicitly excluded from replication. When determining whether to replicate a
journaled transaction, MIMIX evaluates all of the data group entries for the class to
which the object belongs. If the object is within the name space determined by the
existing data group entries, the transaction is replicated.
When configuring installations that are licensed for MIMIX DR, name mapping is not
supported in data group entries.
The topics in this chapter include:
• “Replication choices by object type” on page 97 identifies the available replication
choices for each object class.
• “Configured object auditing value for data group entries” on page 98 describes
how MIMIX uses a configured object auditing value that is identified in data group
entries and when MIMIX will change an object’s auditing value to match this
configuration value.
• “Identifying library-based objects for replication” on page 100 includes information
that is common to all library-based objects, such as how MIMIX interprets the data
group object entries defined for a data group. This topic also provides examples
and additional detail about configuring entries to replicate spooled files and user
profiles.
• “Identifying logical and physical files for replication” on page 106 identifies the
replication choices and considerations for *FILE objects with logical or physical file
extended attributes. This topic identifies the requirements, limitations, and
configuration requirements of MIMIX Dynamic Apply and legacy cooperative
processing.
• “Identifying data areas and data queues for replication” on page 113 identifies the
replication choices and configuration requirements for library-based objects of
type *DTAARA and *DTAQ. This topic also identifies restrictions for replication of
these object types when user journal processes (advanced journaling) is used.
• “Identifying IFS objects for replication” on page 116 identifies supported and
unsupported file systems, replication choices, and considerations such as long
path names and case sensitivity for IFS objects. This topic also identifies
restrictions and configuration requirements for replication of these object types
when user journal processes (advanced journaling) is used.
• “Identifying DLOs for replication” on page 122 describes how MIMIX interprets the
data group DLO entries defined for a data group and includes examples for
documents and folders.
• “Processing of newly created files and objects” on page 126 describes how new
IFS objects, data areas, data queues, and files that have journaling implicitly
started are replicated from the user journal.
• “Processing variations for common operations” on page 129 describes
configuration-related variations in how MIMIX replicates move/rename, delete,
and restore operations.
Replication choices by object type
• Objects of type *FILE with extended attribute PF (data, source) or LF - Default:
user journal with MIMIX Dynamic Apply; identified by object entries and file entries.
Other: for PF data files, legacy cooperative processing; for PF source and LF files,
system journal; identified by object entries and file entries. See “Identifying logical
and physical files for replication” on page 106.
• Objects of type *FILE with other extended attributes - Default: system journal;
identified by object entries. See “Identifying library-based objects for replication” on
page 100.
• Objects of type *DTAARA - Default: advanced journaling; identified by object
entries and object tracking entries. Other: system journal; identified by object
entries. See “Identifying data areas and data queues for replication” on page 113.
• IFS objects - Default: system journal; identified by IFS entries. Other: advanced
journaling; identified by IFS entries and IFS tracking entries. See “Identifying IFS
objects for replication” on page 116.
Configured object auditing value for data group entries
When you create data group entries for library-based objects, IFS objects, or DLOs,
you can specify an object auditing value within the configuration. This configured
object auditing value affects how MIMIX handles changes to attributes of objects. It is
particularly important for, but not limited to, objects configured for system journal
replication.
The Object auditing value (OBJAUD) parameter defines a configured object auditing
level for use by MIMIX. This configured value is associated with all objects identified
for processing by the data group entry. An object’s actual auditing level determines
the extent to which changes to the object are recorded in the system journal and
replicated by MIMIX. The configured value is used during initial configuration and
during processing of requests to compare objects that are identified by configuration
data.
In specific scenarios, MIMIX evaluates whether an object’s auditing value matches
the configured value of the data group entry that most closely matches the object
being processed. If the actual value is lower than the configured value, MIMIX sets
the object to the configured value so that future changes to the object will be recorded
as expected in the system journal and therefore can be replicated.
Note: MIMIX only considers changing an object’s auditing value when the data
group object entry is configured for system journal replication. MIMIX does not
change the object’s value for files that are configured for MIMIX Dynamic
Apply or legacy cooperative processing or for data areas and data queues that
are configured for user journal replication.
The configured value specified in data group entries can affect replication of some
journal entries generated when an object attribute changes. Specifically, the
configured value can affect replication of T-ZC journal entries for files and IFS objects
and T-YC entries for DLOs. Changes that generate other types of journal entries are
not affected by this parameter.
When MIMIX changes the audit level, the possible values have the following results:
• The default value, *CHANGE, ensures that all changes to the object by all users
are recorded in the system journal.
• The value *ALL ensures that all changes or read accesses to the object by all
users are recorded in the system journal. The journal entries generated by read
accesses to objects are not used for replication and their presence can adversely
affect replication performance.
• The value *NONE results in no entries recorded in the system journal when the
object is accessed or changed.
The values *CHANGE and *ALL result in replication of T-ZC and T-YC journal entries.
The value *NONE prevents replication of attribute and data changes for the identified
object or DLO because T-ZC and T-YC entries are not recorded in the system journal.
For files configured for MIMIX Dynamic Apply and any IFS objects, data areas, or
data queues configured for user journal replication, the value *NONE can improve
MIMIX performance by preventing unneeded entries from being written to the system
journal.
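For example, a configured auditing value could be specified when creating an object entry; the ADDDGOBJE command name and DGDFN keyword are assumptions, the names are illustrative, and OBJAUD is the parameter described above:
    ADDDGOBJE DGDFN(KEYAPP) LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) OBJAUD(*CHANGE)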
When a compare request includes an object with a configured object auditing value of
*NONE, any differences found for attributes that could generate T-ZC or T-YC journal
entries are reported as *EC (equal configuration).
You may also want to read the following:
• For more information about when MIMIX sets an object’s auditing value, see
“Managing object auditing” on page 60.
• For more information about manually setting values and examples, see “Setting
data group auditing values manually” on page 309.
• To see what attributes can be compared and replicated, see the following topics:
– “Attributes compared and expected results - #FILATR, #FILATRMBR audits”
on page 696
– “Attributes compared and expected results - #OBJATR audit” on page 701
– “Attributes compared and expected results - #DLOATR audit” on page 713.
– “Attributes compared and expected results - #IFSATR audit” on page 710
Identifying library-based objects for replication
MIMIX uses data group object entries to identify whether to process transactions for
library-based objects. Collectively, the object entries identify which library-based
objects can be replicated by a particular data group.
Each data group object entry identifies one or more library-based objects. An object
entry can specify either a specific or a generic name for the library and object. In
addition, each object entry also identifies the object types and extended object
attributes (for *FILE and *DEVD objects) to be selected, defines a configured object
auditing level for the identified objects, and indicates whether the identified objects
are to be included in or excluded from replication.
For most supported object types which can be identified by data group object entries,
only the system journal replication path is available. For a list of object types, see
“Supported object types for system journal replication” on page 635. This list includes
information about what can be specified for the extended attributes of *FILE objects.
A limited number of object types which use the system journal replication path have
unique configuration requirements. These are described in
“Identifying spooled files for replication” on page 103 and “Replicating user profiles
and associated message queues” on page 104.
For detailed procedures, see “Configuring data group entries” on page 270.
Replication options for object types journaled to a user journal - For objects of
type *FILE, *DTAARA, and *DTAQ, MIMIX supports multiple replication methods. For
these object types, additional configuration data is evaluated when determining what
replication path to use for the identified objects.
For *FILE objects, the extended attribute and other configuration data are considered
when MIMIX determines what replication path to use for identified objects.
• For logical and physical files, MIMIX supports several methods of replication.
Each method varies in its efficiency, in its supported extended attributes, and in
additional configuration requirements. See “Identifying logical and physical files
for replication” on page 106 for additional details.
• For other extended attribute types, MIMIX supports only system journal
replication. Only data group object entries are required to identify these files for
replication.
For *FILE objects configured for replication through the system journal, MIMIX caches
extended file attribute information for a fixed set of *FILE objects. Also, the Omit
content (OMTDTA) parameter provides the ability to omit a subset of data-changing
operations from replication. For more information, see “Caching extended attributes of
*FILE objects” on page 369 and “Omitting T-ZC content from system journal
replication” on page 415.
For *DTAARA and *DTAQ object types, MIMIX supports replication using either
system journal or user journal replication processes. A configuration that uses the
user journal is also called an advanced journaling configuration. Additional
information, including configuration requirements are described in “Identifying data
areas and data queues for replication” on page 113.
How MIMIX uses object entries to evaluate journal entries for replication
The following information and example can help you determine whether the objects
you specify in data group object entries will be selected for replication. MIMIX
determines which replication process will be used only after it determines whether the
library-based object will be replicated.
When determining whether to process a journal entry for a library-based object,
MIMIX looks for a match between the object information in the journal entry and one
of the data group object entries. The library name is the first search element, then
followed by the object type, attribute (for files and device descriptions), and the object
name. The most significant match found (if any) is checked to determine whether to
include or exclude the journal entry in replication.
Table 6 shows how MIMIX checks a journal entry for a match with a data group object
entry. The columns are arranged to show the priority of the elements within the object
entry, with the most significant (library name) at left and the least significant (object
name) at right.
When configuring data group object entries, the flexibility of the generic support
allows a variety of include and exclude combinations for a given library or set of
libraries. But, generic name support can also cause unexpected results if it is not well
planned. Consider the search order shown in Table 6 when configuring data group
object entries to ensure that objects are not unexpectedly included or excluded in
replication.
Example - Say that you have a data group configured with data
group object entries like those shown in Table 8. The journal entries MIMIX is
evaluating for replication are shown in Table 7.
A transaction is received from the system journal for program BOOKKEEP in library
FINANCE. MIMIX will replicate this object since it fits the criteria of the first data group
object entry shown in Table 8.
A transaction for file ACCOUNTG in library FINANCE would also be replicated since it
fits the third entry.
A transaction for data area BALANCE in library FINANCE would not be replicated
since it fits the second entry, an Exclude entry.
Table 8. Sample of data group object entries, arranged in order from most to least specific
Entry Source Library Object Type Object Name Attribute Process Type
1 Finance *PGM *ALL *ALL *INCLD
2 Finance *DTAARA *ALL *ALL *EXCLD
3 Finance *ALL acc* *ALL *INCLD
Likewise, a transaction for data area ACCOUNT1 in library FINANCE would not be
replicated. Although the transaction fits both the second and third entries shown in
Table 8, the second entry determines whether to replicate because it provides a more
significant match in the second criterion checked (object type). The second entry
provides an exact match for the library name, an exact match for the object type, and
an object name match to *ALL.
In order for MIMIX to process the data area ACCOUNT1, an additional data group
object entry with process type *INCLD could be added for object type of *DTAARA
with an exact name of ACCOUNT1 or a generic name ACC*.
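Such an entry might be added as follows; the ADDDGOBJE command name and DGDFN keyword are assumptions, and the data group name is illustrative:
    ADDDGOBJE DGDFN(KEYAPP) LIB1(FINANCE) OBJ1(ACCOUNT1) OBJTYPE(*DTAARA) PRCTYPE(*INCLD)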
Table 9. Data group object entry parameter values for spooled file replication
It is important to consider which spooled files must be replicated and which should
not. Some output queues contain a large number of non-critical spooled files and
probably should not be replicated. Most likely, you want to limit the spooled files that
you replicate to mission-critical information. It may be useful to direct important
spooled files that should be replicated to specific output queues instead of defining a
large number of output queues for replication.
When an output queue is selected for replication and the data group object entry
specifies *YES for Replicate spooled files, MIMIX ensures that the values *SPLFDTA
and *PRTDTA are included in the system value for the security auditing level
(QAUDLVL). This causes the system to generate spooled file (T-SF) entries in the
system journal. When a spooled file is created, moved, deleted, or its attributes are
changed, the resulting entries in the system journal are processed by a MIMIX object
send job and are replicated.
*HLD - All replicated spooled files are put on hold on the target system regardless
of their status on the source system.
*HLDONSAV - All replicated spooled files that have a saved status on the source
system will be put on hold on the target system. Spooled files on the source
system which have other status values will have the same status on the target
system.
This parameter can be helpful if your environment includes programs which
automatically process spooled files on the target system. For example, if you have a
program that automatically prints spooled files, you may want to use one of these
values to control what is printed after replication when printer writers are active.
If you move a spooled file between output queues which have different configured
values for the SPLFOPT parameter, consider the following:
• Spooled files moved from an output queue configured with SPLFOPT(*NONE) to
an output queue configured with SPLFOPT(*HLD) are placed in a held state on
the target system.
• Spooled files moved from an output queue configured with SPLFOPT(*HLD) to an
output queue configured with SPLFOPT(*NONE) or SPLFOPT(*HLDONSAV)
remain in a held state on the target system until you take action to release them.
For example, Table 10 shows the data group object entries required to replicate user
profiles beginning with the letter A and maintain identical private authorities on
associated message queues. In this example, the user profile ABC and its associated
message queue are excluded from replication.
Table 10. Sample data group object entries for maintaining private authorities of message
queues associated with user profiles
Entry Source Library Object Type Object Name Process Type
1 QSYS *USRPRF A* *INCLD
2 QUSRSYS *MSGQ A* *INCLD
3 QSYS *USRPRF ABC *EXCLD
4 QUSRSYS *MSGQ ABC *EXCLD
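As a sketch using the same assumed command name and DGDFN keyword (the data group name KEYAPP is illustrative), the four entries in Table 10 might be created as follows:
    ADDDGOBJE DGDFN(KEYAPP) LIB1(QSYS) OBJ1(A*) OBJTYPE(*USRPRF) PRCTYPE(*INCLD)
    ADDDGOBJE DGDFN(KEYAPP) LIB1(QUSRSYS) OBJ1(A*) OBJTYPE(*MSGQ) PRCTYPE(*INCLD)
    ADDDGOBJE DGDFN(KEYAPP) LIB1(QSYS) OBJ1(ABC) OBJTYPE(*USRPRF) PRCTYPE(*EXCLD)
    ADDDGOBJE DGDFN(KEYAPP) LIB1(QUSRSYS) OBJ1(ABC) OBJTYPE(*MSGQ) PRCTYPE(*EXCLD)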
Identifying logical and physical files for replication
MIMIX supports multiple ways of replicating *FILE objects with extended attributes of
LF, PF-DTA, PF38-DTA, PF-SRC, PF38-SRC. MIMIX configuration data determines
the replication method used for these logical and physical files. The following
configurations are possible:
• MIMIX Dynamic Apply - MIMIX Dynamic Apply is strongly recommended. In this
configuration, logical files and physical files (source and data) are replicated
primarily through the user (database) journal. This configuration is the most
efficient way to replicate LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC files. In
this configuration, files are identified by data group object entries and file entries.
• Legacy cooperative processing - Legacy cooperative processing supports only
data files (PF-DTA and PF38-DTA). It does not support source physical files or
logical files. In legacy cooperative processing, record data and member data
operations are replicated through user journal processes, while all other file
transactions such as creates, moves, renames, and deletes are replicated
through system journal processes. The database processes can use either
remote journaling or MIMIX source-send processes, making legacy cooperative
processing the recommended choice for physical data files when the remote
journaling environment required by MIMIX Dynamic Apply is not possible. In this
configuration, files are identified by data group object entries and file entries.
• User journal (database) only configurations - Environments that do not meet
MIMIX Dynamic Apply requirements but which have data group definitions that
specify TYPE(*DB) can only replicate data changes to physical files. These
configurations may not be able to replicate other operations such as creates,
restores, moves, renames, and some copy operations. In this configuration, files
are identified by data group file entries.
• System journal (object) only configurations - Data group definitions which
specify TYPE(*OBJ) are less efficient at processing logical and physical files. The
entire member is updated with each replicated transaction. Members must be
closed in order for replication to occur. In this configuration, files are identified by
data group object entries.
You should be aware of common characteristics of replicating library-based objects,
such as when the configured object auditing value is used and how MIMIX interprets
data group entries to identify objects eligible for replication. For this information, see
“Configured object auditing value for data group entries” on page 98 and “How MIMIX
uses object entries to evaluate journal entries for replication” on page 101.
Some advanced techniques may require specific configurations. See “Configuring
advanced replication techniques” on page 383 for additional information.
For detailed procedures, see “Creating data group object entries” on page 271.
In a MIMIX Dynamic Apply configuration, logical and physical files are processed
primarily from the user journal.
Cooperative journal - The value specified for the Cooperative journal (COOPJRN)
parameter in the data group definition is critical to determining how files are
cooperatively processed. When creating a new data group, you can explicitly specify
a value or you can allow MIMIX to automatically change the default value (*DFT) to
either *USRJRN or *SYSJRN based on whether operating system and configuration
requirements for MIMIX Dynamic Apply are met. When requirements are met, MIMIX
changes the value *DFT to *USRJRN. When the MIMIX Dynamic Apply requirements
are not met, MIMIX changes *DFT to *SYSJRN.
Note: Data groups set to *SYSJRN will retain the value until you take action as
described in “Converting to MIMIX Dynamic Apply” on page 148.
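For example, a hedged sketch of explicitly setting the cooperative journal for an existing data group; the data group name is a placeholder, and the requirements described in “Converting to MIMIX Dynamic Apply” on page 148 still apply:
CHGDGDFN DGDFN(MYDG SYS1 SYS2) COOPJRN(*USRJRN)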
When a data group definition meets the requirements for MIMIX Dynamic Apply, any
logical files and physical (source and data) files properly identified for cooperative
processing will be processed via MIMIX Dynamic Apply unless a known restriction
prevents it.
When a data group definition does not meet the requirements for MIMIX Dynamic
Apply but still meets legacy cooperative processing requirements, any PF-DTA or
PF38-DTA files properly configured for cooperative processing will be replicated using
legacy cooperative processing. All other types of files are processed using system
journal replication.
• Physical files with referential constraints require a field in another physical file to
be valid. All physical files in a referential constraint structure must be in the same
database apply session. See “Requirements and limitations of MIMIX Dynamic
Apply” on page 111 and “Requirements and limitations of legacy cooperative
processing” on page 112 for additional information. For more information about
load balancing apply sessions, see “Database apply session balancing” on
page 90.
Commitment control - This database technique allows multiple updates to one or
more files to be considered a single transaction. When used, commitment control
maintains database integrity by not exposing a part of a database transaction until the
whole transaction completes. This ensures that there are no partial updates when the
process is interrupted prior to the completion of the transaction. This technique is also
useful in the event that a partially updated transaction must be removed, or rolled
back, from the files or when updates identified as erroneous need to be removed.
MIMIX provides two modes of processing transactions that are part of a commit cycle.
The default mode, delayed commit, processes the transactions but does not apply
them until the open commit cycle is complete. You can also use immediate mode,
where the transactions are applied immediately, before the commit cycle completes.
Changing commit mode is considered an advanced technique. The benefits and
limitations of each mode are described in “Immediately applying committed
transactions” on page 367.
If your application dynamically creates database files that are subsequently used in a
commitment control environment, use MIMIX Dynamic Apply for replication.
Without MIMIX Dynamic Apply, replication of the create operation may fail if a commit
cycle is open when MIMIX tries to save the file. The save operation will be delayed
and may fail if the file being saved has uncommitted transactions.
When MIMIX retrieves and segments the data for an incomplete journal entry, two or
more entries with duplicate journal sequence numbers and journal codes and types
will be provided to the user exit program. Programs need to correctly handle these
duplicate entries, which represent the single, original journal entry.
You should also be aware of the following restrictions:
• When using the Compare File Data (CMPFILDTA) command to compare and
repair files with LOBs, you must specify a data group when you specify a value
other than *NONE for Repair on system (REPAIR). See the example following this
list.
• Copy Active File (CPYACTF) and Reorganize Active File (RGZACTF) do not work
against database files with LOB fields.
• There is no collision detection for LOB data. Most collision detection classes
compare the journal entries with the content of the record on the target system.
• Journaled changes cannot be removed for files with LOBs that are replicated by a
data group that does not use remote journaling (RJLNK(*NO)). In this scenario,
the F-RC entry generated by the IBM command Remove Journaled Changes
(RMVJRNCHG) cannot be applied on the target system.
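The following sketch shows the shape of such a compare-and-repair request. The data group and file names are placeholders, the keywords other than DGDFN and REPAIR are assumptions that may differ at your MIMIX level, and the REPAIR value shown is only illustrative of specifying a value other than *NONE:
CMPFILDTA DGDFN(MYDG SYS1 SYS2) FILE((APPLIB/LOBFILE)) REPAIR(*TGT)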
Table 11. Key configuration values required for MIMIX Dynamic Apply and legacy cooperative processing
Corresponding data group file entries - Both MIMIX Dynamic Apply and legacy
cooperative processing require that existing files identified by a data group object
entry which specifies *YES for the Cooperate with DB (COOPDB) parameter
also be identified by data group file entries.
When a file is identified by both a data group object entry and a data group file entry,
the following are also required:
• The object entry must enable the cooperative processing of files by specifying
COOPDB(*YES) and COOPTYPE(*FILE).
• If name mapping is used between systems, the data group object entry and file
entry must have the same name mapping defined.
• If the data group object entry and file entry specify different values for the File and
tracking ent. opts (FEOPT) parameter, the values specified in the data group file
entry take precedence.
• Files defined by data group file entries must have journaling started and must be
synchronized. If journaling is not started, MIMIX cannot replicate activity for the
file.
Typically, data group object entries are created during initial configuration and are
then used as the source for loading the data group file entries. The #DGFE audit can
be used to determine whether corresponding data group file entries exist for the files
identified by data group object entries.
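The audit can be run on demand from the source system. For example:
WRKAUD RULE(#DGFE)
Then type 9 (Run rule) next to the data group you want to check and press F4 (Prompt), as described in the configuration checklists in chapter 5.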
• If using referential constraints with *CASCADE or *SETNULL actions, you must
specify *YES for the Journal on target (JRNTGT) parameter in the data group
definition.
• Physical files with referential constraints require a field in another physical file to
be valid. All physical files in a referential constraint structure must be in the same
database apply session. If a particular preferred apply session has been specified
in file entry options (FEOPT), MIMIX may ignore the specification in order to
satisfy this restriction.
Identifying data areas and data queues for replication
Data areas and data queues configured for user journal replication are also identified
by object tracking entries.
Table 12. Critical configuration parameters for replicating *DTAARA and *DTAQ objects
from a user journal
Additionally, if any of the following apply, see “Planning for journaled IFS objects, data
areas, and data queues” on page 87 for additional details:
• Converting existing configurations - When converting an existing data group to
use or add advanced journaling, you must consider whether journals should be
shared and whether data area or data queue objects should be replicated in a
data group that also replicates database files.
• Serialized transactions - If you need to serialize transactions for database files
and data area or data queue objects replicated from a user journal, you may need
to adjust the configuration for the replicated files.
• Apply session load balancing - One database apply session, session A, is used
for all data area and data queue objects that are replicated from a user journal. Other
replication activity can use this apply session, and may cause it to become
overloaded. You may need to adjust the configuration accordingly.
• User exit programs - If you use user exit programs that process user journal
entries, you may need to modify your programs.
Be aware of the following restrictions when replicating data areas and data queues
using MIMIX user journal replication processes:
• MIMIX does not support before-images for data updates to data areas, and
cannot perform data integrity checks on the target system to ensure that data
being replaced on the target system is an exact match to the data replaced on the
source system. Furthermore, MIMIX does not provide a mechanism to prevent
users or applications from updating replicated data areas on the target system
accidentally. To guarantee the data integrity of replicated data areas between the
source and target systems, you should run audits on a regular basis.
• The apply of data area and data queue objects is restricted to a single database
apply job (DBAPYA). If a data group has too much replication activity, this job may
fall behind in the processing of journal entries. If this occurs, you should load-level
the apply sessions by moving some or all of the database files to another
database apply job.
• Pre-existing data areas and data queues to be selected for replication must have
journaling started on both the source and target systems before the data group is
started.
• Replication of Distributed Data Management (DDM) data areas and data
queues is not supported. If you need to replicate DDM data areas and data
queues, use standard system journal replication methods.
• The subset of E and Q journal code entry types supported for user journal
replication is listed in “Journal codes and entry types for journaled data areas
and data queues” on page 733.
Identifying IFS objects for replication
MIMIX uses data group IFS entries to determine whether to process transactions for
objects in the integrated file system (IFS), and what replication process is used. IFS
entries can be configured so that the identified objects can be replicated from journal
entries recorded in the system journal (default) or in a user journal (optional).
The most efficient way to convert IFS entries for an enabled data group from system
journal (object) replication to user journal (database) replication is using the Convert
Data Group IFS Entries (CVTDGIFSE) command. For more information, see
“Checklist: Converting IFS entries to user journaling using the CVTDGIFSE
command” on page 154.
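For example, a minimal sketch assuming a data group named MYDG between systems SYS1 and SYS2; the command supports additional parameters that are not shown here:
CVTDGIFSE DGDFN(MYDG SYS1 SYS2)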
One of the most important decisions in planning for MIMIX is determining which IFS
objects you need to replicate. Most likely, you want to limit the IFS objects you
replicate to mission-critical objects and the directories that contain them.
User journal replication, also called advanced journaling, is well suited to the dynamic
environments of IFS objects. While user journal replication has significant
advantages, you must decide whether it is appropriate for your environment. For more
information, see “Planning for journaled IFS objects, data areas, and data queues” on
page 87.
For detailed procedures, see “Creating data group IFS entries” on page 284.
Objects configured for user journal replication may have create, restore, delete,
move, and rename operations. Differences in implementation details are described in
“Processing variations for common operations” on page 129.
Table 13. IFS file systems that are not supported by MIMIX
Journaling is not supported for files in network server storage spaces (NWSS), which
are used as virtual disks by IXS and IXA technology. Therefore, IFS objects
configured to be replicated from a user journal must be in the Root (‘/’) or QOpenSys
file systems.
Refer to the IBM book OS/400 Integrated File System Introduction for more
information about IFS.
Replication will not alter the character case of objects that already exist on the target
system (unless the object is deleted and recreated). In the root file system, /AbCd and
/ABCD are equivalent names. If /ABCD exists as such on the target system, changes
to /AbCd will be replicated to /ABCD, but the object name will not be changed to
/AbCd on the target system.
When character case is not a concern (root file system), MIMIX may present path
names as all upper case or all lower case. For example, the WRKDGACTE display
shows all lower case, while the WRKDGIFSE display shows all upper case. Names
can be entered in either case. For example, subsetting WRKDGACTE by /AbCd and
/ABCD will produce the same result.
When character case does matter (QOpenSys file system), MIMIX presents path
names in the appropriate case. For example, the WRKDGACTE display and the
WRKDGIFSE display would show /QOpenSys/AbCd, if that is the actual object path.
Names must be entered in the appropriate character case. For example, subsetting
the WRKDGACTE display by /QOpenSys/ABCD will not find /QOpenSys/AbCd.
Table 14. Example of a data group IFS entry with implicit and explicit objects
• You can specify an object auditing value within the configuration. For details, see
“Configured object auditing value for data group entries” on page 98.
Additional requirements for user journal replication - The following additional
requirements must be met before IFS objects identified by data group IFS entries can
be replicated with user journal processes.
• IFS tracking entries must exist for the objects identified by properly configured IFS
entries. Typically these are created automatically when the data group is started.
• Journaling must be started on both the source and target systems for the objects
identified by IFS tracking entries.
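The MIMIX procedures referenced in this topic start journaling for you. As a manual illustration only, the IBM i Start Journal (STRJRN) command journals a single IFS object; the object path and journal name below are placeholders:
STRJRN OBJ(('/TEST/stmf1')) JRN('/QSYS.LIB/MXJRNLIB.LIB/DGJRN.JRN')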
Table 15. Critical configuration parameters for replicating IFS objects from a user journal
Additionally, see “Planning for journaled IFS objects, data areas, and data queues” on
page 87 for additional details if any of the following apply:
• Converting existing configurations - When converting an existing data group to
use or add advanced journaling, you must consider whether journals should be
shared and whether IFS objects should be replicated in a data group that also
replicates database files.
• Serialized transactions - If you need to serialize transactions for database files
and IFS objects replicated from a user journal, you may need to adjust the
configuration for the replicated files.
• Apply session load balancing - One database apply session, session A, is used
for all IFS objects that are replicated from a user journal. Other replication activity
can use this apply session, and may cause it to become overloaded. You may
need to adjust the configuration accordingly.
• User exit programs - If you use user exit programs that process user journal
entries, you may need to modify your programs.
When considering replicating IFS objects using MIMIX user journal replication
processes, be aware of the following restrictions:
• The apply of IFS objects is restricted to a single database apply job (DBAPYA). If
a data group has too much replication activity, this job may fall behind in the
processing of journal entries. If this occurs, you should load-level the apply
sessions by moving some or all of the database files to another database apply
job.
• The ability to prevent unauthorized updates from occurring on the target system
by configuring the “Lock member during apply” file entry option (FEOPT) is not
supported when user journal replication is configured.
• The ability to use the Remove Journaled Changes (RMVJRNCHG) command for
removing journaled changes for IFS tracking entries is not supported.
• It is recommended that option 14 (Remove related) on the Work with Data Group
Activity (WRKDGACT) display not be used for failed activity entries representing
actions against cooperatively processed IFS objects. Because this option does
not remove the associated tracking entries, orphan tracking entries can
accumulate on the system.
• It is recommended that option 4 (Remove) on the Work with DG IFS Trk. Entries
display only be used under the guidance of your Certified MIMIX Consultant as
replication will be affected.
• When moving or renaming a directory within the namespace of IFS objects
configured for user journal replication, MIMIX will move or rename the directory
without regard for include, exclude, and name mapping characteristics of items
beneath the directory being moved or renamed. This applies to both the 'from' and
'to' path names. If non-corresponding include, exclude, or name mapping
configuration entries exist for the 'from' and 'to' locations, the result may be
excess, missing, or incorrectly named objects on the target. Any differences will
be detected during the next full IFS attribute audit.
• Most B journal code entry types are supported for user journal replication and are
listed in “Journal codes and entry types for journaled IFS objects” on page 732.
Identifying DLOs for replication
MIMIX uses data group DLO entries to determine whether to process system journal
transactions for document library objects (DLOs). Each DLO entry for a data group
includes a folder path, document name, owner, an object auditing level, and an
include or exclude indicator. In addition to specific names, MIMIX supports generic
names for DLOs. In a data group DLO entry, the folder path and document can be
generic or *ALL.
When you create data group DLO entries, you can specify an object auditing value
within the configuration. The configured object auditing value affects how MIMIX
handles changes to attributes of DLOs. For detailed information, see “Configured
object auditing value for data group entries” on page 98.
For detailed procedures, see “Creating data group DLO entries” on page 297.
How MIMIX uses DLO entries to evaluate journal entries for replication
How items are specified within a DLO entry determines whether MIMIX selects or
them from processing. This information can help you understand what is included or
omitted.
When determining whether to process a journal entry for a DLO, MIMIX looks for a
match between the DLO information in the journal entry and one of the data group
DLO entries. The folder path is the most significant search element, followed by the
document name, then the owner. The most significant match found (if any) is checked
to determine whether to process the entry.
An exact or generic folder path name in a data group DLO entry applies to folder
paths that match the entry as well as to any unnamed child folders of that path which
are not covered by a more explicit entry. For example, a data group DLO entry with a
folder path of “ACCOUNT” would also apply to a transaction for a document in folder
path ACCOUNT/JANUARY. If a second data group DLO entry with a folder path of
“ACCOUNT/J*” were added, it would take precedence because it is more specific.
For a folder path with multiple elements (for example, A/B/C/D), the exact checks and
generic checks against data group DLO entries are performed on the path. If no
match is found, the lowest path element is removed and the process is repeated. For
example, A/B/C/D is reduced to A/B/C and is rechecked. This process continues until
a match is found or until all elements of the path have been removed. If there is still no
match, then checks for folder path *ALL are performed.
• Create or change operations for an implicitly defined parent object are replicated.
• Move/rename operations of an implicitly defined parent object that is within the
configured namespace or that would cause the parent object to be moved into the
configured namespace are replicated.
• Move/rename operations that would cause an implicitly defined parent object to
no longer be part of (moved out of) the configured namespace are not replicated.
• Delete operations for an implicitly defined parent object are not replicated.
Table 16. Example of a data group DLO entry with implicit and explicit objects
Document example - Table 18 illustrates some sample data group DLO entries. For
example, a transaction for any document in a folder named FINANCE would be
blocked from replication because it matches entry 6. A transaction for document
ACCOUNTS in FINANCE1 owned by JONESB would be replicated because it
matches entry 4. If SMITHA owned ACCOUNTS in FINANCE1, the transaction would
be blocked by entry 3. Likewise, documents LEDGER.JUL and LEDGER.AUG in
FINANCE1 would be blocked by entry 2 and document PAYROLL in FINANCE1
would be blocked by entry 1. A transaction for any document in FINANCE2 would be
blocked by entry 6. However, transactions for documents in FINANCE2/Q1, or in a
child folder of that path, such as FINANCE2/Q1/FEB, would be replicated because of
entry 5.
Table 18. Sample data group DLO entries, arranged in order from most to least specific
Entry Folder Path Document Owner Process Type
1 FINANCE1 PAYROLL *ALL *EXCLD
2 FINANCE1 LEDGER* *ALL *EXCLD
3 FINANCE1 *ALL SMITHA *EXCLD
4 FINANCE1 *ALL *ALL *INCLD
5 FINANCE2/Q1 *ALL *ALL *INCLD
6 FIN* *ALL *ALL *EXCLD
path, document value of *ALL, and an owner of *ALL, and the only include entry that
would cause it to be replicated specifies folder path *ALL. The exception also affects
all child folders in the ACCOUNT folder path. Note that the exception holds true even
if ACCOUNT is owned by user profile JONESB (entry 4) because the more specific
folder name match takes precedence.
Processing of newly created files and objects
Your production environment is dynamic. New objects continue to be created after
MIMIX is configured and running. When properly configured, MIMIX automatically
recognizes entries in the user journal that identify new create operations and
replicates any that are eligible for replication. Optionally, MIMIX can also notify you of
newly created objects not eligible for replication so that you can choose whether to
add them to the configuration.
Configurations that replicate files, data areas, data queues, or IFS objects from user
journal entries require journaling to be started on the objects before replication can
occur. When a configuration enables journaling to be implicitly started on new objects,
a newly created object is already journaled. When the journaled object falls within the
group of objects identified for replication by a data group, MIMIX replicates the create
operation. Processing variations exist based on how the data group and the data
group entry with the most specific match to the object are configured. These
variations are described in the following subtopics.
The MMNFYNEWE monitor is a shipped journal monitor that watches the security
audit journal (QAUDJRN) for newly created libraries, folders, or directories that are
not already included or excluded for replication by a data group and sends warning
notifications when its conditions are met. This monitor is shipped disabled. User
action is required to enable this monitor on the source system within your MIMIX
environment. Once enabled, the monitor will automatically start with the master
monitor. For more information about the conditions that are checked, see topic
‘Notifications for newly created objects’ in the MIMIX Operations book.
For more information about requirements and restrictions for implicit starting of
journaling as well as examples of how MIMIX determines whether to replicate a new
object, see “What objects need to be journaled” on page 343.
to a user journal. New MIMIX installations that are configured for MIMIX Dynamic
Apply of files automatically have this behavior.
For requirements for implicitly starting journaling on new objects, see “What objects
need to be journaled” on page 343.
If the object is journaled to the user journal, MIMIX user journal replication processes
can fully replicate the create operation. The user journal entries contain all the
information necessary for replication without needing to retrieve information from the
object on the source system. MIMIX creates a tracking entry for the newly created
object and an activity entry representing the T-CO (create) journal entry for data areas
and data queues.
If the object is not journaled to the user journal, then the create of the object is
processed with system journal processing and an activity entry is created which
represents the T-CO journal entry.
If the values specified in the data group entry that identifies the object as eligible for
replication do not allow the object type to be cooperatively processed, the create of
the object and subsequent operations are replicated through system journal
processes.
When MIMIX replicates a create operation through the user journal, the create
timestamp (*CRTTSP) attribute may differ between the source and target systems.
Processing variations for common operations
1. If the source system object is not defined to MIMIX or if it is defined by an Exclude entry,
it is not guaranteed that an object with the same name exists on the backup system or
that it is really the same object as on the source system. To ensure the integrity of the
target (backup) system, a copy of the source object must be brought over from the
source system.
2. If the target object is not defined to MIMIX or if it is defined by an Exclude entry, there is
no guarantee that the target library exists on the target system. Further, the customer is
assumed not to care whether the target object is replicated, since it is not defined with an
Include entry, so deleting the object on the target is the most straightforward approach.
Move/rename operations - user journaled data areas, data queues, IFS
objects
IFS, data area, and data queue objects replicated by user journal replication
processes can be moved or renamed while maintaining the integrity of the data. If the
new location or new name on the source system remains within the set of objects
identified as eligible for replication, MIMIX will perform the move or rename operation
on the object on the target system.
When a move or rename operation starts with or results in an object that is not within
the name space for user journal replication, MIMIX may need to perform additional
operations in order to replicate the operation. MIMIX may use a create or delete
operation and may need to add or remove tracking entries.
Each row in Table 21 summarizes a move/rename scenario and identifies the action
taken by MIMIX.
Table 21. MIMIX actions when processing moves or renames of objects when user journal replication processes are involved
• From: identified for replication with user journal processing. To: within name space of objects to be replicated with user journal processing. MIMIX action: moves or renames the object on the target system and renames the associated tracking entry. See example 1.
• From: identified for replication with user journal processing. To: not identified for replication. MIMIX action: deletes the target object and deletes the associated tracking entry. The object will no longer be replicated. See example 3.
• From: identified for replication with user journal processing. To: within name space of objects to be replicated with system journal processing. MIMIX action: moves or renames the object using system journal processes and removes the associated tracking entry. See example 4.
• From: identified for replication with system journal processing. To: within name space of objects to be replicated with user journal processing. MIMIX action: creates a tracking entry for the object using the new name or location and moves or renames the object using user journal processes. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication and synchronizes those objects. See example 5.
• From: not identified for replication. To: within name space of objects to be replicated with user journal processing. MIMIX action: creates a tracking entry for the object using the new name or location. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication. Synchronizes all of the objects identified by these new tracking entries. See example 6.
The following examples use IFS objects and directories to illustrate the MIMIX
operations in move/rename scenarios that involve user journal replication (advanced
journaling). The MIMIX behavior described is the same as that for data areas and
data queues that are within the configured name space for advanced journaling. Table
22 identifies the initial set of source system objects, data group IFS entries, and IFS
tracking entries before the move/rename operation occurs.
Table 22. Initial data group IFS entries, IFS tracking entries, and source IFS objects for
examples
Table 23. Results of move/rename operations within name space for advanced journaling
• Resulting target IFS object /TEST/stmf2; resulting IFS tracking entry /TEST/stmf2
• Resulting target IFS object /TEST/dir2/doc1; resulting IFS tracking entries /TEST/dir2 and /TEST/dir2/doc1
Table 24. Results of move/rename operations from advanced journaling to system journal name space
• Resulting target IFS object /TEST/notajstmf1; IFS tracking entry removed
• Resulting target IFS object /TEST/notajdir1/doc1; IFS tracking entry removed
objects identified by these tracking entries are individually synchronized from the
source to the target system. Table 25 illustrates the results on the target system.
Table 25. Results of move/rename operations from system journal to advanced journaling name space
• Resulting target IFS object /TEST/stmf1; resulting IFS tracking entry /TEST/stmf1
• Resulting target IFS object /TEST/dir1/doc1; resulting IFS tracking entries /TEST/dir1 and /TEST/dir1/doc1
Table 26. Results of move/rename operations from outside to within advanced journaling name space
• Resulting target IFS object /TEST/stmf1; resulting IFS tracking entry /TEST/stmf1
• Resulting target IFS object /TEST/dir1/doc1; resulting IFS tracking entries /TEST/dir1 and /TEST/dir1/doc1
removed from the replication processes. If the dynamic update option is not used,
the data group changes are not recognized until all data group processes are
ended and restarted.
• MIMIX system journal replication processes delete the file on the target system.
Delete operations - user journaled data areas, data queues, IFS objects
When a T-DO (delete) journal entry for a user journaled IFS, data area, or data queue
object is encountered in the system journal, MIMIX system journal replication
processes generate an activity entry representing the delete operation and handle
the delete of the object from the target system. The user journal replication
processes remove the corresponding tracking entry.
Restore operations - user journaled data areas, data queues, IFS objects
When an IFS, data area, or data queue object is restored, any pre-existing object is
replaced by a save from the source system. With user journal replication, restores of
IFS, data area, and data queue objects on the source system are supported through
cooperative processing between MIMIX system journal and user journal replication
processes.
Provided the object was journaled when it was saved, a restored IFS, data area, or
data queue object is also journaled.
During cooperative processing, system journal replication processes generate an
activity entry representing the T-OR (restore) journal entry from the system journal
and perform a save and restore operation on the IFS, data area, or data queue object.
Meanwhile, user journal replication processes handle the management of the
corresponding IFS or object tracking entry. MIMIX may also start journaling, or end
and restart journaling on the object so that the journaling characteristics of the IFS,
data area, or data queue object match the data group definition.
CHAPTER 5 Configuration checklists
MIMIX can be configured in a variety of ways to support your replication needs. Each
configuration requires a combination of definitions and data group entries. Definitions
identify systems, journals, communications, and data groups that make up the
replication environment. Data group entries identify what to replicate and the
replication option to be used. For available options, see “Replication choices by object
type” on page 97. Also, advanced techniques, such as keyed replication, have
additional configuration requirements. For additional information see “Configuring
advanced replication techniques” on page 383.
New installations: Before you start configuring MIMIX, system-level configuration
for communications (lines, controllers, IP interfaces) must already exist between the
systems that you plan to include in the MIMIX installation. Choose one of the following
checklists to configure a new installation of MIMIX.
• “Checklist: New remote journal (preferred) configuration” on page 137 uses
shipped default values to create a new installation. Unless you explicitly configure
them otherwise, new data groups will use the IBM i remote journal function as part
of user journal replication processes.
• “Checklist: New MIMIX source-send configuration” on page 141 configures a new
installation and is appropriate when your environment cannot use remote
journaling. New data groups will use MIMIX source-send processes in user journal
replication.
• To configure a new installation that is to use the integrated MIMIX support for IBM
WebSphere MQ (MIMIX for MQ), refer to the MIMIX for IBM WebSphere MQ
book.
Upgrades and conversions: You can use any of the following topics, as appropriate,
to change a configuration:
• “Checklist: converting to application groups” on page 145 provides the instructions
needed to change your environment to implement application groups. Application
groups are best practice and provide the ability to group and control multiple data
groups as one entity.
• “Checklist: Converting to remote journaling” on page 146 changes an existing
data group to use remote journaling within user journal replication processes.
• “Converting to MIMIX Dynamic Apply” on page 148 provides checklists for two
methods of changing the configuration of an existing data group to use MIMIX
Dynamic Apply for logical and physical file replication. Data groups that existed
prior to installing version 5 must use this information in order to use MIMIX
Dynamic Apply.
• “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on
page 151 changes the configuration of an existing data group to use user journal
replication processes for these objects.
• To add integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ) to an
existing installation, use topic ‘Choosing the correct checklist for MIMIX for MQ’ in
the MIMIX for IBM WebSphere MQ book.
• “Checklist: Converting to legacy cooperative processing” on page 157 changes
the configuration of an existing data group so that logical and physical source files
are processed from the system journal and physical data files use legacy
cooperative processing.
Other checklists: The following configuration checklist employs less frequently used
configuration tools and is not included in this chapter.
• Use “Checklist: copy configuration” on page 649 if you need to copy configuration
data from an existing product library into another MIMIX installation.
Checklist: New remote journal (preferred) configuration
10. Confirm that the journal definitions which have been automatically created have
the values you require. For information, see “Configuration processes that create
journal definitions” on page 200, “Tips for journal definition parameters” on
page 201, and “Journal definition considerations” on page 206.
11. Build the necessary journaling environments for the RJ links using “Building the
journaling environment” on page 221. If the data group is switchable, be sure to
build the journaling environments for both directions: source system A to target
system B (target journal @R) and source system B to target system A (target
journal @R).
Note: The use of application groups is considered best practice. Step 12 through
Step 14 create the additional configuration needed for application groups. If
you are not using application groups, skip to Step 15.
12. Create the application groups to which you will associate the data groups using
topic “Creating an application group definition” on page 326.
13. Load the data resource group entries and nodes that define the association
between application groups and data groups using “Loading data resource groups
into an application group” on page 327.
14. Identify what node (system) will be the primary node for each application group,
using “Specifying the primary node for the application group” on page 327.
15. Use Table 27 to create data group entries for this configuration. This configuration
requires object entries and file entries for LF and PF files. For other object types or
classes, any replication options identified in planning topic “Replication choices by
object type” on page 97 are supported.
Table 27. How to configure data group entries for the remote journal (preferred) configuration.

Library-based objects:
1. Create object entries using “Creating data group object entries” on page 271.
2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using “Loading file entries from a data group’s object entries” on page 276.
Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF data files to ensure that legacy cooperative processing can be used.
3. After creating object entries, load object tracking entries for any *DTAARA and *DTAQ objects to be replicated from a user journal. Use “Loading object tracking entries” on page 287.
For more information, see “Identifying library-based objects for replication” on page 100, “Identifying logical and physical files for replication” on page 106, and “Identifying data areas and data queues for replication” on page 113.

IFS objects:
1. Create IFS entries using “Creating data group IFS entries” on page 284.
2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use “Loading IFS tracking entries” on page 286.
For more information, see “Identifying IFS objects for replication” on page 116.

DLOs:
Create DLO entries using “Creating data group DLO entries” on page 297.
For more information, see “Identifying DLOs for replication” on page 122.
16. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
“Resolving auditing problems” on page 679 and “Interpreting results for
configuration data - #DGFE audit” on page 687.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
17. Optionally, you can manually deploy data group configuration within MIMIX.
Although MIMIX will automatically deploy configuration information when data
groups are started, manually deploying is recommended for new data groups.
Manual deploying allows you the opportunity to validate the list of objects to be
replicated and the initial start of the data groups will be faster. Use the procedure
“Manually deploying configuration changes” on page 307.
18. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
“Setting data group auditing values manually” on page 309. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
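For example, a minimal sketch assuming a data group named MYDG; other parameters are left at their shipped defaults:
SETDGAUD DGDFN(MYDG SYS1 SYS2)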
19. Start journaling using the following procedures as needed for your configuration.
Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands
to start journaling.
• For user journal replication, use “Journaling for physical files” on page 347 to
start journaling on both source and target systems.
• For IFS objects, configured for user journal replication, use “Journaling for IFS
objects” on page 350.
• For data areas or data queues configured for user journal replication, use
“Journaling for data areas and data queues” on page 354.
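For example, a sketch of starting journaling for the file entries of a data group on both systems. The command name should be verified against the procedures referenced above, and the data group name is a placeholder:
STRJRNFE DGDFN(MYDG SYS1 SYS2)
If the objects do not yet exist on the target system, add JRNSYS(*SRC) as described in the note above.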
20. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 508
identifies options available for synchronizing and identifies how to establish a
synchronization point that identifies the journal location that will be used later to
initially start replication.
21. Confirm that the systems are synchronized by checking that the libraries, folders,
and directories contain the expected objects on both systems.
22. Start the data group using “Starting data groups for the first time” on page 315.
23. For configurations that use application groups, after you have started data groups
as described in Step 22, start the application groups using “Starting an application
group” on page 331.
24. Customize the step programs that end and start user applications before and
following a switch using “Customizing user application handling for switching” on
page 561.
25. Verify the configuration. Topic “Verifying the initial synchronization” on page 512
identifies the additional aspects of your configuration that are necessary for
successful replication.
Checklist: New MIMIX source-send configuration
9. If the journaling environment does not exist, use topic “Building the journaling
environment” on page 221 to create the journaling environment.
Note: The use of application groups is considered best practice. Step 10 through
Step 12 create the additional configuration needed for application groups. If
you are not using application groups, skip to Step 13.
10. Create the application groups to which you will associate the data groups using
topic “Creating an application group definition” on page 326.
11. Load the data resource group entries and nodes that define the association
between application groups and data groups using “Loading data resource groups
into an application group” on page 327.
12. Identify what node (system) will be the primary node for each application group,
using “Specifying the primary node for the application group” on page 327.
13. Use Table 28 to create data group entries for this configuration. This configuration
requires object entries and file entries for legacy cooperative processing of PF
data files. For other object types or classes, any replication options identified in
planning topic “Replication choices by object type” on page 97 are supported.
Table 28. How to configure data group entries for a new MIMIX source-send configuration.

Library-based objects:
1. Create object entries using “Creating data group object entries” on page 271.
2. After creating object entries, load file entries for PF (data) *FILE objects using “Loading file entries from a data group’s object entries” on page 276.
3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ objects to be replicated from a user journal. Use “Loading object tracking entries” on page 287.
For more information, see “Identifying library-based objects for replication” on page 100, “Identifying logical and physical files for replication” on page 106, and “Identifying data areas and data queues for replication” on page 113.

IFS objects:
1. Create IFS entries using “Creating data group IFS entries” on page 284.
2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use “Loading IFS tracking entries” on page 286.
For more information, see “Identifying IFS objects for replication” on page 116.

DLOs:
Create DLO entries using “Creating data group DLO entries” on page 297.
For more information, see “Identifying DLOs for replication” on page 122.
14. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
“Resolving auditing problems” on page 679 and “Interpreting results for
configuration data - #DGFE audit” on page 687.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
15. Optionally, you can manually deploy data group configuration within MIMIX.
Although MIMIX will automatically deploy configuration information when data
groups are started, manually deploying is recommended for new data groups.
Manual deploying allows you the opportunity to validate the list of objects to be
replicated and the initial start of the data groups will be faster. Use the procedure
“Manually deploying configuration changes” on page 307.
16. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
“Setting data group auditing values manually” on page 309. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
17. Start journaling using the following procedures as needed for your configuration.
Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands
to start journaling.
• For user journal replication, use “Journaling for physical files” on page 347 to
start journaling on both source and target systems.
• For IFS objects, configured for user journal replication, use “Journaling for IFS
objects” on page 350.
• For data areas or data queues configured for user journal replication, use
“Journaling for data areas and data queues” on page 354.
18. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 508
identifies options available for synchronizing and identifies how to establish a
synchronization point that identifies the journal location that will be used later to
initially start replication.
19. Confirm that the systems are synchronized by checking that the libraries, folders,
and directories contain the expected objects on both systems.
20. Start the data group using “Starting data groups for the first time” on page 315.
21. For configurations that use application groups, after you have started data groups
as described in Step 20, start the application groups using “Starting an application
group” on page 331.
22. Customize the step programs that end and start user applications before and
following a switch using “Customizing user application handling for switching” on
page 561.
23. Verify your configuration. Topic “Verifying the initial synchronization” on page 512
identifies the additional aspects of your configuration that are necessary for
successful replication.
Checklist: converting to application groups
1. This is required in versions 7.1.05.00 and earlier. In 7.1.06.00 and higher, the DTACRG
parameter on these commands defaults to *DFT, which allows the requested command to
run when the data group belongs to a data resource group with two nodes. *DFT prevents
the requested command from running when there are three or more nodes, where it is partic-
ularly important to treat all members of an application group as one entity
Checklist: Converting to remote journaling
Use this checklist to convert an existing data group from using MIMIX source-send
processes to using MIMIX Remote Journal support for user journal replication.
Note: This checklist does not change values specified in data group entries that
affect how files are cooperatively processed or how data areas, data queues,
and IFS objects are processed. For example, files configured for legacy
processing prior to this conversion will continue to be replicated with legacy
cooperative processing.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. If you use startup programs, make any changes necessary to ensure that they will
start the TCP/IP server and the DDM server on all systems before starting
replication.
2. Do the following to ensure that you have a functional transfer definition:
a. Modify the transfer definition to identify the RDB directory entry. Use topic
“Changing a transfer definition to support remote journaling” on page 185.
b. If you have implemented DDM password validation, verify that your
environment will allow MIMIX RJ support to work properly. Use topic “Checking
the DDM password validation level” on page 188.
c. Verify the communications link using “Verifying the communications link for a
data group” on page 197.
3. If you are using the TCP protocol, ensure that the DDM TCP server is running
using topic “Starting the DDM TCP/IP server” on page 187.
4. Connect the journal definitions for the local and remote journals using “Adding a
remote journal link” on page 227. This procedure also creates the target journal
definition.
5. Build the journaling environment on each system defined by the RJ pair using
“Building the journaling environment” on page 221.
6. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *YES for the Use remote journal link prompt.
d. When you are ready to accept the changes, press Enter.
7. To make the configuration changes effective, you need to end the data group you
are converting to remote journaling and start it again as follows:
a. Perform a controlled end of the data group (ENDDG command), specifying
*ALL for Process and *CNTRLD for End process. Refer to topic “Ending all
replication in a controlled manner” in the MIMIX Operations book.
b. Start data group replication using the procedure “Starting selected data group
processes” in the MIMIX Operations book. Be sure to specify *ALL for the Start
processes prompt (PRC parameter) and *LASTPROC as the value for the
Database journal receiver and Database large sequence number prompts.
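A sketch of this end and start sequence, assuming a data group named MYDG. The keyword shown for the End process prompt is an assumption; verify it by prompting the command:
ENDDG DGDFN(MYDG SYS1 SYS2) PRC(*ALL) ENDOPT(*CNTRLD)
STRDG DGDFN(MYDG SYS1 SYS2) PRC(*ALL)
When prompted by STRDG, specify *LASTPROC for the Database journal receiver and Database large sequence number prompts.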
Converting to MIMIX Dynamic Apply
Use either procedure in this topic to change a data group configuration to use MIMIX
Dynamic Apply. In a MIMIX Dynamic Apply configuration, objects of type *FILE (LF,
PF source and data) are replicated using primarily user journal replication processes.
This configuration is the most efficient way to process these files.
• “Converting using the Convert Data Group command” on page 148 automatically
converts a data group configuration.
• “Checklist: manually converting to MIMIX Dynamic Apply” on page 149 enables
you to perform the conversion yourself.
It is recommended that you contact your Certified MIMIX Consultant for assistance
before performing this procedure.
Requirements: Before starting, consider the following:
• Any data groups set with *SYSJRN must use one of these procedures in order to
use MIMIX Dynamic Apply. Newly created data groups are automatically
configured to use MIMIX Dynamic Apply when its requirements and restrictions
are met and shipped command defaults are used.
• Any data group to be converted must already be configured to use remote
journaling.
• Any data group to be converted must have *SYSJRN specified as the value of
Cooperative journal (COOPJRN).
• A minimum level of IBM i PTFs are required on both systems. For a complete list
of required and recommended IBM PTFs, log in to Support Central and refer to
the Technical Documents page.
• The conversion must be performed from the management system. The data group
must be active when starting the conversion.
For additional information about configuration requirements and limitations of MIMIX
Dynamic Apply, see “Identifying logical and physical files for replication” on page 106.
file entries from the target system. Ensure that the value you specify (*SYS1 or
*SYS2) for the LODSYS parameter identifies the target system.
LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE)
UPDOPT(*ADD) LODSYS(value) SELECT(*NO)
For additional information about loading file entries, see “Loading file entries from
a data group’s object entries” on page 276.
12. Start journaling for all files not previously journaled. See “Starting journaling for
physical files” on page 347.
13. Start the data group specifying the command as follows:
STRDG DGDFN(name system1 system2) CLRPND(*YES)
14. Verify that data groups are synchronized by running the MIMIX audits. See
“Verifying the initial synchronization” on page 512.
Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling
“Adding or changing a data group IFS entry” on page 284. For additional
information, see “Restrictions - user journal replication of IFS objects” on
page 120.
6. Add or change data group object entries for the data areas and data queues you
want to replicate using the procedure “Adding or changing a data group object
entry” on page 272. For additional information, see “Restrictions - user journal
replication of data areas and data queues” on page 114.
Note: New data group object entries created in MIMIX version 7.0 or higher
automatically default to values that result in user journal replication of
*DTAARA and *DTAQ objects.
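For example, a data group object entry that cooperatively processes all data areas in a library might be added as follows (a sketch; the LIB1, OBJ1, and OBJTYPE keywords shown are assumptions — prompt ADDDGOBJE with F4 to confirm):
ADDDGOBJE DGDFN(name system1 system2) LIB1(library) OBJ1(*ALL) OBJTYPE(*DTAARA)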
7. Load the tracking entries associated with the data group IFS entries and data
group object entries you configured. Use the procedures in “Loading tracking
entries” on page 286.
8. Optionally, you can manually deploy data group configuration within MIMIX.
Although MIMIX will automatically deploy configuration information when data
groups are started, manually deploying is recommended for data groups with
large numbers of configured IFS objects. Manually deploying gives you the
opportunity to validate the list of objects to be replicated, and it makes the
subsequent start of the data groups faster. Use the procedure “Manually
deploying configuration changes” on page 307.
9. Start journaling using the following procedures as needed for your configuration. If
you ever plan to switch the data groups, you must start journaling on both the
source system and on the target system.
• For IFS objects, use “Starting journaling for IFS objects” on page 350
• For data areas or data queues, use “Starting journaling for data areas and data
queues” on page 354
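For example, journaling might be started with the following commands (a sketch; these assume each command accepts the data group in its DGDFN parameter — prompt each command with F4 to confirm the remaining parameters):
STRJRNIFSE DGDFN(name system1 system2)
STRJRNOBJE DGDFN(name system1 system2)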
10. Verify that journaling is started correctly. This step is important to ensure the IFS
objects, data areas and data queues are actually replicated. For IFS objects, see
“Verifying journaling for IFS objects” on page 352. For data areas and data
queues, see “Verifying journaling for data areas and data queues” on page 356.
11. If you anticipate a delay between configuring data group IFS, object, or file entries
and starting the data group, use the SETDGAUD command before synchronizing
data between systems. Doing so will ensure that replicated objects are properly
audited and that any transactions for the objects that occur between configuration
and starting the data group are replicated. Use the procedure “Setting data group
auditing values manually” on page 309.
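For example (a minimal sketch; only the DGDFN parameter is shown — prompt SETDGAUD with F4 for the remaining parameters):
SETDGAUD DGDFN(name system1 system2)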
12. Synchronize the IFS objects, data areas and data queues between the source and
target systems. For IFS objects, follow the Synchronize IFS Object (SYNCIFS)
procedures. For data areas and data queues, follow the Synchronize Object
(SYNCOBJ) procedures. Refer to chapter “Synchronizing data between systems”
on page 497 for additional information.
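For example, the synchronize commands might be run against the configured entries as follows (a sketch; these assume each command accepts the data group in its DGDFN parameter):
SYNCIFS DGDFN(name system1 system2)
SYNCOBJ DGDFN(name system1 system2)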
13. If you are replicating large amounts of data, you should specify IBM i journal
receiver size options that provide large journal receivers and large journal entries.
Journals created by MIMIX are configured to allow maximum amounts of data.
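For example, on IBM i the receiver size options for a journal that was not created by MIMIX might be changed as follows (a sketch; attaching a new receiver with JRNRCV(*GEN) is required when changing receiver size options):
CHGJRN JRN(library/journal) JRNRCV(*GEN) RCVSIZOPT(*MAXOPT3)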
Checklist: Converting IFS entries to user journaling using the CVTDGIFSE command
Use the procedures in this topic to convert IFS entries from system journal (object)
replication to user journal (database) replication using the Convert Data Group IFS
Entries (CVTDGIFSE) command. The CVTDGIFSE command provides the most
efficient way to convert IFS entries to user journaling.
Topic “User journal replication of IFS objects, data areas, data queues” on page 75
describes the benefits and restrictions of replicating these objects from user journal
entries. It also identifies the MIMIX processes used for replication and the purpose of
tracking entries.
You can choose to convert all or some of the IFS entries currently configured for
system journaling within a data group. The CVTDGIFSE command uses a temporary
data group that allows the specified data group to remain active during most of the
conversion process.
The IFS entries are copied to the temporary data group, changed to allow cooperative
processing, and IFS tracking entries are created. Journaling is then started for the IFS
objects on the source system and, if specified, also on the target system. The copied
IFS entries replace the existing entries, and IFS tracking entries are moved to the
existing data group. If necessary, the data group is changed to the values specified on
this command and to a data group type of *ALL. The existing data group is ended and
restarted to make all the changes effective and the temporary data group is removed.
If requested, an #IFSATR audit request is submitted after the conversion completes.
The CVTDGIFSE command runs interactively and requires your response to inquiry
messages while the conversion is in progress. You may also be prompted to provide
input for additional commands.
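For example, a minimal invocation might look like the following (a sketch; only the DGDFN parameter is shown — prompt CVTDGIFSE with F4 for the full parameter list, including Audit after conversion (AUDIT)):
CVTDGIFSE DGDFN(name system1 system2)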
desired and press Enter to run the command.
Note: For MIMIX DR installations, the Audit after conversion (AUDIT) parameter
cannot be *PRIORITY.
4. The Confirm Convert DG IFS Entry display appears. To start the conversion,
press Enter. The display will remain until a message is issued which requires a
response.
5. Respond to any messages issued as appropriate for your environment. See
“Responding to CVTDGIFSE command messages” on page 156.
Checklist: Converting to legacy cooperative processing
a data group’s object entries” on page 276.
7. Compare the data group file entries with those saved in the outfile created in
Step 5. Any differences must be updated manually.
8. If you replicate journaled *DTAARA or *DTAQ objects with this data group, skip to
Step 10.
9. Optional step: This step prevents journaling from starting on new files, which
may not be desired because the journal image (JRNIMG) value for these files may
differ from the value specified in the MIMIX configuration. Such a difference
will be detected by the file attributes (#FILATR) audit.
For each library in which you want to prevent journaling from starting on new
files, do one of the following:
• For systems running IBM i 5.4, to delete the QDFTJRN data areas use the
command:
DLTDTAARA DTAARA(library/QDFTJRN)
• For systems running IBM i 6.1 or higher, to end library journaling use the
command:
ENDJRNLIB LIB(library)
10. Start the data group specifying the command as follows:
STRDG DGDFN(name system1 system2) CLRPND(*YES)
CHAPTER 6 Configuring system-level communications
This information is provided to assist you with configuring the IBM Power™ Systems
communications that are necessary before you can configure MIMIX.
MIMIX supports the Transmission Control Protocol/Internet Protocol (TCP/IP)
communications protocol.
Note: MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) or OptiConnect/400 for communications protocols. Vision
Solutions will only assist customers to determine possible workarounds if
communication-related issues arise when using SNA or OptiConnect. If you
create transfer definitions for MIMIX to use these protocols, be certain that your
business can accept this limitation.
MIMIX should have a dedicated communications line that is not shared with other
applications, jobs, or users on the production system. A dedicated path will make it
easier to fine-tune your MIMIX environment and to determine the cause of problems.
For TCP/IP, it is recommended that the TCP/IP host name or interface used be in its
own subnet. For SNA, it is recommended that MIMIX have its own communication line
instead of sharing an existing SNA device.
Your Certified MIMIX Consultant can assist you in determining your communications
requirements and ensuring that communications can efficiently handle peak volumes
of journal transactions.
If you plan to use system journal replication processes, you need to consider
additional aspects that may affect the communications speed. These aspects include
the type of objects being transferred and the size of data queues, user spaces, and
files defined to cooperate with user journal replication processes.
MIMIX IntelliStart can help you determine your communications requirements.
The topics in this chapter include:
• “Configuring for native TCP/IP” on page 159 describes using native TCP/IP
communications and provides steps to prepare and configure your system for it.
• “Configuring APPC/SNA” on page 163 describes basic requirements for SNA
communications.
• “Configuring OptiConnect” on page 164 describes basic requirements for
OptiConnect communications and identifies MIMIX limitations when this
communications protocol is used.
Configuring for native TCP/IP
Using TCP/IP communications may or may not improve your CPU usage, but if your
primary communications protocol is TCP/IP, this can simplify your network
configuration.
Native TCP/IP communications allow MIMIX users greater flexibility and provide
another option in the communications available for use on their Power™ Systems.
MIMIX users can also continue to use IBM ANYNET support to run SNA protocols
over TCP networks.
Preparing your system to use TCP/IP communications with MIMIX requires the
following:
1. Configure both systems to use TCP/IP. The procedure for configuring a system to
use TCP/IP is documented in the information included with the IBM i software.
Refer to the IBM TCP/IP Fastpath Setup book, SC41-5430, and follow the
instructions to configure the system to use TCP/IP communications.
2. If you need to use port aliases, do the following:
a. Refer to the examples “Port aliases-simple example” on page 160 and “Port
aliases-complex example” on page 161.
b. Create the port aliases for each system using the procedure in topic “Creating
port aliases” on page 162.
3. Once the system-level communication is configured, you can begin the MIMIX
configuration process.
Figure 7. Creating Ports. In this example, the MIMIX installation consists of two systems.
Figure 8. Creating Ports. In this example, the MIMIX installation consists of three systems,
In both Figure 7 and Figure 8, if you need to use port aliases for port 50410, you need
to have a service table entry on each system that equates the port number to the port
alias. For example, you might have a service table entry on system LONDON that
defines an alias of MXMGT for port number 50410. Similarly, you might have service
table entries on systems HONGKONG and CHICAGO that define an alias of MXNET
for port 50410. You would use these aliases in the PORT1 and PORT2 parameters in
the transfer definition.
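For example, the service table entry on LONDON described above might be added directly with the IBM i Add Service Table Entry command (a sketch using the alias and port from this example):
ADDSRVTBLE SERVICE('MXMGT') PORT(50410) PROTOCOL('tcp') TEXT('MIMIX native TCP port')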
Figure 9. Creating Port Aliases. In this example, the system CHICAGO participates in two MIMIX installations and uses a separate port for each MIMIX installation.
If you need to use port aliases in an environment such as Figure 9, you need to have
a service table entry on each system that equates the port number to the port alias. In
this example, CHICAGO would require two port aliases and two service table entries.
For example, you might use a port alias of LIBAMGT for port 50410 on LONDON and
an alias of LIBANET for port 50410 on both HONGKONG and CHICAGO. You might
use an alias of LIBBMGT for port 50411 on CHICAGO and an alias of LIBBNET for
port 50411 on both CAIRO and MEXICITY. You would use these port aliases in the
PORT1 and PORT2 parameters on the transfer definitions.
3. The Configure Related Tables display appears. Select option 1 (Work with
service table entries) and press Enter.
4. The Work with Service Table Entries display appears. Do the following:
a. Type a 1 in the Opt column next to the blank lines at the top of the list.
b. In the blank at the top of the Service column, use uppercase characters to
specify the alias that the System i will use to identify this port as a MIMIX native
TCP port.
Note: Port alias names are case sensitive and must be unique to the system
on which they are defined. For environments that have only one MIMIX
installation, Vision Solutions recommends that you use the same port
number or same port alias on each system in the MIMIX installation.
c. In the blank at the top of the Port column, specify the number of an unused port
ID to be associated with the alias. The port ID can be any number greater than
1024 and less than 55534 that is not being used by another application. You
can page down through the list to ensure that the number is not being used by
the system.
d. In the blank at the top of the Protocol column, type TCP to identify this entry as
using TCP/IP communications.
e. Press Enter.
5. The Add Service Table Entry (ADDSRVTBLE) display appears. Verify that the
information shown for the alias and port is what you want. At the Text 'description'
prompt, type a description of the port alias, enclosed in apostrophes, and then
press Enter.
Configuring APPC/SNA
Before you create a transfer definition that uses the SNA protocol, a functioning SNA
(APPN or APPC) line, controller, and device must exist between the systems that will
be identified by the transfer definition. If a line, controller, and device do not exist,
consult your network administrator before continuing.
Note: MIMIX no longer fully supports the SNA protocol. Vision Solutions will only
assist customers to determine possible workarounds if communication-related
issues arise when using SNA. If you create transfer definitions that specify
*SNA for protocol, be certain that your business environment can accept this
limitation.
Configuring OptiConnect
If you plan to use the OptiConnect protocol, a functioning OptiConnect line must exist
between the two systems that you identify in the transfer definition.
Note: MIMIX no longer fully supports the OptiConnect/400 protocol. Vision Solutions
will only assist customers to determine possible workarounds if
communication-related issues arise when using OptiConnect. If you create
transfer definitions that specify *OPTI for protocol, be certain that your business
environment can accept this limitation.
You can use the OptiConnect® product from IBM for all communication for most¹
MIMIX processes. Use the IBM book OptiConnect for OS/400 to install and verify
OptiConnect communications. Then you can do the following:
• Ensure that the QSOC library is in the system portion of the library list. Use the
command DSPSYSVAL SYSVAL(QSYSLIBL) to verify whether the QSOC library
is in the system portion of the library list. If it is not, use the CHGSYSVAL
command to add this library to the system library list (see the example at the end
of this topic).
• When you create the transfer definition, specify *OPTI for the transfer protocol.
¹ The #FILDTA audit and the Compare File Data (CMPFILDTA) command require TCP/IP
communications.
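For example, if the current system library list is QSYS QSYS2 QHLPSYS QUSRSYS (an assumed value; display the actual list first with DSPSYSVAL), QSOC could be appended as follows:
CHGSYSVAL SYSVAL(QSYSLIBL) VALUE('QSYS QSYS2 QHLPSYS QUSRSYS QSOC')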
CHAPTER 7 Configuring system definitions
Tips for system definition parameters
This topic provides tips for using the more common options for system definitions.
Context-sensitive help is available online for all options on the system definition
commands.
System definition (SYSDFN) This parameter is a single-part name that represents a
system within a MIMIX installation. This name is a logical representation and does not
need to match the system name that it represents. It is recommended that you avoid
naming system definitions based on their roles. System roles such as source, target,
production, and backup change upon switching.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_).
System type (TYPE) This parameter indicates the role of this system within the
MIMIX installation. A system can be a management (*MGT) system or a network
(*NET) system. Only one system in the MIMIX installation can be a management
system.
Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the
primary and secondary transfer definitions used for communicating with the system.
The communications path and protocol are defined in the transfer definitions. For
MIMIX to be operational, the transfer definition names you specify must exist. MIMIX
does not automatically create transfer definitions. If you accept the default value
PRIMARY for the Primary transfer definition, create a transfer definition by that name.
If you specify a Secondary transfer definition, it will be used by MIMIX if the
communications path specified by the primary transfer definition is not available.
Cluster member (CLUMBR) You can specify if you want this system definition to be
a member of a cluster. The system (node) will not be added to the cluster until the
system manager is started the first time.
Cluster transfer definition (CLUTFRDFN) You can specify the transfer definition
that cluster resource services will use to communicate to the node and for the node to
communicate with other nodes in the cluster. You must specify *TCP as the transfer
protocol.
Message handling (PRIMSGQ, SECMSGQ) MIMIX uses the centralized message
log facility which is common to all MIMIX products. These parameters provide
additional flexibility by allowing you to identify the message queues associated with
the system definition and define the message filtering criteria for each message
queue. By default, the primary message queue, MIMIX, is located in the MIMIXQGPL
library. You can specify a different message queue or optionally specify a secondary
message queue. You can also control the severity and type of messages that are sent
to each message queue.
Communicate with mgt systems (MGTSYS) This parameter is ignored for system
definitions of type *MGT. For system definitions of type *NET, MIMIX uses this
parameter to determine which management systems will communicate with the
network system via system manager processes. The default value, *ALL, allows all
management systems to communicate with the specified network system. In
environments licensed for multiple management systems which also have numerous
network systems, you may want to limit the number of management systems that
communicate with a network system to reduce the amount of communications
resources used by MIMIX system managers. It is recommended that you try the
default environment first. In environments with multiple management systems, system
manager processes between a management system and a network system must
exist before data groups can be created between the systems. If you need to limit the
communication resources, contact your Certified MIMIX Consultant for assistance in
balancing MIMIX communication needs with available resources.
Manager delay times (JRNMGRDLY, SYSMGRDLY) Two parameters define the
delay times used for all journal management and system management jobs. The
value of the journal manager delay parameter determines how often the journal
manager process checks for work to perform. The value of the system manager delay
parameter determines how often the system manager process checks for work to
perform.
Output queue values (OUTQ, HOLD, SAVE) These parameters identify an output
queue used by this system definition and define characteristics of how the queue is
handled. Any MIMIX functions that generate reports use this output queue. You can
hold spooled files on the queue and save spooled files after they are printed.
Keep history (KEEPSYSHST, KEEPDGHST) Two parameters specify the number of
days to retain MIMIX system history and data group history. MIMIX system history
includes the system message log. Data group history includes time stamps and
distribution history. You can keep both types of history information on the system for
up to a year.
Keep notifications (KEEPNEWNFY, KEEPACKNFY) Two parameters specify the
number of days to retain new and acknowledged notifications. The Keep new
notifications (days) parameter specifies the number of days to retain new notifications
in the MIMIX data library. The Keep acknowledged notifications (days) parameter
specifies the number of days to retain acknowledged notifications in the MIMIX data
library.
MIMIX data library, storage limit (KEEPMMXDTA, DTALIBASP, DSKSTGLMT)
Three parameters define information about MIMIX data libraries on the system. The
Keep MIMIX data (days) parameter specifies the number of days to retain objects in
the MIMIX data library, including the container cache used by system journal
replication processes. The MIMIX data library ASP parameter identifies the auxiliary
storage pool (ASP) from which the system allocates storage for the MIMIX data
library. For libraries created in a user ASP, all objects in the library must be in the
same ASP as the library. The Disk storage limit (GB) parameter specifies the
maximum amount of disk storage that may be used for the MIMIX data libraries.
User profile and job descriptions (SBMUSR, MGRJOBD, DFTJOBD) MIMIX runs
under the MIMIXOWN user profile and uses several job descriptions to optimize
MIMIX processes. The default job descriptions are stored in the MIMIXQGPL library.
Job restart time (RSTARTTIME) System-level MIMIX jobs, including the system
manager and journal manager, restart daily to maintain the MIMIX environment. You
can change the time at which these jobs restart. The management or network role of
the system affects the results of the time you specify on a system definition. Changing
the job restart time is considered an advanced technique.
Printing (CPI, LPI, FORMLEN, OVRFLW, COPIES) These parameters control
characteristics of printed output.
Product library (PRDLIB) This parameter is used for installing MIMIX into a
switchable independent ASP, and allows you to specify a MIMIX installation library
that does not match the library name of the other system definitions. The only time
this parameter should be used is in the case of an INTRA system or in replication
environments where it is necessary to have extra MIMIX system definitions that will
“switch locations” along with the switchable independent ASP. Due to its complexity,
changing the product library is considered an advanced technique and should not be
attempted without the assistance of a Certified MIMIX Consultant.
Note: For INTRA environments, the PRDLIB parameter must be manually
specified and must be a name ending in “I”. For more information, see
Appendix D, “Configuring Intra communications.”
ASP group (ASPGRP) This parameter is used for installing MIMIX into a switchable
independent ASP, and defines the ASP group (independent ASP) in which the
product library exists. Again, this parameter should only be used in replication
environments involving a switchable independent ASP. Due to its complexity,
changing the ASP group is considered an advanced technique and should not be
attempted without the assistance of a Certified MIMIX Consultant.
Changing a system definition
To change a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
2. The Work with System Definitions display appears. Type a 2 (Change) next to the
system definition you want and press Enter.
3. The Change System Definition (CHGSYSDFN) display appears. Press F10
(Additional parameters).
4. Locate the prompt for the parameter you need to change and specify the value
you want. Press F1 (Help) for more information about the values for each
parameter.
5. To save the changes, press Enter.
Limiting internal communications to a network system
c. Press Enter.
5. When you have completed changing the network system definitions, start MIMIX
using the command:
STRMMX
Multiple network system considerations
When configuring an environment that has multiple network systems, it is
recommended that each system definition in the environment specify the same name
for the Primary transfer definition prompt. This configuration is necessary for the
MIMIX system managers to communicate between the management system and all
systems in the network. Data groups can use the same transfer definitions that the
system managers use, or they can use differently named transfer definitions.
Similarly, if you use secondary transfer definitions, it is recommended that each
system definition in the multiple network environment specifies the same name for the
Secondary transfer definition prompt. (The value of the Secondary transfer definition
should be different than the value of the Primary transfer definition.)
Figure 10 shows system definitions in a multiple network system environment. The
management system (LONDON) specifies the value PRIMARY for the primary
transfer definition in its system definition. The management system can communicate
with the other systems using any transfer definition named PRIMARY that has a value
for System 1 or System 2 that resolves to its system name (LONDON). Figure 11
shows the recommended transfer definition configuration which uses the value *ANY
for both systems identified by the transfer definition.
The management system LONDON could also use any transfer definition that
specified the name LONDON as the value for either System 1 or System 2.
The default value for the name of a transfer definition is PRIMARY. If you use a
different name, you need to specify that name as the value for the Primary transfer
definition prompt in all system definitions in the environment.
Figure 10. Example of system definition values in a multiple network system environment.
Figure 11. Example of a contextual (*ANY) transfer definition in use for a multiple network system environment.
                 ---------Definition---------             Threshold
Opt  Name        System 1    System 2    Protocol    (MB)
__   __________  ________    ________
     PRIMARY     *ANY        *ANY        *TCP        *NOMAX
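For reference, a contextual transfer definition like the one shown in Figure 11 might be created as follows (a sketch; the HOST1 and HOST2 defaults are described in “Tips for transfer definition parameters”):
CRTTFRDFN TFRDFN(PRIMARY *ANY *ANY) PROTOCOL(*TCP)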
Configuring transfer definitions
By creating a transfer definition, you identify to MIMIX the communications path and
protocol to be used between two systems. You need at least one transfer definition for
each pair of systems between which you want to perform replication. A pair of
systems consists of a management system and a network system. If you want to be
able to use different transfer protocols between a pair of systems, create a transfer
definition for each protocol.
System-level communication must be configured and operational before you can use
a transfer definition.
You can also define an additional communications path in a secondary transfer
definition. If configured, MIMIX can automatically use a secondary transfer definition if
the path defined in your primary transfer definition is not available.
In an Intra environment, a transfer definition defines a communications path and
protocol to be used between the two product libraries used by Intra. For detailed
information about configuring an Intra environment, refer to “Configuring Intra
communications” on page 654.
Once transfer definitions exist for MIMIX, they can be used for other functions, such
as the Run Command (RUNCMD), or by other MIMIX products for their operations.
The topics in this chapter include:
• “Tips for transfer definition parameters” on page 176 provides tips for using the
more common options for transfer definitions.
• “Using contextual (*ANY) transfer definitions” on page 181 describes using the
value (*ANY) when configuring transfer definitions.
• “Creating a transfer definition” on page 184 provides the steps to follow for
creating a transfer definition.
• “Changing a transfer definition” on page 185 provides the steps to follow for
changing a transfer definition. This topic also includes a sub-task for changing a
transfer definition when converting to a remote journaling environment.
• “Starting the DDM TCP/IP server” on page 187 describes how to start the DDM
server that is required in configurations that use remote journaling.
• “Checking the DDM password validation level” on page 188 describes how to
check whether the DDM communications infrastructure used by MIMIX Remote
Journal support requires a password. This topic also describes options for
ensuring that systems in a MIMIX configuration have the same password and
describes implications of these options.
• “Starting the TCP/IP server” on page 191 provides the steps to follow if you need
to start the Lakeview TCP/IP server.
• “Using autostart job entries to start the TCP server” on page 192 provides the
steps to configure the Lakeview TCP server to start automatically every time the
MIMIX subsystem is started.
• “Verifying a communications link for system definitions” on page 196 provides the
steps to verify that the communications link defined for each system definition is
operational.
• “Verifying the communications link for a data group” on page 197 provides a
procedure to verify the primary transfer definition used by the data group.
Tips for transfer definition parameters
This topic provides tips for using the more common options for transfer definitions.
Context-sensitive help is available online for all options on the transfer definition
commands.
Transfer definition (TFRDFN) This parameter is a three-part name that identifies a
communications path between two systems. The first part of the name identifies the
transfer definition. The second and third parts of the name identify two different
system definitions which represent the systems between which communication is
being defined. It is recommended that you use PRIMARY as the name of one transfer
definition. To support replication, a transfer definition must identify the two systems
that will be used by the data group. You can explicitly specify the two systems, or you
can allow MIMIX to resolve the names of the systems. For more information about
allowing MIMIX to resolve the system names, see “Using contextual (*ANY) transfer
definitions” on page 181.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_).
For more information, see “Target journal definition names generated by ADDRJLNK
command” on page 210.
Short transfer definition name (TFRSHORTN) This parameter specifies the short
name of the transfer definition to be used in generating a relational database (RDB)
directory name. The short transfer definition name must be a unique, four-character
name if you specify to have MIMIX manage your RDB directory entries. It is
recommended that you use the default value *GEN to generate the name. The
generated name is a concatenation of the first character of the transfer definition
name, the last character of the system 1 name, the last character of the system 2
name, and the fourth character will be either a blank, a letter (A - Z), or a single digit
number (0 - 9).
Transfer protocol (PROTOCOL) This parameter specifies the communications
protocol to be used. Each protocol has a set of related parameters. If you change the
protocol specified after you have created the transfer definition, MIMIX saves
information about both protocols.
Notes:
• MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) or OptiConnect/400 for communications protocols. Vision
Solutions will only assist customers to determine possible workarounds if
communication-related issues arise when using SNA or OptiConnect. If you
create transfer definitions for MIMIX to use these protocols, be certain that your
business can accept this limitation.
• TCP/IP is the only communications protocol that is supported by MIMIX in an IBM
i clustering environment.
For the *TCP protocol the following parameters apply:
• System x host name or address (HOST1, HOST2) These two parameters
specify the host name or address of system 1 and system 2, respectively. The
name is a mixed-case host alias name or a TCP address (nnn.nnn.nnn.nnn) and
can be up to 256 characters in length. For the HOST1 parameter, the special
value *SYS1 indicates that the host name is the same as the name specified for
System 1 in the Transfer definition parameter. Similarly, for the HOST2 parameter,
the special value *SYS2 indicates that the host name is the same as the name
specified for System 2 in the Transfer definition parameter.
Note: The specified value is also used when starting the Lakeview TCP Server
(STRSVR command). The HOST parameter on the STRSVR command is
limited to 80 or fewer characters.
• System x port number or alias (PORT1, PORT2) These two parameters specify
the port number or port alias of system 1 and system 2, respectively. The value of
each parameter can be a TCP port number in the range 1000 through 55534 or a
mixed-case port alias of up to 14 characters. To avoid potential conflicts with
designations made by the operating system, it is recommended that you use
values between 40000 and 55500. By default, the PORT1 parameter uses the
port 50410. For the PORT2 parameter, the default special value *PORT1
indicates that the value specified on the System 1 port number or alias (PORT1)
parameter is used. If you configured TCP using port aliases in the service table,
specify the alias name instead of the port number.
Note: If you have transfer definitions for multiple MIMIX installations, ensure that
there is a gap of at least 10 between the port numbers specified in the
transfer definitions. For example, if port 40000 is used in the transfer
definition for the MIMIXA installation, then the transfer definition for the
MIMIXB installation should use port 40010 or higher.
The Relational database (RDB) parameter also applies to *TCP protocol.
For the *SNA protocol the following parameters apply:
• System x location name (LOCNAME1, LOCNAME2) These two parameters
specify the location name or address of system 1 and system 2, respectively. The
value of each parameter is the unique location name that identifies the system to
remote devices. For the LOCNAME1 parameter, the special value *SYS1
indicates that the location name is the same as the name specified for System 1
on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2
parameter, the special value *SYS2 indicates that the location name is the same
as the name specified for System 2 on the Transfer definition (TFRDFN)
parameter.
• System x network identifier (NETID1, NETID2) These two parameters specify
the name of the network for system 1 and system 2, respectively. The default value
*LOC indicates that the network identifier for the location name associated with
the system is used. The special value *NETATR indicates that the value specified
in the system network attributes is used. The special value *NONE indicates that
the network has no name. For the NETID2 parameter, the special value *NETID1
indicates that the network identifier specified on the System 1 network identifier
(NETID1) parameter is used.
• SNA mode (MODE) This parameter specifies the name of the mode description
used for communication. The default name is MIMIX. The special value *NETATR
indicates that the value specified in the system network attributes is used.
The following parameters apply for the *OPTI protocol:
• System x location name (LOCNAME1, LOCNAME2) These two parameters
specify the location name or address of system 1 and system 2, respectively. The
value of each parameter is the unique location name that identifies the system to
remote devices. For the LOCNAME1 parameter, the special value *SYS1
indicates that the location name is the same as the name specified for System 1
on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2
parameter, the special value *SYS2 indicates that the location name is the same
as the name specified for System 2 on the Transfer definition (TFRDFN)
parameter.
Threshold size (THLDSIZE) This parameter is accessible when you press F10
(Additional parameters). It specifies the maximum size of files and objects that are
sent; a file or object that exceeds the threshold is not sent. Valid values range
from 1 through 9999999. The special
value *NOMAX indicates that no maximum value is set. Transmitting large files and
objects can consume excessive communications bandwidth and negatively impact
communications performance, especially for slow communication lines.
Manage autostart job entries (MNGAJE) This parameter is accessible when you
press F10 (Additional parameters). This determines whether MIMIX will use this
transfer definition to manage an autostart job entry for starting the TCP server for the
MIMIXQGPL/MIMIXSBS subsystem description. The shipped default is *YES,
whereby MIMIX will add, change, or remove an autostart job entry based on changes
to this transfer definition. This parameter only affects transfer definitions for TCP
protocol which have host names of 80 or fewer characters. For a given port number or
alias, only one autostart job entry will be created regardless of how many transfer
definitions use that port number or alias. An autostart job entry is created on each
system related to the transfer definition.
When configuring a new installation, transfer definitions and MIMIX-added autostart
job entries do not exist on other systems until after the first time the MIMIX managers
are started. Therefore, during initial configuration you may need to manually start the
TCP server on the other systems using the STRSVR command.
Relational database (RDB) This parameter is accessible when you press F10
(Additional parameters) and is valid in transfer definitions used internally by system
manager processes and by transfer definitions used in environments configured to
use remote journaling (default) in user journal replication processes. This parameter
consists of four relational database values which identify the communications path
used by the IBM i remote journal function to transport journal entries: a relational
database directory entry name, two system database names, and a management
indicator for directory entries. This parameter creates two RDB directory entries, one
on each system identified in the transfer definition. Each entry identifies the other
system’s relational database.
Note: If you use the value *ANY for both system 1 and system 2 on the transfer
definition, *NONE is used for the directory entry name, and no directory entry
is generated.
Finding the system database name for RDB directory entries
If you are managing the RDB directory entries and you need to determine the system
database name, do the following:
1. Log in to the system that was specified for System 1 in the transfer definition.
2. From the command line type DSPRDBDIRE and press Enter. Look for the
relational database directory entry that has a corresponding remote location name
of *LOCAL.
3. Repeat steps 1 and 2 to find the system database name for System 2.
Using contextual (*ANY) transfer definitions
transfer definition that matches the transfer definition that you specified, for example,
(PRIMARY SYSA SYSB).
Creating a transfer definition
System-level communication must be configured and operational before you can use
a transfer definition.
To create a transfer definition, do the following:
1. Access the Work with Transfer Definitions display by doing one of the following:
• From the MIMIX Configuration Menu, select option 2 (Work with transfer
definitions) and press Enter.
• From the MIMIX Cluster Menu, select option 21 (Work with transfer definitions)
and press Enter.
2. The Work with Transfer Definitions display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create Transfer Definition display appears. Do the following:
a. At the Transfer definition prompts, specify a name and the two system
definitions between which communications will occur.
b. At the Short transfer definition name prompt, accept the default value *GEN to
generate a short transfer definition name. This short transfer definition name is
used in generating relational database directory entry names if you specify to
have MIMIX manage your RDB directory entries.
c. At the Transfer protocol prompt, specify the communications protocol you
want, then press Enter. The value *TCP is strongly recommended for all
environments and is the only protocol supported by MIMIX in an IBM i
clustering environment.
Note: MIMIX no longer fully supports configurations using Systems Network
Architecture (SNA) or OptiConnect/400 for communications protocols.
Vision Solutions will only assist customers to determine possible
workarounds if communication-related issues arise when using SNA
or OptiConnect. If you create transfer definitions for MIMIX to use these
protocols, be certain that your business can accept this limitation.
4. Additional parameters for the protocol you selected appear on the display. Verify
that the values shown are what you want. Make any necessary changes.
5. At the Description prompt, type a text description of the transfer definition,
enclosed in apostrophes.
6. Optional step: If you need to set a maximum size for files and objects to be
transferred, press F10 (Additional parameters). At the Threshold size (MB)
prompt, specify a valid value.
7. Optional step: If you need to change the relational database information, press
F10 (Additional parameters). See “Tips for transfer definition parameters” on
page 176 for details about the Relational database (RDB) parameter. If MIMIX is
not managing the RDB directory entries, it may be necessary to change the RDB
values.
8. To create the transfer definition, press Enter.
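For reference, an equivalent transfer definition might be created from a command line (a sketch using hypothetical system names SYSA and SYSB; the TEXT keyword for the description is an assumption — prompt CRTTFRDFN with F4 to confirm):
CRTTFRDFN TFRDFN(PRIMARY SYSA SYSB) PROTOCOL(*TCP) HOST1(*SYS1) HOST2(*SYS2) TEXT('Primary TCP transfer definition')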
Changing a transfer definition
To support remote journaling, modify the transfer definition you plan to use as follows:
1. From the MIMIX Configuration menu, select option 2 (Work with transfer
definitions) and press Enter.
2. The Work with Transfer Definitions display appears. Type a 2 (Change) next to the
definition you want and press Enter.
3. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10
(Additional parameters), then press Page Down.
4. At the Relational database (RDB) prompt, specify the desired values for each of
the four elements and press Enter.
Note: See “Tips for transfer definition parameters” on page 176 for detailed
information about the Relational database (RDB) parameter and “Finding
the system database name for RDB directory entries” on page 180 for
information when changing transfer definitions configured to use RDB
directory entries.
Checking the DDM password validation level
MIMIX Remote Journal support uses the DDM communications infrastructure. This
infrastructure can be configured to require a password when a server connection is
made. The MIMIXOWN user profile, which establishes the remote journal connection,
ships with a preset password so that it is consistent on all systems.
If you have implemented DDM password validation on any systems where MIMIX will
be used, you should verify the DDM level. If the MIMIXOWN password is not the
same on all systems in the MIMIX environment, you may need to change the
MIMIXOWN user profile or the DDM security level to allow MIMIX Remote Journal
support to function properly. These changes have security implications of which you
should be aware.
If the MIMIXOWN password has not been changed from its shipped, preset value, no
action is necessary.
If the MIMIXOWN password has been changed from its shipped value, do the
following on both systems to check the DDM password validation level in use:
1. From a command line, type CHGDDMTCPA and press F4 (prompt).
2. Check the value of the Lowest authentication method (PWDRQD) field:
• If the value is *NO, *USRID, or *VLDONLY, no further action is required. Press
F12 (Cancel).
• If the field contains any other value, you must take further action to enable
MIMIX RJ support to function in your environment. Press F12, then continue
with the next step.
3. Use one of the following options to change your environment to enable MIMIX RJ
support to function. Each option has security implications. You must decide which
option is best for your environment.
• “Option 1: Manually update MIMIXOWN user profile for DDM environment” on
page 188
• “Option 2: Force MIMIX to change password for MIMIXOWN user profile” on
page 189.
• “Option 3: Allow user profiles without passwords” on page 189.
Use the License Manager command CHGMMXPRF¹ to change the MIMIXOWN user
profile.
¹ The CHGMMXPRF command is available in the version of License Manager shipped with
service pack 8.0.05.00 and later.
after MIMIX is installed. However, this option should be performed before configuring
or starting MIMIX.
Do the following from a command line on each system in the installation:
Specify either *VLDONLY or *USRID as the value for PWDRQD in the following
command and press Enter:
CHGDDMTCPA PWDRQD(value)
Using autostart job entries to start the TCP server
To use TCP/IP communications, the MIMIX TCP/IP server must be started each time
the MIMIX subsystem (MIMIXSBS) is started. Because this can become a
time-consuming task that can be mistakenly forgotten, MIMIX supports automatically
creating and managing autostart job entries for the TCP server with the MIMIXSBS
subsystem. MIMIX does this when transfer definitions for TCP protocol specify *YES
for the Manage autostart job entries (MNGAJE) parameter.
The autostart job entry uses a job description that contains the STRSVR command
which will automatically start the Lakeview TCP server when the MIMIXSBS
subsystem is started. The STRSVR command is defined in the Request data or
command (RQSDTA) parameter of the job description.
When configuring a new installation, transfer definitions and MIMIX-added autostart
job entries do not exist on other systems until after the first time the MIMIX managers
are started. Therefore, during initial configuration you may need to manually start the
TCP server on the other systems using the STRSVR command.
If you prefer, you can create and manage autostart job entries yourself. The transfer
definition must specify MNGAJE(*NO) and you must have an autostart job entry on
each system that can use the transfer definition.
Updating host information for a user-managed autostart job entry
Use this procedure to update a user-managed autostart job entry which starts the
STRSVR command with the MIMIXSBS subsystem so that the request is submitted
with the correct host information. Autostart job entries for the server are
user-managed when the transfer definition specifies MNGAJE(*NO).
Important! Do not use this procedure for MIMIX-managed autostart job entries.
Perform this procedure from the local system, which is the system for which
information changed within the transfer definition. Do the following:
1. Identify the job description and library for the autostart job entry using the
procedure in “Identifying the current autostart job entry information” on page 192.
This information is needed in the following step.
2. Type CHGJOBD and press F4 (Prompt). The Change Job Description display
appears. Do the following:
a. For the Job description and Library prompts, specify the job description and
library names identified in Step 1.
b. Press F10 (Additional parameters), then Page Down to locate Request data or
command (RQSDTA).
c. The Request data or command prompt shows the current values of the
STRSVR command in the following format. Change the value specified for
HOST so that local_host_name is the host name or address specified
for the local system in the transfer definition.
'installation_library/STRSVR HOST(''local_host_name'')
PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)'
d. Press Enter.
3. Delete the existing job description for the autostart job entry using the following
command:
DLTJOBD JOBD(library/job_description)
4. Create a new job description for the autostart job entry using the following
command:
CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD)
TOLIB(installation_library) NEWOBJ(job_description_name)
where installation_library is the name of the library for the MIMIX
installation and where job_description_name follows the recommendation to
identify the port for the local system by specifying the port number in the format
PORTnnnnn or the port alias.
5. Type CHGJOBD and press F4 (Prompt). The Change Job Description display
appears. Do the following:
a. For the Job description and Library prompts, specify the job description and
library you created in Step 4.
b. Press F10 (Additional parameters).
c. Page Down to locate Request data or command (RQSDTA).
d. At the Request data or command prompt, specify the STRSVR command in
the following format:
'installation_library/STRSVR HOST(''local_host_name'')
PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)'
Where the values to specify are:
• installation_library is the name of the library for the MIMIX
installation
• local_host_name is the host name or address from the transfer definition
for the local system
• nnnnn is the new port information from the transfer definition for the local
system, specified as either the port number or the port alias.
e. Press Enter. The job description is changed.
6. Create a new autostart job entry using the following command:
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(autostart_job_name)
JOBD(installation_library/job_description_name)
Where installation_library/job_description_name specifies the job
description from Step 4 and autostart_job_name specifies the same port
information and format as specified for the job description name.
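For example, with a hypothetical installation library named MIMIX and port 50410, the entry might look like:
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50410) JOBD(MIMIX/PORT50410)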
Verifying a communications link for system definitions
Do the following to verify that the communications link defined for each system
definition is operational:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, type a 1 (Work with system definitions) and
press Enter.
3. From the Work with System Definitions display, type an 11 (Verify
communications link) next to the system definition you want and press Enter. You
should see a message indicating the link has been verified.
Note: If the system manager is not active, this process only verifies that
communications to the remote system are successful. You will also see a
message in the job log indicating that “communications link failed after 1
request.” This indicates that the remote system could not return
communications to the local system.
4. Repeat this procedure for all system definitions. If the communications link
defined for a system definition uses SNA protocol, do not check the link from the
local system.
Note: If your transfer definition uses the *TCP communications protocol, then
MIMIX uses the Verify Communications Link command to validate the
information that has been specified for the Relational database (RDB)
parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and
System 2 relational database names exist and are available on each
system.
Configuring journal definitions
By creating a journal definition you identify to MIMIX a journal environment that can
be used in the replication process. MIMIX uses the journal definition to manage the
journaling environment, including journal receiver management.
A journal definition does not automatically build the underlying journal environment
that it defines. If the journal environment does not exist, it must be built. This can be
done after the journal definition is created. Configuration checklists indicate when to
build the journal environment.
The topics in this chapter include:
• “Configuration processes that create journal definitions” on page 200 describes
the security audit journal (QAUDJRN) and other journal definitions that are
automatically created by MIMIX.
• “Tips for journal definition parameters” on page 201 provides tips for using the
more common options for journal definitions.
• “Journal definition considerations” on page 206 provides things to consider when
creating journal definitions for remote journaling.
• “Journal definition naming conventions” on page 207 describes the naming
conventions used for journal definitions and the specific conventions used by
processes that create target journal definitions for data groups which use remote
journaling.
• “Journal receiver management” on page 213 describes how MIMIX performs
change management and delete management for replication processes.
• “Journal receiver size for replicating large object data” on page 217 provides
procedures to verify that a journal receiver is large enough to accommodate large
IFS stream files and files containing LOB data, and if necessary, to change the
receiver size options.
• “Creating a journal definition” on page 218 provides the steps to follow for creating
a journal definition.
• “Changing a journal definition” on page 220 provides the steps to follow for
changing a journal definition.
• “Building the journaling environment” on page 221 describes the journaling
environment and provides the steps to follow for building it.
• “Changing the journaling environment to use *MAXOPT3” on page 222 describes
considerations and provides procedures for changing the journaling environment
to use the *MAXOPT3 receiver size option.
• “Changing the remote journal environment” on page 225 provides steps to follow
when changing an existing remote journal configuration. The procedure is
appropriate for changing a journal receiver library for the target journal in a remote
journaling environment or for any other changes that affect the target journal.
• “Adding a remote journal link” on page 227 describes how to create a MIMIX RJ
link, which will in turn create a target journal definition with appropriate values to
support remote journaling. In most configurations, the RJ link is automatically
created for you when you follow the steps of the configuration checklists.
• “Changing a remote journal link” on page 228 describes how to change an
existing RJ link.
• “Temporarily changing from RJ to MIMIX processing” on page 229 describes how
to change a data group configured for remote journaling to temporarily use MIMIX
send processing.
• “Changing from remote journaling to MIMIX processing” on page 230 describes
how to change a data group that uses remote journaling so that it uses MIMIX
send processing. Remote journaling is preferred.
• “Removing a remote journaling environment” on page 231 describes how to
remove a remote journaling environment that you no longer need.
Configuration processes that create journal definitions
You can explicitly create journal definitions using the Create Journal Definition
(CRTJRNDFN) command. However, other configuration processes may automatically
create them for you. Journal definitions created by other processes can be changed if
necessary.
When you create system definitions, MIMIX automatically creates a journal definition
named QAUDJRN for the security audit journal (QAUDJRN) on that system. The
QAUDJRN journal, also called the system journal, is used by MIMIX system journal
replication processes. If you do not already have a journaling environment for the
security audit journal, it will be created when the first data group that replicates from
the system journal is started.
When you create a data group definition, MIMIX automatically creates a user journal
definition if one does not already exist. Any journal definitions that are created in this
manner will be named with the value specified in the data group definition.
If the data group was created using default values, the user journal created will be
used with IBM i remote journaling support. Creating a data group definition also
creates a remote journal link which in turn creates the journal definition for the target
journal. The target journal definition is created using values appropriate for remote
journaling.
When system manager processes are started, MIMIX will create all the internal
journal definitions and remote journal links necessary for all system managers in the
installation if they do not already exist.
Tips for journal definition parameters
uses #MXJRNIASP for the default journal receiver library name. Otherwise, the
default library name is #MXJRN. You can specify a different name or specify the value
*JRNLIB to use the same library that is used for the associated journal.
Journal receiver library ASP (RCVLIBASP) This parameter specifies the auxiliary
storage pool (ASP) from which the system allocates storage for the journal receiver
library. You can use the default value *CRTDFT or you can specify the number of an
ASP in the range 1 through 32.
The value *CRTDFT indicates that the command default value for the IBM i Create
Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP)
from which the system allocates storage for the library.
For libraries that are created in a user ASP, all objects in the library must be in the
same ASP as the library.
Target journal state (TGTSTATE) This parameter specifies the requested status of
the target journal, and can be used with active journaling support or journal standby
state. Use the default value *ACTIVE to set the target journal state to active when the
data group associated with the journal definition is journaling on the target system
(JRNTGT(*YES)). Use the value *STANDBY to journal objects on the target system
while preventing most journal entries from being deposited into the target journal.
Note: Journal standby state requires that the IBM feature for High Availability
Journal Performance be installed. For more information, see “Configuring for
high availability journal performance enhancements” on page 362.
Target journal inspection (TGTJRNINSP) This parameter specifies whether to
enable target journal inspection on the specified journal. The shipped default value,
*YES, allows the journal to be inspected when the system identified in the journal
definition is the target system for data group replication. Target journal inspection
checks the specified journal for changes to replicated objects that were initiated on
the target system by users or processes other than MIMIX and reports the activity.
When *YES is specified and the specified journal is a user journal, the value *ACTIVE
must be specified for the Target journal state (TGTSTATE). (The IBM feature for High
Availability Journal Performance is not required.) Also, any data group definitions
using this journal definition must allow their data groups to journal on the target system.
Because inspection occurs at the journal level on a system, enabling inspection for a
system journal (QAUDJRN) affects all data groups using the system identified in the
journal definition as their target system. Similarly, enabling inspection for a user
journal affects any data groups using the journal definition as their target system.
Note: To allow full inspection to occur for a specific data group, both its target system
journal definition and its target user journal definition must specify *YES for the
TGTJRNINSP parameter.
Journal caching (JRNCACHE) This parameter specifies whether the system should
cache journal entries in main storage before writing them to disk. This option is only
available if a separately chargeable feature from IBM (Option 42) is installed on the
system. The default value, *NONE, prevents unintentional use of this feature. The
value *BOTH results in journal caching on both the source and the target systems.
You can also specify values *SRC or *TGT to perform journal caching on only the
source or target system.
Note: Journal caching requires that the IBM feature for High Availability Journal
Performance be installed. For more information, see “Configuring for high
availability journal performance enhancements” on page 362.
Receiver change management (CHGMGT, THRESHOLD, TIME, RESETTHLD2 or
RESETTHLD) Several parameters control how journal receivers associated with the
replication process are changed.
The Receiver change management (CHGMGT) parameter controls whether MIMIX
performs change management operations for the journal receivers used in the
replication process. The shipped default value of *TIMESIZE results in MIMIX
changing journal receivers by both threshold size and time of day.
The following parameters specify conditions that must be met before change
management can occur.
• Receiver threshold size (MB) (THRESHOLD) You can specify the size, in
megabytes, of the journal receiver at which it is changed. The default value is
6600 MB. This value is used when MIMIX or the system changes the receivers.
If you decide to decrease the receiver threshold size, you will need to manually
change your journal receiver to reflect this change.
If you change the journal receiver threshold size in the journal definition, the
change is effective with the next receiver change.
• Time of day to change receiver (TIME) You can specify the time of day at which
MIMIX changes the journal receiver. The time is based on a 24 hour clock and
must be specified in HHMMSS format.
• Reset large sequence threshold (RESETTHLD2) You can specify the sequence
number (in millions) at which to reset the receiver sequence number. When the
threshold is reached, the next receiver change resets the sequence number to 1.
Note: RESETTHLD2 accepts larger sequence number values than
RESETTHLD. You can specify a value for only one of these parameters.
RESETTHLD2 is recommended.
For information about how change management occurs in a remote journal
environment and about using other change management choices, see “Journal
receiver management” on page 213.
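For example, a hypothetical command to set these values might look like the following, where the definition name SUPERAPP MEXICITY and the values shown are illustrative:
    CHGJRNDFN JRNDFN(SUPERAPP MEXICITY) CHGMGT(*TIMESIZE) THRESHOLD(6600) TIME(030000) RESETTHLD2(10000)
This keeps the shipped *TIMESIZE behavior, changes receivers at the 6600 MB threshold or at 3:00 a.m., and resets the sequence number once it passes 10000 million.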
Receiver delete management (DLTMGT, KEEPUNSAV, KEEPRCVCNT,
KEEPJRNRCV) Four parameters control how MIMIX handles deleting the journal
receivers associated with the replication process.
The Receiver delete management (DLTMGT) parameter specifies whether or not
MIMIX performs delete management for the journal receivers. By default, MIMIX
performs the delete management operations. MIMIX operations can be adversely
affected if you allow the system or another process to handle delete management.
For example, if another process deletes a journal receiver before MIMIX is finished
with it, replication can be adversely affected.
All of the requirements that you specify in the following parameters must be met
before MIMIX deletes a journal receiver:
• Keep unsaved journal receivers (KEEPUNSAV) You can specify whether or not to
have MIMIX retain any unsaved journal receivers. Retaining unsaved receivers
allows you to back out (roll back) changes in the event that you need to recover
from a disaster. The default value *YES causes MIMIX to keep unsaved journal
receivers until they are saved.
• Keep journal receiver count (KEEPRCVCNT) You can specify the number of
detached journal receivers to retain. For example, if you specify 2 and there are
10 journal receivers including the attached receiver (which is number 10), MIMIX
retains two detached receivers (8 and 9) and deletes receivers 1 through 7.
• Keep journal receivers (days) (KEEPJRNRCV) You can specify the number of
days to retain detached journal receivers. For example, if you specify to keep the
journal receiver for 7 days and the journal receiver is eligible for deletion, it will be
deleted after 7 days have passed from the time of its creation. The exact time of
the deletion may vary. For example, the deletion may occur within a few hours
after the 7 days have passed.
For more information, see “Journal receiver management” on page 213.
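For example, a hypothetical command that sets all four delete management values might look like the following; the definition name is illustrative:
    CHGJRNDFN JRNDFN(SUPERAPP MEXICITY) DLTMGT(*YES) KEEPUNSAV(*YES) KEEPRCVCNT(2) KEEPJRNRCV(7)
With these values, MIMIX deletes a detached receiver only after it has been saved, is older than the two most recent detached receivers, and has existed for seven days.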
Journal receiver ASP (JRNRCVASP) This parameter specifies the auxiliary storage
pool (ASP) from which the system allocates storage for the journal receivers. The
default value *LIBASP indicates that the storage space for the journal receivers is
allocated from the same ASP that is used for the journal receiver library.
Threshold message queue (MSGQ) This parameter specifies the qualified name of
the threshold message queue to which the system sends journal-related messages
such as threshold messages. The default value *JRNDFN for the queue name
indicates that the message queue uses the same name as the journal definition. The
value *JRNLIB for the library name indicates that the message queue uses the library
for the associated journal.
Exit program (EXITPGM) This parameter allows you to specify the qualified name of
an exit program to use when journal receiver management is performed by MIMIX.
The exit program will be called when a journal receiver is changed or deleted by the
MIMIX journal manager. For example, you might want to use an exit program to save
journal receivers as soon as MIMIX finishes with them so that they can be removed
from the system immediately.
Receiver size option (RCVSIZOPT) This parameter specifies what option to use for
determining the maximum size of sequence numbers in journal entries written to the
attached journal receiver. Changing this value requires that you change to a new
journal receiver. In order for a change to take effect, the journaling environment must
be built. When the value *MAXOPT3 is used, the journal receivers cannot be saved
and restored to systems with operating system releases earlier than V5R3M0.
To support a switchable data group, a change to this parameter requires more than
one journal definition to be changed. For additional information, see “Changing the
journaling environment to use *MAXOPT3” on page 222.
Minimize entry specific data (MINENTDTA) This parameter specifies which object
types allow journal entries to have minimized entry-specific data. For additional
information about improving journaling performance with this capability, see
“Minimized journal entry data” on page 359.
Reset sequence threshold (RESETTHLD) You can specify the sequence number
(in millions) at which to reset the receiver sequence number. When the threshold is
reached, the next receiver change resets the sequence number to 1. You can specify
a value for this parameter or for the RESETTHLD2 parameter, but not both.
RESETTHLD2 is recommended.
Journal definition considerations
Consider the following as you create journal definitions for user journal replication
environments that implement remote journaling:
• The source journal definition identifies the local journal and the system on
which the local journal exists. Similarly, the target journal definition identifies
the remote journal and the system on which the remote journal exists.
Therefore, the source journal definition identifies the source system of the
remote journal process and the target journal definition identifies the target
system of the remote journal process.
• You can use an existing journal definition as the source journal definition to
identify the local journal. However, using an existing journal definition for the
target journal definition is not recommended. The existing definition is likely to
be used for journaling and therefore is not appropriate as the target journal
definition for a remote journal link.
• MIMIX recognizes the receiver change management parameters (CHGMGT,
THRESHOLD, TIME, RESETTHLD2 or RESETTHLD) specified in the source
journal definition and ignores those specified in the target journal definition.
When a new receiver is attached to the local journal, a new receiver with the
same name is automatically attached to the remote journal. The receiver prefix
specified in the target journal definition is ignored.
• Each remote journal link defines a local-remote journal pair that functions in
only one direction. Journal entries flow from the local journal to the remote
journal. The direction of a defined pair of journals cannot be switched. If you
want to use the RJ process in both directions for a switchable data group, you
need to create journal definitions for two remote journal links (four journal
definitions). For more information, see “Journal definition naming conventions”
on page 207.
• After the journal environment is built for a target journal definition, MIMIX
cannot change the value of the target journal definition’s Journal receiver prefix
(JRNRCVPFX) or Threshold message queue (MSGQ), and several other
values. To change these values see the procedure in the IBM topic “Library
Redirection with Remote Journals” in the IBM eServer iSeries Information
Center.
• If you are configuring MIMIX for a scenario in which you have one or more
target systems, there are additional considerations for the names of journal
receivers. Each source journal definition must specify a unique value for the
Journal receiver prefix (JRNRCVPFX) parameter. MIMIX ensures that the
same prefix is not used more than once on the same system but cannot
determine if the prefix is used on a target journal while it is being configured. If
the prefix defined by the source journal definition is reused by target journals
that reside in the same library and ASP, attempts to start the remote journals
will fail with message CPF699A (Unexpected journal receiver found).
When you create a target journal definition manually instead of having it
generated by the Add Remote Journal Link (ADDRJLNK) command, use the
default value *GEN for the Journal receiver prefix (JRNRCVPFX).
The receiver name for source and target journals will be the same on the
systems but will not be the same in the journal definitions. In the target journal,
the prefix will be the same as that specified in the source journal definition.
Journal definition naming conventions
This preferred naming convention for target journal definitions ensures that local and
remote journal receivers will have unique names, as required by the IBM i remote
journal function, and that target journal names are unique.
Changing or manually creating target journal definitions: If you manually create
target journal definitions with the CRTJRNDFN command, it is recommended that you
use the preferred naming convention. Implementing this convention in two-node
environments simplifies any future transition to a three-or-more node environment
and avoids having conflicting journal names.
If you change library values in a source journal definition before creating a data group
which uses the journal definition, the target journal definition is created with the
correct library names. Similarly, if you want multiple journals with the same name in
different libraries, change the source journal name before creating a data group which
uses that journal definition. However, if you change the journal name or any of the
library values in a source journal definition after the data group which uses it exits, you
must also change the library names in the target journal definition.
When implementing the naming convention, it is helpful to consider one source node
at a time and create all the journal definitions necessary for replication from that
source, as shown in Table 29.
You can find the remote journal ID for a system by displaying the details of its system
definition. System definitions that existed before MIMIX 7.1 were assigned an ID
during the 7.1 installation. All new system definitions are assigned an ID in the order
they are created.
Multimanagement environments: In environments that use multimanagement
functions1, it is possible that each node that is a management system is also both a
source and target for replication activity. The preferred naming convention helps you
keep track of all the journaling environments needed for a switchable implementation
of MIMIX. The following is strongly recommended:
• Limit the data group name to six characters. This will simplify keeping an
association between the data group name and the names of associated journal
definitions by allowing space for the source node identifier within those names.
• Allow the CRTDGDFN command to create the target journal definitions.
• Once the appropriately named journal definitions are created for source and target
systems, manually create the remote journal links between them (ADDRJLNK
command).
1. Either a MIMIX Global or MIMIX for PowerHA license key is required for multimanagement
functions.
Table 29. Example showing journal definitions needed to replicate from each source node
Figure 12 shows the RJ links needed for this example.
[Figure 12: Work with RJ Links display]
which in turn generates the target journal definition PAYABLES@R CHICAGO (the
third entry listed in Figure 13).
[Figure 13: Work with Journal Definitions display]
Identifying the correct journal definition on the Work with Journal Definition display
can be confusing. Fortunately, the Work with RJ Links display (Figure 14) shows the
association between journal definitions much more clearly.
[Figure 14: Work with RJ Links display]
Journal receiver management
It is recommended that you use the value *YES to allow MIMIX to perform delete
management.
When MIMIX performs delete management, the journal receivers are only deleted
after MIMIX is finished with them and all other criteria specified on the journal
definition are met. The criteria includes how long to retain unsaved journal receivers
(KEEPUNSAV), how many detached journal receivers to keep (KEEPRCVCNT), and
how long to keep detached journal receivers (KEEPJRNRCV).
Note: If more than one MIMIX installation uses the same journal, the journal
manager for each installation can delete the journal regardless of whether the
other installations are finished with it. If you have this scenario, you need to
use the journal receiver delete management exit points to control deleting the
journal receiver. For more information, see “Working with journal receiver
management user exit points” on page 628.
Delete management of the source and target receivers occurs independently on
each system. MIMIX operations can be affected if you allow the system to handle delete
management. The system may delete a journal receiver before MIMIX has completed
its use. It is highly recommended that you configure the journal definitions to have
MIMIX perform journal delete management. By default, the IBM i remote journal
function does not allow a receiver to be deleted until it is sent from the local journal
(source) to the remote journal (target). When MIMIX manages deletion, a remote
journal receiver on the target system, and the corresponding local journal receiver on
the source system, cannot be deleted until they are processed by the database reader
(DBRDR) and the database apply (DBAPY) processes and they meet the other criteria
defined in the journal definition.
Considerations when journaling on target
The default behavior for MIMIX is to have journaling enabled on the target systems for
the target files. After a transaction is applied to the target system, MIMIX writes the
journal entry to a separate journal on the target system. This journaling on the target
system makes it easier and faster to start replication from the backup system
following a switch. As part of the switch processing, the journal receiver is changed
before the data group is started.
In a remote journaling environment, these additional journal receivers can become
stranded on the backup system following a switch. When starting a data group after a
switch, the IBM i remote journal function begins transmitting journal entries from the
just-changed journal receiver. Because the backup system is now temporarily acting
as the source system, the remote journal function interprets any earlier receivers as
unprocessed source journal receivers and prevents them from being deleted.
To remove these stranded journal receivers, you need to use the IBM command
DLTJRNRCV with *IGNTGTRCV specified as the value of the DLTOPT parameter.
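For example, assuming a stranded receiver named PAYABLES0100 in the illustrative library #MXJRN, the command would be similar to:
    DLTJRNRCV JRNRCV(#MXJRN/PAYABLES0100) DLTOPT(*IGNTGTRCV)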
Journal receiver size for replicating large object data
Creating a journal definition
Do the following to create a journal definition:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu select option 3 (Work with journal definitions)
and press Enter.
3. The Work with Journal Definitions display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
4. The Create Journal Definition display appears. At the Journal definition prompts,
specify a two-part name.
Note: Journal definition names cannot be UPSMON or begin with the characters
MM.
5. Verify that the following prompts contain the values that you want. If you have not
journaled before, the default values are appropriate. If you need to identify an
existing journaling environment to MIMIX, specify the information you need.
Journal
Library
Journal library ASP
Journal receiver prefix
Library
Journal receiver library ASP
6. At the Target journal state prompt, specify the requested status of the target
journal. The shipped default value, *ACTIVE, is required for target journal
inspection. The value *ACTIVE can be used with active journaling support or
journal standby state.
Note: Journal standby state requires the IBM feature for High Availability Journal
Performance. For more information see “Configuring for high availability
journal performance enhancements” on page 362.
7. At the Target journal inspection prompt, the shipped default value, *YES, allows
the specified journal to be inspected for activity by users or programs other than
MIMIX. Inspection occurs at the journal level when the system on which the
specified journal exists is the target system for replication by one or more enabled
data groups. To prevent journal inspection, specify *NO.
8. At the Journal caching prompt, the shipped default value, *NONE, prevents
caching of journals in main storage before writing them to disk which is only
possible if a separate, chargeable feature from IBM (Option 42) is available on the
system. To use journal caching on both systems, only the source system, or only
the target system, specify *BOTH, *SRC, or *TGT.
Note: Journal caching requires the IBM feature for High Availability Journal
Performance. For more information see “Configuring for high availability
journal performance enhancements” on page 362.
9. Set the values you need to manage changing journal receivers, as follows:
a. At the Receiver change management prompt, specify the value you want. The
default values are recommended. For more information about valid
combinations of values, press F1 (Help).
b. Press Enter.
c. One or more additional prompts related to receiver change management
appear on the display. Verify that the values shown are what you want and, if
necessary, change the values.
Receiver threshold size (MB)
Time of day to change receiver
Reset large sequence threshold
d. Press Enter.
10. Set the values you need to manage deleting journal receivers, as follows:
a. It is recommended that you accept the default value *YES for the Receiver
delete management prompt to allow MIMIX to perform delete management.
b. Press Enter.
c. One or more additional prompts related to receiver delete management appear
on the display. If necessary, change the values.
Keep unsaved journal receivers
Keep journal receiver count
Keep journal receivers (days)
11. At the Description prompt, type a brief text description of the journal definition.
12. This step is optional. If you want to access additional parameters that are
considered advanced functions, press F10 (Additional parameters). Make any
changes you need to the additional prompts that appear on the display.
13. To create the journal definition, press Enter.
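The procedure above is equivalent to running the CRTJRNDFN command directly. A minimal sketch, using the illustrative two-part name SUPERAPP MEXICITY and the shipped defaults described in Step 6 through Step 8 (the Description prompt is shown with the conventional TEXT keyword, an assumption):
    CRTJRNDFN JRNDFN(SUPERAPP MEXICITY) TGTSTATE(*ACTIVE) TGTJRNINSP(*YES) JRNCACHE(*NONE) TEXT('Journal definition for SUPERAPP')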
Changing a journal definition
Before making changes, review the naming convention requirements for a remote
journaling environment. Some changes may require changing additional journal
definitions. See “Journal definition naming conventions” on page 207.
To change a journal definition, do the following:
1. Access the Work with Journal Definitions display according to your configuration
needs:
• In a clustering environment, from the MIMIX Cluster Menu select option 20
(Work with system definitions) and press Enter. When the Work with System
Definitions display appears, type 12 (Journal Definitions) next to the system
name you want and press Enter.
• In a standard MIMIX environment, from the MIMIX Configuration Menu select
option 3 (Work with journal definitions) and press Enter.
2. The Work with Journal Definitions display appears. Type 2 (Change) next to the
definition you want and press Enter.
3. The Change Journal Definition (CHGJRNDFN) display appears. Press Enter twice
to see all prompts for the display.
4. Make any changes you need to the prompts. Press F1 (Help) for more information
about the values for each parameter.
5. If you need to access advanced functions, press F10 (Additional parameters).
When the additional parameters appear on the display, make the changes you
need.
6. To accept the changes, press Enter.
Note: Changes to the Receiver threshold size (MB) (THRESHOLD) are effective
with the next receiver change. Before a change to any other parameter is
effective, you must rebuild the journal environment. Rebuilding the journal
environment ensures that it matches the journal definition and prevents
problems starting the data group.
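For example, after changing a definition you might rebuild its journaling environment with the Build Journal Environment (BLDJRNENV) command that option 14 calls; the two-part name is illustrative and the JRNDFN keyword is an assumption:
    BLDJRNENV JRNDFN(SUPERAPP MEXICITY)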
Building the journaling environment
3. From the Work with Journal Definitions display, type a 14 (Build) next to the
journal definition you want to build and press Enter.
Option 14 calls the Build Journal Environment (BLDJRNENV) command. For
environments using remote journaling, the command is called twice (first for the
source journal definition and then for the target journal definition). A status
message is issued indicating that the journal environment was created for each
system.
4. To verify that the source journals have been created for a data group, do the
following from each system in the data group:
a. Enter the command WRKDGDFN.
b. From the Work with DG Definitions display, type 12 (Journal definitions) next to
the data group and press Enter.
c. The Work with Journal Definitions display is subsetted to the journal definitions
for the data group. Type 17 (Work with jrn attributes) next to the definition that
is the source for the local system.
Changing the journaling environment to use *MAXOPT3
Table 30. Journal definitions that must be changed
Replicates From: User journal with remote journaling
Switchable: Yes
Journal definitions to change:
• Journal definition for normal source system (local)
• Journal definition for normal target system (remote, @R)
• Journal definition for switched source system (local)
• Journal definition for switched target system (remote, @R)
Do the following:
1. For data groups which use the journal definitions that will be changed, do the
following:
a. If commitment control is used, ensure that there are no open commit cycles.
b. End replication in a controlled manner using topic “Ending a data group in a
controlled manner” in the MIMIX Operations book. Procedures within this topic
will direct how to:
• Prepare for a controlled end of a data group
• Perform the controlled end - When ending, specify *ALL for the Process
prompt and *CNTRLD for the End process prompt.
• Confirm the end request completed without problems - This includes how to
check for and resolve any open commits.
Note: Resolve any open commits before continuing.
2. From the management system, select option 11 (Configuration menu) on the
MIMIX Main Menu. Then select option 3 (Work with journal definitions) to access
the Work with Journal Definitions display.
3. From the Work with Journal Definitions display, do the following to a journal
definition:
a. Type option 2 (Change) next to a journal definition and press Enter.
b. Optionally, specify a value for the Reset large sequence threshold prompt. If no
new value is specified, MIMIX will automatically use the default value
associated with the value you specify for the receiver size option in Step 3d.
c. Press F10 (Additional parameters).
d. At the Receiver size option prompt, specify *MAXOPT3.
e. Press Enter.
f. Repeat Step 3 for each of the journal definitions you need to change, as
indicated in Table 30. After all the necessary journal definitions are changed,
continue with the next step.
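As a sketch, the change made in Step 3 for one journal definition is equivalent to a command similar to the following, where the two-part name is illustrative:
    CHGJRNDFN JRNDFN(SUPERAPP MEXICITY) RCVSIZOPT(*MAXOPT3)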
4. From the Work with Journal Definitions display, type a 14 (Build) next to the
journal definitions you changed and press Enter.
Note: For remote journaling environments, only perform this step for a source
journal definition. Building the environment for the source journal will
automatically result in the building of the environment for the associated
target journal definition.
5. Verify that the changed journal definitions have appropriate values. Do the
following:
a. From the Work with Journal Definitions display, type a 5 (Display) next to each
changed journal definition and press Enter.
b. Verify that *MAXOPT3 is specified for the Receiver size option.
c. Verify that the Reset large sequence threshold prompt contains the value you
specified for Step 3b. If you did not specify a value, the value should be
between 9901 and 18446640000000.
6. Verify that the journals have been changed and now have appropriate values. Do
the following:
a. From the appropriate system (source or target), access the Work with Journal
Definitions display. Then do the following:
• From the source system, type 17 (Work with jrn attributes) next to a changed
source journal definition and press Enter.
• From the target system, type 17 (Work with jrn attributes) next to a changed
target journal definition and press Enter.
b. Verify that *MAXOPT3 is specified as one of the values for the Receiver size
options field.
7. Update any automation programs. Any programs that include journal sequence
numbers must be changed to use the Reset large sequence threshold
(RESETTHLD2) and the Receiver size option (RCVSIZOPT) parameters.
8. Start the data groups using default values. Refer to topic “Starting selected data
group processes” in the MIMIX Operations book.
Changing the remote journal environment
5. From the Work with RJ Links display, do the following to delete the target
journal environment:
a. Type a 24 (Delete target jrn environment) next to the link that you want and
press Enter.
b. A confirmation display appears. To continue deleting the journal, its associated
message queue, and the journal receiver, press Enter.
6. Make the changes you need for the target journal.
For example, to change the target (remote) journal definition to a new receiver
library, do the following:
a. Press F12 to return to the Work with Journal Definitions display.
b. Type option 2 (Change) next to the journal definition for the target system you
want and press Enter.
7. From the Work with Journal Definitions display, type a 14 (Build) next to the target
journal definition and press Enter.
Note: The target journal definition will end with @R.
8. Return to the Work with Data Groups display. Then do the following:
a. Type an 8 (Display status) next to the data group you want and press Enter.
b. Locate the name of the receiver in the Last Read field for the Database
process.
9. Do the following to start the RJ link:
a. From the Work with Data Groups display, type a 44 (RJ links) next to the data
group you want and press Enter.
b. Locate the link you want based on the name in the Target Jrn Def column. Type
a 9 (Start) next to the link with the target journal definition and press F4
(Prompt).
c. The Start Remote Journal Link (STRRJLNK) display appears. Specify the
receiver name from Step 8b as the value for the Starting journal receiver
(STRRCV) prompt and press Enter.
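For illustration only, the resulting command might be similar to the following, assuming the receiver PAYABLES0123 was shown in the Last Read field; the keyword used here to identify the link by its target journal definition is an assumption:
    STRRJLNK JRNDFN(PAYABLES@R CHICAGO) STRRCV(PAYABLES0123)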
10. Start the data group using default values. Refer to topic “Starting selected data
group processes” in the MIMIX Operations book.
Adding a remote journal link
Changing a remote journal link
Changes to the delivery and sending task priority take effect only after the remote
journal link has been ended and restarted.
To change characteristics of the link between source and target journal definitions, do
the following:
1. Before you change a remote journal link, end activity for the link. The MIMIX
Operations book describes how to end only the RJ link.
Notes:
• If you plan to change the primary transfer definition or secondary transfer
definition to a definition that uses a different RDB directory entry, you also need
to remove the existing connection between objects. Use topic “Removing a
remote journaling environment” on page 231 before changing the remote
journal link.
• Before making changes, review the naming convention requirements for a
remote journaling environment. Some changes may require changing
additional journal definitions. See “Journal definition naming conventions” on
page 207.
2. From the Work with RJ Links display, type a 2 (Change) next to the entry you want
and press Enter.
3. The Change Remote Journal Link (CHGRJLNK) display appears. Specify the
values you want for the following prompts:
• Delivery
• Sending task priority
• Primary transfer definition
• Secondary transfer definition
• Description
4. When you are ready to accept the changes, press Enter.
5. To make the changes effective, do the following:
a. If you removed the RJ connection in Step 1, you need to use topic “Building the
journaling environment” on page 221.
b. Start the data group which uses the RJ link.
Temporarily changing from RJ to MIMIX processing
Changing from remote journaling to MIMIX processing
Use this procedure when you no longer want to use remote journaling for a data
group and want to permanently change the data group to use MIMIX send
processing.
Important! If the data group is configured for MIMIX Dynamic Apply, you must
complete the procedure in “Checklist: Converting to legacy cooperative
processing” on page 157 before you remove remote journaling.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. Perform a controlled end for the data group that you want to change using topic
“Ending a data group in a controlled manner” in the MIMIX Operations book. On
the ENDDG command, specify the following:
• *ALL for the Process prompt
• *CNTRLD for the End process prompt
Note: Do not end the RJ link at this time. Step 2 verifies that the RJ link is not
in use by any other processes or data groups before ending and
removing the RJ environment.
2. Perform the procedure in topic “Removing a remote journaling environment” on
page 231.
3. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *NO for the Use remote journal link prompt.
d. To accept the change, press Enter.
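For example, this change is equivalent to a command similar to the following, using the illustrative data group name SUPERAPP MEXICITY CHICAGO:
    CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) RJLNK(*NO)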
4. Start data group replication using the procedure “Starting selected data group
processes” in the MIMIX Operations book and specify *ALL for the Start
processes prompt (PRC parameter).
Removing a remote journaling environment
2. End the remote journal link and verify that it has a state value of *INACTIVE
before you continue. Refer to topics “Ending a remote journal link independently”
and “Checking status of a remote journal link” in the MIMIX Operations book.
3. From the management system, do the following to remove the connection to the
remote journal:
a. Access the journal definitions for the data group whose environment you want
to change. From the Work with Data Groups display, type a 45 (Journal
definitions) next to the data group that you want and press Enter.
b. Type a 12 (Work with RJ links) next to either journal definition you want and
press Enter. You can select either the source or target journal definition.
c. From the Work with RJ Links display, type a 15 (Remove RJ connection) next
to the link that you want and press Enter.
Note: If more than one RJ link is available for the data group, ensure that you
choose the link you want.
d. A confirmation display appears. To continue removing the connections for the
selected links, press Enter.
4. From the Work with RJ Links display, do the following to delete the target system
objects associated with the RJ link:
a. Type a 24 (Delete target jrn environment) next to the link that you want and
press Enter.
b. A confirmation display appears. To continue deleting the journal, its associated
message queue, the journal receiver, and to remove the connection to the
source journal receiver, press Enter.
5. Delete the target journal definition using topic “Deleting a definition” on page 258.
When you delete the target journal definition, its link to the source journal
definition is removed.
6. Use option 4 (Delete) on the Work with Monitors display to delete the RJLNK
monitors which have the same name as the RJ link.
CHAPTER 10 Configuring data group definitions
By creating a data group definition, you identify to MIMIX the characteristics of how
replication occurs between two systems. You must have at least one data group
definition in order to perform replication.
In an Intra environment, a data group definition defines how replication occurs
between the two product libraries used by INTRA.
Once data group definitions exist for MIMIX, they can also be used by the MIMIX
Promoter product.
The topics in this chapter include:
• “Tips for data group parameters” on page 234 provides tips for using the more
common options for data group definitions.
• “Creating a data group definition” on page 246 provides the steps to follow for
creating a data group definition.
• “Changing a data group definition” on page 250 provides the steps to follow for
changing a data group definition.
• “Fine-tuning backlog warning thresholds for a data group” on page 251 describes
what to consider when adjusting the values at which the backlog warning
thresholds are triggered.
Tips for data group parameters
This topic provides tips for using the more common options for data group definitions.
Context-sensitive help is available online for all options on the data group definition
commands. Refer to “Additional considerations for data groups” on page 245 for more
information.
Shipped default values for the Create Data Group Definition (CRTDGDFN) command
result in data groups configured for MIMIX Dynamic Apply. For additional information
see Table 11 in “Considerations for LF and PF files” on page 106.
Data group names (DGDFN, DGSHORTNAM) These parameters identify the data
group.
The Data group definition (DGDFN) is a three-part name that uniquely identifies
a data group. The three-part name must be unique to a MIMIX installation. The
first part of the name identifies the data group. The second and third parts of the
name (System 1 and System 2) specify system definitions representing the
systems between which the files and objects associated with the data group are
replicated.
Notes:
• In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_). Data group names cannot be UPSMON or
begin with the characters MM.
• For Clustering environments only, MIMIX recommends using the value
*RCYDMN in System 1 and System 2 fields for Peer CRGs.
One of the system definitions specified must represent a management system.
Although you can specify the system definitions in any order, you may find it
helpful if you specify them in the order in which replication occurs during normal
operations. For many users normal replication occurs from a production system to
a backup system, where the backup system is defined as the management
system for MIMIX. For example, if you normally replicate data for an application
from a production system (MEXICITY) to a backup system (CHICAGO) and the
backup system is the management system for the MIMIX cluster, you might name
your data group SUPERAPP MEXICITY CHICAGO.
The Short data group name (DGSHORTNAM) parameter indicates an
abbreviated name used as a prefix to identify jobs associated with a data group.
MIMIX will generate this prefix for you when the default *GEN is used. The short
name must be unique to the MIMIX cluster and cannot be changed after the data
group is created.
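For example, a switchable data group for the scenario described above could be created with a command similar to the following; values other than the illustrative name are the shipped defaults:
    CRTDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) ALWSWT(*YES) TYPE(*ALL)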
Data resource group entry (DTARSCGRP) This parameter identifies the data
resource group entry in which you want the data group to participate. The data
resource group entry provides the association to an application group. When the
specified value is a name or resolves to a name, operations to start, end, or switch are
typically performed at the level of the application group instead of the data group. The
default value, *DFT, will check for the existence of application groups in the
installation library to determine behavior. If there are application groups, the first part
of the three-part data group name is used for the name of the data resource group
entry. When application groups exist, the data resource group entry that is specified,
or to which *DFT resolves, must exist. If application groups do not exist, *DFT is the
same as *NONE and the data group will not be associated with a data resource group
entry. You can also specify the name of an existing data resource group entry.
Data source (DTASRC) This parameter indicates which of the systems in the data
group definition is used as the source of data for replication.
Allow to be switched (ALWSWT) This parameter determines whether the direction
in which data is replicated between systems can be switched. If you plan to use the
data group for high availability purposes, use the default value *YES. This allows you
to use one data group for replicating data in either direction between the two systems.
If you do not allow switching directions, you need to have a second data group with
similar attributes in which the roles of source and target are reversed in order to
support high availability.
Data group type (TYPE) The default value *ALL indicates that the data group can be
used by both user journal and system journal replication processes. This enables you
to use the same data group for all of the replicated data for an application. The value
*ALL is required for user journal replication of IFS objects, data areas, and data
queues. MIMIX Dynamic Apply also supports the value *DB. For additional
information, see “Requirements and limitations of MIMIX Dynamic Apply” on page 111.
Note: In Clustering environments only, the data group value of *PEER is available.
This provides you with support for system values and other system attributes
that MIMIX currently does not support.
Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the
transfer definitions used to communicate between the systems defined by the data
group. The name you specify in these parameters must match the first part of a
transfer definition name. By default, MIMIX uses the name PRIMARY for a value of
the primary transfer definition (PRITFRDFN) parameter and for the first part of the
name of a transfer definition.
If you specify a secondary transfer definition (SECTFRDFN), it is used if the
communications path specified in the primary transfer definition is not available.
Once MIMIX starts using the secondary transfer definition, it continues to use it even
after the primary communication path becomes available again.
Reader wait time (seconds) (RDRWAIT) You can specify the maximum number of
seconds that the send process waits when there are no entries available to process.
Jobs go into a delay state when there are no entries to process. Jobs wait for the time
you specify even when new entries arrive in the journal. A value of 0 uses more
system resources.
Common database parameters (JRNTGT, JRNDFN1, JRNDFN2, ASPGRP1,
ASPGRP2, RJLNK, COOPJRN, NBRDBAPY, DBJRNPRC) These parameters apply
to data groups that can include database files or tracking entries. Data group types of
*ALL or *DB include database files. Data group types of *ALL may also include
tracking entries.
Journal on target (JRNTGT) The default value *YES enables journaling on the
target system, which allows you to switch the direction of a data group more
quickly. For data groups that perform user journal replication, the value *YES is
required to allow target journal inspection.
Replication of files with some types of referential constraint actions may require a
value of *YES. For more information, see “Considerations for LF and PF files” on
page 106.
If you specify *NO, you must ensure that, in the event of a switch to the direction
of replication, you manually start journaling on the target system before allowing
users to access the files. Otherwise, activity against those files may not be
properly recorded for replication.
System 1 journal definition (JRNDFN1) and System 2 journal definition
(JRNDFN2) parameters identify the user journal definitions associated with the
systems defined as System 1 and System 2, respectively, of the data group. The
value *DGDFN indicates that the journal definition has the same name as the data
group definition.
The DTASRC, ALWSWT, JRNTGT, JRNDFN1, and JRNDFN2 parameters interact
to automatically create as much of the journaling environment as possible. The
DTASRC parameter determines whether system 1 or system 2 is the source
system for the data group. When you create the data group definition, if the
journal definition for the source system does not exist, a journal definition is
created. If you specify to journal on the target system and the journal definition for
the target system does not exist, that journal definition is also created. The
names of journal definitions created in this way are taken from the values of the
JRNDFN1 and JRNDFN2 parameters according to which system is considered
the source system at the time they are created. You may need to build the
journaling environment for these journal definitions.
System 1 ASP group (ASPGRP1) and System 2 ASP group (ASPGRP2)
parameters identify the name of the primary auxiliary storage pool (ASP) device
within an ASP group on each system. The value *NONE allows replication from
libraries in the system ASP and basic user ASPs 2-32. Specify a value when you
want to replicate IFS objects from a user journal or when you want to replicate
objects from ASPs 33 or higher. For more information see “Benefits of
independent ASPs” on page 659.
Use remote journal link (RJLNK) This parameter identifies how journal entries
are moved to the target system. The default value, *YES, uses remote journaling
to transfer data to the target system. This value results in the automatic creation of
the journal definitions (CRTJRNDFN command) and the RJ link (ADDRJLNK
command), if needed. The RJ link defines the source and target journal definitions
and the connection between them. When ADDRJLNK is run during the creation of
a data group, the data group transfer definition names are used for the
ADDRJLNK transfer definition parameters.
MIMIX Dynamic Apply requires the value *YES. The value *NO is appropriate
when MIMIX source-send processes must be used.
Cooperative journal (COOPJRN) This parameter determines whether
cooperatively processed operations for journaled objects are performed primarily
by user (database) journal replication processes or system (audit) journal
replication processes. Cooperative processing through the user journal is
recommended and is called MIMIX Dynamic Apply. For newly created data
groups, the shipped default value *DFT resolves to *USRJRN (user journal) when
configuration requirements for MIMIX Dynamic Apply are met. If those
requirements are not met, *DFT resolves to *SYSJRN and cooperative processing
is performed through system journal replication processes.
Number of DB apply sessions (NBRDBAPY) You can specify the number of
apply sessions allowed to process the data for the data group.
DB journal entry processing (DBJRNPRC) This parameter allows you to
specify several criteria that MIMIX will use to filter user journal entries before they
are sent to the database apply (DBAPY) process. Entries that are filtered out are
not replicated. In data groups that use remote journaling, the filtering is performed
by the database reader (DBRDR) process. In data groups configured to use
MIMIX source-send processes, filtering is performed by the database send
(DBSND) process.
Each element of the parameter identifies a criteria that can be set to either *SEND
or *IGNORE. The value *SEND causes the journal entries to be processed and
sent to the database apply process. The value *IGNORE prevents the entries from
being sent to the database apply process. Certain database techniques, such as
keyed replication, may require that an element be set to a specific value.
For data groups which use the DBSND process, the value *IGNORE can minimize
the amount of data sent over a communications path.
The following available elements describe how journal entries are handled by the
database reader (DBRDR) or the database send (DBSND) processes.
• Before images This criteria determines whether before-image journal entries
are filtered out and are not sent to the database apply process. If *IGNORE is
specified and *IMMED is specified for the Commit mode element of Database
apply processing (DBAPYPRC), journal entry before images are processed
and sent to the database apply process. If you use keyed replication, the
before-images are often required and you should specify *SEND. The value
*SEND is also required for the IBM RMVJRNCHG (Remove Journal Change)
command. See “Additional considerations for data groups” on page 245 for
more information.
• For files not in data group This criteria determines whether journal entries for
files that are not configured for replication by the data group are filtered out and
are not sent to the database apply process.
• Generated by MIMIX activity This criteria determines whether journal entries
resulting from the MIMIX database apply process are filtered out and are not
sent to the database apply process. Filtering out these entries may be
necessary in environments which perform bi-directional replication.
• Not used by MIMIX This criteria determines whether journal entries not used by
MIMIX are filtered out and are not sent to the database apply process.
Additional parameters: Use F10 (Additional parameters) to access the following
parameters. These parameters are considered advanced configuration topics.
Remote journaling threshold (RJLNKTHLD) This parameter specifies the backlog
threshold criteria for the remote journal function. When the backlog reaches any of the
specified criterion, the threshold exceeded condition is indicated in the status of the
RJ link. The threshold can be specified as a time difference, a number of journal
entries, or both. When a time difference is specified, the value is the amount of time, in
minutes, between the timestamp of the last source journal entry and the timestamp of
the last remote journal entry. When a number of journal entries is specified, the value
is the number of journal entries that have not been sent from the local journal to the
remote journal. If *NONE is specified for a criterion, that criterion is not considered
when determining whether the backlog has reached the threshold.
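As an illustrative sketch only, a threshold of 10 minutes or 250000 unsent journal entries might be set with a command similar to the following; the element order within RJLNKTHLD is an assumption:
    CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) RJLNKTHLD(10 250000)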
Synchronization check interval (SYNCCHKITV) This parameter, which is only valid
for database processing, allows you to specify how many before-image entries to
process between synchronization checks. For MIMIX to use this feature, the journal
image file entry option (FEOPT parameter) must allow before-image journaling
(*BOTH). When you specify a value for the interval, a synchronization check entry is
sent to the apply process on the target system. The apply process compares the
before-image to the image in the file (the entire record, byte for byte). If there is a
synchronization problem, MIMIX puts the data group file entry on hold and stops
applying journal entries. The synchronization check transactions still occur even if
you specify to ignore before-images in the DB journal entry processing (DBJRNPRC)
parameter.
Time stamp interval (TSPITV) This parameter, which is only valid for database
processing, allows you to specify the number of entries to process before MIMIX
creates a time stamp entry. Time stamps are used to evaluate performance.
Note: The TSPITV parameter does not apply for remote journaling (RJ) data groups.
Verify interval (VFYITV) This parameter allows you to specify the number of journal
transactions (entries) to process before MIMIX performs additional processing.
When the value specified is reached, MIMIX verifies that the communications path
between the source system and the target system is still active and that the send and
receive processes are successfully processing transactions. A higher value uses less
system resources. A lower value provides more timely reaction to error conditions.
Larger, high-volume systems should have higher values. This value also affects how
often the status is updated with the “Last read” entries. A lower value results in more
accurate status information.
Data area polling interval (DTAARAITV) This parameter specifies the number of
seconds that the data area poller waits between checks for changes to data areas.
The poller process is only used when configured data group data area entries exist.
The preferred methods of replicating data areas require that data group object entries
be used to identify data areas. When object entries identify data areas, the value
specified in them for cooperative processing (COOPDB) determines whether the data
areas are processed through the user journal with advanced journaling, or through
the system journal.
Journal at creation (JRNATCRT) This parameter specifies whether to start
journaling on new objects of type *FILE, *DTAARA, and *DTAQ when they are
created. The decision to start journaling for a new object is based on whether the data
group is configured to cooperatively process any object of that type in a library. All
new objects of the same type are journaled, including those not replicated by the data
group.
If multiple data groups include the same library in their configurations, only allow one
data group to use journal at object creation (*YES or *DFT). The default for this
parameter is *DFT, which allows MIMIX to determine the objects to journal at creation.
Note: There are some IBM library restrictions identified within the requirements for
implicit starting of journaling described in “What objects need to be journaled”
on page 343. For additional information, see “Processing of newly created
files and objects” on page 126.
Parameters for automatic retry processing: MIMIX may use delay retry cycles
when performing system journal replication to automatically retry processing an object
that failed due to a locking condition or an in-use condition. It is normal for some
pending activity entries to undergo delay retry processing—for example, when a
conflict occurs between replicated objects in MIMIX and another job on the system.
The following parameters define the scope of two retry cycles:
Number of times to retry (RTYNBR) This parameter specifies the number of
attempts to make during a delay retry cycle.
First retry delay interval (RTYDLYITV1) This parameter specifies the amount of
time, in seconds, to wait before retrying a process in the first (short) delay retry
cycle.
Second retry delay interval (RTYDLYITV2) specifies the amount of time, in
seconds, to wait before retrying a process in the second (long) delay retry cycle.
This is only used after all the retries for the RTYDLYITV1 parameter have been
attempted.
After the initial failed save attempt, MIMIX delays for the number of seconds specified
for the First retry delay interval (RTYDLYITV1) before retrying the save operation.
This is repeated for the specified number of times (RTYNBR).
If the object cannot be saved after all attempts in the first cycle, MIMIX enters the
second retry cycle. In the second retry cycle, MIMIX uses the number of seconds
specified in the Second retry delay interval (RTYDLYITV2) parameter and repeats the
save attempt for the specified number of times (RTYNBR).
If the object identified by the entry is in use (*INUSE) after the first and second retry
cycle attempts have been exhausted, a third retry cycle is attempted if the Automatic
object recovery policy is enabled. The values in effect for the Number of third
delay/retries policy and the Third retry interval (min.) policy determine the scope of the
third retry cycle. After all attempts have been performed, if the object still cannot be
processed because of contention with other jobs, the status of the entry will be
changed to *FAILED.
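For example, with the hypothetical values in the following command, MIMIX would retry a failed object five times at 10-second intervals, then five more times at 300-second intervals, before any third retry cycle governed by the policies described above:
    CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) RTYNBR(5) RTYDLYITV1(10) RTYDLYITV2(300)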
File and tracking entry options (FEOPT) This parameter specifies default options
that determine how MIMIX handles file entries and tracking entries for the data group.
All database file entries, object tracking entries, and IFS tracking entries defined to
the data group use these options unless they are explicitly overridden by values
specified in data group file or object entries. File entry options in data group object
entries enable you to set values for files and tracking entries that are cooperatively
processed.
The options are as follows:
• Journal image This option allows you to control the kinds of record images that are written to the journal when data updates are made to database file records, IFS stream files, data areas, or data queues. The default value *AFTER causes only after-images to be written to the journal. The value *BOTH causes both before-images and after-images to be written to the journal. Some database techniques, such as keyed replication, may require the use of both before-images and after-images. *BOTH is also required for the IBM RMVJRNCHG (Remove Journal Change) command. See “Additional considerations for data groups” on page 245 for more information.
• Omit open/close entries This option allows you to specify whether open and close
entries are omitted from the journal. The default value *YES indicates that open
and close operations on file members or IFS tracking entries defined to the data
group do not create open and close journal entries and are therefore omitted from
the journal. If you specify *NO, journal entries are created for open and close
operations and are placed in the journal.
• Replication type This option allows you to specify the type of replication to use for
database files defined to the data group. The default value *POSITION indicates
that each file is replicated based on the position of the record within the file.
Positional replication uses the values of the relative record number (RRN) found
in the journal entry header to locate a database record that is being updated or
deleted. MIMIX Dynamic Apply requires the value *POSITION.
The value *KEYED indicates that each file is replicated based on the value of the
primary key defined to the database file. The value of the key is used to locate a
database record that is being deleted or updated. MIMIX strongly recommends
that any file configured for keyed replication also be enabled for both before-
image and after-image journaling. Files defined using keyed replication must have
at least one unique access path defined. For additional information, see “Keyed
replication” on page 385.
• Lock member during apply This option allows you to choose the type of lock the
database apply process will use for the data of replicated file members on the
target node. The default value allows the database apply process to obtain an
exclusive, allow read (*EXCLRD) lock on the data of file members being
processed to prevent other jobs from performing updates, thereby ensuring
access to complete replication. Locking occurs when the apply process is started
and affects members whose file entry status is active. Database apply processing
will also lock objects as needed to replicate changes from the source node. This
option does not apply to cooperatively processed data areas, data queues, or IFS
objects.
Note: If the value *NONE is specified, the apply process may hold *SHRUPD
locks on data for improved performance.
• Apply session With this option, you can assign a specific apply session for
processing files defined to the data group. The default value *ANY indicates that
MIMIX determines which apply session to use and performs load balancing.
Notes:
• Any changes made to the apply session option are not effective until the data
group is started with *YES specified for the clear pending and clear error
parameters.
• For IFS and object tracking entries, only apply session A is valid. For additional
information see “Database apply session balancing” on page 90.
• Collision resolution This option determines how data collisions are resolved. The
default value *HLDERR indicates that a file is put on hold if a collision is detected.
The value *AUTOSYNC indicates that MIMIX will attempt to automatically
synchronize the source and target file. You can also specify the name of the
collision resolution class (CRCLS) to use. A collision resolution class allows you to
specify how to handle a variety of collision types, including calling exit programs to
handle them. See the online help for the Create Collision Resolution Class
(CRTCRCLS) command for more information.
Note: The *AUTOSYNC value should not be used if the Automatic database
recovery policy is enabled.
• Disable triggers during apply This option determines if MIMIX should disable any
triggers on physical files during the database apply process. The default value
*YES indicates that triggers should be disabled by the database apply process
while the file is opened.
• Process trigger entries This option determines if MIMIX should process any
journal entries that are generated by triggers. The default value *YES indicates
that journal entries generated by triggers should be processed.
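As an illustration, the following command sets the first three file entry options described above (journal image, omit open/close entries, and replication type). This sketch assumes the FEOPT elements are specified positionally in the order listed above and uses an illustrative data group name; verify the element order by prompting the command with F4:
CHGDGDFN DGDFN(MYDG SYS1 SYS2) FEOPT(*BOTH *YES *POSITION)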
Database reader/send threshold (DBRDRTHLD) This parameter specifies the
backlog threshold criteria for the database reader (DBRDR) process. When the
backlog reaches any of the specified criteria, the threshold exceeded condition is
indicated in the status of the DBRDR process. If the data group is configured for
MIMIX source-send processing instead of remote journaling, this threshold applies to
the database send (DBSND) process. The threshold can be specified as time, journal
entries, or both. When time is specified, the value is the amount of time, in minutes,
between the timestamp of the last journal entry read by the process and the
timestamp of the last journal entry in the journal. When a journal entry quantity is
specified, the value is the number of journal entries that have not been read from the
journal. If *NONE is specified for a criterion, that criterion is not considered when
determining whether the backlog has reached the threshold.
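For example, to indicate a threshold condition when the DBRDR backlog exceeds either 15 minutes or 500,000 unread journal entries, a request might look like the following sketch. The element order (time, then journal entries) is assumed from the description above, and the data group name is illustrative; the same form applies to the Object send threshold (OBJSNDTHLD) parameter described later:
CHGDGDFN DGDFN(MYDG SYS1 SYS2) DBRDRTHLD(15 500000)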
Database apply processing (DBAPYPRC) This parameter allows you to specify
defaults for operations associated with the database apply processes. Each
configured apply session uses the values specified in this parameter. The areas for
which you can specify defaults are as follows:
• Force data interval You can specify the number of records that are processed
before MIMIX forces the apply process information to disk from cache memory. A
lower value provides easier recovery for major system failures. A higher value
provides for more efficient processing.
• Maximum open members You can specify the maximum number of members
(with journal transactions to be applied) that the apply process can have open at
one time. Once the limit specified is reached, the apply process selectively closes
one file before opening a new file. A lower value reduces disk usage by the apply
process. A higher value provides more efficient processing because MIMIX does
not open and close files as often.
• Threshold warning (1000s) You can specify the number of entries, in thousands, that the apply process can have waiting to be applied before a warning message is sent. (Prior to service pack 7.1.07.00, the specified value was not multiplied by 1000.) When the threshold is reached, the threshold exceeded condition is indicated in the status of the database apply process and a message is sent to the primary and secondary message queues.
• Apply history log spaces You can specify the maximum number of history log
spaces that are kept after the journal entries are applied. Any value other than
zero (0) affects performance of the apply processes.
• Keep journal log user spaces You can specify the maximum number of journal log
spaces to retain after the journal entries are applied. Log user spaces are
automatically deleted by MIMIX. Only the number of user spaces you specify are
kept.
• Size of log user spaces (MB) You can specify the size of each log space (in
megabytes) in the log space chain. Log spaces are used as a staging area for
journal entries before they are applied. Larger log spaces provide better
performance.
• Commit mode You can specify when to apply journal entries that are under
commitment control. Default configuration values result in delaying the apply of
transactions under commitment control until the journal entry that indicates the
commit cycle completed is processed. Many users can benefit from using
immediate commit mode. For detailed information, see “Immediately applying
committed transactions” on page 367.
Object processing (OBJPRC) This parameter allows you to specify defaults for
object replication. The areas for which you can specify defaults are as follows:
• Object default owner You can specify the name of the default owner for objects
whose owning user profile does not exist on the target system. The product
default uses QDFTOWN for the owner user profile.
• DLO transmission method You can specify the method used to transmit the DLO
content and attributes to the target system. The value *OPTIMIZED uses IBM i
APIs and does not support doclists. The value *SAVRST uses IBM i save and restore
commands.
• IFS transmission method You can specify the method used to transmit IFS object
content and attributes to the target system. The default value *OPTIMIZED uses
IBM i APIs for better performance. The value *SAVRST uses IBM i save and
restore commands.
Note: It is recommended that you use the *OPTIMIZED method of IFS transmission only in environments where a high volume of IFS activity results in persistent replication backlogs. The IBM i save and restore method guarantees that all attributes of an IFS object are replicated.
• User profile status You can specify the user profile Status value for user profiles when they are replicated. This allows you to replicate user profiles either with the same status as on the source system or in an enabled or disabled status for normal operations. If operations are switched to the backup system, user profiles can then be enabled or disabled as needed as part of the switching process.
• Keep deleted spooled files You can specify whether to retain replicated spooled
files on the target system after they have been deleted from the source system.
When you specify *YES, the replicated spooled files are retained on the target
system after they are deleted from the source system. MIMIX does not perform
any clean-up of these spooled files. You must delete them manually when they
are no longer needed. If you specify *NO, the replicated spooled files are deleted
from the target system when they are deleted from the source system.
• Keep DLO system object name You can specify whether the DLO on the target
system is created with the same system object name as the DLO on the source
system. The system object name is only preserved if the DLO is not being
redirected during the replication process. If the DLO from the source system is
being directed to a different name or folder on the target system, then the system
object name will not be preserved.
• Object retrieval delay You can specify the amount of time, in seconds, to wait after
an object is created or updated before MIMIX packages the object. This delay
provides time for your applications to complete their access of the object before
MIMIX begins packaging the object.
• Object send prefix The prefix specified determines whether the data group uses a
dedicated job for the object send process or a job shared by multiple data groups.
All data groups that specify the value *SHARED will share the same job on the
source system. If you specify a three-character prefix, only other data groups that
specify the same prefix will share the object send job with this prefix. To change
this value, end the data group, change the value specified, and restart the data
group.
Note: A shared object send job can process journal entries for objects within
SYSBAS or within only one independent ASP. In environments with
independent ASPs, each system within the set of data groups sharing the
same object send job must be identified consistently within those data
groups in the appropriate ASP group parameter (ASPGRP1 or
ASPGRP2). This is required regardless of whether a system is currently
the source or target for the data groups.
Object send threshold (OBJSNDTHLD) This parameter specifies the backlog
threshold criteria for the object send (OBJSND) process. When the backlog reaches
any of the specified criteria, the threshold exceeded condition is indicated in the
status of the OBJSND process. The threshold can be specified as time, journal
entries, or both. When time is specified, the value is the amount of time, in minutes,
between the timestamp of the last journal entry read by the process and the
timestamp of the last journal entry in the journal. When a journal entry quantity is
specified, the value is the number of journal entries that have not been read from the
journal. If *NONE is specified for a criterion, that criterion is not considered when
determining whether the backlog has reached the threshold.
Object retrieve processing (OBJRTVPRC) This parameter allows you to specify the
minimum and maximum number of jobs allowed to handle object retrieve requests
and the threshold at which the number of pending requests queued for processing
causes additional temporary jobs to be started. The specified minimum number of
jobs will be started when the data group is started. During periods of peak activity, if
the number of pending requests exceeds the backlog jobs threshold, additional jobs,
up to the maximum, are started to handle the extra work. When the backlog is
handled and activity returns to normal, the extra jobs will automatically end. If the
backlog reaches the warning message threshold, the threshold exceeded condition is
indicated in the status of the object retrieve (OBJRTV) process. If *NONE is specified
for the warning message threshold, the process status will not indicate that a backlog
exists.
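As a sketch, the following request allows between 2 and 10 object retrieve jobs, starts additional jobs when the backlog exceeds the job threshold of 50 requests, and indicates a threshold condition at 200 pending requests. The positional element order (minimum jobs, maximum jobs, backlog jobs threshold, warning message threshold) is inferred from the description above and should be verified by prompting; the data group name is illustrative:
CHGDGDFN DGDFN(MYDG SYS1 SYS2) OBJRTVPRC(2 10 50 200)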
Container send processing (CNRSNDPRC) This parameter allows you to specify
the minimum and maximum number of jobs allowed to handle container send
requests and the threshold at which the number of pending requests queued for
processing causes additional temporary jobs to be started. The specified minimum
number of jobs will be started when the data group is started. During periods of peak
activity, if the number of pending requests exceeds the backlog jobs threshold,
additional jobs, up to the maximum, are started to handle the extra work. When the
backlog is handled and activity returns to normal, the extra jobs will automatically end.
If the backlog reaches the warning message threshold, the threshold exceeded
condition is indicated in the status of the container send (CNRSND) process. If
*NONE is specified for the warning message threshold, the process status will not
indicate that a backlog exists.
Object apply processing (OBJAPYPRC) This parameter allows you to specify the
minimum and maximum number of jobs allowed to handle object apply requests and
the threshold at which the number of pending requests queued for processing triggers
additional temporary jobs to be started. The specified minimum number of jobs will be
started when the data group is started. During periods of peak activity, if the number
of pending requests exceeds the backlog threshold, additional jobs, up to the
maximum, are started to handle the extra work. When the backlog is handled and
activity returns to normal, the extra jobs will automatically end. You can also specify a warning message threshold that indicates the number of pending requests waiting in the queue for processing before a warning message is sent. When
the threshold is reached, the threshold exceeded condition is indicated in the status of
the object apply process and a message is sent to the primary and secondary
message queues.
User profile for submit job (SBMUSR) This parameter allows you to specify the
name of the user profile used to submit jobs. The default value *JOBD indicates that
the user profile named in the specified job description is used for the job being
submitted. The value *CURRENT indicates that the same user profile used by the job
that is currently running is used for the submitted job.
Send job description (SNDJOBD) This parameter allows you to specify the name
and library of the job description used to submit send jobs. The product default uses
MIMIXSND in library MIMIXQGPL for the send job description.
Apply job description (APYJOBD) This parameter allows you to specify the name
and library of the job description used to submit apply requests. The product default
uses MIMIXAPY in library MIMIXQGPL for the apply job description.
Reorganize job description (RGZJOBD) This parameter, used by database
processing, allows you to specify the name and library of the job description used to
submit reorganize jobs. The product default uses MIMIXRGZ in library MIMIXQGPL
for the reorganize job description.
Synchronize job description (SYNCJOBD) This parameter, used by database processing, allows you to specify the name and library of the job description used to submit synchronize jobs. The product default uses MIMIXSYNC in library MIMIXQGPL for the synchronize job description. This is valid for any synchronize command that does not have a JOBD parameter on the display.
Job restart time (RSTARTTIME) MIMIX data group jobs restart daily to maintain the
MIMIX environment. You can change the time at which these jobs restart. The source
or target role of the system affects the results of the time you specify on a data group
definition. Results may also be affected if you specify a value that uses the job restart
time in a system definition defined to the data group. Changing the job restart time is
considered an advanced technique.
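For example, to move the daily restart of data group jobs to 3:30 a.m., a request might look like the following sketch. The time format and data group name are illustrative; remember that the system role and any restart value in the system definition can affect the result:
CHGDGDFN DGDFN(MYDG SYS1 SYS2) RSTARTTIME(033000)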
1. Recovery windows and recovery points are supported with the MIMIX CDP™ feature, which
requires an additional access code.
Creating a data group definition
Shipped default values for the Create Data Group Definition (CRTDGDFN) command
result in data groups configured for MIMIX Dynamic Apply. These data groups use
remote journaling as an integral part of the user journal replication processes. For
additional information see Table 11 in “Considerations for LF and PF files” on
page 106. For information about command parameters, see “Tips for data group
parameters” on page 234.
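For instance, a minimal create request that accepts the shipped defaults, and therefore results in a data group configured for MIMIX Dynamic Apply, might look like the following sketch. The data group and system names are illustrative, and the description prompt is assumed to map to the TEXT parameter:
CRTDGDFN DGDFN(MYDG SYS1 SYS2) TEXT('Example data group')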
To create a data group, do the following:
1. To access the appropriate command, do the following:
a. From the MIMIX Basic Main Menu, type 11 (Configuration menu) and press Enter.
b. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.
c. From the Work with Data Group Definitions display, type a 1 (Create) next to
the blank line at the top of the list area and press Enter.
2. The Create Data Group Definition (CRTDGDFN) display appears. Specify a valid
three-part name at the Data group definition prompts.
Note: Data group names cannot be UPSMON or begin with the characters MM.
3. For the remaining prompts on the display, verify the values shown are what you
want. If necessary, change the values.
a. If you want a specific prefix to be used for jobs associated with the data group,
specify a value at the Short data group name prompt. Otherwise, MIMIX will
generate a prefix.
b. The default value for the Data resource group entry prompt will use the data
group name to create an association, through a data resource group entry,
between the data group and an application group when application groups are
configured within the installation. To have the data group associated with a
different data resource group entry, specify a name. When application groups
exist but you want to prevent the data group from participating in them, specify
*NONE.
c. Ensure that the value of the Data source prompt represents the system that
you want to use as the source of data to be replicated.
d. Verify that the value of the Allow to be switched prompt is what you want.
e. Verify that the value of the Data group type prompt is what you need. MIMIX
Dynamic Apply requires either *ALL or *DB. Legacy cooperative processing
and user journal replication of IFS objects, data areas, and data queues
require *ALL.
f. Verify that the value of the Primary transfer definition prompt is what you want.
g. If you want MIMIX to have access to an alternative communications path,
specify a value for the Secondary transfer definition prompt.
h. Verify that the value of the Reader wait time (seconds) prompt is what you
want.
i. Press Enter.
4. If you specified *OBJ for the Data group type, skip to Step 9.
5. The Journal on target prompt appears on the display. Verify that the value shown
is what you want and press Enter.
Note: If you specify *YES and you require that the status of journaling on the
target system is accurate, you should perform a save and restore
operation on the target system prior to loading the data group file entries. If
you are performing your initial configuration, however, it is not necessary
to perform a save and restore operation. You will synchronize as part of
the configuration checklist.
6. More prompts appear on the display that identify journaling information for the
data group. You may need to use the Page Down key to see the prompts. Do the
following:
a. Ensure that the values of System 1 journal definition and System 2 journal
definition identify the journal definitions you need.
Notes:
• If you have not journaled before, the value *DGDFN is appropriate. If you
have an existing journaling environment that you have identified to MIMIX in
a journal definition, specify the name of the journal definition.
• If you only see one of the journal definition prompts, you have specified *NO
for both the Allow to be switched prompt and the Journal on target prompt.
The journal definition prompt that appears is for the source system as
specified in the Data source prompt.
b. If any objects to replicate are located in an auxiliary storage pool (ASP) group
on either system, specify values for System 1 ASP group and System 2 ASP
group as needed. The ASP group name is the name of the primary ASP device
within the ASP group.
c. The default for the Use remote journal link prompt is *YES, which is required for
MIMIX Dynamic Apply and preferred for other configurations. MIMIX creates a
transfer definition and an RJ link, if needed. To create a data group definition
for a source-send configuration, change the value to *NO.
d. At the Cooperative journal (COOPJRN) prompt, specify the journal for
cooperative operations. For new data groups, the value *DFT automatically
resolves to *USRJRN when Data group type is *ALL or *DB and Remote
journal link is *YES. The value *USRJRN processes through the user
(database) journal while the value *SYSJRN processes through the system
(audit) journal.
7. At the Number of DB apply sessions prompt, specify the number of apply sessions
you want to use.
8. Verify that the values shown for the DB journal entry processing prompts are what
you want.
Note: *SEND is required for the IBM RMVJRNCHG (Remove Journal Change)
command. See “Additional considerations for data groups” on page 245
for more information.
9. At the Description prompt, type a text description of the data group definition,
enclosed in apostrophes.
10. Do one of the following:
• To accept the basic data group configuration, press Enter. Most users can
accept the default values for the remaining parameters. The data group is
created when you press Enter.
• To access prompts for advanced configuration, press F10 (Additional
Parameters) and continue with the next step.
Advanced Data Group Options: The remaining steps of this procedure are only
necessary if you need to access options for advanced configuration topics. The
prompts are listed in the order they appear on the display. Because IBM i does not
allow additional parameters to be prompt-controlled, you will see all parameters
regardless of the value specified for the Data group type prompt.
11. Specify the values you need for the following prompts associated with user journal
replication:
• Remote journaling threshold
• Synchronization check interval
• Time stamp interval
• Verify interval
• Journal at creation
12. Specify the values you need for the following prompts associated with system
journal replication:
• Number of times to retry
• First retry delay interval
• Second retry delay interval
13. Specify the values you need for each of the prompts on the File and tracking ent.
opts (FEOPT) parameter.
Notes:
• Replication type must be *POSITION for MIMIX Dynamic Apply.
• Apply session A is used for IFS objects, data areas, and data queues that are
configured for user journal replication. For more information see “Database
apply session balancing” on page 90.
• The journal image value *BOTH is required for the IBM RMVJRNCHG
(Remove Journal Change) command. See “Additional considerations for data
groups” on page 245 for more information.
14. Specify the values you need for each element of the following parameters:
Changing a data group definition
For information about command parameters, see “Tips for data group parameters” on
page 234.
To change a data group definition, do the following:
1. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
2. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to
see additional prompts.
3. Make any changes you need for the values of the prompts. Page Down to see
more of the prompts.
Note: If you change the Number of DB apply sessions prompt (NBRDBAPY),
you need to start the data group specifying *YES for the Clear pending
prompt (CLRPND).
4. If you need to access advanced functions, press F10 (Additional parameters).
Make any changes you need for the values of the prompts.
5. When you are ready to accept the changes, press Enter.
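For example, after changing the number of apply sessions, the subsequent start might look like the following sketch, where CLRPND corresponds to the Clear pending prompt mentioned in the note above and the data group name is illustrative:
STRDG DGDFN(MYDG SYS1 SYS2) CLRPND(*YES)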
Fine-tuning backlog warning thresholds for a data group
3. The Change Data Group Definition (CHGDGDFN) display appears. Press F9 (All
parameters), then Page Down multiple times to locate the Object processing
parameter.
4. At the Object send prefix prompt, do one of the following:
• To use the default shared job, specify *SHARED. The data group will use the
default shared job on its current source system.
• To use a shared job that is limited to a subset of data groups, specify a three-
character prefix. Only the data groups that you explicitly set to use the same
prefix will share the same object send job.
5. Press Enter.
6. Ensure the following parameters have the same value for all data groups that
share the same object send job:
• System 1 ASP group
• System 2 ASP group
7. Start the data group.
Table 31 lists the shipped values for thresholds available in a data group definition,
identifies the risk associated with a backlog for each replication process, and
identifies available options to address a persistent threshold condition. For each data
group, you may need to use multiple options or adjust one or more threshold values
multiple times before finding an appropriate setting.
Table 31. Shipped threshold values for replication processes and the risk associated with a backlog

Remote journaling threshold (shipped value: 10 minutes)
Risk: All journal entries in the backlog for the remote journaling function exist only in the source system journal and are waiting to be transmitted to the remote journal. These entries cannot be processed by MIMIX user journal replication processes and are at risk of being lost if the source system fails. After the source system becomes available again, journal analysis may be required.
Options: 3, 5

Database reader/send threshold (shipped value: 10 minutes)
Risk: For data groups that use remote journaling, all journal entries in the database reader backlog are physically located on the target system but MIMIX has not started to replicate them. If the source system fails, these entries need to be read and applied before switching. For data groups that use MIMIX source-send processing, all journal entries in the database send backlog are waiting to be read and to be transmitted to the target system. The backlogged journal entries exist only in the source system and are at risk of being lost if the source system fails. After the source system becomes available again, journal analysis may be required.
Options: 2, 3, 5

Database apply threshold warning (1000s) (shipped value: 100,000 entries; see the note below)
Risk: All of the entries in the database apply backlog are waiting to be applied to the target system. If the source system fails, these entries need to be applied before switching. A large backlog can also affect performance.
Options: 2, 3, 5

Object send threshold (shipped value: 10 minutes)
Risk: All of the journal entries in the object send backlog exist only in the system journal on the source system and are at risk of being lost if the source system fails. MIMIX may not have determined all of the information necessary to replicate the objects associated with the journal entries. As this backlog clears, subsequent processes may have backlogs as replication progresses. If the object send process is shared among multiple data groups and the backlog is persistent, it may be necessary to reduce the number of data groups sharing the same object send process.
Options: 2, 3, 4, 5

Object retrieve warning message threshold (shipped value: 100 entries)
Risk: All of the objects associated with journal entries in the object retrieve backlog are waiting to be packaged so they can be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses.
Options: 1, 2, 3, 5

Container send warning message threshold (shipped value: 100 entries)
Risk: All of the packaged objects associated with journal entries in the container send backlog are waiting to be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses.
Options: 1, 2, 3, 5

Object apply warning message threshold (shipped value: 100 requests)
Risk: All of the entries in the object apply backlog are waiting to be applied to the target system. If the source system fails, these entries need to be applied before switching. Any related objects for which an automatic recovery action was collecting data may be lost.
Options: 1, 2, 3, 5

Note: The database apply threshold appears as 100 on the CRTDGDFN command beginning with service pack 7.1.07.00, where the threshold is specified as a number which MIMIX multiplies by 1000.
The following options are available, listed in order of preference. Some options are
not available for all thresholds.
Option 1 - Adjust the number of available jobs. This option is available only for the
object retrieve, container send, and object apply processes. Each of these processes has a configurable minimum and maximum number of jobs, a threshold at which
more jobs are started, and a warning message threshold. If the number of entries in a
backlog divided by the number of active jobs exceeds the job threshold, extra jobs are
automatically started in an attempt to address the backlog. If the backlog reaches the
higher value specified in the warning message threshold, the process status reflects
the threshold condition. If the process frequently shows a threshold status, the
maximum number of jobs may be too low or the job threshold value may be too high.
Adjusting either value in the data group configuration can result in more throughput.
Option 2 - Temporarily increase job performance. This option is available for all
processes except the RJ link. Use work management functions to increase the
resources available to a job by increasing its run priority or its timeslice (CHGJOB
command). These changes are effective only for the current instance of the job. The
changes do not persist if the job is ended manually or by nightly cleanup operations
resulting from the configured job restart time (RSTARTTIME) on the data group
definition.
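For instance, the following IBM i command raises the run priority and timeslice of a single apply job; the qualified job name and the values shown are illustrative:
CHGJOB JOB(123456/MIMIXOWN/APYA) RUNPTY(15) TIMESLICE(2000)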
Option 3 - Change threshold values or add criterion. All processes support
changing the threshold value. In addition, if the quantity of entries is more of a
concern than time, some processes support specifying additional threshold criteria
not used by shipped default settings. For the remote journal, database reader (or
database send), and object send processes, you can adjust the threshold so that a number of journal entries is used as a criterion instead of, or in conjunction with, a time value. If both time and entries are specified, the first criterion reached will trigger the
threshold condition. Changes to threshold values are effective the next time the
process status is requested.
Option 4 - Adjust the number of object send jobs. This option is only available for
the object send process. Determine if the data group uses a shared object send job. If
the threshold is persistent, it may be necessary to reduce the number of data groups
sharing the same object send process. For details, see “Optimizing performance for a
shared object send process” on page 254.
Option 5 - Get assistance. If you tried the other options and threshold conditions
persist, contact your Certified MIMIX Consultant for assistance. It may be necessary
to change configurations to adjust what is defined to each data group or to make
permanent work management changes for specific jobs.
Optimizing performance for a shared object send process
Conversely, a consistently low difference between the last read entry and the current
journal entry can indicate that more data groups may be able to share the object send
job.
The optimal number of data groups sharing an object send process is unique to every
environment. Determining the optimal number of data groups that can share an object
send process in your environment may require incremental adjustments. Add or
remove small numbers of data groups at a time to or from a shared object send
process and monitor the impact on performance and throughput.
Factors that affect performance of a shared object send process include:
• The number of data groups sharing the same object send job
• The type of data replicated by the data groups sharing the object send job. It may
be beneficial to share an object send process among data groups that replicate
only IFS objects or only DLO objects.
3. The Change Data Group Definition (CHGDGDFN) display appears. Press F9 (All
parameters), then Page Down multiple times to locate the Object processing
parameter.
4. At the Object send prefix prompt, do one of the following:
• To use a shared job that is limited to a subset of data groups, specify a three-
character prefix. Only the data groups that you explicitly set to use the same
prefix will share the same object send job.
• To use a job dedicated to the data group, type *DGDFN.
5. Press Enter.
6. To make the change effective, end and re-start the data group.
Additional options: working with definitions
The procedures for performing common functions, such as copying, displaying, and
renaming, are very similar for all types of definitions used by MIMIX. The generic
procedures in this topic can be used for copying, deleting, displaying, and printing
definitions. Specific procedures are included for renaming each type of definition.
The topics in this chapter include:
• “Copying a definition” on page 257 provides a procedure for copying a system
definition, transfer definition, journal definition, or a data group definition.
• “Deleting a definition” on page 258 provides a procedure for deleting a system
definition, transfer definition, journal definition, or a data group definition.
• “Renaming definitions” on page 259 provides procedures for renaming definitions, such as renaming a system definition, which is typically done as a result of a change in software.
Copying a definition
Use this procedure on a management system to copy a system definition, transfer
definition, journal definition, or a data group definition.
Notes for data group definitions:
• The data group entries associated with a data group definition are not copied.
• Before you copy a data group definition, ensure that activity is ended for the
definition to which you are copying.
Notes for journal definitions:
• The journal definition identified in the From journal definition prompt must exist before it can be copied. The journal definition identified in the To journal definition prompt cannot exist when you specify *NO for the Replace definition prompt.
• If you specify *YES for the Replace definition prompt, the journal definition identified in the To journal definition prompt must exist. It is possible to introduce conflicts in your configuration when replacing an existing journal definition. These conflicts are automatically resolved or an error message is sent when the journal environment for the definition is built.
To copy a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 93 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 3 (Copy) next to the definition you want and press Enter.
4. The Copy display for the definition type you selected appears. At the To definition
prompt, specify a name for the definition to which you are copying information.
5. If you are copying a journal definition or a data group definition, the display has
additional prompts. Verify that the values of the prompts are what you want.
6. If you are copying a system definition, specify the type of the new system
definition at the To system type prompt.
7. The value *NO for the Replace definition prompt prevents you from replacing an
existing definition. If you want to replace an existing definition, specify *YES.
8. To copy the definition, press Enter.
Deleting a definition
Use this procedure on a management system to delete a system definition, transfer
definition, journal definition, or a data group definition.
Attention:
When you delete a system or data group definition, information
associated with the definition is also deleted. Ensure that the
definition you delete is not being used for replication and be aware
of the following:
• If you delete a system definition, all other configuration
elements associated with that definition are deleted. This
includes journal definitions, transfer definitions, and data group
definitions with their associated data group entries. Journal
definitions and RJ links associated with system managers are
also deleted.
• If you delete a data group definition, all of its associated data
group entries are also deleted.
• The delete function does not clean up any records for files in
the error/hold file.
When you delete a journal definition, only the definition is deleted.
The objects being journaled, the journal, and the journal receivers
are not deleted. Journal definitions for internal MIMIX processes
cannot be deleted by users.
To delete a definition, do the following:
1. If you are deleting a system definition, first do the following to end activity that uses the definition:
a. From the Work with Systems (WRKSYS) display, type option 8 (Work with data
groups) next to the system that you want deleted and press Enter.
The result is a list of data groups for the system you selected.
b. Optional: Type a 17 (File entries) next to the data group and press Enter. On
the Work with DG File Entries display, use option 10 (End journaling) to end
journaling for files associated with the data group.
c. Repeat Step b for each additional data group that has files requiring journaling to be ended.
d. From the Work with Data Groups display, use option 10 (End data group) next
to all data groups for that system.
e. Before deleting a system definition, ensure all managers for the system
definition are ended. From the Work with Systems display, type a 10 (End) next
to the system definition you want and press Enter.
f. The End MIMIX Managers display appears. Specify the value for the type of
manager you want to end at the Manager prompt and press Enter. The
selected managers are ended.
g. If using application groups, remove the node from the recovery domain. From
the Work with application groups (WRKAG) display, take option 12 (Work with
node entries) and then option 4 (Remove) to remove the system.
2. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
3. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
4. The "Work with" display for the definition type appears. Type a 4 (Delete) next to the definition you want and press Enter.
5. When deleting system definitions, transfer definitions, or journal definitions, a
confirmation display appears with a list of definitions to be deleted. To delete the
definitions press F16.
Renaming definitions
The procedures for renaming a system definition, transfer definition, journal definition,
or data group definition must be run from a management system.
Attention:
Before you rename any definition, ensure that all other
configuration elements related to it are not active.
Attention:
Before you rename a system definition, ensure that MIMIX activity
is ended, that remote journal links used by the MIMIX environment
are ended, and that VSP servers that run on or have instances
which connect to the affected system are ended.
These instructions use MIMIX menus. See “Accessing the MIMIX Main Menu” on
page 93 for how to use them.
Do not attempt to start renaming multiple system definitions at the same time. If
you have multiple system definitions to rename, these instructions identify when it is
safe to begin renaming the next system definition.
Use these instructions to rename a single system definition. Variations in steps
needed for a management system or a network system are identified. Do the
following:
1. If you are using Vision Solutions Portal (VSP), you must end any VSP server that
runs on the system whose system definition will be renamed, or any VSP server
that connects to a product instance in which a system definition will be renamed.
Use the appropriate command, as follows:
• If the VSP server runs on an IBM i platform, use the following command:
VSI001LIB/ENDVSISVR
• If the VSP server runs on a Windows platform, from the Windows Start menu,
select:
All Programs > Vision Solutions Portal > Stop Server and click Stop Server.
2. From a management system, use the following command to perform a controlled
end of the MIMIX installation:
ENDMMX
It may take some time for all processes to end.
3. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems)
and press Enter.
When all processes have ended you should see the value *INACTIVE in the
System Manager, Journal Manager, Journal Inspection, and Collector Services
columns.
4. From the Work with Systems display, select option 8 (Work with data groups) for
the system whose definition you are renaming, and press Enter.
5. For each data group listed on the Work with Data Groups display, do the following
to ensure that replication activity is quiesced and to record information you will
need later for verifying the starting points in the journals.
a. Select option 8 (Display status) and press Enter.
b. Record the Last Read Receiver name and Sequence # for both database and
object. You will need this information to verify the starting points in a later step.
Note: We strongly recommend you also record the full three-part name of the
data group and identify which system will be renamed. This will be
useful when verifying that each data group has the correct journal
starting points after the system definition has been renamed. This will
be particularly useful if you will be attempting to rename more than one
system definition (when directed) or when renaming within a multi-node
environment.
c. Repeat Step 5a and Step 5b for each data group that includes the system definition to be renamed.
d. When you addressed all of the data groups, press F12 (Cancel) to return to the
Work with Systems display.
6. To determine which transfer definitions need to be changed, do the following:
a. From the Work with Systems display, press F16 (System definitions).
b. From the Work With System Definitions display, locate the system to be renamed and select option 14 (Transfer definition) for that system.
c. On the Work with Transfer Definitions display, check the following:
• Each transfer definition in the list that has the system to be renamed
identified in the System1 or System 2 columns must be changed using
Step 7.
• If there is a transfer definition with *ANY specified for the System 1 or
System 2 columns, you will need to restart the system port jobs with the new
host names when directed.
7. Perform this step for each transfer definition that includes the system to be
renamed. To change a transfer definition, do the following:
a. Select option 2 (Change) and press Enter.
b. Press F10 to access additional parameters.
c. If the system to be renamed is System 1 and *SYS1 is shown for the System 1
host name or address prompt, specify the actual host name or IP address
currently used for that system.
d. If the system to be renamed is System 2 and *SYS2 is shown for the System 2
host name or address prompt, specify the actual host name or IP address
currently used for that system.
e. Press Enter.
Note: Many installations will have an autostart entry for the STRSVR command.
b. Restart the port job on the system, specifying the new value from the transfer
definition in the command:
STRSVR HOST(host-name-or-address) PORT(port-number)
c. For every transfer definition that you changed in Step 7, verify that
communication links start by using the command:
VFYCMNLNK PROTOCOL(*TFRDFN) TFRDFN(name system1 system2)
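For example, if the transfer definition PRIMARY connects systems LONDON and CHICAGO, the restart and verification might look like the following sketch; the names, address, and port number are all illustrative:
STRSVR HOST('10.1.1.50') PORT(50410)
VFYCMNLNK PROTOCOL(*TFRDFN) TFRDFN(PRIMARY LONDON CHICAGO)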
IMPORTANT!
Perform only one of the next two steps. Perform only the step that is for the type of system definition you are renaming (Step 9 to rename a network system, or Step 10 to rename a management system). Never attempt to perform both steps.
ENDMMX
Wait for all processes to end before continuing.
17. From a command line on a management system, enter the following command to
start the system managers:
STRMMXMGR SYSDFN(*ALL) MGR(*SYS)
Perform Step 18 through Step 21 from the system that was renamed.
18. From the system that was renamed, enter the following command:
WRKSYS
19. On the Work with Systems display, select option 8 (Work with data groups) for the
system that was renamed and press Enter.
In the resulting list of data groups, one of the systems in each data group has
been renamed. For the following step, you will need to know the original names of
both systems in each data group and the new name of the renamed system. The
information you recorded in Step 5 to verify the starting point shows the old
system name. In multi-node environments, the verify step can become confusing
if you are not diligent about keeping track of what name has changed.
20. For each data group listed, do the following:
a. From the Work with Data Groups display, select option 9 (Start DG) and press
Enter.
b. The Start Data Group (STRDG) display appears. Press F10 to display
additional parameters.
c. At the Show confirmation screen prompt, specify *YES.
d. If the data group being started is controlled by an application group, press Page Down. Then specify *YES for the Override if in data rsc. group prompt.
e. Press Enter.
f. The Confirmation display appears. Use the information you recorded in
Step 5b to verify the information displayed has the correct starting point for the
journal receivers.
• If the VSP server runs on a Windows platform, from the Windows Start menu,
select:
All Programs > Vision Solutions Portal > Start.
15. From the Work with RJ Links menu, press F11 to display the transfer definitions.
16. Type a 2 (Change) next to the RJ link where you changed the transfer definition
and press Enter.
17. From the Change Remote Journal Link display, specify the new name for the
transfer definition and press Enter.
5. The Rename Journal Definition display for the definition you selected appears. At
the To journal definition prompts, specify the values you want for the new name.
a. If the journal name is *JRNDFN, ensure that there are no journal receivers in the specified library whose names start with the journal receiver prefix. See “Building the journaling environment” on page 221 for more information.
information.
6. Press Enter. The Work with Journal Definitions display appears.
7. If using remote journaling, do the following to change the corresponding definition
for the remote journal. Otherwise, continue with Step 8:
a. Type a 2 (Change) next to the corresponding remote journal definition name
you changed and press Enter.
b. Specify the values entered in Step 5 and press Enter.
8. From the Work with Journal Definitions menu, type a 14 (Build) next to the journal
definition names you changed and press F4.
9. The Build Journaling Environment display appears. At the Source for values
prompt, specify *JRNDFN.
10. Press Enter. You should see a message that indicates the journal environment
was created.
11. Press F12 to return to the MIMIX Configuration Menu. From the MIMIX
Configuration Menu, select option 4 (Work with data group definitions) and press
Enter.
12. From the Work with DG Definitions menu, type a 2 (Change) next to the data
group name that uses the journal definition you changed and press Enter.
13. Press F10 to access additional parameters.
14. From the Change Data Group Definition display, specify the new name for the
System 1 journal definition and System 2 journal definition and press Enter twice.
Attention:
Before you rename a data group definition, ensure that the data
group has a status of *INACTIVE.
1. Ensure that the data group is ended. If the data group is active, end it using the
procedure “Ending a data group in a controlled manner” in the MIMIX Operations
book.
2. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
3. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.
Configuring data group entries
Data group entries can identify one or many objects to be replicated or excluded from
replication. You can add individual data group entries, load entries from an existing
source, and change entries as needed.
The topics in this chapter include:
• “Creating data group object entries” on page 271 describes data group object
entries which are used to identify library-based objects for replication. Procedures
for creating these are included.
• “Creating data group file entries” on page 275 describes data group file entries
which are required for user journal replication of *FILE objects. Procedures for
creating these are included.
• “Creating data group IFS entries” on page 284 describes data group IFS entries
which identify IFS objects for replication. Procedures for creating these are
included.
• “Loading tracking entries” on page 286 describes how to manually load tracking
entries for IFS objects, data areas, and data queues that are configured for user
journal replication.
• “Adding a library to an existing data group” on page 288 describes how to add a
new library to an existing data group configuration, start journaling, and
synchronize its contents.
• “Adding an IFS directory to an existing data group” on page 293 describes how to
add a new directory to an existing data group configuration, start journaling, and
synchronize its contents.
• “Creating data group DLO entries” on page 297 describes data group DLO entries
which identify document library objects (DLOs) for replication by MIMIX system
journal replication processes. Procedures for creating these are included.
• “Additional options: working with DG entries” on page 300 provides procedures for
performing data group entry common functions, such as copying, removing, and
displaying.
The appendix “Supported object types for system journal replication” on page 635
lists IBM i object types and indicates whether each object type is replicated by MIMIX.
In environments where multiple data groups exist within a single resource group,
changes to data group configuration entries that identify objects to replicate are
propagated to the data groups within a resource group entry as follows:
• If the configuration entries are created or changed from an enabled data group,
they are propagated to all data groups within the resource group entry, including
disabled data groups.
• If configuration entries are created or changed from a disabled data group, they
are not propagated to the other data groups in the resource group entry.
Creating data group object entries
To load data group object entries for a group of library-based objects, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the
data group you want and press Enter.
3. The Work with DG Object Entries display appears. Press F19 (Load).
4. The Load DG Object Entries (LODDGOBJE) display appears. Do the following to
specify the selection criteria:
a. Identify the library and objects to be considered. Specify values for the System
1 library and System 1 object prompts.
b. If necessary, specify values for the Object type, Attribute, System 2 library, and
System 2 object prompts.
c. At the Process type prompt, specify whether resulting data group object entries
should include or exclude the identified objects.
d. Specify appropriate values for the Cooperate with database and Cooperating
object types prompts. These prompts determine how *FILE, *DTAARA, and
*DTAQ objects are replicated. Change the values if you want to explicitly
replicate from the system journal or if you want to limit which object types are
cooperatively processed with the user journal.
e. Ensure that the remaining prompts contain the values you want for the data
group object entries that will be created. Press Page Down to see all of the
prompts.
5. To specify file entry options that will override those set in the data group definition,
do the following:
a. Press F9 (All parameters).
b. Press Page Down until you locate the File entry options prompt.
c. Specify the values you need on the elements of the File entry options prompt.
6. To generate the list of objects, press Enter.
Note: If you skipped Step 5, you may need to press Enter multiple times.
7. The Load DG Object Entries display appears with the list of objects that matched
your selection criteria. Either type a 1 (Select) next to the objects you want or
press F21 (Select all). Then press Enter.
8. If necessary, you can use “Adding or changing a data group object entry” on
page 272 to customize values for any of the data group object entries.
Synchronize the objects identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
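As a sketch, a load request that selects all objects in a library for inclusion might look like the following. The keywords for the System 1 library and System 1 object prompts (shown here as LIB1 and OBJ1) and the process type keyword are assumptions inferred from the prompt names, so verify them by prompting LODDGOBJE with F4; the names are illustrative:
LODDGOBJE DGDFN(MYDG SYS1 SYS2) LIB1(APPLIB) OBJ1(*ALL) PRCTYPE(*INCLD)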
Adding or changing a data group object entry
From the management system, do the following to add a new data group object entry
or change an existing entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the
data group you want and press Enter.
3. The Work with DG Object Entries display appears. Do one of the following:
• To add a new entry, type a 1 (Add) next to the blank line at the top of the list
and press Enter.
• To change an existing entry, type a 2 (Change) next to the entry you want and
press Enter.
4. The appropriate Data Group Object Entry display appears. When adding an entry,
you must specify values for the System 1 library and System 1 object prompts.
Note: When changing an existing object entry to enable replication of data areas
or data queues from a user journal (COOPDB(*YES)), make sure that you
specify only the objects you want to enable for the System 1 object
prompt. Otherwise, all objects in the library specified for System 1 library
will be enabled.
5. If necessary, specify a value for the Object type prompt.
6. Press F9 (All parameters).
7. If necessary, specify values for the Attribute, System 2 library, System 2 object,
and Object auditing value prompts.
8. At the Process type prompt, specify whether resulting data group object entries
should include (*INCLD) or exclude (*EXCLD) the identified objects.
9. Specify appropriate values for the Cooperate with database and Cooperating
object types prompts.
Note: These prompts determine how *FILE, *DTAARA, and *DTAQ objects are
replicated. Change the values if you want to explicitly replicate from the
system journal or if you want to limit which object types are cooperatively
processed with the user journal.
10. Ensure that the remaining prompts contain the values you want for the data group
object entries that will be created. Press Page Down to see more prompts.
11. If there are library (*LIB) objects to replicate and you do not want them replicated
to the same auxiliary storage pool (ASP) or independent ASP device on each
system, specify values for System 1 library ASP number, System 1 library ASP
device, System 2 library ASP number, and System 2 library ASP device prompts.
12. To specify file entry options that will override those set in the data group definition,
do the following:
a. If necessary, press Page Down to locate the File and tracking entry options
(FEOPT) prompts.
273
b. Specify the values you need for the elements of the File and tracking entry
options prompts.
13. If the changes you specify on the command result in adding objects into the
replication namespace, those objects need to be synchronized between systems.
At the Synchronize on start prompt, specify the value for how synchronization will
occur:
• The default value is *NO. Use this value when you will use save and restore
processes to manually synchronize the objects. If you do not synchronize
before replication starts, the next audit that checks all objects (scheduled or
manually invoked) will attempt to synchronize the objects if recoveries are
enabled and differences are found.
• The value *YES will request to synchronize any objects added to the
replication namespace through the system journal replication processes. This
may temporarily cause threshold conditions in replication processes.
14. Press Enter.
15. For object entries configured for user journal replication of data areas or data
queues, if you were directed to this procedure from “Checklist: Change *DTAARA,
*DTAQ, IFS objects to user journaling” on page 151, return to Step 7 to proceed
with the additional steps necessary to complete the conversion.
16. Manually synchronize the objects identified by this data group object entry before
starting replication processes. You can skip this step if you specified *YES in
Step 13. The entries will be available to replication processes after the data group
is ended and restarted. This includes after the nightly restart of MIMIX jobs. The
next time an audit that checks all objects runs, the entries will be available and the
MIMIX audits will attempt to synchronize the objects they identify if recoveries are
enabled.
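For example, a command-line sketch of adding an entry that enables user journal
replication of a single data area follows. The DGDFN, LIB1, OBJ1, and TYPE
keywords match the ADDDGOBJE examples shown in “Adding a library to an
existing data group”; the PRCTYPE and COOPTYPE keywords for the Process type
and Cooperating object types prompts are assumptions, so verify them by prompting
the command with F4. ACCTDTA is a placeholder object name:
ADDDGOBJE DGDFN(dgname) LIB1(APPLIB) OBJ1(ACCTDTA) TYPE(*DTAARA)
PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*DTAARA)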
Creating data group file entries
Values specified for the File entry options (FEOPT) parameter override the values
loaded from the FEOPTSRC parameter for all data group file entries created by a
load request.
Regardless of where the configuration source and file entry option source are located,
the Load Data Group File Entries (LODDGFE) command must be used from a system
designated as a management system.
Note: The Load Data Group File Entries (LODDGFE) command performs a journal
verification check on the file entries using the Verify Journal File Entries
(VFYJRNFE) command. In order to accurately determine whether files are
being journaled to the target system, you should first perform a save and
restore operation to synchronize the files to the target system before loading
the data group file entries.
Procedure: Use this procedure to create data group file entries from the object
entries defined to a data group.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears. The name of the
data group for which you are creating file entries and the Configuration source
value of *DGOBJE are pre-selected. Press Enter.
5. The following prompts appear on the display. Specify appropriate values.
a. From data group definition - To load from entries defined to a different data
group, specify the three-part name of the data group.
b. Load from system - Ensure that the value specified is appropriate. For most
environments, files should be loaded from the source system of the data group
you are loading. (This value should be the same as the value specified for Data
source in the data group definition.)
c. Update option - If necessary, specify the value you want.
d. Default FE options source - Specify the source for loading values for default file
entry options. Each element in the file entry options is loaded from the
specified location unless you explicitly specify a different value for an element
in Step 6.
6. Optionally, you can specify a file entry option value to override those loaded from
the configuration source. Do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
7. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
8. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
9. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use “Changing a data group file entry” on page 282 to customize
values for any of the data group file entries.
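The command-line equivalent of this procedure is straightforward; the following
sketch, run in batch from the management system, uses the same Configuration
source value described in Step 4 (the BATCH keyword is shown with this command
in “Adding a library to an existing data group”):
LODDGFE DGDFN(dgname) CFGSRC(*DGOBJE) BATCH(*YES)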
Loading file entries from a library
Example: The data group file entries are created by loading from a library named
TESTLIB on the source system. This example assumes the configuration is set up so
that system 1 in the data group definition is the source for replication.
LODDGFE DGDFN(DGDFN1) CFGSRC(*NONE) LIB1(TESTLIB)
Since the FEOPT parameter was not specified, the resulting data group file entries
are created with a value of *DFT for all of the file entry options. Because there is no
MIMIX configuration source specified, the value *DFT results in the file entry options
specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from a library on
either the source system or the target system.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of
the data group for which you are creating file entries. At the Configuration source
prompt, specify *NONE and press Enter.
5. Identify the location of the files to be used for loading. For common configurations,
you can accomplish this by specifying a library name at the System 1 library
prompt and accepting the default values for the System 2 library, Load from
system, and File prompts.
If you are using system 2 as the data source for replication or if you want the
library name to be different on each system, then you need to modify these values
to appropriately reflect your data group defaults. If the data group is configured for
COOPJRN(*USRJRN), then an object entry must also be configured which
includes the file and is cooperatively processed.
6. If necessary, specify the values you want for the following:
Update option prompt
Add entry for each member prompt
7. The value of the Default FE options source prompt is ignored when loading from a
library. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. If necessary, you can use “Changing a data group file entry” on
page 282 to customize values for any of the data group file entries.
Adding a data group file entry
From the management system, do the following to add a new data group file entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Type a 1 (Add) next to the blank
line at the top of the list and press Enter.
4. The Add Data Group File Entry (ADDDGFE) display appears. At the System 1 File
and Library prompts, specify the file that you want to replicate.
5. By default, all members in the file are replicated. If you want to replicate only a
specific member, specify its name at the Member prompt.
Note: All replicated members of a file must be in the same database apply
session. For data groups configured for multiple apply sessions, specify
the apply session on the File entry options prompt. See Step 7.
6. Verify that the values of the remaining prompts on the display are what you want.
If necessary, change the values as needed.
Notes:
• If you change the value of the Dynamically update prompt to *NO, you need to
end and restart the data group before the addition is recognized.
• If you change the value of the Start journaling of file prompt to *NO and the file
is not already journaled, MIMIX will not be able to replicate changes until you
start journaling the file.
7. Optionally, you can specify file entry options that will override those defined for the
data group. Do the following:
a. Press F10 (Additional parameters), then press Page Down.
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter to create the data group file entry.
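A command-line sketch of this procedure might look like the following; the FILE1 and
MBR keyword names for the System 1 File and Library and Member prompts are
assumptions, so prompt ADDDGFE with F4 to verify them. ORDERS in APPLIB is a
placeholder file:
ADDDGFE DGDFN(dgname) FILE1(APPLIB/ORDERS) MBR(*ALL)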
Changing a data group file entry
• All replicated members of a file must be in the same database apply session.
For data groups configured for multiple apply sessions, specify the apply
session on the File entry options prompt.
5. To accept your changes, press Enter.
The replication processes do not recognize the change until the data group has been
ended and restarted.
Creating data group IFS entries
Data group IFS entries identify IFS objects for replication. The identified objects are
replicated through the system journal unless the data group IFS entries are explicitly
configured to allow the objects to be replicated through the user journal.
Topic “Identifying IFS objects for replication” on page 116 provides detailed concepts
and identifies requirements for configuration variations for IFS objects. Supported file
systems are included, as well as examples of the effect that multiple data group IFS
entries have on object auditing values.
6. At the Process type prompt, specify whether resulting data group object entries
should include (*INCLD) or exclude (*EXCLD) the identified objects.
7. Specify the appropriate value for the Cooperate with database prompt. To ensure
that journaled IFS objects can be replicated from the user journal, specify *YES.
To replicate from the system journal, specify *NO.
8. If necessary, specify a value for the Object retrieval delay prompt.
9. If the changes you specify on the command result in adding objects into the
replication namespace, those objects need to be synchronized between systems.
At the Synchronize on start prompt, specify the value for how synchronization will
occur:
• The default value is *NO. Use this value when you will use save and restore
processes to manually synchronize the objects. If you do not synchronize
before replication starts, the next audit that checks all objects (scheduled or
manually invoked) will attempt to synchronize the objects if recoveries are
enabled and differences are found.
• The value *YES will request to synchronize any objects added to the
replication namespace through the system journal replication processes. This
may temporarily cause threshold conditions in replication processes.
10. Press Enter to create the IFS entry.
11. For IFS entries configured for user journal replication, if you were directed to this
procedure from “Checklist: Change *DTAARA, *DTAQ, IFS objects to user
journaling” on page 151, return to Step 7 to proceed with the additional steps
necessary to complete the conversion.
12. Manually synchronize the IFS objects identified by this data group object entry
before starting replication processes. You can skip this step if you specified *YES
in Step 9. The entries will be available to replication processes after the data
group is ended and restarted. This includes after the nightly restart of MIMIX jobs.
The next time an audit that checks all objects runs, the entries will be available
and the MIMIX audits will attempt to synchronize the objects they identify if
recoveries are enabled.
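For example, the following sketch adds an include entry for a directory and enables
cooperative (user journal) processing; it follows the pattern of the ADDDGIFSE
commands shown in “Adding an IFS directory to an existing data group,” and
'/appdata' is a placeholder path:
ADDDGIFSE DGDFN(dgname) OBJ1('/appdata/*') COOPDB(*YES)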
Loading tracking entries
Tracking entries are associated with the replication of IFS objects, data areas, and
data queues with advanced journaling techniques. A tracking entry must exist for
each existing IFS object, data area, or data queue identified for replication.
IFS tracking entries identify existing IFS stream files on the source system that have
been identified as eligible for replication with advanced journaling by the collection of
data group IFS entries defined to a data group. Similarly, object tracking entries
identify existing data areas and data queues on the source system that have been
identified as eligible for replication using advanced journaling by the collection of data
group object entries defined to a data group.
When you initially configure a data group, you must load tracking entries and start
journaling for the objects which they identify. Similarly, if you add new or change
existing data group IFS entries or object entries, tracking entries for any additional IFS
objects, data areas, or data queues must be loaded and journaling must be started on
the objects which they identify.
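For example, the following commands, which appear in the library and directory
procedures later in this book, load object tracking entries and IFS tracking entries for
a data group after its data group entries have been created or changed:
LODDGOBJTE DGDFN(dgname) LODSYS(*SRC) UPDOPT(*ADD)
LODDGIFSTE DGDFN(dgname) BATCH(*YES)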
9. You should receive message LVI3E2B indicating the number of tracking entries
loaded for the data group.
Note: The command used in this procedure does not start journaling on the tracking
entries. Start journaling for the tracking entries when indicated by your
configuration checklist.
Adding a library to an existing data group
These instructions describe how to create the configuration necessary to add a library
to replication for a data group, synchronize the library and its contents, and make the
configuration changes effective.
If you started the process of adding selection rules for a new library from Vision
Solutions Portal and chose the option to manually synchronize, you can also use
these instructions to complete configuration and synchronize the library.
Notes:
• Perform instructions from a management system unless otherwise directed. These
instructions are intended to be used without delays between steps.
• These instructions assume the following:
– The library to be added is located on the system specified as system 1 in the
three-part data group name.
– The data group to which the library will be added is configured using best
practices. Specifically, the data group should specify the following values: *ALL
for Data group type (TYPE), *YES for Use remote journal link (RJLNK), and
*USRJRN for Cooperative journal (COOPJRN).
• For some configurations, you may be able to skip steps in these instructions. If the
data group type is *OBJ, you do not need data group file entries or object tracking
entries. For data groups of type *ALL, if the object entries you create in Step 5
specify COOPDB(*NO), you do not need file entries or object tracking entries. If
either of these scenarios applies, you can skip Step 6 through Step 13 and Step 21
through Step 23.
• Some steps in these instructions use an advanced user technique that combines
specifying an option on a display, using a repeat function key, and specifying
parameters on the command line to be passed to the option so that the same
action is performed for all items in the list. Be sure to read each step in its entirety
before taking action.
Do the following from the management system:
Steps to ensure replication is ended.
1. Perform a controlled end of the data group using the command:
ENDDG DGDFN(dgname) ENDOPT(*CNTRLD) DTACRG(*YES)
2. Display the data group and verify that its replication processes become inactive
(red I) using the command:
WRKDG DGDFN(dgname)
The RJ link, reported in the Source DB column of the Work with Data Groups display,
can remain active.
3. When replication processes have ended, do the following from the Work with Data
Groups display to check for any open commit cycles.
a. Type 8 (Display status) next to the data group name and press Enter. Then
press F8 (Database).
b. Check the value of the Open Commit column for all listed database apply
sessions.
• If *YES is displayed for any apply session, you must complete Step 4.
• If *NO is displayed for all apply sessions, continue with Step 5.
4. If open commit cycles exist, this step is necessary to prevent data loss for
currently replicated objects. Do the following:
a. Use the following command to start the data group:
STRDG DGDFN(dgname) DTACRG(*YES)
b. Take action to resolve the open commit cycles, such as ending or quiescing the
application or closing the commit cycle.
c. Repeat the controlled end again using Step 1.
If you are unable to end the data group without open commits, you may need to
perform these instructions at a time when the data group is less busy.
Steps to create configuration.
5. Use the following commands to create two data group object entries, one for the
library itself, and one for its contents. (If you were directed here from Vision
Solutions Portal when creating library selection rules and chose to synchronize
manually, you can skip Step 5.)
ADDDGOBJE DGDFN(dgname) LIB1(QSYS) OBJ1(library-name)
TYPE(*LIB)
ADDDGOBJE DGDFN(dgname) LIB1(library-name) OBJ1(*ALL)
TYPE(*ALL)
6. Create data group file entries for any objects of type *FILE in the library using the
command:
LODDGFE DGDFN(dgname) CFGSRC(*NONE) LIB1(library-name)
BATCH(*YES) SELECT(*NO)
7. Before continuing, confirm the job ran successfully using the command:
WRKJOB LODDGFE
8. Create object tracking entries for any objects of type *DTAARA or *DTAQ in the
library using the command:
LODDGOBJTE DGDFN(dgname) LODSYS(*SRC) UPDOPT(*ADD)
9. Before continuing, confirm the job ran successfully using the command:
WRKJOB LODDGOBJTE
Steps to start journaling on source.
10. This step must be performed from the source system of the data group.
a. Use the following command to display object tracking entries created for
*DTAARA and *DTAQ objects in the library:
WRKDGOBJTE DGDFN(dgname) OBJ1(library-name/*ALL)
b. Type 9 (Start journaling) next to the first tracking entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*SRC) FORCE(*YES)
e. Press Enter.
11. This step must be performed from the source system of the data group.
a. Use the following command to display a list of data group file entries created
for *FILE objects in the library.
WRKDGFE DGDFN(dgname) LIB1(library-name)
b. Type 9 (Start journaling) next to the first file entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*SRC) FORCE(*YES)
e. Press Enter.
Steps that temporarily change the environment while synchronizing. These steps
prevent MIMIX audits and recoveries from attempting actions that may not be
desired while you manually synchronize the library.
12. Display the data group file entries created for *FILE objects in the library using the
command:
WRKDGFE DGDFN(dgname) LIB1(library-name)
13. Do the following to change the status of the listed file entries to *HLD.
a. Type 23 (Hold file) next to the first file entry. Do not press Enter.
b. Press F13 (Repeat).
c. Press Enter.
d. Press F5 (Refresh) and verify that all file entries listed show *HLD as their
requested status.
14. Identify and record current policy values for the data group by doing the following:
a. Type the following command and press F4 (Prompt):
SETMMXPCY DGDFN(dgname)
b. Press Enter to display the current values. Record the values displayed for the
following fields:
• Automatic object recovery
• Automatic database recovery
• Automatic audit recovery
15. Disable automatic recoveries for the data group using the following command:
SETMMXPCY DGDFN(dgname) OBJRCY(*DISABLED) DBRCY(*DISABLED)
AUDRCY(*DISABLED)
e. Press Enter.
23. From the Work with DG File Entries display, do the following to change the status
of the file entries for the library to *RLSWAIT.
a. Type 25 (Release file) next to the first file entry. Do not press Enter.
b. Press F13 (Repeat).
c. Press Enter.
d. Press F5 (Refresh) and verify that all file entries listed show *ACTIVE as their
current status.
Steps to return environment for normal operations. Perform these steps from the
management system when replication activity for that data group is caught up.
24. End the data group using the command:
ENDDG DGDFN(dgname) DTACRG(*YES)
25. Set automatic recovery policies for the data group back to their previous values.
Use the values recorded in Step 14 in the following command:
SETMMXPCY DGDFN(dgname) OBJRCY(value) DBRCY(value)
AUDRCY(value)
26. Display the data group and verify that its replication processes become inactive
(red I) using the command:
WRKDG DGDFN(dgname)
The RJ link, reported in the Source DB column of the Work with Data Groups display,
can remain active.
27. Start the data group using the command:
STRDG DGDFN(dgname) DTACRG(*YES)
28. Do the following to address expected notifications associated with actions
performed in Step 18.
a. Display notifications for the data group using the command:
WRKNFY DGDFN(dgname)
b. If you see a notification from target journal inspection for the CRTLIB or
CLRLIB command, type 46 (Acknowledge) next to it and press Enter.
Adding an IFS directory to an existing data group
These instructions describe how to create the configuration necessary to add an IFS
directory to replication for a data group, synchronize the directory and its contents,
and make the configuration changes effective.
Do the following from the management system:
Steps to ensure replication is ended.
1. Perform a controlled end of the data group using the command:
ENDDG DGDFN(dgname) ENDOPT(*CNTRLD) DTACRG(*YES)
2. Display the data group and verify that its replication processes become inactive
(red I) using the command:
WRKDG DGDFN(dgname)
The RJ link, reported in the Source DB column of the Work with Data Groups display,
can remain active.
Steps to create configuration.
3. Use the following commands to create two data group IFS entries, one for the
directory itself, and one for its contents. (If you were directed here from Vision
Solutions Portal when creating directory selection rules and chose to synchronize
manually, you can skip Step 3.)
Note: Consider the types of objects you have within the directory and how
frequently they change when selecting a value for the COOPDB
parameter in the following commands.
*YES - The directories and objects are journaled, which allows more
efficient replication processing for frequent changes. This is best suited for
use with objects that change frequently.
*NO - Processing occurs only through system journal replication. This is
appropriate for objects that do not change frequently, such as images. This
is the default. If you use this value, you should skip the steps below that
are associated with IFS tracking entries.
ADDDGIFSE DGDFN(dgname) OBJ1('/directory-name')
COOPDB(value)
ADDDGIFSE DGDFN(dgname) OBJ1('/directory-name/*')
COOPDB(value)
4. Create data group IFS tracking entries for the objects in the directory using the
command:
LODDGIFSTE DGDFN(dgname) BATCH(*YES)
5. Before continuing, confirm the job ran successfully using the command:
WRKJOB LODDGIFSTE
Steps to start journaling on source.
6. This step must be performed from the source system of the data group.
a. Use the following command to display IFS tracking entries created for objects
and subdirectories in the directory:
WRKDGIFSTE DGDFN(dgname) OBJ1('/directory-name*')
b. Type 9 (Start journaling) next to the first tracking entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*SRC) FORCE(*YES)
e. Press Enter.
Steps that temporarily change the environment while synchronizing. These steps
prevent MIMIX audits and recoveries from attempting actions that may not be
desired while you manually synchronize the directory.
7. Identify and record current policy values for the data group by doing the following:
a. Type the following command and press F4 (Prompt):
SETMMXPCY DGDFN(dgname)
b. Press Enter to display the current values. Record the values displayed for the
following fields:
• Automatic object recovery
• Automatic database recovery
• Automatic audit recovery
8. Disable automatic recoveries for the data group using the following command:
SETMMXPCY DGDFN(dgname) OBJRCY(*DISABLED) DBRCY(*DISABLED)
AUDRCY(*DISABLED)
Steps to synchronize the directory. If the directory is too large for the SYNCIFS
command, you will need to use media to save/restore the directory from one system
to another instead of the steps in this subsection.
9. Verify that there is no user activity on the IFS directory on the source system.
There should be no locks on the objects in the directory on the source or target
system.
10. From the source system, synchronize the directory using the following command:
SYNCIFS OBJ(('/directory-name' *ALL)) SYS2(target-system-name)
11. Verify the job has completed using the command:
WRKJOB SYNCIFS
Do not continue until the job has completed.
Steps to start journaling on target.
12. Start the data group using the command:
STRDG DGDFN(dgname) SETAUD(*YES) DTACRG(*YES)
13. This step must be performed from a management (*MGT) system:
a. Use the following command to display IFS tracking entries created for objects
and subdirectories in the directory:
WRKDGIFSTE DGDFN(dgname) OBJ1('/directory-name*')
b. Type 9 (Start journaling) next to the first tracking entry. Do not press Enter.
c. Press F13 (Repeat).
d. On the command line type the following:
JRNSYS(*TGTC) FORCE(*YES)
e. Press Enter.
Steps to return environment for normal operations. Perform these steps from the
management system when replication activity for that data group is caught up.
14. End the data group using the command:
ENDDG DGDFN(dgname) DTACRG(*YES)
15. Set automatic recovery policies for the data group back to their previous values.
Use the values recorded in Step 7 in the following command:
SETMMXPCY DGDFN(dgname) OBJRCY(value) DBRCY(value)
AUDRCY(value)
16. Display the data group and verify that its replication processes become inactive
(red I) using the command:
WRKDG DGDFN(dgname)
The RJ link, reported in the Source DB column of the Work with Data Groups display,
can remain active.
17. Start the data group using the command:
STRDG DGDFN(dgname) DTACRG(*YES)
Creating data group DLO entries
press F21 (Select all). Then press Enter.
7. If necessary, you can use “Adding or changing a data group DLO entry” on
page 298 to customize values for any of the data group DLO entries.
Synchronize the DLOs identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
Adding or changing a data group DLO entry
9. Press Enter.
10. Manually synchronize the DLOs identified by this data group object entry before
starting replication processes. You can skip this step if you specified *YES in
Step 8. The entries will be available to replication processes after the data group
is ended and restarted. This includes after the nightly restart of MIMIX jobs. The
next time an audit that checks all objects runs, the entries will be available and the
MIMIX audits will attempt to synchronize the objects they identify if recoveries are
enabled.
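A command-line sketch for adding a DLO entry might resemble the following. The
ADDDGDLOE command name and the FLR1, DOC1, and PRCTYPE keywords are
assumptions based on the naming pattern of the other data group entry commands,
so verify them before use; SALESFLR is a placeholder folder:
ADDDGDLOE DGDFN(dgname) FLR1(SALESFLR) DOC1(*ALL) PRCTYPE(*INCLD)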
Additional options: working with DG entries
The procedures for performing common functions, such as copying, removing, and
displaying, are very similar for all types of data group entries used by MIMIX. Each
generic procedure in this topic indicates the type of data group entry for which it can
be used.
Table 32. Values to specify for each type of data group entry.
5. The value *NO for the Replace definition prompt prevents you from replacing an
existing entry in the definition to which you are copying. If you want to replace an
existing entry, specify *YES.
6. To copy the entry, press Enter.
7. For file entries, end and restart the data group being copied.
Displaying a data group entry
Use this procedure to display a data group entry for a data group definition.
To display a data group entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want
next to the data group and press Enter. Any of these options will allow an entry to
be displayed:
Option 17 (File entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 5 (Display)
next to the entry you want and press Enter.
3. The appropriate data group entry display appears. Page Down to see all of the
values.
CHAPTER 13 Additional supporting tasks for
configuration
This chapter provides supplemental configuration tasks. Always use the
configuration checklists to guide you through the steps of standard configuration
scenarios.
• “Accessing the Configuration Menu” on page 305 describes how to access the
menu of configuration options from the native user interface.
• “Starting the system and journal managers” on page 306 provides procedures for
starting these jobs. System and journal manager jobs must be running before
replication can be started.
• “Manually deploying configuration changes” on page 307 describes when
configuration is automatically deployed and when you may want to manually
deploy it. Instructions for manually deploying configuration are included.
• “Setting data group auditing values manually” on page 309 describes when to
manually set the object auditing level for objects defined to MIMIX and provides a
procedure for doing so.
• “Checking file entry configuration manually” on page 313 provides a procedure
using the CHKDGFE command to check the data group file entries defined to a
data group.
Note: The preferred method of checking is to use automatic scheduling for the
#DGFE audit, which calls the CHKDGFE command and can automatically
correct detected problems. For additional information, see “Interpreting results
for configuration data - #DGFE audit” on page 687.
• “Starting data groups for the first time” on page 315 describes how to start
replication once configuration is complete and the systems are synchronized. Use
this only when directed to by a configuration checklist.
• “Identifying data groups that use an RJ link” on page 316 describes how to
determine which data groups use a particular RJ link.
• “Using file identifiers (FIDs) for IFS objects” on page 317 describes the use of FID
parameters on commands for IFS tracking entries. When IFS objects are
configured for replication through the user journal, commands that support IFS
tracking entries can specify a unique FID for the object on each system. This topic
describes the processing resulting from combinations of values specified for the
object and FID prompts.
• “Configuring restart times for MIMIX jobs” on page 318 describes how to change
the time at which MIMIX jobs automatically restart. MIMIX jobs restart daily to
ensure that the MIMIX environment remains operational.
• “Setting the system time zone and time” on page 325 describes how to set time
zone values so that the timestamps used within status of application group
procedures will display correctly on all systems.
Accessing the Configuration Menu
Starting the system and journal managers
This procedure starts all the system managers, journal managers, target journal
inspection jobs, and collector services. If the system managers are running, they will
automatically send configuration information to the network system as you complete
configuration tasks.
System and journal managers must be active to support replication. Journal
inspection jobs support analysis functionality, and collector services is needed to use
MIMIX from within the Vision Solutions Portal, and to allow collection of historical
statistics. For systems participating in an IBM i cluster with a MIMIX Global license,
this procedure also starts cluster services, which is needed to start replication.
Do the following:
1. Access the MIMIX Basic Main Menu. See “Accessing the MIMIX Main Menu” on
page 93.
2. From the MIMIX Basic Main Menu press the F21 key (Assistance level) to access
the MIMIX Intermediate Main Menu.
3. Select option 2 (Work with Systems) and press Enter.
4. The Work with Systems display appears with a list of the system definitions. Type
a 9 (Start) next to each of the system definitions you want and press Enter. This
will start all managers on all of these systems in the MIMIX environment.
5. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. Verify that *ALL appears as the value for the Manager prompt.
b. Verify that *YES appears as the value for the Target journal inspection and
Collector services prompts.
c. If you are configuring a cluster environment, press F10 (Additional parameters)
and accept the value *YES for the Start cluster services prompt. If the specified
system definition is not associated with an IBM i cluster or if the cluster does
not exist, this value has no effect.
d. Press Enter to complete this request.
6. If you selected more than one system definition in Step 4, the Start MIMIX
Managers (STRMMXMGR) display will be shown for each system definition that
you selected. Repeat Step 5 for each system definition that you selected.
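The command-line equivalent of Step 5 for a single system might resemble the
following sketch; the MGR, TGTJRNINSP, and COLSRV keywords for the Manager,
Target journal inspection, and Collector services prompts are assumptions, so
prompt STRMMXMGR with F4 to verify them:
STRMMXMGR SYSDFN(sysname) MGR(*ALL) TGTJRNINSP(*YES) COLSRV(*YES)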
Manually deploying configuration changes
• To submit the job for batch processing, accept the default. Press Enter to
continue with the next step.
5. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
6. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
7. To start deploying, press Enter.
If you want to validate the list of objects to be replicated resulting from deploying
configuration, use the Replicated Objects portlet in Vision Solutions Portal.
Setting data group auditing values manually
3. At the Object type prompt, specify the type of objects for which you want to set
auditing values.
4. If you want to allow MIMIX to force a change to a configured value that is lower
than the object’s existing value, specify *YES for the Force audit value prompt.
Note: This may affect the operation of your replicated applications. We
recommend that you force auditing value changes only when you have
specified *ALLIFS for the Object type.
5. Press Enter.
For this scenario, running the SETDGAUD command with FORCE(*NO) does not
change the auditing values on any existing IFS objects because the configured values
from the data group IFS entries are lower than the existing values.
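For reference, a command-line sketch of the request described in this scenario
follows; the OBJTYPE keyword for the Object type prompt is an assumption, while
the FORCE keyword is shown in this topic:
SETDGAUD DGDFN(dgname) OBJTYPE(*ALLIFS) FORCE(*NO)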
Table 34. Intermediate audit values which occur during FORCE(*YES) processing for example 1.
Example 2: This example begins with the same set of data group IFS entries used in
example 1 (Table 33) and uses the results of the forced change in example 1 as the
auditing values for the existing objects in Table 35.
Table 35 shows how running the SETDGAUD command with FORCE(*NO) causes
changes to auditing values. This scenario is quite possible as a result of a normal
STRDG request. Complex data group IFS entries and multiple configured values
cause these potentially undesirable results.
Note: Any addition or change to the data group IFS entries can cause these results
to occur.
There is no way to maintain the existing values in Table 35 without ensuring that a
forced change occurs every time SETDGAUD is run, which may be undesirable. In
this example, the next time data groups are started, the objects’ auditing values will
be set to those shown in Table 35 for FORCE(*NO).
Any addition or change to the data group IFS entries can potentially cause similar
results the next time the data group is started. To avoid this situation, we recommend
that you configure a consistent auditing value of *CHANGE across data group IFS
entries which identify objects with common parent directories.
Example 3: This scenario illustrates the results of SETDGAUD command when the
object’s auditing value is determined by the user profile which accesses the object
(value *USRPRF). Table 36 shows the configured data group IFS entry.
Table 37 compares the results of running the SETDGAUD command with
FORCE(*NO) and FORCE(*YES).
Running the command with FORCE(*NO) does not change the value. The value
*USRPRF is not in the range of valid values for MIMIX. Therefore, an object with an
auditing value of *USRPRF is not considered for change.
Running the command with FORCE(*YES) does force a change because the existing
value and the configured value are not equal.
Checking file entry configuration manually
• To submit the job for batch processing, accept *YES. Press Enter and continue
with the next step.
9. At the Job description prompts, specify the name and library of the job description
used to submit the batch request. Accept MXAUDIT to submit the request using
the default job description, MXAUDIT.
10. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
11. To start the data group file entry check, press Enter.
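A command-line sketch of this check might resemble the following; the BATCH and
JOBD keywords are assumptions based on the prompts described in this procedure,
so prompt CHKDGFE with F4 to verify them:
CHKDGFE DGDFN(dgname) BATCH(*YES) JOBD(MXAUDIT)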
Starting data groups for the first time
Identifying data groups that use an RJ link
Use this procedure to determine which data groups use a remote journal link before
you end a remote journal link or remove a remote journaling environment.
1. Enter the command WRKRJLNK and press Enter.
2. Make a note of the name indicated in the Source Jrn Def column for the RJ Link
you want.
3. From the command line, type WRKDGDFN and press Enter.
4. For all data groups listed on the Work with DG Definitions display, check the
Journal Definition column for the name of the source journal definition you
recorded in Step 2.
• If you do not find the name from Step 2, the RJ link is not used by any data
group. The RJ link can be safely ended or can have its remote journaling
environment removed without affecting existing data groups.
• If you find the name from Step 2 associated with any data groups, those data
groups may be adversely affected if you end the RJ link. A request to remove
the remote journaling environment removes configuration elements and
system objects that need to be created again before the data group can be
used. Continue with the next step.
5. Press F10 (View RJ links). Consider the following and contact your MIMIX
administrator before taking action that will end the RJ link or remove the remote
journaling environment.
• When *NO appears in the Use RJ Link column, the data group will not be
affected by a request to end the RJ link or to end the remote journaling
environment.
Note: If you allow applications other than MIMIX to use the RJ link, they will be
affected if you end the RJ link or remove the remote journaling
environment.
• When *YES appears in the Use RJ Link column, the data group may be
affected by a request to end the RJ link. If you use the procedure for ending a
remote journal link independently in the MIMIX Operations book, ensure that
any data groups that use the RJ link are inactive before ending the RJ link.
Using file identifiers (FIDs) for IFS objects
Configuring restart times for MIMIX jobs
Certain MIMIX jobs are restarted on a regular basis in order to maintain the MIMIX
environment. The ability to configure this activity can ease conflicts with your
scheduled workload by changing when the MIMIX jobs restart to a more convenient
time for your environment.
You can configure the job restart time on system definitions to affect when system-
level jobs are restarted, on data group definitions to affect when replication-level jobs
are restarted, or both. To make effective use of this capability, you may need to set the
job restart time in more than one location.
Attention: The value *NONE for the Job restart time parameter is not
recommended.
If not restarted every day, target journal inspection becomes less effective
because reporting results per user would no longer occur every day.
If you specify *NONE in a system definition or a data group definition, you
need to develop and implement alternative procedures to ensure that the
affected MIMIX jobs are periodically restarted. Restarting the jobs ensures
that long running MIMIX jobs are not ended by the system due to resource
constraints and refreshes the job log to avoid overflow and abnormal job
termination.
Affected jobs
Results of what you specify are also affected by the following:
• The time zone in which each system exists.
• The replication role (source or target) of the system within a data group affects
which data group-level jobs are started on a system. Also, target journal
inspection jobs run at the system-level based on the system’s current role for
replication processes.
Note: Each system has a nightly cleanup job (SM_CLEANUP) that is not affected by
the configurable restart time. These cleanup jobs run shortly after midnight on
the local system.
MIMIX system-level jobs restart when they detect that the time specified in the
system definition has passed. The affected system level jobs are listed in Table 38.
Table 38. System-level jobs that restart and the effect of the value specified in a system definition
• Journal managers (JRNMGR) - Run on each system. The job on each system
restarts at the time specified in its system definition.
• Target journal inspection (TGTJRNINSP) - Runs only on systems that are
currently target systems for data group replication. Jobs running on a current
target system restart at the time specified in the system definition for the target
system.
MIMIX data group-level jobs have a delay of 2 to 35 minutes from the specified time
built into the job restart processing. The actual delay is unique to each job. By
distributing the jobs within this range, the load on systems and communications is
more evenly distributed, reducing bottlenecks caused by many jobs simultaneously
attempting to end, start, and establish communications.
Table 39. Data group-level jobs that restart and the effect of the value specified in a data group definition
• Object send (OBJSND), Object retrieve (OBJRTV), Container send (CNRSND),
and Status receive (STSRCV) run on the replication source; Object receive
(OBJRCV), Container receive (CNRRCV), and Status send (STSSND) run on
the replication target. The actual restart time is based on the timestamp of the
system on which the OBJSND job runs. Restart occurs within the allowed delay
following the time specified in the data group definition. When an object send job
is shared by multiple data groups, the restart times of all data groups which
share that job are evaluated for restart times other than *NONE. The data group
with the earliest configured restart time is used to restart the object send job and
related object replication jobs for all of the sharing data groups. If all of the
sharing data groups have a restart time of *NONE, then none of those data
groups restart the shared object send job and related object replication jobs.
• Database reader (DBRDR) runs on the replication target. It restarts when the
time specified in the data group definition occurs on the target system.
• Database send (DBSND) runs on the replication source; Database receive
(DBRCV) runs on the replication target. The actual restart time is based on the
timestamp of the source system where the DBSND job runs. Restart occurs
within the allowed delay following the time specified in the data group definition.
These jobs only run in data groups configured for source-send replication.
• Object apply (OBJAPY) runs on the replication target. The actual restart time is
based on the timestamp of the target system. Restart occurs within the allowed
delay following the time specified in the data group definition.
target for replication. LONDON is the associated network system and its system
definition uses the default setting 000000 (midnight). You end and restart the MIMIX
jobs to make the change effective. The journal manager and target journal inspection
on HONGKONG are no longer restarted. In your runbook you document the new
procedures to manually restart the journal manager on HONGKONG and to restart
target journal inspection on HONGKONG when that system is the target for
replication.
Example 4: Wednesday evening you change the system definitions for LONDON and
HONGKONG to both have a job restart time of *NONE. HONGKONG is the
management system and the target for replication. You restart the MIMIX jobs to
make the change effective. In your runbook you document the new procedures to
manually restart the journal managers on HONGKONG and LONDON and to restart
target journal inspection on the system that is currently the target system.
time of 020000 (2 a.m.). There is a one hour time difference between the two
systems; said another way, NEWYORK is an hour ahead of CHICAGO.
Figure 16 and Figure 17 show the effect of the time zone difference and replication
processes used by the data group.
The journal manager on CHICAGO restarts at midnight Chicago time. The journal
manager and target journal inspection on NEWYORK restart at 2 a.m. New York time.
Figure 16 shows the data group as being configured with MIMIX Remote Journal
support. The database reader (DBRDR) and object apply (OBJAPY) jobs restart based
on the time on NEWYORK, the target system. The remaining replication processes
restart on the system where they run based on the time on CHICAGO, the source
system.
Figure 16. The data group in this environment uses MIMIX Remote Journal support.
Figure 17 shows the data group as configured to use source-send processing for user
journal replication. With the exception of the object apply jobs (OBJAPY), the data
group jobs restart during the same 2 to 35 minute timeframe based on Chicago time
(between 2 and 35 minutes after 3 a.m. in Chicago; after 4 a.m. in New York).
Because the OBJAPY jobs are based on the time on the target system, which is an
hour ahead of the source system time used for the other jobs, the OBJAPY jobs
restart between 3:02 and 3:35 a.m. New York time.
Figure 17. The data group in this environment is configured for source-send replication.
Configuring the restart time in a data group definition
To configure the restart time for MIMIX data group-level jobs in an existing
environment, do the following:
1. On the Work with Data Group Definitions display, type a 2 (Change) next to the
data group definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want.
Notes:
• The time is based on a 24 hour clock, and must be specified in HHMMSS
format. Although seconds are ignored, the complete time format must be
specified. Valid values range from 000000 to 235959. The value 000000 is the
default and is equivalent to midnight.
• Consider the effect of any time zone differences between the management
system and the network system.
4. To accept the change, press Enter.
Changes have no effect on jobs that are currently running. The value for the Job
restart time is retrieved at the time the jobs are started. The change is effective the
next time the jobs are started.
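For example, the following sketch sets a 2 a.m. restart time for the replication-level
jobs of a data group; the RSTARTTIME keyword for the Job restart time prompt is an
assumption, so prompt CHGDGDFN with F4 to verify it:
CHGDGDFN DGDFN(dgname) RSTARTTIME(020000)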
Setting the system time zone and time
Creating an application group definition
Use this topic to create an application group. Application groups are best practice and
provide the ability to group and control multiple data groups as one entity. Default
procedures for starting, switching, and ending the application group are also created.
To create an application group definition, do the following:
1. From the MIMIX Basic Main Menu, type 1 (Work with application groups) and
press Enter.
2. The Work with Application Groups display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create Application Group Def. (CRTAGDFN) display appears. Do the following:
a. At the Application group definition prompt, specify a name.
b. The Application group type prompt defaults to *NONCLU. This indicates that the
application group will not participate in a cluster controlled by the IBM i
operating system.
c. Press Enter.
4. An additional prompt appears. Specify a description of the application group.
5. Press Enter.
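The command-line equivalent of this procedure might resemble the following sketch;
the AGTYPE and TEXT keywords for the Application group type and description
prompts are assumptions, so prompt CRTAGDFN with F4 to verify them:
CRTAGDFN AGDFN(APPGRP1) AGTYPE(*NONCLU) TEXT('Order entry
application group')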
Loading data resource groups into an application group
3. The Work with Node Entries display appears. Press F10 to toggle between
configured view and status view.
Note: While configuring, the status view of this display will show the Current Role
and Data Provider with values of *UNDEFINED until the application group
is started.
4. From the configured view, type 2 (Change) next to the node that you want to be
the primary node and press Enter.
5. The Change Node Entry (CHGNODE) command appears. Specify *PRIMARY at
the Role prompt.
6. Press Enter.
Manually adding resource group and node entries to an application group
For this example, the following steps create a data resource group entry, add the data
groups to the resource group entry, and define the node role of each system within the
application group.
1. Add a data resource group entry to the application group APPGRP1.
ADDDTARGE AGDFN(APPGRP1) DTARSCGRP(RSCGRP1) TYPE(*DTA)
2. Change the data group definitions, specifying RSCGRP1 as the value for the Data
resource group entry.
CHGDGDFN DGDFN(DTA1) DTARSCGRP(RSCGRP1)
CHGDGDFN DGDFN(DTA2) DTARSCGRP(RSCGRP1)
CHGDGDFN DGDFN(DTA3) DTARSCGRP(RSCGRP1)
3. Define the correct node role for each node as you add the nodes to the application
group.
ADDNODE AGDFN(APPGRP1) RSCGRP(*AGDFN) NODE(SYSA)
ROLE(*PRIMARY)
ADDNODE AGDFN(APPGRP1) RSCGRP(*AGDFN) NODE(SYSB)
ROLE(*BACKUP) POSITION(1)
ADDNODE AGDFN(APPGRP1) RSCGRP(*AGDFN) NODE(SYSC)
ROLE(*BACKUP) POSITION(2)
Starting, ending, or switching an application group
Application group commands that start (STRAG), end (ENDAG), or switch (SWTAG)
the replication environment invoke procedures to perform the requested operation.
For the purpose of describing their use, these commands are quite similar.
This topic describes behavior of the commands for application groups that do not
participate in a cluster controlled by the IBM i operating system (*NONCLU
application groups).
The following parameters are available on all of the commands unless otherwise
noted.
What is the scope of the request? The following parameters identify the scope of
the requested operation:
Application group definition (AGDFN) - Specifies the requested application group.
You can either specify a name or the value *ALL.
Resource groups (TYPE) - Specifies the types of resource groups to be
processed for the requested application group.
Data resource group entry (DTARSCGRP) - Specifies the data resource groups to
include in the request. The default is *ALL or you can specify a name. This
parameter is ignored when TYPE is *ALL or *APP.
What is the expected behavior? The following parameters, when available, define
the expected behavior:
Switch type (SWTTYP) - Only available on the SWTAG command, this specifies
the reason the application group is being switched. The procedure called to
perform the switch and the actions performed during the switch differ based on
whether the current primary node (data source) is available at the start of the
switch procedure. The default value, *PLANNED, indicates that the primary node
is still available and the switch is being performed for normal business processes
(such as to perform maintenance on the current source system or as part of a
standard switch procedure). The value *UNPLANNED indicates that the switch is
an unplanned activity and the data source system may not be available.
Current node roles (ROLE) - Only available on the STRAG command, this
parameter is ignored for non-cluster application groups.
Node roles (ROLE) - Only available on the SWTAG command, this specifies
which set of node roles will determine the node that becomes the new primary
node as a result of the switch. The default value *CURRENT uses the current
order of node roles. If the application group participates in a cluster, the current
roles defined within the CRGs will be used. If *CONFIG is specified, the
configured primary node will become the new primary node and the new role of
other nodes in the recovery domain will be determined from their current roles. If
you specify a name of a node within the recovery domain for the application
group, the node will be made the new primary node and the new role of other
nodes in the recovery domain will be determined from their current roles.
What procedure will be used? The following parameters identify the procedure to
use and its starting point:
Begin at step (STEP) - Specifies where the request will start within the specified
procedure. This parameter is described in detail below.
Procedure (PROC) - Specifies the name of the procedure to run to perform the
requested operation when starting from its first step. The value *DFT will use the
procedure designated as the default for the application group. The value
*LASTRUN uses the same procedure used for the previous run of the command.
You can also specify the name of a procedure that is valid for the specified
application group and type of request.
Where should the procedure begin? The value specified for the Begin at step
(STEP) parameter on the request to run the procedure determines the step at which
the procedure will start. The status of the last run of the procedure determines which
values are valid.
The default value, *FIRST, will start the specified procedure at its first step. This value
can be used when the procedure has never been run, when its previous run
completed (*COMPLETED or *COMPERR), or when a user acknowledged the status
of its previous run which failed, was canceled, or completed with errors
(*ACKFAILED, *ACKCANCEL, or *ACKERR respectively).
Other values are for resolving problems with a failed or canceled procedure. When a
procedure fails or is canceled, subsequent attempts to run the same procedure will
fail until user action is taken. You will need to determine the best course of action for
your environment based on the implications of the canceled or failed steps and any
steps which completed.
The value *RESUME will start the last run of the procedure beginning with the step at
which it failed, the step that was canceled in response to an error, or the step
following where the procedure was canceled. The value *RESUME may be
appropriate after you have investigated and resolved the problem which caused the
procedure to end. Optionally, if the problem cannot be resolved and you want to
resume the procedure anyway, you can override the attributes of a step before
resuming the procedure.
The value *OVERRIDE will override the status of all runs of the specified procedure that did not complete. The *FAILED or *CANCELED status of each of these runs is changed to acknowledged (*ACKFAILED or *ACKCANCEL) and a new run of the procedure begins at the first step.
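For example, assuming a planned switch of application group APPGRP1 previously failed and the underlying problem has been resolved, a request such as the following would resume the last run at the appropriate step (the application group name is illustrative):
SWTAG AGDFN(APPGRP1) SWTTYP(*PLANNED) PROC(*LASTRUN) STEP(*RESUME)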
The MIMIX Operations book describes working with procedures and steps at the operational level in detail.
3. If you are starting after addressing problems with the previous start request,
specify the value you want for Begin at step. Be certain that you understand the
effect the value you specify will have on your environment.
4. Press Enter.
5. The Procedure prompt appears. Do one of the following:
• To use the default start procedure, press Enter.
• To use a different start procedure for the application group, specify its name.
Then press Enter.
5. If you are switching after addressing problems with the previous switch request, specify the value you want for Begin at step. Be certain that you understand the effect the value you specify will have on your environment.
6. Press Enter.
7. The Procedure prompt appears. Do one of the following:
• To use the default switch procedure for the specified switch type, press Enter.
• To use a different switch procedure for the application group, specify its name.
Then press Enter.
8. A switch confirmation panel appears. To perform the switch, press F16.
Performing target journal inspection
The data integrity of replicated objects can be affected if they are changed on the
target system by programs or users other than MIMIX. For new installations, shipped
default values for journal definitions and data group definitions allow MIMIX to
automatically perform target journal inspection to check for such actions. On any
given target system, both the system journal (QAUDJRN) and user journals are
inspected. MIMIX also notifies you so that you can take appropriate action.
Target journal inspection consists of a set of processes that run on a system only
when that system is currently the target system for replication. Each process reads a
journal to check for users or programs other than MIMIX that have modified replicated
objects. The number of inspection processes on a system depends on how many user
journals on the target system are defined to the data groups replicating to that system.
There is one inspection process for the system journal (QAUDJRN) regardless of how
many data groups use the system as a target system. Each user journal on the target
system also has an inspection process, which may be used by one or more data
groups.
Any detected modifications are logged in an internal database of replicated objects.
The example below shows the relationships between data groups, journals, and
configuration in a simple switchable replication environment.
Each target journal inspection process sends a notification once per day per user that
changed objects on the target node. Only the first object changed by the user is
identified in the notification. However, additional objects changed by the same user
are marked in the replicated objects database with the unique ID of the already sent
notification.
When using MIMIX through Vision Solutions Portal, you can use the Replicated
Objects portlet to easily view a list of all the objects changed by a particular user,
program, or job, or a list of those that have the same notification ID. This capability is
only available through Vision Solutions Portal.
Notes:
• Target journal inspection does not occur for the journals identified in the
MXCFGJRN journal definition and journal definitions that identify the remote
journal used in RJ configurations (whose names typically end with @R).
• In environments that perform bi-directional replication, target journal inspection
does not report a target object as changed by a user when that object is also
replicated by a different data group using that system as its source.
• MIMIX automatically creates journal definitions for the target system for
QAUDJRN and user journals. In environments where only user journal replication
is configured, the QAUDJRN journal definition on the target system is still needed
so that target journal inspection can check for all transactions.
Target journal inspection is started and ended when MIMIX starts or ends. Starting
data groups will start inspection jobs on the target system if necessary. You can also
manually start or end the inspection processes for a system with commands that act
on system-level processes (STRMMXMGR, ENDMMXMGR). Inspection processes
are included with other system-level jobs that restart daily. The name of each
inspection process job is the name of the journal definition.
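For example, assuming a system definition named SYSB for the current target system, requests such as the following would manually start and end its inspection processes (the ENDMMXMGR parameters shown are an assumption based on the STRMMXMGR example later in this topic):
STRMMXMGR SYSDFN(SYSB) TGTJRNINSP(*YES)
ENDMMXMGR SYSDFN(SYSB) TGTJRNINSP(*YES)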
The first time that a target journal inspection process is started, it begins with the last
sequence number in the currently attached journal receiver on the target system.1
(The starting point is not associated with the location in the source journal from which
replication is started.)
When a target journal inspection process ends, MIMIX retains information about the
last sequence number it processed. On subsequent start requests, the target journal
inspection process starts at the next journal sequence number following the last
sequence number it processed if the journal receiver is still available. If the receiver is
no longer available, processing starts with the last sequence number in the currently
attached journal receiver.1 Any time a target journal inspection process starts,
message LVI3901 is issued to the MIMIX message log and to the job log, identifying
where the journal inspection process started.
When starting target journal inspection after enabling target journal inspection in a
journal definition where it was previously disabled, processing begins with the last
sequence number in the currently attached journal receiver on the target system.1
For each data group, status of target journal inspection is included with other target
system manager processes. At the system level, the status reported is the combined
status of all target inspection processes currently running on that system. More
status-related information is available in the MIMIX Operations book.
Example: An application group controls two switchable data groups. Both data groups perform system and user journal replication, but only one data group is configured for remote journaling. Figure 18 shows the journals associated with this configuration.
1. This behavior applies to service pack 7.1.06.00 and higher. In earlier version 7.1 service packs, processing begins at the first entry in the currently attached journal receiver on the target system, which can result in false target journal inspection notifications being reported on initial startup.
Note that the remote journals used by the remote journaling environment of data
group ABC will never be used for target journal inspection.
Enabling target journal inspection for this example environment requires the following (example commands follow the list):
• Data group definitions ABC and DEF must specify *YES for Journal on target
(JRNTGT). Also, because these data groups perform user journal replication, the
values of System 1 journal definition (JRNDFN1) and System 2 journal definition
(JRNDFN2) must identify the journal definitions for the systems identified as
system 1 or system 2, respectively, in the data group name (DGDFN). Often the
journal definition names use the same name as the data group. However, it is
possible that a data group may be using a journal definition with a different name
or sharing a journal definition with a different data group.
• All of the journal definitions in Table 40 must specify *ACTIVE for the Target
journal state (TGTSTATE) and *YES for Target journal inspection (TGTJRNINSP).
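For example, assuming data group ABC replicates from SYSA to SYSB and that its journal definition for SYSB shares the data group name, commands such as the following sketch these requirements (all names, and the JRNDFN parameter keyword, are illustrative assumptions):
CHGDGDFN DGDFN(ABC SYSA SYSB) JRNTGT(*YES)
CHGJRNDFN JRNDFN(ABC SYSB) TGTSTATE(*ACTIVE) TGTJRNINSP(*YES)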
Table 40. Example: inspected target journals and their associated journal definitions
the value is *DELETE, the object or member is deleted from the target system. If
the value is *DISABLED, no recovery action is taken.
• Replicated object or member was deleted on target system. The replication
manager determines if the object or member still exists on the source system. If
the source object exists and is within the replication scope, it is synchronized to
the target system by the next run of an audit which checks the object type. For
physical file members, if the source member exists and is within the replication
scope, the member is synchronized to the target system by the next run of the
#FILATRMBR audit.
MIMIX tracks error conditions for three days. Once an error condition is corrected, the
object will no longer be identified as being “changed on target by user” in the
Replicated Objects portlet.
If a recovery that was submitted into replication processes fails, the replication
manager sends an error notification.
b. From the MIMIX Configuration Menu, type a 3 (Work with journal definitions)
and press Enter.
4. The Work with Journal Definitions display appears. For each journal definition you
need to change, do the following:
a. Type a 2 (Change) next to the journal definition for the system you want and
press Enter. The Change Journal Definition (CHGJRNDFN) display appears.
b. Verify that the value of the Target journal state prompt is *ACTIVE.
c. Specify *YES for the Target journal inspection prompt.
d. Press Enter.
5. The next step to perform depends on your configuration. Do one of the following:
• If you have only data groups of type *OBJ, skip to Step 7.
• If you have data groups of type *ALL or *DB, those data groups which include
the system identified in a user journal definition must be verified and changed if
necessary. Type 13 (Data group definitions) next to the journal definition you
changed and press Enter.
6. The Work with Data Group Definitions display appears with a list of the data
groups that use the selected journal definition. For each of the data groups on the
display do the following:
Note: If the selected journal definition was for a system journal (QAUDJRN), a
target journal of an RJ environment, or is no longer used by a data group,
the list will be blank.
a. Type a 2 (Change) next to the data group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press F9
(All parameters).
c. Check the value specified for Journal on target (JRNTGT). Change the value to
*YES if necessary.
d. Press Enter.
7. To make the changes effective, do one of the following:
• If you changed data group definitions, end and restart the data groups.
• If you changed only journal definitions (you did not perform Step 6), specify the
name of the target system in the following command and press Enter:
STRMMXMGR SYSDFN(name) TGTJRNINSP(*YES)
e. From the MIMIX Configuration Menu, type a 4 (Work with data group
definitions) and press Enter.
2. The Work with Data Group Definitions display appears. Press F18 (Subset).
3. The Subset DG Definitions display appears. Do the following:
• To display a list of all data groups that use the system journal, specify *ALL
and *OBJ for the Data group type prompt and press Enter.
• To display a list of data groups that use a specific user journal, specify *ALL and *DB for the Data group type prompt, specify the name of the journal definition at the Journal definition prompt, and press Enter.
4. The resulting list includes the data groups that use the journal definition on either
its source or target system. The value displayed in the Data Source column
identifies which system is the current source system.
5. To identify whether a user journal is currently being used as a source or a target
journal, type a 5 (Display) next to the data group you want and press Enter.
6. Journal definition names for user journals are often the same name as the data
group. Therefore, to determine with certainty whether the journal definition is
being used as a source or target journal, evaluate whether the value specified for
Data Source resolves to System 1 or System 2 of the data group. Then check the
name specified in the appropriate System journal definition prompt (JRNDFN1 or
JRNDFN2).
2. From the MIMIX Configuration Menu, type a 3 (Work with journal definitions) and
press Enter.
3. The Work with Journal Definitions display appears. Do the following:
a. Press F18 (Subset). The Subset Journal Definitions display appears.
b. At the System prompt, specify the name of the system on which you want to
disable target journal inspection and press Enter.
4. The resulting list includes only the journal definitions for the specified system. For
each journal definition you want to change, do the following:
a. Type 2 (Change) next to the journal definition and press Enter.
b. The Change Journal Definition (CHGJRNDFN) display appears. Specify *NO
for the Target journal inspection prompt and press Enter.
Notes:
• You do not need to change journal definitions for journals that are excluded
from inspection when the system is the target for replication. Inspection does
not occur for the journals identified in the MXCFGJRN journal definition and
journal definitions that identify the remote journal used in RJ configurations
(whose names typically end with @R).
• When you change a QAUDJRN journal definition, all data groups that perform
system journal replication or any form of cooperative processing with a user
journal and are using that system as their target system are affected. When
you change a journal definition for a user journal, any data groups that perform
database replication or any form of cooperative processing and are using that
system as their target system are affected.
Any active journal inspection jobs are ended when the configuration change is
made. Inspection processes with status of *ACTIVE, *INACTIVE, and *NEWDG
will change to a status of not configured (*NOTCFG).
Starting, ending, and verifying journaling
This chapter describes procedures for starting and ending journaling. Journaling must
be active on all files, IFS objects, data areas and data queues that you want to
replicate through a user journal. Normally, journaling is started during configuration.
However, there are times when you may need to start or end journaling on items
identified to a data group.
The topics in this chapter include:
• “What objects need to be journaled” on page 343 describes, for supported
configuration scenarios, what types of objects must have journaling started before
replication can occur. It also describes when journaling is started implicitly, as well
as the authority requirements necessary for user profiles that create the objects to
be journaled when they are created.
• “MIMIX commands for starting journaling” on page 345 identifies the MIMIX
commands available for starting journaling and describes the checking performed
by the commands. It also includes information for specifying journaling to the
configured journal.
• “Journaling for physical files” on page 347 includes procedures for displaying
journaling status, starting journaling, ending journaling, and verifying journaling for
physical files identified by data group file entries.
• “Journaling for IFS objects” on page 350 includes procedures for displaying
journaling status, starting journaling, ending journaling, and verifying journaling for
IFS objects replicated cooperatively (advanced journaling). IFS tracking entries
are used in these procedures.
• “Journaling for data areas and data queues” on page 354 includes procedures for
displaying journaling status, starting journaling, ending journaling, and verifying
journaling for data area and data queue objects replicated cooperatively
(advanced journaling). Object tracking entries are used in these procedures.
What objects need to be journaled
TABLE statement is automatically journaled if the library in which it is created
contains a journal named QSQJRN or if the library is journaled with appropriate
inherit rules.
• New *FILE, *DTAARA, *DTAQ objects - The default value (*DFT) for the Journal at
creation (JRNATCRT) parameter in the data group definition enables MIMIX to
automatically start journaling for physical files, data areas, and data queues when
they are created.
– On systems running IBM i 6.1 or higher releases, MIMIX uses the support
provided by the IBM i command Start Journal Library (STRJRNLIB).
Customers are advised not to re-create the QDFTJRN data area on systems
running IBM i 6.1 or higher.
When configuration requirements are met, MIMIX will start library journaling for
the appropriate libraries as well as enable automatic journaling for the configured
cooperatively processed object types. When journal at creation configuration
requirements are met, all new objects of that type are journaled, not just those
which are eligible for replication.
When the data group is started, MIMIX evaluates all data group object entries for
each object type. (Entries for *FILE objects are only evaluated when the data
group specifies COOPJRN(*USRJRN).) Entries properly configured to allow
cooperative processing of the object type determine whether MIMIX will enforce
library journaling. MIMIX uses the data group entry with the most specific match to
the object type and library that also specifies *ALL for its System 1 object (OBJ1)
and Attribute (OBJATR).
Note: MIMIX prevents library journaling from starting in the following libraries:
QSYS*, QRECOVERY, QRCY*, QUSR*, QSPL*, QRPL*, QRCL*, QGPL, QTEMP
and SYSIB*.
For example, if MIMIX finds only the following data group object entries for library
MYLIB, it would use the first entry when determining whether to enforce library
journaling because it is the most specific entry that also meets the OBJ1(*ALL)
and OBJATR(*ALL) requirements. The second entry is not considered in the
determination because its OBJ1 and OBJATR values do not meet these
requirements.
LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*FILE) OBJATR(*ALL) COOPDB(*YES)
PRCTYPE(*INCLD)
LIB1(MYLIB) OBJ1(MYAPP) OBJTYPE(*FILE) OBJATR(DSPF) COOPDB(*YES)
PRCTYPE(*INCLD)
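For illustration only, the first of these entries could have been created with a command similar to the following, assuming the Add Data Group Object Entry (ADDDGOBJE) command and a data group named MYDG that replicates between SYSA and SYSB:
ADDDGOBJE DGDFN(MYDG SYSA SYSB) LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*FILE) OBJATR(*ALL) COOPDB(*YES) PRCTYPE(*INCLD)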
MIMIX commands for starting journaling
Forcing objects to use the configured journal
Journaled objects must use the journal defined in the data group definition in order for
replication to occur. Objects that are journaled to a journal that is different than the
journal defined in the data group definition can result in data integrity issues. MIMIX
identifies these objects with a journaling status of *DIFFJRN. A journaling status of *DIFFJRN should be investigated to determine the reason for using the different journal. See “Resolving a problem for a journal status of *DIFFJRN” on page 139.
The MIMIX commands STRJRNFE, STRJRNIFSE, and STRJRNOBJE provide the
ability to specify the Force to configured journal (FORCE) prompt which determines
whether to end journaling for the selected objects that are currently journaled to a
different journal than the configured journal (*DIFFJRN), and then start journaling to
the configured journal.
To force journaled objects to use the journal configured in the data group definition,
specify *YES for the FORCE prompt in the MIMIX commands for starting journaling.
FORCE(*NO) is the command default for the STRJRNFE, STRJRNIFSE and
STRJRNOBJE commands when run from the native interface. See “MIMIX
commands for starting journaling” on page 345.
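For example, assuming a data group named MYDG between systems SYSA and SYSB, a request such as the following sketches forcing its selected file entries to the configured journal:
STRJRNFE DGDFN(MYDG SYSA SYSB) FORCE(*YES)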
Journaling for physical files
• To start journaling using the command defaults, press Enter.
• To modify command defaults, press F4 (Prompt) then continue with the next
step.
3. The Start Journaling File Entries (STRJRNFE) display appears. The Data group
definition and the System 1 file identify your selection.
4. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is started on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will start journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. Optional: If the file is journaled to a journal that is different than the configured
journal and you have determined that it is acceptable to force journaling to the configured
journal, press F10 (Additional parameters) to display the Force to configured
journal (FORCE) prompt.
Change the FORCE prompt to *YES to end journaling to a journal that is different
than the configured journal and then start journaling using the configured journal.
This value will also attempt to start journaling for objects not currently journaled.
Journaling will not be ended for objects already journaled to the configured
journal. For more information, see “Forcing objects to use the configured journal”
on page 346.
7. To start journaling for the physical file associated with the selected data group,
press Enter.
The system returns a message to confirm the operation was successful.
3. The End Journaling File Entries (ENDJRNFE) display appears. If you want to end
journaling for all files in the library, specify *ALL for the System 1 file prompt.
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is ended on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will end journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. To end journaling, press Enter.
Journaling for IFS objects
IFS tracking entries are loaded for a data group after they are configured for
replication through the user journal and the data group has been started. However,
loading IFS tracking entries does not automatically start journaling on the IFS objects
they identify. In order for replication to occur, journaling must be started on the source
system for the IFS objects identified by IFS tracking entries using the journal defined
in the data group definition.
This topic includes procedures to display journaling status, and to start, end, or verify
journaling for IFS objects identified for replication through the user journal.
You should be aware of the information in “Long IFS path names” on page 117.
3. From the Work with DG IFS Trk. Entries display, type a 9 (Start journaling) next to
the IFS tracking entries you want. Then do one of the following:
• To start journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
4. The Start Journaling IFS Entries (STRJRNIFSE) display appears. The Data group
definition and IFS objects prompts identify the IFS object associated with the
tracking entry you selected. You cannot change the values shown for the IFS
objects prompts1.
5. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is started on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will start journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
6. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
7. Optional: If the file is journaled to a journal that is different than the configured
journal and you have determined that it is acceptable to force journaling to the configured
journal, press F10 (Additional parameters) to display the Force to configured
journal (FORCE) prompt.
Change the FORCE prompt to *YES to end journaling to a journal that is different
than the configured journal and then start journaling using the configured journal.
This value will also attempt to start journaling for objects not currently journaled.
Journaling will not be ended for objects already journaled to the configured
journal. For more information, see “Forcing objects to use the configured journal”
on page 346.
8. The System 1 file identifier and System 2 file identifier prompts identify the file
identifier (FID) of the IFS object on each system. You cannot change the values2.
9. To start journaling on the IFS objects specified, press Enter.
1. When the command is invoked from a command line, you can change values specified for
the IFS objects prompts. Also, you can specify as many as 300 object selectors by using the
+ for more values prompt.
2. When the command is invoked from a command line, use F10 to see the FID prompts. Then
you can optionally specify the unique FID for the IFS object on either system. The FID values
can be used alone or in combination with the IFS object path name.
To end journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as
described in “Displaying journaling status for IFS objects” on page 350.
2. From the Work with DG IFS Trk. Entries display, type a 10 (End journaling) next to
the IFS tracking entries you want. Then do one of the following:
• To end journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The End Journaling IFS Entries (ENDJRNIFSE) display appears. The Data group
definition and IFS objects prompts identify the IFS object associated with the
tracking entry you selected. You cannot change the values shown for the IFS
objects prompts1.
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is ended on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will end journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier identify the file identifier
(FID) of the IFS object on each system. You cannot change the values shown2.
7. To end journaling on the IFS objects specified, press Enter.
group definition and IFS objects prompts identify the IFS object associated with
the tracking entry you selected. You cannot change the values shown for the IFS
objects prompts1.
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN is specified, MIMIX considers whether the data group is
configured for journaling on the target system (JRNTGT) and verifies journaling on
the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier identify the file identifier
(FID) of the IFS object on each system. You cannot change the values shown2.
7. To verify journaling on the IFS objects specified, press Enter.
For more information, see “Using file identifiers (FIDs) for IFS objects” on page 317.
Journaling for data areas and data queues
Object tracking entries are loaded for a data group after they are configured for
replication through the user journal and the data group has been started. However,
loading object tracking entries does not automatically start journaling on the objects
they identify. In order for replication to occur, journaling must be started on the source system for the objects identified by object tracking entries, using the journal defined in the data group definition.
This topic includes procedures to display journaling status, and to start, end, or verify
journaling for data areas and data queues identified for replication through the user
journal.
• To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The End Journaling Obj Entries (ENDJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
If journaling is ended on the source system, a journal entry will be generated into
the user journal. As a result, replication processes will end journaling for these
objects on the target system if the data group definition specifies *YES for Journal
on target (JRNTGT).
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. To end journaling on the objects specified, press Enter.
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. To verify journaling on the objects specified, press Enter.
Configuring for improved performance
This chapter describes how to modify your configuration to use advanced techniques
to improve journal performance and MIMIX performance.
Journal performance: The following topics describe how to improve journal
performance:
• “Minimized journal entry data” on page 359 describes benefits of and restrictions
for using minimized user journal entries for *FILE and *DTAARA objects. A
discussion of large object (LOB) data in minimized entries and configuration
information are included.
• “Configuring database apply caching” on page 361 describes benefits of and how
to configure MIMIX functionality for database apply caching.
• “Configuring for high availability journal performance enhancements” on page 362 describes how journal caching and journal standby state within MIMIX support the Journal Standby feature and journal caching provided by IBM’s High Availability Journal Performance feature (IBM i option 42). Requirements and restrictions are included.
MIMIX performance: The following topics describe how to improve MIMIX
performance:
• “Immediately applying committed transactions” on page 367 describes the
benefits and limitations of both immediate and delayed commit modes for the
database apply process.
• “Optimizing access path maintenance” on page 370 describes methods available
and how each can be used to improve performance for database apply
processes.
• “Caching extended attributes of *FILE objects” on page 369 describes how to
change the maximum size of the cache used to store extended attributes of *FILE
objects replicated from the system journal.
• “Increasing data returned in journal entry blocks by delaying RCVJRNE calls” on
page 377 describes how you can improve object send performance by changing
the size of the block of data from a receive journal entry (RCVJRNE) call and
delaying the next call based on a percentage of the requested block size.
• “Configuring high volume objects for better performance” on page 380 describes
how to change your configuration to improve system journal performance.
• “Improving performance of the #MBRRCDCNT audit” on page 381 describes how
to use the CMPRCDCNT commit threshold policy to limit comparisons and
thereby improve performance of this audit in environments which use commitment
control.
Minimized journal entry data
RPTs requires the presence of a full, non-minimized, record.
See the IBM book, Backup and Recovery, for restrictions and usage of journal entries with minimized entry-specific data.
Configuring database apply caching
Configuring for high availability journal performance
enhancements
MIMIX supports the Journal Standby feature and journal caching provided by IBM’s High Availability Journal Performance feature (IBM i option 42). These high availability performance enhancements improve replication performance on the target system and eliminate the need to start journaling at switch time.
MIMIX support of IBM’s high availability performance enhancements consists of two
independent components: journal standby state and journal caching. These
components work individually or together, but are enabled separately.
Journal standby state minimizes replication impact on the target system by providing
the benefits of an active journal without writing the journal entries to disk. This is
particularly helpful in saving disk space in environments that do not rely on journal
entries for other purposes.
Journal caching enables the system to cache journal entries and their corresponding
database records into main storage and write to disks only as necessary. Journal
caching is particularly helpful during batch operations when large numbers of add,
update, and delete operations against journaled objects are performed.
Journal standby state and journal caching can be used in source send configuration
environments as well as in environments where remote journaling is enabled. For
restrictions of MIMIX support of IBM’s high availability performance enhancements,
see “Restrictions of high availability journal performance enhancements” on
page 364.
Note: For more information, also see the topics on journal management and system
performance in the IBM eServer iSeries Information Center.
Journal caching
Journal caching can be used in replication environments as well as by journals used
internally by MIMIX. Journal caching is an attribute of the journal that is defined in the
journal definition. When journal caching is enabled, the system caches journal entries
and their corresponding database records into main storage. This means that neither
the journal entries nor their corresponding database records are written to disk until
an efficient disk write can be scheduled. This usually occurs when the buffer is full or
at the first commit, close, or force end of data. Because most database transactions
must no longer wait for a synchronous write of the journal entries to disk, the
performance gain can be significant.
For example, batch operations must usually wait for each new journal entry to be
written to disk. Journal caching can be helpful during batch operations when large
numbers of add, update, and delete operations against journaled objects are
performed.
For more information about journal caching, see the IBM Redbooks technote “Journal Caching: Understanding the Risk of Data Loss”.
“Configuring journal standby state” on page 365, and “Configuring journal caching” on
page 365.
When journaling is used on the target system, the TGTSTATE parameter specifies the
requested status of the target journal. Valid values for the TGTSTATE parameter are
*ACTIVE and *STANDBY. When *ACTIVE is specified and the data group associated
with the journal definition is journaled on the target system (JRNTGT(*YES)), the
target journal state is set to active when the data group is started. When *STANDBY is
specified, objects are journaled on the target system, but most journal entries are
prevented from being deposited into the target journal.
The JRNCACHE parameter specifies whether the system should cache journal
entries in main storage before writing them to disk. Valid values for the JRNCACHE
parameter are *TGT, *BOTH, *NONE, or *SRC. The default value is *NONE, which prevents unintentional usage charges.
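For example, assuming a journal definition named MYAPP for target system SYSB, a change such as the following would request standby state on the target and journal caching on the target (the names and the JRNDFN keyword are illustrative):
CHGJRNDFN JRNDFN(MYAPP SYSB) TGTSTATE(*STANDBY) JRNCACHE(*TGT)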
b. The Build Journaling Environment (BLDJRNENV) panel is displayed. Specify
*JRNDFN for the Source for values (JRNVAL) parameter and press Enter.
Immediately applying committed transactions
In immediate commit mode, it is possible that applied entries may be rolled back once
all the journal entries in the commit cycle are applied. At any time while entries in the
commit cycle are being processed, the target system may contain partial data or extra
data that would not be available in delayed mode. This can be a concern if you use
data on the target system for more than high availability or disaster recovery, such as
for running backups or reports or for supporting cascading environments.
Caching extended attributes of *FILE objects
Optimizing access path maintenance
MIMIX provides the ability to improve the performance of database apply processes
by delaying access path maintenance. Leveraging the ability to change the Access
path maintenance (MAINT) attribute on files removes responsibility for access path
maintenance from the regular jobs for database apply sessions, thereby allowing
those jobs to process journal entries more efficiently.
The service pack level installed determines which of the following optimization
methods is available for use:
• For installations running service pack 7.1.15.00 or higher, the available method is
“Optimizing access path maintenance on service pack 7.1.15.00 or higher” on
page 370.
• For installations running earlier version 7.1 service packs, the available method is
“Using parallel access path maintenance on earlier service packs” on page 374.
Operation
When the APMNT policy is enabled, starting the database apply process for a data
group also starts an asynchronous access path maintenance job that remains active
when the apply process is active. When the database apply process opens a physical
file to apply a replicated transaction, the apply process also checks whether the file
and any associated logical files affected by the transaction are eligible for access path
maintenance optimization.
For eligible files, the apply process changes the access path maintenance (MAINT)
attribute from *IMMED to *DLY and keeps track of the number of record changes to
the physical file. When the record count exceeds 100 records and a predetermined
threshold (five percent of the file records being applied), the apply process requests
that the access path maintenance job “catch up” on the delayed maintenance associated
with that physical file. The access path maintenance job performs delayed
maintenance on eligible files, using additional transient jobs if needed. The file’s
MAINT attribute is changed back to *IMMED when the apply process closes the file or
when the apply process ends.
Note: Any eligible files that were already set to *DLY before being opened by the
apply process will remain set to *DLY after the apply process closes the files.
MIMIX tracks any failed attempts to change a file’s MAINT attribute back to *IMMED
but does not report these as errors on the associated data group file entry while the
data group is active.
When the database apply process ends, the last apply session to end notifies the
access path maintenance job, then ends. The access path maintenance job uses
additional jobs, if needed, to change the access path maintenance attribute to
*IMMED on all files that MIMIX had previously changed to *DLY. Any failed requests
to change the MAINT attribute are retried. Before the maintenance jobs end, any files
that could not be changed to *IMMED are identified as having an access path
maintenance failure on the associated data group file entry.
Error recovery
If any access path maintenance errors exist when a data group (or the database apply
process) ends, MIMIX attempts to recover any access path maintenance errors the
next time the data group (or database apply process) is started.
If these attempts fail during start operations, MIMIX will attempt to change the
maintenance attribute to *IMMED the next time the data group (or the database apply
process) ends.
If errors exist for a physical file and associated logical files, the physical files are
addressed first.
For persistent access path maintenance errors, you can also manually retry changing
the MAINT attribute using option 40 from the Work with DG File Entries display or the
Retry Access Path Maint. Files (RTYAPMNT) command.
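For example, assuming a data group named ABC between systems SYSA and SYSB, a manual retry might be requested as follows (the DGDFN parameter keyword is an assumption):
RTYAPMNT DGDFN(ABC SYSA SYSB)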
Job status
The persistent access path maintenance job is included in the summary of processes
whose status is reported in the Target DB column on the Work with Data Groups
display. When all other processes reported in this column are active but access path
maintenance is enabled and does not have at least one active job, Partial status is
displayed.
In detailed status for a data group, status for access path maintenance is displayed in
the AP Maint field on the merged view and database views 1 and 2 when the APMNT
policy is enabled. The following status values are possible:
A (Active) - One or more access path maintenance jobs exist.
I (Inactive) - No access path maintenance jobs exist. The APMNT policy is enabled.
U (Unknown) - An unknown error occurred.
Error status
The number of logical (LF) and physical (PF) files that have access path maintenance
failures for a data group is included in the number of errors specified in the DB Errors
column on the Work with Data Groups display. Option 12 (Files needing attention)
provides access to detailed information for the file entries associated with replication
errors and access path maintenance errors.
Only replication errors appear on the initial view of the Work with DG File Entries
display. Therefore, you must use F10 multiple times to see the view showing the AP
Maint. Status column. The value in this column identifies status of access path
maintenance processing for the file identified by the data group file entry. The APMNT
policy in effect determines whether the database apply process can optimize access
path maintenance.
If an access path maintenance error exists for a physical file or a logical file that is
identified by a data group file entry, the error status is *FAILED. If an access path
maintenance error exists on a logical file which does not have a data group file entry,
the error status *FAILEDLF is reported on the file entry for its associated physical
file. Therefore, a file entry for a physical file may have errors for itself and for multiple associated logical files which are not identified by file entries. When this scenario occurs, the *FAILED status for the physical file takes precedence and is displayed; when that error is resolved, the *FAILEDLF status is displayed.
When one of the files included in a join logical file is not associated with a file entry and
an access path maintenance error occurs, all of the file entries associated with the
join logical are tracked as errors if they are not already in error. The error cannot be
reported on the join files that are not represented by file entries.
Table 41. Possible access path maintenance status values for data group file entries

*AVAILABLE - The file is eligible for access path maintenance. The policy in effect allows the database apply process to temporarily delay access path maintenance for the file on the target system.
*FAILED - Access path maintenance failed for the file. The failure occurred while resetting access path maintenance for the file from delayed (*DLY) to immediate (*IMMED).
*FAILEDLF - Access path maintenance failed for a logical file associated with the file. The failure occurred while resetting access path maintenance for the logical file from delayed (*DLY) to immediate (*IMMED).
*NOTALW - MIMIX cannot perform access path maintenance for the file because the operating system does not allow it.
Table 42. Parallel AP maintenance (PRLAPMNT) policy. This policy is available only on installations running service packs below 7.1.15.00.

Method
Specifies the method by which the parallel access path maintenance function is implemented. The shipped default for the installation level policy is *NONE.
• *NONE—The parallel access path maintenance function is not used. The values specified for all other elements are ignored.
• *AUTO—All eligible access paths are automatically assigned to access path maintenance jobs and are applied in parallel.
• *MANUAL—The access paths to be maintained in parallel are specified manually. Use this method only under the direction of a certified MIMIX representative.

Number of jobs
Specifies the number of parallel jobs to use for access path maintenance. The shipped default for the installation level policy is *CALC.
• *CALC—MIMIX calculates the number of parallel access path maintenance jobs to use, with a minimum of two jobs.
• number-of-jobs—Specifies the number of parallel access path maintenance jobs to use. Valid values range from 1 through 1000.

A third element specifies the number of seconds to wait between iterations of access path maintenance operations. The shipped default for the installation level policy is 60 seconds.
• number-of-seconds—Specifies the number of seconds to wait between iterations. Valid values range from 5 through 900 seconds.

A fourth element specifies the number of days to retain log records for the parallel access path maintenance function. The shipped default for the installation level policy is 1 day.
• *NONE—No logging is performed.
• number-of-days—Specifies the number of days to retain log records for parallel access path maintenance jobs. Valid values range from 1 through 365 days.

Note: All elements of the PRLAPMNT parameter support the value *INST (use the value for the installation). For data group level policies, this value is the shipped default. You can specify this value when a data group name or a value other than *INST is specified for the data group definition on the SETMMXPCY command.
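For example, a sketch of setting a data group level policy that enables the automatic method while leaving the remaining elements at their installation values might look like the following, assuming the elements are specified positionally in the order listed above (all names are illustrative):
SETMMXPCY DGDFN(ABC SYSA SYSB) PRLAPMNT(*AUTO *INST *INST *INST)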
Status reporting: The monitors associated with parallel access path maintenance
report monitor status on the target node of data group replication processes. When
the target node is the local node on which you view the Work with Application Groups
display, the summary of monitor status for the local system that is shown in the
Monitors field includes status of the monitors associated with parallel access path
maintenance.
Detailed status displays for the data group show the process status in the Prl AP Mnt
field in the Target Statistics section.
Increasing data returned in journal entry blocks by delaying RCVJRNE calls
Note: Delays are not applied to blocks larger than the specified medium block
percentage. In the previous example, no delays will be applied to blocks larger
than 30 percent of the RCVJRNE block size, or 60,000 bytes.
Configuring high volume objects for better performance
Some objects, such as data areas and data queues, can have significant activity against them and can cause MIMIX to use significant CPU resources.
One or several programs can use the QSNDDTAQ and QRCVDTAQ APIs to generate
thousands of journal entries for a single *DTAQ. For each journal entry, system journal replication processes package all of the entries of the *DTAQ and send the object to the apply system. MIMIX then individually applies each *DTAQ entry using the QSNDDTAQ API.
If the data group is configured for multiple Object retrieve processing (OBJRTVPRC)
jobs, then several object retrieve jobs could be started (up to the maximum
configured) to handle the activity against the *DTAQ.
MIMIX contains redundancy logic that eliminates multiple journal entries for the same
object when the entire object is replicated. When you configure a data group for
system journal replication, you should do the following (see the example command after this list):
• Place all *DTAQs in the same object-only data group
• Limit the maximum number of object retrieve jobs for the data group to one.
Defaults can be used for the other object data group jobs.
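For example, assuming an object-only data group named OBJDG between systems SYSA and SYSB, and assuming the first elements of the OBJRTVPRC parameter specify the minimum and maximum number of jobs, a sketch of such a change is:
CHGDGDFN DGDFN(OBJDG SYSA SYSB) OBJRTVPRC(1 1)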
Improving performance of the #MBRRCDCNT audit
Example: This example shows the result of setting the policy for a data group to a
value of 10,000. Table 43 shows the files replicated by each of the apply sessions
used by the data group and the result of comparison. Because of the number of
uncommitted record operations present at the time of the request, files processed by
apply sessions A and C are not compared.
CHAPTER 16 Configuring advanced replication techniques
retrieval delay value so that a MIMIX lock on an object does not interfere with your
applications. This topic includes several examples.
• “Configuring to replicate SQL stored procedures and user-defined functions” on
page 421 describes the requirements for replicating these constructs and how to configure MIMIX to replicate them.
• “Using Save-While-Active in MIMIX” on page 423 describes how to change the type of
save-while-active option to be used when saving objects. You can view and
change these configuration values for a data group through an interface such as
SQL or DFU.
Keyed replication
By default, MIMIX user journal replication processes use positional replication. You
can change from positional replication to keyed replication for database files.
Keyed replication is not supported in environments licensed for MIMIX DR.
You can use the Verify Key Attributes (VFYKEYATR) command to determine whether
a physical file is eligible for keyed replication. See “Verifying key attributes” on
page 389.
• Verify that you have the value you need specified for the Journal image
element of the File and tracking ent. options. *BOTH is recommended.
• File and tracking ent. options must specify *KEYED for the Replication type
element.
3. The files identified by the data group file entries for the data group must be eligible
for keyed replication. See topic “Verifying Key Attributes” in the MIMIX Operations
book.
4. If you have modified file entry options on individual data group file entries, you
need to ensure that the values used are compatible with keyed replication.
5. Start journaling for the file entries using “Starting journaling for physical files” on
page 347.
• Use topic “Changing a data group file entry” on page 282 to modify an
existing file entry.
5. The files identified by the data group file entries for the data group must be eligible
for keyed replication. See topic “Verifying Key Attributes” in the MIMIX Operations
book.
6. After you have changed individual data group file entries, you need to start
journaling for the file entries using “Starting journaling for physical files” on
page 347.
Verifying key attributes
Before you configure for keyed replication, verify that the file or files for which you want to use keyed replication are actually eligible.
Do the following to verify that the attributes of a file are appropriate for keyed
replication:
1. On a command line, type VFYKEYATR (Verify Key Attributes). The Verify Key
Attributes display appears.
2. Do one of the following:
• To verify a file in a library, specify a file name and a library.
• To verify all files in a library, specify *ALL and a library.
• To verify files associated with the file entries for a data group, specify
*MIMIXDFN for the File prompt and press Enter. Prompts for the Data group
definition appear. Specify the name of the data group that you want to check.
3. Press Enter.
4. A spooled file is created that indicates whether you can use keyed replication for
the files in the library or data group you specified. Display the spooled file
(WRKSPLF command) or use your standard process for printing. You can use
keyed replication for the file if *BOTH appears in the Replication Type Allowed
column. If a value appears in the Replication Type Defined column, the file is
already defined to the data group with the replication type shown.
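For example, to check all files associated with the file entries of a data group named ABC between systems SYSA and SYSB (the DGDFN prompt keyword and all names are assumptions):
VFYKEYATR FILE(*MIMIXDFN) DGDFN(ABC SYSA SYSB)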
Data distribution and data management scenarios
MIMIX supports a variety of scenarios for data distribution and data management
including bi-directional data flow, file combining, file sharing, and file merging. MIMIX
also supports data distribution techniques such as broadcasting, and cascading.
Often, this support requires a combination of advanced replication techniques as well
as customizing. These techniques require additional planning before you configure
MIMIX. You may need to consider the technical aspects of implementing a technique
as well as how your business practices may be affected. Consider the following:
• Can each system involved modify the data?
• Do you need to filter data before sending it to another system?
• Do you need to implement multiple techniques to accomplish your goal?
• Do you need customized exit programs?
• Do any potential collision points exist and how will each be resolved?
MIMIX user journal replication provides filtering options within the data group
definition. Also, MIMIX provides options within the data group definition and for
individual data group file entries for resolving most collision points. Additionally,
collision resolution classes allow you to specify different resolution methods for each
collision point.
• A data group (DG) definition is unique to its three part name (Name, System 1,
System 2). This allows two DG definitions to be configured to share the same data
group name with system 1 and system 2 reversed. Both DG definitions must specify the same value for the Data source (DTASRC) parameter. For
example, in the following table both DG definitions use DataGroup1 as the data
group name and both specify *SYS1 (System 1) as their DTASRC. This results in
one DG definition that replicates from A to B, while the other replicates from B to
A.
Table 44. Example of DG definitions with reversed system names for bi-directional replication.

Data group name   System 1   System 2
DataGroup1        A          B
DataGroup1        B          A
• Each data group definition should specify *NO for the Allow to be switched (ALWSWT) parameter. (Example commands for creating these definitions appear after this list.)
Note: In system journal replication, MIMIX does not support simultaneous updates to
the same object on multiple systems and does not support conflict resolution
for objects. Once an object is replicated to a target system, system journal
replication processes prevent looping by not allowing the same object,
regardless of name mapping, to be replicated back to its original source
system.
• For each data group definition, set the DB journal entry processing (DBJRNPRC)
parameter so that its Generated by MIMIX element is set to *IGNORE. This
prevents any journal entries that are generated by MIMIX from being sent to the
target system and prevents looping.
• The files defined to each data group must be configured for keyed replication. Use
topics “Keyed replication” on page 385 and “Verifying key attributes” on page 389
to determine if files can use keyed replication.
Note: In order for bi-directional keyed replication to work correctly, the data
group names must be the same with the System 1 and System 2 values
reversed.
• Analyze your environment to determine the potential collision points in your data.
You need to understand how each collision point will be resolved. Consider the
following:
– Can the collision be resolved using the collision resolution methods provided in
MIMIX or do you need customized exit programs? See “Collision resolution” on
page 408.
– How will your business practices be affected by collision scenarios?
For example, say that you have an order entry application that updates shared
inventory records such as the one shown in Figure 19. If two locations attempt to access the last item in
stock at the same time, which location will be allowed to fill the order? Does the other
location automatically place a backorder or generate a report?
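To illustrate the configuration points above, the two data group definitions shown in Table 44 might be created with commands like the following. This is a minimal sketch, not the complete procedure: the CRTDGDFN command and the DGDFN, DTASRC, and ALWSWT keywords correspond to the parameters named in this topic, and other required parameters are omitted here:
CRTDGDFN DGDFN(DATAGROUP1 A B) DTASRC(*SYS1) ALWSWT(*NO)
CRTDGDFN DGDFN(DATAGROUP1 B A) DTASRC(*SYS1) ALWSWT(*NO)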
File combining is a scenario in which all or partial information from files on multiple
systems can be sent to and combined in a single file on a target system. In its user
journal replication processes, MIMIX implements file combining between multiple
source systems and a target system that are defined to the same MIMIX installation.
MIMIX determines what data from the multiple source files is sent to the target system
based on the contents of a journal transaction. An example of file combining is when
many locations within an enterprise update a local file and the updates from all local
files are sent to one location to update a composite file. The example in Figure 20
shows file combining from multiple source systems onto a composite file on the
management system.
To enable file combining between two systems, MIMIX user journal replication must
be configured as follows:
• Configure the data group definition for keyed replication. See topic “Keyed
replication” on page 385.
• If only part of the information from the source system is to be sent to the target
system, you need an exit program to filter out transactions that should not be sent
to the target system.
• If you allow the data group to be switched (by specifying *YES for Allow to be
switched (ALWSWT) parameter) and a switch occurs, the file combining operation
effectively becomes a file routing operation. To ensure that the data group will
perform file combining operations after a switch, you need an exit program that
allows the appropriate transactions to be processed regardless of which system is
acting as the source for replication.
• After the combining operation is complete, if the combined data will be replicated
or distributed again, you need to prevent it from returning to the system on which it
originated.
File routing is a scenario in which information from a single file can be split and sent
to files on multiple target systems. In user journal replication processes, MIMIX
implements file routing between a source system and multiple target systems that are
defined to the same MIMIX installation. To enable file routing, MIMIX calls a user exit
program that makes the file routing decision. The user exit program determines what
data from the source file is sent to each of the target systems based on the contents
of a journal transaction. An example of file routing is when one location within an
enterprise performs updates to a file for all other locations, but only updated
information relevant to a location is sent back to that location. The example in Figure
21 shows the management system routing only the information relevant to each
network system to that system.
To enable file routing, MIMIX user journal replication processes must be configured as
follows:
• Configure the data group definition for keyed replication. See topic “Keyed
replication” on page 385.
• The data group definition must call an exit program that filters transactions so that
only those transactions which are relevant to the target system are sent to it.
• If you allow the data group to be switched (by specifying *YES for Allow to be
switched (ALWSWT) parameter) and a switch occurs, the file routing operation
effectively becomes a file combining operation. To ensure that the data group will
perform file routing operations after a switch, you need an exit program that allows
the appropriate transactions to be processed regardless of which system is acting
as the source for replication.
This is a cascading scenario because changes that originate on the Hong Kong system pass through an
intermediate system (Chicago) before being distributed to the Mexico City system and
other network systems in the MIMIX installation. Exit programs are required for the
data groups acting between the management system and the destination systems
and need to prevent updates from flowing back to their system of origin.
Figure 23. Bi-directional example that implements cascading for file distribution.
Trigger support
A trigger program is a user exit program that is called by the database when a
database modification occurs. Trigger programs can be used to make other database
modifications, which are called trigger-induced database modifications.
This is because the database apply process checks each transaction before
processing to see if filtering is required, and firing the trigger adds additional
overhead to database processing.
Constraint support
A constraint is a restriction or limitation placed on a file. There are four types of
constraints: referential, unique, primary key and check. Unique, primary key and
check constraints are single file operations transparent to MIMIX. If a constraint is met
for a database operation on the source system, the same constraint will be met for the
replicated database operation on the target. Referential constraints, however, ensure
the integrity between multiple files. For example, you could use a referential constraint
to:
• Ensure when an employee record is added to a personnel file that it has an
associated department from a company organization file.
• Empty a shopping cart and remove the order records if an internet shopper exits
without placing an order.
When constraints are added, removed or changed on files replicated by MIMIX, these
constraint changes will be replicated to the target system. With the exception of files
that have been placed on hold, MIMIX always enables constraints and applies
constraint entries. MIMIX tolerates mismatched before images or minimized journal
entry data CRC failures when applying constraint-generated activity. Because the
parent record was already applied, entries with mismatched before images are
applied and entries with minimized journal entry data CRC failures are ignored. To
use this support:
• Ensure that your target system is at the same or a later release level than the
source system so that the target system is able to use all of the IBM i function
that is available on the source system. If an earlier IBM i level is installed on the
target system, the operation will be ignored.
• You must have your MIMIX environment configured for either MIMIX Dynamic
Apply or legacy cooperative processing.
Referential constraint handling for these dependent files is supported through the
replication of constraint-induced modifications.
MIMIX does not provide the ability to disable constraints because IBM i would check
every record in the file to ensure constraints are met once the constraint is re-
enabled. This would cause a significant performance impact on large files and could
impact switch performance. If the need exists, this can be done through automation.
Handling SQL identity columns
Detailed technical descriptions of all attributes are available in the IBM eServer
iSeries Information Center. Look in the Database section for the SQL Reference for
CREATE TABLE and ALTER TABLE statements.
Not supported - The following scenarios are known to be problematic and are not
supported. If you cannot use the SETIDCOLA command in your environment,
consider the “Alternative solutions” on page 403.
• Columns that have cycled - If an identity column allows cycling and adding a row
increments its value beyond the maximum range, the restart value is reset to the
beginning of the range. Because cycles are allowed, the assumption is that
duplicate keys will not be a problem. However, unexpected behavior may occur
when cycles are allowed and old rows are removed from the table with a
frequency such that the identity column values never actually complete a cycle. In
this scenario, the ideal starting point would be wherever there is the largest gap
between existing values. The SETIDCOLA command cannot address this
scenario; it must be handled manually.
• Rows deleted on production table - An application may require that an identity
column value never be generated twice. For example, the value may be stored in
a different table, data area or data queue, given to another application, or given to
a customer. The application may also require that the value always locate either
the original row or, if the row is deleted, no row at all. If rows with values at the end
of the range are deleted and you perform a switch followed by the SETIDCOLA
command, the identity column values of the deleted rows will be re-generated for
newly inserted rows. The SETIDCOLA command is not recommended for this
environment. This must be handled manually.
• No rows in backup table - If there are no rows in the table on the backup system,
the restart value will be set to the initial start value. Running the SETIDCOLA
command on the backup system may result in re-generating values that were
previously used. The SETIDCOLA command cannot address this scenario; it
must be handled manually.
• Application generated values - Optionally, applications can supply identity column
values at the time they insert rows into a table. These application-generated
identity values may be outside the minimum and maximum values set for the
identity column. For example, a table’s identity column range may be from 1
through 100,000,000 but an application occasionally supplies values in the range
of 200,000,000 through 500,000,000. If cycling is permitted and the SETIDCOLA
command is run, the command would recognize the higher values from the
application and would cycle back to the minimum value of 1. Because the result
would be problematic, the SETIDCOLA command is not recommended for tables
which allow application-generated identity values. This must be handled manually.
Alternative solutions
If you cannot use the SETIDCOLA command because of its known limitations, you
have these options.
Manually reset the identity column starting point: Following a switch to the
backup system, you can manually reset the restart value for tables with identity
columns. The SQL statement ALTER TABLE name ALTER COLUMN can be used for
this purpose.
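For example, assuming a hypothetical table INVENTORY in library APPLIB whose identity column ITEM_ID should next generate the value 1000, the statement would be:
ALTER TABLE APPLIB/INVENTORY ALTER COLUMN ITEM_ID RESTART WITH 1000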
Convert to SQL sequence objects: To overcome the limitations of identity column
switching and to avoid the need to use the SETIDCOLA command, SQL sequence
objects can be used instead of identity columns. Sequence objects are implemented
using a data area which can be replicated by MIMIX. The data area for the sequence
object must be configured for replication through the user journal (cooperatively
processed).
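As an illustration, a sequence and an insert that uses it might look like the following; the library, sequence, table, and column names are assumptions:
CREATE SEQUENCE APPLIB/ORDERSEQ START WITH 1 INCREMENT BY 1
INSERT INTO APPLIB/ORDERS (ORDERID) VALUES(NEXT VALUE FOR APPLIB/ORDERSEQ)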
Usage notes
• The reason you are using this command determines which system you should run
it from. See “When the SETIDCOLA command is useful” on page 402 for details.
• The command can be invoked manually or as part of a MIMIX Model Switch
Framework custom switching program. Evaluation of your environment to
determine an appropriate increment value is highly recommended before using
the command.
• This command can be long running when many files defined for replication by the
specified data group contain identity columns. This is especially true when
affected identity columns do not have indexes over them or when they are
referenced by constraints. Specifying a higher number of jobs (JOBS) can reduce
this time.
• This command creates a work library named SETIDCOLA which is used by the
command. The SETIDCOLA library is not deleted so that it can be used for any
error analysis.
• Internally, the SETIDCOLA command builds RUNSQLSTM scripts (one for each
job specified) and uses RUNSQLSTM in spawned jobs to execute the scripts.
RUNSQLSTM produces spooled files showing the ALTER TABLE statements
executed, along with any error messages received. If any statement fails, the
RUNSQLSTM command also fails, returning the failing status to the job where
SETIDCOLA is running, and an escape message is issued.
• Scenario 1. You performed a planned switch for test purposes. Because
replication of all transactions completed before the switch and no users have been
allowed on the backup system, the backup system has the same values as the
production. Before starting replication in the reverse direction you run the
SETIDCOLA command with an INCREMENTS value of 1. The next rows added to
table A and B will have values of 76 and 31,000, respectively.
• Scenario 2. You performed an unplanned switch. From previous experience, you
know that the latency of changes being transferred to the backup system is
approximately 15 minutes. Rows are inserted into Table A at the highest rate. In
15 minutes, approximately 150 rows will have been inserted into Table A (600
rows/hour * 0.25 hours). This suggests an INCREMENTS value of 150. However,
since all measurements are approximations or based on historical data, this
amount should be increased by at least 100%, to 300, to ensure that
duplicate identity column values are not generated on the backup system. The
next rows added to table A and B will have values of 75+(300*1) = 375 and 30,000
+ (300*1000)= 330,000 respectively.
As described in “When the SETIDCOLA command is useful” on page 402, specify a data group and the number of increments to skip in the command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*SET)
INCREMENTS(number)
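For example, using the adjusted increment of 300 from Scenario 2 above with a hypothetical data group named MYDGDFN between SYS1 and SYS2:
SETIDCOLA DGDFN(MYDGDFN SYS1 SYS2) ACTION(*SET) INCREMENTS(300)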
Collision resolution
Collision resolution is a function within MIMIX user journal replication that
automatically resolves detected collisions without user intervention. MIMIX supports
the following choices for collision resolution that you can specify in the file entry
options (FEOPT) parameter in either a data group definition or in an individual data
group file entry:
• Held due to error: (*HLDERR) This is the default value for collision resolution in
the data group definition and data group file entries. MIMIX flags file collisions as
errors and places the file entry on hold. Any data group file entry for which a
collision is detected is placed in a "held due to error" state (*HLDERR). This
results in the journal entries being replicated to the target system but they are not
applied to the target database. If the file entry specifies member *ALL, a
temporary file entry is created for the member in error and only that file entry is
held. Normal processing will continue for all other members in the file. You must
take action to apply the changes and return the file entry to an active state. When
held due to error is specified in the data group definition or the data group file
entry, it is used for all 12 of the collision points.
• Automatic synchronization: (*AUTOSYNC) MIMIX attempts to automatically
synchronize file members when an error is detected. The member is put on hold
while the database apply process continues with the next transaction. The file
member is synchronized using copy active file processing, unless the collision
occurred at the compare attributes collision point. In the latter case, the file is
synchronized using save and restore processing. When automatic
synchronization is specified in the data group definition or data group file entry, it
is used for all 12 of the collision points.
• Collision resolution class: A collision resolution class is a named definition
which provides more granular control of collision resolution. Some collision points
also provide additional methods of resolution that can only be accessed by using
a collision resolution class. With a defined collision resolution class, you can
specify how to handle collision resolution at each of the 12 collision points. You
can specify multiple methods of collision resolution to attempt at each collision
point. If the first method specified does not resolve the problem, MIMIX uses the
next method specified for that collision point.
• You must specify either *AUTOSYNC or the name of a collision resolution class
for the Collision resolution element of the File entry option (FEOPT) parameter.
Specify the value as follows:
– If you want to implement collision resolution for all files processed by a data
group, specify a value in the parameter within the data group definition.
– If you want to implement collision resolution for only specific files, specify a
value in the parameter within an individual data group file entry.
Note: Ensure that data group activity is ended before you change a data group
definition or a data group file entry.
• If you plan to use an exit program for collision resolution, you must first create a
named collision resolution class. In the collision resolution class, specify
*EXITPGM for each of the collision points that you want to be handled by the exit
program and specify the name of the exit program.
7. At the Number of retry attempts prompt, specify the number of times to try to
automatically synchronize a file. If this number is exceeded in the time specified in
the Retry time limit, the file will be placed on hold due to error.
8. At the Retry time limit prompt, specify the maximum number of hours to retry a
process if a failure occurs due to a locking condition or an in-use condition.
Note: If a file encounters repeated failures, an error condition that requires
manual intervention is likely to exist. Allowing excessive synchronization
requests can cause communications bandwidth degradation and
negatively impact communications performance.
9. To create the collision resolution class, press Enter.
Printing a collision resolution class
Use this procedure to create a spooled file of a collision resolution class which you
can print.
1. From the Work with CR Classes display, type a 6 (Print) next to the collision
resolution class you want and press Enter.
2. A spooled file is created with the name MXCRCLS on which you can use your
standard printing procedure.
Changing target side locking for DBAPY processes
Changing target side locking for a data group object entry or file entry
To change target side locking for a data group object entry or file entry, do the
following from a management system:
1. From the Work with Data Group Definitions display, do one of the following to
access configured entries in the data group you want:
• Use option 20 (Object entries) to access configured data group object entries
for library-based objects.
• Use option 17 (File entries) to access configured data group file entries. Use
this only for data groups configured to use database-only replication
processes.
2. On the resulting display, press F10 (Additional parameters), then press Page
Down multiple times to locate the Lock member during apply prompt. (It is an
element of the File and tracking ent. opts (FEOPT) parameter.)
3. Specify the value you want for the Lock member during apply prompt.
4. Press Enter.
The change is not effective until replication processes for the data group are ended
and started again.
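For example, assuming the ENDDG and STRDG commands are run with only the data group name specified and other parameters left at their defaults, a hypothetical restart sequence is:
ENDDG DGDFN(MYDGDFN SYS1 SYS2)
STRDG DGDFN(MYDGDFN SYS1 SYS2)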
Omitting T-ZC content from system journal replication
Table 46. T-ZC journal entry access types generated by file objects. These T-ZC journal entries are eligible for replication through the system journal. (Table columns: Access Type; Access Type Description; Operation Type (File, Member, Data); Operations that Generate T-ZC Access Type.)
By default, MIMIX replicates file attributes and file member data for all T-ZC entries
generated for logical and physical files configured for system journal replication. While
MIMIX recreates attribute changes on the target system, member additions and data
changes require MIMIX to replicate the entire object using save, send, and restore
processes. This can cause unnecessary replication of data and can impact
processing time, especially in environments where the replication of file data
transactions is not necessary.
Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data
group object entry commands, you can specify a predetermined set of access types
for *FILE objects to be omitted from system journal replication. T-ZC journal entries
with access types within the specified set are omitted from processing by MIMIX.
The OMTDTA parameter is useful when a file or member’s data does not need to be
replicated. For example, when replicating work files and temporary files, it may be
desirable to replicate the file layout but not the file members or data. The OMTDTA
parameter can also help you reduce the number of transactions that require
substantial processing time to replicate, such as T-ZC journal entries with access type
30 (Open).
Each of the following values for the OMTDTA parameter define a set of access types
that can be omitted from replication:
*NONE - No T-ZCs are omitted from replication. All file, member, and data
operations in transactions for the access types listed in Table 46 are replicated.
This is the default value.
*MBR - Data operations are omitted from replication. File and member operations
in transactions for the access types listed in Table 46 are replicated, including
access type 7 (Change) operations for both files and members.
*FILE - Member and data operations are omitted from replication. Only file
operations in transactions for the access types listed in Table 46 are replicated.
Only file operations in transactions with access type 7 (Change) are replicated.
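For example, to replicate the layout of work files in a hypothetical library WORKLIB while omitting member data changes from replication, a data group object entry might specify OMTDTA(*MBR) as follows:
ADDDGOBJE DGDFN(name system1 system2) LIB1(WORKLIB) OBJ1(*ALL) OBJTYPE(*FILE) OMTDTA(*MBR)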
For all library-based objects, MIMIX evaluates the object auditing level when starting
a data group after a configuration change. If the configured value specified for the
OBJAUD parameter is higher than the object’s actual value, MIMIX will change the
object to use the higher value. If you use the SETDGAUD command to force the
object to have an auditing level of *NONE and the data group object entry also
specifies *NONE, any changes to the file will no longer generate T-ZC entries in the
system journal. For more information about object auditing, see “Managing object
auditing” on page 60.
Object attribute considerations - When MIMIX evaluates a system journal entry
and finds a possible match to a data group object entry which specifies an attribute in
its Attribute (OBJATR) parameter, MIMIX must retrieve the attribute from the object in
order to determine which object entry is the most specific match.
If the object attribute is not needed to determine the most specific match to a data
group object entry, it is not retrieved.
After determining which data group object entry has the most specific match, MIMIX
evaluates that entry to determine how to proceed with the journal entry. When the
matching object entry specifies *FILE or *MBR for OMTDTA, MIMIX does not need to
consider the object attribute in any other evaluations. As a result, the performance of
the object send job may improve.
According to the configuration information, the files are synchronized between source and target systems, but the
files are not the same.
A similar situation can occur when OMTDTA is used to prevent replication of
predetermined types of changes. For example, if *MBR is specified for OMTDTA, the
file and member attributes are replicated to the target system but the member data is
not. The file is not identical between source and target systems, but it is synchronized
according to configuration. Comparison commands will report these attributes as *EC
(equal configuration) even though member data is different. MIMIX audits, which call
comparison commands with a data group specified, will have the same results.
Running a comparison command without specifying a data group will report all the
synchronized-but-not-identical attributes as *NE (not equal) because no configuration
information is considered.
Consider how the following comparison commands behave when faced with non-
identical files that are synchronized according to the configuration.
• The Compare File Attributes (CMPFILA) command has access to configuration
information from data group object entries for files configured for system journal
replication. When a data group is specified on the command, files that are
configured to omit data will report those omitted attributes as *EC (equal
configuration). When CMPFILA is run without specifying a data group, the
synchronized-but-not-identical attributes are reported as *NE (not equal).
• The Compare File Data (CMPFILDTA) command uses data group file entries for
configuration information. As a result, when a data group is specified on the
command, any file objects configured for OMTDTA will not be compared. When
CMPFILDTA is run without specifying a data group, the synchronized-but-not-
identical file member attributes are reported as *NE (not equal).
• The Compare Object Attributes (CMPOBJA) command can be used to check for
the existence of a file on both systems and to compare its basic attributes (those
which are common to all object types). This command never compares file-
specific attributes or member attributes and should not be used to determine
whether a file is synchronized.
Selecting an object retrieval delay
• The Object Retrieve job encounters the create/change journal entry at 10:45:52. It
retrieves the “last change date/time” attribute from the object and determines that
the delay time (object last changed date/time of 10:45:51 + configured delay value
of :02 = 10:45:53) exceeds the current date/time (10:45:52). Because the object
retrieval delay value has not been met or exceeded, the object retrieve job delays for
1 second to satisfy the configured delay value.
• After the delay (at time 10:45:53), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 10:45:51 + configured delay value of :02 =
10:45:53) is equal to the current date/time (10:45:53). Because the object retrieval
delay value has been met, the object retrieve job continues with normal
processing and attempts to package the object.
Example 3 - The object retrieval delay value is configured to be 4 seconds:
• Object A is created or changed at 13:20:26.
• The Object Retrieve job encounters the create/change journal entry at 13:20:27. It
retrieves the “last change date/time” attribute from the object and determines that
the delay time (object last changed date/time of 13:20:26 + configured delay value
of :04 = 13:20:30) exceeds the current date/time (13:20:27) and delays for 3
seconds to satisfy the configured delay value.
• While the object retrieve job is waiting to satisfy the configured delay value, the
object is changed again at 13:20:28.
• After the delay (at time 13:20:30), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 13:20:28 + configured delay value of :04 =
13:20:32) again exceeds the current date/time (13:20:30) and delays for 2
seconds to satisfy the configured delay value.
• After the delay (at time 13:20:32), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 13:20:28 + configured delay value of :04 =
13:20:32) is equal to the current date/time (13:20:32). Because the object retrieval
delay value has now been met, the object retrieve job continues with normal
processing and attempts to package the object.
Configuring to replicate SQL stored procedures and user-defined functions
To replicate SQL stored procedure operations
Do the following:
1. Ensure that the replication requirements for the various operations are followed.
See “Requirements for replicating SQL stored procedure operations” on
page 421.
2. Ensure that you have a data group object entry that includes the associated
program object. For example:
ADDDGOBJE DGDFN(name system1 system2) LIB1(library)
OBJ1(*ALL) OBJTYPE(*PGM)
Using Save-While-Active in MIMIX
value will also use save-while-active. All other attempts to save the object will use a
normal save.
Note: Although MIMIX has the capability to replicate DLOs using save/restore
techniques, it is recommended that DLOs be replicated using optimized
techniques, which can be configured using the DLO transmission method
under Object processing in the data group definition.
Example configurations
The following examples describe the SQL statements that could be used to view or
set the configuration settings for a data group definition (data group name, system 1
name, system 2 name) of MYDGDFN, SYS1, SYS2.
Example - Viewing: Use this SQL statement to view the values for the data group
definition:
SELECT DGDGN, DGSYS, DGSYS2, DGSWAT FROM MIMIX/DM0200P WHERE
DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'
Example - Disabling: If you want to modify the values for a data group definition to
disable use of save-while-active for a data group and use a normal save, you could
use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=-1 WHERE DGDGN='MYDGDFN' AND
DGSYS='SYS1' AND DGSYS2='SYS2'
Example - Modifying: If you want to modify a data group definition to enable use of
save-while-active with a wait time of 30 seconds for files, DLOs and IFS objects, you
could use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=30 WHERE DGDGN='MYDGDFN' AND
DGSYS='SYS1' AND DGSYS2='SYS2'
Note: You only have to make this change on the management system; the network
system will be automatically updated by MIMIX.
CHAPTER 17
Object selection for Compare and Synchronize commands
Many of the Compare and Synchronize commands, which provide underlying support
for auditing, use an enhanced set of common parameters and a common processing
methodology that is collectively referred to as ‘object selection.’ Object selection
provides powerful, granular capability for selecting objects by data group, object
selection parameters, or a combination.
Table 47 identifies the commands and audits that use this object selection capability.
Table 47. Commands and audits that use MIMIX object selection. (Table columns: Commands; Audits. Note: Audits use object selection when submitted manually or as an automatically scheduled audit. Prioritized auditing does not use this method of object selection.)
Object selection process
The object selection process takes a candidate group of objects, subsets them as
defined by a list of object selectors, and produces a list of objects to be processed.
Figure 24 illustrates the process flow for object selection.
Candidate objects are those objects eligible for selection. They are input to the
object selection process. Initially, candidate objects consist of all objects on the
system. Based on the command, the set of candidate objects may be narrowed down
to objects of a particular class (such as IFS objects).
The values specified on the command determine the object selectors used to further
refine the list of candidate objects in the class. An object selector identifies an object
or group of objects. Object selectors can come from the configuration information for
a specified data group, from items specified in the object selector parameter, or both.
MIMIX processing for object selection consists of two distinct steps. Depending on
what is specified on the command, one or both steps may occur.
The first major selection step is optional and is performed only if a data group
definition is entered on the command. In that case, data group entries are the source
for object selectors. Data group entries represent one of four classes of objects: files,
library-based objects, IFS objects, and DLOs. Only those entries that correspond to
the class associated with the command are used. The data group entries subset the
list of candidate objects for the class to only those objects that are eligible for
replication by the data group.
Note: Only explicitly identified IFS objects and DLO objects that are eligible for
replication are included. The audits and commands which use this method of
object selection do not include any implicitly identified parent objects for IFS or
DLO objects.
If the command specifies a data group and items on the object selection parameter,
the data group entries are processed first to determine an intermediate set of
candidate objects that are eligible for replication by the data group. That intermediate
set is input to the second major selection step. The second step then uses the input
specified on the object selection parameter to further subset the objects selected by
the data group entries.
If no data group is specified on the data group definition parameter, the object
selection parameter can be used independently to select from all objects on the
system.
The second major object selection step subsets the candidate objects based on
Object selectors from the command’s object selector parameter (file, object, IFS
object, or DLO). Up to 300 object selectors may be specified on the parameter. If
none are specified, the default is to select all candidate objects.
Note: A single object selector can select multiple objects through the use of generic
names and special values such as *ALL, so the resulting object list can easily
exceed the limit of 300 object selectors that can be entered on a command.
The selection parameter is separate and distinct from the data group
configuration entries. If a data group is specified, the possible object selectors are 1
to N, where N is defined by the number of data group entries. The remaining
candidate objects make up the resultant list of objects to be processed.
Each object selector consists of multiple object selector elements, which serve as
filters on the object selector. The object selector elements vary by object class.
Elements provide information about the object such as its name, an indicator of
whether the objects should be included in or omitted from processing, and name
mapping for dual-system and single-system environments. See Table 48 for a list of
object selector elements by object class.
Parameters for specifying object selectors
Order precedence
Object selectors are always processed in a well-defined sequence, which is important
when an object matches more than one selector.
Selectors from a data group follow data group rules and are processed in most- to
least-specific order. Selectors from the object selection parameter are always
processed last to first. If a candidate object matches more than one object selector,
the last matching selector in the list is used.
As a general rule when specifying items on an object selection parameter, first specify
selectors that have a broad scope and then gradually narrow the scope in subsequent
selectors. In an IFS-based command, for example, include /A/B* and then omit /A/B1.
“Object selection examples” on page 434 illustrates the precedence of object
selection.
For each object selector, the elements are checked according to a priority defined for
the object class. The most specific element is checked for a match first, then the
subsequent elements are checked according to their priority. For additional, detailed
information about order precedence and priority of elements, see the following topics:
• “How MIMIX uses object entries to evaluate journal entries for replication” on
page 101
• “Identifying IFS objects for replication” on page 116
• “How MIMIX uses DLO entries to evaluate journal entries for replication” on
page 122
• “Processing variations for common operations” on page 129
For all classes of objects, you can specify as many as 300 object selectors. However,
the specific object selector elements that you can specify on the command are
determined by the class of object.
Object selector elements provide the following functions:
• Object identification elements define the selected object by name, including
generic name specifications.
• Filtering elements provide additional filtering capability for candidate objects.
• Name mapping elements are required primarily for environments where objects
exist in different libraries or paths.
• Include or omit elements identify whether the object should be processed or
explicitly excluded from processing.
Table 48 lists object selection elements by function and identifies which elements are
available on the commands.
Table 48. Object selector elements by function and command class. (Name mapping elements: System 2 file and System 2 library for file commands; System 2 object and System 2 library for object commands; System 2 path and System 2 name pattern for IFS and DLO commands. Note: The Compare Record Count (CMPRCDCNT) command does not support elements for attributes or name mapping.)
File name and object name elements: The File name and Object name elements
allow you to identify a file or object by name. These elements allow you to choose a
specific name, a generic name, or the special value *ALL.
Using a generic name, you can select a group of files or objects based on a common
character string. If you want to work with all objects beginning with the letter A, for
example, you would specify A* for the object name.
To process all files within the related selection criteria, select *ALL for the file or object
name. When a data group is also specified on the command, a value of *ALL results
in the selection of files and objects defined to that data group by the respective data
group file entries or data group object entries. When no data group is specified on the
command and you specify *ALL with a library name, only the objects that reside within the
given library are selected.
Library name element: The library name element specifies the name of the library
that contains the files or objects to be included or omitted from the resultant list of
objects. Like the file or object name, this element allows you to identify a library by a
specific name, a generic name, or the special value *ALL.
Note: The library value *ALL is supported only when a data group is specified.
Member element: For commands that support the ability to work with file members,
the Member element provides a means to select specific members. The Member
element can be a specific name, a generic name, or the special value *ALL.
Refer to the individual commands for detailed information on member processing.
Object path name (IFS) and DLO path name elements: The Object path name
(IFS) and DLO path name elements identify an object or DLO by path name. They
allow a specific path, a generic path, or the special value *ALL.
Traditionally, DLOs are identified by a folder path and a DLO name. Object selection
uses an element called DLO path, which combines the folder path and the DLO
name.
If you specify a data group, only those objects explicitly defined to that data group by
the respective data group IFS entries or data group DLO entries are selected. The
implicitly defined parent objects within the object path are not selected.
Directory subtree and folder subtree elements: The Directory subtree and Folder
subtree elements allow you to expand the scope of selected objects and include the
descendants of objects identified by the given object or DLO path name. By default,
the subtree element is *NONE, and only the named objects are selected. However, if
*ALL is used, all descendants of the named objects are also selected.
Figure 25 illustrates the hierarchical structure of folders and directories prior to
processing, and is used as the basis for the path, pattern, and subtree examples
shown later in this document. For more information, see the graphics and examples
beginning with “Example subtree” on page 438.
Directory subtree elements for IFS objects: When selecting IFS objects, only the
objects in the file system specified will be included. Object selection will not cross file
system boundaries when processing subtrees with IFS objects. Objects from other file
systems do not need to be explicitly excluded; however, you must explicitly specify
any other file systems whose objects you want to include. For more information, see the graphic
and examples beginning with “Example subtree for IFS objects” on page 442.
Name pattern element: The Name pattern element provides a filter on the last
component of the object path name. The Name pattern element can be a specific
name, a generic name, or the special value *ALL.
If you specify a pattern of $*, for example, only those candidate objects with names
beginning with $ that reside in the named DLO path or IFS object path are selected.
Keep in mind that improper use of the Name pattern element can have undesirable
results. Let us assume you specified a path name of /corporate, a subtree of *NONE,
and pattern of $*. Since the path name, /corporate, does not match the pattern of $*,
the object selector will identify no objects. Thus, the Name pattern element is
generally most useful when subtree is *ALL.
For more information, see the “Example Name pattern” on page 441.
Object type element: The Object type element provides the ability to filter objects
based on an object type. The object type is valid for library-based objects, IFS
objects, or DLOs, and can be a specific value or *ALL. The list of allowable values
varies by object class.
When you specify *ALL, only those object types which MIMIX supports for replication
are included. For a list of replicated object types, see “Supported object types for
system journal replication” on page 635.
Supported object types for CMPIFSA and SYNCIFS are listed in Table 49.
*ALL All directories, stream files, and symbolic links are selected
*DIR Directories
*STMF Stream files
*SYMLNK Symbolic links
Supported object types for CMPDLOA and SYNCDLO are listed in Table 50.
*DOC Documents
*FLR Folders
For unique object types supported by a specific command, see the individual
commands.
Object attribute element: The Object attribute element provides the ability to filter
based on extended object attribute. For example, file attributes include PF, LF, SAVF,
and DSPF, and program attributes include CLP and RPG. The attribute can be a
specific value, a generic value, or *ALL.
Although any value can be entered on the Object attribute element, a list of supported
attributes is available on the command. Refer to the individual commands for the list
of supported attributes.
Owner element: The Owner element allows you to filter DLOs based on DLO owner.
The Owner element can be a specific name or the special value *ALL. Only candidate
DLOs owned by the designated user profile are selected.
Include or omit element: The Include or omit element determines whether candidate
objects are included in or omitted from the resultant list of objects to be processed by
the command.
Included entries are added to the resultant list and become candidate objects for
further processing. Omitted entries are not added to the list and are excluded from
further processing.
System 2 file and system 2 object elements: The System 2 file and System 2
object elements provide support for name mapping. Name mapping is useful when
objects exist in different libraries or paths on the two systems.
Object selection examples
Table 51. Candidate objects
   Object   Library   Object type
   AB       LIBX      *SBSD
   A        LIBX      *OUTQ
   DE       LIBX      *DTAARA
   D        LIBX      *CMD
Next, Table 52 represents the object selectors based on the data group object entry
configuration for data group DG1. Objects are evaluated against data group entries in
the same order of precedence used by replication processes.
Table 52. Object selectors from data group entries for data group DG1
The object selectors from the data group subset the candidate object list, resulting in
the list of objects defined to the data group shown in Table 53. This list is internal to
MIMIX and not visible to users.
Table 53. Objects defined to the data group
   Object   Library   Object type
   A        LIBX      *OUTQ
   AB       LIBX      *SBSD
Note: Although job queue DEF in library LIBX did not appear in Table 51, it would be
added to the list of candidate objects when you specify a data group for some
commands that support object selection. These commands are required to
identify or report candidate objects that do not exist.
Perhaps you now want to include or omit specific objects from the filtered candidate
objects listed in Table 53. Table 54 shows the object selectors to be processed based
on the values specified on the object selection parameter. These object selectors
serve as an additional filter on the candidate objects.
The objects compared by the CMPOBJA command are shown in Table 55. These are
the result of the candidate objects selected by the data group (Table 53) that were
subsequently filtered by the object selectors specified for the Object parameter on the
CMPOBJA command (Table 54).
Table 55. Objects compared by the CMPOBJA command
   Object   Library   Object type
   A        LIBX      *OUTQ
   AB       LIBX      *SBSD
In this example, the CMPOBJA command is used to compare a set of objects. The
input source is a selection parameter. No data group is specified.
The data in the following tables show how candidate objects would be processed in
order to achieve a resultant list of objects.
Table 56 lists all the candidate objects on your system.
Table 56. Candidate objects on the system
   Object   Library   Object type
   AB       LIBX      *SBSD
   A        LIBX      *OUTQ
   DE       LIBX      *DTAARA
   D        LIBX      *CMD
Table 57 represents the object selectors chosen on the object selection parameter.
The sequence column identifies the order in which object selectors were entered. The
object selectors serve as filters to the candidate objects listed in Table 56.
The last object selector entered on the command is the first one used when
determining whether or not an object matches a selector. Thus, generic object
selectors with the broadest scope, such as A*, should be specified ahead of more
specific generic entries, such as ABC*. Specific entries should be specified last.
Table 59 represents the included objects from Table 58. This filtered set of candidate
objects is the resultant list of objects to be processed by the CMPOBJA command.
Table 59. Resultant list of objects
   Object   Library   Object type
   A        LIBX      *OUTQ
   AB       LIBX      *SBSD
   D        LIBX      *CMD
   DE       LIBX      *DTAARA
Example subtree
In the following graphics, the shaded area shows the objects identified by the
combination of the Object path name and Subtree elements of the Object parameter
for an IFS command. Circled objects represent the final list of objects selected for
processing.
Figure 26 illustrates a path name value of /corporate/accounting, a subtree
specification of *ALL, a pattern value of *ALL, and an object type of *ALL. The
candidate objects selected include /corporate/accounting and all descendants.
filtering is performed on the objects identified by the path and subtree. The candidate
objects selected consist of the specified objects only.
In this scenario, only those candidate objects which match the generic pattern value ($123,
$236, and $895) are selected for processing.
Figure 31 illustrates a directory with a subtree that contains IFS objects. The shaded
areas are the file systems. Table 60 contains examples showing what file systems
would be selected with the path names specified and a subtree specification of *ALL.
Table 60. Examples of specified paths and objects selected for Figure 31
Report types and output formats
The following compare commands support output in spooled files and in output files
(outfiles): the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA,
CMPDLOA), the Compare Record Count (CMPRCDCNT) command, the Compare
File Data (CMPFILDTA) command, and the Check DG File Entries (CHKDGFE)
command.
The spooled output is a human-readable print format that is intended to be delivered
as a report. The output file, on the other hand, is primarily intended for automated
purposes such as automatic synchronization. It is also a format that is easily
processed using SQL queries.
The level of information in the output is determined by the value specified on the
Report type parameter. These values vary by command.
For the CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA commands, the levels of
output available are *DIF, *SUMMARY, *OPTIMIZED, and *ALL.
• The report type of *DIF includes information on objects with detected differences.
• A report type of *SUMMARY provides a summary of all objects compared as well
as an object-level indication whether differences were detected. *SUMMARY
does not, however, include details about specific attribute differences.
• Specifying *ALL for the report type will provide you with information found on both
*DIF and *SUMMARY reports.
• The value *OPTIMIZED creates a combined report that indicates at an object level
when the objects are equal. For objects that are not equal, the individual attributes
that are not equal are included in the report. Audits based on the compare
attribute commands use this report type to return results.
The CMPRCDCNT command supports the *DIF and *ALL report types. The report
type of *DIF includes information on objects with detected differences. Specifying
*ALL for the report type will provide you with information found on all objects and
attributes that were compared.
The CMPFILDTA command supports the *DIF and *ALL report types, as well as *RRN. The
*RRN value allows you to output, using the MXCMPFILR outfile format, the relative
record number of the first 1,000 objects that failed to compare. Using this value can
help resolve situations where a discrepancy is known to exist, but you are unsure
which system contains the correct data. In this case, the *RRN value provides
information that enables you to display the specific records on the two systems and to
determine the system on which the file should be repaired.
Spooled files
The spooled output is generated when a value of *PRINT is specified on the Output
parameter. The spooled output consists of four main sections—the input or header
section, the object selection list section, the differences section, and the summary
section.
First, the header section of the spooled report includes all of the input values specified
on the command, including the data group value (DGDFN), comparison level
Outfiles
The output file is generated when a value of *OUTFILE is specified on the Output
parameter. Similar to the spooled output, the level of output in the output file is
dependent on the report type value specified on the Report type parameter.
Each command is shipped with an outfile template that uses a normalized database
to deliver a self-defined record, or row, for every attribute you compare. Key
information, including the attribute type, data group name, timestamp, command
name, and system 1 and system 2 values, helps define each row. A summary row
precedes the attribute rows. The normalized database feature ensures that new
object attributes can be added to the audit capabilities without disruption to current
automation processing.
The template files for the various commands are located in the MIMIX product library.
Comparing attributes
This chapter describes the commands that compare attributes: Compare File
Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS
Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA). These commands
are designed to audit the attributes, or characteristics, of the objects within your
environment and report on the status of replicated objects. Together, these commands
are collectively referred to as the compare attributes commands.
You are already using the compare attributes commands when they are called by
audits. When called by an audit and used in combination with the automatic recovery
capabilities of audits, the compare attributes commands provide robust functionality to
help you determine whether your system is in a state to ensure a successful rollover
for planned events or failover for unplanned events.
The topics in this chapter include:
• “About the Compare Attributes commands” on page 446 describes the unique
features of the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA,
and CMPDLOA).
• “Comparing file and member attributes” on page 450 includes the procedure to
compare the attributes of files and members.
• “Comparing object attributes” on page 453 includes the procedure to compare
object attributes.
• “Comparing IFS object attributes” on page 456 includes the procedure to compare
IFS object attributes.
• “Comparing DLO attributes” on page 459 includes the procedure to compare DLO
attributes.
About the Compare Attributes commands
and others that check the size of data within a file. Comparing these attributes
provides you with assurance that files are most likely synchronized.
• The CMPOBJA command supports many attributes important to other library-
based objects, including extended attributes. Extended attributes are attributes
unique to given objects, such as auto-start job entries for subsystems.
• The CMPIFSA and CMPDLOA commands provide enhanced audit capability for
IFS objects and DLOs, respectively.
Unique parameters
The following parameters for object selection are unique to the compare attributes
commands and allow you to specify an additional level of detail when comparing
objects or files.
Unique File and Object elements: The following are unique elements on the File
parameter (CMPFILA command) and Objects parameter (CMPOBJA command):
• Member: On the CMPFILA command, the value specified on the Member
element is only used when *MBR is also specified on the Comparison level
parameter.
• Object attribute: The Object attribute element enables you to select particular
characteristics of an object or file, and provides a level of filtering. For details, see
“CMPFILA supported object attributes for *FILE objects” on page 449 and
“CMPOBJA supported object attributes for *FILE objects” on page 449.
System 2: The System 2 parameter identifies the remote system name, and
represents the system to which objects on the local system are compared.
This parameter is ignored when a data group is specified, since the system 2
information is derived from the data group. A value is required if no data group is
specified.
Comparison level (CMPFILA only): The Comparison level parameter indicates
whether attributes are compared at the file level or at the member level.
System 1 ASP group and System 2 ASP group (CMPFILA and CMPOBJA only):
The System 1 ASP group and System 2 ASP group parameters identify the name of
the auxiliary storage pool (ASP) group where objects configured for replication may
reside. The ASP group name is the name of the primary ASP device within the ASP
group. This parameter is ignored when a data group is specified.
In the report, the auto-start job entry attribute is ignored for object types that are not of type *SBSD.
If a data group is specified on a compare request, configuration data is used when
comparing objects that are identified for replication through the system journal. If an
object’s configured object auditing value (OBJAUD) is *NONE, its attribute changes
are not replicated. When differences are detected on attributes of such an object, they
are reported as *EC (equal configuration) instead of being reported as *NE (not
equal).
For *FILE objects configured for replication through the system journal and configured
to omit T-ZC journal entries, also see “Omit content (OMTDTA) and comparison
commands” on page 417.
Two of the supported object attribute values are:
*ALL    All physical and logical file types are selected for processing
LF      Logical file
Comparing file and member attributes
You can compare file attributes to ensure that files and members needed for
replication exist on both systems or any time you need to verify that files are
synchronized between systems. You can optionally specify that results of the
comparison are placed in an outfile.
Note: If you have automation programs monitoring escape messages for differences
in file attributes, be aware that differences due to active replication (Step 16)
are signaled via a new difference indicator (*UA) and escape message. See
the auditing and reporting topics in this book.
To compare the attributes of files and members, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 1
(Compare file attributes) and press Enter.
3. The Compare File Attributes (CMPFILA) command appears. At the Data group
definition prompts, do one of the following:
• To compare attributes for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare files by name only, specify *NONE and continue with the next step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Comparison level prompt, accept the default to compare files at a file level
only. Otherwise, specify *MBR to compare files at a member level.
Note: If *FILE is specified, the Member prompt is ignored (see Step 4b).
7. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes based on whether the comparison is at a file or member level or
press F4 to see a valid list of attributes.
8. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 7, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Report type prompt, specify the level of detail for the output report.
12. At the Output prompt, do one of the following
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 14.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 14.
13. The User data prompt appears if you selected *PRINT or *BOTH in Step 12.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 18.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the Maximum replication lag prompt, specify the maximum amount of time
between when a file in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter
and continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, specify *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
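For reference, the prompts in this procedure correspond to parameters on the
CMPFILA command string. The following is a hypothetical sketch of an equivalent
batch request; the keyword names are assumptions based on the prompt text, so
prompt the command with F4 to verify them:

   CMPFILA DGDFN(MYDGDFN) FILE((MYLIB/MYFILE))
           CMPLVL(*FILE) CMPATR(*BASIC) OMTATR(*NONE)
           RPTTYPE(*DIF) OUTPUT(*OUTFILE)
           OUTFILE(MYLIB/CMPOUT) BATCH(*YES)

Because a data group is specified, the System 2 and ASP group values are derived
from the data group configuration and can be omitted.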
Comparing object attributes
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing objects not defined
to a data group. If necessary, specify the name of the remote system to which
objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Report type prompt, specify the level of detail for the output report.
11. At the Output prompt, do one of the following
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 13.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 13.
12. The User data prompt appears if you selected *PRINT or *BOTH in Step 11.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 17.
13. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
15. At the Maximum replication lag prompt, specify the maximum amount of time
between when an object in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
16. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
17. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
20. To start the comparison, press Enter.
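The corresponding command string follows the same pattern as CMPFILA. A
hypothetical request that compares a subsystem description, so that extended
attributes such as auto-start job entries are evaluated (keyword names are
assumptions; verify with F4):

   CMPOBJA DGDFN(MYDGDFN) OBJ((MYLIB/MYSBS *SBSD))
           CMPATR(*BASIC) OUTPUT(*BOTH) OUTFILE(MYLIB/CMPOUT)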
Comparing IFS object attributes
You can compare IFS object attributes to ensure that IFS objects needed for
replication exist on both systems or any time you need to verify that IFS objects are
synchronized between systems. You can optionally specify that results of the
comparison are placed in an outfile.
Note: If you have automation programs monitoring for differences in IFS object
attributes, be aware that differences due to active replication (Step 13) are
signaled via a new difference indicator (*UA) and escape message. See the
auditing and reporting topics in this book.
To compare the attributes of IFS objects, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 3
(Compare IFS attributes) and press Enter.
3. The Compare IFS Attributes (CMPIFSA) command appears. At the Data group
definition prompts, do one of the following:
• To compare attributes for all IFS objects defined by the data group IFS object
entries for a particular data group definition, specify the data group name and
skip to Step 6.
• To compare IFS objects by object path name only, specify *NONE and continue
with the next step.
• To compare a subset of IFS objects defined to a data group, specify the data
group name and continue with the next step.
4. At the IFS objects prompts, you can specify elements for one or more object
selectors that either identify IFS objects to compare or that act as filters to the IFS
objects defined to the data group indicated in Step 3. For more information, see
“Object selection for Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the Object path name prompt, accept *ALL or specify the name or the
generic value you want.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
Note: The *ALL default is not valid if a data group is specified on the Data
group definition prompts.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
compare.
e. At the Include or omit prompt, specify the value you want.
f. At the System 2 object path name and System 2 name pattern prompts, if the
IFS object path name and name pattern on system 2 are equal to system 1,
accept the defaults. Otherwise, specify the name of the path name and pattern
to which IFS objects on the local system are compared.
Note: The System 2 object path name and System 2 name pattern values are
ignored if a data group is specified on the Data group definition prompts.
g. Press Enter.
5. The System 2 parameter prompt appears if you are comparing IFS objects not
defined to a data group. If necessary, specify the name of the remote system to
which IFS objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 11.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept
the default to use the command name to identify the spooled output or specify a
unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time
between when an IFS object in the data group changes and when replication of
the change is expected to be complete, or accept *DFT to use the default
maximum time of 300 seconds (5 minutes). You can also specify *NONE, which
indicates that comparisons should occur without consideration for replication in
progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter
and continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To start the comparison, press Enter.
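A hypothetical command-string equivalent of this procedure, comparing a directory
and writing results to an outfile (keyword names are assumptions; verify with F4):

   CMPIFSA DGDFN(MYDGDFN) OBJ(('/home/mydir'))
           CMPATR(*BASIC) OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPIFSOUT)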
Comparing DLO attributes
f. At the Include or omit prompt, specify the value you want.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if
the DLO path name and name pattern on system 2 are equal to system 1,
accept the defaults. Otherwise, specify the name of the path name and pattern
to which DLOs on the local system are compared.
Note: The System 2 DLO path name and System 2 DLO name pattern values
are ignored if a data group is specified on the Data group definition
prompts.
h. Press Enter.
5. The System 2 parameter prompt appears if you are comparing DLOs not defined
to a data group. If necessary, specify the name of the remote system to which
DLOs on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 11.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept
the default to use the command name to identify the spooled output or specify a
unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time
between when a DLO in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter
and continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To start the comparison, press Enter.
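A comparable hypothetical command string for DLOs (keyword names are
assumptions; verify with F4):

   CMPDLOA DGDFN(MYDGDFN) DLO(('/MYFLR'))
           CMPATR(*BASIC) OUTPUT(*PRINT)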
Comparing file record counts and file member data
This chapter describes the features and capabilities of the Compare Record Counts
(CMPRCDCNT) command and the Compare File Data (CMPFILDTA) command.
The topics in this chapter include:
• “Comparing file record counts” on page 462 describes the CMPRCDCNT
command and provides a procedure for performing the comparison.
• “Significant features for comparing file member data” on page 465 identifies
enhanced capabilities available for use when comparing file member data.
• “Considerations for using the CMPFILDTA command” on page 466 describes
recommendations and restrictions of the command. This topic also describes
considerations for security, use with firewalls, comparing records that are not
allocated, as well as comparing records with unique keys, triggers, and
constraints.
• “Specifying CMPFILDTA parameter values” on page 470 provides additional
information about the parameters for selecting file members to compare and using
the unique parameters of this command.
• “Advanced subset options for CMPFILDTA” on page 476 describes how to use the
capability provided by the Advanced subset options (ADVSUBSET) parameter.
• “Ending CMPFILDTA requests” on page 479 describes how to end a CMPFILDTA
request that is in progress and describes the results of ending the job.
• “Comparing file member data - basic procedure (non-active)” on page 481
describes how to compare file data in a data group that is not active.
• “Comparing and repairing file member data - basic procedure” on page 484
describes how to compare and repair file data in a data group that is not active.
• “Comparing and repairing file member data - members on hold (*HLDERR)” on
page 487 describes how to compare and repair file members that are held due to
error using active processing.
• “Comparing file member data using active processing technology” on page 490
describes how to use active processing to compare file member data.
• “Comparing file member data using subsetting options” on page 493 describes
how to use the subset feature of the CMPFILDTA command to compare a portion
of member data at one time.
Comparing file record counts
The Compare Record Counts (CMPRCDCNT) command compares current record
counts and counts of deleted records (*NBRDLTRCDS) for members of physical files
that are defined for
replication by an active data group. In resource-constrained environments, this
capability provides a less-intensive means to gauge whether files are likely to be
synchronized.
Note: Equal record counts suggest but do not guarantee that members are
synchronized. To check for file data differences, use the Compare File Data
(CMPFILDTA) command. To check for attribute differences, use the Compare
File Attributes (CMPFILA) command.
Replication processes must be active for the data group when this command is used.
Members on both systems can be actively modified by applications and by MIMIX
apply processes while this command is running.
For information about the results of a comparison, see “What differences were
detected by #MBRRCDCNT” on page 691.
The #MBRRCDCNT calls the CMPRCDCNT command during its compare phase.
Unlike other audits, the #MBRRCDCNT audit does not have an associated recovery
phase. Differences detected by this audit appear as not recovered in the Audit
Summary user interfaces. Any repairs must be undertaken manually, in the following
ways:
• When using Vision Solutions Portal, repair actions are available for specific errors
when viewing the output file for the audit.
• Run the #FILDTA audit for the data group to detect and correct problems.
• Run the Synchronize DG File Entry (SYNCDGFE) command to correct problems.
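For example, a hypothetical request to compare record counts for all files defined to
a data group and direct the results to an outfile (keyword names are assumptions;
verify with F4) might be:

   CMPRCDCNT DGDFN(MYDGDFN) FILE((*ALL))
             OUTPUT(*OUTFILE) OUTFILE(MYLIB/RCDCNTOUT)

Replication processes must be active for the data group when the request runs.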
Significant features for comparing file member data
Repairing data
You can optionally choose to have the CMPFILDTA command repair differences it
detects in member data between systems.
When files are not synchronized, the CMPFILDTA command provides the ability to
resynchronize the file at the record level by sending only the data for the incorrect
member to the target system. (In contrast, the Synchronize DG File Entry
(SYNCDGFE) command would resynchronize the file by transferring all data for the
file from the source system to the target system.)
When members in *HLDERR status are processed, the CMPFILDTA command works
cooperatively with the database apply (DBAPY) process to compare and repair
members held due to error and, when possible, restore them to an active state. To
repair members in *HLDERR status, you must also
specify that the repair be performed on the target system and request that active
processing be enabled.
To support the cooperative efforts of CMPFILDTA and DBAPY, the following
transitional states are used for file entries undergoing compare and repair processing:
• *CMPRLS - The file in *HLDERR status has been released. DBAPY will clear the
journal entry backlog by applying the file entries in catch-up mode.
• *CMPACT - The journal entry backlog has been applied. CMPFILDTA and DBAPY
are cooperatively repairing the member previously in *HLDERR status, and
incoming journal entries continue to be applied in forgiveness mode.
When a member held due to error is being processed by the CMPFILDTA command,
the entry transitions from *HLDERR status to *CMPRLS to *CMPACT. The member
then changes to *ACTIVE status if compare and repair processing is successful. In
the event that compare and repair processing is unsuccessful, the member-level entry
is set back to *HLDERR.
Additional features
The CMPFILDTA command incorporates many other features to increase
performance and efficiency.
Subsetting and advanced subsetting options provide a significant degree of flexibility
for performing periodic checks of a portion of the data within a file.
Parallel processing uses multi-threaded jobs to break up file processing into smaller
groups for increased throughput. Rather than having a single-threaded job on each
system, multiple “thread groups” break up the file into smaller units of work. This
technology can benefit environments with multiple processors as well as systems with
a single processor.
Considerations for using the CMPFILDTA command
Security considerations
You should take extra precautions when using CMPFILDTA’s repair function, as it is
capable of accessing and modifying data on your system.
To compare file data, you must have read access to the data on both systems. When
using the repair function without active processing, you must also have write access
on the system to be repaired.
CMPFILDTA builds upon the RUNCMD support in MIMIX. CMPFILDTA starts a
remote process using RUNCMD, which requires two conditions to be true. First, the
user profile of the job that is invoking CMPFILDTA must exist on the remote system
and have the same password on the remote system as it does on the local system.
Second, the user profile must have appropriate read or update access to the
members to be compared or repaired. If active processing and repair are requested,
only read access is needed. In this case, the repair processing would be done by the
database apply process.
For files with update, insert, and delete triggers, whether compare and repair
processing is supported depends on the combination of values specified for the
repair and active processing parameters.
• Re-create the trigger program specifying ACTGRP(NAMED)
• Use the Update Program (UPDPGM) command to change to ACTGRP(NAMED)
• Disable trigger programs on the file
• Use the Synchronize Objects (SYNCOBJ) command rather than CMPFILDTA
• Use the Synchronize Data Group File Entries (SYNCDGFE) command rather than
CMPFILDTA
• Use the Copy Active File (CPYACTF) command rather than CMPFILDTA
• Save and restore outside of MIMIX
Job priority
When run, the remote CMPFILDTA job uses the run priority of the local CMPFILDTA
job. However, the run priority of either CMPFILDTA job is superseded if a
CMPFILDTA class object (*CLS) exists in the installation library of the system on
which the job is running.
Note: Use the Change Job (CHGJOB) command on the local system to modify the
run priority of the local job. CMPFILDTA uses the priority of the local job to set
the priority of the remote job, so that both jobs have the same run priority. To
set the remote job to run at a different priority than the local job, use the
Create Class (CRTCLS) command to create a *CLS object for the job you
want to change.
Specifying CMPFILDTA parameter values
When members in *HLDERR status are processed, the CMPFILDTA command works
cooperatively with the database apply (DBAPY) process to compare and repair
members held due to error—and when possible, restore them to an active state.
Valid values for the File entry status parameter are *ALL, *ACTIVE, and *HLDERR. A
data group must also be specified on the command or the parameter is ignored. The
default value, *ALL, indicates that all supported entry statuses (*ACTIVE and
*HLDERR) are included in compare and repair processing. The value *ACTIVE
processes only those members that are active. (The File entry status parameter was
introduced in V4R4 SPC05SP2; if you want to preserve previous behavior, specify
STATUS(*ACTIVE).) When *HLDERR is specified, only
member-level entries being held due to error are selected for processing. To repair
members held due to error using *ALL or *HLDERR, you must also specify that the
repair be performed on the target system and request that active processing be used.
System 1 ASP group and System 2 ASP group: The System 1 ASP group and
System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP)
group where objects configured for replication may reside. The ASP group name is
the name of the primary ASP device within the ASP group. This parameter is ignored
when a data group is specified. You must be running on OS V5R2 or greater to use
these parameters.
Subsetting option: The Subsetting option parameter provides a robust means by
which to compare a subset of the data within members. In some instances, the value
you select will determine which additional elements are used when comparing data.
Several options are available on this parameter: *ALL, *ADVANCED, *ENDDTA, or
*RANGE. If *ALL is specified, all data within all selected files is compared, and no
additional subsetting is performed. The other options compare only a subset of the
data.
The following are common scenarios in which comparing a subset of your data is
preferable:
• If you only need to check a specific range of records, use *RANGE.
• When a member, such as a history file, is primarily modified with insert operations,
only recently inserted data needs to be compared. In this situation, use *ENDDTA.
• If time does not permit a full comparison, you can compare a random sample
using *ADVANCED.
• If you do not have time to perform a full comparison all at once but you want all
data to be compared over a number of days, use *ADVANCED.
*RANGE indicates that the Subset range parameter will be used to specify the subset
of records to be compared. For more information, see the “Subset range” section.
If you select *ENDDTA, the Records at end of file parameter specifies how many
trailing records are compared. This value allows you to compare a selected number of
records at the end of all selected members. For more information, see the section
titled “Records at end of file.”
Advanced subsetting can be used to audit your entire database over a number of
days or to request that a random subset of records be compared. To specify
advanced subsetting, use the Advanced subset options (ADVSUBSET) parameter,
described in “Advanced subset options for CMPFILDTA” on page 476.
Transfer definition: The default for the Transfer definition parameter is *DFT. If a
data group was specified, the default uses the transfer definition associated with the
data group. If no data group was specified, the transfer definition associated with
system 2 is used.
The CMPFILDTA command requires that you have a TCP/IP transfer definition for
communication with the remote system. If your data group is configured for SNA,
override the SNA configuration by specifying the name of the transfer definition on the
command.
Number of thread groups: The Number of thread groups parameter indicates how
many thread groups should be used to perform the comparison. You can specify from
1 to 100 thread groups.
When using this parameter, it is important to balance the time required for processing
against the available resources. If you increase the number of thread groups in order
to reduce processing time, for example, you also increase processor and memory
use. The default, *CALC, will determine the number of thread groups automatically. To
maximize processing efficiency, the value *CALC does not calculate more than 25
thread groups.
The actual number of threads used in the comparison is based on the result of the
formula 2x + 1, where x is the value specified or the value calculated internally as the
result of specifying *CALC. When *CALC is specified, the CMPFILDTA command
displays a message showing the value calculated as the number of thread groups.
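For example, specifying 4 thread groups results in 2(4) + 1 = 9 threads, and the
*CALC ceiling of 25 thread groups corresponds to 2(25) + 1 = 51 threads.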
Note: Thread groups are created for primary compare processing only. During
setup, multiple threads may be utilized to improve performance, depending on
the number of members selected for processing. The number of threads used
during setup will not exceed the total number of threads used for primary
compare processing. During active processing, only one thread will be used.
Wait time (seconds): The Wait time (seconds) value is only valid when active
processing is in effect and specifies the amount of time to wait for active processing to
complete. You can specify from 0 to 3600 seconds, or the default *NOMAX.
If active processing is enabled and a wait time is specified, CMPFILDTA processing
waits the specified time for all pending compare operations processed through the
MIMIX replication path to complete. In most cases, the *NOMAX default is highly
recommended.
DB apply threshold: The DB apply threshold parameter is only valid during active
processing and requires that a data group be specified. The parameter specifies what
action CMPFILDTA should take if the database apply session backlog exceeds the
threshold warning value configured for the database apply process. The default value
*END stops the requested compare and repair action when the database apply
threshold is reached; any repair actions that have not been completed are lost. The
value *NOMAX allows the compare and repair action to continue even when the
database apply threshold has been reached. Continuing processing when the apply
process has a large backlog may adversely affect performance of the CMPFILDTA
job and its ability to compare a file with an excessive number of outstanding entries.
Therefore, *NOMAX should only be used in exceptional circumstances.
Change date: The Change date parameter provides the ability to compare file
members based on the date they were last changed or restored on the source
system. This parameter specifies the date and time that MIMIX will use in determining
whether to process a file member. Only members changed or restored after the
specified date and time will be processed.
Members that have not been updated or restored since the specified timestamp will
not be compared. These members are identified in the output by a difference indicator
value of *EQ (DATE), which is omitted from results when the requested report type is
*DIF.
The shipped default value is *ALL. With *ALL, members are not filtered by date: the
last changed and last restored timestamps are ignored, and all selected file members
are processed.
When *AUDIT is specified, the compare start timestamp of the #FILDTA audit is used
in the determination. The command must specify a data group when this value is
used. The *AUDIT value can only be used if audit level *LEVEL30 was in effect at the
time the last audit was performed. If the audit level is lower, an error message is
issued. The audit level is available by displaying details for the audit (WRKAUD
command).
When *ALL or *AUDIT is specified for Date, the value specified for Time is ignored.
Note: Exercise caution when specifying actual date and time values. A specified
timestamp that is later than the start of the last audit can result in one or more
file members not being compared. Any member changed between the time of
its last audit and the specified timestamp will not be compared and therefore
cannot be reported if it is not synchronized. The recommended values for this
parameter are either *ALL or *AUDIT.
Advanced subset options for CMPFILDTA
151 through 300. Records 101 through 150 will not get checked at all. Advanced
subsetting provides you with an alternative that does not skip records when members
are growing.
Advanced subset options are applied independently for each member processed. The
advanced subset function assigns the data in each member to multiple non-
overlapping subsets in one of two ways. You can then compare a specified range of
these subsets, which permits a representative sample of the data to be compared, or
partition a full compare into multiple CMPFILDTA requests that, in combination,
ensure that all data that existed at the time of the first request is compared.
To use advanced subsetting, you will need to identify the following:
• The number of subsets or “bins” to define for the compare
• The manner in which records are assigned to bins
• The specific bins to process
Number of subsets: The first issue to consider when performing advanced subset
options is how many subsets or bins to establish. The Number of subsets element is
the number of approximately equal-sized bins to define. These bins are numbered
from 1 up to the number specified (N). You must specify at least one bin. Each record
is assigned to one of these bins.
The Interleave element specifies the manner in which records are assigned to bins.
Interleave: The Interleave factor specifies the mapping between the relative record
number and the bin number. There are two approaches that can be used.
If you specify *NONE, records in each member are divided among the bins on a
percentage basis. For example, with two bins, the first half of the records in a member
is assigned to bin 1 and the second half to bin 2.
Note that when the total number of records in a member changes, the mapping also
changes. Records that were once assigned to bin 2 may in the future be assigned to
bin 1. If you wish to compare all records over the course of a few days, the changing
mapping may cause you to miss records. A specific Interleave value is preferable in
this case.
Using bytes, the Interleave value specifies a number of contiguous records that
should be assigned to each bin before moving to the next bin. Once the last bin is
filled, assignment restarts at the first bin. For example, if you specify an interleave
value of 20 bytes for a member with 10-byte records, two contiguous records (20
bytes) are assigned to each bin before assignment moves to the next bin.
If the Interleave and Number of subsets values are held constant, the mapping of relative record
numbers to bins is maintained, despite the growth of member size. Because every bin
is eventually selected, comparisons made over several days will compare every
record that existed on the first day.
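To illustrate, assume three subsets and an interleave equivalent to two records:
records 1 and 2 are assigned to bin 1, records 3 and 4 to bin 2, records 5 and 6 to
bin 3, records 7 and 8 wrap back to bin 1, and so on. Because a record's bin depends
only on its relative record number, records added later extend the pattern without
changing the assignments of existing records.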
In most circumstances, *CALC is recommended for the interleave specification. When
you select *CALC, the system determines how many contiguous bytes are assigned
to each bin before subsequent bytes are placed in the next bin. This calculated value
will not change due to member size changes.
Ending CMPFILDTA requests
Note: You can automate these tasks using MIMIX Monitor. Refer to the MIMIX
Monitor documentation for more information.
The CMPFILDTA command recognizes requests to end the job in a controlled manner
(ENDJOB OPTION(*CNTRLD)). Messages indicate the step within CMPFILDTA
processing at which the end was requested. The report and output file contain as
much information as possible with the data available at the step in progress when the
job ended. The output may not be accurate because the full CMPFILDTA request did
not complete.
The content of the report and output file is most valuable if the command completed
processing through the end of phase 1 compare. The output may be incomplete if the
end occurred earlier. If processing did not complete to a point where MIMIX can
accurately determine the result of the compare, the value *UN (unknown) is placed in
the Difference Indicator.
Note: If the CMPFILDTA command has been long running or has encountered many
errors, you may need to specify more time on the Delay time, if *CNTRLD (DELAY)
parameter of the ENDJOB command. The default value of 30 seconds may
not be adequate in these circumstances.
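For example, to end a CMPFILDTA job in a controlled manner and allow ten minutes
for it to produce its output (the qualified job name shown is hypothetical):

   ENDJOB JOB(123456/MIMIXOWN/CMPFILDTA) OPTION(*CNTRLD) DELAY(600)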
Comparing file member data - basic procedure (non-active)
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, accept *NONE to indicate that no repair action is
done.
7. At the Process while active prompt, specify *NO to indicate that active processing
technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
12. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
• If you want to include the member details and relative record number (RRN) of
the first 1,000 objects that have differences, specify *RRN.
Notes:
• The *RRN value can only be used when *NONE is specified for the Repair
on system prompt and *OUTFILE is specified for the Output prompt.
• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN
can help resolve situations where a discrepancy is known to exist but you are
unsure which system contains the correct data. This value provides the
information that enables you to display the specific records on the two
systems and determine the system on which the file should be repaired.
13. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 18.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Output prompt, you must select *SYS2 for the
System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter
and continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
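For reference, a hypothetical command-string equivalent of this non-active compare.
STATUS is the documented keyword for the File entry status parameter; the
remaining keyword names are assumptions (verify with F4):

   CMPFILDTA DGDFN(MYDGDFN) FILE((MYLIB/MYFILE))
             REPAIR(*NONE) ACTIVE(*NO) STATUS(*ACTIVE)
             SUBSET(*ALL) RPTTYPE(*DIF)
             OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPDTAOUT)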
Comparing and repairing file member data - basic procedure
You can use the CMPFILDTA command to repair data on the local or remote system.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 466. You
should also read “Specifying CMPFILDTA parameter values” on page 470 for
additional information about parameters and values that you can specify.
To compare and repair data, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare data by file name only, specify *NONE and continue with the next
step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, specify *SYS1, *SYS2, *LOCAL, *TGT, *SRC, or
the system definition name to indicate the system on which repair action should
be performed.
Note: *TGT and *SRC are only valid if you are comparing files defined to a data
group. *SRC is not valid if active processing is in effect.
7. At the Process while active prompt, specify *NO to indicate that active processing
technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
12. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
13. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 18.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Output prompt, you must select *SYS2 for the
System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
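In a command-string sketch, this procedure differs from the non-active compare only
in the repair value, for example (keyword names are assumptions; verify with F4):

   CMPFILDTA DGDFN(MYDGDFN) FILE((MYLIB/MYFILE))
             REPAIR(*TGT) ACTIVE(*NO) STATUS(*ACTIVE)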
Comparing and repairing file member data - members on hold (*HLDERR)
5. At the Repair on system prompt, specify *TGT to indicate that repair action be
performed on the target system.
6. At the Process while active prompt, specify *YES to indicate that active
processing technology should be used in the comparison.
7. At the File entry status prompt, specify *HLDERR to process members being held
due to error only.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 15.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the System to receive output prompt, specify the system on which the output
should be created.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To compare and repair the file, press Enter.
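A hypothetical command-string equivalent of this procedure. STATUS is the
documented keyword for the File entry status parameter; the other keyword names
are assumptions (verify with F4):

   CMPFILDTA DGDFN(MYDGDFN) FILE((MYLIB/MYFILE))
             REPAIR(*TGT) ACTIVE(*YES) STATUS(*HLDERR)
             OUTPUT(*BOTH)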
Comparing file member data using active processing technology
You can set the CMPFILDTA command to use active processing technology when a
data group is specified on the command.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 466. You
should also read “Specifying CMPFILDTA parameter values” on page 470 for
additional information about parameters and values that you can specify.
Note: Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
To compare data using active processing, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 425.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, accept the defaults.
f. Press Enter.
5. At the Repair on system prompt, specify *TGT to indicate that repair action be
performed on the target system of the data group.
6. At the Process while active prompt, specify *YES or *DFT to indicate that active
processing technology should be used in the comparison.
16. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used when the command is invoked from outside of
shipped audits. When used as part of shipped audits, the default value is *OMIT
since the results are already placed in an outfile.
17. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
20. To start the comparison, press Enter.
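The prompted steps above correspond to a single command request that can also be
entered or scripted directly. The following sketch is illustrative only: the DGDFN
keyword appears elsewhere in this book, but the FILE, REPAIR, ACTIVE, and BATCH
keywords are assumed from the prompt names and should be verified with the
command prompter (F4) in your installation:
    (installation-library-name)/CMPFILDTA DGDFN(data-group-name)
        FILE((library/file)) REPAIR(*TGT) ACTIVE(*YES) BATCH(*YES)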
Comparing file member data using subsetting options
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, specify a value if you want repair action
performed.
Note: To process members in *HLDERR status, you must specify *TGT. See
Step 8.
7. At the Process while active prompt, specify whether active processing technology
should be used in the comparison.
Notes:
• To process members in *HLDERR status, you must specify *YES. See
Step 8.
• If you are comparing files associated with a data group, *DFT uses active
processing. If you are comparing files not associated with a data group,
*DFT does not use active processing.
• Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
8. At the File entry status prompt, you can select files with specific statuses for
compare and repair processing. Do one of the following:
a. To process active members only, specify *ACTIVE.
b. To process both active members and members being held due to error
(*ACTIVE and *HLDERR), specify the default value *ALL.
c. To process members being held due to error only, specify *HLDERR.
Note: When *ALL or *HLDERR is specified for the File entry status prompt,
*TGT must also be specified for the Repair on system prompt (Step 6)
and *YES must be specified for the Process while active prompt
(Step 7).
9. At the Subsetting option prompt, you must specify a value other than *ALL to use
additional subsetting. Do one of the following:
• To compare a fixed range of data, specify *RANGE then press Enter to see
additional prompts. Skip to Step 10.
• To define how many subsets should be established, how member data is
assigned to the subsets, and which range of subsets to compare, specify
*ADVANCED and press Enter to see additional prompts. Skip to Step 11.
• To indicate that only data specified on the Records at end of file prompt is
compared, specify *ENDDTA and press Enter to see additional prompts. Skip to
Step 12.
10. At the Subset range prompts, do the following:
a. At the First record prompt, specify the relative record number of the first record
to compare in the range.
b. At the Last record prompt, specify the relative record number of the last record
to compare in the range.
c. Skip to Step 12.
11. At the Advanced subset options prompts, do the following:
a. At the Number of subsets prompt, specify the number of approximately equal-
sized subsets to establish. Subsets are numbered beginning with 1.
b. At the Interleave prompt, specify the interleave factor. In most cases, the
default *CALC is highly recommended.
c. At the First subset prompt, specify the first subset in the sequence of subsets
to compare.
d. At the Last subset prompt, specify the last subset in the sequence of subsets to
compare.
12. At the Records at end of file prompt, specify the number of records at the end of
the member to compare. These records are compared regardless of other
subsetting criteria.
Note: If *ENDDTA is specified on the Subsetting option prompt, you must specify
a value other than *NONE.
13. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
• If you want to include the member details and relative record number (RRN) of
the first 1,000 objects that have differences, specify *RRN.
Notes:
• The *RRN value can only be used when *NONE is specified for the Repair
on system prompt and *OUTFILE is specified for the Output prompt.
• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN
can help resolve situations where a discrepancy is known to exist but you are
unsure which system contains the correct data. This value provides the
information that enables you to display the specific records on the two
systems and determine the system on which the file should be repaired.
14. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 19.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
15. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
16. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
17. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
18. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
19. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
20. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
21. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
22. To start the comparison, press Enter.
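As with the other comparison procedures, the prompted values can be expressed as
a command string for automation. In the following sketch, the SUBSET, RANGE,
RPTTYPE, OUTPUT, and OUTFILE keywords are assumed from the prompt names
and may differ in your installation. The request compares a fixed range of relative
record numbers and directs differences to an outfile, consistent with the *RRN
requirements described in Step 13:
    (installation-library-name)/CMPFILDTA DGDFN(data-group-name)
        FILE((library/file)) REPAIR(*NONE) SUBSET(*RANGE)
        RANGE(1 500000) RPTTYPE(*RRN) OUTPUT(*OUTFILE)
        OUTFILE(library/outfile)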
CHAPTER 20 Synchronizing data between
systems
This chapter contains information about support provided by MIMIX commands for
synchronizing data between two systems. The data that MIMIX replicates must be
synchronized on several occasions.
• During initial configuration of a data group, you need to ensure that the data to be
replicated is synchronized between both systems defined in a data group.
• If you change the configuration of a data group to add new data group entries, the
objects must be synchronized.
• You may also need to synchronize a file or object if an error occurs that causes
the two systems to no longer be synchronized.
• Automatic recovery features also use synchronize commands to recover
differences detected during replication and audits. If automatic recovery policies
are disabled, you may need to use synchronize commands to correct a file or
object in error or to correct differences detected by audits or compare commands.
The synchronize commands provided with MIMIX can be loosely grouped by common
characteristics and the level of function they provide. Topic “Considerations for
synchronizing using MIMIX commands” on page 499 describes subjects that apply to
more than one group of commands, such as the maximum size of an object that can
be synchronized, how large objects are handled, and how user profiles are
addressed.
Initial synchronization: Initial synchronization can be performed manually with a
variety of MIMIX and IBM commands, or by using the Synchronize Data Group
(SYNCDG) command. The SYNCDG command is intended especially for performing
the initial synchronization of one or more data groups. The command can be long-
running. For information about initial synchronization, see these topics:
• “Performing the initial synchronization” on page 508 describes how to establish a
synchronization point and identifies other key information.
• Environments using MIMIX support for IBM WebSphere MQ have additional
requirements for the initial synchronization of replicated queue managers. For
more information, see the MIMIX for IBM WebSphere MQ book.
Synchronize commands: The commands Synchronize Object (SYNCOBJ),
Synchronize IFS Object (SYNCIFS), and Synchronize DLO (SYNCDLO) provide
robust support in MIMIX environments for synchronizing library-based objects, IFS
objects, and DLOs, as well as their associated object authorities. Each command has
considerable flexibility for selecting objects associated with or independent of a data
group. Additionally, these commands are often called by other functions and by
options to synchronize objects identified in tracking entries used for journaling. For
additional information, see:
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503
• “About synchronizing tracking entries” on page 507
Synchronize Data Group Activity Entry: The Synchronize DG Activity Entry
(SYNCDGACTE) command provides the ability to synchronize library-based objects,
IFS objects, and DLOs that are associated with data group activity entries which have
specific status values. The contents of the object and its attributes and authorities are
synchronized. For additional information, see “About synchronizing data group activity
entries (SYNCDGACTE)” on page 504.
Synchronize Data Group File Entry: The Synchronize DG File Entry (SYNCDGFE)
command provides the means to synchronize database files associated with a data
group by data group file entries. Additional options provide the means to address
triggers, referential constraints, logical files, and related files. For more information
about this command, see “About synchronizing file entries (SYNCDGFE command)”
on page 505.
Procedures: The procedures in this chapter are for commands that are accessible
from the MIMIX Compare, Verify, and Synchronize menu. Typically, when you need to
synchronize individual items in your configuration, the best approach is to use the
options provided on the displays where they are appropriate to use. The options call
the appropriate command and, in many cases, pre-select some of the fields. The
following procedures are included:
• “Synchronizing database files” on page 514
• “Synchronizing objects” on page 516
• “Synchronizing IFS objects” on page 520
• “Synchronizing DLOs” on page 524
• “Synchronizing data group activity entries” on page 528
• “Synchronizing tracking entries” on page 530
Considerations for synchronizing using MIMIX commands
Note: To preserve behavior prior to changes made in V4R4 service pack
SPC05SP4, specify *TFRDFN.
Synchronizing user profiles with SYNCnnn commands
The SYNCOBJ command explicitly synchronizes user profiles when you specify
*USRPRF for the object type on the command. The status of the user profile on the
target system is affected as follows:
• If you specified a data group and a user profile which is configured for replication,
the status of the user profile on the target system is the value specified in the
configured data group object entry.
• If you specified a user profile but did not specify a data group, the following
occurs:
– If the user profile exists on the target system, its status on the target system
remains unchanged.
– If the user profile does not exist on the target system, it is synchronized and its
status on the target system is set to *DISABLED.
When synchronizing other object types, the SYNCOBJ, SYNCIFS, and SYNCDLO
commands implicitly synchronize user profiles associated with the object if they do
not exist on the target system. Although only the requested object type, such as
*PGM, is specified on these commands, the owning user profile, the primary group
profile, and user profiles that have private authorities to an object are implicitly
synchronized, as follows:
• When the Synchronize command specifies a data group and that data group has
a data group object entry which includes the user profile, the object and the user
profile are synchronized. The status of the user profile on the target system is set
to match the value from the data group object entry.
• If a data group object entry excludes the user profile from replication, the object is
synchronized and its owner is changed to the default owner indicated in the data
group definition. The user profile is not synchronized.
• When the Synchronize command specifies a data group and that data group does
not have a data group object entry for the user profile, the object and the
associated user profile are synchronized. The status of the user profile on the
target system is set to *DISABLED.
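For example, to explicitly synchronize a user profile that is not configured to a data
group, a request of the following form could be used. The OBJ, OBJTYPE, and SYS2
keywords shown are assumed from the prompt names and may differ in your
installation; per the behavior described above, if the profile does not yet exist on the
target system it is created with a status of *DISABLED:
    (installation-library-name)/SYNCOBJ DGDFN(*NONE)
        OBJ((QSYS/user-profile-name)) OBJTYPE(*USRPRF)
        SYS2(system-name)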
– If no data group is specified, you must specify values for the System 1 ASP
group or device, System 2 ASP device number, and System 2 ASP device
name parameters.
About MIMIX commands for synchronizing objects, IFS objects, and DLOs
Additional parameters: On each command, the following parameters provide
additional control of the synchronization process.
• The Save active parameter provides the ability to save the object in an active
environment using IBM's save while active support. Values supported are the
same as those used in related IBM commands. When you use this capability, the
following parameters further qualify save while active operations:
– On the SYNCOBJ and SYNCDLO commands, the Save active wait time
parameter specifies the amount of time to wait for a commit boundary or for a
lock on an object. If a lock is not obtained in the specified time, the object is not
saved. If a commit boundary is not reached in the specified time, the save
operation ends and the synchronization attempt fails.
– On the SYNCIFS command, the Save active option parameter defaults to
*NONE, which is appropriate for most users. Optionally, you can specify that
the IBM command (SAV) should use the value *ALWCKPWRT for its
SAVACTOPT parameter. This allows the object being saved to be opened for
write when the checkpoint is achieved.
Note: The SYNCIFS and SYNCDLO commands ignore the Save active
parameter and their respective qualifying parameter when the synchronize
request specifies a data group which is configured to use the value
*OPTIMIZED (default) for the respective transmission method element of
the Object processing (OBJPRC) parameter.
• The Maximum sending size (MB) parameter specifies the maximum size that an
object can be in order to be synchronized. For more information, see “Limiting the
maximum sending size” on page 499.
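For example, a synchronize request that uses save-while-active support might take
the following form. The SAVACT, SAVACTWAIT, and SAVACTOPT keywords are
assumed here from the prompt names and from the related IBM commands; verify
them with the command prompter in your installation:
    (installation-library-name)/SYNCOBJ DGDFN(data-group-name)
        OBJ((library/object)) SAVACT(*SYSDFN) SAVACTWAIT(120)
    (installation-library-name)/SYNCIFS DGDFN(data-group-name)
        OBJ(('/directory/object')) SAVACT(*YES) SAVACTOPT(*ALWCKPWRT)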
About synchronizing data group activity entries (SYNCDGACTE)
When an object has more than one activity entry, the SYNCDGACTE request will
synchronize the first non-completed entry, then find the next non-completed entry
and synchronize it. The request will continue to synchronize these non-completed
entries until all entries for that object have been synchronized.
Any existing active, delayed, or failed activity entries for the specified object are
processed and set to ‘completed by synchronization’ (CZ) when the synchronization
request completes successfully.
If all activity entries for the specified object are already completed when the
synchronization request completes successfully, only the status of the most recently
completed entry is changed from completed (CP) to ‘completed by synchronization’
(CZ).
Not supported: Spooled files and cooperatively processed files are not eligible to be
synchronized using the SYNCDGACTE command.
Status changes during synchronization: During synchronization processing, if the
data group is active, the status of the activity entries being synchronized are set to a
status of ‘pending synchronization’ (PZ) and then to ‘pending completion’ (PC). When
the synchronization request completes, the status of the activity entries is set to either
‘completed by synchronization’ (CZ) or to ‘failed synchronization’ (FZ).
If the data group is inactive, the status of the activity entries remains either ‘pending
synchronization’ (PZ) or ‘pending completion’ (PC) when the synchronization request
completes. When the data group is restarted, the status of the activity entries is set to
either ‘completed by synchronization’ (CZ) or to ‘failed synchronization’ (FZ).
About synchronizing file entries (SYNCDGFE command)
Table 67. Sending mode (METHOD) choices on the SYNCDGFE command.
*DATA    This is the default value. Only the physical file data is replicated
         using MIMIX Copy Active File processing. File attributes are not
         replicated using this method.
         If the file exists on the target system, MIMIX refreshes its contents.
         If the file format is different on the target system, the
         synchronization will fail. If the file does not exist on the target
         system, MIMIX uses save and restore operations to create the file on
         the target system and then uses copy active file processing to fill it
         with data from the file on the source system.
*ATR     Only the physical file attributes are replicated and synchronized.
*AUT     Only the authorities for the physical file are replicated and
         synchronized.
*SAVRST  The content and attributes are replicated using the IBM i save and
         restore commands. This method allows save-while-active operations.
         This method also has the capability to save associated logical files.
Files with triggers: The SYNCDGFE command provides the ability to optionally
disable triggers during synchronization processing and enable them again when
processing is complete. The Disable triggers on file (DSBTRG) parameter specifies
whether the database apply process (used for synchronization) disables triggers
when processing a file.
The default value *DGFE uses the data group file entry to determine whether triggers
should be disabled. The value *YES will disable triggers on the target system during
synchronization.
If configuration options for the data group (or optionally for a data group file entry)
allow MIMIX to replicate trigger-generated entries and disable triggers, you must
specify *DATA as the sending mode when synchronizing a file with triggers.
Including logical files: The Include logical files (INCLF) parameter allows you to
include any attached logical files in the synchronization request. Logical files that are
explicitly excluded from replication are not sent. This parameter is only valid when
*SAVRST is specified for the Sending mode prompt.
Physical files with referential constraints: Physical files with referential constraints
require a field in another physical file to be valid. When synchronizing physical files
with referential constraints, ensure all files in the referential constraint structure are
synchronized concurrently during a time of minimal activity on the source system.
Doing so will ensure the integrity of synchronization points.
Including related files: You can optionally include files related to the specified file in
the synchronization request by specifying *YES for the Include related (RELATED)
parameter. Related files are those physical files which have a
relationship with the selected physical file by means of one or more join logical files.
Join logical files are logical files attached to fields in two or more physical files.
The Include related (RELATED) parameter defaults to *NO. In some environments,
specifying *YES could result in a high number of files being synchronized and could
potentially strain available communications and take a significant amount of time to
complete.
A physical file being synchronized cannot be name mapped if it has logical files
associated with it. Logical files may be name mapped by using object entries.
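Taken together, these options might be combined in a request such as the following
sketch. The METHOD, INCLF, and RELATED keywords are named in this topic
(DSBTRG applies when the sending mode is *DATA and defaults to *DGFE); the
FILE1 keyword for the System 1 file is assumed and should be verified in your
installation:
    (installation-library-name)/SYNCDGFE DGDFN(data-group-name)
        FILE1(library/file) METHOD(*SAVRST) INCLF(*YES) RELATED(*NO)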
About synchronizing tracking entries
Performing the initial synchronization
Ensuring that data is synchronized before you begin replication is crucial to
successful replication. How you perform the initial synchronization can be influenced
by the available communications bandwidth, the complexity of describing the data,
the size of the data, as well as time.
Note: If you have configured or migrated a MIMIX configuration to use integrated
support for IBM WebSphere MQ, you must use the procedure ‘Initial
synchronization for replicated queue managers’ in the MIMIX for IBM
WebSphere MQ book. Large IBM WebSphere MQ environments should plan
to perform this during off-peak hours.
b. Record the exact time and the sequence number of the journal entry
associated with the first synchronize request. Typically, a synchronize request
is represented by a journal entry for a save operation.
c. Type 5 (Display entire entry) next to the entry and press Enter.
d. Press F10 (Display only entry details).
e. The Display Journal Entry Details display appears. Page down to locate the
Receiver name. This should be the same name as recorded in Step 2.
6. Identify the synchronization starting point in the source system journal. This
information will be needed when starting replication.
a. Specify the date from Step 5a for mm/dd/yyyy and specify the time from
Step 5b for hh:mm:ss in the following command:
DSPJRN JRN(QSYS/QAUDJRN) RCVRNG(*CURRENT)
FROMTIME('mm/dd/yyyy' 'hh:mm:ss')
b. Record the sequence number associated with the first journal entry with the
specified time stamp.
c. Type 5 (Display entire entry) next to the entry and press Enter.
d. Press F10 (Display only entry details).
e. The Display Journal Entry Details display appears. Page down to locate the
Receiver name. This should be the same name as recorded in Step 3.
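For example, if the starting point identified in Step 5 was 10:30 PM on June 15, 2024
(values shown here are purely illustrative), the command would be entered as:
    DSPJRN JRN(QSYS/QAUDJRN) RCVRNG(*CURRENT)
        FROMTIME('06/15/2024' '22:30:00')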
more flexibility in object selection and also provide the ability to synchronize object
authorities. By specifying a data group on any of these commands, you can
synchronize the data defined by its data group entries.
You can also use the Synchronize Data Group File Entry (SYNCDGFE) command to
synchronize database files and members. This command provides the ability to
choose between MIMIX copy active file processing and save/restore processing
and provides choices for handling trigger programs during synchronization.
If you have configured or migrated to integrated advanced journaling, follow the
SYNCIFS procedures for IFS objects, SYNCOBJ procedures for data areas and
data queues, and SYNCDGFE procedures for files containing LOB data. You can
also use options to synchronize objects associated with tracking entries from the
Work with DG IFS Trk. Entries display and the Work with DG Obj. Trk. Entries
display.
• SYNCDG command: The SYNCDG command is intended especially for
performing the initial synchronization of one or more data groups by MIMIX
IntelliStart™. The SYNCDG command synchronizes by using the auditing and
automatic recovery support provided by MIMIX AutoGuard. This command can be
long-running. Because this command requires that journaling and data group
replication processes be started before synchronization starts, it may not be
appropriate for some environments.
This chapter (“Synchronizing data between systems” on page 497) includes additional
information about the MIMIX SYNC commands.
Using SYNCDG to perform the initial synchronization
Ensure the following conditions are met for each data group that you want to
synchronize, before running this command:
• Apply any IBM PTFs (or their supersedes) associated with IBM i releases as
they pertain to your environment. Log in to Support Central and access the
Technical Documents page for a list of required and recommended IBM PTFs.
• Journaling is started on the source system for everything defined to the data
group.
• All replication processes are active.
• The user ID submitting the SYNCDG has *MGT authority in product level
security if it is enabled for the installation.
• No other audits (comparisons or recoveries) are in progress when the
SYNCDG is requested.
• Collector services has been started.
• If DLOs are identified for replication, before running the SYNCDG command,
ensure that the DLOs exist only on the source system.
While the synchronization is in progress, other audits for the data group are prevented
from running.
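Once these conditions are met, the request itself is comparatively simple. A minimal
sketch follows; any additional parameters should be reviewed with the command
prompter (F4) for your installation:
    (installation-library-name)/SYNCDG DGDFN(data-group-name)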
Verifying the initial synchronization
This procedure uses audits to ensure your environment is ready to start replication.
Shipped policy settings for MIMIX allow audits to automatically attempt recovery
actions for any problems they detect. You should not use this procedure if you have
already synchronized your systems using the Synchronize Data Group (SYNCDG)
command or the automatic synchronization method in MIMIX IntelliStart.
The audits used in this procedure will:
• Verify that journaling is started on the source and target systems for the items you
identified in the deployed replication patterns. Without journaling, replication will
not occur.
• Verify that data is synchronized between systems. Audits will detect potential
problems with synchronization and attempt to automatically recover differences
found.
Do the following:
1. Check whether all necessary journaling is started for each data group. Enter the
following command:
(installation-library-name)/DSPDGSTS DGDFN(data-group-name)
VIEW(*DBFETE)
On the File and Tracking Entry Status display, the File Entries column identifies
how many file entries were configured from your replication patterns and indicates
whether any file entries are not journaled on the source or target systems. If your
configuration permits user journal replication of IFS objects, data areas, or data
queues, the Tracking Entries columns provide similar information.
2. Audit your environment. To access the audits, enter the following command:
(installation-library-name)/WRKAUD
3. Each audit listed on the Work with Audits display is a unique combination of data
group and MIMIX rule. When verifying an initial configuration, you need to perform
a subset of the available audits for each data group in a specific order, shown in
Table 69. Do the following:
a. To change the number of active audits at any one time, enter the following
command:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY)
MAXACT(*NOMAX)
b. Use F18 (Subset) to subset the audits by the name of the rule you want to run.
c. Type a 9 (Run rule) next to the audit for each data group and press Enter.
Repeat Step 3b and Step 3c for each rule in Table 69 until you have started all the
listed audits for all data groups.
Table 69. Rules for initial validation, listed in the order to be performed.
Rule Name
1. #DGFE
2. #OBJATR
3. #FILATR
4. #IFSATR
5. #FILATRMBR
6. #DLOATR
d. Reset the number of active audit jobs to values consistent with regular
auditing:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY)
MAXACT(5)
4. Wait for all audits to complete. Some audits may take time to complete. Then
check the results and resolve any problems. You may need to change subsetting
values again so you can view all rule and data group combinations at once. On
the Work with Audits display, check the Audit Status column for the following
value:
*NOTRCVD - The comparison performed by the rule detected differences. Some
of the differences were not automatically recovered. Action is required. View
notifications for more information and resolve the problem.
Note: For more information about resolving reported problems, see “Interpreting
audit results” on page 678.
Synchronizing database files
The procedures in this topic use the Synchronize DG File Entry (SYNCDGFE)
command to synchronize selected database files associated with a data group,
between two systems. If you use this command when performing the initial
synchronization of a data group, use the procedure from the source system to send
database files to the target system.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About synchronizing file entries (SYNCDGFE command)” on page 505.
To synchronize a database file between two systems using the SYNCDGFE
command defaults, do the following or use the alternative process described below:
1. From the Work with DG Definitions display, type 17 (File entries) next to the data
group to which the file you want to synchronize is defined and press Enter.
2. The Work with DG File Entries display appears. Type 16 (Sync DG file entry) next
to the file entry for the file you want to synchronize and press Enter.
Note: If you are synchronizing file entries as part of your initial configuration, you
can type 16 next to the first file entry and then press F13 (Repeat). When
you press Enter, all file entries will be synchronized.
Alternative Process:
You will need to identify the data group and data group file entry in this procedure. In
Step 8 and Step 9, you will need to make choices about the sending mode and trigger
support. For additional information, see “About synchronizing file entries
(SYNCDGFE command)” on page 505.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 41
(Synchronize DG File Entry) and press Enter.
3. The Synchronize DG File Entry (SYNCDGFE) display appears. At the Data group
definition prompts, specify the name of the data group to which the file is
associated.
4. At the System 1 file and Library prompts, specify the name of the database file
you want to synchronize and the library in which it is located on system 1.
5. If you want to synchronize only one member of a file, specify its name at the
Member prompt.
6. At the Data source prompt, ensure that the value matches the system that you
want to use as the source for the synchronization.
7. The default value *YES for the Release wait prompt indicates that MIMIX will hold
the file entry in a release-wait state until a synchronization point is reached. Then
it will change the status to active. If you want to hold the file entry for your
intervention, specify *NO.
8. At the Sending mode prompt, specify the value for the type of data to be
synchronized.
9. At the Disable triggers on file prompt, specify whether the database apply process
should disable triggers when processing the file. Accept *DGFE to use the value
specified in the data group file entry or specify another value. Skip to Step 14.
10. At the Save active prompt, accept *SYSDFN so that objects in use are saved while
in use, or specify another value.
11. At the Save active wait time prompt, specify the number of seconds to wait for a
commit boundary or a lock on the object before continuing the save.
12. At the Allow object differences prompt, accept the default value *ALL.
13. If you specified *SAVRST for Step 8, at the Include logical files prompt, indicate
whether you want to include attached logical files when sending the file. The
default, *YES, includes attached logical files that are not explicitly excluded from
replication.
14. To change any of the additional parameters, press F10 (Additional parameters).
Verify that the values shown for Include related files, Maximum sending file size
(MB) and Submit to batch are what you want.
15. To synchronize the file, press Enter.
Synchronizing objects
The procedures in this topic use the Synchronize Object (SYNCOBJ) command to
synchronize library-based objects between two systems. The objects to be
synchronized can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503
a. At the Object and library prompts, specify the name or the generic value you
want.
b. At the Object type prompt, accept *ALL or specify a specific object type to
synchronize.
c. At the Object attribute prompt, accept *ALL to synchronize the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
e. At the System 2 object and System 2 library prompts, if the object and library
names on system 2 are equal to the system 1 names, accept the defaults.
Otherwise, specify the name of the object and library on system 2 to which you
want to synchronize the objects.
f. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system to
which to synchronize the objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
Note: When you specify *ONLY and a data group name is not specified, if any
files that are processed by this command are cooperatively processed and
the data group that contains these files is active, the command could fail if
the database apply job has a lock on these files.
7. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *SYSDFN to allow saving objects in use, or
specify another value.
b. At the Save active wait time prompt, specify the number of seconds to wait for
a commit boundary or a lock on the object before continuing the save. This
parameter is ignored when *NO is specified for Save active.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. At the System 1 ASP group or device prompt, specify the name of the auxiliary
storage pool (ASP) group or device where objects configured for replication may
reside on system 1. Otherwise, accept the default to use the current job’s ASP
group name.
10. At the System 2 ASP device number prompt, specify the number of the auxiliary
storage pool (ASP) where objects configured for replication may reside on system
2. Otherwise, accept the default to use the same ASP number from which the
object was saved (*SAVASP). Only the libraries in the system ASP and any basic
user ASPs from system 2 will be in the library name space.
11. At the System 2 ASP device name prompt, specify the name of the auxiliary
storage pool (ASP) device where objects configured for replication may reside on
system 2. Otherwise, accept the default to use the value specified for the system
Synchronizing IFS objects
The procedures in this topic use the Synchronize IFS Object (SYNCIFS) command to
synchronize IFS objects between two systems. The IFS objects to be synchronized
can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503
e. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object path name and System 2 name pattern values are
ignored when a data group is specified.
f. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
6. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *YES to allow saving objects in use, or
specify another value.
b. At the Save active option prompt, accept *NONE as the default, or specify
another value. This parameter is ignored when *NO is specified for Save
active.
Note: Both parameters are ignored if the data group specifies *OPTIMIZED for
the IFS transmission method.
7. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
8. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. Continue with Step 11.
9. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
10. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
11. To optionally specify a file identifier (FID) for the object on either system, do the
following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 1. Values for System 1 file identifier prompt can be used
alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 2. Values for System 2 file identifier prompt can be used
alone or in combination with the IFS object path name.
Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on
page 317.
12. To start the synchronization, press Enter.
To synchronize IFS objects without a data group
To synchronize IFS objects not associated with a data group between two systems,
do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43
(Synchronize IFS object) and press Enter. The Synchronize IFS Object
(SYNCIFS) command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the IFS objects prompts, specify elements for one or more object selectors that
identify IFS objects to synchronize. You can specify as many as 300 object
selectors by using the + for more prompt for each selector. For more information,
see the topic on object selection in the MIMIX Administrator Reference book.
For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the
name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID
values. See Step 12.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
synchronize.
e. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
f. At the System 2 object path name and System 2 name pattern prompts, if the
IFS object path name and name pattern on system 2 are equal to the system 1
names, accept the defaults. Otherwise, specify the path name and pattern on
system 2 to which you want to synchronize the IFS objects.
g. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on
which to synchronize the IFS objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
7. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *YES to allow saving objects in use, or
specify another value.
b. At the Save active option prompt, accept *NONE as the default, or specify
another value. This parameter is ignored when *NO is specified for Save
active.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. Continue with Step 12.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To optionally specify a file identifier (FID) for the object on either system, do the
following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 1. Values for System 1 file identifier prompt can be used
alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 2. Values for System 2 file identifier prompt can be used
alone or in combination with the IFS object path name.
Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on
page 317.
13. To start the synchronization, press Enter.
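For example, a request to synchronize a directory’s IFS objects to a remote system
without a data group might take the following form. The OBJ, SYS2, and SYNCAUT
keywords are assumed from the prompt names and may differ in your installation:
    (installation-library-name)/SYNCIFS DGDFN(*NONE)
        OBJ(('/directory/path')) SYS2(system-name) SYNCAUT(*YES)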
Synchronizing DLOs
The procedures in this topic use the Synchronize DLO (SYNCDLO) command to
synchronize document library objects (DLOs) between two systems. The DLOs to be
synchronized can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503
For more information, see “Object selection for Compare and Synchronize
commands” on page 425.
For each selector, do the following:
a. At the DLO path name prompt, accept *ALL or specify the name or the generic
value you want.
b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the
scope of DLOs to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the DLO path name.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to
synchronize.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if
the DLO path name and name pattern on system 2 are equal to the system 1
names, accept the defaults. Otherwise, specify the path name and pattern on
system 2 to which you want to synchronize the DLOs.
h. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on
which to synchronize the DLOs.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
7. Specify whether the synchronize request can save the object in an active
environment using IBM's save while active support. Press F1 (Help) for additional
information about these fields. Do the following:
a. At the Save active prompt, accept *YES to allow saving objects in use, or
specify another value.
b. At the Save active wait time prompt, if needed, you can change the number of
seconds to wait for a lock on the object before continuing the save. This
parameter is ignored when *NO is specified for Save active.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To start the synchronization, press Enter.
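A sketch of the equivalent command request follows; the DLO, SYS2, and SYNCAUT
keywords are assumed from the prompt names and should be verified with the
command prompter in your installation:
    (installation-library-name)/SYNCDLO DGDFN(*NONE)
        DLO(('/folder/document')) SYS2(system-name) SYNCAUT(*YES)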
Synchronizing data group activity entries
The procedures in this topic use the Synchronize DG Activity Entry (SYNCDGACTE)
command to synchronize an object that is identified by a data group activity entry with
any status value—*ACTIVE, *DELAYED, *FAILED, or *COMPLETED.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About synchronizing data group activity entries (SYNCDGACTE)” on page 504
To synchronize an object identified by a data group activity entry, do the following:
1. From the Work with Data Group Activity Entry display, type 16 (Synchronize) next
to the activity entry that identifies the object you want to synchronize and press
Enter.
2. The Confirm Synchronize of Object display appears. Press Enter to confirm the
synchronization.
Alternative Process:
You will need to identify the data group and data group activity entry in this procedure.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 45
(Synchronize DG Activity Entry) and press Enter.
3. At the Data group definition prompts, specify the data group name.
4. At the Object type prompt, specify a specific object type to synchronize or press
F4 to see a valid list.
5. Additional parameters appear based on the object type selected. Do one of the
following:
• For files, you will see the Object, Library, and Member prompts. Specify the
object, library and member that you want to synchronize.
• For objects, you will see the Object and Library prompts. Specify the object and
library of the object you want to synchronize.
• For IFS objects, you will see the IFS object prompt. Specify the IFS object that
you want to synchronize.
• For DLOs, you will see the Document library object and Folder prompts.
Specify the folder path and DLO name of the DLO you want to synchronize.
6. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
7. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
8. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
9. To start the synchronization, press Enter.
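As a sketch, a request for a file identified by an activity entry might look like the
following; the OBJTYPE, OBJ, and MBR keywords are assumed from the prompts
described above and should be verified with the command prompter in your
installation:
    (installation-library-name)/SYNCDGACTE DGDFN(data-group-name)
        OBJTYPE(*FILE) OBJ(library/file) MBR(*ALL)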
Synchronizing tracking entries
Tracking entries are MIMIX constructs which identify IFS objects, data areas, or data
queues configured for replication with MIMIX advanced journaling. You can use a
tracking entry to synchronize the contents, attributes, and authorities of the item it
represents.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 499
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 503
• “About synchronizing tracking entries” on page 507
CHAPTER 21 Introduction to programming
MIMIX includes a variety of functions that you can use to extend MIMIX capabilities
through automation and customization.
The topics in this chapter include:
• “Support for customizing” on page 532 describes several functions you can use to
customize your replication environment.
• “Completion and escape messages for comparison commands” on page 534 lists
completion, diagnostic, and escape messages generated by comparison
commands.
• The MIMIX message log provides a common location to see messages from all
MIMIX products. “Adding messages to the MIMIX message log” on page 541
describes how you can include your own messaging from automation programs in
the MIMIX message log.
• MIMIX supports batch output jobs on numerous commands and provides several
forms of output, including outfiles. For more information, see “Output and batch
guidelines” on page 542.
• “Displaying a list of commands in a library” on page 547 describes how to display
the superset of all commands known to License Manager or subset the list by a
particular library.
• “Running commands on a remote system” on page 548 describes how to run a
single command or multiple commands on a remote system.
• “Procedures for running commands RUNCMD, RUNCMDS” on page 549
provides procedures for using run commands with a specific protocol or by
specifying a protocol through existing MIMIX configuration elements.
• “Using lists of retrieve commands” on page 555 identifies how to use MIMIX list
commands to include retrieve commands in automation.
• Commands are typically set with default values that reflect the recommendation of
Vision Solutions. “Changing command defaults” on page 556 provides a method
for customizing default values should your business needs require it.
Support for customizing
MIMIX includes several functions that you can use to customize processing within
your replication environment.
Collision resolution
In the context of high availability, a collision is a clash of data that occurs when a
target object and a source object are both updated at the same time. When the
change to the source object is replicated to the target object, the data does not match
and the collision is detected.
With MIMIX user journal replication, the definition of a collision is expanded to include
any condition where the status of a file or a record is not what MIMIX determines it
should be when MIMIX applies a journal transaction. Examples of these detected
conditions include the following:
• Updating a record that does not exist
• Deleting a record that does not exist
• Writing to a record that already exists
• Updating a record for which the current record information does not match the
before image
The database apply process contains 12 collision points at which MIMIX can attempt
to resolve a collision.
When a collision is detected, by default the file is placed on hold due to an error
(*HLDERR) and user action is needed to synchronize the files. MIMIX provides
additional ways to automatically resolve detected collisions without user intervention.
This process is called collision resolution. With collision resolution, you can specify
different resolution methods to handle these different types of collisions. If a collision
does occur, MIMIX attempts the specified collision resolution methods until either the
collision is resolved or the file is placed on hold.
You can specify collision resolution methods for a data group or for individual data
group file entries. If you specify *AUTOSYNC for the collision resolution element of
the file entry options, MIMIX attempts to fix any problems it detects by synchronizing
the file.
You can also specify a named collision resolution class. A collision resolution class
allows you to define what type of resolution to use at each of the collision points.
Collision resolution classes allow you to specify several methods of resolution to try
for each collision point and support the use of an exit program. These additional
choices for resolving collisions allow customized solutions for resolving collisions
without requiring user action. For more information, see “Collision resolution” on
page 408.
Completion and escape messages for comparison commands
When the comparison commands finish processing, a completion or escape message
is issued. In the event of an escape message, a diagnostic message is issued prior to
the escape message. The diagnostic message provides additional information
regarding the error that occurred.
All completion or escape messages are sent to the MIMIX message log. To find
messages for comparison commands, specify the name of the command as the
process type. For more information about using the message log, see the MIMIX
Operations book.
CMPFILA messages
The following are the messages for CMPFILA, with a comparison level specification of
*FILE:
• Completion LVI3E01 – This message indicates that all files were compared
successfully.
• Diagnostic LVE3E0D – This message indicates that a particular attribute
compared differently.
• Diagnostic LVE3385 – This message indicates that differences were detected for
an active file.
• Diagnostic LVE3E12 – This message indicates that a file was not compared. The
reason the file was not compared is included in the message.
• Escape LVE3E05 – This message indicates that files were compared with
differences detected. If the cumulative differences include files that were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3381 – This message indicates that compared files were different but
active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E09 – This message indicates that the CMPFILA command ended
abnormally.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
The following are the messages for CMPFILA, with a comparison level specification of
*MBR:
• Completion LVI3E05 – This message indicates that all members compared
successfully.
• Diagnostic LVE3388 – This message indicates that differences were detected for
an active member.
• Escape LVE3E16 – This message indicates that members were compared with
differences detected. If the cumulative differences include members that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
CMPOBJA messages
The following are the messages for CMPOBJA:
• Completion LVI3E02 – This message indicates that objects were compared but no
differences were detected.
• Diagnostic LVE3384 – This message indicates that differences were detected for
an active object.
• Escape LVE3E06 – This message indicates that objects were compared and
differences were detected. If the cumulative differences include objects that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3380 – This message indicates that compared objects were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
The LVI3E02 message includes data containing the number of objects compared, the
system 1 name, and the system 2 name. The LVE3E06 message includes the same
message data as LVI3E02, and also includes the number of differences detected.
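Because escape messages can be monitored in CL, these message IDs are useful in
automation programs. The following is a minimal hypothetical sketch; the CMPOBJA
parameters are placeholders for your environment, while PGM, MONMSG, and
SNDMSG are standard IBM i CL commands:
    PGM
        /* Compare objects; parameters are placeholders.              */
        /* MONMSG traps escape message LVE3E06 (differences detected). */
        CMPOBJA DGDFN(data-group-name) OBJ((library/object))
        MONMSG MSGID(LVE3E06) EXEC(DO)
            SNDMSG MSG('CMPOBJA detected differences') TOUSR(*SYSOPR)
        ENDDO
    ENDPGM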
CMPIFSA messages
The following are the messages for CMPIFSA:
• Completion LVI3E03 – This message indicates that all IFS objects were compared
successfully.
• Diagnostic LVE3E0F – This message indicates that a particular attribute was
compared differently.
• Diagnostic LVE3386 – This message indicates that differences were detected for
an active IFS object.
• Diagnostic LVE3E14 – This message indicates that an IFS object was not
compared. The reason the IFS object was not compared is included in the
message.
• Escape LVE3E07 – This message indicates that IFS objects were compared with
differences detected. If the cumulative differences include IFS objects that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3382 – This message indicates that compared IFS objects were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Escape LVE3E0B – This message indicates that the CMPIFSA command ended
abnormally.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
CMPDLOA messages
The following are the messages for CMPDLOA:
• Completion LVI3E04 – This message indicates that all DLOs were compared
successfully.
• Diagnostic LVE3E11 – This message indicates that a particular attribute
compared differently.
• Diagnostic LVE3387 – This message indicates that differences were detected for
an active DLO.
• Diagnostic LVE3E15 – This message indicates that a DLO was not compared.
The reason the DLO was not compared is included in the message.
• Escape LVE3E08 – This message indicates that DLOs were compared and
differences were detected. If the cumulative differences include DLOs that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3383 – This message indicates that compared objects were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Escape LVE3E0C – This message indicates that the CMPDLOA command ended
abnormally.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
CMPRCDCNT messages
The following are the messages for CMPRCDCNT:
• Escape LVE3D4D – This message indicates that ACTIVE(*YES) outfile
processing failed and identifies the reason code.
• Escape LVE3D5A – This message indicates that system journal replication is not
active.
• Escape LVE3D5F – This message indicates that an apply session exceeded the
unprocessed entry threshold.
• Escape LVE3D6D – This message indicates that user journal replication is not
active.
• Escape LVE3D6F – This message identifies the number of members compared
and how many compared members had differences.
• Escape LVE3D72 – This message identifies a child process that ended
unexpectedly.
• Escape LVE3E17 – This message indicates that no object was found for the
specified selection criteria.
• Informational LVI306B – This message identifies a child process that started
successfully.
• Informational LVI306D – This message identifies a child process that completed
successfully.
• Informational LVI3D45 – This message indicates that active processing
completed.
• Informational LVI3D50 – This message indicates that work files are not deleted.
• Informational LVI3D5A – This message indicates that system journal replication is
not active.
• Informational LVI3D5F – This message identifies an apply session that has
exceeded the unprocessed entry threshold.
• Informational LVI3D6D – This message indicates that user journal replication is
not active.
• Informational LVI3E05 – This message identifies the number of members
compared. No differences were detected.
• Informational LVI3E06 – This message indicates that no object was selected for
processing.
CMPFILDTA messages
The following are the messages for CMPFILDTA:
• Completion LVI3D59 – This message indicates that all members compared were
identical or that one or more members differed but were then completely repaired.
• Diagnostic LVE3031 – This message indicates that the name of the local system
was entered on the System 2 (SYS2) prompt. Using the name of the local system
on the SYS2 prompt is not valid.
• Diagnostic LVE3D40 – This message indicates that a record in one of the
members cannot be processed. In this case, another job is holding an update lock
on the record and the wait time has expired.
• Diagnostic LVE3D42 – This message indicates that a selected member cannot be
processed and provides a reason code.
• Diagnostic LVE3D46 – This message indicates that a file member contains one or
more field types that are not supported for comparison. These fields are excluded
from the data compared.
• Diagnostic LVE3D50 – This message indicates that a file member contains one or
more large object (LOB) fields and that either *NONE was specified for the Data
group definition (DGDFN) parameter or *NO was specified for the Process while
active (ACTIVE) parameter. In this case, files containing LOB fields cannot be
repaired and the request to process the file member is ignored.
• Diagnostic LVE3D64 – This message indicates that the compare detected minor
differences in a file member. In this case, one member has more records
allocated. Excess allocated records are deleted. This difference does not affect
replication processing, however.
• Diagnostic LVE3D65 – This message indicates that processing failed for the
selected member. The member cannot be compared. Error message LVE0101 is
returned.
• Escape LVE3358 – This message indicates that the compare has ended
abnormally, and is shown only when the conditions of messages LVI3D59,
LVE3D5D, and LVE3D59 do not apply.
• Escape LVE3D5D – This message indicates that insignificant differences were
found or remain after repair. The message provides a statistical summary of the
differences found. Insignificant differences may occur when a member has
deleted records while the corresponding member has no records yet allocated at
the corresponding positions. It is also possible that one or more selected
members contains excluded fields, such as large objects (LOBs).
• Escape LVE3D5E – This message indicates that the compare request ended
because the data group was not fully active. The request included active
processing (ACTIVE), which requires a fully active data group. Output may not be
complete or accurate.
• Escape LVE3D5F – This message indicates that the apply session exceeded the
specified threshold for unprocessed entries. The DB apply threshold
(DBAPYTHLD) parameter determines what action should be taken when the
threshold is exceeded. In this case, the value *END was specified for
DBAPYTHLD, thereby ending the requested compare and repair action.
• Escape LVE3D59 – This message indicates that significant differences were
found or remain after repair, or that one or more selected members could not be
compared. The message provides a statistical summary of the differences found.
• Escape LVE3D56 – This message indicates that no member was selected by the
object selection criteria.
• Escape LVE3D60 – This message indicates that the status of the data group
could not be determined. The WRKDG (MXDGSTS) outfile returned a value of
*UNKNOWN for one or more fields used in determining the overall status of the
data group.
• Escape LVE3D62 – This message indicates the number of mismatches that will
not be fully processed for a file due to the large number of mismatches found for
this request. The compare will stop processing the affected file and will continue to
process any other files specified on the same request.
• Escape LVE3D67 – This message indicates that the value specified for the File
entry status (STATUS) parameter is not valid. To process members in *HLDERR
status, a data group must be specified on the command and *YES must be
specified for the Process while active parameter.
• Escape LVE3D68 – This message indicates that a switch cannot be performed
due to members undergoing compare and repair processing.
• Escape LVE3D69 – This message indicates that the data group is not configured
for database. Data groups used with the CMPFILDTA command must be
configured for database, and all processes for that data group must be active.
• Escape LVE3D6C – This message indicates that the CMPFILDTA command
ended before it could complete the requested action. The processing step in
progress when the end was received is indicated. The message provides a
statistical summary of the differences found.
• Escape LVE3E41 – This message indicates that a database apply job cannot
process a journal entry with the indicated code, type, and sequence number
because a supporting function failed. The journal information and the apply
session for the data group are indicated. See the database apply job log for details
of the failed function.
• Informational LVI3727 – This message indicates that the database apply process
(DBAPY) is currently processing a repair request for a specific member. The
member was previously being held due to error (*HLDERR) and is now in
*CMPRLS state.
• Informational LVI3728 – This message indicates that the database apply process
(DBAPY) is currently processing a repair request for a specific member. The
member was previously being held due to error (*HLDERR) and has been
changed from *CMPRLS to *CMPACT state.
• Informational LVI3729 – This message indicates that the repair request for a
specific member was not successful. As a result, the CMPFILDTA command has
changed the data group file entry for the member back to *HLDERR status.
• Informational LVI372C – The CMPFILDTA command is ending controlled because
of a user request. The command did not complete the requested compare or
repair. Its output may be incomplete or incorrect.
• Informational LVI372D – The CMPFILDTA command exceeded the maximum rule
recovery time policy and is ending. The command did not complete the requested
compare or repair. Its output may be incomplete or incorrect.
• Informational LVI372E – The CMPFILDTA command is ending unexpectedly. It
received an unexpected request from the remote CMPFILDTA job to shut down
and is ending. The command did not complete the requested compare or repair.
Its output may be incomplete or incorrect.
• Informational LVI3D4B – This message indicates that work files are not
automatically deleted because the time specified on the Wait time (seconds)
(ACTWAIT) prompt expired or an internal error occurred.
• Informational LVI3D59 – This message indicates that the CMPFILDTA command
completed successfully. The message also provides a statistical summary of
compare processing.
• Informational LVI3D5E – This message indicates that the compare request ended
because the request required active processing and the data group was not
active. Results of the comparison may not be complete or accurate.
• Informational LVI3D5F – This message indicates that the apply session exceeded
the specified threshold for unprocessed entries, thereby ending the requested
compare and repair action. In this case, the value *END was specified for the DB
apply threshold (DBAPYTHLD) parameter, which determines what action should
be taken when the threshold is exceeded.
• Informational LVI3D60 – This message indicates that the status of the data group
could not be determined. The MXDGSTS outfile returned a value of *UNKNOWN
for one or more status fields associated with systems, journals, system managers,
journal managers, system communications, the remote journal link, and database
send and apply processes.
• Informational LVI3E06 – This message indicates that the data group specified
contains no data group file entries.
When active processing and ACTWAIT(*NONE) are specified, or when the active
wait time expires, some members will have unconfirmed differences if none of the
differences initially found was verified by the MIMIX database apply process.
The CMPFILDTA outfile contains more detail on the results of each member compare,
including information on the types of differences that are found and the number of
differences found in each member.
Messages LVI3D59, LVE3D5D, LVE3D59, and LVE3D6C include message data
containing the number of members selected on each system, the number of members
compared, the number of members with confirmed differences, the number of
members with unconfirmed differences, the number of members successfully
repaired, and the number of members for which repair was unsuccessful.
Adding messages to the MIMIX message log
Output and batch guidelines
This topic provides guidelines for display, print, and file output. In addition, the user
interface, the mechanics of selecting and producing output, and content issues such
as formatting are described.
Batch job submission guidelines are also provided. These guidelines address the
user interface as well as the mechanics of submitting batch jobs that are not part of
the mainline replication process.
Output parameter
Some commands can produce output of more than one type—display, print, or output
file. In these cases, the selection is made on the Output parameter. Table 70 lists the
values supported by the Output parameter.
Note: Not all values are supported for all commands. For some commands, a
combination of values is supported.
Table 70. Values supported by the Output parameter
*         Display only
*PRINT    Print output
*OUTFILE  Output file
*NONE     No output
Commands that support OUTPUT(*) and that can also run in batch are required to
support the other forms of output as well.
Commands called from a program or submitted to batch with a specification of
OUTPUT(*) default to OUTPUT(*PRINT). Displaying a panel during batch processing
or when called from another program would otherwise fail.
With the exception of messages generated as a result of running a command,
commands that support OUTPUT(*NONE) will generate no other forms of output.
Commands that support combinations of output values do not support OUTPUT(*) in
combination with other output values.
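For example, assuming a Work command such as the Work with Data Group
Definitions (WRKDGDFN) command supports these values, requests of the following
form would produce a printed report or an output file; the file name shown is
illustrative:
  WRKDGDFN OUTPUT(*PRINT)
  WRKDGDFN OUTPUT(*OUTFILE) OUTFILE(MYLIB/DGDFNS)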
Display output
Commands that support OUTPUT(*) provide the ability to display information
interactively. Display (DSP) and Work (WRK) commands commonly use display
support. Display commands typically display detailed information for a specific entity,
such as a data group definition. Work commands display a list of entries and provide a
summary view of those entries. Display support is required to work interactively with
the MIMIX product.
Work commands often provide subsetting capabilities that allow you to select a
subset of information. Rather than viewing all configuration entries for all data groups,
for example, subsetting allows you to view the configuration entries for a specific data
group. This ability allows you to easily view data that is important or relevant to you at
a given time.
Print output
Spooled output, generated by specifying OUTPUT(*PRINT), provides a readable form
of output for print or distribution purposes. Most Display (DSP) and Work (WRK)
commands support this form of output. Other commands, such as Compare (CMP)
and Verify (VFY) commands, also support spooled output in most cases.
The Work (WRK) and Display (DSP) commands support different categories of
reports. The following are standard categories of reports available from these
commands:
• The detail report contains information for one item, such as an object, definition,
or entry. A detail report is usually obtained by using option 6 (Print) on a Work
(WRK) display, or by specifying *PRINT on the Output parameter on a Display
(DSP) command.
• The list summary report contains summary information for multiple objects,
definitions, or entries. A list summary is usually obtained by pressing F21 (Print)
on a Work (WRK) display. You can also get this report by specifying *BASIC on
the Detail parameter on a Work (WRK) command.
• The list detail report contains detailed information for multiple objects,
definitions, or entries. A list detail report is usually obtained by specifying *PRINT
on the Output parameter of a Work (WRK) command.
Certain parameters, which vary from command to command, can affect the contents
of spooled output. The following list represents a common set of parameters that
directly impact spooled output:
• EXPAND(*YES or *NO) - The expand parameter is available on the Work with
Data Group Object Entries (WRKDGOBJE), the Work with Data Group IFS
Entries (WRKDGIFSE), and the Work with Data Group DLO Entries
(WRKDGDLOE) commands. Configuration for objects, IFS objects, and DLOs can
be accomplished using generic entries, which represent one or more actual
objects on the system. The object entry ABC*, for example, can represent many
objects on a system. Expand support provides a means to determine which actual
objects on a system are represented by a MIMIX configuration. Specifying *NO on
the EXPAND parameter prints the configured data group entries.
• DETAIL(*FULL or *BASIC) - Available on the Work (WRK) commands, the detail
option determines the level of detail in the generated spool file. Specifying
DETAIL(*BASIC) prints a summary list of entries. For example, this specification
on the Work with Data Group Definitions (WRKDGDFN) command will print a
summary list of data group definitions. Specifying DETAIL(*FULL) prints each data
group definition in detail, including all attributes of the data group definition.
Note: This parameter is ignored when OUTPUT(*) or OUTPUT(*OUTFILE) is
specified.
• RPTTYPE(*DIF, *ALL, *SUMMARY or *RRN, depending on command) - The
Report Type (RPTTYPE) parameter controls the amount of information in the
spooled file. The values available for this parameter vary, depending on the
command.
The values *DIF, *ALL, and *SUMMARY are available on the Compare File
Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS
Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands.
Specifying *DIF reports only detected differences. A value of *SUMMARY reports
a summary of objects compared, including an indication of differences detected.
*ALL provides a comprehensive listing of objects compared as well as difference
detail.
The Compare File Data (CMPFILDTA) command supports *DIF and *ALL values,
as well as the value *RRN. Specifying *RRN allows you to output the relative
record number of the first 1,000 objects that failed to compare. Using the *RRN
value can help resolve situations where a discrepancy is known to exist, but you
are unsure which system contains the correct data. In this case, *RRN provides
the information that enables you to display the specific records on the two
systems and to determine the system on which the file should be repaired.
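For example, assuming these parameters are specified together as shown, requests
of the following form illustrate the DETAIL and RPTTYPE parameters; the data group
name MYDG and its system names are illustrative:
  WRKDGDFN DETAIL(*FULL) OUTPUT(*PRINT)
  CMPFILA DGDFN(MYDG SYS1 SYS2) RPTTYPE(*DIF) OUTPUT(*PRINT)
  CMPFILDTA DGDFN(MYDG SYS1 SYS2) RPTTYPE(*RRN) OUTPUT(*PRINT)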
File output
Output files can be generated by specifying OUTPUT(*OUTFILE). Full outfile support
across the MIMIX product is a key enabler for advanced automation. It also allows
MIMIX customers and qualified MIMIX consultants to develop and deliver solutions
tailored to the individual needs of the user.
As with the other forms of output, output files are commonly supported across certain
classes of commands. The Work (WRK) commands commonly support output files. In
addition, many audit-based reports, such as Comparison (CMP) commands, also
provide output file support. Output file support for Work (WRK) commands provides
access to the majority of MIMIX configuration and status-related data. The Compare
(CMP) commands also provide output files as a key enabler for automatic error
detection and correction capabilities.
When you specify OUTPUT(*OUTFILE), you must also specify the OUTFILE and
OUTMBR parameters. The OUTFILE parameter requires a qualified file and library
name. As a result of running the command, the specified output file will be used. If the
file does not exist, it will automatically be created.
Note: If a new file is created for CMPFILA, for example, the record format used is
from the supplied model database file MXCMPFILA, found in the installation
library. The text description of the created file is “Output file for CMPFILA.” The
file cannot reside in the product library.
The Outmember (OUTMBR) parameter allows you to specify which member to use in
the output file. If no member exists, the default value of *FIRST creates a member
with the same name as the file. A second element on the Outmember
parameter indicates the way in which information is stored for an existing member. A
value of *REPLACE will clear the current contents of the member and add the new
records. A value of *ADD will append the new records to the existing data.
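For example, assuming the data group shown, a request of the following form writes
CMPFILA results to file MYLIB/CMPRESULT, replacing the contents of the first
member; the file and data group names are illustrative:
  CMPFILA DGDFN(MYDG SYS1 SYS2) OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPRESULT) OUTMBR(*FIRST *REPLACE)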
Expand support: Expand support was developed specifically for data group
configuration entries that support generic specifications. Data group object
entries, IFS entries, and DLO entries can all be configured using generic name
values. If you specify an object entry with an object name of ABC* in library XYZ and
accept the default values for all other fields, for example, all objects in library XYZ
whose names begin with ABC are replicated. Specifying EXPAND(*NO) will write the
specific configuration entries to the
output files. Using EXPAND(*YES) will list all objects from the local system that match
the configuration specified. Thus, if object name ABC* for library XYZ represented
1000 actual objects on the system, EXPAND(*YES) would add 1000 rows to the
output file. EXPAND(*NO) would add a single generic entry.
Note: EXPAND(*YES) support locates all objects on the local system.
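For example, assuming data group MYDG contains the generic object entry ABC* for
library XYZ, a request of the following form would write one row to the output file for
each actual object matched, rather than the single generic entry:
  WRKDGOBJE DGDFN(MYDG SYS1 SYS2) EXPAND(*YES) OUTPUT(*OUTFILE) OUTFILE(MYLIB/OBJENTS)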
General batch considerations
MIMIX functions that are identified as long-running processes typically allow you to
submit the requests to batch and avoid the unnecessary use of interactive resources.
Parameters typically associated with the Batch (BATCH) parameter include Job
description (JOBD) and Job name (JOB).
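For example, a long-running compare might be submitted to batch with a request of
the following form; the job description and job name shown are illustrative:
  CMPFILDTA DGDFN(MYDG SYS1 SYS2) BATCH(*YES) JOBD(MYLIB/MYJOBD) JOB(CMPFILDTA)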
Displaying a list of commands in a library
Running commands on a remote system
The Run Command (RUNCMD) and Run Commands (RUNCMDS) commands
provide a convenient way to run a single command or multiple commands on a
remote system. The RUNCMD and RUNCMDS commands replace and extend the
capabilities available in the IBM commands, Submit Remote Command
(SBTRMTCMD) and Run Remote Command (RUNRMTCMD).
The MIMIX commands provide a protocol-independent way of running commands
using MIMIX constructs such as system definitions, data group definitions, and
transfer definitions. The MIMIX commands enable you to run commands and receive
messages from the remote system.
In addition, the RUNCMD and RUNCMDS commands use the current data group
direction to determine where the command is to be run. This capability simplifies
automation by eliminating the need to manually enter source and target information at
the time a command is run.
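As a sketch only, a message might be sent on the system determined by the current
data group direction with a request of the following form; the parameter names CMD,
PROTOCOL, and DGDFN and the protocol value are assumptions based on the
prompts described in this topic, and the data group name is illustrative:
  RUNCMD CMD('SNDMSG MSG(''Apply complete'') TOUSR(QSYSOPR)') PROTOCOL(*DGDFN) DGDFN(MYDG SYS1 SYS2)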
Note: Do not change the RUNCMD or RUNCMDS commands to
PUBLIC(*EXCLUDE) without giving MIMIXOWN proper authority.
Procedures for running commands RUNCMD, RUNCMDS
Table 71. Specific protocols and specifications used for RUNCMD and RUNCMDS
11. At the User prompt, specify the user profile to use when the command is run on
the remote system.
12. To run the commands or monitor for messages, press Enter.
Table 72. MIMIX configuration protocols and specifications
Table 73. Options for processing journal entries with MIMIX *DGJRN protocol
To run a command when the database apply job for the specified file receives the
journal entry, do the following:
1. At the Protocol prompt, specify *DGJRN.
2. At the When to run prompt, specify *RCV.
Using lists of retrieve commands
Changing command defaults
Nearly all MIMIX processes are based on commands that are shipped with default
values reflecting best-practice recommendations, which ensures the easiest and best
use of each command. MIMIX also implements named configuration definitions
through which you can customize your configuration by using options on commands,
without resorting to changing command defaults.
If you wish to customize command defaults to fit a specific business need, use the
IBM Change Command Default (CHGCMDDFT) command. Be aware that by
changing a command default, you may be affecting the operation of other MIMIX
processes. Also, each update of MIMIX software will cause any changes to be lost.
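For example, assuming you want CMPFILA to produce a printed report by default, a
request of the following form could be used; the installation library name MIMIXLIB is
illustrative:
  CHGCMDDFT CMD(MIMIXLIB/CMPFILA) NEWDFT('OUTPUT(*PRINT)')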
Procedure components and concepts
For application groups, the shipped default procedures provide the ability to start, end,
check activity for switching, and switch the application group. For node procedures,
the shipped defaults provide the ability to automatically run data protection reports to
help you ensure your environment is protected the way you want.
Each operation is performed by a procedure that consists of a sequence of steps.
Each step calls a predetermined step program to perform a specific subtask of the
larger operation. Steps also identify runtime attributes for handling before and after
the program call within the context of the procedure.
Each step program is a reusable configuration element that identifies a task-
performing program and the attributes that determine where it runs and what type of
work it performs. A step program can perform work on an application group, its data
resource groups, their respective data groups, or on specified nodes. A set of shipped
step programs provides functionality for the default procedures created for application
groups and nodes.
In addition, you can copy or create your own procedures and step programs to
perform custom activity, change which procedure is the default of its type for an
application group, and change attributes of steps within a procedure.
You can also optionally create step messages. These are configuration elements that
define the error action to be taken for a specific error message identifier. A step
message provides the ability to determine the error action taken by a step based on
attributes defined in the error message identifier. Each step message is defined for an
installation so it can be used by multiple steps or by steps in multiple procedures.
Procedure types
Procedures have a type (TYPE) value which determines the operations for which the
procedure can be used. The following types are supported:
*END - The procedure is usable with the End Application Group (ENDAG)
command.
*NODE - The procedure only runs on a single node and is not associated with an
application group.
*START - The procedure is usable with the Start Application Group (STRAG)
command.
*SWTPLAN - The procedure is usable with the Switch Application Group
(SWTAG) command for a *PLANNED switch type.
*SWTUNPLAN - The procedure is usable with the Switch Application Group
(SWTAG) command for an *UNPLANNED switch type.
*USER - The procedure is user defined and is associated with an application
group.
When a procedure runs for an application group, a persistent job is
started for each of its data resource groups. These jobs operate independently and
persist until the procedure ends. Each persistent job evaluates each step in sequence
for work to be performed within the scope of the step program type. When a job for a
data resource group encounters a step that acts on data groups, it spawns an
additional job for each of its associated data groups. Each spawned data group job
performs the work for that data group and then ends.
Attributes of a step
A step defines attributes to be used at runtime for a specified step program in the
context of the specified procedure for an application group or node. The following
parameters identify the attributes of a step:
Sequence number (SEQNBR) - The sequence number determines the order in which
the step will be performed.
Action before step (BEFOREACT) - This parameter identifies what action is taken
by all jobs for the procedure before starting the step. The default value *NONE
indicates that the step will begin without additional action. Users can also specify
*WAIT so that jobs wait for all asynchronous jobs to complete processing previous
steps before starting the step. The value *MSGW will cause the step to be
started, then wait until all asynchronous jobs have completed processing all previous
steps and an operator has responded to an inquiry message from the procedure
indicating that the step is waiting to run. Then the step takes the action
indicated by the operator’s response to the message. A response of G (Go) will run
the program specified in the step. A response of C (Cancel) will cancel the procedure.
Action on error (ERRACT) - This parameter identifies what action to take for a job
used in processing the step when the job ends in error.
• The default value *QUIT will set the status of the job that ended in error to
*FAILED, as indicated in the expanded view of step status. The type of step
program used
by this step determines what happens to other jobs for the step and whether
subsequent steps are prevented from starting, as follows:
– If the step program is of type *DGDFN, jobs that are processing other data
groups within the same data resource group continue. When they complete,
the data resource group job ends. No subsequent steps that apply to that data
resource group or its data groups will be started. However, subsequent steps
will still be processed for other data resource groups and their data groups.
– If the step program is of type *DTARSCGRP, no subsequent steps that apply to
that data resource group or its data groups will be started. Jobs for other data
resource groups may still be running and will process subsequent steps that
apply to their data resource groups and data groups.
– If the step program is of type *AGDFN or *NODE, subsequent steps will not be
started. Jobs for data resource group or data group steps may still be running
and will process subsequent steps that apply to their data resource groups and
data groups.
• For the value *CONTINUE, the job continues processing as if the job had not
ended in error. The status of the job in error is set to *IGNERR and is indicated in
the expanded view of step status.
Operational control
Procedures of type *USER or *NODE can be invoked by the Run Procedure
(RUNPROC) command. For procedures of type *NODE, the RUNPROC command
always runs on the local system. For procedures of other types, the application group
command which corresponds to the procedure type must be used to invoke the
procedure. For example, a procedure of type *START must be invoked by the Start
Application Group (STRAG) command.
Where should the procedure begin? The value specified for the Begin at step
(STEP) parameter on the request to run the procedure determines the step at which
the procedure will start. The status of the last run of the procedure determines which
values are valid.
The default value, *FIRST, will start the specified procedure at its first step. This value
can be used when the procedure has never been run, when its previous run
completed (*COMPLETED or *COMPERR), or when a user acknowledged the status
of its previous run which failed, was canceled, or completed with errors
(*ACKFAILED, *ACKCANCEL, or *ACKERR respectively).
Other values are for resolving problems with a failed or canceled procedure. When a
procedure fails or is canceled, subsequent attempts to run the same procedure will
fail until user action is taken. You will need to determine the best course of action for
your environment based on the implications of the canceled or failed steps and any
steps which completed.
The value *RESUME will start the last run of the procedure beginning with the step at
which it failed, the step that was canceled in response to an error, or the step
following where the procedure was canceled. The value *RESUME may be
appropriate after you have investigated and resolved the problem which caused the
procedure to end. Optionally, if the problem cannot be resolved and you want to
resume the procedure anyway, you can override the attributes of a step before
resuming the procedure.
The value *OVERRIDE will override the status of all runs of the specified procedure
that did not complete. The *FAILED or *CANCELED status of these runs is
changed to acknowledged (*ACKFAILED or *ACKCANCEL) and a new run of the
procedure begins at the first step.
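For example, assuming PROC and AGDFN are the parameter names that identify the
procedure and the application group on the Run Procedure command, a failed user
procedure might be resumed after the underlying problem is resolved with a request
of the following form:
  RUNPROC PROC(PRECHECK) AGDFN(MYAPP) STEP(*RESUME)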
The MIMIX Operations book describes the operational level of working with
procedures and steps in detail.
Customizing user application handling for switching
Note: Any procedure with a step that invokes the step programs identified in Table
74 will issue the same error messages if action is not taken.
Table 74. Step programs for customizing user application handling

ENDUSRAPP - Customize to end user applications on the current primary node
before a switch occurs.
Where used: Procedures of type *SWTPLAN that use shipped default steps.
Source code template: ENDUSRAPP in source physical file MCTEMPLSRC in the
installation library.

STRUSRAPP - Customize to start user applications on the new primary system
following a switch.
Where used: Procedures of type *SWTPLAN and *SWTUNPLAN that use shipped
default steps.
Source code template: STRUSRAPP in source physical file MCTEMPLSRC in the
installation library.
All procedures that use the step programs listed in Table 74 will use the
customization.
customization.
Do the following:
1. Copy the source code template for the step program from the location indicated in
Table 74.
2. Create and compile a custom version of the program that will perform the
necessary activity for your applications. See “Step program format STEP0100” on
page 571 for details.
3. Copy the compiled step program to all systems in the installation. Ensure that it
has the same name and location on all systems.
Note: To prevent having your custom program replaced when a service pack is
installed, either the name of the program object or the library where it is
located must be different than the name and location specified in the
shipped default step program.
4. From the management system, enter the command:
installation_library/WRKSTEPPGM
5. Type 2 (Change) next to the step program you want and press Enter.
6. The Change Step Program (CHGSTEPPGM) command appears. Specify the
name and library of your custom program and press Enter.
Working with procedures
The Work with Procedures display is intended primarily for configuring and modifying
procedures. Only procedures of type *USER or *NODE can be run from this display.
For detailed information about status for steps and procedures, see the MIMIX
Operations book.
• To see the status of procedures that run on a node, type 21 (Procedure Status)
next to the system you want and press Enter.
7. Customize the procedure by adding or removing steps and adjusting step
attributes as needed using the topics within “Working with the steps of a
procedure” on page 567 and “Working with step programs” on page 570.
Deleting a procedure
Use these instructions to delete a procedure, including the runtime attributes of steps
within the procedure. The step programs referenced by the steps of the procedure are
not deleted.
The procedure cannot be in use. The default *USER procedure, PRECHECK, and
*NODE procedures CRTDPRDIR, CRTDPRFLR, and CRTDPRLIB cannot be
deleted.
Do the following from the management system:
1. On the Work with Procedures display, type 4 (Delete) next to the procedure you
want and press Enter.
2. A confirmation display appears. To delete the procedure, press Enter.
Working with the steps of a procedure
last sequence number in the procedure. If you want the step to run in a
different relative order within the procedure, specify a different value.
c. Specify the values you want for other runtime attributes in the remaining
prompts. Default values will allow asynchronous jobs to process the step
without waiting for other jobs to reach the step, and will quit if a job ends in error.
For details about the resulting behavior of other values for Action before step,
Action on error, and State, see “Attributes of a step” on page 559.
d. To add the step, press Enter.
• To disable a step, type 21 (Disable) next to the step you want and press Enter.
Working with step programs
Do the following to add a step program to the installation:
a. Type ADDSTEPPGM and press F4 (Prompt).
b. Specify a name for the step program.
c. Specify the name of the program object and the library in which it is located.
d. Specify the type of step program. This indicates the operational level at which
the program will run.
e. Specify the type of node on which the program will run.
f. Specify a description of the step program. This will be displayed when you view
details of a step which uses the step program.
g. To add the step program, press Enter.
Step program format STEP0100
Programs can be written in C, RPG, or CL. Source code templates ENDUSRAPP and
STRUSRAPP, in source physical file MCTEMPLSRC in the installation library, can be
used as templates for any custom step program; however, avoid using these names
for your own program.
A step program is called with the following parameters.
Application Group Name
INPUT; CHAR (10)
The name that identifies the application group definition. If the step program is of type
*NODE, this parameter contains all blanks.
Resource Group Name
INPUT; CHAR (10)
The name that identifies the resource group. If the resource group is not applicable or
if the step program is of type *NODE, this parameter contains all blanks.
Data Group Name
INPUT; CHAR (26)
The name that identifies the data group definition (name system1 system2). If the
data group is not applicable or if the step program is of type *NODE, this parameter
contains all blanks.
Data Group Source
INPUT; CHAR (1)
The value that identifies the data source as configured in the data group definition. If
the step program is of type *NODE, this parameter contains all blanks.
Working with step messages
Additional programming support for procedures and steps
Shipped procedures and step programs
This chapter includes information about procedures and step programs that are
shipped with MIMIX. Information is provided for application groups that use IBM i
clustering and those that do not. Procedures and steps are different depending on the
type of application group.
“Values for procedures and steps” on page 576 describes the values documented
throughout this section for shipped procedures and steps.
Shipped Procedures
Information for shipped procedures is provided in the following sections. The steps
within each procedure are listed in the order in which they will be performed.
• “Shipped procedures for application groups” on page 578
• “Shipped procedures for data protection reports” on page 585
• “Shipped default procedures for IBM i cluster type application groups” on
page 586
• “Shipped user procedures for cluster type application groups” on page 592
Shipped Steps
Information for shipped steps is provided in the following sections. The steps are
listed in alphabetical order. Details for the steps include a description of the step, the
procedures where the step runs, and an indication whether the step is required or can
be changed.
• “Steps for application groups” on page 602
• “Steps for data protection report procedures” on page 611
• “Steps for clustering environments” on page 613
• “Steps for MIMIX for MQ” on page 623
For information about how to add a step program to a procedure, see “Adding a step
to a procedure” on page 552.
Values for procedures and steps
Type — Identifies the operational level at which the step program runs. This is
determined when the step program is created.
Resource Group - The step runs in a spawned job for each data resource group
within the specified application group.
Data Group - The step runs in a spawned job for each data group within the
specified application group.
Node - The step runs in the persistent job for the *NODE procedure.
Node — Identifies the type of node where the step program runs. This is determined
when the step program is created.
All - The step runs on all nodes.
Primary - The step runs only on the primary node.
Backup - The step runs on all backup nodes.
New Primary - If the step is added to a switch procedure, it runs on the new
primary node. Steps that are not part of a switch procedure run on the primary
node.
Local - The step runs only on the node on which the procedure started.
Peer - The step runs on all peer nodes.
Replicate - The step runs on all replicate nodes.
Before Action — Identifies what action is taken by all jobs for the procedure before
starting the step. This is determined for the step when it is added to a procedure.
None (*NONE) - No special action is taken. Processing continues with this step.
Message Wait (*MSGW) - The step is started, then waits until all asynchronous
jobs have completed processing all previous steps and a user has responded to
an inquiry message from the procedure which indicated the step has been waiting
to run. Then the step takes the action indicated by the user’s response to the
message. The action can be specified in the MIMIX portal application using the
Message Details dialog from the Procedures portlet, Step Status portlet, or
Procedure History window.
Wait (*WAIT) - The step is started only after all asynchronous jobs have
completed processing all previous steps.
Error Action — Identifies what action to take when a job used in processing the step
ends in error. This is determined for the step when it is added to a procedure.
Quit (*QUIT) - The status of the job that ended in error is set to Failed (*FAILED).
The type of step program determines what happens to other jobs for the step and
whether subsequent steps are prevented from starting. These behaviors are:
• Application Group - For step programs that run at the application group level,
subsequent steps that apply to the application group will not be started. Jobs
for data resource group or data group steps may still be running and will
process subsequent steps that apply to their data resource groups and data
groups.
• Data Resource Group - For step programs that run at the data resource
group level, no subsequent steps that apply to that data resource group or
its data groups will be started. Jobs for other data resource groups may still
be running and will process subsequent steps that apply to their data
resource groups and data groups.
• Data Group - For step programs that run at the data group level, jobs that are
processing other data groups within the same data resource group continue.
When they complete, the data resource group job ends. No subsequent
steps that apply to that data resource group or its data groups will be started.
However, subsequent steps will still be processed for other data resource
groups and their data groups.
Continue (*CONTINUE) - The job continues processing as if the job had not
ended in error. The status of the job in error is set to Ignored Error (*IGNERR) and
is indicated in the expanded view of step status.
Error Message Identifier (*MSGID) - Error processing is determined by whether
the error message ID has been predefined as a step message within the
installation. If a step message for the error ID exists, the step message
determines the action taken. If a step message is not found for the error message
ID, the error action defaults to Quit.
Message Wait (*MSGW) - The step ran and an inquiry message issued by the job
requires a response before any additional processing for the job can occur. The
possible responses and their resulting behaviors are:
• Retry the step and continue - This response will retry processing the step
program within the same job.
• Ignore the error and continue - This response will set the job’s status to Ignored
Error (*IGNERR), as indicated in the expanded view of step status, and
processing continues as if the job had not ended in error.
• Cancel the procedure - This response cancels the step and sets the status for
the step and procedure to Failed. Subsequent steps are handled in the same
manner described for the value Quit.
Shipped procedures for application groups
END
When END is created for a new application group, it is set as the default end
procedure. The End Application Group (ENDAG) command automatically uses the
application group’s currently set default END procedure unless you specify a different
procedure. Steps in the END procedure are included in Table 75.
Table 75. Steps in END procedure.
ENDTGT
The ENDTGT procedure ends replication processes that run on the target system.
Steps in the ENDTGT procedure are included in Table 76.
Table 76. Steps in ENDTGT procedure.
ENDIMMED
Steps in the ENDIMMED procedure are included in Table 77.
Table 77. Steps in ENDIMMED procedure.
PRECHECK
Steps in the PRECHECK user procedure are included in Table 78.
Table 78. Steps in PRECHECK procedure.
START
Steps in the START procedure are included in Table 79.
Note: If you are using the MIMIX for MQ feature, you will have to add customized
steps to these procedures. For more information, see the MIMIX for IBM
WebSphere MQ book.
Table 79. Steps in START procedure.
SWTPLAN
Steps in the planned switch (SWTPLAN) procedure are included in Table 80.
Note: If you are using the MIMIX for MQ feature, you will have to add customized
steps to these procedures. For more information, see the MIMIX for IBM
WebSphere MQ book.
Table 80. Steps in SWTPLAN procedure.
SWTUNPLAN
Steps in the unplanned switch (SWTUNPLAN) procedure are included in Table 81.
Note: If you are using the MIMIX for MQ feature, you will have to add customized
steps to these procedures. For more information, see the MIMIX for IBM
WebSphere MQ book.
Table 81. Steps in SWTUNPLAN procedure.
Shipped procedures for data protection reports
CRTDPRDIR
The CRTDPRDIR procedure creates a data protection report for directories. Steps in
the CRTDPRDIR procedure are included in Table 82.
Table 82. Steps in CRTDPRDIR procedure.
CRTDPRFLR
The CRTDPRFLR procedure creates a data protection report for folders. Steps in the
CRTDPRFLR procedure are included in Table 83.
Table 83. Steps in CRTDPRFLR procedure.
CRTDPRLIB
The CRTDPRLIB procedure creates a data protection report for libraries. Steps in the
CRTDPRLIB procedure are included in Table 84.
Table 84. Steps in CRTDPRLIB procedure.
Shipped default procedures for IBM i cluster type application groups
Table 85. Steps in END procedure for IBM i cluster application groups.
Table 86. Steps in START procedure.
Table 87. Steps in shipped default SWTPLAN procedure.
Table 88. Steps in SWTUNPLAN procedure.
Shipped user procedures for cluster type application groups
APP_END
Steps for the cluster user procedure, application end, are included in Table 89.
Table 89. Steps in cluster user procedure application end.
APP_FAIL
Steps for the cluster user procedure, application failover, are included in Table 90.
Table 90. Steps in cluster user procedure application failover.
APP_STR
Steps for the cluster user procedure, application start, are included in Table 91.
Table 91. Steps in cluster user procedure application start.
APP_SWT
Steps for the cluster user procedure, application switch, are included in Table 92.
Table 92. Steps in the cluster user procedure application switch.
Shipped user procedures for *GMIR resource groups
GMIR_END
Steps in the Global Mirror (GMIR) user procedure, end, are included in Table 93.
Table 93. Steps in GMIR user procedure end.
GMIR_FAIL
Steps in the shipped Global Mirror (GMIR) user procedure, failover, are included in
Table 94.
Table 94. Steps in GMIR user procedure failover.
GMIR_JOIN
Steps in the Global Mirror (GMIR) user procedure, rejoin, are included in Table 95.
Table 95. Steps in GMIR user procedure rejoin.
GMIR_STR
Steps in the Global Mirror (GMIR) user procedure, start, are included in Table 96.
Table 96. Steps in GMIR user procedure start.
GMIR_SWT
Steps in the Global Mirror (GMIR) user procedure, switch, are included in Table 97.
Table 97. Steps in GMIR user procedure switch.
Shipped user procedures for *LUN resource groups
LUN_FAIL
Steps in the LUN user procedure, fail, are included in Table 98.
Table 98. Steps in LUN user procedure fail.
LUN_SWT
Steps in the LUN user procedure, switch, are included in Table 99.
Table 99. Steps in LUN user procedure switch.
Shipped user procedures for *PEER resource groups
PEER_END
Steps in the PEER user procedure, end, are included in Table 100.
Table 100. Steps in PEER user procedure end.
PEER_STR
Steps in the PEER user procedure, start, are included in Table 101.
Table 101. Steps in PEER user procedure start. Click on a step name to see more information.
Columns: Step/Step Program Name; Where Step Runs (Type, Node); Procedure Behavior for Step (Before Action, Error Action); Step can be Disabled or Removed.
Shipped user procedures for *PPRC resource groups
PPRC_END
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, end, are included in
Table 102.
Table 102. Steps in PPRC user procedure end. Click on a step name to see more information.
Columns: Step/Step Program Name; Where Step Runs (Type, Node); Procedure Behavior for Step (Before Action, Error Action); Step can be Disabled or Removed.
PPRC_FAIL
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, failover, are included
in Table 103.
Table 103. Steps in PPRC user procedure failover. Click on a step name to see more information.
Columns: Step/Step Program Name; Where Step Runs (Type, Node); Procedure Behavior for Step (Before Action, Error Action); Step can be Disabled or Removed.
PPRC_JOIN
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, rejoin, are included
in Table 104.
Table 104. Steps in PPRC user procedure rejoin. Click on a step name to see more information.
Columns: Step/Step Program Name; Where Step Runs (Type, Node); Procedure Behavior for Step (Before Action, Error Action); Step can be Disabled or Removed.
PPRC_STR
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, start, are included in
Table 105.
Table 105. Steps in PPRC user procedure start. Click on a step name to see more information.
Columns: Step/Step Program Name; Where Step Runs (Type, Node); Procedure Behavior for Step (Before Action, Error Action); Step can be Disabled or Removed.
PPRC_SWT
Steps in the Peer to Peer Remote Copy (PPRC) user procedure, switch, are included
in Table 106.
Table 106. Steps in PPRC user procedure switch. Click on a step name to see more information.
Columns: Step/Step Program Name; Where Step Runs (Type, Node); Procedure Behavior for Step (Before Action, Error Action); Step can be Disabled or Removed.
Steps for application groups
Columns: Step/Step Program Name; Step Description; Where Step Runs (Type, Node); Used in Procedure (START, END, ENDIMMED, ENDTGT, PRECHECK, SWTPLAN, SWTUNPLAN). In the procedure columns, 'R' indicates the step is required and 'C' indicates the step can be changed or disabled.
ENDUSRAPP End user application. Application Group Primary C
This step can be customized to perform any actions necessary to end the production application.
MXAUDDIFF Audit difference verification. Data Group New Primary C C C
This step checks for audits that have detected differences that have not been corrected.
MXCHKCFG Configuration verification. Data Group New Primary C C C
This step verifies that data groups are switchable. Data groups with replicate nodes are not checked.
MXCHKDGEND Checks for ended data group, maximum of 30 checks. Data Group New Primary C
This step periodically checks the data group status to verify that replication processes are ended. The RJ link status is not checked. If processes are not ended, a 10-second delay occurs before status is checked again. The step fails if replication processes are not ended after 30 checks.
MXDBPND Ensures there are no pending database transactions. Data Group New Primary C C C
This step checks for active commit cycles or unprocessed entries in the data group.
MXDSBCST Disables foreign key constraints. Data Group Backup C C C C C
This step disables all foreign key constraints on the target node if the data group is configured for disabled constraints.
This step is available to be manually added.
MXENDDGTGT End target data group controlled. Data Group New Primary C
This step ends the processes running on the target node for the data group in a controlled manner.
MXENDDGUNP End data group controlled for an unplanned switch. Data Group New Primary R
This step ends the processes running on the target node for the data group in a controlled manner during an unplanned switch, using a timeout of 3600 seconds. If the timeout is reached before the controlled end is completed, an inquiry message is issued. Errors are ignored.
MXENDJRNT End journaling on the new target node. Data Group New Primary C C
This step ends journaling on the new target node when the data group is not configured for journaling on the target node.
MXOBJPND Ensures there are no pending object transactions. Data Group New Primary C C C
This step checks for pending object activity entries for the data group.
MXSETSTRJ Set journal starting points. Data Group New Primary R R
This step sets the starting point for the data group that is in switch mode. A journal entry is sent to notify Target Journal Inspection about this event.
MXSTRJRNS Starts journaling on the new source node. Data Group New Primary R R
This step starts journaling for all applicable objects on the new source node of the data group if the data group does not allow journaling on target. This includes files and any data areas, data queues, and IFS objects configured for user journal replication.
MXSWTCMP Denotes the end of a switch procedure. Data Group New Primary R R
This step sets the switch to complete for a data group when the switch has completed.
MXUPDNOD Update node roles after switch. Application Group New Primary R R
This step updates node roles after a switch has completed.
MXSTRDBS Starts data group database source jobs. Data Group Local
This step starts the database replication processes on the source node starting with the last processed entry for the receiver and sequence number. If a data group cannot be started because it requires pending entries to be cleared, a second start request is issued which clears pending entries.
MXSTRDBT Starts data group database target jobs. Data Group Local
This step starts the database replication processes on the target node starting with the last processed entry for the receiver and sequence number. If a data group cannot be started because it requires pending entries to be cleared, a second start request is issued which clears pending entries.
MXSTRDGUNP Starts data groups during unplanned cluster switch. Data Group New Primary
This step starts the data group at the last processed entry for both receiver and sequence number for an unplanned cluster switch. Communication errors are ignored. If a data group cannot be started because it requires pending entries to be cleared, a second start request is issued which clears pending entries.
MXSTROBJS Starts data group object source jobs. Data Group Local
This step starts the object replication processes on the source node starting with the last processed entry for the receiver and sequence number. If a data group cannot be started because it requires pending entries to be cleared, a second start request is issued which clears pending entries.
MXSTROBJT Starts data group object target jobs. Data Group Local
This step starts the object replication processes on the target node starting with the last processed entry for the receiver and sequence number. If a data group cannot be started because it requires pending entries to be cleared, a second start request is issued which clears pending entries.
Steps for data protection report procedures
Columns: Step/Step Program Name; Step Description; Where Step Runs (Type, Node); Used in Procedure (CRTDPRDIR, CRTDPRFLR, CRTDPRLIB).
Steps for clustering environments
Columns: Step/Step Program Name; Step Description; Where Step Runs (Type, Node); Used in Procedure (START, END, APP_STR, APP_END, APP_FAIL, APP_SWT, GMIR_STR, GMIR_END, GMIR_FAIL, GMIR_JOIN, GMIR_SWT, LUN_FAIL, LUN_SWT, PEER_STR, PEER_END, PPRC_STR, PPRC_END, PPRC_FAIL, PPRC_JOIN, PPRC_SWT, SWTPLAN, SWTUNPLAN). In the procedure columns, 'R' indicates the step is required and 'C' indicates the step can be changed or disabled.
MCADDVOL Global Mirror - add volumes to session. Data Resource Group Local R R R R
This step adds a volume to the Global Mirror session.
MCENDIJOB End iASP MIMIX system jobs. Data Resource Group Local R R R R R R
This step ends the jobs associated with the switchable iASP.
MCFOPPRC Peer to Peer Remote Copy (PPRC) failover. Data Resource Group Local R R
This step switches the direction of mirroring using the failover PPRC (failoverpprc) API.
MCIDAFAIL Changes the IDA status to be *INUSE. Application Group Local R
This step changes the QCSTHAAPPI status to be *INUSE. This is used by the CRG exit programs to indicate that the application has been successfully started on the new primary node following a cluster failover scenario.
MCODAAVL Sets the ODA status on all nodes to *AVAILABLE. Application Group All R
This step sets the QCSTHAAPPO status on all nodes to *AVAILABLE. This is used by the application CRG exit program to recognize that all data and device CRGs have successfully completed their switch processing.
MCODAPAVL Sets the ODA status on the primary node to *AVAILABLE. Application Group New Primary R R R
This step sets the QCSTHAAPPO status on the new primary node to *AVAILABLE as the last step of a switch or failover.
MCRMVGMIR Remove Global Mirror. Data Resource Group Local R R
This step removes the Global Mirror relationship using the remove GMIR (rmgmir) API.
MCRVSFLASH Reverse flash for Global Mirror. Data Resource Group Local R R
This step reverses the flash copy relationship for Global Mirror during a switch.
MCSWTICRG Switch iASP CRG. Data Resource Group New Primary C C
This step switches the iASP using the SWTAG command.
MCWAITODA Wait for ODA status on all nodes to be *AVAILABLE. Application Group New Primary R R
This step waits for the QCSTHAAPPO status on the new primary node to become *AVAILABLE. This is used by the application CRG exit program to know when the data CRG has ended all replication and is ready to proceed with the switch processing.
Steps for MIMIX for MQ
Table 111 lists the step programs that are shipped for MIMIX for MQ and the
procedures where they are used. The values for the Used in Procedure column
indicate the following:
• ‘R’ - The step is required and cannot be changed or disabled.
• ‘C’ - The step is included and can be changed or disabled.
• Blank - The step is not used in the procedure.
Columns: Step/Step Program Name and Description; Where Step Runs (Type, Node); Used in Procedure (START, SWTPLAN, SWTUNPLAN).
Customizing with exit point programs
The MIMIX family of products provides a variety of exit points to enable you to extend
and customize your operations.
The topics in this chapter include:
• “Summary of exit points” on page 625 provides tables that summarize the exit
points available for use.
• “Working with journal receiver management user exit points” on page 628
describes how to use user exit points safely.
MIMIX also supports a generic interface to existing database and object replication
process exit points that provides enhanced filtering capability on the source system.
This generic user exit capability is only available through a Certified MIMIX
Consultant.
Summary of exit points
Event program exit point: called after condition check (pre-defined and user-defined).
Working with journal receiver management user exit points
User exit points in critical processing areas enable you to incorporate specialized
processing with MIMIX to extend function to meet additional needs for your
environment. Access to user exit processing is provided through the use of an exit
program that can be written in any language supported by IBM i.
Because user exit programming allows user code to run within MIMIX processes,
great care must be exercised to prevent the user code from interfering with the proper
operation of MIMIX. For example, a user exit program that inadvertently discards an
entry needed by MIMIX could result in a file not being available in the event of a
switch. Use caution in designing a configuration for use with user exit programming.
With proper design, programming, and testing, you can safely use user exit
processing. Services are also available to help customers implement specialized
solutions.
Restrictions for Change Management Exit Points: The following restrictions apply
when the exit program is called from either of the change management exit points:
• Do not include the Change Data Group Receiver (CHGDGRCV) command in your
exit program.
• Do not submit batch jobs for journal receiver change or delete management from
the exit program. Submitting a batch job would allow the in-line exit point
processing to continue and potentially return to normal MIMIX journal
management processing, thereby conflicting with journal manager operations. By
not submitting journal receiver change management to a batch job, you prevent a
potential problem where the journal receiver is locked when it is accessed by a
batch program.
If the exit program fails and signals an exception to MIMIX, MIMIX processing
continues as if the exit program was not specified.
Return Code
OUTPUT; CHAR (1)
This value indicates how to continue processing the journal receiver when the exit
program returns control to the MIMIX process. This parameter must be set. When the
exit program is called from Function C2, the value of the return code is ignored.
Possible values are:
0 Do not continue with MIMIX journal management processing for this journal
receiver.
1 Continue with MIMIX journal management processing.
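In a CL exit program, this output parameter is set with a simple change-variable
command; for example, using the variable name declared in the sample program later
in this topic:
CHGVAR VAR(&RETURN) VALUE('1') /* Continue MIMIX journal management */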
Function
INPUT; CHAR (2)
The exit point from which this exit program is called. Possible values, as used by the
sample exit program in this topic, are:
C1 Receiver change management, before the change (pre-change).
C2 Receiver change management, after the change (post-change).
D0 Receiver delete management, pre-check.
D1 Receiver delete management, before the delete (pre-delete).
D2 Receiver delete management, after the delete (post-delete).
Note: Restrictions for exit programs called from the C1 and C2 exit points are
described within topic “Change management exit points” on page 628.
Journal Definition
INPUT; CHAR (10)
The name that identifies the journal definition.
System
INPUT; CHAR (8)
The name of the system defined to MIMIX on which the journal is defined.
Reserved1
INPUT; CHAR (10)
This field is reserved and contains blank characters.
Journal Name
INPUT; CHAR (10)
The name of the journal that MIMIX is processing.
Journal Library
INPUT; CHAR (10)
The name of the library in which the journal is located.
Receiver Name
INPUT; CHAR (10)
The name of the journal receiver associated with the specified journal. This is the
journal receiver on which journal management functions will operate. For receiver
change management functions, this always refers to the currently attached journal
receiver. For receiver delete management functions, this always refers to the same
journal receiver.
Receiver Library
INPUT; CHAR (10)
The library in which the journal receiver is located.
Sequence Option
INPUT; CHAR (6)
The value of the Sequence option (SEQOPT) parameter on the CHGJRN command
that MIMIX processing would have used to change the journal receiver. It is
recommended that you specify this parameter to prevent synchronization problems if
you change the journal receiver. This parameter is only used when the exit program is
called at the C1 (pre-change) exit point. Possible values are:
*CONT The journal sequence number of the next journal entry created is 1 greater than
the sequence number of the last journal entry in the currently attached journal
receiver.
*RESET The journal sequence number of the first journal entry in the newly attached
journal receiver is reset to 1. The exit program should either reset the sequence
number or set the return code to 0 to allow MIMIX to change the journal receiver
and reset the sequence number.
Threshold Value
INPUT; DECIMAL(15, 5)
The value to use for the THRESHOLD parameter on the CRTJRNRCV command.
This parameter is only used when the exit program is called at the C1 (pre-change)
exit point. Possible values are:
0 Do not change the threshold value. The exit program must not change the
threshold size for the journal receiver.
value The exit program must create a journal receiver with this threshold value, specified
in kilobytes. The exit program must also change the journal to use that receiver, or
send a return code value of 0 so that MIMIX processing can change the journal
receiver.
Reserved2
INPUT; CHAR (1)
This field is reserved and contains blank characters.
Reserved3
INPUT; CHAR (1)
This field is reserved and contains blank characters.
Table 115. Sample journal receiver management exit program
/*--------------------------------------------------------------*/
/* Program....: DMJREXIT */
/* Description: Example user exit program using CL */
/*--------------------------------------------------------------*/
PGM PARM(&RETURN &FUNCTION &JRNDEF &SYSTEM +
           &RESERVED1 &JRNNAME &JRNLIB &RCVNAME +
           &RCVLIB &SEQOPT &THRESHOLD &RESERVED2 +
           &RESERVED3)
/*--------------------------------------------------------------*/
/* Parameters, as described above                                */
/*--------------------------------------------------------------*/
DCL VAR(&RETURN) TYPE(*CHAR) LEN(1) /* Return code */
DCL VAR(&FUNCTION) TYPE(*CHAR) LEN(2) /* Exit point */
DCL VAR(&JRNDEF) TYPE(*CHAR) LEN(10) /* Journal definition */
DCL VAR(&SYSTEM) TYPE(*CHAR) LEN(8) /* System name */
DCL VAR(&RESERVED1) TYPE(*CHAR) LEN(10) /* Reserved */
DCL VAR(&JRNNAME) TYPE(*CHAR) LEN(10) /* Journal name */
DCL VAR(&JRNLIB) TYPE(*CHAR) LEN(10) /* Journal library */
DCL VAR(&RCVNAME) TYPE(*CHAR) LEN(10) /* Receiver name */
DCL VAR(&RCVLIB) TYPE(*CHAR) LEN(10) /* Receiver library */
DCL VAR(&SEQOPT) TYPE(*CHAR) LEN(6) /* Sequence option */
DCL VAR(&THRESHOLD) TYPE(*DEC) LEN(15 5) /* Threshold (KB) */
DCL VAR(&RESERVED2) TYPE(*CHAR) LEN(1) /* Reserved */
DCL VAR(&RESERVED3) TYPE(*CHAR) LEN(1) /* Reserved */
/*--------------------------------------------------------------*/
/* Constants and misc. variables */
/*--------------------------------------------------------------*/
DCL VAR(&STOP) TYPE(*CHAR) LEN(1) VALUE('0')
DCL VAR(&CONTINUE) TYPE(*CHAR) LEN(1) VALUE('1')
DCL VAR(&PRECHG) TYPE(*CHAR) LEN(2) VALUE('C1')
DCL VAR(&POSTCHG) TYPE(*CHAR) LEN(2) VALUE('C2')
DCL VAR(&PRECHK) TYPE(*CHAR) LEN(2) VALUE('D0')
DCL VAR(&PREDLT) TYPE(*CHAR) LEN(2) VALUE('D1')
DCL VAR(&POSTDLT) TYPE(*CHAR) LEN(2) VALUE('D2')
DCL VAR(&RTNJRNE) TYPE(*CHAR) LEN(165)
DCL VAR(&PRVRCV) TYPE(*CHAR) LEN(10)
DCL VAR(&PRVRLIB) TYPE(*CHAR) LEN(10)
/*--------------------------------------------------------------*/
/* MAIN */
/*--------------------------------------------------------------*/
CHGVAR &RETURN &CONTINUE /* Continue processing receiver*/
/*--------------------------------------------------------------*/
/* At the pre-change user exit point if the journal library is  */
/* MYLIB, change the journal receiver within the exit program.  */
/*--------------------------------------------------------------*/
IF (&FUNCTION *EQ &PRECHG) THEN(DO)
IF (&JRNLIB *EQ 'MYLIB') THEN(DO)
IF (&THRESHOLD *GT 0) THEN(DO)
CRTJRNRCV JRNRCV(&RCVLIB/NEWRCV0000) +
THRESHOLD(&THRESHOLD)
CHGJRN JRN(&JRNLIB/&JRNNAME) +
JRNRCV(&RCVLIB/NEWRCV0000) SEQOPT(&SEQOPT)
ENDDO /* There has been a threshold change */
ELSE (CHGJRN JRN(&JRNLIB/&JRNNAME) JRNRCV(*GEN) +
SEQOPT(&SEQOPT)) /* No threshold change */
CHGVAR &RETURN &STOP /* Stop processing entry */
ENDDO /* &JRNLIB is MYLIB */
ENDDO /* &FUNCTION *EQ &PRECHG */
/*--------------------------------------------------------------*/
/* At the post-change user exit point if the journal library is */
/* ABCLIB, save the just detached journal receiver. */
/*--------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &POSTCHG) THEN(DO)
IF COND(&JRNLIB *EQ 'ABCLIB') THEN(DO)
RTVJRNE JRN(&JRNLIB/&JRNNAME) +
RCVRNG(&RCVLIB/&RCVNAME) FROMENTLRG(*FIRST) +
RTNJRNE(&RTNJRNE)
/*----------------------------------------------------------*/
/* Retrieve the journal entry, extract the previous receiver*/
/* name and library to do the save with. */
/*----------------------------------------------------------*/
CHGVAR &PRVRCV (%SUBSTRING(&RTNJRNE 126 10))
CHGVAR &PRVRLIB (%SUBSTRING(&RTNJRNE 136 10))
SAVOBJ OBJ(&PRVRCV) LIB(&PRVRLIB) DEV(TAP02) +
OBJTYPE(*JRNRCV) /* Save detached receiver */
ENDDO /* &JRNLIB is ABCLIB */
ENDDO /* &FUNCTION is &POSTCHG */
/*--------------------------------------------------------------*/
/* Handle processing for the pre-check exit point. */
/*--------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &PRECHK) THEN(DO)
IF (&JRNLIB *EQ 'TEAMLIB') THEN( +
SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB) DEV(TAP01) +
OBJTYPE(*JRNRCV))
ENDDO /* &FUNCTION is &PRECHK */
ENDPGM
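As a usage note, the sample could be compiled as an OPM CL program; the library
and source file names below are illustrative assumptions:
CRTCLPGM PGM(MYLIB/DMJREXIT) SRCFILE(MYLIB/QCLSRC) SRCMBR(DMJREXIT)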
APPENDIX A Supported object types for system journal replication
This list identifies IBM i object types and indicates whether MIMIX can replicate these
through the system journal.
Note: Not all object types exist in all releases of IBM i.
Object Type  Description                         Replicated
*SYMLNK      Symbolic link                       Yes (note 2)
*TBL         Table                               Yes
*USRIDX      User index                          Yes
*USRPRF      User profile                        Yes (note 13)
*USRQ        User queue                          Yes (note 4)
*USRSPC      User space                          Yes (note 10)
*VLDL        Validation list                     Yes
*WSCST       Workstation customizing object      Yes
Notes:
1. Replicating configuration objects to a previous version of IBM i may cause unpredictable
results.
2. Objects in QDLS, QSYS.LIB, QFileSvr.400, QLANSrv, QOPT, QNetWare, QNTC, QSR,
and QFPNWSSTG file systems are not currently supported via Data Group IFS Entries.
Objects in QSYS.LIB and QDLS are supported via Data Group Object Entries and Data
Group DLO Entries. Excludes stream files associated with a server storage space.
3. File attribute types include: DDMF, DSPF, DSPF36, DSPF38, ICFF, LF, LF38, MXDF38,
PF-DTA, PF-SRC, PF38-DTA, PF38-SRC, PRTF, PRTF38, and SAVF.
4. Content is not replicated.
5. Spooled files are replicated separately from the output queue.
6. These objects are system specific. Duplicating them could cause unpredictable results on
the target system.
7. Duplicating these objects can potentially cause problems on the target system.
8. These objects are not duplicated due to size and IBM recommendation.
9. These object types can be supported by MIMIX for replication through the system journal,
but are not currently included. Contact CustomerCare if you need support for these object
types.
10. Changes made through external interfaces such as APIs and commands are replicated.
Direct update of the content through a pointer is not supported.
11. To replicate *PGM objects to an earlier release of IBM i you must be able to save them to
that earlier release of IBM i.
12. Device description attributes include: APPC, ASC, ASP, BSC, CRP, DKT, DSPLCL,
DSPRMT, DSPVRT, FNC, HOST, INTR, MLB, NET, OPT, PRTLAN, PRTLCL, PRTRMT,
PRTVRT, RTL, SNPTUP, SNPTDN, SNUF, and TAP.
13. The MIMIX-supplied user profiles MIMIXOWN and LAKEVIEW, as well as IBM-supplied
user profiles, should not be replicated.
14. Files linked to a data dictionary, such as files starting with QIDCT, are not supported.
MIMIX product-level security
License Manager provides the capability to enable additional security to protect your
MIMIX environment and limit access to the product. These functions provide an
additional level of security beyond that available with IBM i.
When enabled, product-level security enforces the additional Vision-provided product
authority and command authority functions:
• Product authority allows an administrator to set or change the product authority
level needed for a user profile or for public access to a specific MIMIX product.
These authority levels are in addition to the standard IBM i security levels.
• Command authority allows an administrator to change the authority level of
specific MIMIX commands. When product-level security is enabled, you can use
the command authority function to raise or lower the authority level for a command
or to reset it to the shipped authority values.
Any authorization levels that you set for specific user profiles to control access to a
product or command are not enabled or enforced unless you take explicit action to set
product-level security to "On" for each product. It is recommended that you take
advantage of this additional security.
This appendix lists the authority level of MIMIX commands when product-level
security is turned on. For more information about authority levels for License Manager
commands, setting product-level security, and using the product authority and
command authority functions, see the Using License Manager book.
Authority levels for MIMIX commands
Table 116 shows the commands and menu interfaces within MIMIX products that can
be controlled with security functions provided by Vision Solutions. The left side of the
table indicates the products in which the commands are available; to use the
command from within a product, you must first have a valid license key that includes
the product. The right side of the table shows the minimum authority level needed for
the command when you use the provided product authority or command authority.
Be aware of the security considerations for commands and interfaces that are used
by more than one product in the same library. When you have multiple products in the
same library, you should set command authority in each product to use the same
product-security level. This is also true of product-level authority to commands for
individual user profiles.
Table 116. Commands and menu interfaces available by product, showing their shipped minimum authority
level settings when the provided security functions are used.
Left columns (product availability): MIMIX DR, MIMIX Enterprise, or MIMIX Professional
(Replication column: U = DB only, O = Obj only); MIMIX Monitor; MIMIX Promoter (b);
MIMIX for SAP/R3 or MIMIX Global (a); MIMIX for PowerHA. Right columns: Command
and Menu Interfaces, followed by the Minimum Authority Level (*ADM, *MGT, *OPR, *DSP).
X X ABOUT X
O ADDDGDLOE X
U ADDDGFE X
U ADDDGFEALS X
O ADDDGIFSE X
O X ADDDGOBJE X
X X ADDDTARGE c X
X ADDEXITPGM X
X ADDMMXDMNE d X
X X ADDMONINF X
X X X X ADDMSGLOGE X
X X ADDNFYE X
X X ADDNODE X
U ADDRJLNK X
X X ADDSTEP X
X X ADDSTEPMSG X
X X ADDSTEPPGM X
X ADDSWTDEVE X
X X BLDAGENV X
X X BLDCLUOBJ X
X BLDJRNENV X
X X CHGAGDFN X
X X CHGCLUOBJ X
X CHGDG X
X X CHGDGDFN X
O CHGDGDLOE X
U CHGDGFE X
U CHGDGFEALS X
O CHGDGIFSE X
O X CHGDGOBJE X
X CHGDGRCV X
X CHGDTARGE X
X CHGJRNDFN X
X X CHGMMXCLU X
X X CHGMONINF X
X X CHGMONOBJ X
X X CHGMONSTS X
X X CHGNODE X
X X CHGNODSTS X
U X CHGPRMGRP X
X X CHGPROC X
X X CHGPROCSTS X
U CHGRJLNK X
X X CHGSTEP X
X X CHGSTEPMSG X
X X CHGSTEPPGM X
X CHGSWTDEVE X
X CHGSWTDFN X
X X CHGSWTFWK X
X X X X CHGSYSDFN e X
X X X X CHGTFRDFN X
X CHKDGFE X
X X CHKR3PRF X
X X CHKSWTFWK X
O CLRCNRCCH X
X CLRDGRCYP X
X X CLOMMXLST X
X CMPDLOA X
X CMPFILA X
X CMPFILDTA X
O CMPIFSA X
O X CMPOBJA X
X CMPRCDCNT X
O CNLDGACTE X
X X CNLPROC X
U X CPYACTF X
X X CPYCFGDTA X
X X CPYDGDFN X
O CPYDGDLOE X
U CPYDGFE X
O CPYDGIFSE X
O X CPYDGOBJE X
X CPYJRNDFN X
X X CPYMONOBJ X
X X CPYPROC X
X X X X CPYSYSDFN X
X X X X CPYTFRDFN X
X X CRTAGDFN X
U CRTCRCLS X
X X CRTDGDFN X
U CRTDGTSP X
X CRTJRNDFN X
X X CRTMMXCLU X
X X CRTMMXDFN X
X X CRTMONOBJ X
X X CRTPROC X
X CRTSWTDFN X
X X CRTSWTFWK X
X X X X CRTSYSDFN e X
X X X X CRTTFRDFN X
O CVTDG X
X CVTDGIFSE X
X X DLTAGDFN X
U DLTCRCLS X
X X DLTDGDFN X
U DLTDGTSP X
X DLTJRNDFN X
X DLTJRNENV X
X X DLTMMXCLU X
X X DLTMONOBJ X
X X DLTPROC X
X DLTSWTDFN X
X X DLTSWTFWK X
X X X X DLTSYSDFN X
X X X X DLTTFRDFN X
U DMLOGOUT X
X DPYDGCFG X
X X DSPAGDFN X
O DSPPATH X
X DSPCPYDTL X
X X DSPDGDFN X
O DSPDGDLOE X
U DSPDGFE X
U DSPDGFEALS X
O DSPDGIFSE X
X DSPDGIFSTE X
O X DSPDGOBJE X
X DSPDGOBJTE X
X X DSPDGSTS X
X DSPDTARGE X
X DSPJRNDFN X
X DSPJRNSTC X
X X X X DSPMMXMSGQ X
X X DSPMONINF X
X X DSPMONOBJ X
X X DSPMONSTS X
X X DSPNODE X
U DSPRJLNK X
X DSPSWTDEVE X
X DSPSWTDFN X
X DSPSWTSTS X
X X X X DSPSYSDFN X
X X X X DSPTFRDFN X
X X ENDAG X
X X ENDCOLSRV X
X X ENDDG X
U ENDJRNFE X
X ENDJRNIFSE X
X ENDJRNOBJE X
X X X X ENDMMXf X
X X ENDMMXMGR X
X X ENDMON X
X X ENDMSTMON X
U ENDRJLNK X
X X X ENDSVR X
X ENDSWT X
X ENDSWTSCN X
X X EXPMONOBJ X
X X X X X EXPPRDINF X
X X X X X GRTPRDAUT X
U HLDDGLOG X
X X HLDMON X
X X IMPMONOBJ X
X X INZR3SWT X
X X LODAG X
O LODDGDLOE X
U LODDGFE X
U LODDGIFSTE X
O X LODDGOBJE X
U LODDGOBJTE X
X X LODDTARGE X
X LODSWTDEVE X
X X X X MIMIX X
X MIMIXPRM X
U MMXSNDJRNE X
X X MOVCLUMSG X
X X OPNMMXLST X
X X OVRSTEP X
X RGZACTF X
U RLSDGLOG X
X X RLSMON X
O RMVDGACTE X
O RMVDGDLOE X
U RMVDGFE X
U RMVDGFEALS X
O RMVDGIFSE X
X RMVDGIFSTE X
O X RMVDGOBJE X
X RMVDGOBJTE X
X X RMVDTARGE X
X RMVMMXDMNE d X
X X RMVMMXNOD X
X X X X RMVMSGLOGE X
X X RMVNODE X
U RMVRJCNN X
U RMVRJLNK X
X X RMVSTEP X
X X RMVSTEPMSG X
X X RMVSTEPPGM X
X RMVSWTDEVE X
X X RNMDGDFN X
X RNMJRNDFN X
X X RNMMONOBJ X
X X RNMPROC X
X X RNMSTEPPGM X
X X X X RNMSYSDFN X
X X X X RNMTFRDFN X
X RTVAPYSTS X
X X RTVDGDFN X
X X RTVDGDFN2 X
O RTVDGDLOE X
U RTVDGEXIT X
U RTVDGFE X
X RTVDGIFSTE X
O X RTVDGOBJE X
X X RTVDGSTS X
X RTVJRNDFN X
X RTVJRNSTS X
X X RTVMMXLSTE X
X X RTVPROC X
X RTVPRMGRP X
U RTVRJLNK X
X X RTVSPSTS X
X X RTVSTEP X
X X RTVSTEPMSG X
X X RTVSTEPPGM X
X X RTVSWTFWK X
X X X X RTVSYSDFN X
X X RTVSYSSTS X
X X X X RTVTFRDFN X
X RTYAPMNT X
O RTYDGACTE X
X X RSMPROC X
X X X X RUNCMD X
X X X X RUNCMDS X
X X RUNMON X
X X RUNPROC X
X X RUNRULE X
X X RUNRULEGRP X
X X RUNSWTFWK X
X X X X X RVKPRDAUT X
O SETDGAUD X
U SETDGEXIT X
U SETDGFE X
X SETDGIFSTE X
X SETDGOBJTE X
X SETDGRCYP X
X SETEXTPCY X
X SETIDCOLA X
X X X SETLCLSYS X
X X SETODASTS X
X X SETMMXPCY X
X X SETMMXSCD X
X SETSWTSRC X
X X SNDCLUOBJ X
X X STRAG g X
X X STRCOLSRV X
X X STRCVTAG X
X X STRDG X
U STRJRNFE X
X STRJRNIFSE X
X STRJRNOBJE X
X X X X STRMMXf X
X X X X STRMMXMGR X
X X STRMON X
X X STRMSTMON X
U STRRJLNK X
X X X STRSVR X
X STRSWT X
X STRSWTSCN X
X X SWTAG X
X SWTDG X
X X SWTR3PRF X
X SYNCDG X
O SYNCDGACTE X
U SYNCDGFE X
O SYNCDLO X
O SYNCIFS X
O X SYNCOBJ X
X X TSTPWRCHG X
X X X X VFYCMNLNK X
U VFYDGFE X
U VFYJRNFE X
X VFYJRNIFSE X
X VFYJRNOBJE X
U VFYKEYATR X
X X WRKAG X
X X WRKAUD X
X X WRKAUDHST X
X X WRKAUDOBJ X
X X WRKAUDOBJH X
U X WRKCPYSTS X
U WRKCRCLS X
X X WRKDG X
O WRKDGACT X
O WRKDGACTE X
X X WRKDGDFN X
O WRKDGDLOE X
U WRKDGFE X
U WRKDGFEALS X
U WRKDGFEHLD X
O WRKDGIFSE X
X WRKDGIFSTE X
O X WRKDGOBJE X
X WRKDGOBJTE X
U WRKDGTSP X
X X WRKDTARGE X
X WRKIFSREF X
X WRKJRNDFN X
X X WRKMON X
X X WRKMONINF X
X X X X WRKMSGLOG X
X X WRKNFY Xh X
X X WRKNODE X
X X WRKPROC X
X X WRKPROCSTS X
X WRKRCY X
X WRKRJLNK X
X X WRKSPSTS X
X X WRKSTEP X
X X WRKSTEPMSG X
X X WRKSTEPPGM X
X X WRKSTEPSTS X
X WRKSWT X
X WRKSWTDEVE X
X X X X WRKSYS X
X X X X WRKSYSDFN X
X X X X WRKTFRDFN X
a. Includes licenses for MIMIX Global or MIMIX for PowerHA unless otherwise noted.
b. MIMIX Promoter is not included with MIMIX Professional or MIMIX DR.
c. Supported values for the resource group type (TYPE) parameter vary depending on which license keys are present.
When MIMIX DR is licensed, the only supported type is *DTA. When MIMIX Enterprise or MIMIX Professional is
licensed, the supported types are *DTA and *PEER. When MIMIX for PowerHA is licensed, the supported types are
*DEV, *GMIR, *LUN, *PEER, *PPRC, and *XSM.
d. Supported for MIMIX for PowerHA licenses only.
e. A license for MIMIX Global (C1) requires either a MIMIX Enterprise or a MIMIX Professional license.
f. This command is not protected by product level security. Authority to use this command is controlled by the product level
security assigned to any commands used by this command.
g. Supported values for the resource group (TYPE) parameter vary depending on which license keys are present.
When MIMIX DR is licensed, the only supported type is *DTA. When MIMIX Enterprise or MIMIX Professional is licensed,
the supported types are *APP, *DTA, and *PEER. When MIMIX for PowerHA is licensed, the supported types are *DEV,
*GMIR, *LUN, *PEER, *PPRC, and *XSM.
h. The following options available from the Work with Notifications (WRKNFY) display require *OPR or higher authority:
4=Delete, 46=Acknowledge, 47=Mark as New. To specify authority for these options, see “Substitution values for
command authority” on page 647.
Substitution values for command authority
DLTDGDFN   DLTDGDFN2
RTVDGDFN   RTVDGDFN2
SETLCLSYS  SETLOCSYS
WRKDGDFN   WRKDGDFN2
WRKDGDLOE  WRKDGDLOE2
WRKDGFE    WRKDGFE2
WRKDGOBJE  WRKDGOBJE2
WRKJRNDFN  WRKJRNDFN3
WRKMSGLOG  WRKMSGLOG2
WRKSYSDFN  WRKSYSDFN2
WRKTFRDFN  WRKTFRDFN2
Copying configurations
This section provides information about how you can copy configuration data between
systems.
• “Supported scenarios” on page 648 identifies the scenarios supported in version 8
of MIMIX.
• “Checklist: copy configuration” on page 649 directs you through the correct order
of steps for copying a configuration and completing the configuration.
• “Copying configuration procedure” on page 653 documents how to use the Copy
Configuration Data (CPYCFGDTA) command.
Supported scenarios
The Copy Configuration Data (CPYCFGDTA) command supports copying
configuration data from one library to another library on the same system. After MIMIX
is installed, you can use the CPYCFGDTA command.
The supported scenarios are as follows:
Checklist: copy configuration
7. Verify the data group definitions created have the correct job descriptions. Verify
that the values of parameters for job descriptions are what you want to use.
MIMIX provides default job descriptions that are tailored for their specific tasks.
Note: You may have multiple data groups created that you no longer need.
Consider whether or not you can combine information from multiple data
groups into one data group. For example, it may be simpler to have both
database files and objects for an application be controlled by one data
group.
8. Verify that the options which control data group file entries are set appropriately.
a. For data group definitions, ensure that the values for file entry options (FEOPT)
are what you want as defaults for the data group.
b. Check the file entry options specified in each data group file entry. Any file
entry options (FEOPT) specified in a data group file entry will override the
default FEOPT values specified in the data group definition. You may need to
modify individual data group file entries.
9. Check the data group entries for each data group. Ensure that all of the files and
objects that you need to replicate are represented by entries for the data group.
Be certain that you have checked the data group entries for your critical files and
objects. Use the procedures in the MIMIX Operations book to verify your
configuration.
10. Check how the apply sessions are mapped for data group file entries. You may
need to adjust the apply sessions.
11. Use Table 120 to create entries for any additional database files or objects that
you need to add to the data group.
Table 120. How to configure data group entries for the preferred configuration.
Library-based objects:
1. Create object entries using “Creating data group object entries” on page 271.
2. After creating object entries, load file entries for LF and PF (source and data) *FILE
objects using “Loading file entries from a data group’s object entries” on page 276.
Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you
should still create file entries for PF source files to ensure that legacy cooperative
processing can be used.
3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ
objects that are journaled to a user journal. Use “Loading object tracking entries” on
page 287.
Related topics: “Identifying library-based objects for replication” on page 100;
“Identifying logical and physical files for replication” on page 106; “Identifying data
areas and data queues for replication” on page 113.
IFS objects:
1. Create IFS entries using “Creating data group IFS entries” on page 284.
2. After creating IFS entries, load IFS tracking entries for IFS objects that are journaled
to a user journal. Use “Loading IFS tracking entries” on page 286.
Related topic: “Identifying IFS objects for replication” on page 116.
DLOs:
Create DLO entries using “Creating data group DLO entries” on page 297.
Related topic: “Identifying DLOs for replication” on page 122.
12. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
“Resolving auditing problems” on page 679 and “Interpreting results for
configuration data - #DGFE audit” on page 687.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
13. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
“Setting data group auditing values manually” on page 309. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
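A hedged sketch of such a request follows; the data group name is illustrative, any
remaining parameters are left at their defaults, and the keywords should be confirmed
against the procedure referenced above:
SETDGAUD DGDFN(MYDGDFN SYS1 SYS2)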
14. Verify that system-level communications are configured correctly.
a. If you are using SNA as a transfer protocol, verify the MIMIX mode and that
the communications entries are added to the MIMIXSBS subsystem.
b. If you are using TCP as a transfer protocol, verify that the MIMIX TCP server is
started on each system (on each "side" of the transfer definition). You can use
the WRKACTJOB command for this. Look for a job under the MIMIXSBS
subsystem with a function of LV-SERVER.
c. Use the Verify Communications Link (VFYCMNLNK) command to ensure that
a MIMIX installation on one system can communicate with a MIMIX installation
on another system. Refer to topic “Verifying the communications link for a data
group” on page 197.
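A hedged sketch of such a verification follows; the data group name is illustrative,
and the parameter keyword should be confirmed on the command prompt:
VFYCMNLNK DGDFN(MYDGDFN SYS1 SYS2)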
15. Ensure that there are no users on the system that will be the source for replication
for the rest of this procedure. Do not allow users onto the source system until you
have successfully completed the last step of this procedure.
16. Start journaling using the following procedures as needed for your configuration.
Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands to
start journaling, as in the sketch following this list.
• For user journal replication, use “Journaling for physical files” on page 347 to
start journaling on both source and target systems.
• For IFS objects, configured for user journal replication, use “Journaling for IFS
objects” on page 350.
• For data areas or data queues configured for user journal replication, use
“Journaling for data areas and data queues” on page 354.
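For example, a start-journaling request for IFS tracking entries when the objects do
not yet exist on the target might be sketched as follows; the data group name is
illustrative, and parameter keywords other than JRNSYS should be verified against
the command:
STRJRNIFSE DGDFN(MYDGDFN SYS1 SYS2) JRNSYS(*SRC)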
17. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 508
includes instructions for how to establish a synchronization point and identifies the
options available for synchronizing.
18. Start the system managers using topic “Starting the system and journal managers”
on page 306.
19. Start the data group using “Starting data groups for the first time” on page 315.
Copying configuration procedure
Configuring Intra communications
The MIMIX set of products supports a unique configuration called Intra. Intra is a
special configuration that allows the MIMIX products to function fully within a single-
system environment. Intra support replicates database and object changes to other
libraries on the same system by using system facilities that allow for communications
to be routed back to the same system. This provides an excellent way to have a test
environment on a single machine that is similar to a multiple-system configuration.
The Intra environment can also be used to perform backups while the system remains
active. You can have multiple Intra configurations in a MIMIX installation.
Intra is supported only in environments licensed for MIMIX Enterprise.
In an Intra configuration, the product is installed into two libraries on the same system
and configured in a special way. An Intra configuration uses these libraries to replicate
data to additional disk storage on the same system. The second library in effect
becomes a “backup” library.
By using an Intra configuration you can reduce or eliminate your downtime for routine
operations such as performing daily and weekly backups. When replicating changes
to another library, you can suspend the application of the replicated changes. This
enables you to concurrently back up the copied library to tape while your application
remains active. When the backup completes, you can resume operations that apply
replicated changes to the "backup" library.
An Intra configuration enables you to have a "live" copy of data or objects that can be
used to offload queries and report generations. You can also use an Intra
configuration as a test environment prior to installing MIMIX on another system or
connecting your applications to another system.
Because both libraries exist on the same system, an Intra configuration does not
provide protection from disaster.
Database replication within an Intra configuration requires that the source and target
files either have different names or reside in different libraries. Similarly, objects
cannot be replicated to the same named object in the same named library, folders, or
directory.
654
Manually configuring Intra using TCP
An Intra configuration requires additional product libraries that are named by adding
the letter ‘I’ to the end of the library name. For example, a library named ABC would
need to be named ABCI, ABCII, ABCIII, and so on, in order to be valid for an Intra
configuration.
Important! We recommend that these steps be performed by MIMIX Services
personnel. Also, the system name for Intra must be ‘INTRAnnn’ where ‘INTRA’ is
required for the first part of the system name and the second part of the name,
nnn, can be up to 3 valid system definition characters.
In this example, the MIMIX library is the management system and the MIMIXI library
is the network system. If you manually configure the communications necessary for
Intra, consider the MIMIX library as the local system and the MIMIXI library as the
remote system. You may already have a management system defined and need to
add an Intra network system. All the configuration should be done in the MIMIX library
on the management system.
Note: If you have multiple network systems, you need to configure your transfer
definitions to have the same name with system1 and system2 being different.
For more information, see “Multiple network system considerations” on
page 172.
To add an entry in the host name table, use the Configure TCP/IP (CFGTCP)
command to access the Configure TCP/IP menu.
Select option 10 (Work with TCP/IP Host Table Entries) from the menu. From the
Work with TCP/IP Host Table display, type a 2 (Change) next to the LOOPBACK
entry and add 'INTRA' to that entry.
For this example, the host name of the management system is Source and the host
name for the network or target system is Intra.
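The same host table change can also be made from the command line, sketched
here under the assumption that the LOOPBACK entry is at the usual loopback
address and has no other names assigned:
CHGTCPHTE INTNETADR('127.0.0.1') HOSTNAME((LOOPBACK) (INTRA))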
1. Create the system definitions for the product libraries used for Intra as follows:
a. For the MIMIX library (local system) enter the following command:
MIMIX/CRTSYSDFN SYSDFN(source) TYPE(*MGT) TEXT(‘management
system’)
Note: You may have already configured this system.
b. For the MIMIXI library (remote system), use the following command:
MIMIX/CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT(‘network
system’) PRDLIB(MIMIXI)
2. Create the transfer definition between the two product libraries with the following
command. Note that the values for PORT1 and PORT2 must be unique.
MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE INTRA) HOST1(SOURCE)
HOST2(INTRA) PORT1(55501) PORT2(55502) MNGAJE(*YES)
3. Start the server for the management system (source) by entering the following
command:
MIMIX/STRSVR HOST(SOURCE) PORT(55501)
4. Start the server for the network system (Intra) by entering the following command:
MIMIXI/STRSVR HOST(INTRA) PORT(55502)
5. Start the system managers from the management system by entering the
655
Configuring Intra communications
following command:
MIMIX/STRMMXMGR SYSDFN(*ALL)
Start the remaining managers normally.
Note: You will still need to configure journal definitions and data group definitions on
the management system.
Manually configuring Intra using SNA
MIMIX support for independent ASPs
MIMIX has always supported replication of library-based objects and IFS objects to
and from the system auxiliary storage pool (ASP 1) and basic storage pools (ASPs 2-
32). Now, MIMIX also supports replication of library-based objects and IFS objects,
including journaled IFS objects, data areas and data queues, located in independent
ASPs (33-255).
The system ASP and basic ASPs are collectively known as SYSBAS. Figure 34
shows that MIMIX supports replication to and from SYSBAS and to and from
independent ASPs. Figure 35 shows that MIMIX also supports replication from
SYSBAS to an independent ASP and from an independent ASP to SYSBAS.
Figure 34. MIMIX supports replication to and from an independent ASP as well as standard
replication to and from SYSBAS (the system ASP and basic ASPs).
Figure 35. MIMIX also supports replication between SYSBAS and an independent ASP.
Restrictions: There are several permanent and temporary restrictions that pertain to
replication when an independent ASP is included in the MIMIX configuration. See
“Requirements for replicating from independent ASPs” on page 662 and “Limitations
and restrictions for independent ASP support” on page 662.
One type of user ASP is the basic ASP. Data that resides in a basic ASP is always
accessible whenever the server is running. Basic ASPs are identified as ASPs 2
through 32. Attributes, such as those for spooled files, authorization, and ownership
of an object, stored in a basic ASP reside in the system ASP. When storage for a
basic ASP is filled, the data overflows into the system ASP.
Collectively, the system ASP and the basic ASPs are called SYSBAS.
Another type of user ASP is the independent ASP. Identified by device name and
numbered 33 through 255, an independent ASP can be made available or
unavailable to the server without restarting the system. Unlike basic ASPs, data in an
independent ASP cannot overflow into the system ASP. Independent ASPs are
configured using iSeries Navigator.
Note: MIMIX does not support UDFS independent ASPs. UDFS independent ASPs contain
only user-defined file systems and cannot be a member of an ASP group unless they are
converted to a primary or secondary independent ASP.
Auxiliary storage pool concepts at a glance
While this processing occurs, the ASP group is in an active state and recovery steps
are performed. The primary independent ASP is synchronized with any secondary
independent ASPs in the ASP group, and journaled objects are synchronized with
their associated journal.
While being varied on, several server jobs are started in the QSYSWRK subsystem to
support the independent ASP. To ensure that their names remain unique on the
server, server jobs that service the independent ASP are given their own job name
when the independent ASP is made available.
Once the independent ASP is made available, it is ready to use. Completion message
CPC2605 (vary on completed for device name) is sent to the history log.
Requirements for replicating from independent ASPs
The following requirements must be met before MIMIX can support your independent
ASP environment:
• License Program 5722-SS1 option 12 (Host Server) must be installed in order for
MIMIX to properly replicate objects in an independent ASP on the source and
target systems.
• Any PTFs for IBM i that are identified as being required need to be installed on
both the source and target systems. Log in to Support Central and check the
Technical Documents page for a list of IBM i PTFs that may be required.
• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must
be installed into SYSBAS.
• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must
be installed into SYSBAS. These libraries cannot exist in an independent ASP.
• Any *MSGQ libraries, *JOBD libraries, and *OUTFILE libraries specified on MIMIX
commands must reside in SYSBAS.
• For successful replication, ASP devices in ASP groups that are configured in data
group definitions must be made available (varied on). Objects in independent
ASPs attached to the source system cannot be journaled if the device is not
available. Objects cannot be applied to an independent ASP on the target system
if the device is not available.
• Planned switchovers of data groups that include an ASP group must take place
while the ASP devices on both the source and target systems are available. If the
ASP device for the data group on either the source or target system is unavailable
at the time the planned switchover is attempted, the switchover will not complete.
• To support an unplanned switch (failover), the independent ASP device on the
backup system (which will become the temporary production system) must be
available in order for the failover to complete successfully.
• In order for MIMIX to access objects located in an independent ASP, do one of the
following on the Synchronize Object (SYNCOBJ) command:
– Specify the data group definition.
– If no data group is specified, you must specify values for the System 1 ASP
group or device, System 2 ASP group or device, and System 2 ASP device
number parameters, as sketched below.
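As an illustration of the second case, a request might be sketched as follows; the
object, system, and ASP names are invented, and the ASP-related parameter
keywords are assumptions to verify against the SYNCOBJ command prompt:
SYNCOBJ OBJ(APPLIB/ORDERS) OBJTYPE(*FILE) SYS2(BACKUP) +
        ASPGRP1(IASP33) ASPDEV2(IASP33)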
Also be aware of the following temporary restrictions:
• MIMIX does not perform validity checking to determine if the ASP group specified
in the data group definition actually exists on the systems. This may cause error
conditions when running commands.
• Any monitors configured for use with MIMIX must specify the ASP group. Monitors
of type *JRN or *MSGQ that watch for events in an independent ASP must specify
the name of the ASP group where the journal or message queue exists. This is
done with the ASPGRP parameter of the CRTMONOBJ command.
It is recommended that objects residing in SYSBAS and objects residing in an
independent ASP be separated into different data groups. This precaution ensures
that the data group will start and that objects residing in SYSBAS will be replicated
when the independent ASP is not available.
Note: To avoid replicating an object by more than one data group, carefully plan
what generic library names you use when configuring data group object
entries in an environment that includes independent ASPs. Make every
attempt to avoid replicating both SYSBAS data and independent ASP data for
objects within the same data group. See the example in “Configuring library-
based objects when using independent ASPs” on page 664.
When object LIBASP/XYZ exists in both independent ASPs and matches the generic
data group object entry defined in each data group, both data groups replicate the
corresponding object. This is considered normal behavior for replication between
independent ASPs, as shown in Figure 37.
However, in this example, if SYSBAS contains an object that matches the generic
data group object entry defined for each data group, the same object is replicated by
both data groups. Figure 37 shows that object LIBBAS/XYZ meets the criteria for
replication by both data groups, which is not desirable.
Figure 37. Object XYZ in library LIBBAS is replicated by both data groups APP1 and APP2
because the data groups contain the same generic data group object entry. As a result, this
presents a problem if you need to perform a switch.
When a MIMIX command runs the SETASPGRP command during processing, MIMIX
resets the user portion of the library list and the current library in the library list to their
initial values. The system portion of the library list is not restored to its initial value.
Figure 38, Figure 39, and Figure 40 show how the system portion of the library list is
affected on the Display Library List (DSPLIBL) display when the SETASPGRP
command is run.
Figure 38. Before a MIMIX command runs. The library list contains three independent ASP
libraries, including a library in independent ASP WILLOW in the system portion of the library
list.
Figure 39. During the running of a MIMIX command. The independent ASP libraries are
removed from the library list.
Figure 40. After the MIMIX command runs. The library in independent ASP WILLOW in the
system portion of the library list is removed. The libraries in independent ASP OAK in the user
portion of the library list and the current library are restored.
Advanced auditing topics
MIMIX provides the capability to create user-defined rules and integrate the status of
those rules into status reporting for MIMIX. This can be useful to perform specialized
checks of your environment that augment your regularly scheduled audits. This
appendix describes how to create user-defined rules and notifications.
This appendix also describes advanced topics associated with auditing. Auditing and
the policies that control auditing are described fully in the MIMIX Operations book.
Topics in this appendix include:
• “What are rules and how they are used by auditing” on page 669 defines the
differences between MIMIX rules used for auditing and user-defined rules.
• “Using a different job scheduler for audits” on page 670 identifies what is needed if
you choose not to use the automatic auditing job scheduling support in MIMIX.
• “Considerations for rules” on page 671 identifies considerations for using the Run
Rule command and replacement variables with user-defined rules.
• “Creating user-generated notifications” on page 673 describes how to create a
notification that can be used with custom automation.
• “Running rules and rule groups manually” on page 675 describes how to use the
Run Rule and Run Rule Group commands.
• “Running user rules and rule groups programmatically” on page 676 describes
running rules when initiated by a job scheduling task.
• “MIMIX rule groups” on page 677 lists the pre-configured sets of MIMIX rules that
are shipped with MIMIX.
What are rules and how they are used by auditing
Using a different job scheduler for audits
If you do not want to use the job scheduling capabilities within MIMIX to schedule
audits, you need to ensure that all of the MIMIX rules are scheduled to run on a
regular basis using your preferred scheduling mechanism.
Note: Only scheduled audits can be run using a different job scheduler. Scheduled
audits select all configured objects associated with the class of the audit.
Prioritized audits cannot be run with a different job scheduler. Prioritized audits
select only those replicated objects that are eligible for auditing based on their
eligibility category and category frequency.
It is recommended that you do the following:
• Schedule the audits to run from the management system.
• Schedule all audits to run every day in the same order described for shipped
default schedules in the MIMIX Operations book.
• Specify the same Run Rule command that is displayed when you use prompt
option 9 (Run rule) on the Work with Audits display.
• Address starting and ending the scheduling jobs in your operations at points
where you need to start or end MIMIX. The Start MIMIX (STRMMX) and End
MIMIX (ENDMMX) commands only address audits scheduled by MIMIX.
• Put appropriate checks in place to prevent scheduled jobs from starting when
MIMIX would otherwise need to be ended (such as during installation).
• Disable MIMIX scheduling for all audits. For installations running service pack
7.1.12.00 or higher, specify *DISABLED for the State element in the Audit
schedule policy of every audit. For installations running earlier service pack levels,
specify *NONE for the Frequency element in the Audit schedule policy of every
audit.
Considerations for rules
Rule-related messages are marked with a Process value of *NOTIFY to facilitate the
filtering of rules- and notification-related messages.
Creating user-generated notifications
Example of a user-generated notification
A MIMIX administrator wants to see a notification reflected in MIMIX status when TCP
communications fail. A message queue monitor on a specific system can check for a
message indicating a communications failure and issue a notification when the
message occurs.
Note: The administrator in this example must use care when determining where to
create the monitor. A monitor runs only on a single system but the notification
it will generate may be available on multiple systems. The role of the system
(management or network) on which the monitor runs and the values specified
for the Add Notification Entry command in the monitor’s event program
determine where the notification will be available. (For details, see the DGDFN
information in “Creating user-generated notifications” on page 673.) Because
the communications problem being monitored may also prevent the
notification from reaching the appropriate systems, the administrator chose to
create this monitor on multiple systems in the installation.
The following command creates a message queue monitor named COMPROB to
check for message LVE0113 (TCP communications request failed with error &1) in
the MIMIX message queue in the MIMIXQGPL library:
CRTMONOBJ MONITOR(COMPROB) EVTCLS(*MSGQ)
EVTPGM(user_library/COMPROB) MSGQ(MIMIXQGPL/MIMIX)
MSGID(LVE0113) AUTOSTR(*YES) TEXT('Issue notification
entry for TCP communication problem')
The event program includes the instruction to issue the following command, which will
add a notification to MIMIX in the specified installation library:
installation_library/ADDNFYE TEXT('comm failure')
SEVERITY(*ERROR) DGDFN(*NONE) DETAIL('TCP communications
failed. Investigation needed.')
Once the monitor is enabled and started, the event program COMPROB will run when
the message LVE0113 is detected. For additional information about creating monitors
and writing event programs, see the Using MIMIX Monitor book.
Running rules and rule groups manually
Running rules
This procedure outlines the steps required to run a rule.
Typically, this procedure should be performed from the system and installation where
you want the rule to run. Do the following:
1. On a command line, type RUNRULE and press F4 (Prompt). The Run Rule
(RUNRULE) display appears.
2. At the Rule name prompt, specify the rule names for the rules you want to run.
You can specify up to 100 rules to run from the command.
3. At the Data group definition prompt, specify the value you want. The default is
*NONE, but you can specify that rules be run against an individual data group or all
data groups.
4. Press F10 for additional parameters.
5. At the Notification severity prompt, specify the severity level to assign to the
notification that is sent if the rule ends in error. This value overrides values
specified in policies or in the rule itself.
For a MIMIX rule, the default value *DFT is the same as the value *POLICY,
where the Notification severity policy in effect determines the severity of the
notification. For a user rule, *DFT is the same as the value *RULE, where the rule
determines the severity of the notification.
6. At the Notification on success prompt, specify whether you want the rule to
generate a notification when the specified rule ends successfully. This value
overrides values specified in policies or in the rule itself.
For a MIMIX rule, the default value *DFT is the same as *POLICY, where the Audit
notify on success policy in effect determines whether a notification is sent. If the
policy is set for both the installation and for the data group, the data group value is
used. For a user rule, *DFT is the same as the value *RULE, where the value
specified in the rule determines whether a notification is sent.
7. At the Use run rule on system policy prompt, specify whether the rule should use
the policy in effect when run. This value is only used when a data group is
selected. The default value *NO will run the rule on the local system.
8. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request. The default value, MXAUDIT,
submits the request using the default job description, MXAUDIT.
9. To run the rule, press Enter.
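The equivalent request can also be typed directly on a command line. The following
sketch assumes that the parameter keywords correspond to the prompts above
(RULE for Rule name, DGDFN for Data group definition, JOBD for Job description);
the rule and data group names are illustrative, and the actual keywords can be
confirmed by prompting RUNRULE with F4:
RUNRULE RULE(#FILDTA) DGDFN(PAYROLL SYSA SYSB)
JOBD(MXAUDIT) /* illustrative names and keywords */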
MIMIX rule groups
#ALL  Set of all shipped DLO, file, IFS, and object rules. Includes: #DGFE,
#DLOATR, #FILATR, #FILATRMBR, #FILDTA, #IFSATR, #MBRRCDCNT, #OBJATR
Interpreting audit results
Audits use commands that compare and synchronize data. The results of the audits
are placed in output files associated with the commands. The following topics provide
supporting information for interpreting data returned in the output files.
• “Resolving auditing problems” on page 679 describes how to check the status of
an audit and resolve any problems that occur.
• “Checking the job log of an audit” on page 684 describes how to use an audit’s job
log to determine why an audit failed.
• “When the difference is “not found”” on page 686 provides additional
considerations for interpreting a result of “not found” in priority audits.
• “Interpreting results for configuration data - #DGFE audit” on page 687 describes
the #DGFE audit which verifies the configuration data defined to your
configuration using the Check Data Group File Entries (CHKDGFE) command.
• “Interpreting results of audits for record counts and file data” on page 689
describes the audits and commands that compare file data or record counts.
• “Interpreting results of audits that compare attributes” on page 692 describes the
Compare Attributes commands and their results.
Resolving auditing problems
You may also need to view the output file or the job log, which are only available from
the system where the audits ran. In most cases, this is the management system.
The Work with Audits display lists each audit and its status. Available options
include 5=Display, 6=Print, 7=History, 8=Recoveries, 9=Run rule, 10=End,
14=Audited objects, and 46=Mark recovered.
Table 122. Addressing problems with audit runtime status
Status Action
*FAILED The rule called by the audit failed or ended abnormally. If the failed audit selected
objects by priority and its timeframe for starting has not passed, the audit will
automatically attempt to run again.
• To run the rule for the audit again, select option 9 (Run rule). This will check all objects
regardless of how the failed audit selected objects to audit.
• To check the job log, see “Checking the job log of an audit” on page 684.
*ENDED The #FILDTA audit or the #MBRRCDCNT audit ended either because of a policy in effect
or the data group status. The recovery varies according to why the audit ended.
To determine why the audit ended:
1. Select option 5 (Display) for the audit and press Enter.
2. Check the value indicated in the Reason for ending field. Then perform the appropriate
recovery, below.
If the reason for ending is *DGINACT, the data group status became inactive while the
audit was in progress.
1. From the command line, type WRKDG and press Enter.
• If all processes for the data group are active, skip to Step 2.
• If processes for the data group show a red I, L, or P in the Source and Target columns,
use option 9 (Start DG). Note that if the data group was inactive for some time, it may
have a threshold condition after being started. Wait for the threshold condition to clear
before continuing with Step 2.
2. When the data group is active and does not have a threshold condition, return to the
Work with Audits display and use option 9 (Run rule) to run the audit. This will check all
objects regardless of how the ended audit had selected objects to audit.
If the reason for ending is *MAXRUNTIM, the Maximum rule runtime (MAXRULERUN)
policy in effect was exceeded. Do one of the following:
• Wait for the next priority-based audit to run according to its timeframe for starting.
• Change the value specified for the MAXRULERUN policy using “Setting policies -
general” on page 30. Then, run the audit again, either by using option 9 (Run rule) or by
waiting for the next scheduled audit or priority-based audit to run.
If the reason for ending is *THRESHOLD, the DB apply threshold action (DBAPYTACT)
policy in effect caused the audit to end.
1. Determine if the data group still has a threshold condition. From the command line, type
WRKDG and press Enter.
2. If the data group shows a turquoise T in the Target DB column, the threshold exceeded
condition is still present. Wait for the threshold to resolve. If the threshold persists for an
extended time, you may need to contact your MIMIX administrator.
3. When the data group no longer has a threshold condition, return to the Work with Audits
display and use option 9 (Run rule) to run the audit. (This will check all objects.)
*DIFFNORCY The comparison performed by the audit detected differences. No recovery actions were
attempted because of a policy in effect when the audit ran. Either the Automatic audit
recovery policy is disabled or the Action for running audits policy prevented recovery
actions while the data group was inactive or had a replication process which exceeded its
threshold.
If policy values were not changed since the audit ran, checking the current settings will
indicate which policy was the cause. Use option 36 to check data group level policies and
F16 to check installation level policies.
• If the Automatic audit recovery policy was disabled, the differences must be manually
resolved.
• If the Action for running audits policy was the cause, either manually resolve the
differences or correct any problems with the data group status. You may need to start
the data group and wait for threshold conditions to clear. Then run the audit again.
To manually resolve differences do the following:
1. Type 7 (History) next to the audit with *DIFFNORCY status and press Enter.
2. The Work with Audit History display appears with the most recent run of the audit at the
top of the list. Type 8 (Display difference details) next to an audit to see its results in the
output file.
3. Check the Difference Indicator column. All differences shown for an audit with
*DIFFNORCY status need to be manually resolved. For more information about the
possible values, see “Interpreting audit results” on page 678.
To have MIMIX always attempt to recover differences on subsequent audits, change the
value of the automatic audit recovery policy.
*NOTRCVD The comparison performed by the audit detected differences. Either some attempts to
recover differences failed, or the audit job ended before recoveries could be attempted on
all differences.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits,
such as #FILDTA, may correct the detected differences.
Do the following:
1. Type 7 (History) next to the audit with *NOTRCVD status and press Enter.
2. The Work with Audit History display appears with the most recent run of the audit at the
top of the list. Type 8 (Display difference details) next to an audit to see its results in the
output file.
3. Check the Difference Indicator column. Any objects with a value of *RCYFAILED must
be manually resolved. For any objects with values other than *RCYFAILED or
*RECOVERED, run the audit again. For more information about the possible values,
see “Interpreting audit results” on page 678.
*NOTRUN The audit request was submitted but did not run. Either the audit was prevented from
running by the Action for running audits policy in effect, or the data group was inactive
when a #FILDTA or #MBRRCDCNT audit was requested.
Note: An audit with *NOTRUN status may or may not be considered a problem in your environment.
The value of the Audit severity policy in effect determines whether the audit status *NOTRUN
has been assigned an error, warning, or informational severity. The severity determines
whether the *NOTRUN status rolls up into the overall status of MIMIX and affects the order in
which audits are displayed in interfaces.
A status of *NOTRUN may be expected during periods of peak activity or when data group
processes have been ended intentionally. However, if the audit is frequently not run due to
the Action for running audits policy, action may be needed to resolve the cause of the
problem.
To resolve this status:
1. From the command line, type WRKDG and press Enter.
2. Check the data group status for inactive or partially active processes and for processes
with a threshold condition.
3. When the data group no longer has a threshold condition and all processes are active,
return to the Work with Audits display and use option 9 (Run rule) to run the audit. (This
will check all objects.)
*IGNOBJ The audit ignored one or more objects because they were considered active or could not
be compared because of locks or authorizations that prevented access. All other selected
objects were compared and any detected differences were recovered.
Note: An audit with *IGNOBJ status may or may not be considered a problem in your environment.
The value of the Audit severity policy in effect determines whether the audit status *IGNOBJ
has been assigned a warning or informational severity. The severity determines whether the
*IGNOBJ status rolls up into the overall status of MIMIX, determines whether the ignored
objects are counted as differences, and affects the order in which audits are displayed in
interfaces.
To resolve this status:
1. From the Work with Audits display on the target system, type 7 (History) next to the audit
with *IGNOBJ status and press Enter.
2. The Work with Audit History display appears with the most recent run of the audit at the
top of the list. Type 8 (Display difference details) next to an audit to see its results in the
output file.
3. Check the Difference Indicator column to identify the objects with a status of *UN or *UA.
4. When the locks are released or replication activity for the object completes, do one of
the following:
• Wait for the next prioritized audit to run.
• Run the audit manually using option 9 (Run rule) from the Work with Audits display.
For more information about the values displayed in the audit results, see “Interpreting
results for configuration data - #DGFE audit” on page 687, “Interpreting results of
audits for record counts and file data” on page 689, and “Interpreting results of audits
that compare attributes” on page 692.
Checking the job log of an audit
An audit’s job log can provide more information about why an audit failed. If it still
exists, the job log is available on the system where the audit ran. Typically, this is the
management system.
You must display the notifications from an audit in order to view the job log. Do the
following:
1. From the Work with Audits display, type 7 (History) next to the audit and press
Enter.
2. The Work with Audit History display appears with the most recent run of the audit
at the top of the list.
3. Use option 12 (Display job) next to the audit you want and press Enter.
4. The Display Job menu opens. Select option 4 (Display spooled files). Then use
option 5 (Display) from the Display Job Spooled Files display.
5. Look for messages from the job log for the audit in question. Usually the most
recent messages are at the bottom of the display.
Message LVE3197 is issued when errors remain after an audit completes.
Message LVE3358 is issued when an audit fails. Check the job log for the
following messages, which indicate a communications problem (LVE3D5E,
LVE3D5F, or LVE3D60) or a problem with data group status (LVI3D5E,
LVI3D5F, or LVI3D60).
When the difference is “not found”
For audits that compare replicated data, a difference indicating the object was not
found requires additional explanation. This difference can be returned for these
audits:
• For the #FILDTA and #MBRRCDCNT audits, a value of *NF1 or *NF2 for the
difference indicator (DIFIND) indicates the object was not found on one of the
systems in the data group. The 1 and 2 in these values refer to the system as
identified in the three-part name of the data group.
• For the #FILATR, #FILATRMBR, #IFSATR, #OBJATR, and #DLOATR audits, a
not found condition is indicated by a value of *NOTFOUND in either the system 1
indicator (SYS1IND) or system 2 indicator (SYS2IND) fields. Typically, the DIFIND
field result is *NE.
Audits can report not found conditions for objects that have been deleted from the
source system. A not found condition is reported when a delete transaction is in
progress for an object eligible for selection when the audit runs. This is more likely to
occur when there are replication errors or backlogs, and when policy settings do not
prevent audits from comparing when a data group is inactive or in a threshold
condition.
A scheduled audit will not identify a not found condition for an object that does not
exist on either system because it selects existing objects based on whether they are
configured for replication by the data group. This is true regardless of whether the
audit is automatically submitted or run immediately.
Because a priority audit selects already replicated objects, it will not audit objects for
which a create transaction is in progress.
Prioritized audits will not identify a not found condition when the object is not found on
the target system because prioritized auditing selects objects based on the replicated
objects database. Only objects that have been replicated to the target system are
identified in the database.
Priority audits can be more likely to report not found conditions when replication errors
or backlogs exist.
Interpreting results for configuration data - #DGFE audit
Table 123. CHKDGFE - possible results and actions for resolving errors
*NODGFE No file entry exists. The object is identified by an object entry which
specifies COOPDB(*YES) but the file entry necessary for cooperative
processing is missing.
Create the DGFE or change the DGOBJE to COOPDB(*NO).
Note: Changing the object entry affects all objects using the object entry. If you
do not want all objects changed to this value, copy the existing DGOBJE
to a new, specific DGOBJE with the appropriate COOPDB value.
*EXTRADGFE An extra file entry exists. The object is identified by a file entry and an
object entry which specifies COOPDB(*NO). The file entry is extra when
cooperative processing is not used.
Delete the DGFE or change the DGOBJE to COOPDB(*YES).
Note: Changing the object entry affects all objects using the object entry. If you
do not want all objects changed to this value, copy the existing DGOBJE
to a new, specific DGOBJE with the appropriate COOPDB value.
*RCYFAILED Automatic audit recovery actions were attempted but failed to correct the
detected error.
Run the audit again.
The Option column of the report provides supplemental information about the
comparison. Possible values are:
*NONE - No options were specified on the comparison request.
*NOFILECHK - The comparison request included an option that prevented an
error from being reported when a file specified in a data group file entry does not
exist.
*DGFESYNC - The data group file entry was not synchronized between the
source and target systems. This may have been resolved by automatic recovery
actions for the audit.
One possible reason why actual configuration data in your environment may not
match what is defined to your configuration is that a file was deleted but the
associated data group file entries were left intact. Another reason is that a data group
file entry was specified with a member name, but a member is no longer defined to
that file. If you use the #DGFE audit with automatic scheduling and automatic audit
recovery enabled, these configuration problems can be automatically detected and
recovered for you. Table 124 provides examples of when various configuration errors
might occur.
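As an illustrative sketch, the underlying check can also be requested manually for a
single data group. The data group name here is hypothetical, and the output-related
parameters should be confirmed on the command prompt:
CHKDGFE DGDFN(PAYROLL SYSA SYSB) OUTPUT(*OUTFILE) /* illustrative */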
Interpreting results of audits for record counts and file data
Table 125. Possible values for Compare File Data (CMPFILDTA) output file field Difference
Indicator (DIFIND)
Values Description
*EQ (DATE) Member excluded from comparison because it was not changed or
restored after the timestamp specified for the CHGDATE
parameter.
*FF The file feature is not supported for comparison. Examples of file
features include materialized query tables.
*REP The file member is being processed for repair by another job
running the Compare File Data (CMPFILDTA) command.
*SW The source file is journaled but not to the journal specified in the
journal definition.
See “When the difference is “not found”” on page 686 for additional information.
Table 126. Possible values for Compare Record Count (CMPRCDCNT) output file field
Difference Indicator (DIFIND)
Values Description
*EQ Record counts match. No difference was detected within the record
counts compared. Global difference indicator.
*FF The file feature is not supported for comparison. Examples of file
features include materialized query tables.
*SW The source file is journaled but not to the journal specified in the
journal definition.
See “When the difference is “not found”” on page 686 for additional information.
Interpreting results of audits that compare attributes
Each audit that compares attributes does so by calling a Compare Attributes1
command and places the results in an output file. Each row in an output file for a
Compare Attributes command can contain either a summary record format or a
detailed record format. Each summary row identifies a compared object and includes
a prioritized object-level summary of whether differences were detected. Each detail
row identifies a specific attribute compared for an object and the comparison results.
The type of data included in the output file is determined by the report type specified
on the Compare Attributes command. The data included for each report type is as
follows:
• Difference reports (RPTTYPE(*DIF)) return information about detected
differences. Only summary rows for objects that had detected differences are
included. Detail rows for all compared attributes are included. Difference reports
are the default for the Compare Attributes commands.
• Full reports (RPTTYPE(*ALL)) return information about all objects and attributes
compared. For each object compared there is a summary row as well as a detail
row for each attribute compared. Full reports include both differences and objects
that are considered synchronized.
• Summary reports (RPTTYPE(*SUMMARY)) return only a summary row for each
object compared. Specific attributes compared are not included.
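For example, assuming a data group named PAYROLL defined between systems
SYSA and SYSB (the names are illustrative, and the object selection parameters are
omitted for brevity), a full report could be requested as follows:
CMPFILA DGDFN(PAYROLL SYSA SYSB) RPTTYPE(*ALL)
OUTPUT(*OUTFILE) /* illustrative names */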
For difference and full reports of compare attribute commands, several of the attribute
selectors return an indicator (*INDONLY) rather than an actual value. Attributes that
return indicators are usually variable in length, so an indicator is returned to conserve
space. In these instances, the attributes are checked thoroughly, but the report only
contains an indication of whether the attribute is synchronized.
For example, an authorization list can contain a variable number of entries. When
comparing authorization lists, the CMPOBJA command will first determine if both lists
have the same number of entries. If the same number of entries exist, it will then
determine whether both lists contain the same entries. If differences in the number of
entries are found or if the entries within the authorization list are not equal, the report
will indicate that differences are detected. The report will not provide the list of
entries—it will only indicate that they are not equal in terms of count or content.
You can see the full set of fields in the output file by viewing it from the native user
interface.
1. The Compare Attribute commands are: Compare File Attributes (CMPFILA), Compare
Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO
Attributes (CMPDLOA).
When the output file is viewed from the native user interface, the summary row is the
first record for each compared object and is indicated by an asterisk (*) in the
Compared Attribute (CMPATR) field. The summary row’s Difference Indicator value is
the prioritized summary of the status of all attributes checked for the object. When
included, detail rows appear below the summary row for the object compared and
show the actual result for the attributes compared.
The Priority column in Table 127 indicates the order of precedence MIMIX uses
when determining the prioritized summary value for the compared object.
Table 127. Possible values for output file field Difference Indicator (DIFIND)
Value   Description                                                      Priority
*EC     The values are equal based on the MIMIX configuration            5
        settings. The actual values may or may not be equal.
*EM     Established mapping for file identifier (FID). Attribute         6
        indicator only for CMPIFSA *FID attribute.
*NA     The values are not compared. The actual values may or may        5
        not be equal.
*NC     The values are not equal based on the MIMIX configuration        3
        settings. The actual values may or may not be equal.
*NS     Indicates that the attribute is not supported on one of the      5
        systems. Will not cause a global not equal condition.
*NM     Not mapping consistently for file identifier (FID). Attribute    2
        indicator only for CMPIFSA *FID attribute.
For most attributes, when the output file is viewed from the native user interface and
a detail row contains blanks in either the System 1 Indicator or System 2 Indicator
field, MIMIX determines the value of the Difference Indicator field according to Table
128. For example, if the System 1 Indicator is *NOTFOUND and the System 2
Indicator is blank (Object found), the resultant Difference Indicator is *NE.
Table 128. Difference Indicator values that are derived from System Indicator values.
Each cell shows the resulting Difference Indicator value.

                                       System 1 Indicator
System 2 Indicator   Object found     *NOTCMPD    *NOTFOUND   *NOTSPT     *RTVFAILED  *DAMAGED
                     (blank value)
Object found         *EQ / *NE /      *NA         *NE         *NS         *UN         *NE
(blank value)        *UA / *EC / *NC
*NOTCMPD             *NA              *NA         *NE         *NS         *UN         *NE
*NOTFOUND            *NE / *UA        *NE / *UA   *EQ         *NE / *UA   *NE / *UA   *NE
*NOTSPT              *NS              *NS         *NE         *NS         *UN         *NE
*RTVFAILED           *UN              *UN         *NE         *UN         *UN         *NE
*DAMAGED             *NE              *NE         *NE         *NE         *NE         *NE
Table 129. Possible values for output file fields SYS1IND and SYS2IND
Value        Description                                                 Priority 1
*NOTCMPD     Attribute not compared. Due to MIMIX configuration          N/A 2
             settings, this attribute cannot be compared.
*NOTSPT      Attribute not supported. Not all attributes are supported   N/A 2
             on all IBM i releases. This is the value that is used to
             indicate an unsupported attribute has been specified.
*RTVFAILED   Unable to retrieve the attributes of the object. Reason     4
             for failure may be a lock condition.
1. The priority indicates the order of precedence MIMIX uses when setting the system indicator fields in the summary
record.
2. This value is not used in determining the priority of summary level records.
For comparisons which include a data group, the Data Source (DTASRC) field
identifies which system is configured as the source for replication.
Attributes compared and expected results - #FILATR, #FILATRMBR audits
The Compare File Attribute (CMPFILA) command supports comparisons at the file
and member level. Most of the attributes supported are for file-level comparisons. The
#FILATR audit and the #FILATRMBR audit each invoke the CMPFILA command for
the comparison phase of the audit.
Some attributes are common file attributes such as owner, authority, and creation
date. Most of the attributes, however, are file-specific attributes. Examples of file-
specific attributes include triggers, constraints, database relationships, and journaling
information.
The Difference Indicator (DIFIND) returned after comparing file attributes may depend
on whether the file is defined by file entries or object entries. For instance, an
attribute could be equal (*EC) to the database configuration but not equal (*NC) to
the object configuration. See “What attribute differences were detected” on page 692.
Table 130 lists the attributes that can be compared and the value shown in the
Compared Attribute (CMPATR) field in the output file. The Returned Values column
lists the values you can expect in the System1 Value (SYS1VAL) and System 2 Value
(SYS2VAL) columns as a result of running the comparison.
Table 130. Compare File Attributes (CMPFILA) attributes
*ALWOPS    Allow operations    Group which checks attributes *ALWDLT, *ALWRD,
*ALWUPD, *ALWWRT
*AUT       File authorities    Group which checks attributes *AUTL, *PGP,
*PRVAUTIND, *PUBAUTIND
*EXPDATE1  Expiration date for member    Blank for *NONE or date in CYYMMDD
format, where C equals the century. Value 0 is 19nn and 1 is 20nn.
*EXTENDED  Pre-determined, extended set    Valid only for Comparison level of
*FILE, this group compares the basic set of attributes (*BASIC) plus an extended
set of attributes. The following attributes are compared: *ACCPTH, *AUT (group),
*CCSID, *CST (group), *CURRCDS, *DBR (group), *MAXKEYL, *MAXMBRS, *MAXRCDL,
*NBRMBR, *OBJATR, *OWNER, *PFSIZE (group), *RCDFMT, *REUSEDLT, *SELOMT,
*SQLTYP, *TEXT, and *TRIGGER (group).
*JOURNAL   Journal attributes    Group which checks *JOURNALED, *JRN, *JRNLIB,
*JRNIMG, *JRNOMIT. Results are described in “Comparison results for journal status
and other journal attributes” on page 715.
*LONGNAME  SQL long name    Long SQL name (128 character value)
*PFSIZE    File size attributes    Group which checks *CURRCDS, *INCRCDS,
*MAXINC, *NBRDLTRCD, *NBRRCDS
Attributes compared and expected results - #OBJATR audit
The #OBJATR audit calls the Compare Object Attributes (CMPOBJA) command and
places the results in an output file. Table 131 lists the attributes that can be compared
by the CMPOBJA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The command supports attributes that are common
among most library-based objects as well as extended attributes which are unique to
specific object types, such as subsystem descriptions, user profiles, and data areas.
The Returned Values column lists the values you can expect in the System1 Value
(SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the
compare.
Table 131. Compare Object Attributes (CMPOBJA) attributes
*ATTNPGM2  Attention key handling program. Valid for user profiles only.
*SYSVAL, *NONE, *ASSIST, attention program name
*CRTAUT2  Authority given to users who do not have specific authority to the
object. Valid for libraries only.    *SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE
*CRTOBJAUD2  Auditing value for objects created in this library. Valid for
libraries only.    *SYSVAL, *NONE, *USRPRF, *CHANGE, *ALL
*DTAARAEXT  Data area extended attributes    Group which checks *DECPOS,
*LENGTH, *TYPE, *VALUE
*EXTENDED  Pre-determined, extended set    Group which compares the basic set
of attributes (*BASIC) plus an extended set of attributes. The following attributes
are compared: *AUT, *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR.
*INFSTS  Information status    *OK (No errors occurred), *RTVFAILED (No
information returned - insufficient authority or object is locked), *DAMAGED
(Object is damaged or partially damaged)
*JOBDEXT  Job description extended attributes    Group which checks *DDMCNV,
*JOBQ, *JOBQLIB, *JOBQPRI, *LIBLIND, *LOGOUTPUT, *OUTQ, *OUTQLIB, *OUTQPRI,
*PRTDEV
*JOBQEXT  Job queue extended attributes    Group which checks *AUTCHK,
*JOBQSBS, *JOBQSTS, *OPRCTL
*MAXSTG7  Maximum allowed storage. Valid for user profiles only. Not compared
for QSECOFR or QTCM user profiles.    Numeric value, *NOMAX (2,147,483,647KB for
IBM i 7.1 and earlier releases, 9,223,372,036,854,775,807KB for IBM i 7.2 and
higher releases)
*PRFPWDIND  User profile password indicator    See “Comparison results for user
profile password (*PRFPWDIND)” on page 725 for details.
*USREXPDAT  User expiration date. Valid for user profiles only. This attribute
is only available on systems running IBM i 7.1 and higher.    Date (in job format
of the job running CMPOBJA), *NONE, *USREXPITV
*USREXPITV  User expiration interval. Valid for user profiles only. This
attribute is only available on systems running IBM i 7.1 and higher.    1-366
when the user profile specifies USREXPDATE(*USREXPITV), otherwise 0 is returned.
*USRPRFEXT  User profile extended attributes    Group which checks *ATTNPGM,
*CCSID, *CNTRYID, *CRTOBJOWN, *CURLIB, *GRPAUT, *GRPAUTTYP, *GRPPRF, *INLMNU,
*INLPGM, *JOBD, *LANGID, *LMTCPB, *MAXSTG, *MSGQ, *PRFOUTQ, *PWDEXPITV,
*PWDIND, *SPCAUTIND, *SUPGRPIND, *USRCLS, *USREXPDAT, *USREXPITV.
Attributes compared and expected results - #IFSATR audit
The #IFSATR audit calls the Compare IFS Attributes (CMPIFSA) command and
places the results in an output file. Table 132 lists the attributes that can be compared
by the CMPIFSA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The Returned Values column lists the values you
can expect in the System1 Value (SYS1VAL) and System 2 Value (SYS2VAL)
columns as a result of running the compare.
Table 132. Compare IFS Attributes (CMPIFSA) attributes
*BASIC    Pre-determined set of basic attributes    Group which checks a
pre-determined set of attributes. The following set of attributes are compared:
*CCSID, *DATASIZE, *OBJTYPE, and the group *PCATTR.
*EXTAUT4  Extended authority for permissions to IFS objects in QSHELL    A bit
string indicates the permissions and privileges of the file.
5. The *FID attribute checks to ensure that stream files referenced by multiple hard links are the same on each system. It
does not compare the actual FIDs; instead it checks to ensure that objects with the equivalent FID on one system
have matching FIDs on the other system.
6. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is
specified, these values are blank.
Attributes compared and expected results - #DLOATR audit
The #DLOATR audit calls the Compare DLO Attributes (CMPDLOA) command and
places the results in an output file. Table 133 lists the attributes that can be compared
by the CMPDLOA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The Returned Values column lists the values you
can expect in the System1 Value (SYS1VAL) and System 2 Value (SYS2VAL)
columns as a result of running the compare.
Table 133. Compare DLO Attributes (CMPDLOA) attributes
Comparison results for journal status and other journal attributes
The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA),
and Compare IFS Attributes (CMPIFSA) commands support comparing the journaling
attributes listed in Table 134 for objects replicated from the user journal. These
commands function similarly when comparing journaling attributes.
When a compare is requested, MIMIX determines the result displayed in the
Differences Indicator field by considering whether the file is journaled, whether the
request includes a data group, and the data group’s configured settings for journaling.
Regardless of which journaling attribute is specified on the command, MIMIX always
checks the journaling status first (*JOURNALED attribute). If the file or object is
journaled on both systems, MIMIX then considers whether the command specified a
data group definition before comparing any other requested attribute.
When specified on the CMPOBJA command, these values apply only to files, data areas,
or data queues. When specified on the CMPFILA command, these values apply only to
PF-DTA and PF38-DTA files.
*JRN 1 Journal. Indicates the name of the current or last journal. If blank, the
object has never been journaled.
*JRNIMG 1 2 Journal Image. Indicates the kinds of images that are written to the
journal receiver for changes to objects.
*JRNLIB 1 Journal Library. Identifies the library that contains the journal. If blank,
the object has never been journaled.
*JRNOMIT 1 Journal Omit. Indicates whether file open and close journal entries
are omitted.
1. When these values are specified on a Compare command, the journal status
(*JOURNALED) attribute is always evaluated first. The result of the journal status
comparison determines whether the command will compare the specified attribute.
2. Although *JRNIMG can be specified on the CMPIFSA command, it is not compared even when the
journal status is as expected. The journal image status is reflected as not supported (*NS) because
the operating system only supports after (*AFTER) images.
Compares that do not specify a data group - When no data group is specified on
the compare request, MIMIX compares the journaled status (*JOURNALED attribute).
Table 135 shows the result displayed in the Differences Indicator field. If the file or
object is not journaled on both systems, the compare ends. If both source and target
systems are journaled, MIMIX then compares any other specified journaling attribute.
Table 135. Difference indicator values for *JOURNALED attribute when no data group is
specified

                            Target journal status 1
Source journal status 1     Yes     No      *NOTFOUND
Yes                         *EQ     *NE     *NE
No                          *NE     *EQ     *NE
*NOTFOUND                   *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Compares that specify a data group - When a data group is specified on the
compare request, MIMIX compares the journaled status (*JOURNALED attribute) to
the configuration values. If both source and target systems are journaled according to
the expected configuration settings, then MIMIX compares any other specified
journaling attribute against the configuration settings.
The Compare commands vary slightly in which configuration settings are checked.
• For CMPFILA requests, if the journaled status is as configured, any other
specified journal attributes are compared. Possible results from comparing the
*JOURNALED attribute are shown in Table 136.
• For CMPOBJA and CMPIFSA requests, if the journaled status is as configured
and the configuration specifies *YES for Cooperate with database (COOPDB),
then any other specified journal attributes are compared. Possible results from
comparing the *JOURNALED attribute are shown in Table 136 and Table 137. If
the configuration specifies COOPDB(*NO), only the journaled status is compared;
possible results are shown in Table 138.
Table 136, Table 137, and Table 138 show results for the *JOURNALED attribute that
can appear in the Difference Indicator field when the compare request specified a
data group and considered the configuration settings.
Table 136 shows results when the configured settings for Journal on target and
Cooperate with database are both *YES.
Table 136. Difference indicator values for *JOURNALED attribute when a data group is
specified and the configuration specifies *YES for JRNTGT and COOPDB

                            Target journal status 1
Source journal status 1     Yes     No      *NOTFOUND
Yes                         *EC     *EC     *NE
No                          *NC     *NC     *NE
*NOTFOUND                   *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Table 137 shows results when the configured settings are *NO for Journal on target
and *YES for Cooperate with database.
Table 137. Difference indicator values for *JOURNALED attribute when a data group is
specified and the configuration specifies *NO for JRNTGT and *YES for COOPDB

                            Target journal status 1
Source journal status 1     Yes     No      *NOTFOUND
Yes                         *NC     *EC     *NE
No                          *NC     *NC     *NE
*NOTFOUND                   *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Table 138 shows results when the configured setting for Cooperate with database is
*NO. In this scenario, you may want to investigate further. Even though the Difference
Indicator shows values marked as configured (*EC), the object can be not journaled
on one or both systems. The actual journal status values are returned in the System 1
Value (SYS1VAL) and System 2 Value (SYS2VAL) fields.
Table 138. Difference indicator values for *JOURNALED attribute when a data group is
specified and the configuration specifies *NO for COOPDB

                            Target journal status 1
Source journal status 1     Yes     No      *NOTFOUND
Yes                         *EC     *EC     *NE
No                          *EC     *EC     *NE
*NOTFOUND                   *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Comparison results for auxiliary storage pool ID (*ASP)
The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA),
Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA)
commands support comparing the auxiliary storage pool (*ASP) attribute for objects
replicated from the user journal. These commands function similarly.
When a compare is requested, MIMIX determines the result displayed in the
Differences Indicator field by considering whether a data group was specified on the
compare request.
Compares that do not specify a data group - When no data group is specified on
the compare request, MIMIX compares the *ASP attribute for all files or objects that
match the selection criteria specified in the request. Table 139 shows the possible
results in the Difference Indicator field.
Table 139. Difference Indicator values for the *ASP attribute when no data group is specified

                        Target ASP value 1
Source ASP value 1      ASP1    ASP2    *NOTFOUND
ASP1                    *EQ     *NE     *NE
ASP2                    *NE     *EQ     *NE
*NOTFOUND               *NE     *NE     *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Compares that specify a data group - When a data group is specified on the
compare request (CMPFILA, CMPDLOA, CMPIFSA commands), MIMIX does not
compare the *ASP attribute. When a data group is specified on a CMPOBJA request
which specifies any object type other than libraries (*LIB), MIMIX does not compare
the *ASP attribute. Table 140 shows the possible results in the Difference Indicator
field.
Table 140. Difference Indicator values for non-library objects when the request specified a
data group

                        Target ASP value 1
Source ASP value 1      ASP1        ASP2        *NOTFOUND
ASP1                    *NOTCMPD    *NOTCMPD    *NE
ASP2                    *NOTCMPD    *NOTCMPD    *NE
*NOTFOUND               *NE         *NE         *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
For CMPOBJA requests which specify a data group and an object type of *LIB,
MIMIX considers configuration settings for the library. Values for the System 1 library
ASP number (LIB1ASP), System 1 library ASP device (LIB1ASPD), System 2 library
ASP number (LIB2ASP), and System 2 library ASP device (LIB2ASPD) are retrieved
from the data group object entry and used in the comparison. Table 141, Table 142,
and Table 143 show the possible results in the Difference Indicator field.
Note: For Table 141, Table 142, and Table 143, the results are the same even if the
system roles are switched.
Table 141 shows the expected values for the ASP attribute when the request specifies
a data group and the configuration specifies *SRCLIB for the System 1 library ASP
number and the data source is system 2.
Table 141. Difference Indicator values for libraries when a data group is specified and
configured values are LIB1ASP(*SRCLIB) and DTASRC(*SYS2)

                        Target ASP value 1
Source ASP value 1      ASP1    ASP2    *NOTFOUND
ASP1                    *EC     *NC     *NE
ASP2                    *NC     *EC     *NE
*NOTFOUND               *NE     *NE     *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Table 142 shows the expected values for the ASP attribute when the request specifies
a data group and the configuration specifies 1 for the System 1 library ASP number
and the data source is system 2.
Table 142. Difference Indicator values for libraries when a data group is specified and
configured values are LIB1ASP(1) and DTASRC(*SYS2)

                        Target ASP value 1
Source ASP value 1      1       2       *NOTFOUND
1                       *EC     *NC     *NE
2                       *EC     *NC     *NE
*NOTFOUND               *NE     *NE     *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Table 143 shows the expected values for the ASP attribute when the request specifies
a data group and the configuration specifies *ASPDEV for the System 1 library ASP
number, DEVNAME is specified for the System 1 library ASP device, and the data
source is system 2.
Table 143. Difference Indicator values for libraries when a data group is specified and
configured values are LIB1ASP(*ASPDEV), LIB1ASPD(DEVNAME) and DTASRC(*SYS2)

                        Target ASP value 1
Source ASP value 1      DEVNAME    2       *NOTFOUND
1                       *EC        *NC     *NE
2                       *EC        *NC     *NE
*NOTFOUND               *NE        *NE     *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Comparison results for user profile status (*USRPRFSTS)
When comparing the attribute *USRPRFSTS (user profile status) with the Compare
Object Attributes (CMPOBJA) command, MIMIX determines the result displayed in
the Differences Indicator field by considering the following:
• The status values of the object on both the source and target systems
• Configured values for replicating user profile status, at the data group and object
entry levels
• The value of the Data group definition (DGDFN) parameter specified on the
CMPOBJA command.
Compares that do not specify a data group - When the CMPOBJA command does
not specify a data group, MIMIX compares the status values between source and
target systems. The result is displayed in the Differences Indicator field, according to
Table 127 in “Interpreting results of audits that compare attributes” on page 692.
Compares that specify a data group - When the CMPOBJA command specifies a
data group, MIMIX checks the configuration settings and the values on one or both
systems. (For additional information, see “How configured user profile status is
determined” on page 723.)
When the configured value is *SRC, the CMPOBJA command compares the values
on both systems. The user profile status on the target system must be the same as
the status on the source system, otherwise an error condition is reported. Table 144
shows the possible values.
Table 144. Difference Indicator values when configured user profile status is *SRC

                                 Target user profile status
Source user profile status       *ENABLED    *DISABLED   *NOTFOUND
*ENABLED                         *EC         *NC         *NE
*DISABLED                        *NC         *EC         *NE
*NOTFOUND                        *NE         *NE         *UN
Table 145 and Table 146 show the possible values when the configured value is
*ENABLED or *DISABLED, respectively.
Table 145. Difference Indicator values when configured user profile status is *ENABLED

                                 Target user profile status
Source user profile status       *ENABLED    *DISABLED   *NOTFOUND
*ENABLED                         *EC         *NC         *NE
*DISABLED                        *EC         *NC         *NE
*NOTFOUND                        *NE         *NE         *UN
Table 146. Difference Indicator values when configured user profile status is *DISABLED

                                 Target user profile status
Source user profile status       *ENABLED    *DISABLED   *NOTFOUND
*ENABLED                         *NC         *EC         *NE
*DISABLED                        *NC         *EC         *NE
*NOTFOUND                        *NE         *NE         *UN
When the configured value is *TGT, the CMPOBJA command does not compare the
values because the result is indeterminate. Any differences in user profile status
between systems are not reported. Table 147 shows possible values.
Table 147. Difference Indicator values when configured user profile status is *TGT

                                 Target user profile status
Source user profile status       *ENABLED    *DISABLED   *NOTFOUND
*ENABLED                         *NA         *NA         *NE
*DISABLED                        *NA         *NA         *NE
*NOTFOUND                        *NE         *NE         *UN
When no value is specified in an object entry, the default is to use the value *SRC
from the data group definition. Table 148 shows the possible values at both the data
group and object entry levels.
*DGDFT  Only available for data group object entries, this indicates that the value
specified in the data group definition is used for the user profile status. This is the
default value for object entries.
*DISABLE 1  The status of the user profile is set to *DISABLED when the user profile
is created or changed on the target system.
*ENABLE 1  The status of the user profile is set to *ENABLED when the user profile is
created or changed on the target system.
*SRC  This is the default value in the data group definition. The status of the user
profile on the source system is always used when the user profile is created or
changed on the target system.
Comparison results for user profile password (*PRFPWDIND)
When comparing the attribute *PRFPWDIND (user profile password indicator) with
the Compare Object Attributes (CMPOBJA) command, MIMIX assumes that the user
profile names are the same on both systems. User profile passwords are only
compared if the user profile name is the same on both systems and the user profile of
the local system is enabled and has a defined password.
If the local or remote user profile has a password of *NONE, or if the local user profile
is disabled or expired, the user profile password is not compared. The System
Indicator fields will indicate that the attribute was not compared (*NOTCMPD). The
Difference Indicator field will also return a value of not compared (*NA).
The CMPOBJA command does not support name mapping while comparing the
*PRFPWDIND attribute. If the user profile names are different, or if you attempt name
mapping, the System Indicator fields will indicate that comparing the attribute is not
supported (*NOTSPT). The Difference Indicator field will also return a value of not
supported (*NS).
The following tables identify the expected results when user profile password is
compared. Note that the local system is the system on which the command is being
run, and the remote system is defined as System 2.
Table 149 shows the possible Difference Indicator values when the user profile
passwords are the same on the local and remote systems and are not defined as
*NONE.
Table 149. Difference Indicator values when user profile passwords are the same, but not
*NONE

                                      Remote system user profile password
Local system user profile password    *ENABLED   *DISABLED   Expired   Not found
*ENABLED                              *EQ        *EQ         *EQ       *NE
*DISABLED                             *NA        *NA         *NA       *NE
Expired                               *NA        *NA         *NA       *NE
Not found                             *NE        *NE         *NE       *EQ
Table 150 shows the possible Difference Indicator values when the user profile
passwords are different on the local and remote systems and are not defined as
*NONE.
Table 150. Difference Indicator values when user profile passwords are different, but not
*NONE

                                     Difference Indicator by remote system status
Local system user profile password   *ENABLED   *DISABLED   Expired   Not Found
*ENABLED                             *NE        *NE         *NE       *NE
*DISABLED                            *NA        *NA         *NA       *NE
Expired                              *NA        *NA         *NA       *NE
Not Found                            *NE        *NE         *NE       *EQ
Table 151 shows the possible Difference Indicator values when the user profile
passwords are defined as *NONE on the local and remote systems.
Table 151. Difference Indicator values when user profile passwords are *NONE

                                     Difference Indicator by remote system status
Local system user profile password   *ENABLED   *DISABLED   Expired   Not Found
*ENABLED                             *NA        *NA         *NA       *NE
*DISABLED                            *NA        *NA         *NA       *NE
Expired                              *NA        *NA         *NA       *NE
Not Found                            *NE        *NE         *NE       *EQ
Journal entry codes for user journal transactions
This appendix lists journal codes and error codes associated with replication activity,
including:
• “Journal entry codes for files” on page 727 identifies journal codes supported for
files, IFS objects, data areas, and data queues configured for replication through
the user journal. This section also includes a list of error codes associated with
files held due to error.
• “Journal entry codes for system journal transactions” on page 735 identifies
journal codes associated with objects replicated through the system journal.
Journal entry codes for files

Table 152 identifies the journal entry codes and entry types supported for files
configured for replication through the user journal.

Table 152. Journal entry codes and types supported for files

Journal code   Entry type   Description
D              AC           Add constraint
D              CG           Change file
D              CT           Create file
D              DC           Remove constraint
D              DH           File saved
D              DT           Delete file
D              DZ           File restored
D              FM           Move file
D              FN           Rename file
D              GC           Change constraint
D              GO           Change owner
D              GT           Grant file
D              RV           Revoke file
D              TC           Add trigger
D              TD           Delete trigger
D              TG           Change trigger
D              TQ           Refresh table
F              DM           Delete member
F              IT           Identity value
F              MC           Create member
F              RM           Member reorganized
U              MX           MIMIX-generated entry

Note:
1. This journal code is not supported by the Display Journal Statistics (DSPJRNSTC)
   command.
The following error codes are associated with files held due to error (*HLDERR):

04     Record in use
05     Allocation error
R1     Error with data group file entry after the database apply reorganized a file
R3     Error applying held entries after the database apply reorganized a file
Table 154. Journal entry codes and types supported for IFS objects

Journal code   Entry type   Description
B              B2           Add link
B              B3           Move/rename object
B              BD           Delete object
B              FR           Restore object 1
B              FW           Start of save-while-active 1
B              WA           Write after-image

Note:
1. The actions identified in these entries are replicated cooperatively through the
   security audit journal.
Journal codes and entry types for journaled data areas and data queues
The operating system uses journal codes E and Q to indicate that journal entries are
related to operations on data areas and data queues, respectively. When configured
for user journal replication, MIMIX recognizes specific E and Q journal entry types as
eligible for replication from a user journal.
Table 155 shows the currently supported journal entry types for data areas.

Table 155. Journal entry codes and types supported for data areas

Journal code   Entry type   Description
E              ZA           Change authority
E              ZO           Ownership change
E              ZT           Auditing change

Table 156 shows the currently supported journal entry types for data queues.

Table 156. Journal entry codes and types supported for data queues

Journal code   Entry type   Description
Q              ZA           Change authority
Q              ZO           Ownership change
Q              ZT           Auditing change
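
To see which of these eligible entries a journal currently contains, you can subset
the IBM Display Journal (DSPJRN) command by journal code and entry type. The journal
and library names below are placeholders:

    /* Display only the E and Q entry types listed above */
    DSPJRN JRN(MYLIB/MYJRN) JRNCDE((E) (Q)) ENTTYP(ZA ZO ZT)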
Journal entry codes for system journal transactions
For more information about journal entries, see Journal Entry Information (Appendix
D) in the iSeries Backup and Recovery guide in the IBM eServer iSeries Information
Center.
Table 157. Journal entry codes and subtypes for replicated system journal entries

Code-Type   Description         Access types
T-YC        DLO object change   Various access types are supported.
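
One way to inspect the raw system journal entries behind a replicated action is to
display matching entries from the security audit journal with the Display Journal
(DSPJRN) command. This sketch assumes the standard QAUDJRN journal in library QSYS:

    /* T-YC entries: DLO object change */
    DSPJRN JRN(QSYS/QAUDJRN) JRNCDE((T)) ENTTYP(YC)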
APPENDIX I Outfile formats
This appendix contains the output file (outfile) formats for the MIMIX commands that
provide outfile support.
For each command that can produce an outfile, MIMIX provides a model database file
that defines the record format for the outfile. These database files can be found in the
product installation library.
Public authority to the created outfile is the same as the create authority of the library
in which the file is created. Use the Display Library Description (DSPLIBD) command
to see the create authority of the library.
You can use the Run Query (RUNQRY) command to display outfiles with column
headings and data type formatting if the licensed program 5722QU1, Query, is
installed. Otherwise, you can use the Display File Field Description (DSPFFD)
command to see detailed outfile information, such as the field length, type, starting
position, and number of bytes.
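
For example, assuming a compare command wrote its results to the placeholder file
MYOUTF in library MYLIB, the following commands show the library's create authority,
the outfile contents, and the outfile's field definitions:

    DSPLIBD LIB(MYLIB)                          /* create authority of library   */
    RUNQRY QRY(*NONE) QRYFILE((MYLIB/MYOUTF))   /* display outfile with headings */
    DSPFFD FILE(MYLIB/MYOUTF)                   /* field-level detail            */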
Work panels with outfile support
The following table lists the work panels with outfile support.
MCAG outfile (WRKAG command)

MCDTACRGE outfile (WRKDTARGE command)

MCNODE outfile (WRKNODE command)

MXCDGFE outfile (CHKDGFE command)

MXCMPDLOA outfile (CMPDLOA command)

MXCMPFILA outfile (CMPFILA command)

MXCMPFILD outfile (CMPFILDTA command)
MXCMPFILR outfile (CMPFILDTA command, RRN report)
Table 166. Compare File Data (CMPFILDTA) relative record number (RRN) output file (MXCMPFILR)

Field      Description               Type, length   Valid values                       Column headings
SYSTEM1    System 1                  CHAR(8)        User-defined system name; *local   SYSTEM 1
                                                    system name if no DG specified
SYSTEM2    System 2                  CHAR(8)        User-defined system name; *local   SYSTEM 2
                                                    system name if no DG specified
SYS1OBJ    System 1 object name      CHAR(10)       User-defined name                  SYSTEM 1 OBJECT
SYS1LIB    System 1 library name     CHAR(10)       User-defined name                  SYSTEM 1 LIBRARY
MBR        Member name               CHAR(10)       User-defined name                  MEMBER
SYS2OBJ    System 2 object name      CHAR(10)       User-defined name                  SYSTEM 2 OBJECT
SYS2LIB    System 2 library name     CHAR(10)       User-defined name                  SYSTEM 2 LIBRARY
RRN        Relative record number    DECIMAL(10)    Number                             RRN
ASPDEV1    System 1 ASP device       CHAR(10)       *NONE, user-defined name           SYSTEM 1 ASP DEVICE
ASPDEV2    System 2 ASP device       CHAR(10)       *NONE, user-defined name           SYSTEM 2 ASP DEVICE
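
Because the model file that defines this record format ships in the product
installation library, you can list the field layout directly from the model file. The
installation library name shown (MIMIX) is a placeholder for your actual installation
library:

    DSPFFD FILE(MIMIX/MXCMPFILR)   /* placeholder installation library name */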
MXCMPRCDC outfile (CMPRCDCNT command)

MXCMPIFSA outfile (CMPIFSA command)

MXCMPOBJA outfile (CMPOBJA command)

MXAUDHST outfile (WRKAUDHST command)

MXAUDOBJ outfile (WRKAUDOBJ, WRKAUDOBJH commands)

MXDGACT outfile (WRKDGACT command)

MXDGACTE outfile (WRKDGACTE command)
Field        Description                             Type, length   Valid values                    Column headings
TGTOBJLIB    Target system object library name       CHAR(10)       User-defined name, BLANK        TARGET OBJECT LIBRARY
TGTOBJ       Target system object name               CHAR(10)       User-defined name, BLANK        TARGET OBJECT
TGTOBJMBR    Target system object member name        CHAR(10)       User-defined name, BLANK        TARGET MEMBER
TGTDLO       Target system DLO name                  CHAR(12)       User-defined name, BLANK        TARGET DLO
TGTFLR       Target system object folder name        CHAR(63)       User-defined name, BLANK        TARGET FOLDER
TGTSPLFJOB   Target system spooled file job name     CHAR(26)       Three-part spooled file name,   TARGET SPLF JOB
                                                                    BLANK
TGTSPLF      Target system spooled file name         CHAR(10)       User-defined name, BLANK        TARGET SPLF
TGTSPLFNBR   Target system spooled file job number   PACKED(7 0)    1-999999, BLANK                 TARGET SPLF NUMBER
TGTOUTQ      Target system output queue              CHAR(10)       User-defined name, BLANK        TARGET OUTQ
TGTOUTQLIB   Target system output queue library      CHAR(10)       User-defined name, BLANK        TARGET OUTQ LIBRARY
TGTIFS       Target system IFS name                  CHAR(1024)     User-defined name, BLANK        TARGET IFS OBJECT
                                                     VARLEN(100)
MXDGDFN outfile (WRKDGDFN command)

MXDGDLOE outfile (WRKDGDLOE command)

MXDGFE outfile (WRKDGFE command)

MXDGIFSE outfile (WRKDGIFSE command)

MXDGSTS outfile (WRKDG command)

MXDGOBJE outfile (WRKDGOBJE command)

MXDGTSP outfile (WRKDGTSP command)

MXJRNDFN outfile (WRKJRNDFN command)

MXRJLNK outfile (WRKRJLNK command)

MXSYSDFN outfile (WRKSYSDFN command)

MXSYSSTS outfile (WRKSYS command)

MXJRNINSP outfile (WRKJRNINSP command)

MXTFRDFN outfile (WRKTFRDFN command)

MXDGIFSTE outfile (WRKDGIFSTE command)

MXDGOBJTE outfile (WRKDGOBJTE command)

MXPROC outfile (WRKPROC command)

MXPROCSTS outfile (WRKPROCSTS command)

MXSTEPPGM outfile (WRKSTEPPGM command)

MXSTEP outfile (WRKSTEP command)

MXSTEPMSG outfile (WRKSTEPMSG command)

MXSTEPSTS outfile (WRKSTEPSTS command)
Index
Symbols concepts 659
*FAILED activity entry 46 group 660
*HLD, files on hold 104 independent 660
*HLDERR, held due to error 408 independent, benefits 659
*HLDERR, hold error status 80 independent, configuration tips 663
*MAXOPT3 sequence number size 222 independent, configuring 663
*MSGQ, maintaining private authorities 104 independent, configuring IFS objects 664
independent, configuring library-based ob-
A jects 664
access types (file) for T-ZC entries 415 independent, effect on library list 665
accessing independent, journal receiver considerations
MIMIX Main Menu 93 664
active server technology 465 independent, limitations 662
additional resources 22 independent, primary 660
advanced journaling independent, replication 658
add to existing data group 88 independent, requirements 662
apply session balancing 90 independent, restrictions 662
conversion examples 89 independent, secondary 660
convert data group to 88 SYSBAS 658
loading tracking entries 286 system 659
planning for 87 user 659
replication process 76 asynchronous delivery 67
serialized transactions with database 88 attributes of a step, changing 569
advanced journaling, data areas and data attributes, supported
queues CMPDLOA command 713
synchronizing 530 CMPFILA command 696
advanced journaling, IFS objects CMPIFSA command 710
journal receiver size 217 CMPOBJA command 701
restrictions 120 audit
synchronizing 530 check for reported problems 679
advanced journaling, large objects (LOBs) differences, resolving 679
journal receiver size 217 displaying compliance status 684
synchronizing 501 improve performance of #MBRRCDCNT 381
APPC/SNA, configuring 163 job log 684
application group 27 last performed 684
conversion checklist 145 scheduler, alternative 670
create resource groups for a 327 status
define node roles manually 328 runtime 679
define primary node 327 audit results 679
application group definition 37 #DGFE rule 687, 747
creating 326 #DLOATR rule 713, 749
apply session #DLOATR rule, ASP attributes 719
constraint induced changes 400 #FILATR rule 696, 751
default value 240 #FILATR rule, ASP attributes 719
specifying 237 #FILATR rule, journal attributes 715
apply session, database #FILATRMBR rule 696, 751
load balancing 90 #FILATRMBR rule, ASP attributes 719
ASP #FILATRMBR rule, journal attributes 715
basic 660 #FILDTA rule 689, 753
#IFSATR rule 710, 759
#IFSATR rule, ASP attributes 719 batch output 546
#IFSATR rule, journal attributes 715 benefits
#MBRRCDCNT rule 689, 757 independent ASPs 659
#OBJATR rule 701, 761 LOB replication 108
#OBJATR rule, ASP attributes 719 bi-directional data flow 390
#OBJATR rule, journal attributes 715 broadcast configuration 71
#OBJATR rule, user profile password attribute converting to application group 328
725 build journal environment
#OBJATR rule, user profile status attribute after changing receiver size option 204
722
interpreting, attribute comparisons 692 C
interpreting, file data comparisons 689 candidate objects
resolving problems 679, 687 defined 427
timestamp difference 128 cascade configuration 71
troubleshooting 684 cascading distributions, configuring 395
auditing and reporting, compare commands catchup mode 65
DLO attributes 459 change management
file and member attributes 450 overview 213
file data using active processing 490 remote journal environment 213
file data using subsetting options 493 change management, journal receivers 203
file data with repair capability 484 changing
file data without active processing 481 RJ link 228
files on hold 487 startup programs, remote journaling 146
IFS object attributes 456 changing from RJ to MIMIX processing
object attributes 453 permanently 230
auditing level, object temporarily 229
used for replication 343 checklist
auditing value, i5/OS object convert *DTAARA, *DTAQ to user journaling
set by MIMIX 60 151
auditing, i5/OS object 29 convert IFS objects to user journaling 151
performed by MIMIX 309 convert to application groups 145
audits 512 converting to remote journaling 146
authorities, private 104 copying configuration data 649
authorization lists legacy cooperative processing 157
to exclude from replication 85 manual configuration (source-send) 141
automation 531 MIMIX Dynamic Apply 148
autostart job entry 178 new preferred configuration 137
changing job description 193 pre-configuration 82
changing port information 194 cluster services 36
created by MIMIX 192 collision points 532
identifying 192 collision resolution 532
when to change 193 default value 241
requirements 409
B working with 408
backlog command authority 638
comparing file data restriction 467 commands
backup system 26 changing defaults 556
restricting access to files 240 displaying a list of 547
basic ASP 660 commands, by mnemonic
ADDMSGLOGE 541 STRJRNIFSE 350
ADDRJLNK 227 STRJRNOBJE 354
ADDSTEP 568 STRMMXMGR 306
CHGJRNDFN 220 STRSVR 191
CHGRJLNK 228 SYNCDGACTE 498, 504
CHGSYSDFN 170 SYNCDGFE 498, 505, 514
CHGTFRDFN 185 SYNCDLO 497, 503, 524
CHKDGFE 313, 687 SYNCIFS 497, 503, 520, 530
CLOMMXLST 555 SYNCOBJ 497, 503, 516, 530
CMPDLOA 446 VFYCMNLNK 196, 197
CMPFILA 446 VFYJRNFE 349
CMPFILDTA 465, 481 VFYJRNIFSE 352
CMPIFSA 446 VFYJRNOBJE 356
CMPOBJA 446 VFYKEYATR 389
CMPRCDCNT 462 WRKCRCLS 410
CPYCFGDTA 648 WRKDGDFN 257
CPYDGFE 300 WRKDGDLOE 300
CPYDGIFSE 300 WRKDGFE 300
CRTAGDFN 326 WRKDGIFSE 300
CRTCRCLS 410 WRKDGOBJE 300
CRTDGDFN 246, 250 WRKJRNDFN 257
CRTJRNDFN 218 WRKRJLNK 316
CRTSYSDFN 169 WRKSYSDFN 257
CRTTFRDFN 184 WRKTFRDFN 257
DLTCRCLS 411 commands, by name
DLTDGDFN 258 Add Message Log Entry 541
DLTJRNDFN 258 Add Remote Journal Link 227
DLTSYSDFN 258 Add Step 568
DLTTFRDFN 258 Change Journal Definition 220
DPYDGCFG 307 Change RJ Link 228
DSPDGFE 302 Change System Definition 170
DSPDGIFSE 302 Change Transfer Definition 185
ENDJRNFE 348 Check Data Group File Entries 313, 687
ENDJRNIFSE 351 Close MIMIX List 555
ENDJRNOBJE 355 Compare DLO Attributes 446
LODDGFE 275 Compare File Attributes 446
LODDGOBJE 272 Compare File Data 465, 481
LODDTARGE 327 Compare IFS Attributes 446
MIMIX 93 Compare Object Attributes 446
OPNMMXLST 555 Compare Record Counts 462
RMVDGFE 301 Copy Configuration Data 648
RMVDGIFSE 301 Copy Data Group File Entry 300
RMVRJCNN 231 Copy Data Group IFS Entry 300
RUNCMD 548 Create Application Group Definition 326
RUNCMDS 548 Create Collision Resolution Class 410
RUNRULE 669, 675 Create Data Group Definition 246, 250
RUNRULEGRP 669, 675 Create Journal Definition 218
SETDGAUD 309 Create System Definition 169
SETIDCOLA 401 Create Transfer Definition 184
STRJRNFE 347 Delete Collision Resolution Class 411
Delete Data Group Definition 258 Work with System Definition 257
Delete Journal Definition 258 Work with Transfer Definition 257
Delete System Definition 258 commands, run on remote system 548
Delete Transfer Definition 258 commit cycles
Deploy Data Group Configuration 307 effect on audit comparison 689, 691
Display Data Group File Entry 302 policy effect on compare record count 381
Display Data Group IFS Entry 302 commit mode 367
End Journaling File Entries 348 commitment control 108
End Journaling IFS Entries 351 #MBRRCDCNT audit performance 381
End Journaling Obj Entries 355 journal standby state, journal cache 362, 364
Load Data Group File Entries 275 journaled IFS objects 76
Load Data Group Object Entries 272 communications
Load Data Resource Group Entry 327 APPC/SNA 163
MIMIX 93 configuring system level 159
Open MIMIX List 555 native TCP/IP 159
Remove Data Group File Entry 301 OptiConnect 164
Remove Data Group IFS Entry 301 protocols 159
Remove Remote Journal Connection 231 starting TCP server 191
Run Command 548 compare commands
Run Commands 548 completion and escape messages 534
Run Rule 669, 675 outfile formats 445
Run Rule Group 669, 675 report types and outfiles 444
Set Data Group Auditing 309 spooled files 444
Set Identity Column Attribute 401 comparing
Start Journaling File Entries 347 DLO attributes 459
Start Journaling IFS Entries 350 file and member attributes 450
Start Journaling Obj Entries 354 IFS object attributes 456
Start Lakeview TCP Server 191 object attributes 453
Start MIMIX Managers 306 when file content omitted 417
Synchronize Data Group Activity Entry 504 comparing attributes
Synchronize Data Group File Entry 505, 514 attributes to compare 448
Synchronize DG Activity Entry 498 overview 446
Synchronize DG File Entry 498 supported object attributes 447, 470
Synchronize DLO 497, 503, 524 comparing file data 465
Synchronize IFS 503 active server technology 465
Synchronize IFS Object 497, 520, 530 advanced subsetting 476
Synchronize Object 497, 503, 516, 530 allocated and not allocated records 467
Verify Communications Link 196, 197 comparing a random sample 476
Verify Journaling File Entry 349 comparing a range of records 473
Verify Journaling IFS Entries 352 comparing recently inserted data 473
Verify Journaling Obj Entries 356 comparing records over time 476
Verify Key Attributes 389 data correction 465
Work with Collision Resolution Classes 410 excluding unchanged members 476
Work with Data Group Definition 257 first and last subset 479
Work with Data Group DLO Entries 300 interleave factor 477
Work with Data Group File Entries 300 job ends due to network timeout 470
Work with Data Group IFS Entries 300 keys, triggers, and constraints 468
Work with Data Group Object Entries 300 multi-threaded jobs 466
Work with Journal Definition 257 network inactivity considerations 470
Work with RJ Links 316 number of subsets 477
parallel processing 466 configuring, collision resolution 409
processing with DBAPY 465, 487 confirmed journal entries 66
referential integrity considerations 469 considerations
repairing files in *HLDERR 465 journal for independent ASP 664
restrictions 466 what to not replicate 84
security considerations 467 constraints
thread groups 475 apply session for dependent files 400
transfer definition 475 auditing with CMPFILA 446
transitional states 466 comparing file data 468
using active processing 490 omit content and legacy cooperative process-
using subsetting options 493 ing 417
wait time 475 referential integrity considerations 469
with repair capability 484 requirements 399
with repair capability when files are on hold requirements when synchronizing 506
487 restrictions with high availability journal perfor-
without active processing 481 mance enhancements 364
comparing file record counts 462 support 399
concepts when journal is in standby state 362
procedures and steps 557 constraints, CMPFILA file-specific attribute 696
configuration constraints, physical files with
adding a directory to existing 293 apply session ignored 112
adding a library to existing 288 configuring 108
additional supporting tasks 303 legacy cooperative processing 112
copying existing data 653 constraints, referential 111
determining, IFS objects 155 contacting Vision Solutions 23
manually complete selection rule 288, 293 container receive process
results of #DGFE audit after changing 687 description 56
configuration, deploying the 307 container send process
configuring defaults 244
advanced replication techniques 383 description 56
bi-directional data flow 390 threshold 244
cascading distributions 395 contextual transfer definitions
choosing the correct checklist 135 considerations 183
classes, collision resolution 410 RJ considerations 182
data areas and data queues 113 continuous mode 65
database apply commit mode 368 convert data group
DLO documents and folders 122 to advanced journaling 151
file routing, file combining 392 to application group environment 145
for improved performance 358 COOPDB (Cooperate with database) 114, 120
IFS objects 116 cooperative journal (COOPJRN)
independent ASP 663 behavior 107
Intra communications 656 cooperative processing
job restart time 318 and omitting content 417
keyed replication 386 configuring files 106
library-based objects 100 file, preferred method for 53
message queue objects for user profiles 104 introduction 53
omitting T-ZC journal entry content 416 journaled objects 54
spooled file replication 103 legacy 54
to replicate SQL stored procedures 421 legacy limitations 112
unique key replication 386 MIMIX Dynamic Apply limitations 111
cooperative processing, legacy description 28
limitations 112 object 271
requirements and limitations 112 procedures for configuring 270
COOPJRN 107 data group file entry 275
COOPJRN (Cooperative journal) 236 adding individual 281
COOPTYPE (Cooperating object types) 114 changing 282
copying loading from a journal definition 279
data group entries 300 loading from a library 278, 279
definitions 257 loading from FEs from another data group
create operation, how replicated 128 280
CustomerCare 23 loading from object entries 276
customize sources for loading 275
switch procedures 562 data group IFS entry 284
customizing 531 with independent ASPs 664
replication environment 532 data group object entry
adding individual 273
D custom loading 271
data area independent ASP 663
restrictions of journaled 114 with independent ASP 664
data areas data library 34, 167
journaling 75 data management techniques 390
polling interval 238 data queue
synchronizing an object tracking entry 530 restrictions of journaled 114
data areas and data queues data queues
verifying journaling 356 journaling 75
data distribution techniques 390 synchronizing journaled objects 530
data group 27 data resource group entry
convert to remote journaling 146 in data group definition 234
database only 111 data resource group entry, adding 327
determining if RJ link used 316 data resource group entry, adding manually 328
ending 44, 69 data source 235
journal definitions used by a 339 database apply
RJ link differences 69 caching 361
sharing an RJ link 69 serialization 88
short name 234 with compare file data (CMPFILDTA) 465,
starting 44 487
starting the first time 315 database apply caching 361
switching 28 database apply process 79
switching, RJ link considerations 73 description 68
timestamps, automatic 238 target side locking 413
type 235 threshold warning 242
data group definition 37, 233 database apply processing
creating 246 entries under commitment control 367
parameter tips 234 database reader process 68
data group DLO entry 297 description 68
adding individual 298 threshold 241
loading from a folder 297 database receive process 79
data group entry 428 database send process 79
defined 95 description 79
filtering 237
threshold 241 generic name support 122
DDM implicit parent object replication 122
password validation 188 keeping same name 243
server in startup programs 146 object processing 122
server, starting 187 documents, MIMIX 19
defaults, command 556 duplicate identity column values 401
definitions dynamic updates
application group 37 adding data group entries 281
data group 37 removing data group entries 301
journal 37
named 36 E
remote journal link 37 ending CMPFILDTA jobs 479
renaming 259 ending journaling
RJ link 37 data areas and data queues 355
system 36 files 348
transfer 36 IFS objects 351
delay times 167 IFS tracking entry 351
delay/retry processing object tracking entry 355
first and second 239 error code, files in error 730
third 239 error messages
delayed commit 367 switch procedures 561
delete management examples
journal receivers 203 convert to advanced journaling 89
overview 213 DLO entry matching 123
remote journal environment 214 IFS object selection, subtree 442
delete operations job restart time 320, 321
journaled *DTAARA, *DTAQ, IFS objects 134 journal definitions for multimanagement envi-
legacy cooperative processing 133 ronment 209
deleting journal definitions for switchable data group
data group entries 301 211
definitions 258 journal receiver exit program 632
procedure 566 load file entries for MIMIX Dynamic Apply 276
delivery mode object entry matching 102
asynchronous 67 object retrieval delay 419
synchronous 65 object selection process 435
deploy configuration 307 object selection, order precedence in 436
detail report 544 object selection, subtree 438
device description port alias, complex 161
to exclude from replication 85, 86 port alias, simple 160
directory entries querying content of an output file 813
managing 179 SETIDCOLA command increment values 405
RDB 178 target journal inspection 335
directory, IFS user-generated notification 674
adding to existing data group 293 WRKDG SELECT statements 813
display output 543 exit points 532
displaying journal receiver management 625, 628
data group entries 302 MIMIX Monitor 626
distribution request, data-retrieval 57 MIMIX Promoter 627
DLOs exit programs
example, entry matching 123
journal receiver management 204, 629 IFS objects 116
requesting customized programs 625 determining configuration 155
expand support 545 file ID (FID) use with journaling 78
extended attribute cache 369 file IDs (FIDs) 317
configuring 369 implicit parent object replication 118
journaled entry types, commitment control
F and 76
failed request resolution 46 journaling 75
FEOPT (file and tracking entry options) 239 not supported 116
file path names 117
new 343 supported object types 116
file id (FID) 78 verifying journaling 352
file identifiers (FIDs) 317 IFS objects, journaled
files restrictions 120
combining 393 supported operations 129
omitting content 415 sychronizing 507, 530
output 545 immediate commit 367
routing 394 implicit parent object replication
sharing 390 DLO object 122
synchronizing 505 IFS object 118
temporary 84 independent ASP 660
filtering limitations 662
database replication 79 primary 660
messages 49 replication 658
on database send 237 requirements 662
on source side 237 restrictions 662
remote journal environment 68 secondary 660
firewall, using CMPFILDTA with 467 synchronizing data within an 501
folder path names 122 independent ASP threshold monitor 667
independent ASP, journal receiver change 213
G information and additional resources 22
generic name support 429 inspecting of journals on target system 334
DLOs 122 installations, multiple MIMIX 26
generic user exit 625 interleave factor 477
Intra configuration 654
IPL, journal receiver change 213
H
history retention 167
hot backup 24 J
job classes 38
job description parameter 546
I
job descriptions 38, 167
IBM i5/OS option 42 362
in data group definition 244
IFS directory
in product library 38
created during installation 33
list of MIMIX 39
exclude from replication 85, 86
job log
IFS file systems 116
for audit 684
unsupported 116
job name parameter 546
IFS object selection
job names 51
examples, subtree 442
job restart time 318
subtree 432
data group definition procedure 324 for data area and data queues 733
examples 320 supported by MIMIX user journal processing
overview 318 732
parameter 167, 245 journal image 240, 385
shared object send jobs 319 journal inspection, target 334
system definition procedure 323 journals not checked 334
jobs journal manager 35
procedures, used in 558 journal receiver 29
jobs, restarted automatically 318 change management 203, 213
journal 28 delete management 203, 213, 214
improving performance of 358 prefix 201
maximum number of objects in 29 RJ processing earlier receivers 215
MXCFGJRN 200 size for advanced journaling 217
security audit (system) 55 starting point 29
system (security audit) 55 stranded on target 216
journal analysis 47 journal receiver management
journal at create 126, 238 interaction with other products 214
requirements 343 recommendations 213
requirements and restrictions 344 journal sequence number, change during IPL
journal caching 202, 363 213
configuring 365 journal standby state 362
journal caching alternative 361 configuring 365
journal code journaled data areas, data queues
failed objects 735 planning for 87
files in error 727 journaled IFS objects
system journal transactions 735 planning for 87
journal codes journaled object types
user journal transactions 727 user exit program considerations 90
journal definition 37 journaling 29
configuring 198 data areas and data queues 75
created by other processes 200 ending for data areas and data queues 355
creating 218 ending for files defined to a data group 348
fields on data group definition 236 ending for IFS objects 351
MXCFGJRN 200 IFS objects 75
parameter tips 201 IFS objects and commitment control 76
remote journal environment considerations implicitly started 343
206 requirements for starting 343
remote journal naming convention 210 starting for data areas and data queues 354
remote journal naming convention, default starting for IFS objects 350
208 starting for physical files 347
remote journaling example 211 starting, ending, and verifying 342
used by a data group 339 verifying 512
journal entries 29 verifying for data areas and data queues 356
confirmed 66 verifying for IFS objects 352
filtering on database send 237 verifying for physical files 349
minimized data 359 journaling environment
OM journal entry 129 automatically creating 236
receive journal entry (RCVJRNE) 377 building 221
unconfirmed 66, 73 changing to *MAXOPT3 222
journal entry codes 735 removing 231
source for values (JRNVAL) 221 M
journaling on target, RJ environment consider- manage directory entries 179
ations 216 management system 27
journaling status maximum size transmitted 178
data areas and data queues 354 MAXOPT3
files 347 change receiver size value 217
IFS objects 350 receiver size option 204
member data, locks on target side 413
K menu
keyed replication 385 MIMIX Configuration 305
comparing file data restriction 466 MIMIX Main 93
file entry option defaults 240 message handling 166
preventing before-image filtering 237 message log 541
verifying file attributes 389 message queues
associated with user profiles 104
L journal-related threshold 204
large object (LOB) support message, step 573
user exit program 108 messages 48
large objects (LOBs) CMPDLOA 536
minimized journal entry data 359 CMPFILA 534
legacy cooperative processing CMPFILDTA 537
configuring 109 CMPIFSA 535
limitations 112 CMPOBJA 535
requirements 112 CMPRCDCNT 536
libraries comparison completion and escape 534
iOptimize, to not replicate 85 MIMIX Dynamic Apply
MIMIX Availability, to not replicate 85 configuring 106, 109
MIMIX Director to not replicate 86 recommended for files 106
objects in installation libraries 84 requirements and limitations 111
system, to not replicate 84 MIMIX environment 33
library MIMIX installation 26
adding to existing data group 288 MIMIX jobs, restart time for 318
library list MIMIX Model Switch Framework 626
adding QSOC to 164 MIMIX performance, improving 358
library list, effect of independent ASP 665 MIMIX rules 669
library-based objects, configuring 100 command prompting 671
limitations MIMIXOWN user profile 40, 188
database only data group 111 MIMIXQGPL library 33
list detail report 544 MIMIXSBS subsystem 34, 92
list summary report 544 minimized journal entry data 359
load leveling 59 LOBs 108
loading MMNFYNEWE monitor 126
tracking entries 286 monitor
LOB replication 108 new objects not configured to MIMIX 126
local-remote journal pair 65 move/rename operations
locks, database apply process 413 system journal replication 129
log space 38 user journal replication 130
logical files 106, 107 multimanagement
long IFS path names 117 journal definition naming 208
limiting internal communications 170 set by MIMIX 60, 309
multi-threaded jobs 466 object auditing value
data areas, data queues 113
N DLOs 122
name pattern 432 IFS objects 119
name space 55 library-based objects 98
names, displaying long 117 omit T-ZC entry considerations 416
naming conventions object entry, data group
data group definitions 234 creating 271
journal definitions 201, 207, 210 object locking retry interval 239
multi-part 31 object processing
transfer definitions 176 data areas, data queues 113
transfer definitions, contextual (*ANY) 183 defaults 242
transfer definitions, multiple network systems DLOs 122
172 high volume objects 380
network inactivity IFS objects 116
comparing file data 470 retry interval 239
network systems 27 spooled files 103
multiple 172 object receive process
new objects description 56
automatically journal 238 object retrieval delay
automatically replicate 126 considerations 419
files 126 examples 419
files processed by legacy cooperative pro- selecting 419
cessing 127 object retrieve process
files processed with MIMIX Dynamic Apply defaults 244
126 description 56
IFS object journal at create requirements 343 threshold 244
IFS objects, data areas, data queues 127 with high volume objects 380
journal at create selection criteria 344 object selection 425
notification of objects not in configuration 126 audits 425
notification retention 167 commands which use 425
notifications examples, order precedence 436
user-defined 673 examples, process 435
user-generated 668 examples, subtree 438
name pattern 432
O order precedence 429
object parameter 428
changed on target system 334, 337 process 426
journal entry codes 735 subtree 431
object apply process object selector elements 428
defaults 244 by function 430
description 56 object selectors 428
threshold 244 object send job
object attributes, comparing 448 job restart time for shared 319
object auditing object send process
used for replication 343 description 55
object auditing level, i5/OS shared 59, 243
manually set for a data group 309 threshold 243
object types supported 97, 635
objects output
new 343 batch 546
Omit content (OMTDTA) parameter 416 considerations 542
and comparison commands 417 display 543
and cooperative processing 417 expand support 545
open commit cycles file 545
audit results 689, 691 parameter 542
OptiConnect, configuring 164 print 543
outfiles 737 output file
MCAG 739 querying content, examples of 813
MCDTACRGE 742 output file fields
MCNODE 745 Difference Indicator 689, 692
MXAUDHST 763 System 1 Indicator field 695
MXAUDOBJ 766 System 2 Indicator field 695
MXCDGFE 747 output queues 167
MXCMPDLOA 749 overview
MXCMPFILA 751 MIMIX operations 44
MXCMPFILD 753 remote journal support 63
MXCMPFILR 756 starting and ending replication 44
MXCMPIFSA 759 support for resolving problems 46
MXCMPOBJA 761 support for switching 28, 47
MXCMPRCDC 757 working with messages 48
MXDGACT 769
MXDGACTE 771 P
MXDGDFN 778 parallel processing 466
MXDGDLOE 786 path names, IFS 117
MXDGFE 787 implicit parent object replication 118
MXDGIFSE 790 path names, implicit DLO parent object replica-
MXDGIFSTE 834 tion 122
MXDGOBJE 816 performance
MXDGOBJTE 837 improved record count compare 381
MXDGSTS 791 policy, CMPRCDCNT commit threshold 381
MXDGTSP 819 polling interval 238
MXJRNDFN 821 port alias 160
MXJRNINSP 830 complex example 161
MXPROC 839 creating 162
MXPROCSTS 840 simple example 160
MXSTEP 842 primary node
MXSTEPMSG 843 configure for application group 327
MXSTEPPGM 841 print output 543
MXSTEPSTS 844 printing
MXSYSDFN 826 controlling characteristics of 168
MXSYSSTS 829 private authorities, *MSGQ replication of 104
MXTFRDFN 832 problems, journaling
user profile password 725 data areas and data queues 354
user profile status 722 files 347
WRKRJLNK 824 IFS objects 350
outfiles, supporting information problems, resolving
record format 737 audit results 687
work with panels 738
auditing 679 QAUDLVL system value 41, 55, 103
procedure QDFTJRN data area 238
begin at step 331, 560 QLIBLCKLVL system value 41
displaying steps 567 QMLTTHDACN system value 42
procedures 38, 557, 563 QRETSVRSEC system value 42
adding a step 568 QSECURITY system value 41
components 557 QSOC
creating type *NODE 565 library 164
creating type *USER 565 QTIME system value 42
creating types *END, *START, *SWTPLAN, QTIMZON system value 42
*SWTUNPLAN 566
customizing user application steps 562 R
displaying available 563 RCVJRNE (Receive Journal Entry) 377
history 561 configuring values 378
invoking 560 determining whether to change the value of
job processing 558 378
last started run 568 understanding its values 377
programming support 571, 574 RDB
removing a step 570 directory entries 178, 180
status 561 reader wait time 235
step attributes 559 receiver library, changing for RJ target journal
step error processing 559 225
switch customizing 561 receivers
types of 558 change management 203
process delete management 203
database apply 79 recommendations
database reader 68 multimanagement journal definitions 208
database receive 79 relational database (RDB) 178
database send 79 entries 178, 186
names 51 remote journal
object send 55 i5/OS function 29, 63
process, object selection 426 i5/OS function, asynchronous delivery 67
processing defaults i5/OS function, synchronous delivery 65
container send 244 MIMIX support 63
database apply 241 relational database 178
file entry options 239 remote journal environment
object apply 244 changing 225
object retrieve 244 contextual transfer definitions 182
user journal entry 237 receiver change management 213
product authority receiver delete management 214
overview 638 restrictions 64
production system 26 RJ link 68
programs, step 570 security implications 188
publications, IBM 22 switch processing changes 48
remote journal ID 207, 208
Q remote journal link 37, 68
QALWOBJRST system value 41 remote journal link, See also RJ link
QALWUSRDMN system value 41 remote journaling
QAUDCTL system value 41, 55 data group definition 236
repairing queues 113
file data 484 restore operations, journaled *DTAARA, *DTAQ,
files in *HLDERR 465 IFS objects 134
files on hold 487 restrictions
replication bi-directional environments 334
advanced topic parameters 237 comparing file data 466
by object type 97 data areas and data queues 114
configuring advanced techniques 383 independent ASP 662
constraint-induced modifications 400 journal at create 344
defaults for object types 97 journal receiver management 214
direction of 26 journal receiver size *MAXOPT3 204
ending data group 44 journaled *DTAARA, *DTAQ objects 114
ending MIMIX 44 journaled IFS objects 120
implicitly identified parent objects 118, 122 legacy cooperative processing 112
independent ASP 658 LOBs 109
maximum size threshold 178 MIMIX Dynamic Apply 111
positional vs. keyed 385 number of objects in journal 29
process, remote journaling environment 68 QDFTJRN data area 344
retrieving extended attributes 369 remote journaling 64
spooled files 103 standby journaling 364
SQL stored procedures 421 target journal inspection 334
starting data group 44 retrying, data group activity entries 46
starting MIMIX 44 RJ link 37
supported paths 24 adding 227
system journal 24 changing 228
system journal process 55 data group definition parameter 236
unit of work for 27 description 68
user journal 24 end options 69
user profiles 500 identifying data groups that use 316
user-defined functions 421 sharing among data groups 69
what to exclude 84 switching considerations 73
replication manager 51, 337 threshold 238
replication path 50 RJ link monitors
reports description 71
detail 544 displaying status of 71
list detail 544 ending 71
list summary 544 not installed, status when 71
types for compare commands 444 operation 71
requirement rule groups
objects and journal in same ASP 29 MIMIX 677
requirements rules 669
independent ASP 662 messages from 671
journal at create 343 MIMIX 669
journaling 343 notifications from 671
keyed replication 385 relationship with rules 669
legacy cooperative processing 112 run command considerations 671
MIMIX Dynamic Apply 111 run on management system 671
standby journaling 364 user-defined 668
system values for installing 41 running
user journal replication of data areas and data rule groups 675
rules 675 journal caching 363
journal standby state 362
S MIMIX processing with 363
save-while-active 423 overview 362
considerations 423 requirements 364
examples 424 restrictions 364
options 424 starting
wait time 423 data groups initially 315
search process, *ANY transfer definitions 181 procedure at step 331, 560
security procedures 560
considerations, CMPFILDTA command 467 system and journal managers 306
functions provided by Vision Solutions 638 TCP server 191
general information 81 TCP server automatically 192
remote journaling implications 188 starting journaling
security audit journal 55 data areas and data queues 354
security class table, product 639 file entry 347
sequence number files 347
maximum size option 204 IFS objects 350
sequence number size option, *MAXOPT3 222 IFS tracking entry 350
serialization object tracking entry 354
database files and journaled objects 88 startup programs
object changes with database 75 changes for remote journaling 146
servers MIMIX subsystem 92
starting DDM 187 status
starting TCP 191 audit compliance 684
services journaling data areas and data queues 354
cluster 36 journaling files 347
short transfer definition name 176 journaling IFS objects 350
source physical files 106, 107 journaling tracking entries 350, 354
source system 26 procedures and steps 561
spooled files 103 status receive process
compare commands 444 description 56
keeping deleted 103 status send process
options 103 description 56
retaining on target system 243 status, values affecting updates to 238
SQL stored procedures 421 step
replication requirements 421 begin procedure at 331, 560
SQL table identity columns 401 step messages 573
alternatives to SETIDCOLA 403 adding 573
check for replication of 406 list available 573
problem 401 removing 574
SETIDCOLA command details 404 step program
SETIDCOLA command examples 405 changing 571
SETIDCOLA command limitations 402 creating a custom program 570
SETIDCOLA command usage notes 405 custom, for switching 561
setting attribute 406 ENDUSRAPP 562
when to use SETIDCOLA 402 format STEP0100 571
standby journaling STRUSRAPP 562
IBM i5/OS option 42 362 step programs 570
display available 570
steps 38, 567 limit maximum size 499
adding to procedure 568 LOB data 501
changing attributes 569 object tracking entries 530
enabling and disabling 569 object, IFS, DLO overview 503
remove from procedure 570 objects 516
runtime attributes 559 objects in a data group 516
storage, data libraries 167 objects without a data group 517
stranded journal on target, journal entries 216 related file 506
subsystem resources for 509
MIMIXSBS, starting 92 status changes caused by 501
subtree 431 tracking entries 507
IFS objects 432 user profiles 499, 500
switch procedure customization 561 synchronous delivery 65
switch procedure error messages 561 unconfirmed entries 66
switching SYSBAS 658, 660
allowing 235 system ASP 659
data group 28 system definition 36, 165
enabling journaling on target system 235 changing 170
example RJ journal definitions for 211 creating 169
independent ASP restriction 663 parameter tips 166
MIMIX Model Switch Framework with RJ link system journal 55
73 system journal replication 24
preventing identity column problems 401 advanced techniques 383
remote journaling changes to 48 journaling requirements 343
removing stranded journal receivers 216 omitting content 415
RJ link considerations 73 system library list 164, 665
synchronization check, automatic 238 system manager 34
synchronizing 497 multimanagement environment 170
activity entries overview 504 system value
commands for 499 QALWOBJRST (Allow object restore option)
considerations 499 41
data group activity entries 528 QALWUSRDMN (Allow user domain objects
database files 514 in libraries) 41
database files overview 505 QAUDCTL 55
DLOs 524 QAUDCTL (Auditing control) 41
DLOs in a data group 524 QAUDLVL 55, 103
DLOs without a data group 525 QAUDLVL (Security auditing level) 41
establish a start point 508 QLIBLCKLVL (Library locking level) 41
file entry overview 505 QMLTTHDACN (Multithreaded job action) 42
files with triggers 506 QRETSVRSEC (Retain server security data)
IFS objects 520 42
IFS objects by path name only 522 QSECURITY (System security level) 41
IFS objects in a data group 520 QSYSLIBL 164
IFS objects without a data group 522 QSYSLIBL (System part of the library list) 41
IFS tracking entries 530 QTIMZON (Time zone) 42
including logical files 506 system, RJ identifier for a 207, 208
independent ASP, data in an 501 system, roles 26
initial 510 system value
initial configuration 508 QTIME (Time of day) 42
initial configuration MQ environment 508
T trigger programs
target journal inspection 35, 334 defined 397
automatic corrections 337 synchronizing files 398
disabling 340 triggers
enabling 338 avoiding problems 468
example 335 comparing file data 468
false errors 335 disabling during synchronization 506
journals not inspected 334 read 468
restriction 334 update, insert, and delete 468
target journal state 202 T-ZC journal entries
target system 26 access types 415
TCP server, autostart job entry for 178 configuring to omit 416
TCP/IP omitting 415
adding to startup program 146
configuring native 159 U
creating port aliases for 160 unconfirmed journal entries 66, 73
temporary files to not replicate 84 unique key
thread groups 475 comparing file data restriction 466
threshold, backlog file entry options for replicating 240
adjusting 251 replication of 385
container send 244 user ASP 659
database apply 242 user exit points 628
database reader/send 241 user exit program
object apply 244 data areas and data queues 90
object retrieve 244 IFS objects 90
object send 243 large objects (LOBs) 108
remote journal link 238 user exit, generic 625
threshold, CMPRCDCNT commit 381 user journal replication 24
timestamps, automatic 238 advanced techniques 383
tracking entries journaling requirements 343
loading 286 requirements for data areas and data queues
loading for data areas, data queues 287 113
loading for IFS objects 286 supported journal entries for data areas, data
purpose 77 queues 733
tracking entry tracking entry 77
file identifiers (FIDs) 317 user profiles
transfer definition 36, 174, 475 default 167
changing 185 exclude from replication 84, 85, 86
contextual system support (*ANY) 32, 181 MIMIXOWN 188
fields in data group definition 235 password indicator attribute 725
fields in system definition 166 replication of 104
multiple network system environment 172 specifying status 243
other uses 174 status attribute 722
parameter tips 176 synchronizing 499
short name 176 system distribution directory entries 500
transfer protocols Vision-supplied 40
OptiConnect parameters 178 user-defined functions 421
SNA parameters 177
TCP parameters 176
V
verifying
communications link 196, 197
initial synchronization 512
journaling, IFS tracking entries 352
journaling, object tracking entries 356
journaling, physical files 349
key attributes 389
send and receive processes automatically
238
W
wait time
comparing file data 475
reader 235
WRKDG SELECT statement 813