SDD 1.8 - 3.4 User Guide English Version
GC52-1309-05
IBM License Agreement for Machine Code This guide might contain references to machine code, which includes Licensed Internal Code. Licensed Internal Code is licensed to you under the terms of the IBM License Agreement for Machine Code. Carefully read the agreement. By using this product, you agree to abide by the terms of this agreement and applicable copyright laws. See IBM license agreement for machine code on page 467.
Note Before using this information and the product it supports, read the information in Notices on page 465.
This edition applies to the following versions of IBM Multipath Subsystem Device Driver and to all subsequent releases and modifications until otherwise indicated in new editions:

Subsystem Device Driver Version 1 Release 8 Modification 0 Level x for HP-UX
Subsystem Device Driver Version 1 Release 7 Modification 2 Level x for AIX
Subsystem Device Driver Version 1 Release 6 Modification 5 Level x for Solaris
Subsystem Device Driver Version 1 Release 6 Modification 4 Level x for Windows
Subsystem Device Driver Version 1 Release 6 Modification 3 Level x for Linux
Subsystem Device Driver Version 1 Release 6 Modification 0 Level x for Netware
Subsystem Device Driver Device Specific Module Version 2 Release 4 Modification x Level x
Subsystem Device Driver Device Specific Module Version 2 Release 4 Modification 3 Level 4 for Windows
Subsystem Device Driver Path Control Module Version 3 Release 0 Modification x Level x
Subsystem Device Driver Path Control Module Version 2 Release 6 Modification 4 Level 0

This edition replaces GC52-1309-04.

Copyright IBM Corporation 1999, 2013.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Figures
Tables
About this guide
Chapter 1. Overview of the SDD
Chapter 2. Using the SDD on an AIX host system
Chapter 3. Using SDDPCM on an AIX host system
Chapter 4. Using the SDD on an HP-UX host system
Chapter 5. Using SDD on a Linux host system
Chapter 6. Using the SDD on a NetWare host system
Chapter 7. Using the SDD on a Solaris host system
Chapter 8. Using the SDD on a Windows NT host system
Chapter 9. Using the SDD on a Windows 2000 host system
Chapter 10. Using SDD on a Windows Server 2003 host system
Chapter 11. Using SDDDSM on a Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system
Chapter 12. Using the SDD server and the SDDPCM server
Chapter 13. Using the datapath commands
Appendix A. SDD, SDDPCM, and SDDDSM data collection for problem analysis
Notices
Index
Figures

1. Multipath connections between a host system and the disk storage in a disk storage system
2. Multipath connections between a host system and the disk storage with the SAN Volume Controller
3. SDDPCM in the protocol stack
4. Workload imbalance when one link receives twice the load of the other links
5. Workload imbalance when one link is more heavily loaded than another link
6. Workload imbalance when one host sharing workload across two paths loses one path
7. Example showing ESS devices to the host and path access to the ESS devices in a successful SDD installation on a Windows 2000 host system
8. Example showing ESS devices to the host and path access to the ESS devices in a successful SDD installation on a Windows Server 2003 host system
9. Example showing SAN Volume Controller devices to the host and path access to the SAN Volume Controller devices in a successful SDDDSM installation on a Windows Server 2003 host system
Tables

1. SDD platforms on supported storage devices
2. SDD in the protocol stack
3. Package-naming relationship between SDD 1.3.3.x and SDD 1.4.0.0 (or later)
4. SDD 1.4.0.0 (or later) installation packages for different AIX OS levels and the supported AIX kernel mode, application mode, and interface
5. Major files included in the SDD installation package
6. List of previously installed installation packages that are supported with the installation upgrade
7. Maximum LUNs allowed for different AIX OS levels
8. Recommended maximum paths supported for different number of LUNs on AIX 5.2 or later
9. Recommended SDD installation packages and supported HACMP modes for SDD versions earlier than SDD 1.4.0.0
10. Software support for HACMP 4.5 on AIX 4.3.3 (32-bit only), 5.1.0 (32-bit and 64-bit), 5.2.0 (32-bit and 64-bit)
11. Software support for HACMP 4.5 on AIX 5.1.0 (32-bit and 64-bit kernel)
12. PTFs for APARs on AIX with fibre-channel support and the SDD server daemon running
13. SDD-specific SMIT panels and how to proceed
14. Commands
15. SDD installation scenarios
16. Patches necessary for proper operation of SDD on HP-UX
17. SDD components installed for HP-UX host systems
18. System files updated for HP-UX host systems
19. SDD commands and their descriptions for HP-UX host systems
20. SDD components for a Linux host system
21. Summary of SDD commands for a Linux host system
22. SDD installation scenarios
23. Operating systems and SDD package file names
24. SDD components installed for Solaris host systems
25. System files updated for Solaris host systems
26. SDD commands and their descriptions for Solaris host systems
27. Windows 2000 clustering SCSI-2 Reserve/Release and Persistent Reserve/Release support with MSCS
28. Commands
Summary of changes
This guide contains the information that was published in the IBM System Storage Multipath Subsystem Device Driver User's Guide, together with technical updates to that information. All changes to this guide are marked with a vertical bar (|) in the left margin.
Note: For more recent updates that are not included in this guide, go to the SDD website at: www.ibm.com/servers/storage/support/software/sdd
Updated information
With SDDDSM version 2.4.3.3-5, SDD extends support to the Microsoft Windows Server 2012 platform.

With SDDDSM version 2.4.3.4, the logging feature is enhanced to dynamically configure the size and count of the log files. Two new parameters, max_log_size and max_log_count, are added to the sddsrv.conf file. Changes to this edition also include an update to the descriptions of the max_log_size and max_log_count parameters in the pcmsrv.conf file.
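For reference, the following sketch shows how these two parameters might appear in the sddsrv.conf file. It assumes the usual one-parameter-per-line, name = value format of that file; the values 50 and 3 are illustrative examples only, not documented defaults.

   max_log_size = 50
   max_log_count = 3

Here max_log_size would control the maximum size of each sddsrv log file and max_log_count the number of log files that sddsrv keeps.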
Highlighting conventions
The following typefaces are used to show emphasis:

boldface
   Text in boldface represents menu items and command names.

italics
   Text in italics is used to emphasize a word. In command syntax, it is used for variables for which you supply actual values.

monospace
   Text in monospace identifies the commands that you type, samples of command output, examples of program code or messages from the system, and configuration state of the paths or volumes (such as Dead, Active, Open, Closed, Online, Offline, Invalid, Available, Defined).
Related information
The tables in this section list and describe the following publications:
v The publications for the IBM TotalStorage Enterprise Storage Server (ESS) library
v The publications for the IBM System Storage DS8000 library
v The publications for the IBM System Storage DS6000 library
v The publications for the IBM System Storage DS5000 and DS Storage Manager library
v The publications for the IBM System Storage DS4000 library
v The publications for the IBM System Storage SAN Volume Controller library
v The publications for the IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication libraries
v Other IBM publications that relate to the ESS
v Non-IBM publications that relate to the ESS

See Ordering IBM publications on page xxvi for information about how to order publications. See How to send your comments on page xxvi for information about how to send comments about the publications.
The ESS library

SC26-7494 (See Note.): This guide describes the commands that you can use from the ESS Copy Services command-line interface (CLI) for managing your ESS configuration and Copy Services relationships. The CLI application provides a set of commands that you can use to write customized scripts for a host system. The scripts initiate predefined tasks in an ESS Copy Services server application. You can use the CLI commands to indirectly control Peer-to-Peer Remote Copy (PPRC) and IBM FlashCopy configuration tasks within an ESS Copy Services server group.

IBM TotalStorage Enterprise Storage Server Configuration Planner for Open-Systems Hosts, SC26-7477 (See Note.): This guide provides guidelines and work sheets for planning the logical configuration of an ESS that attaches to open-systems hosts.

IBM TotalStorage Enterprise Storage Server Configuration Planner for S/390 and IBM Eserver zSeries Hosts, SC26-7476 (See Note.): This guide provides guidelines and work sheets for planning the logical configuration of an ESS that attaches to either the IBM S/390 or IBM Eserver zSeries host system.

IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide: This guide provides guidelines for attaching the ESS to your host system and for migrating to fibre-channel attachment from either a Small Computer System Interface (SCSI) or from the IBM SAN Data Gateway.

IBM TotalStorage Enterprise Storage Server Introduction and Planning Guide, GC26-7444: This guide introduces the ESS product and lists the features you can order. It also provides guidelines for planning the installation and configuration of the ESS.

IBM TotalStorage Storage Solutions Safety Notices, GC26-7229: This publication provides translations of the danger notices and caution notices that IBM uses in ESS publications.

IBM TotalStorage Enterprise Storage Server SCSI Command Reference, SC26-7297: This publication describes the functions of the ESS. It provides reference information, such as channel commands, sense bytes, and error recovery procedures for UNIX, IBM Application System/400 (IBM AS/400), and IBM Eserver iSeries 400 hosts.

IBM TotalStorage Enterprise Storage Server Subsystem Device Driver User's Guide, SC26-7637: This publication describes how to use the IBM TotalStorage ESS Subsystem Device Driver (SDD) on open-systems hosts to enhance performance and availability on the ESS. SDD creates redundant paths for shared LUNs. SDD permits applications to run without interruption when path errors occur. It balances the workload across paths, and it transparently integrates with applications.

IBM TotalStorage Enterprise Storage Server User's Guide: This guide provides instructions for setting up and operating the ESS and for analyzing problems.

IBM TotalStorage Enterprise Storage Server Web Interface User's Guide: This guide provides instructions for using the two ESS Web interfaces: ESS Specialist and ESS Copy Services.

IBM TotalStorage Common Information Model Agent for the Enterprise Storage Server Installation and Configuration Guide, GC35-0485: This guide introduces the common interface model (CIM) concept and provides instructions for installing and configuring the CIM agent. The CIM agent acts as an open-system standards interpreter, providing a way for other CIM-compliant storage resource management applications (IBM and non-IBM) to interoperate with each other.

IBM TotalStorage Enterprise Storage Server Application Programming Interface Reference, GC35-0489: This reference provides information about the ESS application programming interface (API).

Note: No hardcopy book is produced for this publication. However, a PDF file is available from http://www-947.ibm.com/systems/support/.
The DS8000 library

IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515
IBM System Storage DS Command-Line Interface User's Guide for the DS6000 series and DS8000 series, GC53-1127
IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917
IBM System Storage DS Application Programming Interface 5.4.1 and 5.4.2 Installation and Reference, GC35-0516
IBM CIM Agent for DS Open Application Programming Interface 5.5, GC26-7929

The DS6000 library

IBM System Storage DS6000 Introduction and Planning Guide, GC26-7924
IBM System Storage DS6000 Host System Attachment Guide, GC26-7923
IBM System Storage DS6000 Messages Reference, GC26-7920
IBM System Storage DS Command-Line Interface User's Guide for the DS6000 series and DS8000 series, GC53-1127
IBM System Storage DS6000 Quick Start Guide, GC26-7921
The DS5000 and DS Storage Manager library

IBM System Storage EXP5000 Storage Expansion Enclosure Installation, User's, and Maintenance Guide
IBM System Storage DS Storage Manager Command-Line Programming Guide
IBM System Storage DS5000 Quick Start Guide: Quick Reference for the DS5100, DS5300 and EXP5000

The DS4000 library

IBM TotalStorage DS4300 Fibre Channel Storage Subsystem Installation, User's, and Maintenance Guide
IBM System Storage DS4000 Storage Manager Concepts Guide, GC26-7734
IBM System Storage DS4000 Storage Manager 10 Installation and Host Support Guide
IBM System Storage DS4000 Storage Manager Copy Services Guide
IBM System Storage DS4000 Storage Manager Fibre Channel and Serial ATA Intermix Premium Feature Installation Overview
IBM System Storage DS4000 Hard Drive and Storage Expansion Enclosure Installation and Migration Guide
IBM System Storage DS3000/DS4000 Command-Line Programming Guide
IBM System Storage DS4000 EXP420 Storage Expansion Unit Installation, User's and Maintenance Guide
IBM System Storage DS4000 EXP810 Storage Expansion Enclosure Installation, User's and Maintenance Guide
IBM TotalStorage DS4000 EXP700 and EXP710 Storage Expansion Enclosures Installation, User's, and Maintenance Guide, GC26-7735
IBM System Storage DS4200/DS4700 Quick Start Guide, GC27-2147
IBM System Storage DS4700 Installation, User's and Maintenance Guide, GC26-7843
IBM System Storage DS4800 Quick Start Guide, GC27-2148
IBM System Storage DS4800 Installation, User's and Maintenance Guide, GC26-7845
IBM System Storage DS4800 Controller Cache Upgrade Kit Instructions, GC26-7774

The SAN Volume Controller library

IBM System Storage SAN Volume Controller Hardware Maintenance Guide, GC27-2226
The Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication libraries
The following publications make up the Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication libraries. These publications are available from the following website: http://www-05.ibm.com/e-business/linkweb/publications/servlet/pbi.wss
IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337
IBM Tivoli Storage Productivity Center User's Guide
IBM Tivoli Storage Productivity Center Messages
IBM Tivoli Storage Productivity Center Command-Line Interface Reference
IBM Tivoli Storage Productivity Center Problem Determination Guide
IBM Tivoli Storage Productivity Center Workflow User's Guide
This guide uses the following terminology:
v The phrase supported storage devices refers to the following types of devices:
   RSSM
   DS3950, DS4100 (AIX only), DS4200, DS4300, DS4500, DS4700, DS4800, DS5020, DS5100, DS5300, DS6000, and DS8000
   ESS
   SAN Volume Controller
v The phrase disk storage system refers to ESS, DS8000, or DS6000 devices.
v The phrase virtualization product refers to the SAN Volume Controller. Table 1 indicates the products that different SDD platforms support.
v The phrase DS4000 refers to DS4100 (AIX only), DS4200, DS4300, DS4500, DS4700, and DS4800 devices.
v The phrase DS5000 refers to DS5100 and DS5300 devices.
v The phrase RSSM refers to IBM BladeCenter S SAS RAID Controller Module devices.
v The phrase Open HyperSwap refers to Open HyperSwap replication.
v The phrase Open HyperSwap device refers to a pair of volumes that are managed in a Tivoli Productivity Center for Replication copy set.
v The phrase Open HyperSwap session refers to a collection of Tivoli Productivity Center for Replication managed copy sets.
Table 1. SDD platforms on supported storage devices

Table 1 shows, for each SDD platform (such as Novell, Sun, Windows NT SDD, and Windows 2000 and Windows 2003 SDD), which of the supported storage devices that platform supports: ESS, DS8000, DS6000, DS5000, DS4000, DS3950, SAN Volume Controller, and RSSM.
The SDD supports a storage-redundant configuration environment for a host system that is attached to storage devices. It provides enhanced data availability, dynamic input/output (I/O) load-balancing across multiple paths, and automatic path failover protection.

This guide provides step-by-step procedures on how to install, configure, and use SDD features on the following host systems:
v IBM AIX (SDD and SDDPCM)
v HP-UX
v Supported Linux distributions, levels, and architectures. For up-to-date information about the specific kernel levels supported in this release, see the Readme file on the CD-ROM or visit the SDD website: www.ibm.com/servers/storage/support/software/sdd
v Novell Netware (disk storage systems only)
v Sun Solaris
v Microsoft Windows NT, Windows 2000, or Windows 2003 SDD
v Microsoft Windows Server 2003 or Windows Server 2008 SDD
v Microsoft Windows Server 2003, Windows Server 2008, or Windows Server 2012 SDDDSM
Table 2 shows the position of the SDD in the protocol stack. I/O operations that are sent to the SDD proceed to the host disk driver after path selection. When an active path experiences a failure (such as a cable or controller failure), the SDD dynamically switches to another path.
Table 2. SDD in the protocol stack

Table 2 illustrates where the SDD sits in the protocol stack on each platform: disk I/O requests pass from the application and file system to the Subsystem Device Driver, which routes them to the native disk driver (for example, the Sun Solaris disk driver or the HP disk driver) and then to the SCSI or fibre-channel adapter driver.
Each SDD vpath device represents a unique physical device on the storage server. Each physical device is presented to the operating system as an operating system disk device. There can be up to 32 operating system disk devices that represent up to 32 different paths to the same physical device. The SDD vpath devices behave almost like native operating system disk devices. You can use most disk device operations of operating systems on the SDD vpath devices, including commands such as open, close, dd, or fsck.
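As a simple illustration (a sketch only; the device and file system names vpath0 and /dev/fslv00 are examples, not names from this guide), standard AIX commands operate on an SDD vpath device much as they would on a native hdisk:

   # List the configured SDD vpath devices and the hdisk paths behind them
   lsvpcfg

   # Read directly from a vpath device with dd, as you would from an hdisk
   dd if=/dev/vpath0 of=/dev/null bs=128k count=100

   # Check a file system that resides on a vpath-backed logical volume
   fsck -y /dev/fslv00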
Figure 1 shows multipath connections between a host system and the disk storage in a disk storage system. The SDD, residing in the host system, uses this multipath configuration to enhance data availability. That is, when there is a path failure, the SDD reroutes I/O operations from the failing path to an alternate operational path. This capability prevents a single failing bus adapter on the host system, SCSI or fibre-channel cable, or host-interface adapter on the disk storage system from disrupting data access.
Figure 1. Multipath connections between a host system and the disk storage in a disk storage system
Figure 2 shows a host system that is attached through fibre-channel adapters to a SAN Volume Controller that has internal components for redundancy and multipath configuration. The SDD, residing in the host system, uses this multipath configuration to enhance data availability. That is, when there is a path failure, the SDD reroutes I/O operations from the failing path to an alternate operational path. This capability prevents a single failing bus adapter on the host system, fibre-channel cable, or host-interface adapter on the SAN Volume Controller from disrupting data access.
Figure 2. Multipath connections between a host system and the disk storage with the SAN Volume Controller
Note: SAN Volume Controller does not support parallel SCSI attachment.
The SDD dynamically selects an alternate I/O path when it detects a software or hardware problem. Some operating system drivers report each detected error in the system error log. With the SDD automatic path-failover feature, some reported errors are actually recovered from an alternative path.
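On AIX, for example, you can see the effect of this behavior by checking the path states that SDD reports and the errors that the drivers log (a sketch; the device number 0 is only an example):

   # Display the state of each path (for example OPEN, CLOSE, DEAD, INVALID) for one SDD device
   datapath query device 0

   # Review the entries that the device drivers logged in the AIX system error log
   errpt | more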
Concurrent download of licensed machine code for DS3950, DS4000 and DS5000
If you are using the SDD multipath mode, you can concurrently download and install the licensed machine code while your applications continue to run, as long as you configure redundant paths to each storage controller port in addition to the multiple host adapter ports. Because switching a device to another controller is a time-consuming recovery action and affects I/O performance, you can use this redundancy to avoid an unnecessary controller failover if a path fails. Therefore, configure a minimum of four paths for each LUN with two host adapter ports and two storage controller ports where each host adapter port has redundancy to each storage controller port and vice versa. Attention: Do not shut down the host or reconfigure the SDD during the concurrent download of licensed machine code or you might lose your initial SDD configuration.
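Before you start a concurrent download, you can confirm that each LUN really has this level of redundancy; for example, with the SDD datapath command (a sketch; the path counts shown by the command depend on your configuration):

   # List every SDD device with its paths; verify at least four paths per LUN,
   # spread across two host adapter ports and two storage controller ports
   datapath query device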
Concurrent download of licensed machine code for IBM BladeCenter S SAS RAID Controller Module (RSSM)
With the SDD multipath mode (configured with two paths per multipath device), you can concurrently download and install the licensed machine code while your applications continue to run. During the code upgrade, each RSSM node is upgraded sequentially. The node that is being upgraded is temporarily unavailable, and all I/O operations to that node fail. However, failed I/O operations are directed to the other RSSM node, and applications do not see any I/O failures. Attention: Do not shut down the host or reconfigure the SDD during the concurrent download of licensed machine code or you might lose your initial SDD configuration. Refer to RSSM documentation, at the following URL, for information about performing concurrent download of LMC for RSSM: http://www.ibm.com/systems/support/supportsite.wss/ docdisplay?lndocid=MIGR-5078491&brandind=5000020
Active/Passive dual array controller path-selection algorithm for DS3950, DS4000 and DS5000 products
The DS4000 and DS5000 products are dual array controller disk subsystems. Each LUN is assigned to one controller, which is considered the owner, or the active controller, of a particular LUN. The other controller is considered as an alternate, or passive, controller. Thus, the SDD distinguishes the following paths to the DS4000 and DS5000 product LUN: v Paths on the ownership (active) controller v Paths on the alternate (passive) controller With this type of active/passive dual-controller subsystem device, I/O can be sent only to the ownership controller. When the SDD selects paths for I/O, it selects
paths that are connected only to the ownership controller. If there is no path on the ownership controller that can be used, SDD changes the LUN controller ownership to an alternate controller, switches the paths that were passive to active, and then selects these active paths for I/O.
The swap is performed without any user interaction, and with minimal application impact. In addition, while Open HyperSwap is enabled, the Metro Mirror session supports disaster recovery. If a write is successful on the primary site but is unable to get replicated on the secondary site, IBM Tivoli Storage Productivity Center for Replication suspends the entire set of data consistency checking, thus ensuring that a consistent copy of the data exists on the secondary site. If the system fails, this data might not be the latest data, but the data should be consistent and allow the user to manually switch host servers to the secondary site.

You can control Open HyperSwap from any system that is running IBM Tivoli Storage Productivity Center for Replication (AIX, Windows, Linux, or z/OS). However, the volumes that are involved with Open HyperSwap must be attached to an AIX system that is connected to IBM Tivoli Storage Productivity Center for Replication.

SDD distinguishes the paths of the source volume from the paths of the target volume on an Open HyperSwap copy set. With an Open HyperSwap device, I/O can be sent only to the source volume, so when SDD selects paths for I/O, it selects only paths that are connected to the source volume. If there is no path on the source volume that can be used, SDD initiates an Open HyperSwap request to Tivoli Storage Productivity Center for Replication, and they work together to perform the swap. After the swap, SDD selects the target volume paths for I/O. The following output of the pcmpath query device command shows that the target volume paths are being selected.
DEV#:  14  DEVICE NAME: hdisk14  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: session1
OS Direction: H1<-H2
==========================================================================
PRIMARY SERIAL: 25252520000
-----------------------------
Path#      Adapter/Path Name    State     Mode      Select   Errors
    0         fscsi0/path0       OPEN    NORMAL       6091        0
    1         fscsi0/path2       OPEN    NORMAL       6300        0
    2         fscsi1/path4       OPEN    NORMAL       6294        0
    3         fscsi1/path5       OPEN    NORMAL       6187        0

SECONDARY SERIAL: 34343430000 *
-----------------------------
Path#      Adapter/Path Name    State    Errors
    4         fscsi0/path1       OPEN         0
    5         fscsi0/path3       OPEN         0
    6         fscsi1/path6       OPEN         0
    7         fscsi1/path7       OPEN         0
In the preceding example, the source volume is on site one, and the target volume is on site two. The output shows that after the swap from site one to site two, SDD selects paths of devices on site two. Note: The primary serial is not always the source volume. The primary serial is the serial number of the volume on site one, and secondary serial is the serial number of the volume on site two.
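If you use Open HyperSwap, you can also query the session from SDDPCM (a sketch; the output depends on your Tivoli Storage Productivity Center for Replication configuration):

   # Display the Open HyperSwap session that SDDPCM participates in, and its state
   pcmpath query session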
v SDD utility programs
v Support for SCSI-3 persistent reserve functions
v Support for AIX trace functions
v Support for more than 512 SAN Volume Controller devices from multiple SAN Volume Controller clusters on an AIX host
v Storage I/O priority feature in DS6000 and DS8000, only with AIX53 TL04 or later and with 64-bit kernel
v Two types of reserve policies: No reserve and Persistent reserve exclusive host
v General Parallel File System (GPFS)
v Virtual I/O Server with AIX 5.3 or later
v Dual Virtual I/O Server with AIX 5.3 or later

For more information about Virtual I/O Server, go to the following website:

http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html
Hardware
The following hardware components are needed:
v One or more supported storage devices
v A switch if using a SAN Volume Controller (no direct attachment allowed for SAN Volume Controller)
v Host system
v SCSI adapters and cables (for ESS only)
v Fibre-channel adapters and cables
Software
The following software components are needed:
v AIX operating system. Starting with SDD 1.6.1.0, the SDD package for AIX 5.3 (devices.sdd.53.rte) requires AIX53 TL04 with APAR IY76997. Starting with SDD 1.6.2.0, the SDD package for AIX 5.2 (devices.sdd.52.rte) requires AIX52 TL08 or later, and the SDD package for AIX 5.3 (devices.sdd.53.rte) requires AIX53 TL04 or later.
v SCSI and fibre-channel device drivers
v ibm2105.rte package for ESS devices (devices.scsi.disk.ibm2105.rte or devices.fcp.disk.ibm2105.rte package if using NIM)
v devices.fcp.disk.ibm.rte for DS8000, DS6000, and SAN Volume Controller

Packages for SDD 1.4.0.0 (and later) use new package names in order to comply with AIX packaging rules and allow for NIM installation. Table 3 shows the package-naming relationship between SDD 1.3.3.x and SDD 1.4.0.0 (or later).
Table 3. Package-naming relationship between SDD 1.3.3.x and SDD 1.4.0.0 (or later)

SDD 1.3.3.x             SDD 1.4.0.0 (or later)   Notes
ibmSdd_432.rte          Not applicable           Obsolete. This package has been merged with devices.sdd.43.rte.
ibmSdd_433.rte          devices.sdd.43.rte       Not applicable
ibmSdd_510.rte          Not applicable           Obsolete. This package has been merged with devices.sdd.51.rte.
ibmSdd_510nchacmp.rte   devices.sdd.51.rte       Not applicable
Not applicable          devices.sdd.52.rte       New package for AIX 5.2.0 (or later).
Table 3. Package-naming relationship between SDD 1.3.3.x and SDD 1.4.0.0 (or later) (continued)

SDD 1.3.3.x             SDD 1.4.0.0 (or later)   Notes
Not applicable          devices.sdd.53.rte       New package for AIX 5.3.0 (or later).
Not applicable          devices.sdd.61.rte       New package for AIX 6.1.0 (or later).
Notes:
1. SDD 1.4.0.0 (or later) no longer releases separate packages for concurrent and nonconcurrent High Availability Cluster Multiprocessing (HACMP). Both concurrent and nonconcurrent HACMP functions are now incorporated into one package for each AIX kernel level.
2. A persistent reserve issue arises when migrating from SDD to non-SDD volume groups after a reboot. This special case only occurs if the volume group was varied on prior to the reboot and auto varyon was not set when the volume group was created. See Understanding the persistent reserve issue when migrating from SDD to non-SDD volume groups after a system reboot on page 72 for more information.
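A quick sketch for verifying the operating-system prerequisites named in the Software section above; the APAR and fileset names are the ones cited in this section, and your own levels may differ:

oslevel -r                           # shows the AIX release and technology level, for example 5300-04
instfix -ik IY76997                  # checks whether a specific APAR is installed
lslpp -l devices.fcp.disk.ibm.rte    # confirms the host attachment fileset for DS8000, DS6000, and SAN Volume Controller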
Unsupported environments
The SDD does not support:
v A host system with both a SCSI and fibre-channel connection to a shared ESS logical unit number (LUN)
v Placing system primary paging devices (for example, /dev/hd6) on an SDD vpath device
v Any application that depends on a SCSI-2 reserve and release device on AIX
v Single-path mode during concurrent download of licensed machine code or during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement
v Multipathing to a system boot device
v Configuring the SDD vpath devices as system primary or secondary dump devices
v More than 600 SDD vpath devices if the host system is running AIX 4.3.3 or AIX 5.1.0
v More than 1200 SDD vpath devices if the host system is running AIX 5.2, AIX 5.3, or AIX 6.1
v DS8000, DS6000, and SAN Volume Controller with SCSI connectivity
v Multiple AIX servers without the SDD-supported clustering software, such as HACMP, installed
You must check for and download the latest authorized program analysis reports (APARs), maintenance-level fixes, and microcode updates from the following website: www-03.ibm.com/servers/eserver/support/unixservers/aixfixes.html
Fibre requirements
You must check for and download the latest fibre-channel device driver APARs, maintenance-level fixes, and microcode updates from the following website:
www.ibm.com/servers/eserver/support/unixservers/index.html
Notes:
1. If your host has only one fibre-channel adapter, it requires you to connect through a switch to multiple disk storage system ports. You must have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure.
2. The SAN Volume Controller always requires that the host be connected through a switch. For more information, see the IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide.
For information about the fibre-channel adapters that can be used on your AIX host system, go to the following website:
www.ibm.com/servers/storage/support
To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v The AIX host system is an IBM RS/6000 or IBM System p with AIX 4.3.3 (or later).
v The AIX host system has the fibre-channel device drivers installed along with all latest APARs.
v The bos.adt package is installed. The host system can be a single processor or a multiprocessor system, such as SMP.
v A fiber-optic cable connects each fibre-channel adapter to a disk storage system port.
v A fiber-optic cable connects each SAN Volume Controller fibre-channel adapter to a switch. The switch must also be configured correctly. See the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide for information about the SAN Volume Controller.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two paths to a device are attached.
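As a minimal check of these requirements, assuming standard AIX commands (the adapter numbering on your system may differ):

lslpp -l "bos.adt.*"              # confirm that the bos.adt package is installed
lsdev -C -c adapter | grep fcs    # list the installed fibre-channel adapters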
Note: The SDD allows the manual exclusion of supported devices from the SDD configuration. If you want to manually exclude supported devices (hdisks) from the SDD configuration, you must use the excludesddcfg command before configuring the SDD vpath devices. The excludesddcfg command reads the unique serial number of a device (hdisk) and saves the serial number in an exclude file. For detailed information about the excludesddcfg command, see Manual exclusion of devices from the SDD configuration on page 52.
10. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software you selected to install.
11. Check the default option settings to ensure that they are what you need.
12. Press Enter to install. SMIT responds with the following message:
+------------------------------------------------------------------------+
| ARE YOU SURE??                                                          |
| Continuing may delete information you may want to keep.                 |
| This is your last chance to stop before continuing.                     |
+------------------------------------------------------------------------+
13. Press Enter to continue. The installation process can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT. Remove the compact disc.
15. Check to see if the correct APARs are installed by issuing the following command:
instfix -i | grep IYnnnnn
where nnnnn represents the APAR numbers. If the APARs are listed, they are installed; go to Configuring fibre-channel-attached devices on page 18. Otherwise, go to step 3.
16. Repeat steps 1 through 14 to install the APARs.
where N is the FCP adapter number. For example, if you have two installed FCP adapters (adapter 0 and adapter 1), you must enter both of the following commands:
rmdev -dl fcs0 -R
rmdev -dl fcs1 -R
To verify the firmware level, ignore the first three characters in the ZB field. In the example, the firmware level is 3.22A1.
3. If the adapter firmware level is at the latest level, there is no need to upgrade; otherwise, the firmware level must be upgraded. For instructions on upgrading the firmware level, see the description for each firmware at: http://www-933.ibm.com/support/fixcentral/
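A minimal way to read the ZB field, assuming the adapter in question is fcs0; the output line shown is only illustrative:

lscfg -vl fcs0 | grep ZB

The resulting line looks similar to Device Specific.(ZB)........S2F3.22A1; ignoring the first three characters (S2F) gives the firmware level 3.22A1.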
Fileset                  Level     State      Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  sddServer.rte          1.0.0.0   COMMITTED  IBM SDD Server for AIX

Path: /etc/objrepos
  sddServer.rte          1.0.0.0   COMMITTED  IBM SDD Server for AIX
For instructions on how to remove the stand-alone version of sddServer (for ESS Expert) from your AIX host system, see the IBM Subsystem Device Driver Server 1.0.0.0 (sddsrv) README for IBM TotalStorage Expert V2R1 at the following website: www.ibm.com/servers/storage/support/software/swexpert/ For more information about the SDD server daemon, go to SDD server daemon on page 67.
Understanding SDD support for IBM System p with static LPARs configured
The IBM System p server supports static LPARs as a standard feature, and users can partition them if they choose to do so. Each LPAR is composed of one or more processors, some dedicated memory, and dedicated I/O adapters. Each partition has an instance of an operating system and does not share IBM System p hardware resources with any other partition. So each partition functions the same way that it does on a stand-alone system. Storage subsystems need to be shared the same way that they have always been shared (shared storage pool, shared ports into the storage subsystem, and shared data on concurrent mode) where the application is capable of sharing data.

If a partition has multiple fibre-channel adapters that can see the same LUNs in a supported storage device, the path optimization can be performed on those adapters in the same way as in a stand-alone system. When the adapters are not shared with any other partitions, SCSI reservation, persistent reserve, and LUN level masking operate as expected (by being "bound" to an instance of the operating system). The SDD provides the same functions on one of the partitions or LPARs of an IBM System p server as it does on a stand-alone server.
Installation packages for 32-bit and 64-bit applications on AIX 4.3.3 (or later) host systems
Table 4. SDD 1.4.0.0 (or later) installation packages for different AIX OS levels and the supported AIX kernel mode, application mode, and interface

SDD installation package names   AIX OS level   AIX kernel mode   Application mode   SDD interface
devices.sdd.43.rte (Note 1)      AIX 4.3.3      32-bit            32-bit, 64-bit     LVM, raw device
devices.sdd.51.rte               AIX 5.1.0      32-bit, 64-bit    32-bit, 64-bit     LVM, raw device
devices.sdd.52.rte               AIX 5.2.0      32-bit, 64-bit    32-bit, 64-bit     LVM, raw device
devices.sdd.53.rte               AIX 5.3.0      32-bit, 64-bit    32-bit, 64-bit     LVM, raw device
devices.sdd.61.rte (Note 2)      AIX 6.1.0      64-bit            32-bit, 64-bit     LVM, raw device

Notes:
1. devices.sdd.43.rte is supported only by the ESS and virtualization products.
2. devices.sdd.61.rte supports only the 64-bit kernel.
Switching between 32-bit and 64-bit modes on AIX 5.1.0, AIX 5.2.0, and AIX 5.3.0 host systems
SDD supports AIX 5.1.0, AIX 5.2.0 and AIX 5.3.0 host systems that run in both 32-bit and 64-bit kernel modes. You can use the bootinfo -K or ls -al /unix command to check the current kernel mode in which your AIX 5.1.0, 5.2.0, or 5.3.0 host system is running. The bootinfo -K command directly returns the kernel mode information of your host system. The ls -al /unix command displays the /unix link information. If the /unix links to /usr/lib/boot/unix_mp, your AIX host system runs in 32-bit mode. If the /unix links to /usr/lib/boot/unix_64, your AIX host system runs in 64-bit mode. If your host system is currently running in 32-bit mode, you can switch it to 64-bit mode by typing the following commands in the given order:
ln -sf /usr/lib/boot/unix_64 /unix
ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
bosboot -ak /usr/lib/boot/unix_64
shutdown -Fr
The kernel mode of your AIX host system is switched to 64-bit mode after the system restarts. If your host system is currently running in 64-bit mode, you can switch it to 32-bit mode by typing the following commands in the given order:
ln -sf /usr/lib/boot/unix_mp /unix
ln -sf /usr/lib/boot/unix_mp /usr/lib/boot/unix
bosboot -ak /usr/lib/boot/unix_mp
shutdown -Fr
The kernel mode of your AIX host system is switched to 32-bit mode after the system restarts.
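To confirm the resulting mode after the restart, the same checks described above can be run:

bootinfo -K      # prints 32 or 64
ls -al /unix     # shows which kernel image /unix links to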
Table 5. Major files included in the SDD installation package

File name            Description
defdpo               Define method of the SDD pseudo-parent data path optimizer (dpo).
cfgdpo               Configure method of the SDD pseudo-parent dpo.
define_vp            Define method of the SDD vpath devices.
addpaths             The command that dynamically adds more paths to SDD vpath devices while they are in Available state.
cfgvpath             Configure method of the SDD vpath devices.
chgvpath             Method to change vpath attributes.
cfallvpath           Fast-path configuration method to configure the SDD pseudo-parent dpo and all SDD vpath devices.
vpathdd              The SDD device driver.
hd2vp                The SDD script that converts an hdisk device volume group to an SDD vpath device volume group.
vp2hd                The SDD script that converts an SDD vpath device volume group to an hdisk device volume group.
datapath             The SDD driver console command tool.
lquerypr             The SDD driver persistent reserve command tool.
lsvpcfg              The SDD driver query configuration state command.
excludesddcfg        The SDD driver tool to exclude user-specified hdisk devices from the SDD vpath configuration.
mkvg4vp              The command that creates an SDD volume group.
extendvg4vp          The command that extends the SDD vpath devices to an SDD volume group.
dpovgfix             The command that fixes an SDD volume group that has mixed vpath and hdisk physical volumes.
savevg4vp            The command that backs up all files belonging to a specified volume group with the SDD vpath devices.
restvg4vp            The command that restores all files belonging to a specified volume group with the SDD vpath devices.
sddsrv               The SDD server daemon for path reclamation and probe.
sample_sddsrv.conf   The sample SDD server configuration file.
lvmrecover           The SDD script that restores a system's SDD vpath devices and LVM configuration when a migration failure occurs.
sddfcmap             The SDD tool that collects information on ESS SCSI or disk storage systems fibre-channel devices through SCSI commands.
sddgetdata           The SDD data collection tool for problem analysis.
See Upgrading the SDD packages automatically without system restart on page 24 for instructions on upgrading the SDD. If SDD 1.4.0.0 (or later) is installed on the host system and you have an SDD PTF that you want to apply to the system, see Updating SDD packages by applying a program temporary fix on page 29 for instructions. A PTF file has a file extension of bff (for example, devices.sdd.43.rte.2.1.0.1.bff) and requires special consideration when being installed.
6. Select the compact disc drive that you are using for the installation, for example, /dev/cd0; and press Enter.
7. Press Enter again. The Install Software panel is displayed.
8. Select Software to Install and press F4. The Software to Install panel is displayed.
9. Select the installation package that is appropriate for your environment.
10. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software that you selected to install.
11. Check the default option settings to ensure that they are what you need.
12. Press Enter to install. SMIT responds with the following message:
ARE YOU SURE?? Continuing may delete information you may want to keep. This is your last chance to stop before continuing.
13. Press Enter to continue. The installation process can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT. Remove the compact disc.
Note: You do not need to reboot SDD even though the bosboot message indicates that a reboot is necessary.
This command reflects the newer SDD code version that will be updated.
6. Continue the installation by following the instructions beginning in step 3 on page 23.
1. Package migration from a nonpersistent reserve package with version 1.3.1.3 (or later) to a persistent reserve package with version 1.4.0.0 (or later). That is, from ibmSdd_432.rte to devices.sdd.43.rte and from ibmSdd_510.rte to devices.sdd.51.rte.
2. Package migration from version 1.3.1.3 or later to version 1.4.0.0 or later. Migration from SDD versions earlier than 1.3.1.3 is not supported.
3. Package upgrade from version 1.4.0.0 to a later version.
If the SDD currently installed on your host system is listed in Table 6, you can use this automatic migration to upgrade the SDD. If the SDD currently installed on your host system is not listed in Table 6, you need to upgrade the SDD manually.
Table 6. List of previously installed installation packages that are supported with the installation upgrade

Installation package name
ibmSdd_432.rte
ibmSdd.rte.432
ibmSdd_433.rte
ibmSdd.rte.433
ibmSdd_510.rte
ibmSdd_510nchacmp.rte
devices.sdd.43.rte
devices.sdd.51.rte
devices.sdd.52.rte
devices.sdd.53.rte
devices.sdd.61.rte
Beginning with SDD 1.6.0.0, SDD introduces a new feature in the configuration method to read the pvid from the physical disks and convert the pvid from hdisks to vpaths during the SDD vpath configuration. With this feature, you can skip the process of converting the pvid from hdisks to vpaths after configuring SDD devices. Furthermore, the SDD migration scripts can now skip the pvid conversion scripts. This tremendously reduces the SDD migration time, especially with a large number of SDD devices and LVM configuration environment.

Furthermore, the SDD now introduces two new environment variables that can be used in some configuration environments to customize the SDD migration and further reduce the time needed to migrate or upgrade the SDD. See Customizing the SDD migration or upgrade on page 26 for details.

During the migration or upgrade of the SDD, the LVM configuration of the host will be removed, the new SDD package will be installed, and then the original LVM configuration of the host will be restored.

Preconditions for migration or upgrade: The following are the preconditions for running the migration:
1. If HACMP is running, gracefully stop the cluster services.
2. If sddServer.rte (stand-alone IBM TotalStorage Expert SDD Server) is installed, uninstall sddServer.rte.
3. If there is any I/O running to SDD devices, stop these I/O activities.
4. Stop any activity related to system configuration changes. These activities are not allowed during the SDD migration or upgrade (for example, configuring more devices).
5. If there is active paging space created with the SDD devices, deactivate the paging space.
If any of the preceding preconditions are not met, the migration or upgrade will fail.

Customizing the SDD migration or upgrade: Beginning with SDD 1.6.0.0, SDD offers two new environment variables, SKIP_SDD_MIGRATION and SDDVG_NOT_RESERVED, for you to customize the SDD migration or upgrade to maximize performance. You can set these two variables based on the configuration of your system. The following discussion explains the conditions and procedures for using these two environment variables.

SKIP_SDD_MIGRATION: The SKIP_SDD_MIGRATION environment variable is an option available to bypass the SDD automated migration process (backup, restoration, and recovery of LVM configurations and SDD device configurations). This variable can help to decrease SDD upgrade time if you choose to reboot the system after upgrading SDD. For example, you might choose this option if you are upgrading other software that requires a reboot on the host at the same time. Another example is if you have a large number of SDD devices and LVM configuration, and a system reboot is acceptable. In these cases, you might want to choose this option to skip the SDD automated migration process. If you choose to skip the SDD automated migration process, follow these procedures to perform an SDD upgrade:
1. Issue export SKIP_SDD_MIGRATION=YES to set the SKIP_SDD_MIGRATION environment variable.
2. Issue smitty install to install SDD.
3. Reboot the system.
4. Issue varyonvg vg_name for the volume groups that are not auto-varied on after reboot.
5. Issue mount filesystem-name to mount the file system.

SDDVG_NOT_RESERVED: SDDVG_NOT_RESERVED is an environment variable that indicates to the SDD migration script whether the host has any SDD volume group reserved by another host. If the host has any SDD volume group reserved by another host, set this variable to NO. Otherwise, set this variable to YES. If this variable is not set, the SDD migration script assumes the value to be NO.

When this variable is set to YES, the SDD migration script skips some procedures. This dramatically reduces the SDD migration time. If SDDVG_NOT_RESERVED is set to NO, the SDD migration script makes certain assumptions and runs more steps.

Set this variable to YES if the host is:
1. A completely stand-alone host, that is, not sharing LUNs with any other host
2. A host in a clustering environment but all the volume groups (including the volume groups that belong to a cluster software resource group) are configured for concurrent access only
3. A host in a clustering environment with nonconcurrent volume groups but all the nonconcurrent volume groups on all the hosts are varied off. That is, no other node has made a reserve on the SDD volume groups.
If the host does not meet any of these three conditions, set SDDVG_NOT_RESERVED to NO, so that the SDD migration script runs the vp2hd pvid conversion script to save the pvid under hdisks.

Follow these procedures to perform SDD migration with this variable:
1. Issue export SDDVG_NOT_RESERVED=NO or export SDDVG_NOT_RESERVED=YES to set the SDDVG_NOT_RESERVED environment variable.
2. Follow the procedures in Procedures for automatic migration or upgrade.

Procedures for automatic migration or upgrade: To start the SDD migration or upgrade:
1. Install the new SDD package by entering the smitty install command. The migration or upgrade scripts run as part of the installation procedure that is initiated by the smitty install command. These scripts save SDD-related LVM configuration on the system.
SDD does not support mixed volume groups with the SDD vpath devices and supported storage hdisk devices. A volume group contains either the SDD vpath devices or only supported storage hdisk devices. If you do have a mixed volume group, the SDD migration or upgrade script fixes it by changing only the volume group to contain the SDD vpath devices. The following message displays when the SDD migration or upgrade script fixes the mixed volume group:
<volume group> has a mixed of SDD and non-SDD devices. dpovgfix <volume group> is run to correct it. Mixed volume group <volume group> is converted to SDD devices successfully!
The following messages indicate that the preuninstallation operations of the SDD are successful:
LVM configuration is saved successfully.
All mounted file systems are unmounted.
All varied-on volume groups are varied off.
All volume groups created on SDD devices are converted to non-SDD devices.
SDD Server is stopped.
All SDD devices are removed.
Ready for deinstallation of SDD!
2. The older SDD is uninstalled before the new SDD is installed.
3. The migration or upgrade script automatically configures the SDD devices and restores the original LVM configuration. The following messages indicate that the postinstallation of SDD is successful:
Original lvm configuration is restored successfully!
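After the automatic migration or upgrade completes, a short sanity check such as the following sketch can be run; the volume group and device names are your own:

lsvpcfg          # confirm that the SDD vpath devices are configured
lsvg -o          # confirm that the volume groups are varied on again
lspv             # confirm that the physical volumes are SDD vpath devices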
Error recovery for migration or upgrade: If any error occurred during the preinstallation or postinstallation procedures, such as disconnection of cables, you can recover the migration or upgrade. There are two common ways that the migration or the upgrade can fail:
Case 1: Smitty install failed.
Smitty install fails if there is an error during the preuninstallation activities for the older SDD package. An error message indicating the error is printed, so you can identify and fix the problem. Use the smitty install command to install the new SDD package again.
Case 2: Smitty install exits with an OK prompt but configuration of SDD devices or LVM restoration failed.
If there is an error during the postinstallation (either the configuration of SDD devices has failed or LVM restoration has failed), the new SDD package is still successfully installed. Thus, the Smitty install exits with an OK prompt. However, an error message indicating the error is printed, so you can identify and fix the problem. Then, run the shell script lvmrecover to configure SDD devices and automatically recover the original LVM configuration.
3. Enter the umount command to unmount all file systems belonging to the SDD volume groups. Enter the following command:
umount filesystem_name
4. Enter the varyoffvg command to vary off the volume groups. Enter the following command:
varyoffvg vg_name
5. If you are upgrading to an SDD version earlier than 1.6.0.0, or if you are upgrading to SDD 1.6.0.0 or later and your host is in an HACMP environment with nonconcurrent volume groups that are varied on on another host (that is, reserved by another host), run the vp2hd volume_group_name script to convert the volume group from the SDD vpath devices to supported storage hdisk devices. Otherwise, skip this step.
6. Stop the SDD server by entering the following command:
stopsrc -s sddsrv
7. Remove all the SDD vpath devices. Enter the following command:
rmdev -dl dpo -R
8. Use the smitty command to uninstall the SDD. Enter smitty deinstall and press Enter. The uninstallation process begins. Complete the uninstallation process. See Removing SDD from an AIX host system on page 50 for the step-by-step procedure for uninstalling the SDD. 9. If you need to upgrade the AIX operating system, for example, from AIX 4.3 to AIX 5.1, you could perform the upgrade now. If required, reboot the system after the operating system upgrade.
10. Use the smitty command to install the newer version of the SDD from the compact disc. Enter smitty install and press Enter. The installation process begins. Go to Installing and upgrading the SDD on page 23 to complete the installation process. 11. Use the smitty device command to configure all the SDD vpath devices to the Available state. See Configuring SDD on page 45 for a step-by-step procedure for configuring devices. 12. Enter the lsvpcfg command to verify the SDD configuration. Enter the following command:
lsvpcfg
13. If you are upgrading to an SDD version earlier than 1.6.0.0, run the hd2vp volume_group_name script for each SDD volume group to convert the physical volumes from supported storage hdisk devices back to the SDD vpath devices. Enter the following command:
hd2vp volume_group_name
14. Enter the varyonvg command for each volume group that was previously varied offline. Enter the following command:
varyonvg vg_name
15. Enter the lspv command to verify that all physical volumes of the SDD volume groups are SDD vpath devices. 16. Enter the mount command to mount all file systems that were unmounted in step 3 on page 28. Enter the following command:
mount filesystem-name
Attention: If the physical volumes of an SDD volume group are mixed with hdisk devices and SDD vpath devices, you must run the dpovgfix utility to fix this problem. Otherwise, SDD will not function properly. Enter the dpovgfix vg_name command to fix this problem.
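The whole manual-upgrade sequence above, collapsed into one sketch for reference; vg_name and filesystem_name are placeholders for your own volume groups and file systems, and steps 5 and 13 apply only in the cases described above:

umount filesystem_name     # step 3: unmount the file systems of the SDD volume groups
varyoffvg vg_name          # step 4: vary off the volume groups
vp2hd vg_name              # step 5: only in the cases described in step 5
stopsrc -s sddsrv          # step 6: stop the SDD server
rmdev -dl dpo -R           # step 7: remove all the SDD vpath devices
smitty deinstall           # step 8: uninstall the old SDD package
smitty install             # step 10: install the newer SDD package
smitty device              # step 11: configure the SDD vpath devices
lsvpcfg                    # step 12: verify the configuration
hd2vp vg_name              # step 13: only when upgrading to a version earlier than 1.6.0.0
varyonvg vg_name           # step 14: vary the volume groups back on
lspv                       # step 15: verify the physical volumes
mount filesystem_name      # step 16: remount the file systems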
6. Select the compact disc drive that you are using for the installation (for example, /dev/cd0) and press Enter.
7. Press Enter again. The Install Software panel is displayed.
8. Select Software to Install and press F4. The Software to Install panel is displayed.
9. Select the PTF package that you want to install.
10. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software that you selected to install.
11. If you only want to apply the PTF, select Commit software Updates? and tab to change the entry to no. The default setting is to commit the PTF. If you specify no to Commit Software Updates?, be sure that you specify yes to Save Replaced Files?.
12. Check the other default option settings to ensure that they are what you need.
13. Press Enter to install. SMIT responds with the following message:
+--------------------------------------------------------------------------+ | ARE YOU SURE?? | | Continuing may delete information you may want to keep. | | This is your last chance to stop before continuing. | +--------------------------------------------------------------------------+
14. Press Enter to continue. The installation process can take several minutes to complete.
15. When the installation is complete, press F10 to exit from SMIT.
16. Remove the compact disc.
Note: You do not need to reboot SDD even though the bosboot message indicates that a reboot is necessary.

Committing or Rejecting a PTF Update: Before you reject a PTF update, you need to stop sddsrv and remove all SDD devices. The following steps will guide you through this process. If you want to commit a package, you will not need to complete these steps. Follow these steps prior to rejecting a PTF update:
1. Stop SDD Server. Enter the following command:
stopsrc -s sddsrv
2. Enter the lspv command to find out all the SDD volume groups. 3. Enter the lsvgfs command for each SDD volume group to find out which file systems are mounted. Enter the following command:
lsvgfs vg_name
4. Enter the umount command to unmount all file systems belonging to SDD volume groups. Enter the following command:
umount filesystem_name
5. Enter the varyoffvg command to vary off the volume groups. Enter the following command:
varyoffvg vg_name
6. If you are downgrading to an SDD version earlier than 1.6.0.0, or if you are downgrading to SDD 1.6.0.0 or later but your host is in an HACMP environment with nonconcurrent volume groups that are varied on on another host (that is, reserved by another host), run the vp2hd volume_group_name script to convert the volume group from SDD vpath devices to supported storage hdisk devices. Otherwise, skip this step.
7. Remove all SDD devices. Enter the following command:
rmdev -dl dpo -R
Complete the following steps to commit or reject a PTF update with the SMIT facility.
1. Log in as the root user.
2. From your desktop window, enter smitty install and press Enter to go directly to the installation panels. The Software Installation and Maintenance menu is displayed.
3. Select Software Maintenance and Utilities and press Enter.
4. Select Commit Applied Software Updates to commit the PTF or select Reject Applied Software Updates to reject the PTF.
5. Press Enter. The Commit Applied Software Updates panel is displayed or the Reject Applied Software Updates panel is displayed.
6. Select Software name and press F4. The software name panel is displayed.
7. Select the Software package that you want to commit or reject.
8. Check the default option settings to ensure that they are what you need.
9. Press Enter. SMIT responds with the following message:
+---------------------------------------------------------------------------+ | ARE YOU SURE?? | | Continuing may delete information you may want to keep. | | This is your last chance to stop before continuing. | +---------------------------------------------------------------------------+
10. Press Enter to continue. The commit or reject process can take several minutes to complete.
11. When the installation is complete, press F10 to exit from SMIT.
Note: You do not need to reboot SDD even though the bosboot message might indicate that a reboot is necessary.

After the procedure to reject a PTF update completes successfully:
1. Use the smitty device command to configure all the SDD vpath devices to the Available state. See Configuring fibre-channel-attached devices on page 18 for a step-by-step procedure for configuring devices.
2. Enter the lsvpcfg command to verify the SDD configuration. Enter the following command:
lsvpcfg
3. If you have downgraded to an SDD version earlier than 1.6.0.0, run the hd2vp script for each SDD volume group to convert the physical volumes from supported storage hdisk devices back to SDD vpath devices. Enter the following command:
hd2vp vg_name
4. Enter the varyonvg command for each volume group that was previously varied offline. Enter the following command:
varyonvg vg_name
5. Enter the lspv command to verify that all physical volumes of the SDD volume groups are SDD vpath devices. 6. Enter the mount command to mount all file systems that were unmounted in step 4. Enter the following command:
mount filesystem-name
Note: If the physical volumes of an SDD volume group are mixed with hdisk devices and vpath devices, you must run the dpovgfix utility to fix this problem. Otherwise, SDD does not function properly. Enter the dpovgfix vg_name command to fix this problem.
lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl     for 2105 devices
lsdev -C -t 2145* -F name | xargs -n1 rmdev -dl     for 2145 devices
lsdev -C -t 2107* -F name | xargs -n1 rmdev -dl     for 2107 devices
lsdev -C -t 1750* -F name | xargs -n1 rmdev -dl     for 1750 devices
b. Verify that the hdisk devices are successfully removed using the following command:
lsdev -C -t 2105* -F name     for 2105 devices
lsdev -C -t 2145* -F name     for 2145 devices
lsdev -C -t 2107* -F name     for 2107 devices
lsdev -C -t 1750* -F name     for 1750 devices
4. If you are upgrading the operating system, follow these procedures. Otherwise, if you are not upgrading the operating system, skip to step 5.
a. Run stopsrc -s sddsrv to stop the sddsrv daemon.
b. Uninstall SDD.
c. Upgrade to the latest version of the host attachment, if required. The following are package names:
v ibm2105.rte for 2105 devices
v devices.fcp.disk.ibm.rte for 2145, 2107, and 1750 devices
d. If rootvg is on a SAN boot disk, restart the system.
e. Make sure no disk group is online except rootvg. Migrate the AIX OS level. The system automatically restarts at the end of migration.
f. Install SDD for the new AIX OS level.
g. Configure SDD vpath devices by running the cfallvpath command.
h. Continue to step 6.
5. If you are not upgrading the operating system, follow these steps.
a. Upgrade to the latest version of Host Attachment, if required. The following are Host Attachment Package names:
v ibm2105.rte for 2105 devices
v devices.fcp.disk.ibm.rte for 2145, 2107, and 1750 devices
b. After upgrading Host Attachment,
v If rootvg is on a SAN boot disk, restart the system. Then skip the rest of the steps and follow the procedures in Upgrading the SDD packages automatically without system restart on page 24 to upgrade SDD, if required.
v If rootvg is on local SCSI disks and you can restart the system, skip the rest of the steps and restart the system. Then follow the procedures in Upgrading the SDD packages automatically without system restart on page 24 to upgrade SDD, if required.
v If rootvg is on local SCSI disks and you cannot restart the system, continue to the next step.
c. Upgrade to the latest version of SDD, if required.
d. Configure hdisks and SDD vpath devices by running the cfgmgr command.
6. If your new SDD version is earlier than 1.6.0.0, run the hd2vp command on all SDD volume groups. Otherwise, skip this step.
7. Resume all activities related to SDD devices:
a. If there was active paging space created with SDD devices, activate the paging space.
b. If your host was in an HACMP environment, start the cluster services.
c. Vary on all SDD volume groups.
d. Mount all file systems.
b. Verify that the hdisk devices are successfully removed using the following command:
lsdev -C -t 2105* -F name     for 2105 devices
lsdev -C -t 2145* -F name     for 2145 devices
lsdev -C -t 2107* -F name     for 2107 devices
lsdev -C -t 1750* -F name     for 1750 devices
7. Make sure no disk group is online except rootvg. Migrate to the desired AIX OS level. Ensure you complete the following operations for the OS migration.
v If you are using NIM to upgrade to AIX 5.3, make sure NIM SPOT contains AIX Interim Fix APAR IY94507.
v Change the option to automatically import user volume groups to no.
Reboot automatically starts at the end of migration.
8. If rootvg is on a local SCSI disk, follow these procedures. Otherwise, if rootvg is on a SAN boot disk, skip to Step 9 on page 35.
a. Remove all the hdisks of the SDD supported storage devices with the following command.
lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl     for 2105 devices
lsdev -C -t 2145* -F name | xargs -n1 rmdev -dl     for 2145 devices
lsdev -C -t 2107* -F name | xargs -n1 rmdev -dl     for 2107 devices
lsdev -C -t 1750* -F name | xargs -n1 rmdev -dl     for 1750 devices
b. Verify that the hdisk devices are successfully removed using the following command:
lsdev -C -t 2105* -F name     for 2105 devices
lsdev -C -t 2145* -F name     for 2145 devices
lsdev -C -t 2107* -F name     for 2107 devices
lsdev -C -t 1750* -F name     for 1750 devices
9. Upgrade to the latest version of Host Attachment, if required. The following are Host Attachment Package names:
v ibm2105.rte for 2105 devices
v devices.fcp.disk.ibm.rte for 2145, 2107, and 1750 devices
10. If rootvg is on a SAN boot disk, restart the system.
11. Install SDD. If you have migrated to a new AIX OS level, make sure you install the SDD for the new AIX OS level.
12. On the HACMP active node, run varyonvg -bu volume_group_name on all the SDD non-concurrent volume groups that are shared with the standby node.
13. On the HACMP standby node, complete the following steps:
a. Configure hdisks and the SDD vpath devices using one of the following options:
v Run cfgmgr -vl fcsX for each fibre-channel adapter and then run cfallvpath
v Run cfgmgr
b. If your new SDD version is earlier than 1.6.0.0, run hd2vp on all SDD volume groups. Otherwise, skip this step.
c. Run importvg -L volume_group_name physical_volume_name to update any possible Object Data Manager (ODM) changes on a volume group.
14. On the HACMP active node, run varyonvg volume_group_name on all SDD non-concurrent volume groups that are shared with the standby node.
Verifying the currently installed version of SDD for SDD 1.3.3.11 (or earlier)
For SDD packages prior to SDD 1.4.0.0, you can verify your currently installed version of SDD by entering the following command:
lslpp -l '*Sdd*'
The asterisks (*) in the beginning and end of the Sdd characters are used as wildcard symbols to search for the characters ibm... and ...rte.
Alternatively, you can enter one of the following commands:
lslpp -l ibmSdd_432.rte
lslpp -l ibmSdd_433.rte
lslpp -l ibmSdd_510.rte
lslpp -l ibmSdd_510nchacmp.rte
lslpp -l ibmSdd.rte.432
...
...
If you successfully installed the package, the output from the lslpp -l '*Sdd*' or lslpp -l ibmSdd_432.rte command looks like this:
Fileset                      Level      State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  ibmSdd_432.rte             1.3.3.9    COMMITTED  IBM SDD AIX V432 V433
                                                   for concurrent HACMP
Path: /etc/objrepos
  ibmSdd_432.rte             1.3.3.9    COMMITTED  IBM SDD AIX V432 V433
                                                   for concurrent HACMP
If you successfully installed the ibmSdd_433.rte package, the output from the lslpp -l ibmSdd_433.rte command looks like this:
Fileset                      Level      State      Description
--------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  ibmSdd_433.rte             1.3.3.9    COMMITTED  IBM SDD AIX V433
                                                   for nonconcurrent HACMP
Path: /etc/objrepos
  ibmSdd_433.rte             1.3.3.9    COMMITTED  IBM SDD AIX V433
                                                   for nonconcurrent HACMP
If you successfully installed the ibmSdd_510.rte package, the output from the lslpp -l ibmSdd_510.rte command looks like this:
Fileset                      Level      State      Description
---------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  ibmSdd_510.rte             1.3.3.9    COMMITTED  IBM SDD AIX V510
                                                   for concurrent HACMP
Path: /etc/objrepos
  ibmSdd_510.rte             1.3.3.9    COMMITTED  IBM SDD AIX V510
                                                   for concurrent HACMP
If you successfully installed the ibmSdd_510nchacmp.rte package, the output from the lslpp -l ibmSdd_510nchacmp.rte command looks like this:
Fileset                      Level      State      Description
--------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  ibmSdd_510nchacmp.rte      1.3.3.11   COMMITTED  IBM SDD AIX V510
                                                   for nonconcurrent HACMP
Path: /etc/objrepos
  ibmSdd_510nchacmp.rte      1.3.3.11   COMMITTED  IBM SDD AIX V510
                                                   for nonconcurrent HACMP
Verifying the currently installed version of SDD for SDD 1.4.0.0 (or later)
For SDD 1.4.0.0 (and later), you can verify your currently installed version of SDD by entering the following command:
lslpp -l devices.sdd.*
If you successfully installed the devices.sdd.43.rte package, the output from the lslpp -l 'devices.sdd.*' command or lslpp -l devices.sdd.43.rte command looks like this:
Fileset                      Level      State      Description
----------------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.43.rte         1.4.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V433
Path: /etc/objrepos
  devices.sdd.43.rte         1.4.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V433
If you successfully installed the devices.sdd.51.rte package, the output from the lslpp -l devices.sdd.51.rte command looks like this:
Fileset                      Level      State      Description
----------------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.51.rte         1.4.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V51
Path: /etc/objrepos
  devices.sdd.51.rte         1.4.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V51
If you successfully installed the devices.sdd.52.rte package, the output from the lslpp -l devices.sdd.52.rte command looks like this:
Fileset                      Level      State      Description
----------------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.52.rte         1.4.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V52
Path: /etc/objrepos
  devices.sdd.52.rte         1.4.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V52
If you successfully installed the devices.sdd.53.rte package, the output from the lslpp -l devices.sdd.53.rte command looks like this:
Fileset                      Level      State      Description
----------------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.53.rte         1.6.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V53
Path: /etc/objrepos
  devices.sdd.53.rte         1.6.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V53
If you successfully installed the devices.sdd.61.rte package, the output from the lslpp -l devices.sdd.61.rte command looks like this:
Fileset                      Level      State      Description
----------------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.61.rte         1.7.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V61
Path: /etc/objrepos
  devices.sdd.61.rte         1.7.0.0    COMMITTED  IBM Subsystem Device Driver for AIX V61
consuming system resources. This might leave fewer resources for SDD vpath devices to be configured. On the other hand, more SDD vpath devices can be configured if the number of paths to each disk is reduced.

For AIX versions 4.3 and 5.1, AIX has a published limit of 10 000 devices per system. Based on this limitation, SDD limits the total maximum number of SDD vpath devices that can be configured to 600. This number is shared by all SDD-supported storage devices.

For AIX version 5.2 or later, the resource of the AIX operating system is increased. SDD has increased the SDD vpath device limit accordingly. Beginning with SDD 1.6.0.7, SDD supports a combined maximum of 1200 supported storage devices on AIX version 5.2 or later. Table 7 provides a summary of the maximum number of LUNs and the maximum number of paths allowed when running on host systems with different operating system levels.
Table 7. Maximum LUNs allowed for different AIX OS levels

OS level    SDD supported storage devices
AIX 4.3 *   600 LUNs (maximum 32 paths)
AIX 5.1     600 LUNs (maximum 32 paths)
AIX 5.2     1200 LUNs (maximum 32 paths; see Table 8 for recommended maximum number of paths.)
AIX 5.3     1200 LUNs (maximum 32 paths; see Table 8 for recommended maximum number of paths.)
AIX 6.1     1200 LUNs (maximum 32 paths; see Table 8 for recommended maximum number of paths.)

Note: * AIX 4.3 is only supported for ESS and virtualization products.
You can have a maximum of 32 paths per SDD vpath device regardless of the number of LUNs that are configured. This means that you can only have a maximum of 32 host adapter ports for your SDD vpath devices. However, configuring more paths than is needed for failover protection might consume too many system resources and degrade system performance. Use the minimum number of paths that are necessary to achieve sufficient redundancy in the SAN environment. The recommended number of paths is 2 - 4. To avoid exceeding the maximum number of paths per SDD vpath device on AIX 5.2 or later, follow the recommendations in Table 8.
Table 8. Recommended maximum paths supported for different number of LUNs on AIX 5.2 or later

Number of LUNs          Maximum paths
1 - 600 vpath LUN       16
601 - 900 vpath LUN     8
901 - 1200 vpath LUN    4

Note:
If you have more than 1200 vpaths already configured on your AIX host (for example, if you have 800 ESS LUNs and 512 SAN Volume Controller LUNs configured as SDD vpath devices on one AIX host), SDD migration to SDD 1.6.0.7 or later will fail because SDD does not support more than 1200 LUNs on one AIX host. If you have this configuration, contact IBM Customer Support at 1-800-IBM-SERV.
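To see how close a host is to these limits, the configured devices and their paths can be inspected, for example (assuming the default lsvpcfg output of one line per SDD vpath device):

lsvpcfg | wc -l           # number of configured SDD vpath devices
datapath query device     # lists every SDD vpath device together with its paths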
ODM attributes for controlling the maximum number of LUNs in SDD version 1.6.0.7 or later on AIX 5.2 and later
SDD for AIX 5.2 and later has consolidated the ODM attributes for controlling the maximum number of LUNs for all supported storage devices. The SDD_maxlun ODM attribute is now used to replace the following ODM attributes:
v 2105_max_luns
v 2145_max_luns
v 2062_max_luns
v Enterpr_maxlun
v Virtual_maxlun
See Table 7 on page 39 for information about the total number of LUNs that you can configure. The new SDD ODM attribute, SDD_maxlun, defines the maximum number of storage LUNs that SDD can support on a host. This attribute has a default value as well as a maximum value of 1200. This value is not user-changeable. To display the value of the SDD_maxlun attribute, use the lsattr -El dpo command:
> lsattr -El dpo
SDD_maxlun       1200   Maximum LUNS allowed for SDD                    False
persistent_resv  yes    Subsystem Supports Persistent Reserve Command   False
Preparing your system to configure more than 600 supported storage devices or to handle a large amount of I/O after queue depth is disabled
If you plan to configure more than 600 supported storage devices by configuring multiple types of supported storage systems and the total number of LUNs will exceed 600, or if you plan to disable queue depth to remove the limit on the amount of I/O that SDD vpath devices can send, you must first determine whether the system has sufficient resources for large device configuration or heavy I/O operations. There are also some system configurations that must be changed to avoid system bottleneck.

To avoid system-performance degradation, tune the following ODM attributes for your AIX fibre-channel adapters before you configure more than 600 supported storage devices or disable queue depth:
v lg_term_dma
v num_cmd_elems
v max_xfer_size
v fc_err_recov
If you change these attributes, you need to reconfigure the fibre-channel adapter and all its child devices. Because this is a disruptive procedure, change these attributes before assigning or configuring supported storage devices on a host system.

lg_term_dma
This AIX fibre-channel adapter attribute controls the DMA memory resource that an adapter driver can use. The default value of lg_term_dma is 0x200000, and the maximum value is 0x8000000. A recommended change is to increase the value of lg_term_dma to 0x400000. If you still experience poor I/O performance after changing the value to 0x400000, you can increase the value of this attribute again. If you have a dual-port fibre-channel adapter, the maximum value of the lg_term_dma attribute is divided between the two adapter ports. Therefore, never increase lg_term_dma to the maximum value for a dual-port fibre-channel adapter, because this will cause the configuration of the second adapter port to fail.

num_cmd_elems
This AIX fibre-channel adapter attribute controls the maximum number of commands to be queued to the adapter. The default value is 200, and the maximum value is:
LP7000 adapters    1024
LP9000 adapters    2048
LP10000 adapters   2048
When a large number of supported storage devices are configured, you can increase this attribute to improve performance.

max_xfer_size
This AIX fibre-channel adapter attribute controls the maximum transfer size of the fibre-channel adapter. Its default value is 100000 and the maximum value is 1000000. You can increase this attribute to improve performance. Different storage systems might need different maximum transfer sizes to achieve the best performance.
Note: You can change this attribute only with AIX 5.2.0 or later.

fc_err_recov
Beginning with AIX 5.1 and AIX52 TL02, the fc_err_recov attribute enables fast failover during error recovery. Enabling this attribute can reduce the amount of time that the AIX disk driver takes to fail I/O in certain conditions, and therefore, reduce the overall error recovery time. The default value for fc_err_recov is delayed_fail. To enable fibre-channel adapter fast failover, change the value to fast_fail.
Notes:
1. For AIX 5.1, apply APAR IY48725 (Fast I/O Failure for Fibre Channel Devices) to add the fast failover feature.
2. Fast failover is not supported on AIX 4.3.3 or earlier.

Using the -P option with the chdev command causes the changes to not take effect until the system is restarted. Use the -P option with the chdev command if your system falls under any one of the following conditions:
v If you have boot devices on the adapter
v If you have a large number of devices configured and prefer to restart the system
v If you plan to reboot the system later
Use the following procedure if you can reboot the system and allow the new attribute values to take effect after the reboot:
1. Issue lsattr -El fcsN to check the current value of lg_term_dma, num_cmd_elems, and max_xfer_size.
2. Issue lsattr -El fscsiN to check the current value of fc_err_recov.
3. Issue chdev -l fcsN -P -a lg_term_dma=0x400000 to increase the DMA value.
4. Issue chdev -l fcsN -P -a num_cmd_elems=1024 to increase the maximum commands value.
5. Issue chdev -l fcsN -P -a max_xfer_size=20000 to increase the maximum transfer size.
6. Issue the chdev -l fscsiX -P -a fc_err_recov=fast_fail command to enable fast failover.
7. Assign new LUNs to the AIX host, if needed.
8. Reboot the system now or later.

Use the following procedure if you cannot reboot the system but want the new attributes to take effect immediately:
1. Issue lsattr -El fcsN to check the current value of lg_term_dma, num_cmd_elems, and max_xfer_size.
2. Issue lsattr -El fscsiN to check the current value of fc_err_recov.
3. Use the rmdev -dl dpo -R command to remove SDD vpath devices, if they are already configured on your system.
4. Put all existing fibre-channel adapters and their child devices to the Defined state by issuing rmdev -l fcsN -R.
5. Issue chdev -l fcsN -a lg_term_dma=0x400000 to increase the DMA value.
6. Issue chdev -l fcsN -a num_cmd_elems=1024 to increase the maximum commands value.
7. Issue chdev -l fcsN -a max_xfer_size=100000 to increase the maximum transfer size.
8. Issue chdev -l fscsiX -a fc_err_recov=fast_fail to enable fast failover.
9. Assign new LUNs to the AIX host, if needed.
10. Configure the fibre-channel adapters, the child devices and hdisks using cfgmgr -l fcsN.
11. Configure SDD vpath devices with the cfallvpath command if they are removed in step 3.

When you have a large number of LUNs, many special device files are created in the /dev directory. Issuing the ls command with a wildcard (*) in this directory might fail. If issuing the command fails in this situation, change the ncargs attribute of sys0. The ncargs attribute controls the ARG/ENV list size in 4-KB byte blocks. The default value for this attribute is 6 (24 KB) and the maximum value for this attribute is 128 (512 KB). Increase the value of this to 30. If you still experience failures after changing the value to 30, increase this value to a larger number. Changing the ncargs attribute is dynamic. Use the following command to change the ncargs attribute to 30:
chdev -l sys0 -a ncargs=30
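You can confirm that the new value is in effect with a quick lsattr check:

lsattr -El sys0 -a ncargs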
Filesystem space: If you are increasing the maximum number of LUNs, after changing the ODM attributes, use the following steps to determine whether there is sufficient space in the root file system after hdisks are configured:
1. Issue cfgmgr -l [scsiN/fcsN] for each relevant SCSI or FCP adapter.
2. Issue df to ensure that root file system (that is, '/') size is large enough to hold the device special files. For example:
Filesystem    512-blocks      Free  %Used    Iused  %Iused  Mounted on
/dev/hd4          196608     29008    86%    15524     32%  /
The minimum required size is 8 MB. If there is insufficient space, run the chfs command to increase the size of the root file system.
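For example, a hypothetical invocation that grows the root file system by 16 MB (32768 512-byte blocks; adjust the amount to your own needs):

chfs -a size=+32768 /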
Controlling I/O flow to SDD devices with the SDD qdepth_enable attribute
Starting with SDD 1.5.0.0, the SDD attribute, qdepth_enable, allows you to control I/O flow to SDD vpath devices. qdepth_enable was a dpo attribute before SDD 1.6.1.2 and it controls the queue depth logic on all the SDD vpath devices. Beginning with SDD 1.6.1.2, qdepth_enable is changed to a vpath attribute and it allows you to set different queue depth logic per SDD vpath device. By default, SDD uses the device queue_depth setting to control the I/O flow to SDD vpath device and paths. With certain database applications, such as an application running with a DB2 database, IBM Lotus Notes, or IBM Informix database, the software might generate many threads, which can send heavy I/O to a relatively small number of devices. Enabling queue depth logic to control I/O flow can cause performance degradation, or even a system hang. To remove the limit on the amount of I/O sent to vpath devices, use the qdepth_enable attribute to disable this queue depth logic on I/O flow control. By default, the queue depth logic to control the amount of I/O being sent to the vpath devices is enabled in the SDD driver. To determine if queue depth logic is enabled for a particular SDD vpath device, run the following command:
# lsattr -El vpath0
active_hdisk    hdisk66/13AB2ZA1020/fscsi3        Active hdisk                    False
active_hdisk    hdisk2/13AB2ZA1020/fscsi2         Active hdisk                    False
active_hdisk    hdisk34/13AB2ZA1020/fscsi2        Active hdisk                    False
active_hdisk    hdisk98/13AB2ZA1020/fscsi3        Active hdisk                    False
policy          df                                Scheduling Policy               True
pvid            0005f9fdcda4417d0000000000000000  Physical volume identifier      False
qdepth_enable   yes                               Queue Depth Control             True
reserve_policy  PR_exclusive                      Reserve Policy                  True
serial_number   13AB2ZA1020                       LUN serial number               False
unique_id       yes                               Device Unique Identification    False
For SDD 1.5.1.0 or later, you can change the qdepth_enable attribute dynamically. The datapath set qdepth command offers a new option to dynamically enable or disable the queue depth logic. For example, if you enter datapath set device 0 2 qdepth disable command, the following output is displayed when the queue depth logic is currently enabled on these SDD vpath devices:
Success: set qdepth_enable to no for vpath0
Success: set qdepth_enable to no for vpath1
Success: set qdepth_enable to no for vpath2
The qdepth_enable ODM attribute of these SDD vpath devices will be updated. For example, the following output is displayed when lsattr -El vpath0 is entered.
# lsattr -El vpath0
active_hdisk    hdisk66/13AB2ZA1020/fscsi3        Active hdisk                    False
active_hdisk    hdisk2/13AB2ZA1020/fscsi2         Active hdisk                    False
active_hdisk    hdisk34/13AB2ZA1020/fscsi2        Active hdisk                    False
active_hdisk    hdisk98/13AB2ZA1020/fscsi3        Active hdisk                    False
policy          df                                Scheduling Policy               True
pvid            0005f9fdcda4417d0000000000000000  Physical volume identifier      False
qdepth_enable   no                                Queue Depth Control             True
reserve_policy  PR_exclusive                      Reserve Policy                  True
serial_number   13AB2ZA1020                       LUN serial number               False
unique_id       yes                               Device Unique Identification    False
See Preparing your system to configure more than 600 supported storage devices or to handle a large amount of I/O after queue depth is disabled on page 40 to determine whether the system has sufficient resources for disabling queue depth logic.
Controlling reserve policy of SDD devices with the SDD reserve_policy attribute
Starting with SDD 1.7.1.0, a new reserve_policy attribute allows you to control the default reserve policy of each SDD vpath device. By default, the reserve_policy attribute is set to PR_exclusive. This means that, unless an application opens the SDD vpath device with the no reserve device open option, SDD always makes a persistent reservation on the device. In a shared environment where an application does not implement the no reserve device open option, such as the Dual Virtual I/O Servers environment, you must set the reserve_policy attribute to no_reserve. If you set the reserve_policy attribute to no_reserve, regardless of the device open option, SDD does not make a persistent reservation on the device. To display the current reserve_policy, run the following command:
# lsattr -El vpath0
active_hdisk    hdisk331/140FCA30/fscsi1          Active hdisk                    False
active_hdisk    hdisk2/140FCA30/fscsi0            Active hdisk                    False
active_hdisk    hdisk53/140FCA30/fscsi0           Active hdisk                    False
active_hdisk    hdisk197/140FCA30/fscsi0          Active hdisk                    False
active_hdisk    hdisk280/140FCA30/fscsi1          Active hdisk                    False
active_hdisk    hdisk475/140FCA30/fscsi1          Active hdisk                    False
policy          df                                Scheduling Policy               True
pvid            00082dba1dae728c0000000000000000  Physical volume identifier      False
qdepth_enable   yes                               Queue Depth Control             True
reserve_policy  PR_exclusive                      Reserve Policy                  True
serial_number   140FCA30                          LUN serial number               False
unique_id       yes                               Device Unique Identification    False
To change the vpath attribute, reserve_policy, from default value PR_exclusive to no_reserve, enter the following command:
# chdev -l vpath0 -a reserve_policy=no_reserve
vpath0 changed
The chdev command requires that the device is closed because it reconfigures the SDD vpath device.
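If the same policy is needed on several vpath devices, a simple shell loop such as the following sketch can be used; vpath0 through vpath2 are placeholders, and every device in the list must be closed before the change:

for dev in vpath0 vpath1 vpath2
do
    chdev -l $dev -a reserve_policy=no_reserve
done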
Configuring SDD
Complete the following steps to configure SDD using SMIT:
Note: The list items on the SMIT panel might be worded differently from one AIX version to another.
1. Enter smitty device from your desktop window. The Devices menu is displayed.
2. Select Data Path Device and press Enter. The Data Path Device panel is displayed.
3. Select Define and Configure All Data Path Devices and press Enter. The configuration process begins.
4. Check the SDD configuration state. See Displaying the supported storage device SDD vpath device configuration on page 74.
5. Use the varyonvg command to vary on all deactivated supported storage device volume groups.
6. Mount the file systems for all volume groups that were previously unmounted.
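Alternatively, the fast-path configuration method listed in Table 5 can be run from the command line instead of SMIT; a minimal sketch:

cfallvpath       # configures the SDD pseudo-parent dpo and all SDD vpath devices
lsvpcfg          # verify the resulting configuration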
Unconfiguring SDD
1. Before you unconfigure SDD devices, ensure that:
   v All I/O activities on the devices that you need to unconfigure are stopped.
   v All file systems belonging to the SDD volume groups are unmounted and all volume groups are varied off.
   v A paging space created with SDD devices is deactivated.
2. Run the vp2hd volume_group_name conversion script to convert the volume group from SDD devices (vpathN) to supported storage devices (hdisks).
Note: Because SDD implements the persistent reserve command set, you must remove the SDD vpath device before removing the SDD vpath device's underlying hdisk devices.
You can use SMIT to unconfigure the SDD devices in two ways. You can either unconfigure without deleting the device information from the Object Database Manager (ODM) database, or you can unconfigure and delete device information from the ODM database:
v If you unconfigure without deleting the device information, the device remains in the Defined state. You can use either SMIT or the mkdev -l vpathN command to return the device to the Available state.
v If you unconfigure and delete the device information from the ODM database, that device is removed from the system. To reconfigure it, follow the procedure described in Configuring SDD on page 45.
Complete the following steps to delete device information from the ODM and to unconfigure SDD devices:
1. Enter smitty device from your desktop window. The Devices menu is displayed.
2. Select Devices and press Enter.
3. Select Data Path Device and press Enter. The Data Path Device panel is displayed.
4. Select Remove a Data Path Device and press Enter. A list of all SDD devices and their states (either Defined or Available) is displayed.
5. Select the device that you want to unconfigure. Select whether you want to delete the device information from the ODM database.
6. Press Enter. The device is unconfigured to the state that you selected.
7. To unconfigure more SDD devices, repeat steps 4 through 6 for each SDD device.
The fast-path command to unconfigure all SDD devices and change the device state from Available to Defined is: rmdev -l dpo -R. The fast-path command to unconfigure and remove all SDD devices from your system is: rmdev -dl dpo -R.
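As a rough command-line sketch of the preparation and fast-path removal described above, with the hypothetical volume group name vpathvg and mount point /vpathfs:

umount /vpathfs        (unmount all file systems of the SDD volume group)
varyoffvg vpathvg      (vary off the volume group)
vp2hd vpathvg          (convert the volume group back to hdisk devices)
rmdev -dl dpo -R       (unconfigure and remove all SDD vpath devices; use rmdev -l dpo -R to leave them in the Defined state instead)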
Verifying the SDD configuration

To verify the SDD configuration, you can use either the Display Data Path Device Configuration SMIT panel or the lsvpcfg command. The output shows:
v The name of each SDD vpath device (for example, vpath13)
v The Defined or Available state of an SDD vpath device
v Whether the SDD vpath device is defined to AIX as a physical volume (indicated by the pv flag)
v The name of the volume group the device belongs to (for example, vpathvg)
v The unit serial number of the disk storage system LUN (for example, 02FFA067) or the unit serial number of the virtualization product LUN (for example, 60056768018A0210B00000000000006B)
v The names of the AIX disk devices making up the SDD vpath device and their configuration and physical volume state
2. Enter datapath remove adapter n, where n is the adapter number to be removed. For example, to remove adapter 0, enter datapath remove adapter 0.
+---------------------------------------------------------------------+
|Success: remove adapter 0                                            |
|                                                                     |
|Active Adapters :3                                                   |
|                                                                     |
|Adpt#  Adapter Name  State     Mode      Select   Errors Paths Active|
|    1  fscsi1        NORMAL    ACTIVE     65916        3    10    10 |
|    2  fscsi2        NORMAL    ACTIVE     76197       28    10    10 |
|    3  fscsi3        NORMAL    ACTIVE      4997       39    10    10 |
+---------------------------------------------------------------------+
Note that Adpt# 0 (fscsi0) has been removed and that the Select counts have increased on the other three adapters, indicating that I/O is still running.
3. Enter rmdev -dl fcs0 -R to remove fcs0, a parent of fscsi0, and all of its child devices from the system. Issuing lsdev -Cc disk should not show any devices associated with fscsi0.
4. Enter drslot -R -c pci -s P1-I8, where P1-I8 is the slot location found by issuing lscfg -vl fcs0. This command prepares a hot-plug slot for systems with AIX 5L or later.
5. Follow the instructions given by drslot to physically remove the adapter and install a new one.
6. Update the World Wide Name (WWN) of the new adapter at the device end and in the fabric. For example, for ESS devices, go to the ESS Specialist to update the WWN of the new adapter. The zone information of fabric switches must be updated with the new WWN as well.
7. Enter cfgmgr or cfgmgr -vl pci(n), where n is the adapter number, to configure the new adapter and its child devices. Use the lsdev -Cc disk and lsdev -Cc adapter commands to ensure that all devices are successfully configured to Available state.
8. Enter the addpaths command to configure the newly installed adapter and its child devices to SDD. The newly added paths are automatically opened if the vpath is open.
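The following consolidates the replacement into one sequence. The adapter names fcs0/fscsi0 and the slot P1-I8 are the example values used above; the slot location should be recorded with lscfg before the adapter is removed, and datapath query adapter is assumed here as the way to identify the adapter number.

datapath query adapter      (identify the adapter number to remove)
lscfg -vl fcs0              (record the slot location, for example P1-I8)
datapath remove adapter 0
rmdev -dl fcs0 -R
drslot -R -c pci -s P1-I8
                            (physically replace the adapter; update the WWN at the device end and in the fabric zoning)
cfgmgr
lsdev -Cc adapter
lsdev -Cc disk
addpaths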
+---------------------------------------------------------------------+
|Active Adapters :4                                                   |
|                                                                     |
|Adpt#  Adapter Name  State     Mode      Select   Errors Paths Active|
|    0  fscsi0        NORMAL    ACTIVE        11        0    10    10 |
|    1  fscsi1        NORMAL    ACTIVE    196667        6    10    10 |
|    2  fscsi2        NORMAL    ACTIVE    208697       36    10    10 |
|    3  fscsi3        NORMAL    ACTIVE     95188       47    10    10 |
+---------------------------------------------------------------------+
2. Enter datapath remove device m path n, where m is the device number and n is the path number of that device. For example, enter datapath remove device 0 path 1 to remove Path#1 from DEV#0.
+-------------------------------------------------------------------------+
|Success: device 0 path 1 removed                                         |
|                                                                         |
|DEV#:   0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized         |
|        SERIAL: 20112028                                                 |
|=========================================================================|
|Path#      Adapter/Hard Disk     State     Mode       Select     Errors  |
|    0      fscsi1/hdisk18        OPEN      NORMAL        567          0  |
|    1      fscsi0/hdisk34        OPEN      NORMAL        596          0  |
|    2      fscsi0/hdisk42        OPEN      NORMAL        589          0  |
+-------------------------------------------------------------------------+
Note that fscsi1/hdisk26 is removed and Path# 1 is now fscsi0/hdisk34. 3. To reclaim the removed path, see Dynamically adding paths to SDD vpath devices on page 47.
+-------------------------------------------------------------------------+
|DEV#:   0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized         |
|        SERIAL: 20112028                                                 |
|=========================================================================|
|Path#      Adapter/Hard Disk     State     Mode       Select     Errors  |
|    0      fscsi1/hdisk18        OPEN      NORMAL        588          0  |
|    1      fscsi0/hdisk34        OPEN      NORMAL        656          0  |
|    2      fscsi0/hdisk42        OPEN      NORMAL        599          0  |
|    3      fscsi1/hdisk26        OPEN      NORMAL          9          0  |
+-------------------------------------------------------------------------+
ARE YOU SURE?? Continuing may delete information you may want to keep. This is your last chance to stop before continuing.
f. Press Enter to begin the removal process. This might take a few minutes. g. When the process is complete, the SDD software package is removed from your system.
2. Verify that the hdisk devices are successfully removed using the following command:
lsdev -C -t 2105* -F name     (for 2105 devices)
lsdev -C -t 2145* -F name     (for 2145 devices)
lsdev -C -t 2107* -F name     (for 2107 devices)
lsdev -C -t 1750* -F name     (for 1750 devices)
3. Enter smitty deinstall from your desktop window to go directly to the Remove Installed Software panel.
4. Enter the following installation package names in the SOFTWARE name field:
   a. ibm2105.rte
   b. devices.fcp.disk.ibm.rte
   Note: You can also press F4 in the Software name field to list the currently installed installation packages and search (/) on ibm2105 and devices.fcp.disk.ibm.
5. Press the Tab key in the PREVIEW Only? field to toggle between Yes and No. Select No to remove the software package from your AIX host system.
   Note: If you select Yes, the deinstall process does a pre-check and lets you preview the results without removing the software. If the state for any SDD device is either Available or Defined, the process fails.
6. Select No for the remaining fields on this panel.
7. Press Enter. SMIT responds with the following message:
ARE YOU SURE? Continuing may delete information you may want to keep. This is your last chance to stop before continuing.
8. Press Enter to begin the removal process. This might take a few minutes. 9. When the process is complete, the SDD software package is removed from your system.
   b. Or, unmount all the file systems of the volume group and use the reducevg command to reduce that device from the volume group.
2. Issue excludesddcfg -dl hdisk# to remove this device from exclusion.
3. Run the cfallvpath configure method to configure these new devices.
4. Issue lsvpcfg to verify that these devices are configured as SDD vpath devices.
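A short sketch of steps 2 through 4; hdisk10 is a hypothetical device that was previously excluded:

excludesddcfg -dl hdisk10
cfallvpath
lsvpcfg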
SAN boot installation procedure for AIX 5.2, AIX 5.3, and AIX 6.1
Use this procedure for SAN boot installation for AIX 5.2 and later releases:
1. Configure disk storage system devices to the AIX system; there should be only a single path per LUN. In other words, the AIX system should see only one hdisk configured per LUN.
2. Install the base operating system on the selected disk storage system single-path devices.
3. Upgrade the base operating system to the latest technology level.
4. Connect additional AIX host adapters and additional storage adapters to the fabric in order to configure multiple paths (multiple hdisks) per disk storage system LUN.
5. Install both the SDD Host Attachment and SDD.
6. Reboot the AIX system.
7. Verify that the SDD vpath devices are configured correctly with multiple paths per LUN. Disk storage system devices (hdisks) should be configured as IBM 2105, IBM 2107, or IBM 1750 devices. Run the datapath query device command to verify that the SDD vpath devices are configured with multiple paths and that the vpath device policy is Optimized. A verification sketch follows this list.
8. The logical device names of the hdisks might not be configured in a continuous sequence because of the parallel configuration feature in AIX 5.2 and above. If that is the case, follow these additional steps to simplify future maintenance before you create any SDD volume group and file systems:
   a. Remove all hdisk logical device names (rootvg will not be removed) and SDD vpath devices.
   b. Reconfigure all the hdisk logical devices and SDD vpath devices with the cfgmgr command, or reboot the AIX system.
   c. Verify that all the logical device names of the hdisks (except rootvg) are configured in a continuous sequence.
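A hedged verification sketch for step 7; the device types are the disk storage system models named above, and the exact output depends on your configuration:

lsdev -Cc disk | egrep "2105|2107|1750"      (hdisks should appear as IBM 2105, 2107, or 1750 devices)
datapath query device                        (each vpath should list multiple paths and POLICY: Optimized)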
nonconcurrent
   Only one node in a cluster actively accesses the shared disk resources while the other nodes are on standby.
concurrent
   Multiple nodes in a cluster actively access the shared disk resources.
Table 9. Recommended SDD installation packages and supported HACMP modes for SDD versions earlier than SDD 1.4.0.0

Installation package     Version of SDD supported                           HACMP mode supported
ibmSdd_432.rte           SDD 1.1.4 (SCSI only)                              Concurrent
ibmSdd_433.rte           SDD 1.3.1.3 (or later) (SCSI and fibre channel)    Concurrent or nonconcurrent
ibmSdd_510nchacmp.rte    SDD 1.3.1.3 (or later) (SCSI and fibre channel)    Concurrent or nonconcurrent
Tip: If your SDD version is earlier than 1.4.0.0, and you use a mix of nonconcurrent and concurrent resource groups (such as cascading and concurrent resource groups or rotating and concurrent resource groups) with HACMP, you should use the nonconcurrent version of SDD. Different storage systems or models might support different versions of HACMP. For information, see the interoperability matrix for your storage: http://www.ibm.com/systems/storage/disk/ess/ http://www.ibm.com/systems/storage/disk/ds6000/ http://www.ibm.com/systems/storage/disk/ds8000/ http://www.ibm.com/systems/storage/software/virtualization/svc/ SDD supports RS/6000 and IBM System p servers connected to shared disks with SCSI adapters and drives as well as FCP adapters and drives. The kind of attachment support depends on the version of SDD that you have installed. Table 10 and Table 11 on page 56 summarize the software requirements to support HACMP v4.5. You can use the command instfix -ik IYxxxx, where xxxx is the APAR number, to determine if APAR xxxx is installed on your system.
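For example, to check for one of the APARs listed in Table 10:

instfix -ik IY36938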
Table 10. Software support for HACMP 4.5 on AIX 4.3.3 (32-bit only), 5.1.0 (32-bit and 64-bit), 5.2.0 (32-bit and 64-bit)

devices.sdd.43.rte installation package for SDD 1.4.0.0 (or later) (SCSI/FCP)
   HACMP 4.5 + APARs: Not applicable
devices.sdd.51.rte installation package for SDD 1.4.0.0 (or later) (SCSI/FCP)
   HACMP 4.5 + APARs: IY36938, IY36933, IY35735, IY36951
Table 10. Software support for HACMP 4.5 on AIX 4.3.3 (32-bit only), 5.1.0 (32-bit and 64-bit), 5.2.0 (32-bit and 64-bit) (continued)

devices.sdd.52.rte installation package for SDD 1.4.0.0 (or later) (SCSI/FCP)
   HACMP 4.5 + APARs: IY36938, IY36933, IY36782, IY37744, IY37746, IY35810, IY36951

Note: For up-to-date APAR information for HACMP, go to the following website: http://www.ibm.com/support/us/en/

Table 11. Software support for HACMP 4.5 on AIX 5.1.0 (32-bit and 64-bit kernel)

ibmSdd_510nchacmp.rte installation package for SDD 1.3.1.3 (SCSI/FCP)
   HACMP 4.5 + APARs: IY36938, IY36933, IY35735, IY36951
ibmSdd_510nchacmp.rte installation package for SDD 1.3.2.9 (to SDD 1.3.3.x) (SCSI/FCP)
   HACMP 4.5 + APARs: IY36938, IY36933, IY35735, IY36951

Note: For up-to-date APAR information for HACMP, go to the following website: http://www.ibm.com/support/us/en/
For HACMP v5.1, v5.2, v5.3, and v5.4 for AIX5L support information, go to the following website: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/ com.ibm.cluster.hacmp.doc/hacmpbooks.html For HACMP up-to-date APAR information, go to the following website: http://www14.software.ibm.com/webapp/set2/sas/f/hacmp/download/ aix53.html
If the supported storage device supports the Persistent Reserve command set, the persistent_resv attribute is created in the CuAt ODM and the value is set to yes; otherwise, this attribute exists only in the PdAt ODM and the value is set to no (default). You can use the following command to check the persistent_resv attribute after the SDD device configuration is complete:
lsattr -El dpo
If your host is attached to a supported storage device that supports Persistent Reserve, the output should look similar to the following output:
> lsattr -El dpo
SDD_maxlun      1200  Maximum LUNS allowed                           True
persistent_resv yes   Subsystem Supports Persistent Reserve Command  False
To check the persistent reserve key of a node that HACMP provides, enter the command:
odmget -q "name = ioaccess" CuAt
There are four scenarios:

Scenario 1. lspv displays pvids for both the hdisks and the vpath:

>lspv
hdisk1   003dfc10a11904fa    None
hdisk2   003dfc10a11904fa    None
vpath0   003dfc10a11904fa    None

Scenario 2. lspv displays pvids for the hdisks only.

Scenario 3. lspv displays the pvid for the vpath only.
For both Scenario 1 and Scenario 2, the volume group should be imported using the hdisk names and then converted using the hd2vp command:
>importvg -y vg_name -V major# hdisk1
>hd2vp vg_name
For Scenario 3, the volume group should be imported using the vpath name:
>importvg -y vg_name -V major# vpath0
Scenario 4. lspv does not display the pvid on the hdisks or the vpath:
>lspv
hdisk1   none    None
hdisk2   none    None
vpath0   none    None
For Scenario 4, the pvid will need to be placed in the ODM for the SDD vpath devices and then the volume group can be imported using the vpath name:
>chdev -l vpath0 -a pv=yes
>importvg -y vg_name -V major# vpath0
Note: See Importing volume groups with SDD on page 79 for a detailed procedure for importing a volume group with the SDD devices.
HACMP RAID concurrent-mode volume groups and enhanced concurrent-capable volume groups
This section provides information about HACMP RAID concurrent-mode volume groups and enhanced concurrent-capable volume groups. This section also provides instructions on the following procedures for both HACMP RAID concurrent-mode volume groups and enhanced concurrent-capable volume groups:
v Creating volume groups
v Importing volume groups
v Removing volume groups
v Extending volume groups
v Reducing volume groups
v Exporting volume groups
Starting with AIX51 TL02 and HACMP 4.4.1.4, you can create enhanced concurrent-capable volume groups with supported storage devices. HACMP supports both kinds of concurrent volume groups (HACMP RAID concurrent-mode volume groups and enhanced concurrent-capable volume groups). This section describes the advantage of enhanced concurrent-capable volume groups in an HACMP environment. It also describes the different ways of creating two kinds of concurrent-capable volume groups. While there are different ways to create and vary on concurrent-capable volume groups, the instructions to export a volume group are always the same. See Exporting HACMP RAID concurrent-mode volume groups on page 64. Note: For more information about HACMP RAID concurrent-mode volume groups, see the HACMP Administration Guide.
2. Then grep the pvid on the other nodes using the lspv | grep <pvid found in step 1> and the lsvpcfg commands. There are three scenarios. Follow the procedure for the scenario that matches the pvid status of your host:
   a. If the pvid is on an SDD vpath device, the output of the lspv | grep <pvid found in step 1> and the lsvpcfg commands should look like the following example:

      NODE VG BEING IMPORTED TO
      zebra> lspv | grep 000900cf4939f79c
      vpath124   000900cf4939f79c   none
      zebra>
      zebra> lsvpcfg vpath124
      vpath124 (Avail pv) 21B21411=hdisk126 (Avail) hdisk252 (Avail)

      1) Enter smitty importvg at the command prompt.
      2) A screen similar to the following example is displayed. Enter the information appropriate for your environment. The following example shows how to import an HACMP RAID concurrent-mode volume group using the con_vg on the SDD vpath device vpath124:

************************************************************************
Import a Volume Group

Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                     [Entry Fields]
  VOLUME GROUP name                                  [con_vg]
  PHYSICAL VOLUME names                              [vpath124]
  Volume Group MAJOR NUMBER                          [80]
  Make this VOLUME GROUP concurrent-capable?          no
  Make default varyon of VOLUME GROUP concurrent?     no
************************************************************************
   b. If the pvid is on hdisk devices, the output of the lspv | grep <pvid found in step 1> and the lsvpcfg commands should look like the following example:

      NODE VG BEING IMPORTED TO
      zebra> lspv | grep 000900cf4939f79c
      hdisk126   000900cf4939f79c   none
      hdisk252   000900cf4939f79c   none
      zebra>
      zebra> lsvpcfg | egrep -e 'hdisk126 ('
      vpath124 (Avail) 21B21411=hdisk126 (Avail pv) hdisk252 (Avail pv)

      1) Enter smitty importvg at the command prompt.
      2) A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to import an HACMP RAID concurrent-mode volume group using the con_vg on an SDD hdisk126:

***********************************************************************
Import a Volume Group

Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                     [Entry Fields]
  VOLUME GROUP name                                  [con_vg]
  PHYSICAL VOLUME names                              [hdisk126]
  Volume Group MAJOR NUMBER                          [80]
  Make this VOLUME GROUP concurrent-capable?          no
  Make default varyon of VOLUME GROUP concurrent?     no
***********************************************************************
      3) After the volume group import is complete, issue the lsvpcfg command again to verify the state of the vpath.

zebra> lsvpcfg | egrep -e 'hdisk126 ('
vpath124 (Avail) 21B21411=hdisk126 (Avail pv con_vg) hdisk252 (Avail pv con_vg)
4) Enter the hd2vp command against the volume group to convert the pvid from hdisk devices to SDD vpath devices:
zebra> hd2vp con_vg
zebra> lsvpcfg | egrep -e 'hdisk126 ('
vpath124 (Avail pv con_vg) 21B21411=hdisk126 (Avail) hdisk252 (Avail)
   c. If there is no pvid on either the hdisk or the SDD vpath device, the output of the lspv | grep <pvid found in step 1> and the lsvpcfg commands should look like the following example:

      NODE VG BEING IMPORTED TO
      zebra> lspv | grep 000900cf4939f79c
      zebra>

      1) Issue the chdev -l vpathX -a pv=yes command to retrieve the pvid value.
      2) There is a possibility that the SDD vpath device might be different for each host. Verify that the serial numbers (in this example, 21B21411) following the SDD vpath device names on each node are identical. To determine a matching serial number on both nodes, run the lsvpcfg command on both nodes.

monkey> lsvpcfg
vpath122 (Avail) 21921411=hdisk255 (Avail) hdisk259 (Avail)
vpath123 (Avail) 21A21411=hdisk256 (Avail) hdisk260 (Avail)
vpath124 (Avail pv con_vg) 21B21411=hdisk127 (Avail) hdisk253 (Avail)
monkey>
zebra> lsvpcfg | egrep -e 21B21411
vpath124 (Avail) 21B21411=hdisk126 (Avail) hdisk252 (Avail)
zebra>
      Note: You should also verify that the volume group is not varied on for any of the nodes in the cluster prior to attempting retrieval of the pvid.
      3) Enter smitty importvg at the command prompt.
      4) A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to import an HACMP RAID concurrent-mode volume group using the con_vg on an SDD vpath device vpath124.
**********************************************************************
Import a Volume Group

Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                     [Entry Fields]
  VOLUME GROUP name                                  [con_vg]
  PHYSICAL VOLUME names                              [vpath124]
  Volume Group MAJOR NUMBER                          [80]
  Make this VOLUME GROUP concurrent-capable?          no
  Make default varyon of VOLUME GROUP concurrent?     no
**********************************************************************
3. After importing volume groups has been completed, issue the lsvpcfg command again to verify the state of the SDD vpath device.
zebra> lsvpcfg vpath124
vpath124 (Avail pv con_vg) 21B21411=hdisk126 (Avail) hdisk252 (Avail)
Attention: When any of these HACMP RAID concurrent-mode volume groups are imported to the other nodes, it is important that they are not set to autovaryon; otherwise, errors occur when attempting to synchronize the HACMP cluster. When the concurrent access volume groups are not set to autovaryon, a special option flag, -u, is required when issuing the varyonvg command to make them concurrent-accessible across all the cluster nodes. Use the lsvg vgname command to check the value of autovaryon. Use the chvg -an vgname command to set autovaryon to FALSE.
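A minimal sketch of these checks, using the con_vg volume group name from this chapter:

lsvg con_vg          (check the AUTO ON value)
chvg -an con_vg      (set autovaryon to FALSE)
varyonvg -u con_vg   (vary on for concurrent access across the cluster nodes)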
                                                     [Entry Fields]
  VOLUME GROUP name                                  [con_vg]
  PHYSICAL VOLUME names                              [vpath2]
*****************************************************************
4. Vary off the volume group after extending it on the current node.
5. For all the nodes sharing con_vg, do the following (see the sketch after this list):
   a. Enter the chdev -l vpath2 -a pv=yes command to obtain the pvid for this vpath on the other host.
   b. Verify that the pvid exists by issuing the lspv command.
   c. Enter importvg -L con_vg vpath2 to import the volume group again.
   d. Verify that con_vg has the extended vpath included by using the lspv command.
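Steps 5a through 5d as a single sequence on one of the sharing nodes, using the con_vg and vpath2 names from the procedure:

chdev -l vpath2 -a pv=yes
lspv | grep vpath2
importvg -L con_vg vpath2
lspv | grep con_vg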
6. Vary off the volume group after reducing it on the current node.
7. For all the nodes sharing con_vg, do the following:
   a. Enter exportvg con_vg at the command prompt.
   b. Enter smitty importvg at the command prompt.
   c. A screen similar to the following is displayed. Enter the information appropriate for your environment.
***************************************************************
Import a Volume Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                     [Entry Fields]
  VOLUME GROUP name                                  [con_vg]
  PHYSICAL VOLUME name                               [vpath0]
  Volume Group MAJOR NUMBER                          [45]        +#
  Make this VG Concurrent Capable?                    No         +
  Make default varyon of VG Concurrent?               no         +
***************************************************************
d. Verify that con_vg has the vpath reduced by using the lspv command.
Creating enhanced concurrent-capable volume groups:
From this listing, the next common available major number can be selected (41, 55, 58, 61, 67, 68, 80, and so on). However, if multiple volume groups are going to be created, the user might begin with the highest available number (80) and increase by increments from there.
1. Enter smitty datapath_mkvg at the command prompt.
2. A screen similar to the following example is displayed. Enter the information appropriate for your environment. The following example shows how to create an enhanced concurrent-capable volume group using the con_vg on an SDD vpath0.
Add a Volume Group with Data Path Devices

Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                           [Entry Fields]
  VOLUME GROUP name                                        [con_vg]
  Physical partition SIZE in megabytes                                     +
* PHYSICAL VOLUME names                                    [vpath0]        +
  Force the creation of a volume group?                     no             +
  Activate volume group AUTOMATICALLY at system restart?    no             +
  Volume group MAJOR NUMBER                                [80]            +#
  Create VG Concurrent Capable?                             yes            +
  Auto-varyon in Concurrent Mode?                           no             +
  Create a big VG format Volume Group?                      no             +
  LTG Size in kbytes                                        128            +
Importing enhanced concurrent-capable volume groups:
Complete the following steps to import enhanced concurrent-capable volume groups.
Before importing enhanced concurrent-capable volume groups on SDD vpath devices, issue the lspv command to make sure there is a pvid on the SDD vpath device. If a pvid is not displayed, import the volume group on one of the hdisks that belongs to the SDD vpath device. Enter hd2vp to convert the volume group to SDD vpath devices.
If the hdisks do not have a pvid, run chdev -l hdiskX -a pv=yes to recover it. To verify that the pvid now exists, run the lspv command against the hdisk. This method can also be used when attempting to obtain a pvid on an SDD vpath device. Verify that the volume group is not varied on for any of the nodes in the cluster prior to attempting to retrieve the pvid.
Enter smitty importvg at the command prompt. A screen similar to the following example is displayed. Enter the information appropriate to your environment. The following example shows how to import an enhanced concurrent-capable volume group using the con_vg on SDD vpath device vpath3.
********************************************************************************
Import a Volume Group

Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                     [Entry Fields]
  VOLUME GROUP name                                  [con_vg]
  PHYSICAL VOLUME names                              [vpath3]
  Volume Group MAJOR NUMBER                          [45]
  Make this VOLUME GROUP concurrent-capable?          yes
  Make default varyon of VOLUME GROUP concurrent?     no
********************************************************************************
Note: The major number identified must be the same one used when the volume group was first created.
Extending enhanced concurrent-capable volume groups:
Note: Before attempting to extend the concurrent volume group, ensure that pvids exist on the SDD vpath device/hdisks on all nodes in the cluster.
Complete the following steps to extend an enhanced concurrent-capable volume group:
1. Enter smitty datapath_extendvg at the command prompt.
2. A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to extend an enhanced concurrent-capable volume group using the con_vg on SDD vpath device vpath2.
********************************************************************************
Add a Datapath Physical Volume to a Volume Group

Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                     [Entry Fields]
  VOLUME GROUP name                                  [con_vg]
  PHYSICAL VOLUME names                              [vpath2]
********************************************************************************
Note: Verify that the extension of the enhanced concurrent-capable volume group worked on the particular node and that all changes were propagated to all other nodes in the cluster, using the lsvpcfg command.
Reducing enhanced concurrent-capable volume groups:
Complete the following steps to reduce an enhanced concurrent-capable volume group:
1. Enter smitty vg at the command prompt.
2. Select Set Characteristics of a Volume Group from the displayed menu.
3. Select Remove a Physical Volume from a Volume Group from the displayed menu.
4. A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to reduce an enhanced concurrent-capable volume group using the con_vg on SDD vpath device vpath2.
********************************************************************************
Remove a Physical Volume from a Volume Group

Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                     [Entry Fields]
  VOLUME GROUP name                                  [con_vg]
  PHYSICAL VOLUME names                              [vpath2]
  FORCE deallocation of all partitions                yes
********************************************************************************
Note: Verify that reducing of volume groups worked on the particular node and that all changes were propagated to all other nodes in the cluster using the lsvpcfg command.
Recovering paths that are lost during HACMP node fallover that is caused when a system locks up
Typically, if an active node locks up, HACMP transfers ownership of shared disks and other resources through a process known as node fallover. Certain situations,
such as a loose or disconnected SCSI or fibre-channel-adapter card, can cause your SDD vpath devices to lose one or more underlying paths after the failed node is restarted. Complete the following steps to recover these paths:
v Make sure the issue that is causing lost paths is fixed. Then run the cfgmgr command to configure all the underlying paths (hdisks) to Available state.
v Enter the addpaths command to add the lost paths back to the SDD devices.
If your SDD vpath devices have lost one or more underlying paths that belong to an active volume group, you can use either the Add Paths to Available Data Path Devices SMIT panel or run the addpaths command from the AIX command line to recover the lost paths. Go to Dynamically adding paths to SDD vpath devices on page 47 for more information about the addpaths command.
Note: Running the cfgmgr command while the SDD vpath devices are in the Available state will not recover the lost paths; you must run the addpaths command to recover the lost paths.
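In command form, this recovery amounts to:

cfgmgr                   (bring the underlying hdisks back to the Available state)
addpaths                 (add the recovered paths back to the SDD vpath devices)
datapath query device    (verify that the paths are open again)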
Users who have fibre-channel support and the SDD server daemon installed should apply the PTFs listed in PTFs for APARs on AIX with Fibre Channel and the SDD server on page 69.
where NNN is the process ID number. The status of sddsrv should be Active if the SDD server has automatically started. If the SDD server has not started, the status will be Inoperative. Go to Starting the SDD server manually to proceed. Note: During OS installations and migrations, the following command could be added to /etc/inittab:
install_assist:2:wait:/usr/sbin/install_assist </dev/console >/dev/console 2>&1
Because this command runs in the foreground, it will prevent all the subsequent commands in the script from starting. If sddsrv happens to be behind this line, sddsrv will not run after system reboot. You should check /etc/inittab during OS installations or migrations and comment out this line.
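One way to check for the entry and disable it is shown below. This is a sketch: rmitab removes the record outright, so if you prefer to only comment the line out, edit /etc/inittab with a text editor instead.

grep install_assist /etc/inittab
rmitab install_assist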
PTFs for APARs on AIX with Fibre Channel and the SDD server
If you have fibre-channel support and the SDD server daemon running, PTFs for the APARs shown in Table 12 must be applied to your AIX servers in order to avoid a performance degradation.
Table 12. PTFs for APARs on AIX with fibre-channel support and the SDD server daemon running

AIX version   APAR                                                                          PTF
AIX 5.1       IY32325 (available in either devices.pci.df1000f7.com 5.1.0.28 or 5.1.0.35)   U476971, U482718
AIX 5.1       IY37437 (available in devices.pci.df1000f7.com 5.1.0.36)                      U483680
AIX 4.3.3     IY35177 (available in devices.pci.df1000f7.com 4.3.3.84)                      U483803
AIX 4.3.3     IY37841 (available in devices.pci.df1000f7.com 4.3.3.86)                      U484723
If you experience a degradation in performance, you should disable sddsrv until the PTFs for these APARs can be installed. After the PTFs for these APARs are installed, you should re-enable sddsrv. If you are running IBM TotalStorage Expert, see Replacing the SDD server with a stand-alone version. Otherwise, see Stopping the SDD server on page 68.
Note: You can enter the datapath set device N policy command to dynamically change the policy associated with vpaths in either Close or Open state. See datapath set device policy on page 454 for more information about the datapath set device policy command.
Fibre-channel dynamic device tracking for AIX 5.20 TL1 (and later)
This section applies only to AIX 5.20 TL1 and later releases.
Beginning with AIX 5.20 TL1, the AIX fibre-channel driver supports fibre-channel dynamic device tracking. This enables the dynamic changing of fibre-channel cable connections on switch ports or on supported storage ports without unconfiguring and reconfiguring hdisk and SDD vpath devices.
With dynamic tracking enabled, the fibre-channel adapter detects the change of the device's fibre-channel node port ID. It reroutes the traffic that is destined for that device to the new worldwide port name (WWPN) while the device is still online.
SDD 1.5.0.0 and later support this feature. SDD 1.6.0.0 and later support all disk storage system devices. This feature allows the following scenarios to occur without I/O failure:
1. Combine two switches in two SANs into one SAN by connecting the switches with a cable and cascading the switches within 15 seconds.
2. Change the connection to another switch port; the disconnected cable must be reconnected within 15 seconds.
3. Swap the switch ports of two cables on the SAN; the disconnected cable must be reconnected within 15 seconds. The switch ports must be in the same zone on the same switch.
4. Swap the ports of two cables on the disk storage system; the disconnected cable must be reconnected within 15 seconds.
Note: This 15 seconds includes the time to bring up the fibre-channel link after you reconnect the cables. Thus the actual time that you can leave the cable disconnected is less than 15 seconds. For disk storage systems, it takes approximately 5 seconds to bring up the fibre-channel link after the fibre-channel cables are reconnected.
By default, dynamic tracking is disabled. Use the following procedure to enable dynamic tracking:
1. Issue rmdev -l fscsiX -R for all adapters on your system to change all the child devices of fscsiX on your system to the Defined state.
2. Issue the chdev -l fscsiX -a dyntrk=yes command for all adapters on your system.
3. Run cfgmgr to reconfigure all devices back to the Available state.
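For a single adapter, fscsi0, the enablement procedure looks like this (the lsattr command is used only to confirm the new setting):

rmdev -l fscsi0 -R
chdev -l fscsi0 -a dyntrk=yes
cfgmgr
lsattr -El fscsi0 | grep dyntrk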
To use Fibre-channel Dynamic Device Tracking, you need the following fibre-channel device driver PTFs applied to your system:
v U486457.bff (This is a prerequisite PTF.)
v U486473.bff (This is a prerequisite PTF.)
v U488821.bff
v U488808.bff
After applying the PTFs listed above, use the lslpp command to ensure that the files devices.fcp.disk.rte and devices.pci.df1000f7.com are at level 5.2.0.14 or later.
Note: Fibre-channel device dynamic tracking does not support the following case: the port change on the supported storage devices where a cable is moved from one adapter to another free, previously unseen adapter on the disk storage system. The World Wide Port Name will be different for that previously unseen adapter, and tracking will not be possible. The World Wide Port Name is a static identifier of a remote port.
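For example, to confirm the fileset levels after the PTFs are applied:

lslpp -l devices.fcp.disk.rte devices.pci.df1000f7.com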
Understanding SDD 1.3.2.9 (or later) support for single-path configuration for supported storage devices
SDD 1.3.2.9 (or later) does not support concurrent download of licensed machine code in single-path mode. SDD does support single-path SCSI or fibre-channel connection from your AIX host system to supported storage devices. It is possible to create a volume group or an SDD vpath device with only a single path. However, because SDD cannot provide single-point-failure protection and load balancing with a single-path configuration, you should not use a single-path configuration. Tip: It is also possible to change from single-path to multipath configuration by using the addpaths command. For more information about the addpaths command, go to Dynamically adding paths to SDD vpath devices on page 47.
Understanding the persistent reserve issue when migrating from SDD to non-SDD volume groups after a system reboot
There is an issue with migrating from SDD to non-SDD volume groups after a system reboot. This issue only occurs if the SDD volume group was varied on prior to the system reboot and auto varyon was not set when the volume group was created. After the system reboot, the volume group will not be varied on.
The command to migrate from an SDD to a non-SDD volume group (vp2hd) will succeed, but a subsequent command to vary on the volume group will fail. This is because during the reboot, the persistent reserve on the physical volume of the volume group was not released, so when you vary on the volume group, the command will do a SCSI-2 reserve and fail with a reservation conflict.
There are two ways to avoid this issue:
1. Unmount the file systems and vary off the volume groups before rebooting the system.
2. Issue lquerypr -Vh /dev/vpathX on the physical LUN before varying on volume groups after the system reboot. If the LUN is reserved by the current
host, release the reserve by issuing the lquerypr -Vrh /dev/vpathX command. After successful processing, you can vary on the volume group successfully.
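A short sketch of the second option, using vpath0 and the hypothetical volume group name vg_name:

lquerypr -Vh /dev/vpath0     (check whether the LUN is reserved by the current host)
lquerypr -Vrh /dev/vpath0    (release the reserve if it is)
varyonvg vg_name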
The following information is displayed:
v The name of each SDD vpath device, such as vpath1.
v The configuration state of the SDD vpath device. It is either Defined or Available. There is no failover protection if only one path is in the Available state. At least two paths to each SDD vpath device must be in the Available state to have failover protection.
  Attention: The configuration state also indicates whether the SDD vpath device is defined to AIX as a physical volume (pv flag). If pv is displayed for both SDD vpath devices and the hdisk devices that it is comprised of, you might not have failover protection. Enter the dpovgfix command to fix this problem.
v The name of the volume group to which the device belongs, such as vpathvg.
v The unit serial number of the supported storage device LUN, such as 019FA067.
v The names of the AIX disk devices that comprise the SDD vpath devices, their configuration states, and the physical volume states.
See lsvpcfg on page 88 for information about the lsvpcfg command.
You can also use the datapath command to display information about an SDD vpath device. This command displays the number of paths to the device. For example, the datapath query device 10 command might produce this output:
DEV#:  10  DEVICE NAME: vpath10  TYPE: 2105B09  POLICY: Optimized
SERIAL: 02CFA067
==================================================================
Path#      Adapter/Hard Disk    State     Mode      Select  Errors
    0      scsi6/hdisk21        OPEN      NORMAL        44       0
    1      scsi5/hdisk45        OPEN      NORMAL        43       0
The sample output shows that device vpath10 has two paths and both are operational. See datapath query device on page 439 for more information about the datapath query device command.
To display all the physical volumes known to AIX, use the lspv command. Any SDD vpath devices that were created into physical volumes are included in the output, similar to the following output:

hdisk0    0001926922c706b2    rootvg
hdisk1    none                None
...
hdisk10   none                None
hdisk11   00000000e7f5c88a    None
...
hdisk48   none                None
hdisk49   00000000e7f5c88a    None
vpath0    00019269aa5bc858    None
vpath1    none                None
vpath2    none                None
vpath3    none                None
vpath4    none                None
vpath5    none                None
vpath6    none                None
vpath7    none                None
vpath8    none                None
vpath9    00019269aa5bbadd    vpathvg
vpath10   00019269aa5bc4dc    vpathvg
vpath11   00019269aa5bc670    vpathvg
vpath12   000192697f9fd2d3    vpathvg
vpath13   000192697f9fde04    vpathvg
To display the devices that comprise a volume group, enter the lsvg -p vg-name command. For example, the lsvg -p vpathvg command might produce the following output:
PV_NAME    PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
vpath9     active     29          4          00..00..00..00..04
vpath10    active     29          4          00..00..00..00..04
vpath11    active     29          4          00..00..00..00..04
vpath12    active     29          4          00..00..00..00..04
vpath13    active     29          28         06..05..05..06..06
The example output indicates that the vpathvg volume group uses physical volumes vpath9 through vpath13.
hdisk0    0001926922c706b2    rootvg
hdisk1    none                None
...
hdisk10   none                None
hdisk11   00000000e7f5c88a    None
...
hdisk48   none                None
hdisk49   00000000e7f5c88a    None
vpath0    00019269aa5bc858    None
vpath1    none                None
vpath2    none                None
vpath3    none                None
vpath4    none                None
vpath5    none                None
vpath6    none                None
vpath7    none                None
vpath8    none                None
vpath9    00019269aa5bbadd    vpathvg
vpath10   00019269aa5bc4dc    vpathvg
vpath11   00019269aa5bc670    vpathvg
vpath12   000192697f9fd2d3    vpathvg
vpath13   000192697f9fde04    vpathvg
In some cases, access to data is not lost, but failover protection might not be present. Failover protection can be lost in several ways:
v Losing a device path
v Creating a volume group from single-path SDD vpath devices
v A side effect of running the disk change method
v Running the mksysb restore command
v Manually deleting devices and running the configuration manager (cfgmgr)
The following sections provide more information about the ways that failover protection can be lost.
As an example, if you issue the lsvpcfg command, the following output is displayed:
vpath0 (Avail pv vpathvg) 018FA067 = hdisk1 (Avail )
vpath1 (Avail ) 019FA067 = hdisk2 (Avail )
vpath2 (Avail ) 01AFA067 = hdisk3 (Avail )
vpath3 (Avail ) 01BFA067 = hdisk4 (Avail ) hdisk27 (Avail )
vpath4 (Avail ) 01CFA067 = hdisk5 (Avail ) hdisk28 (Avail )
vpath5 (Avail ) 01DFA067 = hdisk6 (Avail ) hdisk29 (Avail )
vpath6 (Avail ) 01EFA067 = hdisk7 (Avail ) hdisk30 (Avail )
vpath7 (Avail ) 01FFA067 = hdisk8 (Avail ) hdisk31 (Avail )
vpath8 (Avail ) 020FA067 = hdisk9 (Avail ) hdisk32 (Avail )
vpath9 (Avail pv vpathvg) 02BFA067 = hdisk20 (Avail ) hdisk44 (Avail )
vpath10 (Avail pv vpathvg) 02CFA067 = hdisk21 (Avail ) hdisk45 (Avail )
vpath11 (Avail pv vpathvg) 02DFA067 = hdisk22 (Avail ) hdisk46 (Avail )
vpath12 (Avail pv vpathvg) 02EFA067 = hdisk23 (Avail ) hdisk47 (Avail )
vpath13 (Avail pv vpathvg) 02FFA067 = hdisk24 (Avail ) hdisk48 (Avail )
The following example of a chdev command could also set the pvid attribute for an hdisk:
chdev -l hdisk46 -a pv=yes
For this example, the output of the lsvpcfg command would look similar to this:
vpath0 (Avail pv vpathvg) 018FA067 = hdisk1 (Avail )
vpath1 (Avail ) 019FA067 = hdisk2 (Avail )
vpath2 (Avail ) 01AFA067 = hdisk3 (Avail )
vpath3 (Avail ) 01BFA067 = hdisk4 (Avail ) hdisk27 (Avail )
vpath4 (Avail ) 01CFA067 = hdisk5 (Avail ) hdisk28 (Avail )
vpath5 (Avail ) 01DFA067 = hdisk6 (Avail ) hdisk29 (Avail )
vpath6 (Avail ) 01EFA067 = hdisk7 (Avail ) hdisk30 (Avail )
vpath7 (Avail ) 01FFA067 = hdisk8 (Avail ) hdisk31 (Avail )
vpath8 (Avail ) 020FA067 = hdisk9 (Avail ) hdisk32 (Avail )
vpath9 (Avail pv vpathvg) 02BFA067 = hdisk20 (Avail ) hdisk44 (Avail )
vpath10 (Avail pv vpathvg) 02CFA067 = hdisk21 (Avail ) hdisk45 (Avail )
vpath11 (Avail pv vpathvg) 02DFA067 = hdisk22 (Avail ) hdisk46 (Avail pv vpathvg)
vpath12 (Avail pv vpathvg) 02EFA067 = hdisk23 (Avail ) hdisk47 (Avail )
vpath13 (Avail pv vpathvg) 02FFA067 = hdisk24 (Avail ) hdisk48 (Avail )
The output of the lsvpcfg command shows that vpath11 contains hdisk22 and hdisk46. However, hdisk46 is the one with the pv attribute set. If you run the lsvg -p vpathvg command again, the output would look similar to this:
vpathvg:
PV_NAME    PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
vpath10    active     29          4          00..00..00..00..04
hdisk46    active     29          4          00..00..00..00..04
vpath12    active     29          4          00..00..00..00..04
vpath13    active     29          28         06..05..05..06..06
Notice that now device vpath11 has been replaced by hdisk46. That is because hdisk46 is one of the hdisk devices included in vpath11 and it has a pvid attribute in the ODM. In this example, the LVM used hdisk46 instead of vpath11 when it activated volume group vpathvg. The volume group is now in a mixed mode of operation because it partially uses SDD vpath devices and partially uses hdisk devices. This is a problem that must be fixed because failover protection is effectively disabled for the vpath11 physical volume of the vpathvg volume group. Note: The way to fix this problem with the mixed volume group is to run the dpovgfix vg-name command after running the chdev command.
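For the mixed volume group shown above, the recovery is (any file systems in vpathvg must be unmounted first, as described in the dpovgfix section later in this chapter):

dpovgfix vpathvg
lsvg -p vpathvg     (vpath11 should again appear in place of hdisk46)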
The datapath query device command might now show that only one path (either hdisk4 or hdisk27) is configured for vpath3. To restore failover protection (that is, configure multiple paths for vpath3), complete the following steps:
1. Enter cfgmgr once for each installed SCSI or fibre-channel adapter, or enter cfgmgr n times, where n represents the number of paths per SDD device.
   Tip: Running cfgmgr n times for n-path vpath configurations is not always required. It is only necessary to run cfgmgr n times for an n-path configuration if the supported storage device has been used as a physical volume of a volume group. This is because the AIX disk driver might configure only one set of hdisks from one adapter if a pvid is present on a device.
2. Run addpaths to dynamically add the paths discovered by cfgmgr to SDD vpath devices.
The addpaths command allows you to dynamically add more paths to SDD vpath devices while they are in Available state. The cfgmgr command might need to be run n times when adding new LUNs. This command opens a new path (or multiple paths) automatically if the SDD vpath device is in the Open state, and the original number of paths of the vpath is more than one. You can either use the Add Paths to Available Data Path Devices SMIT panel or run the addpaths command from the AIX command line. Go to Dynamically adding paths to SDD vpath devices on page 47 for more information about the addpaths command.
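For the two-path vpath3 example above, the sequence is roughly:

cfgmgr                   (repeat once per adapter if a single run does not configure all paths)
addpaths
datapath query device    (vpath3 should again show both hdisk4 and hdisk27)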
Complete the following steps to import a volume group with SDD devices:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Import a Volume Group and press Enter. The Import a Volume Group panel is displayed.
6. In the Import a Volume Group panel, complete the following tasks:
   a. Enter the volume group that you want to import.
   b. Enter the physical volume that you want to import.
   c. Press Enter after making the changes.
Run the dpovgfix shell script to recover a mixed volume group. The syntax is dpovgfix vg-name. The script searches for an SDD vpath device that corresponds to each hdisk in the volume group and replaces the hdisk with the SDD vpath device. In order for the shell script to be run, all mounted file systems of this volume group have to be unmounted. After successful completion of the dpovgfix shell script, mount the file systems again.
Attention: Backing up files (running the savevg4vp command) will result in the loss of all material previously stored on the selected output medium. Data integrity of the archive might be compromised if a file is modified during system backup. Keep system activity at a minimum during the system backup procedure.
Table 13. SDD-specific SMIT panels and how to proceed

Display Data Path Device Configuration: see Accessing the Display Data Path Device Configuration SMIT panel on page 83 (equivalent SDD command: lsvpcfg)
Display Data Path Device Status: see Accessing the Display Data Path Device Status SMIT panel on page 84 (equivalent SDD command: datapath query device)
Display Data Path Device Adapter Status: see Accessing the Display Data Path Device Adapter Status SMIT panel on page 84 (equivalent SDD command: datapath query adapter)
Define and Configure All Data Path Devices: see Accessing the Define and Configure All Data Path Devices SMIT panel on page 85 (equivalent SDD command: cfallvpath)
Add Paths to Available Data Path Devices: see Accessing the Add Paths to Available Data Path Devices SMIT panel on page 85 (equivalent SDD command: addpaths)
Configure a Defined Data Path Device: see Accessing the Configure a Defined Data Path Device SMIT panel on page 85 (equivalent SDD command: mkdev)
Remove a Data Path Device: see Accessing the Remove a Data Path Device SMIT panel on page 85 (equivalent SDD command: rmdev)
Add a Volume Group with Data Path Devices: see Accessing the Add a Volume Group with Data Path Devices SMIT panel on page 85 (equivalent SDD command: mkvg4vp)
Table 13. SDD-specific SMIT panels and how to proceed (continued)

Add a Data Path Volume to a Volume Group: see Accessing the Add a Data Path Volume to a Volume Group SMIT panel on page 86 (equivalent SDD command: extendvg4vp)
Remove a Physical Volume from a Volume Group: see Accessing the Remove a Physical Volume from a Volume Group SMIT panel on page 86 (equivalent SDD command: exportvg volume_group)
Back Up a Volume Group with Data Path Devices: see Accessing the Backup a Volume Group with Data Path Devices SMIT panel on page 86 (equivalent SDD command: savevg4vp)
Remake a Volume Group with Data Path Devices: see Accessing the Remake a Volume Group with Data Path Devices SMIT panel on page 87 (equivalent SDD command: restvg)
The Select Query Option has three options:
All devices
   This option runs lsvpcfg and all the data path devices are displayed. No entry is required in the Device Name/Device Model field.
Device name
   This option runs lsvpcfg <device name> and only the specified device is displayed. Enter a device name in the Device Name/Device Model field.
Device model
   This option runs lsvpcfg -d <device model> and only devices with the specified device model are displayed. Enter a device model in the Device Name/Device Model field.
See lsvpcfg on page 88 for detailed information about the lsvpcfg command.
The Select Query Option has three options:
All devices
   This option runs datapath query device and all the data path devices are displayed. No entry is required in the Device Name/Device Model field.
Device number
   This option runs datapath query device <device number> and only the specified device is displayed. Enter a device number in the Device Name/Device Model field.
Device model
   This option runs datapath query device -d <device model> and only devices with the specified device model are displayed. Enter a device model in the Device Name/Device Model field.
See datapath query device on page 439 for detailed information about the datapath query device command.
Accessing the Display Data Path Device Adapter Status SMIT panel
Complete the following steps to access the Display Data Path Device Adapter Status panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Display Data Path Device Adapter Status and press Enter.
Accessing the Define and Configure All Data Path Devices SMIT panel
Complete the following steps to access the Define and Configure All Data Path Devices panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Define and Configure All Data Path Devices and press Enter.
Accessing the Add Paths to Available Data Path Devices SMIT panel
Complete the following steps to access the Add Paths to Available Data Path Devices panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Add Paths to Available Data Path Devices and press Enter.
Accessing the Add a Volume Group with Data Path Devices SMIT panel
Complete the following steps to access the Add a Volume Group with Data Path Devices panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Add Volume Group with Data Path Devices and press Enter.
Note: Press F4 while highlighting the PHYSICAL VOLUME names field to list all the available SDD vpaths.
Accessing the Add a Data Path Volume to a Volume Group SMIT panel
Complete the following steps to access the Add a Data Path Volume to a Volume Group panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Group and press Enter. The Volume Group panel is displayed.
5. Select Add a Data Path Volume to a Volume Group and press Enter.
6. Enter the volume group name and physical volume name and press Enter. Alternately, you can use the F4 key to list all the available SDD vpath devices and use the F7 key to select the physical volumes that you want to add.
Accessing the Remove a Physical Volume from a Volume Group SMIT panel
Complete the following steps to access the Remove a Physical Volume from a Volume Group panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
3. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
4. Select Set Characteristics of a Volume Group and press Enter. The Set Characteristics of a Volume Group panel is displayed.
5. Select Remove a Physical Volume from a Volume Group and press Enter. The Remove a Physical Volume from a Volume Group panel is displayed.
Accessing the Backup a Volume Group with Data Path Devices SMIT panel
Complete the following steps to access the Back Up a Volume Group with Data Path Devices panel and to backup a volume group with SDD devices:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Back Up a Volume Group with Data Path Devices and press Enter. The Back Up a Volume Group with Data Path Devices panel is displayed.
6. In the Back Up a Volume Group with Data Path Devices panel, complete the following steps:
   a. Enter the Backup DEVICE or FILE name.
   b. Enter the Volume Group to backup.
   c. Press Enter after making all required changes.
Tip: You can also use the F4 key to list all the available SDD devices, and you can select the devices or files that you want to backup.
Attention: Backing up files (running the savevg4vp command) will result in the loss of all material previously stored on the selected output medium. Data integrity of the archive might be compromised if a file is modified during system backup. Keep system activity at a minimum during the system backup procedure.
Accessing the Remake a Volume Group with Data Path Devices SMIT panel
Complete the following steps to access the Remake a Volume Group with Data Path Devices panel and restore a volume group with SDD devices:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Remake a Volume Group with Data Path Devices and press Enter. The Remake a Volume Group with Data Path Devices panel is displayed.
6. Enter the Restore DEVICE or FILE name that you want to restore, and press Enter. You can also press F4 to list all the available SDD devices, and you can select the devices or files that you want to restore.
addpaths
You can use the addpaths command to dynamically add more paths to the SDD devices when they are in the Available state. In addition, this command allows you to add paths to the SDD vpath devices (which are then opened) belonging to active volume groups. This command will open a new path (or multiple paths) automatically if the SDD vpath device is in Open state. You can either use the Add Paths to Available Data Path Devices SMIT panel or run the addpaths command from the AIX command line. The syntax for this command is:
addpaths
For more information about this command, go to Dynamically adding paths to SDD vpath devices on page 47.
vp2hd
You can use the vp2hd script tool to convert a volume group from SDD vpath devices (vpathN) to supported storage device hdisk devices. The syntax for this command is:
vp2hd vgname
dpovgfix
You can use the dpovgfix script tool to recover mixed volume groups. Performing AIX system management operations on adapters and hdisk devices can cause original supported storage device hdisks to be contained within an SDD volume group. This is known as a mixed volume group. Mixed volume groups happen when an SDD volume group is not active (varied off), and certain AIX commands to the hdisk put the pvid attribute of hdisk back into the ODM database. The following is an example of a command that does this:
chdev -l hdiskN -a queue_depth=30
If this disk is an active hdisk of an SDD vpath device that belongs to an SDD volume group, and you run the varyonvg command to activate this SDD volume group, LVM might pick up the hdisk device instead of the SDD vpath device. The result is that an SDD volume group partially uses the SDD vpath devices, and partially uses supported storage device hdisk devices. This causes the volume group to lose path-failover capability for that physical volume. The dpovgfix script tool fixes this problem. The syntax for this command is:
dpovgfix vgname
vgname Specifies the volume group name of the mixed volume group to be recovered.
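For example, to recover a mixed volume group named vg1 (the volume group name here is illustrative):

dpovgfix vg1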
lsvpcfg
You can use the lsvpcfg script tool to display the configuration state of all SDD devices. The lsvpcfg command can be issued in three ways.
1. The command can be issued without parameters. The syntax for this command is:
lsvpcfg
See Verifying the SDD configuration on page 46 for an example of the output and what it means. 2. The command can also be issued using the SDD vpath device name as a parameter. The syntax for this command is:
lsvpcfg vpathN vpathN vpathN
See Verifying the SDD configuration on page 46 for an explanation of the output. 3. The command can also be issued using the device model as a parameter. The option to specify a device model cannot be used when you specify an SDD vpath device. The syntax for this command is:
lsvpcfg device model
The following are examples of valid device models:
2105     All 2105 models (ESS).
2105F    All 2105 F models (ESS).
2105800  All 2105 800 models (ESS).
2145     All 2145 models (SAN Volume Controller).
2107     All DS8000 models.
1750     All DS6000 models.
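For example, the three forms of the command look like the following; the vpath names and device model shown are illustrative:

lsvpcfg                      # display all SDD vpath devices
lsvpcfg vpath10 vpath11      # display only the named vpath devices
lsvpcfg 2145                 # display only SAN Volume Controller (2145) devices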
mkvg4vp
You can use the mkvg4vp command to create an SDD volume group. For more information about this command, go to Configuring volume groups for failover protection on page 75. For information about the flags and parameters for this command, go to: http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds3/mkvg.htm. The syntax for this command is:
mkvg4vp -S* -d MaxPVs -B -G -f -q** -C | -c [-x] -i -s PPsize -n -V MajorNumber -L LTGsize***

*   for AIX 5.3 and later only
**  for AIX 5.2 and later only
*** for AIX 5.1 and later only
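Because mkvg4vp passes its flags through to the AIX mkvg command (see the mkvg reference above), a simple invocation looks like the following sketch; the -y flag is assumed to be passed through from mkvg, and the volume group and vpath names are examples only:

# create volume group vpvg00 with a 16 MB physical partition size on two SDD vpath devices
# (-y and -s are assumed to behave as they do for the base mkvg command)
mkvg4vp -y vpvg00 -s 16 vpath10 vpath11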
extendvg4vp
You can use the extendvg4vp command to extend an existing SDD volume group. For more information about this command, go to Extending an existing SDD volume group on page 81. For information about the flag and parameters for this command, go to: http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds2/extendvg.htm. The syntax for this command is:
extendvg4vp -f VGname PVname
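For example, to add the SDD vpath device vpath12 to volume group vg1 (the same names used in the migration example later in this chapter):

extendvg4vp -f vg1 vpath12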
excludesddcfg
You can use the excludesddcfg command to exclude supported storage device (hdisk) from the SDD vpath configuration. You must run this command before the SDD vpath devices are configured. The excludesddcfg command saves the serial number for the logical device (hdisk) to an exclude file (/etc/vpexclude). During the SDD configuration, the SDD configure methods read all serial numbers that are listed in this file and exclude these devices from the SDD configuration. The syntax for this command is:
excludesddcfg -l hdisk#
excludesddcfg -d
excludesddcfg -dl hdisk#

-l  Specifies the logical number of the supported storage device (hdiskN). This is not the SDD device name.
-d  When this optional flag is set, the excludesddcfg command removes all the exclusions by deleting all the existing contents from the exclude file.
-dl When this optional flag is set, the excludesddcfg command allows users to remove the exclusion of a particular storage device.
device name
    Specifies the supported storage device (hdiskN).
Example:
excludesddcfg -l hdisk11
hdisk11 SERIAL_NUMBER = 7C0FCA30
Success: Device with this serial number is now excluded from the SDD configuration.
To undo this exclusion, run excludesddcfg -dl hdisk#.

# excludesddcfg -dl hdisk11
hdisk11 SERIAL NUMBER = 7C0FCA30 TYPE = 2105
Success: SERIAL_NUMBER 7C0FCA30 is removed from /etc/vpexclude file.
To configure previously excluded device(s), run cfallvpath.
Notes:
1. Do not use the excludesddcfg command to exclude a device if you want the device to be configured by SDD.
2. If the supported storage device LUN has multiple configurations on a server, use the excludesddcfg command on only one of the logical names of that LUN.
3. Do not use the excludesddcfg command multiple times on the same logical device. Using the excludesddcfg command multiple times on the same logical device results in duplicate entries in the /etc/vpexclude file, so that the system administrator has to administer the file and its content.
4. Issue the excludesddcfg command with the -d flag to delete all existing contents from the exclude file. If you want to remove only one device from the /etc/vpexclude file, issue the excludesddcfg command with the -dl flag and specify the logical device name for which you want to remove the exclusion. For detailed instructions on the proper procedure, see Replacing manually excluded devices in the SDD configuration on page 52.
lquerypr
See Persistent reserve command tool.
sddgetdata
See Appendix A, SDD, SDDPCM, and SDDDSM data collection for problem analysis, on page 457, which describes the use of sddgetdata to collect information for problem determination.
lquerypr command

Purpose
To query and implement certain SCSI-3 persistent reserve commands on a device.

Syntax
lquerypr -p -c -r -v -V -h/dev/PVname
Description
The lquerypr command implements certain SCSI-3 persistent reservation commands on a device. The device can be either hdisk or SDD vpath devices. This command supports persistent reserve service actions, such as read reservation key, release persistent reservation, preempt-abort persistent reservation, and clear persistent reservation.
Note: This command can only be used when the device is not already opened.

Flags
-p  If the persistent reservation key on the device is different from the current host reservation key, it preempts the persistent reservation key on the device.
-c  If there is a persistent reservation key on the device, it removes any persistent reservation and clears all reservation key registration on the device.
-r  Removes the persistent reservation key on the device made by this host.
-v  Displays the persistent reservation key if it exists on the device.
-V  Verbose mode. Prints detailed message.
Return code
If the command is issued without options of -p, -r, or -c, the command will return 0 under two circumstances:
1. There is no persistent reservation key on the device.
2. The device is reserved by the current host.
If the persistent reservation key is different from the host reservation key, the command will return 1. If the command fails, it returns 2. If the device is already opened on a current host, the command returns 3.

Example
1. To query the persistent reservation on a device, enter lquerypr -h/dev/vpath30.
   This command queries the persistent reservation on the device without displaying. If there is a persistent reserve on a disk, it returns 0 if the device is reserved by the current host. It returns 1 if the device is reserved by another host.
2. To query and display the persistent reservation on a device, enter lquerypr -vh/dev/vpath30.
   Same as Example 1. In addition, it displays the persistent reservation key.
3. To release the persistent reservation if the device is reserved by the current host, enter lquerypr -rh/dev/vpath30.
   This command releases the persistent reserve if the device is reserved by the current host. It returns 0 if the command succeeds or the device is not reserved. It returns 2 if the command fails.
4. To reset any persistent reserve and clear all reservation key registrations, enter lquerypr -ch/dev/vpath30.
   This command resets any persistent reserve and clears all reservation key registrations on a device. It returns 0 if the command succeeds, or 2 if the command fails.
5. To remove the persistent reservation if the device is reserved by another host, enter lquerypr -ph/dev/vpath30.
   This command removes an existing registration and persistent reserve from another host. It returns 0 if the command succeeds or if the device is not persistent reserved. It returns 2 if the command fails.
Note: Alternately, you can enter lsvpcfg from the command-line interface rather than using SMIT. This displays all configured SDD vpath devices and their underlying paths (hdisks).
9. When the conversion is complete, mount all file systems that you previously unmounted. After the conversion, your application accesses supported storage device physical LUNs through SDD vpath devices. This provides load-balancing and failover protection for your application.
Migrating a non-SDD volume group to a supported storage device SDD multipath volume group in concurrent mode
Before you migrate your non-SDD volume group to an SDD volume group, make sure that you have completed the following tasks:
v The SDD for the AIX host system is installed and configured. See Verifying the currently installed version of SDD for SDD 1.3.3.11 (or earlier) on page 35 or Verifying the currently installed version of SDD for SDD 1.4.0.0 (or later) on page 37.
v The supported storage devices to which you want to migrate have multiple paths configured per LUN. To check the state of your SDD configuration, use the System Management Interface Tool (SMIT) or issue the lsvpcfg command from the command line. To use SMIT:
  1. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
  2. Select Devices and press Enter. The Devices panel is displayed.
  3. Select Data Path Device and press Enter. The Data Path Device panel is displayed.
  4. Select Display Data Path Device Configuration and press Enter. A list of the SDD vpath devices and whether there are multiple paths configured for the devices is displayed.
v Ensure that the SDD vpath devices that you are going to migrate to do not belong to any other volume group, and that the corresponding physical device (supported storage device LUN) does not have a pvid written on it. Check the lsvpcfg command output for the SDD vpath devices that you are going to use for migration. Make sure that there is no pv displayed for this SDD vpath device and its paths (hdisks). If a LUN has never belonged to any volume group, there is no pvid written on it. If there is a pvid written on the LUN and the LUN does not belong to any volume group, you need to clear the pvid from the LUN before using it to migrate a volume group. The commands to clear the pvid are:
chdev -l hdiskN -a pv=clear
chdev -l vpathN -a pv=clear
Attention: Exercise care when clearing a pvid from a device with this command. Issuing this command to a device that does belong to an existing volume group can cause system failures.
You should complete the following steps to migrate a non-SDD volume group to a multipath SDD volume group in concurrent mode:
1. Add new SDD vpath devices to an existing non-SDD volume group:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Group and press Enter. The Volume Group panel is displayed.
   e. Select Add a Data Path Volume to a Volume Group and press Enter.
   f. Enter the volume group name and physical volume name and press Enter. Alternately, you can use the F4 key to list all the available SDD vpath devices and use the F7 key to select the physical volumes that you want to add.
2. Enter the smitty mklvcopy command to mirror logical volumes from the original volume to an SDD supported storage device volume. Use the new SDD vpath devices for copying all logical volumes. Do not forget to include JFS log volumes.
   Note: The command smitty mklvcopy copies one logical volume at a time. A fast-path command to mirror all the logical volumes on a volume group is mirrorvg.
3. Synchronize logical volumes (LVs) or force synchronization. Enter the smitty syncvg command to synchronize all the volumes. There are two options on the smitty panel:
   v Synchronize by Logical Volume
   v Synchronize by Physical Volume
   The fast way to synchronize logical volumes is to select the Synchronize by Physical Volume option.
4. Remove the mirror and delete the original LVs. Enter the smitty rmlvcopy command to remove the original copy of the logical volumes from all original non-SDD physical volumes.
5. Enter the smitty reducevg command to remove the original non-SDD vpath devices from the volume group. The Remove a Physical Volume panel is displayed. Remove all non-SDD devices.
Note: A non-SDD volume group refers to a volume group that consists of non-supported storage devices or supported storage hdisk devices.
Detailed instructions for migrating a non-SDD volume group to a supported storage device SDD multipath volume group in concurrent mode
This procedure shows how to migrate an existing AIX volume group to use SDD vpath devices that have multipath capability. You do not take the volume group out of service. The example shown starts with a volume group, vg1, made up of one supported storage device, hdisk13. To perform the migration, you must have SDD vpath devices available that are greater than or equal to the size of each of the hdisks making up the volume group. In this example, the volume group is migrated to an SDD device, vpath12, with two paths, hdisk14 and hdisk30.
1. Add the SDD vpath device to the volume group as an Available volume:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Group and press Enter. The Volume Group panel is displayed.
   e. Select Add a Data Path Volume to a Volume Group and press Enter.
   f. Enter vg1 in the Volume Group Name field and enter vpath12 in the Physical Volume Name field. Press Enter. You can also use the extendvg4vp -f vg1 vpath12 command.
2. Mirror logical volumes from the original volume to the new SDD vpath device volume:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Group and press Enter. The Volume Group panel is displayed.
   e. Select Mirror a Volume Group and press Enter. The Mirror a Volume Group panel is displayed.
   f. Enter a volume group name and a physical volume name. Press Enter. You can also enter the mirrorvg vg1 vpath12 command.
3. Synchronize the logical volumes in the volume group:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Group and press Enter. The Volume Group panel is displayed.
   e. Select Synchronize LVM Mirrors and press Enter. The Synchronize LVM Mirrors panel is displayed.
   f. Select Synchronize by Physical Volume. You can also enter the syncvg -p hdisk13 vpath12 command.
4. Delete copies of all logical volumes from the original physical volume:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select Logical Volumes and press Enter. The Logical Volumes panel is displayed.
   c. Select Set Characteristic of a Logical Volume and press Enter. The Set Characteristic of a Logical Volume panel is displayed.
   d. Select Remove Copy from a Logical Volume and press Enter. The Remove Copy from a Logical Volume panel is displayed. You can also enter the command:
rmlvcopy loglv01 1 hdisk13
rmlvcopy lv01 1 hdisk13
5. Remove the old physical volume from the volume group:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   c. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
   d. Select Set Characteristics of a Volume Group and press Enter. The Set Characteristics of a Volume Group panel is displayed.
e. Select Remove a Physical Volume from a Volume Group and press Enter. The Remove a Physical Volume from a Volume Group panel is displayed. You can also enter the reducevg vg1 hdisk13 command.
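The whole migration in this example can also be run from the command line, as a consolidated sketch of the commands already shown in the individual steps above (volume group vg1, original hdisk13, target vpath12):

extendvg4vp -f vg1 vpath12       # step 1: add the SDD vpath device to the volume group
mirrorvg vg1 vpath12             # step 2: mirror the logical volumes onto vpath12
syncvg -p hdisk13 vpath12        # step 3: synchronize the LVM mirrors
rmlvcopy loglv01 1 hdisk13       # step 4: remove the original logical volume copies
rmlvcopy lv01 1 hdisk13
reducevg vg1 hdisk13             # step 5: remove the old physical volume from the volume group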
Then you can start the trace function. To start the trace function, enter:
trace -a -j 2F8
Note: To perform the AIX trace function, you must have the bos.sysmgt.trace installation package installed on your system.
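After you have re-created the problem, you would typically stop tracing and format the trace log with the standard AIX trace tools. This is a sketch only, and the output file name is an example:

trcstop                       # stop the trace
trcrpt > /tmp/sdd_trace.out   # format the trace log into a readable report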
(Figure: SDDPCM position in the protocol stack. Raw I/O and logical volume I/O pass through the AIX MPIO disk driver, which invokes a path control module: SDD PCM (for IBM disk storage systems), the AIX default PCM, or another vendor PCM.)
For detailed information about MPIO support on AIX 5.2 TL07 (or later), AIX 5.3 TL03 (or later), or AIX 6.1, visit the following website:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/baseadmn/manage_MPIO.htm
AIX MPIO-capable device drivers automatically discover, configure, and make available every storage device path. SDDPCM manages the paths to provide:
v High availability and load balancing of storage I/O
v Automatic path-failover protection
v Concurrent download of supported storage device licensed machine code
v Prevention of a single point of failure
For updated and additional information that is not included in this chapter, see the Readme file on the CD-ROM or visit the SDD website:
www.ibm.com/servers/storage/support/software/sdd
SDD and SDDPCM are mutually exclusive software packages on a server. You cannot install both software packages on a server for supported storage devices. When supported storage devices are configured as non-MPIO-capable devices (that is, multiple logical device instances are created for a physical LUN), you should install SDD to get multipath support. You must install SDDPCM in order to configure supported storage devices into MPIO-capable devices (where only one logical device instance is created for a physical LUN). Before you install SDDPCM, make sure that you meet all the required hardware and software requirements. See Verifying the hardware and software requirements on page 102 and Preparing for SDDPCM installation for supported storage devices on page 105.
Note: SDDPCM does not support SCSI storage devices.
With SDD 1.6.0.0 (or later), SDDPCM and SDD cannot coexist on an AIX server. If a server connects to any supported storage devices, all devices must be configured either as non-MPIO-capable devices or as MPIO-capable devices.
v Web-based System Manager (WebSM) for MPIO supported storage devices (See http://www-03.ibm.com/systems/power/software/aix/index.html for more information about WebSM.)
v Reserve last path of a device in OPEN mode
v Support the essutil Product Engineering tool in the SDDPCM pcmpath command line program
v Support HACMP with Enhanced Concurrent Mode volume group in concurrent resource groups and nonconcurrent resource groups
  Note: This support does not include RSSM devices as HACMP is not supported on IBM JS-series blades.
v Support GPFS in AIX 5.2 TL06 (or later), 5.3 TL02 (or later), and AIX 6.1.
v Support Virtual I/O server with AIX 5.3 or later and AIX 6.1.
  Note: This support does not include DS4000, DS5000, and DS3950 storage devices.
v Support Tivoli Storage Productivity Center for Replication Metro Mirror Failover/Failback replication for Open HyperSwap for System Storage DS8000.
  Note: This support is only available on AIX 5.3 TL11 (or later), and AIX 6.1 TL04 (or later).
v Support for Non-disruptive Vdisk Movement (NDVM) feature of SVC. For information about SAN Volume Controller, see the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide.
  Note: This support is only available on SDDPCM 2.6.4.0 (or later) on AIX.
Hardware
The following hardware components are needed:
v Supported storage devices (FCP and SAS devices only)
v One or more switches, if the supported storage devices are not direct-attached
v Host system
Software
The following software components are needed:
v AIX 5.2 TL10 (or later), AIX 5.3 TL08 (or later), or AIX 6.1 TL02 (or later) operating system, with all the latest PTFs. Refer to the Readme file of the SDDPCM level that you plan to install for the required AIX TL for that level.
v If your attached storage is SAN Volume Controller version 4.2.1.6 or above and you require SAN Volume Controller APAR IC55826, you must install SDDPCM 2.2.0.3 or above with the required AIX TL and APAR. Refer to the Readme file of the SDDPCM level that you plan to install for the required AIX TL and APARs.
v Fibre-channel device drivers or serial-attached SCSI drivers
v One of the following installation packages:
  devices.sddpcm.52.rte (version 2.5.1.0 or later)
  devices.sddpcm.53.rte (version 2.5.1.0 or later)
  devices.sddpcm.61.rte (version 2.5.1.0 or later)
v Supported storage devices:
  devices.fcp.disk.ibm.mpio.rte (version 1.0.0.21 or later versions of 1.x.x.x) host attachment package for SDDPCM (version 2.x.x.x or prior versions)
  devices.fcp.disk.ibm.mpio.rte (version 2.0.0.1 or later versions) host attachment package for SDDPCM (version 3.0.0.0 or later versions)
  devices.sas.disk.ibm.mpio.rte (version 1.0.0.0 or later versions) host attachment package for SDDPCM
Unsupported environments
SDDPCM does not support:
v ESS SCSI devices
v A host system with both a SCSI and fibre-channel connection to a shared ESS logical unit number (LUN)
v Single-path mode during code distribution and activation of LMC nor during any supported storage devices concurrent maintenance that impacts the path attachment, such as a supported storage device host-bay-adapter replacement.
v AIX 5.2 with DS4000, DS5000, and DS3950 storage subsystems.
v AIX 5.2 with Open HyperSwap.
Fibre requirements
Note: There is no fibre requirement for RSSM devices that are connected through SAS. Refer to RSSM documentation for SAS requirements at:
http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020
You must check for and download the latest fibre-channel device driver APARs, maintenance-level fixes, and microcode updates from the following website:
www-1.ibm.com/servers/eserver/support/
If your host has only one fibre-channel adapter, you must connect through a switch to multiple supported storage device ports. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure. For information about the fibre-channel adapters that can be used on your AIX host system, go to the following website:
www.ibm.com/servers/storage/support
To use the SDDPCM fibre-channel support, ensure that your host system meets the following requirements:
v The AIX host system is an IBM RS/6000 or IBM System p with AIX 5.2 TL10 (or later), AIX 5.3 TL06 (or later), or AIX 6.1.
v The AIX host system has the fibre-channel device drivers installed along with all latest APARs.
v The host system can be a single processor or a multiprocessor system, such as SMP.
v A fiber-optic cable connects each fibre-channel adapter to a supported storage system port.
v If you need the SDDPCM I/O load-balancing and failover features, ensure that a minimum of two paths to a device are attached.
The SDDPCM installation package installs the following files:
sddpcmke
sdduserke
pcmpath
    The SDDPCM pcmpath command line program
pcmsrv
    Daemon for enhanced path healthcheck, and First Time Data Capture
sample_pcmsrv.conf
    The sample SDDPCM server daemon configuration file
fcppcmmap
    Collects supported storage devices fibre-channel device information through SCSI commands
pcmquerypr
    SDDPCM persistent reserve command tool
pcmgenprkey
    SDDPCM persistent reserve command tool to generate persistent reserve key
relbootrsv
    Release SCSI-2 reserve on boot devices or on active nonboot volume groups
    Note: Beginning with SDDPCM version 2.5.1.0, the relbootrsv program also can be used to release SCSI-2 reserve on an MPIO hdisk.
relSDDPCMbootrsv
    Automatically executed upon reboot to release leftover SCSI-2 reserve on SDDPCM boot device, beginning with SDDPCM version 2.2.0.1.
sddpcmgetdata
    Script to collect SDDPCM information, trace log files, and system error logs into an sddpcmdata_host_date_time.tar file for problem determination
sddpcm_get_config
    Display configuration information for DS4000, DS5000, and DS3950 storage and MPIO-based devices.
lspcmcfg
    Display SDDPCM MPIO device/path configuration/state information.
AE
    Daemon for communication with Tivoli Storage Productivity Center for Replication to support Open HyperSwap.
    Note: This file was added to the SDDPCM installation package beginning with SDDPCM 3.0.0.0.
v Determine that you have the correct installation package.
v Remove the SDD package, if it is installed.
v Remove the ibm2105.rte (version 32.6.100.x) and/or devices.fcp.disk.ibm.rte (version 1.0.0.x), if they are installed.
v Install the AIX fibre-channel device drivers, if necessary.
v Verify and upgrade the fibre-channel adapter firmware level.
v Install the SDDPCM host attachment: devices.fcp.disk.ibm.mpio.rte (version 1.0.0.15 or later), or devices.sas.disk.ibm.mpio.rte (version 1.0.0.0 or later).
You must check for the latest information on fibre-channel device driver APARs, maintenance-level fixes, and microcode updates at the following website:
www-1.ibm.com/servers/storage/support/
Complete the following steps to install the AIX fibre-channel device drivers from the AIX compact disc:
1. Log in as the root user.
2. Load the compact disc into the CD-ROM drive.
3. From your desktop window, enter smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
4. Highlight Install Software and press Enter.
5. Press F4 to display the INPUT Device/Directory for Software panel.
6. Select the compact disc drive that you are using for the installation; for example, /dev/cd0, and press Enter.
7. Press Enter again. The Install Software panel is displayed.
8. Highlight Software to Install and press F4. The Software to Install panel is displayed.
9. The fibre-channel device drivers include the following installation packages. Select each one by highlighting it and pressing F7.
   devices.pci.df1080f9
       The adapter device driver for RS/6000 or IBM System p with feature code 6239.
   devices.pci.df1000f9
       The adapter device driver for RS/6000 or IBM System p with feature code 6228.
   devices.pci.df1000f7
       The adapter device driver for RS/6000 or IBM System p with feature code 6227.
   devices.common.IBM.fc
       The FCP protocol driver.
   devices.fcp.disk
       The FCP disk driver.
10. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software you selected to install.
11. Check the default option settings to ensure that they are what you need.
12. Press Enter to install. SMIT responds with the following message:
+------------------------------------------------------------------------+
| ARE YOU SURE??                                                          |
| Continuing may delete information you may want to keep.             413 |
| This is your last chance to stop before continuing.                 415 |
+------------------------------------------------------------------------+
13. Press Enter to continue. The installation process can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT. Remove the compact disc.
15. Check to see if the correct APARs are installed by entering the following command:
    instfix -iv | grep IYnnnnn
    where nnnnn represents the APAR numbers. If the APARs are listed, that means that they are installed. If they are installed, go to Configuring supported storage MPIO-capable devices on page 119. Otherwise, go to step 3.
16. Repeat steps 1 through 14 to install the APARs.
To verify the firmware level, ignore the second character in the ZB field. In the example, the firmware level is sf330X1.
3. If the adapter firmware level is at the latest level, there is no need to upgrade; otherwise, the firmware level must be upgraded. To upgrade the firmware level, go to Upgrading the fibre channel adapter firmware level.

Upgrading the fibre channel adapter firmware level:
Upgrading the firmware level consists of downloading the firmware (microcode) from your AIX host system to the adapter. Before you upgrade the firmware, ensure that you have configured any fibre-channel-attached devices (see Configuring fibre-channel-attached devices on page 18). After the devices are configured, download the firmware from the AIX host system to the FCP adapter by performing the following steps:
1. Verify that the correct level of firmware is installed on your AIX host system. Go to the /etc/microcode directory and locate the file called df1000f7.XXXXXX for feature code 6227 and df1000f9.XXXXXX for feature code 6228, where XXXXXX is the level of the microcode. This file was copied into the /etc/microcode directory during the installation of the fibre-channel device drivers.
2. From the AIX command prompt, enter diag and press Enter.
3. Highlight the Task Selection option.
4. Highlight the Download Microcode option.
5. Press Enter to select all the fibre-channel adapters to which you want to download firmware. Press F7. The Download panel is displayed with one of the selected adapters highlighted. Press Enter to continue.
6. Highlight /etc/microcode and press Enter.
7. Follow the instructions that are displayed to download the firmware, one adapter at a time.

Fibre-channel HBA attributes for DS4000, DS5000, and DS3950 storage devices:
You must set the FC HBA dyntrk attribute to yes instead of the default setting, which is no. One way to set this attribute from the command line is shown below.
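The attribute can typically be changed on each fscsi protocol device with chdev. This is a sketch only; the instance name fscsi0 is illustrative, and -P defers the change until the next system restart if the device is busy:

chdev -l fscsi0 -a dyntrk=yes -P   # enable dynamic tracking on fscsi0
lsattr -El fscsi0                  # display the protocol device attributes to verify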
or
tar -xvf devices.sas.disk.ibm.mpio.rte.tar
or
pwd
rm -i .toc
inutoc .
grep -i devices.sas.disk.ibm .toc
This command should reflect the newer SDDPCM host attachment version that will be uploaded.
6. From your desktop window, type smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
7. Highlight Install Software and press Enter.
8. Type . to indicate the current directory and press Enter.
9. Highlight Software to Install and press F4. The Software to Install panel is displayed.
10. Select the devices.fcp.disk.ibm.mpio.rte package.
11. Press Enter. The Install and Update from the LATEST Available Software panel is displayed with the name of the software that you selected to install.
12. Check the default option settings to ensure that they are what you need.
13. Press Enter to install. SMIT responds with the following message:
+---------------------------------------------------------------------+
| ARE YOU SURE??                                                       |
| Continuing may delete information you may want to keep.              |
| This is your last chance to stop before continuing.                  |
+---------------------------------------------------------------------+
14. Press Enter to continue. The installation process may take a few minutes to complete.
15. When the installation or upgrade is complete, press F10 to exit from SMIT.
16. If this is a host attachment installation, do not reboot the system until you have the SDDPCM package installed. If this is a host attachment upgrade, and the SDDPCM package is already installed on the system, then reboot the system to complete the host attachment upgrade.
Attention: Do not reboot the system if you only have the SDDPCM host attachment package installed.
2. The AIX SDDPCM host attachment package (devices.fcp.disk.ibm.mpio.rte or devices.sas.disk.ibm.mpio.rte) must be installed before you install the SDDPCM package (devices.sddpcm.52.rte, devices.sddpcm.53.rte, or devices.sddpcm.61.rte).
11. Press Enter to create the CDROM File System.
12. When the CDROM File System has been created, press F10 to exit from smit.
13. From your desktop window, enter smitty mount and press Enter.
14. Select Mount a File System and press Enter. The Mount a File System panel is displayed.
15. Select FILE SYSTEM name and press F4.
16. Select the CDROM File System that you created and press Enter.
17. Select DIRECTORY on which to mount and press F4.
18. Select the CDROM File System that you created and press Enter.
19. Select TYPE of file system and press Enter.
20. Select cdrfs as the type of file system and press Enter.
21. Select Mount as a REMOVABLE file system? and press TAB to change the entry to yes.
22. Select Mount as a READ-ONLY system? and press TAB to change the entry to yes.
23. Check the default option settings for the other fields to ensure that they are what you need.
+-----------------------------------------------------------------+
+                      Mount a File System                         +
+ Type or select values in entry fields.                           +
+ Press Enter AFTER making all desired changes.                    +
+                                              [Entry Fields]      +
+ FILE SYSTEM name                             [/dev/cd0]          +
+ DIRECTORY over which to mount                [/cdmnt]            +
+ TYPE of file system                          cdrfs               +
+ FORCE the mount?                             no                  +
+ REMOTE NODE containing the file system       []                  +
+   to mount                                                       +
+ Mount as a REMOVABLE file system?            yes                 +
+ Mount as a READ-ONLY system?                 yes                 +
+ Disallow DEVICE access via this mount?       no                  +
+ Disallow execution of SUID and sgid programs no                  +
+   in this file system?                                           +
+-----------------------------------------------------------------+
24. Press Enter to mount the file system. 25. When the file system has been mounted successfully, press F10 to exit from smit. Attention: Do not reboot the system if you only have the SDDPCM host attachment package installed.
ARE YOU SURE??
Continuing may delete information you may want to keep.
This is your last chance to stop before continuing.
11. Press Enter to continue. The installation process can take several minutes to complete.
12. When the installation is complete, press F10 to exit from SMIT.
This command should reflect the newer SDDPCM code version that will be updated. 6. Continue the installation by following the instructions beginning in step 3 on page 112.
Installing SDDPCM with the AIX OS from an AIX NIM SPOT server to the client SAN boot disk or the internal boot disk
You can install SDDPCM from an AIX Network Installation Management (NIM) server to the client SAN boot disk or the internal boot disk at the same time that the AIX OS is installed. You must set up the NIM master and create the lpp_source and Shared Product Object Tree (SPOT) resources with the images on a file system, which is either NFS-exported or is obtained from a CD or DVD.
Prepare for the NIM SPOT installation with AIX OS and SDDPCM on the client's SAN boot disk or the internal boot disk. To do this, first set up a NIM master and create the lpp_source and SPOT resource. You can use the System Management Interface Tool (SMIT) facility to implement the following procedures:
1. Install the following filesets to set up the system as an NIM master:
   bos.sysmgt.nim.master
   bos.sysmgt.nim.spot
2. Initialize the NIM master system by running the smitty nim_config_env command.
3. Create a new lpp_source and SPOT resource by running the smitty nim_config_env command.
4. Add the SDDPCM fileset to the newly created lpp_source by running the smitty nim_task_inst command.
5. Create a SPOT from the new lpp_source by running the smitty nim_config_env command.
6. Define an NIM client by running the smitty nim command.
See the NIM task roadmap on the Web for detailed information on how to complete these tasks:
publib16.boulder.ibm.com/pseries/en_US/aixins/insgdrf/nim_roadmap.htm#nim_roadmap
After you have successfully prepared for the NIM SPOT installation, you are ready to use the SMIT tool to start the NIM installation on the client system:
1. Run the smitty nim command.
   a. Click Perform NIM Administration Tasks > Manage Network Install Resource Allocation > Manage Machines > Allocate Network Install Resources.
   b. Select the hostname of the client that you defined previously.
   c. Select the lpp_source and SPOT resources that you created previously, and then press Enter.
2. Run the smitty nim command again.
   a. Click Perform NIM Administration Tasks > Manage Machines > Perform Operations on Machines.
   b. Select the hostname of the client that you selected previously.
   c. Click bos_inst.
   d. Set the ACCEPT new license agreements field to Yes, and then press Enter.
The system automatically reboots after the smitty nim task completes. Use the following command to check the SAN boot disk and make sure the boot disk is configured with SDDPCM:
lsattr -El hdiskX (SAN boot disk device name)
From the output of this command, check the ODM attribute PCM to ensure that the value is PCM/friend/sddpcm or PCM/friend/sddappcm.
Updating SDDPCM
The following sections discuss methods of updating SDDPCM, verifying the currently installed version of SDDPCM, and the maximum number of devices that SDDPCM supports.
Updating SDDPCM packages by installing a newer base package or a program temporary fix
SDDPCM allows you to update SDDPCM by installing a newer base package or a program temporary fix (PTF). A PTF file has a file extension of .bff (for example, devices.sddpcm.52.rte.2.1.0.1.bff) and can either be applied or committed when it is installed. If the PTF is committed, the update to SDDPCM is permanent; to remove
the PTF, you must uninstall SDDPCM. If the PTF is applied, you can choose to commit or to reject the PTF at a later time. If you decide to reject the PTF, you will not need to uninstall SDDPCM from the host system.
Note: If you are not upgrading the operating system, regardless of whether you have SAN boot devices, you can update SDDPCM packages by installing a newer base package or a program temporary fix. Otherwise, see Migrating SDDPCM during an AIX OS upgrade with multipath SAN boot devices (on supported storage hdisks) on page 119.
After applying the base package or the PTF, reboot the system. The SDDPCM server daemon should automatically start after restarting the system. If it does not start automatically, start the SDDPCM server daemon manually.
Use the SMIT facility to update SDDPCM. The SMIT facility has two interfaces, nongraphical (enter smitty to invoke the nongraphical user interface) and graphical (enter smit to invoke the GUI).
If the base package or PTF is on a CD-ROM, you must mount the CD file system. See Creating and mounting the CD-ROM filesystem on page 111 for directions on how to mount the CD file system. In the following procedure, /dev/cd0 is used for the CD drive address. The drive address can be different in your environment.
Complete the following SMIT steps to update the SDDPCM package on your system:
1. Log in as the root user.
2. Type smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
3. Select Install Software and press Enter.
4. Press F4 to display the INPUT Device/Directory for Software panel.
5. Select either a CD drive that you are using for the installation or a local directory where the packages reside; for example, /dev/cd0, and press Enter.
6. Press Enter again. The Install Software panel is displayed.
7. Select Software to Install and press F4. The Software to Install panel is displayed.
8. Select the base package or the PTF package that you want to install.
9. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software that you selected to install.
10. If you only want to apply the PTF, select Commit software Updates? and tab to change the entry to no. The default setting is to commit the PTF. If you specify no to Commit Software Updates?, ensure that you specify yes to Save Replaced Files?.
11. Check the other default option settings to ensure that they are what you need.
12. Press Enter to install. SMIT responds with the following message:
+---------------------------------------------------------------------+
| ARE YOU SURE??                                                       |
| Continuing may delete information you may want to keep.              |
| This is your last chance to stop before continuing.                  |
+---------------------------------------------------------------------+
13. Press Enter to continue. The installation process can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT.
15. Unmount the CD-ROM file system and remove the compact disc.
10. Press Enter to continue. The commit or reject process can take several minutes to complete.
11. When the installation is complete, press F10 to exit from SMIT.
Note: You do not need to restart the system even though the bosboot message may indicate that a restart is necessary.
In order to support 1200 supported storage device LUNs, system administrators should first determine whether the system has sufficient resources to support a large number of devices. See Preparing your system to configure more than 600 supported storage devices or to handle a large amount of I/O after queue depth is disabled on page 40 for more information.
For AIX blade servers in an IBM BladeCenter S Chassis connected to RSSM devices, refer to the RSSM documentation at the following URL for the maximum number of RSSM LUNs supported:
http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020
For AIX 5.3, a single host should manage a maximum of 1024 devices when devices have been enabled for Open HyperSwap on the host, with 8 logical paths configured for each copy set in the session. For AIX 6.1, a single host should manage a maximum of 1024 devices when devices have been enabled for Open HyperSwap on the host, with 16 logical paths configured for each copy set in the session.
Migrating SDDPCM
The following sections discuss the methods of migrating SDDPCM with and without SAN boot devices:
v Migrating the supported storage SAN boot device or nonboot volume group from AIX default PCM to SDDPCM
v Migrating from SDDPCM to the AIX default PCM or to SDD on page 118
v Migrating from SDD with SAN boot devices (on supported storage hdisks) to SDDPCM with multipath SAN boot devices on page 119
Migrating the supported storage SAN boot device or nonboot volume group from AIX default PCM to SDDPCM
The default reserve policy of the AIX base PCM is a single-path policy, which is SCSI-2 reserve. The path selection algorithm is fail_over, which means that only one path is opened at a time and that path holds the SCSI-2 reserve on the disk. All I/O is routed to this path. This reserve policy and path selection algorithm can cause problems if, after the SDDPCM packages are installed, you open the default PCM device before you restart the system. The default PCM device can be opened if:
v You build a volume group and file system with AIX default PCM devices, and leave the volume groups active and file system mounted
v You configure default PCM devices as the backing devices of a virtual target device when you are in a VIOS environment
After the system starts, you might see some paths in the INVALID state. The INVALID state means that the path failed to open. This is because the SCSI-2 reserve is not released during the system restart; thus, only the paths previously opened with SCSI-2 reserve can be opened and used for I/O after system restart. You might not see paths in the INVALID state if your system is at AIX 5.2 TL10 or later or at AIX 5.3 TL07 or later, or if you have IY83717 or IY83847 installed on your system. Instead, you might see a heavier I/O select count on one path. This is because the SCSI-2 reserve is not released during the system restart. Even if all the paths are allowed to be opened, only opened paths that previously made SCSI-2 reserve can actually be used for I/O.
If you have supported storage SAN boot devices that are configured with AIX default PCM, and the reserve policy is single_path (SCSI-2 reserve), switching the boot devices from AIX default PCM to SDDPCM might result in this reservation conflict situation. If you install an SDDPCM version earlier than 2.2.0.1, you must always run the relbootrsv command to release the SCSI-2 reserve on SAN boot devices after you install the SDDPCM host attachment package and the SDDPCM package. Run the relbootrsv command before you restart the system and then run the following command against the hdisks that are part of the rootvg to verify that they are no longer reserved.
# pcmquerypr -Vh /dev/hdisk6
connection type: fscsi0
open dev: /dev/hdisk6
Attempt to read reservation key...
Attempt to read registration keys...
Read Keys parameter
Generation : 0
Additional Length: 0
resrvpolicy= no_reserve
Reserve Key provided by current host = none (hex)02bbb003
Not reserved.
If you install SDDPCM version 2.2.0.1 or later, the SCSI-2 reserve on SAN boot devices is released automatically during system boot.
In a VIOS environment, reservation conflict problems can occur on a virtual target device that used to have a default PCM device as its backing device. To prevent this problem, perform one of the following actions:
v Switch from the AIX default PCM to SDDPCM before you use the AIX default PCM device as the backing device of a virtual target device.
v Before switching from the AIX default PCM to SDDPCM, put the virtual target devices in the Define state. This properly closes the AIX default PCM and releases the SCSI-2 reserve before migrating to SDDPCM.
Reservation conflict problems can also occur on nonboot volume groups. To prevent this problem, perform one of the following actions:
v Switch from the AIX default PCM to SDDPCM before you make any volume groups and file systems.
v To switch from the AIX default PCM to SDDPCM, you must unmount file systems and vary off the volume group of the AIX default PCM to release the SCSI-2 reserve on the volume group before system restart.
v Issue relbootrsv VGname to release the SCSI-2 reserve on the active, nonboot volume group devices before you restart the system.
Note: If you specify a VGname (volume group name), relbootrsv releases the SCSI-2 reserve of the specified non-SAN boot volume group; otherwise, it releases the SCSI-2 reserve of a SAN boot volume group (rootvg).
Migrating from SDDPCM to the AIX default PCM or to SDD
To migrate from SDDPCM to the AIX default PCM or to SDD, you must first unconfigure the devices, stop the SDDPCM server daemon, and then uninstall the SDDPCM package and the SDDPCM host attachment package. See Removing SDDPCM from an AIX host system on page 123 for directions on uninstalling SDDPCM. After you uninstall SDDPCM, you can then restart the system to migrate supported storage MPIO devices to the AIX default PCM. If you want to migrate supported storage devices to SDD devices, you must then install the supported storage device host attachment for SDD and the appropriate SDD package for your system. Then restart the system to configure the supported storage devices to SDD vpath devices.
Migrating from SDD with SAN boot devices (on supported storage hdisks) to SDDPCM with multipath SAN boot devices
If you have supported storage devices configured with SDD and there are SAN boot devices with supported storage hdisk devices, you need to contact IBM Customer Support for migration from SDD to SDDPCM.
Migrating SDDPCM during an AIX OS upgrade with multipath SAN boot devices (on supported storage hdisks)
SDDPCM provides different packages to match the AIX OS level. If an AIX system is going to be upgraded to a different OS level, for example, from AIX 5.3 to AIX 6.1, you need to install the corresponding SDDPCM package for that OS level.
If you want to upgrade the AIX OS and there are SAN boot devices with SDDPCM supported storage hdisk devices, you need to contact IBM Customer Support for migration of SDDPCM during the OS upgrade.
If you are not in a SAN boot environment, or you are only upgrading the AIX OS Technology Level or Service Pack, for example, from AIX 5.3 TL04 to AIX 5.3 TL06, you can follow the procedures in Updating SDDPCM on page 114.
already started. See SDDPCM server daemon on page 138 for information describing how to check the daemon status and how to manually start the daemon. v shutdown -rF command to restart the system. After the system restarts, the SDDPCM server daemon (pcmsrv) should automatically start.
or
mkpath -l hdiskX -p sasY
When the command returns successfully, the paths are added to the devices.
To check the device configuration status, enter:
lspath -l hdiskX -H -F "name path_id parent connection status"
or
pcmpath query device X
To add a new fibre-channel adapter to existing available supported storage MPIO devices, enter:
cfgmgr -vl fscX
To add a new SAS adapter, depending on the parent driver, enter:
cfgmgr -vl mptsasX
or
cfgmgr -vl sissasX
To check the adapter configuration status, enter:
pcmpath query adapter
or
pcmpath query device
To dynamically remove all paths under a parent fibre-channel adapter from a supported storage MPIO device, enter:
rmpath -dl hdiskX -p fscsiY
To dynamically remove a fibre-channel or SAS adapter and all child devices from supported storage MPIO devices, use smit mpio, or enter the following on the command line:
rmdev -l fscsiX -R
or
rmdev -l sasY -R
To dynamically remove a particular path, run smit mpio, or enter one of the following commands on the command line:
rmpath -l hdiskX -p fscsiY -w connection location code
or
rmpath -l hdiskX -p sasY -w connection location code
or
rmpath -dl hdiskX -p fscsiY -w connection location code
or
rmpath -dl hdiskX -p sasY -w connection location code
Issue the following command to get a particular path connection location code:
lspath -l hdiskX -H -F "name path_id parent connection status"
Note: You cannot remove the last path from a supported storage MPIO device. The command will fail if you try to remove the last path from a supported storage MPIO device.
This AIX command also allows users to switch DS4000, DS5000, and DS3950 storage device configuration drivers among these driver options. However, switching device configuration drivers requires a system reboot to reconfigure the devices after running this command. The syntax of this command function is:
manage_disk_drivers -d device -o driver_option
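For illustration only, an invocation might look like the following; the device identifier and driver option shown here are hypothetical examples, so check the manage_disk_drivers documentation or its help output for the values that apply to your storage:

# switch the configuration driver for a DS4000-family device (example values)
manage_disk_drivers -d DS4700 -o AIX_SDDAPPCM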
6. Press Enter to begin the removal process. This might take a few minutes.
7. When the process is complete, the SDDPCM software package and the supported storage device host attachment for SDDPCM are removed from your system.
SDDPCM support for HACMP with Enhanced Concurrent Mode volume groups
Beginning with SDDPCM 2.1.2.0, SDDPCM supports HACMP V5.1, V5.3, and V5.4 on an AIX 5.2 TL07 (or later) and AIX 5.3 TL03 (or later) system with both concurrent and nonconcurrent resource groups. SDDPCM 2.4.0.0 supports HACMP V5.3 and V5.4 on an AIX 6.1 system with both concurrent and nonconcurrent resource groups.
This support requires definition of the shared volume groups as Enhanced Concurrent Mode volume groups, in either concurrent or nonconcurrent resource groups. This means that no reserve needs to be broken in response to a node failure, and hence any requirement on breaking reserves is removed. A special interaction between HACMP and LVM ensures that if the volume group is used in a nonconcurrent resource group, applications are allowed to access it on one node at a time. Only the no_reserve policy is supported in both concurrent and nonconcurrent resource groups.
The Enhanced Concurrent Mode volume groups are sufficient to ensure high availability. However, if ECM volume groups are in nonconcurrent resource groups, you should configure your SAN using the following guidelines:
v The interaction between HACMP and LVM to ensure that only one node has nonconcurrent access at a time is advisory locking. This is in contrast to the mandatory locking provided by SCSI reserves. To ensure that production data is not inadvertently modified by nodes that are not in the HACMP cluster, the following should be done:
  1. Use either physical cabling or zoning to ensure that only HACMP nodes have access to the shared LUNs. That is, non-HACMP nodes should be prevented by hardware from accessing the shared LUNs.
  2. Start HACMP on the cluster nodes at boot time. This ensures that HACMP will activate the appropriate access controls on the shared disk before applications have a chance to modify the access controls.
v Configure disk heartbeating to reduce the likelihood of one node considering the other dead and attempting to take over the shared disks. (This is known as a partitioned cluster, or split brain syndrome.) If the shared disks consist of multiple enclosures, use one disk in each enclosure as a heartbeat path.
Different storage systems or models might support different versions of HACMP and SDDPCM. For information, see the interoperability matrix for your storage:
http://www.ibm.com/systems/storage/disk/ess/
http://www.ibm.com/systems/storage/disk/ds6000/
http://www.ibm.com/systems/storage/disk/ds8000/
http://www.ibm.com/systems/storage/software/virtualization/svc/
For HACMP v5.1, v5.2, v5.3, and v5.4 for AIX5L support information, go to the following website:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.hacmp.doc/hacmpbooks.html
For HACMP up-to-date APAR information, go to the following website:
http://www14.software.ibm.com/webapp/set2/sas/f/hacmp/download/aix53.html
Current HACMP clustering software supports no_reserve policy with Enhanced Concurrent Mode volume group. HACMP support for persistent reserve policies for supported storage MPIO devices is not available.
http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020
You can also use the chdev command to change the path selection algorithm of a device. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation.
Use the following command to change the device path selection algorithm to round robin:
chdev -l hdiskX -a algorithm=round_robin
You can change the reserve_policy and algorithm for a device with one command. For example, to change the reserve policy to no_reserve and the path selection algorithm to round robin:
chdev -l hdiskX -a reserve_policy=no_reserve -a algorithm=round_robin
Figure 4. Workload imbalance when one link receives twice the load of the other links
Host X is attempting to balance the load across four paths (A1, A2, B1, B2). Host Y is attempting to balance the load across four paths (C2, C3, D2, D3). In Figure 5 on page 129, link 2 between the switch and the storage is more heavily loaded than link 1. This condition creates a throughput imbalance on link 2 that prevents optimal load balancing on Host X.
Figure 5. Workload imbalance when one link is more heavily loaded than another link
Host X is attempting to balance the load across four paths (A1, A2, B1, B2). Host Y has only one active path to the storage (C2). In Figure 6, one path is lost from a failed component, which results in the same workload imbalance shown in Figure 5.
Figure 6. Workload imbalance when one host sharing workload across two paths loses one path
Host X is attempting to balance the load across four paths (A1, A2, B1, B2). Host Y has two paths to the storage, but only one path is active (C1, D2). You can either use pcmpath set device algorithm lbp or chdev -l hdiskX -a algorithm=load_balance_port to configure the device with the load balancing port algorithm. To ensure that the load balancing port algorithm produces the desired result, all the hdisks on a host system that is attached to a given storage device must be configured with the same algorithm. Note: This algorithm improves performance by balancing drastically unbalanced throughput on multiple target ports of the same storage. It cannot completely balance the throughput on the target ports because SDDPCM manages I/O load balancing only on the host where it is installed.
pcmpath query port target port number
pcmpath query portstats target port number
To display performance information about the target ports, you can use the pcmpath query port target port number command and the pcmpath query portstats target port number command. For more information, see Using SDDPCM pcmpath commands on page 144.
hc_mode
Healthchecking supports the following modes of operations:
v Enabled - When this value is selected, the healthcheck command will be sent to paths that are opened with a normal path mode.
v Failed - When this value is selected, the healthcheck command is sent to paths that are in failed state.
v Nonactive - When this value is selected, the healthcheck command will be sent to paths that have no active I/O. This includes paths that are opened or in failed state.
If the algorithm selected is round robin or load balance, the healthcheck command will only be sent to failed paths, because the round robin and load-balancing algorithms route I/O to all opened paths that are functional. The default value setting of SDDPCM is nonactive.
Starting with SDDPCM 2.1.0.0, the pcmpath set device hc_mode command allows you to dynamically change the path healthcheck mode. See pcmpath set device hc_mode on page 179 for information about this command.
You can also use the chdev command to change the device path healthcheck mode. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation. To change the path healthcheck mode to failed, issue the following command:
chdev -l hdiskX -a hc_mode=failed
Starting with SDDPCM v2.1.2.3, a healthcheck function is added to the SDDPCM server daemon. The SDDPCM server daemon automatically starts or stops the healthcheck function on a device if you issue one of the following commands (see the example after the note below):
v pcmpath set device m hc_interval 0 on the fly to disable the device internal healthcheck function
v pcmpath set device m hc_interval n on the fly to enable the device internal healthcheck function
Note: The SDDPCM server daemon health checks only FAILED paths. It does not health check opened or idle paths as the SDDPCM internal healthcheck does. This is the difference between the SDDPCM internal healthcheck and the SDDPCM server daemon healthcheck.
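For example, assuming device number 3 is the hdisk in question and 60 seconds is the desired interval (both values are illustrative), the following commands disable and then re-enable its internal healthcheck on the fly:

pcmpath set device 3 hc_interval 0
pcmpath set device 3 hc_interval 60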
v Dynamic Tracking: I/O Hang after Host HBA Cable Pull
v Dynamic Tracking: Ioctl call may fail after N_Port ID Change
v Dynamic Tracking: Back-to-Back Cable Move May Delay Error Recovery
v Fast Fail/Dynamic Tracking: FC Device Inaccessible after Move
v Dynamic Tracking & MPIO: Multiple Cable Swap Cause Path Failure
Note: You must set the FC HBA fc_err_recov attribute to fast_fail for DS4000, DS5000, and DS3950 storage devices.
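For example, the following chdev command is a sketch only: fscsi0 is an illustrative adapter instance, the -P flag defers the change until the adapter is reconfigured or the system is restarted, and the dyntrk attribute is included only on the assumption that dynamic tracking is also wanted on that adapter.

chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P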
http://www-01.ibm.com/support/docview.wss?rs=1203&context=SWGD0 &context=SWG10&context=SWGC0&context=HW182&dc=DB550 &q1=dynamic+fast+tracking+%22and%22+Fast+I%2fO+failure&uid=isg1IY37183 &loc=en_US&cs=UTF-8&lang=en Installing APAR IY37183 also installs the file: /usr/lpp/bos/README.FIBRE-CHANNEL This file has more information about the Dynamic Tracking and Fast I/O Failure features.
You can also use the chdev command to change the device controller healthcheck delay_time. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation. Alternatively, you can use the chdev -P command, which requires a reboot for the change to take effect. For example, to enable the device controller health check function, issue the following command to set cntl_delay_time and cntl_hcheck_int to nonzero values:
chdev -l hdiskX -a cntl_delay_time=60 -a cntl_hcheck_int=3
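If the device cannot be unconfigured immediately, a sketch of the deferred form (the -P flag writes the change to the ODM only, so it takes effect after the next system restart) is:

chdev -l hdiskX -a cntl_delay_time=60 -a cntl_hcheck_int=3 -P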
Configuring supported storage system MPIO devices as the SAN boot device
A supported storage MPIO device can be used as the system boot device. To configure the supported storage device boot device with the SDDPCM module:
1. Select one or more supported storage system devices as the boot device.
2. Install one of the following AIX operating systems on the selected supported storage devices:
v If the selected supported storage device is ESS, the required operating system is AIX 5.2 TL06 (or later), AIX 5.3 TL02 (or later), or AIX 6.1 TL0 (or later).
v If the selected supported storage device is DS6000, the required operating system is AIX 5.2 TL07 (or later), AIX 5.3 TL03 (or later), or AIX 6.1 TL0 (or later).
v If the selected supported storage device is DS8000, the required operating system is AIX 5.2 TL07 (or later), AIX 5.3 TL03 (or later), or AIX 6.1 TL0 (or later).
v If the selected supported storage device is SAN Volume Controller, the required operating system is AIX 5.2 TL07 (or later), AIX 5.3 TL03 (or later), or AIX 6.1 TL0 (or later).
v If the selected supported storage device is DS4000, DS5000, or DS3950, the required operating system is AIX53 TL8 (or later) or AIX61 TL2 (or later).
v If the selected supported storage device is RSSM, the required operating system is AIX 6.1 TL03 (or later). Refer to the following URL for RSSM documentation with instructions on how to install the AIX operating system on a JS blade with remote volumes: http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020
3. Restart the system. The supported storage boot device is configured as an MPIO device with the AIX default PCM.
Note: For IBM DS4000 storage devices, if the OS is AIX53, or if the system was migrated from AIX53 to AIX61 (or later), the DS4000 devices are configured by the AIX FCPArray(RDAC) device driver instead of the AIX native MPIO PCM.
4. Install the supported storage device host attachment for SDDPCM and the SDDPCM packages.
5. To release the scsi-2 reserve on boot devices with SDDPCM v2.2.0.0 or earlier, you must issue the relbootrsv command. For boot devices with SDDPCM v2.2.0.1 or later, the scsi-2 reserve is automatically released during the system reboot. If you want to release non-rootvg scsi-2 reserves, provide the volume group name as a parameter. For example: relbootrsv vgname.
6. Restart the system. All supported storage MPIO devices, including supported storage MPIO SAN boot devices, are now configured with SDDPCM.
When you convert a boot device with SDDPCM v2.2.0.0 and earlier from the AIX default PCM to SDDPCM, you must issue the relbootrsv command, as shown in step 5. If you fail to do so, you might encounter a problem where either all paths of the boot device cannot be opened successfully or they can be opened but cannot be used for I/O. This problem occurs because the AIX default PCM has a default reserve policy of single-path (SCSI-2). See Migrating the supported storage SAN boot device or nonboot volume group from AIX default PCM to SDDPCM on page 117 for information about solving this problem.
There is a known problem during the SAN boot configuration. After the system is restarted following the operating system installation on the supported storage MPIO devices, you might see that some paths of the rootvg are in the Failed path state. This can happen even if the system is successfully restarted. This problem is corrected in AIX 5.2 TL08 or later and AIX 5.3 TL04 or later. Apply the following APARs on these OS levels after the first reboot that follows the operating system installation:
v AIX 5.2 TL08 or later: apply APAR IY83717
v AIX 5.3 TL04 or later: apply APAR IY83847
No APAR is available to correct this problem on AIX52 TL07 and AIX53 TL03. If you configure a SAN boot device with supported storage MPIO devices on one of these operating system levels and experience this problem, you can manually recover the failed paths by issuing one of the following commands, as shown in the example after this list:
v chpath -s E -l hdiskX -p fscsiX
v pcmpath set device M path N online
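For example, assuming hdisk0 is the SAN boot disk and fscsi0 is the parent adapter of the failed paths (both names are illustrative), you could list the path states and then recover a failed path as follows:

lspath -l hdisk0 -F "status name parent connection"
chpath -s E -l hdisk0 -p fscsi0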
Support system dump device with the supported storage system MPIO device
You can choose a supported storage MPIO device to configure with the system primary and secondary dump devices. You can configure the system dump device with the supported SAN boot device or with a nonboot device. The path selection algorithm for the system dump device automatically defaults to failover_only when the system dump starts. During the system dump, only one path is selected for dump requests. If the first path fails, I/O is routed to the next selected path.
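As a sketch only (the logical volume name dumplv, the volume group datavg, the size of 16 logical partitions, and hdisk6 are all illustrative assumptions), you could create a dump logical volume on a supported storage MPIO device and assign it as the primary dump device:

mklv -y dumplv -t sysdump datavg 16 hdisk6
sysdumpdev -P -p /dev/dumplv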
You must apply all the APARs for AIX 5.2 TL08 and later, or AIX 5.3 TL04 and later.
trcstop
To read the report, enter:
trcrpt | pg
To save the trace data to a file, enter:
trcrpt > filename
Note: To perform the SDDPCM trace function, you must have the bos.sysmgt.trace installation package installed on your system.
where NNN is the process ID number. The status of pcmsrv should be Active if the SDDPCM server has automatically started. If the SDDPCM server has not started, the status will be Inoperative. Go to Starting the SDDPCM server manually to proceed. Because pcmsrv is bound to the SDDPCM kernel extension module, pcmsrv can fail to start if the SDDPCM is installed and the supported storage MPIO devices have not been configured yet. In this case, you can either restart the system or you can start pcmsrv manually after supported storage MPIO devices are configured. Because pcmsrv is bound to the SDDPCM kernel extension module, in order to uninstall or upgrade SDDPCM, you must stop pcmsrv so that the SDDPCM kernel extension can be unloaded from the system. During an upgrade, the new SDDPCM kernel extension can be loaded into the system when supported storage MPIO devices are configured.
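For example, assuming pcmsrv is registered with the AIX System Resource Controller, as the start and stop instructions in this section imply, you can check its status at any point with:

lssrc -s pcmsrv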
startpcmsrv
For sddpcm 2.6.0.1 or prior releases, you can start pcmsrv by entering:
startsrc -s pcmsrv -e XPG_SUS_ENV=ON
Go to Verifying if the SDDPCM server has started on page 138 to see if you successfully started the SDDPCM server.
AE daemon
For SDDPCM 3.0.0.0 or later releases, a UNIX application daemon, the AE server, is added to the SDDPCM path control module. The AE server daemon interfaces with the TPC-R server and the SDDPCM kernel driver to provide Open HyperSwap functionality.
where NNN is the process ID number. The status of AE should be Active if the AE server has automatically started. If the AE server has not started, the status will be Inoperative. Go to Starting the AE server manually to proceed.
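For example, assuming the AE daemon is registered with the System Resource Controller under the subsystem name AE, as the verification step above implies, its status can be checked with:

lssrc -s AE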
pcmquerypr
The pcmquerypr command provides a set of persistent reserve functions. This command supports the following persistent reserve service actions:
v Read persistent reservation key
v Release persistent reserve
v Preempt-abort persistent reserve
v Clear persistent reserve and registration keys
This command can be issued to all system MPIO devices, including MPIO devices not supported by SDDPCM. The pcmquerypr command can be used in the following situation: the reserve policy of the SDDPCM MPIO devices is set to either persistent reserve exclusive host access (PR_exclusive) or persistent reserve shared host access (PR_shared), and a persistent reserve has been left on the device by a node, blocking access by
another node. The pcmquerypr command can be used in this situation to query, preempt, or clear the persistent reserve left by a node or server on the devices. There are other cases where you might need this tool to solve persistent reserve related problems, such as an unexpected persistent reserve left on devices because of a failure to release the persistent reserve.
Use caution with this command, especially when implementing the preempt-abort or clear persistent reserve service action. The preempt-abort service action not only preempts the current persistent reserve key; it also aborts tasks on the LUN that originated from the initiators that are registered with the preempted key. With the clear service action, both the persistent reservation and the reservation key registrations are cleared from the device.
The following information describes in detail the syntax and examples of the pcmquerypr command.
Description
The pcmquerypr command implements certain SCSI-3 persistent reservation commands on a device. The device can be a supported storage MPIO device. This command supports persistent reserve IN and OUT service actions, such as read reservation key, release persistent reservation, preempt-abort persistent reservation, or clear persistent reservation and reservation key registrations.
Syntax
pcmquerypr -p -c -r -v -V -h /dev/PVname
Flags:
-p  If the persistent reservation key on the device is different from the current host reservation key, the existing persistent reservation key on the device is preempted. This option can be issued only when the device is not already open.
-c  If there is a persistent reservation on the device, the persistent reservation is removed and all reservation key registrations on the device are cleared. This option can be issued only when the device is not already open.
-r  Removes the persistent reservation key on the device made by this host. This option can be issued only when the device is not already open.
-v  Displays the persistent reservation key if it exists on the device.
-V  Verbose mode. Prints detailed message.
Return code
If the command is issued without the -p, -r, or -c option, it returns:
0  There is no persistent reservation key on the device, or the device is reserved by the current host.
1  The persistent reservation key is different from the host reservation key.
2  The command failed.
If the command is issued with one of the -p, -r, or -c options, it returns:
0  The command was successful.
2  The command failed.
Examples
1. To query the persistent reservation on a device, enter pcmquerypr -h /dev/hdisk30.
This command queries the persistent reservation on the device without displaying it. If there is a persistent reserve on the disk, it returns 0 if the device is reserved by the current host and 1 if the device is reserved by another host.
2. To query and display the persistent reservation on a device, enter pcmquerypr -vh /dev/hdisk30.
Same as Example 1. In addition, it displays the persistent reservation key.
3. To query and display which type of persistent reservation is on a device, enter pcmquerypr -Vh /dev/hdisk#.
The following output indicates there is a SCSI-2 reserve on the device:
# pcmquerypr -Vh /dev/hdisk27 connection type: fscsi3 open dev: /dev/hdisk27 Attempt to read reservation key... *> ioctl(PR_READ) error; errno = 5 (I/O error) *> status_validity=0x1, scsi_bus_status=0x2 Attempt to read reservation key... *> ioctl(PR_READ) error; errno = 5 (I/O error) *> status_validity=0x1, scsi_bus_status=0x18 Attempt to read reservation key... *> ioctl(PR_READ) error; errno = 5 (I/O error) *> status_validity=0x1, scsi_bus_status=0x18 Attempt to read reservation key... *> ioctl(PR_READ) error; errno = 5 (I/O error) *> status_validity=0x1, scsi_bus_status=0x18
The following output indicates that there is SCSI-3 reserve on the device:
# pcmquerypr -Vh /dev/hdisk43 connection type: fscsi0 open dev: /dev/hdisk43 Attempt to read reservation key... *> ioctl(PR_READ) error; errno = 5 (I/O error) *> status_validity=0x1, scsi_bus_status=0x2 Attempt to read reservation key... Attempt to read registration keys... Read Keys parameter Generation : 12 Additional Length: 32 Key0 : 0x3236303232344446 Key1 : 0x3236303232344446 Key2 : 0x3236303232344446 Key3 : 0x3236303232344446 resrvpolicy= no_reserve Reserve Key provided by current host = none (hex)0924ffff
Reserve Key on the device: 0x3236303232344446 Reservation key type: 0x6 Device is reserved by SDD device.
4. To release the persistent reservation if the device is reserved by the current host, enter pcmquerypr -rh /dev/hdisk30. This command releases the persistent reserve if the device is reserved by the current host. It returns 0 if the command succeeds or the device is not reserved. It returns 2 if the command fails. 5. To reset any persistent reserve and clear all reservation key registrations, enter pcmquerypr -ch /dev/hdisk30. This command resets any persistent reserve and clears all reservation key registrations on a device. It returns 0 if the command succeeds, or 2 if the command fails. 6. To remove the persistent reservation if the device is reserved by another host, enter pcmquerypr -ph /dev/hdisk30. This command removes an existing registration and persistent reserve from another host. It returns 0 if the command succeeds or if the device is not persistent reserved. It returns 2 if the command fails.
pcmgenprkey
Description The pcmgenprkey command can be used to set or clear the PR_key_value ODM attribute for all SDDPCM MPIO devices. It also can be used to query and display the reservation policy of all SDDPCM MPIO devices and the persistent reserve key, if those devices have a PR key. Syntax
pcmgenprkey -v -u -k prkeyvalue
Examples
1. To set the persistent reserve key on all SDDPCM MPIO devices with a provided key value, issue pcmgenprkey -u -k 0x1234567890abcedf.
This creates a customized PR_key_value attribute with the provided key value for all SDDPCM MPIO devices, except the devices that already have the same customized PR key attribute. The provided key must contain either a decimal integer or a hexadecimal integer.
2. To clear the PR_key_value attribute from all SDDPCM MPIO devices, issue pcmgenprkey -u -k none.
3. To update the customized PR_key_value attribute with the HACMP-provided Preservekey or the output string from the uname command for all the SDDPCM MPIO devices, issue pcmgenprkey -u.
When the -u option is used without the -k option, this command searches for the HACMP-provided Preservekey attribute and uses that value as the PR key if that attribute is available; otherwise, it uses the output string from the uname command as the PR key.
4. To display the reserve_policy, the PR_key_value attribute, and the persistent reserve key attribute of all the SDDPCM devices, issue pcmgenprkey -v. If the MPIO device does not have a persistent reserve key, a value of none is displayed.
sddpcm_get_config
Description The sddpcm_get_config command can be used to display information about MPIO-based DS4000 or DS5000 subsystems and the hdisks associated with them. Specifically, it displays information about the frame (subsystem), including the frame's assigned name and worldwide name, and a list of hdisks (only those currently in the Available state) that are associated with that subsystem, including the hdisk name, LUN number, current ownership, preferred path, and the user-assigned label for that volume. Attention: To protect the configuration database, the sddpcm_get_config command is not interruptible, because stopping this command before it completes could result in a corrupted database. Syntax
sddpcm_get_config -v -l hdiskN -A
Flags
-l hdiskN  List information for the subsystem which includes hdiskN.
-A         List information for all attached subsystems.
-V         List additional information, largely of limited value, including the MPIO SDDPCM internal frame number, number of controllers, partition number, and partition count.
online or offline, or set all paths that are connected to a supported storage device port or ports to online or offline. This section includes descriptions of these commands. Table 14 provides an alphabetical list of these commands, a brief description, and where to go in this chapter for more information.
Table 14. Commands
pcmpath clear device count
    Dynamically clears the error count or error/select counts to zero.
pcmpath disable ports
    Places paths connected to certain ports offline.
pcmpath enable ports
    Places paths connected to certain ports online.
pcmpath open device path
    Opens an INVALID path.
pcmpath query adapter
    Displays information about adapters.
pcmpath query adaptstats
    Displays performance information for all FCS adapters that are attached to SDDPCM devices.
pcmpath query device
    Displays information about devices.
pcmpath query devstats
    Displays performance information for a single SDDPCM device or all SDDPCM devices.
pcmpath query essmap
    Displays each device, path, location, and attributes.
pcmpath query port
    Displays information about a single target port or all target ports that are attached to SDDPCM-configured MPIO devices.
pcmpath query portmap
    Displays the status of the logic paths that are managed by SDDPCM between the host and the storage ports.
pcmpath query portstats
    Displays performance information about a single target port or all target ports that are attached to SDDPCM-configured MPIO devices.
pcmpath query session
    Displays the session of the Open HyperSwap devices configured on the host.
pcmpath query version
    Displays the version of the currently installed SDDPCM.
pcmpath query wwpn
    Displays the worldwide port name (WWPN) for all fibre-channel adapters.
pcmpath set adapter
    Sets all device paths that are attached to an adapter to online or offline.
pcmpath set device path
    Sets the path of a device to online or offline.
pcmpath set device algorithm
    Sets all or some of the supported storage MPIO device path selection algorithm.
pcmpath set device hc_interval
    Sets all or some of the supported storage MPIO device health check time interval.
pcmpath set device hc_mode
    Sets all or some of the supported storage MPIO device health check mode.
Table 14. Commands (continued)
pcmpath set device cntlhc_interval
    Sets all or some of the supported active/passive MPIO device controller health check time interval. (page 180)
pcmpath set device cntlhc_delay
    Sets all or some of the supported active/passive MPIO device controller health check delay_time. (page 181)
Parameters:
device number 1 <device number 2>
    When two device numbers are entered, this command applies to all the devices whose index numbers fit within the range of these two device index numbers.
error
    Clears the error counter of the specified SDDPCM MPIO device or devices.
all
    Clears both the select counter and the error counter of the specified SDDPCM MPIO device or devices.
Examples: If you have a non-zero select counter or error counter, entering pcmpath query device 20 causes the following output to be displayed:
DEV#: 20 DEVICE NAME: hdisk20 TYPE: 2145 ALGORITHM: Load Balance SERIAL: 60050768018180235800000000000463 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi1/path0 CLOSE NORMAL 14 0 1* fscsi1/path1 CLOSE NORMAL 8 0 2 fscsi3/path2 CLOSE NORMAL 10009 0 3* fscsi3/path3 CLOSE NORMAL 8 0
If you enter the pcmpath clear device 20 count all command and then enter pcmpath query device 20, the following output is displayed:
DEV#: 20 DEVICE NAME: hdisk20 TYPE: 2145 ALGORITHM: Load Balance SERIAL: 60050768018180235800000000000463 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi1/path0 CLOSE NORMAL 0 0 1* fscsi1/path1 CLOSE NORMAL 0 0 2 fscsi3/path2 CLOSE NORMAL 0 0 3* fscsi3/path3 CLOSE NORMAL 0 0
Parameters:
connection
    The connection code must be in one of the following formats:
    v Single port = R1-Bx-Hy-Zz
    v All ports on card = R1-Bx-Hy
    v All ports on bay = R1-Bx
    Use the output of the pcmpath query essmap command to determine the connection code.
essid
    The supported storage device serial number, given by the output of pcmpath query portmap command.
Examples: If you enter the pcmpath disable ports R1-B1-H3 ess 12028 command and then enter the pcmpath query device command, the following output is displayed:
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20712028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE OFFLINE 6 0 1 fscsi0/path1 CLOSE NORMAL 9 0 2 fscsi1/path2 CLOSE OFFLINE 11 0 3 fscsi1/path3 CLOSE NORMAL 9 0 DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20712028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE OFFLINE 8702 0 1 fscsi0/path1 CLOSE NORMAL 8800 0 2 fscsi1/path2 CLOSE OFFLINE 8816 0 3 fscsi1/path3 CLOSE NORMAL 8644 0 DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20912028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE OFFLINE 8917 0 1 fscsi0/path1 CLOSE NORMAL 8919 0 2 fscsi1/path2 CLOSE OFFLINE 9008 0 3 fscsi1/path3 CLOSE NORMAL 8944 0 DEV#: 6 DEVICE NAME: hdisk6 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20B12028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE OFFLINE 9044 0 1 fscsi0/path1 CLOSE NORMAL 9084 0 2 fscsi1/path2 CLOSE OFFLINE 9048 0 3 fscsi1/path3 CLOSE NORMAL 8851 0
DEV#: 7 DEVICE NAME: hdisk7 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20F12028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE OFFLINE 9089 0 1 fscsi0/path1 CLOSE NORMAL 9238 0 2 fscsi1/path2 CLOSE OFFLINE 9132 0 3 fscsi1/path3 CLOSE NORMAL 9294 0 DEV#: 8 DEVICE NAME: hdisk8 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 21012028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE OFFLINE 9059 0 1 fscsi0/path1 CLOSE NORMAL 9121 0 2 fscsi1/path2 CLOSE OFFLINE 9143 0 3 fscsi1/path3 CLOSE NORMAL 9073 0
Parameters:
connection
    The connection code must be in one of the following formats:
    v Single port = R1-Bx-Hy-Zz
    v All ports on card = R1-Bx-Hy
    v All ports on bay = R1-Bx
    Use the output of the pcmpath query essmap command to determine the connection code.
essid
    The supported storage device serial number, given by the output of pcmpath query portmap command.
Examples: If you enter the pcmpath enable ports R1-B1-H3 ess 12028 command and then enter the pcmpath query device command, the following output is displayed:
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20112028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 6 0 1 fscsi0/path1 CLOSE NORMAL 9 0 2 fscsi1/path2 CLOSE NORMAL 11 0 3 fscsi1/path3 CLOSE NORMAL 9 0 DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20712028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 8702 0 1 fscsi0/path1 CLOSE NORMAL 8800 0 2 fscsi1/path2 CLOSE NORMAL 8816 0 3 fscsi1/path3 CLOSE NORMAL 8644 0 DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20912028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 8917 0 1 fscsi0/path1 CLOSE NORMAL 8919 0 2 fscsi1/path2 CLOSE NORMAL 9008 0 3 fscsi1/path3 CLOSE NORMAL 8944 0 DEV#: 6 DEVICE NAME: hdisk6 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20B12028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 9044 0 1 fscsi0/path1 CLOSE NORMAL 9084 0 2 fscsi1/path2 CLOSE NORMAL 9048 0 3 fscsi1/path3 CLOSE NORMAL 8851 0
DEV#: 7 DEVICE NAME: hdisk7 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20F12028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 9089 0 1 fscsi0/path1 CLOSE NORMAL 9238 0 2 fscsi1/path2 CLOSE NORMAL 9132 0 3 fscsi1/path3 CLOSE NORMAL 9294 0 DEV#: 8 DEVICE NAME: hdisk8 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 21012028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 9059 0 1 fscsi0/path1 CLOSE NORMAL 9121 0 2 fscsi1/path2 CLOSE NORMAL 9143 0 3 fscsi1/path3 CLOSE NORMAL 9073 0
Parameters: device number The logical device number of this hdisk, as displayed by the pcmpath query device command. path number The path ID that you want to change, as displayed under Path Name by the pcmpath query device command. Examples: If you enter the pcmpath query device 23 command, the following output is displayed:
DEV#: 23 DEVICE NAME: hdisk23 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20112028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi1/path0 OPEN NORMAL 557 0 1 fscsi1/path1 OPEN NORMAL 568 0 2 fscsi0/path2 INVALID NORMAL 0 0 3 fscsi0/path3 INVALID NORMAL 0 0
Note that the current state of path 2 and path 3 is INVALID, which means that paths 2 and 3 failed to open. If the root cause of the path 2 open failure is fixed and you enter the pcmpath open device 23 path 2 command, the following output is displayed:
Success: device 23 path 2 opened DEV#: 23 DEVICE NAME: hdisk23 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20112028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi1/path0 OPEN NORMAL 557 0 1 fscsi1/path1 OPEN NORMAL 568 0 2 fscsi0/path2 OPEN NORMAL 0 0 3 fscsi0/path3 INVALID NORMAL 0 0
After issuing the pcmpath open device 23 path 2 command, the state of path 2 becomes OPEN. The terms used in the output are defined as follows: Dev# The logical device number of this hdisk.
Device name The name of this device. Type The device product ID from inquiry data.
Algorithm The current path selection algorithm for the device. The algorithm selected is one of the following types: load balancing, load balancing port, round robin, or failover. Serial The LUN for this device. Path# The path index displayed by the pcmpath query device command.
Adapter
    The name of the adapter to which the path is attached.
Path Name
    The name of the path. The number displayed as part of the name is the path ID of this path that is used by the pcmpath open device path and pcmpath set device path commands.
State
    The condition of each path of the named device:
    Open          Path is in use.
    Close         Path is not being used.
    Close_Failed  Path is broken and is not being used.
    Failed        Path is opened, but no longer functional because of error.
    Invalid       The path failed to open.
Mode The mode of the named path, which is either Normal or Offline. Select The number of times this path was selected for I/O. Errors The number of I/O errors that occurred on this path.
Parameters: adapter number The index number of the adapter for which you want information displayed. If you do not enter an adapter index number, information about all adapters is displayed. aa The adapter of active/active storage controller devices. ap The adapter of active/passive storage controller devices. Examples: If you enter the pcmpath query adapter command and your system has both Dual Active or Active/Asymmetrc (for example, ESS) and Active/Passive (for example, DS4800) devices, the following output is displayed:
Total Dual Active and Active/Asymmetrc Adapters : 2

Adpt#   Name     State    Mode     Select    Errors   Paths   Active
    0   fscsi2   NORMAL   ACTIVE   920506         0      80       38
    1   fscsi0   NORMAL   ACTIVE   921100         0      80       38

Total Active/Passive Adapters : 2

Adpt#   Name     State    Mode     Select    Errors   Paths   Active
    0   fscsi0   NORMAL   ACTIVE        0         0      30        0
    1   fscsi1   NORMAL   ACTIVE        0         0      30        0
If you enter the pcmpath query adapter command on a host with RSSM LUNs, the following output is displayed:
Total Dual Active and Active/Asymmetrc Adapters : 1

Adpt#   Name   State    Mode     Select     Errors   Paths   Active
    0   sas1   NORMAL   ACTIVE   1680197         0      32       32
The terms used in the output are defined as follows:
Adpt # The index number of the adapter.
Name The name of the adapter.
State
    The condition of the named adapter. It can be either:
    Normal    Adapter is in use.
    Degraded  One or more opened paths are not functioning.
    Failed    All opened paths that are attached to this adapter are not functioning.
Mode The mode of the named adapter, which is either Active or Offline. Select The number of times this adapter was selected for I/O.
Errors The number of errors that occurred on all paths that are attached to this adapter. Paths The number of paths that are attached to this adapter.
Active The number of functional paths that are attached to this adapter. The number of functional paths is equal to the number of opened paths attached to this adapter minus any that are identified as failed or disabled (offline).
Parameters: adapter number The index number of the adapter for which you want information displayed. If you do not enter an adapter index number, information about all adapters is displayed. aa The adapter of active/active storage controller devices. ap The adapter of active/passive storage controller devices. Examples: If you enter the pcmpath query adaptstats command and your system only has Active/Passive devices (for example, DS4800), the following output is displayed:
Total Active/Passive Adapters : 2

Adapter #: 0
=============
             Total Read   Total Write   Active Read   Active Write   Maximum
   I/O:               0             0             0              0         0
SECTOR:               0             0             0              0         0

Adapter #: 1
=============
             Total Read   Total Write   Active Read   Active Write   Maximum
   I/O:               0             0             0              0         0
SECTOR:               0             0             0              0         0
/*-------------------------------------------------------------------------*/
The terms used in the output are defined as follows: Total Read v I/O: total number of completed read requests v SECTOR: total number of sectors that have been read Total Write v I/O: total number of completed write requests v SECTOR: total number of sectors that have been written Active Read v I/O: total number of read requests in process v SECTOR: total number of sectors to read in process Active Write v I/O: total number of write requests in process v SECTOR: total number of sectors to write in process
Maximum v I/O: the maximum number of queued I/O requests v SECTOR: the maximum number of queued sectors to Read or Write
For DS4000 and DS5000 SDDPCM MPIO devices, only opened passive paths are displayed with the '*' mark in the pcmpath query device command. Beginning with SDDPCM 2.1.3.0, two new options are added to the device query command. The first option lets you specify two numbers to query a set of devices; the second option -i x y lets you repeat the query command every x seconds for y times. Beginning with SDDPCM 3.0.0.0, a new session name option is added to the device query command that allows you to query a session name to show the set of devices in that session. The pcmpath query device commands display only supported storage MPIO devices that are configured with the SDDPCM module. Any AIX internal disks or non-SDDPCM-configured MPIO devices are not displayed. Syntax:
pcmpath query device device number device number m device number n -d device model -S session name -i x -i x y
Parameters:
device number
    The device number refers to the logical device number of the hdisk.
device number_m device_number_n
    Use the device_number_m device_number_n option to provide a range of device index numbers.
device model
    Displays devices of a particular device model. The valid device models are:
    v 1722 - All 1722 devices (DS4300)
    v 1724 - All 1724 devices (DS4100)
    v 1742 - All 1742 devices (DS4400 and DS4500)
    v 1750 - All 1750 models (DS6000)
    v 1814 - All 1814 devices (DS4200 and DS4700)
    v 1815 - All 1815 devices (DS4800)
    v 1818 - All 1818 devices (DS5100 and DS5300)
    v 2105 - All 2105 models (ESS)
    v 2107 - All 2107 models (DS8000)
    v 2145 - All 2145 models (SAN Volume Controller)
    v 1820 - All 1820 devices (RSSM)
session name
    Displays the set of devices in the specified session.
i   Repeats the command every x seconds for y times. If you do not specify y, the command repeats indefinitely every x seconds.
Examples: If you enter the pcmpath query device 65 66 command, the following output is displayed: For the supported storage devices:
DEV#: 65 DEVICE NAME: hdisk65 TYPE: 1814 ALGORITHM: Load Balance SERIAL: 600A0B800011796C0000B68148283C9C =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0* fscsi0/path0 OPEN NORMAL 42 0 1* fscsi1/path2 OPEN NORMAL 44 0 2 fscsi0/path1 OPEN NORMAL 54 0 3 fscsi1/path3 OPEN NORMAL 52 0 DEV#: 66 DEVICE NAME: hdisk66 TYPE: 1814 ALGORITHM: Load Balance SERIAL: 600A0B80001179440000C1FE48283A08 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 44 1 1 fscsi1/path2 CLOSE NORMAL 57 0 2 fscsi0/path1 CLOSE NORMAL 0 0 3 fscsi1/path3 CLOSE NORMAL 1 0
If you enter the pcmpath query device 4 5 command, the following output is displayed:
DEV#: 4 DEVICE NAME: hdisk4 TYPE: 1820N00 ALGORITHM: Load Balance SERIAL: 6005076B07409F7F4B54DCE9000000B9 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 sas1/path0 OPEN NORMAL 1748651 0 1* sas1/path1 OPEN NORMAL 364 0 DEV#: 5 DEVICE NAME: hdisk5 TYPE: 1820N00 ALGORITHM: Load Balance SERIAL: 6005076B07409F7F4B54DCED000000BA =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 sas1/path0 OPEN NORMAL 1748651 0 1* sas1/path1 OPEN NORMAL 364 0
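If you want the display to refresh automatically, the -i option can be added. For example, the following command (the device numbers 65 and 66 are illustrative) repeats the query every 2 seconds for 3 iterations:

pcmpath query device 65 66 -i 2 3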
If hdisk2 is an Open HyperSwap device that has never been opened, and you enter pcmpath query device 2, the following output is displayed:
DEV#: 2 DEVICE NAME: hdisk2 TYPE: 2107900 ALGORITHM: Load Balance
SESSION NAME: session1
OS DIRECTION: H1->H2
==========================================================================
PRIMARY SERIAL: 10000000F00
SECONDARY SERIAL: 20000000F80
-----------------------------
Path#   Adapter/Path Name   State   Select   Errors
   0    fscsi0/path0        CLOSE        0        0
   1    fscsi0/path1        CLOSE        0        0
   2    fscsi0/path2        CLOSE        0        0
   3    fscsi0/path3        CLOSE        0        0
   4    fscsi1/path4        CLOSE        0        0
   5    fscsi1/path5        CLOSE        0        0
   6    fscsi1/path6        CLOSE        0        0
   7    fscsi1/path7        CLOSE        0        0
If hdisk2 is an Open HyperSwap device that is being opened, and you enter pcmpath query device 2, the following output is displayed.
Note: In the following example, hdisk2 is created on two physical devices. At any time, only one of the two physical devices is active for I/Os. The
asterisk (*) indicates the active device to which the current I/Os are being sent.
DEV#: 2 DEVICE NAME: hdisk2 TYPE: 2107900 ALGORITHM: Load Balance SESSION NAME: session1 OS DIRECTION: H1->H2 ========================================================================== PRIMARY SERIAL: 10000000F00 * ----------------------------Path# 0 1 2 3 Adapter/Path Name fscsi0/path0 fscsi0/path2 fscsi1/path4 fscsi1/path5 State OPEN OPEN OPEN OPEN Mode NORMAL NORMAL NORMAL NORMAL Select 8 9 9 9 Errors 0 0 0 0
SECONDARY SERIAL: 20000000F80 ----------------------------Path# 4 5 6 7 Adapter/Path Name fscsi0/path1 fscsi0/path3 fscsi1/path6 fscsi1/path7 State OPEN OPEN OPEN OPEN Mode NORMAL NORMAL NORMAL NORMAL Select 0 0 0 0 Errors 0 0 0 0
The terms used in the output are defined as follows: Dev# The logical device number of this hdisk.
Name The logical name of this device. Type The device product ID from inquiry data.
Algorithm The current path selection algorithm selected for the device. The algorithm selected is one of the following: load balancing, load balancing port, round robin, or failover. Session name The name of the Tivoli Productivity Center for Replication session in which this Open HyperSwap device is contained. Serial The LUN for this device. OS direction The current Open HyperSwap direction. H1->H2 shows that the I/O is active on the H1 site. H1<-H2 shows that the I/O is active on the H2 site. Primary serial The serial number of the volume on the H1 site. Secondary serial The serial number of the volume on the H2 site. Path# The path index displayed by device query command.
Adapter The name of the adapter to which the path is attached. Path Name The name of the path. The number displayed as part of the name is the path ID that is used by pcmpath open device path and pcmpath set device path commands. State The condition of the path attached to the named device: Open Path is in use. Close Path is not being used. Failed Path is no longer being used. It has been removed from service due to errors.
Close_Failed
    Path was detected to be broken and failed to open when the device was opened. The path stays in the Close_Failed state when the device is closed.
Invalid
    The path failed to open, but the MPIO device is opened.
Mode The mode of the named path. The mode can be either Normal or Offline.
Select The number of times this path was selected for I/O.
Errors The number of input and output errors that occurred on a path of this device.
Parameters:
device number
    The device number refers to the logical device number of the hdisk.
device number_m device_number_n
    Use the device_number_m device_number_n option to provide a range of device index numbers.
device model
    Displays devices of a particular device model. The valid device models are:
    v 1722 - All 1722 devices (DS4300)
    v 1724 - All 1724 devices (DS4100)
    v 1742 - All 1742 devices (DS4400 and DS4500)
    v 1750 - All 1750 models (DS6000)
    v 1814 - All 1814 devices (DS4200 and DS4700)
    v 1815 - All 1815 devices (DS4800)
    v 2105 - All 2105 models (ESS)
    v 2107 - All 2107 models (DS8000)
    v 2145 - All 2145 models (SAN Volume Controller)
    v 1820 - All 1820 devices (RSSM)
i   Repeats the command every x seconds for y times. If you do not specify y, the command repeats indefinitely every x seconds.
Examples: If you enter the pcmpath query devstats 2 command, the following output about hdisk2 is displayed:
DEV#: 2 DEVICE NAME: hdisk2
===============================
             Total Read   Total Write   Active Read   Active Write   Maximum
   I/O:              60            10             0              0         2
SECTOR:             320             0             0              0        16

Transfer Size:   <= 512    <= 4k    <= 16K    <= 64K    > 64K
                     30       40         0         0        0
/*-------------------------------------------------------------------------*/
The terms used in the output are defined as follows: Total Read v I/O: total number of completed read requests v SECTOR: total number of sectors that have been read Total Write v I/O: total number of completed write requests v SECTOR: total number of sectors that have been written Active Read v I/O: total number of read requests in process v SECTOR: total number of sectors to read in process Active Write v I/O: total number of write requests in process v SECTOR: total number of sectors to write in process Maximum v I/O: the maximum number of queued I/O requests v SECTOR: the maximum number of queued sectors to read or write Transfer size v <= 512: the number of I/O requests received, whose transfer size is 512 bytes or less v <= 4k: the number of I/O requests received, whose transfer size is 4 KB or less (where KB equals 1024 bytes) v <= 16K: the number of I/O requests received, whose transfer size is 16 KB or less (where KB equals 1024 bytes) v <= 64K: the number of I/O requests received, whose transfer size is 64 KB or less (where KB equals 1024 bytes) v > 64K: the number of I/O requests received, whose transfer size is greater than 64 KB (where KB equals 1024 bytes)
Examples: If you enter the pcmpath query essmap command, the following output is displayed:
Disk    Path   P  Location      adapter  LUN SN       Type          Size  LSS  Vol  Rank  C/A  S  ...
------  ----   -  ----------    -------  -----------  ------------  ----  ---  ---  ----  ---  -  ...
hdisk5  path0  *  30-60-01[FC]  fscsi1   13AAAKA1200  IBM 1750-500  1.1   18   0    0000  01   Y  ...
hdisk5  path1     30-60-01[FC]  fscsi0   13AAAKA1200  IBM 1750-500  1.1   18   0    0000  01   Y  ...
hdisk5  path2  *  20-60-01[FC]  fscsi0   13AAAKA1200  IBM 1750-500  1.1   18   0    0000  01   Y  ...
hdisk5  path3     20-60-01[FC]  fscsi1   13AAAKA1200  IBM 1750-500  1.1   18   0    0000  01   Y  ...
The terms used in the output are defined as follows:
Disk      The logical device name assigned by the host.
Path      The logical path name of a MPIO device.
P         Indicates the logical paths and whether the path is preferred and nonpreferred. * indicates that the path is a nonpreferred path.
Location  The physical location code of the host adapter through which the LUN is accessed.
Adapter   The logical adapter name assigned by the host LUN.
LUN SN    The unique serial number for each LUN within the supported storage device.
Type      The device and model.
Size      The capacity of the configured LUN.
LSS       The logical subsystem where the LUN resides. (Beginning with 2.1.3.0, the value displayed is changed from decimal to hexadecimal.)
Vol       The volume number within the LSS.
Rank      The unique identifier for each RAID array within the supported storage device.
C/A       The cluster and adapter accessing the array.
S         Indicates that the device is shared by two and more supported storage device ports. Valid values are yes or no.
Connection  The physical location code of the supported storage device adapter through which the LUN is accessed.
Port      The supported storage device port through which the LUN is accessed.
RaidMode  The disk RAID mode.
Parameters: target port number Use the target port number option to display information about the target port. If you do not enter a target port number, information about all target ports is displayed. Examples: If you have 12 active ports and enter the pcmpath query port command, the following output is displayed:
Port#   Wwpn               State    Mode     Select   Errors   Paths   Active
    0   5005022300ca031e   NORMAL   ACTIVE        0        0       60       0
    1   5005022307044750   NORMAL   ACTIVE        0        0       64       0
    2   5005022307155750   NORMAL   ACTIVE        0        0       64       0
    3   500502230e02ac54   NORMAL   ACTIVE        0        0       64       0
    4   500502230e01fe54   NORMAL   ACTIVE        0        0       64       0
    5   500502230eb3d54    NORMAL   ACTIVE        0        0       64       0
    6   500503380110046b   NORMAL   ACTIVE        0        0       60       0
    7   5005033801100d64   NORMAL   ACTIVE        0        0       60       0
    8   50050338011003d1   NORMAL   ACTIVE        0        0       60       0
    9   5005033801100118   NORMAL   ACTIVE        0        0       60       0
   10   500503330323006a   NORMAL   ACTIVE     3167        0       62       4
   11   500503330323406a   NORMAL   ACTIVE     3220        1       62       4
The terms used in the output are defined as follows: Port # WWPN State The index number of the port. The worldwide port name of the target port. The condition of the named target port, which can be one of the following types: Normal The target port that is in use. Degraded One or more of the opened paths are not functioning. Failed All opened paths that are attached to this target port are not functioning. Mode Select Errors Paths Active The mode of the named target port, which is either Active or Offline. The number of times this target port was selected for input or output. The number of errors that occurred on all paths that are attached to this target port. The number of paths that are attached to this target port. The number of functional paths that are attached to this target port. The number of functional paths is equal to the number of opened paths that are attached to this target port minus any that are identified as failed or disabled (offline).
Note: This command is supported only with 1750, 2105, 2107, and 2145 device types.
Examples: If you enter the pcmpath query portmap command, the following output is displayed:
ESSID     DISK     BAY-1(B1)            BAY-2(B2)            BAY-3(B3)            BAY-4(B4)
                   H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4
                   ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD
                   BAY-5(B5)            BAY-6(B6)            BAY-7(B7)            BAY-8(B8)
                   H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4
                   ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD
13AAAKA   hdisk5   O--- ---- ---- ----
13AAAKA   hdisk6   Y--- ---- ---- ----

Y  = online/open
O  = online/closed
N  = offline
-  = path not configured
PD = path down
The terms used in the output are defined as follows:
Y   The port is online and open, meaning that at least one path attached to this port is functional.
y   Paths connected to this port are nonpreferred paths. The port is online and open, meaning that at least one path attached to this port is functional.
O   The port is online and closed, meaning that at least one path state and mode is closed and online.
o   Paths connected to this port are nonpreferred paths. The port is online and closed, meaning that at least one path state and mode is closed and online.
N   The port is offline, meaning that all paths attached to this port are offline.
n   Paths connected to this port are nonpreferred paths. The port is offline, meaning that all paths attached to this port are offline.
-   The path is not configured.
PD  The path is down. It is either not functional or has been placed offline.
Note: The following fields apply only to 1750 devices and can only be shown after the device is opened once:
v y
v o
v n
The serial number of ESS devices is five digits, whereas the serial number of DS6000 and DS8000 devices is seven digits.
Parameters: target port number Use the target port number option to display information about the target port. If you do not enter a target port number, information about all target ports is displayed. Examples: If you have four target ports and enter the pcmpath query portstats command, the following output is displayed:
Port #: 0
=============
             Total Read   Total Write   Active Read   Active Write   Maximum
   I/O:            3169            51             0              0         2
SECTOR:           24762          6470             0              0      2096

Port #: 1
=============
             Total Read   Total Write   Active Read   Active Write   Maximum
   I/O:            3109            58             0              0         2
SECTOR:           28292          4084             0              0      2096

Port #: 2
=============
             Total Read   Total Write   Active Read   Active Write   Maximum
   I/O:               0             0             0              0         0
SECTOR:               0             0             0              0         0

Port #: 3
=============
             Total Read   Total Write   Active Read   Active Write   Maximum
   I/O:               0             0             0              0         0
SECTOR:               0             0             0              0         0
The terms used in the output are defined as follows: Total Read v I/O: The total number of completed read requests v SECTOR: The total number of sectors that have been read Total Write v I/O: The total number of completed write requests v SECTOR: The total number of sectors that have been written Active Read v I/O: The total number of read requests in process v SECTOR: The total number of sectors to read in process Active Write v I/O: The total number of write requests in process v SECTOR: The total number of sectors to write in process. Maximum v I/O: The maximum number of queued I/O requests v SECTOR: The maximum number of queued sectors to Read or Write
Notes:
1. This command is supported only with 1750, 2105, 2107, and 2145 device types.
2. Data that is displayed by this command is collected only when the device's algorithm is set to lbp (see the example that follows). For example, if the algorithm for hdisk10 through hdisk20 is set to lbp, the statistical data for each device is saved in the associated ports and displayed here. If none of the devices' algorithms is set to lbp, there is no change in the port statistical output.
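For example, to start collecting port statistics for a range of devices and then display them (the device numbers 10 through 20 are illustrative), you could enter:

pcmpath set device 10 20 algorithm lbp
pcmpath query portstats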
Parameters: None Examples: If you enter the pcmpath query session command, the following output is displayed:
Total Open Hyperswap Sessions : 1

SESSION NAME: session1

SessId   Host_OS_State   Host_Total_copysets   Disabled   Quies   Resum   SwRes
     0   READY                             2          0       0       0       0
The terms used in the output are defined as follows:
Session Name          The name of the Tivoli Productivity Center for Replication session in which this Open HyperSwap device is contained.
SessId                The session ID.
Host_OS_State         The session state on this host (READY or NOT READY).
Host_Total_copysets   The number of Open HyperSwap devices configured on this host.
Disabled              The number of Open HyperSwap devices disabled for HyperSwap on this host.
Quies                 The number of Open HyperSwap devices quiesced or in the process of being quiesced.
Resum                 The number of Open HyperSwap devices in the resume process.
SwRes                 The number of Open HyperSwap devices in the swap-and-resume process.
Parameters: None Examples: If you enter the pcmpath query version command, the following output is displayed:
[root@abc]> pcmpath query version IBM SDDPCM Version 2.1.1.0 (devices.sddpcm.52.rte)
Parameters: None Examples: If you enter the pcmpath query wwpn command, the following output is displayed:
Adapter Name fscsi0 fscsi1 PortWWN 10000000C925F5B0 10000000C9266FD1
Parameters: adapter number The index number of the adapter that you want to change. online Enables the adapter for service. offline Disables the adapter from service. aa The adapter of active/active storage controller devices. ap The adapter of active/passive storage controller devices. Examples: If you enter the pcmpath set adapter 0 offline ap command:
v Adapter 0 of the active/passive controller devices changes to Offline mode and, if there are some paths in the opened state, its state might change to failed.
v All paths of the active/passive controller devices that are attached to adapter 0 change to Offline mode and their states change to Dead, if they were in the Open state.
Note: If the device reserve policy is set to single_path (SCSI-2 reserve), the device algorithm must be set to fail_over. Any attempt to set the algorithm to round_robin, load_balance, or load_balance_port with the single_path reserve policy will fail.
Parameters:
num1 [ num2 ]
    v When only num1 is specified, the command applies to the hdisk specified by num1.
    v When two device logical numbers are entered, this command applies to all the devices whose logical numbers fit within the range of the two device logical numbers.
option
    Specifies one of the following path selection algorithms:
    v rr, where rr indicates round robin
    v lb, where lb indicates load balancing
    v fo, where fo indicates failover policy
    v lbp, where lbp indicates load balancing port
Notes:
1. You can enter the pcmpath set device N algorithm rr/fo/lb/lbp command to dynamically change the path selection algorithm associated with SDDPCM MPIO devices that are in either Close or Open state.
2. Beginning with SDDPCM 2.4.0.0, the algorithm lbp incorporates I/O statistics from both host adapters and target ports in the path selection algorithm. This new algorithm is applicable only for device models 1750, 2105, 2107, and 2145.
Examples: If you enter pcmpath set device 2 10 algorithm rr, the path-selection algorithm of hdisk2 to hdisk10 is immediately changed to the round robin algorithm.
You can also use the chdev command to change the path selection algorithm of a device:
chdev -l hdiskX -a algorithm=load_balance_port
Parameters: num1 [ num2 ] v When only num1 is specified, the command applies to the hdisk specified by num1. v When 2 device logical numbers are entered, this command applies to all the devices whose logical numbers fit within the range of the two device logical numbers. t The range of supported values for health check interval is 1-3600 seconds. To disable the health check function of a device, set interval time to 0.
Examples: If you enter pcmpath set device 2 10 hc_interval 30, the health check time interval of hdisk2 to hdisk10 is immediately changed to 30 seconds.
Parameters:
num1 [ num2 ]
    v When only num1 is specified, the command applies to the hdisk specified by num1.
    v When 2 device logical numbers are entered, this command applies to all the devices whose logical numbers fit within the range of the two device logical numbers.
option
    Specifies one of the following policies:
    v enabled, indicates the health check command will be sent to paths that are opened with a normal path mode.
    v failed, indicates the health check command will be sent to paths that are in failed state.
    v nonactive, indicates the health check command will be sent to paths that have no active I/O. This includes paths that are opened or in failed state.
Examples: If you enter pcmpath set device 2 10 hc_mode enabled, the health check mode of MPIO hdisk2 to hdisk10 is immediately changed to the enabled mode.
Parameters: num1 [ num2 ] v When only num1 is specified, the command applies to the hdisk specified by num1. v When 2 device logical numbers are entered, this command applies to all active/passive devices whose logical numbers fit within the range of the two device logical numbers. t The range of supported values for controller health check time interval is 0-300 seconds. Setting the value to 0 will disable this feature.
Examples: If you enter pcmpath set device 2 10 cntlhc_interval 3, the controller health check time interval of hdisk2 to hdisk10 is immediately changed to 3 seconds, if hdisk2 to hdisk10 are all active/passive devices.
Parameters: num1 [ num2 ] v When only num1 is specified, the command applies to the hdisk specified by num1. v When 2 device logical numbers are entered, this command applies to all active/passive devices whose logical numbers fit within the range of the two device logical numbers. t The range of supported values for controller health check time interval is 0-300 seconds. Setting the value to 0 will disable this feature.
Examples: If you enter pcmpath set device 2 10 cntlhc_delay 30, the controller health check delay time of hdisk2 to hdisk10 is immediately changed to 30 seconds, if hdisk2 to hdisk10 are all active/passive devices. Notes: 1. If cntl_delay_time is set to '1', it disables the controller health check feature, which is the same as setting it to '0'. 2. If you try to set cntl_hcheck_int with a value larger than cntl_delay_time, then cntl_hcheck_int will be set to the same value as cntl_delay_time. 3. If you try to set cntl_delay_time with a value smaller than cntl_hcheck_int, the command will fail with the INVALID parameter.
Parameters: device number The logical device number of the hdisk. path ID The path ID that you want to change, as displayed under Path Name by the pcmpath query device command. online Enables the path for service. offline Disables the path from service. Examples: If you enter the pcmpath set device 5 path 0 offline command, path 0 for device 5 changes to Offline mode.
Parameters: device number The logical device number of the hdisk. Examples: On SVC: hdisk5 has 8 paths. Paths 0-3 are from iogroup 0 and paths 4-7 are from iogroup 1. Currently iogroup 0 has access to the vdisk corresponding to hdisk5. Paths 0 and 2 are the preferred paths.
# pcmpath query device 5 DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance SERIAL: 60050768018182DDC0000000000001BE ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/path0 OPEN NORMAL 51 0 1* fscsi0/path1 OPEN NORMAL 35 0 2 fscsi0/path2 OPEN NORMAL 35 0 3* fscsi0/path3 OPEN NORMAL 35 0 4* fscsi0/path4 OPEN NORMAL 5 0 5* fscsi0/path5 OPEN NORMAL 5 0 6* fscsi0/path6 OPEN NORMAL 5 0 7* fscsi0/path7 OPEN NORMAL 5 0
To move access of hdisk5 to iogroup 1, run svctask movevdisk -iogrp 1 <vdisk_id>. Now, if you enter the pcmpath chgprefercntl device 5 command, then SDDPCM rediscovers the preferred controllers. After a few seconds, enter pcmpath query device 5 and see the change in output.
# pcmpath query device 5

DEV#:   5  DEVICE NAME: hdisk5  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 60050768018182DDC0000000000001BE
==========================================================================
Path#      Adapter/Hard Disk    State     Mode      Select     Errors
   0*      fscsi0/path0         OPEN      NORMAL        53          0
   1*      fscsi0/path1         OPEN      NORMAL        35          0
   2*      fscsi0/path2         OPEN      NORMAL        38          0
   3*      fscsi0/path3         OPEN      NORMAL        35          0
   4       fscsi0/path4         OPEN      NORMAL         7          0
   5*      fscsi0/path5         OPEN      NORMAL         7          0
   6       fscsi0/path6         OPEN      NORMAL         7          0
   7*      fscsi0/path7         OPEN      NORMAL         7          0
v pcmpath query essmap
v pcmpath query port <target port number>
v pcmpath query portmap
v pcmpath query portstats <target port number>
v pcmpath query session
v pcmpath query wwpn
v pcmpath query version
v pcmpath chgprefercntl device
Note: If a command is used for a device, n is the device logical number. For example, pcmpath query devstats 3 queries the device statistics for hdisk3. If a command is used for an adapter, n is the index of the adapter. For example, pcmpath query adapter 2 queries the adapter statistics for the third adapter in adapter list order, which can be fscsi5.
Hardware
The following hardware components are needed:
v One or more of the supported storage devices.
v For ESS devices: at least one SCSI host adapter (two are required for load balancing and failover).
  To install SDD and use the input/output (I/O) load balancing and failover features, you need a minimum of two SCSI or fibre-channel adapters.
  A host system with a single fibre-channel adapter that connects through a switch to multiple ESS ports is considered to have multiple fibre-channel SDD vpath devices.
  For information on the fibre-channel adapters that can be used on your HP-UX host system go to: www.ibm.com/servers/storage/support
v A SCSI cable to connect each SCSI host adapter to a storage system controller port
v Subsystem LUNs that have been created and confirmed for multiport access
v A fiber-optic cable to connect each fibre-channel adapter to a supported storage device port
Software
SDD supports certain HP-UX kernel levels.
Unsupported environments
SDD does not support the following environments:
v HP-UX 11.0 32-bit kernel
v HP-UX 11.0 64-bit kernel
v A system start from an SDD pseudo device
v A system paging file on an SDD pseudo device
v A host system with both a SCSI and fibre-channel connection to a shared LUN
v Single-path mode during concurrent download of licensed machine code or during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement
v Single-path configuration for fibre channel
v DS8000 and DS6000 with SCSI connectivity
DS4000 Storage Manager Concepts Guide for configuring LUNs that are attached to the HP-UX host system. The SDD requires a minimum of two independent paths that share the same logical unit to use load balance and path failover features. With a single path, failover protection is not provided. The DS4000 and DS5000 controllers can be set to two different modes: AVT or non-AVT but can operate only in one mode per storage partition. The controller mode is determined by the host type, which is predefined in the DS4000 or DS5000 Storage Manager profile. Only the AVT mode is supported for the HP-UX host type. To ensure that a single path failure does not result in controller failover during recovery, configure redundant paths to each controller. Because switching to another controller affects performance, configuring the redundant path to a controller can avoid unnecessary controller failover that is caused by a path failure.
Table 15. SDD installation scenarios (continued)

Scenario 1
v SDD is not installed.
v The SDD server for Expert is installed.
v No software application or DBMS communicates directly to the sdisk interface.
Go to:
1. Determining if the SDD 1.3.1.5 (or later) server for Expert is installed
2. Installing SDD on page 191
3. Standard UNIX applications on page 208

Scenario 2
v SDD is not installed.
v The SDD server for Expert is installed.
v An existing application package or DBMS communicates directly to the sdisk interface.
Go to:
1. Determining if the SDD 1.3.1.5 (or later) server for Expert is installed
2. Installing SDD on page 191
3. Using applications with SDD on page 208

Scenario 3
v SDD is installed.
v The SDD server for Expert is installed.
Go to:
1. Determining if the SDD 1.3.1.5 (or later) server for Expert is installed
2. Upgrading the SDD on page 193

Table 16. Patches necessary for proper operation of SDD on HP-UX
HP-UX     Patch bundles
11.23     March 06, standard patch bundles
11.11     September 05, support plus
See http://www11.itrc.hp.com/service/home/home.do?admit=109447626+1266865784425+28353475 for patch details and prerequisites for patches.
Determining if the SDD 1.3.1.5 (or later) server for Expert is installed
If you previously installed the SDD server (the stand-alone version) for IBM TotalStorage Expert V2R1 (ESS Expert) on your HP-UX host system, you must remove this stand-alone version of the SDD server before you proceed with SDD 1.3.1.5 installation. The installation package for SDD 1.3.1.5 includes the SDD server daemon (also referred to as sddsrv), which incorporates the functionality of the stand-alone version of the SDD server (for ESS Expert). To determine if the stand-alone version of the SDD server is installed on your host system, enter: swlist SDDsrv If you previously installed the stand-alone version of the SDD server, the output from the swlist SDDsrv command looks similar to this:
SDDsrv 1.0.0.0 SDDsrv bb-bit Version: 1.0.0.0 Nov-14-2001 15:34
Notes:
1. The installation package for the stand-alone version of the SDD server (for ESS Expert) is SDDsrvHPbb_yymmdd.depot (where bb represents 32- or 64-bit, and yymmdd represents the date of the installation package). For ESS Expert V2R1, the stand-alone SDD server installation package is SDDsrvHP32_020115.depot for a 32-bit environment, and SDDsrvHP64_020115.depot for a 64-bit environment.
2. For instructions on how to remove the stand-alone version of the SDD server (for ESS Expert) from your HP-UX host system, see the IBM SUBSYSTEM DEVICE DRIVER SERVER 1.0.0.0 (sddsrv) readme for IBM TotalStorage Expert V2R1 at the following website: www.ibm.com/servers/storage/support/software/swexpert/
For more information about the SDD server daemon, go to SDD server daemon on page 203.
Installing SDD
Before you install SDD, make sure that you have root access to your HP-UX host system and that all the required hardware and software is ready.
/cdrom/hp64bit/IBMsdd.depot
or
/your_installation_directory/hp64bit/IBMsdd.depot
c. Click OK. You will see output similar to the following example:
IBMsdd_tag 1.7.0.3 IBMsdd Driver 64-bit Version: 1.7.0.3 Sep-24-2007 16:35
8. Click the IBMsdd_tag product.
9. From the Bar menu, click Actions, then Mark for Install.
10. From the Bar menu, click Actions, then Install (analysis). An Install Analysis panel is displayed, showing the status of Ready.
11. Click OK to proceed. A Confirmation window opens and states that the installation will begin.
12. Click Yes and press Enter. The analysis phase starts.
13. After the analysis phase has finished, another Confirmation window opens informing you that the system will be restarted after installation is complete. Click Yes and press Enter. The installation of IBMsdd proceeds.
14. An Install window opens, informing you about the progress of the IBMsdd software installation. The window looks similar to the following:
Press 'Product Summary' and/or 'Logfile' for more target information.
Target              : XXXXX
Status              : Executing install setup
Percent Complete    : 17%
Kbytes Installed    : 276 of 1393
Time Left (minutes) : 1
Product Summary       Logfile
Done                  Help
The Done option is not available when the installation is in progress. It becomes available after the installation process is complete.
15. Click Done.
Note: SDD 1.5.0.4 is changed from a static driver to a dynamic loadable kernel module (DLKM) driver. The system does not restart after SDD is installed. After the installation is finished, the SDD driver is automatically loaded. You can use the datapath query device command to verify the SDD installation. SDD is successfully installed if the command runs successfully.
3. Select Install Software to Local Host.
4. At this point, the SD Install - Software Selection panel is displayed. Then a Specify Source menu is displayed:
   a. Select the Local Directory for Source Depot Type.
   b. Select the directory in which you have issued the tar xvf IBMsdd*.tar command to untar the file and the IBMsdd.depot file for the Source Depot Path. Use the fully-qualified path name for the depot file as shown below.
/your_installation_directory/IBMsdd.depot
5. Click the IBMsdd_tag product and complete the steps beginning with step 9 on page 192 shown in Installing SDD from CD-ROM on page 191.
Upgrading from SDD 1.6.0.x to SDD 1.6.1.0 or later with concurrent access
The memory management and the installation process have been enhanced to allow installation of the SDD package while the LVM volume groups are active and user applications are running. The concurrent driver upgrade function permits uninterrupted operation when installing SDD.
The installation process:
1. Converts SDD vpath devices to PVLINK devices
2. Unloads and reloads the SDD driver
3. Converts the PVLINK devices back to SDD vpath devices after the new package is installed
Because the volume groups must be active for the PVLINK conversion process, the following are the limitations:
1. The volume groups must be managed by HP-UX LVM.
2. The MC Service Guard cluster must be halted prior to upgrade. The primary node and the adoptive node or nodes must operate in a single-host environment. The shared volume groups in the adoptive nodes must be exported so that the volumes are not shared; the volume groups can be active in the primary node only. Restore the cluster environment after upgrading SDD.
Performance during upgrade: You should consider the following performance topics while you are upgrading:
v The PVLINK conversion process and the driver reload require additional system resources such as LVM lock, accessing LVM meta data, and the kernel memory. With concurrent I/O, the upgrade process can take longer because the conversion process must wait for the I/O to complete before a link can be removed from PVLINK.
v Reloading the SDD driver can also take longer because of the contention with the kernel memory; the system must wait for a window when the resources become available. The actual time for installation depends on the processor model, physical memory size, I/O intensity, and configuration size. The larger the SDD configuration or the more concurrent I/O activities, the longer it can take to upgrade. The installation time can also take longer if the devices from
the ioscan output are not accessible. If there were a lot of inaccessible devices as the result of fabric reconfiguration, you should attempt to clean up the configuration before upgrading. After the upgrade, you should check the VPATH_EVENT for allocation failures in syslog.log, /var/adm/IBMsdd/hd2vp.errlog, and vp2hd.errlog. These are the indications that the upper limit of the resources has been reached during the conversion process and that you should take a more conservative approach next time. That is, the concurrent upgrade should be performed during a period when the system load is lighter than the normal operation.
v The installation process also ensures that the current SDD state is not in any degraded state; the recovery process can be lengthy if the upgrade failed due to hardware errors. Issue the swjob command that is indicated at the end of the swinstall output to get detailed information about the installation.
v The diagnose message in the package installation and configuration process has been greatly improved to include logs for cfgvpath, vp2hd, hd2vp, and the syslog messages. All the SDD-related logs have been moved to the /var/adm/IBMsdd directory.
Upgrading from SDD 1.5.0.4 to SDD 1.6.1.0 or later with nonconcurrent access.
Upgrading SDD consists of removing and reinstalling the IBMsdd package. If you are upgrading SDD, go to Uninstalling SDD on page 202 and then go to Installing SDD on page 191.
Note: Vpathname vpathN is reserved when it is assigned to a LUN even after the LUN has been removed from the host. The same vpathname, vpathN, will be assigned to the same LUN when it is reconnected to the host. 7. /etc/vpathsave.cfg is the file to reserve vpathnames. Improper removal of the file will invalidate existing volume groups. Do not remove the /etc/vpathsave.cfg file.
their serial number information can be included in the /etc/vpathmanualexcl.cfg text file. For bootable devices, the get_root_disks command generates a file called /etc/vpathexcl.cfg to exclude bootable disks from the SDD configuration.
Dynamic reconfiguration
Dynamic reconfiguration provides a way to automatically detect path configuration changes without requiring a reboot.
1. cfgvpath -r: This operation finds the current hardware configuration and compares it to the SDD vpath device configuration in memory and then identifies a list of differences. It then issues commands to update the SDD vpath device configuration in memory with the current hardware configuration. The commands that cfgvpath -r issues to the vpath driver are:
   a. Add an SDD vpath device.
   b. Remove an SDD vpath device; this will fail if the device is busy.
   c. Add a path to the SDD vpath device.
   d. Remove a path from the SDD vpath device; this will fail deletion of the path if the device is busy, but will set the path to DEAD and OFFLINE.
2. The rmvpath command removes one or more SDD vpath devices.
rmvpath -all           # Remove all SDD vpath devices
rmvpath vpath_name     # Remove one SDD vpath device at a time
                       # this will fail if device is busy
load balancing (lb)
The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load balancing mode also incorporates failover protection.
Note: The load balancing policy is also known as the optimized policy.
round robin (rr)
The path to use for each I/O operation is chosen at random from those paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two.
The path-selection policy is set at the SDD device level. The default path-selection policy for an SDD device is load balancing. You can change the policy for an SDD device. SDD version 1.4.0.0 (or later) supports dynamic changing of the SDD devices path-selection policy. Before changing the path-selection policy, determine the active policy for the device. Enter datapath query device N, where N is the device number of the SDD vpath device, to show the current active policy for that device.
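Putting the two pieces together, a minimal sketch of checking and then changing the policy might look like the following (the device number 0 and the rr keyword are illustrative; the full syntax of the datapath set device command is documented in Chapter 13, Using the datapath commands, on page 429):

# datapath query device 0
# datapath set device 0 policy rr

The first command reports the currently active policy for vpath device 0; the second switches that device to the round robin policy.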
SDD datapath query adapter command changes for SDD 1.4.0.0 (or later)
For SDD 1.4.0.0 (or later), the output of some of the datapath commands has changed. See Chapter 13, Using the datapath commands, on page 429 for details about the datapath commands. For SDD 1.3.3.11 (or earlier), the output of the datapath query adapter command shows all the fibre-channel arrays as different adapters, and you need to determine which hardware paths relate to which adapters. If you need to place an adapter offline, you need to manually issue multiple commands to remove all the associated hardware paths. For SDD 1.4.0.0 (or later), the output of the datapath query adapter command has been simplified. The following examples show the output resulting from the datapath query adapter command for the same configuration for SDD 1.3.3.11 (or earlier) and for SDD 1.4.0.0 (or later). Example output from datapath query adapter command issued in SDD 1.3.3.11 (or earlier):
Active Adapters :8
Adapter#  Adapter Name        State    Mode     Select   Error   Path   Active
   0      0/7/0/0.4.18.0.38   NORMAL   ACTIVE        0       0      1        1
   1      0/4/0/0.4.18.0.38   NORMAL   ACTIVE        0       0      1        1
   2      0/7/0/0.4.18.0.36   NORMAL   ACTIVE        0       0      2        2
   3      0/4/0/0.4.18.0.36   NORMAL   ACTIVE        0       0      2        2
   4      0/7/0/0.4.18.0.34   NORMAL   ACTIVE        0       0      2        2
   5      0/4/0/0.4.18.0.34   NORMAL   ACTIVE        0       0      2        2
   6      0/7/0/0.4.18.0.32   NORMAL   ACTIVE        0       0      1        1
   7      0/4/0/0.4.18.0.32   NORMAL   ACTIVE        0       0      1        1
Adapter #s 0, 2, 4, 6 belong to the same physical adapter. In order to place this adapter offline, you need to issue datapath set adapter offline four times. After the four commands are issued, the output of datapath query adapter will be:
Active Adapters :8
Adapter#  Adapter Name        State    Mode      Select   Error   Path   Active
   0      0/7/0/0.4.18.0.38   NORMAL   OFFLINE        0       0      1        0
   1      0/4/0/0.4.18.0.38   NORMAL   ACTIVE         0       0      1        0
   2      0/7/0/0.4.18.0.36   NORMAL   OFFLINE        0       0      2        0
   3      0/4/0/0.4.18.0.36   NORMAL   ACTIVE         0       0      2        0
   4      0/7/0/0.4.18.0.34   NORMAL   OFFLINE        0       0      2        0
   5      0/4/0/0.4.18.0.34   NORMAL   ACTIVE         0       0      2        0
   6      0/7/0/0.4.18.0.32   NORMAL   OFFLINE        0       0      1        0
   7      0/4/0/0.4.18.0.32   NORMAL   ACTIVE         0       0      1        0
Example output from datapath query adapter command issued in SDD 1.4.0.0 (or later):
Active Adapters :2
Adapter#  Adapter Name   State    Mode     Select   Error   Path   Active
   0      0/7/0/0        NORMAL   ACTIVE        0       0      6        6
   1      0/4/0/0        NORMAL   ACTIVE        0       0      6        6
Adapters 0 and 1 represent two physical adapters. To place one of the adapters offline, you need to issue only a single command, for example, datapath set adapter 0 offline. After the command is issued, the output of datapath query adapter will be:
Active Adapters :2
Adapter#  Adapter Name   State    Mode      Select
   0      0/7/0/0        NORMAL   OFFLINE        0
   1      0/4/0/0        NORMAL   ACTIVE         0
SDD datapath query device command changes for SDD 1.4.0.0 (or later)
The following change is made in SDD for the datapath query device command to accommodate the serial numbers of supported storage devices. The locations of Serial and Policy are swapped because the SAN Volume Controller serial is too long to fit in the first line. Example output from datapath query device command issued in SDD 1.3.3.11 (or earlier):
Dev#: 3  Device Name: vpath5  Type: 2105800  Serial: 14123922
Policy: Optimized
==================================================================================
Path#    Adapter H/W Path    Hard Disk    State    Mode      Select    Error
   0     0/7/0/0             c19t8d1      OPEN     NORMAL    3869815       0
   1     0/7/0/0             c13t8d1      OPEN     NORMAL    3872306       0
   2     0/3/0/0             c17t8d1      OPEN     NORMAL    3874461       0
   3     0/3/0/0             c11t8d1      OPEN     NORMAL    3872868       0
Example output from datapath query device command issued in SDD 1.4.0.0 (or later): (This example shows a SAN Volume Controller and a disk storage system device.)
Dev#: 2  Device Name: vpath4  Type: 2145  Policy: Optimized
Serial: 60056768018506870000000000000000
==================================================================================
Path#    Adapter H/W Path    Hard Disk    State    Mode      Select    Error
   0     0/7/0/0             c23t0d0      OPEN     NORMAL    2736767      62
   1     0/7/0/0             c9t0d0       OPEN     NORMAL          6       6
   2     0/3/0/0             c22t0d0      OPEN     NORMAL    2876312     103
   3     0/3/0/0             c8t0d0       OPEN     NORMAL        102     101

Dev#: 3  Device Name: vpath5  Type: 2105800  Policy: Optimized
Serial: 14123922
==================================================================================
Path#    Adapter H/W Path    Hard Disk    State    Mode      Select    Error
   0     0/7/0/0             c19t8d1      OPEN     NORMAL    3869815       0
   1     0/7/0/0             c13t8d1      OPEN     NORMAL    3872306       0
   2     0/3/0/0             c17t8d1      OPEN     NORMAL    3874461       0
   3     0/3/0/0             c11t8d1      OPEN     NORMAL    3872868       0
Note: vpathname vpathN is reserved once it is assigned to a LUN even after the LUN has been removed from the host. The same vpathname, vpathN, will be assigned to the same LUN when it is reconnected to the host.
Postinstallation
After SDD is installed, the device driver resides above the HP SCSI disk driver (sdisk) in the protocol stack. In other words, SDD now communicates to the HP-UX device layer. The SDD software installation procedure installs a number of SDD components and updates some system files. Those components and files are listed in Table 17 through Table 19 on page 201.
Table 17. SDD components installed for HP-UX host systems

File                  Location                           Description
mod.o                 /opt/IBMsdd/bin                    Object file for the SDD driver module
Executables           /opt/IBMsdd/bin                    Configuration and status tools
README.sd             /opt/IBMsdd                        README file
sddsrv                /sbin/sddsrv                       SDD server daemon
sample_sddsrv.conf    /etc/                              Sample SDD server configuration file
sddserver             /sbin/init.d                       Script to start or stop the SDD daemon at system up/down time
confserver            /sbin/init.d                       Script to load SDD driver and run cfgvpath at system boot time
mvserver              /sbin/init.d                       Script to move /sbin/rc1.d/S100localmount to /sbin/rc1.d/S250localmount in order to fix the auto mount problem for SDD vpath device filesystems
datapath.1            /usr/local/man/man1/datapath.1     Manpage for datapath
rmvpath.1             /usr/local/man/man1/rmvpath.1      Manpage for rmvpath
showvpath.1           /usr/local/man/man1/showvpath.1    Manpage for showvpath
gettrace.1            /usr/local/man/man1/gettrace.1     Manpage for gettrace
sddsrv.1              /usr/local/man/man1/sddsrv.1       Manpage for sddsrv
vp2hd.1               /usr/local/man/man1/vp2hd.1        Manpage for vp2hd
hd2vp.1               /usr/local/man/man1/hd2vp.1        Manpage for hd2vp
cfgvpath.1            /usr/local/man/man1/cfgvpath.1     Manpage for cfgvpath
vpcluster.1           /usr/local/man/man1/vpcluster.1    Manpage for vpcluster
sddgetdata.1          /usr/local/man/man1/sddgetdata.1   Manpage for sddgetdata
Table 18. System files updated for HP-UX host systems

File     Location              Description
vpath    /usr/conf/master.d    Master configuration file
vpath    /stand/system.d       System configuration file
Table 19. SDD commands and their descriptions for HP-UX host systems

Command                     Description
cfgvpath                    Configures new SDD vpath devices when there are no existing vpath devices. Do not use the legacy parameter "-c". The SDD vpath device configuration is updated without system reboot. If initially there is no SDD vpath device configured, cfgvpath -r will fail with the message "failed to get information from kernel, don't run dynamic configuration, do cfgvpath instead". In this case, issue cfgvpath without any option.
showvpath                   Lists the configuration mapping between the SDD devices and underlying disks.
datapath                    The SDD driver console command tool.
hd2vp                       Converts a volume group from sdisks into the SDD vpath devices.
vp2hd                       Converts a volume group from the SDD vpath devices into sdisks.
vpcluster                   Imports or exports MC Service Guard volume groups.
rmvpath [-all, -vpathname]  Removes the SDD vpath devices from the configuration.
gettrace                    Debug tool that gets trace information when a problem occurs.
sddgetdata                  The SDD data collection tool for problem analysis.
man                         Manpage for SDD commands, for example, man datapath. Supported SDD commands are datapath, gettrace, hd2vp, querysn, rmvpath, sddsrv, sddgetdata, showvpath, vp2hd, and vpcluster.
If you are not using a DBMS or an application package that communicates directly to the sdisk interface, the installation procedure is nearly complete. However, you still need to customize HP-UX so that standard UNIX applications can use the SDD. Go to Standard UNIX applications on page 208 for instructions. If you have a DBMS or an application package installed that communicates directly to the sdisk interface, such as Oracle, go to Using applications with SDD on page 208 and read the information specific to the application that you are using.
During the installation process, the following files were copied from the IBMsdd_depot to the system:
# Kernel-related files
v /opt/IBMsdd/bin/mod.o
v /stand/system.d/vpath
v /usr/conf/master.d/vpath
# SDD driver-related files
v /opt/IBMsdd
v /opt/IBMsdd/bin
v /opt/IBMsdd/README.sd
v /opt/IBMsdd/bin/cfgvpath
v /opt/IBMsdd/bin/datapath
v /opt/IBMsdd/bin/showvpath
v /opt/IBMsdd/bin/master
v /opt/IBMsdd/bin/system
v /opt/IBMsdd/bin/mod.o
v /opt/IBMsdd/bin/rmvpath
v /opt/IBMsdd/bin/get_root_disks
v /opt/IBMsdd/bin/gettrace
v /opt/IBMsdd/bin/sddgetdata
v /opt/IBMsdd/bin/hd2vp
v /opt/IBMsdd/bin/vp2hd
v /opt/IBMsdd/bin/vpcluster
v /sbin/cfgvpath
v /sbin/datapath
v /sbin/get_root_disks
v /etc/sample_sddsrv.conf
During installation, the /opt/IBMsdd/bin/cfgvpath program is initiated to create SDD vpath devices in the /dev/dsk and /dev/rdsk directories for all IBM disks that are available on the system. After installation is done, all SDD vpath devices are configured and the driver is loaded. The system will not reboot.
Note: SDD devices are found in /dev/rdsk and /dev/dsk. The device is named according to the SDD number. A device with a number of 0 would be /dev/rdsk/vpath1.
Uninstalling SDD
The following procedure explains how to remove SDD. You must uninstall the current level of SDD before upgrading to a newer level.
Complete the following procedure to uninstall SDD:
1. Stop applications.
2. If you are using SDD with a database, such as Oracle, edit the appropriate database configuration files (database partition) to remove all the SDD devices.
3. Before running the sam program, run script vp2hd to convert volume groups from SDD vpath devices to sdisks.
4. Run the sam program.
> sam
5. Click Software Management.
6. Click Remove Software.
7. Click Remove Local Host Software.
8. Click the IBMsdd_tag selection.
   a. From the Bar menu, click Actions, then Mark for Remove.
   b. From the Bar menu, click Actions, then Remove (analysis). A Remove Analysis window opens and shows the status of Ready.
   c. Click OK to proceed. A Confirmation window opens and indicates that the uninstallation will begin.
   d. Click Yes. The analysis phase starts.
   e. After the analysis phase has finished, another Confirmation window opens indicating that the system will be restarted after the uninstallation is complete. Click Yes and press Enter. The uninstallation of IBMsdd begins.
   f. An Uninstall window opens showing the progress of the IBMsdd software uninstallation. This is what the panel looks like:

Target              : XXXXX
Status              : Executing unconfigure
Percent Complete    : 17%
Kbytes Removed      : 340 of 2000
Time Left (minutes) : 5
Removing Software   : IBMsdd_tag,...........
The Done option is not available when the uninstallation process is in progress. It becomes available after the uninstallation process completes. 9. Click Done. When SDD has been successfully uninstalled, the first part of the procedure for upgrading the SDD is complete. To complete an upgrade, you need to reinstall SDD. See the installation procedure in Installing SDD on page 191. Note: The MC Service Guard cluster must be halted prior to uninstall. The primary node and the adoptive node or nodes must operate in a single-host environment. The shared volume groups in the adoptive nodes must be exported so that the volumes are not shared; the volume groups can be active in the primary node only.
If the SDD server (sddsrv) has automatically started, the output will display the process number on which sddsrv has started. If sddsrv has not started, you should uninstall SDD and then reinstall SDD. See Installing SDD on page 191 for more information.
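A minimal sketch of this check (the ps invocation is an editorial illustration rather than a command quoted from this guide; the /sbin/sddsrv location comes from Table 17):

# ps -ef | grep sddsrv

If sddsrv is running, the output includes a line for the /sbin/sddsrv process together with its process number.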
4. Create the group special file. See Creating the group special file on page 206. For more information about the vgimport command, see Importing volume groups on page 206.
Syntax:
vgexport -p -v -s -m mapfile vg_name
vgexport command example:
To export the specified volume group on node 1, enter:
vgexport -p -v -s -m /tmp/vgpath1.map vgvpath1
where /tmp/vgpath1.map represents your mapfile, and vgvpath1 represents the path name of the volume group that you want to export.
Syntax:
vgimport -p -v -s -m mapfile vg_name
vgimport command example:
To import the specified volume group on node 2, enter:
vgimport -p -v -s -m /tmp/vgpath1.map vgvpath1
where /tmp/vgpath1.map represents your mapfile, and vgvpath1 represents the path name of the volume group that you want to import.
Note: The vgimport command only imports the scsi pvlink devices. Issue the hd2vp command after issuing the vgimport command.
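A short sketch of the import sequence that the note describes (the map file and volume group names are reused from the example above; passing the volume group name to hd2vp is an assumption based on its description as a volume-group conversion script):

# vgimport -p -v -s -m /tmp/vgpath1.map vgvpath1
# hd2vp vgvpath1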
are SDD-managed vpath devices, performs vgexport, and creates vpcluster control files for the adoptive nodes to import. The input file to vpcluster does not have to be the same cluster configuration file used for the SG cluster creation. It can be any ASCII file with line entries that contain the NODE_NAME and VOLUME_GROUP keywords without # as a comment. Optionally, the vpcluster control file can be copied to each adoptive node with the rcp command.
For the adoptive node operation, vpcluster uses the control file created by the primary node operation. Prior to volume group import, it validates that the adoptive node is included in the cluster configuration, ensures that the importing volume groups are not active volume groups in the adoptive node, creates volume group nodes /dev/vgXXXX using the mknod command, and ensures that the same device serial and LUN-id are configured by SDD.
Notes:
1. The device names, either vpath# or C#T#D#, might be different between the primary and adoptive nodes. However, the vpcluster process attempts to keep the volume group minor number consistent between the primary and adoptive nodes. In case the same minor number is in use, the next sequential number is assigned. Because the HP vgimport process only imports those device names with the standard name C#T#D#, hd2vp is invoked to convert sdisk devices to SDD devices after a successful vgimport.
2. Use the cmquerycl HP command to create the cluster configuration file. This command recognizes only the pvlink scsi devices on both nodes. If you are using the cmquerycl command to create the cluster configuration file, you should first issue vp2hd to convert vpath devices to pvlink devices.
In addition, the report option for the adoptive node validates that all volume groups exported by the primary node are imported. A mismatch of volume group minor number or vpath device name is allowed. Other mismatches will be reported.
Syntax:
vpcluster -primary | -adoptive [ -f file ] [ -dorcp ] [ -report ] [ -debug ] [ -h ]
where:
-primary
Specifies primary node operation. You must specify -primary or -adoptive.
-adoptive
Specifies adoptive node operation. You must specify -primary or -adoptive.
-f file
For the primary node, specifies the cluster configuration file; the default is /etc/cmcluster/cmclconf.ascii. For the adoptive node, specifies the vpcluster control file created by the primary node; the default is /tmp/vpcluster/vpcluster.primary.tar.
-dorcp
Specifies that the vpcluster control tar file is to be RCPed to the adoptive nodes. The default is no.
-report
Validates that the exported volume groups from the primary node are imported to the adoptive node and creates a report. This option is valid on the adoptive node.
-debug
Specifies that a debugging statement is to be printed during vpcluster run time.
-h
Specifies that detailed help information about the vpcluster function is to be displayed.
There is more than one way to configure the SG cluster locking: quorum server or lock disk. If the lock disk is chosen, the SDD vpath device should not be used, because it is not recognized by the FIRST_CLUSTER_LOCK_PV parameter. Furthermore, it is recommended that SDD vpath devices and sdisk pvlinks not be mixed in the same volume group. The lock device should be excluded from the SDD configuration. See the information about the /etc/vpathmanualexcl.cfg text file on page 195. A usage sketch based on the options above follows.
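This sketch is built only from the options and defaults just listed (the configuration file is the stated default; output and control-file handling depend on your cluster setup):

On the primary node:
# vpcluster -primary -f /etc/cmcluster/cmclconf.ascii -dorcp

On each adoptive node:
# vpcluster -adoptive -report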
driver (sdisk) in the protocol stack. In other words, SDD now communicates to the HP-UX device layer. To use standard UNIX applications with SDD, you must make some changes to your logical volumes. You must convert your existing logical volumes or create new ones.
Standard UNIX applications such as newfs, fsck, mkfs, and mount, which normally take a disk device or raw disk device as a parameter, also accept the SDD device as a parameter. Similarly, entries in files such as vfstab and dfstab (in the format of cntndnsn) can be replaced by entries for the corresponding SDD vpathNs devices. Make sure that the devices that you want to replace are replaced with the corresponding SDD device. Issue the showvpath command to list all SDD vpath devices and their underlying disks. To use the SDD driver for an existing logical volume, you must run the hd2vp conversion script (see SDD utility programs on page 87).
Attention: Do not use the SDD for critical file systems needed at startup, such as /(root), /stand, /usr, /tmp, or /var. Doing so may render your system unusable if SDD is ever uninstalled (for example, as part of an upgrade).
The first number in the message is the major number of the character device, which is the number that you want to use. 2. Create a device node for the logical volume device. Note: If you do not have any other logical volume devices, you can use a minor number of 0x010000. In this example, assume that you have no other logical volume devices. A message similar to the following is displayed:
# mknod group c 64 0x010000
Create a physical volume by performing the procedure in step 3 on page 210.
a. Create a subdirectory in the /dev directory for the volume group. Enter the following command to create a subdirectory in the /dev directory for the volume group:
# mkdir /dev/vgIBM
In this example, vgIBM is the name of the directory.
b. Change to the /dev directory. Enter the following command to change to the /dev directory:
# cd /dev/vgIBM
c. Create a device node for the logical volume device. Enter the following command to re-create the physical volume:
# pvcreate /dev/rdsk/vpath1
A message similar to the following is displayed:
Physical volume "/dev/rdsk/vpath1" has been successfully created.
In this example, the SDD vpath device associated with the underlying disk is vpath1. Verify the underlying disk by entering the following showvpath command: # /opt/IBMsdd/bin/showvpath A message similar to the following is displayed:
vpath1: /dev/dsk/c3t4d0
3. Create a physical volume. Enter the following command to create a physical volume:
# pvcreate /dev/rdsk/vpath1
4. Create a volume group. Enter the following command to create a volume group:
# vgcreate /dev/vgIBM /dev/dsk/vpath1
5. Create a logical volume. Enter the following command to create logical volume lvol1:
# lvcreate -L 100 -n lvol1 vgIBM
The -L 100 portion of the command makes a 100-MB volume group; you can make it larger if you want to. Now you are ready to create a file system on the volume group.
6. Create a file system on the volume group. Use the following process to create a file system on the volume group:
   a. If you are using an HFS file system, enter the following command to create a file system on the volume group:
   # newfs -F HFS /dev/vgIBM/rlvol1
   b. If you are using a VXFS file system, enter the following command to create a file system on the volume group:
   # newfs -F VXFS /dev/vgIBM/rlvol1
   c. Mount the logical volume. This process assumes that you have a mount point called /mnt.
7. Mount the logical volume. Enter the following command to mount the logical volume lvol1:
# mount /dev/vgIBM/lvol1 /mnt
Attention: In some cases, it may be necessary to use standard HP-UX recovery procedures to fix a volume group that has become damaged or corrupted. For information about using recovery procedures, such as vgscan, vgextend, vpchange, or vgreduce, see the following website: http://docs.hp.com/ Click HP-UX Reference (Manpages). Then see HP-UX Reference Volume 2.
When prompted to delete the logical volume, enter y. 2. Remove the existing volume group. Enter the following command to remove the volume group vgIBM: # vgremove /dev/vgIBM A message similar to the following is displayed:
Volume group "/dev/vgIBM" has been successfully removed.
# lvdisplay /dev/vgIBM/lvol1 | grep "LV Size" A message similar to the following is displayed:
LV Size (Mbytes) 100
In this case, the logical volume size is 100 MB. 2. Re-create the physical volume. Enter the following command to re-create the physical volume: # pvcreate /dev/rdsk/vpath1 A message similar to the following is displayed:
Physical volume "/dev/rdsk/vpath1" has been successfully created.
In this example, the SDD vpath device associated with the underlying disk is vpath1. Verify the underlying disk by entering the following command: # /opt/IBMsdd/bin/showvpath A message similar to the following is displayed:
vpath1: /dev/dsk/c3t4d0
3. Re-create the volume group. Enter the following command to re-create the volume group: # vgcreate /dev/vgibm /dev/dsk/vpath1 A message similar to the following is displayed:
Increased the number of physical extents per physical volume to 2187. Volume group "/dev/vgibm" has been successfully created. Volume Group configuration for /dev/vgibm has been saved in /etc/lvmconf/vgibm.conf
4. Re-create the logical volume. Re-creating the logical volume consists of a number of smaller steps:
   a. Re-creating the physical volume
   b. Re-creating the volume group
   c. Re-creating the logical volume
Enter the following command to re-create the logical volume:
# lvcreate -L 100 -n lvol1 vgibm
A message similar to the following is displayed:
Logical volume "/dev/vgibm/lvol1" has been successfully created with character device "/dev/vgibm/rlvol1". Logical volume "/dev/vgibm/lvol1" has been successfully extended. Volume Group configuration for /dev/vgibm has been saved in /etc/lvmconf/vgibm.conf
The -L 100 parameter comes from the size of the original logical volume, which is determined by using the lvdisplay command. In this example, the original logical volume was 100 MB in size.
Attention: The re-created logical volume should be the same size as the original volume; otherwise, the re-created volume cannot store the data that was on the original.
5. Set the proper timeout value for the logical volume manager. The timeout values for the Logical Volume Manager must be set correctly for SDD to operate properly. This is particularly true if a concurrent firmware download has taken place.
There are two timeout values: one for the logical volume (LV) and one for the physical volume (PV).
The LV timeout value is determined by the application. If the application has no specific timeout requirement, use the HP default value, which is 0 (forever).
The PV timeout value is recommended by the storage vendor. The HP default PV timeout value is 30 seconds. Generally, this is sufficient during normal operations. However, during a concurrent firmware download, you must set the PV timeout value to a minimum of 90 seconds. You can set the timeout value to 90 seconds for normal operations as well. In addition, if you do not use the default LV timeout value, ensure that the LV timeout value is no less than the PV timeout value multiplied by the number of paths. For example, when the default is not used, if a vpath device has four underlying paths and the PV timeout value is 90, the LV timeout value must be at least 360.
To display the timeout value, use the lvdisplay or pvdisplay command. To change the PV timeout value, use the pvchange command after pvcreate, and to change the LV timeout value, use the lvchange command after lvcreate. For example:
v To change the timeout value of all underlying paths of vpathX to 90 seconds, enter pvchange -t 90 /dev/dsk/vpathX
v To change the timeout value for logical volume /dev/vgibm/lvolY to 360 seconds, enter lvchange -t 360 /dev/vgibm/lvolY
In some cases, it might be necessary to use standard HP recovery procedures to fix a volume group that has become damaged or corrupted. For information about using recovery procedures, such as vgscan, vgextend, vpchange, or vgreduce, see the following website:
http://docs.hp.com/
Click HP-UX Reference (Manpages). Then, see HP-UX Reference Volume 2.
3. Create file systems on the selected SDD devices using the appropriate utilities for the type of file system that you will use. If you are using the standard HP-UX UFS file system, enter the following command:
# newfs /dev/rdsk/vpathN
In this example, N is the SDD device instance of the selected volume. Create mount points for the new file systems.
4. Install the file systems into the directory /etc/fstab. In the mount at boot field, click yes.
5. Install the file system mount points into the /etc/exports directory for export.
6. Restart the system.
Installing SDD on a system that already has the NFS file server
Complete the following steps if you have the NFS file server already configured to:
v Export file systems that reside on a multiport subsystem, and
v Use SDD partitions instead of sdisk partitions to access them
1. List the mount points for all currently exported file systems by looking in the /etc/exports directory.
2. Match the mount points found in step 1 with sdisk device link names (files named /dev/(r)dsk/cntndn) by looking in the /etc/fstab directory.
3. Match the sdisk device link names found in step 2 with SDD device link names (files named /dev/(r)dsk/vpathN) by issuing the showvpath command.
4. Make a backup copy of the current /etc/fstab file.
5. Edit the /etc/fstab file, replacing each instance of an sdisk device link named /dev/(r)dsk/cntndn with the corresponding SDD device link, as illustrated in the sketch that follows this list.
6. Restart the system.
7. Verify that each exported file system:
   a. Passes the start time fsck pass
   b. Mounts properly
   c. Is exported and available to NFS clients
If there is a problem with any exported file system after completing step 7, restore the original /etc/fstab file and restart to restore NFS service. Then review your steps and try again.
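A minimal sketch of the /etc/fstab edit in step 5 (the device links, mount point, file system type, and options shown here are hypothetical; use the mapping reported by the showvpath command for your own system):

Before:  /dev/dsk/c2t0d1  /nfs_export1  vxfs  delaylog  0 2
After:   /dev/dsk/vpath3  /nfs_export1  vxfs  delaylog  0 2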
Hardware
The following hardware components are needed:
v Supported storage devices
v One or more pairs of fibre-channel host adapters
  To use SDD's input/output (I/O) load balancing features and failover features, you need a minimum of two paths to your storage devices. For more information about the fibre-channel adapters that you can use on your Linux host system, see the Host Systems Attachment Guide for your product.
v Subsystem LUNs that have been created and configured for multiport access. Subsystem LUNs are known as SDD vpath devices in Linux SDD. Each SDD vpath device can have up to 32 paths (SCSI disk instances).
v A fiber-optic cable to connect each fibre-channel adapter to a supported storage device port, or to switch ports subsequently zoned to supported storage device ports.
See the IBM TotalStorage Enterprise Storage Server: Interoperability Guide for more information regarding hardware, software, and driver support.
Software
A general list of supported Linux distributions and major release levels is shown below. For the most up-to-date information regarding support for specific architectures and kernels, see the Readme file for the latest SDD release on the CD-ROM or visit the SDD website:
www.ibm.com/servers/storage/support/software/sdd
v Novell SUSE
  - SUSE Linux Enterprise Server (SLES) 8 / UnitedLinux 1.0
  - SLES 9
v Red Hat
  - RHEL 3 AS
  - RHEL 4 AS
v Asianux
  - Red Flag Advanced Server 4.1
  - Red Flag DC Server 4.1
Unsupported environments
SDD does not support environments containing the following functions:
v DS8000 and DS6000 do not support SCSI connectivity. ESS Model 800 does support SCSI connectivity.
v The EXT3 file system on an SDD vpath device is only supported on distributions running the 2.4.21 or newer kernel.
v Single-path mode during concurrent download of licensed machine code or during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement, or host zoning reconfiguration that affects the host or storage ports in use.
See the IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide for more information about how to install and configure fibre-channel adapters for your Linux host system and for information about working around Linux LUN limitations.
Installing SDD
Before you install SDD, make sure that you have root access to your Linux host system and that all the required hardware and software is ready.
v For SUSE: enter cd /media/cdrom
5. If you are running Red Hat, enter cd redhat. If you are running SUSE, enter cd suse. If you are running Miracle Linux, Red Flag, or Asianux, enter cd asianux. Then enter ls to display the name of the package.
6. Enter rpm -qpl IBMsdd-N.N.N.N-x.arch.distro.rpm to view all the files in the package, where:
   v N.N.N.N-x represents the current version release modification level number; for example, N.N.N.N-x = 1.6.0.1-1.
   v arch is the architecture (i686, ppc64, ia64)
   v distro is one of the following: rhel3, rhel4, ul1, sles8, sles9, asianux
7. Enter the following command to install SDD:
   rpm -ivh [--prefix=newpath] IBMsdd-N.N.N.N-x.arch.distro.rpm
   where newpath is the new directory under which you want to place SDD files (the default is /opt). Note that you cannot specify --prefix=/. The prefix flag is optional.
   A message similar to the following is displayed:
Preparing for installation ... IBMsdd-N.N.N.N-1
Upgrading SDD
Complete the following steps to upgrade SDD on your Linux host system:
1. Log on to your host system as the root user.
2. Insert the SDD installation CD into your CD-ROM drive.
3. Enter mount /dev/cdrom to mount the CD-ROM drive.
4. Enter the following to access your CD-ROM contents:
   v For Red Hat or Asianux: enter cd /mnt/cdrom
   v For SUSE: enter cd /media/cdrom
5. If you are running Red Hat, enter cd redhat; if you are running SUSE, enter cd suse. Then enter ls to display the name of the package.
6. Enter rpm -qpl IBMsdd-N.N.N.N-x.arch.distro.rpm to view all the files in the package.
7. Enter rpm -U IBMsdd-N.N.N.N-x.arch.distro.rpm [--prefix=newpath] to upgrade SDD. The --prefix option should be used if it was used during the RPM installation of SDD. A message similar to the following is displayed:
Preparing for installation ... IBMsdd-N.N.N.N-1
Note: The RPM upgrade command (rpm -U) will not work if you want to upgrade from a pre-SDD 1.6.1.x package to an SDD 1.6.1.x or later package. Instead, complete the following steps (a brief sketch follows this note):
1. Uninstall the SDD package using the RPM erase command (rpm -e IBMsdd).
2. Install the new SDD 1.6.1.x or later package using rpm -i.
3. If you modified your /etc/vpath.conf, the rpm -e command saved a copy in /etc/vpath.conf.rpmsave. To preserve your /etc/vpath.conf modifications, you must also copy the /etc/vpath.conf.rpmsave to /etc/vpath.conf.
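The three steps in the note might look like the following sketch (the package file name is a placeholder; substitute the actual IBMsdd-N.N.N.N-x.arch.distro.rpm name for your release, and copy the saved configuration only if you had modified it):

# rpm -e IBMsdd
# rpm -i IBMsdd-N.N.N.N-x.arch.distro.rpm
# cp /etc/vpath.conf.rpmsave /etc/vpath.conf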
File                                          Location          Description
sdd-mod.o-xxx (for Linux 2.4 and              /opt/IBMsdd       SDD device driver file (where xxx stands for the kernel level of your host system)
earlier kernels)
sdd-mod.ko-xxx (for Linux 2.6 kernels only)   /opt/IBMsdd       SDD device driver file (where xxx stands for the kernel level of your host system)
vpath.conf                                    /etc              SDD configuration file
sddsrv.conf                                   /etc              sddsrv configuration file
executables                                   /opt/IBMsdd/bin   SDD configuration and status tools
                                              /usr/sbin         Symbolic links to the SDD utilities
sdd.rcscript                                  /etc/init.d/sdd   Symbolic link for the SDD system startup option
                                              /usr/sbin/sdd     Symbolic link for the SDD manual start or restart option
In this table, the /opt directory is the default directory. The root prefix might be different, depending on the installation. You can issue the rpm -qi IBMsdd command to receive information on the particular package, or rpm -ql IBMsdd command to list the specific SDD files that were successfully installed on your Linux host system. If the installation was successful, issue the cd /opt/IBMsdd and then ls -l commands to list all the installed SDD components. You will see output similar to the following:
(listing of the installed SDD components in the /opt/IBMsdd directory)
SDD utilities are packaged as executable files and contained in the /bin directory. If you issue the cd /opt/IBMsdd/bin and then ls -l commands, you will see output similar to the following:
total 232
-rwxr-x---   1 root   root   32763 Sep 26 17:40 cfgvpath
-rwxr-x---   1 root   root   28809 Sep 26 17:40 datapath
-rwxr-x---   1 root   root    1344 Sep 26 17:40 sdd.rcscript
-rwxr-x---   1 root   root   16667 Sep 26 17:40 lsvpcfg
-rwxr-x---   1 root   root   78247 Sep 26 17:40 pathtest
-rwxr-x---   1 root   root   22274 Sep 26 17:40 rmvpath
-rwxr-x---   1 root   root   92683 Sep 26 17:40 addpaths
Note: The addpaths command is still supported on the 2.4 kernels. On the 2.6 kernels cfgvpath will perform the functionality of addpaths. If the installation failed, a message similar to the following is displayed:
package IBMsdd is not installed
Configuring SDD
Before you start the SDD configuration process, make sure that you have successfully configured the supported storage device to which your host system is attached and that the supported storage device is operational.
This section provides instructions for the following procedures:
v Configuring and verifying an SDD
v Configuring SDD at system startup
v Maintaining SDD vpath device configuration persistence
Table 21 lists all of the commands that can help system administrators configure SDD. More details about the function and use of each command are described later in this section.
Table 21. Summary of SDD commands for a Linux host system

Command          Description
cfgvpath         Configures SDD vpath devices. See the note.
cfgvpath query   Displays all SCSI disk devices.
lsvpcfg          Displays the current devices that are configured and their corresponding paths.
rmvpath          Removes one or all SDD vpath devices.
Table 21. Summary of SDD commands for a Linux host system (continued)

Command          Description
addpaths         Adds any new paths to an existing SDD vpath device. This command is only supported for Linux 2.4 kernels. For Linux 2.6 kernels, the functionality of the addpaths command has been added to the cfgvpath command. If you need to add paths to an existing SDD vpath device with a Linux 2.6 kernel, run the cfgvpath command.
sdd start        Loads the SDD driver and automatically configures disk devices for multipath access.
sdd stop         Unloads the SDD driver (requires that no SDD vpath devices currently be in use).
sdd restart      Unloads the SDD driver (requires that no SDD vpath devices currently be in use), and then loads the SDD driver and automatically configures disk devices for multipath access.
Note: For Linux 2.4 kernels, the SDD vpath devices are assigned names according to the following scheme:
vpatha, vpathb, ..., vpathp
vpathaa, vpathab, ..., vpathap
vpathba, vpathbb, ..., vpathbp
...
vpathza, vpathzb, ..., vpathzp
vpathaaa, vpathaab, ..., vpathaap
...
For Linux 2.6 kernels, the SDD vpath devices are assigned names according to the following scheme:
vpatha, vpathb, ..., vpathy, vpathz
vpathaa, vpathab, ..., vpathay, vpathaz
vpathba, vpathbb, ..., vpathby, vpathbz
...
vpathza, vpathzb, ..., vpathzy, vpathzz
vpathaaa, vpathaab, ..., vpathaay, vpathaaz
...
SDD configuration
Use the following steps to configure SDD on your Linux host system.
1. Log on to your Linux host system as the root user.
2. Enter sdd start.
3. You can verify the configuration using the datapath query device command to determine that all your disks are configured. If the system is not configured properly, see Verifying SDD configuration on page 222.
4. Use the sdd stop command to unconfigure and unload the SDD driver. Use the sdd restart command to unconfigure, unload, and then restart the SDD
configuration process. If a vpath device is in use (mounted), then the sdd stop command fails with an error stating that the module sdd-mod is in use. A short sketch of this command sequence follows.
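This sketch only condenses the steps already listed above (the device listing varies with your configuration; sdd restart is needed only when the configuration must be reloaded):

# sdd start
# datapath query device
# sdd restart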
(module listing output; the sdd-mod entry indicates that the SDD driver is loaded)
For Linux 2.6 kernels, the SDD driver is displayed as sdd_mod. 2. Enter cat /proc/IBMsdd to verify that the SDD sdd-mod driver level matches that of your system kernel. The following example shows that SDD 1.6.0.0 is installed on a Linux host system running a 2.4.9 symmetric multiprocessor kernel:
sdd-mod: SDD 1.6.0.0 2.4.9 SMP Sep 26 2001 17:39:06 (C) IBM Corp.
3. The order of disk recognition on a Linux system is:
   a. Fibre-channel Host Bus Adapter (HBA) driver. The HBA driver needs to recognize the disks. The recognized disks are typically listed in /proc/scsi/adapter_type/host_number, for example /proc/scsi/qla2300/2. Example /proc/scsi/adapter_type/host_number output is shown below. Note that this is not always true for the Linux 2.6 kernel because the HBA driver version can use the sysfs filesystem instead of the proc filesystem to expose information.
   b. SCSI driver (scsi-mod or scsi_mod). The SCSI driver has to recognize the disks, and, if this succeeds, it puts disk entries into /proc/scsi/scsi.
   c. SCSI disk driver (sd-mod or sd_mod). The SCSI disk driver has to recognize the disk entries, and if this succeeds it puts the entries into /proc/partitions.
   d. SDD driver (sdd-mod or sdd_mod). SDD then uses the disk entries in /proc/partitions to configure the SDD vpath devices. If configuration succeeds, SDD generates more entries in /proc/partitions.
Enter cat /proc/scsi/adapter_type/N to display the status of a specific adapter and the names of the attached devices. In this command, adapter_type indicates the
type of adapter that you are using, and N represents the host-assigned adapter number. The following example shows a sample output:
# ls /proc/scsi/
qla2300  scsi  sym53c8xx
# ls /proc/scsi/qla2300/
2  3  HbaApiNode
# cat /proc/scsi/qla2300/2
QLogic PCI to Fibre Channel Host Adapter for ISP23xx:
        Firmware version: 3.01.18, Driver version 6.05.00b5
Entry address = e08ea060
HBA: QLA2300 , Serial# C81675
Request Queue = 0x518000, Response Queue = 0xc40000
Request Queue count= 128, Response Queue count= 512
Total number of active commands = 0
Total number of interrupts = 7503
Total number of IOCBs (used/max) = (0/600)
Total number of queued commands = 0
    Device queue depth = 0x10
Number of free request entries = 57
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 47
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Host adapter:loop state= <READY>, flags= 0x8a0813
Dpc flags = 0x0
MBX flags = 0x0
SRB Free Count = 4096
Port down retry = 008
Login retry count = 008
Commands retried with dropped frame(s) = 0

SCSI Device Information:
scsi-qla0-adapter-node=200000e08b044b4c;
scsi-qla0-adapter-port=210000e08b044b4c;
scsi-qla0-target-0=5005076300c70fad;
scsi-qla0-target-1=10000000c92113e5;
scsi-qla0-target-2=5005076300ce9b0a;
scsi-qla0-target-3=5005076300ca9b0a;
scsi-qla0-target-4=5005076801400153;
scsi-qla0-target-5=500507680140011a;
scsi-qla0-target-6=500507680140017c;
scsi-qla0-target-7=5005076801400150;
scsi-qla0-target-8=5005076801200153;
scsi-qla0-target-9=500507680120011a;
scsi-qla0-target-10=500507680120017c;
scsi-qla0-target-11=5005076801200150;

SCSI LUN Information:
(Id:Lun)
( 2: 0): Total reqs 35, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 1): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 2): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 3): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 4): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 5): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 6): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 7): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
. . .
The disks that the QLogic adapter recognizes are listed at the end of the output under the heading SCSI LUN Information. The disk descriptions are shown one per line. An * at the end of a disk description indicates that the disk is not yet registered with the operating system. SDD cannot configure devices that are not registered with the operating system. See the appropriate Host Systems Attachment Guide for your product to learn about SCSI LUN discovery in Linux.
4. Enter cfgvpath query to verify that you have configured the SCSI disk devices that you allocated and configured for SDD. The cfgvpath query is effectively looking at the /proc/partitions output. After you enter the cfgvpath query command, a message similar to the following is displayed. This example output is for a system with disk storage system and virtualization product LUNs.
/dev/sda  ( 8,   0) host=0 ch=0 id=0  lun=0 vid=IBM pid=DDYS-T36950M serial=xxxxxxxxxxxx ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdb  ( 8,  16) host=2 ch=0 id=0  lun=0 vid=IBM pid=2105E20 serial=60812028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdc  ( 8,  32) host=2 ch=0 id=0  lun=1 vid=IBM pid=2105E20 serial=70912028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdd  ( 8,  48) host=2 ch=0 id=0  lun=2 vid=IBM pid=2105E20 serial=31B12028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sde  ( 8,  64) host=2 ch=0 id=0  lun=3 vid=IBM pid=2105E20 serial=31C12028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdf  ( 8,  80) host=2 ch=0 id=1  lun=0 vid=IBM pid=2105E20 serial=60812028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdg  ( 8,  96) host=2 ch=0 id=1  lun=1 vid=IBM pid=2105E20 serial=70912028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdh  ( 8, 112) host=2 ch=0 id=1  lun=2 vid=IBM pid=2105E20 serial=31B12028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdi  ( 8, 128) host=2 ch=0 id=1  lun=3 vid=IBM pid=2105E20 serial=31C12028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdj  ( 8, 144) host=2 ch=0 id=6  lun=0 vid=IBM pid=2145 serial=600507680183000a800000000000000a ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdk  ( 8, 160) host=2 ch=0 id=6  lun=1 vid=IBM pid=2145 serial=600507680183000a800000000000000b ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdl  ( 8, 176) host=2 ch=0 id=6  lun=2 vid=IBM pid=2145 serial=600507680183000a800000000000000c ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdm  ( 8, 192) host=2 ch=0 id=6  lun=3 vid=IBM pid=2145 serial=600507680183000a800000000000000d ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdn  ( 8, 208) host=2 ch=0 id=6  lun=4 vid=IBM pid=2145 serial=600507680183000a800000000000000e ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdo  ( 8, 224) host=2 ch=0 id=6  lun=5 vid=IBM pid=2145 serial=600507680183000a800000000000000f ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdp  ( 8, 240) host=2 ch=0 id=6  lun=6 vid=IBM pid=2145 serial=600507680183000a8000000000000010 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdq  (65,   0) host=2 ch=0 id=6  lun=7 vid=IBM pid=2145 serial=600507680183000a8000000000000011 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdr  (65,  16) host=2 ch=0 id=6  lun=8 vid=IBM pid=2145 serial=600507680183000a8000000000000012 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sds  (65,  32) host=2 ch=0 id=6  lun=9 vid=IBM pid=2145 serial=600507680183000a8000000000000013 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdt  (65,  48) host=2 ch=0 id=7  lun=0 vid=IBM pid=2145 serial=600507680183000a800000000000000a ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdu  (65,  64) host=2 ch=0 id=7  lun=1 vid=IBM pid=2145 serial=600507680183000a800000000000000b ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdv  (65,  80) host=2 ch=0 id=7  lun=2 vid=IBM pid=2145 serial=600507680183000a800000000000000c ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdw  (65,  96) host=2 ch=0 id=7  lun=3 vid=IBM pid=2145 serial=600507680183000a800000000000000d ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdx  (65, 112) host=2 ch=0 id=7  lun=4 vid=IBM pid=2145 serial=600507680183000a800000000000000e ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdy  (65, 128) host=2 ch=0 id=7  lun=5 vid=IBM pid=2145 serial=600507680183000a800000000000000f ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdz  (65, 144) host=2 ch=0 id=7  lun=6 vid=IBM pid=2145 serial=600507680183000a8000000000000010 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdaa (65, 160) host=2 ch=0 id=7  lun=7 vid=IBM pid=2145 serial=600507680183000a8000000000000011 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdab (65, 176) host=2 ch=0 id=7  lun=8 vid=IBM pid=2145 serial=600507680183000a8000000000000012 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdac (65, 192) host=2 ch=0 id=7  lun=9 vid=IBM pid=2145 serial=600507680183000a8000000000000013 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdad (65, 208) host=2 ch=0 id=10 lun=0 vid=IBM pid=2062 serial=600507680183000a800000000000000a ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdae (65, 224) host=2 ch=0 id=10 lun=1 vid=IBM pid=2062 serial=600507680183000a800000000000000b ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdaf (65, 240) host=2 ch=0 id=10 lun=2 vid=IBM pid=2062 serial=600507680183000a800000000000000c ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
. . .
ctlr_flag=0 ctlr_flag=0 ctlr_flag=0 ctlr_flag=0 ctlr_flag=0 ctlr_flag=0 ctlr_flag=0 ctlr_flag=0 ctlr_flag=0 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1 ctlr_flag=1
ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0 ctlr_nbr=0 ctlr_nbr=1 ctlr_nbr=0
df_ctlr=0 X df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0 df_ctlr=0
The sample output shows the name and serial number of the SCSI disk device, its connection information, and its product identification. A capital letter X at the end of a line indicates that SDD currently does not support the device or that the device is in use and cfgvpath has not configured it. The cfgvpath utility examines /etc/fstab and the output of the mount command to determine the disks that it should not configure. If cfgvpath has not configured a disk that you think it should have configured, verify that an entry for one of these disks exists in /etc/fstab or in the output of the mount command. If the entry is incorrect, delete the wrong entry and issue cfgvpath again to configure the device.
cfgvpath
Enter cfgvpath to configure SDD vpath devices. The configuration information is saved by default in the /etc/vpath.conf file to maintain vpath name persistence in subsequent driver loads and configurations. You might choose to specify your own configuration file by issuing the cfgvpath -f configuration_file_name.cfg command where configuration_file_name is the name of the configuration file that you want to specify. If you use a self-specified configuration file, SDD only configures the SDD vpath devices that this file defines. Enter cfgvpath ? for more information about the cfgvpath command.
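As a brief illustration of the two forms described above (the file name my_vpath.cfg is only an example, not a required name):

cfgvpath                     # configure SDD vpath devices using the default /etc/vpath.conf
cfgvpath -f my_vpath.cfg     # configure only the SDD vpath devices that the named file defines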
rmvpath
You can remove an SDD vpath device by using the rmvpath vpath_name command, where vpath_name represents the name of the SDD vpath device that is selected for removal. Enter rmvpath ? for more information about the rmvpath command.
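For example, to remove a device that lsvpcfg reports as vpatha (the device name here is only an illustration):

rmvpath vpatha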
lsvpcfg
Verify the SDD vpath device configuration by entering lsvpcfg or datapath query device. If you successfully configured SDD vpath devices, output similar to the following is displayed by lsvpcfg. This example output is for a system with disk storage system and virtualization product LUNs:
sdd-mod: SDD 1.6.0.0 2.4.19-64GB-SMP SMP Mar 3 2003 18:06:49 (C) IBM Corp.
000 vpatha ( 247, 0) 60812028 = /dev/sdb /dev/sdf /dev/sdax /dev/sdbb
001 vpathb ( 247, 16) 70912028 = /dev/sdc /dev/sdg /dev/sday /dev/sdbc
002 vpathc ( 247, 32) 31B12028 = /dev/sdd /dev/sdh /dev/sdaz /dev/sdbd
003 vpathd ( 247, 48) 31C12028 = /dev/sde /dev/sdi /dev/sdba /dev/sdbe
004 vpathe ( 247, 64) 600507680183000a800000000000000a = /dev/sdj /dev/sdt /dev/sdad /dev/sdan /dev/sdbf /dev/sdbp /dev/sdbz /dev/sdcj
005 vpathf ( 247, 80) 600507680183000a800000000000000b = /dev/sdk /dev/sdu /dev/sdae /dev/sdao /dev/sdbg /dev/sdbq /dev/sdca /dev/sdck
006 vpathg ( 247, 96) 600507680183000a800000000000000c = /dev/sdl /dev/sdv /dev/sdaf /dev/sdap /dev/sdbh /dev/sdbr /dev/sdcb /dev/sdcl
007 vpathh ( 247, 112) 600507680183000a800000000000000d = /dev/sdm /dev/sdw /dev/sdag /dev/sdaq /dev/sdbi /dev/sdbs /dev/sdcc /dev/sdcm
008 vpathi ( 247, 128) 600507680183000a800000000000000e = /dev/sdn /dev/sdx /dev/sdah /dev/sdar /dev/sdbj /dev/sdbt /dev/sdcd /dev/sdcn
009 vpathj ( 247, 144) 600507680183000a800000000000000f = /dev/sdo /dev/sdy /dev/sdai /dev/sdas /dev/sdbk /dev/sdbu /dev/sdce /dev/sdco
010 vpathk ( 247, 160) 600507680183000a8000000000000010 = /dev/sdp /dev/sdz /dev/sdaj /dev/sdat /dev/sdbl /dev/sdbv /dev/sdcf /dev/sdcp
011 vpathl ( 247, 176) 600507680183000a8000000000000011 = /dev/sdq /dev/sdaa /dev/sdak /dev/sdau /dev/sdbm /dev/sdbw /dev/sdcg /dev/sdcq
012 vpathm ( 247, 192) 600507680183000a8000000000000012 = /dev/sdr /dev/sdab /dev/sdal /dev/sdav /dev/sdbn /dev/sdbx /dev/sdch /dev/sdcr
013 vpathn ( 247, 208) 600507680183000a8000000000000013 = /dev/sds /dev/sdac /dev/sdam /dev/sdaw /dev/sdbo /dev/sdby /dev/sdci /dev/sdcs
See Chapter 13, Using the datapath commands, on page 429 for more information about the datapath query device command and all other SDD datapath commands.
addpaths
You can issue the addpaths command to add paths to existing SDD vpath devices. For SDD to discover new paths, the Linux kernel SCSI disk driver must already be aware of the path. For example, addpaths would be useful in a scenario where disks are configured and are visible to the OS but were unavailable at the time that SDD was configured because of a failed switch or unplugged fibre cable. Later, when the disks are recovered through the recovery process or maintenance, you can issue addpaths on a running system to add the restored paths. Issue the addpaths command to add new paths to existing disks. Use cfgvpath to add new disks. See Using dynamic reconfiguration on page 229. Note: For Linux 2.6 kernels, addpaths is not supported because the functionality of addpaths has been incorporated into the cfgvpath command. To add new paths to existing disks when using a Linux 2.6 kernel, run cfgvpath.
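For example, after a failed switch or cable has been repaired and the paths are visible to the Linux SCSI layer again, a minimal sequence might be:

addpaths        # add the recovered paths to the existing SDD vpath devices
lsvpcfg         # or: datapath query device, to confirm that the new paths are present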
Complete the following steps to configure SDD at system startup:
1. Log on to your Linux host system as the root user.
2. At startup, run one of these commands to enable run level X (a concrete example using run levels 3 and 5 follows this procedure):
   For Red Hat: chkconfig --level X sdd on
   For SUSE: chkconfig --set sdd X
3. Enter chkconfig --list sdd to verify that the system startup option is enabled for SDD configuration.
4. Restart your host system so that SDD is loaded and configured.
If necessary, you can disable the startup option by entering: chkconfig --level X sdd off
For SDD to load and configure automatically, the host bus adapter (HBA) driver must already be loaded. This can be assured at start time by adding the appropriate driver or drivers to the kernel's initial RAM disk. See the Red Hat mkinitrd command documentation or the SUSE mk_initrd command documentation for more information. Additional suggestions might be available from the HBA driver vendor.
Sometimes certain system configurations require SDD to start earlier than is possible under this procedure. The general rule is: if an application, filesystem, or other product needs to use an SDD vpath device before it is loaded in the system init scripts, you will need to use another procedure to start SDD so that these applications or filesystems can access SDD vpath devices. Some of the known system configurations are described below. This is not an exhaustive list, but it does provide an idea of situations where other methods are required:
1. SDD remote boot
   If booting off of an SDD vpath device, SDD needs to be available before the root filesystem is mounted. This means SDD needs to be placed in the initial ramdisk (initrd). See Booting Linux over the SAN with SDD on page 237 for more instructions on how to set up this environment.
2. Linux Logical Volume Manager (LVM) with SDD
   Linux LVM with SDD often requires SDD to start early in the init script process because the LVM initialization occurs relatively early. If LVM is used to encapsulate the root disk, SDD needs to be placed in the initial ramdisk (initrd). See Using Linux Logical Volume Manager with SDD on page 233 for more information.
Any other customized application, driver, or filesystem that requires access to an SDD vpath device early in the boot process might require: (1) SDD be placed in the initial ramdisk (initrd), or (2) the SDD startup script be placed earlier in the init scripts.
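The following is a minimal sketch of the enable-and-verify sequence, assuming that run levels 3 and 5 are wanted (the run levels are an assumption for this example; substitute the values appropriate for your system):

# Red Hat
chkconfig --level 35 sdd on
# SUSE
chkconfig --set sdd 35
# verify on either distribution
chkconfig --list sdd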
configures and assigns SDD vpath devices accordingly. The configuration is saved in /etc/vpath.conf to maintain vpath name persistence in subsequent driver loads and configurations. The /etc/vpath.conf file is not modified during an rpm upgrade (rpm -U). However, if the rpm is removed and reinstalled (using the rpm -e and rpm -i commands), the /etc/vpath.conf file is removed. If you are doing an rpm removal, it is important to manually save your /etc/vpath.conf and restore it after the rpm has been reinstalled, before issuing sdd start.
After the SDD vpath devices are configured, issue lsvpcfg or the datapath query device command to verify the configuration. See datapath query device on page 439 for more information.
You can manually exclude a device in /etc/vpath.conf from being configured. To manually exclude a device from being configured, edit the vpath.conf file prior to running sdd start, adding a # before the first character of the entry for the device that you want to remain unconfigured. Removing the # allows a previously excluded device to be configured again. The following output shows the contents of a vpath.conf file with vpathb and vpathh not configured:
vpatha 60920530
#vpathb 60A20530
vpathc 60B20530
vpathd 60C20530
vpathe 70920530
vpathf 70A20530
vpathg 70B20530
#vpathh 70C20530
that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two.
round robin sequential (rrs)
   This policy is the same as the round-robin policy, with optimization for sequential I/O.
The default path-selection policy for an SDD device is load balancing sequential. You can change the policy for an SDD device; SDD supports dynamic changing of the SDD devices' path-selection policy. Before changing the path-selection policy, determine the active policy for the device. Enter datapath query device N, where N is the device number of the SDD vpath device, to show the current active policy for that device.
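For example, assuming SDD vpath device number 0 (the device number is illustrative), the active policy can be displayed and then changed with the datapath commands described in Chapter 13; the rr value shown here selects the round-robin policy:

datapath query device 0
datapath set device 0 policy rr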
/dev/sda now refers to a path to LUN 1. In such a scenario, running sdd restart is required. In SDD 1.6.1.0 versions and later, cfgvpath will automatically refuse to configure the added LUN if name slippage has occurred.
Uninstalling SDD
You must unload the SDD driver before uninstalling SDD. Complete the following steps to remove SDD from a Linux host system:
1. Log on to your Linux host system as the root user.
2. Enter sdd stop to remove the driver.
3. Enter rpm -e IBMsdd to remove the SDD package.
4. Verify the SDD removal by entering either rpm -q IBMsdd or rpm -ql IBMsdd. If you successfully removed SDD, output similar to the following is displayed:
package IBMsdd is not installed
Note: The sdd stop command will not unload a driver that is in use.
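Before entering sdd stop, you can check whether the sdd-mod module is still loaded or in use with a standard module listing, for example:

lsmod | grep sdd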
Setting up automount
The autofs daemon should be set up at boot time by default. To check this, issue the following command:
chkconfig --list autofs
The output of the command should state the runlevels to which autofs is set. For example:
autofs 0:off 1:off 2:off 3:on 4:on 5:on 6:off
This output indicates that autofs is running on runlevels 3, 4, and 5, which should be the default setting. If you notice that the autofs daemon is not running on runlevels 3, 4, 5, issue the following commands to ensure that it will run on startup: On SUSE:
chkconfig autofs 345
On Red Hat:
chkconfig --level 345 autofs on
Configuring automount
Use the following steps to configure automount:
1. Configure the master map file.
   Automount configuration requires the configuration of the master map file, /etc/auto.master. The format of the file is the following:
   [mount point] [map file] [options]
   where:
   mount point
      This variable is the master mount point under which all the vpath devices will be mounted. For example, /mnt or /vpath (note that it is an absolute path).
      Note: The mount point that you specify will be mounted over by autofs. That means that whatever items you had mounted at that mount point will be invisible once automount is activated. Thus, ensure that you do not have conflicting mount points for separate applications and that, if you plan to mount other things under the master mount point, you do so with automount and not within fstab or another facility or script. If the conflict is unavoidable, change the automount master mount point to a nonconflicting mount point to prevent problems from occurring.
   map file
      This is another separate file that describes under which names certain devices will be mounted and the mount variables for the device. Usually, it is named after the mount point, such as auto.mnt or auto.vpath. It usually resides under /etc.
   options
      These are the options that you can specify and can be referenced by the automount man page. The most relevant setting is the --timeout setting. The timeout setting is the number of seconds that automount will wait for mount point access before unmounting that mount point. If you set this value to 0, automount will not attempt to unmount the master mount point (that is, it will remain permanently mounted unless it is manually unmounted). The default setting is 5 minutes.
   The following example shows a sample auto.master file:
/vpath /etc/auto.vpath --timeout=0
2. Configure the secondary map file.
   The secondary map file is the file referred to by the file /etc/auto.master. The format of this map file is:
   [secondary mount point] [mount options] [device name]
   where:
   secondary mount point
      The secondary mount point is the mount point relative to the master mount point. For example, if you wanted vpatha to be mounted at /vpath/vpatha, you would set this secondary mount point to vpatha.
   mount options
      The mount options are standard options passed to the Linux mount command using the -o option. The only difference is that you can use the option fstype to specify the exact filesystem type of the device. For example, you can use ext2, ext3, reiserfs, and so forth for the fstype. You can find the other options under the man page for mount. Set the fstype to the correct value and use the two options defaults and check=normal. Defaults gives some values to the filesystem that are standard for most Linux operating environments. The check=normal option ensures that certain sanity checks are made on the filesystem before mounting. You can set check=strict to ensure even stricter checking rules during mount time, but performance might be degraded. Most modern filesystems check themselves after a certain number of mounts.
   device name
      The device name is the SDD vpath device to be mounted, specified with a leading colon (for example, :/dev/vpatha in the sample file that follows).
   The following example shows a sample auto.vpath file:
vpatha -fstype=ext3,defaults,check=normal :/dev/vpatha
vpathi -fstype=ext2,defaults,check=normal :/dev/vpathi
3. Capture your new file settings. Test with a reboot cycle at least once to ensure that autofs is loaded with the current map files and that the system will automatically mount the devices correctly. Perform one of the following steps:
   v Reboot the system.
   v Run /etc/init.d/autofs restart.
After automount is set up and the SDD vpath devices are mounted, the output of the mount command is similar to the following:
/dev/hda3 on / type ext3 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
automount(pid16309) on /vpath type autofs (rw,fd=4,pgrp=16309,minproto=2,maxproto=4)
/dev/vpatha on /vpath/vpatha type ext3 (rw,check=normal)
/dev/vpathi on /vpath/vpathi type ext2 (rw,check=normal)
Use automount to mount SDD vpath devices. However, on Red Hat Linux, if you want to add the mount points over SDD devices in /etc/fstab and have them mounted by mount -a during system startup time, you must not enable the autofsck option (which is done by setting the sixth field of the new entry to 0). Also, ensure that you make the following change in /opt/IBMsdd/bin/sdd.rcscript: Change:
# chkconfig: - 25 75
to:
# chkconfig: - 00 75
This allows the SDD driver to start as early as possible so that other rc scripts that mount file systems as part of the startup sequence will mount vpath devices configured by SDD. The usual startup script that calls mount -a is S25netfs. If this script is not enabled, either enable it using chkconfig, or add the mount -a command to rc.local so that any entries in /etc/fstab that have not yet been mounted will be attempted. Also, verify that other applications that need to use SDD vpath devices or their mount points are started after SDD has been started, configured, and its filesystems mounted. You then need to issue chkconfig sdd on to configure SDD at system startup. Use chkconfig --list sdd to verify the run levels at which sdd is configured to start. If the run levels are not correct, modify them by using the --level option of chkconfig to adjust the levels to values that are appropriate for the system configuration.
in the initrd, which is a process that is not described here. The procedures and requirements are different for SUSE and Red Hat.
Because the SDD configuration utility (cfgvpath) must be able to write certain configuration parameters to the root disk, this line is needed to remount the root filesystem in read/write mode. 4. Add another line at the end of the start function to remount the root filesystem back into read-only mode. This restores the mount state before and after the start function. The system remounts the filesystem to read write at a later point in the boot process. Use the following line to remount in read-only mode:
mount -n -o remount,ro / 2> /dev/null (the only change from above is ro)
The beginning of your start function should look like the following:
start() {
	mount -n -o remount,rw / 2> /dev/null   # ADDED THIS LINE
	echo -n "Starting $dev_name driver load: "
	rm -f ${driver_dir}/${driver}.o
	...
	mount -n -o remount,ro / 2> /dev/null   # ADDED THIS LINE
}
5. Enter cd /etc/init.d/boot.d to change to the /etc/init.d/boot.d directory.
6. Create a link named Sxxboot.sdd, with xx being a number smaller than the one that is on the LVM boot script link. For example, the LVM on this system is at S04boot.lvm:
# ls -l | grep lvm
lrwxrwxrwx  1 root  root  11 Aug 12 17:35 S04boot.lvm -> ../boot.lvm*
Because LVM is loading up at S04, you must set SDD to at least S03 in order to avoid this problem. Therefore, create a link to the boot.sdd file that was just modified:
# ln -s ../boot.sdd S03boot.sdd
lrwxrwxrwx  1 root  root  11 Mar 11 12:03 S03boot.sdd -> ../boot.sdd*
lrwxrwxrwx  1 root  root  11 Aug 12 17:35 S04boot.lvm -> ../boot.lvm*
Because SUSE uses the numbering scheme to determine which script is run first at boot time, you must ensure that the SDD script is run before the LVM script is run. 7. If you have SDD starting in the runlevel init scripts, you must shut off the script. To do this, issue the chkconfig command:
chkconfig -s sdd off
8. Configure LVM. Reboot the system and the LVM configuration should come up after reboot using SDD vpath devices.
3. Append the following to the end of the block of commands, before the # LVM initialization comment, or on RHEL 4, before the # LVM2 initialization comment:
# Starting SDD /etc/init.d/sdd start
4. The affected section of the rc.sysinit file should look like this:
# Remount the root filesystem read-write.
update_boot_stage RCmountfs
state=`awk '/ \/ / && ($3 !~ /rootfs/) { print $4 }' /proc/mounts`
[ "$state" != "rw" -a "$READONLY" != "yes" ] && \
  action $"Remounting root filesystem in read-write mode: " mount -n -o remount,rw /

# Starting SDD
/etc/init.d/sdd start

# LVM initialization
...
5. If you have SDD starting in the runlevel init scripts, you need to shut off the script. You can do this using the chkconfig command.
chkconfig sdd off
6. Configure LVM. 7. Reboot the system and the LVM configuration should come up after reboot using SDD vpath devices.
This filter is too broad for SDD, because it recognizes both SDD vpath devices as well as the underlying paths (that is, /dev/sdxxx) to that SDD vpath device. You can narrow this regular expression to only accept vpath devices and not the underlying SCSI disk devices. Modify the regular expression to accept the name vpath and to ignore all other types of devices. This is the simplest example. Adjust the example according to your environment.
filter = [ "a/vpath[a-z]*/", "r/.*/" ]
This regular expression accepts all vpath devices and rejects all other devices under /dev. 2. Value of types. In the file, you will see that it is commented out:
# List of pairs of additional acceptable block device types found
# in /proc/devices with maximum (non-zero) number of partitions.
# types = [ "fd", 16 ]
Delete the comment marker, and replace fd with vpath. This allows LVM to add vpath to its list of internally recognized devices. The partition number should stay at 16. For example:
types = [ "vpath", 16 ]
After making these two changes, save the lvm.conf file. You should be able to run pvcreate on vpath devices (that is, /dev/vpatha) and create volume groups using vgcreate.
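As a short illustration, assuming an SDD vpath device /dev/vpatha and a volume group name vg_vpath chosen only for this example:

pvcreate /dev/vpatha
vgcreate vg_vpath /dev/vpatha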
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure:
   a. The installation target MUST be single-pathed before installing RHEL 3.
   b. Have a copy of RHEL 3 x86 either network-accessible or on CD-ROM.
   c. Be familiar with the RHEL 3 installation. This includes understanding which packages will be installed.
   d. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems. (This procedure was performed on an ESS Model 800.)
   e. Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices.
   f. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after installation of RHEL 3.
   g. Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution for a local storage device.
   h. Ensure that there will be network access to the system.
2. Configure QLogic Devices
   Note: For ease of installation and to avoid issues with internal SCSI or IDE controllers, it is recommended that all internal disk drive controllers be disabled. This procedure assumes that this has been done.
   v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure. While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.
3. Configure Boot/Root/SWAP devices
   v The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices.
   v It is also recommended that the swap device be at least the size of physical memory that is configured in the host. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device. However, this is not a requirement for the installation.
4. Installation Media
   The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.
5. Install
   v From the BIOS Menus, select the installation source to boot from. Verify that the QLogic XXXXXXX SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
   v Note: Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, for example, /dev/sda.
   v Select the desired options until arriving at the Installation Settings. Here, modifications of the partitioning settings are required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
   v Note: The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed.
6. Rebooting
   v On reboot, modify the BIOS to boot from hard disk. The system should now boot to the newly installed OS.
   v Verify that the system is booted from the correct disk and vpaths.
   v At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
7. Upgrading the SDD driver
   At the end of this document are instructions on how to upgrade the SDD driver.
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device). The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.
cfgvpath query
/dev/sda (8,  0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD).
3. Modify the /etc/fstab file.
   There is a one-to-one correlation between sd and vpath minor devices; that is, sda1 and vpatha1. Major devices, however, might not necessarily correlate. For example, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.
To:
/dev/vpatha3   /          ext3    defaults         1 1
/dev/vpatha1   /boot      ext3    defaults         1 2
none           /dev/pts   devpts  gid=5,mode=620   0 0
none           /proc      proc    defaults         0 0
none           /dev/shm   tmpfs   defaults         0 0
/dev/vpatha2   swap       swap    defaults         0 0
4. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
ls -1A /boot | grep initrd | grep $(uname -r)
cd /boot
cp [initrd file] to initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt
5. For ext2 file system initrds, you might need to resize the initrd file system.
dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
losetup /dev/loop0 initrd.vp
e2fsck -f /dev/loop0
resize2fs -f /dev/loop0
losetup -d /dev/loop0
Note: Adding the ramdisk_size= option to the kernel entry in the boot loader file is required after increasing the size of the initrd file. For resizing the initrd to 33554432, add ramdisk_size=34000 to the kernel line in the /boot/grub/menu.lst file.
Modify the /boot/grub/menu.lst file. Add an entry for the SDD boot using initrd.vp:
title Red Hat Enterprise Linux AS (2.4.21-32.0.1.ELsmp) with vpath/SDD
	root (hd0,0)
	kernel /vmlinuz-2.4.21-32.0.1.ELsmp ro root=/dev/vpatha3 ramdisk_size=34000
	initrd /initrd.vp
6. Change directory to /boot and un-archive the initrd image to /boot/mnt. Mount the initrd file system.
mount -o loop -t ext2 initrd.vp /boot/mnt
cd /boot/mnt
mkdir mnt
mkdir lib/tls
mkdir -p opt/IBMsdd/bin
chmod -R 640 opt/IBMsdd
To:
passwd: files
b. Change:
group: compat
To:
group: files
10. Copy required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib
These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. 11. Copy the correct sdd-mod to the initrd file system. Use the uname -r command to determine the correct sdd-mod and create a soft link. Example: The command will return something similar to 2.4.21-32.0.1.ELsmp
cp /opt/IBMsdd/sdd-mod.o-`uname -r` /boot/mnt/lib/
cd lib
ln -s sdd-mod.o-`uname -r` sdd-mod.o
cd ../
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
cp /bin/awk /boot/mnt/bin/
cp /bin/cat /boot/mnt/bin/
cp /bin/tar /boot/mnt/bin/
cp /bin/grep /boot/mnt/bin/
cp /bin/chmod /boot/mnt/bin/
cp /bin/chown /boot/mnt/bin/
cp /bin/mknod /boot/mnt/bin/
cp /bin/mount /boot/mnt/bin/
cp /bin/ls /boot/mnt/bin/
cp /bin/umount /boot/mnt/bin/
cp /bin/cp /boot/mnt/bin/
cp /bin/ash /boot/mnt/bin
cp /bin/rm /boot/mnt/bin
cp /bin/sh /boot/mnt/bin
cp /bin/ps /boot/mnt/bin
cp /bin/sed /boot/mnt/bin
cp /bin/date /boot/mnt/bin
cp /usr/bin/cut /boot/mnt/bin
13. Copy the required library files for each binary in the previous step. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries, so there might be duplications of copying. Also, copy the following libraries:
cp /lib/libnss_files.so.2 /boot/mnt/lib
cp /lib/libproc.so.2.0.17 /boot/mnt/lib
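Instead of copying each library that ldd reports by hand, a small loop such as the following sketch can be used; it assumes that the /boot/mnt/lib and /boot/mnt/lib/tls directories created earlier in this procedure already exist:

for lib in `ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib`; do
    cp $lib /boot/mnt$lib      # preserves the /lib or /lib/tls path under the initrd
done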
14. Modify the /boot/mnt/linuxrc file. Append the following lines to the end of the linuxrc file. For some storage systems with Linux 2.4 kernels, an additional option must be appended to the line where the scsi_mod module is loaded. Change:
insmod /lib/scsi_mod.o
To:
insmod scsi_mod.o max_scsi_luns=256
The following is the original linuxrc script in the initrd file system:
#!/bin/nash
echo "Loading scsi_mod.o module"
insmod /lib/scsi_mod.o
echo "Loading sd_mod.o module"
insmod /lib/sd_mod.o
echo "Loading qla2300.o module"
insmod /lib/qla2300.o
echo "Loading jbd.o module"
insmod /lib/jbd.o
echo "Loading ext3.o module"
insmod /lib/ext3.o
echo Mounting /proc filesystem
mount -t proc /proc /proc
echo Creating block devices
mkdevices /dev
echo Creating root device
mkrootdev /dev/root
echo 0x0100 > /proc/sys/kernel/real-root-dev
echo Mounting root filesystem
mount -o defaults --ro -t ext3 /dev/root /sysroot
pivot_root /sysroot /sysroot/initrd
umount /initrd/proc
The following is the modified linuxrc script in the initrd file system:
#!/bin/nash
echo "Loading scsi_mod.o module"
insmod /lib/scsi_mod.o max_scsi_luns=256
echo "Loading sd_mod.o module"
insmod /lib/sd_mod.o
echo "Loading qla2300.o module"
insmod /lib/qla2300.o
echo "Loading jbd.o module"
insmod /lib/jbd.o
echo "Loading ext3.o module"
insmod /lib/ext3.o
echo Mounting /proc filesystem
mount -t proc /proc /proc
echo Creating block devices
mkdevices /dev
echo Loading SDD module
insmod /lib/sdd-mod.o
echo Running cfgvpath
/opt/IBMsdd/bin/cfgvpath
echo Creating block devices
mkdevices /dev
echo Copying over device files
mount -o rw -t ext3 /dev/vpatha3 /sysroot
mkdevices /sysroot/dev
umount /sysroot
#echo Creating root device
#mkrootdev /dev/root
echo 0x0100 > /proc/sys/kernel/real-root-dev
echo Mounting root filesystem
mount -o defaults --ro -t ext3 /dev/vpatha3 /sysroot
pivot_root /sysroot /sysroot/initrd
umount /initrd/proc
16. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices.
The /etc/vpath.conf file will be saved to vpath.conf.rpmsave. 5. Install the new SDD driver.
rpm -ivh IBMsdd-x.x.x.x-y.i686.rhel3.rpm
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
mount -o loop -t ext2 initrd.vp mnt
cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib/
6. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current sdd module. 7. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
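A quick way to double-check steps 6 and 7, assuming that the initrd is still mounted at /boot/mnt, is for example:

ls -l /boot/mnt/lib/sdd-mod.ko*          # the soft link should point to the newly copied module
ldd /boot/mnt/opt/IBMsdd/bin/cfgvpath    # confirm that each listed library exists under /boot/mnt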
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure: v Have a copy of RHEL 3 either network accessible or on CD-ROM.
v Be familiar with the Red Hat installation. This includes understanding which packages will be installed and how to select required options through the installation.
v Be familiar with how to connect to and operate IBM BladeCenter control or IBM System p LPAR.
v Be familiar with how to set up an LPAR and select the required resources to create a configured LPAR with processors, memory, and SAN HBAs. For network installs, a network port is required, and for CD-ROM installs a CD-ROM is required.
v Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the configured LPAR can access LUNs from the storage unit.
v Be familiar with creating LUNs on the storage unit so that the LPAR can access the storage devices. Although SDD functions correctly in single-path environments, there should be redundant physical paths to the devices from the host (after installation).
v Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution for a local storage device.
2. Configure Fibre Channel Adapters
   v Verify that the SAN HBA devices that are configured for the system have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure.
3. Configure root/boot/swap devices
   v The physical boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing all packages from the installation media to the boot devices. It is also recommended that the swap device be at least the size of physical memory that is configured in the LPAR. For simplicity, these instructions assume that the root/boot/swap devices are all located on the same device; however, this is not a requirement for the installation. Also, it is not required that a /boot mount exists. In some cases, there will not be a /boot mount; rather, the boot files will reside in the directory /boot on the root / mount.
4. Installation Media
   v The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.
5. Upgrading the SDD driver and/or OS
   v At the end of this document are instructions on how to upgrade the SDD driver.
   v Each time the OS is updated or a new initrd is created, these procedures must be performed for the new OS and initrd.
Use this procedure to install RHEL 3:
1. From the SMS menu, select the installation source and boot from the media.
2. Verify that the Fibre HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
   Note: Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical
path to the device. Take note of which device will be used for the installation before proceeding, for example, /dev/sdb. Also note which of the fibre HBA devices is used to discover this device, as it will be needed in a later step.
3. Select the desired options until arriving at the Installation Settings step of the yast install. Here, modification of the partitioning settings is required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
   a. Select partitioning, and go to the custom partition setup.
   b. Select the device and Custom partitioning for experts.
   c. Make sure that there is a PReP boot partition on the root/boot device and that it is the first partition.
   d. Continue to partition devices as required to complete this configuration.
   The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed.
4. Finish the installation. If an error occurs while attempting to create the yaboot boot loader, stating that the device type of fcp is unknown, select OK and select No when asked to retry.
5. Reboot to the SMS menu. This time, the boot device that has been set up over the previous steps is ready to be booted.
6. Select to boot from a Hard Drive/SAN and select the Fibre HBA device adapter associated with the SAN disk device on which the installation was completed. The installation boot device should now be listed in the bootable devices discovered in the SAN on the selected Fibre HBA.
7. Select the appropriate device and boot.
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query command to obtain the LUN ID of the root's physical device. (In this procedure, sdb is the root device.) The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.
cfgvpath query /dev/sda ( 8, serial=13320870 /dev/sdb /dev/sdb ( 8, serial=13E20870 /dev/sdc ( 8, serial=12E20870 /dev/sdd /dev/sdd ( 8, serial=13F20870 0) host=0 ch=0 id=0 lun_id=13320870 not configured: Either 16) host=0 ch=0 id=0 lun_id=13E20870 32) host=0 ch=0 id=0 lun_id=12E20870 not configured: Either 48) host=0 ch=0 id=0 lun_id=13F20870 lun=0 vid=IBM pid=2105800 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0 in /etc/fstab, or mounted or is a raw device lun=1 vid=IBM pid=2105800 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0 X lun=2 vid=IBM pid=2105800 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0 in /etc/fstab, or mounted or is a raw device lun=3 vid=IBM pid=2105800 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0 X
The lun_id for /dev/sdb is 13E20870. Edit the /etc/vpath.conf file using the lun_id for vpatha (vpatha 13E20870). Remove all other entries from this file (they will be automatically added later by SDD). Contents of /etc/vpath.conf:
vpatha 13E20870
3. Extracting and mounting the initrd The following unzips and extracts the initrd image so that it can be modified to include the required elements to enable a vpath boot image.
cd /boot
Locate the initrd image used for booting. This will be the image that /etc/yaboot.conf is pointing to. Note that the file pointed to might be a symbolic link to another file. Copy the file to a temporary file name with a .gz extension. For example, if the file name is initrd-2.4.21-32.0.1.EL.img, the correct [initrd file] can be determined by the following method:
cd /boot
ls -1A /boot | grep initrd | grep $(uname -r)
cp [initrd file] to initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt
Create a temporary directory where the image will be manipulated, for example, /boot/mnt. This is referred to as the image temporary directory throughout the rest of this documentation.
mkdir /boot/mnt
For ext2 file system initrd's, you might be required to resize the initrd file system (recommended).
dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
e2fsck -f /boot/initrd.vp
e2fsck 1.32 (09-Nov-2002)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create <y>? y
Pass 4: Checking reference counts
Pass 5: Checking group summary information
initrd.vp: ***** FILE SYSTEM WAS MODIFIED *****
initrd.vp: 36/2000 files (0.0% non-contiguous), 2863/8000 blocks

[root@elm17a212 boot]# resize2fs -f /boot/initrd.vp
Note: Adding the ramdisk_size= option to the kernel entry in the boot loader file might be required after increasing the size of the initrd file. For resizing the initrd to 33554432, add ramdisk_size=34000 to the append section of the boot entry (on this system, in /etc/yaboot.conf). An example of this entry is provided later in this topic.
Mount the initrd file system.
mount -o loop -t ext2 initrd.vp /boot/mnt
4. Modifying the /boot/initrd Create the following directories in the image temporary directory.
cd /boot/mnt
mkdir mnt
mkdir -p lib/tls
mkdir -p lib64/tls
mkdir -p opt/IBMsdd/bin
chmod -R 640 opt/IBMsdd
Copy the following files to the following directories relative to the image temporary directory.
cp /opt/IBMsdd/sdd-mod.o-`uname -r` /boot/mnt/lib/
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
cp /bin/awk /boot/mnt/bin/
cp /bin/cat /boot/mnt/bin/
cp /bin/tar /boot/mnt/bin/
cp /bin/grep /boot/mnt/bin/
cp /bin/chmod /boot/mnt/bin/
cp /bin/chown /boot/mnt/bin/
cp /bin/mknod /boot/mnt/bin/
cp /bin/mount /boot/mnt/bin/
cp /bin/ls /boot/mnt/bin/
cp /bin/umount /boot/mnt/bin/
cp /bin/cp /boot/mnt/bin/
cp /bin/ash /boot/mnt/bin
cp /bin/rm /boot/mnt/bin
cp /bin/sh /boot/mnt/bin
cp /bin/ps /boot/mnt/bin
cp /bin/sed /boot/mnt/bin
cp /bin/date /boot/mnt/bin
cp /usr/bin/cut /boot/mnt/bin
Issue the following command from the lib directory. The linked module is the name of the module that was copied into the /boot/mnt/lib directory above.
ln -s sdd-mod.o-`uname -r` sdd-mod.o
For each of the above binary files (except sdd-mod.o), run the ldd command and verify that the listed library files exist in the image temporary directory. If they do not, copy the listed library files that do not exist to the corresponding lib and lib64 directories in the image temporary directory. 5. Copy required library files for 'cfgvpath'. Use the 'ldd' command to determine the library files and locations. Example:
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib
These files must be copied to the /boot/mnt/lib64/tls/ and /boot/mnt/lib64/ directories respectively. Copy this additional library file:
cp /lib/libnss_files.so.2 /boot/mnt/lib
7. Modify the /etc/fstab file to use vpath devices for the root and swap entries. Other devices that use vpaths will also need to be changed. For the initial install, it is recommended to work only with the root/boot/swap devices and comment out other sd and hd devices until the procedure is completed. Original:
LABEL=/1      /          ext3    defaults         1 1
none          /dev/pts   devpts  gid=5,mode=620   0 0
none          /proc      proc    defaults         0 0
none          /dev/shm   tmpfs   defaults         0 0
/dev/sdd3     swap       swap    defaults         0 0
/dev/sdb3     swap       swap    defaults         0 0
Modified:
/dev/vpatha2  /          ext3    defaults         1 1
none          /dev/pts   devpts  gid=5,mode=620   0 0
none          /proc      proc    defaults         0 0
none          /dev/shm   tmpfs   defaults         0 0
#/dev/sdd3    swap       swap    defaults         0 0
/dev/vpatha3  swap       swap    defaults         0 0
cp /etc/fstab /boot/mnt/etc
For some storage systems with Linux 2.4 kernels, an additional option must be appended to the line where the scsi_mod module is loaded. Change:
insmod /lib/scsi_mod.o
To:
insmod scsi_mod.o max_scsi_luns=256
To repackage all of the changes that have just been made to the initrd, issue the following commands:
cd /boot
umount /boot/mnt
gzip initrd.vp
mv initrd.vp.gz initrd.vp
The initrd-2.4.21-32.0.1.EL.img now has the repackaged initrd image with the SDD driver and the modified files that are required to boot from a vpath.
10. Modifying root device files and updating the boot partition.
   Modify /etc/yaboot.conf. Add a new entry in the file and modify the entry to point at the new initrd image created in the previous step. Also modify the root device in the new entry to point to the vpath chosen from the previous steps. Remember to include the partition if required. Also make sure to modify the entry name. Original /etc/yaboot.conf:
image=/boot/vmlinux-2.4.21-32.0.1.EL
	label=2.4.21-32.0.1.E
	read-only
	initrd=/boot/initrd-2.4.21-32.0.1.EL.img
	append="console=hvc0 root=/LABEL=/"
Modified /etc/yaboot.conf:
image=/boot/vmlinux-2.4.21-32.0.1.EL
	label=2.4.21-32.0.1.E
	read-only
	initrd=/boot/initrd-2.4.21-32.0.1.EL.img
	append="console=hvc0 root=/LABEL=/"

image=/boot/vmlinux-2.4.21-32.0.1.EL
	label=2.4.21-32.0.1.E_SDD
	read-only
	initrd=/boot/initrd.vp
	append="console=hvc0 root=/dev/vpatha3 ramdisk_size=34000"
11. Restart the system.
   a. Reboot the system.
   b. From the SMS menu, select the boot devices as you did before, if the boot device is not already set up as the first boot device.
   c. When the yaboot prompt is shown during boot, type in the given name for the new boot image.
   d. During the OS load, ensure that the IBMsdd module is loaded after the SAN disk devices are discovered.
   e. Ensure that there were no errors printed to the console during boot.
   f. If there were errors, reboot the system and, at the yaboot prompt, select the old image to boot from. When the system boots, review the preceding steps, make any corrections to errors, and then repeat these steps, starting with step 9 (repackaging the initrd).
12. Verify that the system has rebooted and that SDD is configured correctly.
Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:
v mount
v swapon -s
v lsvpcfg
v datapath query device
At this point, the installed boot device can be set as the default boot device for the system via the SMS menu. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
For systems that have /boot on a separate mount point, mount the /boot partition using the /dev/sd device.
4. Remove the previous SDD driver.
rpm -e IBMsdd
The /etc/vpath.conf file will be saved to vpath.conf.rpmsave. 5. Install the new SDD driver.
rpm -ivh IBMsdd-x.x.x.x-y.ppc64.rhel3.rpm
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
mount -o loop -t ext2 initrd.vp mnt
cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib/
6. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current sdd module. 7. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure:
   a. The installation target MUST be single-pathed before installing SLES 8.
   b. Have a copy of SLES 8 SP4 i386 either network-accessible or on CD-ROM.
   c. Be familiar with the SLES 8 installation. This includes understanding which packages will be installed.
   d. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems. (This procedure was performed on an ESS Model 800.)
   e. Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices.
   f. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after installation of SLES 8.
   g. Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution for a local storage device.
   h. Ensure that there will be network access to the system.
2. Configure QLogic Devices
   v For ease of installation and to avoid issues with internal SCSI or IDE controllers, it is recommended that all internal disk drive controllers be disabled. This procedure assumes that this has been done.
   v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure. While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.
3. Configure Boot/Root/SWAP devices
   The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices. The swap device should be at least the size of physical memory that is configured in the host. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device; however, this is not a requirement for the installation.
4. Installation Media
   The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.
5. Install
   v From the BIOS Menus, select the installation source to boot from. Verify that the QLogic XXXXXXX SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
   v Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, for example, /dev/sda.
   v Select the desired options until arriving at the Installation Settings. Here, modifications of the partitioning settings are required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
   v The details of installation and partitioning are not included here. See the installation procedures to determine which packages are needed for the type of system being installed.
6. Rebooting
   a. On reboot, modify the BIOS to boot from hard disk. The system should now boot to the newly installed OS.
   b. Verify that the system is booted from the correct disk and vpaths.
   c. At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
7. Upgrading the SDD driver
   At the end of this document are instructions on how to upgrade the SDD driver.
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query command to get the LUN ID of the root's physical device. (In this procedure, sda is the root device). The cfgvpath query command will produce output similar to the following: Note that some data from the following output has been modified for ease of reading.
cfgvpath query
/dev/sda (8,  0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD).
3. Modify the /etc/fstab and the /boot/grub/menu.lst.
   There is a one-to-one correlation between sd and vpath minor devices, such as sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.
   Example: Change from:
/dev/sda3   /          ext3    defaults         1 1
/dev/sda1   /boot      ext3    defaults         1 2
none        /dev/pts   devpts  gid=5,mode=620   0 0
none        /proc      proc    defaults         0 0
none        /dev/shm   tmpfs   defaults         0 0
/dev/sda2   swap       swap    defaults         0 0
To:
/dev/vpatha3   /          ext3    defaults         1 1
/dev/vpatha1   /boot      ext3    defaults         1 2
none           /dev/pts   devpts  gid=5,mode=620   0 0
none           /proc      proc    defaults         0 0
none           /dev/shm   tmpfs   defaults         0 0
/dev/vpatha2   swap       swap    defaults         0 0
Modify the /boot/grub/menu.lst file. Add an entry for the SDD boot using initrd.vp
title linux-smp
	kernel (hd0,0)/vmlinuz-2.4.21-295-smp root=/dev/sda3
	initrd (hd0,0)/initrd-2.4.21-295-smp

title linux-smp-SDD
	kernel (hd0,0)/vmlinuz-2.4.21-295-smp root=/dev/vpatha3 ramdisk_size=34000
	initrd (hd0,0)/initrd.vp
4. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
ls -1A /boot | grep initrd | grep $(uname -r)
cd /boot
cp [initrd file] to initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt
5. For ext2 file system initrds, you might need to resize the initrd file system. For Sles8u5, this step might not be required.
dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
losetup /dev/loop0 initrd.vp
e2fsck -f /dev/loop0
resize2fs -f /dev/loop0
losetup -d /dev/loop0
Note: Adding the ramdisk_size= option to the kernel entry in the boot loader file might be required after increasing the size of the initrd file. For resizing the initrd to 33554432 add the following to the /boot/grub/menu.lst file, ramdisk_size=34000 (see the previous step for modifying the /boot/grub/menu.lst). 6. Change directory to /boot and un-archive the initrd image to /boot/mnt. Mount the initrd file system.
mount -o loop -t ext2 initrd.vp /boot/mnt
To:
passwd: files
b. Change:
group: compat
To:
group: files
10. Copy cfgvpath to the initrd image. Copy /opt/IBMsdd/bin/cfgvpath to /boot/mnt/opt/IBMsdd/bin/ and change permissions of cfgvpath to 755.
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
11. Copy required library files for cfgvpath . Use the ldd command to determine the library files and locations. Example:
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib
These files must be copied to the /boot/mnt/lib/i686/ and /boot/mnt/lib/ directories, respectively.
12. Copy the correct sdd-mod.o file to the initrd file system. Use the uname -r command to determine the correct sdd-mod.o file and create a soft link. Example: The uname -r command will return something similar to 2.6.5-7.191-smp.
cp /opt/IBMsdd/sdd-mod.o-`uname -r` /boot/mnt/lib/
cd lib
ln -s sdd-mod.o-`uname -r` sdd-mod.o
cd ../
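13. Copy the binaries that the linuxrc additions below rely on into the initrd bin directory. The exact set is an assumption here; a minimal sketch, using the binaries (tar and chown) that the SLES 9 procedure later in this chapter copies into its initrd:

mkdir -p /boot/mnt/bin
cp /bin/tar /boot/mnt/bin/
cp /bin/chown /boot/mnt/bin/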
14. Copy the required library files for each binary in the previous step. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries so there might be duplications of copying. Also, copy the following library.
cp /lib/libnss_files.so.2 /boot/mnt/lib
15. Copy /dev/sd devices to the initrd /dev directory using the tar command.
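A minimal sketch of this copy, assuming the initrd is still mounted at /boot/mnt and using the same tar options that appear elsewhere in this procedure:

(tar cps /dev/sd*) | (cd /boot/mnt && tar xps)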
16. Modify the /boot/mnt/linuxrc file. Add the following lines just after the last kernel module is loaded. For 2.4 kernels, an additional option must be appended to the line where the scsi_mod module is loaded for storage systems such as the DS6000 and the DS8000. Change:
insmod /lib/scsi_mod.o
To:
insmod scsi_mod.o max_scsi_luns=256
Add the following lines to the linuxrc file after the last driver has been loaded.
echo "Mounting proc" mount -n -tproc none /proc echo "Loading SDD module" insmod /lib/sdd-mod.o echo "Running SDD configuration" /opt/IBMsdd/bin/cfgvpath
Ensure an updated copy of vpath.conf and the vpath device files are copied to the root file system during boot by using the following syntax to mount the root file system.
mount -o rw -t [fstype] [device] /mnt
Add the following lines just after the modules load entries. The values used for the [fstype] and [device] here are only examples. Use the correct values for the system that is being configured.
echo "Copying over device files" mount -o rw -t ext3 /dev/vpatha3 /sysroot (tar cps /dev/IBMsdd /dev/vpath*) | (cd /sysroot && tar xps) cp /etc/vpath.conf /sysroot/etc/ umount /sysroot
You must ensure that the correct major and minor numbers of the root vpath device are written to /proc/sys/kernel/real-root-dev. To do this, add the following lines to the linuxrc file.
echo "Setting correct root device" for name in `cat /proc/cmdline`; do #Look for "root=" echo $name | grep -q ^root if [ $? -eq 0 ]; then # chop off the "root=" dev_name=`expr "$name" : .*=\(.*\) ` echo "Found root = $dev_name" #chop off the "dev" dev_name=`expr "$dev_name" : /dev/\(.*\) ` #find the major/minor in /proc/partitions parts=`grep $dev_name /proc/partitions` dev_major=`echo $parts | cut -d -f1` dev_minor=`echo $parts | cut -d -f2` dev_num=`expr $dev_major \* 256 + $dev_minor` echo $dev_num > /proc/sys/kernel/real-root-dev continue fi done
Original linuxrc script in the initrd file system:

#! /bin/ash
export PATH=/sbin:/bin:/usr/bin
# check for SCSI parameters in /proc/cmdline
mount -n -tproc none /proc
for p in `cat /proc/cmdline` ; do
  case $p in
    scsi*|*_scsi_*|llun_blklst=*|max_report_luns=*)
      extra_scsi_params="$extra_scsi_params $p"
    ;;
  esac
done
umount -n /proc
echo "Loading kernel/drivers/scsi/scsi_mod.o $extra_scsi_params"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/scsi_mod.o $extra_scsi_params
echo "Loading kernel/drivers/scsi/sd_mod.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/sd_mod.o
echo "Loading kernel/fs/jbd/jbd.o"
insmod /lib/modules/2.4.21-295-smp/kernel/fs/jbd/jbd.o
echo "Loading kernel/fs/ext3/ext3.o"
insmod /lib/modules/2.4.21-295-smp/kernel/fs/ext3/ext3.o
echo "Loading kernel/drivers/scsi/qla2300.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/qla2300.o
echo "Loading kernel/drivers/scsi/qla2300_conf.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/qla2300_conf.o

Modified linuxrc script in the initrd file system with modifications:

#! /bin/ash
export PATH=/sbin:/bin:/usr/bin
# check for SCSI parameters in /proc/cmdline
mount -n -tproc none /proc
for p in `cat /proc/cmdline` ; do
  case $p in
    scsi*|*_scsi_*|llun_blklst=*|max_report_luns=*)
      extra_scsi_params="$extra_scsi_params $p"
    ;;
  esac
done
umount -n /proc
echo "Loading kernel/drivers/scsi/scsi_mod.o $extra_scsi_params max_scsi_luns=255"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/scsi_mod.o $extra_scsi_params max_scsi_luns=255
echo "Loading kernel/drivers/scsi/sd_mod.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/sd_mod.o
echo "Loading kernel/fs/jbd/jbd.o"
insmod /lib/modules/2.4.21-295-smp/kernel/fs/jbd/jbd.o
echo "Loading kernel/fs/ext3/ext3.o"
insmod /lib/modules/2.4.21-295-smp/kernel/fs/ext3/ext3.o
echo "Loading kernel/drivers/scsi/qla2300.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/qla2300.o
echo "Loading kernel/drivers/scsi/qla2300_conf.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/qla2300_conf.o
echo "Mounting proc" mount -n -tproc none /proc echo "Loading SDD module" insmod /lib/sdd-mod.o echo "Running SDD configuration" /opt/IBMsdd/bin/cfgvpath echo "Copying over device files" mount -o rw -t ext3 /dev/vpatha3 /sysroot (tar cps /dev/IBMsdd /dev/vpath*) | (cd /sysroot && tar xps) umount /sysroot echo "Setting correct root device" for name in `cat /proc/cmdline`; do #Look for "root=" echo $name | grep -q ^root if [ $? -eq 0 ]; then # chop off the "root=" dev_name=`expr "$name" : .*=\(.*\) ` echo "Found root = $dev_name" #chop off the "dev" dev_name=`expr "$dev_name" : /dev/\(.*\) ` #find the major/minor in /proc/partitions parts=`grep $dev_name /proc/partitions` dev_major=`echo $parts | cut -d -f1` dev_minor=`echo $parts | cut -d -f2` dev_num=`expr $dev_major \* 256 + $dev_minor` echo $dev_num > /proc/sys/kernel/real-root-dev continue fi done echo "Unmounting proc" umount /proc
18. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices.
mount
swapon -s
lsvpcfg
datapath query device
4. For systems that have /boot on a separate mount point, mount the /boot partition using the /dev/sd device.
5. Remove the previous SDD driver by running rpm -e IBMsdd. The /etc/vpath.conf file will be saved to vpath.conf.rpmsave.
6. Install the new SDD driver.
rpm -ivh IBMsdd-x.x.x.x-y.i686.sles8.rpm
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
mount -o loop -t ext2 initrd.vp mnt
cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib/
7. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current SDD module. 8. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
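One way to perform the checks described in steps 7 and 8 is sketched below; the module file name depends on the installed kernel, so treat the commands as an example rather than required syntax. Each library that ldd lists should be present under the corresponding /boot/mnt/lib directory.

ls -l /boot/mnt/lib/sdd-mod.ko*
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib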
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure: a. The installation target MUST be single-pathed before installing SLES 9. b. Have a copy of SLES 9 SP2 i386 either network-accessible or on CD-ROM. c. Be familiar with the SLES 9 installation. This includes understanding which packages will be installed. d. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems. (This procedure was performed on an ESS Model 800). e. Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices. f. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after installation of SLES 9. g. Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures that are used to boot a Linux distribution for a local storage device. h. Ensure that there will be network access to the system.
2. Configure QLogic Devices
v For ease of installation and to avoid issues with internal SCSI or IDE controllers, it is recommended that all internal disk drive controllers be disabled. This procedure assumes that this has been done.
v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure. While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.
3. Configure Boot/Root/SWAP devices
The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices. The swap device should be at least the size of the physical memory that is configured in the host. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device; however, this is not a requirement for the installation.
4. Installation Media
The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.
5. Install
v From the QLogic BIOS menus, select the installation source to boot from. Verify that the QLogic XXXXXXX SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
v For Emulex fibre HBAs, use Emulex's utility software for the Emulex model to enable the Emulex HBA BIOS (to use this utility, the system must be booted to DOS). After the BIOS is enabled, go into the Emulex BIOS during POST boot, enable the boot BIOS for each adapter, and select the boot LUN from the list.
v Because of the way Linux discovers SAN devices, if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, for example, /dev/sda.
v Select the desired options until arriving at the Installation Settings. Here, modifications of the partitioning settings are required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
v The details of installation and partitioning are not included here. See the installation procedures to determine which packages are needed for the type of system being installed.
6. Rebooting
a. On reboot, modify the BIOS to boot from hard disk; the system should now boot to the newly installed OS.
b. Verify that the system is booted from the correct disk and vpaths.
c. At this point the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
7. Upgrading the SDD driver At the end of this document are instructions on how to upgrade the SDD driver.
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device). The cfgvpath query device command will produce output similar to the following: Note that some data from the following output has been modified for ease of reading.
cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD).
3. Modify the /etc/fstab, ensuring that root/boot/swap is mounted on vpath devices. There is a one-to-one correlation between sd and vpath minor devices, for example, sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.
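Assuming the same layout used in the SLES 8 procedure earlier in this chapter (root on sda3, /boot on sda1, and swap on sda2), the /etc/fstab entries would change from:

/dev/sda3   /       ext3   defaults   1 1
/dev/sda1   /boot   ext3   defaults   1 2
/dev/sda2   swap    swap   defaults   0 0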
To:
/dev/vpatha3   /       ext3   defaults   1 1
/dev/vpatha1   /boot   ext3   defaults   1 2
/dev/vpatha2   swap    swap   defaults   0 0
Modify the /boot/grub/menu.lst file. Add entries for the SDD boot using initrd.vp
title Linux-sdd
    kernel (hd0,0)/vmlinuz root=/dev/vpatha3 selinux=0 splash=silent barrier=off resume=/dev/sda2 elevator=cfq showopts ramdisk_size=34000
    initrd (hd0,0)/initrd.vp
4. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
ls -1A /boot | grep initrd | grep $(uname -r)
cd /boot
cp [initrd file] initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt
For an ext2 file system initrd, you might need to resize the initrd file system.
dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
losetup /dev/loop0 initrd.vp
e2fsck -f /dev/loop0
resize2fs -f /dev/loop0
losetup -d /dev/loop0
Adding the ramdisk_size= option to the kernel entry in the boot loader file might be required after increasing the size of the initrd file. For resizing the initrd to 33554432 bytes, add ramdisk_size=34000 to the kernel entry in the /boot/grub/menu.lst file, as mentioned previously.
5. Change directory to /boot and un-archive the initrd image to /boot/mnt. Mount the initrd file system.
mount -o loop -t ext2 initrd.vp /boot/mnt
8. Create an fstab file in the initrd etc directory with the following entry (this might already exist).
sysfs /sys sysfs defaults 0 0
Modify the /boot/mnt/etc/nsswitch.conf file.
a. Change:

passwd: compat

To:
passwd: files
b. Change:
group: compat
To:
group: files
11. Copy required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib
These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories, respectively.
12. Copy the correct sdd-mod.o file to the initrd file system. Use the uname -r command to determine the correct sdd-mod.o file and create a soft link. Example: The uname -r command will return something similar to 2.6.5-7.201-smp.
cp /opt/IBMsdd/sdd-mod.ko-2.6.5-7.201-smp /boot/mnt/lib/
cd lib
ln -s sdd-mod.ko-2.6.5-7.201-smp sdd-mod.ko
cd ../
Note: mount and umount might already exist. If they do exist, do not copy them to the initrd mount directory.
cp /bin/tar /boot/mnt/bin/
cp /bin/chown /boot/mnt/bin/
14. Copy the required library files for each binary. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries, so there might be duplications of copying. Example:
ldd /bin/mknod | awk '{print $(NF-1)}' | grep lib
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2
The above files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. Also, copy the following library file to /boot/mnt/lib/.
cp /lib/libnss_files.so.2 /boot/mnt/lib
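Rather than copying each library by hand, a small loop can automate the copies; this is a sketch, assuming tar and chown are the binaries that were copied into /boot/mnt/bin in the earlier step, and it mirrors the script used in the System p procedure later in this chapter:

for bin in /bin/tar /bin/chown; do
  ldd $bin | awk '{print $(NF-1)}' | grep lib | while read lib; do
    cp $lib /boot/mnt$lib
  done
done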
15. Modify the /boot/mnt/linuxrc file. Add the following lines just after the last kernel module is loaded.
echo "Loading SDD module" insmod /lib/sdd-mod.ko echo "Creating vpath devices" /opt/IBMsdd/bin/cfgvpath
Ensure that an updated copy of vpath.conf is copied to the root file system by using the following syntax to mount the root file system.
/bin/mount -o rw -t [fstype] [device] /mnt
Add the following lines just after the cfgvpath command. The values used for the [fstype] and [device] here are only examples. Use the correct values for the system that is being configured.
/bin/mount -o rw -t ext3 /dev/vpatha3 /mnt
/bin/cp /etc/vpath.conf /mnt/etc/
cd /mnt
/bin/tar cps /dev/IBMsdd /dev/vpath* | /bin/tar xps
cd /
/bin/umount /mnt
17. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices.
4. For systems that have /boot on a separate mount point, mount /boot partition using /dev/sd device. 5. Remove the previous SDD driver.
rpm -e IBMsdd
The /etc/vpath.conf file will be saved to vpath.conf.rpmsave. 6. Install the new SDD driver.
rpm -ivh IBMsdd-x.x.x.x-y.i686.sles9.rpm
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
mount -o loop -t ext2 initrd.vp mnt
cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib/
7. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current SDD module. 8. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure:
v Have a copy of SLES 9 SP2 either network-accessible or on CD-ROM. v Be familiar with the SLES installation. This includes understanding which packages will be installed and how to select required options through the installation. v Be familiar with how to connect to and operate IBM BladeCenter JS20 or IBM System p LPAR. v Be familiar with how to set up an LPAR with processors, memory, and SAN HBAs. For network installs a network port is required, and for CD-ROM installs, a CD-ROM is required. v Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the configured system can access LUNs from the storage unit. v Be familiar with creating LUNs on the storage unit so that the LPAR can access the storage devices. Although SDD functions correctly in single-path environments, there should be redundant physical paths to the devices from the host (after installation). v Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures that are used to boot a Linux distribution for a local storage device. 2. Configure root/boot/swap devices v The physical boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing all packages from the installation media to the boot devices. It is also recommended that the swap device be at least the size of physical memory that is configured in the system. For simplicity, these instructions assume that the root/boot/swap devices are all located on the same device; however this is not a requirement for the installation. Also, it is not required that a /boot mount exists. In some cases, there will not be a /boot mount but rather the boot files will reside in the directory /boot on the root / mount. 3. Installation Media v The installation media; that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. Any of the installation sources listed can be used. 4. Upgrading the SDD driver. At the end of this document are instructions on how to upgrade the SDD driver. Use this procedure to install SLES 9: 1. From the SMS menu, select the installation source and boot from the media. 2. Verify that the Emulex lpfcdd SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully. Note: Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Take note which device will be used for the installation before proceeding, that is, /dev/sdh. Also note which of the Emulex devices is used to discover this device as it will be needed in a later step. 3. Select the desired options until arriving at the Installation Settings step of the yast install.
Here, modification of the partitioning settings is required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target. a. Select partitioning, and go to the custom partition setup. b. Select the device and Custom partitioning for experts. c. Make sure that there is a PReP boot partition on the root/boot device and that it is the first partition. d. Continue to partition devices as required to complete this configuration. The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed. 4. Finish the installation. An error occurs while attempting to create the yaboot boot loader stating that the device type of fcp is unknown. Select OK and select No when asked to retry. 5. Rebooting a. On reboot after initial install, enter the SMS menu. b. Boot from the installation source media. c. If you are installing from CD media, continue to a point were you can abort the installation and return to the command line menu system. d. If you are booting from the network, you should already be presented with this menu. e. Select to boot an installed system. f. Select the root device that was just installed in the previous steps. Yast will again come up but from the root partition. g. Finish the installation. 6. Upgrading to the latest service pack If there is a service pack available, at the time of this writing there is currently SP2 available, upgrade the installed system to the latest service pack using yast. Once this is complete, view /etc/lilo.conf and verify that the data in this file looks correct for the boot and root partitions. Once this is verified run lilo. This permits the installation of the boot loader to the PReP boot partition of the drive where the installation error occurred from above. 7. Rebooting. a. Reboot again and enter the SMS menu. This time the boot device which has been setup over the previous steps is now ready to be booted. b. Select to boot from a Hard Drive/SAN and select the Emulex device adapter associated with the SAN disk device on which the installation was completed. c. The installation boot device should now be listed in the bootable devices discovered in the SAN on the selected Emulex HBA. d. Select the appropriate device and boot.
Note: The following instructions are examples and the values used herein might be different on your systems. In some cases, there will not be a /boot mount but rather the boot files will reside in the directory /boot on the root / mounted file system. It is recommended but not required that vpatha be used as the vpath boot device. 1. Install the IBM SDD driver. Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See Installing SDD on page 217 for more information. 2. Extracting the initrd. The following will unzip and extract the initrd image so that it can be modified to include the required elements to enable a vpath boot image.
cd /boot
Locate the initrd image used for booting. This will be the image that /etc/yaboot.conf is pointing to. Note that the file pointed to might be a symbolic link to another file. Copy the file to a temporary file name with a .gz extension; that is, if the file name is initrd-2.6.5-7.191-pseries64, then:
cp initrd-2.6.5-7.191-pseries64 initrd.vp.gz
Create a temporary directory where the image will be manipulated, for example, /boot/mnt. This is referred to as the image temporary directory throughout the rest of this documentation. Extract the image to that directory using the following commands:
mkdir -p /boot/mnt
cd /boot/mnt
cpio -iv < ../initrd.vp
3. Modifying the /boot/initrd. Create the following directories in the image temporary directory. For SLES 9 on System p, there might already be a mnt directory in the temporary initrd image. If there is not, create one.
mkdir mnt
mkdir dev
mkdir -p lib/tls
mkdir -p lib64/tls
mkdir -p opt/IBMsdd/bin
chmod -R 640 opt/IBMsdd
Copy the following files to the following directories relative to the image temporary directory.
cp /opt/IBMsdd/sdd-mod.ko-2.6.5-7.191-pseries64 lib/
cp /opt/IBMsdd/bin/cfgvpath opt/IBMsdd/bin/
cp /bin/cat bin/
cp /bin/cp bin/
cp /bin/chown bin/
For each of the above binary files (except sdd-mod.o), run the ldd command and verify that the listed library files exist in the image temporary directory. If
they do not, copy the listed library files that do not exist to the corresponding lib and lib64 directories in the image temporary directory. An example script to gather the correct libraries and copy them to the correct directories:
for libs in /opt/IBMsdd/bin/cfgvpath /bin/cat /bin/cp /bin/chown; do
  ldd $libs | awk '{print $(NF-1)}' | grep lib | while read line; do
    cp $line /boot/mnt$line
  done
done
Also copy the additional library file /lib/libnss_files.so.2 into the lib directory of the image temporary directory, as in the other procedures in this chapter.
4. Gather SDD data in preparation for configuring /etc/fstab, /etc/yaboot.conf, and /boot/initrd.
sdd start
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device). The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.
cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha (vpatha 12020870). Remove all other entries from this file (they will be automatically added later by SDD).
5. Modify the /etc/fstab to use vpath devices for the root file system and swap. Other devices that use vpaths will also need to be changed. For the initial install, work only with the root/boot/swap devices and comment out other sd and hd devices until completed. Original:
/dev/sdd4   /               ext3    acl,user_xattr    1 1
/dev/hda2   /data1          auto    noauto,user       0 0
/dev/hda4   /data2          auto    noauto,user       0 0
/dev/hda3   swap            swap    pri=42            0 0
/dev/sdd3   swap            swap    pri=42            0 0
devpts      /dev/pts        devpts  mode=0620,gid=5   0 0
proc        /proc           proc    defaults          0 0
usbfs       /proc/bus/usb   usbfs   noauto            0 0
sysfs       /sys            sysfs   noauto            0 0
Modified:
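A sketch of the modified entries, assuming that the root and swap partitions on sdd map to vpatha (as recommended above) and that the other sd and hd entries are commented out until multipathing is configured; the exact vpath names on a given system might differ:

/dev/vpatha4   /               ext3    acl,user_xattr    1 1
/dev/vpatha3   swap            swap    pri=42            0 0
#/dev/hda2     /data1          auto    noauto,user       0 0
#/dev/hda4     /data2          auto    noauto,user       0 0
#/dev/hda3     swap            swap    pri=42            0 0
devpts         /dev/pts        devpts  mode=0620,gid=5   0 0
proc           /proc           proc    defaults          0 0
usbfs          /proc/bus/usb   usbfs   noauto            0 0
sysfs          /sys            sysfs   noauto            0 0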
6. Stop SDD and copy /etc files to the image temporary directories.
sdd stop
cp /etc/vpath.conf /boot/mnt/etc
cp /etc/passwd /boot/mnt/etc
cp /etc/group /boot/mnt/etc
7. Edit the init file in the image temporary directory (/boot/mnt/linuxrc). Go to the line that has the creating device nodes message, right after the init script creates the device nodes using /bin/udevstart, and add the following lines after the echo -n "." command in that script block.
echo "Creating vpath devices" /opt/IBMsdd/bin/cfgvpath echo "Mounting and copying some required SDD files" /bin/mount -o rw -t <PARTITION TYPE> /dev/vpathXXX /mnt /bin/cp /etc/vpath.conf /mnt/etc /bin/umount /mnt insmod /lib/scsi_mod.o
where /dev/vpathXXX is the root drive/partition. 8. Edit the /boot/mnt/load_modules.sh file. Edit the load_modules.sh file in the image temporary directory and add the following lines to the end of the script:
echo "Loading SDD Module" insmod /lib/sdd-mod.ko
Issue the following command from the lib directory. The linked module is the name of the module that was copied into the lib directory above.
cd /boot/mnt/lib
ln -s sdd-mod.ko-2.6.5-7.191-pseries64 sdd-mod.ko
9. Repackaging the initrd. To repackage all of the changes that have just been made to the initrd, issue the following commands:
cd /boot/mnt
find . | cpio -H newc -vo > /boot/initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
The initrd.vp file now contains the repackaged initrd image, with the SDD driver and the modified files that are required to boot from a vpath.
10. Modifying root device files.
Additional files need to be modified on the root file system before the modifications are complete. Modify /etc/yaboot.conf. Add a new entry in the file and modify the entry to point at the new initrd image created in the above step. Also modify the root device in the new entry to point to the vpath chosen from the previous steps. Remember to include the partition if required. Also make sure to modify the entry name. Original /etc/yaboot.conf:
# header section
partition = 4
timeout = 100
default = linux
# image section
image = /boot/vmlinux
label = linux
append = "root=/dev/sdd4 selinux=0 elevator=cfq"
initrd = /boot/initrd
Modified /etc/yaboot.conf:
# header section
partition = 4
timeout = 100
default = linux
# image section
image = /boot/vmlinux
label = linux
append = "root=/dev/sdd4 selinux=0 elevator=cfq"
initrd = /boot/initrd

image = /boot/vmlinux
label = linux-sdd
append = "root=/dev/vpatha3 selinux=0 elevator=cfq"
initrd = /boot/initrd.vp
11. Restart the system.
a. Reboot the system.
b. From the SMS menu, select the boot devices as before, if the boot device is not already set up as the first boot device.
c. When the yaboot prompt is shown during boot, enter the name for the new boot image.
d. During the OS load, ensure that the IBMsdd module is loaded after the SAN disk devices are discovered.
e. Ensure that no errors were printed to the console during boot.
f. If there were errors, reboot the system and, at the yaboot prompt, select the old image to boot from. When the system boots, review the previous steps and correct any errors. Then repeat these steps, starting with step 9 (repackaging the initrd). If not all of the vpath devices that cfgvpath discovers were created during the modification steps above, cfgvpath might have to time out while waiting for these devices to be created.
Once the system comes up, log in and verify that the root mount device is the device specified during the configuration, using df. Also validate that any other configured partitions, as well as the swap devices (using swapon -s), are now mounted on vpath devices.
12. Verify that the system has rebooted and that SDD is configured correctly.
Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:
v mount
v swapon -s
v lsvpcfg
v datapath query device
At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
For systems that have /boot on a separate mount point, mount /boot partition using /dev/sd device. 4. Remove the previous SDD driver.
rpm -e IBMsdd
The /etc/vpath.conf file will be saved to vpath.conf.rpmsave. 5. Install the new SDD driver.
rpm -ivh IBMsdd-x.x.x.x-y.ppc64.sles9.rpm
mkdir -p /boot/mnt
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
cd /boot/mnt
cpio -iv < ../initrd.vp
cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib
6. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current SDD module. 7. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
SAN Boot Instructions for SLES 9 with IBM SDD (x86) and LVM 2
The following procedure is used to install SLES 9 x86 on an xSeries host with fibre-channel connect storage and configure SDD with LVM. This procedure assumes that no installation is present to work from and when completed, the boot and swap devices will be running on IBM SDD vpath devices and will be under LVM control.
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure:
a. The installation target MUST be single-pathed before installing SLES 9. It is also recommended to limit the installation to a single LUN if possible to ease the transition from single-path to IBM SDD vpath; however, this is not required.
b. The QLogic BIOS should be enabled for discovery of SAN devices, and the device that contains the kernel and initrd images (the /boot mount point) should be selected as the boot device in the QLogic BIOS. Follow the IBM Host Systems Attachment Guide recommendations when setting up the QLogic BIOS for SAN boot.
c. Have a copy of SLES 9 SP2 i386 either network-accessible or on CD-ROM.
d. Be familiar with the SLES 9 installation. This includes understanding which packages will be installed.
e. Be familiar with setting up root, boot, swap and any other initial mount points that will be used for the setup of the initial system under LVM control.
f. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems.
g. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after completing this procedure.
h. Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution for a local storage device.
i. Ensure that there will be network access to the system.
2. Configure QLogic Devices.
v For ease of installation and to avoid issues with internal SCSI or IDE controllers, it is recommended that all internal disk drive controllers be disabled. This procedure assumes that this has been done.
v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure. While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.
3. Configure Boot/Root/SWAP devices.
The root device that will be used for installation and booting should be at least 4 GB in size. If multiple partitions are being used, that is, /usr and /var, the total size of all mount points should be at least this size. This is the minimum size
for installing a base package set from the installation media to the boot devices. More space might be required depending on the package selection. The swap device should be at least the size of the physical memory that is configured in the host. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device; however, this is not a requirement for the installation.
v The boot (/boot) device must NOT be under LVM control.
v The root (/) device and other optional mount points (/usr, /var, /opt) can be under LVM control. If they are not, at a minimum they should be mounted to an IBM SDD vpath device.
v SWAP can also be under LVM control, but this is not a requirement; it should at least use a vpath device.
4. Use the installation media.
The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.
5. Installing the system.
v From the BIOS Menus, select the installation source to boot from. Verify that the QLogic qla2300 SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
v Because of the way Linux discovers SAN devices, if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, for example, /dev/sda.
v Select the desired options until arriving at the Installation Settings. Here, modifications of the partitioning settings are required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
v The details of installation, partitioning, LVM setup, package selection, boot options, and so on, are not documented here. See the installation procedures to determine which packages are needed for the type of system that you are installing.
6. Restarting the system.
a. On reboot, modify the BIOS to boot from hard disk; the system should now boot to the newly installed OS.
b. Verify that the system is booted from the correct disk and vpaths.
c. At this point the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
v All values and devices in the following procedure might not be the same on the system where this procedure is being conducted. It is, however, recommended (but not required) that you use vpatha as the physical device for the root volume group.
v Perform this procedure in a single-path environment. Once completed and booting with SDD and LVM, configure the SAN for multipath.
v All commands in this procedure begin with a # sign and might be followed by the output of that command, such as the command pvdisplay.
v Because /boot will not be under LVM control, it might be safer to work from within /boot.
v In this procedure, you will work with a copy of the current initrd named initrd.vp.
v The volume groups for root and swap in the example are as follows:

/dev/rootVolGroup/
/dev/rootVolGroup/rootVol
/dev/rootVolGroup/swapVol
/dev/rootVolGroup/rootVol -> /dev/mapper/rootVolGroup-rootVol
/dev/rootVolGroup/swapVol -> /dev/mapper/rootVolGroup-swapVol
Physical device is sda2 (vpath device vpatha2)
v Before starting SDD, comment out any sd devices from /etc/fstab other than /boot. This will ensure that all devices are written to the /etc/vpath.conf file. These devices might later be changed to vpath devices if the intent is to have them multipathed.
v The /etc/fstab will also need to be modified to point /boot from /dev/sd[x] or LABEL=[some_label_name_here] to /dev/vpath[x].
v Modify the /boot/grub/menu.lst file to add an entry for the SDD initrd.
v Modify /etc/lvm/lvm.conf to recognize vpath devices and ignore sd devices.
v It is always a good idea to make copies of files that are going to be manually modified, such as /etc/fstab, /etc/vpath.conf, /etc/lvm/lvm.conf, and /boot/grub/menu.lst.
1. Install the IBM SDD driver. Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See Installing SDD on page 217 for more information.
2. Use pvdisplay to show the physical volume(s) currently configured for use in LVM. These volumes will be converted from a single-path sd drive to the IBM SDD vpath device. The following is an example of the output from pvdisplay.
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rootVolGroup
  PV Size               9.09 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              291
  Free PE               1
  Allocated PE          290
  PV UUID               SSm5g6-UoWj-evHE-kBj1-3QB4-EVi9-v88xiI
3. Modify the /etc/fstab file, ensuring that:
a. LABEL= is not being used
b. /boot is mounted on a vpath device.
There is a one-to-one correlation between sd and vpath minor devices, such as sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot. Example: Change from:
/dev/rootVolGroup/rootVol   /       ext3   defaults   1 1
LABEL=/boot                 /boot   ext3   defaults   1 2
/dev/rootVolGroup/swapVol   swap    swap   defaults   0 0
To:
/dev/rootVolGroup/rootVol   /       ext3   defaults   1 1
/dev/vpatha1                /boot   ext3   defaults   1 2
/dev/rootVolGroup/swapVol   swap    swap   defaults   0 0
4. Modify the /boot/grub/menu.lst file. Add an entry before the first title entry for the SDD/LVM boot using initrd.vp. Verify which is the default boot image. The default line should point to the new entry. Make sure the root and resume are identical to the current Linux installation.
...
title Linux w/LVM w/SDD
    kernel (hd0,0)/vmlinuz root=/dev/system/lv00 resume=/dev/system/swap selinux=0 splash=silent barrier=off elevator=cfq
    initrd (hd0,0)/initrd.vp
...
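If the new stanza is placed before the first existing title entry, the default line at the top of menu.lst selects it; a sketch, assuming GRUB's zero-based entry numbering (verify the index against the order of the title entries in your file):

default 0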
5. Modify /etc/lvm/lvm.conf. This procedure will modify LVM to only discover vpath style devices. Comment out the default filter line. Example:
filter = [ "a/.*/" ]
6. Modify the boot scripts. To support the addition of vpath devices during boot as well as possible changes to the device-mapper, you must add and modify the following boot scripts.
# cd /etc/init.d/boot.d
# ln -s ../boot.udev S04boot.udev
# vi S06boot.device-mapper
7. Gather SDD data in preparation for configuring /etc/fstab, menu.lst, and /boot/initrd by running sdd start.
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device). The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.
# cfgvpath query
/dev/sda (8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
The lun_id for /dev/sda is 12020870. This is the sd device that you will map to vpatha. Edit the /etc/vpath.conf file using the lun_id for vpatha and remove all other entries from this file. (SDD will automatically add them later.)
vpatha 12020870
8. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
# ls -1A /boot | grep initrd | grep $(uname -r)
cd /boot
cp [initrd file] initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt
For an ext2 file system initrd, you might need to resize the initrd file system.
9. Resize and mount the initrd image. For x86-based systems, the initrd is an ext2 file system. Because of the need to add files to the initrd image, you should increase the size of the image before continuing. After issuing the e2fsck -f initrd.vp command, you are prompted to create a /lost+found directory. Enter y to create this directory.
# dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
# e2fsck -f initrd.vp
e2fsck 1.36 (05-Feb-2005)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create<y>? y
# resize2fs -f initrd.vp
# mount -o loop -t ext2 initrd.vp /boot/mnt
Note: For the remainder of this procedure, work from /boot/mnt. 10. Make additional directories in /boot/mnt if they do not exist.
# mkdir /boot/mnt/mnt
# mkdir -p /boot/mnt/opt/IBMsdd/bin
# chmod -R 640 /boot/mnt/opt/IBMsdd
# mkdir -p /boot/mnt/lib/tls
12. Modify the /boot/mnt/etc/fstab file. Remove all lines that begin with /dev/*.
13. Modify the /boot/mnt/etc/nsswitch.conf file.
a. Change:
passwd: compat
To:
passwd: files
b. Change:
group: compat
To:
group: files
15. Copy the required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'
These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. 16. Copy the correct sdd-mod.o file to the initrd file system. Use the uname -r command to determine the correct sdd-mod.o file and create a soft link. Example: The uname -r command will return something similar to 2.6.5-7.191-smp.
# cp /opt/IBMsdd/sdd-mod.ko-<uname -r> /boot/mnt/lib/sdd-mod.ko
17. Verify that the following files exist in the initrd image. If they do not exist, copy the following binaries:
# cp /bin/tar /boot/mnt/bin/
# cp /bin/awk /boot/mnt/bin/
# cp /bin/chown /boot/mnt/bin/
# cp /bin/grep /boot/mnt/bin/
# cp /bin/mknod /boot/mnt/bin/
# cp /bin/cp /boot/mnt/bin/
18. Copy the required library files for each binary that you copied over in the previous step. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries, so there might be duplications of copying. Verify that the libraries do not already exist in /boot/mnt/lib; if they already exist, there is no need to copy over a new version. Example:
# ldd /bin/mknod | awk '{print $(NF-1)}' | grep lib
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2
The above files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. Also, copy the following library file to /boot/mnt/lib/.
cp /lib/libnss_files.so.2 /boot/mnt/lib
19. Modify the /boot/mnt/linuxrc file. Add the following lines just before the statement, echo Loading kernel/drivers/md/dm-snapshot.ko.
echo "Loading SDD module" insmod /lib/sdd-mod.ko echo "Creating vpath devices" /opt/IBMsdd/bin/cfgvpath
Ensure that an updated copy of vpath.conf is copied to the root file system by using the following syntax to mount the root file system.
/bin/mount -o rw -t [fstype] [device] /mnt
Add the following lines just after [ vgchange <...> ] . The values used for the [fstype] and [device] here are only examples. Use the correct values for the system that is being configured.
/bin/mount -o rw -t ext3 /dev/vpatha3 /mnt
/bin/cp /etc/vpath.conf /mnt/etc/
cd /mnt
21. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices.
mount
swapon -s
pvdisplay
lsvpcfg
datapath query device
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure: a. The installation target MUST be single-pathed before installing RHEL 4. b. Have a copy of RHEL 4 U1 or U2 i386 either network-accessible or on CD-ROM. c. Be familiar with the RHEL 4 installation. This includes understanding which packages will be installed. d. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems. (This procedure was performed on an ESS Model 800). e. Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices. f. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after installation of RHEL 4. g. Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures that are used to boot a Linux distribution for a local storage device. h. Ensure that there will be network access to the system. 2. Configure QLogic Devices
Note: For ease of installation and to avoid issues with internal SCSI or IDE controllers, all internal disk drive controllers should be disabled. This procedure assumes that this has been done. v Verify that the QLogic SAN HBA devices that are configured for the host have been setup to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure. While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing. 3. Configure Boot/Root/SWAP devices. v The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices. v It is also recommended that the swap device be at least the size of physical memory that is configured in the host. For simplicity these instructions assume that the boot, root, and swap devices are all located on the same device. However, this is not a requirement for the installation. 4. Installation Media The installation media; that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed. 5. Install v Verify that the QLogic qla2030 SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully. v For Emulex fibre HBAs, use Emulex utility software for the Emulex model to enable the Emulex HBA BIOS (to use this utility, the system must be booted to DOS). After the BIOS is enabled go into the Emulex BIOS during POST boot and enable the boot BIOS for each adapter and select the boot LUN from the list. v Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, that is, /dev/sda. v Select the desired options until arriving at the Installation Settings. Here, modifications of the partitioning settings are required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target. v The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed. 6. Rebooting a. On reboot, modify the BIOS to boot from hard disk; the system should now boot to the newly installed OS. b. At this point the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
command. Also verify that the swap, using swapon -s and other configured partitions are correctly mounted. This completes the single-path boot from SAN. The following list of suggestions should be noted before beginning this procedure: Notes: 1. The following instructions are examples and the values used herein might be different on your system. In some cases, there will not be a /boot mount but rather the boot files will reside in the directory /boot on the root / mounted file system. It is recommended, but not required, that vpatha be used as the vpath boot device. 2. All values and devices in the following procedure might not be the same on the system where this procedures is being conducted. It is, however, recommended (but not required) to use vpatha as the physical device for the root volume group. 3. Perform this procedure in a single-path environment. Once completed and booting with SDD and LVM, configure the SAN for multipath. 4. All commands in this procedure begin with a # sign and might be followed by the output of that command. 5. In this procedure, you will work with a copy of the current initrd named initrd.vp. 6. Before you start SDD, comment out any sd devices from /etc/fstab other than /boot. This ensures that all devices are written to the /etc/vpath.conf file. These devices might later be changed to vpath devices if the intent is to have them multipathed. 7. The /etc/fstab will also need to be modified to point /boot from /dev/sd[x] or LABEL=[some_label_name_here] to /dev/vpath[x]. 8. Modify the /boot/grub/menu.lst file to add an entry for the SDD initrd. 9. It is always a good idea to make copies of files that are going to be manually modified such as /etc/fstab, /etc/vpath.conf /etc/lvm/lvm.conf and /boot/grub/menu.lst. To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps: 1. Install the IBM SDD driver. Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See Installing SDD on page 217 for more information. 2. Modify the /etc/fstab file, ensuring that: a. LABEL= is not being used b. /boot is mounted on a vpath device Because Red Hat writes labels to the disk and uses labels in the /etc/fstab the boot (/boot) device might be specified as a label, that is, LABEL=/boot. This might, however, be a different label other than LABEL=/boot. Check for line in the /etc/fstab where /boot is mounted and change it to the correct vpath device. Also ensure that any other device specified with the LABEL= feature is changed to a /dev/sd or /dev/vpath device. LABEL= in a multi-pathed environment confuses Red Hat. There is a one-to-one correlation between sd and vpath minor devices, such as, sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example, sdb1 could be vpathd1. Because /boot was installed on
/dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot. 3. Gather SDD data in preparation for configuring /etc/fstab, menu.lst and /boot/initrd.
sdd start
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device). The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.
cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD).
4. Modify the /boot/grub/menu.lst file. Add an entry for the SDD boot using initrd.vp.
title Red Hat Enterprise Linux AS (2.6.9-11.ELsmp) w/SDD
    root (hd0,0)
    kernel /vmlinuz-2.6.9-11.ELsmp ro root=/dev/vpatha3
    initrd /initrd.vp
5. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
ls -1A /boot | grep initrd | grep $(uname -r)

initrd-2.6.9-11.ELsmp.img might be the result.

cd /boot
cp [initrd file] initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt
10. Copy required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'
These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. 11. Copy the correct sdd-mod to the initrd file system. Use the uname -r command to determine the correct sdd-mod and create a soft link. Example: The command will return something similar to 2.6.9-11.ELsmp
cp /opt/IBMsdd/sdd-mod.ko-2.6.9-11.ELsmp /boot/mnt/lib/sdd-mod.ko
13. Copy required library files for each binary copied to the /boot/mnt directory in the previous step. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries, so there might be duplications of copying. Example:
ldd /bin/mknod | awk '{print $(NF-1)}' | grep lib
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2
The above files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. Also, copy the following library files to /boot/mnt/lib/.
cp /lib/libproc-3.2.3.so /boot/mnt/lib/
cp /lib/libtermcap.so.2 /boot/mnt/lib/
cp /lib/libnss_files.so.2 /boot/mnt/lib/
14. Modify the /boot/mnt/init file. Add the following lines after the modules load and just before the /sbin/udevstart. Note that /sbin/udevstart can exist multiple times in the initrd. Make sure these lines are added before the correct /sbin/udevstart entry which is located after the kernel modules load.
echo "Loading SDD module" insmod /lib/sdd-mod.ko echo "Creating vpath devices" /opt/IBMsdd/bin/cfgvpath
Ensure that an updated copy of vpath.conf is copied to the root file system at boot time by using the following syntax to mount the root file system.
/bin/mount -o rw -t [fstype] [device] /mnt
Add the following lines to the init file just after the previously added entries. Values used for the [fstype] and [device] here are only examples. Use the correct values for the system that is being configured.
/bin/mount -o rw -t ext3 /dev/vpatha3 /mnt
/bin/cp /etc/vpath.conf /mnt/etc/
/bin/umount /mnt
15. Use cpio to archive the /boot/mnt directory and gzip it in preparation for rebooting.
find . | cpio -H newc -vo > ../initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
rm -rf mnt
cd /
shutdown -r now
16. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices.
mount
swapon -s
lsvpcfg
datapath query device
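A quick hedged check (device names are the examples used in this procedure) is to confirm that the root file system and swap really sit on vpath devices:
mount | grep vpath        # root (/) should show a /dev/vpath device such as /dev/vpatha3
swapon -s | grep vpath    # swap should also be listed on a vpath partition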
mount -n -o remount,rw /
For systems that have /boot on a separate mount point, mount the /boot partition using the /dev/sd device. 4. Remove the previous SDD driver:
rpm -e IBMsdd
The /etc/vpath.conf file will be saved to vpath.conf.rpmsave. 5. Install the new SDD driver.
rpm -ivh IBMsdd-x.x.x.x-y.i686.rhel4.rpm
mkdir -p /boot/mnt
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
cd /boot/mnt
cpio -iv < ../initrd.vp
cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib/
6. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current SDD module. 7. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
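To inspect the rebuilt initrd contents before re-archiving it, a hedged example (paths as used in this procedure) is:
ls -l /boot/mnt/lib/sdd-mod.ko*                                 # the soft link should point at the new module
ldd /boot/mnt/opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'   # list the libraries cfgvpath needs; confirm each has been copied under /boot/mnt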
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure:
v The installation target must be single-pathed before installing RHEL 4.
v Have a copy of RHEL 4 either network accessible or on CD-ROM.
v Be familiar with the RHEL 4 installation. This includes understanding which packages will be installed and how to select required options through the installation.
v Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems (this procedure was performed on an ESS Model 800).
v Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices. Although SDD functions correctly in single-path environments, there should be redundant physical paths to the devices from the host after installation of RHEL 4.
v Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution from a local storage device.
v Ensure that there is network access to the system.
2. Configure QLogic devices.
v For ease of installation and to avoid issues with internal SCSI or IDE controllers, all internal disk drive controllers should be disabled. This procedure assumes that this has been done.
v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure. While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.
3. Configure root/boot/swap devices.
v The physical boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing all packages from the installation media to the boot device.
v It is also recommended that the swap device be at least the size of the physical memory that is configured in the LPAR. For simplicity, these instructions assume that the root, boot, and swap devices are all located on the same device; however, this is not a requirement for the installation.
4. Installation media. The installation media, or source for installation, can be CD-ROM, NFS, HTTP, FTP, and so on. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.
5. Use this procedure to install RHEL 4:
a. From the BIOS menus, select the installation source to boot from.
b. Verify that the QLogic qla2030 SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
Note: Because of the way Linux discovers SAN devices, if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, that is, /dev/sda.
c. Select the desired options until arriving at the Installation Settings. Here, modification of the partitioning settings is required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
d. The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed.
6. Reboot the system.
a. On reboot, modify the BIOS to boot from the hard disk; the system should now boot to the newly installed OS.
b. At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
for example, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot. 3. Modify /etc/yaboot.conf. Add an entry for the SDD/LVM boot using initrd.vp.
image=/vmlinuz-2.6.9-22.0.1.EL
	label=linux-22.01
	read-only
	initrd=/initrd-2.6.9-22.0.1.EL.img
	append="console=hvc0 root=/dev/sda4"

image=/vmlinuz-2.6.9-22.0.1.EL
	label=linux-22.01-sdd
	read-only
	initrd=/initrd.vp
	append="console=hvc0 root=/dev/vpatha4"
Collect SDD data in preparation for configuring /etc/vpath.conf, /etc/fstab, /etc/yaboot.conf, and /boot/initrd.
sdd start
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device). The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.
cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD). 4. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
ls -1A /boot | grep initrd | grep $(uname -r)
9. Copy required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'
These files must be copied to the /boot/mnt/lib64/tls/ and /boot/mnt/lib64/ directories, respectively. 10. Copy the correct sdd-mod to the initrd file system. Use the uname -r command to determine the correct sdd-mod. The uname -r command returns something similar to 2.6.9-22.0.1.
cp /opt/IBMsdd/sdd-mod.ko-2.6.9-22.0.1 /boot/mnt/lib/sdd-mod.ko
11.
12. Copy required library files for each binary copied to the /boot/mnt directory in the previous step. Use the ldd command to determine the library files and locations. Many binaries use the same libraries so there might be duplications of copying. Example:
ldd /bin/mknod | awk '{print $(NF-1)}' | grep lib
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2
The above files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. Also, copy the following library files to /boot/mnt/lib/.
cp /lib/libproc-3.2.3.so /boot/mnt/lib/
cp /lib/libtermcap.so.2 /boot/mnt/lib/
cp /lib/libnss_files.so.2 /boot/mnt/lib/
13. Modify the /boot/mnt/init file. Add the following lines after the modules load and just before the /sbin/udevstart. Note that /sbin/udevstart may exist multiple times in the initrd. Make sure these lines are added before the correct /sbin/udevstart entry which is located after the kernel modules load.
echo "Loading SDD module" insmod /lib/sdd-mod.ko echo "Creating vpath devices" /opt/IBMsdd/bin/cfgvpath
14. Use cpio to archive the /boot/mnt directory and gzip it in preparation for rebooting.
find . | cpio -H newc -vo > ../initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
rm -rf mnt
cd /
shutdown -r now
15. Install the yaboot boot loader to a bootstrap partition using the ybin command:
ybin -b /dev/sda1
where /dev/sda1 is the PReP partition. 16. Verify that the system has rebooted and that SDD is configured correctly. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:
v mount
v swapon -s
v lsvpcfg
v datapath query device
mount -n -o remount,rw /
For systems that have /boot on a separate mount point, mount the /boot partition using the /dev/sd device. 4. Remove the previous SDD driver:
rpm -e IBMsdd
The /etc/vpath.conf file will be saved to vpath.conf.rpmsave. 5. Install the new SDD driver.
rpm -ivh IBMsdd-x.x.x.x-y.ppc64.rhel4.rpm
mkdir -p /boot/mnt
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
cd /boot/mnt
cpio -iv < ../initrd.vp
cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib/
6. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current sdd module. 7. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
9. Install the yaboot boot loader to a bootstrap partition using the ybin command.
ybin -b /dev/sda1
SAN boot instructions for RHEL 4 with IBM SDD (x86) and LVM 2
Use this procedure to install RHEL 4 U1 (or later) and configure SDD with LVM. This procedure assumes that no installation is present to work from and when completed, the boot and swap devices will be running on SDD vpath devices and will be under LVM control.
Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure: v The installation target MUST be single-pathed before installing RHEL 4.
v A copy of RHEL 4 U1 i386 either network accessible or on CD.
v Be familiar with the RHEL 4 installation. This includes understanding which packages will be installed.
v Be familiar with setting up root and swap under LVM control.
v Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems (this procedure was performed on an ESS Model 800).
v Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after installation of RHEL 4.
v Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution from a local storage device.
v Ensure that there will be network access to the system.
2. Configure the HBA devices.
Note: For ease of installation and to avoid issues with internal SCSI or IDE controllers, all internal disk drive controllers should be disabled. This procedure assumes that this has been done.
Verify that the SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure.
3. Configure the boot/root/swap devices.
The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices.
It is also recommended that the swap device be at least the size of the physical memory that is configured in the host. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device; however, this is not a requirement for the installation.
The root (/) device must be under LVM control. The boot (/boot) device must NOT be under LVM control. Swap might also be under LVM control, but this is not a requirement. However, it should at least use a vpath device.
4. Use the installation media. The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.
5. Install the system.
v From the BIOS menus, select the installation source to boot from.
v Verify that the HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
Note: Due to the way Linux discovers SAN devices, if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, that is, /dev/sda.
v Select the desired options until arriving at the Installation Settings. Here, modification of the partitioning settings is required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
Note: The details of installation and partitioning are not documented here. See the installation procedures to determine which packages are needed for the type of system being installed.
6. Reboot the system.
a. On reboot, modify the BIOS to boot from hard disk. The system should now boot to the newly installed OS.
b. Verify that the system is booted from the correct disk and that the boot/root/swap and LVM configurations are correct.
c. At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.
7. Before starting SDD, comment out any SCSI disk devices in /etc/fstab other than /boot. This ensures that all devices are written to the /etc/vpath.conf file. These devices can later be changed to vpath devices if the intent is to have them multipathed. This step is not absolutely required.
8. The /etc/fstab file will also need to be modified to point /boot from /dev/sd[x] or LABEL=[some_label_name_here] to /dev/vpath[x].
9. Modify the /boot/grub/menu.lst file to add an entry for the SDD initrd.
10. Modify /etc/lvm/lvm.conf to recognize vpath devices and ignore SCSI disk devices.
11. It is always a good idea to make copies of the files that are going to be manually modified, such as /etc/fstab, /etc/vpath.conf, /etc/lvm/lvm.conf, and /boot/grub/menu.lst (a sample backup sketch follows this list).
To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps.
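A minimal backup sketch for item 11; the .pre-sdd suffix is only an example naming convention:
cp /etc/fstab /etc/fstab.pre-sdd
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.pre-sdd
cp /boot/grub/menu.lst /boot/grub/menu.lst.pre-sdd
# /etc/vpath.conf is created only after SDD is installed and started; back it up once it exists.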
1. Install the SDD driver IBMsdd-1.6.0.1-8.i686.rhel4.rpm. Change to the directory where the SDD rpm is located and use the rpm tool to install the IBMsdd driver and applications:
# rpm -ivh IBMsdd-1.6.0.1-8.i686.rhel4.rpm
2. Use pvdisplay to get the physical volume for the root and swap LVM volume group(s). In this procedure, /dev/sda2 (sda) is the device that will be used for /dev/vpatha2 (vpatha):
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rootVolGroup
  PV Size               9.09 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              291
  Free PE               1
  Allocated PE          290
  PV UUID               SSm5g6-UoWj-evHE-kBj1-3QB4-EVi9-v88xiI
3. Modify the /etc/fstab file, ensuring that: a. LABEL= is not being used b. /boot is mounted on a vpath device Because Red Hat writes labels to the disk and uses labels in the /etc/fstab, the boot (/boot) device might be specified as a label; that is, LABEL=/boot. This can, however, be a different label other than LABEL=/boot. Check for the line in the /etc/fstab where /boot is mounted and change it to the correct vpath device. Also ensure that any other device specified with the LABEL= feature is changed to a /dev/sd or /dev/vpath device. Red Hat does not recognize LABEL= in a multipathed environment. There is a one-to-one correlation between SCSI disk and vpath minor devices, such as, sda1 and vpatha1. Major devices, however, might not correlate; that is, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot. Example: Change:
/dev/rootVolGroup/rootVol  /      ext3  defaults  1 1
LABEL=/boot                /boot  ext3  defaults  1 2
/dev/rootVolGroup/swapVol  swap   swap  defaults  0 0
To:
/dev/rootVolGroup/rootVol  /      ext3  defaults  1 1
/dev/vpatha1               /boot  ext3  defaults  1 2
/dev/rootVolGroup/swapVol  swap   swap  defaults  0 0
4. Modify the /boot/grub/menu.lst file. Add an entry for the SDD/LVM boot using initrd.vp.
default=1
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux AS (2.6.9-11.ELsmp) w/LVM w/SDD
      root (hd0,0)
      kernel /vmlinuz-2.6.9-11.ELsmp ro root=/dev/rootVolGroup/rootVol
      initrd /initrd.vp
title Red Hat Enterprise Linux AS (2.6.9-11.ELsmp)
      root (hd0,0)
      kernel /vmlinuz-2.6.9-11.ELsmp ro root=/dev/rootVolGroup/rootVol
      initrd /initrd-2.6.9-11.ELsmp.img
title Red Hat Enterprise Linux AS-up (2.6.9-11.EL)
      root (hd0,0)
      kernel /vmlinuz-2.6.9-11.EL ro root=/dev/rootVolGroup/rootVol
      initrd /initrd-2.6.9-11.EL.img
5. Modify /etc/lvm/lvm.conf so that LVM recognizes vpath devices and ignores the underlying SCSI disk devices. Change the filter line in the devices section of the file to:
filter = [ "a/vpath*/", "r/sd*/" ]
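In context, the devices section of /etc/lvm/lvm.conf would then look roughly like the following sketch; other settings are left at their defaults, and this is an illustration rather than the complete file:
devices {
    # Accept vpath devices and reject plain SCSI disk paths so that LVM does
    # not discover the same physical LUN once per path.
    filter = [ "a/vpath*/", "r/sd*/" ]
}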
6. Start SDD.
# sdd start
/etc/vpath.conf has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device). The cfgvpath query command produces output similar to the following: Note: Some data from the following output has been modified for ease of reading.
# cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file. (They are automatically added later by SDD.)
vpatha 12020870
7. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
# ls -1A /boot | grep initrd | grep $(uname -r)
# cd /boot
# cp [initrd file] initrd.vp.gz
# gunzip initrd.vp.gz
# mkdir /boot/mnt
Note: The rest of this procedure involves working from /boot/mnt. 8. Change directory to /boot/mnt and unarchive the initrd image to /boot/mnt.
# cd /boot/mnt
# cpio -iv < ../initrd.vp
11. Modify the /boot/mnt/etc/nsswitch.conf file. (For rhel4u1i386, this might already be done.) a. Change:
passwd: compat
To:
passwd: files
b. Change:
group: compat
To:
group: files
13. Copy the required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:
# ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'
For RHEL 4.0, copy these files to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories, respectively. For RHEL 4.5, copy these files to the /boot/mnt/lib64/tls/ and /boot/mnt/lib64/ directories, respectively. 14. Copy the correct sdd-mod to the initrd file system. Use the uname -r command to determine the correct sdd-mod; in this example, uname -r returns 2.6.9-11.ELsmp. For RHEL 4.0:
# cp /opt/IBMsdd/sdd-mod.ko-2.6.9-11.ELsmp /boot/mnt/lib/sdd-mod.ko
16. Copy the required library files for each binary. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries, so there might be duplications when copying. Example:
# ldd /bin/mknod | awk '{print $(NF-1)}'
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2
For RHEL 4.0, copy these files to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. Also, copy all of the following library files to /boot/mnt/lib/.
# cp /lib/libproc-3.2.3.so /boot/mnt/lib/
# cp /lib/libtermcap.so.2 /boot/mnt/lib/
# cp /lib/libnss_files.so.2 /boot/mnt/lib/
For RHEL 4.5, copy these files to the /boot/mnt/lib64/tls/ and /boot/mnt/lib64/ directories respectively. Also, copy all of the following library files to /boot/mnt/lib64/.
# cp /lib/libproc-3.2.3.so /boot/mnt/lib64/
# cp /lib/libtermcap.so.2 /boot/mnt/lib64/
# cp /lib/libnss_files.so.2 /boot/mnt/lib64/
17. Modify the /boot/mnt/init file. Add the following lines after the kernel modules are loaded and before any LVM commands run, so that the vpath devices exist before the root volume group is activated:
echo "Loading SDD module" insmod /lib/sdd-mod.ko echo "Creating vpath devices" /opt/IBMsdd/bin/cfgvpath
Ensure that an updated copy of vpath.conf is copied to the /root filesystem by using the following syntax to mount the root file system.
/bin/mount -o rw -t [fstype] [device] /mnt
Add the following lines just after [ insmod /lib/dm-snapshot.ko ]. The values used for the [fstype] and [device] here are only examples. Use the correct values for the system that is being configured.
/bin/mount -o rw -t ext3 /dev/rootVolGroup/rootVol /mnt /bin/cp /etc/vpath.conf /mnt/etc/ /bin/umount /mnt
18. Use cpio to archive the /boot/mnt directory and gzip to compress it in preparation for rebooting.
# find . | cpio -H newc -vo > ../initrd.vp
# cd /boot
# gzip initrd.vp
# mv initrd.vp.gz initrd.vp
# cd /
# shutdown -r now
19. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices.
# mount
# swapon -s
# pvdisplay
# lsvpcfg
# datapath query device
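One additional hedged check for the LVM case (volume group and device names as in this example) is to confirm that the physical volume is now presented on the vpath device rather than the sd device:
# Expect PV Name to show /dev/vpatha2 instead of /dev/sda2 after the reboot.
pvdisplay | grep "PV Name"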
Fatal: Sorry, don't know how to handle device 0xMMmm
where MM is the major number and mm is the minor number of the device in question (in hexadecimal). To prevent lilo from checking the major numbers, you can manually specify the geometry of the disk in the /etc/lilo.conf file.
Use the following procedure to find the information from your system in order to manually specify the disk geometry: 1. Use the sfdisk utility to find the cylinders, heads, and blocks. Use the -l option to list the current partition table and geometry numbers. For example,
[root@server ~]# sfdisk -l /dev/vpatha

Disk /dev/vpatha: 5221 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot  Start     End   #cyls    #blocks   Id  System
/dev/vpatha1   *    0+      63      64    514048+  83  Linux
/dev/vpatha2        64    4966    4903  39383347+  83  Linux
/dev/vpatha3      4967    5220     254    2040255  82  Linux swap
/dev/vpatha4         0       0       0          0      Empty
Note the cylinders, heads, and sectors per track and use this information to fill in the appropriate lilo.conf entries. 2. A separate program, hdparm, can be used to get the starting sector numbers for each partition. However, hdparm works only on SCSI or IDE disk devices (/dev/sdXX or /dev/hdXX) and does not work on vpath devices. You can use one of the underlying paths that corresponds to your boot disk to check the values. In this example, the root disk is vpatha, and four underlying SCSI disk devices (paths) correspond to that vpath device.
3. Choose one of the underlying paths, for example, /dev/sda, and then issue the following command:
[root@server ~]# hdparm -g /dev/sda
4. Compare this output to the sfdisk -l output. 5. Issue hdparm -g against every partition. For example:
[root@server ~]# hdparm -g /dev/sda
/dev/sda:
 geometry = 5221/255/63, sectors = 83886080, start = 0
[root@server ~]# hdparm -g /dev/sda1
/dev/sda1:
 geometry = 5221/255/63, sectors = 1028097, start = 63
[root@server ~]# hdparm -g /dev/sda2
/dev/sda2:
 geometry = 5221/255/63, sectors = 78766695, start = 1028160
[root@server ~]# hdparm -g /dev/sda3
/dev/sda3:
 geometry = 5221/255/63, sectors = 4080510, start = 79794855
6. Use the values after "start = " in the output above as the starting sector numbers for the /etc/lilo.conf parameters. These values correspond to the starting sector numbers in the lilo.conf example that follows. 7. Insert the disk parameters and all the supporting information. 8. Rerun lilo. The command should now succeed because it does not have to probe the geometry of the vpath device; instead, it uses the entries in lilo.conf. Here is an example of a lilo.conf file configured for remote boot:
boot=/dev/vpatha
map=/boot/map
install=/boot/boot.b
disk = /dev/vpatha
    bios = 0x80
    sectors = 63
    heads = 255
    cylinders = 5221
    partition = /dev/vpatha1
        start = 63
    partition = /dev/vpatha2
        start = 1028160
    partition = /dev/vpatha3
        start = 79794855
prompt
timeout=50
message=/boot/message
default=linux
image=/boot/vmlinuz-2.4.21-27.ELsmp
    label=linux
    initrd=/boot/initrd-2.4.21-27.ELsmp.img.test
    read-only
    root=/dev/vpatha2
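As a sanity check on the numbers above, each partition's start should equal the previous partition's start plus its sector count: 63 + 1028097 = 1028160 for /dev/vpatha2, and 1028160 + 78766695 = 79794855 for /dev/vpatha3, which matches the start values in this lilo.conf example.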
If processes are listed, the SDD server has automatically started. If the SDD server has not started, no processes will be listed and you should see Starting the SDD server manually for instructions to start sddsrv.
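A quick way to check is the same command that the stop verification below references:
ps wax | grep sddsrv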
2. Save the file /etc/inittab. 3. Enter the telinit q command. 4. Follow the directions in Verifying if the SDD server has started on page 305 to confirm that the SDD server started successfully.
2. Save the file. 3. Issue telinit q. See Verifying if the SDD server has started on page 305 to verify that the SDD server is not running. If sddsrv is not running, no processes will be listed when you enter ps wax | grep sddsrv.
Note: For supported file systems, use the standard UNIX fdisk command to partition SDD vpath devices.
Make sure that the SDD devices match the devices that are being replaced. You can issue the lsvpcfg command to list all SDD devices and their underlying disks.
or,
scsi1 (1:1): rejecting I/O to offline device
For 2.4 kernels, the only way to restore devices that have been taken offline by the SCSI layer is to reload the HBA driver. For 2.6 kernels, you can use the sysfs interface to dynamically re-enable SCSI devices that have been taken offline by the SCSI layer.
v Setting SCSI midlayer timeout values to address loaded storage targets
Some storage devices require a longer time period to retire an I/O command issued by an initiator under heavy load. By default, the SCSI midlayer allots only 30 seconds per SCSI command before cancelling the I/O command to the initiator. Consider setting the timeout value to 60 seconds. If you see SCSI errors of value 0x6000000, LUN reset messages, or abort I/O messages, you can change the timeout setting in case it helps to alleviate the situation. It might also be necessary to stop all I/O operations and allow the target to retire all outstanding I/O before starting I/O operations again with the new timeout.
For Linux 2.6 kernels, you can manually set the timeout value through the sysfs interface. To do this, issue the following command:
echo 60 > /sys/class/scsi_device/<host>:<channel>:<target>:<lun>/timeout
Replace the items in <> with the following values (you can match them with the values in /proc/scsi/scsi):
host - host number
channel - channel number
target - target number
lun - lun number
To simplify this process for multiple paths, Emulex has provided the script set_timout_target.sh on the Emulex website under the Linux tools page. Because this script deals with SCSI disk devices, it can work equally well in environments that use QLogic host bus adapters. Details on how to use the tool are available on the Emulex website. A hedged shell sketch that applies the same timeout to every SCSI device follows this list.
v Changing default queue depth values to avoid overloaded storage targets
You should lower the queue depth per LUN when using multipathing. With multipathing, this default value is magnified because it equals the default queue depth of the adapter multiplied by the number of active paths to the storage device. For example, given that QLogic uses a default queue depth of 32, the recommended queue depth value to use would be 16 when using two active paths and 8 when using four active paths. Directions for adjusting the queue depth are specific to each HBA driver and should be available in the documentation for the HBA.
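The following hedged sketch applies the 60-second timeout to every SCSI device known to a 2.6 kernel. The sysfs attribute path is the one given above; on some kernel levels the attribute may live one level deeper (under .../device/timeout), so verify the path on your system before relying on this:
# Set the SCSI midlayer command timeout to 60 seconds for all SCSI devices.
for dev in /sys/class/scsi_device/*; do
    [ -w "$dev/timeout" ] && echo 60 > "$dev/timeout"
done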
Hardware requirements
The following hardware components are needed:
v IBM TotalStorage SAN Fibre Channel Switch 2109 is recommended
v Host system
v Fibre-channel switch
v SCSI adapters and cables (ESS)
v Fibre-channel adapters and cables
Software requirements
The following software components are needed:
v Microsoft Windows operating system running on the client
v One of the following NetWare operating systems running on the server:
  - Novell NetWare 5.1 with Support Pack
  - Novell NetWare 6 with Support Pack
  - NetWare 6.5 with Support Pack
v NetWare Cluster Service for NetWare 5.1 if servers are being clustered
v NetWare Cluster Service for NetWare 6.0 if servers are being clustered
v NetWare Cluster Service for NetWare 6.5 if servers are being clustered
v ConsoleOne
Supported environments
SDD supports the following environments:
v Novell NetWare 5.1 SP6
v Novell NetWare 6 SP1, SP2, SP3, SP4, or SP5
v Novell NetWare 6.5 SP1.1 or SP2
v Novell Cluster Services 1.01 for Novell NetWare 5.1 is supported on fibre-channel and SCSI devices.
v Novell Cluster Services 1.6 for Novell NetWare 6.0 is supported only for fibre-channel devices.
v Novell Cluster Services 1.7 for Novell NetWare 6.5 is supported only for fibre-channel devices.
Currently, only the following QLogic fibre-channel adapters are supported with SDD:
v QL2310FL
v QL2200F
v QLA2340 and QLA2340/2
Unsupported environments
SDD does not support:
v A host system with both a SCSI and fibre-channel connection to a shared disk storage system LUN
v Single-path mode during concurrent download of licensed machine code nor during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement
v DS8000 and DS6000 do not support SCSI connectivity.
SCSI requirements
To use the SDD SCSI support, ensure that your host system meets the following requirements:
v A SCSI cable connects each SCSI host adapter to an ESS port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two SCSI adapters are installed.
For information about the SCSI adapters that can attach to your NetWare host system, go to the following website:
www.ibm.com/servers/storage/support
Fibre-channel requirements
You must check for and download the latest fibre-channel device driver APARs, maintenance level fixes, and microcode updates from the following website:
www.ibm.com/servers/storage/support/
Note: If your host has only one fibre-channel adapter, you need to connect through a switch to multiple disk storage system ports. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure.
To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v The NetWare host system has the fibre-channel device drivers installed.
v A fiber-optic cable connects each fibre-channel adapter to a disk storage system port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two paths to a device are attached.
For information about the fibre-channel adapters that can be used on your NetWare host system, go to the following website:
www.ibm.com/servers/storage/support
LOAD QL2200.HAM SLOT=x /LUNS /ALLPATHS /PORTNAMES /GNNFT
LOAD QL2200.HAM SLOT=y /LUNS /ALLPATHS /PORTNAMES /GNNFT
Modify the startup.ncf file by adding SET MULTI-PATH SUPPORT=OFF at the top. Then, modify the autoexec.ncf by adding SCAN ALL LUNS before MOUNT ALL:
...
SCAN ALL LUNS
MOUNT ALL
...
Ensure that you can see all the LUNs before installing SDD. Use the list storage adapters command to verify that all the LUNs are available. See the IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide for more information about how to install and configure fibre-channel adapters for your NetWare host system. See the IBM TotalStorage Enterprise Storage Server: Host Systems Attachment Guide for working around NetWare LUN limitations.
SET MULTI-PATH SUPPORT = OFF
...
#LOAD CPQSHD.CDM
...
LOAD SCSIHD.CDM
...
LOAD QL2300.HAM SLOT=6 /LUNS /ALLPATHS /PORTNAMES /GNNFT
LOAD QL2300.HAM SLOT=5 /LUNS /ALLPATHS /PORTNAMES /GNNFT
Using SCSIHD.CDM rather than CPQSHD.CDM will not cause any problems when running SDD on a Novell NetWare Compaq server.
Installing SDD
This section describes how to install SDD from CD-ROM and downloaded code.
Configuring SDD
To load the SDD module, enter load SDD. To unload the SDD module, enter unload SDD. Note: During the initial load of SDD, NetWare SDD takes over control of each LUN (path) from other device drivers in order to virtualize all the underlying paths (LUNs). Messages like the following are issued during the initial load:
Device " {V597-A2-D1:2} IBM 2105F20 REV:.105" deactivated by driver due to device failure.
These messages are expected and are not cause for concern.
Features
SDD provides the following functions:
v Automatic path detection, failover and selection
v Manual operations (datapath command)
v Path selection algorithms
v Dynamic load balancing
v Disk storage system logical unit detection
v Error reporting and logging
v SDD in NetWare-layered architecture
The OPEN state indicates that a path is available. This is the initial path state after the system starts. When a path failure occurs in the OPEN state, the path is put into the CLOSE (Error) state. If the SDD recovers the path, the path is put back into the OPEN state. While path recovery is in progress, the path is temporarily changed to the OPEN state. If a path failure occurs three consecutive times in the CLOSE (Error) state, the path is put into the DEAD state in multipath mode. In the single-path mode, it stays in the CLOSE state. However, if the path is recovered, it is put back into the OPEN state. While path reclamation is in progress, the path is temporarily changed to OPEN state. The path is put into the INVALID state and is placed offline if path reclamation fails. Only a datapath command, datapath set adapter <n> online or datapath set device <n> path <m> online, can return the path to the OPEN state. In the event that all the paths fail, all the paths except one are moved into the DEAD state. The one path will still be in OPEN state. This indicates that further access to
LUNs is still accepted. At each access, all paths are attempted until at least one of them is recovered. The error count is incremented only for the path in the OPEN state while all other paths are failed.
Single-path mode
In single-path mode, only a single path is available for access to a device in a subsystem. The SDD never puts this path into the DEAD state.
Multiple-path mode
In this mode, two or more paths are available for access to a device in a subsystem. SDD has the following behavior concerning path operations:
v After a path failure occurs on a path, SDD attempts to use the path again after 2 000 successful I/O operations through another operational path or paths. This process is called Path Recovery.
v If the consecutive error count on the path reaches three, SDD puts the path into the DEAD state.
v SDD reverts the failed path from the DEAD state to the OPEN state after 50 000 successful I/O operations through an operational path or paths. This process is called Path Reclamation.
v If an access fails through the path that has been returned to the OPEN state, SDD puts the path into the INVALID (PERMANENTLY DEAD) state and then never attempts the path again. Only a manual operation using a datapath command can reset a path from the PERMANENTLY DEAD state to the OPEN state.
v All knowledge of prior path failures is reset when a path returns to the OPEN state.
v SDD never puts the last operational path into the DEAD state. If the last operational path fails, SDD attempts a previously failed path or paths even though that path (or paths) is in the PERMANENTLY DEAD state.
v If all the available paths fail, SDD reports an I/O error to the application.
v If the path is recovered by either a path recovery operation or a path reclamation operation, the path is then handled as a normal path in the OPEN state and SDD stops keeping a history of the failed path.
Note: You can display the error count with the datapath command.
v Event source name
v Time stamp
v Severity
v Event number
v Event description
v SCSI sense data (in case it is valid)
Note: A failure in Path Recovery or Path Reclamation is not logged, while a successful path recovery in Path Recovery or Path Reclamation is logged.
Starting with SDD version 1.00i, the list storage adapters displays:
[V597-A3] QL2300 PCI FC-AL Host Adapter Module
   [V597-A3-D0:0] IBM 2105800 rev:.324
   [V597-A3-D0:1] IBM 2105800 rev:.324
[V597-A4] QL2300 PCI FC-AL Host Adapter Module
The datapath query device output will be the same in both cases.
Total Devices : 2 DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003A:[V596-A4-D1:0] OPEN NORMAL 224 0 1 0x007A:[V596-A3-D1:0] OPEN NORMAL 224 0 2 0x001A:[V596-A4-D0:0] OPEN NORMAL 224 0 3 0x005A:[V596-A3-D0:0] OPEN NORMAL 224 0 DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0 1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0 2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0 3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0 END:datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] NORMAL ACTIVE 795 0 4 4 1 [V596-A3] NORMAL ACTIVE 794 0 4 4 (Pull one of the cable) Error has occured on device 0x3A path 2 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data] This path is in CLOSE state. Error has occured on device 0x3A path 0 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data] This path is in CLOSE state. Path Recovery (1) has failed on device 0x3A path 2 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data] This path is in CLOSE state. Path Recovery (1) has failed on device 0x3A path 0 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data] This path is in CLOSE state. ND:datapath query device Total Devices : 2 DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003A:[V596-A4-D1:0] CLOSE NORMAL 418 2 1 0x007A:[V596-A3-D1:0] OPEN NORMAL 740 0 2 0x001A:[V596-A4-D0:0] CLOSE NORMAL 418 2 3 0x005A:[V596-A3-D0:0] OPEN NORMAL 739 0 DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0 1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0 2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0 3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0 END:datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] DEGRAD ACTIVE 901 5 4 2 1 [V596-A3] NORMAL ACTIVE 1510 0 4 4 (If reconnect cable and issue manual online command) END:datapath set adapter 0 online datapath set adapter command has been issued for adapter 4(Adpt# 0). This adapter is in NORMAL state. device 0x59 path 0 is in OPEN state. device 0x58 path 0 is in OPEN state. datapath set adapter command has been issued for adapter 4(Adpt# 2). This adapter is in NORMAL state.
device 0x59 path 2 is in OPEN state. device 0x58 path 2 is in OPEN state. Success: set adapter 0 to online Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] NORMAL ACTIVE 2838 14 4 4 (If reconnect cable and let SDD do path recovery itself) Path Recovery (2) has succeeded on device 0x3A path 2. This path is in OPEN state. Path Recovery (2) has succeeded on device 0x3A path 0. This path is in OPEN state. (If cable is not reconnected, after 3 retries, path will be set to DEAD) Path Recovery (3) has failed on device 0x3A path 2 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data] This path is in DEAD state. Path Recovery (3) has failed on device 0x3A path 0 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data] This path is in DEAD state. END:datapath query device Total Devices : 2 DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003A:[V596-A4-D1:0] DEAD NORMAL 1418 7 1 0x007A:[V596-A3-D1:0] OPEN NORMAL 4740 0 2 0x001A:[V596-A4-D0:0] DEAD NORMAL 1418 7 3 0x005A:[V596-A3-D0:0] OPEN NORMAL 4739 0 DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0 1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0 2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0 3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0 (If cable is continually disconnected, path will be set to INVALID if path reclamation fails) Path Reclamation has failed on device 0x3A path 2 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data] This path is in INVALID state. Path Reclamation has failed on device 0x3A path 0 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data] This path is in INVALID state. END:datapath query device Total Devices : 2 DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003A:[V596-A4-D1:0] INVALID NORMAL 1418 8 1 0x007A:[V596-A3-D1:0] OPEN NORMAL 54740 0 2 0x001A:[V596-A4-D0:0] INVALID NORMAL 1418 8 3 0x005A:[V596-A3-D0:0] OPEN NORMAL 54739 0 DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0 1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0 2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0 3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0 (If pull both cable, volume will be deactivated, IO stops, paths will be set to INVALID except one path left OPEN) Aug 8, 2003 3:05:05 am NSS <comn>-3.02-xxxx: comnVol.c[7478] Volume TEMPVOL: User data I/O error 20204(zio.c[1912]). Block 268680(file block 63)(ZID 3779)
Volume TEMPVOL: User data I/O error 20204(zio.c[1912]). Block 268681(file block 64)(ZID 3779) Deactivating pool "TEMPPOOL"... Aug 8, 2003 3:05:06 am NSS<COMN>-3.02-xxxx: comnPool.c[2516] Pool TEMPPOOL: System data I/O error 20204(zio.c[1890]). Block 610296(file block 10621)(ZID 3) Dismounting Volume TEMPVOL The share point "TEMPVOL" has been deactivated due to dismount of volume TEMPVOL . Aug 8, 2003 3:05:06 am NSS<COMN>-3.02-xxxx: comnVol.c[7478] Volume TEMPVOL: User data I/O error 20204(zio.c[1912]). Block 268682(file block 65)(ZID 3779) Aug 8, 2003 3:05:07 am NSS<COMN>-3.02-xxxx: comnVol.c[7478] Volume TEMPVOL: User data I/O error 20204(zio.c[1912]). Block 268683(file block 66)(ZID 3779) Aug 8, 2003 3:05:08 am NSS<COMN>-3.02-xxxx: comnVol.c[7478] Block 268684(file block 67)(ZID 3779) Aug 8, 2003 3:05:08 am NSS<COMN>-3.02-xxxx: comnVol.c[7478] Block 268685(file block 68)(ZID 3779) ........... END:datapath query device Total Devices : 2 DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003A:[V596-A4-D1:0] OPEN NORMAL 2249 3064 1 0x007A:[V596-A3-D1:0] INVALID OFFLINE 12637 1 2 0x001A:[V596-A4-D0:0] INVALID OFFLINE 2248 16 3 0x005A:[V596-A3-D0:0] INVALID OFFLINE 12637 4 DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin
Path# Device State Mode Select Errors 0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0 1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0 2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0 3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0 END:datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] DEGRAD ACTIVE 4499 3080 4 2 1 [V596-A3] DEGRAD ACTIVE 25276 5 4 2 (After reconnect both cables, issue manual online command) END:datapath set adapter 0 online Success: set adapter 0 to online Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] NORMAL ACTIVE 4499 3080 4 4 END:datapath set adapter 1 online Success: set adapter 1 to online Adpt# Adapter Name State Mode Select Errors Paths Active 1 [V596-A3] NORMAL ACTIVE 25276 5 4 4 END:datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] NORMAL ACTIVE 4499 3080 4 4 1 [V596-A3] NORMAL ACTIVE 25276 5 4 4 (At this time, volume tempvol could not be mounted, pool activation is need) END:mount tempvol Volume TEMPVOL could NOT be mounted. Some or all volumes segments cannot be located. If this is an NSS volume, the pool may need to be activated using the command nss /poolactivate=poolname. END:nss /poolactivate=temppool Activating pool "TEMPPOOL"... ** Pool layout v40.07 ** Processing journal ** 1 uncommitted transaction(s) ** 1839 Redo(s), 2 Undo(s), 2 Logical Undo(s) ** System verification completed ** Loading system objects ** Processing volume purge log ** . ** Processing pool purge log ** . Loading volume "TEMPVOL" Volume TEMPVOL set to the DEACTIVATE state. Pool TEMPPOOL set to the ACTIVATE state. END:mount tempvol Activating volume "TEMPVOL"
** Volume layout v35.00 ** Volume creation layout v35.00 ** Processing volume purge log ** . Volume TEMPVOL set to the ACTIVATE state. Mounting Volume TEMPVOL ** TEMPVOL mounted successfully END:volumes Mounted Volumes Name Spaces Flags SYS DOS, LONG Cp Sa _ADMIN DOS, MAC, NFS, LONG NSS P TEMPVOL DOS, MAC, NFS, LONG NSS 3 volumes mounted
Hardware
The following hardware components are needed:
v One or more supported storage devices.
v For parallel SCSI access to ESS, one or more SCSI host adapters.
v One or more fibre-channel host adapters. In case of a single fibre-channel adapter, it must connect through a switch to multiple disk storage system ports.
v Subsystem LUNs that are created and confirmed for multiport access. Each LUN should have up to eight disk instances, with one for each path on the server.
v A SCSI cable to connect each SCSI host adapter to a storage system control-unit image port.
v A fiber-optic cable to connect each fibre-channel adapter to a disk storage system controller port or a fibre-channel switch connected with the disk storage system or virtualization product port.
To install SDD and use the input/output (I/O) load-balancing and failover features, you need a minimum of two SCSI (ESS only) or fibre-channel host adapters if you are attaching to a disk storage system. To install SDD and use the I/O load-balancing and failover features, you need a minimum of two fibre-channel host adapters if you are attaching to a virtualization product. SDD requires enabling the host-adapter persistent binding feature to have the same system device names for the same LUNs.
Software
SDD supports the following software components:
v ESS on a SPARC system running 32-bit Solaris 7/8/9 or 64-bit Solaris 7/8/9/10
v DS8000 on a SPARC system running 32-bit Solaris 8/9 or 64-bit Solaris 8/9/10
v DS8000 on an X64 machine running 64-bit Solaris 10
v DS6000 on a SPARC system running 32-bit Solaris 8/9 or 64-bit Solaris 8/9/10
v SAN Volume Controller on a SPARC system running 64-bit Solaris 8/9
SDD does not support the following software:
v Applications that issue SCSI 2 Reservation to storage
Supported environments
SDD supports 32-bit applications on 32-bit Solaris 7/8/9. SDD supports both 32-bit and 64-bit applications on 64-bit Solaris 7/8/9/10.
Unsupported environments
SDD does not support the following environments:
v A host system with both a SCSI and fibre-channel connection to a shared LUN
v A system start from an SDD pseudo device
v A system paging file on an SDD pseudo device
v Root (/), /var, /usr, /opt, /tmp and swap partitions on an SDD pseudo device
v Single-path mode during concurrent download of licensed machine code nor during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement
v Single-path configuration for Fibre Channel
v DS8000 and DS6000 do not support SCSI connectivity
remove this stand-alone version of the SDD server before you proceed with SDD 1.3.1.0 (or later) installation. The installation package for SDD 1.3.1.0 includes the SDD server daemon (also referred to as sddsrv), which incorporates the functionality of the stand-alone version of the SDD server (for ESS Expert). To determine if the stand-alone version of the SDD server is installed on your host system, enter:
pkginfo -i SDDsrv
If you previously installed the stand-alone version of the SDD server, the output from the pkginfo -i SDDsrv command looks similar to the following output:
application SDDsrv SDDsrv bb-bit Version: 1.0.0.0 Nov-14-2001 15:34
Note:
v The installation package for the stand-alone version of the SDD server (for ESS Expert) is SDDsrvSUNbb_yymmdd.pkg. In this version, bb represents 32 or 64 bit, and yymmdd represents the date of the installation package. For ESS Expert V2R1, the stand-alone SDD server installation package is SDDsrvSun32_020115.pkg for a 32-bit environment and SDDsrvSun64_020115.pkg for a 64-bit environment.
v For instructions on how to remove the stand-alone version of the SDD server (for ESS Expert) from your Solaris host system, see the IBM Subsystem Device Driver Server 1.0.0.0 (sddsrv) README for IBM TotalStorage Expert V2R1 at the following website:
www.ibm.com/servers/storage/support/software/swexpert/
For more information about the SDD server daemon, go to SDD server daemon on page 341.
Table 22. SDD installation scenarios

Scenario 1
  Description: SDD is not installed. No volume managers are installed. No software application or DBMS is installed that communicates directly to the sd interface.
  How to proceed: Go to:
  1. Installing SDD on page 329
  2. Standard UNIX applications on page 343

Scenario 2
  Description: SDD is not installed. An existing volume manager, software application, or DBMS is installed that communicates directly to the sd interface.
  How to proceed: Go to:
  1. Installing SDD on page 329
  2. Using applications with SDD on page 342

Scenario 3 / Scenario 4
  Description: SDD is installed.
Table 23 lists the installation package file names that come with SDD.
Table 23. Operating systems and SDD package file names

Operating system        Package file name
32-bit Solaris 7/8/9    sun32bit/IBMsdd
64-bit Solaris 7/8/9    sun64bit/IBMsdd
64-bit Solaris 10       solaris10/IBMsdd
For SDD to operate properly, ensure that the Solaris patches are installed on your operating system. Go to the following website for the latest information about Solaris patches: http://sunsolve.sun.com/show.do?target=patchpage For more information on the Solaris patches, see the IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide or the IBM System Storage SAN Volume Controller Host Systems Attachment Guide. Attention: Analyze and study your operating system and application environment to ensure that there are no conflicts with these patches prior to their installation.
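Before comparing against the recommended patch list, you can review what is already installed. A hedged example using the standard Solaris patch tools (output piped to a pager) is:
showrev -p | more     # list installed patches on Solaris 9 and earlier
patchadd -p | more    # equivalent listing, also available on Solaris 10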
Installing SDD
Before you install SDD, make sure that you have root access to your Solaris host system and that all the required hardware and software is ready. You can download the latest SDD package and readme from the SDD website: www.ibm.com/servers/storage/support/software/sdd Note: The SDD package name changed from IBMdpo to IBMsdd for SDD 1.4.0.0 or later.
4. Issue the pkgadd command and point the -d option of the pkgadd command to the directory that contains IBMsdd. For Solaris 10, include the -G option to prevent SDD from being installed in non-global zones. For example:
pkgadd -d /cdrom/cdrom0/sun32bit IBMsdd
or
pkgadd -d /cdrom/cdrom0/sun64bit IBMsdd
or
pkgadd -G -d /cdrom/cdrom0/Solaris10 IBMsdd
IBM SDD driver
(sparc) 1
## Processing package information.
## Processing system information.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts that run with super-user permission during the process of installing this package.
Do you want to continue with the installation of <IBMsdd> [y,n,?]
6. Enter y and press Enter to proceed. A message similar to the following message is displayed:
Installing IBM sdd driver as <IBMsdd> ## Installing part 1 of 1. /etc/defvpath /etc/rcS.d/S20vpath-config /etc/sample_sddsrv.conf /kernel/drv/sparcv9/vpathdd /kernel/drv/vpathdd.conf /opt/IBMsdd/bin/cfgvpath /opt/IBMsdd/bin/datapath /opt/IBMsdd/bin/defvpath /opt/IBMsdd/bin/get_root_disk /opt/IBMsdd/bin/pathtest /opt/IBMsdd/bin/rmvpath /opt/IBMsdd/bin/setlicense /opt/IBMsdd/bin/showvpath /opt/IBMsdd/bin/vpathmkdev /opt/IBMsdd/devlink.vpath.tab /opt/IBMsdd/etc.profile /opt/IBMsdd/etc.system /opt/IBMsdd/vpath.msg /opt/IBMsdd/vpathexcl.cfg /sbin/sddsrv /usr/sbin/vpathmkdev [ verifying class ] ## Executing postinstall script. /etc/rcS.d/S20vpath-config /etc/sample_sddsrv.conf /kernel/drv/sparcv9/vpathdd /kernel/drv/vpathdd.conf /opt/IBMsdd/bin/cfgvpath /opt/IBMsdd/bin/datapath /opt/IBMsdd/bin/defvpath /opt/IBMsdd/bin/get_root_disk /opt/IBMsdd/bin/pathtest /opt/IBMsdd/bin/rmvpath /opt/IBMsdd/bin/setlicense /opt/IBMsdd/bin/showvpath /opt/IBMsdd/bin/vpathmkdev /opt/IBMsdd/devlink.vpath.tab /opt/IBMsdd/etc.profile /opt/IBMsdd/etc.system /opt/IBMsdd/vpath.msg /opt/IBMsdd/vpathexcl.cfg /sbin/sddsrv /usr/sbin/vpathmkdev [ verifying class ] Vpath: Configuring 24 devices (3 disks * 8 slices) Installation of <IBMsdd> was successful. The following packages are available: 1 IBMcli ibm2105cli (sparc) 1.1.0.0 2 IBMsdd IBM SDD driver Version: May-10-2000 16:51 (sparc) 1 Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]:
7. If the SDD installation package determines that the system requires reboot, a message similar to the following message will be displayed:
*** IMPORTANT NOTICE ***
This machine must now be rebooted in order to ensure sane operation.
Issue shutdown -y -i6 -g0 and wait for the "Console Login:" prompt.
Using this command, SDD will be installed with the directory specified by basedir as a root directory. In this type of installation, vpath devices will not be configured during installation. You will need to reboot the system; vpath devices will be configured automatically after the reboot. To install SDD in a Jumpstart environment, add the installation of SDD with the -R option to the Jumpstart finish script, as in the sketch that follows.
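For example, a Jumpstart finish script might install the package against the target system image mounted at /a; the mount point /a and the package location /tmp/install are illustrative assumptions, not values mandated by SDD:
# Install SDD relative to the Jumpstart target root so that it ends up on
# the newly installed system rather than the boot miniroot.
pkgadd -R /a -d /tmp/install IBMsdd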
Postinstallation
If you install SDD from a CD-ROM, you can now manually unmount the CD. Issue the umount /cdrom command from the root directory. Go to the CD-ROM drive and press the Eject button. After you install SDD, you might need to reboot the system to ensure proper operation. The SDD installation package determines if reboot is required. SDD displays a message to inform you to reboot only if reboot is required. SDD vpath devices are found in the /dev/rdsk and /dev/dsk directories. The SDD vpath device is named according to the SDD instance number. A device with an instance number 1 would be: /dev/rdsk/vpath1a where a denotes a slice. Therefore, /dev/rdsk/vpath1c would be instance 1 and slice 2. Similarly, /dev/rdsk/vpath2c would be instance 2 and slice 2. After SDD is installed, the device driver resides above the Sun SCSI disk driver (sd) in the protocol stack. In other words, SDD now communicates to the Solaris device layer. The SDD software installation procedure installs a number of SDD components and updates some system files. Table 24, Table 25 on page 332, and Table 26 on page 333 list those components and files.
Table 24. SDD components installed for Solaris host systems

File                 Location                  Description
vpathdd              /kernel/drv               Device driver
vpathdd.conf         /kernel/drv               SDD config file
Executables          /opt/IBMsdd/bin           Configuration and status tools
sddgetdata           /opt/IBMsdd/bin           The SDD data collection tool for problem analysis
S65vpath_config      /etc/rcS.d                Boot initialization script
(except Solaris 10)
                     Notes:
                     1. This script must come before other LVM initialization scripts.
                     2. Prior to SDD 1.6.2.0, this file was named S20vpath_config.
ibmsdd-init.xml      /var/svc/manifest/system  SMF service manifest for boot time initialization (only on Solaris 10)
ibmsddinit           /lib/svc/method           Boot initialization script used by ibmsdd-init.xml manifest (only on Solaris 10)
sddsrv               /sbin/sddsrv              SDD server daemon
sample_sddsrv.conf   /etc/sample_sddsrv.conf   Sample SDD server config file
Table 25. System files updated for Solaris host systems

File              Location  Description
/etc/system       /etc      Forces the loading of SDD
/etc/devlink.tab  /etc      Tells the system how to name SDD devices in /dev
Table 26. SDD commands and their descriptions for Solaris host systems

cfgvpath [-c]
    Configures SDD vpath devices using the following process:
    1. Scan the host system to find all devices (LUNs) that are accessible by the Solaris host.
    2. Determine which devices (LUNs) are the same devices that are accessible through different paths.
    3. Create configuration file /etc/vpath.cfg to save the information about devices.
    With -c option: cfgvpath exits without initializing the SDD driver. The SDD driver will be initialized after reboot. This option is used to reconfigure SDD after a hardware reconfiguration.
    Note: cfgvpath -c updates the configuration file but does not update the kernel. To update the kernel, you need to reboot.
    Without -c option: cfgvpath initializes the SDD device driver vpathdd with the information stored in /etc/vpath.cfg and creates SDD vpath devices /devices/pseudo/vpathdd*.
    Note: cfgvpath without the -c option should not be used after hardware reconfiguration because the SDD driver is already initialized with previous configuration information. A reboot is required to properly initialize the SDD driver with the new hardware configuration information.

cfgvpath -r
    Reconfigures SDD vpath devices if SDD vpath devices exist. See Option 2: Dynamic reconfiguration on page 335. If no SDD vpath devices exist, use cfgvpath without the -r option.

showvpath
    Lists all SDD vpath devices and their underlying disks.

vpathmkdev
    Creates files vpathMsN in the /dev/dsk/ and /dev/rdsk/ directories by creating links to the pseudo-vpath devices /devices/pseudo/vpathdd* that are created by the SDD driver. Files vpathMsN in the /dev/dsk/ and /dev/rdsk/ directories provide block and character access to an application the same way as the cxtydzsn devices created by the system. vpathmkdev runs automatically during SDD package installation. However, issue it manually to update files vpathMsN after hardware reconfiguration.

datapath
    SDD driver console command tool.

rmvpath [-b] [all | vpathname]
rmvpath -ab
    Removes SDD vpath devices from the configuration. See Option 2: Dynamic reconfiguration on page 335.
If you are not using a volume manager, software application, or DBMS that communicates directly to the sd interface, the installation procedure is nearly complete. If you have a volume manager, software application, or DBMS installed that communicates directly to the sd interface, such as Oracle, go to Using applications with SDD on page 342 and read the information specific to the application that you are using.
7. Enter devfsadm and press Enter to reconfigure all the drives.
8. Enter vpathmkdev and press Enter to create all the SDD vpath devices.
Note: If there are no existing SDD vpath devices, the cfgvpath -r command does not dynamically reconfigure new SDD vpath devices. Issue cfgvpath to configure new SDD vpath devices, and then issue devfsadm -i vpathdd and vpathmkdev.
This operation finds the current hardware configuration, compares it to the SDD vpath device configuration in memory, and then works out a list of differences. It then issues commands to bring the SDD vpath device configuration in memory up to date with the current hardware configuration. The cfgvpath -r operation issues these commands to the vpath driver:
a. Add one or more SDD vpath devices. If you are adding new SDD vpath devices, issue devfsadm -i vpathdd and vpathmkdev.
b. Remove an SDD vpath device; this will fail if the device is busy.
c. Add one or more paths to an SDD vpath device. If the SDD vpath device changes from a single path to multiple paths, the path selection policy of the SDD vpath device is changed to the load-balancing policy.
d. Remove a path for an SDD vpath device. This deletion of the path will fail if the device is busy, but the path will be set to DEAD and OFFLINE.
Removing paths of an SDD vpath device or removing an SDD vpath device can fail if the corresponding devices are busy. If a path removal fails, the corresponding path is marked OFFLINE. If an SDD vpath device removal fails, all the paths of that SDD vpath device are marked OFFLINE. OFFLINE paths are not selected for I/O. However, the SDD configuration file is modified to reflect the removed paths or SDD vpath devices, so when the system is rebooted, the new SDD configuration is used to configure the SDD vpath devices.
2. Issue the rmvpath command to remove one or more SDD vpath devices.
a. To remove all SDD vpath devices that are not busy, issue the following command:
# rmvpath -all
b. To remove one SDD vpath device if the SDD vpath device is not busy, issue the following command:
# rmvpath vpathname
For example, rmvpath vpath10 will remove vpath10. c. To remove SDD vpath devices if the SDD vpath devices are not busy and also to remove the bindings between SDD vpath device names and LUNs so that the removed SDD vpath device names can be reused for new devices, issue the following command:
# rmvpath -b -all
or
# rmvpath -b vpathname
d. To remove all bindings associated with currently unconfigured vpath names so that all unconfigured SDD vpath device names can be reused for new LUNs, issue the following command:
# rmvpath -ab
Note: Option -ab does not remove any existing SDD vpath devices.
Note: When an SDD vpath device, vpathN, is created for a LUN, SDD also creates a binding between that SDD vpath name, vpathN, and that LUN. The binding is not removed even after the LUN has been removed from the host. The binding allows the same SDD vpath device name, vpathN, to be assigned to the same LUN when it is reconnected to the host. To reuse an SDD vpath name for a new LUN, the binding must be removed before reconfiguring SDD.
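For example, after new LUNs have been assigned to the host, a typical dynamic reconfiguration sequence looks like the following sketch. It assumes that SDD vpath devices already exist; if none exist, start with cfgvpath instead of cfgvpath -r, as described above.

# cfgvpath -r
# devfsadm -i vpathdd
# vpathmkdev
# showvpath

The final showvpath command is only a convenient check that the new SDD vpath devices and their underlying paths are present.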
For information about known ZFS issues and the minimum Solaris 10 update that is required, see the Solaris SDD readme.
3. Find the major and minor number of the vpath device that you want to port to the nonglobal zone by using the ls -lL command. In the following example, the major number of vpath device vpath2c is 271 and the minor number is 18.
# ls -lL /dev/rdsk/vpath2c
crw-------   1 root     sys      271, 18 Jan 16 15:25 /dev/rdsk/vpath2c
4. Create block and raw device special files in the nonglobal zone /dev/dsk and /dev/rdsk directories using the mknod command. The device special files are based on the major and minor numbers of the vpath devices. Issue the mknod command to create a block device special file:
# cd /zones/ngb_zone2/dev/dsk
# mknod vpath2c b 271 18
# ls -l
total 0
brw-r--r--   1 root     root
# cd /zones/ngb_zone2/dev/rdsk
# mknod vpath2c c 271 18
# ls -l
total 0
crw-r--r--   1 root     root
5. In the nonglobal zone environment, the vpath devices are now available in the /dev/dsk and /dev/rdsk directories.
DEV#:   2  DEVICE NAME: vpath1c  TYPE: 2105800  POLICY: OPTIMIZED
SERIAL: 03B23922
========================================================================
Path#             Adapter H/W Path     Hard Disk     State  Mode    Select  Error
    0  /pci@8,700000/fibre channel@3   sd@1,0:c,raw  CLOSE  NORMAL       0      0
    1  /pci@8,700000/fibre channel@3   sd@2,0:c,raw  CLOSE  NORMAL       0      0
    2  /pci@8,600000/fibre channel@1   sd@1,0:c,raw  CLOSE  NORMAL       0      0
    3  /pci@8,600000/fibre channel@1   sd@2,0:c,raw  CLOSE  NORMAL       0      0
Special consideration during SDD upgrade:
During SDD upgrade, /etc/vpathexcl.cfg is replaced and the LUN exclusion list is lost. To retain the exclusion list after an SDD upgrade:
1. Copy the existing /etc/vpathexcl.cfg to a new file, for example, /etc/vpathexcl.cfg.sav, before installing the new SDD package.
2. After installing the new SDD package, replace /etc/vpathexcl.cfg with the saved file, /etc/vpathexcl.cfg.sav.
3. Issue cfgvpath -r again to exclude the LUNs.
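For example, the whole sequence might look like the following sketch; the saved file name vpathexcl.cfg.sav is only an example:

# cp /etc/vpathexcl.cfg /etc/vpathexcl.cfg.sav
  (install the new SDD package)
# cp /etc/vpathexcl.cfg.sav /etc/vpathexcl.cfg
# cfgvpath -r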
4. Enter y and press Enter. A message similar to the following message is displayed:
## Removing installed package instance <IBMsdd>

This package contains scripts that run with super-user
permission during the process of removing this package.

Do you want to continue with the removal of this package [y,n,?,q] y
5. Enter y and press Enter. A message similar to the following message is displayed:
## Verifying package dependencies.
## Processing package information.
## Executing preremove script.
## Removing pathnames in class <none>
/usr/sbin/vpathmkdev
/sbin/sddsrv
/opt/IBMsdd/vpathexcl.cfg
/opt/IBMsdd/vpath.msg
/opt/IBMsdd/etc.system
/opt/IBMsdd/etc.profile
/opt/IBMsdd/devlink.vpath.tab
/opt/IBMsdd/bin
/opt/IBMsdd
/kernel/drv/vpathdd.conf
/kernel/drv/sparcv9/vpathdd
/etc/sample_sddsrv.conf
/etc/rcS.d/S20vpath-config
/etc/defvpath
## Updating system information.
Removal of <IBMsdd> was successful.
Attention: If you are not performing an SDD upgrade, you should now reboot the system. If you are in the process of upgrading SDD, you do not need to reboot at this point. You can reboot the system after installing the new SDD package.
Understanding SDD support for single-path configuration for disk storage system
SDD does not support concurrent download of licensed machine code in single-path mode. SDD does support single-path SCSI or fibre-channel connection from your Sun host system to a disk storage system. It is possible to create a volume group or an SDD vpath device with only a single path. However, because SDD cannot provide single-point-failure protection and load balancing with a single-path configuration, you should not use a single-path configuration.
2. Save the file /etc/inittab. 3. Issue init q. 4. Follow the directions in Verifying if the SDD server has started to confirm that the SDD server started successfully.
Changing the retry count value when probing SDD server inquiries
Beginning with SDD version 1.6.4.x on Solaris, you can dynamically change the value of the retry count for probing inquiries. Before SDD version 1.6.4.x on Solaris, the retry count was statically compiled in the sddsrv binary and was set to 2. The retry count specifies how many consecutive probing inquiries sddsrv makes before a path is determined to be nonfunctional and is set to DEAD state. A retry count of 2 indicates that sddsrv attempts three consecutive probing inquiries. You must either create the probe_retry variable in the existing sddsrv.conf file or generate a new sddsrv.conf file by copying the sample_sddsrv.conf file to the sddsrv.conf file in the /etc directory. In the sample file, this variable is commented out with the default value of probe_retry=2. To change the default value, uncomment the variable and set probe_retry to a valid value. The valid range for probe_retry is from 2 to 5. If you set a value that is not valid, sddsrv uses the default value of 2; if you set a value greater than 5, sddsrv uses 5; if you set a value less than 2, sddsrv uses 2.
2. Save the file.
3. Issue init q.
4. Check whether sddsrv is running by issuing ps -ef | grep sddsrv. If sddsrv is still running, issue kill -9 <pid of sddsrv>.
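For example, the probe_retry entry in /etc/sddsrv.conf might be changed as in the following sketch, which assumes that the file marks comments with a leading # character as in the shipped sample file:

Before (default, commented out):
# probe_retry=2

After (retry count raised to 3, a value within the valid range of 2 to 5):
probe_retry=3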
Installing SDD on a system that already has the Network File System file server
Complete the following steps to install SDD on a system if you have the Network File System file server already configured to export file systems that reside on a multiport subsystem and to use SDD partitions instead of sd partitions to access file systems:
1. List the mount points for all currently exported file systems by looking in the /etc/exports directory.
2. Match the mount points found in step 1 with sdisk device link names (files named /dev/(r)dsk/cntndn) by looking in the /etc/fstab directory.
3. Match the sd device link names found in step 2 with SDD device link names (files named /dev/(r)dsk/vpathN) by issuing the showvpath command.
4. Make a backup copy of the current /etc/fstab file.
5. Edit the /etc/fstab file, replacing each instance of an sd device link named /dev/(r)dsk/cntndn with the corresponding SDD device link.
6. Restart the system.
7. Verify that each exported file system:
v Passes the start time fsck pass
v Mounts properly
v Is exported and available to NFS clients
If a problem exists with any exported file system after you complete step 7, restore the original /etc/fstab file and restart to restore Network File System service. Then review your steps and try again.
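As an illustration only, step 5 changes the device fields of the entry for one exported file system. The sd device link, the corresponding SDD device link, and the mount point shown here are hypothetical; the remaining fields of the entry stay unchanged:

Before:  /dev/dsk/c1t3d0s4   /dev/rdsk/c1t3d0s4   /export/home  ...
After:   /dev/dsk/vpath1e    /dev/rdsk/vpath1e    /export/home  ...

In this sketch, slice s4 maps to the SDD partition letter e, following the a-h to slice 0-7 correspondence used elsewhere in this chapter.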
VPATH_SHARK0_1, and so on. SAN Volume Controller vpath devices will have names such as VPATH_SANVC0_0, VPATH_SANVC0_1, and so on.
Case 2: Installing SDD with Veritas already installed.
1. Install SDD using the procedure in Installing SDD on page 329.
2. Ensure that you have rebooted the system after SDD is installed.
In Veritas Volume Manager, the ESS vpath devices will have names such as VPATH_SHARK0_0, VPATH_SHARK0_1, and so on. SAN Volume Controller vpath devices will have names such as VPATH_SANVC0_0, VPATH_SANVC0_1, and so on.
Note: Multipathing of ESS and SAN Volume Controller devices that DMP managed before SDD was installed is managed by SDD after SDD is installed.
Oracle
You must have superuser privileges to complete the following procedures. You also need to have Oracle documentation on hand. These procedures were tested with Oracle 8.0.5 Enterprise server with the 8.0.5.1 patch set from Oracle.
b. Set up the ORACLE_BASE and ORACLE_HOME environment variables of the Oracle user to be directories of this file system.
c. Create two more SDD-resident file systems on two other SDD volumes. Each of the resulting three mount points should have a subdirectory named oradata. The subdirectory is used as a control file and redo log location for the installer's default database (a sample database) as described in the Installation Guide. Oracle recommends using raw partitions for redo logs. To use SDD raw partitions as redo logs, create symbolic links from the three redo log locations to SDD raw device links that point to the slice. These files are named /dev/rdsk/vpathNs, where N is the SDD instance number, and s is the partition ID.
3. Determine which SDD (vpathN) volumes you will use as Oracle8 database devices.
4. Partition the selected volumes using the Solaris format utility. If Oracle8 is to use SDD raw partitions as database devices, be sure to leave sector 0/disk cylinder 0 of the associated volume unused. This protects UNIX disk labels from corruption by Oracle8.
5. Ensure that the Oracle software owner has read and write privileges to the selected SDD raw partition device files under the /devices/pseudo directory.
6. Set up symbolic links in the oradata directory under the first of the three mount points. See step 2 on page 345. Link the database files to SDD raw device links (files named /dev/rdsk/vpathNs) that point to partitions of the appropriate size.
7. Install the Oracle8 server following the instructions in the Oracle Installation Guide. Be sure to be logged in as the Oracle software owner when you run the orainst /m command. Select the Install New Product - Create Database Objects option. Select Raw Devices for the storage type. Specify the raw device links set up in step 2 for the redo logs. Specify the raw device links set up in step 3 for the database files of the default database.
8. To set up other Oracle8 databases, you must set up control files, redo logs, and database files following the guidelines in the Oracle8 Administrator's Reference. Make sure any raw devices and file systems that you set up reside on SDD volumes.
9. Launch the sqlplus utility.
10. Issue the create database SQL command, specifying the control, log, and system data files that you have set up.
11. Issue the create tablespace SQL command to set up each of the temp, rbs, tools, and users database files that you created.
12. Issue the create rollback segment SQL command to create the three redo log files that you set. For the syntax of these three create commands, see the Oracle8 Server SQL Language Reference Manual.
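For example, the redo log links described in step c might be created as in the following sketch. The mount points, file names, and vpath slices are hypothetical and must match your own layout and partition sizes:

# ln -s /dev/rdsk/vpath4b /oracle/mp1/oradata/redo01.log
# ln -s /dev/rdsk/vpath4c /oracle/mp2/oradata/redo02.log
# ln -s /dev/rdsk/vpath4d /oracle/mp3/oradata/redo03.log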
2. Complete the basic installation steps in the Installing SDD on page 329 section.
3. Change to the directory where you installed the SDD utilities. Issue the showvpath command.
4. Check the showvpath output to find a cntndn device name that matches the one where the Oracle files are. For example, if the Oracle files are on c1t8d0s4, look for c1t8d0s2. If you find it, you will know that /dev/dsk/vpath0c is the same as /dev/dsk/c1t8d0s2. (SDD partition identifiers end in an alphabetical character from a-h rather than s0, s1, s2, and so forth). A message similar to the following message is displayed:
vpath1c
        c1t8d0s2        /devices/pci@1f,0/pci@1/scsi@2/sd@1,0:c,raw
        c2t8d0s2        /devices/pci@1f,0/pci@1/scsi@2,1/sd@1,0:c,raw
5. Use the SDD partition identifiers instead of the original Solaris identifiers when mounting the file systems.
If you originally used the following Solaris identifiers:
mount /dev/dsk/c1t3d2s4 /oracle/mp1
you now use the following SDD partition identifiers:
mount /dev/dsk/vpath2e /oracle/mp1
For example, assume that vpath2c is the SDD identifier. Follow the instructions in the Oracle Installation Guide for setting ownership and permissions.

If using raw partitions:
Complete the following procedure if you have Oracle8 already installed and want to reconfigure it to use SDD partitions instead of sd partitions (for example, partitions accessed through /dev/rdsk/cntndn files).
All Oracle8 control, log, and data files are accessed either directly from mounted file systems or through links from the oradata subdirectory of each Oracle mount point set up on the server. Therefore, the process of converting an Oracle installation from sdisk to SDD has two parts:
v Change the Oracle mount points' physical devices in /etc/fstab from sdisk device partition links to the SDD device partition links that access the same physical partitions.
v Re-create any links to raw sdisk device links to point to raw SDD device links that access the same physical partitions.
Converting an Oracle installation from sd to SDD partitions:
Complete the following steps to convert an Oracle installation from sd to SDD partitions:
1. Back up your Oracle8 database files, control files, and redo logs.
2. Obtain the sd device names for the Oracle8 mounted file systems by looking up the Oracle8 mount points in /etc/vfstab and extracting the corresponding sd device link name (for example, /dev/rdsk/c1t4d0s4).
3. Launch the sqlplus utility.
4. Enter the command:
select * from sys.dba_data_files;
The output lists the locations of all data files in use by Oracle. Determine the underlying device where each data file resides. You can do this by either looking up mounted file systems in the /etc/vfstab file or by extracting raw device link names directly from the select command output.
5. Enter the ls -l command on each device link found in step 4 on page 347 and extract the link source device file name. For example, if you enter the command:
# ls -l /dev/rdsk/c1t1d0s4
A message similar to the following message is displayed:
/dev/rdsk/c1t1d0s4 /devices/pci@1f,0/pci@1/scsi@2/sd@1,0:e
6. Write down the file ownership and permissions by issuing the ls -lL command on either the files in /dev/ or /devices (it yields the same result). For example, if you enter the command:
# ls -lL /dev/rdsk/c1t1d0s4
A message similar to the following message is displayed:
crw-r--r-- oracle dba 32,252 Nov 16 11:49 /dev/rdsk/c1t1d0s4
7. Complete the basic installation steps in the Installing SDD on page 329 section.
8. Match each cntndns device with its associated vpathNs device link name by issuing the showvpath command. Remember that vpathNs partition names use the letters a - h in the s position to indicate slices 0 - 7 in the corresponding cntndnsn slice names.
9. Issue the ls -l command on each SDD device link.
10. Write down the SDD device nodes for each SDD device link by tracing back to the link source file.
11. Change the attributes of each SDD device to match the attributes of the corresponding disk device using the chgrp and chmod commands.
12. Make a copy of the existing /etc/vfstab file for recovery purposes. Edit the /etc/vfstab file, changing each Oracle device link to its corresponding SDD device link.
13. For each link found in an oradata directory, re-create the link using the appropriate SDD device link as the source file instead of the associated sd device link. As you perform this step, generate a reversing shell script that can restore all the original links in case of error.
14. Restart the server.
15. Verify that all file system and database consistency checks complete successfully.
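For example, setting the attributes in step 11 for one device might look like the following sketch. Here vpath2e is a hypothetical SDD device link, and the group and mode mirror the ownership recorded with ls -lL in step 6 of this example:

# chgrp dba /dev/rdsk/vpath2e
# chmod 644 /dev/rdsk/vpath2e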
For these procedures, you need access to the Solaris answerbook facility. These procedures were tested using Solstice DiskSuite 4.2 with the patch 106627-04 (DiskSuite patch) installed. You should have a copy of the DiskSuite Administration Guide available to complete these procedures. You must have superuser privileges to complete these procedures.
Note: SDD supports only the Solstice DiskSuite command-line interface. The DiskSuite Tool (metatool) does not recognize or present SDD devices for configuration. SDD does not support the Solaris Volume Manager disk set feature, which issues SCSI-2 reservations to storage.
1. Back up all data.
2. Back up the current Solstice configuration by making a copy of the md.tab file and recording the output of the metastat and metadb -i commands. Make sure all sd device links in use by DiskSuite are entered in the md.tab file and that they all come up properly after a restart.
3. Install SDD using the procedure in the Installing SDD on page 329 section, if you have not already done so. After the installation completes, enter shutdown -i6 -y -g0 and press Enter. This verifies the SDD vpath installation.
Note: Do not do a reconfiguration restart.
4. Using a plain sheet of paper, make a two-column list and match the /dev/(r)dsk/cntndnsn device links found in step 2 with the corresponding /dev/(r)dsk/vpathNs device links. Use the showvpath command to do this step.
5. Delete each replica database that is currently configured with a /dev/(r)dsk/cntndnsn device by using the metadb -d -f <device> command. Replace the replica database with the corresponding /dev/(r)dsk/vpathNs device found in step 2 by using the metadb -a <device> command.
6. Create a new md.tab file. Insert the corresponding vpathNs device link name in place of each cntndnsn device link name. Do not do this for start device partitions (vpath does not currently support these). When you are confident that the new file is correct, install it in either the /etc/opt/SUNWmd directory or the /etc/lvm directory, depending on the DiskSuite version.
7. Restart the server, or proceed to the next step if you want to avoid restarting your system.
To back out the SDD vpath in case there are any problems following step 7:
a. Reverse the procedures in step 4 to step 6, reinstalling the original md.tab in the /etc/opt/SUNWmd directory or the /etc/lvm directory depending on the DiskSuite version.
b. Enter the pkgrm IBMsdd command.
c. Restart.
8. Stop all applications using DiskSuite, including file systems.
9. Enter the following commands for each existing metadevice:
metaclear <device>
10. Enter metainit -a to create metadevices on the /dev/(r)dsk/vpathNs devices.
11. Compare the metadevices that are created with the saved metastat output from step 2. Create any missing metadevices and reconfigure the metadevices based on the configuration information from the saved metastat output.
12. Restart your applications.
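For example, replacing one replica database in step 5 might look like the following sketch. The sd slice and the corresponding vpath slice shown here are hypothetical and must come from the two-column list you built in step 4 (slice s7 corresponds to partition letter h):

# metadb -d -f /dev/dsk/c1t2d0s7
# metadb -a /dev/dsk/vpath3h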
2. Determine which SDD vpath (vpathNs) volumes you will use as file system devices. Partition the selected SDD vpath volumes using the Solaris format utility. Be sure to create partitions for UFS logging devices as well as for UFS master devices.
3. Create file systems on the selected vpath UFS master device partitions using the newfs command.
4. Install Solaris Volume Manager if you have not already done so.
5. Create the metatrans device using metainit. For example, assume /dev/dsk/vpath1d is your UFS master device used in step 3, /dev/dsk/vpath1e is its corresponding log device, and d0 is the trans device that you want to create for UFS logging. Enter metainit d0 -t vpath1d vpath1e and press Enter.
6. Create mount points for each UFS logging file system that you have created using steps 3 and 5.
7. Install the file systems into the /etc/vfstab directory, specifying /dev/md/(r)dsk/d<metadevice number> for the raw and block devices. Set the mount at boot field to yes.
8. Restart your system.
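Using the example devices from step 5, the sequence might look like the following sketch. The mount point /data, the metadevice d0, and the vfstab fields other than the device names are examples only:

# newfs /dev/rdsk/vpath1d
# metainit d0 -t vpath1d vpath1e
# mkdir /data

Entry added to /etc/vfstab:
/dev/md/dsk/d0   /dev/md/rdsk/d0   /data   ufs   2   yes   -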
Installing vpath on a system that already has transactional volume for UFS logging in place
Complete the following steps if you already have UFS logging file systems residing on a multiport subsystem and you want to use vpath partitions instead of sd partitions to access them.
1. Make a list of the DiskSuite metatrans devices for all existing UFS logging file systems by looking in the /etc/vfstab directory. Make sure that all configured metatrans devices are set up correctly in the md.tab file. If the devices are not set up now, set them up before continuing. Save a copy of the md.tab file.
2. Match the device names found in step 1 with sd device link names (files named /dev/(r)dsk/cntndnsn) using the metastat command.
3. Install SDD using the procedure in the Installing SDD on page 329 section, if you have not already done so.
4. Match the sd device link names found in step 2 with SDD vpath device link names (files named /dev/(r)dsk/vpathNs) by issuing the /opt/IBMsdd/bin/showvpath command.
5. Unmount all current UFS logging file systems known to reside on the multiport subsystem using the umount command.
6. Enter metaclear -a and press Enter.
7. Create new metatrans devices from the vpathNs partitions found in step 4 that correspond to the sd device links found in step 2. Remember that vpath partitions a - h correspond to sd slices 0 - 7. Use the metainit d <metadevice number> -t <"vpathNs" - master device> <"vpathNs" - logging device> command. Be sure to use the same metadevice numbering as you originally used with the sd partitions. Edit the md.tab file to change each metatrans device entry to use vpathNs devices.
8. Restart the system.
Note: If there is a problem with a metatrans device after steps 7 and 8, restore the original md.tab file and restart the system. Review your steps and try again.
Hardware
The following hardware components are needed:
v One or more supported storage devices
v Host system
v For ESS devices: SCSI adapters and cables
v Fibre-channel adapters and cables
Software
The following software components are needed:
v Windows NT 4.0 operating system with Service Pack 6A or later
v For ESS devices: SCSI device drivers
v Fibre-channel device drivers
Unsupported environments
SDD does not support the following environments:
v A host system with both a SCSI channel and fibre-channel connection to a shared LUN.
v I/O load balancing in a Windows NT clustering environment.
v Storing the Windows NT operating system or a paging file on an SDD-controlled multipath device (that is, SDD does not support boot from an ESS device).
v Single-path mode during concurrent download of licensed machine code or during any ESS concurrent maintenance that impacts the path attachment, such as an ESS host-bay-adapter replacement.
ESS requirements
To successfully install SDD, ensure that your host system is configured to the ESS as an Intel processor-based PC server with Windows NT 4.0 Service Pack 6A (or later) installed.
SCSI requirements
To use the SDD SCSI support on ESS devices, ensure that your host system meets the following requirements:
v No more than 32 SCSI adapters are attached.
v A SCSI cable connects each SCSI host adapter to an ESS port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two SCSI adapters are installed.
Note: SDD also supports one SCSI adapter on the host system. With single-path access, concurrent download of licensed machine code is supported with SCSI devices. However, the load-balancing and failover features are not available.
v For information about the SCSI adapters that can attach to your Windows NT host system, go to the following website:
www.ibm.com/servers/storage/support
Fibre-channel requirements
To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v No more than 32 fibre-channel adapters are attached.
v A fiber-optic cable connects each fibre-channel adapter to a supported storage device port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two fibre-channel paths are configured between the host and the subsystem.
Note: If your host has only one fibre-channel adapter, you must connect through a switch to multiple supported storage device ports. SDD requires a minimum of two independent paths that share the same logical unit to use the load-balancing and path-failover-protection features.
For information about the fibre-channel adapters that can attach to your Windows NT host system, go to the following website:
www.ibm.com/servers/storage/support
Installing SDD
These sections describe how to install the IBM System Storage Multipath Subsystem Device Driver.
Upgrading SDD
If you attempt to install over an existing version of SDD, the installation fails. You must uninstall any previous version of the SDD before installing a new version of SDD.
Attention: After uninstalling the previous version, you must immediately install the new version of SDD to avoid any potential data loss. If you complete a system restart before installing the new version, you might lose access to your assigned volumes.
Complete the following steps to upgrade to a newer SDD version:
1. Uninstall the previous version of SDD. (See Uninstalling the SDD on page 363 for instructions.)
2. Install the new version of SDD. (See Installing SDD on page 355 for instructions.)
This section contains the procedures for adding paths to SDD devices in multipath environments.
3. Enter datapath query device and press Enter. In the following example, SDD displays 10 devices. There are five physical drives, and one partition has been assigned on each drive for this configuration. Each SDD device reflects a partition that has been created for a physical drive. Partition 0 stores information about the drive. The operating system masks this partition from the user, but it still exists.
Note: In a stand-alone environment, the policy field is optimized. In a cluster environment, the policy field is changed to reserved when a LUN becomes a cluster resource.
Total Devices : 10

DEV#:   0  DEVICE NAME: Disk2 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk2 Part0    OPEN    NORMAL       14       0

DEV#:   1  DEVICE NAME: Disk2 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk2 Part1    OPEN    NORMAL       94       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02C12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk3 Part0    OPEN    NORMAL       16       0

DEV#:   3  DEVICE NAME: Disk3 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02C12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk3 Part1    OPEN    NORMAL       94       0

DEV#:   4  DEVICE NAME: Disk4 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02D12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk4 Part0    OPEN    NORMAL       14       0

DEV#:   5  DEVICE NAME: Disk4 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02D22028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk4 Part1    OPEN    NORMAL       94       0

DEV#:   6  DEVICE NAME: Disk5 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02E12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk5 Part0    OPEN    NORMAL       14       0

DEV#:   7  DEVICE NAME: Disk5 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02E12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk5 Part1    OPEN    NORMAL       94       0

DEV#:   8  DEVICE NAME: Disk6 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02F12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk6 Part0    OPEN    NORMAL       14       0

DEV#:   9  DEVICE NAME: Disk6 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02F12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk6 Part1    OPEN    NORMAL       94       0
Active Adapters :2
Adpt#  Adapter Name      State    Mode     Select  Errors  Paths  Active
    0  Scsi Port6 Bus0   NORMAL   ACTIVE      188       0     10      10
    1  Scsi Port7 Bus0   NORMAL   ACTIVE      204       0     10      10
3. Type datapath query device and press Enter. The output includes information about any additional devices that were installed. In the example shown in the following output, the output includes information about the new host bus adapter that was assigned:
Total Devices : 10

DEV#:   0  DEVICE NAME: Disk2 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk2 Part0    OPEN    NORMAL        5       0
    1    Scsi Port7 Bus0/Disk7 Part0    OPEN    NORMAL        9       0

DEV#:   1  DEVICE NAME: Disk2 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk2 Part1    OPEN    NORMAL       32       0
    1    Scsi Port7 Bus0/Disk7 Part1    OPEN    NORMAL       32       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02C12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk3 Part0    OPEN    NORMAL        7       0
    1    Scsi Port7 Bus0/Disk8 Part0    OPEN    NORMAL        9       0

DEV#:   3  DEVICE NAME: Disk3 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02C22028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk3 Part1    OPEN    NORMAL       28       0
    1    Scsi Port7 Bus0/Disk8 Part1    OPEN    NORMAL       36       0

DEV#:   4  DEVICE NAME: Disk4 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02D12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk4 Part0    OPEN    NORMAL        8       0
    1    Scsi Port7 Bus0/Disk9 Part0    OPEN    NORMAL        6       0

DEV#:   5  DEVICE NAME: Disk4 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02D22028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk4 Part1    OPEN    NORMAL       35       0
    1    Scsi Port7 Bus0/Disk9 Part1    OPEN    NORMAL       29       0

DEV#:   6  DEVICE NAME: Disk5 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02E12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk5 Part0    OPEN    NORMAL        6       0
    1    Scsi Port7 Bus0/Disk10 Part0   OPEN    NORMAL        8       0

DEV#:   7  DEVICE NAME: Disk5 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02E22028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk5 Part1    OPEN    NORMAL       24       0
    1    Scsi Port7 Bus0/Disk10 Part1   OPEN    NORMAL       40       0

DEV#:   8  DEVICE NAME: Disk6 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02F12028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk6 Part0    OPEN    NORMAL        8       0
    1    Scsi Port7 Bus0/Disk11 Part0   OPEN    NORMAL        6       0

DEV#:   9  DEVICE NAME: Disk6 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02F22028
=====================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk6 Part1    OPEN    NORMAL       35       0
    1    Scsi Port7 Bus0/Disk11 Part1   OPEN    NORMAL       29       0
The definitive way to identify unique volumes on the storage subsystem is by the serial number displayed. The volume appears at the SCSI level as multiple disks (more properly, Adapter/Bus/ID/LUN), but it is the same volume on the ESS. The previous example shows two paths to each partition (path 0: Scsi Port6 Bus0/Disk2, and path 1: Scsi Port7 Bus0/Disk7). The example shows partition 0 (Part0) for each of the devices. This partition stores information about the Windows partition on the drive. The operating system masks this partition from the user, but it still exists. In general, you will see one more partition from the output of the datapath query device command than what is being displayed from the Disk Administrator application.
3. Enter datapath query device and press Enter. In the following example output from an ESS device, four devices are attached to the SCSI path:
Total Devices : 2

DEV#:   0  DEVICE NAME: Disk2 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02B12028
===========================================================================
Path#    Adapter/Hard Disk               State   Mode       Select    Errors
    0    Scsi Port5 Bus0/Disk2 Part0     OPEN    NORMAL          4         0
    1    Scsi Port5 Bus0/Disk8 Part0     OPEN    NORMAL          7         0
    2    Scsi Port6 Bus0/Disk14 Part0    OPEN    NORMAL          6         0
    3    Scsi Port6 Bus0/Disk20 Part0    OPEN    NORMAL          5         0

DEV#:   1  DEVICE NAME: Disk2 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02B12028
===========================================================================
Path#    Adapter/Hard Disk               State   Mode       Select    Errors
    0    Scsi Port5 Bus0/Disk2 Part1     OPEN    NORMAL   14792670         0
    1    Scsi Port5 Bus0/Disk8 Part1     OPEN    NORMAL   14799942         0
    2    Scsi Port6 Bus0/Disk14 Part1    OPEN    NORMAL   14926972         0
    3    Scsi Port6 Bus0/Disk20 Part1    OPEN    NORMAL   14931115         0
3. Enter datapath query device and press Enter. The output includes information about any additional devices that were installed. In the following example output from an ESS device, the output includes information about the new devices that were assigned:
Total Devices : 2

DEV#:   0  DEVICE NAME: Disk2 Part0  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02B12028
============================================================================
Path#    Adapter/Hard Disk               State   Mode       Select    Errors
    0    Scsi Port5 Bus0/Disk2 Part0     OPEN    NORMAL          4         0
    1    Scsi Port5 Bus0/Disk8 Part0     OPEN    NORMAL          7         0
    2    Scsi Port6 Bus0/Disk14 Part0    OPEN    NORMAL          6         0
    3    Scsi Port6 Bus0/Disk20 Part0    OPEN    NORMAL          5         0

DEV#:   1  DEVICE NAME: Disk2 Part1  TYPE: 2105E20  POLICY: OPTIMIZED
SERIAL: 02B12028
============================================================================
Path#    Adapter/Hard Disk               State   Mode       Select    Errors
    0    Scsi Port5 Bus0/Disk2 Part1     OPEN    NORMAL   14792670         0
    1    Scsi Port5 Bus0/Disk8 Part1     OPEN    NORMAL   14799942         0
    2    Scsi Port6 Bus0/Disk14 Part1    OPEN    NORMAL   14926972         0
    3    Scsi Port6 Bus0/Disk20 Part1    OPEN    NORMAL   14931115         0
The definitive way to identify unique volumes on the ESS device is by the serial number displayed. The volume appears at the SCSI level as multiple disks (more properly, Adapter/Bus/ID/LUN), but it is the same volume on the ESS. The previous example shows four paths to each partition (path 0: Scsi Port5 Bus0/Disk2, path 1: Scsi Port5 Bus0/Disk8, path 2: Scsi Port6 Bus0/Disk14, and path 3: Scsi Port6 Bus0/Disk20). The example shows partition 0 (Part0) for each device. This partition stores information about the Windows partition on the drive. The operating system masks this partition from the user, but it still exists. In general, you will see one more partition from the output of the datapath query device command than what is being displayed in the Disk Administrator application.
a. Verify that the cable for hba_b is connected to the ESS.
b. Verify that your LUN configuration on the ESS is correct.
c. Repeat steps 2 on page 364 - 5 on page 364.
6. Install SDD on server_1, and restart server_1. For installation instructions, go to Installing SDD on page 355.
7. Connect hba_c to the ESS, and restart server_2.
8. Click Start → Programs → Administrative Tools → Disk Administrator. The Disk Administrator is displayed. Use the Disk Administrator to verify the number of LUNs that are connected to server_2. The operating system recognizes each additional path to the same LUN as a device.
9. Disconnect hba_c and connect hba_d to the ESS. Restart server_2.
10. Click Start → Programs → Administrative Tools → Disk Administrator. The Disk Administrator is displayed. Use the Disk Administrator to verify that the correct number of LUNs are connected to server_2.
If the number of LUNs that are connected to server_2 is correct, proceed to step 11.
If the number of LUNs that are connected to server_2 is incorrect, complete the following steps:
a. Verify that the cable for hba_d is connected to the ESS.
b. Verify your LUN configuration on the ESS.
c. Repeat steps 7 - 10.
11. Install SDD on server_2, and restart server_2. For installation instructions, go to Installing SDD on page 355.
12. Connect both hba_c and hba_d on server_2 to the ESS, and restart server_2.
13. Use the datapath query adapter and datapath query device commands to verify the number of LUNs and paths on server_2.
14. Click Start → Programs → Administrative Tools → Disk Administrator. The Disk Administrator is displayed. Use the Disk Administrator to verify the number of LUNs as online devices. You also need to verify that all additional paths are shown as offline devices.
15. Format the raw devices with NTFS. Make sure to keep track of the assigned drive letters on server_2.
16. Connect both hba_a and hba_b on server_1 to the ESS, and restart server_1.
17. Use the datapath query adapter and datapath query device commands to verify the correct number of LUNs and paths on server_1. Verify that the assigned drive letters on server_1 match the assigned drive letters on server_2.
18. Restart server_2.
v Install the Microsoft Cluster Server (MSCS) software on server_1. When server_1 is up, install Service Pack 6A (or later) to server_1, and restart server_1. Then install hotfix Q305638 and restart server_1 again.
v Install the MSCS software on server_2. When server_2 is up, install Service Pack 6A (or later) to server_2, and restart server_2. Then install hotfix Q305638 and restart server_2 again.
19. Use the datapath query adapter and datapath query device commands to verify the correct number of LUNs and paths on server_1 and server_2. (This step is optional.)
Note: You can use the datapath query adapter and datapath query device commands to show all the physical volumes and logical volumes for the host server. The secondary server shows only the physical volumes and the logical volumes that it owns.
Unsupported environments
SDD does not support the following environments:
v DS8000 and DS6000 devices do not support SCSI connectivity.
v A host system with both a SCSI channel and a fibre-channel connection to a shared LUN.
v Single-path mode during concurrent download of licensed machine code or during any ESS-concurrent maintenance that impacts the path attachment, such as an ESS host-bay-adapter replacement.
v Support of HBA Symbios SYM8751D has been withdrawn starting with ESS Model 800 and SDD 1.3.3.3.
where xxx represents the disk storage system model number. To successfully install SDD on your virtualization product, ensure that you configure the virtualization product devices as fibre-channel devices attached to the virtualization product on your Windows 2000 host system.
Fibre-channel requirements
To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v Depending on the fabric and supported storage configuration, the number of fibre-channel adapters attached should be less than or equal to 32 / (n * m), where n is the number of supported storage ports and m is the number of paths that have access to the supported storage device from the fabric.
v A fiber-optic cable connects each fibre-channel adapter to a supported storage port or a fabric.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two independent fibre-channel paths are installed.
Note: You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure.
For information about the fibre-channel adapters that can attach to your Windows 2000 host system, go to the following website:
www.ibm.com/servers/storage/support
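For example, if the supported storage device presents n = 4 ports to the fabric and m = 2 paths have access to the supported storage device from the fabric, then no more than 32 / (4 * 2) = 4 fibre-channel adapters should be attached. The values 4 and 2 here are illustrative only; substitute the numbers from your own fabric and storage configuration.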
Before you install and use SDD, you must configure your SCSI adapters. For SCSI adapters that are attached to start devices, ensure that the BIOS for the adapter is enabled. For all other adapters that are attached to nonstart devices, ensure that the BIOS for the adapter is disabled. Note: When the adapter shares the SCSI bus with other adapters, the BIOS must be disabled.
Installing SDD
The following section describes how to install SDD on your system.
2. To install SDD from CD-ROM:
a. Insert the SDD installation CD-ROM into the selected drive.
b. Start the Windows 2000 Explorer program.
c. Double-click the CD-ROM drive. A list of all the installed directories on the compact disc is displayed.
4. When the setup.exe program is finished, you will be asked if you want to reboot. If you answer y, the setup.exe program restarts your Windows 2000 system immediately. Follow the instructions to restart. Otherwise, the setup.exe program exits, and you need to manually restart your Windows 2000 system to activate the new installation.
5. Shut down your Windows 2000 host system.
6. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed.
7. Change any zoning information that needs to be updated.
8. Restart your Windows 2000 host system.
After completing the installation procedures and when you log on again, your Program menu will include a Subsystem Device Driver entry containing the following selections:
1. Subsystem Device Driver management
2. SDD Technical Support website
3. README
Notes:
1. You can verify that SDD has been successfully installed by issuing the datapath query device command. If the command runs, SDD is installed. You must issue the datapath command from the datapath directory. You can also use the following operation to verify that SDD has been successfully installed:
a. Click Start → Programs → Administrative Tools → Computer Management.
b. Double-click Device Manager.
c. Expand Disk drives in the right pane. IBM 2105xxx SDD Disk Device indicates ESS devices connected to the Windows 2000 host.
Figure 7 shows six ESS devices connected to the host and four paths to each of the disk storage system devices. Device Manager shows six IBM 2105xxx SDD Disk Devices and 24 IBM 2105xxx SCSI Disk Devices.
Figure 7. Example showing ESS devices to the host and path access to the ESS devices in a successful SDD installation on a Windows 2000 host system
2. You can also verify the current version of SDD. For more information, go to datapath query version on page 448.
3. When the setup.exe program is finished, you will be asked if you want to reboot. If you answer y, the setup.exe program will restart your Windows 2000 system immediately. Follow the instructions to restart. Otherwise, the setup.exe program exits, and you need to manually restart your Windows 2000 system to activate the new installation.
4. Shut down your Windows 2000 host system.
5. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed.
6. Change any zoning information that needs to be updated.
7. Restart your Windows 2000 host system.
Upgrading SDD
Complete the following steps to upgrade SDD on your host system:
1. Log on as the administrator user.
2. To upgrade from CD-ROM:
a. Insert the SDD installation CD-ROM into the selected drive.
b. Start the Windows 2000 Explorer program.
c. Double-click the CD-ROM drive. A list of all the installed directories on the compact disc is displayed.
d. Double-click the \win2k\IBMsdd directory.
3. To download code from the SDD website:
a. Unzip the SDD code to your installation subdirectory.
b. Run the setup.exe program.
Tip: The setup program provides the following command-line options for silent install/upgrade:
--> setup -s      : silent install/upgrade
--> setup -s -n   : silent install/upgrade; no reboot (requires SDD 1.6.0.6 or later)
If you have previously installed a 1.3.1.1 (or earlier) version of SDD, you will see an "Upgrade?" question while the setup program is running. You should answer y to this question to continue the installation. Follow the displayed setup program instructions to complete the installation. If you currently have SDD 1.3.1.2 or 1.3.2.x installed on your Windows 2000 host system, answer y to the "Upgrade?" question.
4. When the setup program is finished, you will be asked if you want to reboot. If you answer y, the setup program restarts your Windows 2000 system immediately. Follow the instructions to restart. Otherwise, the setup program exits, and you need to manually restart your Windows 2000 system to activate the new installation.
Notes:
1. You can verify that SDD has been successfully installed by issuing the datapath query device command. If the command runs, SDD is installed.
2. You can also verify the current version of SDD. See Displaying the current version of SDD.
You can display the current version of SDD on a Windows 2000 host system by viewing the sddbus.sys file properties. Complete the following steps to view the properties of the sddbus.sys file:
a. Click Start → Programs → Accessories → Windows Explorer to open Windows Explorer.
b. In Windows Explorer, go to the %SystemRoot%\system32\drivers directory, where %SystemRoot% is %SystemDrive%\winnt for Windows 2000. If Windows is installed on the C: drive, %SystemDrive% is C:. If Windows is installed on the E: drive, %SystemDrive% is E:.
c. Right-click the sddbus.sys file, and then click Properties. The sddbus.sys properties window opens.
d. In the sddbus.sys properties window, click Version. The file version and copyright information about the sddbus.sys file is displayed.
2. By running the datapath query version command (requires SDD 1.6.1.x or later).
Configuring SDD
Use the following sections to configure SDD.
4. Enter datapath query device and press Enter. In the following example showing disk storage system device output, six devices are attached to the SCSI path:
Total Devices : 6
DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06D23922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN    NORMAL      108       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06E23922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk2 Part0    OPEN    NORMAL       96       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06F23922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk3 Part0    OPEN    NORMAL       96       0

DEV#:   3  DEVICE NAME: Disk4 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07023922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk4 Part0    OPEN    NORMAL       94       0

DEV#:   4  DEVICE NAME: Disk5 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07123922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk5 Part0    OPEN    NORMAL       90       0

DEV#:   5  DEVICE NAME: Disk6 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07223922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk6 Part0    OPEN    NORMAL       98       0
2. Enter datapath query adapter and press Enter. The output includes information about any additional adapters that were installed. In the example shown in the following output, an additional host bus adapter has been installed:
Active Adapters :2
Adpt#  Adapter Name      State    Mode     Select  Errors  Paths  Active
    0  Scsi Port1 Bus0   NORMAL   ACTIVE     1325       0      8       8
    1  Scsi Port2 Bus0   NORMAL   ACTIVE     1312       0      8       8
3. Enter datapath query device and press Enter. The output should include information about any additional devices that were installed. In this example, the output includes information about the new host bus adapter that was assigned:
Total Devices : 6
DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06D23922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN    NORMAL      108       0
    1    Scsi Port5 Bus0/Disk1 Part0    OPEN    NORMAL       96       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06E23922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk2 Part0    OPEN    NORMAL       96       0
    1    Scsi Port5 Bus0/Disk2 Part0    OPEN    NORMAL       95       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06F23922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk3 Part0    OPEN    NORMAL       96       0
    1    Scsi Port5 Bus0/Disk3 Part0    OPEN    NORMAL       94       0

DEV#:   3  DEVICE NAME: Disk4 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07023922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk4 Part0    OPEN    NORMAL       94       0
    1    Scsi Port5 Bus0/Disk4 Part0    OPEN    NORMAL       96       0

DEV#:   4  DEVICE NAME: Disk5 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07123922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk5 Part0    OPEN    NORMAL       90       0
    1    Scsi Port5 Bus0/Disk5 Part0    OPEN    NORMAL       99       0

DEV#:   5  DEVICE NAME: Disk6 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07223922
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk6 Part0    OPEN    NORMAL       98       0
    1    Scsi Port5 Bus0/Disk6 Part0    OPEN    NORMAL       79       0
Uninstalling SDD
Complete the following steps to uninstall SDD on a Windows 2000 host system.
1. Shut down your Windows 2000 host system.
2. Ensure that there is a single-path connection from the system to the storage device.
3. Turn on your Windows 2000 host system.
4. Log on as the administrator user.
5. Click Start → Settings → Control Panel. The Control Panel opens.
6. Double-click Add/Remove Programs. The Add/Remove Programs window opens.
7. In the Add/Remove Programs window, select the Subsystem Device Driver from the currently installed programs selection list.
8. Click Add/Remove. You will be asked to verify that you want to uninstall SDD.
9. Restart your system.
The SDD setup.exe program provides the following command line options for silent uninstall:
--> setup -s -u      : silent uninstall
--> setup -s -u -n   : silent uninstall; no reboot (requires SDD 1.6.0.6 or later)
Booting from a SAN device with Windows 2000 and the SDD using Qlogic HBA <BIOS 1.43> or later
For information about how SAN boot works, see the Qlogic website: http://www.qlogic.com/.
Complete the following steps to set up a SAN boot device with SDD:
1. Configure the SAN environment so that both Qlogic HBAs in the host system can see the SAN boot device. Ensure that there is a single-path connection from each Qlogic HBA to the SAN boot device.
2. Turn on the host system with 2 fibre-channel cables connected to both HBAs. When the HBA banner appears, press CTRL-Q.
3. Select the first HBA from the displayed list.
4. Select Configuration Settings.
5. Select Host Adapter Settings.
6. Select Host Adapter BIOS and enable it.
7. Press the Back key to back up one menu.
8. Select Selectable Boot Settings.
9. Under Selectable Boot Settings, enable Selectable Boot.
10. Under (Primary) Boot Port Name, LUN, select the IBM device that will be providing the storage for SAN boot. At the Select LUN prompt, select the first supported LUN, which is LUN 0.
11. This returns to the previous screen, which will now have information under (Primary) Boot Port Name, LUN for the device that you selected in the previous step.
12. Press the Back key twice to exit the menus. Then select Save Changes to save the changes.
13. Select the second HBA and repeat steps 4-12.
14. Unplug the fibre-channel cable from the second HBA and plug the fibre-channel cable into the first HBA so that you have a single path from the first HBA to the SAN device.
15. Insert the Windows 2000 with latest Service Pack CD-ROM into the CD-ROM drive.
16. Restart the host system.
17. At the very first Windows 2000 installation screen, quickly press F6 to install a third-party device.
18. Select S to specify an additional device.
19. Insert the diskette with the Qlogic driver into the diskette drive and press Enter.
20. Continue with the Windows 2000 installation process. Remember to select the IBM SAN device seen by the Qlogic HBA as the device to install Windows 2000. Continue with the OS installation.
21. After Windows 2000 is successfully installed on the SAN boot device, shut down the host system.
22. Unplug the fibre-channel cable from the first HBA and plug the fibre-channel cable into the second HBA so that you have a single connection from the second HBA to the IBM SAN boot device.
23. Restart the host system to boot from SAN.
24. Install the latest SDD version on your host system and reboot.
25. To add multipath support to the SAN boot device, complete the following steps:
   a. Shut down the host system.
   b. Configure the SAN to allow additional paths to the SAN boot device if needed.
   c. Connect all fibre-channel cables.
26. Restart the host system.
Booting from a SAN device with Windows 2000 and the SDD using an Emulex HBA <Firmware v3.92a2, v1.90.x5> or later
Note: The Automatic LUN Mapping checkbox of the Emulex Configuration Settings should be selected so that both HBA ports can see all assigned LUNs. Complete the following steps to set up a SAN boot device with Windows 2000 and SDD using an Emulex HBA: 1. Configure the SAN environment so that both Emulex HBAs in the host system can see the SAN boot device. Ensure that there is a single-path connection from each of the Emulex HBA to the SAN boot device. 2. Turn on the host system with 2 fibre-channel cables connected to both HBAs. Press Alt+E to go to Emulex BIOS Utilities. 3. Select the first HBA. 4. Select Configure HBA Parameter Settings. 5. Select Option 1 to enable BIOS for this HBA. 6. Press the Page Up key to go back. Then select Configure boot device. 7. Select the first unused boot device for Select Boot Entry from the List Of Saved Boot Devices.
8. Select 01 for Select The Two Digit Number Of The Desired Boot Device.
9. Enter 00 for Enter Two Digit Of Starting LUNs (hexadecimal).
10. Select the device number 01 for Enter Selection For Starting LUN.
11. Select Boot Device Via WWPN.
12. Page up and select the second HBA. Repeat steps 4-11 to configure boot support for this HBA.
13. Unplug the fibre-channel cable from the second HBA and plug the fibre-channel cable into the first HBA so that you have a single path from the first HBA to the SAN device.
14. Insert the Windows 2000 with latest Service Pack CD-ROM into the CD-ROM drive.
15. Restart the host system.
16. At the very first Windows 2000 installation screen, quickly press F6 to install a third-party device.
17. Select S to specify an additional device.
18. Insert the diskette with the Emulex HBA driver into the diskette drive and press Enter.
19. Continue with the Windows 2000 installation process. Remember to select the IBM SAN device that is seen by the Emulex HBA as the device to install Windows 2000. Continue with the OS installation.
20. After Windows 2000 is successfully installed on the SAN boot device, shut down the host system.
21. Unplug the fibre-channel cable from the first HBA and plug the fibre-channel cable into the second HBA so that you have a single-path connection from the second HBA to the IBM SAN boot device.
22. Restart the host system to boot from SAN.
23. Install the latest SDD and reboot.
24. To add multipath support to the SAN boot device, complete the following steps:
   a. Shut down the host system.
   b. Configure the SAN to add multipaths to the SAN boot device if needed.
   c. Reconnect all fibre-channel cables.
25. Restart the host system.
Limitations when you boot from a SAN boot device on a Windows 2000 host
The following limitations apply when you boot from a SAN boot device on a Windows 2000 host: v You cannot use the same HBA as both the SAN boot device and a clustering adapter. This is a Microsoft physical limitation. v The following limitations might apply to a host system that is running at a BIOS or Firmware level older than the specified one. 1. If you reboot a system with adapters while the primary path is in failed state, you must: a. Manually disable the BIOS on the first adapter. b. Manually enable the BIOS on the second adapter.
2. You cannot enable the BIOS for both adapters at the same time. If the BIOS for both adapters is enabled at the same time and there is path failure on the primary adapter, you will receive the error message INACCESSIBLE_BOOT_DEVICE when the system restarts.
where device type is the device type to which you are migrating.

To migrate the boot disk from a Model 2105 to a Model 2107:
1. Enter the datapath set bootdiskmigrate 2107 command on the remote boot host that needs to be migrated.
2. Shut down the host.
3. Using Metro Mirror, PPRC, or any other tool, migrate the data to the 2107 disk.
4. Boot the host from the 2107 disk instead of the 2105 disk.
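The general form of the command that is used in step 1 is shown below; 2107 is the device type used in this example migration:

   datapath set bootdiskmigrate device_type

For example, datapath set bootdiskmigrate 2107 prepares the remote boot host for migration of its SAN boot disk to a 2107 (DS8000) device.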
When running Windows 2000 clustering, clustering failover might not occur when the last path is being removed from the shared resources. See Microsoft article Q294173 for additional information. Windows 2000 does not support dynamic disks in the MSCS environment.
In a Reserve/Release clustering environment, the datapath set adapter offline command does not change the condition of a path if the path is active or being reserved. If you issue the command, the following message is displayed:

to preserve access some paths left online
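The adapter number that the command takes is the Adpt# value that datapath query adapter reports; adapter 0 below is only an example:

   datapath set adapter 0 offline

If any path on that adapter is active or reserved in the clustering environment, those paths are left online and the message shown above is returned.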
Complete the following steps to configure a Windows 2000 cluster with SDD: 1. On both server_1 and server_2, configure SAN devices on supported storage as shared for all HBAs . 2. Install the latest SDD on server_1. For installation instructions, see Installing SDD on page 372. 3. Connect fibre-channel cables from server_1 to the supported storage device, and restart server_1. 4. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_1. 5. Click Start All Programs Administrative Tools Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. 6. Format the raw devices with NTFS and assign drive letters for all SAN devices that are going to be used as MSCS resources. Ensure that you keep track of the assigned drive letters on server_1. 7. Shut down server_1. 8. Install the latest SDD on server_2. For installation instructions, see Installing SDD on page 372. 9. Connect fibre-channel cables from server_2 to the supported storage device, and restart server_2. 10. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_2. 11. Click Start All Programs Administrative Tools Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. Verify that the assigned drive letters for MSCS resources on server_2 match the assigned drive letters on server_1. 12. Insert the Windows 2000 CD-ROM into the CD-ROM drive and install the MSCS software on server_2. 13. Restart server_1. 14. Insert the Windows 2000 CD-ROM into the CD-ROM drive and install the MSCS software on server_1 as the second node of the MSCS cluster. Information about installing a Windows 2000 cluster can be found at: www.microsoft.com/windows2000/techinfo/planning/server/clustersteps.asp
2. On node A, follow the instructions from Upgrading SDD on page 374. 3. After node A is started, move all resources from node B to node A. 4. On node B, follow the instructions from Upgrading SDD on page 374.
5. Click Stop.
Unsupported environments
SDD does not support the following environments:
v A host system with both a SCSI channel and a fibre-channel connection to a shared LUN.
v Single-path mode during code distribution and activation of LMC, or during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement.
v The Windows Server 2003 Web edition.
v SCSI connectivity to DS8000 and DS6000 devices (DS8000 and DS6000 do not support SCSI connectivity).
v IBM 2107xxx, for DS8000 devices
v IBM 1750xxx, for DS6000 devices
v IBM 2145, for SAN Volume Controller devices

where xxx represents the disk storage system model number.
SCSI requirements
To use the SDD SCSI support, ensure that your host system meets the following requirements: v No more than 32 SCSI adapters are attached. v A SCSI cable connects each SCSI host adapter to an ESS port. (DS8000 and DS6000 do not support SCSI connectivity.) v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two independent paths are configured between the host and the subsystem. Note: SDD also supports one SCSI adapter on the host system. With single-path access, concurrent download of licensed machine code is supported with SCSI devices. However, the load-balancing and failover features are not available. v For information about the SCSI adapters that can attach to your Windows Server 2003 host system, go to the following Web site: www.ibm.com/servers/storage/support
Fibre-channel requirements
To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v Depending on the fabric and supported storage configuration, the number of fibre-channel adapters attached should be less than or equal to 32 / (n * m), where n is the number of supported storage ports and m is the number of paths that have access to the supported storage device from the fabric.
v A fiber-optic cable connects each fibre-channel adapter to a disk storage system port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two independent fibre-channel paths are installed. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure.

For information about the fibre-channel adapters that can attach to your Windows Server 2003 host system, go to the following Web site:

www.ibm.com/servers/storage/support
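To illustrate the adapter-count guideline in the first requirement above (the numbers are examples only): if the supported storage device presents n = 4 ports to the fabric and each adapter has m = 2 paths to the supported storage device through the fabric, then

   32 / (n * m) = 32 / (4 * 2) = 4

so no more than four fibre-channel adapters should be attached in that configuration.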
To get the latest recommendation for host adapter settings for a disk storage system, see the HBA interoperability search tool at the following Web site:

www-03.ibm.com/servers/storage/support/config/hba/index.wss

SDD supports the Emulex HBA with full-port driver. When you configure the Emulex HBA for multipath functions, select Allow Multiple Paths to SCSI Targets in the Emulex Configuration Tool panel.
Installing SDD
This section describes first time installation, upgrading, displaying current versions, and upgrading from Windows NT.
4. When the setup program is finished, you are asked if you want to reboot. If you answer y, the setup program restarts your Windows 2003 system immediately. Follow the instructions to restart. Otherwise the setup program exits, and you need to manually restart your Windows 2003 system to activate the new installation. 5. Shut down your Windows 2003 host system. 6. Reconnect all cables that connect the host bus adapters and the supported storage devices, if needed. 7. Change any zoning information that needs to be updated. 8. Restart your Windows 2003 host system. After completing the installation procedures and when you log on again, you will see a Subsystem Device Driver entry in your Program menu containing the following selections: 1. Subsystem Device Driver Management 2. SDD Technical Support Web site 3. README Notes: 1. You can verify that SDD has been successfully installed by issuing the datapath query device command. You must issue the datapath command from the datapath directory. If the command runs, SDD is installed. You can also use the following procedure to verify that SDD has been successfully installed: a. Click Start Programs Administrative Tools Computer Management. b. Double-click Device Manager. c. Expand Disk drives in the right pane. IBM 2105 indicates an ESS device IBM 2107 indicates a DS8000 device IBM 1750 indicates a DS6000 device IBM 2145 indicates a SAN Volume Controller device In Figure 8 on page 390, there are six ESS devices connected to the host and four paths to each of the ESS devices. The Device manager shows six IBM 2105xxx SDD disk devices and 24 IBM 2105xxx SCSI disk devices.
Figure 8. Example showing ESS devices to the host and path access to the ESS devices in a successful SDD installation on a Windows Server 2003 host system
2. You can also verify the current version of SDD. For more information, go to Displaying the current version of the SDD on page 391.
3. When the setup.exe program is finished, you will be asked if you want to reboot. If you answer y, the setup.exe program will restart your Windows 2003 system immediately. Follow the instructions to restart. Otherwise, the setup.exe program exits, and you need to manually restart your Windows 2003 system to activate the new installation. 4. Shut down your Windows 2003 host system. 5. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed. 6. Change any zoning information that needs to be updated. 7. Restart your Windows 2003 host system.
If you have previously installed a 1.3.1.1 (or earlier) version of SDD, you will see an Upgrade? question while the setup program is running. You should answer y to this question to continue the installation. Follow the displayed setup instructions to complete the installation. If you currently have SDD 1.3.1.2 or 1.3.2.x installed on your Windows 2000 host system, answer y to the Upgrade? question. 4. When the setup program is finished, you are asked if you want to reboot. If you answer y, the setup program restarts your Windows Server 2003 system immediately. Follow the instructions to restart. Otherwise the setup program exits, and you need to manually restart your Windows Server 2003 system to activate the new installation. You can verify that SDD has been successfully installed by issuing the datapath query device command. If the command runs, then SDD is installed. You can also verify the current version of SDD. For more information, see Displaying the current version of the SDD.
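A quick check from a command prompt, assuming the default SDD installation directory (adjust the path if SDD was installed in a different location):

   cd /d "C:\Program Files\IBM\Subsystem Device Driver"
   datapath query device

If SDD is installed correctly, the command lists the configured devices and their paths; if the command cannot be found or does not run, SDD is not installed.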
d. In the sddbus.sys properties window, click Version. The file version and copyright information about the sddbus.sys file is displayed. 2. Run datapath query version command (requires SDD 1.6.1.x or later).
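For example, the following command reports the level of the SDD package that is currently installed; the exact output format varies by release:

   datapath query version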
3. Enter datapath query adapter and press Enter. The output includes information about all the installed adapters. In the example shown in the following output, one HBA is installed:
Active Adapters : 1

Adpt#         Adapter Name   State     Mode    Select  Errors  Paths  Active
    0      Scsi Port4 Bus0  NORMAL   ACTIVE       592       0      6       6
4. Enter datapath query device and press Enter. In the example shown in the following output, six devices are attached to the SCSI path:
Total Devices : 6
DEV#:   0  DEVICE NAME: Disk1 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06D23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN   NORMAL       108        0

DEV#:   1  DEVICE NAME: Disk2 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06E23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk2 Part0    OPEN   NORMAL        96        0

DEV#:   2  DEVICE NAME: Disk3 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06F23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk3 Part0    OPEN   NORMAL        96        0

DEV#:   3  DEVICE NAME: Disk4 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07023922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk4 Part0    OPEN   NORMAL        94        0

DEV#:   4  DEVICE NAME: Disk5 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07123922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk5 Part0    OPEN   NORMAL        90        0

DEV#:   5  DEVICE NAME: Disk6 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07223922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk6 Part0    OPEN   NORMAL        98        0
3. Enter datapath query device and press Enter. The output should include information about any additional devices that were installed. In this example, the output includes information about the new host bus adapter that was assigned. The following output is displayed:
Total Devices : 6
DEV#:   0  DEVICE NAME: Disk1 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06D23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN   NORMAL       108        0
    1    Scsi Port5 Bus0/Disk1 Part0    OPEN   NORMAL        96        0

DEV#:   1  DEVICE NAME: Disk2 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06E23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk2 Part0    OPEN   NORMAL        96        0
    1    Scsi Port5 Bus0/Disk2 Part0    OPEN   NORMAL        95        0

DEV#:   2  DEVICE NAME: Disk3 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06F23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk3 Part0    OPEN   NORMAL        96        0
    1    Scsi Port5 Bus0/Disk3 Part0    OPEN   NORMAL        94        0

DEV#:   3  DEVICE NAME: Disk4 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07023922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk4 Part0    OPEN   NORMAL        94        0
    1    Scsi Port5 Bus0/Disk4 Part0    OPEN   NORMAL        96        0

DEV#:   4  DEVICE NAME: Disk5 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07123922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk5 Part0    OPEN   NORMAL        90        0
    1    Scsi Port5 Bus0/Disk5 Part0    OPEN   NORMAL        99        0

DEV#:   5  DEVICE NAME: Disk6 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07223922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk6 Part0    OPEN   NORMAL        98        0
    1    Scsi Port5 Bus0/Disk6 Part0    OPEN   NORMAL        79        0
--> setup -s -u : silent uninstall
--> setup -s -u -n : silent uninstall; no reboot (requires SDD 1.6.0.6 or later)
Booting a SAN device with Windows Server 2003 and the SDD using Qlogic HBA <BIOS 1.43> or later
Complete the following steps to install the SDD:
1. Configure the SAN environment so that both Qlogic HBAs in the host system can see the SAN boot device. Ensure that there is a single-path connection from each of the Qlogic HBAs to the SAN boot device.
2. Turn on the host system with 2 fibre-channel cables connected to both HBAs. Press CTRL-Q when the HBA banner appears during the boot process.
3. Select the first HBA from the displayed list.
4. Select Configuration Settings.
5. Select Host Adapter Settings.
6. Select Host Adapter BIOS and enable it.
7. Press the Page Up key to back up one menu.
8. Select Selectable Boot Settings.
9. Under Selectable Boot Settings, enable Selectable Boot. 10. Under (Primary) Boot Port Name, LUN, select the IBM device that will be providing the storage for SAN boot. At the Select LUN prompt, select the first supported LUN, which is LUN 0. 11. This returns you to the previous screen, which now has information under (Primary) Boot Port Name, LUN for the device you selected in the previous step. 12. Press the Page Up key twice to exit the menus. Select Save Changes to save the changes. 13. Select the second HBA and repeat steps 4-12. 14. Unplug the fibre-channel cable from second HBA and plug the fibre-channel cable to the first HBA so that you have a single path from first HBA to the SAN device. 15. Insert the Windows Server 2003 with latest Service Pack CD-ROM into the CD-ROM drive. 16. Restart the host system. 17. At the first Windows Server 2003 installation screen, quickly press F6 to install a third-party device. 18. Select S to specify an additional device. 19. Insert the diskette with the Qlogic driver into diskette drive and press Enter. 20. Continue with the Windows Server 2003 installation process. Remember to select the SAN device seen by the Qlogic HBA as the device to install Windows Server 2003. 21. After Windows Server 2003 is successfully installed on the SAN boot device, shut down the system. 22. Unplug the fibre-channel cable from first HBA and plug the fibre-channel cable to the second HBA so that you have a single path from second HBA to the SAN device. 23. Restart the host system to boot from the SAN.
24. Install the latest SDD version on your host system and restart. 25. To add multipath support to the SAN boot device, complete the following steps: a. Shut down the host system b. Configure the SAN to allow additional paths to the SAN boot device, if needed. c. Connect all FC cables. 26. Restart the host system.
Booting a SAN device with IA64-bit Windows Server 2003 and the SDD using a Qlogic HBA
Complete the following steps to install the SDD: 1. Load EFI code v1.07 into QLogic HBA flash. 2. Build the QLogic EFI code using the ISO file. a. Insert the EFI code CD-ROM in the CD-ROM drive. b. At the EFI prompt, enter the following commands: fs0 flasutil After some time, the flash utility starts. It displays the addresses of all available QLogic adapters. c. Select the address of each HBA and select f option to load code into flash memory. 3. Enable the boot option in the QLogic EFI configuration. a. At EFI shell prompt, enter drivers -b. A list of installed EFI drivers is displayed. b. Locate the driver named QlcFC SCSI PASS Thru Driver. Determine the DRVNUM of that driver. 1) Enter DrvCfg DRVNUM. 2) A list of adapters under this driver is displayed. Each adapter has its own CTRLNUM. 3) For each HBA for which you need to configure boot option, enter Drvcfg -s DRVNUM CTRLNUM. c. At the QLcConfig> prompt, enter b to enable the boot option, enter c for the connection option, or enter d to display the storage back-end WWN. d. The topology should be point-to-point. e. Exit the EFI environment. f. Reboot the system. 4. Connect the USB drive to the system. 5. Insert the disk that contains the ramdisk.efi file. You can obtain this file from the Intel Application Tool Kit in the binaries\sal64 directory. See www.intel.com/technology/efi/ 6. The USB drive should be attached to fs0. Enter the following command: fs0: load ramdisk.efi This will create virtual storage.
7. Enter map -r to refresh. 8. Insert the diskette that contains the QLogic driver for your QLA HBAs. Assume that fs0 is virtual storage and fs1 is the USB drive. You can enter map -b to find out fs0: 9. Enter copy fs1:\*.* This will copy the QLogic driver to the virtual storage. 10. Install the Windows Server 2003 64-bit OS on the SAN device. a. At the first Windows 2003 installation panel, press F6 to install a third-party device. b. Use the QLogic driver loaded from virtual storage c. Continue to install Windows 2003. d. Select the first ESS volume seen by the QLogic HBA as the device on which to install Windows Server 2003. e. Install the Windows Server 2003 Service Pack, if applicable. 11. Install SDD. 12. Add multipaths to ESS.
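For reference, the EFI shell commands that steps 3 and 5 - 9 use are collected below in one sequence. The driver number (25) and controller number (0) are placeholders only; substitute the values that the drivers -b output reports for the QlcFC SCSI PASS Thru Driver on your system, and note that the fs0 and fs1 mappings can change after map -r, as described in steps 6 - 9:

   drivers -b
   drvcfg 25
   drvcfg -s 25 0
   fs0:
   load ramdisk.efi
   map -r
   copy fs1:\*.*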
Booting from a SAN device with Windows Server 2003 and SDD using an EMULEX HBA <Firmware v3.92a2, v1.90.x5> or later
Complete the following steps to set up a SAN boot device with Windows Server 2003 and SDD using an Emulex HBA: 1. Configure the SAN Environment so that both Emulex HBAs in the host system can see the SAN Boot Device. Ensure that there is a single-path connection from each of the Emulex HBAs to the SAN boot device. 2. Power up the host system with 2 fibre-channel cables connected to both HBAs. Press Alt+E to go to the Emulex Bios Utilities. 3. Select the first HBA. 4. Select Configure HBA Parameter Settings. 5. Select Option 1 to enable BIOS for this HBA. 6. Press Page-up to go back and select Configure boot device. 7. Select first unused boot device for Select Boot Entry from the List Of Saved Boot Devices. 8. Select 01 for Select The Two Digit Number Of The Desired Boot Device. 9. Enter 00 for Enter Two Digit Of Starting LUNs (hexadecimal). 10. Select device number 01 for Enter Selection For Starting LUN. 11. Select Boot Device Via WWPN. 12. Page up and select the second HBA. Repeat steps 4-11 to configure boot support for this HBA. 13. Unplug the fibre-channel cable from second HBA and plug the fibre-channel cable to the first HBA so that you have a single path from first HBA to the SAN device. 14. Insert the Windows Server 2003 with the latest Service Pack CD-ROM into the CD-ROM drive. 15. Restart the host system. 16. At the very first Windows Server 2003 installation screen, quickly press F6 to install a third-party device. 17. Select S to specify an additional device.
18. Insert the diskette with Emulex HBA driver into diskette drive and press Enter. 19. Continue with the Windows Server 2003 installation process. Remember to select the SAN device seen by the Emulex HBA as the device to install Windows Server 2003. Continue with the OS installation. 20. After Windows Server 2003 is successfully installed on the SAN boot device, shut down the host system. 21. Unplug the fibre-channel cable from first HBA and plug the fibre-channel cable to the second HBA so that you have a single path from second HBA to the SAN device. 22. Restart the host system to boot from the SAN. 23. Install the latest SDD and reboot. 24. To add multipath support to SAN boot device, complete the following steps: a. Shut down the host system. b. Configure the SAN to add multipaths to the SAN boot device if needed. c. Reconnect all fibre-channel cables. 25. Restart the host system.
where device type is the device type to which you are migrating.

To migrate the boot disk from a Model 2105 to a Model 2107:
1. Enter the datapath set bootdiskmigrate 2107 command on the remote boot host that needs to be migrated.
2. Shut down the host.
3. Using Metro Mirror, PPRC, or any other tool, migrate the data to the 2107 disk.
4. Boot the host from the 2107 disk instead of the 2105 disk.
Complete the following steps to configure a Windows Server 2003 cluster with SDD: 1. On both server_1 and server_2, configure SAN devices on supported storage as shared for all HBAs. 2. Install the latest SDD on server_1. For installation instructions, see Installing SDD on page 388. 3. Connect fibre-channel cables from server_1 to the supported storage, and restart server_1. 4. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_1. 5. Click Start All Programs Administrative Tools Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. 6. Format the raw devices with NTFS and assign drive letters for all SAN devices that are going to be used as MSCS resources. Ensure that you keep track of the assigned drive letters on server_1. 7. Shut down server_1. 8. Install the latest SDD on server_2. For installation instructions, see Installing SDD on page 388. 9. Connect fibre-channel cables from server_2 to the supported storage device, and restart server_2 10. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_2. 11. Click Start All Programs Administrative Tools Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. Verify that the assigned drive letters for MSCS resources on server_2 match the assigned drive letters on server_1. 12. Insert the Windows Server 2003 CD-ROM into the CD-ROM drive and install the MSCS software on server_2. 13. Restart server_1.
14. Insert the Windows Server 2003 CD-ROM into the CD-ROM drive and install the MSCS software on server_1 as the second node of the MSCS cluster. Information about installing a Windows 2003 cluster can be found in a file, confclus.exe, located at:
http://www.microsoft.com/downloads/details.aspx?familyid=96F76ED7-9634-4300-9159-89638F4B4EF7&displaylang=en
Chapter 11. Using SDDDSM on a Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system
Subsystem Device Driver Device Specific Module (SDDDSM) provides multipath I/O support based on the MPIO technology of Microsoft. SDDDSM is a device-specific module designed to provide support for supported storage devices.
This chapter provides procedures for you to install, configure, use, and remove SDDDSM on a Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system that is attached to a supported storage device. Install the package from the %ProgramFiles%\IBM\SDDDSM directory of the SDDDSM CD-ROM or from the location where the SDDDSM package was saved. For updated and additional information that is not included in this chapter, see the readme file on the CD-ROM or visit the SDDDSM website:

www.ibm.com/servers/storage/support/software/sdd
v The Microsoft Visual C++ 2012 Redistributable package that can be downloaded from the Microsoft Corporation website. The HBAInfo utility requires this package.
Unsupported environments
SDDDSM does not support the following environments:
v Single-path mode during code distribution and activation of LMC, or during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement.
v The Windows Server 2003 Web edition.
v SCSI connectivity to DS8000 and DS6000 devices (DS8000 and DS6000 do not support SCSI connectivity).
Fibre-channel requirements
To use the SDDDSM fibre-channel support, ensure that your host system meets the following requirements: v No more than 32 fibre-channel adapters are attached. v A fiber-optic cable connects each fibre-channel adapter to a disk storage system port. v If you need the SDDDSM I/O load-balancing and failover features, ensure that a minimum of two fibre-channel adapters are installed. Note: If your host has only one fibre-channel adapter, it requires you to connect through a switch to multiple disk storage system ports. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure. | | | For information about the fibre-channel adapters that can attach to your Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system, go to the following website: www.ibm.com/servers/storage/support
Installing SDDDSM
You can install SDDDSM either from a CD-ROM or download. After it is installed, you can update SDDDSM or display the current version number.
5. Select the CD-ROM drive. A list of all the installed directories on the compact disc is displayed. 6. If you have the zip file for the SDDDSM package available, select the %ProgramFiles%\IBM\SDDDSM installation subdirectory and go to step 9. 7. If you still do not have the zip file for the SDDDSM package available, go to the SDD website and download and save it to a directory. 8. Extract the zip file for the SDDDSM package to a directory and go to that directory. 9. Run the setup.exe program. Follow the instructions. 10. Shut down your Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system. 11. Connect additional cables to your storage if needed. 12. Make any necessary zoning configuration changes. 13. Restart your Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system. After completing the installation procedures and when you log on again, you will see an SDDDSM entry in your Program menu containing the following selections: 1. Subsystem Device Driver DSM 2. SDDDSM Technical Support website 3. README Notes: 1. You can verify that SDDDSM has been successfully installed by issuing the datapath query device command. You must issue the datapath command from the datapath directory. If the command runs, SDDDSM is installed. You can also use the following operation to verify that SDDDSM has been successfully installed: a. Click Start Programs Administrative Tools Computer Management. b. Double-click Device Manager. c. Expand Disk drives in the right pane. In Figure 9 on page 407, there are eight SAN Volume Controller devices connected to the host and four paths to each of the SAN Volume Controller devices. The Device manager shows eight 2145 Multipath Disk Devices and 32 2145 SDDDSM SCSI Devices.
Figure 9. Example showing SAN Volume Controller devices to the host and path access to the SAN Volume Controller devices in a successful SDDDSM installation on a Windows Server 2003 host system
2. You can also verify the current version of SDDDSM. For more information, go to Displaying the current version of SDDDSM on page 408.
3. When the setup.exe program is finished, you will be asked if you want to reboot. If you answer y, the setup.exe program will restart your SDDDSM system immediately. Follow the instructions to restart. Otherwise, the setup.exe program exits, and you need to manually restart your SDDDSM system to activate the new installation. 4. Shut down your SDDDSM host system. 5. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed. 6. Change any zoning information that needs to be updated. 7. Restart your SDDDSM host system.
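As with the verification note earlier in this section, the datapath command must be issued from the SDDDSM installation directory. A minimal check, assuming the default %ProgramFiles%\IBM\SDDDSM location (adjust the path if SDDDSM is installed elsewhere):

   cd /d "%ProgramFiles%\IBM\SDDDSM"
   datapath query device

If SDDDSM is installed correctly, the command lists the configured devices and their paths.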
Upgrading SDDDSM
Use the following procedure to upgrade SDDDSM. 1. Log in as administrator. 2. Open Windows Explorer and go to the directory where the SDDDSM package is located. 3. Double-click the file setup.exe. 4. Follow the instructions to continue with SDDDSM setup procedure. 5. When the upgrade is complete, SDDDSM will ask you to reboot. Answer yes to reboot the system and activate the new SDDDSM. You can check the SDDDSM version to verify that SDDDSM has been successfully upgraded. For more information, see Displaying the current version of SDDDSM.
Configuring SDDDSM
Use these topics to configure SDDDSM.
Attention: Ensure that SDDDSM is installed and activated before you add new paths to a device. Otherwise, the Windows Server 2003, Windows Server 2008, or Windows Server 2012 server could lose the ability to access existing data on that device. Before adding any new hardware, review the configuration information for the adapters and devices currently on your Windows Server 2003, Windows Server 2008, or Windows Server 2012 server. Complete the following steps to display information about the adapters and devices: 1. You must log on as an administrator user to have access to the Windows Server 2003, Windows Server 2008, or Windows Server 2012 Computer Management. 2. Open the DOS prompt window. v On a Windows Server 2003 or a Windows Server 2008: Click Start Program Subsystem Device Driver DSM Subsystem Device Driver DSM. v On a Windows Server 2012: Click Start Screen Apps Subsystem Device Driver DSM. 3. Enter datapath query adapter and press Enter. The output includes information about all the installed adapters. In the following example, the output shows that one HBA is installed:
Active Adapters : 1

Adpt#         Adapter Name   State     Mode    Select  Errors  Paths  Active
    0      Scsi Port4 Bus0  NORMAL   ACTIVE       592       0      6       6
4. Enter datapath query device and press Enter. In the following example, the output shows that six devices are attached to the SCSI path:
Total Devices : 6
DEV#:   0  DEVICE NAME: Disk1 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06D23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN   NORMAL       108        0

DEV#:   1  DEVICE NAME: Disk2 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06E23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk2 Part0    OPEN   NORMAL        96        0

DEV#:   2  DEVICE NAME: Disk3 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 06F23922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk3 Part0    OPEN   NORMAL        96        0

DEV#:   3  DEVICE NAME: Disk4 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07023922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk4 Part0    OPEN   NORMAL        94        0

DEV#:   4  DEVICE NAME: Disk5 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07123922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk5 Part0    OPEN   NORMAL        90        0

DEV#:   5  DEVICE NAME: Disk6 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 07223922
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port4 Bus0/Disk6 Part0    OPEN   NORMAL        98        0
1. Open the DOS prompt window. v On a Windows Server 2003 or a Windows Server 2008: Click Start Program Subsystem Device Driver DSM Subsystem Device Driver DSM. v On a Windows Server 2012: Click Start Apps Subsystem Device Driver DSM. 2. Type datapath query adapter and press Enter. The output includes information about any additional adapters that were installed. In the example shown in the following output, an additional HBA has been installed:
Active Adapters : 2

Adpt#         Adapter Name   State     Mode    Select  Errors  Paths  Active
    0      Scsi Port2 Bus0  NORMAL   ACTIVE    391888     844     16      16
    1      Scsi Port3 Bus0  NORMAL   ACTIVE    479686     566     16      16
3. Type datapath query device and press Enter. The output should include information about any additional devices that were installed. In this example, the output includes information about the new HBA and the new device numbers that were assigned. The following output is displayed:
Total Devices : 8

DEV#:   0  DEVICE NAME: \Device\Harddisk2\DR0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801968009A800000000000023
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk2 Path0    OPEN   NORMAL      3079      103
    1    Scsi Port2 Bus0/Disk2 Path1    OPEN   NORMAL        43        6
    2    Scsi Port3 Bus0/Disk2 Path2    OPEN   NORMAL     45890       72
    3    Scsi Port3 Bus0/Disk2 Path3    OPEN   NORMAL        30        4

DEV#:   1  DEVICE NAME: \Device\Harddisk3\DR0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801968009A800000000000025
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk3 Path0    OPEN   NORMAL     51775      101
    1    Scsi Port2 Bus0/Disk3 Path1    OPEN   NORMAL        34        6
    2    Scsi Port3 Bus0/Disk3 Path2    OPEN   NORMAL     64113       68
    3    Scsi Port3 Bus0/Disk3 Path3    OPEN   NORMAL        30        4

DEV#:   2  DEVICE NAME: \Device\Harddisk4\DR0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801968009A800000000000024
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk4 Path0    OPEN   NORMAL     43026      124
    1    Scsi Port2 Bus0/Disk4 Path1    OPEN   NORMAL       440        6
    2    Scsi Port3 Bus0/Disk4 Path2    OPEN   NORMAL     51992       63
    3    Scsi Port3 Bus0/Disk4 Path3    OPEN   NORMAL     11152        4

DEV#:   3  DEVICE NAME: \Device\Harddisk5\DR0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801968009A800000000000026
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk5 Path0    OPEN   NORMAL     47507      106
    1    Scsi Port2 Bus0/Disk5 Path1    OPEN   NORMAL       402        6
    2    Scsi Port3 Bus0/Disk5 Path2    OPEN   NORMAL     51547       76
    3    Scsi Port3 Bus0/Disk5 Path3    OPEN   NORMAL     10930        4

DEV#:   4  DEVICE NAME: \Device\Harddisk6\DR0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801968009A800000000000027
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk6 Path0    OPEN   NORMAL     45604      107
    1    Scsi Port2 Bus0/Disk6 Path1    OPEN   NORMAL        45        6
    2    Scsi Port3 Bus0/Disk6 Path2    OPEN   NORMAL     60839       76
    3    Scsi Port3 Bus0/Disk6 Path3    OPEN   NORMAL        31        4

DEV#:   5  DEVICE NAME: \Device\Harddisk7\DR0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801968009A800000000000029
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk7 Path0    OPEN   NORMAL     46439       80
    1    Scsi Port2 Bus0/Disk7 Path1    OPEN   NORMAL       423        6
    2    Scsi Port3 Bus0/Disk7 Path2    OPEN   NORMAL     50638       76
    3    Scsi Port3 Bus0/Disk7 Path3    OPEN   NORMAL     10226        4

DEV#:   6  DEVICE NAME: \Device\Harddisk8\DR0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801968009A800000000000028
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk8 Path0    OPEN   NORMAL     42857       92
    1    Scsi Port2 Bus0/Disk8 Path1    OPEN   NORMAL        46        6
    2    Scsi Port3 Bus0/Disk8 Path2    OPEN   NORMAL     61256       53
    3    Scsi Port3 Bus0/Disk8 Path3    OPEN   NORMAL        31        4

DEV#:   7  DEVICE NAME: \Device\Harddisk9\DR0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801968009A80000000000002A
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk9 Path0    OPEN   NORMAL      2161       62
    1    Scsi Port2 Bus0/Disk9 Path1    OPEN   NORMAL    108007       27
    2    Scsi Port3 Bus0/Disk9 Path2    OPEN   NORMAL     50767       50
    3    Scsi Port3 Bus0/Disk9 Path3    OPEN   NORMAL     10214        4
3. Type datapath query device and press Enter. Continuing with the earlier example, the output includes information about three devices that are attached to the SCSI path:
DEV#:   0  DEVICE NAME: Disk1 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 7502281102E   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN   NORMAL       518        0
    1    Scsi Port2 Bus0/Disk1 Part0    OPEN   NORMAL       507        0
    2    Scsi Port3 Bus0/Disk1 Part0    OPEN   NORMAL       450        0
    3    Scsi Port3 Bus0/Disk1 Part0    OPEN   NORMAL       503        0

DEV#:   1  DEVICE NAME: Disk2 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 7502281102F   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN   NORMAL       531        0
    1    Scsi Port2 Bus0/Disk2 Part0    OPEN   NORMAL       521        0
    2    Scsi Port3 Bus0/Disk2 Part0    OPEN   NORMAL       431        0
    3    Scsi Port3 Bus0/Disk2 Part0    OPEN   NORMAL       495        0

DEV#:   2  DEVICE NAME: Disk3 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75022811030   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN   NORMAL       519        0
    1    Scsi Port2 Bus0/Disk3 Part0    OPEN   NORMAL       491        0
    2    Scsi Port3 Bus0/Disk3 Part0    OPEN   NORMAL       456        0
    3    Scsi Port3 Bus0/Disk3 Part0    OPEN   NORMAL       512        0
4. Remove two paths and run the datapath query device command. Continuing with the earlier example, the output shows that three devices are attached to the SCSI path. The output includes information about the paths that are not removed.
DEV#:   0  DEVICE NAME: Disk1 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 7502281102E   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN   NORMAL       518        0
    3    Scsi Port3 Bus0/Disk1 Part0    OPEN   NORMAL       503        0

DEV#:   1  DEVICE NAME: Disk2 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 7502281102F   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN   NORMAL       531        0
    3    Scsi Port3 Bus0/Disk2 Part0    OPEN   NORMAL       495        0

DEV#:   2  DEVICE NAME: Disk3 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75022811030   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN   NORMAL       519        0
    3    Scsi Port3 Bus0/Disk3 Part0    OPEN   NORMAL       512        0
5. Add the two paths again and run the datapath query device command. Continuing with the earlier example, the output shows that three devices are attached to the SCSI path. The path numbers for the existing paths do not change. The old path numbers are reassigned to the paths that you added in this step.
DEV#:   0  DEVICE NAME: Disk1 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 7502281102E   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN   NORMAL       518        0
    1    Scsi Port2 Bus0/Disk1 Part0    OPEN   NORMAL       507        0
    2    Scsi Port3 Bus0/Disk1 Part0    OPEN   NORMAL       450        0
    3    Scsi Port3 Bus0/Disk1 Part0    OPEN   NORMAL       503        0

DEV#:   1  DEVICE NAME: Disk2 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 7502281102F   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN   NORMAL       531        0
    1    Scsi Port2 Bus0/Disk2 Part0    OPEN   NORMAL       521        0
    2    Scsi Port3 Bus0/Disk2 Part0    OPEN   NORMAL       431        0
    3    Scsi Port3 Bus0/Disk2 Part0    OPEN   NORMAL       495        0

DEV#:   2  DEVICE NAME: Disk3 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75022811030   LUN SIZE: 1.0 GB
============================================================================
Path#              Adapter/Hard Disk   State     Mode    Select   Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN   NORMAL       519        0
    1    Scsi Port2 Bus0/Disk3 Part0    OPEN   NORMAL       491        0
    2    Scsi Port3 Bus0/Disk3 Part0    OPEN   NORMAL       456        0
    3    Scsi Port3 Bus0/Disk3 Part0    OPEN   NORMAL       512        0
Uninstalling SDDDSM
Attention:
1. You must install SDDDSM immediately before performing a system restart to avoid any potential data loss. Go to Installing SDDDSM on page 405 for instructions.
2. If you are not planning to reinstall SDDDSM after the uninstallation, ensure that there is a single-path connection from the system to the storage device before performing a restart to avoid any potential data loss.

Complete the following steps to uninstall SDDDSM on a Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system:
1. Log on as the administrator user.
2. Uninstall SDDDSM.
   v On Windows Server 2003 or Windows Server 2008:
     a. Click Start Settings Control Panel. The Control Panel opens.
     b. Double-click Add/Remove Programs. The Add/Remove Programs window opens.
     c. In the Add/Remove Programs window, select Subsystem Device Driver DSM from the currently installed programs selection list.
     d. Click Add/Remove. You will be asked to confirm that you want to uninstall.
   v On Windows Server 2012:
     a. Click Start Screen Control Panel. The Control Panel opens.
     b. Click Programs Program and Features Uninstall a program.
     c. From the list of programs, select Subsystem Device Driver DSM and in the Confirm dialog box, click OK.
3. Shut down your Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system after the uninstallation process completes.
4. Change the zoning configuration or cable connections to ensure that there is only single-path connection from the system to the storage device. 5. Power on your Windows Server 2003, Windows Server 2008, or Windows Server 2012 host system.
Remote boot support for 32-bit Windows Server 2003, Windows Server 2008, or Windows Server 2012 using a QLogic HBA
Complete the following steps to install SDD:
1. Configure the SAN environment so that both Qlogic HBAs in the server can see the SAN boot device.
2. Start the server with 2 fibre-channel cables connected to both HBAs.
3. Press Ctrl+Q to go to the Qlogic BIOS Fast Utilities.
4. Select the first HBA.
5. Select Configuration Settings.
6. Select Host Adapter Setting. Enable the BIOS.
7. Press ESC.
8. Select Selectable Boot Settings.
9. Enable Selectable Boot.
10. Select first (primary) boot and press Enter.
11. Select the IBM storage device and press Enter.
12. At the Select LUN prompt, select the first supported LUN, which is LUN 0.
13. Press Esc and select Save Changes.
14. Select the second HBA and repeat steps 5-13.
15. Remove the fibre-channel cable from the second HBA so that you have only a single path to the first HBA.
16. Restart the Windows Server 2003, Windows Server 2008, or Windows Server 2012 with the latest Service Pack CD-ROM.
17. At the very first Windows 2003 installation screen, quickly press F6 to install the third-party device. For Windows Server 2008 and Windows Server 2012, skip to step 20.
18. Select S to specify an additional device.
19. Insert the diskette with the Qlogic storport miniport driver into the diskette drive and press Enter.
20. Continue with the Windows Server 2003, Windows Server 2008, or Windows Server 2012 installation process. Remember to select the SAN device that is seen by the Qlogic HBA as the device to install Windows Server 2003, Windows Server 2008, or Windows Server 2012. Continue with the OS installation.
21. After Windows Server 2003, Windows Server 2008, or Windows Server 2012 is installed on the SAN boot device, shut down the system.
22. Unplug the fibre-channel cable from the first HBA and plug the fibre-channel cable into the second HBA so that you have a single path from the second HBA to the SAN device.
23. Restart the server. The system should come up in SAN boot mode.
24. Install the latest SDDDSM and restart.
25. To add multipath support to a SAN boot device:
   a. Shut down the server.
   b. Plug in the fibre-channel cable to the other HBA.
   c. Configure the SAN to have more paths to the SAN boot device if needed.
26. Restart the server.
Booting from a SAN device with Windows Server 2003, Windows Server 2008, or Windows Server 2012 and the SDD using an Emulex HBA
Note: The Automatic LUN Mapping checkbox of the Emulex Configuration Settings should be selected so that both HBA ports can see all assigned LUNs.

Complete the following steps.
1. Configure the SAN environment so that both Emulex HBAs in the server can see the SAN boot device.
2. Boot the server with 2 fibre-channel cables connected to both HBAs.
3. Press Alt+E to go to the Emulex BIOS Utilities.
4. Select the first HBA.
5. Select Configure HBA Parameter Settings.
6. Select Option 1 to enable BIOS for this HBA.
7. Press Page Up to go back. Then select Configure Boot Device. 8. Select the first unused boot device for Select Boot Entry from the List Of Saved Boot Devices. 9. Select 01 for Select The Two Digit Number Of The Desired Boot Device. 10. Enter 00 for Enter Two Digit Of Starting LUNs (hexadecimal). 11. Select device number 01 for Enter Selection For Starting LUN. 12. Select Boot Device Via WWPN. 13. Page up. Then select the second HBA. Repeat steps 5-12 to configure boot support for this HBA. 14. Unplug the fibre-channel cable from second HBA and plug the fibre-channel cable to the first HBA so that you have a single path from first HBA to the SAN device. 15. Restart the Windows Server 2003, Windows Server 2008, or Windows Server 2012 with the latest Service Pack CD-ROM. 16. At the very first Windows 2003 installation screen, quickly press F6 to install third-party device. For Windows Server 2008 and Windows Server 2012, skip to step 19. 17. Select S to specify an additional device. 18. Insert the diskette with the Emulex HBA driver into the diskette drive and press Enter. 19. Continue with the Windows Server 2003, Windows Server 2008, or Windows Server 2012 installation process. Remember to select the SAN device seen by the Emulex HBA as the device to install Windows 2003. Continue with the OS installation. 20. After Windows Server 2003, Windows Server 2008, or Windows Server 2012 is installed on the SAN Boot device, shut down the system.
21. Unplug the fibre-channel cable from the first HBA and plug in the fibre-channel cable to the second HBA so that you have a single path from the second HBA to the SAN device.
22. Restart the server. The system should be up in SAN boot mode.
23. Install the latest SDDDSM and restart.
24. To add multipath support to a SAN boot device:
   a. Shut down the server.
   b. Plug in the fibre-channel cable to the other HBA.
   c. Configure the SAN to have more paths to the SAN boot device if needed.
25. Restart the server.
Support for Windows Server 2003, Windows Server 2008, or Windows Server 2012 clustering
When running Windows Server 2003 clustering, clustering failover might not occur when the last path is being removed from the shared resources. See Microsoft article Q294173 for additional information. Windows Server 2003 does not support dynamic disks in the MSCS environment.
Configuring a Windows Server 2003, Windows Server 2008, or Windows Server 2012 cluster with SDDDSM installed
The following variables are used in this procedure:

server_1   Represents the first server with two HBAs.
server_2   Represents the second server with two HBAs.
Complete the following steps to configure a Windows Server 2003, Windows Server 2008, or Windows Server 2012 cluster with SDDDSM:
1. On both server_1 and server_2, configure SAN devices on supported storage as shared for all HBAs. 2. Install the latest SDDDSM on server_1. For installation instructions, see Installing SDDDSM on page 405. 3. Connect fibre-channel cables from server_1 to the supported storage, and restart server_1. 4. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_1. 5. Click Start All Programs Administrative Tools Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. 6. Format the raw devices with NTFS and assign drive letters for all SAN devices that are going to be used as MSCS resources. Ensure that you keep track of the assigned drive letters on server_1. 7. Shut down server_1. 8. Install the latest SDDDSM on server_2 . For installation instructions, see Installing SDDDSM on page 405. 9. Connect fibre-channel cables from server_2 to the supported storage, and restart server_2. 10. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_2. 11. Click Start All Programs Administrative Tools Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. Verify that the assigned drive letters for MSCS resources on server_2 match the assigned drive letters on server_1. 12. Insert the Windows 2003 CD-ROM into the CD-ROM drive and install the MSCS software on server_2. For Windows 2008 and Windows 2012, enable the Failover Clustering feature and configure MSCS on server_2. 13. Restart server_1. 14. Insert the Windows 2003 CD-ROM into the CD-ROM drive and install the MSCS software on server_1 as the second node of the MSCS cluster. For Windows 2008 and Windows 2012, enable the Failover Clustering feature and configure MSCS on server_1 as the second node of the MSCS cluster. 15. Use the datapath query adapter and datapath query device commands to verify that the correct number of LUNs and paths on server_1 and server_2. (This step is optional.) Note: You can use the datapath query adapter and datapath query device commands to show all the physical and logical volumes for the host server. The secondary server shows only the physical volumes and the logical volumes that it owns. Information about installing a Windows 2003 cluster can be found in the confclus.exe file, located at:
http://www.microsoft.com/downloads/details.aspx?familyid=96F76ED7-9634-4300-9159-89638F4B4EF7&displaylang=en
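As an aid to steps 6 and 11 above, one way to record and compare the assigned drive letters on each server is the Windows diskpart utility; the following is a minimal sketch (any disk-management view that lists drive letters works equally well):

    C:\> diskpart
    DISKPART> list volume
    DISKPART> exit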
Complete the following steps to remove SDDDSM in a two-node cluster environment:
1. Move all cluster resources from node A to node B.
2. Ensure that there is a single-path connection from the system to the storage device, which may include the following activities:
    a. Disable access of second HBA to the storage device.
    b. Change the zoning configuration to allow only one port accessed by this host.
    c. Remove shared access to the second HBA through the IBM TotalStorage Expert V.2.1.0 Specialist.
    d. Remove multiple SAN Volume Controller port access, if applicable.
3. Uninstall SDDDSM. See Uninstalling SDDDSM on page 415 for instructions.
4. Restart your system.
5. Move all cluster resources from node B to node A.
6. Complete steps 2 - 5 on node B.
Beginning with SDDDSM version 2.1.1.0, SDDDSM also supports the following datapath commands:
v datapath query version
v datapath query portmap
v datapath query essmap
v datapath set device device_number policy rr/rrs/fo/lb/lbs/df
v datapath clear device device_number count error/all
v datapath disable/enable ports ess
Notes:
1. The options [], [-d ], [-i x/x y], [-s] in datapath query device are supported only by SDDDSM 2.1.1.0 or later.
2. For RSSM devices, even when two or more serial-attached SCSI (SAS) HBAs are installed on the host, SDDDSM finds only a single HBA, and the output of datapath query adapter shows only one adapter.

For additional information about the datapath commands, see Chapter 13, Using the datapath commands, on page 429.
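For example, assuming SDDDSM 2.1.1.0 or later is installed, the extended query device options might be combined as in the following sketch (the device numbers and the device model shown are illustrative only):

    datapath query device 0 3 -d 2145
    datapath query device 0 -i 2 10
    datapath query device 0 -s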
Chapter 12. Using the SDD server and the SDDPCM server
The SDD server (sddsrv) is an application program that is installed in addition to SDD. SDDPCM server (pcmsrv) is an integrated component of SDDPCM 2.0.1.0 (or later).
Path reclamation
The SDD server regularly tests broken paths and recovers those that have become operational. It tests INVALID, CLOSE_DEAD, or DEAD paths and detects whether these paths have become operational. The daemon sleeps for three-minute intervals between consecutive runs unless otherwise specified for a specific platform. If the test succeeds, sddsrv reclaims these paths and changes their states according to the following characteristics:
v If the state of the SDD vpath device is OPEN, sddsrv changes the states of INVALID and CLOSE_DEAD paths of that SDD vpath device to OPEN.
v If the state of the SDD vpath device is CLOSE, sddsrv changes the states of CLOSE_DEAD paths of that SDD vpath device to CLOSE.
v The sddsrv daemon changes the states of DEAD paths to OPEN.
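On an AIX host, for example, you can verify that the sddsrv daemon is running before relying on path reclamation or probing. This is a minimal sketch that assumes the default AIX installation, where sddsrv runs under the System Resource Controller:

    lssrc -s sddsrv
    stopsrc -s sddsrv       (stops the daemon)
    startsrc -s sddsrv      (restarts the daemon)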
Path probing
The SDD server regularly tests closed paths and idle open paths to determine whether they are still operational or have become inoperative. The daemon sleeps for one-minute intervals between consecutive runs unless otherwise specified for a specific platform. If the test fails, sddsrv changes the states of these paths according to the following characteristics:
v If the SDD vpath device is in the OPEN state and the path is not working, sddsrv changes the state of the path from OPEN to DEAD.
v If the SDD vpath device is in the CLOSE state and the path is not working, sddsrv changes the state of the path from CLOSE to CLOSE_DEAD.
v The sddsrv daemon sets the last path to DEAD or CLOSE_DEAD depending upon the state of the SDD vpath device.

Note: The sddsrv daemon does not test paths that are manually placed offline.

In SDD 1.5.0.x (or earlier), sddsrv by default was binding to a TCP/IP port and listening for incoming requests. In SDD 1.5.1.x (or later), sddsrv does not bind to any TCP/IP port by default, but allows port binding to be dynamically enabled or disabled. For all platforms except Linux, the SDD package ships a template file of sddsrv.conf that is named sample_sddsrv.conf. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, the sample_sddsrv.conf file is in the directory where SDD is installed. You must use the sample_sddsrv.conf file to create the sddsrv.conf file in the same directory as sample_sddsrv.conf by copying it and naming the copied file sddsrv.conf. You can then dynamically change port binding by modifying parameters in sddsrv.conf.

Because the TCP/IP interface of sddsrv is disabled by default, you cannot get sddsrv traces from a Web browser like you could in SDD releases earlier than 1.5.1.0. Starting with SDD 1.5.1.x, the sddsrv trace is saved in the sddsrv log files. The sddsrv trace log files are wrap-around files, and the size of each file can be a maximum of 4 MB. The sddsrv daemon also collects the SDD driver trace and puts it in log files. The daemon creates two sdd log files (sdd.log and sdd_bak.log) for the driver trace. The SDD driver trace log files are also wrap-around files, and the size of each file can be a maximum of 4 MB.

You can find sddsrv and sdd log files in the following directories based on your host system platform:
v AIX: /var/adm/ras
v HP-UX: /var/adm/IBMsdd
v Linux: /var/log
v Solaris: /var/adm
v Windows 2000 and Windows NT: \WINNT\system32
v Windows Server 2003: \Windows\system32

See Appendix A, SDD, SDDPCM, and SDDDSM data collection for problem analysis, on page 457 for information about reporting SDD problems.
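For example, on an AIX host the working configuration file can be created from the shipped template as described above (on Windows, copy sample_sddsrv.conf to sddsrv.conf in the SDD installation directory instead):

    cp /etc/sample_sddsrv.conf /etc/sddsrv.conf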
sddsrv and IBM TotalStorage support for Geographically Dispersed Sites for Microsoft Cluster Service
The sddsrv TCP/IP port must be enabled to listen over the network if you are using IBM TotalStorage Support for Geographically Dispersed Sites for Microsoft Cluster Service (MSCS). Apply your corporate security rules to this port.
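A sketch of the sddsrv.conf entries that enable network listening for this environment follows; the parameter names are the ones described in this chapter, the exact line format follows the shipped sample_sddsrv.conf template, and the port shown is the default:

    enableport = true
    loopbackbind = false
    portnumber = 20001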
loopbackbind
    If you set the enableport parameter to true, the loopbackbind parameter specifies whether sddsrv listens to any Internet address or the loopback (127.0.0.1) address. To enable sddsrv to listen to any Internet address, set the loopbackbind parameter to false. To enable sddsrv to listen only to the loopback address 127.0.0.1, set the loopbackbind parameter to true.
max_log_count
    This parameter specifies the number of log files. You can comment out the max_log_count parameter to use the default value of 1. You can uncomment this parameter to change the file count to a value in the range 1 - 3. If you specify a value smaller than 1, sddsrv uses the default value of 1. If you specify a value greater than 3, sddsrv uses 3.
max_log_size
    This parameter specifies the size of the log file in MB. You can comment out the max_log_size parameter to use the default value of 4. You can uncomment this parameter to change the file size to a value in the range 4 - 25. If you specify a value smaller than 4, sddsrv uses the default value of 4. If you specify a value greater than 25, sddsrv uses 25.
portnumber
    This parameter specifies the port number that sddsrv binds to. The default value of this parameter is 20001. You can modify this parameter to change the port number. If the enableport parameter is set to true, you must set this parameter to a valid port number to which sddsrv can bind. Use a port number that is not used by any other application.
probeinterval
    This parameter specifies the probe interval for sddsrv probing in minutes. You can leave the probeinterval parameter commented to use the default probe interval, which is documented in the sample_sddsrv.conf file. You can uncomment this parameter to change the default probe interval to a value from 0 to 65535. If you specify a value less than 0, sddsrv uses the default value. If you specify a value greater than 65535, sddsrv uses 65535.
probe_retry
    This parameter specifies the number of additional retries after a SCSI inquiry by sddsrv fails while probing inquiries. The probe_retry parameter is available only for Solaris SDD. You can leave the probe_retry parameter commented to use the default value of 2. You can uncomment this parameter to change the number of probe retries to a value from 2 to 5. If you specify a value below 2, sddsrv uses the default value of 2. If you specify a value greater than 5, sddsrv uses 5.

You can modify these parameters while sddsrv is running. Your changes take effect dynamically within 30 seconds.
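For example, an sddsrv.conf that keeps three log files of up to 25 MB each and probes every 2 minutes might contain entries like the following sketch (uncomment only the parameters whose defaults you want to override):

    max_log_count = 3
    max_log_size = 25
    probeinterval = 2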
max_log_count
    This parameter specifies the number of log files. You can comment out the max_log_count parameter to use the default value of 1. You can uncomment this parameter to change the file count to a value in the range 1 - 3. If you specify a value smaller than 1, pcmsrv uses the default value of 1. If you specify a value greater than 3, pcmsrv uses 3.
max_log_size
    This parameter specifies the size of the log file in MB. You can comment out the max_log_size parameter to use the default value of 4. You can uncomment this parameter to change the file size to a value in the range 4 - 25. If you specify a value smaller than 4, pcmsrv uses the default value of 4. If you specify a value greater than 25, pcmsrv uses 25.
loopbackbind
    If you set the enableport parameter to true, the loopbackbind parameter specifies whether pcmsrv listens to any Internet address or the loopback (127.0.0.1) address. To enable pcmsrv to listen to any Internet address, set the loopbackbind parameter to false. To enable pcmsrv to listen only to the loopback address 127.0.0.1, set the loopbackbind parameter to true.
portnumber
    This parameter specifies the port number that pcmsrv binds to. The default value of this parameter is 20001. You can modify this parameter to change the port number. If the enableport parameter is set to true, set this parameter to a valid port number to which pcmsrv can bind. Use a port number that is not used by any other application.

You can modify these parameters while pcmsrv is running. Your changes take effect dynamically.
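A corresponding pcmsrv.conf sketch, using only the parameters described above and their documented ranges, might look like this:

    enableport = true
    portnumber = 20001
    max_log_count = 2
    max_log_size = 10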
If you disable sddsrv probing, sddsrv is not able to mark bad idle paths to the DEAD or CLOSE_DEAD state, which might lengthen cluster failover. Also, it takes more time to open SDD devices with bad paths.
This chapter includes descriptions of these commands. Table 28 provides an alphabetical list of these commands, a brief description, and where to go in this chapter for more information.
Table 28. Commands

datapath clear device count
    Dynamically clears the select counter or error counter. (Page 431)
datapath disable ports
    Places paths connected to certain ports offline. (Page 432)
datapath enable ports
    Places paths connected to certain ports online. (Page 433)
datapath open device path
    Dynamically opens a path that is in an invalid or close_dead state. (Page 434)
datapath query adapter
    Displays information about adapters. (Page 436)
datapath query adaptstats
    Displays performance information for all SCSI and FCS adapters that are attached to SDD devices. (Page 438)
datapath query device
    Displays information about devices. (Page 439)
datapath query devstats
    Displays performance information for a single SDD vpath device or all SDD vpath devices. (Page 442)
datapath query essmap
    Displays each SDD vpath device, path, location, and attributes. (Page 444)
datapath query portmap
    Displays the status of the logic paths that are managed by SDD between the host and the storage ports. (Page 446)
datapath query version
    Displays the version of SDD that is installed. (Page 448)
datapath query wwpn
    Displays the World Wide Port Name (WWPN) of the host fibre-channel adapters. (Page 449)
datapath remove adapter
    Dynamically removes an adapter. (Page 450)
datapath remove device
    Dynamically removes a path of an SDD vpath device. (Page 451)
datapath set adapter
    Sets all device paths that are attached to an adapter to online or offline. (Page 453)
datapath set device policy
    Dynamically changes the path-selection policy of a single or multiple SDD vpath devices. (Page 454)
datapath set device path
    Sets the path of an SDD vpath device to online or offline. (Page 455)
datapath set qdepth
    Dynamically enables or disables queue depth of an SDD vpath device. (Page 456)
Syntax
datapath clear device <device number 1> [<device number 2>] count {error | all}
Parameters
device number 1 [device number 2]
    When two device numbers are entered, this command applies to all the devices whose index numbers fit within the range of these two device index numbers.
error
    Clears only the error counter of the SDD vpath device or range of devices specified.
all
    Clears both the select counter and the error counter of the SDD vpath device or devices in the specified range.
Examples
If you have a nonzero select counter or error counter and enter the datapath query device command, the following output is displayed:
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680181006B20000000000000D1
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
    0    fscsi0/hdisk15       CLOSE    NORMAL    53020     47
    1    fscsi0/hdisk20       CLOSE    NORMAL    0         0
    2    fscsi1/hdisk55       CLOSE    NORMAL    365742    0
    3    fscsi1/hdisk60       CLOSE    NORMAL    0         0
If you enter the datapath clear device 0 count all command and then enter the datapath query device command, the following output is displayed:
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680181006B20000000000000D1
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
    0    fscsi0/hdisk15       CLOSE    NORMAL    0         0
    1    fscsi0/hdisk20       CLOSE    NORMAL    0         0
    2    fscsi1/hdisk55       CLOSE    NORMAL    0         0
    3    fscsi1/hdisk60       CLOSE    NORMAL    0         0
Syntax
datapath disable ports <connection> ess <essid>
Parameters
connection
    The connection code must be in one of the following formats:
    v Single port = R1-Bx-Hy-Zz
    v All ports on card = R1-Bx-Hy
    v All ports on bay = R1-Bx
    Use the output of the datapath query essmap command to determine the connection code.
essid
    The disk storage system serial number, given by the output of the datapath query portmap command.
Examples
If you enter the datapath disable ports R1-B1-H3 ess 12028 command and then enter the datapath query device command, the following output is displayed:
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028
===========================================================================
Path#    Adapter/Path Name    State    Mode       Select    Errors
    0    fscsi0/hdisk2        DEAD     OFFLINE    6         0
    1    fscsi0/hdisk4        OPEN     NORMAL     9         0
    2    fscsi1/hdisk6        DEAD     OFFLINE    11        0
    3    fscsi1/hdisk8        OPEN     NORMAL     9         0
Syntax
datapath enable ports <connection> ess <essid>
Parameters
connection
    The connection code must be in one of the following formats:
    v Single port = R1-Bx-Hy-Zz
    v All ports on card = R1-Bx-Hy
    v All ports on bay = R1-Bx
    Use the output of the datapath query essmap command to determine the connection code.
essid
    The disk storage system serial number, given by the output of the datapath query portmap command.
Examples
If you enter the datapath enable ports R1-B1-H3 ess 12028 command and then enter the datapath query device command, the following output is displayed:
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028
===========================================================================
Path#    Adapter/Path Name    State    Mode      Select    Errors
    0    fscsi0/hdisk2        OPEN     NORMAL    6         0
    1    fscsi0/hdisk4        OPEN     NORMAL    9         0
    2    fscsi1/hdisk6        OPEN     NORMAL    11        0
    3    fscsi1/hdisk8        OPEN     NORMAL    9         0
Syntax
datapath open device device number path path number
Parameters
device number
    The device number refers to the device index number as displayed by the datapath query device command.
path number
    The path number that you want to change, as displayed by the datapath query device command.
Examples
If you enter the datapath query device 8 command, the following output is displayed:
DEV#:   8  DEVICE NAME: vpath9  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028
================================================================
Path#    Adapter/Hard Disk    State      Mode      Select    Errors
    0    fscsi1/hdisk18       OPEN       NORMAL    557       0
    1    fscsi1/hdisk26       OPEN       NORMAL    568       0
    2    fscsi0/hdisk34       INVALID    NORMAL    0         0
    3    fscsi0/hdisk42       INVALID    NORMAL    0         0
Note that the current state of path 2 is INVALID. If you enter the datapath open device 8 path 2 command, the following output is displayed:
Success: device 8 path 2 opened

DEV#:   8  DEVICE NAME: vpath9  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028
================================================================
Path#    Adapter/Hard Disk    State      Mode      Select    Errors
    0    fscsi1/hdisk18       OPEN       NORMAL    557       0
    1    fscsi1/hdisk26       OPEN       NORMAL    568       0
    2    fscsi0/hdisk34       OPEN       NORMAL    0         0
    3    fscsi0/hdisk42       INVALID    NORMAL    0         0
After you issue the datapath open device 8 path 2 command, the state of path 2 becomes OPEN. The terms used in the output are defined as follows:

Dev#
    The number of this device.
Type
    The device product ID from inquiry data.
Policy
    The current path-selection policy selected for the device. See datapath set device policy on page 454 for a list of valid policies.
Serial
    The logical unit number (LUN) for this device.
Path#
    The path number displayed by the datapath query device command.
Adapter
    The name of the adapter to which the path is attached.
Hard Disk
    The name of the logical device to which the path is bound.
State
    The condition of the named device:
    Open          Path is in use.
    Close         Path is not being used.
    Close_Dead    Path is broken and is not being used.
    Dead          Path is no longer being used.
    Invalid       The path failed to open.
Mode
    The mode of the named path, which is either Normal or Offline.
Select
    The number of times that this path was selected for input and output.
Errors
    The number of input errors and output errors that are on this path.
Syntax
datapath query adapter adapter number
Parameters
adapter number The index number for the adapter for which you want information displayed. If you do not enter an adapter index number, information about all adapters is displayed.
Examples
If you enter the datapath query adapter command, the following output is displayed:
Active Adapters :4

Adpt#    Name      State     Mode      Select       Errors    Paths    Active
    0    scsi3     NORMAL    ACTIVE    129062051    0         64       0
    1    scsi2     NORMAL    ACTIVE    88765386     303       64       0
    2    fscsi2    NORMAL    ACTIVE    407075697    5427      1024     0
    3    fscsi0    NORMAL    ACTIVE    341204788    63835     256      0
The terms used in the output are defined as follows:

Adpt #
    The number of the adapter defined by SDD.
Adapter Name
    The name of the adapter.
State
    The condition of the named adapter. It can be either:
    Normal      Adapter is in use.
    Degraded    One or more paths attached to the adapter are not functioning.
    Failed      All paths attached to the adapter are no longer operational.
Mode
    The mode of the named adapter, which is either Active or Offline.
Select
    The number of times this adapter was selected for input or output.
Errors
    The number of errors on all paths that are attached to this adapter.
Paths
    The number of paths that are attached to this adapter. Note: In the Windows NT host system, this is the number of physical and logical devices that are attached to this adapter.
Active
    The number of functional paths that are attached to this adapter. The number of functional paths is equal to the number of paths attached to this adapter minus any that are identified as failed or offline.

Note: Windows 2000 and Windows Server 2003 host systems can display different values for State and Mode depending on adapter type when a path is placed
Syntax
datapath query adaptstats adapter number
Parameters
adapter number The index number for the adapter for which you want information displayed. If you do not enter an adapter index number, information about all adapters is displayed.
Examples
If you enter the datapath query adaptstats 0 command, the following output is displayed:
Adapter #: 0
=============
            Total Read    Total Write    Active Read    Active Write    Maximum
I/O:        1442          41295166       0              2               75
SECTOR:     156209        750217654      0              32              2098
/*-------------------------------------------------------------------------*/
The terms used in the output are defined as follows:

Total Read
    v I/O: total number of completed read requests
    v SECTOR: total number of sectors that have been read
Total Write
    v I/O: total number of completed write requests
    v SECTOR: total number of sectors that have been written
Active Read
    v I/O: total number of read requests in process
    v SECTOR: total number of sectors to read in process
Active Write
    v I/O: total number of write requests in process
    v SECTOR: total number of sectors to write in process
Maximum
    v I/O: the maximum number of queued I/O requests
    v SECTOR: the maximum number of queued sectors to read or write
Syntax
datapath query device [<device_number> | <device_number_m> <device_number_n>] [-d <device model>] [-i <x> [<y>]] [-l] [-s]
Parameters
device_number
    The device index number that is displayed by the datapath query device command, rather than the SDD device number.
device_number_m device_number_n
    The option that you can use to provide a range of device index numbers.
-d device model
    The device model that you want to display. The option to specify a device model is supported on all platforms except Novell. Examples of valid device models include the following models:
    2105       All 2105 models (ESS).
    2105F      All 2105 F models (ESS).
    2105800    All 2105 800 models (ESS).
    2145       All 2145 models (SAN Volume Controller).
    2107       All DS8000 models.
    1750       All DS6000 models.
-i
    Repeats the command every x seconds for y times. If y is not specified, the command will repeat every x seconds indefinitely.
-l
    Marks the nonpreferred paths with an asterisk, displays the LUN identifier, and for AIX only, displays the qdepth_enable value.
-s
    Queries the SCSI address of the device. This option is available for both SDD 1.6.1.x (or later) and SDDDSM 2.1.1.x (or later) for Windows platforms.
Examples
If you enter the datapath query device 0 command, the following output is displayed:
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 6005076801818008C000000000000065
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select     Errors
    0    fscsi1/hdisk72       OPEN     NORMAL    0          0
    1    fscsi0/hdisk22       OPEN     NORMAL    5571118    0
    2    fscsi0/hdisk32       OPEN     NORMAL    0          0
    3    fscsi1/hdisk62       OPEN     NORMAL    5668419    0
If you enter the datapath query device 0 -l command for a device type that has preferred and nonpreferred paths, the following output is displayed:
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 6005076801818008C000000000000065
LUN IDENTIFIER: 6005076801818008C000000000000065
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select     Errors
    0*   fscsi1/hdisk72       OPEN     NORMAL    0          0
    1    fscsi0/hdisk22       OPEN     NORMAL    5571118    0
    2*   fscsi0/hdisk32       OPEN     NORMAL    0          0
    3    fscsi1/hdisk62       OPEN     NORMAL    5668419    0
Notes:
1. Usually, the device number and the device index number are the same. However, if the devices are configured out of order, the two numbers are not always consistent. To find the corresponding index number for a specific device, always run the datapath query device command first.
2. For SDD 1.4.0.0 (or later), the location of the policy and serial number are swapped.

The terms used in the output are defined as follows:

Dev#
    The number of this device defined by SDD.
Name
    The name of this device defined by SDD.
Type
    The device product ID from inquiry data.
Policy
    The current path selection policy selected for the device. See datapath set device policy on page 454 for a list of valid policies.
Serial
    The LUN for this device.
Path#
    The path number.
Adapter
    The name of the adapter to which the path is attached.
Hard Disk
    The name of the logical device to which the path is bound.
State
    The condition of the named device:
    Open          Path is in use.
    Close         Path is not being used.
    Close_Dead    Path is broken and not being used.
    Dead          Path is no longer being used. It was either removed by SDD due to errors or manually removed using the datapath set device M path N offline or datapath set adapter N offline command.
    Invalid       The path failed to open.
Mode
    The mode of the named path. The mode can be either Normal or Offline.
Select
    The number of times this path was selected for input or output.
Errors
    The number of input and output errors on a path that is attached to this device.
Syntax
datapath query devstats [<device_number> | <device_number_m> <device_number_n>] [-d <device model>] [-i <x> [<y>]]
Parameters
device number
    The device index number that is displayed by the datapath query device command, rather than the SDD device number.
device_number_m device_number_n
    The option that you can use to provide a range of device index numbers.
-d device model
    The device model that you want to display.
    Note: The -d device model option is supported on AIX only.
    Examples of valid device models include the following models:
    2105       All 2105 models (ESS).
    2105F      All 2105 F models (ESS).
    2105800    All 2105 800 models (ESS).
    2145       All 2145 models (SAN Volume Controller).
    2107       All DS8000 models.
    1750       All DS6000 models.
    Note: The option to specify a device model is supported on all platforms except Novell.
-i
    Repeats the command every x seconds for y times. If y is not specified, the command will repeat every x seconds indefinitely.
Examples
If you enter the datapath query devstats 0 command, the following output is displayed:
Device #: 0
=============
            Total Read    Total Write    Active Read    Active Write    Maximum
I/O:        387           24502563       0              0               62
SECTOR:     9738          448308668      0              0               2098

Transfer Size:    <= 512     <= 4k      <= 16K      <= 64K    > 64K
                  4355850    1024164    19121140    1665      130
/*-------------------------------------------------------------------------*/
The terms used in the output are defined as follows:

Total Read
    v I/O: total number of completed read requests
    v SECTOR: total number of sectors that have been read
Total Write
    v I/O: total number of completed write requests
    v SECTOR: total number of sectors that have been written
Active Read
    v I/O: total number of read requests in process
    v SECTOR: total number of sectors to read in process
Active Write
    v I/O: total number of write requests in process
    v SECTOR: total number of sectors to write in process
Maximum
    v I/O: the maximum number of queued I/O requests
    v SECTOR: the maximum number of queued sectors to Read or Write
Transfer size
    v <= 512: the number of I/O requests received, whose transfer size is 512 bytes or less
    v <= 4k: the number of I/O requests received, whose transfer size is 4 KB or less (where KB equals 1024 bytes)
    v <= 16K: the number of I/O requests received, whose transfer size is 16 KB or less (where KB equals 1024 bytes)
    v <= 64K: the number of I/O requests received, whose transfer size is 64 KB or less (where KB equals 1024 bytes)
    v > 64K: the number of I/O requests received, whose transfer size is greater than 64 KB (where KB equals 1024 bytes)
Syntax
datapath query essmap
Examples
If you enter the datapath query essmap command, the following output is displayed:
Disk     Path      P  Location      adapter  LUN SN       Type          Size  LSS  Vol  Rank  C/A  S  ...
-------  --------  -  ------------  -------  -----------  ------------  ----  ---  ---  ----  ---  -  ...
vpath20  hdisk1       30-60-01[FC]  fscsi1   13AAAKA1200  IBM 1750-500  1.1   18   0    0000  01   Y  ...
vpath20  hdisk720  *  30-60-01[FC]  fscsi1   13AAAKA1200  IBM 1750-500  1.1   18   0    0000  01   Y  ...
vpath20  hdisk848     20-60-01[FC]  fscsi0   13AAAKA1200  IBM 1750-500  1.1   18   0    0000  01   Y  ...
vpath20  hdisk976  *  20-60-01[FC]  fscsi0   13AAAKA1200  IBM 1750-500  1.1   18   0    0000  01   Y  ...
The terms used in the output are defined as follows:

Disk
    The logical device name assigned by the host.
Path
    The logical path name of an SDD vpath device.
P
    Indicates whether the logical path is a preferred path or nonpreferred path. * indicates that it is a nonpreferred path. This field applies only to 1750 devices.
Location
    The physical location code of the host adapter through which the LUN is accessed.
adapter
    The logical adapter name that is assigned by the host.
LUN SN
    The unique serial number for each LUN within the disk storage system.
Type
    The device and model.
Size
    The configured capacity of the LUN.
LSS
    The logical subsystem where the LUN resides. (Beginning with 1.6.3.0, the value displayed is changed from decimal to hexadecimal.)
Vol
    The volume number within the disk storage system.
Rank
    The unique identifier for each RAID within the disk storage system.
C/A
    The cluster and adapter accessing the array.
S
    Indicates that the device is shared by two and more disk storage system ports. Valid values are yes or no.
Connection
    The physical location code of disk storage system adapter through which the LUN is accessed.
Port
    The disk storage system port through which the LUN is accessed.
RaidMode
    The disk RAID mode.
Syntax
datapath query portmap
Examples
If you enter the datapath query portmap command, the following output is displayed:
ESSID    DISK    BAY-1(B1)            BAY-2(B2)            BAY-3(B3)            BAY-4(B4)
                 H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4
                 ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD
                 BAY-5(B5)            BAY-6(B6)            BAY-7(B7)            BAY-8(B8)
                 H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4
                 ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD
                 O--- ---- ---- ----  o--- ---- ---- ----  ---- ---- ---- ----  ---- ---- ---- ----
                 Y--- ---- ---- ----  y--- ---- ---- ----  ---- ---- ---- ----  ---- ---- ---- ----
The terms used in the output are defined as follows:

Y
    The port is online and open. At least one path attached to this port is functional.
y
    Paths connected to this port are nonpreferred paths. The port is online and open. At least one path attached to this port is functional.
O
    The port is online and closed. At least one path state and mode is closed and online.
o
    Paths connected to this port are nonpreferred paths. The port is online and closed. At least one path state and mode is closed and online.
N
    The port is offline. All paths attached to this port are offline.
n
    Paths connected to this port are nonpreferred paths. The port is offline. All paths attached to this port are offline.
-
    The path is not configured.
PD
    The path is down. It is either not functional or has been placed offline.
The serial number of ESS devices is five digits, whereas the serial number of DS6000 and DS8000 devices is seven digits.
Syntax
datapath query version
Parameters
None
Examples
If you enter the datapath query version command, the following output is displayed:
[root@abc]> datapath query version
IBM SDD Version 1.6.1.0 (devices.sdd.52.rte)
Syntax
datapath query wwpn
Parameters
None
Examples
If you enter the datapath query wwpn command, the following output is displayed:
[root@abc]> datapath query wwpn
Adapter Name     PortWWN
fscsi0           10000000C925F5B0
fscsi1           10000000C9266FD1
Syntax
datapath remove adapter adapter number
Parameters
adapter number The index number of the adapter that you want to remove.
Examples
If you enter the datapath query adapter command, the following output is displayed:
Active Adapters :4

Adpt#    Name      State     Mode      Select    Errors    Paths    Active
    0    fscsi0    NORMAL    ACTIVE    62051     0         10       10
    1    fscsi1    NORMAL    ACTIVE    65386     3         10       10
    2    fscsi2    NORMAL    ACTIVE    75697     27        10       10
    3    fscsi3    NORMAL    ACTIVE    4788      35        10       10
If you enter the datapath remove adapter 0 command, the following actions occur: v The entry for Adpt# 0 disappears from the datapath query adapter command output. v All paths that are attached to adapter 0 disappear from the datapath query device command output. You can enter this command while I/O is running.
Active Adapters :3

Adpt#    Name      State     Mode      Select    Errors    Paths    Active
    1    fscsi1    NORMAL    ACTIVE    65916     3         10       10
    2    fscsi2    NORMAL    ACTIVE    76197     27        10       10
    3    fscsi3    NORMAL    ACTIVE    4997      35        10       10
The adapter/hard disk Adpt# 0 fscsi0 is removed and the select counts are increased on the other three adapters, indicating that I/O is still running.
Syntax
datapath remove device device number path path number
Parameters
device number
    The device number shown in the output of the datapath query device command.
path number
    The path number shown in the output of the datapath query device command.
Examples
If you enter the datapath query device 0 command, the following output is displayed:
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028
================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
    0    fscsi1/hdisk18       OPEN     NORMAL    557       0
    1    fscsi1/hdisk26       OPEN     NORMAL    568       0
    2    fscsi0/hdisk34       OPEN     NORMAL    566       0
    3    fscsi0/hdisk42       OPEN     NORMAL    545       0
If you enter the datapath remove device 0 path 1 command, the entry for DEV# 0 Path# 1 (that is, fscsi1/hdisk26) disappears from the datapath query device 0 command output and the path numbers are rearranged.
Success: device 0 path 1 removed

DEV#:   0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028
================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
    0    fscsi1/hdisk18       OPEN     NORMAL    567       0
    1    fscsi0/hdisk34       OPEN     NORMAL    596       0
    2    fscsi0/hdisk42       OPEN     NORMAL    589       0
The addpaths command reclaims the removed path. The mode of the added path is set to NORMAL and its state to either OPEN or CLOSE, depending on the device state.
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028
================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
    0    fscsi1/hdisk18       OPEN     NORMAL    580       0
    1    fscsi0/hdisk34       OPEN     NORMAL    606       0
    2    fscsi0/hdisk42       OPEN     NORMAL    599       0
    3    fscsi1/hdisk26       OPEN     NORMAL    14        0
Note that fscsi1/hdisk26 is back online with path 3 and is selected for I/O.
Syntax
datapath set adapter <adapter number> {online | offline}
Parameters
adapter number
    The index number of the adapter that you want to change.
online
    Sets the adapter online.
offline
    Sets the adapter offline.
Examples
If you enter the datapath set adapter 0 offline command:
v The mode of Adapter 0 will be changed to OFFLINE while the state of the adapter remains the same.
v All paths attached to adapter 0 change to OFFLINE mode and their states change to Dead, if they were in the Open state.

You can use the datapath set adapter 0 online command to cause an adapter that is offline to come online:
v Adapter 0's mode changes to ACTIVE and its state to NORMAL.
v The mode of all paths attached to adapter 0 changes to NORMAL and their state to either OPEN or CLOSE depending on the SDD vpath device state.
Syntax
datapath set device <device_num1> [<device_num2>] policy <option>
Parameters
device number1 [device number2]
    When two device numbers are entered, this command will apply to all the devices whose index numbers fit within the range of these two device index numbers.
option
    Specifies one of the following policies:
    v rr, where rr indicates round robin
    v rrs, where rrs indicates round robin sequential (AIX and Linux only)
    v lb, where lb indicates load balancing (also known as optimized policy)
    v lbs, where lbs indicates load balancing sequential (AIX and Linux only)
    v df, where df indicates the default policy, which is load balancing
    v fo, where fo indicates failover policy
Note: You can enter the datapath set device N policy command to dynamically change the policy associated with SDD vpath devices in either Close or Open state.
Examples
If you enter datapath set device 2 7 policy rr, the path-selection policy of SDD vpath devices with device index 2 to 7 is immediately changed to the round robin policy.
Syntax
datapath set device <device number> path <path number> {online | offline}
Parameters
device number
    The device index number that you want to change.
path number
    The path number that you want to change.
online
    Sets the path online.
offline
    Removes the path from service.
Examples
If you enter the datapath set device 0 path 0 offline command, path 0 for device 0 changes to Offline mode.
Syntax
datapath set device <n> <m> qdepth {enable | disable}
Parameters
n
    The beginning vpath number for which the queue depth logic setting is to be applied.
m
    The ending vpath number for which the queue depth logic setting is to be applied.
enable
    Enables the queue depth logic.
disable
    Disables the queue depth logic.
Examples
If you enter the datapath set device 0 2 qdepth disable command, the following output is displayed:
Success: set qdepth_enable to no for vpath0
Success: set qdepth_enable to no for vpath1
Success: set qdepth_enable to no for vpath2
The qdepth_enable ODM attribute of these SDD vpath devices is updated. The following output is displayed when you enter lsattr -El vpath0.
# lsattr -El vpath0
active_hdisk    hdisk66/13AB2ZA1020/fscsi3        Active hdisk                   False
active_hdisk    hdisk2/13AB2ZA1020/fscsi2         Active hdisk                   False
active_hdisk    hdisk34/13AB2ZA1020/fscsi2        Active hdisk                   False
active_hdisk    hdisk98/13AB2ZA1020/fscsi3        Active hdisk                   False
policy          df                                Scheduling Policy              True
pvid            0005f9fdcda4417d0000000000000000  Physical volume identifier     False
qdepth_enable   no                                Queue Depth Control            True
reserve_policy  PR_exclusive                      Reserve Policy                 True
serial_number   13AB2ZA1020                       LUN serial number              False
unique_id       yes                               Device Unique Identification   False
Appendix A. SDD, SDDPCM, and SDDDSM data collection for problem analysis
The following sections describe enhanced trace capability for SDD, SDDPCM and SDDDSM.
v sddsrv.log.1
v sddsrv.log.2
v sddsrv.log.3

These files can be found in the following directories:
v AIX - /var/adm/ras
v HP-UX - /var/adm
v Linux - /var/log
v Solaris - /var/adm
v Windows 2000 and Windows NT - \WINNT\system32
v Windows Server 2003, Windows Server 2008, and Windows Server 2012 - \Windows\system32
name (for example, sdddata_hostname_yyyymmdd_hhmmss.tar or sdddata_hostname_yyyymmdd_hhmmss.tar.Z, where yyyymmdd_hhmmss is the timestamp of the file creation).

For Windows, you can run the sddgetdata script from the install directory to collect the data for problem determination. sddgetdata creates a cab file in the install directory with the current date and time as part of the file name (for example, sdddata_hostname_yyyymmdd_hhmmss.cab, where yyyymmdd_hhmmss is the timestamp of the file creation). For SDD, the install directory is %root%\Program Files\IBM\Subsystem Device Driver. For SDDDSM, the install directory is %root%\Program Files\IBM\SDDDSM.

When you report an SDD problem, it is essential to run this script and send this output file for problem determination. Steps within the sddgetdata script might fail depending on the problem and the system condition. In this case, you might have to issue manual commands.

Here is an example output for the AIX platform:
/tmp/sdd_getdata>sddgetdata
/tmp/sdd_getdata>ls
./  ../  sdddata_host1_20050315_122521.tar
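A corresponding invocation on a Windows host with SDDDSM installed might look like the following sketch; the path assumes the default installation directory noted above and that the system drive is C:, and the generated .cab file name will differ on your system:

    cd "C:\Program Files\IBM\SDDDSM"
    sddgetdata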
v AE_bak.log
Starting with SDDPCM 2.6.4.x on AIX, the logging feature is enhanced. The size and number of log files that can be maintained on the system are dynamically configurable. The pcmsrv.conf file has two parameters (max_log_size and max_log_count) to configure the size and the number of log files. Trace data is stored in one or all of the following eight files:
v pcm.log
v pcm.log.1
v pcm.log.2
v pcm.log.3
v pcmsrv.log
v pcmsrv.log.1
v pcmsrv.log.2
v pcmsrv.log.3
VPATH_RESV_CFLICT
    An attempt was made to open an SDD vpath device, but the reservation key of the SDD vpath device is different from the reservation key currently in effect. The attempt to open the device fails and this error log is posted. The device could not be opened because it is currently reserved by someone else.

The following are information messages that are logged if you perform AIX Hot Plug procedures with SDD:

VPATH_ADPT_REMOVED
    The datapath remove adapter n command runs. Adapter n and its child devices are removed from SDD.
VPATH_PATH_REMOVED
    The datapath remove device m path n command runs. Path n for device m is removed from SDD.

The following error messages are logged by sddsrv:

SDDSRV_CONF_MISMATCH
    This error is logged when sddsrv finds that the hdisk information in the driver is different from what sddsrv discovered. sddsrv logs the error to the system error log immediately and every 15 minutes thereafter.
SDDSRV_PORTBINDFAIL
    This error is logged when sddsrv cannot bind the TCP/IP port number specified in its sddsrv.conf file.
SDDSRV_LOG_WFAIL
    This error is logged when sddsrv cannot write its log file (that is, sddsrv.log) to the file system. sddsrv logs the error to the system error log immediately and every 10 minutes thereafter until sddsrv can write again.
SDDSRV_DRLOG_WFAIL
    This error is logged when sddsrv cannot write the driver log file (that is, sdd.log) to the file system.
SDDSRV_PROBEENABLE
    This message is logged when the sddsrv probing functionality is enabled.
SDDSRV_PROBEDISABLE
    This message is logged when the sddsrv probing functionality is disabled.
SDDSRV_PROBEINTERVAL
    This message is logged when the sddsrv probing interval is changed.
SDDPCM_OPENPATH_FAILED
    One of the SDDPCM MPIO hdisk's paths has failed to open. The failing path is put in the INVALID state if the MPIO hdisk is opened.
SDDPCM_OSPAIRASSOCIATE
    Couple a source device and a target device into one Open HyperSwap device.
SDDPCM_OSPAIRBLOCK
    An Open HyperSwap device I/O is blocked.
SDDPCM_OSPAIRDISABLED
    The Open HyperSwap functionality of an Open HyperSwap device is disabled because there are no available paths on either the source or target devices.
SDDPCM_OSPAIRDISASSOCIATE
    Disassociate an Open HyperSwap device from a session. The device is no longer an Open HyperSwap-enabled device.
SDDPCM_OSPAIRENABLED
    The Open HyperSwap functionality of an Open HyperSwap device is enabled, with paths available to both source and target devices.
SDDPCM_OSPAIRSRFAILED
    An Open HyperSwap device failed to perform swap and resume.
SDDPCM_PATH_FAILED
    Several attempts to retry an I/O request for an MPIO device on a path have failed, or a path reaches the threshold of continuous I/O errors. The path state is set to FAILED and the path is taken offline. A FAILED path can be automatically recovered by the health checker if the problem is fixed, or the user can enter the pcmpath set device M path N online command to manually recover the path. For more information, see Using SDDPCM pcmpath commands on page 144.
SDDPCM_PATH_RECOVERED
    A failed path is recovered and is in an operational state.
SDDPCM_QUIESCETIMEOUT
    Exceeded time limit for quiescing I/Os on an Open HyperSwap device.
SDDPCM_RESUMEDONE
    Resuming I/Os on all Open HyperSwap devices in a session is complete.
SDDPCM_SESSIONQUIESCE
    Quiescing I/Os to all Open HyperSwap devices in a session.
SDDPCM_SESSIONRDY
    A session is ready for HyperSwap.
SDDPCM_SESSIONRESUME
    Initiate resume I/Os on all Open HyperSwap devices in a session.
SDDPCM_SESSIONSWAPRESUME
    Initiate swapping and resuming I/Os to target devices on all Open HyperSwap devices in a session.
SDDPCM_SWAPRESUMEDONE
    Swapping and resuming I/Os to target devices on all Open HyperSwap devices in a session is complete.
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: IBM World Trade Asia Corporation Licensing 2-31 Roppongi 3-chome, Minato-ku Tokyo 106, Japan The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATIONS "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publications. IBM may make improvements and/or changes in the product(s) and/or program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation Information Enabling Requests Dept. DZWA 5600 Cottle Road San Jose, CA 95193 U.S.A. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information is for planning purposes only. The information herein is subject to change before the products described become available. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. If you are viewing this information softcopy, the photographs and color illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol ( or ), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/ copytrade.shtml. Adobe is a registered trademark of Adobe Systems Incorporated in the United States, and/or other countries. Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. Java is a trademark of Sun Microsystems, Inc. in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, and service names may be trademarks or service marks of others.
September 30, 2008, any reference to the IBM Agreement for Licensed Internal Code in an agreement between Licensee and IBM means this License Agreement. 1. Definitions Built-in Capacity -- any computing resource or capability that may be included with a Machine and that is to remain inactive, or for which use is restricted, until the right to access and use the resource or capability is properly acquired directly from IBM or through an authorized IBM reseller. Such computing resources and capabilities include, without limitation, processors, memory, storage, interactive processing capacity, and workload-specific resources or capabilities (such as a specific operating system, programming language, or application to which use of the Machine is limited). Licensed Internal Code (also referred to as LIC) -- machine Code used by certain Machines that IBM or an IBM reseller identifies to Licensee as a Specific Machine. Machine -- a hardware device, its features, conversions, upgrades, elements or accessories, or any combination of them. Machine Code -- microcode, basic input/output system code (BIOS), utility programs, device drivers, diagnostics, and any other code (all subject to the exclusions in this License Agreement) delivered with an Machine for the purpose of enabling the Machines function(s) as stated in the IBM document entitled Official Published Specifications applicable to such Machine. Machine Code does not include programs and code provided under open source licenses or other separate license agreements. The term Machine Code includes LIC, any whole or partial copy of Machine Code, and any fix, patch, or replacement provided for Machine Code (including LIC). 2. License International Business Machines Corporation, one of its subsidiaries, or a third party owns (including, without limitation, ownership of all copyrights in) Machine Code and all copies of Machine Code (including, without limitation, copies of the original Machine Code and copies made from copies). Machine Code is copyrighted and licensed (not sold). IBM licenses Machine Code to only one rightful possessor at a time. 2.1 Authorized Use IBM grants the Licensee a nonexclusive license to use Machine Code on, or in conjunction with, only the Machine for which IBM provided it, and only to the extent of IBM authorizations that Licensee has acquired for access to and use of Built-in-Capacity. If Licensees use of Built-in-Capacity exceeds such IBM authorizations, Licensee agrees to pay IBM or (if applicable) an authorized IBM reseller the full price of permanent, unrestricted use of the Built-in-Capacity at the then current price. Licensee is not authorized to use such Built-in-Capacity until such payment is made. Under each license, IBM authorizes Licensee to do only the following: a. execute Machine Code to enable the Machine to function according to its Official Published Specifications;
b. use only the Built-in-Capacity that Licensee has properly acquired for the Machine directly from IBM or through an authorized IBM reseller;
c. make a reasonable number of copies of Machine Code to be used solely for backup or archival purposes, provided i) Licensee reproduces the copyright notice and any other legend of ownership on any such copies and ii) uses the copies only to replace the original, when necessary; and
d. execute and display Machine Code as necessary to maintain the Machine.
No other licenses or rights (including licenses or rights under patents) are granted either directly, by implication, or otherwise.
2.2 Actions Licensee May Not Take
Licensee agrees to use Machine Code only as authorized above. Licensee may not do any of the following:
a. otherwise copy, display, transfer, adapt, modify, or distribute (electronically or otherwise) Machine Code, except as IBM may authorize in a Machine's user documentation or in writing to Licensee;
b. reverse assemble, reverse compile, otherwise translate, or reverse engineer Machine Code unless expressly permitted by applicable law without the possibility of contractual waiver;
c. assign the license for Machine Code; or
d. sublicense, rent, or lease Machine Code or any copy of it.
2.3
Replacements, Fixes, and Patches Licensee agrees to acquire any replacement, fix or patch for, or additional copy of, Machine Code directly from IBM in accordance with IBMs standard policies and practices. Unless Licensee is provided with a different IBM Machine Code license agreement, the terms of this License Agreement will apply to any replacement, fix or patch for, or additional copy of, Machine Code Licensee acquires for the Machine. If such additional or different terms apply to any replacement, fix, or patch, Licensee accepts them when Licensee downloads or uses the replacement, fix, or patch.
2.4
Machine Code Transfers Licensee may transfer possession of Machine Code and its media to another party only with the transfer of the Machine for which that Machine Code is authorized. In the event of such transfer, Licensee agrees to 1) destroy all of Licensees copies of that Machine Code that were not provided by IBM, 2) either provide to the other party all Licensees IBM-provided copies of Machine Code or destroy them, 3) provide to the other party a copy of this License Agreement, and 4) provide to the other party all user documentation. IBM licenses the other party to use Machine Code when that party accepts the terms of this License Agreement and is the rightful possessor of the associated Machine.
2.5
Termination Licensees license for Machine Code terminates when Licensee no longer rightfully possesses the associated Machine.
3.
Built-in Capacity Built-in-Capacity is limited by certain technological measures in Machine Code. Licensee agrees to IBM's implementation of such technological measures to protect Built-in-Capacity, including measures that may impact availability of data or performance of the Machine. As a condition of Licensees license to use Machine Code under this License Agreement, Licensee may not (i) circumvent such technological measures or use a third party or third party product to do so, or (ii) otherwise access or use unauthorized Built-in-Capacity. If IBM determines that changes are necessary to the technological measures designed to limit access to, or use of, Built-in-Capacity, IBM may provide Licensee with changes to such technological measures. As a condition of Licensees license to use Machine Code under this License Agreement, Licensee agrees, at IBMs option, to apply or allow IBM to apply such changes.
4.
Relationship to Other Agreements If Licensee obtained Machine Code directly from IBM and an IBM Customer Agreement (ICA) or an equivalent agreement is in effect between Licensee and IBM, the terms of this License Agreement are incorporated by reference into the ICA or the applicable equivalent agreement. If no ICA or equivalent agreement is in effect between Licensee and IBM or Licensee obtained Machine Code through an IBM reseller or other third party, the IBM Statement of Limited Warranty (SOLW) is incorporated by reference into this License Agreement and applies to Machine Code. To the extent of any conflict between the terms of this License Agreement and those of (i) the ICA or applicable equivalent agreement or (ii) the SOLW, the terms of this License Agreement prevail.
Glossary
This glossary includes terms for disk storage system products and Virtualization products. It includes selected terms and definitions from:
v The American National Standard Dictionary for Information Systems, ANSI X3.172-1990, copyright 1990 by the American National Standards Institute (ANSI), 11 West 42nd Street, New York, New York 10036. Definitions derived from this book have the symbol (A) after the definition.
v IBM Terminology, which is available online at the following Web site: www-01.ibm.com/software/globalization/terminology/index.jsp. Definitions derived from this source have the symbol (GC) after the definition.
v The Information Technology Vocabulary developed by Subcommittee 1, Joint Technical Committee 1, of the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC JTC1/SC1). Definitions derived from this book have the symbol (I) after the definition. Definitions taken from draft international standards, committee drafts, and working papers being developed by ISO/IEC JTC1/SC1 have the symbol (T) after the definition, indicating that final agreement has not been reached among the participating National Bodies of SC1.
This glossary uses the following cross-reference forms:
See This refers the reader to one of two kinds of related information:
v A term that is the expanded form of an abbreviation or acronym. This expanded form of the term contains the full definition.
v A synonym or more preferred term.
See also This refers the reader to one or more related terms.
Special characters
1750. The machine type for the IBM System Storage DS6000 series. Models for the DS6000 include the 511 and EX1. 1820. The machine number for the RSSM. 2105. The machine number for the IBM TotalStorage Enterprise Storage Server (ESS). Models of the ESS are expressed as the number 2105 followed by Model <xxx>, such as 2105 Model 800. The 2105 Model 100 is an ESS expansion enclosure that is typically referred to simply as the Model 100. See also IBM TotalStorage Enterprise Storage Server and Model 100. 2107. A hardware machine type for the IBM System Storage DS8000 series. Hardware models for the 2107 include base units 921, 922, 931, 932, 9A2, 9B2 and expansion units 92E and 9AE. 2145. A hardware machine type for the IBM System Storage SAN Volume Controller. Models of the SAN Volume Controller are expressed as the number 2145 followed by -xxx, such as 2145-8G4. Hardware models for the 2145 include 2145-4F2, 2145-8F2, 2145-8F4, and 2145-8G4. 3390. The machine number of an IBM disk storage system. The ESS, when interfaced to IBM S/390 or IBM System z hosts, is set up to appear as one or more 3390 devices, with a choice of 3390-2, 3390-3, or 3390-9 track formats. 3990. The machine number of an IBM control unit. 7133. The machine number of an IBM disk storage system. The Model D40 and 020 drawers of the 7133 can be installed in the 2105-100 expansion enclosure of the ESS. 8-pack. See disk eight pack. / file system. The root file system; contains files that contain machine-specific configuration data. /tmp file system. A shared storage location for files. /usr file system. Contains files and programs necessary for operating the machine. /var file system. Contains files that are variable on a per-client basis, such as spool and mail files.
A
access. (1) To obtain the use of a computer resource. (2) In computer security, a specific type of interaction between a subject and an object that results in flow of information from one to the other. access-any mode. One of the two access modes that can be set for the disk storage system product during initial configuration. It enables all fibre-channelattached host systems with no defined access profile to access all logical volumes on the disk storage system. With a profile defined in ESS Specialist for a particular host, that host has access only to volumes that are assigned to the WWPN for that host. See also pseudo-host and worldwide port name. ACK. See request for acknowledgement and acknowledgement. active Copy Services server. The Copy Services server that manages the Copy Services domain. Either the primary or the backup Copy Services server can be the active Copy Services server. The backup Copy Services server is available to become the active Copy Services server if the primary Copy Services server fails. See also backup Copy Services server, Copy Services client, and primary Copy Services server. active/active mode. A configuration that enables one controller node of a storage system pair to process I/O requests and provide a standby capability for the other controller node. Generally, an active/active storage system involves a battery-backed mirrored cache, in which the cache content of a controller is mirrored to another for data integrity and availability. active/passive mode. A configuration that enables one controller node of a storage system pair to process I/O requests, while the other controller node is idle in standby mode ready to take over I/O activity if the active primary controller fails or is taken offline. alert. A message or log that a storage facility generates as the result of error event collection and analysis. An alert indicates that a service action is required. allegiance. In Enterprise Systems Architecture/390, a relationship that is created between a device and one or more channel paths during the processing of certain conditions. See also implicit allegiance, contingent allegiance, and reserved allegiance. allocated storage. In a disk storage system, the space that is allocated to volumes but not yet assigned. See also assigned storage. American National Standards Institute (ANSI). An organization of producers, consumers, and general interest groups that establishes the procedures by which accredited organizations create and maintain voluntary industry standards in the United States. (A)
Anonymous. In ESS Specialist, the label on an icon that represents all connections that are using fibre-channel adapters between the ESS and hosts and that are not completely defined to the ESS. See also anonymous host, pseudo-host, and access-any mode. anonymous host. Synonym for pseudo-host (in contrast to the Anonymous label that appears on some pseudo-host icons. See also Anonymous and pseudo-host. ANSI. See American National Standards Institute. APAR. See authorized program analysis report. (GC) arbitrated loop. For fibre-channel connections, a topology that enables the interconnection of a set of nodes. See also point-to-point connection and switched fabric. array. An ordered collection, or group, of physical devices (disk drive modules) that are used to define logical volumes or devices. More specifically, regarding the disk storage system, an array is a group of disks designated by the user to be managed by the RAID-5 technique. See also redundant array of independent disks. ASCII. (American National Standard Code for Information Interchange) The standard code, using a coded character set consisting of 7-bit coded characters (8 bits including parity check), that is used for information interchange among data processing systems, data communication systems, and associated equipment. The ASCII set consists of control characters and graphic characters. (A) Some organizations, including IBM, have used the parity bit to expand the basic code set. assigned storage. On a disk storage system, the space allocated to a volume and assigned to a port. authorized program analysis report (APAR). A report of a problem caused by a suspected defect in a current, unaltered release of a program. (GC) availability. The degree to which a system or resource is capable of performing its normal function. See data availability.
B
backup Copy Services server. One of two Copy Services servers in a Copy Services domain. The other Copy Services server is the primary Copy Services server. The backup Copy Services server is available to become the active Copy Services server if the primary Copy Services server fails. A Copy Services server is software that runs in one of the two clusters of an ESS, and manages data-copy operations for that Copy Services server group. See also active Copy Services server, Copy Services client, and primary Copy Services server.
bay. In the disk storage system, the physical space used for installing SCSI, ESCON, and fibre-channel host adapter cards. The ESS has four bays, two in each cluster. See also service boundary. bit. (1) Either of the digits 0 or 1 when used in the binary numeration system. (T) (2) The storage medium required to store a single binary digit. See also byte. block. (1) A string of data elements recorded or transmitted as a unit. The elements may be characters, words, or physical records. (T) (2) In the disk storage system, a group of consecutive bytes used as the basic storage unit in fixed-block architecture (FBA). All blocks on the storage device are the same size (fixed size). See also fixed-block architecture and data record. byte. (1) A group of eight adjacent binary digits that represent one EBCDIC character. (2) The storage medium required to store eight bits. See also bit.
C
cache. A special-purpose buffer storage, smaller and faster than main storage, used to hold a copy of instructions and data obtained from main storage and likely to be needed next by the processor. (T) cache fast write. In the disk storage system, a form of the fast-write operation in which the storage server writes the data directly to cache, where it is available for later destaging. cache hit. An event that occurs when a read operation is sent to the cluster, and the requested data is found in cache. The opposite of cache miss. cache memory. Memory, typically volatile memory, that a storage server uses to improve access times to instructions or data. The cache memory is typically smaller and faster than the primary memory or storage medium. In addition to residing in cache memory, the same data also resides on the storage devices in the storage facility. cache miss. An event that occurs when a read operation is sent to the cluster, but the data is not found in cache. The opposite of cache hit. call home. A communication link established between the disk storage system and a service provider. The disk storage system can use this link to place a call to IBM or to another service provider when it requires service. With access to the machine, service personnel can perform service tasks, such as viewing error logs and problem logs or initiating trace and dump retrievals. See also heartbeat and remote technical assistance information network. cascading. (1) Connecting network controllers to each other in a succession of levels, to concentrate many more lines than a single level permits. (2) In high-availability cluster multiprocessing (HACMP), cascading pertains to a cluster configuration in which the cluster node with the highest priority for a particular resource acquires the resource if the primary node fails. The cluster node relinquishes the resource to the primary node upon reintegration of the primary node into the cluster. catcher. A server that service personnel use to collect and retain status data that a disk storage system sends to it. CCR. See channel command retry. CCW. See channel command word. CD. See compact disc. compact disc. An optically read disc, typically storing approximately 660 MB. CD-ROM (compact disc read-only memory) refers to the read-only format used to distribute disk storage system code and documentation. CEC. See computer-electronic complex. channel. In Enterprise Systems Architecture/390, the part of a channel subsystem that manages a single I/O interface between a channel subsystem and a set of control units. channel command retry (CCR). In Enterprise Systems Architecture/390, the protocol used between a channel and a control unit that enables the control unit to request that the channel reissue the current command. channel command word (CCW). In Enterprise Systems Architecture/390, a data structure that specifies an I/O operation to the channel subsystem. channel path. In Enterprise Systems Architecture/390, the interconnection between a channel and its associated control units. channel subsystem. In Enterprise Systems Architecture/390, the part of a host computer that manages I/O communication between the program and any attached control units. channel-subsystem image. In Enterprise Systems Architecture/390, the logical functions that a system requires to perform the function of a channel subsystem. With ESCON multiple image facility (EMIF), one channel subsystem image exists in the channel subsystem for each logical partition (LPAR). Each image appears to be an independent channel subsystem program, but all images share a common set of hardware facilities. CKD. See count key data. CLI. See command-line interface. See also Copy Services command-line interface.
cluster. (1) In the disk storage system, a partition capable of performing all disk storage system functions. With two clusters in the disk storage system, any operational cluster can take over the processing of a failing cluster. (2) In the AIX operating system, a group of nodes within a complex. cluster processor complex (CPC). In the disk storage system, the unit within a cluster that provides the management function for the disk storage system. It consists of cluster processors, cluster memory, and related logic. Code Distribution and Activation (CDA). Process of installing licensed machine code on a disk storage system while applications continue to run. command-line interface (CLI). An interface provided by an operating system that defines a set of commands and enables a user (or a script-like language) to issue these commands by typing text in response to the command prompt (for example, DOS commands, UNIX shell commands). See also Copy Services command-line interface. compression. (1) The process of eliminating gaps, empty fields, redundancies, and unnecessary data to shorten the length of records or blocks. (2) Any encoding that reduces the number of bits used to represent a given message or record. (GC) computer-electronic complex (CEC). The set of hardware facilities associated with a host computer. concurrent copy. A facility on a storage server that enables a program to make a backup of a data set while the logical volume remains available for subsequent processing. The data in the backup copy is frozen at the point in time that the server responds to the request. concurrent download of licensed machine code. Process of installing licensed machine code while applications continue to run. concurrent maintenance. Service that is performed on a unit while it is operational. concurrent media maintenance. Service performed on a disk drive module (DDM) without losing access to the data. configure. In storage, to define the logical and physical configuration of the input/output (I/O) subsystem through the user interface that the storage facility provides for this function. consistent copy. A copy of a data entity (a logical volume, for example) that contains the contents of the entire data entity at a single instant in time.
console. A user interface to a server, such as can be provided by a personal computer. See also IBM TotalStorage ESS Master Console. contingent allegiance. In Enterprise Systems Architecture/390, a relationship that is created in a control unit between a device and a channel when the channel accepts unit-check status. The allegiance causes the control unit to guarantee access; the control unit does not present the busy status to the device. The allegiance enables the channel to retrieve sense data that is associated with the unit-check status on the channel path associated with the allegiance. control unit (CU). (1) A device that coordinates and controls the operation of one or more input/output devices, and synchronizes the operation of such devices with the operation of the system as a whole. (2) In Enterprise Systems Architecture/390, a storage server with ESCON, FICON, or OEMI interfaces. The control unit adapts a native device interface to an I/O interface supported by an ESA/390 host system. (3) In the ESS, the portion of the ESS that supports the attachment of emulated CKD devices over ESCON, FICON, or OEMI interfaces. See also cluster. control-unit image. In Enterprise Systems Architecture/390, a logical subsystem that is accessed through an ESCON or FICON I/O interface. One or more control-unit images exist in each control unit. Each image appears as an independent control unit, but all control-unit images share a common set of hardware facilities. The ESS can emulate 3990-3, TPF, 3990-6, or 2105 control units. control-unit initiated reconfiguration (CUIR). A software mechanism that the ESS uses to request that an operating system of an IBM System z or S/390 host verify that one or more subsystem resources can be taken offline for service. The ESS can use this process to automatically vary channel paths offline and online to facilitate bay service or concurrent code installation. Depending on the operating system, support for this process might be model-dependent, might depend on the IBM TotalStorage Enterprise Storage Server Subsystem Device Driver, or might not exist. Coordinated Universal Time (UTC). The international standard of time that is kept by atomic clocks around the world. Copy Services client. Software that runs on each ESS cluster in the Copy Services server group and that performs the following functions: v Communicates configuration, status, and connectivity information to the Copy Services server. v Performs data-copy functions on behalf of the Copy Services server. See also active Copy Services server, backup Copy Services server, and primary Copy Services server.
Copy Services CLI. See Copy Services Command-Line Interface. Copy Services domain. A collection of user-designated ESS clusters participating in Copy Services functions managed by a designated active Copy Services server. See also Copy Services server, dual-active server, and single-active server. Copy Services command-line interface (Copy Services CLI). In the ESS, command-line interface software provided with ESS Copy Services and used for invoking Copy Services functions from host systems attached to the ESS. See also command-line interface. Copy Services server. An ESS cluster designated by the copy services administrator to perform the ESS Copy Services functions. See also active Copy Services server, backup Copy Services server, and primary Copy Services server. Copy Services server group. A collection of user-designated ESS clusters participating in Copy Services functions managed by a designated active Copy Services server. A Copy Services server group is also called a Copy Services domain. See also active Copy Services server, backup Copy Services server, and primary Copy Services server. copy set. A set of volumes that contain copies of the same data. All the volumes in a copy set are the same format (count key data [CKD] or fixed block) and size. count field. The first field of a count key data (CKD) record. This eight-byte field contains a four-byte track address (CCHH). It defines the cylinder and head that are associated with the track, and a one-byte record number (R) that identifies the record on the track. It defines a one-byte key length that specifies the length of the record's key field (0 means no key field). It defines a two-byte data length that specifies the length of the record's data field (0 means no data field). Only the end-of-file record has a data length of zero. count key data (CKD). In Enterprise Systems Architecture/390, a data-record format employing self-defining record formats in which each record is represented by up to three fields: a count field identifying the record and specifying its format, an optional key field that can be used to identify the data area contents, and an optional data field that typically contains the user data. For CKD records on the ESS, the logical volume size is defined in terms of the device emulation mode (3390 or 3380 track format). The count field is always 8 bytes long and contains the lengths of the key and data fields, the key field has a length of 0 to 255 bytes, and the data field has a length of 0 to 65 535 or the maximum that will fit on the track. See also data record. CPC. See cluster processor complex. CRC. See cyclic redundancy check.
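The count-field layout described above (a four-byte CCHH track address, a one-byte record number, a one-byte key length, and a two-byte data length) can be pictured as a simple eight-byte record structure. The following C sketch is illustrative only; the structure and field names are hypothetical and are not part of any IBM programming interface or of SDD.

    #include <stdint.h>

    /* Hypothetical sketch of the eight-byte CKD count field described above.
       Field names are invented for illustration; they are not IBM-defined. */
    struct ckd_count_field {
        uint16_t cylinder;     /* CC portion of the CCHH track address */
        uint16_t head;         /* HH portion of the CCHH track address */
        uint8_t  record;       /* record number (R) that identifies the record on the track */
        uint8_t  key_length;   /* length of the key field; 0 means no key field */
        uint16_t data_length;  /* length of the data field; 0 only for the end-of-file record */
    };                         /* 2 + 2 + 1 + 1 + 2 = 8 bytes, matching the count-field size */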
CU. See control unit. CUIR. See control-unit initiated reconfiguration. custom volume. In the ESS, a volume in count-key-data (CKD) format that is not a standard volume, which basically means that it does not necessarily present the same number of cylinders and capacity to its assigned logical control unit as provided by one of the following standard S/390 volume types: 3390-2, 3390-3, 3390-9, 3390-2 (3380-track mode), or 3390-3 (3380-track mode). See also count-key-data, interleave, standard volume, and volume. CUT. See Coordinated Universal Time. cyclic redundancy check (CRC). A redundancy check in which the check key is generated by a cyclic algorithm. (T) cylinder. A unit of storage on a CKD device with a fixed number of tracks.
D
DA. See device adapter. See also SSA adapter. daisy chain. See serial connection. DASD. See direct access storage device. DASD fast write (DFW). A function of a storage server in which active write data is stored in nonvolatile cache, thus avoiding exposure to data loss. data availability. The degree to which data is available when needed, typically measured as a percentage of time that the system would be capable of responding to any data request (for example, 99.999% available). data compression. A technique or algorithm used to encode data such that the encoded result can be stored in less space than the original data. The original data can be recovered from the encoded result through a reverse technique or reverse algorithm. See also compression. Data Facility Storage Management Subsystem. An operating environment that helps automate and centralize the management of storage. To manage storage, DFSMS provides the storage administrator with control over data class, storage class, management class, storage group, and automatic class selection routine definitions. data field. The optional third field of a count key data (CKD) record. The count field specifies the length of the data field. The data field contains data that the program writes. data record. The basic unit of S/390 and IBM System z storage on an ESS, also known as a count-key-data
(CKD) record. Data records are stored on a track. The records are sequentially numbered starting with 0. The first record, R0, is typically called the track descriptor record and contains data normally used by the operating system to manage the track. See also count-key-data and fixed-block architecture. data sharing. The ability of multiple host systems to concurrently utilize data that they store on one or more storage devices. The storage facility enables configured storage to be accessible to any, or all, attached host systems. To use this capability, the host program must be designed to support data that it is sharing. DDM. See disk drive module. DDM group. See disk eight pack. dedicated storage. Storage within a storage facility that is configured such that a single host system has exclusive access to the storage. demote. To remove a logical data unit from cache memory. A storage server demotes a data unit to make room for other logical data units in the cache or because the logical data unit is not valid. The ESS must destage logical data units with active write units before they can be demoted. destaging. Movement of data from an online or higher priority to an offline or lower priority device. The ESS stages incoming data into cache and then destages it to disk. device. In Enterprise Systems Architecture/390, a disk drive. device adapter (DA). A physical component of the ESS that provides communication between the clusters and the storage devices. The ESS has eight device adapters that it deploys in pairs, one from each cluster. DA pairing enables the ESS to access any disk drive from either of two paths, providing fault tolerance and enhanced availability. device address. In Enterprise Systems Architecture/390, the field of an ESCON or FICON device-level frame that selects a specific device on a control-unit image. device ID. In the ESS, the unique two-digit hexadecimal number that identifies the logical device. device interface card. A physical subunit of a storage cluster that provides the communication with the attached DDMs. device number. In Enterprise Systems Architecture/390, a four-hexadecimal-character identifier, for example 13A0, that the systems administrator associates with a device to facilitate
communication between the program and the host operator. The device number is associated with a subchannel. device sparing. A subsystem function that automatically copies data from a failing DDM to a spare DDM. The subsystem maintains data access during the process. DFS. See distributed file service. direct access storage device (DASD). (1) A mass storage medium on which a computer stores data. (2) A disk device. disk cage. A container for disk drives. Each disk cage supports eight disk eight packs (64 disks). disk drive. Standard term for a disk-based nonvolatile storage medium. The ESS uses hard disk drives as the primary nonvolatile storage media to store host data. disk drive module (DDM). A field replaceable unit that consists of a single disk drive and its associated packaging. disk drive module group. See disk eight pack. disk eight pack. In the ESS, a group of eight disk drive modules (DDMs) installed as a unit in a DDM bay. disk group. In the ESS, a collection of disk drives in the same SSA loop set up by the ESS to be available to be assigned as a RAID array. A disk group can be formatted as CKD or fixed block, and as RAID or non-RAID, or it can be left unformatted. A disk group is a logical assemblage of eight disk drives, in contrast to a disk eight pack. See also disk eight pack. disk storage system. One or more storage devices that are installed with a storage software application to provide a single common pool of storage that is used to store, safeguard, retrieve, and share data. Most disk storage systems also include disaster planning and recovery options. In SDD, a disk storage system refers to an ESS, DS6000, or DS8000 device. distributed file service (DFS). A service that provides data access over IP networks. DNS. See domain name system. domain. (1) That part of a computer network in which the data processing resources are under common control. (2) In TCP/IP, the naming system used in hierarchical networks. (3) A Copy Services server group, in other words, the set of clusters designated by the user to be managed by a particular Copy Services server. domain name system (DNS). In TCP/IP, the server program that supplies name-to-address translation by mapping domain names to internet addresses. The
address of a DNS server is the internet address of the server that hosts the DNS software for the network. drawer. A unit that contains multiple DDMs and provides power, cooling, and related interconnection logic to make the DDMs accessible to attached host systems. drive. (1) A peripheral device, especially one that has addressed storage media. See also disk drive module. (2) The mechanism used to seek, read, and write information on a storage medium. dual-active mode. (1) With respect to a Copy Services server, the mode of operation of the server when the LIC level of the associated ESS cluster is 2.0 or higher. (2) With respect to a Copy Services domain, the mode of operation of the domain, when the Copy Services servers are dual-active servers. See also Copy Services server, Copy Services domain, mixed mode, and single-active server. duplex. (1) Regarding ESS Copy Services, the state of a volume pair after PPRC has completed the copy operation and the volume pair is synchronized. (2) In general, pertaining to a communication mode in which data can be sent and received at the same time. dynamic sparing. The ability of a storage server to move data from a failing disk drive module (DDM) to a spare DDM while maintaining storage functions.
E
E10. The predecessor of the F10 model of the ESS. See also F10. E20. The predecessor of the F20 model of the ESS. See also F20. EBCDIC. See extended binary-coded decimal interchange code. EC. See engineering change. ECKD. See extended count key data. eight pack. See disk eight pack. electrostatic discharge (ESD). An undesirable discharge of static electricity that can damage equipment and degrade electrical circuitry. emergency power off (EPO). A means of turning off power during an emergency, usually a switch. EMIF. See ESCON multiple image facility. enclosure. A unit that houses the components of a storage subsystem, such as a control unit, disk drives, and power source.
end of file. A coded character recorded on a data medium to indicate the end of the medium. On a CKD direct access storage device, the subsystem indicates the end of a file by including a record with a data length of zero. engineering change (EC). An update to a machine, part, or program. Enterprise Storage Server. See IBM TotalStorage Enterprise Storage Server. Enterprise Systems Architecture/390 (ESA/390). An IBM architecture for mainframe computers and peripherals. Processor systems that follow the ESA/390 architecture include the ES/9000 family. See also z/Architecture. Enterprise Systems Connection (ESCON). (1) An Enterprise Systems Architecture/390 and IBM System z computer peripheral interface. The I/O interface uses ESA/390 logical protocols over a serial interface that configures attached units to a communication fabric. (2) A set of IBM products and services that provide a dynamically connected environment within an enterprise. EPO. See emergency power off. ERDS. See error-recording data set. error-recording data set (ERDS). On S/390 and IBM System z hosts, a data set that records data-storage and data-retrieval errors. A service information message (SIM) provides the error information for the ERDS. ERP. See error recovery procedure. error recovery procedure (ERP). Procedures designed to help isolate and, where possible, to recover from errors in equipment. The procedures are often used in conjunction with programs that record information on machine malfunctions. ESA/390. See Enterprise Systems Architecture/390. ESCD. See ESCON director. ESCON. See Enterprise System Connection. ESCON channel. An S/390 or IBM System z channel that supports ESCON protocols. ESCON director (ESCD). An I/O interface switch that provides for the interconnection of multiple ESCON interfaces in a distributed-star topology. ESCON host systems. S/390 or IBM System z hosts that attach to the ESS with an ESCON adapter. Such host systems run on operating systems that include MVS, VSE, TPF, or versions of VM. ESCON multiple image facility (EMIF). In Enterprise Systems Architecture/390, a function that enables
LPARs to share an ESCON channel path by providing each LPAR with its own channel-subsystem image. EsconNet. In ESS Specialist, the label on a pseudo-host icon that represents a host connection that uses the ESCON protocol and that is not completely defined on the ESS. See also pseudo-host and access-any mode. ESD. See electrostatic discharge. eServer. See IBM Eserver. ESS. See IBM TotalStorage Enterprise Storage Server. ESS Copy Services. In the ESS, a collection of optional software features, with a Web-browser interface, used for configuring, managing, and monitoring data-copy functions. ESS Copy Services CLI. See Copy Services Command-Line Interface. ESS Expert. See IBM TotalStorage Enterprise Storage Server Expert. ESS Master Console. See IBM TotalStorage ESS Master Console. ESSNet. See IBM TotalStorage Enterprise Storage Server Network. ESS Specialist. See IBM TotalStorage Enterprise Storage Server Specialist. Expert. See IBM TotalStorage Enterprise Storage Server Expert. extended binary-coded decimal interchange code (EBCDIC). A coding scheme developed by IBM used to represent various alphabetic, numeric, and special symbols with a coded character set of 256 eight-bit codes. extended count key data (ECKD). An extension of the CKD architecture. Extended Remote Copy (XRC). A function of a storage server that assists a control program to maintain a consistent copy of a logical volume on another storage facility. All modifications of the primary logical volume by any attached host are presented in order to a single host. The host then makes these modifications on the secondary logical volume. extent. A continuous space on a disk that is occupied by or reserved for a particular data set, data space, or file. The unit of increment is a track. See also multiple allegiance and parallel access volumes.
F
F10. A model of the ESS featuring a single-phase power supply. It has fewer expansion capabilities than the Model F20. F20. A model of the ESS featuring a three-phase power supply. It has more expansion capabilities than the Model F10, including the ability to support a separate expansion enclosure. fabric. In fibre-channel technology, a routing structure, such as a switch, receives addressed information and routes to the appropriate destination. A fabric can consist of more than one switch. When multiple fibre-channel switches are interconnected, they are said to be cascaded. failback. Cluster recovery from failover following repair. See also failover. failover. (1) In SAN Volume Controller, the function that occurs when one redundant part of the system takes over the workload of another part of the system that has failed. (2) In the ESS, the process of transferring all control of the ESS to a single cluster in the ESS when the other cluster in the ESS fails. See also cluster. fast write. A write operation at cache speed that does not require immediate transfer of data to a disk drive. The subsystem writes the data directly to cache, to nonvolatile storage, or to both. The data is then available for destaging. A fast-write operation reduces the time an application must wait for the I/O operation to complete. FBA. See fixed-block architecture. FC. See feature code. Note: FC is a common abbreviation for fibre channel in the industry, but the ESS customer documentation library reserves FC for feature code. FC-AL. See Fibre Channel-Arbitrated Loop. FCP. See fibre-channel protocol. FCS. See fibre-channel standard. feature code (FC). A code that identifies a particular orderable option and that is used by service personnel to process hardware and software orders. Individual optional features are each identified by a unique feature code. fibre channel. A data-transmission architecture based on the ANSI fibre-channel standard, which supports full-duplex communication. The ESS supports data transmission over fiber-optic cable through its fibre-channel adapters. See also fibre-channel protocol and fibre-channel standard.
Fibre Channel-Arbitrated Loop (FC-AL). An implementation of the fibre-channel standard that uses a ring topology for the communication fabric. See American National Standards Institute (ANSI) X3T11/93-275. In this topology, two or more fibre-channel end points are interconnected through a looped interface. The ESS supports this topology. fibre-channel connection (FICON). A fibre-channel communications protocol designed for IBM mainframe computers and peripherals. fibre-channel protocol (FCP). A protocol used in fibre-channel communications with five layers that define how fibre-channel ports interact through their physical links to communicate with other ports. fibre-channel standard (FCS). An ANSI standard for a computer peripheral interface. The I/O interface defines a protocol for communication over a serial interface that configures attached units to a communication fabric. The protocol has two layers. The IP layer defines basic interconnection protocols. The upper layer supports one or more logical protocols (for example, FCP for SCSI command protocols and SBCON for ESA/390 command protocols). See American National Standards Institute (ANSI) X3.230-199x. See also fibre-channel protocol. FICON. See fibre-channel connection. FiconNet. In ESS Specialist, the label on a pseudo-host icon that represents a host connection that uses the FICON protocol and that is not completely defined on the ESS. See also pseudo-host and access-any mode. field replaceable unit (FRU). An assembly that is replaced in its entirety when any one of its components fails. In some cases, a field replaceable unit might contain other field replaceable units. (GC) FIFO. See first-in-first-out. File Transfer Protocol (FTP). In TCP/IP, an application protocol used to transfer files to and from host computers. See also Transmission Control Protocol/Internet Protocol. firewall. A protection against unauthorized connection to a computer or a data storage system. The protection is usually in the form of software on a gateway server that grants access to users who meet authorization criteria. first-in-first-out (FIFO). A queuing technique in which the next item to be retrieved is the item that has been in the queue for the longest time. (A) fixed-block architecture (FBA). An architecture for logical devices that specifies the format of and access mechanisms for the logical data units on the device.
The logical data unit is a block. All blocks on the device are the same size (fixed size). The subsystem can access them independently. fixed-block device. An architecture for logical devices that specifies the format of the logical data units on the device. The logical data unit is a block. All blocks on the device are the same size (fixed size); the subsystem can access them independently. This is the required format of the logical data units for host systems that attach with a SCSI or fibre-channel interface. See also fibre-channel and small computer systems interface. FlashCopy. An optional feature for the ESS that can make an instant copy of data, that is, a point-in-time copy of a volume. FRU. See field replaceable unit. FTP. See File Transfer Protocol. full duplex. See duplex.
G
GB. See gigabyte. GDPS. See Geographically Dispersed Parallel Sysplex. Geographically Dispersed Parallel Sysplex (GDPS). An S/390 multisite application-availability solution. gigabyte (GB). A gigabyte of storage is 10^9 bytes. A gigabyte of memory is 2^30 bytes. group. In ESS documentation, a nickname for two different kinds of groups, depending on the context. See disk eight pack or Copy Services server group.
H
HA. See host adapter. HACMP. See High-Availability Cluster Multiprocessing. hard disk drive (HDD). (1) A storage medium within a storage server used to maintain information that the storage server requires. (2) A mass storage medium for computers that is typically available as a fixed disk (such as the disks used in system units of personal computers or in drives that are external to a personal computer) or a removable cartridge. hardware service manager (HSM). An option on an AS/400 or IBM System i host that enables the user to display and work with system hardware resources and to debug input-output processors (IOP), input-output adapters (IOA), and devices. HBA. See host bus adapter. HDA. See head and disk assembly.
HDD. See hard disk drive. hdisk. An AIX term for storage space. head and disk assembly (HDA). The portion of an HDD associated with the medium and the read/write head. heartbeat. A status report sent at regular intervals from the ESS. The service provider uses this report to monitor the health of the call home process. See also call home, heartbeat call home record, and remote technical assistance information network. heartbeat call home record. Machine operating and service information sent to a service machine. These records might include such information as feature code information and product logical configuration information. hierarchical storage management. (1) A function provided by storage management software such as Tivoli Storage Management or Data Facility Storage Management Subsystem/MVS (DFSMS/MVS) to automatically manage free space based on the policy that the storage administrator sets. (2) In AS/400 storage management, an automatic method to manage and distribute data between the different storage layers, such as disk units and tape library devices. High-Availability Cluster Multiprocessing (HACMP). Software that provides host clustering, so that a failure of one host is recovered by moving jobs to other hosts within the cluster. high-speed link (HSL). A hardware connectivity architecture that links system processors to system input/output buses and other system units. home address (HA). A nine-byte field at the beginning of a track that contains information that identifies the physical track and its association with a cylinder. Note: In the ESS, the acronym HA is shared between home address and host adapter. See also host adapter. hop. Interswitch connection. A hop count is the number of connections that a particular block of data traverses between source and destination. For example, data traveling from one hub over a wire to another hub traverses one hop. host. See host system. host adapter (HA). A physical subunit of a storage server that provides the ability to attach to one or more host I/O interfaces. The Enterprise Storage Server has four HA bays, two in each cluster. Each bay supports up to four host adapters. In the ESS, the acronym HA is shared between home address and host adapter. See also home address.
host bus adapter. An interface card that connects a host bus, such as a peripheral component interconnect (PCI) bus, to the storage area network. host name. The Internet address of a machine in the network. In the ESS, the host name can be entered in the host definition as the fully qualified domain name of the attached host system, such as mycomputer.city.company.com, or as the subname of the fully qualified domain name, for example, mycomputer. See also host system. host processor. A processor that controls all or part of a user application network. In a network, the processing unit in which the data communication access method resides. See also host system. host system. A computer, either of the mainframe (S/390 or IBM system z) or of the open-systems type, that is connected to the ESS. S/390 or IBM System z hosts are connected to the ESS through ESCON or FICON interfaces. Open-systems hosts are connected to the ESS by SCSI or fibre-channel interfaces. hot plug. Pertaining to the ability to add or remove a hardware facility or resource to a unit while power is on. HSL. See high-speed link. HSM. See hierarchical storage management or Hardware Service Manager.
I
IBM Eserver. The IBM brand name for a series of server products that are optimized for e-commerce. The products include the IBM System i, System p, IBM System x, and IBM System z. IBM product engineering (PE). The third-level of IBM service support. Product engineering is composed of IBM engineers who have experience in supporting a product or who are knowledgeable about the product. IBM System Storage Multipath Subsystem Device Driver (SDD). Software that is designed to provide multipath configuration environment support for a host system that is attached to storage devices. SDD resides in a host system with the native disk device driver. IBM System Storage Multipath Subsystem Device Driver Path Control Module (SDDPCM). A loadable path control module for disk storage system devices to supply path management functions and error recovery algorithms. When the disk storage system devices are configured as Multipath I/O (MPIO)-devices, SDDPCM becomes part of the AIX MPIO Fibre Channel Protocol device driver during the configuration. The AIX MPIO-capable device driver with the disk storage system SDDPCM module enhances the data availability and I/O load balancing.
IBM System Storage Subsystem Device Driver Device Specific Module (SDDDSM). An IBM storage subsystems multipath I/O solution that is based on Microsoft MPIO technology. It is a device-specific module that is designed to support IBM storage subsystems devices such as SAN Volume Controller, DS8000, and DS6000. SDDDSM resides on a host server with the native disk device driver and provides enhanced data availability, automatic path failover protection, concurrent download of controller firmware code, and path selection policies for the host system. IBM TotalStorage Enterprise Storage Server (ESS). A member of the Seascape product family of storage servers and attached storage devices (disk drive modules). The ESS provides for high-performance, fault-tolerant storage and management of enterprise data, providing access through multiple concurrent operating systems and communication protocols. High performance is provided by multiple symmetric multiprocessors, integrated caching, RAID support for the disk drive modules, and disk access through a high-speed serial storage architecture (SSA) interface. IBM TotalStorage Enterprise Storage Server Expert (ESS Expert). The software that gathers performance data from the ESS and presents it through a Web browser. IBM TotalStorage Enterprise Storage Server Specialist (ESS Specialist). Software with a Web-browser interface for configuring the ESS. IBM TotalStorage Enterprise Storage Server Network (ESSNet). A private network providing Web browser access to the ESS. IBM installs the ESSNet software on an IBM workstation called the IBM TotalStorage ESS Master Console, supplied with the first ESS delivery. IBM TotalStorage ESS Master Console (ESS Master Console). An IBM workstation (formerly named the ESSNet console and hereafter referred to simply as the ESS Master Console) that IBM installs to provide the ESSNet facility when they install your ESS. It includes a Web browser that provides links to the ESS user interface, including ESS Specialist and ESS Copy Services. ID. See identifier. identifier (ID). A unique name or address that identifies things such as programs, devices, or systems. IML. See initial microprogram load. implicit allegiance. In Enterprise Systems Architecture/390, a relationship that a control unit creates between a device and a channel path when the device accepts a read or write operation. The control unit guarantees access to the channel program over the set of channel paths that it associates with the allegiance.
initial microcode load (IML). The action of loading microcode for a computer into that computer's storage. initial program load (IPL). The action of loading software into a computer, typically an operating system that controls the computer. initiator. A SCSI device that communicates with and controls one or more targets. An initiator is typically an I/O adapter on a host computer. A SCSI initiator is analogous to an S/390 channel. A SCSI logical unit is analogous to an S/390 device. See also target. i-node. The internal structure in an AIX operating system that describes the individual files in the operating system. It contains the code, type, location, and owner of a file. input/output (I/O). Pertaining to (a) input, output, or both or (b) a device, process, or channel involved in data input, data output, or both. input/output configuration data set. A configuration definition built by the I/O configuration program (IOCP) and stored on disk files associated with the processor controller. interleave. In the ESS, to automatically create two striped partitions across the drives in a RAID-5 array, both of which use the count-key-data (CKD) record format. Internet Protocol (IP). In the Internet suite of protocols, a protocol without connections that routes data through a network or interconnecting networks and acts as an intermediary between the higher protocol layers and the physical network. The upper layer supports one or more logical protocols (for example, a SCSI-command protocol and an ESA/390 command protocol). See ANSI X3.230-199x. The IP acronym is the IP in TCP/IP. See also Transmission Control Protocol/Internet Protocol. invalidate. To remove a logical data unit from cache memory because it cannot support continued access to the logical data unit on the device. This removal might be the result of a failure within the storage server or a storage device that is associated with the device. I/O. See input/output. I/O adapter (IOA). In the ESS, an input-output adapter on the PCI bus. IOCDS. See input/output configuration data set. I/O device. An addressable read and write unit, such as a disk drive device, magnetic tape device, or printer. I/O interface. An interface that enables a host to perform read and write operations with its associated peripheral devices.
I/O Priority Queueing. Facility provided by the Workload Manager of OS/390 and supported by the ESS that enables the system administrator to set priorities for queueing I/Os from different system images. See also multiple allegiance and parallel access volume. I/O processor (IOP). Controls input-output adapters and other devices. I/O sequential response time. The time an I/O request is queued in processor memory waiting for previous I/Os to the same volume to complete. IOSQ. See I/O sequential response time. IP. See Internet Protocol. IPL. See initial program load. IBM System i. An IBM Eserver product that emphasizes integration. It is the successor to the AS/400 family of servers.
J
Java Virtual Machine (JVM). A software implementation of a central processing unit (CPU) that runs compiled Java code (applets and applications). (GC) JVM. See Java Virtual Machine.
K
KB. See kilobyte. key field. The second (optional) field of a CKD record. The key length is specified in the count field. The key length determines the field length. The program writes the data in the key field and uses the key field to identify or locate a given record. The subsystem does not use the key field. kilobyte (KB). (1) For processor storage, real and virtual storage, and channel volume, 2^10 or 1024 bytes. (2) For disk storage capacity and communications volume, 1000 bytes. Korn shell. Interactive command interpreter and a command programming language. KPOH. See thousands of power-on hours.
L
LAN. See local area network. last-in first-out (LIFO). A queuing technique in which the next item to be retrieved is the item most recently placed in the queue. (A)
LBA. See logical block address. LCU. See logical control unit. least recently used (LRU). (1) The algorithm used to identify and make available the cache space that contains the least-recently used data. (2) A policy for a caching algorithm that chooses to remove from cache the item that has the longest elapsed time since its last access. LED. See light-emitting diode. LIC. See Licensed Internal Code. Licensed Internal Code (LIC). Microcode that IBM does not sell as part of a machine, but licenses to the customer. LIC is implemented in a part of storage that is not addressable by user programs. Some IBM products use it to implement functions as an alternate to hard-wired circuitry. See also licensed machine code (LMC). LIFO. See last-in first-out.
light-emitting diode (LED). A semiconductor chip that gives off visible or infrared light when activated. LMC. See licensed machine code. licensed machine code (LMC). Microcode, basic input/output system code (BIOS), utility programs, device drivers, and diagnostics delivered with an IBM machine. link address. On an ESCON or FICON interface, the portion of a source or destination address in a frame that ESCON or FICON uses to route a frame through an ESCON or FICON director. ESCON or FICON associates the link address with a specific switch port that is on the ESCON or FICON director. Equivalently, it associates the link address with the channel subsystem or control unit link-level functions that are attached to the switch port. link-level facility. The ESCON or FICON hardware and logical functions of a control unit or channel subsystem that allow communication over an ESCON or FICON write interface and an ESCON or FICON read interface. local area network (LAN). A computer network located on a user's premises within a limited geographic area. local e-mail. An e-mail configuration option for storage servers that are connected to a host-system network that does not have a domain name system (DNS) server. logical address. On an ESCON or FICON interface, the portion of a source or destination address in a frame used to select a specific channel-subsystem or control-unit image.
logical block address (LBA). The address assigned by the ESS to a sector of a disk. logical control unit (LCU). See control-unit image. logical data unit. A unit of storage that is accessible on a given device. logical device. The facilities of a storage server (such as the ESS) associated with the processing of I/O operations directed to a single host-accessible emulated I/O device. The associated storage is referred to as a logical volume. The logical device is mapped to one or more host-addressable units, such as a device on an S/390 I/O interface or a logical unit on a SCSI I/O interface, such that the host initiating I/O operations to the I/O-addressable unit interacts with the storage on the associated logical device. logical partition (LPAR). In Enterprise Systems Architecture/390, a set of functions that create the programming environment in which more than one logical partition (LPAR) is established on a processor. An LPAR is conceptually similar to a virtual machine environment except that the LPAR is a function of the processor. Also, the LPAR does not depend on an operating system to create the virtual machine environment. logical path. In the ESS for Copy Services, a relationship between a source logical subsystem and target logical subsystem that is created over a physical path through the interconnection fabric used for Copy Services functions. logical subsystem (LSS). In the ESS, a topological construct that consists of a group of up to 256 logical devices. An ESS can have up to 16 CKD-formatted logical subsystems (4096 CKD logical devices) and also up to 16 fixed-block (FB) logical subsystems (4096 FB logical devices). The logical subsystem facilitates configuration of the ESS and might have other implications relative to the operation of certain functions. There is a one-to-one mapping between a CKD logical subsystem and an S/390 control-unit image. For S/390 or IBM System z hosts, a logical subsystem represents a logical control unit (LCU). Each control-unit image is associated with only one logical subsystem. See also control-unit image. logical unit. In open systems, a logical disk drive. logical unit number (LUN). In the SCSI protocol, a unique number used on a SCSI bus to enable it to differentiate between up to eight separate devices, each of which is a logical unit. logical volume. The storage medium associated with a logical disk drive. A logical volume typically resides on one or more storage devices. The ESS administrator
defines this unit of storage. The logical volume, when residing on a RAID array, is spread over the drives in the array. logical volume manager (LVM). A set of system commands, library routines, and other tools that allow the user to establish and control logical volume storage. The LVM maps data between the logical view of storage space and the physical disk drive module (DDM). longitudinal redundancy check (LRC). (1) A method of error-checking during data transfer that involves checking parity on a row of binary digits that are members of a set that forms a matrix. Longitudinal redundancy check is also called a longitudinal parity check. (2) In the ESS, a mechanism that the ESS uses for locating errors. The LRC checks the data as it progresses from the host, through the ESS controller, into the device adapter, and to the array. longwave laser adapter. A connector used between a host and the ESS to support longwave fibre-channel communication. loop. The physical connection between a pair of device adapters in the ESS. See also device adapter. LPAR. See logical partition. LRC. See longitudinal redundancy check. LRU. See least recently used. LSS. See logical subsystem. LUN. See logical unit number. LVM. See logical volume manager.
M
machine level control (MLC. A database that contains the EC level and configuration of products in the field. machine reported product data (MRPD). Product data gathered by a machine and sent to a destination such as an IBM support server or RETAIN. These records might include such information as feature code information and product logical configuration information. mainframe. A computer, usually in a computer center, with extensive capabilities and resources to which other computers may be connected so that they can share facilities. maintenance analysis procedure (MAP). A hardware maintenance document that gives an IBM service representative a step-by-step procedure for tracing a symptom to the cause of a failure.
Management Information Base (MIB). (1) A collection of objects that can be accessed by means of a network management protocol. (GC) (2) In the ESS, the MIB record conforms to the Open Systems Interconnection (OSI) standard defined by the International Organization for Standardization (ISO) for the exchange of information. See also simple network management protocol. MAP. See maintenance analysis procedure. mass storage. The various techniques and devices for storing large amounts of data in a persisting and machine-readable fashion. Mass storage devices include all types of disk drives and tape drives. master boot record. The boot program that the BIOS loads. This boot program is located in the first sector of the hard disk and is used to start the boot process. Master Console. See IBM TotalStorage ESS Master Console. MB. See megabyte. MBR. See master boot record. MCA. See Micro Channel architecture. mean time between failures (MTBF). (1) A projection of the time that an individual unit remains functional. The time is based on averaging the performance, or projected performance, of a population of statistically independent units. The units operate under a set of conditions or assumptions. (2) For a stated period in the life of a functional unit, the mean value of the lengths of time between consecutive failures under stated conditions. (I) (A) medium. For a storage facility, the disk surface on which data is stored. megabyte (MB). (1) For processor storage, real and virtual storage, and channel volume, 2^20 or 1 048 576 bytes. (2) For disk storage capacity and communications volume, 1 000 000 bytes. MES. See miscellaneous equipment specification. MIB. See management information base. Micro Channel architecture (MCA). The rules that define how subsystems and adapters use the Micro Channel bus in a computer. The architecture defines the services that each subsystem can or must provide. Microsoft Internet Explorer (MSIE). Web browser software manufactured by Microsoft. migration. In the ESS, the replacement of a system or subsystem with a different type of system or subsystem, such as replacing a SCSI host adapter with a fibre-channel host adapter. When used in the context
of data migration regarding the ESS, the transfer of data from one storage facility to another, such as from a 3390 to the ESS. MIH. See missing-interrupt handler. mirrored pair. Two units that contain the same data. The system refers to them as one entity. mirroring. In host systems, the process of writing the same data to two disk units within the same auxiliary storage pool at the same time. miscellaneous equipment specification (MES). IBM field-installed change to a machine. missing-interrupt handler (MIH). An MVS and MVS/XA facility that tracks I/O interrupts. MIH informs the operator and creates a record whenever an expected interrupt fails to occur before a specified elapsed time is exceeded. mixed mode. With respect to a Copy Services domain, the mode of operation of the domain when one Copy Services server is a dual-active server and the other Copy Services server is a single-active server. See also Copy Services server, dual-active server, and single-active server. MLC. See machine level control. mobile solutions terminal (MoST). The mobile terminal used by service personnel. mode conditioning patch. This cable is used to convert a single mode signal generated by a longwave adapter into a light signal that is appropriate for multimode fibre. Another mode conditioning patch cable is required at the terminating end of the multimode fibre to convert the signal back to single mode light sent into a longwave adapter. Model 100. A 2105 Model 100, often simply referred to as a Mod 100, is an expansion enclosure for the ESS. See also 2105. MoST. See mobile solutions terminal. MRPD. See machine reported product data. MSA. See multiport serial adapter. MSIE. See Microsoft Internet Explorer. MTBF. See mean time between failures. multiple allegiance. An ESS hardware function that is independent of software support. This function enables multiple system images to concurrently access the same logical volume on the ESS as long as the system images are accessing different extents. See also extent and parallel access volumes.
multiple virtual storage (MVS). Implies MVS/390, MVS/XA, MVS/ESA, and the MVS element of the OS/390 operating system. multiplex. The action of transmitting simultaneously. multiport serial adapter (MSA). An adapter on the ESS Master Console that has multiple ports to which ESSs can be attached. MVS. See multiple virtual storage.
N
name server. A server that stores names of the participating ESS clusters. Netfinity. IBM Intel-processor-based server; predecessor to the IBM xSeries server. Netscape Navigator. Web browser software manufactured by Netscape. Network Installation Management (NIM). An environment that provides installation and configuration of software within a network interface. NIM. See Network Installation Management. node. (1) In a network, a point at which one or more functional units connect channels or data circuits. An ESS is a node in a fibre-channel network. (2) One SAN Volume Controller. Each node provides virtualization, cache, and Copy Services to the storage area network. node fallover. See failover. non-RAID. A disk drive set up independently of other disk drives and not set up as part of a disk eight pack to store data using the redundant array of disks (RAID) data-striping methodology. nonremovable medium. A recording medium that cannot be added to or removed from a storage device. nonvolatile storage (NVS). In the ESS, memory that stores active write data to avoid data loss in the event of a power loss. NVS. See nonvolatile storage.
O
octet. In Internet Protocol (IP) addressing, one of the four parts of a 32-bit integer presented in dotted decimal notation. Dotted decimal notation consists of four 8-bit numbers written in base 10. For example, 9.113.76.250 is an IP address containing the octets 9, 113, 76, and 250. OEMI. See original equipment manufacturer's information.
open system. A system whose characteristics comply with standards made available throughout the industry and that therefore can be connected to other systems complying with the same standards. Applied to the ESS, such systems are those hosts that connect to the ESS through SCSI or FCP protocols. See also small computer system interface and fibre-channel protocol. operating system (OS). A set of programs that control how the system works. Controls the running of programs and provides such services as resource allocation, scheduling, input and output control, and data management. organizationally unique identifier (OUI). An IEEE-standards number that identifies an organization with a 24-bit globally unique assigned number referenced by various standards. OUI is used in the family of 802 LAN standards, such as Ethernet and Token Ring. original equipment manufacturer's information (OEMI). A reference to an IBM guideline for a computer peripheral interface. The interface uses ESA/390 logical protocols over an I/O interface that configures attached units in a multidrop bus topology. OS. See operating system. OS/390. The IBM operating system that includes and integrates functions previously provided by many IBM software products (including the MVS operating system) for the IBM S/390 family of enterprise servers. OS/400. The IBM operating system that runs the IBM AS/400 and IBM System i eServer families of servers. OUI. See organizationally unique identifier.
P
panel. The formatted display of information that appears on a display screen. parallel access volume (PAV). An advanced function of the ESS that enables OS/390 and z/OS systems to issue concurrent I/O requests against a CKD logical volume by associating multiple devices of a single control-unit image with a single logical device. Up to eight device addresses can be assigned to a PAV. The PAV function enables two or more concurrent write operations to the same logical volume, as long as the write operations are not to the same extents. See also extent, I/O Priority Queueing, and multiple allegiance. parity. A data checking scheme used in a computer system to ensure the integrity of the data. The RAID implementation uses parity to re-create data if a disk drive fails. path group. In ESA/390 architecture, a set of channel paths that are defined to a control unit as being
associated with a single logical partition (LPAR). The channel paths are in a group state and are online to the host. See also logical partition. path group identifier. In ESA/390 architecture, the identifier that uniquely identifies a given logical partition (LPAR). The path group identifier is used in communication between the LPAR program and a device. The identifier associates the path group with one or more channel paths, thereby defining these paths to the control unit as being associated with the same LPAR. See also logical partition. PAV. See parallel access volume. PCI. See peripheral component interconnect. PE. See IBM product engineering. Peer-to-Peer Remote Copy (PPRC). A function of a storage server that constantly updates a secondary copy of a logical volume to match changes made to a primary logical volume. The primary and secondary volumes can be on the same storage server or on separate storage servers. See also synchronous PPRC and PPRC Extended Distance. peripheral component interconnect (PCI). An architecture for a system bus and associated protocols that supports attachments of adapter cards to a system backplane. persistent binding. A feature where a device has the same identification to the operating system after it restarts and after other devices are added to the operating system. physical path. A single path through the I/O interconnection fabric that attaches two units. For Copy Services, this is the path from a host adapter on one ESS (through cabling and switches) to a host adapter on another ESS. point-to-point connection. For fibre-channel connections, a topology that enables the direct interconnection of ports. See arbitrated loop and switched fabric. port. In the ESS, a physical connection on a host adapter to the cable that connects the ESS to hosts, switches, or another ESS. The ESS uses SCSI and ESCON host adapters that have two ports per adapter, and fibre-channel host adapters that have one port. See also ESCON, fibre channel, host adapter, and small computer system interface. POST. See power-on self test. power-on self test (POST). A diagnostic test that servers or computers run when they are turned on. PPRC. See Peer-to-Peer Remote Copy.
PPRC Extended Distance. An optional feature for the ESS that maintains a fuzzy copy of a logical volume on the same ESS or on another ESS. In other words, all modifications that any attached host performs on the primary logical volume are also performed on the secondary logical volume at a later point in time. The original order of update is not strictly maintained. See also Peer-to-Peer Remote Copy (PPRC) and synchronous PPRC. PPRC-XD. See PPRC Extended Distance. predictable write. A write operation that can cache without knowledge of the existing format on the medium. All write operations on FBA DASD devices are predictable. On CKD DASD devices, a write operation is predictable if it does a format write operation for the first data record on the track. primary Copy Services server. One of two Copy Services servers in a Copy Services server group. The primary Copy Services server is the active Copy Services server until it fails; it is then replaced by the backup Copy Services server. A Copy Services server is software that runs in one of the two clusters of an ESS and performs data-copy operations within that group. See active Copy Services server and backup Copy Services server. product engineering. See IBM product engineering. program. On a computer, a generic term for software that controls the operation of the computer. Typically, the program is a logical assemblage of software modules that perform multiple related tasks. program-controlled interruption. An interruption that occurs when an I/O channel fetches a channel command word with the program-controlled interruption flag on. program temporary fix (PTF). A temporary solution or bypass of a problem diagnosed by IBM in a current unaltered release of a program. (GC) promote. To add a logical data unit to cache memory. protected volume. In the IBM AS/400 platform, a disk storage device that is protected from data loss by RAID techniques. An AS/400 host does not mirror a volume configured as a protected volume, while it does mirror all volumes configured as unprotected volumes. The ESS, however, can be configured to indicate that an AS/400 volume is protected or unprotected and give it RAID protection in either case. System p. The product name of an IBM Eserver product that emphasizes performance. It is the successor to the IBM RS/6000 family of servers. pseudo-host. A host connection that is not explicitly defined to the ESS and that has access to at least one volume that is configured on the ESS. The FiconNet
pseudo-host icon represents the FICON protocol. The EsconNet pseudo-host icon represents the ESCON protocol. The pseudo-host icon labelled Anonymous represents hosts connected through the FCP protocol. Anonymous host is a commonly used synonym for pseudo-host. The ESS adds a pseudo-host icon only when the ESS is set to access-any mode. See also access-any mode. PTF. See program temporary fix. PV Links. Short for Physical Volume Links, an alternate pathing solution from Hewlett-Packard providing for multiple paths to a volume, as well as static load balancing.
R
R0. See track-descriptor record. rack. See enclosure. RAID. See redundant array of independent disks. RAID is also commonly expanded to redundant array of inexpensive disks. See also array. RAID 5. A type of RAID that optimizes cost-effective performance while emphasizing use of available capacity through data striping. RAID 5 provides fault tolerance for up to two failed disk drives by distributing parity across all the drives in the array plus one parity disk drive. The ESS automatically reserves spare disk drives when it assigns arrays to a device adapter pair (DA pair). See also device adapter, RAID 10, and redundant array of independent disks. RAID 10. A type of RAID that optimizes high performance while maintaining fault tolerance for up to two failed disk drives by striping volume data across several disk drives and mirroring the first set of disk drives on an identical set. The ESS automatically reserves spare disk drives when it assigns arrays to a device adapter pair (DA pair). See also device adapter, RAID 5, and redundant array of independent disks. random access. A mode of accessing data on a medium in a manner that requires the storage device to access nonconsecutive storage locations on the medium. rank. See array. redundant array of independent disks (RAID). A methodology of grouping disk drives for managing disk storage to insulate data from a failing disk drive. remote technical assistance information network (RETAIN). The initial service tracking system for IBM service support, which captures heartbeat and call-home records. See also support catcher and support catcher telephone number. REQ/ACK. See request for acknowledgement and acknowledgement.
request for acknowledgement and acknowledgement (REQ/ACK). A cycle of communication between two data transport devices for the purpose of verifying the connection, which starts with a request for acknowledgement from one of the devices and ends with an acknowledgement from the second device. The REQ and ACK signals help to provide uniform timing to support synchronous data transfer between an initiator and a target. The objective of a synchronous data transfer method is to minimize the effect of device and cable delays. reserved allegiance. In Enterprise Systems Architecture/390, a relationship that is created in a control unit between a device and a channel path when the device completes a Sense Reserve command. The allegiance causes the control unit to guarantee access (busy status is not presented) to the device. Access is over the set of channel paths that are associated with the allegiance; access is for one or more channel programs until the allegiance ends. RETAIN. See remote technical assistance information network. RSSM. IBM BladeCenter S SAS RAID Controller Module.
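RAID 5 implementations typically compute the parity strip as the bitwise XOR of the corresponding data strips, which is what allows a failed drive's data to be re-created as described in the parity and RAID 5 entries above. The short Python sketch below illustrates that idea only (the strip contents and layout are hypothetical); it does not describe how the ESS itself stores parity:

    from functools import reduce

    def xor_blocks(blocks):
        """Bitwise XOR of equal-length byte strings."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    strips = [b"AAAA", b"BBBB", b"CCCC"]    # data strips on three member drives
    parity = xor_blocks(strips)             # parity strip on another member drive

    # If the drive holding the second strip fails, its data can be rebuilt
    # from the surviving strips plus the parity strip.
    rebuilt = xor_blocks([strips[0], strips[2], parity])
    assert rebuilt == strips[1]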
S
S/390. IBM enterprise servers based on Enterprise Systems Architecture/390 (ESA/390). S/390 is the currently accepted shortened form of the original name System/390. S/390 storage. (1) Storage arrays and logical volumes that are defined in the ESS as connected to S/390 servers. This term is synonymous with count-key-data (CKD) storage. (2) In ESS documentation, when noted, the term can refer to both S/390 and IBM System z storage. See also IBM System z storage. SAID. See system adapter identification number. SAM. See sequential access method. SAN. See storage area network. SAS. See serial-attached SCSI. SBCON. See Single-Byte Command Code Sets Connection. screen. The physical surface of a display device upon which information is shown to users. SCSI. See small computer system interface. SCSI device. A disk drive connected to a host through an I/O interface using the SCSI protocol. A SCSI device is either an initiator or a target. See also initiator and small computer system interface.
SCSI host systems. Host systems that are attached to the ESS with a SCSI interface. Such host systems run on UNIX, OS/400, Windows NT, Windows 2000, or Novell NetWare operating systems. SCSI ID. A unique identifier assigned to a SCSI device that is used in protocols on the SCSI interface to identify or select the device. The number of data bits on the SCSI bus determines the number of available SCSI IDs. A wide interface has 16 bits, with 16 possible IDs. SCSI-FCP. Synonym for fibre-channel protocol, a protocol used to transport data between an open-systems host and a fibre-channel adapter on an ESS. See also fibre-channel protocol and small computer system interface. SDD. See IBM System Storage Enterprise Storage Server Subsystem Device Driver. SDDDSM. See IBM System Storage Subsystem Device Driver Device Specific Module. SDDPCM. See IBM System Storage Multipath Subsystem Device Driver Path Control Module. Seascape architecture. A storage system architecture developed by IBM for open-systems servers and S/390 and IBM System z host systems. It provides storage solutions that integrate software, storage management, and technology for disk, tape, and optical storage. self-timed interface (STI). An interface that has one or more conductors that transmit information serially between two interconnected units without requiring any clock signals to recover the data. The interface performs clock recovery independently on each serial data stream and uses information in the data stream to determine character boundaries and inter-conductor synchronization. sequential access. A mode of accessing data on a medium in a manner that requires the storage device to access consecutive storage locations on the medium. sequential access method (SAM). An access method for storing, deleting, or retrieving data in a continuous sequence based on the logical order of the records in the file. serial-attached SCSI (SAS). A data transfer technology that uses a host bus adapter with four or more channels that operate simultaneously. Each full-duplex channel, known as a SAS port, transfers data in each direction. serial connection. A method of device interconnection for determining interrupt priority by connecting the interrupt sources serially. serial storage architecture (SSA). An IBM standard for a computer peripheral interface. The interface uses a
SCSI logical protocol over a serial interface that configures attached targets and initiators in a ring topology. See also SSA adapter. server. (1) A host that provides certain services to other hosts that are referred to as clients. (2) A functional unit that provides services to one or more clients over a network. (GC) service boundary. A category that identifies a group of components that are unavailable for use when one of the components of the group is being serviced. Service boundaries are provided on the ESS, for example, in each host bay and in each cluster. service information message (SIM). A message sent by a storage server to service personnel through an S/390 operating system. service personnel. A generalization referring to individuals or companies authorized to service the ESS. The terms service provider, service representative, and IBM service support representative (SSR) refer to types of service personnel. See also service support representative. service processor. A dedicated processing unit used to service a storage facility. service support representative (SSR). Individuals or a company authorized to service the ESS. This term also refers to a service provider, a service representative, or an IBM service support representative (SSR). An IBM SSR installs the ESS. session. A collection of multiple copy sets that comprise a consistency group. shared storage. In an ESS, storage that is configured so that multiple hosts can concurrently access the storage. The storage has a uniform appearance to all hosts. The host programs that access the storage must have a common model for the information on a storage device. The programs must be designed to handle the effects of concurrent access. shortwave laser adapter. A connector used between host and ESS to support shortwave fibre-channel communication. SIM. See service information message. Simple Network Management Protocol (SNMP). In the Internet suite of protocols, a network management protocol that is used to monitor routers and attached networks. SNMP is an application layer protocol. Information on devices managed is defined and stored in the application's Management Information Base (MIB). (GC) See also management information base. simplex volume. A volume that is not part of a FlashCopy, XRC, or PPRC volume pair.
single-active mode. (1) With respect to a Copy Services server, the mode of operation of the server when the LIC level of the associated ESS cluster is below 2.0. (2) With respect to a Copy Services domain, the mode of operation of the domain when the Copy Services servers are single-active servers. See also Copy Services server, Copy Services domain, dual-active server, and mixed mode. Single-Byte Command Code Sets Connection (SBCON). The ANSI standard for the ESCON or FICON I/O interface. small computer system interface (SCSI). A standard hardware interface that enables a variety of peripheral devices to communicate with one another. (GC) smart relay host. A mail relay or mail gateway that has the capability to correct e-mail addressing problems. SMIT. See System Management Interface Tool. SMP. See symmetric multiprocessor. SMS. See Systems Management Server. SNMP. See simple network management protocol. Systems Management Server (SMS). Change and configuration management software from Microsoft that runs on the Microsoft platform and distributes relevant software and updates to users. software transparency. Criteria applied to a processing environment that states that changes do not require modifications to the host software in order to continue to provide an existing function. spare. A disk drive on the ESS that can replace a failed disk drive. A spare can be predesignated to allow automatic dynamic sparing. Any data preexisting on a disk drive that is invoked as a spare is destroyed by the dynamic sparing copy process. spatial reuse. A feature of serial storage architecture that enables a device adapter loop to support many simultaneous read/write operations. See also serial storage architecture. Specialist. See IBM TotalStorage Enterprise Storage Server Specialist. Shared Product Object Tree (SPOT) . (1) A version of the /usr file system that diskless clients mount as their own /usr directory. (2) For NIM, a /usr file system or an equivalent file system that is exported by servers in the NIM environment for remote client use. SPOT. See Shared Product Object Tree. SSA. See serial storage architecture.
SSA adapter. A physical adapter based on serial storage architecture. SSA adapters connect disk drive modules to ESS clusters. See also serial storage architecture. SSID. See subsystem identifier. SSR. See service support representative. stacked status. In Enterprise Systems Architecture/390, the condition when the control unit is in a holding status for the channel, and the last time the control unit attempted to present the status, the channel responded with the stack-status control. stage operation. The operation of reading data from the physical disk drive into the cache. staging. To move data from an offline or low-priority device back to an online or higher priority device, usually on demand of the system or on request of the user. standard volume. In the ESS, a volume that emulates one of several S/390 volume types, including 3390-2, 3390-3, 3390-9, 3390-2 (3380-track mode), or 3390-3 (3380-track mode), by presenting the same number of cylinders and capacity to the host as provided by the native S/390 volume type of the same name. STI. See self-timed interface. storage area network. A network that connects a company's heterogeneous storage resources. storage complex. Multiple storage facilities. storage device. A physical unit that provides a mechanism to store data on a given medium such that it can be subsequently retrieved. See also disk drive module. storage facility. (1) A physical unit that consists of a storage server integrated with one or more storage devices to provide storage capability to a host computer. (2) A storage server and its attached storage devices. storage server. A physical unit that manages attached storage devices and provides an interface between them and a host computer by providing the function of one or more logical subsystems. The storage server can provide functions that are not provided by the storage device. The storage server has one or more clusters. striping. A technique that distributes data in bit, byte, multibyte, record, or block increments across multiple disk drives. subchannel. A logical function of a channel subsystem associated with the management of a single device. Subsystem Device Driver. See IBM System Storage Multipath Subsystem Device Driver.
Subsystem Device Driver Device Specific Module (SDDDSM). An IBM storage subsystems multipath I/O solution that is based on Microsoft MPIO technology. It is a device-specific module that is designed to support IBM storage subsystems devices such as SAN Volume Controller, DS8000, and DS6000. SDDDSM resides on a host server with the native disk device driver and provides enhanced data availability, automatic path failover protection, concurrent download of controller firmware code, and path selection policies for the host system. subsystem identifier (SSID). A number that uniquely identifies a logical subsystem within a computer installation. support catcher. See catcher. support catcher telephone number. The telephone number that connects the support catcher server to the ESS to receive a trace or dump package. See also support catcher and remote technical assistance information network. switched fabric. In the ESS, one of three fibre-channel connection topologies that the ESS supports. See also arbitrated loop and point-to-point. symmetric multiprocessor (SMP). An implementation of a multiprocessor computer consisting of several identical processors configured in a way that any subset of the set of processors is capable of continuing the operation of the computer. The ESS contains four processors set up in SMP mode. synchronous PPRC. A function of a storage server that maintains a consistent copy of a logical volume on the same storage server or on another storage server. All modifications that any attached host performs on the primary logical volume are also performed on the secondary logical volume. See also Peer-to-Peer Remote Copy and PPRC Extended Distance. synchronous write. A write operation whose completion is indicated after the data has been stored on a storage device. System/390. See S/390. system adapter identification number (SAID). In the ESS, the unique identification number automatically assigned to each ESS host adapter for use by ESS Copy Services. System Management Interface Tool (SMIT). An interface tool of the AIX operating system for installing, maintaining, configuring, and diagnosing tasks. System Modification Program. A program used to install software and software changes on MVS systems.
T
TAP. See Telocator Alphanumeric Protocol. target. A SCSI device that acts as a slave to an initiator and consists of a set of one or more logical units, each with an assigned logical unit number (LUN). The logical units on the target are typically I/O devices. A SCSI target is analogous to an S/390 control unit. A SCSI initiator is analogous to an S/390 channel. A SCSI logical unit is analogous to an S/390 device. See also small computer system interface. TB. See terabyte. TCP/IP. See Transmission Control Protocol/Internet Protocol. Telocator Alphanumeric Protocol (TAP). An industry standard protocol for the input of paging requests. terabyte (TB). (1) Nominally, 1 000 000 000 000 bytes, which is accurate when speaking of bandwidth and disk storage capacity. (2) For ESS cache memory, processor storage, real and virtual storage, a terabyte refers to 2^40 or 1 099 511 627 776 bytes. terminal emulator. In the ESS, a function of the ESS Master Console that allows it to emulate a terminal. thousands of power-on hours (KPOH). A unit of time used to measure the mean time between failures (MTBF). time sharing option (TSO). An operating system option that provides interactive time sharing from remote terminals. TPF. See transaction processing facility. track. A unit of storage on a CKD device that can be formatted to contain a number of data records. See also home address, track-descriptor record, and data record. track-descriptor record (R0). A special record on a track that follows the home address. The control program uses it to maintain certain information about the track. The record has a count field with a key length of zero, a data length of 8, and a record number of 0. This record is sometimes referred to as R0. transaction processing facility (TPF). A high-availability, high-performance IBM operating system, designed to support real-time, transaction-driven applications. The specialized architecture of TPF is intended to optimize system efficiency, reliability, and responsiveness for data communication and database processing. TPF provides real-time inquiry and updates to a large, centralized database, where message length is relatively short in both directions, and response time is generally less than three seconds. Formerly known as the Airline Control Program/Transaction Processing Facility (ACP/TPF).
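The decimal and binary unit conventions in the megabyte and terabyte entries can be checked with a few lines of Python; this is purely an illustration of the arithmetic, not output from any SDD tool:

    decimal_mb, binary_mb = 10**6, 2**20     # 1 000 000 vs 1 048 576 bytes
    decimal_tb, binary_tb = 10**12, 2**40    # 1 000 000 000 000 vs 1 099 511 627 776 bytes

    print(f"1 MB: {decimal_mb:,} (decimal) or {binary_mb:,} (binary) bytes")
    print(f"1 TB: {decimal_tb:,} (decimal) or {binary_tb:,} (binary) bytes")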
Transmission Control Protocol (TCP). A communications protocol used in the Internet and in any network that follows the Internet Engineering Task Force (IETF) standards for internetwork protocol. TCP provides a reliable host-to-host protocol between hosts in packet-switched communications networks and in interconnected systems of such networks. It uses the Internet Protocol (IP) as the underlying protocol. Transmission Control Protocol/Internet Protocol (TCP/IP). (1) A combination of data-transmission protocols that provide end-to-end connections between applications over interconnected networks of different types. (2) A suite of transport and application protocols that run over the Internet Protocol. (GC) See also Internet Protocol and Transmission Control Protocol. transparency. See software transparency. TSO. See time sharing option.
U
UFS. UNIX filing system. Ultra-SCSI. An enhanced small computer system interface. unconfigure. To delete the configuration. unit address. In Enterprise Systems Architecture/390, the address associated with a device on a given control unit. On ESCON or FICON interfaces, the unit address is the same as the device address. On OEMI interfaces, the unit address specifies a control unit and device pair on the interface. UNIX File System (UFS). A section of the UNIX file tree that is physically contained on a single device or disk partition and that can be separately mounted, dismounted, and administered. unprotected volume. An AS/400 term that indicates that the AS/400 host recognizes the volume as an unprotected device, even though the storage resides on a RAID array and is therefore fault tolerant by definition. The data in an unprotected volume can be mirrored. Also referred to as an unprotected device. upper-layer protocol. The layer of the Internet Protocol (IP) that supports one or more logical protocols (for example, a SCSI-command protocol and an ESA/390 command protocol). See ANSI X3.230-199x. UTC. See Coordinated Universal Time. utility device. The ESA/390 term for the device used with the Extended Remote Copy facility to access information that describes the modifications performed on the primary copy.
V
virtual machine facility. A virtual data processing machine that appears to the user to be for the exclusive use of that user, but whose functions are accomplished by sharing the resources of a shared data processing system. An alternate name for the VM/370 IBM operating system. virtualization. In the storage industry, a concept in which a pool of storage is created that contains several disk subsystems. The subsystems can be from various vendors. The pool can be split into virtual disks that are visible to the host systems that use them. In SDD, virtualization product refers to SAN Volume Controller. vital product data (VPD). Information that uniquely defines the system, hardware, software, and microcode elements of a processing system. VM. The root name of several IBM operating systems, such as VM/370, VM/ESA, VM/CMS, and VM/SP. See also virtual machine (VM) facility. volume. In Enterprise Systems Architecture/390, the information recorded on a single unit of recording medium. Indirectly, it can refer to the unit of recording medium itself. On a nonremovable-medium storage device, the term can also indirectly refer to the storage device associated with the volume. When multiple volumes are stored on a single storage medium transparently to the program, the volumes can be referred to as logical volumes. VPD. See vital product data. VSE/ESA. IBM operating system, the letters of which represent virtual storage extended/enterprise systems architecture.
W
Web Copy Services. See ESS Copy Services. worldwide node name (WWNN). A unique 64-bit identifier for a host containing a fibre-channel port. See also worldwide port name. worldwide port name (WWPN). A unique 64-bit identifier associated with a fibre-channel adapter port. It is assigned in an implementation- and protocol-independent manner. write hit. A write operation in which the requested data is in the cache. write penalty. The performance impact of a classical RAID 5 write operation. WWPN. See worldwide port name.
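Because a worldwide port name is a 64-bit identifier, tools commonly display it either as 16 contiguous hexadecimal digits or as eight colon-separated bytes. The small Python sketch below converts between the two renderings; the value shown is a made-up example, not a real adapter WWPN:

    def format_wwpn(wwpn: int) -> str:
        """Render a 64-bit worldwide port name as colon-separated hex bytes."""
        raw = f"{wwpn:016x}"
        return ":".join(raw[i:i + 2] for i in range(0, 16, 2))

    example = 0x10000000C9ABCDEF        # hypothetical WWPN, for illustration only
    print(f"{example:016X}")            # 10000000C9ABCDEF
    print(format_wwpn(example))         # 10:00:00:00:c9:ab:cd:ef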
X
XD. See PPRC Extended Distance. XRC. See Extended Remote Copy. xSeries. The product name of an IBM Eserver product that emphasizes industry-standard server scalability and self-managing server technologies. It is the successor to the Netfinity family of servers.
Z
z/Architecture. An IBM architecture for mainframe computers and peripherals. The IBM Eserver IBM System z family of servers uses the z/Architecture architecture. It is the successor to the S/390 and 9672 family of servers. See also Enterprise Systems Architecture/390. z/OS. An operating system for the IBM eServer product line that supports 64-bit real storage. IBM System z. (1) An IBM Eserver family of servers that emphasizes near-zero downtime. (2) IBM enterprise servers based on z/Architecture. IBM System z storage. Storage arrays and logical volumes that are defined in the ESS as connected to IBM System z servers. See also S/390 storage.
A
about this book xix accessing AIX add a data path volume to a volume group SMIT panel 86 add a volume group with data path devices SMIT panel 85 add paths to available data path devices SMIT panel 85 backup a volume group with data path devices SMIT panel 86 configure a defined data path device SMIT panel 85 define and configure all data path devices SMIT panel 85 display data path device adapter status SMIT panel 84 display data path device configuration SMIT panel 83 Display Data Path Device Status SMIT panel 84 remake a volume group with data path devices SMIT panel 87 remove a data path device SMIT panel 85 Remove a Physical Volume from a Volume Group SMIT panel 86 adapters 313 configuring Linux 216, 217 NetWare 313 Windows 2000 371 Windows NT 355 Windows Server 2003 387, 388, 405 Windows Server 2008 405 Windows Server 2012 405 firmware level 19, 108 LP70000E 14, 104 upgrading firmware level to (sf320A9) 108 adding paths AIX 47 Windows NT 357 Windows Server 2003 host systems 392, 408 Windows Server 2008 host systems 408 Windows Server 2012 host systems 408 storage for Windows NT host systems 361, 362
AIX 5.3.0 32bit 21 64bit 21 article Microsoft Knowledge Base Article Number Q293778 information about removing multipath access to your shared volume 363 automount, setting up 230
B
backing-up AIX files belonging to an SDD volume group 81 BIOS, disabling 355, 371, 388 BladeCenter concurrent download of licensed machine code 8 block disk device interfaces (SDD) 189, 327 boot -r command 349 bootinfo -K command 21
C
cat /proc/modules command 221 cat /proc/scsi/scsi command 222 cat /proc/scsi/xxx/N command 222 cat /proc/sdd command 222 cd /media command 217, 218 cd /mnt command 217, 218 cd /opt/IBMsdd command 219 cd /opt/IBMsdd/bin command 220 cfallvpath 52 cfgmgr command 18, 79, 119 run n times where n represents the number of paths per SDD device. 79 run for each installed SCSI or fibre-channel adapter 79 cfgvpath command 225 changing path-selection policy for AIX 70, 228 path-selection policy for HP 196 path-selection policy for Solaris 338 SDD hardware configuration HP-UX host systems 195 Solaris host systems 335 to the /dev directory HP-UX host systems 209 changing SDDPCM controller healthcheck delay_time 133 chdev command 78 chgrp command 348 chkconfig - -level X sdd on command 227 chkconfig - -level X sdd off command 227 chkconfig - -list sdd command 227 chkvpenv command 221 chmod command 348 command /opt/IBMsdd/bin/showvpath 351 addpaths 79, 88
command (continued) boot -r 349 bootinfo -K 21 cat /proc/modules 221 cat /proc/scsi/scsi 222 cat /proc/scsi/xxx/N 222 cat /proc/sdd 222 cd /media 217, 218 cd /mnt 217, 218 cd /opt/IBMsdd 219 cd /opt/IBMsdd/bin 220 cfgmgr 18, 79, 119 running n times for n-path configurations 79 cfgvpath 225 chdev 78 chgrp 348 chkconfig - -level X sdd off 227 chkconfig - -level X sdd on 227 chkconfig - -list sdd 227 chkvpenv 221 chmod 348 datapath clear device count 431 datapath disable ports 432 datapath enable ports 433 datapath open device path 434 datapath query adapter 365, 436 datapath query adaptstats 438 datapath query device 75, 79, 192, 226, 356, 373, 374, 389, 406, 439 datapath query devstats 442 datapath query essmap 444 datapath query portmap 446 datapath query version 448 datapath query wwpn 449 datapath remove adapter 450 datapath remove device 451 datapath remove device path 451 datapath set adapter 453 datapath set adapter # offline 364, 381, 400, 418 datapath set adapter offline 364, 381, 400, 418 datapath set bootdiskmigrate 381, 399 datapath set device 0 path 0 offline 455 datapath set device N policy rr/fo/lb/df 70, 197, 229, 338 datapath set device path 455 datapath set device policy 454 datapath set qdepth 456 dpovgfix 74, 88 dpovgfix vg-name 78 esvpcfg 307 excludesddcfg 52, 90 extendvg 81 extendvg4vp 81, 90 hd2vp and vp2hd 88 hd2vp vg_name 29 HP-UX host systems hd2vp 196 vgexport 205 vgimport 206 vp2hd 196 vpcluster 206 insmod ./vpath.o 221
command (continued) installp 17 instfix -i | grep IY10201 17 instfix -i | grep IY10994 17 instfix -i | grep IY11245 17 instfix -i | grep IY13736 17 instfix -i | grep IYl7902 17 instfix -i | grep IYl8070 17 ls -al /unix 21 ls -l 220 lscfg -vl fcsN 19, 108 lsdev -Cc disk 18, 120 lsdev -Cc disk | grep 2105 38 lsdev -Cc disk | grep SAN Volume Controller 38 lslpp -l ibmSdd_432.rte 35, 36 lslpp -l ibmSdd_433.rte 35, 36 lslpp -l ibmSdd_510.rte 35, 36 lslpp -l ibmSdd_510nchacmp.rte 35, 36 lspv 28, 76 lsvg -p vg-name 76 lsvgfs 28 lsvpcfg 29, 74, 77, 88, 226 lsvpd 224 metadb -a <device> 350 metadb -d -f <device> 350 metadb -i 350 metainit 349 metainit d <metadevice number> -t <"vpathNs" - master device> <"vpathNs" - logging device> 351 metastat 350, 351 mkdev -l vpathN 45 mksysb restore command 77 mkvg 75 mkvg4vp 75, 89 newfs 351 odmget -q "name = ioaccess" CuAt 57 orainst /m 346 pcmpath chgprefercntl device 183 pcmpath clear device count 147 pcmpath disable ports 148 pcmpath enable ports 150 pcmpath open device path 152 pcmpath query adapter 154 pcmpath query adaptstats 156 pcmpath query device 158 pcmpath query devstats 164 pcmpath query essmap 166 pcmpath query port 167 pcmpath query portmap 169 pcmpath query portstats 170 pcmpath query session 172 pcmpath query version 173 pcmpath query wwpn 174 pcmpath set adapter 175 pcmpath set device 0 path 0 offline 182 pcmpath set device algorithm 177 pcmpath set device cntlhc_interval 180, 181 pcmpath set device health_check mode 179 pcmpath set device path 182
command (continued) pcmpath set health_check time interval 178 pkgrm IBMsdd 350 restvg 82 restvg4vp 82 rmdev 79 rmdev -dl dpo -R 28, 50, 123 rmdev -dl fcsN -R 18, 120 rmdev -l dpo -R 46 rmvpath xxx 225 rpm -e IBMsdd command 230 rpm -qi IBMsdd 219, 230 rpm -ql IBMsdd 219, 230 savevg 81 savevg4vp 81 showvpath 210, 347, 348, 349, 350 shutdown -i6 -y -g0 350 shutdown -rF 18, 120 smitty 28 smitty deinstall 17 smitty device 28 smitty uninstall 17 table of, in installation package 21 umount 28, 351 umount /cdrom 334 unmod ./sdd-mod.o 230 using 187 varyoffvg 28, 38 varyonvg vg_name 29 comments, how to send xxvi concurrent download of licensed machine code BladeCenter S SAS RAID Controller Module 8 disk storage systems 7 DS4000 8 RSSM 8 virtualization products 7 configuring additional paths on Windows NT host systems 359 AIX cabling storage side switch ports 18, 120 disk storage system 15 ESS 105 fibre-channel attached devices 16, 106 fibre-channel-attached devices 18, 119 SAN Volume Controller 16 volume group for failover protection 75 clusters with SDD Windows 2000 host systems 382 Windows NT host systems 364 Windows Server 2003 host systems 400, 418 Windows Server 2008 host systems 418 Windows Server 2012 host systems 418 disk storage system NetWare host systems 313 Windows 2000 371
configuring (continued) DS4000 for HP-UX 188 ESS HP-UX host systems 188 Linux host systems 216 Solaris host systems 326 Windows NT 355 fibre-channel adapters Linux host systems 216, 217 NetWare host systems 313 Windows 2000 host systems 371 Windows NT host systems 355 Windows Server 2003 host systems 387, 405 Windows Server 2008 host systems 405 Windows Server 2012 host systems 405 MPIO devices as the SAN boot device 134 SAN Volume Controller Solaris host systems 326 SCSI adapters Windows 2000 host systems 371 Windows NT host systems 355 Windows Server 2003 host systems 388 SDD AIX 45 AIX host 38 at system startup 226 Linux host systems 220, 221 NetWare host systems 315 Solaris host systems 335 Windows NT host systems 357 SDDDSM Windows Server 2003 408 Windows Server 2008 408 supported storage device Windows Server 2003 387, 405 Windows Server 2008 405 virtualization products 188 Linux host systems 216 controller health check feature, SDDPCM active/passive storage device 126 conversion scripts hd2vp 87 vp2hd 45, 87 creating device node for the logical volume device in an HP-UX host systems 209 directory in /dev for the volume group in an HP-UX host systems 209 file system on the volume group in an HP-UX host systems 210 logical volume in an HP-UX host systems 210 new logical volumes in an HP-UX host systems 209 physical volume in an HP-UX host systems 210 volume group in an HP-UX host systems 210
customizing Network File System file server 213 Oracle 345 standard UNIX applications 208, 343
D
database managers (DBMS) 327 datapath clear device count command 431 commands 429 disable ports command 432 enable ports command 433 open device path command 434 query adapter 450, 451 adapter command 365, 436 adapter command changes 198 adaptstats command 438 device command 75, 79, 226, 356, 365, 439 devstats command 442 essmap command 444 portmap command 446 set adapter command 381, 399, 453 version command 448 wwpn command 449 remove adapter 450 adapter command 450 device 451 device path command 451 set adapter # offline command 364, 381, 400, 418 set adapter offline command 364, 381, 400, 418 set device 0 path 0 offline command 455 set device N policy rr/fo/lb/df command 70, 197, 229, 338 set device path command 455 set device policy command 454 set qdepth 456 set qdepth command 456 definitions 471 determining AIX adapter firmware level 19, 108 major number of the logical volume device for an HP-UX host systems 209 size of the logical volume for an HP-UX host systems 211 device driver 326 devices.fcp.disk.ibm.rte 12, 16 devices.fcp.disk.ibm2105.rte 12 devices.scsi.disk.ibm2105.rte 12 disk storage system configuring for NetWare 313 configuring on Windows 2000 371 displaying AIX ESS SDD vpath device configuration 74
displaying (continued) current version of SDD Windows 2000 374 Windows NT 357 Windows Server 2003 391, 408 Windows Server 2008 408 Windows Server 2012 408 dpovgfix command 74, 88 dpovgfix vg-name command 78 DS4000 concurrent download of licensed machine code 8 configuring LUNs attached to the HP-UX host system 188 installing an additional package for support 195 dynamic I/O load balancing 6 dynamically opening an invalid or close_dead path 71 dynamically removing or replacing adapters AIX Hot Plug support 47 paths AIX Hot Plug support 47 dynamically removing paths 49 dynamically replacing adapters different type replacement 48 same type replacement 47
E
enhanced data availability 4 error messages AIX messages for persistent reserve environment 461 VPATH_DEVICE_OFFLINE 461 VPATH_DEVICE_ONLINE 461 VPATH_PATH_OPEN 461 VPATH_XBUF_NOMEM 461 for ibmSdd_433.rte installation package for SDD VPATH_FAIL_RELPRESERVE 461 VPATH_OUT_SERVICE 461 VPATH_RESV_CFLICT 462 SDDPCM 462 Windows 464 ESS AIX displaying SDD vpath device configuration 74 configuring for HP 188 configuring for Linux 216 configuring for Solaris 326 configuring on Windows NT 355 devices (hdisks) 93 LUNs 93 exporting a volume group with SDD, AIX 80 extending an existing SDD volume group, AIX 81 extendvg command 81 extendvg4vp command 81, 90
F
failover protection 6 AIX creating a volume group from a single-path SDD vpath device 77 losing 76 manually deleted devices and running the configuration manager 79 side effect of running the disk change method 77 the loss of a device path 77 verifying load-balancing and failover protection 73 when it does not exist 74 fibre-channel adapters configuring for Windows 2000 371 for Windows Server 2003 387, 405 for Windows Server 2008 405 for Windows Server 2012 405 Linux host systems 216, 217 NetWare host systems 313 supported AIX host systems 14, 104 HP-UX host systems 187 Linux host systems 215 NetWare host systems 313 Solaris host systems 325 Windows 2000 host systems 370 Windows NT host systems 354 Windows Server 2003 host systems 386, 404 Windows Server 2008 host systems 404 fibre-channel device drivers configuring for AIX 16, 106 devices.common.IBM.fc 16, 107 devices.fcp.disk 16, 107 devices.pci.df1000f7 16, 107 installing for AIX 16, 106 supported AIX host systems 14, 104 NetWare host systems 313 fibre-channel requirements Windows Server 2003 404 Windows Server 2008 404
H
HACMP concurrent mode 55 enhanced concurrent mode 124 hd2vp conversion script 57 importing volume groups 57 node fallover 66 nonconcurrent mode 55 persistent reserve 57 recovering paths 66 SDD persistent reserve attributes 56 software support for enhanced concurrent mode 124 software support for nonconcurrent mode 55
hardware configuration changing HP-UX host systems 195 Solaris host systems 335 hardware requirements HP-UX host systems 187 Linux host systems 215 Solaris host systems 325 hd2vp and vp2hd command 88 hd2vp command HP-UX host systems 196 hd2vp vg_name command 29 hdisk device chdev 77 modify attributes 77 healthcheck 129 High Availability Cluster Multiprocessing (HACMP) 54 host attachment upgrade 32 host system requirements Windows Server 2003 404 Windows Server 2008 404 Windows Server 2012 404 HP-UX 11.0 32-bit 188 64-bit 188, 190 HP-UX 11.11 32-bit 190 64-bit 190 HP-UX 11.23 IA 64-bit 190 PA_RISC 64-bit 190 HP-UX 11i 32-bit 187 64-bit 187 HP-UX host systems changing SDD hardware configuration 195 the path-selection policy 196 to the /dev directory 209 creating a file system on the volume group 210 a logical volume 210 a volume group 210 device node for the logical volume device 209 directory in /dev for the volume group 209 new logical volumes 209 physical volume 210 determining major number of the logical volume 209 size of the logical volume 211 disk device drivers 200, 208 disk driver 3 dynamic reconfiguration 196 installing SDD 191, 192 on a Network File System file server 213 on a system that already has NFS file server 214 LJFS file system 213 mounting the logical volume 210 operating system 187
HP-UX host systems (continued) protocol stack 3 re-creating existing logical volume 211 logical volume 212 physical volume 210, 212 volume group 212 removing existing logical volume 211 existing volume group 211 logical volumes 211 SCSI disk driver (sdisk) 188 SDD installing 187 setting the correct timeout value for the logical volume manager 213 setting up Network File System for the first time 213 standard UNIX applications 208 support for DS4000 195 understanding how SDD works 188 unsupported environments 188 upgrading SDD 189, 193 using applications with SDD 208
I
IBM System p with static LPARs configured 20 ibm2105.rte 15 ibm2105.rte ESS package 14 ibmSdd_433.rte installation package for SDD 1.2.2.0 removing 57 for SDD 1.3.2.0. SDD vpath devices unconfiguring 57 importing a volume group with SDD, AIX 79 insmod ./sdd-mod.o command 221 installation package, AIX devices.sdd.nn.rte 23 devices.sdd.43.rte 20, 45 devices.sdd.51.rte 20, 45 devices.sdd.52.rte 20 devices.sdd.53.rte 20 devices.sdd.61.rte 20 devices.sddpcm.52.rte 123 ibmSdd_432.rte 36, 50, 55, 95 ibmSdd_433.rte 36, 50, 55, 56, 57, 95, 461, 462 ibmSdd_510.rte 36, 37, 50, 55 ibmSdd_510nchacmp.rte 50, 55 SDD 23 installing additional paths on Windows NT host systems 359 AIX fibre-channel device drivers 16, 106 host attachment 18 planning 11, 99 SDD 23 SDDPCM 110 converting an Oracle installation from sdisk on a Solaris host system 347 NetWare planning 311
installing (continued) Oracle Solaris host systems 345 package for DS4000 support 195 SDD HP-UX host systems 187, 191, 192 Linux host systems 215, 217 NetWare host systems 315 on a Network File System file server on a Solaris host system 343 on a Network File System file server on HP-UX host systems 213 on a system that already has Network File System file server 343 on a system that already has NFS file server on HP-UX host systems 214 on a system that already has Oracle on a Solaris host system 346 on a system that already has Solstice DiskSuite in place on a Solaris host system 349 Solaris host systems 325, 329 using a file system on a Solaris host system 346 using raw partitions on a Solaris host system 347 Windows 2000 host systems 369 Windows NT host systems 353, 355 Windows Server 2003 host systems 385, 391 SDD 1.4.0.0 (or later) Windows 2000 host systems 372 SDD 1.6.0.0 (or later) Windows Server 2003 host systems 388 SDDDSM Windows Server 2003 host systems 403, 405 Windows Server 2008 host systems 403, 405 Windows Server 2012 host systems 403, 405 Solaris Volume Manager for the first time on a Solaris host system 349 vpath on a system that already has UFS logging in place on a Solaris host system 351 installp command 17 instfix -i | grep IY10201 command 17 instfix -i | grep IY10994 command 17 instfix -i | grep IY11245 command 17 instfix -i | grep IY13736 command 17 instfix -i | grep IYl7902 command 17 instfix -i | grep IYl8070 command 17 invalid or close-dead path dynamically opening 71
K
KB 165, 443
L
Licensed Internal Code agreement 467 lilo, using with SDD (remote boot) on x86 302 Linux host systems booting over the SAN withan IBM SDD 237 configuring ESS 216 fibre-channel adapters 216, 217 SDD 220, 226 virtualization products 216 disk driver 3 installing SDD 215, 217 logical volume manager 233 maintaining SDD vpath device configuration persistency 227 partitioning SDD vpath devices 307 preparing SDD installation 216 protocol stack 3 removing SDD 230 unsupported environments 216 upgrading SDD 218 using SDD configuration 221 standard UNIX applications 307 verifying SDD installation 219 load-balancing, AIX 73 loading SDD on Linux 220, 221 on NetWare 315 on Solaris 335 logical volume manager 233, 327 losing failover protection, AIX 76 ls -al /unix command 21 ls -l command 220 lscfg -vl fcsN command 19, 108 lsdev -Cc disk | grep 2105 command 38 lsdev -Cc disk | grep SAN Volume Controller command 38 lsdev -Cc disk command 18, 120 lslpp -l '*Sdd*' command 35 lslpp -l ibmSdd_432.rte command 35, 36 lslpp -l ibmSdd_433.rte command 35, 36 lslpp -l ibmSdd_510.rte command 35, 36 lslpp -l ibmSdd_510nchacmp.rte command 35, 36 lspv command 28, 76 lsvg -p vg-name command 76 lsvgfs command 28 lsvpcfg 52 lsvpcfg command 29, 74, 77, 88, 226, 307 lsvpcfg utility programs, AIX 88 lsvpd command 224
M
maintaining SDD vpath device configuration persistency, for Linux host systems 227 manual exclusion of disk storage system devices from the SDD configuration 52 metadb -a <device> command 350 metadb -d -f <device> command 350 metadb -i command 350
metainit command 349 metainit d <metadevice number> -t <"vpathNs" - master device> <"vpathNs" - logging device> command 351 metastat command 350, 351 migrating AIX an existing non-SDD volume group to SDD vpath devices in concurrent mode 96 non-SDD volume group to a SAN Volume Controller SDD multipath volume group in concurrent mode 94 non-SDD volume group to an ESS SDD multipath volume group in concurrent mode 94 mirroring logical volumes 96 mkdev -l vpathN command 45 mksysb restore command 77 mkvg command 75 mkvg4vp command 75, 89 modifying multipath storage configuration to ESS, Windows NT host systems 361 mounting the logical volume, HP 210 multipath SAN boot support 134
N
NetWare host systems configuring disk storage system 313 fibre-channel adapters 313 SDD 315 error logging 318 error reporting 318 example command output 320 installing SDD 315 preparing SDD installation 313 removing SDD 319 supported environments 312 unsupported environments 312 newfs command 351 NIM environment special considerations 51 uninstalling SDD 51 NIM SPOT server 113 notices Licensed Internal Code 467 notices statement 465
O
odmget -q "name = ioaccess" CuAt command 57 Open HyperSwap quiesce expire time 131 Replication 9 orainst /m command 346
P
partitioning SDD vpath devices, for Linux host systems 307
path-failover protection system 6 path-selection policy changing 70, 197, 229, 338 default (optimized) 317 failover only 70, 196, 228, 317, 338 load balancing 70, 197, 228, 317, 338 round robin 70, 197, 228, 317, 338 pcmpath chgprefercntl device 183 clear device count command 147 disable ports command 148 enable ports command 150 open device path command 152 pcmpath chgprefercntl device 183 query adapter command 154 adaptstats command 156 device command 158 devstats command 164 essmap command 166 port command 167 portmap command 169 portstats command 170 session command 172 set adapter command 175 version command 173 wwpn command 174 set device 0 path 0 offline command 182 set device algorithm 177 set device cntlhc_interval 180, 181 set device hc_interval 178 set device health_check mode 179 set device path command 182 pcmsrv.conf file 426 Persistent Reserve command set 56 pkgrm IBMsdd command 350 planning AIX adapter firmware level 19, 108 disk storage system 15 ESS 105 fibre-channel attached devices 16, 106 fibre-channel device drivers 16, 106 fibre-channel-attached devices 18, 119 installation 11, 99 preparing 15, 105 SAN Volume Controller 16 disk storage system NetWare host systems 313 Windows 2000 host systems 371 ESS HP-UX host systems 188 Linux host systems 216 Solaris host systems 326 Windows NT host systems 355 fibre-channel adapters Windows 2000 host systems 371 Windows NT host systems 355 Windows Server 2003 host systems 387, 405 Windows Server 2008 host systems 405
planning (continued) fibre-channel adapters (continued) Windows Server 2012 host systems 405 hardware and software requirements HP-UX host systems 187 Solaris host systems 325 hardware requirements fibre adapters and cables 102 supported storage devices 102 hardware requirements, AIX disk storage systems 12 Fibre channel adapters and cables 12 Host system 12 SAN Volume Controller 12 SCSI adapters and cables 12 hardware requirements, SDDPCM Fibre adapters and cables 102 Host system 102 supported storage devices 102 hardware requirements, Windows 2000 ESS 369 hardware requirements, Windows NT ESS 353 hardware requirements, Windows Server 2003 385 disk storage system 403 hardware requirements, Windows Server 2008 disk storage system 403 host system requirements, AIX 13 ESS 14 Fibre 14 SAN Volume Controller 14 SCSI 14 supported storage devices 103 host system requirements, NetWare 311 disk storage system 312 fibre 313 SCSI 312 host system requirements, SDDPCM 103 Fibre 104 host system requirements, Windows 2000 ESS 369 host system requirements, Windows NT ESS 354 host system requirements, Windows Server 2003 disk storage system 386 installation of SDD HP-UX host systems 189 Solaris host systems 327 NetWare installation 311 preparing for SDD installation on HP-UX host systems 188 Solaris host systems 326 SAN Volume Controller Solaris host systems 326 SCSI adapters Windows NT host systems 355
planning (continued) SDD HP-UX host systems 187 Linux host systems 215, 216 NetWare host systems 313 Solaris host systems 325 Windows 2000 host systems 371 Windows NT host systems 353 Windows Server 2003 host systems 387, 405 Windows Server 2008 host systems 405 Windows Server 2012 host systems 405 software requirements Windows 2000 operating system 369 Windows NT operating system 353 Windows Server 2003 operating system 385, 403 Windows Server 2008 operating system 403 Windows Server 2012 operating system 403 software requirements, AIX AIX operating system 12 ibm2105.rte ESS package 12 SCSI and fibre-channel device drivers 12 software requirements, AIX 5.2 TL07 (or later), AIX 5.3 TL08 (or later), or AIX 6.1 TL02 fibre-channel device drivers 103 software requirements, SDDPCM AIX 5.2 TL07 (or later), AIX 5.3 TL08 (or later), or AIX 6.1 TL02 operating system 103 supported storage device Windows Server 2003 host systems 387, 405 Windows Server 2008 host systems 405 virtualization products Linux host systems 216 Windows 2000 disk storage system 371 Windows Server 2003 supported storage device 387, 405 Windows Server 2008 supported storage device 405 postinstallation of SDD HP-UX host systems 200 Solaris host systems 331 preparing AIX SDD installation 15 SDDPCM installation 105 configure on AIX 38 SDD HP-UX host systems 188 Linux host systems 216 NetWare host systems 313 Solaris host systems 326 Windows 2000 installation 371 Windows NT host systems 355
preparing (continued) SDD (continued) Windows Server 2003 installation 387, 405 Windows Server 2008 installation 405 Windows Server 2012 installation 405 probe_retry 342 pvid 76, 95
Q
qdepth_enable 43
R
raw device interface (sd) 327 device interface (sdisk) 189 re-creating existing logical volume on HP-UX host systems 211 physical volume on HP-UX host systems 210, 212 the logical volume on HP-UX host systems 212 the volume group on HP-UX host systems 212 recovering from mixed volume groups, AIX 81 recovery procedures for HP 211, 213 remote boot support Windows 2000 378 Windows Server 2003 396, 416 Windows Server 2008 416 Windows Server 2012 416 removing existing logical volume on HP-UX host systems 211 existing volume group on HP-UX host systems 211 logical volumes on HP-UX host systems 211 SDD from an AIX host 50 from an AIX host system 50 in a two-node cluster environment 383, 401 Linux host systems 230 NetWare host systems 319 Windows 2000 host systems 377 Windows NT host systems 363 Windows Server 2003 host systems 395, 415 Windows Server 2008 host systems 415 Windows Server 2012 host systems 415 SDDDSM in a two-node cluster environment 419 SDDPCM from an AIX host 123 from an AIX host system 123 replacing manually excluded devices in the SDD configuration 52
requirements disk storage system Windows Server 2003 host systems 386 ESS Windows 2000 host systems 369 Windows NT 354 hardware and software HP 187 Linux host systems 215 Solaris host systems 325 hardware, AIX disk storage systems 12 Fibre channel adapters and cables 12 Host system 12 SAN Volume Controller 12 SCSI adapters and cables 12 hardware, SDDPCM fibre adapters and cables 102 Fibre adapters and cables 102 Host system 102 supported storage devices 102 hardware, Windows 2000 ESS 369 hardware, Windows NT ESS 353 hardware, Windows Server 2003 385 disk storage system 403 hardware, Windows Server 2008 disk storage system 403 host system, AIX 13 ESS 14 Fibre 14 SAN Volume Controller 14 SCSI 14 supported storage devices 103 host system, NetWare 311 disk storage system 312 fibre 313 SCSI 312 host system, SDDPCM 103 Fibre 104 software AIX operating system 12 ibm2105.rte ESS package, AIX 12 SCSI and fibre-channel device drivers, AIX 12 SDDPCM, AIX 5.2 TL07 (or later), AIX 5.3 TL08 (or later), or AIX 6.1 TL02 operating system 103 Windows 2000 operating system 369 Windows NT operating system 353 Windows Server 2003 operating system 385, 403 Windows Server 2008 operating system 403 Windows Server 2012 operating system 403 reserve policy attribute 44 controlling 44
restoring AIX files belonging to an SDD volume group 82 restvg command 82 restvg4vp command 82 reviewing the existing SDD configuration information, Windows NT 358, 361 rmdev -dl dpo -R command 28, 50, 123 rmdev -dl fcsN -R command 18, 120 rmdev -l dpo -R command 46 rmdev command 79 rmvpath xxx command 225, 226 rpm -e IBMsdd command 230 rpm -qi IBMsdd command 219, 230 rpm -ql IBMsdd command 219, 230 RSSM concurrent download of licensed machine code 8
S
SAN Volume Controller configuring for Solaris 326 preferred node path selection algorithm 9 Preferred Node path selection algorithm 197 savevg command 81 savevg4vp command 81 SCSI adapter support AIX host systems 14 HP-UX host systems 187 NetWare host systems 312 Solaris host systems 325 Windows 2000 host systems 370 Windows NT host systems 354 Windows Server 2003 host systems 386 SCSI-3 Persistent Reserve command set 56 SDD adding paths to vpath devices of a volume group 47 architecture 2 automount 230 configuration checking 46 displaying the current version on Windows Server 2003 391 host attachment, removing 51 how it works on HP-UX host systems 188 how it works on Solaris host systems 326 installing AIX 11 HP-UX host systems 187 Linux 215 Linux over the SAN 237 NetWare 311 scenarios 189 Solaris host systems 325, 329 Windows 2000 host systems 369, 372 Windows NT 353
SDD (continued) installing (continued) Windows Server 2003 host systems 385, 388, 391 introduction 2 mounting devices with automount 230 NIM environment 51 overview 2 postinstallation HP-UX host systems 200 Solaris host systems 331 removing HP-UX host systems 202 NetWare host systems 319 Windows NT host systems 363 server daemon 423 AIX host systems 22, 67, 305 HP-UX host systems 203 Solaris host systems 341 Windows 2000 host systems 383 Windows NT host systems 366 Windows Server 2003 host systems 401 starting manually 204 stopping 204 upgrading automatically 24 HP-UX host systems 189 manually 28 Windows 2000 374 Windows Server 2003 391 userspace commands for reconfiguration 225, 226 using applications with SDD on HP Network File System file server 213 with SDD on HP-UX standard UNIX applications 208 with SDD on Linux standard UNIX applications 307 with SDD on Solaris Network File System file Server 343 with SDD on Solaris standard UNIX applications 343 with SDD on Solaris, Oracle 345 utility programs, AIX 87 verifying additional paths to SDD devices 359, 376, 394 configuration 46 vpath devices 93 website xix SDDDSM configuring Windows Server 2003 408 Windows Server 2008 408 datapath command support 420 displaying the current version on Windows Server 2003 408 displaying the current version on Windows Server 2008 408 displaying the current version on Windows Server 2012 408 installing Windows Server 2003 host systems 403, 405
SDDDSM (continued) installing (continued) Windows Server 2008 host systems 403, 405 Windows Server 2012 host systems 403, 405 server daemon Windows Server 2003 host systems 420 Windows Server 2008 host systems 420 upgrading Windows Server 2003 408 Windows Server 2008 408 verifying additional paths to SDDDSM devices 410, 413 sddpcm 423 SDDPCM installing AIX 99 from AIX NIM SPOT server 113 maximum number of devices 116 path healthcheck time interval 130 pcmpath commands 144 server 119 server daemon 138, 423 updating package 114, 117 verifying if the server has started 138 SDDPCM server daemon 425 sddserver.rte AIX host systems 19 sddsrv AIX host systems 22, 67, 305 for ESS Expert AIX host systems 19 for SDDPCM 423 HP-UX host systems 203 port binding 424 Solaris host systems 341 trace 424 Windows 2000 host systems 383 Windows NT host systems 366 Windows Server 2003 host systems 401, 420 Windows Server 2008 host systems 420 sddsrv.conf file 425 secondary-system paging space, managing 73 setting up correct timeout value for the logical volume manager on HP-UX host systems 213 Network File System for the first time on HP-UX host systems 213 NFS for the first time on a Solaris host system 343 Oracle using a file system Solaris host systems 345 Oracle using raw partitions Solaris host systems 345 UFS logging on a new system on a Solaris host system 350 showvpath command 210, 347, 348, 349, 350 shutdown -i6 -y -g0 command 350
shutdown -rF command 18, 120 SMIT configuring SDD for Windows NT host systems 357 definition 23, 112 smitty command 28 smitty deinstall command 17 smitty device command 28 smitty uninstall command 17 smitty, definition 23, 112 software requirements for SDD on HP 187 for SDD on Linux 215 for SDD on Solaris 325 Solaris 10 Zone support 337 Solaris host systems changing SDD hardware configuration 335 the path-selection policy 338 configuring SDD 335 disk device drivers 327 installing Oracle 345 Solaris Volume Manager for the first time 349 vpath on a system that already has UFS logging in place 351 installing SDD 329 converting an Oracle installation from sdisk 347 Network File System file server 343 system that already has Network File System file server 343 system that already has Oracle 346 system that already has Solstice DiskSuite in place 349 using a file system 346 using raw partitions 347 operating system upgrading SDD 325 Oracle 345 SCSI disk driver 326 sd devices 342 SDD 325 SDD postinstallation 331 setting up NFS for the first time 343 UFS logging on a new system 350 Solaris 10 Zone support 337 Solaris Volume Manager 348 Solstice DiskSuite 348 standard UNIX applications 343 supported environments 326 UFS file system 343 understanding how SDD works 326 unsupported environments 326 upgrading SDD 334 upgrading the Subsystem Device Driver on 327 using applications with SDD 342, 344 Subsystem device driver, see SDD 348
Sun host systems disk driver 3 protocol stack 3 support for Windows 2000 381 Windows NT 363 Windows Server 2003 399, 418 Windows Server 2008 418 Windows Server 2012 418 supported environments NetWare host systems 312 Solaris 326 supported storage device configuring on Windows Server 2003 387, 405 Windows Server 2008 405 synchronizing logical volumes 96 System Management Interface Tool (SMIT) 112 definition 23, 112 using for configuring 38 using to access the Add a Data Path Volume to a Volume Group panel on AIX host 86 using to access the Add a Volume Group with Data Path Devices panel on AIX host 85 using to access the backup a Volume Group with Data Path Devices on AIX host 86 using to access the Configure a Defined Data Path Device panel on AIX host 85 using to access the Define and Configure All Data Path Devices panel on AIX host 85 using to access the Display Data Path Device Adapter Status panel on AIX host 84 using to access the Display Data Path Device Configuration panel on AIX host 83 using to access the Display Data Path Device Status panel on AIX host 84 using to access the Remake a Volume Group with Data Path Devices on AIX host 87 using to access the Remove a Data Path Device panel on AIX host 85 using to access the Remove a Physical Volume from a Volume Group panel on AIX host 86 using to back up a volume group with SDD on AIX host 81 using to backup a volume group with SDD on AIX host 86 using to create a volume group with SDD on AIX host 75 using to display the SAN Volume Controller SDD vpath device configuration on AIX host 74 using to display the SDD vpath device configuration on AIX host 74 using to export a volume group with SDD on AIX host 80 using to extend an existing SDD volume group on AIX host 81
System Management Interface Tool (SMIT) (continued) using to import a volume group with SDD on AIX host 79 using to remove SDD from AIX host 50 using to remove SDDPCM from AIX host 123 using to restore a volume group with SDD on AIX host 87 using to restore a volume group with SDD vpath devices on AIX host 82 using to unconfigure SDD devices on AIX host 45 using to verify SDD configuration on AIX host 46
T
trademarks 467
U
umount /cdrom command 334 command 28, 351 unconfiguring SDD on AIX 45 understanding how SDD works for HP-UX host systems 188 how SDD works for Solaris host systems 326 unmod ./sdd-mod.o command 230 unsupported environments AIX 13, 103 HP 188 HP-UX 188 Linux 216 NetWare host systems 312 Solaris 326 Windows 2000 369 Windows NT 353 Windows Server 2003 385, 404 Windows Server 2008 404 updating SDD using a PTF 29 upgrade AIX OS 32 host attachment 32 SDD packages 32 upgrading AIX adapter firmware level 108 manually 28 SDD automatically 24 HP-UX host systems 193 Linux host systems 218 Solaris host systems 334 Windows 2000 host systems 374 Windows NT host systems 357 Windows Server 2003 host systems 391 SDD, manually for AIX 4.3.2 28 for AIX 4.3.3 28
upgrading (continued) SDD, manually (continued) for AIX 5.1.0 28 for AIX 5.2.0 28 SDDDSM Windows Server 2003 host systems 408 Windows Server 2008 host systems 408 to SDD 1.3.3.3 (or later) in a two-node cluster environment 382, 401 using AIX trace function 98 datapath commands 429 ESS devices directly, AIX 93 through AIX LVM 94 HP-UX applications with SDD 208 Linux standard UNIX applications 307 pcmpath commands 187 SAN Volume Controller devices through AIX LVM 94 SDDPCM trace function, AIX 137 Solaris applications with SDD 342 utility programs, AIX addpaths 87 dpovgfix 88 extendvg4vp 90 hd2vp and vp2hd 88 lsvpcfg 88 mkvg4vp 89 using disk storage system devices through AIX LVM 94 using ESS devices directly 93 using SAN Volume Controller devices through AIX LVM 94 using the SDDPCM trace function 137 using the trace function 98 utility programs, HP hd2vp 196 vp2hd 196 vpcluster 206
V
varyoffvg command 28, 38 varyonvg vg_name command 29 verifying additional paths are installed correctly Windows 2000 host systems 376 Windows NT host systems 359 Windows Server 2003 host systems 394, 410 Windows Server 2008 host systems 410 Windows Server 2012 host systems 410 additional paths are removed correctly Windows Server 2003 host systems 413 Windows Server 2008 host systems 413 Windows Server 2012 host systems 413
verifying (continued) AIX configuring SDD 46 SDD installation 35 SDD installation Linux host systems 219 Veritas Volume Manager Command Line Interface for Solaris website 344 System Administrator's Guide website 344 vgexport command HP-UX host systems 205 vgimport command HP-UX host systems 206 virtualization products configuring for Linux 216 volume groups AIX 75 mixed dpovgfix vg-name 78 how to fix problem 78 vp2hd command HP-UX host systems 196 vpcluster command HP-UX host systems 206
W
Web sites information about SCSI adapters that can attach to Windows Server 2003 host systems 386 websites AIX APARs maintenance level fixes and microcode updates 14 technology level fixes and microcode updates 103 HP-UX documentation 211, 213 information about SCSI adapters that can attach to Windows 2000 host systems 370 SCSI adapters that can attach to Windows NT host systems 354 information about removing multipath access to your shared volume Multiple-Path Software May Cause Disk Signature to Change (Knowledge Base Article Number Q293778) 363 information on the fibre-channel adapters that can be used on NetWare host systems 313 information on the fibre-channel adapters that can be used on your AIX host 14, 104 information on the SCSI adapters that can attach to your AIX host 14 information on the SCSI adapters that can attach to your NetWare host 312 NetWare APARs, maintenance level fixes and microcode updates 311 SDD xix
Windows 2000 host systems clustering special considerations 381 configuring cluster with SDD 382 disk storage system 371 fibre-channel adapters 371 SCSI adapters 371 disk driver 3 displaying the current version of the SDD 374 installing SDD 1.4.0.0 (or later) 372 path reclamation 381 protocol stack 3 removing SDD 377 SDD 369 support for clustering 381 unsupported environments 369 upgrading SDD 374 verifying additional paths to SDD devices 376 Windows NT host systems adding multipath storage configuration to the ESS 361 new storage to existing configuration 362 paths to SDD devices 357 clustering special considerations 364 configuring additional paths 359 clusters with SDD 364 SDD 357 disk driver 3 displaying the current version of the SDD 357 installing additional paths 359 SDD 355 modifying multipath storage configuration to the ESS 361 path reclamation 364 protocol stack 3 removing SDD 363 reviewing existing SDD configuration information 358, 361 support for clustering 363 unsupported environments 353 upgrading SDD 357 using SDD 353 verifying new storage is installed correctly 362 verifying additional paths to SDD devices 359 Windows Server 2003 host systems adding paths to SDD devices 392 paths to SDDDSM devices 408 clustering special considerations 400, 418 configuring cluster with SDD 400, 418 fibre-channel adapters 387, 405 SCSI adapters 388 supported storage device 387, 405
Windows Server 2003 host systems (continued) displaying the current version of the SDD 391, 408 fibre-channel requirements 404 host system requirements 404 installing SDD 1.6.0.0 (or later) 388 SDDDSM 405 path reclamation 400, 418 removing SDD 395, 415 SDD 385 SDDDSM 403 support for clustering 399, 418 unsupported environments 385, 404 upgrading SDD 391 SDDDSM 408 verifying additional paths to SDD devices 394 additional paths to SDDDSM devices 410, 413 Windows Server 2008 host systems adding paths to SDDDSM devices 408 configuring cluster with SDD 418 fibre-channel adapters 405 supported storage device 405 displaying the current version of the SDD 408 fibre-channel requirements 404 host system requirements 404 installing SDDDSM 405 removing SDD 415 SDDDSM 403 support for clustering 418 unsupported environments 404 upgrading SDDDSM 408 verifying additional paths to SDDDSM devices 410, 413 Windows Server 2012 host systems adding paths to SDDDSM devices 408 configuring cluster with SDD 418 fibre-channel adapters 405 displaying the current version of the SDD 408 host system requirements 404 installing SDDDSM 405 removing SDD 415 support for clustering 418 verifying additional paths to SDDDSM devices 410, 413
GC52-1309-05
Printed in USA