How To Configure VIO LUNs From SAN To Client LPARs
This document covers the steps involved and best practices for allocating SAN LUNs from a VIO server to client LPARs using VSCSI. It does not cover assigning LUNs using NPIV, nor the general best practices for VIO server configuration and maintenance.
1.1 Set the Fibre Channel SCSI Attributes for Each Fibre Channel Adapter
Enable 'dynamic tracking' and 'fast fail' on each fibre channel adapter for high availability and quick failover.
Example:
As 'padmin'
$ chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
fscsi0 changed
Note: This should be run for the fscsi device of every adapter listed in the 'lsdev -Cc adapter | grep fcs' output.
Note: A reboot of the VIO server is required for the 'dyntrk' and 'fc_err_recov' parameters to take effect.
1.2 Prepare the VIO Server for Configuring the SAN Storage
Prepare the VIO server to configure the SAN storage at the OS level. The steps involved are the same as for a standalone AIX host. The high-level steps are:
Install the necessary AIX ODM filesets and host attachment scripts; this software is supplied by the storage vendor, e.g., IBM, EMC, Hitachi, etc.
Install the multipath software, e.g., SDDPCM, EMC PowerPath, Hitachi HDLM, etc., as required.
Verify that the multipaths are active using the path query commands.
Example:
For SDDPCM
# pcmpath query device
For EMC PowerPath
# powermt display
Set the reserve policy on the logical disk that will be used to create virtual devices, i.e., hdisk, vpath, hdiskpower, etc. Which logical disk is used to create a virtual device depends upon the multipath software. For example, if IBM SDD is the multipath software the device would be a 'vpath', with IBM SDDPCM it would be an 'hdisk', whereas with EMC PowerPath it would be an 'hdiskpower'.
1.3.1 Assign a PVID and Set the 'reserve policy' on the AIX Logical Disk
Example:
As 'padmin' (hdisk2 is illustrative and an SDDPCM-managed hdisk is assumed; for EMC PowerPath devices the reserve attribute is 'reserve_lock=no' rather than 'reserve_policy')
$ chdev -dev hdisk2 -attr pv=yes reserve_policy=no_reserve
1.3.2 Check the Queue Depth Settings on the Logical Devices
Check the queue depth settings on the logical devices that will be used to create the virtual devices, i.e., vpath, hdisk, hdiskpower, etc. The IBM and EMC ODM filesets set the queue depth to a reasonable value and do not require initial tuning. For example, the IBM storage ODM filesets set the queue depth to 20 and the EMC ODM filesets set it to 32. However, the Hitachi ODM configuration sets the queue depth to 2, which is very low; it is recommended to work with the SAN team/storage vendor support to change the initial queue depth to a reasonable value. For Hitachi devices, start with a value in the range of 8 to 16.
Example:
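For instance (hdisk4 and the value 16 are illustrative; confirm the recommended queue depth with the storage vendor), check the current values and change the queue depth as follows:
# lsattr -El hdisk4 -a queue_depth -a rw_timeout
# chdev -l hdisk4 -a queue_depth=16 -P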
Map the vhosts that belong to each client LPAR; these are the vhosts that will be used to allocate LUNs to the client LPARs.
Example:
The 'lsdev -slots' and 'lsmap -all | grep vhost' commands help determine the vhosts available and their mapping to the clients.
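Run them as 'padmin' on each VIO server, for example:
$ lsdev -slots
$ lsmap -all | grep vhost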
If needed, use the 'lshwres' command on the HMC to create a mapping between the client and server LPAR virtual adapters.
Example:
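A query along these lines on the HMC lists the virtual SCSI adapters with their client/server slot pairings (replace the managed system name placeholder with the actual system name):
lshwres -r virtualio --rsubtype scsi -m <managed system name> --level lpar -F lpar_name,slot_num,remote_lpar_name,remote_slot_num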
Once the vhost is identified based on the mapping, create the virtual device as shown below.
In the following example the LUN is being assigned to vhost2 (LPAR id 004) and the logical disk is hdiskpower0. The naming convention for the virtual target device (-dev) is the client LPAR name plus the AIX logical disk name. The naming convention can be site specific, but the same convention should be used consistently on all the VIO servers.
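A command along these lines, run as 'padmin', would create the device (the client LPAR name 'client1' used in the -dev name is illustrative; substitute the actual LPAR name per the naming convention):
$ mkvdev -vdev hdiskpower0 -vadapter vhost2 -dev client1_hdiskpower0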
Run the mkvdev command on the second VIO server as well, which provides the second path to the disk from the client.
Note: To identify the logical disk (hdisk, vpath, hdiskpower, etc.) that matches a LUN serial number from the storage, use the respective multipath software commands or the lscfg command.
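For instance, with SDDPCM or EMC PowerPath respectively (the serial-number fragments used here match the note that follows):
# pcmpath query device | grep -p 02C
# powermt display dev=all | grep -p 02B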
Here 02C and 02B are the last three alphanumeric characters of the LUN serial numbers.
Alternatively, generate an output file using the above commands so that multiple serial numbers can be mapped without having to run the pcmpath, powermt, or dlnkmgr commands for every disk.
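For example (the output file names are illustrative):
# pcmpath query device > /tmp/pcmpath_devices.out
# powermt display dev=all > /tmp/powermt_devices.out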
The total number of LUNs (disks) that can be mapped to a vhost depends on the queue depth of the SAN LUN. The best practice is to set the queue depth on the virtual disk (client) to the same value as on the backing device on the VIO server.
The maximum number of LUNs per vhost (virtual adapter) = (512-2)/(3+queue depth)
From the examples above,
For EMC storage with queue depth of 32 the maximum number of disks per virtual
adapter would be (512-2)/(3+32) = 14
For IBM storage where the queue depth is typically set at 20, the maximum number of
disks per virtual adapter would be (512-2)/(3+20) = 22
For Hitachi storage with a typical queue depth value of 16, the maximum number of disks
per virtual adapter would be (512-2)/(3+16) = 26
Note: If needed create additional server and client adapters for allocating the SAN LUNs.
Set the 'vscsi_path_to' and 'vscsi_err_recov' attributes to the values shown below for each vscsi adapter on the client LPAR.
# chdev -l vscsi1 -a vscsi_path_to=30 -P
# chdev -l vscsi1 -a vscsi_err_recov=fast_fail -P
Note: vscsi_err_recov cannot be modified on older AIX 5.3 TLs, and VIOS 2.x is needed for it to work.
A reboot of the client LPAR is required for the above parameters to take effect.
2.2 Configure VSCSI disks and Set Device Characteristics on the Client LPARs
Run 'cfgmgr' and verify that the disks assigned to the vhosts on the VIO servers appear on the client LPARs. Use 'lspv | grep <pvid>' to match the disks from the VIOS to the client LPARs, and also check the UDID using the following command on both client and server. Note that not all multipath software and versions use the UDID method to identify the devices.
# odmget -q attribute=unique_id CuAt
(the first and last numbers of the unique_id will differ between the VIOS and the client)
On newer versions of VIOS the following command can also be used to query the unique ID.
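For example, as 'padmin' (hdisk4 is illustrative; the chkdev command is available on VIOS 2.x and later):
$ chkdev -dev hdisk4 -verbose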
Verify that two paths are enabled to each vscsi disk (one from each VIO server).
lspath should show two paths to each vscsi disk (hdisk) on the client LPAR.
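For example, on the client LPAR (hdisk4 is illustrative):
# lspath -l hdisk4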
2.2.3 Set 'hcheck_interval' and 'queue_depth' for Disks on the Client LPAR
Once the hdisks are configured on the client LPARs, set the following device characteristics:
hcheck_interval
queue_depth
Set the hcheck_interval to a value equal to or smaller than the 'rw_timeout' value of the SAN LUN on the VIO server. For IBM and Hitachi storage the rw_timeout value is typically 60, so hcheck_interval can be set to 60 or less. Check the 'rw_timeout' value from the VIOS using 'lsattr' as shown above in 1.3.2.
Example:
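A typical command on the client LPAR (hdisk4 and the value 60 are illustrative; use the rw_timeout value observed on the VIOS):
# chdev -l hdisk4 -a hcheck_interval=60 -P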
For EMC devices the rw_timeout value is 30, so set the hcheck_interval to a value equal to or less than 30.
Set the queue depth on the client disk to the same value as on the physical device on the VIOS.
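For example (hdisk4 and the value 20 are illustrative; match the queue_depth of the backing device on the VIOS):
# chdev -l hdisk4 -a queue_depth=20 -P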
Values set using the -P option require a reboot of the client LPAR to take effect.
Path priority settings facilitate load balancing across the VIO servers. There are two approaches to load balancing from client LPARs to VIO server LPARs. In the first approach, set the path priority for all the disks from one client LPAR to one VIO server, from the second client LPAR to the second VIO server, and so on. In the second approach, set the path priority for some disks to the first VIO LPAR and for other disks to the second VIO LPAR, i.e., use all VIO servers from all client LPARs. The first approach, using one VIO server per client LPAR, is easier to administer and still provides load balancing.
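A sketch of the first approach on one client LPAR (hdisk4, vscsi0 and vscsi1 are illustrative; the lower priority value marks the preferred path):
# chpath -l hdisk4 -p vscsi0 -a priority=1
# chpath -l hdisk4 -p vscsi1 -a priority=2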
After the configuration steps on the VIO servers and client LPARs have been completed, perform MPIO failover testing by shutting down one VIO server at a time. The paths on the client LPAR should fail over to the second path; 'lspath' and 'errpt' can be used to validate the path failover. The volume groups should remain active while the VIOS is down. Start the VIO server; once it is up, the failed path on the client LPAR should return to the 'Enabled' state.