Using Linux Hosts With ONTAP Storage
Contents
Overview of the supported Linux environments ....................................... 7
Improving I/O performance on Red Hat Enterprise Linux hosts .......... 10
(iSCSI) How to configure iSCSI for Linux ............................................... 11
Getting the iSCSI initiator node name ...................................................................... 11
Setting the timeout values to enable multipathing with iSCSI .................................. 12
Setting up CHAP for Red Hat Linux 5, 6, and 7 and SUSE Linux 10, 11, and 12 series for iSCSI .......... 13
(iSCSI) Setting up CHAP for Red Hat Enterprise Linux 4 series ............................. 14
Starting the iSCSI service ......................................................................................... 15
Methods for setting up target discovery with software initiators on iSCSI .............. 16
Discovering the iSCSI target by using the iscsiadm utility on Red Hat 5, 6, 7 and SUSE 10, 11, and 12 .......... 16
(iSCSI) Setting up target discovery on Red Hat Enterprise Linux 4 series ... 17
Discovering targets by using YaST2 on SUSE 10, 11, and 12 series on iSCSI .......... 17
Configuring the iSCSI service to start automatically ................................................ 18
Configuring manual or automatic node login with iSCSI ......................................... 19
Configuring the storage system ................................................................. 20
DM-Multipath configuration ..................................................................... 21
Verifying the required multipathing packages .......................................................... 21
Editing the DM-Multipath configuration file ............................................................ 22
Starting DM-Multipath .............................................................................................. 23
Configuring DM-Multipath to start automatically while booting ................. 24
Verifying the DM-Multipath configuration ............................................................... 24
Stopping DM-Multipath ............................................................................................ 27
Veritas Dynamic Multipath configuration ............................................... 28
(Veritas) VxDMP restore daemon and LUN retries tunable configuration ............... 28
Setting the Veritas restore daemon and LUN retry editable values ............... 28
Configuring Red Hat Enterprise Linux 6 series and Red Hat Enterprise Linux 7 series to support Veritas Storage Foundation .......... 29
Configuring SUSE Linux Enterprise Server 11 to support Veritas Storage Foundation .......... 30
The Veritas Array Support Library and Array Policy Module .................................. 31
(Veritas) What the ASL is ............................................................................. 31
(Veritas) What the APM is ............................................................................ 32
Installing the ASL and APM software to support Veritas Storage Foundation .......... 32
(Veritas) Removing the ASL and APM ......................................................... 33
(Veritas) Information about ASL error messages .......................................... 33
Methods for working with LUNs in native Linux environments ........... 35
Discovering new LUNs in FC and hardware iSCSI environments ........................... 35
Discovering new LUNs on Red Hat and SUSE with iSCSI and multipathing ......... 36
Discovering new LUNs on Red Hat Enterprise Linux 4 with DM-Multipath and software iSCSI .......... 36
Viewing a list of LUNs .............................................................................................. 37
Examples of sanlun, iscsiadm, iscsi output when used to view LUNs ......... 37
Enabling device persistence for newly discovered LUNs ......................................... 43
Removing an unmapped LUN ................................................................................... 43
(Veritas) LUN access when using VxDMP ............................................... 45
Discovering new LUNs on Veritas with FC .............................................................. 45
Discovering new LUNs for Red Hat 5, 6, 7 or SUSE 10, 11 and 12 while using Veritas and iSCSI .......... 46
Viewing LUNs mapped to the host and on VxVM disks .......................................... 46
Examples of sanlun, iscsiadm, and iscsi output when used to view LUNs on Veritas .......... 47
(Veritas) Displaying multipathing information for VxDMP ..................................... 50
(Veritas) Examples of sanlun output for VxDMP ......................................... 51
Removing an unmapped LUN ................................................................................... 52
Displaying available paths using VxVM on Veritas ................................................. 52
(FC) Setting up a SAN boot LUN on Red Hat Enterprise Linux ........... 54
(FC) Setting up a SAN boot LUN on SUSE Linux Enterprise Server ... 55
(FC) Configuring the root partition with DM-Multipath on SUSE Linux Enterprise Server .......... 55
(iSCSI) SAN boot configuration for iSCSI hardware, software initiators .......... 57
(Hardware iSCSI) Configuring SAN boot on Red Hat Enterprise Linux ................. 57
(Native multipathing) Using sanlun to display DM-Multipath information ............. 58
(Native multipathing) Examples of sanlun output containing DM-Multipath information .......... 58
(Software iSCSI) Configuring SAN boot on Red Hat Enterprise Linux 5 or 6 series .......... 61
Configuring SAN boot on Red Hat Enterprise Linux 7 series .................................. 62
(Software iSCSI) Configuring SAN boot on SUSE Linux Enterprise Server .......... 63
(Software iSCSI) Configuring multipathing for a SAN boot LUN using SUSE Linux Enterprise Server .......... 65
Configuring SAN boot in a Veritas environment ..................................... 66
Support for host virtualization .................................................................. 68
Hypervisor VHD requires alignment for best performance ...................................... 68
Supported Linux and Data ONTAP features ........................................... 70
Protocols and configurations supported by Host Utilities ......................................... 70
The FC protocol ............................................................................................ 70
The FCoE protocol ........................................................................................ 71
The iSCSI protocol ........................................................................................ 71
SAN booting .................................................................................................. 71
Support for Linux Device Mapper Multipathing .......................................... 72
Volume management and multipathing with Veritas Storage Foundation .... 72
• Setup issues:
◦ If you are using HBAs, set them up before you install the
Host Utilities.
Oracle Linux:
This guide provides instructions and examples using Red Hat
Enterprise Linux and SUSE Linux Enterprise Server. In most
cases, Oracle Linux uses the same setup procedures as Red Hat
Enterprise Linux. To simplify this guide, it uses "Red Hat" to
refer to both systems using Red Hat Enterprise Linux and systems
using Oracle Linux. If Oracle Linux requires a different
procedure, that procedure is included.
Note: Ensure that the kernel and dm-multipath versions match those listed in the
Interoperability Matrix Tool. If they do not, install the supported versions
from the Oracle Unbreakable Linux Network (ULN).
Steps
Example
The output changes based on the number of LUNs presented to the host, and might differ from
what this example shows:
Stopping tuned: [ OK ]
Switching to profile 'enterprise-storage'
Applying deadline elevator: dm-0 dm-1 dm-2 dm-3 sda sdb sdc
sdf sdg sdh sdi sdj sdk                                    [  OK  ]
Applying ktune sysctl settings:
/etc/ktune.d/tunedadm.conf: [ OK ]
Calling '/etc/ktune.d/tunedadm.sh start': [ OK ]
Applying sysctl settings from /etc/sysctl.conf
Starting tuned: [ OK ]
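The output above is typical of applying the enterprise-storage profile with the tuned-adm utility. As a sketch (available profile names depend on the installed tuned version), the command is:

```
# tuned-adm profile enterprise-storage
```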
• If you want to use multipathing, you must edit the iSCSI configuration file to set it up.
• If you want to use CHAP, you must edit the iSCSI configuration file to set it up.
• You must set up target discovery so that the host can access LUNs on the storage system.
• Configure the initiator with the IP address for each storage system by using static, iSNS, or
dynamic discovery.
Steps
1. Use a text editor to open the file containing the node names:
2. If you want to change the default name, edit the line in the file containing the node name.
You can only replace the RandomNumber portion of the name, and any changes must follow these
naming rules:
• A node name can contain alphabetic characters (a to z), numbers (0 to 9), and the following
three special characters (the underscore character ( _ ) is not supported):
. - :
Note: If the node name does not exist, you can create one by adding a line to the file containing
node names. Use the same format shown below for modifying a node name.
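The note above refers to the node-name format. As an illustration only, on Red Hat systems this file is typically /etc/iscsi/initiatorname.iscsi, and an entry follows this format (the IQN shown is a made-up placeholder, not a value to copy):

```
InitiatorName=iqn.1994-05.com.redhat:5f1e2c3d4b5a
```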
3. Write down the node name so that you can easily enter it when you configure the storage system.
Step
1. Edit the following file to provide the correct timeout value for your Host Utilities environment
(DM-Multipath or Veritas Storage Foundation):
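As an assumption based on current Open-iSCSI implementations (the guide does not name the file here), the timeout in question is usually the replacement_timeout setting in /etc/iscsi/iscsid.conf. The value below is a placeholder; use the value that your multipathing solution requires:

```
# Placeholder value -- use the timeout required by your multipathing
# solution (DM-Multipath or Veritas Storage Foundation)
node.session.timeo.replacement_timeout = 5
```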
This reflects the new iSCSI timeout settings for the existing SRs.
For new Storage Repositories (SRs), both new and existing SRs are updated with the new
iSCSI timeout settings.
You must add CHAP user names and passwords to the /etc/iscsi/iscsid.conf file and then use
the iscsi security command to set up the same user names and passwords on the storage system.
Steps
3. Provide a CHAP user name and password for the target to use when authenticating the initiator.
You must remove the comment indicators and supply values for the options username and
password in the following configuration entries:
• node.session.auth.username = username
• node.session.auth.password = password
4. Provide a CHAP user name and password for the initiator to use when authenticating the target.
You must remove the comment indicators and supply values for the options username_in and
password_in in the following configuration entries:
• node.session.auth.username_in = username_in
• node.session.auth.password_in = password_in
5. For a successful session discovery, enable discovery CHAP authentication by supplying values
for the discovery.sendtargets.auth.* options.
The user name and password must match for both session and discovery on the host. Make sure
that you use the same user names and passwords that you used when you set up CHAP on the
storage system with the iscsi security command.
• discovery.sendtargets.auth.authmethod = CHAP
• discovery.sendtargets.auth.username = username
• discovery.sendtargets.auth.password = password
• discovery.sendtargets.auth.username_in = username_in
• discovery.sendtargets.auth.password_in = password_in
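Put together, the CHAP-related portion of /etc/iscsi/iscsid.conf might look like the following sketch. All user names and passwords here are placeholders and must match the values configured on the storage system with the iscsi security command:

```
# Session (login) authentication
node.session.auth.authmethod = CHAP
node.session.auth.username = username
node.session.auth.password = password
node.session.auth.username_in = username_in
node.session.auth.password_in = password_in

# Discovery authentication
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = username
discovery.sendtargets.auth.password = password
discovery.sendtargets.auth.username_in = username_in
discovery.sendtargets.auth.password_in = password_in
```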
Steps
2. Add CHAP user names and passwords to the storage system's DiscoveryAddress section. Use
white space or a tab to indent the CHAP settings.
• For unidirectional authentication, you should define only the OutgoingUsername and
OutgoingPassword.
Use the OutgoingUsername and OutgoingPassword for the storage system’s inbound user
name and password (inname and inpassword).
• For bidirectional authentication, you should define both sets of user names/passwords:
outgoing and incoming.
Use IncomingUsername and IncomingPassword of the host as the storage system’s
outbound user name and password (outname and outpassword).
Note: Ensure that you use the same user names and passwords when you set up CHAP on the
storage system with the iscsi security command.
If you want to configure global CHAP (that is, the same user name and password for all the
targets), ensure that the CHAP settings appear before the DiscoveryAddress entries.
Example
DiscoveryAddress=192.168.10.20
OutgoingUsername=username_out
OutgoingPassword=password_out
IncomingUsername=username_in
IncomingPassword=password_in
3. Configure the storage system as a target by adding the following line for any one iSCSI-enabled
interface on each storage system that you used for iSCSI LUNs:
DiscoveryAddress=storage_system_IPaddress
DiscoveryAddress=192.168.10.100
DiscoveryAddress=192.168.10.20
Step
Citrix discourages the use of the iscsiadm tool. The native XAPI stack accomplishes the tasks of
starting and stopping the iscsi service, automatic login on boot, and other iSCSI operations.
• If you are using Red Hat Enterprise Linux 5, 6, and 7 series, you can use the iscsiadm utility.
• If you are using Red Hat Enterprise Linux 4 series you should modify the /etc/iscsi.conf
file.
• If you are using SUSE Linux Enterprise Server 10, 11, and 12 series, you can use either the
iscsiadm utility or YaST2.
Note: If you are using RHEV, Oracle VM, or Citrix XenServer, you can use the management
GUI to set up target discovery.
The following sections provide instructions for setting up targets on Red Hat Enterprise Linux 5, 6,
and 7 series; Red Hat Enterprise Linux 4 series; and SUSE Linux Enterprise Server 10, 11, and 12
series. If you are using SUSE Linux Enterprise Server 9 series, see Recommended Host Settings for
Linux Unified Host Utilities 7.0 at mysupport.netapp.com.
Discovering the iSCSI target by using the iscsiadm utility on Red Hat 5, 6, 7
and SUSE 10, 11, and 12
You can use the iscsiadm utility to manage (update, delete, insert, and query) the persistent
database on Red Hat Enterprise Linux 5, 6, or 7 series and SUSE Linux Enterprise Server 10, 11, or 12
series. The utility enables you to perform a set of operations on iSCSI nodes, sessions, connections,
and discovery records.
Steps
The host discovers the target specified by the targetIP variable. The iscsiadm utility displays
each target it discovers on a separate line. It stores the values associated with the target in an
internal persistent database.
The initiator logs in to the discovered nodes that are maintained in the iSCSI database.
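On hosts with the open-iscsi tools, the discovery and login steps described above are typically run as follows; the portal address is a placeholder, and port 3260 is the iSCSI default:

```
# iscsiadm -m discovery -t sendtargets -p 192.168.10.100:3260
# iscsiadm -m node --login
```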
Example
The following sample output shows that 1 is the record ID:
Steps
2. Configure the storage system as a target by adding the following line for any one iSCSI-enabled
interface on each storage system that you used for iSCSI LUNs:
DiscoveryAddress=storage_system_IPaddress
Example
The following lines set up the storage systems with the IP addresses 192.168.10.100 and
192.168.10.20 as targets:
DiscoveryAddress=192.168.10.100
DiscoveryAddress=192.168.10.20
Discovering targets by using YaST2 on SUSE 10, 11, and 12 series on iSCSI
If you are running SUSE Linux Enterprise Server 10, 11, or 12 series you can use YaST2 to discover
and configure iSCSI connections. By using YaST2, you can enable the iSCSI initiator at boot time,
add new targets to the system, and discover iSCSI targets in the network. You can also view the
iSCSI targets that are currently connected.
Steps
2. Click Network Services > iSCSI Initiator > Discovered Targets > Discovery in the YaST2
window.
8. Enter the authentication credentials required for using the selected iSCSI target.
11. Change the startup option to Manual or Automatic, depending on your requirement, by using the
Toggle Start-Up button for all the discovered targets.
Related information
Novell web site
Step
1. From the Linux host command prompt, configure the iSCSI service to start automatically:
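As a sketch, the service is enabled at boot as follows; the exact service name can vary by distribution and release:

```
# chkconfig iscsi on         (Red Hat Enterprise Linux 5 and 6 series)
# systemctl enable iscsid    (Red Hat Enterprise Linux 7 series)
```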
Setting the login mode affects only nodes that are discovered after the value is set.
Step
1. Set the login mode for a specific portal on a target, for all the portals on a target, or for all targets
and their ports:
For more information about the iscsiadm options, see the man page.
All the ports on a target: enter the command with the applicable format for your system,
including the targetname and whether the login will be manual or automatic.
For more information about the iscsiadm options, see the man page.
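For illustration, the login mode is stored in the node.startup setting and can be updated with iscsiadm. The target name and portal below are placeholders:

```
# iscsiadm -m node -T iqn.1992-08.com.netapp:sn.example -p 192.168.10.100:3260 \
    --op update -n node.startup -v manual
```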
Steps
1. Ensure that the protocol you are using (FC or iSCSI) is licensed and the service is running.
2. (iSCSI) If you want to use CHAP authentication, use the iscsi security command or the
FilerView interface to configure the CHAP user name and password on the storage system.
Ensure that you use the same user names and passwords that you supplied when you set up
CHAP on the host.
6. If you are using clustered Linux hosts, ensure that the igroup contains either the WWPNs or the
initiator names of all the hosts in the cluster that need access to the mapped LUN.
DM-Multipath configuration
You can configure DM-Multipath for use in multipathing in environments that use native Linux
solutions. With DM-Multipath, you can configure multiple I/O paths between a host and storage
controllers into a single device. If one path fails, DM-Multipath reroutes I/Os to the remaining paths.
Note: If you are running Veritas Storage Foundation, you need to use VxDMP as your
multipathing solution.
When you have multiple paths to a LUN, Linux creates a SCSI device for each path. This means that
a single LUN might appear as /dev/sdd and /dev/sdf if there are two paths to it. To make it easy
to keep track of the LUNs, DM-Multipath creates a single device in /dev/mapper/ for each LUN
that includes all the paths. For example, /dev/mapper/360a9800043346852563444717a513571
is the multipath device that is created on top of /dev/sdd and /dev/sdf.
When you are using DM-Multipath, you should create a file system for each LUN and then mount
the LUN using the device in /dev/mapper/.
Note: To create a file system on a LUN, use /dev/mapper/device on a Linux host console.
device is the multipath device name of the LUN in the /dev/mpath/ directory.
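For example, using the multipath device name from the paragraph above, creating and mounting a file system might look like the following; ext4 and the mount point are arbitrary choices for illustration:

```
# mkfs.ext4 /dev/mapper/360a9800043346852563444717a513571
# mount /dev/mapper/360a9800043346852563444717a513571 /mnt/lun0
```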
You also use the DM-Multipath's configuration file to specify whether ALUA is being used and if the
hardware handler should be enabled for ALUA.
When DM-Multipath is running, it automatically checks the paths. As a result, if you look at the
output of a command such as lun stats -o, you see a small amount of FC partner path traffic
listed under the operations per second. On average, this is usually about 4 KB of operations per path
per LUN every 20 seconds, which is the default time period. This is expected behavior for DM-
Multipath.
Steps
1. Use the rpm -q command to display information about the name and version of the DM-
Multipath package that you have installed.
2. If you do not have the required packages, get a copy of your operating system RPM and install
the multipathing package.
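As a sketch, the package query looks like the following; the package is named device-mapper-multipath on Red Hat Enterprise Linux and multipath-tools on SUSE Linux Enterprise Server:

```
# rpm -q device-mapper-multipath    (Red Hat Enterprise Linux)
# rpm -q multipath-tools            (SUSE Linux Enterprise Server)
```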
Related information
Red Hat Web site - http://www.redhat.com/software/rhn/
Novell Web site - http://www.novell.com
Steps
1. If the /etc/multipath.conf file exists, edit it to include the sections needed for your system.
2. If you do not have the /etc/multipath.conf file, copy the sample configuration file for your
operating system.
The following sections provide sample configuration files for several versions of Red Hat
Enterprise Linux and SUSE Linux Enterprise Server.
a. In the blacklist section of the configuration file, enter the WWID of all non-NetApp SCSI
devices installed on your host.
You can get the WWID by running the scsi_id command on a device.
Example
For example, assume that /dev/sda is a local SCSI drive. To obtain the WWID on systems
running Red Hat Enterprise Linux 7 or 6 series or SUSE Linux Enterprise Server 12 and 11,
enter /lib/udev/scsi_id -gud /dev/sda.
To obtain the WWID on systems running other Linux operating systems, enter scsi_id -gus /block/sda.
blacklist
{
wwid IBM-ESXSMAW3073NC_FDAR9P66067W
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
Example
On Red Hat Enterprise Linux 4 hosts, the blacklist section might appear as the following:
devnode_blacklist
{
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
wwid IBM-ESXSMAW3073NC_FDAR9P66067W
}
4. Make sure that you use the correct settings based on whether you are using ALUA.
If you are using ALUA, you must specify the ALUA callout program. If you are not using ALUA,
you must specify the Data ONTAP callout program. The following table provides information
about the values that you must supply.
Note: If you are using clustered Data ONTAP, you must have ALUA enabled.
Note: ALUA is supported in Red Hat Enterprise Linux 5 Update 1 or later, SUSE Linux
Enterprise Server 10 SP2 or later, and SUSE Linux Enterprise Server 11 or later.
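As an illustrative sketch only (the original table of required values is authoritative, and the exact directive names vary across multipath-tools versions), the ALUA-related device entry in /etc/multipath.conf typically names the ALUA hardware handler and priority routine:

```
devices {
    device {
        vendor               "NETAPP"
        product              "LUN.*"
        prio                 "alua"
        hardware_handler     "1 alua"
        path_grouping_policy "group_by_prio"
        failback             "immediate"
    }
}
```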
Starting DM-Multipath
You can start DM-Multipath manually to configure LUNs to work with it.
Steps
3. Perform the following steps to enable multipath using Xen center or Xen CLI:
b. Enable multipath:
xe host-param-set other-config:multipathing=true uuid=host_uuid
c. Set the multipath handler:
xe host-param-set other-config:multipathhandle=dmp uuid=host_uuid
Step
1. Add the multipath service to the boot sequence on the Linux host console:
If you are using... Enter the following commands...
Note: The Oracle VM, Red Hat Enterprise Virtualization, and Citrix XenServer hypervisor
management stacks start the multipath service automatically.
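As a sketch (service names vary slightly across versions):

```
# chkconfig multipathd on        (Red Hat Enterprise Linux 5 and 6 series)
# systemctl enable multipathd    (Red Hat Enterprise Linux 7 series)
```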
Steps
The -d (dry run) parameter prevents the command from updating the multipath maps.
• For Red Hat Enterprise Linux 6 and 5 series, use the /etc/init.d/multipathd status
command.
• For Red Hat Enterprise Linux 7 series, use the systemctl status multipathd command.
3. To determine whether multipathd is working correctly on your system, enter the multipathd
show config command. This command displays the values currently being used for the
multipath.conf file. You can then confirm that multipathd is using the values you specified.
4. View a list of the multipath devices, including which /dev/sdx devices are used:
multipath -ll
The multipath -ll command output varies slightly across Linux versions, as shown in the
following examples.
Example
Red Hat Enterprise Linux 5.11:
# multipath -ll
3600a098041757765662444434256557a dm-11 NETAPP,LUN C-Mode
[size=10G][features=3 queue_if_no_path pg_init_retries 50]
[hwhandler=1 alua][rw]
\_ round-robin 0 [prio=50][active]
\_ 7:0:6:3 sdco 69:192 [active][ready]
\_ 7:0:1:3 sdbu 68:128 [active][ready]
\_ 8:0:14:3 sdbi 67:192 [active][ready]
\_ 8:0:13:3 sdbe 67:128 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 8:0:7:3 sdag 66:0 [active][ready]
\_ 8:0:15:3 sdbm 68:0 [active][ready]
\_ 7:0:3:3 sdcc 69:0 [active][ready]
\_ 7:0:15:3 sddy 128:0 [active][ready]
Example
Red Hat Enterprise Linux 6.6:
# multipath -ll
3600a0980383034586b3f4644694d6a51 dm-14 NETAPP,LUN C-Mode
size=5.0G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 0:0:10:11 sdxq 128:512 active ready running
| |- 1:0:5:11 sdxn 71:720 active ready running
| |- 0:0:2:11 sdcq 69:224 active ready running
| `- 1:0:8:11 sdafa 68:768 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 0:0:8:11 sdrg 133:416 active ready running
|- 1:0:10:11 sdaie 129:800 active ready running
|- 0:0:11:11 sdaau 133:544 active ready running
`- 1:0:11:11 sdajt 131:944 active ready running
Example
FC on Red Hat Enterprise Linux 7.0:
# multipath -ll
3600a09804d542d4d71244635712f4a4a dm-20 NETAPP ,LUN C-Mode
size=5.0G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 4:0:3:21 sdeo 129:0 active ready running
| |- 4:0:7:21 sdnj 71:336 active ready running
| |- 5:0:5:21 sdui 66:672 active ready running
| `- 5:0:7:21 sdxm 71:704 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 4:0:5:21 sdhs 134:32 active ready running
|- 4:0:6:21 sdks 67:256 active ready running
|- 5:0:4:21 sdst 8:528 active ready running
`- 5:0:6:21 sdvx 69:560 active ready running
Example
SLES11 SP3:
# multipath -ll
3600a09804d542d4d71244635712f4a4a dm-20 NETAPP ,LUN C-Mode
size=5.0G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 4:0:3:21 sdeo 129:0 active ready running
| |- 4:0:7:21 sdnj 71:336 active ready running
| |- 5:0:5:21 sdui 66:672 active ready running
| `- 5:0:7:21 sdxm 71:704 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 4:0:5:21 sdhs 134:32 active ready running
|- 4:0:6:21 sdks 67:256 active ready running
|- 5:0:4:21 sdst 8:528 active ready running
`- 5:0:6:21 sdvx 69:560 active ready running
Example
FC on Red Hat Enterprise Virtualization Hypervisor 6.2 with ALUA enabled:
# multipath -ll
360a98000316b5a776b3f2d7035505a6f dm-0 NETAPP,LUN
size=60G features='3 queue_if_no_path pg_init_retries 50'
hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 4:0:0:0 sda 8:0 active ready running
| `- 5:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 4:0:1:0 sdb 8:16 active ready running
`- 5:0:1:0 sdd 8:48 active ready running
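When scripting health checks against this output, a simple pipeline can count the healthy paths. The sketch below embeds a two-path sample so it can be tried anywhere; in practice you would pipe the live command (multipath -ll | grep -c 'active ready'), and note that the field layout varies across versions, as the examples above show:

```shell
# Count paths reported as "active ready" in multipath -ll output.
# A saved two-path sample is used here in place of the live command.
sample_output='3600a0980383034586b3f4644694d6a51 dm-14 NETAPP,LUN C-Mode
 |- 0:0:10:11 sdxq  128:512 active ready running
 |- 1:0:8:11  sdafa 68:768  active ready running'

active_paths=$(printf '%s\n' "$sample_output" | grep -c 'active ready')
echo "$active_paths"
```

If the count is lower than the expected number of paths to the LUN, some paths are faulty or missing and should be investigated.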
5. You can verify that the NetApp recommended DM-Multipath settings are currently in use by
entering the following commands:
• For RHEL6 and 7 hosts, use the multipathd show config command.
Example
The following example shows the output of the ls -l /dev/mapper command:
total 0
brw------- 1 root root 253, 1 Sep 20 17:09
360a98000486e5363693444646a2f656c
brw------- 1 root root 253, 0 Sep 20 17:09
360a98000486e5372635a44646a505643
lrwxrwxrwx 1 root root 16 Sep 12 10:16 control -> ../device-mapper
Stopping DM-Multipath
When you want to stop DM-Multipath on the Linux host, you should stop the affected services.
Steps
In addition, if you are running either Red Hat Enterprise Linux 6 series, Red Hat Enterprise Linux 7
series, or SUSE Linux Enterprise Server 11 series, you must modify the /etc/udev/rules.d/40-
rport.rules file.
You must have the Array Support Library (ASL) and the Array Policy Module (APM) that Symantec
provides for NetApp storage systems installed. The amount of work you must do to set up the ASL
and APM depends on your version of Veritas Storage Foundation.
Related concepts
(Veritas) VxDMP restore daemon and LUN retries tunable configuration on page 28
Setting the Veritas restore daemon and LUN retry editable values
To access LUNs by using VxDMP, you should configure the restore daemon and then verify the
configuration. If you have Veritas Storage Foundation 5.1, you should also set the value of the
VxDMP tunable dmp_lun_retry_timeout.
Steps
1. Set the value for the VxDMP restore daemon to an interval of 60:
vxdmptune dmp_restore_interval=60
On reboot, this value takes effect and remains persistent. Alternatively, you can use the following
commands to configure the daemon value:
vxdmpadm stop restore
vxdmpadm start restore interval=60
This value takes effect immediately; however, it is not persistent on reboot. You must reset the
value each time you reboot the system.
For details about configuring Veritas Volume Manager, see the Veritas Volume Manager
Administrator's Guide for Linux that is shipped along with the software.
3. Set the value of the dmp_lun_retry_timeout to the appropriate interval:
• If you are using Veritas Storage Foundation 5.1 up to 6.0, set the value to 300:
vxdmpadm settune dmp_lun_retry_timeout=300
The tunable value changes immediately.
• If you are using Veritas Storage Foundation 6 series or InfoScale 7 series, set the value to 60:
vxdmpadm settune dmp_lun_retry_timeout=60
The tunable value changes immediately.
4. For Veritas Storage Foundation 5.1 SP1 and later, and InfoScale 7.0 series, set the value of the
dmp_path_age to an interval of 120:
vxdmpadm settune dmp_path_age=120
The tunable value changes immediately.
Steps
3. Check the value of the IOFENCE timeout parameter and ensure that it is set to 30000.
The IOFENCE timeout parameter specifies the amount of time in milliseconds that it takes
clients to respond to an IOFENCE message before the system halts. When clients receive an
IOFENCE message, they must unregister from the GAB driver within the number of milliseconds
specified by the IOFENCE timeout parameter, or the system halts. The default value for this
parameter is 15000 milliseconds or 15 seconds.
Example
To check the value of this parameter, enter the command gabconfig -l on the host. The
following is an example of the type of output this command produces:
# gabconfig -l
GAB Driver Configuration
Driver state : Configured
Partition arbitration: Disabled
Control port seed : Enabled
Halt on process death: Disabled
Missed heartbeat halt: Disabled
Halt on rejoin : Disabled
Keep on killing : Disabled
Quorum flag : Disabled
Restart : Enabled
Node count : 2
Send queue limit : 128
Recv queue limit : 128
IOFENCE timeout (ms) : 15000
Stable timeout (ms) : 5000
4. If the value of the IOFENCE timeout parameter is not 30000, correct it:
gabconfig -f 30000
This value is not persistent across reboots, so you must check it each time you boot the host and
reset it if necessary.
Steps
2. Install SUSE Linux Enterprise Server 11 with kernel version 2.6.27.45-0.1.1 or later from Novell.
Related information
Novell Web site - http://www.novell.com
Symantec TechNote for setting up SUSE Linux Enterprise Server 11 - http://www.symantec.com/business/support/index?page=content&id=TECH124725
To determine which version of the ASL and APM you need for this version of the Host Utilities,
check the NetApp Interoperability Matrix. After you know which version you need, go to the
Symantec Web site and download the ASL and APM.
Note: ALUA is supported on FC in Veritas Storage Foundation 5.1 and later, and in the
InfoScale 7.0 series.
• Active/Active (A/A)
There are multiple active paths to a storage system, and simultaneous I/O is supported on each
path. If a path fails, I/O is distributed across the remaining paths.
• ALUA
A LUN in an ALUA-enabled array can be accessed through both controllers, by using optimized
and non-optimized paths. The array notifies the host of path options, their current state, and state
changes. Using this information, the host can determine which paths are optimized. Failover to
the non-optimized path occurs only if all the optimized paths fail.
For more information about system management, see the Veritas Volume Manager Administrator’s
Guide.
Installing the ASL and APM software to support Veritas Storage Foundation
If you are using Veritas Storage Foundation for multipathing, you should install and configure the
Symantec Array Support Library (ASL) and Array Policy Module (APM) for storage systems.
• You must have verified that your configuration meets the system requirements.
For more information, see the NetApp Interoperability Matrix.
Note: In Veritas Storage Foundation 5.1 series and later, and in InfoScale 7.0 series, the NetApp
ASL and APM are included in the Veritas Storage Foundation product.
Steps
2. If you already have the NetApp storage configured as JBOD in your VxVM configuration,
remove the JBOD support for NetApp:
# vxddladm rmjbod vid=NETAPP
3. Install the ASL and APM according to the instructions provided by Symantec:
4. If your host is connected to a NetApp storage system, verify the installation by following these
steps:
Example
The vxdmpadm listenclosure all command shows the Enclosure Type as FAS3170 in this
example:
# vxdmpadm listenclosure all
ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS ARRAY_TYPE LUN_COUNT
================================================================
disk Disk DISKS CONNECTED Disk 1
fas31700 FAS3170 80010081 CONNECTED A/A-NETAPP 15
fas31701 FAS3170 80010082 CONNECTED A/A-NETAPP 15
Veritas Dynamic Multipath configuration | 33
Steps
3. Use the rpm command to remove the ASL package. This command has the format: rpm -ev
asl_rpm_name
Example
The following command line removes a previous version of the ASL:
4. Use the rpm command to remove the APM package. This command has the format: rpm -ev
apm_rpm_name.
Example
The following command line removes a previous version of the APM:
The sections that follow provide information about the tools you need to work with LUNs, as well
as the actions you should take when working with LUNs in your environment. For example, if you
do not have multipathing enabled, it is a good practice to provide persistent identification for the LUNs.
Or, if you are using the iSCSI software initiator, you can use either the sanlun or iscsiadm
command to view LUNs.
As you work with LUNs, remember that the host cannot distinguish multiple LUNs from multiple
paths to the same LUN without multipathing software. As a result:
• If you have more than one path from the host to a LUN, you should use DM-Multipath.
• If you are not using multipathing software, you should limit each LUN to a single path.
For information about the supported configurations for DM-Multipath, see the NetApp
Interoperability Matrix.
Step
Discovering new LUNs on Red Hat and SUSE with iSCSI and
multipathing
When you are running Red Hat Enterprise Linux 5, 6, and 7 series or SUSE Linux Enterprise Server
10, 11, or 12 series with DM-Multipath and the software iSCSI initiator, you can discover new LUNs
by rescanning the iSCSI service on the host. Rescanning the service displays all the newly created
LUNs that have been mapped to the host.
Steps
1. To discover a new LUN on a system running DM-Multipath, enter one of the following
commands:
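As an assumption (the exact command depends on your distribution and initiator version), with the open-iscsi software initiator a rescan typically looks like the following sketch; the session ID is illustrative.

```shell
# Sketch (assumption: open-iscsi's iscsiadm is installed; run as root).
# Rescan every logged-in session for newly mapped LUNs:
#   iscsiadm -m session --rescan
# Or limit the rescan to one session by its ID; the command line is
# built as a string here so its form is explicit:
sid=2   # example session ID, taken from `iscsiadm -m session` output
rescan_cmd="iscsiadm -m session -r $sid --rescan"
echo "$rescan_cmd"
```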
2. To verify that the new LUNs have been discovered, use the sanlun command or the iscsiadm
command.
Steps
2. Use the sanlun or iscsi-ls command to verify that the new LUNs have been discovered.
Step
1. To view the list of LUNs mapped to your host, run the appropriate command for your system
environment.
The following table summarizes the commands and the environments that support them. For more
information on the commands and their options, see the man pages.
Note: You can use the sanlun command to display LUN information and the iscsiadm
command to view iSCSI information.
The sections that follow contain examples of the type of output these commands produce with
different protocols and operating systems.
The following sections provide examples of the type of output you see if you run one of these
commands in a specific environment; for example with iSCSI and DM-Multipath on Red Hat
Enterprise Linux 5 series.
• (SUSE Linux 10, 11) Software iSCSI with DM-Multipath running iscsiadm
• (SUSE Linux 10, 11) Software iSCSI without multipathing running iscsiadm
This example shows sample output from the sanlun lun show all command when it is issued in
a Host Utilities environment that is running the FC protocol with DM-Multipath on a storage system
running Data ONTAP operating in 7-Mode.
This example shows sample output from the sanlun lun show all command when it is issued in
a Host Utilities environment that is running the iSCSI protocol with DM-Multipath on a storage
system running Data ONTAP operating in 7-Mode.
Methods for working with LUNs in native Linux environments | 39
Example of using iscsiadm to view LUNs running iSCSI with DM-Multipath on RHEL
series system
This example shows sample output from the iscsiadm command when it is issued in a Host Utilities
environment that is running the iSCSI protocol and DM-Multipath on a Red Hat Enterprise Linux 5
or 6 series system.
Note: This example lists the available storage systems and LUNs for a session with a specific
session ID. To view the details of all the sessions, use the iscsiadm -m session -P 3
command.
# iscsiadm -m session -P 3 -r 2
Target: iqn.1992-08.com.netapp:sn.101183016
Current Portal: 10.72.199.71:3260,1001
Persistent Portal: 10.72.199.71:3260,1001
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:5e3e11e0104d
Iface IPaddress: 10.72.199.119
Iface HWaddress: default
Iface Netdev: default
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 65536
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 4 State: running
scsi4 Channel 00 Id 0 Lun: 0
Attached scsi disk sdc State: running
scsi4 Channel 00 Id 0 Lun: 1
Attached scsi disk sde State: running
scsi4 Channel 00 Id 0 Lun: 2
Attached scsi disk sdg State: running
scsi4 Channel 00 Id 0 Lun: 3
Attached scsi disk sdi State: running
scsi4 Channel 00 Id 0 Lun: 4
Attached scsi disk sdk State: running
scsi4 Channel 00 Id 0 Lun: 5
Attached scsi disk sdm State: running
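Output in this format lends itself to post-processing with standard text tools. The following hedged sketch pulls the attached disk names out of a captured session listing; two sample lines stand in for the full output.

```shell
# Extract the sd* device names from `iscsiadm -m session -P 3` output.
# A captured sample is parsed here; on a host, pipe iscsiadm directly:
#   iscsiadm -m session -P 3 | awk '/Attached scsi disk/ {print $4}'
sample='Attached scsi disk sdc State: running
Attached scsi disk sde State: running'
printf '%s\n' "$sample" | awk '/Attached scsi disk/ {print $4}'
```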
# iscsiadm -m session -P 3 -r 2
Target: iqn.1992-08.com.netapp:sn.101183016
Current Portal: 10.72.199.71:3260,1001
Persistent Portal: 10.72.199.71:3260,1001
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:5e3e11e0104d
Iface IPaddress: 10.72.199.119
Iface HWaddress: default
Iface Netdev: default
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 65536
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 4 State: running
scsi4 Channel 00 Id 0 Lun: 0
Attached scsi disk sdc State: running
scsi4 Channel 00 Id 0 Lun: 1
Attached scsi disk sde State: running
scsi4 Channel 00 Id 0 Lun: 2
Attached scsi disk sdg State: running
scsi4 Channel 00 Id 0 Lun: 3
Attached scsi disk sdi State: running
scsi4 Channel 00 Id 0 Lun: 4
Attached scsi disk sdk State: running
scsi4 Channel 00 Id 0 Lun: 5
Attached scsi disk sdm State: running
scsi4 Channel 00 Id 0 Lun: 6
Attached scsi disk sdp State: running
scsi4 Channel 00 Id 0 Lun: 7
Attached scsi disk sdq State: running
# /sbin/iscsi-ls -l
*******************************************************************************
SFNet iSCSI Driver Version ... 3.6.2 (27-Sep-2004 )
*******************************************************************************
TARGET NAME : iqn.1992-08.com.netapp:sn.33604646
TARGET ALIAS :
HOST NO : 0
BUS NO : 0
TARGET ID : 0
TARGET ADDRESS : 10.60.128.100:3260
SESSION STATUS : ESTABLISHED AT Mon Jan 3 10:05:14 2005
NO. OF PORTALS : 1
PORTAL ADDRESS 1 : 10.60.128.100:3260,1
SESSION ID : ISID 00023d000001 TSID 103
DEVICE DETAILS :
--------------
LUN ID : 0
Vendor: NETAPP Model: LUN Rev: 0.2
Type: Direct-Access ANSI SCSI revision: 04
page83 type3: 60a980004f6443745359763759367733
page83 type1:4e45544150502020204c554e204f644374535976375936773300000000000000
page80: 4f6443745359763759367733
Device: /dev/sdb
LUN ID : 1
Vendor: NETAPP Model: LUN Rev: 0.2
Type: Direct-Access ANSI SCSI revision: 04
Step
device is the name of the device in the /dev/mapper/ directory. You can create a file system
directly on a multipath device in /dev/mapper/. You do not have to create a partition or label on
the multipath device.
mount_point is the mount point you created for the file system.
_netdev is used for any network-dependent devices such as iSCSI. It is only used in iSCSI
environments and lets you add iSCSI mount point devices to /etc/fstab.
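As an illustration, an /etc/fstab entry for a file system on a multipath iSCSI device might look like the following; the device name and mount point are placeholders, not values from your system.

```
/dev/mapper/mpathb   /mnt/iscsi_lun   ext4   _netdev,defaults   0 0
```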
The rescan script is available with SUSE Linux Enterprise Server 11, 12, or later. For earlier
versions, use the vendor-specific rescan scripts, which are available on the vendor web sites. See
the documentation for your HBA.
Step
• Make sure you have set the HBA driver parameters correctly for your system setup.
Having the correct values for these parameters ensures that the multipathing and storage system
failover work correctly.
• If you configured VxDMP, multipath devices are created for all the LUNs that are discovered by
the HBA driver.
Each time an HBA driver is started, it scans the storage system and discovers all mapped LUNs.
• Make sure you set the VxDMP restore daemon to the correct values.
These values ensure that Veritas Storage Foundation works efficiently and correctly.
• When you use Veritas Storage Foundation, the VxVM manages the LUNs.
This means that, in addition to using tools such as sanlun and iscsiadm to display information
about the LUNs, you can also use the VxVM interface to display information about the VxVM
devices.
The rescan script is available with the sg3_utils package. In addition, the rescan script is available
with Red Hat Enterprise Linux 5 Update 4 or later, Red Hat Enterprise Linux 6 and 7 series, SUSE
Linux Enterprise Server 10 SP2 or later, and SUSE Linux Enterprise Server 11 and 12 series.
For earlier versions, use the vendor-specific rescan scripts, which are available on their web sites.
See HBA vendor-specific documentation.
Steps
2. Initiate a rescan of the operating system device tree from the Veritas Volume Manager:
vxdisk scandisks
Steps
2. Verify that the new LUNs have been discovered by using the iscsiadm command for an iSCSI
setup or the sanlun command for other protocol hosts.
Steps
1. To view a list of LUNs mapped to your host, run the appropriate command for your system
environment:
(Veritas) LUN access when using VxDMP | 47
2. To view the LUNs on the VxVM disks, enter the vxdisk list command.
Examples of sanlun, iscsiadm, and iscsi output when used to view LUNs on
Veritas
You can use either the sanlun command, the iscsiadm command, or the iscsi command to view
the LUNs configured on your Linux host. The examples in this section show the type of output you
see if you run one of these commands on your Linux operating system in an environment running
VxDMP.
The tool you use depends on your version of Linux and what you would like to view. The sanlun
command displays the host device names and the LUNs to which they are mapped. The iscsiadm
command lists the available storage systems and LUNs.
The following sections provide examples of the type of output you see if you run one of these
commands in a specific environment; for example, with FC and Veritas Storage Foundation on a
Red Hat Enterprise Linux 5 series system:
• FC running sanlun
• FC running vxdisk
Note: The output in the following examples has been modified to better fit the screen.
Example of using sanlun to view LUNs running Data ONTAP operating in 7-Mode with
FC
This example shows sample output from the sanlun lun show all command when it is issued in
a Host Utilities environment that is running Data ONTAP operating in 7-Mode with FC and Veritas
Storage Foundation.
Note: With the Linux Host Utilities 6.0 release, the output format of the sanlun utility changed.
The format is not backward-compatible when using LUNs mapped for Data ONTAP operating in
7-Mode.
If you execute the sanlun lun show all command in a Data ONTAP operating in 7-Mode FC
environment, you get output similar to the following:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
fas20200_0 auto:cdsdisk data_dg01 data_dg online thinrclm shared
fas20200_1 auto:cdsdisk data_dg02 data_dg online thinrclm shared
fas20200_2 auto:cdsdisk data_dg113 data_dg online thinrclm shared
fas20200_3 auto:cdsdisk data_dg180 data_dg online thinrclm shared
Example of using sanlun to view LUNs running Data ONTAP operating in 7-Mode with
iSCSI and Veritas
This example shows sample output from the sanlun lun show all command when it is issued in
a Host Utilities environment that is running Data ONTAP operating in 7-Mode with iSCSI and
Veritas Storage Foundation.
Note: With the Linux Host Utilities 6.0 release, the output format of the sanlun utility changed.
The format is not backward-compatible when using LUNs mapped for Data ONTAP operating in
7-Mode.
If you execute the sanlun lun show all command in a Data ONTAP operating in 7-Mode iSCSI
environment, you get output similar to the following:
Example of using iscsiadm to view LUNs running iSCSI and Veritas on RHEL 5 series
system
This example shows sample output from the iscsiadm command when it is issued in a Host Utilities
environment that is running the iSCSI protocol and Veritas Storage Foundation on a Red Hat
Enterprise Linux 5 series system.
Note: This example lists the available storage systems and LUNs for a session with a specific
session ID. To view the details of all the sessions, use the iscsiadm -m session -P 3
command.
# iscsiadm -m session -P 3 -r 2
Target: iqn.1992-08.com.netapp:sn.101183016
Current Portal: 10.72.199.71:3260,1001
Persistent Portal: 10.72.199.71:3260,1001
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:5e3e11e0104d
Iface IPaddress: 10.72.199.119
Iface HWaddress: default
Iface Netdev: default
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: Unknown
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 65536
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 4 State: running
scsi4 Channel 00 Id 0 Lun: 0
Attached scsi disk sdc State: running
scsi4 Channel 00 Id 0 Lun: 1
Attached scsi disk sde State: running
scsi4 Channel 00 Id 0 Lun: 2
Attached scsi disk sdg State: running
scsi4 Channel 00 Id 0 Lun: 3
Attached scsi disk sdi State: running
scsi4 Channel 00 Id 0 Lun: 4
Attached scsi disk sdk State: running
scsi4 Channel 00 Id 0 Lun: 5
Attached scsi disk sdm State: running
Example of using iscsiadm command running iSCSI and Veritas on SLES 10 and 11
series system
This example shows sample output from the iscsiadm command when it is issued in a Host Utilities
environment that is running the iSCSI protocol and Veritas Storage Foundation on a SUSE Linux
Enterprise Server 10 or 11 system.
Note: This example lists the available storage systems and LUNs for a specific session. To view
the details of all the sessions, use the iscsiadm -m session -P 3 command.
Step
• FC
• Software iSCSI
Step
Steps
Note: For Veritas Storage Foundation 5.0 MP1 and MP2, the ASL displays the enclosure-based
naming disk objects in uppercase.
For Veritas Storage Foundation 5.0 MP3, Veritas Storage Foundation 5.1 and later and
InfoScale 7.0 series, the default behavior of the ASL is to display the enclosure-based naming
disk objects in lowercase.
You can change the enclosure names to uppercase by using the vxddladm set
namingscheme=ebn lowercase=no command.
Example
The output of the vxdisk list command is similar to the following:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:none - - online invalid
fas31700_0 auto:cdsdisk data_dg01 data_dg online thinrclm shared
fas31700_1 auto:cdsdisk data_dg02 data_dg online thinrclm shared
fas31700_2 auto:cdsdisk data_dg08 data_dg online thinrclm shared
fas31700_3 auto:cdsdisk data_dg09 data_dg online thinrclm shared
2. On the host console, display path information for the device you want:
vxdmpadm getsubpaths dmpnodename=device
device is the name listed under the output of the vxdisk list command.
Example
The output of the vxdmpadm getsubpaths dmpnodename=device command is similar to the
following:
The output displays information about the paths to the storage system (whether the path is a
primary or secondary path). The output also lists the storage system that the device is mapped to.
Example
The output of the vxdmpadm getsubpaths ctlr= controller_name command is similar to
the following:
Steps
1. Create a LUN on the storage system and map it to the host. This LUN will be the SAN boot LUN.
You should ensure the following:
2. Enable the BIOS of the HBA port to which the SAN boot LUN is mapped.
For information about how to enable the HBA BIOS, see your HBA vendor-specific
documentation.
3. Configure the paths to the HBA boot BIOS as primary, secondary, tertiary, and so on, on the boot
device.
For more information, see your vendor-specific documentation.
8. Configure DM-Multipath.
Steps
1. Create a LUN on the storage system and map it to the host. This LUN will be the SAN boot LUN.
You should ensure the following:
2. Enable the BIOS of the HBA port to which the SAN boot LUN is mapped.
For information about how to enable the HBA BIOS, see your HBA documentation.
5. Configure DM-Multipath.
Steps
• Software iSCSI
For a software initiator to implement a SAN boot device, you can have the root device on an
iSCSI LUN, and you can use any of the following options to load the kernel:
◦ A host’s locally attached disk (for storing kernel and initrd images)
• Hardware iSCSI
If the SAN boot LUN uses an iSCSI HBA, then, because the protocol stack runs on the HBA, it is
ready to communicate with the storage system and discover a LUN when it starts up.
You can have both the boot device and root device on an iSCSI LUN.
Note: Not all operating systems supported by the Host Utilities work with iSCSI SAN boot LUNs.
For example, Oracle VM does not support creating a SAN boot LUN that uses software iSCSI.
Steps
1. Create a LUN on the storage system and map it to the host. This will be the SAN boot LUN.
You should ensure that the SAN boot LUN is mapped, and multiple paths to the SAN boot LUN
are available on the host. You should also ensure that the SAN boot LUN is visible to the host
during the boot process.
2. Set the Initiator IP Settings and Initiator iSCSI Name in Host Adapter Settings.
3. Set the Primary and Alternate Target IP and iSCSI Name and Adapter Boot Mode
to Manual in iSCSI Boot Settings.
For information, see your HBA vendor-specific documentation.
6. Install the operating system on the boot LUN and follow the installation prompts to complete the
installation.
Note: You should specify Boot Option as linux mpath during the operating system
installation. When you specify linux mpath, you can see the multipath devices
(/dev/mapper/mpathx) as installation devices.
Step
You can also use the sanlun lun show all command to display more information about your
LUN setup, such as whether you are using LUNs mapped with clustered Data ONTAP or Data
ONTAP operating in 7-Mode.
Note: Check the Interoperability Matrix to determine if clustered Data ONTAP is supported
with your Host Utilities environment.
Clustered Data ONTAP with FC: Example of using sanlun to display DM-Multipath
information
The following examples show the output from the sanlun lun show -p command and the
sanlun lun show all command in a Host Utilities environment that is running clustered Data
ONTAP with FC and DM-Multipath.
The first example uses the sanlun lun show -p command. The output from the command shows
that DM-Multipath (Multipath Provider: Native) is configured.
This example uses the sanlun lun show all command. The output shows that the LUNs are
mapped to clustered Data ONTAP operating an environment using FC.
Data ONTAP operating in 7-Mode with FC: Example of using sanlun to display DM-
Multipath information
The following examples show the output from the sanlun lun show -p command and the
sanlun lun show all in a Host Utilities environment that is running Data ONTAP operating in 7-
Mode with FC and DM-Multipath.
Note: With the Linux Host Utilities 6.0 release, the output format of the sanlun utility has changed.
The format no longer maintains backward compatibility when using LUNs mapped for Data
ONTAP operating in 7-Mode.
The first example uses the sanlun lun show -p command. The output from the command shows
that DM-Multipath (Multipath Provider: Native) is configured.
This example uses the sanlun lun show all command. The output shows that the LUNs are
mapped to Data ONTAP operating in 7-Mode in an environment using FC.
Clustered Data ONTAP with iSCSI: Example of using sanlun to display DM-Multipath
information
The following examples show the output from the sanlun lun show -p command and the
sanlun lun show all in a Host Utilities environment that is running clustered Data ONTAP with
iSCSI and DM-Multipath. The output is the same regardless of whether you are using a software
iSCSI initiator or hardware iSCSI initiator.
The first example uses the sanlun lun show -p command. The output from the command shows
that DM-Multipath (Multipath Provider: Native) is configured.
This example uses the sanlun lun show all command. The output shows that the LUNs are
mapped to clustered Data ONTAP in an environment using iSCSI.
Data ONTAP operating in 7-Mode iSCSI: Example of using sanlun to display DM-
Multipath information
The following examples show the output from the sanlun lun show -p command and the
sanlun lun show all in a Host Utilities environment that is running Data ONTAP operating in 7-
Mode with iSCSI and DM-Multipath. The output is the same regardless of whether you are using a
software iSCSI initiator or hardware iSCSI initiator.
The first example uses the sanlun lun show -p command. The output from the command shows
that DM-Multipath (Multipath Provider: Native) is configured.
This example uses the sanlun lun show all command. The output shows that the LUNs are
mapped to Data ONTAP operating in 7-Mode in an environment using iSCSI.
Steps
1. When you initiate the installation, specify the Boot Option as linux mpath and press Enter.
2. Continue with the installation until you reach the storage configuration page. Click Advanced
storage configuration.
5. On the storage controller, create an igroup with the initiator name that you provided in Step 4.
6. Create a LUN on the storage system on which you intend to create root partition, and map it to
the igroup.
10. Select Create custom layout as the partitioning layout and click Next.
You can now proceed with the installation process and enter choices until you reach the
Installation Summary page.
11. At the storage devices selection screen, select the iSCSI multipathed device from the list of
allowable drives where you want to install the root file system.
12. Create the root file system on the selected device and select the mount point as /.
15. Click Next and follow the installation prompts to complete the installation.
Steps
2. Continue with the installation until you reach the installation summary page.
7. On the storage controller, create an igroup with the initiator name that you provided in Step 6.
8. Create a LUN on the storage system on which you intend to create a root partition, and map it to
the igroup.
11. Select all the discovered iSCSI sessions and click Log In.
12. Navigate to Multipath Devices to select iSCSI multipath SAN boot LUN and click Done.
13. Select the I will configure partitioning option and click Done.
14. In the Manual Partitioning window, create the root file system on the selected device and select
the mount point as /.
(iSCSI) SAN boot configuration for iSCSI hardware, software initiators | 63
If you are using the software suspend functionality, you should ensure that the SWAP partition is
on a local disk.
16. Create the /boot partition on the locally attached disk or use a PXE server to load the kernel boot
image.
17. Click Done and follow the installation prompts to complete the installation.
Related information
Interoperability Matrix Tool: mysupport.netapp.com/matrix
Steps
1. Log in to the storage system console or the Web interface of the storage system.
2. When you initiate the installation, specify Boot Option as follows: linux withiscsi=1
netsetup=1
3. In the iSCSI Initiator Overview page, select the Service tab and enter the Target IP address and
the iSCSI initiator name.
Note: You should ensure that you associate this IQN with the appropriate privileges on the
storage controller.
4. On the storage controller, create an igroup with the initiator name that you provided in the
previous step.
5. Create a LUN on the storage system on which you can create the root partition, and map it to the
igroup.
6. Return to the host screen. Select the Connected Targets tab and click Add.
d. Click Next.
8. In the list of storage systems that are discovered, click Connect for each one. You might also
have to do this for the authentication credentials.
Note: During the installation, you should enable only one path to the root LUN.
Click Next.
9. Verify that the value for Connected is true for all the targets and click Next.
The Connected Targets pane lists all the targets.
10. Set the Start-up mode to onboot by using the Toggle Start-up button, and click Finish.
15. In the Expert Partitioner page, select the LUN where you want to install the root file system.
16. Create the root file system on the selected LUN and select the mount point as /.
19. Ensure that you have the _netdev,nofail keywords in the Arbitrary Option Value text box, and
click OK.
23. After you return to the Expert Partitioner page, review the configuration. Click Finish.
27. For the Optional Kernel Command Line Parameter, ensure that all references to installer
arguments are removed.
The parameter should look similar to the following:
resume=/dev/sda1 splash=silent showopts
30. In the Boot Loader Location pane, select the Boot from Master Boot Record option.
Click Finish. Doing this returns you to the Installation Settings page.
31. Review the configuration settings and click Accept. The Confirm Installation page is displayed.
32. Click Install and follow the prompts to complete the installation.
Steps
2. Use YaST2 to change the startup mode for all the iSCSI sessions:
• SUSE Linux Enterprise Server 11 or 12 series: Onboot
• SUSE Linux Enterprise Server 10 SP2: automatic
4. Change the value of the session re-establishment timeout for iSCSI sessions fetching the SAN
boot LUN:
iscsiadm -m node -T targetname -p ip:port -o update -n
node.session.timeo.replacement_timeout -v 5
5. Create a new initrd image with multipathing enabled on the root partition:
6. Change the startup mode of the iSCSI sessions fetching the SAN boot LUN:
iscsiadm -m node -T targetname -p ip:port -o update -n node.startup -v
onboot
iscsiadm -m node -T targetname -p ip:port -o update -n
node.conn[0].startup -v onboot
8. Verify that multipathing is enabled on the root device by running the mount command and
ensuring that the root partition is on a DM-Multipath device.
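A minimal sketch of that verification, assuming mount output in the usual Linux format; a sample line is parsed here, and on the host you would use the mount command directly.

```shell
# Confirm the root file system sits on a device-mapper multipath
# device, that is, a /dev/mapper/... path in the mount table.
sample='/dev/mapper/mpatha-part2 on / type ext4 (rw,relatime)'
root_dev=$(printf '%s\n' "$sample" | awk '$3 == "/" {print $1}')
case "$root_dev" in
    /dev/mapper/*) echo "root is on a DM-Multipath device: $root_dev" ;;
    *)             echo "root is NOT on a multipath device: $root_dev" ;;
esac
```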
Steps
4. Ensure that only one SAN boot LUN is available to the host.
5. Enable the boot BIOS of the HBA port to which the SAN boot LUN is mapped.
It is best to enable the spinup delay option for the HBA port.
For information about how to enable the boot BIOS, see the HBA vendor-specific documentation.
6. After performing the appropriate changes to the HBA BIOS and ensuring that the SAN boot LUN
is visible, install the operating system on the SAN boot LUN.
Before installing the operating system, see the section on rootability in the Veritas Volume
Manager Administrator’s Guide for Linux that is shipped along with the software for partitioning
information.
Note: The Red Hat Enterprise Linux 4 Update 4 distribution does not include HBA drivers for
4-Gb and 8-Gb QLogic cards; therefore, you must use the device driver kit provided by
QLogic. For more information about the supported drivers, see the NetApp Interoperability
Matrix.
Note: When you install the SUSE Linux Enterprise Server operating system, you must ensure
that GRUB is installed in the Master Boot Record. You can do this from the Expert tab in the
software package selection screen during installation.
9. If you are using HBA drivers acquired from an OEM, install the supported versions of the drivers.
11. Install Veritas Storage Foundation and any appropriate patches or fixes for it.
Configuring SAN boot in a Veritas environment | 67
On reboot, this value takes effect and remains persistent across system reboots.
13. For Veritas Storage Foundation 5.1 and later, set the Veritas DMP LUN retries tunable to a value
of 300:
vxdmpadm settune dmp_lun_retry_timeout=300
14. For Veritas Storage Foundation 6 series and InfoScale 7 series, set the Veritas DMP LUN retries
tunable to a value of 60:
vxdmpadm settune dmp_lun_retry_timeout=60
15. For Veritas Storage Foundation 5.1 SP1 and later, and InfoScale 7 series, set the value of the
dmp_path_age to an interval of 120 by entering the following command:
vxdmpadm settune dmp_path_age=120
You must enable persistence before you can encapsulate the root disk.
For the detailed steps, see the section on encapsulating the disk in the Veritas Volume Manager
Administrator’s Guide for Linux that is shipped along with the software.
18. Reboot the host after encapsulation.
This command displays the rootvol and swapvol volumes under the corresponding disk group.
20. Configure the paths to the HBA boot BIOS as primary, secondary, tertiary, and so on, on the boot
device.
For more information, see the respective HBA vendor-specific documentation.
Note: If you are using Oracle VM hypervisor, see the sections for configuring FC, iSCSI, and
multipathing. In this guide, sections that refer to Red Hat Enterprise Linux also apply to Oracle
Linux. Both operating systems use the same instructions for the tasks featured in this guide.
Oracle VM is a server virtualization solution that consists of Oracle VM Server (OVS). This is a self-
contained virtualization environment designed to provide a lightweight, secure, server-based platform
for running virtual machines. OVS is based on an updated version of the underlying Xen hypervisor
technology. Oracle VM Manager provides the user interface to manage OVS and guests.
A workaround for misaligned Linux guests is available at the NetApp Linux Community Program
site at http://linux.netapp.com/tools/fix-alignment.
Related information
Technical report TR-3747
• SAN booting
• ALUA
Note: Your specific environment can affect what the Host Utilities support.
The FC protocol
The FC protocol requires one or more supported host bus adapters (HBAs) in the host. Each HBA
port is an initiator that uses FC to access the LUNs on the storage system. The HBA port is identified
by a worldwide port name (WWPN).
You need to make a note of the WWPN so that you can supply it when you create an initiator group
(igroup). To enable the host to access the LUNs on the storage system using the FC protocol, you
must create an igroup on the storage system and provide the WWPN as an identifier. Then, when you
create the LUN, you map it to that igroup. This mapping enables the host to access that specific
LUN.
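As a host-side sketch of this workflow, the following commands convert a WWPN from the raw hexadecimal form reported by sysfs into the colon-separated form that igroup commands expect. The WWPN value, host number, igroup name, and LUN path are all hypothetical, and the 7-Mode command syntax in the comments is illustrative rather than output from your system:

```shell
# Sample WWPN in the style reported by sysfs; on a live host you would read:
#   cat /sys/class/fc_host/host0/port_name
wwpn_raw="0x21000024ff3dff80"            # hypothetical value

# Strip the 0x prefix and insert a colon after every two hex digits.
wwpn=$(echo "${wwpn_raw#0x}" | sed 's/../&:/g; s/:$//')
echo "$wwpn"                             # 21:00:00:24:ff:3d:ff:80

# On the storage system (7-Mode syntax), you would then supply this WWPN
# when creating the igroup and mapping the LUN, for example:
#   igroup create -f -t linux linux_host1 21:00:00:24:ff:3d:ff:80
#   lun map /vol/vol1/lun1 linux_host1
```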
The Linux Host Utilities support the FC protocol with fabric-attached SAN and direct-attached
configurations:
• Fabric-attached SAN
The Host Utilities support two variations of fabric-attached SANs:
◦ A single-port FC connection from the HBA to the storage system through a single switch
A host is cabled to a single FC switch that is connected by cable to redundant FC ports on a
high-availability storage system. A fabric-attached single-path host has one HBA.
◦ A dual-port FC connection from the HBA to the storage system through dual switches
The redundant configuration avoids the single point of failure of a single-switch configuration.
• Direct-attached
A single host with a direct FC connection from the host HBA to stand-alone or high-availability
storage system configurations.
Note: You should use redundant configurations with two FC switches for high availability in
production environments. However, direct FC connections and switched configurations using a
single-zoned switch might be appropriate for less critical business applications.
• (Data ONTAP 7.1 and later) An iSCSI target HBA or an iSCSI TCP/IP offload engine (TOE)
adapter.
The connection between the initiator and target uses a standard TCP/IP network. The storage system
listens for iSCSI connections on TCP port 3260.
You need to make a note of the iSCSI node name so that you can supply it when you create an
igroup.
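As an illustrative sketch of recording the node name, the following extracts it from a line in the format used by the open-iscsi initiator name file. The iqn value shown is hypothetical; on a live host you would read /etc/iscsi/initiatorname.iscsi instead:

```shell
# Sample line in the style of /etc/iscsi/initiatorname.iscsi
# (the iqn value here is hypothetical).
line="InitiatorName=iqn.1994-05.com.redhat:3456bc34c21"

# Strip the key to get the node name to supply when you create the igroup.
node_name=${line#InitiatorName=}
echo "$node_name"    # iqn.1994-05.com.redhat:3456bc34c21
```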
SAN booting
SAN booting is the general term for booting a Linux host from a storage system LUN instead of an
internal hard disk. SAN booting uses a SAN-attached disk, such as a LUN configured on a storage
controller, as a boot device for a host.
Note: SAN booting is not supported with Oracle VM.
• Consolidated and centralized storage, because the host uses the SAN
• Lower cost
The hardware and operating costs are lowered.
• Greater reliability
Systems without internal disks are less prone to failure.
For information about the configurations that are supported for SAN boot LUNs, see the
Interoperability Matrix.
ALUA defines a standard set of SCSI commands for discovering path priorities to LUNs on SANs.
When the host and storage controller are configured to use ALUA, the host automatically determines
which target ports provide optimized and unoptimized access to LUNs.
Note: If you are using clustered Data ONTAP, ALUA is supported with the FC, FCoE, and iSCSI
protocols, and you must use it. If you are using Data ONTAP operating in 7-Mode, ALUA is
supported only with the FC and FCoE protocols.
ALUA is automatically enabled when you set up your storage for FC.
The following configurations support ALUA:
Troubleshooting
If you encounter problems while running the Host Utilities on FC, iSCSI, or Veritas Storage
Foundation, you can check the sections that follow for troubleshooting tips.
If any devices show up as being blacklisted, check the devnode_blacklist or blacklist section of
the /etc/multipath.conf file. Ensure that all the entries are correctly specified.
If the devices are not blacklisted, but are still not recognized by the multipath command,
regenerate the multipath maps by entering the following command:
multipath -v3
For more information, see bug number 228744 on Bugs Online, which is available on the NetApp
Support Site.
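A self-contained sketch of the blacklist check is shown below. The conf text is a hypothetical fragment in the style of /etc/multipath.conf, and the WWID is a sample value; on a live host you would grep the real file:

```shell
# Hypothetical blacklist section in the style of /etc/multipath.conf.
conf='blacklist {
    devnode "^hd[a-z]"
    wwid 35000c50072648313
}'

# Check whether a given WWID is explicitly blacklisted.
echo "$conf" | grep -q 'wwid 35000c50072648313' && echo "blacklisted"
```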
FC troubleshooting
The troubleshooting sections that follow provide tips for dealing with problems that might occur
when you are running the FC protocol with the Host Utilities.
• Warning: libnl.so (32-bit) library not found, some sanlun commands may
not work. Refer Linux Host Utilities Installation and Setup Guide for
more details
To avoid these warnings, make sure you install the packages that provide the libraries before you
install the Host Utilities software. For more information, see the information on installing and
configuring QLogic and Emulex HBAs.
Steps
1. Set the value of QLogic HBA port topology to Point to Point Only while connecting to a storage
controller operating in SSI, Standby or Partner CF modes, or to Loop Only if the storage system
is operating in Dual Fabric or Mixed CF modes.
c. For each of the WWPNs listed, select ConnectionOptions and set it to Point to Point Only or
Loop Only, as required.
2. Set the value of the QLogic port speed to the highest speed possible, depending on its maximum
and the maximum of the switch or target port to which it is connected.
c. For each of the WWPNs listed, select the Data Rate option and set it to the specified speed.
(FC) The SAN booted root on a DM-Multipath host freezes during FC path
faults
The SAN booted root on a DM-Multipath host triggers a freeze during FC path faults on Red Hat
Enterprise Linux 5 Update 1.
For SAN boot support on Red Hat Enterprise Linux 5 Update 1, you have to download the
appropriate errata multipathing package from the Red Hat Network Web site. For more information,
see NetApp Interoperability Matrix.
Error messages similar to the following indicate that the operating system's libHBAAPI library is not
installed:
Error messages similar to the following indicate that the HBA vendor API plug-in is not installed:
To avoid this problem, make sure you have the correct management software package for your HBA
and host architecture installed:
• For QLogic HBAs, install the QLogic QConvergeConsole CLI package.
• For Emulex HBAs, install the Emulex OneCommand Manager core application (CLI) package.
Note: If you are using a different HBA that is supported by the Host Utilities, you must install the
management software for that HBA.
• /etc/init.d/multipathd restart
• multipath -v3
as a different SCSI device. LVM detects this and chooses one of the devices. LVM then displays a
warning message.
To avoid this problem, you should modify the preferred_names parameter in
the /etc/lvm/lvm.conf file.
The following is an example of how the preferred_names parameter line should look:
preferred_names = [ "^/dev/mapper/*" ]
After you make the change, perform a rescan (pvscan and vgscan) to ensure all devices are
properly displayed.
iSCSI troubleshooting
Sometimes you might encounter a problem while running iSCSI. The sections that follow provide
tips for resolving any issues that might occur.
(iSCSI) LVM devices are not automatically mounted during system boot on
SUSE Linux Enterprise Server 11
Currently, the volume groups created on iSCSI devices are not automatically scanned when iSCSI
LUNs are discovered. Therefore, during the system boot, the volume groups that were created on the
iSCSI devices are unavailable.
To overcome this problem, manually mount the logical volumes by using the following command:
/sbin/mount -a
(iSCSI) LVM devices are not automatically mounted during system boot on
SUSE Linux Enterprise Server 10
Currently, the volume groups created on iSCSI devices are not automatically scanned when iSCSI
LUNs are discovered. Therefore, during the system boot, the volume groups that were created on the
iSCSI devices are unavailable. To overcome this problem, a helper script is provided in
/usr/share/doc/packages/lvm2/lvm-vg-to-udev-rules.sh.
You can use this script to generate udev rules for iSCSI Logical Volume Manager (LVM) devices,
which can be automatically mounted during system boot.
Example:
After completing the preceding steps for each logical volume, the logical volumes can be
automatically mounted by adding entries to /etc/fstab.
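For example, once the udev rules make a logical volume available at boot, it can be given an /etc/fstab entry. The device path, mount point, and filesystem below are hypothetical, and the _netdev option (which defers mounting until the network is up) is a common choice for iSCSI-backed volumes:

```
/dev/vg_iscsi/lv_data   /mnt/data   ext3   _netdev   0 0
```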
Multipathd occasionally fails because it cannot start an "event checker" for some of the DM-Multipath
devices. Because of this failure, multipathd is unable to keep track of the path up or down status for
those devices.
• Ping the storage system interfaces that are used for iSCSI.
iSCSI service status: Verify that the iSCSI service is licensed and started on the
storage system. For more information, see the Data ONTAP
SAN Administration Guide for 7-Mode.
Initiator login: Verify that the initiator is logged in to the storage system by
entering the iscsi show initiator command on the
storage system console.
If the initiator is configured and logged in to the storage
system, the storage system console displays the initiator node
name and the target portal group to which it is connected.
If the command output shows that no initiators are logged in,
check the initiator configuration on the host. Verify that the
storage system is configured as a target of the initiator.
iSCSI node names: Verify that you are using the correct initiator node names in the
igroup configuration.
On the storage system, use the igroup show command to
display the node name of the initiators in the storage system’s
igroups. On the host, use the initiator tools and commands to
display the initiator node name. The initiator node names
configured in the igroup and on the host should match.
System requirements: Verify that the components of your configuration are supported.
Verify that you have the correct host operating system (OS)
service pack level, initiator version, Data ONTAP version, and
other system requirements. You can check the up-to-date
system requirements in the NetApp Interoperability Matrix.
Jumbo frames: If you are using jumbo frames in your configuration, ensure
that jumbo frames are enabled on all the devices in the
network path: the host Ethernet NIC, the storage system, and
any switches.
Firewall settings: Verify that the iSCSI port (3260) is open in the firewall rules.
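The firewall check can be sketched as follows. The rule text is a hypothetical line in the style of iptables -L -n output, used here so the example is self-contained; on a live host you would pipe the real listing instead:

```shell
# Hypothetical line in the style of `iptables -L -n` output.
rule="ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:3260"

# On a live host you would check the real rules instead:
#   iptables -L -n | grep 'dpt:3260'
echo "$rule" | grep -q 'dpt:3260' && echo "port 3260 open"
```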
Steps
1. Stop the Veritas cluster service by using hastop on all the nodes.
3. If the fencing driver does not stop, remove the name of the coordinator disk group
from /etc/vxfendg.
Note: It is best to halt all the nodes except the last one in the cluster.
Steps
2. Follow the instructions to download the Windows zip or Linux tgz version of the nSANity
program, depending on the workstation or server you want to run it on.
3. Change to the directory to which you downloaded the zip or tgz file.
4. Extract all of the files and follow the instructions in the README.txt file. Also be sure to review
the RELEASE_NOTES.txt file for any warnings and notices.
Sample configuration file for Red Hat Enterprise Linux 7 series with and without
ALUA enabled
SAN boot LUNs on Red Hat Enterprise Linux 6 series and user_friendly_names
parameter
If you create a SAN boot LUN and the installer sets the user_friendly_names parameter to yes,
you must perform the following steps:
1. For all versions except for Red Hat Enterprise Linux 6.4 series, change the
user_friendly_names parameter to no.
For Red Hat Enterprise Linux 6.4 series, create an empty multipath.conf file; all the settings
for both RHEL 6.4 with and without ALUA are automatically updated by default.
4. Change the root dm-multipath device name to the WWID-based device name in all of the
locations that refer to the device, such as /etc/fstab and /boot/grub/device.map.
For example, suppose the name of the root device is /dev/mapper/mpatha and the WWID of
the device is 360a98000486e2f66426f583133796572. You must re-create the initrd-image, and
then change the device name to /dev/mapper/360a98000486e2f66426f583133796572
in /etc/fstab, /boot/grub/device.map, and any other place that refers to the
/dev/mapper/mpatha device.
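The renaming in step 4 can be sketched as a simple substitution. The fstab line below is hypothetical, and the WWID is the sample value used above; on a live host you would edit the real files and then rebuild the initramfs (on Red Hat Enterprise Linux 6, for example, with dracut -f):

```shell
old="/dev/mapper/mpatha"
new="/dev/mapper/360a98000486e2f66426f583133796572"

# Hypothetical /etc/fstab root entry using the friendly name.
fstab_line="$old / ext4 defaults 1 1"

# Rewrite it to the WWID-based name; /etc/fstab and
# /boot/grub/device.map would be edited the same way.
echo "$fstab_line" | sed "s|$old|$new|"
# -> /dev/mapper/360a98000486e2f66426f583133796572 / ext4 defaults 1 1
```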
5. Append the following parameter value to the kernel for ALUA and non-ALUA to work:
rdloaddriver=scsi_dh_alua
Example
kernel /vmlinuz-2.6.32-358.6.1.el6.x86_64 ro root=/dev/mapper/vg_ibmx355021082-lv_root rd_NO_LUKS rd_LVM_LV=vg_ibmx355021082/lv_root LANG=en_US.UTF-8 rd_LVM_LV=vg_ibmx355021082/lv_swap rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet rdloaddriver=scsi_dh_alua
Red Hat Enterprise Linux 6 with ALUA enabled sample configuration file
The following sample file shows values that you might supply when your host is running Red Hat
Enterprise Linux 6 with ALUA enabled. Remember that if you use the blacklist section, you must
replace the sample information with information for your system.
defaults {
user_friendly_names no
max_fds max
flush_on_last_del yes
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "alua"
path_checker tur
failback immediate
path_selector "round-robin 0"
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
Red Hat Enterprise Linux 6 without ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 6 and does not have ALUA enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
max_fds max
flush_on_last_del yes
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "ontap"
path_checker tur
failback immediate
path_selector "round-robin 0"
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
Red Hat Enterprise Linux 6 update 1 with ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 6 update 1 with ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
max_fds max
flush_on_last_del yes
queue_without_daemon no
dev_loss_tmo 2147483647
fast_io_fail_tmo 5
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "alua"
path_checker tur
failback immediate
path_selector "round-robin 0"
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
Red Hat Enterprise Linux 6 update 1 without ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 6 update 1 and does not have ALUA enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
max_fds max
flush_on_last_del yes
queue_without_daemon no
dev_loss_tmo 2147483647
fast_io_fail_tmo 5
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "ontap"
path_checker tur
failback immediate
path_selector "round-robin 0"
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
Red Hat Enterprise Linux 6 update 2 with ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 6 update 2 with ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
max_fds max
flush_on_last_del yes
queue_without_daemon no
dev_loss_tmo infinity
fast_io_fail_tmo 5
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "alua"
path_checker tur
failback immediate
path_selector "round-robin 0"
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
Red Hat Enterprise Linux 6 update 2 without ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 6 update 2 and does not have ALUA enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
max_fds max
flush_on_last_del yes
queue_without_daemon no
dev_loss_tmo infinity
fast_io_fail_tmo 5
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "ontap"
path_checker tur
failback immediate
path_selector "round-robin 0"
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
Red Hat Enterprise Linux 6 update 3 with ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 6 update 3 with ALUA enabled. By default, the hardware table sets the rest
of the parameters.
devices {
device {
vendor "NETAPP"
product "LUN.*"
prio "alua"
hardware_handler "1 alua"
}
}
Red Hat Enterprise Linux 6 update 3 without ALUA enabled sample configuration file
When you are not using ALUA, you only need to list any devices that must be blacklisted. All the
other parameter values are set by the hardware table.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
If you are using a SAN boot LUN and must blacklist the local disk, you must supply the WWID of
the local disk. You do not need to add other devnode information. DM-Multipath adds that
information by default.
Remember: When you use the blacklist section, you must replace the sample information with
information for your system.
Red Hat Enterprise Linux 6 update 4, 5, 6, 7, and 8 with and without ALUA enabled
sample configuration file
Follow the steps listed in the section "SAN boot LUNs on Red Hat Enterprise Linux 6 series and
user_friendly_names parameter" to set the parameters, and then continue with steps 1 and 2.
Remember: When you use the blacklist section, you must replace the sample information with
information for your system.
1. Append the following parameter value to the kernel for ALUA and non-ALUA to work, and then
reboot:
rdloaddriver=scsi_dh_alua
Example
kernel /vmlinuz-2.6.32-358.6.1.el6.x86_64 ro root=/dev/mapper/vg_ibmx355021082-lv_root rd_NO_LUKS rd_LVM_LV=vg_ibmx355021082/lv_root LANG=en_US.UTF-8 rd_LVM_LV=vg_ibmx355021082/lv_swap rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet rdloaddriver=scsi_dh_alua
2. Verify the output of the cat /proc/cmdline command to ensure that the setting is complete.
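That verification can be sketched as follows. The cmdline string is a hypothetical sample so the check is self-contained; on a live host you would test the contents of /proc/cmdline itself:

```shell
# Hypothetical kernel command line; on a live host use: cat /proc/cmdline
cmdline="ro root=/dev/mapper/vg-lv_root rhgb quiet rdloaddriver=scsi_dh_alua"

case "$cmdline" in
  *rdloaddriver=scsi_dh_alua*) echo "scsi_dh_alua enabled" ;;
  *) echo "scsi_dh_alua missing" ;;
esac
```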
SAN boot LUNs on Red Hat Enterprise Linux 5 series and user_friendly_names
parameter
If you create a SAN boot LUN and the installer sets the user_friendly_names parameter to yes,
you must perform the following steps:
4. Change the root dm-multipath device name to the WWID-based device name in all the locations
that refer to the device, such as /etc/fstab and /boot/grub/device.map.
For example, suppose the name of the root device is /dev/mapper/mpatha and the WWID of
the device is 360a98000486e2f66426f583133796572. You must re-create the initrd-image, and
then change the device name to /dev/mapper/360a98000486e2f66426f583133796572
in /etc/fstab and /boot/grub/device.map, as well as any other place that refers to
the /dev/mapper/mpatha device.
Red Hat Enterprise Linux 5 update 11, 10, 9, 8, or 7 with ALUA enabled sample
configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 5 with update 11, 10, 9, 8, or 7 and has ALUA enabled.
Note: Red Hat Enterprise Linux 5 updates 11, 10, 9, 8, and 7 use the same values in the
DM-Multipath configuration file, so this file can apply to any of these versions.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del yes
max_fds max
pg_prio_calc avg
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
path_checker tur
path_selector "round-robin 0"
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
Red Hat Enterprise Linux 5 update 11, 10, 9, 8, or 7 without ALUA enabled
sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 5 with update 11, 10, 9, 8, or 7 and does not have ALUA enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del yes
max_fds max
pg_prio_calc avg
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio_callout "/sbin/mpath_prio_ontap /dev/%n"
path_checker tur
path_selector "round-robin 0"
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
Red Hat Enterprise Linux 5 update 6 with ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 5 with update 6 and has ALUA enabled:
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del yes
max_fds max
pg_prio_calc avg
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
path_checker directio
path_selector "round-robin 0"
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
Red Hat Enterprise Linux 5 update 6 without ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Linux 5 with update 6 and does not have ALUA enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del yes
max_fds max
pg_prio_calc avg
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio_callout "/sbin/mpath_prio_ontap /dev/%n"
path_checker directio
path_selector "round-robin 0"
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
4 Update 7 and earlier: Place rr_min_io in the defaults section, not the
device section, of the multipath.conf file, and set its value to 128.
4 Update 6 and earlier: Set path_checker to readsector0.
defaults
{
user_friendly_names no
queue_without_daemon no
max_fds max
flush_on_last_del yes
}
# All data under blacklist must be specific to your system.
devnode_blacklist
{
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices
{
device
{
vendor "NETAPP"
product "LUN"
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout "/sbin/mpath_prio_ontap /dev/%n"
• For Red Hat Compatible Kernel and multipath packages, you must use the RHEL-based setting.
SAN boot LUNs on Oracle Linux 5 series and the user_friendly_names parameter
1. Change the user_friendly_names parameter to no.
4. Change the root dm-multipath device name to the WWID-based device name in all of the
locations that refer to the device, such as /etc/fstab and /boot/grub/device.map.
For example, suppose the name of the root device is /dev/mapper/mpatha and the WWID of
the device is 360a98000486e2f66426f583133796572. You must re-create the initrd-image and
then change the device name to /dev/mapper/360a98000486e2f66426f583133796572
in /etc/fstab and /boot/grub/device.map, as well as any other place that refers to
device /dev/mapper/mpatha.
Oracle Linux 5 (UEK) update 11, 10, 9, and 8 with ALUA enabled sample configuration
file
The following file provides an example of the values you need to supply when your host is running
Oracle Linux 5 (UEK) with update 11, 10, 9, or 8 and has ALUA enabled. Oracle Linux 5 (UEK)
updates 11, 10, 9, and 8 use the same values in the DM-Multipath configuration file, so this file
can apply to all of these versions.
Note: If you use the blacklist section, you must replace the sample information with information
for your system.
defaults {
queue_without_daemon no
flush_on_last_del yes
max_fds max
user_friendly_names no
}
blacklist {
wwid (35000c50072648313)
devnode "^cciss.*"
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
devices {
device {
hardware_handler "1 alua"
prio "alua"
product "LUN.*"
vendor "NETAPP"
}
}
Oracle Linux 5 (UEK) update 11, 10, 9, and 8 without ALUA enabled sample
configuration file
The following file provides an example of the values you need to supply when your host is running
Oracle Linux 5 (UEK) with update 11, 10, 9, and 8 and does not have ALUA enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
queue_without_daemon no
flush_on_last_del yes
max_fds max
user_friendly_names no
}
blacklist {
wwid (35000c50072648313)
devnode "^cciss.*"
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
devices {
device {
hardware_handler "0"
prio "ontap"
product "LUN.*"
vendor "NETAPP"
}
}
Oracle Linux 6 series (Unbreakable Enterprise Kernel) with and without ALUA
enabled sample configuration file
For Oracle Linux 6 series Unbreakable Enterprise Kernel, follow the Red Hat Enterprise Linux 6
series sample DM-Multipath configuration file.
Oracle Linux 7 series (Unbreakable Enterprise Kernel) with and without ALUA
enabled sample configuration file
Remember: When you use a blacklist section, you must replace the sample information with
information for your system.
If you create a SAN boot LUN, you must perform the following steps:
1. For Oracle Linux 7 series (Unbreakable Enterprise Kernel), create an empty multipath.conf file;
all the settings for Oracle Linux 7 series (Unbreakable Enterprise Kernel) with and without
ALUA are automatically updated by default.
4. Change the root dm-multipath device name to the WWID-based device name in all of the
locations that refer to the device, such as /etc/fstab and /boot/grub/device.map.
For example, suppose the name of the root device is /dev/mapper/mpatha and the WWID of the
device is 360a98000486e2f66426f583133796572. You must re-create the initrd-image, and then
change the device name to /dev/mapper/360a98000486e2f66426f583133796572 in /etc/fstab,
/boot/grub/device.map, and any other place that refers to the /dev/mapper/mpatha device.
5. Append the following parameter value to the kernel for ALUA and non-ALUA to work:
rdloaddriver=scsi_dh_alua
Example:
kernel /vmlinuz-3.8.13-68.1.2.el6uek.x86_64 ro root=/dev/mapper/vg_ibmx3550m421096-lv_root rd_NO_LUKS rd_LVM_LV=vg_ibmx3550m421096/lv_root LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=256M KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_ibmx3550m421096/lv_swap rd_NO_DM rhgb quiet rdloaddriver=scsi_dh_alua
Sample configuration file for Oracle Linux Red Hat Compatible Kernel
All versions of the Oracle Linux Red Hat Compatible Kernel (RHCK) 5 series use a DM-Multipath
configuration file, but there might be slight variations in the file based on which RHCK update you
have installed. You can replace your current file with the sample file, and then change the values to
ones that are appropriate for your system.
You can use the sample Oracle Linux 5 series (RHCK) configuration files shown here to create
your own multipath.conf file. When you create your file, keep the following in mind:
SAN boot LUNs on Oracle Linux 5 series (RHCK) and user_friendly_names parameter
If you create a SAN boot LUN and the installer sets the user_friendly_names parameter to yes,
you must perform the following steps:
4. Change the root dm-multipath device name to the WWID-based device name in all of the
locations that refer to the device, such as /etc/fstab and /boot/grub/device.map.
For example, suppose the name of the root device is /dev/mapper/mpatha and the WWID of
the device is 360a98000486e2f66426f583133796572. You must re-create the initrd-image, and
then change the device name to /dev/mapper/360a98000486e2f66426f583133796572
in /etc/fstab and /boot/grub/device.map, as well as any other place that refers to
device /dev/mapper/mpatha.
Oracle Linux Red Hat Compatible Kernel 5 update 11, 10, 9, 8 and 7 with ALUA
enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Oracle Linux 5 with update 11, 10, 9, 8, or 7 and has ALUA enabled.
Note: Oracle Linux 5 (Red Hat Compatible Kernel) updates 11, 10, 9, 8, and 7 use the same
values in the DM-Multipath configuration file, so this file applies to all of these versions.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del yes
max_fds max
pg_prio_calc avg
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
path_checker tur
path_selector "round-robin 0"
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
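Before loading a hand-edited file such as the sample above, a quick sanity check is to confirm that the opening and closing braces balance; an unbalanced count indicates a malformed section. A sketch using a small demo file (point conf at /etc/multipath.conf on a real host):

```shell
# Sketch: count opening and closing braces in a multipath.conf before loading
# it. A small demo file is used here; set conf=/etc/multipath.conf on a host.
conf=/tmp/multipath.conf.demo
printf 'defaults {\n    user_friendly_names no\n}\n' > "$conf"
opens=$(grep -o '{' "$conf" | wc -l)
closes=$(grep -o '}' "$conf" | wc -l)
if [ "$opens" -eq "$closes" ]; then echo "braces balanced"; else echo "unbalanced braces"; fi
```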
Oracle Linux 5 (RHCK) update 11, 10, 9, 8, and 7 without ALUA enabled sample
configuration file
The following file provides an example of the values you need to supply when your host is running
Oracle Linux 5 (RHCK) with update 11, 10, 9, 8, or 7 and does not have ALUA enabled.
Note: Oracle Linux 5 (RHCK) updates 11, 10, 9, 8, and 7 use the same values in the DM-Multipath
configuration file, so this file applies to all of these versions.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del yes
max_fds max
pg_prio_calc avg
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio_callout "/sbin/mpath_prio_ontap /dev/%n"
path_checker tur
path_selector "round-robin 0"
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
Oracle Linux 5 update 6 (RHCK) with ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Oracle Linux 5 update 6 (RHCK) and has ALUA enabled:
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
Oracle Linux 5 update 6 (RHCK) without ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Oracle Linux 5 update 6 (RHCK) and does not have ALUA enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del yes
max_fds max
pg_prio_calc avg
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
path_checker tur
path_selector "round-robin 0"
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
Oracle Linux 6 series (RHCK) with and without ALUA enabled sample configuration
file
For Oracle Linux 6 series (RHCK), follow the Red Hat Enterprise Linux 6 series sample
DM-Multipath configuration file.
Oracle Linux 7 series (RHCK) with and without ALUA enabled sample configuration
file
For Oracle Linux 7 series (RHCK), follow the Red Hat Enterprise Linux 7 series sample
DM-Multipath configuration file.
Red Hat Enterprise Virtualization Hypervisor 6.2 with ALUA enabled sample
configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Virtualization Hypervisor 6.2 with ALUA enabled:
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
fast_io_fail_tmo 5
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "alua"
path_checker tur
failback immediate
path_selector "round-robin 0"
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
Red Hat Enterprise Virtualization Hypervisor 6.2 without ALUA enabled sample
configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Virtualization Hypervisor 6.2 and does not have ALUA enabled:
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
devices {
device {
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
Red Hat Enterprise Virtualization Hypervisor 6.3 with ALUA enabled sample
configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Virtualization Hypervisor 6.3 with ALUA enabled:
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
Red Hat Enterprise Virtualization Hypervisor 6.3 without ALUA enabled sample
configuration file
The following file provides an example of the values you need to supply when your host is running
Red Hat Enterprise Virtualization Hypervisor 6.3 without ALUA enabled:
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
devices {
device {
prio "ontap"
hardware_handler "0"
}
}
Red Hat Enterprise Virtualization Hypervisor 6.4 with and without ALUA enabled
sample configuration file
In Red Hat Enterprise Virtualization Hypervisor 6.4, DM-Multipath can automatically apply ALUA
and non-ALUA settings with the default multipath.conf file; you must specify
rdloaddriver=scsi_dh_alua on the kernel command line as described below.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
Red Hat Enterprise Virtualization Hypervisor 6.5 with and without ALUA enabled
sample configuration file
In Red Hat Enterprise Virtualization Hypervisor 6.5, DM-Multipath can automatically apply ALUA
and non-ALUA settings with the default multipath.conf file; you must specify
rdloaddriver=scsi_dh_alua on the kernel command line as described below.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information from your system.
4. Change the root dm-multipath device name to the WWID-based device name in all the locations
that refer to the device, such as /etc/fstab and /boot/grub/device.map.
For example, suppose the name of the root device is /dev/mapper/mpatha and the WWID of
the device is 360a98000486e2f66426f583133796572. You must re-create the initrd-image, and
then change the device name to /dev/mapper/360a98000486e2f66426f583133796572
in /etc/fstab and /boot/grub/device.map, as well as any other place that refers to
the /dev/mapper/mpatha device.
Note: In addition to providing a DM-Multipath configuration file, you must also set the
O2CB_HEARTBEAT_THRESHOLD timeout. For more information, see (Oracle VM) Configuring the
O2CB_HEARTBEAT_THRESHOLD on page 112.
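As a sketch only, the current threshold can be read from the o2cb configuration file; the file location (/etc/sysconfig/o2cb) is the standard OCFS2 packaging default, and the sample value shown is illustrative, not a recommendation:

```shell
# Sketch: read O2CB_HEARTBEAT_THRESHOLD from an o2cb configuration file.
# Sample content is used here; on a host, the file is typically
# /etc/sysconfig/o2cb, and the value shown is illustrative only.
printf 'O2CB_ENABLED=true\nO2CB_HEARTBEAT_THRESHOLD=65\n' > /tmp/o2cb.demo
threshold=$(awk -F= '$1 == "O2CB_HEARTBEAT_THRESHOLD" {print $2}' /tmp/o2cb.demo)
echo "O2CB_HEARTBEAT_THRESHOLD=$threshold"
```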
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio "alua"
path_checker directio
no_path_retry "queue"
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -gus /block/%n"
}
}
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio "ontap"
path_checker directio
no_path_retry "queue"
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -gus /block/%n"
}
}
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon yes
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN.*"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "alua"
path_checker tur
no_path_retry "queue"
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -gus /block/%n"
}
}
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon yes
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN.*"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "ontap"
path_checker tur
no_path_retry "queue"
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -gus /block/%n"
}
}
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN.*"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "alua"
path_checker tur
no_path_retry "queue"
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -gus /block/%n"
}
}
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN.*"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "ontap"
path_checker tur
no_path_retry "queue"
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -gus /block/%n"
}
}
Oracle VM 3.3 and 3.4 series with and without ALUA enabled sample configuration file
You can use an empty /etc/multipath.conf file for FC, FCoE, or iSCSI configurations, as well
as ALUA and non-ALUA configurations. You can also add blacklisting information for the local
disks in the file, if required.
When you use a blacklist section, you must replace the information in the following example with
information for your system.
1. Append the parameter rdloaddriver=scsi_dh_alua to the kernel line in the
file /boot/grub/grub.conf so that both ALUA and non-ALUA configurations work.
For example:
kernel /vmlinuz-3.8.13-68.3.3.el6uek.x86_64 ro root=UUID=45f0d586-e59e-4892-b718-08084c78d0fb rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet rdloaddriver=scsi_dh_alua
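The append can also be scripted; the following sketch edits a scratch copy of grub.conf and skips kernel lines that already carry the parameter (back up the real /boot/grub/grub.conf before editing it this way):

```shell
# Sketch: append rdloaddriver=scsi_dh_alua to each kernel line in a scratch
# copy of grub.conf, skipping lines that already carry it. Back up the real
# /boot/grub/grub.conf before editing it this way.
printf '\tkernel /vmlinuz-3.8.13-68.3.3.el6uek.x86_64 ro root=UUID=45f0d586 rhgb quiet\n' > /tmp/grub.conf.demo
sed -i '/^[[:space:]]*kernel /{/rdloaddriver=scsi_dh_alua/!s/$/ rdloaddriver=scsi_dh_alua/;}' /tmp/grub.conf.demo
grep -c 'rdloaddriver=scsi_dh_alua' /tmp/grub.conf.demo
```

Because the edit is guarded, running it a second time does not append the parameter twice.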
Note: Heartbeat settings and multipath settings should be exactly the same on all the hypervisors
in a server pool.
SUSE Linux Enterprise 12 series with and without ALUA enabled sample
configuration
Note: With SUSE Linux Enterprise 12 series, you do not need the /etc/multipath.conf file to
configure DM-Multipath on NetApp LUNs.
Remember: To blacklist a device, you must add the following sample information in
the /etc/multipath.conf file:
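A minimal blacklist stanza, following the pattern of the other samples in this guide (replace the placeholder WWID with the WWID of your local disk):
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}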
SUSE Linux Enterprise Server 11 and 11 SP1 with ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
either SUSE Linux Enterprise Server 11 or 11 SP1 with ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults
{
user_friendly_names no
max_fds max
flush_on_last_del yes
}
# All data under blacklist must be specific to your system.
blacklist
{
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices
{
device
{
vendor "NETAPP"
product "LUN"
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
prio "alua"
features "1 queue_if_no_path"
hardware_handler "1 alua"
path_grouping_policy group_by_prio
path_selector "round-robin 0"
failback immediate
rr_weight uniform
rr_min_io 128
path_checker tur
}
}
SUSE Linux Enterprise Server 11, 11 SP1 without ALUA enabled sample
configuration file
The following file provides an example of the values you need to supply when your host is running
SUSE Linux Enterprise Server 11 or 11 SP1 and ALUA is not enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults
{
user_friendly_names no
max_fds max
flush_on_last_del yes
}
# All data under blacklist must be specific to your system.
blacklist
{
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices
{
device
{
vendor "NETAPP"
product "LUN"
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
prio "ontap"
features "1 queue_if_no_path"
hardware_handler "0"
path_grouping_policy group_by_prio
path_selector "round-robin 0"
failback immediate
rr_weight uniform
rr_min_io 128
path_checker tur
}
}
SUSE Linux Enterprise Server 11 SP2 with ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
SUSE Linux Enterprise Server 11 SP2 with ALUA enabled.
Note: This configuration file applies to both SLES 11 SP2 KVM and Xen.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults
{
user_friendly_names no
max_fds max
queue_without_daemon no
flush_on_last_del yes
}
# All data under blacklist must be specific to your system.
blacklist
{
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices
{
device
{
vendor "NETAPP"
product "LUN"
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
prio "alua"
features "3 queue_if_no_path pg_init_retries 50"
hardware_handler "1 alua"
path_grouping_policy group_by_prio
failback immediate
rr_weight uniform
rr_min_io 128
path_checker tur
}
}
SUSE Linux Enterprise Server 11 SP2 without ALUA enabled sample configuration
file
The following file provides an example of the values you need to supply when your host is running
SUSE Linux Enterprise Server 11 SP2 and ALUA is not enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Note: This configuration file applies to both SLES 11 SP2 KVM and Xen.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults
{
user_friendly_names no
max_fds max
queue_without_daemon no
flush_on_last_del yes
}
# All data under blacklist must be specific to your system.
blacklist
{
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices
{
device
{
vendor "NETAPP"
product "LUN"
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
prio "ontap"
features "3 queue_if_no_path pg_init_retries 50"
hardware_handler "0"
path_grouping_policy group_by_prio
failback immediate
rr_weight uniform
rr_min_io 128
path_checker tur
}
}
SUSE Linux Enterprise 11 SP3, SP4 with and without ALUA enabled sample
configuration file
Note: With SUSE Linux Enterprise 11 SP3 and SP4, you do not need the /etc/multipath.conf
file to configure DM-Multipath on NetApp LUNs.
Remember: To blacklist a device, you must add the following sample information in
the /etc/multipath.conf file:
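A minimal blacklist stanza, following the pattern of the other samples in this guide (replace the placeholder WWID with the WWID of your local disk):
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}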
SUSE Linux Enterprise Server 10 SP4 with ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
SUSE Linux Enterprise Server 10 SP4 and has ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults
{
user_friendly_names no
max_fds max
flush_on_last_del yes
}
# All data under blacklist must be specific to your system.
blacklist
{
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices
{
device
{
vendor "NETAPP"
product "LUN"
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio "alua"
features "1 queue_if_no_path"
hardware_handler "1 alua"
path_grouping_policy group_by_prio
path_selector "round-robin 0"
failback immediate
rr_weight uniform
rr_min_io 128
path_checker tur
}
}
SUSE Linux Enterprise Server 10 SP4 without ALUA enabled sample configuration
file
The following file provides an example of the values you need to supply when your host is running
SUSE Linux Enterprise Server 10 SP4 and does not have ALUA enabled.
Note: Unless you are running the iSCSI protocol and Data ONTAP operating in 7-Mode, you
should have ALUA enabled.
Remember: If you use the blacklist section, you must replace the sample information with
information for your system.
defaults
{
user_friendly_names no
max_fds max
flush_on_last_del yes
}
# All data under blacklist must be specific to your system.
blacklist
{
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices
{
device
{
vendor "NETAPP"
product "LUN"
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio "ontap"
features "1 queue_if_no_path"
hardware_handler "0"
path_grouping_policy group_by_prio
path_selector "round-robin 0"
failback immediate
rr_weight uniform
rr_min_io 128
path_checker tur
}
}
Note: The multipath -ll output should reflect the changes on the SAN boot LUN according to
the modified multipath.conf file.
Note: Currently, the SAN boot LUN does not reflect the queue_if_no_path feature. To overcome
this limitation, perform the steps described below:
◦ Running multipath forces the unused LUNs (those on which no Storage Repository is created)
to be added to the XAPI layer.
◦ To remove the SCSI device IDs of those unused LUNs from mpathutil, use the following
command: # /opt/xensource/sm/mpathutil.py remove <scsi_device-id>
Skip this step if you have no unused LUNs (LUNs on which no SR is created).
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
path_checker directio
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio_callout "/sbin/mpath_prio_ontap /dev/%n"
path_checker directio
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
path_checker tur
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
defaults {
user_friendly_names no
max_fds max
flush_on_last_del no
queue_without_daemon no
}
# All data under blacklist must be specific to your system.
blacklist {
devnode "^hd[a-z]"
wwid "<wwid_of_the_local_disk>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio_callout "/sbin/mpath_prio_ontap /dev/%n"
path_checker tur
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del no
max_fds max
}
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
path_checker tur
failback immediate
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
defaults {
user_friendly_names no
queue_without_daemon no
flush_on_last_del no
max_fds max
}
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "1 queue_if_no_path"
prio_callout "/sbin/mpath_prio_ontap /dev/%n"
path_checker tur
failback immediate
hardware_handler "0"
rr_weight uniform
rr_min_io 128
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
}
}
defaults {
flush_on_last_del no
dev_loss_tmo 30
fast_io_fail_tmo off
}
blacklist {
wwid "<device_id_of_the_device_to_be_blacklisted>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN.*"
prio "alua"
hardware_handler "1 alua"
}
}
defaults {
flush_on_last_del no
dev_loss_tmo 30
fast_io_fail_tmo off
}
blacklist {
wwid "<device_id_of_the_device_to_be_blacklisted>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN.*"
prio "ontap"
hardware_handler "0"
}
}
Citrix XenServer 6.5 with and without ALUA enabled sample configuration file
The following file provides an example of the values you need to supply when your host is running
Citrix XenServer 6.5 with and without ALUA enabled.
Note: If you use the blacklist section, you must replace the sample information with information
from your system.
blacklist {
wwid "<device_id_of_the_device_to_be_blacklisted>"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss.*"
}
devices {
device {
vendor "NETAPP"
product "LUN.*"
dev_loss_tmo 30
fast_io_fail_tmo off
flush_on_last_del no
}
}
Citrix XenServer 7.0 with and without ALUA enabled sample configuration file
You can use an empty /etc/multipath.conf file for FC, FCoE, or iSCSI configurations, as well as
ALUA and non-ALUA configurations. You can also add blacklisting information for the local disks
in the file, if required.
Note: If you use the blacklist section, you must replace the sample information with information
from your system.
• The Linux Unified Host Utilities documentation contains information to help you install,
configure, and use the Host Utilities.
Installing the Linux Unified Host Utilities software
• The Interoperability Matrix helps you verify that the Host Utilities support your system setup.
NetApp Interoperability
• For detailed information about the DM-Multipath parameters, see the Red Hat documentation
on the multipath.conf file.
Current active profile: enterprise-storage
Supported RHEL/OL versions: RHEL/OL 6 and 7 series
Red Hat Enterprise Linux/Oracle Linux 6 and 7: set this value to enterprise-storage.
Current active profile: virtual-guest
Supported RHEL/OL versions: RHEL/OL 6 and 7 series
Red Hat Enterprise Linux/Oracle Linux 6 and 7: set this value to virtual-guest.
The parameters and values you supply in the devices section of the multipath.conf file override
the values specified in the defaults section of the file.
path_grouping_policy
Supported SUSE versions: SLES 12 series, SLES 11 series, SLES 10 SP3 and later, SLES 10 SP2
and earlier
Value: group_by_prio (SLES 12 series, 11 series, and 10 SP3 and later); multibus (SLES 10 SP2
and earlier)
iSCSI only with SUSE Linux Enterprise Server 12, 11 series, and 10 SP3 and later: set this value
to group_by_prio. iSCSI only with SUSE Linux Enterprise Server 10 SP2 and earlier: set this
value to multibus.
Related parameters: This value works with failback. If you are using the FC protocol, see the
previous row for information on the value you should use.
Red Hat Enterprise Linux 6 series and Red Hat Enterprise Linux 7 series: Create the
file /etc/udev/rules.d/40-rport.rules.
If you are using Red Hat Enterprise Linux 6 series or Red Hat Enterprise Linux 7 series, you must
configure it to support Veritas Storage Foundation. At the time this document was prepared, you
had to create the file /etc/udev/rules.d/40-rport.rules with the following content line:
KERNEL=="rport-*", SUBSYSTEM=="fc_remote_ports", ACTION=="add", RUN+="/bin/sh -c 'echo 20 > /sys/class/fc_remote_ports/%k/fast_io_fail_tmo; echo 864000 > /sys/class/fc_remote_ports/%k/dev_loss_tmo'"
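A quick way to verify the rule was written correctly is to grep the installed file for both timeout settings; the sketch below uses a scratch directory (the real target is /etc/udev/rules.d):

```shell
# Sketch: install the content line into a scratch rules directory and confirm
# both timeout settings are present (the real target is /etc/udev/rules.d).
mkdir -p /tmp/rules.d
cat > /tmp/rules.d/40-rport.rules <<'EOF'
KERNEL=="rport-*", SUBSYSTEM=="fc_remote_ports", ACTION=="add", RUN+="/bin/sh -c 'echo 20 > /sys/class/fc_remote_ports/%k/fast_io_fail_tmo; echo 864000 > /sys/class/fc_remote_ports/%k/dev_loss_tmo'"
EOF
grep -o 'fast_io_fail_tmo\|dev_loss_tmo' /tmp/rules.d/40-rport.rules | sort -u
```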
Red Hat Enterprise Linux 6 series: Set the IOFENCE timeout parameter to 30000.
The default value of the IOFENCE timeout parameter is 15000 milliseconds, or 15 seconds. This
parameter specifies the amount of time in milliseconds that it takes clients to respond to an
IOFENCE message before the system halts. When clients receive an IOFENCE message, they
must unregister from the GAB driver within the number of milliseconds specified by the
IOFENCE timeout parameter. If they do not unregister within that time, the system halts.
To view the value of this parameter, enter the gabconfig -l command on the host.
To set the value of this parameter, enter the gabconfig -f 30000 command. This value does
not persist across host reboots.
SUSE Linux Enterprise Server 11 series: Create the file /etc/udev/rules.d/40-rport.rules.
If you are using SUSE Linux Enterprise Server 11 series, you must configure it to support Veritas
Storage Foundation. Before you configure anything, you should check Symantec TechNote
124725, which contains the latest information and is available at
http://www.symantec.com/business/support/index?page=content&id=TECH124725.
At the time this document was prepared, you had to create the
file /etc/udev/rules.d/40-rport.rules with the following content line:
KERNEL=="rport-*", SUBSYSTEM=="fc_remote_ports", ACTION=="add", RUN+="/bin/sh -c 'echo 20 > /sys/class/fc_remote_ports/%k/fast_io_fail_tmo; echo 864000 > /sys/class/fc_remote_ports/%k/dev_loss_tmo'"
Based on testing done when this version of the Host Utilities was developed, it is best to set the
following values when using Citrix XenServer. Details about how to set these values are in the
Linux Unified Host Utilities 7.1 Installation Guide.
Copyright information
Copyright © 1994–2017 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer
Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
Active IQ, AltaVault, Arch Design, ASUP, AutoSupport, Campaign Express, Clustered Data ONTAP,
Customer Fitness, Data ONTAP, DataMotion, Element, Fitness, Flash Accel, Flash Cache, Flash
Pool, FlexArray, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, Fueled by
SolidFire, GetSuccessful, Helix Design, LockVault, Manage ONTAP, MetroCluster, MultiStore,
NetApp, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, RAID-TEC, SANscreen,
SANshare, SANtricity, SecureShare, Simplicity, Simulate ONTAP, Snap Creator, SnapCenter,
SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover,
SnapProtect, SnapRestore, Snapshot, SnapValidator, SnapVault, SolidFire, SolidFire Helix,
StorageGRID, SyncMirror, Tech OnTap, Unbound Cloud, WAFL, and other names are
trademarks or registered trademarks of NetApp, Inc., in the United States, and/or other countries. All
other brands or products are trademarks or registered trademarks of their respective holders and
should be treated as such. A current list of NetApp trademarks is available on the web.
http://www.netapp.com/us/legal/netapptmlist.aspx
Index

/etc/init.d/multipathd command
  using to stop DM-Multipath 27
/etc/multipath.conf files
  editing DM-Multipath 22

A
alignment
  VHD partition 68
ALUA
  configuring in the DM-Multipath /etc/multipath.conf file 22
  supported environments 72
APM
  failover path 32
  I/O error handling 32
  I/O error processing 32
  path failures 32
Array Policy Module
  installing 32
Array Support Library
  installing 32
automatic node login
  configuring for iSCSI 19
available paths
  displaying using VxVM 52

B
block alignment
  VHD partition 68

C
CHAP protocol
  setting up for Red Hat and SUSE Linux iSCSI hosts 13
chkconfig command
  using to configure DM-Multipath to start automatically while booting 24
Citrix Xen Server
  sample multipath-conf file 120
  SAN booting not supported 120
Citrix XenServer
  recommended values 142
Citrix XenServer DM-Multipath default parameters
  recommendation 142
Citrix XenServer settings
  recommended settings for 142
commands, VxVM
  vxdisk list 52
  vxdmpadm getsubpaths ctlr 52
  vxdmpadm getsubpaths dmpnodename 52
comments
  how to send feedback about documentation 151
configuration files
  sample for Red Hat Enterprise Linux 7 series 82
configuration files, multipath
  sample for Red Hat Enterprise Linux 5 89
  sample for Red Hat Enterprise Virtualization Hypervisor 101
configuring 62

D
Device Mapper Multipathing
  DM-Multipath 72
  Red Hat Enterprise Linux 72
  SUSE Linux Enterprise Server 72
device settings
  recommended for DM-Multipath on Red Hat Enterprise Linux and Oracle Linux 130
discovering LUNs
  verifying by using the iscsi-ls command 36
  verifying by using the sanlun command 36
discovery, target
  methods of setting up with software initiators 16
DM-Multipath
  Citrix Xen Server sample multipath-conf file 120
  configuring to start automatically while booting 24
  displaying information using sanlun lun show -p 58
  for time-out values when using 139
  I/O failover 72
  path load sharing 72
  recommended device settings for Red Hat Enterprise Linux and Oracle Linux 130
  Red Hat Enterprise Linux 4 sample multipath-conf file 93
  sample configuration file for Oracle VM 106
  sample configuration file for Red Hat Enterprise Linux 5 89
  sample configuration file for Red Hat Enterprise Virtualization Hypervisor 101
  sample configuration files for Red Hat Enterprise Linux 6 83
  SCSI device 72
  stopping 27
  SUSE Linux Enterprise Server 10 sample multipath-conf file 118
  SUSE Linux Enterprise Server 11 sample multipath-conf files 114
  SUSE Linux Enterprise Server 12 sample configuration file 113
  verifying 21
  verifying the configuration 24
DM-Multipath configuration files
  editing 22
DM-Multipath default settings
  recommended for 128, 135
DM-Multipath device settings
  recommended for 136
DM-Multipath, configuration, command
  # multipath 23
documentation
  how to receive automatic notification of changes to 151
  how to send feedback about 151
Dynamic Multipathing (VxDMP)
  recommended values of 140

E
enterprise-storage profiles
  using to improve I/O performance on RHEL hosts 10
Examples
  sanlun FC, iSCSI output 58

F
FC protocol
  discovering new LUNs 35
  host bus adapters
    HBA port 70
  sanlun output 58
FCoE
  converged network adapters 71
  data center bridging 71
  Ethernet switch 71
  traditional FC 71
feedback
  how to send comments about documentation 151

H
host settings
  Linux Unified Host Utilities 126
Host Utilities
  Linux environments 7
hosts
  improving I/O performance on RHEL 10
  setting up CHAP for iSCSI Red Hat and SUSE Linux 13
Hypervisor
  align partitions for best performance 68

I
I/O performance
  improving on Red Hat Enterprise Host/Oracle Linux
information
  how to send feedback about improving documentation 151
initiator group
  igroup 70
initiator node names
  getting iSCSI, when setting up igroups 11
initiators, software
  methods for setting up target discovery with 16
iSCSI
  setting multipathing timeout values 12
iSCSI hosts
  setting up CHAP for Red Hat and SUSE Linux 13
iSCSI initiator node names
  getting when setting up igroups 11
iSCSI protocol
  configuring a SAN boot LUN 57
  configuring manual or automatic node login 19
  configuring Red Hat SAN boot LUN 57
  configuring software iSCSI SAN boot LUN 61, 63
  discovering LUNs in hardware environments 35
  HBA 71
  iSCSI target HBA 71
  methods of setting up target discovery with software initiators 16
  preboot execution environment (PXE) 57
  sanlun output 58
  standard Ethernet interfaces 71
iSCSI protocol, configure SAN boot
  network interface card (NIC) 57
  software iSCSI
    locally attached disk 57
    preboot execution environment (PXE) 57
iSCSI service
  configuring to start automatically 18
  starting 15
iSCSI targets
  discovering on SUSE 10, 11, 12 using YaST2 17
  discovering using iscsiadm utility on Red Hat 5, 6, 7 and SUSE 10, 11, 12 16
iSCSI, Veritas environments
  discovering new LUNs 46
iscsiadm utility
  discovering iSCSI targets on Red Hat 5, 6, 7 and SUSE 10, 11, 12 using 16

K
Kernel-based Virtual Machine (KVM)
  support in Linux Host Utilities 68

L
Linux
  setting up CHAP for Red Hat and SUSE iSCSI hosts 13
Linux configurations
  ALUA support
    automatically enabled 72
  asymmetric logical unit access
    Target Port Group Support 72
  direct-attached configurations
    high-availability 70
    single-zoned 70
    stand-alone 70
  fabric-attached SAN
    dual-port 70
    single-port 70
Linux Host Utilities
  settings 127
Linux Unified Host Utilities
  host settings 126
Linux versions
  Oracle Linux 7
  Red Hat Enterprise Linux 7
  SUSE Linux Enterprise Server 7
  Veritas Storage Foundation 7
login, node
  configuring manual or automatic for iSCSI 19
LUN retry tunable values
  configuring for Veritas Storage Foundation 28
LUNs