
TECHNICAL REPORT

Connecting to SAN Snapshot™ Copies on HP-UX™
Description of Connecting Network Appliance Snapshot™
Copies to an HP-UX System in a SAN Environment
by Richard Jooss, Network Appliance, Inc.
May, 2004 | TR 3318

Network Appliance, a pioneer and industry leader in data storage technology, helps organizations understand and meet complex technical challenges with advanced storage solutions and global data management strategies.

Table of Contents
1. Scope
2. Intended Audience
3. Requirements and Assumptions
4. Mapping from File System to NetApp Filer Disks
5. Overview of the Process
6. Steps to Connect to LUNs Created with Snapshot Copies
7. Example of Connecting to a LUN from a Snapshot Copy
8. Graphical Representation of Example
9. Steps to Return to the Original State
10. Example of Returning to the Original State
11. Automation/Scripting


Abstract
This document describes the steps necessary to connect LUNs created with NetApp filer
Snapshot copy technology to an HP-UX system using HP's Logical Volume Manager
(LVM) in a SAN environment.

1. Scope
This document covers the steps needed to mount NetApp filer Snapshot copies of LUNs from HP-UX systems. In other words, it describes how to gain read/write access to Snapshot copies created for file systems on HP-UX systems. It does not expand on the multitude of reasons for wanting to gain access to these Snapshot copies, for example, restore, backup, consistency checking, data mining, etc.

2. Intended Audience
This paper is intended for system and storage administrators.

3. Requirements and Assumptions


For the methods and procedures in this document to be useful to the reader, several
assumptions are made:

• The reader has at least basic HP-UX administration skills and has access to the
administrative login for the server.

• The reader has at least basic Network Appliance administration skills and has
administrative access to the filer via the command-line interface.

• The filer and host have the necessary licenses to perform the activities outlined
in this document.

• The target system has the required block-level and network protocol
interconnects to perform the activities outlined in this document.

• Snapshot copies of the file system or, more correctly stated, Snapshot copies of
the NetApp filer volumes containing the LUNs for the file system have already
been created, and they have been created in such a way that the content of the
LUNs is consistent. The recommended method is using NetApp SnapDrive™ for
UNIX®.

• The Snapshot copies will be mounted on the same system where the original
LUNs are mounted. It is generally considered a simpler case to mount the LUNs
to a different (often referred to as nonoriginating) host because the problem of
"duplicate volume group IDs" generally does not exist. This procedure is
considered a safe approach because the HP-UX LVM volume group (VG) ID is
changed before mounting and will work equally well on originating or
nonoriginating hosts.


• In the examples in this report, all administrative commands are performed at the
server or filer console for clarity. Web-based management tools can also be
used.

4. Mapping from File System to NetApp Filer Disks


It is important to understand the logical stack that exists between the file system and the
NetApp filer disks. There are many possible combinations, and the diagram below
shows the most common configurations. On the host the stack consists of:

• A file system created on a logical volume.

• The logical volume is built from all or part of one or more physical volumes.

• Physical volumes are formatted raw devices or LUNs from a host perspective.
There is a one-to-one relationship between physical volumes and LUNs on the
NetApp filer.

On the NetApp filer the stack consists of:

• Each LUN is mapped to one and only one NetApp filer volume.

• Each volume is mapped to one or more RAID groups.

• Each RAID group consists of a number of disks.

For the sake of simplicity, the logical diagram shown below does not show the multiple
paths to each LUN.


Figure 1. Mapping from File System to NetApp Filer
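
On a live system, the layers shown in Figure 1 can be inspected directly. The short transcript below is a sketch using standard HP-UX LVM commands together with sanlun; the volume group and device names are hypothetical examples.

root> vgdisplay -v /dev/vg_ntap01      (lists the logical volumes and physical volumes in the VG)
root> lvdisplay /dev/vg_ntap01/lvol1   (shows one logical volume built on those physical volumes)
root> pvdisplay /dev/dsk/c44t0d1       (shows one physical volume, i.e., one filer LUN)
root> sanlun lun show -p               (maps each device file to its filer, filer volume, and LUN)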

5. Overview of the Process


Before going into the detailed steps necessary to mount the LUNs from the Snapshot
copies, it makes sense to quickly describe the general process. The general principle is
that after a Snapshot copy is created (the recommended method is using NetApp SnapDrive
for UNIX), the LUNs it contains exist on the NetApp filer but are not visible to any host.
Before making the LUNs visible to a host for mounting, a readable/writeable copy of each
LUN in the Snapshot copy needs to be created, since the LUNs in the Snapshot copy are
read only. The only additional space required by this copy is for the changes that are
actually written to the LUN. All operations on this readable/writeable version are
performed on the same disk mechanisms as the original LUN, so heavy activity against the
copy can also affect the performance of the original. The readable/writeable LUNs are
then mapped to the NetApp filer interfaces (FC or iSCSI) to make them available to the
hosts. After the necessary steps are performed on the host to gain access to the LUNs,
the volume group ID on the LUNs is modified to avoid having multiple volume groups with
the same ID, and the LUNs are then imported into the volume manager. At that point, it is
just necessary to mount the file systems contained on the LUNs to a mount point on the
system.
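
In condensed form, the round trip to gain access looks like the following sketch; all names in angle brackets are placeholders, and sections 6 and 9 give the detailed steps, explanations, and the corresponding teardown.

filer> lun create -b /vol/<vol>/.snapshot/<snap_name>/<lun> /vol/<vol>/<new_lun>
filer> lun map /vol/<vol>/<new_lun> <igroup> <LUN_ID>
root> ioscan -fnC disk
root> insf -eC disk
root> vgchgid /dev/rdsk/<cXtYdZ> ...
root> mkdir /dev/<vgsnap>
root> mknod /dev/<vgsnap>/group c 64 0xXX0000
root> vgimport /dev/<vgsnap> /dev/dsk/<cXtYdZ> ...
root> vgchange -a y /dev/<vgsnap>
root> mount /dev/<vgsnap>/<logical_volume> /<mount_point>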

6. Steps to Connect to LUNs Created with Snapshot Copies


The following steps describe how to connect to LUNs captured by creating a Snapshot copy of a Network Appliance volume.


Step 1 (Host): sanlun lun show -p
Shows a list of LUNs visible to the host, indicating to which host volume group the LUNs belong and on which filer the LUNs reside. This information can be used to determine on which filer the follow-on steps need to be executed.

Step 2 (Filer): lun show -m
Shows a list of LUNs mapped on the filer, giving LUN names, filer volume names, and to which host they are mapped. The filer volume names and LUN names will be used in future steps.

Step 3 (Filer): snap list <filer_volume>
Lists the available Snapshot copies for the desired volume. Typically all the LUNs will be contained within one filer volume. If there are multiple filer volumes, this step needs to be repeated for each volume.

Step 4 (Filer): lun create -b <lun_snapshot_name> <new_lun_name>
Creates a writeable copy of the original LUN, backed by the Snapshot copy. The LUN inside the Snapshot copy is read-only, and a writeable copy is needed in order to mount/activate the host volume group (VG). The original "snapshotted" LUN is not accessible during the time the LUN created with "lun create -b" exists; the original Snapshot copy state is returned after this LUN is removed. This command has to be executed for each needed LUN.

Step 5 (Filer): lun map <new_lun_name> <igroup> <LUN_ID>
Maps the newly created LUNs so that they are visible to the host. Typically, the same igroup will be used as for the original LUNs, but if for some reason a more restricted group is desired, a new igroup could be created.

Step 6 (Host): ioscan -fnC disk
Scans all I/O devices on the host for newly created disk devices (LUNs in the case of FC/iSCSI devices).

Step 7 (Host): insf -eC disk
Creates device files (i.e., /dev/dsk/cXtYdZ) for the newly created and mapped LUNs.

Step 8 (Host): sanlun lun show -p
Shows a list of LUNs visible to the host. The newly created and mapped LUNs should appear with no host volume group (VG) assigned. The "-p" option can be left off to get a different presentation of the device files.

Step 9 (Host): vgchgid /dev/rdsk/<cXtYdZ> /dev/rdsk/<cXtYdQ> ...
The vgchgid (volume group change ID) command changes the host volume group (VG) ID on the LUNs to avoid the machine having two VGs with the same ID. All the LUNs making up the volume group need to be included in a single invocation of the command.

Step 10 (Host): ls -l /dev/*/group
Gives a list of host volume groups, showing each group name (/dev/<vg_name>/group) and minor number (0xXX0000). The major number of 64 is also displayed. This list is used to pick a unique VG name and minor number for the next steps.

Step 11 (Host): mkdir /dev/<vgsnap_directory>
Makes the necessary directory for the new VG that will be created for accessing the Snapshot copy.

Step 12 (Host): mknod /dev/<vgsnap_directory>/group c 64 0xXX0000
Creates the special character device file for a host volume group (VG). The directory and minor number need to be unique.

Step 13 (Host): vgimport /dev/<vgsnap> /dev/dsk/<cXtYdZ> /dev/dsk/<cXtYdQ> ...
Imports the new Snapshot copy LUNs into the new host volume group that was created. Note that with multiple paths there will be multiple device files per LUN, and all the possible device files pointing to the LUNs should be entered.

Step 14 (Host): vgchange -a y /dev/<vgsnap>
Activates the host VG.

Step 15 (Host): mkdir -m 777 /<snap>
Creates a directory for mounting the file system.

Step 16 (Host): mount /dev/<vgsnap>/<logical_volume> /<snap>
Mounts the file system on the directory that was just created.

7. Example of Connecting to a LUN from a Snapshot Copy


The following table shows an example where a file system is built using a single LUN
located on a single filer. Typically, a logical volume will consist of multiple LUNs, but for
simplicity only a single LUN is used here. If multiple LUNs are used, steps 4, 5, 9, and
13 need to be repeated or expanded to include the additional LUNs. If the Snapshot
copies are from multiple filers, steps 2, 3, 4, and 5 will need to be executed on each filer.


Step Example
1 root> sanlun lun show -p

filer1:/vol/vol1/sh-hp2-lun2049 (LUN 2049) VG: /dev/vg_ntap01
700m (734003200) lun state: GOOD
-------- --------- -------------------- ---- ------ ------- --------
path path /dev/dsk host target partner PVlinks
state type filename HBA port port priority
-------- --------- -------------------- ---- ------ ------- --------
up primary /dev/dsk/c44t0d1 td0 4a 0
up primary /dev/dsk/c45t0d1 td1 5a 1
up secondary /dev/dsk/c51t0d1 td1 5b 2
up secondary /dev/dsk/c50t0d1 td0 4b 3

root>
2 filer1> lun show -m
LUN path Mapped to LUN ID
-------------------------------------------------------------
/vol/vol1/sh-hp1-lun-010 sh-hp1 10
/vol/vol1/sh-hp2-lun2049 sh-hp2 2049
filer1>
3 filer1> snap list vol1
Volume vol1
working...
%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Feb 29 12:52 test_hpux_snap
filer1>
4 filer1> lun create -b /vol/vol1/.snapshot/test_hpux_snap/sh-hp2-lun2049
/vol/vol1/snap-sh-hp2-lun2049
filer1>
5 filer1> lun map /vol/vol1/snap-sh-hp2-lun2049 sh-hp2 13
filer1>
6 root> ioscan -fnC disk

Class I H/W Path Driver S/W State H/W Type Description
=================================================================================
disk 0 0/0/1/1.15.0 sdisk CLAIMED DEVICE HP 18.2GMAN3184MC
/dev/dsk/c1t15d0 /dev/rdsk/c1t15d0

disk 9 0/4/0/0.1.4.0.0.1.5 sdisk CLAIMED DEVICE NETAPP LUN


disk 1 0/4/0/0.1.4.0.16.0.1 sdisk CLAIMED DEVICE NETAPP LUN
/dev/dsk/c44t0d1 /dev/rdsk/c44t0d1
disk 15 0/4/0/0.1.7.0.0.1.5 sdisk CLAIMED DEVICE NETAPP LUN
disk 7 0/4/0/0.1.7.0.16.0.1 sdisk CLAIMED DEVICE NETAPP LUN
/dev/dsk/c50t0d1 /dev/rdsk/c50t0d1
disk 10 0/6/2/0.1.12.0.0.1.5 sdisk CLAIMED DEVICE NETAPP LUN
disk 2 0/6/2/0.1.12.0.16.0.1 sdisk CLAIMED DEVICE NETAPP LUN
/dev/dsk/c45t0d1 /dev/rdsk/c45t0d1
disk 16 0/6/2/0.1.15.0.0.1.5 sdisk CLAIMED DEVICE NETAPP LUN
disk 8 0/6/2/0.1.15.0.16.0.1 sdisk CLAIMED DEVICE NETAPP LUN
/dev/dsk/c51t0d1 /dev/rdsk/c51t0d1

root>
7 root> insf -eC disk
insf: Installing special files for sdisk instance 9 address 0/4/0/0.1.4.0.0.1.5

insf: Installing special files for sdisk instance 15 address 0/4/0/0.1.7.0.0.1.5


insf: Installing special files for sdisk instance 10 address 0/6/2/0.1.12.0.0.1.5
insf: Installing special files for sdisk instance 16 address 0/6/2/0.1.15.0.0.1.5
8 root> sanlun lun show -p
filer1:/vol/vol1/sh-hp2-lun2049 (LUN 2049) VG: /dev/vg_ntap01
700m (734003200) lun state: GOOD
-------- --------- -------------------- ---- ------ ------- --------
path path /dev/dsk host target partner PVlinks
state type filename HBA port port priority
-------- --------- -------------------- ---- ------ ------- --------
up primary /dev/dsk/c44t0d1 td0 4a 0
up primary /dev/dsk/c45t0d1 td1 5a 1
up secondary /dev/dsk/c51t0d1 td1 5b 2
up secondary /dev/dsk/c50t0d1 td0 4b 3

filer1:/vol/vol1/snap-sh-hp2-lun2049 (LUN 13) VG: none
700m (734003200) lun state: GOOD
-------- --------- -------------------- ---- ------ ------- --------
path path /dev/dsk host target partner PVlinks
state type filename HBA port port priority
-------- --------- -------------------- ---- ------ ------- --------
up secondary /dev/dsk/c55t1d5 td1 5b
up primary /dev/dsk/c53t1d5 td1 5a
up secondary /dev/dsk/c54t1d5 td0 4b
up primary /dev/dsk/c52t1d5 td0 4a

root>
9 root> vgchgid /dev/rdsk/c52t1d5
10 root> ls -l /dev/*/group
crw-r----- 1 root sys 64 0x000000 Aug 12 2002 /dev/vg00/group
crw-r--r-- 1 root sys 64 0x010000 Oct 30 18:04 /dev/vg_ntap01/group
11 root> mkdir /dev/vgsnap
12 root> mknod /dev/vgsnap/group c 64 0x440000
13 root> vgimport /dev/vgsnap /dev/dsk/c52t1d5 /dev/dsk/c53t1d5
/dev/dsk/c55t1d5 /dev/dsk/c54t1d5

vgimport: Warning: Volume Group contains "1" PVs, "4" specified. Continuing.

Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after
activating the volume group.
14 root> vgchange -a y /dev/vgsnap
Activated volume group
Volume group "/dev/vgsnap" has been successfully changed.

root>
15 root> mkdir -m 777 /mnt/snap
root>
16 root> mount /dev/vgsnap/lvol1 /mnt/snap


8. Graphical Representation of Example


The diagram below depicts the result of the steps described in the preceding example. The
first row, Original, shows how the original file system is mounted. The second row,
Snapshot Copy of Original, shows how the Snapshot copy exists on the filer, but the
LUNs are not visible to the host. The LUNs in this row are read only and are not available
during the time the LUNs in the third row exist. The third row, lun create -b of Original,
displays the readable/writeable LUNs that were made visible to the host to allow the
mounting of the file system. Any changes made to the LUNs in this third row will be lost
as soon as the LUNs are removed or destroyed. The diagram differs from the example in
that it shows two LUNs making up the file system, and the names are not representative.

Figure 2. Mapping of Snapshot Copy and Readable/Writeable Copy of Snapshot Copy

9. Steps to Return to the Original State


The following steps describe what is necessary to undo what was done to connect to
the Snapshot copy and return to the original state. To correlate with the steps above,
these steps are numbered r<number>, for "reverse step."

Step r16 (Host): umount /<snap>
Unmounts the file system.

Step r15 (Host): rmdir /<snap>
Removes the directory where the file system was mounted.

Step r14 (Host): vgchange -a n /dev/<vgsnap>
Deactivates the host VG.

Steps r13-r10 (Host): vgexport /dev/<vgsnap>
Removes the host VG, including the /dev/<vgsnap> directory and group file, from the host.

Step r9: No action necessary.
This change, which was written to the disk, will be lost when the LUNs are destroyed in a later step.

Step r8: No action necessary.
This step was for information only.

Steps r7-r6 (Host): rmsf -k /dev/dsk/<cXtYdZ> /dev/dsk/<cXtYdQ> ... followed by rmsf /dev/dsk/<cXtYdZ> /dev/dsk/<cXtYdQ> ...
These commands remove the device files from the system kernel (-k) and from the file system. Both commands need to be executed, and with the same list of device files. The list of device files should be the same as what was given in step 13 during the create process.

Step r5 (Filer): lun unmap <new_lun_name> <igroup>
Removes the mapping of the LUNs from the igroup.

Steps r4-r1 (Filer): lun destroy <new_lun_name>
Removes the created LUN. After this command is executed, the LUN within the Snapshot copy is returned to its original state.

10. Example of Returning to the Original State


The following example shows the steps necessary to return to the original state from the
example shown in the preceding section.

Step Example
r16 root> umount /mnt/snap
r15 root> rmdir /mnt/snap
r14 root> vgchange -a n /dev/vgsnap
Volume group "/dev/vgsnap" has been successfully
changed.
root>
r13-r10 root> vgexport /dev/vgsnap
root>
r7-r6 root> rmsf -k /dev/dsk/c52t1d5 /dev/dsk/c53t1d5
/dev/dsk/c55t1d5 /dev/dsk/c54t1d5
root>
root> rmsf /dev/dsk/c52t1d5 /dev/dsk/c53t1d5
/dev/dsk/c55t1d5 /dev/dsk/c54t1d5
root>
r5 filer1> lun unmap /vol/vol1/snap-sh-hp2-lun2049 sh-hp2
filer1>
r4-r1 filer1> lun destroy /vol/vol1/snap-sh-hp2-lun2049


11. Automation/Scripting
This procedure is typically performed repeatedly and is automated through scripts, after
which the data is backed up to tape, analyzed, checked for consistency, etc. Typically,
the Snapshot copies are created and renamed using the same names each time to make the
scripting process easier. Also, by using the same mappings each time, the steps of
finding the LUNs and creating the device files can be eliminated.

In a scripted solution, normally only steps 4, 5, 9, 11 through 14, and 16 need to be
done each time to gain access to the Snapshot copies, and steps r16, r14, r13, r5, and
r4 are necessary to return to the starting state, as the sketch below illustrates.
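
The following shell script is a minimal sketch of such an automation, built from the example commands in this report. It assumes the filer commands are issued through a remote shell (rsh here; any configured remote access to the filer console works equally well) and that the same LUN ID, igroup, device files, and VG minor number are reused on every run; error handling is omitted for brevity.

#!/bin/sh
# Minimal sketch of the connect sequence, using the example names from this
# report. Adjust FILER, the LUN paths, the igroup, and the device files to
# match the local environment.

FILER=filer1
SNAP_LUN=/vol/vol1/.snapshot/test_hpux_snap/sh-hp2-lun2049
NEW_LUN=/vol/vol1/snap-sh-hp2-lun2049
IGROUP=sh-hp2

# Steps 4 and 5: create a writeable copy of the LUN and map it to the host.
rsh $FILER lun create -b $SNAP_LUN $NEW_LUN
rsh $FILER lun map $NEW_LUN $IGROUP 13

# Step 9: change the VG ID on the LUN. Because the same mapping is used each
# time, the device files already exist and steps 6 through 8 are skipped.
vgchgid /dev/rdsk/c52t1d5

# Steps 11 through 14: recreate and activate the volume group.
mkdir /dev/vgsnap 2>/dev/null                  # harmless if left over from a prior run
mknod /dev/vgsnap/group c 64 0x440000 2>/dev/null
vgimport /dev/vgsnap /dev/dsk/c52t1d5 /dev/dsk/c53t1d5 \
         /dev/dsk/c54t1d5 /dev/dsk/c55t1d5
vgchange -a y /dev/vgsnap

# Step 16: mount the file system (the mount point /mnt/snap is assumed to exist).
mount /dev/vgsnap/lvol1 /mnt/snap

# To return to the starting state (steps r16, r14, r13, r5, and r4):
#   umount /mnt/snap
#   vgchange -a n /dev/vgsnap
#   vgexport /dev/vgsnap
#   rsh $FILER lun unmap $NEW_LUN $IGROUP
#   rsh $FILER lun destroy $NEW_LUN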

© 2004 Network Appliance, Inc. All rights reserved. Specifications subject to change without notice. NetApp, the Network Appliance logo, DataFabric, FAServer, FilerView, NearStore, NetCache, SecureShare, SnapManager, SnapMirror, SnapRestore, and WAFL are registered trademarks and Network Appliance, ApplianceWatch, BareMetal, Camera-to-Viewer, Center-to-Edge, ContentDirector, ContentFabric, Data ONTAP, EdgeFiler, HyperSAN, InfoFabric, MultiStore, NetApp Availability Assurance, NetApp ProTech Expert, NOW, NOW NetApp on the Web, RoboCache, RoboFiler, SecureAdmin, Serving Data by Design, Smart SAN, SnapCache, SnapCopy, SnapDirector, SnapDrive, SnapFilter, SnapMigrator, Snapshot, SnapSuite, SnapVault, SohoCache, SohoFiler, The evolution of storage, Vfiler, VFM, Virtual File Manager, and Web Filer are trademarks of Network Appliance, Inc. in the U.S. and other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.