Sg247938 - Implementing The IBM Storwize V7000
Brian Cartwright
Ronda Hruby
Daniel Koeck
Xin Liu
Massimo Rosati
Thomas Vogel
Bill Wiegand
Jon Tate
ibm.com/redbooks
Draft Document for Review January 12, 2011 1:20 pm 7938edno.fm
December 2010
SG24-7938-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX® IBM® Solid®
DB2® Passport Advantage® System Storage®
DS6000™ POWER® Tivoli®
DS8000® Redbooks® XIV®
FlashCopy® Redbooks (logo) ®
Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other
countries.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM Storwize V7000 incorporates some of IBM’s top technologies typically found only in
enterprise-class storage systems, raising the standard for storage efficiency in midrange disk
systems. This cutting-edge storage system extends the comprehensive storage portfolio from
IBM and can help change the way organizations address the ongoing information explosion.
This book will introduce the features and functions of the Storwize V7000 by example.
Brian Cartwright is a Consulting I/T Specialist based in Brisbane, Australia. He has 23 years
of experience in the storage market, covering both open systems and mainframe storage
systems. His areas of expertise include storage area networks, disk and tape subsystems,
storage virtualization, and IBM storage software. Brian holds a degree in Computing Studies
from Canberra University and is both an IBM Certified I/T Specialist and an IBM Certified
Storage Specialist.
Ronda Hruby is a SAN Volume Controller Level 2 Support Engineer at the Almaden
Research Center in San Jose, California. She also supports multipathing software and virtual
tape products. Before joining the IBM Storage Software PFE organization in 2002, she
worked in hardware and microcode development for more than 20 years and is a SNIA
certified professional.
Daniel Koeck is a Systems Engineer in IBM Systems and Technology Group in Austria. He
has a graduate degree in applied computer science and IT security. His daily business is
mainly focused on virtualized environments and his areas of expertise include storage,
server, desktop and I/O virtualization, as well as high-availability solutions, datacenter
migrations and software engineering. Daniel is the author of several technical publications,
including three previous Redbooks, and enjoys sharing his knowledge as a speaker at national
and international conferences.
Xin Liu is an Advisory IT Specialist in IBM China with a focus on IBM system hardware and
software products, and he has undertaken various roles in the technical support area. He
started as a field support engineer in IBM Global Technology Services China and is now a
member of the Advanced Technical Skills organization in the IBM Greater China Group, specializing in
storage virtualization. Xin holds a Master’s degree in Electronic Engineering from Tsinghua
University, China.
Massimo Rosati is a Certified ITS Senior Storage and SAN Software Specialist in IBM Italy.
He has 25 years of experience in the delivery of Professional Services and SW Support. His
areas of expertise include storage hardware, Storage Area Network, storage virtualization,
Disaster Recovery and Business Continuity solutions. He has written other IBM Redbooks®
on storage virtualization products.
Thomas Vogel is an IBM Certified IT Specialist working for the European Advanced
Technical Support organization. He joined IBM in 1997 and since 2000 he has worked at the
European Storage Competence Center (ESCC) in Mainz, Germany. He has worked with IBM
midrange and high end products in the Open Systems environment and he has been involved
in new product introductions for FAStT, DS6000™, and DS8000®. Since 2007 his focus has
been on the SAN Volume Controller (SVC), covering European ATS support, and he is involved
in product updates and ESP and Beta programs. Thomas holds a degree in Electrical Engineering
from the Technical University Ilmenau, Germany.
Bill Wiegand is a Senior IT Specialist for IBM System Storage® and is part of the Global
Advanced Technical Skills organization. Bill has been with IBM for 32 years and in the
information technology field for 35, in various roles from hardware service and software
technical support to team lead for a UNIX® and DBA group in a Fortune 500 company. For
the last 10 years he has been focused on storage and is the Americas storage virtualization
expert for the SAN Volume Controller (SVC). In this role he has worldwide responsibilities for
educating the technical sales teams and business partners, helping to design and sell
solutions around SVC, assisting with customer performance and problem analysis as well as
acting as the technical liaison between the field and the SVC development team.
Jon Tate is a Project Manager for IBM System Storage System Networking and Virtualization
solutions at the International Technical Support Organization, San Jose Center. Before joining
the ITSO in 1999, he worked in the IBM Technical Support Center, providing level 2 and 3
support for IBM storage products. Jon has 25 years of experience in storage software and
management, services, and support, and is both an IBM Certified IT Specialist and an IBM
SAN Certified Specialist. He is also the UK Chair of the Storage Networking Industry
Association.
Howard Greaves
Geoff Lane
Alex Howell
Andrew Martin
Cameron McAllister
Paul Merrison
Steve Randle
Lucy Harris
Bill Scales
Dave Sinclair
Matt Smith
Steve White
Barry Whyte
Muhammad Zuba
IBM Hursley
Duane Bolland
Evelyn Wick
Larry Wiedenhoft
IBM Beaverton
Terry Niemeyer
IBM Austin
Mary Lovelace
IBM San Jose
Larry Chiu
Paul Muench
IBM Almaden
Sharon Wang
IBM Chicago
Chris Saul
IBM San Jose
A special thanks also goes to Brocade, which hosted the second team of authors in its
headquarters building in San Jose, California:
Mansi Botadra
Yong Choi
Silviano Gaona
Brian Steffler
Marcus Thordal
Steven Tong
Brocade Communications Systems
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
It is not our intent to provide a detailed description of virtualization in this book. For a more
detailed explanation of storage virtualization, refer to Implementing the IBM System Storage
SAN Volume Controller V6.1, SG24-7933.
IBM Storwize V7000 provides a number of configuration options which are aimed at
simplifying the implementation process. It also provides automated wizards, called Directed
Maintenance Procedures (DMP), to assist in resolving any events that may occur. IBM
Storwize V7000 is a clustered, scalable, midrange storage system, as well as an external
virtualization device.
Included with IBM Storwize V7000 is a simple, easy-to-use graphical user interface that
is designed to allow storage to be deployed quickly and efficiently. The GUI runs on the IBM
Storwize V7000 system, so there is no need for a separate console. The management GUI
contains a series of preestablished configuration options called presets that use commonly
used settings to quickly configure objects on the system. Presets are available for creating
volumes and FlashCopy® mappings and for setting up RAID configuration.
The IBM Storwize V7000 solution provides a choice of up to 120 x 3.5 inch or 240 x 2.5 inch
Serial Attached SCSI (SAS) drives for the internal storage and uses SAS cables and
connectors to attach to the optional expansion enclosures.
Note: Be aware that at the time of writing this book there is a temporary restriction that
limits the maximum number of enclosures to five (one control enclosure and four expansion
enclosures). This limitation will be removed in the near future.
When virtualizing external storage arrays, IBM Storwize V7000 can provide up to 32 PB of
usable capacity. IBM Storwize V7000 supports a range of external disk systems similar to
what the SVC supports today.
The IBM Storwize V7000 solution consists of a control enclosure and optionally up to nine
expansion enclosures (and supports the intermixing of the different expansion enclosures).
Within each enclosure are two canisters. Control enclosures contain two node canisters, and
expansion enclosures contain two expansion canisters.
Figure 1-3 shows the front view of the 2076-112 and 2076-212 enclosures.
Figure 1-3 IBM Storwize V7000 front view for 2076-112 and 2076-212 enclosures
The drives are positioned in four columns of three horizontally mounted drive assemblies. The
drive slots are numbered 1 to 12, starting at the top left and going left to right, top to bottom.
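As an illustration only (this helper is not part of any IBM tool), the slot numbering just described can be expressed as simple arithmetic:

```python
def slot_position(slot):
    """Map a 12-slot enclosure drive slot number (1-12) to its
    zero-based (row, column) position. Slots are numbered left to
    right, top to bottom, across 4 columns and 3 rows."""
    if not 1 <= slot <= 12:
        raise ValueError("slot must be between 1 and 12")
    index = slot - 1
    return index // 4, index % 4

# Slot 1 is the top-left drive; slot 12 is the bottom-right drive.
```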
Figure 1-4 shows the front view of the 2076-124 and 2076-224 enclosures.
Figure 1-4 IBM Storwize V7000 front view for 2076-124 and 2076-224 enclosures
The drives are positioned in one row of 24 vertically mounted drive assemblies. The drive slots
are numbered 1 to 24, starting from the left. (There is a vertical centre drive bay moulding
between slots 12 and 13.)
For a complete and updated list of IBM Storwize V7000 configuration limits and restrictions
check the Web site at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003702&myns=s028&mynp=familyind5402112&mync=E
This copies only the space used for changes between the source and the target, and is
generally referred to as a “snapshot”.
May be used with multi-target, cascaded and incremental FlashCopy
– Consistency Groups
Consistency groups address the issue where application data resides across multiple
volumes. By placing the FlashCopy relationships into a consistency group, commands
can be issued against all of the volumes residing in the group. This enables a
consistent point-in-time copy of all of the data, even though it may reside on physically
separate volumes.
FlashCopy mappings can be members of a consistency group, or they can be operated
in a stand-alone manner, not as part of a consistency group. FlashCopy commands
can be issued to a FlashCopy consistency group, which affects all FlashCopy
mappings in the consistency group, or to a single FlashCopy mapping if it is not part of
a defined FlashCopy consistency group.
Metro Mirror (licensed based on the number of enclosures and includes both Metro and
Global Mirror)
Provides a synchronous remote mirroring function up to approximately 300 km between
sites. Because the host I/O only completes once the data is cached at both locations,
performance requirements may limit the practical distance. Metro Mirror is designed to
provide fully synchronized copies at both sites with zero data loss once the initial copy is
completed.
Metro Mirror can operate between multiple IBM Storwize V7000 systems.
Global Mirror (licensed based on capacity being mirrored and includes both Metro and
Global Mirror)
Provides a long-distance asynchronous remote mirroring function up to approximately
8,000 km between sites. With Global Mirror, the host I/O completes locally and the changed
data is sent to the remote site later. This is designed to maintain a consistent, recoverable
copy of data at the remote site, which lags behind the local site.
Global Mirror can operate between multiple IBM Storwize V7000 systems.
Data Migration (no charge for temporary usage)
IBM Storwize V7000 provides a data migration function that can be used to import external
storage systems into the IBM Storwize V7000 system.
It allows you to:
– Move volumes non-disruptively onto a newly installed storage system
– Move volumes to rebalance a changed workload
– Migrate data from other back-end storage to IBM Storwize V7000 managed storage
Easy Tier (no charge)
Provides a mechanism to seamlessly migrate hot spots to the most appropriate tier within
the IBM Storwize V7000 solution. This could be to internal drives within IBM Storwize
V7000 or to external storage systems that are virtualized by IBM Storwize V7000.
This is shown in Figure 1-5 on page 8.
Note: If the Storwize V7000 is to be used as a general migration tool, then the appropriate
External Virtualization licenses must be ordered. The only exception is migrating existing
data from external storage to IBM Storwize V7000 internal storage, for which you can
temporarily configure your External Storage license for use within 45 days. For a migration
from external storage to IBM Storwize V7000 internal storage that takes longer than 45 days,
the appropriate External Virtualization license must be ordered.
SBB is a specification created by a non-profit working group that is defining a mechanical and
electrical interface between a passive backplane drive array and the electronics packages
that give the array its “personality”.
There are two power supply slots, on the extreme left and extreme right, each taking up the
full 2EIA height. The left hand slot is power supply 1, the right hand slot is power supply 2.
The power supplies are inserted different ways up. Power supply 1 appears the correct way
up, and power supply 2 upside down.
There are two canister slots, one above the other in the middle of the chassis. The top slot is
canister 1, the bottom slot canister 2. The canisters are inserted different ways up. Canister 1
appears the correct way up, and canister 2 upside down.
Within a control enclosure, each power supply unit (PSU) contains a battery. The battery is
designed to enable the IBM Storwize V7000 system to perform a dump of the cache to
internal disks in the event of both power inputs failing.
The two nodes act as a single processing unit and form an I/O group that is attached to the
SAN fabric. The pair of nodes is responsible for serving I/O to a given volume.
The two nodes provide a highly available, fault-tolerant controller: if one node fails, the
surviving node automatically takes over. Nodes are deployed in pairs called I/O groups.
One node is designated as the configuration node; however, each node in the control
enclosure holds a copy of the control enclosure state information.
The terms node canister and node are used interchangeably throughout this book.
There are four Fibre Channel ports on the left side of the canister. They are in a block of four
in two rows of two connectors. The ports are numbered 1 to 4 from left to right, top to bottom.
The ports operate at 2, 4 or 8 Gb/s. Use of the ports is optional. There are two green LEDs
associated with each port, the speed LED and link activity LED.
There are two 10/100/1000 Mb/s Ethernet ports side by side on the canister. They are
numbered 1 on the left and 2 on the right. Use of port 1 is required while use of port 2 is
optional. There are two LEDs associated with each Ethernet port.
There are two USB 2.0 ports side by side on the canister. They are numbered 1 on the left
and 2 on the right. Use of the connectors is optional. The only defined use is with a USB
memory stick, as described in Chapter 2, “Initial configuration” on page 35.
There are two 6 Gb/s SAS ports side by side on the canister. They are numbered 1 on the left
and 2 on the right. These ports are used to connect to the optional Expansion enclosures.
The expansion enclosure power supplies are similar to those of the control enclosure, but do
not contain a battery. There is a single power lead connector on each power supply unit. The
PSU has an IEC C14 socket and the mains connection cable has a C13 plug.
Each expansion canister provides 2 SAS interfaces which are used to connect to the Control
enclosure and any optional Expansion enclosures. The ports are numbered 1 on the left and
2 on the right. SAS port 1 is the IN port and SAS port 2 is the OUT port. There is also a
symbol printed above the SAS ports to identify whether it is an IN or an OUT port.
Use of SAS connector 1 is mandatory, because the expansion enclosure must be attached to
either a control enclosure or another expansion enclosure. SAS connector 2 is optional; it is
used to attach additional expansion enclosures.
Each port connects four PHYs (SAS physical links). There is an LED associated with each
PHY in each port (8 LEDs in total). The LEDs are green and are next to the ports, and for
each port they are numbered 1 through 4. The LED indicates activity on the PHY.
Table 1-3 shows the IBM Storwize V7000 disk drive types that are available at the time
of writing.
Figure 1-10 on page 12 shows an overview of the hardware components of the IBM
Storwize V7000 solution.
The IBM Storwize V7000 subsystem consists of a set of drive enclosures. Control Enclosures
contain disk drives and two nodes – an I/O group – which are attached to the SAN fabric.
Expansion Enclosures contain drives and are attached to control enclosures.
The simplest use of the IBM Storwize V7000 is as a traditional RAID subsystem. The internal
drives are configured into RAID arrays, and virtual disks are created from those arrays.
The IBM Storwize V7000 can also be used to virtualize other storage controllers as discussed
in Chapter 9, “External Storage Virtualization” on page 337.
The IBM Storwize V7000 supports regular and solid-state drives and uses IBM System
Storage Easy Tier to automatically place volume hot-spots on better-performing storage.
1.4.1 Nodes
IBM Storwize V7000 has two hardware components called nodes or node canisters that
provide the virtualization of internal and external volumes, cache and copy services (remote
copy) functions. A cluster consists of one node pair.
One of the nodes within the cluster is known as the configuration node, and it is the node that
manages configuration activity for the cluster. If this node fails, the cluster nominates the other
node to become the configuration node.
When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to the I/O Group. Also, under normal conditions, the I/Os for that specific volume are
always processed by the same node within the I/O Group.
Both nodes of the I/O Group act as preferred nodes for their own specific subset of the total
number of volumes that the I/O Group presents to the host servers, a maximum of 2048
volumes. However, each node also acts as a failover node for its partner node within the I/O
Group, so a node will take over the I/O workload from its partner node, if required, with no
impact to the server’s application.
So in a Storwize V7000 environment, which uses an active-active architecture, the I/O
handling for a volume can be managed by both nodes of the I/O Group. Therefore, it is
mandatory for servers that are connected through FC to use multipath drivers to be able to
handle this capability.
The Storwize V7000 I/O Group is connected to the SAN so that all application servers
accessing volumes from the I/O Group have access to them. Up to 256 host server objects
can be defined in the I/O Group.
Important: The active/active architecture provides availability to process I/Os for both
controller nodes and allows the application to continue running smoothly, even if the server
has only one access route or path to the storage controller. This type of architecture
eliminates the path/LUN thrashing typical of an active/passive architecture.
1.4.3 Cluster
A cluster consists of a pair of nodes. All configuration, monitoring, and service tasks are
performed at the cluster level, and the configuration settings are replicated across both node
canisters in the cluster. To facilitate these tasks, one or two management IP addresses are set
for the cluster.
There is a process provided to back up the cluster configuration data onto disk so that the
cluster can be restored in the event of a disaster. This method does not back up application
data, only Storwize V7000 cluster configuration information.
Note: After backup of the cluster configuration, remember to save the backup data on your
hard disk (or at the very least outside of the SAN), because in the event that you are
unable to access the Storwize V7000 you will not have access to the backup data if it is on
the SAN.
For the purposes of remote data mirroring, two or more clusters (IBM Storwize V7000
systems) must form a partnership prior to creating relationships between mirrored volumes.
Note: At this time remote copy services are supported only between Storwize V7000
systems. Any remote copy relationship between Storwize V7000 and SVC is not
supported.
One node is designated as the configuration node canister, and it is the only node that
activates the cluster IP address. If the configuration node canister fails, the cluster chooses a
new configuration node, which takes over the cluster IP addresses.
The cluster can be configured using the IBM Storwize V7000 management software, the
Command Line Interface (CLI), or an application that uses the IBM Storwize V7000
CIMOM (for example, TPC). IBM Systems Director also provides flexible server and storage
management capability.
1.4.4 RAID
The Storwize V7000 setup will contain a number of internal drive objects, but these drives
cannot be directly added to storage pools.
The drives need to be included in a Redundant Array of Independent Disks (RAID) to provide
protection against the failure of individual drives.
An array is a type of MDisk made up of disk drives. These drives are referred to as members
of the array. Each array has a RAID level. RAID levels provide different degrees of
redundancy and performance, and have different restrictions on the number of members in
the array.
IBM Storwize V7000 supports hot spare drives. When an array member drive fails the system
automatically replaces the failed member with a hot spare drive and rebuilds the array to
restore its redundancy. Candidate and spare drives can be manually exchanged with array
members.
Each array has a set of goals that describe the desired location and performance of each
array member. A sequence of drive failures and hot-spare takeovers can leave an array
unbalanced, that is, with members that do not match these goals. The system automatically
rebalances such arrays when appropriate drives are available.
RAID 0 arrays stripe data across the drives. The system supports RAID 0 arrays with a single
member, which is similar to traditional JBOD attachment. RAID 0 arrays have no redundancy,
so they do not support hot-spare takeover or immediate exchange. A RAID 0 array can be
formed from 1 to 8 drives.
RAID 1 arrays stripe data over mirrored pairs of drives. Each RAID 1 mirrored pair is rebuilt
independently. A RAID 1 array can be formed from 2 drives only.
RAID 5 arrays stripe data over the member drives with one parity strip on every stripe. RAID 5
arrays have single redundancy: the parity algorithm means that an array can tolerate no more
than one member drive failure. A RAID 5 array can be formed from 3 to 16 drives.
RAID 6 arrays stripe data over the member drives with two parity strips on every stripe, known
as the P-parity and the Q-parity. The two parity strips are calculated using different algorithms,
which gives the array double redundancy. A RAID 6 array can be formed from 5 to 16 drives.
RAID 10 arrays stripe data over mirrored pairs of drives and have single redundancy:
although they can tolerate one failure in every mirrored pair, they cannot tolerate the failure of
both drives in a pair. One member out of every pair can be rebuilding or missing at the same
time. A RAID 10 array can be formed from 2 to 16 drives.
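The member-count limits quoted above can be collected into a small lookup table. The sketch below is illustrative only, based solely on the figures in this section (it is not an IBM API, and the even-count rule for RAID 10 is an assumption derived from the mirrored-pair description):

```python
# Allowed member counts per RAID level, as quoted in this section.
RAID_MEMBER_LIMITS = {
    "raid0":  (1, 8),
    "raid1":  (2, 2),
    "raid5":  (3, 16),
    "raid6":  (5, 16),
    "raid10": (2, 16),
}

def valid_member_count(level, drives):
    """Return True if 'drives' is an allowed member count for 'level'.
    RAID 10 is assumed to require whole mirrored pairs (even counts)."""
    low, high = RAID_MEMBER_LIMITS[level]
    if level == "raid10" and drives % 2 != 0:
        return False
    return low <= drives <= high
```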
For a detailed description about RAID levels see the Web site at :
http://en.wikipedia.org/wiki/RAID
An MDisk is not visible to a host system on the storage area network, because it is internal or
zoned only to the IBM Storwize V7000 system.
Array-mode MDisks are constructed from drives using the RAID function. Array MDisks are
always associated with storage pools.
Unmanaged
This is when an MDisk is not being used by the cluster. This might occur when an MDisk is
first imported into the cluster, for example.
Managed
This is when an MDisk is assigned to a storage pool and provides extents that volumes
can use.
Image
This is when an MDisk is assigned directly to a volume with a one-to-one mapping of
extents between the MDisk and the volume. This is normally used when importing logical
volumes into the cluster that already have data on them. This ensures the data is
preserved as it is imported into the cluster.
The cluster automatically forms the quorum disk by taking a small amount of space from a
managed disk (MDisk). It allocates three different quorum disks for redundancy, although
only one quorum disk is active.
If the environment has multiple storage systems, then to avoid the possibility of losing all
quorum disks through the failure of a single storage system, it is recommended to allocate the
quorum disks on different storage systems.
Note: Names must begin with a letter and can be a maximum of 63 characters. Valid
characters are uppercase letters (A-Z), lowercase letters (a-z), digits (0-9), underscore (_),
period (.), hyphen (-), and space. Names must not begin or end with a space.
MDisks can be added to a storage pool at any time to increase the capacity of the storage
pool. An MDisk can belong to only one storage pool, and only MDisks in unmanaged mode
can be added to a storage pool. When an MDisk is added to a storage pool, its mode
changes from unmanaged to managed; when it is removed, the mode changes back.
Each MDisk in the storage pool is divided into a number of extents. The extent size is
selected by the administrator when the storage pool is created and cannot be changed
later. The extent size ranges from 16 MB up to 8 GB.
The extent size has a direct impact on the maximum volume size and storage capacity of the
cluster. A cluster can manage 4 million (4 x 1024 x 1024) extents. For example, a cluster with
a 16 MB extent size can manage up to 16 MB x 4 M = 64 TB of storage.
The effect of extent size on the maximum volume size is shown in Table 1-4, which lists the
extent size and the corresponding maximum volume capacity.

Table 1-4  Maximum volume capacity by extent size

Extent size (MB)   Maximum volume capacity (GB)
16                 2048 (2 TB)
32                 4096 (4 TB)
64                 8192 (8 TB)
The effect of extent size on the maximum cluster capacity is shown in Table 1-5.

Table 1-5  Maximum cluster capacity by extent size

Extent size   Maximum storage capacity of cluster
16 MB         64 TB
32 MB         128 TB
64 MB         256 TB
128 MB        512 TB
256 MB        1 PB
512 MB        2 PB
1024 MB       4 PB
2048 MB       8 PB
4096 MB       16 PB
8192 MB       32 PB
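The figures in both tables follow directly from the extent arithmetic above. A minimal sketch of the calculation, assuming (as Table 1-4 implies from its 16 MB extent, 2 TB volume row) a fixed limit of 128 K extents per volume:

```python
MB = 1024 ** 2
TB = 1024 ** 4
EXTENTS_PER_CLUSTER = 4 * 1024 * 1024   # 4 million extents per cluster (from the text)
EXTENTS_PER_VOLUME = 128 * 1024         # inferred from Table 1-4: 2 TB / 16 MB

def max_cluster_capacity_tb(extent_size_mb):
    """Maximum storage capacity of a cluster for a given extent size (Table 1-5)."""
    return extent_size_mb * MB * EXTENTS_PER_CLUSTER // TB

def max_volume_capacity_tb(extent_size_mb):
    """Maximum single-volume capacity for a given extent size (Table 1-4)."""
    return extent_size_mb * MB * EXTENTS_PER_VOLUME // TB
```

For example, a 256 MB extent size gives 256 MB x 4 M = 1024 TB (1 PB) of cluster capacity, matching the table.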
We recommend that you use the same extent size for all storage pools in a cluster; this is a
prerequisite for supporting volume migration between two storage pools. If the storage pool
extent sizes are not the same, you must use volume mirroring to copy volumes between
storage pools, as described in Chapter 7, “Storage Pools” on page 217.
For most clusters, a capacity of 1 PB is sufficient. We therefore recommend an extent size
of 256 MB.
Note: The IBM Storwize V7000 GUI uses a default extent size of 256 MB when you define
a new storage pool.
A storage pool can have a threshold warning set that will automatically issue a warning alert
when the used capacity of the storage pool exceeds the set limit.
A multi-tiered storage pool will therefore contain MDisks with different characteristics as
opposed to the single tier storage pool. However, we recommend that each tier has MDisks of
the same size and MDisks that provide the same number of extents.
A multi-tiered storage pool is used to enable automatic migration of extents between disk tiers
using the IBM Storwize V7000 Easy Tier function, as described in Chapter 10, “Easy Tier” on
page 351.
1.4.8 Volumes
A volume is a logical disk that is presented to a host system by the cluster. In our virtualized
environment the host system has a volume mapped to it by IBM Storwize V7000. IBM
Storwize V7000 translates this volume into a number of extents which are allocated across
MDisks. The advantage with storage virtualization is that the host is “decoupled” from the
underlying storage so the virtualization appliance can move the extents around without
impacting the host system.
The host system cannot directly access the underlying MDisks in the same manner as it could
access RAID arrays in a traditional storage environment.
Sequential
A sequential volume is one where the extents are allocated one after the other from one
MDisk before progressing to the next MDisk.
This is shown in Figure 1-13 on page 20.
Image Mode
Image mode volumes are special volumes that have a direct relationship with one MDisk.
They are used to migrate existing data into and out of the cluster.
When the image mode volume is created a direct mapping is made between extents that
are on the MDisk and the extents that are on the volume. The logical block address (LBA)
x on the MDisk is the same as the LBA x on the volume. This ensures that the data on the
MDisk is preserved as it is brought into the cluster.
This is illustrated in Figure 1-14 on page 21.
Some virtualization functions are not available for image mode volumes so it is often useful to
migrate the volume into a new storage pool. Once migrated the MDisk becomes a managed
MDisk.
If you add an MDisk containing data to a storage pool, any data on the MDisk is lost. Ensure
that you create image mode volumes from MDisks that contain data before adding those
MDisks to storage pools.
The real capacity will determine the quantity of MDisk extents that will be allocated for the
volume. The virtual capacity will be the capacity of the volume reported to IBM Storwize
V7000 and to the host servers.
The real capacity will be used to store both the user data and the metadata for the Thin
Provisioned volume. The real capacity can be specified as an absolute value or a percentage
of the virtual capacity.
The thin provisioning feature can be used on its own to create over-allocated volumes, or it
can be used in conjunction with FlashCopy. Thin Provisioned volumes can be used in
conjunction with the mirrored volume feature as well.
A thin provisioned volume can be configured to autoexpand, which causes IBM Storwize
V7000 to automatically expand the real capacity of a thin provisioned volume as its real
capacity is used. Autoexpand attempts to maintain a fixed amount of unused real capacity on
the volume. This amount is known as the “contingency capacity”.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
A volume that is created with a zero contingency capacity goes offline as soon as it needs to
expand, whereas a volume with a non-zero contingency capacity stays online until the
contingency capacity has been used up.
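The contingency-capacity behaviour described above can be sketched as a toy model. This is an illustration of the rules as stated in the text, not the actual IBM Storwize V7000 implementation:

```python
class ThinVolume:
    """Toy model of thin-provisioned autoexpand and contingency capacity."""

    def __init__(self, virtual, real):
        self.virtual = virtual    # capacity reported to hosts
        self.real = real          # capacity actually allocated from MDisk extents
        self.used = 0
        self.contingency = real   # initially set to the assigned real capacity

    def set_real(self, real):
        """Manually changing the real capacity resets the contingency to the
        difference between the used capacity and the real capacity."""
        self.real = real
        self.contingency = self.real - self.used

    def write(self, amount):
        """Autoexpand: maintain `contingency` of unused real capacity, never
        growing the real capacity beyond the virtual capacity."""
        self.used += amount
        if self.real - self.used < self.contingency:
            self.real = min(self.used + self.contingency, self.virtual)
```

For example, a volume created with 10 units of real capacity keeps roughly 10 units of headroom as data is written, until the virtual capacity caps further growth.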
Autoexpand will not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity will be recalculated.
To support the autoexpansion of thin provisioned volumes, the storage pools from which they
are allocated have a configurable warning capacity. When the used capacity of the pool
exceeds the warning capacity, a warning is logged.
For example, if a warning threshold of 80% has been specified, the warning is logged when
only 20% of the capacity remains free.
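The warning arithmetic in the example above amounts to a one-line check (illustrative only):

```python
def warning_logged(used, capacity, warning_pct=80):
    """True when used capacity exceeds the pool's warning threshold, that is,
    when less than (100 - warning_pct)% of the capacity remains free."""
    return used > capacity * warning_pct / 100

# An 80% warning on a 10 TB pool fires once more than 8 TB is used,
# i.e. when less than 20% of the capacity remains free.
```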
Note: Thin-provisioned volumes require additional I/O operations to read and write
metadata to the internal disks or to the back-end storage, which adds load to the IBM
Storwize V7000 nodes. We therefore do not recommend the use of these volumes for high
performance applications, or for any workload with a high write I/O component.
A thin provisioned volume can be converted to a fully allocated volume using volume
mirroring (and vice versa).
When a host system issues a write to a mirrored volume, IBM Storwize V7000 writes the
data to both copies. When a host system issues a read to a mirrored volume, IBM Storwize
V7000 directs the read to the primary copy. If one of the mirrored volume copies is temporarily
unavailable, IBM Storwize V7000 automatically uses the alternate copy without any
outage for the host system. When the mirrored volume copy is repaired, IBM Storwize V7000
resynchronizes the data.
A mirrored volume can be converted into a non-mirrored volume by deleting one copy or by
splitting one copy to create a new non-mirrored volume.
A mirrored volume copy can be of any type: image, striped, or sequential, and either thin
provisioned or fully allocated. The two copies can be of completely different volume types.
Using mirrored volumes can also assist with migrating volumes between storage pools that
have different extent sizes and can provide a mechanism to migrate fully allocated volumes to
thin provisioned volumes without any host outages.
Note: An unmirrored volume can be migrated from one location to another by simply
adding a second copy at the desired destination, waiting for the two copies to synchronize,
and then removing the original copy. This operation can be stopped at any time.
The Easy Tier function may be turned on or off at the storage pool and volume level.
It is possible to gauge the potential benefit of Easy Tier in your environment before installing
solid-state disks. If you turn the Easy Tier function on for a single-tier storage pool and also
for the volumes within that pool, Easy Tier creates a migration report every 24 hours stating
the number of extents it would move if the pool were a multi-tiered pool, and Easy Tier
statistics measurement is enabled.
The use of Easy Tier may make it more appropriate to use smaller storage pool extent sizes.
The usage statistics file can be off-loaded from the IBM Storwize V7000 nodes, and the IBM
Storage Advisor Tool can then be used to create a summary report.
Contact your IBM representative or IBM Business Partner for more information on the
Storage Advisor Tool.
We describe Easy Tier in more detail in Chapter 10, “Easy Tier” on page 351.
1.4.12 iSCSI
iSCSI is an alternative means of attaching hosts to the IBM Storwize V7000. All
communications with back-end storage subsystems, and with other IBM Storwize V7000
systems, occur only via FC.
The iSCSI function is a software function that is provided by the IBM Storwize V7000 code,
not hardware.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that
encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP
network, instead of requiring expensive FC HBAs and a SAN fabric infrastructure.
A pure SCSI architecture is based on the client/server model. A client (for example, server or
workstation) initiates read or write requests for data from a target server (for example, a data
storage system).
Commands, which are sent by the client and processed by the server, are put into the
Command Descriptor Block (CDB). The server executes a command, and completion is
indicated by a special signal alert.
The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the TCP/IP network, especially over a
potentially unreliable IP network.
The concepts of names and addresses have been carefully separated in iSCSI:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
“initiator name” and “target name” also refer to an iSCSI name.
An iSCSI Address specifies not only the iSCSI name of an iSCSI node, but also a location
of that node. The address consists of a host name or IP address, a TCP port number (for
the target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). An IBM Storwize V7000 node represents
an iSCSI node and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN),
which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted
for Internet nodes.
The iSCSI qualified name format is defined in RFC3720 and contains (in order) these
elements:
The string “iqn”.
A date code specifying the year and month in which the organization registered the
domain or sub-domain name used as the naming authority string.
The organizational naming authority string, which consists of a valid, reversed domain or a
subdomain name.
Optionally, a colon (:), followed by a string of the assigning organization’s choosing, which
must make each assigned iSCSI name unique.
For IBM Storwize V7000, the IQN for its iSCSI target is specified as:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Windows® server, the IQN, that is, the name for the iSCSI Initiator, can be defined as:
iqn.1991-05.com.microsoft:<computer name>
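The target IQN format above can be assembled mechanically. In the following sketch, the cluster and node names passed in are hypothetical placeholders of your choosing:

```python
def v7000_target_iqn(cluster_name, node_name):
    """Assemble an IBM Storwize V7000 iSCSI target IQN in the format given
    above: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>."""
    # iSCSI names are conventionally lowercase
    iqn = "iqn.1986-03.com.ibm:2145.{}.{}".format(cluster_name.lower(),
                                                  node_name.lower())
    assert len(iqn) <= 255, "an IQN can have a size of up to 255 bytes"
    return iqn
```

This is also why renaming the cluster or a node changes the IQN, as the caution below explains.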
An IQN can be abbreviated by using a descriptive name, known as an alias. An alias can be
assigned to an initiator or a target. The alias is independent of the name and does not have to
be unique. Because it is not unique, the alias must be used in a purely informational way. It
cannot be used to specify a target at login or used during authentication. Both targets and
initiators can have aliases.
An iSCSI name provides the correct identification of an iSCSI device irrespective of its
physical location. Remember, the IQN is an identifier, not an address.
Be careful: Before changing cluster or node names for an IBM Storwize V7000 cluster that
has servers connected to it by way of iSCSI, be aware that because the cluster and node
names are part of the IBM Storwize V7000’s IQN, you can lose access to your data by
changing these names. The IBM Storwize V7000 GUI displays a specific warning; the
CLI does not.
The iSCSI session, which consists of a login phase and a full feature phase, is completed with
a special command.
The login phase of the iSCSI is identical to the FC port login process (PLOGI). It is used to
adjust various parameters between two network entities and to confirm the access rights of
an initiator.
If the iSCSI login phase is completed successfully, the target confirms the login for the
initiator; otherwise, the login is not confirmed and the TCP connection breaks.
As soon as the login is confirmed, the iSCSI session enters the full feature phase. If more
than one TCP connection was established, iSCSI requires that each command/response pair
go through one TCP connection. Thus, each separate read or write command is carried
out without the necessity of tracing each request across separate flows. However,
separate transactions can be delivered through separate TCP connections within one
session.
For further details on configuring iSCSI, refer to Chapter 4, “Host Configuration” on page 125.
1.4.13 Hosts
A host system is a server that is connected to IBM Storwize V7000 through a Fibre Channel
connection and/or through an iSCSI connection.
Hosts are defined to IBM Storwize V7000 by identifying their worldwide port names (WWPNs)
for Fibre Channel hosts. For iSCSI hosts they are identified by using their iSCSI names. The
iSCSI names can either be iSCSI qualified names (IQNs) or extended unique identifiers
(EUIs).
Copy services functions are implemented within a single IBM Storwize V7000 or between
multiple IBM Storwize V7000s. The copy services layer sits above and operates
independently of the function or characteristics of the underlying disk subsystems used to
provide storage resources to an IBM Storwize V7000.
The remote copy can be maintained in one of two modes: synchronous or asynchronous.
With the IBM Storwize V7000, Metro Mirror and Global Mirror are the IBM branded terms for
the functions that are synchronous remote copy and asynchronous remote copy.
Synchronous remote copy ensures that updates are committed at both the primary and the
secondary before the application considers the updates complete; therefore, the secondary is
fully up-to-date if it is needed in a failover. However, the application is fully exposed to the
latency and bandwidth limitations of the communication link to the secondary. In a truly
remote situation, this extra latency can have a significant adverse effect on application
performance.
Special configuration guidelines exist for SAN fabrics that are used for data replication. It is
necessary to consider the distance and available bandwidth of the intersite links. Refer to
Chapter 11, “Copy Services” on page 379.
In asynchronous remote copy the application is provided acknowledgement that the write is
complete prior to the write being committed at the secondary. Hence, on a failover, certain
updates (data) might be missing at the secondary. The application must have an external
mechanism for recovering the missing updates if possible. This mechanism can involve user
intervention. Recovery on the secondary site involves bringing up the application on this
recent “backup” and, then, rolling forward or backward to the most recent commit point.
IBM Support for automation is provided by IBM Tivoli® Storage Productivity Center for
Replication.
The Tivoli documentation can also be accessed online at the IBM Tivoli Storage Productivity
Center information center:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
1.5.2 FlashCopy
FlashCopy makes a copy of a source volume on a target volume. The original content of the
target volume is lost. After the copy operation has started, the target volume has the contents
of the source volume as it existed at a single point in time. Although the copy operation takes
time, the resulting data at the target appears as though the copy was made instantaneously.
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the
management operations to be coordinated so that a common single point in time is chosen
for copying target volumes from their respective source volumes.
IBM Storwize V7000 also permits multiple target volumes to be FlashCopied from the same
source volume. This capability can be used to create images from separate points in time for
the source volume, as well as to create multiple images from a source volume at a common
point in time. Source and/or target volumes can be thin-provisioned volumes.
Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship and without having to wait for the original copy
operation to complete. IBM Storwize V7000 supports multiple targets and thus multiple
rollback points.
Most clients aim to integrate the FlashCopy feature for point in time copies and quick recovery
of their applications and databases. IBM Support is provided by Tivoli Storage FlashCopy
Manager:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/
You can read a detailed description of FlashCopy copy services in Chapter 11, “Copy
Services” on page 379.
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003702&myns=s028&mynp=familyind5402112&mync=E
Remote Copy (Metro Mirror and Global Mirror) relationships per cluster: 2040. This can be
any mix of Metro Mirror and Global Mirror relationships.
Total Metro Mirror and Global Mirror volume capacity per I/O group: 1024 TB. This limit is
the total capacity for all master and auxiliary volumes in the I/O group.
You can maintain a chat session with the IBM service representative so that you can monitor
the activity and either understand how to fix the problem yourself or allow the representative
to fix it for you.
To use the IBM Assist On-site tool, the SSPC or master console must be able to access the
Internet. The following Web site provides further information about this tool:
http://www.ibm.com/support/assistonsite/
When you access the Web site, you sign in and enter a code that the IBM service
representative provides to you. This code is unique to each IBM Assist On-site session. A
plug-in is downloaded onto your SSPC or master console to connect you and your IBM
service representative to the remote service session. The IBM Assist On-site contains several
layers of security to protect your applications and your computers.
You can also use security features to restrict access by the IBM service representative.
Your IBM service representative can provide you with more detailed instructions for using the
tool.
Event notifications
IBM Storwize V7000 can use Simple Network Management Protocol (SNMP) traps, syslog
messages, and Call Home email to notify you and the IBM Support Center when significant
events are detected. Any combination of these notification methods can be used
simultaneously.
Each event that IBM Storwize V7000 detects is assigned a notification type of Error, Warning,
or Information. You can configure IBM Storwize V7000 to send each type of notification to
specific recipients.
SNMP traps
SNMP is a standard protocol for managing networks and exchanging messages. IBM
Storwize V7000 can send SNMP messages that notify personnel about an event. You can use
an SNMP manager to view the SNMP messages that IBM Storwize V7000 sends. You can
use the management GUI or the IBM Storwize V7000 command-line interface to configure
and modify your SNMP settings.
You can use the Management Information Base (MIB) file for SNMP to configure a network
management program to receive SNMP messages that are sent by the IBM Storwize V7000.
This file can be used with SNMP messages from all versions of IBM Storwize V7000
software.
Syslog messages
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be either IPv4 or IPv6. IBM Storwize V7000
can send syslog messages that notify personnel about an event. IBM Storwize V7000 can
transmit syslog messages in either expanded or concise format. You can use a syslog
manager to view the syslog messages that IBM Storwize V7000 sends. IBM Storwize V7000
uses the User Datagram Protocol (UDP) to transmit the syslog message. You can use the
management GUI or the IBM Storwize V7000 command-line interface to configure and modify
your syslog settings.
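As noted above, syslog notifications travel over UDP. A minimal sketch of a UDP syslog sender using Python's standard library illustrates the transport; the receiver address is a placeholder for whatever syslog manager you point notifications at (the real sender, of course, is the IBM Storwize V7000 itself):

```python
import logging
import logging.handlers

def make_syslog_logger(server, port=514):
    """Return a logger that forwards messages to a syslog receiver over UDP,
    the same transport IBM Storwize V7000 uses for its syslog notifications."""
    logger = logging.getLogger("v7000-events")
    logger.setLevel(logging.INFO)
    # SysLogHandler uses UDP (SOCK_DGRAM) by default, matching the text
    logger.addHandler(logging.handlers.SysLogHandler(address=(server, port)))
    return logger
```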
To send email, you must configure at least one SMTP server. You can specify as many as five
additional SMTP servers for backup purposes. The SMTP server must accept the relaying of
email from the IBM Storwize V7000 cluster IP address. You can then use the management
GUI or the IBM Storwize V7000 command-line interface to configure the email settings,
including contact information and email recipients. Set the reply address to a valid email
address. Send a test email to check that all connections and infrastructure are set up
correctly. You can disable the Call Home function at any time using the management GUI or
the IBM Storwize V7000 command-line interface.
control enclosure A hardware unit that includes the chassis, node canisters,
drives, and power sources that include batteries.
expansion canister A hardware unit that includes the serial-attached SCSI (SAS)
interface hardware that enables the node hardware to use the
drives of the expansion enclosure.
expansion enclosure A hardware unit that includes expansion canisters, drives, and
power sources that do not include batteries.
external storage Managed disks (MDisks) that are Small Computer Systems
Interface (SCSI) logical units presented by storage systems that
are attached to and managed by the cluster.
host mapping The process of controlling which hosts have access to specific
volumes within a cluster.
internal storage Array managed disks (MDisks) and drives that are held in
enclosures and nodes that are part of the cluster.
node canister A hardware unit that includes the node hardware, fabric and
service interfaces, and serial-attached SCSI (SAS) expansion
ports.
PHY A single SAS lane. There are four PHYs in each SAS cable.
quorum disk A disk that contains a reserved area that is used exclusively for
cluster management. The quorum disk is accessed when it is
necessary to determine which half of the cluster continues to
read and write data. Quorum disks can either be MDisks or
internal drives.
http://www.ibm.com/storage/support/storwize/v7000
http://www-03.ibm.com/systems/storage/news/center/storwize_v7000/index.html
The IBM Storwize V7000 Supported hardware list is at this Web site:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003703
The IBM Storwize V7000 Configuration Limit and Restrictions are at this Web site:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003702&myns=s028&mynp=familyin
d5402112&mync=E
http://www-947.ibm.com/support/entry/portal/Documentation/Hardware/System_Storage/
Disk_systems/Mid-range_disk_systems/IBM_Storwize_V7000_%282076%29
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp
You can see the IBM Redbooks publications about IBM Storwize V7000 at this Web site:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=v7000
A minimum of two Ethernet cable drops, with four preferred for additional configuration
access redundancy and/or iSCSI host access. Ethernet port 1 on each node canister
must be connected to the LAN; port 2 is optional.
Note: Port 1 on each node canister must be connected to the same physical LAN or be
configured in the same VLAN and be on the same subnet or set of subnets.
Verify that the default IP addresses configured on Ethernet port 1 of the node
canisters, 192.168.70.121 on node 1 and 192.168.70.122 on node 2, do not conflict
with existing IP addresses on the LAN. The default mask used with these IP addresses is
255.255.255.0 and the default gateway address used is 192.168.70.1.
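The conflict check above can be sketched with Python's standard ipaddress module; the LAN address list is a placeholder you would replace with your own inventory:

```python
import ipaddress

# Factory-default service addresses from the text
V7000_DEFAULTS = {
    "node1": "192.168.70.121",
    "node2": "192.168.70.122",
}

def conflicting_nodes(existing_lan_addresses):
    """Return the node canisters whose default service IP clashes with an
    address already in use on the LAN."""
    existing = {ipaddress.ip_address(a) for a in existing_lan_addresses}
    return sorted(name for name, addr in V7000_DEFAULTS.items()
                  if ipaddress.ip_address(addr) in existing)
```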
A minimum of three IPv4 or IPv6 addresses for cluster configuration: one for the
cluster itself, which the administrator uses for management, and one for each node
canister for service access as needed.
Note: A fourth IP address is recommended for backup configuration access. This will
allow a second cluster IP address to be configured on port two of either node canister
that the storage administrator can also use for management of the IBM Storwize V7000
system.
A minimum of one and up to four IPv4 or IPv6 addresses are needed if iSCSI attached
hosts will access volumes from the IBM Storwize V7000.
Two SAS cables (of one, three, or six meters) per expansion enclosure are required. The
length of the cables depends on the physical rack location of the enclosure relative to the
control enclosure and/or other expansion enclosures. The recommendation is to locate the
control enclosure such that four enclosures can be located above it and five enclosures
below it, as shown in Figure 2-1.
Note: The disk drives included with the control enclosure, model 2076-224 or
2076-212, are part of SAS chain number two. Therefore, only four additional expansion
enclosures can be connected to this chain. SAS chain number one supports the
addition of up to five expansion enclosures. The first expansion enclosure should be
connected to SAS chain number one so both chains are used and the full bandwidth of
the system is utilized.
Figure 2-1 on page 37 shows the recommended racking and cabling scheme.
Once IBM Storwize V7000, hosts and optional external storage systems are connected to the
SAN fabrics, zoning will need to be implemented.
In each fabric create a zone with just the four IBM Storwize V7000 WWPNs, two from each
node canister. If there is an external storage system to be virtualized then in each fabric
create a zone with the four IBM Storwize V7000 WWPNs, two from each node canister, along
with up to a maximum of eight WWPNs from the external storage system. Assuming every
host has a Fibre Channel connection to each fabric then in each fabric create a zone with the
host WWPN and one WWPN from each node canister in the IBM Storwize V7000 system.
Note: IBM Storwize V7000 supports a maximum of sixteen ports or WWPNs from a given
external storage system that will be virtualized.
Figure 2-2 on page 39 is an example of how to cable devices to the SAN. Reference this
example as we discuss the recommended zoning below.
Create a Host/IBM Storwize V7000 zone for each server that volumes will be mapped to from
the cluster. For example:
Zone Server 1 port A (RED) with all node port 1's.
Zone Server 1 port B (BLUE) with all node port 2's.
Zone Server 2 port A (RED) with all node port 3's.
Zone Server 2 port B (BLUE) with all node port 4's.
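The host-zoning pattern above can be expressed as a small generator. The WWPNs below are hypothetical placeholders for illustration; real zoning is done in your switch management tools:

```python
def host_zones(host_name, host_ports, node_ports):
    """Build one zone per host HBA port, pairing it with one port from each
    node canister per fabric, following the single-initiator pattern above."""
    zones = {}
    for fabric, host_wwpn in host_ports.items():
        zones["{}_{}".format(host_name, fabric)] = [host_wwpn] + node_ports[fabric]
    return zones

# Hypothetical WWPNs, for illustration only
zones = host_zones(
    "server1",
    {"fabricA": "10:00:00:00:c9:11:11:11",
     "fabricB": "10:00:00:00:c9:22:22:22"},
    {"fabricA": ["50:05:07:68:01:aa:00:01", "50:05:07:68:01:bb:00:01"],
     "fabricB": ["50:05:07:68:01:aa:00:02", "50:05:07:68:01:bb:00:02"]},
)
```

Each resulting zone contains exactly one host port plus one port from each node canister in that fabric.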
Verify that the SAN switches and/or directors that the IBM Storwize V7000 will connect to
meet the following requirements as noted at:
http://www.ibm.com/storage/support/Storwize/V7000
Switches and/or directors are at firmware levels supported by the IBM Storwize V7000.
IBM Storwize V7000 port login maximums listed in restriction document must not be
exceeded.
Note: If you have any connectivity issues between IBM Storwize V7000 ports and Brocade
SAN switches or directors at 8 Gbps, refer to the following Web site for the correct setting
of the fillword port configuration parameter in the Brocade operating system:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003699
Note: There is no issue with configuring multiple IPv4 and/or IPv6 addresses on a given
Ethernet port, nor with using the same Ethernet port for management and iSCSI access.
However, you cannot use the same IP address for both management and iSCSI host use.
Figure 2-3 shows a possible IP configuration of the Ethernet ports on the IBM Storwize
V7000 system.
The management IP address will be associated with one of the node canisters in the cluster
and that node then becomes the configuration node. Should this node go offline, either
planned or unplanned, the management IP address will fail over to the other node’s Ethernet
port 1.
Figure 2-4 shows a logical view of the Ethernet ports available for configuration of the one or
two management IP addresses. These IP addresses are for the cluster itself and therefore
only associated with one node which is then considered the configuration node.
By default the service IP address on node canister 1 is 192.168.70.121 and on node canister
2 it is 192.168.70.122. The default mask is 255.255.255.0 and the default gateway address is
192.168.70.1.
Figure 2-5 shows a logical view of the Ethernet ports available for configuration of the service
IP addresses. Only port one on each node can be configured with a service IP address.
SAN fabric or the fabric itself fails then the host will lose access to its volumes. Even with just
a single connection to the SAN the host will have multiple paths to the IBM Storwize V7000
volumes because that single connection must be zoned with at least one Fibre Channel port
per node. Therefore a multipath driver is required.
SAN boot with IBM Storwize V7000 is supported and requirements are listed on the IBM
Storwize V7000 support matrix and configuration instructions are provided in the IBM
Storwize V7000 Host Attachment Guide.
Verify that the hosts which will access volumes from the IBM Storwize V7000 meet the
following requirements as found at the URL below.
http://www.ibm.com/storage/support/storwize/v7000
Host operating systems are at levels supported by the IBM Storwize V7000.
HBA BIOS, drivers and firmware along with the multipathing drivers are at levels
supported by IBM Storwize V7000.
If boot from SAN is required ensure it is supported for the operating system(s) to be
deployed.
If host clustering is required ensure it is supported for the operating system(s) to be
deployed.
The date and time can be entered manually, but to keep the clock synchronized, the
recommendation is to use the Network Time Protocol (NTP) service.
Document the current LAN NTP server IP address used for synchronization of devices.
If external storage systems will be used by IBM Storwize V7000 then a license must be
purchased to cover the number of enclosures to be virtualized.
Document the number of external physical enclosures that will be virtualized under the
IBM Storwize V7000.
Document the total number of physical enclosures, both internal and external virtualized
enclosures, for the Global Mirror (GM) and Metro Mirror (MM) feature. You must have
enough GM and MM license entitlement for all enclosures attached to the system,
regardless of the amount of GM and MM capacity you intend to use.
The Easy Tier function is included with the IBM Storwize V7000 system and is not a
purchased feature. If the system has solid-state drives (SSDs) and this capability is to be
used to optimize the utilization of those drives, then this function will be enabled.
For alerts to be sent to storage administrators and to set up call home to IBM for service and
support you will need the information below:
Name of primary storage administrator for IBM to contact if necessary.
E-mail address of the above storage administrator for IBM to contact if necessary.
Phone number of the above storage administrator for IBM to contact if necessary.
Physical location of the IBM Storwize V7000 system for IBM service (for example,
Building 22, 1st floor).
SMTP or E-mail server address used to direct alerts to and from the IBM Storwize V7000.
– For the call home service to work, the IBM Storwize V7000 system must have access to
an SMTP server on the LAN that can forward E-mails to the default IBM service address
‘[email protected]’.
E-mail addresses of local administrators who need to be notified of alerts.
IP address of the SNMP server to direct alerts to, if desired (for example, operations or the
help desk).
After the IBM Storwize V7000 initial configuration, you might want to add additional users
who can manage the cluster. You can create as many users as you need, but currently there
are only three roles generally configured for users: administrator, copyoperator, and monitor.
The administrator role allows the user to perform any function on the IBM Storwize V7000
system except creating users.
Note: Creating users is allowed by the superuser role only and should be limited to as few
users as possible.
The copyoperator role allows the user to view anything in the system, but the user can only
configure and manage copy functions, which include the Metro Mirror and Global Mirror
replication functions and the FlashCopy capabilities.
The monitor role allows the user to view anything in the system, but they cannot create,
modify, or change anything in the system, nor perform any actions that change the state of
the cluster.
The only other role available is the service role, which would generally be used if you create a
user ID for the IBM service representative. This role allows IBM service personnel to view
anything on the system, just as the monitor role provides, plus perform service-related
commands, such as adding a node back to the cluster after it has been serviced.
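From the CLI, a user is created with the mkuser command by assigning it to the user group that carries the wanted role. A sketch, assuming the SVC 6.1 command set; the user name and password here are examples only:

```shell
# Create a user with the monitor role (name and password are examples).
svctask mkuser -name OPERATOR1 -usergrp Monitor -password passw0rd
# List all users to verify the new entry.
svcinfo lsuser
```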
It allows for troubleshooting and management tasks, such as checking the status of the
storage server components, updating the firmware, and managing the storage server. Finally,
it offers advanced functions, such as FlashCopy, Volume Mirroring, and Remote Mirroring. A
Command Line Interface (CLI) for the IBM Storwize V7000 system is available as well.
This section briefly describes the system management using GUI and CLI.
After the initial configuration as described in 2.8, “Initial configuration” on page 51, you get the
IBM Storwize V7000 system welcome screen as shown in Figure 2-6 on page 44.
Example 2-1 System management using the Command Line Interface (CLI)
IBM_2076:ITSO-Storwize-V7000-1:admin>svcinfo lsuser
id name password ssh_key remote usergrp_id usergrp_name
0 superuser yes yes no 0 SecurityAdmin
1 MASSIMO yes no no 1 Administrator
2 JASON yes no no 1 Administrator
3 DAN yes no no 1 Administrator
IBM_2076:ITSO-Storwize-V7000-1:admin>
The initial IBM Storwize V7000 system setup should be done using the graphical tools we
describe starting in 2.7, “First time setup” on page 45.
The IBM Storwize V7000 provides an easy-to-use initial setup contained on a USB key. The
USB key is delivered with each storage system and contains the initialization application
called “InitTool.exe”. A system management IP address, the subnet mask, and the network
gateway address are required. The initialization application creates a configuration file on the
USB key.
The IBM Storwize V7000 will start the initial setup as soon as you plug the USB key with the
newly created file into the storage system.
Note: If you are unable to find the official USB key supplied with the IBM Storwize V7000,
you can use any USB key you have, and download and copy the initTool.exe application
from the IBM Storwize V7000 Support site at
http://www.ibm.com/storage/support/Storwize/V7000
The USB key contains the initTool.exe file as shown in Figure 2-7 on page 45.
These are the steps to follow to perform the initial setup using the USB key:
1. Plug the USB key into an MS Windows system and start the initialization tool. If the
system is configured to autorun USB keys, the initialization tool starts automatically;
otherwise, open the USB key from My Computer and double-click the InitTool.exe file.
After the tool has started, select Initialize a new system using the USB Key and click
Next as shown in Figure 2-8 on page 46.
2. Type the IPv4 or IPv6 address, subnet mask, and network gateway address, then click
Next as shown in Figure 2-9.
3. Click Next to finish the initialization application as shown in Figure 2-10 on page 47.
After these steps the application creates a new file called satask.txt on the USB key as shown
in Figure 2-11 on page 47.
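The generated satask.txt file contains a single service command for cluster creation. A representative sketch follows; the addresses are placeholders written by InitTool from your input, so the values on your key will differ:

```shell
# Contents of satask.txt written by InitTool (placeholder addresses).
satask mkcluster -clusterip 192.168.70.120 -gw 192.168.70.1 -mask 255.255.255.0
```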
Note: While the cluster is being created, the amber fault LED on the node canister flashes.
When the amber fault LED stops flashing, remove the USB key from IBM Storwize V7000
and insert it in your system to check the results.
After this has completed successfully, the initial setup is done; the IBM Storwize V7000 is
available for further configuration changes using the newly defined configuration address.
Each node has two Ethernet ports that can be used for system management. Ethernet port 1
is used for system management and must be configured and connected on both nodes. The
use of Ethernet port 2 is optional.
Each IBM Storwize V7000 cluster has one or two cluster IP addresses. If the configuration
node fails, the cluster IP addresses are transferred to another node in the same cluster.
Important: The first system management IP address always uses port 1. Always connect
port 1 for all node canisters to the management network.
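The cluster management addresses can be reviewed and changed later from the CLI; a hedged sketch based on the SVC 6.1 command set (the addresses are placeholders; verify the parameter names against your code level):

```shell
# Show the configured cluster (management) IP addresses.
svcinfo lsclusterip
# Add or change the second management address on Ethernet port 2
# (placeholder addresses).
svctask chclusterip -port 2 -clusterip 10.18.228.201 -gw 10.18.228.1 -mask 255.255.255.0
```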
System Status
sainfo lsservicenodes
sainfo lsservicestatus
panel_name 32G0CXR-1
cluster_id 0000020060e14ffc
cluster_name Cluster_10.18.228.200
cluster_status Active
cluster_ip_count 2
cluster_port 1
cluster_ip 10.18.228.200
cluster_gw 10.18.228.1
cluster_mask 255.255.255.0
cluster_ip_6
cluster_gw_6
cluster_prefix_6
cluster_port 2
cluster_ip
cluster_gw
cluster_mask
cluster_ip_6
cluster_gw_6
cluster_prefix_6
node_id 1
node_name 32G0CXR-1
node_status Starting
config_node No
hardware 100
service_IP_address 192.168.70.121
service_gateway 192.168.70.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6
node_sw_version 6.1.0.0
node_sw_build 49.0.1010130001
cluster_sw_build 49.0.1010130001
node_error_count 0
error_code
error_data
error_code
error_data
error_code
error_data
error_code
error_data
error_code
error_data
fc_ports 4
port_id 1
port_status Inactive
port_speed N/A
port_WWPN 500507680110a7fe
SFP_type Short-wave
port_id 2
port_status Inactive
port_speed N/A
port_WWPN 500507680120a7fe
SFP_type Short-wave
port_id 3
port_status Inactive
port_speed N/A
port_WWPN 500507680130a7fe
SFP_type Short-wave
port_id 4
port_status Inactive
port_speed N/A
port_WWPN 500507680140a7fe
SFP_type Short-wave
ethernet_ports 2
ethernet_port_id 1
port_status Link Online
port_speed 1Gb/s - Full
MAC e4:1f:13:74:0a:81
ethernet_port_id 2
port_status Not Configured
port_speed
MAC e4:1f:13:74:0a:80
product_mtm 2076-124
product_serial 32G0CXR
time_to_charge 0
battery_charging 100
dump_name 32G0CXR-1
node_WWNN
disk_WWNN_suffix
panel_WWNN_suffix
UPS_serial_number
UPS_status active
enclosure_WWNN_1 500507680100a7fe
enclosure_WWNN_2 500507680100a7be
node_part_identity 11S85Y5849YHU9995G051A
node_FRU_part 85Y5899
enclosure_identity 11S85Y5962YHU9992G0CXR
PSU_count 2
PSU_id 1
PSU_status active
PSU_id 2
PSU_status active
Battery_count 2
Battery_id 1
Battery_status active
Battery_id 2
Battery_status active
node_location_copy 1
node_product_mtm_copy 2076-124
node_product_serial_copy 32G0CXR
node_WWNN_1_copy 500507680100a7fe
node_WWNN_2_copy 500507680100a7be
latest_cluster_id 20060e14ffc
next_cluster_id 20061014ffc
sainfo lsservicerecommendation
service_action
No service action required, use console to manage node.
2. Read and accept the license agreement as shown in Figure 2-14 on page 52.
3. Set up the system name, current date, and time as shown in Figure 2-15.
4. Optionally, you can type in advanced licenses for virtualization of external storage devices
and a remote copy limit, as applicable. The virtualization license for all local devices is
already included in the system and must not be added here.
Figure 2-16 shows an example of how to add it for external storage and remote copy.
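The same license settings can also be applied from the CLI with the chlicense command; a sketch, assuming the SVC 6.1 option names (the enclosure counts are example values; confirm the options for your code level):

```shell
# License five external enclosures for virtualization and five
# enclosures for remote copy (example values only).
svctask chlicense -virtualization 5 -remote 5
# Verify the license settings.
svcinfo lslicense
```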
Note: When configuring email call home to IBM Support, use one of the following email
addresses depending on country or region of installation:
[email protected]: USA, Canada, Latin America and Caribbean Islands
[email protected]: All other countries/regions
To configure the call home and E-Mail alert event notification in the IBM Storwize V7000,
perform the following steps.
Clicking Configure Email Event Notification as shown in Figure 2-17 on page 53 starts an
Email configuration wizard.
Define the sending Email account as shown in Figure 2-18 on page 54.
Type in the receiving Email address for local users and for support and select the appropriate
message type for each user as shown in Figure 2-20 on page 55.
Figure 2-20 Email and Event configuration: set E-Mail address and event level for call home
Click Next, verify the Email configuration, and click Finish as shown in Figure 2-21 on
page 55.
Figure 2-21 Email and event configuration: verify Email addresses and finish configuration
The Email Event Notification Setup is now successfully done; the Setup wizard shows the
new Email configuration.
You can modify the settings, discard them, or continue with the configuration by clicking Next
as shown in Figure 2-22 on page 56.
After you click on Next an Add Enclosure window appears as shown in Figure 2-24 on
page 57.
Wait until the “Task completed” screen appears as shown in Figure 2-25 on page 58.
Finally, the configuration wizard detects the internal disk drives and offers you a storage
configuration for all internal disk drives.
If the proposed configuration is acceptable to you, select the check box and click Finish to
finalize the initial configuration as shown in Figure 2-26 on page 59.
If you prefer a customized configuration, leave the check box blank, click Finish, and
continue with the storage configuration using the Advanced Storage Configuration described
in Chapter 8, “Advanced Host and Volume Administration” on page 271 and Chapter 11,
“Copy Services” on page 379.
Optionally you can start the migration wizard and let the system guide you.
After you click Finish, a “Synchronizing memory cache” window appears as shown in
Figure 2-27 on page 59.
Wait until the “Task completed” window appears as shown in Figure 2-28 on page 60.
You have now successfully completed the initial configuration wizard for the IBM Storwize
V7000 system. Figure 2-29 on page 60 shows the Getting Started page; the system is now
ready for use.
Once logged in successfully the Getting Started screen is displayed. This is shown in
Figure 3-2 on page 63.
Screen layout
This screen has three main sections for navigating through the management tool. On the far
left-hand side of the screen are eight Function Icons. The eight icons represent:
Home menu
Troubleshooting menu
Physical Storage menu
Volumes menu
Hosts menu
Copy Services menu
User Management menu
Configuration menu
In the middle of the screen is a diagram illustrating the existing configuration. Clicking on the
icons in this area will provide extended help references, including a link to a short video
presentation that explains the topic in more detail. This is not a navigation tool, but rather an
extended help screen which includes configuration information.
At the bottom of the screen are three status indicators. Clicking on these will provide more
detailed information about the existing configuration of the IBM Storwize V7000 solution.
Simply click on these icons to expand them and minimize them as required.
The diagram in Figure 3-3 on page 64 shows the main screen areas.
Navigation
Navigating around the management tool is very simple.
You can hover the cursor over one of the eight Function Icons on the left-hand side of the
screen, which will highlight the Function Icon and then display a list of options. You can then
move the cursor to the desired option and select it. This method is illustrated in Figure 3-4 on
page 65.
An alternative method is to click on the desired Function Icon. This will navigate you directly
to that screen, as illustrated in Figure 3-5 on page 66.
By clicking at the top of the screen in All Hosts, you can change the view as shown in
Figure 3-6 on page 67. This applies to any other menu options as well.
Shown in Figure 3-7 on page 68 is a list of the IBM Storwize V7000 software Function Icons
and the associated menu options.
Multiple selections
The new management tool also provides the ability to select multiple items by using a
combination of the SHIFT or CTRL keys. To select multiple items in a display, click on the first
item, then hold down the SHIFT key and click on the last item in the list you require. This will
cause all the items in between to be selected, as shown in Figure 3-8 on page 69.
If you wish to use multiple select on items that are not in sequential order you can click on the
first item and then hold down the CTRL key and click on the other items you require as shown
in Figure 3-9 on page 70.
Status Indicators
Another useful tool is the Status Indicator menus which appear at the bottom of the screen as
shown in Figure 3-10 on page 71. These can be maximized and minimized and provide links
to connectivity options, storage allocation display and a display of long running tasks.
You can see the details of a running task by clicking on the rightmost status indicator as
shown in Figure 3-11 on page 72.
By moving the cursor down this bar the Used Capacity is displayed as shown in Figure 3-14
on page 74.
By moving the cursor up this bar the Virtual Capacity is displayed as shown in Figure 3-15 on
page 75. The virtual capacity is the capacity reported by IBM Storwize V7000 to the host
when you are using thin-provisioned volumes.
The System Status menu can also display the status of the various IBM Storwize V7000
components. In this example one of the enclosures is reporting a drive problem. By clicking
on the enclosure and then hovering the cursor over the affected drive, a status report is
displayed for the drive in question. This is shown in Figure .
Recommended Actions
Selecting the Recommended Actions option will display the screen shown in Figure 3-19 on
page 79. This screen lists any events that have occurred that may impact the IBM Storwize
V7000 solution. To fix an event, click Run Fix Procedure; this will run a Directed
Maintenance Procedure (DMP) that will walk you through the process of fixing that particular
event.
Another way to fix an event is to right-click on the particular event and click Run Fix
Procedure. To go into detail on a specific event, click on Properties as shown in Figure 3-20
on page 80.
Clicking on the Properties option will display the screen shown in Figure 3-21 on page 81.
Event Log
Clicking on the Event Log option will display the screen shown in Figure 3-22 on page 82.
From this screen you can use the action buttons to either mark an event as fixed or clear the
log.
Alternatively, you can select the particular event you are interested in and right-click on it,
which will present the options shown in Figure 3-23 on page 83.
Support
Clicking on the Support option will display the screen shown in Figure 3-24 on page 84. From
this screen click on Show full log listing to display all log files.
You can download or delete the various log files by selecting a single item as shown in
Figure 3-25 on page 85 and clicking on either the Download or Delete option in the Actions
button.
Note: When the Delete option is not highlighted, the file cannot be deleted because it is a
file used by the system.
Towards the upper right side of the screen there is a Node option to display node canister 1 or
2 log files, as shown in Figure 3-26 on page 86.
Selecting the Download Support Package option will display the screen shown in
Figure 3-27 on page 87.
This provides a number of different options to collect logs and state save information from the
cluster.
Selecting Download generates the support package as shown in Figure 3-28 on page 88.
Click on Save File and then OK to save a copy of the package as shown in Figure 3-29 on
page 88.
Internal
Selecting the Internal option will display the screen shown in Figure 3-31 on page 90. From
this screen you can configure the internal disk drives into storage pools. The panel also
provides the option to display the internal drives based on their capacity and speed.
External
Selecting the External option will display the screen in Figure 3-32 on page 91. This will
display any external disk systems that the IBM Storwize V7000 is virtualizing. From this
screen MDisks can be added to existing pools, imported as image mode volumes, or
renamed. By highlighting an MDisk you can also display any dependent volumes.
Storage Pools
The screen shown in Figure 3-33 on page 91 displays the Storage Pools. From here you can
create or delete Storage pools.
MDisks
The screen shown in Figure 3-34 on page 92 displays the MDisks that are available to the
IBM Storwize V7000 system. The MDisks displayed will show whether they are managed, in
which case the storage pool will be displayed, or whether they are unmanaged, in which case
they can be added to a new pool.
By clicking on the display bar as indicated in Figure 3-35 on page 93 you can choose to
change the fields that are displayed. Just select the items you wish to be displayed.
From this screen you can either use the options from the Action button or highlight the
particular MDisks that you require and right-click on them, as shown in Figure 3-36 on
page 93.
Figure 3-36 Commands for a single MDisk from the MDisks option
All Volumes
Clicking on the All Volumes option will display the screen as shown in Figure 3-38 on
page 95. From here you can perform tasks on the volumes such as shrink, enlarge, map to a
host or migrate a volume.
From this menu you can perform a variety of operations on the volumes. You can use the
Action button or you can right click on the Volume name which will display a list of operations
that can be performed against the volume as shown in Figure 3-39 on page 96.
Volumes by Pool
Selecting the Volumes by Pool option will display the screen shown in Figure 3-40 on
page 97.
Similar to the previous screen you can either use the Action button or you can click on the
Pool to display a list of valid commands as shown in Figure 3-41 on page 98.
Figure 3-41 Commands for a single volume from the Volume by Pool option
Volumes by Host
Selecting the Volumes by Host option will display the screen shown in Figure 3-42 on
page 99. This will display the volumes that have been mapped to a given host.
Use the Action button, or click on the host to display a list of valid commands as shown in
Figure 3-43 on page 100.
Figure 3-43 Commands for a single volume from the Volume by Host option
All Hosts
Selecting the All Hosts option will display the screen shown in Figure 3-45 on page 102.
From here you can modify host mappings, unmap hosts, rename hosts and create new hosts.
As with a number of other screens you can use the command buttons or you can select a host
and right click on it to access the commands as shown in Figure 3-46 on page 103.
Figure 3-46 Commands for a single Host from the All Hosts option
Ports by Host
Selecting the Ports by Hosts option will display the screen shown in Figure 3-47 on
page 104. This screen will display the Fibre Channel and iSCSI ports that are assigned to a
particular host.
From this screen by clicking on the Actions button you can modify the mappings, unmap
volumes, rename hosts and delete ports as shown in Figure 3-48 on page 105.
Figure 3-48 Commands for a single Host from the Ports by Host option
Host Mappings
Clicking on the Host Mappings option will display the screen shown in Figure 3-49 on
page 106. This screen displays the host ID, the SCSI identifier, and the Volume identifier for
all the mapped volumes.
As with a number of other screens you can use the command buttons as shown in
Figure 3-50 on page 107 or you can select a host and right click on it to access the
commands.
Figure 3-50 Commands for a single Host from the Host Mapping option
FlashCopy
Clicking on the FlashCopy option will display the screen shown in Figure 3-52 on page 109.
This screen shows the volumes that are available, and by right-clicking on a volume a list of
operations is displayed. From here we can perform tasks such as initiating a new snapshot,
clone, or backup just by clicking on the volume name.
Clicking on the volume name will display the screen shown in Figure 3-53 on page 110. From
here you can click on the tabs at the top of the screen to display additional information, such
as the hosts that the volume or FlashCopy volume is mapped to and its dependent MDisks.
By clicking on the Action button you can also initiate a mirrored copy of the volume or migrate
it to a different storage pool.
FlashCopy Mappings
Clicking on the FlashCopy Mapping option will display the screen shown in Figure 3-55 on
page 112. From this screen we can start, stop, delete and rename the FlashCopy mappings.
There is also an option to move the relationship into a Consistency Group.
Remote Copy
Selecting the Remote Copy option will display the screen shown in Figure 3-56 on page 113.
This screen displays the existing Remote Copy relationships and allows us to set up and
modify consistency groups. From this screen we can also start and stop relationships, add
relationships to a consistency group, and switch the direction of the mirror.
Partnerships
Selecting the Partnerships option will display the screen shown in Figure 3-57 on page 114.
This allows us to set up a new partnership or delete an existing partnership with another IBM
Storwize V7000 system for the purposes of remote mirroring.
From this screen we can also set the background copy rate. This specifies the bandwidth, in
megabytes per second (MBps), that is used by the background copy process between the
clusters.
Users
Figure 3-59 on page 116 shows the Users option screen. This screen enables you to create
and delete users, change and remove passwords, and add and remove SSH keys.
Clicking on the New User button will display a window as shown in Figure 3-60 on page 117.
From here you can input the name of the user and the password, and load the SSH key.
Audit Log
Selecting the Audit Log option will display the screen shown in Figure 3-61 on page 118. The
cluster maintains an audit log of successfully executed commands, indicating which users
performed particular actions at certain times.
Network
Clicking on the Network option will display the screen shown in Figure 3-63 on page 120.
From here we can update the network configuration, set up iSCSI definitions, and display
information on the Fibre Channel connections.
By selecting the Fibre Channel option as shown in Figure 3-64 on page 121, some very
useful information is presented. In this example we have selected the Hosts option from the
drop-down box and then chosen to display the details for one specific host, Hurricane, from
the list of host systems. Other options available from the drop-down box include displaying
Fibre Channel details for all devices, for clusters, for nodes, for storage systems, or for hosts.
Event notification
Shown in Figure 3-65 on page 122 is the screen displayed when selecting the Event
notification option. From this screen you can configure the e-mail alerts (including the Call
Home function) and SNMP monitoring, and define syslog servers and the message types.
Advanced
By selecting the Advanced option, the screen shown in Figure 3-66 on page 123 is
displayed. This screen provides options to set the date and time, update the software
licensing levels, upgrade the firmware, and set GUI preferences.
A host system is an open-systems computer that is connected to the switch through a Fibre
Channel or an iSCSI interface.
In this chapter we assume that your hosts are connected to your FC or IP Network and you
have completed the steps described in Chapter 2, “Initial configuration” on page 35. Follow
basic zoning recommendations to ensure that each host has at least two network adapters,
that each adapter is on a separate network (or at a minimum in a separate zone), and that
each adapter is connected to both canisters. This ensures four paths for failover and failback
purposes.
Prior to mapping the newly created volumes on the host of your choice, a little preparation will
go a long way towards ease of use and reliability. There are several steps required on a host
in preparation for mapping new IBM Storwize V7000 volumes to it. First, use the System
Storage Interoperation Center (SSIC) to check which code levels are supported to attach
your host to your storage. SSIC is a web tool for checking the interoperation of hosts,
storage, switches, and multipathing drivers:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
The complete support matrix is listed in the IBM Storwize V7000 Supported Hardware List,
Device Driver, Firmware and Recommended Software Levels V6.1 document, which is
available at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003703
This chapter will focus on Windows and VMware. If you have to attach any other hosts, for
example AIX®, Linux®, or even Apple, you will find the required information in the IBM
Storwize V7000 Information Center available at:
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003703#_Win2008
Host Adapter BIOS: Disabled (unless the host is configured for SAN Boot)
Queue depth: 4
IBM Subsystem Device Driver DSM (SDDDSM) is the IBM multipath I/O solution that is based
on Microsoft MPIO technology; it is a device-specific module specifically designed to support
IBM storage devices on Windows hosts. The intention of MPIO is to achieve better
integration of a multipath storage solution with the operating system, and it allows the use of
multipathing in the SAN infrastructure during the boot process for SAN boot hosts.
To ensure proper multipathing with IBM Storwize V7000, SDDDSM has to be installed on
Windows hosts.
1. Check the SDDDSM download matrix to determine the correct level of SDDDSM to install
for Windows 2008 and download the package at:
http://www-01.ibm.com/support/docview.wss?rs=540&uid=ssg1S7001350#WindowsSDDDSM
2. Extract the package and start the setup application as shown in Figure 4-2 on page 128.
3. If security warnings are enabled on your host, you will be prompted to click Run as shown
in Figure 4-3
4. The setup CLI appears; type yes to install the SDDDSM and press Enter as shown in
Figure 5.
5. After the setup completes, you will be asked to restart the system. Confirm this by typing
yes and pressing Enter as shown in Figure 4-5 on page 129.
After the reboot you have now successfully installed the IBM SDDDSM. You can check the
installed driver version by clicking Start → All Programs → Subsystem Device Driver DSM
→ Subsystem Device Driver DSM. A command prompt will open, and the command
datapath query version may be used to determine the version currently installed, as in
Example 4-1 for this Windows 2008 host.
This tool can also be used to determine the WWPNs of the host. Type datapath query wwpn
as shown in Example 4-2 and note down the WWPNs of your host, as you will need them later.
C:\Program Files\IBM\SDDDSM>
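For illustration, the output of datapath query wwpn has the following general shape; the adapter names and WWPN values below are placeholders, not values from a real host:

```shell
C:\Program Files\IBM\SDDDSM> datapath query wwpn
        Adapter Name    PortWWN
        Scsi Port2:     21000024FF000001
        Scsi Port3:     21000024FF000002
```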
If you need more detailed information about SDDDSM, you will find it in the Multipath
Subsystem Device Driver User’s Guide, GC52-1309-02, available at:
http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S7000303
Now the Windows host has been prepared to connect to the IBM Storwize V7000, and we
know the WWPNs of the host. The next step is to configure a host object for the identified
WWPNs using the IBM Storwize V7000 GUI. This is explained in “Creating FC Hosts” on
page 145.
Note that SAN boot hosts are beyond the intended scope of this book; for more information,
follow the steps in the Information Center available from the IBM Support Portal.
Note: This book focuses on Windows 2008, however the procedure for Windows 2003 is
very similar. If you use Windows 2003 do not forget to install Microsoft Hotfix 908980. If you
do not install it before operation, preferred pathing is not available:
http://support.microsoft.com/kb/908980
Confirm the automatic startup of the iSCSI Service as shown in Figure 4-7.
The iSCSI Configuration window will appear; select the Configuration tab as shown in
Figure 4-8 on page 132 and note the initiator name of your Windows host, as you will need
it later.
You can change the initiator name or enable advanced authentication, but this is beyond the
scope of our basic setup. More detailed information is available in the IBM Storwize V7000
Information Center:
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp?topic=/com.ibm.stor
wize.v7000.doc/svc_iscsiwindowsauthen_fu67gt.html
These are the basic steps to prepare a Windows 2008 iSCSI host. Next, go to “Creating iSCSI
Hosts” on page 148 to configure the IBM Storwize V7000 for iSCSI connections:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003703#_VMware
Download the latest supported HBA firmware for your configuration and apply it to your
system. Some HBAs, and especially the new CNA adapters, require an additional driver to be
loaded into ESX. Check the VMware Compatibility Guide for any requirements for your
configuration:
http://www.vmware.com/resources/compatibility/search.php
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_esx_vc_installation_guide.pdf
After you have completed your ESX installation, connect to your ESX Server using the
vSphere client, navigate to the Configuration tab, select Storage Adapters, and scroll
down to your FC HBAs as shown in Figure 4-10 on page 136. Note the WWPNs of the
installed adapters for later use.
The IBM Storwize V7000 is an active/active storage device. Starting with VMware ESX 4.0,
the recommended multipathing policy is Round Robin, which performs static load balancing
for I/O. If you do not want the I/O balanced over all available paths, the Fixed policy is
supported as well. This policy setting can be selected for every volume, but this is done
afterwards, once IBM Storwize V7000 LUNs have been attached to the ESX host (5.2.3,
“VMware ESX Fibre Channel Attachment” on page 181). If you use an older version of
VMware ESX, up to 3.5, Fixed is the recommended policy setting.
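On ESX 4.x, the path policy can also be set per device from the service console with esxcli; a hedged sketch (the naa device identifier below is a placeholder for one of your IBM Storwize V7000 LUNs):

```shell
# List devices and their current path selection policy.
esxcli nmp device list
# Set Round Robin for one LUN (the naa ID is a placeholder).
esxcli nmp device setpolicy --device naa.6005076801800000000000000000a7fe --psp VMW_PSP_RR
```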
After all these steps are completed the ESX host is prepared to connect to the IBM Storwize
V7000, go to 4.2.1, “Creating FC Hosts” on page 145 to create the ESX FC host in the IBM
Storwize V7000 GUI.
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf
Perform the following steps to prepare a VMware ESX host to connect to an IBM Storwize
V7000 using iSCSI.
Make sure that the latest firmware levels are applied on your host system.
Install VMware ESX and load additional drivers if required.
Connect the ESX server to your network; it is recommended to use separate network
interfaces for iSCSI traffic.
Configure your network to fulfill your security and performance requirements.
The iSCSI initiator is installed by default on your ESX server, and you only have to enable it.
1. Connect to your ESX server using the vSphere Client and navigate to Configuration and
select Networking as shown in Figure 4-11.
2. Click Add Networking to start the Add Network Wizard as shown in Figure 4-12 on
page 138. Select VMkernel and click Next.
3. Select one or more network interfaces that you want to use for iSCSI traffic and click Next
as shown in Figure 4-13 on page 139.
4. Enter a meaningful Network Label and click Next as shown in Figure 4-14.
5. Enter an IP address for your iSCSI network; it is strongly recommended to use a dedicated
network for iSCSI traffic, as shown in Figure 4-15.
8. The iSCSI Software Adapter Properties window appears. As you can see in Figure 4-17,
the initiator is disabled by default; to change this, click Configure.
10. The VMware ESX iSCSI initiator is now successfully enabled as shown in Figure 4-19.
Remember your initiator name for later use.
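The software iSCSI initiator can also be enabled from the ESX 4.x service console. This is a sketch of the equivalent commands, not a replacement for the GUI procedure above:

```shell
# Enable the software iSCSI initiator from the ESX 4.x service console
esxcfg-swiscsi -e

# Confirm that the software iSCSI initiator is enabled
esxcfg-swiscsi -q

# The initiator name (IQN) is shown in the vSphere Client under the iSCSI
# Software Adapter properties; note it down for the host definition that
# will be created on the IBM Storwize V7000.
```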
Your VMware ESX host is now prepared to connect to the IBM Storwize V7000. Go to
“Creating iSCSI Hosts” on page 148 to create an ESX iSCSI host in the IBM Storwize
V7000 GUI.
Open the host configuration as shown in Figure 4-20 by selecting All Hosts.
The All Hosts section will appear as shown in Figure 4-21 on page 144.
To create a new host click New Host to start the wizard as shown in Figure 4-22.
If you want to create a Fibre Channel host, continue with 4.2.1, “Creating FC Hosts”; for
iSCSI hosts, go to “Creating iSCSI Hosts” on page 148.
2. Enter a host name and click the Fibre Channel Ports box to get a list of all known WWPNs
as shown in Figure 4-24.
The IBM Storwize V7000 will have the host port WWPNs available if you prepared the hosts
and know your WWPNs as described in “Preparing the Host Operating System” on page 126.
If they do not appear in the list, scan for new disks in your operating system and click Rescan
in the configuration wizard. If they still do not appear, check your SAN zoning and repeat the
scanning.
3. Select the WWPN for your host and click Add Port to List as shown in Figure 4-25 on
page 146.
4. Add all ports that belong to the host as shown in Figure 4-26.
Note: If you want to create hosts that are offline, or not connected at the moment, it is also
possible to enter the WWPNs manually. Just type them into the Fibre Channel Ports box and
add them to the list as well.
5. If you are creating an HP/UX or TPGS host, select the Advanced checkbox and more
options will appear as shown in Figure 4-27 on page 147. Select your host type.
6. Click Create Host and the wizard creates the host as shown in Figure 4-28.
8. Repeat these steps for all of your Fibre Channel hosts. Figure 4-30 shows the All Hosts
section after creating a second host.
Once you have completed creating Fibre Channel hosts, go to Chapter 5, “Basic Volume
Configuration” on page 155 to create volumes and map them to the created hosts.
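The same host object can be created from the IBM Storwize V7000 SSH command-line interface. This is a sketch; the host name and WWPNs below are examples that you must replace with your own values recorded earlier:

```shell
# Create an FC host object with its WWPNs (colon-separated list);
# the name and WWPNs here are example values
svctask mkhost -name W2K8_FC -fcwwpn 210000E08B054CAA:210000E08B892BCD

# Verify the host object and the state of its ports
svcinfo lshost W2K8_FC
```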
2. Enter a host name, type the iSCSI initiator name into the iSCSI Ports box, and click Add
Ports to List as shown in Figure 4-32. If you want to add several initiator names to one
host, repeat this step.
Figure 4-32 Create iSCSI Host - Enter name and iSCSI Ports
3. If you are connecting an HP/UX or TPGS host, select the Advanced checkbox as shown in
Figure 4-33 on page 150 and select the correct host type.
4. Click Create Host and the wizard will complete as shown in Figure 4-34. Click Close.
5. Repeat these steps for every iSCSI host you want to create. Figure 4-35 shows the All
Hosts section after creating two Fibre Channel and two iSCSI hosts.
Now the iSCSI hosts are configured on the IBM Storwize V7000, but to provide connectivity
the iSCSI Ethernet ports must also be configured. Perform the following steps to enable
iSCSI connectivity:
1. Switch to the configuration section and select Network as shown in Figure 4-36.
2. Select iSCSI and the iSCSI Configuration section will appear as shown in Figure 4-37 on
page 152.
This configuration section gives you an overview of all iSCSI settings for the IBM Storwize
V7000. You can configure the iSCSI alias, iSNS addresses, and CHAP authentication in this
section, as well as the iSCSI IP addresses, which we will also edit in the basic setup now.
3. Click the Ethernet ports to enter the iSCSI IP addresses as shown in Figure 4-38 on
page 153. Repeat this step for each port you want to use for iSCSI traffic.
4. After you have entered the IP address for each port, click Apply Changes to enable the
configuration as shown in Figure 4-39.
5. After the changes are successfully applied, click Close as shown in Figure 4-40.
The IBM Storwize V7000 is now configured and ready for iSCSI use. Remember the initiator
names of your storage canisters, shown in Figure 4-37 on page 152, as you will need them
later. Go to Chapter 5, “Basic Volume Configuration” to create volumes and map them
to a host.
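The iSCSI port IP addresses can also be set from the command-line interface. This is a sketch; the node ID, addresses, and port number are example values for illustration:

```shell
# Assign an iSCSI IP address to Ethernet port 1 of node 1
# (IP, mask, and gateway are example values)
svctask cfgportip -node 1 -ip 192.168.70.21 -mask 255.255.255.0 \
    -gw 192.168.70.1 1

# List the resulting Ethernet port configuration
svcinfo lsportip
</imports>
```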
In previous chapters, storage pools consisting of MDisks (RAID arrays) were created using
the initial configuration wizard.
The first part of this chapter (5.1, “Provisioning storage from the IBM Storwize V7000 and
making it available for the host”) describes how to create volumes and map them to the
defined hosts.
The second part (5.2, “Discover the volumes from the host and specify multipath settings”)
covers how to discover those volumes. Once you have finished this chapter, your basic
configuration is done and you are able to store data on the IBM Storwize V7000.
Advanced host and volume administration, such as volume migration and creating volume
copies, is covered in Chapter 8, “Advanced Host and Volume Administration” on
page 271.
To start, open the All Volumes section of the IBM Storwize V7000 GUI as shown in Figure 5-1
on page 156 to begin creating new volumes.
The All Volumes section will appear as shown in Figure 5-2 on page 157.
Because we have not created any volumes yet, the Welcome message is displayed. Follow
the recommendation: click New Volume and a new window will appear as shown in
Figure 5-3.
By default all volumes that you create are striped across all available MDisks in one storage
pool. The GUI for the IBM Storwize V7000 provides the following preset selections for the
user:
Generic: a striped volume that is fully provisioned as shown in “Creating a Generic
Volume” on page 158.
Thin-provision: a striped volume that is space efficient. Options under the Advanced
button determine how much space is fully allocated initially and how large the volume is
able to grow, as shown in “Creating a Thin-provisioned Volume” on page 159.
Mirror: the striped volume consists of two striped copies and is synchronized to protect
against loss of data if the underlying storage pool of one copy is lost as shown in “Creating
a Mirrored Volume” on page 161.
Thin-mirror: two synchronized copies, both are thin-provisioned as shown in “Creating a
Thin-mirror Volume” on page 164.
Select which volume type you want to create and go to the relevant section listed above.
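The four presets above correspond to variants of a single volume-creation command in the IBM Storwize V7000 CLI. The following is a sketch; the pool names, sizes, and volume names are example values:

```shell
# Generic: fully provisioned, striped across the MDisks of the pool
svctask mkvdisk -mdiskgrp mdiskgrp0 -iogrp 0 -size 100 -unit gb -name generic_vol

# Thin-provision: 2% real capacity initially, expanding automatically
svctask mkvdisk -mdiskgrp mdiskgrp0 -iogrp 0 -size 100 -unit gb \
    -rsize 2% -autoexpand -name thin_vol

# Mirror: two synchronized copies in two different pools
svctask mkvdisk -mdiskgrp mdiskgrp0:mdiskgrp1 -iogrp 0 -size 100 -unit gb \
    -copies 2 -name mirror_vol

# Thin-mirror: two synchronized, thin-provisioned copies
svctask mkvdisk -mdiskgrp mdiskgrp0:mdiskgrp1 -iogrp 0 -size 100 -unit gb \
    -copies 2 -rsize 2% -autoexpand -name thinmirror_vol
```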
We choose a generic volume as shown in Figure 5-3 on page 157, and then select the pool in
which the volume should be created by clicking it. In our example we clicked mdiskgrp0. The
result is shown in Figure 5-4.
Enter a volume name and a size, then click Create and Map to Host. The new generic volume
will be created as shown in Figure 5-5 on page 159. Click Continue and go to “Mapping
newly created Volumes to the Host using the wizard” on page 166.
If you do not want to map the volumes now, just click Create to complete the task. Volumes
can also be mapped later as described in 5.1.1, “Map Volume to the Host” on page 166.
Note: Thin-provisioned volumes require additional I/O operations to read and write
metadata to the back-end storage, which generates additional load on the system.
Therefore it is recommended not to use thin-provisioned volumes for applications with very
high write I/O workloads.
Select the pool in which the thin-provisioned volume should be created by clicking on it and
enter the volume name and size as shown in Figure 5-7 on page 160.
Note that under the name field there is a summary showing that you are about to create a
thin-provisioned volume, how much virtual space it will have, the space that will be allocated
(real size), and the free capacity in the pool. By default the real capacity is 2% of the virtual
capacity; you can change this setting by clicking Advanced. Several advanced options are
available on the Thin Provision tab as shown in Figure 5-8:
Real: Specify the size of the real capacity space used during creation.
Automatically Extend: This option enables automatic expansion of the real capacity when
new capacity has to be allocated.
Warning Threshold: Enter a threshold for receiving capacity alerts.
Thin Provisioned Grain Size: Specify the grain size for real capacity.
Make the advanced settings if required and click OK to return to Figure 5-7, then click Create
and Map to Host; the creation task will complete as shown in Figure 5-9 on page 161.
If you do not want to map the volumes now, just click Create to complete the task. Volumes
can also be mapped later as described in 5.1.1, “Map Volume to the Host” on page 166.
Click Continue and go to “Mapping newly created Volumes to the Host using the wizard” on
page 166.
To create a mirrored volume select Mirror as shown in Figure 5-10 on page 162.
Select the primary pool by clicking it and the view will change to the secondary pool as shown
in Figure 5-11.
Select the secondary pool by clicking it and enter a volume name and the required size as
shown in Figure 5-12 on page 163.
The summary shows you capacity information about the pool; you can also click Advanced
and select the Mirror tab as shown in Figure 5-13.
In the advanced mirroring settings you are able to specify a synchronization rate. Enter a
Mirror Sync Rate between 1 and 100%. With this option you can set the importance of the
copy synchronization progress, which enables more important volumes to synchronize faster
than other mirrored volumes. By default the rate is set to 50% for all volumes. Click OK to
return to Figure 5-12 on page 163.
Click Create and Map to Host and the mirrored volume will be created as shown in
Figure 5-14 on page 164.
Click Continue and go to “Mapping newly created Volumes to the Host using the wizard” on
page 166.
Select the primary pool by clicking it and the view will change to the secondary pool as shown
in Figure 5-16 on page 165.
Select the pool for the secondary copy and enter a name and a size for the new volume as
shown in Figure 5-17.
The summary shows you the capacity information and the allocated space. You can click
Advanced and customize the thin provision settings as shown in Figure 5-8, or the mirror
synchronization rate as shown in Figure 5-13. If you have opened the advanced settings,
click OK to return to Figure 5-17.
Click Create and Map to Host and the mirrored volume will be created as shown in
Figure 5-18 on page 166.
Click Continue and go to “Mapping newly created Volumes to the Host using the wizard” on
page 166.
As the first step of the mapping process, select the host to which the new volume should be
attached, as shown in Figure 5-19.
The wizard then opens the Modify Mappings section, with your host and the newly created
volume already preselected. Just click OK and the volume will be mapped to the host as
shown in Figure 5-20 on page 167.
After the task is completed, click Close as shown in Figure 5-21 and the wizard will return to
the All Volumes section.
Now the newly created volume is displayed and we also see that it is already mapped to a
host as shown in Figure 5-22 on page 168.
The host is now able to access the volume and store data on it. Go to 5.2, “Discover the
volumes from the host and specify multipath settings” on page 168 to discover the volumes
on the host and make additional host settings if required.
Alternatively, create multiple volumes in preparation for discovering them later. Mappings can
be customized as well. Advanced host configuration is covered in 8.1.1, “Modify, add and
delete host mappings” on page 272.
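The mapping step has a CLI equivalent as well. The following is a sketch; the host and volume names are example values, and the SCSI ID is set explicitly:

```shell
# Map the volume to the host with an explicit SCSI ID
svctask mkvdiskhostmap -host W2K8_FC -scsi 0 generic_vol

# List all volume mappings for the host to verify the result
svcinfo lshostvdiskmap W2K8_FC
```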
5.2 Discover the volumes from the host and specify multipath
settings
This section shows how to discover the volumes created and mapped in the previous section
and set additional multipath settings if required.
We assume that you have completed all the steps described previously in the book, so that
the hosts and the IBM Storwize V7000 are prepared:
Prepare your operating systems for attachment (Chapter 4, “Host Configuration” on
page 125).
Create hosts using the GUI (4.2, “Creating Hosts using the GUI”).
Complete basic volume configuration and host mapping (5.1, “Provisioning storage from the
IBM Storwize V7000 and making it available for the host”).
This section shows how to discover Fibre Channel and iSCSI Volumes from Windows 2008
and VMware ESX 4.x hosts.
In the IBM Storwize V7000 GUI select Hosts and All Hosts as shown in Figure 5-23 on
page 169.
This view gives you an overview of the currently configured and mapped hosts as shown
in Figure 5-24.
The host details show you which volumes are currently mapped to the host, and you also see
the volume UID and the SCSI ID. In our example one volume with SCSI ID 0 is mapped to the
host.
Log on to your Microsoft host and click Start → All Programs → Subsystem Device Driver
DSM → Subsystem Device Driver DSM. A command-line window will appear. Enter the
datapath query device command and press Enter to see whether there are IBM Storwize
V7000 disks connected to this host as shown in Example 5-1.
Total Devices : 1
DEV#: 0 DEVICE NAME: Disk14 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 600507680280801AC800000000000009
=============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port5 Bus0/Disk14 Part0 OPEN NORMAL 0 0
1 Scsi Port5 Bus0/Disk14 Part0 OPEN NORMAL 22 0
2 Scsi Port6 Bus0/Disk14 Part0 OPEN NORMAL 19 0
3 Scsi Port6 Bus0/Disk14 Part0 OPEN NORMAL 0 0
C:\Program Files\IBM\SDDDSM>
The output provides information about the connected volumes. In the example shown there
is one disk connected, which is Disk 14 on the Windows host, and four paths to the disk are
available (State = Open). Open Windows Disk Management (Figure 5-27) by clicking Start →
Run, typing diskmgmt.msc, and clicking OK.
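A quick way to confirm that all expected paths are open is to count the OPEN lines in the datapath query device output. This sketch uses a captured sample mirroring Example 5-1; in practice you would pipe the live command output through the same filter:

```shell
# Count paths reported as OPEN in captured 'datapath query device' output.
# The sample below mirrors Example 5-1; with a live host, pipe the real
# command output into the grep instead.
sample='DEV#: 0 DEVICE NAME: Disk14 Part0 TYPE: 2145 POLICY: OPTIMIZED
0 Scsi Port5 Bus0/Disk14 Part0 OPEN NORMAL 0 0
1 Scsi Port5 Bus0/Disk14 Part0 OPEN NORMAL 22 0
2 Scsi Port6 Bus0/Disk14 Part0 OPEN NORMAL 19 0
3 Scsi Port6 Bus0/Disk14 Part0 OPEN NORMAL 0 0'

open_paths=$(printf '%s\n' "$sample" | grep -c ' OPEN ')
echo "open paths: $open_paths"
```

With two dual-port HBAs zoned to both node canisters, four open paths is the expected result; fewer usually points at a zoning or cabling problem.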
Note: Usually Windows discovers new devices, such as disks, by itself. If you have
completed all the steps and do not see any disks, select Actions → Rescan Disk in Disk
Management to discover the new volumes.
Right-click the disk in the left section and place it online as shown in Figure 5-28 on page 171.
Right-click it again to initialize the disk as shown in Figure 5-29, and click OK to confirm the
task.
Now right-click the right section of the disk and select Create Simple Volume as shown in
Figure 5-30.
Follow the wizard and the volume will be ready to use from your Windows host as shown in
Figure 5-31 on page 172.
The basic setup is now complete: the IBM Storwize V7000 is configured, and the host is
prepared to access the volumes over several paths and store data on the storage
subsystem.
The host details show you which volumes are currently mapped to the host, and you also see
the volume UID and the SCSI ID. In our example one volume with SCSI ID 0 is mapped to the
host.
Log on to your Windows 2008 host and click Start → Administrative Tools → iSCSI
Initiator to open the iSCSI configuration tab as shown in Figure 5-34 on page 174.
Enter the IP address of one of the IBM Storwize V7000 iSCSI ports and click Quick Connect
as shown in Figure 5-35.
Note: The iSCSI IP addresses are different from the cluster and canister IP addresses; they
were configured in 4.2.2, “Creating iSCSI Hosts” on page 148.
The IBM Storwize V7000 initiator will be discovered and connected as shown in Figure 5-36
on page 175.
You have now completed the steps to connect the storage disk to your iSCSI host, but you
are only using a single path at the moment. To enable multipathing for iSCSI targets, more
actions are required.
Click Start → Run and type cmd to open a command prompt. Enter ServerManagerCMD.exe
-install Multipath-IO and press Enter as shown in Example 5-2.
Start Installation...
[Installation] Succeeded: [Multipath I/O] Multipath I/O.
<100/100>
Click Start → Administrative Tools → MPIO, open the Discover Multi-Paths tab, and
select the Add support for iSCSI devices checkbox as shown in Figure 5-37 on page 176.
Click Add and confirm the action message to reboot your host.
After the reboot, log on again and click Start → Administrative Tools → iSCSI Initiator to
open the iSCSI configuration tab. Navigate to the Discovery tab as shown in Figure 5-38 on
page 177.
Click Discover Portal..., enter the IP address of another IBM Storwize V7000 iSCSI port
as shown in Figure 5-39, and click OK.
Return to the Targets tab as shown in Figure 5-40 on page 178 and you will find the new
connection listed there as inactive.
Highlight the inactive port and click Connect; the Connect to Target window will appear as
shown in Figure 5-41.
Make sure that you select the Enable Multipath checkbox and click OK; the second port will
now appear as connected as shown in Figure 5-42 on page 179.
Repeat this step for each IBM Storwize V7000 port you want to use for iSCSI traffic; it is
possible to have up to four paths to the system.
Click Devices → MPIO to make sure that the multipath policy for Windows 2008 is set to the
default, Round Robin with Subset, as shown in the figure on page 179, and click OK to close
this view.
Open Windows Disk Management (Figure 5-44) by clicking Start → Run, typing
diskmgmt.msc, and clicking OK.
Place the disk online, initialize it, and create a file system on it; it is then ready to use. The
detailed steps of this process are the same as described in 5.2.1, “Windows 2008 Fibre
Channel volume attachment” on page 169.
Now the storage disk is ready for use as shown in Figure 5-45. In our example we have
mapped a 10 TB disk, thin-provisioned on the IBM Storwize V7000, to a Windows 2008 host
using iSCSI.
In the host details section, you will see that at the moment there is one volume connected to
the ESX FC host using SCSI ID 1. The UID of the volume is also displayed.
Connect to your VMware ESX Server using the vSphere client, navigate to the Configuration
tab and select Storage Adapters as shown in Figure 5-48.
Click Rescan All... and click OK (Figure 5-49) to scan for new storage devices.
Select Storage and click Add Storage as shown in Figure 5-50 on page 184.
The Add Storage wizard will appear. Select Disk/LUN and click Next. The IBM Storwize
V7000 disk now appears as shown in Figure 5-51. Highlight it and click Next.
Follow the wizard to complete the attachment of the disk. After you click Finish, the wizard
closes and you return to the storage view. In Figure 5-52 on page 186 you see that the new
volume has been added to the configuration.
Highlight the new datastore and click Properties to see its details as shown in
Figure 5-53.
Click Manage Paths to customize the multipath settings. Select Round Robin as shown in
Figure 5-54 on page 187 and click Change.
Now the storage disk is available and ready to use for your VMware ESX server using Fibre
Channel attachment.
In the host details section, you will see that at the moment there is one volume connected to
the ESX iSCSI host using SCSI ID 0. The UID of the volume is also displayed.
Connect to your VMware ESX Server using the vSphere client, navigate to the Configuration
tab and select Storage Adapters as shown in Figure 5-57.
Highlight the iSCSI Software Initiator and click Properties. The iSCSI initiator properties will
appear, select the Dynamic Discovery Tab (Figure 5-58 on page 190) and click Add.
To add a target, enter the target IP address as shown in Figure 5-59. The target IP address is
the IP address of a node in the I/O group from which you are mapping the iSCSI volume.
Leave the IP port number at the default value of 3260 and click OK; the connection between
the initiator and the target is then established.
Repeat this step for each IBM Storwize V7000 iSCSI Port you want to use for iSCSI
connections.
Note: The iSCSI IP addresses are different from the cluster and canister IP addresses; they
were configured in 4.2.2, “Creating iSCSI Hosts” on page 148.
After you have added all the required ports, close the iSCSI Initiator Properties by clicking
Close (Figure 5-58).
You will be prompted to rescan for new storage devices, confirm the scan by clicking Yes as
shown in Figure 5-60 on page 191.
Go to the storage view as shown in Figure 5-61 and click Add Storage.
The Add Storage wizard will appear (Figure 5-62 on page 192). Select Disk/LUN and click
Next.
The new iSCSI LUN will appear, highlight it and click Next as shown in Figure 5-63.
Review the disk layout and click Next as shown in Figure 5-64 on page 193.
Enter a name for the datastore and click Next as shown in Figure 5-65.
Select the maximum file size and click Next as shown in Figure 5-66.
The new iSCSI LUN is now in the process of being added; this can take a few minutes.
Once the task completes, the new datastore appears in the storage view as shown in
Figure 5-68 on page 195.
Highlight the new datastore and click Properties to open and review the datastore settings as
shown in Figure 5-69.
Click Manage Paths, select Round Robin as the multipath policy as shown in Figure 5-70 on
page 196, and click Change.
Click Close twice to return to the storage view. The storage disk is now available and
ready to use for your VMware ESX server using an iSCSI attachment.
To migrate existing data, the IBM Storwize V7000 provides a storage migration wizard that
guides you through the entire procedure.
Before attaching any external storage systems to the IBM Storwize V7000, check the IBM
Storwize V7000 support matrix at the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003703
When migrating data from an external storage system to the IBM Storwize V7000, where the
external storage system will be removed from IBM Storwize V7000 control when the
migration is complete, IBM allows you to temporarily configure the external virtualization
license setting. Configuring this setting prevents messages indicating that you are in
violation of the license agreement. When the migration is complete, the external virtualization
license must be reset to its original limit.
To prepare for the data migration, the external storage systems need to be configured to be
under IBM Storwize V7000 control. These are the steps to follow:
1. Stop host I/O to the external storage LUNs that need to be migrated.
2. Remove zones between the hosts and the storage system from which you are migrating.
3. Update your host device drivers, including your multipath driver, and configure them for
attachment to the IBM Storwize V7000 system.
4. Create a storage system zone between the storage system being migrated and IBM
Storwize V7000 system, and host zones for the host attachment.
5. Unmap the LUNs on the external storage system from the host and map them to the IBM
Storwize V7000 system.
6. Verify that the IBM Storwize V7000 has discovered the LUNs as unmanaged MDisks.
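Steps 5 and 6 can be verified from the IBM Storwize V7000 CLI. This is a sketch of the relevant commands:

```shell
# Trigger a rescan of the fabric for newly mapped back-end LUNs
svctask detectmdisk

# The migrated LUNs should appear as unmanaged MDisks
svcinfo lsmdisk -filtervalue mode=unmanaged
```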
We will use the IBM Storwize V7000 wizard that has been designed specifically for this
scenario to guide you through the process and discuss the steps along the way.
There are two ways to access the menu options for starting a migration. One starts from the
initial Getting Started panel, where you can select Migrate Storage from the Suggested
Tasks drop-down, as shown in Figure 6-1.
Figure 6-1 Getting Started panel with Migrate Storage option displayed
The other way is to navigate to the Migration option via the Physical Storage function icon,
the third one on the left side of the panel, as shown in Figure 6-2 on page 200.
Whichever method is chosen, the storage migration panel will appear. Click Start New
Migration to start the storage migration wizard.
Now using the IBM Storwize V7000 storage migration wizard, you can easily migrate your
existing data. Follow the steps below:
1. Follow Step 1 of the storage migration wizard: check the restrictions and prerequisites, and
click Next.
Figure 6-4 on page 202 shows Step 1 of the storage migration wizard.
Note: To avoid any potential data loss, back up all the data stored on your external storage
before using the wizard.
Step 1 lists the restrictions and prerequisites for using the storage migration wizard.
Restrictions:
– Do not use the storage migration wizard to migrate cluster hosts, including clusters of
VMware hosts and VIOS.
– Do not use the storage migration wizard to migrate SAN boot images.
If you have either of these two environments, you will need to migrate them outside of this
wizard. You can find more information in the IBM Storwize V7000 Information Center:
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp
The VMware ESX Storage vMotion feature might be an alternative for migrating VMware
clusters; consult VMware for more information.
Prerequisites:
– Make sure the IBM Storwize V7000 Fibre Channel ports have been connected to the
SAN fabrics to which the external disk controller and the hosts you wish to migrate
from are connected.
– If you have VMware ESX hosts in the migration, make sure the VMware ESX hosts are
set to allow volume copies to be recognized.
After you have understood the restrictions and satisfied the prerequisites, click Next to go to
the next step.
2. Follow Step 2 of the storage migration wizard: complete the environment preparation for
migration, and click Next.
Figure 6-5 on page 204 shows Step 2 of the storage migration wizard.
Figure 6-5 on page 204 shows Step 2 of the storage migration wizard.
3. Follow Step 3 of the storage migration wizard: complete the mapping of the external
storage LUNs. Make sure you record the information mentioned in this step, as it makes
the following steps much easier. Then click Next.
Note: You may need to record the SCSI ID with which the volume is mapped to the host.
Some operating systems do not support changing the SCSI ID during the migration.
After you click Next, the IBM Storwize V7000 starts to discover external devices (provided
you have correctly zoned the external storage systems with the IBM Storwize V7000 and
mapped the LUNs). When the discovery completes, the IBM Storwize V7000 shows the
MDisks found.
4. Choose the MDisks you want to migrate, and click Next.
Figure 6-7 on page 206 shows Step 4 of the storage migration wizard.
If the MDisks that need migrating are in the list, select them and click Next; the IBM Storwize
V7000 will then start to import the chosen MDisks. If the MDisks that need migrating are not
in the list, check your zone configuration and LUN mapping, and click Detect MDisks to run
the discovery procedure again.
You can select one or more MDisks as required, and detailed information about an MDisk
can be shown by double-clicking it.
In Figure 6-7 there are six LUNs discovered as MDisks that are candidates for migration. In
your particular situation, you may need to reference the information you recorded earlier to
identify these MDisks. In our example the MDisks have been selected to go forward to the
next step.
When you click Next in this step, the IBM Storwize V7000 completes importing the MDisks
with the host’s data and creates a storage pool. The MDisks are added to the pool, and
image mode volumes (the same size as the MDisks) are created, ready to be mapped back
to the original hosts.
5. Configure the hosts that need to access the data after the migration, or create new hosts
as needed, and click Next.
Figure 6-8 on page 207 shows Step 5 of the storage migration wizard.
Before you configure any hosts, make sure the appropriate drivers have been installed on the
host and the host zones have been configured correctly.
If the host that needs to access the data on the volume after the migration is in the list, click
Next.
If the host has not been created on the IBM Storwize V7000 storage system, click New Host
to create it as required.
In the Create Host window, give the host a name and, from the Fibre Channel Ports
drop-down list, select the WWPNs for this host that you recorded earlier. If the WWPNs of
this host do not appear in the drop-down list, click Rescan and try again. If they still do not
show up in the list, you can enter them manually.
After the host has been created, you can find it in the host list in Step 5. Then click Next to
continue with the migration wizard.
Figure 6-10 shows Step 5 of the storage migration wizard with the new host.
6. Map the newly migrated volume to the host, and when mapping is complete, click Next.
In Step 6 of the migration wizard, the volumes from the imported MDisks which need
migrating are listed. The names of the volumes have been assigned automatically by the
IBM Storwize V7000 storage system. You can change them to names that are meaningful to
you by selecting the volume and clicking Rename in the Actions drop-down menu.
Note: The names must begin with a letter and cannot begin with a number. A name can be a
maximum of 63 characters. Valid characters are uppercase letters (A-Z), lowercase letters
(a-z), digits (0-9), underscore (_), period (.), hyphen (-), and space; names must
not begin or end with a space.
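The naming rules above can be checked before renaming. This is a small sketch (the function name and sample names are our own, not part of the product) that encodes the rules as a regular expression plus a trailing-space check:

```shell
# Check a proposed volume name against the rules above: starts with a letter,
# 1-63 characters, only A-Z a-z 0-9 _ . - and space, no trailing space.
# (A leading space is already excluded by the first-character rule.)
valid_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9._ -]{0,62}$' \
    && [ "${1% }" = "$1" ]
}

valid_name "migrated_vol-01" && echo yes || echo no   # yes
valid_name "9vol" && echo yes || echo no              # no (starts with a digit)
valid_name "vol " && echo yes || echo no              # no (trailing space)
```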
To map the volumes to the hosts, select the volumes and click Map to Host. A window will
pop up with a drop-down list of the hosts, from which you choose the host to map the
volumes to. Choose the correct host and click Next.
Note: Best practice is to map the volume to the host with the same SCSI ID it used before
the migration, which you should have recorded in Step 3.
Figure 6-12 shows the menu for choosing the host to map the volumes to.
After choosing the host, you enter the Modify Mappings panel of the IBM Storwize
V7000. On the right, your newly mapped volumes are highlighted in yellow. You can
change the SCSI ID of the new mappings. Click OK to complete the mapping.
Figure 6-13 on page 211 shows the Modify Mappings panel in the migration wizard.
When the mapping completes, in Step 6 of the storage migration wizard you will find that the
Host Mappings column of the volumes changes from No to Yes. A scan can be performed on
the host to discover and verify the new devices. Click Next to go to the next step of the
storage migration wizard.
Figure 6-14 shows Step 6 of the storage migration wizard with host mappings modified.
7. Select the destination storage pool for the data migration, and click Next.
The destination storage pool can be an external or an internal storage pool. Make sure there
is enough space in the selected pool.
After you click Next, the migration begins. The migration runs in the background and results
in a copy of the data being placed on the MDisks in the selected storage pool. The process
uses the volume mirroring function included with the IBM Storwize V7000; when it is
complete, the volumes have pointers to both the new copy in the selected storage pool and
the original copy on the external storage system.
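The volume mirroring that the wizard drives can be observed, or performed manually, from the CLI. This is a sketch; the pool and volume names are example values:

```shell
# Add a second copy of the image mode volume in the target pool
# (this is the operation the wizard performs in the background)
svctask addvdiskcopy -mdiskgrp targetpool migrated_vol

# Monitor the synchronization progress of the new copies
svcinfo lsvdisksyncprogress

# Once synchronized, the wizard's Finalize step is equivalent to removing
# the original image mode copy, for example:
#   svctask rmvdiskcopy -copy 0 migrated_vol
```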
8. Click Finish to end the storage migration wizard in Step 8.
The end of the storage migration wizard is not the end of the data migration; it has just
begun. After clicking Finish in Step 8, you can see the migration progress in the migration
panel, along with the target storage pool to which your volumes are being migrated and the
status of the volumes.
Figure 6-17 on page 213 shows the data migration progress in the migration panel.
When the migration progress reaches 100%, select the volumes and click Finalize in the
Actions drop-down menu in the migration panel, as shown in Figure 6-18. The image mode
copies of the volumes on the original external storage system are deleted, the associated
MDisks are removed from the storage pool, and the status of those MDisks becomes
unmanaged.
Figure 6-18 shows how to finalize the data migration in IBM Storwize V7000.
After you select Finalize in the Actions list, the IBM Storwize V7000 asks you to confirm
finalizing the migration for the volumes. Verify the volume names and the number of
migrations you are finalizing, and if you are satisfied, click OK.
When the finalization completes, the data migration to the IBM Storwize V7000 is done. You
can then unzone and remove the legacy storage system from the IBM Storwize V7000.
For more information on advanced migration functions, refer to Chapter 7, “Storage Pools” on
page 217 and Chapter 9, “External Storage Virtualization” on page 337.
In this section, we start with the environment shown in Figure 7-1. At this point, all internal drives remain unconfigured; the steps to configure internal storage are covered later. The existing MDisks come from external storage, and example storage pools, volumes, and hosts have already been created for use.
You can learn how to manage MDisks in 7.2, “Work with MDisks” on page 237; how to
manage storage pools in 7.3, “Work with Storage Pools” on page 264; how to work with
external storage in << refer to external storage chapter>>; how to create volumes in << refer
to basic volume configuration chapter>>; and how to create hosts in << refer to basic host
configuration chapter>>.
The IBM Storwize V7000 storage system provides a dedicated Internal panel for managing all internal drives.
You can access the Internal panel through the Getting Started panel by clicking the Internal Drives icon. Extended help information for internal drives then appears below; click Physical Storage and you are taken to the Internal panel.
Figure 7-1 shows how to access the Internal panel from the Getting Started panel.
Figure 7-1 Access the Internal panel from Getting Started panel
The other way to access the Internal panel is from the Physical Storage functional icons on the left-hand side.
Figure 7-2 on page 219 shows how to access the Internal panel from the Physical Storage functional icons on the left-hand side.
Figure 7-2 Access Internal panel from Physical Storage function icon
The Internal panel, as shown in Figure 7-3, gives an overview of all your internal drives. On the left of the panel is a catalog of the internal drives, which shows how many different types of internal disks are in this IBM Storwize V7000 storage system. Select any type on the left, and the internal disks of that type appear on the right. Select All Internal, and all of your internal disks are listed on the right.
On the right-hand side of the Internal panel, the internal disks of the selected type are listed along with their general information, including the drive ID, capacity, drive role, status, MDisk name, enclosure ID, drive slot, drive type, and so on.
In addition, the current internal storage capacity allocation indicator appears at the top right. The Total Capacity shows the overall internal storage capacity in this IBM Storwize V7000 storage system; the MDisk Capacity shows the internal storage capacity that has been assigned to MDisks; and the Spare Capacity shows the internal storage capacity used for hot spares. The percentage bar currently indicates 0% capacity allocated, because no internal storage has been configured in this example.
1. The Fix Error action starts the fix procedure, << refer to RAS chapter>>.
2. Internal drives can be taken offline when there are problems with them by clicking Take Offline in the Actions drop-down list. A confirmation window appears as shown in Figure 7-5 on page 221. The IBM Storwize V7000 storage system prevents the drive from being taken offline if doing so could result in data loss. It is recommended to take a drive offline only when a spare drive is available.
Note: Choosing the option to take internal drives offline even if redundancy is lost on the
array could lead to potential data loss.
3. Internal drives in the IBM Storwize V7000 storage system can be assigned several roles and are designated as unused, candidate, or spare. These roles mean:
– Unused: the drive is not in use and will not be used as a spare.
– Candidate: the drive is available for use in an array.
– Spare: the drive can be used as a hot spare if required.
Select Mark as... in the Actions drop-down list, and select the role you want the drive to be assigned, as shown in Figure 7-6 on page 222.
4. Use the Identify action to turn on the LED light so you can easily identify a drive that needs
to be replaced, or that you want to troubleshoot.
Figure 7-7 shows the information when you click the Identify action.
Figure 7-8 on page 223 shows the Properties tab with default format.
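The drive actions described above are also available from the command-line interface. The following transcript is a sketch only; the cluster prompt and the drive ID 3 are illustrative assumptions, not values taken from this example:

```
IBM_2076:ITSO-Storwize:admin> svctask chdrive -use spare 3
IBM_2076:ITSO-Storwize:admin> svcinfo lsdrive 3
```

The chdrive -use parameter accepts the same roles discussed above (unused, candidate, spare), and lsdrive displays the drive's properties.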
will pop up to guide you through the process of configuring internal storage, as shown in Figure 7-3 on page 219.
The wizard will include all the candidate internal drives in the configuration. If you have
internal drives with the unused role, a pop-up window will appear to ask you if you want to
include the unused drives in the configuration.
Usage of the storage configuration wizard simplifies the initial disk drive setup and offers two
options:
Use the recommended configuration
Select a different configuration
Selecting Use the recommended configuration takes you through the wizard discussed in “Usage of the Recommended Configuration” on page 226. Selecting Select a different configuration uses the wizard discussed in “Select a Different Configuration” on page 229.
Before going through the storage configuration wizard, we first introduce the IBM Storwize V7000 RAID configuration presets; the basic concept of RAID itself is described in << refer to the overview chapter>>.
Table 7-1 on page 225 describes the presets that are used for solid-state drives (SSDs) for
the IBM Storwize V7000 storage system.
Note: In all SSD RAID instances, drives in the array are balanced across enclosure chains
if possible.
Table 7-2 describes the RAID presets that are used for hard disk drives for the IBM Storwize
V7000 storage system.
Here are the recommended RAID presets for different drive classes:
SSD Easy Tier preset for solid state drives.
Basic RAID-5 for SAS drives.
Basic RAID-6 for Nearline SAS drives.
Using the recommended configuration, spare drives are also automatically created as
needed to meet the spare goals of the presets. Under automatic creation, one spare drive will
be created out of every 24 disk drives with the same drive class on a single chain.
For example, if you have 20 x 450GB 10K SAS drives on one chain, one drive in these 20
drives will be marked as a spare drive; if you have 20 x 450GB 10K SAS drives on both
chains, which means 10 drives in each chain, then one spare drive on each chain will be
created. So if you have 40 x 450GB 10K SAS drives on both chains, then two spare drives on
each chain will be created and you have a total of 36 drives that can be the members for the
RAID setup.
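The one-spare-per-24-drives rule described above can be sketched as a small calculation. The function below is purely illustrative (it is not a product command) and assumes the spare count per chain is the ceiling of N/24:

```shell
#!/bin/sh
# Illustrative sketch only: estimate how many spare drives the
# recommended configuration creates for N drives of one drive
# class on a single chain, using the one-spare-per-24-drives rule.
spares_per_chain() {
  drives=$1
  # ceiling division: 1-24 drives -> 1 spare, 25-48 -> 2, and so on
  echo $(( (drives + 23) / 24 ))
}

spares_per_chain 20   # 20 x 450GB 10K SAS drives on one chain -> 1 spare
```

Because spares are counted per chain and per drive class, the same calculation is repeated for each chain and each class.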
Spare drives in the IBM Storwize V7000 are global spares. This means that any spare which
is at least as big as the drive which is being replaced can be used in an array. Thus an SSD
array with no SSD spare available would use an HDD spare instead.
In our example, after the recommended configuration, three arrays using the Basic RAID-5 preset are offered with one hot spare. If the proposed configuration meets your requirements, click Finish, and the system automatically creates array MDisks with sizes equivalent to the arrays, as shown in Figure 7-13.
Storage pools are also automatically created to contain the MDisks with similar performance
characteristics, including the consideration of RAID type, number of member drives, drive
class and so on.
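For reference, an equivalent configuration can also be built manually from the CLI by creating a pool and then an array MDisk inside it. The pool name, extent size, and drive IDs below are illustrative assumptions only:

```
IBM_2076:ITSO-Storwize:admin> svctask mkmdiskgrp -name mdiskgrp0 -ext 256
IBM_2076:ITSO-Storwize:admin> svctask mkarray -level raid5 -drive 0:1:2:3:4:5:6:7 mdiskgrp0
```

The mkarray command takes the RAID level, a colon-separated list of candidate drive IDs, and the storage pool that will contain the resulting array MDisk.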
When an array is created, the array members are synchronized with each other by a
background initialization process. You can monitor the progress of the initialization process by
clicking the status indicator of Running Tasks, as shown in Figure 7-12. The array is
available for I/O during this process, and initialization has no impact on availability due to
member drive failures.
The capacity allocation indicator shows that the allocated capacity has reached 96% after configuration, as shown in Figure 7-13 on page 229.
If the recommended configuration is not what you need, choose Select a different
configuration and continue with the more flexible setup as shown in “Select a Different
Configuration” on page 229.
Click Next and select an appropriate RAID preset as shown in Figure 7-15 on page 231.
Choose RAID5 for example, and click Next. You can then select the number of drives to provision in the configuration and decide whether you want the wizard to automatically configure hot spare drives for you. Usage of this policy is optional; spare drives can also be configured manually.
Furthermore, you have two options for how to configure your storage:
1. Performance optimized setup:
In a performance optimized setup, the IBM Storwize V7000 always uses eight physical disk drives in a single array, except in the following situations:
RAID 6 uses 12 disk drives
SSD Easy Tier uses 2 disk drives
As a consequence all arrays with similar physical disks provide the same performance. The
remaining disks can be used in a different array. Figure 7-16 on page 232 shows an example
of the performance optimized setup.
Note: If the performance optimized configuration cannot fully use the number of drives you
choose to provision, you need to configure the remaining unconfigured drives separately.
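The grouping behavior of the performance optimized setup can be sketched as follows. This is an illustrative calculation only, using the array widths stated above (8 drives per array for RAID 5, 12 for RAID 6, 2 for SSD Easy Tier):

```shell
#!/bin/sh
# Illustrative sketch only: the performance optimized setup builds
# full-width arrays and leaves the remaining drives unconfigured.
arrays_and_leftover() {
  drives=$1   # number of drives provisioned
  width=$2    # array width: 8 for RAID 5, 12 for RAID 6, 2 for SSD Easy Tier
  echo "$(( drives / width )) arrays, $(( drives % width )) drives unconfigured"
}

arrays_and_leftover 23 8    # e.g. 23 SAS drives with RAID 5
arrays_and_leftover 24 12   # e.g. 24 Nearline SAS drives with RAID 6
```

For example, 23 candidate drives at RAID 5 yield two 8-drive arrays with seven drives left unconfigured, which matches the seven unconfigured drives seen later in this example.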
The goal of this algorithm is to create a setup with the maximum usable capacity, depending
on the selected RAID level.
The IBM Storwize V7000 system tries to achieve the width goal for each RAID array before it
creates a new one. The width goals for the array levels are shown in Table 7-3.
(For example, the width goal for a RAID 6 array is 12 disks.)
All disks are used in the capacity optimized setup. There are no “unconfigured devices” as in
a performance optimized setup.
After you have decided on the setup to be applied to your configuration, you can proceed to
the next step.
In this example, we choose the performance optimized setup as shown in Figure 7-16 on
page 232 and click Next.
In the next step, choose the storage pool to assign the new capacity to. You can select an existing storage pool, either one with no MDisks in it or one with similar performance characteristics, which the IBM Storwize V7000 lists automatically as shown in Figure 7-18 on page 235, or you can create a new storage pool as shown in Figure 7-19 on page 236.
Click Finish to finalize the wizard. After the wizard completes, our configuration can be found
on the right side of the Internal panel, as shown in Figure 7-20 on page 237.
With the performance optimized setup, two MDisks with sizes equivalent to the arrays have been created. A storage pool has also been created if you chose to create a new storage pool to contain the new internal capacity. Seven unconfigured drives remain in the drive list for further actions. The array initialization process appears in the status indicator of Running Tasks, and the capacity allocation indicator has been updated to 67% as well.
You can access the MDisks panel through the Getting Started panel by clicking the MDisks icon. Extended help information for MDisks then appears below; click Physical Storage and you are taken to the MDisks panel.
Figure 7-21 on page 238 shows how to access the MDisks panel from the Getting Started
panel.
Figure 7-21 Access the MDisks panel from Getting Started panel
Another way to access the MDisks panel is from the Physical Storage functional icons on the left-hand side.
Figure 7-22 on page 239 shows how to access the MDisks panel from the Physical Storage
functional icons on the left hand side.
Figure 7-22 Access MDisks panel from Physical Storage function icon
The MDisks panel, as shown in Figure 7-23 on page 240, provides easy access to manage all your MDisks. All MDisks, internal and external, are listed in the MDisks panel with general information, including the name, status, capacity, mode, the storage pool it belongs to, and, for external LUNs, the storage system it comes from, its LUN ID, and the assigned tier.
You can find more information on how to attach external storage to IBM Storwize V7000
storage system in << refer to external storage>>.
Note: If there is existing data on the unmanaged MDisks you need to preserve, do not
select Add to pool on this LUN as this will destroy the data. Use Import instead, which is
described in 7.2.2, “Import MDisks” on page 244.
In the next step, choose the storage pool you want to add the MDisk to, and click Add to Pool at the bottom, as shown in Figure 7-25.
After the IBM Storwize V7000 system completes the action, you will see that the MDisk is in
the pool you selected as shown in Figure 7-26.
In some cases, you may need to remove MDisks from storage pools to reorganize your
storage allocation. You can remove MDisks from storage pools by selecting the MDisks and
choosing Remove from Pool from the right click menu or the Actions drop down list, as
shown Figure 7-27.
The next step to remove MDisks from a storage pool is shown in Figure 7-28 on page 243. You need to confirm the number of MDisks you want to remove. If there is data on the MDisks and you still need to remove them from the pool, select the checkbox labeled Remove the MDisk from the storage pool even if it has data on it. The system migrates the data to other MDisks in the pool.
Note: Make sure you have enough available capacity left in the storage pool for the data on the MDisks to be removed.
After you click Delete, data migration off the MDisks to be removed starts. You can find the
migration progress in the status indicator of Running Tasks, as shown in Figure 7-29 on
page 244.
Figure 7-29 Data migration progress when removing MDisks from the pool
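From the CLI, the equivalent is the rmmdisk command; with -force the system migrates the data off the MDisk before removing it. The names below are illustrative:

```
IBM_2076:ITSO-Storwize:admin> svctask rmmdisk -mdisk mdisk3 -force Pool1
```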
Image mode is the only mode that preserves the data on the MDisks. As mentioned in 7.2.1,
“Add MDisks to Storage Pools” on page 240, if you add an MDisk that contains existing data
to a storage pool, you lose the data that it contains.
Select an unmanaged MDisk, choose Import in the right click menu or in the Actions drop
down list, as shown in Figure 7-30 on page 245.
As shown in Figure 7-31 on page 246, the import wizard will be activated to guide the import
process.
In the first step of the import wizard, there are two checkboxes that apply to two special scenarios:
Tick the first checkbox only if the volume that you are importing was originally a thin-provisioned volume that was exported to an image mode volume on the cluster. You can find more information on exporting volumes in Chapter 8, “Advanced Host and Volume Administration” on page 271.
The second checkbox is ticked by default, which means the IBM Storwize V7000 enables caching for image mode volumes. You can clear the second checkbox to disable caching if you are using copy services on the external storage to maintain data consistency to the application server between the local and remote sites at the external disk controller level. However, it is recommended to use the copy services of the IBM Storwize V7000 for disaster recovery in the virtualized environment under it. You can find more information about virtualizing external storage in Chapter 9, “External Storage Virtualization” on page 337.
In the next step you can choose the destination storage pool if you want to migrate the data
on image mode MDisks. If you select the destination pool here and click Finish, as shown in
Figure 7-32, the migration will begin after the import of the MDisks.
You can find the migration progress in the status indicator of Running Tasks, as shown in
Figure 7-33 on page 247.
The migration progress can also be found in the Migration panel as shown in Figure 7-34 on
page 248.
After the migration you will find the volume in the destination pool, as shown in Figure 7-35.
All the data has now been migrated from the source MDisk to the target storage pool. With no data on it, the source MDisk has changed to managed mode and can be used to serve other volumes, as shown in Figure 7-36 on page 249.
Alternatively, you can leave nothing selected in the second step of the import wizard and then click Finish, as shown in Figure 7-37.
A warning message appears after you click Finish without selecting anything, as shown in Figure 7-38 on page 250. Click OK to proceed.
After import, you can see that the MDisk mode has changed to image mode as shown in
Figure 7-39.
Without migration, the related image mode volume will remain in the temporary storage pool,
which is automatically created by IBM Storwize V7000, as shown in Figure 7-40 on page 251.
You can manually start your migration from the image mode volume through the volume
actions Migration to Another Pool or Volume Copy Actions. More information can be
found about volume actions in Chapter 5, “Basic Volume Configuration” on page 155.
You can also migrate the data on imported MDisks to other storage pools in the Migration panel. How to access the Migration panel is described in Chapter 6, “Migration Wizard” on page 197.
If you have already imported MDisks from external storage systems without data migration,
you will find it in the migration panel as shown in Figure 7-41 on page 251.
The candidate volumes for migration in the list are directly mapped to the image mode MDisks you have imported. Select the volume you want to migrate and click Start Migration in the Actions drop-down menu, as shown in Figure 7-42.
After you start the migration, IBM Storwize V7000 will let you choose the destination storage
pool. Choose the target storage pool you want to migrate the data from the volumes to and
click Add to Pool, as shown in Figure 7-43 on page 252.
Now the data migration has begun, and you can find the migration progress in the migration
panel, as shown in Figure 7-44.
Since the migration starts in the Migration panel using the volume copy function, you can
also find the progress of volume copy synchronization in the status indicator of Running
Tasks, as shown in Figure 7-45 on page 254. You can find more explanation on the volume
copy function in 8.3, “Advanced Volume Copy Functions” on page 322.
As described in Chapter 6, “Migration Wizard” on page 197, when the migration progress reaches 100%, click Finalize to complete the migration process, which deletes the copy on the image mode MDisk and keeps the other one. The source MDisk, which was originally in image mode, is removed from its storage pool and its mode changes to unmanaged, as shown in Figure 7-46 on page 254.
In the Migration panel, you can also stop a migration after it starts by clicking Stop Migration in the Actions drop-down list, as shown in Figure 7-47. After the migration is forced to stop, the second copy of the data is removed. Start the migration again if needed.
The IBM Storwize V7000 asks you to verify and confirm the migrations you choose to stop. Verify the volume names in the migrations and the number of migrations you want to stop, and click OK to confirm your decision.
You can set the spare goal of the MDisk by selecting Set Spare Goal in the RAID Actions
menu. Then input your new spare goal for the MDisk as shown in Figure 7-50 on page 257.
After you click Save, the IBM Storwize V7000 remembers your goal for spare drives. If your current spare drives do not meet your goal for the MDisk, you will receive a warning in the event log that says “Array mdisk is not protected by sufficient spares”. Manually mark drives as spares as needed, as described in 7.1.1, “Actions on Internal Drives”.
The Swap Drive action can be used to replace drives in the array. Select an MDisk and choose Swap Drive in the RAID Actions menu. First, select a drive to swap out of the MDisk as shown in Figure 7-51, and click Next.
Then you need to select a drive to swap into the MDisk. The IBM Storwize V7000 lists the drives in candidate and spare mode as possible drives to swap in, as shown in Figure 7-52 on page 258.
After you click Finish, the IBM Storwize V7000 starts exchanging the array members in the background. When it is done, you will find that the members of the array have changed. The swapped-out drive takes on the original role of the drive swapped in.
If array MDisks have reached the end of their lifecycle, they can be deleted by selecting the MDisks and choosing Delete in the RAID Actions menu. The IBM Storwize V7000 then asks you to confirm the delete action, as shown in Figure 7-53 on page 259. You need to confirm the number of array MDisks you want to delete. If there is data on the MDisks and you still need to delete them, select Delete the RAID array MDisk even if it has data on it. The system migrates the data to other MDisks in the pool.
Note: Make sure you have enough available capacity left in the storage pool for the data on
the MDisks to be removed.
After you click Delete, all the member drives of the MDisks return to candidate mode, and the data on the MDisks is migrated to other MDisks in the pool.
While internal drives have their tier chosen automatically by the IBM Storwize V7000, external MDisks are assigned to the generic HDD tier by default; this assignment can be changed manually by the user.
To assign a specific tier to one MDisk, select it and choose Select Tier in the right click menu
or in the Actions drop down list as shown in Figure 7-54 on page 260.
As the MDisk we select in this example has been assigned to the HDD tier by default, we
select the desired tier for this MDisk to be SSD, and click OK, as shown in Figure 7-55.
After the action has been completed successfully, the MDisk can be found in the tier SSD as
shown in Figure 7-56.
When external storage controllers are added to the IBM Storwize V7000 environment, as described in Chapter 9, “External Storage Virtualization” on page 337, the IBM Storwize V7000 generally discovers the controllers automatically, and the LUNs that they present are displayed as unmanaged MDisks. However, if you have attached new storage and the IBM Storwize V7000 has not detected it, you might need to run this command before the cluster detects the new LUNs. Similarly, if the configuration of the external controllers is modified afterwards, the IBM Storwize V7000 might be unaware of these configuration changes. Use this action to rescan the Fibre Channel network and update the list of unmanaged MDisks.
Note: Detect MDisk is asynchronous and returns a prompt while it continues to run in the
background.
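From the CLI, this action corresponds to the detectmdisk command; you can then list the newly discovered MDisks, for example by filtering on unmanaged mode (shown here as a sketch):

```
IBM_2076:ITSO-Storwize:admin> svctask detectmdisk
IBM_2076:ITSO-Storwize:admin> svcinfo lsmdisk -filtervalue mode=unmanaged
```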
MDisks can be excluded from the IBM Storwize V7000 because of multiple I/O failures. These failures might be caused by link errors. Once a fabric-related problem has been fixed, the excluded disk can be added back into the IBM Storwize V7000 by selecting the MDisks and choosing Include Excluded MDisk in the right-click menu or in the Actions drop-down list.
The MDisk can be renamed easily by selecting the MDisk and choosing Rename in the right
click menu, or in the Actions drop down list. Input the new name of your MDisk and click
Rename, as shown in Figure 7-57.
The volumes dependent on an MDisk can be shown by selecting the MDisk and choosing
Show Dependent Volumes in the right click menu, or in the Actions drop down list. The
volumes will be listed with general information as shown in Figure 7-58 on page 262.
Several volume actions can be taken on the volumes by selecting a volume and choosing the required action in the right-click menu or in the Actions drop-down list, as shown in Figure 7-59. More information on volume actions can be found in Chapter 5, “Basic Volume Configuration” on page 155.
The Properties action on the MDisk will show the information you need to identify it. On the
MDisks panel, select the MDisk and choose Properties in the right click menu or in the
Actions drop down list, and you will get a pop-up window that displays the information as
shown in Figure 7-60 on page 263.
There are three tabs in this information window. The Overview tab contains information about
the MDisk itself.
With the default format in the Overview tab, there is only general information displayed.
Choose Show Details to get more detailed information about the MDisk, as shown in
Figure 7-61.
The Member Drives tab is only for array MDisks. In the Member Drives tab you will find all
the member drives of this MDisk displayed as shown in Figure 7-62. You can also take
required actions here as described in 7.1.1, “Actions on Internal Drives”.
The last tab is the Dependent Volumes tab and the content is the same as described in the
previous Show Dependent Volumes action.
You can access the Pools panel through the Getting Started panel by clicking the Pools icon. Extended help information for storage pools then appears below; click Physical Storage to go to the Pools panel.
Figure 7-63 on page 265 shows how to access the Pools panel from the Getting Started
panel.
Figure 7-63 Access the Pools panel from Getting Started panel
The other way to access the Pools panel is from the Physical Storage functional icons on the left.
Figure 7-64 on page 266 shows how to access the Pools panel from the Physical Storage functional icons on the left.
Figure 7-64 Access Pools panel from Physical Storage function icon
The Pools panel, as shown in Figure 7-65 on page 266, is where you manage all your storage pools. On the left side of the Pools panel your storage pools are listed; with the help of the filter you can list only the storage pools you need. Choose a storage pool on the left of the Pools panel, and more information about that pool appears on the right.
The name of the storage pool and its status are displayed beside the storage pool icon. The name of the storage pool can be changed by clicking the name and entering a new name.
Below the name of the storage pool you will find the status of the storage pool and a summary of the MDisks and volumes associated with the pool. The Easy Tier status can also be found here; more information about Easy Tier can be found in Chapter 10, “Easy Tier” on page 351.
On the top right-hand side of the Pools panel is a volume allocation indicator, which shows the capacity allocated to volumes versus the total capacity. The percentage calculated in the indicator makes it easy to see the utilization of your storage pool, which simplifies management.
The MDisks in the storage pool are listed in the lower section on the right with general information, as shown in Figure 7-65 on page 266. The actions that can be taken on the MDisks are not duplicated here, as they are the same as described in 7.2, “Work with MDisks” on page 237.
To create a new storage pool, click the New Pool button at the top left; a simple wizard then guides you through the process, as shown in Figure 7-66 on page 267. Enter a name for your new storage pool, or a name will be assigned automatically by the IBM Storwize V7000. Do not forget to choose an icon for your storage pool; an icon that represents the type of MDisks in the pool makes the pool easier to manage. Click Next when you have made your decision.
In the next step, the MDisks that can be added to this storage pool are listed, as shown in Figure 7-67. It is possible to select no MDisks, which means you are not ready to put any MDisks into the storage pool yet, and an empty storage pool will be created. Click Finish to proceed.
If you choose not to put any MDisks into the pool a warning message will appear to confirm
that you wish to create an empty pool as shown in Figure 7-68 on page 268. Click OK to
continue.
After the creation you will find your new pool listed on the left hand side of the Pools panel. If
the new pool is empty, you can add MDisks to the pool afterwards in the MDisks panel as
described in 7.2.1, “Add MDisks to Storage Pools”.
To remove a storage pool, select Delete Pool from the Actions drop-down list, as shown in Figure 7-69.
A confirmation window appears, as shown in Figure 7-70 on page 269. The volumes listed in the box are those whose only copy is in this pool. These volumes will be deleted as well when you delete the storage pool, so check all the volumes listed and make sure they are all intended to be deleted. If you have volumes or MDisks in this pool, you can proceed only if you have ticked the checkbox Delete all volumes, host mappings, and MDisks that are associated with this pool. Make your selection and confirm that you wish to continue by clicking Next.
Note: Take care as after you delete the pool, all the data in the pool will be lost except for
image mode MDisks.
After you delete the pool, all the associated volumes and their host mappings are removed. All the managed or image mode MDisks in the pool return to a status of unmanaged after the pool is deleted, all the array mode MDisks in the pool are removed, and their member drives return to candidate mode.
This chapter covers all the host and volumes actions, except the basic configuration (covered
previously) and EasyTier, which is described in Chapter 10, “Easy Tier” on page 351.
We assume that you have already created some hosts in your Storwize V7000 GUI and that
some volumes are already mapped to them. In this section we will cover the three functions
that are covered in the host section of the IBM Storwize V7000 GUI as shown in Figure 8-1.
All Hosts (8.1.1, “Modify, add and delete host mappings” on page 272).
Ports by Host (8.1.2, “Add and delete host ports” on page 287).
Host Mappings (8.1.3, “Host mappings overview” on page 295).
As you can see in our example, four hosts have been created and volumes are already mapped to them; we will use these hosts to show the modification possibilities. If you highlight a host, you can either click Action (Figure 8-3) or right-click it (Figure 8-4) to see all available tasks.
Modify Mappings
Highlight a host and select Modify Mappings as shown in Figure 8-3 to open the associated
view as shown in (Figure 8-5).
In the upper left you will see that the highlighted host is selected. The two boxes below show
all available volumes. The left box shows the volumes that are ready for mapping to this host,
and the right box includes the volumes already mapped. In our example one volume with
SCSI ID 1 is mapped to the host, and there are three more available. Highlight a volume in the
left box, and select the upper arrow (pointing to the right) to map another volume to a host as
shown in (Figure 8-6 on page 275).
The changes are marked in yellow and the OK and Apply buttons are now enabled. If you click OK, the changes are applied (Figure 8-7) and the view closes; if you click Apply, the changes are submitted to the system (Figure 8-7), but the view remains open for further changes, as shown in Figure 8-8.
You can now select another host in the hosts drop down box as shown in Figure 8-9 to modify
the host settings for it.
Highlight the volume to be modified again and click the right arrow button to move it to the
right box. The changes will again be shown in yellow there. If you right click the yellow
volume, you are able to change the SCSI ID, which will be used for the host mapping, as
shown in Figure 8-10 on page 277.
Click Edit SCSI ID and click OK to change the ID as shown in Figure 8-11.
The changes are displayed in the modify mappings view as shown in Figure 8-12, click Apply
to submit the changes as shown in Figure 8-13.
If you would like to remove a host mapping, the required steps are essentially the same, except that you select a volume in the right box and click the left arrow button to remove the mapping, as shown in Figure 8-14 on page 279.
Figure 8-15 shows that the selected volume has been moved to the left box, to unmap it.
Click OK or Apply to submit the changes to the system (Figure 8-16 on page 280).
Once you are done with all host mapping modifications click OK to return to the modify
mappings view (Figure 8-5).
You will be prompted for the number of mappings you want to remove; enter the number and click Unmap, as shown in Figure 8-18.
Note: If you click Unmap, all access for this host to volumes controlled by the IBM Storwize
V7000 system is removed. Make sure that you run the required procedures in your
host operating system before removing the volumes from your host.
Figure 8-20 shows that the selected host does not have any host mappings anymore.
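The same unmapping can be performed with the CLI; this is a sketch, with the host and volume names used as examples:

```
svctask rmvdiskhostmap -host ESX_RZA Fileserver
```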
Rename
To rename a host highlight it and click Rename as shown in Figure 8-21 on page 282.
Enter a new name and click Rename as shown in Figure 8-22. If you click Reset, the name
field reverts to the original host name.
After the changes have been applied to the system click Close as shown in Figure 8-23.
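A host can also be renamed from the CLI; the old and new names below are only examples:

```
svctask chhost -name ESX_RZA_new ESX_RZA
```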
Delete a Host
To delete a host highlight it and click Delete as shown in Figure 8-24 on page 283.
Enter the number of hosts you want to delete and click Delete as shown in Figure 8-25.
If you want to delete a host that has volumes mapped, you have to force the deletion by
selecting the checkbox in the lower part of the window. If you select this checkbox, the host
is deleted and no longer has access to this system.
After the task has completed click Close as shown in Figure 8-26 to return to the mappings
view.
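The CLI equivalent is a sketch along these lines; the host name is an example:

```
svctask rmhost ESX_RZA
svctask rmhost -force ESX_RZA   # use -force if the host still has volume mappings
```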
Host Properties
Highlight a host and select Properties (Figure 8-27) to open the Host Details view as shown
in Figure 8-28.
The Host Details view gives you an overview of your host properties. Three tabs are
available: Overview, Mapped Volumes, and Port Definitions.
The Overview tab is shown in Figure 8-28; select the Show Details checkbox to get more
information about the host, as shown in Figure 8-29.
Click the Edit button and you will be able to change the host properties as shown in
Figure 8-30.
Note: You can use port masks to control the node target ports that a host can access.
Instead of changing the SAN zoning, this can be useful to limit the number of logins with
mapped volumes visible to a host multipathing driver. The best practice
recommendation is to limit each host to four paths.
Make the changes, if required, and click Save to apply them (Figure 8-30) and close the
editing view.
The Mapped Volumes tab gives you an overview of which volumes are mapped to this host,
with their SCSI IDs and UIDs, as shown in Figure 8-31.
The Port Definitions tab shows you the configured host ports of a host and gives you status
information as shown in Figure 8-32.
This view offers you the option to start Add and Delete Port actions, which are described
in 8.1.2, “Add and delete host ports” on page 287.
On the left side of the view all hosts are listed, and the icons show whether a host is
a Fibre Channel (orange cable) or iSCSI (blue cable) host. The properties of the highlighted
host are shown on the right side. If you select the New Host button, the wizard described in
4.2, “Creating Hosts using the GUI” on page 142 starts. If you expand the Action panel as
shown in Figure 8-34 on page 288, the same tasks as described in 8.1.1, “Modify, add and
delete host mappings” on page 272 can be started from this location.
If you click the drop-down list, you will get a list of all known Fibre Channel host ports as
shown in Figure 8-37.
Select the WWPN you want to add and click Add Port to List as shown in Figure 8-38.
It is possible to repeat this step to add more ports to a host. If the WWPN of your host is not
available in the drop down list, check your SAN zoning, and rescan the SAN from the host.
Afterwards click Rescan and the new port should now be available in the drop down list.
If you want to add an offline port simply type the WWPN of the port into the Fibre Channel
Ports box as shown in Figure 8-39 and click Add Port to List.
The port appears as unverified, as shown in Figure 8-40, because it is not logged in to the
IBM Storwize V7000. The first time it logs on, its state changes to online automatically and
the mapping is applied to this port.
To remove one of the ports from the list simply click the red x next to it as shown in
Figure 8-41.
Click Add Ports to Host and the changes will be applied as shown in Figure 8-42.
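The same operation can be sketched with the CLI; the WWPN and host name are examples, not values from the book:

```
svctask addhostport -hbawwpn 210000E08B054CAA ESX_RZA   # may require -force if the port is not yet logged in
```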
Enter the initiator name of your host (Figure 8-44) and click Add Port to List as shown in
Figure 8-45.
Click Add Ports to Host to complete the task and apply the changes to the system as
shown in Figure 8-46 on page 293.
If you hold the Ctrl key on your keyboard, you can select several host ports to delete as
shown in Figure 8-48 on page 294.
Click Delete and enter the number of host ports you want to remove as shown in Figure 8-49.
Click Delete and the changes will be applied to the system as shown in Figure 8-50 on
page 295.
This generates a list of all hosts and volumes. In our example you can see that the host
“ESX_RZA” has two volumes mapped, along with the associated SCSI ID, volume name, and
volume unique identifier (UID). If you have more than one I/O Group, you also see which
volume is handled by which I/O Group.
If you highlight one line and click Actions as shown in Figure 8-52 on page 296 you will have
the following tasks available:
Unmap Volume (“Unmap Volume” on page 296)
Properties Host (“Properties Host” on page 297)
Properties Volume (“Properties Volume” on page 297)
Unmap Volume
Highlight one or more lines and select Unmap Volume, enter the number of entries to remove
(Figure 8-53), and click Unmap. This removes the mappings for all selected entries as
shown in Figure 8-54.
Properties Host
Selecting an entry and clicking on Properties Host as shown in Figure 8-52 on page 296 will
open the host properties view. The possibilities of this view are described in “Host Properties”
on page 284.
Properties Volume
Selecting an entry and clicking on Properties Volume as shown in Figure 8-52 on page 296
will open the volume properties view. The possibilities of this view are described in 5.1,
“Provisioning storage from the IBM Storwize V7000 and making it available for the host” on
page 156.
Figure 8-55 shows that there are three volume sections available to administer advanced
features:
All Volumes (8.2.1, “Advanced Volume Functions” on page 298 and 8.3, “Advanced
Volume Copy Functions” on page 322)
Volumes per Pool (8.3.1, “Volumes by Storage Pool” on page 328)
Volumes per Host (8.3.2, “Volumes by Host” on page 332)
This view lists all configured volumes on the system and provides the following information to
you:
Name: Displays the name of the volume. A + next to the name means that there are
several copies of this volume; click it to expand the view and list the copies as
shown in Figure 8-57.
Status: Gives you status information about the volume
Capacity: The capacity that is presented to the host. If there is a small blue
volume icon next to the capacity, the volume is thin-provisioned,
and the listed capacity is the virtual capacity, which may be larger than the real capacity
allocated on the system.
Storage Pool: Shows in which Storage Pool the volume is stored. If you have several
volume copies it shows you the pool of the primary copy.
UID: The volume Unique IDentifier.
Host Mappings: Shows whether the volume is mapped to at least one host.
To create a new volume click New Volume and follow the steps as described in 5.1,
“Provisioning storage from the IBM Storwize V7000 and making it available for the host” on
page 156.
Highlight a volume and click Actions to see the available actions for a volume as shown in
Figure 8-58.
It is also possible to right click a volume and select the actions there as shown in Figure 8-59
on page 300.
Depending on which kind of volume you view, there may be two more available Actions as
also shown in Figure 8-59.
Volume Copy Actions:
– Add Mirror Copy: only available for generic volumes
(“Add a Mirrored Volume Copy” on page 318)
– Thin Provisioned: only available for Thin-provisioned volumes
(“Edit Thin-provision Volume Properties” on page 319)
The modify mappings view appears. In the upper left corner you will see your selected
host, and the yellow volume is the selected volume that will be mapped, as shown in
Figure 8-61 on page 302. Click OK to apply the changes to the system.
After the changes have completed click Close to return to the All Volumes view as shown in
Figure 8-62.
Note: The Modify Mappings view is described in detail in 8.1.1, “Modify, add and
delete host mappings” on page 272.
After the task has completed click Close as shown in Figure 8-64 to return to the All Volumes
view.
If you want to remove a mapping, highlight the host and click Unmap from Host; this
removes the access for the selected host after you have confirmed it. If several hosts are
mapped to this volume (for example, in a cluster), only the highlighted host is removed.
Rename a Volume
To rename a volume, select Rename as shown in Figure 8-58 on page 300. A new window
appears; enter the new name there as shown in Figure 8-66.
If you click Reset the name field will always be reset to the currently active name of the
volume. Click Rename to apply the changes and click Close when it is done as shown in
Figure 8-67 on page 305.
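The CLI equivalent is roughly the following; the volume names are examples:

```
svctask chvdisk -name Fileserver_new Fileserver
```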
Shrink a Volume
IBM Storwize V7000 offers you the possibility to shrink volumes. However, you should only
use this feature if your host operating system supports it as well. Before shrinking a
volume, perform the preparation required in your host operating system to shrink a volume on
the storage. After you have prepared it, select Shrink as shown in Figure 8-58 on page 300. In
the opened view you can either enter the new size or enter how much the volume should
shrink; if you enter a value, the other line updates itself as shown in Figure 8-68.
Click Shrink to start the process and click Close as shown in Figure 8-69 on page 306 to
return to the All Volumes view.
Run the required procedures on your host to complete the shrink process.
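A CLI sketch of the shrink operation, assuming the volume name Fileserver and a reduction of 10 GB as example values:

```
svctask shrinkvdisksize -size 10 -unit gb Fileserver   # shrinks the volume by 10 GB
```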
Expand a Volume
If you want to expand a volume click Expand as shown in Figure 8-58 on page 300 and the
expand view will appear. Before you continue check if your operating system supports online
volume expansion. Enter the new volume size and click Expand as shown in Figure 8-70.
After the tasks have completed click Close as shown in Figure 8-71 to return to the All
Volumes View.
Run the required procedures in your operating system to use the newly available space.
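A CLI sketch of the expand operation, with an example volume name and size increase:

```
svctask expandvdisksize -size 10 -unit gb Fileserver   # expands the volume by 10 GB
```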
To migrate a volume to another storage pool, select Migrate to Another Pool as shown in
Figure 8-58 on page 300. The Migrate Volume Copy window appears. If your volume consists of
more than one copy, you are asked which copy you want to migrate to another storage pool,
as shown in Figure 8-72. If the selected volume consists of one copy, this selection box does
not appear.
Select the new target storage pool and click Migrate as shown in Figure 8-73.
The volume copy migration will start as shown in Figure 8-74. Click Close to return to the all
volumes view.
Depending on the size of the volume the migration process will take some time, but you can
monitor the status of the migration in the running tasks bar as shown in Figure 8-75.
After the migration has completed, the volume is shown in the new storage pool. In Figure 8-76 on
page 309 you can see that it was moved from pool mdiskgrp0 to pool DS3000_SATA.
The volume copy has now been migrated without any downtime to the new storage pool. It
would also be possible to migrate both volume copies to other pools online.
Another way to migrate volumes to another pool is to use the volume copy feature,
as described in “Migrate Volumes using the Volume Copy features” on page 327.
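The migration can also be started from the CLI; the pool and volume names below match the examples in this section:

```
svctask migratevdisk -mdiskgrp DS3000_SATA -vdisk Fileserver
svcinfo lsmigrate   # monitor the progress of running migrations
```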
Select a storage pool for the new image mode volume and click Finish as shown in
Figure 8-78 on page 310.
The migration will start as shown in Figure 8-79. Click Close to return to the All Volumes view.
Delete Volume
To delete a volume select Delete as shown in Figure 8-58. Enter the number of volumes you
want to delete and mark the checkbox if you want to force the deletion as shown in
Figure 8-80. You have to force it if the volume has host mappings or is used in FlashCopy
mappings or remote-copy relationships.
Click Delete and the volume will be removed from the system as shown in Figure 8-81.
Note: This removes all copies from your storage system and the data on the volume will
be lost. Before you perform this step, make sure you no longer need the data.
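The CLI equivalent is sketched below; the volume name is an example:

```
svctask rmvdisk Fileserver
svctask rmvdisk -force Fileserver   # use -force if the volume has host mappings or copy relationships
```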
Volume Properties
To open an advanced view of a volume select Properties as shown in Figure 8-58 on
page 300 and the volume properties will be displayed as shown in Figure 8-82 on page 312.
In this view three tabs are available:
Overview (“Overview” on page 312)
Host Maps (“Host Maps” on page 315)
Member MDisk (“Member MDisk” on page 317)
Overview
The Overview tab, as shown in Figure 8-82, gives you a basic overview of the volume
properties. On the left side of the view you will find common volume properties, and on the
right side you are shown information about the volume copies. To get a more detailed view,
check the Show Details checkbox in the bottom left corner as shown in Figure 8-83.
– Storage Pool: Shows in which pool the copy resides, what kind of copy it is
(generic or thin-provisioned), and gives you status information.
– Capacity: Displays the allocated (used) and the virtual capacity, as well as the
warning threshold and the configured grain size.
If you want to edit any of these settings click the Edit button and the view will change in the
modify mode as shown in Figure 8-84.
Inside the volume details view it is possible to change the following properties:
Volume Name
Mirror Sync Rate
Cache Mode
UDID
Make the changes if required and click Save as shown in Figure 8-85 on page 315.
Note: Setting the Mirror Sync Rate to 0% would disable the synchronization.
Host Maps
The second tab of the volume properties is Host Maps, as shown in Figure 8-87 on page 316.
All hosts that are currently mapped to this volume are listed in this view.
If you want to unmap a Host from this volume highlight it and select Unmap from Host.
Confirm the number of mappings to remove and click Unmap as shown in Figure 8-88.
The changes will be applied to the system as shown in Figure 8-89 on page 317. The
selected host will no longer have access to this volume. Click Close to return to the Host
Maps view.
Member MDisk
The third tab is Member MDisk, which lists all MDisks on which the volume resides. Select a
copy and the associated MDisks will appear in the view as shown in Figure 8-90.
Highlight an MDisk and click Actions to get a view of available tasks as shown in Figure 8-91
on page 318. The tasks are described in Chapter 7, “Storage Pools” on page 217.
The new copy has now been created and the data is synchronized as a background task.
Figure 8-94 shows that the volume named “Fileserver” now has two volume copies, and
there is one synchronization task running.
These changes apply only to the internal storage, so you do not have to make any changes on
your host.
After the task has completed click Close as shown in Figure 8-97.
The allocated space of the Thin-provisioned volume has now been reduced.
Note: You can only deallocate extents that do not contain stored data. If the space is
allocated because there is data on the extents, you will not be able to shrink the allocated
space.
The new space is now allocated, click Close as shown in Figure 8-99.
After the task has completed click Close as shown in Figure 8-101 on page 322 to return to
the All Volumes view.
If you look at the volume copies as shown in Figure 8-102, you will notice that one of the
copies has a star displayed next to its name (Figure 8-103).
Each volume has a primary and a secondary copy, and the star indicates the primary copy.
The two copies are always synchronized, which means that all writes are destaged to both
copies, but all reads are always done from the primary copy. By default, the primary role
alternates between Copy 0 and Copy 1 at creation time to balance the
reads across your storage pools. However, you should make sure that the primary
copies are placed according to the performance of your storage pools, and therefore you can
change the roles of your copies.
To do this, highlight the secondary copy and right-click it or select Actions; you will see
that there is also a Make Primary task, as shown in Figure 8-104 on page 323. It is usually a
best practice to place the volume copies on storage pools with similar performance, because
the write performance will be constrained if one copy is on a lower performance pool. This is
because writes must complete to both copies before the volume can provide acknowledgment
to the host that the write completed successfully. If you need high read performance only,
another possibility is to place the primary copy in an SSD pool and the secondary copy
in a normal disk pool. This maximizes the read performance of the volume while making sure
that you have a synchronized second copy in your less expensive disk pool. Of course, it is
also possible to migrate the copies online between storage pools; remember, in “Migrate
volume to another Storage Pool” on page 307 you can select which copy you want to migrate.
Select Make Primary and the role of the copy will be changed online. Click Close when the
task has completed as shown in Figure 8-105.
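The role change can also be done from the CLI; this sketch assumes copy 1 is the current secondary copy:

```
svctask chvdisk -primary 1 Fileserver   # make copy 1 the primary copy
```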
The volume copy feature can also be a powerful option to migrate volumes as described later
in “Migrate Volumes using the Volume Copy features” on page 327.
Thin-provisioned
This section includes the same functions as described in “Shrink Thin-provisioned space” on
page 320 to “Edit Thin-provisioned Properties” on page 321 as shown in Figure 8-106. You
can specify exactly the same settings for each volume copy.
To split a copy select Split into New Volume as shown in Figure 8-102. If you perform this
action on the primary copy, the remaining secondary copy will automatically become the
primary for the source volume. Enter a name for the new volume and click Split Volume
Copy as shown in Figure 8-107.
After the task has completed click Close to return to the All Volumes view where the copy
appears now as a new volume that can be mapped to a host as shown in Figure 8-108 on
page 325.
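A CLI sketch of the split operation, assuming copy 1 is split off and the new volume name is an example:

```
svctask splitvdiskcopy -copy 1 -name Fileserver_split Fileserver
```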
Select Validate Volume Copies as shown in Figure 8-102 on page 322. The Validate Volume
Copies window opens as shown in Figure 8-109 and there are the following options available:
Generate Event of differences: Use this option if you only want to verify that the mirrored
volume copies are identical. If any difference is found, the command stops and logs an
error that includes the logical block address (LBA) and the length of the first difference.
You can use this option, starting at a different LBA each time, to count the number of
differences on a volume.
Overwrite differences: Use this option to overwrite contents from the primary volume
copy to the other volume copy. The command corrects any differing sectors by copying the
sectors from the primary copy to the copies being compared. Upon completion, the
command process logs an event. This indicates the number of differences that were
corrected. Use this option if you are sure that either the primary volume copy data is
correct or that your host applications can handle incorrect data.
Return Media Error to Host: Use this option to convert sectors on all volume copies that
contain different contents into virtual medium errors. Upon completion, the command logs
an event, which indicates the number of differences that were found, the number that were
converted into medium errors, and the number that were not converted. Use this option if
you are unsure what the correct data is, and you do not want an incorrect version of the
data to be used.
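For reference, the three options correspond roughly to the repairvdiskcopy CLI command; the volume name is an example:

```
svctask repairvdiskcopy -validate Fileserver   # generate events for differences
svctask repairvdiskcopy -resync Fileserver     # overwrite differences from the primary copy
svctask repairvdiskcopy -medium Fileserver     # return media errors to the host
```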
Select which action to perform and click Validate to start the task. The volume is now
checked; click Close (Figure 8-110).
The validation process takes some time depending on the volume size, and you can check
the status in the running tasks view as shown in Figure 8-111 on page 327.
The copy will be deleted; click Close (Figure 8-113) to return to the All Volumes view.
1. Create a second copy of your volume in the target storage pool (“Add a Mirrored Volume
Copy” on page 318).
2. Wait until the copies are synchronized.
3. Change the role of the copies and make the new copy primary (8.3, “Advanced Volume
Copy Functions” on page 322).
4. Split or delete the old copy from the volume (“Split into New Volume” on page 324 or
“Delete this Copy” on page 327).
This migration process requires more user interaction, but it offers some benefits. For
example, suppose you migrate a volume from a Tier 1 storage pool to a lower performance
Tier 2 storage pool. In Step 1 you create the copy in the Tier 2 pool, while all reads are still
performed from the primary copy in the Tier 1 pool. After the synchronization, all writes are
destaged to both pools, but reads are still done only from the primary copy. Now you can
switch the roles of the copies online (Step 3) and test the performance of the new pool. When
you are done testing against your lower performance pool, you can decide to either split
or delete the old copy in Tier 1, or easily switch back to Tier 1 in seconds if the Tier 2
pool did not meet your requirements.
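The steps above can be sketched with the CLI as follows; the volume and pool names are examples from this chapter, and the copy IDs assume the new copy is created as copy 1:

```
svctask addvdiskcopy -mdiskgrp DS3000_SATA Fileserver   # Step 1: add a copy in the target pool
svcinfo lsvdisksyncprogress Fileserver                  # Step 2: wait until the copies are synchronized
svctask chvdisk -primary 1 Fileserver                   # Step 3: make the new copy primary
svctask rmvdiskcopy -copy 0 Fileserver                  # Step 4: delete the old copy
```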
The left part of the view is named “Pool Filter”, and all of your existing storage pools are
displayed there. More detailed information about storage pools is covered in Chapter 7,
“Storage Pools” on page 217.
To the upper right you will see information about the pool which you selected in the pool filter,
and the following information is covered there:
Pool Icon: Storage pools have different characteristics; it is possible to change the pool
icon to identify the pool type (“Changing the Storage Pool Icon” on page 330).
Pool Name: The name of the storage pool given during creation; it is possible to
change this name from this view (“Changing the Storage Pool Name” on page 331).
Pool Details: Gives you status information about the pool, such as the number of MDisks and
volume copies, as well as the Easy Tier status.
Volume allocation: Provides details about the available, allocated, and virtual space in
this pool.
The lower right lists all volumes that have at least one copy in this storage pool and provides
you the following information about them:
Name: Shows the name of the volume.
Status: Gives status feedback about the volume.
Capacity: Displays the capacity that is presented to the host. If the volume has the
green volume sign next to it, it is thin-provisioned and the listed capacity is the virtual
size.
UID: Shows the volume Unique IDentifier.
Host Mappings: Displays whether at least one host mapping exists.
It is also possible to create a new volume in the selected pool. Click Create Volume and the
same wizard as described in 5.1, “Provisioning storage from the IBM Storwize V7000 and
making it available for the host” on page 156 will appear.
If you highlight a volume and select Actions or right click as shown in Figure 8-115 the same
options as in the All Volumes section (8.2.1, “Advanced Volume Functions” on page 298) will
appear. Detailed instructions about each task are covered there.
If you highlight a volume copy and select Actions or right click it as shown in Figure 8-116,
the same options as in the All Volumes section (8.3, “Advanced Volume Copy Functions” on
page 322) will appear. Detailed instructions about each task are covered there.
Use the left and right arrows to select a new icon as shown in Figure 8-118. There are several
options available.
Click OK and the changes will be applied to the system (Figure 8-119). Click Close.
The icon has been changed to make it easier to identify the pool as shown in Figure 8-120.
Press Enter and the changes will be applied to the system as shown in Figure 8-123.
The name for the storage pool has now been changed as shown in Figure 8-124.
On the left side of the view is the “Host Filter”; if you select a host there, its properties
appear on the right side of the view. The hosts with the orange cable represent Fibre Channel
hosts, and the blue cable represents iSCSI hosts. In the upper right you will see the host
icon, the host name, the number of host ports, and the host type. Below that, all the volumes
that are mapped to this host are listed.
If you want to create a new volume for this host select New Volume and the same wizard as
already described in 5.1, “Provisioning storage from the IBM Storwize V7000 and making it
available for the host” on page 156 will appear.
Highlight a volume and select Actions or right click as shown in Figure 8-126 on page 333,
and the same options as in the All Volumes section (8.2.1, “Advanced Volume Functions” on
page 298) will appear. Detailed instructions about each task are covered there.
If the volume has more than one copy, you can also highlight a volume copy and select
Actions or right-click it as shown in Figure 8-127, and the same options as in the All Volumes
section (8.3, “Advanced Volume Copy Functions” on page 322) will appear. Detailed
instructions about each task are covered there.
Rename a Host
To rename a host in the Volumes by host view simply click on it and you will be able to edit the
name as shown in Figure 8-128.
Press Enter to apply the changes to the system as shown in Figure 8-130.
Click Close to return to the Volumes by Host view as shown in Figure 8-131.
These external storage systems that will be incorporated into the IBM Storwize V7000
environment could be brand new systems or existing systems. The data on existing
storage systems can easily be migrated to the IBM Storwize V7000 managed environment,
as described in Chapter 6, “Migration Wizard” on page 197 and Chapter 7, “Storage Pools”
on page 217.
Note: If the Storwize V7000 is to be used as a general migration tool, then appropriate
External Virtualization licenses must be ordered. The only exception is migrating existing
data from external storage systems to IBM Storwize V7000 internal storage, for which you
can temporarily configure an External Virtualization license for up to 45 days. If the
migration from external storage to IBM Storwize V7000 internal storage takes longer than
45 days, then the appropriate External Virtualization license must be ordered as well.
You can configure the IBM Storwize V7000 license by selecting the Configuration button,
which is the bottom icon of the navigation panel on the left side, and choosing the
Advanced option.
Figure 9-1 on page 339 shows the Advanced option under the Configuration button.
In the Advanced panel, select Licensing and you will find the Update License panel on the
right, as shown in Figure 9-2 on page 340.
In the Update License panel there are two license options you can set, External
Virtualization Limit and Remote-Copy Limit. Set these two license options to the limit you
obtained from IBM.
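The license settings can also be changed with the CLI; the values below are examples only, and you should use the limits from your own license entitlement:

```
svctask chlicense -virtualization 2
svctask chlicense -remote 2
```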
Verify that the SAN switches and/or directors that the IBM Storwize V7000 will connect to
meet the following requirements as noted at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003703#_Switches
Make sure that the switches and/or directors are at firmware levels supported by the IBM
Storwize V7000, and that the IBM Storwize V7000 port login maximums listed in the
restriction document will not be exceeded. The configuration restrictions can be found at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003702&myns=s028&mynp=familyin
d5402112&mync=E
The recommended SAN configuration comprises a minimum of two fabrics, with the ports of the
external storage systems to be virtualized by IBM Storwize V7000, and the IBM Storwize
V7000 ports themselves, evenly split between the two fabrics. This provides redundancy if
one of the fabrics goes offline, whether planned or unplanned.
Once the IBM Storwize V7000 and external storage systems are connected to the SAN fabrics,
zoning needs to be implemented. In each fabric, create a zone with the four IBM Storwize
V7000 WWPNs, two from each node canister, along with up to a maximum of eight WWPNs
from each external storage system.
Note: IBM Storwize V7000 supports a maximum of sixteen ports or WWPNs from a given
external storage system that will be virtualized.
Figure 9-3 on page 341 is an example of how to cable devices to the SAN. Reference this
example as we discuss the recommended zoning below.
Create an IBM Storwize V7000/external storage zone for each storage system to be
virtualized. For example:
Zone DS5100 controller ports A1 and B1 with all node ports 1 and 3 in the RED fabric.
Zone DS5100 controller ports A2 and B2 with all node ports 2 and 4 in the BLUE fabric.
Verify that the storage controllers to be virtualized by IBM Storwize V7000 meet the following
requirements found at the URL below:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003703
Make sure the firmware or microcode levels of the storage controllers to be virtualized are
supported by IBM Storwize V7000.
IBM Storwize V7000 must have exclusive access to the LUNs from external storage system
mapped to it. LUNs cannot be shared between IBM Storwize V7000s or between IBM
Storwize V7000 and other storage virtualization platforms or between IBM Storwize V7000
and hosts. However, different LUNs could be mapped from one external storage system to
IBM Storwize V7000 and other hosts in the SAN through different storage ports.
Make sure to configure the storage subsystem LUN masking settings to map all LUNs to all
the WWPNs in the IBM Storwize V7000 storage system.
Review the IBM Storwize V7000 Information Center under the section titled Configuring and
servicing external storage system to prepare the external storage systems for discovery by
the IBM Storwize V7000 system. This is the link for IBM Storwize V7000 Information Center:
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp
The basic concepts of managing an external storage system are the same as for internal
storage. IBM Storwize V7000 discovers LUNs from the external storage system as one or more
MDisks. These MDisks are ultimately added to a storage pool, in which volumes are
created and mapped to hosts as needed.
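After the LUNs have been mapped to the IBM Storwize V7000, the discovery can be triggered and checked from the CLI; this is a sketch:

```
svctask detectmdisk                          # scan the fabric for newly mapped LUNs
svcinfo lsmdisk -filtervalue mode=unmanaged  # list the newly discovered, unmanaged MDisks
```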
Therefore, the steps for managing MDisks and storage pools, which are described in
Chapter 7, “Storage Pools” on page 217, are not repeated in this section.
If the external storage systems are not brand new ones, which means there is existing data
on the LUNs that needs to be kept after virtualization, follow the steps in Chapter 6,
“Migration Wizard” on page 197 to prepare the environment. Then you can migrate the existing
data, with or without the wizard, to IBM Storwize V7000 internal storage or some other
external storage system.
7.2.2, “Import MDisks” on page 244 shows how to manually import MDisks and migrate
the data to other storage pools. Whether you migrate the data with the wizard or not, you can
select internal or external storage pools as the destination.
You can access the External panel through the Getting Started panel by clicking the External
Storage System icon. Extended help information for external storage appears; click
Physical Storage and you will go to the External panel.
Figure 9-4 shows how to access the External panel from the Getting Started panel.
The other way to access the External panel is from the Physical Storage functional icons on
the left side.
Figure 9-5 on page 345 shows how to access the External panel from the Physical Storage
functional icons on the left hand side.
The External panel, as shown in Figure 9-6, gives you an overview of all your external
storage systems. On the left side of the panel there is a list of the external storage
systems. With the help of the filter, you can display only the external storage systems you
need to take action on. If you click and highlight an external storage system, detailed
information is shown on the right side, including all the MDisks provided by it.
On the right of the panel you can change the name of the external storage system by clicking
the name beside the picture of the external storage box. The status of the external storage
system and its WWNN can also be found under the name.
In the Actions drop down list above the name of the external storage system on the right part
of the External panel, you will find the Show Dependent Volumes option, as shown in
Figure 9-7.
Figure 9-7 Show Dependent Volumes option in Actions drop down list
Clicking the Show Dependent Volumes option displays the volumes that reside on this
external storage system.
Figure 9-8 on page 347 shows the volumes dependent on external storage.
In the display of dependent volumes on external storage you can take volume actions,
including Map to Host, Shrink, Expand, Migrate to Another Pool, Volume Copy Actions
and so on as shown in Figure 9-9 on page 348.
One of the features of the IBM Storwize V7000 storage system is that it can be used as a data
migration tool. In the IBM Storwize V7000 virtualization environment you can migrate your
application data non-disruptively from one internal or external storage system to another,
which will make storage management much simpler with less risk.
Volume copy is another key feature that you can benefit from by introducing IBM Storwize
V7000 virtualization. Two copies of your data can be kept to enhance the availability of a
critical application. Volume copy can also be used to generate test data or to migrate data.
In Chapter 8, “Advanced Host and Volume Administration” on page 271, you can find more
information on the volume actions of IBM Storwize V7000 storage system.
Returning to the External panel, you will find an MDisk menu on the right, including an MDisk
list which shows the MDisks provided by this external storage system. In the list you can find
the name of each MDisk, its capacity, and the storage pool and storage system it belongs to.
Actions on MDisks can also be taken through this menu, including Detect MDisks, Add to
Pool, Import, and so on. The menu is the same as in the MDisks panel, which is covered in
7.2, “Work with MDisks” on page 237.
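The per-controller information the External panel shows (MDisk names, capacity, owning pool) can also be captured from the CLI, where svcinfo lsmdisk with the -delim : option produces colon-delimited output. As a purely illustrative sketch, the following Python snippet groups such a listing by the controller that provides each MDisk; the sample listing and the controller names in it are invented for illustration, not taken from a real system.

```python
# Sketch: group MDisks by the external storage system (controller) that
# provides them, from colon-delimited "svcinfo lsmdisk" output captured
# earlier. The sample listing below is invented for illustration.
SAMPLE = """\
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name
0:mdisk0:online:managed:0:SAS:8.1TB:0000000000000000:controller0
1:mdisk1:online:managed:1:DS4700:200.0GB:0000000000000001:controller1
2:mdisk2:online:unmanaged:::100.0GB:0000000000000002:controller1
"""

def mdisks_by_controller(listing):
    lines = listing.strip().splitlines()
    header = lines[0].split(":")
    grouped = {}
    for line in lines[1:]:
        row = dict(zip(header, line.split(":")))
        grouped.setdefault(row["controller_name"], []).append(
            (row["name"], row["capacity"], row["mdisk_grp_name"] or "-"))
    return grouped

for ctrl, mdisks in sorted(mdisks_by_controller(SAMPLE).items()):
    print(ctrl, mdisks)
```

An unmanaged MDisk has no storage pool yet, so its pool column is empty and is shown here as “-”.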
Easy Tier automates the placement of data among different storage tiers, and can be
enabled for internal as well as external storage. This no-charge IBM Storwize V7000 feature
boosts storage infrastructure performance through a combined software, server, and storage
solution.
This chapter describes the function provided by the Easy Tier disk performance optimization
feature of the IBM Storwize V7000, and also covers how to activate the Easy Tier process
both for evaluation purposes and for automatic extent migration.
In general, I/O in a storage environment is monitored at the volume level, and an entire
volume is always placed in one appropriate storage tier. Monitoring I/O statistics on single
extents, moving them manually to an appropriate storage tier, and reacting to workload
changes is too complex to do by hand.
Easy Tier is a performance optimization function that overcomes this issue: it automatically
migrates, or moves, extents belonging to a volume between different storage tiers, as shown
in Figure 10-1 on page 353. Because this migration works at the extent level, the technology
is often referred to in other publications as “sub-LUN migration”.
You can enable Easy Tier for storage on a volume basis. It monitors the I/O activity and
latency of the extents on all Easy Tier enabled volumes over a 24 hour period. Based on the
performance log, it creates an extent migration plan and dynamically moves high activity or
hot extents to a higher disk tier within the storage pool, as well as moving extents whose
activity has dropped off, or cooled, from higher disk tier MDisks back to a lower tier MDisk.
To enable this migration between MDisks with different tier levels, the target storage pool has
to consist of MDisks with different characteristics. These pools are named “Multi Tiered
Storage Pools”. IBM Storwize V7000 Easy Tier is optimized to boost the performance of
storage pools containing both HDDs and SSDs.
To identify the potential benefits of Easy Tier in your environment before actually installing
higher MDisk tiers, such as SSDs, it is possible to enable Easy Tier monitoring on volumes in
single tiered storage pools. Even though Easy Tier extent migration is not possible within
a single tier pool, the Easy Tier statistical measurement function is. Enabling Easy Tier on a
single tiered storage pool starts the monitoring process and logs the activity of the volume
extents. In this case, Easy Tier creates a migration plan file, which can then be used to
display a report on the number of extents that would be appropriate for migration to higher
MDisk tiers, such as SSDs. The IBM Storage Tier Advisor Tool is a no-charge tool that helps
you analyze this data. If you do not have an IBM Storwize V7000 yet, use Disk Magic to get a
better idea of the number of SSDs appropriate for your workload. In the worst case, where
you have no workload performance data, a good starting point is to add about 5% of net
capacity as SSD to your configuration. This ratio is heuristic, however, and changes with the
application and with the disk tier performance of each configuration. For database
transaction workloads, a ratio of about 6:1 of fast SAS or FC drives to SSDs achieves optimal
performance, but again this is only a rule of thumb and depends heavily on the workload.
Easy Tier is available for IBM Storwize V7000 internal volumes, and also for volumes residing
on external virtualized storage subsystems, because the SSDs can be internal drives,
external drives, or both. However, from the fabric point of view it is recommended to use
SSDs inside the IBM Storwize V7000, even if the lower tiered disk pool resides on external
storage, because this reduces the traffic traversing the SAN environment.
As shown in Figure 10-2 on page 355, Single Tiered Storage Pools include only one type of
disk tier attribute. Each disk should have the same size and performance characteristics.
Multi Tiered Storage Pools are populated with two different disk tier attributes: high
performance tier SSD disks and generic HDD disks. A volume migration, as described in
“Migrate volume to another Storage Pool” on page 307, is when the complete volume is
migrated from one storage pool to another. An Easy Tier data migration only moves extents
inside the storage pool between MDisks with different performance attributes.
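The extent-level idea can be made concrete with a small sketch. This is a toy model and not IBM's actual Easy Tier algorithm: rank extents by measured I/O activity, promote the hottest extents to the SSD tier up to its capacity, and demote extents that have cooled off.

```python
# Toy model of the Easy Tier idea (not IBM's actual algorithm): rank the
# extents of a volume by measured I/O activity and plan to move the
# hottest ones to the SSD tier, up to the free SSD capacity, while
# demoting extents that have cooled off.
def plan_migrations(extent_io, on_ssd, ssd_slots):
    """extent_io: {extent_id: io_count}; on_ssd: set of extent ids
    currently on SSD; ssd_slots: number of extents the SSD tier holds."""
    ranked = sorted(extent_io, key=extent_io.get, reverse=True)
    want_on_ssd = set(ranked[:ssd_slots])           # the hottest extents
    promote = sorted(want_on_ssd - on_ssd)          # hot, still on HDD
    demote = sorted(on_ssd - want_on_ssd)           # cooled off, on SSD
    return promote, demote

io = {0: 5, 1: 900, 2: 40, 3: 700, 4: 2}
promote, demote = plan_migrations(io, on_ssd={0}, ssd_slots=2)
print(promote, demote)  # hottest extents 1 and 3 promoted, 0 demoted
```

The real product builds this plan from a 24 hour performance log and moves extents gradually; the sketch only captures the promote/demote decision.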
Data Migrator - DM
DM confirms and schedules data migration activity based on the data migration plan, using
the IBM Storwize V7000 migration function to seamlessly relocate extents up to, or down
from, the high disk tier, as shown in Figure 10-4.
Evaluation Mode
If you turn on Easy Tier in a Single Tiered Storage Pool, it runs in evaluation mode, which
means it measures the I/O activity for all extents. A statistic summary file is created and can
be off-loaded from the IBM Storwize V7000 as described later in 10.4.1, “Enable Easy Tier
Evaluation Mode” on page 370. This file can be analyzed with the IBM Storage Tier Advisor
Tool as shown in 10.5, “IBM Storage Tier Advisor Tool” on page 375. This gives you an
understanding of the benefit to your workload of adding SSDs to your pool, prior to any
hardware acquisition.
If you want to disable Auto Data Placement Mode for single volumes inside a Multi Tiered
Storage Pool, it is possible to turn the mode off at the volume level. This excludes the volume
from Auto Data Placement Mode; its I/O statistics are still measured.
The statistic summary file can be off-loaded for input to the advisor tool. The tool will produce
a report on the extents moved to SSD and a prediction of performance improvement that
could be gained if more SSD was available.
Note: Volume mirroring can have different workload characteristics on each copy of the
data, as reads are normally directed to the primary copy while writes occur to both. Thus the
number of extents that Easy Tier migrates to the SSD tier will probably differ for each
copy.
Easy Tier is supported to work with all striped volumes, which includes:
– Generic Volumes
– Thin-provisioned Volumes
– Mirrored Volumes
– Thin-Mirror Volumes
– Global and Metro Mirror source and target
Easy Tier automatic data placement is not supported on image mode or sequential
volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on
such volumes unless you convert image or sequential volume copies to striped volumes.
If possible, IBM Storwize V7000 creates new volumes or volume expansions using extents
from MDisks from the HDD tier, but will use extents from MDisks from the SSD tier if no
HDD space is available.
When a volume is migrated out of a storage pool that is managed with Easy Tier,
automatic data placement mode is no longer active on that volume. Automatic data
placement is also turned off while a volume is being migrated even if it is between pools
that both have Easy Tier automatic data placement enabled. Automatic data placement for
the volume is re-enabled when the migration is complete.
SSD performance depends on block size, and small blocks perform much better than
larger ones. Because Easy Tier is optimized to work with SSDs, it decides whether an extent
is hot by measuring only I/Os smaller than 64 KB, but it migrates the entire extent to the
appropriate disk tier.
As extents are migrated, the use of smaller extents makes Easy Tier more efficient.
The first migration of hot data to SSD will start about 1 hour after enabling Auto Data
Placement Mode, but it will take up to 24 hours to achieve optimal performance.
In the current IBM Storwize V7000 Easy Tier implementation it takes about two days before
hot spots are considered for movement off SSDs; this prevents hot spots from being moved
off SSDs simply because the workload changes over a weekend.
If you run an unusual workload over a longer time period, Auto Data Placement can be
turned off and on online, to avoid data movement.
Depending on which storage pool and which Easy Tier configuration is set, a volume copy
can have the Easy Tier states shown in Table 10-1:
Table 10-1 Easy Tier states of a volume copy
Figure 10-6 on page 360 shows that two internal SSD drives are available and in “Candidate”
status. Internal SSDs are added to the high performance tier automatically by the IBM
Storwize V7000.
Click Configure Storage (Figure 10-6 on page 360) and the Storage Configuration wizard
will appear as shown in Figure 10-7.
The wizard recommends using the SSD drives to enable Easy Tier. If you select “Use
recommended configuration” it will select the recommended RAID level and hot spare
coverage for your system automatically. If you choose “Select a different configuration” as
shown in Figure 10-8 on page 361 you can select the preset.
You can choose a custom RAID level, or select the SSD Easy Tier preset there to review
and modify the recommended configuration. In our example we select SSD Easy Tier, and
the view expands as shown in Figure 10-9.
In the IBM Storwize V7000 used for this example, only two SSDs are installed. The SSD
Easy Tier preset tries to choose a RAID level and protect it with hot spare drives. As this is
not possible with only two drives, we get an error message. To resolve it, clear the
“Automatically configure spares” option to create a RAID array without spare drives, as shown
in Figure 10-10 on page 363, and click Next.
To create a Multi Tiered Storage Pool, the SSD drives have to be added to an existing generic
HDD pool. Select Expand an existing Pool, as shown in Figure 10-11, and select the pool
you want to change to a multi tiered storage pool. In the example, the “SAS” storage pool is
selected; click Finish.
Now the array is configured on the SSD drives, and they are added to the selected storage
pool. Click Close after the task has completed as shown in Figure 10-12 on page 364.
Figure 10-13 shows that the internal SSD drives' usage has now changed to “Member” and
that the wizard created an MDisk named “mdisk4”.
In Figure 10-14 on page 365 you can see that the new MDisk is now part of the storage pool
“SAS” and that the Easy Tier status has changed to Active. In this pool, Auto Data Placement
Mode is started and the Easy Tier processes begin to work.
Now the storage pool has been successfully changed to a Multi Tiered Storage Pool and
Easy Tier has been activated by default. To reflect this change better we rename the storage
pool (“Changing the Storage Pool Name” on page 331) and change the icon (“Changing the
Storage Pool Icon” on page 330) as shown in Figure 10-15.
By default Easy Tier is now active in this pool and for all volumes residing in this storage pool.
Figure 10-16 on page 366 shows that in the example two volumes reside on the Multi Tiered
Storage Pool. If you open the properties of a volume (right-click it, or select Actions >
Properties), you can also see that Easy Tier is enabled on the volume by default, as shown in
Figure 10-17.
If a volume has more than one copy, Easy Tier can be enabled and disabled on each copy
separately, as shown in Figure 10-18.
If you want to enable Easy Tier on the second copy, you must also change the storage
pool of the second copy to a Multi Tiered Storage Pool by repeating these steps.
If external SSDs are used, you must select the tier manually, as described in 7.2.4, “Select
the tier for MDisks” on page 259, and then add the external SSD MDisk to a storage pool, as
covered in 7.2.1, “Add MDisks to Storage Pools” on page 240. This also changes the
storage pool to a Multi Tiered Storage Pool and enables Easy Tier on the pool and its volumes.
To download the statistics file open the support view as shown in Figure 10-19 on page 368.
Click Show full log listing as shown in Figure 10-20 to display all available logs.
All available logs are displayed as shown in Figure 10-21 on page 369. The Easy Tier log files
are always named “dpa_heat.canister_name_date.time.data”.
Note: Depending on your workload it can take up to 24 hours until a new Easy Tier log file
is created.
To make the search easier, type heat into the search field and press Enter, as shown in
Figure 10-22.
If you run Easy Tier for a longer period, it generates a heat file at least every 24 hours.
The time and date of file creation are included in the file name, and the heat log file always
covers the measured I/O activity of the previous 24 hours. Select the heat file for the most
representative time period, right-click it, and select Download, as shown in Figure 10-23.
Depending on your browser settings the file will be downloaded to your default location, or
you will be asked to save it to your computer. This file can be analyzed as described in 10.5,
“IBM Storage Tier Advisor Tool” on page 375.
Before using the CLI you have to configure CLI access as described in Appendix A, “CLI
setup and SAN Boot” on page 541.
Note: In most examples shown in this section, many lines have been deleted from the
command output or responses, in order to concentrate on the Easy Tier related
information only.
To get a more detailed view of a single tier storage pool, type svcinfo lsmdiskgrp
“storage pool name” as shown in Example 10-2.
id 0
name Tier_2
status online
mdisk_count 1
vdisk_count 1
.
easy_tier auto
easy_tier_status inactive
tier generic_ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_mdisk_count 1
tier_capacity 8.14TB
tier_free_capacity 8.11TB
IBM_2076:ras-stand-7-tb1:admin>
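If you capture this output to a file, the repeated per-tier stanzas are easy to post-process. The following Python sketch parses the tier sections of an lsmdiskgrp listing; the sample text is an abbreviated copy of the output in Example 10-2, and the parsing approach is an illustration, not an IBM-provided tool.

```python
# Sketch: parse the per-tier section of "svcinfo lsmdiskgrp <pool>"
# output (as in Example 10-2) to see what capacity each tier holds.
OUTPUT = """\
id 0
name Tier_2
status online
easy_tier auto
easy_tier_status inactive
tier generic_ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier generic_hdd
tier_mdisk_count 1
tier_capacity 8.14TB
"""

def parse_tiers(text):
    tiers, current = {}, None
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if key == "tier":              # a new tier stanza begins
            current = value
            tiers[current] = {}
        elif current is not None and key.startswith("tier_"):
            tiers[current][key] = value
    return tiers

print(parse_tiers(OUTPUT))
```

Note that keys such as easy_tier and easy_tier_status do not start with "tier_", so they are correctly left out of the per-tier dictionaries.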
To enable Easy Tier on the single tier storage pool, type svctask chmdiskgrp -easytier on
“storage pool name” as shown in Example 10-3.
Check the status of the storage pool again by repeating the svcinfo lsmdiskgrp “storage
pool name” command as shown in Example 10-4.
Also repeat the svcinfo lsmdiskgrp command, as shown in Example 10-5, and you will see
that Easy Tier is now turned on for the storage pool, but that Auto Data Placement Mode is
not active on the Multi Tiered Storage Pool.
Type svcinfo lsvdisk to get a list of all volumes as shown in Example 10-6.
To get a more detailed view of a volume, use svcinfo lsvdisk “volume name” as shown
in Example 10-7 on page 372. The example output shows two copies of a volume: Copy 0 is
in a Multi Tiered Storage Pool and Auto Data Placement is active; Copy 1 is in the single tier
storage pool and Easy Tier evaluation mode is active, as indicated by the line
“easy_tier_status measured”.
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name Tier_2
type striped
easy_tier on
easy_tier_status measured
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 25.00GB
IBM_2076:ras-stand-7-tb1:admin>
These changes are also reflected in the GUI as shown in Figure 10-24 on page 373.
Easy Tier evaluation mode is now active on the single tier storage pools. To download and
analyze the I/O statistics, go to 10.3.2, “Downloading Easy Tier I/O measurements”.
To disable Easy Tier on single volumes type svctask chvdisk -easytier off “volume
name” as shown in Example 10-8.
IBM_2076:ras-stand-7-tb1:admin>
This will disable Easy Tier on all copies of this volume. Example 10-9 shows that the Easy
Tier status of the copies has changed, even if Easy Tier is still enabled on the storage pool.
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name Tier_2
type striped
easy_tier off
easy_tier_status measured
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 25.00GB
IBM_2076:ras-stand-7-tb1:admin>
To enable Easy Tier on a volume copy, type svctask chvdisk -easytier on “volume name”
as shown in Example 10-10, and the Easy Tier status will change back to enabled, as
shown in Example 10-7 on page 372.
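The Easy Tier states observed across these CLI examples can be summarized in a small decision sketch. This is only an informal summary of the behavior shown in Examples 10-2 through 10-10, not an official state table from IBM.

```python
# Sketch: the volume-copy Easy Tier states seen in this chapter's CLI
# examples (Examples 10-2 to 10-10), summarized as a decision function.
# Informal summary of observed behavior, not an official state table.
def easy_tier_status(pool_setting, multi_tiered, volume_on=True):
    """pool_setting: the pool's easy_tier value ("auto", "on", "off")."""
    if pool_setting == "off":
        return "inactive"            # Easy Tier disabled on the pool
    if pool_setting == "auto" and not multi_tiered:
        return "inactive"            # default for a single tier pool
    if multi_tiered and volume_on:
        return "active"              # auto data placement running
    return "measured"                # evaluation (I/O monitoring) only

print(easy_tier_status("on", multi_tiered=False))  # evaluation mode
```

For example, the single tier pool Tier_2 reports "inactive" with easy_tier auto (Example 10-2) and "measured" after easy_tier is turned on (Example 10-4), while the multi tiered pool reports "active".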
You can read more about the tool, and download it, at the IBM website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000935
Click Start > Run, type cmd, and click OK to open a command prompt.
C:\EasyTier>
Browse to the directory where you directed the output file; there you will find a file named
index.html. Open it with your browser to view the report.
The System Summary already provides the most important numbers. Figure 10-25 on
page 376 shows that 12 volumes have been monitored, with a total capacity of 6000 GB.
The analysis of the hot extents shows that about 160 GB (about 2%) should be migrated to
the high performance disk tier. It also recommends that one SSD array consisting of 4 SSD
drives (RAID 5, 3+P) be added as a high performance tier. The predicted performance
improvement, as a reduction of back-end response time in a balanced system, is between
64% and 84%.
Click Volume Heat Distribution to change to a more detailed view as shown in Figure 10-26.
The table in this view gives you a more detailed view as to how the hot extents are distributed
across your system. It contains the following information:
Volume ID: The unique ID of each volume on the IBM Storwize V7000.
Copy ID: If a volume has more than one copy, the data is measured for each copy.
Pool ID: The unique ID of each pool configured on the IBM Storwize V7000.
Configured Size: The configured size of each volume as presented to the host.
Capacity on SSD: The capacity of the volume on the high performance disk tier (even in
evaluation mode, volumes can reside on the high performance disk tier if they were
moved there before).
Heat Distribution: The heat distribution of the data in this volume. The blue portion
of the bar represents the capacity of the cold data, and the red portion represents the
capacity of the hot data. The red hot data are candidates to be moved to the high
performance disk tier.
11.1 FlashCopy
In this section we describe how the FlashCopy function works in the IBM Storwize V7000
storage system and provide the implementation steps, using the GUI, for FlashCopy
configuration and management.
FlashCopy, also known as point-in-time copy, can be used to create immediately
available, consistent copies of data sets while they remain online and actively in use.
FlashCopy is performed at the block level in the IBM Storwize V7000 storage system, which
operates below the host operating system and is therefore transparent to the host. However,
to ensure the integrity of the copy that is made, it is necessary to flush the host cache of any
outstanding reads or writes before performing the FlashCopy operation. Failing to do so
produces what is referred to as a crash consistent copy, meaning the resulting copy requires
the same type of recovery procedure (such as log replay and file system checks) as would be
required following a host crash.
Some operating systems and applications provide facilities to stop I/O operations and ensure
all data is flushed from host cache. If these facilities are available they can be used to prepare
and start a FlashCopy operation. When this type of facility is not available the host cache
must be flushed manually by quiescing the application and unmounting the filesystem or
drives.
With the immediately available copy of data, FlashCopy could be used in various business
scenarios, which include:
Creating consistent and fast backups of application data.
Backup data copies can be taken by FlashCopy periodically, and the target volumes
can be used to perform a rapid restore of individual files or the entire volume.
The data copies created by FlashCopy can also be used for backup to tape by attaching
them to another server, which to a great extent off-loads the production server. Once the
copy to tape has completed, the target volumes can be discarded if required.
Creating data copies for moving or migrating data.
FlashCopy can be used to facilitate the movement or migration of data between hosts
while minimizing downtime for applications. FlashCopy will allow application data to be
copied from source volumes to new target volumes while applications remain online. Once
the volumes are fully copied and synchronized the application can be brought down and
then immediately brought back up on the new server accessing the new FlashCopy target
volumes.
Creating copies of production datasets for development/testing, auditing purposes or data
mining.
Copies of actual production data are often required for development, testing, auditing, or
data mining purposes. FlashCopy makes this easy to accomplish without putting the
production data at risk or requiring downtime to create a consistent copy.
The source and target volumes must belong to the same IBM Storwize V7000 storage
system; however, they do not have to be in the same storage pool.
FlashCopy works by defining a FlashCopy mapping that consists of one source volume
together with one target volume. Multiple FlashCopy mappings can be defined, and
point-in-time consistency can be maintained across multiple individual mappings using
consistency groups, which are described in “FlashCopy Consistency Groups” on
page 384.
When FlashCopy is started, an effective copy of the source volume is created on the target
volume. The content of the source volume is immediately presented on the target volume,
and the original content of the target volume is lost. This FlashCopy operation is also referred
to as a time-zero copy (T0).
Immediately following the FlashCopy operation both the source and target volumes are
available for use. The FlashCopy operation creates a bitmap which is referenced and
maintained to direct I/O requests within the source and target relationship. This bitmap is
updated to reflect the active block locations as data is copied in the background from the
source to target and updates are made to the source.
When data is copied between volumes it is copied in units of address space known as grains.
The FlashCopy bitmap contains one bit for each grain and is used to keep track of whether
the source grain has been copied to the target.
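Because the bitmap holds exactly one bit per grain, its size is easy to estimate. The following sketch assumes a 256 KB grain size purely for illustration; the arithmetic, not the specific grain size, is the point.

```python
# Sketch: the FlashCopy bitmap holds one bit per grain, so its size can
# be estimated directly. A 256 KB grain size is assumed for illustration.
def bitmap_bytes(volume_bytes, grain_bytes=256 * 1024):
    grains = -(-volume_bytes // grain_bytes)   # ceiling division
    return -(-grains // 8)                     # one bit per grain

GiB = 1024 ** 3
# A 2 TiB volume with 256 KB grains has 8 Mi grains -> a 1 MiB bitmap.
print(bitmap_bytes(2048 * GiB))
```

The bitmap is thus tiny relative to the volume it tracks, which is why it can be consulted on every I/O without meaningful overhead.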
Figure 11-1 illustrates the redirection of the host I/O toward the source volume and the target
volume.
The FlashCopy bitmap dictates the read and write behavior for both the source and target
volumes, as follows:
Read I/O request to source:
Reads are performed from the source volume, the same as for non-FlashCopy
volumes.
Write I/O request to source:
A write to the source causes the grain to be copied to the target if it has not already been
copied; the write is then performed to the source.
Read I/O request to target:
Reads are performed from the target if the grain has already been copied; otherwise, the
read is performed from the source.
Write I/O request to target:
A write to the target causes the grain to be copied from the source to the target unless the
entire grain is being written; the write then completes to the target.
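The four rules above amount to copy-on-write mediated by the bitmap. The following toy model illustrates them in Python; it is an illustration only (real FlashCopy works below the host cache, and partial-grain target writes first copy the grain from the source, which this sketch simplifies to a whole-grain write).

```python
# Toy model of the four FlashCopy I/O rules above. Volumes are arrays of
# grains; the bitmap records which grains have been copied to the target.
class FlashCopy:
    def __init__(self, source, grains):
        self.src = source                    # list of grain contents
        self.tgt = [None] * grains           # physical target storage
        self.copied = [False] * grains       # the FlashCopy bitmap

    def _copy_grain(self, g):
        if not self.copied[g]:
            self.tgt[g] = self.src[g]
            self.copied[g] = True

    def write_source(self, g, data):
        self._copy_grain(g)                  # copy-on-write, then update
        self.src[g] = data

    def read_target(self, g):                # redirect to source if not copied
        return self.tgt[g] if self.copied[g] else self.src[g]

    def write_target(self, g, data):         # simplified: whole-grain write
        self.tgt[g] = data
        self.copied[g] = True

fc = FlashCopy(["a", "b", "c"], 3)
fc.write_source(1, "B")                      # source changes after T0
print(fc.read_target(1), fc.src[1])          # target still sees "b"
```

After the source write, the target continues to present the time-zero content of grain 1, which is exactly the point-in-time semantics described above.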
FlashCopy Mappings
A FlashCopy mapping defines the relationship between a source volume and a target volume.
FlashCopy mappings can be either stand-alone or a member of a consistency group, as
described in “FlashCopy Consistency Groups” on page 384.
Each of the mappings from a single source can be started and stopped independently. If
multiple mappings from the same source are active (in the copying or stopping states), a
dependency exists between these mappings.
If a single source volume has multiple FlashCopy target volumes, a write to the source
volume does not cause its data to be copied to all of the targets. Instead, it is copied to the
newest target volume only. The older targets refer to newer targets first before referring to
the source. A dependency relationship exists between a particular target and all newer
targets that share the same source until all data has been copied to this target and all older
targets.
Background Copy
The background copy rate is a property of a FlashCopy mapping defined as a value between
0 and 100. The background copy rate can be defined and dynamically changed for individual
FlashCopy mappings. A value of 0 disables background copy.
With FlashCopy background copy enabled, the source volume data is copied to the
corresponding target volume in the FlashCopy mapping. If the background copy rate is set to
0, disabling background copy, only data that is changed on the source volume is copied to
the target volume. The benefit of using a FlashCopy mapping with background copy enabled
is that the target volume becomes a real, independent clone of the FlashCopy mapping
source volume once the copy is complete. When background copy is disabled, the target
volume remains a valid copy of the source data only while the FlashCopy mapping is in
place. On the other hand, copying only the changed data saves storage capacity (assuming
the target is thin provisioned and -rsize has been set up correctly).
The relationship of the background copy rate value to the amount of data copied per second
is shown in Table 11-1.
Value Data copied per second
1 - 10 128 KB
11 - 20 256 KB
21 - 30 512 KB
31 - 40 1 MB
41 - 50 2 MB
51 - 60 4 MB
61 - 70 8 MB
71 - 80 16 MB
81 - 90 32 MB
91 - 100 64 MB
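As the table shows, each band of ten in the rate value doubles the amount of data copied per second, from 128 KB at rates 1-10 up to 64 MB at rates 91-100, with 0 disabling background copy. That relationship can be written as a small function (a sketch of the published table, not of the product's internal implementation):

```python
# Sketch of Table 11-1: each band of ten in the background copy rate
# doubles the amount of data copied per second (128 KB at rates 1-10 up
# to 64 MB at rates 91-100); a rate of 0 disables background copy.
def copy_rate_kb_per_sec(rate):
    if rate == 0:
        return 0
    band = (rate + 9) // 10        # 1..10 for rates 1..100
    return 128 * 2 ** (band - 1)   # KB per second

print(copy_rate_kb_per_sec(50))    # a mid-range rate copies 2 MB/s
```

This makes it easy to estimate how long a full background copy will take: a 100 GB volume at rate 50 (2 MB/s) needs roughly 14 hours, while rate 100 (64 MB/s) needs under half an hour.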
Cleaning Rate
When you create or modify a FlashCopy mapping, you can specify a cleaning rate for the
FlashCopy mapping that is independent of the background copy rate. The cleaning rate is
also defined as a value between 0 and 100, and has the same relationship to data copied
per second as the background copy rate, as shown in Table 11-1.
The cleaning rate controls the rate at which the cleaning process operates. The cleaning
process copies data from the target volume of a mapping to the target volumes of other
mappings that are dependent on this data. The cleaning process must complete before the
FlashCopy mapping can go to the stopped state.
Idle or Copied
The source and target volumes act as independent volumes even if a mapping exists between
the two. Read and write caching is enabled for both the source and the target volumes.
If the mapping is incremental and the background copy is complete, the mapping only records
the differences between the source and target volumes. If the connection to both nodes in the
IBM Storwize V7000 storage system that the mapping is assigned to is lost, the source and
target volumes will be offline.
Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.
Prepared
The mapping is ready to start. The target volume is online, but is not accessible. The target
volume cannot perform read or write caching. Read and write caching is failed by the SCSI
front end as a hardware error. If the mapping is incremental and a previous mapping has
completed, the mapping only records the differences between the source and target volumes.
If the connection to both nodes in the IBM Storwize V7000 storage system that the mapping is
assigned to is lost, the source and target volumes go offline.
Preparing
The target volume is online, but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error. Any
changed write data for the source volume is flushed from the cache. Any read or write data for
the target volume is discarded from the cache. If the mapping is incremental and a previous
mapping has completed, the mapping records only the differences between the source and
target volumes. If the connection to both nodes in the IBM Storwize V7000 storage system
that the mapping is assigned to is lost, the source and target volumes go offline.
Stopped
The mapping is stopped because either you issued a stop command or an I/O error occurred.
The target volume is offline and its data is lost. To access the target volume, you must restart
or delete the mapping. The source volume is accessible and the read and write cache is
enabled. If the mapping is incremental, the mapping is recording write operations to the
source volume. If the connection to both nodes in the IBM Storwize V7000 storage system
that the mapping is assigned to is lost, the source and target volumes go offline.
Stopping
The mapping is in the process of copying data to another mapping.
If the background copy process is complete, the target volume is online while the stopping
copy process completes.
If the background copy process is not complete, data is discarded from the target volume
cache. The target volume is offline while the stopping copy process runs.
Suspended
The mapping started, but it did not complete. Access to the metadata is lost, which causes
both the source and target volume to go offline. When access to the metadata is restored, the
mapping returns to the copying or stopping state and the source and target volumes return
online. The background copy process resumes. Any data that has not been flushed and has
been written to the source or target volume before the suspension, is in cache until the
mapping leaves the suspended state.
When consistency groups are used, FlashCopy commands are issued to the FlashCopy
consistency group, which performs the operation on all FlashCopy mappings contained
within the consistency group.
Figure 11-2 on page 385 illustrates a consistency group consisting of two FlashCopy
mappings.
Note: After an individual FlashCopy mapping has been added to a consistency group, it
can only be managed as part of the group; operations such as start and stop are no longer
allowed on the individual mapping.
Dependent Writes
To illustrate why it is crucial to use consistency groups when a data set spans multiple
volumes, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log indicating that a database update is about
to be performed.
2. A second write is executed to perform the actual update to the database.
3. A third write is executed to update the database log indicating that the database update
has completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next step. However, if the database log (updates 1 and 3) and the
database itself (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur prior to the FlashCopy of the database log. This can result in the
target volumes seeing writes (1) and (3) but not (2), because the FlashCopy of the database
volume occurred before write (2) completed.
In this case, if the database was restarted using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction had completed
successfully when in fact it had not, since the FlashCopy of the volume with the database file
was started (bitmap was created) before the write had completed to the volume. Therefore,
the transaction is lost and the integrity of the database is in question.
To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an
atomic operation with the use of consistency groups.
The consistency group can be specified when the mapping is created. You can also add the
FlashCopy mapping to a consistency group or change the consistency group of a FlashCopy
mapping later. Do not place stand-alone mappings into a consistency group because they
become controlled as part of that consistency group.
Idle or Copied
All FlashCopy mappings in this consistency group are in the Idle or Copied state.
Preparing
At least one FlashCopy mapping in this consistency group is in the Preparing state.
Prepared
The consistency group is ready to start. While in this state, the target volumes of all
FlashCopy mappings in this consistency group are not accessible.
Copying
At least one FlashCopy mapping in the consistency group is in the Copying state and no
FlashCopy mappings are in the Suspended state.
Stopping
At least one FlashCopy mapping in the consistency group is in the Stopping state and no
FlashCopy mappings are in the Copying or Suspended state.
Stopped
The consistency group is stopped because either you issued a command or an I/O error
occurred.
Suspended
At least one FlashCopy mapping in the consistency group is in the Suspended state.
Empty
The consistency group does not have any FlashCopy mappings.
Reverse FlashCopy
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original copy
operation to complete. It supports multiple targets and thus multiple rollback points.
A key advantage of the reverse FlashCopy function is that it does not destroy the original
target, thus allowing processes using the target, such as a tape backup, to continue
uninterrupted.
Optionally, you can create a copy of the source volume before you start the reverse copy
operation. This provides the ability to restore back to the original source data, which can be
useful for diagnostic purposes.
The following steps are required to restore from a FlashCopy backup:
1. (Optional) Create a new target volume (volume Z) and FlashCopy the production volume
(volume X) onto the new target for later problem analysis.
2. Create a new FlashCopy map with the backup to be restored (volume Y) or (volume W) as
the source volume and volume X as the target volume, if this map does not already exist.
3. Start the FlashCopy map (volume Y -> volume X) with the -restore option to copy the
backup data onto the production disk.
Note: In the GUI, the -restore option is applied automatically when you start the
FlashCopy mapping from volume Y to volume X; in the CLI, you need to add the -restore
option to the command manually. You can find more information on using the CLI in
Appendix A, “CLI setup and SAN Boot” on page 541.
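On the CLI, steps 2 and 3 might look like the following sketch; the mapping name and the volume names volY and volX are hypothetical placeholders:

```shell
# Step 2: map the backup (volY) back onto the production volume (volX)
svctask mkfcmap -source volY -target volX -name fcmap_restore

# Step 3: start the map with the -restore option to copy the backup
# data onto the production disk (-prep flushes the cache first)
svctask startfcmap -prep -restore fcmap_restore
```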
Note that regardless of whether the initial FlashCopy map (volume X -> volume Y) is
incremental, the reverse FlashCopy operation only copies the modified data.
Consistency groups are reversed by creating a set of new “reverse” FlashCopy maps and
adding them to a new “reverse” consistency group. Consistency groups cannot contain more
than one FlashCopy map with the same target volume.
FlashCopy Presets
The IBM Storwize V7000 storage system provides three FlashCopy presets, named
Snapshot, Clone and Backup, to simplify the more common FlashCopy operations, as shown
in Table 11-3 on page 389.
Figure 11-4 on page 390 shows the Copy Services function icon.
Most of the actions to manage FlashCopy mappings can be performed in either the
FlashCopy panel or the FlashCopy Mappings panel. However, the quick path to create
FlashCopy presets is only available in the FlashCopy panel.
Select FlashCopy in the Copy Services function icon, and the FlashCopy panel is
displayed, as shown in Figure 11-5. In the FlashCopy panel, the FlashCopy mappings are
organized by volume.
Select FlashCopy Mappings in the Copy Services function icon, and the FlashCopy
Mappings panel is displayed, as shown in Figure 11-6. In the FlashCopy Mappings
panel, the FlashCopy mappings are listed individually.
The Consistency Groups panel can be used to manage the consistency groups for
FlashCopy mappings. Select Consistency Groups in the Copy Services function icon, and
the Consistency Groups panel is displayed, as shown in Figure 11-7.
With only one click, you create a snapshot volume for the volume you chose.
With only one click, you create a clone volume for the volume you chose.
With only one click, you create a backup volume for the volume you chose.
Now, in the FlashCopy panel, you can find three FlashCopy target volumes under the source
volume, as shown in Figure 11-11. The progress bars behind the target volumes indicate the
copy progress as a percentage. The copy progress for the snapshot remains at 0% because
only changed data is copied and no changes have occurred yet. At the same time, the copy
progress for the clone and backup keeps increasing.
The copy progress can also be found in the status indicator of Running Tasks, as shown in
Figure 11-12 on page 395.
After the copy processes complete, the FlashCopy mapping with the clone preset is
deleted automatically, as shown in Figure 11-13 on page 396.
After you select Create new target volumes, the wizard lets you choose the preset, as
shown in Figure 11-15. No matter which preset you choose, you can modify the settings of
the FlashCopy mapping. Therefore, choose the preset whose configuration is most similar to
the one required, and click Advanced Settings to make any appropriate adjustments to the
properties, as shown in Figure 11-16 on page 398.
For example, if the snapshot preset has been chosen, clicking Advanced Settings shows
the default settings, which are:
Background Copy: 0
Incremental: No
Likewise, after you select the clone preset, clicking Advanced Settings shows its default
settings, as shown in Figure 11-17 on page 399, which include:
Background Copy: 50
Incremental: No
Auto Delete after completion: Yes
Cleaning Rate: 50
Similarly, after you select the backup preset, clicking Advanced Settings shows its default
settings, as shown in Figure 11-18 on page 400, which include:
Background Copy: 50
Incremental: Yes
Auto Delete after completion: No
Cleaning Rate: 50
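The preset defaults listed above correspond to options of the mkfcmap CLI command. The following sketch (the volume and mapping names are hypothetical) shows roughly equivalent mappings created manually:

```shell
# Snapshot-like mapping: no background copy (copy rate 0)
svctask mkfcmap -source vol1 -target vol1_snap -copyrate 0

# Clone-like mapping: background copy, auto-delete when the copy completes
svctask mkfcmap -source vol1 -target vol1_clone -copyrate 50 -cleanrate 50 -autodelete

# Backup-like mapping: incremental background copy, kept after completion
svctask mkfcmap -source vol1 -target vol1_bkup -copyrate 50 -cleanrate 50 -incremental
```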
Change the settings of the FlashCopy mapping according to your requirements, and click Next.
In the next step, you can add your FlashCopy mapping to a consistency group, as shown in
Figure 11-19. If the consistency group is not ready yet, the FlashCopy mapping can be
added to a consistency group afterwards. Click Next to continue.
Then you can choose which storage pool to create your volume in. As shown in
Figure 11-20 on page 401, you can select the same storage pool that is used by the source
volume.
Figure 11-20 Choose to use the same storage pool with the source volume
You can also specify another storage pool for your new volume, as shown in
Figure 11-21. Click Next to continue.
Figure 11-21 Choose other storage pool to create the new volume
In the next step, you are asked whether you want to create a thin-provisioned volume. If you
chose the snapshot preset at the beginning of this wizard, Yes is the default choice.
You can specify the detailed settings of your thin-provisioned volume, as shown in
Figure 11-22 on page 402.
Otherwise, if you chose the clone or backup preset at the beginning of this wizard,
Inherit properties from source volume is the default choice, as shown in
Figure 11-23 on page 403.
Click Finish when you have made your decision, and the FlashCopy mapping is created on
your volume with a new target, as shown in Figure 11-24. The status of the newly created
FlashCopy mapping is Idle; it can be started, as described in “Start FlashCopy
Mapping” on page 406.
Figure 11-24 New FlashCopy mapping has been created with new target
2. In Advanced FlashCopy..., if you already have candidate target volumes, select Use
existing target volumes, as shown in Figure 11-25 on page 404.
Then you need to choose the target volume for the source volume you selected. Select the
target volume in the drop-down list on the right of the window and click Add, as shown in
Figure 11-26.
After you click Add, the FlashCopy mapping will be listed as shown in Figure 11-27 on
page 405. Click the red cross if the FlashCopy mapping is not the one you want to create. If
the FlashCopy mapping is what you want, click Next to continue.
In the next step, you can select the preset and adjust the settings, as shown in
Figure 11-28. Make sure that the settings meet your requirements, and click Next.
Figure 11-28 Select preset and make your adjustment to the settings
Now you can add the FlashCopy mapping to a consistency group, as shown in Figure 11-29
on page 406. Click Finish and the FlashCopy mapping will be created with the status of Idle.
You can also create the FlashCopy mappings in the FlashCopy Mapping panel by clicking
New FlashCopy Mapping on the top left, as shown in Figure 11-30.
A wizard opens to guide you through creating a FlashCopy mapping. The steps are the same
as creating an advanced FlashCopy mapping using existing target volumes in the FlashCopy
panel, which was just described.
In the following sections, we show the steps based on the FlashCopy panel, although the
steps are the same if you use the FlashCopy Mappings panel.
You can start by selecting the FlashCopy target volume in the FlashCopy panel and choosing
the Start option in the right-click menu or the Actions drop-down list, as shown in
Figure 11-31. The status of the FlashCopy mapping then changes from Idle to Copying.
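The CLI equivalent is a single command; the mapping name fcmap0 is a hypothetical example:

```shell
# Prepare (flush the cache) and start the mapping in one step;
# the mapping state then changes from Idle to Copying
svctask startfcmap -prep fcmap0
```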
To change the name of the target volume, select the FlashCopy target volume in the
FlashCopy panel and choose the Rename Target Volume option in the right-click menu or
the Actions drop-down list, as shown in Figure 11-33 on page 409.
Enter the new name for the target volume, as shown in Figure 11-34. Click Rename to finish.
To change the name of a FlashCopy mapping, select the FlashCopy mapping in the
FlashCopy Mappings panel and choose the Rename Mapping option in the right-click menu
or the Actions drop-down list, as shown in Figure 11-35 on page 410.
Then enter the new name for the FlashCopy mapping, as shown in
Figure 11-36. Click Rename to finish.
Note: If the FlashCopy mapping is in the Copying state, it must be stopped before being
deleted.
Then you need to confirm the deletion of the FlashCopy mappings in the pop-up window,
as shown in Figure 11-38 on page 412. Verify the number of FlashCopy mappings to be
deleted, and if you are sure that you want to delete the FlashCopy mappings even when the
data on the target volume is inconsistent with the source volume, select the check box below.
Click Delete and your FlashCopy mappings are removed.
Note: Deleting the FlashCopy mapping does not delete the target volume. To reclaim the
storage space occupied by the target volume, you need to delete the target volume
manually.
The FlashCopy mapping dependency tree will be displayed, as shown in Figure 11-40.
Edit Property
The background copy rate and cleaning rate can be changed after the FlashCopy mapping
has been created by selecting the FlashCopy target volume in the FlashCopy panel and
choosing the Edit Property option in the right-click menu or the Actions drop-down list, as
shown in Figure 11-41 on page 414.
You can then modify the background copy rate and cleaning rate by moving the sliders on
the bars, as shown in Figure 11-42. Click Save to save the changes.
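On the CLI, the same change can be sketched with the chfcmap command; the mapping name and the rate values here are examples:

```shell
# Change the background copy rate and cleaning rate
# of an existing FlashCopy mapping
svctask chfcmap -copyrate 80 -cleanrate 60 fcmap0
```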
The Consistency Groups panel, as shown in Figure 11-44, is where you can manage both
consistency groups and FlashCopy mappings.
On the left of the Consistency Groups panel, you can filter the consistency groups you
need. Select Not in a Group, and all the FlashCopy mappings that are not in any
consistency group are displayed on the right.
On the right of the Consistency Groups panel, you can view the properties of a
consistency group and the FlashCopy mappings in it, and you can also take actions on the
consistency group and its FlashCopy mappings. All the actions allowed on a FlashCopy
mapping were introduced in “Manage FlashCopy Mapping” on page 396 and are not
described again in this section.
You are then asked to enter the name of the new consistency group, as shown in
Figure 11-45. Following the naming conventions, type the name of the new consistency group
in the box and click Create.
After the creation process completes, you can find the new consistency group on the
left of the Consistency Groups panel. Select the new consistency group, and more
detailed information about it is displayed on the right, as shown in Figure 11-46 on
page 417.
You can rename the consistency group by clicking its name on the right and typing a new
name for it, following the naming convention. Under the name of the consistency group, the
state shows that it is currently an empty consistency group with no FlashCopy mappings
in it.
You are then asked to specify which consistency group you want to move the FlashCopy
mapping into, as shown in Figure 11-48. Click Move to Consistency Group to continue.
After the action completes, the FlashCopy mappings you selected have been moved from
the Not in a Group list to the consistency group you chose.
After you start the consistency group, all the FlashCopy mappings in the consistency group
are started. The start time of the FlashCopy is also recorded on the top right of the
panel, as shown in Figure 11-50 on page 419.
After the stop process completes, the FlashCopy mappings in the consistency group are in
the Stopped state, and a red cross appears on the icon of the consistency group to show an
alert, as shown in Figure 11-52 on page 419.
The FlashCopy mappings return to the Not in a Group list after they are removed from the
consistency group.
Remote Copy consists of two methods for copying: Metro Mirror and Global Mirror. Metro
Mirror is designed for metropolitan distances in conjunction with a synchronous copy
requirement, while Global Mirror is designed for longer distances without requiring the hosts
to wait for the full round-trip delay of the long distance link.
Metro Mirror and Global Mirror are IBM branded terms for the functions Synchronous Remote
Copy and Asynchronous Remote Copy, respectively. Throughout this paper, the term
“Remote Copy” is used to refer to both functions where the text applies to each term equally.
Partnership
When creating a partnership, we connect two IBM Storwize V7000 systems that are
separated by distance. After the partnership has been configured on both systems, further
communication between the node canisters in each of the storage systems is established and
maintained by the SAN network. All inter-cluster communication goes through the Fibre
Channel network. The partnership must be defined on both IBM Storwize V7000 systems to
make it fully functional.
Partnership topology
Partnerships between up to four IBM Storwize V7000 systems are allowed.
Typical partnership topologies between multiple IBM Storwize V7000 systems are described
below:
1. Daisy-chain topology, as shown in Figure 11-55:
Partnership States:
Partially Configured
Indicates that only one cluster partner is defined from a local or remote cluster to the
displayed cluster and is started. For the displayed cluster to be configured fully and to
complete the partnership, you must define the cluster partnership from the cluster that is
displayed to the corresponding local or remote cluster.
Fully Configured
Indicates that the partnership is defined on the local and remote clusters and is started.
two clusters has been compromised by too many fabric errors or slow response times of the
cluster partnership.
Typically, the master volume contains the production copy of the data and is the volume that
the application normally accesses. The auxiliary volume typically contains a backup copy of
the data and is used for disaster recovery.
The master and auxiliary volumes are defined when the relationship is created, and these
attributes never change. However, either volume can operate in the primary or secondary role
as necessary. The primary volume contains a valid copy of the application data and receives
updates from the host application, analogous to a source volume. The secondary volume
receives a copy of any updates to the primary volume, because these updates are all
transmitted across the mirror link. Therefore, the secondary volume is analogous to a
continuously updated target volume. When a relationship is created, the master volume is
assigned the role of primary volume and the auxiliary volume is assigned the role of
secondary volume. Therefore, the initial copying direction is from master to auxiliary. When
the relationship is in a consistent state, you can reverse the copy direction.
The two volumes in a relationship must be the same size. A Remote Copy relationship can
be established between volumes within one IBM Storwize V7000 storage system, which is
called an intra-cluster relationship, or between volumes in different IBM Storwize V7000
storage systems, which is called an inter-cluster relationship.
Using Remote Copy target volumes as Remote Copy source volumes is not allowed. A
FlashCopy target volume cannot be used as a Remote Copy source volume.
Metro Mirror
Metro Mirror is a type of remote copy that creates a synchronous copy of data from a master
volume to an auxiliary volume. With synchronous copies, host applications write to the master
volume but do not receive confirmation that the write operation has completed until the data is
written to the auxiliary volume. This ensures that both the volumes have identical data when
the copy completes. After the initial copy completes, the Metro Mirror function maintains a
fully synchronized copy of the source data at the target site at all times.
Figure 11-59 on page 425 illustrates how a write to the master volume is mirrored to the
cache of the auxiliary volume before an acknowledgement of the write is sent back to the host
that issued the write. This process ensures that the auxiliary is synchronized in real time, in
case it is needed in a failover situation.
The Metro Mirror function supports copy operations between volumes that are separated by
distances up to 300 km. For disaster recovery purposes, Metro Mirror provides the simplest
way to maintain an identical copy on both the primary and secondary volumes. However, as
with all synchronous copies over remote distances, there can be a performance impact to
host applications. This performance impact is related to the distance between the primary
and secondary volumes and, depending on application requirements, its use might be limited
based on the distance between sites.
Global Mirror
Global Mirror provides an asynchronous copy, which means that the secondary volume is
not an exact match of the primary volume at every point in time. The Global Mirror function
provides the same function as Metro Mirror Remote Copy without requiring the hosts to wait
for the full round-trip delay of the long distance link.
With the asynchronous remote copy that Global Mirror provides, write operations are
completed on the primary site and the write acknowledgement is sent to the host before the
write is received at the secondary site. An update of this write operation is sent to the
secondary site at a later stage, which provides the capability to perform remote copy over
distances exceeding the limitations of synchronous remote copy.
Figure 11-60 on page 426 shows that a write operation to the master volume is
acknowledged back to the host issuing the write before the write operation is mirrored to the
cache for the auxiliary volume.
The Global Mirror algorithms maintain a consistent image on the auxiliary at all times. They
achieve this consistent image by identifying sets of I/Os that are active concurrently at the
master, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary.
In a failover scenario, where the secondary site needs to become the master source of data,
depending on the workload pattern as well as the bandwidth and distance between local and
remote site, certain updates might be missing at the secondary site. Therefore, any
applications that use this data must have an external mechanism for recovering the
missing updates and reapplying them, such as a transaction log replay.
The IBM Storwize V7000 storage system provides an advanced Global Mirror feature that
permits you to test performance implications before deploying Global Mirror and obtaining a
long-distance link. This feature can be enabled by modifying the IBM Storwize V7000
storage system parameters gmintradelaysimulation and gminterdelaysimulation by
using the CLI. These two parameters can be used to simulate the write delay to the
secondary volume. The delay simulation can be enabled separately for intra-cluster or
inter-cluster Global Mirror. You can use this feature to test an application before the full
deployment of the Global Mirror feature. You can find more information on how to enable the
CLI feature in Appendix A, “CLI setup and SAN Boot” on page 541.
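As a sketch, assuming CLI access is already set up, the delay simulation can be enabled with the chcluster command; the 20 ms value is an example:

```shell
# Simulate an additional 20 ms write delay for inter-cluster Global Mirror
svctask chcluster -gminterdelaysimulation 20

# Simulate an additional 20 ms delay for intra-cluster Global Mirror
svctask chcluster -gmintradelaysimulation 20

# Set the values back to 0 to disable the simulation
svctask chcluster -gminterdelaysimulation 0 -gmintradelaysimulation 0
```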
Remote Copy commands can be issued to a Remote Copy consistency group, and therefore
simultaneously for all Metro Mirror relationships defined within that consistency group, or to a
single Metro Mirror relationship that is not part of a Metro Mirror consistency group.
Figure 11-61 on page 427 illustrates the concept of Remote Copy consistency groups.
Because the RC_Relationship 1 and 2 are part of the consistency group, they can be handled
as one entity, while the stand-alone RC_Relationship 3 is handled separately.
Remote Copy relationships can only belong to one consistency group; however, they do not
have to belong to a consistency group. Relationships that are not part of a consistency group
are called stand-alone relationships. A consistency group can contain zero or more
relationships. All relationships in a consistency group must have matching primary and
secondary clusters, which are sometimes referred to as master and auxiliary clusters. All
relationships in a consistency group must also have the same copy direction and state.
Metro Mirror and Global Mirror relationships cannot belong to the same consistency group. A
copy type is automatically assigned to a consistency group when the first relationship is
added to the consistency group. After the consistency group is assigned a copy type, only
relationships of that copy type can be added to the consistency group.
These states apply to both relationships and consistency groups, except that the Empty
state applies only to consistency groups:
InconsistentStopped
The primary volumes are accessible for read and write I/O operations, but the secondary
volumes are not accessible for either. A copy process must be started to make the secondary
volumes consistent.
InconsistentCopying
The primary volumes are accessible for read and write I/O operations, but the secondary
volumes are not accessible for either. This state indicates that a copy process is ongoing
from the primary to the secondary volumes.
ConsistentStopped
The secondary volumes contain a consistent image, but it might be out-of-date with respect to
the primary volumes. This state can occur when a relationship was in the
ConsistentSynchronized state and experiences an error that forces a freeze of the
consistency group or the Remote Copy relationship.
ConsistentSynchronized
The primary volumes are accessible for read and write I/O operations. The secondary
volumes are accessible for read-only I/O operations.
Idling
Both the primary volumes and the secondary volumes are operating in the primary role.
Consequently the volumes are accessible for write I/O operations.
IdlingDisconnected
The volumes in this half of the consistency group are all operating in the primary role and can
accept read or write I/O operations.
InconsistentDisconnected
The volumes in this half of the consistency group are all operating in the secondary role and
cannot accept read or write I/O operations.
ConsistentDisconnected
The volumes in this half of the consistency group are all operating in the secondary role and
can accept read I/O operations but not write I/O operations.
Empty
The consistency group does not contain any relationships.
The Metro Mirror function supports copy operations between volumes that are separated
by distances up to 300 km.
A Remote Copy relationship can belong to only one consistency group.
All relationships in a consistency group must have matching primary and secondary
clusters, which are sometimes referred to as master and auxiliary clusters. All
relationships in a consistency group must also have the same copy direction and state.
Metro Mirror and Global Mirror relationships cannot belong to the same consistency
group.
To manage multiple Remote Copy relationships as one entity, relationships can be made
part of a Remote Copy consistency group, which ensures data consistency across
multiple Remote Copy relationships and provides ease of management.
IBM Storwize V7000 storage system implements flexible resynchronization support,
enabling it to resynchronize volume pairs that have experienced write I/Os to both disks
and to resynchronize only those regions that are known to have changed.
Total Remote Copy volume capacity per I/O Group: 1024 TB (this limit is the total capacity
for all master and auxiliary volumes in the I/O group)
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003634&myns=s033&mynp=familyind5329743&mync=E
Note: When a local and a remote fabric are connected together for Remote Copy
purposes, the inter-switch link (ISL) hop count between a local node and a remote node
cannot exceed seven.
Total round-trip latency must be less than 80 ms, and less than 40 ms in each direction.
2. Bandwidth
Note: If the link between the two sites is configured with redundancy so that it can tolerate
single failures, the link must be sized so that the bandwidth and latency requirements can
be met during single-failure conditions.
For disaster recovery purposes, you can use the FlashCopy feature to create a consistent
copy of an image before you restart a Global Mirror relationship.
As the name implies, these two panels are used to manage Remote Copy and the
Partnership respectively.
Figure 11-63 Create new partnership between IBM Storwize V7000 storage systems
Check the zoning and the system status, and make sure that the clusters can see each
other. Then you can create your partnership by selecting the appropriate remote storage
system, as shown in Figure 11-65 on page 433, and defining the available bandwidth
between both systems.
Figure 11-65 Select the remote IBM Storwize V7000 storage system for a partnership
The bandwidth that you input here is used by the background copy process between the
clusters in the partnership. To set the background copy bandwidth optimally, make sure
that you consider all three resources (the primary storage, the inter-cluster link
bandwidth, and the secondary storage) to avoid overloading them, which affects the
foreground I/O latency.
Click Create and the partnership definition is complete on the first IBM Storwize V7000
system, as shown in Figure 11-66. The partnership is listed on the left of the
Partnership panel, and when you select the partnership, more information about it is
displayed on the right.
Perform the same steps on the second storage system, which then becomes a fully
configured partner, as shown in Figure 11-67.
The Remote Copy partnership is now implemented between two IBM Storwize V7000
systems and both systems are ready for further configuration of Remote Copy relationships.
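On the CLI, the two-sided partnership definition can be sketched as follows; the system names and the 50 MBps bandwidth value are hypothetical examples:

```shell
# On the local system: define the partnership to the remote system
svctask mkpartnership -bandwidth 50 ITSO_V7000_B

# On the remote system: define the partnership back to the local system,
# which moves the partnership from partially to fully configured
svctask mkpartnership -bandwidth 50 ITSO_V7000_A
```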
You can also change the bandwidth setting for the partnership in the Partnerships panel, as
shown in Figure 11-68. Select Apply Changes to confirm your modification.
After you have stopped the partnership, your partnership will be marked as Fully Configured:
Stopped, as shown in Figure 11-70.
Figure 11-70
With a stopped partnership, you can restart it as needed by selecting Start Partnership in
the Actions drop down list, as shown in Figure 11-71 on page 436.
The partnership will return to fully configured status when it has been restarted.
Delete Partnership
You can delete a partnership by selecting Delete Partnership in the Actions drop down list,
as shown in Figure 11-72.
Access the Remote Copy panel to manage remote copy, as shown in Figure 11-73.
The Remote Copy panel, as shown in Figure 11-74 on page 438, is where you can manage
Remote Copy relationships and Remote Copy consistency groups.
On the left of the Remote Copy panel, there is a consistency group filter to list the Remote
Copy consistency groups that meet your requirements. On the right of the Remote Copy
panel, you can view the properties of a consistency group and the Remote Copy
relationships in it. You can also take actions on Remote Copy relationships and Remote
Copy consistency groups on the right. Select Not in a Group, and all the Remote Copy
relationships that are not in any Remote Copy consistency group are displayed on the right.
As shown in Figure 11-75 on page 439, you need to set the Remote Copy relationship type
first. Based on your requirement, you can select Metro Mirror (synchronous replication) or
Global Mirror (asynchronous replication). Select the appropriate replication type and click
Next.
In the next step, you need to select your Remote Copy auxiliary (target) storage system:
either the local system or the already defined second storage system as the Remote Copy
partner. In our example, shown in Figure 11-76, we choose another system to build an
inter-cluster relationship. Click Next to continue.
Then the Remote Copy master and auxiliary volumes need to be specified. Both volumes
must be the same size. As shown in Figure 11-77, the system offers only appropriate
auxiliary candidates with the same volume size as the selected master volume. After you
select the volumes based on your requirements, click Add.
You can define multiple, independent relationships using the Add button and you can remove
a relationship by clicking on the red cross. In our example, we create two independent
Remote Copy relationships, as shown in Figure 11-78.
In the next step, you are asked whether the volumes in the relationship are already
synchronized. In most situations, the data on the master volume and on the auxiliary volume
are not identical, so select No and click Next to enable an initial copy, as shown in
Figure 11-79 on page 441.
If you select Yes, the volumes are already synchronized in this step, A warning message
will pop up as shown in Figure 11-80. Make sure the volumes are truly identical, then click OK
to continue.
Figure 11-80 Warning message to make sure the volumes are synchronized
In the last step you can choose to start the initial copying process now or at a later time. In
our example, we select Yes, start copying now and click Finish, as shown in
Figure 11-81 on page 442.
After the creation completes, the two independent Remote Copy relationships are defined
and displayed in the Not in a Group list, as shown in Figure 11-82.
Optionally, you can monitor the ongoing initial synchronization in the Running Tasks status
indicator, as shown in Figure 11-83 on page 443.
In the next step, allow secondary read/write access if required, and click Stop Relationship,
as shown in Figure 11-85.
After the stop completes, the state of the Remote Copy relationship changes from Consistent
Synchronized to Idling, as shown in Figure 11-86 on page 444. Read/write access to both
volumes is now allowed.
When starting a Remote Copy relationship, the most important choice is the copy direction:
either the master or the auxiliary volume can be the primary. Make your decision based on
your situation and click Start Relationship. In our example, we choose the master volume to
be the primary, as shown in Figure 11-88.
A warning message pops up and shows you the consequences, as shown in Figure 11-90
on page 446. If you switch the Remote Copy relationship, the copy direction is reversed: the
current primary volume becomes the secondary, while the current secondary volume
becomes the primary. In addition, write access to the current primary volume is lost and write
access to the current secondary volume is enabled. Unless you are in a disaster recovery
situation, stop host I/O to the current primary volume in advance. Make sure you are
prepared for these consequences, and if so, click OK to continue.
After the switch completes, your Remote Copy relationship is tagged, as shown in
Figure 11-91, to indicate that the primary volume in this relationship has changed.
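The start, stop, and switch actions used above are also available from the CLI. This is a sketch; the relationship name rcrel0 is a hypothetical example:

```shell
# Start copying with the master volume as the primary:
svctask startrcrelationship -primary master rcrel0
# Stop the relationship and enable read/write access to the secondary:
svctask stoprcrelationship -access rcrel0
# Switch the copy direction so that the auxiliary volume becomes primary:
svctask switchrcrelationship -primary aux rcrel0
```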
Then input the new name for the Remote Copy relationship and click Rename, as shown in
Figure 11-93.
Figure 11-93 Input a new name for the Remote Copy relationship
Then you need to input a name for your new consistency group as shown in Figure 11-97.
In the next step, as shown in Figure 11-98, you can choose to create an empty consistency
group or to add Remote Copy relationships to the consistency group now. If you select Yes,
add relationships to this group, you can select existing relationships or create new ones to
add to the consistency group. In our example, we create an empty consistency group now
and add Remote Copy relationships to it afterwards. Click Finish to proceed.
After the creation process completes, the new, empty consistency group appears in the left
part of the Remote Copy panel. Click the new consistency group to display more information
on the right, as shown in Figure 11-99.
The name and the status of the consistency group appear beside the relationship icon, and
you can change the name of the consistency group simply by clicking it and entering a new
one. At the top of the right part of the Remote Copy panel, you can take actions on the
consistency group, and below the relationship icon you can find all the Remote Copy
relationships in this consistency group. Actions on individual relationships can be applied
here as well, through the right-click menu or the Actions drop-down list.
In the next step, you are asked to choose the consistency group to add the Remote Copy
relationships to. Based on your requirements, select the appropriate consistency group and
click Add to Consistency Group, as shown in Figure 11-101.
Figure 11-101 Choose the right consistency group to add the Remote Copy relationships to
Your Remote Copy relationships now appear in the consistency group you selected.
Next, select the copy direction of the consistency group as required, as shown in
Figure 11-103; in our example we choose Master is primary and click Start Consistency
Group. The consistency group then starts copying data from the primary to the secondary.
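The equivalent consistency group operations can also be performed from the CLI. This is a sketch; the group and relationship names are hypothetical:

```shell
# Create a consistency group with the remote system as partner:
svctask mkrcconsistgrp -name ITSO_CG1 -cluster RemoteCluster
# Move an existing Remote Copy relationship into the group:
svctask chrcrelationship -consistgrp ITSO_CG1 rcrel0
# Start the whole group with the master volumes as primary:
svctask startrcconsistgrp -primary master ITSO_CG1
```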
In the next step, you can allow read/write access to secondary volumes by ticking the
checkbox, as shown in Figure 11-105, and click Stop Consistency Group to proceed.
A warning message pops up to show the consequences, as shown in Figure 11-107
on page 454. After the switch, the primary cluster of the consistency group changes:
write access to the current master volumes is lost, while write access to the current
auxiliary volumes is enabled. Make sure this is what you intend, and if so, click OK to
continue.
You will be asked to confirm the Remote Copy relationships you want to delete from the
consistency group, as shown in Figure 11-109 on page 455. Make sure the Remote Copy
relationships shown in the box are the ones you need to remove from the consistency group,
and click Remove to proceed.
Figure 11-109 Confirm the Remote Copy relationships to be removed from consistency group
After the removal process completes, the Remote Copy relationships will be deleted from the
consistency group and displayed in the Not in a Group list.
Next, you must confirm the deletion of the consistency group, as shown in
Figure 11-111. Click OK if you are sure that this consistency group should be deleted.
IBM Tivoli Storage Productivity Center can help you manage the capacity utilization of
storage systems, file systems, and databases. It can automate file-system capacity
provisioning, and perform device configuration and management of multiple devices from a
single user interface. It can also tune and proactively manage the performance of storage
devices on the Storage Area Network (SAN), and manage, monitor, and control your SAN
fabric. A plug-in for IBM Systems Director, named IBM Systems Director Storage Productivity
Center, is now also available. It is designed as an embedded version of TPC 4.2.1 for
IBM Systems Director 6.2.1.
Contact your IBM Business partner or IBM Representative to obtain the correct license for
your requirements.
12.2.6 Agents
Outside of the server, several interfaces are used to gather information about the
environment. The most important sources of information are the TPC agents (Storage
Resource agent, Data agent, and Fabric agent), as well as SMI-S enabled storage devices
that use a CIMOM agent (either embedded or as a proxy agent). Storage Resource agents,
CIM agents, and out-of-band fabric agents gather host, application, storage system, and SAN
fabric information and send that information to the Data Server or Device Server.
12.2.7 Interfaces
As TPC gathers information from your storage (servers, subsystems, and switches) across
your enterprise, it accumulates a repository of knowledge about your storage assets and how
they are used. You can use the reports provided in the user interface view and analyze that
repository of information from various perspectives to gain insight into the use of storage
across your enterprise. The user interfaces (UIs) enable users to request information and
then generate and display reports based on that information. Certain user interfaces can also
be used for configuration of TPC or for storage provisioning on supported devices.
In this section, we cover the Windows TPC installation wizard. The installation wizard covers
two installation paths, “Typical” and “Custom”. We will guide you through the installation of the
typical path. The installation in this chapter is not related to any of the different licenses that
are available. All editions use the same code base and as such all the panels look the same.
The Typical installation allows you to install all the components of the Tivoli Storage
Productivity Center on the local server in one step, but you still can decide which components
to install:
Server: Data Server, Device Server, Replication Manager, and TIP
Clients: TPC GUI
Storage Resource Agent
The drawback of using the typical installation is that everything besides the above selection
is set to defaults. At about 75% of the installation, the installer launches the Tivoli Storage
Productivity Center for Replication installation wizard, which gives you the option to change
some of its installation parameters. You basically step through it and press Finish to start
its installation procedure. Once this is done, you click Finish again to return to the Tivoli
Storage Productivity Center installer to complete the last few steps of the installation.
Note: Part 1, part 2, and part 3 are required for every TPC installation and need to be
downloaded and extracted to a single directory.
Note: On Windows, ensure that the directory name where the installation images reside
contains no spaces or special characters; otherwise the Tivoli Storage Productivity Center
installation will fail. For example, a failure will occur if you have a directory name such as:
C:\tpc 42 standard edition\disk1
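The note above can be verified before launching the installer with a quick shell check. This is a minimal sketch that tests only for spaces, and the paths shown are hypothetical examples:

```shell
#!/bin/sh
# Minimal sketch: warn if an installation image path contains spaces,
# which cause the TPC installer to fail.
check_path() {
  case "$1" in
    *" "*) echo "unsafe: rename the directory" ;;
    *)     echo "safe" ;;
  esac
}
check_path "C:/tpc42_standard_edition/disk1"    # prints "safe"
check_path "C:/tpc 42 standard edition/disk1"   # prints "unsafe: rename the directory"
```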
The SRA zip file contains Tivoli Storage Productivity Center Storage Resource Agents
(SRAs). Tivoli Storage Productivity Center Storage Resource Agent contains the local agent
installation components:
Storage Resource Agent
Installation scripts for the Virtual I/O server
In addition to the images mentioned above, the following images are available:
Tivoli Storage Productivity Center Storage National Language Support
IBM Tivoli Storage Productivity Center for Replication Two Site Business Continuity
License, which is available for Windows, Linux and AIX
IBM Tivoli Storage Productivity Center for Replication Three Site Business Continuity
License, which is available for Windows, Linux and AIX
Fix pack and 9.7: Do not use 9.7 with any fix pack, even if it is a later version, because the
TPC installation will fail.
Before beginning the installation, it is important that you log on to your system as a local
administrator with Administrator authority.
DB2 installation
To begin the installation of DB2, follow these steps:
1. Insert the IBM DB2 Installer CD into the CD-ROM drive.
If Windows autorun is enabled, the installation program should start automatically. If it
does not, open Windows Explorer, go to the DB2 installation image path, and
double-click setup.exe.
Note: Only the user ID that installed the DB2 product has the privilege to issue the
db2start and db2stop commands.
You will see the Welcome panel, as shown in Figure 12-2. Select Install a Product to
proceed with the installation.
2. The next panel allows you to select the DB2 product to be installed. Select the DB2
Enterprise Server Edition Version 9.7 by clicking Install New to proceed as shown in
Figure 12-3.
3. The DB2 Setup wizard panel is displayed, as shown in Figure 12-4. Click Next to proceed.
4. The next panel displays the license agreement; click I accept the terms in the license
agreement (Figure 12-5).
5. To select the installation type, accept the default of Typical and click Next to continue (see
Figure 12-6).
6. Select Install DB2 Enterprise Server Edition on this computer (see Figure 12-7). Click
Next to continue.
7. The panel shown in Figure 12-8 shows the default values for the drive and directory to be
used as the installation folder. You can change these or accept the defaults, then click
Next to continue. In our installation, we accept to install on the C: drive.
8. The next panel requires user information for the DB2 Administration Server; it can be a
Windows domain user. If it is a local user, select None - use local user account for the
Domain field.
The user name field is pre-filled with a default user name. You can change it or leave the
default, and type the password of the DB2 user account that you want to create (see
Figure 12-9). Leave the check-box Use the same user name and password for the
remaining DB2 services checked and click Next to continue.
DB2 creates a user with the following administrative rights:
– Act as part of the operating system.
– Create a token object.
– Increase quotas.
– Replace a process-level token.
– Log on as a service.
9. In the Configure DB2 instances panel, select Create the default DB2 Instance and click
Next as shown in Figure 12-10.
10.Select Single Partition Instance and click Next as shown in Figure 12-11 on page 470.
11.Accept the default DB2 Instance and click Next to continue (see Figure 12-12).
12.The next panel allows you to specify options to prepare the DB2 tools catalog, as shown in
Figure 12-13. Verify that Prepare the DB2 tools catalog on this computer is not
selected. Click Next to continue.
13.The next panel, shown in Figure 12-14, allows you to set the DB2 server to send
notifications when the database needs attention. Ensure that the check-box Set up your
DB2 server to send notification is unchecked and then click Next to continue.
14.Accept the defaults for the DB2 administrators group and DB2 users group in the Enable
operating system security for DB2 objects panel shown in Figure 12-15 and click Next to
proceed.
15.Figure 12-16 shows the summary panel about what is going to be installed, based on your
input. Review the settings and click Finish to continue.
16.The DB2 installation proceeds and you see a progress panel similar to the one shown in
Figure 12-17.
17. Close the DB2 Setup Launchpad (Figure 12-19) to complete the installation.
3. Enter the following DB2 commands to connect to the SAMPLE database, issue a simple
SQL query, and reset the database connection:
db2 connect to sample
db2 "select * from staff where dept = 20"
db2 connect reset
4. If you installed DB2 Enterprise Server 9.7, restart the system now.
Note: If you do not reboot your server, or at least restart the DB2 service, after installing
DB2 level 9.7, the TPC installation will fail. For more details refer to:
http://www-01.ibm.com/support/docview.wss?uid=swg21452614
Before starting the installation, verify that a supported version of DB2 Enterprise Server
Edition has been installed and started.
3. The License Agreement panel is displayed. Read the terms and select I accept the terms
of the license agreement. Then click Next to continue as shown in Figure 12-23.
4. Select Typical installation and make sure you have marked all checkboxes as shown in
Figure 12-24.
5. Enter the User ID and the Password you have selected during DB2 installation as shown
in Figure 12-25.
6. Specify the location to install TIP, as shown in Figure 12-26. Note that if you install TPC on
64-bit Windows, the default path includes “Program Files (x86)”. Remove the x86 portion;
otherwise you will get an error message and will not be able to proceed.
7. Select your preferred authentication method and click Next as shown in Figure 12-27.
8. Review the summary report and click Install to start the installation task as shown in
Figure 12-28 on page 478.
9. The installation task now starts and takes some time. During the installation you will be
prompted to select the TPC for Replication settings, as shown in Figure 12-29. Click Next.
10.To perform a prerequisites check click Next (Figure 12-30 on page 479).
11.Select I accept the terms of the license agreement and click Next (Figure 12-31).
12.Select an installation path and click Next as shown in Figure 12-32 on page 480.
13.Keep the default ports and click Next as shown in Figure 12-33.
14.Review the installation summary and click Install as shown in Figure 12-34 on page 481.
16.After the installation has completed click Finish as shown in Figure 12-36 on page 482.
17.The view will jump back to the TPC Installation as shown in Figure 12-37.
18.After some minutes the TPC installation process completes, click Finish to close the
wizard (Figure 12-38 on page 483).
2. The welcome panel will appear (Figure 12-40 on page 484). Click Add Devices to
connect to a new system.
4. Select Add and configure new storage subsystems and click Next (Figure 12-42 on
page 485).
5. Select or enter the following settings to connect to IBM Storwize V7000 (Figure 12-43 on
page 486).
a. Device Type: IBM SAN Volume Controller / IBM Storwize V7000
b. Software Version: 5+
c. IP Address: Enter Storwize V7000 Cluster IP
d. Select Key: Upload new key
e. Admin Username: superuser
f. Admin Password: enter the superuser password (default = passw0rd)
g. Username: Select a User to connect
h. Private SSH Key: Specify the location of the private SSH key.
(If you do not have a key, create one now as described in Appendix A, “CLI setup and
SAN Boot” on page 541.)
6. Click Add to connect to your IBM Storwize V7000 System as shown in Figure 12-44 on
page 487.
7. Repeat these steps to add another system or click Next to complete the discovery of the
new storage subsystems (Figure 12-45).
8. After the discovery select the new storage subsystem that you wish to add to the
configuration as shown in Figure 12-46 on page 488.
9. Specify the data collection settings; to add the new system to the default groups, select
Monitoring Group and Subsystem Standard Group, as shown in Figure 12-47 on
page 489.
10.Review your selections and click Next as shown in Figure 12-48 on page 490 to add the
new device.
11.The IBM Storwize V7000 has now been added to TPC, click Finish to close the wizard
(Figure 12-49 on page 491).
12.You will be asked whether you would like to view the job history; we will not view it now, so
click Close (Figure 12-50).
The IBM Storwize V7000 has now been added successfully and can be administered by
Tivoli Storage Productivity Center. Of course, the normal IBM Storwize V7000 GUI and CLI
are still available and can be used to manage the system as well.
TPC can monitor far more than configuration data: it also provides performance statistics,
health monitoring, path usage, and so on. Figure 12-53 on
page 494 shows statistics for a single disk pool.
Figure 12-53 Storage Subsystem performance by Managed Disk Group (Storage Pool): Total I/O Rate
A detailed description about TPC is beyond the intended scope of this book.
13
At the heart of the IBM Storwize V7000 is a pair of node canisters. These two canisters
share the load of transmitting and receiving data between the attached hosts and the disk
arrays. This section looks at the RAS features of the IBM Storwize V7000, as well as
monitoring and troubleshooting.
Fibre Channel
There are four FC ports on the left hand side of the canister. They are in two rows of two
connectors. The ports are numbered 1 to 4 from left to right, top to bottom. The ports operate
at 2, 4 or 8 Gbps. Each port has two green LEDs associated with it. These are not shown in
the figure but are located between the two rows of ports and are triangular, pointing towards
the port to which they refer. On the left is the Speed LED and on the right the Link LED. In
Table 13-1 we explain the status of the indicators:
USB
There are two USB connectors side by side, numbered 1 on the left and 2 on the right.
There are no indicators associated with the USB ports.
Ethernet
There are two 10/100/1000 Mbps Ethernet ports side by side on the canister and they are
numbered 1 on the left and 2 on the right. Each port has two LEDs and the status is shown in
Table 13-2.
SAS
There are two 6 Gbps SAS ports side by side on the canister. They are numbered 1 on the left
and 2 on the right. Each port connects four PHYs; each PHY is associated with an LED.
These LEDs are green and are directly above the ports. For each port they are numbered 1
through 4. The LED indicates activity on the link and the status is shown in Table 13-3.
ON Link is connected
Canister
There are three LEDs in a row towards the top right of the canister that indicate the status and
identification for the node as shown in Table 13-4 on page 498.
Left LED (green, Cluster status): ON means the node is in the active or starting state, and it
may not be safe to remove the canister. If the fault LED is off, the node is an active member
of a cluster; if the fault LED is also on, there is a problem establishing a cluster, for example
due to lack of quorum.
Right LED (green, Power): ON means the canister is powered on and the CPUs are running.
Expansion canister
As shown in Figure 13-2 on page 499 there are two 6 Gbps SAS ports side by side on the
canister. They are numbered 1 on the left and 2 on the right. Each port connects four PHYs;
each PHY is associated with an LED. These LEDs are green and are next to the ports.
In Table 13-5 on page 499 we describe the LED status of the expansion canister.
An array is a type of MDisk made up of disk drives that are in the enclosures. These drives
are referred to as members of the array. Each array has a RAID level. RAID levels provide
different degrees of redundancy and performance, and have different restrictions on the
number of members in the array. IBM Storwize V7000 supports hot spare drives. When an
array member drive fails the system automatically replaces the failed member with a hot
spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives can
be manually exchanged with array members.
Each array has a set of goals that describe the desired location and performance of each
array member. A sequence of drive failures and hot spare takeovers can leave an array
unbalanced, that is with members that do not match these goals. The system automatically
rebalances such arrays when appropriate drives are available.
IBM Storwize V7000 supports the RAID levels shown in Table 13-6.
Disk scrubbing
The scrub process runs when arrays do not have any other background process. It checks
that drive LBAs are readable and that array parity is in synchronization. Arrays are scrubbed
independently, and each array is entirely scrubbed every seven days.
A LUN is created on the array, which is then presented to the IBM Storwize V7000 as a
normal managed disk (MDisk). As is the case for HDDs, the SSD RAID array format will help
protect against individual SSD failures. Depending on your requirements, additional high
availability protection, above the RAID level, can be achieved by using volume mirroring.
SAS Cabling
Expansion enclosures are attached to control enclosures using SAS cables, as shown in
Figure 13-3 on page 501. The SAS network is made up of strands and chains.
A strand starts with an SAS initiator chip inside an IBM Storwize V7000 node canister and
progresses through SAS expanders which connect disk drives; each canister contains an
expander. Figure 13-3 on page 501 shows how the SAS connectivity works inside the node
and expansion canisters. Each drive has two ports, each connected to a different expander
and strand. This means both nodes in the I/O group have direct access to each drive and
there is no single point of failure.
At system initialization, when devices are added to or removed from strands, and at other
times, the IBM Storwize V7000 software performs a discovery process to update the state of
the drive and enclosure objects.
13.1.3 Power
All enclosures contain two PSUs for normal operation. A single PSU can power the entire
enclosure, so the second PSU provides redundancy.
Control enclosure PSUs contain batteries; expansion enclosure PSUs do not. The
additional battery function requires two additional LEDs, which is the main visible
difference between the PSUs when viewed from outside.
There is a power switch on the power supply. The switch must be on for the PSU to be
operational. If the power switch is turned off then the PSU stops providing power to the
system. For control enclosure PSUs, the integrated battery continues to be able to supply
power to the node canisters.
Figure 13-4 on page 502 shows the two PSUs present in the control and expansion
enclosures; the control enclosure PSU has two more LEDs than the expansion enclosure
PSU because of the battery status indication. Table 13-7 on page 502 shows the meaning of
the LEDs in both enclosures:
Table 13-7 LED status of Power in the Controller and Expansion Enclosure
Position Color Meaning
Table 13-8 shows the meaning of the LEDs for the PSU on the controller and expansion
enclosure.
ON, ON, ON, ON: Communication failure between the PSU and the enclosure midplane.
OFF, ON, ON, ON: Communication failure and a PSU problem.
OFF, FLASHING, FLASHING, FLASHING: PSU firmware download in progress.
Table 13-9 shows the meaning of the LEDs for the battery on the controller enclosure.
Regularly saving a configuration backup file of the IBM Storwize V7000 is important. We
recommend that you download this file regularly to your management workstation to protect
the data; a best practice is to automate this with a script and save it every day.
This file must be used if there is a serious failure that requires you to restore your system
configuration.
The backup file is updated by the cluster every day, but it is important to save it after any
changes to your system configuration. It contains configuration data such as arrays, pools,
volumes, and so on (but no customer application data).
You can use the GUI or the CLI to handle it: you must use the CLI command svcconfig
backup to produce a new backup file, because it is not possible at this time to generate a
new backup file with the GUI; the GUI can only be used to save an existing one.
Although we recommend that objects have non-default names at the time that the backup is
taken, this prerequisite is not mandatory. Objects with default names are renamed when they
are restored.
To generate a Configuration backup from the Command Line Interface enter this command:
svcconfig backup
The svcconfig backup command creates three files that provide information about the
backup process and cluster configuration. These files are created in the /tmp directory of the
configuration node.
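The daily automation recommended above can be sketched with a scheduled script. The host name, user, and key path are hypothetical, and the sketch assumes password-less SSH access to the cluster:

```shell
#!/bin/sh
# Hypothetical sketch of a daily configuration backup, suitable for cron.
# Triggers a new backup, then copies the resulting files from /tmp
# on the configuration node to the management workstation.
ssh -i ~/.ssh/v7000_key superuser@v7000-cluster svcconfig backup
scp -i ~/.ssh/v7000_key "superuser@v7000-cluster:/tmp/svc.config.backup.*" /backup/v7000/
```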
Table 13-10 describes the three files that are created by the backup process:
Table 13-10
File name Description
2. Clicking the Support option displays the screen shown in Figure 13-6 on page 506.
From this screen, click Show full log listing... to display all log files.
3. Look for a file named /dumps/svc.config.backup.xml_*, select it, and right-click
it to download it to your workstation.
To upgrade the IBM Storwize V7000 software, perform the following steps:
1. With a supported Web browser, go to the following URL, replacing
<your-cluster-ip-address> with your cluster IP address:
http://<your-cluster-ip-address>
You will be taken to the IBM Storwize V7000 GUI login screen as shown in Figure 13-7.
2. Log in with your superuser ID and password to get to the IBM Storwize V7000
management home page. From there, go to the Configuration menu as shown in
Figure 13-8 on page 507 and click Advanced.
3. In the Advanced menu click on Upgrade Software and you will get to the screen as
shown in Figure 13-9.
From the screen shown in Figure 13-9, you can click the following buttons:
Check for updates: this checks directly on the IBM Web site whether there is a newer IBM
Storwize V7000 software version than the one currently installed on your IBM
Storwize V7000; you need an internet connection to be able to do this.
Launch Upgrade Wizard: this launches the software upgrade process.
4. Click on Launch Upgrade Wizard to start the upgrade process and you will be redirected
to the screen shown in Figure 13-10 on page 508.
From the screen shown in Figure 13-10 you can download the Upgrade Test Utility, or if you
downloaded it previously you can browse to the location where you saved it as shown in
Figure 13-11.
5. When the Upgrade Test Utility has been uploaded, you will get the screen as shown in
Figure 13-12 on page 509.
6. Click Next in Figure 13-12 on page 509 and the Upgrade Test Utility will be applied and
you will be redirected to the screen as shown in Figure 13-13.
7. Click Close on Figure 13-13 and you will get the screen shown in Figure 13-14, where
you can run the Upgrade Test Utility.
8. Click Next and you will be redirected to the screen shown in Figure 13-14. At this time the
Upgrade Test Utility will run and you will be able to see the suggested actions, if any, or the
screen shown in Figure 13-15.
9. Click Next on Figure 13-15 to start the software upload procedure, and you will be
redirected to the screen shown in Figure 13-16.
From the screen shown in Figure 13-16 you can download the SVC software upgrade
package, or you can browse and upload the software upgrade package from the location
where you saved it as shown in Figure 13-17.
Click Open in Figure 13-17 and you will be redirected to the screens shown in Figure 13-18
and Figure 13-19.
10.Click Next and you will be redirected to the screen shown in Figure 13-20.
11.Click Finish on Figure 13-20 and the software upgrade will start and you will be redirected
to the screen shown in Figure 13-21. Clicking Close as in Figure 13-21 will give you the
warning message as shown in Figure 13-22.
12.Click OK on Figure 13-22, and you have completed your task of upgrading the SVC software.
13.You will get messages that let you know that first one node, then the other, has been
upgraded. When both nodes have been rebooted, you have completed your SVC software
upgrade.
Example 13-2 pcmpath query device showing a problem with one canister
Total Devices : 2
C:\Program Files\IBM\SDDDSM>
The datapath query adapter command, as shown in Example 13-3, shows all IBM Storwize
V7000 paths that are available to the host. We can see that only Adapter 0 is available, and
that Adapter 1 state is FAILED.
Active Adapters :2
C:\Program Files\IBM\SDDDSM>
Once the problem is fixed, scan for new disks on your host, and check if all paths are available
again as shown in Example 13-4.
Total Devices : 2
C:\Program Files\IBM\SDDDSM>
You can also run the datapath query adapter command again and check that the FAILED
path is back ONLINE (State=NORMAL), as shown in Example 13-5 on page 514.
Active Adapters :2
C:\Program Files\IBM\SDDDSM>
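For scripted monitoring rather than reading the output by eye, the adapter state can be checked with standard text tools. This is a sketch only; the sample lines below merely approximate the layout of the datapath query adapter output shown in the examples:

```shell
#!/bin/sh
# Count adapters reported in FAILED state from captured
# 'datapath query adapter' output (sample text is illustrative).
sample='Adpt#  Name   State    Mode    Select  Errors  Paths  Active
    0  Scsi3  NORMAL   ACTIVE     558       0      4      4
    1  Scsi4  FAILED   ACTIVE     507       2      4      0'
failed=$(printf '%s\n' "$sample" | grep -c ' FAILED ')
echo "failed adapters: $failed"   # prints "failed adapters: 1"
```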
Opening the Troubleshooting panels brings up a navigation panel (along the top) with the
main functions:
Recommended Actions
Event Log
Support
The Recommended Actions tab indicates the highest priority maintenance procedure that
needs to be executed. Use the troubleshooting wizard first to allow the IBM Storwize V7000 to
determine the proper order of maintenance procedures. We clicked Recommended
Actions, and the highest priority event that needs to be fixed is shown in Figure 13-24 on
page 516.
In our example, one Fibre Channel port was not operational. The next step in this example is
to review the physical FC cabling to determine the issue, then click the Run Fix
Procedure button.
A best practice is to review the event logs and recommended actions periodically to ensure
there are no unexpected events, and to configure Call Home so that notification of serious
events occurs immediately.
The Event Log tab includes events that are important to know about, both errors and
informational events. An example of the event log is shown in Figure 13-25.
Another choice from the Event Log is the Support option, as shown in Figure 13-26 on
page 517. This selection is useful for providing the data that IBM Support needs to determine
the current status of the IBM Storwize V7000. This function provides several variations
of the svc_snap command embedded within the GUI choices.
You can also click on Troubleshooting and then Support as shown in Figure 13-27 and the
choices are to either download a support package to your local system, or to display the
available logs.
The row is highlighted, and a right-click then pops up the column choices. Check or
uncheck the column preferences as needed. The same column customization is also
available on the Audit Log grid, as demonstrated briefly in 13.7, “Recommended Actions -
details” on page 518.
Every field of the event log is available as a column in the Event Log grid. Several fields are useful to add when working with IBM Support and using the Show All filter, with events sorted by time stamp: the sequence number, the event count, and the fixed state.
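If you export the event log for offline review, sorting by time stamp can be done with standard tools. The CSV layout below is hypothetical, for illustration only:

```shell
# Hypothetical CSV export of the event log grid; assumed columns:
# sequence_number,timestamp,event_count,fixed,description
cat > events.csv <<'EOF'
120,2010-12-02 10:15:04,1,no,Fibre Channel ports not operational
118,2010-12-01 09:02:11,3,yes,Login excluded
119,2010-12-01 17:40:33,1,yes,FC discovery occurred
EOF

# Sort on the timestamp column (field 2) so events read in time order,
# keeping sequence number, event count, and fixed state visible.
sort -t, -k2 events.csv
```

Sorted this way, the oldest event (sequence number 118) appears first, which matches how IBM Support typically reads the log.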
Selecting Reset Grid Preferences sets the grid back to the defaults.
It is also possible to adjust the width of the columns to match your preference, as shown in Figure 13-29: hold the mouse over the column marker, click and hold, and drag the column to the desired width. This too can be reset with Reset Grid Preferences.
Figure 13-29 Adjusting the column width of Recommended Actions and Event Log panels
As an example, we show how a loose Fibre Channel cable presents itself. The IBM Storwize V7000 reported this problem as the highest priority event at the time, as shown in Figure 13-30.
Next we clicked the Run Fix Procedure button, and started the dialogue in Figure 13-31.
Unfortunately the Fibre Channel link is inactive so we select the option shown in Figure 13-32
on page 520 and then click Next.
As suggested by the DMP Recommended Action we changed the FC cable and then we
clicked on Next as shown in Figure 13-33 on page 521.
Then we select Refresh fibre-channel status to verify whether the status changes, as shown in Figure 13-34 on page 522.
After replacing the cable the status is changed from Inactive to Active as shown in
Figure 13-35 on page 523.
When we click the Next button, the error is about to be marked as fixed, as shown in Figure 13-36 on page 524.
Finally, you reach the last panel of the DMP procedure, and the error has been marked as fixed. Click the Close button to end the procedure, as shown in Figure 13-37 on page 525.
Returning to the main Recommended Actions grid, it is also possible to mark this error fixed
or view the event details. This is shown in Figure 13-38.
Clicking the Properties drop-down button, you can view all the properties of this event log entry, such as the node canister, port, WWPN, and event code, as displayed in Figure 13-39 on page 526.
Note that the event log properties menu is accessible from both the Recommended Actions and Event Log navigation tabs. Also, the Next button skips to the next event's properties, as displayed in Figure 13-40 on page 526.
At this point we reseat the loose Fibre Channel cable. Once we return to the GUI, we can either use the Mark As Fixed drop-down button to indicate that the problem is fixed, or use Run Fix Procedure, as shown in Figure 13-41.
Because we clicked Run Fix Procedure, the directed maintenance procedure (DMP) starts by showing the current status of the ports, as shown in Figure 13-42 on page 527.
Figure 13-43 shows the next screen in the procedure, giving you another look at the current status of the ports. Because we find the status correct (all active), we click the Fibre channel status is correct, mark as fixed button.
As shown in Figure 13-44 on page 529, a confirmation panel appears before the error is marked as fixed, where you can still cancel the DMP procedure; we click the Next button.
At this point the error is marked as fixed as shown by Figure 13-45 on page 530, and the error
will no longer be visible on the Recommended Actions grid.
It is possible to confirm the error is fixed by selecting the Event Log tab, using the Show All
filter, and including the ‘fixed’ column as shown in Figure 13-46 on page 531.
An example of the Audit Log viewed after creating volumes and mapping them to hosts is
shown in Figure 13-48 with a command highlighted.
Notice also that the running tasks button is available at the bottom of the screen in the status
pod, and if clicked shows the progress of currently executing tasks.
The Audit Log is especially useful for determining past configuration events, for example, when trying to determine how a volume ended up shared by two hosts, or how it may have been overwritten. The audit log is also included in the svc_snap support data to aid in problem determination.
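As a sketch of that kind of investigation, the following uses a hypothetical audit-log extract (timestamp, user, command) to trace every mapping action for a volume:

```shell
# Hypothetical audit-log extract; host and volume names are invented
# for illustration and do not come from a real system.
cat > audit.log <<'EOF'
2010-11-30 14:01:55 admin svctask mkvdisk -name volume01 -size 10 -unit gb
2010-11-30 14:03:10 admin svctask mkvdiskhostmap -host host01 volume01
2010-12-01 08:12:40 admin svctask mkvdiskhostmap -force -host host02 volume01
EOF

# List every mapping action for volume01 in time order; a -force
# mapping to a second host stands out immediately.
grep 'mkvdiskhostmap' audit.log | grep 'volume01'
```

The same filtering can be done interactively in the Audit Log grid; scripting it is useful when the log is large.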
It is also possible to change the view of the Audit Log grid, just as we manipulated the event log, by right-clicking the column headings, as demonstrated in Figure 13-49. The grid layout and sorting are completely under the user's control: you can show everything in the audit log, sort on different columns, or reset back to the default grid preferences.
Click on the Support Tab to begin the procedure of collecting support data as shown in
Figure 13-50.
Then click on the Download Support Package button as shown in Figure 13-51.
As shown in Figure 13-52 on page 535, this launches the menus for collecting different versions of svc_snap, depending on the event that is to be investigated. For example, if you notice that a node was restarted in the event log, we recommend that you capture the snap with the latest existing statesaves.
Launching the drop-down as shown in Figure 13-52, and assuming that the node restarted, we collect the default logs plus all the existing statesaves in order to capture the maximum data for support.
The next panel creates the snap on the IBM Storwize V7000, including the latest statesave
from each node canister. This process may take a few minutes as shown in Figure 13-53.
Figure 13-54 shows that the GUI gives you a choice to save the file on your local Windows
system.
As you will want to upload the resulting snap to the IBM Support portal once you open a call
with IBM support, navigate to:
http://www.ecurep.ibm.com/app/upload
At this point you are ready to call the IBM Support Line or use the IBM Support Portal to open
a call:
http://www-947.ibm.com/support/entry/portal/Open_service_request?brandind=Hardware
Note: You should never shut down your IBM Storwize V7000 by powering off the PSUs,
removing both PSUs, or removing both power cables from a running system.
2. From the Action button you can click on the Shutdown Cluster option as shown in
Figure 13-57 on page 537.
3. The Confirm IBM Storwize V7000 Shutdown window (Figure 13-58 on page 538) opens, with a message asking you to confirm that you want to shut down the cluster.
Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data
migration operations, and forced deletions before continuing. Click Yes to begin the shutdown
process.
Tip: When you shut down the IBM Storwize V7000, it does not restart automatically. You must start the IBM Storwize V7000 manually.
Shutting down
1. Shut down your servers and all applications.
2. Shut down your IBM Storwize V7000:
a. Shut down the cluster by using the GUI or the command-line interface.
b. Power off both power switches of the control enclosure.
c. Power off both power switches of all expansion enclosures.
3. Shut down your SAN switches.
Powering on
1. Power on your SAN switches and wait until the boot has completed.
2. Power on your storage systems and wait until the systems are up, then:
a. Power on both power switches of all expansion enclosures.
b. Power on both power switches of the control enclosure.
3. Power on your servers and start your applications.
Detailed CLI information is available in the command-line section of the IBM Storwize V7000 Information Center at:
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp?topic=/com.ibm.storwize.v7000.doc/svc_clicommandscontainer_229g0r.html
Implementing the IBM System Storage SAN Volume Controller V6.1, SG24-7933, also contains a lot of information about using the CLI, and the commands in that book also apply to the Storwize V7000.
Basic Setup
In the Storwize V7000 GUI, authentication is done with a user name and a password. The CLI uses a Secure Shell (SSH) connection from the host to the IBM Storwize V7000, so a private and public key pair must be used. The following steps are required to enable CLI access:
A public key and a private key are generated together as a pair.
The public key is uploaded to the IBM Storwize V7000 through the GUI.
A client SSH tool is configured to authenticate with the private key.
A secure connection can then be established between the client and the IBM Storwize V7000.
Secure Shell is the communication vehicle between the management workstation and the
SVC cluster.
The SSH client provides a secure environment from which to connect to a remote machine. It
uses the principles of public and private keys for authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administration and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
To use the CLI, an SSH client must be installed on that system, the SSH key pair must be
generated on the client system, and the client’s SSH public key must be stored on the SVC
cluster(s).
The recommended SSH client is PuTTY. There is also a PuTTY key generator, which can be used to generate the private and public key pair. Both are available as a free download at:
http://www.chiark.greenend.org.uk
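If you work from a Linux or UNIX workstation instead, OpenSSH can serve the same purpose as the PuTTY key generator. A minimal sketch (the key file name v7000_key is our choice, and the cluster IP is a placeholder):

```shell
# Generate an RSA key pair with no passphrase, comparable to the
# PuTTY Key Generator steps described for Windows.
ssh-keygen -t rsa -b 2048 -N "" -f ./v7000_key -q

# The public half (v7000_key.pub) is what you upload through the GUI;
# the private half (v7000_key) stays on the workstation.
ls -l v7000_key v7000_key.pub

# Then connect with the private key:
# ssh -i ./v7000_key admin@<cluster-ip>
```

As with PuTTY, record where the key files are saved, because the public key must be specified later when it is uploaded.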
Click Generate and move the cursor on the blank area in order to generate the keys as
shown in Figure A-2.
To generate keys: The blank area indicated by the message is the large blank rectangle
on the GUI inside the section of the GUI labeled Key. Continue to move the mouse pointer
over the blank area until the progress bar reaches the far right. This action generates
random characters to create a unique key pair.
After the keys are generated save them for later use. Click Save public key as shown in
Figure A-3.
You are prompted for a name (for example, pubkey) and a location for the public key (for
example, C:\Support Utils\PuTTY). Click Save.
Ensure that you record the name and location, because the name and location of this SSH
public key must be specified later.
Note: By default the PuTTY Key Generator saves the public key with no extension. We
recommend that you use the string “pub” in naming the public key, for example, “pubkey”,
to easily differentiate the SSH public key from the SSH private key.
You are prompted with a warning message as shown in Figure A-5. Click Yes to save the
private key without a passphrase.
When prompted, enter a name (for example, “icat”), select a secure location, and click Save.
Note: The PuTTY Key Generator saves the private key with the PPK extension.
Right click the user for which you want to upload the key and click Properties as shown in
Figure A-7.
To upload the public key click Browse, select your public key and click OK as shown in
Figure A-8 on page 547.
In the right pane under the “Specify the destination you want to connect to” section, select
SSH. Under the “Close window on exit” section, select Only on clean exit, which ensures that
if there are any connection errors, they will be displayed on the user’s window.
From the Category pane on the left side of the PuTTY Configuration window, click
Connection --> SSH to display the PuTTY SSH Configuration window, as shown in
Figure A-11.
In the right pane, in the “Preferred SSH protocol version” section, select 2.
From the Category pane on the left side of the PuTTY Configuration window, select
Connection --> SSH --> Auth. As shown in Figure A-12 on page 549, in the right pane, in the
“Private key file for authentication:” field under the Authentication Parameters section, either
browse to or type the fully qualified directory path and file name of the SSH client private key
file created earlier (for example, C:\Support Utils\PuTTY\icat.PPK).
From the Category pane on the left side of the PuTTY Configuration window, click Session to
return to the Session view as shown in Figure A-10 on page 548.
In the right pane, enter the hostname or cluster IP address of the IBM Storwize V7000 cluster
in the Host Name field, and enter a session name in the Saved Sessions field as shown in
Figure A-13.
Click Save to save the new session as shown in Figure A-14 on page 550.
Highlight the new session and click Open to connect to the IBM Storwize V7000. A PuTTY Security Alert appears; confirm it by clicking Yes, as shown in Figure A-15.
PuTTY now connects to the cluster and prompts you for a user name. Enter admin as the user name and press Enter, as shown in Example A-1.
You have now completed the tasks that are required to configure the CLI for IBM Storwize
V7000 administration.
Example Commands
A detailed description of all the available commands is beyond the intended scope of this book. This section lists sample commands that we have referenced in this book.
Commands that generate output start with svcinfo; commands that make system changes start with svctask. If you type svcinfo or svctask and press the Tab key twice, all the available subcommands are listed. Pressing the Tab key twice also auto-completes commands if the input is valid and unique to the system.
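The naming convention can be illustrated locally (the commands themselves run only on the cluster; this sketch merely classifies a few of the commands used in this appendix by their prefix):

```shell
# Classify sample commands by the svcinfo/svctask convention:
# svcinfo = reports information, svctask = changes the system.
out=$(for cmd in "svcinfo lsvdisk" "svcinfo lshost" \
                 "svctask mkvdiskhostmap" "svctask chvdisk"; do
  case $cmd in
    svcinfo*) echo "$cmd : reports information" ;;
    svctask*) echo "$cmd : changes the system" ;;
  esac
done)
printf '%s\n' "$out"
```

Knowing the prefix tells you at a glance whether a command in a script or audit log could have modified the configuration.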
Enter svcinfo lsvdisk as shown in Example A-2 to list all configured volumes on the system.
The example shows that three volumes are configured.
Enter svcinfo lshost to get a list of all configured hosts on the system as shown in
Example A-3.
To map the volume to the hosts use the svctask mkvdiskhostmap command as shown in
Example A-4.
To verify the host mapping use the svcinfo lsvdiskhostmap command as shown in
Example A-5.
In the CLI, more options are available than in the GUI. All advanced settings can be set, for example, I/O throttling. To enable I/O throttling, the properties of a volume can be changed using the svctask chvdisk command, as shown in Example A-6. To verify the changes, use the svcinfo lsvdisk command.
Note: The svcinfo lsvdisk command lists all available properties of a volume and its copies; however, to make the example easier to read, lines in the example output have been deleted.
If you do not specify the unit parameter, the throttling is based on I/Os instead of throughput, as shown in Example A-7.
To disable I/O throttling, set the I/O rate to 0, as shown in Example A-8.
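A minimal sketch of these throttling settings, built as command strings (assuming the svctask chvdisk syntax with the -rate and -unitmb parameters; the commands themselves must be issued over the SSH session to the cluster, and volume01 is our example volume name):

```shell
# Throttle by throughput: -unitmb makes the rate a MB/s limit.
enable_mb="svctask chvdisk -rate 40 -unitmb volume01"
# Throttle by I/Os: without -unitmb, the rate is an IOPS limit.
enable_io="svctask chvdisk -rate 2048 volume01"
# A rate of 0 disables throttling for the volume.
disable="svctask chvdisk -rate 0 volume01"
printf '%s\n' "$enable_mb" "$enable_io" "$disable"
```

Building the strings first, as a script might, makes the parameter layout easy to review before running anything against the cluster.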
Example A-9 shows the required steps to prepare a Reverse FlashCopy and shows the FlashCopy command using the “Reverse” option. As you can see at the end of Example A-9, FCMAP_rev_1 shows a restoring value of yes while the FlashCopy mapping is copying. After the copy has finished, the restoring value changes to no.
IBM_2076:ITSO-Storwize-V7000-1:admin>svcinfo lsfcmap
id name        source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  FCMAP_1     4               Volume_FC_S       5               Volume_FC_T_S1                        idle_or_copied 0        50        100            off         1             FCMAP_rev_1     no
1  FCMAP_rev_1 5               Volume_FC_T_S1    4               Volume_FC_S                           idle_or_copied 0        50        100            off         0             FCMAP_1         no
2  FCMAP_2     5               Volume_FC_T_S1    6               Volume_FC_T1                          idle_or_copied 0        50        100            off                                       no
SAN Boot
IBM Storwize V7000 supports SAN Boot for Windows, VMware, and many other operating systems. SAN Boot support can change from time to time, so we recommend regularly checking the IBM Storwize V7000 interoperability matrix:
http://www-03.ibm.com/systems/storage/disk/storwize_v7000/interop.html
The IBM Storwize V7000 Information Center also provides a lot of information about SAN Boot in combination with different operating systems:
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp
Additional information about SAN Boot is also covered in the IBM Multipath Subsystem
Device Driver User’s Guide available at:
http://www-01.ibm.com/support/docview.wss?rs=503&context=HW26L&uid=ssg1S7000303
Note: It might be required to load an additional HBA device driver during installation,
depending on your Windows version and the HBA type.
Note: It might be required to load an additional HBA device driver during installation,
depending on your ESX level and the HBA type.
Note: For SAN Boot Procedures for other operating systems check the IBM Storwize
V7000 Information Center available at:
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp
Perform the following steps to migrate your existing SAN boot images:
1. If the existing SAN boot images are controlled by an IBM storage controller that uses SDD
as the multipathing driver, you must use SDD v1.6 or higher. Run the SDD command
datapath set bootdiskmigrate 2076 to prepare the host for image migration. See the
Multipath Subsystem Device Driver (SDD) documentation for more information.
2. Shut down the host.
3. Perform the following configuration changes on the storage controller:
a. Write down the SCSI LUN ID that each volume is using (for example, boot LUN SCSI ID 0, swap LUN SCSI ID 1, database LUN SCSI ID 2, and so on).
b. Remove all the image-to-host mappings from the storage controller.
c. Map the existing SAN boot image and any other disks to the Storwize V7000 system.
4. Change the zoning so that the host is able to see the IBM Storwize V7000 I/O group for
the target image-mode volume.
5. Perform the following configuration changes on the Storwize V7000 system:
a. Create an image-mode volume for the managed disk (MDisk) that contains the SAN
boot image. Use the MDisk unique identifier to specify the correct MDisk.
b. Create a host object and assign the host HBA ports.
c. Map the image mode volume to the host using the same SCSI ID as before. For
example, you might map the boot disk to the host with SCSI LUN ID 0.
d. Map the swap disk to the host, if required. For example, you might map the swap disk
to the host with SCSI LUN ID 1.
6. Change the boot address of the host by performing the following steps:
a. Restart the host and open the HBA bios utility of the host during the booting process.
b. Set the BIOS settings on the host to find the boot image at the worldwide port name
(WWPN) of the node that is zoned to the HBA port.
7. If SDD v1.6 or higher is installed and you ran the bootdiskmigrate command in step 1, reboot your host, update SDDDSM to the latest level, and go to step 14. If SDD v1.6 is not installed, go to the next step.
8. Modify the SAN Zoning so that the host only sees one path to the IBM Storwize V7000.
9. Boot the host in single-path mode.
10.Uninstall any multipathing driver that is not supported for Storwize V7000 system hosts
that run the applicable Windows Server operating system.
11.Install SDDDSM.
12.Restart the host in single-path mode and ensure that SDDDSM was properly installed.
13.Modify the SAN Zoning to enable multipathing.
14.Rescan drives on your host and check that all paths are available.
15.Reboot your host and enter the HBA bios.
16.Configure the HBA settings on the host. Ensure that all HBA ports are boot-enabled and
can see both nodes in the I/O group that contains the SAN boot image. Configure the HBA
ports for redundant paths.
17.Exit the bios utility and finish booting the host.
18.Map any additional volumes to the host as required.
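The SCSI ID bookkeeping in steps 3a and 5c through 5d can be sketched as follows. The host and volume names are hypothetical; the generated svctask mkvdiskhostmap commands (with the -scsi parameter) would be run on the Storwize V7000, not locally:

```shell
# Re-map each disk with the SCSI ID recorded in step 3a, so the host
# sees its boot, swap, and database LUNs at their original IDs.
cmds=$(while read vol scsi_id; do
  echo "svctask mkvdiskhostmap -host host01 -scsi ${scsi_id} ${vol}"
done <<'EOF'
boot_vol 0
swap_vol 1
db_vol 2
EOF
)
printf '%s\n' "$cmds"
```

Keeping the SCSI IDs identical to the pre-migration values is what lets the host boot from the image-mode volume without BIOS surprises.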
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources:
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide,
GA22-1052
IBM System Storage Master Console: Installation and User’s Guide, GC30-4090
Subsystem Device Driver User’s Guide for the IBM TotalStorage Enterprise Storage
Server and the IBM System Storage SAN Volume Controller, SC26-7540
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide,
SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Service Guide,
SC26-7542
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Command-Line
Interface User’s Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent
Developers Reference, SC26-7545
IBM TotalStorage Multipath Subsystem Device Driver User’s Guide, SC30-4096
IBM System Storage Open Software Family SAN Volume Controller: Host Attachment
Guide, SC26-7563
IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation
Guide, GC52-1356
IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation
Guide, GC27-2219
IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation
Guide, GC27-2220
IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware
Installation Guide, GC27-2221
IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide,
SG26-7905-05
Command Line Interface User’s Guide, SG26-7903-05
Online resources
These Web sites are also relevant as further information sources:
IBM TotalStorage home page:
http://www.storage.ibm.com
SAN Volume Controller supported platform:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
Download site for Windows Secure Shell (SSH) freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty
IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
Cygwin Linux-like environment for Windows:
http://www.cygwin.com
IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
Sysinternals home page:
http://www.sysinternals.com
Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
IBM TotalStorage Virtualization home page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
SVC support page:
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
SVC online documentation:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
IBM Redbooks publications about SVC:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
Index

A
administrative rights
  DB2 user 468
alias 24
aliases 24
asynchronous remote 425
autorun enabled 464
auxiliary VDisk 425

B
background copy rate 382

C
cache 425
CD layout 461
CDB 23
Colliding writes 429
Command Descriptor Block 23
consistency group 382, 384–385
consistent data set 380
copy rate 382

D
database
  attention required 471
DB2 Administration Server 468
DB2 install 464
DB2 installation
  verify installation 474
DB2 tools catalog 471
DB2 user account 468
db2sampl command 474
dependent writes 385
DNS suffix 463
domain name system 463

E
entire VDisk 380

F
failover 426
failover situation 424
Fibre Channel Port Login 25
FlashCopy mapping states
  Idling/Copied 384
FlashCopy mappings 386
FlashCopy properties 388
Full Feature Phase 25

G
granularity 380
GUID 463

I
initiator name 24
integrity 384–385
intercluster link bandwidth 433
IQN 24
IQNs 24
iSCSI Address 24
iSCSI Name 24
iSCSI node 24
iSCSI Qualified Name 24
ISL hop count 429

L
Login Phase 25

M
mapping 381
mirrored 425

N
NETBIOS 463

O
ordering 385
overwritten 381

P
PLOGI 25
primary 426

R
Redbooks Web site
  Contact us xvii
restore points 387
Reverse FlashCopy 26, 387
RFC3720 24

S
secondary 426
source 382
states 382

T
target name 24
TotalStorage Productivity Center
  component install 475
  typical install 467
SG24-7938-00 ISBN