
VMAX3 EMC TIMEFINDER SNAPVX

AND MICROSOFT SQL SERVER

EMC VMAX Engineering White Paper

ABSTRACT
With the introduction of the VMAX3 disk arrays and new local and remote replication
capabilities, administrators can protect their applications effectively and efficiently
with unprecedented ease of use and management. This white paper discusses EMC
VMAX3 TimeFinder SnapVX functionality in the context of planning, deploying, and
protecting Microsoft SQL Server.

July, 2015

EMC WHITE PAPER


To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local
representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.

Copyright 2015 EMC Corporation. All Rights Reserved.


EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
Part Number H14273

TABLE OF CONTENTS

EXECUTIVE SUMMARY .............................................................................. 5


AUDIENCE ......................................................................................................... 5

VMAX3 PRODUCT OVERVIEW .................................................................... 6


VMAX3 Product Overview .................................................................................... 6
VMAX3 SnapVX Local Replication Overview ........................................................... 7
VMAX3 SRDF Remote Replication Overview ........................................................... 8

SQL SERVER AND SNAPVX CONSIDERATIONS ........................................ 10


Number of snapshots, frequency and retention .................................................... 10
Snapshot link copy vs. no copy option ........................................................... 10
Microsoft SQL Server database restart solutions ................................................... 10
Remote replication considerations ...................................................................... 11

SQL SERVER PROTECTION DETAILS ........................................................ 11


Creating multiple copies of SQL Server databases ................................................ 11
Mounting a SQL Server SnapVX snapshot to a mount host .................................... 12
Point-in-time SQL Server restore with SnapVX ..................................................... 13
Direct restore of SQL Server SnapVX snapshots ................................................... 14
Indirect restore of SQL Server SnapVX snapshots involving linked targets ............... 15
Leveraging VMAX3 remote snaps for disaster recovery ......................................... 20

SQL SERVER OPERATIONAL DETAILS ..................................................... 21


Microsoft SQL Server AlwaysOn with SnapVX ....................................................... 21
Storage resource pool usage with multiple SnapVX copies ..................................... 21
Reclaiming space of linked target volumes .......................................................... 22

SNAPVX PERFORMANCE USE CASES WITH MICROSOFT SQL SERVER ...... 25


Test bed configuration ...................................................................................... 25
Databases configuration details.......................................................................... 25
Test overview .................................................................................................. 26
Use case 1A - Impact of taking 256 SnapVX snapshots on production workload database on Gold SLO ............... 27
Use case 1B - Impact of taking 256 SnapVX snapshots on production workload database with a varying SLO ........ 28
Use case 2A - Impact of No-Copy vs Copy mode linked target snapshots with workload on Production only ......... 30
Use case 2B - Impact of No-Copy vs Copy mode linked target snapshots with workload on both Production and Mount hosts ... 31

CONCLUSION .......................................................................................... 33
APPENDIXES........................................................................................... 34
Appendix I - Configuring SQL Server database storage groups for replication .......... 34
Appendix II - SRDF modes and topologies .......................................... 36
Appendix III - Solutions Enabler CLI commands for TimeFinder SnapVX management ... 38
Appendix IV - Solutions Enabler CLI commands for SRDF management ................. 40
Appendix V - Symntctl VMAX integration utility for Windows disk management ....... 41
Appendix VI - Scripting attach or detach for a SQL Server database using Windows PowerShell ... 43
Appendix VII - Example outputs ................................................... 44

EXECUTIVE SUMMARY
Many applications are required to be fully operational 24x7x365 and the data for these applications continues to grow. At the same
time, their RPO and RTO requirements are becoming more stringent. As a result, there is a large gap between the requirements for
fast and efficient protection and replication, and the ability to meet these requirements without overhead or operational disruption.
These requirements include the ability to create local and remote database replicas in seconds without disruption of Production host
CPU or I/O activity for purposes such as patch testing, running reports, creating development sandbox environments, publishing data
to analytic systems, offloading backups from Production, Disaster Recovery (DR) strategy, and more.
Traditional solutions rely on host-based replication. The disadvantages of these solutions are the additional host I/O and CPU cycles
consumed by the need to create such replicas, the complexity of monitoring and maintaining them across multiple servers, and the
elongated time and complexity associated with their recovery.
TimeFinder local replication values to Microsoft SQL Server include 1:
The ability to create instant and consistent database replicas for repurposing, across a single database or multiple databases,
including external data or message queues, and across multiple VMAX3 storage systems.
TimeFinder replica creation or restore time takes seconds, regardless of the database size. The target devices (in case of a
replica) or source devices (in case of a restore) are available immediately with their data, even as incremental data changes are
copied in the background.
VMAX3 TimeFinder SnapVX snapshots are consistent by default. Each source device can have up to 256 space-efficient
snapshots that can be created or restored at any time. Snapshots can be further linked to up to 1024 target devices,
maintaining incremental refresh relationships. The linked targets can remain space-efficient, or a background copy of all the data
can take place, making them full copies. In this way, SnapVX allows an unlimited number of cascaded snapshots. 1
SRDF remote replication values to SQL Server include:

Synchronous and Asynchronous consistent replication of a single database or multiple databases, including external data or
message queues, and across multiple VMAX3 storage array systems if necessary. The point of consistency is created before a
disaster strikes, rather than taking hours to achieve afterwards when using replications that are not consistent across
applications and databases.
Disaster Recovery (DR) protection for two or three sites, including cascaded or triangular relationships, where SRDF always
maintains incremental updates between source and target devices.
SRDF and TimeFinder are integrated. While SRDF replicates the data remotely, TimeFinder can be used on the remote site to
create writable snapshots or backup images of the database. This allows DBAs to perform remote backup operations or create
remote database copies.
SRDF and TimeFinder can work in parallel to restore remote backups. While a remote TimeFinder backup is being restored to the
remote SRDF devices, in parallel SRDF copies the restored data to the local site. This parallel restore capability provides DBAs
with faster accessibility to remote backups and shortens recovery times.

AUDIENCE
This white paper is intended for database administrators, storage administrators, and system architects who are responsible for
implementing, managing, and maintaining SQL Server databases and EMC VMAX3 storage systems. It is assumed that readers have
some familiarity with Microsoft SQL Server and the VMAX3 family of storage arrays, and are interested in achieving higher database
availability, performance, and ease of storage management.

1
EMC AppSync has been updated to include support for SnapVX and allow for SQL Server integration through the use of VDI at the application and
operating system level. Please check the latest release notes and support matrix for AppSync to ensure support for SnapVX. One of the benefits of
TimeFinder SnapVX in the context of EMC AppSync is the ability for application administrators to manage application-consistent copies of
SQL Server databases.

VMAX3 PRODUCT OVERVIEW
TERMINOLOGY
The following table explains important terms used in this paper.

Restartable vs. Recoverable database: SQL Server distinguishes between a restartable and a recoverable database. A restartable state requires all log data to be consistent (see Storage consistent replication). SQL Server can simply be started and will perform automatic crash/instance recovery without user intervention. A recoverable state requires database media recovery, rolling the transaction log forward to achieve data consistency before the database can be opened.

RTO and RPO: Recovery Time Objective (RTO) refers to the time it takes to recover a database after a failure. Recovery Point Objective (RPO) refers to the amount of data loss after the recovery completes, where RPO=0 means no data loss of committed transactions.

Storage consistent replication: Storage consistent replication refers to storage replication operations (local or remote) that maintain write-order fidelity at the target devices, even while the application is running. To the SQL Server database the snapshot data looks as it does after a host reboot, allowing the database to perform crash/instance recovery when starting.

VMAX3 HYPERMAX OS: HYPERMAX OS is the industry's first open converged storage hypervisor and operating system. It enables VMAX3 to embed storage infrastructure services like cloud access, data mobility, and data protection directly on the array. This delivers new levels of data center efficiency and consolidation by reducing footprint and energy requirements. In addition, HYPERMAX OS delivers the ability to perform real-time and non-disruptive data services.

VMAX3 Storage Group: A collection of host-addressable VMAX3 devices. A Storage Group can be used to (a) present devices to a host (LUN masking), (b) specify FAST Service Level Objectives (SLOs) for a group of devices, and (c) manage grouping of devices for replication software such as SnapVX and SRDF. Storage Groups can be cascaded. For example, child storage groups can be used for setting FAST Service Level Objectives (SLOs) and the parent used for LUN masking of all the database devices to the host.

VMAX3 TimeFinder Snapshot vs. Clone: Previous generations of TimeFinder referred to a snapshot as a space-saving copy of the source device, where capacity was consumed only for data changed after the snapshot time. Clones, on the other hand, referred to a full copy of the source device. With VMAX3, TimeFinder SnapVX snapshots are always space-efficient. When they are linked to host-addressable target devices, the user can choose whether to keep the target devices space-efficient or to perform a full copy.

VMAX3 TimeFinder SnapVX: TimeFinder SnapVX is the latest development in TimeFinder local replication software, offering higher scale and a wider feature set, yet maintaining the ability to emulate legacy behavior.

VMAX3 PRODUCT OVERVIEW


The EMC VMAX3 family of storage arrays is built on the strategy of simple, intelligent, modular storage. The VMAX3 incorporates a
Dynamic Virtual Matrix interface that connects and shares resources across all VMAX3 engines, allowing the storage array to
seamlessly grow from an entry-level configuration into the world's largest storage array. It provides the highest levels of
performance and availability, featuring new hardware and software capabilities.

The newest additions to the VMAX3 family, the VMAX 100K, 200K, and 400K, deliver the latest in Tier-1 scale-out multi-controller
architecture with consolidation and efficiency for the enterprise. They offer dramatic increases in floor tile density, high-capacity flash
and hard disk drives in dense enclosures for both 2.5" and 3.5" drives, and support both block and file (eNAS).
The VMAX3 family of storage arrays comes pre-configured from the factory to simplify deployment at customer sites and minimize
time to first I/O. Each array uses Virtual Provisioning to allow the user easy and quick storage provisioning. While VMAX3 can ship as
an all-flash array with a combination of EFD (Enterprise Flash Drives) and a large persistent cache that accelerates both writes and
reads even further, it can also ship as hybrid, multi-tier storage that excels in providing FAST 2 (Fully Automated Storage Tiering)
enabled performance management based on Service Level Objectives (SLOs). The new VMAX3 hardware architecture comes with
more CPU power, a larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an
extremely fast internal memory-to-memory and data-copy fabric.
Figure 1 shows possible VMAX3 components. Refer to EMC documentation and release notes to find the most up-to-date supported
components.

1 to 8 redundant VMAX3 Engines
Up to 4 PB usable capacity
Up to 256 FC host ports
Up to 16 TB global memory (mirrored)
Up to 384 cores, 2.7 GHz Intel Xeon E5-2697-v2
Up to 5,760 drives
SSD Flash drives: 200/400/800/1,600 GB, 2.5"/3.5"
300 GB to 1.2 TB 10K RPM SAS drives, 2.5"/3.5"
300 GB 15K RPM SAS drives, 2.5"/3.5"
2 TB/4 TB 7.2K RPM SAS drives, 3.5"

Figure 1. VMAX3 storage array 3


To learn more about VMAX3 and FAST best practices with SQL Server databases refer to the white paper: Deployment Best Practice
for SQL Server with VMAX3 Service Level Object Management.

VMAX3 SNAPVX LOCAL REPLICATION OVERVIEW


EMC TimeFinder SnapVX software delivers instant and storage-consistent point-in-time replicas of host devices that can be used for
purposes such as the creation of gold copies, patch testing, reporting, test/dev environments, backup and recovery, data warehouse
refreshes, or any other process that requires parallel access to or preservation of the primary storage devices.

The replicated devices can contain the database data, SQL Server home directories, data that is external to the database (for
example, image files), message queues, and so on.
VMAX3 TimeFinder SnapVX combines the best aspects of previous TimeFinder offerings and adds new functionality, scalability, and
ease-of-use features.
Some of the main SnapVX capabilities related to native snapshots (emulation mode for legacy behavior is not covered) include:
With SnapVX, snapshots are natively targetless. They relate only to the source devices and cannot be otherwise accessed directly.
Instead, snapshots can be restored back to the source devices or linked to another set of target devices which can be made
host-accessible.
Each source device can have up to 256 snapshots and can be linked to up to 1024 targets.
Snapshot operations are performed on a group of devices. This group is defined by using a text file specifying the list of devices,
a device group (DG), a composite group (CG), or a storage group (SG). The recommended way is to use a storage group.
Snapshots are taken using the establish command. When a snapshot is established, a snapshot name is provided with an
optional expiration date. The snapshot time is saved with the snapshot and can be listed. Snapshots also get a generation
number (starting with 0). The snapshot generation is incremented with each new snapshot even if the snapshot name remains
the same.

2
Fully Automated Storage Tiering (FAST) allows VMAX3 storage to automatically and dynamically manage performance service level goals across the
available storage resources to meet the application I/O demand, even as new data is added, and access patterns continue to change over time.
3
Additional drive types and capacities may be available. Contact your EMC representative for more details.

SnapVX provides the ability to create either space-efficient or full-copy replicas when linking snapshots to target devices.
Use the -copy option to copy the full snapshot point-in-time data to the target devices during link. This will make the target
devices a stand-alone copy. If the -copy option is not used, the target devices provide the exact snapshot point-in-time data
only until the link relationship is terminated, saving capacity and resources by providing space-efficient replicas.
SnapVX snapshots themselves are always space-efficient as they are simply a set of pointers pointing to the original data when
it is unmodified, or to the original version of the data when it is modified. Multiple snapshots of the same data utilize both
storage and memory savings by pointing to the same location and consuming very little metadata.
SnapVX snapshots are always consistent. That means that snapshot creation always maintains write-order fidelity. This allows
easy creation of restartable database copies. Snapshot operations such as establish and restore are also consistent; the
operation either succeeds or fails for all the devices as a unit.
Linked-target devices cannot restore any changes directly to the source devices. Instead, a new snapshot can be taken from
the target devices and linked back to the original source devices. In this way, SnapVX allows an unlimited number of cascaded
snapshots.
FAST Service Levels apply to either the source devices or to snapshot linked targets, but not to the snapshots themselves.
SnapVX snapshot data resides in the same Storage Resource Pool (SRP) as the source devices, and acquires an Optimized FAST
Service Level Objective (SLO) by default.
See Appendix III for a list of basic TimeFinder SnapVX operations.
For more information on SnapVX refer to the EMC VMAX3 Local Replication Technical Note.
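For readers less familiar with the CLI, the following is a minimal sketch of the basic lifecycle described above, assuming Solutions Enabler is installed and using illustrative names (array 536, storage group SQL_SG, snapshot name SQL_Snap). Appendix III lists the full set of commands.

# Create a consistent snapshot of all devices in the storage group; it expires after 2 days
symsnapvx -sid 536 -sg SQL_SG establish -name SQL_Snap -ttl -delta 2 -nop

# Re-running establish with the same name creates a new generation (0, 1, 2, ...)
symsnapvx -sid 536 -sg SQL_SG establish -name SQL_Snap -ttl -delta 2 -nop

# List the snapshots of the storage group with their generations, timestamps, and expiration dates
symsnapvx -sid 536 -sg SQL_SG list -detail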

VMAX3 SRDF REMOTE REPLICATION OVERVIEW


The EMC Symmetrix Remote Data Facility (SRDF) family of software is the gold standard for remote replication in mission-critical
environments. Built for the industry leading high-end VMAX storage array, the SRDF family is trusted for disaster recovery and
business continuity. SRDF offers a variety of replication modes that can be combined in different topologies, including two, three, and
even four sites. SRDF and TimeFinder are closely integrated to offer a combined solution for local and remote replication.
Some main SRDF capabilities include:
SRDF modes of operation:
o SRDF Synchronous (SRDF/S) mode, which is used to create a solution with no data loss of committed transactions. The
target devices are an exact copy of the source devices (Production).
o SRDF Asynchronous (SRDF/A) mode which is used to create consistent replicas at unlimited distances without write
response time penalty to the application. The target devices are typically seconds to minutes behind the source devices
(Production), though consistent (restartable).
o SRDF Adaptive Copy (SRDF/ACP) mode which allows bulk transfers of data between source and target devices
without write-order fidelity and without write performance impact to source devices. SRDF/ACP is typically used for data
migrations as a Point-in-Time data transfer. It is also used to catch up after a long period during which replication was
suspended and many changes are owed to the remote site. SRDF/ACP can be set to continuously send changes in bulk
until the delta between source and target is reduced to a specified skew. At this time SRDF/S or SRDF/A mode can
resume.
SRDF groups:
o An SRDF group is a collection of matching devices in two VMAX3 storage arrays together with the SRDF ports that are
used to replicate these devices between the arrays. HYPERMAX OS allows up to 250 SRDF groups per SRDF director.
The source devices in the SRDF group are called R1 devices, and the target devices are called R2 devices.
o SRDF operations are performed on a group of devices contained in an SRDF group. This group is defined by using a text
file specifying the list of devices, a device group (DG), a composite/consistency group (CG), or a storage group (SG).
The recommended way is to use a storage group.
SRDF consistency:
o An SRDF consistency group is an SRDF group to which consistency was enabled.
o Consistency can be enabled for either Synchronous or Asynchronous replication mode.
o An SRDF consistency group always maintains write-order fidelity (also called dependent-write consistency) to make sure
that the target devices always provide a restartable replica of the source application.

Note: Even when consistency is enabled, the remote devices may not yet be consistent while the SRDF state is
sync-in-progress. This happens when SRDF initial synchronization is taking place, before it enters a consistent replication state.

o SRDF consistency also implies that if a single device in a consistency group cannot replicate, then the whole group will
stop replicating to preserve the target devices' consistency.
o Multiple SRDF groups set in SRDF/A mode can be combined within a single array, or across arrays. Such grouping of
consistency groups is called multi-session consistency (MSC). MSC maintains dependent-write consistent
replications across all the participating SRDF groups.
SRDF sessions:
o An SRDF session is created when replication starts between R1 and R2 devices in an SRDF group.
o An SRDF session can establish replication between R1 and R2 devices. R1 and R2 devices will need full copy for the
first establish only. Any subsequent establish (for example, after SRDF split or suspend) will be incremental, only
passing changed data.

o An SRDF session can restore the content of R2 devices back to R1. Restores are incremental, moving only changed
data across the links. TimeFinder and SRDF can restore in parallel (for example, bring back a remote backup image).
o During replication, the devices to which data is replicated are write-disabled (read-only).

o An SRDF session can be suspended, temporarily halting replication until a resume command is issued.
o An SRDF session can be split, which not only suspends the replication but also makes the R2 devices read-writable.
o An SRDF checkpoint command does not return the prompt until the content of the R1 devices has reached the R2
devices. This option helps in creating remote database backups when SRDF/A is used.
o An SRDF swap changes R1 and R2 personality and the replication direction for the session.
o An SRDF failover makes the R2 devices writable. R1 devices, if still accessible, will change to Write_Disabled (read-
only). The SRDF session is suspended and application operations proceed on the R2 devices.
o An SRDF failback copies changed data from R2 devices back to R1 and makes the R1 devices writable. R2 devices are
made Write_Disabled (read-only).
o SRDF replication sessions can go in either direction (bi-directional) between the two arrays, where different SRDF
groups can replicate in different directions.
For more information, see SRDF Modes and Topologies and SRDF CLI commands.
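As a hedged illustration of the session operations listed above, the sketch below assumes an existing SRDF group 20 between arrays 535 (R1) and 536 (R2) and a storage group named PROD_SQL_SG; these names are assumptions, and exact syntax is covered in Appendix IV.

# Start (or incrementally resume) replication from the R1 to the R2 devices
symrdf -sid 535 -sg PROD_SQL_SG -rdfg 20 establish -nop

# Confirm the session has reached the Synchronized state (SRDF/S)
symrdf -sid 535 -sg PROD_SQL_SG -rdfg 20 verify -synchronized

# Temporarily halt replication, then resume it incrementally later
symrdf -sid 535 -sg PROD_SQL_SG -rdfg 20 suspend -nop
symrdf -sid 535 -sg PROD_SQL_SG -rdfg 20 resume -nop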

SQL SERVER AND SNAPVX CONSIDERATIONS
NUMBER OF SNAPSHOTS, FREQUENCY AND RETENTION
VMAX3 TimeFinder SnapVX allows up to 256 snapshots per source device with minimal cache and capacity impact. SnapVX minimizes
the impact of Production host writes by using intelligent Redirect-on-Write and Asynchronous Copy-on-First-Write. Both methods
allow Production host writes to complete without delay. Data is copied in the background while Production data is modified, and the
snapshot data preserves its point-in-time consistency.
If snapshots are used as part of a disaster protection strategy then the frequency of creating snapshots can be determined based on
the RTO and RPO needs.
For a restart solution where no roll-forward is planned, snapshots taken at very short intervals (seconds or minutes) ensure
that RPO is limited to that interval. For example, if a snapshot is taken every 30 seconds and the database is restored
without recovery, there will be no more than 30 seconds of data loss.
For a recovery solution, frequent snapshots ensure that RTO is short as less data will need recovery during roll-forward of logs
to the current time. For example, if snapshots are taken every 30 seconds, rolling the data forward from the last snapshot will
be much faster than rolling forward from nightly backup or hourly snapshots.
Because snapshots consume storage capacity based on the database change rate, old snapshots that are no longer needed should be
terminated to release their consumed storage capacity.
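As an illustration only, the following Windows PowerShell sketch takes a snapshot of a hypothetical PROD_SQL_SG storage group every 30 seconds for one hour, relying on the TTL to expire old generations automatically; adjust the interval and TTL to the RPO/RTO and retention requirements discussed above.

# Take a snapshot every 30 seconds for one hour (120 iterations)
for ($i = 1; $i -le 120; $i++) {
    # Each establish with the same name creates a new generation of PROD_SQL_CDP
    symsnapvx -sid 536 -sg PROD_SQL_SG establish -name PROD_SQL_CDP -ttl -delta 2 -nop
    Start-Sleep -Seconds 30
}
# Generations that are not linked are terminated automatically when their 2-day TTL expires,
# releasing their non-shared capacity back to the SRP.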

SNAPSHOT LINK COPY VS. NO COPY OPTION


SnapVX snapshots cannot be directly accessed by a host. They can either be restored to the source devices or linked to up to 1024
sets of target devices. When linking a snapshot to target devices, SnapVX allows using the copy or no-copy option, where no-copy
is the default. Targets created using either option can be presented to the mount host, and all the use cases
described later can be executed on them. A no-copy link can be changed to copy on demand to create a full-copy linked target.
No-copy option
No-copy linked targets remain space efficient by sharing pointers with Production and the snapshot. Only changes to either the linked
targets or Production devices consume additional storage capacity to preserve the original data. However, reads to the linked targets
may affect Production performance as they share their storage via pointers to unmodified data. Another by-product of no-copy linked
targets is that they do not retain their data after they are unlinked. When the snapshot is unlinked, the target devices no longer
provide a coherent copy of the snapshot point-in-time data as before, though they can be relinked later.
No-copy linked targets are useful for storage capacity efficiency due to shared pointers. They can be used for short-term and
lightweight access to avoid affecting Production's performance. When a longer retention period or a heavier workload on the linked
targets is anticipated, it may be better to perform a link-copy so that they use independent pointers to storage. It should be noted that
during the background copy the storage back-end utilization will increase, and the operator may want to time such copy operations to
periods of low system utilization to avoid any application performance overhead.
Copy option
Alternatively, the linked-targets can be made a stand-alone copy of the source snapshot point-in-time data by using the copy option.
When the background copy is complete, the linked targets will have their own copy of the point-in-time data of the snapshot, and will
not be sharing pointers with Production. If at that point, the snapshot is unlinked, the target devices will maintain their own coherent
data. If they are later relinked they will be incrementally refreshed from the snapshot (usually after the snapshot is refreshed).
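The two link modes map to the following illustrative commands (the storage group and snapshot names are hypothetical and follow the examples used later in this paper):

# Default no-copy link: a space-efficient target that shares pointers with the snapshot
symsnapvx -sid 536 -sg PROD_SQL_SG -lnsg MOUNT_SQL_SG -snapshot_name PROD_SQL_SnapVX link -nop

# Alternatively, a full-copy link: a background copy makes the target a stand-alone replica
symsnapvx -sid 536 -sg PROD_SQL_SG -lnsg MOUNT_SQL_SG -snapshot_name PROD_SQL_SnapVX link -copy -nop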

MICROSOFT SQL SERVER DATABASE RESTART SOLUTIONS


TimeFinder SnapVX uses Enginuity Consistency Assist (ECA) to create write-order-consistent copies of the production database.
Copies created by TimeFinder using ECA are always write-order consistent, and the VMAX3 array acknowledges writes to the SQL Server
database once data is written to cache. Thus, all the metadata written to disk is also preserved on TimeFinder-based storage
snapshots. This makes it possible to capture the correct state of the SQL Server database data files and logs at the time of the snapshot. Such
snapshots create a DBMS-restartable database replica which can simply be opened, and it will perform crash or instance recovery
just as if the server had rebooted or the DBA had performed a shutdown abort. To achieve a restartable solution, all data and log files have to
participate in the consistent snapshot.

During the restart of SQL Server database from such copies, the following occurs:
1. All transactions that were recorded as committed and written to the transaction log, but which may not have had corresponding
data pages written to the data files, are rolled forward. This is the redo phase.
2. When the redo phase is complete, the SQL Server enters the undo phase where it looks for database changes that were recorded
(for example, a dirty page flushed by a lazy write) but which were never actually committed by a transaction. These changes are
rolled back or undone. The state attained is often referred to as a transactionally consistent point in time. It is essentially the
same process that the SQL Server would undergo should the server have suffered an unanticipated interruption such as a power
failure. Roll-forward recovery using incremental transaction log backups to a point in time after the database copy was created is
not supported on a Microsoft SQL Server restartable database copy. Hence, VMAX consistent split creates crash-consistent and
write-order-consistent, point-in-time copies of the database.
EMC AppSync can create and manage application-consistent copies of Microsoft SQL Server databases, including support for
advanced SQL features, such as AlwaysOn Availability Groups, protection for standalone and clustered production SQL Server
instances, and support for databases on physical hosts, RDMs, and virtual disks on virtual hosts. It uses Microsoft SQL Server's VDI
snapshot feature to create Full and Copy SQL Server backup types. The Full backup type protects the database and the active part of the
transaction log. This copy type is typically used when the copy will be considered a backup of the database, or when the copy will be
mounted in order to use a third-party product to create a backup of the database. This type of copy allows you to restore transaction
logs to bring the database forward to a point in time that is newer than the copy, assuming you have backed up those transaction
logs. The Copy backup type protects the database and the active part of the transaction log without affecting the sequence of the
backups. This provides SQL Server DBAs with a way to create a copy without interfering with third-party backup applications that
may be creating full and/or differential backups of the SQL Server databases.

REMOTE REPLICATION CONSIDERATIONS


For SRDF/A, it is recommended to always use Consistency Enabled to ensure that if a single device cannot replicate, the entire SRDF
group will stop replicating, thus maintaining a consistent database replica on the target devices.
SRDF on its own is a restart solution. Transaction logs could be included in SRDF replication operations to allow offload of backup
operations to the remote site as a stand-alone backup image of the database.
It is always recommended to have a database replica available at the SRDF remote site as a gold copy protection from rolling
disasters. The term rolling disaster describes the situation where a first interruption to normal replication activities is followed by a
secondary database failure on the source, leaving the database without an immediately available valid replica. For example, if SRDF
replication was interrupted for any reason (planned or unplanned) and changes were accumulated on the source, once the
synchronization resumes and until the target is synchronized (SRDF/S) or consistent (SRDF/A), the target is not a valid database
image. For that reason it is best practice, before such resynchronization, to take a TimeFinder gold copy replica at the target site,
which preserves the last valid image of the database as a safety measure against rolling disasters.
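The gold copy practice can be sketched as follows, using hypothetical names (R2 array 536 with storage group PROD_R2_SG, R1 array 535 with storage group PROD_R1_SG, SRDF group 20):

# Preserve a gold copy of the last valid image on the R2 side before resynchronization
symsnapvx -sid 536 -sg PROD_R2_SG establish -name R2_GoldCopy -ttl -delta 2 -nop

# Only then resume replication; the R2 image is not valid again until the session
# reaches the Synchronized (SRDF/S) or Consistent (SRDF/A) state
symrdf -sid 535 -sg PROD_R1_SG -rdfg 20 resume -nop
symrdf -sid 535 -sg PROD_R1_SG -rdfg 20 verify -consistent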

SQL SERVER PROTECTION DETAILS


CREATING MULTIPLE COPIES OF SQL SERVER DATABASES
TimeFinder SnapVX enables rapid deployment of a test or development database environment with multiple copies of the Production
SQL Server database. The Production SQL Server database workload and throughput are maintained by offloading and repurposing
such copies to a separate mount SQL Server host. A linked target on a mount host is also a useful option for surgically retrieving files
that are required on the Production SQL Server database, or for quick integrity checking when corruption is suspected. The mount host must
have SQL Server installed if you want to attach databases from the mounted linked-target volumes. Linked target volumes can be mounted
to a standalone server, or to cluster nodes of an alternate cluster or of the production cluster as non-clustered resources.
When mounting the linked target volumes back to production, the mount points for these target volumes need to be different
from the original mount points for the SQL database attach to be successful. Multiple SQL Server databases can exist on the same source
volume or across multiple source volumes; however, it is best practice not to mix databases from more than one SQL Server instance
on a source volume. These linked target storage groups 4 on the SQL Server mount host can be written to, as they do not affect the
point-in-time of the snapshot. If the data on the linked target needs to be reset to the original point-in-time snapshot, it can be
relinked to that snapshot. Solutions Enabler CLI commands for creating a SnapVX snapshot (Figure 2),
linking it (Figure 3), relinking to it, and restoring from it (Figure 4) are provided in Appendix III.

4
The target storage groups contain the linked-target devices of Production's snapshots. They should be added to a masking view to make the target
devices accessible to the mount host.
Figure 2 shows SnapVX sessions being created for a SQL Server Production OLTP database at the parent storage group level. It
shows various snapshots being created at certain intervals for protection and future use. These snapshots include both data and log,
and by default inherit the SLO set on the production storage group.

Figure 2. SnapVX establish operation


Figure 3 illustrates one of the SnapVX sessions being linked, to create a target replica SQL storage group on the mount host. Note
that the mount host storage group can be relinked to the same SnapVX session or a different session, based on need.

Figure 3. SnapVX link, relink operations

MOUNTING A SQL SERVER SNAPVX SNAPSHOT TO A MOUNT HOST


In most cases when creating a copy of a database and utilizing it on the mount host, the SQL Server sp_attach_db stored procedure
is used to attach the database to a SQL Server instance. Please refer to SQL Server Books Online for complete documentation on
this Transact-SQL stored procedure, and to the example in Appendix VI on how to use Windows PowerShell to automate the entire
operation. If the mount locations are identical to those on the production instance, the primary data file (.mdf) is used by the stored
procedure to identify file locations. If not, it is necessary to provide the new locations. SQL Server will perform the necessary actions
to start the database and bring it into a transactionally consistent state.
The steps to mount a SnapVX replica to SQL Server once it is created and linked are as follows:
1. Rescan using symntctl.
2. Mount the storage volumes to mount paths or drive letters.
3. Attach the SQL Server database, using SQL commands.

To reinitialize/refresh the mounted linked target, it is necessary to reverse the processes that were executed prior to mounting. The
steps to refresh a linked target include:
1. Drop or detach the SQL Server database.
2. Unmount the volumes from the mount server using the appropriate symntctl commands (see Appendix V).
3. Relink the SnapVX target.
4. Remount the volumes and attach the database.

As part of the refresh, perform the following steps to re-mount a SnapVX replica to SQL Server (a PowerShell sketch of the full refresh cycle is shown after the symsnapvx example below):
1. Detach the SQL Server database.
2. Unmount the volumes from the mount server.
3. Relink the target SnapVX SQL storage group to the same or a different SnapVX snapshot.
4. Follow the steps for mounting a SnapVX replica.

The SnapVX steps necessary to link or relink a SnapVX session using symsnapvx are shown in the following example:

Steps to link or relink a SnapVX session to the Target (Mount) SQL Server storage group

1. Create SnapVX on Production SQL database storage group.

symsnapvx -sid 536 -sg PROD_SQL_SG establish -name PROD_SQL_SnapVX -ttl -delta 2 -nop

2. Verify SnapVX details.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg PROD_SQL_SG list -v
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail

3. Link or (re)link SnapVX to Mount SQL database storage group in default no-copy mode.

symsnapvx -sid 536 -sg PROD_SQL_SG -lnsg MOUNT_SQL_SG -snapshot_name PROD_SQL_SnapVX link -nop

(or)

symsnapvx -sid 536 -sg PROD_SQL_SG -lnsg MOUNT_SQL_SG -snapshot_name PROD_SQL_SnapVX relink -nop

4. Verify link.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg PROD_SQL_SG list -v
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail -linked
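The following Windows PowerShell sketch ties the storage and SQL Server steps of the refresh cycle together. The host name, mount paths, and the symntctl mount/unmount flags are illustrative assumptions; refer to Appendix V for the actual symntctl syntax and to Appendix VI for a complete attach/detach script.

# 1. Detach the database on the mount host
sqlcmd -S MOUNTHOST -Q "EXEC sp_detach_db 'OLTP1';"

# 2. Unmount the target volumes (illustrative flags; see Appendix V for exact symntctl usage)
symntctl umount -path C:\OLTP1_Data1
# ...repeat for the remaining data and log mount points

# 3. Relink the target storage group to the desired snapshot
symsnapvx -sid 536 -sg PROD_SQL_SG -lnsg MOUNT_SQL_SG -snapshot_name PROD_SQL_SnapVX relink -nop

# 4. Rescan, remount, and re-attach
symntctl rescan
symntctl mount -path C:\OLTP1_Data1
# ...repeat for the remaining mount points
sqlcmd -S MOUNTHOST -Q "EXEC sp_attach_db 'OLTP1', 'C:\OLTP1_Data1\MSSQL_OLTP_root.mdf', 'C:\OLTP1_Logs\OLTP1_log.ldf';"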

POINT-IN-TIME SQL SERVER RESTORE WITH SNAPVX


A SQL Server restore operation requires taking the database offline during the restore process. Restoring a SnapVX replica at the
volume level cannot be an online operation. The volumes need to be dismounted before the restore and mounted back after the restore.
The symntctl examples in Appendix V illustrate how this can be accomplished. In Microsoft Failover Cluster (MFC) configurations,
among other steps, it is necessary to remove dependencies for the SQL Server instance resource, and then the corresponding
clustered volumes need to be taken offline. Overall, TimeFinder SnapVX still provides the fastest possible restoration process with
reduced risk of potential data loss. SnapVX restores are performed either directly or indirectly. A direct SnapVX restore is performed
when a snapshot is restored directly to the source without linking it to a target. An indirect restore is performed when changes
made on a linked target need to be restored to production; these restore modes are discussed in later sections. Figure
4 shows an example of a direct SnapVX restore.

DIRECT RESTORE OF SQL SERVER SNAPVX SNAPSHOTS


Figure 4 illustrates one of the SnapVX sessions being restored directly to the Production SQL storage group. Direct restore is a single
operation at the storage group level, and the SQL Database recovery can start immediately.

Figure 4. Direct SnapVX restore from a SnapVX snapshot.


Typical steps involve the following:
Dropping or detaching the Production database using sp_detach_db.
Unmounting the file systems containing the database data and log files.
Executing the restore of the SnapVX snapshot.
Mounting the restored database volumes and file systems.
Re-attaching the database files using CREATE DATABASE ... FOR ATTACH (see the sketch after this list).
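A hedged sketch of the SQL Server side of these steps is shown below; the instance name and file paths follow the test layout in Table 4 and should be adjusted for the actual environment.

# Before the SnapVX restore: detach the production database
sqlcmd -S PRODHOST -Q "EXEC sp_detach_db 'OLTP1';"

# After the restore completes and the volumes are remounted: re-attach the database
# (only the primary file must be named if all other files are back at their original paths)
sqlcmd -S PRODHOST -Q "CREATE DATABASE OLTP1 ON (FILENAME='C:\OLTP1_Data1\MSSQL_OLTP_root.mdf') FOR ATTACH;"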
Refer to SQL Server Books Online for complete documentation on these Transact-SQL commands, and to Appendix VI on how to
automate them using Windows PowerShell. There are two ways to restore a SQL Server SnapVX snapshot: direct and indirect.
Direct restore is restoring directly from the snapshot itself, as shown in Figure 4.
Indirect restore is via the linked targets, which have undergone a surgical repair or modification. The restore from linked targets
is possible by first creating a snapshot of the linked target, and then linking that snapshot back to the original production devices,
as shown in Figure 5.
Restores are possible from snapshots from any point in time, and will not affect or invalidate any other snapshot used previously.
Affected entities during a SnapVX snapshot restore are another key consideration. An affected entity is data residing on the
Production SQL Server host that has unintentionally become part of a SQL Server SnapVX snapshot replica and would be restored
in addition to the data that is needed. To prevent affected-entity situations, plan the database layout according to the required restore
granularity. The steps needed to execute a direct restore are as follows.
1. Detach the SQL Server database.
2. Unmount the database volumes.
3. Execute the SnapVX restore.
4. Rescan and remount the volumes.
5. Attach the SQL Server database.

Once the background restore is complete, the restored session can be terminated.

The SnapVX steps necessary to do a direct restore are shown in the following example:

Direct SnapVX restore to Production SQL database

1. Create SnapVX on Production SQL database storage group.

symsnapvx -sid 536 -sg PROD_SQL_SG establish -name PROD_SQL_SnapVX -ttl -delta 2 -nop

2. Verify SnapVX details.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg PROD_SQL_SG list -v
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail

3. Restore SnapVX to the Production database.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX restore -v -nop

4. Verify SnapVX restore progress.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg PROD_SQL_SG list -v
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail

Once the restore is complete, clean up the restored sessions as shown below.

Terminate Direct SnapVX restore session to Production SQL database

1. Terminate the restored SnapVX session on the Production database.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX terminate -restored -nop

2. Verify SnapVX terminate progress.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg PROD_SQL_SG list -v
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail

3. Terminate the SnapVX snapshot (if no longer needed).

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX terminate -nop
symsnapvx -sid 536 -sg PROD_SQL_SG list -v

INDIRECT RESTORE OF SQL SERVER SNAPVX SNAPSHOTS INVOLVING LINKED TARGETS


Figure 5 shows the two-step process of an indirect restore back to the production SQL storage group using SnapVX. Once a surgical
repair or modification is executed on the mounted target replica, it can be synced back to production by re-snapping the target and linking that snapshot to production.

Figure 5. Indirect restore to production database using establish and link
The actual steps in executing an indirect restore are shown below.
Create a SnapVX replica of the surgically repaired Mount SQL database storage group.
Detach the production SQL Server database.
Unmount the production volumes.
Link the SnapVX replica back to the production SQL database in copy mode.
Rescan the Windows disk subsystem using the symntctl command.
Remount the database volumes.
Re-attach the production SQL Server database for use.

The next few steps illustrate the step-by-step process of executing an indirect SnapVX restore:
1. Execute a SnapVX establish (see Part A, Figure 6)
2. Link to mount (see Part B, Figure 7)
3. Re-snap (see Part C, Figure 8)
4. Re-link to production (see Part D, Figure 9)
5. Execute cleanup for participating SnapVX sessions that are no longer needed

Figure 6. Indirect SnapVX restore (Part A), SnapVX creation

Figure 7. Indirect SnapVX restore (Part B), linking the SnapVX replica to the target SQL storage group

Figure 8. Indirect SnapVX restore (Part C), establishing a SnapVX replica of the mount host storage group

Figure 9. Link the SnapVX replica (Part D) of the Mount SQL database storage group

The equivalent symsnapvx steps for Figure 6 (Part A) and Figure 7 (Part B) are listed below.

Indirect SnapVX restore of modified linked target back to Production SQL database (Part A & Part B)

1. Create SnapVX on Production SQL database storage group.

symsnapvx -sid 536 -sg PROD_SQL_SG establish -name PROD_SQL_SnapVX -ttl -delta 2 -nop

2. Verify SnapVX details.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg PROD_SQL_SG list -v
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail

3. Link SnapVX to Mount SQL database storage group in copy mode.

symsnapvx -sid 536 -sg PROD_SQL_SG -lnsg MOUNT_SQL_SG -snapshot_name PROD_SQL_SnapVX link -copy -nop

4. Verify link.

symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg PROD_SQL_SG list -v
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail -linked

The equivalent symsnapvx steps for Figure 8 (Part C) and Figure 9 (Part D) are listed below.

Indirect SnapVX restore of modified linked target back to Production SQL database (Part C & Part D)

1. Make surgical repairs or modifications on the mounted SQL database, and then create a SnapVX replica of this Mount SQL
database storage group (cascaded).

symsnapvx -sid 536 -sg MOUNT_SQL_SG establish -name MOUNT_SQL_SnapVX -ttl -delta 2 -nop
symsnapvx -sid 536 -sg MOUNT_SQL_SG -snapshot_name MOUNT_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg MOUNT_SQL_SG list -v
symsnapvx -sid 536 -sg MOUNT_SQL_SG list -detail

2. Link the SnapVX replica of the Mount SQL database storage group back to the Production SQL database in copy mode.

symsnapvx -sid 536 -sg MOUNT_SQL_SG -lnsg PROD_SQL_SG -snapshot_name MOUNT_SQL_SnapVX link -copy -nop

3. Verify link copy progress.

symsnapvx -sid 536 -sg MOUNT_SQL_SG -snapshot_name MOUNT_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg MOUNT_SQL_SG list -v
symsnapvx -sid 536 -sg MOUNT_SQL_SG list -detail
symsnapvx -sid 536 -sg MOUNT_SQL_SG list -detail -linked

Unlink all targets linked to both the Production and Mount SQL storage groups before terminating.

1. Unlink the SnapVX session for the Mount SQL database and verify.

symsnapvx -sid 536 -sg MOUNT_SQL_SG -lnsg PROD_SQL_SG -snapshot_name MOUNT_SQL_SnapVX unlink -nop
symsnapvx -sid 536 -sg MOUNT_SQL_SG -snapshot_name MOUNT_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg MOUNT_SQL_SG list -v
symsnapvx -sid 536 -sg MOUNT_SQL_SG list -detail

2. Unlink the SnapVX session for the Production SQL database and verify.

symsnapvx -sid 536 -sg PROD_SQL_SG -lnsg MOUNT_SQL_SG -snapshot_name PROD_SQL_SnapVX unlink -nop
symsnapvx -sid 536 -sg PROD_SQL_SG -snapshot_name PROD_SQL_SnapVX verify -summary
symsnapvx -sid 536 -sg PROD_SQL_SG list -v
symsnapvx -sid 536 -sg PROD_SQL_SG list -detail

LEVERAGING VMAX3 REMOTE SNAPS FOR DISASTER RECOVERY
VMAX3 SRDF allows both synchronous and asynchronous replication of Production databases to multiple target sites for disaster
recovery (DR). The remote copies can be used to restore a production database in the event of disaster. Refer to Appendix IV for
specific steps on how to set up SRDF between source and target site. Periodic point-in-time remote snapshots on the R2 site can be
used for DR testing, TEST/DEV, and for restoring back to the R1 site. Figure 10 illustrates a sample configuration of SRDF and
remote snapshots.

Figure 10. Remote SnapVX replicas with SRDF


The typical steps for restore back to the R1 SQL production database are shown below.
Note: This example uses a snapshot named MOUNT_R2_SnapVX for the restore. The R1 VMAX is 535 and the R2 VMAX is 536.
1. Detach the R1 site SQL Server database.

2. Split the SRDF link to initiate the restore operation.

symrdf -sid 535 -sg PROD_R1_SG -rdfg 20 split

3. If DR from a point-in-time snapshot is desired, identify the R2 snapshot and restore it to the R2 devices.

symsnapvx -sid 536 -sg MOUNT_R2_SG -snapshot_name MOUNT_R2_SnapVX restore

4. Verify the completion of the restore.

symsnapvx -sid 536 -sg MOUNT_R2_SG -snapshot_name MOUNT_R2_SnapVX verify -summary

5. Terminate once the restore completes.

symsnapvx -sid 536 -sg MOUNT_R2_SG -snapshot_name MOUNT_R2_SnapVX terminate -restored

6. As soon as the restore from the snapshot is initiated, the SRDF restore can be started. SRDF will perform an incremental restore from
R2 to R1. The devices will show SyncInProg to indicate that the restore is in progress; the Synchronized state indicates
completion of the restore.

symrdf -sid 535 -sg MOUNT_R2_SG -rdfg 20 restore

7. Attach the R1 site SQL Server database.

SQL SERVER OPERATIONAL DETAILS
MICROSOFT SQL SERVER AlwaysOn WITH SnapVX
SQL Server high availability and native continuous data protection with AlwaysOn and TimeFinder SnapVX restartable snapshots
provide better protection levels and reduced outages. The SQL Server AlwaysOn Availability Group (AAG) feature is a disaster-recovery
solution that improves database availability and reduces downtime with multiple database copies. An Availability Group
supports a set of read-write primary databases and one to eight sets of corresponding secondary databases. Secondary databases
can be made available for read-only access, backup, reporting, and database consistency checking operations. The primary and
secondary copies of databases do not share storage in an AAG; each node in the cluster needs its own separate copy
of storage configured and zoned to it. The database copy on each secondary node of an AAG is independent of the primary copy. If a
logical corruption replicates across the AAG databases, TimeFinder SnapVX can create a lagged copy to help return to a previous
point in time. Figure 11 illustrates an example of a SQL Server AlwaysOn deployment.

Figure 11. SnapVX with SQL Server AlwaysOn
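A hedged sketch of a lagged copy follows, assuming the secondary replica's devices are grouped in a hypothetical storage group named AAG_SECONDARY_SG:

# Take a point-in-time snapshot of the secondary replica's storage group on a schedule
# (for example hourly); retained generations provide earlier points in time to return to
# if a logical corruption has already replicated across the availability group
symsnapvx -sid 536 -sg AAG_SECONDARY_SG establish -name AAG_Lagged_Copy -ttl -delta 1 -nop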

STORAGE RESOURCE POOL USAGE WITH MULTIPLE SNAPVX COPIES


The symsnapvx -sg <storage_group> list -detail [-GB | -MB] output also has a "Non-Shared" field (with a choice of tracks, MB,
or GB units) that provides the amount of storage uniquely allocated to each snapshot. Refer to the EMC VMAX3 Local Replication
TimeFinder SnapVX technical notes for more details on non-shared capacity. Non-shared capacity is the most important value
because non-shared snapshot deltas are discarded when the snapshot is terminated, which frees space in the SRP. The
non-shared capacity can also be viewed from Unisphere for VMAX.
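For example, the per-snapshot non-shared capacity of the production storage group used in this paper's tests could be reported in GB as follows (storage group name per Table 3):

# Report each snapshot's non-shared (uniquely allocated) capacity in GB
symsnapvx -sid 536 -sg SQL_PRODB_SG list -detail -GB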
Figure 12 shows less than 2% of the Storage Resource Pool (SRP) being utilized to capture and track 256 unique SnapVX snapshots
taken at 30-second intervals, with a 60 MB/sec change rate on a 1.2 TB SQL OLTP workload. It shows SRP space usage percentages
for SnapVX increments of 5, 10, 25, 50, 75, 100, 125, 150, 175, 200, and 250. The allocations were minimal, efficient, and space
saving for all these snapshots. A smaller RTO is enabled by using less than two percent of the entire SRP to track these snapshots,
which had a time to live (TTL) of 2 days. This sample data capture of track allocation was observed in a lab environment where the
SQL Server OLTP database storage group was on the Platinum SLO. Different results may be observed, depending on factors like
retention time, number of snaps, change rate, competing workloads run on the linked targets, and so forth.

Figure 12. % SRP utilized for Non-Shared unique tracks

RECLAIMING SPACE OF LINKED TARGET VOLUMES


Figure 13 shows the pool allocation changes when creating multiple snaps. Shared and non-shared allocation will vary depending on
the change rate, the locality and timing of the snaps, and the number of delta tracks. As shown in the chart, pool utilization grows as
more snaps are created and eventually stabilizes as most of the changes occur in space already allocated for prior
snaps. Termination of the snaps automatically reclaims any non-shared tracks used by them while decrementing the snap count
for shared tracks. However, allocations that are associated with linked targets need to be returned to the SRP at some point.
This can be achieved either with automatic Windows reclaim or by using symdev free -all. See Example 12 for usage.
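The two reclaim paths can be sketched as follows; the drive letter and device range are illustrative, and the symdev syntax should be confirmed against the Solutions Enabler documentation for the installed release.

# Windows-side reclaim: re-issue TRIM/UNMAP for free space on a mounted target volume
Optimize-Volume -DriveLetter E -ReTrim -Verbose

# Array-side reclaim for target devices that are unlinked and no longer presented to a host
# (illustrative device range; confirm syntax in the Solutions Enabler documentation)
symdev -sid 536 free -all -devs 00020:00024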

[Chart: pool allocation (GB) during the snap life cycle, plotting delta tracks, non-shared tracks, and shared tracks as snapshots are created and terminated]
Figure 13. Pool Allocation (GB) During Snap Life Cycle

Windows Server 2012 supports the ability to detect thinly provisioned storage and issue T10 standard UNMAP or TRIM based reclaim
commands against the storage. Reclaim operations are performed in the following situations:
When the target linked volumes are no longer in use and the volumes are formatted with the quick option. The quick option
requests that the entire size of the volume be reclaimed in real time. Figure 14 shows an example track allocation layout before
and after a quick format of Windows volumes 00020 through 00024. After the quick format, the unlinked target volumes are
reduced to 702 total written tracks.

Figure 14. Before and after a quick format of Windows volumes

When the optimize option is selected for a volume as part of a regularly scheduled operation, when the Optimize-Volume
Windows PowerShell cmdlet is used with the -ReTrim option, or when optimization is run from the Defragment and Optimize Drives GUI.
Figure 15 shows an example.

Figure 15. Windows Optimize Drives GUI

When a group of SQL Server database files are deleted from the target file system, Windows automatically issues reclaim
commands for the area of the file system that was freed based on the file deletion. Figure 16 shows the effect of reclamation on
Device ID 00023.

Figure 16. Reclaimed tracks after SQL server database files were deleted on target FS

SnapVX PERFORMANCE USE CASES WITH MICROSOFT SQL SERVER
TEST BED CONFIGURATION
Figure 17 shows the use case test environment. It consists of a test production server running an OLTP workload and a target mount
server for linked target replicas used for test/development/reporting or other lightweight OLTP workloads.

Figure 17. Test Configuration Layout details

DATABASES CONFIGURATION DETAILS


Table 1 shows the VMAX3 storage environment, Table 2 shows the host environment, and Table 3 shows the databases storage
configuration.

Table 1 Test storage environment


Configuration aspect Description

Storage array VMAX 200K with 1-engine

HYPERMAX OS 5977.596

Drive mix (excluding spares)   16 x 200 GB EFD - RAID5 (3+1); 64 x 300 GB 15K HDD - RAID1; 32 x 1 TB 7K HDD - RAID6 (6+2)

Table 2 Test host environment
Configuration aspect Description

Microsoft SQL Server SQL Server 2014 Enterprise Edition 64-bit

Windows Windows Server 2012 R2 64-bit

Multipathing EMC Powerpath 6.0

Hosts   1 x Cisco C240, 96 GB memory (Production); 1 x Cisco R210, 64 GB memory (Mount)

Table 3 Test database configuration

Database: Production OLTP1 (size 1.1 TB)
LUN layout: DATA_SG: 4 x 500 GB thin LUNs; LOG_SG: 1 x 500 GB thin LUN
Storage group: SQL_PRODB_SG; SRP: Default; Start SLO: Gold

Database: Mount OLTP1 (size 1.1 TB)
LUN layout: DATA_SG: 4 x 1 TB thin LUNs; LOG_SG: 1 x 1 TB thin LUN
Storage group: SQL_MOUNTDB_SG; SRP: Default; Start SLO: Bronze

Table 4 SQL database layout details

Database: OLTP1 (Production & Mount)
SQL Server file groups: FIXED_FG, GROWING_FG, SCALING_FG

Mount point      SQL Server data files                                             Total size
C:\OLTP1_Data1   MSSQL_OLTP_root.mdf, Fixed_1.ndf, Growing_1.ndf, Scaling_1.ndf    295 GB
C:\OLTP1_Data2   Fixed_2.ndf, Growing_2.ndf, Scaling_2.ndf                         287 GB
C:\OLTP1_Data3   Fixed_3.ndf, Growing_3.ndf, Scaling_3.ndf                         287 GB
C:\OLTP1_Data4   Fixed_4.ndf, Growing_4.ndf, Scaling_4.ndf                         287 GB
C:\OLTP1_Logs    OLTP1_log.ldf

TEST OVERVIEW
General test notes:
OLTP1 was configured to run a 90/10 read/write ratio OLTP workload derived from an industry standard. No special database
tuning was done, as the focus of the test was not on achieving maximum performance but rather on the comparative differences of a
standard database workload.
All the tests maintained a steady OLTP workload on production SQL Server database with a variable OLTP change rate in the
range of 30-60MB/sec on a 1.1 TB SQL dataset, even when running the SnapVX replications.

Most of the SnapVX snapshots were taken at an aggressive 30 sec interval, to minimize RTO and RPO, and show the role
SnapVX replications play in continuous data protection (CDP).
DATA and LOG Storage Groups were cascaded into a common parent storage group for ease of provisioning, replication and
performance management.

Data collection included storage performance metrics using Solutions Enabler and Unisphere for VMAX, host performance
statistics using Windows Perfmon, and EMC PowerPath statistics.

High level test overview:


Impact of taking 256 SnapVX snapshots on the production workload database with a steady Gold SLO (Use Case 1A), and with a
varying SLO, Gold to Platinum (Use Case 1B)
Impact of No-Copy vs. Copy mode linked target snapshots on the production workload, first without an OLTP workload on the mount host
(Use Case 2A), and then with an OLTP workload introduced on the mount host (Use Case 2B)

USE CASE 1A IMPACT OF TAKING 256 SNAPVX SNAPSHOTS ON PRODUCTION WORKLOAD DATABASE ON GOLD SLO
Objectives:
The purpose of this test case is to verify that database performance is maintained while 256 SnapVX snapshots are taken on the SQL
Server data file storage group. Apply the Gold SLO to the SQL Server data file storage group and analyze the effect of TimeFinder SnapVX on
SQL Server performance statistics. During this test the SQL Server transaction log storage group remained on the Bronze SLO.
Test execution steps 1A:
1. Run steady state workload for 1 hour to get baseline with Gold SLO.
2. Create 256 SnapVX snapshots on the production SQL Server storage group at 30-second intervals (a scripted example of this
cadence follows these steps), and record the performance statistics for the SQL Server database. Continue to run the OLTP workload for the next 3 hours.
3. Undo the previously created snaps by terminating them sequentially at 30 second intervals.
4. Continue to run the OLTP workload for the next 3 hours.
5. Ensure all snaps are terminated and run steady state workload pattern for next hour.
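The snapshot cadence in step 2 can be scripted by looping over the symsnapvx establish command. The Windows PowerShell sketch below assumes Solutions Enabler is installed on the management host with symsnapvx on the PATH; the snapshot name SQL_CDP_Snap is illustrative, and -noprompt simply suppresses the confirmation prompt. Repeated establishes of the same name create new generations, with generation 0 the most recent.

$sid = '536'              # array ID from the test configuration
$sg  = 'SQL_PRODB_SG'     # production SQL Server storage group

for ($i = 1; $i -le 256; $i++) {
    # Each establish creates a new generation of the named snapshot (gen 0 = newest)
    & symsnapvx -sid $sid -sg $sg -name SQL_CDP_Snap establish -noprompt
    Start-Sleep -Seconds 30
}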

Test results:
1. Table 5 and Figure 18 show the observed pattern for SQL Batch Requests/sec and SQL response times (ms). While the
workload is running, there is no significant impact on the production workload from creating or terminating snapshots.
The Create SnapVX phase shows a minimal decrease in Batch Requests/sec from 2441 to 2366 and a slight increase in SQL
Server response time from 2.2 ms to 2.28 ms, both well within the boundaries of the Gold SLO compliance set on the SQL storage
group.

The increase in SQL Batch Requests/sec after termination of the SnapVX snapshots is likely due to the Redirect-On-Write (ROW) technology
in VMAX3, where new writes are asynchronously written to a new location while the snapshots and their deltas point to the original
location. These new locations may be in the flash tier, based on availability, capacity, compliance, and SLO track movement.
Refer to the EMC VMAX3 Local Replication technical notes for further details on ROW.

Table 5 SQL Server Database performance statistics as an effect of SnapVX on Gold SLO

Figure 18. SQL Server statistics for Use Case 1A, taking 256 SnapVX snapshots

USE CASE 1B IMPACT OF TAKING 256 SNAPVX SNAPSHOTS ON PRODUCTION WORKLOAD DATABASE WITH A VARYING SLO
Objectives:
The purpose of this test case is to understand how SQL Server database performance is affected while snapshots are being created and
while SLO changes are made. Similar to Use Case 1A, two tests were run back to back on the SQL Server database OLTP1. In this
scenario, however, we show the impact of creating 256 SnapVX snapshots not just on the Gold SLO baseline, but also when the SLO
setting is changed to Platinum. First, apply the Gold SLO to the SQL Server storage group and gather performance
statistics. Next, transition the SQL OLTP storage group to the Platinum SLO. Note the corresponding SQL transaction
rate in SQL Batch Requests/sec and the SQL response time. For both runs the SQL Server transaction log storage group remained
on the Bronze SLO.
Test execution steps 1B:
1. Run an OLTP workload on OLTP1 SQL Database, keeping the SQL data files on Gold SLO, and SQL transaction log storage group
on Bronze SLO. Run the test for 1 hour and record SQL Response time and SQL Batch Requests/sec.

2. Create 256 SnapVX snapshots on production SQL Server storage group at 30 second intervals and record the performance
statistics for SQL Server database.

3. Continue to run the OLTP workload on Gold for the next 3 hours.
4. Terminate the previously created SnapVX snapshots at 30 second intervals.

5. Repeat the above steps; however, during the SnapVX creation phase, transition the SQL storage group to Platinum SLO, and
create 256 SnapVX snapshots at 30 second intervals.
6. Continue to run the OLTP workload at steady state, with the SQL storage group set to the Platinum SLO, for the next 3
hours.
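The SLO transition in step 5 is a storage group attribute change and does not interrupt the workload or the snapshot schedule. A minimal Solutions Enabler sketch, run from PowerShell, is shown below; the child storage group name is assumed, and the set -slo option spelling follows Solutions Enabler 8.x and should be verified against the CLI guide for the installed version.

# Move the SQL Server data file storage group from Gold to Platinum SLO (SG name assumed);
# the transaction log storage group stays on Bronze
& symsg -sid 536 -sg SQL_PRODB_DATA_SG set -slo Platinum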
Test results:
Table 6 and Figure 19 for Use Case 1B show the database transaction rate in SQL Batch Requests/sec and the SQL response time
(ms) for both the steady state and SnapVX phases at each SLO. With the Gold SLO, the SnapVX phase showed a SQL transaction rate of
2366 Batch Requests/sec, while the equivalent run on Platinum improved to 3260 Batch Requests/sec. SQL response times remained in
the range of 1.44 ms (Platinum) to 2.28 ms (Gold). SLO compliance rules and objectives were met.

Table 6 Use Case 1B results

Figure 19. Use Case 1B SQL Batch Requests/sec and SQL response time with Gold to Platinum SLO changes

USE CASE 2A IMPACT OF NO-COPY VS COPY MODE LINKED TARGET SNAPSHOTS WITH WORKLOAD ON PRODUCTION
Objectives:

This use case shows the impact of SnapVX linked targets in No-Copy mode and in Copy mode. In the No-Copy mode test, 256
snapshots were created and linked to targets in No-Copy mode while an OLTP workload ran on the production SQL Server for 3
hours. In the Copy mode test, 256 SnapVX snapshots were created and linked to a target storage group (see note below) in Copy mode every 30
seconds. The Copy mode tracks were allocated and copied asynchronously in the background as new changes were generated by
the OLTP workload. This test case used the Gold SLO for the SQL Server data file storage group and the Bronze SLO for the SQL Server
transaction log storage group. Note: no workload was run on the target mount SQL Server.

Test case execution steps:


1. Run OLTP workload at steady state for 1 hour without SnapVX involved.
2. Continue running OLTP workload, but run SnapVX with link to target storage group in no-copy mode and gather performance
statistics. This test run is for 3 hours.
3. Repeat the above steps, but this time with SnapVX snapshots created and linked to target storage group in copy mode. Gather
SQL performance statistics; this run is for 3 hours as well.
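The only CLI difference between steps 2 and 3 is the -copy flag on the link operation. A minimal sketch, run from PowerShell with the storage group names from the test configuration and the illustrative snapshot name used earlier:

# Run 1: link the snapshot to the mount storage group in the default no-copy mode
& symsnapvx -sid 536 -sg SQL_PRODB_SG -snapshot_name SQL_CDP_Snap -lnsg SQL_MOUNTDB_SG link -noprompt

# Run 2: the same link, but with a full background copy to the target devices
& symsnapvx -sid 536 -sg SQL_PRODB_SG -snapshot_name SQL_CDP_Snap -lnsg SQL_MOUNTDB_SG link -copy -noprompt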

Test results:
Table 7 and Figure 20 show that the average Batch Requests/sec for SQL Server is slightly higher for the Copy mode test, at 1710,
compared to the No-Copy link at 1684. The response times were slightly lower at 3.23 ms for the Copy link compared to 3.62 ms for the
No-Copy link. As the graphs in Figure 20 show, No-Copy linked snapshots and Copy linked snapshots behave almost identically
when no intensive workload is run on the mount SQL storage group. The very slight differences between the Copy and No-Copy
SQL statistics can be attributed to the Redirect-on-Write (ROW) technology in VMAX3 and to staying within the Gold SLO compliance latency range.
Refer to the EMC VMAX3 Local Replication technical notes for further details on ROW.

Table 7 Use Case 2A results

Note: The target storage groups contain the linked-target devices of the production snapshots. They should be added to a masking view to make the target
devices accessible to the mount host.

Figure 20. Use Case 2A results, for No-Copy versus Copy linked SQL target storage groups

USE CASE 2B IMPACT OF NO-COPY VS COPY MODE LINKED TARGET SNAPSHOTS WITH
WORKLOAD ON BOTH PRODUCTION AND MOUNT HOSTS
Objectives:
This use case differs from Use Case 2A in that only 30 SnapVX replicas were created instead of 256, and an
OLTP workload was run on the linked storage group belonging to the target host, set to the Bronze SLO. The workload was
kicked off on the 30th relinked snapshot in both cases. Also note that the 30 snapshots with Copy mode linked targets were created after at least
the first SnapVX replica was linked, fully defined, and copied over to the target storage group. All subsequent relinks were to
newly created snapshots. The mount host ran a similar OLTP workload on the Bronze SLO, compared to the production host
running a similar OLTP workload on the Gold SLO.
This test therefore shows the impact of running a workload on the linked target replica in No-Copy mode and in Copy mode, with
different SLOs set on the storage groups. In both the No-Copy mode and Copy mode tests, the 30 snapshots were created and linked to
the target storage group every 30 seconds, while an OLTP workload ran on the production SQL Server and the target mount SQL
Server for 3 hours. The Copy mode tracks were allocated and copied asynchronously in the background as new changes were
generated by the OLTP workload. This test case used the Bronze SLO for the SQL Server transaction log storage group.
Test case execution steps:
1. Run OLTP workload on Production SQL database for 3 hours, with 30 SnapVX snapshots created every 30 seconds.

2. Run parallel OLTP workload on Mount SQL database on linked target storage group in no-copy mode.
3. Gather SQL performance statistics.
4. Repeat the above steps, but this time with new set of 30 SnapVX snapshots created and linked to target storage group in copy
mode.


Note: before creating the 30 SnapVX snapshots in Copy mode, an initial full copy of the production volumes was defined and
copied over to the target volumes. As a result, the subsequent Copy mode SnapVX snapshots, when recreated and relinked,
had fewer tracks to copy asynchronously.
5. Gather SQL performance statistics for this 3-hour run as well.
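Between runs, the mount copy is refreshed by establishing a new snapshot generation and relinking the existing target storage group to it. A minimal sketch of one refresh cycle follows; the snapshot name is the assumed one from the earlier sketches, and the target file systems are expected to be unmounted on the mount host first (for example with symntctl, as shown in Appendix V).

# Take a new generation of the named snapshot, then refresh the existing linked target
& symsnapvx -sid 536 -sg SQL_PRODB_SG -name SQL_CDP_Snap establish -noprompt
& symsnapvx -sid 536 -sg SQL_PRODB_SG -snapshot_name SQL_CDP_Snap -lnsg SQL_MOUNTDB_SG relink -noprompt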
Results
Table 8 and Figure 21 show SQL Server Batch Requests/sec and SQL response time for this test. As seen in Figure 21, the SQL
database on the mount host met the expectations of the Bronze SLO. Note that the average SQL Batch Requests/sec for the target in
No-Copy mode on the Bronze SLO is 1976, while that of the target in Copy mode on the Bronze SLO is 1453. The reason is that the mount volumes in the
No-Copy state still share tracks with the production SnapVX snapshots and production volumes in the Gold SLO.

Table 8 Use Case 2B results.

Figure 21. SQL Server stats for Production and Mount for Copy and No-Copy (OLTP on Mount Host)

CONCLUSION
VMAX3 SnapVX local replication technology enables SQL administrators to meet their protection and backup needs with scale, speed,
and ease of use. SnapVX capability reduces host I/O and CPU overhead, allowing the database host to focus on servicing database
transactions. It not only helps reduce RPO and RTO, but also enables multiple copies of the production database for
test/development/reporting purposes, with the added benefits of reduced array space usage.

REFERENCES
EMC VMAX3 Family with HYPERMAX OS Product Guide
Unisphere for VMAX Documentation set
EMC Unisphere for VMAX Database Storage Analyzer
EMC VMAX3 Local Replication Tech Note
Deployment Best Practice for Microsoft SQL Server with VMAX3 SLO Management

APPENDIXES
APPENDIX I - CONFIGURING SQL SERVER DATABASE STORAGE GROUPS FOR REPLICATION
VMAX3 TimeFinder SnapVX and SRDF allow the use of VMAX3 Auto-Provisioning Groups (storage groups) to provision storage for the
SQL Server database and to create Enginuity Consistency Assist based, write-order-consistent snapshots. Any changes to SQL
Server database provisioning made through these storage groups are also reflected in any new snapshots created afterwards, making it
very easy to manage database growth. This simplifies configuring and provisioning SQL Server database storage for data
protection, availability, and recoverability.
Cascading DATA and LOG into a parent SG allows the creation of restartable copies of the database. Separating the transaction logs from
this group allows independent management of data protection for the transaction logs, while providing the desired control over SLO
management. Figure 22 shows how to provision storage for SQL Server DATA and transaction logs to ensure database recovery SLAs
are achievable. Following this provisioning model, along with the use cases described earlier, provides proper deployment guidelines
for SQL Server databases on VMAX3 for database and storage administrators.

Figure 22. SQL server using cascaded storage group
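The layout in Figure 22 can be built with the Solutions Enabler symsg command. The sketch below, run from PowerShell, uses the parent storage group name from the test configuration; the child group names, the SRP name, and the exact option spellings (-slo, -srp, add sg) are assumptions to be checked against the Solutions Enabler CLI guide for the installed version.

$sid = '536'

# Child storage groups carry their own SLOs (data on Gold, transaction log on Bronze)
& symsg -sid $sid create SQL_PRODB_DATA_SG -slo Gold -srp SRP_1
& symsg -sid $sid create SQL_PRODB_LOG_SG -slo Bronze -srp SRP_1

# Parent (cascaded) storage group used for provisioning, masking, and SnapVX replication
& symsg -sid $sid create SQL_PRODB_SG
& symsg -sid $sid -sg SQL_PRODB_SG add sg SQL_PRODB_DATA_SG
& symsg -sid $sid -sg SQL_PRODB_SG add sg SQL_PRODB_LOG_SG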

Creating snapshots for SQL Server database storage groups


Figure 23 shows how to create a snapshot for SQL Server database storage. A new named storage snapshot can be created, or an
existing snapshot can be refreshed, using this screen. It also allows setting the time to live (in days) for automatic
expiration based on the user-provided period. Additional snapshots from a linked target can also be created in the same way.

Figure 23. Creating snapshots for SQL Server database storage groups

Linking SQL Server database snapshots for backup offload or repurposing
Figure 24 shows how to select an existing snapshot to link to a target storage group for backup offloading or repurposing. By default
the snapshots are linked in space-saving No-Copy mode, where the copy operation is deferred until the source tracks are written. If a full
copy is desired, select the Copy checkbox. One snapshot can be linked to multiple target storage groups. If a relink to the same
target storage group is desired, select the existing target storage group option.

Figure 24. Linking SQL Server database snapshots for backup offload or repurposing

Restoring SQL Server database using a storage snapshot


Figure 25 shows how to select an existing snapshot to restore a storage group.

Figure 25. Restoring SQL Server database using storage snapshot

Creating cascaded snapshot from an existing snapshot
TimeFinder SnapVX allows creating snapshots from an existing snapshot, repurposing the same point-in-time copy for other uses.
Figure 26 shows how to do so.

Figure 26. Creating cascaded snapshot from an existing snapshot

APPENDIX II SRDF MODES AND TOPOLOGIES


SRDF Modes
SRDF Modes define SRDF replication behavior. These basic modes can be combined to create different replication topologies.
SRDF Synchronous (SRDF/S) is used to create a no-data-loss of committed transactions solution.
o In SRDF/S, each host write to an R1 device gets acknowledged only after the I/O was copied to the R2 storage system
persistent cache.
o SRDF/S makes sure that data on both the source and target devices is exactly the same.
o Host I/O latency will be affected by the distance between the storage arrays.
o It is recommended to enable SRDF Consistency even in SRDF/S mode, to ensure that if any single device
cannot replicate its data, all the devices in the group cease replication, preserving target consistency.
SRDF Asynchronous (SRDF/A) is used to create consistent replicas at unlimited distances, without write response time
penalty to the application.
o In SRDF/A, each host write to an R1 device gets acknowledged immediately after it registered with the local VMAX3
persistent cache, preventing any write response time penalty to the application.

o Writes to the R1 devices are grouped into cycles. The capture cycle is the cycle that accepts new writes to R1 devices
while it is open. The transmit cycle is a cycle that was closed for updates and its data is being sent from the local to
the remote array. The receive cycle on the remote array receives the data from the transmit cycle. The destaged
cycle on the remote array destages the data to the R2 devices. SRDF software makes sure to only destage full cycles to
the R2 devices.
- The default time for the capture cycle to remain open for writes is 15 seconds, though it can be set differently.

- In legacy mode (at least one of the arrays is not a VMAX3) cycle time can increase during peak workloads as
more data needs to be transferred over the links. After the peak, the cycle time will go back to its set time (default
of 15 seconds).
- In multi-cycle mode (both arrays are VMAX3) cycle time remains the same, though during peak workload more
than one cycle can be waiting on the R1 array to be transmitted.
- While the capture cycle is open, only the latest update to the same storage location will be sent to the R2, saving
bandwidth. This feature is called write-folding.
- Write-order fidelity is maintained between cycles. For example, two dependent I/Os will always be in the same
cycle, or the first of the I/Os in one cycle and the dependent I/O in the next.
- To limit VMAX3 cache usage by the capture cycle during peak workload times and to avoid stopping replication due to
too many outstanding I/Os, VMAX3 offers a Delta Set Extension (DSE) pool: local storage on the source
side that can buffer outstanding data destined for the target during peak times.
o The R2 target devices maintain a consistent replica of the R1 devices, though slightly behind, depending on how fast the
links can transmit the cycles and on the cycle time. For example, when cycles are received every 15 seconds at the remote
storage array, its data will be 15 seconds behind production (if the transmit cycle was fully received) or 30 seconds behind
(if the transmit cycle was not fully received, it will be discarded during failover to maintain R2 consistency).
o Consistency should always be enabled when protecting databases and applications with SRDF/A to make sure the R2
devices create a consistent restartable replica.
SRDF Adaptive Copy (SRDF/ACP) mode allows bulk transfers of data between source and target devices without maintaining
write-order fidelity and without write performance impact to source devices.
o While SRDF/ACP is not valid for ongoing consistent replications, it is a good way to transfer changed data in bulk
between source and target devices after replications were suspended for an elongated period of time, accumulating
many changes on the source. ACP mode can be maintained until a certain skew of leftover changes to transmit is
achieved. Once the amount of changed data has been reduced, the SRDF mode can be changed to Sync or Async as
appropriate.
o SRDF/ACP is also good for migrations (also referred to as SRDF Data Mobility) as it allows a point-in-time data push
between source and target devices.
SRDF Topologies
A two-site SRDF topology includes SRDF sessions in SRDF/S, SRDF/A, and/or SRDF/ACP between two storage arrays, where each
RDF group can be set in a different mode, and each array may contain R1 and R2 devices of different groups.
Three-site SRDF topologies include:
Concurrent SRDF: Concurrent SRDF is a three-site topology in which replication takes place from site A simultaneously to site B
and site C. Source R1 devices are replicated simultaneously to two different sets of R2 target devices on two different remote
arrays. For example, one SRDF group can be set as SRDF/S replicating to a near site and the other as SRDF/A replicating to a
far site.

Cascaded SRDF: Cascaded SRDF is a three-site topology in which replication takes place from site A to site B, and from there to
site C. R1 devices in site A replicate to site B to a set of devices called R21. R21 devices behave as R2 to site A, and as R1 to
site C. Site C has the R2 devices. In this topology, site B holds the full capacity of the replicated data and if site A fails and
Production operations continue on site C, site B can turn into the DR site for site C.
SRDF/EDP: Extended data protection SRDF topology is similar to cascaded SRDF. Site A replicates to site B, and from there to
site C. However, in EDP, site B does not hold R21 devices with real capacity. This topology offers capacity and cost savings, as
site B only uses cache to receive the replicated data from site A and transfer it to site C.
SRDF/Star: SRDF/Star offers an intelligent three-site topology similar to concurrent SRDF, where site A replicates
simultaneously to site B and site C. However, if site A fails, site B and site C can communicate to merge the changes and resume
DR. For example, SRDF/Star replication between site A and B uses SRDF/S and replication between site A and C uses SRDF/A. If

site A fails, site B can send the remaining changes to site C for a no-data-loss solution at any distance. Site B can become a DR
site for site C afterwards, until site A can come back.

SRDF/AR: SRDF Automated Replication can be set up as either a two-site or a three-site replication topology. It offers a slower replication
option for times when network bandwidth is limited, without performance overhead to the application. In a two-site topology, SRDF/AR uses TimeFinder to create
a PiT replica of production on site A, then uses SRDF to replicate it to site B, where another TimeFinder replica is created as a
gold copy; then the process repeats. In a three-site topology, site A replicates to site B using SRDF/S. In site B, TimeFinder is
used to create a replica, which is then replicated to site C. In site C the gold copy replica is created, and the process repeats itself.
There are also 4-site topologies, though they are beyond the scope of this paper. For full details on SRDF modes, topologies, and
other details refer to the VMAX3 Family with HYPERMAX OS Product Guide.

APPENDIX III SOLUTIONS ENABLER CLI COMMANDS FOR TIMEFINDER SNAPVX MANAGEMENT
Creation of periodic snaps
This command allows the creation of periodic snapshots from a database storage group. All the objects associated with that storage group
are included in the snapshot, and a consistent point-in-time snapshot is created. The same syntax can also be used for a linked target
storage group (see the note below). A snapshot with the same name can be created repeatedly, which increments the generation number,
with generation 0 being the most recent.

# symsnapvx -sid 536 -sg SQLDB_SG -name SQLDB_Snap_1 establish [-ttl -delta <# of days>]
Execute Establish operation for Storage Group SQLDB_SG (y/[n]) ? y

Establish operation execution is in progress for the storage group SQLDB_SG. Please wait...
Polling for Establish.............................................Started.
Polling for Establish.............................................Done.
Polling for Activate..............................................Started.
Polling for Activate..............................................Done.

Establish operation successfully executed for the storage group SQLDB_SG

Listing details of a snap


This command shows the details of a snapshot, including delta tracks, non-shared tracks, and the expiration time. To obtain the count of
shared tracks for a snapshot, take the difference between the delta tracks and the non-shared tracks in the listing (in the output below,
3 - 3 = 0 shared tracks). The command also lists all the snapshots for the given storage group.

# symsnapvx -sid 536 -sg SQLDB_SG -name SQLDB_Snap_1 list -detail


Storage Group (SG) Name : SQLDB_SG
SG's Symmetrix ID : 000196700536 (Microcode Version: 5977)
Total
Sym Flgs Deltas Non-Shared
Dev Snapshot Name Gen FLRG Snapshot Timestamp (Tracks) (Tracks) Expiration Date
----- ---------------------- ---- ---- ------------------------ ---------- ---------- ------------------------
000BC SQLDB_Snap_1 0 .... Tue Mar 31 10:12:51 2015 3 3 Wed Apr 1 10:12:51 2015


Flgs:
(F)ailed : X = Failed, . = No Failure
(L)ink : X = Link Exists, . = No Link Exists
(R)estore : X = Restore Active, . = No Restore Active
(G)CM : X = GCM, . = Non-GCM

Note: The target storage groups contain the linked-target devices of the production snapshots. They should be added to a masking view to make the target
devices accessible to the mount host.

Linking the snap to a storage group
This command shows how to link a snapshot to a target storage group (see the note above). By default, linking is done in no-copy mode.
# symsnapvx -sid 536 -sg SQLDB_SG -snapshot_name SQLDB_Snap_1 -lnsg SQLDB_MNT link [-copy]

Execute Link operation for Storage Group SQLDB_SG (y/[n]) ? y


Link operation execution is in progress for the storage group SQLDB_SG. Please wait...
Polling for Link..................................................Started.
Polling for Link..................................................Done.
Link operation successfully executed for the storage group SQLDB_SG
Verifying current state of the snap
This command provides the current summary of the given snapshot. It shows the number of devices included in the snapshot and the
total number of tracks protected but not yet copied. By default, all snapshots are linked in no-copy mode. When the link is created with
the copy option, completion of the copy is indicated by a Total Remaining count of 0. The same command can be used to check
the remaining tracks to copy during a restore operation.
# symsnapvx -sid 536 -sg SQLDB_SG -snapshot_name SQLDB_Snap_1 verify -summary

Storage Group (SG) Name : SQLDB_SG


Snapshot State Count
----------------------- ------
Established 8
EstablishInProg 0
NoSnapshot 0
Failed 0
----------------------- ------
Total 8
Track(s)
-----------
Total Remaining 38469660
All devices in the group 'SQLDB_SG' are in 'Established' state.

Listing linked snaps


This command lists the named linked snap and its current status.
# symsnapvx -sid 536 -sg SQLDB_SG -snapshot_name SQLDB_Snap_1 list -linked

Storage Group (SG) Name : SQLDB_SG


SG's Symmetrix ID : 000196700536 (Microcode Version: 5977)
-------------------------------------------------------------------------------
Sym Link Flgs
Dev Snapshot Name Gen Dev FCMD Snapshot Timestamp
----- -------------------------------- ---- ----- ---- ------------------------
000BC SQLDB_Snap_1 0 00053 ..X. Tue Mar 31 10:12:52 2015
000BD SQLDB_Snap_1 0 00054 .... Tue Mar 31 10:12:52 2015
000BE SQLDB_Snap_1 0 00055 .... Tue Mar 31 10:12:52 2015
000BF SQLDB_Snap_1 0 00056 .... Tue Mar 31 10:12:52 2015
000C0 SQLDB_Snap_1 0 00057 ..XX Tue Mar 31 10:12:52 2015
000C1 SQLDB_Snap_1 0 00058 ...X Tue Mar 31 10:12:52 2015
000C2 SQLDB_Snap_1 0 00059 ...X Tue Mar 31 10:12:52 2015
000C3 SQLDB_Snap_1 0 0005A ...X Tue Mar 31 10:12:52 2015

Flgs:
(F)ailed : F = Force Failed, X = Failed, . = No Failure
(C)opy : I = CopyInProg, C = Copied, D = Copied/Destaged, . = NoCopy Link
(M)odified : X = Modified Target Data, . = Not Modified
(D)efined : X = All Tracks Defined, . = Define in progress


Restore from a snap
This command shows how to restore a storage group from a point-in-time snap. Once the restore operation completes, the restore
session can be terminated while keeping the original point-in-time snap for subsequent use.
# symsnapvx -sid 536 -sg SQLDB_SG -snapshot_name SQLDB_Snap_1 restore
# symsnapvx -sid 536 -sg SQLDB_SG -snapshot_name SQLDB_Snap_1 verify -summary
# symsnapvx -sid 536 -sg SQLDB_SG -snapshot_name SQLDB_Snap_1 terminate -restored

APPENDIX IV SOLUTIONS ENABLER CLI COMMANDS FOR SRDF MANAGEMENT


Listing local and remote VMAX SRDF adapters
This command shows how to list existing SRDF directors, available ports and dynamic SRDF groups. The command listed below
should be run on both local and remote VMAX arrays to get the full listing needed for subsequent commands.
# symcfg -sid 536 list -ra all
Symmetrix ID: 000196700536 (Local)
S Y M M E T R I X R D F D I R E C T O R S
Remote Local Remote Status
Ident Port SymmID RA Grp RA Grp Dir Port
----- ---- ------------ -------- -------- ---------------
RF-1H 10 000196700536 1 (00) 1 (00) Online Online
RF-2H 10 000196700536 1 (00) 1 (00) Online Online

# symcfg -sid 535 list -ra all


Symmetrix ID: 000196700535 (Local)
S Y M M E T R I X R D F D I R E C T O R S
Remote Local Remote Status
Ident Port SymmID RA Grp RA Grp Dir Port
----- ---- ------------ -------- -------- ---------------
RF-1E 7 000196700535 1 (00) 1 (00) Online Online
RF-2E 7 000196700535 1 (00) 1 (00) Online Online

Creating dynamic SRDF groups


This command shows how to create a dynamic SRDF group. Based on the output generated from the prior command, a new dynamic
SRDF group can be created with proper director ports and group numbers.
# symrdf addgrp -label SQLDB -rdfg 20 -sid 536 -dir 1H:10 -remote_sid 535 -remote_dir 1E:7 -remote_rdfg 20
Execute a Dynamic RDF Addgrp operation for group
'SQLDB_1' on Symm: 000196700536 (y/[n]) ? y
Successfully Added Dynamic RDF Group 'SQLDB_1' for Symm: 000196700536

Creating SRDF device pairs for a storage group


This command shows how to create SRDF device pairs between local and remote VMAX arrays, identify R1 and R2 devices, and start
syncing the tracks from R1 to R2 between those for remote protection.
# symrdf -sid 536 -sg SQLDB_SG -rdfg 20 createpair -type R1 -remote_sg SQLDB_R2 -establish
Execute an RDF 'Create Pair' operation for storage
group 'SQLDB_SG' (y/[n]) ? y

An RDF 'Create Pair' operation execution is


in progress for storage group 'SQLDB_SG'. Please wait...

Create RDF Pair in (0536,020)....................................Started.


Create RDF Pair in (0536,020)....................................Done.
Mark target device(s) in (0536,020) for full copy from source....Started.
Devices: 00BC-00C3 in (0536,020).................................Marked.
Mark target device(s) in (0536,020) for full copy from source....Done.
Merge track tables between source and target in (0536,020).......Started.
Devices: 00BC-00C3 in (0536,020).................................Merged.
Merge track tables between source and target in (0536,020).......Done.
Resume RDF link(s) for device(s) in (0536,020)...................Started.
Resume RDF link(s) for device(s) in (0536,020)...................Done.

The RDF 'Create Pair' operation successfully executed for


storage group 'SQLDB_SG'.

Listing the status of SRDF groups
This command shows how to get information about the existing SRDF group.
# symrdf -sid 536 list -rdfg 20
Symmetrix ID: 000196700536

Local Device View


---------------------------------------------------------------------------
STATUS MODES RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MDATE Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- ------- ------- --- ---- -------------
000BC 00068 R1:20 RW RW NR S..1. 0 3058067 RW RW Split
000BD 00069 R1:20 RW RW NR S..1. 0 3058068 RW RW Split

Total ------- -------


Track(s) 0 13141430
MB(s) 0 1642679
Legend for MODES:
M(ode of Operation) : A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
: M = Mixed
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
(Mirror) T(ype) : 1 = R1, 2 = R2
(Consistency) E(xempt): X = Enabled, . = Disabled, M = Mixed, - = N/A

Restoring SRDF groups


This command shows how to restore an SRDF group from R2 to R1.
# symrdf -sid 536 -sg SQLDB_R2 -rdfg 20 restore
Execute an RDF 'Incremental Restore' operation for storage
group 'SQLDB_R2' (y/[n]) ? y

An RDF 'Incremental Restore' operation execution is


in progress for storage group 'SQLDB_R2'. Please wait...

Write Disable device(s) in (0536,020) on SA at source (R1).......Done.


Write Disable device(s) in (0536,020) on RA at target (R2).......Done.
Suspend RDF link(s) for device(s) in (0536,020)..................Done.
Mark Copy Invalid Tracks in (0536,020)...........................Started.
Devices: 0068-006B in (0536,020).................................Marked.
Mark Copy Invalid Tracks in (0536,020)...........................Done.
Mark source device(s) in (0536,020) to refresh from target.......Started.
Devices: 00BC-00C0, 00C3-00C3 in (0536,020)......................Marked.
Mark source device(s) in (0536,020) to refresh from target.......Done.
Merge track tables between source and target in (0536,020).......Started.
Devices: 00BC-00C3 in (0536,020).................................Merged.
Merge track tables between source and target in (0536,020).......Done.
Resume RDF link(s) for device(s) in (0536,020)...................Started.
Resume RDF link(s) for device(s) in (0536,020)...................Done.
Read/Write Enable device(s) in (0536,020) on SA at source (R1)...Done.

The RDF 'Incremental Restore' operation successfully initiated for


storage group 'SQLDB_R2'.

APPENDIX V - SYMNTCTL VMAX INTEGRATION UTILITY FOR WINDOWS DISK MANAGEMENT


Symntctl is a useful integration utility that extends disk management functionality to better operate with VMAX storage devices. It
comes packaged with the Solutions Enabler kit, and ensures that NTFS is updated with any changes. It helps perform the following
actions:
Flush all database locations
View the physical disk, volume or mount points for every database file and log location

Update the partition table on a disk

Set and clear volume flags
Flush any pending cached file system data to disk

Show individual disk, volume, mount point details


Mount and unmount volumes to a drive letter or mount point
Manipulate disk signatures
Scan the drive connections and discover new disks available to the system
Mask devices to and unmask devices from the Windows host.
Figure 27 shows symntctl flush and umount usage for a mount path.

Figure 27. Symntctl command usage to flush, un-mount, mount
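As an illustration of the flow in Figure 27, the following sketch flushes and unmounts a SQL mount point before a relink or restore, then rescans and remounts it afterwards. The operation names come from the list above; the -path and -vol option spellings and the volume name placeholder are assumptions and should be verified against the symntctl section of the Solutions Enabler documentation.

# Quiesce and release the NTFS mount point before changing the underlying device contents
symntctl flush -path C:\OLTP1_Data1
symntctl umount -path C:\OLTP1_Data1

# ... relink or restore the SnapVX snapshot here ...

# Rediscover the disks and bring the volume back at the same mount point
symntctl rescan
symntctl mount -path C:\OLTP1_Data1 -vol <volume_name>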

APPENDIX VI SCRIPTING ATTACH OR DETACH FOR A SQL SERVER DATABASE USING
WINDOWS POWERSHELL
Figure 28 and Figure 29 illustrate the steps for attaching or detaching an SQL Server database.

Figure 28. Detach a SQL Server database using Windows PowerShell and sp_detach_db

Figure 29. Attach a SQL Server database using Windows PowerShell Invoke-Sqlcmd
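Because the scripts in Figures 28 and 29 appear only as screen captures, a minimal Windows PowerShell sketch of the same detach/attach flow follows. It assumes the SqlServer (or older SQLPS) module is available on the mount host, uses an assumed instance name, and uses the OLTP1 file paths from Table 4; it illustrates sp_detach_db and CREATE DATABASE ... FOR ATTACH rather than reproducing the exact scripts in the figures.

Import-Module SqlServer          # or SQLPS with older SQL Server versions

$instance = 'MOUNTHOST'          # assumed SQL Server instance name
$db       = 'OLTP1'

# Detach the database before unmounting the SnapVX target volumes
Invoke-Sqlcmd -ServerInstance $instance -Query @"
ALTER DATABASE [$db] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC master.dbo.sp_detach_db @dbname = N'$db';
"@

# After the target volumes are mounted again, attach the database from its files
Invoke-Sqlcmd -ServerInstance $instance -Query @"
CREATE DATABASE [$db]
ON (FILENAME = N'C:\OLTP1_Data1\MSSQL_OLTP_root.mdf'),
   (FILENAME = N'C:\OLTP1_Logs\OLTP1_log.ldf')
FOR ATTACH;
"@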

APPENDIX VII EXAMPLE OUTPUTS
Example 1 in Figure 30 shows a listing of an SQL database storage group and its storage device details.

Figure 30. Listing of a SQL Database storage group

Example 2 in Figure 31 shows a listing of allocations and track allocations for a range of devices (0020-0024) that are part of a
SQL database storage group.

Figure 31. Listing of allocations and track allocations

Example 3 in Figure 32 shows how to create a TimeFinder SnapVX snapshot for an entire storage group. In cases where the SQL
Server database occupies the entire storage group, SnapVX can perform its operations at the storage group level itself.
This simplifies the task of replicating a group of devices for a SQL Server administrator.

Figure 32. Creating a TimeFinder SnapVX

Example 4 in Figure 33 shows SnapVX listing details for the storage group snapsource.

Figure 33. SnapVX listing

Example 5 in Figure 34 illustrates linking a TimeFinder SnapVX snapshot in default no-copy mode to a target SQL storage group.

Figure 34. Linking a TimeFinder SnapVX

Example 6 in Figure 35 lists the output of a linked TimeFinder SnapVX snapshot in copy mode to a target SQL storage group.

Figure 35. Output of a linked TimeFinder SnapVX

Example 7 in Figure 36 illustrates re-linking a target SQL storage group to a TimeFinder SnapVX snapshot. Re-linking provides
incremental refreshes of a linked target storage group from a different SnapVX snapshot with a different PiT.

Figure 36. Re-linking a target

Example 8 in Figure 37 shows a TimeFinder SnapVX snapshot list of a linked no-copy storage group.

Figure 37. SnapVX snapshot linked no-copy

Example 9 in Figure 38 shows a copied and de-staged linked storage group output with -detail.

Figure 38. De-staged linked storage group

Example 10 in Figure 39 shows source volumes (500GB each) which will be participating in a TimeFinder SnapVX linked session in
copy mode. The linked target volumes (1TB each) are larger than the source volumes, as shown in Example 11. Example 10 and
Example 11 together show how linked target volumes in copy mode that are larger than the source volume can be mounted back to
the original host.

Figure 39. Source volumes (500 GB each)

Example 11 in Figure 40 shows linked target Windows volumes of size 1 TB that are targets of the smaller (500 GB) source volumes
shown above. The targets can be extended using Windows Disk Management to realize their full capacity. This helps meet
capacity growth needs for the database through LUN expansion, although not seamlessly: it may involve production I/O downtime
during the switch-over to the larger volumes.

Figure 40. Linked targets can be extended using windows disk management
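The extension shown in Figure 40 can also be performed with the Windows Storage module cmdlets. A minimal sketch follows, assuming the linked target volume is mounted at drive letter E: (adjust accordingly for mount-point based volumes).

Update-HostStorageCache                                      # refresh the host's view of the disks
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax   # largest size the partition can grow to
Resize-Partition -DriveLetter E -Size $max                   # grow the partition and its NTFS volume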

Example 12 in Figure 41 demonstrates the symdev free all command. This command frees all allocations on the unlinked target
Windows volumes, providing the ability to wipe the volumes of all data. The sleep value used in the script is
arbitrary. In this example, devices 00020 through 00024 are freed of all allocations, and the allocations are
reclaimed into the storage resource pool (SRP). The free all command should be used with utmost caution in production
environments, and only with the knowledge of the administrators. The example highlights the end result of free all: the pool allocated
tracks percentage for the range of devices is zero.

Figure 41. Symdev free all
