Dell EMC Unisphere For PowerMax Product Guide V9.2.1
Guide
9.2.1
Rev. 02
March 2021
• Overview of Unisphere........................................................................................................................................................................................... 3
• Capacity information............................................................................................................................................................................................... 4
• Understanding Data Reduction............................................................................................................................................................................. 4
• Login authentication................................................................................................................................................................................................ 7
• Functionality supported by each OS type.......................................................................................................................................................... 7
• Unisphere dashboards overview...........................................................................................................................................................................9
• Understanding the system health score............................................................................................................................................................ 11
• Manage jobs............................................................................................................................................................................................................. 12
• Server alerts.............................................................................................................................................................................................................12
• Understanding settings......................................................................................................................................................................................... 13
• Understanding licenses..........................................................................................................................................................................................13
• Understanding user authorization...................................................................................................................................................................... 14
• Individual and group roles..................................................................................................................................................................................... 14
• Roles...........................................................................................................................................................................................................................14
• User IDs..................................................................................................................................................................................................................... 15
• Roles and associated permissions...................................................................................................................................................................... 16
• RBAC roles for TimeFinder SnapVX local and remote replication actions............................................................................................... 18
• RBAC roles for SRDF local and remote replication actions......................................................................................................................... 19
• Understanding access controls for volumes................................................................................................................................................... 20
• Storage Management........................................................................................................................................................................................... 20
• Understanding storage provisioning.................................................................................................................................................................. 21
• Understanding storage groups........................................................................................................................................................................... 22
• Understanding data reduction............................................................................................................................................................................ 23
• Understanding service levels.............................................................................................................................................................................. 23
• Suitability Check restrictions.............................................................................................................................................................................. 23
• Understanding storage templates......................................................................................................................................................................24
• Understanding Storage Resource Pools.......................................................................................................................................................... 24
• Understanding volumes........................................................................................................................................................................................ 24
• Understanding Federated Tiered Storage ...................................................................................................................................................... 24
• Understanding FAST ............................................................................................................................................................................................ 24
• Understanding Workload Planner...................................................................................................................................................................... 25
• Understanding time windows..............................................................................................................................................................................25
• Understanding FAST.X......................................................................................................................................................................................... 25
• Overview of external LUN virtualization.......................................................................................................................................................... 26
• Understanding tiers............................................................................................................................................................................................... 27
• Understanding thin pools..................................................................................................................................................................................... 27
• Understanding disk groups.................................................................................................................................................................................. 27
• Understanding Virtual LUN Migration...............................................................................................................................................................27
• Understanding vVols............................................................................................................................................................................................. 28
• Host Management................................................................................................................................................................................................. 28
• Understanding hosts............................................................................................................................................................................................. 28
• Understanding masking views............................................................................................................................................................................ 29
• Understanding port groups..................................................................................................................................................................................29
• Understanding initiators....................................................................................................................................................................................... 29
• Understanding PowerPath hosts....................................................................................................................................................................... 29
• Understanding mainframe management.......................................................................................................................................................... 29
• Data protection management............................................................................................................................................................................. 30
• Manage remote replication sessions..................................................................................................................................................................31
• Understanding Snapshot policy...........................................................................................................................................................................31
• Understanding SRDF/Metro Smart DR............................................................................................................................................................32
• Understanding non-disruptive migration..........................................................................................................................................................33
• Understanding Virtual Witness .......................................................................................................................................................................... 34
• Understanding SRDF Delta Set Extension (DSE) pools...............................................................................................................................35
• Understanding TimeFinder/Snap operations.................................................................................................................................................. 35
• Understanding Open Replicator......................................................................................................................................................................... 35
• Open Replicator session options........................................................................................................................................................................35
• Understanding device groups............................................................................................................................................................................. 37
• Understanding SRDF groups............................................................................................................................................................................... 37
• SRDF session modes............................................................................................................................................................................................. 38
• SRDF session options............................................................................................................................................................................................38
• SRDF/A control actions........................................................................................................................................................................................ 41
• SRDF group modes.................................................................................................................................................................................................41
• SRDF group SRDF/A flags.................................................................................................................................................................................. 42
• Understanding TimeFinder/Clone operations.................................................................................................................................................42
• Understanding TimeFinder/Mirror sessions.................................................................................................................................................... 43
• Understanding TimeFinder SnapVX...................................................................................................................................................................43
• Understanding RecoverPoint.............................................................................................................................................................................. 43
• Understanding Performance Management..................................................................................................................................................... 43
• Database Storage Analyzer (DSA) Management...........................................................................................................................................43
• Understanding Unisphere support for VMware............................................................................................................................................. 44
• Understanding eNAS.............................................................................................................................................................................................44
• Understanding iSCSI............................................................................................................................................................................................. 45
• Understanding Cloud Mobility for Dell EMC PowerMax.............................................................................................................................. 45
• Manage CloudIQ settings.....................................................................................................................................................................................45
• Manage CyberSecIQ settings............................................................................................................................................................................. 46
• Understanding dynamic cache partitioning..................................................................................................................................................... 46
Overview of Unisphere
Unisphere enables the user to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems.
Unisphere is an HTML5 web-based application that enables you to configure and manage PowerMax, VMAX All Flash, and VMAX
storage systems. The term Unisphere incorporates "Unisphere for PowerMax" for the management of PowerMax and All Flash
storage systems running PowerMaxOS 5978, and "Unisphere for VMAX" for the management of VMAX All Flash and VMAX
storage systems running HYPERMAX OS 5977 and Enginuity OS 5876.
Blog posts and videos on Unisphere functionality are available online.
The side panel has the following items when the All Systems view is selected:
● HOME—View the systems dashboard of all storage systems being managed
● PERFORMANCE—Monitors and manages storage system performance data (Dashboards, Charts, Analyze, Heatmap,
Reports, Plan, Real-Time traces, and Performance Database management). Refer to Understanding Performance
Management on page 43 for more information.
● VMWARE—View the relevant storage-related objects on an ESXi server and troubleshoot storage performance-related issues on that server. Refer to Understanding Unisphere support for VMware on page 44 for more information.
● DATABASES—Monitors and troubleshoots database performance issues. Refer to Database Storage Analyzer (DSA)
Management on page 43 for more information.
● EVENTS—Includes Alerts and Job List.
NOTE: For additional information about events and alerts, see the Events and Alerts for PowerMax and VMAX Users Guide.
● SUPPORT—Displays support information.
You can hide the side panel by clicking the collapse icon and restore it by clicking the icon again.
You can return to the All Systems view by clicking HOME.
The side panel has the following items when the storage system-specific view is selected:
● HOME—View the systems dashboard of all storage systems being managed
● DASHBOARD—View the following dashboards for a selected storage system: Capacity and Performance, System Health,
Storage Group compliance, Capacity, and Replication
● STORAGE—Manage storage (storage groups, service levels, templates, Storage Resource Pools, volumes, external storage,
vVols, FAST policies, tiers, thin pools, disk groups, and VLUN migration). Refer to Storage Management on page 20 for
more information.
● HOSTS—Manage hosts (hosts, masking views, port groups, initiators, XtremSW Cache Adapters, PowerPath Hosts,
mainframe, and CU images). Refer to Host Management on page 28 for more information.
● DATA PROTECTION—Manage data protection (storage groups, device groups, SRDF groups, migrations, virtual witness,
Snapshot Policies, MetroDR, Open Replicator, SRDF/A DSE pools, TimeFinder SnapVX pools, and RecoverPoint systems).
Refer to Data protection management on page 30 for more information.
● PERFORMANCE—Monitors and manages storage system performance data (Dashboards, Charts, Analyze, Heatmap,
Reports, Plan, Real-Time traces, and Performance Database management). Refer to Understanding Performance
Management on page 43 for more information.
● SYSTEM—Includes Hardware, Properties, File (eNAS), Cloud, and iSCSI.
● EVENTS—Includes Alerts, Job List, and Audit log.
NOTE: For additional information about events and alerts, see the Events and Alerts for PowerMax and VMAX Users Guide.
● SUPPORT—Displays support information.
The following options are available from the title bar:
● Discover systems
● Refresh system information
● Search for objects
● View newly added features
● View and manage alerts
● View and manage jobs
● View online help
● Exit the console
A Unisphere Representational State Transfer (REST) API is also available. The API enables you to access diagnostic,
performance and configuration data, and also enables you to perform provisioning operations on the storage system.
Supporting documentation
Perform the following steps to access REST API documentation:
● Point the browser to: https://{UNIVMAX_IP}:{UNIVMAX_PORT}/univmax/restapi/docs where UNIVMAX_IP is
the IP address and UNIVMAX_PORT is the port of the host running Unisphere.
● Copy the .zip file (restapi-docs.zip) locally, extract the file, and go to target/docs/index.html.
● To access the documented resources, open the index.html file.
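The documentation URL in the steps above can be assembled programmatically before downloading the archive. This is a minimal sketch; the helper name is illustrative, and the IP address and port in the usage note are hypothetical placeholders, while the path itself is taken directly from the step above.

```python
def restapi_docs_url(univmax_ip: str, univmax_port: int) -> str:
    """Build the URL of the bundled REST API documentation archive,
    following the https://{UNIVMAX_IP}:{UNIVMAX_PORT}/univmax/restapi/docs
    pattern described above."""
    return f"https://{univmax_ip}:{univmax_port}/univmax/restapi/docs"
```

For example, `restapi_docs_url("10.0.0.5", 8443)` returns `https://10.0.0.5:8443/univmax/restapi/docs`; the restapi-docs.zip archive at that location can then be fetched with any HTTPS client.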
Information on the installation of Unisphere for PowerMax can be found in the Unisphere for PowerMax Installation Guide at the
Dell EMC support website or the technical documentation page.
For information specific to this Unisphere product release, see the Unisphere for PowerMax Release Notes at the Dell EMC
support website.
Your comments—Your suggestions help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your feedback to: content feedback.
Capacity information
Unisphere supports measurement of capacity using both the base 2 (binary) and base 10 (decimal) systems.
Storage capacity can be measured using two different systems: base 2 (binary) and base 10 (decimal). The International System of Units (SI) recommends using the base 10 measurement to describe storage capacity. In base 10 notation, one MB is equal to 1 million bytes, and one GB is equal to 1 billion bytes.
Operating systems generally measure storage capacity using the base 2 measurement system. Unisphere and Solutions Enabler
use the base 2 measurement system to display storage capacity with the TB notation as it is more universally understood. In
base 2 notation, one MB is equal to 1,048,576 bytes and one GB is equal to 1,073,741,824 bytes.
Name        Abbreviation    Binary Power    Binary Value (in Decimal)    Decimal Power    Decimal Equivalent
kilobyte    KB              2^10            1,024                        10^3             1,000
megabyte    MB              2^20            1,048,576                    10^6             1,000,000
gigabyte    GB              2^30            1,073,741,824                10^9             1,000,000,000
terabyte    TB              2^40            1,099,511,627,776            10^12            1,000,000,000,000
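The two measurement systems in the table above can be sketched as a small conversion helper. The function and table names here are illustrative only and are not part of any Unisphere or Solutions Enabler interface.

```python
# Unit multipliers from the table above: binary (base 2) and decimal (base 10).
BINARY = {"KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
DECIMAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def to_unit(num_bytes: int, unit: str, base2: bool = True) -> float:
    """Express a raw byte count in the given unit, using base 2 notation
    (as Unisphere and Solutions Enabler do) or base 10 notation."""
    table = BINARY if base2 else DECIMAL
    return num_bytes / table[unit]
```

One binary GB (1,073,741,824 bytes) is about 1.07 decimal GB, which is why the same device can appear to have two slightly different sizes depending on which measurement system a tool reports in.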
Understanding Data Reduction
Host Written data is any data written to a provisioned device, even if the data is stored only for a snapshot. Because Host Written is a measurement of what has been written, it is not affected by data reduction features, not even Pattern Detection, which deallocates all space on disk and stores the pattern indicator in metadata.
Replication Savings are copies of Host Written data generated internally in the PowerMax using Local Replication technologies.
PowerMax does not consider replication savings a part of any data reduction measurements.
Similar to initial TDev creation, when a snapshot is initially created it does not contain any host written data, only Thin or Replication data. As the Source or Production TDev takes writes, the snapshot takes ownership of the original data (termed Changed Data), and the snapshot is then considered to have Host Written data as well. Therefore, snapshots may appear with all the same statistics and savings options as TDevs.
PowerMax gives you the option to enable or disable data reduction features on different applications within the array. You
may want to disable data reduction on volumes which have poor data reduction, or volumes which have extreme performance
requirements.
The calculation of the PowerMax Data Reduction Ratio (DRR) does not include this disabled area.
The newly introduced flyover shows the breakdown of the Data Reduction Enabled (Reducible + Unevaluated + Unreducible) portion of the user allocations.
Data which is enabled for data reduction is processed by dedicated hardware offload engines that use multiple compression
algorithms and hashing techniques. The result of these techniques is usually a significant amount of savings, but occasionally
there are segments of data that cannot be reduced. Data which PowerMax can reduce with deduplication, pattern detection,
compression, or advanced compression is counted as "Reducible", and when all those techniques fail to save any space, that data is considered "Unreducible".
There is also a third category of data referred to as "Unevaluated". Unevaluated data is data that the PowerMax data reduction solution has not yet processed. There are typically two reasons for data to be considered Unevaluated. The first occurs when a user enables data reduction on a large set of already written data. Immediately after data reduction is enabled, that written data is counted under "Unevaluated". As the system evaluates the "Unevaluated" data, it moves to either "Reducible" or "Unreducible".
It is normal to have a small quantity of "Unreducible" data present, but in certain use cases the amount of "Unreducible" data may be high. Typical sources of unreducible data on PowerMax systems are host encryption, archival data (host compression), medical images, audio or video files, and compressed databases. To help identify the source of this Unreducible data, it is also reported at the volume and Storage Group levels.
"Reducible" data refers to data that benefits from pattern detection, compression, or deduplication. Pattern Detection occurs
when predefined patterns (such as all zeroes) are encountered and instead of allocating disk space and storing the pattern, the
PowerMax notes the pattern in the metadata (resulting in 100% savings). Note that this Pattern Detected data does not count
under PowerMax's traditional term "Allocations".
Deduplication and compression can occur simultaneously on a track, where a track can be compressed up to 16:1 and deduplicated up to 75:1, for an overall maximum savings of 1200:1 on a particular set of data.
Equations:
Provisioned = Thin Savings + Replication Savings + Host Written
Host Written = Reducible + Unevaluated + Unreducible + Data Reduction Disabled
Allocations = Host Written - Pattern Detected
DRR = (Unreducible + Reducible - Pattern Detected) / (Unreducible + Reducible - Pattern Detected - Deduplication&Compression)
This DRR is the DRR that you can observe on the main Unisphere display.
DRR_Reducible = Reducible / (Reducible - Pattern Detected - Deduplication&Compression)
This DRR is the DRR that you can observe on the Unisphere flyover.
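The equations above can be checked with a short sketch. The function names are illustrative, and the figures used in the usage note are made-up sample values, not measurements from any array.

```python
def drr(reducible: float, unreducible: float,
        pattern_detected: float, dedup_compression: float) -> float:
    """Overall Data Reduction Ratio, per the equation above. Unevaluated
    and Data Reduction Disabled capacity appear in neither term."""
    evaluated = unreducible + reducible - pattern_detected
    return evaluated / (evaluated - dedup_compression)

def drr_reducible(reducible: float, pattern_detected: float,
                  dedup_compression: float) -> float:
    """DRR of the Reducible portion only, as shown on the flyover."""
    return reducible / (reducible - pattern_detected - dedup_compression)
```

For example, 100 units of Reducible data, 20 Unreducible, 10 saved by Pattern Detection, and 55 saved by deduplication and compression yield an overall DRR of 2.0:1. The 1200:1 ceiling mentioned earlier is simply the product of the 16:1 compression and 75:1 deduplication maximums.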
Login authentication
Unisphere authenticates users attempting to access the system.
When you log in, Unisphere checks the following locations for validation:
● Windows — The user has a Windows account on the server. (Log in to Unisphere with your Windows Domain\Username
and Password.)
● LDAP-SSL — The user account is stored on an LDAP-SSL server. (Log in to Unisphere with your LDAP-SSL Username and Password.)
The Unisphere Administrator or SecurityAdmin must set the LDAP-SSL server location in the LDAP-SSL Configuration dialog
box.
● Local — The user has a local Unisphere account. Local user accounts are stored locally on the Unisphere server host. (Log
in to Unisphere with your Username and Password.)
User names are case-sensitive and allow alphanumeric characters of either case, an underscore, a dash, or a period:
● a-z
● A-Z
● 0-9
● _
● .
● -
Passwords cannot exceed 16 characters. There are no restrictions on special characters when using passwords.
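The account rules above can be expressed as a small validation sketch. The helper names are illustrative, and the non-empty password check is my assumption; the guide states only the 16-character maximum and the allowed username characters.

```python
import re

# Letters of either case, digits, underscore, period, or dash, per the list above.
USERNAME_RE = re.compile(r"[A-Za-z0-9_.\-]+")

def valid_username(name: str) -> bool:
    """True if the name uses only the characters Unisphere allows."""
    return USERNAME_RE.fullmatch(name) is not None

def valid_password(password: str) -> bool:
    # Maximum of 16 characters; special characters are unrestricted.
    return 0 < len(password) <= 16
```

Note that comparisons of user names elsewhere must remain case-sensitive, so no lowercasing is applied here.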
The Initial Setup User, Administrator, or SecurityAdmin must create a local Unisphere user account for each user.
Table 1. Functionality supported by each OS type (continued)
Functionality Enginuity OS 5876 HYPERMAX OS 5977 PowerMaxOS 5978
Storage > FAST Policies
Data Protection > TimeFinder Snap Pools
System Health dashboard view for a specific storage system
The System Health dashboard provides a single place from which you can quickly determine the health of the system. You can
also access hardware information.
The System Health section displays values for the following five high-level health or performance metrics: System Utilization, Configuration, Capacity, SG Response Time, and Service Level Compliance. It also displays an overall health score based on these five categories; the overall system health score is the lowest health score out of the five. See Understanding the system health score on page 11 for details on how these scores are calculated. These five categories apply to systems running HYPERMAX OS 5977 or later. For systems running Enginuity 5876, the health score is based on the Hardware, Configuration, Capacity, and SG Response Time scores. The health score is calculated every five minutes.
NOTE: The health score values for Hardware, SG Response Time, and Service Level Compliance are not real time; they are based on values within the last hour.
The Hardware section shows the director count for Front End, Back End, and SRDF Directors and the available port count on
the system. An alert status is indicated through a colored bell beside the title of the highest level alert in that category. If no
alerts are present, then a green tick is displayed.
Replication dashboard view for a specific storage system
The Replication Dashboard provides storage group summary protection information, summarizing the worst states of various
replication technologies and counts of management objects participating in these technologies. For systems running HYPERMAX
OS 5977 and higher, summary information for SRDF, SRDF/Metro, and SnapVX (including zDP snapshots) is displayed. For
systems running Enginuity OS 5876, summary information for SRDF and device groups is displayed.
The Replication Dashboard has an SRDF topology view that visually describes the layout of the SRDF connectivity of the selected
storage system in Unisphere.
The Replication Dashboard provides a Migrations Environments topology view that visually describes the layout of the migration
environments of the selected storage system.
Storage Group Compliance dashboard view for a specific storage system
The Storage Group Compliance dashboard displays how well the workload of the storage system is complying with the overall
service level. Storage group compliance information displays for storage systems that are registered with the Performance
component. The total number of storage groups is listed, along with information about the number of storage groups performing
in accordance with service level targets. A list view of the storage groups is also provided and this can be filtered.
Capacity dashboard view for a specific storage system
The storage system capacity dashboard enables you to see the amount of capacity your storage system is subscribed for, and
the amount of that subscribed capacity that has been allocated. You can also see how efficient the storage system is in using
data reduction technologies.
The SRP capacity dashboard reports the capacity and efficiency breakdown of an SRP. For PowerMaxOS 5978 storage systems
running 9.1, FBA and CKD devices can be configured in a single SRP. This reduces the cost of storage array ownership for
a mixed system and enables the efficient management of drive slot consumption in the array. Where the SRP is of mixed
emulation, you can select by emulation to examine breakdown.
Performance and Capacity dashboard view for a specific storage system
The performance and capacity dashboard for a specific storage system provides a view of key performance and capacity
indicators.
● A Capacity panel displays the following:
○ Subscribed Capacity—A graphical representation of the subscribed capacity of the system (used = blue and free = gray) and the percentage used
○ Usable Capacity—A graphical representation of the usable capacity of the system (used = blue and free = gray) and the percentage used
○ Subscribed Usable Capacity—The percentage of subscribed usable capacity
○ Overall Efficiency—The overall efficiency ratio
○ Trend—A panel displaying usable capacity and subscribed capacity in terabytes
● A Performance panel displays the following graphs over a four-hour, one-week, or two-week period:
○ Host IOs/sec in terms of read and write operations over time.
○ Latency in terms of read and write operations over time.
○ Throughput in terms of read and write operations over time.
To the right of each graph, a list of the top five active storage groups for that graph is displayed. Zooming in to a timeframe on a graph automatically updates the top five storage group lists for that timeframe. Clicking a particular point in time on one graph automatically updates the top five storage group lists for that particular time.
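The percentages shown in the Capacity panel can be derived from the underlying used and total figures. A minimal sketch in Python (the function names and the one-decimal rounding are assumptions for illustration, not Unisphere internals):

```python
def percent_used(used_tb: float, total_tb: float) -> float:
    """Used portion of a capacity gauge (subscribed or usable), in percent."""
    if total_tb <= 0:
        raise ValueError("total capacity must be positive")
    return round(100.0 * used_tb / total_tb, 1)

def subscribed_usable_pct(subscribed_tb: float, usable_tb: float) -> float:
    """Subscribed capacity as a percentage of usable capacity.

    Values over 100% indicate oversubscription through thin provisioning.
    """
    return round(100.0 * subscribed_tb / usable_tb, 1)

print(percent_used(450.0, 600.0))           # 75.0
print(subscribed_usable_pct(900.0, 600.0))  # 150.0
```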
The system utilization score is based on the following thresholds:
● SRDF Director: % busy—Critical 70, Warning 50
● DX Port: % busy—Critical 70, Warning 55
● External Director: % busy—Critical 70, Warning 55
● EDS Director: % busy—Critical 70, Warning 55
● Cache Partition: % WP utilization—Critical 75, Warning 55
The system utilization score is calculated as follows:
● Critical level: minus five points
The Storage Group Response health score is based on software category health scores. Certain key metrics are examined against threshold values; if a metric exceeds its threshold, the health score is negatively affected.
The storage group response score is based on the following:
● Storage Group: Read Response Time, Write Response Time, Response Time
● Database: Read Response Time, Write Response Time, Response Time
For each instance and metric in a particular category, the threshold information is looked up. If it is not found, default thresholds are used.
The storage group response score is calculated as follows:
○ Read Response Time: Critical—minus five points
○ Write Response Time: Critical—minus five points
○ Response Time: Critical—minus five points
Storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978: The Service Level Compliance health score is based on the Workload Planner (WLP) workload state. The health score is reduced when storage groups that have a defined service level are not meeting the service level requirements.
The Service Level compliance score is calculated as follows:
● Underperforming: minus five points
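Taken together, the score calculations above follow a simple deduction model. A minimal sketch (the starting score of 100 and the aggregation are assumptions; the guide states only the per-violation deductions):

```python
# Points deducted for each metric instance that breaches its critical threshold,
# per the score calculations described above.
CRITICAL_DEDUCTION = 5

def health_score(critical_violations: int, start: int = 100) -> int:
    """Deduct points per critical violation, never dropping below zero."""
    return max(0, start - CRITICAL_DEDUCTION * critical_violations)

# A storage group whose read, write, and overall response times all breach
# their critical thresholds loses 15 points:
print(health_score(3))  # 85
```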
Manage jobs
Certain configuration tasks performed on a storage system may not be immediately processed, but instead are kept in a job list for review and submission in batches.
Server alerts
Server alerts are alerts that are generated by Unisphere itself.
Unisphere generates server alerts under the conditions that are listed in the following table:
Checks are run at 10-minute intervals and alerts are raised at 24-hour intervals from the time the server was last started. These time intervals also apply to discover operations; that is, performing a discover operation does not force the delivery of these alerts.
NOTE: Runtime alerts are not storage system-specific. They can be deleted if the user has admin or storage admin rights on
at least one storage system. A user with a monitor role cannot delete the server alerts.
Server alert: Free disk space on the Unisphere installed directory
Threshold (by number of managed volumes):
● 0–64,000 volumes: 100 GB
● 64,000–128,000 volumes: 140 GB
● 128,000–256,000 volumes: 180 GB
Alert details: Free disk space <# GB> is below the minimum requirement of <# GB>

Server alert: Number of managed storage systems
Threshold: 20
Alert details: Number of managed arrays <#> is over the maximum supported number of <#>

Server alert: Number of managed volumes
Threshold: 256,000
Alert details: Number of managed volumes <#> is over the maximum supported number of <#>. Solutions Enabler may indicate a slightly different number of volumes than indicated in this alert.
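The free-disk-space check above is effectively a tier lookup keyed on the number of managed volumes. A sketch (the tier boundaries come from the table; the helper functions themselves are illustrative, not a Unisphere API):

```python
def free_space_threshold_gb(managed_volumes: int) -> int:
    """Minimum free disk space required in the Unisphere install directory."""
    if managed_volumes <= 64_000:
        return 100
    if managed_volumes <= 128_000:
        return 140
    return 180  # 128,000-256,000 managed volumes

def should_alert(free_gb: float, managed_volumes: int) -> bool:
    """True when free space falls below the tier's minimum requirement."""
    return free_gb < free_space_threshold_gb(managed_volumes)

print(should_alert(120, 150_000))  # True: below the 180 GB requirement
```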
Understanding settings
Systems settings are managed from a central point.
The following categories of settings can be modified:
● Preferences—General and Performance settings
● System and Licenses—License Usage, Solutions Enabler, and System Entitlements settings
● Users and Groups—Authentication, Local Users, User Sessions, and Authorized Users and Groups settings
● System Access Control—Access Control entries, Access groups, and Access Pools settings
● Management—System Attributes, Link and Launch, Secure Remote Services, and CloudIQ settings
● Data Protection—Data Protection settings
● Performance—System Registrations, Dashboard Catalog, Real-Time Traces, Metrics, and Export PV settings
● Unisphere Databases—Performance Databases and System Database settings
● DSA Environment—Database Storage Analyzer (DSA) settings
● Alerts—Alerts Policies, Compliance Alert Policies, Performance Thresholds and Alerts, System Thresholds and Alerts, and
Notifications settings
Unisphere 9.1 provides the ability to save specific settings on one array so that these settings can be applied to other arrays of the same family and PowerMax version. Settings can be cloned, imported, and exported.
Understanding licenses
Unisphere supports electronic licensing (eLicensing). eLicensing is a license management solution to help you track and comply
with software license entitlement.
eLicensing uses embedded locking functions and back-office IT systems and processes. It provides you with better visibility into software assets, easier upgrade and capacity planning, and reduced risk of non-compliance, while still adhering to a strict “do no harm” policy to your operations.
When installing licenses with eLicensing, you obtain license files from customer service, copy them to a Solutions Enabler or a
Unisphere host, and load them onto storage systems.
Each license file fully defines the entitlements for a specific system, including its activation type (Individual or Enterprise), the
licensed capacity, and the date the license was created. If you want to add a product title or increase the licensed capacity of an
entitlement, obtain a new license file from online support and load it onto the storage system.
When managing licenses, Solutions Enabler, Unisphere, z/OS Storage Manager (EzSM), MF SCF native command line, TPF, and the IBM i platform console provide detailed usage reports that enable you to better manage capacity and compliance planning.
There are two types of eLicenses: host-based and array-based. Host-based licenses, as the name implies, are installed on the host; array-based licenses are installed on the storage system. For information about the types of licenses and the features
they activate, see the Solutions Enabler Installation Guide.
Unisphere enables you to add and view array-based licenses, and add, view, and remove host-based licenses.
Unisphere uses array-based eLicensing.
NOTE: For more information about eLicensing, see the Solutions Enabler Installation Guide.
Roles
A Unisphere user can assume a number of roles. Each role has a set of associated tasks and permissions.
The following lists the available roles. Note that you can assign up to four of these roles per authorization rule. For a more
detailed look at the permissions that go along with each role, see Roles and associated permissions on page 16.
● None—Provides no permissions.
● Monitor—Performs read-only (passive) operations on a storage system excluding the ability to read the audit log or access
control definitions.
● StorageAdmin—Performs all management (active or control) operations on a storage system and modifies GNS group definitions, in addition to all Monitor operations.
● Administrator—Performs all operations on a storage system, including security operations, in addition to all StorageAdmin
and Monitor operations.
● SecurityAdmin—Performs security operations on a storage system, in addition to all Monitor operations.
● Auditor—Grants the ability to view, but not modify, security settings for a storage system (including reading the audit log, symacl list, and symauth), in addition to all Monitor operations. This is the minimum role required to view the storage system audit log.
● DSA Admin—Collects and analyzes database activity with Database Storage Analyzer.
● Local Replication—Performs local replication operations (SnapVX or legacy Snapshot, Clone, BCV). To create Secure SnapVX snapshots, a user needs StorageAdmin rights at the array level. This role also automatically includes Monitor rights.
● Remote Replication—Performs remote replication (SRDF) operations involving devices and pairs. Users can create, operate upon, or delete SRDF device pairs, but cannot create, modify, or delete SRDF groups. This role also automatically includes Monitor rights.
● Device Management—Grants user rights to perform control and configuration operations on devices. Note that StorageAdmin rights are required to create, expand, or delete devices. This role also automatically includes Monitor rights.
A user cannot change their own role so as to remove Administrator or SecurityAdmin privileges from themselves.
In addition to these user roles, Unisphere includes an administrative role, the Initial Setup User. This user, defined during
installation, is a temporary role that provides administrator-like permissions for the purpose of adding local users and roles to
Unisphere.
User IDs
Users and user groups are mapped to their respective roles by IDs.
These IDs consist of a three-part string in the form:
Type:Domain\Name
Where:
● Type—Specifies the type of security authority that is used to authenticate the user or group. Possible types are:
○ L—Indicates a user or group that LDAP authenticates. In this case, Domain specifies the domain controller on the LDAP
server. For example:
L:danube.com\Finance
Indicates that user group Finance logged in through the domain controller danube.com
○ C—Indicates a user or group that the Unisphere server authenticates. For example:
C:Boston\Legal
Indicates that user group Legal logged in through Unisphere server Boston
○ H—Indicates a user or group that is authenticated by logging in to a local account on a Windows host. In this case,
Domain specifies the hostname. For example:
H:jupiter\mason
Indicates that user mason logged in on host jupiter
○ D—Indicates a user or group that is authenticated by a Windows domain. In this case, Domain specifies the domain or
realm name. For example:
D:sales\putman
Indicates that user putman has logged in through a Windows domain sales.
● Name—Specifies the username relative to that authority. It cannot be longer than 32 characters, and spaces are allowed if delimited with quotes. Usernames can be for individual users or user groups.
Within role definitions, IDs can be either fully qualified (as shown above), partially qualified, or unqualified. When the Domain
portion of the ID string is an asterisk (*), the asterisk is treated as a wildcard, meaning any host or domain.
When configuring group access, the Domain portion of the ID must be fully qualified.
For example:
● D:ENG\jones—Fully qualified path with a domain and username (for individual domain users)
● D:ENG.xyz.com\ExampleGroup—Fully qualified domain name and group name (for domain groups)
● D:*\jones—Partially qualified path that matches username jones in any domain
● H:HOST\jones—Fully qualified path with a hostname and username
● H:*\jones—Partially qualified path that matches username jones on any host
● jones—Unqualified username that matches any jones in any domain on any host
If a user is matched by more than one mapping, the user authorization mechanism uses the more specific mapping. If an exact
match (for example, D:sales\putman) is found, that is used. If a partial match (for example, D:*\putman) is found, that is
used. If an unqualified match (for example, putman) is found, that is used. Otherwise, the user is assigned a role of None.
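The exact-over-partial-over-unqualified precedence described above can be sketched as an ordered lookup. An illustrative implementation (the function and mapping shape are hypothetical, not a Unisphere API):

```python
def resolve_role(user_id: str, mappings: dict[str, str]) -> str:
    """Resolve a role for user_id of the form 'Type:Domain\\Name'.

    Checks the most specific mapping first, per the precedence rules.
    """
    type_, _, rest = user_id.partition(":")
    domain, _, name = rest.partition("\\")
    candidates = (
        f"{type_}:{domain}\\{name}",  # exact match, e.g. D:sales\putman
        f"{type_}:*\\{name}",         # partial match, e.g. D:*\putman
        name,                         # unqualified match, e.g. putman
    )
    for candidate in candidates:
        if candidate in mappings:
            return mappings[candidate]
    return "None"  # no mapping matched: role of None

roles = {"D:*\\putman": "Monitor", "putman": "Auditor"}
print(resolve_role("D:sales\\putman", roles))  # Monitor (partial beats unqualified)
```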
Roles and associated permissions
Users gain access to a storage system or component either directly through a role assignment or indirectly through membership in a user group that has a role assignment.
The Role Based Access Control (RBAC) feature provides a method for restricting the management operations that individual
users or groups of users may perform on storage systems. See https://www.youtube.com/watch?v=2V7KidifeA4 for more
information.
The following diagram outlines the role hierarchy.
NOTE: The RBAC roles for SRDF local and remote replication actions are outlined in RBAC roles for SRDF local and remote
replication actions on page 19.
NOTE: The RBAC roles for Timefinder SnapVX local and remote replication actions are outlined in RBAC roles for
TimeFinder SnapVX local and remote replication actions on page 18.
Table 3. Permissions for Local Replication, Remote Replication and Device Management roles (Local Replication / Remote Replication / Device Management)
● Create/delete user accounts: No / No / No
● Reset user password: No / No / No
● Create roles: No / No / No
● Change own password: Yes / Yes / Yes
● Manage storage systems: No / No / No
● Discover storage systems: No / No / No
● Add/show license keys: No / No / No
● Set alerts and Optimizer monitoring options: No / No / No
● Release storage system locks: No / No / No
● Set Access Controls: No / No / No
● Set replication and reservation preferences: No / No / No
● View the storage system audit log: No / No / No
● Access performance data: Yes / Yes / Yes
● Start data traces: Yes / Yes / Yes
● Set performance thresholds/alerts: No / No / No
● Create and manage performance dashboards: Yes / Yes / Yes
● Collect and analyze database activity with Database Storage Analyzer: No / No / No
● Perform control and configuration operations on devices: No / No / Yes
● Create, expand or delete devices: No / No / No
● Perform local replication operations (SnapVX, legacy Snapshot, Clone, BCV): Yes / No / No
● Create Secure SnapVX snapshots: No / No / No
● Create, operate upon or delete SRDF device pairs: No / Yes / No
● Create, modify or delete SRDF groups: No / No / No
NOTE: Unisphere for PowerMax does not support RBAC device group management.
Action (Local Replication / Remote Replication / Device Management):
● Protection Wizard - Create SnapVX Snapshot: Yes (a) / - / -
● Create Snapshot: Yes (a) / - / -
Notes:
(a) Set Secure is blocked for users who only have Local_REP rights.
(b) The user must have the specified rights on the source volumes.
(c) The user may only choose existing storage groups to link to. Creating a storage group requires StorageAdmin rights.
(d) The user must have the specified rights on the link volumes.
NOTE: Unisphere for PowerMax does not support RBAC device group management.
Action (Local Replication / Remote Replication / Device Management):
● SRDF Set Mode: - / Yes / -
● SRDF Set SRDF/A: - / Yes / -
● SRDF Split: - / Yes / -
● SRDF Suspend: - / Yes / -
● SRDF Swap: - / Yes / -
● SRDF Write Disable: - / Yes / -
Storage Management
Storage consists of the following: storage groups, service levels, templates, storage resource pools, volumes, external storage,
vVols, FAST policies, tiers, thin pools, disk groups, and VLUN migration.
Storage Management covers the following areas:
● Storage Group management - Storage groups are a collection of devices that are stored on the array, and an application, a
server, or a collection of servers use them. Storage groups are used to present storage to hosts in masking/mapping, Virtual
LUN Technology, FAST, and various base operations.
● Service Level management - A service level is the response time target for a storage group. The service level sets the storage array with the required response time target for a storage group, and the array automatically monitors and adapts to the workload as needed to maintain the response time target. The service level includes an optional workload type so it can be optimized to meet performance levels.
● Template management - Using the configuration and performance characteristics of an existing storage group as a
starting point, you can create templates that will pre-populate fields in the provisioning wizard and create a more realistic
performance reservation in your future provisioning requests.
● Storage Resource Pool management - Fully Automated Storage Tiering (FAST) provides automated management of storage
array disk resources to achieve expected service levels. FAST automatically configures disk groups to form a Storage
Resource Pool (SRP) by creating thin pools according to each individual disk technology, capacity, and RAID type.
● Volume management - A storage volume is an identifiable unit of data storage. Storage groups are sets of volumes.
● External Storage management - External Fully Automated Storage Tiering (FAST.X) attaches external storage to storage systems and directs workload movement to these external arrays while retaining access to array features such as local replication, remote replication, storage tiering, data management, and data migration. It also simplifies multi-vendor or Dell EMC storage array management.
● vVol management - VMware vVols enable data replication, snapshots, and encryption to be controlled at the VMDK level
instead of the LUN level, where these data services are performed on a per VM (application level) basis from the storage
array.
● FAST Policies management - A FAST policy consists of one to three DP tiers, or one to four VP tiers, but not a combination
of both DP and VP tiers. Policies define a limit for each tier in the policy. This limit determines the amount of data from a
storage group that is associated with the policy that can reside on the tier.
● Tiers management - FAST automatically moves active data to high-performance storage tiers and inactive data to low-cost,
high-capacity storage tiers.
● Thin Pools management - Storage systems are preconfigured at the factory with virtually provisioned devices. Thin
Provisioning helps reduce cost, improve capacity utilization, and simplify storage management. Thin Provisioning presents a
large amount of capacity to a host and then consumes space only as needed from a shared pool. Thin Provisioning ensures
that thin pools can expand in small increments while protecting performance, and performs nondisruptive shrinking of thin
pools to help reuse space and improve capacity utilization.
● Disk Groups management - A disk group is a collection of hard drives within the storage array that share the same
performance characteristics.
● VLUN Migration management - Virtual LUN Migration (VLUN Migration) enables transparent, nondisruptive data mobility for disk group provisioned and virtually provisioned storage system volumes, between storage tiers and between RAID protection schemes. Virtual LUN can be used to populate newly added drives or move volumes between high-performance and high-capacity drives, resulting in the delivery of tiered storage capabilities within a single storage system. Migrations are performed while providing constant data availability and protection.
Recommended:
1. Use the Create Host dialog box to group host initiators (HBAs).
2. Use the Provision Storage wizard, which steps you through the process of creating the storage group, port group, and masking view.

Advanced:
1. Use the Create Host dialog box to group host initiators (HBAs).
2. Create one or more volumes on the storage system.
3. Use the Create Storage Group dialog box to add the created volumes to a storage group, and associate the storage group with a storage resource pool, a service level, and a workload.
4. Group Fibre Channel and/or iSCSI front-end directors.
5. Associate the host, storage group, and port group into a masking view.
Unisphere provides the following methods for provisioning storage on storage systems running Enginuity OS 5876:
Recommended: This method relies on wizards to step you through the provisioning process. It is best suited for novice users, and for advanced users who do not require a high level of customization (the ability to create their own volumes, storage groups, and so on).
Advanced: This method, as its name implies, is for advanced users who want the ability to control every aspect of the provisioning process.
This section provides the high-level steps for each method, with links to the relevant help topics for more detail.
Regardless of the method you choose, once you have completed the process, a masking view has been created. In the masking
view, the volumes in the storage group are masked to the host initiators and mapped to the ports in the port group.
Before you begin:
The storage system has been configured.
To provision storage for storage systems running Enginuity OS 5876:
Recommended:
1. Use the Create Host dialog box to group host initiators (HBAs).
2. Use the Provision Storage wizard, which steps you through the process of creating the storage group, port group, and masking view. The wizard optionally associates the storage group with a FAST policy.

Advanced:
1. Use the Create Host dialog box to group host initiators (HBAs).
2. Create one or more volumes on the storage system.
3. Use the Create Storage Group wizard to create a storage group. If you want to add the volumes you created in step 2, be sure to set the Storage Group Type to Empty, and then complete adding volumes to storage groups.
4. Group Fibre Channel and/or iSCSI front-end directors.
5. Associate the host, storage group, and port group into a masking view.
6. Optional: Associate the storage group that you created in step 3 with an existing FAST policy and assign a priority value for the association.
Understanding data reduction
Data reduction allows users to reduce user data on storage groups and storage resources.
Data reduction is enabled by default and can be turned on and off at storage group and storage resource level.
If a storage group is cascaded, enabling data reduction at this level enables data reduction for each of the child storage groups.
The user has the option to disable data reduction on one or more of the child storage groups if desired.
To turn the feature off on a particular storage group or storage resource, uncheck the Enable Data Reduction check box in the Create Storage Group, Modify Storage Group, or Add Storage Resource To Storage Container dialogs, or when using the Provision Storage or Create Storage Container wizards.
The following are the prerequisites for using data reduction:
● Data reduction is only allowed on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or
PowerMaxOS 5978.
● Data reduction is allowed for FBA devices only.
● The user must have at least StorageAdmin rights.
● The storage group needs to be FAST managed.
● The associated SRP cannot be composed, either fully or partially, of external storage.
Reporting
Users can see the current compression ratio at the device, storage group, and SRP level. Efficiency ratios are reported in increments of 1/10th:1.
NOTE: External storage is not included in efficiency reports. For mixed SRPs with internal and external storage, only the internal storage is used in the efficiency ratio calculations.
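Reporting a ratio in increments of 1/10th:1 amounts to rounding to one decimal place. A small illustrative helper (the function name and input figures are hypothetical):

```python
def efficiency_ratio(provisioned_tb: float, used_tb: float) -> str:
    """Capacity the data would need without data reduction, relative to
    what it actually consumes, formatted in 1/10th:1 increments."""
    return f"{provisioned_tb / used_tb:.1f}:1"

print(efficiency_ratio(100.0, 42.0))  # 2.4:1
```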
Understanding storage templates
Storage templates are a reusable set of storage requirements that simplify storage management for virtual data centers by
eliminating many of the repetitive tasks required to create and make storage available to hosts or applications.
Understanding volumes
A storage volume is an identifiable unit of data storage. Storage groups are sets of volumes.
The Volumes view in the Unisphere user interface provides you with a single place from which to view and manage all the volume types on the system.
Understanding FAST
Fully Automated Storage Tiering (FAST) automates management of storage system disk resources on behalf of thin volumes.
NOTE: This section describes FAST operations for storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978.
FAST automatically configures disk groups to form a Storage Resource Pool by creating thin pools according to each individual
disk technology, capacity, and RAID type.
FAST technology moves the most active parts of your workloads (hot data) to high-performance flash disks and the least-frequently accessed storage (cold data) to lower-cost drives, using the best performance and cost characteristics of each different drive type. FAST delivers higher performance using fewer drives to help reduce acquisition, power, cooling, and footprint costs. FAST can factor in the RAID protections to ensure that write-heavy workloads go to RAID 1 and read-heavy workloads go to RAID 6. This process is entirely automated and requires no user intervention.
FAST also delivers variable performance levels through service levels. Thin volumes can be added to storage groups and the
storage group can be associated with a specific service level to set performance expectations.
FAST monitors the performance of the storage group relative to the service level and automatically provisions the appropriate
disk resources to maintain a consistent performance level.
Understanding FAST.X
FAST.X enables the integration of storage systems running HYPERMAX OS 5977 or higher and heterogeneous arrays.
FAST.X enables LUNs on external storage to be used as raw capacity. Data services such as SRDF, TimeFinder, and Open
Replicator are supported on the external device.
For additional information, see the following documents:
● Solutions Enabler Array Management CLI Guide
● Solutions Enabler TimeFinder CLI User Guide
Overview of external LUN virtualization
When you attach external storage to a storage system, the SCSI logical units of an external storage system are virtualized as
disks called eDisks.
eDisks have two modes of operation:
Encapsulation—Allows you to preserve existing data on external storage systems and access it through storage volumes. These volumes are called encapsulated volumes.
External Provisioning—Allows you to use external storage as raw capacity for new storage volumes. These volumes are called externally provisioned volumes. Existing data on the external volumes is deleted when they are externally provisioned.
Encapsulation
Encapsulation has two modes of operation:
Encapsulation for disk group provisioning (DP encapsulation)—The eDisk is encapsulated and exported from the storage system as disk group provisioned volumes.
Encapsulation for virtual provisioning (VP encapsulation)—The eDisk is encapsulated and exported from the storage system as thin volumes.
In either case, Enginuity automatically creates the necessary volumes. If the eDisk is larger than the maximum volume capacity
or the configured minimum auto meta capacity, Enginuity creates multiple volumes to account for the full capacity of the eDisk.
These volumes are concatenated into a single concatenated meta volume to enable access to the complete volume of data
available from the eDisk.
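The volume split described above can be sketched as simple chunking. An illustrative sketch only (sizes and the helper are hypothetical; Enginuity's actual sizing rules also involve the configured minimum auto meta capacity):

```python
def split_edisk(edisk_gb: int, max_volume_gb: int) -> list[int]:
    """Return member volume sizes covering the full eDisk capacity.

    Full-size volumes are created until only a remainder is left, which
    becomes the final, smaller member of the concatenated meta volume.
    """
    full, remainder = divmod(edisk_gb, max_volume_gb)
    sizes = [max_volume_gb] * full
    if remainder:
        sizes.append(remainder)
    return sizes

print(split_edisk(650, 240))  # [240, 240, 170]
```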
External provisioning
After you virtualize an eDisk for external provisioning, you can create volumes from the external disk group and present the
storage to users. You can also use this storage to create a new FAST VP tier.
NOTE: If you use external provisioning, any data that is on the external volume is deleted.
Understanding tiers
FAST automatically moves active data to high-performance storage tiers and inactive data to low-cost, high-capacity storage
tiers.
The following rules apply to tier creation:
● This feature requires Enginuity OS 5876.
● The maximum number of tiers that can be defined on a storage system is 256.
● When a disk group or thin pool is specified, its technology type must match the tier technology.
● Disk groups can only be specified when the tier include type is static.
● A standard tier cannot be created if it:
○ Leads to static and dynamic tier definitions in the same technology.
○ Partially overlaps with an existing tier. Two tiers partially overlap when they share only a subset of disk groups. For
example, Tier A partially overlaps with Tier B when Tier A contains disk groups 1 and 2, and Tier B contains only disk
group 2.
Example tier definitions:
● EFD RAID 5 (3+1)
● FC 2-Way Mirror
● SATA RAID 6 (6+2)
Virtual LUN can be used to populate newly added drives or move volumes between high performance and high capacity drives,
thereby delivering tiered storage capabilities within a single storage system. Migrations are performed while providing constant
data availability and protection.
Virtual LUN Migration performs tiered storage migration by moving data from one RAID group to another, or from one thin pool
to another. It is also fully interoperable with all other storage system replication technologies such as SRDF, TimeFinder/Clone,
TimeFinder/Snap, and Open Replicator.
RAID Virtual Architecture allows, for the purposes of migration, two distinct RAID groups, of different types or on different
storage tiers, to be associated with a logical volume. In this way, Virtual LUN allows for the migration of data from one
protection scheme to another, for example RAID 1 to RAID 5, without interruption to the host or application accessing data on
the storage system volume.
Virtual LUN Migration can be used to migrate regular storage system volumes and metavolumes of any emulation — FBA, CKD,
and IBM i series. Migrations can be performed between all drive types including high-performance enterprise Flash drives, Fibre
Channel drives, and large capacity SATA drives.
Migration sessions can be volume migrations to configured and unconfigured space, or migration of thin volumes to another thin
pool.
Understanding vVols
VMware vVols enables data replication, snapshots, and encryption to be controlled at the VMDK level instead of the LUN level,
where these data services are performed on a per VM (application level) basis from the storage array.
The vVol Dashboard provides a single place to monitor and manage vVols.
The storage system must be running HYPERMAX OS 5977 or PowerMaxOS 5978.
Host Management
Storage hosts are systems that use storage system LUN resources. Unisphere manages the hosts.
Host Management covers the following areas:
● Management of host and host groups
● Management of masking views - A masking view is a container of a storage group, a port group, and an initiator group, and it makes the storage group visible to the host. Devices are masked and mapped automatically. The groups must contain device entries.
● Management of port groups - Port groups contain director and port identification and belong to a masking view. Ports can
be added to and removed from the port group. Port groups that are no longer associated with a masking view can be
deleted.
● Management of initiators and initiator groups - An initiator group is a container of one or more host initiators (Fibre or iSCSI). Each initiator group can contain up to 64 initiator addresses or 64 child IG names. Initiator groups cannot contain a mixture of host initiators and child IG names.
● Monitoring of XtremSW Cache (host) cache adapters
● Management of PowerPath hosts
● Management of mainframe configured splits, CU images, and CKD volumes
Understanding hosts
Storage hosts are systems that use storage system LUN resources. A logical unit number (LUN) is an identifier that is used for
labeling and designating subsystems of physical or virtual storage.
● The maximum number of initiators that are allowed in a host depends on the storage operating environment:
○ For Enginuity OS 5876, the maximum number of initiators that is allowed is 32.
○ For HYPERMAX OS 5977 or higher, the maximum number of initiators that is allowed is 64.
Understanding masking views
A masking view is a container of a storage group, a port group, and an initiator group, and it makes the storage group visible to the host.
Masking views are manageable from the Unisphere user interface. Devices are masked and mapped automatically. The groups must contain device entries.
Understanding initiators
An initiator group is a container of one or more host initiators (Fibre or iSCSI).
Each initiator group can contain up to 64 initiator addresses or 64 child IG names. Initiator groups cannot contain a mixture of
host initiators and child IG names.
Service level provisioning eliminates the need for storage administrators to manually assign physical resources to their
applications. Instead, storage administrators specify the service level and capacity that is required for the application and the
system provisions the storage group appropriately.
You can provision CKD storage to a mainframe host using the Provision Storage wizard.
The storage system must be running HYPERMAX OS 5977 Q1 2016, or higher, and have at least one FICON director configured.
You can map CKD devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into Logical Control Unit
images, also known as CU images. Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered
0x00 through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.
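The 256-device numbering above means that locating a device within its CU image is a div/mod operation on a zero-based device index. A sketch (the mapping helper is an illustration, not a Unisphere API):

```python
DEVICES_PER_CU = 256  # each CU image holds devices 0x00 through 0xFF

def cu_address(device_index: int) -> tuple[int, str]:
    """Map a zero-based device index to (CU image number, unit address)."""
    cu_image, unit = divmod(device_index, DEVICES_PER_CU)
    return cu_image, f"0x{unit:02X}"

print(cu_address(300))  # (1, '0x2C')
```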
With the release of HYPERMAX OS 5977 Q2 2017, Unisphere introduces support for All Flash Mixed FBA/CKD arrays.
NOTE: This feature is only available for All Flash 450F/850F/950F arrays that are:
● Purchased as a mixed All Flash system
● Installed at HYPERMAX OS 5977 Q2 2017 or later
● Configured with two Storage Resource Pools - one FBA Storage Resource Pool and one CKD Storage Resource Pool
You can provision FBA/CKD storage to a mainframe host using the Provision Storage wizard.
NOTE:
1. A CKD SG can only provision from a CKD SRP.
2. A FBA SG can only provision from a FBA SRP.
3. FBA volumes cannot reside in a CKD SRP.
4. CKD volumes cannot reside in a FBA SRP.
5. Compression is only for FBA volumes.
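Rules 1 through 5 above amount to two simple checks, sketched below. These helpers are hypothetical; only the FBA/CKD pairing rules and the FBA-only compression rule come from the note.

```python
def can_provision(sg_type: str, srp_type: str) -> bool:
    """Rules 1-4 above: a CKD SG provisions only from a CKD SRP, and an
    FBA SG only from an FBA SRP. Illustrative helper, not a Unisphere API."""
    return sg_type in ("FBA", "CKD") and sg_type == srp_type

def compression_supported(volume_type: str) -> bool:
    """Rule 5 above: compression applies only to FBA volumes."""
    return volume_type == "FBA"
```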
You can map FBA devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into Logical Control
Unit images (CU images). Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered 0x00
through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.
Manage remote replication sessions
Unisphere supports the monitoring and management of SRDF replication on storage groups directly without having to map to a
device group.
The SRDF dashboard provides a single place to monitor and manage SRDF sessions on a storage system, including device
groups types R1, R2, and R21.
See Dell EMC SRDF Introduction for an overview of SRDF.
Unisphere allows you to monitor and manage SRDF/Metro from the SRDF dashboard. SRDF/Metro delivers active/active high
availability for non-stop data access and workload mobility – within a data center and across metro distance. It provides array
clustering for storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978 enabling even more resiliency, agility, and
data mobility. SRDF/Metro enables hosts and host clusters to directly access a LUN or storage group on the primary SRDF
array and secondary SRDF array (sites A and B). This level of flexibility delivers the highest availability and best agility for rapidly
changing business environments.
In a SRDF/Metro configuration, SRDF/Metro uses the SRDF link between the two sides of the SRDF device pair to ensure
consistency of the data on the two sides. If the SRDF device pair becomes Not Ready (NR) on the SRDF link, SRDF/Metro
must respond by choosing one side of the SRDF device pair to remain accessible to the hosts, while making the other side of the
SRDF device pair inaccessible. There are two options which enable this choice: Bias and Witness.
The first option, Bias, is a function of the two storage systems running HYPERMAX OS 5977 taking part in the SRDF/Metro and
is a required and integral component of the configuration. The second option, Witness, is an optional component of SRDF/Metro
which allows a third storage system running Enginuity OS 5876 or HYPERMAX OS 5977 system to act as an external arbitrator
to avoid an inconsistent result in cases where the bias functionality alone may not result in continued host availability of a
surviving non-biased array.
Compliance for a snapshot policy that is associated with a storage group is based on the number of valid snapshots within the
retention count. The retention count is translated to a retention period for compliance calculation. The retention period is the
snapshot interval multiplied by the snapshot maximum count. For example, a one hour interval with a 30 snapshot count means a
30-hour retention period.
The compliance threshold value for green to yellow is stored in the snapshot policy definition. Once the number of valid
snapshots falls below this value, compliance turns yellow.
The compliance threshold value for yellow to red is stored in the snapshot policy definition. Once the number of valid snapshots
falls below this value, compliance turns red.
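The retention calculation and the two threshold comparisons above can be sketched as follows. The threshold values are stored in the snapshot policy definition; the function names and the example thresholds below are illustrative assumptions.

```python
def retention_period_hours(interval_hours: int, max_count: int) -> int:
    """Retention period = snapshot interval x maximum snapshot count.
    For example, a one-hour interval with a 30-snapshot count -> 30 hours."""
    return interval_hours * max_count

def compliance_state(valid_snapshots, yellow_threshold, red_threshold):
    """Compliance turns yellow when valid snapshots fall below the
    green-to-yellow threshold, and red when they fall below the
    yellow-to-red threshold. Sketch of the comparison only."""
    if valid_snapshots < red_threshold:
        return "red"
    if valid_snapshots < yellow_threshold:
        return "yellow"
    return "green"
```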
In addition to performance level compliance, snapshot compliance is also calculated by polling the storage system once an hour
for SnapVX related information for storage groups that have snapshot policies that are associated with them. The returned
snapshot information is summarized into the required information for the database compliance entries.
When the maximum count of snapshots for a snapshot policy is changed, this changes the compliance for the storage group or
service level combination. Compliance values are updated accordingly.
If compliance calculation is performed during the creation of a snapshot, then an establish-in-progress state may be detected.
This is acceptable for the most recent snapshot but is considered failed for any older snapshot.
When a storage group and service level have only recently been associated and the full maximum count of snapshots has not
yet been reached, Unisphere scales the calculation to the number of snapshots that are available and represents compliance
accordingly until the full maximum count of snapshots has been reached. If a snapshot failed to be taken for a reason (such as
the storage group or service level was suspended or a snapshot was manually terminated before the maximum snapshot count
was reached), the compliance is reported as out of compliance appropriately.
When the service level interval is changed, the compliance window changes and the required number of snapshots may not
exist for correct compliance.
If a service level is suspended or a storage group or service level combination is suspended, snapshots are not created. Older
snapshots fall outside the compliance window and the maximum count of required snapshots is not found.
Manual termination of snapshots inside the compliance window results in the storage group or service level combination falling
out of compliance.
Configuration of alerts related to snapshot policies is available from Settings > Alerts > Alert Policies on the Unisphere user
interface.
NOTE: Snapshot policy offsets (the execution time within the RPO interval) and snapshot time stamps are both mapped
to be relative to the clock (including time zone) of the local management host. If times are not synchronized across hosts,
these appear different to users on those hosts. Even if they are synchronized, rounding that occurs during time conversion
may result in the times being slightly different.
Unisphere supports the following snapshot policy management tasks:
● Create snapshot policies
● View and modify snapshot policies
● Associate a snapshot policy and a storage group with each other
● Disassociate a snapshot policy and a storage group from each other
● View snapshot policy compliance
● Suspend or resume snapshot policies
● Suspend or resume snapshot policies associated with one, more than one, or all storage groups
● Set a snapshot policy snapshot to be persistent
● Bulk terminate snapshots (not specific to snapshots associated with a snapshot policy)
● Delete snapshot policies
The MetroR1 array contains:
● One Metro SRDF Group that is configured to the MetroR2 array (MetroR1_Metro_RDFG)
● One DR SRDF Group that is configured to the DR array (MetroR1_DR_RDFG)
● Devices that are concurrent SRDF and are paired using MetroR1_Metro_RDFG and MetroR1_DR_RDFG.
The MetroR2 array contains:
● One Metro SRDF Group that is configured to the MetroR1 array (MetroR2_Metro_RDFG)
● One DR SRDF Group that is configured to the DR array (MetroR2_DR_RDFG).
● Devices that are concurrent SRDF and are paired using MetroR2_Metro_RDFG and MetroR2_DR_RDFG.
The DR array contains one DR SRDF Group that is configured to the MetroR1 array (DR_MetroR1_RDFG).
Unisphere supports the setup, monitoring, and management of a smart DR configuration using both UI and REST API. POST,
PUT and GET methods are accessible through the /92/replication/metrodr API resource.
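A GET against that resource might be prepared as sketched below. This assumes the standard Unisphere REST base path (/univmax/restapi), the default port 8443, and HTTP Basic authentication; the host name, credentials, and environment name are placeholders, and only the /92/replication/metrodr path comes from the text.

```python
import base64

BASE_PATH = "/univmax/restapi"  # assumed standard Unisphere REST base path

def metrodr_resource_url(host: str, env_name: str) -> str:
    """Build the URL for the /92/replication/metrodr resource mentioned
    above. Host, port, and environment name are placeholders."""
    return f"https://{host}:8443{BASE_PATH}/92/replication/metrodr/{env_name}"

def basic_auth_header(user: str, password: str) -> dict:
    """Unisphere REST calls typically use HTTP Basic authentication."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}", "Accept": "application/json"}
```

The returned URL and headers can then be passed to any HTTP client to issue the GET, PUT, or POST call.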
Unisphere blocks attempts at using smart DR SRDF groups for other replication sessions, and also blocks certain active
management on smart DR SRDF groups, including device expansion and adding new devices. This limitation can be overcome by
temporarily deleting the Smart DR environment to perform these operations. Replication is never suspended, so the Recovery
Point Objective (RPO) is not affected.
Unisphere blocks attempts at SRDF active management of storage groups that are part of a smart DR environment.
Suggested best practices
● Try to migrate during slow processing times; QoS can be used to throttle copy rate.
● Use more SRDF links, if possible, to minimize impact:
○ Two is the minimum number of SRDF links allowed; NDM can use up to eight SRDF links.
○ More links = more IOPS, lower response time.
● Use dedicated links as they yield more predictable performance than shared links.
You can migrate masked storage groups where the devices can also be in other storage groups. Examples of overlapping storage
devices include:
● Storage groups with the exact same devices, for example, SG-A has devices X, Y, Z; SG-B has devices X, Y, Z.
● Devices that overlap, for example, SG-A has devices X, Y, Z; SG-B has devices X, Y.
● Storage groups where there is overlap with one other migrated SG, for example, SG-A has devices X, Y, Z; SG-B has
devices W, X, Y; SG-C has devices U, V, W.
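The overlap cases above reduce to a set intersection, sketched below. The helper is illustrative; the device letters match the examples in the text.

```python
def shared_devices(sg_a, sg_b):
    """Return the device IDs present in both storage groups; a non-empty
    result means the groups overlap as in the examples above."""
    return sorted(set(sg_a) & set(sg_b))
```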
The following migration tasks can be performed from Unisphere:
● Setting up a migration environment - Configures source and target array infrastructure for the migration process.
● Viewing migration environments
● Creating a NDM session - Duplicates the application storage environment from source array to target array.
● Viewing NDM sessions
● Viewing NDM session details
● Cutting over a NDM session - Switches the application data access from the source array to the target array and duplicates
the application data on the source array to the target array.
● Optional: Stop synchronizing data after NDM cutover and Start synchronizing data after NDM cutover - stop or start the
synchronization of writes to the target array back to source array. When stopped, the application runs on the target array
only.
● Optional: Cancelling a NDM session - cancels a migration that has not yet been committed
● Committing a NDM session - Removes application resources from the source array and releases the resources that are used
for migration. The application permanently runs on the target array.
● Optional: Recovering a NDM session - recovers a migration process following an error.
● Removing a migration environment - Removes the migration infrastructure.
Understanding SRDF Delta Set Extension (DSE) pools
SRDF Delta Set Extension (DSE) pools provide a mechanism for augmenting the cache-based delta set buffering mechanism of
SRDF/Asynchronous (SRDF/A) with a disk-based buffering ability.
This feature is useful when links are lost and the R1 system approaches the cache limitation. Data is moved out of cache into
preconfigured storage pools set up to handle the excess SRDF/A data. When links recover, the data is moved back to cache and
pushed over to the R2 system. DSE enables asynchronous replication operations to remain active when system cache resources
are in danger of reaching system Write Pending (WP) or SRDF/A maximum cache limit.
The following session options are available, listed with the UI operations they are used with:
● Consistent
○ Activate: Causes the volume pairs to be consistently activated.
○ Donor Update Off: Consistently stops the donor update portion of a session and maintains the consistency of data on the
remote volumes.
● Copy
○ Create: Volume copy takes place in the background. This is the default for both pull and push sessions.
● Cold
○ Create: The control volume is write disabled to the host while the copy operation is in progress. A cold copy session can
be created as long as one or more directors discovers the remote device.
● Differential
○ Create: Creates a one-time full volume copy. Only sessions created with the differential option can be recreated. For push
operations, this option is selected by default. For pull operations, this option is cleared by default (no differential session).
● Donor Update
○ Create: Causes data written to the control volume during a hot pull to also be written to the remote volume.
● Incremental
○ Restore: Maintains a remote copy of any newly written data while the Open Replicator session is restoring.
● Force
○ Terminate, Restore: Select the Force option if the copy session is in progress. This allows the session to continue to copy
in its current mode without donor update.
○ Donor Update Off: Select the Force option if the copy session is in progress. This allows the session to continue to copy
in its current mode without donor update.
● Force Copy
○ Activate: Overrides any volume restrictions and allows a data copy. For a push operation, remote capacity must be equal
to or larger than the control volume extents, and vice versa for a pull operation. The exception is when you have pushed
data to a remote volume that is larger than the control volume and want to pull the data back; in that case, you can use the
Force Copy option.
● Front-End Zero Detection
○ Create: Enables front-end zero detection for thin control volumes in the session. Front-end zero detection looks for
incoming zero patterns from the remote volume, and instead of writing the incoming data of all zeros to the thin control
volume, the group on the thin volume is deallocated.
● Hot
○ Create: Hot copying allows the control device to be read/write online to the host while the copy operation is in progress.
All directors that have the local devices mapped are required to participate in the session. A hot copy session cannot be
created unless all directors can discover the remote device.
● Nocopy
○ Activate: Temporarily stops the background copying for a session by changing the state from CopyInProg to
CopyOnAccess or CopyOnWrite.
● Pull
○ Create: A pull operation copies data to the control device from the remote device.
● Push
○ Create: A push operation copies data from the control volume to the remote volume.
● When specifying a local or remote director for a storage system running HYPERMAX OS 5977 or PowerMaxOS 5978, you
can select one or more SRDF ports.
● If the SRDF interaction includes a storage system running HYPERMAX OS 5977, then the other storage system must be
running Enginuity OS 5876. Also, in this interaction the maximum storage system volume number that is allowed on the
system running HYPERMAX OS 5977 is FFFF (65535).
SRDF session modes
SRDF transparently remotely mirrors production or primary (source) site data to a secondary (target) site to users, applications,
databases, and host processors.
The following modes are available:
● Adaptive Copy: Allows the source (R1) volume and target (R2) volume to be out of synchronization by a number of I/Os
that is defined by a skew value.
● Adaptive Copy Disk Mode: Data is read from the disk, and the unit of transfer across the SRDF link is the entire track.
While less global memory is consumed, it is typically slower to read data from disk than from global memory. More
bandwidth is also used because the unit of transfer is the entire track, and because reading from disk is slower than reading
from global memory, device resynchronization time increases.
● Adaptive Copy WP Mode: The unit of transfer across the SRDF link is the updated blocks rather than an entire track,
resulting in more efficient use of SRDF link bandwidth. Data is read from global memory rather than from disk, improving
overall system performance. However, global memory is temporarily consumed by the data until it is transferred across the
link. This mode requires that the device group containing the SRDF pairs with R1 mirrors be on a storage system running
Enginuity OS 5876.
● Synchronous: Provides the host access to the source (R1) volume on a write operation only after the storage system
containing the target (R2) volume acknowledges that it has received and checked the data.
● AC Skew: Adaptive Copy Skew sets the number of tracks per volume that the source volume can be ahead of the target
volume. Values are 0–65535.
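The AC Skew range above can be enforced with a trivial validator. The helper is illustrative; only the 0–65535 range comes from the text.

```python
def validate_ac_skew(tracks: int) -> int:
    """Validate an Adaptive Copy skew value; per the description above,
    valid values are 0-65535 tracks per volume. Illustrative helper only."""
    if not 0 <= tracks <= 65535:
        raise ValueError("AC skew must be in the range 0-65535")
    return tracks
```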
Session option Description Available with action
…operation is in progress on the local or remote storage systems.
Available with: Restore, Incremental Restore, Split, Suspend, Swap, Write Disable R1, Ready R1, Ready R2, RWDisableR2,
Enable, Disable
SymForce: Forces an operation on the volume pair, including pairs that would be rejected. Use caution when checking this
option because improper use may result in data loss.
Available with: Restore, Incremental Restore, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable, Swap
AC WP Mode On—(adaptive copy write pending) the storage system acknowledges all writes to the source (R1) volume as if it
was a local volume. The new data accumulates in cache until it is successfully written to the source (R1) volume and the remote
director has transferred the write to the target (R2) volume.
AC Disk Mode On—For situations requiring the transfer of large amounts of data without loss of performance; use this mode to
temporarily transfer the bulk of your data to target (R2) volumes, then switch to synchronous or semi-synchronous mode.
Domino Mode On—Ensures that the data on the source (R1) and target (R2) volumes are always synchronized. The storage
system forces the source (R1) volume to a Not Ready state to the host whenever it detects one side in a remotely mirrored pair
is unavailable.
Domino Mode Off—The remotely mirrored volume continues processing I/Os with its host, even when an SRDF volume or link
failure occurs.
AC Mode Off—Turns off the AC disk mode.
AC Change Skew—Modifies the adaptive copy skew threshold. When the skew threshold is exceeded, the remotely mirrored
pair operates in the predetermined SRDF state (synchronous or semi-synchronous). When the number of invalid tracks drops
below this value, the remotely mirrored pair reverts to the adaptive copy mode.
(R2 NR If Invalid) On—Sets the R2 device to Not Ready when there are invalid tracks.
(R2 NR If Invalid) Off—Turns off the (R2 NR_If_Invalid) On mode.
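The AC Change Skew behavior described above (operate in the predetermined state above the threshold, revert to adaptive copy below it) can be sketched as follows. The function and the "synchronous" default are illustrative assumptions.

```python
def effective_mode(invalid_tracks, skew_threshold, configured_mode="synchronous"):
    """Sketch of the AC Change Skew behavior described above: above the
    skew threshold the pair runs in its predetermined SRDF state; once
    invalid tracks drop below the threshold it reverts to adaptive copy."""
    if invalid_tracks > skew_threshold:
        return configured_mode
    return "adaptive copy"
```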
Flag Status
(C) Consistency X = Enabled, . = Disabled, - = N/A
(S) Status A = Active, I = Inactive, - = N/A
(R) RDFA Mode S = Single-session, M = MSC, - = N/A
(M) Msc Cleanup C = MSC Cleanup required, - = N/A
(T) Transmit Idle X = Enabled , . = Disabled, - = N/A
(D) DSE Status A = Active, I = Inactive, - = N/A
DSE (A) Autostart X = Enabled, . = Disabled, - = N/A
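The X/./- symbols in the legend above decode with a tiny mapping, sketched below. The helper is illustrative and applies to the flags that use the Enabled/Disabled/N/A legend (Consistency, Transmit Idle, and DSE Autostart).

```python
# Legend from the flag table above: X = Enabled, . = Disabled, - = N/A
FLAG_SYMBOLS = {"X": "Enabled", ".": "Disabled", "-": "N/A"}

def decode_enable_flag(symbol: str) -> str:
    """Decode one X/./- flag symbol using the legend above."""
    return FLAG_SYMBOLS[symbol]
```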
Understanding TimeFinder/Mirror sessions
TimeFinder/Mirror is a business continuity solution that enables the use of special business continuance volume (BCV) devices.
Copies of data from a standard device (which are online for regular I/O operations from the host) are sent and stored on BCV
devices to mirror the primary data. Uses for the BCV copies can include backup, restore, decision support, and applications
testing. Each BCV device has its own host address, and is configured as a stand-alone device.
● TimeFinder/Mirror requires Enginuity OS 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/
Mirror operations are mapped to their TimeFinder/SnapVX equivalents.
● TimeFinder operations are not supported on Open Replicator control volumes on storage systems running HYPERMAX OS
5977 or higher.
The TimeFinder/Mirror dashboard provides a single place to monitor and manage TimeFinder/Mirror sessions on a storage
system.
Understanding RecoverPoint
RecoverPoint provides block-level continuous data protection and continuous remote replication for on-demand protection and
recovery at any point-in-time, and enables you to implement a single, unified solution to protect and/or replicate data across
heterogeneous servers and storage.
RecoverPoint operations on Unisphere require Enginuity OS 5876 on the storage system.
The main database list view presents I/O metrics such as response time, Input/Output Operations per second (IOPS) and
throughput from both the database and the storage system which helps to immediately identify any gap between the database
I/O performance and the storage I/O performance.
DSA offers the following benefits:
● Provides a unified view across database and storage.
● Quickly identifies when a database is suffering from high I/O response times.
● Reduces troubleshooting time for database and/or storage performance issues—DBAs and SAs can look at a unified
database and storage I/O metrics view and quickly identify performance gaps or issues on both layers.
● Identifies database bottlenecks that are not related to the storage.
● Maps DB objects to storage devices
● Allows better coordination between the SA and DBA.
● Reduces repetitive manual drill downs for troubleshooting.
DSA supports the mapping of database files located on VMware virtual disks to their storage system volumes. With full database
mapping, DSA can actively monitor 15-30 databases per Unisphere installation, depending on database size. Registering a
database or instance with no extents mapping option allows the user to monitor hundreds of databases.
RAC and ASM are supported for Oracle. For CDB, the DSA guest user name must start with c##. An Oracle diagnostic pack
license is required for monitoring Oracle databases.
In addition, DSA supports FAST hinting capabilities for Oracle and MS SQL databases on storage systems running HYPERMAX
OS 5977 or PowerMaxOS 5978, allowing users to accelerate mission-critical database processes to achieve improved response
time. The user provides the timeframe, the database objects that should be hinted, and the business priority. DSA then
sends hints to the array in advance so that the FAST internal engine promotes those Logical Block Addresses (LBAs) to the
right tier at the right time.
NOTE: FAST hinting is only supported on hybrid arrays running HYPERMAX OS 5977 or PowerMaxOS 5978.
Understanding eNAS
Embedded NAS (eNAS) integrates the file-based storage capabilities of VNX arrays into storage systems running HYPERMAX
OS 5977 or PowerMaxOS 5978.
With this integrated storage solution, the Unisphere StorageAdmin provisions storage to eNAS data movers, which trigger the
creation of storage pools in VNX. The users of Unisphere for VNX then use the storage pools for file-level provisioning, for
example, creating file systems and file shares.
Unisphere provides the following features to support eNAS:
● File System dashboard: Provides a central location from which to monitor and manage integrated VNX file services.
● Provision Storage for File wizard: Allows you to provision storage to eNAS data movers.
● Launch Unisphere for VNX: Allows you to link and launch Unisphere for VNX.
Understanding iSCSI
Unisphere provides monitoring and management for Internet Small Computer Systems Interface (iSCSI) directors, iSCSI ports,
iSCSI targets, IP interfaces, and IP routes on storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978.
iSCSI is a protocol that uses TCP to transport SCSI commands, enabling the use of the existing TCP/IP networking
infrastructure as a SAN. As with SCSI over Fibre Channel (FC), iSCSI presents SCSI targets and devices to iSCSI initiators
(requesters). Unlike NAS, which presents devices at the file level, iSCSI makes block devices available from the network. Block
devices are presented across an IP network to your local system, and can be consumed in the same way as any other block
storage device.
The iSCSI changes address market needs originating from the cloud/service provider space, where a slice of infrastructure
(for example, compute, network, and storage) is assigned to different users (tenants). Control and isolation of resources in this
environment is achieved by the iSCSI changes. More traditional IT enterprise environments also benefit from this new
functionality. The changes also provide greater scalability and security.
Prerequisites
The Secure Remote Services (SRS) gateway has already been registered in Unisphere.
Steps
During internal registration, you may get a message stating that partial registration was conducted. In this case, the
CLOUDIQ page displays a link that is named connect. This link, when selected, opens the registration dialog to re-register
the arrays that failed to register in the first attempt.
3. Select the Send data to CloudIQ checkbox to enable the transmission of data to CloudIQ.
4. Do one of the following:
● Select the Data Collection Enabled checkbox for one, more than one, or all arrays.
● Clear the Data Collection Enabled checkbox for one, more than one, or all arrays.
5. Click APPLY.
Prerequisites
● The SRS gateway has already been registered in Unisphere.
● The SRS gateway must be directly connected to Unisphere.
● Sending data to CloudIQ setting must be enabled.
● There must be at least one local array in Unisphere.
NOTE: This feature has been pre-loaded in Unisphere for PowerMax 9.2.1 and will be targeted for 1H 2021.
Steps
© 2012- 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.