PowerStore 2.0 Administration - Participant Guide - (PDF)
Introduction
User Management
Only the Security Administrator and Administrator roles have the rights to manage
users in the system. The Operator, VM Administrator, Storage Administrator, and
Storage Operator roles cannot complete these functions. See details about user
role rights.
Add User
Delete User
Change Password
Administrators and Security Administrators may change the password for other
users.
Locking a user prevents that user from logging in to PowerStore Manager. For
example, an Administrator may lock users to prevent them from logging in and
making changes during maintenance or migration events. Locking and unlocking
users is managed by the Administrator and Security Administrator roles.
1. To change your password, select the User icon > Change Password.
2. Enter the Current Password in the slide-out panel.
3. Enter the New Password.
4. PowerStore Manager analyzes the strength of the password.
5. If the password is strong enough, enter it again in the Verify Password field.
Supports:
Centralized authentication against AD, OpenLDAP, or Native LDAP
Role mapping based on single user, AD group, or LDAP group
One or multiple servers
SSL encryption
SSL requires a Certificate Authority (CA) trust certificate for the AD/LDAP server
certificate
Configure LDAP for all servers in PowerStore Manager: Settings > Users >
Directory Services > Configure LDAP
Required Information:
1. AD or LDAP Server IP
2. Domain Name
a. Used to specify LDAP at login, usually LDAPuser@LDAPdomain
3. Bind User and Password for object retrieval
4. Port: the LDAP port to use. For LDAP Secure, upload the AD or LDAP server CA
trust certificate.
a. Default ports: 389 for LDAP, 636 for LDAPS, 3268 for LDAP Global Catalog,
and 3269 for LDAPS Global Catalog
5. Advanced Settings
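The default port selection above can be pictured as a simple lookup. The sketch below is illustrative only — the function name and structure are not part of PowerStore — and shows how the protocol (LDAP vs. LDAPS) and Global Catalog choices map to the conventional ports:

```python
# Default LDAP ports from the list above, keyed by (secure, global_catalog).
# Illustrative sketch only -- PowerStore Manager fills these in when you
# choose a protocol in the Configure LDAP panel.
DEFAULT_LDAP_PORTS = {
    (False, False): 389,   # LDAP
    (True,  False): 636,   # LDAPS
    (False, True):  3268,  # LDAP Global Catalog
    (True,  True):  3269,  # LDAPS Global Catalog
}

def default_ldap_port(secure: bool, global_catalog: bool) -> int:
    """Return the conventional port for the chosen LDAP variant."""
    return DEFAULT_LDAP_PORTS[(secure, global_catalog)]
```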
In PowerStore Manager Settings > Users > Users > LDAP, add a user or group
account. Choose an Account Role to map the user or group to that account.
When mapping a group, the mapping applies to all members of that group and
members from nested groups.
A group configuration allows all group members and members from nested groups
to log in to PowerStore Manager with the associated account role.
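The nested-group behavior can be illustrated with a small sketch. The function, arguments, and data below are hypothetical — they only show how a role mapped to a parent group also reaches members of nested groups:

```python
# Hypothetical sketch of group-to-role mapping with nested groups: a user
# gets the mapped account role if any group in the membership chain
# (direct or nested) has a mapping. All names and data are illustrative.
def effective_role(user_groups, group_nesting, role_map):
    """user_groups: groups the user belongs to directly.
    group_nesting: dict mapping a group to the groups that contain it.
    role_map: dict mapping a group to an account role."""
    seen, stack = set(), list(user_groups)
    while stack:
        group = stack.pop()
        if group in seen:
            continue
        seen.add(group)
        if group in role_map:
            return role_map[group]
        # Walk upward through the groups that contain this group.
        stack.extend(group_nesting.get(group, []))
    return None  # no mapping found: login with a mapped role is denied
```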
User management takes place on the Settings > PowerStore Users screen.
Host Administration
From the Compute > Hosts & Host Groups screen in PowerStore Manager, click
the name of the host to view its details.
Two cards provide a quick view into initiators and mapped volumes.
1. Initiators lists the initiators that are used by the host.
Click the host identifier link to see information about it, including the
appliance and node connections.
2. Mapped Volumes lists the volumes that are mapped to the host. View volume
name, WWN, provisioned and used space, protection and performance policy,
appliance, and LUN from this card.
From the Compute > Hosts & Host Groups screen in PowerStore Manager:
1. Select the host to be modified. Modify one host at a time.
2. Click Modify.
3. In the slide-out panel, modify the name or description and click Apply.
From the Compute > Hosts & Host Groups screen in PowerStore Manager:
1. Click the name of the host to view its details.
2. To modify the properties, click the pencil icon to the right of the name.
3. In the slide-out panel, modify the name or description and click Apply.
A host can have one or more initiators that are registered on it. Each initiator is
uniquely identified by its World Wide Name (WWN) or iSCSI qualified name (IQN).
It is recommended that you register one or more initiators on the host side before
connecting the host to the cluster.
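The two identifier formats can be told apart by their public naming conventions — an IQN starts with `iqn.` followed by a year-month and reversed domain, while an FC WWN is eight colon-separated hex bytes. The sketch below is illustrative, not part of PowerStore:

```python
import re

# Illustrative sketch: classify a host initiator identifier as an iSCSI IQN
# or a Fibre Channel WWN, following the public naming conventions.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.*)?$", re.IGNORECASE)
WWN_RE = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$", re.IGNORECASE)

def initiator_kind(identifier: str) -> str:
    if IQN_RE.match(identifier):
        return "iSCSI IQN"
    if WWN_RE.match(identifier):
        return "FC WWN"
    return "unknown"
```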
Remove a Host
From the Compute > Hosts & Host Groups screen in PowerStore Manager:
1. Select the host to be removed.
– Only hosts with no mapped volumes can be removed.
2. Select More Actions and click Remove.
3. In the window that appears, confirm your decision by clicking Remove.
3. Click + Map.
4. Select the host or hosts to be mapped.
5. Click Apply.
Before a host can access storage, you must define a configuration for the host, and
associate it with a storage resource.
You can map a new volume to a host simultaneously when you create the
volume.
You can also map a volume to a host after it has been created.
Volume Administration
After volumes have been configured, they can be administered in several ways.
You can:
View and change volume properties.
Remove or replace a protection policy.
Unmap a volume from a host.
Delete a volume.
Remove a volume from a volume group.
Monitor performance.
For all these tasks, start on the PowerStore Manager Volumes view. From the
Storage submenu, select Volumes.
The columns of the volumes list in PowerStore Manager can be changed to
show additional information, including the appliance and the preferred node that is
assigned to the volume. The preferred node can be viewed in the Node Affinity column.
From the Storage > Volumes screen in PowerStore Manager, click the name of
the volume to view its details.
Five cards provide a quick view into capacity, performance, alerts, protection, and
host mappings. Click any tab to see its details.
1. Capacity describes the used and free space in the volume, data efficiency
savings, and a graph showing historical volume usage.
2. Performance displays graphs of system performance, including Latency, IOPS,
Bandwidth, I/O Size, and Queue Depth. Note: To verify iSCSI metrics on the
specific port of the preferred node that the system associated with the volume, go
to the Hardware view.
3. Alerts lists any alerts on the volume, which are organized by critical, major, and
minor icons.
4. Protection shows the protection policy in use, snapshots taken on the volume,
and replication involving the volume.
5. Host Mappings shows the hosts and host groups mapped to the volume,
including the hostname, operating system, and protocol.
To change a protection policy for a volume, go to the details screen of that volume.
The protection policy can also be removed using the UNASSIGN POLICY
button on the slide-out panel.
If there is no protection policy assigned to this volume, an ASSIGN POLICY
button is available on the Protection tab.
Follow the same processes to modify the protection policy on a volume group, but
complete the action on the Storage > Volume Groups page.
You can map volumes that are members of a volume group to a host or host group.
Mapping a host to an empty volume group is not allowed.
Follow these steps to remove the host mapping from a volume. Note: There is
more than one way to unmap a volume from a host in the PowerStore Manager.
Delete Volume
Restore and refresh operations require that the volume group membership
match the membership that existed when the snapshot was taken.
To restore or refresh the volume group from a snapshot that was taken
before the volume was removed, you must add the volume back to the
volume group.
From the Storage > Volume Groups screen in PowerStore Manager, click the
name of the volume group containing the volume to remove. In this example, it is
VolGroup1.
1. Click the MEMBERS card.
2. Select the volume to remove.
3. Click MORE ACTIONS.
4. Click REMOVE and acknowledge the warning to remove the volume.
Monitor Performance
Volume Group: Storage > Volume Groups > [volume group] > Performance
Volume Group member (volume): Storage > Volume Groups > [volume group] > Members > [member] > Performance
Performance Policies
1. Performance Policy
2. Choose the time range to display
3. Download the chart
4. Chart key
You can change the performance policy from the default (Medium) after the volume
is created. Members of a volume group may have different performance policies.
To administer volumes, access the PowerStore Manager Volumes view. From the
Storage submenu, select Volumes.
View and Change Volume Properties:
You can modify the name, description, or size of the volume.
When changing the size, you can make the volume larger, not smaller.
If the volume is attached to a protection policy that includes replication,
pause replication before renaming the volume. The name of the remote
volume changes after replication is resumed.
Volume Protection Policies
You can add a protection policy to a volume after it has been created. You
can also change or delete the protection policy.
A protection policy that is added to volume group is inherited by all volumes
in the group. If a protection policy is removed from the volume group, it is
also removed from all member volumes.
Other Volume Actions:
Unmap a volume from a host.
Delete a volume.
Remove a volume from a volume group.
Monitoring Volumes:
You can monitor the performance of individual volumes, volume groups, and
volume group members.
You can select the time range and the performance metrics that you want to
view. You can download the performance chart.
During volume creation, select an appliance on which to place the volume. In the
example, Vol_3 can be placed on any available appliance that is displayed under
the Placement drop-down menu. The default Placement option is Auto.
The example shows two volumes that are created on a PowerStore two-appliance
cluster, which have yet to be mapped to a host (a mapped host count of 0). The
default placement of Auto was chosen during creation, allowing PowerStore to
select the best appliances for the volumes. Each volume was assigned to a
different appliance.
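The Auto placement choice can be pictured as selecting the appliance best able to take the new volume. The sketch below is a deliberate simplification — the actual resource balancer weighs more factors than free capacity — and the function and data are illustrative:

```python
# Illustrative sketch of an "Auto" placement decision: pick the appliance
# with the most free capacity. PowerStore's real placement logic considers
# additional factors; this only shows the balancing idea.
def auto_place(appliances):
    """appliances: dict of appliance name -> free capacity in GB."""
    return max(appliances, key=appliances.get)
```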
PowerStore automatically chooses one of the nodes for the active/optimized path
when the volume is mapped to the host to maintain a balanced workload across the
nodes. This characteristic is called node affinity.
In the example, the Node Affinity displays a status of System Select At Attach.
The Appliance and Node Affinity column is not shown by default. Click the
Show/Hide Table Columns icon and check the box for Appliance and Node
Affinity.
Resource Balancing
There are two methods to migrate storage resources to other appliances in the
cluster. You can manually migrate storage resources, or the system can assist in
the resource movement.
In the example, Appliance 1 is nearing full capacity and storage resources must be
moved to another appliance with more available capacity. You have an option to
manually move resources at any time using the Migrate feature from the
PowerStore Manager. System Assisted Migrations can happen if capacity
monitoring determines an appliance is nearing full capacity. Assisted Migration
suggestions are generated if you receive capacity consumption alerts. An alert is
generated, and remediation options are displayed from the Alert window.
The example displays the Physical Capacity from the Dashboard > Capacity
card. In the example on the left, the system has not yet reached the required 15
days of runtime. In this case, a message is generated to inform the user that
insufficient data has been collected. In the middle example, you have at least 25
days until full. In the right example, the capacity of the appliance is at 96% used
and has an estimate of 8 days before it is full.
Alerts
When an alert is generated, view the Monitoring > Alerts page for details.
Capacity monitoring and forecasting focus on when an appliance could run out of
capacity, not how full the appliance is. PowerStore can forecast up to 1 year with a
2-year retention.
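The "when, not how full" idea can be illustrated with a simple linear trend over daily used-capacity samples. This is a hedged sketch only — PowerStore's forecasting model is more sophisticated — and all names are illustrative:

```python
# Hedged sketch of a days-until-full estimate from daily used-capacity
# samples (oldest first), using a simple linear trend.
def days_until_full(used_samples, total_capacity):
    if len(used_samples) < 2:
        return None  # insufficient data collected, as in the left example
    growth_per_day = (used_samples[-1] - used_samples[0]) / (len(used_samples) - 1)
    if growth_per_day <= 0:
        return float("inf")  # capacity use is flat or shrinking
    return (total_capacity - used_samples[-1]) / growth_per_day
```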
The example shows that two Alerts generated stating the appliance is nearing full
capacity. Select the Alert for suggestions on remediation.
Assisted Migration recommendations are refreshed every 24 hours and may require
additional remediation. The Assisted Migration process follows the same steps as a
manual migration.
Other options include Add More Drives and Consider deleting unused
snapshots or volumes.
When migrating volumes, always rescan the host adapters associated with those
volumes before migration.
The example shows the Alert page for the Capacity Utilization alert. The Repair
Flow indicates that additional storage should be added. Click the text to view
additional details and recommendations for remediation.
In this case, manually migrate storage to an appliance with more capacity. When
you run the migration wizard, the system prompts you to rescan the volumes from
the server that is mapped to the volume.
The Internal Migration option moves volumes or volume groups from one appliance
to another within the cluster, without disruption. Use this feature before shutting
down an appliance for service.
When you migrate a volume or volume group, all associated snapshots and thin
clones also migrate with the storage resource. During the migration, additional work
space is allocated on the source appliance to facilitate data movement. The
amount of space that is needed depends on the number of storage objects and
amount of data being migrated. This work space is released after the migration
finishes.
Start Migration
The Migrate Volume window provides information about the migration. The
example shows that PowerStoreDemo-appliance-2 has been selected as the
destination appliance.
Rescanning the host ensures that storage being migrated is still accessible after
the migration completes. To rescan the host, use the Rescan Disks option from
the Computer Management window, then check the box for Yes, the associated
hosts have been rescanned. Once the box is selected, the Start Migration
button is available to select.
After starting the migration, you are presented with a message stating that system
performance may be impacted for several minutes during the migration. Select the
Migrate Now button to begin the migration.
Migration Status
View the migration status from the Internal Migration > Migration page. Once the
migration process starts, monitor the progress by viewing the Status columns
under the migrations window. Look for a Completed status indicating the resource
has been migrated.
The migration process goes through several states, the first of which is
synchronizing. During this phase, the majority of the background copy is completed,
and there are no interruptions to any services. Sync can be run multiple times to
reduce the amount of data that must be copied during the cutover.
The cutover phase is the final phase of the migration, when ownership of the
volume or volume group is transferred to the new appliance. Active I/O is supported
during the migration; however, as a best practice, stop I/O to the volume being
migrated. Migration is asynchronous until the cutover occurs and can be paused or
canceled at any time before the cutover. All volumes are fully synchronized before
cutover.
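The life cycle described above can be sketched as a small state machine: the session synchronizes (possibly several times), and pause or cancel is only honored before cutover. The class and state names are illustrative, not the product's internal states:

```python
# Minimal sketch of the internal-migration life cycle: synchronize, then
# cut over; pause/cancel are only allowed before the cutover completes.
class MigrationSession:
    def __init__(self):
        self.state = "synchronizing"

    def sync(self):
        # Re-running sync shrinks the delta copied at cutover; the session
        # remains asynchronous until cutover.
        if self.state == "synchronizing":
            self.state = "synchronizing"

    def cutover(self):
        if self.state == "synchronizing":
            self.state = "completed"  # ownership moves to the new appliance

    def pause(self):
        if self.state == "synchronizing":
            self.state = "paused"
            return True
        return False  # too late once cutover has occurred
```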
Assisted Migration
You can manually move a storage resource. However, the PowerStore Manager
can help with this process through Assisted Migration.
In the example, there are two appliances, PS-2 and PS-3. PS-2 is nearing full
capacity and is forecast to run out of space in eight days. A Major Alert is
generated. PS-3 still has plenty of space available. Selecting the Alert launches the
Alert page.
Selecting the Assisted Migration option from the Remediation Option(s) section
returns a list of volumes recommended for migration. The process chooses
volumes that impact performance and workloads the least. For example, it
recommends any unmapped volumes or mapped volumes that are offline (in MS
Disk Manager) or unmounted (Linux) from the host perspective. The
recommendations are refreshed every 24 hours. A message warns that a rescan of
all HBAs may be required prior to the migration.
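The candidate ordering described above — unmapped volumes first, then mapped-but-offline or unmounted volumes — can be sketched as a sort by host impact. The function and field names are hypothetical:

```python
# Illustrative sketch of Assisted Migration candidate ordering: volumes that
# disturb hosts the least sort first. Field names are hypothetical.
def rank_migration_candidates(volumes):
    """volumes: list of dicts with 'name', 'mapped' (bool), 'in_use' (bool)."""
    def impact(vol):
        if not vol["mapped"]:
            return 0  # unmapped: no host impact
        if not vol["in_use"]:
            return 1  # mapped but offline (MS Disk Manager) or unmounted (Linux)
        return 2      # actively used: migrate last
    return sorted(volumes, key=impact)
```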
There may be a situation where, after the migration, there are still capacity issues.
In this case, manually migrate the storage resource to solve the issue.
To view the results of the migration, look at the capacities of the appliances. The
PS-2 appliance still displays a capacity over the optimal 80 percent. In this case,
further manual remediation may be required.
Monitoring Jobs
Once a migration is started, Vol_3 displays a blue dot, which indicates that a
migration job is in progress on the volume.
Migration Status
Once the job starts, monitor the progress by navigating to the Migration > Internal
Migrations > Migrations tab to display the status. Selecting the Jobs icon also
displays the status of the migration.
Migration Options
The Delete and Pause buttons are available when the migration job starts. Select
the migration session and click the appropriate button. Both options are available
while the job is in progress up to the point where the migration displays the
Cutover status.
NAS Administration
NDMP Backups
PowerStore supports:
Three-way NDMP, which transfers both backup data and metadata over the
LAN (two-way NDMP is not supported)
Both full and incremental backups
Components:
Primary Storage—Source system to be backed up. For example, PowerStore
Data Management Application (DMA)—backup application that coordinates the
backup sessions. For example, Dell EMC Networker
Other supported backup vendors include: Avamar with ADS/DD, CommVault
with NDMP, IBM Spectrum Protect, Micro Focus Data Protector, Veritas
NetBackup, and Veritas Backup Exec.
See the PowerStore Simple Support Matrix
(https://www.dell.com/support/home/en-us?app=products) for version
information.
Secondary Storage—back-up target. For example, Data Domain.
[Figure: Three-way NDMP topology. The DMA coordinates the session, and both backup data and metadata travel over the LAN or WAN from the PowerStore primary system to the secondary system.]
NDMP Configuration
Monitor Performance
Metrics Overview
Metrics
You can view metrics at the appliance or node level.
Chart features include:
Multiple chart comparisons
Zoom
Download
Timeline
Autorefresh
Select metrics using the Summary drop-down.
You can monitor NAS Server Performance by selecting a file system from the
Storage > File Systems tab. From the Performance card, view the performance
charts. You can choose the Category to display and the Timeline. Optionally, you may
download a chart as a PNG, PDF, JPG, or CSV file.
You can monitor File (NFS) appliance performance by selecting an NFS appliance
from the Hardware tab. From the Performance card, view the performance charts.
You can choose to display OVERALL or FILE performance, and the Timeline.
Optionally, you may download a chart as a PNG, PDF, JPG, or CSV file.
You can monitor File (SMB) appliance performance by selecting an SMB appliance
from the Hardware tab. From the Performance card, view the performance charts.
You may display OVERALL or FILE performance, and Timeline.
After NAS services are installed, NAS administration includes creating and moving
NAS servers, setting up NDMP backups, and monitoring performance. NAS server
administration takes place on the Storage > NAS Servers window.
Summary:
Network Data Management Protocol (NDMP) is a backup and recovery protocol
that is used to transport data between NAS and backup systems.
Only three-way NDMP is supported. Two-way NDMP is not supported.
Customers must use a Data Management Application (DMA), such as Dell EMC
Networker.
Other supported backup vendors include: Avamar with ADS/DD, CommVault
with NDMP, IBM Spectrum Protect, Micro Focus Data Protector, Veritas
NetBackup, and Veritas Backup Exec.
NAS Server performance may be monitored through PowerStore Manager.
Track and limit drive space consumption by configuring quotas for file systems at
the file system or directory level. You can enable or disable quotas at any time, but
it is recommended to configure them during nonpeak production hours to avoid
impacting file system operations.
Quotas are supported on SMB, NFS, FTP, and multiprotocol file systems.
There are three types of quotas you can put on a file system: user quotas, tree
quotas, and user quotas on a tree.
To set default quotas on a file system, go to Storage > File Systems. Select a File
system.
1. On the File Systems page, verify the correct file system is selected. In this
example SMB-FS07.
2. Select the QUOTAS card.
3. Select the USER QUOTA tab.
4. Select PROPERTIES.
The Quotas page is where the defaults for all quotas (both user and tree) are
configured.
You can set a Soft Limit, a Hard Limit, and a Grace Period for all quotas.
If a hard limit is reached for a user quota on a file system or quota tree, the user
cannot write data to the file system or tree until more space becomes available.
A Soft Limit is a limit that can be exceeded temporarily. A warning is issued when
the soft limit is crossed, and you can continue using space until the grace period
expires.
You receive an alert when the soft limit is reached, until the grace period is over.
After that, an out-of-space condition is reached until you get back under the soft
limit.
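The soft/hard limit semantics above can be sketched as a single check. This is a hedged illustration — the function and state names are not PowerStore's — showing when writes are allowed, warned, or blocked:

```python
# Hedged sketch of soft/hard quota semantics: writes are blocked at the
# hard limit, or once the soft limit has been exceeded for longer than the
# grace period. Limits of 0 mean space consumption is only tracked.
def quota_state(used, soft, hard, days_over_soft, grace_days):
    if soft == 0 and hard == 0:
        return "tracking-only"
    if hard and used >= hard:
        return "blocked"       # hard limit reached: writes denied
    if soft and used >= soft:
        if days_over_soft > grace_days:
            return "blocked"   # grace period expired
        return "warning"       # alert issued, writes still allowed
    return "ok"
```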
You are returned to the User Quotas page on the Quotas card. Note the Unix
Name, UNIX ID, Windows SID, Windows Name, Soft Limit, Hard Limit, State,
and Tree Path fields at bottom of the page.
Click ADD to add a user quota. The Add User Quota pane is displayed.
1. From the drop-down, select the user Host Type. Host Type can be Windows
Name, Windows SID, Unix Name, or Unix ID.
2. Depending on the host type, the fields below Host Type change.
The example shows Windows Name selected, so a Domain and Windows
Name are required.
3. Set Soft Limit, if custom limits are required.
4. Set Hard Limit, if custom limits are required.
If both Soft and Hard limits are set to 0, space consumption is only tracked,
and no limits are enforced.
5. Click ADD.
You are returned to the User Quotas page on the Quotas card. A message stating
"The user quota was created." is shown. The added user quota is shown.
A Tree Quota limits the total amount of storage that is consumed on the directory
tree. You can use tree quotas to:
Set storage limits on a project basis. For example, you can establish tree
quotas for a project directory that has multiple users sharing and creating files in
it.
Track directory usage by setting the tree quota hard and soft limits to 0 (zero).
Important: If you change the limits for a tree quota, the changes take
effect immediately, without disrupting file system operations.
To set default tree quota on a file system, go to Storage > File Systems. Select a
File system.
1. On the File Systems page, verify the correct file system is selected. In this
example, SMB-FS07.
2. Select the QUOTAS card.
3. Select the TREE QUOTA tab.
4. Click ADD.
From the Storage > File Systems window in PowerStore Manager, view alerts,
size used, capacity, NAS server mappings, and protection policies. Modify one
file system at a time.
Click the name of the file system to view its details.
Click the checkbox of the file system to enable the MODIFY button, and the
PROTECTIONS and MORE ACTIONS drop-downs.
When changing the size, you can increase or decrease the file system
capacity.
For file systems shared using the SMB protocol, you can also change advanced
settings, such as Sync writes and notification on writes.
From Storage > File Systems, select the file system to view its details:
1. To modify the properties:
There are two ways to unmount an SMB share from the Windows client: using
File Explorer or through CLI commands.
To unmount the NFS export from the Linux or UNIX client, use the operating
system umount command.
Linux command:
umount /<mountpoint>
To unmount the shared file system from the client, use the mount point that was
used to mount it.
In the example, the NFS export root that was mounted to the nfs folder was
unmounted from the Linux6 system.
To remove the NAS client access to an existing NFS export, go to the File
Systems page and select the NFS Exports tab:
1. Select the NFS export.
2. Select Host Access from the MORE ACTIONS menu. The Host Access slide-
out panel is launched.
3. Select the checkbox of the NAS client to remove.
4. Click DELETE. The system displays a message that the host was removed.
5. Click APPLY to commit the changes.
The example shows the steps to remove linux6.hmarine.test access to the NFS
export.
In this example, the NAS client loses its explicit access to the NFS export.
However, if Default Access is set to a permissive level, the client retains that
default access.
SMB Shares
To remove an SMB Share, go to Storage > File Systems > SMB Shares tab:
1. Select the SMB share.
2. Click DELETE. The system displays a message that the SMB Share will be
deleted.
3. Click DELETE to commit the operation.
This example shows that the Hmarine_Eng SMB share will be deleted from the
PowerStore cluster.
NFS Exports
To remove an NFS export, browse to the Storage > File Systems > NFS Exports tab:
1. Select the NFS export.
2. Click DELETE. The system displays a message that the NFS export will be
removed.
3. Click DELETE to commit the operation.
Note: Delete all snapshots and the protection policy before deleting the file system.
File system administration takes place on the Storage > File Systems window.
You can:
Enable, configure, and apply user and tree quotas on file systems.
View and change file system properties.
Mount and unmount SMB shares.
Start and stop NFS exports.
Enable and remove file system shares and file systems.
Data Virtualization
Volume Properties
From Storage > Volumes, select the name of the volume that is provisioned to the
ESXi host.
The volume properties page opens. The available information includes storage
consumption, system performance, alerts and event notifications, the associated
protection policy, and hosts mapped to the volume.
From the properties page of the volume that is mapped to the ESXi host, perform
the following operations:
1. Select the pencil icon on the right of the Volume name.
2. Increase the size of the volume. Note: Shrinking of volume size is not
supported in PowerStore.
3. Click APPLY to save the changes. The slide-out panel closes and the
information at the capacity tab is updated.
Launch a vSphere Web Client session to the vCenter Server, and open the
Storage view from the menu.
In the example, the size of VMFS_Datastore06 was increased by 10 GB.
From Storage > File Systems, select the name of the file system that is
provisioned to the ESXi host using an NFS export.
The file system properties page opens. The available information includes storage
consumption, system performance, alerts and event notifications, the associated
protection policy, and the configured user and tree quotas.
From the properties page of the file system that is shared with the ESXi host,
perform the following operations:
1. Select the pencil icon next to the file system name.
2. Change the size of the file system. In the example, the size of the file system
was decreased.
3. Click APPLY to save the changes. The slide-out panel closes and the
information on the capacity tab is updated.
In the example, the size of NFS_Datastore06 was decreased by 10 GB.
Launch a vSphere Web Client session to the vCenter Server, and open the
Storage view from the menu.
On the General page, click REFRESH on the Capacity section. The updated
capacity is displayed, once the refresh is complete.
From Storage > Storage Containers, select the name of the storage container
that is provisioned to the ESXi.
The storage container properties page opens. The available information includes
storage consumption, stored virtual volumes, and ESXi hosts mapped to the
storage resource.
Quotas can be enabled on storage containers. A high water mark determines when
an alert is generated for the storage administrator.
From the properties page of the storage container that is attached to the ESXi host:
1. Select the pencil icon next to the storage container name.
2. Change the value of the storage container quota. In the example, the storage
container quota was increased.
3. Click APPLY to save the changes. The slide-out panel closes and the
information on the capacity tab is updated.
In the example, the size of VVol_Datastore06 was increased by 10 GB.
Launch a vSphere Web Client session to the vCenter Server, and open the
Storage view from the menu.
On the General page, click REFRESH on the Capacity section. The vVol datastore
capacity now reflects the change to the PowerStore storage container size.
List of vVols
Virtual volumes are storage objects that are provisioned automatically on a storage
container and store VM data.
PowerStore discovers the details about the virtual volumes that are stored in the
storage container and displays them in PowerStore Manager.
From Storage > Storage Containers, select the name of the storage container
that is provisioned to the ESXi host using VASA support.
Select the Virtual Volumes card. The page displays the list of vVols stored in the
storage container.
You can closely monitor the status of any of the objects by selecting ADD TO
WATCHLIST, and gather support material.
Select the virtual volume name to view storage capacity consumption and system
performance that is related to the vVol. In the example, the vmdk object of the
virtual machine VM_WIN6 was selected.
From the virtual volume properties page, select the pencil icon next to the virtual
volume identification. A slide-out panel displays the virtual volume properties,
including in which appliance it is stored.
Virtual Machines
PowerStore Manager displays the details about the virtual machines and the vVols
that are stored in the storage container.
To monitor the VMs and their vVols in PowerStore Manager, expand the Compute
menu, and select the Virtual Machines option.
A storage administrator can verify the virtual machine details and observe whether
there are any alerts for each virtual machine.
Click the name of the virtual machine to open its properties page. Properties
include reporting of granular I/O statistics, and active management of virtual
volumes and their related entities.
Snapshot Schedule
From the Protection tab, click Assign Protection Policy to associate a policy:
1. Select the policy that you want to associate with the virtual machine.
2. Click APPLY to commit the changes.
3. Click ASSIGN to confirm association of the selected policy with the virtual
machine.
The policy now protects the virtual machine, and the underlying virtual volumes,
with local protection rules.
For policies that include a replication rule, only the snapshot schedule is used.
Replication is not supported for virtual machines.
Manual Snapshots
From the Protection tab of the virtual machine properties page, perform the
following steps to manually create snapshots:
1. Click + CREATE SNAPSHOT. The slide-out panel displays the snapshot
attributes.
2. Optionally change the name of the snapshot to one that is easier to identify.
3. Click CREATE SNAPSHOT.
Snapshots
The snapshots that were taken manually or as a result of the policy schedule are
listed on the PROTECTION card.
To verify details about the VM snapshot and modify its properties, click MODIFY.
The slide-out panel provides a link to launch vCenter server.
vCenter Server
From the vSphere Web Client session to the vCenter Server, select the ESXi host
under the Hosts and Clusters section.
The most recent snapshot is listed, and you have the option to change its name or
revert the virtual machine to this point-in-time snapshot.
In the example, the VM_WIN6 virtual machine was selected, and the scheduled
snapshot is listed. Verify that it matches the snapshot listed in PowerStore Manager.
Plugins
PowerStore supports Dell Technologies tools that enhance its integration with
VMware. Some of the key plugins are listed here.
In addition, PowerStore X appliances include support for VMware NSX Data Center
for vSphere (NSX-V). NSX-V is a network virtualization and security platform that
enables the implementation of virtual networks on a physical network.
Note: You can verify the latest list of supported plugins on the
PowerStore Virtualization Infrastructure Guide document that
can be found in the PowerStore Info Hub or the Dell support web
site.
VM Migration Overview
Native vSphere features help manage virtual machine placement and storage
utilization with PowerStore.
Migrating VM Storage
Movie:
The web version of this content contains a movie.
https://edutube.emc.com/Player.aspx?vno=4YpzqB0KmtDJblTFxslqtQ
From the vSphere Web Client session, select the ESXi host under the Hosts and
Clusters section.
The storage that is associated with the virtual machine shows as modified. In the
example, the virtual machine is migrated from VVol_Datastore06 to
VVol_Datastore02.
Placement
The virtual machine that was migrated and its associated virtual volumes are now
placed in the destination storage container. Open the virtual machine properties
page and select the virtual volumes tab. All virtual machine volumes including
snapshots are moved during Storage vMotion operations.
In the example, the virtual machines and their virtual volumes that were once
stored in storage container VVOL_SC06 in cluster1 are now in storage container
VVOL_SC02 in cluster2.
The graphic shows a two-appliance PowerStore cluster. The config, swap,
snapshot, and data vVols of VM 1 reside in Storage Container A, and the config,
swap, and data vVols of VM 2 reside in Storage Container B.
Migration leverages the system bond over the intra-cluster management (ICM) and intra-cluster data (ICD) networks to migrate the vVols.
The request to migrate vVol data creates a migration session that replicates the
source vVol to the destination appliance.
The process also migrates all the snapshots that are associated with the virtual
volume.
After the initial synchronization, the replication process performs the last delta copy
of the vVol.
PowerStore and the ESXi hosts handle the rebind orchestration to nondisruptively
cut over to the migrated vVol.
Storage Container
To perform a vVol migration from the Storage Containers page, expand the
Storage submenu and select Storage Containers.
Select the name of the storage container that contains the virtual volumes to
migrate to the other appliance.
The storage container properties page opens. Select the Virtual Volumes card.
To migrate a stored virtual machine data vVol, you must perform the following:
1. Locate and check the box of the VM primary data vVol.
2. Select MIGRATE. The option is only available for single selections.
PowerStore launches the Migrate wizard. The wizard steps are explained on the
next tabs. Note: The operation is only available in a multiappliance cluster.
Select Appliance
The first step of the Migrate wizard shows the list of appliances that are part of the
PowerStore cluster. Details include the current capacity utilization on each
appliance.
The storage container spans all appliances in the PowerStore cluster using storage
from each one. The virtual machine vVols are migrated within the same storage
container between these appliances.
The top of the screen displays some additional information. A notification explains
that any fast clones and snapshots that are associated with the vVol are also
moved with the migration.
Summary
The summary screen displays details about the process that is about to be started.
Recommended knowledge base article with detailed use cases and limitations
to be reviewed before starting the process.
Notification that the virtual machine data vVol and any associated thin clones
and snapshots are also migrated.
Notification that the migration process will take some time.
Message stating that an internal migration session will be created and that you
can track the progress on the migration page.
Review the Summary information and click BACK to make changes if necessary.
The Migrate wizard slide-out panel is closed and the user interface is redirected to
the Internal Migrations page.
Internal Migration
The Required Action for Migration slide-out panel opens with information about
all the storage resources that will be migrated.
Migration Session
Once the synchronization is concluded, the session state changes to Completed. The virtual
volume has successfully migrated to the destination appliance.
Virtual Machines
The Migrate wizard can also be launched from the virtual volumes card of a VM
properties page.
Select the name of the virtual machine that contains the virtual volumes that must
be migrated to the other appliance from Compute > Virtual Machine.
The virtual machine properties page opens. Select the Virtual Volumes card. Then
check the box of the VM vVol and select MIGRATE.
In the example, the config vVol of the virtual machine that was recently migrated is
selected. Only the data vVols and snapshots were migrated. Config and swap
vVols must be individually migrated.
The best practice is to have vVols and VMs located on the same appliance.
Virtual volumes are stored in storage containers which have a 1:1 mapping with
the VMware vVol datastores.
Before removing a storage container, the virtual machines and virtual volumes that
are stored in it must be migrated to another storage.
Only then unmount the vVol datastore from the ESXi host in the vSphere
environment, and remove the storage container in PowerStore Manager.
The graphic shows virtual machines in Cluster 1 whose vVols are stored in
storage containers that have a 1:1 relationship with the vVol datastores.
The tasks emphasized in the graphic are linked to provisioning storage for vVols support.
vSphere
Movie:
The web version of this content contains a movie.
https://edutube.emc.com/Player.aspx?vno=ijZg75W+1UjHhUQqFxG6kQ
From the vSphere Web Client session, select the ESXi host under the Hosts and
Clusters section.
Perform the following actions to unmount the vVol datastore in vCenter server:
1. Click Datastores.
2. Click the name of the datastore that you want to unmount.
3. Click VMs. Verify that the datastore does not contain any virtual machines.
4. From the Actions menu, select Unmount Datastore.
5. In the Unmount Datastore window, check the box next to each host that has the
datastore mounted. Click OK.
PowerStore Manager
In the example, the storage container VVol_SC06 is removed from the storage system.
On the Storage > Storage Containers page, perform the following operations:
1. Select the storage container that is provisioned to the ESXi using VASA
support.
2. Click DELETE.
3. The Delete Storage Container dialog opens, warning the storage is about to
be removed. Click DELETE.
The storage container is removed from the storage system. The operation is useful
in scenarios where there is a need to reclaim some storage space and repurpose
the capacity quota that is used for a vVol datastore.
PowerStore T
The page shows that there is a vCenter Server that is connected to the storage
system. The page also shows the status of the PowerStore storage system
registration as a VASA storage provider. The available options include the vCenter
server network address and login credentials update, open vSphere client, and
disconnect vCenter.
Click DISCONNECT to unregister the vCenter server. A dialog warns that the
storage system will disconnect the vCenter server if the operation is confirmed.
Check the box to confirm the operation. Optionally check the box to also unregister
the storage system as a VASA provider in vSphere. Click DISCONNECT.
PowerStore X
The only supported operation is the update of the network address and account
credentials that are used for the connection with the same original vCenter server.
If PowerStore will not be used for vVol support at any time soon, consider also
unregistering it as a VASA storage provider in vCenter server.
Select the vCenter server on the left pane and open the Configure tab. Perform
the following operations from the Configure page:
1. Select the Storage Providers option from the sub-menu.
2. Select the VASA provider record to exclude.
3. Click Remove on the top menu.
4. Click YES to confirm the exclusion of the PowerStore cluster.
The dialog box is closed and the VASA record is eliminated from the list.
Protection Policies
A Protection Policy is a set of user-defined rules that are used to establish local or
remote data protection across storage resources.
You create policies for your implementation, and apply a specific policy to a storage
resource based on the business need or criticality of the data.
In the end, what makes a protection policy are the rules that it contains.
Each protection policy can include up to four snapshot rules, and no more than
one replication rule.
You apply a protection policy to a storage resource. For any one storage resource,
you can apply only one protection policy.
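The rule limits described above can be expressed as a short validation sketch. This is a hypothetical helper for illustration only, not part of any PowerStore API:

```python
# Hypothetical validator for the protection policy rule limits described
# above: at most four snapshot rules, no more than one replication rule,
# and at least one rule overall. Not a PowerStore API.

def validate_protection_policy(snapshot_rules, replication_rules):
    """Raise ValueError if the rule set violates the policy limits."""
    if len(snapshot_rules) > 4:
        raise ValueError("a policy may include at most four snapshot rules")
    if len(replication_rules) > 1:
        raise ValueError("a policy may include no more than one replication rule")
    if not snapshot_rules and not replication_rules:
        raise ValueError("a policy must contain at least one rule")
    return True
```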
Virtual Machine Applies to vVols underlying the VM. Only one snapshot
rule is applied.
You can reuse the same protection policy on many storage resources. This feature
avoids the need to create specific snapshot and/or replication rules for each
storage resource.
A protection policy must contain at least one rule (either snapshot or replication).
You can create rules and then add them to a policy, or you can create the rules at
the same time that you create the policy.
4. In the Assign Protection Policy slideout, check the box next to the policy you
want to apply.
5. Click APPLY.
A snapshot rule can be modified at any time. Protection policies that contain the
rule are automatically changed and the storage resources protected by the policy
immediately see the difference on their snapshot schedule.
Replication rules specify the destination system, and the Recovery Point
Objective (RPO).
Take a Snapshot
1. Accept the automatically generated name, or specify a new name for the
snapshot. In this case, a new name has been set.
2. Write a description for the snapshot, if desired.
3. Specify a Local Retention Policy.
4. Click CREATE SNAPSHOT.
Snapshot Created
The snapshot restore operation resets the data in the parent storage resource to
the point in time at which the snapshot was taken. If you restore a volume group, all
member volumes are restored to the point in time of the snapshot.
The graphic shows a volume with three snapshots (Snap1, Snap2, and Snap3); the
server is disconnected from the volume while it is restored from a selected snapshot.
After the restore operation completes, reconnect the server to the volume and
verify that the volume holds the expected data.
On the volume Overview page, the PROTECTION Card displays the newly
created backup snapshot.
Snapshot Restrictions
The graphic shows that the server cannot directly access the volume snapshots
Snap1, Snap2, and Snap3.
To access the data within a snapshot, create a thin clone of the snapshot and map
it to a host.
Cloning a Snapshot
To access data contained in a snapshot, make a thin clone of the snapshot and
map the thin clone to a host.
On the Overview page for the volume, go to the Protection card and:
1. Check the box next to the snapshot you want to thin clone.
2. On the MORE ACTIONS menu, select Create Thin Clone Using Snapshot.
The Thin Clone slide-out panel appears.
3. On the slide-out panel, specify the thin clone information.
Host View:
- Read-only
- Default type created by a Snapshot Rule
- Accessed by creating an SMB share and/or NFS Export and choosing the
snapshot
- Access is provided through the parent NAS server.
Note: Editing the name or the access type of a file system snapshot is not possible.
Expiration:
File system snapshots can be set to not expire by choosing the No Automatic
Deletion option.
Snapshots that are created by a Protection Policy have the following naming
scheme:
Snapshot Rule Name_Resource Name_Timestamp with nano-time.
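As an illustration of this naming scheme, a name can be assembled as below. The exact timestamp representation is an assumption here (nanoseconds since the epoch); PowerStore's actual format may differ:

```python
import time

# Hypothetical sketch of the policy snapshot naming scheme:
# Snapshot Rule Name_Resource Name_Timestamp (nano-time assumed).

def policy_snapshot_name(rule_name, resource_name, ns=None):
    """Build a snapshot name; ns defaults to the current nanosecond time."""
    ns = time.time_ns() if ns is None else ns
    return f"{rule_name}_{resource_name}_{ns}"
```

For example, `policy_snapshot_name("HourlyRule", "FS01")` yields a name of the form `HourlyRule_FS01_<nanosecond timestamp>`.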
Add a protection policy with a snapshot rule to a file system to schedule a snapshot
for that file system.
Note: The Protection Policy can include a replication rule, but only the snapshot
schedule or schedules will work as expected. Replication is not supported for
file systems.
From PowerStore Manager > Storage > File Systems > [File System] > Protection
Card > + CREATE SNAPSHOT.
Restoring a file system from a snapshot returns that file system to the state it was
in when the snapshot was taken.
When you refresh a file system snapshot, the contents of the snapshot are
overwritten with the current contents of the file system.
Thin Clones
In PowerStore systems, create a thin clone of a file system, volume, volume group,
or volume/volume group snapshot. The thin clone is not a full backup of the original
resource. It is a read/write copy of the storage resource that shares blocks with the
parent resource. Access to the original resource is maintained.
Thin
Vol Clone
01
Application
1
Thin
Clone Application
Sna 1
p1 Application
Thin 2
Clone
Sna Sna
p3 Application
p2
3
Sna
p4
Base Volume Family
Applications:
Development and test environments
Parallel processing
Online backup
System deployment
With thin clones, you can establish hierarchical snapshots to preserve data over
different stages of data changes within a Base Volume Family.
Data available on the source snapshot at the moment of thin clone creation is
immediately available to the thin clone. The thin clone references the source
snapshot for this data.
Data resulting from changes to the thin clone after its creation is stored on the
thin clone. Changes to the thin clone do not affect the source snapshot,
because the source snapshot is read-only.
After the thin clone is created, you can view its details:
1. In the list of volumes, click the name of the thin clone (shown in blue text) to
view volume details.
2. Click the pencil icon to view the Properties panel.
Create Clone
To create a thin clone of a file system in PowerStore Manager, expand the Storage
submenu, and select File Systems.
On the Create Thin Clone slide-out panel, specify the thin clone information:
1. In the list of file systems, click the name of the thin clone (shown in blue text) to
view details.
2. Click the pencil icon to view the thin clone properties. Most settings are
inherited from the source file system.
Create Clone
You can create a thin clone of a volume snapshot or volume group snapshot. You
cannot create a thin clone of a file system snapshot.
Clone Created
The snapshots and thin clones for a volume, volume group, or storage container
form a hierarchy. These terms are used to describe this hierarchy:
Base volume, base volume group, base storage container: The founding
(production) volume, volume group, or storage container for derivative snapshots
and thin clones.
Family: A volume or volume group and all of its derivative thin clones and
snapshots.
Parent: The original parent volume, volume group, or thin clone for the snapshot.
This resource does not change when a thin clone is refreshed to a different source
snapshot, because the new source snapshot must be in the same base volume or
volume group.
The base volume family for Volume 1 includes all the snapshots and thin clones
that are shown in the diagram.
The diagram shows the family of Volume 1: Snapshot 1, Snapshot 2, Snapshot 3,
and Thin Clone 3.
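These terms can be modeled with a small illustrative structure. The class is hypothetical, and the exact parentage shown in the diagram is assumed here (snapshots taken from Volume 1, thin clone created from Snapshot 3):

```python
# Illustrative model of a base volume family; not PowerStore's object model.

class Resource:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # immediate parent, or None for the base
        self.children = []
        if parent:
            parent.children.append(self)

    @property
    def base(self):
        """Walk up to the founding (base) resource of the family."""
        node = self
        while node.parent:
            node = node.parent
        return node

    def family(self):
        """Names of the base resource and every derivative snapshot/clone."""
        names, stack = [], [self.base]
        while stack:
            node = stack.pop()
            names.append(node.name)
            stack.extend(node.children)
        return sorted(names)

vol1 = Resource("Volume 1")          # base volume
s1 = Resource("Snapshot 1", vol1)
s2 = Resource("Snapshot 2", vol1)
s3 = Resource("Snapshot 3", vol1)
tc3 = Resource("Thin Clone 3", s3)   # thin clone created from Snapshot 3
```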
Clone Properties
You can view the base resource, parent, and data source of a volume by
examining its properties.
Refresh Functionality
Using the refresh operation, you can update a storage resource with data from
another resource within the same family.
When you refresh a storage resource, the existing data is removed, and the data
from the source resource is copied to it.
The source and destination resource must be in the same volume family.
Snapshots of Volumes, Volume Groups, and Thin Clones are read-only, and
cannot be refreshed.
The diagram shows refresh operations among members of the Volume 1 family.
Note: For simplicity, diagram does not show every possible refresh operation.
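The refresh constraints can be captured in a minimal check, assuming each resource is represented as a simple record with its base and type (illustrative only):

```python
# Sketch of the refresh rules above: source and destination must share the
# same base (same family), a resource cannot refresh itself, and snapshots
# are read-only so they can never be the destination of a refresh.

def can_refresh(source, destination):
    same_family = source["base"] == destination["base"]
    dest_writable = destination["type"] != "snapshot"
    return same_family and destination is not source and dest_writable
```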
PowerStore storage systems provide local data protection features that include
block and file snapshots and thin clones.
Volume and Volume Group Snapshots:
You can manually take a snapshot of a volume or volume group by viewing
the details of that volume or group, and going to the PROTECTION card.
You can set the name, description, and retention policy of that snapshot
when you create it.
You can use the snapshot to restore the parent volume or volume group to
the point in time that the snapshot was created.
Snapshots cannot be mapped to a host. To view the contents of a snapshot,
you must first clone the snapshot then map the clone to a host.
File System Snapshots:
There are two different types of file system snapshots: Snapshot and
Protocol. The snapshot rule used to create the snapshot specifies which
type of snapshot to create.
o Snapshot type snapshots are read-only and can be accessed through
previous versions or .snapshot
o Protocol type snapshots are read-only and can be exported as an SMB
share, NFS export, or both.
o You can share and mount the Protocol type on a client like any other
file system.
The snapshot type determines the method that is used to access the
snapshot contents.
You can use the restore function to return a file system to the date and time
that the snapshot was captured.
For file system snapshots, the refresh function updates the snapshot with
the current contents of the file system.
Thin Clones:
You can create a thin clone of a file system, volume, volume group, or
volume/volume group snapshot.
The thin clone is not a full backup of the original resource. It is a read/write
copy of the storage resource that shares blocks with the parent resource.
Thin clones can be mapped to a host for read/write access.
Thin clones and snapshots, along with the original volume that is used to
create them, make up a base volume family.
For Volume/Volume Group thin clones, the refresh operation completely
replaces the contents of another member of the base volume family.
Manage Replications
For additional details see the Dell EMC PowerStore: Replication Technologies
white paper on dell.com/support.
Planned Failover
Failover
Perform a failover after events such as source system failure, or events that cause
downtime for production access on the source system.
An unplanned failover is initiated from the destination system and fails over to the
latest available common base image that exists at the target, without
synchronization. When the connection to the source system is reestablished, the
original source resource is placed into destination mode. You can restore the
system from the latest data or any point-in-time snapshot after a restore.
Reprotect
Synchronization
Use the Pause and Resume functions to stop (Pause) and start (Resume)
replication of data between the resources for a particular replication session.
Failover Test
Perform a failover test to validate disaster recovery readiness at the destination
system without affecting the source storage resource or the ongoing replication.
Remove
If a replication session is no longer needed, you may remove it. One way to remove
a replication session is to remove the protection policy from the resource on the
source system. You can also change the replication rule or remove the replication
rule from the protection policy to remove a replication session from a resource.
Failover
To start failover:
1. From Protection drop-down menu of the destination system, click Replication.
2. Select replication session.
3. Click FAILOVER.
4. Click FAILOVER.
Planned Failover
During a planned failover, a replication session is manually failed over from the
source system to the destination system. The destination system is fully
synchronized with the source system before the failover and there is no data loss.
Planned failover is run from the source. Both source and destination systems
must be available.
Synchronization
Synchronization updates the destination with the changes on the source since the
previous synchronization cycle.
To start synchronization:
1. From Protection drop-down menu of the source system, click Replication.
2. Select replication session.
3. Click SYNCHRONIZE.
4. Click SYNCHRONIZE.
The Pause and Resume functions are available in PowerStore Manager from the
Protection drop-down menu. Pause pauses the replication session and Resume
resumes the replication session.
Remove the protection policy only from the source volume; an error may occur if
you try to remove the protection policy from the destination system.
NOTE: By removing the protection policy from a volume you are also removing all
data protection from the volume including scheduled snapshots. If you only want
to remove the replication session, remove the replication rule from the protection
policy.
Note: This example shows one way to remove the replication session. You can also
remove the replication rule from the protection policy.
DR Failover Test
The test uses destination data (updated with last RPO snap), or a selected user
snapshot as volume on DR/destination for Read/Write operations. There is no time
limitation for running the failover test.
The operation does not affect the source storage resource, and the replication
process continues based on the original RPO setting.
Review the tabs to learn how the replication session behaves during a DR Failover
Test.
Continuous Synchronization
The replication process generates storage resource copies and synchronizes them
with the Shadow Read/Write snap on the destination.
Host Access
During a replication session, hosts have Read/Write access to the source storage
resource at the production site.
The hosts at the DR site have limited Read-Only access to the replica during a
normal operation session.
Once the user starts a failover test at the destination site, the system pauses the
replica update. The system does not update the storage resource that is used for
the test with data block changes resultant of a new synchronization.
The system keeps the storage resource steady and up-to-date with the last
synchronization before the disaster recovery test. Host access to the replica at the
destination site is changed to Read/Write.
Incremental Copy
The replication session is not paused or stopped. Any changes to the source
storage resource generate an incremental copy that updates the shadow
Read/Write snap at the destination site.
Once the DR failover test is stopped, the replica in the destination site is updated.
The system uses the last synchronized data to overwrite data on the test storage
resource.
In PowerStore Manager, you can start, monitor, or stop a DR failover test from the
properties page of the destination storage resource.
You can perform the same operations on the running replication session from the
DR site.
From the volumes details page, select the PROTECTION card and the
REPLICATION tab.
The storage resource that is used for the test can be the destination data
or a snapshot on the DR cluster.
The graphic shows that the destination data is being used for
failover testing.
You must stop the failover test before starting a planned failover.
Key Points:
Planned Failover is initiated from the source and unplanned failover is initiated
from the destination.
You click the Planned Failover or Failover buttons to start the processes.
Synchronization is initiated from the source and asynchronously updates the
destination with changes made since the previous synchronization cycle.
Pausing and resuming replication is initiated on the source or destination
system.
DR Failover Test operations can be initiated from the destination system. A test
storage resource data is enabled for host I/O while the replication continues.
After removing a replication session, the destination replica remains.
Data Efficiency
Overview
PowerStore arrays optimize capacity and improve storage efficiency using features
such as Zero Detection, Deduplication, and Compression. These features work
together to reduce the physical amount of storage that is required to save a
dataset. Data Efficiency, also referred to as Data Reduction or Storage Efficiency,
results in a lower total operational cost.
Zero Detection logically detects and discards consecutive zeros, saves only one
instance, and uses pointers.
Compression uses physical hardware to encode data using fewer bits than the
original representation. The compression hardware offloads the compression
operations from the appliance processors to save CPU cycles.
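The space-saving idea behind zero detection can be sketched in software. PowerStore performs this inline in the data path; the 4 KB block size and marker used below are illustrative assumptions:

```python
# Simplified zero detection: all-zero blocks are discarded and replaced by
# a shared marker (a "pointer" to a single saved zero instance).

ZERO_BLOCK = None  # marker standing in for the single saved zero instance

def ingest(blocks, block_size=4096):
    """Return (stored_blocks, layout); zero blocks become markers."""
    zero = bytes(block_size)
    stored, layout = [], []
    for block in blocks:
        if block == zero:
            layout.append(ZERO_BLOCK)   # detected zeros: store nothing
        else:
            layout.append(len(stored))  # index of the physically stored block
            stored.append(block)
    return stored, layout
```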
DRAM (System Cache) is used as a caching layer as data enters and exits the
system. All data passes through and interacts with DRAM memory. How the data
interacts with DRAM memory depends on if the I/O is a read or a write. How a read
and write I/O passes through DRAM memory is explained later.
Stripe Logic assembles the data into 2 MB stripes before writing to the data
drives.
Data Drives provide the physical capacity to the system to store data. If additional
enclosures are attached to the system, they also add to the usable capacity of the
system. Within PowerStore, any drives within the system that are not part of the
NVRAM cache drives contribute to a single, large, usable capacity within the
system. This space is shared for all resources within the system.
Battery backup is required as NVMe NVRAM drives contain both volatile and
nonvolatile media. The volatile media provides fast access speeds, and is used for
write caching within the system while the appliance is under normal operation. If
power is interrupted or the system is being powered off, the volatile write cache is
destaged to the nonvolatile media within the NVMe NVRAM drives. Once the write
cache information is safely stored, power is removed from the drives.
Each NVMe NVRAM drive is dual-ported, meaning each node has access to the
drives through internal connections and the information that is contained in them. If
needed, the peer node can access the data as needed.
The graphic shows the data path: DRAM (System Cache), 2 MB stripe creation,
the 2 or 4 NVMe NVRAM drives that serve as write cache (1000-9000 models),
and the data drives.
Write Operation
The graphic explains the process when the PowerStore receives a write request.
1. An I/O enters Node B and is saved within the node DRAM memory. The I/O
is analyzed to determine:
a. What type of I/O it is.
b. What resource it is intended for.
c. The location within the resource being updated or requested.
d. Other metadata information.
2. If the I/O is determined to be a write, the data is copied into the write
cache on the NVRAM drives in 1000–9000 models.
On 500T models, write cache is provisioned from DRAM. After the
information is stored within the write cache, the copy in DRAM memory is
considered clean, because the write cache now holds the authoritative
copy of the information.
This data then becomes part of the read cache, until it is later replaced
in cache by newer or more highly accessed data.
3. For each write I/O that enters the system, the information is passed
between the nodes using tokens.
This operation updates the peer node that a new write has been
received and that it has the newest copy of the data.
4. A token includes information about the I/O, such as what resource was
updated and the address within the resource that was updated.
A token also includes information about the location the I/O was saved
to within the write cache.
5. After the host is acknowledged, data is copied from the write cache and is
passed through the deduplication and compression logic.
Read Operation
The graphic explains the process when the PowerStore receives a read request.
1. The host issues a read. The resource could be a volume, file system, or
datastore. First, the system must determine where the latest copy of the
data being requested is located.
2. The system reviews the private space of the resource to determine if the
block has been previously written. If so, the system locates the current
data.
3. Several things can happen here depending on where the data is located.
If the data is a pattern, such as all 0s or 1s, the system re-creates the
block in DRAM memory. (Go to step 4.)
If the latest copy is in the node DRAM memory, the system sends the
data to the host.
If the latest copy is in the NVMe NVRAM drives, the system copies
the data to DRAM memory. (Go to step 4.)
If the requested data resides on the data drives, it has been
deduplicated and compressed. The system examines the
deduplication pattern and determines the reference location. The
system copies the data from the reference location. The data is
decompressed, and the page is reconstructed in DRAM memory.
(Go to step 4.)
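The read-path decisions above can be summarized as a sketch; the location labels are illustrative names, not PowerStore internals:

```python
# Sketch of the read path: the actions taken depend on where the latest
# copy of the requested data is located.

def serve_read(location):
    if location == "pattern":        # e.g., all 0s or 1s
        return ["re-create block in DRAM", "send to host"]
    if location == "dram":           # latest copy already in node DRAM
        return ["send to host"]
    if location == "nvram":          # latest copy in NVMe NVRAM write cache
        return ["copy to DRAM", "send to host"]
    if location == "data_drives":    # deduplicated and compressed on disk
        return ["resolve deduplication reference", "decompress",
                "reconstruct page in DRAM", "send to host"]
    raise ValueError(f"unknown location: {location}")
```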
To view the data reduction savings from PowerStore Manager, go to Dashboard >
Capacity > Data Savings. The system displays metrics for the cluster:
Capacity: Shows both the used and free storage capacity on the cluster.
Used: Physical capacity that is used by all appliances in the cluster. The
percentage and amount that is used are both available in this area.
Free: The amount of free physical capacity on the cluster.
Overall Efficiency
Overall Efficiency is the computed ratio of the Total Space Provisioned to the
Physical Space Used.
Example:
Five 2 GB volumes were provisioned with 1 GB of data that is written to each of
them.
Each of the five volumes has one snapshot, for another five 2 GB volumes.
All volumes are thinly provisioned with deduplication, and compression applied.
There are 2 GB of physical space used.
Space Efficiency Ratio = (10 GB + 10 GB) / 2 GB = 20 GB / 2 GB = 10:1
Thin Savings
Thin Savings is the ratio of volumes’ provisioned size to logical size used.
Example:
Ten 2 GB volumes are provisioned.
500 MB (0.5 GB) of data is written to each of them.
Thin Savings Ratio = (10 x 2 GB) / (10 x 0.5 GB) = 20 GB / 5 GB = 4:1
Snap Savings
Snap Savings is the ratio of the original space used by the snaps to the data
uniquely owned by the snaps.
Example:
A volume has 2 GB of data when a snapshot is taken.
After the snapshot is taken, 0.5 GB (500 MB) of the volume's data is
overwritten.
As a result, the snapshot has the original 2 GB of data written with 1.5 GB of data
shared with the volume and 0.5 GB unique data.
The snap savings would be 2 GB/0.5 GB or 4:1. The snapshot_savings value will
be 4 in this case.
Data Reduction
Data Reduction is the ratio of the space that would have been used if
deduplication and compression were not applied to the physical space occupied
after deduplication and compression.
Example:
A volume was written with 2 GB of data when a snapshot is taken.
After the snapshot is taken, 0.5 GB (500 MB) of the volume's data is
overwritten.
As a result, the volume and the snapshot share 1.5 GB of data. The snapshot and
volume each owns 0.5 GB of unique data. Altogether, the volume and snapshot
own 2.5 GB of data. If all this data occupied 1 GB of space, the Data Reduction
Ratio would be 2.5 GB/1 GB or 2.5:1.
Data Reduction Ratio = 2.5 GB / 1 GB = 2.5:1
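The four worked ratios in this section reduce to simple arithmetic, which can be checked directly (illustrative calculation only):

```python
# Recomputing the worked examples from the text.

def ratio(numerator_gb, denominator_gb):
    return numerator_gb / denominator_gb

overall_efficiency = ratio(10 + 10, 2)        # provisioned + snaps vs. physical
thin_savings       = ratio(10 * 2, 10 * 0.5)  # provisioned size vs. logical used
snap_savings       = ratio(2, 0.5)            # original data vs. snap-unique data
data_reduction     = ratio(2.5, 1)            # pre-reduction vs. post-reduction
```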
Data Encryption
Overview
Tampering with data violates data integrity, and data theft compromises data
availability and confidentiality. PowerStore uses Data At Rest Encryption (D@RE)
to help protect against data tampering and data theft. D@RE guards against
threat agents reading content from any of the drives, even if the drives are removed
from the PowerStore or are physically disassembled.
Data Encryption protects against data tampering and data theft in the following use
cases:
A drive is stolen from a system, and a threat agent attempts to access the data
on it.
A threat agent attempts to read data while a drive is in transit to another location.
A threat agent attempts to read data from a drive that is broken or discarded.
Due to the possibility of data loss, the keystore file must be backed up and
saved before and after adding or removing any drive in the system!
Architecture
SEDs
All PowerStore systems leave the factory with encryption enabled except
units destined for countries where encryption is not allowed or countries that
are restricted by the United States federal government.
PowerStore encrypts the data as close to its origin as possible by using SEDs.
SEDs have dedicated hardware on each drive to encrypt and decrypt data as it is
written or read.
All PowerStore drives ship with D@RE enabled and are FIPS-140-2 Level 1
certified.
Encryption is automatically activated during the initial configuration of a
cluster, except for systems that are destined for countries where encryption is
restricted. For countries where encryption is prohibited, non-encrypted systems
are available.
In countries where encryption is allowed, there is no way to disable data
encryption.
When a new appliance joins an existing encrypted cluster, a check is run to
ensure that the appliance is encryption capable.
KMS
PowerStore uses an embedded Key Manager Service (KMS) that resides in the
Base System Container (BSC) of the Active node of each appliance in the
cluster. The BSCs on the active nodes work together and automate the
management of all encryption keys.
Each appliance has an independent keystore. All keys are aggregated to the
Master appliance in a cluster. A collective cluster backup of the keys can be run
from the master appliance. No external key management is available in the
initial release of PowerStore.
If for any reason the KMS fails or the keystore file cannot be read, encrypted data
on the drives cannot be retrieved. It is very important to back up the keystore.
Diagram: a PowerStore cluster in which the KMS on each appliance forms the KMS collective.
Key Backup
If for any reason the KMS fails or the keystore file cannot be read,
encrypted data cannot be retrieved! This is why it is very important to
back up the keystore file often, and before and after any drive add or
remove procedure.
Encryption Status
Encryption status can be viewed at the cluster level, the appliance level, and the
individual drive level.
The encryption status is shown at the individual drive level on the Hardware tab of
the Hardware > Appliance page:
Unsupported – The drive does not support encryption. (Not_Supported)
Unknown – The appliance has not yet attempted to enable encryption on the
drive. This status may be seen during initial encryption activation on an
appliance, or during the addition of new drives to a configured appliance.
Foreign – The drive is supported, but was taken from another system. The
drive must be decommissioned before it can be used.
(Supported_Locked_Foreign). This requires the help of Dell EMC support.
Encrypting – The appliance is enabling encryption on the drive. This status
may be seen during the initial activation of encryption, or during the addition of
new drives to a configured appliance.
Encrypted – The drive is encrypted. This is the typical state of a drive in an
appliance that is encryption capable.
Disabled – The drive cannot have encryption enabled due to country-specific
import restrictions. If any drives report this status, all drives in the cluster will
report the same status.
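For scripting against exported status data, the drive statuses above could be summarized in a small lookup; this is a sketch using the status names listed here, and describe_status is an illustrative helper, not part of any PowerStore interface:

```python
# Status strings follow the drive-level statuses listed above; the helper
# itself is illustrative and not part of any PowerStore interface.
DRIVE_ENCRYPTION_STATUS = {
    "Unsupported": "Drive does not support encryption.",
    "Unknown": "Encryption has not yet been attempted on this drive.",
    "Foreign": "Drive was taken from another system; decommission it with support assistance.",
    "Encrypting": "The appliance is enabling encryption on the drive.",
    "Encrypted": "Typical state of a drive in an encryption-capable appliance.",
    "Disabled": "Encryption blocked by country-specific import restrictions.",
}

def describe_status(status: str) -> str:
    """Return a short description for a reported drive encryption status."""
    return DRIVE_ENCRYPTION_STATUS.get(status, "Unrecognized status")

print(describe_status("Encrypted"))
```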
Drive Operation
User Actions
Keystore Alerts
An alert that new keystore changes have occurred indicates that you should
back up (download) the changed keystore and save it for a future restore if required.
Back up Keystore
Backup
A backup from the master appliance is an aggregate of the keystores for the cluster.
An alert is generated whenever any appliance has key changes.
All keystore backups are synchronized with the primary.
Audit Logs
On the PowerStore Manager Audit Log page, find events related to D@RE, such
as backup downloads and restores.
Repurpose Drive
Note: Using a drive from another system is discouraged; however, it can be done
with the assistance of Dell EMC support.
Note: To repurpose a drive for use in PowerStore, contact your service provider.
Serviceability
Here is an overview of some steps that you can take when troubleshooting
encryption issues:
1. Check cluster encryption status.
2. Check drive encryption status.
3. Check audit log for KMS events.
4. With assistance from your service provider, restore the appropriate archived
keystore file that was previously saved.
Array Decommission
Introducing CloudIQ
SupportAssist enabled
Connect to CloudIQ
Overview
The Overview provides a visual look at overall system health and connectivity.
Alerts, performance anomalies, and storage issues are available at a glance.
Clicking the hyperlinks (in blue) on this page opens information specific to that item.
For example, clicking the 3 critical alerts shows the alert details.
Health
Under the Health section, view system health (shown here), issues, alerts, and
updates. Issues displays a comprehensive view of all current health issues across
all storage systems in the environment. Alerts displays any alerts reported in the
last 24 hours.
Component status
Configuration status
Capacity status
Performance status
Inventory
For a list of systems and hosts in CloudIQ, view the Inventory. Find the system
software version, physical location, and last contact time.
See how fresh the CloudIQ data is by checking the last contact time on the inventory view.
Capacity
The System Capacity page displays the system-level storage capacity. Quickly
see used and free storage on each system and storage efficiency metrics.
Performance
The System Performance page displays key system level performance metrics
across all systems. This includes IOPS, Bandwidth, and System Latency.
The second option under Performance is the Metrics Browser. Use this browser to
select metrics and create custom performance dashboards.
Cybersecurity
The Cybersecurity page displays cybersecurity system risk levels across all
systems. The risk level metrics include Risk Level, Evaluation Plan and Issues.
Other options under Cybersecurity are Cybersecurity Issues and Policy. The
Cybersecurity Issues list includes Severity, Issue, System and Created.
Cybersecurity policies are found on the Policy page.
Reports
Lifecycle
The Lifecycle page provides a way to view system and component life cycles.
The Service Contracts option displays which systems are under service contracts,
contract numbers, type and expiration dates.
Admin
Under Admin, manage the connectivity and users in CloudIQ. Use this view to add
or remove systems and add or remove user privileges.
CloudIQ shows high-level information about all the systems that are sending data
to it. This makes it easy to monitor performance overall.
You can access information about any specific system in CloudIQ by clicking its
title on any of the CloudIQ views.
Health Score
This tab focuses on the Health Score of this particular system, showing the details
about any issues affecting the health score.
PowerStore Manager for this system can be launched from a link on this tab.
Configuration
The Configuration tab shows configuration data about the system. Other tabs
below show details about the appliances, drives, hosts, storage, Virtual Machines,
and Storage Containers. If VMs and Storage Containers are not used, the tabs are
shown but hold no information.
Capacity
The Capacity tab displays detailed capacity information for the system, including
total, used, and free capacity, as well as efficiency metrics and how capacity is
allocated to different storage objects.
Performance
The Performance tab shows key performance metrics for storage objects that are
sorted by their 24-hour averages. These are the resources with the highest latency,
IOPS, and bandwidth. The bottom part of the page provides more detail about each
of those performance categories.
Cybersecurity
System Risk Level, Cybersecurity issues, and an evaluation plan are presented on
the Cybersecurity tab.
Each X marks a role column that has the capability:
Change your system local password: X X X X X
View system settings, status, and performance information: X X X X X
Modify system settings: X X
Connect to vCenter: X X X
View a list of local accounts: X X
Add, delete, or modify a local account: X X
View system storage information through a vCenter server that is connected to
the system's VASA provider, and register / re-register: X X
OpenLDAP
AD
Advanced Settings
Storage Resources
A protection policy may be selected at the time the storage resource is created, or
associated with an existing storage resource later. Associating a protection policy
with a supported storage resource is not a requirement.
Protection Policy
Diagram: a protection policy can be associated with a volume, a volume group, or
a virtual machine (vVols).
Only one protection policy may be applied to each supported storage resource:
A standalone volume, or a volume in a volume group (if the volume group has
no protection policy associated with it).
For a volume group, the policy is applied to all the member volumes.
Member volumes that are removed from the protected group retain the
associated protection policy.
To avoid any conflict with the volume group protection policy, new members
cannot have a policy already associated with them.
Virtual machines support only snapshot rules. The policy applies to the vVols
underlying the virtual machine.
File systems support only snapshot rules and ignore the replication rules.
You can also apply protection policies to thin clones of volumes, volume groups, file
systems, and snapshots.
You can substitute the protection policy that is applied to a storage resource with
another configured policy at any time.
Substitute the protection policy with one that has different snapshot rules.
If the associated policy has no replication rule, you can associate the resource
with one that does.
When swapping a policy for one that also has a replication rule, ensure that both
policies use the same remote system. This restriction avoids an unnecessary
initial full synchronization.
The tabs show a different method for performing the same task for each one of the
supported storage resources.
Volumes
In the example, the Policy1 policy is assigned to the volumes: Vol01 and Vol05.
One of the volumes (Vol01) is a member of the VolumeGroup-1 volume group.
The policy can be associated with this volume because the volume group has no
policy that is associated with it.
Volume Groups
To protect a volume group using the PowerStore user interface, expand the
Storage submenu, and select Volume Groups.
In the example, the Critical Applications policy is associated with volume group C2-
VG01. Two volumes that are members of this group are associated with the
protection policy.
File Systems
To protect a file system using the PowerStore user interface, expand the Storage
submenu, and select File Systems.
2. Open the PROTECTION menu and select the Assign Protection Policy
option.
3. From the list of existing policies, select the one that you want to associate with
the file system.
4. Click Apply to commit the changes. The policy is applied to the file system.
For policies that include a replication rule, only the snapshot schedule is used.
Replication is not supported for the file systems.
Virtual Machines
To protect a virtual machine using the PowerStore user interface, expand the
Compute submenu, and select Virtual Machines.
3. From the list of existing policies, select the one that you want to associate with
the virtual machine.
4. Click Apply to commit the changes. The policy protects the virtual machine, and
the underlying vVols.
For policies that include a replication rule, only the snapshot schedule is used.
Replication is not supported for virtual machines.
Bandwidth: X X X X X
CPU Utilization: X
I/O Size: X X X X
IOPS: X X X X X
Latency: X X X X X
% Read: X
Queue Depth: X
Days
Select the days on which a snapshot will be created.
Retention
Set a retention period that determines how long to keep the snapshot.
RPO
Recovery point objective (RPO) indicates the acceptable amount of data, which is
measured in units of time, that may be lost in a failure.
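The RPO definition above can be illustrated with a short sketch that checks whether a destination copy is fresh enough (rpo_met and the timestamps are illustrative, not PowerStore code):

```python
from datetime import datetime, timedelta

def rpo_met(last_sync: datetime, now: datetime, rpo: timedelta) -> bool:
    """True if the destination copy is no older than the RPO allows,
    i.e. at most `rpo` worth of data could be lost in a failure now."""
    return (now - last_sync) <= rpo

now = datetime(2024, 1, 1, 12, 0)
print(rpo_met(datetime(2024, 1, 1, 11, 30), now, timedelta(hours=1)))  # True: within a 1-hour RPO
print(rpo_met(datetime(2024, 1, 1, 10, 30), now, timedelta(hours=1)))  # False: RPO exceeded
```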
Rule Name
Enter a name for the new rule.
Shadow Read/Write
The Shadow Read/Write snap is an internal object that is used only for replication
and is not exposed to hosts.
Share-based QoS