Dell PowerStore: VMware vSphere Best Practices
May 2024
H18116.7
White Paper
Abstract
This document provides best practices for integrating VMware vSphere
hosts with Dell PowerStore.
Copyright
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2020–2024 Dell Inc. or its subsidiaries. All Rights Reserved. Published in the USA May 2024 H18116.7.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change
without notice.
Contents
Executive summary
Introduction
References
Executive summary
Introduction: This document provides recommendations, tips, and other helpful guidelines for
integrating external VMware vSphere hosts with the Dell PowerStore platform.
Audience: This document is intended for IT administrators, storage architects, partners, and Dell
Technologies employees. This audience also includes any individuals who may evaluate,
acquire, manage, operate, or design a Dell Technologies networked storage environment
using PowerStore systems.
Revision: May 2023, H18116.5. Updates for PowerStoreOS 3.5: secure snapshots and
PowerStore backup to PowerProtect DD series appliances.
We value your feedback: Dell Technologies and the authors of this document welcome your
feedback on this document. Contact the Dell Technologies team by email.
Note: For links to other documentation for this topic, see the PowerStore Info Hub.
Introduction
PowerStore overview: PowerStore is a robust and flexible storage solution that is ideal for use
with VMware vSphere.
PowerStore achieves new levels of operational simplicity and agility. It uses a container-
based microservices architecture, advanced storage technologies, and integrated
machine learning to unlock the power of your data. PowerStore is a versatile platform with
a performance-centric design that delivers multidimensional scale, always-on data
reduction, and support for next-generation media.
Prerequisite reading: Before implementing the best practices in this document, we recommend
reviewing and implementing the recommended configurations in the Dell Technologies Host
Connectivity Guide and reviewing other resources available at Dell.com/powerstoredocs.
Terminology: The following table provides definitions for some of the terms that are used in this
document.
Table 1. Terminology
Base enclosure: The enclosure containing both nodes (node A and node B) and the NVMe drive slots.
Node: The component within the base enclosure that contains processors and memory. Each appliance consists of two nodes.
NVM Express over Fabrics (NVMe-oF): A specification that extends the NVMe command set across network fabrics, with Fibre Channel and TCP/IP among the supported transports.
NVMe over Fibre Channel (NVMe/FC): Allows hosts to access storage systems across a network fabric with the NVMe protocol using Fibre Channel as the underlying transport.
NVMe over TCP (NVMe/TCP): Allows hosts to access storage systems across a network fabric with the NVMe protocol using TCP as the underlying transport.
PowerStore Manager: The web-based user interface (UI) for storage management.
Host configuration
Introduction: While most settings for stand-alone ESXi hosts that are connected to PowerStore
appliances can remain at the default values, some changes are required for PowerStore
stability, performance, and efficiency. The recommended changes and instructions about
how to set them are specified in the document Dell Technologies Host Connectivity
Guide. While administrators can use this section for high-level explanations and reasoning
behind the recommendations, administrators should always consult the Host Connectivity
Guide for the current settings.
Note: The Virtual Storage Integrator (VSI) allows administrators to easily configure the ESXi host
best-practice settings with PowerStore. See Virtual Storage Integrator for more details.
Queue depth: There are multiple locations in ESXi and the guest operating systems where queue
depth can be modified. While increasing the queue depth in an application, vSCSI device, or
ESXi driver module can improve performance, it can also overwhelm the array. For details
about queue-depth settings, see the Dell Technologies Host Connectivity Guide.
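Before tuning anything, it can help to inspect the current per-device limits. The following commands are a representative sketch; the device identifier and the value 32 are placeholders, and the exact options should be verified against your ESXi release:

```shell
# List block devices with their current queue-depth values
# (look for "Device Max Queue Depth" and "No of outstanding IOs with competing worlds")
esxcli storage core device list

# Adjust the per-device outstanding I/O limit (device name is a placeholder)
esxcli storage core device set --device naa.xxxxxxxx --sched-num-req-outstanding 32
```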
Timeouts: Setting disk timeouts is an important factor for applications to survive both unexpected
and expected node outages, such as failures or rebooting for updates. While the default
SCSI timeout in most applications and operating systems is 30 seconds, storage vendors
(including Dell Technologies) and application vendors typically recommend increasing
these timeouts to 60 seconds or more to help ensure uptime. Two of the main locations to
change the timeouts are at the ESXi host level and at the virtual-machine-guest-OS level.
For details about setting timeouts for ESXi, see the Dell Technologies Host Connectivity
Guide.
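As an illustration of the guest-OS side, a Linux virtual machine's SCSI disk timeout can be raised to 60 seconds with a udev rule. The 60-second value follows the guidance above; the rule below is a sketch, and the file name and match conditions should be confirmed against your distribution's conventions:

```shell
# /etc/udev/rules.d/99-disk-timeout.rules
# Raise the SCSI command timeout for all SCSI disks (type 0) to 60 seconds
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="60"
```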
Multipathing: With the vSphere Pluggable Storage Architecture (PSA), the storage protocol determines
which Multipathing Plugin (MPP) is assigned to volumes mapped from the PowerStore
array. With SCSI-based protocols such as Fibre Channel and iSCSI, the Native
Multipathing Plug-in (NMP) is used, whereas with NVMe-oF, the VMware High
Performance Plug-in (HPP) is used.
Also, the recommended esxcli command sets the IOPS path-change condition to one I/O
per path. While the default Round Robin (RR) PSP setting sends 1,000 I/Os down each path
before switching to the next path, this recommended setting instructs ESXi to send one
command down each path before switching. This setting results in better utilization of each
path's bandwidth, which is useful for applications that send large I/O block sizes to the array.
According to the Dell Technologies Host Connectivity Guide, SSH to each ESXi host
using root credentials to issue the following command (reboot required):
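A representative form of the SATP claim rule is shown below; the vendor and model strings and the claim options are assumptions based on typical PowerStore guidance, so always copy the current rule from the Host Connectivity Guide:

```shell
# Claim PowerStore volumes with VMW_SATP_ALUA, the Round Robin PSP, and iops=1
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V "DellEMC" -M "PowerStore" \
    -P VMW_PSP_RR -O "iops=1" -c tpgs_on -e "PowerStore claim rule"
# Reboot the host for the rule to apply to existing devices
```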
The claim rule can also be added to discovered ESXi hosts using VMware PowerCLI:
Note: The following commands are for vSphere 7 and 8 ESXi hosts. ESXi 6.7 hosts should also
include the disable_action_OnRetryErrors option. See the Dell Technologies Host Connectivity
Guide for more information.
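A PowerCLI sketch of the same claim rule follows, using the Get-EsxCli -V2 interface. The argument names mirror the esxcli namespace; the vendor and model strings are assumptions to verify against the Guide:

```powershell
# Add the PowerStore SATP claim rule on every connected host
foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $a = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
    $a.satp        = "VMW_SATP_ALUA"
    $a.vendor      = "DellEMC"
    $a.model       = "PowerStore"
    $a.psp         = "VMW_PSP_RR"
    $a.pspoption   = "iops=1"
    $a.claimoption = "tpgs_on"
    $a.description = "PowerStore claim rule"
    $esxcli.storage.nmp.satp.rule.add.Invoke($a)
}
```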
The HPP has multiple Path Selection Schemes (PSS) available to determine which
physical paths are used for I/O requests. Load Balance – IOPs (LB-IOPS) is the preferred
Path Selection Scheme as recommended by the Dell Technologies Host Connectivity
Guide. In addition, the LB-IOPS path switching frequency should be changed from the
default value of 1,000 to 1.
According to the Dell Technologies Host Connectivity Guide, SSH to each ESXi host
using root credentials to issue the following command (reboot required):
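The HPP claim rule generally resembles the following; the rule number and the NVMe controller model string are illustrative assumptions, so take the exact command from the Guide:

```shell
# Claim PowerStore NVMe namespaces with HPP, using LB-IOPS with a path switch every I/O
esxcli storage core claimrule add --rule 914 --type vendor \
    --nvme-controller-model "dellemc-powerstore" --plugin HPP \
    --config-string "pss=LB-IOPS,iops=1"
# Reboot the host for the rule to take effect
```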
For more information about NVMe-oF and the High Performance Plug-in, see the
following resources on the VMware website:
Operating system disk formats: While most versions of VMFS are backward compatible, it is a
best practice to verify and use the latest version of VMFS recommended by VMware. Typically,
new VMFS versions are bundled with an ESXi upgrade. As a migration path, VMware vCenter
allows administrators to use VMware vSphere Storage vMotion to migrate virtual machines to
new VMFS datastores formatted with the latest version.
Introduction: NVMe over Fibre Channel support was introduced in vSphere 7.0 and PowerStoreOS 2.0.
NVMe over TCP support was introduced with vSphere 7.0 Update 3 and PowerStoreOS
2.1.
NVMe-oF vVols: PowerStoreOS 3.0 introduced NVMe-vVol host connectivity supporting NVMe/FC
vVols. NVMe-oF vVols is a new specification that introduces VASA 4.0 and vVols 3.0. This
new specification requires NVMe-capable HBAs and fabric switches to extend the volumes
from the array to the host. VMware added the corresponding NVMe-oF vVol support in
vSphere 8.0.
Support for NVMe/TCP vVols was introduced with PowerStoreOS 3.6 and VMware
vSphere 8.0 Update 1. NVMe/TCP yields Fibre Channel-like performance at Ethernet
prices, so customers of any size can benefit from enterprise-level performance for
demanding application requirements.
Note: TCP ports 8009 and 4420 must be open for discovery and data respectively. In addition, the
ESXi host time must be synchronized with PowerStore. Time synchronization can be automated
using NTP or PTP for vSphere and NTP for PowerStore.
Figure 2. Enabling NVMe over TCP on a VMkernel port in the vSphere Client
Figure 4. Using PowerStore Manager to configure a PowerStore host with an NVMe vVol
initiator
Note: With NVMe-oF vVols, there is no physical Protocol Endpoint (PE): the PE is now a logical
object representation of the ANA group where the vVols reside. Until a VM is powered on, the vPE
does not exist. When a VM is powered on, the vPE is created so the host can access the vVols in
the ANA group. For more information, see What’s New with vSphere 8 Core Storage.
NVMe/FC host configurations: When configuring an ESXi host for NVMe/FC, before you add it to
a PowerStore appliance or cluster, you must change the NVMe Qualified Name (NQN), which
is similar to an iSCSI Qualified Name (IQN), to the UUID format.
According to the Dell Technologies Host Connectivity Guide, SSH to each ESXi host
using root credentials, and issue the following command (reboot required):
To verify the host NQN was generated correctly after the reboot, use the following
command:
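The module parameter and verification commands typically take the following form. Here, parameter value 0 selects the UUID-based NQN format, consistent with the note later in this section that value 1 selects the hostname format; confirm both against the Guide:

```shell
# Switch the host NQN to the UUID-based format, then reboot
esxcli system module parameters set -m vmknvme -p vmknvme_hostnqn_format=0

# After the reboot, confirm the generated host NQN
esxcli nvme info get
```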
According to the Dell Technologies Host Connectivity Guide, depending on the NVMe
HBA installed, issue the following commands with root privileges (reboot required):
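Depending on the Fibre Channel HBA vendor, enabling the NVMe personality is usually a driver module parameter. The examples below are commonly cited forms for QLogic and Emulex drivers and should be treated as assumptions to verify against the Guide and the HBA vendor's documentation:

```shell
# QLogic (qlnativefc driver): enable NVMe support
esxcli system module parameters set -m qlnativefc -p ql2xnvmesupport=1

# Emulex (lpfc driver): enable both SCSI and NVMe FC4 types
esxcli system module parameters set -m lpfc -p lpfc_enable_fc4_type=3
```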
Note: You must change the host NQN format parameter before adding the host in PowerStore
Manager. Changing the vmknvme_hostnqn_format parameter after the host has already been
added to the appliance changes its NQN, which causes the host to be disconnected from the
array.
NVMe over TCP (NVMe/TCP): NVMe over TCP support was introduced with vSphere 7.0 Update 3
and PowerStoreOS 2.1. When planning to implement this new protocol, confirm that the host's
networking hardware is supported in the VMware Compatibility Guide.
This section provides a high-level overview of configuration best practices, but for more
information, see the PowerStore resources on the Dell Technologies Info Hub.
The best practice for storage network redundancy is to add two NVMe over TCP adapters
and associate them with their respective storage network’s physical NICs (see the
following figure).
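Alongside the vSphere Client workflow, the two software adapters can also be created from the command line. The vmnic names below are placeholders for the two storage-network uplinks:

```shell
# Create an NVMe over TCP software adapter on each storage-network uplink
esxcli nvme fabrics enable --protocol TCP --device vmnic2
esxcli nvme fabrics enable --protocol TCP --device vmnic3

# List the resulting vmhba adapters
esxcli nvme adapter list
```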
After you add the storage adapters, you can configure the cluster networking. The best
practice is to use a vSphere Distributed Switch (VDS) with two distributed port groups,
one for each of the redundant storage networks (see the following figure).
Since each NVMe over TCP storage adapter is bound to a physical NIC, you must adjust
the Teaming and Failover for each distributed port group. Set the physical uplink that is
bound to the vmhba to Active, and set the other NICs to Unused (see the following figure).
Figure 10. Teaming and failover settings for the distributed port group
Next, add the VMkernel adapters to their respective distributed port groups, and enable
the NVMe over TCP service (see the following figure). These VMkernel adapters supply
the IP addresses for each of the storage adapters (for example vmhba66 or vmhba67 as
shown in Figure 8).
Figure 11. VMkernel adapter with NVMe over TCP service enabled
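Enabling the service on a VMkernel adapter can also be scripted; the vmk numbers below are placeholders for the two storage VMkernel adapters:

```shell
# Enable the NVMe over TCP service on the storage VMkernel adapters
esxcli network ip interface tag add -i vmk2 -t NVMeTCP
esxcli network ip interface tag add -i vmk3 -t NVMeTCP
```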
After you configure the host and cluster networking pieces, the dual storage networks
should look like the example cluster shown in the following figure.
After you complete the prerequisite networking configuration, add the storage controllers
to discover the PowerStore array ports and IP addresses. You can add the storage
controllers manually, by using direct discovery, or automatically by using the SmartFabric
Storage Software (SFSS) as a Centralized Discovery Controller (CDC). PowerStoreOS
3.0 added enhancements to automate PowerStore registration with the SFSS/CDC. For
more information, see the SmartFabric Storage Software (SFSS) for NVMe over TCP –
Deployment Guide.
After controller discovery, add the respective PowerStore front-end ports to each storage
adapter (see the following figure). For example, add storage network 1 ports to vmhba66,
and add storage network 2 ports to vmhba67. This process can be streamlined when
using zoning capabilities with SFSS.
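For direct discovery without a CDC, controllers are typically discovered and connected per adapter. The discovery port 8009 follows the earlier note; the adapter names and IP addresses below are placeholders:

```shell
# Discover PowerStore NVMe/TCP controllers through each storage adapter
esxcli nvme fabrics discover --adapter vmhba66 --ip-address 192.168.10.10 --port-number 8009
esxcli nvme fabrics discover --adapter vmhba67 --ip-address 192.168.20.10 --port-number 8009

# Connect to all controllers returned by discovery
esxcli nvme fabrics discover --adapter vmhba66 --ip-address 192.168.10.10 --port-number 8009 --connect-all
esxcli nvme fabrics discover --adapter vmhba67 --ip-address 192.168.20.10 --port-number 8009 --connect-all
```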
Finally, add the ESXi hosts to PowerStore Manager before provisioning volumes. If
everything is configured correctly, the host NQN should be associated with both VMK IPs
as listed in the Transport Address field as shown in the following figure.
Figure 14. PowerStore Manager: Adding NVMe/TCP host with both VMK IPs
Note: If an ESXi host has been previously configured with NVMe/FC, set the
vmknvme_hostnqn_format=1 variable back to the hostname option before configuring
NVMe/TCP. For more information, see the Dell Technologies Host Connectivity Guide.
Introduction: There are several best practices for provisioning storage from a PowerStore appliance to
an external vSphere cluster. The size of VMFS datastores and the number of virtual
machines that are placed on each datastore can affect the overall performance of the
volume and array.
Volume and VMFS datastore sizing: When a volume is created on PowerStore, the best practice
is to create a volume no larger than needed and use a single VMFS partition on that volume.
While the maximum datastore size can be up to 64 TB, we recommend beginning with
a small datastore capacity and increasing it as needed. Right-sizing datastores prevents
accidentally placing too many virtual machines on the datastore and decreases the
probability of resource contention. Since datastore and VMDK sizes can be easily
increased later, there is little risk in starting small.
If a standard for the environment has not already been established, the recommended
starting size for a VMFS datastore volume is 1 TB as shown in the following figure.
Note: The VSI plug-in can automate the process of increasing the size of datastores with only a
few clicks.
Performance optimizations: While ESXi storage performance tuning is a complex topic, this
section describes a few simple methods to proactively optimize performance.
Note: The VSI plug-in allows administrators to quickly set host best practices for optimal operation
and performance.
Placing a single VM on its own datastore has two disadvantages: it reduces
consolidation ratios, and it increases the management overhead of maintaining numerous
items.
Partition alignment: Due to the PowerStore architecture, manual partition alignment is not necessary.
Guest vSCSI adapter selection: When creating a new virtual machine, vSphere automatically
suggests the disk controller option based on the operating system selected (see the following
figure). The Dell Technologies Host Connectivity Guide recommends using the VMware
Paravirtual SCSI controller for optimal performance. You can find more information about the
Paravirtual adapter, including its benefits and limitations, in VMware documentation.
Array offload technologies: VMware can offload storage operations to the array to increase
efficiency and performance. This action is performed by vStorage APIs for Array Integration
(VAAI), a feature that contains primitives for both block and file storage types:
Block:
• Write Same (Zero): Also known as block zeroing. This primitive is primarily used for
the ESXi host to instruct the storage to zero out eagerzeroedthick VMDKs.
• XCOPY (Extended Copy): Also known as Full copy. Instead of the ESXi host
performing the work of reading and writing blocks of data, this primitive allows the
host to instruct the array to copy data which saves SAN bandwidth. This operation
is typically used when cloning VMs.
• Atomic Test & Set (ATS): Also known as Hardware accelerated locking. This
primitive replaces SCSI-2 reservations to increase VMFS scalability with changing
metadata on VMFS datastores. With SCSI-2 reservations, the entire volume had to
be locked, and all other hosts in the cluster had to wait while that ESXi host
changed metadata. The hardware accelerated locking primitive allows a host to
lock only the metadata on disk it needs, not hampering I/O from other hosts while
the operation is performed.
• UNMAP: Also known as dead space reclamation. This primitive uses the SCSI
UNMAP command to release blocks that are no longer in use back to the array. For
example, after deleting a VM, the ESXi host issues a series of commands to the
PowerStore array to indicate that it is no longer using certain blocks within a
volume. This capacity is returned to the pool so that it can be reused.
File:
• Full File Clone: Enables the offloading of powered-off virtual disk cloning to the
array. Similar to XCOPY for block.
• Fast File Clone/Native Snapshot Support: Enables the creation of virtual machine
snapshots to be offloaded to the array.
• Extended Statistics: Enables visibility into actual space usage on NAS datastores
and is especially useful for thin-provisioned datastores.
• Reserve Space: Enables provisioning virtual disks using the Lazy Zeroed or Eager
Zeroed options on NFS storage.
Note: For more information about VAAI, see VMware vSphere APIs: Array Integration (VAAI).
Introduction: This section describes PowerStore features used to manage and monitor storage.
Mapping or unmapping practices: After a volume is created, mapping specifies the hosts that the
PowerStore array presents storage to.
Cluster mappings
For ESXi hosts in a cluster, we recommend using host groups to uniformly present
storage to all initiators for reduced management complexity (see the following two
figures). This practice allows a volume or set of volumes to be mapped to multiple hosts
simultaneously and maintain the same logical unit number (LUN) across all hosts.
Note: It is required to use consistent LUN numbers for standard volume mappings and Metro
Volume mappings across hosts within the same vSphere cluster, hosts within other vSphere
clusters, or hosts not in a cluster. For additional information, see the following references:
• Dell KB Article 000191503: PowerStore: Inconsistent Logical Unit Numbers between hosts
• VMware vSphere Product Documentation: vSphere Storage, Setting LUN Allocations, Storage provisioning
Figure 19. Host group details for the vSphere cluster showing three ESXi hosts
Thin clones: PowerStore thin clones make block-based copies of a volume or volume group and can
also be created from a snapshot. Because the thin clone volume shares data blocks with
the parent, the capacity usage of the child volume mainly consists of the delta changes
from after it was created. Thin clones are advantageous in a vSphere environment
because a VMFS datastore full of virtual machines can be duplicated for testing purposes,
all while consuming less storage. For example, if a vSphere administrator must clone a
multi-terabyte database server for a developer to run tests, the cloned VM can be isolated
and tested while consuming only the blocks that change.
Within the PowerStore architecture, thin clones have several advantages for storage
administrators:
• The thin clone can have a different data protection policy from the parent volume.
• The parent volume can be deleted, and the thin clones become their own resource.
• VMs can be cloned for testing monthly patches or development.
Data encryption: Data at rest encryption (D@RE) is enabled by default on the PowerStore array. No
configuration steps are necessary to protect the drives.
Space reclamation: The VAAI dead space reclamation primitive is integrated into the array through
the SCSI protocol. Depending on the version of ESXi the host is running, the primitive can
automatically reclaim space.
VMFS-6 and ESXi versions that support automatic UNMAP: The best practice is to
keep the reclamation rate at the default setting of Low (100 MB/s), or to reduce it
(see the following figure).
Note: vSphere 8 introduces the ability to configure the space reclamation rate to as low as 10
MB/s. This can be useful for environments where space reclamation at a higher rate can be
disruptive to the storage fabric or its consumers.
VMFS-5 and ESXi versions that do not support automatic UNMAP: The best practice
is to run the manual UNMAP command with a reclaim unit of 200:
esxcli storage vmfs unmap --volume-label=volume_label --reclaim-unit=200
Note: In certain older versions of ESXi, you must manually invoke the dead space reclamation
primitive. See the VMware Knowledge Base for more information about which versions require
additional steps.
VASA: VMware vSphere APIs for Storage Awareness (VASA) is a feature that allows vSphere
hosts to gain insight into the storage types backing the datastores and enables vSphere to
manage storage. For example, the VASA provider that is embedded into PowerStore
allows it to manage vVols.
Caution: The VASA certificate is set with a one-year expiration by default, and you should
periodically renew it through the vCenter Storage Providers by clicking Refresh Certificate. If the
certificate is not refreshed before expiration, see KB 190731: How to renew the PowerStore VASA
storage provider certificate after expiration.
Note: To use the PowerStore VASA provider across multiple vCenter Servers, there are two
options: Use Enhanced Linked Mode or Share the vCenter root certificates across vCenter
Servers. For more information, see PowerStore: Using vVols across multiple vCenters: How to
register the PowerStore VASA provider across multiple vCenters.
Virtual Volumes: VMware vSphere Virtual Volumes (vVols) can be used by external ESXi hosts from
PowerStore T models. vVols is a storage methodology that runs on top of existing storage
protocols such as Fibre Channel and iSCSI. It enables administrators to have more
granular control over virtual machines regarding performance, snapshots, and monitoring.
One of the key features of vVols is that it allows administrators to use storage
policy-based management (SPBM) for their environment. SPBM enables you to align
application needs with the appropriate storage resources in an automated manner.
Because vCenter is required for binding and unbinding vVols from the protocol endpoints
during power-on, power-off, and other operations such as vMotion, you should regard
vCenter as a tier 1 application.
Caution: Never migrate the VMware vCenter virtual appliance to a vVol datastore or storage
container. Because vCenter is required for bindings to power on vVol-based virtual machines, this
action might prevent powering on vCenter after the VM is shut down or has experienced an
unplanned outage.
Note: PowerStoreOS 3.0 introduced NVMe-vVol host connectivity supporting NVMe/FC vVols.
PowerStoreOS 3.6 introduced NVMe-vVol host connectivity supporting NVMe/TCP vVols.
Scripting and automation: The PowerStore platform has a REST API and PowerShell cmdlets to
automate management tasks. Find more information at Dell Support.
Virtual Storage Integrator: Another tool for storage management is the Virtual Storage Integrator
(VSI), which is the vSphere web client plug-in for PowerStore and other Dell Technologies
storage products.
You can download the VSI appliance as an OVA and install it into the virtual
infrastructure. It is given an IP address and added to vCenter as part of the installation
process. This plug-in allows administrators to create datastores, expand datastores, apply
ESXi host best-practice settings, perform capacity monitoring, and more.
vRealize Orchestrator: PowerStore also supports VMware vRealize Orchestrator (vRO) as a
plug-in. vRO enables administrators to automate many common workflows with PowerStore
appliances. Download the vRO plug-in for PowerStore at Dell Support.
Introduction: PowerStore has integrated snapshot and replication capabilities to protect data, and
it is policy driven for ease of administration.
Snapshots and recoveries: To automate and simplify protecting data, PowerStore uses protection
policies. These policies are a set of snapshot and replication rules that are applied to a volume
or group of volumes. Snapshot policies can also be applied to file systems.
Also, protection policies can be applied to individual volumes or to volume groups. When
a protection policy is applied to a volume group, it allows multiple volumes to have
snapshots taken, to be replicated, or to be recovered, simultaneously. This ability allows
protecting complex applications that are interdependent and span across multiple
volumes.
You can take vVol snapshots from either the PowerStore Manager or the vCenter client,
but they are inherently managed by vCenter. When you create virtual machine snapshots
from the vCenter client, the best practice is to disable the virtual machine memory option,
because including memory can increase snapshot time significantly.
Secure snapshots: Starting with PowerStoreOS 3.5, the secure snapshot setting can be enabled
for snapshots on volumes and volume groups. Snapshot rules can also be configured to
create secure snapshots automatically. With secure snapshots enabled, the snapshots
and parent resource are protected from accidental or malicious deletion and serve as a
cost-effective line of defense against ransomware attacks. If an unauthorized user gains
access to a system, the attacker cannot delete secure snapshots and cause data loss.
Snapshots and options for application backup and restore: Using array-based snapshots is an
effective way to protect virtual machine data and establish an RPO. In the PowerStore
architecture, you can create the snapshot schedule using protection policies. Each protection
policy can define snapshot rules to establish a schedule and retention, and replication rules to
specify a destination array and RPO.
Caution: Using the refresh and restore operations on active virtual machine volumes may cause
unexpected results and behaviors. All host access to the volume must cease before attempting
these operations.
If a virtual machine residing on a VMFS datastore requires recovery, the best practice is
to create a thin clone from a snapshot. The high-level steps are as follows:
1. In PowerStore Manager, create a thin clone from a snapshot, and present it to the
vSphere cluster.
2. In the vSphere client, rescan the storage, add a datastore, select the newly created
volume, and assign a new signature (see the following figure).
3. Register the VM from the snap-xxxxxxxx-originaldatastorename datastore.
4. Use Storage vMotion to migrate the virtual machine back to the original datastore, if
applicable.
Crash-consistent and application-consistent snapshots: When taking array-based snapshots of
virtual machines, remember that snapshots taken without application coordination are
considered crash consistent. Crash consistency is the storage term for data captured by an
in-progress snapshot without application awareness. While most modern applications can
recover from crash-consistent data, their
recovery can yield varying levels of success. For example, when recovering a Microsoft
Windows virtual machine, as the operating system boots, it responds as if it has
encountered an unexpected power-loss event and can potentially check the disk (chkdsk)
on startup.
Application consistent snapshots are supported by products such as Dell AppSync. This
enables coordination between the array and the application to help assure that the data is
quiesced, the caches are flushed, and the data is preserved in a known good state.
Application consistent snapshots such as these offer a higher probability of recovery
success.
Note: When taking managed snapshots such as with vVols in PowerStore Manager, virtual
machine memory is not included. When performing snapshots from vCenter, we recommend not
including virtual machine memory.
Replication and remote recovery: PowerStore offers asynchronous and synchronous replication
of block storage (including volume groups), and as of PowerStoreOS 3.0, asynchronous and
synchronous replication of NAS servers and their underlying file systems and NFS exports.
PowerStoreOS 3.0 also added synchronous replication with Metro Volume support.
Replication is used to transfer data to one or more remote PowerStore clusters.
When the remote cluster is in a different location than the local cluster, this feature can
help to protect virtual machine data from localized geographical disasters. Replication
RPOs and options are set within protection policies (see the following figure).
PowerStore backup to PowerProtect DD series appliances: Starting with PowerStoreOS 3.5,
PowerStore can back up virtual machine data to a physical PowerProtect appliance or to a
PowerProtect DD Virtual Edition (DDVE), either on premises or in the cloud. This enables a
simplified native backup solution for volumes and volume groups and the virtual machine data
contained within them. No backup application is required. By configuring remote backup sessions from PowerStore
Manager, users can quickly back up resources, retrieve remote snapshots, and initiate
instant access sessions. Instant access allows the host to view the content of a remote
snapshot without retrieving the data back to the PowerStore system. Users can instantly
access deleted, corrupted, or modified data within the snapshot and copy it back to the
host for quick recovery. Users can also discover and retrieve snapshot backups created
on a different PowerStore cluster.
References
Dell Technologies documentation: The following Dell Technologies documentation provides other
information related to this document. Access to these documents depends on your login
credentials. If you do not have access to a document, contact your Dell Technologies
representative.
See the following documents:
• Dell Technologies Host Connectivity Guide
• Dell PowerStore Protecting Your Data
See also the following documents on the PowerStore Info Hub: