Dell EMC PowerVault ME4 Series and Microsoft Hyper-V
Abstract
This document provides best practices for configuring Microsoft® Hyper-
V® to perform optimally with Dell EMC™ PowerVault™ ME4 Series
storage.
September 2018
Revisions
Date Description
September 2018 Initial release
Acknowledgements
Author: Marty Glaser
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
© 2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Dell believes the information in this document is accurate as of its publication date. The information is subject to change without notice.
Table of contents
Revisions
Acknowledgements
Table of contents
Executive summary
Audience
1 Introduction
1.1 ME4 Series overview
1.2 Microsoft Hyper-V overview
1.3 Best practices overview
1.4 General best practices for Hyper-V
2 Design best practices
2.1 Right-size the storage array
2.2 Linear and virtual disk groups, pools, and RAID configuration
2.3 Determine optimal transport and front-end configuration
3 Administration best practices
3.1 Guest integration services
3.2 Hyper-V guest VM generations
3.3 Virtual hard disks
3.4 Present ME4 Series storage to Hyper-V
3.5 Optimize format disk wait time for large volumes
3.6 Placement of page files
3.7 Placement of Active Directory domain controllers
3.8 Queue depth best practices for Hyper-V
4 ME4 Series snapshots with Hyper-V
4.1 Crash-consistent and application-consistent snapshots
4.2 Guest VM recovery with ME4 Series snapshots
4.3 Create test environment with ME4 Series snapshots
4.4 Migrate guest VMs with ME4 Series storage
A Technical support and additional resources
A.1 Related resources
Executive summary
This document provides best practices for deploying Microsoft® Windows Server® Hyper-V® based solutions
with Dell EMC™ PowerVault™ ME4 Series storage systems. It builds upon the resources listed in appendix
A.1.
Before configuring an ME4 Series array to work optimally with Hyper-V, review the primary reference
documents including the ME4 Series Administrator’s Guide and Deployment Guide on Dell.com/support. The
information in these two guides is supplemented by the best practices in this document.
Audience
This document is intended for Dell EMC customers, partners, and employees who desire to learn more about
best practices when configuring Hyper-V with ME4 Series storage systems. It is assumed the reader has
working knowledge of ME4 Series storage and Hyper-V.
We welcome your feedback along with any recommendations for improving this document. Send comments
to [email protected].
1 Introduction
Microsoft Hyper-V and Dell EMC PowerVault ME4 Series storage are feature-rich solutions that together
present a diverse range of configuration options to solve key business objectives such as storage capacity,
performance, and resiliency. This section provides an overview of ME4 Series storage, Microsoft Hyper-V,
and general best practices for the solution described in this paper.
Front and rear view of the PowerVault ME4024 array, configured with 24 SSD drives and dual
controllers
The ME4 Series 2U-chassis models include the ME4012 array, which supports up to twelve 3.5-inch drives, and the ME4024 array, which supports up to twenty-four 2.5-inch drives. The ME4084 array (5U chassis) supports up to eighty-four 3.5-inch drives. All three models support additional drive capacity by adding expansion enclosures.
Note: Most ME4 Series features work seamlessly in the background, regardless of the platform. In most cases,
the default settings for these features work well with Hyper-V or at least serve as good configuration starting
points. This document highlights additional configuration or tuning steps that may enhance performance,
usability, or other factors.
To learn more about these and other ME4 Series features, refer to the ME4 Series Administrator’s Guide and
Deployment Guide, and the additional documentation listed in appendix A.
Note: In January 2020, Microsoft will discontinue patches and security updates for Windows Server 2008 R2
(end of support). Customers still running Windows Server 2008 R2 should plan to migrate their Hyper-V
environments before support ends.
Microsoft Hyper-V has evolved to become a mature, robust, proven virtualization platform. In simplest terms,
it is a layer of software that presents the physical host server hardware resources in an optimized and
virtualized manner to guest virtual machines (VMs). Hyper-V hosts (also referred to as nodes when clustered)
greatly enhance utilization of physical hardware (such as processors, memory, NICs, and power) by allowing
many VMs to share these resources at the same time. Hyper-V Manager and related management tools such
as Failover Cluster Manager, Microsoft System Center Virtual Machine Manager (SCVMM), and PowerShell®,
offer administrators great control and flexibility for managing host and VM resources.
Note: Many core Hyper-V features (such as dynamic memory) are storage agnostic, and are not covered in
detail in this guide. To learn more about core Hyper-V features, functionality, and general best practices, see
the Hyper-V Best Practices Checklist and other resources on Microsoft TechNet.
Because default settings typically incorporate best practices, tuning is often unnecessary (and discouraged)
unless a specific design, situation, or workload is known to benefit from a different configuration. For example,
the default queue-depth setting works well for most hosts in a SAN environment. However, increasing the
queue depth for a large sequential workload running on a small number of hosts might result in a significant
performance increase, while doing the same for a non-sequential workload running on many hosts might have
the opposite result, degraded performance. One of the purposes of a best-practices document is to call
attention to situations where using a default setting or configuration may not be optimal.
• In some cases, legacy systems that are performing well and have not reached their life expectancy
may not adhere to current best practices. The best course of action may be to run legacy
configurations until they reach their life expectancy because it is too disruptive or costly to make
changes outside of a normal hardware progression or upgrade cycle. Dell EMC recommends
upgrading to the latest technologies and adopting current best practices at key opportunities such as
when upgrading or replacing infrastructure.
• A common best practices tradeoff is to implement a less-resilient design (to save cost and reduce
complexity) in a test or development environment that is not business critical.
Note: While following the best practices in this document is strongly recommended by Dell EMC, some
recommendations may not apply to all environments. For questions about the applicability of these guidelines
in your environment, contact your Dell EMC representative.
The following provides a high-level summary of some of the most common best practices tuning steps for
Hyper-V:
• Minimize or disable unnecessary hardware devices and services to free up host CPU cycles that can
be used by other VMs (this also helps to reduce power consumption).
• Schedule tasks such as periodic maintenance, backups, malware scans, and patching to run after
hours, and stagger start times when such operations overlap and are CPU or I/O intensive.
• Tune application workloads to reduce or eliminate unnecessary processes or activity.
• Leverage Microsoft PowerShell or other scripting tools to automate step-intensive repeatable tasks to
ensure consistency and avoid human error. This can also reduce administration time.
Many common short- and long-term problems can be avoided by making sure the storage part of the solution
will provide the right capacity and performance in the present and future. Scalability is a key design
consideration. For example, Hyper-V clusters can start small with two nodes, and expand one node at a time,
up to a maximum of 64 nodes per cluster. Storage including ME4 Series arrays can start with a small number
of drives, and expand capacity and I/O performance over time by adding expansion enclosures with more
drives as workload demands increase.
Optimizing performance is a process of identifying and mitigating design limitations that cause bottlenecks —
the point at which performance begins to be impacted under load because a capacity threshold is reached
somewhere within the overall design. The goal is to maintain a balanced configuration that allows the
workload to operate at or near peak efficiency.
One common mistake made when sizing a storage array is assuming that total disk capacity translates to disk
performance. Installing a small number of large-capacity spinning drives in an array does not automatically
translate to high performance just because there is a lot of available storage capacity. There must be enough
of the right kind of drives to support the I/O demands of a workload in addition to raw storage capacity.
Where available, customers can confidently use the configuration guidance in Dell EMC storage reference
architecture white papers as good baselines to right-size their environments.
Work with your Dell EMC representative to complete a performance evaluation if there are questions about
right-sizing an ME4 Series storage solution for your environment and workload.
2.2 Linear and virtual disk groups, pools, and RAID configuration
Choosing the type of disk pools and RAID configurations to use is equally important to right-sizing the ME4
Series storage array for capacity and I/O.
The ME4 Series Administrator’s Guide provides an in-depth review and comparison of linear and virtual disk
groups, pools, the different RAID levels and hot spare configurations available with each, the trade-offs of
choosing one over the other, and application (workload) recommendations for each.
One option discussed in the Administrator’s Guide is the ME4 Series ADAPT option for RAID. ADAPT
supports distributed sparing for extremely fast rebuild times, and large-capacity disk groups of up to 128 total
drives. However, ADAPT requires a minimum of 12 drives to start with, and all disks must be of the same type
and be in the same tier.
From the perspective of Hyper-V, any of the available configurations is supported. Choosing the best type of
disk group and RAID option is a function of the workload running on Hyper-V, and the ME4 Series
Administrator’s Guide provides basic guidance.
Before reading further, refer to the ME4 Series Deployment Guide to gain a thorough understanding of the
different DAS, SAN, host, and replication cabling options available with the ME4 Series.
Hyper-V hosts, nodes, and clusters support all the above configuration options. Consider the following
recommendations:
• If a Hyper-V environment is likely to scale beyond four physical hosts or nodes attached to the same
ME4 Series array, choose the following:
- Start with a SAN configuration (FC or iSCSI) (recommended)
- Start with a DAS configuration (FC or iSCSI), and migrate to a SAN configuration when the fifth
node needs to be added (caution: this might be very disruptive to the environment because it will
require host down time to reconfigure and re-cable the FE ports)
• If the ME4 Series array is configured to replicate to another ME4 Series array, two of the four FE
ports (0 and 1) on each controller head must be dedicated to replication traffic. This will limit the
available FE ports for host connectivity to the other two ports (2 and 3) on each controller. If the
Hyper-V environment is likely to scale beyond two physical hosts or nodes, choose the following:
- Start with a SAN configuration (FC or iSCSI) (recommended)
- Start with a DAS configuration (FC or iSCSI), and migrate to a SAN configuration when the third
node needs to be added (caution: as stated previously, this might be very disruptive to the
environment)
• SAS FE ports are supported in a DAS configuration only. ME4 Series arrays equipped with SAS FE
ports do not support replication to another ME4 Series array. SAS FE ports are a good choice if the
ME4 Series array will not need to expand beyond four Hyper-V hosts or nodes, and will not need to
be configured for replication.
Other factors to consider include the following:
• With DAS, the hosts must be within reach of the physical cable that is used to directly connect the
host to the ME4 Series array. This works well if the hosts are in the same or an adjacent rack that is
within easy cabling distance.
• Choosing the type of transport is often a function of what is already in place in the environment or
according to personal preference. In cases where the infrastructure to support an FC or iSCSI SAN is
already in place, customers can continue using this transport to maximize their return on their
investment.
Installing and updating integration services is one of the most commonly overlooked steps to ensure overall
stability and optimal performance of guest VMs. Although newer Windows-based OSs and some enterprise-
class Linux-based OSs come with integration services out of the box, updates may still be required. New
versions of integration services may become available as the physical Hyper-V hosts are patched and
updated.
With earlier versions of Hyper-V (2012 R2 and prior), during the configuration and deployment of a new VM,
the configuration process does not prompt the user to install or update integration services. In addition, the
process to install integration services with older versions of Hyper-V (2012 R2 and prior) is somewhat obscure and is explained in this section. With Windows Server 2016 Hyper-V, integration services are updated
automatically (in the case of Windows VMs) as a part of Windows updates, requiring less administration to
ensure Windows VMs stay current.
One common issue occurs when VMs are migrated from an older physical host or cluster to a newer one (for example, from Windows Server 2008 R2 Hyper-V to Windows Server 2012/R2 Hyper-V). The integration services are not updated automatically, and the resulting degraded performance may erroneously lead the administrator to suspect the storage array as the cause of the problem.
Aside from performance problems, one of the key indications that integration services are outdated or not
present on a Windows VM is the presence of unknown devices in Device Manager for the VM.
For versions of Hyper-V prior to 2016, use Hyper-V Manager to connect to a VM. Under the Action menu,
mount the Integration Services Setup Disk (an ISO file), and follow the prompts in the guest VM console to
complete the installation. Mounting the integration services ISO is no longer supported with Windows Server
2016 Hyper-V because integration services are provided exclusively as part of Windows updates.
Mount Integration Services Setup Disk in Hyper-V Manager (Hyper-V versions prior to 2016)
To verify the version of integration services, select the VM in Failover Cluster Manager and review the Summary tab.
Verification can also be performed using PowerShell, as shown in the following example:
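(The VM name used below is hypothetical; run the commands on the Hyper-V host.)
# Query the integration services version reported by each VM on this host
Get-VM | Format-Table Name, State, IntegrationServicesVersion
# Review the state of the individual integration services for a specific VM
Get-VMIntegrationService -VMName "VM01" | Format-Table Name, Enabled, PrimaryStatusDescription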
Although generation 1 VMs continue to be supported with Hyper-V, it is a best practice to create new VMs as
generation 2 if the host server (Windows Server 2012 R2 Hyper-V and newer) and the guest VM OS support
it. Support for generation 1 VMs may eventually be deprecated in future versions of Hyper-V.
Generation 2 guests use Unified Extensible Firmware Interface (UEFI) when booting instead of a legacy
BIOS. UEFI provides better security and better interoperability between the OS and the hardware, which
offers improved virtual driver support and performance. In addition, one of the most significant changes with
generation 2 guests is the elimination of the dependency on virtual IDE for the boot disk. Generation 1 VMs
require the boot disk to use a virtual IDE disk controller. Generation 2 guests instead use virtual SCSI
controllers for all disks. Virtual IDE is not a supported option with generation 2 VMs.
For both generations of guest VMs, if there are multiple disks requiring high I/O, each disk can be associated
with its own virtual disk controller to further maximize performance.
• VHD is supported with all Hyper-V versions and is limited to a maximum size of 2 TB. This is now
considered a legacy format (use VHDX instead for new VM deployments).
• VHDX is supported with Windows Server 2012 Hyper-V and newer. The VHDX format offers better
resiliency in the event of a power loss, better performance, and supports a maximum size of 64 TB.
VHD files can be converted to the VHDX format using tools such as Hyper-V Manager or PowerShell (see the example following this list).
• VHDS (VHD Set) is supported on Windows Server 2016 Hyper-V and newer. VHDS is for virtual hard
disks that are shared by two or more guest VMs in support of highly-available (HA) guest VM
clustering configurations.
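As referenced above, a VHD file can be converted to VHDX with PowerShell. The following is a minimal sketch (file paths are hypothetical); the VM must be shut down before conversion, and its settings must be updated to point to the new VHDX file afterward:
# Convert a legacy VHD to the VHDX format (creates a new file; the source VHD is left in place)
Convert-VHD -Path "D:\VMs\VM01\disk0.vhd" -DestinationPath "D:\VMs\VM01\disk0.vhdx"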
The dynamically expanding disk type will work well for most workloads on ME4 Series arrays. If the array is
configured to use virtual disk groups and pools which take advantage of thin provisioning, only data that is
actually written to a virtual hard disk, regardless of the disk type (fixed, dynamic, or differencing), will consume
space on the array. As a result, determining the best disk type is mostly a function of the workload as
opposed to how it will impact storage utilization. For workloads generating very high I/O, such as Microsoft
SQL Server® databases, Microsoft recommends using the fixed size virtual hard disk type for optimal
performance.
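For reference, the following sketch (paths and sizes are hypothetical) shows how fixed and dynamically expanding VHDX files can be created with PowerShell:
# Create a 60 GB fixed-size VHDX (the full size is allocated at the host file system level)
New-VHD -Path "D:\VMs\SQL01\data.vhdx" -SizeBytes 60GB -Fixed
# Create a 60 GB dynamically expanding VHDX (grows as data is written)
New-VHD -Path "D:\VMs\FS01\data.vhdx" -SizeBytes 60GB -Dynamic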
As shown in Figure 10, a fixed virtual hard disk consumes the full amount of space from the perspective of the
host server. For a dynamic virtual hard disk, the space consumed is equal to the amount of data on the virtual disk (plus some metadata overhead), which is more space efficient from the perspective of the host. From the
perspective of the guest VM, either type of virtual hard disk shown in this example will present a full 60 GB of
available space to the guest.
There are some performance and management best practices to keep in mind when choosing the right kind of
virtual hard disk type for your environment.
3.3.3 Virtual hard disks and thin provisioning with ME4 Series arrays
The type of virtual hard disk used does not affect space utilization on ME4 Series storage when leveraging thin provisioning at the array level. Regardless of the virtual hard disk type,
only the actual data written by a guest VM will consume space on the storage array due to the advantages of
thin provisioning.
The example shown in Figure 11 illustrates an ME4 Series 100 GB volume presented to a Hyper-V host that
contains two 60 GB virtual hard disks (overprovisioned in this case to demonstrate behavior, but not as a
general best practice). One disk is fixed, and the other is dynamic. Each virtual hard disk contains 15 GB of
actual data. From the perspective of the host server, a total of 75 GB of space is consumed: 60 GB for the fixed virtual hard disk (reported at its full provisioned size) plus 15 GB of actual data on the dynamic virtual hard disk.
Note: The host server reports the entire size of a fixed virtual hard disk as consumed.
Comparatively, this is how the ME4 Series array reports storage utilization on this same volume:
Example: 15 GB of used space on the fixed disk + 15 GB of used space on the dynamic disk = 30 GB
Note: Both types of virtual hard disks (dynamic and fixed) consume the same amount of space on Dell EMC ME4 Series arrays when using thin provisioning. Other factors, such as the I/O performance of the workload, would be the primary considerations when determining the type of virtual hard disk to use in your environment.
• Create a Hyper-V physical host volume that is large enough so that current and future expanding
dynamic virtual hard disks will not fill the host volume to capacity. Creating large Hyper-V host
volumes will not waste space on ME4 Series arrays that leverage thin provisioning.
- If Hyper-V based snapshots are used (which create differencing virtual hard disks on the same
physical volume), allow adequate overhead on the host volume for the extra space consumed by
the differencing virtual hard disks.
- Expand existing host volumes as needed to avoid the risks associated with overprovisioning.
- If a physical host volume that hosts virtual hard disks is overprovisioned, set up monitoring so that
if a percent-full threshold is exceeded (such as 90 percent), an alert is generated with enough
lead time to allow for remediation (a scripted example follows this list).
• Monitor alerts on ME4 Series storage so that warnings about disk group and pool capacity thresholds
are remediated before they reach capacity.
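As a minimal sketch of the percent-full check suggested above (the 90 percent threshold is only an example, and most environments would use existing monitoring tooling instead), host volume usage can be checked with PowerShell:
# Warn when any fixed volume on the host exceeds 90 percent used
Get-Volume | Where-Object { $_.DriveType -eq 'Fixed' -and $_.Size -gt 0 } | ForEach-Object {
    $percentUsed = (1 - ($_.SizeRemaining / $_.Size)) * 100
    if ($percentUsed -gt 90) {
        Write-Warning ("Volume {0} is {1:N0} percent full" -f $_.FileSystemLabel, $percentUsed)
    }
}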
• ME4 Series storage can be presented to physical Hyper-V hosts and cluster nodes using FC, iSCSI,
or SAS in either a direct-attached configuration (SAS, FC, iSCSI) or as part of a SAN (FC or iSCSI).
• ME4 Series storage can also be presented directly to Hyper-V guest VMs using the following:
- In-guest iSCSI
- Pass-through disks (this is a legacy configuration option introduced with Hyper-V 2008 that Dell
EMC and Microsoft discourage using with Hyper-V 2012 and 2016)
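For the in-guest iSCSI option, the following sketch (run inside the guest VM; the portal address shown is a hypothetical placeholder for an ME4 Series iSCSI host port) connects the guest directly to the array:
# Start the iSCSI initiator service and register the array portal inside the guest VM
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.20"
# Discover and connect to the ME4 Series target, persisting the connection across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true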
Typically, an environment is configured to use a preferred transport when it is built and will be part of the
infrastructure core design. When deploying Hyper-V to existing environments, the existing transport is
typically used. Deciding which transport to use is usually based on customer preference and factors such as
size of the environment, cost of the hardware, and the required support expertise.
It is not uncommon, especially in larger environments, to have more than one transport available. This might
be required to support collocated but diverse platforms with different transport requirements. When this is the
case, administrators might be able to choose between different transport options.
Regardless of the transport chosen, it is a best practice to ensure redundant paths to both ME4 Series
controller heads A and B. Refer to section 2.3 and the ME4 Series Deployment Guide for more information.
While the ME4 Series array permits front-end cabling that does not include redundancy, it is a best practice in
a production environment to configure cabling for redundancy. For test or development environments that can
accommodate down time without business impact, a less-costly, less-resilient design may be acceptable to
the business.
There are some use cases where using mixed transports may be necessary, such as when migrating the
overall environment from one type of transport to another, and both transports need to be available to a host
during a transition period. If mixed transports must be used, use a single transport for each LUN mapped to a
Hyper-V host or node.
Windows and Hyper-V hosts default to the Round Robin with Subset policy with ME4 Series storage, unless a
different default MPIO policy is set on the host by the administrator. Round Robin with Subset is typically the
best MPIO policy for Hyper-V environments.
• The active/optimized paths are associated with the ME4 Series storage controller head that owns the
volume. The active/unoptimized paths are associated with the other controller head.
• If each controller has four FE transport paths configured (shown in Figure 12), each volume that is
mapped should list eight total paths: four that are optimized, and four that are unoptimized.
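To confirm how MPIO paths and the load-balance policy are reported on a host, commands such as the following can be used (output varies by configuration; this is a verification sketch, not a required change):
# List MPIO-claimed disks and the load-balance policy in use for each
mpclaim -s -d
# Show the host-wide default MPIO policy (None means the policy is chosen per device,
# which is typically Round Robin with Subset for ALUA arrays such as ME4 Series)
Get-MSDSMGlobalDefaultLoadBalancePolicy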
Best practices recommendations include the following:
• Changes to MPIO registry settings on the Windows or Hyper-V host (such as time-out values) should
not be made unless directed by ME4 Series documentation, or unless directed by Dell EMC support
to solve a specific problem.
• Configure all available FE ports on an ME4 Series array (when it is connected to a SAN) to use your
preferred transport to optimize throughput and maximize performance.
• If using a direct-connect option for iSCSI, SAS or FC, configure each host to use at least two
matching ports (one from each controller head) to provide MPIO and failover protection against a
single-path or controller-head failure.
• Verify that current versions of software are installed (such as OS, boot code, firmware, and drivers)
for all components in the data path:
- ME4 Series arrays
- Data switches
- HBAs, NICs, converged network adapters (CNAs)
• Verify that all hardware is supported per the latest version of the Dell EMC hardware Compatibility
Matrix.
Use cases for presenting ME4 Series storage directly to guest VMs with in-guest iSCSI include the following:
• Situations where a workload has very high I/O requirements, and the performance gain over using a
virtual hard disk is important. Direct-attached disks bypass the host server file system. This reduces
host CPU overhead for managing guest VM I/O. For many workloads, there will be no notable
difference in performance between direct-attached and virtual hard disks.
• VM clustering on legacy platforms prior to support for shared virtual hard disks, which became
available with the 2012 R2 release of Hyper-V, and enhanced with Hyper-V 2016.
• When needing to troubleshoot I/O performance on a volume and it must be isolated from all other
servers and workloads.
• When there is a need to create custom snapshot or replication policies or profiles on ME4 Series
storage for a specific data volume.
• When a single data volume presented to a guest VM will exceed the maximum size for a VHD (2 TB)
or VHDX (64 TB).
There are also disadvantages to using direct-attached storage for guest VMs:
• The ability to perform native Hyper-V snapshots is lost. However, the ability to leverage ME4 Series
snapshots of the underlying volume is unaffected.
• Complexity increases, requiring more management overhead to support.
• VM mobility is reduced due to creating a physical hardware layer dependency.
Note: Legacy environments that are using direct-attached disks for guest VM clustering should consider
switching to shared virtual hard disks, particularly when migrating to Windows Server 2016 Hyper-V.
Using pass-through disks is a legacy design that is discouraged unless there is a specific use case that
requires it. They are no longer necessary in most cases because of the feature enhancements with newer
releases of Hyper-V (generation 2 guest VMs, VHDX format, and shared VHDs in Windows Server 2016 Hyper-V). Use cases for pass-through disks are similar to the list provided for direct-attached iSCSI storage in
section 3.4.5.
Disadvantages of pass-through disks include the following:
• The ability to perform native Hyper-V snapshots is lost, which is similar to direct-attached storage.
• The use of a pass-through disk as a boot volume on a guest VM prevents the use of a differencing
disk.
• VM mobility is reduced by creating a dependency on the physical layer.
• This can result in many LUNs presented to hosts or cluster nodes which can become unmanageable
and impractical at larger scale.
As a best practice and a time-saving tip, configure the nodes in a cluster so that they are identical with regard
to the number of disks and LUNs. In this way, when mapping new storage LUNs, the next available LUN ID
will be the same on all hosts. By doing this, having to change LUN IDs later to make them consistent can be
avoided.
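One way to spot-check that LUN numbering lines up across cluster nodes is to compare the SCSI addressing that Windows reports for each disk. The following is a sketch using the Win32_DiskDrive class (run it on each node and compare the LUN column):
# Run on each cluster node and compare the SCSILogicalUnit (LUN) column across nodes
Get-CimInstance -ClassName Win32_DiskDrive |
    Sort-Object SCSILogicalUnit |
    Format-Table DeviceID, Model, SCSIPort, SCSITargetId, SCSILogicalUnit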
Placing each guest VM on its own dedicated ME4 Series volume or CSV offers several advantages:
• Easier to isolate and monitor disk I/O patterns for a specific Hyper-V guest VM
• Ability to quickly restore a guest VM by simply recovering the ME4 Series volume from a snapshot
• Gives administrators more granular control over what data gets replicated if ME4 Series volumes are
replicated to another location
• Makes it faster to move a guest VM from one host or cluster to another by remapping the volume
rather than copying large virtual hard disk files from one volume to another over the network
Other strategies include placing all boot virtual hard disks on a common CSV, and data virtual hard disks on
one or more data CSVs.
To reduce the time required to format a large, thin-provisioned volume, the delete notification (TRIM/unmap) attribute can be temporarily disabled on the host:
1. Access a command prompt on the host server with elevated (administrator) rights.
2. To verify the state of the attribute, type fsutil behavior query disabledeletenotify and press [Enter].
A result of zero means delete notification is currently enabled.
3. To disable the attribute, type fsutil behavior set disabledeletenotify 1 and press [Enter].
The result should display an attribute value of one. To test the result, map a large temporary volume (several
TB) from the ME4 Series array to the host and format the volume. It should complete in a few seconds.
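Summarized as a sketch, the sequence is as follows (delete notification can be set back to 0 afterward if TRIM/unmap behavior is desired):
fsutil behavior query disabledeletenotify    # 0 = delete notification currently enabled
fsutil behavior set disabledeletenotify 1    # disable delete notification before formatting
fsutil behavior set disabledeletenotify 0    # optionally re-enable afterward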
With ME4 Series storage, there can be some advantages to placing a page file on a separate volume from
the perspective of the storage array. The following reasons may not be sufficiently advantageous by
themselves to justify changing the defaults, but in cases where a vendor recommends making changes to
optimize a workload, consider the following tips as part of the overall page-file strategy.
• Moving the page file to a separate dedicated volume reduces the amount of data that is changing on
the system (boot) volume. This can help reduce the size of ME4 Series snapshots of boot volumes
which will conserve space in the disk pool.
• Volumes or virtual hard disks dedicated to page files typically do not require snapshot protection, and
therefore do not need to be replicated to a remote site as part of a DR plan. This is especially
beneficial in cases where there is limited bandwidth for replication of volumes and snapshots to
another ME4 Series array.
Consider this scenario: A service outage takes the cluster offline (including the domain controller VM). When
attempting to recover, unless there is another domain controller available outside of the affected cluster, the
cluster service will not start because it cannot authenticate.
Note: This order dependency can be avoided with Windows Server 2016 Hyper-V because the cluster service
uses certificates to authenticate instead of AD. With Windows Server 2016, Hyper-V clusters can also be
comprised of nodes that are in workgroups or domains.
Encountering this scenario may be a service-affecting event depending on how long it takes to recover. It may
be necessary to manually recover the domain controller VM to a standalone Hyper-V host outside of the
cluster, or to another cluster.
• Configure at least one domain controller as a physical server booting from local disk.
• Place virtualized domain controllers on standalone Hyper-V hosts or on individual cluster nodes if
there is an AD dependency for cluster services.
• Use Hyper-V Replica (2012 and newer) to ensure that the guest VM can be recovered on another
host.
• Place virtualized backup domain controllers on separate clusters, so that a service-affecting event
with any one cluster does not result in all domain controllers becoming unavailable. This does not
protect against cases where there is a site outage that takes all the clusters (and therefore all the
virtualized AD servers) offline.
• Leverage Windows Server 2016 Hyper-V, which does not have an AD dependency to authenticate
cluster services.
In many cases, there is no need to change the default queue depth, unless there is a specific use where
changing the queue depth is known to improve performance. For example, if a storage array is connected to a
small number of Windows Server Hyper-V cluster nodes hosting a large block sequential read application
workload, increasing the queue depth setting may be very beneficial. However, if the storage array has many
hosts all competing for a few target ports, increasing the queue depth on a few hosts might overdrive the
target ports and negatively impact the performance of all connected hosts.
While increasing the queue depth can sometimes increase performance significantly for specific workloads, if
it is set too high, there is an increased risk of overdriving the target ports on the storage array. Generally, if
transactions are being queued and performance is being impacted, and increasing the queue depth results in
saturation of the target ports, then increasing the number of initiators and targets (if available) to spread out
I/O can be an effective remediation.
For direction on adjusting firmware or registry settings to modify queue depth, see the documentation for your
FC HBA, iSCSI NIC, or CNA.
Note: Changes to FC HBA, iSCSI NIC, or CNA firmware or registry settings that affect queue depth should be
evaluated in a test environment prior to implementation on production workloads.
Some examples for how to configure and use Dell EMC ME4 snapshots for a Hyper-V environment are
provided in the following sections.
Option 1: Recover the existing data volume on the host that contains the VM configuration and virtual hard
disks by using an ME4 Series snapshot rollback.
• This may only be practical if the data volume contains only one VM. If the data volume contains
multiple VMs, it will still work if all the VMs are being recovered to the same point in time. Otherwise,
option 2 or 3 would be necessary if needing to recover just one VM.
• This will allow the VM being recovered to power up without any additional configuration or recovery
steps required.
• It is essential to document the LUN number, disk letter, or mount-point information for the volume to
be recovered, before starting the recovery.
Option 2: Map a snapshot containing the VM configuration and virtual hard disks to the host as a new
volume, in a side-by-side fashion using a new drive letter or mount point. The VM can be recovered by
manually copying the virtual hard disks from the recovery snapshot to the original location.
• This involves deleting, moving, or renaming the original virtual hard disks.
• After copying the recovered virtual hard disks to their original location, they must be renamed and
Hyper-V manager must be used to re-associate them with the guest VM. This is necessary to allow
the guest VM to start without permissions errors.
• This may not be practical if the virtual hard disks are extremely large. In this case, the original VM can
be deleted, and the recovery VM imported or created as a new VM directly from the recovery volume.
After the recovery, the original data volume can be unmapped from the host if no longer needed.
• This method also facilitates recovery of a subset of data from a VM by mounting a recovery virtual
hard disk as a volume on the host server temporarily.
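For that subset-of-data scenario, a recovery virtual hard disk can be attached to the host temporarily with PowerShell (the path below is hypothetical):
# Temporarily attach a recovered virtual hard disk to the host as a read-only volume
Mount-VHD -Path "E:\Recovery\VM01\data.vhdx" -ReadOnly
# Detach it when the needed files have been copied out
Dismount-VHD -Path "E:\Recovery\VM01\data.vhdx"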
Option 3: Map the recovery snapshot to a different Hyper-V host and recover the VM there by importing the
VM configuration or creating a new VM that points to the virtual hard disks on the recovery volume.
• This is common in situations where the original VM and the recovery VM both need to be online at the
same time, but be isolated from each other to avoid name or IP conflicts, or avoid a split-brain
situation with data reads/writes.
• This is a good recovery method when the original host server is no longer available due to a host
failure.
If possible, before beginning any VM recovery, record essential details about the VM hardware configuration (such as number of virtual CPUs, RAM, virtual networks, and IP addresses) in case importing the VM configuration fails.
Windows servers assign each disk a unique disk ID (or signature). For example, the disk ID for an MBR disk is an 8-character hexadecimal number such as 045C3E2F. No two disks mapped to a server can have the same disk ID.
When an ME4 Series snapshot is taken of a Windows or Hyper-V volume, the snapshot is an exact point-in-
time copy, which includes the Windows disk ID. Therefore, recovery volumes based on snapshots will also
have the same disk ID.
With standalone Windows or Hyper-V servers, disk ID conflicts are avoided because standalone servers can
automatically detect duplicate disk IDs and change them dynamically on the offending disk with no user
intervention. However, host servers are not able to dynamically change conflicting disk IDs when disks are
configured as CSVs, because the disks are mapped to multiple nodes at the same time.
When attempting to map a copy (snapshot) of a CSV back to any server in that same cluster, the recovery
volume will cause a disk ID conflict, which may be service-affecting.
There are two methods to work around the duplicate disk ID issue:
Option 1: Map the recovery volume (snapshot) containing the CSV to another host that is outside of the
cluster and copy the guest VM files over the network to recover the guest VM.
Option 2: Map the recovery volume to another Windows host outside of the cluster and use Diskpart.exe or
PowerShell to change the disk ID. Once the ID has been changed, remap the recovery volume to the cluster.
The steps to use Diskpart.exe to change the disk ID are detailed in section 4.2.3.
1. Access the standalone Windows host that the recovery volume (snapshot) containing the CSV will be
mapped to.
2. Open a command window with administrator rights.
3. Type diskpart.exe and press [Enter].
4. Type list disk and press [Enter].
5. Make note of the current list of disks (in this example, Disk 0, Disk 1, and Disk 2).
9. Return to the Diskpart command prompt window, type list disk, and press [Enter].
The new disk (Disk 3 in this example) should now be listed. Usually, the bottom disk will be the one
most recently added.
10. Type select disk # (# represents the number of the new disk, Disk 3 in this example) and press
[Enter].
11. Type uniqueid disk and press [Enter] to view the current ID for the disk.
12. To change the disk ID, type uniqueid disk ID=<newid> and press [Enter].
- For <newid>, provide a random ID of your choice. For an MBR disk, the new ID must be an
eight-character string in hexadecimal format using a mix of the numbers 0–9 and the letters A–F.
- For a GPT disk, the new ID must be a Globally Unique Identifier (GUID).
13. Type uniqueid disk again and press [Enter] to verify the ID is now changed.
Now that the disk has a new signature, it can be unmapped from the standalone host server and re-
mapped to the cluster without causing a disk ID conflict.
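If PowerShell is preferred over Diskpart for option 2, the Storage module offers an equivalent approach; the disk number and identifier values below are hypothetical, and the parameters should be verified for the Windows version in use:
# View the current signature (MBR) or GUID (GPT) of the recovery disk
Get-Disk -Number 3 | Format-List Number, FriendlyName, PartitionStyle, Signature, Guid
# Assign a new signature to an MBR disk (a 32-bit value)
Set-Disk -Number 3 -Signature 0x045C3E2F
# Or assign a new GUID to a GPT disk
Set-Disk -Number 3 -Guid "{12345678-90ab-cdef-1234-567890abcdef}"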
Note: To avoid IP, MAC address, or server name conflicts, copies of existing VMs that are brought online
should be isolated from the original VMs.
The procedure to use a snapshot to create a test environment from an existing Hyper-V guest VM is very
similar to VM recovery. The main difference is that the original VM continues operation, and the VM copy is
configured so that it is isolated from the original VM.
However, when an administrator needs to migrate a guest VM from one host or cluster to another host or
cluster, the data (the virtual hard disks) must be copied to the target host or cluster, and this will consume
network bandwidth and may require significant time if the virtual hard disks are extremely large. This can also
consume additional storage space unnecessarily because another copy of the data is created.
When moving VMs to another host or cluster, it may be much quicker to leverage ME4 Series storage to
unmap the host volume containing the VM configuration and virtual hard disks and map the volume to the
new target host or cluster. This can also be done using a snapshot.
While this might involve a small amount of downtime for the VM being moved during a maintenance window,
it might be a more practical approach than waiting for a large amount of VM data to copy over the network,
consuming additional array space unnecessarily.
Storage solutions technical documents provide expertise that helps to ensure customer success on Dell EMC
storage platforms.
• Administrator's Guide
• Deployment Guide
• CLI Guide
• Owner's Manual
• Support Matrix
Additionally, see the following third-party referenced or recommended publications and articles: