Dell Wyse Datacenter for VMware Horizon View Reference Architecture
Revisions

Date | Description
November 2014 | Updated to include 13g servers and increased VM density (v.6.6)
July 2015 | Updated density numbers for ESXi 6.0 and added PowerEdge C4130 (v.6.7)
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND
TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF
ANY KIND.
© 2015 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
PRODUCT WARRANTIES APPLICABLE TO THE DELL PRODUCTS DESCRIBED IN THIS DOCUMENT MAY BE FOUND AT: http://www.dell.com/learn/us/en/19/terms-of-sale-commercial-and-public-sector. Performance of network reference architectures discussed in this document may vary with differing deployment conditions, network loads, and the like. Third-party products may be included in reference architectures for the convenience of the reader. Inclusion of such third-party products does not necessarily constitute Dell's recommendation of those products. Please consult your Dell representative for additional information.
Trademarks used in this text:
Dell, the Dell logo, Dell Boomi, Dell Precision, OptiPlex, Latitude, PowerEdge, PowerVault, PowerConnect, OpenManage, EqualLogic, Compellent, KACE, FlexAddress, Force10 and Vostro are trademarks of Dell Inc. Other Dell trademarks may be used in this document. Cisco Nexus, Cisco MDS, Cisco NX-OS and Cisco Catalyst are registered trademarks of Cisco System Inc. EMC VNX and EMC Unisphere are registered trademarks of EMC Corporation. Intel, Pentium, Xeon, Core and Celeron are registered trademarks of Intel Corporation in the U.S. and other countries. AMD is a registered trademark and AMD Opteron, AMD Phenom and AMD Sempron are trademarks of Advanced Micro Devices, Inc. Microsoft, Windows, Windows Server, Internet Explorer, MS-DOS, Windows Vista and Active Directory are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat and Red Hat Enterprise Linux are registered trademarks of Red Hat, Inc. in the United States and/or other countries. Novell and SUSE are registered trademarks of Novell Inc. in the United States and other countries. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Citrix, Xen, XenServer and XenMotion are either registered trademarks or trademarks of Citrix Systems, Inc. in the United States and/or other countries. VMware, Virtual SMP, vMotion, vCenter and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States or other countries. IBM is a registered trademark of International Business Machines Corporation. Broadcom and NetXtreme are registered trademarks of Broadcom Corporation. QLogic is a registered trademark of QLogic Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and/or names or their products and are the property of their respective owners. Dell disclaims proprietary interest in the marks and names of others.
Table of contents
Revisions
1 Introduction
1.1 Purpose of this document
1.2 Scope
1.3 New in this release
2 Solution architecture overview
2.1 Introduction
2.1.1 Physical architecture overview
2.1.2 Dell Wyse Datacenter solution layers
2.2 Local Tier 1
2.2.1 Local Tier 1 50 user combined pilot
2.2.2 Local Tier 1 50 user scale-ready pilot
2.2.3 Local Tier 1 (iSCSI)
2.3 Shared Tier 1 Rack
2.3.1 Shared Tier 1 Rack 555 users (iSCSI)
2.3.2 Shared Tier 1 Rack (iSCSI EQL)
2.3.3 Shared Tier 1 Rack 1000 users (FC CML)
2.4 Shared Tier 1 Blade
2.4.1 Shared Tier 1 Blade 555 users (iSCSI EQL)
2.4.2 Shared Tier 1 Blade (iSCSI EQL)
2.4.3 Shared Tier 1 Blade (FC CML)
3 Hardware components
3.1 Networking
3.1.1 Force10 S55 (ToR switch)
3.1.2 Force10 S60 (1Gb ToR switch)
3.1.3 Force10 S4810 (10Gb ToR switch)
3.1.4 Brocade 6510 (FC ToR switch)
3.1.5 PowerEdge M I/O Aggregator (10Gb blade interconnect)
3.1.6 PowerConnect M6348 (1Gb blade interconnect)
3.1.7 Brocade M5424 (FC blade interconnect)
3.2 Servers
3.2.1 PowerEdge R730
3.2.2 PowerEdge M620
3.3 Storage
3.3.1 EqualLogic Tier 1 storage (iSCSI)
3.3.2 EqualLogic Tier 2 storage (iSCSI)
3.3.3 Compellent storage (FC)
3.3.4 NAS
3.4 Wyse Cloud Clients
3.4.1 Wyse 5020-P25
3.4.2 Wyse 5012-D10DP
3.4.3 Wyse 7020-P45
3.4.4 Wyse 7250-Z50D
3.4.5 Wyse 7290-Z90D7
3.4.6 Wyse 7490-Z90Q8
3.4.7 Dell Chromebook 11
4 Software components
4.1 What's new in this release of Horizon View 6.0?
4.2 VMware Horizon View
4.3 VDI hypervisor platform
4.3.1 VMware vSphere 5
5 Solution architecture
5.1 Compute server infrastructure
5.1.1 Local Tier 1 Rack
5.1.2 Shared Tier 1 Rack
5.1.3 Shared Tier 1 Blade
5.2 Management server infrastructure
5.2.1 SQL databases
5.2.2 DNS
5.3 Scaling guidance
5.3.1 Windows 7 vSphere
5.3.2 Windows 8 vSphere
5.3.3 Windows 8.1 vSphere
5.3.4 Windows 2008R2 vSphere
5.4 Storage architecture overview
5.4.1 Local Tier 1 storage
5.4.2 Shared Tier 1 storage
5.4.3 Shared Tier 2 storage
5.4.4 Storage networking EqualLogic iSCSI
5.4.5 Storage networking Compellent Fibre Channel
5.5 Virtual networking
5.5.1 Local Tier 1 Rack iSCSI
5.5.2 Shared Tier 1 Rack iSCSI
5.5.3 Shared Tier 1 Rack Fibre Channel
5.5.4 Shared Tier 1 Blade iSCSI
5.5.5 Shared Tier 1 Blade Fibre Channel
5.6 Solution high availability
5.6.1 Compute layer HA (Local Tier 1)
5.6.2 vSphere HA (Shared Tier 1)
5.6.3 Horizon View infrastructure protection
5.6.4 Management server high availability
5.6.5 Horizon View VCS high availability
5.6.6 Windows File Services high availability
5.6.7 SQL Server high availability
5.6.8 Load balancing
5.7 VMware Horizon View communication flow
6 Customer-provided solution components
6.1 Customer-provided storage requirements
6.2 Customer-provided switching requirements
7 Solution performance and testing
7.1 Load generation and monitoring
7.1.1 VMware View Planner
7.1.2 Login VSI Login Consultants
7.1.3 Liquidware Labs Stratusphere UX
7.1.4 EqualLogic SAN HQ
7.1.5 VMware vCenter
7.2 Performance analysis methodology
7.2.1 Resource utilization
7.2.2 EUE tools information
7.2.3 EUE real user information
7.2.4 Dell Wyse Datacenter workloads and profiles
7.2.5 Dell Wyse Datacenter profiles
7.2.6 Dell Wyse Datacenter workloads
7.2.7 Workloads running on shared graphics profile
7.2.8 Workloads running on dedicated graphics profile
7.3 Testing and validation
7.3.1 Testing process
7.4 VMware Horizon View test results
7.4.1 Configuration
7.4.2 ESXi 6.0/View 6.1
7.5 Dell PowerEdge C4130 testing
7.5.1 Configuration
7.5.2 Test results
7.6 Dell EqualLogic PS6210XS testing with VMware Horizon View
7.6.1 Overview
7.6.2 Compute resources
7.6.3 Network resources
7.6.4 iSCSI SAN configuration overview
7.6.5 Test objectives
7.6.6 Test criteria/thresholds
7.6.7 Boot storm I/O
7.6.8 Login storm I/O
7.6.9 Steady state I/O
7.6.10 Server host performance
7.6.11 Summary
Acknowledgements
About the authors
1 Introduction
1.1 Purpose of this document
This document describes the Dell Wyse Datacenter for VMware Horizon View Reference Architecture, which scales from 50 to 50,000+ virtual desktop infrastructure (VDI) users.
Solution options encompass a combination of solution models, including local disk, iSCSI or Fibre Channel based storage options.
This document addresses the architecture design, configuration and implementation considerations for
the key components of the architecture required to deliver virtual desktops via VMware Horizon View on
VMware vSphere 5.
1.2 Scope
Relative to delivering the virtual desktop environment, the objectives of this document are to:
See the attached hyperlinks for focused white papers on each of the above topics.
2 Solution architecture overview
2.1 Introduction
The Dell Wyse Datacenter Solution leverages a core set of hardware and software components consisting of four primary layers:
- Networking layer
- Compute server layer
- Management server layer
- Storage layer
These components have been integrated and tested to provide the optimal balance of high performance and lowest cost per user. Additionally, the Dell Wyse Datacenter Solution includes an approved extended list of optional components in the same categories. These components give IT departments the flexibility to custom-tailor the solution for environments with unique virtual desktop infrastructure (VDI) feature, scale or performance needs. The Dell Wyse Datacenter stack is designed to be a cost-effective starting point for IT departments looking to migrate gradually to a fully virtualized desktop environment. This approach allows you to grow the investment and commitment as needed or as your IT staff becomes more comfortable with VDI technologies.
2.1.1 Physical architecture overview
The core Dell Wyse Datacenter architecture consists of two models: Local Tier 1 and Shared Tier 1. Tier 1
in the Dell Wyse Datacenter context defines from which disk source the VDI sessions execute. Local Tier 1
includes rack servers only while Shared Tier 1 can include rack or blade servers due to the usage of shared
Tier 1 storage. Tier 2 storage is present in both solution architectures and, while having a reduced
performance requirement, is utilized for user profile/data and Management virtual machine (VM)
execution. Management VM execution occurs using Tier 2 storage for all solution models. Dell Wyse
Datacenter is a 100% virtualized solution architecture.
In the Shared Tier 1 solution model, an additional high-performance shared storage array is added to
handle the execution of the VDI sessions. All compute and management layer hosts in this model are
diskless.
(Diagram: Local Tier 1 and Shared Tier 1 physical models, showing where VDI, management and user data disks reside across local disk, shared Tier 1 storage and shared Tier 2 storage.)
2.1.2 Dell Wyse Datacenter solution layers
Only a single high-performance Force10 48-port switch is required to get started in the network layer. This switch hosts all solution traffic consisting of 1Gb iSCSI and LAN sources for smaller stacks. Above 1000 users we recommend that LAN and iSCSI traffic be separated into discrete switching fabrics. Additional switches can be added and stacked as required to provide high availability for the network layer.
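As a quick illustration of this rule of thumb, the sketch below maps a user count to the recommended switching layout (a minimal sketch only; the function name and return values are hypothetical):

```python
# Minimal sketch of the network-layer guidance above: a single stacked
# switch fabric carries combined LAN + iSCSI traffic up to 1000 users;
# above that, LAN and iSCSI are separated into discrete fabrics.
def recommended_fabrics(users: int) -> list[str]:
    if users <= 1000:
        return ["combined LAN/iSCSI fabric (stacked for HA)"]
    return ["dedicated LAN fabric (stacked for HA)",
            "dedicated iSCSI fabric (stacked for HA)"]

print(recommended_fabrics(500))   # one combined fabric
print(recommended_fabrics(3000))  # discrete LAN and iSCSI fabrics
```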
The compute layer consists of the server resources responsible for hosting the Horizon View user sessions via the VMware vSphere hypervisor, in either the local or shared Tier 1 solution model.
VDI management components are dedicated to their own layer so as to not negatively impact the user
sessions running in the compute layer. This physical separation of resources provides clean, linear and
predictable scaling without the need to reconfigure or move resources within the solution as you grow.
The management layer will host all the VMs necessary to support the VDI infrastructure.
The storage layer consists of options provided by EqualLogic for iSCSI and Compellent arrays for Fibre
Channel to suit your Tier 1 and Tier 2 scaling and capacity needs.
2.2 Local Tier 1
2.2.3 Local Tier 1 (iSCSI)
The Local Tier 1 solution model provides a scalable rack-based configuration that hosts user VDI sessions
on local disk in the compute layer.
2.2.3.1 Local Tier 1 network architecture (iSCSI)
In the Local Tier 1 architecture, a single Force10 switch can be shared among all network connections for both management and compute, up to 1000 users. Above 1000 users, Dell Wyse Solutions Engineering recommends separating the network fabrics to isolate iSCSI and LAN traffic, as well as making each switch stack redundant. Only the management servers connect to iSCSI storage in this model. All Top of Rack (ToR) traffic has been designed to be layer 2 (switched locally), with all layer 3 (routable) VLANs trunked from a core or distribution switch. The following diagram illustrates the logical data flow in relation to the core switch.
(Diagram: core switch trunking the DRAC, Mgmt, vMotion and VDI VLANs to the ToR switches, with compute and management hosts connecting to the iSCSI SAN.)
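As a rough model of the VLAN design above (illustrative only; the document does not enumerate which VLANs are routable, so the routable/local split below is an assumption):

```python
# Sketch of the Local Tier 1 VLAN plan. Layer 3 (routable) VLANs are
# trunked from the core/distribution switch; the rest stay layer 2 at
# the ToR. The routable flags here are assumptions for illustration.
vlans = {
    "DRAC":    {"routable": True},   # out-of-band management
    "Mgmt":    {"routable": True},   # ESXi/VM management
    "VDI":     {"routable": True},   # desktop session traffic
    "vMotion": {"routable": False},  # assumed local to the ToR switches
}

core_trunk = sorted(name for name, cfg in vlans.items() if cfg["routable"])
print("VLANs trunked from the core switch:", core_trunk)
```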
2.2.3.2 Local Tier 1 cabling diagram for high availability (HA) (Rack HA)
(Diagram: HA cabling with stacked S55/S60 switches carrying LAN and SAN traffic.)
2.3 Shared Tier 1 Rack
2.3.2 Shared Tier 1 Rack (iSCSI EQL)
For 555 or more users on EqualLogic (EQL), the storage layers are separated into discrete arrays. The drawing below depicts a 3000 user build where the network fabrics are separated for LAN and iSCSI traffic. Additional PS6210XS arrays are added for Tier 1 as the user count scales, and the Tier 2 array model also changes based on scale. The PS4110E, PS6210E and PS6510E are 10Gb Tier 2 array options. NAS is recommended above 1000 users to provide HA for file services.
2.3.2.1 Shared Tier 1 Rack network architecture (iSCSI)
In the Shared Tier 1 architecture for rack servers, both management and compute servers connect to shared storage. All ToR traffic has been designed to be layer 2 (switched locally), with all layer 3 (routable) VLANs routed through a core or distribution switch. The following diagram illustrates the server NIC to ToR switch connections, vSwitch assignments, as well as logical VLAN flow in relation to the core switch.
(Diagram: core switch trunking the DRAC, Mgmt, vMotion and VDI VLANs to the ToR switches, with compute and management hosts connecting to the iSCSI SAN.)
2.3.2.2 Shared Tier 1 Rack Cabling diagram (Rack EQL)
(Diagram: rack cabling with S55/S60 switches for the LAN and S4810 switches for the iSCSI SAN.)
2.3.3 Shared Tier 1 Rack 1000 users (FC CML)
Utilizing Compellent (CML) storage for Shared Tier 1 provides a Fibre Channel solution where Tier 1 and Tier 2 can optionally be combined in a single array. Tier 2 functions (user data + management VMs) can be removed from the array if the customer has another Tier 2 solution in place, or a Tier 2 Compellent array can be used. Scaling this solution is very linear, predictably adding Compellent arrays for every 2000 basic users, on average. The image below depicts a 1000 user array. For 2000 users, 96 total disks in 4 shelves are required. Please see section 3.3.3 for more information.
2.3.3.1 Shared Tier 1 Rack Network architecture (FC)
In the Shared Tier 1 architecture for rack servers using Fibre Channel (FC), a separate switching infrastructure is required for FC. Management and compute servers both connect to shared storage using FC, and both connect to all network VLANs in this model. All ToR traffic has been designed to be layer 2 (switched locally), with all layer 3 (routable) VLANs routed through a core or distribution switch. The following diagram illustrates the server NIC to ToR switch connections, vSwitch assignments, as well as logical VLAN flow in relation to the core switch.
(Diagram: core switch trunking the DRAC, Mgmt, vMotion and VDI VLANs to the ToR Ethernet switch, with compute and management hosts connecting to the FC SAN through the FC switch.)
2.3.3.2 Shared Tier 1 Rack Cabling diagram (Rack CML)
(Diagram: rack cabling with S55/S60 switches for the LAN and Brocade 6510 switches for the FC SAN.)
2.4 Shared Tier 1 Blade
2.4.2 Shared Tier 1 Blade (iSCSI EQL)
Above 1000 users the storage tiers need to be separated to maximize the performance of the PS6210XS
for VDI sessions. At this scale we also separate LAN from iSCSI switching. Optionally, load balancing and
NAS can be added for HA. The drawing below depicts a 3000 user solution.
2.4.2.1 Shared Tier 1 Blade Network architecture (iSCSI)
In the Shared Tier 1 architecture for blades, only iSCSI is switched through a ToR switch. There is no need to switch LAN at the ToR since the M6348 in the chassis supports LAN to the blades and can be uplinked to the core or distribution layers directly. The M6348 has 16 external ports per switch that can optionally be used for DRAC/IPMI traffic. For greater redundancy, a ToR switch outside of the chassis can be used to support DRAC/IPMI. Both management and compute servers connect to all VLANs in this model. The following diagram illustrates the server NIC to ToR switch connections, vSwitch assignments, as well as logical VLAN flow in relation to the core switch.
(Diagram: core switch trunking the DRAC, Mgmt, vMotion and VDI VLANs to the ToR switch, with compute and management hosts connecting to the iSCSI SAN.)
2.4.2.2 Shared Tier 1 Blade Cabling diagram (Blade EQL)
(Diagram: blade chassis uplinked to a stacked S4810 ToR pair carrying 10Gb LAN and 10Gb SAN traffic, with stacking links and uplinks to the core.)
2.4.3 Shared Tier 1 Blade (FC CML)
Fibre Channel is again an option in Shared Tier 1 using blades. There are a few key differences using FC with blades instead of iSCSI: blade chassis interconnects, FC HBAs in the servers and FC I/O cards in the Compellent arrays. ToR FC switching is optional if a suitable FC infrastructure is already in place. The image below depicts a 4000 user stack.
2.4.3.1 Shared Tier 1 Blade Network architecture (FC)
(Diagram: core switch trunking the DRAC, Mgmt, vMotion and VDI VLANs, with compute and management hosts connecting to the FC SAN through the FC switches.)
2.4.3.2 Shared Tier 1 Blade Cabling diagram (Blade CML)
(Diagram: blade chassis with 10Gb LAN uplinks to the core and dual Brocade 6510 switches forming FC fabrics A and B for the FC SAN.)
3 Hardware components
3.1 Networking
The following sections contain the core network components for the Dell Wyse Datacenter solutions. General uplink cabling guidance to consider in all cases: Twinax is very cost-effective for short 10Gb runs, and for longer runs it is best to use fiber with SFPs.
3.1.1 Force10 S55 (ToR switch)
Guidance:
- 10Gb uplinks to a core or distribution switch are the preferred design choice, using the rear 10Gb uplink modules. If 10Gb to a core or distribution switch is unavailable, the front 4 x 1Gb SFP ports can be used.
- The front 4 SFP ports can support copper cabling and can be upgraded to optical if a longer run is needed.
For more information on the S55 switch and Dell Force10 networking, please visit:
http://www.dell.com/us/enterprise/p/force10-s55/pd
3.1.2 Force10 S60 (1Gb ToR switch)
Guidance:
- 10Gb uplinks to a core or distribution switch are the preferred design choice, using the rear 10Gb uplink modules. If 10Gb to a core or distribution switch is unavailable, the front 4 x 1Gb SFP ports can be used.
- The front 4 SFP ports can support copper cabling and can be upgraded to optical if a longer run is needed.
- The S60 is appropriate for use in solutions scaling higher than 6000 users.
For more information on the S60 switch and Dell Force10 networking, please visit:
http://www.dell.com/us/enterprise/p/force10-s60/pd
3.1.3 Force10 S4810 (10Gb ToR switch)
The S4810 provides 48 dual-speed 1/10GbE SFP+ ports plus four 40GbE QSFP+ uplinks (each 40GbE QSFP+ uplink can support four 10GbE ports with a breakout cable). Priority-based Flow Control (PFC), Data Center Bridge Exchange (DCBX) and Enhanced Transmission Selection (ETS), coupled with ultra-low latency and line-rate throughput, make the S4810 ideally suited for iSCSI storage, FCoE transit and DCB environments.
Guidance:
- The 40Gb QSFP+ ports can be split into 4 x 10Gb ports using breakout cables for stand-alone units, if necessary. This is not supported in stacked configurations.
- 10Gb or 40Gb uplinks to a core or distribution switch are the preferred design choice.
- The S4810 is appropriate for use in solutions scaling higher than 6000 users.
For more information on the S4810 switch and Dell Force10 networking, please visit:
http://www.dell.com/us/enterprise/p/force10-s4810/pd
3.1.4 Brocade 6510 (FC ToR switch)
The Brocade 6510 switch meets the demands of hyper-scale, private cloud storage environments by delivering market-leading speeds up to 16 Gbps with Fibre Channel technology and capabilities that support highly virtualized environments. Designed to enable maximum flexibility and investment protection, the Brocade 6510 is configurable in 24, 36 or 48 ports and supports 2, 4, 8 or 16 Gbps speeds in an efficiently designed 1U package. It also provides a simplified deployment process and a point-and-click user interface, making it both powerful and easy to use. The Brocade 6510 offers low-cost access to industry-leading Storage Area Network (SAN) technology while providing pay-as-you-grow scalability to meet the needs of an evolving storage environment.
Features: 48 x auto-sensing ports
Guidance:
- The 6510 FC switch can be licensed to light only the number of ports required for the deployment. If only 24 or fewer ports are required for a given implementation, then only those need to be licensed.
- Up to 239 Brocade switches can be used in a single FC fabric.
3.1.5 PowerEdge M I/O Aggregator (10Gb blade interconnect)
Model: PowerEdge M I/O Aggregator (IOA)
Features: Up to 32 x 10Gb ports + 4 x external SFP+; 2 x line-rate fixed QSFP+ ports; 2 x FlexIO module bays
Options: 2-port QSFP+ modules in 4x10Gb mode; 4-port SFP+ 10Gb module; 4-port 10GBaseT copper module (one per IOA); stacking available only with Active System Manager
Uses: Blade switch for iSCSI in Shared Tier 1 blade solution
Guidance:
- 10Gb uplinks to a ToR switch are the preferred design choice, using Twinax or optical cabling for longer runs.
- If copper-based uplinks are necessary, additional FlexIO modules can be used.
For more information, please visit: http://www.dell.com/us/business/p/poweredge-m-io-aggregator/pd
3.1.6 PowerConnect M6348 (1Gb blade interconnect)
Guidance:
- 10Gb uplinks to a core or distribution switch are the preferred design choice, using Twinax or optical cabling via the SFP+ ports.
- 16 x external 1Gb ports can be used for management ports, DRACs, etc.
- Stack up to 12 switches using stacking ports.
3.1.7 Brocade M5424 (FC blade interconnect)
3.1.7.2 QLogic QLE2562 HBA
The QLE2562 is a PCI Express, dual-port, Fibre Channel HBA. It is part of the QLE2500 HBA product family that offers next-generation 8Gb FC technology, meeting the business requirements of the enterprise data center. Features of this HBA include throughput of 3200 MBps (full-duplex), 200,000 initiator and target I/Os per second (IOPS) per port, and StarPower technology-based dynamic and adaptive power management. Benefits include optimizations for virtualization, power, reliability, availability and serviceability (RAS), and security.
3.2 Servers
3.2.1 PowerEdge R730
The rack server platform for the Dell Wyse Datacenter solution is the best-in-class Dell PowerEdge R730. This dual-socket CPU platform runs the fastest Intel Xeon E5-2600 v3 family of processors, can host up to 768GB RAM and supports up to 16 x 2.5" SAS disks. The Dell PowerEdge R730 offers uncompromising performance and scalability in a 2U form factor. For more information, please visit: http://www.dell.com/us/business/p/poweredge-r730/pd
3.2.2 PowerEdge M620
The blade server platform for the Dell Wyse Datacenter solution is the PowerEdge M620. This half-height blade server is a feature-rich, dual-processor platform that offers a blend of density, performance, efficiency and scalability. The M620 offers remarkable computational density, scaling up to 24 cores across two Intel Xeon processors and 24 DIMMs (768GB) of DDR3 memory in an extremely compact half-height blade form factor. This server platform is currently offered in both the PowerEdge M1000e blade enclosure and the VRTX shared infrastructure platform. For more information, please visit: http://www.dell.com/us/business/p/poweredge-m620/pd
3.3 Storage
3.3.1 EqualLogic Tier 1 storage (iSCSI)
3.3.1.1 PS6210XS
The PS6210XS implements both high-speed, low-latency solid-state disk (SSD) technology and high-capacity HDDs in a single chassis. The PS6210XS 10GbE iSCSI array is a Dell Fluid Data solution with a virtualized scale-out architecture that delivers enhanced storage performance and reliability that is easy to manage and scale for future needs. For more information please visit: http://www.dell.com/us/business/p/equallogic-ps6210-series/pd
Configuration: 7 x SSD + 17 x 10K SAS
3.3.2 EqualLogic Tier 2 storage (iSCSI)
3.3.2.1 PS4100E
Model: EqualLogic PS4100E
Features: 12 drive bays (NL-SAS, 7200 RPM); dual HA controllers; snaps/clones; asynchronous replication; SAN HQ; 1Gb
Options: 12TB (12 x 1TB HDDs); 24TB (12 x 2TB HDDs); 36TB (12 x 3TB HDDs)
Uses: Tier 2 array for 1000 users or less in Local Tier 1 solution model (1Gb iSCSI)
3.3.2.2 PS4110E
Model: EqualLogic PS4110E
Features: 12 drive bays (NL-SAS, 7200 RPM); dual HA controllers; snaps/clones; asynchronous replication; SAN HQ; 10Gb
Options: 12TB (12 x 1TB HDDs); 24TB (12 x 2TB HDDs); 36TB (12 x 3TB HDDs)
Uses: Tier 2 array for 1000 users or less in Shared Tier 1 solution model (10Gb iSCSI)
3.3.2.3 PS6100E
Model: EqualLogic PS6100E
Features: 24 drive bays (NL-SAS, 7200 RPM); dual HA controllers; snaps/clones; asynchronous replication; SAN HQ; 1Gb; 4U chassis
Options: 24TB (24 x 1TB HDDs); 48TB (24 x 2TB HDDs); 72TB (24 x 3TB HDDs); 96TB (24 x 4TB HDDs)
Uses: Tier 2 array for up to 1500 users, per array, in Local Tier 1 solution model (1Gb)
3.3.2.4 PS6210E
Model: EqualLogic PS6210E
Features: 24 drive bays (NL-SAS, 7200 RPM); dual HA controllers; snaps/clones; asynchronous replication; SAN HQ; 10Gb; 4U chassis
Options: 24TB (24 x 1TB HDDs); 48TB (24 x 2TB HDDs); 72TB (24 x 3TB HDDs); 96TB (24 x 4TB HDDs)
Uses: Tier 2 array for up to 1500 users, per array, in Shared Tier 1 solution model (10Gb)
3.3.2.5 PS6500E
Model: EqualLogic PS6500E
Features: 48 drive bays (SATA/NL-SAS); dual HA controllers; snaps/clones; asynchronous replication; SAN HQ; 1Gb
Options: 48TB (48 x 1TB HDDs); 96TB (48 x 2TB HDDs); 144TB (48 x 3TB HDDs)
Uses: Tier 2 array for Local Tier 1 solution model (1Gb iSCSI)
3.3.2.6 PS6510E
Model: EqualLogic PS6510E
Features: 48 drive bays (SATA/NL-SAS); dual HA controllers; snaps/clones; asynchronous replication; SAN HQ; 10Gb
Options: 48TB (48 x 1TB HDDs); 96TB (48 x 2TB HDDs); 144TB (48 x 3TB HDDs)
Uses: Tier 2 array for Shared Tier 1 solution model (10Gb iSCSI)
3.3.2.7 EqualLogic configuration
Each tier of EqualLogic storage is to be managed as a separate pool or group to isolate specific workloads.
Manage shared Tier 1 arrays used for hosting VDI sessions together, while managing shared Tier 2 arrays
used for hosting Management server role VMs and user data together.
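A minimal sketch of this pool separation, assuming hypothetical array and pool names:

```python
# Tier 1 arrays (hosting VDI sessions) and Tier 2 arrays (hosting
# management VMs and user data) are managed as separate pools/groups.
arrays = {
    "EQL-T1-01": {"model": "PS6210XS", "pool": "Pool-T1-VDI"},
    "EQL-T1-02": {"model": "PS6210XS", "pool": "Pool-T1-VDI"},
    "EQL-T2-01": {"model": "PS6210E",  "pool": "Pool-T2-MgmtUser"},
}

tier1_pools = {a["pool"] for a in arrays.values() if a["model"].endswith("XS")}
tier2_pools = {a["pool"] for a in arrays.values() if not a["model"].endswith("XS")}
# Workload isolation rule: no pool should host both tiers.
assert tier1_pools.isdisjoint(tier2_pools), "Tier 1 and Tier 2 share a pool"
```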
3.3.3 Compellent storage (FC)
Dell Wyse Solutions Engineering recommends that all Compellent storage arrays be implemented using two controllers in an HA cluster. Fibre Channel is the preferred storage protocol for use with this array, but Compellent is fully capable of supporting iSCSI as well. Key Storage Center applications used strategically to provide increased performance include:
3.3.3.1 Compellent Tier 1
Each disk shelf requires one hot spare per disk type. RAID is virtualized across all disks in an array (RAID10 or RAID6). Please refer to the test methodology and results for specific workload characteristics. SSDs can be added for use in scenarios where boot storms or provisioning speeds are an issue.

Users | Controller Pairs | Disk Shelves | 15K SAS Disks | Raw Capacity | Use
500 | 1 | 1 | 22 | 7TB | T1 + T2
1000 | 1 | 2 | 48 | 15TB | T1 + T2
2000 | 1 | 4 | 96 | 29TB | T1 + T2
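A rough interpolation of this table as code (an illustrative sketch, not official Dell sizing; the helper name is hypothetical):

```python
import math

# Interpolates the Tier 1 sizing table above: one 24-disk shelf per
# ~500 users (22 disks in the single-shelf 500-user base) and one
# controller pair per ~2000 basic users, on average.
def compellent_tier1_sizing(users: int) -> dict:
    shelves = max(1, math.ceil(users / 500))
    return {
        "controller_pairs": max(1, math.ceil(users / 2000)),
        "disk_shelves": shelves,
        "disks_15k_sas": 22 if users <= 500 else shelves * 24,
    }

print(compellent_tier1_sizing(2000))
# -> {'controller_pairs': 1, 'disk_shelves': 4, 'disks_15k_sas': 96}
```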
3.3.3.2 Compellent Tier 2
Compellent Tier 2 storage is completely optional if a customer wishes to deploy discrete arrays for each tier. The guidance below is provided for informational purposes; arrays built for this purpose will need to be custom designed. The optional Compellent Tier 2 array consists of a standard dual-controller configuration and scales upward by adding disks and shelves. A single pair of SC8000 controllers should be able to support Tier 2 for 10,000 basic users. Additional capacity and performance capability is achieved by adding disks and shelves, as appropriate. Each disk shelf requires one hot spare per disk type. When designing for Tier 2, capacity requirements will drive higher overall array performance capabilities due to the amount of disk that will be on hand. Our base Tier 2 sizing guidance is based on 1 IOPS and 5GB per user.
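Worked through in code, the base guidance translates directly (a sketch; the function name is hypothetical):

```python
# Base Tier 2 sizing guidance from above: 1 IOPS and 5 GB per user.
def compellent_tier2_sizing(users: int) -> dict:
    return {"required_iops": users, "required_capacity_gb": users * 5}

# A single SC8000 controller pair is stated to support Tier 2 for
# ~10,000 basic users, i.e. roughly 10,000 IOPS and 50 TB of capacity.
print(compellent_tier2_sizing(10_000))
# -> {'required_iops': 10000, 'required_capacity_gb': 50000}
```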
3.3.4 NAS
3.3.4.1 FS7600
Model: EqualLogic FS7600
Features: Dual active-active controllers; 24GB cache per controller (cache mirroring); SMB & NFS support; AD integration; up to 2 FS7600 systems in a NAS cluster (4 controllers); 1Gb iSCSI via 16 x Ethernet ports
Scaling: Each controller can support 1500 concurrent users; up to 6000 total users in a 2-system NAS cluster
Uses: Scale-out NAS for Local Tier 1, to provide file share HA
3.3.4.2 FS8600
Model: Compellent FS8600
Features: Dual active-active controllers; 24GB cache per controller (cache mirroring); SMB & NFS support; AD integration; up to 4 FS8600 systems in a NAS cluster (8 controllers); Fibre Channel only
Scaling: Each controller can support 1500 concurrent users; up to 12,000 total users in a 4-system NAS cluster
Uses: Scale-out NAS for Shared Tier 1 on Compellent, to provide file share HA (FC only)
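The scaling figures in these two tables reduce to a small sizing sketch (illustrative only; the helper name is hypothetical):

```python
import math

# Each NAS controller supports 1500 concurrent users and each system
# contains two controllers; a cluster tops out at 2 FS7600 systems
# (6000 users) or 4 FS8600 systems (12,000 users).
def nas_systems_needed(users: int, model: str = "FS7600") -> int:
    max_systems = {"FS7600": 2, "FS8600": 4}[model]
    systems = math.ceil(users / (2 * 1500))
    if systems > max_systems:
        raise ValueError(f"{users} users exceeds a single {model} NAS cluster")
    return systems

print(nas_systems_needed(5000))            # -> 2 FS7600 systems
print(nas_systems_needed(9000, "FS8600"))  # -> 3 FS8600 systems
```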
3.4 Wyse Cloud Clients
The following Wyse Cloud Clients are the recommended choices for this solution.
3.4.4 Wyse 7250-Z50D
Designed for power users, the Wyse 7250-Z50D is the highest-performing thin client on the market. Highly secure and ultra-powerful, the 7250-Z50D combines Wyse-enhanced SUSE Linux Enterprise with a dual-core AMD 1.65 GHz processor and a revolutionary unified engine for an unprecedented user experience. The 7250-Z50D eliminates performance constraints for high-end, processing-intensive applications like computer-aided design, multimedia, HD video and 3D modelling.
3.4.7 Dell Chromebook 11
With two USB 3.0 ports, Bluetooth 4.0 and an HDMI port, end users have endless possibilities for collaborating, creating, consuming and displaying content. With battery life of up to 10 hours, the Chromebook is capable of powering end users throughout the day.
Finally, with a fully compliant HTML5 browser, the Dell Chromebook 11 is an excellent choice as an endpoint to an HTML5/Blast-connected Horizon View VDI desktop.
4 Software components
4.1 What's new in this release of Horizon View 6.0?
RemoteApp: RemoteApp enables administrators to make programs that are accessed remotely through a Remote Desktop Session (RDS) server appear as if they are running on the client computer rather than on a remote desktop.
Virtual SAN: Horizon 6 with VMware Virtual SAN is a new storage technology that automates storage provisioning, pools server-attached flash drives and hard disks together, and virtualizes them into reliable storage. Built into the vSphere platform, the technology offers greater performance while simplifying storage management. Virtual SAN eliminates the need to overprovision storage to ensure that end users have enough IOPS per desktop.
Cloud pod architecture: The cloud pod architecture allows organizations to dynamically move and locate View pods across multiple data centers for efficient management of end users across distributed locations.
vDGA and vSGA 3D graphics enhancements: 3D graphics capabilities are enhanced to augment a graphically rich user experience. Using Virtual Dedicated Graphics Acceleration (vDGA), a single virtual machine is mapped to one physical graphics processing unit (GPU) in the ESXi host, providing high-end, hardware-accelerated workstation graphics. Using Virtual Shared Graphics Acceleration (vSGA), multiple virtual machines leverage physical GPUs installed locally in ESXi hosts, providing hardware-accelerated 3D graphics to multiple virtual desktops.
Unity Touch enhancements: Enhancements to VMware Unity Touch technology make it easier to connect to View Connection Server or a View security server, log in to remote desktops in the data center, and edit the list of connected servers. Unity Touch for VMware Horizon Client makes it easier to run Windows apps on iPhone, iPad and Android devices.
Additional OS support: View Connection Server, security server and View Composer are supported on Windows Server 2012 R2 operating systems.
Horizon View logs: Horizon View logs can be sent to a syslog server such as VMware vCenter Log Insight.
Horizon View Agent: The Remote Experience Agent is now integrated with View Agent. Previously, you had to install both View Agent and the Remote Experience Agent to use features such as HTML Access, Unity Touch, Real-Time Audio-Video and Windows 7 Multimedia Redirection. In this release these features are available by installing just the View Agent.
4.2 VMware Horizon View
The solution is based on VMware Horizon View which provides a complete end-to-end solution delivering
Microsoft Windows virtual desktops to users on a wide variety of endpoint devices. Virtual desktops are
dynamically assembled on demand, providing users with pristine, yet personalized, desktops each time
they log on.
VMware Horizon View provides a complete virtual desktop delivery system by integrating several
distributed components with advanced configuration tools that simplify the creation and real-time
management of the virtual desktop infrastructure. For the complete set of details, please see the Horizon
View resources page at http://www.vmware.com/products/horizon-view/resources.html
View Connection Server (VCS): Installed on servers in the data center, the VCS brokers client connections, authenticates users, entitles users by mapping them to desktops and/or pools, establishes secure connections from clients to desktops, supports single sign-on, sets and applies policies, acts as a DMZ security server for connections from outside the corporate firewall, and more.
View Client: Installed on endpoints; this is the software for creating connections to View desktops and can be run from tablets; Windows, Linux or Mac PCs or laptops; thin clients; and other devices.
View Portal: A web portal to access links for downloading full View clients. With the HTML Access feature enabled, a View desktop can also run inside a supported browser.
View Agent: Installed on all VMs, physical machines and Terminal Services servers that are used as a source for View desktops. On VMs the agent communicates with the View Client to provide services such as USB redirection, printer support and more.
View Administrator: A web portal that provides administrative functions such as deploying and managing View desktops and pools, setting and controlling user authentication, and more.
View Composer: This software service can be installed standalone or on the vCenter Server and enables the deployment and creation of linked-clone desktop pools (also called non-persistent desktops).
vCenter Server: Provides centralized management and configuration of the entire virtual desktop and host infrastructure, facilitating configuration, provisioning and management services. It is installed on a Windows Server 2008 host (which can be a VM).
View Transfer Server: Manages data transfers between the data center and View desktops that are checked out for use on end users' desktops in offline mode. This server is required to support desktops that run the View Client with Local Mode options. It performs replication and synchronization of offline images.
4.3 VDI hypervisor platform
4.3.1 VMware vSphere 5
VMware vSphere 5 includes three major layers: virtualization, management and interface. The virtualization layer includes infrastructure and application services. The management layer is central for configuring, provisioning and managing virtualized environments. The interface layer includes the vSphere Client and the vSphere Web Client.
Throughout the Dell Wyse Datacenter solution, all VMware best practices and prerequisites are adhered to
(NTP, DNS, Active Directory, etc.). The vCenter 5 VM used in the solution will be a single Windows Server
2012 R2 VM (Check for current Windows Server OS compatibility at:
http://www.vmware.com/resources/compatibility ), residing on a host in the management tier. SQL server
is a core component of vCenter and will be hosted on another VM also residing in the management tier.
All additional Horizon View components need to be installed in a distributed architecture, 1 role per VM.
5 Solution architecture
5.1.2.1 iSCSI
| Shared Tier 1 Compute Host | Shared Tier 1 Management Host |
|---|---|
| PowerEdge R730 | PowerEdge R730 |
| 2 x Intel Xeon E5-2697v3 Processor (2.6GHz) | 2 x Intel Xeon E5-2660v3 Processor (2.6GHz) |
| 384GB Memory (24 x 16GB RDIMMs, 2133MT/s) | 256GB Memory (16 x 16GB RDIMMs, 2133MT/s) |
| VMware vSphere on internal 8GB Dual SD | VMware vSphere on internal 8GB Dual SD |
| Broadcom 57810 10Gb DP (iSCSI) | Broadcom 57810 10Gb DP (iSCSI) |
| Broadcom 57800 10Gb QP (LAN/iSCSI) | Broadcom 57800 10Gb QP (LAN/iSCSI) |
| Broadcom 5720 1Gb DP NIC (LAN) | Broadcom 5720 1Gb DP NIC (LAN) |
| iDRAC8 Enterprise | iDRAC8 Enterprise |
| 2 x 750W PSUs | 2 x 750W PSUs |
In the above configurations, the R730-based Dell Wyse Datacenter Solution can support the following
user counts per server:
5.1.3.1 iSCSI
| Shared Tier 1 Compute Host | Shared Tier 1 Management Host |
|---|---|
| PowerEdge M620 | PowerEdge M620 |
| 2 x Intel Xeon E5-2690v2 Processor (3GHz) | 2 x Intel Xeon E5-2670v2 Processor (2.5GHz) |
| 256GB Memory (16 x 16GB DIMMs @ 1600MHz) | 96GB Memory (6 x 16GB DIMMs @ 1600MHz) |
| VMware vSphere on 2 x 1GB internal SD | VMware vSphere on 2 x 1GB internal SD |
| Broadcom 57810-k 10Gb DP KR NDC (iSCSI) | Broadcom 57810-k 10Gb DP KR NDC (iSCSI) |
| 1 x Intel i350 1Gb QP SERDES mezzanine (LAN) | 1 x Intel i350 1Gb QP SERDES mezzanine (LAN) |
| iDRAC7 Enterprise w/ vFlash, 8GB SD | iDRAC7 Enterprise w/ vFlash, 8GB SD |
In the above configuration, the M620-based Dell Wyse Datacenter Solutions can support the following
single server user densities:
| Role | vCPU | RAM (GB) | NIC | OS + Data vDisk (GB) | Tier 2 Volume (GB) |
|---|---|---|---|---|---|
| VMware vCenter | 2 | 8 | 1 | 40 + 5 | 100 (VMDK) |
| View Connection Server | 2 | 8 | 1 | 40 + 5 | - |
| SQL Server | 2 | 8 | 1 | 40 + 5 | 210 (VMDK) |
| File Server | 1 | 4 | 1 | 40 + 5 | 2048 (RDM) |
| Total | 7 | 28 | 4 | 180 | 2358 |
Initial placement of all databases into a single SQL instance is fine unless performance becomes an issue, in which case the databases need to be separated into separate named instances. Enable auto-growth for each DB.
Best practices defined by VMware are to be adhered to, to ensure optimal database performance.
The EqualLogic PS series arrays utilize a default RAID stripe size of 64K. To provide optimal performance,
configure disk partitions to begin from a sector boundary divisible by 64K.
Align all disks to be used by SQL Server with a 1024K offset and then format them with a 64K file allocation unit size (data, logs and TempDB).
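As a sanity check, this alignment rule can be verified programmatically. Below is a minimal Python sketch; the offset and allocation-unit values are hypothetical examples, not read from a live system:

    # Verify SQL Server disk alignment against the EqualLogic 64K stripe size.
    KB = 1024
    STRIPE = 64 * KB               # EqualLogic PS series default RAID stripe size

    partition_offset = 1024 * KB   # recommended 1024K starting offset
    allocation_unit = 64 * KB      # recommended NTFS file allocation unit size

    assert partition_offset % STRIPE == 0, "partition is not 64K-aligned"
    assert allocation_unit == STRIPE, "allocation unit does not match stripe size"
    print("Offset %dK and %dK allocation unit are correctly aligned." %
          (partition_offset // KB, allocation_unit // KB))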
5.2.2 DNS
DNS plays a crucial role in the environment, not only as the basis for Active Directory but also to control access to the various VMware software components. All hosts, VMs and consumable software components need to have a presence in DNS, preferably via a dynamic and AD-integrated namespace. Microsoft best practices and organizational requirements are to be adhered to.
Pay consideration to eventual scaling and to access to components that may live on one or more servers (SQL databases, VMware services) during the initial deployment. Use CNAMEs and the round-robin DNS mechanism to provide a front-end mask to the back-end server actually hosting the service or data source.
Rather than connecting to SQLServer1\<instance name> for every device that needs access to SQL, the preferred approach is to connect to <CNAME>\<instance name>.
For example, the CNAME VDISQL is created to point to SQLServer1. If a failure scenario were to occur and SQLServer2 needed to start serving data, we would simply change the CNAME in DNS to point to SQLServer2. No infrastructure SQL client connections would need to be touched.
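To illustrate the benefit, a client that always targets the CNAME needs no change after such a failover. The sketch below is illustrative only; it assumes the hypothetical VDISQL alias, a hypothetical VDIINSTANCE instance name, and the pyodbc module with a SQL Server ODBC driver installed:

    import pyodbc  # assumes pyodbc and a SQL Server ODBC driver are available

    # Connect via the DNS alias, never the physical name (SQLServer1/2).
    # If the CNAME is repointed after a failover, this code needs no change.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        r"SERVER=VDISQL\VDIINSTANCE;"   # <CNAME>\<instance name>
        "Trusted_Connection=yes;")
    print(conn.execute("SELECT @@SERVERNAME").fetchone()[0])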
- The components can be scaled either horizontally (by adding additional physical and virtual servers to the server pools) or vertically (by adding virtual resources to the infrastructure)
- Eliminate bandwidth and performance bottlenecks as much as possible
- Allow future horizontal and vertical scaling with the objective of reducing the future cost of ownership of the infrastructure
File Services - key metrics are concurrent connections and the responsiveness of reads/writes. Scale horizontally by splitting user profiles and home directories between multiple file servers in the cluster; file services can also be migrated to the optional NAS device to provide high availability. Scale vertically by adding RAM and CPU to the management nodes.
The following tables indicate the server platform, desktop OS, hypervisor and delivery mechanism:
5.3.2 Windows 8 vSphere
Rack or Blade, Win8, vSphere
| Standard User Count | Enhanced User Count | Professional User Count | Physical Mgmt. Servers | Physical Host Servers | M1000e Blade Chassis | View Conn. Servers | Virtual vCenter Servers |
|---|---|---|---|---|---|---|---|
| 140 | 112 | 90 | 1 | 1 | 1 | 1 | 1 |
| 500 | 448 | 360 | 1 | 4 | 1 | 1 | 1 |
| 1000 | 896 | 720 | 2 | 8 | 1 | 1 | 1 |
| 2000 | 1680 | 1350 | 2 | 15 | 2 | 1 | 1 |
| 3000 | 2464 | 1980 | 2 | 22 | 2 | 2 | 1 |
| 4000 | 3248 | 2610 | 3 | 29 | 2 | 2 | 1 |
| 5000 | 4032 | 3240 | 3 | 36 | 3 | 3 | 1 |
| 6000 | 4816 | 3870 | 4 | 43 | 3 | 3 | 1 |
| 7000 | 5600 | 4500 | 4 | 50 | 4 | 4 | 1 |
| 8000 | 6496 | 5220 | 4 | 58 | 4 | 4 | 1 |
| 9000 | 7280 | 5850 | 4 | 65 | 5 | 5 | 1 |
| 10,000 | 8064 | 6480 | 4 | 72 | 5 | 5 | 1 |
Note: All values based on R720 and M620 density testing.
5.3.3 Windows 8.1 vSphere
Rack or Blade, Win8.1, vSphere
| Standard User Count | Enhanced User Count | Professional User Count | Physical Mgmt. Servers | Physical Host Servers | M1000e Blade Chassis | View Conn. Servers | Virtual vCenter Servers |
|---|---|---|---|---|---|---|---|
| 180* | 120* | 90* | 1 | 1 | 1 | 1 | 1 |
| 500 | 420 | 372 | 1 | 4 | 1 | 1 | 1 |
| 1000 | 735 | 651 | 2 | 7 | 1 | 1 | 1 |
| 2000 | 1470 | 1302 | 2 | 14 | 1 | 1 | 1 |
| 3000 | 2100 | 1860 | 2 | 20 | 2 | 2 | 1 |
| 4000 | 2835 | 2511 | 3 | 27 | 2 | 2 | 1 |
| 5000 | 3570 | 3162 | 3 | 34 | 3 | 3 | 1 |
| 6000 | 4200 | 3720 | 4 | 40 | 3 | 3 | 1 |
| 7000 | 4935 | 4371 | 4 | 47 | 4 | 4 | 1 |
| 8000 | 5670 | 5022 | 4 | 54 | 4 | 4 | 1 |
| 9000 | 6300 | 5580 | 4 | 60 | 4 | 5 | 1 |
| 10,000 | 7035 | 6231 | 4 | 67 | 5 | 5 | 1 |
(*) Values based on R730 density testing. All others based on R720 and M620 density testing.
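The scaling pattern in these tables reduces to a simple calculation: divide the user count by the validated single-host density, round up, and add a host for N+1 HA. The Python sketch below is illustrative only; the 150-users-per-host density is approximated from the 10,000-user Win8.1 row above (10,000 standard users across 67 hosts):

    import math

    def hosts_required(users, density_per_host, n_plus_1=True):
        """Hosts needed: ceil(users / density), plus one spare for N+1 HA."""
        hosts = math.ceil(users / density_per_host)
        return hosts + 1 if n_plus_1 else hosts

    # ~150 Windows 8.1 standard users per host (approximate, from the table)
    print(hosts_required(2000, 150))   # -> 15 (14 compute hosts + 1 spare)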
5.4 Storage architecture overview
The Dell Wyse Datacenter solution has a wide variety of tier 1 and tier 2 storage options to provide
maximum flexibility to suit any use case. Customers have the choice to leverage best-of-breed iSCSI
solutions from EqualLogic or Fibre Channel solutions from Dell Compellent while being assured the
storage tiers of the Dell Wyse Datacenter solution will consistently meet or outperform user needs and
expectations.
Volume: VDI-Images8, 500 GB, Tier 1, VMFS - storage for 125 VDI virtual machines in the Horizon View cluster.
For shared storage on Compellent arrays it is assumed that all pre-work for the setup of a properly tiered architecture has been done to ensure proper data progression and optimal performance. General guidance for configuration is as follows:
simplified deployment, comprehensive storage management and data protection functionality, and seamless VM mobility. Dell iSCSI solutions give customers the Storage Direct advantage: the ability to seamlessly integrate virtualization into an overall, optimized storage environment.
If iSCSI is the selected block storage protocol, then the Dell EqualLogic MPIO plugin is installed on all hosts that connect to iSCSI storage. This module is added via the command line using a Virtual Management Appliance (vMA) from VMware. The plugin allows for easy configuration of iSCSI on each host, allows the creation of new data stores and access to existing ones, and handles IO load balancing. The plugin will also configure the optimal multi-path settings for the data stores. Some key settings to be used as part of the configuration:
5.4.5 Storage networking Compellent Fibre Channel
Based on Fluid Data architecture, the Dell Compellent Storage Center SAN provides built-in intelligence and automation to dynamically manage enterprise data throughout its lifecycle. Together, block-level intelligence, storage virtualization, integrated software and modular, platform-independent hardware enable exceptional efficiency, simplicity and security.
[Diagram: Compute and Mgmt hosts connected to Compellent storage via redundant A and B FC fabrics]
5.4.5.1 FC Zoning
Zone at least one port from each server HBA to communicate with a single Compellent fault domain. The result will be two distinct FC fabrics and four redundant paths per server. Round Robin or Fixed path selection policies are supported. Leverage Compellent Virtual Ports to minimize port consumption as well as simplify deployment. Zone each controller's front-end virtual ports, within a fault domain, with at least one ESXi initiator per server.
5.5 Virtual networking
Following best practices, LAN and block storage traffic will be separated in solutions >1000 users. This
traffic can be combined within a single switch in smaller stacks to minimize buy-in costs. Each Local Tier 1
Compute host will have a quad port NDC as well as a 1Gb dual port NIC. Configure the LAN traffic from
the server to the ToR switch as a LAG.
5.5.1.1 vSphere
[Diagram: Local Tier 1 Compute host (R730) networking - vSwitch0 (Mgmt) on the 1Gb DP NIC and vSwitch1 (LAN) on the 1Gb QP NDC, uplinked to the Force10 ToR LAN switch]
The Compute host will require 2 vSwitches, one for VDI LAN traffic and another for the ESXi Management.
Configure both vSwitches so that each is physically connected to both the onboard NIC as well as the
add-on NIC. Set all NICs and switch ports to auto negotiate.
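This configuration can also be scripted against vCenter. The following is a minimal pyvmomi sketch under stated assumptions: the vCenter address, credentials, host name, vSwitch name and vmnic assignments are all hypothetical placeholders chosen to mirror the diagram above, not values from the validated build:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()        # lab use only
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    host = si.content.searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
    net = host.configManager.networkSystem

    # vSwitch1 for VDI LAN traffic, uplinked to one NDC port and one add-on port
    vss = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1", "vmnic5"]))
    net.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss)

    # VLAN-tagged port group for the desktop VMs (VLAN 5 per the diagram below)
    pg = vim.host.PortGroup.Specification(
        name="VDI VLAN", vlanId=5, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy())
    net.AddPortGroup(portgrp=pg)
    Disconnect(si)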
[Screenshot: Compute | Local Tier 1 vSwitch configuration - Mgmt vmk0: 10.20.1.50 (VLAN 10) on vmnic0/vmnic4; VDI VLAN port group (VLAN 5) on vmnic1/vmnic5 hosting VDI-1, VDI-2 and VDI-3]
The Management hosts have a slightly different configuration since they will additionally access iSCSI storage. The add-on NIC for the Management hosts will be a 1Gb quad port NIC, and three ports of both the NDC and add-on NIC will be used for the required connections. Isolate iSCSI onto its own vSwitch with redundant ports. Connections from all 3 vSwitches should pass through both the NDC and add-on NIC per the diagram below. Configure the LAN traffic from the server to the ToR switch as a LAG.
[Diagram: Local Tier 1 Mgmt host (R730) networking - vSwitch0 (Mgmt/Migration), vSwitch1 (iSCSI) and vSwitch2 (LAN) spread across the 1Gb QP NDC and 1Gb QP NIC, uplinked to the Force10 iSCSI and LAN ToR switches]
vSwitch0 carries traffic for both Management and vMotion, which needs to be VLAN-tagged so that either NIC can serve traffic for either VLAN. The Management VLAN will be L3 routable while the vMotion VLAN will be L2 non-routable.
[Screenshot: Mgmt | Local Tier 1 vSwitch configuration - vSwitch0: Mgmt vmk0: 10.20.1.51 (VLAN 10) and vMotion vmk1: 10.1.1.1 (VLAN 12) on vmnic1/vmnic5; vSwitch1: iSCSI0 vmk2: 10.1.1.10 and iSCSI1 vmk3: 10.1.1.11 (VLAN 11) on vmnic0/vmnic4; vSwitch2: VDI Mgmt VLAN port group (VLAN 6) on vmnic2/vmnic6 hosting the SQL, vCenter and File VMs]
Compute hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor Management traffic L3 routed via core switch
o vMotion VLAN: Configured for vMotion traffic L2 switched only, trunked from Core
o iSCSI VLAN: Configured for iSCSI traffic L2 switched only via ToR switch
o VDI VLAN: Configured for VDI session traffic L3 routed via core switch
Management hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor Management traffic L3 routed via core
switch
o vMotion VLAN: Configured for vMotion traffic L2 switched only, trunked from Core
o iSCSI VLAN: Configured for iSCSI traffic L2 switched only via ToR switch
o VDI Management VLAN: Configured for VDI infrastructure traffic L3 routed via core
switch
An optional iDRAC VLAN can be configured for all hardware management traffic L3 routed via
core switch
Following best practices, iSCSI and LAN traffic will be physically separated into discrete fabrics. Each
Shared Tier 1 Compute and Management host will have a quad port NDC (2 x 1Gb + 2 x 10Gb SFP+), a
10Gb dual port NIC, as well as a 1Gb dual port NIC. Isolate iSCSI onto its own vSwitch with redundant
ports. Connections from all 3 vSwitches should pass through both the NDC and add-on NICs per the
diagram below. Configure the LAN traffic from the server to the ToR switch as a LAG.
5.5.2.1 vSphere
[Diagram: Shared Tier 1 iSCSI Compute + Mgmt host (R730) networking - vSwitch0 (Mgmt/Migration) on 2 x 1Gb NDC ports, vSwitch1 (iSCSI) on the 10Gb DP NIC and 2 x 10Gb NDC ports, and vSwitch2 (LAN) on the 1Gb DP NIC, uplinked to the Force10 iSCSI and LAN ToR switches]
vSwitch0 carries traffic for both Management and vMotion which needs to be VLAN-tagged so that either
NIC can serve traffic for either VLAN. The Management VLAN will be L3 routable while the vMotion VLAN
will be L2 non-routable.
[Screenshot: Compute | Shared Tier 1 vSwitch configuration - Mgmt vmk0: 10.20.1.51 (VLAN 10) and vMotion vmk1: 10.1.1.1 (VLAN 12) on vmnic1/vmnic5; iSCSI1 vmk3: 10.1.1.11 (VLAN 11); VDI VLAN port group (VLAN 6) on vmnic2/vmnic6 hosting VDI-1, VDI-2 and VDI-3]
The Management server is configured identically except for the VDI Management VLAN which is fully
routed but should be separated from the VDI VLAN used on the Compute host. Care should be taken to
ensure that all vSwitches are assigned redundant NICs that are NOT from the same PCIe device.
[Screenshot: Mgmt | Shared Tier 1 vSwitch configuration - vSwitch0: Mgmt vmk0: 10.20.1.51 (VLAN 10) and vMotion vmk1: 10.1.1.1 (VLAN 12) on vmnic1/vmnic5; vSwitch1: iSCSI0 vmk2: 10.1.1.10 and iSCSI1 vmk3: 10.1.1.11 (VLAN 11) on vmnic0/vmnic4; vSwitch2: VDI Mgmt VLAN port group (VLAN 6) on vmnic2/vmnic6 hosting the SQL, vCenter and File VMs]
5.5.3 Shared Tier 1 Rack Fibre Channel
Using Fibre Channel based storage eliminates the need to build iSCSI into the network stack but requires
additional fabrics to be built out. The network configuration in this model is identical between the
Compute and Management hosts. Both need access to FC storage since they are hosting VDI sessions
from shared storage and both can leverage vMotion as a result as well. The following outlines the VLAN
requirements for the Compute and Management hosts in this solution model:
FC and LAN traffic are physically separated into discrete switching fabrics. Each Shared Tier 1 Compute
and Management host will have a quad port NDC (4 x 1Gb), a 1Gb dual port NIC, as well as 2 x 8Gb dual
port FC HBAs. Connections from both vSwitches should pass through both the NDC and add-on NICs per
the diagram below. Configure the LAN traffic from the server to the ToR switch as a LAG.
5.5.3.1 vSphere
[Diagram: Shared Tier 1 FC Compute + Mgmt host (R730) networking - vSwitch0 (Mgmt/Migration) on the 1Gb DP NIC and vSwitch1 (LAN) on the 1Gb QP NDC, uplinked to the Force10 LAN ToR switch; 2 x 8Gb FC HBAs uplinked to the Brocade FC fabrics]
vSwitch0 carries traffic for both Management and vMotion which needs to be VLAN-tagged so that either
NIC can serve traffic for either VLAN. The Management VLAN will be L3 routable while the vMotion VLAN
will be L2 non-routable.
[Screenshot: Compute | Shared Tier 1 (FC) vSwitch0 - Mgmt vmk0: 10.20.1.51 (VLAN 10) and vMotion vmk1: 10.1.1.1 (VLAN 12) on vmnic2/vmnic3]
The Management server is configured identically except for the VDI Management VLAN which is fully
routed but should be separated from the VDI VLAN used on the Compute host.
Compute hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor Management traffic L3 routed via core switch
o vMotion VLAN: Configured for vMotion traffic L2 switched only, trunked from Core
o iSCSI VLAN: Configured for iSCSI traffic L2 switched only via ToR switch
o VDI VLAN: Configured for VDI session traffic L3 routed via core switch
Management hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor Management traffic L3 routed via core switch
o vMotion VLAN: Configured for vMotion traffic L2 switched only, trunked from Core
o iSCSI VLAN: Configured for iSCSI traffic L2 switched only via ToR switch
o VDI Management VLAN: Configured for VDI infrastructure traffic L3 routed via core switch
An optional iDRAC VLAN can be configured for all hardware management traffic L3 routed via core switch
Following best practices, iSCSI and LAN traffic will be physically separated into discrete fabrics. Each
Shared Tier 1 Compute and Management blade host will have a 10Gb dual port LOM in the A fabric and a
1Gb quad port NIC in the B fabric. 10Gb iSCSI traffic will flow through A fabric using 2 x IOA blade
interconnects. 1Gb LAN traffic will flow through the B fabric using 2 x M6348 blade interconnects. The C
fabric will be left open for future expansion. Connections from 10Gb and 1Gb traffic vSwitches should pass
through the blade mezzanines and interconnects per the diagram below. Configure the LAN traffic from
the server to the ToR switch as a LAG if possible.
5.5.4.1 vSphere
vSwitch0 carries traffic for both Management and vMotion which needs to be VLAN-tagged so that either
NIC can serve traffic for either VLAN. The Management VLAN will be L3 routable while the vMotion VLAN
will be L2 non-routable.
The Management server is configured identically except for the VDI Management VLAN which is fully
routed but should be separated from the VDI VLAN used on the Compute host.
[Screenshot: Mgmt | Shared Tier 1 (blade) vSwitch configuration - Mgmt vmk0: 10.20.1.51 (VLAN 10) and vMotion vmk1: 10.1.1.1 (VLAN 12) on vmnic2/vmnic3; iSCSI1 vmk3: 10.1.1.11 (VLAN 11); VDI Mgmt VLAN port group (VLAN 6) on vmnic4/vmnic5 hosting the SQL, vCenter and File VMs]
The network configuration in this model is identical between the Compute and Management hosts. The following outlines the VLAN requirements for the Compute and Management hosts in this solution model:
FC and LAN traffic are physically separated into discrete switching fabrics. Each Shared Tier 1 Compute
and Management blade will have a 10Gb dual port LOM in the A fabric and an 8Gb dual port HBA in the B
fabric. All LAN and management traffic will flow through the A fabric using 2 x IOA blade interconnects
partitioned to the connecting blades. 8Gb FC traffic will flow through the B fabric using 2 x M5424 blade
interconnects. The C fabric will be left open for future expansion. Connections from the vSwitches and
storage fabrics should pass through the blade mezzanines and interconnects per the diagram below.
Configure the LAN traffic from the server to the ToR switch as a LAG.
5.5.5.1 vSphere
5.6 Solution high availability
High availability (HA) is offered to protect each layer of the solution architecture, individually if desired. Following the N+1 model, additional ToR switches for LAN, iSCSI or FC are added to the network layer and stacked to provide redundancy as required; additional compute and management hosts are added to their respective layers; vSphere clustering is introduced in the management layer; SQL is mirrored or clustered; an F5 device can be leveraged for load balancing; and a NAS device can be used to host file shares. Storage protocol switch stacks and NAS selection will vary based on the chosen solution architecture. The HA options provide redundancy for all critical components in the stack while improving the performance and efficiency of the solution as a whole.
- An additional switch is added at the network tier, configured with the original as a stack, with each host's network connections spread equally across both.
- At the compute tier an additional ESXi host is added to provide N+1 protection via vSphere. In a rack-based solution with local Tier 1 storage there will be no vSphere HA cluster in the compute tier, as the VMs that run there run on local disks.
- A number of enhancements occur at the Management tier, the first of which is the addition of another host. The Management hosts will then be configured in an HA cluster. All applicable Horizon View server roles can then be duplicated on the new host, with connections to each load balanced via the addition of an F5 load balancer. SQL will also receive greater protection through the addition and configuration of a SQL mirror with a witness.
Because only the Management hosts have access to shared storage, in this model, only these hosts need
to leverage the full benefits of hypervisor HA. The Management hosts can be configured in an HA cluster
with or without the HA bundle. An extra server in the Management layer will provide protection should a
host fail.
vSphere HA admission control can be configured in one of three ways to protect the cluster. This will vary largely by customer preference, but the most manageable and predictable options are percentage reservations or a specified hot standby. Reserving by percentage will reduce the overall per-host density capabilities but will make some use of all hardware in the cluster; additions and subtractions of hosts will require the cluster to be manually rebalanced. Specifying a failover host, on the other hand, will ensure maximum per-host density numbers but will result in hardware sitting idle.
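Both admission-control approaches can be applied programmatically. A minimal pyvmomi sketch of the percentage-based policy follows; the connection details, cluster inventory path and 25% figures are hypothetical, and the dedicated-failover-host variant would use vim.cluster.FailoverHostAdmissionControlPolicy instead:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()        # lab use only
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    cluster = si.content.searchIndex.FindByInventoryPath(
        "/Datacenter/host/Compute-Cluster")       # hypothetical inventory path

    # Reserve 25% of cluster CPU and memory as HA failover capacity.
    policy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
        cpuFailoverResourcesPercent=25, memoryFailoverResourcesPercent=25)
    spec = vim.cluster.ConfigSpecEx(dasConfig=vim.cluster.DasConfigInfo(
        enabled=True, admissionControlEnabled=True,
        admissionControlPolicy=policy))
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    Disconnect(si)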
5.6.2 vSphere HA (Shared Tier 1)
Both compute and management hosts are identically configured within their respective tiers and leverage shared storage, so they can make full use of vSphere HA. The Compute hosts can be configured in an HA cluster following the boundaries of vCenter with respect to limits imposed by VMware (3000 VMs per vCenter). This will result in multiple HA clusters managed by multiple vCenter servers.
[Diagram: vCenter servers each managing multiple HA clusters, scaling to 10,000 VMs]
A single HA cluster will be sufficient to support the Management layer up to 10K users. An additional host can be used as a hot standby or to spread the load across all hosts in the cluster.
5.6.4 Management server high availability
The applicable core Horizon View roles will be load balanced via DNS by default. In environments requiring HA, F5 can be introduced to manage load-balancing efforts. Horizon View, VCS and vCenter configurations (optionally vCenter Update Manager) are stored in SQL and will be protected via the SQL mirror.
If the customer desires, some role VMs can optionally be further protected in the form of a cold standby VM residing on an opposing management host. A vSphere scheduled task can be used, for example, to clone the VM and keep the standby current. Note: in the HA option there is no file server VM; its duties have been taken over by the introduction of a NAS head.
The following will protect each of the critical infrastructure components in the solution:
For further protection in an HA configuration, deploy multiple replicated View Connection Server instances in a group to support load balancing and HA. Replicated instances must exist within a LAN-connected environment; per VMware best practice, it is not recommended to create a group across a WAN or similar connection.
Unlike the FS8600, the FS7600 and NX3300 do not support 802.1q (VLAN tagging), so configure the connecting switch ports with native VLANs for both the iSCSI and LAN/VDI traffic ports. Best practice dictates that all ports be connected on both controller nodes. The back-end ports are used for iSCSI traffic to the storage array as well as internal NAS functionality (cache mirroring and cluster heartbeat). Front-end ports can be configured using Adaptive Load Balancing or a LAG (LACP).
The Dell Wyse Solutions Engineering recommendation is to configure the original file server VM to use RDMs to access the storage LUNs; migration to the NAS is then simplified by changing the presentation of these LUNs from the file server VM to the NAS.
5.6.7 SQL Server high availability
HA for SQL will be provided via a 3-server synchronous mirror
configuration that includes a witness (High safety with automatic
failover). This configuration will protect all critical data stored within the
database from physical server as well as virtual server problems. DNS will
be used to control access to the active SQL server, please refer to
section 5.7.1 for more details. Place the principal VM that will host the
primary copy of the data on the first Management host. Place the mirror
and witness VMs on the second or later Management hosts. Mirror all
critical databases to provide HA protection.
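As an operational check, the state of each mirror can be queried from the principal via the sys.database_mirroring catalog view. The sketch below reuses the hypothetical CNAME connection from section 5.2 and assumes pyodbc with a SQL Server ODBC driver:

    import pyodbc  # assumes pyodbc and a SQL Server ODBC driver are available

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        r"SERVER=VDISQL\VDIINSTANCE;Trusted_Connection=yes;")
    # Healthy high-safety mirrors report SYNCHRONIZED on the principal.
    rows = conn.execute(
        "SELECT DB_NAME(database_id), mirroring_role_desc, mirroring_state_desc "
        "FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL").fetchall()
    for db, role, state in rows:
        print(db, role, state)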
Dell recommends the F5 for load balancing the Dell Wyse Datacenter for VMware Horizon View solution.
For additional reference, please review the following document. In particular, page 44 has a good
overview and architecture example. http://www.f5.com/pdf/deployment-guides/vmware-view5-iapp-
dg.pdf.
For example, in the base configuration the single VCS server will have its own hostname registered in DNS as an A record. Create a new A record to be used should additional VCSs come online or be retired for whatever reason. This creates machine portability at the DNS layer and removes the importance of actual server hostnames. The name of this new A record is unimportant, but it must be used as the primary name record to gain access to the resource, not the server's host name. In this case, three new A records called WebInterface were created, all pointing to three different servers.
When a client requests the name WebInterface, DNS will direct it to the three hosts in round-robin fashion. The following resolutions were performed from two different clients. Repeat this method of creating an identical but load-balanced namespace for all applicable components of the architecture stack.
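This behavior is easy to observe from any client with a short loop, as in the Python sketch below; WebInterface is the example record created above, and the addresses returned and their rotation depend on the local resolver and DNS server configuration:

    import socket

    # Repeated lookups of a round-robin A record typically return the
    # address list rotated by DNS, spreading connections across servers.
    for i in range(3):
        name, aliases, addresses = socket.gethostbyname_ex("WebInterface")
        print("lookup %d -> %s" % (i + 1, addresses))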
6 Customer-provided solution components
7 Solution performance and testing
monitoring did not impact the servers being tested. This core network represents an existing customer
environment and also includes the following services:
- Active Directory
- DNS
- DHCP
- Anti-Virus
Stratusphere UX calculates the User Experience by monitoring key metrics within the Virtual Desktop
environment, the metrics and their thresholds are shown in the following screen shot:
7.2 Performance analysis methodology
In order to ensure the optimal combination of end user experience (EUE) and cost-per-user, performance
analysis and characterization (PAAC) on Dell Wyse Datacenter solutions is carried out using a carefully
designed, holistic methodology that monitors both hardware resource utilization parameters and EUE
during load-testing. This methodology is based on the three pillars shown below. Login VSI is currently the
load-testing tool used during PAAC of Dell Wyse Datacenter solutions; Login VSI is the de-facto industry
standard for VDI and server-based computing (SBC) environments and is discussed in more detail below.
Resource utilization thresholds:
| Parameter | Pass/Fail Threshold |
|---|---|
| Physical host CPU utilization | 85% |
| Physical host memory utilization | 85% |
| Network throughput | 85% |
| Storage IO latency | 20 ms |
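In practice these criteria reduce to comparing the measured steady state peaks against the thresholds, as in the illustrative Python sketch below (the sample measurements are hypothetical):

    # PAAC pass/fail thresholds from the table above.
    THRESHOLDS = {"cpu_pct": 85, "memory_pct": 85, "network_pct": 85,
                  "latency_ms": 20}

    def passes(measured):
        """True only if every measured peak is within its threshold."""
        return all(measured[k] <= limit for k, limit in THRESHOLDS.items())

    sample = {"cpu_pct": 96, "memory_pct": 82, "network_pct": 12, "latency_ms": 5}
    print("PASS" if passes(sample) else "FAIL")   # -> FAIL (CPU above 85%)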
Profile - the configuration of the virtual desktop: the number of vCPUs and amount of RAM configured on the desktop (i.e. visible to the user).
Workload - the set of applications used for performance analysis and characterization (PAAC) of Dell Wyse Datacenter solutions (e.g. Microsoft Office applications, PDF reader, Internet Explorer, etc.).
7.2.5 Dell Wyse Datacenter profiles
The table below presents the profiles used during PAAC of the Dell Wyse Datacenter solutions. These
profiles have been carefully selected to provide the optimal level of resources for the most common use
cases.
With respect to the table above, additional information for each of the workloads is given below. It should
be noted that for Login VSI testing, the following login and boot paradigm was used:
For single-server / single-host testing (typically carried out to determine the virtual desktop
capacity of a specific physical server), users were logged in every 30 seconds.
For multi-host / full solution testing, users were logged in over a period of 1-hour, to replicate the
normal login storm in an enterprise environment.
All desktops were fully booted prior to each login attempt.
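For reference, the multi-host cadence implies a much shorter per-session launch interval than the single-host case, as the simple sketch below shows (the user count is illustrative):

    # Spread session launches evenly across the 1-hour login window.
    users, window_s = 2000, 3600                 # illustrative full-solution test
    print("one session every %.1f s" % (window_s / users))   # -> 1.8 s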
For all testing, virtual desktops ran an industry-standard anti-virus solution (McAfee VirusScan Enterprise)
in order to replicate a typical customer environment.
7.2.6.1 Login VSI light workload
Compared to the Login VSI medium workload described below, the light workload runs fewer applications
(mainly Excel and Internet Explorer with some minimal Word activity) and starts/stops the applications less
frequently. This results in lower CPU, memory and disk IO usage.
Once a session has been started, the workload will repeat (loop) every 48 minutes. The loop is divided into 4 segments; each consecutive Login VSI user logon will start a different segment. This ensures that all elements in the workload are equally used throughout the test.
7.2.6.2 Login VSI medium workload
The medium workload opens up to 5 applications simultaneously. The keyboard type rate is 160 ms per character, and approximately 2 minutes of idle time is included to simulate real-world users.
7.2.6.3 Login VSI heavy workload
Compared to the medium workload, the heavy workload differs as follows:
- Begins by opening 4 instances of Internet Explorer. These instances stay open throughout the workload loop.
- Begins by opening 2 instances of Adobe Reader. These instances stay open throughout the workload loop.
- There are more PDF printer actions in the workload.
- Instead of 480p videos, a 720p and a 1080p video are watched.
- The time the workload plays a flash game is increased.
- The idle time is reduced to 2 minutes.
7.2.7 Workloads running on shared graphics profile
Graphics hardware vendors (e.g. NVIDIA) typically market a number of graphics cards targeted at different user segments. Consequently, it is necessary to provide two shared graphics workloads: one for mid-range cards and the other for high-end cards.
Mid-Range Shared Graphics Workload The mid-range shared graphics workload is a modified Login VSI
medium workload with 60 seconds of graphics-intensive activity (Microsoft Fishbowl at
http://ie.microsoft.com/testdrive/performance/fishbowl/) added to each loop.
High-End Shared Graphics Workload The high-end shared graphics workload consists of one desktop
running Heaven Benchmark and n-1 desktops running eDrawings Advanced Animation activity where n =
per-host virtual desktop density being tested at any specific time.
Mid-Range Pass-through Graphics Workload The mid-range pass-through graphics workload consists
of one desktop running Heaven Benchmark and n-1 desktops running eDrawings Advanced Animation
activity where n = per-host virtual desktop density being tested at any specific time.
High-End Pass-through Graphics Workload One desktop running Viewperf benchmark; n-1 desktops
running AutoCAD auto-rotate activity where n = per host virtual desktop density being tested at any
specific time.
At different stages of the testing, the testing team completes some manual user experience testing while the environment is under load. This involves a team member logging into a session during the run and completing tasks similar to the user workload description. While this experience is subjective, it helps provide a better understanding of the end user experience of the desktop sessions, particularly under high load, and ensures that the data gathered is reliable.
Parallel - Sessions are launched from multiple launcher hosts in a round-robin fashion; this mode is recommended by Login Consultants when running tests against multiple host servers. In parallel mode the VSI console is configured to launch a number of sessions over a specified time period (specified in seconds).
Sequential - Sessions are launched from each launcher host in sequence; sessions are only started from a second host once all sessions have been launched on the first host, and this is repeated for each launcher host. Sequential launching is recommended by Login Consultants when testing a single desktop host server. The VSI console is configured to launch a specific number of sessions at a specified interval (specified in seconds).
All test runs were conducted using the Login VSI Parallel Launch mode, with all sessions launched over an hour to represent the typical 9am logon storm. Once the last user session had connected, the sessions were left to run for 15 minutes before being instructed to log out at the end of the current task sequence; this allows every user to complete a minimum of two task sequences within the run before logging out. The single server test runs were configured to launch user sessions every 60 seconds; as with the full bundle test runs, sessions were left to run for 15 minutes after the last user connected before being instructed to log out.
7.4.1 Configuration
Validation for this project was completed for VMware View 6.1 on the following platforms.
VMware View 6.1.1 was used to provision the user desktops. The desktops were non-persistent linked
clone desktops. The desktops were captured from a Windows 8.1 master image.
Platform configurations are shown below and the Login VSI workloads used for load testing on each
environment.
Compute and Management resources were split out with the following configuration and all test runs
were completed with this configuration.
- Node 1: R720 dedicated Management (vCenter Appliance 6.0, SQL Server, VMware View Connection Server 6.1.1, VMware View Composer 6.1.1)
- Node 2: R730 dedicated Compute host
The virtual machines were non-persistent linked clone desktops, each running Windows 8.1 and configured in line with the Login VSI 4.X virtual machine configuration. Office 2010 was used, with each virtual machine sized at 32 GB. The user workload configuration of the load-generation virtual machines is shown in the table below.
As a result of the testing, the following density numbers can be applied to the individual solutions. In all cases the CPU percentage used was the limiting factor; memory usage, IOPS and network usage were not strained.
The following table summarizes the user workload resources and densities as tested:
Windows 7 and 8 VMware Horizon View best practices for optimizing desktops were followed. Details for
these are located here: http://www.vmware.com/resources/techresources/10157
Windows 8.1 desktops were configured with some optimizations to enable the Login VSI workload to run and to prevent long delays in the login process. Previous experience with Windows 8.1 has shown that login delays are somewhat longer than those experienced with Windows 7. These were alleviated by performing the following customizations:
- Bypass the Windows Metro screen to go straight to the Windows desktop. This is performed by a scheduled task provided by Login Consultants at logon time.
- Disable the "Hi, while we're getting things ready" first-time login animation. In randomly assigned desktop groups each login is seen as a first-time login; this registry setting prevents the animation and therefore the overhead associated with it:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"EnableFirstLogonAnimation"=dword:00000000
- McAfee antivirus is configured to treat the Login VSI process VSI32.exe as a low-risk process and to not scan it. Long delays of up to 1 minute during login were detected while VSI32.exe was being scanned.
- Before finalizing the golden template image, perform a number of logins using domain accounts. This was observed to significantly speed up the logon process for VMs deployed from the golden image. It is assumed that Windows 8 has a learning process when logging on to a domain for the first time.
This chart includes the additional 21% of CPU available from the Turbo Boost feature. Without it, a total of 72,800 MHz is available for desktops (2 sockets x 14 cores x 2,600 MHz); with Turbo Boost the total available is 88,088 MHz (72,800 MHz x 1.21).
The CPU reaches a steady state average of 96% during the test cycle when approximately 230 users are logged on, with a maximum of 97%.
[Chart: CPU Utilization (%) vs. time, 12:40-15:00, showing CPU usage with the 21% Turbo performance increase indicated]
With regard to host memory consumption, out of a total of 384 GB available memory there were no
constraints on the host even though consumed memory was quite high. The compute host reached a max
memory consumption of 375 GB with active memory usage reaching a max of 132 GB. There was some
memory ballooning followed by memory swapping as the number of logged on desktops increased
towards the latter stages of testing.
[Chart: Consumed Memory (GB) vs. time, 12:40-15:00]
[Chart: Active Memory (GB) vs. time, 12:40-15:00]
Network bandwidth is not an issue on this solution with a steady state peak of approximately 29,000 Kbps.
[Chart: Network Kbps vs. time, 12:40-15:00]
The Login VSI Max user experience score for this test indicates that VSI Max was reached after approximately 190 users were logged on. There was little observed deterioration of user experience during testing, as mouse and window response both remained good.
This chart includes the additional 21% of CPU available from the Turbo Boost feature. Without it, a total of 72,800 MHz is available for desktops; with Turbo Boost the total available is 88,088 MHz.
The CPU reaches a steady state average of 94% during the test cycle when approximately 170 users are logged on, with a maximum of 97%.
[Chart: CPU Utilization (%) vs. time, 12:30-14:20, showing CPU usage against the 85% CPU threshold]
With regard to host memory consumption, with 384 GB of physical memory, the compute host reached a
max memory consumption of 316 GB with active memory usage reaching a max of 133 GB. There was
some memory ballooning towards the end of the test run but no memory swapping took place.
[Chart: Consumed Memory (GB) vs. time, 12:30-14:20]
[Chart: Active Memory (GB) vs. time, 12:30-14:20]
Network bandwidth is not an issue on this solution with a steady state peak of approximately 29,000 Kbps.
[Chart: Network Kbps vs. time, 12:30-14:20]
The Login VSI Max user experience score for this test indicates that VSI Max was reached but did not go much beyond the threshold until near the end of the test cycle. There was little observed deterioration of user experience during testing, as video playback and mouse response both remained good.
7.4.2.3 Professional user workload (130 users)
For this testing run the R730 compute host was populated with 130 non-persistent, linked clone virtual
machines provisioned by VMware View 6.1.1.
This chart includes the additional 21% of CPU available from the Turbo Boost feature. Without it, a total of 72,800 MHz is available for desktops; with Turbo Boost the total available is 88,088 MHz.
The CPU reaches a steady state average of 94% during the test cycle when approximately 130 users are logged on, with a maximum of 97%.
[Chart: CPU Utilization (%) vs. time, 16:50-18:30, showing CPU usage against the 85% CPU threshold with the 21% Turbo performance increase indicated]
With regard to host memory consumption, out of a total of 384 GB of available memory there were no constraints. The Compute host reached a maximum memory consumption of 325 GB, with active memory usage reaching a maximum of 83 GB. There was no memory ballooning or swapping.
[Chart: Consumed Memory (GB) vs. time, 16:50-18:30]
[Chart: Active Memory (GB) vs. time, 16:50-18:30]
Network bandwidth is not an issue on this solution with a steady state peak of approximately 26,000 Kbps.
[Chart: Network Kbps vs. time, 16:50-18:30]
The Login VSI Max user experience score for this test indicates that VSI Max was reached close to the end of the test cycle, so there was little deterioration of user experience during testing. This is also borne out by the fact that video playback and mouse and window response were good even at the end of the test cycle.
Notes:
- As indicated above, the CPU graphs do not take into account the extra 21% of CPU resources available through the 2697 v3's Turbo feature.
- Subjective user experience showed that mouse movement and window response times when clicking within a running session during steady state were good. Video playback was good on the Professional and Enhanced workloads even with all desktops logged on.
- User login times were consistent, around the 21-24 second mark for the Professional workload, 30-33 seconds for the Enhanced workload and 28-30 seconds on the Standard workload. A few sessions took some extra time to log in on all workloads, especially towards the end of the test cycles.
- 384GB of memory saw some ballooning and swapping while running the Standard workload testing, some ballooning only on the Enhanced workload, and none on the Professional. The greater number of desktops on the host causes more ballooning and swapping.
7.5.1 Configuration
Validation for this project was completed for VMware Horizon View 6.1 on the following platform:
VMware Horizon View was used to provide the persistent, full-clone user desktops. The desktops were stored locally and were created from a Windows 7 master image. The View Storage Accelerator was disabled for the host and the desktop virtual machines.
Access to the virtual desktops for test purposes was provided through virtual machines loaded with Windows 7 and View Client 3.3.0.
Platform configurations are shown below and the Login VSI workloads used for load testing on each
environment.
10 GbE networking was used for the tests. Four NVIDIA GRID K2 GPU cards were used for all benchmarks.
Compute and Management resources were split out with the following configuration and all test runs
were completed with this configuration.
The virtual machines were full clone desktops each configured with Windows 7. SPECwpc 1.2 was used to
generate a graphics workload based upon Dassault Systemes Solidworks. User Workload configuration of
the load generation virtual machines is shown in the table below.
| User Workload | vCPUs | Memory | OS Bit Level | HDD Size | vGPU Profile | Graphics Memory |
|---|---|---|---|---|---|---|
| Custom (SPECwpc sw-03) | 3 | 16 GB | x64 | 120 GB | K280Q | 4 GB |
| Custom (SPECwpc sw-03) | 3 | 16 GB | x64 | 120 GB | K260Q | 2 GB |
Each test run adhered to PAAC best practices with a 30 second session logon interval followed by 1 hour
of steady state after which sessions would begin logging off.
The following table summarizes the test results for the workload once it achieved steady state.
| Hypervisor | vGPU Profile | Workload | VMs per Host | Avg. CPU % | Avg. Active Memory | Avg. GPU % | Avg. GPU Memory % | Avg. Net MB/s/User |
|---|---|---|---|---|---|---|---|---|
| ESXi 6.0 | K280Q | Custom | 8 | 56% | 34 GB | 30.03% | 10.56% | 2.7 MB/sec |
| ESXi 6.0 | K260Q | Custom | 16 | 93% | 75 GB | 50.40% | 17.12% | 1.4 MB/sec |
CPU Utilization - CPU % for ESXi hosts was adjusted to account for the fact that on Intel E5-2670v3 series processors the ESXi host CPU metrics will exceed the rated 100% for the host if Turbo Boost is enabled (the default). The adjusted CPU % usage is based on 100% usage and is not reflected in the charts. The figure shown in the table is the Compute host steady state peak CPU usage.
Memory Utilization - the figure shown in the table above is the average active memory per Compute host over the recorded test period.
Network Utilization - the figure shown in the table is the average MB/sec per user over the recorded test period.
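One way to express that adjustment, sketched below, is to renormalize the raw counter (which can exceed 100% when Turbo Boost is enabled) back onto a 100% scale; the 21% uplift figure and the raw sample value are assumptions for illustration, not measured data:

    # Renormalize an ESXi host CPU % reading to a 100% scale under Turbo Boost.
    TURBO_UPLIFT = 0.21                 # assumed turbo headroom on these CPUs

    def adjusted_cpu_pct(raw_pct):
        """Map a raw counter that can reach 121% back onto 0-100%."""
        return raw_pct / (1 + TURBO_UPLIFT)

    print(round(adjusted_cpu_pct(113), 1))   # hypothetical raw reading -> 93.4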
This chart does not include the additional CPU available from the Turbo boost feature.
The CPU reached a steady state peak of 56.94% during the test cycle when 8 users are logged on.
[Chart: PowerEdge C4130 CPU Utilization (%) vs. time, 14:25-15:25]
With regard to memory consumption for the host, out of a total of 512 GB of available memory there were no constraints on the host. The compute host reached a maximum memory consumption of 137 GB, with active memory usage reaching a maximum of 39 GB. There was no memory ballooning or swapping.
[Chart: Consumed Memory (GB) vs. time, 14:25-15:25]
[Chart: Active Memory (GB) vs. time, 14:25-15:25]
Network bandwidth is not an issue on this solution with a steady state peak of approximately 24,000 Kbps.
[Chart: Network KBps vs. time, 14:25-15:25]
GPU utilization was also not an issue with a steady state peak of approximately 35%.
This chart does not include the additional CPU available from the Turbo boost feature.
The CPU reached a steady state peak of 93% during the test cycle when 16 users are logged on.
[Chart: PowerEdge C4130 CPU Utilization (%) vs. time, 14:00-14:50]
With regard to memory consumption for the host, out of a total of 512 GB of available memory there were no constraints on the host. The compute host reached a maximum memory consumption of 269 GB, with active memory usage reaching a maximum of 78 GB. There was no memory ballooning or swapping. There is a large amount of active memory at the beginning of the test because the VMs were rebooted between tests and not enough time was allowed for the active memory to be released.
[Chart: Consumed Memory (GB) vs. time, 14:00-15:10]
[Chart: Active Memory (GB) vs. time, 14:00-15:10]
Network bandwidth is not an issue on this solution with a steady state peak of approximately 52,000 Kbps.
[Chart: Network KBps vs. time, 14:00-15:25]
GPU utilization was also not an issue with a steady state peak of approximately 52%.
[Chart: GPU1 and GPU2 Utilization (%) vs. time]
7.6.1 Overview
The objective of this testing was to demonstrate how 2000 standard (now called enhanced) workload users would perform through various states of the environment. A single PS6210XS was leveraged for this test. The test infrastructure used the following:
- VMware Horizon View 5.2 (latest available at the time of this test)
- VMware vSphere 5.1 (latest available at the time of this test to support View 5.2)
- Dell PowerEdge R730 (4) and M620 (16) servers, not including additional PowerEdge servers to support VDI load-generation services
- Dell Force10 and PowerConnect switches
- Dell EqualLogic storage arrays (PS6210XS and PS6510E)
During this 2,000 VDI desktop test, the PS6210XS generated the following results, which included satisfactory performance across the entire VDI infrastructure:
| Boot Storm IOPS | Login Storm IOPS | Steady State IOPS | Avg. Latency (ms) |
|---|---|---|---|
| 17,514 | 17,144 | 16,773 | 5 |
The entire infrastructure and test configuration was installed in a single Dell PowerEdge M1000e blade
chassis complete with 16 PowerEdge M620 blade servers and four additional PowerEdge R730 rack
servers. The ESXi clusters used include:
- Infrastructure cluster: PowerEdge M620 blade server hosting virtual machines for Active Directory services, VMware vCenter 5.1 server, Horizon View 5.2 servers (primary and secondary), Horizon View Composer server, a Microsoft Windows Server 2008 R2 based file server and SQL Server 2008 R2.
- Horizon View client clusters: four PowerEdge R730 rack servers and 15 PowerEdge M620 blade servers hosting virtual desktops.
The following considerations were made for designing the network of the VDI solution presented in this
reference architecture:
- Two PowerConnect M8024-K blade switches in Fabric A for connectivity to the dedicated iSCSI SAN
- Two PowerConnect M6348 blade switches stacked in Fabric B for connectivity to the Management LAN, VDI client LAN and a vMotion LAN
- Each PowerEdge M620 blade server was configured with one Broadcom 57810S dual port 10 GbE NIC and one Broadcom 5719 quad port 1 GbE NIC; the Broadcom 57810S card was assigned as the Fabric A LOM and the Broadcom 5719 1Gb NIC card was assigned to Fabric B on the blade chassis
- Fabric A was a 10 GbE network dedicated to iSCSI traffic, while Fabric B carried the VDI traffic for all 2,000 VMs
- The PowerConnect switches in Fabric B were interconnected using the stacking modules to provide high availability and redundancy of the VDI fabric
- Fabric C was unused
- Two Force10 S4810 switches were used for external SAN access; these switches were stacked together for failure resiliency and ease of management
The medium workload from Login VSI was used to simulate desktop workloads for each of the 2,000 desktops. Login VSI 3.7 was used.
The above screenshot represents the boot storm for 2,000 VMware Linked Clone VMs with a read/write
ratio of 65% read and 35% write. The Replica volumes contributed to the majority of the I/O. Latency was
extremely low at a weighted average of 2.78 ms.
Due to the large number of VMs being powered on, each Replica volume generated its individual maximum IOPS at different times during the boot storm, depending on when the VMs on a particular Replica volume were powered on. The next figure shows two Replica volumes generating the majority of the IOPS when the boot storm itself was at its peak. I/O on the Replica volumes was virtually 100% read operations.
Storage network utilization was well within the available bandwidth. The peak network utilization during the boot storm reached approximately 6% of the total storage network bandwidth and then gradually declined once all the VMs were booted. There were also no retransmissions on the iSCSI SAN.
The above results show that the EqualLogic PS6210XS hybrid array can handle a heavy I/O load such as a boot storm in a VDI environment with no issues.
Login storms generate significantly more write IOPS than a boot storm due to multiple factors including:
Once a virtual desktop has reached a steady state after user login, the Windows 7 OS has cached applications in memory and does not need to access storage each time an application is launched. This leads to lower IOPS during the steady state. The figure below shows the IOPS and latency observed during the login storm.
The EqualLogic PS6210XS easily handled logging in 2,000 sessions in a short time, delivering the required 17,144 IOPS with 4.2 ms average latency at the peak of the login storm. Table 7 shows the overall disk usage in the array during the login storm.
Most of the login storm I/O operations are handled by the SSDs, allowing the array to provide the best possible performance. Each SSD handled approximately 4,950 IOPS at the peak of the login storm; average latency remained very low throughout the login storm, and the array clearly demonstrated its ability to handle the workload.
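Expressed per user (a derived figure, not one reported by the validation itself), the peak login-storm load works out to under ten IOPS per session:

# Derived per-session figure: peak login-storm IOPS spread across all
# concurrent sessions. Both inputs are taken from the text above.
peak_iops = 17_144
sessions = 2_000

iops_per_session = peak_iops / sessions
print(f"~{iops_per_session:.1f} IOPS per session at the peak of the login storm")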
All changes that occur on the virtual desktop (including temporary OS writes such as memory paging) are written to disk, so the I/O pattern is mostly writes. Once the desktops are booted and in a steady state, read I/O becomes minimal because Horizon View Storage Accelerator enables content-based read caching (CBRC) on the ESXi hosts.
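The sketch below approximates why array-side I/O skews heavily toward writes once a host-side read cache is in place: cache hits satisfy common reads on the host, so mostly writes (plus read misses) reach the array. The guest I/O mix and cache hit rate are assumed values for illustration, not measurements from this validation.

# Approximate effect of host-side read caching (CBRC) on array-side I/O.
# The guest I/O mix and the cache hit rate are assumptions for illustration;
# the validation does not report a CBRC hit rate.
guest_iops = 10_000        # total IOPS issued by the desktops (assumed)
guest_read_fraction = 0.40 # guest-side read share before caching (assumed)
cbrc_hit_rate = 0.90       # fraction of reads served from host cache (assumed)

reads = guest_iops * guest_read_fraction
writes = guest_iops - reads
array_reads = reads * (1 - cbrc_hit_rate)  # only cache misses reach the array
array_total = array_reads + writes

print(f"Reads reaching the array: {array_reads:.0f} of {reads:.0f} issued")
print(f"Array-side I/O: {array_total:.0f} IOPS, "
      f"{writes / array_total:.0%} writes")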
During steady state there is minimal activity on the Replica volume, and most of the activity is seen on the VDI-Images volumes that host the virtual desktops.
The figure below shows the performance of the array during the steady state test.
7.6.10 Server host performance
All compute infrastructure metrics remained within the performance thresholds previously described, as shown below:
7.6.11 Summary
Following are the key observations made over the course of the validation:
- A single EqualLogic PS6210XS array was able to host 2,000 virtual desktops and support a standard user type of I/O activity.
- The VDI I/O was write-intensive, with more than 74% writes and less than 26% reads.
- None of the system resources on the ESXi servers hosting the virtual desktops reached maximum utilization levels at any time.
- During the boot storm simulation, nearly 17,500 IOPS with less than 2.8 ms average latency were observed, and all 2,000 desktops were available in Horizon View within 25 minutes of the storm.
- To simulate a login storm, 2,000 users were logged in within a span of 30 minutes. A single EqualLogic PS6210XS array easily sustained this login storm at approximately 17,150 IOPS with 4.2 ms average latency. Most of the I/O was served by the SSDs in the array.
- The user experience for 2,000 desktops was well within acceptable limits: all sessions fell in the upper-right quadrant of the Stratusphere UX scatter plot, and virtually all of them were in the Good category.
For full results and more information, see http://dell.to/1g4Gc9v.
Acknowledgements
Thanks to Darin Schmitz and Damon Zaylskie of the Dell Compellent MSV Solutions team for providing expertise and validation of the Dell Wyse Datacenter Compellent Tier 1 array.
Thanks to Paul Wynne and the Dell Wyse Solutions Ingredients Extended Team for their expertise and
continued support validating VDI architectures and Tier 1 storage.
Thanks to Sujit Somandepalli and Chhandomay Mandal of Storage Engineering and Storage Technical Marketing, respectively, for their storage contributions.
Thanks to John Kelly of the Dell Wyse Solutions Engineering team for his expertise and guidance in the
Dell Wyse Datacenter PAAC process.
About the authors
Peter Fine is the Senior Principal Engineering Architect for VDI-based solutions at Dell. Peter has extensive experience and expertise with the broader Microsoft, Citrix and VMware solution software stacks, as well as with enterprise virtualization, storage, networking and enterprise data center design.
Andrew McDaniel is the Solutions Development Manager for VMware solutions at Dell, managing the development and delivery of enterprise-class desktop virtualization solutions based on Dell data center components and core virtualization platforms.
Nicholas Busick is a Senior Solutions Engineer with Dell Wyse Solutions Engineering building, testing,
validating and optimizing enterprise VDI stacks.
Darpan Patel is a Senior Solutions Engineer with Dell Wyse Solutions Engineering, with extensive experience in validating, building and optimizing enterprise-class VDI solutions on Microsoft (Hyper-V), VMware (View) and Citrix (XenDesktop) platforms. Darpan has a master's degree in Information Systems from Pace University in New York and is VCP5-DCV (VMware Certified Professional 5: Data Center Virtualization) certified.
David Hulama is a Senior Technical Marketing Advisor for VMware Horizon View solutions at Dell. David has a broad background in a variety of technical areas and expertise in enterprise-class virtualization solutions.