SVTECH Internal Training Postsale VMware VSphere 6 v1.3


VMware vSphere 6 Basic Guide

1.0

CHU H KHANH
Engineer

I
Software Defined Datacenter

Virtual Machine

vCenter Server

Virtual Network

Virtual Storage

Manage VM

II
Access Control

Manage Resources

High Availability and Fault Tolerance

Host Scalability

I
Software Defined Datacenter

Virtual Machine

vCenter Server

Virtual Network

Virtual Storage

Manage VM

Virtualization - Concept

Virtualization vs. emulation

Emulation also commonly interprets the instructions of the guest VM (compared to virtualization, where the instruction set architecture of the guest must be the same as the host's).

This creates an interesting advantage for emulation, where the guest VM platform can be entirely different from the host (for example, running an x86 guest on an IBM PowerPC target).

Virtualization - Features

Virtual Infrastructure

Virtual Infrastructure VNPT Cloud 2

Virtual Infrastructure - Benefits

Software-Defined Data Center (SDDC)

vSphere - Product Suite

vSphere - Product Suite License

vSphere Version 6 - What's New

Up to 4X Scale Improvement with vSphere 6

                          vSphere 5.5    vSphere 6    Improvement
Hosts per Cluster         32             64           2x
VMs per Cluster           4,000          8,000        2x
Logical CPUs per Host     320            480          1.5x
RAM per Host              4 TB           12 TB        3x
VMs per Host              512            1,024        2x
Virtual CPUs per VM       64             128          2x
Virtual RAM per VM        1 TB           4 TB         4x
vSphere Version 6 - Comparison

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/
pdf/infographic/top-10-reasons-why-vmware-better-than-microsoft-
for-sddc-infographic.pdf
vSphere Components
1. Check compatibility

https://www.vmware.com/resources/compatibility/search.php

2. Download
https://my.vmware.com/en/web/vmware/info/slug/datacenter_cloud_infrastr
ucture/vmware_vsphere/6_5

vSphere Management User Interface

I-
Software Defined Datacenter

Virtual Machine

vCenter Server

Virtual Network

Virtual Storage

Manage VM

Manage Resources

Virtual Machine Virtual Hardware
VMware ESXi presents the following fairly generic hardware to the VM:
Phoenix BIOS
Intel 440BX motherboard
Intel PCI AHCI controller
IDE CD-ROM drive
BusLogic parallel SCSI, LSI Logic parallel SCSI, or LSI Logic SAS controller
AMD or Intel CPU, depending on the physical hardware
Intel E1000, Intel E1000e, or AMD PCnet NIC
Standard VGA video adapter

VMware selected this generic hardware to provide the broadest level of compatibility across all supported guest OSes.
As a result, it's possible to use commercial off-the-shelf drivers when installing a guest OS into a VM.
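
One quick way to inspect this virtual hardware from the ESXi Shell is with vim-cmd (a sketch for reference only; the VM ID 12 is a placeholder, and the output format varies by ESXi build):

# List registered VMs and note the Vmid of the VM of interest
vim-cmd vmsvc/getallvms
# Dump the VM's configuration, including its virtual devices (controllers, NIC type, video card)
vim-cmd vmsvc/get.config 12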

Virtual Machine Virtual Hardware Version

Default VM Compatibility: When a VM is created, it is created with a specific VM hardware version. Each VM hardware version has a certain level of features available to it based on the version of the vSphere host, and with each new revision of vSphere, new VM hardware versions are introduced and new features are added. This may cause backward compatibility issues when you want to migrate VMs with newer VM hardware versions from a newer environment to an older one. You'll learn more about VM compatibility in Chapter 9; for now, just know that this is the area where you can set the default hardware level used when a VM is created.

Virtual Machine Virtual Hardware Version

Virtual Machine Virtual Hardware Version 11 - Benefits

Virtual Machine - Files

Virtual Machine Memory and CPU

Virtual Machine Virtual Disk

Virtual Machine Virtual Disk Thin vs Thick

Virtual Machine Virtual Disk Thin vs Thick

Virtual Machine Virtual Disk Thin vs Thick

https://blog.purestorage.com/vm-performance-on-flash-part-2-thin-vs-thick-provisioning-does-it-matter/
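
For reference, the provisioning format can be chosen when creating a disk with vmkfstools from the ESXi Shell (a minimal sketch; the datastore path, file names, and sizes are placeholders):

# Create a 10 GB thin-provisioned disk (blocks are allocated and zeroed on first write)
vmkfstools -c 10g -d thin /vmfs/volumes/datastore1/vm01/vm01_data_thin.vmdk
# Create a 10 GB eager-zeroed thick disk (all blocks allocated and zeroed up front)
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/vm01/vm01_data_thick.vmdk
# Inflate an existing thin disk to eager-zeroed thick
vmkfstools --inflatedisk /vmfs/volumes/datastore1/vm01/vm01_data_thin.vmdk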

Virtual Machine Network Adapter

Virtual Machine Miscellaneous Devices

Virtual Machine Console

I-
Software Defined Datacenter

Virtual Machine

vCenter Server

Virtual Network

Virtual Storage

Manage VM

vCenter

vCenter - Architecture

vCenter Service

vCenter - Platform Services Controller

vCenter - vCenter Server and Platform Services Controller

vCenter - ESXi Communication

vpx is the original name of vCenter Server, so:

vpxa is the vCenter agent (ESXi side)
vpxd is the vCenter daemon (vCenter side)
hostd is used for remote management (for example, direct vSphere Client connections) and is responsible for managing most of the operations on the ESXi host. It knows about all the VMs that are registered on that host, the LUNs/VMFS volumes visible to the host, what the VMs are doing, etc. Almost all commands or operations come down from vCenter through it, e.g., powering on a VM, vMotion, VM creation, etc.
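
When a host stops responding to vCenter, these agents can be checked and restarted from the ESXi Shell (a sketch; restarting the agents briefly interrupts management connections but not running VMs):

# Check and restart the host agent (hostd)
/etc/init.d/hostd status
/etc/init.d/hostd restart
# Restart the vCenter agent (vpxa) if the host shows as disconnected in vCenter
/etc/init.d/vpxa restart
# Or restart all management agents at once
services.sh restart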

vCenter Plug-in

vCenter Plug-in

vCenter Server Appliance

vCenter Server Appliance

vCenter Server Appliance

vCenter - Datacenter

vCenter - Folders

vCenter - Tags

I-
Software Defined Datacenter

Virtual Machine

vCenter Server

Virtual Network

Virtual Storage

Manage VM

Virtual Switch

Virtual Switch Port Groups

Virtual Switch Standard and distributed switch

Virtual Switch Standard Feature

Virtual Switch Policy

Virtual Switch Policy - Security

Virtual Switch Policy - Traffic Shaping (Limit)

Virtual Switch Policy Teaming and Failover

Load Balancing: Configures how outgoing traffic is handled across multiple network adapters in a teamed vSwitch.
Network Failover Detection: Specifies how the host detects network failure.
Notify Switches: Tells the physical switches to route network traffic from virtual machines to different physical network adapters.
Failback: Specifies how the failed adapter should operate if it comes online again.
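
These policies can also be viewed and changed per standard switch with esxcli (a sketch; vSwitch0 and the vmnic names are examples):

# Show the current teaming and failover policy
esxcli network vswitch standard policy failover get -v vSwitch0
# Example: route based on originating virtual port ID, with one active and one standby uplink
esxcli network vswitch standard policy failover set -v vSwitch0 --load-balancing portid --active-uplinks vmnic0 --standby-uplinks vmnic1 --failback true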

Virtual Switch Policy Teaming

Virtual Switch Policy Failover

Beacon probing is designed to work with redundant physical switch networking to detect downstream link failures beyond the ESXi host's own uplinks.
VMware recommends having 3 or more physical NICs in the team.
VMware does not recommend connecting multiple physical NICs of the team to the same physical switch, because this can cause beacon probing to fail to detect issues beyond that switch.
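
Switching failover detection from link status to beacon probing can be done with the same esxcli namespace (the vSwitch name is an example):

esxcli network vswitch standard policy failover set -v vSwitch0 --failure-detection beacon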

Virtual Switch Distributed switch

Virtual Switch Distributed switch Port Group

Virtual Switch Distributed switch - Uplink

Virtual Switch Distributed Switch - Features

Distributed Switch Features - Ingress/Egress Traffic
The terms Ingress source and Egress source are
with respect to the VDS. For example, when you
want to monitor the traffic that is going out of a
virtual machine towards the VDS, it is called
Ingress Source traffic. The traffic seeks ingress to
the VDS and hence the source is called Ingress.
If you want to monitor traffic that is received by a
virtual machine, then configure the port mirroring
session with the traffic source as Egress Source
as shown in the top right corner diagram of the
figure below.

Virtual Switch Distributed Switch Features - PVLAN

https://blog.mwpreston.net/2013/10/29/8-weeks-of-vcap-
private-vlans/

Distributed Switch Features Load Based Teaming

One of the more exciting and eagerly anticipated announcements in vSphere 5.1 was the set of distributed switch enhancements. Among the new features is the ability to use LACP (Link Aggregation Control Protocol) on a distributed switch. Formerly, the switch was limited to a static EtherChannel, or to the use of a third-party switch such as the Cisco Nexus 1000V, if LACP was desired.

http://runnsx.com/index.php/2015/10/13/lacp-on-vsphere-6-0-and-hp/
Distributed Switch Features Netflow
NetFlow
NetFlow is a networking protocol that collects IP traffic information as records and sends them to a collector, such as CA NetQoS, for traffic flow analysis.

NetFlow on Distributed Switches can be enabled at the port group level, at an individual port level, or at the uplink level.

http://runnsx.com/index.php/2015/10/13/lacp-on-vsphere-6-0-and-hp/
Distributed Switch Features Port Mirroring
Port Mirroring
Port mirroring is the capability on a network switch to send a copy of the network packets seen on one switch port to a network-monitoring device connected to another switch port.

http://runnsx.com/index.php/2015/10/13/lacp

https://www.netfort.com/category/blog/page/2/
I-
Software Defined Datacenter

Virtual Machine

vCenter Server

Virtual Network

Virtual Storage

Manage VM

Datastore

Datastore - VMFS

Datastore - Feature Support

Datastore - RDM
Virtual Compatibility Mode
Virtual mode for an RDM specifies full virtualization of the mapped
device. It appears to the guest operating system exactly the same as a
virtual disk file in a VMFS volume. The real hardware characteristics are
hidden.
Virtual mode enables you to use VMFS features such as advanced file locking and snapshots. Virtual mode is also more portable across storage hardware than physical mode, presenting the same behavior as a virtual disk file. When you clone the disk, make a template out of it, or migrate it (if the migration involves copying the disk), the contents of the LUN are copied into a virtual disk (.vmdk) file.

Physical Compatibility Mode


Physical mode for the RDM specifies minimal SCSI virtualization of the
mapped device, allowing the greatest flexibility for SAN management
software.
In physical mode, the VMkernel passes all SCSI commands to the
device, with one exception: the REPORT LUNs command is virtualized,
so that the VMkernel can isolate the LUN for the owning virtual
machine. Otherwise, all physical characteristics of the underlying
hardware are exposed. Physical mode is useful to run SAN
management agents or other SCSI target based software in the virtual
machine. Physical mode also allows virtual-to-physical clustering for
cost-effective high availability. LUNs attached to powered-on virtual
machines and configured for physical compatibility cannot be migrated
if the migration involves copying the disk. Such LUNs cannot be cloned
or cloned to a template either.
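
For reference, both RDM types are created with vmkfstools; the device identifier and datastore paths below are placeholders:

# Virtual compatibility mode RDM (-r): snapshots and other VMFS features are available
vmkfstools -r /vmfs/devices/disks/naa.60060480000290301014533030303130 /vmfs/volumes/datastore1/vm01/vm01_rdm.vmdk
# Physical compatibility mode RDM (-z): SCSI commands are passed through (except REPORT LUNS)
vmkfstools -z /vmfs/devices/disks/naa.60060480000290301014533030303130 /vmfs/volumes/datastore1/vm01/vm01_rdmp.vmdk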
Datastore - RDM - Requirements and limitations
Block-based storage: both iSCSI and FC are supported. No NFS. No local storage. You need a VMFS datastore to store the mapping file.
No partition mapping: RDM requires the mapped device to be a whole LUN.
You cannot create a snapshot of a Physical Compatibility Mode RDM.
You cannot enable Fault Tolerance for a VM using a Physical Compatibility Mode RDM.
VCB cannot back up an RDM in Physical Compatibility Mode.
VMware Data Recovery cannot back up an RDM in Physical Compatibility Mode.
When using VMware vCenter Site Recovery Manager, make sure you include both the raw LUN and the VMFS datastore holding the mapping file in SAN replication schedules. Placing the mapping file alongside the VMX file is highly recommended.
VMware vCenter Server does not support LUN number changes within the target. When a LUN number changes, the vml identifier does not change accordingly. As a workaround, remove and rebuild the RDM to generate a new vml identifier for the LUN.
The maximum size of an RDM is 2 TB minus 512 bytes.
Use the Microsoft iSCSI initiator and MPIO inside the VM when you need more than 2 TB on a disk.
You cannot use an RDM (in conjunction with NPIV) to give a VM access to a VMFS volume. This seems to make no sense at all, unless you want to run your VCB proxy as a virtual machine to access Fibre Channel LUNs. For iSCSI, you can still use the software iSCSI initiator in your guest OS. Mental note: obviously, it isn't supported at all.
https://www.virtuallifestyle.nl/2010/01/recommended-detailed-material-on-rdms/
Datastore - Physical Storage Considerations

Datastore - iSCSI components

Datastore - iSCSI Initiator

Dependent hardware iSCSI initiator:
- A third-party adapter that depends on VMware networking and on the VMware iSCSI configuration and management interfaces.
- The specialized iSCSI HBA provides the TCP/IP Offload Engine (TOE); however, discovery of the LUN is done by the VMkernel. Dependent hardware iSCSI takes some (not all) of the work off the VMkernel and the CPU of the host.
- vSwitch VMkernel ports are required for this type of card.

Independent hardware iSCSI initiator:
- A third-party adapter that offloads the iSCSI and network processing and management from your host.
- The specialized NIC provides the TOE as well as discovery of the LUN. This completely removes the responsibility from the VMkernel and from the processors on the host.
- vSwitch VMkernel ports are not required for this type of card.
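
A minimal software iSCSI setup from esxcli looks roughly like this (a sketch; the adapter name vmhba33 and the target address are examples, so confirm the adapter name from the list output):

# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true
# Confirm the software iSCSI adapter name
esxcli iscsi adapter list
# Add a dynamic discovery (SendTargets) address and rescan the adapter
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.20:3260
esxcli storage core adapter rescan --adapter=vmhba33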

Datastore - iSCSI software initiator

Datastore - NFS

Datastore - Multipath (NFS & iSCSI)

Datastore - FC

Datastore - FC - Multipath

Datastore - FC - Multipath
- Native Multipathing Plug-in (NMP): The NMP module handles overall MPIO (multipath I/O) behavior and array identification. The NMP leverages the SATP and PSP modules and isn't generally configured in any way. Generally, the VMware NMP supports all storage arrays listed on the VMware storage HCL and provides a default path selection algorithm based on the array type.
- Storage Array Type Plug-in (SATP): SATP modules handle path failover for a given storage array and determine the failover type for a LUN.
- Path Selection Plug-in (PSP): Handles the actual path used for every given I/O.
  Most Recently Used (VMW_PSP_MRU)
  Fixed (VMW_PSP_FIXED)
  Round Robin (VMW_PSP_RR)
- Multipathing Plug-in (MPP): EMC PowerPath/VE is a third-party multipathing plug-in that supports a broad set of EMC and non-EMC array types. PowerPath/VE enhances load balancing, performance, and availability.

Datastore - FC Multipath - esxcli
Lists the LUN multipathing information:
esxcli storage nmp device list

naa.60060480000290301014533030303130
Device Display Name: EMC Fibre Channel Disk (naa.60060480000290301014533030303130)
Storage Array Type: VMW_SATP_SYMM
Storage Array Type Device Config: SATP VMW_SATP_SYMM does not support device configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba0:C0:T1:L0;current=vmhba0:C0:T1:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba0:C0:T1:L0
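
The path selection policy of a device can be changed in the same namespace (a sketch; the device ID is taken from the listing above, and the SATP default change applies only to devices claimed afterwards):

# Switch this device to Round Robin path selection
esxcli storage nmp device set --device=naa.60060480000290301014533030303130 --psp=VMW_PSP_RR
# Optionally make Round Robin the default PSP for devices claimed by this SATP
esxcli storage nmp satp set --satp=VMW_SATP_SYMM --default-psp=VMW_PSP_RR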

Datastore - FCoE
Key to the unification of storage and networking traffic in next generation datacenters is a new enhanced
Ethernet standard called Converged Enhanced Ethernet (CEE) that is being adopted by major suppliers.

Since classical Ethernet had no priority-based flow control, unlike Fibre Channel, FCoE required enhancements to the Ethernet standard to support a priority-based flow control mechanism (to reduce frame loss from congestion). The IEEE standards body added these priorities in the Data Center Bridging Task Group.

FCoE heavily depends on CEE. This new form of Ethernet includes enhancements that make it a viable transport for storage traffic and storage fabrics without requiring TCP/IP overhead.

These enhancements include Priority-based Flow Control (PFC), Enhanced Transmission Selection (ETS), and Congestion Notification (CN).

http://www.ciscopress.com/articles/article.asp?p=2030048&seqNum=2
Datastore - FCoE Adapter

Datastore - FCoE CNA Adapter

QLogic 8100
There are two required drivers for this adapter, one for SAN and one for LAN connectivity.
FCoE SAN driver: http://downloads.vmware.com/d/details/esx4_qlogic_qla2xx_fc_dt/
10 GbE LAN driver: http://downloads.vmware.com/d/details/esx_esxi4x_qlogic_qle8152_schultz/

After successful driver installation you should see a QLogic 10 Gigabit Ethernet adapter and a QLogic 10 GbE FCoE storage adapter.

Datastore - FCoE IP or not? Redhat Example
# cat /etc/sysconfig/network-scripts/ifcfg-ethN
DEVICE=ethN
HWADDR=00:1A:2B:3C:4D:5E
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

Datastore - FCoE - Software FCoE

Datastore - FCoE - Multipath

Datastore Create and Delete

Datastore Browse

Datastore - Increase

Virtual SAN

Virtual SAN Witness

Virtual SAN - Object

Virtual SAN Storage Policy

Virtual SAN Storage Policy - Rule

I-
Software Defined Datacenter

Virtual Machine

vCenter Server

Virtual Network

Virtual Storage

Manage VM

VM - Template

VM - Template Create VM to Template

VM - Template Deploy Template to VMs

VM Edit settings

VM Hot Plug Devices

VM Dynamic

VM Options

VM - Snapshot

VM Snapshot - Manage

VM Snapshot Delta File

VM Snapshot Virtual Disk Mode

Dependent Mode:
By default, all virtual disks are dependent and persistent, i.e., changes are written directly to the disk. These disks are included in snapshots.

Independent Mode:
The disks are independent of snapshots.

Independent - Persistent:
The disks are excluded from snapshots and all write operations are permanent.

Independent - Non-Persistent:
All write operations are temporary. Changes are discarded when the virtual machine is reset or powered off. If you only restart the guest OS, the data is still available on the disk; changes are discarded only when the VM is RESET or POWERED OFF.
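
Snapshots can also be taken and removed from the ESXi Shell with vim-cmd (a sketch; the VM ID 12, the names, and the memory/quiesce flags are placeholders, and independent disks are excluded as described above):

# List registered VMs and note the Vmid
vim-cmd vmsvc/getallvms
# Create a snapshot: name, description, include memory (1/0), quiesce (1/0)
vim-cmd vmsvc/snapshot.create 12 "before-patch" "pre-patch state" 1 0
# List the VM's snapshots, then remove them all (consolidates the delta files)
vim-cmd vmsvc/snapshot.get 12
vim-cmd vmsvc/snapshot.removeall 12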

VM Snapshot - Delete

VM Snapshot - Consolidation

VM vApp

VM vApp

II
Access Control

Manage Resources

High Availability and Fault Tolerance

Host Scalability

Security Profile Services, Firewall

Lockdown mode

Access Control User Permission

Access Control Create Role

Create a Role, add Privileges

Assign the Role to a User/Group

II
Access Control

Manage Resources

High Availability and Fault Tolerance

Host Scalability

Memory - Layer

Memory - Reclaim

Memory Swapping Performance

CPU - Virtual SMP
VMware Virtual SMP is a utility that allows a single virtual machine to use two or more processors simultaneously.

CPU - Hyperthreading (HT)

Resource Share, Limit, Reservation

Resource Share, Limit, Reservation - Summary

Resource Pool

Resource Pool

Resource Pool - Share

Resource Pool - Expandable Reservation - No

Resource Pool - Expandable Reservation - Yes

Resources Monitoring Guest OS

Resources Monitoring vCenter Server

Resources Monitoring vCenter Server

Resources Monitoring vCenter Server Log level

Resources Monitoring Alarm

Resources Monitoring Alarm

Resources Monitoring vCenter Server Mail, SNMP

vCenter Operations Manager

vCenter Operations Manager - Suite

II
Access Control

Manage Resources

High Availability and Fault Tolerance

Host Scalability

Protection - Level

vCenter Availability

High Availability

vSphere HA

vSphere HA - Architecture

vSphere HA Architecture - FDM
Slave
A slave host watches the runtime state of the VMs running locally on that host. Significant changes in the runtime state of these VMs are forwarded to the vSphere HA master.
vSphere HA slaves monitor the health of the master. If the master fails, slaves will participate in a new master election.
vSphere HA slave hosts implement vSphere HA features that don't require central coordination by the master. This includes VM health monitoring.
Master
Monitors slave hosts and will restart VMs in the event of a slave host failure.
Monitors the power state of all protected VMs. If a protected VM fails, the vSphere HA master will restart the VM.
Manages the list of hosts that are members of the cluster and manages the process of adding and removing hosts from the cluster.
Manages the list of protected VMs. It updates this list after each user-initiated power-on or power-off operation. These updates are made at the request of vCenter Server, which requests the master to protect or unprotect VMs.
Caches the cluster configuration. The master notifies and informs slave hosts of changes in the cluster configuration.
The vSphere HA master host sends heartbeat messages to the slave hosts so that the slave hosts know the master is alive.
Reports state information to vCenter Server. vCenter Server typically communicates only with the master.
As you can see, the role of the vSphere HA master is quite important. For this reason, if the existing master fails, a new vSphere HA master is automatically elected. The new master will then take over the responsibilities listed here, including communication with vCenter Server.
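
On a host that has joined an HA cluster, the FDM agent and its log can be checked from the ESXi Shell (a sketch; the agent and log exist only after HA has been configured on the cluster):

# Status of the HA (FDM) agent on this host
/etc/init.d/vmware-fdm status
# Master election and VM protection events are logged here
tail /var/log/fdm.log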

vSphere HA - Heartbeat

vSphere HA Architecture Network - Heartbeat

vSphere HA Heartbeat - Redundancy

vSphere HA Architecture Datastore - Heartbeat

vSphere HA Architecture Failed Host

vSphere HA Failures Scenarios - Host

vSphere HA Failures Scenarios - VM

vSphere HA Cluster

vSphere HA Settings

vSphere HA Settings

vSphere HA Settings

vSphere HA Settings

vSphere HA Settings VM Overrides

vSphere FT (Fault Tolerance)

vSphere - FT

vSphere - SMP FT
vSphere 6.0
A completely new technology that allows you to protect VMs with up to four vCPUs.
vSphere SMP-FT uses a new technology called Fast Checkpointing to scale beyond a single vCPU.
Whereas vLockstep would take an input and execute it simultaneously on both the primary and secondary VMs, Fast Checkpointing instead executes on the primary VM only and then sends the result to the secondary VM.

vSphere - SMP FT - Requirements
Cluster level:
Host certificate checking must be enabled. This is the default for vCenter Server 4.1 and later.
The cluster must have at least two ESXi hosts running the same SMP-FT version or build number. The FT version is displayed in the Fault Tolerance section of the ESXi host's Summary tab.
vSphere HA must be enabled on the cluster. vSphere HA must be enabled before you can power on vSphere SMP-FT enabled VMs.
Host level:
SMP-FT is only supported on vSphere 6.
The ESXi hosts must have access to the same datastores and networks.
The ESXi hosts must have a Fault Tolerance logging network connection configured. This vSphere SMP-FT logging network requires 10 Gigabit Ethernet connectivity. At present, only a single 10 Gb NIC can be allocated to handle SMP-FT logging.
The hosts must have CPUs that are vSphere SMP-FT compatible.
Hosts must be licensed for vSphere SMP-FT.
Hardware Virtualization (HV) must be enabled in the ESXi host's BIOS in order to enable CPU support for vSphere SMP-FT.
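
As a sketch of the host-level networking piece, an existing VMkernel interface can be tagged for FT logging traffic with esxcli (vmk2 is a placeholder, and the exact tag name accepted may vary by build):

# Tag a VMkernel interface for Fault Tolerance logging traffic
esxcli network ip interface tag add -i vmk2 -t faultToleranceLogging
# Verify the tags on the interface
esxcli network ip interface tag get -i vmk2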

vSphere - SMP FT - Requirements
VM Level:
Only VMs with up to four vCPUs are supported with vSphere SMP-FT. VMs with more than four vCPUs are not compatible with vSphere SMP-FT.
VMs must be running a supported guest OS.
VM files must be stored on shared storage that is accessible to all applicable ESXi hosts. vSphere SMP-FT supports Fibre Channel, FCoE, iSCSI, and NFS for shared storage.
Physical mode RDMs are not supported, although virtual mode RDMs are. The Eager Zeroed Thick requirement has been removed, which means that the disk format can be Thin Provisioned, Lazy Zeroed Thick, or Eager Zeroed Thick.
The VM must not have any snapshots. You must remove or commit snapshots before you can enable vSphere SMP-FT for a VM. Note that snapshots initiated via vStorage APIs for Data Protection (VADP) are supported.
The VM must not be a linked clone.
The VM cannot have any USB devices, sound devices, serial ports, or parallel ports in its configuration. Remove these items from the VM configuration before attempting to enable vSphere SMP-FT.
The VM cannot use N_Port ID Virtualization (NPIV).
Nested page tables/extended page tables (NPT/EPT) are not supported. vSphere SMP-FT will disable NPT/EPT on VMs for which vSphere SMP-FT is enabled.
The VM cannot use NIC passthrough or the older vlance network drivers. Turn off NIC passthrough and update the networking drivers to vmxnet2, vmxnet3, or E1000.
The VM cannot have CD-ROM or floppy devices backed by a physical or remote device. You'll need to disconnect these devices or configure them to point to an ISO or FLP image on a shared datastore.
II
Access Control

Manage Resources

High Availability and Fault Tolerance

Host Scalability

vSphere - vMotion
A live migration/movement of running Virtual Machines (VMs) between physical hosts with ZERO downtime, offering continuous service availability.
It is transparent to the VM guest OS and applications as well as to the end user.

vSphere - vMotion - Requirements

Host Level:
1. Both hosts must be correctly licensed for vMotion.
2. Both hosts must have access to the VM's shared storage.
3. Both hosts must have access to the VM's network.
VM Level:
1. VMs using raw disks for clustering purposes can't be migrated.
2. VMs connected to virtual devices that are attached to the client computer can't be migrated.
3. VMs connected to virtual devices that are not accessible by the destination host can't be migrated.
Network Level:
1. 10 GbE network connectivity between the physical hosts is recommended so that the transfer can occur faster.
2. Transfer via multiple NICs is supported; configure them all on one vSwitch.
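
The vMotion network requirement can be sanity-checked from the ESXi Shell (a sketch; vmk1 and the destination IP are placeholders for the vMotion VMkernel interface and the peer host's vMotion address):

# List VMkernel interfaces and their IP configuration
esxcli network ip interface list
esxcli network ip interface ipv4 get
# Ping the destination host's vMotion IP through the vMotion VMkernel interface
vmkping -I vmk1 192.168.50.12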

vSphere - vMotion - Process

Phase Zero
Pre-copy memory from the source host to the destination host. Pre-copy is achieved through memory iterative pre-copy, which includes the stages below.
First Phase (Trace Phase/HEAT Phase)
Send the VM's cold pages (least changing memory contents) from source to destination, and trace all of the VM's memory.
Performance impact: a noticeable brief drop in throughput due to trace installation, generally proportional to memory size.
Subsequent Phases
Pass over memory again, sending pages modified since the previous phase, and trace each page as it is transmitted.
Performance impact: usually minimal on guest performance.
Switch-Over Phase
If pre-copy has converged, very few dirty pages remain. The VM is momentarily quiesced on the source and resumed on the destination.
Performance impact: an increase in latency as the guest is stopped, with a duration of less than a second.

vSphere - vMotion - EVC

vSphere - vMotion EVC - Cluster

vSphere - vMotion EVC - Baseline

vSphere - vMotion Enhanced
Cross vCenter vMotion requirements are as follows:
- L2 network connectivity (the IP of the VM is not changed).
- The same SSO domain for both vCenter Servers if you use the GUI to initiate the vMotion.
- A vMotion network.
vMotion across virtual switches
vSphere 6 makes it possible to perform vMotion between the following VSS or VDS networks:
- from VSS to VSS
- from VSS to VDS
- from VDS to VDS
Long Distance vMotion
Long Distance vMotion provides migration of VMs between two sites with vCenter Servers and requires only 250 Mbps of network bandwidth per vMotion operation and a maximum of 150 ms round-trip latency. As mentioned earlier, the vMotion process keeps the VM's historical data (events, alarms, performance counters). The requirements are as follows:
- Two vCenter Server 6 instances (one at each site).
- The same SSO domain for both vCenter Servers if you use the GUI to initiate the vMotion.
- L2 connectivity for the VM network.
- 250 Mbps of network bandwidth per vMotion operation.

vSphere DRS

vSphere DRS - Settings
Priority Rating
DRS constantly monitors several CPU and memory
performance indicators to determine the resource demand of
each VM. When a host is not able to satisfy VM demand, DRS
generates a migration recommendation to move the VM to
another host that can provide more resources. Each
recommendation is assigned a priority rating of 1 (highest) to 5
(lowest).
Now let's take a closer look at the priority ratings.
Priority 1
These are recommendations that are generated to enforce a
rule. Priority 1 recommendations are not generated due to
cluster imbalance or VM demand. Examples of Priority 1
recommendations include VM (anti-)affinity rules or a host
entering Maintenance Mode.
Priority 2 to 5
DRS performs a cost/benefit analysis to calculate a priority
rating for each recommendation. The current demand of the VM
is examined and DRS evaluates whether a migration would be
worth the effort. There is a cost associated with every migration,
and each vMotion adds an increased demand on CPU,
memory, and network to the source and destination hosts. DRS
calculates the performance improvement and weighs that with
the cost of performing the migration. The more significant the
benefit, the higher the priority rating.
Let that all sink in. Now, on to the migration threshold.

vSphere DRS Settings - Swap

vSphere DRS Settings Affinity Rules

vSphere DRS Settings DRS Group

vSphere DRS Settings - VM-to-Host Affinity Rule

vSphere DRS Settings - VM-to-Host Affinity Rule

vSphere DRS Settings - VM

vSphere - Using HA and DRS Together

