20412D Student
Microsoft Learning Product
20412D
Information in this document, including URLs and other Internet website references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
The names of manufacturers, products, or URLs are provided for informational purposes only, and
Microsoft makes no representations or warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not
responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
© 2014 Microsoft Corporation. All rights reserved.
Microsoft and the trademarks listed at
http://www.microsoft.com/about/legal/en/us/IntellectualProperty/Trademarks/EN-US.aspx are trademarks of
the Microsoft group of companies. All other trademarks are property of their respective owners.
1. DEFINITIONS.
a. Authorized Learning Center means a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, or such other entity as Microsoft may designate from time to time.
b. Authorized Training Session means the instructor-led training class using Microsoft Instructor-Led
Courseware conducted by a Trainer at or through an Authorized Learning Center.
c. Classroom Device means one (1) dedicated, secure computer that an Authorized Learning Center owns
or controls that is located at an Authorized Learning Center's training facilities that meets or exceeds the
hardware level specified for the particular Microsoft Instructor-Led Courseware.
d. End User means an individual who is (i) duly enrolled in and attending an Authorized Training Session
or Private Training Session, (ii) an employee of a MPN Member, or (iii) a Microsoft full-time employee.
e. Licensed Content means the content accompanying this agreement which may include the Microsoft
Instructor-Led Courseware or Trainer Content.
f. Microsoft Certified Trainer or MCT means an individual who is (i) engaged to teach a training session
to End Users on behalf of an Authorized Learning Center or MPN Member, and (ii) currently certified as a
Microsoft Certified Trainer under the Microsoft Certification Program.
g. Microsoft Instructor-Led Courseware means the Microsoft-branded instructor-led training course that
educates IT professionals and developers on Microsoft technologies. A Microsoft Instructor-Led
Courseware title may be branded as MOC, Microsoft Dynamics or Microsoft Business Group courseware.
h. Microsoft IT Academy Program Member means an active member of the Microsoft IT Academy
Program.
i. Microsoft Learning Competency Member means an active member of the Microsoft Partner Network
program in good standing that currently holds the Learning Competency status.
j. MOC means the Official Microsoft Learning Product instructor-led courseware known as Microsoft
Official Course that educates IT professionals and developers on Microsoft technologies.
k. MPN Member means an active silver or gold-level Microsoft Partner Network program member in good
standing.
l. Personal Device means one (1) personal computer, device, workstation or other digital electronic device
that you personally own or control that meets or exceeds the hardware level specified for the particular
Microsoft Instructor-Led Courseware.
m. Private Training Session means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led Courseware.
These classes are not advertised or promoted to the general public and class attendance is restricted to
individuals employed by or contracted by the corporate customer.
n. Trainer means (i) an academically accredited educator engaged by a Microsoft IT Academy Program
Member to teach an Authorized Training Session, and/or (ii) a MCT.
o. Trainer Content means the trainer version of the Microsoft Instructor-Led Courseware and additional
supplemental content designated solely for Trainers' use to teach a training session using the Microsoft
Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint presentations, trainer
preparation guide, train-the-trainer materials, Microsoft OneNote packs, classroom setup guide and
Pre-release course feedback form. To clarify, Trainer Content does not include any software, virtual hard
disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one copy
per user basis, such that you must acquire a license for each individual that accesses or uses the Licensed
Content.
2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
a. If you are a Microsoft IT Academy Program Member:
i. Each license acquired on behalf of yourself may only be used to review one (1) copy of the Microsoft
Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is
in digital format, you may install one (1) copy on up to three (3) Personal Devices. You may not
install the Microsoft Instructor-Led Courseware on a device you do not own or control.
ii. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one (1) End
User who is enrolled in the Authorized Training Session, and only immediately prior to the
commencement of the Authorized Training Session that is the subject matter of the Microsoft
Instructor-Led Courseware being provided, or
2. provide one (1) End User with the unique redemption code and instructions on how they can
access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. provide one (1) Trainer with the unique redemption code and instructions on how they can
access one (1) Trainer Content,
provided you comply with the following:
iii. you will only provide access to the Licensed Content to those individuals who have acquired a valid
license to the Licensed Content,
iv. you will ensure each End User attending an Authorized Training Session has their own valid licensed
copy of the Microsoft Instructor-Led Courseware that is the subject of the Authorized Training
Session,
v. you will ensure that each End User provided with the hard-copy version of the Microsoft Instructor-Led Courseware will be presented with a copy of this agreement and each End User will agree that
their use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement
prior to providing them with the Microsoft Instructor-Led Courseware. Each individual will be required
to denote their acceptance of this agreement in a manner that is enforceable under local law prior to
their accessing the Microsoft Instructor-Led Courseware,
vi. you will ensure that each Trainer teaching an Authorized Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Authorized Training Session,
vii. you will only use qualified Trainers who have in-depth knowledge of and experience with the
Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware being taught for
all your Authorized Training Sessions,
viii. you will only deliver a maximum of 15 hours of training per week for each Authorized Training
Session that uses a MOC title, and
ix. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer resources
for the Microsoft Instructor-Led Courseware.
b. If you are a Microsoft Learning Competency Member:
i. Each license acquired on behalf of yourself may only be used to review one (1) copy of the Microsoft
Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is
in digital format, you may install one (1) copy on up to three (3) Personal Devices. You may not
install the Microsoft Instructor-Led Courseware on a device you do not own or control.
ii. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one (1) End
User attending the Authorized Training Session and only immediately prior to the
commencement of the Authorized Training Session that is the subject matter of the Microsoft
Instructor-Led Courseware provided, or
2. provide one (1) End User attending the Authorized Training Session with the unique redemption
code and instructions on how they can access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. you will provide one (1) Trainer with the unique redemption code and instructions on how they
can access one (1) Trainer Content,
provided you comply with the following:
iii. you will only provide access to the Licensed Content to those individuals who have acquired a valid
license to the Licensed Content,
iv. you will ensure that each End User attending an Authorized Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Authorized
Training Session,
v. you will ensure that each End User provided with a hard-copy version of the Microsoft Instructor-Led
Courseware will be presented with a copy of this agreement and each End User will agree that their
use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement prior to
providing them with the Microsoft Instructor-Led Courseware. Each individual will be required to
denote their acceptance of this agreement in a manner that is enforceable under local law prior to
their accessing the Microsoft Instructor-Led Courseware,
vi. you will ensure that each Trainer teaching an Authorized Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Authorized Training Session,
vii. you will only use qualified Trainers who hold the applicable Microsoft Certification credential that is
the subject of the Microsoft Instructor-Led Courseware being taught for your Authorized Training
Sessions,
viii. you will only use qualified MCTs who also hold the applicable Microsoft Certification credential that is
the subject of the MOC title being taught for all your Authorized Training Sessions using MOC,
ix. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
x. you will only provide access to the Trainer Content to Trainers.
c.
ii. You may customize the written portions of the Trainer Content that are logically associated with
instruction of a training session in accordance with the most recent version of the MCT agreement.
If you elect to exercise the foregoing rights, you agree to comply with the following: (i)
customizations may only be used for teaching Authorized Training Sessions and Private Training
Sessions, and (ii) all customizations will comply with this agreement. For clarity, any use of
"customize" refers only to changing the order of slides and content, and/or not using all the slides or
content; it does not mean changing or modifying any slide or content.
2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may not
separate its components and install them on different devices.
2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights above, you may
not distribute any Licensed Content or any portion thereof (including any permitted modifications) to any
third parties without the express written permission of Microsoft.
2.4 Third Party Programs and Services. The Licensed Content may contain third party programs or
services. These license terms will apply to your use of those third party programs or services, unless other
terms accompany those programs and services.
2.5 Additional Terms. Some Licensed Content may contain components with additional terms,
conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and licenses also
apply to your use of that respective component and supplement the terms described in this agreement.
3. Pre-release Term. If you are a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, MPN Member or Trainer, you will cease using all copies of the Licensed Content on
the Pre-release technology upon (i) the date which Microsoft informs you is the end date for using the
Licensed Content on the Pre-release technology, or (ii) sixty (60) days after the commercial release of the
technology that is the subject of the Licensed Content, whichever is earlier ("Pre-release term").
Upon expiration or termination of the Pre-release term, you will irretrievably delete and destroy all copies
of the Licensed Content in your possession or under your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you more
rights despite this limitation, you may use the Licensed Content only as expressly permitted in this
agreement. In doing so, you must comply with any technical limitations in the Licensed Content that only
allow you to use it in certain ways. Except as expressly permitted in this agreement, you may not:
access or allow any individual to access the Licensed Content if they have not acquired a valid license
for the Licensed Content,
alter, remove or obscure any copyright or other protective notices (including watermarks), branding
or identifications contained in the Licensed Content,
publicly display, or make the Licensed Content available for others to access or use,
copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.
5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property laws
and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property rights in the
Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regulations.
You must comply with all domestic and international export laws and regulations that apply to the Licensed
Content. These laws include restrictions on destinations, end users and end use. For additional information,
see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is provided "as is", we may not provide support services for it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you fail
to comply with the terms and conditions of this agreement. Upon termination of this agreement for any
reason, you will immediately stop all use of and delete and destroy all copies of the Licensed Content in
your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible for
the contents of any third party sites, any links contained in third party sites, or any changes or updates to
third party sites. Microsoft is not responsible for webcasting or any other form of transmission received
from any third party sites. Microsoft is providing these links to third party sites to you only as a
convenience, and the inclusion of any link does not imply an endorsement by Microsoft of the third party
site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
11. APPLICABLE LAW.
a. United States. If you acquired the Licensed Content in the United States, Washington state law governs
the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws
principles. The laws of the state where you live govern all other claims, including claims under state
consumer protection laws, unfair competition laws, and in tort.
b. Outside the United States. If you acquired the Licensed Content in any other country, the laws of that
country apply.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws
of your country. You may also have rights with respect to the party from whom you acquired the Licensed
Content. This agreement does not change your rights under the laws of your country if the laws of your
country do not permit it to do so.
13.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP
TO US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL,
LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
This limitation applies to
o anything related to the Licensed Content, services, content (including code) on third party Internet
sites or third-party programs; and
o claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence,
or other tort to the extent permitted by applicable law.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The
above limitation or exclusion may not apply to you because your country may not allow the exclusion or
limitation of incidental, consequential or other damages.
Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.
Remarque : Ce contenu sous licence étant distribué au Québec, Canada, certaines des clauses
dans ce contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à votre seule risque et péril. Microsoft n'accorde aucune autre garantie
expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection des
consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les garanties
implicites de qualité marchande, d'adéquation à un usage particulier et d'absence de contrefaçon sont exclues.
LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES
DOMMAGES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de dommages
directs uniquement à hauteur de 5,00 $ US. Vous ne pouvez prétendre à aucune indemnisation pour les autres
dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de bénéfices.
Cette limitation concerne :
o tout ce qui est relié au contenu sous licence, aux services ou au contenu (y compris le code)
figurant sur des sites Internet tiers ou dans des programmes tiers ; et
o les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité
stricte, de négligence ou d'une autre faute dans la limite autorisée par la loi en vigueur.
Elle s'applique également, même si Microsoft connaissait ou devrait connaître l'éventualité d'un tel dommage. Si
votre pays n'autorise pas l'exclusion ou la limitation de responsabilité pour les dommages indirects, accessoires
ou de quelque nature que ce soit, il se peut que la limitation ou l'exclusion ci-dessus ne s'appliquera pas à votre
égard.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d'autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre
pays si celles-ci ne le permettent pas.
Revised September 2012
Acknowledgments
Microsoft Learning wants to acknowledge and thank the following individuals for their contribution
toward developing this title. Their effort at various stages in the development has ensured that you have a
good classroom experience.
Contents
Module 1: Implementing Advanced Network Services
    Lesson 1: Configuring Advanced DHCP Features
Module 2: Implementing Advanced File Services
Module 3: Implementing Dynamic Access Control
Module 4: Implementing Distributed Active Directory Domain Services Deployments
Module 5: Implementing Active Directory Domain Services Sites and Replication
Module 6: Implementing AD CS
    Lesson 1: Using Certificates in a Business Environment
    Lesson 2: PKI Overview
Module 7: Implementing Active Directory Rights Management Services
    Lab: Implementing AD RMS
Module 8: Implementing and Administering AD FS
    Lesson 2: Deploying AD FS
    Lab A: Implementing AD FS
Module 9: Implementing Network Load Balancing
Module 10: Implementing Failover Clustering
Module 11: Implementing Failover Clustering with Hyper-V
Module 12: Implementing Business Continuity and Disaster Recovery
Course Description
Get hands-on instruction and practice configuring advanced Windows Server 2012 services, including Windows
Server 2012 R2, in this five-day Microsoft Official Course. This course is the third part in a series of
three courses that provides the skills and knowledge necessary to implement a core Windows Server 2012
infrastructure in an existing enterprise environment.
The three courses collectively cover implementing, managing, maintaining, and provisioning services and
infrastructure in a Windows Server 2012 environment. Although there is some cross-over of skills and
tasks across these courses, this course focuses on advanced configuration of services necessary to deploy,
manage, and maintain a Windows Server 2012 infrastructure, such as advanced networking services,
Active Directory Domain Services (AD DS), Active Directory Rights Management Services (AD RMS), Active
Directory Federation Services (AD FS), Network Load Balancing, failover clustering, business continuity,
and disaster-recovery services. This course also covers access and information provisioning and protection
technologies such as Dynamic Access Control (DAC), and Web Application Proxy integration with AD FS
and Workplace Join.
This course maps directly to and is the preferred choice for hands-on preparation for Microsoft Certified
Solutions Associate (MCSA) Exam 70-412: Configuring Advanced Windows Server 2012 Services, which is the
third of three exams required for MCSA: Windows Server 2012 certification.
Note: Labs in this course are based on the General Availability release of Windows Server 2012 R2 and
Windows 8.1.
Module 1 starts the course with topics on advanced network configuration. Students will already be
familiar with Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) services, and
this course is designed for more advanced configurations that they may not have encountered. IP Address
Management (IPAM) is a new Windows Server 2012 feature that will help students streamline the
management of IP addressing in the organization.
Modules 2 and 3 provide a block of topics that are focused on file services. Module 2 expands on previous
knowledge that students have acquired on how to configure file services in a Windows Server
environment by introducing some advanced configuration options. Module 3 describes the new Windows
Server 2012 feature that provides even more advanced options for managing and auditing access to file
server resources in Windows Server 2012.
Modules 4 through 8 discuss the more advanced topics in implementing AD DS and other Active
Directory role services. Modules 4 and 5 describe the scenario where an organization has a highly
complicated environment that cannot be easily managed with a single AD DS domain and site. Therefore,
these modules describe how to implement multi-domain and multi-site AD DS environments.
Modules 6 through 8 take AD DS implementation in a different direction. While modules 4 and 5 focused
on providing AD DS services to users inside the organization, modules 6 to 8 switch the focus to providing
some AD DS services outside of the organization. This includes authentication and authorization to users
or services that might be in the same forest, but that might also be in a different AD DS forest, or might
not even have any AD DS accounts.
Module 6 describes how to implement a public key infrastructure (PKI) environment that will meet
internal certificate services requirements and external requirements. Module 7 describes how to
implement an Active Directory Rights Management Services (AD RMS) deployment to enable internal
access restrictions to be extended outside the organization's boundaries. Module 8 describes how to
implement Active Directory Federation Services (AD FS) environments to extend authentication services to
users who might not have any accounts in the internal AD DS forest.
Modules 9 and 10 provide details on two different options for making applications and services highly
available in a Windows Server 2012 environment. Module 9 describes Network Load Balancing (NLB),
which is used primarily for web-based applications. Module 10 describes failover clustering, which can be
used to make many other applications and services highly available. Module 11 expands on the failover
clustering content from Module 10, by describing how to integrate Hyper-V virtual machines with
failover clustering.
Module 12 provides instruction on how to plan for and recover from various data and server loss
scenarios in Windows Server 2012. Because of the options for integrating high availability with disaster
recovery, this module will build on the high-availability content that was presented in the previous
modules, but will also include scenarios and procedures for ensuring data and service availability in the
event of failure in a highly available environment.
Audience
This course is intended for candidates who would typically be experienced Windows Server Administrators
who have real-world experience working in a Windows Server 2008 or Windows Server 2012 enterprise
environment. The audience also includes IT professionals who want to take exam 70-412,
Configuring Advanced Windows Server 2012 Services. Lastly, the audience includes IT professionals who
wish to take the Microsoft Certified Solutions Expert (MCSE) exams in Data Center, Desktop Infrastructure,
Messaging, or Collaboration and Communications. This course may help them as they prepare for the
Microsoft Certified Solutions Associate (MCSA) exams, which are a pre-requisite for their individual
specialties.
Student Prerequisites
This course requires that you meet the following prerequisites:
Experience working with Windows Server 2008 or Windows Server 2012 servers day to day in an
enterprise environment.
Knowledge equivalent to the content covered in courses 20410C: Installing and Configuring Windows
Server 2012; and 20411C: Administering Windows Server 2012.
Course Objectives
After completing this course, the students will be able to:
Configure advanced features for DHCP and DNS, and configure IP address management.
Configure Dynamic Access Control (DAC) to manage and audit access to shared files.
Plan and implement an AD DS deployment that includes multiple domains and forests.
Plan and implement an AD DS deployment that includes multiple locations and data centers.
Implement and configure an Active Directory Certificate Services (AD CS) deployment.
Implement and configure an Active Directory Rights Management Services (AD RMS) deployment.
Implement and configure an Active Directory Federation Services (AD FS) deployment.
Provide high availability and load balancing for web-based applications by implementing Network
Load Balancing (NLB).
Provide high availability for network services and applications by implementing failover clustering.
Deploy and manage Windows Server 2012 Hyper-V virtual machines in a failover cluster.
Implement a backup and disaster-recovery solution based on business and technical requirements.
Course Outline
The course outline is as follows:
Module 1: Implementing Advanced Network Services
Module 2: Implementing Advanced File Services
Module 3: Implementing Dynamic Access Control
Module 4: Implementing Distributed Active Directory Domain Services Deployments
Module 5: Implementing Active Directory Domain Services Sites and Replication
Module 6: Implementing Active Directory Certificate Services
Module 7: Implementing Active Directory Rights Management Services
Module 8: Implementing and Administering AD FS
Module 9: Implementing Network Load Balancing
Module 10: Implementing Failover Clustering
Module 11: Implementing Failover Clustering with Hyper-V
Module 12: Implementing Business Continuity and Disaster Recovery
Exam/Course Mapping
This course, 20412C: Configuring Advanced Windows Server 2012 Services, has a direct mapping of its
content to the objective domain for the Microsoft exam 70-412: Configuring Advanced Windows
Server 2012 Services.
The table below is provided as a study aid that will assist you in preparation for taking this exam and
to show you how the exam objectives and the course content fit together. The course is not designed
exclusively to support the exam but rather provides broader knowledge and skills to allow a real-world
implementation of the particular technology. The course will also contain content that is not directly
covered in the examination and will utilize the unique experience and skills of your qualified Microsoft
Certified Trainer.
Note: The exam objectives are available online at the following URL:
http://www.microsoft.com/learning/en-us/exam-70-412.aspx, under Skills Measured.
Course Content
Exam objective: Module lesson; Lab
4.2 Implement an advanced DNS solution: Mod 1, Lesson 2; Mod 1, Lab Ex 2
5.1 Configure a forest or a domain: Mod 4, Lessons 1/2; Mod 4, Lab Ex 1
6.3 Manage certificates: Mod 6, Lessons 4/5/6; Mod 6, Lab B Ex 1/2/3/4
Note: Attending this course in itself will not successfully prepare you to pass any associated
certification exams.
Taking this course does not guarantee that you will automatically pass any certification exam. In
addition to attendance at this course, you should also have the following:
Real-world, hands-on experience installing and configuring a Windows Server 2012 infrastructure
There may also be additional study and preparation resources, such as practice tests, available for
you to prepare for this exam. Details of these are available at the following URL:
http://www.microsoft.com/learning/en-us/exam-70-412.aspx, under Preparation options.
You should familiarize yourself with the audience profile and exam prerequisites to ensure you are
sufficiently prepared before taking the certification exam. The complete audience profile for this exam
is available at the following URL: http://www.microsoft.com/learning/en-us/course.aspx?ID=20412C,
under Overview, Audience Profile.
You should also check out the Microsoft Virtual Academy, http://www.microsoftvirtualAcademy.com, to
view additional study resources and online courses that are available to assist you with exam
preparation and career development.
The exam/course mapping table outlined above is accurate at the time of printing; however, it is subject
to change at any time, and Microsoft bears no responsibility for any discrepancies between the version
published here and the version available online, and will provide no notification of such changes.
Course Materials
The following materials are included with your kit:
Course Handbook: a succinct classroom learning guide that provides the critical technical
information in a crisp, tightly focused format, which is essential for an effective in-class learning
experience.
You may be accessing either a printed course handbook or digital courseware material via the Arvato
Skillpipe reader. Your Microsoft Certified Trainer will provide specific details, but both contain the
following:
o Lessons: guide you through the learning objectives and provide the key points that are critical to
the success of the in-class learning experience.
o Labs: provide a real-world, hands-on platform for you to apply the knowledge and skills learned
in the module.
o Module Reviews and Takeaways: provide on-the-job reference material to boost knowledge
and skills retention.
Course Companion Content on the http://www.microsoft.com/learning/en/us/companionmoc.aspx site: searchable, easy-to-browse digital content with integrated premium online resources
that supplement the Course Handbook.
Modules: include companion content, such as questions and answers, detailed demo steps and
additional reading links, for each lesson. Additionally, they include Lab Review questions and
answers and Module Reviews and Takeaways sections, which contain the review questions and
answers, best practices, common issues and troubleshooting tips with answers, and real-world
issues and scenarios with answers.
Resources: include well-categorized additional resources that give you immediate access to the
most current premium content on TechNet, MSDN, or Microsoft Press.
Course evaluation: at the end of the course, you will have the opportunity to complete an online
evaluation to provide feedback on the course, training facility, and instructor.
Virtual machines used in this course:
20412C-LON-DC1/-B
20412C-LON-CA1
20412C-LON-CL1
20412C-LON-CL2
20412C-LON-CORE
20412C-LON-SVR1/-B
20412C-LON-SVR2
20412C-LON-SVR3
20412C-LON-SVR4
20412C-TREY-CL1
20412C-TREY-DC1
20412C-LON-HOST1
20412C-LON-HOST2
20412C-TOR-DC1
Software Configuration
The following software is used in the course:
Windows 8.1
Classroom Setup
Each classroom computer will have the same virtual machine configured in the same way.
You may be accessing the lab virtual machines either in a hosted online environment with a web
browser or by using Hyper-V on a local machine. The labs and virtual machines are the same in both
scenarios; however, there may be some slight variations because of hosting requirements. Any
discrepancies will be called out in the Lab Notes on the hosted lab platform.
Your Microsoft Certified Trainer will provide details about your specific lab environment.
Dual 120 gigabyte (GB) hard disks, 7,200 RPM, Serial ATA (SATA) or better*
16 GB RAM
DVD drive
Network adapter
*Striped
In addition, the instructor computer must be connected to a projection display device that supports SVGA
1024 x 768 pixels, 16-bit colors.
Module 1
Implementing Advanced Network Services
Contents:
Module Overview
Module Overview
In Windows Server 2012, network services such as Domain Name System (DNS) provide critical support for
name resolution of network and Internet resources. Within DNS, DNS Security Extensions (DNSSEC) is an
advanced feature that provides a means of securing DNS responses to client queries so that malicious users
cannot tamper with them. With Dynamic Host Configuration Protocol (DHCP), you can manage and
distribute IP addresses to client computers. DHCP is essential for managing IP-based networks. DHCP
failover is an advanced feature that can prevent clients from losing access to the network in case of a DHCP
server failure. IP Address Management (IPAM) provides a unified means of controlling IP addressing.
This module introduces DNS and DHCP improvements, and IP address management, and it provides
details about how to implement these features.
Objectives
After completing this module, you will be able to:
Implement IPAM.
Lesson 1: Configuring Advanced DHCP Features
Lesson Objectives
After completing this lesson, you will be able to:
DHCP consists of the components that are described below.

DHCP Server service: After installing the DHCP Server role, the DHCP server is implemented as a service. This service can distribute IP addresses and other network configuration information to clients who request it.

DHCP scopes: The DHCP administrator configures the range of IP addresses and related information that is allotted to the server for distribution to requesting clients. Each scope can only be associated with a single IP subnet. A scope must consist of:
A name and description
A range of addresses that can be distributed
A subnet mask
A scope can also define:
IP addresses that should be excluded from distribution
The duration of the IP address lease
DHCP options
You can configure a single DHCP server with multiple scopes, but the server must be either connected directly to each subnet that it serves, or have a supporting and configured DHCP relay agent in place. Scopes also provide the primary way for the server to manage and distribute any related configuration parameters (DHCP options) to clients on the network.

DHCP options: When you assign the IP address to the client, you can also simultaneously assign many other network configuration parameters. The most common DHCP options include:
Default gateway IP address
DNS server IP address
DNS domain suffix
Windows Internet Name Service (WINS) server IP address
You can apply the options at different levels. They can be applied as follows:
Globally to all scopes
Specifically to particular scopes
To specific clients based on a class ID value
To clients that have specific IP address reservations configured
Note: Internet Protocol version 6 (IPv6) scopes are slightly different, and will be
discussed later in this lesson.

DHCP database: The DHCP database contains configuration data about the DHCP server, and stores information about the IP addresses that have been distributed. By default, the DHCP database files are stored in the %systemroot%\System32\Dhcp folder. The DHCP database is a Microsoft JET database.

DHCP console: The DHCP console is the main administrative tool for managing all aspects of the DHCP server. This management console is installed automatically on any server that has the DHCP role installed. However, you also can install it on a remote server or Windows 8 client by using the Remote Server Administration Tools (RSAT) and by connecting to the DHCP server for remote management.
DHCP Leases
DHCP allocates IP addresses on a dynamic basis. This is known as a lease. You can configure the duration
of the lease. The default lease time for wired clients is eight days, but mobile or handheld devices such as
tablets should usually have a shorter lease duration. Typically, where there is a higher turnover of devices
or users, the lease time should be shorter; and where there is more permanency, it should be longer. You
can configure the lease settings in the DHCP console, under the server name and either the IPv4 or IPv6
node, by clicking the scope, and then clicking Properties.
When the DHCP lease reaches 50 percent of the lease time, the client attempts to renew the lease. This
automatic process occurs in the background. Computers might have the same IP address for a long time if
they operate continually on a network without being shut down. Client computers also attempt renewal
during the startup process.
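For example, the following Windows PowerShell commands show one way to view and then shorten the lease duration for a scope; the scope ID 172.16.0.0 and the two-day duration are illustrative values, not requirements.

# View the current lease duration for the scope (example scope ID)
Get-DhcpServerv4Scope -ScopeId 172.16.0.0 | Select-Object ScopeId,Name,LeaseDuration
# Shorten the lease to two days, for example for a subnet with many transient devices
Set-DhcpServerv4Scope -ScopeId 172.16.0.0 -LeaseDuration "2.00:00:00"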
Windows PowerShell
You can use Windows PowerShell cmdlets to provide command-line support for managing DHCP. To be
able to use the DHCP cmdlets, you must load the DhcpServer module. In addition to providing command-line support, PowerShell cmdlets are used if you want to script your DHCP management. The following
list includes a subset of the nearly 100 Windows Server 2012 PowerShell cmdlets for managing DHCP.
Add-DhcpServerInDC
Add-DhcpServerv4Class
Add-DhcpServerv4ExclusionRange
Add-DhcpServerv4Failover
Add-DhcpServerv4FailoverScope
Add-DhcpServerv4Filter: You use this cmdlet to add a filter for a MAC address that is used on an allow list or deny list.
Add-DhcpServerv4Lease: You use this cmdlet to add a new IPv4 address lease in the DHCP server service for testing purposes.
Add-DhcpServerv4OptionDefinition
Add-DhcpServerv4Policy
Add-DhcpServerv4PolicyIPRange
Add-DhcpServerv4Reservation
Add-DhcpServerv4Scope
For a complete list of the available cmdlets, refer to DHCP Server Cmdlets in Windows PowerShell:
http://go.microsoft.com/fwlink/?LinkID=386639
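As a minimal sketch of how several of these cmdlets fit together, the following commands create and activate a scope, exclude part of the range, and assign common options; the scope name (Adatum) and all addresses are example values only.

# Create and activate an IPv4 scope (example addressing)
Add-DhcpServerv4Scope -Name "Adatum" -StartRange 172.16.0.100 -EndRange 172.16.0.200 -SubnetMask 255.255.0.0 -State Active
# Exclude a block of addresses from distribution
Add-DhcpServerv4ExclusionRange -ScopeId 172.16.0.0 -StartRange 172.16.0.190 -EndRange 172.16.0.200
# Assign common DHCP options at the scope level
Set-DhcpServerv4OptionValue -ScopeId 172.16.0.0 -Router 172.16.0.1 -DnsServer 172.16.0.10 -DnsDomain "adatum.com"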
Windows Server 2012 R2 added or improved DHCP cmdlets for additional functionality and to
support new features in Windows Server 2012 R2. The following list shows some of the cmdlets that have
been added or improved:
Add-DhcpServerSecurityGroup (New)
Add-DhcpServerv4MulticastExclusionRange (New)
Add-DhcpServerv4MulticastScope (New)
Add-DhcpServerv4Policy (Improved)
Get-DhcpServerDnsCredential (New)
Get-DhcpServerv4DnsSetting (Improved)
Get-DhcpServerv4MulticastExclusionRange (New)
Get-DhcpServerv4MulticastLease (New)
Get-DhcpServerv4MulticastScope (New)
Get-DhcpServerv4MulticastScopeStatistics (New)
For information about the DHCP cmdlets that were added or improved in Windows Server 2012 R2, refer to
What's New in DHCP in Windows Server 2012 R2:
http://go.microsoft.com/fwlink/?LinkID=386638
The DHCP server dynamically updates the DNS host (A) resource records and pointer (PTR)
resource records only if requested by the DHCP clients. By default, the client requests that the DHCP
server register the DNS pointer (PTR) resource record, while the client registers its own DNS host (A)
resource record.
The DHCP server discards the host (A) and pointer (PTR) resource records when the client's lease is
deleted.
You can change the Enable DNS dynamic updates according to the settings below option to Always
dynamically update DNS records so that it instructs the DHCP server to always dynamically update DNS
host (A) and pointer (PTR) resource records, no matter what the client requests. In this way, the DHCP
server becomes the resource record owner, because the DHCP server performed the registration of the
resource records. Once the DHCP server becomes the owner of the client computer's host (A) and pointer
(PTR) resource records, only that DHCP server can update the DNS resource records for the client
computer, based on the duration and renewal of the DHCP lease.
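If you prefer to configure this behavior from Windows PowerShell rather than the DHCP console, the Set-DhcpServerv4DnsSetting cmdlet exposes the same settings; the server name LON-DC1 is an example.

# Always register both host (A) and pointer (PTR) records on behalf of clients,
# and discard them when the lease is deleted
Set-DhcpServerv4DnsSetting -ComputerName LON-DC1 -DynamicUpdates Always -DeleteDnsRROnLeaseExpiry $true
# Review the effective DNS registration settings
Get-DhcpServerv4DnsSetting -ComputerName LON-DC1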
Benefits of Superscopes
A superscope is useful in several situations. For example, if a scope runs out of addresses, and you cannot
add more addresses from the subnet, you can add a new subnet to the DHCP server instead. This scope
will lease addresses to clients in the same physical network, but the clients will be in a separate network
logically. This is known as multinetting. Once you add a new subnet, you must configure routers to
recognize the new subnet so that you ensure local communications in the physical network.
A superscope is also useful when you need to move clients gradually into a new IP numbering scheme.
When both numbering schemes coexist for the original lease's duration, you can move clients
into the new subnet transparently. When you have renewed all client leases in the new subnet, you can
retire the old subnet.
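A superscope can also be created from Windows PowerShell; the following sketch assumes two existing scopes, 192.168.1.0 and 192.168.2.0, that serve the same physical network.

# Group two existing scopes into a superscope for a multinetted segment
Add-DhcpServerv4Superscope -SuperscopeName "Building A Multinet" -ScopeId 192.168.1.0, 192.168.2.0
# Confirm the superscope and its member scopes
Get-DhcpServerv4Superscope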
Multicast Scopes
A multicast scope is a collection of multicast addresses from the Class D IP address range of 224.0.0.0 to
239.255.255.255 (224.0.0.0/4). These addresses are used when applications need to communicate with
numerous clients efficiently and simultaneously. This is accomplished with multiple hosts that listen to
traffic for the same IP address. Multicast addresses are used in addition to the Network IP address.
A multicast scope is commonly known as a Multicast Address Dynamic Client Allocation Protocol
(MADCAP) scope. Applications that request addresses from these scopes need to support the MADCAP
application programming interface (API). Windows Deployment Services is an example of an application
that supports multicast transmissions.
Multicast scopes allow applications to reserve a multicast IP address for data and content delivery.
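In Windows Server 2012 R2, you can also create a MADCAP scope from Windows PowerShell; the name and the administratively scoped multicast range below are illustrative.

# Create a multicast (MADCAP) scope for an application such as Windows Deployment Services
Add-DhcpServerv4MulticastScope -Name "WDS Multicast" -StartRange 239.1.1.1 -EndRange 239.1.1.254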
Asia-Pacific Network Information Centre (APNIC) for Asia, Australia, New Zealand, and neighboring
countries.
American Registry for Internet Numbers (ARIN) for Canada, many Caribbean and North Atlantic
islands, and the United States.
Latin America and Caribbean Network Information Centre (LACNIC) for Latin America and parts of
the Caribbean region.
Réseaux IP Européens Network Coordination Centre (RIPE NCC) for Europe, Russia, the Middle East,
and Central Asia.
Stateful configuration. Occurs when the DHCPv6 server assigns the IPv6 address to the client along
with additional DHCP data.
Stateless configuration. Occurs when the subnet router and client agree on an IPv6 address automatically, and the
DHCPv6 server only assigns other IPv6 configuration settings. The IPv6 address is built by using the
network portion from the router, and the host portion of the address, which is generated by the client.
Property: Use
Prefix
Preference: This property informs DHCPv6 clients which server to use if you have multiple DHCPv6 servers.
Exclusions
DHCP options
1. In the DHCP console, right-click the IPv6 node, and then click New Scope.
2. Configure a scope prefix and preference; for example, fe80:409c:f385:9e55:eb82:: as the prefix and 1
as the preference.
3.
4.
5.
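A DHCPv6 scope can also be created from Windows PowerShell. The prefix below is a hypothetical unique local /64 prefix, and the preference value of 1 matches the example above; substitute values appropriate to your network.

# Create a DHCPv6 scope with a preference of 1 (hypothetical prefix)
Add-DhcpServerv6Scope -Prefix fd00:409c:f385:9e55:: -Name "Adatum IPv6" -Preference 1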
You can implement name protection for both IPv4 and IPv6. In addition, you can configure DHCP Name
Protection at both the server level and the scope level. Implementation at the server level will only apply
for newly created scopes.
To enable DHCP Name Protection for an IPv4 or IPv6 node, perform this procedure:
1. Open the DHCP console.
2. Right-click the IPv4 or IPv6 node, and then open the Properties page.
3. Click DNS, click Configure, and then select the Enable Name Protection check box.
To enable DHCP Name Protection for a scope, perform this procedure:
1. Open the DHCP console.
2. Expand the IPv4 or IPv6 node, right-click the scope, and then open the Properties page.
3. Click DNS, click Configure, and then select the Enable Name Protection check box.
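Name protection can also be enabled from Windows PowerShell, either for the whole server or per scope; the scope ID and prefix below are example values.

# Enable name protection for a single IPv4 scope
Set-DhcpServerv4DnsSetting -ScopeId 172.16.0.0 -NameProtection $true
# Enable name protection for an IPv6 scope (hypothetical prefix)
Set-DhcpServerv6DnsSetting -Prefix fd00:409c:f385:9e55:: -NameProtection $true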
DHCP Failover
DHCP clients renew their leases on their IP
addresses at regular, configurable intervals. When
the DHCP service fails and the leases time out, the
clients no longer have IP addresses. In the past,
DHCP failover was not possible because DHCP
servers were independent and unaware of each
other. Therefore, if you configured two separate DHCP servers to distribute the same pool of addresses,
that could lead to duplicate addresses. Additionally, to provide redundant DHCP services, you had to
configure clustering and perform a significant amount of manual configuration and monitoring.
The new DHCP failover feature enables two DHCP servers to provide IP addresses and optional
configurations to the same subnets or scopes. Therefore, you now can configure two DHCP servers to
replicate lease information. If one of the servers fails, the other server services the clients for the entire
subnet.
Note: In Windows Server 2012, you can configure only two DHCP servers for failover, and
only for IPv4 scopes and subnets.
Note: DHCP failover is time sensitive. You must synchronize time between the partners in
the relationship. If the time difference is greater than one minute, the failover process will halt
with a critical error.
You can configure failover in one of the two following modes.

Hot standby. In this mode, one server is the primary server and the other is the secondary server. The primary server actively assigns IP configurations for the scope or subnet. The secondary DHCP server assumes this role only if the primary server becomes unavailable. A DHCP server can simultaneously act as the primary for one scope or subnet, and the secondary for another.
Administrators must configure a percentage of the scope addresses to be assigned to the standby server. These addresses are supplied during the Maximum Client Lead Time (MCLT) interval if the primary server is down. The default value is five percent of the scope; that is, 5% of the available addresses are reserved for the secondary server. The secondary server takes control of the entire IP range after the MCLT interval has passed. When the primary server is down, addresses from the secondary server use a lease time equal to the MCLT, one hour by default.
Hot standby mode is best suited to deployments in which a disaster recovery site is located at a different location. That way, the DHCP server will not service clients unless there is a main server outage.

Load sharing. This is the default mode. In this mode, both servers supply IP configuration to clients simultaneously. The server that responds to IP configuration requests depends on how the administrator configures the load distribution ratio. The default ratio is 50:50.
MCLT
The administrator configures the MCLT parameter to determine the amount of time a DHCP server should
wait when a partner is unavailable, before assuming control of the address range. This value cannot be
zero, and the default is one hour.
Message Authentication
Windows Server 2012 enables you to authenticate the failover message traffic between the replication
partners. The administrator can establish a shared secret, much like a password, in the Configure
Failover Wizard for DHCP failover. This validates that the failover message comes from the failover
partner.
Firewall Considerations
DHCP uses TCP port 647 to listen for failover traffic. The DHCP installation creates the following inbound
and outbound firewall rules:
Microsoft-Windows-DHCP-Failover-TCP-In
Microsoft-Windows-DHCP-Failover-TCP-Out
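A failover relationship can also be created from Windows PowerShell with the Add-DhcpServerv4Failover cmdlet. The following sketch configures a load sharing relationship between LON-DC1 and LON-SVR1 for the 172.16.0.0 scope; the relationship name, ratio, MCLT, and shared secret are example values.

# Create a load sharing failover relationship for an existing scope
Add-DhcpServerv4Failover -ComputerName LON-DC1 -Name "LON-DC1-LON-SVR1" -PartnerServer LON-SVR1 `
    -ScopeId 172.16.0.0 -LoadBalancePercent 50 -MaxClientLeadTime "01:00:00" -SharedSecret 'Pa$$w0rd'
# For hot standby instead, use -ServerRole Active (or Standby) and -ReservePercent 5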
Demonstration Steps
Configure a DHCP failover relationship
1.
Sign in on LON-SVR1 as Adatum\Administrator with the password Pa$$w0rd. Open the DHCP
management console. Note that the server is authorized, but that no scopes are configured.
2.
Switch to LON-DC1. In Server Manager, click Tools, and then on the drop-down list, click DHCP.
3.
4.
Note: LON-SVR1 has two NICs: one on the 131.107.0.0 subnet and one on the 172.16.0.0
subnet. LON-DC1 also resides on the 172.16.0.0 subnet.
5.
6.
Switch back to LON-SVR1, refresh the IPv4 node, and note that the Adatum scope is configured and
is active.
Lesson 2
Lesson Objectives
After completing this lesson, you will be able to:
For more verbose logging, you can enable debug logging. Debug logging options are disabled by default,
but they can be selectively enabled. Debug logging options include the following:
Direction of packets.
Contents of packets.
Transport protocol.
Type of request.
Specifying the name and location of the log file, which is located in the %windir%\System32\DNS
directory.
Debug logging can be resource intensive. It can affect overall server performance and consume disk
space. Therefore, you should enable it only temporarily when you require more detailed information
about server performance. To enable debug logging on the DNS server, do the following:
1.
2.
3.
4. Select Log packets for debugging, and then select the events for which you want the DNS server to
record debug logging.
Note: Logging can generate a large number of files, and if it is left on too long, it can fill a
drive. We highly recommend that you turn on logging only while you are actively
troubleshooting; at all other times, logging should be turned off.
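Debug logging can also be controlled from Windows PowerShell with the Set-DnsServerDiagnostics cmdlet. The combination of switches below is only one example, capturing queries and answers in both directions over UDP and TCP.

# Enable selective debug logging (example selection of events)
Set-DnsServerDiagnostics -Queries $true -Answers $true -SendPackets $true -ReceivePackets $true `
    -UdpPackets $true -TcpPackets $true
# Review the current diagnostic settings
Get-DnsServerDiagnostics
# Turn debug logging off again when troubleshooting is complete
Set-DnsServerDiagnostics -All $false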
Aging is determined by using parameters known as the No-refresh interval and the Refresh interval. The
No-refresh interval is the period of time that the record is not eligible to be refreshed. By default, this is
seven days. The Refresh interval is the period of time during which the record is eligible to be refreshed by the
client. The default is seven days. Usually, a client host record cannot be refreshed in the database for
seven days after it is first registered or refreshed. However, it then must be refreshed within the next seven
days after the No-refresh interval, or the record becomes eligible to be scavenged out of the database. A
client will attempt to refresh its DNS record at startup, and every 24 hours while the system is running.
Note: Records that are added dynamically to the database are time stamped. Static records
that you enter manually have a time stamp value of zero (0); therefore, they will not be affected
by aging and will not be scavenged out of the database.
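Aging and scavenging can also be configured from Windows PowerShell; the zone name contoso.com and the seven-day intervals below simply mirror the defaults described above.

# Enable aging on a zone with the default seven-day intervals (example zone name)
Set-DnsServerZoneAging -Name "contoso.com" -Aging $true -NoRefreshInterval "7.00:00:00" -RefreshInterval "7.00:00:00"
# Enable scavenging on the server and apply the settings to all zones
Set-DnsServerScavenging -ScavengingState $true -ScavengingInterval "7.00:00:00" -ApplyOnAllZones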
2. Run dnscmd /zoneexport <zone name> <zone file name>, where <zone name> is the name of your DNS zone, and <zone file name> is the file that you want to create to
hold the backup information.
The dnscmd tool exports the zone data to the file name that you designate in the command, to the
%windir%\System32\DNS directory.
You can also use Windows PowerShell to perform the same task. In Windows PowerShell, you use the
Export-DnsServerZone cmdlet. For example, if you want to export a zone named contoso.com, type the
following command:
Export-DnsServerZone -Name contoso.com -FileName contoso
Note: If DNSSEC is configured, the security information will not be exported with these
commands.
Forwarding
Conditional forwarding
Stub zones
Netmask ordering
Forwarding
A forwarder is a DNS server that you configure to forward DNS queries for host names that it cannot
resolve to other DNS servers for resolution. In a typical environment, the internal DNS server forwards
queries for external DNS host names to DNS servers on the Internet. For example, if the local network
DNS server cannot resolve a query for www.microsoft.com, then the local DNS server can forward the
query to the Internet service provider's (ISP's) DNS server for resolution.
Conditional Forwarding
You also can use conditional forwarders to forward queries according to specific domain names. A
conditional forwarder is a setting that you configure on a DNS server that enables forwarding DNS queries
based on the query's DNS domain name. For example, you can configure a DNS server to forward all
queries that it receives for names ending with corp.adatum.com to the IP address of a specific DNS server,
or to the IP addresses of multiple DNS servers. This can be useful when you have multiple DNS
namespaces in a forest, or a partner's DNS namespace across firewalls. For example, suppose Contoso.com
and Adatum.com merge. Rather than requiring each domain to host a complete replica of the other
domain's DNS database, you could create conditional forwarders so that they point to each other's
specific DNS servers for resolution of internal DNS names.
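The following sketch shows how forwarders and a conditional forwarder might be configured from Windows PowerShell; the forwarder and master server addresses are hypothetical, and corp.adatum.com comes from the example above.

# Forward all unresolved external queries to the ISP DNS servers (example addresses)
Set-DnsServerForwarder -IPAddress 131.107.0.100, 131.107.0.101
# Forward queries for corp.adatum.com to a specific DNS server, and replicate the
# conditional forwarder to all DNS servers in the forest
Add-DnsServerConditionalForwarderZone -Name "corp.adatum.com" -MasterServers 172.16.18.10 -ReplicationScope "Forest"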
Stub Zones
A stub zone is a copy of a zone that contains only those resource records necessary to identify that zone's
DNS servers. A stub zone resolves names between separate DNS namespaces, which might be necessary
when you want a DNS server that is hosting a parent zone to remain aware of all the DNS servers for one
of its child zones. A stub zone that is hosted on a parent domain DNS server will receive a list of all new
DNS servers for the child zone, when it requests an update from the stub zone's master server. By using
this method, the DNS server that is hosting the parent zone maintains a current list of the DNS servers for
the child zone as they are added and removed.
The delegated zone's start of authority (SOA) resource record, name server (NS) resource records, and
host (A) resource records.
The IP address of one or more master servers that you can use to update the stub zone.
You can replicate stub zones either in the domain only, or throughout the entire forest or any other
replication scope configured by Active Directory application partitions.
Stub zone master servers are one or more DNS servers that are responsible for the initial copy of the
zone information; the master server is usually the DNS server that is hosting the primary zone for the
delegated domain name.
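As an illustration, you could also create a stub zone with Windows PowerShell by using the Add-DnsServerStubZone cmdlet; the zone name and master server address below are examples only:
Add-DnsServerStubZone -Name "sales.contoso.com" -MasterServers 172.16.0.20 -ReplicationScope Forest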
Netmask Ordering
There are various reasons to associate multiple IP addresses with a single name, for example, load balancing a web page. Netmask ordering prioritizes the addresses returned for a DNS query so that resources on the client computer's local subnet are listed first. In other words, addresses of hosts that are on the same subnet as the requesting client will have a higher priority in the DNS response to the client computer.
Localization is based on IP addresses. For example, if multiple A records are associated with the same DNS
name, and each A record is located on a different IP subnet, netmask ordering returns an A record that is
on the same IP subnet as the client computer that made the request.
This image shows an example of netmask ordering.
2.
Create a new forward lookup zone named GlobalNames (not case sensitive). Do not allow dynamic
updates for this zone.
3.
Manually create CNAME records that point to records that already exist in the other zones that are
hosted on your DNS servers.
For example, you could create a CNAME record in the GlobalNames zone named Data that points to
Data.contoso.com. This enables clients from any DNS domain in the organization to find this server by the
single-label name Data.
You also can use the Windows PowerShell cmdlets Get-DnsServerGlobalNameZone and Set-DnsServerGlobalNameZone to configure GlobalNames zones.
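For example, a minimal sketch that enables GlobalNames zone support and then verifies the setting (using the cmdlets named above) might be:
Set-DnsServerGlobalNameZone -Enable $true
Get-DnsServerGlobalNameZone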
2.
3.
Alternatively, you can use the Windows PowerShell Set-DnsServerCache cmdlet with the -LockingPercent parameter to set this value. For example:
Set-DnsServerCache -LockingPercent <value>
2.
3.
In Windows Server 2012, the dnscmd command functions have been ported to Windows PowerShell cmdlets.
To configure the DNS socket pool size, open an elevated Windows PowerShell window and perform the
following steps:
1.
2.
3.
4.
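The lab later in this module uses dnscmd for this task. As a sketch, you could view the current setting and then change the socket pool size with commands such as the following (the value 3,000 is an example, and the DNS Server service typically must be restarted for the change to take effect):
Get-DnsServer                          # view the current socket pool size
dnscmd /config /socketpoolsize 3000    # set a new socket pool size
Restart-Service DNS                    # restart the DNS Server service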
DNSSEC
DNSSEC enables a DNS zone and all records in the zone to be signed cryptographically so that client
computers can validate the DNS response. DNS is often subject to various attacks, such as spoofing and
cache-tampering. DNSSEC helps protect against these threats and provides a more secure DNS
infrastructure.
Trust Anchors
A trust anchor is an authoritative entity that is represented by a public key. The TrustAnchors zone stores
preconfigured public keys that are associated with a specific zone. In DNS, the trust anchor is the DNSKEY
or DS resource record. Client computers use these records to build trust chains. You must configure a trust
anchor from the zone on every domain DNS server to validate responses from that signed zone. If the
DNS server is a domain controller, then Active Directory-integrated zones can distribute the trust anchors.
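For example, after trust anchors are distributed, you could confirm them on a DNS server by using the Get-DnsServerTrustAnchor cmdlet (the zone name is illustrative):
Get-DnsServerTrustAnchor -Name "Adatum.com"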
Deploying DNSSEC
To deploy DNSSEC:
1.
Install Windows Server 2012, and assign the DNS role to the server. Typically, a domain controller also
acts as the DNS server. However, this is not a requirement.
2.
Sign the DNS zone by using the DNSSEC Configuration Wizard, which is located in the DNS console.
3.
4.
Configure the zone signing parameters. This option guides you through the steps and enables you to
set all values for the key signing key (KSK) and the zone signing key (ZSK).
Sign the zone with parameters of an existing zone. This option enables you to keep the same values
and options that are set in another signed zone.
Use recommended settings. This option signs the zone by using the default values.
Note: Zones also can be unsigned, by using the DNSSEC management user interface to
remove zone signatures.
DNSSEC uses the following resource records:
DNSKEY. This record publishes the public key for the zone. It checks the authority of a response against the private key held by the DNS server. These keys require periodic replacement through key rollovers. Windows Server 2012 supports automated key rollovers. Every zone has multiple DNSKEY records that are then broken down into the ZSK and the KSK.
Delegation Signer (DS). This record is a delegation record that contains the hash of the public key of a child zone. This record is signed by the parent zone's private key. If a child zone of a signed parent also is signed, the DS records from the child must be manually added to the parent so that a chain of trust can be created.
Resource Record Signature (RRSIG). This record holds a signature for a set of DNS records. It is used to check the authority of a response.
Next Secure (NSEC). When the DNS response has no data to provide to the client, this record authenticates that the host does not exist.
NSEC3. This record is a hashed version of the NSEC record. It helps prevent attacks that enumerate the zone.
You can manage DNSSEC by using Windows PowerShell cmdlets such as the following:
Add-DnsServerResourceRecordDnsKey. You use this cmdlet to add a DNSKEY resource record to a zone.
Add-DnsServerResourceRecordDS. You use this cmdlet to add a DS resource record to a zone.
Add-DnsServerTrustAnchor. You use this cmdlet to add a trust anchor to a DNS server.
Add-DnsServerSigningKey. You use this cmdlet to add a key signing key or zone signing key to a signed zone.
Export-DnsServerDnsSecPublicKey. You use this cmdlet to export the DS and DNSKEY information for a signed zone.
Get-DnsServerDnsSecZoneSetting. You use this cmdlet to get the DNSSEC settings for a zone.
Get-DnsServerSetting. You use this cmdlet to retrieve DNS server settings.
Set-DnsServerDnsSecZoneSetting. You use this cmdlet to change the DNSSEC settings for a zone.
Step-DnsServerSigningKeyRollover. You use this cmdlet to force a key rollover that is waiting for a parent delegation signer (DS) record update.
Demonstration Steps
Configure DNSSEC
1.
2.
3.
Use the DNSSEC Zone Signing Wizard to sign the Adatum.com zone.
4.
5.
6.
Add the Key Signing Key by accepting default values for the new key.
7.
Add the Zone Signing Key by accepting the default values for the new key.
8.
9.
Do not choose to enable the distribution of trust anchors for this zone.
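You can also sign a zone from Windows PowerShell instead of by using the wizard. A minimal sketch that signs the zone with the default settings (zone name as in the demonstration) might be:
Invoke-DnsServerZoneSign -ZoneName "Adatum.com" -SignWithDefault -Force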
Lesson 3
Implementing IPAM
With the development of IPv6 and the proliferation of devices that require IP addresses, networks have
become complex and difficult to manage. Maintaining an updated list of static IP addresses that have
been issued has often been a manual task, which can lead to errors. To help organizations manage IP
addresses, Windows Server 2012 provides the IP Address Management (IPAM) tool.
Lesson Objectives
After completing this lesson, you will be able to:
Describe IPAM.
What Is IPAM?
IP address management is a difficult task in large
networks, because tracking IP address usage is
largely a manual operation. Windows Server 2012
introduces IPAM, which is a framework for
discovering, auditing, monitoring utilization, and
managing the IP address space in a network.
IPAM enables the administration and monitoring
of DHCP and DNS, and provides a comprehensive
view of where IP addresses are used. IPAM collects
information from domain controllers and Network
Policy Servers (NPSs), and then stores that
information in the Windows Internal Database.
IPAM assists in the areas of IP administration, as shown in the following table.
IP administration area
IPAM capabilities
Planning
Provides a tool set that can reduce the time and expense of the planning
process when network changes occur.
Managing
Tracking
Auditing
Characteristics of IPAM
Characteristics of IPAM include:
A single IPAM server can support up to 150 DHCP servers and 500 DNS servers.
A single IPAM server can support up to 6,000 DHCP scopes and 150 DNS zones.
IPAM stores three years of forensics data (IP address leases, host MAC addresses, user logon and
logoff information) for 100,000 users in a Windows Internal Database when using Windows Server
2012. Windows Server 2012 R2 added the option to select a Windows Internal Database or SQL
Server. There is no database purge policy provided, and the administrator must purge the data
manually as needed.
IPAM on Windows Server 2012 supports only Windows Internal Database. An external database is
supported only when IPAM is implemented on Windows Server 2012 R2.
IPAM does not check for IP address consistency with routers and switches.
Benefits of IPAM
IPAM benefits include:
Static IP inventory management, lifetime management, and DHCP and DNS record creation and
deletion.
Note: IPAM has limited support for management and configuration of non-Microsoft network
elements.
Role-based access control (RBAC). RBAC for IPAM allows you to customize roles, access scopes, and access policies for IPAM
administrators.
Virtual address space management. You can use IPAM to manage IP addresses in a Microsoft-based
network. You can manage both physical and virtual addresses. Integration between IPAM and Virtual Machine Manager (VMM) allows end-to-end address space management. You can view virtual address space in the IPAM console's new VIRTUALIZED ADDRESS SPACE node.
Enhanced DHCP server management. DHCP management is improved in Windows Server 2012
R2, and includes new DHCP scope and DHCP server operations. Additionally, views were added for
DHCP failover, DHCP policies, DHCP superscopes, DHCP filters, and DHCP reservations.
External database support. You can configure IPAM to use a Windows Internal Database (WID).
Support for using Microsoft SQL Server was added in Windows Server 2012 R2.
Upgrade and migration support. You can upgrade the IPAM database from Windows Server 2012 to
Windows Server 2012 R2.
Enhanced Windows PowerShell support. IPAM includes more than 50 different Windows PowerShell
commands.
For a complete list of the available commands, review IPAM Server cmdlets in Windows PowerShell.
http://go.microsoft.com/fwlink/?LinkID=386637
IPAM Overview
IPAM architecture consists of four main modules,
as listed in the following table.
Module
Description
IPAM discovery
You use AD DS to discover servers that are running Windows Server 2008 and newer
Windows Server operating systems, and that have DNS, DHCP, or AD DS installed. You can
define the scope of discovery to a subset of domains in the forest. You also can add servers
manually.
IP address space management
You can use this module to view, monitor, and manage the IP address space. You can issue
addresses dynamically or assign them statically. You also track address utilization and detect
overlapping DHCP scopes.
Multi-server management and monitoring
You can manage and monitor multiple DHCP servers. This enables tasks to execute across
multiple servers. For example, you can configure and edit DHCP properties and scopes, and
track the status of DHCP and scope utilization. You also can monitor multiple DNS servers,
and monitor the health and status of DNS zones across authoritative DNS servers.
Operational auditing and IP address tracking
You can use the auditing tools to track potential configuration problems. You can also
collect, manage, and view details of configuration changes from managed DHCP servers.
You also can collect address lease tracking from DHCP lease logs, and collect logon event
information from NPS and domain controllers.
The IPAM server can manage only one Active Directory forest. As such, you can deploy IPAM in one of
three topologies:
Hybrid. You deploy a central IPAM server together with a dedicated IPAM server in each site. You can
manage DHCP services, DNS services, and NPS services for multiple IPAM servers with a central server.
This allows local administrators to manage local servers, while allowing all the servers to be managed
from a central location, if necessary.
Note: IPAM servers do not communicate with one another or share database information.
If you deploy multiple IPAM servers, you must customize each server's discovery scope.
IPAM has two main components:
IPAM server. The IPAM server performs the data collection from the managed servers. It also manages
the Windows Internal Database and provides RBAC.
IPAM client. The IPAM client provides the client computer user interface. It also interacts with the
IPAM server, and invokes Windows PowerShell to perform DHCP configuration tasks, DNS
monitoring, and remote management.
The following settings must be in place on the managed domain controller and NPS servers, DHCP servers, and DNS servers so that the IPAM server can access them:
<domain>\IPAMUG group. Added as a member of the BUILTIN\Event Log Readers group.
Windows Firewall with Advanced Security. Inbound firewall rules to allow Remote Event Log Management.
Network share. Share the %SYSTEMROOT%\System32\DHCP folder as DHCPAudit, and grant the IPAMUG group read permissions.
Event log monitoring on DNS servers. Modify the HKLM\SYSTEM\CurrentControlSet\Services\EventLog\DNS Server registry key.
Additional settings. Add the <domain>\IPAMUG group as a DNS Administrator.
If you choose to use GPO provisioning, you will run the Invoke-IpamGpoProvisioning Windows
PowerShell command. Running this command will create three GPOs to configure the settings described
above.
IPAM_DC_NPS. This GPO is applied to all managed AD DS servers and NPS servers.
IPAM_DHCP. This GPO is applied to all managed DHCP servers. This GPO includes scripts to configure
the network share for DHCP monitoring.
IPAM_DNS. This GPO is applied to all managed DNS servers. This GPO includes scripts to configure
the event log for DNS monitoring and to configure the IPAMUG group as a DNS administrator.
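For example, the demonstration later in this lesson uses the following command to provision the GPOs for the Adatum.com domain:
Invoke-IpamGpoProvisioning -Domain Adatum.com -GpoPrefixName IPAM -IpamServerFqdn LON-SVR2.adatum.com -DelegatedGpoUser Administrator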
You must be a member of the correct IPAM local security group on the IPAM server.
You must enable logging of account logon events on domain controller and NPS servers for IPAM's IP
address tracking and auditing feature.
In addition to the previously mentioned requirements, if you manage Windows Server 2008 and Windows
Server 2008 R2 with IPAM, the Windows Server 2008 or Windows Server 2008 R2 servers require the following:
For Windows Server 2008 SP2, Windows Management Framework Core (KB968930) also is required.
Demonstration Steps
Install IPAM
1.
2.
In the Server Manager, add the IPAM feature and all required supporting features.
Configure IPAM
1.
In the IPAM Overview pane, provision the IPAM server using Group Policy.
2.
Enter IPAM as the Group Policy Object (GPO) name prefix, and provision IPAM. Provisioning will take
a few minutes to complete.
3.
In the IPAM Overview pane, configure server discovery for the Adatum domain.
4.
In the IPAM Overview pane, start the server discovery process. Discovery may take five to 10 minutes
to run. The yellow bar indicates when discovery is complete.
5.
6.
7.
Use Windows PowerShell to grant the IPAM server permission to manage LON-DC1 by using the
following command:
Invoke-IpamGpoProvisioning -Domain Adatum.com -GpoPrefixName IPAM -IpamServerFqdn LON-SVR2.adatum.com -DelegatedGpoUser Administrator
8.
9.
Switch to LON-DC1.
the virtual addresses used by the customers. The only address space created during installation is the
Default IP Address Space, which is a provider address space located in the VIRTUALIZED IP ADDRESS
SPACE pane.
To create a new Address space, you use the Add-IpamAddressSpace Windows PowerShell cmdlet. When
you create a virtual address space, you must specify a friendly name for the address space, regardless of
whether it is a provider or a customer address space. Additionally, you can add an optional description.
When you create a customer address space, you also must specify the provider address space in which the
customer address space resides, and the isolation method the customer network uses.
To create a new provider address space for the AdatumHQ datacenter based virtual systems, use the
following Windows PowerShell cmdlet.
Add-IpamAddressSpace -Name AdatumHQ -ProviderAddressSpace -Description "Adatum HQ Datacenter"
When you create a customer address space, you must configure additional settings. A customer address
space must reside in a provider address space. Additionally, you must specify how the customer network
will interact with other networks when you specify the network isolation method as either IPRewrite or
Network Virtualization using Generic Routing Encapsulation (NVGRE). IPRewrite is a static isolation
method in which each customer IP address gets rewritten when you use a physical address from the
provider network. Network Virtualization using Generic Routing Encapsulation (NVGRE) is an isolation
method that encapsulates the customer traffic and sends all of that traffic using a single IP address from
the provider network.
To create a new customer address space for the Security department, using the AdatumHQ provider
address space and NVGRE isolation, use the following Windows PowerShell cmdlet.
Add-IpamAddressSpace -Name "Security Department" -CustomerAddressSpace AssociatedProviderAddressSpace "AdatumHQ" -IsolationMethod NVGRE Description
Security Department Network
You can create additional optional settings as part of the Windows PowerShell command or add them manually
after creation. These optional settings include custom fields such as AD site or VMM IP Pool Name.
IPAM RBAC
Windows Server 2012 R2 includes RBAC for IPAM.
RBAC allows you to customize how administrative
permissions are defined in IPAM. For example,
some people are assigned the administrator role
and are able to manage all aspects of IPAM, while
other administrators may only be allowed to
manage certain network objects. By default, all
objects inherit the scope of their parent object. To
change the Access Scope of an object, right-click
the object and click on Set Access Scope.
RBAC security is divided into the following three
aspects: roles, access scopes, and access policies:
Roles. A role is a collection of IPAM operations. The roles define the actions an administrator is
allowed to perform. Roles are associated with Windows groups and/or users through the use of
access policies. There are eight built-in RBAC roles for IPAM. New roles are created and added in the
IPAM console, in the ACCESS CONTROL pane.
For example, the built-in IPAM administrator role can view all IPAM data and manage all IPAM features.
Access scopes. Access scopes define the objects to which an administrator has access. By default, the
Global access scope is created when IPAM is installed, and all administrator-created access scopes are
sub-scopes of the Global access scope. Users or groups assigned to the Global access scope can
manage all the network objects in IPAM. Access scopes have up to 15 major operations that can be
assigned, such as DHCP server operations. These are further defined by multiple related operations,
such as Create DHCP scope, that can be assigned individually. This allows for a large administrative
permissions customization range in IPAM. You can create and add new access scopes in the IPAM
console, in the ACCESS CONTROL pane.
Access Policies. An access policy combines a role with an access scope to assign RBAC permissions
within IPAM. You can create and add new access policies in the IPAM console, in the ACCESS
CONTROL pane.
Lesson 4
Lesson Objectives
After completing this lesson, you will be able to:
Monitor IPAM.
IP address subnets
IP address ranges
IP addresses
IP address inventory
IP Address Blocks
IP address blocks are the highest-level entities within an IP address space organization. Conceptually, an
IP block is either a private IP address space or a public IP address space assigned to an organization by
various Regional Internet Registries. Network administrators use IP address blocks to create and allocate IP
address ranges to DHCP. They can add, import, edit, and delete IP address blocks. IPAM automatically
maps IP address subnets to the appropriate IP address block based on the boundaries of the range. IPAM
utilization statistics and trends are summarized at the block level.
IP Address Subnets
IP address subnets are the next hierarchical level of address space entities after IP address blocks. IPAM
summarizes utilization statistics and trends at the IP address subnet level for the IP address ranges that the
IP address subnet contains. Additionally, you can create subnets as either physical or virtual; if subnets are
virtual, they can be assigned to either a provider or a customer virtual network.
IP Address Ranges
IP address ranges are the next hierarchical level of IP address space entities after IP address subnets.
Conceptually, an IP address range is an IP subnet, or part of an IP subnet marked by a start and end IP
address. It typically corresponds to a DHCP scope, or to a static IPv4 or IPv6 address range or address pool
that assigns addresses to hosts. An IP address range is uniquely identifiable by the value of the mandatory
Managed by Service and Service Instance options, which help IPAM manage and maintain overlapping or
duplicate IP address ranges from the same console. You can add or import IP address ranges from within
the IPAM console. Whenever an IP address range is created, it is associated automatically with an IP
address subnet. If a subnet does not exist, one can be automatically created when the IP address range is
created.
IP Addresses
IP addresses are the addresses that make up the IP address range. IPAM enables end-to-end life cycle
management of IPv4 and IPv6 addresses, including record synchronization with DHCP and DNS servers.
IPAM automatically maps an address to the appropriate range based on the range's start and end
address. An IP address is uniquely identifiable by the value of mandatory Managed By Service and Service
Instance options that help IPAM manage and maintain duplicate IP addresses from the same console. You
can add or import IP addresses from within the IPAM console.
IP Address Inventory
In the IP address inventory view, you can view a list of all IP addresses in the enterprise, along with their
device names and type. IP address inventory is a logical group defined by the Device Type option within
the IP addresses view. These groups allow you to customize the way your address space displays for
managing and tracking IP usage. You can add or import IP addresses from within the IPAM console. For
example, you could add the IP addresses for printers or routers, assign IP addresses to the appropriate
device type of printer or router, and then view your IP inventory filtered by the device type that you
assigned.
View
Description
DNS and DHCP servers
By default, managed DHCP and DNS servers are arranged by their network interface in
/16 subnets for IPv4 and /48 subnets for IPv6. You can select the view to see just DHCP
scope properties, just DNS server properties, or both.
DHCP scopes
The DHCP scope view enables scope utilization monitoring. Utilization statistics are
collected periodically and automatically from a managed DHCP server. You can track
important scope properties such as Name, ID, Prefix Length, and Status.
DNS zone monitoring
Zone monitoring is enabled for forward and reverse lookup zones. Zone status is based
on events collected by IPAM. The status of each zone is summarized.
Server groups
You can organize your managed DHCP and DNS servers into logical groups. For example,
you might organize servers by business unit or geography. Groups are defined by
selecting the grouping criteria from built-in fields or user-defined fields.
Note: The term prefix length is equivalent to using the term subnet mask when you define
an address range. Prefix length is used in PowerShell and refers to the routing prefix that Classless
Inter-Domain Routing (CIDR) notation uses. For example:
192.168.2.0/24, i.e. the 192.168.2.0 network with a prefix length of 24, is equivalent to
192.168.2.0/255.255.255.0, i.e. 192.168.2.0 with a network mask of 255.255.255.0.
IP Address Block. When you add an IP address block and supply the Network ID and Prefix length, the Start IP address and End IP address are calculated automatically. Additionally, if you
enter a non-private IP address range, you must specify the Regional Internet Registry where the
addresses are registered and the registration date range. Optionally, you can add a brief description
and an owner.
You can also use the Windows PowerShell cmdlet Add-IpamBlock to add an IP address block:
Add-IpamBlock -NetworkId <network prefix, in Classless Inter-Domain Routing (CIDR) notation> -Rir <string>
The RIR value is optional for private addresses. If you specify the RIR, the value must be one of the following: AFRINIC, APNIC, ARIN, LACNIC, or RIPE.
IP Address Subnet. When you add an IP Address subnet, you must provide a friendly name for the
subnet. Additionally, you must specify the Network ID and Prefix length.
There are several optional settings when you add an IP Address subnet. You can specify one or more
VLANs to be associated with the subnet, whether or not the subnet is virtualized, or custom fields such
as AD site or VMM IP Pool Name. As with the other IP address types, you can add a brief description
and an owner.
You can also use the Windows PowerShell cmdlet Add-IpamSubnet to add an IP address subnet. When you use Add-IpamSubnet, you also must specify whether the network type is NonVirtualized, Provider, or Customer IP subnet. You must specify the address space to which a Customer IP subnet will be added.
Add-IpamSubnet -Name <string> -NetworkId <network prefix, in Classless Inter-Domain Routing (CIDR) notation>
IP Address Range. You can use an IP Address range to further divide an IP Subnet. When you create
an IP address range you must specify the Network ID and either the Prefix length or Subnet mask.
Additionally, if an IP address subnet does not already exist that contains the addresses in the IP address
range you create, you can select to have one created automatically. The other required fields,
Managed by Service, Service Instance, and Assignment Type will use default values unless otherwise
specified. As with the other IP address types, a large variety of custom fields is available to describe
the IP address range.
You also can use the Windows PowerShell cmdlet Add-IpamRange to add an IP address range. When you use Add-IpamRange, you must also specify whether the network type is NonVirtualized, Provider, or Customer IP range. You must specify the address space to which a Customer IP range will be added.
Add-IpamRange -NetworkId <network prefix, in Classless Inter-Domain Routing (CIDR) notation> -CreateSubnetIfNotFound
IP Address. IPAM provides end-to-end IP address management, including synchronization with DHCP
and DNS. You can use the IP address to associate the address with DHCP reservations; however, when
you use Windows PowerShell to create the IP address, IPAM does not create the reservation
automatically. You can discover duplicate addresses by the Managed by Service and Service Instance
properties of an IP address. IPAM automatically maps an address to the range containing the address.
When you create an IP address, the only required information that you must provide is the IP address
itself. The other required fields, Managed by Service, Service Instance, Device Type, Address State, and
Assignment Type, will use default values unless otherwise specified. As with the other IP address
types, a large variety of custom fields is available to describe the IP address.
You can use the Windows PowerShell cmdlet Add-IpamAddress to add an IP Address. When you use
Add-IpamAddress, you also must specify the IP address.
Add-IpamAddress -IpAddress <x.x.x.x>
Data must be valid for the field into which it is being imported.
For example, you can use the following entries in a text file to import two addresses into the IPAM
database that manages a DHCP server named DHCP1.adatum.com:
IP Address,Managed by Service,Service Instance,Device Type,IP Address
State,Assignment Type
10.10.0.25,ms dhcp,dhcp1.adatum.com,host,in-use,static
10.10.0.26,ms dhcp,dhcp1.adatum.com,host,in-use,static
For IP address blocks, subnets, and ranges, the network ID and network prefix length are combined in a
single field named Network. For example, to import an IP Address block of 65.52.0.0/14 assigned by the
ARIN regional authority, use the following entries in a text file:
Network,Start IP address,End IP address,RIR
65.52.0.0/14,65.52.0.0,65.52.255.255,ARIN
If a required field is missing or you try to import the wrong data type for a field, an error report is created
in the user's Documents folder. The mandatory fields for importing data are as follows:
IP address range import: Network, Start IP address, End IP address, Managed by Service, Service
Instance, Assignment Type, Utilization Calculation
IP address import: IP address, Managed by Service, Service Instance, Device Type, IP Address State,
Assignment Type
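If the Import-IpamAddress cmdlet is available (it was added with the IPAM cmdlets in Windows Server 2012 R2), you could also perform the import from Windows PowerShell; the file path shown is illustrative:
Import-IpamAddress -AddressFamily IPv4 -Path "C:\Import\Addresses.csv"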
Reclaim IP Addresses
When manually added IP addresses are no longer in use, you need to reclaim them to make them available for use with other devices. Additionally, the reclaim operation cleans up DHCP reservations and DNS records on managed DNS and DHCP servers. There are two ways to reclaim IP addresses:
To reclaim IP addresses in a range, in the IP ADDRESS SPACE, change the current view to IP Address
Ranges. The Reclaim IP Addresses task is available if you right-click the desired IP address range. If you
choose this operation, it opens the Reclaim IP Addresses dialog box.
The Reclaim IP Addresses dialog box displays all the utilized IP addresses for the range, the IP Address
State, and additional information such as the Device Name and Device Type. Once you have determined
the IP addresses that you want to reclaim, select the check box next to the IP addresses, and click
the Reclaim button. By default, this operation removes the DNS resource records and DHCP reservations.
Edit IP Address
The Edit IP Address dialog box allows you to add
information to an IP address or change information that was previously configured. You can modify all
aspects of the IP address information.
Create Operations
There are three options available for creating records for an IP Address. These include:
Create DHCP Reservation. This option creates a DHCP reservation in the appropriate IP Address
Range.
Create DNS Host Record. This option creates a DNS record on the appropriate DNS server or servers
for the IP Address Range.
Create DNS PTR Record. This option creates a DNS PTR record on the appropriate DNS server or
servers for the IP Address Range.
Delete Operations
There are four options available for deleting IP addresses or the information associated with them. These
include:
Delete. The delete option will remove the IP address from the IPAM database. By default, this will
remove the DNS records and DHCP reservations if they exist.
Delete DHCP Reservation. This option will remove any DHCP reservations created for the IP address, without removing the IP address from the IPAM database.
Delete DNS Host Record. This option will remove any DNS host records for the IP address, without removing the IP address from the IPAM database.
Delete DNS PTR Record. This option will remove any DNS PTR records for the IP address, without removing the IP address from the IPAM database.
Demonstration Steps
1.
2.
3.
4.
On LON-SVR2, add an IP address block in the IPAM console with the following parameters:
Prefix length: 16
Add IP addresses for the network router by adding to the IP Address Inventory with the following
parameters:
IP address: 172.16.0.1
IP address: 172.16.0.101
Use the IPAM console to create the DNS host record as follows:
5.
On LON-DC1, open the DHCP console and confirm that the reservation was created in the 172.16.0.0
scope.
6.
On LON-DC1, open the DNS Manager console and confirm that the DNS host record was created.
IPAM Monitoring
The IPAM address space management feature
allows you to efficiently view, monitor, and
manage the IP address space on the network.
Address space management supports IPv4 public
and private addresses, and IPv6 global and unicast
addresses. By using the MONITOR AND MANAGE
section and the DNS and DHCP, DHCP Scopes,
DNS Zone Monitoring, and Server Groups views,
you can view and monitor health and
configuration of all the DNS and DHCP servers
that IPAM manages. IPAM uses scheduled tasks to
periodically collect data from managed servers.
You also can retrieve data on demand by using the Retrieve All Server Data option.
Utilization Monitoring
Utilization data is maintained for IP address ranges, IP address blocks, and IP range groups within IPAM.
You can configure thresholds for the percentage of the IP address space that is utilized, and then use
those thresholds to determine under-utilization and over-utilization.
You can perform utilization trend building and reporting for IPv4 address ranges, blocks, and range
groups. The utilization trend window allows you to view trends over time periods, such as daily, weekly,
monthly, or annually. You also can view trends over custom date ranges. Utilization data from managed
DHCP scopes is auto-discovered, and you can view this data.
Demonstration Steps
1.
On LON-SVR2, review the information displayed in the DNS and DHCP Servers pane in the IPAM
console.
2.
3.
4.
Objectives
In this lab, you will see how to:
Lab Setup
Estimated Time: 70 minutes
Virtual machines: 20412D-LON-DC1, 20412D-LON-SVR1,
20412D-LON-SVR2, 20412D-LON-CL1
User Name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
5.
Password: Pa$$w0rd
Configure a superscope.
2.
3.
2.
On LON-DC1, configure a scope named Scope1, with a range of 192.168.0.50 to 192.168.0.100, and
with the following settings:
Router: 192.168.0.1
Configure a second scope named Scope2 with a range of 192.168.1.50 to 192.168.1.100, and with
the following settings:
Router: 192.168.1.1
3.
Create a superscope called AdatumSuper that has Scope1 and Scope2 as members.
4.
Switch to the DHCP console on LON-DC1, and enable DHCP Name Protection found on the DNS
tab of the IPv4 node.
On LON-SVR1, start the DHCP console and observe the current state of DHCP. Note that the server is
authorized, but that no scopes are configured.
2.
3.
4.
5.
On LON-SVR1, refresh the IPv4 node. Notice that the IPv4 node is active, and that Scope Adatum is
configured.
6.
7.
8.
9.
Results: After completing this exercise, you will have configured a superscope, configured DHCP Name
Protection, and configured and verified DHCP failover.
Configure DNSSEC.
2.
3.
4.
2.
Use the DNSSEC Zone Signing Wizard to sign the Adatum.com zone.
3.
4.
5.
Add the Key Signing Key by accepting the default values for the new key.
6.
Add the Zone Signing Key by accepting the default values for the new key.
7.
8.
9.
10. Verify that the DNSKEY resource records have been created in the Trust Points zone.
11. Minimize the DNS console.
12. Use the Group Policy Management Console, in the Default Domain Policy object, to configure the
Name Resolution Policy Table.
13. Create a rule that enables DNSSEC for the Adatum.com suffix, and that requires DNS clients to verify
that the name and address data were validated.
2.
Run the following command to view the current size of the socket pool:
Get-DnsServer
3.
Run the following command to change the socket pool size to 3,000:
dnscmd /config /socketpoolsize 3000
4.
5.
Run the following command to confirm the new socket pool size:
Get-DnsServer
Run the following command to view the current cache lock size:
Get-DnsServer
2.
Run the following command to change the cache lock value to 75 percent:
Set-DnsServerCache -LockingPercent 75
3.
4.
Run the following command to confirm the new cache lock value:
Get-DnsServer
Create an Active Directory-integrated forward lookup zone named Contoso.com, by running the
following command:
Add-DnsServerPrimaryZone -Name Contoso.com -ReplicationScope Forest
2.
3.
Create an Active Directory-integrated forward lookup zone named GlobalNames by running the
following command:
Add-DnsServerPrimaryZone -Name GlobalNames -ReplicationScope Forest
4.
Open the DNS Manager console, and add a new host record to the Contoso.com domain named
App1 with the IP address of 192.168.1.200.
5.
In the GlobalNames zone, create a new alias named App1 using the FQDN of App1.Contoso.com.
6.
Results: After completing this exercise, you will have configured DNSSEC, the DNS socket pool, DNS
cache locking, and the GlobalNames zone.
2.
3.
4.
5.
6.
Configure IP address blocks, record IP addresses, and create DHCP reservations and DNS records.
7.
On LON-SVR2, install the IP Address Management (IPAM) Server feature by using the Add Roles
and Features Wizard in Server Manager.
On LON-SVR2, in the Server Manager, in the IPAM Overview pane, provision the IPAM server using
Group Policy.
2.
Enter IPAM as the GPO name prefix, and provision IPAM using the Provision IPAM Wizard.
In the IPAM Overview pane, configure server discovery for the Adatum domain.
2.
In the IPAM Overview pane, start the server discovery process. Discovery may take five to 10 minutes
to run. The yellow bar will indicate when discovery is complete.
In the IPAM Overview pane, add the servers that you need to manage. Verify that IPAM access is
currently blocked for both LON-DC1 and LON-SVR1.
2.
Use Windows PowerShell to grant the IPAM server permission to manage by running the following
command:
Invoke-IpamGpoProvisioning -Domain Adatum.com -GpoPrefixName IPAM -IpamServerFqdn LON-SVR2.adatum.com -DelegatedGpoUser Administrator
3.
For both LON-DC1 and LON-SVR1, set the manageability status to Managed.
4.
Switch to LON-DC1, and force the update of Group Policy using gpupdate /force.
5.
Switch to LON-SVR1, and force the update of Group Policy by using gpupdate /force.
6.
Return to LON-SVR2, and refresh the server access status for LON-DC1 and LON-SVR1 and the Server
Manager console view. It may take up to 10 minutes for the status to change. If necessary, repeat
both refresh tasks as needed until a green check mark displays next to LON-DC1 and the IPAM Access
Status displays as Unblocked.
7.
In the IPAM Overview pane, right-click LON-SVR1 and Retrieve All Server Data.
8.
In the IPAM Overview pane, right-click LON-DC1 and Retrieve All Server Data.
2.
On LON-SVR2, use IPAM to create a new DHCP scope with the following parameters:
Use IPAM to configure failover for the TestScope on LON-DC1 with the following parameters:
3.
4.
On LON-SVR2, add an IP address block in the IPAM console with the following parameters:
Prefix length: 16
2.
3.
4.
Add IP addresses for the network router by adding to the IP Address Inventory with the following
parameters:
IP address: 172.16.0.1
IP address: 172.16.0.10
Use the IPAM console to create the DNS host record as follows:
5.
On LON-DC1, open the DHCP console and confirm that the reservation was created in the 172.16.0.0
scope.
6.
On LON-DC1, open the DNS Manager console and confirm that the DNS host record was created.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have installed IPAM and configured IPAM with IPAM-related GPOs, IP management server discovery, managed servers, a new DHCP scope, IP address blocks, IP addresses, DHCP reservations, and DNS records.
Question: Will client computers immediately stop communicating on the network if there is
no functioning DHCP server?
Question: What is the default size of the DNS socket pool?
Question: What value does the DNS cache lock use to determine when to update an IP
address in the DNS cache?
Implement DHCP failover to ensure that client computers can continue to receive IP configuration
information in the event of a server failure.
Ensure that there are at least two DNS servers hosting each zone.
Troubleshooting Tip
Review Question
Question: What is one of the drawbacks of using IPAM?
Tools
Tool
Use
Location
Dnscmd
%systemroot%\System32\dnscmd.exe
DHCP console
%systemroot%\System32\dhcpmgmt.msc
DNS console
%systemroot%\System32\dnsmgmt.msc
IPAM management
console
Server Manager
Tool
Use
Location
Get-DnsServer
Windows PowerShell
Set-DnsServer
Windows PowerShell
Export-Clixml
Windows PowerShell
Import-Clixml
Windows PowerShell
Module 2
Implementing Advanced File Services
Module Overview
Storage requirements have been increasing since the inception of server-based file shares. The Windows
Server 2012 and Windows 8 operating systems include a new feature named data deduplication to
reduce the disk space that is required. This module provides an overview of this feature, and explains the
steps required to configure it.
In addition to minimizing disk space, another storage concern is the connection between the storage and
the remote disks. Internet SCSI (iSCSI) storage in Windows Server 2012 is a cost-effective feature that
helps create a connection between the servers and the storage. To implement iSCSI storage in Windows
Server 2012, you must be familiar with the iSCSI architecture and components. In addition, you must be
familiar with the tools that are provided in Windows Server to implement an iSCSI-based storage.
In organizations with branch offices, you have to consider slow links and how to use these links efficiently
when sending data between your offices. The Windows BranchCache feature in Windows Server 2012
helps address the problem of slow connectivity. This module explains the BranchCache feature, and how
to configure it.
Objectives
After completing this module, you will be able to:
Configure BranchCache.
Lesson 1
Lesson Objectives
After completing this lesson, you will be able to:
What Is iSCSI?
iSCSI is a protocol that supports access to remote,
small computer system interface (SCSI)-based
storage devices over a TCP/IP network. iSCSI
carries standard SCSI commands over IP networks
to facilitate data transfers over intranets, and to
manage storage over long distances. You can use
iSCSI to transmit data over local area networks
(LANs), wide area networks (WANs), or even over
the Internet.
iSCSI relies on standard Ethernet networking
architecture. Specialized hardware such as host
bus adapters (HBA) or network switches are
optional. iSCSI uses TCP/IP (typically, TCP port 3260). This means that iSCSI simply enables two hosts to
negotiate tasks, such as session establishment, flow control, and packet size, and then exchange SCSI commands by using an existing Ethernet network. By doing this, iSCSI uses a popular, high-performance, local storage bus subsystem architecture, and emulates it over LANs and WANs to create a storage area network (SAN). Unlike some SAN technologies, iSCSI requires no specialized cabling. You
can run it over the existing switching and IP infrastructure. However, you can increase the performance of
an iSCSI SAN deployment by operating it on a dedicated network or subnet, as best practices recommend.
Note: Although you can use a standard Ethernet network adapter to connect the server to
the iSCSI storage device, you can also use dedicated iSCSI HBAs.
An iSCSI SAN deployment includes the following:
TCP/IP network. You can use standard network interface adapters and standard Ethernet protocol
network switches to connect the servers to the storage device. To provide sufficient performance, the
network should provide speeds of at least 1 gigabit per second (Gbps), and should provide multiple
paths to the iSCSI target. As a best practice, use a dedicated physical and logical network to achieve
fast, reliable throughput.
iSCSI targets. iSCSI targets present, or advertise, storage, similar to controllers for hard disk drives of locally attached storage. However, this storage is accessed over a network instead of locally. Many storage vendors implement hardware-level iSCSI targets as part of their storage device's hardware. Other devices or appliances, such as Windows Storage Server 2012 devices, implement iSCSI targets by using a software driver together with at least one Ethernet adapter. Windows Server 2012 provides the iSCSI target server, which is effectively a driver for the iSCSI protocol, as a role service.
iSCSI initiators. The iSCSI target displays storage to the iSCSI initiator (also known as the client), which
acts as a local disk controller for the remote disks. All versions of Windows Server beginning with
Windows Server 2008 include the iSCSI initiator, and can connect to iSCSI targets.
iSCSI Qualified Name (IQN). IQNs are globally unique identifiers that are used to address initiators
and targets on an iSCSI network. When you configure an iSCSI target, you must configure the IQN for
the iSCSI initiators that will be connecting to the target. iSCSI initiators also use IQNs to connect to
the iSCSI targets. However, if name resolution on the iSCSI network is a possible issue, iSCSI endpoints
(both target and initiator) can be identified by their IP addresses.
Question: Can you use your organizations internal TCP/IP network to provide iSCSI?
Network/diskless boot. By using boot-capable network adapters or a software loader, you can use
iSCSI targets to deploy diskless servers quickly. By using differencing virtual disks, you can save up to
90 percent of the storage space for the operating system images. This is ideal for large deployments
of identical operating system images, such as a Hyper-V server farm, or for high-performance
computing (HPC) clusters.
Server application storage. Some applications such as Hyper-V and Microsoft Exchange Server require
block storage. The iSCSI target server can provide these applications with continuously available block
storage. Because the storage is remotely accessible, it can also combine block storage for central or
branch office locations.
Heterogeneous storage. iSCSI target server supports iSCSI initiators that are not based on the
Windows operating system, so you can share storage on Windows servers in mixed environments.
Lab environments. The iSCSI target server role enables your Windows Server 2012 computers to be
network-accessible block storage devices. This is useful in situations in which you want to test
applications before deploying them on SAN storage.
New/Updated
Description
Virtual disks
New
Manageability
Updated
Scalability limits
Updated
Local mount
functionality
Updated
iSCSI target servers that provide block storage utilize your existing Ethernet network; no additional
hardware is required. If high availability is an important criterion, consider setting up a high availability
cluster. With a high availability cluster, you will need shared storage for the cluster, either hardware Fibre Channel storage or a Serial Attached SCSI storage array. The iSCSI target server integrates directly into the
failover cluster feature as a cluster role.
iSCSI Initiator
The iSCSI initiator service has been a standard component installed by default since Windows Server 2008
and Windows Vista. To connect your computer to an iSCSI target, you simply start the Microsoft iSCSI
Initiator service, and then configure it.
The features in Windows Server 2012 include:
Authentication. You can enable Challenge Handshake Authentication Protocol (CHAP) to authenticate
initiator connections, or you can enable reverse CHAP to allow the initiator to authenticate the iSCSI
target.
Query initiator computer for ID. This is only supported with Windows 8.1 or Windows Server 2012.
The iSCSI initiator can discover targets in the following ways:
SendTargets discovery. The iSCSI Initiator performs an iSCSI discovery login and then a
SendTargets operation on portals (where a portal is a targets IP
and TCP port number pair) that are statically configured in the
iSCSI Initiator properties on the Discovery tab, by using the
Windows PowerShell New-IscsiTargetPortal cmdlet, or by
using the iscsicli AddTargetPortal command. Discovery occurs
under three conditions:
When a target portal is added.
When the service starts.
When a management application requests it.
Internet Storage Name Service (iSNS). You need to set a static address for the iSNS server by using the
iscsicli AddiSNSServer command. The iSCSI initiator gets a list
of targets under three conditions:
When the service starts.
When an application requests it.
When the iSNS server sends a state notification change.
HBA discovery
Multiple Connections per Session (MCS). Enables multiple TCP/IP connections from the initiator to the target for the same iSCSI session.
Supports automatic failover. If a failure occurs, all outstanding iSCSI commands are reassigned to
another connection automatically.
Requires explicit support by iSCSI SAN devices, although the Windows Server 2012 iSCSI target server
role supports it.
If you have multiple network interface cards in your iSCSI initiator and iSCSI target server, you can use
MPIO to provide failover redundancy during network outages.
MPIO requires a device-specific module (DSM) if you want to connect to a third-party SAN device
that is connected to the iSCSI initiator. The Windows operating system includes a default MPIO DSM
that is installed as the MPIO feature within Server Manager.
MPIO is widely supported. Many SANs can use the default DSM without any additional software,
while others require a specialized DSM from the manufacturer.
MPIO is more complex to configure, and is not as fully automated during failover as MCS is.
Description
Perhaps the most important security measure you can take is to segregate iSCSI traffic
from other network traffic. This prevents exposure of the iSCSI storage to the open
LAN. This segregation can be physical, by establishing network paths using separate
network equipment, or it can be logical through the use of virtual local
area networks (VLANs) and ACLs at the network layer.
Secure management
consoles
The management consoles that control access to data and storage allocation are often
web-based and have well-known default passwords. Use dedicated systems to access
these consoles.
Disable unneeded
services
Services not directly related to the iSCSI implementation should not be running on
systems involved in the iSCSI configuration.
Use CHAP
authentication
CHAP has two possible configurations: one-way CHAP or mutual CHAP authentication.
In one-way CHAP, the iSCSI target on the storage array authenticates the initiator on
the server.
In mutual CHAP, the target and initiator authenticate each other using separate secrets
for each direction of the connection.
iSCSI CHAP authentication can be configured by using Group Policy. These policies can be found under Computer Configuration, Administrative Templates, System, iSCSI, iSCSI Security.
CHAP authentication can be configured on the Configuration tab of the iSCSI Initiator.
Use IPsec
authentication
IPsec also supports IP traffic encryption to provide the highest levels of security, but
that additional security affects performance. The encryption and decryption process
requires much more processing power and should only be employed where the need is
paramount, such as across untrusted networks. IPsec Tunnel Mode encryption can be
configured on the Configuration tab of the iSCSI Initiator.
Note: In Windows Server 2008 R2, the Storage Explorer snap-in for Microsoft Management
Console (MMC) could be used to configure many iSCSI security settings for iSCSI initiators. That
feature has been removed in Windows Server 2012 R2.
Demonstration Steps
Add the iSCSI target server role service
1.
2.
In the Add Roles and Features Wizard, install the following roles and features on the local server, and
accept the default values:
File And Storage Services (Installed)\File and iSCSI Services\iSCSI Target Server
On LON-DC1, in the Server Manager, in the navigation pane, click File and Storage Services, and
then click iSCSI.
2.
In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-down list box, click New
iSCSI Virtual Disk.
3.
Name: iSCSIDisk1
Disk size: 5 GB
4.
On the View results page, wait until creation completes, and then close the View Results page.
5.
In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-down list, click New iSCSI
Virtual Disk.
6.
7.
Name: iSCSIDisk2
Disk size: 5 GB
On the View results page, wait until creation completes, and then close the View Results page.
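The same configuration can be scripted. A minimal Windows PowerShell sketch, assuming the iSCSI Target Server cmdlets and using illustrative paths and target and initiator names, might be:
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\iSCSIDisk1.vhdx -SizeBytes 5GB
New-IscsiServerTarget -TargetName LON-Target1 -InitiatorIds "IQN:iqn.1991-05.com.microsoft:lon-svr2.adatum.com"
Add-IscsiVirtualDiskTargetMapping -TargetName LON-Target1 -Path C:\iSCSIVirtualDisks\iSCSIDisk1.vhdx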
Demonstration Steps
Connect to the iSCSI target
1.
Sign in to 20412D-LON-SVR2 with user name Adatum\Administrator and the password Pa$$w0rd.
2.
Open Server Manager, and on the Tools menu, open iSCSI Initiator.
3.
2.
In the Computer Management console, under Storage node, access Disk Management. Notice that
the new disks are added. However, they all are currently offline and not formatted.
3.
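As an alternative to the graphical iSCSI Initiator, you could connect from Windows PowerShell. A minimal sketch (the portal address is illustrative, and the Microsoft iSCSI Initiator service must be running):
Start-Service msiscsi
New-IscsiTargetPortal -TargetPortalAddress LON-DC1.adatum.com
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress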
IT personnel who will design, configure, and administer the iSCSI storage solution must include IT
administrators with different areas of specialization, such as Windows Server 2012 administrators,
network administrators, storage administrators, and security administrators. This is necessary so that
the iSCSI storage solution has optimal performance and security, and has consistent management and
operations procedures.
When designing an iSCSI storage solution, the design team should also include application-specific
administrators, such as Exchange Server administrators and SQL Server administrators, so that you can
implement the optimal configuration for the specific technology or solution.
Lesson 2
Configuring BranchCache
Branch offices have unique management challenges. A branch office typically has slow connectivity to the
enterprise network and limited infrastructure for securing servers. In addition, you need to back up data
that you maintain in your remote branch offices, which is why organizations prefer to centralize data
where possible. Therefore, the challenge is to provide efficient access to network resources for users in
branch offices. BranchCache helps you overcome these problems by caching files so they do not have to
be transferred repeatedly over the network.
Lesson Objectives
After completing this lesson, you will be able to:
Configure BranchCache.
Monitor BranchCache.
Background Intelligent Transfer Service (BITS). A Windows component that distributes content from a
server to clients by using only idle network bandwidth. BITS is also a component that Microsoft
System Center Configuration Manager uses.
BranchCache improves the responsiveness of common network applications that access intranet servers
across slow WAN links. Because BranchCache does not require additional infrastructure, you can improve
the performance of remote networks by deploying Windows 7 or newer client computers, and by
deploying Windows Server 2008 R2 or newer servers, and then enabling the BranchCache feature.
BranchCache maintains file and folder permissions to ensure that users only have access to files and
folders for which they have permission.
BranchCache works seamlessly with network security technologies, including Secure Sockets Layer (SSL),
SMB signing, and end-to-end IPsec. You can use BranchCache to reduce network bandwidth use and to
improve application performance, even if the content is encrypted.
You can configure BranchCache to use hosted cache mode or distributed cache mode, which are
described below:
Hosted cache mode. This mode operates by deploying a server that is running Windows
Server 2008 R2 or newer versions as a hosted cache server in the branch office. Client computers
locate the server so that they can retrieve content from the hosted cache when the hosted cache is
available. If the content is not available in the hosted cache, the content is retrieved from the content
server by using a WAN link, and then is provided to the hosted cache so that the successive client
requests can retrieve it from there.
Distributed cache mode. For smaller remote offices, you can configure BranchCache in the distributed
cache mode without requiring a server. In this mode, local client computers running Windows 7 or
newer maintain a copy of the content and make it available to other authorized clients that request
the same data. This eliminates the need to have a server in the branch office. However, unlike the
hosted cache mode, this configuration works per subnet only. In addition, clients who hibernate or
disconnect from the network cannot reliably provide content to other requesting clients.
Note: When using BranchCache, you may use both modes in your organization, but you
can configure only one mode per branch office.
BranchCache functionality in Windows Server 2012 R2 has the following improvements:
To allow for scalability, BranchCache allows for more than one hosted cache server per location.
A new underlying database uses the Extensible Storage Engine (ESE) database technology from Exchange
Server. This enables a hosted cache server to store significantly more data (even up to terabytes).
A simpler deployment means that you do not need a Group Policy Object (GPO) for each location. To
deploy BranchCache, you only need a single GPO that contains the settings. This also enables clients
to switch between hosted cache mode and distributed mode when they are traveling between
locations, without needing to use site-specific GPOs, which are best avoided where possible.
1.
The client computer that is running Windows 8.1 connects to a content server in the head office that
is running Windows Server 2012, and the content is initially requested the same way it would be
without using BranchCache.
2.
The content server in the head office authenticates the user and verifies that the user is authorized to
access the data.
3.
Instead of sending the content itself, the content server in the head office returns hashes as identifiers
of the requested content to the client computer. The content server sends that data over the same
connection that the content would have been sent over typically.
4.
If you configure the client computer to use distributed cache, then it multicasts on the local
subnet to find other client computers that have already downloaded the content.
If you configure the client computer to use hosted cache, then it searches for the content on the
configured hosted cache.
5.
If the content is available in the branch office, either on one or more clients or on the hosted cache,
the client computer retrieves the data from the branch office. The client computer also ensures that
the data is updated and has not been tampered with or corrupted.
6.
If the content is not available in the remote office, then the client computer retrieves the content
directly from the server across the WAN link. The client computer then either makes it available on
the local network to other requesting client computers (distributed cache mode), or sends it to the
hosted cache, where it is made available to other client computers.
BranchCache Requirements
BranchCache optimizes traffic flow between head offices and branch offices. Servers running Windows Server 2008 R2 or newer, and client computers running the Enterprise or Ultimate editions of Windows 7 or newer enterprise editions, can benefit from using BranchCache. Earlier versions of Windows operating systems do not benefit from this feature. You can use BranchCache to cache only the content that is stored on file servers or web servers that are running Windows Server 2008 R2 or newer.
Install the BranchCache feature or the BranchCache for Network Files role service on the host server
that is running Windows Server 2012 R2.
Configure client computers by using Group Policy, Windows PowerShell cmdlets such as Enable-BCHostedClient, or the netsh branchcache set service command.
If you want to use BranchCache to cache content from the file server, you must perform the following tasks:
Install the BranchCache for Network Files role service on the file server.
If you want to use BranchCache for caching content from the web server, you must install the
BranchCache feature on the web server. You do not need additional configurations.
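As a minimal sketch only (the server and client names are illustrative, and the feature names assume the Windows Server 2012 R2 defaults), you can install and configure these components by using Windows PowerShell:
# On a file-based content server: install the BranchCache for Network Files role service
Install-WindowsFeature -Name FS-BranchCache
# On a web server or BITS-based application server: install the BranchCache feature
Install-WindowsFeature -Name BranchCache
# On a branch office client: point BranchCache at a hosted cache server
Enable-BCHostedClient -ServerNames LON-SVR2.adatum.com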
BranchCache is supported on the full installation and Server Core installation of Windows Server 2008 R2
or newer.
The following versions cannot be used as BranchCache content servers:
Content Versions
Content cached on Windows Server 2008 R2 and Windows 7 is named version 1 (V1), whereas content
that is cached on Windows Server 2012 R2 and Windows 8.1 is version 2 (V2). V1 content uses a fixed
file-segment size that is larger than in V2. Because of these larger segments, when a user makes a change
that modifies the file length, the segment that changed and all the following segments of the file are
resent over the WAN link. V2 content is more tolerant of changes within the file. Only the changed
content will be resent, and it will use less WAN bandwidth.
When you have content servers and hosted cache servers that are running Windows Server 2012, they use
the content version that is appropriate based on the operating system of the BranchCache client that
requests content. When computers running Windows Server 2012 and Windows 8.1 request content, the
content and hosted cache servers use V2 content; when computers running Windows Server 2008 R2 and
Windows 7 request content, the content and hosted cache servers use V1 content.
When you deploy BranchCache in distributed cache mode, clients that use different content information
versions do not share content with each other.
The content server requirements depend on the server role:
Web server or BITS server. Install the BranchCache feature on the web server or on the server that runs the BITS-based application. No additional configuration is required.
File server. You must install the BranchCache for Network Files role service of the File Services server role before you enable BranchCache for any file shares. After you install the role service, use Group Policy to enable BranchCache on the server. You must then configure each file share to enable BranchCache.
Hosted cache server. For the hosted cache mode, you must add the BranchCache feature to the Windows Server 2012 R2 server that you are configuring as a hosted cache server. To help secure communication, client computers use Transport Layer Security (TLS) when communicating with the hosted cache server. By default, BranchCache allocates five percent of the disk space on the active partition for hosting cache data. However, you can change this value by using the Windows PowerShell Set-BCCache cmdlet, by using Group Policy, or by running the netsh branchcache set cachesize command.
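For example, the following commands, shown only as a sketch, change the allocation to 20 percent of disk space; the percentage value is illustrative:
# Set the BranchCache cache size to 20 percent of disk space by using Windows PowerShell
Set-BCCache -Percentage 20
# Equivalent configuration by using netsh
netsh branchcache set cachesize size=20 percent=TRUE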
Enable BranchCache.
2.
3.
Enabling BranchCache
You can enable the BranchCache feature on client computers by using Group Policy, Windows PowerShell,
or the netsh branchcache set service command.
To enable BranchCache settings by using Group Policy, perform the following steps for a domain-based GPO:
1.
2.
Create a GPO that will be linked to the organizational unit (OU) where the branch office client
computers are located.
3.
4.
2.
Create a GPO that will be linked to the OU where client computers are located.
3.
4.
Select either the distributed cache mode or the hosted cache mode. You may also enable both the
distributed cache mode and automatic hosted cache discovery by Service Connection Point policy
settings. The client computers will operate in distributed cache mode unless they find a hosted cache
server in the branch office. If they find a hosted cache server in the branch office, they will work in
hosted cache mode.
Note: A number of the GPO settings in the BranchCache container require at least
Windows Server 2012, Windows 8, or Windows RT.
To enable BranchCache with Windows PowerShell, use the Enable-BCDistributed or Enable-BCHostedServer cmdlets. You can also use the Enable-BCHostedClient cmdlet to configure BranchCache to operate in hosted cache client mode.
For example, the following cmdlet enables hosted cache client mode using the SRV1.adatum.com
computer as a hosted cache server for Windows 7 clients and HTTPS.
Enable-BCHostedClient -ServerNames SRV1.adatum.com -UseVersion Windows7
The following cmdlet enables hosted cache mode and registers a service connection point in Active
Directory Domain Services (AD DS).
Enable-BCHostedServer -RegisterSCP
To configure BranchCache settings by using the netsh branchcache set service command, open a
command prompt window, and perform the following steps:
1.
Type the following netsh syntax for the distributed cache mode:
netsh branchcache set service mode=distributed
2.
In hosted cache mode, BranchCache clients use the HTTP protocol for data transfer between client
computers, but this mode does not use the WS-Discovery protocol. In hosted cache mode, you should
configure the client firewall to enable the incoming rule BranchCache Content Retrieval (Uses HTTP).
If you are using a firewall other than Windows Firewall, create incoming rules for TCP port 80 for content
retrieval and UDP port 3702 for peer discovery.
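If you script the firewall configuration with Windows PowerShell, the following sketch creates equivalent inbound rules; the rule display names are illustrative only:
# Allow BranchCache content retrieval over HTTP (TCP port 80)
New-NetFirewallRule -DisplayName "BranchCache Content Retrieval (HTTP-In)" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow
# Allow BranchCache peer discovery over WS-Discovery (UDP port 3702), used in distributed cache mode
New-NetFirewallRule -DisplayName "BranchCache Peer Discovery (WSD-In)" -Direction Inbound -Protocol UDP -LocalPort 3702 -Action Allow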
Demonstration Steps
Add BranchCache for the Network Files role service
1.
2.
In the Add Roles and Features Wizard, install the following roles and features to the local server:
File and Storage Services (Installed)\File and iSCSI Services\BranchCache for Network Files
2.
Select Allow hash publication only for shared folders on which BranchCache is enabled.
Open a File Explorer window, and on drive C, create a folder named Share.
2.
Monitoring BranchCache
After the initial configuration, you should verify that BranchCache is configured and functioning correctly. You can use the netsh branchcache show status all command to display the BranchCache service status. You can also use the Get-BCStatus cmdlet to provide BranchCache status and configuration information. Client computers and hosted cache servers display additional information, such as the location of the local cache, the size of the local cache, and the status of the firewall rules for the HTTP and WS-Discovery protocols that BranchCache uses.
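For example, assuming the BranchCache Windows PowerShell module is available on the computer, the following commands display the same information:
# Display overall BranchCache status and configuration
Get-BCStatus
# Display details about the local data cache, such as its location and maximum size
Get-BCDataCache
# Equivalent netsh command
netsh branchcache show status all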
You can also use the following tools to monitor BranchCache:
Event Viewer. Use this tool to monitor the BranchCache events that are recorded in both the
Application log and the Operational log. The Application log is located in the Windows Logs folder,
and the Operational log is located in the Applications and Services
Logs\Microsoft\Windows\BranchCache folder.
Performance counters. Use Performance Monitor to track the BranchCache performance counters.
BranchCache performance counters are useful debugging tools for monitoring BranchCache
effectiveness and health. You can also use BranchCache performance monitoring to determine the
bandwidth savings in distributed cache mode or in hosted cache mode. If you have
implemented Microsoft System Center 2012 Operations Manager in the environment, you can use
the Windows BranchCache Management Pack for Operations Manager 2012.
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
Describe considerations for using file classification options for storage optimization in
Windows Server 2012.
What Is FSRM?
You can use the FSRM to manage and classify
data that is stored on file servers. FSRM includes
the following features:
File management tasks. You can use this feature to apply a conditional policy or action to files, based
on their classification. The conditions of a file management task include the file location, the
classification properties, the date the file was created, the last modified date of the file, and the last
time that the file was accessed. The actions that a file management task can take include the ability to
expire files, encrypt files, and run a custom command.
Quota management. You can use this feature to limit the space that is allowed for a volume or folder.
You can apply quotas automatically to new folders that are created on a volume. You can also define
quota templates that you can apply to new volumes or folders.
File screening management. You can use this feature to control the types of files that users can store
on a file server. You can limit, by file name extension, the types of files that can be stored on your file
shares. For example, you can create a file screen that disallows files with an .mp3 extension from
being stored in personal shared folders on a file server.
Storage reports. You can use this feature to identify trends in disk usage, and identify how your data
is classified. You can also monitor attempts by users to save unauthorized files.
You can configure and manage the FSRM by using the File Server Resource Manager MMC snap-in, or by
using the Windows PowerShell command-line interface.
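As an illustration only (the paths are assumptions, and the file group name assumes the default FSRM file groups), the FileServerResourceManager module exposes these features to Windows PowerShell:
# Create a 5 GB quota on a shared folder
New-FsrmQuota -Path "E:\Shares\Projects" -Size 5GB
# Create an active file screen that blocks audio and video files
New-FsrmFileScreen -Path "E:\Shares\Home" -IncludeGroup "Audio and Video Files" -Active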
The following FSRM features are new with Windows Server 2012:
Integration with DAC. DAC can use a File Classification Infrastructure to help you centrally control and
audit access to files on your file servers.
Manual classification. Manual classification enables users to classify files and folders manually without
the need to create automatic classification rules.
Access-denied assistance. You can use access-denied assistance to customize the access denied error
message that displays for users in Windows 8.1 when they do not have access to a file or a folder.
File management tasks. The updates to file management tasks include AD DS and Active Directory
Rights Management Services (AD RMS) file management tasks, continuous file management tasks,
and dynamic namespace for file management tasks.
Automatic classification. The updates to automatic classification increase the level of control you have
over how data is classified on your file servers, including continuous classification, Windows
PowerShell for custom classification, updates to the existing content classifier, and dynamic
namespace for classification rules.
What's New in File Server Resource Manager in Windows Server 2012
http://go.microsoft.com/fwlink/?LinkId=270039
Question: Are you currently using the FSRM in Windows Server 2008 R2? If yes, for what
areas do you use it?
1. Define classification properties and values, which can be assigned to files by running classification rules.
2. Create, update, and run classification rules. Each rule assigns a single predefined property and value to files within a specified directory, based on installed classification plug-ins.
When you run a classification rule, you can reevaluate files that are already classified. You can choose to
overwrite existing classification values, or add the value to properties that support multiple values.
FSRM supports the following classification property types:
Yes/No. A Boolean property that can have a value of either YES or NO. When multiple values are combined, a NO value overwrites a YES value.
Date/Time. A simple date and time property. When multiple values are combined, conflicting values prevent reclassification.
Number. A simple number property. When multiple values are combined, conflicting values prevent reclassification.
Multiple choice list. A list of values that can be assigned to a property. More than one value can be assigned to a property at a time. When multiple values are combined, each value in the list is used.
Ordered list. A list of fixed values. Only one value can be assigned to a property at a time. When multiple values are combined, the value highest in the list is used.
String. A simple string property. When multiple values are combined, conflicting values prevent reclassification.
Multi-string. A list of strings that can be assigned to a property. More than one value can be assigned to a property at a time. When multiple values are combined, each value in the list is used.
What is the scope of the rule? On the Rule Settings tab, the Scope parameter allows you to select a
folder or folders to which the classification rule will apply. When the rule is run, it processes and
attempts to classify all file system objects within this location.
What classification mechanism will the rule use? On the classification rule Properties page, on the
rule's Classification tab, you must choose a classification method that the rule will use to assign the
classification property. By default, there are two methods from which you can choose:
Folder classifier. The folder classifier mechanism assigns properties to a file based on the file's
folder path.
Content classifier. The content classifier searches for strings or regular expressions in files. This
means that the content classifier classifies a file based on the textual contents of the file, such as
whether it contains a specific word, phrase, numeric value, or type.
What property will the rule assign? The main function of the classification rule is to assign a property
to a file object based on how the rule applies to that file object. On the Classification tab, you must
specify a property and the specific value that the rule will assign to that property.
What additional classification parameters will be used? The core of the rule's logic lies in the
additional classification parameters. Clicking the Advanced button on the Classification tab opens the
Additional Classification Parameters window. Here, you can specify additional parameters, including
strings or regular expressions, that, if found in the file system object, will cause the rule to apply
itself. For example, this parameter could be the phrase Social Security Number or any number with
the format 000-00-0000. If this parameter is found, then the classification rule will apply a YES
value for a Confidential classification property to the file. This classification could then be leveraged
to perform tasks on the file system object, such as moving it to a secure location.
RegularExpression. Match a regular expression by using the Microsoft .NET syntax. For example,
\d\d\d will match any three-digit string.
StringCaseSensitive. Match a case-sensitive string. For example, Confidential will only match
Confidential, and not confidential or CONFIDENTIAL.
String. Match a string, regardless of case. Confidential will match Confidential, confidential, and
CONFIDENTIAL.
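The following Windows PowerShell sketch shows how such a rule might be created with the FileServerResourceManager module; the property name, folder path, and regular expression are illustrative assumptions:
# Define a Yes/No classification property
New-FsrmClassificationPropertyDefinition -Name "Confidential" -Type YesNo
# Create a content classifier rule that sets Confidential to Yes when a Social Security Number pattern is found
New-FsrmClassificationRule -Name "Find SSNs" -Property "Confidential" -PropertyValue "Yes" -Namespace @("E:\Shares\HR") -ClassificationMechanism "Content Classifier" -ContentRegularExpression @("\d{3}-\d{2}-\d{4}")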
Classification Scheduling
You can run classification rules in two ways: on demand or on a schedule. Either way, each time you run
classification, it uses all of the rules that you have left in the enabled state.
Configuring a schedule for classification allows you to specify a regular interval at which file classification
rules will run, ensuring that your server's files are regularly classified and kept up to date with the latest
classification properties.
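For example, the following commands, shown as a sketch, run all enabled classification rules immediately and then check the classification status:
# Run all enabled classification rules now
Start-FsrmClassification -Confirm:$false
# View the classification schedule and the status of the last run
Get-FsrmClassification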
Demonstration Steps
Create a classification property
1.
Open File Server Resource Manager, and expand the Classification Management node.
2.
Using the Classification Properties node, create a new Classification Property named
Confidential, with the Yes/No property type.
Using the Classification Rules node, create a new Classification Rule named Confidential Payroll
Documents.
2.
Configure the rule to classify documents with a value of Yes for the Confidential classification
property, if the file contains the string expression PAYROLL.
3.
2.
Using the Classification Rule node, manually run Classification With All Rules Now, and view the
report. Ensure that File3.txt is listed at the bottom of the report.
3.
Navigate to E:\Labfiles\Data and view the files to ensure that they were correctly classified.
4.
How movement affects classification properties. When you move a file from one NTFS file system to
another, if you use a standard mechanism such as Copy or Move, the file retains its classification
properties. However, if you move a file to a non-NTFS file system, regardless of how you move the
file, file classification properties are not retained. If the file is the product of a Microsoft Office
application, then the classification properties remain attached, regardless of how the file is moved.
File classification is currently not supported on the Resilient File System (ReFS).
Classification Management process in Windows Server 2012 and Windows Server 2008. Classification
properties are available only to servers that run Windows Server 2008 R2 or newer versions. However,
Microsoft Office documents will retain classification property information in Document Properties,
which is viewable regardless of the operating system being used.
Conflicting classification rules. At times, classification rules can conflict. When this happens, the File
Classification Infrastructure will attempt to combine properties. The following behaviors will occur
when conflicting classification rules arise:
For ordered list properties, the highest property value takes priority.
For multiple choice properties, the property sets are combined into one set.
For multiple string properties, a multistring value is set that contains all the unique strings of the
individual property values.
Classification Management cannot classify certain files. File Classification Infrastructure does not
identify individual files within a container file, such as a .zip or .vhd/.vhdx file. In addition, File
Classification Infrastructure does not allow content classification for the contents of encrypted files.
Data Deduplication
In an enterprise with a large volume of shared files, there are often duplicate files, or parts of files that are
duplicates. This occurs particularly when there are source files for applications or virtual disk files. To
address this, data deduplication can be used to segment files into small chunks and identify duplicate
chunks. Those duplicates are replaced by a pointer to a single stored copy. Those chunks are also
compressed to save even more space. All of this is transparent to the user.
Data Deduplication, a role service of File and Storage Services, can be installed by using the Add Roles and
Features Wizard or by using the Windows PowerShell Add-WindowsFeature cmdlet to add the
FS-Data-Deduplication feature. After it is installed, the feature must be enabled on a data volume by using the Server
Manager dashboard or by using the Windows PowerShell Enable-DedupVolume cmdlet. Once enabled,
the data deduplication job can be run on demand or scheduled to run at regular intervals.
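The following Windows PowerShell sketch summarizes this workflow; the volume letter is illustrative:
# Install the Data Deduplication role service
Add-WindowsFeature -Name FS-Data-Deduplication
# Enable deduplication on the data volume
Enable-DedupVolume -Volume "E:"
# Run an optimization job on demand and review the savings
Start-DedupJob -Type Optimization -Volume "E:"
Get-DedupStatus -Volume "E:"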
Demonstration Steps
Add the Data Deduplication role service
1.
2.
3.
In the Add Roles and Features Wizard, install the following roles and features to the local server, and
accept the default values:
In Server Manager, in the navigation pane, click File and Storage Services, and then click Volumes.
2.
In the Volumes pane, right-click drive E, and then click Configure Data Deduplication.
3.
Start time: 2 AM
On LON-SVR2, copy the Group Policy Preferences.docx file from the root folder of drive E to the
E:\LabFiles folder.
2.
3.
In Windows PowerShell, on LON-SVR2, type the following cmdlet to start the deduplication job in
optimization mode:
Start-DedupJob -Type Optimization -Volume E:
4.
When the job completes, verify the size of the Group Policy Preferences.docx file on the root folder
in drive E.
5.
In the Properties dialog box of the Group Policy Preferences.docx file, note the values for Size and
Size on Disk. The size on the disk should be much smaller than it was previously.
Select the storage pool where you want to create the virtual disk.
2.
Provide a name for the virtual disk and select the check box to Create storage tiers on this virtual
disk. Two tiers will be created: one named Faster Tier that contains all of the SSDs, and one named
Standard Tier that contains all the remaining hard disk drives.
3.
Select the storage layout: Simple, Mirror, or Parity (Parity requires three or more physical disks).
4.
5.
Specify the size of the virtual disk. You must configure how much of the SSD space (Faster Tier) and
how much of the hard disk drive space (Standard Tier) will be used by the virtual disk.
6.
You can then choose to launch the New Volume Wizard to create volumes on the virtual disk.
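The same result can be achieved with Windows PowerShell; the following sketch assumes an existing storage pool named Pool1, and the tier sizes are illustrative:
# Create the SSD and HDD tiers in the storage pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "Faster Tier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "Standard Tier" -MediaType HDD
# Create a simple tiered virtual disk that uses 50 GB of SSD capacity and 200 GB of HDD capacity
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk1" -StorageTiers @($ssdTier, $hddTier) -StorageTierSizes @(50GB, 200GB) -ResiliencySettingName Simple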
Parallelized Repair
Windows Server 2012 R2 supports parallelized repair. A typical Redundant Array of Independent Disks
(RAID) array often involves using a hot spare to replace a failed disk. That method can be slow and may
require human intervention. With parallelized repair, when a disk in the storage space fails, the remaining
healthy disks will rebuild the data that was stored on the failed disk. This provides a much faster recovery
and involves no human intervention. It is recommended to add disks that are active in the storage space
but contain no data. That way they are available for the parallelized repair process.
Objectives
After completing this lab, the students will be able to:
Lab Setup
Estimated Time: 75 minutes
Virtual machines
20412D-LON-DC1
20412D-LON-SVR1
20412D-LON-SVR2
20412D-LON-CL1
20412D-LON-CL2
User name
Adatum\Administrator
Password
Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
2.
In the Hyper-V Manager console, right-click 20412D-LON-SVR2, and then click Settings.
3.
In the Settings for 20412D-LON-SVR2 window, in the left pane, ensure that the first network
adapter is connected to Private Network, and change the second network adapter to also be
connected to Private Network.
Note: You can only perform the previous step if LON-SVR2 has not been started. If LON-SVR2 is already started, shut down LON-SVR2, and then perform these steps.
4.
In the Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
5.
In the Actions pane, click Connect. Wait until the virtual machine starts.
6.
7.
Password: Pa$$w0rd
On LON-DC1, right-click the Start button, and then click Network Connections.
8.
9.
10. Repeat steps four through six for 20412D-LON-SVR1 and 20412D-LON-SVR2.
Sign in to LON-DC1 with the user name Adatum\Administrator and the password Pa$$w0rd.
2.
3.
In the Add Roles and Features Wizard, install the following roles and features to the local server, and
accept the default values:
File And Storage Services (Installed)\File and iSCSI Services\iSCSI Target Server
On LON-DC1, in the Server Manager, in the navigation pane, click File and Storage Services, and
then click iSCSI.
2.
Storage location: C:
Size: 5 GB
3.
On the View results page, wait until the creation completes, and then click Close.
4.
5.
Storage location: C:
Size: 5 GB
Storage location: C:
Size: 5 GB
Sign in to LON-SVR2 with the user name Adatum\Administrator and the password Pa$$w0rd.
2.
On LON-SVR2, from the Server Manager, open the Routing and Remote Access console.
3.
Right-click LON-SVR2, and click Disable Routing and Remote Access. Close the Routing and
Remote Access console.
Note: Normally, you do not disable Routing and Remote Access (RRAS) before configuring
MPIO. You do it here because of lab requirements.
4.
In the Server Manager, start the Add Roles and Features Wizard and install the Multipath I/O feature.
5.
In Server Manager, on the Tools menu, open iSCSI Initiator, and configure the following:
6.
In Server Manager, on the Tools menu, open MPIO, and then configure the following:
7.
After the computer restarts, sign in to LON-SVR2 with the user name Adatum\Administrator and
password Pa$$w0rd.
8.
In the Server Manager, on the Tools menu, click MPIO, and then verify that Device Hardware ID
MSFT2005iSCSIBusType_0x9 is added to the list.
On LON-SVR2, in the Server Manager, on the Tools menu, open iSCSI Initiator.
2.
In the iSCSI Initiator Properties dialog box, perform the following steps:
3.
Connect to another target, enable multi-path, and configure the following Advanced settings:
o
Results: After completing this exercise, you will have configured and connected to iSCSI targets.
On LON-SVR1, from the Server Manager, start the File Server Resource Manager.
2.
In File Server Resource Manager, under Classification Management, create a local property with the
following settings:
3.
In the File Server Resource Manager console, create a classification rule with following settings:
General tab, Rule name: Corporate Documents Rule, and ensure that the rule is enabled.
Classification tab:
o
Evaluation type tab: Re-evaluate existing property values and Aggregate the values
2.
Select both Run the classification with all rules and Wait for classification to complete.
3.
Review the Automatic Classification report that displays in Internet Explorer, and ensure that the
report lists the same number of classified files as in the Corporate Documentation folder.
4.
Close Internet Explorer, but leave the File Server Resource Manager console open.
In the File Server Resource Manager console, create a local property with following settings:
2.
In the File Server Resource Manager console, create a classification rule with the following settings:
General tab, Rule name: Expiration Rule, and ensure that the rule is enabled
Evaluation type tab: Re-evaluate existing property values and Aggregate the values
3.
Select both Run the classification with all rules and Wait for classification to complete.
4.
Review the Automatic classification report that appears in Internet Explorer, and ensure that report
lists the same number of classified files as the Corporate Documentation folder.
Close Internet Explorer, but leave the File Server Resource Manager console open.
In the File Server Resource Manager, create a file management task with following settings:
General tab: Task name: Expired Corporate Documents, and ensure that the task is enabled
Note: This value is for lab purposes only. In a real scenario, the value would be 365 days or
more, depending on each company's policy.
In the File Server Resource Manager, click Run File Management Task Now, and then click Wait for
the task to complete.
2.
Review the file management task report that displays in Internet Explorer, and ensure that the report
lists the same number of classified files as the Corporate Documentation folder.
3.
Start the Event Viewer, and in the Event Viewer console, open the Application event log.
4.
Review events with numbers 908 and 909. Notice that event 908 indicates that FSRM started a file
management job, and event 909 indicates that FSRM finished a file management job.
5.
2.
On the Virtual Machines list, right-click 20412D-LON-SVR1, and then click Revert.
3.
Results: After completing this exercise, you will have configured a File Classification Infrastructure so that
the latest version of the documentation is always available to users.
Question: Why would you implement MPIO together with iSCSI? What problems would you
solve with this approach?
Question: Why must you have the iSCSI initiator component?
Question: Why would you configure file classification for documents located in a folder such
as a Corporate Documentation folder?
Objectives
After completing this lab, the students will be able to:
Monitor BranchCache.
Lab Setup
Estimated Time: 40 Minutes
Virtual machines
User name
Adatum\Administrator
Password
Pa$$w0rd
For this lab, you will continue to use the 20412D-LON-DC1 and 20412D-LON-SVR2 virtual machines that
are still running from the previous lab. You will also use 20412D-LON-CL1 and 20412D-LON-CL2, but do not
start them until instructed to do so in the lab steps.
Switch to LON-DC1.
2.
Open Server Manager, and then install the BranchCache for Network Files role service.
3.
4.
5.
Enable the BranchCache setting, and select Allow hash publication only for shared folders on
which BranchCache is enabled.
2.
On LON-DC1, in the File Explorer window, create a new folder named C:\Share.
2.
Permissions: default
3.
4.
2.
3.
4.
5.
Action: Allow
Action: Allow
6.
Close the Group Policy Management Editor and Group Policy Management console.
7.
Results: At the end of this exercise, you will have deployed BranchCache, configured a slow link, and
enabled BranchCache on a file share.
Install the BranchCache for Network Files role and the BranchCache feature on LON-SVR2
Start the BranchCache host server
Task 1: Install the BranchCache for Network Files role and the BranchCache feature
on LON-SVR2
On LON-SVR2 from Server Manager, add the BranchCache for Network Files role service and the
BranchCache feature.
2.
3.
Ensure that BranchCache is enabled and running. Note that in the DataCache section, the current active
cache size is zero.
4.
Results: At the end of this exercise, you will have enabled the BranchCache server in the branch office.
On LON-DC1, open the Server Manager, and then open Group Policy Management.
2.
3.
4.
5.
6.
Open the Server Manager, and then open Active Directory Users and Computers.
7.
Move LON-CL1 and LON-CL2 from the Computers container to the Branch OU.
8.
9.
At the command prompt, type netsh branchcache show status all, and then press Enter.
10. Verify that the BranchCache Status is Running. If the status is Stopped, restart the client computer.
11. Start 20412D-LON-CL2, sign in as Adatum\Administrator, and then open a command prompt window.
12. At the command prompt, type netsh branchcache show status all, and then press Enter.
13. Verify that the BranchCache status is Running. If the status is Stopped, restart the client computer.
Results: At the end of this exercise, you will have configured the client computers for BranchCache.
2.
In the Performance Monitor console, in the navigation pane, under Monitoring Tools, click
Performance Monitor.
3.
Remove existing counters, change to report view, and then add the BranchCache object counters to
the report.
2.
In the navigation pane of the Performance Monitor console, under Monitoring Tools, click
Performance Monitor.
3.
In Performance Monitor, remove existing counters, change to a report view, and then add the
BranchCache object to the report.
2.
In the Performance Monitor console, in the navigation pane, under Monitoring Tools, click
Performance Monitor.
3.
In the Performance Monitor, remove existing counters, change to a report view, and then add the
BranchCache object to the report.
Switch to LON-CL1.
2.
Open \\LON-DC1.adatum.com\share, and copy mspaint.exe to the local desktop. This could take
several minutes because of the simulated slow link.
3.
Read the performance statistics on LON-CL1. This file was retrieved from LON-DC1 (Retrieval: Bytes
from Server). After the file was cached locally, it was passed up to the hosted cache. (Retrieval: Bytes
Served).
4.
Switch to LON-CL2.
5.
Open \\LON-DC1.adatum.com\share, and copy mspaint.exe to the local desktop. This should not
take as long as the first file copy, because the file is cached.
6.
Read the performance statistics on LON-CL2. This file was obtained from the hosted cache (Retrieval:
Bytes from Cache).
7.
Read the performance statistics on LON-SVR2. This server has offered cached data to clients (Hosted
Cache: Client file segment offers made).
8.
Note: In the DataCache section, the current active cache size is no longer zero, it is
6560896.
2.
On the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Repeat steps two and three for 20412D-LON-SVR2, 20412D-LON-CL1, and 20412D-LON-CL2.
Results: At the end of this exercise, you will have verified that BranchCache is working as expected.
Question: When would you consider implementing BranchCache into your own
organization?
Data tampering. The BranchCache technology uses hashes to confirm that neither the client computer
nor the server altered the data during communication.
Information disclosure. BranchCache sends encrypted content to clients, but they must have the
encryption key to decrypt the content. Because potential malicious users would not have the
encryption key, if an attacker attempts to monitor the network traffic to access the data while it is in
transit between clients, the attempt will not be successful.
Denial of service. If an attacker tries to overload the client with requests for data, BranchCache
technology includes queue management counters and timers to prevent clients from being
overloaded.
Your organization is using large amounts of disk space for data storage and faces the challenge of
organizing and managing the data. Furthermore, your organization must satisfy requirements for security,
compliance, and data leakage prevention for company confidential information. What should you do?
Answer: You should deploy the File Classification Infrastructure. Based on file classification, you can
configure file management tasks that will enable you to manage groups of files based on various file and
folder attributes. You can also automate file and folder maintenance tasks, such as cleaning up stale data
or protecting sensitive information.
Tools
Tool
Use
Where to find it
iSCSI initiator
BranchCache Windows
PowerShell module
Best Practice:
When you consider an iSCSI storage solution for your organization, spend most of the time on the
design process. The design process is crucial because it allows you to optimize the solution for all
technologies that will be using iSCSI storage, such as file services, Exchange Server, and SQL Server.
The design should also accommodate future growth of your organization's business data. Successful
design processes help guarantee a successful deployment of a solution that will meet your
organization's business requirements.
When you plan for BranchCache deployment, ensure that you work closely with your network
administrators so that you can optimize network traffic across the WAN.
When you plan for file classifications, ensure that you start with your organization's business
requirements. Identify the classifications that you will apply to documents, and then define a method
that you will use to identify documents for classification. Before you deploy the File Classification
Infrastructure, create a test environment. Then test the scenarios to ensure that your solution will
result in a successful deployment, and that your organization's business requirements will be met.
3-1
Module 3
Implementing Dynamic Access Control
Contents:
Module Overview
3-1
3-2
3-9
3-16
3-20
3-23
3-27
3-36
Module Overview
The Windows Server 2012 operating system introduces new features for enhancing access control for
file-based and folder-based resources, and features for accessing your work data from various locations.
These features, named Dynamic Access Control (DAC) and Work Folders, extend traditional access control.
DAC enables administrators to use claims, resource properties, policies, and conditional expressions to
manage access. Work Folders, specific to Windows Server 2012 R2, give users more flexible data access.
In this module, you will learn about DAC and Work Folders, and how to plan and implement these
technologies.
Objectives
After completing this module, you will be able to:
Describe DAC.
Lesson 1
Overview of DAC
DAC is a new Windows Server 2012 feature that you can use to enable more functional and flexible access
management. DAC offers a new way to secure and control access to resources. Before you implement this
feature, you should understand how it works and the components it uses. This lesson presents an overview
of DAC.
Lesson Objectives
After completing this lesson, you will be able to:
Describe DAC.
Describe claims.
These limitations can be generalized in the following way: The NTFS file system-based approach for access
management does not allow you to use conditional expressions as a way to manage access, and you
cannot use AND between access conditions. This means that you cannot build your own conditions for
access control, and you cannot set two different conditions to apply at the same time.
In Windows Server 2012, DAC technology solves these issues. You can use DAC to take into account
Active Directory Domain Services (AD DS) attribute values of users or resource objects when you provide
or deny access.
What Is DAC?
DAC in Windows Server 2012 is a new access
control mechanism for file system resources. It
enables administrators to define central file access
policies that can apply to every file server in an
organization. DAC implements a safety net over
file servers and any existing Share and NTFS file
system permissions. It also ensures that regardless
of how the Share and NTFS file system
permissions might change, this central overriding
policy still is enforced.
DAC combines multiple criteria into access
decisions. This augments the NTFS file system ACL
so that users need to satisfy Share permissions, NTFS file system ACL, and the central access policy to gain
access to a file. However, DAC also can work independently from NTFS file system permissions.
DAC provides a flexible way to apply, manage, and audit access to domain-based file servers. DAC uses
claims in the authentication token, the Resource Properties on the resource, and the conditional
expressions within permission and auditing entries. With this combination of features, you now can grant
and audit access to files and folders based on AD DS attributes.
DAC is used primarily to control file access in a much more flexible way than NTFS file system and Share
permissions. It also can be used for auditing file access and can provide optional AD RMS protection
integration.
DAC is designed for four scenarios:
Central access policy for managing access to files. This enables organizations to set safety net policies
that reflect business and regulatory compliance.
Auditing for compliance and analysis. This enables targeted auditing across file servers for compliance
reporting and forensic analysis.
Protecting sensitive information. DAC identifies and protects sensitive information within the
Windows Server 2012 environment and, if integrated with AD RMS, it protects the information when
it leaves the Windows Server 2012 environment.
Access-denied remediation. This improves the access-denied experience to reduce help desk load and
the incident time for troubleshooting. This technology puts control of the files closer to the people
who are responsible for those files. Access-denied remediation can notify a different owner for each
folder, with additional information about why access was denied, so that the owner can make an
informed decision about how to fix the issue.
User Claims
A user claim is information that a Windows Server 2012 domain controller provides about a user.
Windows Server 2012 domain controllers can use most AD DS user attributes as claim information. This
provides administrators with a wide range of possibilities for configuring and using claims for access
control. Before defining a user claim, you should populate the user attributes that you want to use for
access control with appropriate values.
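For example, the following Windows PowerShell command, shown only as a sketch, creates a user claim type that is sourced from the department attribute; the display name is illustrative:
# Create a claim type sourced from the AD DS department attribute
New-ADClaimType -DisplayName "Department" -SourceAttribute "department"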
Device Claims
A device claim, which is often called a computer claim, is information that a Windows Server 2012 domain
controller provides about a device that is represented by a computer account in AD DS. As with user
claims, device claims can use most of the AD DS attributes that are applicable to computer objects. Unlike
NTFS file system permissions, DAC also can take into account the device that a user is using when trying
to access a resource. Device claims are used to represent device attributes that you want to use for
access control.
If you want to include only specific folders, you can use the Advanced Security Settings Editor to
create conditional expressions directly in the security descriptor.
If you want to include some or all file servers, you can create Central Access Rules, and then link those
rules to the central access policy objects. You then can use Group Policy to apply the central access
policy objects to the file servers, and then configure the share to use the central access policy object.
Using these central Access Policies is the most efficient and preferred method for securing files and
folders. This is discussed further in the next topic.
When you manage access with DAC, you can use file classifications to include certain files with a
common set of properties across various folders or files.
Windows Server 2012 and Windows 8 support one or more conditional expressions within a permission
entry. Conditional expressions simply add another applicable layer to the permission entry. The results of
all conditional expressions must evaluate to TRUE for a Windows operating system to grant the
permission entry for authorization. For example, suppose that you define a claim named Department,
with a source attribute of department, for a user, and that you define a Resource Property object named
Department. You can now define a conditional expression that says that the user can access a folder, with
the applied Resource Property objects, only if the user's Department attribute value is equal to the value
of the Department property on the folder. Note that if the Department Resource Property object has not
been applied to the file or files in question, or if Department is a null value, then the user will be granted
access to the data.
At least one Windows Server 2012 domain controller to store the central definitions for the resource
properties and policies. User claims are not required for security groups. If you use the user claims,
then at least one Windows Server 2012 domain controller in the user domain should be accessible by
the file server, so that the file server can retrieve the claims on behalf of the user. If you use device
claims, then all the client computers in the AD DS domain must use the Windows 8 operating system.
Note: Only Windows 8 or newer devices use device claims.
If you use claims across a forest trust, you must have the Windows Server 2012 domain controllers in
each domain, exclusively.
If you use device claims, then you must have a Windows 8 client. Earlier Windows operating systems
do not support device claims.
A Windows Server 2012 domain controller is required when you use user claims. However, there is no
requirement for the Windows Server 2012 domain or forest functional level, unless you want to
use the claims across a forest trust. This means that you also can have domain controllers that run
Windows Server 2008 and Windows Server 2008 R2 with the forest functional level set to Windows
Server 2008. However, if you want to always provide claims to users and devices by configuring that in
Group Policy, you should raise your domain and forest functional levels to Windows Server 2012. This is
discussed in the following paragraphs.
Whichever method you choose, you should open the Group Policy Object Editor, expand Computer
Configuration, expand Policies, expand Administrative Templates, expand System, and then expand
KDC. In this node, open a setting called Support Dynamic Access Control and Kerberos Armoring.
To configure the Support Dynamic Access Control and Kerberos Armoring policy setting, you can
choose one of the four listed options:
1.
2.
3.
4.
Claims and Kerberos armoring support are disabled by default, which is equivalent to the policy setting of
not being configured or being configured as Do not support Dynamic Access Control and Kerberos
Armoring.
The Support Dynamic Access Control and Kerberos Armoring policy setting configures DAC and
Kerberos armoring in a mixed-mode environment, when there is a mixture of Windows Server 2012
domain controllers and domain controllers running older versions of the Windows Server operating
system.
The remaining policy settings are used when all the domain controllers are Windows Server 2012 domain
controllers and the domain functional level is configured to Windows Server 2012. The Always provide
claims and FAST RFC behavior and the Also fail unarmored authentication requests policy settings
enable DAC and Kerberos armoring for the domain. However, the latter policy setting requires all
Kerberos authentication service and ticket granting service communication to use Kerberos armoring.
Windows Server 2012 domain controllers read this configuration, while other domain controllers ignore
this setting.
Note: Implementing DAC in an environment with multiple forests has additional setup
requirements.
Lesson 2
Lesson Objectives
After completing this lesson, you will be able to:
You also can specify the claim identification (ID). This value is generated automatically, but you
might want to specify the claim ID if you define the same claim for multiple forests and want the ID
to be identical.
Note: Claim types are sourced from AD DS attributes. For this reason, you must configure
attributes for your computer and the user accounts in AD DS with information that is correct for
the respective user or computer. Windows Server 2012 domain controllers do not issue a claim
for an attribute-based claim type when the attribute for the authenticating principal is empty.
Depending on the configuration of the data file's Resource Property object attributes, a null value
in a claim might result in the user being denied access to DAC-protected data.
Date/Time
Multi-valued Choice
Multi-valued Text
Number
Ordered List
Single-valued Choice
Text
Yes/No
You can set the ID for a resource property to be used in a trusted forest, similar to the process you use with claims.
While suggested values are not mandatory for claims, you must provide at least one suggested value for
each Resource Property you define.
In Windows Server 2012 R2, you also can create reference resource properties. A reference resource
property is a resource property that uses an existing claim type that you created before for its suggested
value. If you want to have claims and resource properties with the same suggested values, then you
should use the reference resource properties.
Note: Access is controlled not by the claim, but by the resource property object. The claim
must provide the correct value that corresponds to the requirements set by the resource property
object. If the resource property object does not involve a particular attribute, then additional or
extra claim attributes associated with the user or device are ignored.
Resource properties are grouped in resource property lists. A global resource property list is predefined,
and it contains all resource properties that applications can use. You also can create your own resource
property lists if you want to group some specific resource properties.
Provide a name and description for the rule. You also should choose to protect the rule against
accidental deletion.
Configure the target resources. In the Active Directory Administrative Center, use the Target
Resources section to create a scope for the access rule. You create the scope by using resource
properties within one or more conditional expressions. You want to create a target condition based
on the business requirement that drives this rule. For example, Resource.Compliancy Equals HIPAA. To
simplify the process, you can keep the default value (All resources), but usually you apply some
resource filtering. You can build the conditional expressions by using many logical and relational
operators. You can use the following operators when building conditional expressions:
Also, you can join multiple conditional expressions within one rule by using AND or OR.
In addition, you can group conditional expressions together to combine the results of two or more
joined conditional expressions. The Target Resources section displays the currently configured
conditional expression that is being used to control the rule's applicability.
Use the following permissions as proposed permissions. Select this option to add the entries in
the permissions list to the list of proposed permissions entries for the newly created central
access rule. You can combine the proposed permissions list with file system auditing to model the
effective access that users have to the resource, without having to change the entries in the
current permissions list. Proposed permissions generate a special audit event to the event log
that describes the proposed effective access for the users.
Note: Proposed permissions do not apply to resources; they exist for simulation purposes only.
Use the following permissions as current permissions. Select this option to add the entries in the
permissions list to the list of the current permissions entries for the newly created central access
rule. The current permissions list represents the additional permissions that the Windows
operating system considers when you deploy the central access rule to a file server. Central access
rules do not replace existing security. When it makes authorization decisions, the Windows
operating system evaluates the permission entries from the central access rule's current
permissions list, from NTFS file system, and from the share permissions lists.
Once you are satisfied with your proposed permissions, you can convert them to current permissions.
Alternatively, you can use current permissions in a test environment and effectively test access as specified
in the Advanced Security tab, to model how the policy applies to different users.
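Although you typically create these objects in the Active Directory Administrative Center, the ActiveDirectory module also exposes them to Windows PowerShell. The following sketch assumes that a central access rule named Department Match already exists; the policy name is illustrative:
# Create a central access policy and add an existing central access rule to it
New-ADCentralAccessPolicy -Name "Protect confidential docs"
Add-ADCentralAccessPolicyMember -Identity "Protect confidential docs" -Members "Department Match"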
Configure claims.
Demonstration Steps
1.
In the Active Directory Administrative Center, in the navigation pane, click Dynamic Access Control.
2.
Open the Claim Types container, and then create a new claim type for users and computers by using
the following settings:
3.
In the Active Directory Administrative Center, in the Tasks pane, click New, and then click
Claim Type.
4.
Create a new claim type for computers by using the following settings:
5.
In the Active Directory Administrative Center, click Dynamic Access Control, and then open
the Resource properties container.
6.
7.
8.
9.
Open the Global Resource Property List, ensure that Department and Confidentiality are included
in the list, and then click Cancel.
10. Click Dynamic Access Control, and then open the Central Access Rules container.
11. Create a new central access rule with the following values:
Current Permissions:
o
Remove Administrators
12. Create another central access rule with the following values:
Current Permissions:
o
Remove Administrators
Define classification properties and values, so you then can assign them to files by running
classification rules.
Classify a folder so that all the files within the folder structure inherit the classification.
Create, update, and run classification rules. Each rule assigns a single predefined property and value
to the files within a specified directory, based on installed classification add-ins.
When you run a classification rule, you can reevaluate files that are already classified. You can choose to
overwrite existing classification values, or add the value to properties that support multiple values. You
also can declassify files that no longer meet the classification criteria.
Demonstration Steps
1.
2.
Refresh Classification Properties, and then verify that the Confidentiality and Department properties
are listed.
3.
Scope: C:\Docs
Property: Confidentiality
Value: High
Evaluation Type: Re-evaluate existing property values, and then click Overwrite the existing
value
4.
5.
Open File Explorer, browse to the C:\Docs folder, and then open the Properties window for files
Doc1.txt, Doc2.txt, and Doc3.txt.
6.
Verify values for Confidentiality. Doc1.txt and Doc2.txt should have confidentiality set to High.
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
2.
Define the authorization policies. These policies usually are defined from your business requirements.
Some examples are:
All documents that have property Confidentiality set to High must be available only to managers.
Marketing documents from each country should be writable only by marketing people from the
same country.
Only full-time employees should be able to access technical documentation from previous
projects.
3.
Translate the authorization policies that you require into expressions. In the case of DAC, expressions
are attributes that are associated with both the resources, such as files and folders, and the users or
devices that seek access to these resources. These expressions state additional identification
requirements that must be met to access protected data. Values that are associated with any
expressions on the resource obligate the user or the device to produce the same value.
4.
Lastly, you should break down the expressions that you have created to determine what claim types,
security groups, resource properties, and device claims you must create to deploy your policies. In
other words, you must identify the attributes for access filtering.
Note: You are not required to use user claims to deploy central access policies. You can use
security groups to represent user identities. We recommend that you start with security groups
because it simplifies the initial deployment requirements.
Demonstration Steps
1.
2.
On LON-DC1, in the Active Directory Administrative Center, create a new central access policy with
following values:
3.
On LON-DC1, from the Server Manager, open the Group Policy Management Console.
4.
Create new GPO named DAC Policy, and in the Adatum.com domain, link it to DAC-Protected OU.
5.
6.
Click Manage Central Access Policies, click both Department Match and Protect confidential
docs, click Add, and then click OK.
7.
Close both the Group Policy Management Editor and the Group Policy Management Console.
8.
9.
10. Apply the Protect confidential docs central policy to the C:\Docs folder.
11. Browse to the C:\Research folder.
12. Apply the Department Match Central Policy to the C:\Research folder.
You must first configure Group Policy to use staging. You should open the Group Policy Management
Editor and navigate to: Computer Configuration\Policies\Windows Settings\Security
Settings\Advanced Audit Policy Configuration\Audit Policies\Object Access. In this location, you
should enable Success and Failure auditing for the Audit Central Access Policy Staging and Audit File
System policies.
Demonstration Steps
1.
2.
3.
4.
Double-click Audit Central Access Policy Staging, select all three check boxes, and then click OK.
5.
Double-click Audit File System, select all three check boxes, and then click OK.
6.
Close the Group Policy Management Editor and the Group Policy Management Console.
7.
On LON-DC1, open Active Directory Administrative Center, and then open the Properties for the
Department Match central access rule.
8.
In the Proposed permissions section, configure the condition for Authenticated Users as UserCompany Department-Equals-Value-Marketing.
9.
Lesson 4
Lesson Objectives
After completing this lesson, you will be able to:
Decide on target operating systems. Access Denied Assistance only works with Windows 8 or
Windows Server 2012, or newer versions of these.
The Access Denied Assistance feature provides three ways to troubleshoot issues with access-denied
errors:
Self-remediation. Administrators can create customized access-denied messages that are authored by
the server administrator. By using the information in these messages, users can try to self-remediate
access-denied cases. The message also can include URLs that direct users to self-remediation websites
that are provided by the organization.
Remediation by the data owner. Administrators can define owners for shared folders. This enables
users to send email messages to data owners to request access. For example, if a user is accidentally
left out of a security group, or if the user's department attribute value is misspelled, the
data owner might be able to add the user to the group. If the data owner does not know how to
grant access to the user, the data owner can forward this information to the appropriate IT
administrator. This is helpful because the number of user support requests that escalate to the
support desk should be limited to specialized cases, or cases that are difficult to resolve.
Remediation by the help desk and file server administrators. If users cannot self-remediate issues, and
if data owners cannot resolve the issue, then administrators can troubleshoot issues by accessing the
UI to view the effective permissions for the user. For example, an administrator should be involved in
cases in which claims attributes or resource object attributes are defined incorrectly or contain
incorrect information, or when the data itself appears to be corrupted.
You use Group Policy to enable the Access Denied Assistance feature. Open the Group Policy Object
Editor, and navigate to Computer Configuration\Policies\Administrative Templates\System\Access-Denied Assistance. In the Access-Denied Assistance node, you can enable Access-Denied Assistance, and
you also can provide customized messages for users. Alternatively, you can use the FSRM console to
enable Access Denied Assistance. However, if Access Denied Assistance is enabled in Group Policy, the
appropriate settings in the FSRM console are disabled for configuration.
You also can use the FSRM Management Properties page to configure a customized Access Denied
Assistance message for a particular folder tree within the server, for example, a per-share message.
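As an alternative to the Group Policy and FSRM console steps, the FileServerResourceManager module exposes the same per-server settings through Get-FsrmAdrSetting and Set-FsrmAdrSetting. A minimal sketch (the message text is only an example):
# Enable Access-Denied Assistance with a custom message and allow users to request access
Set-FsrmAdrSetting -Event AccessDenied -Enabled -AllowRequests -DisplayMessage "You are denied access because of permission policy. Please request access."
# Review the current Access-Denied Assistance configuration
Get-FsrmAdrSetting -Event AccessDenied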
Demonstration Steps
1.
On LON-DC1, open the Group Policy Management Console and browse to Group Policy objects.
2.
3.
4.
In the details pane, double-click Customize Message for Access Denied errors.
5.
In the Customize Message for Access Denied errors window, click Enabled.
6.
In the Display the following message to users who are denied access text box, type You are
denied access because of permission policy. Please request access.
7.
Select the Enable users to request assistance check box, and then click OK.
8.
Double-click Enable access-denied assistance on client for all file types, enable it, and then
click OK.
9.
Close the Group Policy Management Editor and the Group Policy Management Console.
Lesson 5
Lesson Objectives
After completing this lesson, you will be able to:
Users can use Work Folders on various types of devices while they are in a local network, but also when
they are out of the network, for example, while they are at home or traveling. Work Folders can be
published to the Internet by using the Web Application Proxy functionality, which is also specific to
Windows Server 2012 R2, and which allows users to synchronize their data whenever they have an Internet connection.
Note: Currently, Work Folders are available only for Windows 8.1 client operating systems.
However, Work Folders support is planned for Windows 7 and iOS-based devices such as the iPad.
The following table shows the comparison between similar technologies for managing and accessing
user data.
Technology | Personal data | Individual work data | Team/group work data | Personal devices | Data location
OneDrive | Yes | | | Yes | Public cloud
OneDrive for Business (formerly SkyDrive Pro) | | Yes | | Yes | Microsoft SharePoint/Microsoft Office 365
Work Folders | | Yes | | Yes | File server
Folder Redirection/Client-side caching | | Yes | Yes | | File server
After you install the Work Folders functionality, you should provision a share where users' data will be stored.
The share can be stored in any location, such as a folder on local or iSCSI storage, that is accessible to and
controlled by the file server where you installed Work Folders. When you create a root share, we
recommend that you leave the share and NTFS file system permissions at their default values, and that you
enable access-based enumeration.
After you create a root share where users' Work Folders will be located, you should start the New Sync Share
Wizard to create the Work Folders structure. You should select the root folder that you provisioned as a
share, and you also should choose the naming format for the subfolders. This can be the user alias or
alias@domain. If you have more than one domain in your AD DS forest, we recommend that you choose
alias@domain, which is the user principal name (UPN) naming format.
You can control sync access by explicitly listing the users who will be able to use the Work Folders structure
that you created, or by specifying a group. We recommend that you specify a group, for easier
administration later. Additionally, we recommend that you disable permission inheritance for Work Folders so
that each user has exclusive access to his or her files. Finally, you can enforce additional security
settings on the devices that are used to access Work Folders, such as requiring that Work Folders data is
encrypted and that devices use an automatic lock screen with password requirements.
Note: Enforcement of security settings related to Work Folders is not achieved by using
Group Policy. These settings are enforced when a user establishes the Work Folders connection,
and they are applied on computers that are domain-joined and on computers that are not
domain-joined.
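The same sync share can be created from Windows PowerShell by using the SyncShare module that installs with the Work Folders role service. In this sketch, the share name and group come from the lab later in this module, while the folder path is an assumption:
# Create a sync share for members of the WFSync group and enforce the device security policies
New-SyncShare -Name "WF-Share" -Path "C:\WF-Share" -User "Adatum\WFSync" -RequireEncryption $true -RequirePasswordAutoLock $true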
Demonstration Steps
1.
On LON-SVR2, in Server Manager, click File and Storage Services, and then select Work Folders.
2.
3.
Select WF-Share.
4.
5.
6.
Switch to LON-DC1.
7.
8.
9.
Open the Group Policy Management Editor for Work Folders GPO.
Objectives
After completing this lab, you will be able to:
Implement DAC.
Lab Setup
Estimated Time: 90 minutes
Virtual machines: 20412D-LON-DC1,
20412D-LON-SVR1,
20412D-LON-SVR2,
20412D-LON-CL1,
20412D-LON-CL2
User name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following procedure:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
Password: Pa$$w0rd
5.
6.
On LON-DC1, in Server Manager, open Active Directory Domains and Trusts console.
2.
Raise the domain and forest functional level to Windows Server 2012.
3.
4.
5.
Move the LON-SVR1 and LON-CL1 computer objects into the DAC-Protected OU.
6.
On LON-DC1, from Server Manager, open the Group Policy Management Console.
7.
8.
In the Group Policy Management Editor, under Computer Configuration, expand Policies, expand
Administrative Templates, expand System, and then click KDC.
9.
Enable the KDC support for claims, compound authentication and Kerberos armoring policy setting.
2.
In the Active Directory Administrative Center, in the navigation pane, click Dynamic Access Control.
3.
Open the Claim Types container, and then create a new claim type for users and computers by using
the following settings:
4.
In the Active Directory Administrative Center, in the Tasks pane, click New, and then click
Claim Type.
5.
Create a new claim type for computers by using the following settings:
In the Active Directory Administrative Center, click Dynamic Access Control, and then open the
Resource Properties container.
2.
3.
4.
5.
Open the Global Resource Property List, ensure that Department and Confidentiality are included
in the list, and then click Cancel.
6.
2.
Refresh Classification Properties, and then verify that Confidentiality and Department properties
are listed.
3.
Scope: C:\Docs
Property: Confidentiality
Value: High
Evaluation Type: Re-evaluate existing property values, and then click Overwrite the existing
value
4.
5.
Open a File Explorer window, browse to the C:\Docs folder, and then open the Properties window for
files Doc1.txt, Doc2.txt, and Doc3.txt.
6.
Verify values for Confidentiality. Doc1.txt and Doc2.txt should have confidentiality set to High.
2.
3.
Results: After completing this exercise, you will have prepared Active Directory Domain Services (AD DS)
for Dynamic Access Control (DAC) deployment, configured claims for users and devices, and configured
resource properties to classify files.
2.
3.
On LON-DC1, in Server Manager, click Tools, and then click Active Directory Administrative
Center.
2.
Click Dynamic Access Control, and then open the Central Access Rules container.
3.
4.
Current Permissions:
o
Remove Administrators
Current Permissions:
o
Remove Administrators
2.
3.
On LON-DC1, in the Active Directory Administrative Center, create a new central access policy with
the following values:
On LON-DC1, from the Server Manager, open the Group Policy Management console.
5.
Create new GPO named DAC Policy, and in the Adatum.com domain, link it to the DAC-Protected
OU.
6.
7.
Click Manage Central Access Policies, click both Department Match and Protect confidential
docs, click Add, and then click OK.
8.
Close the Group Policy Management Editor and the Group Policy Management Console.
9.
10. Open File Explorer, and then browse to the C:\Docs folder.
11. Apply the Protect confidential docs central policy to the C:\Docs folder.
12. Browse to the C:\Research folder.
13. Apply the Department Match central policy to the C:\Research folder.
Results: After completing this exercise, you will have implemented DAC.
2.
3.
4.
5.
6.
7.
2.
Open the \\LON-SVR1\Docs folder. Try to open files Doc1.txt and Doc2.txt.
3.
4.
5.
Open the \\LON-SVR1\Docs folder, and then try to open the Doc3.txt file. You should be able to open that
document.
6.
While still signed in as April, try to open the \\LON-SVR1\Research folder. You should be unable to
access the folder.
7.
2.
Open the Advanced options for Security, and then click Effective Access.
3.
Click select a user, and in the Select User, Computer, Service Account, or Group window, type April,
click Check Names, and then click OK.
4.
Click View effective access, and then review the results. The user should not have access to this
folder.
5.
Click Include a user claim, and then in the drop-down list box, click Company Department.
6.
In the Value text box, type Research, and then click View Effective access. The user should now
have access.
7.
On LON-DC1, open the Group Policy Management Console, and then browse to Group Policy objects.
2.
3.
4.
In the details pane, double-click Customize Message for Access Denied errors.
5.
In the Customize Message for Access Denied errors window, click Enabled.
6.
In the Display the following message to users who are denied access text box, type You are
denied access because of permission policy. Please request access.
7.
Select the Enable users to request assistance check box, and then click OK.
8.
Double-click Enable access-denied assistance on client for all file types, enable it, and then click
OK.
9.
Close the Group Policy Management Editor and the Group Policy Management Console.
2.
3.
Request assistance when prompted. Review the options for sending a message, and then click Close.
4.
Results: After completing this exercise, you will have validated DAC functionality.
Install Work Folders functionality, configure SSL certificate, and create WFSync group
Provision a share for Work Folders.
Configure and implement Work Folders.
Validate Work Folders functionality.
Prepare for the next module.
Task 1: Install Work Folders functionality, configure SSL certificate, and create
WFSync group
1.
2.
3.
Add the Work Folders role service by using the Add Roles and Features Wizard.
4.
5.
6.
a.
b.
Organization: Adatum
c.
Organizational unit: IT
d.
City/locality: Seattle
e.
State/province: WA
f.
Country/region: US
Assign this certificate to the https protocol on the Default Web Site.
7.
8.
9.
On LON-SVR2, in Server Manager, expand File and Storage Services, and then click Shares.
2.
3.
4.
5.
6.
On LON-SVR2, in Server Manager, expand File and Storage Services, and then select Work Folders.
2.
3.
4.
5.
6.
Switch to LON-DC1.
7.
8.
9.
Open the Group Policy Management Editor for Work Folders GPO.
10. Expand User Configuration / Policies / Administrative Templates / Windows Components, and
then click Work Folders.
11. Enable the Work Folders support and type https://lon-svr2.adatum.com as the Work Folders URL.
12. Link the Work Folders GPO to the domain.
2.
3.
Open File Explorer, click This PC and then make sure that Work Folders are created.
4.
Open the Work Folders applet from Control Panel and apply security policies.
5.
6.
7.
8.
9.
Open File Explorer, click This PC and then make sure that Work Folders are created.
10. Open the Work Folders applet from Control Panel, and apply security policies.
11. Ensure that the files that you created on LON-CL1 are present.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Repeat steps two and three for 20412D-LON-SVR1, 20412D-LON-SVR2, 20412D-LON-CL1, and
20412D-LON-CL2.
Results: After completing this exercise, you will have configured Work Folders.
Question: How do file classifications enhance DAC usage?
Question: Can you implement DAC without a central access policy?
Always test changes that you have made to central access rules and central access policies before you
implement them.
Troubleshooting Tip
Review Questions
Question: What is a claim?
Question: What is the purpose of a central access policy?
Question: What is the BYOD concept?
Question: How do Work Folders support the BYOD concept?
Tools
Tool
Use
Location
Administrative tools
Administrative tools
Editing GPOs
GPMC
Windows PowerShell
Module 4
Implementing Distributed Active Directory
Domain Services Deployments
Contents:
Module Overview
4-1
Module Overview
For most organizations, the Active Directory Domain Services (AD DS) deployment may be the single
most important component in the IT infrastructure. When organizations deploy AD DS or any of the other
Active Directory-linked services within the Windows Server 2012 operating system, they are deploying a
central authentication and authorization service that provides single sign-on (SSO) access to many other
network services and applications in the organization. AD DS also enables policy-based management for
user and computer accounts.
Most organizations deploy only a single AD DS domain. However, some organizations also have
requirements that necessitate that they deploy a more complex AD DS deployment, which may include
multiple domains or multiple forests.
This module describes the key components of a complex AD DS environment, and how to install and
configure a complex AD DS deployment.
Objectives
After completing this module, you will be able to:
Lesson 1
Lesson Objectives
After completing this lesson, you will be able to:
Explain how AD DS domains and forests form boundaries for security and administration.
AD DS Domain Boundaries
The AD DS domain provides the following
boundaries:
Replication boundary. All objects in a single domain are stored in the domain partition in the AD DS database on each domain controller in
the domain. The replication process ensures that all originating updates are replicated to all of the
other domain controllers in the same domain. Data in the domain partition is not replicated to
domain controllers in other forests.
Administration boundary. By default, an AD DS domain includes several groups, such as the Domain
Admins group, that have full administrative control over the domain. You can also assign
administrative permissions to user accounts and groups within domains. With the exception of the
Enterprise Admins group in the forest root domain, administrative accounts do not have any
administrative rights in other domains in the forest or in other forests.
Group Policy application boundary. Group Policies can be linked at the following levels: local, site,
domain, and organizational unit (OU). Apart from site-level Group Policies, the scope of Group
Policies is the AD DS domain. There is no inheritance of Group Policies from one AD DS domain to
another, even if one AD DS domain is lower than another in a domain tree.
Auditing boundary. Auditing is managed centrally by using Group Policy Objects (GPOs). The
maximum scope of these settings is the AD DS domain. You can have the same audit settings in
different AD DS domains, but they must be managed separately in each domain.
Password and account policy boundaries. By default, password and account policies are defined at
the domain level and applied to all domain accounts. While it is possible to configure fine-grained
password policies to configure different policies for specific users within a domain, you cannot apply
the password and account policies beyond the scope of a single domain.
Replication boundary for domain DNS zones. One of the options when you configure DNS zones in
an AD DS environment is to configure Active Directory-integrated zones. This means that instead of
the DNS records being stored locally on each DNS server in text files, they are stored and replicated
in the AD DS database. The administrator can then decide whether to replicate the DNS information
to all domain controllers in the domain (regardless of whether they are DNS servers), to all domain
controllers that are DNS servers in the domain, or to all domain controllers that are DNS servers in the
forest. By default, when you deploy the first domain controller in an AD DS domain, and configure
that server as a DNS server, two separate replication partitions called domainDnsZones and
forestDnsZones are created. The domainDnsZones partition contains the domain-specific DNS
records, and is replicated only to other DNS servers that are also AD DS domain controllers in the
domain.
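For example, you can view or change the replication scope of an Active Directory-integrated zone with the DnsServer module on Windows Server 2012 or newer; the zone name below is illustrative:
# View the current replication scope of the zone
Get-DnsServerZone -Name "adatum.com" | Select-Object ZoneName, ReplicationScope
# Replicate the zone to all domain controllers that are DNS servers in the forest
Set-DnsServerPrimaryZone -Name "adatum.com" -ReplicationScope Forest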
AD DS Forest Boundaries
The AD DS forest provides the following boundaries:
Security boundary. The forest boundary is a security boundary because, by default, no account
outside the forest has any administrative permissions inside the forest.
Replication boundary for the schema partition. The schema partition contains the rules and syntax for
the AD DS database. This is replicated to all the domain controllers in the AD DS forest.
Replication boundary for the configuration partition. The configuration partition contains the AD DS
domain layout details, including: domains, domain controllers, replication partners, site and subnet
information, and Dynamic Host Configuration Protocol (DHCP) authorization or Dynamic Access
Control configuration. The configuration partition also contains information about applications that
are integrated with the AD DS database. An example of one application is Exchange Server. This
partition is replicated to all domain controllers in the forest.
Replication boundary for the global catalog. The global catalog is the read-only list that contains
every object in the entire AD DS forest. To keep it to a manageable size, the global catalog contains
only some attributes for each object. The global catalog is replicated to all domain controllers in the
entire forest that are also global catalog servers.
Replication boundary for the forest DNS zones. The forestDnsZones partition is replicated to all
domain controllers in the entire forest that are also DNS servers. This zone contains records that are
important to enable forest-wide DNS name resolution.
DNS namespace requirements. Some organizations have a requirement to have more than one DNS
namespace in an AD DS forest. This is typically the case when one company acquires another
company or merges with another organization, and the domain names from the existing environment
must be preserved. It is possible to provide multiple user principal names (UPNs) for users in a single
domain, but many organizations choose to deploy multiple domains in this scenario.
Note: Deploying separate domains provides administrative autonomy, but not administrative
isolation. The only way to ensure administrative isolation is to deploy a separate forest.
Forest administrative group security requirements. Some organizations may choose to deploy a
dedicated or empty root domain. This is a domain that does not have any user accounts other than
the default forest root domain accounts. The AD DS forest root domain has two groups, the Schema
Admins group and the Enterprise Admins group, that do not exist in any other domain in the AD DS
forest. Because these groups have far-reaching rights in the AD DS forest, you might want to restrict
the groups' use by only using the AD DS forest root domain to store them.
Resource domain requirements. Some organizations deploy resource domains to deploy specific
applications. With this deployment, all user accounts are located in one domain, whereas the application
servers and application administration accounts are deployed in a separate domain. This enables the
application administrators to have complete domain administrative permissions in the resource domain,
without enabling any permissions in the domain that contains the regular user accounts.
Note: As a best practice, choose the simplest design that achieves the required goal, as it
will be less costly to implement and more straightforward to administer.
Incompatible schemas. Some organizations might require multiple forests because they require
incompatible schemas or incompatible schema change processes. The schema is shared among all
domains in a forest.
Multinational requirements. Some countries have strict regulations regarding the ownership or
management of enterprises within the country. Establishing a separate AD DS forest may provide the
administrative isolation required by legislation.
Extranet security requirements. Some organizations have several servers deployed in a perimeter
network. These servers might need AD DS to authenticate user accounts, or might use AD DS to
enforce policies on the servers in the perimeter network. To ensure that the extranet AD DS is as
secure as possible, organizations often configure a separate AD DS forest in the perimeter network.
Business merger or divestiture requirements. One of the most common reasons organizations have
multiple AD DS forests is because of business mergers. When organizations merge, or one
organization purchases another, the organizations need to evaluate the requirement for merging the
AD DS forests deployed in both organizations. Merging the AD DS forests provides benefits related to
simplified collaboration and administration. However, if the two different groups in the organization
will continue to be managed separately, and if there is little need for collaboration, it might not be
worth the expense to merge the two forests. In particular, if the organization plans to sell one part of
the company, it is preferable to retain the two organizations as separate forests.
Best Practice: As a best practice, choose the simplest design that achieves the required
goal, as it will be less costly to implement and more straightforward to administer.
Windows Azure AD is used when you subscribe to Microsoft Office 365, Exchange Online, Microsoft
SharePoint Online, or Microsoft Lync Online. Additionally, you can use Windows Azure AD with
Windows Azure Apps or Internet connected apps that require authentication. You can synchronize your
on-premises AD DS with Windows Azure AD to allow your users to use the same identity across both
internal resources and cloud-based resources.
Windows Azure AD does not include all the services available with an on-premises Windows Server 2012 AD
solution. Windows Server 2012 AD supports five different services: AD DS, Active Directory Lightweight
Directory Services (AD LDS), Active Directory Federation Services (AD FS), Active Directory Certificate Services
(AD CS), and Active Directory Rights Management Service (AD RMS). Besides providing Windows Azure AD
services, Windows Azure also currently supports Windows Azure Access Control Service. This service supports
integration with third-party identity management and federation with your on-premises AD DS.
Note: You do not install Windows Azure Active Directory. Windows Azure Active Directory
is a subscription service that can help you provide authentication in the cloud.
Service healing. While Windows Azure does not provide rollback services to customers, Windows
Azure servers may be rolled back as a regular part of maintenance. Domain controller replication
depends on the update sequence number (USN); when an AD DS system is rolled back, duplicate
USNs could be created. To prevent this, Windows Server 2012 AD DS introduced a new identifier
named VM-Generation ID. VM-Generation ID can detect a rollback, and it prevents the virtualized
domain controller from replicating changes outbound until the virtualized AD DS has converged with
the other domain controllers in the domain.
Virtual machine limitations. Windows Azure virtual machines are limited to 14 gigabytes (GB) of RAM
and one network adapter. Additionally, the snapshot feature is not supported in Windows Azure.
IP addressing. All Windows Azure virtual machines receive DHCP addresses. Your Windows Azure Virtual
Network must be provisioned before the first Windows Azure-based domain controller is provisioned.
DNS. Windows Azure built-in DNS does not meet the Active Directory requirements, such as Dynamic
DNS and SRV records. You can install DNS with your domain controller; however, the domain
controller cannot be configured with a static address. To alleviate potential issues, Windows Azure
DHCP leases never expire.
Note: Do not change the default dynamic IP address configuration to static IP addresses on
the Windows Azure-based domain controllers. The network IDs you use in Windows Azure are
subject to change, and if you assign static IP addresses to your domain controllers, they will
eventually lose their connection.
Disks. Windows Azure virtual machines use read-write host caching for operating system (OS) virtual
hard disks. This can improve the performance of the virtual machine. However, if Active Directory
components are installed on the OS disk, data loss would be possible in the event of a disk failure.
Additional Windows Azure hard disks attached to a VM have the caching turned off. When you install
Active Directory in Windows Azure, the ntds.dit and SYSVOL folders should be located on an
additional disk in the Windows Azure VM.
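For example, when you promote a domain controller inside a Windows Azure virtual machine, you can point the AD DS database, logs, and SYSVOL at the additional data disk. The following sketch assumes that the AD DS role is already installed and that the data disk is attached as drive F:
# Promote a domain controller, placing the AD DS files on the attached data disk rather than the OS disk
Install-ADDSDomainController -DomainName "adatum.com" -InstallDns `
    -DatabasePath "F:\NTDS" -LogPath "F:\NTDS" -SysvolPath "F:\SYSVOL" `
    -Credential (Get-Credential "Adatum\Administrator")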
2.
3.
Click CREATE YOUR DIRECTORY to launch the Create Directory form and create your new Active
Directory domain instance.
4.
5.
Domain Name. Enter a unique name for your new Active Directory domain instance. The domain
you create will be provisioned as a subdomain inside the onmicrosoft.com public DNS domain.
You can assign a custom DNS namespace to this domain after you complete initial provisioning.
Country or region. Select your closest country or region. Windows Azure uses this to determine
the Windows Azure Datacenter Region in which your Active Directory domain instance is
provisioned, and it cannot be changed after provisioning.
Once the Domain is provisioned, you can continue to configure the following:
Integrated Apps. Integrate your cloud-based applications with Windows Azure AD.
Verify and monitor DNS name resolution. Verify that all of your computers, including domain
controllers, are able to perform successful DNS lookups for all domain controllers in the forest.
Domain controllers must be able to connect to other domain controllers to successfully replicate
changes to AD DS. Client computers must be able to locate domain controllers by using service (SRV)
resource records, and they must be able to resolve the domain controller names to IP addresses. In a
multidomain or multiforest environment, client computers may need to locate a variety of cross-forest
services, including Key Management Service servers for Windows activation, Terminal Services
Licensing servers, licensing servers for specific applications, and domain controllers in any domain to
validate trusts when accessing resources in another domain.
Optimize DNS name resolution between multiple namespaces. When organizations deploy multiple
trees in an AD DS forest, or when they deploy multiple forests, name resolution is more complicated
because you need to manage multiple domain namespaces. Use DNS features such as conditional
forwarding, stub zones, and delegation to optimize the process of resolving computer names across
the namespaces.
Use AD DS integrated DNS zones. When you configure a DNS zone as AD DS integrated, the DNS
information is stored in AD DS and replicated through the normal AD DS replication process. This
optimizes the process of replicating changes throughout the forest. You also can configure the scope
of replication for the DNS zones. By default, domain-specific DNS records will be replicated to other
domain controllers that are also DNS servers in the domain. DNS records that enable cross-domain
lookups are stored in the _msdcs.forestrootdomainname zone, and are replicated to domain
controllers that are also DNS servers in the entire forest. This default configuration should not be
changed.
Deploying a GlobalNames zone. A GlobalNames zone allows you to configure single name resolution
for DNS names in your forest. Previously, Windows Internet Name Service (WINS) was configured in a
domain to support single-label name resolution. A GlobalNames zone can be used to replace WINS in
your environment, especially if you deploy Internet Protocol version 6 (IPv6), because WINS does not
support IPv6 addressing. (A configuration sketch for a GlobalNames zone follows this list.)
When you extend your AD DS domain into Windows Azure, you must take a few extra steps.
Windows Azure's built-in DNS does not support AD DS domains; to support your cloud-based
domain components, you need to do the following:
o
Register your on-premises DNS with Windows Azure so that it is accessible from Windows Azure.
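The GlobalNames zone mentioned in the previous list can be provisioned with a few DnsServer cmdlets. A sketch, where the single-label name and its target host are examples:
# Create the forest-replicated GlobalNames zone and enable GlobalNames support on the DNS server
Add-DnsServerPrimaryZone -Name "GlobalNames" -ReplicationScope Forest
Set-DnsServerGlobalNameZone -Enable $true
# Publish a single-label name as an alias (CNAME) record in the GlobalNames zone
Add-DnsServerResourceRecordCName -ZoneName "GlobalNames" -Name "intranet" -HostNameAlias "lon-svr2.adatum.com"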
Lesson 2
Lesson Objectives
After completing this lesson, you will be able to:
Demonstration Steps
Install the AD DS binaries on TOR-DC1
1.
On TOR-DC1, in the Server Manager, use the Add Roles and Features Wizard to install the Active
Directory Domain Services binaries.
2.
Complete the AD DS Add Roles and Features Wizard by using default settings.
Use Promote this server to a domain controller to start the Active Directory Domain Services
Configuration Wizard.
2.
Use the Active Directory Domain Services Configuration Wizard to configure AD DS on TOR-DC1
with the following settings:
3.
Complete the Active Directory Domain Services Configuration Wizard with default settings.
4.
Reboot, and then sign in to the newly created AD DS domain controller TOR-DC1 as NA\Administrator
with the password Pa$$w0rd.
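The same child domain deployment can be scripted with the ADDSDeployment module. A sketch, using the na.adatum.com child domain that appears in the lab for this module; run it on the server that will become the new domain controller:
# Install the AD DS binaries, and then create the new child domain
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomain -NewDomainName "na" -ParentDomainName "adatum.com" -DomainType ChildDomain `
    -InstallDns -CreateDnsDelegation -Credential (Get-Credential "Adatum\Administrator")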
Domain functional level: Microsoft Windows 2000 Server native
Features:
Universal groups.
Group nesting.
Note: Windows Server 2012 domain controllers cannot be installed in a domain running at the
Windows 2000 Server native level.
Domain functional level: Windows Server 2003
Features:
Netdom.exe. This domain management tool makes it possible to rename domain controllers.
LastLogonTimestamp. This attribute remembers the time of the last domain logon for users, and
replicates this to other AD DS domain controllers in the AD DS domain.
InetOrgPerson object support. The InetOrgPerson object is defined in Internet RFC 2798 and is
used for federation with external directory services.
The ability to redirect the default location for user and computer objects.
Constrained delegation. This enables applications to take advantage of the secure delegation
of user credentials by using Kerberos-based authentication.
Selective authentication. This allows you to specify the users and groups that are allowed to
authenticate to specific resource servers in a trusting forest.
Application partitions, which are used to store information for AD-integrated applications.
AD-integrated DNS uses an application partition, which allows the DNS partition to be replicated
on domain controllers that are also DNS servers in the domain, or even across the forest.
Domain functional level: Windows Server 2008
Features:
Distributed File System (DFS) replication is available as a more efficient and robust file
replication mechanism than the File Replication Service (FRS) used for the SYSVOL folders.
Additional interactive logon information is stored for each user, instead of just the last logon time.
Fine-grained password settings allow password and account lockout policies to be set for users
and groups, which replaces the default domain settings for those users or group members.
Personal virtual desktops are available for users to connect to, by using RemoteApp and
Remote Desktop.
Advanced Encryption Services (AES 128 and 256) support for Kerberos is available.
Read-only domain controllers (RODCs). These provide a secure and economical way to provide
AD DS logon services in remote sites, without storing confidential information (such as
passwords) in untrusted environments.
Domain functional level: Windows Server 2008 R2
Domain functional level: Windows Server 2012
Features:
The Windows Server 2012 domain functional level does not implement new features beyond the
Windows Server 2008 R2 functional level, with one exception: if the key distribution center (KDC)
support for claims, compound authentication, and Kerberos armoring policy setting is configured for
Always provide claims or Fail unarmored authentication requests, these functionalities will not be
enabled until the domain is also set to the Windows Server 2012 level.
Domain functional level: Windows Server 2012 R2
Features:
Domain controller-based protections for Protected Users. The Protected Users group was
introduced in Windows Server 2012 R2. Members of the Protected Users group can no longer:
o Authenticate with NTLM authentication, Digest Authentication, or CredSSP. Windows 8.1
devices will not cache the passwords of Protected Users.
o Use DES or RC4 cipher suites in Kerberos pre-authentication. Domains must be configured
to support at least the AES cipher suite.
o Be delegated with unconstrained or constrained delegation. Connections for Protected
Users to other systems may fail.
o Renew user Ticket-Granting Tickets (TGTs) beyond the initial four-hour lifetime. After four
hours, Protected Users must authenticate again.
Authentication policies can be applied to accounts in Windows Server 2012 R2 domains.
Authentication Policy Silos are used to create a relationship between user, managed service,
and computer accounts for authentication policies.
Note: Generally, you cannot roll back AD DS domain functional levels. However, in
Windows Server 2012 and Windows Server 2008 R2, you can roll back to a minimum of Windows
Server 2008, as long as you do not have optional features (such as the Recycle Bin) enabled. If
you have implemented a feature that is only available in a higher domain functional level, you
cannot roll back that feature to an earlier state.
Additional Reading: To learn more about the AD DS domain functional levels, refer to
Understanding Active Directory Domain Services (AD DS) Functional Levels at
http://go.microsoft.com/fwlink/?LinkId=270028.
Trusts. The basic feature of forests is that all domain trusts are transitive trusts, so that any user in any
domain in the forest can access any resource in the forest, when given permission.
Forest trusts. AD DS forests can have trusts set up between them, which enables resource sharing.
There are full trusts and selective trusts.
Linked-value replication. This feature improved Windows 2000 Server replication, and improved how
group membership was handled. In previous versions of AD DS, the membership attribute of a group
would be replicated as a single value. This meant that if two administrators changed the membership
of the same group in two different instances of AD during the same replication period, the last write
would be the final setting. The first changes made would be lost, because the new version of the
group membership attribute would replace the previous one entirely. With linked-value replication,
group membership is treated at the value level; therefore, all updates merge together. This also
reduces significantly the replication traffic that would occur. As an additional benefit, the previous
group membership restriction that limited the maximum number of members to 5,000 is removed.
Support for read-only domain controllers (RODCs). RODCs are supported at the Windows Server 2003
forest functional level. The RODC must be running Windows Server 2008 or newer, and requires at
least one Windows Server 2008 or newer full domain controller as a replication partner.
Deactivation and redefinition of attributes and object classes. Although you cannot delete an
attribute or object class in the schema at the Windows Server 2003 functional level, you can
deactivate or redefine attributes or object classes.
The Windows Server 2008 forest functional level does not add new forest-wide features. The Windows
Server 2008 R2 forest functional level adds the ability to activate AD features, such as the Active Directory
Recycle Bin. This feature allows you to restore deleted Active Directory objects. The forest functional level
cannot be rolled back if features requiring a certain forest level, such as the Active Directory Recycle Bin
feature, have been enabled.
Although the Windows Server 2008 R2 AD DS forest functional level introduced AD DS Recycle Bin, the
Recycle Bin had to be managed with Windows PowerShell. However, the Remote Server Administration
Tools (RSAT) version that comes with Windows Server 2012 has the ability to manage the AD DS Recycle
Bin by using graphical user interface (GUI) tools.
The Windows Server 2012 forest functional level does not provide any new forest-wide features. However,
raising the forest functional level to Windows Server 2012 means that you cannot add a new domain that
runs at the Windows Server 2008 R2 domain functional level.
The Windows Server 2012 R2 forest functional level does not provide any new forest-wide features. Any
domains that you add to the forest will operate at the Windows Server 2012 R2 domain functional level.
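You can check and raise functional levels from Windows PowerShell with the ActiveDirectory module. A sketch (remember that, as noted in this lesson, most functional level changes cannot be rolled back):
# View the current domain and forest functional levels
Get-ADDomain | Select-Object DNSRoot, DomainMode
Get-ADForest | Select-Object Name, ForestMode
# Raise the domain and forest functional levels to Windows Server 2012 R2
Set-ADDomainMode -Identity "adatum.com" -DomainMode Windows2012R2Domain
Set-ADForestMode -Identity "adatum.com" -ForestMode Windows2012R2Forest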
Of these two methods, the second is preferred, because upgrading operating systems, especially on
servers that have been running for several years, is often difficult due to all the changes made through
the years. By installing new domain controllers running Windows Server 2012 R2, you will have a clean
installation of the Windows Server 2012 R2 operating system.
You can deploy Windows Server 2012 R2 servers as member servers in a domain with domain controllers
running Windows Server 2003 or newer versions. However, before you can install the first domain
controller that is running Windows Server 2012 R2, you must upgrade the schema. In versions of AD DS
prior to Windows Server 2012 R2, you would run the adprep.exe tool to perform the schema upgrades.
When you deploy new Windows Server 2012 R2 domain controllers in an existing domain, and if you are
logged on with an account that is a member of the Schema Admins and Enterprise Admins groups, the
Active Directory Domain Services Installation Wizard will upgrade the AD DS forest schema automatically.
Note: Windows Server 2012 R2 still provides a 64-bit version of ADPrep, so you can run
Adprep.exe separately. For example, if the administrator who installs the first Windows Server
2012 R2 domain controller is not a member of the Enterprise Admins group, you might need to
run the command separately. You only have to run adprep.exe if you plan an in-place upgrade
for the first Windows Server 2012 R2 domain controller in the domain.
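If you do run the tool manually, the commands are issued from the \support\adprep folder of the Windows Server 2012 R2 installation media, for example:
# Extend the forest schema (requires Schema Admins and Enterprise Admins membership)
.\adprep.exe /forestprep
# Prepare each domain that will host Windows Server 2012 R2 domain controllers
.\adprep.exe /domainprep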
Insert the installation disk for Windows Server 2012 R2, and run Setup.
2.
3.
After the operating system selection window and the license acceptance page appear, in the Which
type of installation do you want? window, click Upgrade: Install Windows and keep files,
settings, and apps.
With this type of upgrade, AD DS on the domain controller is upgraded to Windows Server 2012 R2 AD
DS. As a best practice, you should check for hardware and software compatibility before you perform an
upgrade. Following the operating system upgrade, remember to update your drivers and other services
(such as monitoring agents), and also check for updates for both Microsoft applications and non-Microsoft software.
Note: You can upgrade directly from Windows Server 2008 and Windows Server 2008 R2
to Windows Server 2012 R2. To upgrade servers that are running a version of Windows Server
that is older than Windows Server 2008, you must either perform an interim upgrade to Windows
Server 2008 or Windows Server 2008 R2, or perform a clean install. Note that Windows Server
2012 R2 AD DS domain controllers can coexist as domain controllers in the same domain as
Windows Server 2003 domain controllers or newer.
Deploy and configure a new installation of Windows Server 2012 R2, and then join it to the domain.
2.
Promote the new server to be a domain controller in the domain by using Server Manager.
When you restructure, you must migrate resources between AD DS domains in either the same forest or in
different forests. There is no option available in AD DS to detach a domain from one forest and then
attach it to another forest. You can rename and rearrange domains within a forest under some
circumstances, but there is no way to easily merge domains within or between forests. The only option for
restructuring a domain in this way is to move all the accounts and resources from one domain to another.
You can use the Microsoft Active Directory Migration Tool (ADMT) to move user, group, and computer
accounts from one domain to another, and to migrate server resources. If managed carefully, the
migration can be completed without disrupting access to the resources users need to do their work.
ADMT provides both a GUI and a scripting interface, and supports the following tasks for completing the
domain migration:
Trust migration.
Functionality to undo the last migration and retry the last migration.
Note: ADMT 3.2 cannot be installed on a Windows Server 2012 or Windows Server 2012 R2 server. To use
ADMT to migrate a Windows Server 2012 domain, first install ADMT on a Windows Server 2008 R2 server.
Pre-Migration Steps
Before you perform the migration, you must perform several tasks to prepare the source and target
domains. These tasks include:
For domain member computers that are pre-Windows Vista Service Pack 1 (SP1) or Windows Server
2008 R2, configure a registry entry on the target AD DS domain controller to allow cryptography
algorithms that are compatible with the Microsoft Windows NT Server 4.0 operating system.
Enable firewall rules on source and target AD DS domain controllers, to allow file and printer sharing.
Prepare the source and target AD DS domains to manage how the users, groups, and user profiles will
be handled.
Establish the trust relationships that are required for the migration.
Perform a test migration, and fix any errors that are reported.
Create a restructure plan. An adequate plan is critical to the success of the restructuring process.
Complete the following steps to create your restructure plan:
a.
b.
c.
d.
e.
2.
3.
4.
5.
Prepare source and target domains. You must prepare both the source and target domains for the
restructure process by performing the following tasks:
a.
Ensure 128-bit encryption on all domain controllers. Windows 2000 Server Service Pack 3 (SP3)
and newer versions support 128-bit encryption natively. For older operating systems, you will
need to download and install a separate encryption pack.
b.
Establish required trusts. You must configure at least a one-way trust between the source and
target domains.
c.
Establish migration accounts. The ADMT uses migration accounts to migrate objects between
source and target domains. Ensure that these accounts have permissions to move and modify
objects on the source and target domains.
d.
Determine whether ADMT will handle SID history automatically, or if you must configure the
target and source domains manually.
e.
Ensure proper configuration of the target domain OU structure. Ensure that you configure the
proper administrative rights and delegated administration in the target domain.
f.
g.
h.
b.
c.
Migrate accounts. Migrate user and computer accounts in batches to monitor the migration's
progress. If you are migrating local profiles as part of the process, migrate the affected
computers first, and then the associated user accounts.
Migrate resources. Migrate the remaining resources in the domain by performing the following steps:
a.
b.
c.
Finalize migration. Finalize the migration and perform cleanup by performing the following steps:
a.
b.
Ensure that at least two operable domain controllers exist in the target domain. Back up these
domain controllers.
c.
SID-History increases the size of the user's access token. After migrating the users to the new domain, the
access control lists (ACLs) in your environment should be examined and migrated as well. Once a
migration is complete, and the original domain has been removed, you should clean up the users'
SID-History attributes by using Windows PowerShell cmdlets. You should plan and execute these activities
carefully, because removing the SID-History before the environment is properly prepared could cause
business interruptions.
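There is no single built-in cmdlet for this cleanup; one common approach, sketched below, is to locate the accounts that still carry SID-History values and clear the attribute with Set-ADUser. Treat this as an illustration and test it in a lab first.
# Find migrated user accounts that still have SID-History values
$users = Get-ADUser -Filter 'SIDHistory -like "*"' -Properties SIDHistory
# Remove the SID-History values from each account
foreach ($user in $users) {
    Set-ADUser -Identity $user -Remove @{ sIDHistory = @($user.SIDHistory) }
}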
Lesson 3
Configuring AD DS Trusts
AD DS trusts enable access to resources in a complex AD DS environment. When you deploy a single
domain, you can easily grant access to resources within the domain to domain users and groups. When you
implement multiple domains or forests, you need to ensure that the appropriate trusts are in place to
enable the same access to resources. This lesson describes how trusts work in an AD DS environment, and
how you can configure trusts to meet your business requirements.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the types of trusts that you can configure in a Windows Server 2012 environment.
Trust type | Transitivity | Direction
Parent and child | Transitive | Two-way
Tree-root | Transitive | Two-way
External | Nontransitive | One-way or two-way
Realm | Transitive or nontransitive | One-way or two-way
Forest (Complete or Selective) | Transitive | One-way or two-way
Shortcut | Nontransitive | One-way or two-way
Simplified management of resources across two Windows Server 2008 (or newer version) forests, by
reducing the number of external trusts necessary to share resources.
Use of the Kerberos version 5 protocol to improve the trustworthiness of authorization data that is
transferred between forests.
You can create a forest trust only between two AD DS forests, and you cannot extend the trust implicitly
to a third forest. This means that if you create a forest trust between Forest 1 and Forest 2, and you create
a forest trust between Forest 2 and Forest 3, Forest 1 does not have an implicit trust with Forest 3. Forest
trusts are not transitive between multiple forests.
You must address several requirements before you can implement a forest trust, including ensuring that
the forest functional level is Windows Server 2003 or newer, and that DNS name resolution exists between
the forests.
SID Filtering
By default, when you establish a forest or domain
trust, you enable a domain quarantine, which is
also known as SID filtering. When a user
authenticates in a trusted domain, the user
presents authorization data that includes the SIDs
of all of the groups to which the user belongs. Additionally, the user's authorization data includes the SID-History of the user and the user's groups.
AD DS sets SID filtering by default to prevent users who have access at the domain or enterprise
administrator level in a trusted forest or domain, from granting (to themselves or to other user accounts
in their forest or domain) elevated user rights to a trusting forest or domain. SID filtering prevents misuse
of the SID-History attribute, by only allowing reading the SID from the objectSID attribute and not the
SID-History attribute.
In a trusted-domain scenario, it is possible that an administrator could use administrative credentials in
the trusted domain to load SIDs that are the same as SIDs of privileged accounts in your domain into the
SID-History attribute of a user. That user would then have inappropriate access levels to resources in your
domain. SID filtering prevents this by enabling the trusting domain to filter out SIDs from the trusted
domain that are not the primary SIDs of security principals. Each SID includes the SID of the originating
domain, so that when a user from a trusted domain presents the list of the user's SIDs and the SIDs of the
user's groups, SID filtering instructs the trusting domain to discard all SIDs without the domain SID of the
trusted domain. SID filtering is enabled by default for all outgoing trusts to external domains and forests.
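You can check or change SID filtering on an existing trust with the Netdom command-line tool, run from an elevated prompt on a domain controller in the trusting domain. The domain names below are examples; note that for forest trusts the equivalent control is the /enablesidhistory option rather than /quarantine:
# Check whether SID filtering (domain quarantine) is enabled on an external trust
netdom trust adatum.com /domain:treyresearch.net /quarantine
# Re-enable SID filtering on the trust if it has been turned off
netdom trust adatum.com /domain:treyresearch.net /quarantine:Yes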
Selective Authentication
When you create an external trust or a forest trust, you can manage the scope of authentication of trusted
security principals. There are two modes of authentication for an external or forest trust:
Domain-wide authentication (for an external trust) or forest-wide authentication (for a forest trust)
Selective authentication
If you choose domain-wide or forest-wide authentication, this enables all trusted users to authenticate for
services and access on all computers in the trusting domain. Therefore, trusted users can be given
permission to access resources anywhere in the trusting domain. If you use this authentication mode, all
users from a trusted domain or forest are considered Authenticated Users in the trusting domain. Thus, if
you choose domain-wide or forest-wide authentication, any resource that has permissions granted to
Authenticated Users is accessible immediately to trusted domain users.
If, however, you choose selective authentication, all users in the trusted domain are trusted identities.
However, they are allowed to authenticate only for services on computers that you specify. When they use
selective authentication, users will not become authenticated users in the target domain. However, you
can explicitly grant users the Allowed to Authenticate permission on specific computers.
For example, imagine that you have an external trust with a partner organization's domain. You want to
ensure that only users from the partner organization's marketing group can access shared folders on only
one of your many file servers. You can configure selective authentication for the trust relationship, and
then give the trusted users the right to authenticate only for that one file server.
Demonstration Steps
Configure DNS name resolution by using a conditional forwarder
Configure DNS name resolution between adatum.com and treyresearch.net by creating a conditional
forwarder so that LON-DC1 has a referral to TREY-DC1 as the DNS server for the DNS domain
treyresearch.net.
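The equivalent conditional forwarder can be created on LON-DC1 from Windows PowerShell; the IP address of TREY-DC1 shown here is an assumption, so substitute the actual address:
# Create a conditional forwarder for treyresearch.net that refers queries to TREY-DC1
Add-DnsServerConditionalForwarderZone -Name "treyresearch.net" -MasterServers 172.16.10.10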
On LON-DC1, in Active Directory Domains and Trusts, create a two-way selective forest trust between
adatum.com and treyresearch.net, by supplying the credentials of the treyresearch.net domain
Administrator account.
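After the trust is created, you can confirm its properties from Windows PowerShell on LON-DC1:
# List known trusts, including direction, transitivity, and whether selective authentication is in use
Get-ADTrust -Filter * | Select-Object Name, Direction, ForestTransitive, SelectiveAuthentication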
Objectives
After completing this lab, you will be able to:
Implement child domains in AD DS.
Implement forest trusts in AD DS.
Lab Setup
Estimated Time: 45 minutes
Virtual machines: 20412D-LON-DC1, 20412D-TOR-DC1
20412D-LON-SVR2 , 20412D-TREY-DC1
User name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following procedure:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In the Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
Password: Pa$$w0rd
5.
6.
2.
When the AD DS binaries have installed, use the Active Directory Domain Services Configuration
Wizard to install and configure TOR-DC1 as an AD DS domain controller for a new child domain
named na.adatum.com.
3.
When prompted, use Pa$$w0rd as the Directory Services Restore Mode (DSRM) password.
When the Server Manager opens, click Local Server. Verify that Windows Firewall shows Domain:
Off. If it does not, then next to Local Area Connection, click 172.16.0.25, IPv6 enabled. Right-click
Local Area Connection, and then click Disable. Right-click Local Area Connection, and then click
Enable. The Local Area Connection should now show Adatum.com.
2.
From the Server Manager, launch the Active Directory Domains and Trusts management console,
and verify the parent-child trusts.
Note: If you receive a message that the trust cannot be validated, or that the secure
channel (SC) verification has failed, ensure that you have completed step 2, and then wait for at
least 10 to 15 minutes. You can continue with the lab and come back later to verify this step.
Results: After completing this exercise, you will have implemented child domains in AD DS.
On LON-DC1, using the DNS management console, configure a DNS stub zone for TreyResearch.net.
2.
3.
4.
5.
Using the DNS management console, configure a DNS stub zone for adatum.com.
6.
7.
On LON-DC1, create a one-way outgoing trust between the treyresearch.net AD DS forest and the
adatum.com forest. Configure the trust to use Selective authentication.
2.
3.
On LON-DC1, from the Server Manager, open Active Directory Users and Computers.
2.
On LON-SVR2, configure the members of TreyResearch\IT group with the Allowed to authenticate
permission. If you are prompted for credentials, type TreyResearch\administrator with the password
Pa$$w0rd.
3.
On LON-SVR2, create a shared folder named IT-Data, and grant Read and Write access to members
of the TreyResearch\IT group. If you are prompted for credentials, type
TreyResearch\administrator with the password Pa$$w0rd.
4.
5.
Sign in to TREY-DC1 as TreyResearch\Alice with the password Pa$$w0rd, and verify that you can
access the shared folder on LON-SVR2.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have implemented forest trusts.
Question: Why did you configure a delegated subdomain record in DNS on LON-DC1
before adding the child domain na.adatum.com?
Question: What are the alternatives to creating a delegated subdomain record in the
previous question?
Question: When you create a forest trust, why would you create a selective trust instead of a
complete trust?
User cannot be
authenticated to access
resources on another
AD DS domain or
Kerberos realm.
Troubleshooting Tip
Module 5
Implementing Active Directory Domain Services Sites
and Replication
Contents:
Module Overview
5-1
Module Overview
When you deploy Active Directory Domain Services (AD DS), it is important that you provide an efficient
logon infrastructure and a highly available directory service. Implementing multiple domain controllers
throughout the infrastructure helps you meet both of these goals. However, you must ensure that AD DS
replicates Active Directory information between each domain controller in the forest.
In this module, you will learn how AD DS replicates information between domain controllers within a
single site and throughout multiple sites. You also will learn how to create multiple sites and monitor
replication to help optimize AD DS replication and authentication traffic.
Objectives
After completing this module, you will be able to:
Explain how to configure AD DS sites to help optimize authentication and replication traffic.
Lesson 1
AD DS Replication Overview
Within an AD DS infrastructure, standard domain controllers replicate Active Directory information by
using a multimaster replication model. This means that if a change is made on one domain controller, that
change then replicates to all other domain controllers in the domain, and potentially to all domain
controllers throughout the entire forest. This lesson provides an overview of how AD DS replicates
information between both standard and read-only domain controllers (RODCs).
Lesson Objectives
After completing this lesson, you will be able to:
Describe AD DS partitions.
Configuration partition. The configuration partition is created automatically when you create the first
domain controller in a forest. The configuration partition contains information about the forest-wide
AD DS structure, including which domains and sites exist and which domain controllers exist in each
domain. The configuration partition also stores information about forest-wide services such as
Dynamic Host Configuration Protocol (DHCP) authorization and certificate templates. This partition
replicates to all domain controllers in the forest. It is smaller than the other partitions, and its objects
do not change frequently; therefore, replication is also infrequent.
Schema partition. The schema partition contains definitions of all the objects and attributes that you
can create in the data store, and the rules for creating and manipulating them. Schema information
replicates to all domain controllers in the forest. Therefore, all objects must comply with the schema
object and attribute definition rules. AD DS contains a default set of classes and attributes that you
cannot modify. However, if you have Schema Admins group credentials, you can extend the schema
by adding new attributes and classes to represent application-specific classes. Many applications such
as Microsoft Exchange Server and Microsoft System Center 2012 Configuration Manager may
extend the schema to provide application-specific configuration enhancements. These changes target
the domain controller that contains the forest's schema master role. Only the schema master is
permitted to make additions to classes and attributes. Similar to the configuration partition, the
schema partition is small, and needs to replicate only when changes to the data stored there take place, which does not happen frequently, except in those cases when the schema is extended.
Domain partition. When you create a new domain, AD DS automatically creates and replicates an
instance of the domain partition to all of the domain's domain controllers. The domain partition
contains information about all domain-specific objects, including users, groups, computers,
organizational units (OUs), and domain-related system settings. This is usually the largest of the
AD DS partitions, as it stores all the objects contained in the domain. Changes to this partition are
fairly constant, as every time an object is created, deleted, or modified by changing an attribute's
value, those changes must then be replicated. All objects in every domain partition in a forest are
stored in the global catalog, with only a subset of their attribute values.
Note: You can use the Active Directory Service Interfaces Editor (ADSI Edit) to connect to
and view the partitions.
Characteristics of AD DS Replication
An effective AD DS replication design ensures that
each partition on a domain controller is consistent
with the replicas of that partition that are hosted
on other domain controllers. Typically, not all
domain controllers have exactly the same
information in their replicas at any one moment
because changes occur to the directory constantly.
However, Active Directory replication ensures that
all changes to a partition are transferred to all
replicas of the partition. Active Directory
replication balances accuracy, or integrity, and
consistency (called convergence) with
performance, thus keeping replication traffic to a reasonable level.
Multi-master replication. Any domain controller except an RODC can initiate and commit a change to
AD DS. This provides fault tolerance, and eliminates dependency on a single domain controller to
maintain the operations of the directory store.
Pull replication. A domain controller requests, or pulls, changes from other domain controllers. Even though a domain controller can notify its replication partners that it has changes to the directory, or poll its partners to see if they have changes to the directory, in the end, the target domain controller requests and pulls the changes itself.
Store-and-forward replication. A domain controller can pull changes from one replication partner,
and then make those changes available to another replication partner. For example, domain
controller B can pull changes initiated by domain controller A. Then, domain controller C can pull the
changes from domain controller B. This helps balance the replication load for domains that contain
several domain controllers.
Data store partitioning. A domain's domain controllers host the domain naming context for their domain, which helps minimize replication, particularly in multidomain forests. The domain
controllers also host copies of schema and configuration partitions, which are replicated forest wide.
However, changes in configuration and schema partitions are much less frequent than in the domain
partition. By default, other data, including application directory partitions and the partial attribute set
(global catalog), do not replicate to every domain controller in the forest. You can enable replication
to be universal by configuring all the domain controllers in the forest as global catalog servers.
Attribute-level replication. When an object's attribute changes, only that attribute and minimal
metadata describing that attribute replicates. The entire object does not replicate, except upon its
initial creation. For multivalued attributes, such as account names in the Member of attribute of a
group account, only changes to actual names replicate, and not the entire list of names.
Distinct control of intersite replication. You can control replication between sites.
Collision detection and management. On rare occasions, you can modify an attribute on two different
domain controllers during a single replication window. If this occurs, you must reconcile the two
changes. AD DS has resolution algorithms that satisfy almost all scenarios.
Connection objects
Notification
Polling
Connection Objects
A domain controller that replicates changes from another domain controller is called a replication partner.
Replication partners are linked by connection objects. A connection object represents a replication path
from one domain controller to another. Connection objects are one-way, representing inbound-only pull
replication.
To view and configure connection objects, open Active Directory Sites and Services, and then select the
NTDS Settings container of a domain controller's server object. You can force replication between two
domain controllers by right-clicking the connection object, and then selecting Replicate Now. Note that
replication is inbound-only, so if you want to replicate both domain controllers, you need to replicate the
inbound connection object of each domain controller.
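You can also inspect the same connection information from Windows PowerShell. The following is a minimal sketch, assuming that the Active Directory module is installed and using the LON-DC1 domain controller name from the lab environment:
# List all replication connection objects known to LON-DC1 (inbound, pull connections)
Get-ADReplicationConnection -Filter * -Server LON-DC1
# Show when LON-DC1 last pulled changes from each replication partner
Get-ADReplicationPartnerMetadata -Target LON-DC1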
Notification
When a change is made to an Active Directory partition on a domain controller, the domain controller
queues the change for replication to its partners. By default, the source server waits 15 seconds to notify
its first replication partner of the change. Notification is the process by which an upstream partner informs
its downstream partners that a change is available. By default, the source domain controller then waits
three seconds between notifications to additional partners. These delays, called the initial notification
delay and the subsequent notification delay, are designed to stagger the network traffic that intrasite
replication can cause.
Upon receiving the notification, the downstream partner requests the changes from the source domain
controller, and the directory replication agent pulls the changes from the source domain controller. For
example, suppose domain controller DC01 initiates a change to AD DS. When DC02 receives the change
from DC01, it makes the change to its directory. DC02 then queues the change for replication to its own
downstream partners.
Next, suppose DC03 is a downstream replication partner of DC02. After 15 seconds, DC02 notifies DC03
that it has a change. DC03 makes the replicated change to its directory, and then notifies its downstream
partners. The change has made two hops, from DC01 to DC02, and then from DC02 to DC03. The
replication topology ensures that no more than three hops occur before all domain controllers in the site
receive the change. At approximately 15 seconds per hop, the change fully replicates in the site within
one minute.
Polling
Sometimes, a domain controller may not make any changes to its replicas for an extended time,
particularly during off hours. Suppose this is the case with DC01. This means that DC02, its downstream
replication partner, will not receive notifications from DC01. DC01 also might be offline, which would
prevent it from sending notifications to DC02.
It is important for DC02 to know that its upstream partner is online and simply does not have any
changes. This is achieved through a process called polling. During polling, the downstream replication
partner contacts the upstream replication partner with queries as to whether any changes are queued for
replication. By default, the polling interval for intrasite replication is once per hour. You can configure the
polling frequency from a connection object's properties by clicking Change Schedule, although we do not
recommend it.
If an upstream partner fails to respond to repeated polling queries, the downstream partner launches the Knowledge Consistency Checker (KCC) to check the replication topology. If the upstream server is indeed offline, the KCC rearranges the site's replication topology to accommodate the change.
Question: Describe the circumstances that result when you manually create a connection
object between domain controllers within a site.
Adding objects with the same relative distinguished name into the same container on different
domain controllers.
To help minimize conflicts, all domain controllers in the forest record and replicate object changes at the
attribute or value level rather than at the object level. Therefore, changes to two different object
attributes, such as the user's password and postal code, do not cause a conflict even if you change them
at the same time from different locations.
When an originating update is applied to a domain controller, a stamp is created that travels with the
update as it replicates to other domain controllers. The stamp contains the following components:
Version number. The version number starts at one for each object attribute, and increases by one for
each update. When performing an originating update, the version of the updated attribute is one
number higher than the version of the attribute that is being overwritten.
Timestamp. The timestamp is the update's originating time and date in the universal time zone,
according to the system clock of the domain controller where the change is made.
Server globally unique identifier (GUID). The server GUID identifies the domain controller that
performed the originating update.
The RODC forwards the write request to a writable domain controller, which then replicates back to
the RODC. Examples of this type of request include password changes, service principal name (SPN)
updates, and computer\domain member attribute changes.
The RODC responds to the client and provides a referral to a writable domain controller. The
application can then communicate directly with a writable domain controller. Lightweight Directory
Access Protocol (LDAP) is an example of acceptable RODC referrals.
The write operation fails because it is not referred or forwarded to a writable domain controller.
Remote procedure call (RPC) writes are an example of communication that may be prohibited from
referrals or forwarding to another domain controller.
When you implement an RODC, the KCC detects that the domain controller is configured with a read-only
replica of all applicable domain partitions. Because of this, the KCC creates one-way-only connection objects from one or more source domain controllers that are running Windows Server 2008 or newer to the RODC.
For some tasks, an RODC performs inbound replication using a replicate-single-object operation. This is
initiated on demand outside of the standard replication schedule. These tasks include:
DNS updates when a client is referred to a writable DNS server by the RODC. The RODC then
attempts to pull the changes back using a replicate-single-object operation. This only occurs for
Active Directory-integrated DNS zones.
Updates for various client attributes including client name, DnsHostName, OsName, OsVersionInfo,
supported encryption types, and the LastLogontimeStamp attribute.
Lesson 2
Configuring AD DS Sites
Within a single site, AD DS replication occurs automatically without regard for network utilization.
However, some organizations have multiple locations that are connected by wide area network (WAN)
connections. If this is the case, you must ensure that AD DS replication does not impact network
utilization negatively between locations. You also may need to localize network services to a specific
location. For example, you may want users at a branch office to authenticate to a domain controller
located in their local office, rather than over the WAN connection to a domain controller located in the
main office. You can implement AD DS sites to help manage bandwidth over slow or unreliable network
connections, and to assist in service localization for authentication and many other site-aware services on
the network.
Lesson Objectives
After completing this lesson, you will be able to:
Describe AD DS sites.
Manage replication traffic. Typically, there are two types of network connections within an enterprise
environment: highly connected and less highly connected. Conceptually, a change made to AD DS
should replicate immediately to other domain controllers within the highly connected network in which
the change was made. However, you might not want the change to replicate to another site
immediately if you have a slower, more expensive, or less reliable link. Instead, you might want to
optimize performance, reduce costs, and manage bandwidth by managing replication over less highly
connected segments of your enterprise. An Active Directory site represents a highly connected portion
of your enterprise. When you define a site, the domain controllers within the site replicate changes
almost instantly. However, you can manage and schedule replication between sites as needed.
Provide service localization. Active Directory sites help you localize services, including those provided
by domain controllers. During logon, Windows clients are directed automatically to domain
controllers in their sites. If domain controllers are not available in their sites, then they are directed to
domain controllers in the nearest site that can authenticate the client efficiently. Many other services
such as replicated DFS resources are also site-aware, to ensure that users are directed to a local copy
of the resource.
Group Policy Objects (GPOs) can be linked to a site. In that case, the site represents the top of the
AD DS GPO hierarchy, and the AD DS GPO settings are applied here first.
You want to control service localization. By establishing AD DS sites, you can ensure that clients use
domain controllers that are nearest to them for authentication, which reduces authentication latency
and traffic on WAN connections. In most scenarios, each site will contain a domain controller.
However, you might configure sites to localize services other than authentication, such as DFS,
Windows BranchCache, and Exchange Server services. In this case, some sites might be configured
without a domain controller present in the site.
You want to control replication between domain controllers. There might be scenarios in which
two well-connected domain controllers are allowed to communicate only at certain times of the
day. Creating sites allows you to control how and when replication takes place between domain
controllers.
Demonstration Steps
1.
From the Server Manager, open Active Directory Sites and Services.
2.
3.
Right-click the Sites node, and then click New Site. Specify the name Toronto, and then associate
the new site with the default site-link.
4.
5.
In the navigation pane, right-click Subnets, and then click New Subnet.
6.
Provide the prefix 172.16.0.0/24, and then associate the IP prefix to an available site object.
7.
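The same site and subnet objects can also be created with the Active Directory module for Windows PowerShell. The following is a hedged sketch that mirrors the demonstration values and assumes the module is available on the domain controller:
# Create the Toronto site and add it to the default site-link
New-ADReplicationSite -Name "Toronto"
Set-ADReplicationSiteLink -Identity "DEFAULTIPSITELINK" -SitesIncluded @{Add="Toronto"}
# Create the subnet and associate it with the new site
New-ADReplicationSubnet -Name "172.16.0.0/24" -Site "Toronto"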
The network links between sites have limited available bandwidth, may have a higher cost, and may
not be reliable.
Replication traffic between sites can be designed to optimize bandwidth by compressing all
replication traffic. Replication traffic is compressed to 10 percent to 15 percent of its original size
before it transmits. Although compression optimizes network bandwidth, it imposes an additional
processing load on domain controllers when it compresses and decompresses replication data.
Replication between sites occurs automatically after you have defined configurable values, such as a
schedule or a replication interval. You can schedule replication for inexpensive or off-peak hours. By
default, changes are replicated between sites according to a schedule that you define, and not
according to when changes occur. The schedule determines when replication can occur. The interval
specifies how often domain controllers check for changes during the time that replication can occur.
In some networks, you might want to specify that only certain domain controllers are responsible for
intersite replication. You can do this by specifying bridgehead servers. The bridgehead servers are
responsible for all replication into, and out of, the site. The Intersite Topology Generator (ISTG) creates the required connection agreement in its directory, and this information is then replicated to the bridgehead server. The bridgehead server then creates a replication connection with the bridgehead server in the remote site, and replication begins. If a replication partner becomes unavailable, the ISTG selects another domain controller automatically, if possible. If bridgehead servers have been assigned manually, and if they become unavailable, the ISTG will not automatically select other servers.
The ISTG selects bridgehead servers automatically, and creates the intersite replication topology to ensure
that changes replicate effectively between bridgeheads that share a site-link. Bridgeheads are selected per
partition, so it is possible that one domain controller in a site might be the bridgehead server for the
schema, while another is for the configuration. However, you usually will find that one domain controller
is the bridgehead server for all partitions in a site, unless there are domain controllers from other domains
or application directory partitions. In this scenario, bridgeheads will be chosen for those partitions.
Designated bridgehead servers are also useful when you have firewalls in between sites that only allow
replication between specific domain controllers.
The service name and port. This portion of the SRV record indicates a service with a fixed port. It does
not have to be a well-known port. SRV records in Windows Server 2012 include LDAP (port 389),
Kerberos (port 88), Kerberos password protocol (KPASSWD, port 464), and global catalog services
(port 3268).
Protocol. The Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) is indicated as a
transport protocol for the service. The same service can use both protocols in separate SRV records.
Kerberos records, for example, are registered for both TCP and UDP. Microsoft clients use only TCP,
but UNIX clients can use both UDP and TCP.
Host name. The host name corresponds to the host A record for the server hosting the service. When
a client queries for a service, the DNS server returns the SRV record and associated host A records, so
the client does not need to submit a separate query to resolve the service's IP address.
The service name in an SRV record follows the standard DNS hierarchy with components separated by
dots. For example, a domain controller's Kerberos service is registered as
_kerberos._tcp.sitename._sites.domainName, where:
_kerberos is a Kerberos Key Distribution Center (KDC) that uses TCP as its transport protocol.
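You can query these records directly from DNS to confirm that they are registered. The following is an illustrative sketch that uses the Resolve-DnsName cmdlet with the Adatum.com domain and a hypothetical site name (LondonHQ):
# SRV records for LDAP (domain controller) services in the domain
Resolve-DnsName -Type SRV -Name "_ldap._tcp.dc._msdcs.adatum.com"
# SRV records for Kerberos KDCs registered for a specific site
Resolve-DnsName -Type SRV -Name "_kerberos._tcp.LondonHQ._sites.adatum.com"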
In certain situations, an organization might have computers in a location that does not have, nor would it
be desirable to have, domain controllers. Sites can be created without domain controllers; however, as
noted above, the site would not have a corresponding domain controller listing in the
_sites\sitename\_tcp path. In this case, there are several potential solutions. If, for example, maintenance
of the domain controller and security of the AD DS database it contains are the main concerns, you could
deploy RODCs. You also can use automatic site coverage. In the case of an empty site, a domain controller
of the next closest site will automatically decide to take care of that site and also register its records for
that site. This also can be adjusted or forced by using Group Policy. Alternatively, if the site is well connected
with only a few computers, you may want to avoid the costs of maintaining a server there. In this case,
you could add the local subnet of the site to a central or a data center site location with multiple domain
controllers. In the SRV record section example shown above, the client computers at the remote, domain
controller-less location would be identified as belonging to the central site. This would be a problem only
if the central site's domain controllers were not available. In this case, the clients could use cached
credentials to authenticate locally. Because automatic site-link bridging, which will be discussed in the
next lesson, is turned on by default, then domain authentication could still take place over the site-link
bridge where multiple sites exist.
2.
The client attempts an LDAP ping to all domain controllers in a sequence. DNS returns a list of all
matching domain controllers and the client attempts to contact all of them on its first startup.
3.
The first domain controller responds. The first domain controller that responds to the client examines
the client's IP address, cross-references that address with subnet objects, and informs the client of the
site to which the client belongs. The client stores the site name in its registry, and then queries for
domain controllers in the site-specific _tcp folder.
4.
The client queries for all domain controllers in the site. DNS returns a list of all domain controllers in
the site.
5.
The client attempts an LDAP ping sequentially to all domain controllers in the site. The domain
controller that responds first authenticates the client.
6.
The client forms an affinity. The client forms an affinity with the domain controller that responded
first, and then attempts to authenticate with the same domain controller in the future. If the domain
controller is unavailable, the client queries the site's _tcp folder again, and again attempts to bind
with the first domain controller that responds in the site.
If the client moves to another site, which may be the case with a mobile computer, the client attempts to
authenticate to its preferred domain controller. The domain controller notices that the client's IP address is associated with a different site, and then refers the client to the new site. The client then queries DNS for domain controllers in the local site.
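To verify which site a client has been assigned to, and which domain controller it has located, you can use the Nltest command-line tool. This is a brief illustration; the domain name is from the lab environment:
# Display the AD DS site that this client computer belongs to
nltest /dsgetsite
# Locate a domain controller for the Adatum.com domain and show the site it serves
nltest /dsgetdc:adatum.com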
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
Describe AD DS site-links.
Because all four sites are on the same site-link, you are instructing AD DS that all four sites can replicate
with each other. That means that Seattle may replicate changes from Amsterdam; Amsterdam may
replicate changes from Beijing; and Beijing may replicate changes from the headquarters, which in turn
replicates changes from Seattle. In several of these replication paths, the replication traffic on the network
flows from one branch through the headquarters on its way to another branch. With a single site-link, you
do not create a hub-and-spoke replication topology even though your network topology is hub-and-spoke.
To align your network topology with Active Directory replication, you must create specific site-links. That
is, you can manually create site-links that reflect your intended replication topology. Continuing the
preceding example, you would create three site-links as follows:
After you create site-links, the ISTG will use the topology to build an intersite replication topology that
connects each site, and then creates connection objects automatically to configure the replication paths.
As a best practice, you should set up your site topology correctly and avoid creating connection objects
manually.
branches A and B are both directly connected to the corporate headquarters with the default cost of 100.
The corporate headquarters has a backup datacenter, HQ-HA, which is also connected with the cost of
100 between the corporate headquarters location and the locations of sites A and B. In the event that all domain controllers in HQ-HA are unavailable, you want to ensure that site A can still replicate with site B. This enables you to keep site-link bridging on, but configure a site-link bridge with the cost of 150 for A to B. This is greater than the cost of 100 for either site to reach HQ-HA, but less than the cost without the site-link bridge. That cost would be 200: 100 from site A to HQ-HA, plus 100 from HQ-HA to site B. This makes the site-link bridge cost of 150 an in-between cost.
The figure on the previous slide illustrates how you can use a site-link bridge in a forest in which automatic
site-link bridging is disabled. By creating the site-link bridge AMS-HQ-SEA, which includes the HQ-AMS and
HQ-SEA site-links, those two site-links become transitive, or bridged. Therefore, a replication connection
can be made between a domain controller in Amsterdam and a domain controller in Seattle.
Replication frequency. Intersite replication is based only on polling. By default, every three hours a
replication partner polls its upstream replication partners to determine whether changes are available.
This replication interval may be too long for organizations that want directory changes to replicate
more quickly. You can change the polling interval by accessing the properties of the site-link object.
The minimum polling interval is 15 minutes.
Replication schedules. By default, replication occurs 24 hours a day. However, you can restrict intersite
replication to specific times by changing the schedule attributes of a site-link.
Demonstration Steps
1.
From the Server Manager, open Active Directory Sites and Services.
2.
3.
4.
5.
If necessary, open the properties of the IP node, and then modify the Bridge all site-links option.
Demonstration Steps
1.
2.
3.
In the Domain Controllers organizational unit (OU), open the properties of LON-RODC1.
4.
Click the Password Replication Policy tab, and view the default policy.
5.
6.
In the Active Directory Users and Computers console, click the Users container.
7.
Double-click Allowed RODC Password Replication Group, then click the Members tab and
examine the default membership of Allowed RODC Password Replication Group. There should be
no members by default.
8.
Click OK.
9.
Double-click Denied RODC Password Replication Group, and then click the Members tab.
10. Click Cancel to close the Denied RODC Password Replication Group Properties dialog box.
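You can also review a password replication policy from Windows PowerShell. The following is a minimal sketch, assuming the Active Directory module and the LON-RODC1 server name used in the demonstration:
# List accounts that are allowed to have their passwords cached on the RODC
Get-ADDomainControllerPasswordReplicationPolicy -Identity LON-RODC1 -Allowed
# List accounts that are explicitly denied password caching on the RODC
Get-ADDomainControllerPasswordReplicationPolicy -Identity LON-RODC1 -Denied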
Display the replication partners for a domain controller. To display the replication connections of a
domain controller, type repadmin /showrepl DC_LIST. By default, Repadmin.exe shows only inbound connections. Add the /repsto argument to see outbound connections, as well.
Display connection objects for a domain controller. Type repadmin /showconn DC_LIST to show the
connection objects for a domain controller.
Display metadata about an object, its attributes, and replication. You can learn much about
replication by examining an object on two different domain controllers to find out which attributes
have or have not replicated. Type repadmin /showobjmeta DC_LIST Object, where DC_LIST
indicates the domain controller(s) to query. You can use an asterisk to indicate all domain controllers.
Object is a unique identifier for the object, its distinguished name or GUID, for example.
You can also make changes to your replication infrastructure by using the Repadmin.exe tool. Some of the
management tasks you can perform are:
Launching the KCC. Type repadmin /kcc to force the KCC to recalculate the inbound replication
topology for the server.
Forcing replication between two partners. You can use Repadmin.exe to force replication of a
partition between a source and a target domain controller. Type repadmin /replicate
Destination_DC_LIST Source_DC_Name Naming_Context.
Synchronizing a domain controller with all replication partners. Type repadmin /syncall DC /A /e to
synchronize a domain controller with all its partners, including those in other sites.
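For example, the following commands illustrate these management tasks by using the lab's domain controller names; this is a sketch only, and the output will vary with your topology:
# Recalculate the inbound replication topology on LON-DC1
repadmin /kcc LON-DC1
# Force LON-DC1 to pull the Adatum.com domain partition from TOR-DC1
repadmin /replicate LON-DC1 TOR-DC1 "DC=Adatum,DC=com"
# Synchronize LON-DC1 with all of its partners, including those in other sites
repadmin /syncall LON-DC1 /A /e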
Intersite. Checks for failures that would prevent or delay intersite replication.
Topology. Checks that the replication topology is connected fully for all domain controllers.
VerifyReplicas. Verifies that all application directory partitions are instantiated fully on all domain
controllers that host replicas.
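For example, you might run individual tests against a specific domain controller as follows; the server name is from the lab environment:
# Check replication and topology for LON-DC1
dcdiag /s:LON-DC1 /test:replications
dcdiag /s:LON-DC1 /test:topology
# Check for failures that would prevent or delay intersite replication
dcdiag /s:LON-DC1 /test:intersite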
Operations master consistency check. This critical part of replication allows replication partners to be
in agreement on which domain controllers are in an operations master role.
Replication latency monitoring. This ensures that AD DS changes are replicated in a timely manner,
and can periodically send replication events of its own to ensure that all replication partners are
functioning properly.
Replication partner count. This keeps track of how many replication partners a domain controller has.
If the number is either below or above a particular threshold, it will trigger an alert.
Replication provider. This monitors and reports on all replication links for each domain controller. You
use Windows Management Instrumentation (WMI) to find link status.
The following Active Directory module for Windows PowerShell cmdlets return replication data:
Get-ADReplicationConnection
Get-ADReplicationFailure
Get-ADReplicationPartnerMetadata
Get-ADReplicationSite
Get-ADReplicationSiteLink
Get-ADReplicationSiteLinkBridge
Get-ADReplicationSubnet
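As a brief, hedged illustration, these cmdlets can be combined as follows; the server name is from the lab environment:
# Show recent replication failures recorded on LON-DC1
Get-ADReplicationFailure -Target LON-DC1
# List all sites, site-links, and subnets defined in the forest
Get-ADReplicationSite -Filter *
Get-ADReplicationSiteLink -Filter *
Get-ADReplicationSubnet -Filter *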
Objectives
After completing this lab, you will be able to:
Modify the default site created in AD DS.
Configure AD DS replication.
Lab Setup
Estimated Time: 45 minutes
Virtual machines: 20412D-LON-DC1, 20412D-TOR-DC1
User name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following procedure:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
5.
Password: Pa$$w0rd
2.
When the AD DS binaries have installed, use the Active Directory Domain Services Configuration
Wizard to install and configure TOR-DC1 as an additional domain controller for Adatum.com.
3.
After the server restarts, sign in as Adatum\Administrator with the password of Pa$$w0rd.
2.
Open Active Directory Sites and Services, and then rename the Default-First-Site-Name site to
LondonHQ.
3.
Verify that both LON-DC1 and TOR-DC1 are members of the LondonHQ site.
If necessary, on LON-DC1, open the Server Manager console, and then open Active Directory Sites
and Services.
2.
Prefix: 172.16.0.0/24
Results: After completing this exercise, you will have reconfigured the default site and assigned IP address
subnets to the site.
If necessary, on LON-DC1, open the Server Manager console, and then open Active Directory Sites
and Services.
2.
3.
Name: Toronto
Name: TestSite
2.
3.
4.
Prefix: 172.16.1.0/24
Prefix: 172.16.100.0/24
In the navigation pane, click the Subnets folder. Verify in the details pane that the two subnets are
created and associated with their appropriate site.
Results: After this exercise, you will have created two additional sites representing the IP subnet addresses
located in Toronto.
2.
3.
Name: TOR-TEST
Sites: Toronto, TestSite
Modify the schedule to only allow replication from Monday 9 AM to Friday 3 PM
Name: LON-TOR
Sites: LondonHQ, Toronto
Replication: Every 60 minutes
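The equivalent site-link could also be created from Windows PowerShell; the following sketch mirrors the LON-TOR values above, assumes the Active Directory module is available, and uses the default cost of 100:
# Create a site-link between LondonHQ and Toronto that replicates every 60 minutes
New-ADReplicationSiteLink -Name "LON-TOR" -SitesIncluded LondonHQ,Toronto -Cost 100 -ReplicationFrequencyInMinutes 60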
2.
3.
Verify that TOR-DC1 is located under the Servers node in the Toronto site.
2.
Repadmin /kcc
This command recalculates the inbound replication topology for the server.
Repadmin /showrepl
This command displays the replication partners for the domain controller and the status of the most recent replication.
Repadmin /bridgeheads
This command displays the bridgehead servers for the site topology.
Repadmin /replsummary
This command displays a summary of replication tasks. Verify that no errors appear.
DCDiag /test:replications
This command checks replication between the domain controllers.
Switch to TOR-DC1, and then repeat the commands to view information from the TOR-DC1
perspective.
Results: After this exercise, you will have configured site-links and monitored replication.
Produce an error
Monitor AD DS site replication
Troubleshoot AD DS replication
To prepare for the next module
On LON-DC1, in Active Directory Sites and Services, replicate TOR-DC1 with LON-DC1 from the
LondonHQ site.
2.
3.
Observe the results, and note the date/time of the most recent replication event.
4.
On TOR-DC1, in Active Directory Sites and Services, replicate LON-DC1 with TOR-DC1 from the
Toronto site. Acknowledge the error.
2.
In Windows PowerShell, run the following cmdlets, and observe the results:
Get-ADReplicationUpToDatenessVectorTable -Target adatum.com
Get-ADReplicationSubnet -Filter *
Get-ADReplicationSiteLink -Filter *
On TOR-DC1, in Windows PowerShell, determine the IP address settings for the computer, and then
run the following cmdlet:
Get-DnsClient | Set-DnsClientServerAddress -ServerAddresses
("172.16.0.10","172.16.0.25")
Go to Active Directory Sites and Services, and replicate LON-DC1 with TOR-DC1 from the
Toronto site. Acknowledge the error.
3.
4.
In Windows PowerShell, investigate the DNS Server service with the Get-Service cmdlet. If it is not
running, start the service with the Start-Service cmdlet.
5.
Go to Active Directory Sites and Services, and replicate LON-DC1 with TOR-DC1 from the
Toronto site. You should not get an error. Review the objects to determine if any are missing.
6.
7.
Run the recreate Site Links and recreate subnets sections of the script.
8.
Return to Active Directory Sites and Services, and determine if anything is still missing.
9.
Close all open windows, and sign off LON-DC1 and TOR-DC1.
2.
On the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Tools
Repadmin.exe (command line)
Dcdiag.exe (command line)
Get-ADReplicationConnection (Windows PowerShell)
Get-ADReplicationFailure (Windows PowerShell)
Get-ADReplicationPartnerMetadata (Windows PowerShell)
Get-ADReplicationSite (Windows PowerShell)
Get-ADReplicationSiteLink (Windows PowerShell)
Get-ADReplicationSiteLinkBridge (Windows PowerShell)
Get-ADReplicationSubnet (Windows PowerShell)
Best Practice: Implement the following best practices when you manage Active Directory sites and replication in your environment:
Provide at least one global catalog server per site.
Do not set up long intervals without replication when you configure replication schedules for intersite
replication.
Troubleshooting tips:
Verify whether all SRV records for the domain controller are present
in DNS.
Verify whether the domain controller has an IP address from the
subnet that is associated with that site.
Verify that the client is a domain member and has the correct time.
Module 6
Implementing AD CS
Contents:
Module Overview
Module Overview
Public key infrastructure (PKI) consists of several components that help you secure corporate
communications and transactions. One component is the certification authority (CA). You can use CAs to
manage, distribute, and validate digital certificates that help secure information. You can install Active
Directory Certificate Services (AD CS) as a root CA or a subordinate CA in your organization. In this module, you will learn about implementing the AD CS server role and certificates.
Objectives
After completing this module, you will be able to:
Describe PKI.
Deploy CAs.
Lesson 1
Lesson Objectives
After completing this lesson, you will be able to:
2.
The web browser software connects to a website and requests that the server identify itself.
3.
The web server sends its SSL certificate. With the certificate, the server also distributes its public key to
the client.
4.
The client performs a check of the server certificate. It checks the subject name and compares it with
the URL that it used to access the server. It also checks whether the certificate was issued by one of the CAs in the trusted root CA store, and it checks the certificate revocation list distribution point (CDP) locations to verify whether the certificate has been revoked.
5.
If all checks pass, the client generates a symmetric encryption key. The client and server use a symmetric key for encrypting and decrypting data because public and private key pairs are not very efficient at encrypting and decrypting large amounts of data. The client generates a symmetric key and then encrypts this key with the server's public key. After that, the client sends the encrypted symmetric key to the server.
6.
The server uses its private key to decrypt the encrypted symmetric key. Now both the server and the
client have a symmetric key, and the secure data transfer can begin.
This process involves several very important checks. First, the server proves its identity to the client by
presenting its SSL certificate. If the server name in the certificate matches the URL that the client
requested, and if a trusted CA issued the certificate, the client trusts that the server has a valid identity. The
client has also checked the validity of the certificate by checking its lifetime and CDP location for the
certificate revocation lists (CRLs). This means that establishing an SSL session does more than manage
encryption; it also provides authentication from a server to a client.
Note: Client authentication is not part of the classic SSL handshake. This means that a client
does not have to provide its identity to the server. However, you also can configure your website
to require client authentication. The client also can use a certificate to authenticate itself.
In some scenarios, you need to have more than one server name on the same server. A typical example
for this is a Microsoft Exchange Server Client Access server. A certificate installed on the Client Access server must support its public names, for example, mail.adatum.com and autodiscover.adatum.com. Since both names are associated with the same website, and you cannot assign more than one certificate to a
single website, you must use a certificate that supports multiple names, also known as subject alternative
names. This means that you have one certificate with more than one name. These certificates can be
issued from both an internal CA on Windows Server 2012 and from public CAs.
Note: Instead of having one certificate with multiple names on the same domain, you also
can issue a wildcard certificate with a common name, for example, *.adatum.com. This certificate will be valid for all names with the domain suffix adatum.com. However, we do not
recommend using these certificates for security reasons.
To issue an SSL certificate from an internal CA, you can use the following approaches:
Use the CA console on the server to make the certificate request to the CA. By using this approach,
you can specify any additional attributes for the certificate, such as the certificate template or the
subject alternative name. However, after you install the certificate, you must assign it to the
appropriate website manually.
Use the Internet Information Services (IIS) console. In the IIS console, you make a certificate request
directly to the CA. However, when you use this approach, you are not able to choose a certificate
template (it looks for a web server template by default), and you cannot specify a subject alternative
name. This is, however, the simplest way to install a certificate on a website.
Use CA Web enrollment. This approach is appropriate if you want to issue a certificate to a server that
is not a member of your domain. For this type of enrollment, you must first make a certificate request
(.req) file and then submit that request on the CA Web enrollment page. There, you also can specify
the certificate template and add subject alternative names, if needed.
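For the scenario where the server is not a domain member, the request file is typically created with the Certreq.exe command-line tool. The following is a minimal sketch; the INF and file names are illustrative only:
# Create the certificate request (.req) file from an INF file that describes the subject and key
certreq -new websrv.inf websrv.req
# Submit the request (here directly to a CA; it could instead be pasted into the CA Web enrollment page),
# and then install the issued certificate on the server
certreq -submit websrv.req websrv.cer
certreq -accept websrv.cer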
If you buy a publicly trusted SSL certificate, the procedure is somewhat different. After you choose a
certificate vendor, you will first have to go through an administrative procedure to prove the identity of
your company and domain name ownership. After you have completed that, you have to create a Certificate Signing Request (CSR) on your server. Creating the CSR generates the private key and a CSR data file, which is basically a certificate request. You then send the CSR to the certificate issuer. The CA uses the CSR data file to create a public key to match your private key without compromising the key itself. The CA never sees the private key in this or any previous scenario for certificate issuing, except when key archival is configured, but even then, the key is encrypted.
Digital Signatures
When a person digitally signs a document in an application, such as in email or a Microsoft Word document, he or she confirms that the document is authentic. In this context, authentic means
that the creator of the document is known, and that the document has not been altered in any way since
the person created and signed it.
PKI can achieve this level of security. Like the web server in the previous topic, a user also can have a certificate with a public and private key pair. This certificate is used in the process of digital signing.
When an author digitally signs a document or a message, the operating system on his or her machine
creates a message cryptographic digest, which ranges from a 128-bit to a 256-bit number. The operating
system generates the number by running the entire message through a hash algorithm. This number is
then encrypted by using the author's private key, and it is added to the end of the document or message.
When the document or message reaches the recipient, it will go through the same hash algorithm as when the author digitally signed it. The recipient uses the author's public key to decrypt the digest that is added to the message. After it is decrypted, it is compared to the digest that the recipient has generated. If they are the same, the document or the message was not altered during transport. In addition, if the recipient is able to decrypt the digest by using the author's public key, this means that the digest was encrypted by using the author's private key, and that confirms the author's identity. At the end, the recipient also verifies the certificate that was used to prove the author's identity. This check also verifies the validity period, CRL, subject name, and certificate chain trust.
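To illustrate only the digest step of this process, you can compute a file hash in Windows PowerShell; the actual signing is handled by the application with the author's certificate, and the file path here is illustrative:
# Compute a SHA-256 digest of a document; a digital signature encrypts this digest with the author's private key
Get-FileHash -Path C:\Docs\Contract.docx -Algorithm SHA256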
Demonstration Steps
1.
On LON-CL1, open the Windows PowerShell command-line interface, and then run mmc.exe.
2.
3.
Start the Request New Certificate Wizard, and then enroll for a User certificate.
4.
Open Microsoft Word 2013, type some text in the blank document, and then save the document.
5.
Click Insert on the ribbon, and then insert the signature line.
6.
7.
Right-click the signature line, and then select to sign the document.
8.
9.
EFS
To encrypt a file by using EFS, you must have an EFS certificate issued. Like other certificates, this
certificate also provides a private and public key pair. However, these keys are not used directly to encrypt
or decrypt content. The reason for this is that algorithms that use asymmetric encryption, where
one key is used for encryption and another for decryption, are inefficient. These algorithms are 100 to
1,000 times slower than algorithms that use the same key for both encryption and decryption, which is
called symmetric encryption. To overcome this problem, EFS uses a somewhat hybrid approach.
When a user selects the option to encrypt a file, the local computer generates a symmetric key, which is
also known as a file encryption key, and uses that key to encrypt the file. After encrypting the file, the
system uses the user's public key to encrypt the symmetric key and then stores it in the file header.
When the user who originally encrypted the file wants to decrypt the file and access its content, the local
computer accesses the user's private key and first decrypts the symmetric key from the file header, which is
also called the Data Decryption Field. After that, the symmetric key is used to decrypt the content.
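From the command line, EFS encryption is typically managed with the Cipher.exe tool. The following is a minimal sketch with an illustrative folder path:
# Encrypt a folder with EFS so that files added to it are also encrypted
cipher /e /s:C:\SecureData
# Display which certificates (user and recovery agent) can decrypt the encrypted files
cipher /c C:\SecureData\*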
This works well if the file's owner is the only person who accesses the encrypted file. However, there are
scenarios where you want to share encrypted files with other users, and it might be inconvenient or
unacceptable to decrypt the file before sharing it with other people. If the user who originally encrypted
the file loses their private key, the file might be inaccessible to anyone.
To resolve this, a Data Recovery Field is defined for each file encrypted with EFS. When you configure EFS
for use locally or in an AD DS domain, the Data Recovery Agent role is defined by default and assigned to the local or domain administrator. The Data Recovery Agent is actually a certificate with a key pair that can be used
to decrypt files in case the private key of the originating user is not accessible for any reason.
When a user encrypts the file with EFS, his or her public key is used to encrypt the symmetric key, and that
encrypted key then is stored to the Data Decryption Field in the file header. At the same time, the public
key of the Data Recovery Agent is used to encrypt the symmetric key once more. The symmetric key is
encrypted with a public key of the Data Recovery Agent and then is stored to the Data Recovery Field in
the file header. If more than one Data Recovery Agent is defined, the symmetric key is encrypted with
each Data Recovery Agent's public key. Then, if the user who originally encrypted the file does not have a
private key available for any reason, the Data Recovery Agent can use its private key to decrypt the
symmetric key from Data Recovery Field, and then decrypt the file.
Note: As an alternative to the Data Recovery Agent, you also can use the Key Recovery
Agent (KRA) to retrieve a user's private key from a CA database, if key archival is enabled for the
EFS certificate template and on the CA.
When a user wants to share an encrypted file with other users, the process is similar to the Data Recovery
Agent process. When EFS sharing is selected, the file's owner must select a certificate for each user who shares the file. These certificates can be published to AD DS and taken from there. When the certificate is selected, the public key of the destination user is taken, and the symmetric key is encrypted and added to the file header. Then the other user also can access the EFS-encrypted content, as he or she can use his or her private key to decrypt the symmetric key.
Note: You can also define the Data Recovery Agent for BitLocker Drive Encryption. Because
the BitLocker Data Recovery Agent certificate template is not predefined, you can copy the KRA
template and then add the BitLocker encryption and the BitLocker Drive Recovery Agent from the
application policies. After you enroll a user for this certificate, you can add it as the BitLocker Data Recovery Agent at the domain level by using Group Policy settings, in the following path:
Computer Configuration\Windows Settings\Security\Public Key Policies\BitLocker Drive Encryption.
Email Encryption
Besides the use of EFS to encrypt files and BitLocker to encrypt drives, you also can use certificates to
encrypt emails. However, email encryption is more complicated than a digital signature. While you can
send a digitally signed email to anyone, you cannot do the same with an encrypted email. To send an
encrypted email to someone with a PKI, you must possess the recipient's public key from his or her key
pair. In the AD DS environment, which uses Exchange Server as an email system, you can publish the
public keys of all mailbox users to a global address list (GAL). When you do that, an application such as
Outlook can extract a recipient's public key easily from the GAL when you are sending encrypted email. When you send an encrypted email to an internal user, your email application takes the recipient's public key
from the GAL, encrypts the email with it, and then sends the email. After the user receives the email, he or
she uses his or her private key from the certificate to decrypt the content of the email.
Sending an encrypted email to external users is more complicated. While you can publish public keys of
internal users to AD DS or the GAL, you cannot do the same with external users. To send an encrypted
email to an external user, you first must get his or her public key. You can get the key if the external user
sends it to you in a .cer file, which you can import to your local address book. In addition, if an external
user sends you one digitally signed email, you will get his or her public key, which you also can import to
your local address book. After importing the public key into your address book, you can use it to send
encrypted emails to external users.
Note: If you want to provide authenticity, content consistency, and protection, you can
digitally sign and encrypt a message that you are sending.
Lesson 2
PKI Overview
PKI helps you verify and authenticate the identity of each party involved in an electronic transaction. It
also helps you establish trust between computers and the corresponding applications that application
servers are hosting. A common example includes the use of PKI technology to secure websites. Digital
certificates are key PKI components that contain electronic credentials, which are used to authenticate
users or computers. Moreover, certificates can be validated using certificate discovery, path validation,
and revocation checking processes. Windows Server 2012 supports building a certificate services
infrastructure in your organization by using AD CS components.
Lesson Objectives
After completing this lesson, you will be able to:
Describe PKI.
Describe CAs.
What Is PKI?
PKI is a combination of software, encryption
technologies, processes, and services that assist
an organization with securing its communications
and business transactions. It is a system of digital
certificates, CAs, and other registration authorities.
When an electronic transaction takes place, PKI
verifies and authenticates the validity of each
party involved. PKI standards are still evolving,
but they are widely implemented as an essential
component of electronic commerce.
Infrastructure. The meaning in this context is the same as in any other context, such as electricity,
transportation, or water supply. Each of these elements has a specific job, and requirements that must
be met for it to function efficiently. The sum of these elements allows for the efficient and safe use of
PKI. The elements that make up a PKI include the following:
o A CA
o A registration authority
o Client-side processing
o A certificate repository
You will learn more about most of these components in later topics and lessons throughout this module.
Public/Private Keys. In general, there are two methods for encrypting and decrypting data:
o Symmetric encryption: The methods to encrypt and decrypt data are identical, or mirrors of each
other. A particular method or key encrypts the data. To decrypt the data, you must have the
same method or key. Therefore, anyone who has the key can decrypt the data. The key must
remain private to maintain the integrity of the encryption.
o Asymmetric encryption: In this case, the methods to encrypt and decrypt data are neither
identical nor mirrors of each other. A particular method or key encrypts data. However, a
different key decrypts the data. This is achieved by using a pair of keys. Each person gets a key
pair, which consists of a public key and a private key. These keys are unique. The private key can
decrypt data that the public key encrypts, and vice versa. In this situation, the keys are sufficiently
different and knowing or possessing one does not allow you to determine the other. Therefore,
you can make one of the keys (public) publicly available without reducing the security of the
data, as long as the other key (private) remains private, hence the name public key infrastructure.
Algorithms that use symmetric encryption are fast and efficient for large amounts of data. However,
because they use a symmetric key, they are not considered secure enough, because you always must
transport the key to the other party. Alternatively, algorithms that use asymmetric encryption are secure,
but are very slow. Because of this, it is common to use a hybrid approach, which means that symmetric
encryption encrypts data, while asymmetric encryption protects the symmetric encryption key.
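The following Windows PowerShell sketch illustrates this hybrid approach conceptually: the data is encrypted with a symmetric AES key, and only that key is protected with an asymmetric RSA key pair. It calls .NET classes directly and is for illustration only, not a production implementation:
# Symmetric part: encrypt the data with an AES session key
$aes = [System.Security.Cryptography.Aes]::Create()
$data = [System.Text.Encoding]::UTF8.GetBytes("Sample payload")
$cipherText = $aes.CreateEncryptor().TransformFinalBlock($data, 0, $data.Length)
# Asymmetric part: protect the AES key with the recipient's (RSA) public key
$rsa = New-Object System.Security.Cryptography.RSACryptoServiceProvider 2048
$protectedKey = $rsa.Encrypt($aes.Key, $true)   # $true selects OAEP padding
# The recipient uses the matching private key to recover the AES key, and then decrypts the data
$sessionKey = $rsa.Decrypt($protectedKey, $true)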
When you implement a PKI solution, your entire system, and especially the security aspect, can benefit.
The benefits of using PKI include:
Confidentiality. A PKI solution enables you to encrypt both stored and transmitted data.
Integrity. You can use PKI to sign data digitally. A digital signature identifies whether any data was
modified while information was in transit.
Authenticity and nonrepudiation. Authentication data passes through hash algorithms such as Secure
Hash Algorithm 1 to produce a message digest. The sender signs the message digest by using his or
her private key to prove that he or she produced the message digest. Nonrepudiation is digitally
signed data in which the digital signature provides both proof of the integrity of signed data, and
proof of the origin of data.
Standards-based approach. PKI is standards-based, which means that multiple technology vendors
are compelled to support PKI-based security infrastructures. It is based on industry standards defined
in RFC 2527, Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices
Framework.
Certificate templates. This component describes the content and purpose of a digital certificate.
When requesting a certificate from an AD CS enterprise CA, the certificate requestor will be able,
depending on his or her access rights, to select from a variety of certificate types based on certificate
templates, such as User and Code Signing. The certificate template saves users from low-level,
technical decisions about the type of certificate they need. In addition, certificate templates allow administrators to control who can request which certificates.
CRLs are complete, digitally signed lists of certificates that have been revoked. These lists are
published periodically. Clients can retrieve and cache them (based on the configured lifetime of
the CRL). The lists are used to verify a certificates revocation status.
Online Responders are part of the Online Certificate Status Protocol (OCSP) role service in
Windows Server 2008 and Windows Server 2012. An Online Responder can receive a request to
check for revocation of a certificate without requiring the client to download the entire CRL. This
speeds up certificate revocation checking, and reduces the network bandwidth. It also increases
scalability and fault tolerance, by allowing for array configuration of Online Responders.
Public key-based applications and services. This relates to applications or services that support public key encryption. In other words, the application or services must be able to support public key implementations to gain the benefits of public key encryption.
Certificate and CA management tools. Management tools provide command-line and GUI-based
tools to:
o Configure CAs
Authority information access (AIA) and CDPs. AIA points determine the location where CA certificates
can be found and validated, and CDP locations determine the points where CRLs can be found during
certificate validation process. Because CRLs can become large, (depending on the number of
certificates issued and revoked by a CA), you can also publish smaller, interim CRLs called delta CRLs.
Delta CRLs contain only the certificates revoked since the last regular CRL was published. This allows
clients to retrieve the smaller delta CRLs and quickly build a complete list of revoked certificates. The
use of delta CRLs also allows revocation data to be published more frequently, because the size of a
delta CRL means that it usually does not require as much time to transfer as a full CRL.
Hardware security module (HSM). An HSM is an optional secure cryptographic hardware device that accelerates cryptographic processing for managing digital keys. It is a high-security, specialized storage device that is connected to the CA for managing the certificates. Typically, an HSM is physically attached to a computer. This is an optional add-on in your PKI, and is most widely used in high-security environments where a compromised key would have a significant impact.
Note: The most important component of any security infrastructure is physical security. A
security infrastructure is not just the PKI implementation. Other elements, such as physical security and adequate security policies, are also important parts of a holistic security infrastructure.
When you deploy the first CA (root CA) in your network, it issues a certificate for itself. After that, other CAs
receive certificates from the root CA. You can also choose to issue a certificate for your CA by using one of
the public CAs.
CA Web enrollment. This component provides a method to issue and renew certificates for users,
computers, and devices that are not joined to the domain, are not connected directly to the network,
or run operating systems other than Windows.
Online Responder. You can use this component to configure and manage OCSP validation and
revocation checking. Online Responder decodes revocation status requests for specific certificates,
evaluates the status of those certificates, and returns a signed response containing the requested
certificate status information. Unlike in Windows Server 2008 R2, you can install the Online Responder on
any edition of Windows Server 2012. When you use the Online Responder, the certificate revocation data
can come from a CA on a computer that is running Windows Server 2003 or Windows Server 2008, or
from a non-Microsoft CA.
Network Device Enrollment Service (NDES). With this component, routers, switches, and other
network devices can obtain certificates from AD CS. On Windows Server 2008 R2, this component is
only available on the Enterprise and Datacenter editions, but with Windows Server 2012, you can
install this role service on any edition.
Certificate Enrollment Web Service (CES). This component works as a proxy between Windows 7 and
Windows 8 client computers and the CA. This component is new to Windows Server 2008 R2 and it is
also present in Windows Server 2012, and requires that the Active Directory forest be at least at the
Windows Server 2008 R2 level. It enables users to connect to a CA by means of a web browser to
perform the following:
Retrieve CRLs.
Enroll over the internet or across forests (new to Windows Server 2008 R2).
Certificate Enrollment Policy Web Service (CEP). This component is new to Windows Server 2008 R2
and it is also present in Windows Server 2012. It enables users to obtain certificate enrollment policy
information. Combined with the CES, it enables policy-based certificate enrollment when the client
computer is not a member of a domain, or when a domain member is not connected to the domain.
However, implementation of a smart card infrastructure has historically proved too expensive in some
situations. To implement smart cards, companies had to buy hardware, including smart card readers and
smart cards. In some cases, this cost prevented the deployment of multifactor authentication.
To address these issues, Windows Server 2012 AD CS introduces a technology that provides the security of
smart cards while reducing material and support costs. This technology is Virtual Smart Cards. Virtual
Smart Cards emulate the functionality of traditional smart cards, but instead of requiring the purchase of
additional hardware, they utilize technology that users already own and are more likely to have with them
at all times.
Virtual Smart Cards in Windows Server 2012 leverage the capabilities of the TPM chip that is present on
most of the computer motherboards produced in the past two years. Because the chip is already in the
computer, there is no cost for buying smart cards and smart card readers. However, unlike traditional
smart cards, which required that the user be in physical possession of the card, in the Virtual Smart Card
scenario, a computer (or, more specifically, the TPM chip on its motherboard) acts like a smart card. When
using a TPM chip, you also achieve two-factor authentication, similar to when you use a smart card with a
PIN. A user must have his or her computer (which has been set up with the Virtual Smart Card), and know
the PIN required to use his or her Virtual Smart Card.
It is important to understand how Virtual Smart Cards protect private keys. Traditional smart cards have
their own storage and cryptographic mechanism for protecting the private keys. In the Virtual Smart Card
scenario, private keys are protected not by isolation of physical memory, but rather by the cryptographic
capabilities of the TPM. All sensitive information that is stored on a smart card is encrypted by using the
TPM and then stored on the hard drive in its encrypted form. Although private keys are stored on a hard
drive (in encrypted form), all cryptographic operations occur in the secure, isolated environment of the
TPM. Private keys never leave this environment in unencrypted form. If the hard drive of the machine is
compromised in any way, private keys cannot be accessed, because the TPM protects and encrypts them. To
provide more security, you can also encrypt the drive with BitLocker Drive Encryption. To deploy Virtual
Smart Cards, you need Windows Server 2012 AD CS and a Windows 8 client machine with a TPM chip on
the motherboard.
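For example, on a Windows 8 computer that has an initialized TPM, an administrator can create a Virtual Smart Card from an elevated command prompt by using the Tpmvscmgr command-line tool. The following is a sketch only; the card name is an arbitrary example:

tpmvscmgr.exe create /name "AdatumVSC" /pin prompt /adminkey random /generate

After the virtual smart card device is created, you enroll a smart card logon certificate to it in the same way that you would for a physical smart card.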
(The table in this topic compares external public CAs and internal private CAs by their advantages and disadvantages. For an internal private CA, the listed advantages include customized templates and autoenrollment.)
Some organizations have started using a hybrid approach to their PKI architecture. A hybrid approach
uses an external public CA for the root CA, and a hierarchy of internal CAs for distribution of certificates.
This gives organizations the advantage of having their internally issued certificates trusted by external
clients, while still providing the advantages of an internal CA. The only disadvantage to this method is
cost. A hybrid approach is typically the most expensive approach, because public certificates for CAs are
very expensive.
You can also choose to deploy an internal PKI for internal purposes such as EFS and digital signatures. For
external purposes, such as protecting web or mail servers with SSL, you must buy a public certificate. This
approach is not very expensive, and it is probably the most cost-effective solution.
Some organizations that require a higher security level might also choose to define their own list of
trusted root CAs, both public and internal.
Cross-Certification Benefits
A cross-certification hierarchy provides the
following benefits:
Companies usually deploy cross-certification to establish mutual trust at the PKI level and to support
applications that rely on PKI, such as establishing SSL sessions between companies or exchanging
digitally signed documents.
Question: Your company is currently acquiring another company. Both companies run their
own PKI. What could you do to minimize disruption and continue to provide PKI services
seamlessly?
Lesson 3
Deploying CAs
The first CA that you install will be a root CA. After you install the root CA, you can optionally install a
subordinate CA to apply policy restrictions and distribute certificates. You can also use a CAPolicy.inf file
to automate additional CA installations and provide additional configuration settings that are not
available with the standard GUI-based installation. In addition, you can use Policy and Exit modules in the
CA to integrate your CA with other services, such as Microsoft Forefront Identity Manager (FIM). In this
lesson, you will learn about deploying and managing CAs in the Windows Server 2012 environment.
Lesson Objectives
After completing this lesson, you will be able to:
Configure CA properties.
Most commonly, CA hierarchies have two levels, with the root CA at the top level and the subordinate
issuing CA on the second level. You usually take the root CA offline while the subordinate CA issues and
manages certificates for all clients. However, in some more complex scenarios, you also can deploy other
types of CA hierarchies.
In general, CA hierarchies fall into one of the following categories:
CA hierarchies with a policy CA. Policy CAs are types of subordinate CAs that are located directly
below the root CA in a CA hierarchy. You use policy CAs to issue CA certificates to subordinate CAs
that are located directly below the policy CA in the hierarchy. The role of a policy CA is to describe
the policies and procedures that an organization implements to secure its PKI, the processes that
validate the identity of certificate holders, and the processes that enforce the procedures that manage
certificates. A policy CA issues certificates only to other CAs. The CAs that receive these certificates
must uphold and enforce the policies that the policy CA defined. It is not mandatory to use policy
CAs unless different divisions, sectors, or locations of your organization require different issuance
policies and procedures. However, if your organization requires different issuance policies and
procedures, you must add policy CAs to the hierarchy to define each unique policy. For example, an
organization can implement one policy CA for all certificates that it issues internally to employees,
and another policy CA for all certificates that it issues to users who are not employees.
CAs with a two-tier hierarchy. In a two-tier hierarchy, there is a root CA and at least one subordinate
CA. In this scenario, the subordinate CA is responsible for policies and for issuing certificates to the
requestors.
The following characteristics distinguish a stand-alone CA from an enterprise CA:

Typical usage
o Stand-alone CA: A stand-alone CA is typically used for offline CAs, but it can be used for a CA that is consistently available on the network.
o Enterprise CA: An enterprise CA is typically used to issue certificates to users, computers, and services in the organization, and it is not typically taken offline.

Active Directory dependencies
o Stand-alone CA: A stand-alone CA does not depend on AD DS, and it can be installed on a server that is not a domain member.
o Enterprise CA: An enterprise CA requires AD DS, and it must be installed on a domain member server.

Certificate request methods
o Stand-alone CA: Users can only request certificates from a stand-alone CA by using a manual procedure or CA Web enrollment.
o Enterprise CA: Users can request certificates from an enterprise CA by using the following methods: manual enrollment, Web enrollment, autoenrollment, or an enrollment agent.

Certificate issuance methods
o Stand-alone CA: By default, all certificate requests are held as pending until a CA administrator or certificate manager approves them.
o Enterprise CA: Certificates can be issued automatically or held as pending, based on the configuration of the certificate template.
Most commonly, the root CA is deployed as a stand-alone CA, and it is taken offline after it issues a
certificate for itself and for a subordinate CA. In contrast, a subordinate CA is usually deployed as an
enterprise CA, and is configured in one of the scenarios described in the previous topic.
The default CSP is the Microsoft Strong Cryptographic Provider. Any provider whose name contains a
number sign (#) is a Cryptography Next Generation (CNG) provider.
If you decide to deploy an offline, stand-alone root CA, there are some specific considerations
that you should keep in mind:
Before you issue a subordinate certificate from the root CA, make sure that you provide at least one
CDP and AIA location that will be available to all clients. This is because, by default, a stand-alone
root CA has the CDP and AIA located on itself. Therefore, when you take the root CA off the network,
the revocation check will fail because the CDP and AIA locations will be inaccessible. When you define
these locations, you should copy the CRL and AIA information manually to that location.
Set a validity period (for example, one year) for the CRLs that the root CA publishes. This means that
you will have to turn on the root CA once per year to publish a new CRL, and then copy it to a location
that is available to the clients. If you fail to do this, after the CRL on the root CA expires, the
revocation check for all certificates also will fail.
Use Group Policy to publish the root CA certificate to a trusted root CA store on all server and client
machines. You must do this manually because a stand-alone CA cannot do it automatically, unlike an
enterprise CA. You also can publish the root CA certificate to AD DS by using the certutil command-line tool.
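As an illustration of these considerations, the following commands show one way to lengthen the CRL publication interval and publish a new CRL on the offline root CA (the first five commands), and then publish the root CA certificate and CRL to AD DS from a domain-joined computer (the last two commands). This is a sketch only; the file names are placeholders for your own exported root CA certificate and CRL files:

certutil -setreg CA\CRLPeriodUnits 52
certutil -setreg CA\CRLPeriod "Weeks"
net stop certsvc
net start certsvc
certutil -CRL
certutil -dspublish -f RootCA.cer RootCA
certutil -dspublish -f RootCA.crl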
In this demonstration, you will see how to deploy an enterprise root CA.
Demonstration Steps
Deploy a root CA
1. In the Server Manager, add the Active Directory Certificate Services role.
2.
3. After the installation completes successfully, click the text Configure Active Directory Certificate
Services on the destination server.
4.
5.
6.
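As an alternative to the wizard, the role can also be added and configured from Windows PowerShell. The following is a minimal sketch that assumes you are signed in with Enterprise Admins credentials; the CA common name and key parameters are example values only:

Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
Install-AdcsCertificationAuthority -CAType EnterpriseRootCA -CACommonName "AdatumRootCA" -KeyLength 4096 -HashAlgorithmName SHA256 -ValidityPeriod Years -ValidityPeriodUnits 10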
Usage. You may issue certificates for a number of purposes, such as Secure Multipurpose Internet
Mail Extensions (S/MIME), EFS, or Remote Access Service (RAS). The issuing policy for these uses may
be distinct, and separation provides a basis for administering these policies.
Organizational divisions. You may have different policies for issuing certificates, depending upon an
entity's role in the organization. You can create subordinate CAs to separate and administer these
policies.
Geographic divisions. Organizations often have entities at multiple physical sites. Limited network
connectivity between these sites may require individual subordinate CAs for many or all sites.
Load balancing. If you will be using your PKI to issue and manage a large number of certificates,
having only one CA can result in a considerable network load for that single CA. Using multiple
subordinate CAs to issue the same kind of certificates divides the network load among CAs.
Backup and fault tolerance. Multiple CAs increase the possibility that your network will always have
operational CAs available to respond to user requests.
Each CAPolicy.inf file is divided into sections, and has a simple structure, which can be described as
follows:
A section is an area in the .inf file that contains a logical group of keys. A section always appears in
brackets in the .inf file.
A key is the parameter that is to the left of the equal (=) sign.
A value is the parameter that is to the right of the equal (=) sign.
For example, if you want to specify an Authority Information Access point in the CAPolicy.inf file, you use
the following syntax:
[AuthorityInformationAccess]
URL=http://pki.adatum.com/CertData/adatumCA.crt
Certification practice statement. Describes the practices that the CA uses to issue certificates. This
includes the types of certificates issued, information for issuing, renewing, and recovering certificates,
and other details about the CA's configuration.
CRL publication intervals. Defines the interval between publications for the base CRL.
Key size. Defines the length of the key pair used during the root CA renewal.
Certificate validity period. Defines the validity period for a root CA certificate.
CDP and AIA paths. Provides the path used for root CA installations and renewals.
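For example, a minimal CAPolicy.inf for an offline root CA might look similar to the following. This is a sketch only, and the values are illustrative; adjust them to your own PKI design:

[Version]
Signature="$Windows NT$"

[Certsrv_Server]
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=10
CRLPeriod=Weeks
CRLPeriodUnits=52
CRLDeltaPeriodUnits=0
LoadDefaultTemplates=0

Setting CRLDeltaPeriodUnits to 0 disables delta CRL publishing, which is common for an offline root CA, and LoadDefaultTemplates=0 prevents the default certificate templates from being published automatically on an enterprise CA.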
Once you have created your CAPolicy.inf file, you must copy it into the %SystemRoot% folder of your
server (for example, C:\Windows) before you install the AD CS role, or before you renew the CA certificate.
Note: The CAPolicy.inf file is processed for both the root and subordinate CA installations
and renewals.
The following roles and groups are used in AD CS role-based administration:
o CA administrator: Manages the CA. This role is assigned through the Manage CA permission on the CA.
o Certificate manager: Approves certificate enrollment and revocation requests. This role is assigned through the Issue and Manage Certificates permission on the CA.
o Backup operator: Performs backup and restore of the CA database and configuration. This is an operating system role.
o Auditor: Configures, views, and maintains audit logs. This is an operating system role.
o Enrollees: Users or computers that are authorized to request certificates from the CA.
Role-based administration combines operating system roles and AD CS roles to provide a complete,
segmented management solution for your CAs. Instead of assigning local administrative privileges to the
various information technology (IT) personnel involved in managing the CA, you can assign roles, which
ensure that administrators have the minimum permissions necessary to perform their jobs.
Role-based administration also reduces the administrative overhead of granting rights to administrators
because the process involves adding a user to a group or role.
Managing CA Security
To manage and configure role-based administration of a CA, and to manage security on the CA, you can
use the Security tab of the CA Properties dialog box in the Certification Authority (certsrv) console.
The following are security permissions that you can set on a CA object level:
Read. Security principals that are assigned this permission can locate the CA in AD DS, or can access it by
using the web console or services if a stand-alone CA is deployed.
Issue and Manage Certificates. Security principals that are assigned this permission can approve or
deny certificate requests that are in a pending state. They can also revoke an issued certificate,
specify a revocation reason, and unrevoke a certificate. In addition, they can read all issued
certificates and export them to files.
Manage CA. Security principals that are assigned this permission can manage and configure all
options at the CA level. They cannot manage certificates; they can manage only the CA itself.
Request Certificates. Security principals that are assigned this permission can submit certificate
requests to this CA. However, this does not mean that they can enroll for certificates; enrollment
permissions are specified at the certificate template level.
Together with defining security permissions on the access control list (ACL) of the CA object, you can also
use the Certificate Managers tab in the CA Properties. When you configure security principals that can
issue and manage certificates on the Security ACL, you can then restrict those security principals to
specific certificate templates. For example, if you want to assign user Bob permission to issue and manage
only user certificates, you put Bob on the ACL and assign him the Issue and Manage Certificates permission.
However, you then use the Certificate Managers tab to restrict Bob to the User certificate template,
because you do not want Bob to be able to issue and manage all certificates.
A CA can use multiple exit modules simultaneously, unlike the policy module, where you can have only
one active policy module at a time.
For example, if you want to send an email to a specific address each time the certificate is issued, you
have to use certutil to specify these settings, as they are not available in the CA administrator console.
First, you should specify the Simple Mail Transfer Protocol (SMTP) server that is used to send emails, which
you can do by typing the following certutil command:
certutil -setreg exit\smtp\<smtpServerName>
You have to enter the fully qualified domain name (FQDN) of your email server instead of the
<smtpServerName> variable. After this, you have to specify the event and email address to which the
notification is sent by typing the following command:
certutil -setreg exit\smtp\CRLIssued\To <E-mailString>
Note: The exit module on the CA that is configured to send emails on an event does not
use SMTP authentication. If your SMTP server requires authentication, you have to configure it on
the CA side by typing the following command:
certutil -setreg exit\smtp\SMTPAuthenticate 1
certutil -setsmtpinfo <UserName>
The <UserName> specifies the user name of a valid account on the SMTP server. You will be
prompted to provide the password for this user name.
Besides sending notification emails when a certificate is issued, you can also configure an exit module to
send notifications for the following events:
Certificate revoked.
CRL is issued.
CA service startup.
CA service shutdown.
If you want to configure an exit module to publish certificates to the file system, you can use the CA
admin console to open the properties of the exit module. After you enable the Allow certificates to be
published to the file system option and restart the CA, certificates issued by that CA are written as .cer
files to the C:\Windows\System32\CertEnroll folder on the CA. However, for this to happen, the
certificate requestors must include a certfile:true attribute in their requests.
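For example, a requestor could include that attribute when submitting a request with the certreq tool; the request file name below is a placeholder:

certreq -submit -attrib "certfile:true" request.req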
If you deploy custom exit modules, their configuration might be possible through the CA admin console
or with some other utility.
Demonstration Steps
1.
On LON-SVR1, open the Certification Authority console, and then open the Properties for
AdatumRootCA.
2.
3.
4.
5.
6.
7.
Performing a CA Backup
You should have a CA backup even if you are not moving a CA to another computer. A CA backup is
different from ordinary backup scenarios. To perform a CA backup to move a CA to another computer,
you should perform the following procedure:
1.
If you are backing up an enterprise CA, click the Certificate Templates item in the CA console, and
then record the names of the listed certificate templates. These templates are in AD DS, so you do not
have to back them up. You must note which templates are published on the CA that you are moving
because you will have to add them manually after moving the CA.
2.
In the CA snap-in, right-click the CA name, click All Tasks, and then click Back up CA to start the
Certification Authority Backup Wizard. In the backup wizard, you have to select the option to back up
the CA's private key, CA certificate, certificate database, and certificate database log.
You also have to provide an appropriate location for the backup content. For security reasons, a
password should protect the CA's private key.
3. Export the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration to a file. This key stores the CA configuration settings.
Note: We recommend that you save this registry key to file in the same folder with the CA
backup from the previous step.
4. After this is done, if you want to move the CA to another computer, uninstall the CA
from the old server, and then rename the old server or permanently disconnect it from the network.
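If you prefer the command line, steps 2 and 3 can also be performed with the certutil and reg tools. The following is a sketch only; the backup folder and password are placeholders:

certutil -p Pa$$w0rd -backup C:\CABackup
reg export HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration C:\CABackup\CAConfig.reg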
Before you begin the restore procedure, confirm that the %SystemRoot% folder of the target server
matches the %SystemRoot% folder of the server from which the backup is taken.
In addition, the location of the CA restore must match the location of the CA backup. For example, if you
back up the CA from the D:\Windows\System32\Certlog folder, you must restore the backup to the D:\
Windows\System32\Certlog folder. After you restore the backup, you can move the CA database files to a
different location.
Performing a CA Restore
The CA restore procedure is initiated when you have to repair your current CA or when you want to move
the CA role to another computer.
To restore the CA, perform the following procedure:
1.
Install AD CS on the target computer. Select to install either a Stand-alone or an Enterprise CA,
depending on the type of CA that you are moving. When you come to the Set Up Private Key page,
click Use existing private key. Then choose to select a certificate and use its associated private key.
This provides you with the ability to use the existing certificate from the old CA.
2.
On the Select Existing Certificate page, click Import, type the path of the .p12 file in the backup
folder, type the password that you selected in the previous procedure to protect the backup file, and
then click OK. When you are prompted for Public and Private Key Pair, verify that Use existing
keys is selected. This is very important, as you want to keep the same root CA certificate.
3.
When prompted on the Certificate Database page, specify the same location for the certificate
database and certificate database log as on the previous CA computer. After you select all these
options, wait for the CA setup to finish.
4.
After the setup is done, open the Services snap-in to stop the AD CS service. You do this to restore
settings from the old CA.
5.
Locate the registry file that you saved in the backup procedure, and then double-click it to import the
registry settings.
6.
After you restore the registry settings, open the CA Management console, right-click the CA name,
click All Tasks, and then click Restore CA. This will start the Certification Authority Restore Wizard. In
the wizard, you should select the Private key and CA certificate and the Certificate database and
certificate database log check boxes. This specifies that you want to restore these objects from
backup. Next, provide a backup folder location and verify the settings for the restore. The Issued Log
and Pending Requests settings should be displayed.
7.
8.
If you restored an enterprise CA, ensure that the certificate templates from AD DS that you recorded
in the previous procedure are present and accessible to the new CA.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 50 minutes
Virtual machines: 20412D-LON-DC1, 20412D-LON-SVR1, 20412D-LON-SVR2, 20412D-LON-CA1
User name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4. Sign in by using the following credentials:
User name: Adatum\Administrator
Password: Pa$$w0rd
5. Repeat steps two and three for 20412D-LON-SVR1, 20412D-LON-SVR2, and 20412D-LON-CA1. Do
not sign in until instructed to do so.
2.
Use the Add Roles and Features Wizard to install the Active Directory Certificate Services role.
3.
After installation completes successfully, click the text Configure Active Directory Certificate
Services on the destination server.
4.
5.
Set the key length to 4096, and then accept all other values as default.
6.
7.
8.
9.
Task 2: Create a DNS host record for LON-CA1 and configure sharing
1.
2.
Create a host record for LON-CA1 in the Adatum.com forward lookup zone.
3.
4.
On LON-CA1, from the Network and Sharing Center, turn on file and printer sharing on guest and
public networks.
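If you prefer, the host record in this task can also be created from Windows PowerShell on LON-DC1. This is a sketch only; the IP address is a placeholder for the address that is assigned to LON-CA1 in your lab environment:

Add-DnsServerResourceRecordA -ZoneName "Adatum.com" -Name "LON-CA1" -IPv4Address "172.16.0.40"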
Results: After completing this exercise, you will have deployed a root stand-alone certification
authority (CA).
2.
Install the Active Directory Certificate Services role on LON-SVR1. Include the Certification Authority
and Certification Authority Web Enrollment role services.
3.
After installation is successful, click Configure Active Directory Certificate Services on the
destination server.
4.
Select the Certification Authority and Certification Authority Web Enrollment role services.
5.
6.
7.
8.
On LON-SVR1, install the C:\RootCA.cer certificate to the Trusted Root Certification Authorities store.
2.
Navigate to Local Disk (C:), and copy the AdatumRootCA.crl and LON-CA1_AdatumRootCA.crt
files to C:\inetpub\wwwroot\CertData.
3.
4.
Switch to LON-CA1.
5.
From the Certification Authority console on LON-CA1, submit a new certificate request by using the
.req file that you copied in step 3.
6.
Issue the certificate, and then export it to .p7b format with a complete chain. Save the file to \\lon-svr1\C$\SubCA.p7b.
7.
Switch to LON-SVR1.
8.
Install the subordinate CA certificate on LON-SVR1 by using the Certification Authority console.
9.
On LON-DC1, from the Server Manager, open the Group Policy Management Console.
2.
3.
Publish the RootCA.cer file from \\lon-svr1\C$ to the Trusted Root Certification Authorities store,
which is located in Computer Configuration\Policies\Windows Settings\Security Settings\Public
Key Policies.
Results: After completing this exercise, you will have deployed and configured an enterprise subordinate
CA.
Question: Why is it not recommended to install just an enterprise root CA?
Lesson 4
Lesson Objectives
After completing this lesson, you will be able to:
When content is encrypted with the private key, it can be decrypted only with the public key.
There is no other key that is in the same relation with the keys from a single key pair.
The private key cannot be derived in a reasonable amount of time from a public key, or vice versa.
During the enrollment process, a key pair is generated on the client, and the public key is then sent with the
certificate signing request (CSR) to the CA. The CA validates the CSR, and then signs the public key with
the CA's private key. The signed public key is returned to the requestor. This ensures that the private key
never leaves the system (or smart card), and that a CA trusts the certificate because the CA signed the
public key of the certificate. Certificates provide a mechanism for gaining confidence in the relationship
between a public key and the entity that owns the corresponding private key.
You can think of a certificate as being similar to a driver's license. Many businesses accept a driver's
license as a form of identification because the community accepts the license issuer (a government
institution) as trustworthy. Because businesses understand the process by which someone can obtain a
driver's license, they trust that the issuer has verified the identity of the individual to whom the license was
issued. Therefore, they can accept the driver's license as a valid form of identification. A certificate trust is
established in a similar way.
Certificate Templates
Certificate templates allow administrators to customize the distribution method of certificates, define
certificate purposes, and mandate the type of usage allowed by a certificate. Administrators can create
templates and then can deploy them quickly to an enterprise by using built-in GUI or command-line
management utilities.
Associated with each certificate template is its discretionary access control list (DACL), which defines which security principals have
permissions to read and configure the template, and which security principals can enroll or use
autoenrollment for certificates based on the template. Certificate templates and their permissions are
defined in AD DS and are valid within the forest. If more than one enterprise CA is running in the AD DS
forest, permission changes will affect all CAs.
When you define a certificate template, the definition of the certificate template must be available to all
CAs in the forest. You accomplish this by storing the certificate template information in the configuration
naming context of AD DS. The replication of this information depends on the AD DS replication schedule,
and the certificate template might not be available to all CAs until replication completes. Storage and
replication occur automatically.
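For example, on an enterprise CA that is running Windows Server 2012, you can list the certificate templates that the CA currently issues, and add another template, by using the ADCSAdministration Windows PowerShell module. This is a sketch only; the template name is an example:

Get-CATemplate
Add-CATemplate -Name "WebServer" -Force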
Note: Prior to Windows Server 2008 R2, only the Enterprise editions of Windows Server
supported management of certificate templates. In Windows Server 2008 R2 and Windows
Server 2012, you also can manage certificate templates in the Standard editions.
Aside from corresponding with Windows Server operating system versions, certificate template versions
also have some functional differences:
Windows 2000 Advanced Server operating system provides support for version 1 certificate
templates. The only modification allowed to version 1 templates is changing permissions to either
allow or disallow enrollment of the certificate template. When you install an enterprise CA, version 1
certificate templates are created by default. As of July 13, 2010, Microsoft no longer supports
Windows 2000 Server.
Windows Server 2003 Enterprise Edition operating systems provide support for version 1 and version
2 templates. You can customize several settings in the version 2 templates. The default installation
provides several preconfigured version 2 templates. You can add version 2 templates based on the
requirements of your organization. Alternatively, you can duplicate a version 1 certificate template to
create a new version 2 of the template. You can then modify and secure the newly created version 2
certificate template. When new templates are added to a Windows Server 2003 Enterprise CA, they
are version 2 by default.
Windows Server 2008 Enterprise operating systems bring support for new version 3 certificate
templates. Additionally, they provide support for version 1 and version 2. Version 3 certificate
templates support several features of a Windows Server 2008 enterprise CA, such as CNG. CNG
provides support for Suite B cryptographic algorithms such as elliptic curve cryptography (ECC). In a
Windows Server 2008 Enterprise, you can duplicate default version 1 and version 2 templates to bring
them up to version 3.
Windows Server 2008 provides two new certificate templates by default: Kerberos Authentication and
OCSP Response Signing. The Windows Server 2008 R2 operating system also supports version 3
certificate templates. When you use version 3 certificate templates, you can use CNG
encryption and hash algorithms for the certificate requests, issued certificates, and protection of
private keys for key exchange and key archival scenarios.
Windows Server 2012 operating systems provide support for version 4 certificate templates, and for
all other versions from earlier editions of Windows Server. These certificate templates are available
only to Windows Server 2012 and Windows 8. To help administrators separate the features supported
by each operating system version, the Compatibility tab was added to the certificate template
Properties dialog box. It marks options as unavailable in the certificate template properties, depending upon
the selected operating system versions of the certificate client and the CA. Version 4 certificate templates also
support both CSPs and key storage providers (KSPs). You can also configure them to require renewal
with the same key.
Upgrading certificate templates is a process that applies only in situations where the CA has been
upgraded from Windows Server 2008 or Windows Server 2008 R2 to Windows Server 2012. After the
upgrade, you can upgrade the certificate templates by launching the CA Manager console and clicking
Yes at the upgrade prompt.
Read. The Read permission allows a user or computer to view the certificate template when enrolling
for certificates. The certificate server also requires the Read permission to find the certificate
templates in AD DS.
Write. The Write permission allows a user or computer to modify the attributes of a certificate
template, which includes permissions assigned to the certificate template itself.
Enroll. The Enroll permission allows a user or computer to enroll for a certificate based on the
certificate template. However, to enroll for a certificate, you must also have Read permissions for the
certificate template.
Autoenroll. The Autoenroll permission allows a user or computer to receive a certificate through the
autoenrollment process. However, the Autoenroll permission requires the user or computer to also
have both Read and Enroll permissions for a certificate template.
As a best practice, you should assign certificate template permissions to global or universal groups only.
This is because the certificate template objects are stored in the configuration naming context in AD DS.
You should avoid assigning certificate template permissions to individual users or computer accounts.
As a best practice, keep the Read permission allocated to the Authenticated Users group. This permission
allocation enables all users and computers to view the certificate templates in AD DS. This permission
assignment also enables the CA that is running under the System context of a computer account to view
the certificate templates when assigning certificates. This permission, however, does not grant Enroll
rights, so it is safe to configure it this way.
Note: The intended use of a certificate may relate to users or to computers, based on the
types of security implementations that are required to use the PKI.
Settings that you can configure in a certificate template include the following:
CSP supported.
Key length.
Validity period.
You can also define a certificate purpose in certificate settings. Certificate templates can have the
following purposes:
Single Purpose. A single purpose certificate serves a single purpose, such as allowing users to sign in
with a smart card. Organizations utilize single purpose certificates in cases where the certificate
configuration differs from other certificates that are being deployed. For example, if all users will
receive a certificate for smart card logon but only a couple of groups will receive a certificate for EFS,
organizations will generally keep these certificates and templates separate to ensure that users only
receive the required certificates.
Multiple Purposes. A multipurpose certificate serves more than one purpose (often unrelated) at the
same time. While some templates (such as the User template) serve multiple purposes by default,
organizations will often modify templates to serve additional purposes. For example, if a company
intends to issue certificates for three purposes, it can combine those purposes into a single certificate
template to ease administrative effort and maintenance.
Modify the original certificate template. To modify a certificate template of version 2, 3, or 4, you
need to make changes and then apply them to that template. After this, any certificate issued by a CA
based on that certificate template will include the modifications that you made.
Supersede existing certificate templates. The CA hierarchy of an organization may have multiple
certificate templates that provide the same or similar functionality. In such a scenario, you can
supersede or replace the multiple certificate templates by using a single certificate template. You can
make this replacement in the Certificate Templates console by designating that a new certificate
template supersedes, or replaces, the existing certificate templates. Another benefit of superseding
the template is that the new version will be used when a certificate expires.
Demonstration Steps
Modify and enable a certificate template
1.
2.
3.
Open the IPsec certificate template Properties, and review available settings.
4.
Duplicate the Exchange User certificate template. Name it Exchange User Test1, and then configure
it to supersede the Exchange User template.
5.
Allow Authenticated Users to enroll for the Exchange User Test1 template.
6.
Lesson 5
Lesson Objectives
After completing this lesson, you will be able to:
Manual enrollment. Using this method, a device, such as a web service or a computer, generates the
private key and a certificate request. The certificate request is then transported to the CA to generate
the certificate being requested. The certificate is then transported back to the device for installation.
Use this method when the requestor cannot communicate directly with the CA, or if the device does
not support autoenrollment.
CA Web enrollment. Using this method, you enable a web-based enrollment interface for a CA so that users
can obtain certificates. To use CA Web enrollment, you must install IIS and the CA Web Enrollment role
service of AD CS. To obtain a certificate, the requestor logs on to the website, selects the appropriate certificate
template, and then submits a request. The certificate is issued automatically if the user has the
appropriate permissions to enroll for the certificate. The CA Web enrollment method should be used
to issue certificates when autoenrollment cannot be used. This can happen in the case of an
Advanced Certificate request. However, there are cases where autoenrollment can be used for certain
certificates, but not for all certificates.
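As an illustration of the manual enrollment method described above, the following sketch uses an .inf file and the certreq tool to create, submit, and install a web server certificate request. The subject name, certificate template, CA configuration string, and file names are example values only:

; request.inf
[NewRequest]
Subject = "CN=lon-svr2.adatum.com"
KeyLength = 2048
Exportable = FALSE
MachineKeySet = TRUE

[RequestAttributes]
CertificateTemplate = WebServer

certreq -new request.inf request.req
certreq -submit -config "LON-SVR1.adatum.com\Adatum-IssuingCA" request.req certnew.cer
certreq -accept certnew.cer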
Have membership in Domain Admins or Enterprise Admins, or equivalent, which is the minimum
required to complete this procedure.
Enrollment Agent (Computer). Used to request certificates on behalf of another computer subject.
Exchange Enrollment Agent (Offline Request). Used to request certificates on behalf of another
subject and supply the subject name in the request. The NDES uses this template for its Enrollment
Agent certificate.
When you create an Enrollment Agent, you can further refine the agent's ability to enroll for certificates
on behalf of others by a group and by a certificate template. For example, you might want to implement a
restriction that the Enrollment Agent can enroll for smart card logon certificates only and just for users in
a certain office or organizational unit (OU) that is the basis for a security group.
In older versions of Windows Server CA, it was not possible to permit an Enrollment Agent to enroll only a
certain group of users. As a result, every user with an Enrollment Agent certificate was able to enroll on
behalf of any user in an organization.
The Windows Server 2008 Enterprise edition operating system introduced the restricted Enrollment Agent
functionality. This functionality allows you to limit the permissions for users who are designated as
Enrollment Agents in enrolling smart card certificates on behalf of other users.
Typically, one or more authorized individuals within an organization are designated as Enrollment Agents.
The Enrollment Agent needs to be issued an Enrollment Agent certificate, which enables the agent to
enroll for certificates on behalf of users. Enrollment agents typically are members of corporate security, IT
security, or help desk teams, because these individuals are already responsible for safeguarding valuable
resources. In some organizations, such as banks that have many branches, help desk and security workers
might not be conveniently located to perform this task. In this case, designating a branch manager or
another trusted employee to act as an Enrollment Agent is required to enable smart card credentials to be
issued from multiple locations.
On a Windows Server 2012 CA, the restricted Enrollment Agent features allow an Enrollment Agent to be
used for one or many certificate templates. For each certificate template, you can choose the users or
security groups on behalf of whom the Enrollment Agent can enroll. You cannot constrain an Enrollment
Agent based on a certain Active Directory OU or container; instead, you must use security groups.
Note: Using restricted Enrollment Agents will affect the performance of the CA. To
optimize performance, you should minimize the number of accounts that are listed as Enrollment
Agents. You should also minimize the number of accounts in the Enrollment Agent certificate template's ACL.
As a best practice, use group accounts in both lists instead of individual user accounts.
Demonstration Steps
Configure the Restricted Enrollment Agent
1.
2.
Configure Allie Bellew with permissions to enroll for an Enrollment Agent certificate.
3.
4.
5.
Open a Microsoft Management console (MMC), and add the Certificates snap-in.
6.
7.
8.
Configure the restricted Enrollment Agent so that Allie can only issue certificates based on the User
template, and only for the Marketing security group.
What Is NDES?
The Network Device Enrollment Service (NDES) is
the Microsoft implementation of Simple
Certificate Enrollment Protocol (SCEP). SCEP is a
communication protocol that makes it possible for
software that is running on network devices, such
as routers and switches, which cannot otherwise
be authenticated on the network, to enroll for
X.509 certificates from a CA.
You can use NDES as an Internet Server API (ISAPI) filter
on IIS to perform the following functions:
Collect and process SCEP enrollment requests for the software that runs on network devices.
This feature applies to organizations that have PKIs with one or more Windows Server 2012-based CAs
and that want to enhance the security of their network devices. Port security, based on 802.1x, requires
that certificates be installed on switches and access points. Secure Shell, used instead of Telnet, requires a certificate
on the router, switch, or access point. NDES is the service that allows administrators to install certificates
on devices using SCEP.
Adding support for NDES can enhance the flexibility and scalability of an organization's PKI. Therefore,
PKI architects, planners, and administrators may be interested in this feature.
Before installing NDES, you must decide:
Whether to set up a dedicated user account for the service, or use the Network Service account.
The name of the NDES registration authority and the country/region to use. This information is
included in any SCEP certificates that are issued.
The CSP to use for the signature key that is used to encrypt communication between the CA and the
registration authority.
The CSP to use for the encryption key that is used to encrypt communication between the
registration authority and the network device.
In addition, you need to create and configure the certificate templates for the certificates that are used in
conjunction with NDES.
When you install NDES on a computer, this creates a new registration authority and deletes any
preexisting registration authority certificates on the computer. Therefore, if you plan to install NDES on a
computer where another registration authority has already been configured, any pending certificate
requests should be processed and any unclaimed certificates should be claimed before you install NDES.
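If the prerequisites above are met, NDES can also be installed and configured from Windows PowerShell. The following is a sketch that uses the built-in application pool identity; the registration authority name and country are example values:

Install-WindowsFeature ADCS-Device-Enrollment -IncludeManagementTools
Install-AdcsNetworkDeviceEnrollmentService -ApplicationPoolIdentity -RAName "Adatum-NDES-RA" -RACountry "US" -Force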
The CRL is published using the CA MMC snap-in (or the scheduled revocation list is published
automatically based on the configured value). CRLs can be published in AD DS, in some shared folder
location, or on a website.
When Windows client computers are presented with a certificate, they use a process to verify its
revocation status by querying the issuing CA. This process determines whether the certificate is
revoked, and then presents the information to the application that requested the verification. The
Windows client computer uses one of the CRL locations specified in the certificate to check its validity.
The Windows operating systems include a CryptoAPI, which is responsible for the certificate revocation
and status checking processes. The CryptoAPI utilizes the following phases in the certificate checking
process:
Path validation. Path validation is the process of verifying the certificate through the CA chain (or
path) until the root CA certificate is reached.
Revocation checking. Each certificate in the certificate chain is verified to ensure that none of the
certificates is revoked.
Network retrieval and caching. Network retrieval is performed by using OCSP. CryptoAPI is
responsible for checking the local cache first for revocation information and, if there is no match,
making a call by using OCSP, which is based on the URL that the issued certificate provides.
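You can observe these phases for an individual certificate by exporting it to a file and running the following certutil command, which retrieves the AIA, CDP, and OCSP URLs from the certificate and reports the chain and revocation status. The file name is a placeholder:

certutil -verify -urlfetch user.cer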
What Is AIA?
AIA addresses are the URLs in the certificates that a CA issues. These addresses tell the verifier of a
certificate where to retrieve the CA's certificate. AIA access URLs can be HTTP, File Transfer Protocol (FTP),
Lightweight Directory Access Protocol (LDAP), or FILE addresses.
What Is CDP?
The CDP is a certificate extension that indicates the location from which the CRL for a CA can be retrieved.
It can contain none, one, or many HTTP, FTP, FILE, or LDAP addresses.
Each certificate that you issue from your CA contains information about the CDP and AIA location.
Each time a certificate is used, these locations are checked. The AIA location is checked to verify the
validity of the CA certificate, while the CDP location is checked to verify content of the CRL for that CA.
At least one AIA and one CDP location must be available for each certificate. If they are not available, the
system will presume that the certificate is not valid, the revocation check will fail, and you will not be able
to use that certificate for any purpose. As CDP and AIA locations are written in each certificate that the
CA issues, it is important to configure these locations properly on the CA properties before you start
issuing certificates. Once the certificate is issued, you cannot change the CDP and AIA locations that the
certificate uses.
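On a Windows Server 2012 CA, you can configure the CDP and AIA locations from Windows PowerShell before you start issuing certificates. The following is a sketch only; the host name pki.adatum.com follows the example used earlier in this module, and the angle-bracket tokens are the standard variables that the CA substitutes when it publishes:

Add-CACrlDistributionPoint -Uri "http://pki.adatum.com/CertData/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl" -AddToCertificateCdp -Force
Add-CAAuthorityInformationAccess -Uri "http://pki.adatum.com/CertData/<ServerDNSName>_<CaName><CertificateName>.crt" -AddToCertificateAia -Force
Restart-Service certsvc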
Publication Points
You can publish CA certificates and CRLs to the following locations:
AD DS
Web servers
FTP servers
File servers
To ensure accessibility to all computers in the forest, publish the offline root CA certificate and the offline
root CA's CRL to AD DS by using the certutil command. This places the root CA certificate and the CRL in
the Configuration naming context, which is then replicated to all domain controllers in the forest.
For computers that are not members of an AD DS domain, place the CA certificate and the CRL on web
servers by using the HTTP protocol. Locate the web servers on the internal network, and also on the external
network if external client computers, or internal clients connecting from external networks, require access.
This is very important if you use internally issued certificates outside of your company.
You also can publish certificates and CRLs to the ftp:// and file:// URLs, but we recommend that you use
only the LDAP and HTTP URLs because they are the most widely supported URL formats for
interoperability purposes. The order in which you list the CDP and AIA extensions is important, because
the certificate-chaining engine searches the URLs sequentially. If your certificates are mostly used
internally in an AD DS domain, place the LDAP URL first in the list.
Note: Besides configuring CDP and AIA publication points, you also should make sure that
the CRL is valid. An online CA will automatically renew the CRL periodically, but an offline root CA
will not. If the offline root CA CRL expires, the revocation check will fail. To prevent failure, make
sure that you configure the validity period for the offline root CA CRL to be long enough, and set
a reminder to turn that CA on and issue a new CRL before the old one expires.
The OCSP client is included in the following Windows client operating systems:
Windows Vista
Windows 7
Windows 8
Windows 8.1
For scalability and high availability, you can deploy the Online Responder in a load-balanced array using
Network Load Balancing, which processes certificate status requests. You can monitor and manage each
member of the array independently. To configure the Online Responder, you must use the Online
Responder management console.
You must configure the CAs to include the URL of the Online Responder in the AIA extension of issued
certificates. The OCSP client uses this URL to validate the certificate status. You must also issue the OCSP
Response Signing certificate template, so that the Online Responder can enroll for that certificate.
IIS must be installed on the computer during the Online Responder installation. When you install an
Online Responder, the correct configuration of IIS for the Online Responder is installed automatically.
An OCSP Response Signing certificate template must be configured on the CA, and autoenrollment
used to issue an OCSP Response Signing certificate to the computer on which the Online Responder
will be installed.
The URL for the Online Responder must be included in the AIA extension of certificates issued by the
CA. The Online Responder client uses this URL to validate certificate status.
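Assuming that these requirements are met, you can add and configure the role service itself from Windows PowerShell. This is a sketch only; the OCSP URL is an example value that would also have to be added to the AIA extension on the issuing CA:

Install-WindowsFeature ADCS-Online-Cert -IncludeManagementTools
Install-AdcsOnlineResponder -Force
Add-CAAuthorityInformationAccess -Uri "http://lon-svr1.adatum.com/ocsp" -AddToCertificateOcsp -Force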
After installing an Online Responder, you need to create a revocation configuration for each CA and CA
certificate that the Online Responder serves. A revocation configuration includes all of the necessary
settings for responding to status requests regarding certificates that have been issued using a specific CA
key. These configuration settings include:
CA certificate. This certificate can be located on a domain controller, in the local certificate store, or
imported from a file.
Signing certificate for the Online Responder. This certificate can be selected automatically for you,
selected manually (which involves a separate import step after you add the revocation configuration),
or you can use the selected CA certificate.
Revocation provider. The revocation provider will provide the revocation data used by this
configuration. This information is entered as one or more URLs where the valid base and delta CRLs
can be obtained.
Demonstration Steps
Configure an Online Responder
1.
On LON-SVR1, use the Server Manager to add an Online Responder role service to the existing AD CS
role.
2.
3.
On AdatumRootCA, publish the OCSP Response signing certificate template, and allow
Authenticated users to enroll.
4.
5.
6.
7.
Lesson 6
Lesson Objectives
After completing this lesson, you will be able to:
A user profile is deleted or corrupted. A CSP encrypts a private key and stores the encrypted private
key in the local file system and registry in the user profile folder. Deletion or corruption of the profile
results in the loss of the private key material.
An operating system is reinstalled. When you reinstall the operating system, the user profiles from the
previous installation are lost, including the private key material. In this scenario, the computer's
certificates are also lost.
A disk is corrupted. If a hard disk becomes corrupted and the user profile is unavailable, the private
key material is lost automatically, in addition to installed computer certificates.
A computer is lost or stolen. If a user's computer is lost or stolen, the user profile with the private key
material is unavailable.
Note: Losing a key pair (certificate) is not always a critical situation. For example, if you lose
a certificate that is used for digital signing or logging on, you can simply issue a new one, and no data will
be affected. However, losing a certificate that was used for data encryption will result in the
inability to access the encrypted data. For that reason, key archival and recovery is critical.
The server where the keys are archived is in a separate, physically secure location.
After the KRA certificate is issued, we recommend that you remove this template from the CA. We also
recommend that you import the KRA certificate only when a key recovery procedure should be
performed.
1. The user requests a certificate from a CA and provides a copy of the private key as part of the request.
The CA, which processes the request, archives the encrypted private key in the CA database and issues
a certificate to the requesting user.
2.
An application such as EFS can use the issued certificate to encrypt sensitive files.
3.
If, at some point, the private key is lost or damaged, the user can contact the organizations
Certificate Manager to recover the private key. The Certificate Manager, with the help of the KRA,
recovers the private key, stores it in a protected file format, and then sends it back to the user.
4.
After the user stores the recovered private key in the user's local key store, the key once again can be
used by an application such as EFS to decrypt previously encrypted files or to encrypt new ones.
2.
3.
4.
A CA Officer is defined as a Certificate Manager. This user has the security permission to issue and
manage certificates. The security permissions are configured on a CA in the CA MMC snap-in, in
the CA Properties dialog box, from the Security tab.
A KRA is not necessarily a CA Officer or a Certificate Manager. These roles may be segmented as
separate roles. A KRA is a person who holds a private key for a valid KRA certificate.
Enable KRA:
Sign in as the Administrator of the server or as the CA Administrator if role separation is enabled.
In the CA console, right-click the CA name, and then click Properties. To enable key archival, on
the Recovery Agents tab, click Archive the key.
By default, the CA uses one KRA. However, you must first select the KRA certificate for the CA to
begin archival by clicking Add.
The system finds valid KRA certificates, and then displays available KRA certificates. These are
generally published to AD DS by an enterprise CA during enrollment. KRA certificates are stored
under the KRA container in the Public Key Services branch of the configuration partition in
AD DS. Because CA issues multiple KRA certificates, each KRA certificate will be added to the
multivalued user attribute of the CA object.
Select one certificate, and then click OK. Ensure that you have selected the intended certificate.
After you have added one or more KRA certificates, click OK. KRA certificates are only processed
at service start.
In the Certificate Templates MMC, right-click the key archival template, and then click Properties.
To always enforce key archival for the CA, in the Properties dialog box, on the Request
Handling tab, select the Archive subject's encryption private key check box. In Windows
Server 2008 or newer CAs, select the Use advanced symmetric algorithm to send the key to
the CA option.
Demonstration Steps
Configure automatic key archival
1.
2.
3.
Configure AdatumRootCA to use the certificate enrolled in step 2 as Key Recovery Agent.
4.
Configure the Exchange User Test 1 certificate template to allow key archival.
5.
2.
3.
Retrieve PKCS #7 binary large object (BLOB) from the database. This is the first half of the key
recovery step. A Certificate Manager or a CA Administrator retrieves the correct BLOB from the CA
database. The certificate and the encrypted private key to be recovered are present in the PKCS #7 BLOB.
The private key is encrypted by using the public key of one or more KRAs.
4.
The Certificate Manager transfers the PKCS #7 BLOB file to the KRA.
5.
Recover key material and save to PKCS #12 (.pfx). This is the second half of the key recovery step. The
holder of one of the KRA private keys decrypts the private key to be recovered. The holder also
generates a password-protected .pfx file that contains the certificate and private key.
6.
Import recovered keys. The password-protected .pfx file is delivered to the end user. This user imports
the .pfx file into the local user certificate store. Alternatively, the KRA or an administrator can perform
this part of the procedure on behalf of the user.
Demonstration Steps
Recover a lost private key
1.
2.
Delete the certificate from the Administrator personal store to simulate key loss.
3.
On LON-SVR1, in the CA console, retrieve the serial number of the lost certificate.
Use the command certutil -getkey <serialnumber> outputblob to generate a BLOB file.
Use the command certutil -recoverkey outputblob recover.pfx to recover the private key.
4.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 75 minutes
Virtual machines
20412D-LON-DC1
20412D-LON-SVR1
20412D-LON-SVR2
20412D-LON-CA1
20412D-LON-CL1
User name
Adatum\Administrator
Password
Pa$$w0rd
For this lab, you will use the available virtual machine environment. All virtual machines needed for this
lab should be running from the previous lab.
On LON-SVR1, from the Certification Authority console, open the Certificate Templates console.
2.
3.
4.
5.
Task 2: Create a new template for users that includes smart card logon
1.
2.
3.
On the Subject Name tab, clear both the Include e-mail name in subject name and the E-mail
name check boxes.
4.
Add Smart Card Logon to the Application Policies of the new certificate template.
5.
6.
Allow Authenticated Users to Read, Enroll, and Autoenroll for this certificate.
7.
Configure LON-SVR1 to issue certificates based on the Adatum User and Adatum WebSrv
templates.
Task 4: Update the web server certificate on the LON-SVR2 web server
1.
2.
3.
From the Server Manager, open the Internet Information Services (IIS) Manager.
4.
5.
Organization: Adatum
Organizational Unit: IT
City/locality: Seattle
State/province: WA
Country/region: US
Create HTTPS binding for the Default Web Site, and associate it with a new certificate.
Results: After completing this exercise, you will have created and published new certificate templates.
2.
3.
4.
Enable the Certificate Services Client Auto-Enrollment option, and enable Renew expired
certificates, update pending certificates, and remove revoked certificates and Update
certificates that use certificate templates.
5.
6.
Close Group Policy Management Editor and the Group Policy Management console.
On LON-SVR1, open Windows PowerShell and use gpupdate /force to refresh Group Policy.
2.
Open an mmc.exe console and add the Certificates snap-in focused on the user account.
3.
Verify that you have been issued a certificate based on the Adatum Smart Card User template.
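The refresh and verification in the preceding task can also be performed from a command prompt; a minimal sketch:
gpupdate /force
certutil -pulse
certutil -user -store My
Certutil -pulse triggers the autoenrollment task immediately, and certutil -user -store My lists the certificates in the current user's personal store so that you can confirm the new certificate is present.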
On LON-SVR1, from the Certification Authority console, open the Certificate Templates console.
2.
3.
4.
5.
Results: After completing this exercise, you will have configured and verified autoenrollment for users,
and configured an Enrollment Agent for smart cards.
On LON-SVR1, in the Certification Authority console, right-click Revoked Certificates, and then click
Properties.
2.
Set the CRL publication interval to 1 Days, and set the Delta CRL publication interval to 1 Hours. (A command-line equivalent is sketched after this task.)
3.
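The same publication intervals can be set from the command line. A minimal sketch, run on the CA, followed by a restart of the CA service so that the new values take effect:
certutil -setreg CA\CRLPeriodUnits 1
certutil -setreg CA\CRLPeriod "Days"
certutil -setreg CA\CRLDeltaPeriodUnits 1
certutil -setreg CA\CRLDeltaPeriod "Hours"
net stop certsvc
net start certsvc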
On LON-SVR1, use the Server Manager to add an Online Responder role service to the existing AD CS role. (A Windows PowerShell alternative is sketched after this task.)
2.
When the message displays that installation succeeded, click Configure Active Directory Certificate
Services on the destination server.
3.
4.
5.
6.
On Adatum-IssuingCA, publish the OCSP Response signing certificate template, and then allow
Authenticated users to enroll.
7.
8.
9.
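If you prefer to script the role service installation and its post-installation configuration, a minimal Windows PowerShell sketch, run on LON-SVR1, is:
# Add the Online Responder role service, then configure it with default settings
Install-WindowsFeature ADCS-Online-Cert -IncludeManagementTools
Install-AdcsOnlineResponder -Force
The OCSP revocation configuration itself is still created afterward in the Online Responder Management console, as described in the remaining steps.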
On LON-SVR1, in the Certification Authority console, right-click the Certificates Templates folder,
and then click Manage.
2.
In the Certificates Templates console, open the Key Recovery Agent certificate properties dialog
box.
3.
On the Issuance Requirements tab, clear the CA certificate manager approval check box.
4.
On the Security tab, notice that only Domain Admins and Enterprise Admins groups have the Enroll
permission.
5.
Right-click the Certificates Templates folder, and enable the Key Recovery Agent template.
Create an MMC console window that includes having the Certificates snap-in for the current user
loaded.
2.
Use the Certificate Enrollment Wizard to request a new certificate and to enroll the KRA certificate.
3.
Refresh the console window, and view the KRA in the personal store.
2.
On LON-SVR1, in the Certification Authority console, open the Adatum-IssuingCA Properties dialog
box.
3.
On the Recovery Agents tab, click Archive the key, and then add the certificate by using the Key
Recovery Agent Selection dialog box.
4.
2.
3.
On the Request Handling tab, set the option for the Archive subject's encryption private key. By
using the archive key option, the KRA can obtain the private key from the certificate store.
4.
Click the Subject Name tab, and then clear both the E-mail name and Include e-mail name in
subject name check boxes.
5.
2.
3.
Request and enroll a new certificate based on the Archive User template.
4.
5.
6.
Switch to LON-SVR1.
7.
Open the Certification Authority console, expand Adatum-IssuingCA, and then click the
Issued Certificates store.
8.
In the Certificate Authority console, note the serial number of the certificate that has been issued for
Aidan Delaney.
9.
On LON-SVR1, open a command prompt, and then type the following command:
Certutil -getkey <serial number> outputblob
Note: Replace serial number with the serial number that you wrote down.
10. Verify that the Outputblob file now displays in the C:\Users\Administrator folder.
11. To convert the Outputblob file into an importable .pfx file, at the command prompt, type the
following command:
Certutil -recoverkey outputblob aidan.pfx
2.
On the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Repeat steps two and three for 20412D-LON-CL1, 20412D-LON-SVR1, 20412D-LON-CA1, and
20412D-LON-SVR2.
Results: After completing this exercise, you will have implemented key archival and tested private key
recovery.
Question: What is the main benefit of OCSP over CRL?
Question: What must you do to recover private keys?
2.
What kind of certificates should Contoso use for EFS and digital signing?
3.
4.
How will Contoso ensure that EFS-encrypted data is not lost if a user loses a certificate?
Tools
Certificates console
Certutil.exe
Best Practice:
When you deploy CA infrastructure, deploy a stand-alone (non-domain-joined) root CA, and an
enterprise subordinate CA (issuing CA). After the enterprise subordinate CA receives a certificate from
the root CA, take the root CA offline.
Issue a certificate for the root CA for a long period of time, such as 15 or 20 years.
Troubleshooting Tip
Review Questions
Question: What are some reasons that an organization would utilize PKI?
Question: What are some reasons that an organization would use an enterprise root CA?
Question: List the requirements to use autoenrollment for certificates.
Question: What are the steps to configure an Online Responder?
Module 7
Implementing Active Directory Rights Management Services
Contents:
Module Overview
7-1
Module Overview
Active Directory Rights Management Services (AD RMS) provides a method for protecting content that
goes beyond encrypting storage devices using BitLocker Drive Encryption, or encrypting individual files
using Encrypting File System (EFS). AD RMS provides a method to protect data in transit and at rest, and
ensures that it is accessible only to authorized users for a specific duration.
This module introduces you to AD RMS. It also describes how to deploy AD RMS, how to configure
content protection, and how to make AD RMS-protected documents available to external users.
Objectives
After completing this module, you will be able to:
Lesson 1
AD RMS Overview
Before you deploy AD RMS, you need to know how AD RMS works, what components are included in an
AD RMS deployment, and how you should deploy AD RMS. You must also understand the concepts
behind various AD RMS certificates and licenses.
This lesson provides an overview of AD RMS, and reviews the scenarios in which you can use it to protect
your organization's confidential data.
Lesson Objectives
After completing this lesson you will be able to:
Describe AD RMS.
What Is AD RMS?
AD RMS is an information protection technology
that is designed to minimize the possibility of data
leakage. Data leakage is the unauthorized
transmission of information, either to people within the organization or people outside the organization, who should not be able to access
that information. AD RMS integrates with existing
Microsoft products and operating systems
including Windows Server, Microsoft Exchange
Server, Microsoft SharePoint Server, and the
Microsoft Office Suite.
AD RMS uses symmetric key encryption to protect content, together with public and private key pairs. The use license and the rights policy data in the publishing license are encrypted with the public key, and can be decrypted only with the corresponding private key. AD RMS uses private keys to digitally sign the AD RMS certificates and licenses, which ensures that they came from the proper authority.
AD RMS can protect data in transit and at rest. For example, AD RMS can protect documents that are sent
as email messages by ensuring that a message cannot be opened even if it is accidentally addressed to
the wrong recipient. You can also use AD RMS to protect data that is stored on devices such as removable
USB drives. A drawback of file and folder permissions is that once the file is copied to another location,
the original permissions no longer apply. A file that is copied to a USB drive will inherit the permissions on
the destination device. Once copied, a file that was read-only can be made editable by altering the file
and folder permissions.
With AD RMS, the file can be protected in any location, irrespective of file and folder permissions that
grant access. With AD RMS, only the users who are authorized to open the file will be able to view the
contents of that file. The author can decide which permissions (read, write, print) apply to whom and for which timeframe.
Scenario 1
The chief executive officer (CEO) copies a spreadsheet file containing the compensation packages of an
organization's executives from a protected folder on a file server to the CEO's personal USB drive. During
the commute home, the CEO leaves the USB drive in the taxi, where someone with no connection to the
organization finds it. Without AD RMS, whoever finds the USB drive can open the file. With AD RMS, it is
possible to ensure that unauthorized users cannot open the file.
Scenario 2
An internal document should be viewable by a group of authorized people within the organization.
However, these people should not be able to edit or print the document. While you can use the native
functionality of Microsoft Office Word to restrict these features, by using a password for each document
you must remember different passwords for potentially hundreds of documents. With AD RMS, you can
configure these permissions based on existing accounts in Active Directory Domain Services (AD DS) or
even share with business partners through other means.
Scenario 3
People within the organization should not be able to forward sensitive email messages that have been
assigned a particular classification. With AD RMS, you can allow a sender to assign a particular
classification to a new email message, and that classification will ensure that the recipient cannot forward
the message.
AD RMS Server
AD RMS servers must be members of an Active Directory domain. When you install AD RMS, information
about the location of the cluster is published to AD DS to a location known as the service connection
point. Computers that are members of the domain query the service connection point to determine the
location of AD RMS services.
AD RMS Client
The AD RMS client is built into the Windows Vista, Windows 7, and Windows 8 operating systems. The AD RMS client allows AD RMS-enabled applications to enforce the functionality dictated by the AD RMS template. Without the AD RMS client, AD RMS-enabled applications would be unable to interact with AD RMS-protected content.
AD RMS-Enabled Applications
AD RMS-enabled applications allow users to create and consume AD RMS-protected content. For example, Microsoft Outlook allows users to view and create protected email messages. Microsoft Word allows users to view and create protected word processing documents. Microsoft provides an AD RMS
software development kit (SDK) to allow developers to enable their applications to support AD RMS
protection of content.
SLC
The SLC is generated when you create the
AD RMS cluster. It has a validity of 250 years. The
SLC allows the AD RMS cluster to issue:
Publishing licenses.
Use licenses.
The SLC public key encrypts the content key in a publishing license. This allows the AD RMS server to
extract the content key and issue end-user licenses against the publishing license.
Active Directory Federation Services (AD FS) RACs are issued to federated users. They have a validity
of seven days.
Two types of Windows Live ID RACs are supported. Windows Live ID RACs used on private
computers have a validity of six months; Windows Live ID RACs used on public computers are valid
until the user logs off.
Publishing License
A publishing license (PL) determines the rights that apply to AD RMS-protected content. For example, the
publishing license determines if the user can edit, print, or save a document. The publishing license
contains the content key, which is encrypted using the public key of the licensing service. It also contains
the URL and the digital signature of the AD RMS server.
End-User License
An end-user license is required to consume AD RMS-protected content. The AD RMS server issues one
end-user license per user per document. End-user licenses are cached by default.
2.
3.
4.
5.
This symmetric key is encrypted to the public key of the AD RMS server that is used by the author.
6.
The recipient of the file opens it using an AD RMS application or browser. It is not possible to open
AD RMS-protected content unless the application or browser supports AD RMS. If the recipient does
not have an account certificate on the current device, one will be issued to the user at this point. The
application or browser transmits a request to the author's AD RMS server for a use license.
7.
The AD RMS server determines if the recipient is authorized. If the recipient is authorized, the AD RMS
server issues a use license.
8.
The AD RMS server decrypts the symmetric key that was encrypted in step 3, using its private key.
9.
The AD RMS server re-encrypts the symmetric key using the recipient's public key, and adds the
encrypted session key to the use license.
Lesson 2
Lesson Objectives
After completing this lesson you will be able to:
Service account. We recommend that you use a standard domain user account with no additional permissions. You can use a managed service account as the AD RMS service account.
Cryptographic mode. Choose the strength of the cryptography used with AD RMS:
o
Cluster key storage. Choose where the cluster key is stored. You can either store it within AD RMS, or
use a special cryptographic service provider (CSP). If you choose to use a CSP and you want to add
additional servers, you need to distribute the key manually.
Cluster key password. This password encrypts the cluster key, and is required if you want to join other
AD RMS servers to the cluster, or if you want to restore the cluster from backup.
Cluster website. Choose the website on the local server that will host the AD RMS cluster website.
Cluster address. Specify the fully qualified domain name (FQDN) for use with the cluster. You have the
option of choosing between a Secure Sockets Layer (SSL)-encrypted and a non-SSL-encrypted website.
If you choose non-SSL-encrypted, you will be unable to add support for identity federation. Once you
set the cluster address and port, you cannot change them without completely removing AD RMS.
Licensor certificate. Choose the friendly name that the SLC will use. It should represent the function of
the certificate.
Service connection point (SCP) registration. Choose whether the service connection point is registered
in AD DS when the AD RMS cluster is created. The service connection point allows computers that are
members of the domain to locate the AD RMS cluster automatically. Only users that are members of
the Enterprise Admins group are able to register the service connection point. You can perform this
step after the AD RMS cluster is created; you do not have to perform it during the configuration
process. However, failure to do so could result in an Error ID of 189 or 190 (failure to delete or create
the SCP registration), when a client attempts to find the SCP.
In this demonstration, you will see how to deploy AD RMS on a computer that is running Windows Server 2012.
Demonstration Steps
Configure Service Account
1.
Use the Active Directory Administrative Center to create an organizational unit (OU) named Service
Accounts in the adatum.com domain.
2.
Create a new user account in the Service Accounts OU with the following properties:
Password: Pa$$w0rd
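A Windows PowerShell sketch of these two steps follows. It assumes the service account is named ADRMSSVC, the name used for the AD RMS service account later in this module's lab:
# Requires the Active Directory module for Windows PowerShell (RSAT-AD-PowerShell)
New-ADOrganizationalUnit -Name "Service Accounts" -Path "DC=adatum,DC=com"
New-ADUser -Name ADRMSSVC -Path "OU=Service Accounts,DC=adatum,DC=com" -AccountPassword (ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force) -Enabled $true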
Prepare DNS
Use the DNS Manager console to create a host (A) resource record in the adatum.com zone with the
following properties:
o
Name: adrms
IP Address: 172.16.0.21
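Equivalently, you can create the record by using the DnsServer module; a minimal sketch, run on LON-DC1:
# Create the adrms host record in the adatum.com zone
Add-DnsServerResourceRecordA -ZoneName "adatum.com" -Name "adrms" -IPv4Address 172.16.0.21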
Sign in to LON-SVR1 with the Adatum\Administrator account using the password Pa$$w0rd.
2.
Use the Add Roles and Features Wizard to add the AD RMS role to LON-SVR1 using the following option:
Configure AD RMS
1.
In Server Manager, from the AD RMS node, click More to start post deployment configuration of AD RMS.
2.
3.
Port: 80
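If you want to script the role installation itself, a minimal sketch is shown below; the post-deployment configuration (cluster website, cluster address, port 80, cluster key, and service account) is still completed through the wizard that you start from the AD RMS node in Server Manager:
# Install the AD RMS role and its management tools on the local server
Install-WindowsFeature ADRMS -IncludeManagementTools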
Client applications, such as those included in Office 2003, Office 2007, Office 2010, and Office 2013, can
publish and consume AD RMS-protected content. You can use the AD RMS SDK to create applications that can publish and consume AD RMS-protected content. XML Paper Specification Viewer (XPS Viewer) and Windows Internet Explorer also are able to view AD RMS-protected content.
Note: Microsoft has released a new version of the AD RMS client software, AD RMS Client 2.1. You can download it from the Microsoft Download Center. Among other things, the
new version provides a new SDK that you can also download from Microsoft Download Center.
The new AD RMS SDK provides a simple mechanism for developers to create applications and
solutions that protect and consume critical content. With the new SDK, it is now possible to
rights-enable applications and solutions much faster and easier than before.
Download Center: Active Directory Rights Management Service Client 2.1
http://go.microsoft.com/fwlink/?LinkID=392469
Sign in to the server that is hosting AD RMS, and that you wish to decommission.
2.
3.
In the Active Directory Rights Management Services console, expand the Security Policies node, and
then click the Decommissioning node.
4.
5.
Click Decommission.
6.
When prompted to confirm that you want to decommission the server, click Yes.
After the AD RMS decommissioning process is complete, you should export the server licensor certificate
before you uninstall the AD RMS role.
Lesson 3
Lesson Objectives
After completing this lesson you will be able to:
Explain how to implement strategies to ensure rights policy templates are available for offline use.
Save. Allows a user to use the Save function with an AD RMS-protected document.
Export (Save as). Allows a user to use the Save As function with an AD RMS-protected document.
Forward. Used with Exchange Server. Allows the recipient of an AD RMS-protected message to forward that message.
Reply. Used with Exchange Server. Allows the recipient of an AD RMS-protected message to reply to that message.
Reply All. Used with Exchange Server. Allows the recipient of an AD RMS-protected message to use the Reply All function to reply to that message.
Extract. Allows the user to copy data from the file. If this right is not granted, the user cannot copy
data from the file.
Rights can only be granted, and cannot be explicitly denied. For example, to ensure that a user cannot
print a document, the template associated with the document must not include the Print right.
Administrators also are able to create custom rights that can be used with custom AD RMS-aware applications.
AD RMS templates can also be used to configure documents with the following properties:
Content Expiration. Determines when the content expires. The options are:
o
o
o
Use license expiration. Determines the time interval in which the use license will expire, and a new
one will need to be acquired.
Enable users to view protected content using a browser add-on. Allows content to be viewed using a
browser add-on. Does not require the user to have an AD RMS-aware application.
Require a new use license each time content is consumed. When you enable this option, client-side
caching is disabled. This means that the document cannot be consumed when the computer is offline.
Revocation policies. Allows the use of a revocation list. This allows an author to revoke permission to
consume content. You can specify how often the revocation list is checked, with the default being
once every 24 hours.
Once an AD RMS policy template is applied to a document, any updates to that template will also be
applied to that document. For example, if you have a template without a content expiration policy that is
used to protect documents, and you modify that template to include a content expiration policy, those
protected documents will now have an expiration policy. Template changes are reflected when the end-user license is acquired. If end-user licenses are configured not to expire, and the user who is accessing a
document already has a license, then the user may not receive the updated template.
Note: You should avoid deleting templates, because documents that use those templates will
become inaccessible to everyone except for members of the Super Users group. As a best practice,
archive templates instead of deleting them.
You can view the rights associated with a template by selecting the template within the Active Directory
Rights Management Services console, and then in the Actions menu, clicking View Rights Summary.
Demonstration Steps
In the Active Directory Rights Management Services console, use the Rights Policy Template node to
create a Distributed Rights Policy Template with the following properties:
o
Require a new use license every time content is consumed (disable client-side caching): Enabled
Name: ReadOnly
Windows 7
Windows 8
Windows 8.1
To enable this functionality, in the Task Scheduler, enable the AD RMS Rights Policy Template
Management (Automated) Scheduled Task, and then edit the following registry key:
HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Common\DRM
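As an illustration only, for an Office 2007 (12.0) client the value commonly added under that key is AdminTemplatePath, pointing at the local folder that holds the downloaded templates; the folder path shown here is an example and may differ in your environment:
reg add "HKCU\Software\Microsoft\Office\12.0\Common\DRM" /v AdminTemplatePath /t REG_EXPAND_SZ /d "%LocalAppData%\Microsoft\DRM\Templates"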
When computers that are running these operating systems are connected to the domain, the AD RMS
client polls the AD RMS cluster for new templates, or updates to existing templates.
As an alternative for templates distribution, you can also use shared folders to store templates. You can
configure a shared folder for templates by performing the following steps:
1.
In the Active Directory Rights Management Services console, right-click the Rights Policy Templates
node, and then click Properties.
2.
In the Rights Policy Templates Properties dialog box, specify the location of the shared folder to
which templates will be published.
User Exclusion
The User Exclusion policy allows you to configure
AD RMS so that specific user accounts, which are
identified based on email addresses, are unable to
obtain Use Licenses. You do this by adding each
user's RAC to the exclusion list. User Exclusion is
disabled by default. Once you have enabled User
Exclusion, you can exclude specific RACs.
You can use User Exclusion in the event that you need to lock a specific user out of AD RMS-protected
content. For example, when users leave the organization, you might exclude their RACs to ensure that
they are unable to access protected content. You can block the RACs that are assigned to both internal
users and external users.
Application Exclusion
The Application Exclusion policy allows you to block specific applications, such as Microsoft PowerPoint, from creating or consuming AD RMS-protected content. You specify applications based on
executable names. You also specify a minimum and a maximum version of the application. Application
Exclusion is disabled by default.
Note: It is possible to circumvent Application Exclusion by renaming an executable file.
Lockbox Exclusion
The Lockbox exclusion policy allows you to exclude AD RMS clients, such as those used with specific operating systems, for example, Windows XP and Windows Vista. Lockbox version exclusion is disabled
by default. Once you have enabled Lockbox version exclusion, you must specify the minimum lockbox
version that can be used with the AD RMS cluster.
Additional Reading: To find out more about enabling exclusion policies, see Enabling
Exclusion Policies at http://go.microsoft.com/fwlink/?LinkId=270031.
Demonstration Steps
1.
In the Active Directory Rights Management Services console, enable Application exclusion.
2.
In the Active Directory Rights Management Services console, expand the server node, and then click
Security Policies.
2.
In the Security Policies area, under Super Users, click Change Super User Settings.
3.
In the Security Policies\Super Users area, click Change super user group.
2.
Provide the email address associated with the Super Users group.
You create a rule to apply RMS protection automatically to any file that contains the word
confidential.
2.
A user creates a file with the word confidential in the text, and then saves it.
3.
The AD RMS DAC classification engine, following rules set in the central access policy, discovers the
document with the word confidential, and then initiates AD RMS protection accordingly.
4.
AD RMS applies a template and encryption to the document on the file server, and then encrypts and
classifies it.
Lesson 4
Lesson Objectives
After completing this lesson, you will be able to:
Describe the options available for making AD RMS-protected content accessible to external users.
Describe the steps necessary to configure AD RMS to share protected content with users who have
Windows Live IDs.
Determine the appropriate solution for sharing AD RMS-protected content with external users.
Federation Trust
Federation Trust provides single sign-on (SSO) for partner technologies. Federated partners can consume
AD RMS-protected content without deploying their own AD RMS infrastructure. Federation Trust requires
AD FS deployment.
The TUD of the AD RMS deployment that you want to trust must have already been exported, and
the file must be available. (TUD files use the .bin extension.)
2.
In the AD RMS console, expand Trust Policies, and then click Trusted User Domains.
3.
4.
In the Trusted User Domain dialog box, enter the path to the exported TUD file with the .bin
extension.
5.
Provide a name to identify this TUD. If you have configured federation, you can also choose to extend
the trust to federated users of the imported server.
You can also use the Windows PowerShell cmdlet Import-RmsTUD, which is part of the ADRMSADMIN
Windows PowerShell module, to add a TUD.
To export a TUD, perform the following steps:
1.
In the Active Directory Rights Management Services console, expand Trust Policies, and then click
Trusted User Domains.
2.
3.
You can also use the Windows PowerShell cmdlet Export-RmsTUD to export an AD RMS server TUD.
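A minimal sketch of the export and import by using these cmdlets follows. It assumes the AdRmsAdmin provider drive is mapped to the local cluster; the exact provider path and parameter names should be verified in your own environment before you run it:
# Map an administrative drive to the AD RMS cluster, then export and import TUD files
Import-Module AdRmsAdmin
New-PSDrive -Name RMS -PSProvider AdRmsAdmin -Root "https://adrms.adatum.com"
Export-RmsTUD -Path "RMS:\TrustPolicy\TrustedUserDomain" -SavedFile "C:\export\ADATUM-TUD.bin"
Import-RmsTUD -Path "RMS:\TrustPolicy\TrustedUserDomain" -SourceFile "\\LON-SVR1\export\TREYRESEARCH-TUD.bin" -DisplayName "TreyResearch"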
Implementing TPD
You can use Trusted Publisher Domain (TPD) to
set up a trust relationship between two AD RMS
deployments. An AD RMS TPD, which is a local
AD RMS deployment, can grant end-user licenses
for content published using the Trusted
Publishing domain's AD RMS deployment. For
example, Contoso, Ltd and A. Datum Corporation
are set up as TPD partners. TPD allows users of the
Contoso AD RMS deployment to consume
content published using the A. Datum AD RMS
deployment, by using end-user licenses that are
granted by the Contoso AD RMS deployment.
You can remove a TPD at any time. When you do this, clients of the remote AD RMS deployment will not
be able to issue end-user licenses to access content protected by your AD RMS cluster.
When you configure a TPD, you import the SLC of another AD RMS cluster. TPDs are stored in .xml
format, and are protected by passwords.
To export a TPD, perform the following steps:
1.
In the Active Directory Rights Management Services console, expand Trust Policies, and then click
Trusted Publishing Domains.
2.
In the Results pane, click the certificate for the AD RMS domain that you want to export, and then in
the Actions pane, click Export Trusted Publishing Domain.
3.
When you export a TPD, it is possible to save it as a Version 1 compatible TPD file. This allows the TPD to
be imported into organizations that are using AD RMS clusters on earlier versions of the Windows Server
operating system, such as the version available in Windows Server 2003. You can also use the Windows
PowerShell cmdlet Export-RmsTPD to export a TPD.
In the Active Directory Rights Management Services console, expand Trust Policies, and then click
Trusted Publishing Domains.
2.
3.
Specify the path of the Trusted Publishing Domain file that you want to import.
4.
Enter the password to open the Trusted Publishing Domain file, and enter a display name that
identifies the TPD.
Alternatively, you can also use the Windows PowerShell cmdlet Import-RmsTPD to import a TPD.
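The equivalent sketch for publishing domains, with the same caveats about verifying the provider path and parameter names, might look like the following; the password protects or opens the exported .xml file:
Export-RmsTPD -Path "RMS:\TrustPolicy\TrustedPublishingDomain" -SavedFile "C:\export\ADATUM-TPD.xml" -Password (Read-Host "TPD password" -AsSecureString)
Import-RmsTPD -Path "RMS:\TrustPolicy\TrustedPublishingDomain" -SourceFile "\\LON-SVR1\export\TREYRESEARCH-TPD.xml" -DisplayName "Trey Research" -Password (Read-Host "TPD password" -AsSecureString)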
Additional Reading: To learn more about importing TPDs, see Add a Trusted Publishing
Domain at http://go.microsoft.com/fwlink/?LinkId=270033.
In the Active Directory Rights Management Services console, expand Trust Policies, and then click
Trusted User Domains.
2.
To exclude specific Microsoft account email domains, right-click the Windows Live ID certificate, click
Properties, and then click the Excluded Windows Live IDs tab. You can then enter the Windows Live ID
accounts that you want to exclude from being able to procure RACs.
To allow users with Microsoft accounts to obtain RACs from your AD RMS cluster, you need to configure
IIS to support anonymous access. To do this, perform the following steps:
1.
2.
Navigate to the Sites\Default Web Site\_wmcs node, right-click the Licensing virtual directory, and
then click Switch to Content View.
3.
4.
5.
Additional Reading: To learn more about using Windows Live ID to establish RACs for
users, see http://go.microsoft.com/fwlink/?LinkId=270034.
Has the external user's organization established a relationship with the Microsoft Federation
Gateway?
Does the external user need to publish AD RMS-protected content that is accessible to internal RAC
holders?
Are the users bringing in personal devices that need access to rights managed documents?
It is possible that organizations may use one solution before deciding to implement another. For example,
during initial stages, only a small number of external users may require access to AD RMS-protected
content. In this case, using Windows Live ID accounts for RACs may be appropriate. When large numbers
of external users from a single organization require access, a different solution may be appropriate. The
financial benefit a solution brings to an organization must exceed the cost of implementing that solution.
IRM integration with Microsoft Office. All locally deployed Microsoft Office apps can use Windows
Azure AD Rights Management for content protection.
Exchange Online IRM integration. Windows Azure AD Rights Management gives you the ability to
protect and consume email messages in the Microsoft Outlook Web App. You also can consume
IRM-protected messages via Exchange ActiveSync on devices that have IRM support, such as
Windows Phone 8 or iOS-based devices. Additionally, administrators can use Outlook protection
rules and Exchange transport rules for protection and decryption to ensure that content is not
exposed inadvertently outside an organization.
SharePoint Online IRM integration. When Windows Azure AD Rights Management is used, administrators can configure automatic IRM protection of documents in a SharePoint library.
Comparing Windows Azure Rights Management and AD RMS
http://go.microsoft.com/fwlink/?LinkID=331184
Objectives
Lab Setup
Estimated Time: 60 minutes
Virtual machines: 20412D-LON-DC1, 20412D-LON-SVR1,
20412D-LON-CL1, 20412D-TREY-DC1,
20412D-TREY-CL1
User name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
5.
Password: Pa$$w0rd
2.
3.
Sign in to LON-DC1 with the Adatum\Administrator account and the password Pa$$w0rd.
2.
Use the Active Directory Administrative Center to create an OU named Service Accounts in the
adatum.com domain.
3.
Create a new user account in the Service Accounts OU with the following properties:
Password: Pa$$w0rd
4.
Create a new Global security group in the Users container named ADRMS_SuperUsers. Set the email
address of this group as [email protected].
5.
Create a new global security group in the Users container named Executives. Set the email address of
this group as [email protected].
6.
Add the user accounts Aidan Delaney and Bill Malone to the Executives group.
7.
Use the DNS Manager console to create a host (A) resource record in the adatum.com zone with the
following properties:
Name: adrms
IP Address: 172.16.0.21
Sign in to LON-SVR1 with the Adatum\Administrator account and the password Pa$$w0rd.
2.
Use the Add Roles and Features Wizard to add the Active Directory Rights Management Services role
to LON-SVR1 using the following option:
3.
From the AD RMS node in the Server Manager, click More to start post deployment configuration of
AD RMS.
4.
Port: 80
5.
Use the Internet Information Services (IIS) Manager console to enable Anonymous Authentication on
the Default Web Site\_wmcs and the Default Web Site\_wmcs\licensing virtual directories.
6.
Note: You must sign out before you can manage AD RMS. This lab uses port 80 for
convenience. In production environments, you would protect AD RMS using an encrypted
connection.
Sign in to LON-SVR1 with the Adatum\Administrator account and the password Pa$$w0rd.
2.
3.
From the Active Directory Rights Management Services console, enable Super Users.
4.
Results: After completing this exercise, you should have installed and configured AD RMS.
2.
3.
On LON-SVR1, use the Rights Policy Template node of the Active Directory Rights Management
Services console to create a Distributed Rights Policy Template with the following properties:
o
Name: ReadOnly
Require a new use license every time content is consumed (disable client-side caching): Enabled
On LON-SVR1, open a Windows PowerShell prompt, and then issue the following commands,
pressing Enter at the end of each line:
New-Item c:\rmstemplates -ItemType Directory
New-SmbShare -Name RMSTEMPLATES -Path c:\rmstemplates -FullAccess ADATUM\ADRMSSVC
New-Item c:\docshare -ItemType Directory
New-SmbShare -Name docshare -Path c:\docshare -FullAccess Everyone
2.
In the Active Directory Rights Management Services console, set the Rights Policy Templates file
location to \\LON-SVR1\RMSTEMPLATES.
3.
In File Explorer, view the c:\rmstemplates folder. Verify that the ReadOnly.xml template displays.
In the Active Directory Rights Management Services console, enable Application exclusion.
2.
Results: After completing this exercise, you should have configured AD RMS templates.
2.
3.
Import the Trusted User Domain policy from the partner domain.
4.
Import the Trusted Publishing Domains policy from the partner domain.
On LON-SVR1, open a Windows PowerShell prompt, and then issue the following commands,
pressing Enter at the end of each line:
New-Item c:\export -ItemType Directory
New-SmbShare -Name Export -Path c:\export -FullAccess Everyone
2.
Use the Active Directory Rights Management Services console to export the TUD policy to the \\LON-SVR1\export share as ADATUM-TUD.bin.
3.
Sign in to TREY-DC1 with the TREYRESEARCH\Administrator account and the password Pa$$w0rd.
4.
5.
Export the Trusted User Domains policy to the \\LON-SVR1\export share as TREYRESEARCH-TUD.bin.
6.
On TREY-DC1, open a Windows PowerShell prompt, issue the following command, and then press
Enter:
Add-DnsServerConditionalForwarderZone -MasterServers 172.16.0.10 -Name adatum.com
Switch to LON-SVR1.
2.
Use the Active Directory Rights Management Services console to export the TPD policy to the \\LON-SVR1\export share as ADATUM-TPD.xml. Protect this file by using the password Pa$$w0rd.
3.
Switch to TREY-DC1.
4.
Use the Active Directory Rights Management Services console to export the TPD policy to the \\LON-SVR1\export share as TREYRESEARCH-TPD.xml.
5.
Task 3: Import the Trusted User Domain policy from the partner domain
1.
Switch to LON-SVR1.
2.
Import the TUD policy for Trey Research by importing the file \\LON-SVR1\export\treyresearch-tud.bin. Use the display name TreyResearch.
3.
Switch to TREY-DC1.
4.
Import the TUD policy for Adatum by importing the file \\LON-SVR1\export\adatum-tud.bin.
Use the display name Adatum.
Task 4: Import the Trusted Publishing Domains policy from the partner domain
1.
Switch to LON-SVR1.
2.
Import the Trey Research TPD by importing the file \\LON-SVR1\export\TREYRESEARCH-TPD.xml,
using the password Pa$$w0rd and the display name Trey Research.
3.
Switch to TREY-DC1.
4.
Import the Adatum Trusted Publishing Domain by importing the file \\LON-SVR1\export\adatum-tpd.xml, using the password Pa$$w0rd, and the display name Adatum.
Results: After completing this exercise, you should have implemented the AD RMS trust policies.
2.
3.
4.
Open and edit the rights-protected document as an authorized user at Trey Research.
5.
Sign in to LON-CL1 with the Adatum\Administrator account and the password Pa$$w0rd.
2.
Add Aidan, Bill, and Carol as local Remote Desktop Users in the System properties.
3.
4.
Sign in to LON-CL1 with the Adatum\Aidan account and the password Pa$$w0rd.
5.
Add the http://adrms.adatum.com URL to the Local intranet zone on the Internet options
Security tab using the Advanced button in Sites.
Note: The above step is necessary for the Office program to find the proper AD RMS cluster URL. The URL must be in the Local intranet sites, and this must be done for each user.
6.
7.
8.
From the Permissions item, choose to restrict access. Grant [email protected] permission to read the
document.
2.
3.
Sign in to LON-CL1 with the Adatum\Bill account using the password Pa$$w0rd.
2.
Add the http://adrms.adatum.com URL to the Local intranet zone on the Internet options
Security tab using the Advanced button in Sites.
3.
4.
When prompted, provide the credentials Adatum\Bill with the password of Pa$$w0rd.
5.
6.
7.
Right-click the line of text. Verify that you cannot modify this text.
8.
9.
2.
Add the http://adrms.adatum.com URL to the Local intranet zone on the Internet options Security
tab using the Advanced button in Sites.
3.
4.
Verify that Carol does not have permission to open the document.
5.
Sign in to LON-CL1 with the Adatum\Aidan account using the password Pa$$w0rd.
2.
3.
4.
2.
Sign in to Trey-CL1 with the TREYRESEARCH\Administrator account and the password Pa$$w0rd.
3.
4.
5.
6.
Add the http://adrms.treyresearch.net URL to the Local intranet zone in the Internet options Security
tab using the Advanced button in Sites.
7.
8.
9.
10. Attempt to open the document. When prompted, enter the following credentials, select the
Remember my credentials check box, and then click OK:
a.
Username: April
b.
Password: Pa$$w0rd
11. Verify that you can open the document, but that you cannot make modifications to it.
12. View the permissions that the [email protected] account has for the document.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you should have verified that the AD RMS deployment is
successful.
Question: What considerations should you make, and what steps can you take, when you use the AD RMS role?
Before you deploy AD RMS, you must analyze your organization's business requirements and create
the necessary templates. You should meet with users to inform them of AD RMS functionality, and ask
for feedback on the types of templates that they would like to have available.
Strictly control membership of the Super Users group. Users in this group can access all protected content. Granting a user membership in this group gives that user complete access to all AD RMS-protected content.
Tools
Tool
Where is it?
Windows PowerShell
Regedit.exe
Review Questions
Question: What are the benefits of having an SSL certificate installed on the AD RMS server
when you are performing AD RMS configuration?
Question: You need to provide access to AD RMS-protected content to five users who are
unaffiliated contractors, and who are not members of your organization. Which method
should you use to provide this access?
Question: You want to block users from protecting Microsoft PowerPoint content by using
AD RMS templates. What steps should you take to accomplish this goal?
Module 8
Implementing and Administering AD FS
Contents:
Module Overview
8-1
Lesson 1: Overview of AD FS
8-2
Lesson 2: Deploying AD FS
8-12
Lab A: Implementing AD FS
8-27
Module Overview
Active Directory Federation Services (AD FS) in the Windows Server 2012 operating system provides
flexibility for organizations that want to enable their users to log on to applications that are located on a
local network, at a partner company, or in an online service. With AD FS, an organization can manage its
own user accounts, and users only have to remember one set of credentials. However, those credentials
can provide access to a variety of applications, which typically are located in different locations.
This module provides an overview of AD FS, and it provides details on how to configure AD FS in both a
single-organization scenario and in a partner-organization scenario. Finally, this module describes the
Web Application Proxy feature in Windows Server 2012 R2 that functions as an AD FS proxy and reverse
proxy for web-based applications.
Objectives
After completing this module, you will be able to:
Describe AD FS.
Lesson 1
Overview of AD FS
AD FS is the Microsoft implementation of an identity federation framework that enables organizations to
establish federation trusts and share resources across organizational and Active Directory Domain Services
(AD DS) boundaries. AD FS is compliant with common Web services standards, thus enabling
interoperability with identity federation solutions that other vendors provide.
AD FS addresses a variety of business scenarios where the typical authentication mechanisms used in an
organization do not work. This lesson provides an overview of the concepts and standards that AD FS
implements, and the business scenarios that AD FS can address.
Lesson Objectives
After completing this lesson, you will be able to:
Describe AD FS.
In an identity-federation solution, user identities and their associated credentials are stored, owned, and
managed by the organization where the user is located. As part of the trust, each organization also
defines how to share user identities securely to restrict access to resources. Each partner must define the
services that it makes available to trusted partners and customers, and which other organizations and
users it trusts. Each partner also must define what types of credentials and requests it accepts, and each
partner must define its privacy policies to ensure that private information is not accessible across the trust.
A single organization also can use identity federation. For example, an organization might plan to deploy
several web-based applications that require authentication. When you use AD FS, the organization can
implement one authentication solution for all of the applications, making it easy for users in multiple
internal domains or forests to access the application. The solution also can extend to external partners in
the future, without changing the application.
Most Web services use XML to transmit data through HTTP and HTTPS. With XML, developers can
create their own customized tags, thereby facilitating the definition, transmission, validation, and
interpretation of data between applications and organizations.
Web services expose useful functionality to web users through a standard web protocol. In most
cases, Web services use the SOAP protocol, which is the communications protocol for XML Web
services. SOAP is a specification that defines the XML format for messages, and it essentially describes
what a valid XML document looks like.
Web services provide a way to describe their interfaces in enough detail to enable a user to build a
client application to communicate with the service. Typically, a WSDL document, which is XML-based,
provides this description. In other words, a WSDL file is an XML document that describes a set of
SOAP messages and how the exchange of messages occurs.
Web services are registered, so that potential users can find them easily. This is done with Universal
Description, Discovery, and Integration (UDDI). A UDDI directory entry is an XML file that describes a
business and the services it offers.
WS-Security: SOAP Message Security and X.509 Certificate Token Profile. WS-Security describes
enhancements to SOAP messaging that provide quality of protection through message integrity,
message confidentiality, and single-message authentication. WS-Security also provides a general-purpose, yet extensible, mechanism for associating security tokens with messages, and it provides a mechanism to encode binary security tokens (specifically, X.509 certificates and Kerberos tickets) in SOAP messages.
Web Services Trust (WS-Trust). WS-Trust defines extensions that build on WS-Security to request and
issue security tokens and manage trust relationships.
Web Services Federation (WS-Federation). WS-Federation defines mechanisms that WS-Security can
use to enable attribute-based identity, authentication, and authorization federation across different
trust realms.
WS-Federation Passive Requestor Profile (WS-F PRP). This WS-Security extension describes how
passive clients, such as web browsers, can acquire tokens from a federation server, and how the
clients can submit tokens to a federation server. Passive requestors of this profile are limited to the
HTTP or HTTPS protocol.
WS-Federation Active Requestor Profile (WS-F ARP). This WS-Security extension describes how active
clients, such as SOAP-based mobile-device applications, can be authenticated and authorized, and
how the clients can submit claims in a federation scenario.
What Is AD FS?
AD FS is the Microsoft implementation of an
identity federation solution that uses claims-based
authentication. AD FS provides the mechanisms to
implement both the identity provider and the
service provider components in an identity
federation deployment.
AD FS provides the following features:
Federation Service provider for identity federation across domains. This service offers federated web
SSO across domains, thereby enhancing security and reducing overhead for information technology
(IT) administrators.
AD FS Features
The following are some of the key features of AD FS:
Web SSO. Many organizations have deployed AD DS. After authenticating to AD DS through
Integrated Windows authentication, users can access all other resources that they have permission to
access within the AD DS forest boundaries. AD FS extends this capability to intranet or Internet-facing
applications, enabling customers, partners, and suppliers to have a similar, streamlined user
experience when they access an organization's web-based applications.
Passive and smart client support. Because AD FS is based on the WS-* architecture, it supports
federated communications between any WS-enabled endpoints, including communications between
servers and passive clients, such as browsers. AD FS on Windows Server 2012 also enables access for
SOAP-based smart clients, such as mobile phones, personal digital assistants, and desktop
applications. AD FS implements the WS-F PRP and some of the WS-F ARP standards for client
support.
Integration with the Windows Server 2012 operating system. In Windows Server 2012, AD FS is
included as a server role that you can install by using Server Manager. When you install the server
role, all required operating system components install automatically.
Integration with Dynamic Access Control (DAC). When you deploy DAC, you can configure user and
device claims that AD DS domain controllers issue. AD FS can consume the AD DS claims that domain
controllers issue. This means that AD FS can make authorization decisions based on both user
accounts and computer accounts.
Windows PowerShell command-line interface cmdlets for administering AD FS. Windows Server
2012 provides several new cmdlets that you can use to install and configure the AD FS server role.
Large organizations frequently have multiple domains and forests that might result from mergers and
acquisitions, or due to security requirements. Users in multiple forests might require access to the
same applications.
Users from outside the office might require access to applications that are running on internal servers.
External users might log on to applications from computers that are not part of the internal domain.
Note: Implementing AD FS does not necessarily mean that users are not prompted for
authentication when they access applications. Depending on the scenario, users might be
prompted for their credentials. However, users always authenticate by using their internal
credentials in the trusted account domain, and they never need to remember alternate
credentials for the application. In addition, the internal credentials are never presented to the
application or to the partner AD FS server.
Organizations can use AD FS to enable SSO in these scenarios. If the organization has a single AD DS
forest, the organization only has to deploy a single federation server. This server can operate as the claims
provider so that it authenticates user requests and issues the claims. The same server also is the relying
party to provide authorization for application access.
Note: The slide and the following description use the terms Federation Service and
Federation Service Proxy to describe AD FS role services. The federation server is responsible for
issuing claims and, in this scenario, is responsible for consuming the claims, as well. The
Federation Service Proxy is a proxy component that we recommend for deployments in which
users outside of the network need access to the AD FS environment. The next lesson provides
more detail about these components.
The following steps describe the communication flow in this scenario:
1.
The client computer, which is located outside of the network, must access a web-based application on
the Web server. The client computer sends an HTTPS request to the Web server.
2.
The Web server receives the request and identifies that the client computer does not have a claim.
The Web server redirects the client computer to the Federation Service Proxy.
3.
The client computer sends an HTTPS request to the Federation Service Proxy. Depending on the
scenario, the Federation Service Proxy might prompt the user for authentication or use Integrated
Windows authentication to collect the user's credentials.
4.
The Federation Service Proxy passes on the request and the credentials to the federation server.
5.
6.
If authentication is successful, the federation server collects AD DS information about the user, which
it uses to generate the user's claims.
7.
If the authentication is successful, the authentication information and other information is collected in
a security token, which a Federation Service Proxy passes back to the client.
8.
The client then presents the token to the Web server. The web resource receives the request, validates
the signed tokens, and uses the claims in the user's token to provide access to the application.
A user at Trey Research uses a web browser to establish an HTTPS connection to the Web server at A.
Datum.
2.
The web application receives the request, and verifies that the user does not have a valid token stored
in a cookie by the web browser. Because the user is not authenticated, the web application redirects
the client to the federation server at A. Datum by using an HTTP 302 redirect message.
3.
The client computer sends an HTTPS request to A. Datum's federation server. The federation server
determines the home realm for the user. In this case, the home realm is Trey Research.
4.
The A. Datum federation server redirects the client computer to the federation server in the user's home realm, which is Trey Research.
5.
The client computer sends an HTTPS request to the Trey Research federation server.
6.
If the user is logged on to the domain already, the federation server can take the user's Kerberos ticket and request authentication from AD DS on the user's behalf by using Integrated Windows
authentication. If the user is not logged on to the domain, the user is prompted for credentials.
7.
The AD DS domain controller authenticates the user and sends the success message back to the
federation server, along with other information about the user that the federation server can use to
generate the user's claims.
8.
The federation server creates the claim for the user based on the rules defined for the federation
partner. The federation server places the claims data in a digitally signed security token, and then
sends it to the client computer, which posts it back to A. Datum's federation server.
9.
A. Datum's federation server validates that the security token came from a trusted federation partner.
10. A. Datum's federation server creates and signs a new token, which it sends to the client computer. The
client computer then sends the token back to the original URL requested.
11. The application on the web server receives the request and validates the signed tokens. The web
server issues the client a session cookie, indicating that authentication was successful. The federation server issues a file-based persistent cookie, which is good for 30 days by default. It eliminates the home-realm discovery step during the cookie's lifetime. The server then provides access to the
application based on the claims that the user provides.
The user opens a web browser and sends an HTTPS request to the Exchange Online Microsoft
Outlook Web App server.
2.
The Outlook Web App server receives the request and verifies whether the user is part of a hybrid
Exchange Server deployment. If this is the case, the server redirects the client computer to the
Microsoft Online Services federation server.
3.
The client computer sends an HTTPS request to the Microsoft Online Services federation server.
4.
The Microsoft Online Services federation server redirects the client computer to the on-premises federation server. The redirection to the user's home realm is based on the user's UPN suffix.
5.
The client computer sends an HTTPS request to the on-premises federation server.
6.
If the user is logged on to the domain already, the on-premises federation server can take the user's Kerberos ticket and request authentication from AD DS on the user's behalf by using Integrated
Windows authentication. If the user logs on from outside of the network or from a computer that is
not a member of the internal domain, the user is prompted for credentials.
7.
The AD DS domain controller authenticates the user, and then sends the success message back to the
federation server, along with other information about the user that the federation server can use to
generate the user's claims.
8.
The federation server creates the claim for the user based on the rules defined during the AD FS
server setup. The federation server places the claims data in a digitally signed security token and
sends it to the client computer, which posts it back to the Microsoft Online Services federation server.
9.
The Microsoft Online Services federation server validates that the security token came from a trusted
federation partner. This trust is configured when you configure the hybrid Exchange Server
environment.
10. The Microsoft Online Services federation server creates and signs a new token that it sends to the
client computer, which then sends the token back to the Outlook Web App server.
11. The Outlook Web App server receives the request and validates the signed tokens. The server issues
the client a session cookie indicating that it has authenticated successfully. The user then is granted
access to his or her Exchange Server mailbox.
Installation Requirements
The version of AD FS included with Windows
Server 2012 required the installation of Internet Information Services (IIS) 8. Due in part to the IIS
requirement, we did not recommend the installation of AD FS on domain controllers for Windows Server
2012. In Windows Server 2012 R2, AD FS does not require the installation of IIS, and installation on a
domain controller is now acceptable.
During installation of AD FS for Windows Server 2012, you had an option to install AD FS as a stand-alone
server. This option was useful for test environments, but we did not recommend it for production
environments because there were no options for expansion after installation. The AD FS installation in
Windows Server 2012 R2 does not include the option to install a stand-alone server. Instead, you can
install a single-server farm that provides the option for future expansion.
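The following Windows PowerShell commands are a minimal sketch of installing the AD FS role and creating a
single-server federation farm in Windows Server 2012 R2. The certificate thumbprint, federation service name,
and service account are placeholder values that you would replace with settings from your own environment.

# Install the AD FS server role; IIS is no longer required in Windows Server 2012 R2
Install-WindowsFeature ADFS-Federation -IncludeManagementTools

# Create a single-server farm; the thumbprint, name, and account below are examples only
Install-AdfsFarm -CertificateThumbprint "<SSL certificate thumbprint>" -FederationServiceName "adfs.adatum.com" -ServiceAccountCredential (Get-Credential "Adatum\adfsService")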
Enhanced Authentication
The authentication methods available in AD FS are enhanced in Windows Server 2012 R2 to provide
greater flexibility. You can configure authentication policies with global scope for all applications and
services. You also can configure authentication policies that apply only to specific applications, specific
devices, or clients in a specific location.
Multifactor authentication is another new feature in the Windows Server 2012 R2 version of AD FS. By
default, AD FS allows the use of certificates for multifactor authentication. You also can integrate third-party
providers for multifactor authentication to provide additional authentication methods.
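As a hedged example, the following Windows PowerShell command adds certificate authentication as an
additional (multifactor) authentication provider in the global authentication policy. CertificateAuthentication
is the built-in provider name; a third-party provider would use the name that its vendor registers with AD FS.

# Require the built-in certificate provider as an additional authentication method
Set-AdfsGlobalAuthenticationPolicy -AdditionalAuthenticationProvider CertificateAuthentication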
Windows Server 2012 R2 also adds claim types for devices and certificates, including Client Application,
Device OS Type, Device OS Version, Public Key, and Thumbprint.
Lesson 2
Deploying AD FS
After you understand how AD FS works, you can deploy the service. Before you deploy AD FS, you must
understand the components that you will need to deploy and the prerequisites that you must meet,
particularly with regard to certificates. This lesson provides an overview of deploying the AD FS server role
in Windows Server 2012 and Windows Server 2012 R2.
Lesson Objectives
After completing this lesson, you will be able to:
Describe AD FS components.
Describe AD FS prerequisites.
Describe the Public Key Infrastructure (PKI) and certificate requirements for AD FS.
AD FS Components
AD FS is installed as a server role in Windows
Server 2012. However, there are many different
components that you install and configure in an
AD FS deployment. The following list describes the AD FS components.
Federation server. The federation server issues, manages, and validates requests involving identity claims. All
implementations of AD FS require at least one Federation Service for each participating forest.
Federation service proxy/Web Application Proxy. The federation server proxy is an optional component that
you usually deploy in a perimeter network. It does not add any functionality to the AD FS deployment, but its
deployment provides a layer of security for connections from the Internet to the federation server. In
Windows Server 2012 R2, the Web Application Proxy provides the federation service proxy functionality.
Claim. A claim is a statement that a trusted entity makes about an object, such as a user. The claim could
include the user's name, job title, or any other factor that might be used in an authentication scenario. With
Windows Server 2012, the object also can be a device used in a DAC deployment.
Claim rules. Claim rules determine how federation servers process claims. For example, a claim rule might
state that an email address is accepted as a valid claim, or that a group name from one organization is
translated into an application-specific role in the other organization. The rules usually are processed in real
time, as claims are made.
Attribute store. AD FS uses an attribute store to look up claim values. AD DS is a common attribute store and
is available by default because the federation server role must be installed on a domain-joined server.
Claims provider. The claims provider is the server that issues claims and authenticates users. A claims provider
is one side of the AD FS authentication and authorization process. The claims provider manages user
authentication, and then issues the claims that the user presents to a relying party.
Relying party. The relying party is the party where the application is located, and it is the other side of the
AD FS authentication and authorization process. The relying party is a web service that consumes claims from
the claims provider. The relying party server must have Microsoft Windows Identity Foundation (WIF) installed
or use the AD FS 1.0 claims-aware agent.
Claims provider trust. A claims provider trust configures data that defines rules under which a client might
request claims from a claims provider and subsequently submit them to a relying party. The trust consists of
various identifiers such as names, groups, and various rules.
Relying-party trust. A relying-party trust defines the claim information about a user or client that AD FS will
pass to a relying party. It consists of various identifiers, such as names, groups, and various rules.
Certificates. AD FS uses digital certificates when communicating over Secure Sockets Layer (SSL) or as part of
the token-issuing process, the token-receiving process, and the metadata-publishing process. Digital
certificates also are used for token signing.
Endpoints. Endpoints define how the Federation Service is accessed, such as the URLs used for token issuance
and for publishing federation metadata.
Note: Subsequent sections of this module describe many of these components in more detail.
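On a federation server that is already configured, you can review many of these components by using the
AD FS cmdlets for Windows PowerShell. The following commands are a brief sketch; they assume that the
AD FS role is installed and that the Federation Service is running.

Get-AdfsProperties            # overall Federation Service settings
Get-AdfsCertificate           # service communications, token-signing, and token-decrypting certificates
Get-AdfsClaimsProviderTrust   # configured claims provider trusts
Get-AdfsRelyingPartyTrust     # configured relying-party trusts
Get-AdfsEndpoint              # endpoints that the Federation Service exposes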
AD FS Prerequisites
Before you deploy AD FS, you must ensure that your
internal network meets some basic prerequisites. The
configuration of the following network services is
critical for a successful AD FS deployment:
The federation server proxies must be able to communicate with the federation servers in the
same organization by using HTTPS.
Federation servers and internal client computers must be able to communicate with domain
controllers for authentication.
AD DS. AD DS is a critical piece of AD FS. Domain controllers should run Windows Server 2003 Service
Pack 1 as a minimum. Federation servers must be joined to an AD DS domain. The Federation Service
Proxy does not have to be domain-joined.
Attribute stores. AD FS uses an attribute store to build claims information. The attribute store contains
information about users, which the AD FS server extracts from the store after the user is
authenticated. AD FS supports the following attribute stores:
o Active Directory Lightweight Directory Services (AD LDS) in Windows Server 2008, Windows
Server 2008 R2, and Windows Server 2012
Note: You can use AD DS as both the authentication provider and as an attribute store.
AD FS can use AD LDS only as an attribute store.
Domain Name System (DNS). Name resolution allows clients to find federation servers. Client
computers must resolve DNS names for all federation servers or AD FS farms to which they connect,
and the web applications that the client computer is trying to use. If a client computer is external to
the network, the client computer must resolve the DNS name for the Federation Service Proxy, not
the internal federation server or AD FS farm. The Federation Service Proxy must resolve the name of
the internal federation server or farm. If internal users have to access the internal federation server
directly, and external users have to connect through the federation server proxy, you must configure
different DNS records in the internal and external DNS zones.
Operating system prerequisites. You can only deploy the Windows Server 2012 version of AD FS as a
server role on a Windows Server 2012 server.
Token-Signing Certificates
The token-signing certificate is used to sign every token that a federation server issues. This certificate is
critical in an AD FS deployment because the token signature indicates which federation server issued the
token. The claims provider uses this certificate to identify itself, and the relying party uses it to verify that
the token is coming from a trusted federation partner.
The relying party also requires a token-signing certificate to sign the tokens that it prepares for AD FS-aware
applications. The relying party's token-signing certificate must sign these tokens in order for the
destination applications to validate them.
When you configure a federation server, the server assigns a self-signed certificate as the token-signing
certificate. In most cases, it is not necessary to replace this certificate with a certificate from a third-party
CA. When a federation trust is created, the trust of this certificate is configured at the same time. You can
have multiple token-signing certificates configured on the federation server, but only the primary
certificate is used to sign tokens.
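The following Windows PowerShell sketch shows how you might review the token-signing certificates and
promote a secondary certificate to primary. The thumbprint is a placeholder for a certificate that already
exists on the federation server.

Get-AdfsCertificate -CertificateType Token-Signing

# Example only; substitute the thumbprint of the certificate that should become primary
Set-AdfsCertificate -CertificateType Token-Signing -Thumbprint "<thumbprint>" -IsPrimary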
Token-Decrypting Certificates
Token-decrypting certificates are used to encrypt the entire user token before it is transmitted across the
network from the claims provider federation server to the relying party federation server. To provide this
functionality, the public key from the certificate for the relying party federation server is provided to the
claims provider federation server. The certificate is sent without the private key, and the claims provider
server uses the public key from the certificate to encrypt the user token.
When the token is returned to the relying party federation server, it uses the private key from the
certificate to decrypt the token. This provides an extra layer of security when transmitting the certificates
across an untrusted network such as the Internet.
When you configure a federation server, the server assigns a self-signed certificate as the token-decrypting
certificate. In most cases, you do not need to update this certificate with a certificate from
a third-party CA. When a federation trust is created, the trust of this certificate is configured at the
same time.
Note: The federation server proxies only require an SSL certificate. The certificate is used to
enable SSL communication for all client connections.
Choosing a CA
AD FS federation servers can use self-signed certificates, certificates from an internal, private CA, or
certificates that have been purchased from an external, public CA. In most AD FS deployments, the most
important factor when you choose certificates is that all involved parties trust them. This means that if you
configure an AD FS deployment that interacts with other organizations, you almost certainly will use a
public CA for the SSL certificate on a federation server proxy, because the certificates that the public CA
issues are trusted by all partners automatically.
If you deploy AD FS just for your organization, and all servers and client computers are under your
control, you can consider using a certificate from an internal, private CA. If you deploy an internal
enterprise CA on Windows Server 2012, you can use Group Policy to ensure that all computers in the
organization automatically trust the certificates issued by the internal CA. Using an internal CA can
decrease the cost of certificates significantly.
If you use an internal CA, you must ensure that users at any location can verify a certificate revocation. For
example, if your users access applications from the Internet, then you must ensure that those users can
access certificate revocation information from the Internet. This means that you need to configure a
certificate revocation list (CRL) distribution point in your perimeter network.
Note: Deploying an internal CA by using Active Directory Certificate Services (AD CS) is a
straightforward process, but it is critical that you plan and implement the deployment carefully.
Relying party. A relying party is a federation server that receives security tokens from a trusted claims
provider. Relying party federation servers are deployed in organizations that provide application
access to claims provider organizations. The relying party accepts and validates the claim, and then it
issues new security tokens that the Web server can use to provide appropriate access to the
application.
Note: A single AD FS server can operate as both a claims provider and a relying party, even
with the same partner organizations. The AD FS server functions as a claims provider when it
authenticates users and provides tokens for another organization. Additionally, it can accept
tokens from the same or different organizations in a relying-party role.
Federation server proxy. A federation server proxy provides an extra level of security for AD FS traffic
that comes from the Internet to internal AD FS federation servers. Federation server proxies can be
deployed in both claims-provider and relying-party organizations. On the claims provider side, the
proxy collects the authentication information from client computers and passes it to the claims
provider federation server for processing. The federation server issues a security token to the proxy,
which sends it to the relying-party proxy. The relying-party federation server proxy accepts these
tokens, and then passes them on to the internal federation server. The relying-party federation server
issues a security token for the web application, and then it sends the token to the federation-server
proxy, which then forwards the token to the client. The federation-server proxy does not provide any
tokens or create claims; it only forwards requests from clients to internal AD FS servers. All
communication between the federation-server proxy and the federation server uses HTTPS.
Demonstration Steps
Install AD FS
On LON-DC1, use the Server Manager to install the Active Directory Federation Services role on
LON-DC1.Adatum.com.
On LON-DC1, use the DNS Manager to add a new host record for AD FS in the Adatum.com forward
lookup zone with the following settings (a Windows PowerShell alternative follows this list):
o Name: adfs
o IP address: 172.16.0.10
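If you prefer Windows PowerShell to the DNS Manager console, the following command creates the same
host record. It is a sketch that assumes the DNS Server cmdlets are available on LON-DC1.

Add-DnsServerResourceRecordA -ZoneName "Adatum.com" -Name "adfs" -IPv4Address "172.16.0.10"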
Configure AD FS
1. In the Server Manager notifications, click Configure the federation services on this server.
2.
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
Describe AD FS claims.
Claim Types
Each AD FS claim has a claim type, such as email address, UPN, or last name. Users can be issued claims
based on any defined claim type. Therefore, a user might be issued a claim with a type of Last Name and
a value of, for example, Weber. AD FS provides many built-in claim types. Optionally, you can create new
ones based on organizational requirements.
A Uniform Resource Identifier (URI) identifies each AD FS claim type uniquely. This information is provided
as part of the AD FS server metadata. For example, if the claims-provider organization and the relying-party
organization decide to use a claim type of AccountNumber, both organizations must configure a
claim type with this name. The claim type is published, and the claim type URI must be identical on both
AD FS servers.
Note: In Windows Server 2012 R2, the number of claims types has increased to support
various device types and certificate characteristics.
The federation server can retrieve the claim from an attribute store. Frequently, the information
required for the claim is already stored in an attribute store that is available to the federation server.
For example, an organization might decide that the claim should include the user's UPN, email
address, and specific group memberships. This information is stored in AD DS already, so the
federation server can retrieve this information from AD DS when it creates the claim. Because AD FS
can use AD DS, AD LDS, SQL Server, a non-Microsoft Lightweight Directory Access Protocol (LDAP)
directory, or a custom attribute store to populate claims, you can define almost any value within the
claim.
The claims-provider federation server can calculate the claim based on information that it gathers
from an attribute store. For example, a vendor's database may contain the weight of inventory in pounds,
while your application requires the weight in kilograms to calculate shipping costs. A calculated claim
could make the conversion from pounds to kilograms.
You can transform the claim from one value to another. In some cases, the information that is stored
in an attribute store does not exactly match the information the application requires when it creates
authorization information. For example, the application might have different user roles defined that
do not directly match the attributes that are stored in any attribute store. However, the application
role might correlate to AD DS group membership. For example, users in the Sales group might
correlate to one application role, while users in the Sales Management group might correlate to a
different application role. To establish the correlation in AD FS, you can configure a claims
transformation that takes the value that the claims provider provides, and then translates the value
into a claim that is useful to the application in the relying party.
If you have deployed DAC, you can transform a DAC device claim into an AD FS claim. You can use
this to ensure that users can access an AD FS website only from trusted workstations that have been
issued a valid device claim.
Claim rules for a claims provider trust. A claims provider trust is the AD FS trust relationship that you
configure between an AD FS server and a claims provider. You can configure claim rules to define
how the claims provider processes and issues claims.
Claim rules for a relying-party trust. A relying-party trust is the AD FS trust relationship that you
configure between an AD FS server and a relying party. You can configure claim rules that define how
the relying party accepts claims from the claims provider.
Claim rules that you configure on an AD FS claims provider all are considered acceptance transform rules.
These rules determine what claim types are accepted from the claims provider and then sent to a relying-party
trust. When configuring AD FS within a single organization, there is a default claims provider trust
that is configured with the local AD DS domain. This rule set defines the claims that are accepted from
AD DS.
There are three types of claim rules for a relying-party trust:
Issuance transform rules. These rules define the claims that are sent to the relying party that was
defined in the relying party trust.
Issuance authorization rules. These rules define which users are permitted or denied access to the
relying party defined in the relying-party trust. This rule set can include rules that explicitly permit
access to a relying party, and/or rules that explicitly deny access to a relying party.
Delegation-authorization rules. These rules define the claims that specify which users can act on
behalf of other users when accessing the relying party. This rule set can include rules that explicitly
permit delegates for a relying party, or rules that explicitly deny delegates to a relying party.
Note: Each claim rule is associated with a single federated trust, and you cannot reuse the claim
rules for other federated trusts. This is because each federated trust represents a unique business
relationship.
AD FS servers are preconfigured with a set of default rules and several default templates that you can use
to create common claim rules. You can create custom claim rules by using the AD FS claim rule language.
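As a hedged illustration of the claim rule language, the following Windows PowerShell sketch replaces the
acceptance transform rules on the default Active Directory claims provider trust with a single pass-through
rule for the Windows account name claim. The rule text and trust name are examples rather than settings
defined in this course.

$rule = @'
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => issue(claim = c);
'@

# Replaces any existing acceptance transform rules on the trust with this single rule
Set-AdfsClaimsProviderTrust -TargetName "Active Directory" -AcceptanceTransformRules $rule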
Import data about the claims provider through the federation metadata. If the AD FS federation
server or federation server proxy is accessible through the network from your AD FS federation server,
you can enter the host name or URL for the partner federation server. Your AD FS federation server
connects to the partner server and downloads the federation metadata from the server. The
federation metadata includes all the information that is required to configure the claims-provider
trust. As part of the federation metadata download, your federation server also downloads the SSL
certificate that the partner federation server uses.
Import data about the claims provider from a file. Use this option if the partner federation server is
not directly accessible from your federation server, but the partner organization has exported its
configuration and provided you the information in a file. The configuration file must include
configuration information for the partner organization, and the SSL certificate that the partner
federation server uses.
Manually configure the claims provider trust. Use this option if you want to configure all of the
settings for the claims-provider trust. When you choose this option, you must provide the features
that the claims provider supports and the URL that is used to access the claims provider AD FS servers.
You also must add the SSL certificate that the partner organization uses.
Import data about the relying party through the federation metadata. If the AD FS federation server or
federation server proxy is accessible through the network from your AD FS federation server, you can
enter the host name or URL for the partner federation server. Your AD FS federation server connects
to the partner server and then downloads the federation metadata from the server. The federation
metadata includes all the information that is required to configure the relying-party trust. As part of
the federation metadata download, your federation server also downloads the SSL certificate that the
partner federation server uses.
Import data about the relying party from a file. Use this option if the partner federation server is not
accessible from your federation server directly. In this case, the partner organization can export its
configuration information to a file and then provide it to you. The configuration file must include
configuration information for the partner organization and the SSL certificate that the partner
federation server uses.
Manually configure the relying-party trust. Use this option if you want to configure all of the
settings for the trust. (A Windows PowerShell sketch of creating trusts from federation metadata follows this list.)
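The following Windows PowerShell commands are a sketch of automating the metadata-based options
described in this list. The trust names and metadata URLs are placeholders for the endpoints that a partner
organization would actually publish.

Add-AdfsClaimsProviderTrust -Name "Trey Research" -MetadataUrl "https://adfs.treyresearch.net/FederationMetadata/2007-06/FederationMetadata.xml"
Add-AdfsRelyingPartyTrust -Name "A. Datum Test App" -MetadataUrl "https://lon-svr1.adatum.com/FederationMetadata/2007-06/FederationMetadata.xml"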
Demonstration Steps
Configure a Claims Provider Trust
1.
2.
Browse to the Claims Provider Trusts, and then edit claim rules for Active Directory.
3.
E-Mail-Addresses
User-Principal-Name: UPN
On LON-SVR1, open Internet Information Services (IIS) Manager and view the server certificates.
2.
3.
Organization: A. Datum
Organizational unit: IT
City/locality: London
State/Province: England
Country/region: GB
On LON-SVR1, in the Server Manager, open the Windows Identity Foundation Federation Utility tool.
2.
No encryption
On LON-DC1, in the AD FS console, add a relying-party trust with the following settings:
Import data about the relying party published online or on a local network
2.
I do not want to configure multi-factor authentication settings for the relying party trust at this
time
Leave the Edit Claim Rules for A. Datum Test App window open for the next demonstration.
Authentication Methods
You can use the global authentication policy to
define which authentication methods AD FS
supports for your intranet and extranet. The AD FS server supports the intranet methods on the internal
network, while the AD FS proxy functionality supports the extranet methods on the Web Application
Proxy server.
The authentication methods are:
Windows authentication. When you use Windows authentication, workstation credentials can be
passed directly to AD FS if the application being accessed is in the local intranet zone of Internet
Explorer. Otherwise, the user is prompted for credentials by a pop-up window. This authentication
method may experience issues when traversing some firewalls and when used with web browsers
other than Internet Explorer. AD FS supports this authentication method only for the intranet.
Forms authentication. This authentication method presents a web page in which users can enter
authentication credentials. Use forms authentication to provide better compatibility for users
accessing applications from outside the organization. This authentication method is available for the
intranet and extranet.
Certificate authentication. This authentication method accepts a certificate from the web browser as
an alternative to a username and password. You can use certificate authentication to increase the security
of credential entry, because it is typically more difficult to steal a certificate than a username and
password. This authentication method is available for the intranet and extranet.
It is possible to select multiple authentication methods for the intranet or extranet. If you select multiple
authentication methods, then you can use any of the selected methods. Browsers that support Windows
authentication will use it as the default authentication method.
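The following Windows PowerShell command is a sketch of configuring the global authentication policy so
that the intranet allows Windows and forms authentication while the extranet allows only forms
authentication. The provider names shown are the built-in values that AD FS in Windows Server 2012 R2 uses.

Set-AdfsGlobalAuthenticationPolicy -PrimaryIntranetAuthenticationProvider @('WindowsAuthentication','FormsAuthentication') -PrimaryExtranetAuthenticationProvider FormsAuthentication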
When you use Windows Azure Multi-Factor Authentication with AD FS, the following verification methods are available:
Phone calls. When this method is used, you receive a call on your phone to confirm your
authentication. You press the # (pound or hash) symbol to confirm after receiving the call.
Text messages. When this method is used, you receive a text message with a passcode. You respond
to the text message and include the passcode.
Mobile App. When this method is used, an authentication prompt appears in the mobile app that you
must acknowledge.
You can use Windows Azure Multi-Factor Authentication for many scenarios other than AD FS
authentication. You can integrate it into many situations where you require increased security, such as for
authentication to virtual private networks (VPNs), cloud-based applications hosted in Windows Azure,
Remote Authentication Dial-In User Service (RADIUS) servers, or AD DS.
To learn about Windows Azure Multi-Factor Authentication, go to:
http://go.microsoft.com/fwlink/?LinkID=386642
Lab A: Implementing AD FS
Scenario
A. Datum Corporation has set up a variety of business relationships with other companies and customers.
Some of these partner companies and customers must access business applications that are running on
the A. Datum network. The business groups at A. Datum want to provide a maximum level of functionality
and access to these companies. The Security and Operations departments want to ensure that the
partners and customers can access only the resources to which they require access, and that
implementing the solution does not increase the workload for the Operations team significantly. A. Datum
also is working on migrating some parts of its network infrastructure to Microsoft Online Services,
including Windows Azure and Office 365.
To meet these business requirements, A. Datum plans to implement AD FS. In the initial deployment, the
company plans to use AD FS to implement SSO for internal users who access an application on a Web server.
As one of the senior network administrators at A. Datum, it is your responsibility to implement the AD FS
solution. As a proof-of-concept, you plan to deploy a sample claims-aware application, and you will
configure AD FS to enable internal users to access the application.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
Virtual machines: 20412D-LON-DC1,
20412D-LON-SVR1,
20412D-LON-CL1
User name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and then in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
5.
Password: Pa$$w0rd
b.
2.
3.
Install AD FS.
4.
Configure AD FS.
5.
Verify AD FS functionality.
On LON-DC1, use the DNS Manager to add a new host record for AD FS:
o
Name: adfs
IP address: 172.16.0.10
2.
3.
4.
Set-ADAccountPassword adfsService
Enable-ADAccount adfsService
Task 3: Install AD FS
On LON-DC1, in the Server Manager, add the Active Directory Federation Services role.
Task 4: Configure AD FS
1. On LON-DC1, in the Server Manager notifications, click Configure the federation services on this
server.
2.
Adatum\adfsService
Password: Pa$$w0rd
Note: The adfs.adatum.com certificate was preconfigured for this task. In your own
environment, you need to obtain this certificate.
2.
3.
Verify that the file loads, and then close Internet Explorer.
Results: In this exercise, you installed and configured AD FS. You also verified that it is functioning by
viewing the FederationMetaData.xml file contents.
2.
3.
4.
5.
6.
7.
On LON-SVR1, open Internet Information Services (IIS) Manager, and then view the server certificates.
2.
3.
Organization: A. Datum
Organizational unit: IT
City/locality: London
State/Province: England
Country/region: GB
2.
Browse to the Claims Provider Trusts, and then edit the claim rules for Active Directory.
3.
User-Principal-Name: UPN
Display-Name: Name
On LON-SVR1, in the Server Manager, open the Windows Identity Foundation Federation Utility tool.
2.
No encryption
2.
On LON-DC1, in the AD FS console, add a relying-party trust with the following settings:
Import data about the relying party published online or on a local network
I do not want to configure multi-factor authentication settings for this relying party trust
at this time
Leave the Edit Claim Rules for A. Datum Test App window open for the next task.
On LON-DC1, in the Edit Claim Rules for A. Datum Test App window, add a rule on the Issuance
Transform Rules tab.
2.
Complete the Add Transform Claim Rule Wizard with the following settings:
3.
Create three more rules to pass through the E-Mail Address, UPN, and Name claim types.
2.
3.
4.
2.
On the Security tab, add the following sites to the Local intranet zone:
3.
https://adfs.adatum.com
https://lon-svr1.adatum.com
4.
5.
6.
Results: After completing this exercise, you will have configured AD FS to support authentication for an
application.
Question: Why was it important to configure adfs.adatum.com as the host name for the
AD FS service?
Question: How can you test whether AD FS is functioning properly?
Lesson 4
Lesson Objectives
After completing this lesson, you will be able to:
To configure the account-partner organization to prepare for federation, use the following steps:
1.
Implement the physical topology for the account-partner deployment. This step could include
deciding on the number of federation servers and federation server proxies to deploy, and
configuring the required DNS records and certificates.
2.
Add an attribute store. Use the AD FS management console to add the attribute store. In most cases,
you use the default Active Directory attribute store, which must be used for authentication, but you
also can add other attribute stores, if required, to build the user claims. You connect to a resource-partner
organization by creating a relying-party trust. The simplest way to do this is to use the
federation metadata URL that is provided by the resource-partner organization. With this option,
your AD FS server automatically collects the information required for the relying-party trust.
3.
Add a claim description. The claim description lists the claims that your organization provides to the
relying partner. This information might include user names, email addresses, group membership
information, or other identifying information about a user.
4.
Prepare client computers for federation. This might involve two steps:
Add the account-partner federation server. In the browsers of client computers, add the account-partner
federation server to the local intranet sites list. By adding the account-partner federation
server to the local intranet list on client computers, you enable Integrated Windows
authentication, which means that users are not prompted for authentication if they are logged
on to the domain already. You can use Group Policy Objects (GPOs) to assign the URL to the local
intranet site list.
Configure certificate trusts. This is an optional step that is required only if one or more of the
servers that clients access do not have trusted certificates. The client computer might have to
connect to the account-federation servers, resource-federation servers, or federation-server
proxies, and the destination Web servers. If any of these certificates are not from a trusted public
CA, you might have to add the appropriate certificate or root certificate to the certificate store on
the clients. You can do this by using GPOs.
Web servers must have either WIF or the AD FS 1.x Claims-Aware Web Agent role services installed to
externalize the identity logic and accept claims. WIF provides a set of development tools that enable
developers to integrate claims-based authentication and authorization into their applications. WIF also
includes a software development kit and sample applications.
Note: You can use SAML tokens to integrate applications on non-Microsoft Web servers
with AD FS. Additional open-source or third-party software typically is necessary to support the
use of SAML tokens on a non-Microsoft Web server.
Configuring a resource-partner organization is similar to configuring an account-partner organization,
and consists of the following steps:
1.
Implement the physical topology for the resource-partner deployment. The planning and
implementation steps are the same as for the account partner, with the addition of planning the Web
server location and configuration.
2.
Add an attribute store. On the resource partner, the attribute store is used to populate the claims that
are offered to the client to present to the Web server.
3.
4.
Send LDAP Attributes as Claims. Use this template when you select specific attributes in an LDAP
attribute store to populate claims. You can configure multiple LDAP attributes as individual claims in
a single claim rule that you create from this template. For example, you can create a rule that extracts
the sn (surname) and givenName AD DS attributes from all authenticated users, and then sends
these values as outgoing claims to be sent to a relying party.
Send Group Membership as a Claim. Use this template to send a particular claim type and an
associated claim value that is based on the user's AD DS security group membership. For example,
you might use this template to create a rule that sends a group claim type with a value of
SalesAdmin, if the user is a member of the Sales Manager security group within their AD DS domain.
This rule issues only a single claim based on the AD DS group that you select as a part of the
template.
Pass Through or Filter an Incoming Claim. Use this template to set additional restrictions on which
claims are submitted to relying parties. For example, you might want to use a user email address as a
claim, but only forward the email address if the domain suffix on the email address is adatum.com.
When you use this template, you can either pass through whatever claim you extract from the
attribute store, or you can configure rules that filter whether the claim is passed on based on various
criteria.
Transform an Incoming Claim. Use this template to map the value of an attribute in the claims-provider
attribute store to a different value in the relying-party attribute store. For example, you
might want to provide all members of the Marketing department at A. Datum limited access to a
purchasing application at Trey Research. At Trey Research, the attribute used to define the limited
access level might have an attribute of LimitedPurchaser. To address this scenario, you can configure
a claims rule that transforms an outgoing claim where the Department value is Marketing, to an
incoming claim where the ApplicationAccess attribute is LimitedPurchaser. Rules created from this
template must have a one-to-one relationship between the claim at the claims provider and the claim
at the relying partner.
Permit or Deny Users Based on an Incoming Claim. This template is available only when you configure
issuance-authorization rules or delegation-authorization rules on a relying-party trust. Use this
template to create rules that enable or deny access by users to a relying party, based on the type and
value of an incoming claim. This claim rule template allows you to perform an authorization check on
the claims provider before claims are sent to a relying party. For example, you can use this rule
template to create a rule that only permits users from the Sales group to access a relying party, while
authentication requests from members of other groups are not sent to the relying party.
If none of the built-in claim rule templates provides the functionality that you require, you can create
more-complex rules by using the AD FS claim-rule language. By creating a custom rule, you can extract
claims information from multiple attribute stores and combine claim types into a single claim rule.
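To make the claim rule language more concrete, the following Windows PowerShell sketch sets an issuance
authorization rule on a relying-party trust that permits access only when an incoming group claim has the
value Sales. The trust name and rule text are illustrative and are not part of the course labs.

$authRule = @'
c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value == "Sales"]
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");
'@

# Replaces any existing issuance authorization rules on the trust with this single rule
Set-AdfsRelyingPartyTrust -TargetName "A. Datum Test App" -IssuanceAuthorizationRules $authRule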
Ask users to select their home realm. With this option, when users are redirected to the relying party's
federation server, the federation server can display a web page that asks them to identify their
company. Once users select the appropriate company, the federation server can use that information
to redirect client computers to the appropriate home federation server for authentication.
Modify the link for the web application to pass the WHR parameter that contains the user's home
realm. The relying party's federation server uses this parameter to redirect the user to the appropriate
home realm automatically. This means that the user does not have to be prompted to select the
home realm because the WHR parameter in the URL that the user clicks includes the needed
information for the relying party's federation server. The modified link might look something like the
following: https://www.adatum.com/OrderApp/?whr=urn:federation:TreyResearch.
Note: One of the options available for home realm discovery with SAML 2.0-compliant
applications is a SAML profile called IdPInitiated SSO. This SAML profile configures users to access
their local claims provider first, which can prepare the user's token with the claims required to
access the partner's web application. The Windows Server 2012 version of AD FS does not
implement the IdPInitiated SSO profile fully, but it provides some of the same functionality by
implementing a feature named RelayState.
To learn more about the Supporting Identity Provider Initiated RelayState, go to:
http://go.microsoft.com/fwlink/?LinkId=269666
Note: The home realm discovery process occurs the first time a user tries to access a web
application. After the user authenticates successfully, a home realm discovery cookie is issued to the
client. This ensures that the user does not have to go through the process the next time. However,
this cookie expires after a month, unless the user clears the cookie cache prior to expiration.
Demonstration Steps
1.
On LON-DC1, in the AD FS Manager, in the Edit Claim Rules for A. Datum Test App window, add an
Issuance Transform Rule with the following settings:
2.
3.
4.
5.
View the rule language for the Allow A. Datum Users rule.
Lesson 5
Lesson Objectives
After completing this lesson, you will be able to:
Pass-Through
When you select pass-through as the type of preauthentication, no preauthentication occurs, and valid
requests are passed to web-based applications on an internal network without performing authentication
on a user. All authentication to an application is performed directly by the application after a user is
connected. You can use pass-through for any web application.
A web application that Web Application Proxy publishes without preauthentication is protected from
malformed packets that could cause a DoS attack. However, the web application is not protected from
application-level threats where the application mishandles valid data. For example, an HTTPS request with
valid HTTP commands would pass through to the application, even if the actions that the HTTP
commands request may cause the web application to fail.
Preauthentication
When you select AD FS for preauthentication, AD FS authenticates a user request before it is passed to an
internal, web-based application. This ensures that only authorized users can send data to a web-based
application. Preauthentication provides a higher level of protection than pass-through authentication
because unauthenticated users cannot submit requests to the application.
Only a claims-aware application that uses AD FS for authentication can use preauthentication. You must
configure the claims-aware application in AD FS as a relying party, and then select it from a list when you
configure Web Application Proxy. Web Application Proxy is aware of the relying parties configured in AD
FS because of the integration between AD FS and Web Application Proxy.
Note: When you use preauthentication, the Web Application Proxy effectively becomes a
relying party for authenticating the users and obtaining claims.
URLs
For each application that you publish, you must configure an external URL and a back-end server URL.
External users utilize the external URL when accessing the application, while the Web Application Proxy
uses the back-end server URL to access the application for external users.
If you are using split DNS, it is possible to leave the external URL and the back-end server URL as the same
value. Some applications experience errors when the external URL and the back-end server URL are
different. When the external URL and the back-end server URL are different, only the host name in the
URL can change. The path to the application must remain the same. For example, if the back-end URL
for an application is https://server1.adatum.com/app1, then you cannot have an external URL of
https://extranet.adatum.com/application1.
Certificates
When you define the external URL, you also need to select a certificate that contains the host name in the
external URL. You must install this certificate on the local server. However, it does not need to match the
certificate used on the back-end server that hosts the application. You can have one certificate for each
host name used on the Web Application Proxy server, or a single certificate with multiple names.
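The following Windows PowerShell command sketches how you might publish an application through Web
Application Proxy with AD FS preauthentication. The URLs, relying-party name, and certificate thumbprint are
placeholders, and the command assumes that the relying party already exists in AD FS.

Add-WebApplicationProxyApplication -Name "A. Datum Test App" -ExternalPreauthentication ADFS -ADFSRelyingPartyName "A. Datum Test App" -ExternalUrl "https://lon-svr1.adatum.com/AdatumTestApp/" -BackendServerUrl "https://lon-svr1.adatum.com/AdatumTestApp/" -ExternalCertificateThumbprint "<thumbprint>"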
Authentication Process
An internal AD FS server uses Windows authentication to prompt for authentication. This works well for
internal, domain-joined computers that can pass workstation credentials automatically to AD FS and
automate authentication. This prevents users from seeing a request for authentication credentials.
When computers that are not domain-joined communicate with AD FS, users encounter a logon prompt that
the web browser presents. This logon prompt asks for a user name and password, but provides no context.
When you use federation service proxy, it provides an authentication web page for computers that are not
domain members. This provides better compatibility than browser-based Windows authentication for
AD FS clients that use non-Microsoft operating systems. You also can customize the web page to provide
more context for users, such as a company logo.
DNS Resolution
To provide seamless movement between internal and external networks, the same host name is used
when AD FS is accessed internally and externally. On the internal network, the AD FS host name resolves
to the IP address of the internal AD FS server. On the external network, the AD FS host name resolves to
the IP address of the federation service proxy. In both cases, the AD FS host name is different from the
host name of the computers that host the AD FS roles.
Certificates
The certificate an internal AD FS server uses has a subject name that is the same as the host name for
AD FS, for example, adfs.adatum.com. Because the same host name is used to access AD FS internally
and externally through the AD FS proxy, you must configure the federation service proxy with the same
certificate as the AD FS server. If the certificate subject does not match the host name, AD FS
authentication will fail.
Note: Export the certificate from the AD FS server and import it on the Web Application
Proxy server to ensure that you have a certificate with the same subject name. Remember to
include the private key when you export the certificate.
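As a sketch, the following Windows PowerShell commands export the AD FS certificate, including its private
key, and import it on the Web Application Proxy server. The thumbprint, file path, and password are example
values only.

# On the AD FS server; substitute the thumbprint of the adfs.adatum.com certificate
$pfxPassword = ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force
Export-PfxCertificate -Cert "Cert:\LocalMachine\My\<thumbprint>" -FilePath "C:\adfs.pfx" -Password $pfxPassword

# On the Web Application Proxy server
Import-PfxCertificate -FilePath "C:\adfs.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword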
Demonstration Steps
Install the Web Application Proxy
On LON-SVR2, in the Server Manager, add the remote access server role and the Web Application
Proxy role service.
On LON-DC1, open a Microsoft Management Console, and then add the Certificates snap-in for the
Local Computer.
2.
Password: Pa$$w0rd
On LON-SVR2, open a Microsoft Management Console, and then add the Certificates snap-in for the
Local Computer.
2.
From the Personal folder, import the adfs.adatum.com certificate with the following information:
Password: Pa$$w0rd
On LON-SVR2, in the Server Manager, click the Notifications icon, and then click Open the Web
Application Proxy Wizard.
2.
In the Web Application Proxy Wizard, provide the following configuration settings:
Password: Pa$$w0rd
Untrusted devices
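As an alternative to the wizard used in this demonstration, the following Windows PowerShell commands
sketch the installation and initial configuration of Web Application Proxy against the A. Datum federation
service. The certificate thumbprint is a placeholder.

Install-WindowsFeature Web-Application-Proxy -IncludeManagementTools
Install-WebApplicationProxy -FederationServiceName "adfs.adatum.com" -CertificateThumbprint "<thumbprint>" -FederationServiceTrustCredential (Get-Credential "Adatum\Administrator")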
Supported Clients
The only Windows client that supports Workplace Join is Windows 8.1. You cannot use earlier versions of
Windows clients for Workplace Join. However, Workplace Join is cross platform, and it supports iOS
devices such as iPads and iPhones. Support for Android devices is planned.
Supported Applications
Only claims-aware applications that use AD FS can use device registration information. AD FS provides
device information to the claims-aware application during the authentication process.
Single Sign-on
When you use a workplace-joined device, you have SSO for your enterprise applications. After you
authenticate once to an application, you are not prompted for authentication credentials the second time.
2.
Run the following Windows PowerShell command and provide the name of a service account such as
Adatum\ADFS$ when prompted:
Enable-AdfsDeviceRegistration
3.
In the AD FS management console, in the Global Authentication Policy, select the Enable Device
Authentication check box.
Workplace Join is supported through Web Application Proxy automatically after you perform the
configuration steps above.
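If you prefer Windows PowerShell to the AD FS management console for enabling device authentication, the
following command is a sketch that turns on device authentication in the global authentication policy.

Set-AdfsGlobalAuthenticationPolicy -DeviceAuthenticationEnabled $true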
Certificates on Devices
The Workplace Join process places a certificate on the device. The device uses this certificate to prove its
identity and to authenticate to the object created for the device in AD DS.
2.
3.
4.
In the Workplace settings, enter the email address/UPN of the user, and then click Join.
5.
When prompted, the user must authenticate. By default, the email address/UPN from the previous
screen displays. However, you can also enter credentials in the domain\user name format.
6.
7.
When Workplace Join is complete, you can verify that it was successful.
Note: The option to turn on device management enables the device to be managed by using
Windows Intune. You must have Windows Intune configured to use this option.
8.
In Active Directory Administrative Center, you can view the objects for devices enabled with the
Workplace Join feature in the RegisteredDevices organizational unit (OU).
9.
In the properties of the registered device, you can verify that the displayName attribute matches the
name of the computer that is registered.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 45 minutes
20412D-LON-DC1
20412D-LON-SVR1
20412D-LON-SVR2
20412D-TREY-DC1
User name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and then in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
5.
Password: Pa$$w0rd
2.
3.
4.
5.
6.
7.
8.
9.
2.
On LON-DC1, use the DNS Manager to create a new conditional forwarder with the following
settings:
Store this conditional forwarder in Active Directory and replicate it as follows: All DNS
servers in this forest
On TREY-DC1, use the DNS Manager to create a new conditional forwarder with the following
settings:
Store this conditional forwarder in Active Directory and replicate it as follows: All DNS
servers in this forest
Note: In a production environment, it is likely that you would use Internet DNS instead of
conditional forwarders.
2.
Open Group Policy Management, and then edit the Default Domain Policy.
3.
4.
5.
6.
Right-click LON-DC1.Adatum.com_AdatumCA.crt, and then install the certificate into the Trusted
Root Certification Authority store.
7.
8.
Note: If you obtain certificates from a trusted certification authority, you do not need to configure a
certificate trust between the organizations.
On TREY-DC1, use the DNS Manager to add a new host record for AD FS:
o
Name: adfs
IP address: 172.16.10.10
On TREY-DC1, open Internet Information Services (IIS) Manager, and then view the server certificates.
2.
Organizational unit: IT
City/locality: London
State/Province: England
Country/region: GB
2.
3.
4.
Set-ADAccountPassword adfsService
Enable-ADAccount adfsService
On TREY-DC1, in the Server Manager, add the Active Directory Federation Services role.
In the Server Manager notifications, click Configure the federation services on this server.
2.
TREYRESEARCH\adfsService
Password: Pa$$w0rd
2.
On LON-DC1, use the AD FS management console to add a new claims provider trust with the
following settings:
Import data about the claims provider published online or on a local network
Open the Edit Claim Rules dialog for this claims provider trust when the wizard closes
Create a claim rule for Trey Research by using the following settings:
2.
On TREY-DC1, use the AD FS management console to create a new relying-party trust with the
following settings:
Import data about the relying party published online or on a local network
I do not want to configure multi-factor authentication settings for this relying party trust
at this time
Open the Edit Claim Rules dialog box for the relying party trust when the wizard closes
2.
Select the Trey Research home realm, and then sign in as TreyResearch\April with the password
Pa$$w0rd.
3.
4.
Close Internet Explorer, and then connect to the same website. Verify that you are not prompted for a
home realm this time.
Note: You are not prompted for a home realm on the second access. Once users have
selected a home realm and have been authenticated by a realm authority, they are issued a
_LSRealm cookie by the relying party's federation server. The default lifetime for the cookie is 30
days. Therefore, to sign in multiple times, you should delete that cookie after each logon attempt
to return to a clean state.
On TREY-DC1, in the AD FS management console, remove the issuance authorization rule from the A.
Datum Corporation relying party trust that permits access for all users.
2.
Add an issuance authorization rule to the A. Datum Corporation relying party trust that allows all
users that are members of the Production group:
3.
Add a transform claim rule to the Active Directory claims provider trust to send group membership as a claim:
2.
3.
Verify that you cannot access the application because April is not a member of the production group.
4.
5.
6.
Verify that you can access the application because April is a member of the production group.
Results: After completing this exercise, you will have configured access for a claims-aware application in a
partner organization.
2.
3.
4.
5.
6.
7.
On LON-SVR2, in the Server Manager, add the Remote Access server role and the Web
Application Proxy role service.
On LON-DC1, open the Microsoft Management Console, and then add the Certificates snap-in for
the Local Computer.
2.
Password: Pa$$w0rd
3.
On LON-SVR2, open a Microsoft Management Console, and then add the Certificates snap-in for the
Local Computer.
4.
Password: Pa$$w0rd
On LON-SVR1, open the Microsoft Management Console, and then add the Certificates snap-in for
the Local Computer.
2.
Password: Pa$$w0rd
3.
On LON-SVR2, open the Microsoft Management Console, and then add the Certificates snap-in for
the Local Computer.
4.
Password: Pa$$w0rd
In the Server Manager, click the Notifications icon, and then click Open the Web Application
Proxy Wizard.
2.
In the Web Application Proxy Wizard, provide the following configuration settings:
3.
Password: Pa$$w0rd
Leave the Remote Access Management Console open for the next task.
On LON-SVR2, in the Remote Access Management Console, publish a new application with the
following settings:
o
172.16.0.22 adfs.adatum.com
172.16.0.22 lon-svr1.adatum.com
2.
3.
Note: You edit the hosts to force TREY-DC1 to access the application through Web
Application Proxy. In a production environment, you would do this by using split DNS.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have configured Web Application Proxy to secure access
to AdatumTestApp from the Internet.
Question: Why does using a certificate from a trusted provider on the Internet negate the
need to configure certificate trusts between organizations?
Question: Could you have created authorization rules in Adatum.com and achieved the
same result if you had instead created authorization rules in TreyResearch.net?
Module 9
Implementing Network Load Balancing
Contents:
Module Overview
9-1
Module Overview
Network Load Balancing (NLB) is a feature available to computers that run the Windows Server operating
system. NLB uses a distributed algorithm to balance an IP traffic load across multiple hosts. It helps to
improve the scalability and availability of business-critical, IP-based services. NLB also provides high
availability, because it detects host failures and automatically redistributes traffic to surviving hosts.
To deploy NLB effectively, you must understand its functionality and the scenarios where its deployment
is appropriate. The main update to NLB in Windows Server 2012 and Windows Server 2012 R2,
compared to Windows Server 2008 R2, is the inclusion of a comprehensive set of Windows PowerShell
cmdlets. These cmdlets enhance your ability to automate the management of Windows Server 2012 and
Windows Server 2012 R2 NLB clusters. The Network Load Balancing console, which is also available in
Windows Server 2008 and Windows Server 2008 R2, also is present in Windows Server 2012 and
Windows Server 2012 R2.
This module introduces you to NLB, and shows you how to deploy this technology. This module also
discusses the situations for which NLB is appropriate, how to configure and manage NLB clusters, and how
to perform maintenance tasks on NLB clusters.
Objectives
After completing this module, you will be able to:
Describe NLB.
Lesson 1
Overview of NLB
Before you deploy NLB, you need to have a firm understanding of the types of server workloads for which
this high availability technology is appropriate. If you do not understand NLB functionality, you might
deploy it in a manner that does not accomplish your overall objectives. For example, you need to
understand why NLB is appropriate for web applications, but not for Microsoft SQL Server databases.
This lesson provides an overview of NLB, and the features new to NLB in Windows Server 2012. It also
describes how NLB works normally, and how it works during server failure and server recovery.
Lesson Objectives
After completing this lesson, you will be able to:
What Is NLB?
NLB is a scalable, high-availability feature that you
can install on all editions of Windows Server 2012.
A scalable technology is one that enables you to
add additional components, such as additional
cluster nodes in this case, to meet an increasing
demand. A node in a Windows Server 2012 or
Windows Server 2012 R2 NLB cluster is a
computer, either physical or virtual, that is running
the Windows Server 2012 or the Windows
Server 2012 R2 operating system.
Windows Server 2012 NLB clusters can have
between two and 32 nodes. When you create an
NLB cluster, it creates a virtual network address and virtual network adapter. The virtual network adapter
has an IP address and a media access control (MAC) address. Network traffic to this address is distributed
evenly across the nodes in the cluster. In a basic NLB configuration, each node in an NLB cluster will
service requests at a rate that is approximately equal to that of all other nodes in the cluster. When an
NLB cluster receives a request, it will forward that request to the node that is the least utilized currently.
You can configure NLB to prefer certain nodes over others.
NLB is suitable for stateless applications such as the web tier of multi-tier applications because it does not
matter which web server a client connects to when connecting to a multi-tier application. NLB is
unsuitable for stateful applications such as traditional file servers and database servers, as these
applications require a persistent connection to a particular server, rather than having any server handle
the connection.
NLB is failure-aware. This means that if one of the nodes in the NLB cluster goes offline, requests will no
longer be forwarded to that node, but other nodes in the cluster will continue to accept requests. When
the failed node returns to service, incoming requests are again distributed to it until traffic is balanced across all
nodes in the cluster.
NLB can only detect server failure; it cannot detect application failure. This means that if a web application
fails but the server remains operational, the NLB cluster will continue to forward traffic to the cluster node
that hosts the failed application. One way to manage this problem is to implement a monitoring solution
such as Microsoft System Center 2012 - Operations Manager. With Operations Manager, you can
monitor the functionality of applications. You also can configure Operations Manager to generate an alert
in the event that an application on a cluster node fails. An alert, in turn, can configure a remediation
action, such as restarting services, restarting the server, or withdrawing the node from the NLB cluster so
that the node does not receive further incoming traffic.
The Windows PowerShell module for NLB includes cmdlets such as the following:
Get-NlbCluster. Retrieves information about the NLB cluster.
Get-NlbClusterNode. Retrieves information about a node in the NLB cluster.
Get-NlbClusterNodeDip. Retrieves the dedicated IP address of a cluster node.
Get-NlbClusterNodeNetworkInterface. Retrieves information about the network interfaces of a cluster node.
Get-NlbClusterPortRule. Retrieves the port rules that are configured on the NLB cluster.
Get-NlbClusterVip. Retrieves the virtual IP addresses of the NLB cluster.
Get-NlbClusterDriverInfo. Retrieves information about the NLB driver on the local computer.
New-NlbClusterIpv6Address. Generates an IPv6 address for use on the NLB cluster.
Set-NlbClusterPortRuleNodeHandlingPriority. Sets the host handling priority for a port rule that uses single-host filtering.
Set-NlbClusterPortRuleNodeWeight. Sets the load weight for a port rule that uses multiple-hosts filtering.
Note: To see the list of Windows PowerShell cmdlets for NLB, you can use the
Get-Command -Module NetworkLoadBalancingClusters command.
Lesson 2
Lesson Objectives
After completing this lesson you will be able to:
Deploy NLB.
Demonstration Steps
Create a Windows Server 2012 R2 NLB cluster
1.
2.
From the Tools menu, open the Windows PowerShell Integrated Scripting Environment (ISE).
3.
Enter the following commands, and press Enter after each command:
Invoke-Command -ComputerName LON-SVR1,LON-SVR2 -ScriptBlock {Install-WindowsFeature
NLB,RSAT-NLB}
New-NlbCluster -InterfaceName "Ethernet" -OperationMode Multicast -ClusterPrimaryIP
172.16.0.42 -ClusterName LON-NLB
Add-NlbClusterNode -InterfaceName "Ethernet" -NewNodeName "LON-SVR2" -NewNodeInterface "Ethernet"
4.
Open Network Load Balancing Manager from the Tools menu, and view the cluster.
Port Rules
With port rules, you can configure how the NLB
cluster directs requests to specific IP addresses
and ports. You can load balance traffic on
Transmission Control Protocol (TCP) port 80
across all nodes in an NLB cluster, while directing all requests to TCP port 25 to a specific host.
To specify how you want to distribute requests across nodes in the cluster, you configure a filtering mode
when creating a port rule. You can do this in the Add/Edit Port Rule dialog box, which you can use to
configure one of the following filtering modes:
Multiple hosts. When you configure this mode, all NLB nodes respond according to the weight
assigned to each node. Node weight is calculated automatically, based on the performance
characteristics of the host. If a node fails, other nodes in the cluster continue to respond to incoming
requests. Multiple host filtering increases availability and scalability, as you can increase capacity by
adding nodes, and the cluster continues to function in the event of node failure.
Single host. When you configure this mode, the NLB cluster directs traffic to the node that is assigned
the highest priority. If the node that is assigned the highest priority is unavailable, the host assigned
the next highest priority manages the incoming traffic. Single host rules increase availability but do
not increase scalability.
Note: The highest priority is the lowest number, with a priority of one being a higher
priority than a priority of 10.
Disable this port range. When you configure this option, all packets for this port range are dropped,
without being forwarded to any cluster nodes. If you do not disable a port range, and there is no
existing port rule, the traffic is forwarded to the host with the lowest priority number.
You can use the following Windows PowerShell cmdlets to manage port rules:
Set-NlbClusterPortRule. Use this cmdlet to modify the properties of an existing port rule.
Note: Each node in a cluster must have identical port rules. The exception to this is the load
weight (in multiple-hosts filter mode) and handling priority (in single-host filter mode).
Otherwise, if the port rules are not identical, the cluster will not converge.
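As an illustration, the following Windows PowerShell sketch shows one way to create port rules that match the example above. The interface name and port values are illustrative, and the commands assume that they are run on a cluster node:
# Load balance TCP port 80 across all nodes in the cluster, with no affinity
Add-NlbClusterPortRule -InterfaceName "Ethernet" -Protocol Tcp -StartPort 80 -EndPort 80 -Mode Multiple -Affinity None
# Direct all traffic on TCP port 25 to the single node with the highest handling priority
Add-NlbClusterPortRule -InterfaceName "Ethernet" -Protocol Tcp -StartPort 25 -EndPort 25 -Mode Single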
Affinity
Affinity determines how the NLB cluster distributes requests from a specific client. Affinity settings only come into
effect when you use the multiple hosts filtering mode. You can select from the following affinity modes:
None. In this mode, any cluster node responds to any client request, even if the client is reconnecting
after an interruption. For example, the first webpage on a web application might be retrieved from
the third node, the second webpage from the first node, and the third webpage from the second
node. This affinity mode is suitable for stateless applications.
Single. When you use this affinity mode, a single cluster node handles all requests from a single client.
For example, if the third node in a cluster handles a client's first request, then all subsequent requests
are also handled by that node. This affinity mode is useful for stateful applications.
Class C. When you set this mode, a single node will respond to all requests from a class C network
(one that uses the 255.255.255.0 subnet mask). This mode is useful for stateful applications where the
client is accessing the NLB cluster through load balanced proxy servers. These proxy servers will have
different IP addresses, but they will be within the same class C (24-bit) subnet block.
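As a brief sketch, the following commands show how affinity can be specified when a port rule is created, and how it might be changed on an existing rule by using Set-NlbClusterPortRule. The port values are illustrative, and the Network affinity value corresponds to the Class C option in the console:
# Create a rule that sends all requests from a single client IP address to the same node
Add-NlbClusterPortRule -Protocol Tcp -StartPort 443 -EndPort 443 -Mode Multiple -Affinity Single
# Change an existing rule (identified here by a port in its range) to Class C (Network) affinity
Get-NlbClusterPortRule -Port 443 | Set-NlbClusterPortRule -NewAffinity Network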
Host Parameters
You configure the host parameters for a host by clicking the host in the Network Load Balancing Manager
console, and then from the Host menu, clicking Properties. You can configure the following host settings
for each NLB node:
Priority. Each NLB node is assigned a unique priority value. If no existing port rule matches the traffic
that is addressed to the cluster, traffic will be assigned to the NLB node that is assigned the lowest
priority value.
Dedicated IP address. You can use this parameter to specify the address that the host uses for remote
management tasks. When you configure a dedicated IP address, NLB configures port rules so that
they do not affect traffic to that address.
Subnet mask. When you select a subnet mask, ensure that there are enough host bits to support the
number of servers in the NLB cluster, and any routers that connect the NLB cluster to the rest of the
organizational network. For example, if you plan to have a cluster that has 32 nodes and two routers
that connect to the NLB cluster, you will need to set a subnet mask that supports at least 34 host
addresses, such as 255.255.255.192.
Initial host state. You can use this parameter to specify the actions the host will take after a reboot. In
the default Started state, the host will rejoin the NLB cluster automatically. The Suspended state
pauses the host, and allows you to perform operations that require multiple reboots without
triggering cluster convergence. The Stopped state stops the node.
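The same host parameters can be viewed and adjusted with Windows PowerShell. The following is a minimal sketch; the node name and priority value are illustrative:
# View the configuration of every node in the cluster
Get-NlbClusterNode | Format-List
# Change the priority value of one node (lower values are higher priority)
Get-NlbClusterNode | Where-Object { $_.Name -eq "LON-SVR2" } | Set-NlbClusterNode -HostPriority 2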
Demonstration Steps
Configure affinity for NLB cluster nodes
1.
2.
In Windows PowerShell, enter each of the following commands, and press Enter after each command:
Cmd.exe
Mkdir c:\porttest
Xcopy /s c:\inetpub\wwwroot c:\porttest
Exit
New-Website -Name PortTest -PhysicalPath C:\porttest -Port 5678
New-NetFirewallRule -DisplayName PortTest -Protocol TCP -LocalPort 5678
2.
3.
In Network Load Balancing Manager, edit the properties of the LON-NLB cluster.
4.
5.
Port range: 80 to 80
Protocols: Both
Affinity: None
Protocols: Both
6.
7.
Configure the port rule for port 5678 and set handling priority to 10.
Unicast Mode
When you configure a NLB cluster to use unicast
mode, all cluster hosts use the same unicast MAC
address. Outgoing traffic uses a modified MAC
address that is determined by the cluster host's
priority setting. This prevents the switch that
handles outbound traffic from having problems
with all cluster hosts using the same MAC address.
When you use unicast mode with a single network adapter on each node, only computers that use the
same subnet can communicate with the node by using the node's assigned IP address. If you have to
perform any node management tasks (such as connecting with the Windows operating system feature
Remote Desktop to apply software updates), you will need to perform these tasks from a computer that is
on the same TCP/IP subnet as the node.
When you use unicast mode with two or more network adapters, one adapter will be used for dedicated
cluster communication, and the other adapter or adapters can be used for management tasks. When you
use unicast mode with multiple network adapters, you can perform cluster management tasks such as
connecting using Remote PowerShell to add or remove roles and features.
Unicast mode can also minimize problems that occur when cluster nodes also host other non-NLB related
roles or services. For example, using unicast mode means that a server that participates in a web server
cluster on port 80 may also host another service such as DNS or DHCP. Although this is possible, we
recommend that all cluster nodes have the same configuration.
Multicast Mode
When you configure an NLB cluster to use multicast mode, each cluster host keeps its original MAC
address, but also is assigned an additional multicast MAC address. Each node in the cluster is assigned the
same additional MAC multicast address. Multicast mode requires network switches and routers that
support multicast MAC addresses.
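The following sketch shows how the operation mode might be checked and changed with Windows PowerShell. The values are illustrative, and changing the operation mode typically causes the cluster to converge again, so treat it as a maintenance operation:
# Display the current cluster configuration, including its operation mode
Get-NlbCluster
# Switch the cluster to multicast mode
Get-NlbCluster | Set-NlbCluster -OperationMode Multicast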
Network Considerations
You can improve NLB cluster performance when you use unicast mode by using separate virtual local area
networks (VLANs) for cluster traffic and management traffic. By using VLANs to segment traffic, you can
prevent management traffic from affecting cluster traffic. When you host NLB nodes on virtual machines
using Windows Server 2012 or Windows Server 2012 R2, you can also use network virtualization to
segment management traffic from cluster traffic.
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
Describe the special considerations for deploying NLB clusters on virtual machines.
Describe the considerations for upgrading an NLB cluster to Windows Server 2012 or Windows
Server 2012 R2.
When created, these firewall rules do not include scope settings. In high-security environments, you would
configure an appropriate local IP address or IP address range, and a remote IP address for each rule. The
remote IP address or address range should include the addresses that other hosts in the cluster use.
When you configure additional firewall rules, remember the following:
When you use multiple network adapters in unicast mode, configure different firewall rules for each
network interface. For the interface used for management tasks, you should configure the firewall
rules to allow inbound management traffic only, for example, traffic for remote Windows
PowerShell, Windows Remote Management, and Remote Desktop. On the network interface that the
cluster node uses to provide an application to the cluster, you should configure firewall rules that
allow access to that application. For example, allow incoming traffic on TCP ports 80 and 443 for an
application that uses the HTTP and HTTPS protocols, as illustrated in the sketch that follows this list.
When you use multiple network adapters in multicast mode, configure firewall rules that allow access
to applications that are hosted on the cluster, but block access to other ports.
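The following Windows PowerShell sketch illustrates this approach. The rule names, interface aliases, ports, and the management address range are examples only and should be adapted to your environment:
# On the cluster (application) interface, allow only the application traffic
New-NetFirewallRule -DisplayName "NLB Web App" -Direction Inbound -Protocol TCP -LocalPort 80,443 -Action Allow -InterfaceAlias "Ethernet"
# On the management interface, allow management traffic only, scoped to the management subnet
New-NetFirewallRule -DisplayName "NLB Remote Desktop" -Direction Inbound -Protocol TCP -LocalPort 3389 -RemoteAddress 172.16.0.0/24 -Action Allow -InterfaceAlias "Management"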
An NLB cluster supports up to 32 nodes. This means that you can scale out a single NLB cluster so that 32
separate nodes participate in that cluster. When you consider scaling an application so that it is hosted on
a 32-node NLB cluster, remember that each node in the cluster must be on the same TCP/IP subnet.
An alternative to building single NLB clusters is to build multiple NLB clusters, and use DNS round robin
to share traffic between them. DNS round robin is a technology that allows a DNS server to provide
requesting clients with different IP addresses to the same hostname, in sequential order. For example, if
three addresses are associated with a hostname, the first requesting host receives the first address, the
second receives the second address, and the third receives the third address, and so forth. When you use
DNS round robin with NLB, you associate the IP addresses of each cluster with the hostname that is used
by the application.
Distributing traffic between NLB clusters using DNS round robin also allows you to deploy NLB clusters
across multiple sites. You also can use DNS round robin in conjunction with netmask ordering. This
technology ensures that clients on a subnet are provided with an IP address of a host on the same
network, if one is available. For example, you might deploy three four-node NLB clusters in the cities of
Sydney, Melbourne, and Canberra, and use DNS round robin to distribute traffic between them. With
netmask ordering, a client in Sydney that is accessing the application in Sydney will be directed to the NLB
cluster hosted in Sydney. A client that is not on the same subnet as the NLB cluster nodes, such as a client
in the city of Brisbane, would be directed by DNS round robin to either the Sydney, Melbourne, or
Canberra NLB cluster.
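As a simple sketch, the following Windows PowerShell commands register the virtual IP address of each NLB cluster against the same host name, so that the DNS server returns the addresses in round robin order. The zone name, host name, and IP addresses are illustrative:
# Register one A record per NLB cluster virtual IP address under the same host name
Add-DnsServerResourceRecordA -ZoneName "adatum.com" -Name "webapp" -IPv4Address 172.16.0.42
Add-DnsServerResourceRecordA -ZoneName "adatum.com" -Name "webapp" -IPv4Address 172.17.0.42
Add-DnsServerResourceRecordA -ZoneName "adatum.com" -Name "webapp" -IPv4Address 172.18.0.42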
When you are performing an upgrade, you can use one of the following strategies:
Piecemeal upgrade. During this type of upgrade, you add new Windows Server 2012 nodes to an
existing cluster, and then remove the nodes that are running older versions of the Windows Server
operating system. This type of upgrade is appropriate when the original hardware and operating
system does not support a direct upgrade to Windows Server 2012.
Rolling upgrade. During this type of upgrade, you upgrade one node in the cluster at a time. You do
this by taking the node offline, performing the upgrade, and then rejoining the node back to the
cluster.
To learn more about upgrading NLB clusters, go to:
http://go.microsoft.com/fwlink/?LinkId=270037
Objectives
After completing this lab, the students will be able to:
Lab Setup
Estimated Time: 45 minutes
Virtual machines
20412D-LON-DC1
20412D-LON-SVR1
20412D-LON-SVR2
User name
Adatum\Administrator
Password
Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
5.
Password: Pa$$w0rd
2.
Open iis-85.png in Microsoft Paint, and use the Paintbrush tool and the color red to mark the IIS
logo in a distinctive manner.
3.
4.
5.
Navigate to http://LON-SVR1, and verify that the web page is marked in a distinctive manner with
the color red.
6.
Navigate to http://LON-SVR2, and verify that the website is not marked in a distinctive manner.
7.
2.
On LON-SVR1, in Windows PowerShell ISE, type the following command, and then press Enter:
New-NlbCluster -InterfaceName "Ethernet" -OperationMode Multicast -ClusterPrimaryIP
172.16.0.42 -ClusterName LON-NLB
2.
In Windows PowerShell ISE, type the following command, and then press Enter:
Invoke-Command -ComputerName LON-DC1 -ScriptBlock {Add-DNSServerResourceRecordA
-ZoneName adatum.com -Name LON-NLB -IPv4Address 172.16.0.42}
On LON-SVR1, in Windows PowerShell ISE, type the following command, and then press Enter:
Add-NlbClusterNode -InterfaceName "Ethernet" -NewNodeName "LON-SVR2" -NewNodeInterface "Ethernet"
On LON-SVR1, open the Network Load Balancing Manager console, and verify that nodes LON-SVR1
and LON-SVR2 display with the status of Converged.
2.
View the properties of the LON-NLB cluster, and verify the following:
There is a single port rule named All that starts at port 0 and ends at port 65535 for both TCP
and UDP protocols, and that it uses Single affinity.
Results: After completing this exercise, you will have successfully implemented an NLB cluster.
2.
In Windows PowerShell, enter the following commands, and then press Enter after each command:
Cmd.exe
Mkdir c:\porttest
Xcopy /s c:\inetpub\wwwroot c:\porttest
Exit
New-Website -Name PortTest -PhysicalPath "C:\porttest" -Port 5678
New-NetFirewallRule -DisplayName PortTest -Protocol TCP -LocalPort 5678
3.
Open File Explorer, and then browse to and open c:\porttest\iis-85.png in Microsoft Paint.
4.
Use the Blue paintbrush to mark the IIS logo in a distinctive manner.
5.
Switch to LON-DC1.
6.
7.
Verify that the IIS Start page with the image marked with blue displays.
8.
Switch to LON-SVR1.
9.
On LON-SVR1, open Network Load Balancing Manager, and view the cluster properties of LON-NLB.
Port range: 80 to 80
Protocols: Both
Affinity: None
Protocols: Both
Switch to LON-DC1.
2.
Using Internet Explorer, navigate to http://lon-nlb, refresh the web page 20 times, and verify that
web pages with and without the distinctive red marking display.
3.
On LON-DC1, navigate to address http://LON-NLB:5678, refresh the web page 20 times, and verify
that only the web page with the distinctive blue marking displays.
Switch to LON-SVR1.
2.
On LON-SVR1, use the Network Load Balancing Manager console to suspend LON-SVR1.
3.
Verify that node LON-SVR1 displays as Suspended, and that node LON-SVR2 displays as Converged.
4.
5.
Verify that both node LON-SVR1 and LON-SVR2 now display as Converged.
Results: After completing this exercise, you will have successfully configured and managed an NLB cluster.
Restart LON-SVR1.
2.
Switch to LON-DC1.
3.
4.
Refresh the website 20 times. Verify that the website is available, but that it does not display the
distinctive red mark on the IIS logo until LON-SVR1 has restarted.
On LON-SVR1, open the Network Load Balancing Manager console, and initiate a Drainstop on LON-SVR2.
2.
On LON-DC1, navigate to http://lon-nlb, and verify that only the welcome page with the red IIS
logo displays.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have successfully validated high availability for the NLB
cluster.
Question: How many additional nodes can you add to the LON-NLB cluster?
Question: What steps would you take to ensure that LON-SVR1 always manages requests for
web traffic on port 5678, given the port rules established by the end of this exercise?
Question: What is the difference between a Stop and a Drainstop command?
Module 10
Implementing Failover Clustering
Contents:
Module Overview
Module Overview
Providing high availability is very important for any organization that wants to provide continuous service
to its users. Failover clustering is one of the main technologies in the Windows Server 2012 operating
system that can provide high availability for various applications and services. In this module, you will
learn about failover clustering, failover clustering components, and implementation techniques.
Objectives
After completing this module, you will be able to:
Lesson 1
Lesson Objectives
After completing this lesson, you will be able to:
Define a quorum.
Improved CSVs. Windows Server 2008 R2 introduced this technology, and it became very popular for
providing virtual machine storage. In Windows Server 2012, CSV volumes appear as CSV File System,
and CSV supports Server Message Block (SMB) version 3.0 storage for Hyper-V in Windows
Server 2012 and other applications. In addition, CSV can use SMB multichannel and SMB Direct to
enable traffic to stream across multiple networks in a cluster. For additional security, you can use
BitLocker Drive Encryption for CSV disks, and you can also make CSV storage visible only to a subset
of nodes in a cluster. For reliability, CSV volumes can be scanned and repaired with no time offline.
Cluster-Aware Updating (CAU). Windows Server 2012 introduces a new technology called CAU that
automatically updates cluster nodes with Windows Update hotfixes, while keeping the cluster online
and minimizing downtime. Lesson 4: Maintaining a Failover Cluster will explain CAU in more detail.
Updating cluster nodes with little to no downtime required a lot of preparation and planning in older
versions of Windows Server. In addition, the procedure of updating cluster nodes was mostly manual,
which required additional administrative effort.
Active Directory integration improvements. Beginning with Windows Server 2008, failover clustering
has been integrated with AD DS. In Windows Server 2012, this integration is improved. Administrators
can create cluster computer objects in targeted organizational units (OUs), or by default in the same
OUs as the cluster nodes. This aligns failover cluster dependencies on AD DS with the delegated
domain administration model that many IT organizations use. In addition, you can now deploy
failover clusters with access only to read-only domain controllers.
Management improvements. Failover clustering in Windows Server 2012 uses a very similar
management console and the same administrative techniques as in Windows Server 2008, but there
are some important management improvements to the Validation Wizard and validation speed for
large failover clusters. Also, new tests for CSVs, virtual machines, and the Hyper-V role in Windows
Server 2012 have been added. In addition, new Windows PowerShell cmdlets are available for
managing clusters, monitoring clustered virtual machine applications, and creating highly available
Internet SCSI (iSCSI) targets.
The Cluster.exe command-line tool is deprecated. You can still choose to install it with the failover
clustering tools. Windows PowerShell cmdlets for failover clustering roles provide a functionality that
is similar to the Cluster.exe commands.
The Cluster Automation Server (MSClus) COM interface has been deprecated, but you can still choose
to install it with the failover clustering tools.
Support for 32-bit cluster resource DLLs has been deprecated, but you can still choose to install 32-bit
DLLs. As a best practice, you should update cluster resource DLLs to 64-bit.
The Print Server role has been removed from the High Availability Wizard, and it cannot be
configured in the Failover Cluster Manager.
The Add-ClusterPrintServerRole cmdlet has been deprecated, and it is not supported in Windows
Server 2012.
The most important new features in failover clustering quorum in Windows Server 2012 R2 are the
following:
Dynamic quorum. This feature enables a cluster to recalculate quorum in the event of node failure
and still maintain working clustered roles, even when the number of voting nodes remaining in the
cluster is less than 50 percent.
Dynamic witness. This feature dynamically determines if the witness has a vote to maintain quorum in
the cluster.
Force quorum resiliency. This feature provides additional support and flexibility to manage split brain
syndrome cluster scenarios. These occur when a cluster breaks into subsets of cluster nodes that are
not aware of each other.
Tie breaker for 50 percent node split. By using this feature, the cluster can adjust the running nodes'
vote status automatically to keep the total number of votes in the cluster an odd number.
These new quorum options and modes of work are discussed in more detail later in this lesson.
In addition to updating quorum, the most important changes to failover clustering in Windows Server
2012 R2 are the Global Update Manager mode, cluster node health detection, and AD DS detached
cluster.
AD DS Detached Cluster
Failover clusters in Windows Server 2012 are integrated with AD DS, and you cannot deploy a cluster if
nodes are not members of the same domain. When a cluster is created, appropriate computer objects for
a cluster name and a clustered role name are created in AD DS.
In Windows Server 2012 R2, you can deploy an AD DS-detached cluster. An AD DS-detached cluster is a
cluster that does not have dependencies in AD DS for network names. When you deploy clusters in
detached mode, the cluster network name and the network names for clustered roles are registered in a
local Domain Name System (DNS), but corresponding computer objects for a cluster and clustered roles
are not created in AD DS.
Cluster nodes still have to be joined to the same AD DS domain, but the person who creates a cluster does
not need to have permission to create new objects in AD DS. Also, management of these computer
objects is not needed.
When you deploy AD DS-detached clusters, side effects occur. Because computer objects are not created,
you cannot use Kerberos authentication when you access cluster resources by using the cluster network
name. Kerberos authentication is still used between the cluster nodes themselves, because their computer
accounts and objects exist in AD DS, but clients that connect to the cluster network name are authenticated
by using Windows NT LAN Manager (NTLM) authentication. Because of this, we do not recommend that you
deploy AD DS-detached clusters for any scenario that requires Kerberos authentication.
To create an AD DS-detached cluster, you must run Windows Server 2012 R2 on all cluster nodes. These
features cannot be configured by using the Failover Cluster Manager, so you must use Windows
PowerShell.
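For example, the following command is a minimal sketch of creating an AD DS-detached cluster with Windows PowerShell. The cluster name, node names, and IP address are illustrative:
# Create a cluster whose network name is registered in DNS only, with no computer object in AD DS
New-Cluster -Name CLUSTER1 -Node LON-SVR3,LON-SVR4 -StaticAddress 172.16.0.125 -NoStorage -AdministrativeAccessPoint Dns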
Network. This is a network across which cluster nodes can communicate with one another and with
clients. There are three types of networks that can be used in a cluster. These networks are discussed
in more detail in the Failover Cluster Networks topic.
Resource. This is an entity that is hosted by a node. A resource is managed by the Cluster service and
can be started, stopped, and moved to another node.
Cluster storage. This is a storage system that is usually shared between cluster nodes. In some
scenarios, such as clusters of servers running Microsoft Exchange Server, shared storage is not
required.
Clients. These are computers or users that are using the Cluster service.
Service or application. This is a software entity that is presented to clients and used by clients.
Witness. This can be a file share or disk that is used to maintain quorum. Ideally, the witness should
be located in a network that is both logically and physically separate from those used by the failover
cluster. However, the witness must remain accessible by all cluster node members. The concepts of
quorum and how the witness comes into play will be examined more closely in the following lessons.
Each node in a failover cluster:
Has full connectivity and communication with the other nodes in the cluster.
Is connected to a network through which client computers can access the cluster.
Is aware of the services or applications that are running locally, and the resources that are running on
all other cluster nodes.
Cluster storage usually refers to logical devices, typically hard disk drives or logical unit numbers
(LUNs), to which all the cluster nodes attach through a shared bus. This bus is separate from the bus that
contains the system and boot disks. The shared disks store resources such as applications and file shares
that the cluster will manage.
A failover cluster typically defines at least two data communications networks: one network enables the
cluster to communicate with clients, and the second, isolated network enables the cluster node members
to communicate directly with one another. If a directly-connected shared storage is not being used, then
a third network segment, for iSCSI or Fibre Channel, can exist between the cluster nodes and a data
storage network.
Most clustered applications and their associated resources are assigned to one cluster node at a time. The
node that provides access to those cluster resources is the active node. If the nodes detect the failure of
the active node for a clustered application, or if the active node is taken offline for maintenance, the
clustered application is started on another cluster node. To minimize the impact of the failure, client
requests are redirected immediately and transparently to the new cluster node.
CSVFS benefits. In Disk Management, CSVs now appear as CSVFS. However, this is not a new file
system. The underlying technology is still the NTFS file system, and CSVFS volumes are still formatted
with NTFS. However, because volumes appear as CSVFS, applications can discover that they run on
CSVs, which helps improve compatibility. Because of a single file namespace, all files have the same
name and path on any node in a cluster.
Multi-subnet support for CSVs. CSVs have been enhanced to integrate with SMB Multichannel to help
achieve faster throughput for CSVs.
Support for BitLocker Drive Encryption. Windows Server 2012 supports BitLocker volume encryption
for both traditional clustered disks and CSVs. Each node performs decryption by using the computer
account for the cluster itself.
Support for SMB 3.0 and higher storage. CSVs in Windows Server 2012 provide support for Server
Message Block (SMB) 3.0 storage for Hyper-V and applications such as Microsoft SQL Server.
Windows Server 2012 R2 supports SMB version 3.0.2.
Integration with SMB Multichannel and SMB Direct. This enables CSV traffic to stream across multiple
networks in the cluster and to take advantage of network adapters that support Remote Direct
Memory Access (RDMA).
Integration with the Storage Spaces feature in Windows Server 2012. This can provide virtualized
storage on clusters of inexpensive disks.
Ability to scan and repair volumes. CSVs in Windows Server 2012 support the ability to scan and
repair volumes with zero offline time by using new functionality in the chkdsk and fsutil commands. The
new Windows PowerShell cmdlet Repair-Volume is also available in Windows Server 2012.
Implementing CSVs
You can configure a CSV only when you create a failover cluster. After you create the failover cluster, you
can enable the CSV for the cluster, and then add storage to the CSV.
Before you can add storage to the CSV, the LUN must be available as shared storage to the cluster. When
you create a failover cluster, all of the shared disks configured in Server Manager are added to the cluster,
and you can add them to a CSV. If you add more LUNs to the shared storage, you must first create
volumes on the LUN, add the storage to the cluster, and then add the storage to the CSV.
As a best practice, you should configure CSV before you make any virtual machines highly available. However,
you can convert from regular disk access to CSV after deployment. The following considerations apply:
When you convert from regular disk access to CSV, the LUNs drive letter or mount point is removed.
This means that you must re-create all virtual machines that are stored on the shared storage. If you
must retain the same virtual machine settings, consider exporting the virtual machines, switching to
CSV, and then importing the virtual machines in Hyper-V.
You cannot add shared storage to CSV if it is in use. If you have a running virtual machine that is
using a cluster disk, you must shut down the virtual machine, and then add the disk to CSV.
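The following Windows PowerShell sketch shows how a clustered disk might be added to CSV. The disk name is illustrative and will vary in your environment:
# List disks that are visible to the cluster but not yet added to cluster storage, and add them
Get-ClusterAvailableDisk | Add-ClusterDisk
# Add one of the clustered disks to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 3"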
For more information on SMB, go to:
http://go.microsoft.com/fwlink/?linkID=269659
For more information on Storage Spaces, go to:
http://go.microsoft.com/fwlink/?linkID=269680
CSV Diagnosis
In Windows Server 2012 R2, you now can see the state of CSV on a per-node basis. For example, you can
see whether I/O is direct or redirected, or whether the CSV is unavailable. If a CSV is in I/O redirected
mode, you can view the reason. This information can be retrieved by using the Windows PowerShell
cmdlet Get-ClusterSharedVolumeState with the parameters StateInfo,
FileSystemRedirectedIOReason, or BlockRedirectedIOReason. This provides you with a better view of
how CSV works across cluster nodes.
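For example, the following sketch retrieves the per-node state of a CSV named Cluster Disk 3 (the name is illustrative); the state and redirection-reason values appear as properties of the output:
# Show, for each node, whether I/O to the CSV is direct or redirected, and why
Get-ClusterSharedVolumeState -Name "Cluster Disk 3" |
    Format-List Node, StateInfo, FileSystemRedirectedIOReason, BlockRedirectedIOReason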
CSV Interoperability
CSVs in Windows Server 2012 R2 also support interoperability with the following technologies:
Data Deduplication.
This added support expands the scenarios in which you can use CSVs, and enables you to take advantage
of the efficiencies that are enabled by these features.
2.
After all the resources are offline, the Cluster service attempts to transfer the instance to the node
that is listed next on the instance's list of preferred owners.
3.
If the Cluster service successfully moves the instance to another node, it attempts to bring all the
resources online. This time, it starts at the lowest part of the dependency hierarchy. Failover is
complete when all the resources are online on the new node.
The Cluster service can failback instances that were originally hosted on the offline node after the offline
node becomes active again. When the Cluster service fails back an instance, it follows the same
procedures that it performs during failover. That is, the Cluster service takes all the resources in the
instance offline, moves the instance, and then brings all the resources in the instance back online.
What Is Quorum?
Quorum is the number of elements that must be
online for a cluster to continue running. In effect,
each element can cast one vote to determine
whether the cluster continues to run. Each cluster
node is an element that has one vote. In case
there is an even number of nodes, then an
additional element, which is known as a witness, is
assigned to the cluster. The witness element can
be either a disk or a file share. Each voting
element contains a copy of the cluster
configuration; and the Cluster service works to
keep all copies synchronized at all times.
The cluster will stop providing failover protection if most of the nodes fail, or if there is a problem with
communication between the cluster nodes. Without a quorum mechanism, each set of nodes could
continue to operate as a failover cluster. This results in a partition within the cluster.
Quorum prevents two or more nodes from concurrently operating a failover cluster resource. If a clear
majority is not achieved between the node members, then the vote of the witness becomes crucial to
maintaining the validity of the cluster. Concurrent operation could occur when network problems prevent
one set of nodes from communicating with another set of nodes. That is, a situation might occur in which
more than one node tries to control access to a resource. If that resource is, for example, a database
application, damage could result. Imagine the consequence if two or more instances of the same database
are made available on the network, or if data was accessed and written to a target from more than one
source at a time. If the application itself is not damaged, the data could easily become corrupted.
Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster can
calculate the number of votes that are required for the cluster to continue providing failover protection. If
the number of votes drops below the majority, the cluster stops running. That means that it will not
provide failover protection if there is a node failure. Nodes will still listen for the presence of other nodes,
in case another node appears again on the network, but the nodes will not function as a cluster until a
majority consensus or quorum is reestablished.
Note: The full functioning of a cluster depends not just on quorum, but also on the
capacity of each node to support the services and applications that failover to that node. For
example, a cluster that has five nodes could still have quorum after two nodes fail, but each
remaining cluster node would continue serving clients only if it has enough capacity, such as disk
space, processing power, network bandwidth, or RAM, to support the services and applications
that failed over to it. An important part of the design process is to plan each node's failover
capacity. A failover node must be able to run its own load and also the load of additional
resources that might failover to it.
Node Majority. Each node that is available and in communication can vote. The cluster functions only
with a majority, or more than half of the votes. This model is preferred when the cluster consists of an
odd number of server nodes. No witness is required to maintain or achieve quorum.
Node and Disk Majority. Each node plus a designated disk in the cluster storage can vote. The disk
witness can vote when it is available and in communication. The cluster functions only with a majority,
that is, when it has more than half of the votes. This model is based on an even number of server
nodes being able to communicate with one another in the cluster, in addition to the disk witness.
Node and File Share Majority. Each node, plus a designated file share created by the administrator
(the file share witness), can vote when they are available and in communication. The cluster
functions only with a majority of the votes. This model is based on an even number of server nodes
being able to communicate with one another in the cluster, in addition to the file share witness.
No Majority: Disk Only. The cluster has quorum if one node is available and in communication with a
specific disk in the cluster storage. Only the nodes that are also in communication with that disk can
join the cluster.
Except for the No Majority: Disk Only mode, all quorum modes in Windows Server 2012 failover clusters
are based on a simple-majority vote model. As long as a majority of the votes are available, the cluster
continues to function. For example, if there are five votes in the cluster, the cluster continues to function if
at least three votes are available. The source of the votes is not relevant; the vote could be a node, a disk
witness, or a file share witness. The cluster will stop functioning if a majority of votes is not available.
In the No Majority: Disk Only mode, the quorum-shared disk can veto all other possible votes. In this
mode, the cluster will continue to function as long as the quorum-shared disk and at least one node are
available. This type of quorum also prevents more than one node from assuming the primary role.
Note: If the quorum-shared disk is not available, the cluster will stop functioning, even if all
nodes are still available. In this mode, the quorum-shared disk is a single point of failure, so this
mode is not recommended.
When you configure a failover cluster in Windows Server 2012, the Installation Wizard automatically
selects one of two default configurations. By default, failover clustering selects:
Node Majority, if there is an odd number of nodes in the cluster.
Node and Disk Majority, if there is an even number of nodes in the cluster.
Modify this setting only if you determine that a change is appropriate for your cluster, and ensure that
you understand the implications of making the change.
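For example, the following Windows PowerShell sketch shows how you can review the current quorum configuration and, if a change is appropriate, switch to a file share witness. The file share path is illustrative:
# Display the current quorum mode and witness resource
Get-ClusterQuorum
# Change the cluster to use a file share witness
Set-ClusterQuorum -NodeAndFileShareMajority "\\LON-DC1\FSW"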
In addition to planning your quorum mode, you should also consider the capacity of the nodes in your
cluster, and their ability to support the services and applications that may failover to that node. For
example, a cluster that has four nodes and a disk witness still has quorum after two nodes fail. However, if
you have several applications or services deployed on the cluster, each remaining cluster node may not
have the capacity to provide services.
A value of 0 indicates that the witness does not have a vote. A value of 1 indicates that the witness has a vote.
The cluster can now decide whether to use the witness vote based on the number of voting nodes that are
available in the cluster. As an additional benefit, quorum configuration is much simpler when you create a
cluster. Windows Server 2012 R2 configures quorum witness automatically when you create a cluster.
Also, when you add or evict cluster nodes, you no longer have to adjust the quorum configuration manually.
The cluster now automatically determines quorum management options and the quorum witness.
When you use Dynamic Quorum, you can view the current state of votes in the Failover Clustering
Management console, or you can use Windows PowerShell cmdlets.
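For example, the following sketch (which assumes Windows Server 2012 R2) displays the assigned and dynamic vote of each node and of the witness:
# NodeWeight is the assigned vote; DynamicWeight is the vote that the cluster is currently using
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight
# WitnessDynamicWeight indicates whether the witness currently has a vote
(Get-Cluster).WitnessDynamicWeight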
Public network. A public network provides client systems with access to cluster application services. IP
address resources are created on networks that provide clients with access to the Cluster service.
When you configure networks in failover clusters, you must also dedicate a network to connect to the
shared storage. If you use iSCSI for the shared storage connection, the network will use an IP-based
Ethernet communications network. However, you should not use this network for node or client
communication. Sharing the iSCSI network in this manner may result in contention and latency issues for
both users and the resources that the cluster provides.
You can use the private and public networks for both client and node communications. Preferably, you
should dedicate an isolated network for the private node communication. The reasoning for this is similar
to using a separate Ethernet network for iSCSI: to avoid resource bottleneck and contention issues. The
public network is configured to allow client connections to the failover cluster. Although the public
network can provide backup for the private network, a better design practice is to define alternative
networks for the primary private and public networks or at least team the network interfaces used for
these networks.
The networking features in Windows Server 2012-based clusters include the following:
The nodes transmit and receive heartbeats by using User Datagram Protocol (UDP) unicast, instead of
UDP broadcast, which was used in legacy clusters. The messages are sent on port 3343.
You can include clustered servers on different IP subnets, which reduces the complexity of setting up
multisite clusters.
The Failover Cluster Virtual Adapter is a hidden device that is added to each node when you install the
failover clustering feature. The adapter is assigned a media access control (MAC) address based on the
MAC address that is associated with the first enumerated physical network adapter in the node.
Failover clusters fully support IPv6 for both node-to-node and node-to-client communication.
You can use Dynamic Host Configuration Protocol (DHCP) to assign IP addresses, or to assign static IP
addresses to all nodes in the cluster. However, if some nodes have static IP addresses and you configure
others to use DHCP, the Validate a Configuration Wizard will raise an error. The cluster IP address resources
are obtained based on the configuration of the network interface supporting that cluster network.
iSCSI. iSCSI is a type of storage area network (SAN) that transmits SCSI commands over IP networks.
Performance is acceptable for most scenarios when 1 gigabit per second (Gbps) or 10 Gbps Ethernet
is used as the physical medium for data transmission. This type of SAN is inexpensive to implement
because no specialized networking hardware is required. In Windows Server 2012, you can implement
iSCSI target software on any server, and present local storage over an iSCSI interface to clients.
Fibre channel. Fibre channel SANs typically have better performance than iSCSI SANs, but are much
more expensive. Additional knowledge and hardware are also required to implement a Fibre Channel
SAN.
Shared .vhdx. In Windows Server 2012 R2, you can use a shared virtual hard disk drive as storage for
virtual machine guest clustering. A shared virtual hard drive should be located on a CSV or Scale-Out
File Server cluster, and it can be added to two or more virtual machines that are participating in a
guest cluster, by connecting to the SCSI interface.
Note: The Microsoft iSCSI Software Target is now an integrated feature in Windows
Server 2012. It can provide storage from a server over a TCP/IP network, including shared storage
for applications that are hosted in a failover cluster. In addition, in Windows Server 2012, a highly
available iSCSI Target Server can be configured as a clustered role by using the Failover Cluster
Manager or Windows PowerShell.
In Windows Server 2012 R2, you can use failover clustering to provide high availability for the storage, in
addition to using storage as a cluster component. This is done by implementing clustered storage spaces.
When you implement clustered storage spaces, you help to protect your environment from risks such as
physical disk failures, data access failures, data corruptions, volume unavailability, and server node failures.
Deploy Clustered Storage Spaces
http://go.microsoft.com/fwlink/?LinkID=386644
Storage Requirements
After you select a storage type, you should also be aware of the following storage requirements:
To use the native disk support included in failover clustering, use basic disks, not dynamic disks.
We recommend that you format the partitions with NTFS. For the disk witness, the partition must be
NTFS, because file allocation table (FAT) is not supported.
For the partition style of the disk, you can use either a master boot record (MBR) or a GUID partition
table (GPT).
Because improvements in failover clusters require that the storage respond correctly to specific SCSI
commands, the storage must follow the SCSI Primary Commands-3 standard. In particular, the
storage must support persistent reservations, as specified in the SCSI Primary Commands-3 standard.
The miniport driver used for the storage must work with the Microsoft Storport storage driver.
Storport offers a higher performance architecture and better Fibre Channel compatibility in Windows
systems.
You must isolate storage devices, in a ratio of one cluster per device. Servers from different clusters
must be unable to access the same storage devices. In most cases, a LUN that is used for one set of
cluster servers should be isolated from all other servers through LUN masking or zoning.
Consider using Multipath I/O (MPIO) software. In a highly available IT environment, you can deploy
failover cluster nodes with multiple host bus adapters. Windows Server supports this scenario by using
MPIO software. Implementing MPIO with multiple host adapters provides you with alternate paths to
your storage devices. This provides the highest level of redundancy and availability. For Windows
Server 2012, your multipath solution must be based on MPIO. Your hardware vendor usually supplies
an MPIO device-specific module (DSM) for your hardware, although Windows Server 2012 includes
one or more DSMs as part of the operating system.
If you use a shared virtual hard disk drive, you must have a separate cluster with CSV or a file server
cluster to store the virtual hard disk drive.
Lesson 2
Lesson Objectives
After completing this lesson, you will be able to:
Consider the following guidelines when you plan node capacity in a failover cluster:
Spread out the highly available applications from a failed node. When all nodes in a failover cluster
are active, the highly available services or applications from a failed node should be spread out
among the remaining nodes to prevent a single node from being overloaded.
Ensure that each node has sufficient idle capacity to service the highly available services or
applications that are allocated to it when another node fails. This idle capacity should be a sufficient
buffer to avoid nodes running at near capacity after a failure event. Failure to adequately plan
resource utilization can result in a decrease in performance after a node failure.
Use hardware with comparable capacity for all nodes in a cluster. This simplifies the planning process
for failover because the failover load will be evenly distributed among the surviving nodes.
Use standby servers to simplify capacity planning. When a passive node is included in the cluster, then
all highly available services or applications from a failed node can be failed over to the passive node.
This avoids the need for complex capacity planning. If this configuration is selected, it is important for
the standby server to have sufficient capacity to run the load from more than one node failure.
You also should examine all cluster configuration components to identify single points of failure. You can
remedy many single points of failure with simple solutions, such as adding storage controllers to separate
and stripe disks, teaming network adapters, or using multipath software. These solutions reduce the
probability that a failure of a single device will cause a failure in the cluster. Typically, server-class
computer hardware includes options for multiple power supplies for power redundancy, and for creating
redundant array of independent disks (RAID) sets for disk data redundancy.
You should install the same or similar hardware on each failover cluster node. For example, if you
choose a specific model of network adapter, you should install this adapter on each cluster node.
If you are using serial attached SCSI or Fibre Channel storage connections, the mass-storage device
controllers that are dedicated to the cluster storage should be identical in all clustered servers. The
controllers should also use the same firmware version.
If you use iSCSI storage connections, each clustered server should have one or more network adapters
or host bus adapters dedicated to the cluster storage. The network that you use for iSCSI storage
connections should not be used for network communication. In all clustered servers, the network
adapters that you use to connect to the iSCSI storage target should be identical, and we recommend
that you use Gigabit Ethernet (GigE) or more.
After you select the hardware for your cluster nodes, all tests provided in the Validate a Configuration
Wizard must pass before the cluster configuration is supported by Microsoft.
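You can run the same validation tests from Windows PowerShell by using the Test-Cluster cmdlet; the node names in the following sketch are illustrative:
# Run all cluster validation tests against the intended cluster nodes and produce a report
Test-Cluster -Node LON-SVR3,LON-SVR4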
The network adapters in a cluster network must have the same IP address assignment method, which
means either that they all use static IP addresses or that they all use DHCP.
Network settings and IP addresses. When you use identical network adapters for a network, also use
identical communication settings on those adapters such as speed, duplex mode, flow control, and
media type. Also compare the settings between the network adapter and the switch to which it
connects, and ensure that no settings are in conflict. Otherwise, network congestion or frame loss
might occur that could adversely affect how the cluster nodes communicate among themselves, with
clients, or with storage systems.
Unique subnets. If you have private networks that are not routed to the rest of the network
infrastructure, ensure that each of these private networks uses a unique subnet. This is necessary even
if you give each network adapter a unique IP address. For example, if you have a cluster node in a
central office that uses one physical network, and another node in a branch office that uses a separate
physical network, do not specify 10.0.0.0/24 for both networks, even if you give each adapter a
unique IP address. This avoids routing loops and other network communication problems if, for
example, the segments are accidentally configured into the same collision domain because of
incorrect virtual local area network (VLAN) assignments.
Note: If you connect cluster nodes with a single network, the network passes the
redundancy requirement in the Validate a Configuration Wizard. However, the report from the
wizard includes a warning that the network should not have single points of failure.
DNS. The servers in the cluster typically use DNS for name resolution. DNS dynamic update protocol
is a supported configuration.
Domain role. All servers in the cluster must be in the same Active Directory domain. As a best
practice, all clustered servers should have the same domain role, either member server or domain
controller. The recommended role is member server because AD DS inherently includes its own
failover protection mechanism.
Account for administering the cluster. When you first create a cluster or add servers to it, you must be
logged on to the domain with an account that has administrator rights and permissions on all servers
in that cluster. The account does not have to be a Domain Admins account, but can be a Domain
Users account that is in the Administrators group on each clustered server. In addition, if the account
is not a Domain Admins account, the account, or the group in which the account is a member, must
be given the Create Computer Objects permission in the domain.
In Windows Server 2012, there is no cluster service account. Instead, the Cluster service runs automatically
in a special context that provides the specific permissions and credentials that are necessary for the
service, which is similar to the local system context, but with reduced credentials. When a failover cluster is
created and a corresponding computer object is created in AD DS, that object is configured to prevent
accidental deletion. In addition, the cluster Network Name resource includes additional health check logic,
which periodically checks the health and properties of the computer object that represents the Network
Name resource.
Demonstration Steps
1.
2.
Start the Validate Configuration Wizard. Add LON-SVR3 and LON-SVR4 as cluster nodes.
3.
4.
5.
6.
Perform an in-place migration on a two-node cluster. This is a more complex scenario, where you
want to migrate a cluster to a newer version of the Windows Server operating system. In this scenario,
you do not have additional computers for new cluster nodes. For example, you may want to upgrade
a cluster that is currently running on Windows Server 2008 R2 to a cluster that is running Windows
Server 2012. To achieve this, you must first remove resources from one node, and then evict that
node from a cluster. Next, you perform a clean installation of Windows Server 2012 on that server.
After Windows Server 2012 is installed, you create a one-node failover cluster, migrate the clustered
services and applications from the old cluster node to that failover cluster, and then remove the old
node from the cluster. The last step is to install Windows Server 2012 on another cluster node, together
with the failover clustering feature, and add the server to the failover cluster. Then you run validation tests to
confirm that the overall configuration works correctly.
The Cluster Migration Wizard is a tool that enables you to migrate clustered roles. Because the Cluster
Migration Wizard does not copy data from one storage location to another, you must copy or move data
or folders, including shared folder settings, during a migration. The Cluster Migration Wizard can migrate
physical disk resource settings to and from disks that use mount points. Note that the Cluster Migration
Wizard does not migrate mount-point information. Mount-point information is information about hard
disk drives that do not use drive letters and are mounted in a folder on another hard disk drive.
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
To manage resources, the Cluster service communicates to a resource DLL through a resource monitor.
When the Cluster service makes a request of a resource, the resource monitor calls the appropriate entry-point function in the resource DLL to check and control the resource state.
Dependent Resources
A dependent resource is one that requires another resource to operate. For example, a network name
must be associated with an IP address. Because of this requirement, a network name resource depends on
an IP address resource. Dependent resources are taken offline before the resources upon which they
depend are taken offline. Similarly, dependent resources are brought online after the resources on which
they depend are brought online. A resource can specify one or more resources on which it is dependent.
Resource dependencies also determine bindings. For example, clients will be bound to the particular IP
address on which a network name resource depends.
When you create resource dependencies, consider the fact that, although some dependencies are strictly
required, others are recommended but not required. For example, a file share that is not a Distributed File
System (DFS) root has no required dependencies. However, if the disk resource that holds the file share
fails, the file share will be inaccessible to users. Therefore, it is logical to make the file share dependent on
the disk resource.
A resource can also specify a list of nodes on which it can run. Possible nodes and dependencies are
important considerations when administrators organize resources into groups.
2.
3.
Install the role on all cluster nodes. Use Server Manager to install the server role that you want to use
in the cluster.
4.
5.
Configure the application. Configure options for the application that is used in the cluster.
6.
Test failover. Use the Failover Cluster Management snap-in to test failover by intentionally moving
the service from one node to another.
After the cluster is created, you can monitor its status and manage available options by using the Failover
Cluster Management console.
Demonstration Steps
1.
Open the Failover Cluster Manager and verify that three cluster disks are available.
2.
Start the Configure Role Wizard and Configure the File Server as a clustered role.
3.
For the Client Access Point, use the name AdatumFS and the IP address 172.16.0.130.
4.
Select Cluster Disk 2 as the storage for the File Server role.
Managing cluster networks. You can add or remove cluster networks, and you can configure networks
that will be dedicated just for inter-cluster communication.
Configuring cluster quorum settings. By configuring quorum settings, you determine how quorum is
achieved and who can vote in a cluster.
Migrating services and applications to a cluster. You can migrate existing services to the cluster
and make them highly available.
Configuring new services and applications to work in a cluster. You can implement new services to
the cluster.
Removing a cluster. You might want to remove a cluster when you are retiring it or moving a service to
a different cluster. Before you destroy the cluster, you must first remove or relocate the services that it
hosts.
You can perform most of these administrative tasks by using the Failover Cluster Management console.
Pause a node. You can pause a node to prevent resources from being failed over or moved to the
node. You typically pause a node when a node is undergoing maintenance or troubleshooting.
Evict a node. You can evict a node, which is an irreversible process for a cluster node. After you evict
the node, it must be re-added to the cluster. You evict nodes when a node is damaged beyond repair
or is no longer needed in the cluster. If you evict a damaged node, you can repair or rebuild it, and
then add it back to the cluster by using the Add Node Wizard.
Each of these management actions is available in the Failover Cluster Management Actions pane.
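You can also perform these node management actions with the FailoverClusters Windows PowerShell module. The node name LON-SVR3 below is only an example.

# Pause (drain) a node before maintenance, moving its roles to other nodes
Suspend-ClusterNode -Name "LON-SVR3" -Drain

# Resume the node after maintenance and fail roles back immediately
Resume-ClusterNode -Name "LON-SVR3" -Failback Immediate

# Evict a node that is no longer needed in the cluster
Remove-ClusterNode -Name "LON-SVR3"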
The following examples show failover and failback settings:
Example 1: On the General tab, Preferred owner: Node1; on the Failover tab, Failback setting: Allow
failback (immediately).
Example 2: On the Failover tab, Maximum failures in the specified period: 2; Period (hours): 6.
Lesson 4
Lesson Objectives
After completing this lesson, you will be able to:
Describe CAU.
Event Viewer
When problems arise in the cluster, use the Event Viewer to view events with a Critical, Error, or Warning
severity level. In addition, informational-level events are logged to the Failover Clustering Operations log,
which can be found in the Event Viewer in the Applications and Services Logs\Microsoft\Windows folder.
Informational-level events are usually common cluster operations, such as cluster nodes leaving and
joining the cluster and resources going offline or coming online.
In earlier versions of Windows Server, event logs were replicated to each node in the cluster. This
simplified cluster troubleshooting because you could review all event logs on a single cluster node.
Windows Server 2012 does not replicate the event logs between nodes. However, the
Failover Cluster Management snap-in has a Cluster Events option that enables you to view and filter events
across all cluster nodes. This feature is helpful when you need to correlate events across cluster nodes.
The Failover Cluster Management snap-in also provides a Recent Cluster Events option that queries all the
Error and Warning events from all the cluster nodes in the last 24 hours.
You can access additional logs, such as the Debug and Analytic logs, in the Event Viewer. To display these
logs, modify the view on the top menu by selecting the Show Analytic and Debug Logs options.
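If you prefer the command line, you can query the same logs with Windows PowerShell. The sketch below assumes you run it on a cluster node and that the C:\Reports folder exists.

# List the 20 most recent entries from the Failover Clustering Operations log
Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 20

# Generate the detailed cluster debug log (cluster.log) from every node into one folder
Get-ClusterLog -Destination "C:\Reports" -UseLocalTime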
Trend application performance on each node. To determine how an application is performing, you
can view trend-specific information on system resources that are being used on each node.
Trend application failures and stability on each node. You can pinpoint when application failures
occur, and match the application failures with other events on the node.
Modify trace log settings. You can start, stop, and adjust trace logs, including their size and location.
Windows Server Backup is the built-in backup and recovery feature in Windows Server 2012. It uses the
Volume Shadow Copy Service (VSS) to perform a backup. If you decide to use Windows Server Backup,
you must first add the Windows Server Backup feature, which you can do by using the Server Manager
Add Roles and Features Wizard.
To complete a successful backup, consider the following:
For a backup to succeed in a failover cluster, the cluster must be running and must have quorum. In
other words, enough nodes must be running and communicating (perhaps with a witness disk or
witness file share, depending on the quorum configuration) that the cluster has achieved quorum.
You must back up all clustered applications. For example, if you cluster a Microsoft SQL Server
database, you must have a backup plan for the databases and configuration outside the cluster
configuration.
If the application data must be backed up, the disks on which you store the data must be made
available to the backup software. You can achieve this by running the backup software from the
cluster node that owns the disk resource, or by running a backup against the clustered resource over
the network. When you have CSVs enabled in your cluster, you need to run the backup from a
node that is a member of the CSV cluster.
The cluster service keeps track of which cluster configuration is the most recent, and it replicates that
configuration to all cluster nodes. If the cluster has a witness disk, the cluster service also replicates
the configuration to the witness disk.
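As noted earlier, the Windows Server Backup feature must be installed before you can back up a cluster node. A minimal Windows PowerShell alternative to the Add Roles and Features Wizard might look like the following.

# Install the Windows Server Backup feature on the local node
Install-WindowsFeature -Name Windows-Server-Backup

# Confirm that the feature is installed
Get-WindowsFeature -Name Windows-Server-Backup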
Restoring a Cluster
The two types of restore are:
Non-authoritative restore. Use a non-authoritative restore when a single node in the cluster is
damaged or rebuilt, and the rest of the cluster is operating correctly. Perform a non-authoritative
restore by restoring the system recovery (system state) information to the damaged node. When
you restart that node, it joins the cluster and receives the latest cluster configuration automatically.
Authoritative restore. Use an authoritative restore when the cluster configuration must be rolled back to
a previous time. For example, you would use an authoritative restore if an administrator accidentally
removed clustered resources or modified other cluster settings. Perform the authoritative restore by
stopping the Cluster service on each node, and then performing a system recovery (system state restore)
on a single node by using the command-line Windows Server Backup interface. After the restored node
restarts the cluster service, the remaining cluster nodes can also start the cluster service.
Review cluster events and trace logs to identify application or hardware issues that might cause an
unstable cluster.
Review hardware events and logs to help pinpoint specific hardware components that might cause an
unstable cluster.
Review SAN components, switches, adapters, and storage controllers to help identify any potential
problems.
Identify the perceived problem by collecting and documenting the symptoms of the problem.
Identify the scope of the problem so that you can understand what is being affected by the problem,
and the impact of that effect on both the application and the clients.
Collect information so that you can accurately understand and pinpoint the possible problem. After
you identify a list of possible problems, you can prioritize them by probability, or by the impact of a
repair. If you cannot pinpoint the problem, you should attempt to re-create the problem.
Create a schedule for repairing the problem. For example, if the problem only affects a small subset of
users, you can delay the repair to an off-peak time so that you can schedule downtime.
Complete and test each repair one at a time so that you can identify the fix.
To troubleshoot SAN issues, start by checking physical connections and by checking each of the hardware
component logs. Additionally, run the Validate a Configuration Wizard to verify that the current cluster
configuration is still supportable.
Note: When you run the Validate a Configuration Wizard, ensure that the storage tests that
you select can be run on an online failover cluster. Several of the storage tests cause loss of
service on the clustered disk when the tests are run.
Use the Dependency Viewer in the Failover Cluster Management snap-in to identify dependent resources.
Check the Event Viewer and trace logs for errors from the dependent resources.
Determine whether the problem only happens on a specific node or nodes by trying to re-create the
problem on different nodes.
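A related Windows PowerShell option is to generate an HTML report of the dependency tree for a clustered role. The role name AdatumFS below matches the demonstration earlier in this module and is used here only as an example.

# Create a dependency report for the AdatumFS clustered role; the cmdlet returns the report file path
Get-ClusterResourceDependencyReport -Group "AdatumFS"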
What Is CAU?
Applying operating system updates to nodes in a
cluster requires special attention. If you want to
provide zero downtime for a clustered role, you
must manually update cluster nodes one after the
other, and you must manually move resources
from the node being updated to another node.
This procedure can be very time consuming. In
Windows Server 2012, Microsoft has implemented
a new feature for automatic updating of cluster
nodes called CAU.
CAU is a feature that enables administrators to
update cluster nodes automatically with little or
no loss in availability during the update process. During an update procedure, CAU transparently takes
each cluster node offline, installs the updates and any dependent updates, and then performs a restart if
necessary. CAU then brings the node back online, and moves to update the next node in a cluster.
For many clustered roles, this automatic update process triggers a planned failover, and it can cause a
transient service interruption for connected clients. However, for continuously available workloads in
Windows Server 2012, such as Hyper-V with the live migration feature or file servers with SMB Transparent
Failover, CAU can orchestrate cluster updates with no effect on service availability.
Remote-updating mode. In this mode, a computer that is running Windows Server 2012 or
Windows 8 is configured as an orchestrator. To configure a computer as a CAU
orchestrator, you must install failover clustering administrative tools on it. The orchestrator computer
is not a member of the cluster that is updated during the procedure. From the orchestrator computer,
the administrator triggers on-demand updating by using a default or custom Updating Run profile.
Remote-updating mode is useful for monitoring real-time progress during the Updating Run, and for
clusters that are running on Server Core installations of Windows Server 2012.
Self-updating mode. In this mode, the CAU clustered role is configured as a workload on the failover
cluster that is to be updated, and an associated update schedule is defined. In this scenario, CAU does
not have a dedicated orchestrator computer. The cluster updates itself at scheduled times by using a
default or custom Updating Run profile. During the Updating Run, the CAU orchestrator process
starts on the node that currently owns the CAU clustered role, and the process sequentially performs
updates on each cluster node. In the self-updating mode, CAU can update the failover cluster by
using a fully automated, end-to-end updating process. An administrator can also trigger updates on
demand in this mode, or use the remote-updating approach if desired. In the self-updating mode, an
administrator can access summary information about an Updating Run in progress by connecting to
the cluster and running the Get-CauRun Windows PowerShell cmdlet.
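The following Windows PowerShell sketch illustrates both modes. The cluster name Cluster1 and the scheduling values are examples only.

# Remote-updating mode: trigger an on-demand Updating Run from an orchestrator computer
Invoke-CauRun -ClusterName "Cluster1" -CauPluginName "Microsoft.WindowsUpdatePlugin" -MaxRetriesPerNode 3 -RequireAllNodesOnline -Force

# Self-updating mode: add the CAU clustered role with a monthly schedule
Add-CauClusterRole -ClusterName "Cluster1" -DaysOfWeek Tuesday -WeeksOfMonth 3 -EnableFirewallRules -Force

# Check the progress of an Updating Run that is already in progress
Get-CauRun -ClusterName "Cluster1"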
To use CAU, you must install the failover clustering feature in Windows Server 2012 and create a failover
cluster. The components that support CAU functionality are automatically installed on each cluster node.
You must also install the CAU tools on the orchestrator node or any cluster node; these tools are included
in the failover clustering tools and are also part of the Remote Server Administration Tools (RSAT). The
CAU tools consist of the CAU user interface (UI) and the CAU Windows PowerShell cmdlets. The failover
clustering tools and CAU tools are installed by default on each cluster node when you install the failover
clustering feature. You can also install these tools on a local or a remote computer that is running
Windows Server 2012 or Windows 8 and that has network connectivity to the failover cluster.
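A minimal Windows PowerShell sketch of installing these components follows; run the first command on each cluster node and the second on a remote management or orchestrator computer.

# On each cluster node: install failover clustering plus the management tools (includes the CAU components)
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# On a remote management or orchestrator computer: install only the failover clustering RSAT tools
Install-WindowsFeature -Name RSAT-Clustering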
Demonstration Steps
1.
Make sure that the cluster is configured and running on LON-SVR3 and LON-SVR4.
2.
3.
4.
Preview updates that are available for nodes LON-SVR3 and LON-SVR4.
5.
6.
7.
Lesson 5
Lesson Objectives
After completing this lesson, you will be able to:
When a site fails, a multisite cluster can automatically fail over the clustered service or application to
another site.
Because the cluster configuration is automatically replicated to each cluster node in a multisite
cluster, there is less administrative overhead than with a cold standby server, which requires you to
manually replicate changes.
The automated processes in a multisite cluster reduce the possibility of human error, a risk which is
present in manual processes.
Because of the increased cost and complexity of a multisite failover cluster, it might not be an ideal solution for
every application or business. When you consider whether to deploy a multisite cluster, you should evaluate the
importance of the applications to the business, the type of applications, and any alternative solutions. Some
applications can provide multisite redundancy easily with log shipping or other processes, and can still achieve
sufficient availability with only a modest increase in cost and complexity.
The complexity of a multisite cluster requires better architectural and hardware planning than is required
for a single-site cluster. It also requires you to develop business processes to routinely test the clusters
functionality.
All nodes must have the same operating system and service pack version.
You must provide at least one low-latency and reliable network connection between sites. This is
important for cluster heartbeats. By default, regardless of subnet configuration, heartbeat frequency,
also known as subnet delay, is once every second, or 1,000 milliseconds. The range for heartbeat
frequency is once every 250 to 2,000 milliseconds on a common subnet and 250 to 4,000 milliseconds
across subnets. By default, when a node misses a series of five heartbeats, another node will initiate
failover. The range for this value, also known as subnet threshold, is three through 10 heartbeats. (A
Windows PowerShell example of viewing and adjusting these settings appears after this list.)
You must provide a storage replication mechanism. Failover clustering does not provide a storage
replication mechanism, so you must provide another solution. This also requires that you have
multiple storage solutions, including one for each cluster you create.
You must ensure that all other necessary services for the cluster, such as AD DS and DNS, are also
available on a second site.
You must ensure that client connections can be redirected to a new cluster node when failover happens.
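The heartbeat delay and threshold values described above are cluster common properties that you can inspect and adjust with Windows PowerShell. The values shown are examples only; test any change before using it in production.

# View the current heartbeat settings for the cluster
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold

# Example: allow a longer delay between heartbeats across subnets (value in milliseconds)
(Get-Cluster).CrossSubnetDelay = 2000

# Example: tolerate more missed heartbeats across subnets before failover is initiated
(Get-Cluster).CrossSubnetThreshold = 10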
When you use asynchronous replication, the node receives a write complete response from the
storage after the data is written successfully on the primary storage. The data is written to the
secondary storage on a different schedule, depending on the hardware or software vendor's
implementation. Asynchronous replication can be storage based, host based, or even application
based. However, not all forms of asynchronous replication are sufficient for a multisite cluster. For
example, DFS Replication provides file-level asynchronous replication, but it does not support replication
for a multisite failover cluster. This is because DFS Replication is designed to replicate smaller documents
that are not held open continuously, so it is not optimized for high-speed, open-file replication.
You preserve the semantics of the SCSI commands across the sites, even if a complete communication
failure occurs between sites.
You replicate the witness disk in real-time synchronous mode across all sites.
Because multisite clusters can have wide area network (WAN) failures, in addition to node and local
network failures, Node Majority and Node and File Share Majority are better solutions for multisite
clusters. If there is a WAN failure that causes the primary and secondary sites to lose communication, a
majority must still be available to continue operations.
If there is an odd number of nodes, use the Node Majority quorum. If there is an even number of nodes,
which is typical in a geographically-dispersed cluster, you can use the Node and File Share Majority
quorum.
If you use Node Majority and the sites lose communication, you need a mechanism to determine which
nodes stay up, and which nodes drop out of the cluster membership. The second site requires another
vote to obtain quorum after a failure. To obtain another vote for quorum, you must join another node to
the cluster, or create a file share witness.
The Node and File Share Majority mode can help maintain quorum without adding another node to the
cluster. To provide for a single-site failure and enable automatic failover, the file share witness might have
to exist at a third site. In a multisite cluster, a single server can host the file share witness. However, you
must create a separate file share for each cluster.
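For example, you might configure this quorum mode with Windows PowerShell. The file share path \\LON-DC1\FSW is hypothetical and would typically point to a server at a third site.

# Configure the cluster to use Node and File Share Majority with a file share witness
Set-ClusterQuorum -NodeAndFileShareMajority "\\LON-DC1\FSW"

# Or revert to Node Majority if the cluster has an odd number of nodes
Set-ClusterQuorum -NodeMajority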
Note: If you use Windows Server 2012 R2 as the host operating system in a multisite cluster
environment, you should use Dynamic Quorum, as discussed earlier in this module.
You must use three locations to enable automatic failover of a highly available service or application.
Locate one node in the primary location that runs the highly available service or application. Locate a
second node in a disaster-recovery site, and locate the third node for the file share witness in another
location. Direct network connectivity among all three locations must exist. In this manner, if one site
becomes unavailable, the two remaining sites can still communicate and have enough nodes for a
quorum.
Note: In Windows Server 2008 R2, administrators could configure the quorum to include
nodes. However, if the quorum configuration included nodes, all nodes were treated equally
according to their votes. In Windows Server 2012, cluster quorum settings can be adjusted so
that when the cluster determines whether it has quorum, some nodes have a vote and some do
not. This adjustment can be useful when solutions are implemented across multiple sites.
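A brief sketch of adjusting node votes with Windows PowerShell follows; the node name LON-SVR4 is an example.

# Remove the vote from a node, for example a node in the disaster-recovery site
(Get-ClusterNode -Name "LON-SVR4").NodeWeight = 0

# Verify the current vote assignment for all nodes
Get-ClusterNode | Format-Table Name, NodeWeight, State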
When you use Windows Server 2012 R2 as the operating system for cluster nodes in a multisite cluster,
you can also leverage Force Quorum Resiliency technology. This technology, as discussed earlier in this
module, can be particularly useful when sites that have cluster nodes lose connectivity.
2.
3.
Ensure that you have deployed a reliable storage replication mechanism between sites. Also, choose
the type of replication you should use.
4.
Ensure that key infrastructure services such as AD DS, DNS, and DHCP are present on each site.
5.
Run the Validate a Configuration Wizard on all of the cluster nodes to determine if your configuration
is acceptable for creating a cluster.
6.
7.
8.
9.
Services for failover. You should clearly define the critical services, such as AD DS, DNS, and DHCP, that
must be available and that should fail over to another site. It is not enough to have a cluster designed to
fail over to another site. Failover clustering requires that you have AD DS services up and running on a
second site. You cannot make all necessary services highly available by using failover clustering, so you
have to consider other technologies to achieve the desired result. For example, for AD DS and DNS, you
can deploy additional domain controllers that also run DNS service on a second site.
Quorum maintenance. It is very important to design the quorum model in a way that each site has
enough votes for maintaining the cluster functionality. If that is not possible, you can use options
such as forcing a quorum, or Dynamic Quorum, in Windows Server 2012 R2, to establish a quorum in
case of disaster.
Storage connection. A multisite cluster usually requires that you have storage available at each site.
Because of this, you should carefully design storage replication and establish the procedure for how to
fail over to secondary storage in case of a disaster.
Published services and name resolution. If you have services published to your internal or external
users, such as email, or a web page, failover to another site in some cases requires a name or IP
address change. If that is the case, you should have a procedure for changing DNS records in an
internal or public DNS. To reduce the downtime, we recommend that you reduce the TTL on the DNS
server that contains the critical DNS records.
Client connectivity. A failover plan must also include a design for client connectivity in case of
disaster. This includes both internal and external clients. If your primary site fails, you should have a
way for your clients to connect to a second site.
Failback procedure. Once the primary site comes back online, you should plan and implement a
failback process. Failback is as important as a failover, because if you perform failback incorrectly, you
might cause data loss and services downtime. Therefore, it is very important to define the steps to
perform failback to the primary site clearly, without data loss or corruption. The failback process is
rarely automated, and it usually happens in a very controlled environment.
Establishing a multisite cluster involves much more than just defining the cluster, cluster role, and quorum
options. When you design a multisite cluster, you should consider the broader picture of failover as a
part of a disaster-recovery strategy. Windows Server 2012 R2 has several technologies that can help with
failover and failback, but you should also consider including other technologies in your infrastructure. In
addition, each failover and failback procedure greatly depends on a service or the services implemented
in a cluster.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 60 minutes
Virtual machines
20412D-LON-DC1
20412D-LON-SVR1
20412D-LON-SVR3
20412D-LON-SVR4
User name
Adatum\Administrator
Password
Pa$$w0rd
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Microsoft Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
5.
Password: Pa$$w0rd
Repeat steps two through four for 20412D-LON-SVR1, 20412D-LON-SVR3, and 20412D-LON-SVR4.
On LON-SVR3, start the iSCSI Initiator, and configure Discover Portal with the IP address 172.16.0.21.
2.
3.
4.
5.
6.
7.
On LON-SVR4, open Disk Management, and bring online and initialize the three new disks.
On LON-SVR3, install the Failover Clustering feature by using the Server Manager.
2.
On LON-SVR4, install the Failover Clustering feature by using the Server Manager.
2.
3.
4.
5.
On LON-SVR3, in the Failover Cluster Manager, start the Create Cluster Wizard.
2.
3.
4.
2.
Locate the disk that is assigned to Available Storage. If possible, use Cluster Disk 2.
3.
Results: After this exercise, you will have installed and configured the failover clustering feature.
Add the File Server role service to LON-SVR4 by using the Server Manager console. LON-SVR3 already
has File Server Role service installed.
2.
3.
In the Storage node, click Disks, and verify that three cluster disks are online.
4.
Add File Server as a cluster role. Select the File Server for general use option.
5.
6.
7.
Select Cluster Disk 3 as the storage disk for the AdatumFS role.
8.
2.
Start the New Share Wizard, and add a new shared folder to the AdatumFS cluster role.
3.
4.
Accept the default values on the Select the server and the path for this share page.
5.
6.
Accept the default values on the Configure share settings and Specify permissions to control
access pages.
7.
On LON-SVR4, in the Failover Cluster Manager, open the Properties for the AdatumFS cluster role.
2.
3.
4.
Results: After this exercise, you will have configured a highly available file server.
On LON-DC1, open File Explorer, and attempt to access the \\AdatumFS\ location. Make sure that
you can access the Docs folder.
2.
3.
On LON-SVR3, in the Failover Cluster Manager, move AdatumFS to the second node.
4.
On LON-DC1, in File Explorer, verify that you can still access the \\AdatumFS\ location.
Task 2: Validate the failover and quorum configuration for the file server role
1.
2.
Stop the Cluster service on the node that is the current owner of the AdatumFS role.
3.
Verify that AdatumFS has moved to another node, and that the \\AdatumFS\ location is still
available, by trying to access it from LON-DC1.
4.
Start the Cluster service on the node in which you stopped it in step two.
5.
Browse to the Disks node, and take the disk marked as Disk Witness in Quorum offline.
6.
7.
8.
9.
10. Change the witness disk to Cluster Disk 3. Do not make any other changes.
Results: After this exercise, you will have tested the failover scenarios.
Configure CAU
Update the failover cluster and configure self-updating
Prepare for the next module
On LON-DC1, install the failover clustering feature by using the Server Manager console.
2.
On LON-SVR3, open the Windows Firewall with Advanced Security window, and verify that the
following two inbound rules are enabled:
3.
4.
5.
Connect to Cluster1.
6.
Preview the updates available for nodes in Cluster1. (Note: An Internet connection is required for this
step.)
On LON-DC1, start the update process for Cluster1, by selecting Apply updates to this cluster.
2.
3.
Wait until the update process is completed. The process is finished when both nodes have a
Succeeded value in the Last Run status column.
4.
5.
6.
Choose to add the CAU clustered role with the self-updating mode enabled in this cluster.
7.
8.
2.
On the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Repeat steps two and three for 20412D-LON-SVR1, 20412D-LON-SVR3, and 20412D-LON-SVR4.
Tools
The tools for implementing failover clustering include the following.
Tool - Use for - Where to find it
Failover Cluster Manager - Cluster management - Administrative Tools
Windows PowerShell - Cmdlet-based management - Administrative Tools
Server Manager - Taskbar or Administrative Tools
Disk Management - Computer Management
Best Practice:
Try to avoid using a quorum model that depends only on the disk for a Hyper-V failover cluster or a
Scale-Out File Server cluster.
Ensure that other nodes can handle the load if one node fails.
Troubleshooting Tip
Module 11
Implementing Failover Clustering with Hyper-V
Contents:
Module Overview
Module Overview
One benefit of implementing server virtualization is the opportunity to provide high availability, both for
applications or services that have built-in high availability functionality, and for applications or services
that do not provide high availability in any other way. With the Windows Server 2012 Hyper-V
technology and failover clustering, you can configure high availability by using several different options.
In this module, you will learn about how to implement failover clustering in a Hyper-V scenario to achieve
high availability for a virtual environment.
Objectives
After completing this module, you will be able to:
Lesson 1
Lesson Objectives
After completing this lesson, you will be able to:
Describe the options for making applications and services highly available.
Describe the new features of failover clustering for Hyper-V in Windows Server 2012.
Describe the new features of failover clustering for Hyper-V in Windows Server 2012 R2.
Describe the best practices for implementing high availability in a virtual environment.
Host Clustering
Host clustering enables you to configure a failover cluster by using the Hyper-V host servers. When you
configure host clustering for Hyper-V, you configure the virtual machine as a highly available resource.
You implement failover protection at the host-server level. This means that the guest operating system
and applications that are running within the virtual machine do not have to be cluster-aware. However,
the virtual machine is still highly available.
Some examples of non-cluster-aware applications are a print server (in Windows Server 2012 and newer),
or a proprietary network-based application such as an accounting application. If the host node that
controls the virtual machine becomes unavailable unexpectedly, the secondary host node takes control
and restarts or resumes the virtual machine as quickly as possible. You can also move the virtual machine
from one node in the cluster to another in a controlled manner. For example, you could move the virtual
machine from one node to another, while patching the host management operating system.
The applications or services that are running in the virtual machine do not have to be compatible with
failover clustering, and they do not have to be aware that the virtual machine is clustered. Because the
failover is at the virtual machine level, there are no dependencies on software that is installed in the
virtual machine.
Guest Clustering
Guest failover clustering is configured similarly to physical-server failover clustering, except that the
cluster nodes are virtual machines. In this scenario, you create two or more virtual machines, and enable
failover clustering within the guest operating system. The application or service is then enabled for high
availability between the virtual machines. Because failover clustering is implemented within each virtual
machine node's guest operating system, you can locate the virtual machines on a single host. This
configuration can be quick and cost-effective in a test or staging environment.
For production environments, however, you can better protect the application or service if you deploy the
virtual machines on separate failover clustering-enabled Hyper-V host computers. When you implement
failover clustering at both the host and virtual machine levels, the resource can restart regardless of
whether the node that fails is a virtual machine or a host. This configuration is also known as a Guest
Cluster Across Hosts. It is considered an optimal high-availability configuration for virtual machines
running mission-critical applications in a production environment.
You should consider several factors when you implement guest clustering:
The application or service must be failover cluster-aware. This includes any of the Windows
Server 2012 services that are cluster-aware, and any applications, such as clustered Microsoft SQL
Server and Microsoft Exchange Server.
Hyper-V virtual machines can use Fibre Channel-based connections to shared storage. However, this
is available only in Hyper-V in Windows Server 2012 and newer. Alternatively, you can implement
Internet Small Computer System Interface (iSCSI) connections from the virtual machines to the shared
storage. In Windows Server 2012 R2, you also can use the shared virtual hard disk feature to provide
shared storage for virtual machines.
You should deploy multiple network adapters on the host computers and the virtual machines. Ideally,
you should dedicate a network connection to the iSCSI connection if you use this method to connect to
storage. You should also dedicate a private network between the hosts, and a network connection that
the client computers use.
NLB
NLB works with virtual machines in the same manner that it works with physical hosts. It distributes IP
traffic to multiple instances of a TCP/IP service, such as a web server that is running on a host within the
NLB cluster. NLB transparently distributes client requests among the hosts, and it enables the clients to
access the cluster by using a virtual host name or a virtual IP address. From the client computers'
perspective, the cluster appears to be a single server that answers these client requests. As enterprise
traffic increases, you can add another server to the cluster.
Therefore, NLB is an appropriate solution for resources that do not have to accommodate exclusive read
or write requests. Examples of NLB-appropriate applications include web-based front ends to database
applications or Exchange Server Client Access Servers.
When you configure an NLB cluster, you must install and configure the application on all virtual machines
that will participate in the NLB cluster. After you configure the application, you install the NLB feature in
Windows Server 2012 within each virtual machine's guest operating system (not on the Hyper-V hosts),
and then configure an NLB cluster for the application. Older versions of Windows Server also support NLB,
so the guest operating system is not limited to only Windows Server 2012; however, you should use
the same operating system versions within one NLB cluster. Similar to a Guest Cluster Across Hosts, the
NLB resource typically benefits from overall increased I/O performance when the virtual machine nodes
are located on different Hyper-V hosts.
Note: As with older versions of Windows Server, in Windows Server 2012 NLB and failover
clustering should not be implemented within the same operating system because the two
technologies conflict with each other.
Question: Do you use any high availability solution for virtual machines in your environment?
The node where the virtual machine is running owns the clustered instance of the virtual machine, controls
access to the shared bus or iSCSI connection to the cluster storage, and has ownership of any disks, or Logical
Unit Numbers (LUNs), assigned to the virtual machine. All nodes in the cluster use a private network to send
regular signals, known as heartbeat signals, to one another. The heartbeat indicates that a node is functioning
and communicating on the network. The default heartbeat configuration specifies that each node send a
heartbeat over TCP/UDP port 3343 each second, or 1,000 milliseconds.
2.
Failover initiates when the node that hosts the virtual machine does not send regular heartbeat
signals over the network to the other nodes. By default, this is five consecutively missed heartbeats, or
5,000 milliseconds elapsed. Failover might occur because of a node failure or network failure. When
heartbeat signals stop arriving from the failed node, one of the other nodes in the cluster begins
taking over the resources that the virtual machines use.
You define the one or more nodes that could take over by configuring the Preferred and Possible
Owners properties. The Preferred Owner specifies the hierarchy of ownership if there is more than
one possible failover node for a resource. By default, all nodes are members of Possible Owners.
Therefore, removing a node as a Possible Owner absolutely excludes it from taking over the resource
in a failure situation. Suppose that a failover cluster is implemented by using four nodes, but
only two nodes are configured as Preferred Owners. In a failover event, the resource might still be
taken over by the third node if neither of the Preferred Owners is online. Although the fourth node is
not configured as a Preferred Owner, as long as it remains a member of Possible Owners, the failover
cluster uses it to restore access to the resource, if necessary. Resources are brought online in order of
dependency. For example, if the virtual machine references an iSCSI LUN, access to the appropriate
host bus adapters (HBAs), network(s), and LUNs is restored in that order. Failover is complete when
all the resources are online on the new node. For clients that are interacting with the resource, there is
a short service interruption, which most users might not notice.
3.
You also can configure the cluster service to fail back to the offline node after it becomes active
again. When the cluster service fails back, it uses the same procedures that it performs during failover.
This means that the cluster service takes all the resources associated with that instance offline, moves
the instance, and then brings all of the resources in the instance back online.
Administrators can configure virtual machine priority attributes to control the order in which virtual
machines are started. Priority also is used to ensure that lower-priority virtual machines automatically
release resources if they are needed by higher-priority virtual machines.
The Cluster Shared Volume (CSV) feature, which simplifies virtual machine configuration and
operation, is improved to allow more security and better performance. It now supports scalable
file-based server-application storage, improved backup and restore, and a single, consistent file namespace.
In addition, you now can protect CSVs by using the BitLocker Drive Encryption feature and
configuring them to make storage visible to only a subset of nodes.
Virtual machine application monitoring is enhanced. You now can monitor services running on
clustered virtual machines. In clusters running Windows Server 2012, administrators can configure
monitoring of services on clustered virtual machines that are also running Windows Server 2012. This
functionality extends the high-level monitoring of virtual machines that is implemented in Windows
Server 2008 R2 failover clusters.
It is now possible to store virtual machines on server message block (SMB) file shares in a file server
cluster. This is a new way to provide high availability for virtual machines. Instead of creating a cluster
of Hyper-V nodes, you can now run Hyper-V nodes outside a cluster while storing the virtual
machine files on a highly available file share. To enable this feature, you should deploy a file-server
cluster in a Scale-Out File Server mode. Scale-Out File Servers also can use CSVs for storage.
Virtual machine drain on shutdown. This feature provides an additional safety mechanism in scenarios
when one cluster node shuts down. In Windows Server 2012 R2, if such a scenario occurs, virtual
machines are automatically live migrated (instead of placed in a saved state, such as in a Quick
Migration) to another cluster node.
Network health detection. This feature helps in scenarios when virtual machines lose a connection to
the physical or external network. If this happens to highly available virtual machines, failover
clustering will migrate affected virtual machines to another cluster node automatically.
When you plan for high availability for virtual machines in Windows Server 2012 R2, you should be aware
of these features so that you can build a stable environment with fewer downtime periods. These features
are discussed in more detail in the next lesson.
Question: Do you think that these new features will be useful for your environment? If yes,
which one(s)?
Plan for failover scenarios. When you design the hardware requirements for the Hyper-V hosts, ensure
that you include the hardware capacity required when hosts fail. For example, if you deploy a six-node cluster, you must determine the number of host failures that you want to accommodate. If you
decide that the cluster must sustain the failure of two nodes, then the four remaining nodes must
have the capacity to run all the virtual machines in the cluster.
Plan the network design for failover clustering. To optimize the failover cluster performance and
failover, you should dedicate a fast network connection for internode communication. As with older
versions of Windows Server, this internode network should be logically and physically separate from
the network segment(s) used for clients to communicate with the cluster. You also can use this
network connection to transfer virtual machine memory during a Live Migration. If you use iSCSI or
SMB shares for any virtual machines, ensure that you also dedicate a network connection to the iSCSI
network connection, and that you have SMB shares highly available.
Plan the shared storage for failover clustering. When you implement failover clustering for Hyper-V, the
shared storage must be highly available. If the shared storage fails, all the virtual machines will fail, even if
the physical nodes are functional. To ensure the storage availability, plan for redundant connections to the
shared storage, and redundant array of independent disks (RAID) on the storage device. If you decide to
use a shared virtual hard disk, which is specific to Windows Server 2012 R2 Hyper-V, ensure that the shared
disk is located on a highly available resource, such as a Scale-Out File Server.
Use the recommended failover cluster quorum mode. If you deploy a cluster with an even number of
nodes, and shared storage is available to the cluster, the Failover Cluster Manager automatically
selects Node and Disk Majority quorum mode. If you deploy a cluster with an odd number of nodes,
the Failover Cluster Manager selects the Node Majority quorum mode. You should not modify the
default configuration unless you understand the implications of doing this. Consider using Dynamic
Quorum if you are using Windows Server 2012 R2.
Deploy standardized Hyper-V hosts. To simplify the deployment and management of the failover
cluster and Hyper-V nodes, develop a standard server hardware and software platform for all nodes.
Develop standard management practices. When you deploy multiple virtual machines in a failover
cluster, you increase the risk that a single mistake might shut down a large part of the server
deployment. For example, if an administrator accidentally configures the failover cluster incorrectly,
and the cluster fails, all virtual machines in the cluster will be offline. To avoid this, develop and
thoroughly test standardized instructions for all administrative tasks.
Lesson 2
Lesson Objectives
After completing this lesson, you will be able to:
Configure CSV.
Explain how to use Scale-Out File Servers over SMB 3.0 for virtual machine storage.
2.
3.
Install the Hyper-V and failover clustering features on the host servers. You can use Server Manager in
Microsoft Management Console (MMC) or Windows PowerShell to do this.
4.
Validate the cluster configuration. The Validate This Cluster Wizard checks all of the prerequisite
components that are required to create a cluster, and provides warnings or errors if any components
do not meet the cluster requirements. Before you continue, resolve any issues that the Validate This
Cluster Wizard identifies.
Create the cluster. When the components pass the Validate This Cluster Wizard, you can create a
cluster. When you configure the cluster, assign a cluster name and an IP address. A computer account
for the cluster name is created in the Active Directory domain, and the IP address is registered in DNS. In
Windows Server 2012 R2, you can also create an Active Directory-detached cluster. (A Windows
PowerShell sketch of this procedure appears after the last step.)
Note: You can enable Cluster Shared Volumes for the cluster only after you create the
cluster and add eligible storage to it. If you want to use CSV, you should configure CSV before
you move to the next step.
6.
Create a virtual machine on one of the cluster nodes. When you create the virtual machine, ensure
that all files associated with the virtual machine, including both the Virtual Hard Disk (VHD or VHDX)
and virtual machine configuration files, are stored on the shared storage. You can create and manage
virtual machines in either Hyper-V Manager or Failover Cluster Manager. We recommend that you
use the Failover Cluster Manager console for creating virtual machines. When you create a virtual
machine using Failover Cluster Manager, the virtual machine is automatically made highly available.
7.
Make the virtual machine highly available only for existing virtual machines. If you created a virtual
machine before you implemented failover clustering, you should manually make it highly available.
To make the virtual machine highly available, in the Failover Cluster Manager, select to make a new
service or application highly available. Failover Cluster Manager then presents a list of services and
applications that can be made highly available. When you select the option to make virtual machines
highly available, you can select the virtual machine that you created on shared storage.
Note: When you make a virtual machine highly available, you see a list of all virtual
machines hosted on all cluster nodes, including virtual machines that are not stored on the
shared storage. If you make a virtual machine that is not located on shared storage highly
available, you will receive a warning, but Hyper-V adds the virtual machine to the services and
applications list. However, when you try to migrate the virtual machine to a different host, the
migration fails.
8.
Test virtual machine failover. After you make the virtual machine highly available, you can migrate the
computer to another node in the cluster. If you are running Windows Server 2008 R2, Windows
Server 2012, or Windows Server 2012 R2, you can select to perform a Quick Migration or a Live
Migration.
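The following Windows PowerShell sketch summarizes the procedure above. The host names, cluster name, IP address, and virtual machine name are examples only.

# Install Hyper-V and failover clustering on each host (restart when prompted)
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# Validate the configuration, and then create the cluster
Test-Cluster -Node "LON-HOST1", "LON-HOST2"
New-Cluster -Name "VMCluster" -Node "LON-HOST1", "LON-HOST2" -StaticAddress 172.16.0.140

# Make an existing virtual machine that is stored on shared storage highly available
Add-ClusterVirtualMachineRole -VMName "TestClusterVM"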
Configuring CSV
CSVs in a Windows Server 2012 failover cluster
allow multiple nodes in the cluster to
simultaneously have read-write access to the same
disk that is provisioned as an NTFS volume and
added as storage to the cluster. When you use
CSVs, clustered roles can fail over from one node
to another more quickly without requiring a
change in drive ownership, or dismounting and
remounting a volume. CSVs also help in
simplifying the management of a potentially large
number of LUNs in a failover cluster.
CSVs provide a general-purpose, clustered file
system in Windows Server 2012, which is layered above NTFS. They are not restricted to specific clustered
workloads, but currently, they are only supported for Hyper-V clusters and Scale-Out File Server clusters.
Although CSVs provide additional flexibility and reduce downtime, it is not required to configure and use
CSV when you implement high availability for virtual machines in Hyper-V. You can also cluster Hyper-V
by using the traditional approach. However, we recommend that you use CSV because of the following
advantages:
Reduced LUNs for the disks. You can use CSV to reduce the number of LUNs that your virtual
machines require. When you configure a CSV, you can store multiple virtual machines on a single
LUN, and multiple host computers can access the same LUN concurrently.
Better use of disk space. Instead of placing each .vhd or .vhdx file on a separate disk with empty space
so that the .vhd/.vhdx file can expand, you can oversubscribe disk space by storing multiple .vhd/.vhdx
files on the same LUN.
Single location for virtual machine files. You can track the paths of .vhd or .vhdx files and other files
that virtual machines use. Instead of using drive letters or Globally Unique Identifiers (GUIDs) to
identify disks, you can specify the path names. When you implement CSV, all added storage appears
in the \ClusterStorage folder. The \ClusterStorage folder is created on the cluster node's system
drive, and you cannot move it. This means that all Hyper-V hosts that are members of the cluster
must use the same drive letter as their system drive, or virtual machine failovers will fail.
No specific hardware requirements. There are no specific hardware requirements to implement CSV.
You can implement CSV on any supported disk configuration, and on either Fibre Channel or iSCSI
SANs.
Increased resiliency. CSV increases resiliency because the cluster can respond correctly even if
connectivity between one node and the SAN is interrupted, or part of a network is down. The cluster
reroutes the CSV traffic through an intact part of the SAN or network.
In Windows Server 2012 R2, you can use CSVs on disks provisioned with Resilient File System (ReFS).
Implementing CSV
After you create the failover cluster, you can enable CSV for the cluster, and then add storage to the CSV.
Before you can add storage to the CSV, the LUN must be available as shared storage to the cluster. When
you create a failover cluster, all of the shared disks configured in Server Manager are added to the cluster,
and you can add them to a CSV. You also have the option to add storage to the cluster, after the cluster is
created. If you add more LUNs to the shared storage, you must first create volumes on the LUN, add the
storage to the cluster, and then add the storage to the CSV.
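For example, the following commands, run on a cluster node, add an available cluster disk to CSV. The disk name Cluster Disk 2 is hypothetical.

# Add an available cluster disk to Cluster Shared Volumes; it then appears under C:\ClusterStorage
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Verify the new CSV
Get-ClusterSharedVolume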
As a best practice, you should configure CSV before you make any virtual machines highly available.
However, you can convert from regular disk access to CSV after deployment. The following considerations
apply:
The LUN's drive letter or mount point is removed when you convert from regular disk access to CSV.
This means that you must re-create all virtual machines that are stored on the shared storage. If you
must keep the same virtual machine settings, consider exporting the virtual machines, switching to
CSV, and then importing the virtual machines in Hyper-V.
You cannot add shared storage to CSV if it is in use. If you have a running virtual machine that is
using a cluster disk, you must shut down the virtual machine, and then add the disk to CSV.
Configured shared storage resources must be availablefor example, CSVs on block storage, such as
clustered storage spaces, or a Scale-Out File Server cluster that is running Windows Server 2012 R2,
with SMB 3.0 for file-based storage.
Sufficient memory, disk, and processor capacity within the failover cluster is necessary to support
multiple virtual machines that are implemented as guest failover clusters.
For the guest operating systems, you can use both Windows Server 2012 R2 and Windows Server 2012.
However, if you use Windows Server 2012 in virtual machines that use shared virtual hard disks, you must
install Hyper-V integration services from Windows Server 2012 R2. Both Generation 1 and Generation 2
virtual machines are supported.
When you decide to implement shared virtual hard disks as guest cluster storage, you must first decide
where to store the shared virtual hard disk. You can deploy the shared virtual hard disk at the following
locations:
CSV location. In this scenario, all virtual machine files, including the shared .vhdx files, are stored on a
CSV that is configured as shared storage for a Hyper-V failover cluster.
Scale-Out File Server SMB 3.0 share. This scenario uses an SMB file-based storage as the location for
the shared .vhdx files. You must deploy a Scale-Out File Server, and create an SMB file share as the
storage location. You also need a separate Hyper-V failover cluster.
Note: You cannot deploy a shared virtual hard disk on an ordinary file share or on a host
machine's local hard disk. You must deploy the shared virtual hard disk on a highly available
location.
You can configure a shared virtual hard disk by using the Hyper-V Manager graphical user interface (GUI), or
by using Windows PowerShell. After you prepare your environment, and create a virtual hard disk in .vhdx
format in an appropriate location, open virtual machine settings in Hyper-V Manager. Then add a new
SCSI disk drive. When you add a new drive, you must specify the location of your shared virtual hard disk.
Before you accept changes in the virtual machine settings interface, you must mark this drive as shared in
the advanced properties of the SCSI disk. Then, repeat this procedure on all virtual machines that will use
this shared virtual disk drive.
To share a virtual hard disk by using Windows PowerShell, you should use the Add-VMHardDiskDrive
cmdlet with the ShareVirtualDisk parameter. This command must run under administrator privileges on
the Hyper-V host, for each virtual machine that will use the shared .vhdx file.
For example, the following command adds a shared virtual hard disk (Data1.vhdx) stored on volume 1 of
CSV to a virtual machine that is named VM1.
Add-VMHardDiskDrive -VMName VM1 -Path C:\ClusterStorage\Volume1\Data1.vhdx -ShareVirtualDisk
In addition, the following command adds a shared virtual hard disk (Witness.vhdx) that is stored on an
SMB file share (\\Server1\Share1) to a virtual machine that is named VM2.
Add-VMHardDiskDrive -VMName VM2 -Path \\Server1\Share1\Witness.vhdx -ShareVirtualDisk
The following comparison summarizes guest cluster storage options (Shared VHDX, Virtual Fibre Channel, and iSCSI):
Supported storage: Shared VHDX - Storage Spaces, serial attached SCSI, Fibre Channel, iSCSI, SMB; Virtual Fibre Channel - Fibre Channel SAN; iSCSI - iSCSI SAN
Storage is presented to the guest as: Shared VHDX - virtual serial attached SCSI disk; Virtual Fibre Channel - virtual Fibre Channel LUN; iSCSI - iSCSI LUN
Storage is configured at the Hyper-V host level: Shared VHDX - Yes; Virtual Fibre Channel - Yes; iSCSI - No
Requires switch to be reconfigured when virtual machine is migrated: Shared VHDX - No; Virtual Fibre Channel - Yes; iSCSI - No
Exposes storage architecture: Shared VHDX - No; Virtual Fibre Channel - Yes; iSCSI - Yes
Question: What is the main benefit of using shared hard virtual disks?
Active-active clustering. While other failover clusters work in an active-passive mode, a Scale-Out
File Server cluster works so that all nodes can accept and serve SMB client requests. In Windows
Server 2012 R2, SMB 3.0 is upgraded to SMB 3.02. This version improves scalability and manageability
for Scale-Out File Servers. SMB client connections, in Windows Server 2012 R2, are tracked per file
share (instead of per server), and clients are then redirected to the cluster node with the best access
to the volume the file share uses.
Increased bandwidth. In previous versions of Windows Server, file server cluster bandwidth was
constrained to the bandwidth of a single cluster node. Because of the active-active mode in the
Scale-Out File Server cluster, you can have much higher bandwidth, which you can further increase by
adding cluster nodes.
CSV cache. Because the Scale-Out File Server clusters use CSVs, the clusters also benefit from CSV
Cache use. CSV Cache is a feature that you can use to allocate random access memory (RAM) as a
write-through cache. The CSV Cache provides caching of read-only unbuffered I/O. This can improve
performance for applications such as Hyper-V, which conducts unbuffered I/O when it accesses a
VHD file. With Windows Server 2012, you can allocate up to 20 percent (and with Windows
Server 2012 R2, up to 80 percent) of the total physical RAM for the CSV write-through cache, which is
consumed from non-paged pool memory.
Simpler management. When you use a Scale-Out File Server cluster, you can add CSV storage and
shares at any time after you create the cluster.
One or more computers running Windows Server 2012 with the Hyper-V role must be installed.
One or more computers running Windows Server 2012 with the File and Storage Services role must
be installed.
A common Active Directory infrastructure must be used. The servers that run Active Directory
Domain Services (AD DS) do not need to run Windows Server 2012.
Before you implement virtual machines on an SMB file share, you should set up a file server cluster. To do
this, you should have at least two cluster nodes with file services and failover clustering installed. In the
Failover Clustering console, you should create a Scale-Out File Server cluster. After you configure the
cluster, you deploy the new SMB file share for applications. This share is used to store virtual machine files.
When the share is created, you can use the Hyper-V Manager console to deploy new virtual machines on
the SMB file share, or you can migrate existing virtual machines to the SMB file share by using the Storage
Migration method.
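A minimal Windows PowerShell sketch of these steps follows. The role name, folder path, share name, and account names are hypothetical, and the share permissions would need to match your Hyper-V hosts and administrators.

# Create the Scale-Out File Server role in an existing file server cluster
Add-ClusterScaleOutFileServerRole -Name "AdatumSOFS"

# Create a folder on a CSV and share it for application data such as virtual machine files
New-Item -Path "C:\ClusterStorage\Volume1\VMs" -ItemType Directory
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" -FullAccess "Adatum\Hyper-V-Admins", "Adatum\LON-HOST1$", "Adatum\LON-HOST2$"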
Question: Have you considered storing virtual machines on the SMB share? Why or why not?
Identify the components that must be highly available to make the applications highly available. In
some cases, the application might run on a single server. If so, making that server highly available is
all that you have to do. Other applications might require that several servers, and other components
such as storage or the network, be highly available.
Identify the application characteristics. You must understand several things about the application:
Is virtualizing the server that is running the application an option? Some applications are not
supported or recommended in a virtual environment.
What options are available for making the application highly available? You can make some
applications highly available through options other than host clustering. If other options are
available, evaluate the benefits and disadvantages of each option.
What are the performance requirements for each application? Collect performance information
on the servers that run the applications currently to gain an understanding of the hardware
requirements that must be met when you virtualize the server.
What capacity is required to make the Hyper-V virtual machines highly available? As soon as you
identify all of the applications that must be highly available by using host clustering, you can start to
design the actual Hyper-V deployment. By identifying the performance requirements, and the
network and storage requirements for applications, you can define the hardware that you must
implement in a highly available environment.
Live Migration is one of the most important aspects of Hyper-V clustering. When you implement Live
Migration, consider the following:
Verify basic requirements. The basic requirements for Live Migration are that all hosts be part of a
Windows Server 2012 or Windows Server 2012 R2 failover cluster, and that host processors be from
the same manufacturer. All hosts in the cluster must have access to shared storage.
Configure a dedicated network adapter for the private virtual network. When you implement failover
clustering, you should configure a private network for the cluster heartbeat traffic. You use this network to
transfer the virtual machine memory during a failover. To optimize this configuration, configure a network
adapter for this network that has a capacity of one gigabit per second (Gbps) or faster.
Note: You must enable the Client for Microsoft Networks and File and Printer Sharing for
Microsoft Networks components for the network adapter that you want to use for the private
network.
Use similar host hardware. All failover cluster nodes must use the same hardware to connect to shared
storage, and all cluster nodes must have processors from the same manufacturer. Although you can
enable failover for virtual machines on a host with different processor versions by configuring
processor compatibility settings, the failover experience and performance are more consistent if all servers have very similar hardware.
Verify network configuration. All nodes in the failover cluster must connect through the same IP
subnet, so that the virtual machine can keep the same IP address after Live Migration. In addition, the
IP addresses assigned to the private network on all nodes must be on the same logical subnet. This
means that multisite clusters must use a stretched virtual local area network (VLAN), which is a subnet
that spans a wide area network (WAN) connection.
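The preceding host-level considerations can also be applied with Windows PowerShell on each Hyper-V host. The following is a minimal sketch; the subnet and the limit of two simultaneous migrations are example values, and in a failover cluster the cluster's own network prioritization settings still apply.

# Allow incoming and outgoing live migrations on this host.
Enable-VMMigration

# Restrict live migration traffic to the dedicated (private) network.
Add-VMMigrationNetwork "172.16.1.0/24"

# Limit the number of simultaneous live migrations.
Set-VMHost -MaximumVirtualMachineMigrations 2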
Demonstration Steps
1.
Ensure that LON-HOST1 is the owner of the ClusterVMs disk. If it is not, move the ClusterVMs disk to
LON-HOST1.
2.
3.
In the Failover Cluster Manager, click the Roles node, and then start the New Virtual Machine Wizard.
4.
5.
6.
7.
8.
Connect the machine to the existing virtual hard disk drive 20412D-LON-CORE.vhd, located at
C:\ClusterStorage\Volume1.
9.
10. Enable the option for migration to computers with a different processor version.
11. From the Roles node, start the virtual machine.
12. On LON-HOST2, in Failover Cluster Manager, start Live Migration failover of TestClusterVM from
LON-HOST1 to LON-HOST2.
13. Connect to TestClusterVM, and ensure that you can operate it.
You also can configure a virtual network adapter to connect to a protected network. If network
connectivity to such a network is lost because of reasons such as a physical switch failure or a
disconnected network cable, the failover cluster will move the virtual machine to a different node to
restore network connectivity.
Windows Server 2012 R2 also enhances virtual machine availability in scenarios when one Hyper-V node
shuts down before being placed in maintenance mode, and before draining any clustered roles from it. In
Windows Server 2012, shutting down the cluster node before draining it results in virtual machines being
put into a saved state, and then moved to other nodes and resumed. This caused an interruption to the
availability of the virtual machines. In Windows Server 2012 R2, if such a scenario occurs, the cluster live
migrates all running virtual machines automatically before the Hyper-V node shuts down.
Note: We still recommend that you drain clustered roles and place the node in
maintenance mode before you perform a shutdown operation.
Configuration of this functionality, called virtual machine drain on shutdown, is not accessible through the
Failover Cluster Manager. To configure it, you must use Windows PowerShell, and configure the
DrainOnShutdown cluster property. It is enabled by default, and the value of this property is set to 1. If
you want to check the value, you should run Windows PowerShell as Administrator, and execute the
following command:
(Get-Cluster).DrainOnShutdown
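To change the behavior, set the same property from an elevated Windows PowerShell prompt, as in this short sketch:

# 1 = live migrate running virtual machines before shutdown (default); 0 = disable the behavior.
(Get-Cluster).DrainOnShutdown = 1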
Question: What are some alternative technologies that you can use for virtual machine and
network monitoring?
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
Describe benefits of using Offloaded Data Transfer (ODX)-capable storage with Hyper-V.
Virtual Machine and Storage Migration. With this method, you move a powered-on virtual machine
from one location to another or from one host to another by using the Move Virtual Machine Wizard
in Hyper-V Manager. Virtual Machine and Storage Migration does not require failover clustering or
any other high availability technology.
Quick Migration. This method also is available in Windows Server 2008. It requires that failover
clustering be installed and configured. During the migration process, when you use Quick Migration
to move virtual machines between cluster nodes, a virtual machine is placed in a saved state. This
causes some downtime until the memory content is copied to another node, and the machine is
restored from the saved state.
Live Migration. This improvement over Quick Migration has been available since Windows Server 2008 R2. It enables you to migrate a virtual machine from one host to another without experiencing downtime.
In Windows Server 2012 and Windows Server 2012 R2, you also can perform Shared Nothing Live
Migration, which does not require failover clustering. In addition, hosts do not have to share any
storage for this type of migration to be performed.
Hyper-V Replica. This new Windows Server 2012 feature enables you to replicate a virtual machine to
another host or into the cloud, instead of moving the virtual machine, and to synchronize all virtual
machine changes from the primary host to the host that holds the replica.
Exporting and importing virtual machines. This is an established method of moving virtual machines
without using a cluster. You export a virtual machine on one host, and then move exported files
physically to another host by performing an import operation. This is a very time-consuming
operation. It requires that you turn off a virtual machine during export and import. In Windows
Server 2012, this migration method is improved. You can import a virtual machine to a Hyper-V host
without exporting it before import. Windows Server 2012 Hyper-V is now capable of configuring all
necessary settings during the import operation.
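For example, the export and import of a virtual machine can be scripted with the Hyper-V cmdlets. The following is a hedged sketch; the virtual machine name, paths, and the configuration file name are placeholders.

# On the source host, export the virtual machine to a folder.
Export-VM -Name LON-TEST -Path D:\Export

# On the destination host, import a copy of the exported files and generate a new virtual machine ID.
# The <GUID>.xml file name is a placeholder for the exported configuration file.
Import-VM -Path "D:\Export\LON-TEST\Virtual Machines\<GUID>.xml" `
    -Copy -GenerateNewId -VhdDestinationPath D:\VMs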
Question: When would you export and import a virtual machine instead of migrating it?
The time that is required to move a virtual machine depends on the source and destination location, the speed of hard disks or storage, and the size of the virtual hard disks. The moving process is accelerated if the source and destination locations are on storage that supports ODX.
When you move a virtual machine's VHDs/VHDXs and configuration files to another location, a wizard
presents three available options:
Move all the virtual machine's data to a single location: You specify a single destination location for all virtual machine items, such as the disk files, configuration, checkpoints, and Smart Paging file.
Move the virtual machine's data to different locations: You specify individual locations for each virtual machine item.
Move only the virtual machine's virtual hard disk: You move only the virtual hard disk file.
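The options above map to the Move-VMStorage and Move-VM cmdlets. The following sketch shows both a storage-only move and a Shared Nothing Live Migration; the virtual machine name, host name, and paths are examples.

# Move all of the virtual machine's files to a single new location on the same host.
Move-VMStorage -VMName LON-TEST -DestinationStoragePath D:\VMs\LON-TEST

# Shared Nothing Live Migration: move the running virtual machine and its storage to another host.
Move-VM -Name LON-TEST -DestinationHost LON-HOST2 -IncludeStorage `
    -DestinationStoragePath D:\VMs\LON-TEST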
1. A user copies or moves a file by using File Explorer, command-line tools, or as part of a virtual machine migration.
2. Windows Server 2012 automatically translates this transfer request into an ODX, if supported by the storage device, and it receives a token that represents the data.
3. The token is copied between the source server and destination server.
4. The token is delivered to the storage array.
5. The storage array performs the copy or move internally and provides status information to the user.
Storage arrays that support ODX must be connected through an iSCSI, Fibre Channel, Fibre Channel over Ethernet, or serial-attached SCSI (SAS) interface. On the volumes where you want to use an ODX file transfer, you
cannot use Data Deduplication or BitLocker Drive Encryption, or any other file encryption. In addition,
Storage Spaces and dynamic volumes are not supported.
Note: ODX file transfer is not supported by all applications that can perform copy or move
operations. Currently, you can use ODX with Hyper-V management tools, File Explorer,
command-line copy utilities, and Windows PowerShell cmdlets.
The VMM Administrator console, if you use VMM to manage your physical hosts.
Note: Live Migration enables you to reduce the perceived outage of a virtual machine
significantly during a planned failover. During a planned failover, you start the failover manually.
Live Migration does not apply during an unplanned failover, such as when the node that hosts
the virtual machine fails.
1. Migration setup. When the administrator starts the failover of the virtual machine, the source node creates a TCP connection with the target physical host. This connection is used to transfer the virtual machine configuration data to the target physical host. Live Migration creates a temporary virtual machine on the target physical host, and allocates memory to the destination virtual machine. The migration preparation also checks to determine whether a virtual machine can be migrated.
2. Guest-memory transfer. The guest memory is transferred iteratively to the target host while the virtual machine is still running on the source host. Hyper-V on the source physical host monitors the pages in the working set. As the system modifies memory pages, it tracks and marks them as being modified. During this phase, the migrating virtual machine continues to run. Hyper-V iterates the memory copy process several times, and each iteration copies a smaller number of modified pages to the destination physical computer. A final memory-copy process copies the remaining modified memory pages to the destination physical host. Copying stops as soon as the number of dirty pages drops below a threshold or after 10 iterations are complete.
3. State transfer. To actually migrate the virtual machine to the target host, Hyper-V stops the source partition, transfers the state of the virtual machine, including the remaining dirty memory pages, to the target host, and then restores the virtual machine on the target host. The virtual machine must be paused during the final state transfer.
4. Cleanup. The cleanup stage finishes the migration by tearing down the virtual machine on the source host, terminating the worker threads, and signaling the completion of the migration.
Note: In Windows Server 2012 R2, you can perform a virtual machine Live Migration by
using SMB 3.0 as a transport. This means that you can take advantage of key SMB features, such
as traffic compression, SMB Direct (RDMA), and SMB Multichannel, which provide high-speed
migration with low CPU utilization.
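For example, the live migration transport can be selected per host with Set-VMHost (a short sketch; in Windows Server 2012 R2 the valid values are TCPIP, Compression, and SMB):

# Use SMB 3.0 (with SMB Direct and SMB Multichannel where available) for live migrations.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB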
To resolve this problem, and to enable administrators to have an up-to-date copy of a single virtual
machine, Windows Server 2012 implements Hyper-V Replica technology. This technology enables virtual
machines running at a primary site (a location or host) to be replicated efficiently to a secondary site (a
location or host) across a WAN or a LAN link. Hyper-V Replica enables you to have two instances of a
single virtual machine residing on different hosts, one as the primary, or live, copy and the other as a
replica, or offline copy. These copies are synchronized on a regular interval, which is configurable in the
Windows Server 2012 R2 version. You also can fail over at any time.
In the event of a failure at a primary site, caused by natural disaster, a power outage, or a server failure, an
administrator can use Hyper-V Manager to execute a failover of production workloads to replica servers at
a secondary location within minutes, thus incurring minimal downtime. Hyper-V Replica enables an
administrator to restore virtualized workloads to a specific point in time depending on the Recovery
History configuration settings for the virtual machine.
Hyper-V Replica technology consists of several components:
Replication engine. This component is the core of Hyper-V Replica. It manages the replication
configuration details and handles initial replication, delta replication, failover, and test-failover
operations. It also tracks virtual machine and storage mobility events, and takes appropriate actions
as required. For example, the replication engine pauses replication events until migration events
complete, and then resumes where these events left off.
Change tracking. This component tracks changes that are happening on the primary copy of the
virtual machine. It is designed to make the scenario work regardless of where the virtual machine
VHD file or files reside.
Network module. This module provides a secure and efficient way to transfer virtual machine replicas
between the primary host and the replica host. Data compression is enabled by default. This communication is secured by HTTPS and certificate-based authentication.
Hyper-V Replica Broker role. This is a new role implemented in Windows Server 2012. It is configured
in failover clustering, and it enables you to have Hyper-V Replica functionality even when the virtual
machine being replicated is highly available and can move from one cluster node to another. The
Hyper-V Replica Broker redirects all virtual machine-specific events to the appropriate node in the
Replica cluster. The Broker queries the cluster database to determine which node should handle
which events. This ensures that all events are redirected to the correct node in the cluster, in the event
that a Quick Migration, Live Migration, or Storage Migration process was executed.
When you plan hardware configurations on the sites, you do not have to use the same server or storage
hardware. It is important, however, to ensure that sufficient hardware resources are available to run the
Hyper-V Replica virtual machine.
The server hardware supports the Hyper-V role on Windows Server 2012.
Sufficient storage exists on both the primary and replica servers to host the files that are used by
replicated virtual machines.
Network connectivity exists between the locations that host the primary and replica servers. This can
be a WAN or LAN link.
Firewall rules are correctly configured to enable replication between the primary and replica sites (by default, traffic goes over TCP port 80 or 443).
You do not have to install Hyper-V Replica separately because it is not a Windows Server role or feature.
Hyper-V Replica is implemented as part of the Hyper-V role. It can be used on Hyper-V servers that are
stand-alone, or on servers that are part of a failover cluster, in which case you should configure Hyper-V
Replica Broker. Unlike failover clustering, Hyper-V Replica does not depend on AD DS. You can use it with
Hyper-V servers that are stand-alone, or that are members of different Active Directory domains, except
when servers that participate in Hyper-V replica are part of the same failover cluster.
To enable Hyper-V Replica technology, first configure Hyper-V server settings. In the Replication
Configuration group of options, enable the Hyper-V server as a replica server, select the authentication
and port options, and configure the authorization options. You can choose to enable replication from any server that successfully authenticates, which is convenient in scenarios where all servers are part of the same domain, or you can type the fully qualified domain names (FQDNs) of the servers from which you accept replication. In addition, you must configure the location for replica files. These settings should be configured
on each server that will serve as a replica server.
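A minimal Windows PowerShell sketch of this server-level configuration follows. It assumes Kerberos (HTTP) authentication and uses E:\VMReplica as an example storage location; the firewall rule name is the built-in HTTP listener rule.

# Enable this Hyper-V host as a replica server and define where replica files are stored.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "E:\VMReplica"

# Allow inbound replication traffic through Windows Firewall.
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"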
After you configure options on the server level, enable replication on a virtual machine. During this
configuration, you must specify both the replica server name and the connection options. You can select
which virtual hard disk drives you replicate, in cases when a virtual machine has more than one VHD, and
you can also configure the Recovery History and the initial replication method. Specific to Windows
Server 2012 R2, you can also configure the replication interval: for example, 30 seconds, five minutes (the default in Windows Server 2012), or 15 minutes. After you have configured these options, you can
start replication. After you make the initial replica, in Windows Server 2012 R2, you can also make an
extended replica to a third physical or cloud-based instance running Hyper-V. The extended replica site is
built from the first replica site, not from the primary virtual machine. It is possible to configure the
different replication intervals for replica and extended replica instances of a virtual machine.
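The per-virtual machine configuration described above can also be scripted. The sketch below assumes Kerberos authentication over port 80 and a five-minute (300-second) interval; the ReplicationFrequencySec parameter applies to Windows Server 2012 R2 only, and the virtual machine and server names are examples.

# Enable replication of a virtual machine to a replica server.
Enable-VMReplication -VMName 20412D-LON-CORE `
    -ReplicaServerName LON-HOST2.adatum.com -ReplicaServerPort 80 `
    -AuthenticationType Kerberos -ReplicationFrequencySec 300

# Start sending the initial copy over the network.
Start-VMInitialReplication -VMName 20412D-LON-CORE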
You can perform three types of failovers with Hyper-V Replica: test failover, planned failover, and failover.
These three options offer different benefits, and are useful in different scenarios.
Test failover
After you configure a Hyper-V Replica and after the virtual machines start replicating, you can perform a
test failover. A test failover is a nondisruptive task that enables you to test a virtual machine on the replica
server while the primary virtual machine is running, and without interrupting the replication. You can
initiate a test failover on the replicated virtual machine, which will create a new checkpoint. You can use
this checkpoint to select a recovery point from which the new test virtual machine is created. The test virtual machine has the same name as the replica, but with "- Test" appended to the end. The test virtual machine is not started, and its network adapter is disconnected by default to avoid potential conflicts with the running primary virtual machine.
After you finish testing, you can stop a test failover. This option is available only if test failover is running.
When you stop the test failover, it stops the test virtual machine and deletes it from the replica Hyper-V
host. If you run a test failover on a failover cluster, you will have to remove the Test-Failover role from the
failover cluster manually.
Planned failover
You can initiate a planned failover to move the primary virtual machine to a replica site, for example,
before site maintenance or before an expected disaster. Because this is a planned event, there is no data
loss, but the virtual machine will be unavailable for some time during its startup. A planned failover
confirms that the primary virtual machine is turned off before the failover executes. During the failover,
the primary virtual machine sends all the data that has not yet been replicated to the replica server. The
planned failover process then fails over the virtual machine to the replica server, and starts the virtual
machine at the replica server. After the planned failover, the virtual machine will be running on the replica
server, and its changes are not replicated. If you want to establish replication again, you should reverse
the replication. You will have to configure settings similar to when you enabled replication, and the
existing virtual machine will be used as an initial copy.
Failover
In the event of a disruption at the primary site, you can perform a failover. You initiate a failover
at the replicated virtual machine only if the primary virtual machine is either unavailable or turned off. A
failover is an unplanned event that can result in data loss, because changes at the primary virtual machine
might not have replicated before the disaster happened. (The Replication frequency setting controls how
often changes are replicated.) Similar to a planned failover, during a failover, the virtual machine is
running on a replica server. If you need to start failover from a different recovery point and discard all the
changes, you can cancel the failover. After you recover the primary site, you can reverse the replication
direction to reestablish replication. This also will remove the option to cancel failover.
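The three failover types map to the Start-VMFailover cmdlet, as in the following hedged sketch (run each command on the host indicated in the comments; the virtual machine name is an example):

# Test failover: run on the replica host; creates a "- Test" copy of the virtual machine.
Start-VMFailover -VMName 20412D-LON-CORE -AsTest
Stop-VMFailover  -VMName 20412D-LON-CORE          # removes the test virtual machine

# Planned failover: shut down the primary virtual machine first, then run these commands.
Start-VMFailover -VMName 20412D-LON-CORE -Prepare   # on the primary host
Start-VMFailover -VMName 20412D-LON-CORE             # on the replica host
Set-VMReplication -VMName 20412D-LON-CORE -Reverse   # re-establish replication in the other direction
Start-VM -Name 20412D-LON-CORE                        # start the virtual machine on the replica host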
Extended replication. In Windows Server 2012, it is possible to have only one replica of an existing
virtual machine. Windows Server 2012 R2 provides the ability to replicate a single virtual machine
to a third server. This means that you can replicate a running virtual machine to two independent
servers. However, the replication does not happen from one server to two other servers. The server
that is running an active copy of the virtual machine replicates to the replica server, and the replica
server then replicates to the extended replica server. You create a second replica by running the
Extend Replication Wizard on a passive copy. In this wizard, you can set the same options that you
chose when you configured the first replica.
Administrators now can benefit from these features, which help to optimize the usage of Hyper-V Replica
and increase the availability of critical virtual machines.
Note: Hyper-V Replica now allows administrators to use a Windows Azure instance as a
replica repository. This enables administrators to leverage Windows Azure, rather than having to
build out a Disaster Recovery site, or manage backup tapes off-site. To use Windows Azure for
this purpose, you must have a valid subscription. Note that this service might not be available in
all world regions.
Question: Do you see extended replication as a benefit for your environment?
Demonstration Steps
1.
2.
3.
4.
Create and use the folder E:\VMReplica as a default location to store replica files.
5.
Enable the firewall rule named Hyper-V Replica HTTP Listener (TCP-In) on both hosts.
6.
7.
Wait for initial replication to finish, and ensure that the 20412D-LON-CORE virtual machine has
appeared in the Hyper-V Manager console on LON-HOST2.
8.
9.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 75 minutes
Virtual machines
Host machines
20412D-LON-DC1-B
20412D-LON-SVR1-B
20412D-LON-HOST1
20412D-LON-HOST2
User name
Adatum\Administrator
Password
Pa$$w0rd
You should perform this lab with a partner. To perform this lab, you must start the host computers to
Windows Server 2012 R2. Ensure that you and your partner have started different hosts: one should start LON-HOST1, and the other should start LON-HOST2. In addition, ensure that LON-DC1-B is
imported on LON-HOST1, and LON-SVR1-B is imported on LON-HOST2, and that these virtual machines
are started.
On LON-HOST1, open the Hyper-V Manager, and import the 20412D-LON-CORE virtual machine.
2.
3.
Note: The drive letter might be different based upon the number of drives on the physical
host machine.
2.
Create and use the folder E:\VMReplica as a default location to store replica files.
Enable the firewall rule named Hyper-V Replica HTTP Listener (TCP-In) on both hosts.
2.
Wait for initial replication to finish, and ensure that the 20412D-LON-CORE virtual machine has
appeared in Hyper-V Manager console on LON-HOST2.
2.
3.
4.
Results: After completing this exercise, you will have configured Hyper-V Replica.
2.
Use the 172.16.0.21 address to discover and connect to the iSCSI target.
3.
4.
Use the 172.16.0.21 address to discover and connect to the iSCSI target.
5.
On LON-HOST2, open Disk Management, and initialize and bring online all iSCSI drives:
6.
On LON-HOST1, open Disk Management, and bring all three iSCSI drives online.
2.
Name it VMCluster.
In Failover Cluster Manager on LON-HOST1, add all three iSCSI disks to the cluster.
2.
Verify that all three iSCSI disks appear available for cluster storage.
3.
Add the disk with the volume name ClusterVMs to Cluster Shared Volumes.
4.
From the VMCluster.adatum.com node, select More Actions, and then configure the Cluster
Quorum Settings to use default configuration.
Results: After completing this exercise, you will have configured the failover clustering infrastructure for Hyper-V.
Ensure that LON-HOST1 is the owner of the ClusterVMs disk. If it is not, move the ClusterVMs disk to
LON-HOST1.
2.
In the Failover Cluster Manager, click the Roles node, and then start the New Virtual Machine Wizard.
Connect the machine to the existing virtual hard disk drive 20412D-LON-CORE.vhd, located at
C:\ClusterStorage\Volume1.
2.
3.
Enable the option for migration to computers with a different processor version.
4.
On LON-HOST2, in the Failover Cluster Manager, start Live Migration failover of TestClusterVM
from LON-HOST1 to LON-HOST2.
2.
2.
3.
Name: LON-GUEST1
Memory: 1024 MB
4.
5.
Perform a Move operation on LON-GUEST1. Move the virtual machine from its current location to
C:\GUEST1.
6.
7.
Restart LON-HOST1.
2.
When you are prompted with the boot menu, select Windows Server 2012, and then press Enter.
3.
4.
Results: After completing this exercise, you will have configured the virtual machine as highly available.
Question: How can you extend Hyper-V Replica in Windows Server 2012 R2?
Question: What is the difference between Live Migration and Storage Migration?
Tools
The tools for implementing failover clustering with Hyper-V include:
Tools
Where to Find
Use
Administrative Tools
Hyper-V Manager
Administrative Tools
VMM Console
Start menu
Best Practice:
Develop standard configurations before you implement highly available virtual machines. The host
computers should be configured as close to identically as possible. To ensure that you have a
consistent Hyper-V platform, you should configure standard network names, and use consistent
naming standards for CSVs.
Use new features in Hyper-V Replica to extend your replication to more than one server.
Consider using Scale-Out File Server clusters as storage for highly available virtual machines.
Implement VMM. VMM provides a management layer on top of Hyper-V and Failover Cluster
Management that can block you from making mistakes when you manage highly available virtual
machines. For example, it blocks you from creating virtual machines on storage that is inaccessible
from all nodes in the cluster.
Troubleshooting Tip
All the nodes in a host cluster must have the same networks configured. If they do not, the virtual machines cannot connect to a network when they fail over to another node.
Module 12
Implementing Business Continuity and Disaster Recovery
Contents:
Module Overview
12-1
Module Overview
Organizations are always vulnerable to losing some of their data, for reasons such as unintentional deletion, file system corruption, hardware failures, malicious users, and natural disasters. Because of this, organizations must have well-defined and tested recovery strategies that will help them to bring their servers and data back to a healthy and operational state in the fastest possible time.
In this module, you will learn how to identify security risks for your organization. You will also learn about
data protection and recovery, including how to back up specific data locally and to the cloud, how to
back up servers, and how you can recover data.
Objectives
After completing this module, you will be able to:
Implement the Windows Server Backup feature in Windows Server 2012 R2.
Lesson 1
Lesson Objectives
After completing this lesson, you will be able to:
Disaster protection. Allows recovery of servers, virtual machines, applications, and data in the event that these are lost because of causes beyond ordinary software and hardware failures, such as a server-room fire or a flood that damages a site.
The following is a high-level list of steps that you can use to identify data protection requirements:
1. Define organization-critical resources. These resources include data, services, and the servers that the data and services run on.
2. Identify risks associated with those critical resources. For example, data can be deleted accidentally or intentionally, and a hard drive or storage controller where data is stored might fail. Additionally, services that use critical data might fail for many reasons (such as network problems), and servers might fail because of hardware failures. Major power outages also could cause entire sites to shut down.
3. Determine the amount of time needed to perform the recovery. Based on their business requirements, organizations should decide how much time is acceptable for recovering critical resources. Scenarios might vary from minutes to hours, or even one day.
4. Develop a recovery strategy. Based on the previous steps, organizations will define a service-level agreement (SLA) that will include information such as service levels and service hours. Organizations should develop a data protection strategy that will help them minimize the risks, and at the same time recover their critical resources within the minimum time acceptable for their business requirements.
Note: Organizations will have differing data protection requirements based on their
business requirements and goals. Data protection requirements should not be static, but they
should be evaluated and updated on a regular basis, for example, once every few months. It is
also important that administrators test the data protection strategies on a regular basis. This
testing should be performed in an isolated, non-production environment by using a copy of the
production data.
Hours of operation. Hours of operation defines how much time the data and services are available to
users, and how much planned downtime there will be due to system maintenance.
Service availability. Service availability is defined as a percentage of time per year that data and services
will be available to users. For example, a service availability of 99.9 percent per year means that data and
services will have unplanned downtime of not more than 0.1 percent per year, or approximately 8.76 hours per year on a
24-hour-a-day, seven-day-a-week basis. In some cases, this will only apply to business hours, although
in a globalized environment, business hours usually mean 24 hours each day.
Recovery point objective (RPO). An RPO sets a limit on how much data can be lost due to failure,
measured as a unit of time. For example, if an organization sets an RPO of six hours, it would be
necessary to perform a backup every six hours, or to create a replication copy on different locations at
six-hour intervals. In the event of a failure, it would be necessary to go back to the most recent
backup, which, in the worst-case scenario, assuming that the failure occurred just before (or during)
the next backup, would be six hours earlier.
You can configure backup software to perform backups every hour, offering a theoretical RPO of 60
minutes. When you calculate RPO, it is also important to take into account the time it takes to
perform the backup. For example, suppose it takes 15 minutes to perform a backup, and you back up
every hour. If a failure occurs during the backup process, your best possible RPO will be one hour and
15 minutes. A realistic RPO must always balance the desired recovery time with the realities of the
network infrastructure. You should not aim for an RPO of two hours, for example, when a backup
itself takes three hours to complete.
The RPO also depends on the backup software technology. For example, when you use the snapshot
feature in Windows Server Backup, or if you use another backup software that uses Volume Shadow
Copy Service (VSS), you are backing up to the point in time when the backup was started.
Recovery time objective (RTO). An RTO is the amount of time it takes to recover from failure. The RTO will vary depending on the type of failure. A motherboard failure on a critical server will have a different RTO than a disk loss on a critical server, because one of these components takes significantly longer
to replace than the other.
Retention objectives. Retention is a measure of the length of time you need to store backed-up data.
For example, you might need to recover data quickly from up to a month ago, but need to store data,
in some cases, for several years. The speed at which you agree to recover data in your SLA will
depend on the datas age, because some data is quickly recoverable and other data might need to be
recovered from the archives.
System performance. Although not directly related to data protection, system performance is also an
important component of SLAs, because applications that an SLA includes should be available, and
they should also have acceptable response times to users' requests. If the system performance is slow,
then business requirements will not be met.
Note: Each organization's data protection SLA depends on the components that are
important to the organization.
Mitigation Strategies
No matter how prepared your organization is, you
cannot prevent problems from occurring. Therefore,
organizations must also develop mitigation
strategies that will minimize the impact of an
unexpected loss of data, a server, a service, or sites.
To prepare mitigation strategies, organizations must
create risk assessments that analyze all possible
disaster scenarios, and document how to mitigate
each of those scenarios.
The following table lists some of the risks
associated with data or services loss, and the
appropriate mitigation strategies.
Problem
Mitigation strategy
Have at least two copies of your backup data, and validate your
backups on a regular basis.
Ensure that each organization has its own data protection plan.
Document in detail all of the steps that should be performed in a disaster scenario.
Test your data protection plan on a regular basis in an isolated, non-production environment.
Use a production backup to test those recovery strategies, to ensure the backups contain valid data
and to evaluate the amount of time needed to recover the amount of data.
Evaluate your data protection plan on a regular basis, and update your plan based on your
evaluation.
Lesson 2
Lesson Objectives
After completing this lesson, you will be able to:
Describe data and service information that needs to be backed up in a Windows Server environment.
Summarize the features available with Microsoft System Center 2012 Data Protection Manager.
Critical resources
Backup verification
Backup security
You also need to distinguish between technical reasons and regulatory reasons for backing up data. Due
to legal requirements, you might need to be able to provide your business with business-critical data for
the previous 10 years or even longer.
To determine what to back up, consider the following:
If the data is only stored in one place, ensure that it is backed up.
If data is replicated, it might not be necessary to back up each replica. However, you must back up at
least one location to ensure that the backup can be restored. A better strategy is to back up at
multiple locations.
If this server or disk failed, or if this data became corrupted, what steps would be required to recover
it?
Many organizations ensure the availability of critical services and data through redundancy. For example,
Microsoft Exchange Server 2013 provides continuous replication of mailbox databases to other servers
through a technology called Database Availability Groups (DAGs). While the use of DAGs does not mean
that an organization should not back up its Exchange Server 2013 Mailbox servers, it does change how an
organization should think about backing up its Mailbox servers or centralizing its backup strategies.
Selected volumes.
Select specific items for backup, such as specific folders or the system state.
Perform a bare-metal restore. A bare-metal backup contains at least all critical volumes, and allows
you to restore without first installing an operating system. You do this by using the product media on
a DVD or USB key, and the Windows Recovery Environment (Windows RE). You can use this backup
type together with the Windows RE to recover from a hard disk failure, or if you have to recover the
whole computer image to new hardware.
Use system state. The backup contains all information to roll back a server to a specific point in time.
However, you need to install an operating system before you can recover the system state.
Recover individual files and folders or volumes. The Individual files and folders option enables you to
select to back up and restore specific files, folders, or volumes, or you can add specific files, folders, or
volumes to the backup when you use an option such as critical volume or system state.
Exclude selected files or file types. For example, you can exclude temporary files from the backup.
Select from more storage locations. You can store backups on remote shares or non-dedicated
volumes.
If events such as hard disk failures occur, you can perform system recovery by using a full server backup
and Windows RE. This will restore your complete system onto the new hard disk.
Windows Server Backup is a single-server backup solution. You cannot use one instance of Windows
Server Backup to back up multiple servers. You would need to install and configure Windows Server
Backup on each server.
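Windows Server Backup also ships with a Windows PowerShell module. The following is a minimal sketch of a scheduled file backup, assuming the feature is installed; the folder, network path, and 9:00 P.M. schedule are example values.

Install-WindowsFeature Windows-Server-Backup

# Build a backup policy: what to back up, where to store it, and when to run.
$policy = New-WBPolicy
Add-WBFileSpec     -Policy $policy -FileSpec (New-WBFileSpec -FileSpec "C:\Financial Data")
Add-WBBackupTarget -Policy $policy -Target (New-WBBackupTarget -NetworkPath "\\LON-DC1\Backup")
Set-WBSchedule     -Policy $policy -Schedule "21:00"

# Activate the policy as the server's scheduled backup.
Set-WBPolicy -Policy $policy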
Backup Types
When you use Windows Server Backup, you can
perform the following types of backups:
Backup Technologies
Most backup products in use today use the VSS
infrastructure that is present in Windows Server
2012 and Windows Server 2012 R2. Some older
applications, however, use streaming backup. It
might be necessary to support such older
applications in complex, heterogeneous
environments.
One challenge when you perform backups is to
ensure consistency of the data that you back up.
Backups do not occur instantly; they can take
seconds, minutes, or hours. Unfortunately, servers
are not static, and the state of a server at the
beginning of a backup might not be the same state that the server is in when the backup completes. If
you do not take consistency into account, this can cause problems during restoration because the server's configuration might have changed during the backup.
VSS
VSS, a technology that Microsoft has included with the server operating system since Windows Server 2003 R2 and that is present in all newer server operating systems, solves the consistency problem at the disk-block level by creating what is known as a shadow copy. A shadow copy is a backup of the file table, which also marks all used blocks as un-updateable. Whenever write requests occur after the snapshot is taken, the old blocks are compressed and stored before the block's data is changed. This enables you to have a point-in-time view of the file system. When a backup occurs, the old blocks are backed up, which means that any changes that might have occurred since the freeze are not backed up.
Creating a shadow copy tells the operating system first to put all files, such as DHCP databases and Active
Directory database files, in a consistent state for a moment. Then the current state of the file system is
recorded at that specific point in time. After VSS creates the shadow copy, all write accesses that would
overwrite data store the previous data blocks first. Therefore, a shadow copy is small in the beginning, and
it grows over time as data changes. By default, the operating system is configured to reserve 12
percent of the volume for VSS data, and VSS automatically deletes older snapshots when this limit is
reached. You can change this default value, and you can change the default location of the VSS data. This
ensures that the backup has a snapshot of the system in a consistent state, no matter how long it actually
takes to write the backup data to the backup storage device.
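You can inspect and adjust the shadow copy storage area with the built-in vssadmin tool, run from an elevated Windows PowerShell prompt; the 15 percent value below is only an example.

# List current shadow copy storage associations and usage.
vssadmin list shadowstorage

# Change the maximum space that VSS may use on drive C (12 percent is the default noted above).
vssadmin resize shadowstorage /For=C: /On=C: /MaxSize=15%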
Streaming Backup
Streaming backup is often used by older applications that do not use VSS. You back up applications that are
not VSS-aware by using a method known as a streaming backup. In contrast to VSS, in which the operating
system ensures that data is kept in a consistent state and at a current point in time, when you use streaming
backup, the application or the data protection application is responsible for ensuring that the data remains in a
consistent state. In addition, after a streaming backup completes, some files reflect the state they had at the beginning of the backup, while other files reflect the state at the end of the backup window.
Hyper-V Replica
Hyper-V Server in Windows Server 2012 and Windows Server 2012 R2 supports creating replicas of
virtual machines. These replicas can be stored on another server in the same site, on another server in
another site, or even in a public cloud. Virtual machine replication allows you to have consistent versions
of production virtual machines stored in a separate location. While Hyper-V Replica does allow you to keep copies of virtual machines that are nearly up to date (there is always some lag involved when replicating across sites), a replica virtual machine only protects you against some types of failures. If an
application running on the virtual machine or data hosted on the virtual machine becomes corrupt, but
the virtual machine remains operational, the corrupted files likely will also be replicated across to the
replica virtual machine.
Backup frequency.
Backup retention.
Backup Frequency
Backup frequency is a measure of how often backups are taken. With incremental block-level backups, no
substantial difference will exist between the amount of data written over the sum of four 30-minute
sessions and one two-hour incremental session on the same server. This is because over the two hours, the
same number of blocks will have changed on the server as during the four 30-minute sessions. However,
the four 30-minute sessions have broken up the data into smaller parts. When backups occur more
frequently, they reduce the time required to perform the backup by splitting it into smaller parts. The
overall total will be about the same.
Backup Retention
When you attempt to determine the required backup capacity, you should determine precisely how long
you need to retain backup data. For example, if you need to be able to recover to any backup point in the
last 28 days, and if you have recovery points generated every hour, you will need more space than if you
have recovery points generated once a day, and you only need to restore data from the last 14 days.
Windows Server Backup does not encrypt backups. Windows Server Backup writes backups in VHD
format. This means that anyone who has access to Windows 8, Windows 8.1, Windows Server 2012,
or Windows Server 2012 R2 can mount those backups as volumes, and then extract data from them.
An even more sophisticated attack might include booting into the backup VHD to impersonate the
backed up system on the organizational network.
Keep backup media in a secure location. At a minimum, backups should be kept locked up in a secure
location. If your organization backs up to disk drives that are attached to servers by USB cable, ensure
that those disk drives are locked in place, even if they are located in a secure server room, and even if
your organizations server room has a security camera.
Demonstration Steps
1.
2.
Password: Pa$$w0rd
3.
Run the Backup Once Wizard using the scheduled backup options.
4.
Key Features
The key features of Windows Azure Backup include:
Integrated recovery experience to recover files and folders from a local disk or from a cloud
platform.
Easy data recoverability for data that was backed up onto any server of your choice.
Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by
tracking file and block-level changes, and only transferring the changed blocks, which reduces the
storage and bandwidth usage. Different point-in-time versions of the backups use storage efficiently
by only storing the changed blocks between these versions.
Data compression, encryption, and throttling. The Windows Azure Backup Agent ensures that data is
compressed and encrypted on the server before it is sent to the Windows Azure Backup on the
network. Therefore, the Windows Azure Backup only stores encrypted data in cloud storage. The
encryption passphrase is not available to the Windows Azure Backup, and therefore, the data is never
decrypted in the cloud. In addition, users can set up throttling and configure how the Windows Azure Backup Agent uses the network bandwidth when it backs up or restores information.
Data integrity verified in the cloud. In addition to the secure backups, the backed-up data also is
checked automatically for integrity after the backup completes. Therefore, any corruptions that might
arise because of data transfer can be easily identified. These corruptions are fixed automatically in the
next backup.
Configurable retention policies for storing data in the cloud. The Windows Azure Backup accepts and
implements retention policies to recycle backups that exceed the desired retention range, thereby
meeting business policies and managing backup costs.
Windows Azure Backup can only be used to back up files and folders. You cannot use Windows Azure
Backup to back up system state data or perform a full-server or volume backup and recovery, although
you can back up all files and folders on a volume. Windows Azure Backup allows you to back up a maximum of 850 gigabytes (GB) per volume during a backup session.
Although each instance of Windows Azure Backup can back up to the same recovery vault in Windows
Azure, you must install and configure each instance of Windows Azure Backup separately. You cannot
manage multiple instances of Windows Azure Backup centrally.
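After the Windows Azure Backup agent is installed and the server is registered with the backup vault, configuration uses the agent's OB* cmdlets. The following is a sketch only; cmdlet availability should be verified against the agent version installed, and the folder, schedule, and retention values are examples.

# Build a backup policy for the Windows Azure Backup agent.
$policy = New-OBPolicy
Add-OBFileSpec -Policy $policy -FileSpec (New-OBFileSpec -FileSpec "C:\Financial Data")

# Back up on weekdays at 9:00 P.M. and keep recovery points for 30 days.
Set-OBSchedule -Policy $policy -Schedule `
    (New-OBSchedule -DaysOfWeek Monday,Tuesday,Wednesday,Thursday,Friday -TimesOfDay "21:00")
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy (New-OBRetentionPolicy -RetentionDays 30)
Set-OBPolicy -Policy $policy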
To learn more about Windows Azure SQL Database, go to:
http://go.microsoft.com/fwlink/?LinkId=270041
At this time, Windows Azure Backup is not available in all countries. For updated
information, go to:
http://go.microsoft.com/fwlink/?LinkID=386645
How quick is RTO recovery? How long does it take to go from failure to restored functionality? Being
able to restore to the last SQL Server transaction is the optimal solution, but if it takes two days to
recover to that point, the solution is not as helpful as it may appear.
Does the solution provide centralized backup? Does the product allow you to centralize your backup
solution on one server, or must backups be performed directly on each server in the organization?
Do vendors support the solution? Some vendors use undocumented application programming
interfaces (APIs) to back up and recover specific products, or to back up files without ensuring that
the service is at a consistent state.
Is the backup solution compatible with your applications? For example, a new update to a product
may make your backup solution incompatible with the application. Check with the application vendor
to determine whether the enterprise backup solution is supported.
Recovery-point capacity. Determine the product's recovery-point capacity. How many restore points does the enterprise data protection solution offer, and is this adequate for your organization's needs?
15-minute RPO. DPM allows 15-minute snapshots of supported products. This includes most of the
Microsoft enterprise suite of products, including Windows Server with its roles and services, Exchange
Server, Hyper-V, and Microsoft SQL Server.
Microsoft workload support. DPM was designed specifically by Microsoft to support Microsoft
applications such as Exchange Server, SQL Server, and Hyper-V. However, DPM has not been
specifically designed to support non-Microsoft server applications that do not have consistent states
on disk, or that do not support VSS.
Disk-based backup. DPM can perform scheduled backups to disk arrays and storage area networks
(SANs). You can also configure DPM to export specific backup data to tape for retention and
compliance-related tasks.
Remote-site backup. DPM uses an architecture that allows it to back up clients that are located in
remote sites. This means that a DPM server that is located in a head office site can perform backups
of servers and clients that are located across wide area network (WAN) links.
Backup-to-cloud strategy support. DPM supports backup of DPM servers to a cloud platform. This
means that a DPM server at a cloud-based hosting facility can be used to back up the contents of a
head office DPM server. For disaster redundancy, you also can configure DPM servers to back up each
other.
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
Operating system. You can recover the operating system through the Windows Recovery
Environment (Windows RE), the product DVD, or a USB flash drive.
Full server. You can recover the full server through Windows RE.
System state. System state creates a point-in-time backup that you can use to restore a server to a
previous working state.
The Recovery Wizard in Windows Server Backup provides several options for managing file and folder
recovery. They are:
Recovery Destination. Under Recovery Destination, you can select one of the following options:
o
Original location. The original location restores the data to the location to which it was backed
up originally.
Conflict Resolution. When you restore data from a backup, it frequently conflicts with existing
versions of the data. Conflict resolution allows you to determine how to handle those conflicts. When
conflicts occur, you have the following options:
o
Security Settings. Use this option to restore permissions to the data that is being recovered.
Two successive failed attempts to start Windows Server 2012 or Windows Server 2012 R2 occur.
Two successive unexpected shutdowns occur within 120 seconds of successful boot.
You can use Windows RE to recover volumes or server images from locally attached disks or from network
locations.
When you perform full server restore, consider the following:
Bare-metal restore. Bare-metal restore is the process you use to restore an existing server in its
entirety to new or replacement hardware. When you perform a bare-metal restore, the restore
proceeds and the server restarts. Later, the server becomes operational. In some cases, you may have
to reset the computer's AD DS account, because these accounts can sometimes become
desynchronized.
Same or larger disk drives. The server hardware to which you are restoring must have disk drives that
are the same size or larger than the original host server's drives. If this is not the case, the restore will
fail. It is possible, although not advisable, to successfully restore to hosts that have slower processors
and less random access memory (RAM).
Importing to Hyper-V. Because server backup data is written to the VHDX format (the same format
that you use for virtual machine hard disks), if you are careful, it is possible to use full server backup
data as the basis for creating a virtual machine. Doing this ensures business continuity while you
identify the appropriate replacement hardware.
Recover a Volume
If a disk fails, the quickest way to recover the data could be to perform a volume recovery, instead of a
selective recovery of files and folders. When you perform a volume recovery, you must check whether any
shared folders are configured for the disks, and whether the quotas and File Server Resource Manager
(FSRM) management policies are still in effect.
Note: During the restore process, you should copy event logs before you start the process. If you overwrite the event log files (for example, with a system recovery), you will be unable to read event-log information that was logged before the restore started. That data could provide information about what caused the issue.
Demonstration Steps
1.
2.
In Windows Server Backup, run the Recovery Wizard and specify the following information:
3.
In Windows Explorer, browse to drive C, and ensure that the HR Data folder is restored.
2.
Browse for files that have to be restored, or you can search for them in the Windows Azure Backup.
3.
After you locate the files, select them for recovery, and select a location where the files will be
restored.
4.
Create copies so that you have both the restored file and original file in the same location. The
restored file's name will be in the following format: Recovery Date+Copy of+Original File Name.
Do not recover the items that already exist on the recovery destination.
After you complete the restore procedure, the files will be restored on the Windows Server 2012 server
that is located in your site.
Objectives
Lab Setup
Estimated Time: 45 minutes
Virtual machines: 20412D-LON-DC1,
20412D-LON-SVR1
User name: Adatum\Administrator
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1.
On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2.
In Hyper-V Manager, click 20412D-LON-DC1, and in the Actions pane, click Start.
3.
In the Actions pane, click Connect. Wait until the virtual machine starts.
4.
5.
Password: Pa$$w0rd
2.
3.
Switch to LON-SVR1.
2.
From the Server Manager, install the Windows Server Backup feature. Accept the default values on
the Add Roles and Features Wizard.
2.
Password: Pa$$w0rd
Note: In a production environment, you would not store backups on a domain controller. You do so here for lab purposes only.
2.
Run the Backup Once Wizard to back up the C:\Financial Data folder to the remote folder \\LON-DC1\Backup.
Results: After you complete this exercise, you will have configured the Windows Server Backup feature,
scheduled a backup task, and completed an on-demand backup.
2.
3.
On LON-SVR1, open Windows Explorer, and then delete the C:\Financial Data folder.
2.
In the Windows Server Backup MMC, run the Recovery Wizard, and specify the following information:
Open drive C, and ensure that the Financial Data folder is restored.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have tested and validated the procedure for restoring a
file from backup.
Question: You are concerned about business-critical data that is located on your company's
servers. You want to perform backups every day, but not during business hours. What should
you do?
Question: Users report that they can no longer access data that is located on the server. You
connect to the server, and you realize that the shared folder where users were accessing data
is missing. What should you do?
Analyze your important infrastructure resources and mission-critical and business-critical data. Based
on that analysis, create a backup strategy that will protect the company's critical infrastructure
resources and business data.
Work with the organization's business managers to identify the minimum recovery time for business-critical data. Based on that information, create an optimal restore strategy.
Troubleshooting Tip
Review Questions
Question: You want to create a strategy that includes guidance on how to back up different
technologies that are used in your organization such as DHCP, DNS, AD DS, and SQL Server.
What should you do?
Question: How frequently should you perform backups on critical data?
Tools
Tool
Use
Where to find it
Windows Server
Backup
Windows Azure
Backup
Course Evaluation
Your evaluation of this course will help Microsoft understand the quality of your learning experience.
Please work with your training provider to access the course evaluation form.
Microsoft will keep your answers to this survey private and confidential and will use your responses to
improve your future learning experience. Your open and honest feedback is valuable and appreciated.
2.
In the DHCP console, click lon-dc1.adatum.com, select and then right-click IPv4, and then click
New Scope.
3.
4.
On the Scope Name page, in the Name box, type Scope1, and then click Next.
5.
On the IP Address Range page, in the Start IP address box, type 192.168.0.50, and then in the
End IP address box, type 192.168.0.100.
6.
In the Subnet mask box, ensure that 255.255.255.0 is entered, and then click Next.
7.
8.
9.
On the Configure DHCP Options page, select Yes, I want to configure these options now, and
then click Next.
10. On the Router (Default Gateway) page, in the IP address box, type 192.168.0.1, click Add, and
then click Next.
11. On the Domain Name and DNS Servers page, ensure that the parent domain is Adatum.com, and
then click Next.
12. On the WINS Servers page, click Next.
13. On the Activate Scope page, click No, I will activate this scope later, and then click Next.
14. On the Completing the New Scope Wizard page, click Finish.
15. Right-click IPv4, and then click New Scope.
16. In the New Scope Wizard, click Next.
17. On the Scope Name page, in the Name box, type Scope2, and then click Next.
18. On the IP Address Range page, in the Start IP address box, type 192.168.1.50, and then in the End
IP address box, type 192.168.1.100.
19. In the Subnet mask box, ensure that 255.255.255.0 is entered, and then click Next.
20. On the Add Exclusions and Delay page, click Next.
21. On the Lease Duration page, click Next.
22. On the Configure DHCP Options page, select Yes, I want to configure these options now, and
then click Next.
23. On the Router (Default Gateway) page, in the IP address box, type 192.168.1.1, click Add, and
then click Next.
24. On the Domain Name and DNS servers page, ensure the parent domain is Adatum.com, and then
click Next.
25. On the WINS Servers page, click Next.
26. On the Activate Scope page, click No, I will activate this scope later, and then click Next.
27. On the Completing the New Scope Wizard page, click Finish.
28. Right-click the IPv4 node, and then click New Superscope.
29. In the New Superscope Wizard, click Next.
30. On the Superscope Name page, in the Name box, type AdatumSuper, and then click Next.
31. On the Select Scopes page, select Scope1, hold down the Ctrl key, select Scope2, and then click
Next.
32. On the Completing the New Superscope Wizard page, click Finish.
33. In the DHCP console, under IPv4, select and then right-click Superscope Adatum Super, and then
click Activate.
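Note: The same scopes and superscope can also be created with Windows PowerShell. The following is a sketch only; it assumes the DhcpServer module on LON-DC1 and uses the lab values from the preceding steps.
# Create the two scopes (left inactive, as in the wizard)
Add-DhcpServerv4Scope -Name "Scope1" -StartRange 192.168.0.50 -EndRange 192.168.0.100 -SubnetMask 255.255.255.0 -State InActive
Add-DhcpServerv4Scope -Name "Scope2" -StartRange 192.168.1.50 -EndRange 192.168.1.100 -SubnetMask 255.255.255.0 -State InActive
# Configure the router and DNS domain options on each scope
Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -Router 192.168.0.1 -DnsDomain Adatum.com
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -Router 192.168.1.1 -DnsDomain Adatum.com
# Group both scopes into a superscope, and then activate the member scopes
Add-DhcpServerv4Superscope -SuperscopeName "AdatumSuper" -ScopeId 192.168.0.0, 192.168.1.0
Set-DhcpServerv4Scope -ScopeId 192.168.0.0 -State Active
Set-DhcpServerv4Scope -ScopeId 192.168.1.0 -State Active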
2.
3.
4.
5.
Select the Enable Name Protection check box, and then click OK.
6.
Click OK again.
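Note: DHCP Name Protection can also be enabled from Windows PowerShell. This is a sketch only, assuming that Name Protection is enabled at the IPv4 server level on LON-DC1, as in the steps above.
# Enable DHCP Name Protection for IPv4 on LON-DC1, and then confirm the setting
Set-DhcpServerv4DnsSetting -ComputerName LON-DC1 -NameProtection $true
Get-DhcpServerv4DnsSetting -ComputerName LON-DC1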
On LON-SVR1, in Server Manager, click Tools, and then from the drop-down list, click DHCP. Note
that the server is authorized, but that no scopes are configured.
2.
On LON-DC1, in the DHCP console, right-click the IPv4 node, and then click Configure Failover.
3.
4.
On the Specify the partner server to use for failover page, in the Partner Server box, type
172.16.0.21, and then click Next.
5.
On the Create a new failover relationship page, in the Relationship Name box, type Adatum.
6.
In the Maximum Client Lead Time field, set the hours to 0, and set the minutes to 15.
7.
Ensure that the Mode field is set to Load balance, and that the Load Balance Percentage is set to
50%.
8.
Select the State Switchover Interval check box. Keep the default value of 60 minutes.
9.
In the Enable Message Authentication Shared Secret box, type Pa$$w0rd, and then click Next.
13. Click the Scope Options node, and note that the scope options are configured.
14. Start 20412D-LON-CL1, and then sign in as Adatum\Administrator with the password Pa$$w0rd.
15. On the Start screen, type Control Panel.
16. In the Apps Results box, click Control Panel.
17. In Control Panel, click Network and Internet, click Network and Sharing Center, click Change
adapter settings, right-click Ethernet, and then click Properties.
18. In the Ethernet Properties dialog box, click Internet Protocol Version 4 (TCP/IPv4), and then click
Properties.
19. In the Properties dialog box, select the Obtain an IP address automatically radio button, click
Obtain DNS server address automatically, and then click OK.
20. In the Ethernet Properties dialog box, click Close.
21. Hover over the bottom right corner to expose the fly-out menu, and then click the Search charm.
22. In the Apps search box, type Cmd, and then press Enter.
23. In the command prompt window, type ipconfig, and then press Enter. Record your IP address.
24. On LON-DC1, on the taskbar, click the Server Manager icon.
25. In Server Manager, click Tools, and then click Services.
26. In the Services window, right-click the DHCP Server service, and then click Stop to stop the service.
27. Close the Services window, and close the DHCP console.
28. On LON-CL1, in the command prompt window, type ipconfig /release, and then press Enter.
29. Type ipconfig /renew, and then press Enter.
30. Type ipconfig, and then press Enter. What is your IP address? Answers may vary.
31. On LON-DC1, in the Services console, start the DHCP server service.
Results: After completing this exercise, you will have configured a superscope, configured DHCP Name
Protection, and configured and verified DHCP failover.
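Note: The failover relationship configured in this exercise can also be created with Windows PowerShell. The following is a sketch only; it assumes the existing Adatum scope ID of 172.16.0.0 on LON-DC1 and otherwise uses the values entered in the wizard.
# Create a load-balanced failover relationship between LON-DC1 and LON-SVR1 (sketch)
Add-DhcpServerv4Failover -ComputerName LON-DC1 -Name Adatum -PartnerServer 172.16.0.21 -ScopeId 172.16.0.0 `
    -LoadBalancePercent 50 -MaxClientLeadTime 00:15:00 -AutoStateTransition $true -StateSwitchInterval 01:00:00 `
    -SharedSecret 'Pa$$w0rd'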
On LON-DC1, in Server Manager, click Tools, and then in the drop-down list, click DNS.
2.
Expand LON-DC1, expand Forward Lookup Zones, click Adatum.com, and then right-click
Adatum.com.
3.
4.
5.
On the Signing options page, click Customize zone signing parameters, and then click Next.
6.
On the Key Master page, ensure that the Domain Name System (DNS) server LON-DC1 is selected as
the Key Master, and then click Next.
7.
8.
9.
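Note: Zone signing can also be started from Windows PowerShell. The following sketch uses the default DNSSEC signing parameters rather than the customized parameters selected in the wizard, so it is an approximation only; it assumes the DnsServer module on LON-DC1.
# Sign the Adatum.com zone with the default DNSSEC settings (the wizard above uses customized parameters)
Invoke-DnsServerZoneSign -ZoneName "Adatum.com" -SignWithDefault -Force
# Review the DNSSEC settings that are applied to the zone
Get-DnsServerDnsSecZoneSetting -ZoneName "Adatum.com"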
2.
In the Windows PowerShell window, type the following command, and then press Enter:
Get-DNSServer
This command displays the current size of the DNS socket pool (on the fourth line in the
ServerSetting section). Note that the current size is 2,500.
3.
Type the following command, and then press Enter to change the socket pool size to 3,000.
dnscmd /config /socketpoolsize 3000
4.
Type the following command, and then press Enter to stop the DNS server:
net stop dns
5.
Type the following command, and then press Enter to start the DNS server.
net start dns
6.
Type the following command, and then press Enter to confirm the new socket pool size.
Get-DnsServer
In the Windows PowerShell window, type the following command, and then press Enter.
Get-Dnsserver
This displays the current percentage value of the DNS cache lock. Note that the current value is 100
percent. The value displays in the ServerCache section.
2.
Type the following command, and then press Enter to stop the DNS server.
net stop dns
4.
Type the following command, and then press Enter to start the DNS server:
net start dns
5.
This command displays the current percentage value of the DNS cache lock. Note that the new value
is 75 percent.
6.
Leave the Windows PowerShell window open for the next task.
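Note: For reference, the socket pool and cache locking values used in this exercise can be set and confirmed with the following commands. This is a summary sketch; the cache-locking command shown is the dnscmd equivalent of the wizardless steps above and is an assumption based on the dnscmd /config syntax already used for the socket pool.
# Set the DNS socket pool size to 3,000 and the cache locking value to 75 percent
dnscmd /config /socketpoolsize 3000
dnscmd /config /cachelockingpercent 75
# Restart the DNS Server service so that the new values take effect, and then confirm them
net stop dns
net start dns
Get-DnsServer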
Create an Active Directory-integrated forward lookup zone named Contoso.com by running the
following cmdlet in Windows PowerShell:
Add-DnsServerPrimaryZone -Name Contoso.com -ReplicationScope Forest
2.
In the Windows PowerShell window, type the following command, and then press Enter to enable
support for GlobalName zones:
Set-DnsServerGlobalNameZone -AlwaysQueryServer $true
3.
Create an Active Directory-integrated forward lookup zone named GlobalNames by running the
following command:
Add-DnsServerPrimaryZone -Name GlobalNames -ReplicationScope Forest
4.
5.
6.
7.
In the DNS console, refresh and then expand Forward Lookup Zones, click the Contoso.com zone,
right-click Contoso.com, and then click New Host (A or AAAA).
8.
In the New Host dialog box, in the Name box, type App1.
Note: The Name box uses the parent domain name if it is left blank.
9.
In the IP address box, type 192.168.1.200, and then click Add Host.
Results: After completing this exercise, you will have configured DNSSEC, the DNS socket pool, DNS
cache locking, and the GlobalName zone.
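Note: To make a single-label name such as App1 resolvable through the GlobalNames zone, a CNAME record that points to the full host record is normally added. The following sketch assumes the App1 host record created above in the Contoso.com zone.
# Add a CNAME record for App1 in the GlobalNames zone that points to the Contoso.com host record
Add-DnsServerResourceRecordCName -ZoneName "GlobalNames" -Name "App1" -HostNameAlias "App1.Contoso.com"
# Test single-label name resolution
Resolve-DnsName App1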
On LON-SVR2, in the Server Manager Dashboard, click Add roles and features.
2.
3.
4.
5.
6.
On the Select features page, select the IP Address Management (IPAM) Server check box.
7.
In the Add features that are required for IP Address Management (IPAM) Server popup, click
Add Features, and then click Next.
8.
9.
2.
In the IPAM Overview pane, click Connect to IPAM server, click LON-SVR2.Adatum.com, and then
click OK.
3.
4.
In the Provision IPAM Wizard, on the Before you begin page, click Next.
5.
6.
On the Select provisioning method page, ensure that the Group Policy Based method is selected.
In the GPO name prefix box, type IPAM, and then click Next.
7.
On the Confirm the Settings page, click Apply. Provisioning will take a few minutes to complete.
8.
2.
In the Configure Server Discovery settings dialog box, click Add, and then click OK.
3.
In the IPAM Overview pane, click Start server discovery. Discovery may take five to 10 minutes to
run. The yellow bar will indicate when discovery is complete.
In the IPAM Overview pane, click Select or add servers to manage and verify IPAM access. Notice
that the IPAM Access Status is blocked.
2.
Scroll down to the Details view, and note the status report, which is that the IPAM server has not yet
been granted permission to manage LON-DC1 via Group Policy.
3.
On the taskbar, right-click Windows PowerShell, and then click Run as Administrator.
4.
At the Windows PowerShell prompt, type the following command, and then press Enter:
Invoke-IpamGpoProvisioning -Domain Adatum.com -GpoPrefixName IPAM -IpamServerFqdn LON-SVR2.adatum.com -DelegatedGpoUser Administrator
5.
When you are prompted to confirm the action, type Y, and then press Enter. The command will take a
few minutes to complete.
6.
7.
In Server Manager, in the SERVER INVENTORY>IPv4 pane, right-click LON-DC1, and then click Edit
Server.
8.
In the Add or Edit Server dialog box, set the Manageability status to Managed, and then click OK.
9.
Switch to LON-DC1.
On LON-SVR2, in the IPAM navigation pane, under MONITOR AND MANAGE, click DNS and DHCP
Servers.
2.
In the details pane, right-click the instance of LON-DC1.Adatum.com that contains the DHCP server
role, and then click Create DHCP Scope.
3.
In the Create DHCP Scope dialog box, in the Scope Name box, type TestScope.
4.
5.
6.
7.
8.
9.
In the Configure options pane, click the Option drop-down arrow, and then select 003 Router.
10. Under Values, in the IP Address box, type 10.0.0.1, click Add Configuration, and then click OK.
11. In the navigation pane, click DHCP Scopes.
12. Right-click Test Scope, and then click Configure DHCP Failover.
13. In the Configure DHCP Failover Relationship dialog box, for the Partner server field, click the
Select drop-down arrow, and then click lon-svr1.adatum.com.
14. In the Relationship Name field, type TestFailover.
15. In the Enable Message Authentication Secret field, type Pa$$w0rd.
16. In the Maximum Client Lead Time field, set the hours to zero, and then set the minutes to 15.
17. Ensure the Mode field is set to Load balance.
18. Ensure that the Load Balance Percentage is set to 50%.
19. Select the Enable state switchover check box. Leave the default value of 60 minutes.
20. Click OK.
21. On LON-DC1, on the Server Manager toolbar, click Tools, and then click DHCP.
22. In the DHCP console, expand lon-dc1.adatum.com, expand IPv4, and confirm that TestScope exists.
On LON-SVR2, in the Server Manager, in the IPAM console tree, click IP Address Blocks.
2.
In the right pane, click the Tasks drop-down arrow, and then click Add IP Address Block.
3.
In the Add or Edit IPv4 Address Block dialog box, provide the following values, and then click OK:
Prefix length: 16
4.
5.
In the right pane, click the Tasks drop-down arrow, and then click Add IP Address.
6.
In the Add IP Address dialog box, under Basic Configurations, provide the following values, and
then click OK:
IP address: 172.16.0.1
7.
Click the Tasks drop-down arrow, and then click Add IP Address.
8.
In the Add IP Address dialog box, under Basic Configuration, provide the following values:
9.
IP address: 172.16.0.10
In the Add IPv4 Address pane, click DHCP Reservation, and then enter the following values:
10. In the Add IPv4 Address pane, click DNS Record, enter the following values, and then click OK:
Check the Automatically create DNS records for this IP address check box.
11. On LON-DC1, open the DHCP console, expand IPv4, expand Scope (172.16.0.0) Adatum, and then
click Reservations. Ensure that the Webserver reservation for 172.16.0.10 displays.
12. Open the DNS console, expand Forward Lookup Zones, and then click Adatum.com. Ensure that a
host record displays for Webserver.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have installed IPAM and configured IPAM with IPAM-related GPOs, IP management server discovery, managed servers, a new DHCP scope, IP address blocks, IP
addresses, DHCP reservations, and DNS records.
Sign in to LON-DC1 with the user name Adatum\Administrator and the password Pa$$w0rd.
2.
3.
In the Add Roles and Features Wizard, on the Before You Begin page, click Next.
4.
5.
On the Select destination server page, ensure that Select server from the server pool is selected,
and then click Next.
6.
On the Select server roles page, expand File And Storage Services (2 of 12 Installed), expand File
and iSCSI Services (1 of 11 Installed), select the iSCSI Target Server check box, and then click
Next.
7.
8.
9.
On LON-DC1, in the Server Manager, in the navigation pane, click File and Storage Services.
2.
3.
In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-down list box, click New
iSCSI Virtual Disk.
4.
In the New iSCSI Virtual Disk Wizard, on the Select iSCSI virtual disk location page, under Storage
location, click drive C, and then click Next.
5.
On the Specify iSCSI virtual disk name page, in the Name text box, type iSCSIDisk1, and then click
Next.
6.
On the Specify iSCSI virtual disk size page, in the Size text box, type 5, in the drop-down list box,
ensure that GB is selected, and then click Next.
7.
On the Assign iSCSI target page, click New iSCSI target, and then click Next.
8.
On the Specify target name page, in the Name box, type LON-DC1, and then click Next.
9.
10. In the Select a method to identify the initiator dialog box, click Enter a value for the selected
type, in the Type drop-down list box, click IP Address, in the Value text box, type 172.16.0.22, and
then click OK.
11. On the Specify access servers page, click Add.
12. In the Select a method to identify the initiator dialog box, click Enter a value for the selected
type, in the Type drop-down list box, click IP Address, in the Value text box, type 131.107.0.2, and
then click OK.
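Note: The iSCSI virtual disk and target can also be created with Windows PowerShell. The following is a sketch only; parameter names are assumed from the Windows Server 2012 R2 iSCSITarget module, and the virtual disk path shown is illustrative.
# Create a 5 GB iSCSI virtual disk on drive C (the path is an assumption)
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\iSCSIDisk1.vhdx" -SizeBytes 5GB
# Create the target and allow the two initiator IP addresses used in this lab
New-IscsiServerTarget -TargetName "LON-DC1" -InitiatorIds "IPAddress:172.16.0.22","IPAddress:131.107.0.2"
# Assign the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "LON-DC1" -Path "C:\iSCSIVirtualDisks\iSCSIDisk1.vhdx"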
Sign in to LON-SVR2 with the user name Adatum\Administrator and the password Pa$$w0rd.
2.
In the Server Manager, on the menu bar, click Tools, and then in the Tools drop-down list, click
Routing and Remote Access.
3.
Right-click LON-SVR2 (local), and click Disable Routing and Remote Access. Click Yes, and after it
has stopped, close the Routing and Remote Access console.
Note: Normally, you do not disable Routing and Remote Access (RRAS) before configuring
Multipath input/output (MPIO). You do it here because of lab requirements.
4.
5.
In the Add Roles and Features Wizard, on the Before You Begin page, click Next.
6.
7.
On the Select destination server page, ensure that Select server from the server pool is selected,
and then click Next.
8.
9.
On the Select features page, click Multipath I/O, and then click Next.
On LON-SVR2, in the Server Manager, on the menu bar, click Tools, and then in the Tools drop-down list box, click iSCSI Initiator.
2.
In the iSCSI Initiator Properties dialog box, on the Targets tab, click Disconnect.
3.
4.
In the iSCSI Initiator Properties dialog box, on the Targets tab, click Connect.
5.
In the Connect to Target window, click Enable multi-path, verify that the Add this connection to
the list of Favorite Targets check box is selected, and then click Advanced.
6.
In the Advanced Settings dialog box, on the General tab, change the Local Adapter from Default
to Microsoft iSCSI Initiator. In the Initiator IP drop-down list box, click 172.16.0.22, and in the
Target Portal IP drop-down list box, click 172.16.0.10 / 3260.
7.
8.
9.
In the iSCSI Initiator Properties dialog box, on the Targets tab, click Connect.
10. In the Connect to Target window, click Enable multi-path, verify that the Add this connection to
the list of Favorite Targets check box is selected, and then click Advanced.
11. In the Advanced Settings dialog box, on the General tab, change the Local Adapter from Default
to Microsoft iSCSI Initiator. In the Initiator IP drop-down list box, select 131.107.0.2, and in the
Target Portal IP drop-down list box, select 131.107.0.1 / 3260.
12. In the Advanced Settings dialog box, click OK.
13. In the Connect to Target window, click OK.
14. In the iSCSI Initiator Properties dialog box, click the Volumes and Devices tab.
15. In the iSCSI Initiator Properties dialog box, on the Volumes and Devices tab, click Auto
Configure.
16. In the iSCSI Initiator Properties dialog box, click the Targets tab.
17. In the Targets list, select iqn.1991-05.com.microsoft:lon-dc1-lon-dc1-target, and then click
Devices.
18. In the Devices dialog box, click MPIO.
19. Verify that in Load balance policy, Round Robin is selected. Under This device has the following
paths, notice that two paths are listed. Select the first path, and click Details.
20. Note the IP address of the Source and Target portals, and click OK.
21. Select the second path, and click Details.
22. Verify that the Source IP address is of the second network adapter, and click OK.
23. To close the Device Details dialog box, click OK.
24. To close the Devices dialog box, click OK.
25. To close the iSCSI Initiator Properties dialog box, click OK.
Results: After completing this exercise, you will have configured and connected to iSCSI targets.
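Note: The initiator connections can also be made from Windows PowerShell. This sketch assumes the target IQN shown in step 17 and the portal and initiator addresses used above; the Multipath I/O feature and an iSCSI device claim are still required for MPIO to apply.
# Register the target portal, and then list the discovered targets
New-IscsiTargetPortal -TargetPortalAddress 172.16.0.10
Get-IscsiTarget
# Connect over both paths with multipath enabled, and make the connections persistent
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:lon-dc1-lon-dc1-target" -IsMultipathEnabled $true -IsPersistent $true -InitiatorPortalAddress 172.16.0.22 -TargetPortalAddress 172.16.0.10
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:lon-dc1-lon-dc1-target" -IsMultipathEnabled $true -IsPersistent $true -InitiatorPortalAddress 131.107.0.2 -TargetPortalAddress 131.107.0.1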
On LON-SVR1, in the Server Manager, in the upper-right corner, click Tools, and then click File
Server Resource Manager.
2.
In the File Server Resource Manager window, expand Classification Management, select and then
right-click Classification Properties, and then click Create Local Property.
3.
In the Create Local Classification Property window, in the Name text box, type Corporate
Documentation, in the Property Type drop-down list box, ensure that Yes/No is selected, and then
click OK.
4.
In the File Server Resource Manager, click Classification Rules, and then in the Actions pane, click
Create Classification Rule.
2.
In the Create Classification Rule window, on the General tab, in the Rule name text box, type
Corporate Documents Rule, and then ensure that the Enabled check box is selected.
3.
4.
In the Browse For Folder window, expand Allfiles (E:), expand Labfiles, click Corporate
Documentation, and then click OK.
5.
In the Create Classification Rule window, on the Classification tab, in the Classification method
drop-down list box, click Folder Classifier. In the Property-Choose a property to assign to files
drop-down list box, click Corporate Documentation, and then in the Property-Specify a value
drop-down list box, click Yes.
6.
Click the Evaluation type tab, click Re-evaluate existing property values, ensure that the
Aggregate the values radio button is selected, and then click OK.
7.
In the File Server Resource Manager, in the Actions pane, click Run classification with all rules now.
8.
In the Run classification window, select the Wait for classification to complete radio button, and
then click OK.
9.
Review the Automatic classification report that displays in Internet Explorer, and ensure that the
report lists the same number of classified files as in the Corporate Documentation folder.
10. Close Internet Explorer, but leave the File Server Resource Manager console open.
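Note: The classification property and rule can also be created with the FileServerResourceManager module in Windows PowerShell. This is an approximate sketch only; the parameter values and the Folder Classifier mechanism name are assumptions based on that module.
# Create the Yes/No classification property (sketch)
New-FsrmClassificationPropertyDefinition -Name "Corporate Documentation" -Type YesNo
# Create a folder-based classification rule for the lab folder, and then run classification with all rules
New-FsrmClassificationRule -Name "Corporate Documents Rule" -Namespace "E:\Labfiles\Corporate Documentation" -ClassificationMechanism "Folder Classifier" -Property "Corporate Documentation" -PropertyValue "Yes" -ReevaluateProperty Aggregate
Start-FsrmClassification -Confirm:$false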
In the File Server Resource Manager console, expand Classification Management, right-click
Classification Properties, and then click Create Local Property.
2.
In the Create Local Classification Property window, in the Name text box, type Expiration Date. In
the Property Type drop-down list box, ensure that Date-time is selected, and then click OK.
3.
In the File Server Resource Manager, expand Classification Management, click Classification Rules,
and then in the Actions pane, click Create Classification Rule.
4.
In the Create Classification Rule window, on the General tab, in the Rule name text box, type
Expiration Rule, and ensure that the Enabled check box is selected.
5.
6.
In the Browse For Folder window, expand Allfiles (E:), expand Labfiles, click Corporate
Documentation, and then click OK.
7.
Click the Classification tab. In the Classification method drop-down list box, click Folder Classifier,
and then in the Property-Choose a property to assign to files drop-down list box, click Expiration
Date.
8.
Click the Evaluation type tab. Click Re-evaluate existing property values, ensure that the
Aggregate the values radio button is selected, and then click OK.
9.
In the File Server Resource Manager console, in the Actions pane, click Run classification with all
rules now.
10. In the Run classification window, select the Wait for classification to complete radio button, and
then click OK.
11. Review the Automatic classification report that displays in Internet Explorer, and ensure that the
report lists the same number of classified files as in the Corporate Documentation folder.
12. Close Internet Explorer, but leave the File Server Resource Manager console open.
In File Server Resource Manager, select and right-click File Management Tasks, and then click
Create File Management Task.
2.
In the Create File Management Task window, on the General tab, in the Task name text box, type
Expired Corporate Documents, and then ensure that the Enable check box is selected.
3.
4.
In the Browse For Folder window, click E:\Labfiles\Corporate Documentation, and then click OK.
5.
In the Create File Management Task window, on the Action tab, in the Type drop-down list box,
ensure that File expiration is selected, and then in the Expiration directory box, type
E:\Labfiles\Expired.
6.
7.
In the Add Notification window, on the Event Log tab, select the Send warning to event log check
box, and then click OK.
8.
Click the Condition tab, select the Days since file was last modified check box, and then in the
same row, replace the default value of 0 with 1.
Note: This value is for lab purposes only. In a real scenario, the value would be 365 days or
more, depending on the organization's policy.
9.
Click the Schedule tab, ensure that the Weekly radio button is selected, select the Sunday check
box, and then click OK.
In the File Server Resource Manager, click File Management Tasks, right-click Expired Corporate
Documents, and then click Run File Management Task Now.
2.
In the Run File Management Task window, click Wait for the task to complete, and then click OK.
3.
Review the File management task report that displays in Internet Explorer, and ensure that the report
lists the same number of classified files as in the Corporate Documentation folder.
4.
5.
In the Event Viewer console, expand Windows Logs, and then click Application.
6.
Review events with numbers 908 and 909. Notice that event 908 indicates that FSRM started a file management job, and that event 909 indicates that FSRM finished a file management job.
7.
2.
On the Virtual Machines list, right-click 20412D-LON-SVR1, and then click Revert.
3.
Results: After completing this exercise, you will have configured a File Classification Infrastructure so that
the latest version of the documentation is always available to users.
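Note: The file management task can also be run, and the resulting events checked, from Windows PowerShell. This is a sketch only; it assumes the FileServerResourceManager module on LON-SVR1 and the event IDs noted above.
# Run the file expiration task created in this exercise
Start-FsrmFileManagementJob -Name "Expired Corporate Documents"
# Check the Application log for FSRM events 908 (job started) and 909 (job finished)
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 908, 909 } | Format-Table TimeCreated, Id, Message -AutoSize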
2.
3.
In the Add Roles and Features Wizard, on the Before You Begin page, click Next.
4.
5.
On the Select destination server page, ensure that Select server from the server pool is selected,
and then click Next.
6.
On the Select server roles page, expand File And Storage Services (3 of 12 installed), expand File
and iSCSI Services (2 of 11 installed), select the BranchCache for Network Files check box, and
then click Next.
7.
8.
9.
10. Right-click the Start charm, and then click Run. In the Run text box, type gpedit.msc, and then press Enter.
11. In the Local Group Policy Editor console, in the navigation pane, under Computer Configuration,
expand Administrative Templates, expand Network, and then click Lanman Server.
12. On the Lanman Server result pane, in the Setting list, right-click Hash Publication for BranchCache,
and then click Edit.
13. In the Hash Publication for BranchCache dialog box, click Enabled, in the Hash publication
actions list, select the Allow hash publication only for shared folders on which BranchCache is
enabled, and then click OK.
In the Local Group Policy Editor console, in the navigation pane, under Computer Configuration,
expand Windows Settings, right-click Policy-based QoS, and then click Create new policy.
2.
In the Policy-based QoS Wizard, on the Create a QoS policy page, in the Policy name text box, type
Limit to 100 Kbps, and then select the Specify Outbound Throttle Rate check box. In the Specify
Outbound Throttle Rate text box, type 100, and then click Next.
3.
4.
On the Specify the source and destination IP addresses page, click Next.
5.
On the Specify the protocol and port numbers page, click Finish.
6.
2.
3.
In the Local Disk (C:) window, on the menu, click the Home tab, and then click New Folder.
4.
5.
6.
In the Share Properties dialog box, on the Sharing tab, click Advanced Sharing.
7.
Select the Share this folder check box, and click Caching.
8.
In the Offline Settings dialog box, select the Enable BranchCache check box, and then click OK.
9.
On LON-DC1, in the Server Manager, click Tools, and then click Group Policy Management.
2.
In the Group Policy Management console, expand Forest: Adatum.com, expand Domains, expand
Adatum.com, right-click Default Domain Policy, and then click Edit.
3.
In the Group Policy Management Editor, in the navigation pane, under Computer Configuration,
expand Policies, expand Windows Settings, expand Security Settings, and then expand Windows
Firewall with Advanced Security.
4.
In Windows Firewall with Advanced Security, in the navigation pane, expand Windows Firewall with
Advanced Security, and then click Inbound Rules.
5.
In the Group Policy Management Editor, right-click Inbound Rules, and then click New Rule.
6.
In the New Inbound Rule Wizard, on the Rule Type page, click Predefined, click BranchCache
Content Retrieval (Uses HTTP), and then click Next.
7.
8.
On the Action page, click Finish to create the firewall inbound rule.
9.
In the Group Policy Management Editor, in the navigation pane, click Inbound Rules, and then in the
Group Policy Management Editor, on the Action menu, click New Rule.
10. On the Rule Type page, click Predefined, click BranchCache Peer Discovery (Uses WSD), and
then click Next.
11. On the Predefined Rules page, click Next.
12. On the Action page, click Finish.
13. Close the Group Policy Management Editor and Group Policy Management Console.
14. Right-click the Start charm, and then click Run. In the Run box, type CMD, and then press Enter.
15. At the command prompt, type gpupdate /force, and then press Enter.
Results: At the end of this exercise, you will have deployed BranchCache, configured a slow link, and
enabled BranchCache on a file share.
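Note: The BranchCache for Network Files role service installed above corresponds to the FS-BranchCache feature name in Windows PowerShell. The following one-line sketch assumes it is run locally on LON-DC1; hash publication is still configured through Group Policy, as in the steps above.
# Install the BranchCache for Network Files role service on the content server
Install-WindowsFeature FS-BranchCache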
2.
In the Add Roles and Features Wizard, on the Before You Begin page, click Next.
3.
4.
On the Select destination server page, ensure that Select server from the server pool is selected,
and then click Next.
5.
On the Select server roles page, expand File And Storage Services (1 of 12 Installed), expand File
and iSCSI Services, and then select the BranchCache for Network Files check box.
6.
In the Add Roles and Features Wizard dialog box, click Add Features.
7.
8.
On the Select features page, click BranchCache, and then click Next.
9.
On the Confirm installation selections page, click Install, and then click Close.
2.
In the Windows PowerShell window, type the following cmdlet, and then press Enter:
Enable-BCHostedServer -RegisterSCP
3.
In the Windows PowerShell window, type the following cmdlet, and then press Enter:
Get-BCStatus
4.
Ensure that BranchCache is enabled and running. Note in the DataCache section, the current active
cache size is zero.
5.
Results: At the end of this exercise, you will have enabled the BranchCache server in the branch office.
2.
In Server Manager, on the menu bar, click Tools, and then in the Tools drop-down list box, select
Group Policy Management.
3.
In the Group Policy Management console, expand Forest: Adatum.com, expand Domains, expand
Adatum.com, right-click Adatum.com, and then click New Organizational Unit.
4.
In the New Organizational Unit dialog box, type Branch in the Name field, and then click OK.
5.
Right-click the Branch OU and click Create a GPO in this domain, and link it here.
6.
In the New GPO dialog box, type BranchCache, and then click OK.
7.
Expand the Branch OU and right-click the BranchCache GPO and click Edit.
8.
In the Group Policy Management Editor, in the navigation pane, under Computer Configuration,
expand Policies, expand Administrative Templates, expand Network, and then click BranchCache.
9.
In the BranchCache results pane, in the Setting list, right-click Turn on BranchCache, and then click
Edit.
10. In the Turn on BranchCache dialog box, click Enabled, and then click OK.
11. In the BranchCache results pane, in the Setting list, right-click Enable Automatic Hosted Cache
Discovery by Service Connection Point, and then click Edit.
12. In the Enable Automatic Hosted Cache Discovery by Service Connection Point dialog box, click
Enabled, and then click OK.
13. In the BranchCache results pane, in the Setting list, right-click Configure BranchCache for network
files, and then click Edit.
14. In the Configure BranchCache for network files dialog box, click Enabled, in the Type the
maximum round trip network latency (milliseconds) after which caching begins text box, type
0, and then click OK. This setting is required to simulate access from a branch office and is not
typically required.
15. Close the Group Policy Management Editor.
16. Close the Group Policy Management Console.
17. In Server Manager, on the menu bar, click Tools, and then in the Tools drop-down list box, select
Active Directory Users and Computers.
18. Expand Adatum.com and click the Computers container.
19. While pressing the Ctrl key, select both LON-CL1 and LON-CL2. Right-click the selection and click
Move.
20. Click the Branch OU and then click OK.
21. Close Active Directory Users and Computers.
22. Start 20412D-LON-CL1, and sign in as Adatum\Administrator with the password Pa$$w0rd.
23. On the Start screen, in the lower-right corner of the screen, click Search, in the Search text box, type
cmd, and then press Enter.
24. At the command prompt, type the following command, and then press Enter:
netsh branchcache show status all
25. Verify that the BranchCache Current Status is Running. If the status is Stopped, restart the client
machines.
26. Start 20412D-LON-CL2, and sign in as Adatum\Administrator with the password Pa$$w0rd.
27. On the Start screen, in the lower-right corner of the screen, click Search, in the Search text box, type
power, and then press Enter.
28. In the Windows PowerShell window, type the following command, and then press Enter:
netsh branchcache show status all
29. Verify that BranchCache Current Status is Running. If the status is Stopped, restart the client.
Results: At the end of this exercise, you will have configured the client computers for BranchCache.
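Note: In this lab, the clients are configured through Group Policy. As an illustration only, a client can also be placed in hosted cache mode directly with the BranchCache module; the server name below is the hosted cache server used in this lab.
# Configure a client for hosted cache mode without Group Policy (illustrative alternative)
Enable-BCHostedClient -ServerNames LON-SVR2.adatum.com
# Verify the client configuration and status
Get-BCStatus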
Switch to LON-SVR2.
2.
In Server Manager, on the menu bar, click Tools, and then from the Tools drop-down list box, click
Performance Monitor.
3.
In the Performance Monitor console, in the navigation pane, under Monitoring Tools, click
Performance Monitor.
4.
5.
In the Performance Monitor results pane, click the Add (Ctrl+N) icon.
6.
In the Add Counters dialog box, under Select counters from computer, click BranchCache, click
Add, and then click OK.
7.
Switch to LON-CL1.
2.
Point to the lower-right corner of the screen, click Search, in the Search text box, type perfmon, and
then press Enter.
3.
In the Performance Monitor console, in the navigation pane, under Monitoring Tools, click
Performance Monitor.
4.
5.
In the Performance Monitor results pane, click the Add (Ctrl+N) icon.
6.
In the Add Counters dialog box, under Select counters from computer, click BranchCache, click
Add, and then click OK.
7.
Change graph type to Report. Notice that the value of all performance statistics is zero.
Switch to LON-CL2.
2.
Point to the lower-right corner of the screen, click Search, in the Search text box, type perfmon, and
then press Enter.
3.
In the Performance Monitor console, in the navigation pane, under Monitoring Tools, click
Performance Monitor.
4.
5.
In the Performance Monitor results pane, click the Add (Ctrl+N) icon.
6.
In the Add Counters dialog box, under Select counters from computer, click BranchCache, click
Add, and then click OK.
7.
Change graph type to Report. Notice that the value for all performance statistics is zero.
Switch to LON-CL1.
2.
3.
In File Explorer address bar, type \\LON-DC1.adatum.com\Share, and then press Enter.
4.
In the Share window, in the Name list, right-click mspaint.exe, and then click Copy.
5.
6.
7.
Read the performance statistics on LON-CL1. This file was retrieved from LON-DC1 (Retrieval: Bytes
from Server). After the file was cached locally, it was passed up to the hosted cache. (Retrieval: Bytes
Served)
8.
Switch to LON-CL2.
9.
10. In the File Explorer address bar, type \\LON-DC1.adatum.com\Share, and then press Enter.
11. In the Share window, in the Name list, right-click mspaint.exe, and then click Copy.
12. In the Share window, click Minimize.
13. On the desktop, right-click anywhere, and then click Paste.
14. Read the performance statistics on LON-CL2. This file was obtained from the hosted cache (Retrieval:
Bytes from Cache).
15. Read the performance statistics on LON-SVR2. This server has offered cached data to clients (Hosted
Cache: Client file segment offers made).
16. On LON-SVR2, on the taskbar, click the Windows PowerShell icon.
17. In the Windows PowerShell window, type the following cmdlet, and then press Enter:
Get-BCStatus
Note: In the DataCache section, the current active cache size is no longer zero; it is 6560896.
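Note: The BranchCache performance counters added in Performance Monitor above can also be read from Windows PowerShell with Get-Counter. This is a sketch only.
# Read all BranchCache performance counters once (equivalent to the Report view used above)
Get-Counter -Counter (Get-Counter -ListSet "BranchCache").Paths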
2.
On the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Repeat steps two and three for 20412D-LON-SVR2, 20412D-LON-CL1, and 20412D-LON-CL2.
Results: At the end of this exercise, you will have verified that BranchCache is working as expected.
On LON-DC1, in Server Manager, click on Tools, and then click Active Directory Domains and
Trusts.
2.
In the Active Directory Domains and Trusts console, right-click Adatum.com and select Raise
Domain Functional Level.
3.
In the Raise domain functional level window, in the Select an available domain functional level
window, select Windows Server 2012 and click Raise.
4.
Click OK twice.
5.
Right-click Active Directory Domains and Trusts [LON-DC1.Adatum.com] and click Raise Forest
Functional Level.
6.
In the Raise forest functional level window, in the Select an available forest functional level
window, select Windows Server 2012 and click Raise.
7.
Click OK twice.
8.
9.
On LON-DC1, in Server Manager, click Tools, and then click Active Directory Users and Computers.
10. In Active Directory Users and Computers, right-click Adatum.com, click New, and then click
Organizational Unit.
11. In the New Object - Organizational Unit dialog box, in the Name field, type DAC-Protected, and
then click OK.
12. Click the Computers container.
13. Right-click the LON-SVR1 and then click Move.
14. In the Move window, click DAC-Protected, and then click OK.
15. Repeat steps 13 and 14 for the LON-CL1 computer.
16. Close Active Directory Users and Computers.
17. On LON-DC1, in Server Manager, click Tools, and then click Group Policy Management.
18. Expand Forest: Adatum.com, expand Domains, and then expand Adatum.com.
19. Click the Group Policy Objects container.
20. In the results pane, right-click Default Domain Controllers Policy, and then click Edit.
21. In the Group Policy Management Editor, under Computer Configuration, expand Policies, expand
Administrative Templates, expand System, and then click KDC.
22. In the details pane, double-click KDC support for claims, compound authentication and Kerberos
armoring.
23. In the KDC support for claims, compound authentication and Kerberos armoring window, select
Enabled, in the Options section, click the drop-down list box, select Always provide claims, and
then click OK.
24. Close Group Policy Management Editor and the Group Policy Management Console.
25. On the taskbar, click the Windows PowerShell icon.
26. In the Windows PowerShell window, type gpupdate /force, and then press Enter. After Group Policy
updates, close Windows PowerShell.
27. On LON-DC1, in Server Manager, click Tools, and then click Active Directory Users and Computers.
28. Expand Adatum.com, right-click Users, click New, and then click Group.
29. In the Group name field, type ManagersWKS, and then click OK.
30. Click the DAC-Protected container, right-click LON-CL1, and then click Properties.
31. Click the Member Of tab, and then click Add.
32. In Select Groups window, type ManagersWKS, click Check Names, and then click OK twice.
33. Click the Managers OU, right-click Aidan Delaney, and then click Properties.
34. In the Aidan Delaney Properties window, click the Organization tab. Ensure that the Department
field is populated with the value Managers, and then click Cancel.
35. Click the Research OU, right-click Allie Bellew, and then click Properties.
36. In the Allie Bellew Properties window, click the Organization tab. Ensure that the Department field
is populated with the value Research, and then click Cancel.
37. Close Active Directory Users and Computers.
On LON-DC1, click Tools, and then click Active Directory Administrative Center.
2.
In the Active Directory Administrative Center, in the navigation pane, click Dynamic Access Control,
and then double-click Claim Types.
3.
In the Claim Types container, in the Tasks pane, click New, and then click Claim Type.
4.
In the Create Claim Type window, in the Source Attribute section, select department.
5.
6.
7.
Scroll down to the Suggested Values section, and then select The following values are suggested.
8.
Click Add.
9.
In the Add a suggested value window, type Managers in both Value and Display name fields, and
click OK.
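Note: The department-based claim type can also be created with the Active Directory module. The following is an approximate sketch; the ADSuggestedValueEntry constructor arguments and the display name are assumptions based on the values used later in this lab.
# Build the suggested value for the claim (arguments are value, display name, and description)
$managers = New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("Managers","Managers","")
# Create a claim type sourced from the department attribute
New-ADClaimType -DisplayName "Company department" -SourceAttribute department -SuggestedValues $managers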
2.
3.
In the Resource properties list, right-click Department, and then click Enable.
4.
In the Resource properties list, right-click Confidentiality, and then click Enable.
5.
In the Resource Property List, ensure that both the Department and Confidentiality properties are
enabled.
6.
Double-click Department, scroll down to the Suggested Values section, and then click Add.
7.
In the Add a suggested value window, in both Value and Display name text boxes, type Research,
and then click OK twice.
8.
Click Dynamic Access Control, and then double-click Resource Property Lists.
9.
In the central pane, double-click Global Resource Property List, ensure that both Department and
Confidentiality display, and then click Cancel. If they do not display, click Add, add these two
properties, and then click OK.
On LON-SVR1, in Server Manager, click Tools, and then click File Server Resource Manager.
2.
3.
4.
5.
Click Classification Rules, and in the Actions pane, click Create Classification Rule.
6.
In the Create Classification Rule window, for the Rule name, type Set Confidentiality.
7.
8.
In the Browse For Folder dialog box, expand Local Disk (C:), click the Docs folder, and then click
OK.
9.
Click the Classification tab. Make sure that the following settings are set, and then click Configure:
Property: Confidentiality
Value: High
10. In the Classification Parameters dialog box, click the Regular expression drop-down list box, and
then click String.
11. In the Expression field next to the word String, type secret, and then click OK.
12. Click the Evaluation Type tab, select Re-evaluate existing property values, click Overwrite the
existing value, and then click OK.
13. In File Server Resource Manager, in the Actions pane, click Run Classification with all rules now.
14. Click Wait for classification to complete, and then click OK.
15. After the classification is complete, you will be presented with a report. Verify that two files were
classified. You can confirm this in the Report Totals section.
2.
3.
4.
Click Department.
5.
6.
Click OK.
Results: After completing this exercise, you will have prepared Active Directory Domain Services (AD DS)
for Dynamic Access Control (DAC) deployment, configured claims for users and devices, and configured
resource properties to classify files.
On LON-DC1, in Server Manager, click Tools, and then click Active Directory Administrative
Center.
2.
In the Active Directory Administrative Center, in the navigation pane, click Dynamic Access Control,
and then double-click Central Access Rules.
3.
In the Tasks pane, click New, and then click Central Access Rule.
4.
In the Create Central Access Rule dialog box, in the Name field, type Department Match.
5.
6.
7.
8.
9.
24. In the Permissions section, click Use following permissions as current permissions.
25. In the Permissions section, click Edit.
26. Remove permission for Administrators.
27. In Advanced Security Settings for Permissions, click Add.
On LON-DC1, in the Active Directory Administrative Center, click Dynamic Access Control, and then
double-click Central Access Policies.
2.
In the Tasks pane, click New, and then click Central Access Policy.
3.
In the Name field, type Protect confidential docs, and then click Add.
4.
Click the Access Confidential Docs rule, click >>, and then click OK twice.
5.
In the Tasks pane, click New, and then click Central Access Policy.
6.
In the Name field, type Department Match, and then click Add.
7.
Click the Department Match rule, click >>, and then click OK twice.
8.
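Note: Central access rules and policies can also be created with the Active Directory module. The sketch below only creates the objects by name and adds the rules to the policies; the resource conditions and permissions configured in the steps above would still need to be defined and are omitted here. The Access Confidential Docs rule is the rule referenced earlier in this exercise.
# Create the central access rules and policies by name (conditions and permissions not shown)
New-ADCentralAccessRule -Name "Department Match"
New-ADCentralAccessPolicy -Name "Protect confidential docs"
New-ADCentralAccessPolicy -Name "Department Match"
# Add the rules to the policies
Add-ADCentralAccessPolicyMember -Identity "Protect confidential docs" -Members "Access Confidential Docs"
Add-ADCentralAccessPolicyMember -Identity "Department Match" -Members "Department Match"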
On LON-DC1, in Server Manager, click Tools, and then click Group Policy Management.
2.
In the Group Policy Management Console, under Domains, expand Adatum.com, right-click DAC-Protected, and then click Create a GPO in this domain, and link it here.
3.
4.
5.
Expand Computer Configuration, expand Policies, expand Windows Settings, expand Security
Settings, expand File System, right-click Central Access Policy, and then click Manage Central
Access Policies.
6.
Press and hold the Ctrl button, and click both Department Match and Protect confidential docs,
click Add, and then click OK.
7.
Close the Group Policy Management Editor and the Group Policy Management Console.
8.
9.
At a Windows PowerShell command prompt, type gpupdate /force, and then press Enter. Wait until
Group Policy is updated.
10. Close Windows PowerShell when you get the message that both Computer and User policies update
completed successfully.
Results: After completing this exercise, you will have implemented DAC.
2.
3.
Click the Desktop tile, and then on the taskbar, click the File Explorer icon.
4.
In the File Explorer address bar, type \\LON-SVR1\Research, and then press Enter.
5.
Because Allie is a member of the Research team, verify that you can access this folder and open the
documents inside.
6.
7.
8.
Click the Desktop tile, and then on the taskbar, click the File Explorer icon.
9.
10. Verify that you can access this folder and open all the files inside.
11. Sign out of LON-CL1.
2.
Click the Desktop tile, and then on the taskbar, click the File Explorer icon.
3.
In the File Explorer address bar, type \\LON-SVR1\Docs. You should be unable to view Doc1.txt or
Doc2.txt, because LON-CL2 is not permitted to view secret documents.
4.
5.
6.
Click the Desktop tile, and then on the taskbar, click the File Explorer icon.
7.
In the File Explorer address bar, type \\LON-SVR1\Docs, and then press Enter.
8.
In the Docs folder, try to open Doc3.txt. You should be able to open that document. Close Notepad.
9.
In the File Explorer address bar, type \\LON-SVR1\Research, and then press Enter. You should be
unable to access the folder.
2.
In the File Explorer window, navigate to C:\Research, right-click Research, and then click Properties.
3.
In the Properties dialog box, click the Security tab, click Advanced, and then click Effective Access.
4.
Click select a user, and in the Select User, Computer, Service Account, or Group window, type April,
click Check Names, and then click OK.
5.
Click View effective access, and then review the results. The user April should not have access to this
folder.
6.
Click Include a user claim, and then in the drop-down list box, click Company department.
7.
In the Value drop-down box, select Research, and then click View Effective access. April should
now have access.
8.
On LON-DC1, in Server Manager, click Tools, and then click Group Policy Management.
2.
In the Group Policy Management Console, expand Forest: Adatum.com, expand Domains, expand
Adatum.com, and then click Group Policy objects.
3.
4.
5.
In the details pane, double-click Customize Message for Access Denied errors.
6.
In the Customize Message for Access Denied errors window, click Enabled.
7.
In the Display the following message to users who are denied access text box, type You are
denied access because of permission policy. Please request access.
8.
9.
Review the other options, but do not make any changes, and then click OK.
10. In the details pane of the Group Policy Management Editor, double-click Enable access-denied
assistance on client for all file types, click Enabled, and then click OK.
11. Close the Group Policy Management Editor and the Group Policy Management Console.
12. Switch to LON-SVR1, and on the taskbar, click the Windows PowerShell icon.
13. At the Windows PowerShell command prompt, type gpupdate /force, and then press Enter. Wait
until Group Policy is updated.
2.
Click the Desktop tile, and then on the taskbar, click the File Explorer icon.
3.
In the File Explorer address bar, type \\LON-SVR1\Research, and then press Enter. You should be
unable to access the folder.
4.
Click Request assistance. Review the options for sending a message, and then click Close.
5.
Results: After completing this exercise, you will have validated DAC functionality.
2.
3.
On the Select installation type page, ensure that Role-based or feature-based installation is
selected, and then click Next.
4.
5.
On the Select server roles page, expand File and Storage Services (1 of 2 installed), expand File
and iSCSI Services, and then select Work Folders.
6.
In the Add features that are required for Work Folders dialog box, note the features, and then
click Add Features.
7.
8.
9.
b.
Organization: Adatum
c.
Organizational unit: IT
d.
City/locality: Seattle
e.
State/province: WA
f.
Country/region: US
On LON-SVR2, in Server Manager, in the navigation pane, click File and Storage Services.
2.
Click Shares, and in the SHARES area, click Tasks, and then select New Share.
3.
In the New Share Wizard, on the Select the profile for this share page, ensure that SMB Share
Quick is selected, and then click Next.
4.
On the Select the server and path for this share page, accept the defaults, and then click Next.
5.
On the Specify share name page, in the Share name field, type WF-Share, and then click Next.
6.
On the Configure share settings page, select Enable access-based enumeration, leave the other
settings at their defaults, and then click Next.
7.
On the Specify permissions to control access page, note the default settings, and then click Next.
8.
9.
On LON-SVR2, in Server Manager, expand File and Storage Services, and then click Work Folders.
2.
In the WORK FOLDERS tile, click Tasks, and then click New Sync Share.
3.
In the New Sync Share Wizard, on the Before you begin page, click Next.
4.
On the Select the server and path page, select Select by file share, ensure that the share you
created in the previous task (WF-Share) is highlighted, and then click Next.
5.
On the Specify the structure for user folders page, accept the default selection (user alias), and
then click Next.
6.
On the Enter the sync share name page, accept the default, and then click Next.
7.
On the Grant sync access to groups page, note the default selection to disable inherited
permissions and grant users exclusive access, and then click Add.
8.
In the Select User or Group dialog box, in the Enter the object names to select field, type WFsync,
click Check Names, and then click OK.
9.
10. On the Specify device policies page, note the selections, accept the default selection, and then click
Next.
11. On the Confirm selections page, click Create.
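Note: The sync share can also be created with the SyncShare module in Windows PowerShell. The following is a sketch only; the folder path shown is the default used by the New Share Wizard and is an assumption, and WFSync is the group used in the steps above.
# Create a sync share for the WF-Share folder and grant the WFSync group access
New-SyncShare -Name "WF-Share" -Path "C:\Shares\WF-Share" -User "Adatum\WFSync"
# List the sync shares on the server
Get-SyncShare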
2.
On Start screen, start typing PowerShell, and then click the Windows PowerShell icon in the Search
pane.
3.
At the Windows PowerShell command prompt, type gpupdate /force, and then press Enter.
4.
5.
Note: The presence of the Work Folders folder indicates that the Work Folders
configuration is successful.
6.
In File Explorer, create a few text files in the Work Folders folder.
Note: File Explorer displays the synchronization status of the files in the Work Folders folder.
7.
Right-click the Windows button on the taskbar, and then click Control Panel.
8.
In Control Panel, click System and Security, and then click Work Folders.
9.
13. On Start screen, start typing PowerShell, and then click the Windows PowerShell icon in the Search
pane.
14. At the Windows PowerShell command prompt, type gpupdate /force, and then press Enter.
15. Open File Explorer and navigate to This PC.
16. Verify that the WorkFolders folder is created.
17. Right-click the Windows button on the taskbar, and then click Control Panel.
18. In Control Panel, click System and Security, and then click Work Folders.
19. Click Apply policies. Click Yes.
20. Open the Work Folders folder and verify that the files that you created on LON-CL1 are present.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Repeat steps two and three for 20412D-LON-SVR1, 20412D-LON-SVR2, 20412D-LON-CL1, and
20412D-LON-CL2.
Results: After completing this exercise, you will have configured Work Folders.
On TOR-DC1, in the Server Manager, click Manage, and from the drop-down list box, click Add
Roles and Features.
2.
3.
On the Select installation type page, confirm that Role-based or feature-based installation is
selected, and then click Next.
4.
On the Select destination server page, ensure that Select a server from the server pool is
selected, and that TOR-DC1.Adatum.com is highlighted, and then click Next.
5.
On the Select server roles page, click Active Directory Domain Services.
6.
On the Add features that are required for Active Directory Domain Services? page, click Add
Features.
7.
8.
9.
10. On the Confirm installation selections page, click Install. (This may take a few minutes to
complete.)
11. When the Active Directory Domain Services (AD DS) binaries have installed, click the blue Promote
this server to a domain controller link.
12. In the Deployment Configuration window, click Add a new domain to an existing forest.
13. Verify that Select domain type is set to Child Domain, and that Parent domain name is set to
Adatum.com. In the New domain name text box, type na.
14. Confirm that Supply the credentials to perform this operation is set to ADATUM\administrator
(Current user), and then click Next.
Note: If this is not the case, then use the Change button to enter the credentials
Adatum\Administrator and the password Pa$$w0rd.
15. In the Domain Controller Options window, ensure that Domain functional level is set to Windows
Server 2012 R2.
16. Ensure that both the Domain Name System (DNS) server and Global Catalog (GC) check boxes are
selected.
17. Confirm that Site name: is set to Default-First-Site-Name.
18. Under Type the Directory Services Restore Mode (DSRM) password, type Pa$$w0rd in both text
boxes, and then click Next.
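Note: The na.adatum.com child domain can also be created with the ADDSDeployment module instead of the wizard. This is a sketch only, using the lab's values; it prompts for the Adatum\Administrator credentials and the DSRM password.
# Install the AD DS role, and then create the na child domain of Adatum.com
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomain -NewDomainName na -ParentDomainName Adatum.com -DomainType ChildDomain `
    -DomainMode Win2012R2 -InstallDns -CreateDnsDelegation `
    -Credential (Get-Credential Adatum\Administrator) `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password")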
2.
3.
Verify that Windows Firewall shows Domain: Off. If it does not, then next to Ethernet, click
172.16.0.25, IPv6 enabled. Right-click Ethernet, and then click Disable. Right-click Ethernet, and
then click Enable. The Local Area Connection should now show Adatum.com.
4.
In the Server Manager, from the Tools menu, click Active Directory Domains and Trusts.
5.
In the Active Directory Domains and Trusts console, expand Adatum.com, right-click
na.adatum.com, and then click Properties.
6.
In the na.adatum.com Properties dialog box, click the Trusts tab, and in the Domains trusted by
this domain (outgoing trusts) box, click Adatum.com, and then click Properties.
7.
In the Adatum.com Properties dialog box, click Validate, and then click Yes, validate the
incoming trust.
8.
In the User name text box, type administrator, and in the Password text box, type Pa$$w0rd, and
then click OK.
9.
When the message The trust has been validated. It is in place and active displays, click OK.
Note: If you receive a message that the trust cannot be validated, or that the secure
channel (SC) verification has failed, ensure that you have completed step 2, and then wait for at
least 10 to 15 minutes. You can continue with the lab and come back later to verify this step.
10. Click OK twice to close the Adatum.com Properties dialog box.
Results: After completing this exercise, you will have implemented child domains in AD DS.
On LON-DC1, in the Server Manager, click the Tools menu, and then from the drop-down menu,
click DNS.
2.
In the DNS tree pane, expand LON-DC1, right-click Forward Lookup Zones, and then click New
Zone.
3.
4.
On the Zone Type page, click Stub zone, and then click Next.
5.
On the Active Directory Zone Replication Scope page, click To all DNS servers running on
domain controllers in this forest: adatum.com, and then click Next.
6.
In the Zone name: text box, type treyresearch.net, and then click Next.
7.
On the Master DNS Servers page, click <Click here to add an IP Address or DNS Name>, type
172.16.10.10, click on the free space, and then click Next.
8.
On the Completing the New Zone Wizard page, click Next, and then click Finish.
9.
Select and then right-click the new stub zone TreyResearch.net, and then click Transfer from
Master.
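Note: The stub zone can also be created with a single Windows PowerShell command on LON-DC1. This is a sketch only, using the master server address from the steps above.
# Create an Active Directory-integrated stub zone for treyresearch.net, replicated to all DNS servers in the forest
Add-DnsServerStubZone -Name "treyresearch.net" -MasterServers 172.16.10.10 -ReplicationScope Forest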
On LON-DC1, from the Tools menu, click Active Directory Domains and Trusts.
2.
In the Active Directory Domains and Trusts management console window, right-click
Adatum.com, and then click Properties.
3.
In the Adatum.com Properties dialog box, click the Trusts tab, and then click New Trust.
4.
5.
In the Name text box, type treyresearch.net, and then click Next.
6.
On the Trust Type page, click Forest trust, and then click Next.
7.
On the Direction of Trust page, click One-way: outgoing, and then click Next.
8.
On the Sides of Trust page, click Both this domain and the specified domain, and then click Next.
9.
On the User Name and Password page, type Administrator as the user name and Pa$$w0rd as the
password in the appropriate boxes, and then click Next.
10. On the Outgoing Trust Authentication Level - Local Forest page, click Selective authentication,
and then click Next.
11. On the Trust Selections Complete page, click Next.
12. On the Trust Creation Complete page, click Next.
13. On the Confirm Outgoing Trust page, click Next.
14. Click Finish.
15. In the Adatum.com Properties dialog box, click the Trusts tab.
16. On the Trusts tab, under Domains trusted by this domain (outgoing trusts), click
TreyResearch.net, and then click Properties.
17. In the treyresearch.net Properties dialog box, click Validate.
18. Review the message that displays: The trust has been validated. It is in place and active.
19. Click OK, and then click No at the prompt.
20. Click OK twice.
21. Close Active Directory Domains and Trusts.
On LON-DC1, in the Server Manager, from the Tools menu, click Active Directory Users and
Computers.
2.
In the Active Directory Users and Computers console, from the View menu, click Advanced Features.
3.
4.
5.
In the LON-SVR2 Properties dialog box, click the Security tab, and then click Add.
6.
On the Select Users, Computers, Service Accounts, or Groups page, click Locations.
7.
8.
In the Enter the object name to select (examples:) text box, type IT, and then click Check Names.
When prompted for credentials, type treyresearch\administrator with the password Pa$$w0rd, and
then click OK.
9.
On the Select Users, Computers, Service Accounts, or Groups page, click OK.
10. In the LON-SVR2 Properties window, ensure that IT (TreyResearch\IT) is highlighted, select the
Allow check box that is in line with Allowed to authenticate, and then click OK.
11. Switch to LON-SVR2.
12. On the taskbar, click the File Explorer icon.
13. In the File Explorer window, click Local Disk (C:).
14. Right-click in the details pane, click New, and then click Folder.
15. In the Name text box, type IT-Data, and then press Enter.
16. Right-click IT-Data, point to Share with, and then click Specific People.
17. In the File Sharing dialog box, type TreyResearch\IT, and then click Add.
18. Click Read, and then click Read/Write. Click Share, and then click Done.
19. Sign out of TREY-DC1.
20. Sign in to TREY-DC1 as TreyResearch\Alice with the password Pa$$w0rd.
21. Hover your pointer in the lower-right corner of the desktop, and when the sidebar displays, click
Search.
22. In the Search text box, type \\LON-SVR2\IT-Data, and then press Enter. The folder will open.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have implemented forest trusts.
On TOR-DC1, in Server Manager, click Manage, and from the drop-down list box, click Add Roles
and Features.
2.
3.
On the Select installation type page, confirm that Role-based or feature-based installation is
selected, and then click Next.
4.
On the Select destination server page, ensure that Select a server from the server pool is
selected, and that TOR-DC1.adatum.com is highlighted, and then click Next.
5.
On the Select server roles page, click Active Directory Domain Services.
6.
On the Add features that are required for Active Directory Domain Services? page, click Add
Features, and then click Next.
7.
8.
9.
On the Confirm installation selections page, click Install. (This may take a few minutes to
complete.)
10. When the AD DS binaries have installed, do not click Close, but click the blue Promote this server to
a domain controller link.
11. In the Deployment Configuration window, click Add a domain controller to an existing domain,
and then click Next.
12. In the Domain Controller Options window, ensure that both the Domain Name System (DNS) server and Global Catalog (GC) check boxes are selected.
13. Confirm that Site name: is set to Default-First-Site-Name, and then under Type the Directory
Services Restore Mode (DSRM) password, type Pa$$w0rd in both the Password and Confirm
password boxes. Click Next.
14. On the DNS Options page, click Next.
15. In the Additional Options page, click Next.
16. In the Paths window, click Next.
17. In the Review Options window, click Next.
18. In the Prerequisites Check window, click Install. The server will restart automatically.
19. After TOR-DC1 restarts, sign in as Adatum\Administrator with the password Pa$$w0rd.
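Note: The role installation and promotion performed in this task can also be scripted. The following is a minimal Windows PowerShell sketch of an equivalent deployment, run on TOR-DC1 with the same options that the wizard uses:
# Install the AD DS binaries, then promote the server into the existing domain
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName adatum.com -InstallDns -SiteName Default-First-Site-Name -SafeModeAdministratorPassword (ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force)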
2.
In Server Manager, click Tools, and then click Active Directory Sites and Services.
3.
In Active Directory Sites and Services, in the navigation pane, expand Sites.
4.
5.
6.
Expand LondonHQ, expand the Servers folder, and then verify that both LON-DC1 and TOR-DC1
belong to the LondonHQ site.
If necessary, on LON-DC1, open the Server Manager console, and then open Active Directory Sites
and Services.
2.
In the Active Directory Sites and Services console, in the navigation pane, expand Sites, and then click
the Subnets folder.
3.
4.
In the New Object Subnet dialog box, under Prefix, type 172.16.0.0/24.
5.
Under Select a site object for this prefix, click LondonHQ, and then click OK.
Results: After completing this exercise, you will have reconfigured the default site and assigned IP address
subnets to the site.
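Note: The subnet created in this exercise could also be added with the Active Directory module for Windows PowerShell; a sketch, assuming the LondonHQ site already exists:
New-ADReplicationSubnet -Name "172.16.0.0/24" -Site LondonHQ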
L5-47
If necessary, on LON-DC1, open the Server Manager console, click Tools, and then click Active
Directory Sites and Services.
2.
In the Active Directory Sites and Services console, in the navigation pane, right-click Sites, and then
click New Site.
3.
In the New Object Site dialog box, next to Name, type Toronto.
4.
Under Select a site-link object for this site, select DEFAULTIPSITELINK, and then click OK.
5.
In the Active Directory Domain Services dialog box, click OK. The Toronto site displays in the
navigation pane.
6.
In the Active Directory Sites and Services console, in the navigation pane, right-click Sites, and then
click New Site.
7.
In the New Object Site dialog box, next to Name, type Test.
8.
Under Select a site-link object for this site, select DEFAULTIPSITELINK, and then click OK. The
Test site displays in the navigation pane.
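Note: The Toronto and Test sites could also be created with Windows PowerShell; a minimal sketch:
New-ADReplicationSite -Name Toronto
New-ADReplicationSite -Name Test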
If necessary, on LON-DC1, open the Server Manager console, click Tools, and then click Active
Directory Sites and Services.
2.
In the Active Directory Sites and Services console, in the navigation pane, expand Sites, and then click
the Subnets folder.
3.
4.
In the New Object Subnet dialog box, under Prefix, type 172.16.1.0/24.
5.
Under Select a site object for this prefix, click Toronto, and then click OK.
6.
7.
In the New Object Subnet dialog box, under Prefix, type 172.16.100.0/24.
8.
Under Select a site object for this prefix, click Test, and then click OK.
9.
In the navigation pane, click the Subnets folder. Verify in the details pane that the two subnets are
created and associated with their appropriate site. Note that there are three subnets in total
(172.16.0.0 was created in Exercise 1 Task 3).
Results: After this exercise, you will have created two additional sites representing the IP subnet addresses
located in Toronto.
If necessary, on LON-DC1, open the Server Manager console, click Tools, and then click Active
Directory Sites and Services.
2.
In the Active Directory Sites and Services console, in the navigation pane, expand Sites, expand Inter-Site Transports, and then click the IP folder.
3.
4.
In the New Object Site Link dialog box, next to Name, type TOR-TEST.
5.
Under Sites not in this site link, press CTRL on the keyboard, click Toronto, click Test, click Add,
and then click OK.
6.
7.
8.
In the Schedule for TOR-TEST dialog box, highlight the range from Monday 9 AM to Friday 3 PM,
as follows:
9.
With the mouse button still pressed down, drag the cursor to the Friday at 3:00 PM tile.
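Note: The TOR-TEST site link could also be created with Windows PowerShell. A sketch; the cost and replication interval shown are illustrative assumptions, and the schedule set in steps 8 and 9 is not included:
New-ADReplicationSiteLink -Name TOR-TEST -SitesIncluded Toronto,Test -Cost 100 -ReplicationFrequencyInMinutes 180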
If necessary, on LON-DC1, click Tools, and then click Active Directory Sites and Services.
2.
In Active Directory Sites and Services, in the navigation pane, expand Sites, expand LondonHQ, and
then expand the Servers folder.
3.
4.
In the Move Server dialog box, click Toronto, and then click OK.
5.
In the navigation pane, expand the Toronto site, expand Servers, and then click TOR-DC1.
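Note: Moving a domain controller between sites can also be done with Windows PowerShell; a sketch that mirrors the move performed above:
Move-ADDirectoryServer -Identity TOR-DC1 -Site Toronto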
2.
At the Windows PowerShell prompt, type the following, and then press Enter:
Repadmin /kcc
This command recalculates the inbound replication topology for the server.
L5-49
3.
At the Windows PowerShell prompt, type the following command, and then press Enter:
Repadmin /showrepl
4.
5.
At the Windows PowerShell prompt, type the following command, and then press Enter:
Repadmin /bridgeheads
This command displays the bridgehead servers for the site topology.
6.
At the Windows PowerShell command prompt, type the following, and then press Enter:
Repadmin /replsummary
This command displays a summary of replication tasks. Verify that no errors appear.
7.
At the Windows PowerShell command prompt, type the following, and then press Enter:
DCDiag /test:replications
8.
9.
Switch to TOR-DC1, and then repeat steps 1 through 8 to view information from TOR-DC1. For step 4,
verify that the last replication with LON-DC1 was successful.
Results: After this exercise, you will have configured site-links and monitored replication.
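Note: In addition to Repadmin and DCDiag, the Active Directory module includes replication cmdlets that report similar information; a sketch, run from either domain controller:
# Show inbound replication partner metadata and any logged failures for TOR-DC1
Get-ADReplicationPartnerMetadata -Target TOR-DC1
Get-ADReplicationFailure -Target TOR-DC1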
If necessary, on LON-DC1, click Tools, and then click Active Directory Sites and Services.
2.
In Active Directory Sites and Services, in the navigation pane, expand Sites, expand LondonHQ,
expand the Servers folder, expand LON-DC1, and then select NTDS Settings.
3.
In the Details pane, right-click the <automatically generated> connection object, and then click
Replicate Now.
4.
5.
In Active Directory Sites and Services, examine all the objects you created earlier, and on the taskbar,
click the Windows PowerShell icon.
6.
At the Windows PowerShell prompt, type the following, and then press Enter:
Get-ADReplicationUpToDatenessVectorTable -Target adatum.com
This cmdlet will show you the last several replication events. Make a note of the date/time of the last
(top) event.
7.
Go to TOR-DC1.
8.
At the Windows PowerShell prompt, type the following, and then press Enter:
\\LON-DC1\E$\Mod05\Mod05Ex4.ps1
If necessary, on TOR-DC1, open the Server Manager console, click Tools, and then click Active
Directory Sites and Services.
2.
In the Active Directory Sites and Services console, in the navigation pane, expand Sites, then
Toronto, then Servers, then TOR-DC1, and then select NTDS Settings.
3.
In the details pane, right-click the <automatically generated> connection object, and then click Replicate Now.
4.
The Replicate Now pop-up will appear, indicating an error informing you that The RPC service is
unavailable. Click OK on the Replicate Now pop-up.
5.
6.
At the Windows PowerShell prompt, type the following, and then press Enter:
Get-ADReplicationUpToDatenessVectorTable -Target adatum.com
This cmdlet will show you the last several replication events. Note that the last date/time shown
(Replication from LON-DC1) is not updating. This indicates that one-way replication is not occurring.
7.
At the Windows PowerShell prompt, type the following, and then press Enter:
Get-ADReplicationSubnet -Filter *
This cmdlet will show detailed information about any subnets assigned to any sites.
8.
L5-51
9.
At the Windows PowerShell prompt, type the following, and then press Enter:
Get-ADReplicationSiteLink -Filter *
This cmdlet will show detailed information about any site-links assigned to particular sites.
10. Note that nothing is returned.
2.
At the Windows PowerShell prompt, type the following, and then press Enter:
Ipconfig /all
3.
4.
At the Windows PowerShell prompt, type the following, and then press Enter:
Get-DnsClient | Set-DnsClientServerAddress -ServerAddresses
("172.16.0.10","172.16.0.25")
5.
Run the Ipconfig /all command again. You should get proper results.
6.
If necessary, on TOR-DC1, open the Server Manager console, click Tools, and then click Active
Directory Sites and Services.
7.
In the Active Directory Sites and Services console, in the navigation pane, expand Sites, then
Toronto, then Servers, then TOR-DC1, and then select NTDS Settings.
8.
In the details pane, right-click the <automatically generated> connection object, and then click Replicate Now.
9.
The Replicate Now pop-up will appear, without an error. Click OK.
10. At the Windows PowerShell prompt, type the following, and then press Enter:
Get-DnsServer
11. You will get the following error: Failed to retrieve DNS Server configuration information on TOR-DC1.
12. If necessary, on TOR-DC1, open the Server Manager console, click Tools, and then click DNS.
13. In the Connect to DNS Server pop-up window click OK, and then you should get another pop-up
window stating The DNS Server is unavailable. Would you like to add it anyway? Click No, then
Cancel and close any DNS Manager window that appears.
14. At the Windows PowerShell prompt, type the following, and then press Enter:
Get-Service -DisplayName "DNS Server"
After this completes, run the following again to ensure the service is running:
Get-Service -DisplayName "DNS Server"
17. If necessary, on TOR-DC1, open the Server Manager console, click Tools, and then click Active
Directory Sites and Services.
18. In the Active Directory Sites and Services console, in the navigation pane, expand Sites, then
Toronto, then Servers, then TOR-DC1, and then select NTDS Settings.
19. In the details pane, right-click the <automatically generated> connection object, and then click Replicate Now.
20. The Replicate Now pop-up will appear, this time without an error. Click OK.
21. In Active Directory Sites and Services, examine all objects that you created earlier. Are any missing?
22. On TOR-DC1, open File Explorer. In the address bar, type the following, and then press Enter:
\\LON-DC1\E$\Mod05
23. Right-click the file named Mod05EX4Fix.ps1, and select Edit.
24. The Windows PowerShell ISE will open. Examine the cmdlets in the script.
25. Find the section titled #recreate site-links for LON-TOR and TOR-Test. Using the mouse, highlight
all the text on the two lines beginning with $schedule, right-click them, and select Run Selection.
26. Find the section titled #recreate Subnets. In the three New-ADReplicationSubnet lines, note the
value for the Name entry. If it shows 172.16.1.0/22, change this to 172.16.1.0/24. The last number
must be a 4, not a 2. If the third line shows Asia in the Site parameter, change it to Test. Using the
mouse, highlight all the text on the three lines beginning with New-ADReplicationSubnet, right-click them, and select Run Selection.
27. In Active Directory Sites and Services, examine all the objects you created earlier. Ensure that the
site-link, in the Inter-Site Transports node, and subnet objects, in the Subnets node, have been
recreated.
28. On LON-DC1 and TOR-DC1, close all open windows and sign off both virtual machines.
2.
On the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
L6-53
Module 6: Implementing AD CS
2.
3.
4.
5.
6.
On the Select server roles page, select Active Directory Certificate Services. When the Add Roles
and Features Wizard displays, click Add Features, and then click Next.
7.
8.
9.
On the Select role services page, ensure that Certification Authority is selected, and then click
Next.
26. In the AdatumRootCA Properties dialog box, click the Extensions tab.
27. On the Extensions tab, in the Select extension drop-down list box, click CRL Distribution Point
(CDP), and then click Add.
28. In the Location box, type http://lon-svr1.adatum.com/CertData/, in the Variable drop-down list
box, click <CaName>, and then click Insert.
29. In the Variable drop-down list box, click <CRLNameSuffix>, and then click Insert.
30. In the Variable drop-down list box, click <DeltaCRLAllowed>, and then click Insert.
31. In the Location box, position the cursor at the end of the URL, type .crl, and then click OK.
32. Select the following options, and then click Apply:
L6-55
54. In the File Explorer address bar, type \\lon-svr1\C$, and then press Enter.
55. Right-click the empty space, and then click Paste.
56. Close File Explorer.
Task 2: Create a DNS host record for LON-CA1 and configure sharing
1.
ON LON-DC1, in the Server Manager, click Tools, and then click DNS.
2.
In the DNS Manager console, expand LON-DC1, expand Forward Lookup Zones, click
Adatum.com, right-click Adatum.com, and then click New Host (A or AAAA).
3.
4.
In the IP address window, type 172.16.0.40, click Add Host, click OK, and then click Done.
5.
6.
Switch to LON-CA1.
7.
8.
In the Control Panel window, click View network status and tasks.
9.
In the Network and Sharing Center window, click Change advanced sharing settings.
10. Under Guest or Public (current profile), select the Turn on file and printer sharing option, and
then click Save changes.
Results: After completing this exercise, you will have deployed a root stand-alone certification authority
(CA).
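Note: The host (A) record created in Task 2 can also be added with the DnsServer module. This is only a sketch, run on LON-DC1; the record name LON-CA1 is an assumption, because the name is specified in step 3 of that task:
Add-DnsServerResourceRecordA -ZoneName Adatum.com -Name LON-CA1 -IPv4Address 172.16.0.40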
2.
3.
4.
5.
6.
On the Select server roles page, select Active Directory Certificate Services.
7.
When the Add Roles and Features Wizard displays, click Add Features, and then click Next.
8.
9.
10. On the Select role services page, ensure that Certification Authority is selected already, and then
select Certification Authority Web Enrollment.
11. When the Add Roles and Features Wizard displays, click Add Features, and then click Next.
12. On the Confirm installation selections page, click Install.
13. On the Installation progress page, after installation is successful, click the text Configure Active
Directory Certificate Services on the destination server.
14. In the AD CS Configuration Wizard, on the Credentials page, click Next.
15. On the Role Services page, select both Certification Authority and Certification Authority Web
Enrollment, and then click Next.
16. On the Setup Type page, select Enterprise CA, and then click Next.
17. On the CA Type page, click Subordinate CA, and then click Next.
18. On the Private Key page, ensure that Create a new private key is selected, and then click Next.
19. On the Cryptography for CA page, keep the default selections, and then click Next.
20. On the CA Name page, in the Common name for this CA box, type Adatum-IssuingCA, and then
click Next.
21. On the Certificate Request page, ensure that Save a certificate request to file on the target
machine is selected, and then click Next.
22. On the CA Database page, click Next.
23. On the Confirmation page, click Configure.
24. On the Results page, click Close.
25. On the Installation progress page, click Close.
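Note: The role installation and configuration performed in this task can also be scripted with the ADCSDeployment cmdlets. A minimal sketch, assuming the same wizard options; the request file path is an assumption:
Install-WindowsFeature ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools
Install-AdcsCertificationAuthority -CAType EnterpriseSubordinateCA -CACommonName "Adatum-IssuingCA" -OutputCertRequestFile C:\IssuingCA.req
Install-AdcsWebEnrollment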
On LON-SVR1, open a File Explorer window, and then navigate to Local Disk (C:).
2.
3.
In the Certificate Import Wizard, click Local Machine, and then click Next.
4.
On the Certificate Store page, click Place all certificates in the following store, and then click
Browse.
L6-57
5.
Select Trusted Root Certification Authorities, click OK, click Next, and then click Finish.
6.
7.
In the File Explorer window, select the AdatumRootCA.crl and LON-CA1_AdatumRootCA.crt files,
right-click the files, and then click Copy.
8.
Double-click inetpub.
9.
Double-click wwwroot.
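Note: Importing the root CA certificate into the local computer's Trusted Root Certification Authorities store (steps 3 through 5) can also be done with Windows PowerShell; a sketch that assumes the .crt file was copied to the root of drive C on LON-SVR1:
Import-Certificate -FilePath C:\LON-CA1_AdatumRootCA.crt -CertStoreLocation Cert:\LocalMachine\Root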
2.
In the Server Manager, click Tools, and then click Group Policy Management.
3.
In the Group Policy Management Console, expand Forest: Adatum.com, expand Domains, expand
Adatum.com, right-click Default Domain Policy, and then click Edit.
4.
In the Computer Configuration node, expand Policies, expand Windows Settings, expand Security
Settings, expand Public Key Policies, right-click Trusted Root Certification Authorities, click
Import, and then click Next.
5.
6.
In the file name box, type \\lon-svr1\C$, and then press Enter.
7.
8.
9.
10. Close the Group Policy Management Editor and the Group Policy Management Console.
Results: After completing this exercise, you will have deployed and configured an enterprise subordinate
CA.
L6-59
2.
In the Certificate Templates console, locate the Web Server template in the list, right-click it, and
then click Duplicate Template.
3.
4.
In the Template display name field, type Adatum WebSrv, and set the Validity period to 3 years.
5.
Click the Request Handling tab, select Allow private key to be exported, and then click OK.
Task 2: Create a new template for users that includes smart card logon
1.
In the Certificate Templates console, right-click the User certificate template, and then click
Duplicate Template.
2.
In the Properties of New Template dialog box, click the General tab, and then in the Template
display name text box, type Adatum User.
3.
On the Subject Name tab, clear both the Include e-mail name in subject name and the E-mail
name check boxes.
4.
On the Extensions tab, click Application Policies, and then click Edit.
5.
6.
In the Add Application Policy dialog box, select Smart Card Logon, and then click OK twice.
7.
8.
9.
On the Security tab, click Authenticated Users. Under Permissions for Authenticated Users, select
the Allow check box for Read, Enroll, and Autoenroll, and then click OK.
On LON-SVR1, in the Certification Authority console, right-click Certificate Templates, point to New,
and then click Certificate Template to Issue.
2.
In the Enable Certificate Templates window, select Adatum User and Adatum WebSrv, and then
click OK.
Task 4: Update the web server certificate on the LON-SVR2 web server
1.
2.
3.
At the Windows PowerShell prompt, type gpupdate /force, and then press Enter.
4.
If prompted, restart the server, and sign in as Adatum\Administrator with the password Pa$$w0rd.
5.
6.
From Server Manager, click Tools, and then click Internet Information Services (IIS) Manager.
7.
In the IIS console, click LON-SVR2 (ADATUM\Administrator), at the Internet Information Services
(IIS) Manager prompt, click No, and then in the central pane, double-click Server Certificates.
8.
9.
On the Distinguished Name Properties page, complete the following fields, and then click Next:
Organization: Adatum
Organizational Unit: IT
City/locality: Seattle
State/province: WA
Country/region: US
Results: After completing this exercise, you will have created and published new certificate templates.
L6-61
On LON-DC1, in the Server Manager, click Tools, and then click Group Policy Management.
2.
Expand Forest: Adatum.com, expand Domains, expand Adatum.com, right-click Default Domain
Policy, and then click Edit.
3.
Expand User Configuration, expand Policies, expand Windows Settings, expand Security Settings,
and then click to highlight Public Key Policies.
4.
5.
6.
Select the Renew expired certificates, update pending certificates, and remove revoked
certificates option.
7.
8.
9.
In the right pane, double-click the Certificate Services Client Certificate Enrollment Policy
object.
10. On the Enrollment Policy tab, set the Configuration Model to Enabled, and ensure that the
certificate enrollment policy list displays the Active Directory Enrollment Policy (it should have a
checkmark next to it, and display a status of Enabled).
11. Click OK to close the window.
12. Close both the Group Policy Management Editor and the Group Policy Management console.
2.
At the Windows PowerShell prompt, type gpupdate /force, and then press Enter.
3.
After the policy refreshes, type mmc.exe, and then press Enter.
4.
In Console1, click File, and then in the File menu, click Add/Remove Snap-in.
5.
6.
7.
Expand Certificates Current User, expand Personal, and then click Certificates.
8.
Verify that a certificate based on Adatum User template is issued for Administrator.
9.
On LON-SVR1, in the Server Manager console, click Tools, and then open Certification Authority.
2.
In the certsrv console, expand Adatum-IssuingCA, right-click Certificate Templates, and then click
Manage.
3.
4.
5.
In the Select Users, Computers, Service Accounts, or Groups window, type Allie, click Check Names,
and then click OK.
6.
On the Security tab, click Allie Bellew, select Allow for Read and Enroll permissions, and then click
OK.
7.
8.
In the certsrv console, right-click Certificate Templates, point to New, and then click Certificate
Template to Issue.
9.
In the list of templates, click Enrollment Agent, and then click OK.
10. Switch to LON-CL1, and sign in as Adatum\Allie with the password Pa$$w0rd.
11. Open a command-prompt window, at the command prompt, type mmc.exe, and then press Enter.
12. In Console1, click File, and then click Add/Remove Snap-in.
13. Click Certificates, click Add, and then click OK.
14. Expand Certificates Current User, expand Personal, click Certificates, right-click Certificates,
point to All Tasks, and then click Request New Certificate.
15. In the Certificate Enrollment Wizard, on the Before You Begin page, click Next.
16. On the Select Certificate Enrollment Policy page, click Next.
17. On the Request Certificates page, select Enrollment Agent, and then click Enroll.
18. Click Finish.
19. Switch to LON-SVR1.
20. In the Certification Authority console, right-click Adatum-IssuingCA, and then click Properties.
21. Click the Enrollment Agents tab.
22. Click Restrict Enrollment agents.
23. On the pop-up window that displays, click OK.
24. In the Enrollment agents section, click Add.
25. In the Select User, Computer or Group field, type Allie, click Check Names, and then click OK.
26. Click Everyone, and then click Remove.
27. In the certificate templates section, click Add.
28. In the list of templates, select Adatum User, and then click OK.
29. In the Certificate Templates section, click <All>, and then click Remove.
30. In the Permission section, click Add.
31. In the Select User, Computer or Group field, type Marketing, click Check Names, and then click
OK.
32. In the Permission section, click Everyone, and then click Remove.
33. Click OK.
Results: After completing this exercise, you will have configured and verified autoenrollment for users,
and configured an Enrollment Agent for smart cards.
L6-63
On LON-SVR1, in the Certification Authority console, right-click Revoked Certificates, and then click
Properties.
2.
In the Revoked Certificates Properties dialog box, set the CRL publication interval to 1 Days, and
the Delta CRL publication interval to 1 Hours, and then click OK.
3.
4.
In the Adatum-IssuingCA Properties dialog box, click the Extensions tab, and review the values for
CDP.
5.
Click Cancel.
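Note: The publication intervals set in step 2 can also be configured from the command line with Certutil; a sketch, followed by a service restart so the new values take effect:
Certutil -setreg CA\CRLPeriodUnits 1
Certutil -setreg CA\CRLPeriod "Days"
Certutil -setreg CA\CRLDeltaPeriodUnits 1
Certutil -setreg CA\CRLDeltaPeriod "Hours"
Restart-Service certsvc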
2.
3.
4.
On the Select server roles page, expand Active Directory Certificate Services (2 of 6 Installed),
and then click Online Responder.
5.
6.
7.
When the message displays that installation succeeded, click Configure Active Directory Certificate
Services on the destination server.
8.
In the Active Directory Certificate Services (AD CS) Configuration Wizard, click Next.
9.
22. In the Certification Authority console, right-click the Certificate Templates folder, point to New, and
then click Certificate Template to Issue.
23. In the Enable Certificate Templates dialog box, select the OCSP Response Signing template, and
then click OK.
24. On LON-SVR1, in the Server Manager, click Tools, and then click Online Responder Management.
25. In the OCSP Management console, right-click Revocation Configuration, and then click Add
Revocation Configuration.
26. In the Add Revocation Configuration Wizard, click Next.
27. On the Name the Revocation Configuration page, in the Name box, type AdatumCA Online
Responder, and then click Next.
28. On the Select CA Certificate Location page, click Next.
29. On the Choose CA Certificate page, click Browse, click the Adatum-IssuingCA certificate, click OK,
and then click Next.
30. On the Select Signing Certificate page, verify that both Automatically select a signing certificate
and Auto-Enroll for an OCSP signing certificate are selected, and then click Next.
31. On the Revocation Provider page, click Finish. The revocation configuration status will appear as
Working.
32. Close the Online Responder console.
Results: After completing this exercise, you will have configured certificate revocation settings.
L6-65
2.
In the Certification Authority console, expand the Adatum-IssuingCA node, right-click the
Certificates Templates folder, and then click Manage.
3.
In the Details pane, right-click the Key Recovery Agent certificate, and then click Properties.
4.
In the Key Recovery Agent Properties dialog box, click the Issuance Requirements tab.
5.
6.
Click the Security tab. Notice that Domain Admins and Enterprise Admins are the only groups that
have the Enroll permission, and then click OK.
7.
8.
In the Certification Authority console, right-click Certificate Templates, point to New, and then click
Certificate Template to Issue.
9.
In the Enable Certificate Templates dialog box, click the Key Recovery Agent template, and then
click OK.
2.
At the Windows PowerShell prompt, type MMC.exe, and then press Enter.
3.
In the Console1-[Console Root] console, click File, and then click Add/Remove Snap-in.
4.
In the Add or Remove Snap-ins dialog box, click Certificates, and then click Add.
5.
In the Certificates snap-in dialog box, select My user account, click Finish, and then click OK.
6.
Expand the Certificates - Current User node, right-click Personal, point to All Tasks, and then click
Request New Certificate.
7.
In the Certificate Enrollment Wizard, on the Before You Begin page, click Next.
8.
9.
On the Request Certificates page, select the Key Recovery Agent check box, click Enroll, and then
click Finish.
10. Refresh the console, and view the Key Recovery Agent (KRA) in the personal store; that is, scroll across
the certificate properties and verify that the Certificate Template Key Recovery Agent is present.
11. Close Console1 without saving changes.
2.
On LON-SVR1, in the Certification Authority console, right-click Adatum-IssuingCA, and then click
Properties.
3.
In the Adatum-IssuingCA Properties dialog box, click the Recovery Agents tab, and then select
Archive the key.
4.
5.
In the Key Recovery Agent Selection dialog box, click the certificate that is for the Key Recovery
Agent purpose (it will most likely be last on the list, or you can click the link Click here to view the
certificate properties for each certificate on the list to ensure that you select the right certificate),
and then click OK twice.
6.
On LON-SVR1, in the Certification Authority console, right-click the Certificates Templates folder,
and then click Manage.
2.
In the Certificate Templates console, right-click the User certificate, and then click Duplicate
Template.
3.
In the Properties of New Template dialog box, on the General tab, in the Template display name
box, type Archive User.
4.
On the Request Handling tab, select the Archive subject's encryption private key check box.
5.
6.
Click the Subject Name tab, clear both the E-mail name and Include e-mail name in subject name
check boxes, and then click OK.
7.
8.
In the Certification Authority console, right-click the Certificates Templates folder, point to New,
and then click Certificate Template to Issue.
9.
In the Enable Certificate Templates dialog box, click the Archive User template, and then click OK.
2.
On the Start screen, type mmc.exe, and then press Enter. Click Yes in the User Account Control
dialog box.
3.
In the Console1-[Console Root] console, click File, and then click Add/Remove Snap-in.
4.
In the Add or Remove Snap-ins dialog box, click Certificates, click Add, click Finish, and then click
OK.
5.
Expand the Certificates - Current User node, right-click Personal, click All Tasks, and then click
Request New Certificate.
6.
In the Certificate Enrollment Wizard, on the Before You Begin page, click Next twice.
7.
On the Request Certificate page, select the Archive User check box, click Enroll, and then click
Finish.
8.
Refresh the console, and notice that a certificate is issued to Aidan, based on the Archive User
certificate template.
9.
Simulate the loss of a private key by deleting the certificate. In the central pane, right-click the
certificate that you just enrolled, select Delete, and then click Yes to confirm.
L6-67
12. In the details pane, double-click a certificate with Requestor Name Adatum\Aidan, and Certificate
Template name Archive User.
13. Click the Details tab, copy the Serial Number, and then click OK. (You may either copy the number
to Notepad by selecting it and pressing Ctrl+C, or write it down on paper.)
14. On the taskbar, click the Windows PowerShell icon.
15. At the Windows PowerShell prompt, type the following command (where <serial number> is the
serial number that you copied), and then press Enter:
Certutil -getkey <serial number> outputblob
Note: If you paste the serial number from Notepad, remove spaces between numbers.
16. Verify that the outputblob file now displays in the C:\Users\Administrator.Adatum folder.
17. To convert the outputblob file into a .pfx file, at the Windows PowerShell prompt, type the following
command, and then press Enter:
Certutil -recoverkey outputblob aidan.pfx
18. When prompted for the new password, type Pa$$w0rd, and then confirm the password.
19. After the command executes, close Windows PowerShell.
20. Browse to C:\Users\Administrator.ADATUM, and then verify that aidan.pfx (the recovered key) is
created.
21. Switch to LON-CL1 machine.
22. On the Start screen, type Control Panel and then click Control Panel.
23. In the Control Panel window, click View network status and tasks.
24. In the Network and Sharing Center window, click Change advanced sharing settings.
25. Under Guest or Public (current profile), select the option Turn on file and printer sharing.
26. Click Save changes.
27. If asked for credentials, use Adatum\administrator as the user name, and Pa$$w0rd as the
password.
28. Switch back to the LON-SVR1 machine.
29. Copy the aidan.pfx file to \\lon-cl1\C$.
30. Switch to LON-CL1, and ensure that you are still signed in as Aidan.
31. Browse to drive C, and double-click the aidan.pfx file.
32. On the Welcome to the Certificate Import Wizard page, click Next.
33. On the File to Import page, click Next.
34. On the Password page, enter Pa$$w0rd as the password, and then click Next.
35. On the certificate store page, click Next, click Finish, and then click OK.
36. In the Console1-[Console Root\Certificates - Current User\Personal\Certificates], expand the
Certificates - Current User node, expand Personal, and then click Certificates.
37. Refresh the console, and verify that the certificate for Aidan is restored.
2.
On the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Repeat steps two and three for 20412D-LON-CL1, 20412D-LON-SVR1, 20412D-LON-CA1, and
20412D-LON-SVR2.
Results: After completing this exercise, you will have implemented key archival and tested private key
recovery.
L7-69
Sign in to LON-DC1 with the Adatum\Administrator account and the password Pa$$w0rd.
2.
In Server Manager, click Tools, and then click Active Directory Administrative Center.
3.
Select and then right-click Adatum (local), click New, and then click Organizational Unit.
4.
In the Create Organizational Unit dialog box, in the Name text box, type Service Accounts, and
then click OK.
5.
Right-click the Service Accounts OU, click New, and then click User.
6.
On the Create User dialog box, enter the following details, and then click OK:
Password: Pa$$w0rd
7.
Right-click the Users container, click New, and then click Group.
8.
In the Create Group dialog box, enter the following details, and then click OK:
9.
E-mail: [email protected]
Right-click the Users container, click New, and then click Group.
10. In the Create Group dialog box, enter the following details, and then click OK.
E-mail: [email protected]
Aidan Delaney
Bill Malone
17. In the DNS Manager console, expand LON-DC1, and then expand Forward Lookup Zones.
18. Select and then right-click Adatum.com, and then click New Host (A or AAAA).
19. In the New Host dialog box, enter the following information, and then click Add Host:
Name: adrms
IP address: 172.16.0.21
Sign in to LON-SVR1 with the Adatum\Administrator account and the password Pa$$w0rd.
2.
In the Server Manager, click Manage, and then click Add roles and features.
3.
In the Add Roles and Features Wizard, click Next three times.
4.
On the Server Roles page, click Active Directory Rights Management Services.
5.
In the Add Roles and Features dialog box, click Add Features, and then click Next four times.
6.
7.
8.
Next to Configuration required for Active Directory Rights Management Services at LON-SVR1,
click More.
9.
On the All Servers Task Details and Notifications page, click Perform Additional Configuration.
Username: ADRMSSVC
Password: Pa$$w0rd
15. On the Cryptographic Mode page, click Cryptographic Mode 2, and then click Next.
16. On the Cluster Key Storage page, click Use AD RMS centrally managed key storage, and then
click Next.
17. On the Cluster Key Password page, enter the password Pa$$w0rd twice, and then click Next.
18. On the Cluster Web Site page, verify that Default Web Site is selected, and then click Next.
19. On the Cluster Address page, provide the following information, and then click Next:
Port: 80 (Note that in production, you would use an encrypted HTTPS connection.)
20. On the Licensor Certificate page, type Adatum AD RMS, and then click Next.
21. On the SCP Registration page, click Register the SCP now, and then click Next.
L7-71
22. Click Install, close All Servers Task Details dialog box and then click Close.
Note: The installation may take several minutes.
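Note: The AD RMS role installation itself (the Add Roles and Features steps above) also has a Windows PowerShell equivalent, although the AD RMS configuration steps are still required afterward; a sketch:
Install-WindowsFeature ADRMS -IncludeManagementTools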
23. In the Server Manager, click Tools, and then click Internet Information Services (IIS) Manager.
24. In the Internet Information Services (IIS) Manager, expand LON-SVR1 (ADATUM\Administrator)\Sites\Default Web Site, and then click _wmcs.
25. Under /_wmcs Home, in the details pane, in the IIS section, double-click Authentication, click
Anonymous Authentication, and then in the Actions pane, click Enable.
26. In the Connections pane, expand _wmcs, and then click licensing.
27. Under /_wmcs/licensing Home, in the details pane, in the IIS section, double-click Authentication,
click Anonymous Authentication, and then in the Actions pane, click Enable.
28. Go to the Start screen, click Administrator, and then click Sign Out.
Note: You must sign out before you can manage AD RMS.
Sign in to LON-SVR1 with the Adatum\Administrator account and the password Pa$$w0rd.
2.
In Server Manager, click Tools, and then click Active Directory Rights Management Services.
3.
In the Active Directory Rights Management Services console, expand the lon-svr1(Local) node, and
then click Security Policies.
4.
In the Security Policies area, under Super Users, click Change super user settings.
5.
6.
7.
In the Super Users dialog box, in the Super user group text box, type
[email protected], and then click OK.
Results: After completing this exercise, you should have installed and configured AD RMS.
2.
In the Active Directory Rights Management Services console, click the lon-svr1 (local)\Rights Policy
Templates node.
3.
4.
In the Create Distributed Rights Policy Template Wizard, on the Add Template Identification
information page, click Add.
5.
On the Add New Template Identification Information page, enter the following information, and
then click Add:
Name: ReadOnly
6.
Click Next.
7.
8.
On the Add User or Group page, type [email protected], and then click OK.
9.
When [email protected] is selected, under Rights, click View. Verify that Grant owner
(author) full control right with no expiration is selected, and then click Next.
10. On the Specify Expiration Policy page, choose the following settings, and then click Next:
11. On the Specify Extended Policy page, click Require a new use license every time content is
consumed (disable client-side caching), click Next, and then click Finish.
2.
At the Windows PowerShell prompt, type the following command, and then press Enter:
New-Item c:\rmstemplates -ItemType Directory
3.
At the Windows PowerShell prompt, type the following command, and then press Enter:
New-SmbShare -Name RMSTEMPLATES -Path c:\rmstemplates -FullAccess ADATUM\ADRMSSVC
4.
At the Windows PowerShell prompt, type the following command, and then press Enter:
New-Item c:\docshare -ItemType Directory
5.
At the Windows PowerShell prompt, type the following command, and then press Enter:
New-SmbShare -Name docshare -Path c:\docshare -FullAccess Everyone
6.
7.
L7-73
8.
Click the Rights Policy Templates node, and in the Distributed Rights Policy Templates area, click
Change distributed rights policy templates file location.
9.
10. In the Specify Templates File Location (UNC) text box, type \\LON-SVR1\RMSTEMPLATES, and
then click OK.
11. On the taskbar, click the File Explorer icon.
12. Navigate to the C:\rmstemplates folder, and verify that ReadOnly.xml displays.
13. Close the File Explorer window.
2.
Click the Exclusion Policies node, and then click Manage application exclusion list.
3.
4.
5.
In the Exclude Application dialog box, enter the following information, and then click Finish:
Results: After completing this exercise, you should have configured AD RMS templates.
2.
At the Windows PowerShell prompt, type the following command, and then press Enter:
New-Item c:\export -ItemType Directory
3.
At the Windows PowerShell prompt, type the following command, and then press Enter:
New-SmbShare -Name Export -Path c:\export -FullAccess Everyone
4.
5.
In the Active Directory Rights Management Services console, expand the Trust Policies node, and
then click the Trusted User Domains node.
6.
7.
In the Export Trusted User Domains As dialog box, navigate to \\LON-SVR1\export, set the file
name to ADATUM-TUD.bin, and then click Save.
8.
Sign in to TREY-DC1 with the TREYRESEARCH\Administrator account and the password Pa$$w0rd.
9.
In the Server Manager, click Tools, and then click Active Directory Rights Management Services.
10. In the Active Directory Rights Management Services console, expand trey-dc1(local), expand the
Trust Policies node, and then click the Trusted User Domains node.
11. In the Actions pane, click Export Trusted User Domains.
12. In the Export Trusted User Domains As dialog box, navigate to \\LON-SVR1\export, set the file
name to TREYRESEARCH-TUD.bin, and then click Save.
13. On TREY-DC1, on the taskbar, click the Windows PowerShell icon.
14. At the Windows PowerShell command prompt, type the following command, and then press Enter:
Add-DnsServerConditionalForwarderZone -MasterServers 172.16.0.10 -Name adatum.com
Switch to LON-SVR1.
2.
In the Active Directory Rights Management Services console, under the Trust Policies node, click the
Trusted Publishing Domains node.
3.
4.
In the Export Trusted Publishing Domain dialog box, click Save As.
5.
In the Export Trusted Publishing Domain File As dialog box, navigate to \\LON-SVR1\export, set
the file name to ADATUM-TPD.xml, and then click Save.
6.
In the Export Trusted Publishing Domain dialog box, enter the password Pa$$w0rd twice, and
then click Finish.
7.
Switch to TREY-DC1.
L7-75
8.
In the Active Directory Rights Management Services console, under the Trust Policies node, click the
Trusted Publishing Domains node.
9.
10. In the Export Trusted Publishing Domain dialog box, click Save As.
11. In the Export Trusted Publishing Domain File As dialog box, navigate to \\LON-SVR1\export, set
the file name to TREYRESEARCH-TPD.xml, and then click Save.
12. In the Export Trusted Publishing Domain dialog box, enter the password Pa$$w0rd twice, and
then click Finish.
Task 3: Import the Trusted User Domain policy from the partner domain
1.
Switch to LON-SVR1.
2.
In the Active Directory Rights Management Services console, under the Trust Policies node, click the
Trusted User Domains node.
3.
4.
In the Import Trusted User Domain dialog box, enter the following details, and then click Finish:
5.
Switch to TREY-DC1.
6.
In the Active Directory Rights Management Services console, under the Trust Policies node, click the
Trusted User Domains node.
7.
8.
In the Import Trusted User Domain dialog box, enter the following details, and then click Finish:
Task 4: Import the Trusted Publishing Domains policy from the partner domain
1.
Switch to LON-SVR1.
2.
In the Active Directory Rights Management Services console, under the Trust policies node, click the
Trusted Publishing Domains node.
3.
4.
In the Import Trusted Publishing Domain dialog box, enter the following information, and then
click Finish:
Password: Pa$$w0rd
5.
Switch to TREY-DC1.
6.
In the Active Directory Rights Management Services console, under the Trust policies node, click the
Trusted Publishing Domains node.
7.
8.
In the Import Trusted Publishing Domain dialog box, provide the following information, and then
click Finish:
Password: Pa$$w0rd
Results: After completing this exercise, you should have implemented the AD RMS trust policies.
L7-77
2.
3.
4.
5.
6.
7.
8.
In the Select Users and Groups pop-up, in the Enter the object names to select text box, type
Aidan; Bill; Carol, and then click OK three times.
9.
10. On the Start screen, click Administrator, and then click Sign out.
11. Sign in to LON-CL1 as Adatum\Aidan using the password Pa$$w0rd.
12. On the Start screen, click the Desktop tile.
13. On the taskbar, click the Internet Explorer icon. Close any warnings about add-ons.
14. In Windows Internet Explorer, in the Address bar, type http://adrms.adatum.com, and then click
the arrow immediately to the right of the uniform resource locator (URL) text box.
15. Click the Gear icon in the far upper right of Internet Explorer.
16. Select Internet Options.
17. Select the Security tab.
18. In the Select a zone to view or change security settings, click the Local intranet icon, and then click
the Sites button.
19. Click the Advanced button.
20. Click the Add button, click Close, and then click OK twice.
21. Close Internet Explorer.
22. Return to the Start screen.
23. On the Start screen, type Word. In the Results area, click Word 2013.
24. In the First things first dialog box, select the Ask me later radio button, and then click Accept. In
the Office dialog box, click the X in the far upper right.
25. In the Word Recent window, click the Blank document icon. In the Microsoft Word document, type
the following text:
This document is for executives only, it should not be modified.
26. Click File, click Protect Document, click Restrict Access, and then click Connect to Digital Rights
Management Servers and get templates.
27. A Microsoft Word dialog box informing you it is connecting to the server will display.
28. After the dialog box closes, click Protect Document and Restrict Access, and then click Restricted
Access.
29. In the Permission dialog box, enable Restrict Permission to this document.
30. In the Read text box, type [email protected], and then click OK.
31. Click Save.
32. In the Save As dialog box, click the Browse icon.
33. In the File name text box, type \\lon-svr1\docshare\ExecutivesOnly.docx, and then click Save.
34. Close Word.
35. Go to the Start screen, click the Aidan Delaney icon, and then click Sign out.
2.
3.
On the taskbar, click the Internet Explorer icon. Close any warnings about add-ons.
4.
In the URL text box, type http://adrms.adatum.com, click the arrow immediately to the right of the
URL text box.
5.
Click the Gear icon in the far upper right of Internet Explorer.
6.
7.
8.
In the Select a zone to view or change security settings, click the Local intranet icon, and then click
the Sites button.
9.
10. Click the Add button, click Close, and then click OK twice.
11. Close Internet Explorer.
12. On the taskbar, click the File Explorer icon.
13. In the File Explorer window, navigate to \\lon-svr1\docshare.
14. In the docshare folder, double-click the ExecutivesOnly document.
15. In the First things first dialog box, select the Ask me later radio button, and then click Accept.
In the Office dialog box, click the letter X in the far upper right.
16. When the document opens, verify that you are unable to modify or save the document.
17. Select a line of text in the document.
18. Right-click the text, and verify that you cannot make changes.
19. Click View Permission on the yellow bar, review the permissions, and then click OK.
20. Close Word.
21. Go to the Start screen, click the Bill Malone icon, and then click Sign out.
L7-79
2.
3.
4.
In the URL text box, type http://adrms.adatum.com, and then click the arrow immediately to the
right of the URL text box.
5.
Click the Gear icon in the far upper right of Internet Explorer.
6.
7.
8.
In the Select a zone to view or change security settings, click the Local intranet icon, and then click
the Sites button.
9.
10. Click the Add button, click Close, and then click OK twice.
11. Close Internet Explorer.
12. On the taskbar, click the File Explorer icon.
13. In the File Explorer window, navigate to \\lon-svr1\docshare.
14. In the docshare folder, double-click the Executives Only document.
15. Verify that Carol is unable to open the document. You will receive a message with the option to
Change User or request access.
16. Click No.
17. Select Ask me later, click Accept, and then select the X in the far upper right of the Microsoft Office
window.
18. Close Word.
19. Go to the Start screen, click the Carol Troup icon, and then click Sign out.
Task 4: Open and edit the rights-protected document as an authorized user at Trey
Research
1.
2.
On the Start screen, type Word. In the Results area, click Word 2013.
3.
4.
5.
Click File, click Protect Document, click Restrict Access, and then click Connect to Digital Rights
Management Servers and get templates.
6.
7.
In the Read text box, type [email protected], click OK, click Save, and then click Browse.
8.
In the Save As dialog box, save the document to the \\lon-svr1\docshare location as
TreyResearch-Confidential.docx. Close Word.
9.
Go to the Start screen, click the Aidan Delaney icon, and then click Sign Out.
Username: Adatum\Administrator
Password: Pa$$w0rd
L7-81
39. Select a line of text in the document and verify that you cannot make any changes.
40. Right-click the text, and verify that you cannot make changes.
41. Click View Permission, review the permissions, and then click OK.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you should have verified that the AD RMS deployment is
successful.
L8-83
Lab A: Implementing AD FS
Exercise 1: Installing and Configuring AD FS
Task 1: Create a DNS record for AD FS
1.
On LON-DC1, in the Server Manager, click Tools, and then click DNS.
2.
In the DNS Manager, expand LON-DC1, expand Forward Lookup Zones, and then click
Adatum.com.
3.
4.
5.
In the IP address box, type 172.16.0.10, and then click Add Host.
6.
7.
2.
At the Windows PowerShell prompt, type New-ADUser -Name adfsService, and then press Enter.
3.
4.
5.
At the second Password prompt, type Pa$$w0rd, and then press Enter.
6.
At the Repeat Password prompt, type Pa$$w0rd, and then press Enter.
7.
8.
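Note: The service account can also be created without the interactive password prompts; a minimal, non-interactive sketch of an equivalent command:
New-ADUser -Name adfsService -AccountPassword (ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force) -Enabled $true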
Task 3: Install AD FS
1.
On LON-DC1, in the Server Manager, click Manage, and then click Add Roles and Features.
2.
In the Add Roles and Features Wizard, on the Before you begin page, click Next.
3.
On the Select installation type page, click Role-based or feature-based installation, and then
click Next.
4.
On the Select destination server page, click Select a server from the server pool, click LON-DC1.Adatum.com, and then click Next.
5.
On the Select server roles page, select the Active Directory Federation Services check box, and
then click Next.
6.
7.
On the Active Directory Federation Services (AD FS) page, click Next.
8.
9.
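Note: The AD FS role installation in this task has a one-line Windows PowerShell equivalent; a sketch:
Install-WindowsFeature ADFS-Federation -IncludeManagementTools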
Task 4: Configure AD FS
1.
On LON-DC1, in the Server Manager, click the Notifications icon, and then click Configure the
federation service on this server.
2.
In the Active Directory Federation Services Configuration Wizard, on the Welcome page, click Create
the first federation server in a federation server farm, and then click Next.
3.
On the Connect to Active Directory Domain Services page, click Next to use
Adatum\Administrator to perform the configuration.
4.
On the Specify Service Properties page, in the SSL Certificate box, select adfs.adatum.com.
5.
In the Federation Service Display Name box, type A. Datum Corporation, and then click Next.
6.
On the Specify Service Account page, click Use an existing domain user account or group
Managed Service Account.
7.
8.
In the Account Password box, type Pa$$w0rd, and then click Next.
9.
On the Specify Configuration Database page, click Create a database on this server using
Windows Internal Database, and then click Next.
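Note: The federation service configuration performed by this wizard can also be scripted with Install-AdfsFarm. This is only a sketch; it assumes the adfs.adatum.com certificate is already installed, and the thumbprint value is a placeholder:
$cred = Get-Credential ADATUM\adfsService
Install-AdfsFarm -CertificateThumbprint <thumbprint> -FederationServiceName adfs.adatum.com -FederationServiceDisplayName "A. Datum Corporation" -ServiceAccountCredential $cred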
2.
3.
4.
Verify that the file loads, and then close Internet Explorer.
Results: In this exercise, you installed and configured AD FS. You also verified that it is functioning by
viewing the FederationMetaData.xml file contents.
L8-85
On LON-SVR1, in Server Manager, click Tools and click Internet Information Services (IIS)
Manager.
2.
If necessary, in the prompt for connecting to Microsoft Web Platform components, select the Do not
show this message check box, and then click No.
3.
4.
5.
In the Create Certificate window on the Distinguished Name Properties page, enter the following
information, and then click Next:
Organization: A. Datum
Organizational unit: IT
City/locality: London
State/Province: England
Country/region: GB
6.
7.
In the Select Certification Authority window, click AdatumCA, and then click OK.
8.
On the Online Certification Authority page, in the Friendly name box, type AdatumTestApp
Certificate, and then click Finish.
9.
In IIS Manager, expand LON-SVR1 (ADATUM\Administrator), expand Sites, click Default Web
Site, and then in the Actions Pane, click Bindings.
On LON-DC1, in the Server Manager, click Tools, and then click AD FS Management.
2.
In the AD FS management console, expand Trust Relationships, and then click Claims Provider
Trusts.
3.
In the middle pane, right-click Active Directory, and then click Edit Claim Rules.
4.
In the Edit Claims Rules for Active Directory window, on the Acceptance Transform Rules tab, click
Add Rule.
5.
In the Add Transform Claim Rule Wizard, on the Select Rule Template page, in the Claim rule
template box, select Send LDAP Attributes as Claims, and then click Next.
6.
On the Configure Rule page, in the Claim rule name box, type Outbound LDAP Attributes Rule.
7.
8.
In the Mapping of LDAP attributes to outgoing claim types section, select the following values for
the LDAP Attribute and the Outgoing Claim Type, and then click Finish:
9.
User-Principal-Name: UPN
Display-Name: Name
In the Edit Claim Rules for Active Directory window, click OK.
On LON-SVR1, in the Server Manager, click Tools, and then click Windows Identity Foundation
Federation Utility.
2.
On the Welcome to the Federation Utility Wizard page, in the Application configuration
location box, type C:\inetpub\wwwroot\AdatumTestApp\web.config for the location of the
sample web.config file.
3.
4.
On the Security Token Service page, click Use an existing STS, in the STS WS-Federation
metadata document location box, type https://adfs.adatum.com/federationmetadata/2007-06/federationmetadata.xml, and then click Next to continue.
5.
On the STS signing certificate chain validation error page, click Disable certificate chain
validation, and then click Next.
6.
On the Security token encryption page, click No encryption, and then click Next.
7.
On the Offered claims page, review the claims that the federation server will offer, and then click
Next.
8.
On the Summary page, review the changes that will be made to the sample application by the
Federation Utility Wizard, scroll through the items to understand what each item is doing, and then
click Finish.
9.
2.
3.
In the Relying Party Trust Wizard, on the Welcome page, click Start.
4.
On the Select Data Source page, click Import data about the relying party published online or
on a local network.
5.
In the Federation Metadata address (host name or URL) box, type https://lon-svr1.adatum.com/adatumtestapp/, and then click Next. This downloads the metadata configured
in the previous task.
6.
On the Specify Display Name page, in the Display name box, type A. Datum Test App, and then
click Next.
7.
On the Configure Multi-factor Authentication Now page, click I do not want to configure multi-factor authentication settings for this relying party trust at this time, and then click Next.
L8-87
8.
On the Choose Issuance Authorization Rules page, click Permit all users to access this relying
party, and then click Next.
9.
On the Ready to Add Trust page, review the relying-party trust settings, and then click Next.
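Note: The relying party trust created by this wizard can also be added with the AD FS cmdlets. A sketch, assuming the application publishes its federation metadata at the URL shown (the exact metadata path is an assumption):
Add-AdfsRelyingPartyTrust -Name "A. Datum Test App" -MetadataUrl "https://lon-svr1.adatum.com/adatumtestapp/FederationMetadata/2007-06/FederationMetadata.xml"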
On LON-DC1, in the AD FS management console, in the Edit Claim Rules for A. Datum Test App
window, on the Issuance Transform Rules tab, click Add Rule.
2.
In the Claim rule template box, select Pass Through or Filter an Incoming Claim, and then click
Next.
3.
In the Claim rule name box, type Pass through Windows account name.
4.
In the Incoming claim type drop-down list, click Windows account name, and then click Finish.
5.
6.
In the Claim rule template box, select Pass Through or Filter an Incoming Claim, and then click
Next.
7.
In the Claim rule name box, type Pass through E-Mail Address.
8.
In the Incoming claim type drop-down list, click E-Mail Address, and then click Finish.
9.
10. In the Claim rule template box, select Pass Through or Filter an Incoming Claim, and then click
Next.
11. In the Claim rule name box, type Pass through UPN.
12. In the Incoming claim type drop-down list, click UPN, and then click Finish.
13. On the Issuance Transform Rules tab, click Add Rule.
14. In the Claim rule template box, select Pass Through or Filter an Incoming Claim, and then click
Next.
15. In the Claim rule name box, type Pass through Name.
16. In the Incoming claim type drop-down list, click Name, and then click Finish.
17. On the Issuance Transform Rules tab, click OK.
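Note: The same pass-through rules can be applied with Windows PowerShell by using the AD FS claim rule language. A sketch for the Windows account name rule only; note that -IssuanceTransformRules replaces the existing rule set on the trust:
$rule = 'c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"] => issue(claim = c);'
Set-AdfsRelyingPartyTrust -TargetName "A. Datum Test App" -IssuanceTransformRules $rule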
2.
3.
In the Windows Security window, sign in as Adatum\Brad with the password Pa$$w0rd.
4.
5.
On LON-CL1, on the Start screen, type Internet Options, and then click Internet Options.
2.
In the Internet Properties window, on the Security tab, click Local intranet, and then click Sites.
3.
4.
In the Local intranet window, in the Add this website to the zone box, type
https://adfs.adatum.com, and then click Add.
5.
In the Add this website to the zone box, type https://lon-svr1.adatum.com, click Add, and then
click Close.
6.
7.
8.
9.
Results: After completing this exercise, you will have configured AD FS to support authentication for an
application.
L8-89
On LON-DC1, in the Server Manager, click Tools, and then click DNS.
2.
In the DNS Manager, expand LON-DC1, and then click Conditional Forwarders.
3.
4.
In the New Conditional Forwarder window, in the DNS Domain box, type TreyResearch.net.
5.
In the IP addresses of the master servers box, type 172.16.10.10, and then press Enter.
6.
Select the Store this conditional forwarder in Active Directory, and replicate it as follows check
box, select All DNS servers in this forest, and then click OK.
7.
8.
On TREY-DC1, in the Server Manager, click Tools, and then click DNS.
9.
In the DNS Manager, expand TREY-DC1, and then click Conditional Forwarders.
10. Right-click Conditional Forwarders, and then click New Conditional Forwarder.
11. In the New Conditional Forwarder window, in the DNS Domain box, type Adatum.com.
12. In the IP addresses of the master servers box, type 172.16.0.10, and then press Enter.
13. Select the Store this conditional forwarder in Active Directory, and replicate it as follows check
box, select All DNS servers in this forest, and then click OK.
14. Close the DNS Manager.
Note: In a production environment, it is likely that you would use Internet DNS instead of
conditional forwarders.
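Note: The conditional forwarders created in this task can also be added with the DnsServer module; a sketch for the LON-DC1 side (the TREY-DC1 side is symmetrical, pointing at 172.16.0.10 for Adatum.com):
Add-DnsServerConditionalForwarderZone -Name TreyResearch.net -MasterServers 172.16.10.10 -ReplicationScope Forest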
On LON-DC1, open File Explorer, browse to \\TREY-DC1\CertEnroll, and then copy TREY-DC1.TreyResearch.net_TreyResearchCA.crt to C:\.
2.
3.
In the Server Manager, click Tools, and then click Group Policy Management.
4.
5.
In Group Policy Management Editor, under Computer Configuration, expand Policies, expand
Windows Settings, expand Security Settings, expand Public Key Policies, and then click Trusted
Root Certification Authorities.
6.
7.
In the Certificate Import Wizard, on the Welcome to the Certificate Import Wizard page, click
Next.
8.
9.
On the Certificate Store page, click Place all certificates in the following store, select Trusted
Root Certification Authorities, and then click Next.
10. On the Completing the Certificate Import Wizard page, click Finish, and then click OK to close the
success message.
11. Close the Group Policy Management Editor.
12. Close Group Policy Management.
13. On TREY-DC1, open File Explorer, and then browse to \\LON-DC1\CertEnroll.
14. Right-click LON-DC1.Adatum.com_AdatumCA.crt, and then click Install Certificate.
15. In the Certificate Import Wizard, on the Welcome to the Certificate Import Wizard page, click
Local Machine, and then click Next.
16. On the Certificate Store page, click Place all certificates in the following store, and then click
Browse.
17. In the Select Certificate Store window, click Trusted Root Certification Authorities, and then click
OK.
18. On the Certificate Store page, click Next.
19. On the Completing the Certificate Import Wizard page, click Finish, and then click OK to close the
success message.
20. Close File Explorer.
21. On LON-SVR1, on the taskbar, click Windows PowerShell.
22. At the Windows PowerShell command prompt, type gpupdate, and then press Enter.
23. Close Windows PowerShell.
24. On LON-SVR2, on the taskbar, click Windows PowerShell.
25. At the Windows PowerShell command prompt, type gpupdate, and then press Enter.
26. Close Windows PowerShell.
Note: If you obtain certificates from a trusted certification authority, you do not need to
configure a certificate trust between the organizations.
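Note: As an alternative to importing the partner root CA certificate through a GPO and the Certificate Import Wizard, the certificate file copied in step 1 can be placed directly in the local computer's Trusted Root Certification Authorities store with Windows PowerShell. A sketch, assuming the file was copied to C:\ as above:
# Import the Trey Research root CA certificate into the local machine Trusted Root store
Import-Certificate -FilePath "C:\TREY-DC1.TreyResearch.net_TreyResearchCA.crt" -CertStoreLocation Cert:\LocalMachine\Root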
2.
In DNS Manager, expand TREY-DC1, expand Forward Lookup Zones, and then click
TreyResearch.net.
3.
4.
5.
In the IP address box, type 172.16.10.10, and then click Add Host.
6.
7.
On TREY-DC1, in Server Manager, click Tools and click Internet Information Services (IIS)
Manager.
2.
If necessary, in the prompt for connecting to Microsoft Web Platform components, select the Do not
show this message check box, and then click No.
3.
4.
5.
In the Create Certificate window on the Distinguished Name Properties page, enter the following, and
then click Next:
Organizational unit: IT
City/locality: London
State/Province: England
Country/region: GB
6.
7.
In the Select Certification Authority window, click TreyResearchCA, and then click OK.
8.
On the Online Certification Authority page, in the Friendly name box, type
adfs.TreyResearch.net, and then click Finish.
9.
2.
At the Windows PowerShell prompt, type New-ADUser -Name adfsService, and then press Enter.
3.
4.
5.
At the second Password prompt, type Pa$$w0rd, and then press Enter.
6.
At the Repeat Password prompt, type Pa$$w0rd, and then press Enter.
7.
8.
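Note: The interactive password prompts above can be avoided by supplying the password and enabling the account in a single command. A sketch that assumes the same account name and password:
# Create and enable the AD FS service account without interactive prompts
New-ADUser -Name adfsService -AccountPassword (ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force) -Enabled $true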
On TREY-DC1, in the Server Manager, click Manage, and then click Add Roles and Features.
2.
In the Add Roles and Features Wizard, on the Before you begin page, click Next.
3.
On the Select Installation type page, click Role-based or feature-based installation, and then
click Next.
4.
On the Select destination server page, click Select a server from the server pool, click TREY-DC1.TreyResearch.net, and then click Next.
5.
On the Select server roles page, select the Active Directory Federation Services check box, and
then click Next.
6.
7.
On the Active Directory Federation Services (AD FS) page, click Next.
8.
9.
On TREY-DC1, in the Server Manager, click the Notifications icon, and then click Configure the
federation service on this server.
2.
In the Active Directory Federation Services Configuration Wizard, on the Welcome page, click Create
the first federation server in a federation server farm, and then click Next.
3.
On the Connect to Active Directory Domain Services page, click Next to use
TREYRESEARCH\Administrator to perform the configuration.
4.
On the Specify Service Properties page, in the SSL Certificate box, select adfs.TreyResearch.net.
5.
In the Federation Service Display Name box, type Trey Research, and then click Next.
6.
On the Specify Service Account page, click Use an existing domain user account or group
Managed Service Account.
7.
8.
In the Account Password box, type Pa$$w0rd, and then click Next.
9.
On the Specify Configuration Database page, click Create a database on this server using
Windows Internal Database, and then click Next.
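Note: The AD FS role installation and farm configuration in the preceding two tasks can also be performed with Windows PowerShell. A sketch, assuming the adfs.TreyResearch.net certificate is already in the computer's Personal store and that the service account created earlier is used; the thumbprint is a placeholder that you would replace with the actual value:
# Install the AD FS role
Install-WindowsFeature ADFS-Federation -IncludeManagementTools
# Configure the first federation server in a new farm (uses the Windows Internal Database by default)
Install-AdfsFarm -CertificateThumbprint "<thumbprint of adfs.TreyResearch.net>" -FederationServiceName "adfs.TreyResearch.net" -FederationServiceDisplayName "Trey Research" -ServiceAccountCredential (Get-Credential TREYRESEARCH\adfsService)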
2.
In the AD FS management console, expand Trust Relationships, and then click Claims Provider
Trusts.
3.
4.
In the Add Claims Provider Trust Wizard, on the Welcome page, click Start.
5.
On the Select Data Source page, click Import data about the claims provider published online or
on a local network.
6.
In the Federation metadata address (host name or URL) box, type https://adfs.treyresearch.net,
and then click Next.
7.
On the Specify Display Name page, in the Display name box, type Trey Research, and then click
Next.
8.
On the Ready to Add Trust page, review the claims-provider trust settings, and then click Next to
save the configuration.
9.
On the Finish page, select the Open the Edit Claim Rules dialog for this claims provider trust
when the wizard closes check box, and then click Close.
10. In the Edit Claim Rules for Trey Research window, on the Acceptance Transform Rules tab, click
Add Rule.
11. In the Add Transform Claim Rule Wizard, on the Select Rule Template page, in the Claim rule
template box, select Pass Through or Filter an Incoming Claim, and then click Next.
12. On the Configure Rule page, in the Claim rule name box, type Pass through Windows account
name.
13. In the Incoming claim type drop-down list, select Windows account name.
14. Select Pass through all claim values, and then click Finish.
15. In the pop-up window, click Yes to acknowledge the warning.
16. In the Edit Claim Rules for Trey Research window, click OK, and then close the AD FS management
console.
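Note: The claims provider trust added above can also be created with the AD FS cmdlets. A sketch, assuming that the Trey Research federation metadata is published at the default metadata URL:
# On the A. Datum federation server, add Trey Research as a claims provider trust
Add-AdfsClaimsProviderTrust -Name "Trey Research" -MetadataUrl "https://adfs.treyresearch.net/FederationMetadata/2007-06/FederationMetadata.xml"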
On TREY-DC1, in the Server Manager, click Tools, and then click AD FS Management.
2.
In the AD FS management console, expand Trust Relationships, and then click Relying Party
Trusts.
3.
4.
In the Add Relying Party Trust Wizard, on the Welcome page, click Start.
5.
On the Select Data Source page, click Import data about the relying party published online or
on a local network.
6.
In the Federation metadata address (host or URL) box, type adfs.adatum.com, and then click
Next.
7.
On the Specify Display Name page, in the Display name text box, type A. Datum Corporation,
and then click Next.
8.
On the Configure Multi-Factor Authentication Now page, click I do not want to configure
multi-factor authentication settings for this relying party trust at this time, and then click Next.
9.
On the Choose Issuance Authorization Rules page, select Permit all users to access this relying
party, and then click Next.
10. On the Ready to Add Trust page, review the relying-party trust settings, and then click Next to save
the configuration.
11. On the Finish page, select the Open the Edit Claim Rules dialog box for the relying party trust
when the wizard closes check box, and then click Close.
12. In the Edit Claim Rules for A. Datum Corporation window, on the Issuance Transform Rules tab,
click Add Rule.
13. In the Add Transform Claim Rule Wizard, on the Select Rule Template page, in the Claim rule
template box, select Pass Through or Filter an Incoming Claim, and then click Next.
14. On the Configure Rule page, in the Claim rule name box, type Pass through Windows account
name.
15. In the Incoming claim type drop-down list, select Windows account name.
16. Click Pass through all claim values, click Finish, and then click OK.
17. Close the AD FS management console.
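Note: A corresponding Windows PowerShell sketch for the relying party trust created above, assuming the A. Datum federation metadata is published at the default URL and that all users are initially permitted (the same effect as the Permit all users option in the wizard):
# On TREY-DC1, add A. Datum as a relying party trust that permits all users
Add-AdfsRelyingPartyTrust -Name "A. Datum Corporation" -MetadataUrl "https://adfs.adatum.com/FederationMetadata/2007-06/FederationMetadata.xml" -IssuanceAuthorizationRules '=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");'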
2.
3.
4.
In the Windows Security dialog box, sign in as TreyResearch\April with the password Pa$$w0rd.
5.
6.
7.
8.
In the Windows Security dialog box, sign in as TreyResearch\April with the password Pa$$w0rd.
9.
Note: You are not prompted for a home realm on the second access. Once users have
selected a home realm and have been authenticated by a realm authority, they are issued a
_LSRealm cookie by the relying party's federation server. The default lifetime for the cookie is 30
days. Therefore, to sign in multiple times, you should delete that cookie after each logon attempt
to return to a clean state.
On TREY-DC1, in the Server Manager, click Tools, and then click AD FS Management.
2.
In the AD FS management console, expand Trust Relationships, and then click Relying Party
Trusts.
3.
4.
In the Edit Claim Rules for A. Datum Corporation window, on the Issuance Authorization Rules tab,
click Permit Access to All Users, and then click Remove Rule.
5.
6.
7.
In the Add Issuance Authorization Claim Rules Wizard, on the Select Rule Template page, in the
Claim rule template box, select Permit or Deny Users Based on an Incoming Claim, and then click
Next.
8.
On the Configure Rule page, in the Claim rule name box, type Allow Production Members.
9.
13. In the AD FS management console, click Claims Provider Trusts, right-click Active Directory, and
then click Edit Claim Rules.
14. In the Edit Claim Rules for Active Directory window, click Add Rule.
15. In the Add Transform Claim Rule Wizard, on the Select Rule Template page, in the Claim rule
template box, select Send Group Membership as a Claim, and then click Next.
16. On the Configure Rule page, in the Claim rule name box, type Production Group Claim.
17. To set the Users group, click Browse, type Production, and then click OK.
18. In the Outgoing claim type box, select Group.
19. In the Outgoing claim value box, type TreyResearch-Production, and then click Finish.
20. In the Edit Claim Rules for Active Directory window, click OK.
21. Close the AD FS management console.
2.
3.
In the Windows Security dialog box, sign in as TreyResearch\April with the password Pa$$w0rd.
4.
Verify that you cannot access the application because April is not a member of the production group.
5.
6.
7.
8.
In the Windows Security dialog box, sign in as TreyResearch\Ben with the password Pa$$w0rd.
9.
Verify that you can access the application because Ben is a member of the production group.
Results: After completing this exercise, you will have configured access for a claims-aware application in a
partner organization.
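Note: The Allow Production Members rule created in this exercise corresponds to a rule written in the AD FS claim rule language. A sketch of that rule, assuming the group claim value TreyResearch-Production configured above; it issues a permit claim only when the incoming Group claim carries that value:
c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value == "TreyResearch-Production"]
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");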
On LON-SVR2, in the Server Manager, click Manage, and then click Add Roles and Features.
2.
In the Add Roles and Features Wizard, on the Before you begin page, click Next.
3.
On the Select installation type page, click Role-based or feature-based installation, and then
click Next.
4.
On the Select destination server page, click LON-SVR2.Adatum.com, and then click Next.
5.
On the Select server roles page, expand Remote Access, select the Web Application Proxy check
box, and then click Next.
6.
7.
8.
On LON-DC1, on the Start screen, type mmc, and then press Enter.
2.
In the Microsoft Management Console, click File, and then click Add/Remove Snap-in.
3.
In the Add or Remove Snap-ins window, in the Available snap-ins column, double-click Certificates.
4.
In the Certificates snap-in window, click Computer account, and then click Next.
5.
In the Select Computer window, click Local Computer (the computer this console is running on),
and then click Finish.
6.
7.
In the Microsoft Management Console, expand Certificates (Local Computer), expand Personal,
and then click Certificates.
8.
9.
10. On the Export Private Key page, click Yes, export the private key, and then click Next.
11. On the Export File Format page, click Next.
12. On the Security page, select the Password check box.
13. In the Password and Confirm password boxes, type Pa$$w0rd, and then click Next.
14. On the File to Export page, in the File name box, type C:\adfs.pfx, and then click Next.
15. On the Completing the Certificate Export Wizard page, click Finish, and then click OK to close the
success message.
16. Close the Microsoft Management Console and do not save the changes.
17. On LON-SVR2, on the Start screen, type mmc, and then press Enter.
18. In the Microsoft Management Console, click File, and then click Add/Remove Snap-in.
19. In the Add or Remove Snap-ins window, in the Available snap-ins column, double-click Certificates.
20. In the Certificates snap-in window, click Computer account, and then click Next.
21. In the Select Computer window, click Local Computer (the computer this console is running on),
and then click Finish.
22. In the Add or remove Snap-ins window, click OK.
23. In the Microsoft Management Console, expand Certificates (Local Computer), and then click
Personal.
24. Right-click Personal, point to All Tasks, and then click Import.
25. In the Certificate Import Wizard, click Next.
26. On the File to Import page, in the File name box, type \\LON-DC1\c$\adfs.pfx, and then click
Next.
27. On the Private key protection page, in the Password box, type Pa$$w0rd.
28. Select the Mark this key as exportable check box, and then click Next.
29. On the Certificate Store page, click Place all certificates in the following store.
30. In the Certificate store box, select Personal, and then click Next.
31. On the Completing the Certificate Import Wizard page, click Finish, and then click OK to clear the
success message.
32. Close the Microsoft Management Console and do not save the changes.
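Note: The certificate export and import performed above with the Certificates snap-in can also be scripted. A sketch, assuming the adfs.adatum.com certificate is in the local machine Personal store on LON-DC1 and that the same .pfx path and password are used; the subject-based lookup is only an illustration:
# On LON-DC1: export the certificate and its private key to a .pfx file
$pfxPwd = ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.Subject -like "*adfs.adatum.com*"}
Export-PfxCertificate -Cert $cert -FilePath C:\adfs.pfx -Password $pfxPwd
# On LON-SVR2: import the .pfx into the local machine Personal store
Import-PfxCertificate -FilePath \\LON-DC1\c$\adfs.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPwd -Exportable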
On LON-SVR1, on the Start screen, type mmc, and then press Enter.
2.
In the Microsoft Management Console, click File, and then click Add/Remove Snap-in.
3.
In the Add or Remove Snap-ins window, in the Available snap-ins column, double-click Certificates.
4.
In the Certificates snap-in window, click Computer account, and then click Next.
5.
In the Select Computer window, click Local Computer (the computer this console is running on),
and then click Finish.
6.
7.
In the Microsoft Management Console, expand Certificates (Local Computer), expand Personal,
and then click Certificates.
8.
9.
10. On the Export Private Key page, click Yes, export the private key, and then click Next.
11. On the Export File Format page, click Next.
12. On the Security page, select the Password check box.
13. In the Password and Confirm password boxes, type Pa$$w0rd, and then click Next.
14. On the File to Export page, in the File name box, type C:\lon-svr1.pfx, and then click Next.
15. On the Completing the Certificate Export Wizard page, click Finish, and then click OK to close the
success message.
16. Close the Microsoft Management Console and do not save the changes.
17. On LON-SVR2, on the Start screen, type mmc, and then press Enter.
18. In the Microsoft Management Console, click File, and then click Add/Remove Snap-in.
19. In the Add or Remove Snap-ins window, in the Available snap-ins column, double-click Certificates.
20. In the Certificates snap-in window, click Computer account, and then click Next.
21. In the Select Computer window, click Local Computer (the computer this console is running on),
and then click Finish.
22. In the Add or remove Snap-ins window, click OK.
23. In the Microsoft Management Console, expand Certificates (Local Computer), and then click
Personal.
24. Right-click Personal, point to All Tasks, and then click Import.
25. In the Certificate Import Wizard, click Next.
26. On the File to Import page, in the File name box, type \\LON-SVR1\c$\lon-svr1.pfx, and then click
Next.
27. On the Private key protection page, in the Password box, type Pa$$w0rd.
28. Select the Mark this key as exportable check box, and then click Next.
29. On the Certificate Store page, click Place all certificates in the following store.
30. In the Certificate store box, select Personal, and then click Next.
31. On the Completing the Certificate Import Wizard page, click Finish, and then click OK to clear the
success message.
32. Close the Microsoft Management Console and do not save the changes.
In the Server Manager, click the Notifications icon, and then click Open the Web Application
Proxy Wizard.
2.
In the Web Application Proxy Wizard, on the Welcome page, click Next.
3.
On the Federation Server page, enter the following, and then click Next:
Password: Pa$$w0rd
4.
On the AD FS Proxy Certificate page, in the Select a certificate to be used by the AD FS proxy
box, select adfs.adatum.com, and then click Next.
5.
6.
7.
The Remote Access Management Console opens automatically. Leave it open for the next task.
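Note: The Web Application Proxy role installation and this initial configuration can also be performed with Windows PowerShell. A sketch, assuming the federation service name and certificate selected in the wizard; the thumbprint is a placeholder:
# Install the Web Application Proxy role service
Install-WindowsFeature Web-Application-Proxy -IncludeManagementTools
# Configure the proxy against the A. Datum federation service
Install-WebApplicationProxy -FederationServiceName "adfs.adatum.com" -FederationServiceTrustCredential (Get-Credential Adatum\Administrator) -CertificateThumbprint "<thumbprint of adfs.adatum.com>"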
On LON-SVR2, in the Remote Access Management Console, click Web Application Proxy.
2.
3.
In the Publish New Application Wizard, on the Welcome page, click Next.
4.
On the Preauthentication page, click Active Directory Federation Services (AD FS), and then click
Next.
5.
On the Relying Party page, click A. Datum Test App and click Next.
6.
On the Publishing Settings page, in the Name box, type A. Datum Test App.
7.
8.
9.
2.
3.
4.
In the File name box, type C:\Windows\System32\Drivers\etc\hosts, and then click Open.
5.
At the bottom of the file, add the following two lines, click File, and then click Save:
172.16.0.22 adfs.adatum.com
172.16.0.22 lon-svr1.adatum.com
6.
Close Notepad.
7.
8.
9.
In the Windows Security dialog box, sign in as TreyResearch\Ben with password Pa$$w0rd.
When you finish the lab, revert the virtual machines to their initial state. To do this, perform the
following steps:
2.
3.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
4.
5.
Results: After completing this exercise, you will have configured Web Application Proxy to secure access
to AdatumTestApp from the Internet.
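Note: The application published through the wizard in this exercise can also be published from Windows PowerShell. A sketch, assuming external and backend URLs that match the lab configuration and a placeholder certificate thumbprint:
# Publish the claims-aware application through Web Application Proxy with AD FS preauthentication
Add-WebApplicationProxyApplication -Name "A. Datum Test App" -ExternalPreAuthentication ADFS -ExternalUrl "https://lon-svr1.adatum.com/AdatumTestApp/" -BackendServerUrl "https://lon-svr1.adatum.com/AdatumTestApp/" -ExternalCertificateThumbprint "<thumbprint of lon-svr1.adatum.com>" -ADFSRelyingPartyName "A. Datum Test App"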
2.
3.
Double-click the file iis-85.png. This will open the file in Microsoft Paint.
4.
Ensure that the Paintbrush tool is selected, and then in the palette, click the color Red.
5.
Use the mouse to mark the IIS logo distinctively, using the color red.
6.
Save the changes that you made to iis-85.png, and then close Microsoft Paint.
7.
8.
Switch to LON-DC1.
9.
Click Start.
10. On the Start screen click the Windows Internet Explorer icon.
11. In the Internet Explorer address bar, type the address http://LON-SVR1, and then press Enter. Verify
that the webpage displays the IIS logo with the distinctive color red mark that you added.
12. In the Internet Explorer address bar, enter the address http://LON-SVR2, and then press Enter. Verify
that the webpage does not display the marked IIS logo.
13. Close Internet Explorer.
Switch to LON-SVR1.
2.
3.
In the Server Manager console, click the Tools menu, and then click Windows PowerShell ISE.
4.
In the Windows PowerShell ISE window, enter the following command, and then press Enter:
Invoke-Command -Computername LON-SVR1,LON-SVR2 -command {Install-WindowsFeature NLB,RSAT-NLB}
On LON-SVR1, in the Windows PowerShell ISE window, type the following command, and then press
Enter:
New-NlbCluster -InterfaceName "Ethernet" -OperationMode Multicast -ClusterPrimaryIP 172.16.0.42 -ClusterName LON-NLB
2.
In the Windows PowerShell ISE window, type the following command, and then press Enter:
Invoke-Command -Computername LON-DC1 -command {Add-DnsServerResourceRecordA -ZoneName adatum.com -Name LON-NLB -IPv4Address 172.16.0.42}
On LON-SVR1, in the Windows PowerShell ISE window, type the following command, and then press
Enter:
Add-NlbClusterNode -InterfaceName "Ethernet" -NewNodeName "LON-SVR2" -NewNodeInterface "Ethernet"
On LON-SVR1, in the Server Manager console, click the Tools menu, and then click Network Load
Balancing Manager.
2.
In the Network Load Balancing Manager console, verify that nodes LON-SVR1 and LON-SVR2 display
with the status of Converged for the LON-NLB cluster.
3.
4.
In the LON-NLB(172.16.0.42), on the Cluster Parameters tab, verify that the cluster is set to use the
Multicast operations mode.
5.
On the Port Rules tab, verify that there is a single port rule named All that starts at port 0 and ends
at port 65535 for both TCP and UDP protocols, and that it uses Single affinity.
6.
Results: After completing this exercise, you will have successfully implemented an NLB cluster.
2.
At the Windows PowerShell prompt, type each of the following commands, and then press Enter
after each command:
Cmd.exe
Mkdir c:\porttest
Xcopy /s c:\inetpub\wwwroot c:\porttest
Exit
New-Website -Name PortTest -PhysicalPath C:\porttest -Port 5678
New-NetFirewallRule -DisplayName PortTest -Protocol TCP -LocalPort 5678
3.
4.
Click drive C, double-click the porttest folder, and then double-click iis-85.png. This will open the
file in Microsoft Paint.
5.
Select the color blue from the palette, and then use the paintbrush to mark the IIS logo in a distinctive manner.
6.
7.
Switch to LON-DC1.
8.
Click Start.
9.
10. In the Internet Explorer address bar, type http://LON-SVR2:5678, and then press Enter.
11. Verify that the IIS Start page with the IIS logo distinctively marked with blue displays.
12. Switch to LON-SVR1.
13. On LON-SVR1, switch to Network Load Balancing Manager.
14. In the Network Load Balancing Manager console, right-click LON-NLB, and then click Cluster
Properties.
15. In the LON-NLB(172.16.0.42), on the Port Rules tab, select the All port rule, and then click
Remove.
16. On the Port Rules tab, click Add.
17. In the Add/Edit Port Rule dialog box, enter the following information, and then click OK:
Port range: 80 to 80
Protocols: Both
Affinity: None
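Note: The port rule change in steps 15 through 17 can also be made with the NLB cmdlets. A sketch, assuming it is run on a cluster node and that the parameter names shown here match the installed NetworkLoadBalancingClusters module:
# Remove the default rule that covers all ports (referenced by any port in its range)
Remove-NlbClusterPortRule -Port 80 -Force
# Add a rule that load balances only port 80 with no affinity
Add-NlbClusterPortRule -StartPort 80 -EndPort 80 -Protocol Both -Affinity None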
Switch to LON-DC1.
2.
Click Start.
3.
4.
In the Internet Explorer address bar, type http://lon-nlb, and then press Enter.
5.
Click the Refresh icon 20 times. Verify that you see web pages with and without the distinctive red
marking.
6.
7.
In the address bar, enter the address http://LON-NLB:5678, and then press Enter.
8.
In the address bar, click the Refresh icon 20 times. Verify that you are able to view only the web page
with the distinctive blue marking.
Switch to LON-SVR1.
2.
3.
4.
Click the LON-NLB node. Verify that node LON-SVR1 displays as Suspended, and that node LON-SVR2 displays as Converged.
5.
6.
7.
Click the LON-NLB node. Verify that both nodes LON-SVR1 and LON-SVR2 now display as
Converged. You might have to refresh the view.
Results: After completing this exercise, you will have successfully configured and managed an NLB cluster.
2.
3.
Switch to LON-DC1.
4.
5.
In the Internet Explorer address bar, type the address http://LON-NLB, and then press Enter.
6.
Refresh the website 20 times. Verify that the website is available while LON-SVR1 reboots, but that it
does not display the distinctive red mark on the IIS logo until LON-SVR1 has restarted.
Sign in to LON-SVR1 with the username Adatum\Administrator and the password Pa$$w0rd.
2.
3.
In Server Manager, click the Tools menu, and then click Network Load Balancing Manager.
4.
In the Network Load Balancing Manager console, right-click LON-SVR2, click Control Host, and then
click Drainstop.
5.
Switch to LON-DC1.
6.
In Internet Explorer, in the address bar, type http://lon-nlb, and then press Enter.
7.
Refresh the site 20 times, and verify that only the welcome page with the red IIS logo displays.
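Note: The suspend, resume, and drainstop operations used in this exercise have Windows PowerShell equivalents. A sketch, assuming the node names from the lab:
# Suspend and resume a node
Suspend-NlbClusterNode -HostName LON-SVR1
Resume-NlbClusterNode -HostName LON-SVR1
# Drainstop a node so that existing connections complete before it stops handling traffic
Stop-NlbClusterNode -HostName LON-SVR2 -Drain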
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have successfully validated high availability for the NLB
cluster.
On LON-SVR3, in the Server Manager, click Tools, and then click the iSCSI Initiator.
2.
3.
4.
5.
In the IP address or DNS name box, type 172.16.0.21, and then click OK.
6.
7.
Click Refresh.
8.
9.
Select Add this connection to the list of Favorite Targets, and then click OK two times.
10. On LON-SVR4, in the Server Manager, click Tools, and then click iSCSI Initiator.
11. In the Microsoft iSCSI dialog box, click Yes.
12. Click the Discovery tab.
13. Click Discover Portal.
14. In the IP address or DNS name box, type 172.16.0.21, and then click OK.
15. Click the Targets tab.
16. Click Refresh.
17. In the Targets list, select iqn.1991-05.com.microsoft:lon-svr1-target1-target, and then click
Connect.
18. Select Add this connection to the list of Favorite Targets, and then click OK two times.
19. On LON-SVR3, in the Server Manager, click Tools, and then click Computer Management.
20. Expand Storage, and then click Disk Management.
21. Right-click Disk 1, and then click Online.
22. Right-click Disk 1, and then click Initialize disk. In the Initialize Disk dialog box, click OK.
23. Right-click the unallocated space next to Disk 1, and then click New Simple Volume.
24. On the Welcome page, click Next.
25. On the Specify Volume Size page, click Next.
26. On the Assign Drive Letter or Path page, click Next.
27. On the Format Partition page, in the Volume Label box, type Data. Select the Perform a quick
format check box, and then click Next.
28. Click Finish.
Note: If the Microsoft Windows window pops up with a prompt to format the disk, click
Cancel.
29. Repeat steps 21 through 28 for Disk 2 and Disk 3.
Note: Use Data2 and Data3 for Volume Labels.
30. Close the Computer Management window.
31. On LON-SVR4, in the Server Manager, click Tools, and then click Computer Management.
32. Expand Storage, and then click Disk Management.
Right-click Disk Management, and then click Refresh.
34. Right-click Disk 1, and then click Online.
35. Right-click Disk 2, and then click Online.
36. Right-click Disk 3, and then click Online.
37. Close the Computer Management window.
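Note: The iSCSI connections and disk preparation above can also be scripted. A minimal sketch, assuming the target portal address and IQN shown in the steps (run the disk commands once per disk, adjusting the disk number and volume label):
# Connect to the iSCSI target and make the connection persistent
New-IscsiTargetPortal -TargetPortalAddress 172.16.0.21
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:lon-svr1-target1-target" -IsPersistent $true
# Bring a disk online, initialize it, and create a formatted volume
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"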
On LON-SVR3, if it is not open, click the Server Manager icon to open Server Manager.
2.
3.
4.
5.
On the Select destination server page, make sure that Select server from the server pool is
selected, and then click Next.
6.
7.
On the Select features page, in the Features list, click Failover Clustering. In the Add features that
are required for Failover Clustering? window, click Add Features. Click Next.
8.
9.
When installation is complete, the message Installation succeeded on LON-SVR3 displays. Click
Close.
On LON-SVR3, in the Server Manager, click Tools, and then click Failover Cluster Manager.
2.
In the Actions pane of the Failover Cluster Manager, click Validate Configuration.
3.
4.
In the Enter Name box, type LON-SVR3, and then click Add.
5.
6.
7.
Verify that Run all tests (recommended) is selected, and then click Next.
8.
9.
Wait for the validation tests to finish. This might take up to five minutes. On the Summary page, click
View Report.
10. Verify that all tests completed without errors. Some warnings are expected.
11. Close Internet Explorer.
12. On the Summary page, click to remove the check mark next to Create the cluster now using the
validated nodes, and click Finish.
On LON-SVR3, in the Failover Cluster Manager, in the center pane, under Management, click Create
Cluster.
2.
On the Before You Begin page of the Create Cluster Wizard, read the information.
3.
Click Next, in the Enter server name box, type LON-SVR3, and then click Add. Type LON-SVR4,
and then click Add.
4.
5.
In Access Point for Administering the Cluster, in the Cluster Name box, type Cluster1.
6.
7.
In the Confirmation dialog box, verify the information, and then click Next.
8.
On the Summary page, click Finish to return to the Failover Cluster Manager.
2.
In the right pane, locate a disk that is assigned to Available Storage. You can see this in the
Assigned To column. Right-click that disk, and then click Add to Cluster Shared Volumes. If
possible, use Cluster Disk 2.
3.
Results: After this exercise, you will have installed and configured the failover clustering feature.
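Note: The feature installation, validation, cluster creation, and CSV configuration in this exercise can be condensed into a few Windows PowerShell commands. A sketch, assuming the node, cluster, and disk names from the lab:
# On each node, install the failover clustering feature
Install-WindowsFeature Failover-Clustering -IncludeManagementTools
# Validate the nodes, create the cluster, and add a disk to Cluster Shared Volumes
Test-Cluster -Node LON-SVR3,LON-SVR4
New-Cluster -Name Cluster1 -Node LON-SVR3,LON-SVR4
Add-ClusterSharedVolume -Name "Cluster Disk 2"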
On LON-SVR4, in the Server Manager, click Dashboard, and then click Add roles and features.
2.
3.
4.
On the Select destination server page, select LON-SVR4.Adatum.com and click Next.
5.
On the Select server roles page, expand File and Storage Services (1 of 12 installed), expand File
and iSCSI services, and select File Server.
6.
7.
8.
9.
On LON-SVR4, in the Server Manager console, click Tools, and open Failover Cluster Manager.
2.
3.
4.
In the New Share Wizard, on the Select the profile for this share page, click SMB Share Quick,
and then click Next.
5.
On the Select the server and the path for this share page, click Next.
6.
On the Specify share name page, in the Share name box, type Docs, and then click Next.
7.
On the Configure share settings page, review the available options, do not make any changes, and
then click Next.
8.
9.
On LON-SVR4, in the Failover Cluster Manager, click Roles, right-click AdatumFS, and then click
Properties.
2.
3.
4.
5.
6.
7.
Click OK.
Results: After this exercise, you will have configured a highly available file server.
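Note: The highly available file server role and the Docs share can also be created from Windows PowerShell. A sketch, assuming the role uses the disk named Cluster Disk 1 and that the share folder already exists on that disk (the path shown is a placeholder):
# Create the clustered file server role on available storage
Add-ClusterFileServerRole -Name AdatumFS -Storage "Cluster Disk 1"
# Create the share scoped to the clustered file server name
New-SmbShare -Name Docs -Path "<folder on the clustered disk>" -ScopeName AdatumFS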
On LON-DC1, open File Explorer, and in the Address bar, type \\AdatumFS\, and then press Enter.
2.
Verify that you can access the location and that you can open the Docs folder. Create a test text
document inside this folder.
3.
4.
Expand Cluster1.adatum.com, and then click Roles. Note the current owner of AdatumFS.
Note: You can view the owner in the Owner node column. It will be either LON-SVR3 or
LON-SVR4.
5.
6.
In the Move Clustered Role dialog box, select the cluster node (it will be either LON-SVR3 or LON-SVR4), and then click OK.
7.
8.
Switch to the LON-DC1 computer, and verify that you can still access the \\AdatumFS\ location.
Task 2: Validate the failover and quorum configuration for the file server role
1.
2.
Note: You can view the owner in the Owner node column. It will be either LON-SVR3 or
LON-SVR4.
3.
Click Nodes, and then select the node that is the current owner of the AdatumFS role.
4.
Right-click the node, select More Actions, and then click Stop Cluster Service.
5.
Verify that AdatumFS has moved to another node. To do this, click Roles and verify that AdatumFS is
running.
6.
Switch to the LON-DC1 computer, and verify that you can still access the \\AdatumFS\ location.
7.
Switch to the LON-SVR3 computer, and on the Failover Cluster Manager, click Nodes. Right-click the
stopped node, select More Actions, and then click Start Cluster Service.
8.
Expand Storage, and then click Disks. In the center pane, right-click the disk that is assigned to Disk
Witness in Quorum (Note: you can view this in the Assigned to column.)
9.
10. Switch to LON-DC1 and verify that you can still access the \\AdatumFS\ location. By doing this, you
verified that the cluster is still running, even if the witness disk is offline.
11. Switch to the LON-SVR3 computer, and in Failover Cluster Manager, expand Storage, click Disks,
right-click the disk that is in Offline status, and then click Bring Online.
12. Right-click Cluster1.Adatum.com, select More Actions, and then click Configure Cluster Quorum Settings.
13. On the Before You Begin page, click Next.
14. On the Select Quorum Configuration Option page, click Advanced quorum configuration, and
then click Next.
15. On the Select Voting Configuration page, review the available settings. Notice that you can select
node or nodes that will or will not have votes in the cluster. Do not make any changes, and then click
Next.
16. On the Select Quorum Witness page, make sure that Configure a disk witness is selected, and
then click Next.
17. On the Configure Storage Witness page, select Cluster Disk 3, and then click Next.
18. On the Confirmation page, click Next.
19. On the Summary page, click Finish.
Results: After this exercise, you will have tested the failover scenarios.
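Note: The failover and quorum operations validated above map to the following Windows PowerShell sketch, which assumes the role, node, and disk names from the lab:
# Move the clustered role to another node
Move-ClusterGroup -Name AdatumFS -Node LON-SVR4
# Stop and restart the cluster service on a node to force a failover
Stop-ClusterNode -Name LON-SVR3
Start-ClusterNode -Name LON-SVR3
# Reconfigure the quorum to use a specific disk witness
Set-ClusterQuorum -DiskWitness "Cluster Disk 3"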
2.
In the Add Roles and Features Wizard, on the Before You Begin page, click Next.
3.
4.
On the Select destination server page, make sure that Select server from the server pool is
selected, and then click Next.
5.
6.
On the Select features page, in the list of features, click Failover Clustering. In Add features that
are required for Failover Clustering? dialog box, click Add Features, and then click Next.
7.
8.
9.
Switch to LON-SVR3. Open Server Manager, click Tools, and then click Windows Firewall with
Advanced Security.
10. In the Windows Firewall with Advanced Security window, click Inbound Rules.
11. In the rules list, find the rule Inbound Rule for Remote Shutdown (RPC-EP-In). Verify that the rule is
enabled. If it is not enabled, right-click the rule, and then select Enable Rule.
12. In the rules list, find the rule Inbound Rule for Remote Shutdown (TCP-In). Verify that the rule is
enabled. If it is not enabled, right-click the rule, and then select Enable Rule.
13. Close the Windows Firewall with Advanced Security window.
14. Switch to LON-SVR4, and repeat steps 9 through 13.
15. On LON-DC1, in the Server Manager dashboard, click Tools, and then click Cluster-Aware
Updating.
16. In the Cluster-Aware Updating window, in the Connect to a failover cluster drop-down list box,
select CLUSTER1, and then click Connect.
17. In the Cluster Actions pane, click Preview updates for this cluster.
18. In the Cluster1-Preview Updates window, click Generate Update Preview List. After several minutes,
updates will display in the list. Review the updates, and then click Close.
On LON-DC1, in the Cluster-Aware Updating console, click Apply updates to this cluster.
2.
3.
On the Advanced options page, review the options for updating, and then click Next.
4.
5.
6.
In the Cluster nodes pane, you can review the progress of the updating process.
Note: Remember that one node of the cluster is in a waiting state, and the other node is
restarting after it is updated.
7.
8.
Sign in to LON-SVR3 with the username Adatum\Administrator and the password Pa$$w0rd.
9.
On LON-SVR3, in the Server Manager, click Tools, and then click Cluster-Aware Updating.
10. In the Cluster-Aware Updating dialog box, in the Connect to a failover cluster drop-down list box,
select CLUSTER1. Click Connect.
11. Click the Configure cluster self-updating options in the Cluster Actions pane.
12. On the Getting Started page, click Next.
13. On the Add CAU Clustered Role with Self-Updating Enabled page, click Add the CAU clustered
role, with self-updating mode enabled, to this cluster, and then click Next.
14. On the Specify self-updating schedule page, click Weekly, in the Time of day box, select 4:00 AM,
and then in the Day of the week box, select Sunday. Click Next.
15. On the Advanced Options page, click Next.
16. On the Additional Update Options page, click Next.
17. On the Confirmation page, click Apply.
18. After the clustered role is added successfully, click Close.
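Note: Cluster-Aware Updating can also be driven from Windows PowerShell. A sketch, assuming the cluster name from the lab:
# Preview the updates that would be applied to the cluster
Invoke-CauScan -ClusterName CLUSTER1
# Apply updates to the cluster now
Invoke-CauRun -ClusterName CLUSTER1 -Force
# Add the CAU clustered role with self-updating enabled
Add-CauClusterRole -ClusterName CLUSTER1 -DaysOfWeek Sunday -Force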
2.
On the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Repeat steps two and three for 20412D-LON-SVR1, 20412D-LON-SVR3, and 20412D-LON-SVR4.
2.
3.
4.
On the Before You Begin page of the Import Virtual Machine Wizard, click Next.
5.
6.
Note: The drive letter might be different based upon the number of drives on the physical
host machine.
7.
On the Select Virtual Machine page, select 20412D-LON-CORE, and then click Next.
8.
9.
On the Connect network page, ensure that External Network is selected, and then click Next.
2.
3.
4.
In the Replication Configuration pane, click Enable this computer as a Replica server.
5.
6.
In the Authorization and storage section, click Allow replication from any authenticated server,
and then click Browse.
7.
Click Computer, double-click Local Disk (E), and then click New folder. Type VMReplica for the
folder name, and press Enter. Select the E:\VMReplica\ folder, and click Select Folder.
8.
9.
In the Settings window, read the notice, and then click OK.
14. In the right pane, in the rule list, find and right-click the Hyper-V Replica HTTP Listener (TCP-In)
rule, and then click Enable Rule.
15. Close the Windows Firewall with Advanced Security console, and close the Windows Firewall.
16. Repeat steps 1 through 15 on LON-HOST1.
On LON-HOST1, open the Hyper-V Manager console. Click LON-HOST1, and right-click 20412D-LON-CORE.
2.
3.
4.
5.
In the Select Computer window, type LON-HOST2, click Check Names, and then click OK. Click Next.
Next.
6.
On the Specify Connection Parameters page, review the settings, and ensure that Use Kerberos
authentication (HTTP) is selected, and then click Next.
7.
On the Choose Replication VHDs page, ensure that 20412D-LON-CORE.vhd is selected, and then
click Next.
8.
On the Configure Replication Frequency page, select 30 seconds from drop-down list box, and
then click Next.
9.
On the Configure Additional Recovery Points page, select Maintain only the latest recovery
point, and then click Next.
10. On the Choose Initial Replication Method page, click Send initial copy over the network, select
Start replication immediately, and then click Next.
11. On the Completing the Enable Replication Wizard page, click Finish.
12. Wait five to seven minutes. You can monitor the progress of the initial replication in the Status
column in the Hyper-V Manager console. When it completes (progress reaches 100 percent), ensure
that 20412D-LON-CORE has appeared on LON-HOST2 in Hyper-V Manager.
2.
3.
Review content of the window that appears, and ensure that there are no errors.
4.
Click Close.
5.
On LON-HOST1, open Hyper-V Manager, and verify that 20412D-LON-CORE is turned off.
6.
7.
In the Planned Failover window, ensure that the option Start the Replica virtual machine after
failover is selected, and then click Fail Over.
8.
9.
Results: After completing this exercise, you will have configured Hyper-V Replica.
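Note: The Replica server configuration, replication enablement, and planned failover in this exercise can also be performed with the Hyper-V cmdlets. A sketch, assuming the host, virtual machine, and storage names used above:
# On each host: enable the firewall rule and configure the host as a Replica server using Kerberos over HTTP
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "E:\VMReplica"
# On LON-HOST1: enable replication to LON-HOST2 and start the initial copy
Enable-VMReplication -VMName "20412D-LON-CORE" -ReplicaServerName LON-HOST2 -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "20412D-LON-CORE"
# Planned failover: prepare on the primary, then fail over and start the virtual machine on the Replica host
Start-VMFailover -VMName "20412D-LON-CORE" -Prepare
Start-VMFailover -VMName "20412D-LON-CORE"
Start-VM -Name "20412D-LON-CORE"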
On LON-HOST1, open the Server Manager, click Tools, and then click iSCSI Initiator. At the
Microsoft iSCSI prompt, click Yes.
2.
3.
4.
In the IP address or DNS name box, type 172.16.0.21, and then click OK.
5.
6.
Click Refresh.
7.
8.
Select Add this connection to the list of Favorite Targets, and click OK.
9.
10. On LON-HOST2, open Server Manager, click Tools, and then click iSCSI Initiator.
11. In the Microsoft iSCSI dialog box, click Yes.
12. Click the Discovery tab.
13. Click Discover Portal.
14. In the IP address or DNS name box, type 172.16.0.21, and then click OK.
15. Click the Targets tab.
16. Click Refresh.
17. In the Discovered targets list, select iqn.1991-05.com.microsoft:lon-svr1-target1-target, and
then click Connect.
18. Select Add this connection to the list of Favorite Targets, and click OK. To close iSCSI Initiator
Properties, click OK.
19. On LON-HOST2, in the Server Manager window, click Tools, and then click Computer Management.
20. Expand Storage, and click Disk Management.
21. Right-click Disk 2, and click Online. (Note: The disk letter and number might be different based upon
the number of drives on the physical host machine.)
22. Right-click Disk 2, and click Initialize Disk. In the Initialize Disk dialog box, click OK.
23. Right-click the unallocated space next to Disk 2, and click New Simple Volume.
24. On the Welcome page, click Next.
25. On the Specify Volume Size page, click Next.
26. On the Assign Drive Letter or Path page, click Next.
27. On the Format Partition page, in the Volume label box, type ClusterDisk. Select the Perform a
quick format check box, and click Next.
28. Click Finish.
29. Repeat steps 21 through 28 for Disk 3 and Disk 4. In step 27, provide the name ClusterVMs for Disk 3
and Quorum for Disk 4.
30. On LON-HOST1 in the Server Manager, click Tools, and then click Computer Management.
31. Expand Storage, and click Disk Management.
32. Right-click Disk Management, and click Refresh.
33. Right-click Disk 2, and click Online.
34. Right-click Disk 3, and click Online.
35. Right-click Disk 4, and click Online.
On LON-HOST1, on the taskbar, to open the Server Manager, click the Server Manager icon.
2.
3.
4.
5.
On the Select destination server page, ensure that Select server from the server pool is selected,
and then click Next.
6.
7.
On the Select features page, in the Features list, click Failover Clustering. In the Add features that
are required for failover clustering prompt, click Add Features, and then click Next.
8.
9.
2.
3.
In the Add Disks to Cluster dialog box, verify that all disks are selected, and then click OK.
4.
Verify that all disks appear available for cluster storage in Failover Cluster Manager.
5.
Select the disk that displays the Volume name ClusterVMs. Right-click the ClusterVMs disk, and
select Add to Cluster Shared Volumes. (Note: Click the disk, and the Volume name will display).
6.
Right-click VMCluster.adatum.com, select More Actions, and then click Configure Cluster
Quorum Settings. Click Next.
7.
On the Select Quorum Configuration Option page, click Use default quorum configuration, and
then click Next.
8.
9.
Results: After completing this exercise, the students will have the failover clustering infrastructure
configured for Hyper-V.
Ensure that LON-HOST1 is the owner of the ClusterVMs disk in Failover Cluster Manager. If it is not,
then move the ClusterVMs resource to LON-HOST1 before doing this procedure.
2.
In the Failover Cluster Manager console, click Roles, and then in the Actions pane, click Virtual
Machines.
2.
3.
4.
5.
On the Specify Name and Location page, type TestClusterVM for the Name, click Store the virtual
machine in a different location, and then click Browse.
6.
7.
8.
Click Next.
9.
On the Assign Memory page, type 1536, and then click Next.
10. On the Configure Networking page, click External Network, and then click Next.
11. On the Connect Virtual Hard Disk page, click Use an existing virtual hard disk, and then click
Browse.
12. Locate C:\ClusterStorage\Volume1, select 20412D-LON-CORE.vhd, and then click Open.
13. Click Next, and click Finish.
14. On the Summary page, click Finish.
15. Right-click the TestClusterVM, and click Settings.
16. In the Settings for TestClusterVM on LON-HOST1 window, expand Processor in the left navigation pane, and
then click Compatibility.
17. In the right pane, select the Migrate to a physical computer with a different processor version
check box.
18. Click OK.
19. Right-click TestClusterVM, and click Start.
20. Ensure that the machine successfully starts.
2.
3.
Right-click TestClusterVM, select Move, select Live Migration, and then click Select Node.
4.
5.
6.
Ensure that you can access and operate the virtual machine while it is migrating to another host.
7.
2.
3.
4.
5.
In the Hyper-V Manager, on the Actions pane, click New, and then click Virtual Machine.
6.
On the Before You Begin page of the New Virtual Machine Wizard, click Next.
7.
On the Specify Name and Location page of the New Virtual Machine Wizard, select Store the
virtual machine in a different location, enter the following values, and then click Next:
Name: LON-GUEST1
8.
9.
On the Assign Memory page of the New Virtual Machine Wizard, enter a value of 1024 MB, select
the Use Dynamic Memory for this virtual machine option, and then click Next.
10. On the Configure Networking page of the New Virtual Machine Wizard, select External Network,
and then click Next.
11. On the Connect Virtual Hard Disk page, choose Use an existing virtual hard disk. Click Browse,
and browse to E:\Program Files\Microsoft Learning\20412\Drives\LON-GUEST1\20412D-LON-CORE.vhd. Click Open, and click Finish.
12. In the central pane of Hyper-V Manager, click LON-GUEST1.
13. In the Actions pane, click Start. Wait until the virtual machine is fully started.
14. Switch back to the Hyper-V Manager console, and in the Actions pane, click Move.
15. On the Before You Begin page, click Next.
16. On the Choose Move Type page, select Move the virtual machine's storage, and then click Next.
17. On the Choose Options for Moving Storage page, select Move all of the virtual machine's data
to a single location, and then click Next.
18. On the Choose a new location for virtual machine page, click Browse.
19. Locate C:\, and create a new folder named Guest1. Click Select Folder.
20. Click Next.
21. On the Summary page, click Finish. Wait for the move process to finish. While the virtual machine is
moving, you can connect to it and verify that it is fully operational.
22. Shut down all running virtual machines.
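Note: The live migration and storage migration performed in the preceding tasks correspond to the following cmdlet sketch, assuming the virtual machine, node, and destination folder names from the lab:
# Live migrate the clustered virtual machine to the other node
Move-ClusterVirtualMachineRole -Name TestClusterVM -Node LON-HOST2 -MigrationType Live
# Move a running virtual machine's storage to a new location
Move-VMStorage -VMName LON-GUEST1 -DestinationStoragePath "C:\Guest1"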
Restart LON-HOST1.
2.
When you are prompted with the boot menu, select Windows Server 2012, and then press Enter.
3.
4.
Results: After completing this exercise, the students will have configured the virtual machine as highly
available.
Switch to LON-SVR1.
2.
In the Server Manager, in the Welcome pane, click Add roles and features.
3.
In the Add Roles and Features Wizard, on the Before you begin page, click Next.
4.
5.
6.
7.
On the Select features page, select Windows Server Backup, and then click Next.
8.
9.
On the Installation progress page, wait until the Installation succeeded on LON-SVR1.Adatum.com message displays, and then click Close.
On LON-SVR1, in the Server Manager, click Tools, and then click Windows Server Backup.
2.
3.
4.
In the Backup Schedule Wizard, on the Getting Started page, click Next.
5.
On the Select Backup Configuration page, click Full server (recommended), and then click Next.
6.
On the Specify Backup Time page, next to Select time of day, select 1:00 AM, and then click Next.
7.
On the Specify Destination Type page, click Backup to a shared network folder, and then click
Next. Review the warning, and then click OK.
8.
On the Specify Remote Shared Folder page, in the Location text box, type \\LON-DC1\Backup,
and then click Next.
9.
In the Register Backup Schedule dialog box, in the Username text box, type Administrator, and in
the Password text box, type Pa$$w0rd, and then click OK.
2.
3.
In the Backup Once Wizard, on the Backup Options page, click Different options, and then click
Next.
4.
On the Select Backup Configuration page, click Custom, and then click Next.
5.
6.
Expand Local disk (C:), select the Financial Data check box, click OK, and then click Next.
7.
On the Specify Destination Type page, click Remote shared folder, and then click Next.
8.
On the Specify Remote Folder page, type \\LON-DC1\Backup, and then click Next.
9.
10. On the Backup Progress page, after the backup is complete, click Close.
Results: After you complete this exercise, you will have configured the Windows Server Backup feature,
scheduled a backup task, and completed an on-demand backup.
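Note: The one-time backup created above can also be run with the Windows Server Backup cmdlets. A minimal sketch, assuming the Financial Data folder is located at C:\Financial Data and that the same network share and credentials are used:
# Build and run a one-time backup policy for the Financial Data folder targeted at \\LON-DC1\Backup
$policy = New-WBPolicy
Add-WBFileSpec -Policy $policy -FileSpec (New-WBFileSpec -FileSpec "C:\Financial Data")
Add-WBBackupTarget -Policy $policy -Target (New-WBBackupTarget -NetworkPath "\\LON-DC1\Backup" -Credential (Get-Credential Adatum\Administrator))
Start-WBBackup -Policy $policy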
2.
In File Explorer, browse to Local disk (C:), right-click Financial Data, and then click Delete.
In the Windows Server Backup console, in the Actions pane, click Recover.
2.
On the Getting Started page, click A backup stored on another location, and then click Next.
3.
On the Specify Location Type page, click Remote shared folder, and then click Next.
4.
On the Specify Remote Folder page, type \\LON-DC1\Backup, and then click Next.
5.
6.
7.
On the Select Items to Recover page, expand LON-SVR1, click Local disk (C:), and in the right
pane, select Financial Data, and then click Next.
8.
On the Specify Recovery Options page, under Another Location, type C:\, and then click Next.
9.
2.
In the Virtual Machines list, right-click 20412D-LON-DC1, and then click Revert.
3.
4.
Results: After completing this exercise, you will have tested and validated the procedure for restoring a
file from backup.