
SPCI104

POSTGRADUATE COURSE
M.Sc., Cyber Forensics and Information Security

FIRST YEAR
FIRST SEMESTER

CORE PAPER - IV

IT INFRASTRUCTURE AND
CLOUD COMPUTING

INSTITUTE OF DISTANCE EDUCATION


UNIVERSITY OF MADRAS
M.Sc., Cyber Forensics and Information Security CORE PAPER - IV
FIRST YEAR - FIRST SEMESTER IT INFRASTRUCTURE
AND CLOUD COMPUTING

WELCOME
Warm Greetings.

It is with great pleasure that I welcome you as a student of the Institute of Distance
Education, University of Madras. It is a proud moment for the Institute of Distance Education
as you enter the cafeteria system of learning envisaged by the University Grants Commission.
Yes, we have framed and introduced the Choice Based Credit System (CBCS) in semester
pattern from the academic year 2018-19. You are free to choose courses, as per the Regulations,
to attain the target total number of credits set for each course and for each degree programme.
What is a credit? To earn one credit in a semester you have to spend 30 hours in the learning
process. Each course has a weightage in terms of credits; credits are assigned by taking into
account its level of subject content. For instance, if a particular course or paper carries 4 credits,
then you have to spend 120 hours of self-learning in that semester. You are advised to plan a
strategy for devoting hours of self-study to the learning process. You will be assessed periodically
by means of tests, assignments and quizzes, whether in the classroom, the laboratory or field
work. In the case of PG (UG) programmes, Continuous Internal Assessment carries 20 (25)
percent and the End Semester University Examination 80 (75) percent of the maximum score
for a course / paper. The theory paper in the end semester examination will bring out your
various skills, namely basic knowledge of the subject, memory recall, application, analysis,
comprehension and descriptive writing. We also keep these aims in mind while training you to
conduct experiments, analysing your performance during laboratory work and observing the
outcomes that bring out the truth from the experiment, and we measure these skills in the end
semester examination. You will be guided by well experienced faculty.

I invite you to join the CBCS in the semester system to gain rich knowledge at your own
pace, at your will and wish. Choose the right courses at the right times so as to raise your flag of
success. We always encourage and enlighten you to excel, and empower you. We are the cross
bearers who will make you a torch bearer with a bright future.

With best wishes from mind and heart,

DIRECTOR

M.Sc., Cyber Forensics and Information Security CORE PAPER - IV
FIRST YEAR - FIRST SEMESTER IT INFRASTRUCTURE
AND CLOUD COMPUTING

COURSE WRITER & EDITOR

Dr. N. Kala
Director i/c.
Cyber Forensics and Information Security,
University of Madras

Dr. S. Thenmozhi
Associate Professor
Department of Psychology
Institute of Distance Education
University of Madras
Chepauk, Chennai - 600 005.

© UNIVERSITY OF MADRAS, CHENNAI 600 005.

M.Sc., Cyber Forensics and Information Security

FIRST YEAR

FIRST SEMESTER

Core Paper - IV

IT INFRASTRUCTURE AND CLOUD COMPUTING


SYLLABUS

Unit 1: Computer Hardware Basics


· Basics of Motherboard including CMOS and BIOS
· Working of processors and types of processors
· System memory
· Introduction to RAM
· System storage devices
o Types of hard disks - FAT, NTFS, RAID
o Optical drives
o Removable storage devices
o Tape drives and backup systems
· Common computer ports – Serial – Parallel - USB ports etc.
· Different input systems - Key Board - Mouse etc.
· Display arrays – VGA – SVGA – AGP
· Additional display cards
· Monitors and their types
· Printers and their types

Unit 2: Operating Systems


· Operating system basics
o Functions of operating system
o Functions of Client Operating System

o Functions of Server operating system
o Introduction to Command line operation
· Basics on files and directories
· Details about system files and boot process
· Introduction to device drivers

Unit 3: Computer Principles and a Black Box Model of the PC


· Memory and processor
· Address and data buses
· Stored program concept
· Physical components of the PC and how they fit together and interact
· Basic electrical safety
· Motherboards and the design of the PC
· Dismantling and re-building PCs
· Power On Self Test and boot sequence
o Architecture of real mode
o Interrupts
o Start of boot sequence
o Power On Self Test (POST)
Unit 4: Enterprise and Active Directory Infrastructure
· Overview of Enterprise Infrastructure Integration
· Requirement to understand the Enterprise Infrastructure
· Enterprise Infrastructure Architecture and its components
· Overview of Active Directory (AD)
· Kerberos
· LDAP
· Ticket Granting Ticket (TGT)
· Forest
· Domain
· Organizational Unit (OU)
· Site Topology of a Forest

· Trust Relationships
· Object – Creation, Modification, Management and Deletion
o User
o Group
o Computer
o OU
o Domain
· Group Policy (GPO) Management
o Structure of GPO
o Permissions and Privileges
o GPO Security Settings
§ Password Settings
§ Account Lockout Settings
§ Account Timeout Settings
§ USB Enable/ Disable Settings
§ Screen Saver Settings
§ Audit Logging Settings
§ Windows Update Settings
§ User Restriction Settings
o Creation of GPO
o Linking a GPO
o Application of GPO
§ Linking a GPO
§ Enforcing a GPO
§ GPO Status
§ Inclusion / Exclusion of Users/ Groups in a GPO
o Precedence of GPO
o Loopback Processing of GPO
o Fine-Grain Policy / Fine-Grain Password Policy
· Addition of Windows Workstations to Domain and Group Policy Synchronisation
· Addition of Non-Windows Workstations in AD Environment
· Integrating Finger-Print, Smart Card, RSA or secondary authentication source to
Active Directory
· Single-Sign On Integration
· Active Directory Hardening Guidelines
Unit 5: Cloud Computing
· Concept – Fundamentals of Cloud Computing
· Types of clouds
· Security Design and Architecture
· Cloud Computing Service Models
· The Characteristics of Cloud Computing
· Multi Tenancy Model
· Cloud Security Reference Model
· Cloud Computing Deploying Models
· Cloud Identity and Access Management
o Identity Provisioning – Authentication
o Key Management for Access Control – Authorization
o Infrastructure and Virtualization Security
o Hypervisor Architecture Concerns.

· Understanding Cloud Security


o Securing the Cloud
o The security boundary
o Security service boundary
o Security mapping
o Securing Data
o Brokered cloud storage access
o Storage location and tenancy
o Encryption
o Auditing and compliance
o Establishing Identity and Presence
o Identity protocol standards
M.Sc., Cyber Forensics and Information Security

FIRST YEAR

FIRST SEMESTER

Core Paper - IV

IT INFRASTRUCTURE AND CLOUD COMPUTING


SCHEME OF LESSONS

Sl.No. Title Page

1 Computer Hardware Basics 1

2 Operating Systems 49

3 Computer Principles and a Black Box Model of the PC 65

4 Enterprises Infrastructure Integration 95

5 Enterprises Active Directory Infrastructure 115

6 Cloud Computing 187


UNIT I
COMPUTER HARDWARE BASICS
Learning Objectives

After reading this lesson you will be able to understand

· Basic operation of computers
· Basics of the motherboard
· Major motherboard components and their functions
· Central Processing Unit (CPU)
· The computer's microprocessor
· Random Access Memory (RAM)
· Basic Input/Output System (BIOS)
· Complementary Metal Oxide Semiconductor (CMOS)
· Power-On Self-Test (POST)
· The computer cache memory
· The expansion buses
· Chipsets
· The CPU clock
· Switches and jumpers
· Components of a CPU
· System memory
· Virtual memory
· Protected memory
· Hard disks
· Optical drives
· Removable storage devices



Structure
1.1 Introduction

1.2 Basic Operation of Computers

1.3 Chipsets

1.4 The CPU Clock

1.5 Switches and Jumpers

1.6 Processor & Types

1.7 Components of a CPU

1.8 Introduction to RAM

1.9 System storage Devices

1.10 Optical Drives

1.11 Removable storage devices

1.12 File Systems

1.13 Redundant Array of Independent Disks RAID

1.14 Standard RAID Levels

1.15 Hierarchical File System (HFS)

1.16 Computer Ports

1.17 USB

1.18 Display Arrays

1.19 Monitors and their types

1.20 Printers and their types

1.1. Introduction
Personal computers became possible in 1974, when a small company named Intel started
selling an inexpensive computer chip called the 8080 microprocessor. A single 8080
microprocessor contained all of the electronic circuits necessary to create a programmable
computer. Almost immediately, a few primitive computers were developed using this
microprocessor. By the early 1980s, Steve Jobs and Steve Wozniak were mass-marketing Apple
computers and Bill Gates was working with IBM to mass-market the IBM personal computer. In
England, the Acorn and Sinclair computers were being sold. The Sinclair, a small keyboard unit
that plugged into a standard television and used an audio cassette player for storage, was
revolutionary in its day. By supplanting expensive, centralized mainframes, these small,
inexpensive computers made Bill Gates' dream of putting a computer in every home a distinct
possibility. Additionally, the spread of these computers around the world made a global network
of computers the next logical step.

1.2. Basic operation of computers


Each time a computer is turned on, it must familiarize itself with its internal components
and the peripheral world. This start-up process is called the boot process, because it is as if the
computer has to pull itself up by its bootstraps. The boot process has three basic stages: the
central processing unit reset, the Power-On Self-Test (POST) and the disk boot.

1.2.1.Basics of motherboard

The main printed circuit board in a computer is known as the motherboard. Other names
for this central computer unit are system board, mainboard, or printed wiring board (PWB). The
term motherboard is sometimes shortened to "mobo".

Numerous major components, crucial for the functioning of the computer, are attached to
the motherboard. These include the processor, memory, and expansion slots. The motherboard
connects directly or indirectly to every part of the PC.

The type of motherboard installed in a PC has a great effect on a computer's system
speed and expansion capabilities.

1.2.2 Major Motherboard Components

Figure 1.1 : Components of Mother Board

Detailed description of motherboard components is discussed in unit 3.

1.2.3. Central Processing Unit (CPU)

The CPU is the core of any computer. Everything depends on the CPU's ability to process
the instructions that it receives. So the first stage in the boot process is to get the CPU started
(reset) with an electric pulse. This pulse is usually generated when the power switch or button
is activated, but on some systems it can also be initiated over a network. Once the CPU is reset,
it starts the computer's basic input/output system (BIOS).

Figure 1.2: An Electrical Pulse resets the CPU, which in turn, activates the BIOS

1.2.4. The Computer’s Microprocessor

Also known as the microprocessor or simply the processor, the CPU is the computer's
brain. It is responsible for fetching, decoding, and executing program instructions as well as
performing mathematical and logical calculations.

The processor chip is identified by the processor type and the manufacturer. This
information is usually inscribed on the chip itself. For example, Intel 386, Advanced Micro Devices
(AMD) 386, Cyrix 486, Pentium MMX, Intel Core 2 Duo, or Intel Core i7.

If the processor chip is not present on the motherboard, the processor can be identified
by the processor socket, such as Socket 1 to Socket 8 or LGA 775, among others. This can help
to identify the processor that fits in the socket. For example, a 486DX processor fits into Socket 3.

1.2.5. Random Access Memory (RAM)

Random Access Memory, or RAM, usually refers to computer chips that temporarily store
dynamic data to enhance computer performance while working.

In other words, it is the working place of the computer, where active programs and data
are loaded so that any time the processor requires them, it doesn’t have to fetch them from the
hard disk.

Random access memory is volatile, meaning it loses its contents once power is turned
off. This is different from non-volatile memory, such as hard disks and flash memory, which do
not require a power source to retain data.

When a computer shuts down properly, any data in RAM that needs to be kept is written
back to permanent storage on the hard drive or flash drive. At the next start-up, RAM begins to
fill with the programs automatically loaded at startup, a process called booting. Later on, the
user opens other files and programs that are also loaded into memory.

1.2.6. Basic Input/Output System (BIOS)

BIOS stands for Basic Input/Output System. BIOS is a “read-only” memory, which consists
of low-level software that controls the system hardware and acts as an interface between the
operating system and the hardware. Most people know the term BIOS by another name—
device drivers, or just drivers. BIOS is essentially the link between the computer hardware and
software in a system.

All motherboards include a small block of Read Only Memory (ROM) which is separate
from the main system memory used for loading and running software. On PCs, the BIOS contains
all the code required to control the keyboard, display screen, disk drives, serial communications,
and a number of miscellaneous functions.

The system BIOS is a ROM chip on the motherboard used during the startup routine
(boot process) to check out the system and prepare to run the hardware. The BIOS is stored on
a ROM chip because ROM retains information even when no power is being supplied to the
computer. Some BIOS programs allow an individual to set a password and then until the password
is typed in the BIOS will not run and the computer will not function.

1.2.7. Complementary Metal Oxide Semiconductor Random Access Memory (CMOS RAM)

Motherboards also include a small separate block of memory made from CMOS RAM
chips, which is kept alive by a battery (known as the CMOS battery) even when the PC's power
is off. This prevents the configuration data from being lost, so the PC does not have to be
reconfigured each time it is powered on.

CMOS devices require very little power to operate.

The CMOS RAM is used to store basic Information about the PC’s configuration for
instance:

· Floppy disk and hard disk drive types

· Information about CPU

· RAM size

· Date and time

· Serial and parallel port information

· Plug and Play information

· Power Saving settings

Other important data kept in CMOS memory are the time and date, which are updated by
a Real-Time Clock (RTC).

1.2.8. Power-On Self-Test (POST)

The BIOS contains a program called the Power-On Self-Test (POST) that tests the
fundamental components of the computer. When the CPU first activates the BIOS, the POST
program is initiated. To be safe, the first test verifies the integrity of the CPU and the POST
program itself. The rest of the POST verifies that all of the computer's components are functioning
properly, including the disk drives, monitor, RAM and keyboard. Notably, after the BIOS is
activated and before the POST is complete, there is an opportunity to interrupt the boot process
and have it perform specific actions. For instance, Intel-based computers allow the user to open
the CMOS configuration tool at this stage.

Computers use CMOS RAM chips to retain the date, time, hard drive parameters and
other configuration details while the computer's main power is off. A small battery powers the
CMOS chips; older computers may not boot even when the main power is turned on because
this CMOS battery is depleted, causing the computer to "forget" its hardware settings. Using the
CMOS configuration tool, it is possible to check the system date and time, ascertain whether
the computer will try to find an operating system on the primary hard drive or on another disk
first, and change basic computer settings as needed. When collecting digital evidence from a
computer, it is often necessary to interrupt the boot process and examine CMOS settings such
as the system date and time, the configuration of hard drives, and the boot sequence; in some
instances it may be necessary to change the CMOS settings to ensure that the computer will
boot from a floppy diskette rather than the evidentiary hard drive.

In many computers the results of POST are checked against a permanent record stored
in a CMOS microchip. If there is a problem at any stage in the POST, the computer will emit a
series of beeps and possibly display an error message on the screen; the combination of beeps
indicates the particular error. When all of the hardware tests are complete, the BIOS instructs
the CPU to look for a disk containing an operating system.

1.2.9. The Computer Cache Memory

Cache memory is a small block of high-speed memory (RAM) that enhances PC


performance by pre-loading information from the (relatively slow) main memory and passing it
to the processor on demand.

Most CPUs have an internal cache memory (built into the processor) which is referred to
as Level 1 or primary cache memory. This can be supplemented by external cache memory
fitted on the motherboard. This is the Level 2 or secondary cache.

In modern computers, Levels 1 and 2 cache memory are built into the processor die. If a
third cache is implemented outside the die, it is referred to as the Level 3 (L3) cache.

CACHE:

A Cache is a small and very fast temporary storage memory. It is designed to speed up
the transfer of data and instructions. It is located inside or close to the CPU chip. It is faster than
RAM and the data/instructions that are most recently or most frequently used by CPU are
stored in cache.

The data and instructions are retrieved from RAM when the CPU uses them for the first
time, and a copy is stored in the cache. The next time the CPU needs that data or those
instructions, it first looks in the cache; if the required data is found there, it is retrieved from
cache memory instead of main memory. This speeds up the working of the CPU.
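
The look-in-cache-first behaviour described above can be sketched in a few lines of Python.
This is a simplified model rather than real hardware behaviour: the cache and main memory are
represented as dictionaries, and eviction of old cache entries is ignored.

    # Minimal sketch of a cache lookup: check the small, fast cache first and
    # fall back to (slower) main memory on a miss.
    cache = {}                                              # address -> value
    main_memory = {addr: addr * 2 for addr in range(1024)}  # stand-in for RAM

    def read(address):
        if address in cache:             # cache hit: fast path
            return cache[address]
        value = main_memory[address]     # cache miss: fetch from RAM
        cache[address] = value           # keep a copy for next time
        return value

    read(42)    # miss - loaded from main memory
    read(42)    # hit  - served from the cache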

Figure 1.3 : Types of Cache Memory



Types/Levels of Cache Memory

A computer can have several different levels of cache memory. The level numbers refer
to the distance from the CPU, where Level 1 is the closest. All levels of cache memory are
faster than RAM. The cache closest to the CPU is always faster but generally costs more and
stores less data than the other levels of cache.

The following are the different levels of cache memory.

Level 1 (L1) Cache

It is also called primary or internal cache. It is built directly into the processor chip. It has
a small capacity, from 8 KB to 128 KB.

Level 2 (L2) Cache

It is slower than the L1 cache, but its storage capacity is larger, i.e. from 64 KB to 16 MB.
Current processors contain an advanced transfer cache on the processor chip, which is a type
of L2 cache. The common size of this cache is from 512 KB to 8 MB.

Level 3 (L3) Cache

This cache is separate from the processor chip, on the motherboard. It exists on computers
that use the L2 advanced transfer cache. It is slower than the L1 and L2 caches. Personal
computers often have up to 8 MB of L3 cache.

1.2.10. The Expansion Buses

An expansion bus is an input/output pathway from the CPU to peripheral devices, and it is
typically made up of a series of slots on the motherboard. Expansion boards (cards) plug into
the bus. PCI is the most common expansion bus in a PC and other hardware platforms. Buses
carry signals such as data, memory addresses, power, and control signals from component to
component. Other bus types, such as ISA and EISA, are detailed in Unit 3.

Expansion buses enhance the PC's capabilities by allowing users to add missing features
to their computers by slotting adapter cards into expansion slots.

1.3. Chipsets
A chipset is a group of small circuits that coordinate the flow of data to and from a PC’s
key components. These key components include the CPU itself, the main memory, the secondary
cache, and any devices situated on the buses. A chipset also controls data flow to and from
hard disks and other devices connected to the IDE channels. For further details refer unit 3.

A computer chipset has two main components:

· The NorthBridge (also called the memory controller) is in charge of controlling


transfers between the processor and the RAM, which is why it is located physically
near the processor. It is sometimes called the GMCH, for Graphic and Memory
Controller Hub.

· The SouthBridge (also called the input/output controller or expansion controller)


handles communications between slower peripheral devices. It is also called the
ICH (I/O Controller Hub). The term “bridge” is generally used to designate a
component which connects two buses.

1.4. The CPU Clock


The CPU clock synchronizes the operation of all parts of the PC and provides the basic
timing signal for the CPU. Using a quartz crystal, the CPU clock breathes life into the
microprocessor by feeding it a constant flow of pulses.

For example, a 200 MHz CPU receives 200 million pulses per second from the clock. A 2
GHz CPU gets two billion pulses per second. Similarly, in any communications device, a clock
may be used to synchronize the data pulses between sender and receiver.

A “real-time clock,” also called the “system clock,” keeps track of the time of day and
makes this data available to the software. A “time-sharing clock” interrupts the CPU at regular
intervals and allows the operating system to divide its time between active users and/or
applications.

1.5. Switches and Jumpers


· DIP (Dual In-line Package) switches are small electronic switches found on the
circuit board that can be turned on or off just like a normal switch. They are very
small and so are usually flipped with a pointed object, such as the tip of a screwdriver,
a bent paper clip, or a pen top. Dip switches are obsolete and they are not found in
modern systems.

· Jumper pins are small protruding pins on the motherboard. A jumper cap or bridge
is used to connect or short a pair of jumper pins. When the bridge is connected to
any two pins, via a shorting link, it completes the circuit and a certain configuration
has been achieved.

· Jumper caps are metal bridges that close an electrical circuit. Typically, a jumper
consists of a plastic plug that fits over a pair of protruding pins. Jumpers are
sometimes used to configure expansion boards. By placing a jumper plug over a
different set of pins, board’s parameters can be changed.

NOTE: The jumper pins and jumper cap at the back of an IDE hard disk or a CD/DVD
ROM/writer can be examined as an example.

1.6. Processor & Types


The full form of CPU is Central Processing Unit. Alternatively, it is also known as the
processor, microprocessor or computer processor. A CPU is an electronic circuit in a computer
that fetches instructions or commands from the memory unit, performs arithmetic and logic
operations, and stores the processed data back in memory.

A CPU, or Central Processing Unit, is the heart of a computer and is installed in a designated
socket on the motherboard. Since a CPU performs a large number of calculations at high speed,
it heats up quickly; to keep its temperature down, a cooling fan is installed on top of it.

1.7. Components of a CPU

Figure 1.4 – Components of a CPU

Central processing unit (CPU), the hardware within a computer that executes a program:

· Microprocessor, a central processing unit contained on a single integrated circuit


(IC). A microprocessor is a silicon chip containing millions of microscopic transistors.
This chip functions as the computer’s brain. It processes the instructions or operations
contained within executable computer programs. Instead of taking instructions directly
off of the hard drive, the processor takes its instructions from memory. This greatly
increases the computer’s speed.

· Application-specific instruction set processor (ASIP), a component used in system-


on-a-chip design

· Graphics processing unit (GPU), a processor designed for doing dedicated graphics-
rendering computations

· Physics processing unit (PPU), a dedicated microprocessor designed to handle the


calculations of physics

· Digital signal processor (DSP), a specialized microprocessor designed specifically


for digital signal processing

§ Image processor, a specialized DSP used for image processing in digital cameras,
mobile phones or other devices

· Coprocessor

· Floating-point unit

· Network processor, a microprocessor specifically targeted at the networking


application domain

· Multi-core processor, a single component with two or more independent CPUs (called
"cores") on the same chip carrier or on the same die

· Front-end processor, a helper processor for communication between a host computer
and other devices.

1.7.1.Control Unit

The Control Unit is an internal part of a CPU that co-ordinates the instructions and data
flow between CPU and other components of the computer. It is the CU that directs the operations
of a central processing unit by sending timing and control signals.

1.7.2. Arithmetic Logic Unit

The ALU is an internal electronic circuitry of a CPU that performs all the arithmetic and
logical operations in a computer. The ALU receives three types of inputs.

· Control signal from CU (Control Unit)

· Data (operands) to be operated on

· Status information from operations done previously.

When all the instructions have been executed, the output data is stored in memory and
the status information is stored in the internal registers of the CPU.

1.7.3 Working of a CPU

All CPUs, regardless of their origin or type, perform a basic instruction cycle that consists
of three steps: fetch, decode and execute.

1.7.3.1. Fetch

A program consists of a number of instructions, and programs are stored in memory.
During this step, the CPU reads the instruction to be executed from a particular address in
memory. The program counter of the CPU keeps track of the address of that instruction.

1.7.3.2. Decode

Circuitry called the instruction decoder decodes the instructions fetched from memory.
The instructions are decoded into the various signals that control other areas of the CPU.

1.7.3.3. Execute

In the last step, the CPU executes the instruction. For example, it may store a value in a
particular register; the instruction pointer then points to the next instruction, which is stored at
the next address location.

1.7.4. Clock Speed

The speed of a processor is measured by the number of clock cycles the CPU can perform
in a second: the more clock cycles, the more instructions (calculations) it can carry out. CPU
speed is measured in hertz; modern processors have speeds measured in GHz (1 GHz = one
billion cycles per second).
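
As a quick check of the arithmetic, the following Python lines convert a clock speed in GHz
into cycles per second and per millisecond (assuming, for simplicity, one pulse per cycle).

    # Clock speed arithmetic: 1 GHz = 10**9 cycles per second.
    clock_hz = 2 * 10**9                  # a 2 GHz CPU
    cycles_per_ms = clock_hz / 1000       # 2,000,000 cycles every millisecond
    print(f"{clock_hz:,} cycles/second, {cycles_per_ms:,.0f} cycles/millisecond")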

1.7.5 System memory


1.7.5.1. Volatile memory

Volatile memory is computer memory that requires power to maintain the stored information.
Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM
(DRAM). SRAM retains its contents as long as the power is connected and is easy for interfacing,
but uses six transistors per bit. Dynamic RAM is more complicated for interfacing and control,
needing regular refresh cycles to prevent losing its contents, but uses only one transistor and
one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit
costs.

SRAM is not worthwhile for desktop system memory, where DRAM dominates, but is
used for their cache memories. SRAM is commonplace in small embedded systems, which
might only need tens of kilobytes or less. Forthcoming volatile memory technologies that aim at
replacing or competing with SRAM and DRAM include Z-RAM and A-RAM.

Non-volatile memory

Solid-state drives are one example of non-volatile memory.

Non-volatile memory is computer memory that can retain the stored information even
when not powered. Examples of non-volatile memory include read-only memory, flash memory,
most types of magnetic computer storage devices (e.g. hard disk drives, floppy disks and magnetic
tape), optical discs, and early computer storage methods such as paper tape and punched cards.

Forthcoming non-volatile memory technologies include FeRAM, CBRAM, PRAM, STT-


RAM, SONOS, RRAM, racetrack memory, NRAM, 3D XPoint, and millipede memory.

1.7.5.2. Semi-volatile memory

A third category of memory is “semi-volatile”. The term is used to describe a memory


which has some limited non-volatile duration after power is removed, but then data is ultimately
lost. A typical goal when using a semi-volatile memory is to provide high performance/durability/
etc. associated with volatile memories, while providing some benefits of a true non-volatile
memory.

For example, some non-volatile memory types can wear out, where a “worn” cell has
increased volatility but otherwise continues to work. Data locations which are written frequently
can thus be directed to use worn circuits. As long as the location is updated within some known
retention time, the data stays valid. If the retention time “expires” without an update, then the
value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows
a high write rate while avoiding wear on the not-worn circuits.

As a second example, an STT-RAM can be made non-volatile by building large cells, but
the cost per bit and write power go up, while the write speed goes down. Using small cells
improves cost, power, and speed, but leads to semi-volatile behavior. In some applications the
increased volatility can be managed to provide many benefits of a non-volatile memory, for
example by removing power but forcing a wake-up before data is lost; or by caching read-only
data and discarding the cached data if the power-off time exceeds the non-volatile threshold.

The term semi-volatile is also used to describe semi-volatile behavior constructed from
other memory types. For example, a volatile and a non-volatile memory may be combined,
where an external signal copies data from the volatile memory to the non-volatile memory, but
if power is removed without copying, the data is lost. Another example is a battery-backed
volatile memory: if external power is lost, there is some known period during which the battery
can continue to power the volatile memory, but if power is off for an extended time the battery
runs down and the data is lost.

1.7.5.3. Virtual memory

Virtual memory is a system where all physical memory is controlled by the operating
system. When a program needs memory, it requests it from the operating system. The operating
system then decides what physical location to place the memory in.

This offers several advantages. Computer programmers no longer need to worry about
where the memory is physically stored or whether the user’s computer will have enough memory.
It also allows multiple types of memory to be used. For example, some memory can be stored
in physical RAM chips while other memory is stored on a hard drive (e.g. in a swapfile), functioning
as an extension of the cache hierarchy. This drastically increases the amount of memory available
to programs. The operating system will place actively used memory in physical RAM, which is
much faster than hard disks. When the amount of RAM is not sufficient to run all the current
programs, it can result in a situation where the computer spends more time moving memory
from RAM to disk and back than it does accomplishing tasks; this is known as thrashing.

Virtual memory systems usually include protected memory, but this is not always the
case.
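
The address translation at the heart of virtual memory can be sketched as a page-table
lookup. The Python below is a simplified model under assumed values (a 4 KB page size, an
invented page table, and a stub load_from_swap helper standing in for the operating system's
page-fault handling); it is not how any particular operating system implements paging.

    PAGE_SIZE = 4096
    page_table = {
        0: ("ram", 7),      # virtual page 0 lives in physical frame 7
        1: ("disk", 130),   # virtual page 1 has been paged out to swap slot 130
    }

    def load_from_swap(slot):
        # Stub: the OS would copy the page from the swap file into a free
        # physical frame and return that frame number.
        return 3

    def translate(virtual_address):
        page, offset = divmod(virtual_address, PAGE_SIZE)
        location, number = page_table[page]
        if location == "disk":
            # Page fault: bring the page back into RAM, then retry the lookup.
            frame = load_from_swap(number)
            page_table[page] = ("ram", frame)
            number = frame
        return number * PAGE_SIZE + offset   # physical address

    print(translate(100))     # page 0 -> frame 7
    print(translate(5000))    # page 1 -> page fault, then frame 3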

1.7.5.4. Protected memory

Protected memory is a system where each program is given an area of memory to use
and is not permitted to go outside that range. Use of protected memory greatly enhances both
the reliability and security of a computer system.

Without protected memory, it is possible that a bug in one program will alter the memory
used by another program. This will cause that other program to run off of corrupted memory
with unpredictable results. If the operating system’s memory is corrupted, the entire computer
system may crash and need to be rebooted. At times programs intentionally alter the memory
used by other programs. This is done by viruses and malware to take over computers. It may
also be used benignly by desirable programs which are intended to modify other programs; in
the modern age, this is generally considered bad programming practice for application programs,
but it may be used by system development tools such as debuggers, for example to insert
breakpoints or hooks.

Protected memory assigns programs their own areas of memory. If the operating system
detects that a program has tried to alter memory that does not belong to it, the program is
terminated (or otherwise restricted or redirected). This way, only the offending program crashes,
and other programs are not affected by the misbehavior (whether accidental or intentional).

Protected memory systems almost always include virtual memory as well.
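
The idea of protected memory can be modelled as a bounds check on every access. The
Python sketch below is a toy model with invented region assignments; in real systems the check
is enforced by the processor's memory-management hardware, not by application code.

    # Toy model of memory protection: each program may only touch addresses
    # inside the region assigned to it; anything else is rejected.
    memory = bytearray(1024)
    regions = {"program_a": range(0, 512), "program_b": range(512, 1024)}

    def write(program, address, value):
        if address not in regions[program]:
            # A real OS/CPU raises a hardware fault and terminates the offender.
            raise MemoryError(f"{program} attempted an out-of-bounds write at {address}")
        memory[address] = value

    write("program_a", 100, 0xFF)     # allowed
    # write("program_a", 600, 0xFF)   # would raise MemoryError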

1.8 Introduction to RAM


Random access refers to the fact that data that is stored anywhere on RAM can be
accessed directly regardless of its (random) location. This is in contrast with other types of data
storage, such as hard disk drives and optical discs, which have to spin to the data's location
before it can be accessed.

To understand how RAM works and the role it plays in a computer a few of its important
properties that are to be kept in mind are:

1. RAM is blazing fast compared to hard drives - Even the latest and greatest solid
state drives are embarrassingly slow when pitted against RAM. While top end solid
state drives can achieve transfer rates of more than 1,000 MB/s, modern RAM
modules are already hitting speeds in excess of 15,000 MB/s.

2. RAM storage is volatile (temporary) - Any data stored in RAM will be lost once the
computer is turned off. Comparing computer storage to the human brain, RAM
works like short term memory while hard drives resemble our long term memories.

Whenever a program is run (e.g. the operating system or applications) or a file is opened
(e.g. videos, images, music, documents), it is loaded temporarily from the hard drive into RAM.
Once loaded into RAM, it can be accessed smoothly with minimal delays.

Once the system runs out of RAM, the operating system begins dumping some of the
open programs and files to the paging file. The paging file is stored on the much slower hard
drive, so instead of everything running through RAM, part of it is accessed from the hard drive.

This is when slow loading times, stuttering and general unresponsiveness set in.

Having enough RAM allows the computer to be more responsive, multitask better and run
memory-intensive programs (e.g. video editors, databases, virtual machines) with ease.
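
A rough calculation shows why this matters. Using the transfer rates quoted above (about
1,000 MB/s for a top-end solid state drive versus about 15,000 MB/s for modern RAM) and an
assumed 2 GB working set:

    # Rough arithmetic: time to move a 2 GB working set at the quoted rates.
    data_mb = 2 * 1024                 # 2 GB expressed in MB
    ssd_mb_per_s = 1_000               # top-end SSD transfer rate (from the text)
    ram_mb_per_s = 15_000              # modern RAM transfer rate (from the text)
    print(f"From SSD: {data_mb / ssd_mb_per_s:.1f} s")    # about 2.0 s
    print(f"From RAM: {data_mb / ram_mb_per_s:.2f} s")    # about 0.14 s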

TYPES OF RAM

The following are some common types of RAM:

DRAM (Dynamic Random Access Memory)

DRAM stands for Dynamic Random Access Memory. It is used in most computers and is
the least expensive kind of RAM. It requires an electric current to maintain its electrical state.
The electrical charge of DRAM decreases with time, which may result in loss of data, so DRAM
is recharged, or refreshed, again and again to maintain its data. The processor cannot access
the data in DRAM while it is being refreshed; that is why it is slow.

SRAM (Static Random Access Memory)

SRAM stands for Static Random Access Memory. It can store data without any need of
frequent recharging. CPU does not need to wait to access data from SRAM during processing.
That is why it is faster than DRAM. It utilizes less power than DRAM. SRAM is more expensive
as compared to DRAM. It is normally used to build a very fast memory known as cache memory.

MRAM (Magneto resistive Random Access Memory)

MRAM stands for Magneto resistive Random Access Memory. It stores data using magnetic
charges instead of electrical charges. MRAM uses far less power than other RAM technologies
so it is ideal for portable devices. It also has greater storage capacity. It has faster access time
than RAM. It retains its contents when the power is removed from computer.

OTHER TYPES OF RAM

FPM DRAM: Fast page mode dynamic random access memory was the original form of
DRAM. It waits through the entire process of locating a bit of data by column and row and then
reading the bit before it starts on the next bit. Maximum transfer rate to L2 cache is approximately
176 MBps.

EDO DRAM: Extended data-out dynamic random access memory does not wait for all of
the processing of the first bit before continuing to the next one. As soon as the address of the
first bit is located, EDO DRAM begins looking for the next bit. It is about five percent faster than
FPM. Maximum transfer rate to L2 cache is approximately 264 MBps.

SDRAM: Synchronous dynamic random access memory takes advantage of the burst
mode concept to greatly improve performance. It does this by staying on the row containing the
requested bit and moving rapidly through the columns, reading each bit as it goes. The idea is
that most of the time the data needed by the CPU will be in sequence. SDRAM is about five
percent faster than EDO RAM and is the most common form in desktops today. Maximum
transfer rate to L2 cache is approximately 528 MBps.

DDR SDRAM: Double data rate synchronous dynamic RAM is just like SDRAM except
that it has higher bandwidth, meaning greater speed. Maximum transfer rate to L2 cache is
approximately 1,064 MBps (for DDR SDRAM at 133 MHz).

RDRAM: Rambus dynamic random access memory is a radical departure from the
previous DRAM architecture. Designed by Rambus, RDRAM uses a Rambus in-line memory
module (RIMM), which is similar in size and pin configuration to a standard DIMM. What makes
RDRAM so different is its use of a special high-speed data bus called the Rambus channel.
RDRAM memory chips work in parallel to achieve a data rate of 800 MHz, or 1,600 Mbps. Since
they operate at such high speeds, they generate much more heat than other types of chips. To
help dissipate the excess heat Rambus chips are fitted with a heat spreader, which looks like a
long thin wafer. Just like there are smaller versions of DIMMs, there are also SO-RIMMs, designed
for notebook computers.

Credit Card Memory: Credit card memory is a proprietary self-contained DRAM memory
module that plugs into a special slot for use in notebook computers.

PCMCIA Memory Card: Another self-contained DRAM module for notebooks, cards of
this type are not proprietary and should work with any notebook computer whose system bus
matches the memory card’s configuration.

CMOS RAM: CMOS RAM is a term for the small amount of memory used by your computer
and some other devices to remember things like hard disk settings. This memory uses a small
battery to provide it with the power it needs to maintain the memory contents.

VRAM: VideoRAM, also known as multiport dynamic random access memory


(MPDRAM), is a type of RAM used specifically for video adapters or 3-D accelerators. The
“multiport” part comes from the fact that VRAM normally has two independent access ports
instead of one, allowing the CPU and graphics processor to access the RAM simultaneously.
VRAM is located on the graphics card and comes in a variety of formats, many of which are
proprietary. The amount of VRAM is a determining factor in the resolution and color depth of the
display. VRAM is also used to hold graphics-specific information such as 3-D geometry data
and texture maps.Performance is nearly the same, but SGRAM is cheaper.

1.9 System storage Devices


1.9.1 Types of hard disks

Computers rely on hard disk drives (HDDs) to store data permanently. They are storage
devices used to save and retrieve digital information that will be required for future reference.

Hard drives are non-volatile, meaning that they retain data even when they do not have
power. The information stored remains safe and intact unless the hard drive is destroyed or
interfered with.

The information is stored or retrieved in a random access manner as opposed to sequential


access. This implies that blocks of data can be accessed at any time they are required without
going through other data blocks.

Hard disk drives were introduced in 1956 by IBM. At the time, they were being used with
general purpose mainframes and minicomputers. Like other electronic devices, these have
witnessed numerous technological advancements over the years. This is in terms of capacity,
size, shape, internal structure, performance, interface, and modes of storing data.

These numerous improvements have ensured that HDDs are here to stay, unlike other
devices that became obsolete soon after they were introduced to the market.

1.9.2. Hard Drive Interface Types

Currently, hard drives can be grouped into four types:

· Integrated Drive Electronics (IDE)

· Parallel Advanced Technology Attachment (PATA)

· Serial ATA (SATA)

· Small Computer System Interface (SCSI)

1.9.2.1. Integrated Drive Electronics (IDE)

IDE is a type of hard drive interface that bundles the components of the hard drive and its
controller into one unit. This allows for much simpler installation of the hard drive into the system
by removing the difficulties associated with separate components and controllers, as in its
predecessor drives. This advancement has led to several hard drive options that are both very
large and very fast, and such drives can be used in nearly any modern computer system.

1.9.2.2. Parallel Advanced Technology Attachment (PATA)

These were the first types of hard disk drives and they made use of the Parallel ATA
interface standard to connect to computers. These types of drives are the ones that are referred
to as Integrated Drive Electronics (IDE) and Enhanced Integrated Drive Electronics (EIDE)
drives.

These PATA drives were introduced by Western Digital back in 1986. They provided a
common drive interface technology for connecting hard drives and other devices to computers.
Data transfer rate can go up to 133MB/s and a maximum of 2 devices can be connected to a
drive channel. Most of the motherboards have a provision of two channels, thus a total of 4
EIDE devices can be connected internally.

They make use of a 40 or 80 wire ribbon cable transferring multiple bits of data
simultaneously in parallel. These drives store data by the use of magnetism. The internal structure
is one made of mechanical moving parts. They have been superseded by serial ATA.

1.9.2.3. Serial ATA (SATA) Storage Drives

These hard drives have replaced PATA drives in desktop and laptop computers. The main
physical difference between the two is the interface, although their method of connecting to a
computer is the same. Their capacities, and their prices, vary a great deal. SATA drives can
transfer data faster than PATA drives by using serial signaling technology. Some further
advantages of SATA hard disk drives are:

· SATA cables are thinner and more flexible than PATA cables.

· They have a 7-pin data connection, with cable limit of 1 meter.

· Disks do not share bandwidth because there is only one disk drive allowed per
SATA controller chip on the computer motherboard.

· They consume less power, requiring a signaling voltage of only 250 mV as opposed to
5 V for PATA.

1.9.2.4. Small Computer System Interface (SCSI)

These are quite similar to IDE hard drives but they make use of the Small Computer
System Interface to connect to the computer. SCSI drives can be connected internally or
externally. Devices that are connected in a SCSI have to be terminated at the end. Here are
some of their advantages.

· They are faster.

· They are very reliable.

· Good for 24/7 operations.

· Have a better scalability and flexibility in arrays.

· Well-adapted for storing and moving large amounts of data.

1.10 Optical Drives


There are four types of optical disks:

1. CD-ROM- Compact Disk Read Only Memory

2. WORM – Write Once Read Many

3. Erasable Optical Disk

4. DVD ROM.

1. CD-ROM – It is an optical ROM from which pre-recorded data can be read out; the
manufacturer writes the data on the CD-ROM. The disk is made of a resin, such as polycarbonate,
coated with a material that changes when a high-intensity laser beam is focused on it. The
coating material is highly reflective, usually aluminium. It is also called a laser disk.

Information on a CD-ROM is written by creating pits on the disk surface with a laser beam.
As the disk rotates, the laser beam traces out a continuous spiral. The sharply focused beam
creates a circular pit of around 0.8 micrometre diameter wherever a 1 is to be written, and no pit
(also called a land) where a 0 is to be written.

The CD-ROM with pre-recorded information is read by a CD-ROM reader, which uses a
laser beam for reading. A laser head moves in and out to the specified position. As the disk
rotates, the head senses pits and lands. These are converted to 1s and 0s by the electronic
interface and sent to the computer.

The advantages of CD-ROM are its high storage capacity, the ability to mass-copy the
stored information, and the fact that it is removable from the computer.

Its main disadvantage is its longer access time compared to that of a magnetic hard disk.
It cannot be updated because it is a read-only memory, so it is suitable for storing information
that is not to be changed.

2. WORM or CD-R (CD-Recordable) – This is a write-once, read-many type of optical disk
memory. Users can write data to a WORM disk once and read the written data as many times
as desired. To reduce the access time, the disk is rotated at a constant speed. Its tracks are
concentric circles, and each track is divided into a number of sectors. It is suitable for data and
files that are not to be changed; the user can store permanent data, information and files for
record keeping.

To write data on the disk, a laser beam of modest intensity is employed, which forms pits
or bubbles on the disk surface. Its disk controller is somewhat more expensive than that for a
CD-ROM, and the laser power required for writing is greater than that required for reading. Its
advantages are its high capacity, better reliability and longer life; the drawback is its greater
access time compared to that of a hard disk.

3. Erasable Optical Disk (CD-RW) – It is a read/write optical disk memory. Information


can be written to and read from the disk. The disk contents can be erased and new data can be
rewritten. So it can serve as a secondary memory of a computer. It rotates at a constant speed.
Its tracks are concentric circles. Each track is divided into a number of sectors. The same laser
beam technology is used for recording and reading of data. Its advantages over magnetic hard
disk are-

· Very high storage capacity

· An optical disk can be removed from the drive

· It has long life

· It is more reliable.

The drawback is its longer access time compared to that of a hard disk.

4. DVD-ROM – DVD stands for Digital Versatile Disk. A DVD stores much more data than
a CD-ROM; its capacities are 4.7 GB, 8.5 GB, 17 GB, etc., depending on whether it is a single-layer,
double-layer single-sided or double-sided disk. A DVD-ROM uses the same principle as a
CD-ROM for reading and writing, but a shorter-wavelength beam is used, and a lens system is
used to focus on two different layers of the disk. Data is recorded on each layer, so the capacity
can be doubled. Further, the recording beam is sharper than that of a CD-ROM and the distance
between successive tracks on the surface is smaller. The total capacity of a double-layer,
single-sided DVD-ROM is 8.5 GB. In a double-sided DVD-ROM, two such disks are stuck back
to back, which allows recording on both sides; this requires the disk to be turned over to read
the reverse side. Hence, the double-sided DVD-ROM's capacity is 17 GB. However, double-sided
DVD-ROMs should be handled carefully: as both sides hold data and the disks are thinner, they
can be accidentally damaged.

1.11 Removable storage devices


With the emergence of cloud computing, businesses are storing more and more information
online through services like Apple’s iCloud, Dropbox, and Microsoft’s SkyDrive. There are still
plenty of removable storage devices around, though, and they are still useful, particularly as
backups to information that exists on a computer's hard drive or in an online space. Such devices
include USB drives, memory cards and external hard drives.

1.11.1 USB Drives

Also known as “pen drives,” “thumb drives” or “flash drives,” these are identifiable by the
rectangular metal connector that you insert into the computer. Like other removable storage
devices, USB drives are used to transport the files from one place to another.

1.11.2 Memory Cards

Memory cards, also called “memory sticks” or “SD cards,” connect to the computer via a
special slot. Not every computer has these slots, but adapters are available that allow one to
read a memory card via a USB port. Memory cards are used in MP3 players and other portable
gadgets like the Canon PowerShot digital camera.

1.11.3 Smartphones

Handsets like the Samsung Galaxy S-4 also have SD cards for storage and can connect
to the computer with a USB cable like the T-Mobile universal charge cable. Such a cable may
have come packaged with the phone, and will also charge the phone while it is connected to the
computer.

1.11.4 External Hard Drives

An external hard drive is like the drive inside the computer, but it comes in a protective
case and connects to the computer via a USB cable. If there’s a natural disaster or a break-in,
or if the computer crashes irreparably, one can copy the files from the external drive onto a new
computer and be back in business.

1.11.5 Tape drives and backup systems

Tape backup is the practice of periodically copying data from a primary storage device to
a tape cartridge so the data can be recovered if there is a hard disk crash or failure. Tape backups
can be done manually or be programmed to happen automatically with appropriate software.

Tape backup systems exist for needs ranging from backing up the hard disk on a personal
computer to backing up large amounts of data storage for archiving and disaster recovery (DR)
purposes in a large enterprise. Tape backups can also restore data to storage devices when
needed.

1.11.5.1. Tape backup advantages and use cases

Tape can be one of the best options for fixing an unstructured data backup problem
because of its inexpensive operational and ownership cost, capacity and speed. Magnetic tape
is especially attractive in an era of massive data growth. Customers can copy and store archival
and backup data on tape for use with cloud seeding.

The data transfer rate for tape can be significantly faster than disk and on par with flash
drive storage, with native write rates of at least 300 megabytes per second (MBps). For anyone
concerned about backups increasing the latency of production storage, flash-to-tape,
disk-to-disk-to-tape or other data buffering strategies can mask the tape write operation.

Because disk is easier to restore data from, more secure and benefits from technologies
such as data deduplication, it has replaced tape as the preferred medium for backup. Tape is
still a relevant medium for archiving, however, and remains in use in large enterprises that may
have petabytes of data backed up on tape libraries.

Magnetic tape is well-suited for archiving because of its high capacity, low cost and
durability. Tape is a linear recording system that is not good for random access. But in an
archive, latency is less of an issue.
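
As a back-of-the-envelope example, the native write rate quoted above can be used to
estimate how long a full backup takes. The 10 TB figure below is an arbitrary assumption, and
compression, tape loading and verification are ignored.

    # Rough backup-time estimate at the native tape write rate of 300 MB/s.
    data_tb = 10
    data_mb = data_tb * 1024 * 1024          # TB -> MB
    seconds = data_mb / 300
    print(f"{data_tb} TB at 300 MB/s takes about {seconds / 3600:.1f} hours")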

1.12 FILE SYSTEMS


1.12.1 FILE Allocation Table (FAT)

A file allocation table (FAT) is a file system developed for hard drives that originally used
12 or 16 bits for each cluster entry in the file allocation table. It is used by the operating
system (OS) to manage files on hard drives and other computer systems. It is also often found
in flash memory, digital cameras and portable devices. It is used to store file information and to
extend the life of a hard drive.

Most hard drives require a process known as seeking; this is the actual physical searching
and positioning of the read/write head of the drive. The FAT file system was designed to reduce
the amount of seeking and thus minimize the wear and tear on the hard disc.

FAT was designed to support hard drives and subdirectories. The earlier FAT12 used
12-bit cluster addresses and supported up to 4,078 clusters (up to 4,084 clusters under UNIX).
The more efficient FAT16 moved to 16-bit cluster addresses, allowing up to 65,517 clusters per
volume; with 512-byte clusters this gave 32 MB of space, and larger clusters of four sectors
(2,048 bytes) allowed a larger file system.

FAT16 was introduced in 1983 by IBM with the simultaneous releases of IBM's personal
computer AT (PC AT) and Microsoft's MS-DOS (disk operating system) 3.0 software. In 1987,
Compaq DOS 3.31 introduced an expansion of the original FAT16 that increased the disk sector
count to 32 bits. Because the code was written for a 16-bit assembly language, the whole of the
disk handling had to be altered to use 32-bit sector numbers.

In 1997 Microsoft introduced FAT32. This FAT file system increased size limits and allowed
DOS real mode code to handle the format. FAT32 has a 32-bit cluster address with 28 bits used
to hold the cluster number for up to approximately 268 million clusters. The highest level division
of a file system is a partition. The partition is divided into volumes or logical drives. Each logical
drive is assigned a letter such as C, D or E.

A FAT file system has four different sections, each as a structure in the FAT partition. The
four sections are:

· Boot Sector: This is also known as the reserved sector; it is located on the first part
of the disc. It contains the OS’s necessary boot loader code to start a PC system,
the partition table known as the master boot record (MBR) that describes how the
drive is organized, and the BIOS parameter block (BPB) which describes the physical
outline of the data storage volume.

· FAT Region: This region generally encompasses two copies of the File Allocation
Table which is for redundancy checking and specifies how the clusters are assigned.

· Data Region: This is where the directory data and existing files are stored. It uses
up the majority of the partition.

· Root Directory Region: This region is a directory table that contains the information
about the directories and files. It is used with FAT16 and FAT12 but not with other
FAT file systems. It has a fixed maximum size that is configured when created.
FAT32 usually stores the root directory in the data region so it can be expanded if
needed.
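
The cluster limits of the different FAT variants follow directly from the width of the cluster
address. The following Python arithmetic restates the figures given above (a few address values
are reserved, which is why the usable counts fall slightly short of the powers of two).

    fat12_clusters = 2 ** 12     # 4,096 addressable values (about 4,078 usable)
    fat16_clusters = 2 ** 16     # 65,536 addressable values (about 65,517 usable)
    fat32_clusters = 2 ** 28     # only 28 of the 32 bits hold the cluster number

    # Maximum volume size is roughly clusters x cluster size,
    # e.g. FAT16 with 32 KB clusters:
    print(fat16_clusters * 32 // 1024, "MB")    # 2048 MB, i.e. 2 GB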

1.12.2 NTFS (NT file system; sometimes New Technology File System)

NTFS (NT file system; sometimes New Technology File System) is the file system that
the Windows NT operating system uses for storing and retrieving files on a hard disk. NTFS is
the Windows NT equivalent of the Windows 95 file allocation table (FAT) and the OS/2 High
Performance File System (HPFS). However, NTFS offers a number of improvements over FAT
and HPFS in terms of performance, extendibility, and security.

Notable features of NTFS include:

· Use of a b-tree directory scheme to keep track of file clusters

· Information about a file’s clusters and other data is stored with each cluster, not just
a governing table (as FAT is)

· Support for very large files (up to 2 to the 64th power bytes, or approximately 16 billion
gigabytes, in size)

· An access control list (ACL) that lets a server administrator control who can access
specific files

· Integrated file compression

· Support for names based on Unicode

· Support for long file names as well as “8 by 3” names

· Data security on both removable and fixed disks

How NTFS Works?

When a hard disk is formatted (initialized), it is divided into partitions or major divisions of
the total physical hard disk space. Within each partition, the operating system keeps track of all
the files that are stored by that operating system. Each file is actually stored on the hard disk in
one or more clusters or disk spaces of a predefined uniform size. Using NTFS, the sizes of
clusters range from 512 bytes to 64 kilobytes. Windows NT provides a recommended default
cluster size for any given drive size. For example, for a 4 GB (gigabyte) drive, the default cluster
size is 4 KB (kilobytes). Note that clusters are indivisible. Even the smallest file takes up one
cluster and a 4.1 KB file takes up two clusters (or 8 KB) on a 4 KB cluster system.
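
The cluster arithmetic in the example above can be expressed as a small helper. This is
only an illustration of the rounding involved; the 4 KB cluster size is the default mentioned for a
4 GB drive.

    import math

    def clusters_used(file_size_kb, cluster_kb=4):
        # Files always occupy whole clusters, so round up and report the waste.
        clusters = math.ceil(file_size_kb / cluster_kb)
        slack_kb = round(clusters * cluster_kb - file_size_kb, 2)
        return clusters, slack_kb

    print(clusters_used(4.1))    # (2, 3.9): a 4.1 KB file occupies 8 KB on disk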

The selection of the cluster size is a trade-off between efficient use of disk space and the
number of disk accesses required to access a file. In general, using NTFS, the larger the hard
disk the larger the default cluster size, since it’s assumed that a system user will prefer to
increase performance (fewer disk accesses) at the expense of some amount of space inefficiency.

When a file is created using NTFS, a record about the file is created in a special file, the
Master File Table (MFT). The record is used to locate a file’s possibly scattered clusters. NTFS
tries to find contiguous storage space that will hold the entire file (all of its clusters).

Each file contains, along with its data content, a description of its attributes (its metadata).

1.13 Redundant Array of Independent Disks RAID


RAID is short for redundant array of independent disks. Originally, the term RAID was defined as redundant array of inexpensive disks, but now it usually refers to a redundant array of independent disks. RAID storage uses multiple disks in order to provide fault tolerance, to improve overall performance, and to increase storage capacity in a system. This is in contrast with older storage devices that used only a single disk drive to store data.

RAID allows the same data to be stored redundantly (in multiple places) in a balanced way to improve overall performance. RAID disk drives are used frequently on servers but aren’t generally necessary for personal computers.

1.14 Standard RAID Levels


RAID devices use many different architectures, called levels, depending on the desired balance between performance and fault tolerance. RAID levels describe how data is distributed across the drives. Standard RAID levels include the following:

1.14.1. Level 0: Striped disk array without fault tolerance

Provides data striping (spreading out blocks of each file across multiple disk drives) but
no redundancy. This improves performance but does not deliver fault tolerance. If one drive
fails, then all data in the array is lost.

1.14.2.Level 1: Mirroring and duplexing

Provides disk mirroring. Level 1 provides twice the read transaction rate of single disks
and the same write transaction rate as single disks.

1.14.3. Level 2: Error-correcting coding

Not a typical implementation and rarely used, Level 2 stripes data at the bit level rather
than the block level.

1.14.4. Level 3: Bit-interleaved parity

Provides byte-level striping with a dedicated parity disk. Level 3, which cannot service
simultaneous multiple requests, also is rarely used.

1.14.5. Level 4: Dedicated parity drive

A commonly used implementation of RAID, Level 4 provides block-level striping (like
Level 0) with a parity disk. If a data disk fails, the parity data is used to create a replacement
disk. A disadvantage to Level 4 is that the parity disk can create write bottlenecks.

1.14.6. Level 5: Block interleaved distributed parity

Provides block-level data striping with parity information distributed across all of the disks. This results in excellent performance and good fault tolerance. Level 5 is one of the most popular implementations of RAID.
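
To make the idea of parity concrete, the following Python sketch shows the principle behind block-interleaved parity: the parity block is the XOR of the data blocks, so any single missing block can be rebuilt from the remaining blocks and the parity. This illustrates the concept only, not any particular RAID controller implementation.

    def xor_blocks(*blocks: bytes) -> bytes:
        # Byte-wise XOR of equally sized blocks; this is the parity calculation.
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, value in enumerate(block):
                result[i] ^= value
        return bytes(result)

    # Three data blocks striped across three disks, with parity on a fourth.
    d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
    parity = xor_blocks(d1, d2, d3)

    # If the disk holding d2 fails, its block is recovered from the survivors.
    recovered = xor_blocks(d1, d3, parity)
    assert recovered == d2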

1.14.7. Level 6: Independent data disks with double parity

Provides block-level striping with two sets of parity data distributed across all disks, so the array can survive the failure of two drives.

1.14.8. Level 10: A stripe of mirrors

Not one of the original RAID levels, multiple RAID 1 mirrors are created, and a RAID 0
stripe is created over these.

1.14.9. Non-Standard RAID Levels

Some devices use more than one level in a hybrid or nested arrangement, and some
vendors also offer non-standard proprietary RAID levels. Examples of non-standard RAID levels
include the following:

1.14.10. Level 0+1: A Mirror of Stripes

Not one of the original RAID levels, two RAID 0 stripes are created, and a RAID 1 mirror
is created over them. Used for both replicating and sharing data among disks.

1.14.11. Level 7

A trademark of Storage Computer Corporation that adds caching to Levels 3 or 4.

1.14.12. RAID 1E

A RAID 1 implementation with more than two disks. Data striping is combined with mirroring
each written stripe to one of the remaining disks in the array.

1.14.13. RAID S

Also called Parity RAID, this is EMC Corporation’s proprietary striped parity RAID system
used in its Symmetrix storage systems.

1.15 Hierarchical File System (HFS)


Stands for “Hierarchical File System.” HFS is the file system used for organizing files on
a Macintosh hard disk. When a hard disk is formatted for a Macintosh computer, the hierarchical
file system is used to create a directory that can expand as new files and folders are added to
the disk. Since HFS is a Macintosh format, Windows computers cannot recognize HFS-formatted
drives. Windows hard drives are typically formatted using FAT32 or NTFS file systems.

Since HFS was not originally designed to handle large hard disks, such as the 100GB+
hard disks that are common today, Apple introduced an updated file system called HFS+, or HFS
Extended, with the release of Mac OS 8.1. HFS+ allows for smaller clusters or block sizes,
which reduces the minimum size each file must take up. This means disk space can be used
much more efficiently on large hard disks. Mac OS X uses the HFS+ format by default and also
supports journaling, which makes it easier to recover data in case of a hard drive crash.

1.16 Computer ports


A Computer Port is an interface or a point of connection between the computer and its
peripheral devices. Some of the common peripherals are mouse, keyboard, monitor or display
unit, printer, speaker, flash drive etc.

The main function of a computer port is to act as a point of attachment, where the cable
from the peripheral can be plugged in and allows data to flow from and to the device.

A computer port is also called a Communication Port as it is responsible for communication between the computer and its peripheral devices. Generally, the female end of the connector is referred to as a port and it usually sits on the motherboard.

In Computers, communication ports can be divided into two types based on the type or
protocol used for communication. They are Serial Ports and Parallel Ports.

A serial port is an interface through which peripherals can be connected using a serial protocol, which involves the transmission of data one bit at a time over a single communication line. The most common type of serial port is a D-subminiature or D-sub connector that carries RS-232 signals.

A parallel port, on the other hand, is an interface through which the communication between a computer and its peripheral device is in a parallel manner, i.e. data is transferred in or out in parallel using more than one communication line or wire. The printer port is an example of a parallel port.
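
The difference can be illustrated with a small Python sketch that frames a single byte the way a simple asynchronous serial link such as RS-232 sends it (one start bit, the data bits one at a time, one stop bit), compared with a parallel port that presents all eight data bits at once. The “8N1” framing parameters are an assumption, chosen because they are the most common configuration.

    def serial_frame(byte: int) -> list:
        # 8N1 framing: start bit (0), 8 data bits sent least-significant-bit first, stop bit (1).
        data_bits = [(byte >> i) & 1 for i in range(8)]
        return [0] + data_bits + [1]

    def parallel_transfer(byte: int) -> list:
        # A parallel port places all 8 data bits on separate wires at the same time.
        return [(byte >> i) & 1 for i in range(8)]

    print(serial_frame(0x41))       # the character 'A' sent bit by bit over one line
    print(parallel_transfer(0x41))  # the character 'A' presented on 8 lines at once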

1.16.1 PS/2

The PS/2 connector was developed by IBM for connecting a mouse and keyboard. It was introduced with IBM’s Personal System/2 series of computers, and hence the name PS/2 connector. PS/2 connectors are color coded as purple for the keyboard and green for the mouse.

PS/2 is a 6-pin mini-DIN connector.

Even though the pinout of both the mouse and keyboard PS/2 ports is the same, computers do not recognize the device when it is connected to the wrong port.

The PS/2 port is now considered a legacy port, as the USB port has superseded it, and only a few modern motherboards still include it.

1.16.2. Serial Port

Even though the communication in PS/2 and USB is serial, technically the term Serial Port is used to refer to the interface that is compliant with the RS-232 standard. There are two types of serial ports that are commonly found on a computer: DB-25 and DE-9.

1.16. 3. DB-25

DB-25 is a variant of the D-sub connector and was the original port for RS-232 serial communication. It was developed as the main port for serial connections using the RS-232 protocol, but most applications did not require all of its pins.

Hence, DE-9 was developed for RS-232 based serial communication, while DB-25 was rarely used as a serial port and was more often used as a parallel printer port, as a replacement for the Centronics parallel 36-pin connector.

1.16.4. DE-9 or RS-232 or COM Port

DE-9 is the main port for RS-232 serial communication. It is a D-sub connector with an E-size shell and is often miscalled DB-9. A DE-9 port is also called a COM port and allows full duplex serial communication between the computer and its peripheral.

Some of the applications of the DE-9 port are serial interfaces with a mouse, keyboard, modem, uninterruptible power supply (UPS) and other external RS-232 compatible devices.

The use of DB-25 and DE-9 ports for communication is in decline; they have largely been replaced by USB and other ports.

1.16.5 Parallel Port or Centronics 36 Pin Port

A parallel port is an interface between a computer and peripheral devices like printers that use parallel communication. The Centronics port is a 36-pin port that was developed as an interface for printers and scanners, and hence a parallel port is also called a Centronics port.

Before the widespread use of USB ports, parallel ports were very common on printers. The Centronics port was later replaced by the DB-25 port with a parallel interface.

1.16.6 Audio Ports

Audio ports are used to connect speakers or other audio output devices with the computer.
The audio signals can be either analogue or digital and depending on that the port and its
corresponding connector differ.

1.16.6.1 Surround Sound Connectors or 3.5 mm TRS Connector

It is the most commonly found audio port and can be used to connect stereo headphones or surround sound channels. A 6-connector system is included on the majority of computers for audio output as well as a microphone connection.

The 6 connectors are color coded as Blue, Lime, Pink, Orange, Black and Grey. These 6
connectors can be used for a surround sound configuration of up to 8 channels.

1.16.6.2 S/PDIF / TOSLINK

The Sony/Philips Digital Interface (S/PDIF) is an audio interconnect used in home media. It supports digital audio and can be transmitted using a coaxial RCA audio cable or an optical fiber TOSLINK connector.

Most computers and home entertainment systems are equipped with S/PDIF over TOSLINK. TOSLINK (Toshiba Link) is the most frequently used digital audio port and can support 7.1 channel surround sound with just one cable.

1.16.7 Video Ports


1.16.7.1 VGA Port

VGA port is found in many computers, projectors, video cards and High Definition TVs. It
is a D-sub connector consisting of 15 pins in 3 rows. The connector is called DE-15.

VGA port is the main interface between computers and older CRT monitors. Even modern LCD and LED monitors support VGA ports, but the picture quality is reduced. VGA carries analogue video signals, originally at a resolution of 640x480.

With the increase in use of digital video, VGA ports are gradually being replaced by HDMI
and Display Ports. Some laptops are equipped with on-board VGA ports in order to connect to
external monitors or projectors.

1.16.7.2 Digital Video Interface (DVI)

DVI is a high speed digital interface between a display controller, such as a computer, and a display device, such as a monitor. It was developed with the aim of transmitting lossless digital video signals and replacing the analogue VGA technology.

There are three types of DVI connectors based on the signals they can carry: DVI-I, DVI-D
and DVI-A. DVI-I is a DVI port with integrated analogue and digital signals. DVI-D supports only
digital signals and DVI-A supports only analogue signals.

The digital signals can be either single link or dual link: a single link supports a digital signal up to 1920x1080 resolution and a dual link supports a digital signal up to 2560x1600 resolution.

1.16.7.3 Mini-DVI

The Mini-DVI port was developed by Apple as an alternative to the Mini-VGA port and is physically similar to it. It is smaller than a regular DVI port.

It is a 32 pin port and is capable of transmitting DVI, composite, S-Video and VGA signals
with the respective adapters.

1.16.7.4 Micro-DVI

The Micro-DVI port, as the name suggests, is physically smaller than Mini-DVI and is capable of transmitting only digital signals.

This port can be connected to external devices with DVI and VGA interfaces; the respective adapters are required.

1.16.7.5 Display Port

Display Port is a digital display interface with optional multi-channel audio and other forms of data. Display Port was developed with the aim of replacing VGA and DVI ports as the main interface between a computer and a monitor.

The Display Port has a 20-pin connector, far fewer pins than a DVI port, and offers better resolution.

1.16.7.6 RCA Connector

The RCA connector can carry composite video and stereo audio signals over three cables. Composite video transmits analogue video signals over the yellow-colored RCA connector.

The video signals are transmitted over a single channel along with the line and frame
synchronization pulses at a maximum resolution of 576i (standard resolution).

The red and white connectors are used for stereo audio signals (red for right channel and
white for left channel).

1.16.7.7 Component Video

Component Video is an interface where the video signal is split into more than two channels, and the quality of the video signal is better than that of Composite video.

Like composite video, component video transmits only video signals and two separate
connectors must be used for stereo audio. Component video port can transmit both analogue
and digital video signals.

The commonly found Component video port uses 3 connectors, which are color coded Green, Blue and Red.

1.16.7.8 S-Video

S-Video or Separate Video connector is used for transmitting only video signals. The
picture quality is better than that of Composite video, but the resolution is lower than that of Component video.

The S-Video port is generally black in color and is present on all TVs and most computers.
S-Video port looks like a PS/2 port but consists of only 4 pins.

Out of the 4 pins, one pin is used to carry the intensity (black and white) signal and another pin is used to carry the color signal. Both these pins have their respective ground pins.

1.16.7.9 HDMI

HDMI is an abbreviation of High-Definition Multimedia Interface. HDMI is a digital interface to connect High Definition and Ultra High Definition devices like computer monitors, HDTVs, Blu-Ray players, gaming consoles, high definition cameras etc.

HDMI can be used to carry uncompressed video and compressed or uncompressed audio
signals.

The HDMI connector consists of 19 pins, and newer versions of HDMI such as HDMI 2.0 can carry a digital video signal up to a resolution of 4096×2160 along with 32 audio channels.

1.17 USB
Universal Serial Bus (USB) replaced serial ports, parallel ports, PS/2 connectors, game
ports and power chargers for portable devices.

A USB port can be used to transfer data, act as an interface for peripherals and even act as a power supply for devices connected to it. Common USB connector types include Type A, Type B, Mini USB and Micro USB.

USB Type A - The USB 2.0 Type-A port is a 4-pin connector (USB 3.0 Type-A uses 9 pins). There are different versions of Type-A USB ports: USB 1.1, USB 2.0 and USB 3.0. USB 2.0 supports a data rate of up to 480 Mbps, while USB 3.0 supports up to 5 Gbps.

USB 3.1 has also been released and supports a data rate up to 10 Gbps. USB 2.0 ports are color coded black and USB 3.0 ports blue.
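
Since the quoted rates are in bits per second, a rough conversion to bytes per second helps put them in perspective. The sketch below uses the nominal signalling rates only and ignores encoding and protocol overhead, which reduce real-world throughput further.

    RATES_MBPS = {"USB 1.1": 12, "USB 2.0": 480, "USB 3.0": 5000, "USB 3.1": 10000}

    for name, mbps in RATES_MBPS.items():
        # 8 bits per byte; actual throughput is lower because of protocol overhead.
        print(f"{name}: {mbps} Mbps = about {mbps / 8:.0f} MB/s theoretical maximum")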

1.17.1. USB Type C

USB Type – C is the latest specification of the USB and is a reversible connector. USB
Type – C is supposed to replace Types A and B and is considered future proof.

The USB Type-C port consists of 24 pins and can handle a current of up to 3 A.

This ability to handle high current is used in the latest fast charging technology, where a smartphone’s battery reaches full charge in much less time.

1.17.2. RJ-45

Ethernet is a networking technology that is used to connect a computer to the Internet and to communicate with other computers or networking devices.

The interface that is used for computer networking and telecommunications is known as
Registered Jack (RJ) and RJ – 45 port in particular is used for Ethernet over cable. RJ-45
connector is an 8 pin – 8 contact (8P – 8C) type modular connector.

Gigabit Ethernet supports a data transfer rate of 1 Gbps, and newer Ethernet standards such as 10 Gigabit Ethernet support rates of 10 Gbps and above.

The un-keyed 8P – 8C modular connector is generally referred to as the Ethernet RJ-45 connector.


Often, RJ-45 ports are equipped with two LEDs for indicating transmission and packet detection. As mentioned earlier, an Ethernet RJ-45 port has 8 pins.

1.17.3. RJ-11

RJ-11 is another type of Registered Jack that is used as an interface for telephone,
modem or ADSL connections. Even though computers are almost never equipped with an RJ-
11 port, they are the main interface in all telecommunication networks.

RJ-45 and RJ-11 ports look alike, but RJ-11 is a smaller port and uses a 6 position – 4 contact (6P – 4C) connector even though a 6 position – 2 contact (6P – 2C) connector is sufficient.

1.17.4. e-SATA

e-SATA is an external Serial AT Attachment (SATA) connector that is used as an interface for connecting external mass storage devices. Modern e-SATA connectors are called e-SATAp, which stands for Powered e-SATA ports.

They are hybrid ports capable of supporting both e-SATA and USB. Neither the SATA organization nor the USB organization has officially approved the e-SATAp port, so it must be used at the user’s own risk.

1.17.5 Input systems


1.17.5.1 Keyboard

The keyboard is the most common and popular input device, used to input data to the computer. The layout of the keyboard is like that of a traditional typewriter, although there are some additional keys provided for performing additional functions.

Traditional keyboards have 84 or 101/102 keys, but keyboards with 104 or 108 keys, which add Windows and Internet keys, are now also available.

1.17.5.2. Mouse

The mouse is the most popular pointing device. It is a palm-sized cursor-control device with a ball at its base that senses the movement of the mouse; corresponding signals are sent to the CPU as the mouse moves and when its buttons are pressed.

Generally, it has two buttons called the left and the right button and a wheel is present
between the buttons. A mouse can be used to control the position of the cursor on the screen,
but it cannot be used to enter text into the computer.

1.17.5.3. Advantages
· Easy to use

· Not very expensive

· Moves the cursor faster than the arrow keys of the keyboard.

1.17.5.4. Joystick

A joystick is also a pointing device, which is used to move the cursor position on a monitor screen. It is a stick with a spherical ball at both its lower and upper ends. The lower spherical ball moves in a socket. The joystick can be moved in all four directions.

The function of the joystick is similar to that of a mouse. It is mainly used in Computer
Aided Designing (CAD) and playing computer games.

1.17.5.5. Light Pen

Light pen is a pointing device similar to a pen. It is used to select a displayed menu item
or draw pictures on the monitor screen. It consists of a photocell and an optical system placed
in a small tube.

When the tip of a light pen is moved over the monitor screen and the pen button is
pressed, its photocell sensing element detects the screen location and sends the corresponding
signal to the CPU.

1.17.5.6. Track Ball

A track ball is an input device that is mostly used in notebook or laptop computers instead of a mouse. It is a ball that is half embedded in the device; by rolling the fingers over the ball, the pointer can be moved.

Since the whole device is not moved, a track ball requires less space than a mouse. A
track ball comes in various shapes like a ball, a button, or a square.

1.17.5.7. Scanner

Scanner is an input device, which works more like a photocopy machine. It is used when
some information is available on paper and it is to be transferred to the hard disk of the computer
for further manipulation.

Scanner captures images from the source which are then converted into a digital form
that can be stored on the disk. These images can be edited before they are printed.

1.17.5.8. Digitizer

Digitizer is an input device which converts analog information into digital form. Digitizer
can convert a signal from the television or camera into a series of numbers that could be stored
in a computer. They can be used by the computer to create a picture of whatever the camera
had been pointed at.

A digitizer is also known as a tablet or graphics tablet, as it converts graphics and pictorial data into binary inputs. A graphics tablet used as a digitizer is suited to fine drawing and image-manipulation applications.

1.17.5.9. Microphone

Microphone is an input device to input sound that is then stored in a digital form.

The microphone is used for various applications such as adding sound to a multimedia
presentation or for mixing music.

1.17.5.10. Magnetic Ink Card Reader (MICR)

MICR input devices are generally used in banks, as there are a large number of cheques to be processed every day. The bank’s code number and cheque number are printed on the cheques with a special type of ink that contains particles of magnetic material that are machine readable.

This reading process is called Magnetic Ink Character Recognition (MICR). The main advantage of MICR is that it is fast and less error-prone.

1.17.5.11. Optical Character Reader (OCR)

OCR is an input device used to read printed text.

OCR scans the text optically, character by character, converts the characters into a machine-readable code, and stores the text in the system memory.

1.17.5.12. Bar Code Readers

Bar Code Reader is a device used for reading bar coded data (data in the form of light
and dark lines). Bar coded data is generally used in labeling goods, numbering the books, etc.
It may be a hand-held scanner or may be embedded in a stationary scanner.

Bar Code Reader scans a bar code image, converts it into an alphanumeric value, which
is then fed to the computer that the bar code reader is connected to.

1.17.5.13. Optical Mark Reader (OMR)

OMR is a special type of optical scanner used to recognize the type of mark made by pen
or pencil. It is used where one out of a few alternatives is to be selected and marked.

It is specially used for checking the answer sheets of examinations having multiple choice
questions.

1.18 Display arrays


1.18.1 VGA

A video graphics array (VGA) cable is a type of computer cable that carries visual display
data from the CPU to the monitor. A complete VGA cable consists of a cable and a connector at
each end, and the connectors are typically blue. A VGA cable is used primarily to link a computer
to a display device. One end of the VGA cable is attached to the port in the graphics card on the
computer motherboard, and the other to the port in the display device. When the computer is
running, the video card transmits video display signals via the VGA cable, which are then displayed
on the display device. VGA cables are available in different types, where shorter cables with
coaxial cable and insulation provide better video/display quality.

1.18.2 SVGA

A Super Video Graphics Array (SVGA) monitor is an output device which uses the SVGA
standard. SVGA is a video-display-standard type developed by the Video Electronics Standards
Association (VESA) for IBM PC compatible personal computers (PCs).

SVGA includes an array of computer display standards utilized for the manufacturing of
computer monitors and screens. It features a screen resolution of 800x600 pixels.

Monitors that use the SVGA graphic standard are intended to perform better than normal
VGA monitors. SVGA monitors make use of a VGA connector (DE-15 a.k.a HD-15).

A VGA monitor generally displays graphics in 640x480 pixels, or may be an even smaller
320x200 pixels while SVGA monitors display a better resolution of 800x600 pixels or more.

When comparing SVGA with other display standards like Extended Graphics Array (XGA)
or VGA, the standard resolution of SVGA is identified as 800x600 pixels.

The SVGA standard originally referred to a graphics resolution of 800x600 pixels at 4 bits per pixel (480,000 pixels). This implies that every single pixel can be one of 16 different colors. Later, this definition was extended to a resolution of 1024x768 at 8 bits per pixel, which means that there is a selection of 256 colors.

1.18.3 AGP

Stands for “Accelerated Graphics Port.” AGP is a type of expansion slot designed
specifically for graphics cards. It was developed in 1996 as an alternative to the PCI standard.
Since the AGP interface provides a dedicated bus for graphics data, AGP cards are able to
render graphics faster than comparable PCI graphics cards.

Like PCI slots, AGP slots are built into a computer’s motherboard. They have a similar
form factor to PCI slots, but can only be used for graphics cards. Additionally, several AGP
specifications exist, including AGP 1.0, 2.0, and 3.0, which each use a different voltage. Therefore,
AGP cards must be compatible with the specification of the AGP slot they are installed in.

Since AGP cards require an expansion slot, they can only be used in desktop computers.
While AGP was popular for about a decade, the technology has been superseded by PCI Express,
which was introduced in 2004. For a few years, many desktop computers included both AGP and PCI Express slots, but eventually AGP slots were removed completely. Therefore, most
desktop computers manufactured after 2006 do not include an AGP slot.

1.18.4 Additional display cards

A video card (also called a display card, graphics card, display adapter or graphics
adapter) is an expansion card which generates a feed of output images to a display (such as
a computer monitor). Frequently, these are advertised as discrete or dedicated graphics cards,
emphasizing the distinction between these and integrated graphics. At the core of both is
the graphics processing unit (GPU), which is the main part that does the actual computations, but the GPU should not be confused with the video card as a whole, although “GPU” is often used to refer to video cards.

Most video cards are not limited to simple display output. Their integrated graphics
processor can perform additional processing, removing this task from the central processor of
the computer. For example, cards produced by Nvidia and AMD (ATI) render the OpenGL and DirectX graphics pipelines at the hardware level. Usually the graphics card is made in the form of a printed circuit board (expansion board) and inserted into an expansion slot, universal or specialized (AGP, PCI Express). Some have been made using dedicated enclosures, which are connected to the computer via a docking station or a cable.

1.19 Monitors and their types


1.19.1 CRT (cathode ray tube) monitors

These monitors employ CRT technology, which was used most commonly in the
manufacturing of television screens. With these monitors, a stream of intense high energy
electrons is used to form images on a fluorescent screen. A cathode ray tube is basically a
vacuum tube containing an electron gun at one end and a fluorescent screen at another end.

While CRT monitors can still be found in some organizations, many offices have stopped
using them largely because they are heavy, bulky, and costly to replace should they break.
While they are still in use, it would be a good idea to phase these monitors out for cheaper,
lighter, and more reliable monitors.

1.19.2 LCD (liquid crystal display) monitors

The LCD monitor incorporates one of the most advanced technologies available today.
Typically, it consists of a layer of color or monochrome pixels arranged schematically between
a couple of transparent electrodes and two polarizing filters. The optical effect is made possible by polarizing the light in varied amounts and making it pass through the liquid crystal layer. The two types of LCD technology available are active matrix (TFT) and passive matrix. TFT generates better picture quality and is more reliable. Passive matrix, on the other hand, has a slow response time and is slowly becoming outdated.

The advantages of LCD monitors include their compact size, which makes them lightweight. They also don’t consume as much electricity as CRT monitors, and they can be run off batteries, which makes them ideal for laptops.

Images transmitted by these monitors don’t get geometrically distorted and have little flicker. However, this type of monitor does have disadvantages, such as its relatively high price, an image quality which is not consistent when viewed from different angles, and a fixed native resolution, meaning running at other resolutions can reduce image quality.

1.19.3 LED (light-emitting diodes) monitors

LED monitors are the latest types of monitors on the market today. These are flat panel,
or slightly curved displays which make use of light-emitting diodes for back-lighting, instead of
cold cathode fluorescent (CCFL) back-lighting used in LCDs. LED monitors are said to use
much less power than CRT and LCD monitors and are considered far more environmentally friendly.

The advantages of LED monitors are that they produce images with higher contrast, have less negative environmental impact when disposed of, are more durable than CRT or LCD monitors,
and feature a very thin design. They also don’t produce much heat while running. The only
downside is that they can be more expensive, especially for the high-end monitors like the new
curved displays that are being released.

Being aware of the different types of computer monitors available should help you choose
one that’s most suited to your needs.

1.20 Printers and their types


There are two types of printers.

1.20.1 Impact printers

An impact printer makes contact with the paper. It usually forms the print image by pressing
an inked ribbon against the paper using a hammer or pins. Following are some examples of
impact printers.

1.20.2 Dot-Matrix Printers

The dot-matrix printer uses print heads containing from 9 to 24 pins. These pins produce
patterns of dots on the paper to form the individual characters. The 24 pin dot-matrix printer
produces more dots than a 9 pin dot-matrix printer, which results in much better quality and clearer characters. The general rule is: the more pins, the clearer the letters on the paper. The pins strike the ribbon individually as the print mechanism moves across the entire print line in both directions, i.e. from left to right, then right to left, and so on. The user can produce color output with a dot-matrix printer by replacing the black ribbon with a ribbon that has color stripes. Dot-matrix printers are inexpensive and typically print at speeds of 100-600
characters per second.

1.20.3 Daisy-wheel printers

In order to get the quality of type found on typewriters, a daisy-wheel impact printer can
be used. It is called daisy-wheel printer because the print mechanism looks like a daisy; at the
end of each “Petal” is a fully formed character which produces solid-line print. A hammer strikes
a “petal” containing a character against the ribbon, and the character prints on the paper. Its
speed is slow, typically 25-55 characters per second.

1.20.4 Line printers

In businesses where enormous amounts of material are printed, character-at-a-time printers are too slow; therefore, these users need line-at-a-time printers. Line printers, or line-at-a-time printers, use a special mechanism that can print a whole line at once; they can typically print in the range of 1,200 to 6,000 lines per minute. Drum, chain, and band printers are line-at-a-
time printers.

1.20.5. Drum printer

A drum printer consists of a solid, cylindrical drum that has raised characters in bands on
its surface. The number of print positions across the drum equals the number available on the
page. This number typically ranges from 80-132 print positions. The drum rotates at a rapid
speed. For each possible print position there is a print hammer located behind the paper. These
hammers strike the paper, along the ink ribbon, against the proper character on the drum as it
passes. One revolution of the drum is required to print each line. This means that all characters
on the line are not printed at exactly the same time, but the time required to print the entire line
is fast enough to call them line printers. Typical speeds of drum printers are in the range of 300
to 2000 lines per minute.

1.20.6. Chain printers

A chain printer uses a chain of print characters wrapped around two pulleys. Like the
drum printer, there is one hammer for each print position. Circuitry inside the printer detects
when the correct character appears at the desired print location on the page. The hammer then
strikes the page, pressing the paper against a ribbon and the character located at the desired
print position. An impression of the character is left on the page. The chain keeps rotating until
all the required print positions on the line have been filled. Then the page moves up to print the next
line. Speeds of chain printers range from 400 to 2500 characters per minute.

1.20.7. Band printers

A band printer operates similarly to a chain printer, except that it uses a band instead of a chain and has fewer hammers. A band printer has a steel band divided into five sections of 48 characters
each. The hammers on a band printer are mounted on a cartridge that moves across the paper
to the appropriate positions. Characters are rotated into place and struck by the hammers. Font
styles can easily be changed by replacing a band or chain.

1.20.8. Non-impact printers

Non-impact printers do not use a striking device to produce characters on the paper, and because these printers do not hammer against the paper they are much quieter. Following are some non-impact printers.

1.20.9. Ink-jet printers

Ink-jet printers work in the same fashion as dot-matrix printers in that they form images or
characters with little dots. However, the dots are formed by tiny droplets of ink. Ink-jet printers
form characters on paper by spraying ink from tiny nozzles through an electrical field that
arranges the charged ink particles into characters at the rate of approximately 250 characters
per second. The ink is absorbed into the paper and dries instantly. Various colors of ink can also
be used.

One or more nozzles in the print head emit a steady stream of ink drops. Droplets of ink
are electrically charged after leaving the nozzle. The droplets are then guided to the paper by
electrically charged deflecting plates [one plate has positive charge (upper plate) and the other
has a negative charge (lower plate)]. A nozzle for black ink may be all that’s needed to print text, but full-color printing is also possible with the addition of three extra nozzles for the cyan, magenta, and yellow
primary colors. If a droplet isn’t needed for the character or image being formed, it is recycled
back to its input nozzle.

Several manufacturers produce color ink-jet printers. Some of these printers come with all their color inks in a single cartridge. These printers produce less noise and print with better quality and greater speed.

1.20.10. Laser printers

A laser printer works like a photocopy machine. Laser printers produce images on paper
by directing a laser beam at a mirror which bounces the beam onto a drum. The drum has a
special coating on it to which toner (an ink powder) sticks. Using patterns of small dots, a laser beam conveys information from the computer by selectively neutralizing areas of the positively charged drum. The toner detaches from the areas of the drum that have been neutralized, and as the paper rolls by the drum, the toner is transferred to the paper, printing the letters or other graphics on the paper. A hot roller then bonds the toner to the paper.

Laser printers use buffers that store an entire page at a time. When a whole page is
loaded, it will be printed. The speed of laser printers is high and they print quietly without
producing much noise. Many home-use laser printers can print about eight pages per minute, while high-speed laser printers can print approximately 21,000 lines per minute, or about 437 pages per minute if each page contains 48 lines. When high speed laser printers were introduced they were expensive. Developments in the last few years have provided relatively low-cost laser printers for use in
small businesses.

Summary
· The main printed circuit board in a computer is known as the motherboard

· Computers rely on hard disk drives (HDDs) to store data permanently.

· Random Access Memory, or RAM, usually refers to computer chips that temporarily
store dynamic data to enhance computer performance while you are working.

· A Computer Port is an interface or a point of connection between the computer and
its peripheral devices.

· The CPU is the core of any computer. Everything depends on the CPU’s ability to process the instructions that it receives.

· Monitors and their types include cathode ray tube monitor, LCD monitors and LEDs.

Check your answers


 The main printed circuit board in a computer is known as the ………………………..

 Computers rely on ………………………. to store data permanently.

 ……………………………….. usually refers to computer chips that temporarily store
dynamic data to enhance computer performance while you are working.

 A ………………………… is an interface or a point of connection between the
computer and its peripheral devices.

 The CPU is the ………………… of any computer.

 Monitors and their types include ……………………., ………………….. and
……………………..


UNIT 2
OPERATING SYSTEM
Learning Objectives

After reading this lesson you will be able to understand

· Basic operating system concepts

· Operating System Benefits

· Function of a Client operating System

· Server Operating System (Server OS)

· Client-Server Model

· Benefits of client server model

· Client-Server Systems Architecture

· Command-line interface

· Operating system command-line interfaces

· Application command-line interfaces

· OS inter-process communication

· The advantages of Command line interfaces

· The disadvantages of Command line interfaces

· Files and Directories

· Functions

· Device driver

Structure
2.1 Basic Operating System Concepts

2.2 Operating System Benefits

2.3 Functions of a Client Operating System

2.4 Server Operating System (Server OS)

2.5 Client-Server Model


50

2.6 Command-line Interface

2.7 Files and Directories

2.8 Device Driver

2.1. Basic Operating System Concepts


An Operating system is basically an intermediary agent between the user and the computer hardware.
· Manages the computer’s resources (hardware, abstract resources, software)
· It’s a resource allocator.
· It is also used to control programs to prevent errors and improper computer use.
· It is interrupt driven.

Figure 2.1 illustrates the basic concepts of an operating system.

Figure 2.1: Basics operating system concepts


51

2.2. Operating System Benefits


· Simplifies hardware control for applications

· Enforcer of sharing, fairness and security with the goal of better overall performance

a. Trade-off between fairness and performance

b. Trade-off between optimal algorithms and lean algorithms – OS is overhead.

· Provides abstract resources

a. Sockets

b. Inter-process communication (a short sketch follows this list)
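
A minimal sketch of inter-process communication using Python's standard multiprocessing module: two processes exchange a message through a pipe provided by the operating system. The message text is just filler for illustration.

    from multiprocessing import Process, Pipe

    def child(conn):
        # The child process receives a message from the parent and sends a reply back.
        message = conn.recv()
        conn.send("child received: " + message)
        conn.close()

    if __name__ == "__main__":
        parent_end, child_end = Pipe()
        worker = Process(target=child, args=(child_end,))
        worker.start()
        parent_end.send("hello")      # the data crosses the process boundary via the OS
        print(parent_end.recv())      # -> child received: hello
        worker.join()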

2.3. Function of a Client operating System


The primary function of a client-server system is to create a division of labor between a
centralized server and the individual computers that are running the software. This model has a
number of benefits that help small businesses successfully create and market data and processor
intensive applications in the very competitive software market. Figure 2.2 depicts the functions
of a client operating system.

Figure 2.2: Functions of a Client Operating System

A client is a computer or a program that, as part of its operation, relies on sending requests to another program or computer; in other words, it accesses a service made available by a server (which may or may not be located on another computer). For example,
web browsers are clients that connect to web servers and retrieve web pages for display. Email
clients retrieve email from mail servers. Online chat uses a variety of clients, which vary depending
on the chat protocol being used. Multiplayer video games or online video games may run as a
client on each computer. The term “client” may also be applied to computers or devices that run
the client software or users that use the client software.

A client is part of a client–server model, which is still used today. Clients and servers may
be computer programs run on the same machine and connect via inter-process communication
techniques. Combined with Internet sockets, programs may connect to a service operating on
a possibly remote system through the Internet protocol suite. Servers wait for potential clients
to initiate connections that they may accept.

The term was first applied to devices that were not capable of running their own stand-
alone programs, but could interact with remote computers via a network. These computer
terminals were clients of the time-sharing mainframe computer.

2.4. Server Operating System (Server OS)


A server operating system (OS) is a type of operating system that is designed to be
installed and used on a server computer.

It is an advanced version of an operating system, having features and capabilities required
within a client-server architecture or similar enterprise computing environment.

Some common examples of server OSs include:

· Red Hat Enterprise Linux

· Windows Server

· Mac OS X Server

Some of the key features of a server operating system include:

· Ability to access the server both in GUI and command-level interface

· Execute all or most processes from OS commands

· Advanced-level hardware, software and network configuration services

· Install/deploy business applications and/or web applications

· Provides central interface to manage users, implement security and other
administrative processes

· Manages and monitors client computers and/or operating systems



2.5. Client-Server Model


Client/server system has increasingly minimized application development time by dividing
functions of sharing information into both the client and server. The client is the requester while
the server is the provider of service. Some of the standardized protocols that client and servers
use to communicate with each other include: File Transfer Protocol (FTP), Simple Mail Transfer
Protocol (SMTP) and Hypertext Transfer Protocol (HTTP). Thus, a client-server system can be defined as a software architecture made up of both the client and server, whereby the clients always send requests while the server responds to the requests sent. Client-server provides inter-process communication because it involves the exchange of data between the client and server, whereby each of them performs different functions.

Figure 2.3: Client-Server Model
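
A minimal sketch of this request/response exchange using Python's standard socket module; the port number 5000 and the one-line "echo" protocol are assumptions made purely for illustration.

    import socket, threading, time

    def server():
        # The server waits for a client, reads its request and sends back a response.
        with socket.socket() as srv:
            srv.bind(("127.0.0.1", 5000))
            srv.listen()
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode()
                conn.sendall(("echo: " + request).encode())

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                       # give the server a moment to start listening

    # The client initiates the connection and sends its request.
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", 5000))
        cli.sendall(b"hello server")
        print(cli.recv(1024).decode())    # -> echo: hello server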

2.5.1. Benefits of client server model


 It splits the processing of application across multiple machines.

 It allows easier sharing of resources from client to servers.

 It reduces data replication by storing data on each server instead of client.

2.5.2. Client-Server Systems Architecture


 Client-server architecture is usually made up of the; application server, database
server and PC.

 The two main architectures are the 2-tier and 3-tier architecture.

 2-tier client-server system architecture: This is an architecture which involves only
the Database Server and a client PC. In a 2-tier architecture, the users run applications on their PC (the client), which connects through a network to the server. The client application runs both the application code and business logic, and then displays output to the user. It is also called a thick client.

 It is considered when the client has access to the database directly without involving
any intermediary.

 It is also used to perform application logic whereby the application code will be
assigned to each of the client in the workstation.

Figure 2.4: Two Tier Client-Server Architecture

 3-tier client-server system architecture: This architecture involves the client PC,
Database server and Application server.

 3-tier architecture can be extended to N-tier whereby it involves more application
servers.

 In this architecture, the client contains presentation logic only, whereby less resources
and less coding are needed by the client.

 It supports one server being in charge of many clients and provides more resources
in the server.

 It involves an intermediary (Application server) also known as middleware.

 Middleware: The 3-tier architecture involves an application server which serves as
a middleware between the client PC and database server. The middleware tier is separate software, running on a separate machine, that performs the application logic.

Figure 2.5: Three Tier Architecture

2.6 Command-line interface


A command-line interface or command language interpreter (CLI), also known as
command-line user interface, console user interface and character user interface (CUI), is a
means of interacting with a computer program where the user (or client) issues commands to
the program in the form of successive lines of text (command lines). A program which handles
the interface is called a command language interpreter or shell.

The CLI was the primary means of interaction with most computer systems on computer
terminals in the mid-1960s, and continued to be used throughout the 1970s and 1980s on
OpenVMS, Unix systems and personal computer systems including MS-DOS, CP/M and Apple
DOS. The interface is usually implemented with a command line shell, which is a program that
accepts commands as text input and converts commands into appropriate operating system
functions.

Today, many end users rarely, if ever, use command-line interfaces and instead rely upon
graphical user interfaces and menu-driven interactions. However, many software developers,
system administrators and advanced users still rely heavily on command-line interfaces to perform
tasks more efficiently, configure their machine, or access programs and program features that
are not available through a graphical interface.

Alternatives to the command line include, but are not limited to text user interface menus
(see IBM AIX SMIT for example), keyboard shortcuts, and various other desktop metaphors
centered on the pointer (usually controlled with a mouse). Examples of this include the Windows
versions 1, 2, 3, 3.1, and 3.11 (an OS shell that runs in DOS), Dos Shell, and Mouse Systems
Power Panel.

Programs with command-line interfaces are generally easier to automate via scripting.
Command-line interfaces for software other than operating systems include a number of
programming languages such as Tcl/Tk, PHP, and others, as well as utilities such as the
compression utility WinZip, and some FTP and SSH/Telnet clients.

2.6.1. Operating system command-line interfaces

Operating system (OS) command line interfaces are usually distinct programs supplied
with the operating system. A program that implements such a text interface is often called a
command-line interpreter, command processor or shell.

Examples of command-line interpreters include DEC’s DIGITAL Command Language
(DCL) in OpenVMS and RSX-11, the various Unix shells (sh, ksh, csh, tcsh, bash, etc.), CP/M’s
CCP, DOS’s COMMAND.COM, as well as the OS/2 and the Windows CMD.EXE programs, the
latter groups being based heavily on DEC’s RSX-11 and RSTS CLIs. Under most operating
systems, it is possible to replace the default shell program with alternatives; examples include
4DOS for DOS, 4OS2 for OS/2, and 4NT or Take Command for Windows.

Although the term ‘shell’ is often used to describe a command-line interpreter, strictly
speaking a ‘shell’ can be any program that constitutes the user-interface, including fully graphically
oriented ones. For example, the default Windows GUI is a shell program named
EXPLORER.EXE, as defined in the SHELL=EXPLORER.EXE line in the WIN.INI configuration
file. These programs are shells, but not CLIs.

2.6.2. Application command-line interfaces

Application programs (as opposed to operating systems) may also have command line
interfaces. An application program may support none, any, or all of these three major types of
command line interface mechanisms:

Parameters: Most operating systems support a means to pass additional information to
a program when it is launched. When a program is launched from an OS command line shell,
additional text provided along with the program name is passed to the launched program.

Interactive command line sessions: After launch, a program may provide an operator
with an independent means to enter commands in the form of text.

OS inter-process communication: Most operating systems support means of inter-
process communication (for example, standard streams or named pipes). Command lines from
client processes may be redirected to a CLI program by one of these methods.
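
A small Python sketch showing two of these mechanisms at once: parameters arriving in sys.argv when the program is launched, and text arriving on the standard input stream, which another process can feed through a pipe or redirection. The script name cli_demo.py and the output format are assumptions for illustration only.

    import sys

    def main() -> None:
        # Mechanism 1: parameters passed on the command line at launch time.
        print("parameters:", sys.argv[1:])

        # Mechanism 3: inter-process communication through the standard input stream,
        # e.g.  echo hello | python cli_demo.py --verbose
        if not sys.stdin.isatty():
            for line in sys.stdin:
                print("received via stdin:", line.strip())

    if __name__ == "__main__":
        main()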

Some applications support only a CLI, presenting a CLI prompt to the user and acting
upon command lines as they are entered. Other programs support both a CLI and a GUI. In
some cases, a GUI is simply a wrapper around a separate CLI executable file. In other cases,
a program may provide a CLI as an optional alternative to its GUI. CLIs and GUIs often support
different functionality. For example, all features of MATLAB, a numerical analysis computer
program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features.

Figure 2.6: Inter Process Communication



2.6.3. The advantages of Command line interfaces


· If the user knows the correct commands then this type of interface can be much
faster than any other type of interface.

· This type of interface needs much less memory (Random Access Memory) compared to other types of user interfaces.

· This type of interface does not use as much CPU processing time as others.

· A low resolution, cheaper monitor can be used with this type of interface.

· A CLI does not require Windows to run.

2.6.4. Disadvantages Command Line Interface (CLI)


· For someone who has never used a CLI, it can be very confusing.

· Commands have to be typed precisely. If there is a spelling mistake, then the
command will fail or not respond.

· If the user mistypes an instruction, it is often necessary to start from scratch again.

· There are a large number of commands which need to be learned; in the case of Unix it can be more than a hundred.

· The user can’t just guess what the instruction might be and can’t just ‘have a go’.

2.7 Files and Directories


A file is a collection of data that is stored on disk and that can be manipulated as a single
unit by its name.

A directory is a file that acts as a folder for other files. A directory can also contain other
directories (subdirectories); a directory that contains another directory is called the parent directory
of the directory it contains.

A directory tree includes a directory and all of its files, including the contents of all
subdirectories. (Each directory is a “branch” in the “tree.”) A slash character alone (`/’) is the
name of the root directory at the base of the directory tree hierarchy; it is the trunk from which
all other files or directories branch.

The following image shows an abridged version of the directory hierarchy.

Figure 2.7: Directory hierarchy
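
The hierarchy can also be explored programmatically. The following Python sketch lists the entries directly under a given directory, marking each "branch" (directory) and leaf (file); the starting path "/" is the root directory mentioned above, and the exact entries will of course differ from system to system.

    from pathlib import Path

    def list_directory(path: str = "/") -> None:
        # Each subdirectory is a branch of the tree; ordinary files are its leaves.
        for entry in sorted(Path(path).iterdir()):
            kind = "directory" if entry.is_dir() else "file"
            print(f"{entry}  ({kind})")

    list_directory("/")            # contents of the root directory
    # list_directory("/home")      # contents of a subdirectory, if it exists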

2.7.1. Functions
· Naming Files: How to give names to the files and directories.

· Changing Directories: How to move around the filesystem.

· Listing Directories: Listing the contents of a directory.

· Copying Files: Making copies of files.

· Moving Files: Moving files to a different location.

· Removing Files: Removing files and directories you don’t need.

· Linking Files: Creating links between files.

· File Expansions: Shortcuts for specifying file names.

· Browsing Files: Browsing files on the system.

2.7.2. System file

A system file is a file critical to the proper functioning of an operating system which, if deleted or modified, may cause it to no longer work. Often these files are hidden and cannot be deleted because they are in use by the operating system. In Windows and DOS, a system attribute can also be added to any file, and many system files use the .sys file extension; however, this alone does not make the file a true system file.

2.7.3. The Booting Process

Booting (also known as booting up) is the initial set of operations that a computer system
performs when electrical power is switched on. The process begins when a computer that has
been turned off is re-energized, and ends when the computer is ready to perform its normal
operations. On modern general purpose computers, this can take tens of seconds and typically
involves performing power-on self-test, locating and initializing peripheral devices, and then
finding, loading and starting an operating system. Many computer systems also allow these
operations to be initiated by a software command without cycling power, in what is known as a
soft reboot, though some of the initial operations might be skipped on a soft reboot. A boot
loader is a computer program that loads the main operating system or runtime environment for
the computer after completion of self-tests.

The computer term boot is short for bootstrap or bootstrap load and derives from the
phrase to pull oneself up by one’s bootstraps. The usage calls attention to the paradox that a
computer cannot run without first loading software but some software must run before any
software can be loaded. Early computers used a variety of ad-hoc methods to get a fragment of
software into memory to solve this problem. The invention of integrated circuit Read-only memory
(ROM) of various types solved the paradox by allowing computers to be shipped with a start-up
program that could not be erased, but growth in the size of ROM has allowed ever more elaborate
start up procedures to be implemented.

There are numerous examples of single and multi-stage boot sequences that begin with
the execution of boot program(s) stored in boot ROMs. During the booting process, the binary
code of an operating system or runtime environment may be loaded from nonvolatile secondary
storage (such as a hard disk drive) into volatile, or random-access memory (RAM) and then
executed. Some simpler embedded systems do not require a noticeable boot sequence to
begin functioning and may simply run operational programs stored in read-only memory (ROM)
when turned on.
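
As a very rough conceptual summary, the stages described above (and detailed in the next subsection) can be sketched as follows. The function names are invented for illustration and do not correspond to any real firmware interface.

    def power_on_self_test() -> None:
        print("POST: CPU, memory and basic devices check out")

    def load_operating_system(boot_device: str) -> str:
        print("BIOS: reading the boot record from the", boot_device,
              "and loading the OS kernel into RAM")
        return "kernel"

    def boot(boot_sequence=("hard disk", "optical drive", "USB drive")) -> None:
        # Highly simplified order of operations on power-up.
        power_on_self_test()
        kernel = load_operating_system(boot_sequence[0])
        print(kernel, "initializes, loads device drivers and hands control to the user")

    boot()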

2.7.4. The order of booting

 In order for a computer to successfully boot, its BIOS, operating system and hardware
components must all be working properly; failure of any one of these three elements
will likely result in a failed boot sequence.

 When the computer’s power is first turned on, the CPU initializes itself, which is
triggered by a series of clock ticks generated by the system clock. Part of the CPU’s
initialization is to look to the system’s ROM BIOS for its first instruction in the startup
program. The ROM BIOS stores the first instruction, which is the instruction to run
the power-on self-test (POST), in a predetermined memory address. POST begins
by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect
a battery failure, it then continues to initialize the CPU, checking the inventoried
hardware devices (such as the video card), secondary storage devices, such as
hard drives and floppy drives, ports and other hardware devices, such as the keyboard
and mouse, to ensure they are functioning properly.

 Once the POST has determined that all components are functioning properly and
the CPU has successfully initialized, the BIOS looks for an OS to load.

 The BIOS typically looks to the CMOS chip to tell it where to find the OS, and in
most PCs, the OS loads from the C drive on the hard drive even though the BIOS
has the capability to load the OS from a floppy disk, CD or ZIP drive. The order of
drives that the CMOS looks to in order to locate the OS is called the boot sequence,
which can be changed by altering the CMOS setup. Looking to the appropriate boot
drive, the BIOS will first encounter the boot record, which tells it where to find the
beginning of the OS and the subsequent program file that will initialize the OS.

 Once the OS initializes, the BIOS copies its files into memory and the OS basically takes over control of the boot process. Now in control, the OS performs another inventory of the system’s memory and memory availability (which the BIOS already checked) and loads the device drivers that it needs to control the peripheral devices, such as a printer, scanner, optical drive, mouse and keyboard. This is the final stage in the boot process, after which the user can access the system’s applications to perform tasks. A simplified sketch of this sequence is given below.

2.8 Device driver


A device driver is a program that controls a particular type of device that is attached to the
computer. There are device drivers for printers, displays, CD-ROM readers, diskette drives,
and so on. When an operating system is bought, many device drivers are built into the product.
However, if a new type of device that the operating system did not anticipate is bought later, a new device driver has to be installed. A device driver essentially converts the more general input/output instructions of the operating system to messages that the device type can understand.

Some Windows programs are virtual device drivers. These programs interface with the
Windows Virtual Machine Manager. There is a virtual device driver for each main hardware
device in the system, including the hard disk drive controller, keyboard, and serial and parallel
ports. They’re used to maintain the status of a hardware device that has changeable settings.
Virtual device drivers handle software interrupts from the system rather than hardware interrupts.

In Windows operating systems, a device driver file usually has a file name suffix of DLL or
EXE. A virtual device driver usually has the suffix of VXD.

Figure 2.8: Device Drivers

2.8.1. Functions of a driver


 Encapsulation – Hides low-level device protocol details from the client

 Unification – Makes similar devices look the same

 Protection (in cooperation with the OS) – Only authorized applications can use
the device

 Multiplexing (in cooperation with the OS) – Multiple applications can use the device concurrently. A sketch illustrating these functions follows below.

Summary
 An Operating system is basically an intermediary agent between the user and the computer hardware.

 The primary function of a client-server system is to create a division of labor between


a centralized server and the individual computers that are running the software.
This model has a number of benefits that help small businesses successfully create
and market data and processor intensive applications in the very competitive software
market.

 A server operating system (OS) is a type of operating system that is designed to be


installed and used on a server computer. It is an advanced version of an operating
system, having features and capabilities required within a client-server architecture
or similar enterprise computing environment.

 3-tier client-server system architecture: This architecture involves the client PC,
Database server and Application server.

 A command-line interface or command language interpreter (CLI), also known as


command-line user interface, console user interface and character user interface
(CUI), is a means of interacting with a computer program where the user (or client)
issues commands to the program in the form of successive lines of text (command
lines). A program which handles the interface is called a command language
interpreter or shell (computing).

 A file is a collection of data that is stored on disk and that can be manipulated as a
single unit by its name.

 A directory is a file that acts as a folder for other files. A directory can also contain
other directories (subdirectories); a directory that contains another directory is called
the parent directory of the directory it contains.

 A device driver is a program that controls a particular type of device that is attached
to the computer. There are device drivers for printers, displays, CD-ROM readers,
diskette drives, and so on. When an operating system is bought, many device drivers
are built into the product.

 The key functions of a device driver are encapsulation, unification, protection and multiplexing.



Check your Answers

Write short notes on

 Operating system

 Client OS

 Server OS

 Command line

 Files and Directories

 Drivers

Reference
 https://searchenterprisedesktop.techtarget.com/definition/device-driver

 http://faculty.salina.k-state.edu/tim/ossg/Introduction/intro.html

 https://smallbusiness.chron.com/primary-function-clientserver-system-46753.html

 https://www.techopedia.com/definition/30145/server-operating-system-server-os

 https://pdfs.semanticscholar.org/e1d2/133541a5d22d0ee60ee39a0fece970a4ddbf.pdf

 http://dsl.org/cookbook/cookbook_8.html

 https://en.wikipedia.org/wiki/System_file

 https://www.computerhope.com/jargon/s/systfile.htm

UNIT-3
COMPUTER PRINCIPLES AND A
BLACK BOX MODEL OF THE PC
Learning Objectives

After reading this lesson you will be able to understand:

· Computer Principles

· Two Black Boxes

· The memory and the processor

· Black Box Model

· Address and data buses

· The stored program concept

· Format of instructions

· The processor mechanism

· Motherboard

· Components of PC

Structure
3.1 Computer Principles

3.2 Two Black Boxes

3.3 The Black Box Model

3.4 Buses

3.5 Packaging of Chips

3.6 The Stored Program Concept

3.7 Format of Instructions

3.8 Processor Mechanism

3.9 Motherboard

3.10 The Design of the PC



3.1. Computer Principles


In the previous chapter, operating systems, client-server architecture, the server operating system (Server OS), the client-server model, the benefits of the client-server model, client-server systems architecture, the command-line interface, files and directories, their functions, and device drivers were discussed.

The part of the computer that carries out the function of executing instructions is called the processor, and the relationship between this element and the memory is what needs to be examined in more detail. This can be done by means of a worked example, showing step by step the principles involved and how data in the memory is interpreted and manipulated by the processor.

3.2. Two Black Boxes


3.2.1. Memory

Computer memory is any physical device capable of storing information temporarily or


permanently. For example, Random Access Memory (RAM), is a volatile memory that stores
information on an integrated circuit used by the operating system, software, and hardware.

Memory can be either volatile or non-volatile. Volatile memory is a memory that loses its contents when the computer or hardware device loses power. Computer RAM is an example of a volatile memory. This is the reason for the loss of data that has not been saved when a computer freezes or reboots while working on a program.

Non-volatile memory, sometimes abbreviated as NVRAM, is a memory that keeps its


contents even if the power is lost. EPROM is an example of a non-volatile memory.

Figure 3.1 represents a very simple diagram showing a processor and a memory as two black boxes connected together by two arrowed lines. The black boxes are shown as separate because it is very likely that they will be implemented using different electronic chips: a processor chip and a memory chip (or possibly a set of memory chips). They are connected together by flexible cables (or tracks on a printed circuit board) which are made up of several wires in parallel. Such multiple connections are called buses.

Figure 3.1: Processor and Memory

3.2.2. The Processor

The basic mechanism for our example processor is very simple. The idea of the stored program concept, as implemented in a modern computer, was first expounded by John Von Neumann (1945). This idea decrees that instructions are held sequentially in the memory and that the processor executes each one, in turn, from the lowest address in memory to the highest address in memory, unless otherwise instructed. To achieve this the processor maintains a record of where it has got to so far in executing instructions. It does this using an internal store that is variously called the counter register or the sequence control register or the program counter.

Again, for the purposes of our example, this sequence has been simplified into four steps:

· fetch

· interpret

· update and

· execute

3.2.2.1. Fetch

In the fetch step, the processor will first of all use its program counter to send a signal to the main memory requesting that it be sent a copy of the next instruction to be executed. It will do this using the address bus. The memory will then respond by sending back a copy of the binary patterns that it holds at the address it has been given. It will do this using the data bus. The processor will then take the binary patterns that represent the instruction from the data bus and place them in its instruction registers in readiness for decoding.

3.2.2.2. Interpret

Once the transfer is complete, the processor will then enter the interpret step, where it will interpret or decode the patterns as an imperative instruction. Part of the pattern will be used to select the action that the processor should perform, and part will be used to determine the object to which this action should be applied, as we described above.

3.2.2.3. Update

On completion of its preparations to perform the instruction, the processor will then enter the update step. In this step, the processor prepares its program counter so that it is ready for the next instruction in sequence. In general, it does this by calculating the length in bytes of the current instruction and adding that value to its program counter. Given that the system is set up to obey a sequence of instructions, one after the other, from lower address to higher address, the program counter, having had this length added, will thus be pointing to the start of the next instruction in the sequence.

3.2.2.4. Execute

Finally, the processor enters the execute step, where the action defined in the interpret step is applied to the object defined in the interpret step. To do this, it may well use an additional register as a scratchpad for interim results, and this is sometimes known as an accumulator or general purpose register. After that, the processor repeats the cycle starting with the fetch step once again. A simple sketch of this four-step cycle is given below.

3.2.3. The Worked Example

Fig. 3.2 shows a more detailed view of the two black boxes that have been considered earlier, now rotated through 90° and expanded so that what is contained within can be seen. Here it is possible to see into a small portion of the main memory on the left-hand side and observe exactly what patterns are in the bytes with addresses 3, 4 and 5 and 31 through to 36.

All that is required for the processor, on the right-hand side, is a small element of internal memory for the registers and a four-step cyclic control mechanism which can be compared with the four-stroke internal combustion engine. Where the four strokes of the internal combustion engine are “suck”, “squeeze”, “bang” and “blow”, the four steps of the processor cycle are “fetch”, “interpret”, “update” and “execute”.

One rather important difference between the two models, however, is their rotational speed. In the case of a typical modern processor, the Intel Pentium 4 for example, the speed of operation can be as high as 10,000 MIPS (million instructions per second) or more. This suggests that, since each “revolution” causes one instruction to be carried out, the equivalent “rotational speed” is 10,000 million revolutions per second, compared with the 4000 or so revolutions per minute of an internal combustion engine.

The processor is shown connected to the main memory by the two buses, the address bus at the top and the data bus at the bottom. There is a third bus, not shown on the diagram for the sake of clarity, known as the control bus, and this is concerned with control activities, such as the direction of data flow on the data bus and the general timing of events throughout the system.

Figure 3.2: Looking Inside

As it was described above, the program counter in the processor holds the address of where in main memory the next instruction that the processor is to execute can be found (in this example, address 31), and the doing and using registers are our versions of the instruction registers used by the processor to interpret the current instruction. The gp register is the general purpose scratchpad register that was also referred to earlier. We have used throughout registers that are only one byte in size so as to keep the example simple. Again, this does not affect the principles, but modern practical processors are likely to have two, four and even eight byte registers.

The in-built control mechanism of our example processor causes it to cycle clockwise
through the four steps: fetch, interpret, update and execute, over and over again, repeating the
same cycle continuously all the while that the processor is switched on.

The rate at which the processor cycle is executed is controlled by a system clock and, as
mentioned above, this might well be running at several thousands of millions of cycles per
second.

3.3 The Black Box Model


The black box model can now be used to identify the elements of a real PC. The first component to be considered is the mechanism by which the major elements are connected up.

The interconnections between the major elements are represented in Fig. 3.3:

Figure 3.3: Black box model of an information processing system.



The major elements are the address and data buses. Recalling that they are simply sets
of electrical connections, it will be no surprise to note that they tend to be implemented as
parallel tracks on a printed circuit board (PCB). This brings us then to the most important
component of all, the motherboard. This normally hosts the processor and the memory chips,
and as a result the buses between them are usually just parallel tracks on the motherboard.
Also on the motherboard is the chipset that carries out all the housekeeping needed to keep
control of the information transfers between the processor, the memory and all the peripheral
devices. In addition, the motherboard hosts the real-time clock, which contains within it the
battery-backed memory known as the CMOS or Complementary Metal Oxide Semiconductor
memory, and the Basic Input Output System (BIOS) Read-Only Memory (ROM). One particularly
clever idea in the original design of the PC was to arrange for the various buses to be accessible
in a standard form so that expansion cards could be fitted into expansion slots on the motherboard
and thus gain access to all the buses. The motherboard normally has a number of these
expansion slot connectors either directly fitted onto the motherboard itself, or fitted onto a separate
riser board or daughterboard that may be connected at right angles to the motherboard.

The next component that needs to be looked at is the processor. This technology has advanced at an unprecedented rate, in terms of both performance and price, over the past 25 years.

3.4. Buses
One bus (the address bus) has a single arrow on it, indicating a one-way transfer of data, and the second bus (the data bus) has two arrows, indicating a two-way transfer of data. If binary patterns are required to pass between the processor and the memory, an appropriate unit to consider is the byte that is being processed. Recalling that a byte consists of eight binary bits, a suitable form of connection that would permit all eight bits of a byte to be transferred in one go would be eight parallel lines: a separate line for each bit. This is precisely the form that a bus takes: a set of parallel lines that permits the transfer of several bits of data all at once. Buses come in many sizes; the eight-bit data bus is one example.

The buses are no more than a set of parallel electrical connections: one connection for each bit of information. Hence an eight-bit bus can transfer eight bits or one byte of information at a time. From this, it becomes apparent that although the speed at which the processor operates is a very important factor in the overall performance of the system, it is the data transfer rates across the system buses which effectively act as bottlenecks and limit the performance of the whole. For this reason, there has been much development of buses throughout the life of the PC to try to overcome these various performance bottlenecks as the major elements of the system have all become so much faster.

3.4.1. Three Buses

A simplistic view of the PC considers the major elements to be interconnected by means of three main buses: the address bus, the data bus and the control bus. Fig. 3.4 depicts an example of the interconnections of the three buses between the processor unit and the memory unit. Here the address bus provides the means by which the processor can signal the memory with the address of a byte to which it wants access. More generally, the address bus is used by any autonomous device.

Figure 3.4: Three Buses

Autonomous devices are devices that can operate without every action being controlled by the main processor. The writing of a memory block to a hard disk drive, for instance, would be initiated by the processor, but the disk controller might then carry out the detailed transfer of each byte of memory autonomously, referring back to the processor with an interrupt only when the transfer was complete. This is sometimes also referred to as Direct Memory Access (DMA). Such an autonomous device uses the address bus to specify the address of some other device (or the address of part of some other device, such as a memory byte) with which it wishes to communicate.

The data bus, in the above diagram, provides the means by which the data bits are
passed, in parallel, between the memory and the processor after the address of the required
byte has been specified by the address bus.

The control bus carries, as can be expected, a number of control lines concerned with the housekeeping that is necessary to make this all work. Examples of such control lines include signals to indicate that the:

· data bus is being used to read a byte from memory

· data bus is being used to write a byte into memory

· values on the address bus are currently valid

· processor is using the system buses, and so forth.

In addition, a number of clock timing signals are also distributed by means of the control
bus.

The three-bus model derives from the early processors, with their sets of data, address
and control pins, which were used to construct the first PCs. The buses are implemented in
such a way as to provide a standard interface to other devices. Using this standard interface,
expansion cards containing new devices can easily be slotted into spare sockets on the
motherboard and be connected directly to the three buses.

3.4.2. Size of Buses


· Clearly the size of the data bus, that is, the number of bits that can be transferred in
parallel, is going to be a major factor in determining overall system performance.

· The wider the bus, the more data that can be passed in parallel on each machine
cycle and hence the faster the overall system should be able to run.

· The data bus width is often used to categorize processors.

· Very early processors are known as 8 bit, because they have only 8 pins for access
to their external data bus.

· In the mid- to late 1970s came the first of the 16 bit processors, and the Intel
Pentium processors of today are 64 bit, which means that they can transfer 8 bytes
at a time over their external data bus.

· One point worth noting, in passing, is that modern processors are likely to have
much larger internal data buses, which interact with their on-chip caches, than the
external data buses that are evident to the rest of the system.

· In the case of the Intel Pentium 4, the internal data bus, on the chip itself, is 256 bits
wide.

· The width of the address bus, on the other hand, determines the maximum number
of different devices or memory bytes that can be individually addressed.

· In practice, it imposes a limit on the size of the memory that is directly accessible to the processor, and thus dictates the memory capacity of the system, as the sketch below illustrates.
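As a worked illustration of the last point, the number of individually addressable bytes is simply 2 raised to the power of the address bus width. The short Python sketch below just performs that arithmetic for a few well-known bus widths; it assumes nothing beyond the formula itself.

# Addressable memory as a function of address bus width: 2 ** width bytes.
for width in (16, 20, 32):
    addressable_bytes = 2 ** width
    print(f"{width}-bit address bus can address {addressable_bytes:,} bytes")

# 16 bits -> 65,536 bytes (64 kbyte); 20 bits -> 1,048,576 bytes (1 Mbyte, the
# limit of the original PC); 32 bits -> 4,294,967,296 bytes (4 Gbyte).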

3.5. Packaging of Chips


Obviously then, for high performance and high capacity, the data and address buses need to be as large as possible. One limitation that is imposed on the size of these buses is the need to connect each separate contact point on the tiny processor chip to a corresponding pin on some supporting container package and then to be able to plug that package into a suitable socket on the motherboard.

Figure 3.5: A Typical DIL Chip

One standard packaging arrangement that has been around since the early days of the
PC is for the

· Dual In Line (DIL) chip as shown in Fig. 3.5 (from Microsoft ClipArt Gallery 2.0), and this is often known as a Dual In Line Package (DIP).

· For the processor at the heart of the original IBM PC, the Intel 8088, the DIL package
has 40 pins, with 20 down each side.

o The data bus is 8 bits wide and the address bus is 20 bits wide, but 20 pins on the
package are also needed for control signals and for the power supply.

o In order to fit all of this onto a 40 pin package, many of the pins have to be used for
more than one purpose at different times in the processor cycle.

· With the Intel 8088, the address pins 0 to 7 also double up as the eight data bus
pins and the address pins 16 to 19 carry status information as well.

· This technique is known as multiplexing and obviously adds additional complication


to the motherboard in having to separate out the various signals.

· DIL packages with more than 40 legs were found to be very unwieldy and difficult to
plug into their sockets, although the Texas Instruments TMS9900 had 64 pins in a
DIL package (see Adams, 1981).

· In later processor systems, as the number of pin connections required increased,


the DIL packaging was found to be too limiting and was replaced by a square- or
rectangular shaped package with several rows of pins on each side, known as a Pin
Grid Array (PGA).

· With this packaging, now often referred to as the form factor of the chip, we see the
more frequent use of Zero Insertion Force (ZIF) sockets, which allow the relatively
easy replacement and upgrading of pin grid array processor chips. A ZIF socket
allows a chip to be inserted into the socket without using any significant force.

· When the chip is properly seated in the socket, a spring-loaded locking plate is moved into place by means of a small lever, which can be seen to the left of Fig. 3.6, and this grips all the pins securely, making good electrical contact with them. In Fig. 3.6 the lever is shown in the down (locked) position on a Socket 939 ZIF socket.

· The form factors of processor chips for the PC introduced by Intel over the years have seen a variety of pin grid array systems, initially known as Socket 1 through to Socket 8, as shown in Table 3.1. Socket 8 is a Staggered Pin Grid Array (SPGA), which was specially designed for the Pentium Pro with its integrated L2 cache. Intel also introduced what they called a Single Edge Contact (SEC) cartridge for some of the Pentium II and III processors. This form factor is called Slot 1 and is a 242 contact daughter card slot.

Figure 3.6: A Socket 939 ZIF Socket

Table 3.1: Initial Socket Numbers

· They then increased the number of contacts on the SEC cartridge to 330 and this
became known as Slot 2. Other manufacturers produced Slot A and Slot B SEC
form factors.

· Subsequently, for the Pentium III and Pentium 4, the Socket form factor returned to
favour and a variety of different Socket numbers were produced by Intel with the
Socket number indicating the number of pins on the PGA.

· Some examples are: Socket 370, Socket 423, Socket 478, Socket 479, Socket 775
and so forth. In addition, other manufacturers produced their own versions, such
as:

o Socket 754,

o Socket 939 (the one shown in Fig. 3.6 for an AMD chip),

o Socket A, Socket F and so forth.

· A more radical approach to the packaging problem is to place the die (or silicon
chip) directly onto the printed circuit board and bond the die connections straight
onto lands set up for that purpose on the PCB. The die is then covered with a blob
of resin for protection.

· This technique is known as Chip on Board (COB) or Direct Chip Attach (DCA) and
is now frequently found in the production of Personal Digital Assistants (PDAs) and
electronic organizers.

3.6. The Stored Program Concept


Within the memory box binary patterns have been indicated as both objects and rules.
The rules are ordered sequences of instructions that are to be interpreted by the processor and
which will cause it to carry out a series of specific actions. Such sequences of rules are called
programs, and the idea that the computer holds in its memory instructions to itself is sometimes referred to as the stored program concept. So we have the situation where the first of the two black boxes in the diagram, the memory, contains not only the binary patterns that represent the real world objects (the data) but also the binary patterns that represent the rules (the program).

· These rules specify what is to be done to the binary patterns that are the data, and

· it is these program rule patterns that are to be interpreted by the second of the two
black boxes shown in the diagram: the processor.

The idea can be quite difficult to grasp. There are binary patterns in one part of the
memory. These binary patterns are interpreted by the processor as a sequence of rules. The
processor executes this sequence of rules and, in so doing, carries out a series of actions.
These actions, typically, manipulate binary patterns in another part of the memory. These
manipulations then confer specific interpretations onto the manipulated binary patterns. This process mimics, in a very simple form, our mental interpretation of a binary pattern.

3.7. Format of Instructions


Now let us consider the form that one such instruction or rule might take. Keeping our example as uncomplicated as possible, let us define a rule as consisting of the binary patterns in two consecutive bytes in memory, as shown in Fig. 3.7. For our simplified processor, we will decree that the pattern in the first byte of the pair (let us assume it is in big endian format) is to represent a doing code to the processor. This is an imperative: do this thing; the actual details of what particular thing is to be done are to be determined by the binary pattern in the doing byte. In Fig. 3.7 this pattern is 00000101 and it is arbitrarily decided that this should represent the action subtract one byte pattern from another.

Figure 3.7: An Instruction

The pattern in the second byte is to represent the object on which the doing code action is to be carried out. This is called the using code. In Fig. 3.7 this pattern is 11000101, which in decimal is 197. In many cases, the value of this second byte will refer to a starting place in memory where the object to be manipulated resides; that is, it will often be a memory byte address. The two-byte pattern may therefore be interpreted as an instruction, or rule, which states: “subtract the thing in byte 197”.

In a practical processor, probably there would be a wide variety of different doing codes
available, known collectively as the order code for the processor, and these would associate
specific patterns in the doing byte with specific actions available in the hardware of the processor.

Typical examples might include:

· add a byte

· subtract a byte

· multiply a byte

· divide a byte

· input a byte

· output a byte

· move a byte

· compare a byte and so forth.

There may be similar actions which relate to two or more bytes taken together. The range and functionality of these doing codes are defined by the hardware of the processor. For our example processor, however, let us consider four such doing codes, namely

· load a byte

· store a byte

· add a byte and

· subtract a byte

and we will decree that load a byte is to be 00000001, store a byte is to be 00000010, add a byte is to be 00000100, and subtract a byte is to be 00000101, as shown in Table 3.2. A short sketch that decodes these codes is given after the table.

Table 3.2: Example Doing Codes
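The doing codes in Table 3.2 can be brought to life with a tiny sketch that decodes and executes the example instruction of Fig. 3.7 (“subtract the thing in byte 197”). The register model and memory contents below are the simplified ones used in this chapter, chosen purely for illustration; this is not any real instruction set.

# Decoding and executing the example two-byte instructions using the doing
# codes of Table 3.2. A deliberately simplified model, not a real processor.

DOING_CODES = {
    0b00000001: "load",
    0b00000010: "store",
    0b00000100: "add",
    0b00000101: "subtract",
}

memory = [0] * 256
memory[197] = 25            # an assumed value for the "thing in byte 197"
gp = 100                    # general purpose (accumulator) register, assumed start value

def execute(doing_byte, using_byte):
    """Interpret the doing byte as an action and the using byte as a memory address."""
    global gp
    action = DOING_CODES[doing_byte]
    if action == "load":
        gp = memory[using_byte]
    elif action == "store":
        memory[using_byte] = gp
    elif action == "add":
        gp += memory[using_byte]
    elif action == "subtract":
        gp -= memory[using_byte]
    print(f"{action} byte {using_byte}: gp register is now {gp}")

# The instruction of Fig. 3.7: doing code 00000101, using code 11000101 (197).
execute(0b00000101, 0b11000101)     # subtract the thing in byte 197 -> gp = 75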



Note:

Big Endian and Little Endian are the terms that describe the order in which a sequence of
bytes is stored in computer memory.

Big endian is an order in which the “big end”, the most significant value in the sequence, is stored first, at the lowest storage address. For example, in a big endian format the two bytes required for the hexadecimal number 4F52 would be stored as 4F52 in storage. If 4F is stored at storage address 1000, 52 will be stored at storage address 1001. IBM’s 370 mainframes, most RISC-based computers and Motorola microprocessors use the big endian approach. TCP/IP also uses the big endian approach and hence it is sometimes called network order. For those who use languages that read left to right, this seems like the natural way of storing a string of characters or numbers: in the same order as one would expect to see it, in forward fashion, just as one would read a string.

Little endian is an order in which the “little end”, the least significant value in the sequence, is stored first. In a little endian system, the above mentioned hexadecimal two bytes of information will be stored as 524F; that is, if 52 is stored at storage address 1000, then 4F will be stored at storage address 1001. Intel processors and DEC Alphas use little endian.
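The difference between the two orderings can be checked with Python’s standard struct module, which packs the 16-bit value 0x4F52 explicitly as big endian and as little endian. This is simply a quick demonstration of the byte orders just described.

import struct

value = 0x4F52

big    = struct.pack(">H", value)   # ">" = big endian, "H" = unsigned 16-bit
little = struct.pack("<H", value)   # "<" = little endian

print(big.hex())     # 4f52 -> byte 4F is stored first (at the lowest address)
print(little.hex())  # 524f -> byte 52 is stored first (at the lowest address)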

Two approaches have been adopted by processor chip manufacturers: designs with large numbers of complex instructions, known as Complex Instruction Set Computers (CISC), and designs with a minimal set of high-speed instructions, known as Reduced Instruction Set Computers (RISC). As noted earlier, the processor maintains an internal store variously called the counter register, the sequence control register or the program counter. This is a small element of memory, internal to the processor, which normally holds the address in the main memory of the next instruction that the processor is about to execute. The processor will go through a series of steps to execute an instruction.

3.8. Processor Mechanism


As the PC developed, the simple idea of having just one set of buses (the address bus, the data bus and the control bus) which connected everything to everything was found wanting. The problem is that different parts of the system operate at different speeds and require different bus widths, so the “one size fits all” approach leads to unacceptable data transfer bottlenecks.

In order to try to reduce these bottlenecks, a number of different buses were introduced which were tailored to connect particular parts of the system together. In the early designs, these buses might be called, for example, the processor bus, the I/O (input–output) bus and the memory bus.

In Fig. 3.8 we see a typical case, where the processor bus connects the processor both to the bus controller chipset and to the external cache memory (ignoring for the moment the connection to the local bus).

This processor bus is a high-speed bus, which for the Pentium might have 64 data lines, 32 address lines and various control lines, and would operate at the external clock rate. For a 66 MHz motherboard clock speed, this means that the maximum transfer rate, or bandwidth, of the processor data bus would be 66 × 64 = 4224 Mbit per second. Continuing with our example case, the memory bus is used to transfer information from the processor to the main dynamic random access memory (DRAM) chips of the system.
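The bandwidth figure quoted above is just the clock rate multiplied by the bus width, and the same arithmetic can be applied to other combinations. The sketch below only reproduces that calculation; real sustained transfer rates are lower once protocol overheads are taken into account.

# Peak bus bandwidth = clock rate (MHz) x bus width (bits), in Mbit per second.
def peak_bandwidth_mbit(clock_mhz, bus_width_bits):
    return clock_mhz * bus_width_bits

print(peak_bandwidth_mbit(66, 64))    # 4224 Mbit/s, the Pentium example above
print(peak_bandwidth_mbit(8.33, 16))  # about 133 Mbit/s for the 16 bit ISA bus
print(peak_bandwidth_mbit(33, 32))    # 1056 Mbit/s for a 32 bit PCI bus at 33 MHz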

This bus is often controlled by special memory controller chips in the bus controller chipset because the DRAM operates at a significantly slower speed than the processor.

The main memory data bus will probably be the same size as the processor data bus, and this is what defines a bank of memory. When adding more DRAM to a system, it has to be added, for example, 32 bits at a time if the processor has a 32 bit data bus. For 30 pin, 8 bit SIMMs (see later section on memory), four modules will be required to be added at a time. For 72 pin, 32 bit SIMMs, only one module is required to be added at a time.

Figure 3.8: Bus Routes

In the figure above, the I/O bus is the main bus of the system. It connects the processor, through the chipset, to all the internal I/O devices, such as the primary and secondary IDE (Integrated Drive Electronics) controllers, the floppy disk controller, the serial and parallel ports, the video controller and, possibly, an integrated mouse port. It also connects the processor, through the chipset, to the expansion slots.

Newer chipsets were designed to incorporate what is called bus mastering, a technique whereby a separate bus controller processor takes control of the bus and executes instructions independently of the main processor. I/O bus architectures have evolved since the first PC, albeit rather slowly.

The requirement has always been quite clear. In order to capitalize on the rapid improvements that have taken place in chip and peripheral technologies, there is a need to increase significantly the amount of data that can be transferred at one time and the speed at which it can be done. The reason for the relatively slow rate of change in this area has been the need to maintain backward compatibility with existing systems, particularly with respect to expansion cards.

The original IBM PC bus architecture used an 8 bit data bus which ran at 4.77 MHz and became known as the Industry Standard Architecture (ISA).

With the introduction of the PC AT, the ISA data bus was increased to 16 bits and this ran first at 6 MHz and then at 8 MHz. However, because of the need to support both 8 bit and 16 bit expansion cards, the industry eventually standardized on 8.33 MHz as the maximum transfer rate for both sizes of bus, and developed an expansion slot connector which would accept both kinds of cards.

ISA connector slots on motherboards are rarely seen today. When the 32 bit processors became available, manufacturers started to look at extensions to the ISA bus which would permit 32 data lines. Rather than extend the ISA bus again, IBM developed a proprietary 32 bit bus to replace ISA called Micro Channel Architecture (MCA).

Because of royalty issues, MCA did not achieve wide industry acceptance and a competing 32 bit data bus architecture was established called Extended Industry Standard Architecture (EISA), which can handle 32 bits of data at 8.33 MHz.

All three of these bus architectures (ISA, MCA and EISA) run at relatively low speed and, as Graphical User Interfaces (GUIs) became prevalent, this speed restriction proved to be an unacceptable bottleneck, particularly for the graphics display.

One early solution to this was to move some of the expansion card slots from the traditional I/O bus and connect them directly to the processor bus. This became known as a local bus, and an example of this is shown in Fig. 3.8. The most popular local bus design was known as the Video Electronics Standards Association (VESA) Local Bus, or just VL-Bus, and this provided much improved performance to both the graphics and the hard disk controllers.

Several weaknesses were seen to be inherent in the VL-Bus design. In 1992 a group led by Intel produced a completely new specification for a replacement bus architecture. This is known as Peripheral Component Interconnect (PCI). Whereas VL-Bus links directly into the very delicate processor bus, PCI inserts a bridge between the processor bus and the PCI local bus. This bridge also contains the memory controller that connects to the main DRAM chips. The PCI bus operates at 33 MHz and at the full data bus width of the processor. New expansion sockets that connect directly to the PCI bus were designed and these, together with expansion sockets for updated versions of this bus, are what are likely to be found on most modern motherboards.

Figure 3.9: North Bridge and South Bridge



3.8.1. Northbridge and Southbridge

The design also incorporates an interface to the traditional I/O bus, whether it be ISA, EISA or MCA, and so backward compatibility is maintained. Further development of this approach led to the Northbridge and Southbridge chipset that we find in common use today.

In Fig. 3.9 a typical layout diagram of a motherboard that uses these chipsets is shown. The Northbridge chip connects via a high-speed bus, known as the Front Side Bus (FSB), directly to the processor. We have attempted, in the diagram, to give some idea of the relative performance of the various buses by making the thickness of the connecting lines indicative of their transfer rates.

It may be noted that the memory slots are connected to the Northbridge chip, as is the Accelerated Graphics Port (AGP). More recently, we find high performance PCI Express slots connected to both the Northbridge and Southbridge chips. This is a very fast serial bus consisting of between 1 and 32 lanes, with each lane having a transfer capability of up to 2.5 gigabits per second.

The Northbridge chip is connected to the Southbridge chip, which in turn connects to a wide variety of devices, such as the PCI expansion slots, the Serial ATA (SATA) disk interface, the Parallel ATA (PATA) disk interface, the sound system, Ethernet, the ISA bus (if one exists) and so forth. In addition, the slower speed devices, such as the parallel port (for printers), the serial communication ports, the PS2 mouse port, the floppy disks and the keyboard, are often connected to the Southbridge chip via a Super IO chip, as shown in Fig. 3.9.

Intel then introduced the Intel Hub Architecture (IHA) where, effectively, the Northbridge chip is replaced by the Memory Controller Hub (MCH) and the Southbridge chip is replaced by the I/O Controller Hub (ICH). There is also a 64 bit PCI Controller Hub (P64H). The Intel Hub Architecture is said to be much faster than the Northbridge/Southbridge design because the latter connected all the low-speed ports to the PCI bus, whereas the Intel architecture separates them out.

Two other technologies which are in widespread use are FireWire and the Universal Serial Bus (USB). FireWire is a serial bus technology with very high transfer rates which has been designed largely for audio and video multimedia devices. Most modern camcorders include this interface, which is sometimes known as i.Link.

The official specifications for FireWire are IEEE 1394-1995, IEEE 1394a-2000 and IEEE 1394b (Apple Computer Inc., 2006), and it supports up to 63 devices daisy chained to a single adapter card.

The second technology is that of the Universal Serial Bus (USB) (USB, 2000), which is also a high-speed serial bus that allows for up to a theoretical maximum of 127 peripheral devices to be daisy chained from a single adapter card. The current version, USB 2.0, is up to 40 times faster than the earlier version, USB 1.1. A good technical explanation of USB can be found in Peacock (2005). With modern Microsoft Windows systems, “hot swapping” of hard disk drives can be achieved using either FireWire or USB connections. This is of significance to the forensic analyst in that it enables the possible collection of evidence from a system that is kept running for a short while when first seized. This might be required when, for example, an encrypted container is found open on a computer that is switched on.

Data breaches, lost drives and laptops, and stolen identities regularly hit the news. Data leakage is a serious threat which organizations cannot afford. In order to protect critical organizational information from being stolen by employees or contractors, encrypted USB devices such as IronKey security solutions are used to protect data and digital identities.

3.9. Motherboard
In Fig. 3.10 is shown a typical modern motherboard, an Asus A8N32-SLI (Asus, 2005). On the left-hand side of the diagram we can see clearly the three PCI expansion slots. This modern board, as expected, has no ISA or VESA slots, but it does have three of the relatively new PCI Express slots.

Two of these slots are PCI Express × 16 with what is known as Scalable Link Interface (SLI) support, and this provides the motherboard with the capability for fitting two identical graphics cards in order to improve overall graphics performance. These two slots are of a darker colour than the PCI slots and slightly offset from them.

o One is located between the first and second PCI slot and

o the other, which is marked “PCI Express” in the diagram, is to the right of the third PCI slot.

o The third PCI Express slot is a × 4 slot, which is much smaller and is located just to the right of this second PCI Express slot.

The ZIF Socket 939 for the AMD processor can be seen in the figure. The two IDE sockets for the ribbon cables to the Primary and Secondary parallel ATA hard disks are at the bottom of the diagram, close to the ATX power socket and the floppy disk controller socket.

This motherboard also has four Serial ATA sockets to the left of the Primary IDE parallel socket, and at the top of the diagram a Serial ATA RAID socket can be seen in addition.

Figure 3.10(a): Asus A8N32-SLI motherboard

At the bottom left of the diagram can be seen an 8 Mbyte flash EPROM, which contains the BIOS, and the motherboard is controlled by Northbridge and Southbridge chips which, as can be seen, are connected together by a copper heat pipe. This is said to provide an innovative fanless design for a much quieter motherboard.

This motherboard is also fitted with a Super I/O chip, as we discussed above. Along the left-hand side of the diagram we note the COM1 port socket, USB and FireWire (IEEE 1394) sockets, and the CR2032 lithium cell battery which provides power for the real-time clock and the CMOS memory.

Along the top we note gigabit Local Area Network (LAN) sockets, more USB sockets, the audio sockets, the parallel port and the PS2 mouse and keyboard sockets.

The main random access memory is fitted into DIMM (Dual In-line Memory Module) slots, of which four 184 pin Double Data Rate (DDR) slots can be seen in the diagram, although two are darker in colour and are not quite so evident.

This motherboard supports a maximum of 4 Gbyte of memory and, as for most motherboards, there are various rules about what mix of memory modules are permitted in the four memory slots.

Figure 3.10(b): Asus A8N32-SLI motherboard

3.10. The Design of the PC


The original IBM PC architecture, dating from 1981, was based on the Intel 8088 processor chip. This architecture became known as the PC/XT, with XT referring to Extra Technology.

The Intel 8088 is a later version of the Intel 8086, a processor chip that was first produced
in 1976.

Microcomputer systems of this time were all 8 bit, and the 8086, which was one of the first chips to have an external data bus of 16 bits, did not immediately gain widespread support, mainly because both the chip and the 16 bit motherboard designed to support it were, at the time, very expensive.

In 1978, Intel introduced the 8088, which is almost identical (Intel, 1979) to the 8086, but has an 8 bit external data bus rather than the 16 bits of the 8086. Both these processors have a 16 bit internal data bus and fourteen 16 bit registers. They are packaged as 40 pin DIL chips and have an address bus size of 20 bits, enabling them to address up to 2^20 bytes; that is, up to 1,048,576 bytes or 1 Mbyte.

With the XT architecture designed round the 8088 chip, it was able to use the then industry standard 8 bit chip sets and printed circuit boards that were in common use and relatively cheap. Bus connections in the original XT architecture were very simple.

Everything was connected to everything else using the same data bus width of 8 bits and the same data bus speed of 4.77 MHz. This was the beginning of the 8 bit ISA bus that we discussed above.

The layout of the PC memory map is shown in Figure 3.11, and part of the basic design of the PC is a consequence of the characteristics of these Intel 8088 and 8086 processors. The memory map is, of course, limited to 1 Mbyte, which is the address space of this processor family (20 bits).

Figure 3.11: The PC-XT system memory map



The first 1024 bytes of this address space are reserved by the processor for its interrupt vectors, each of which is a four-byte pointer to an interrupt handling routine located elsewhere in the address space. To ensure a flexible and upgradeable system, the interrupt vectors are held in RAM so that they can be modified.

In addition, when the processor is first switched on, and before any volatile memory has yet been loaded with programs, it expects to start executing code from an address that is 16 bytes from the top of the address space. This indicates that this area will have to be ROM.

The memory map that results is thus not surprising. The entire address space of 1 Mbyte cannot all be allocated to RAM. The compromise made was to arrange for the lower 640 kbyte to be available as the main RAM and the upper part of the address space to be taken up with the ROM BIOS, with the video RAM, and to give room for future expansion with BIOS extensions. The reason for the 640 kbyte figure is said to be that the original designers looked at the then current microprocessor systems, with their address buses of 16 bits and their consequent user address spaces of 64 kbyte of RAM, and felt that ten times this amount was a significant improvement for the new PC.

In practice, of course, the transient program area in which the user’s application programs run does not get the whole of the 640 kbyte. Some is taken up by the interrupt vectors and by the BIOS data, and some by the disk operating system (DOS).

The basic philosophy behind the design is very sound. The ROM BIOS, produced for the manufacturer of the motherboard, provides the programs for dealing in detail with all the vagaries of the different kinds and variations of the specific hardware related to that motherboard.

The operating system and the application programs can interact with the standard interface of the BIOS and, provided that this standard is kept constant, both the operating system and the application programs are transportable to any other PC that observes this same standard. The standard BIOS interface utilizes yet another feature of this processor family, that of the software interrupt. This works in a very similar manner to the hardware interrupt.

On detection of a particular interrupt number, the processor saves the current state of the system, causes the interrupt vector associated with that number to be loaded and then transfers control to the address to which the vector points.

In the case of a hardware interrupt, this will be to the start location of where code to deal with some intervention request from the hardware resides. In the case of a software interrupt, which calls on the BIOS, this will have been issued as an INT instruction code by some calling program, and will cause an appropriate part of the BIOS ROM code to be executed. In both cases, when the interrupt is complete, the original state of the system, saved at the time of the interrupt, will be restored. One of the major benefits of this approach is the ability to change the interrupt vectors, because they are held in RAM.

Let us consider, for example, that we are using the original BIOS to control our graphics display and that this therefore contains a set of programs which control the actual display controller chip which is on our motherboard. When one of the applications uses the display, it will issue a standard BIOS software interrupt and the associated interrupt vector will have been set up to transfer control to where these original BIOS graphics programs reside.

Now consider the case where a super, high-performance, modern graphics controller expansion card is purchased and fitted into one of the expansion slots on our PC. On the graphics expansion card will be new BIOS programs for dealing with the high performance graphics controller that is fitted to this card. What is arranged for us by the system, during the bootstrap sequence, is that the graphics controller interrupt vector is changed from pointing to the original BIOS addresses to now pointing to the appropriate addresses in the BIOS extensions area of the memory space where our new graphics card BIOS has been installed.

Hardware interrupts are transmitted along Interrupt Request channels (IRQs), which are used by various hardware devices to signal to the processor that a request needs to be dealt with. Such a request may arise, for example, because input data is now available from the hardware and needs processing, or because output data has now been dealt with by the hardware and it is ready for the next tranche.

There are a limited number of IRQs available and each has its own specific address in the interrupt vector table which points to the appropriate software driver to handle the hardware that is assigned to that IRQ. Many IRQs are pre-assigned by the system to internal devices and allocation of IRQs to expansion cards has to be carried out with great care, since the system is not able to distinguish between two hardware devices which have been set to use the same IRQ channel.

Often, an expansion card will have DIP (Dual Inline Package) switches which enable one of a number of different IRQ channels to be selected for a given configuration in an attempt to avoid IRQ conflicts. Autonomous data transfer, which is the sending of data between a hardware device and the main memory without involving the main processor, is provided by Direct Memory Access (DMA) channels, and these too are a limited resource.

Again, some of the channels are pre-assigned by the system and others are available for use by expansion cards and may also be set by DIP switches on the card. Conflicts can arise if two different hardware devices are trying to use the same DMA channel at the same time, though it is possible for different hardware devices to share channels providing that they are not using them at the same time. The third system resource is the I/O port address.

The Intel 8088 processor, in addition to being able to address 1 Mbyte of main memory, can also address, quite separately, up to 65,535 I/O ports. Many hardware device functions are associated with an I/O port address. For example, the issuing by the processor of an IN instruction to a particular port address may obtain from the hardware associated with that address the current contents of its status register. Similarly, the issuing by the processor of an OUT instruction to a port address may transfer a byte of data to the hardware. This type of activity is known as Programmed I/O (PIO) or Processor I/O, as opposed to Memory Mapped I/O (MMIO), where the 65,535 port addresses are each assigned space in the overall main memory map.

Using MMIO, any memory access instruction that is permitted by the processor can be used to access a port address. Normally a particular hardware device will be allocated a range of port addresses.

The final system resource, and perhaps the one in greatest demand, is that of main memory address space itself. MMIO is rarely used in the PC because it unnecessarily takes up valuable main memory address space in the upper part of the memory map, space that is required for the use of any BIOS extensions in particular.

When a new expansion card is fitted, therefore, consideration has to be given to what of these limited system resources it is going to require. It may have to be allocated an IRQ, a DMA channel, a set of port addresses and, possibly, some address space in the upper part of the memory map for a BIOS extension.

The concept of Plug and Play (PnP) was introduced with Microsoft Windows 95 to try to automate this process of assigning these limited system resources. The system BIOS, the operating system and the PnP-compatible hardware devices have to collaborate in order to identify the card, assign and configure the resources, and find and load a suitable driver.

Components of PC

A modern PC is both simple and complicated. It is simple in the sense that over the years, many of the components used to construct a system have become integrated with other components into fewer and fewer actual parts. It is complicated in the sense that each part in a modern system performs many more functions than did the same types of parts in older systems.

The components and peripherals necessary to assemble a basic modern PC system:

· Motherboard

· Processor

· Memory (RAM)

· Case/chassis

· Power supply

· Floppy drive

· Hard disk

· CD-ROM, CD-RW, or DVD-ROM drive

· Keyboard

· Mouse

· Video card

· Monitor (display)

· Sound card

· Speakers

· Modem

The block diagram of a computer is shown in figure 3.12.

· Computer hardware - the physical, tangible parts of a computer; for example, input devices, output devices, the central processing unit and storage devices.

· Computer software - also known as programs or applications. Software is classified into two classes, namely system software and application software.

· Liveware - is the computer user.

Figure 3.12: Block Diagram of a Computer

Summary
· The part of the computer that carries out the function of executing instructions is
called the processor

· Computer memory is any physical device capable of storing information temporarily


or permanently.

· The four stages of the processor cycle are fetch, interpret, update and execute.

· A simplistic view of the PC considers the major elements to be interconnected by means of three main buses: the address bus, the data bus and the control bus.

· Within the memory box binary patterns have been indicated as both objects and
rules. The rules are ordered sequences of instructions that are to be interpreted by
the processor and which will cause it to carry out a series of specific actions. Such
sequences of rules are called programs and the idea that the computer holds in its
memory instructions to itself is sometimes referred to as the stored program concept.

· Typical examples might include: add a byte, subtract a byte, multiply a byte, divide
a byte, input a byte, output a byte, move a byte, compare a byte and so forth.

· Big Endian and Little Endian are the terms that describe the order in which a sequence
of bytes is stored in computer memory.

· This processor bus is a high-speed bus, which for the Pentium might have 64 data lines, 32 address lines and various control lines, and would operate at the external clock rate.

· The motherboard is controlled by Northbridge and Southbridge chips which, as can be seen, are connected together by a copper heat pipe.

Check your answers


· What is RAM?

· What are the components of PC?

· What is Northbridge and Southbridge?

· Big Endian Vs. Little Endian

· Describe Black Box Model.

· Short notes on form of Instruction

· What is Plug and Play?

Reference
1. https://searchwindowsserver.techtarget.com/definition/NTFS

2. https://www.techopedia.com/definition/1369/file-allocation-table-fat

3. https://opensource.com/life/16/10/introduction-linux-filesystems

4. Tony Sammes and Brian Jenkinson. Forensic Computing. 2nd Edition. Springer-Verlag, 2007. ISBN-13: 978-1-84628-397-0

UNIT – 4
ENTERPRISE INFRASTRUCTURE INTEGRATION
Learning Objectives

After reading this lesson you will be able to understand

· Overview of Enterprise Infrastructure Integration

· Requirement to understand the Enterprise Infrastructure

· Enterprise Infrastructure Architecture and its components

Structure
4.1 Overview of Enterprise Infrastructure Integration

4.2 Requirement to understand Enterprise Infrastructure

4.3 Enterprise Infrastructure Architecture and its components

4.4 Basic elements of an IT Infrastructure

4.1. Overview of Enterprise Infrastructure Integration


Information technology is built upon both physical and virtual components. These components support the infrastructure’s operations, storage, data processing and data analysis. Infrastructure can be centralized or decentralized.

4.1.1. Infrastructure components:

IT infrastructure includes client machines and server machines, as well as modern mainframes. Blade servers are ultrathin servers, intended for a single dedicated application, that are mounted in space-saving racks.

4.1.2. Operating System Platforms:

These include platforms for client computers, dominated by Windows operating systems, and for servers, dominated by the various forms of UNIX or Linux. Operating systems are software that manage the resources and activities of the computer and act as an interface for the user.

4.1.3. Enterprise and other software

Enterprise software applications include SAP, Oracle and PeopleSoft; middleware software is used to link a firm's existing application systems.

4.1.4. Data Management and storage

Data management and storage is handled by database management software, while storage devices include traditional methods, such as disk arrays and tape libraries, and newer network-based technologies such as storage area networks (SANs). SANs connect multiple storage devices on dedicated high-speed networks.

4.1.5. Networking and telecommunications platforms

Networking and telecommunication platforms include Windows server operating systems, Novell, Linux and UNIX. Nearly all LANs and many wide area networks (WANs) use the TCP/IP standards for networking.

4.1.6. Internet Platforms

Internet-related infrastructure includes the hardware, software and services needed to maintain corporate websites, intranets and extranets, including web hosting services and web software application development tools. A web hosting service maintains a large web server, or series of servers, and provides fee-paying subscribers with space to maintain their websites.

4.1.7. Consulting and system integration services

Consulting and system integration services are relied on for integrating technology and infrastructure, providing expertise in implementing new infrastructure along with the relevant changes in business processes, training and software integration.

Figure 4.1: IT infrastructure ecosystems

Figure 4.1 represents the seven major IT infrastructure ecosystems.

The independent components in each of these seven major infrastructures are portrayed in Figure 4.2.

Figure 4.2: I.T. Infrastructure Ecosystem

Infrastructure components can be spread across multiple data centres. These decentralized data centres can be controlled by the organization or by a third party; the organization may be the owner, and the third party may be a cloud provider or a colocation facility.

4.2. Requirement to understand Enterprise Infrastructure


Today, service providers and enterprises interested in implementing clouds face the
challenge of integrating complex software and hardware components from multiple vendors.
The resulting system can end up being expensive to build and hard to operate, minimizing the
original motives and benefits of moving to cloud computing. Cloud computing platforms are
attractive because they let businesses quickly access hosted private and public resources on-
demand without the complexities and time associated with the purchase, installation, configuration
and deployment of traditional physical infrastructure.

While 2010 was the year for talking about the cloud, 2011 will be the year for implementation. It is for this reason that it is important for service providers and enterprises to gain a better understanding of exactly what is needed to build their cloud infrastructure. For both enterprises and service providers, the successful creation and deployment of cloud services will become the foundation for their IT operations for years to come, making it essential to get it right from the start.

For the architect tasked with building out a cloud infrastructure, there are seven key requirements that need to be addressed when developing the cloud strategy. These requirements include:

4.2.1. Heterogeneous Systems Support

Not only should cloud management solutions leverage the latest hardware, virtualization
and software solutions, but they should also support a data centre’s existing infrastructure.
While many of the early movers based their solutions on commodity and open source solutions
like general x86 systems running open source Xen and distributions like CentOS, larger service
providers and enterprises have requirements around both commodity and proprietary systems
when building out their clouds. Additionally, cloud management providers must integrate with
traditional IT systems in order to truly meet the requirements of the data center. Companies that
don’t support technologies from the likes of Cisco, Red Hat, NetApp, EMC, VMware and Microsoft
will fall short in delivering a true cloud product that fits the needs of the data center.

4.2.2. Service Management

To productize the functionality of cloud computing, it is important that administrators have a simple tool for defining and metering service offerings. A service offering is a quantified set of services and applications that end users can consume through the provider — whether the cloud is private or public. Service offerings should include resource guarantees, metering rules, resource management and billing cycles. The service management functionality should tie into the broader offering repository such that defined services can be quickly and easily deployed and managed by the end user.

4.2.3. Dynamic Workload and Resource Management

In order for a cloud to be truly on-demand and elastic while consistently able to meet
consumer service level agreements (SLAs), the cloud must be workload- and resource- aware.
Cloud computing raises the level of abstraction to make all components of the data center
virtualized, not just compute and memory. Once abstracted and deployed, it is critical that
management solutions have the ability to create policies around workload and data management
to ensure that maximum efficiency and performance is delivered to the system running in the
cloud. This becomes even more critical as systems hit peak demand. The system must be able
to dynamically prioritize systems and resources on-the-fly based on business priorities of the
various workloads to ensure that SLAs are met.

4.2.4. Reliability, Availability and Security

While the model and infrastructure for how IT services are delivered and consumed may
have changed with cloud computing, it is still critical for these new solutions to support the
same elements that have always been important for end users. Whether the cloud serves as a
test bed for developers prototyping new services and applications or it is running the latest
version of a popular social gaming application, users expect it to be functioning every minute of
every day. To be fully reliable and available, the cloud needs to be able to continue to operate
while data remains intact in the virtual data center regardless of whether a failure occurs in one or more
components. Additionally, since most cloud architectures deal with shared resource pools across
multiple groups both internal and external, security and multi-tenancy must be integrated into
every aspect of an operational architecture and process. Services need to be able to provide
access to only authorized users and in this shared resource pool model the users need to be
able to trust that their data and applications are secure.

4.2.5. Integration with Data Center Management Tools

Many components of traditional data center management still require some level of integration with new cloud management solutions even though the cloud is a new way of consuming IT. Within most data centres, a variety of tools are used for provisioning, customer care, billing, systems management, directory, security and much more. Cloud computing management solutions do not replace these tools, so it is important that there are open application programming interfaces (APIs) that integrate into existing operation, administration, maintenance and provisioning (OAM&P) systems out of the box. These include both current virtualization tools from VMware and Citrix, and the larger data center management tools from companies like IBM and HP.

4.2.6. Visibility and Reporting

The need to manage cloud services from a performance, service level, and reporting
perspective becomes paramount to the success of the deployment of the service. Without
strong visibility and reporting mechanisms the management of customer service levels, system
performance, compliance and billing becomes increasingly difficult. Data center operations
have the requirement of having real-time visibility and reporting capabilities within the cloud
environment to ensure compliance, security, billing and chargebacks as well as other instruments,
which require high levels of granular visibility and reporting.

4.2.7. Administrator, Developer and End User Interfaces

One of the primary attributes and successes of existing cloud-based services on the
market comes from the fact that self-service portals and deployment models shield the complexity
of the cloud service from the end user. This helps by driving adoption and by decreasing operating
costs as the majority of the management is offloaded to the end user. Within the self-service
portal, the consumer of the service should be able to manage their own virtual data center,
create and launch templates, manage their virtual storage, compute and network resources
and access image libraries to get their services up and running quickly. Similarly, administrator
interfaces must provide a single pane view into all of the physical resources, virtual machine
instances, templates, service offerings, and multiple cloud users. On top of core interfaces, all
of these features need to be accessible to developers and third parties through common
APIs.

Cloud computing is a paradigm shift in how data centres and service providers are
architecting and delivering highly reliable, highly scalable services to their users in a manner
that is significantly more agile and cost effective than previous models. This new model offers
early adopters the ability to quickly realize the benefits of improved business agility, faster time
to market and an overall reduction in capital expenditures. However, enterprises and service
providers need to understand what elements their cloud must contain in order to build a truly
successful cloud.

4.3. Enterprise Infrastructure Architecture and its components


4.3.1. Computer Hardware platforms
4.3.1.1. Data Infrastructure

Data infrastructure supports the data center hardware with power, cooling and building elements. The hardware itself includes:

· Servers

· Storage

· Networking devices – switches, routers, cabling and

· Networking appliances e.g. network firewalls

Organizations must ensure that data is secure and protected from unauthorised personnel stealing information using malicious software and causing damage to the organization. Hence, it becomes inevitable for data centers to have physical security inside the premises of the data centre as part of IT infrastructure security. These measures include:

o Electronic key entry

o Video and human surveillance and

o Controlled access to servers and storage

Infrastructure is no longer defined only by physical systems such as servers, storage arrays, switches, and routers. Today it is the foundation for the development and delivery of end
user-centric IT services. It is the enabling technology for mining data-driven, business-relevant
insights. It is the raw material for creating and implementing new business models. As such,
infrastructure must empower the CIO to act as both Chief Information Officer and Chief Innovation
Officer with equal proficiency. It must strike the delicate balance between “keeping the lights
on” and exploring emerging innovations. Figure 4.3 represents the balance between Traditional
IT and Digital Enterprise.

Figure 4.3: The Right Balance between Traditional IT and the Digital Enterprise

(Source: Integrated infrastructure for the digital enterprise, Capgemini)

4.3.1.2. Server Infrastructure

Growing businesses need a server solution that supports changing demands. It is important
for an organization to develop a server strategy that will help to achieve optimum performance,
availability, efficiency and business value from the investment.

· Servers & Desktops

· Virtualization & storage

· Disaster recovery & planning

· Exchange planning & migrations

· Office 365

· Antivirus

· Email encryption

· Data security and backup

4.3.1.3. Servers & Desktops

A server is a device with a set of programs that provides services requested by clients. The word server refers to the specialized computer or hardware on which the server software runs and which serves other computers or clients. Servers have many functions and come in different types to facilitate different uses. Together, a server and its clients form a client-server architecture that provides routing and centralized access to information, resources and stored data. At the most basic level, a server can be considered a technology solution that serves files, data, print and fax resources to multiple computers. Advanced server versions, such as Windows Small Business Server, enable the user to handle accounts and passwords, allow or limit access to shared resources, automatically back up data and access business information remotely. For example, a file server is a machine that allows clients or users to upload or download files from it; similarly, a web server hosts websites and allows users to access them. Clients mainly include computers, printers, faxes and other devices that can be connected to the server. By using a server, one can securely share files and resources such as fax machines and printers, and with a server network employees can access the internet or company email simultaneously.

4.3.1.4. Types of Servers

A server platform is the fundamental hardware, software or system which acts as the engine that drives the server; the term is often used synonymously with operating system. The common types of servers include:

· Application server – also known as a type of middleware, it occupies a substantial amount of computing territory between database servers and the end users and is commonly used to connect the two.

· Audio/Video server – provides multimedia capabilities to websites by helping users broadcast streaming multimedia content.

· Chat server – serves users who exchange data in an environment similar to an internet newsgroup, providing real-time discussion capabilities.

· Fax server – one of the best options for organizations that want to minimize incoming and outgoing telephone resources but still need to handle actual documents.

· FTP server – works on one of the oldest internet services. The FTP protocol provides file transfer between computers while ensuring file security and transfer control.

· Groupware server – a software design that enables users to work together, irrespective of location, through the internet or an intranet and to function together in a virtual atmosphere.

· IRC server – an ideal option for those looking for real-time discussion capabilities. Internet Relay Chat comprises different network servers which enable users to connect to each other through an IRC network.

· List server – provides a better way of managing mailing lists. The server can host interactive discussion lists or one-way lists that deliver announcements, newsletters or advertising.

· Mail server – transfers and stores mail over corporate networks (through LANs and WANs) and across the internet.

· News server – serves as a distribution and delivery source for the many public newsgroups approachable over the Usenet network.

· Proxy server – operates between a client program and an external server to filter requests, improve performance and share connections.

· Telnet server – enables users to log on to a host computer and execute tasks as if they were working on a remote computer.

· Virtual server – behaves just like a physical computer because it is committed to an individual customer's demands, can be individually booted and maintains the privacy of a separate machine, blurring the distinction between shared and dedicated hosting; virtual hosting servers have become omnipresent in data centers.

· Web server – provides static content to a web browser by loading a file from the disk and transferring it across the network to the user's browser; this exchange is mediated by the browser and the server communicating using HTTP. Other types of servers include the open source Gopher server, which serves plain documents similar to the WWW but without hypertext, and the name server, which applies the name service protocol.

The various servers can be categorized according to their applications. Along with managing network resources, servers are also dedicated, that is, the platform performs no tasks other than its server tasks.

4.4. Basic elements of an IT infrastructure


Data centers are the physical spaces where servers and network devices are located. A data center is a special space that is conditioned for this purpose, with access controls so that only authorized people are allowed to touch the servers. The servers and network devices emit heat, so they must be cooled by very powerful cooling devices (not traditional air-conditioning systems), and the power supply is not just ordinary power but redundant power supplies and so forth. In short, it is a purpose-built, conditioned room prepared to host servers, because this is where the firm's production applications are going to run. Data centers are usually organized in racks, which can be thought of as vertical cabinets where equipment can be assembled or organized, especially rack-mounted devices. Data centers vary greatly in size: a small or medium enterprise (SME) might have a very small room that functions as a data center, whereas larger firms such as Fortune 500 companies or technology giants such as Google, Facebook and Amazon have data centers spanning several soccer fields.

4.4.1. System Administrator

The system administrator can be seen as the know-it-all person who knows how every single element in the data center is connected; this is a very powerful, sophisticated engineer. However, not all data centers are so well organized. Sometimes a data center is a complete mess where it is difficult to find where things are located and how they are connected, and in these cases the system administrator cannot do much except ensure nothing untoward happens, nothing breaks and systems continue operating. Today, documenting a data centre is quite simple; all that is required is a kind of blueprint. In the early 1990s the network diagram was very complicated, whereas today the network blueprint is rather easy to produce because the layout can be made with PowerPoint or Visio and can clearly show the different devices such as servers, clients, switches and access points, and also the access to the internet. However, these are static diagrams and not very useful to system administrators, since the administrator needs to monitor the performance of every single node connected to the network.

Networks are built from a series of elements that are typically found on any local area network. If a business is using IT, chances are that all of these elements, or at least most of them, are part of its IT infrastructure. Within such a network one can find servers, clients and network devices.

4.4.2. Servers

A server is a computer that serves or supplies the data and applications used by clients. Servers get their names based on the functions they perform.

4.4.3. File Server

A file server's function is to share files with users. Examples include Linux-based Samba servers or the more traditional File Transfer Protocol (FTP) servers. These servers allow one to share files with users, and they have largely been replaced by modern cloud-based systems such as Google Drive or Dropbox.
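
As a small, hedged illustration of how a client talks to such a file server, the Python sketch below lists a directory on an FTP server using the standard library's ftplib; the host name, account and directory are placeholders, not values taken from this text.

from ftplib import FTP

# Placeholder host and account - replace with a real FTP file server.
HOST = "ftp.example.com"
USER = "anonymous"
PASSWORD = "guest@example.com"

ftp = FTP(HOST)              # open the control connection to the file server
ftp.login(USER, PASSWORD)    # authenticate
ftp.cwd("/pub")              # change to a shared directory
ftp.retrlines("LIST")        # print the directory listing line by line
ftp.quit()                   # close the session politely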

4.4.4. Print server

Print servers manage the queue of printing by all users in a firm. In a large firm one does not print directly to a printer but rather sends the job to a print server, and the server distributes the workload across the printers in the firm.

4.4.5. Web Servers

Web browsers connect to web servers; several software packages can perform the function of a web server, and whenever the user requests a webpage the browser connects to the web server, which returns the page for display. The open source Apache server is by far the most popular web server in the world. Nginx has emerged as an alternative to Apache; its characteristic is that it is very slimmed down and performs very few functions in a very efficient manner. A more classic web server is Microsoft Internet Information Services (IIS).
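
To make this concrete, the minimal sketch below uses Python's built-in http.server module to serve static files from the current directory over HTTP, which is essentially the job described above; it is an illustration only, not a production web server.

from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler maps request paths to files on disk and returns
# their contents to the browser over HTTP.
address = ("0.0.0.0", 8000)   # listen on all interfaces, port 8000
server = HTTPServer(address, SimpleHTTPRequestHandler)

print("Serving static files on http://localhost:8000 ...")
server.serve_forever()        # handle one request after another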

4.4.6. Application Servers

Application servers run the enterprise applications, such as an ERP system (for example SAP) or any other application developed locally by the firm.

4.4.7. Mail servers

Mail servers, which were widely used before the emergence of Office 365 or Google's Gmail, such as Microsoft Exchange Server or Zimbra, were used in firms to handle all user mail.

4.4.8. Database Server

Another type of server found in almost any firm is a database server. These servers organize the data used by all the information systems of the firm. Examples of these servers are Microsoft SQL Server, Oracle, or the open source MySQL.
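
As a hedged illustration of how an application talks to a database server, the sketch below uses the mysql-connector-python driver to run a query against a MySQL server; the host, credentials, database and table are invented placeholders, not values from this text.

import mysql.connector  # pip install mysql-connector-python

# Placeholder connection details for an assumed MySQL database server.
conn = mysql.connector.connect(
    host="db.example.com",
    user="app_user",
    password="secret",
    database="inventory",
)

cursor = conn.cursor()
cursor.execute("SELECT item_name, quantity FROM stock WHERE quantity < 10")
for item_name, quantity in cursor.fetchall():   # iterate over the result set
    print(item_name, quantity)

cursor.close()
conn.close()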

4.4.9. Media servers

Media servers are used for video streaming or to share photographs and host galleries.

We also have collaboration servers that enable users to work together in concurrent forms, such as Microsoft SharePoint or IBM Lotus. Another way to name servers is based on their platform, and by platform we mean the hardware and the operating system. Servers can be referred to by their hardware make and model, for example an IBM server. Next is the server's operating system: just as laptops can run an operating system such as Windows or macOS, a server can run an operating system such as Microsoft Windows Server or a distribution of Linux such as Red Hat Linux, Debian, Ubuntu or others. More classic servers may run UNIX as their operating system.

Another way of classifying servers is based on their features and organization. These include mainframes, high availability servers, cluster servers and virtual servers.

4.4.10. Mainframe

A mainframe is very large, multi-functional equipment. These servers are capable of running a huge number of transactions and the workloads of thousands of users. They generally cost millions of dollars and are generally found in Fortune 500 companies or big financial firms that have enough workload to justify the investment in one of these humongous servers. For example, the IBM zEnterprise System delivers availability, security and manageability to address the challenges of today's multi-platform data centers. Its Unified Resource Manager provides the ability to centrally govern and manage IBM System z together with POWER and System x blades as a single integrated system, allowing one to monitor and optimize application performance, availability and security end to end based on business policies, while applications run where they run best. The zEnterprise extends System z qualities to deliver significant operational, business and organizational advantages, and it can reduce energy consumption, floor space and operating costs. With its 5.2 GHz superscalar processors, scalability up to 96 cores, up to 3 terabytes of high-availability main memory and hot-pluggable drawers, the zEnterprise was billed as the world's fastest and most scalable enterprise system. It allows CPU-intensive workloads like Java to run up to 60% faster than previous systems. Integrated workload optimizers and select IBM blades offer additional advantages; for example, the Smart Analytics Optimizer delivers accelerated performance for complex queries. It also includes the Crypto Express3 feature, which uses elliptic curve cryptography for security.

4.4.11. High Availability servers

These are very powerful PCs or servers that have elements that make them highly available. For example, one of the typical components that fails on a server is a hard drive, but a high availability server has multiple hard disk drives; if one of these drives fails, the others continue performing the same function. These drives are often arranged in what is called a RAID array (a redundant array of inexpensive disks), which offers redundancy for the hard drives. Such servers also do not rely on a single power supply (the power supply being the element through which electricity flows into the server) but have redundant ones. Network interfaces or network cards can burn out, hence there are multiple of them so that if one fails the others can take over.
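
The trade-off between redundancy and usable capacity can be worked out with simple arithmetic; the illustrative Python sketch below compares the usable space of a few common RAID levels for a set of identical drives (the drive sizes are arbitrary).

def usable_capacity(level, drives, drive_tb):
    """Return usable terabytes for a few common RAID levels."""
    if level == 0:            # striping only - no redundancy
        return drives * drive_tb
    if level == 1:            # mirroring - every byte stored twice
        return (drives // 2) * drive_tb
    if level == 5:            # one drive's worth of parity
        return (drives - 1) * drive_tb
    if level == 6:            # two drives' worth of parity
        return (drives - 2) * drive_tb
    raise ValueError("unsupported RAID level")

# Four 4 TB drives: RAID 0 = 16 TB, RAID 1 = 8 TB, RAID 5 = 12 TB, RAID 6 = 8 TB
for level in (0, 1, 5, 6):
    print("RAID", level, "->", usable_capacity(level, 4, 4), "TB usable")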

4.4.12. Cluster Servers

Groups of servers that perform the same function in parallel are called cluster servers. In this sense an organization can have multiple web servers or multiple database servers that distribute workloads among themselves and scale together. What makes cluster servers particularly scalable is that, in order to increase capacity, one does not have to make a single server larger but rather adds more servers to the architecture. There are several ways in which cluster servers can be organized. Organizations can have a primary server and secondary (slave) servers that only come online if the primary server goes down; this is sometimes referred to as a cold backup, in the sense that usually only one server is running and only when the primary server goes down does its secondary server come online. One can also have hot redundancy, where two or more servers run performing the same functions, acting as mirrors of each other. Having a cluster increases availability not by making a single server more powerful or more available, but by having redundancy across different servers.

4.4.13. Virtual Servers

In layman's terms, virtual servers are "servers within a server". Take a big physical host — this could be a high availability server or a mainframe — and install a software tool called a hypervisor on it; on top of this hypervisor, multiple virtual machines can be run. In terms of the IT stack, to run a software application the first requirement is hardware: the CPU, memory and storage devices that are part of any server. On this hardware the operating system is installed; for example, one can install Linux on top of the hardware. Within this Linux, a hypervisor is installed. A hypervisor is a software tool that allows one to segment or partition the entire server into multiple portions and then create virtual servers, or smaller-sized servers, within it. One can then have multiple virtual servers running within a hypervisor. The most common hypervisor in corporate environments is offered by VMware, in particular VMware ESXi. The open source alternative is the Xen hypervisor, which has become particularly popular in the cloud context because Citrix offers a more refined version of Xen that is used by cloud infrastructure providers such as Amazon or Rackspace; such providers offer virtual servers to their customers on their public clouds. Other examples of hypervisors are the desktop products used on laptops, in particular on Mac computers, to run Windows within a Mac: VMware Fusion, Oracle VirtualBox and Parallels. These are software packages that can be installed on a laptop running one operating system in order to run another operating system within a virtual machine.
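
The hedged sketch below shows how virtual servers running on a hypervisor can be listed programmatically; it assumes a Linux host running the KVM/QEMU hypervisor with the libvirt-python bindings installed, which is only one of several possible setups and is not something prescribed by this text.

import libvirt  # pip install libvirt-python; requires a libvirt daemon on the host

# Connect to the local QEMU/KVM hypervisor (an assumption for this example).
conn = libvirt.open("qemu:///system")

# Each "domain" is a virtual server managed by the hypervisor.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(dom.name(), "-", state)

conn.close()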

4.4.14. Clients

Clients are the devices that access the servers; they are the hardware used for input and output of information by end users. They are the means through which end users access servers and other clients. Examples are:

· PCs and Laptops

· Dumb Terminals and Networked computers

· Transactional terminals: ATMs, POS

· Mobile phones, smart phones, tablets, IoT devices and so on.

4.4.15. Network Devices

Network devices are the devices that interconnect all the servers and clients. These devices
include access points, switches, routers and modems.

4.4.15.1. Access points

Access points (APs) are devices, typically mounted on walls, that have antennas and are the means by which devices such as laptops or smart phones connect to a local area network. A laptop connects to the AP and the AP in turn connects to the LAN of the firm; thus, through this wireless device, one is able to access all the services on the LAN. Some devices come as a combo that acts as an AP, router and switch.

4.4.15.2. Switches

A switch is essentially a junction point: cables come in and go out. The smallest might be an eight-port switch. Data coming in through any of the ports of the switch can go out through any of the other ports, so it is basically used by every connected device to communicate with every other device connected to the switch. In a firm, such small switches are rarely found; rather, slim switches with 24 or 48 ports are organized in racks. Switches interconnect all the devices within the network. Once communication needs to go out of the network, the traffic has to be routed from the LAN towards other networks, and for this a router is used.

4.4.15.3. Router

A router is a device that interconnects different LANs. For a large corporation or, for example, an ISP, the router might be the size of an entire cabinet or an entire rack.

4.4.15.4. Communication Media

Two basic types of media are used to connect all the devices: wired (cables) and wireless.

4.4.15.4.1. Cables

a) Wired

A network cable is also called an Ethernet cable or UTP cable; UTP stands for unshielded twisted pair, and such a cable consists of twisted pairs of copper wires. The jacks of these cables look very similar to traditional phone jacks, and most laptops have network ports to which one of these cables can be connected, although laptops nowadays often connect only through wireless means. Another type of copper cable used for transmission is coaxial cable. Through coaxial cables one can cover larger distances than with the UTP cables mentioned before (for example, television cables); data can also be transmitted through them, and if ISPs are working with cable modems, they are using coaxial cables. To increase the speed and distance of transmission we have fibre optic cables, which transmit light rather than electric signals over copper wires.

b) Wireless

A Wi-Fi hotspot is nothing more than an access point that allows connecting to a network, in particular a LAN. Through it one can often access the internet; however, one first connects to the LAN and from there a router takes the transmission onto the internet. That said, there are many more wireless communication media than just Wi-Fi, for example cellular networks (3G, 4G and whatever is to come). Communication via microwave antennas is characterized by needing line of sight: a clear straight line from one antenna to another for direct communication. Finally, another form of wireless communication is satellite communication; the signals travel via outer space, but they also qualify as wireless communications. So far a wide array of devices and servers has been discussed.

4.4.16. Types of networks


4.4.16.1. Classification based on their geographic extension

LAN: a local area network occupies the space of a room or a building, with all the elements namely switches, routers, servers, clients and printers. LANs interconnect anything within a room or within a building. Typical speeds of these networks range between 100 megabits/sec and 1 gigabit/sec; they are meant for nearby communication between devices.

4.4.16.2. Backbone

A firm may have multiple buildings, each with its own facility — for example a manufacturing facility, administrative offices, inventory warehouses and so on. The mechanism by which each of these buildings is interconnected is called a backbone network. The scale of a backbone network is less than a few kilometres. The elements that compose it are the local area networks in each building and the devices that interconnect them, including high speed switches and routers and high speed circuits (for example, fibre optic cables). Speeds range from 1 to 40 gigabits/sec.

4.4.16.3. MAN

A firm having multiple branches in multiple locations within a city or a metropolitan area has a metropolitan area network. The scale of a MAN is more than a few kilometres. Circuits between buildings in different locations are likely to be leased from public providers — ISPs or telecom companies that have already laid out their own fibre optic cables. An alternative to fibre optic cables is point-to-point connections through microwave antennas, or sometimes an internet-based tunnel or channel, perhaps a VPN tunnel that relies on the internet. Typical speeds for these kinds of networks range from 64 kilobits/sec to 10 gigabits/sec; 64 kilobits/sec is roughly the lowest rate one can use to have a VoIP conversation, although most links have at least a few megabits of capacity. Extending the concept of a MAN to much larger scales, perhaps even international scales, leads to the wide area network.

4.4.16.4. WAN

Wide area networks are the private networks used to interconnect multiple operations across the globe for a single firm, using leased circuits or satellite-based networks. The speeds of these networks range from 64 kilobits/sec to 10 gigabits/sec.

4.4.16.5. Intranets

Types of networks can also be classified based on who can access them: intranets and extranets. An intranet is only accessible to members of the organization, that is, its regular collaborators. For example, everyone connecting over the local area network to the ERP system or to the central information system is connecting through the intranet. It is quite common that collaborators who work remotely, be it from home, on a business trip or in a hotel, use a VPN to connect to the firm's intranet.

4.4.16.6. Extranets

A network accessible to people or entities external to the organization is called an extranet — for example, clients and suppliers logging into an inventory system. A widely known example of a big extranet is how Walmart offers its suppliers access to its stock levels: the suppliers themselves know when it is necessary to start shipping more goods into Walmart's warehouses, and they know this by connecting to Walmart's extranet. Likewise, end users going to a public e-commerce website such as amazon.com, or to any other firm's online portal, are using part of that firm's enterprise extranet. A public Wi-Fi offered by a retail store for its customers could also be considered an extranet for customers.

Summary
 A service offering is a quantified set of services and applications that end users can
consume through the provider — whether the cloud is private or public. Service
offerings should include resource guarantees, metering rules, resource management
and billing cycles.

 Cloud computing management solutions do not replace these tools, and it is important that there are open application programming interfaces (APIs) that integrate into existing operation, administration, maintenance and provisioning (OAM&P) systems out of the box. These include both current virtualization tools from VMware and Citrix, and the larger data center management tools from companies like IBM and HP.

 Without strong visibility and reporting mechanisms the management of customer service levels, system performance, compliance and billing becomes increasingly difficult.

 Cloud computing is a paradigm shift in how data centres and service providers are
architecting and delivering highly reliable, highly scalable services to their users in
a manner that is significantly more agile and cost effective than previous models.

 A server is a device with a set of programs providing services requested by clients. The word server refers to the specialized computer or hardware on which the server software works and which serves other computers or clients.

 Audio/Video servers provide multimedia capabilities to websites by helping the users to broadcast streaming multimedia content.

 Chat server – serves users who exchange data in an environment similar to an internet newsgroup, providing real-time discussion capabilities.

 Fax servers – it is one of the best options for organizations that seek minimum
incoming and outgoing telephone resources but require actual documents.

 FTP server works on one of the oldest of the internet services. The FTP protocol provides file transfer between computers while ensuring file security and transfer control.

 Groupware server – a software design that enables all the users to work together, irrespective of location, through the internet and function together in a virtual atmosphere.

 IRC server - is an ideal option for those looking for real time discussion capabilities
internet. Internet Relay Chat comprises of different network servers which enable
the users to connect to each other through an IRC network.

 List server provides a better way of managing mailing lists. The server can host interactive discussion lists or one-way lists that deliver announcements, newsletters or advertising.

 Mail server – transfers and stores mail over corporate networks (through LANs and WANs) and across the internet.

 News server serves as a distribution and delivery source for the many public newsgroups approachable over the Usenet network.

 Proxy Servers operate between a client program and an external server to filter
requests, improve performance and share connections.

 Telnet server – enables users to log on to a host computer and execute tasks as if they were working on a remote computer.

 Virtual servers are just like a physical computer because each is committed to an individual customer's demands, can be individually booted and maintains the privacy of a separate computer, blurring the distinction between shared and dedicated hosting.

 Virtual hosting servers have now become omnipresent in data centers. A web server provides static content to a web browser by loading a file from the disk and transferring it across the network to the user.

Check your answers


Write short notes on

 Cloud computing
 Servers and desktops
 Service offering
 Types of servers
 Cloud computing management

References
 Peter Bernus and L. Nemes (ed.). (1995). Modelling and Methodologies for
Enterprise Integration: Proceedings of the IFIP TC5 Working Conference on Models
and Methodologies for Enterprise Integration, Queensland, Australia, November
1995. Chapman & Hall. ISBN 0-412-75630-7.

 Peter Bernus et al. (1996). Architectures for Enterprise Integration. Springer. ISBN 0-
412-73140-1

 Fred A. Cummins (2002). Enterprise Integration: An Architecture for Enterprise Application and Systems Integration. John Wiley & Sons. ISBN 0-471-40010-6

 Charles J. Petrie (1992). Enterprise Integration Modeling: Proceedings of the First International Conference. MIT Press. ISBN 0-262-66080-6

 Kent Sandoe, Gail Corbitt, Raymond Boykin, Aditya Saharia (2001). Enterprise
Integration. Wiley, ISBN 0-471-35993-9.

 https://www.youtube.com/watch?v=EUfteewD3_M

UNIT 5
ENTERPRISE ACTIVE DIRECTORY
INFRASTRUCTURE
Learning Objectives

After reading this lesson you will be able to understand

· Overview of Active Directory (AD)

· Kerberos

· LDAP

· Ticket Granting Ticket (TGT)

· Forest

· Domain

· Organization Unit (OU)

· Site Topology of a Forest

· Trust Relationships

· Object – Creation, Modification, Management and Deletion

o User

o Group

o OU

o Domain

· Group Policy (GPO) Management

o Structure of GPO

o Permissions and Privileges

o GPO Security Settings

§ Password Settings

§ Account Lockout Settings

§ Account Timeout Settings



§ USB Enable/ Disable Settings

§ Screen Saver Settings

§ Audit Logging Settings

§ Windows Update Settings

§ User Restriction Settings

o Application of GPO

§ Linking a GPO

§ Enforcing a GPO

§ GPO Status

§ Inclusion / Exclusion of Users/ Groups in a GPO

o Precedence of GPO

o Loopback Processing of GPO

o Fine-Grain Policy / Fine-Grain Password Policy

· Addition of Windows Workstations to Domain and Group Policy Synchronisation

· Addition of Non-Windows Workstations in AD Environment

· Integrating Finger-Print, Smart Card, RSA or secondary authentication source to Active Directory

· Single-Sign On Integration

· Active Directory Hardening Guidelines

Structure
5.1 Overview of Active Directory

5.2 Kerberos

5.3 LDAP (Lightweight Directory Access Protocol)

5.4 A ticket-granting ticket (TGT)

5.5 Forest

5.6 Domain

5.7 Organisational Unit

5.8 Site topology of forest

5.9 Trust Relationship

5.10 Object – Creation, Modification, Management and Deletion

5.11 Group Policy Management

5.12 Application of GPO

5.13 Precedence of GPO

5.14 Loopback Processing of GPO

5.15 Fine-Grained Password Policy

5.16 Addition of Windows Workstations to domain and Group Policy

5.17 Addition of Non-Windows Workstations in AD Environment

5.18 Integrating Physical Access Control in Active Directory in Windows

5.19 Access Control Technologies

5.20 Integration of Physical Access Security and Logical Access Security using
Microsoft Active Directory

5.21 Integrating Finger-Print, Smart Card, RSA or Secondary authentication to Active Directory

5.22 Single Sign-on Integration

5.23 Active Directory Hardening Guidelines

5.1 Overview of Active Directory


Active Directory (AD) is a Windows OS directory service that facilitates working with
interconnected, complex and different network resources in a unified manner.

Active Directory was initially released with Windows 2000 Server and revised with additional
features in Windows Server 2008. Active Directory provides a common interface for organizing
and maintaining information related to resources connected to a variety of network directories.
The directories may be systems-based (like Windows OS), application-specific or network
resources, like printers. Active Directory serves as a single data store for quick data access to
all users and controls access for users based on the directory’s security policy.

Active Directory provides the following network services:

· Lightweight Directory Access Protocol (LDAP) – An open standard used to access other directory services

· Security service using the principles of Secure Sockets Layer (SSL) and Kerberos-
based authentication

· Hierarchical and internal storage of organizational data in a centralized location for faster access and better network administration

· Data availability in multiple servers with concurrent updates to provide better scalability

Active Directory is internally structured with a hierarchical framework. Each node in the
tree-like structure is referred to as an object and associated with a network resource, such as a
user or service. Like the schema concept in databases, the Active Directory schema is used
to specify attribute and type for a defined Active Directory object, which facilitates searching for
connected network resources based on assigned attributes. For example, if a user needs to
use a printer with color printing capability, the object attribute may be set with a suitable keyword,
so that it is easier to search the entire network and identify the object’s location based on that
keyword.

A domain consists of objects stored within a specific security boundary and interconnected in a tree-like structure. A single domain may have multiple servers, each of which is capable of storing multiple objects. In this case, organizational data is stored in multiple locations, so there may be multiple sites for a single domain. Each site may have multiple domain
controllers for backup and scalability reasons. Multiple domains may be connected to form a
domain tree, which shares a common schema, configuration and global catalog (used for
searching across domains). A forest is formed by a set of multiple and trusted domain trees and
forms the uppermost layer of the Active Directory.

Novell’s directory service, an Active Directory alternative, contains all server data within
the directory itself, unlike Active Directory.

5.2 Kerberos
Kerberos is a network protocol that uses secret-key cryptography to authenticate client-
server applications. Kerberos requests an encrypted ticket via an authenticated server sequence
to use services.

The protocol gets its name from the three-headed dog (Kerberos, or Cerberus) that guarded
the gates of Hades in Greek mythology.

Kerberos was developed by Project Athena - a joint project between the Massachusetts
Institute of Technology (MIT), Digital Equipment Corporation and IBM that ran between 1983
and 1991.

An authentication server (AS) first verifies the requesting user and issues a ticket-granting ticket (TGT) together with a session key derived from the requester's password and a randomized value. The TGT is encrypted so that only the ticket-granting server (TGS), which shares a secret with the authentication server, can read it.

When the requester needs a service, it sends the TGT to the TGS along with an authenticator encrypted under the session key. The TGS decrypts the TGT, verifies the timestamp and returns a service ticket plus a new session key for the target server. The requester forwards the service ticket to the server providing the desired service; the server decrypts the ticket with its own key, verifies the timestamp and session key and, if all checks pass, performs the requested service.

If the keys and timestamps are valid, client-server communication continues. Tickets are time stamped, which allows repeated requests within the allotted time frame without re-entering the password.
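
To make the ticket flow easier to follow, here is a deliberately simplified toy simulation in Python. It is not real Kerberos: the cryptography library's Fernet is used as a stand-in for Kerberos encryption types, realms, nonces, lifetimes and several checks are omitted, and all names and keys are invented for illustration.

from cryptography.fernet import Fernet  # pip install cryptography
import json, time

# Long-term secret keys (normally derived from passwords / stored by the KDC).
user_key = Fernet.generate_key()      # shared between the user and the AS
tgs_key = Fernet.generate_key()       # known only to the AS and the TGS
service_key = Fernet.generate_key()   # known only to the TGS and the service

# --- Authentication Server (AS): issue a TGT and a session key -------------
session_key = Fernet.generate_key()
tgt = Fernet(tgs_key).encrypt(json.dumps(
    {"user": "alice", "session_key": session_key.decode(),
     "timestamp": time.time()}).encode())
reply_for_user = Fernet(user_key).encrypt(session_key)   # only alice can read this

# --- Client: recover the session key and ask the TGS for a service ticket --
client_session_key = Fernet(user_key).decrypt(reply_for_user)
authenticator = Fernet(client_session_key).encrypt(b"alice")

# --- Ticket Granting Server (TGS): validate the TGT, issue a service ticket -
# (Real Kerberos would also return the new session key to the client,
#  encrypted under the old session key; omitted here for brevity.)
tgt_contents = json.loads(Fernet(tgs_key).decrypt(tgt))
assert Fernet(tgt_contents["session_key"].encode()).decrypt(authenticator) == b"alice"
service_session_key = Fernet.generate_key()
service_ticket = Fernet(service_key).encrypt(json.dumps(
    {"user": tgt_contents["user"],
     "session_key": service_session_key.decode(),
     "timestamp": time.time()}).encode())

# --- Service: decrypt the ticket and accept the client ---------------------
ticket = json.loads(Fernet(service_key).decrypt(service_ticket))
print("Service accepts", ticket["user"], "with a fresh session key")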

5.3 LDAP (Lightweight Directory Access Protocol)


In a network, a directory tells one where in the network something is located. LDAP
allows one to search for an individual without knowing where they’re located (although additional
information will help with the search).

An LDAP directory is organized in a simple “tree” hierarchy consisting of the following levels:

· The root directory (the starting place or the source of the tree), which branches out to

· Countries, each of which branches out to

· Organizations, which branch out to

· Organizational units (divisions, departments, and so forth), which branch out to (include an entry for)

· Individuals (which includes people, files, and shared resources such as printers)

An LDAP directory can be distributed among many servers. Each server can have a
replicated version of the total directory that is synchronized periodically. An LDAP server is
called a Directory System Agent (DSA). An LDAP server that receives a request from a user
takes responsibility for the request, passing it to other DSAs as necessary, but ensuring a
single coordinated response for the user.
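
As a hedged illustration of how a client queries an LDAP directory, the sketch below uses the third-party ldap3 library to search one organizational unit for person entries; the server address, bind account and base DN are placeholders, not values taken from this text.

from ldap3 import Server, Connection, ALL, SUBTREE  # pip install ldap3

# Placeholder directory server and credentials.
server = Server("ldap://ldap.example.com", get_info=ALL)
conn = Connection(server,
                  user="cn=reader,dc=example,dc=com",
                  password="secret",
                  auto_bind=True)

# Search one organizational unit of the tree for person entries.
conn.search(search_base="ou=Sales,dc=example,dc=com",
            search_filter="(objectClass=person)",
            search_scope=SUBTREE,
            attributes=["cn", "mail", "telephoneNumber"])

for entry in conn.entries:       # each entry is one node of the directory tree
    print(entry.entry_dn, entry.mail)

conn.unbind()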

5.4. A ticket-granting ticket (TGT)


A ticket-granting ticket (TGT) is a small data set used in Kerberos authentication, which
was developed at MIT for authenticating server traffic.

A ticket-granting ticket is also known as an authentication ticket. In a TGT model, the first
tiny ticket or data set is issued in order to approve the beginning of authentication. An additional
ticket goes to the server with client identity and other information. Like other tickets, the initial
small ticket is also encrypted. In this ticket granting system, Kerberos uses some specific
protocols. The client first sends the ticket-granting ticket as a request for server credentials to
the server. The encrypted reply comes back with a key that is used for authentication purposes.

The client uses the TGT to “self-authenticate” with the ticket-granting server (TGS) for a
secure session.

5.5. Forest
An Active Directory forest is the highest level of organization within Active Directory. Each
forest shares a single database, a single global address list and a security boundary. By default,
a user or administrator in one forest cannot access another forest.

5.6. Domain
A domain, in the context of networking, refers to any group of users, workstations, devices, printers, computers and database servers that share different types of data via network resources. There are also many types of subdomains. A domain has a domain controller that governs all basic domain functions and manages network security. Thus, a domain is used to manage all user functions, including username/password and shared system resource authentication and access. A domain is also used to assign specific resource privileges, such as user accounts. In a simple network domain, many computers and/or workgroups are directly connected. A domain is composed of combined systems, servers and workgroups. Multiple server types may exist in one domain — such as web, database and print — depending on network requirements.

5.7. Organisational Unit


5.7.1. In Active Directory, what is an organizational unit?

An organizational unit (OU) is a subdivision within an Active Directory into which one can
place users, groups, computers, and other organizational units. One can create organizational
units to mirror the organization’s functional or business structure. Each domain can implement
its own organizational unit hierarchy. If the organization contains several domains, one can
create organizational unit structures in each domain that are independent of the structures in
the other domains.

The term “organizational unit” is often shortened to “OU” in casual conversation. “Container”
is also often applied in its place, even in Microsoft’s own documentation. All terms are considered
correct and interchangeable.

At Indiana University, most OUs are organized first around campuses, and then around
departments; sub-OUs are then individual divisions within departments. For example,
the BL container represents the Bloomington campus; the BL-UITS container is a subdivision
that represents the University Information Technology Services (UITS) department on the
Bloomington campus, and there are subcontainers below that. This method of organization is
not an enforced rule at IU; it is merely chosen for convenience, and there are exceptions.

5.8. Site topology of forest


A domain is defined as a logical group of network objects (computers, users, devices) that share the same Active Directory database. A tree is a collection of one or more domains and domain trees in a contiguous namespace, linked in a transitive trust hierarchy. At the top of the structure is the forest.

5.9. Trust Relationship


A trust relationship is a logical relationship established between two domains which allows
authentication. There are two domains in a trust relationship – the trusting and the trusted.

Trusted Domain. A trusted domain is a domain that the local system trusts to authenticate
users. In other words, if a user or application is authenticated by a trusted domain, this
authentication is accepted by all domains that trust the authenticating domain.

Trust relationships are an administration and communication link between two domains.
A trust relationship between two domains enables user accounts and global groups to be used
in a domain other than the domain where the accounts are defined.

5.10. Object – Creation, Modification, Management and Deletion


5.10.1. User

An Active Directory domain is a collection of objects within a Microsoft Active Directory network. An object can be a single user or a group, or it can be a hardware component, such as a computer or printer. Each domain holds a database containing object identity information.

5.10.2. Groups

Groups are used to collect user accounts, computer accounts, and other groups into
manageable units. Working with groups instead of with individual users helps simplify network
maintenance and administration.

There are two types of groups in Active Directory:

5.10.2.1. Distribution groups

Distribution groups are used to create email distribution lists.

5.10.2.2. Security groups

Security groups provide an efficient way to assign access to resources on your network. User rights are assigned to a security group to determine what members of that group can do within the scope of a domain or forest. User rights are automatically assigned to some security groups when Active Directory is installed, to help administrators define a person's administrative role in the domain. Group Policy can be used to assign user rights to security groups in order to delegate specific tasks. Permissions are different from user rights: permissions are assigned to the security group for a shared resource and determine who can access the resource and the level of access, such as Full Control. Some permissions that are set on domain objects are automatically assigned to allow various levels of access to default security groups, such as the Account Operators group or the Domain Admins group.

Security groups are listed in DACLs that define permissions on resources and objects.
When assigning permissions for resources (file shares, printers, and so on), administrators
should assign those permissions to a security group rather than to individual users. The
permissions are assigned once to the group, instead of several times to each individual user.
Each account that is added to a group receives the rights that are assigned to that group in
Active Directory, and the user receives the permissions that are defined for that group.

By using security groups:

· Assign user rights to security groups in Active Directory.

· Assign permissions to security groups for resources.

5.10.3. Organisational Unit

An organizational unit (OU) is a subdivision within an Active Directory into which users, groups, computers, and other organizational units can be placed. Organizational units can be created to mirror an organization's functional or business structure. Each domain can implement its own organizational unit hierarchy. If an organization contains several domains, organizational unit structures can be created in each domain that are independent of the structures in the other domains.

5.10.4. Domain

A server running Active Directory Domain Services (AD DS) is called a domain controller.
It authenticates and authorizes all users and computers in a Windows domain type network—
assigning and enforcing security policies for all computers and installing or updating software.

5.11. Group Policy Management


Group Policy is a feature of the Microsoft Windows NT family of operating systems that
controls the working environment of user accounts and computer accounts. Group Policy provides
centralized management and configuration of operating systems, applications, and users’ settings
in an Active Directory environment. A version of Group Policy called Local Group Policy (“LGPO”
or “LocalGPO”) also allows Group Policy Object management on standalone and non-domain
computers.

Fig 5.1: Local Group Policy Editor

5.11.1. Structure of GPO

Group Policies are applied in the following order; the last one applied can overwrite policies from any level above it (a short sketch of this behaviour follows the list). The default order is:

1. Local system policies (created on the individual machine)

2. Site

3. Domain

4. First organizational unit (OU)

5. Second OU, and so on, down to the OU the computer or user is in.
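
The "last applied wins" behaviour can be sketched in a few lines of illustrative Python; the policy names and values below are invented, but the merge order follows the list above (Local, Site, Domain, then OUs).

def effective_policy(*levels):
    """Merge policy dictionaries in the order given; later levels win."""
    result = {}
    for settings in levels:
        result.update(settings)   # the last value applied overwrites earlier ones
    return result

local  = {"MinimumPasswordLength": 8,  "ScreenSaverTimeout": 900}
site   = {"MinimumPasswordLength": 10}
domain = {"MinimumPasswordLength": 12, "AccountLockoutThreshold": 12}
ou     = {"ScreenSaverTimeout": 600}

# Applied in the default order: Local, Site, Domain, then OU(s).
print(effective_policy(local, site, domain, ou))
# {'MinimumPasswordLength': 12, 'ScreenSaverTimeout': 600, 'AccountLockoutThreshold': 12}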

5.11.2. Permissions and Privileges

The following are the privileges assigned to a group policy object:

Authenticated Users – Read, Apply Group Policy, Special Permissions

Creator Owner – Special Permissions

Domain Administrators – Read, Write, Create All Child Objects, Delete All Child Objects,
Special Permissions

Enterprise Administrators – Read, Write, Create All Child Objects, Delete All Child
Objects, Special Permissions

Enterprise Domain Controllers – Read, Special Permissions

System – Read, Write, Create All Child Objects, Delete All Child Objects, Special
Permissions

5.11.3. GPO Security Settings


5.11.3.1. Password settings

[a] Set the Minimum Password Length to higher limits.

For example, for elevated accounts, passwords should be set to at least 15 characters,
and for regular accounts at least 12 characters. Setting a lower value for minimum password
length creates unnecessary risk. The default setting is “zero” characters, so one will have to
specify a number:

1. In the Group Policy Management Editor window (opened for a custom GPO), go to
"Computer Configuration" -> "Windows Settings" -> "Security Settings" -> "Account Policies"
-> "Password Policy".

2. In the right pane, double-click the "Minimum password length" policy and select the
"Define this policy setting" checkbox.

3. Specify a value for the password length.

4. Click “Apply” and “OK”.

Fig 5.2: Minimum Password Length window

[b] Set Maximum Password Age to lower limits

If the password expiration age is set to a lengthy period of time, users will not have to change
passwords very frequently, which makes it more likely that a stolen password will remain usable.
Shorter password expiration periods are generally preferred.

Windows’ default maximum password age is set to 42 days. The following screenshot
shows the policy setting used for configuring “Maximum Password Age”. Perform the following
steps:

1. In the Group Policy Management Editor window (opened for a custom GPO), go to
"Computer Configuration" -> "Windows Settings" -> "Security Settings" -> "Account Policies"
-> "Password Policy".

2. In the right pane, double-click “Maximum password age” policy.

3. Select “Define this policy setting” checkbox and specify a value.

4. Click “Apply” and “OK”.



Fig 5.3 Maximum Password Age window
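
For reference, the same two settings can also be applied to the domain-wide default password policy from PowerShell with the ActiveDirectory module; this is a minimal sketch, and the domain name is a placeholder.

# Set minimum password length and maximum password age on the default domain policy
Import-Module ActiveDirectory
Set-ADDefaultDomainPasswordPolicy -Identity "example.com" -MinPasswordLength 12 -MaxPasswordAge (New-TimeSpan -Days 42)

# Verify the resulting policy
Get-ADDefaultDomainPasswordPolicy -Identity "example.com"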

5.11.3.2. Account lockout settings

a. Navigate to Computer Configuration\Policies\Windows Settings\Security Settings
\Account Policies\Account Lockout Policy, where the three lockout policy settings are listed.

b. To set the Account Lockout Threshold policy setting, right-click it and
select Properties from the drop-down list.

c. The Account Lockout Threshold properties dialog box opens. For this example, we
set the lockout threshold to 12. Click OK to apply the changes.

d. A message informs you that, because the Account Lockout Threshold policy setting
has been given a value, Windows Server automatically defines and applies a security
setting of 30 minutes to the other policy settings (Account Lockout Duration and Reset
Account Lockout Counter After). Click OK to continue.

e. The Account Lockout Threshold has now been successfully configured. The other
policy settings, Account Lockout Duration and Reset Account Lockout Counter After,
have also been updated.

Fig 5.4: Account lockout policy window
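
The equivalent lockout values can also be set on the default domain password policy with the same ActiveDirectory cmdlet; the threshold of 12 simply mirrors the example above and the domain name is a placeholder.

# Lockout after 12 failed attempts, 30-minute lockout duration and reset window
Set-ADDefaultDomainPasswordPolicy -Identity "example.com" -LockoutThreshold 12 -LockoutDuration "00:30:00" -LockoutObservationWindow "00:30:00"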

5.11.3.3. Account timeout settings

Fig 5.5 Account timeout policy window



a. Navigate to Computer Configuration -> Policies -> Windows Settings -> Security
Settings -> Local Policies -> Security Options -> Interactive logon: Machine inactivity
limit

b. Set a value in seconds.
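
If this has to be pushed from a script rather than the editor, one possible approach is to write the underlying registry value into a GPO with Set-GPRegistryValue; the GPO name is a placeholder, and the assumption that this security option maps to the InactivityTimeoutSecs value should be verified before relying on it.

# Set "Interactive logon: Machine inactivity limit" to 900 seconds (assumed registry mapping)
Set-GPRegistryValue -Name "Workstation Security GPO" -Key "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" -ValueName "InactivityTimeoutSecs" -Type DWord -Value 900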

5.11.3.4. USB enable/disable policy settings


a. Navigate to Administrative Templates > System > Removable Storage Access

b. Enable ‘All Removable Storage classes: Deny all access’.

Fig 5.6 Removable device access settings window

5.11.3.5. Screen Saver settings

a. Navigate to User Configuration -> Administrative Templates -> Control Panel -> Personalization

b. Make necessary changes to screensaver settings.



Fig 5.7 Screensaver policy window

5.11.3.6. Audit Logging Settings


a. Navigate to Computer Configuration\Policies\Windows Settings\Security
Settings\Local Policies\Audit Policy

b. Make necessary changes.

Fig 5.8 Audit Policy window



5.11.3.7. Windows Update Settings


a. Open the Group Policy Management console, and open an existing GPO or create
a new one.

b. Navigate to Computer Configuration, Policies, Administrative Templates, Windows
Components, Windows Update.

c. Double-click Configure Automatic Updates and set to Enabled, then configure the
update settings and click OK.

Fig 5.9 Windows Update Policy window

5.11.3.8. User Restriction Settings

a. Select the Group Policy Object in the Group Policy Management Console (GPMC),
click the "Delegation" tab, and then click the "Advanced" button.

b. Select the "Authenticated Users" security group, scroll down to the "Apply
Group Policy" permission, and un-tick the "Allow" security setting.

c. Now click the "Add" button and select the group (a group is recommended) to which
this policy should apply. Then select the group (e.g. "Accounting Users"), scroll
the permission list down to the "Apply group policy" option, and tick the "Allow"
permission.

Fig 5.10 User Restrictions policy window
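
The same security-filtering change can be scripted with the GroupPolicy module; this sketch assumes a GPO called "Accounting Policy" and a group called "Accounting Users", both placeholders. Leaving Authenticated Users with read access is deliberate, since computers still need to read the GPO.

# Reduce Authenticated Users to read-only, then grant Apply to the chosen group
Set-GPPermission -Name "Accounting Policy" -TargetName "Authenticated Users" -TargetType Group -PermissionLevel GpoRead -Replace
Set-GPPermission -Name "Accounting Policy" -TargetName "Accounting Users" -TargetType Group -PermissionLevel GpoApply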

5.12. Application of GPO


5.12.1. Linking a GPO
· In GPMC, right click the Domain Controllers OU under Domains and select Link an
Existing GPO… from the menu.

· In the Select GPO dialog under Group Policy Objects, select the GPO one wants to
link and click OK.

· Now click the Domain Controllers OU in the left pane.



In the right pane, the new GPO will be listed. GPOs with a higher link order number, i.e.
those that appear higher up the list, take priority over those with lower numbers. GPOs can be
linked to AD sites and domains in the same way that they are linked to OUs. The GPO settings
will be applied to AD objects that fall in scope, i.e. in this example any computer accounts
located in the Domain Controllers OU.
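
The same link can be created from PowerShell; the GPO name and distinguished name below are placeholders.

# Link an existing GPO to the Domain Controllers OU and enable the link
New-GPLink -Name "DC Security Baseline" -Target "OU=Domain Controllers,DC=example,DC=com" -LinkEnabled Yes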

5.12.2. Enforcing a GPO

The GPUPDATE /Force command is useful when working manually with clients and servers
to get GPO settings to apply. However, it is also important for some GPO settings to be forced
during the standard refresh cycle, which is typically every 90 minutes. This is possible by
configuring one or more GPO settings to coincide with the different Group Policy extensions
that are embedded in each GPO.

Access the following path in the GPO, which contains the settings that need to be forced
on each refresh: Computer Configuration | Administrative Templates | System | Group Policy.

Figure 5.11 shows the array of existing “processing” policies.

Figure 5.11. "Processing" policies can force the application of GPO settings.

Within each of these policies there is an option to “Process even if the Group Policy
objects have not changed,” as shown in Figure: 5.12.

Figure 5.12. GPO setting to process GPO settings even if there have been no changes.

With this setting configured, the settings in the GPO will apply on each refresh, even if
there are no changes to the GPO. This ensures that the settings are applied consistently.
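
For a manual refresh, the commands below can be used; the computer name is a placeholder, and Invoke-GPUpdate requires the GroupPolicy module on Windows Server 2012 or later.

# Refresh policy on the local machine immediately
gpupdate /force

# Trigger a remote refresh of computer and user policy on a specific client
Invoke-GPUpdate -Computer "PC01" -Force -RandomDelayInMinutes 0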

5.12.3. GPO Status

Viewing GPO, GPO Link Details and Status

Using ADManager Plus ‘GPO Management’, it becomes quite simple for administrators
to see all the required details and the status of all the required GPOs in an instant.

View all available GPOs in a Domain

The first step in managing GPOs is to know the list of all available GPOs in the domain.
ADManager Plus provides this information, instantly.

Procedure:

In the ‘GPO Management’ section, click on the ‘Group Policy Objects’ container in the
required domain to view the list of all available GPOs in that domain.

Steps:

To view all the GPOs available in a domain

· Click the ‘AD Mgmt’ tab.

· In ‘GPO Management’, click the ‘GPO Management’ link.

· In the ‘Group Policy Management’ pane on the left hand side, click on ‘All Domains’
to expand the link and view all the configured domains.

· Click on the ‘+’ icon beside the required domain. This will list all the containers in the
domain.

· Click on ‘Group Policy Objects’ container to view all the available GPOs in this
particular domain.

· Using the ‘Enable’ and ‘Disable’ options located just above the list of GPOs, one
can enable/disable the required GPOs completely or partially (user/computer
configuration settings).

Note: If one clicks on the ‘domain’ instead of the ‘+’ icon beside the domain name, only the
GPOs that are linked to this domain will be shown, instead of all the available GPOs in the
domain.
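
Outside of ADManager Plus, the built-in GroupPolicy module can produce a similar inventory; the domain name is a placeholder.

# List every GPO in the domain together with its status (which halves are enabled)
Get-GPO -All -Domain "example.com" |
    Select-Object DisplayName, GpoStatus, CreationTime, ModificationTime |
    Sort-Object DisplayName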

View all the GPOs linked to a specific Domain/OU/Site

Administrators can instantly view the list of all the GPOs that are linked to any specific
Domain/OU/Site using this option.

Procedure:

In the ‘GPO Management’ section, click on the required domain to view all the GPOs that are
linked to that domain.

Steps:

To view all the GPOs linked to any specific container,

· Click the ‘AD Mgmt’ tab.

· In ‘GPO Management’ section click on the ‘GPO Management’ link.

· In the ‘Group Policy Management’ pane on the left hand side, click on ‘All Domains’
to expand the link and view all the configured domains.

· Click on the required Domain/OU. This will display all the GPOs that are linked to
that specific container.

· To select a site, click on ‘All Sites’ and then the forest in which the required site is
located. Then, click on the required site to view all the GPOs linked to this site.

· One can manage the links of all GPOs linked to this container through the ‘Manage’
and ‘Enforce’ options located just above the list of linked GPOs.

· To link a GPO to this container,

o Click on the ‘Link GPOs’ option located in the top right corner of this page.

o In the ‘Select GPOs to be linked’ window that opens up, select the domain in which
the required GPO is located.

o This will list all the GPOs in the domain. One can also locate the GPO using the
search option.

o Select the required GPOs and click on ‘Link GPOs’.

o One will see a summary of the action just performed along with the linking status,
for each GPO.

View all the Domains/OUs/Sites that a GPO is linked to

This option enables the administrators to know in detail, all the containers that any specific
GPO is linked to.

Procedure:

In the ‘GPO Management’ section, in the ‘Group Policy Objects’ container, click on the
required GPO to view the list of all the containers to which this GPO is linked, along with the
link status.

Steps:

To view all the Domains/OUs/Sites to which a GPO is linked,

· Click the ‘AD Mgmt’ tab.

· In ‘GPO Management’ section click on the ‘GPO Management’ link.

· In the ‘Group Policy Management’ pane on the left hand side, click on ‘All Domains’
to expand the link and view all the configured domains.

· Click on the domain in which the required GPO is located.

· Click on ‘Group Policy Objects’ container to view all the GPOs available in the
domain. For each GPO, the status of the ‘user configuration settings’ and also the
‘computer configuration settings’ are shown.

· From the list of all available GPOs, click on the required GPO. This will list all the
containers to which this GPO is linked along with the link status, enforce status and
the canonical name of the linked location.

· From this page, one can manage the links of this particular GPO through the ‘Manage’
and ‘Enforce’ options located just above the list of linked containers.

· One can also view the status of this particular GPO, in the ‘GPO Status’ located in
top right corner of this page. Using the change option located beside it, one can
also change the GPO status, as required.

Note:

To view the links from all the sites, in the ‘display links from’ option located just above the
list of linked containers; select ‘All Sites’ from the options.

5.12.4. Inclusion or Exclusion of Users or User Groups in a GPO

Right-click the newly created policy and choose Edit. Since this needs to apply on a
per-computer basis, in the Group Policy Management Editor console expand Computer
Configuration > Preferences > Control Panel Settings and click on Local Users and Groups. As
one can see, there are other items that can be configured here too, such as shortcuts, printers,
and enabling or disabling services on clients, and the Windows Settings folder contains even more.
Feel free to explore and test them, but for now right-click on Local Users and Groups and
choose New > Local Group.

Fig 5.13. Creating local users and group



The Action drop-down box offers multiple choices. To create a new local group on the
clients, go with the Create option; to replace a local group with the one named here, go with
Replace, and so on. For now choose Update, and from the Group name drop-down box select
the local group to change. The local Administrators and local Remote Desktop Users groups
are the most commonly used ones. If the group name is not listed here it can be typed in, but
do not click the ellipsis button to search for it, because that will search the domain rather than
the local machine.

Click the Add button in the Members section and add the domain users and/or groups
(groups are recommended) that should be members of the group selected in the Group
name box. In the Action drop-down box make sure Add to this group is selected and click OK.
Leave the rest of the settings unchanged.

Fig 5.14: Adding members Fig.5.15:Adding Groups



Fig 5.16. Adding Local users and Groups

To see the changes without waiting until the policy is applied automatically (between 90 and
120 minutes), run gpupdate /force on some of the clients to re-read the policies from the domain
controller(s) and apply them, or use the Group Policy Update option if 2012 domain controllers
are available. After the policy is applied, check that it worked. Launch the Local Users and
Groups console (Start > Run > lusrmgr.msc) on a client PC, click the Groups folder, then open
the properties of the group that was updated through Group Policy Preferences. The domain
users and/or groups should now be member(s) of this local group.

Fig 5.17. Update Groups
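
On clients running Windows PowerShell 5.1 or later, the membership can also be checked without opening lusrmgr.msc; this is only a quick verification sketch.

# Verify that the Group Policy Preferences item populated the local group
Get-LocalGroupMember -Group "Administrators" | Select-Object Name, ObjectClass, PrincipalSource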



Fig 5.18. Administration Properties

To be more granular with the policy, it can be set to apply only to specific operating systems,
or to computers that have a specific MAC address. Just click on the Common tab of the Group
Policy Preferences item, check the Item-level targeting check box, then hit the Targeting button.
As one can see, there are quite a few settings from which to choose.

Fig 5.19: Item-level targeting Fig 5.20: Editing



Now go ahead and filter it out.

Fig 5.21. Filtering

In time, if some members need to be removed from the local group(s), do not simply delete
the Group Policy Preferences item(s), because that will not accomplish the goal. The item needs
to be updated instead. From the policy, open the item properties, select the domain user or group
to remove, click the Change button, then in the new window select Remove from this group.
Click OK.

Fig 5.22. Updating and Changing Fig 5.23. Updating Local Group Member

Leave it a few days or weeks, just in case some of the users are traveling and they did not
connect to the company’s network. After the membership was removed from the local group(s)
one can go ahead and delete the member(s) from the Group Policy Preference item, or delete
the item itself if no other members are present or one doesn’t need it anymore.

Fig 5.24. Removing Application Owners

Fig 5.25: Group Policy Management Editor



And that’s it, simple and effective. By using this method one can add domain members to
whatever local groups one wants without typing any bits of code. Also, one can create, modify,
and remove those local groups as needed.

5.13. Precedence of GPO

Fig 5.26. Precedence of GPO

5.13.1. OU-based Group Policy

Depending on who has designed or organized the Active Directory OU structure, one will
typically have a set of containers or folders similar to the layout of a file system. These folders
(OUs) can contain any AD object like Users, Computers, Groups, etc. Even though they contain
these objects, all Group Policy Objects contain built-in filtering. When we create a new GPO,
we will see there are two main configuration options available (built-in filtering). These
are Computer Configuration and User Configuration. We can apply configurations to
both Users and Computers within the same GPO, but we can also specify one or the other as
well.

5.13.2. Domain-based Group Policy

Domain based Group Policy Objects are far more common in organizations, mostly
because setting up a new domain creates a “Default Domain Policy” at the root of that domain.
This policy contains a few default settings like a password policy for the users, but most
organizations change these. Additionally, some organizations modify this default policy and
add their own specifications and settings.

One can definitely add to and edit the Default Domain Policy, but it may be better to create
a new GPO at the root of the domain instead. Whether the existing Default Domain Policy is
modified or a new GPO is created, be aware that certain settings should be applied at the root
domain and not at subsequent locations such as OUs. It is possible to set these settings in
alternate locations, but it is not recommended. These settings can only be set once per domain,
and thus the best practice is to apply them at the root of the domain.

5.13.3. Site-based Group Policy

Now that we understand how Windows applies Local Group Policy settings, we move
toward understanding how an organization that has Active Directory (AD) can apply GPOs. At
the topmost layer, Group Policy Objects can apply to the “site” level. To understand how a site-
based Group Policy could work, we must first generally understand how large organizations
might structure their environment.

In Active Directory, we have a topmost layer called an AD forest. An organization can
have multiple forests. Within each AD forest, we can have multiple domains.

5.13.4. Local Group Policy

On the local system, one can view and edit the Local Group Policy settings by searching
the computer. Using the Start Menu, begin typing (searching) for “Edit Group Policy.” One can
configure settings for the local system or account, but all subsequent Group Policy layers (site,
domain, and OU) that have the same setting configured or enabled can overwrite these settings.

This means one can configure Group Policies locally, but they can be overwritten when the
same settings are configured in site, domain, or OU GPOs that apply to the system or user
account.

5.14. Loopback Processing of GPO


The User Group Policy loopback processing mode option available within the computer
configuration node of a Group Policy Object is a useful tool for ensuring certain user settings
are applied on specified computers.

Essentially, loopback processing changes the standard Group Policy processing in a way
that allows user configuration settings to be applied based on the computer's GPO scope during
logon. This means that user configuration options can be applied to all users who log on to a
specific computer.

5.14.1. When to use Loopback

Common scenarios where this policy is used include public accessible terminals, machines
acting as application kiosks, terminal servers and any other environment where the user settings
should be determined by the computer account instead of the user account.

5.14.2. Where to Enable Loopback

The setting is found within the Computer Configuration node of a GPO:

Computer Configuration > Administrative Templates > System > Group Policy > User
Group Policy loopback processing mode

5.14.3. Replace or Merge

When Enabled one must select which mode loopback processing will operate in; Replace
or Merge.

Replace mode will completely discard the user settings that normally apply to any users
logging on to a machine applying loopback processing and replace them with the user settings
that apply to the computer account instead.

Merge mode will apply the user settings that apply to any users logging on to a machine
applying loopback processing as normal and then will apply the user settings that apply to the
computer account; in the case of a conflict between the two, the computer account user settings
will overwrite the user account user settings.

5.14.4. How Loopback Works?

Loopback processing affects the way in which the GetGPOList function operates, normally
when a user logs on the GetGPOList function collects a list of all in scope GPOs and arranges
them in precedence order for processing.

When loopback processing is enabled in Merge mode, the GetGPOList function also
collects all in-scope GPOs for the computer account and appends them to the list of GPOs
collected for the user account; these then run at higher precedence than the user's GPOs.

When loopback processing is enabled in Replace mode, the GetGPOList function does
not collect the user's in-scope GPOs.

So, without loopback enabled, policy processing looks a little like this:

1. Computer Node policies from all GPOs in scope for the computer account object are
applied during start-up (in the normal Local, Site, Domain, OU order).

2. User Node policies from all GPOs in scope for the user account object are applied
during logon (in the normal Local, Site, Domain, OU order).

And, with loopback processing enabled (in Merge Mode):

1. Computer Node policies from all GPOs in scope for the computer account object are
applied during start-up (in the normal Local, Site, Domain, OU order), the computer flags that
loopback processing (Merge Mode) is enabled.

2. User Node policies from all GPOs in scope for the user account object are applied
during logon (in the normal Local, Site, Domain, OU order).

3. As the computer is running in loopback (Merge Mode), it then applies all User Node
policies from all GPOs in scope for the computer account object during logon (Local, Site,
Domain and OU). If any of these settings conflict with what was applied during step 2, the
computer account setting will take precedence.

And, with loopback processing enabled (in Replace Mode):



1. Computer Node policies from all GPOs in scope for the computer account object are
applied during start-up (in the normal Local, Site, Domain, OU order), the computer flags that
loopback processing (Replace Mode) is enabled.

2. User Node policies from all GPOs in scope for the user account object are not applied
during logon (as the computer is running loopback processing in Replace mode no list of user
GPOs has been collected).

3. As the computer is running in loopback (Replace Mode) it then applies all User Node
policies from all GPOs in scope for the computer account object during logon (Local, Site,
Domain and OU).

To add an exception to this rule, for example when loopback processing in Replace mode
has been used to secure a terminal server but server administrators should not receive the
settings, a security group containing the administrator accounts can be set to Deny for the
Apply group policy option on the Delegation tab of the GPO(s) in the Group Policy Management
Console (GPMC). This must be set on every GPO that contains user settings to be denied and
that is in scope for the computer account.

5.15. Fine-Grained Password Policy


Fine-grained password policies allow administrators to create stricter password and lockout
policies and apply them to specific users and groups. They also make it possible to accommodate
special accounts and users without modifying the global password policy.

With fine-grained password policies, the policy is created in the Active Directory Administrative
Center and users are added to it without touching the default password policy.

Fine-grained password policies have a few limitations:

· They can only be applied to users and global security groups

· They cannot be applied to OUs, domains, sites, etc.

· The domain functional level needs to be Windows Server 2008 or above

· Only the Active Directory Administrative Center and PowerShell can be used to
manage them

To get started, Open ADAC, enable Tree View In the console and go to:

CN=Password Settings Container,CN=System,DC=test,DC=local

Fig 5.27: Screenshot of Active Directory Administrative Center

Fig 5.28: Screenshot of Password setting container

In the Password Settings Container, right-click, click New, and fill in the details of the
new policy.

Lockout options can be set up in the same dialog, so the password policy and a strong
lockout policy can be configured from a single menu.

Once all the settings are configured, add the required users and click Apply.

Fig 5.29: Creating secure password policy

Multiple policies can be created and applied to users and groups (Dynamic and regular)

Fig 5.30. Screenshot of Password settings container
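
The same fine-grained policy can be created and assigned entirely from PowerShell; the policy name, the values and the group below are placeholders chosen for illustration.

# Create a fine-grained password policy and apply it to a global security group
New-ADFineGrainedPasswordPolicy -Name "Admins-PSO" -Precedence 10 `
    -MinPasswordLength 15 -PasswordHistoryCount 24 -ComplexityEnabled $true `
    -LockoutThreshold 3 -LockoutDuration "00:30:00" `
    -LockoutObservationWindow "00:30:00" -MaxPasswordAge (New-TimeSpan -Days 42)

Add-ADFineGrainedPasswordPolicySubject -Identity "Admins-PSO" -Subjects "Domain Admins"

# Confirm which policy actually applies to a given user (placeholder account)
Get-ADUserResultantPasswordPolicy -Identity "someadmin"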



5.16. Addition of Windows Workstations to Domain and Group Policy

5.16.1. Synchronisation

For Active Directory Federation Services (AD FS) to function, each computer that functions
as a federation server must be joined to a domain. Federation server proxies may be joined to a
domain, but this is not a requirement.

To join a computer to a domain

1. On the Start screen, type Control Panel, and then press ENTER.

2. Navigate to System and Security, and then click System.

3. Under Computer name, domain, and workgroup settings, click Change settings.

4. On the Computer Name tab, click Change.

5. Under Member of, click Domain, type the name of the domain that this computer
will join, and then click OK.

6. Click OK, and then restart the computer.

Fig 5.31: Screenshot of the windows to be added



Fig 5.32: Screenshot of Joining windows to a domain

Fig 5.33: Screenshot of adding an account
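
The same join can be performed from an elevated PowerShell prompt; the domain name and credential below are placeholders.

# Join the local computer to a domain and restart it
Add-Computer -DomainName "example.com" -Credential "EXAMPLE\joinadmin" -Restart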



5.16.2. Group Policy Synchronisation

5.16.2.1. Synchronizing GPOs

GPA enables one to match multiple copies of a GPO to a single GPO known as a master
GPO. A master GPO is one that is selected to use as a controlling source for other GPOs. The
GPOs one select to match the master GPO are controlled GPOs. The process of matching
controlled GPOs to a master GPO is called GPO synchronization.

To synchronize GPOs:

1. Log on to the GPA Console computer with an account that has GPO synchronization
permissions.

2. Start the GPA Console in the NetIQ Group Policy Administrator program group.

3. In the left pane, expand GP Repository.

4. Expand the appropriate domain hierarchy to the GPO one want to identify as a
master GPO, and then select the GPO.

5. On the Action menu, click Properties.

6. Click the GPO Sync Options tab.

7. Select the Make this GPO a master GPO check box, and then click OK.

8. Click the Synchronization tab in the GPO result view.

9. To select controlled GPOs for this master GPO, click Add.

10. To select GPOs from the GP Repository, accept the default selection, and then
click OK.

11. To select GPOs from an Enterprise Consistency Check report XML file, select ECC
Wizard XML file, and then browse to the location of the file.

12. To determine whether the controlled GPOs are in sync with the master GPO, select
the controlled GPOs to check, and then click Run Sync Report. GPA generates an
Enterprise Consistency Check report on the master GPO and the selected controlled
GPOs in the GP Repository.

13. To synchronize a controlled GPO with the master GPO, select the controlled GPO
and click Synchronize. This step is not needed if the In Sync column indicates Yes.

5.17. Addition of Non-Windows Workstations in AD Environment

5.17.1. Using Samba for Active Directory Integration

Samba implements the Server Message Block (SMB) protocol in Red Hat Enterprise
Linux. The SMB protocol is used to access resources on a server, such as file shares and
shared printers.

One can use Samba to authenticate Active Directory (AD) domain users to a Domain
Controller (DC). Additionally, one can use Samba to share printers and local directories to other
SMB clients in the network.

5.17.2. Using winbindd to Authenticate Domain Users

Samba’s winbindd service provides an interface for the Name Service Switch (NSS) and
enables domain users to authenticate to AD when logging into the local system.

Using winbindd provides the benefit that one can enhance the configuration to share
directories and printers without installing additional software.

5.17.3. Joining an AD Domain

To join an AD domain and use the Winbind service, use the realm join --client-software=winbind
domain_name command. The realm utility automatically updates the configuration files, such as
those for Samba, Kerberos, and PAM.

5.18. Integrating Physical Access Control in Active Directory in Windows
Physical access control decides who has access to which physical resources, for what
time frame and under what condition. Physical access control systems use a selection of access
control technologies such as smart cards and biometrics with which to authenticate a user and
a number of access control techniques in order to authorize a user.

5.18.1. Authentication

Authentication in a physical access control involves checking the identity of a user against
the logical data stored within the repository. Figure shows examples of existing techniques
which can be used to authenticate users. In addition to the authentication techniques shown, a
Digital Signature can be used to verify data at its origin by producing an identity that can be
verified by all parties involved in the transaction taking place. As one can see from Figure there
are a number of different methods for authentication classified as:

• What you know? - The SoC uses username and password as its standard form of
authentication.

• What you have? - Something that you have on your person.

• A combination of what one has and what one knows - This combination is commonly called
two-factor authentication. Two-factor authentication confirms a user's identity using something
they have (an identity token) and something they know (an authorization code or PIN code).
The problem here is that one still has to remember the PIN in order to use the system. People
may be inclined to write the PIN down somewhere in order to remember it.

• Something unique about the user, or something you are - These are biometrics.

Figure.5.34 : Diagram showing existing user Authentication Techniques



5.19. Access Control Technologies


Access control technologies are used to enable authentication and access management
processes to be carried out in security systems.

5.19.1. Smart Cards

“A smart card resembles a credit card in shape and size, but contains an embedded
microprocessor.” The microprocessor is used to hold specific information in encrypted format.
Smart cards are defined according to the following:

• How the card data is read and written and

• The type of chip embedded within the card, and its capabilities.

Smart cards come in two main types, ‘Contact’ and ’Contactless’. Contactless smart cards
avoid the ‘wear and tear’ of contact smart cards by avoiding the need for physical contact
between the card and card reader.

5.20. Integration of Physical Access Security and Logical Access Security using Microsoft Active Directory
Smart cards can be used to gain two factor authentication, combining ‘something you
know’, e.g. your PIN, and ‘something you have’, for example, the smart card. It is suggested
that smart cards provide protection against a range of security threats, from careless storage of
passwords to sophisticated system attacks, but contactless smart cards rely on RFID technology
which has been shown to be vulnerable to virus attack. In many countries disposable stored
value smart cards have replaced coins and notes for public telephone use and travel on many
public transport systems. Smart cards have gained some acceptance as a form of identification
management, with more sophisticated cards capable of storing biometric information such as a
fingerprint.

Smart cards can be used to advocate ‘single sign-on’ and as a single form of authentication
to buildings and to IT applications. A smart card can be used to gain access to a particular
building or area through doors or gates equipped with smart card readers. “The same smart
card, already employed as a form of ‘mobile’ authentication, can then also be used for ‘logical’
authentication in the IT environment i.e. the user’s smart card can be required as a form of
identification for logon to a computer, office network, VPN or other resource.”

5.20.1. Biometric

“Biometric systems assert the identity of an individual based on characteristic features
or behaviours of that person, for example their facial appearance, hand geometry, fingerprints
and voice patterns.”

Biometrics-based authentication offers several advantages over other authentication
methods: biometric traits hold a higher information content, which gives them more strength
than long passwords without compromising on speed. Even so, hackers are still able to find
weaknesses. Unlike a password or PIN, if a biometric is compromised it cannot be changed,
and a different one must be used.

There are two main phases to biometric based systems; enrolment and recognition. In
the enrolment phase a master template is constructed from a number of biometric scans. When
a user wishes to gain access, their on-the-fly scan is either verified or identified. In verification,
the user must declare their identity (username or smart card) and only one comparison must
take place against the template identified by their username. If the user’s identity is unknown,
the identity of the user must be matched against a database of master templates, and therefore
more processing must take place.

It is suggested that Biometrics can be assessed in terms of their usability and security.
Figure shows a graph of the biometric usability against biometric security.

Figure 5.35: Comparison of Various Biometric Methods



5.20.2. Iris Recognition

Iris recognition is acknowledged as the leading form of biometrics technology because of
its accuracy in recognizing individuals and its robustness when matching individuals against
large biometric databases, but it is argued that finger-vein authentication is the most suitable
for door-access control systems.

Smart cards have a higher usability than finger-vein and iris recognition, although less
secure. Voice and face recognition have a higher usability still, but it is easy for someone to
record a voice or put up a picture of a face to fool the system into authenticating them falsely.
There are a number of physiological and medical factors that can affect the usability and efficiency
of biometrics, e.g. something as common as arthritis may affect usability (it may be difficult to
position the finger and/or hand correctly for recognition). For these reasons smart cards are
preferred over biometrics, but it will not be long before biometrics catch up in terms of usability,
cost and standardization.

5.20.3. RFID

“Radio Frequency IDentification (RFID) is an automatic identification method, relying on
storing and remotely retrieving data using devices called RFID tags or transponders.” A typical
system consists of tags with an embedded, unique identifier for the object and readers designed
to decode the data on the tag; and a host system or server that processes and manages the
information gathered. The RFID tag is made up of an antenna, a small silicon chip that contains
a radio receiver, a radio modulator for sending a response back to the reader, control logic,
some amount of memory, and a power system.

RFID tags come in a number of different types; active, passive and semi-passive. Active
tags are powered by battery, and are able to broadcast to a reader over distances over 100 feet.
Passive tags are not battery powered, but draw their power from a low power radio signal
through the antenna. The main disadvantage with passive tags is that they can only transmit
over shorter distances, but they are
lower in cost than active tags. Lacking their own power, they also have less encryption and are
left open to power consumption attacks and eavesdropping attacks. Semi-passive tags lie
between passive and active tags. Like active tags, they have a battery, but still use the readers’
power to transmit a message back to the RFID. Semi-passive tags thus have the read reliability

of an active tag but the read range of a passive tag. RFIDs also come in a range of frequencies
with different suitable uses:

• Low Frequency (125/134KHz) - Most commonly used for access control and asset
tracking.

• Mid-Frequency (13.56 MHz) - Used where medium data rate and read ranges are
required.

• Ultra High-Frequency (850 MHz to 950 MHz and 2.4 GHz to 2.5 GHz) - offer the
longest read ranges and high reading speeds.

Chips can hold a “kill” or self-destruct feature which stops the chip responding to commands
when a certain command is sent to the chip. The kill feature would be useful in the case of a lost
RFID used for physical access purposes. Contactless smart cards are a combination of smart
cards, wireless protocols and passive powering used in RFID. The number of bits that RFID
tags are able to hold increases over time as RFID technology advances; it is proposed that low-
cost tags will have more bits and thus be able to support increasingly complex RFID viruses.
A vulnerability in a physical access system may allow access to unauthorized users or allow a
hacker to take control of the system. Before implementing an RFID based system, the implications
of an RFID virus would have to be understood further.

• Simson Garfinkel and Henry Holtzman. Understanding RFID Technology. Chapter 2, page 15, 2005.

• Microsoft Identity and Access Management Series. Intranet Access Management paper.

5.21. Integrating Finger-Print, Smart Card, RSA or Secondary Authentication to Active Directory

Register the Agent in RSA Authentication Manager

After installation of Authentication Agent for AD FS, one must register it with Authentication
Manager.

Before starting:

Make sure the Agent Name that was specified when installing the Agent for AD FS is known.

Procedure
1. Sign into the RSA Security Console.

2. Click Access > Authentication Agents > Add New.

3. Enter the required information. Make sure the Agent Type is set to Standard Agent
(default setting).

Authentication Manager uses this setting to determine how to communicate with
Microsoft AD FS.

4. Click Save.

Register or Unregister the Agent with Microsoft AD FS

After installing RSA Authentication Agent for Microsoft AD FS on all federation servers in
the AD FS deployment, one must register the agent on the primary federation server using the
RSA Agent for AD FS Configuration Utility. To uninstall the agent, it must be unregistered first.

Procedure

1. Sign into the primary AD FS server where one installed the agent.

2. Open a PowerShell command prompt.

3. Enter the following to run the Agent for AD FS Configuration Utility:

cd 'C:\Program Files\RSA\RSA Authentication Agent\AD FS MFA Adapter\scripts'

.\MFAAuthProviderConfigSettings.ps1

4. From the Main Menu, do one of the following:

Enter 4 to select Register Agent.

Enter 5 to select Unregister Agent.

Smart Card centralized integration with Active Directory



Requirements

Smart Card Authentication to Active Directory requires that Smartcard workstations, Active
Directory, and Active Directory domain controllers be configured properly. Active Directory must
trust a certification authority to authenticate users based on certificates from that CA. Both
Smartcard workstations and domain controllers must be configured with correctly configured
certificates.

As with any PKI implementation, all parties must trust the Root CA to which the issuing
CA chains. Both the domain controllers and the smartcard workstations trust this root.

5.22. Single Sign-on Integration


In order to set up SSO:

· One must be an org admin for the organization in Amplitude.

· One must be able to configure Azure Active Directory for the organization in Microsoft
Azure.

Then follow these setup steps:

1. Go to the Azure Portal and go to the Azure Active Directory section.



2. Open the Enterprise applications sub-section.

3. Add a new application.

4. Search for Amplitude in the app gallery. Select the Amplitude entry and click “Add”
in the bottom right of the app summary.

5. Open the Single sign-on app settings.

6. Enter the “Identifier” and “Reply URL”.

7. These correspond to the “Entity ID” and “Assertion Consumer Service URL”, respectively,
in the SSO settings in Amplitude.

8. Save the changes and then download the Metadata XML.

9. In Amplitude upload the metadata XML file.

5.23. Active Directory Hardening Guidelines


In a Windows based domain system, Active Directory is the central management tool that
controls users' access to servers and to the services offered by specific servers. Security in a
Windows based infrastructure should therefore start with securing Active Directory. Although
most of the work of securing Active Directory focuses on the security settings of the server itself,
other components in the network environment – DNS, file servers, etc. – also play a vital role in
securing an Active Directory based environment. At a minimum, the following Windows server
configurations and services need to be checked to assess how secure the Active Directory
configuration is:

1. Server configurations: Every Windows server has some basic configuration, such as
administrator users, network settings, and file sharing. Also check the configuration
of the workstations that are managed by the Active Directory being reviewed.

2. Services: List the servers that provide specific functionality or services to the network
such as DHCP, DNS, Exchange, and File Servers.

3. Rename default administration account and disable the guest account.

4. For specifying the permissions in the domain object, always use global or universal
groups. Never use the local group for setting permissions to any domain object.

5. Check default user groups and their members. Remove unnecessary groups and their
corresponding default user rights.

6. Physical security of the servers and server rooms.

7. User management policy and security monitoring.

8. Check if the domain is protected with anti-virus or anti-malware software.

9. Take regular backup of the domain controller

10. Check whether server software is updated with the Microsoft recommended security
patches.

11. Secure the DNS. Though it is a separate service and can reside on servers that
are not hosting Active Directory, DNS helps Active Directory locate the domain
controllers and other necessary services in the network.

Check and disable the following:


1. All the drives in the server hosting active directory need to be in NTFS

2. Disable SMTP protocols

3. Disable boot from any removable devices except the boot disk.

4. Run only the services needed to run the server. Disable the rest. The services that
can be disabled are IIS, SMTP, FAX, indexing, Shell Hardware Detection and
Distributed Link Tracking Client; upload manager, Portable Media Serial Number,
Windows Audio and Utility Manager.

5. Allow only secure DNS updates

Before starting to harden Active Directory security, collect the complete topology of the
network, including the number of domains, sub-domains, and forests. Also confirm whether the
Active Directory is used only locally or whether other external offices of the organization fall
under the same Active Directory. In addition, make a list of administrators: service admins, data
admins, enterprise admins, domain admins, backup operators and forest owners.

Active directory security checklist:

· Domain controller logon policy should allow “logon locally” and “system shutdown”
privileges to the following groups: 1. Administrators; 2. Backup Operators;
3. Server Operators

· The domain controller security policy should be defined in a separate GPO, which
should be linked to an Organizational Unit (OU) of domain controller.

· The best practices for securing Active Directory are available on Microsoft's TechNet page.

· Never store LAN manager Hash values.

· Set the domain Account lockout duration to ‘0’ and lockout threshold to three.

· Check the domain Kerberos policy for logon restrictions and the maximum lifetime
for service tickets and user tickets. Also check the clock synchronization tolerance –
ideally 3 to 5 minutes.

· Check the domain controller event log policy, in particular pay attention to the log
retention time and access. Disable the guests group from accessing the log.

Top 24 Active Directory Security Best Practices

Tips on securing domain admins, local administrators, audit policies, monitoring AD for
compromise, password policies, vulnerability scanning and much more are discussed below:

1. Clean up the Domain Admins Group

There should be no day-to-day user accounts in the Domain Admins group; the only
exception is the default Domain Administrator account.

Members of the DA group are too powerful. They have local admin rights on every domain
joined system (workstation, servers, laptops, etc).

Microsoft recommends that when DA access is needed, the account has to be temporarily
placed in the DA group. When the work is done the account should be removed from the DA
group.

This process is also recommended for the Enterprise Admins, Backup Admins and Schema
Admin groups.
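
A quick way to review the group before cleaning it up is the ActiveDirectory module; this is just a read-only sketch.

# List current members of Domain Admins, including nested memberships
Get-ADGroupMember -Identity "Domain Admins" -Recursive | Select-Object Name, SamAccountName, objectClass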

2. Use at Least Two Accounts (Regular and Admin Account)

Logging on should not be done on an everyday basis with an account that is a local admin or
has privileged access (Domain Admin).

It is recommended to create two accounts: a regular account with no admin rights and a
privileged account that is used only for administrative tasks. Do not put the secondary account
in the Domain Admins group, at least not permanently.

Follow the least privilege administrative model. Basically, this means all users should log
on with an account that has the minimum permissions to complete their work.

3. Secure the Domain Administrator account

Every domain includes an Administrator account; by default, this account is a member of
the Domain Admins group.

The built in Administrator account should only be used for the domain setup and disaster
recovery (restoring Active Directory).

Anyone requiring administrative level access to servers or Active Directory should use
their own individual account.

No one should know the Domain Administrator account password. Set a really long password
of 20+ characters and lock it in a vault. Again, the only time this account is needed is for
recovery purposes.

In addition, Microsoft has several recommendations for securing the built in Administrator
Account. These settings can be applied to group policy and applied to all computers.

· Enable 'Account is sensitive and cannot be delegated'

· Enable 'Smart card is required for interactive logon'

· Deny access to this computer from the network

· Deny logon as batch job

· Deny log on as a service

· Deny log on through RDP

4. Disable the Local Administrator Account (on all computers)

The local administrator account is a well known account in Domain environments and is
not needed.

An individual account should be used that has the necessary rights to complete tasks.

Problems with the local admin account are:

1. It is a well-known account; even if renamed, the SID is the same and is well known
to attackers.

2. It’s often configured with the same password on every computer in the domain.

To perform admin tasks on the computer (install software, delete files, etc) use the individual
account, not the local admin account.

Even if the account is disabled, booting can be done in safe mode and the local
administrator account can be used.

5. Use Local Administrator Password Solution (LAPS)

Local administrator Password Solution (LAPS) is a popular tool to handle the local admin
password on all computers.

LAPS is a Microsoft tool that provides management of local account password of domain
joined computers. It will set a unique password for every local administrator account and store
it in Active Directory for easy access.

LAPS is built upon the Active Directory infrastructure so there is no need to install additional
servers.

The solution uses the group policy client side extension to perform all the management
tasks on the workstations. It is supported on Active Directory 2003 SP1 and above and client
Vista Service Pack 2 and above.

Figure 5.36: LAPS is built upon the Active Directory infrastructure
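
Assuming the AdmPwd.PS module that ships with Microsoft LAPS is installed on the management workstation, the stored password for a machine (placeholder name) can be retrieved like this:

# Read the LAPS-managed local administrator password for one computer
Import-Module AdmPwd.PS
Get-AdmPwdPassword -ComputerName "PC01" | Select-Object ComputerName, Password, ExpirationTimestamp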



6. Use a Secure Admin Workstation (SAW)

A secure admin workstation is a dedicated system that should only be used to perform
administrative tasks with a privileged account.

It should not be used for checking email or browsing the internet. In fact, internet access
should be restricted.

What tasks can be done on a SAW?

· Active Directory administration

· Group Policy

· Managing DNS & DHCP Servers

· Any task that requires admin rights on servers

· Admin rights to Management Systems such as VMware, Hyper-v, Citrix

· Office 365 Administration

Basically, whenever a privileged account is needed to perform admin tasks, the work should
be done from a SAW.

Daily use workstations are more vulnerable to compromise from pass the hash, phishing
attacks, fake websites, keyloggers and more.

Using a secure workstation for an elevated account provides much greater protection
from those attack vectors.

Since attacks can come from internal and external sources, it's best to adopt an assume-breach
security posture.

Due to continuous threats and changes in technology, the methodology for deploying a SAW
keeps changing. There are also PAWs and jump servers, which can make it even more confusing.

Here are some tips to help get it started:

· Use a clean OS install (use latest Windows OS)

· Apply hardening security baseline



· Enable full disk encryption

· Restrict USB ports

· Use personal firewall

· Block internet

· Use a VM – a terminal server works well

· Minimal software installed

· Use two factor authentication or a smart card for access

· Restrict systems to only accept connections from the SAW

7. Enable Audit policy Settings with Group Policy

Ensure the following Audit Policy settings are configured in group policy and applied to all
computers and servers.

Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Advanced
Audit Policy Configuration

Account Logon

Ensure ‘Audit Credential Validation’ is set to ‘Success and Failure’

Account Management

Audit ‘Application Group Management’ is set to ‘Success and Failure’

Audit ‘Computer Account Management’ is set to ‘Success and Failure’

Audit ‘Other Account Management Events’ is set to ‘Success and Failure’

Audit ‘Security Group Management’ is set to ‘Success and Failure’

Audit ‘User Account Management’ is set to ‘Success and Failure’

Detailed Tracking

Audit ‘PNP Activity’ is set to ‘Success’

Audit ‘Process Creation’ is set to ‘Success’

Logon/Logoff

Audit ‘Account Lockout’ is set to ‘Success and Failure’



Audit ‘Group Membership’ is set to ‘Success’

Audit ‘Logoff’ is set to ‘Success’

Audit ‘Logon’ is set to ‘Success and Failure’

Audit ‘Other Logon/Logoff Events’ is set to ‘Success and Failure’

Audit ‘Special Logon’ is set to ‘Success’

Object Access

Audit ‘Removable Storage’ is set to ‘Success and Failure’

Policy Change

Audit ‘Audit Policy Change’ is set to ‘Success and Failure’

Audit ‘Authentication Policy Change’ is set to ‘Success’

Audit ‘Authorization Policy Change’ is set to ‘Success’

Privilege Use

Audit ‘Sensitive Privilege Use’ is set to ‘Success and Failure’

System

Audit ‘IPsec Driver’ is set to ‘Success and Failure’

Audit’ Other System Events’ is set to ‘Success and Failure’

Audit ‘Security State Change’ is set to ‘Success’

Audit ‘Security System Extension’ is set to ‘Success and Failure’

Audit ‘System Integrity’ is set to ‘Success and Failure’

Malicious activity often starts on workstations; if continuous monitoring is not in place, early
signs of an attack can be missed.
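
As a spot check, the audit policy currently in effect on a machine can be compared against the list above with the built-in auditpol utility; the subcategory shown is just one example.

# Show the audit policy currently in effect for every category
auditpol /get /category:*

# Enable success and failure auditing for one subcategory locally
auditpol /set /subcategory:"Credential Validation" /success:enable /failure:enable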

8. Monitor Active Directory Events for Signs of Compromise

The following Active Directory events should be monitored which will help detect
compromise and abnormal behavior on the network.

Here are some events that should be monitored and reviewed on a weekly basis.

· Changes to privileged groups such as Domain Admins, Enterprise Admins and
Schema Admins

· A spike in bad password attempts

· A spike in locked out accounts

· Account lockouts

· Disabled or removal of antivirus software

· All activities performed by privileged accounts

· Logon/Logoff events

· Use of local administrator accounts
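
As a minimal sketch, two of the events above (4740 account lockouts, 4728 additions to security-enabled global groups) can be pulled from a domain controller with Get-WinEvent; the server name is a placeholder.

# Recent account lockouts (4740) and additions to security-enabled global groups (4728)
Get-WinEvent -ComputerName "DC01" -FilterHashtable @{ LogName = 'Security'; Id = 4740, 4728 } -MaxEvents 100 |
    Select-Object TimeCreated, Id, Message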

How are events monitored in Active Directory?

The best way is to collect all the logs on a centralized server then use log analyzing
software to generate reports.

Some log analyzers come pre built with Active Directory security reports.

Here are some of the most popular log analyzers.

· Elk Stack

· Lepide

· Splunk

· ManageEngine ADAudit Plus

· Windows Event Forwarding.



Figure 5.37 : Account Locked out users

Figure 5.38: Top user Logon Failures

In this screenshot, one can see a huge spike in logon failures. Without a log analyzer,
these events would be hard to spot.

9. Password Complexity

8 characters with complexity is no longer a secure password. Instead, use a minimum of
12 characters and train users on passphrases. The longer the password the better.

Passphrases are simply two or more random words put together. One can add numbers
and characters if needed.

Better Password Policy

· Set 12 character passwords

· Remember 10 password history

· use passphrases

· Lockout policy 3 attempts

The key to using passphrases is to be totally random with each word.

Good passwords using passphrases

Bucketguitartire22

Screenjugglered

RoadbluesaltCloud

The above examples are totally random. These would take a very long time to crack and
most likely no one would guess them.

Bad passphrase examples

Ireallylikepizza22

Theskyisblue44

NIST recently updated their password policy guidelines in Special Publication 800-63 to
address new requirements for password policies.

If the organization must meet certain standards then make sure those standards support
these password recommendations.

Also be sure to update the company's written policy.

10. Use Descriptive Security Group Names

Applying permissions to resources with security groups rather than individual accounts makes
managing resources much easier. Security groups should not be given a generic name like
helpdesk or HR Training.

11. Cleanup Old Active Directory User & Computer Accounts

One needs to have a procedure in place to detect unused user and computer accounts in
Active Directory.
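
A minimal sketch of such a procedure with the ActiveDirectory module is shown below; the 90-day window is an arbitrary example.

# Find accounts that have not logged on for 90 days
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 |
    Select-Object Name, ObjectClass, LastLogonDate |
    Sort-Object LastLogonDate

# After review, disable (rather than delete) the stale user accounts; -WhatIf previews the change
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly | Disable-ADAccount -WhatIf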

12. Do NOT Install Additional Software or Roles on Domain Controllers

Domain controllers should have limited software and roles installed on them.

DCs are critical to the enterprise; nobody wants to increase security risks by having
additional software running on them.

Windows Server Core is a great option for running the DC role and other roles such as
DHCP, DNS, print servers and file servers.

Server Core runs without a GUI and requires fewer security patches due to its smaller
footprint.

More software, more roles = increased security risks.

13. Continues Patch Management & Vulnerability Scanning

Regular scanning and patching of software remediates discovered vulnerabilities and keeps the risk of a successful attack to a minimum.

Tips for Continuous Vulnerability Management

· Scan all systems at least once a month to identify all potential vulnerabilities. If one
can scan more frequently it’s better.

· Prioritize the findings of the vulnerability scans and first fix the ones that are being actively exploited in the wild.

· Deploy automated software updates to operating systems

· Deploy automated updates to 3rd party software

· Identify out-of-date software that is no longer supported and get it updated (a minimal sketch of this kind of check follows this list).
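As a minimal illustration of that check (all names and version numbers below are hypothetical), the following Python sketch compares an inventory of installed software against the minimum patched versions and reports what is out of date.

# Hypothetical inventory and minimum patched versions, as version tuples.
installed = {"openssl": (1, 1, 1), "webapp": (2, 4, 0), "agent": (5, 2, 9)}
minimum_patched = {"openssl": (1, 1, 1), "webapp": (2, 5, 1), "agent": (5, 3, 0)}

def out_of_date(installed, minimum_patched):
    """Return software whose installed version is below the minimum patched version."""
    return {name: (ver, minimum_patched[name])
            for name, ver in installed.items()
            if name in minimum_patched and ver < minimum_patched[name]}

for name, (have, need) in out_of_date(installed, minimum_patched).items():
    print(f"{name}: installed {have}, needs at least {need}")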

14. Use DNS Services to Block Malicious Domains

One can prevent a lot of malicious traffic from entering the network by blocking malicious
DNS lookups.

Anytime a system needs to access the internet it will in most cases use a domain name.

There are several services available that check DNS queries for malicious domains and block them. These DNS services gather intelligence about malicious domains from various public and private sources. When such a service receives a query for a domain it has flagged as malicious, it blocks the lookup so the system never reaches the malicious host.

Here is an example:

Step 1: A client clicks a link that goes to example.net

Step 2: The local cache is checked

Step 3: The DNS service checks whether the domain is on its threat list; it is, so it returns a block reply.

In the above example since the DNS query returned a block, no malicious traffic ever
entered into the network.
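A minimal Python sketch of this decision logic follows; the blocklist is a stand-in for the threat intelligence a real DNS filtering service maintains, and the domain names are only examples.

import socket

# Stand-in for the threat list a real DNS filtering service maintains.
BLOCKED_DOMAINS = {"example.net", "malicious-site.test"}

def resolve(domain):
    """Return 'BLOCKED' for a known-bad domain, otherwise its resolved IP address."""
    if domain.lower() in BLOCKED_DOMAINS:
        return "BLOCKED"
    return socket.gethostbyname(domain)

print(resolve("example.net"))   # BLOCKED - the lookup never leaves the network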

Here are some of the most popular secure DNS services

Quad9

OpenDNS

Comodo Secure DNS

Also, most IPS (Intrusion Prevention System) products support the ability to check DNS lookups against a list of malicious domains.

15. Run Critical Infrastructure on the Latest Windows Operating System

With each new version of Windows OS, Microsoft includes built in security features and
enhancements. Just staying on the latest OS will increase overall security.

16. Use Two Factor Authentication for Remote Access

Compromised accounts are very common and this can provide attackers remote access
to the systems through VPN, Citrix, or other remote access systems.

One of the best ways to protect against compromised accounts is two factor authentication.
This will also help against password spraying attacks.

Popular two factor authentication solutions

· DUO

· RSA

· Microsoft MFA

17. Monitor DHCP Logs for Connected Devices

One should know what is connected to the network; if there are multiple locations with lots of users and computers, this can be challenging.

There are ways to ensure that only authorized devices can connect, but this can be costly and a lot of work to set up.

Another method that is already available is to monitor the DHCP logs for connected devices.
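A minimal Python sketch of that approach follows; it assumes the DHCP leases have been exported to a CSV file with IPAddress, MACAddress and HostName columns and that approved MAC addresses are kept in a text file, both of which are hypothetical file layouts used only for illustration.

import csv

def known_macs(path="known_macs.txt"):
    """Load approved MAC addresses, one per line."""
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def unknown_leases(lease_csv, approved):
    """Yield DHCP leases whose MAC address is not on the approved list."""
    with open(lease_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["MACAddress"].lower() not in approved:
                yield row["IPAddress"], row["MACAddress"], row.get("HostName", "")

for ip, mac, host in unknown_leases("dhcp_leases.csv", known_macs()):
    print(f"Unknown device {mac} ({host}) at {ip}")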

18. Monitor DNS Logs for Security Threats

Most connections start with a DNS query. All domain joined systems should be set up to
use a local Windows DNS server.

With this setup, one can log every internal and external DNS lookup. When a client device
makes a connection to a malicious site it will log that site name in the DNS logs.

These malicious domains are usually odd, random-character domains that raise red flags.
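One simple, heuristic way to spot such domains is to measure how random their labels look. The Python sketch below flags long, high-entropy labels; the length and entropy thresholds are rough, illustrative choices, not authoritative values.

import math
from collections import Counter

def entropy(label):
    """Shannon entropy of a string; random-looking labels score higher."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_random(domain, min_len=12, threshold=3.5):
    """Heuristic: flag domains whose first label is long and high-entropy."""
    label = domain.split(".")[0]
    return len(label) >= min_len and entropy(label) > threshold

for d in ["intranet.example.com", "xk7f9q2lmz0ra.biz"]:
    print(d, "-> suspicious" if looks_random(d) else "-> ok")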
Here are some screenshots of suspicious DNS lookups from certain logs.

Figure 5.39: DNS Lookups

19. Use Latest ADFS Security Features

ADFS has some great security features. These features will help with password spraying,
account compromise, phishing and so on.

Here are some features that are worth looking into:

· Smart Lockout – Uses algorithms to spot unusual sign on activity.

· IP Lockout – Uses Microsoft's database of known malicious IP addresses to block sign-ins.

· Attack Simulations – One should be doing regular phishing tests to help train end users. Microsoft will be releasing phishing simulator software.

· MFA Authentication – Microsoft’s 2 factor solution

· Banned passwords – Checks passwords against a known list

· Custom bad passwords – Ability to add custom banned passwords to check against.

20. Plan for Compromise ( Have a recovery plan)

Cyber attacks can shut down systems and bring business operations to a halt.

The City of Atlanta was shut down by a cyber attack that prevented residents from paying utility bills online. In addition, police officers had to write reports by hand.

A good incident response plan could have limited the impact and brought services back online much faster.

Here are a few things to include in an incident response plan

· Create an incident response policy and plan

· Create procedures for performing incident handling and reporting

· Establish procedures for communicating with outside parties

· Establish response teams and leaders

· Prioritize servers

· Walkthrough and training

21. Document Delegation to Active Directory

The best way to control access to Active Directory and related resources is to use Security
Groups.

Create custom groups with very specific names, document who has rights and a process
for adding new users.

Don’t just allow users to be added to these custom groups without an approval process.
This is just another way permissions can get out of control.

Know what groups are delegated to what resources, document it.

22. Lock Down Service Accounts

Service accounts are accounts used to run an executable, scheduled task or service, or to handle AD authentication, etc.

These are widely used and often have a password set to never expire.

These accounts often end up with too many permissions and, more often than not, are members of the Domain Admins group.

Here are some tips for locking down service accounts.

· Use long Strong passwords

· Give access to only what is needed

· Try to avoid granting local administrator rights

· Do not put in Domain Admins

· Deny logon locally

· Deny logon as a batch

· Require vendors to make their software work without domain admin rights

23. Disable SMBv1

SMBv1 is 30 years old and Microsoft says to stop using it (They have been saying that for
a long time).

SMB (Server Message Block) is a network file and printer sharing protocol.

SMBv1 has been replaced by SMBv2 and SMBv3.

Many viruses can spread and exploit flaws in the SMBv1 protocol.

In addition to the security issues with SMBv1, it is not an efficient protocol; one will lose performance with this old version.

Beginning with Windows 10 Fall Creators Update SMBv1 will be disabled by default.

24. Use Security Baselines and Benchmarks

A default install of the Windows Operating system has many features, services, default
settings and enabled ports that are not secure.

These default settings should be reviewed against known security benchmarks.

Establishing a secure configuration on all systems can reduce the attack surface while
maintaining functionality.

There are several resources that provide security benchmarks.



Microsoft has a Security Compliance Toolkit that allows one to analyze and test against
Microsoft recommended security configuration baselines.

Another great resource is CIS SecureSuite

It also provides security configuration baselines. In addition, it provides tools that can
scan a system and provide a report on failures.

Most of the recommended settings can be set up using Group Policy and deployed to all
computers.

CIS SecureSuite can also scan other systems such as Cisco, VMware, Linux and more.

Recommended Tool: SolarWinds Server & Application Monitor (SAM)

This utility was designed to Monitor Active Directory and other critical applications. It will
quickly spot domain controller issues, prevent replication failures, track failed logon attempts
and much more.

Summary
• Authentication in a physical access control involves checking the identity of a user
against the logical data stored within the repository

• A smart card resembles a credit card in shape and size, but contains an embedded
microprocessor. The microprocessor is used to hold specific information in encrypted
format.

• Biometric systems assert the identity of an individual based on characteristic features or behaviors of that person, for example their facial appearance, hand geometry, fingerprints and voice patterns.

• Iris recognition is acknowledged as the leading form of biometrics technology because of its accuracy in recognizing individuals and its robustness when matching individuals against large biometric databases, but it is argued that finger-vein authentication is the most suitable for door-access control systems.

• Radio Frequency IDentification (RFID) is an automatic identification method, relying on storing and remotely retrieving data using devices called RFID tags or transponders. A typical system consists of tags with an embedded, unique identifier for the object; readers designed to decode the data on the tag; and a host system or server that processes and manages the information gathered.

Check your answers


• What is authentication?

• What is Smart Card?

• What are passwords?

• What is RFID?

• Different types of biometric authentication are ……………., ……………….., ……………………., ………………, …………………………….

References
• (https://www.techopedia.com/definition/25/active-directory)

• https://securitywing.com/active-directory-security/

• For step by step instructions on installing LAPS see this article, How to Install Local
Administrator Password Solution (LAPS)

• https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-
access/privileged-access-workstations

• https://cloudblogs.microsoft.com/enterprisemobility/2018/03/05/azure-ad-and-adfs-
best-practices-defending-against-password-spray-attacks/

• NIST has a great guide on computer security incident handling: https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-61r2.pdf

• Yahya Mehdizadeh. Convergence of logical and physical security. Technical report, SANS Institute, 2003.

• J. H. Connell, N. K. Ratha and R. M. Bolle. Enhancing security and privacy in biometrics-based authentication systems. IBM Systems Journal, Vol. 40, No. 3, 2001.

• Simson Garfinkel. Web Security, Privacy & Commerce. O’Reilly, 1997.

• Bruce Schneier. Secrets & Lies - Digital Security in a Networked World. Wiley,
2004.

• Jacqueline Emigh. Getting clever with smart cards. Access Control & Security
Systems, May, 2004.

• Melanie R. Rieback, Bruno Crispo and Andrew S. Tanenbaum. Is Your Cat Infected with a Computer Virus? Vrije Universiteit Amsterdam, 2006.

• IBM. Smart cards and it security. Technical report, IBM, 2003.

• C. H. Seal, M. M. Gifford and D. J. McCartney. Networked biometrics systems - requirements based on iris recognition. BT Technology Journal, 17(7):163-169, 1999.

• Hitachi. Door-access-control system based on finger-vein authentication. Technical report, Hitachi.

• Argus Solutions. Monitor and manage critical assets to guard against unauthorised
usage or theft. Technical report, Argus Solutions, 2006.

• Tiresias.org. Guidelines - biometric systems. http://www.tiresias.org/guidelines/biometric systems.htm, 2006.

• Wikipedia. RFID. http://en.wikipedia.org/wiki/Rfid, April 2006.

• Symbol Mobility Learning Centre. Rfid key issues. Technical report, Symbol Mobility
Learning Centre, 2004.

• Paxar. Rfid basics. Technical report, Paxar, 2004.

• Stephen August Weiss. Security and privacy in radio-frequency identification devices. Master's thesis, Massachusetts Institute of Technology, 2003.

• (https://www.techopedia.com/definition/3996/kerberos)

• https://ldap.com/

• https://www.techopedia.com/definition/30222/ticket-granting-ticket-tgt

• https://searchwindowsserver.techtarget.com/definition/Active-Directory-forest-AD-
forest

• https://www.techopedia.com/definition/1326/domain-networking

• https://kb.iu.edu/d/atvu

• http://www.itprotoday.com/windows-8/how-do-i-configure-trust-relationship

• https://www.techopedia.com/definition/3841/modification-mod

• https://www.techopedia.com/definition/30735/object-class

• https://www.techopedia.com/definition/25949/create-retrieve-update-and-delete-crud

• https://en.wikipedia.org/wiki/Group_Policy

• https://blogs.technet.microsoft.com/musings_of_a_technical_tam/2012/02/13/group-
policy-basics-part-1-understanding-the-structure-of-a-group-policy-object/

• https://www.serverwatch.com/tutorials/article.php/1497871/Group-Policy-
Structures.htm

• http://techgenix.com/defaultgpopermissions/

• https://www.lepide.com/blog/top-10-most-important-group-policy-settings-for-
preventing-security-breaches/

• https://www.it-support.com.au/how-to-configure-account-lockout-policy-on-windows-
server/2013/07/

• https://prajwaldesai.com/how-to-disable-usb-devices-using-group-policy/

• https://social.technet.microsoft.com/Forums/en-US/3b5f46b6-9d95-487d-b02d-103a75ae3814/create-group-policy-to-set-screensaver-timeout-in-registry?forum=winserverGP

• https://www.manageengine.com/products/active-directory-audit/help/getting-started/
eventlog-settings-workstation-auditing.html

• https://www.itprotoday.com/windows-8/group-policy-settings-wsus

• http://www.grouppolicy.biz/2010/05/how-to-apply-a-group-policy-object-to-individual-
users-or-computer/

• https://social.technet.microsoft.com/Forums/ie/en-US/b452101f-d2d3-4a6f-96f1-
e101e99107dd/server-2012-r2-lock-screen-timeout-settings?forum=winserver8gen

• https://www.tech-recipes.com/rx/35777/windows-8-using-group-policy-to-prevent-
screen-saver-changes/

• https://www.petenetlive.com/KB/Article/0001283

• https://www.addictivetips.com/windows-tips/enable-account-lockout-policy-set-
threshold-duration-in-windows-8/

• https://www.askvg.com/how-to-enable-group-policy-editor-gpedit-msc-in-windows-
7-home-premium-home-basic-and-starter-editions/

• https://www.petri.com/how-to-create-and-link-a-group-policy-object-in-active-
directory

• https://searchwindowsserver.techtarget.com/tip/Enforcing-Group-Policy-Object-
settings

• https://www.manageengine.com/products/ad-manager/help/gpo-management/view-
gpo-gpolinks-details.html

• http://www.vkernel.ro/blog/add-domain-users-to-local-groups-using-group-policy-
preferences

• https://emeneye.wordpress.com/2016/02/16/group-policy-order-of-precedence-faq/

• https://4sysops.com/archives/understanding-group-policy-order/

• https://www.experts-exchange.com/articles/1876/Understanding-Group-Policy-
Loopback-Processing.html

• https://www.msptechs.com/how-to-configure-fine-grained-password-policies-on-
windows-server-2016/

• https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/deployment/join-a-
computer-to-a-domain

• https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/
windows_integration_guide/winbind

• https://community.rsa.com/servlet/JiveServlet/downloadBody/93418-102-4-231820/
auth_agent20ADFS_admin_guide.pdf

• https://support.microsoft.com/en-in/help/281245/guidelines-for-enabling-smart-card-
logon-with-third-party-certification

UNIT 6
CLOUD COMPUTING
After reading this lesson you will be able to understand

· Overview of Cloud Computing

· Defining Cloud Computing

· Cloud Types

· Characteristics of Cloud computing

· Understanding services and application types

· Cloud Computing Technologies

· Cloud Computing Architecture

· Multi Tenancy Model

· Cloud Computing Challenges

· Cloud Security Reference Model

· Cloud identity and Access Management

Structure
6.1 Overview

6.2 Cloud Computing

6.3 Cloud Types

6.4 Characteristics of Cloud Computing

6.5 Understanding services and application types

6.6 Cloud Computing-Technologies

6.7 Cloud Computing-Architecture

6.8 Multi Tenancy Model

6.9 Cloud Computing Challenges

6.10 Cloud Security Reference Model

6.11 Cloud Identity and Access Management



6.12 Securing the Cloud

6.13 Encryption

6.14 Auditing and Compliance

6.15 Establishing Identity and Presence

6.16 Identity Protocol Standards

6.1. Overview
Cloud computing is a computing paradigm, where a large pool of systems are connected
in private or public networks, to provide dynamically scalable infrastructure for application, data
and file storage. With the advent of this technology, the cost of computation, application hosting,
content storage and delivery is reduced significantly.

The term Cloud refers to a network or the Internet. In other words, the cloud is something that is present at a remote location. The cloud can provide services over public or private networks, i.e., WAN, LAN or VPN. Applications such as e-mail, web conferencing and customer relationship management (CRM) all run in the cloud.

6.2. Cloud computing


Cloud computing is a practical approach to realizing direct cost benefits, and it has the potential to transform a data center from a capital-intensive setup to a variable-priced environment. The idea of cloud computing is based on the fundamental principle of reusability of IT capabilities. The difference that cloud computing brings compared to traditional concepts of "grid computing", "distributed computing", "utility computing" or "autonomic computing" is that it broadens horizons across organizational boundaries. Cloud computing refers to manipulating, configuring and accessing applications online. It offers online data storage, infrastructure and applications.

The concept of cloud computing came into existence in the 1950s with the implementation of mainframe computers accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones, and from software to services. The following diagram explains the evolution of cloud computing.

Figure 6.1: Evolution of Cloud Computing

We need not install a piece of software on our local PC; this is how cloud computing overcomes platform dependency issues. Hence, cloud computing makes business applications mobile and collaborative.

6.2.1. Definition

Cloud computing is a subscription-based service where one can obtain networked storage
space and computer resources.

Forrester defines cloud computing as:

“A pool of abstracted, highly scalable, and managed compute infrastructure capable of


hosting end-customer applications and billed by consumption.”

Cloud computing takes the technology, services, and applications that are similar to those
on the Internet and turns them into a self-service utility. The use of the word “cloud” makes
reference to the two essential concepts:

• Abstraction: Cloud computing abstracts the details of system implementation from users and developers. Applications run on physical systems that aren't specified, data is stored in locations that are unknown, administration of systems is outsourced to others, and access by users is ubiquitous.

• Virtualization: Cloud computing virtualizes systems by pooling and sharing resources. Systems and storage can be provisioned as needed from a centralized infrastructure, costs are assessed on a metered basis, multi-tenancy is enabled, and resources are scalable with agility.

Figure 6.2: Conceptual View of Cloud computing

6.3. Cloud Types


There are certain services and models working behind the scenes that make cloud computing feasible and accessible to end users. The following are the working models for cloud computing:

NIST Model

· Deployment Models

· Service Models

Cloud Cube Model

· Jericho Forum model

6.3.1. NIST Model

The United States government is a major consumer of computer services and, therefore,
one of the major users of cloud computing networks. NIST separates cloud computing into deployment models and service models. Cloud computing is a relatively new business model in
the computing world. According to the official NIST definition, “cloud computing is a model for
enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable
computing resources (e.g., networks, servers, storage, applications and services) that can be
rapidly provisioned and released with minimal management effort or service provider interaction.”
The NIST definition lists five essential characteristics of cloud computing: on-demand self-
service, broad network access, resource pooling, rapid elasticity or expansion, and measured
service. It also lists three “service models” (software, platform and infrastructure), and four
“deployment models” (private, community, public and hybrid) that together categorize ways to
deliver cloud services. The definition is intended to serve as a means for broad comparisons of
cloud services and deployment strategies, and to provide a baseline for discussion from what is
cloud computing to how to best use cloud computing.

Figure 6.3: NIST Model

6.3.2. Cloud Cube Model

It is also called the Open Group Jericho Forum model. It categorizes a cloud network based on four dimensional factors:

• Physical location of the data: Internal (I) / External (E) determines the organization's boundaries.

• Ownership: Proprietary (P) / Open (O) is a measure of not only the technology ownership,
but of interoperability, ease of data transfer, and degree of vendor application lock-in.

• Security boundary: Perimeterised (Per) / De-perimeterised (D-P) is a measure of whether the operation is inside or outside the security boundary or network firewall.

• Sourcing: In-sourced or Outsourced means whether the service is provided by the customer or the service provider.

The same is illustrated in the Figure 6.4:

Figure 6.4: Jericho's Cloud Cube Model
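The four dimensions of the cube can also be captured as a simple data structure. The following Python sketch is only an illustration of how a service might be classified along them; the example values are made up.

from dataclasses import dataclass

@dataclass
class CloudCubePosition:
    """One position in the Jericho Forum cloud cube."""
    location: str   # "Internal" or "External"
    ownership: str  # "Proprietary" or "Open"
    boundary: str   # "Perimeterised" or "De-perimeterised"
    sourcing: str   # "In-sourced" or "Outsourced"

# Example classification of a public SaaS offering consumed over the internet.
public_saas = CloudCubePosition("External", "Proprietary", "De-perimeterised", "Outsourced")
print(public_saas)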

The following figure illustrates both NIST model and Cloud Cube Model.

Figure 6.5: Cloud Computing Models



6.4. Characteristics of Cloud Computing


Some of the essential Characteristics are listed below:

· Reduced Cost

· Increased Storage

· Flexibility

· Resource Availability

· On-Demand Service

· Rapid Elasticity

· Resource Pooling

· Broad Network Access

Table 6.1 lists the benefits and characteristics of cloud computing.
Table 6.1: Benefits of Cloud Computing

S.NO Benefits Characteristics

1 Reduced cost
· There are a number of reasons to attribute Cloud technology with lower costs.
· The billing model is pay per usage.
· The infrastructure is not purchased, thus lowering maintenance.
· Initial expense and recurring expenses are much lower than traditional computing.

2 Increased storage
· With the massive infrastructure offered by Cloud providers today, storage and maintenance of large volumes of data is a reality.
· Sudden workload spikes are also managed effectively and efficiently, since the cloud can scale dynamically.

3 Flexibility
· This is an extremely important characteristic. With enterprises having to adapt, even more rapidly, to changing business conditions, speed to deliver is critical.
· Cloud computing stresses getting applications to market very quickly, by using the most appropriate building blocks necessary for deployment.

4 On-demand self-service
· Cloud computing allows users to use web services and resources on demand. One can log on to a website at any time and use them.

5 Broad network access
· Since cloud computing is completely web based, it can be accessed from anywhere and at any time.

6 Resource pooling
· Cloud computing allows multiple tenants to share a pool of resources. One can share a single physical instance of hardware, database and basic infrastructure.

7 Rapid elasticity
· It is very easy to scale the resources up or down at any time. Resources used by or currently assigned to customers are automatically monitored, controlled and reported.

6.5 Understanding services and application types


6.5.1 Cloud Computing Models

Cloud providers offer services that can be grouped into three categories. They also provide deployment models, which can be classified into four categories.

Service models: Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS)

Deployment models: Public Cloud, Private Cloud, Hybrid Cloud, Community Cloud

6.5.2. Service Models


6.5.2.1. Software as a Service (SaaS)

In this model, a complete application is offered to the customer, as a service on demand.


A single instance of the service runs on the cloud & multiple end users are serviced. On the
customers’ side, there is no need for upfront investment in servers or software licenses, while
for the provider, the costs are lowered, since only a single application needs to be hosted &
maintained. Today SaaS is offered by companies such as Google, Salesforce, Microsoft, Zoho.

Examples:

· GoogleApps

· Oracle On Demand

· SalesForce.com

· SQL Azure

6.5.2.2 Platform as a Service (PaaS)

Here, a layer of software or development environment is encapsulated & offered as a


service, upon which other higher levels of service can be built. The customer has the freedom
to build his own applications, which run on the provider’s infrastructure. To meet manageability
and scalability requirements of the applications, PaaS providers offer a predefined combination
of OS and application servers, such as LAMP platform (Linux, Apache, MySql and PHP), restricted
J2EE, Ruby etc. Google’s App Engine, Force.com, etc are some of the popular PaaS examples.

Examples

· Force.com

· GoGrid CloudCenter

· Google AppEngine

· Windows Azure Platform

6.5.2.3 Infrastructure as a Service (IaaS)

IaaS provides basic storage and computing capabilities as standardized services over
the network. Servers, storage systems, networking equipment, data center space etc. are pooled
and made available to handle workloads. The customer would typically deploy his own software
on the infrastructure. Some common examples are Amazon, GoGrid and Tera.

Examples of IaaS service providers include:

• Amazon Elastic Compute Cloud (EC2)

• Eucalyptus [Elastic Utility Computing Architecture for Linking the Programs To Useful
Systems]

• GoGrid

• FlexiScale

• Linode

• RackSpace Cloud

• Terremark

6.5.3. Deployment Model

The NIST definition for the four deployment models is as follows:

6.5.3.1. Public cloud: The public cloud infrastructure is available for public use, or alternatively for a large industry group, and is owned by an organization selling cloud services.

6.5.3.2. Private cloud: The private cloud infrastructure is operated for the exclusive use
of an organization. The cloud may be managed by that organization or a third party. Private
clouds may be either on- or off-premises.

6.5.3.3. Hybrid cloud: A hybrid cloud combines multiple clouds (private, community or public) where those clouds retain their unique identities, but are bound together as a unit. A
hybrid cloud may offer standardized or proprietary access to data and applications, as well as
application portability.

6.5.3.4. Community cloud: A community cloud is one where the cloud has been organized to serve a common function or purpose.

It may be for one organization or for several organizations, but they share common
concerns such as their mission, policies, security, regulatory compliance needs, and so on. A
community cloud may be managed by the constituent organization(s) or by a third party.

6.5.4 Understanding Public and Private Clouds

Enterprises can choose to deploy applications on public, private, hybrid or community clouds.

6.5.4.1. Public Cloud

Public clouds are owned and operated by third parties; they deliver superior economies of
scale to customers, as the infrastructure costs are spread among a mix of users, giving each
individual client an attractive low-cost, “Pay-as-you-go” model.

All customers share the same infrastructure pool with limited configuration, security
protections, and availability variances. These are managed and supported by the cloud provider.
One of the advantages of a public cloud is that it may be larger than an enterprise's cloud, thus providing the ability to scale seamlessly on demand. Cloud integrators can play a vital part
in determining the right cloud path for each organization. A cloud integrator is a product or
service that helps a business negotiate the complexities of cloud migrations. A cloud integrator
service (sometimes referred to as Integration-as-a-Service) is like a systems integrator (SI)
that specializes in cloud computing.

The public cloud allows systems and services to be easily accessible to the general public; e.g., Google, Amazon and Microsoft offer cloud services via the Internet.

Figure 6.6: Public Cloud

6.5.4.2. Private Cloud

Private clouds are built exclusively for a single enterprise. They aim to address concerns
on data security and offer greater control, which is typically lacking in a public cloud.

There are two variations to a private cloud:

· On-premise Private Cloud

· Externally hosted Private Cloud

The Private Cloud allows systems and services to be accessible within an organization.
The Private Cloud is operated only within a single organization. However, it may be managed
internally or by third-party.

Figure 6.7: Private Cloud

The following Figure illustrates benefits of on-premise and externally hosted private cloud:

Figure 6.8: Benefits of Private Cloud

Table 6.2 compares an on-premise private cloud with an externally hosted private cloud:

Table 6.2. On-premise Vs. externally hosted private cloud

On-premise Private Cloud: On-premise private clouds, also known as internal clouds, are hosted within one's own data center. This model provides a more standardized process and protection, but is limited in size and scalability. IT departments would also need to incur the capital and operational costs for the physical resources. It is best suited for applications which require complete control and configurability of the infrastructure and security.

Externally hosted Private Cloud: This type of private cloud is hosted externally with a cloud provider, where the provider facilitates an exclusive cloud environment with a full guarantee of privacy. It is best suited for enterprises that do not prefer a public cloud due to the sharing of physical resources.

6.5.4.3. Community Cloud

A community cloud allows systems and services to be shared among two or more organizations that have similar cloud requirements. A community cloud in computing is
a collaborative effort in which infrastructure is shared between several organizations from a
specific community with common concerns (security, compliance, jurisdiction, etc.), whether
managed internally or by a third-party and hosted internally or externally. The costs are spread
over fewer users than a public cloud (but more than a private cloud), so only some of the cost
savings potential of cloud computing are realized.

The community cloud allows systems and services to be accessible by a group of organizations. It shares the infrastructure between several organizations from a specific community. It may be managed internally or by a third party.

Figure 6.9: Community Cloud

6.5.4.4. Hybrid Cloud

Hybrid Clouds combine both public and private cloud models. With a Hybrid Cloud, service
providers can utilize 3rd party Cloud Providers in a full or partial manner thus increasing the
flexibility of computing. The Hybrid cloud environment is capable of providing on-demand,
externally provisioned scale. The ability to augment a private cloud with the resources of a
public cloud can be used to manage any unexpected surges in workload. The same is illustrated
in Figure.6.10.

Figure 6.10: Hybrid Cloud Model



6.5.6. Cloud Computing Benefits

Enterprises would need to align their applications, so as to exploit the architecture models
that Cloud Computing offers. The following figure illustrates the benefits of cloud computing

Figure 6.11: Benefits of Cloud Computing

The following figure illustrates the essential characteristics of deployment and service
models.

Figure 6.12: Essential Characteristics of Deployment and Service Models

6.6. Cloud Computing-Technologies


There are certain technologies that are working behind the cloud computing platforms
making cloud computing flexible, reliable, and usable. These technologies are listed below:

· Virtualization

· Service-Oriented Architecture (SOA)



· Grid Computing

· Utility Computing

6.6.1. Virtualization

Virtualization is a technique which allows a single physical instance of an application or resource to be shared among multiple organizations or tenants (customers). It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource when demanded. Figure 6.13 illustrates the virtualization model.

Figure 6.13: Virtualized cloud model
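A minimal Python sketch of this mapping idea follows; the host names and the provisioning logic are hypothetical and only illustrate how a logical name can be bound to a pooled physical resource on demand.

# Hypothetical pool of physical resources behind the virtualization layer.
physical_pool = {"host-01": {"cpus": 16, "free": True},
                 "host-02": {"cpus": 32, "free": True}}

logical_to_physical = {}   # mapping maintained by the virtualization layer

def provision(logical_name):
    """Bind a logical name to the first free physical resource and return it."""
    for host, info in physical_pool.items():
        if info["free"]:
            info["free"] = False
            logical_to_physical[logical_name] = host
            return host
    raise RuntimeError("no free physical resource")

print(provision("tenant-a-vm1"))   # e.g. host-01
print(logical_to_physical)         # {'tenant-a-vm1': 'host-01'}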

6.6.2. Service-Oriented Architecture(SOA)

Service-Oriented Architecture helps applications to be used as services by other applications regardless of the vendor, product or technology. Therefore, it is possible to exchange data between applications of different vendors without additional programming or making changes to services. The same is illustrated in Figure 6.14.

Figure 6.14: Cloud Computing Service-Oriented Architecture (SOA Model)

6.6.3. Grid Computing

Grid computing refers to distributed computing in which a group of computers from multiple locations are connected with each other to achieve a common objective. These computer resources are heterogeneous and geographically dispersed. Grid computing breaks a complex task into smaller pieces, which are distributed to CPUs that reside within the grid.

Figure 6.15: Grid Computing
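A minimal Python sketch of the divide-and-distribute idea follows; a process pool stands in for the CPUs spread across the grid, and the summing task is only an example workload.

from multiprocessing import Pool

def work_on_piece(chunk):
    """Process one piece of the overall task; here we simply sum the numbers."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Break the complex task into smaller pieces.
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool() as grid:                       # the pool stands in for grid CPUs
        partial_results = grid.map(work_on_piece, chunks)
    print(sum(partial_results))                # combine the partial results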



6.6.4. Utility Computing

Utility computing is based on the pay-per-use model. It offers computational resources on demand as a metered service. Cloud computing, grid computing and managed IT services are based on the concept of utility computing. The same is illustrated in Figure 6.16.

Figure 6.16: Utility Computing
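A minimal Python sketch of pay-per-use metering follows; the rates and usage figures are invented purely for illustration.

# Hypothetical metered rates and one month of usage.
RATES = {"cpu_hours": 0.05, "gb_storage_month": 0.02, "gb_transferred": 0.09}
usage = {"cpu_hours": 720, "gb_storage_month": 250, "gb_transferred": 40}

# The customer is billed only for what was actually consumed.
bill = sum(RATES[item] * amount for item, amount in usage.items())
print(f"Monthly charge: ${bill:.2f}")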

6.7. Cloud Computing-Architecture


The cloud computing architecture comprises many cloud components, each of which is loosely coupled. We can broadly divide the cloud architecture into two parts:

Front End - refers to the client part of cloud computing system. It consists of interfaces
and applications that are required to access the cloud computing platforms, e.g., Web Browser

Back End - refers to the cloud itself. It consists of all the resources required to provide
cloud computing services. It comprises of huge data storage, virtual machines, security
mechanism, services, deployment models, servers, etc.

Each of the ends is connected through a network, usually the Internet. The following diagram shows the graphical view of the cloud computing architecture. It is the responsibility of the back end to provide built-in security mechanisms, traffic control and protocols. The server employs certain protocols, known as middleware, which help the connected devices to communicate with each other.

The NIST Cloud Computing Reference Architecture consists of five major actors. Each
actor plays a role and performs a set of activities and functions. Among the five actors, cloud
brokers are optional, as cloud consumers may obtain service directly from a cloud provider.

Table. 6.3: Five major actors of cloud computing

Actor Definition

Cloud Consumer: A person or organization that maintains a business relationship with, and uses services from, Cloud Providers.

Cloud Provider: A person, organization or entity responsible for making a service available to cloud consumers.

Cloud Auditor: A party that can conduct independent assessment of cloud services, information system operations, performance and security of the cloud implementation.

Cloud Broker: An entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers.

Cloud Carrier: The intermediary that provides connectivity and transport of cloud services from cloud providers to cloud consumers.

6.7.1. Cloud consumers

The major activities of consumers depend on the type of consumer. The consumer types, their major activities and example users are tabulated in Table 6.4.

Table.6.4. Relationship between consumer type and major activities

Consumer Type Major Activities Example Users

SaaS: Uses applications/services for business operations. Example users: business users, software administrators.

PaaS: Develops, tests, deploys and manages applications hosted in a cloud environment. Example users: application developers, testers and administrators.

IaaS: Creates/installs, manages and monitors services for IT infrastructure. Example users: system developers, administrators, IT managers.

The resources used by cloud consumers are illustrated in Figure 6.17.

Figure.6.17: Resources used by consumers (Source – NIST)

6.7.2. Cloud Providers

Cloud Provider: Person, organization or entity responsible for making a service available
to Cloud Consumers.

• Cloud providers perform different tasks for different service models.

• The activities of cloud providers are discussed in greater detail from the perspectives
of Service Deployment, Service Orchestration, Cloud Service Management, Security
and Privacy.

Table 6.5: Cloud provider activities for each service model

Service Model Major Activities

SaaS: Installs, manages, maintains and supports software on a cloud infrastructure.

PaaS: Provisions and manages cloud infrastructure and middleware for the platform consumers; provides development, deployment and administration tools to platform consumers.

IaaS: Provisions and manages the physical processing, storage, networking and the hosting environment and cloud infrastructure for IaaS consumers.

Figure 6.18 illustrates the top-level view of the cloud provider. Its activity areas include Service Deployment, Service Orchestration, Cloud Service Management, Security and Privacy.

Figure 6.18: Top-level view of Cloud Service Provider (Source: NIST)


Figure 6.19 represents cloud service orchestration.

Figure 6.19: Cloud Provider – service orchestration (Source – NIST)

6.7.3. Cloud Auditor

A party that can conduct independent assessment of cloud services, information system
operations, performance and security of the cloud implementation.

A cloud auditor can evaluate the services provided by a cloud provider in terms of security controls, privacy impact, performance, etc. For security auditing, a cloud auditor can make an assessment of the security controls in the information system to determine the extent to which the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements for the system.

Auditing is especially important for federal agencies, and "agencies should include a contractual clause enabling third parties to assess security controls of cloud providers".

6.7.4. Cloud Broker

An entity that manages the use, performance and delivery of cloud services and negotiates
relationships between Cloud Providers and Cloud Consumers.

As cloud computing evolves, the integration of cloud services can be too complex for
cloud consumers to manage.

The major services provided by a cloud broker include:

6.7.4.1. Service Intermediation

A cloud broker enhances a given service by improving some specific capability and provides
the value-added service to cloud consumers.

6.7.4.2. Service Aggregation: A cloud broker combines and integrates multiple services
into one or more new services. The broker will provide data integration and ensure the secure
data movement between cloud consumer and multiple cloud providers.

6.7.4.3. Service Arbitrage: Service arbitrage is similar to service aggregation, with the
difference in that the services being aggregated aren’t fixed. Service arbitrage allows flexible
and opportunistic choices for the broker. For example, the cloud broker can use a credit scoring
service and select the best score from multiple scoring agencies.
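A minimal Python sketch of that arbitrage idea follows; the agencies and scores are hypothetical stand-ins for calls to external scoring services.

# Hypothetical scoring agencies; in practice these would be external service calls.
def agency_a(customer): return 712
def agency_b(customer): return 698
def agency_c(customer): return 725

AGENCIES = {"Agency A": agency_a, "Agency B": agency_b, "Agency C": agency_c}

def best_score(customer):
    """Broker-side arbitrage: query every agency and keep the best score."""
    scores = {name: fn(customer) for name, fn in AGENCIES.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

print(best_score("customer-42"))   # ('Agency C', 725)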

6.7.5 Cloud Carrier

The cloud carrier is the intermediary that provides connectivity and transport of cloud services between Cloud Providers and Cloud Consumers. It provides access to cloud consumers through network, telecommunication and other access devices; for example, network access devices include computers, laptops, mobile phones, mobile internet devices (MIDs), etc. Distribution can be provided by network and telecom carriers or by a transport agent, i.e., a business organization that provides physical transport of storage media such as high-capacity hard drives. A cloud provider shall set up SLAs with a cloud carrier to provide a consistent level of service; in general, the cloud carrier may be required to provide dedicated and encrypted connections. The combined conceptual reference diagram is illustrated in Figure 6.20.

Figure 6.20: Combined Conceptual Reference Diagram

6.8. Multi Tenancy Model


Cloud computing can be defined as a model where the software and hardware resources of a data centre are shared using virtualization technology, and which provides on-demand, instant and elastic services to its users, with resources offered in a lease style. Cloud computing is a ubiquitous model for providing acceptable, available network access to a shared pool of self-configurable computing resources that can be rapidly provisioned and released with very low administrative support or service provider interaction. In addition, the platform provides on-demand services that are always on, anywhere, anytime and at any place. The development of cyber societies and online transactions imposes continuously expanding IT budgets on organizations. To handle this, organizations are redesigning their procurement and management strategies for IT infrastructure.

Information security protects information and information systems from unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction. Based on a study by the Cloud Security Alliance (CSA), there are seven top threats that organizations will face in adopting cloud computing: abuse and nefarious use of cloud computing; insecure application programming interfaces (APIs); malicious insiders; shared technology vulnerabilities; data loss/leakage; account, service and traffic hijacking; and unknown risk profile. Multi-tenancy is recognized as one of the unique implications for security and privacy in cloud computing.

Multi-Tenancy is a major characteristic of Cloud Computing and a major dimension in the


Cloud security problem that needs a vertical solution from the Software-as-a-Service (SaaS)
down to Infrastructure-as-a-Service (IaaS). Multi-Tenancy is the characteristic feature of cloud
computing. The multi-tenancy characteristic of cloud computing allows multiple users to access
the same hardware and software resources simultaneously which are present in a remote
location but with customized needs using virtualization concept. After highlighting Multi-Tenancy
as a security concern in Cloud Computing, the need for a deep understanding of Multi-Tenancy
is required in order to deal with it effectively.

Figure 6.21: Multi-Tenancy

Multi-tenancy means sharing the application software between multiple users who have different needs. Allocating a single instance of application software, i.e., the cloud, to multiple users is called multi-tenancy. Each user is called a tenant. Users who need similar types of resources are allocated a single instance of the cloud, so that the cost is shared between the users, making access to that instance cost effective. Multi-tenancy allows users to easily access, maintain, configure and manipulate the data stored in a single database running on the same operating system. The data storage mechanism remains the same for all users who share the same hardware and software resources. In a multi-tenant architecture, users cannot share or see each other's data; this is how security and privacy are provided.

Multi-tenancy is the key technique for delivering any type of service, whether IaaS, SaaS or PaaS, in public and private clouds. When people discuss clouds they often speak about IaaS services. Both private and public cloud architectures go beyond features such as virtualization and the concept of IT-as-a-Service through payment or bill-back, in the case of private clouds, based on metered usage. An IaaS service has advanced features such as Service Level Agreements (SLAs), Identity and Access Management (IDAM), fault tolerance, disaster recovery, dynamic resource allocation and many other important properties. By injecting all these key services at the infrastructure level, the clouds become multi-tenant to a degree. In the case of IaaS, multi-tenancy goes beyond that layer to merge with the PaaS layer and finally the SaaS or application layer. The IaaS layer contains servers, storage and networking components; the PaaS layer consists of platforms for applications such as Java virtual machines, compilers and application servers; and the SaaS layer consists of application elements such as business logic, workflow, databases and user interfaces.

6.8.1. Types of Multi-tenancy

There are basically two types of multi-tenancy techniques:

6.8.1.1. Virtual Multi-Tenancy: In this approach, computing and storage resources are shared among multiple users. Multiple tenants are served from virtual machines that execute concurrently on top of the same computing and storage resources.

6.8.1.2. Organic Multi-Tenancy: In organic multi-tenancy every component, i.e., every hardware and software resource in the system architecture, is shared among multiple tenants. In the cloud, multi-tenancy concepts are implemented at three different levels of customer integration:

· Data centre layer

· Infrastructure layer

· Application layer

6.8.1.2.1. Data centre layer: This configuration provides the highest level of security if implemented correctly, with firewalls and access controls to meet business requirements as well as defined security access to the physical location of the infrastructure providing the SaaS. Mostly, data centre layer multi-tenancy acts like a service provider that rents cages to companies that host their hardware, network and software in the same building.

6.8.1.2.2. Infrastructure layer: In infrastructure layer multi-tenancy the software stacks


are provided. Each customer or tenant is provided with a dedicated software stack. This
configuration saves costs compared to data centre-layer multi-tenancy, because stacks are
deployed based on actual customer accounts. The high availability of hardware and software
resources can be seen in this layer. In this case, one can grow hardware requirements based
on actual service use.

6.8.1.2.3. Application layer: Application-layer multi-tenancy requires architectural


implementations at both the software layer and the infrastructure layer. Modifications are required
for the existing software architecture, including multi-tenant patterns in the application layer.
For example, multi-tenant applications require application methods and database tables to
access and store data from different user accounts, which compromises on security. If done
accurately, however, the benefit is cost savings.
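A minimal Python/SQLite sketch of that application-layer pattern follows; the table, tenants and data are hypothetical, but the point is that every table carries a tenant identifier and every query is scoped to it.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, number TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
               [("tenant-a", "INV-1", 100.0), ("tenant-b", "INV-2", 250.0)])

def invoices_for(tenant_id):
    """Always filter by tenant_id so other tenants' rows are never returned."""
    cur = db.execute("SELECT number, amount FROM invoices WHERE tenant_id = ?",
                     (tenant_id,))
    return cur.fetchall()

print(invoices_for("tenant-a"))   # [('INV-1', 100.0)] - tenant-b's data stays hidden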

6.8.2. Benefits of Multi-Tenancy:

The following are the benefits of multi-tenancy:

· Lower cost of ownership

· Worry free capacity

· API Integration scalability

· Access to the latest releases

· Configurable to their own needs

6.8.2.1. Lower cost of ownership

Because all users access their services from the same technology platform, it is much easier to deliver automatic and frequent updates. There is no longer a need to pay for report customizations or to add new functionalities.

6.8.2.2. Worry free capacity

Multi-tenancy provides companies of all sizes the ability to reside in the same infrastructure
and data centre.

6.8.2.3. API Integration scalability

Web API integration is available in single-instance deployments, but in a multi-tenant environment, specific requests for integrations go into the provider's product roadmap and, as they become available, they are rolled out to all customers.

6.8.2.4. Access to the latest releases

Previously, when a provider wanted to roll out a new update, it was a lengthy process: the change had to be coded separately for each client instance to ensure compatibility with its customizations, QA had to be performed, and only then could the change be put into production. With more than 100 customers, this was a time-consuming task for the support team. In a multi-tenant environment, because every customer's instance has the same base code, the roll-out of new releases is seamless and provides faster access to innovative features for managing IT and communication expenses.

6.8.2.5. Configurable to their own needs

This capability provides customers with the ability to configure the service to their own requirements and communication styles when managing IT and communication expenses.

6.8.3. Issues in Multi-Tenancy

The following are the issues in multi-tenancy

· Security

· Capacity

· Service delivery and high availability

· Flexibility

6.8.3.1. Security

There is also the threat of hackers: no encryption is safe against an attacker with the right knowledge. A hacker who breaks the encryption of a multi-tenant database will be able to steal the data of hundreds of businesses that have data stored on it.

6.8.3.2. Capacity optimization

Database administrators need the tools and the knowledge to understand which tenant
should be deployed on which network in order to maximise capacity and reduce costs.

6.8.3.3. Service delivery and high availability

When failures occur, or when certain services generate abnormal loads, service delivery can be interrupted, yet business clients will often request high availability. Therefore, monitoring the service delivery and its availability is critical to ensure that the service is properly delivered.

6.8.3.4. Flexibility

Using the multi-tenancy characteristics of cloud computing, customer data can be stored where regulations require; for example, French customer data on servers located inside France, German customer data inside Germany, etc.

6.9. Cloud Computing Challenges


Despite its growing influence, concerns regarding cloud computing still remain.

The benefits outweigh the drawbacks and the model is worth exploring. Some common
challenges are:

· Data Protection

· Data Recovery and Availability

· Management Capabilities

· Regulatory and Compliance Restrictions

6.9.1. Data Protection

Data Security is a crucial element that warrants scrutiny. Enterprises are reluctant to buy
an assurance of business data security from vendors. They fear losing data to competitors and compromising the data confidentiality of consumers. In many instances, the actual storage location is not
disclosed, adding onto the security concerns of enterprises. In the existing models, firewalls
across data centres (owned by enterprises) protect this sensitive information. In the cloud model,
Service providers are responsible for maintaining data security and enterprises would have to
rely on them.

6.9.2. Data Recovery and Availability

All business applications have Service Level Agreements [SLA] that are stringently followed.
Operational teams play a key role in management of service level agreements and runtime
governance of applications. In production environments, operational teams support:

· Appropriate clustering and Fail over

· Data Replication

· System monitoring (Transactions monitoring, logs monitoring and others)

· Maintenance (Runtime Governance)

· Disaster recovery

· Capacity and performance management

If any of the above mentioned services is under-served by a cloud provider, the damage
& impact could be severe.

6.9.3. Management Capabilities

Despite there being multiple cloud providers, the management of platform and infrastructure
is still in its infancy. Features like auto-scaling, for example, are a crucial requirement for many enterprises. There is huge potential to improve on the scalability and load balancing features
provided today.

6.9.4. Regulatory and Compliance Restrictions

In some of the European countries, Government regulations do not allow customer’s


personal information and other sensitive information to be physically located outside the state
or country. In order to meet such requirements, cloud providers need to setup a data center or
a storage site exclusively within the country to comply with regulations. Having such an
infrastructure may not always be feasible and is a big challenge for cloud providers. With cloud
computing, the action moves to the interface — that is, to the interface between service suppliers
and multiple groups of service consumers.

Cloud services will demand expertise in

· distributed services

· procurement

· risk assessment and

· service negotiation

areas that many enterprises are only modestly equipped to handle.



6.10. Cloud Security Reference Model


Security is a fundamental concern in clouds and several cloud vendors provide Security
Reference Architectures (SRAs) to describe the security level of their services. A SRA is an
abstract architecture without implementation details showing a conceptual model of security for
a cloud system.

Cloud computing systems involve a variety of devices connected to them and different deployment models, and they provide a variety of services, all of which create many concerns about how security can be enforced. Many cloud security issues also apply to any kind of distributed system that uses web applications; however, cloud architectures bring new attacks, and the result of a successful attack could be more catastrophic, because an attacker who compromises the infrastructure level may then compromise data from many users. Even at the higher levels there are serious threats: in the PaaS model, a user gets a development environment where he can develop any kind of application, even one containing malicious code. Once that application is deployed at the SaaS level, it can attack other users, or those services could be used illegally.

Reference architectures (RAs) are becoming useful tools to understand and build complex systems, and many cloud providers and software product vendors have developed versions of them. However, until now few security reference architectures have appeared, and almost all of them use rather imprecise and ad hoc models where implementation details are mixed with architectural aspects. A Security Reference Architecture (SRA) can instead be defined using UML models, giving a precise, semi-formal cloud computing architecture. An SRA is an RA where security mechanisms have been added in appropriate places to defend against identified threats, and thus provide some degree of security for the complete cloud environment. SRAs are useful for applying security to cloud systems, defining Service Level Agreements (SLAs), evaluating the security of a specific cloud system, and a variety of other purposes. The basic approach to building an SRA adds security to cloud systems by applying a systematic secure development methodology, which can be used as a guideline to build secure cloud systems and to evaluate their security levels. As a starting point, the threats to the cloud reference architecture are enumerated, and countermeasures in the form of security patterns are identified for them. A security pattern encapsulates a defence to a threat in a concise and reusable way; by checking whether threats can be stopped or mitigated in the SRA, its level of security is evaluated. A systematic enumeration of cloud threats also leads to a catalogue of cloud misuse patterns. A misuse pattern describes how an attack is performed to lead to a misuse; with a complete catalogue of them, the reference architecture can be used to find where the corresponding security controls should be applied to stop them. Such an approach should include:

· An approach to building a security reference architecture for clouds which is more precise and systematic than the SRAs in the current literature.

· A list of possible uses of SRAs, including their value for SLAs, for certification of
services, monitoring, testing, and others.

· A way to evaluate the security of cloud systems, which can have more general
application.

· A metamodel to relate together the concepts of SRAs.

6.10.1. Securing a Cloud Reference Architecture

A security reference architecture (SRA) is built using the steps described below:

· Vulnerabilities are sought and threats are enumerated systematically.

· Threats are expressed in the form of misuse patterns.

· Policies are applied to handle the threats.

· Security patterns are identified to realize the policies.

· Defences come from best practices.

6.10.2. Security Reference Architecture

The identified threats can be neutralized by applying appropriate security patterns: each identified
threat can be controlled by a corresponding security pattern. Once security patterns are identified,
we apply them to the reference architecture in order to stop or mitigate the threats. Security
mechanisms are added to the basic RA, including an Authenticator, an Authorizer, a Security
Logger/Auditor and others that mitigate specific threats. To avoid impostors, we can use the
Authenticator so that every interaction with the cloud is authenticated. The Security Logger/Auditor
is used to log all activities so that they can be audited at a later time. For authorization we use
Role-Based Access Control (RBAC), or a similar model, so that only authorized users can perform
actions on assets. To avoid storing infected virtual machine images (VMIs), they are scanned and
filtered before being stored in the VMI Repository.

In the resulting secure IaaS architecture pattern, the subsystem Authenticator is an instance of the
Authenticator pattern and enables the Cloud Controller to authenticate Cloud Consumers and
Administrators. Instances of the Security Logger/Auditor pattern are used to keep track of any
access to cloud resources such as VMs, VMMs, and VMIs. The Reference Monitor enforces the
authorization rights defined by the RBAC instances, and the Filter scans created virtual machines
in order to remove malicious code. At the SaaS level, the responsibility for security is in the
hands of the corresponding Service Provider (SP); in the case of a travel service, for example, it is
necessary to provide authentication, authorization, encryption, and similar protections to clients.
These security services must be supported at the IaaS level, including security administration. The
same situation occurs at the PaaS level, where the corresponding SP must provide control of the
components at this level.
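
The way these patterns fit together can be illustrated with a small sketch. The following Python fragment is only a schematic illustration of the Authenticator, RBAC-based Reference Monitor and Security Logger/Auditor patterns acting in front of a cloud resource; all names, roles and credentials in it are hypothetical and it is not tied to any particular cloud product.

# Minimal sketch (illustrative only): composing the Authenticator, RBAC Authorizer
# and Security Logger/Auditor patterns in front of a cloud resource. All class,
# user and action names here are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("security-audit")

ROLE_PERMISSIONS = {            # RBAC policy: role -> allowed actions
    "consumer": {"start_vm", "stop_vm"},
    "administrator": {"start_vm", "stop_vm", "delete_vmi", "read_audit_log"},
}

USERS = {"alice": {"password": "s3cret", "role": "administrator"}}  # demo data only


def authenticate(user, password):
    """Authenticator pattern: every request must prove the caller's identity."""
    record = USERS.get(user)
    return record is not None and record["password"] == password


def authorize(user, action):
    """Reference Monitor / RBAC pattern: check the caller's role against policy."""
    role = USERS[user]["role"]
    return action in ROLE_PERMISSIONS.get(role, set())


def perform(user, password, action):
    """Security Logger/Auditor pattern: record every decision for later audit."""
    if not authenticate(user, password):
        audit_log.warning("DENIED (authentication) user=%s action=%s", user, action)
        return False
    if not authorize(user, action):
        audit_log.warning("DENIED (authorization) user=%s action=%s", user, action)
        return False
    audit_log.info("ALLOWED user=%s action=%s", user, action)
    return True                      # the real cloud controller would act here


if __name__ == "__main__":
    perform("alice", "s3cret", "delete_vmi")   # allowed and logged
    perform("alice", "s3cret", "unknown_op")   # denied and logged

In a real IaaS deployment these checks would be enforced by the Cloud Controller and its supporting subsystems rather than by application code, but the order of the decisions (authenticate, authorize, log) is the essence of the secure reference architecture.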

6.11. Cloud Identity and Access Management


6.11.1. Identity provisioning

Users in a Cloud Computing environment have to complete the user authentication process
required by the service provider whenever they use a new Cloud service. Generally, a user registers
by supplying personal information, and after registration the service provider issues the user an ID
(identification) and an authentication method. The user then uses this ID and authentication method
whenever accessing a Cloud Computing service. Unfortunately, the authentication method can be
attacked during the authentication process, and a successful attack can cause severe damage. Hence,
user authentication for Cloud Computing requires not only security but also interoperability.

6.11.2. User Authentication using Provisioning

A user authentication platform that uses provisioning first authenticates the credentials the user
supplies, such as an ID/password, PKI certificate or SSO token. Second, it authenticates the user
through the Authentication Manager based on the user's profile and usage patterns, and records
state changes via the Monitor. To reduce the inconvenience of repeated authentication when using
Cloud Computing services, the user's details are stored in the User Information store.

6.11.3. Authentication and Access Control for AWS KMS

Access to AWS KMS requires credentials that AWS can use to authenticate the requests.
The credentials must have permissions to access AWS resources, such as AWS KMS customer
master keys (CMKs).

The following sections provide details about how one can use AWS Identity and Access
Management (IAM) and AWS KMS to help secure the resources by controlling who can access
them.

· Authentication

· Access Control

6.11.3.1. Authentication

One can access AWS as any of the following types of identities:

AWS account root user – When signing up for AWS, one provides an email address and
password for the AWS account. These are the root credentials, and they provide complete access
to all of the AWS resources. For security reasons, it is recommended that the root credentials
are used only to create an administrator user, which is an IAM user with full permissions to the
AWS account. Then, one can use this administrator user to create other IAM users and roles
with limited permissions.

IAM user – An IAM user is an identity within the AWS account that has specific permissions
(for example, to use a KMS CMK). One can use an IAM user name and password to sign in to
secure AWS webpages like the AWS Management Console, AWS Discussion Forums, or
the AWS Support Center.

In addition to a user name and password, one can also create access keys for each user
to enable the user to access AWS services programmatically, through one of the AWS SDKs or
the command line tools. The SDKs and command line tools use the access keys to
cryptographically sign API requests. If the AWS tools are not used, one must sign API requests
oneself.

IAM role – An IAM role is another IAM identity one can create in their account that has
specific permissions. It is similar to an IAM user, but it is not associated with a specific person.

An IAM role enables one to obtain temporary access keys to access AWS services and resources
programmatically.

IAM roles are useful in the following situations:

Federated user access – Instead of creating an IAM user, one can use pre-existing user
identities from AWS Directory Service, the enterprise user directory, or a web identity provider.
These are known as federated users. Federated users use IAM roles through an identity provider.

Cross-account access – One can use an IAM role in their AWS account to allow another
AWS account permissions to access their account’s resources.

AWS service access – One can use an IAM role in their account to allow an AWS service
permissions to access their account’s resources. For example, one can create a role that allows
Amazon Redshift to access an S3 bucket on one's behalf and then load data stored in the S3
bucket into an Amazon Redshift cluster.

Applications running on EC2 instances – Instead of storing access keys on an EC2


instance for use by applications that run on the instance and make AWS API requests, one can
use an IAM role to provide temporary access keys for these applications. To assign an IAM role
to an EC2 instance, create an instance profile and then attach it when one launches the instance.
An instance profile contains the role and enables applications running on the EC2 instance to
get temporary access keys.
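
The mechanism behind temporary access keys can be sketched with the AWS SDK for Python (boto3). The example below assumes boto3 is installed and that base credentials with permission to assume the role are already configured; the role ARN is a placeholder, and on an EC2 instance with an instance profile the SDK obtains such temporary credentials automatically without this explicit call.

# Minimal sketch (assumes boto3 is installed and base credentials are configured).
# The role ARN below is a placeholder, not a real resource.
import boto3

sts = boto3.client("sts")

# Ask STS for temporary credentials by assuming an IAM role.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/example-app-role",  # placeholder ARN
    RoleSessionName="example-session",
)
creds = response["Credentials"]  # contains AccessKeyId, SecretAccessKey, SessionToken

# Build a session that uses the temporary keys instead of long-lived ones.
session = boto3.session.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3 = session.client("s3")
print(s3.list_buckets()["Buckets"])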

6.11.3.2. Access Control

One can have valid credentials to authenticate the requests, but one also needs permissions
to make AWS KMS API requests to create, manage, or use AWS KMS resources. For example,
one must have permissions to create a KMS CMK, to manage the CMK, to use the CMK for
cryptographic operations (such as encryption and decryption), and so on.
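
As a hedged illustration of such permissions in use, the following boto3 sketch encrypts and decrypts a small secret under a CMK; it assumes the caller's IAM identity has kms:Encrypt and kms:Decrypt permissions on the key, and the key alias shown is a placeholder.

# Minimal sketch (assumes boto3 and KMS permissions on the caller's identity).
# The key alias is a placeholder; kms:Encrypt and kms:Decrypt permissions are needed.
import boto3

kms = boto3.client("kms")
key_id = "alias/example-app-key"  # placeholder CMK alias

# Encrypt a small secret under the CMK; IAM policy controls who may do this.
encrypted = kms.encrypt(KeyId=key_id, Plaintext=b"database-password")
ciphertext = encrypted["CiphertextBlob"]

# Decrypt requires kms:Decrypt on the same CMK; the key itself never leaves KMS.
decrypted = kms.decrypt(CiphertextBlob=ciphertext)
print(decrypted["Plaintext"])

If the IAM policy attached to the caller does not grant these actions on the CMK, both calls fail with an access-denied error, which is exactly the access-control behaviour described above.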

6.11.4. Infrastructure and Virtualization Security

Virtualization security is the collective measures, procedures and processes that ensure
the protection of a virtualization infrastructure / environment.

It addresses the security issues faced by the components of a virtualization environment and the
methods through which those issues can be mitigated or prevented.

Virtualization security is a broad concept that includes a number of different methods to


evaluate, implement, monitor and manage security within a virtualization infrastructure /
environment.

Typically, virtualization security may include processes such as:

· Implementation of security controls and procedures granularly at each virtual


machine.

· Securing virtual machines, virtual networks and other virtual appliances against attacks
and vulnerabilities surfacing from the underlying physical device.

· Ensuring control and authority over each virtual machine.

· Creation and implementation of security policies across the infrastructure/environment.

Hypervisor Architecture Concerns

Many IT professionals worry about virtual environment security, concerned that malicious
code and malware may spread between workloads. Virtualization abstracts applications from
the physical server hardware running underneath, which allows the servers to run multiple
workloads simultaneously and share some system resources. Though the security threats are
very real, modern feature sets now offer better protection, and the type of hypervisor one chooses
to deploy can also make a big difference. Admins should understand hypervisor vulnerabilities
and the current concepts used to maintain security on virtual servers, as well as ways to minimize
the hypervisor’s system footprint and thus the potential attack surface.

Planning security based on the type of hypervisor

Given that Type 1 and Type 2 hypervisors deploy in the environment differently and interact
differently with their infrastructure components, it follows that one would also secure each
hypervisor using different techniques. Moreover, it’s often easier to code Type 1, or bare-metal,
hypervisors, and they also provide better native VM security than Type 2 hypervisors, which
must share data between the host and guest OSes.

Staying secure with thin hypervisors

Thin hypervisors are stripped-down, OS-independent hypervisors. With minimal software


and computing overhead, they limit the number of ways malicious code can intrude. Deployment
is also simpler with thin hypervisors, and one won’t need to patch them as often as bare-metal

versions. Just be sure any software installed includes digital signatures to ensure malware
doesn’t make its way into the system.

Getting to know the latest hypervisor security features

Firewall and Active Directory integration, auditing and software acceptance features are
just some of the ways today’s hypervisors offer enhanced security. But these features will only
benefit the infrastructure when deployed correctly. Installing only essential system roles, for
example, will minimize the OS footprint and attack surface. In addition, strong logon credentials
will help ensure that admin and management tools remain secure. Isolating management traffic
also minimizes the potential for hackers to access important data.

6.12 Securing the Cloud


The Internet was designed primarily to be resilient; it was not designed to be secure. Any
distributed application has a much greater attack surface than an application that is closely held
on a Local Area Network. Cloud computing has all the vulnerabilities associated with Internet
applications, and additional vulnerabilities arise from pooled, virtualized, and outsourced
resources. In the report "Assessing the Security Risks of Cloud Computing," Jay Heiser and Mark
Nicolett of the Gartner Group highlighted the following areas of cloud computing that they felt
were uniquely troublesome:

• Auditing

• Data integrity

• e-Discovery for legal compliance

• Privacy

• Recovery

• Regulatory compliance

The risks in any cloud deployment depend upon the particular cloud service model chosen
and the type of cloud on which one deploys the applications. In order to evaluate the risks, one
needs to perform the following analysis:

· Determine which resources (data, services, or applications) one is planning to move
to the cloud.

· Determine the sensitivity of the resource to risk. Risks that need to be evaluated
are loss of privacy, unauthorized access by others, loss of data, and interruptions in
availability.

· Determine the risk associated with the particular cloud type for a resource. Cloud
types include public, private (both external and internal), hybrid, and shared
community types. With each type, one needs to consider where data and functionality
will be maintained.

· Take into account the particular cloud service model that one will be using. Different
models such as IaaS, SaaS, and PaaS require their customers to be responsible for
security at different levels of the service stack.

· If one has selected a particular cloud service provider, one needs to evaluate its
system to understand how data is transferred, where it is stored, and how to move
data both in and out of the cloud.

· One may want to consider building a flowchart that shows the overall mechanism of
the system one is intending to use or is currently using.

Many vendors maintain a security page where they list their various resources, certifications,
and credentials. One of the more developed offerings is the AWS Security Center, where one
can download backgrounders, white papers, and case studies related to Amazon Web Services'
security controls and mechanisms.

6.12.1. The security boundary

In order to discuss security in cloud computing concisely, one needs to define the particular
model of cloud computing that applies. This nomenclature provides a framework for understanding
what security is already built into the system, who has responsibility for a particular security
mechanism, and where the boundary lies between the responsibility of the service provider and the
responsibility of the customer. Deployment models are cloud types: community, hybrid, private, and
public clouds. Service models follow the SPI Model for three forms of service delivery: Software,
Platform, and Infrastructure as a Service. In the NIST model, as one may recall, a cloud was not
required to use virtualization to pool resources, nor was it required to support multi-tenancy.

It is just these factors that make security such a complicated proposition in cloud computing.
The Cloud Security Alliance (CSA) is an industry working group that studies security issues in
cloud computing and offers recommendations to its members.

The CSA partitions its guidance into a set of operational domains:

• Governance and enterprise risk management

• Legal and electronic discovery

• Compliance and audit

• Information lifecycle management

• Portability and interoperability

• Traditional security, business continuity, and disaster recovery

• Datacenter operations

• Incident response, notification, and remediation

• Application security

• Encryption and key management

• Identity and access management

• Virtualization

6.12.2. Security services boundary

Security boundaries are usually defined by a set of systems that are under a single
administrative control. These boundaries occur at various levels, and vulnerabilities can become
apparent as data "crosses" each one. Boundaries can be considered from smaller to larger scales,
each with its own vulnerabilities and potential solutions. Security is checked only at application
boundaries. That is, for two components in the same application, when one component calls the
other, no security check is done. However, if two applications share the same process and a
component in one calls a component in the other, a security check is done because an application
boundary is crossed. Likewise, if two applications reside in different server processes and a
component in the first application calls a component in the second application, a security check
is done.

Therefore, if one has two components and one wants security checks to be done when one calls
the other, one needs to put the components in separate COM+ applications. Because COM+ library
applications are hosted by other processes, there is a security boundary between the library
application and the hosting process. Additionally, the library application doesn't control
process-level security, which affects how one needs to configure security for it. Determining
whether a security check must be carried out on a call into a component is based on the security
property on the object context created when the configured component is instantiated.

6.12.2.1 Component-Level Access Checks

For a COM+ server application, one has the choice of enforcing access checks either at the
component level or at the process level. When one selects component-level access checking, one
enables fine-grained role assignments. One can assign roles to components, interfaces, and methods
and achieve an articulated authorization policy. This will be the standard configuration for
applications using role-based security. For COM+ library applications, one must select
component-level security if one wants to use roles; library applications cannot use process-level
security.

One should select component-level access checking if one is using programmatic role-based
security; security call context information is available only when component-level security is
enabled. Additionally, when one selects component-level access checking, the security property
will be included on the object context. This means that security configuration can play a role in
how the object is activated.

6.12.2.2 Process-Level Access Checks

Process-level checks apply only to the application boundary. That is, the roles that one has
defined for the whole COM+ application will determine who is granted access to any resource
within the application. No finer-grained role assignments apply. Essentially, the roles are used to
create a security descriptor against which any call into the application's components is validated.
In this case, one would not want to construct a detailed authorization policy with multiple roles;
the application will use a single security descriptor. For COM+ library applications, one would
not select process-level access checks. The library application will run hosted in the client's
process and hence will not control process-level security. With process-level access checks
enabled, security call context information is not available. This means that one cannot do
programmatic security when using only process-level security. Additionally, the security property
will not be included on the object context. This means that when using only process-level access
checks, security configuration will never play a role in how the object is activated.

6.12.2.3 Security mapping

Increasingly, security management organizations are coming to rely on a unique type of
geography to recognize where threats and vulnerabilities are active, and where security exploits
are occurring. The geography in question maps fairly closely to the physical map of the world.

Because Internet links that connect sites and users to service providers are involved, along with
prevailing local Internet topologies between the edges of that global network and local elements of
its core, this geography tends to be more compressed and to be subject to strange or interesting
hops between locations. Of course, this reflects the peering partners at various points of presence
for SONET and other high-speed infrastructures, and doesn't always reflect the same kind of
geographical proximity one might see on a country or continental map. Nevertheless, keeping track
of where threats and vulnerabilities are occurring is incredibly useful. By following lines of
"Internet topography," spikes in detection (which indicate upward trends in proliferation, or
frequency of attack) are useful in prioritizing threats based on location. For one thing, networks
that are geographically nearby in the Internet topography are more likely to get exposed to such
threats, so it makes sense to use this kind of proximity to escalate risk assessments of exposure.
For another thing, traffic patterns for attacks and threats tend to follow other typical traffic
patterns, so increasing threat or vulnerability profiles can also help to drive all kinds of
predictive analytics as well.

It’s always interesting to look at real-time threat maps or network “weather reports” from
various sources to see where issues may be cropping up and how fast they’re spreading. Akamai’s
Real-Time Web Monitor provides an excellent and visually interesting portrayal of this kind of
monitoring and analysis at work; such a map can show, for example, a handful of US states where
attacks have been detected in the last 24 hours. In general, threat, vulnerability and attack mapping
work well because such data makes for intelligible and compelling visual displays. Human viewers
are familiar with maps, and quickly learn how to develop an intuitive sense for threat priority or
urgency based on proximity and the nature of the threats involved. That’s why so many security
service providers use maps to help inform security administrators about safety and security in
their neighbourhoods, and around the planet.

6.12.2.4 Data security

Data is one of the most valuable assets a business has at its disposal, covering anything
from financial transactions to important customer and prospect details. Using data effectively
can positively impact everything from decision-making to marketing and sales effectiveness. That
makes it vital for businesses to take data security seriously and ensure the necessary precautions
are in place to protect this important asset. Data security means protecting digital data, such as
the data in a database, from destructive forces and from the unwanted actions of unauthorized
users, such as a cyberattack or a data breach. Data security is a huge topic with many aspects to
consider, and it can be confusing to know where to start. With this in mind, here are six vital
processes organisations should implement to keep their data safe and sound.

· Know exactly what you have and where you keep it: Understanding what data
the organisation has, where it is and who is responsible for it is fundamental to
building a good data security strategy. Constructing and maintaining a data asset
log will ensure that any preventative measures introduced will refer to and include
all the relevant data assets.

· Train the troops: Data privacy and security are a key part of the new General Data
Protection Regulation (GDPR), so it is crucial to ensure the staff are aware of their
importance. The most common and destructive mistakes are due to human error.
For example, the loss or theft of a USB stick or laptop containing personal information
about the business could seriously damage the organisation’s reputation, as well as
lead to severe financial penalties. It is vital that organisations consider an engaging
staff training programme to ensure all employees are aware of the valuable asset
they are dealing with and the need to manage it securely.

· Maintain a list of employees with access to sensitive data – then minimise it: Sadly, the
most likely cause of a data breach is the staff. Maintaining controls over who can
access data and what data they can obtain is extremely important. Minimise their
access privileges to just the data they need.

· Additionally, data watermarking will help prevent malicious data theft by staff and
ensure one can identify the source in the event of a data breach. It works by allowing
one to add unique tracking records (known as “seeds”) to the database and then
monitor how the data is being used – even when it has moved outside the
organisation’s direct control. The service works for email, physical mail, landline
and mobile telephone calls and is designed to build a detailed picture of the real use
of the data (a minimal seeding sketch appears after this list).

· Carry out a data risk assessment: One should undertake regular risk assessments
to identify any potential dangers to the organisation’s data. This should review all
the threats one can identify – everything from an online data breach to more physical
threats such as power cuts. This will let one identify any weak points in the
organisation’s current data security system, and from there one can formulate a
plan of how to remedy them and prioritise actions to reduce the risk of an expensive
data breach.

· Install trustworthy virus/malware protection software and run regular scans: One
of the most important measures for safeguarding data is also one of the most
straightforward. Using active prevention and regular scans, one can minimise the
threat of data leakage through hackers or malicious malware, and help ensure the
data does not fall into the wrong hands. There is no single piece of software that is
absolutely flawless in keeping out cyber criminals, but good security software will
go a long way towards keeping the data secure.

· Run regular backups of your important and sensitive data: Backing up regularly is
often overlooked, but continuity of access is an important dimension of security. It is
important to take backups at a frequency the organisation can accept. Consider
how much time and effort might be required to reconstitute the data, and ensure
one manages a backup strategy that makes this affordable. Add in any business
interruption that may be incurred, and these potential costs can begin to rise quickly.
Remember that the security of the backups themselves has to be at least as strong
as the security of the live systems.

· With the GDPR in force in the UK despite the results of the Brexit referendum, it
is vital for companies to re-evaluate their systems now. Businesses need to plan how
to minimise the risks, keep data secure and put the necessary processes in place
should they need to deal with any of these data security threats.
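
The seeding idea behind data watermarking can be sketched in a few lines. The example below is purely illustrative: it plants a synthetic, uniquely tagged record in a customer table using Python's built-in sqlite3 module; the table layout, the monitored e-mail domain and the tag format are all hypothetical.

# Minimal sketch (illustrative only) of data "seeding" for watermarking: a
# synthetic, uniquely identifiable record is planted in a customer table so
# that any leaked copy of the data can be traced.
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, email TEXT, seed_tag TEXT)")
conn.execute("INSERT INTO customers VALUES ('Real Customer', 'customer@example.com', NULL)")

# Plant a seed record whose contact details are controlled by the organisation;
# any mail arriving at this address signals that the data set has leaked.
seed_id = uuid.uuid4().hex[:8]
conn.execute(
    "INSERT INTO customers VALUES (?, ?, ?)",
    (f"Seed Contact {seed_id}", f"seed-{seed_id}@monitored.example", seed_id),
)
conn.commit()

# Later, a leaked extract can be checked for the planted seed tags.
rows = conn.execute("SELECT name, email, seed_tag FROM customers").fetchall()
print([row for row in rows if row[2] is not None])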

6.12.2.5 Brokered cloud storage access

The problem with data that is stored in the cloud is that it can be located anywhere in
the cloud service provider’s system: in another datacenter, another state or province, and in
many cases even in another country. With other types of system architectures, such as
client/server, one could count on a firewall to serve as the network’s security perimeter;
cloud computing has no physical system that serves this purpose. Therefore, to protect the
cloud storage assets, one wants to find a way to isolate data from direct client access.

One approach to isolating storage in the cloud from direct client access is to create layered
access to the data. In one scheme, two services are created: a broker with full access to
storage but no access to the client, and a proxy with no access to storage but access to both
the client and the broker. The location of the proxy and the broker is not important (they can
be local or in the cloud); what is important is that these two services are in the direct data
path between the client and the data stored in the cloud. Under this system, when a client
makes a request for data, here’s what happens:

· The request goes to the external service interface (or endpoint) of the proxy, which
has only partial trust.

· The proxy, using its internal interface, forwards the request to the broker.

· The broker requests the data from the cloud storage system.

· The storage system returns the results to the broker.

· The broker returns the results to the proxy.

· The proxy completes the response by sending the data requested to the client.

This design relies on the proxy service to impose rules that allow it to safely request data
that is appropriate to that particular client, based on the client’s identity, and relay that
request to the broker. The broker does not need full access to the cloud storage, but it may
be configured to grant READ and QUERY operations while not allowing APPEND or
DELETE. The proxy has a limited trust role, while the broker can run with higher privileges
or even as native code. The use of multiple encryption keys can further separate the proxy
service from the storage account. If one uses two separate keys to create two different data
zones—one for the untrusted communication between the proxy and the broker, and another
for the trusted zone between the broker and the cloud storage—one creates further separation
between the different service roles. In the multi-key solution, one has not only eliminated all
internal service endpoints, but has also eliminated the need to have the proxy service run at
a reduced trust level.
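
A minimal sketch of this layering is shown below. It uses plain Python classes instead of real services and storage, so all names are hypothetical; the point is only that the proxy applies per-client rules while the broker refuses anything other than READ and QUERY.

# Minimal sketch (illustrative only) of the proxy/broker layering described above.
# There is no real network or cloud storage here; the classes simply model which
# operations each layer is willing to pass along.

CLOUD_STORAGE = {"alice/report.txt": "quarterly numbers"}   # stands in for the store

class Broker:
    """Full access to storage, but only READ/QUERY operations are enabled."""
    ALLOWED_OPS = {"READ", "QUERY"}

    def handle(self, op, key):
        if op not in self.ALLOWED_OPS:
            raise PermissionError(f"broker refuses operation {op}")
        if op == "QUERY":
            return [k for k in CLOUD_STORAGE if k.startswith(key)]
        return CLOUD_STORAGE[key]

class Proxy:
    """Client-facing endpoint with partial trust; never touches storage directly."""
    def __init__(self, broker):
        self.broker = broker

    def request(self, client_id, op, key):
        # Per-client rule: a client may only see objects under its own prefix.
        if not key.startswith(f"{client_id}/"):
            raise PermissionError("request outside the client's data zone")
        return self.broker.handle(op, key)

proxy = Proxy(Broker())
print(proxy.request("alice", "READ", "alice/report.txt"))    # allowed
try:
    proxy.request("alice", "DELETE", "alice/report.txt")     # blocked by the broker
except PermissionError as err:
    print(err)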

6.12.2.6 Storage location and tenancy

Some cloud service providers negotiate, as part of their Service Level Agreements, to
contractually store and process data in locations that are predetermined by their contract; not
all do. If one can get a commitment for a specific data storage site, then one should also make
sure the cloud vendor is under contract to conform to local privacy laws. Because data stored in
the cloud usually comes from multiple tenants, each vendor has its own unique method for
segregating one customer’s data from another. It’s important to have some understanding of
how the specific service provider maintains data segregation.

Another question to ask a cloud storage provider is who is granted privileged access to
storage. The more one knows about how the vendor hires its IT staff and the security mechanisms
put in place to protect storage, the better. Most cloud service providers store data in an encrypted
form. While encryption is important and effective, it presents its own set of problems: when
there is a problem with encrypted data, the result is that the data may not be recoverable. It is
worth considering what type of encryption the cloud provider uses and checking that the system
has been planned and tested by security experts.

6.13 Encryption
Strong encryption technology is a core technology for protecting data in transit to and from
the cloud as well as data stored in the cloud. It is, or will be, required by law. The goal of encrypted
cloud storage is to create a virtual private storage system that maintains confidentiality and data
integrity while retaining the benefits of cloud storage: ubiquitous, reliable, shared data storage.
Encryption should separate stored data (data at rest) from data in transit. Depending upon the
particular cloud provider, one can create multiple accounts with different keys, as seen in the
example with the Windows Azure Platform in the previous section. Microsoft allows up to five
security accounts per client, and one can use these different accounts to create different zones. On
Amazon Web Services, one can create multiple keys and rotate those keys during different
sessions. Although encryption protects the data from unauthorized access, it does nothing to
prevent data loss. Indeed, a common means of losing encrypted data is to lose the keys that provide
access to the data. Therefore, one needs to approach key management seriously. Keys should have
a defined lifecycle. Among the schemes used to protect keys are the creation of secure key stores
that have restricted role-based access, automated key store backup, and recovery techniques. It’s a
good idea to separate key management from the cloud provider that hosts the data. One standard
for interoperable cloud-based key management is the OASIS Key Management Interoperability
Protocol (KMIP); IEEE 1619.3 also covers both storage encryption and key management for
shared storage.
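
Key separation and rotation can be sketched with the third-party Python cryptography package, which is assumed to be installed; the data and keys below are illustrative only, and in practice the keys would live in a dedicated key store with restricted, role-based access rather than in the application itself.

# Minimal sketch (assumes the third-party "cryptography" package) showing
# encryption of data at rest and a simple key-rotation step.
from cryptography.fernet import Fernet, MultiFernet

# Keys would normally live in a key store with restricted, role-based access,
# separate from the storage provider; here they are just local variables.
old_key = Fernet(Fernet.generate_key())
ciphertext = old_key.encrypt(b"customer record")          # data at rest, encrypted

# Key rotation: introduce a new key while still being able to read old data.
new_key = Fernet(Fernet.generate_key())
keyring = MultiFernet([new_key, old_key])                  # first key is used to encrypt

rotated = keyring.rotate(ciphertext)                       # re-encrypt under the new key
print(keyring.decrypt(rotated))                            # b'customer record'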

6.14 Auditing and Compliance


Logging is the recording of events into a repository; auditing is the ability to monitor those
events to understand performance. Logging and auditing are important functions because they are
necessary not only for evaluating performance, but also for investigating security incidents and
cases where illegal activity has been perpetrated. At the very minimum, logs should record system,
application, and security events.
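
A minimal sketch of such logging, using only Python's standard library, is shown below; the event fields and file name are illustrative, not a prescribed format, and real deployments would normally ship these records to a central, tamper-resistant repository.

# Minimal sketch (illustrative only) of writing system, application and security
# events to a single structured audit repository using only the standard library.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.addHandler(logging.FileHandler("audit.log"))
audit.setLevel(logging.INFO)

def audit_event(category, action, user, outcome):
    """Record one event as a JSON line so later analysis tools can parse it."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "category": category,          # "system", "application" or "security"
        "action": action,
        "user": user,
        "outcome": outcome,
    }))

audit_event("security", "login", "alice", "success")
audit_event("application", "export_report", "alice", "denied")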

Logging and auditing are unfortunately among the weaker aspects of early cloud computing
service offerings. Cloud service providers often have proprietary log formats that one needs to
be aware of, and whatever monitoring and analysis tools one uses need to recognize these logs
and be able to work with them. Often, providers offer monitoring tools of their own, many in the
form of a dashboard with the potential to customize the information one sees, through either the
interface or programmatically using the vendor’s API. One wants to make full use of those built-in
services. Because cloud services are both multi-tenant and multi-site operations, the logging
activity and data for different clients may not only be co-located, they may also be moving
across a landscape of different hosts and sites. One can’t simply expect that an investigation
will be provided with the necessary information at the time of discovery unless it is part of the
Service Level Agreement. Even an SLA with the appropriate obligations contained in it may not
be enough to guarantee one will get the information one needs when the time comes. It is wise
to determine whether the cloud service provider has been able to successfully support
investigations in the past.

Therefore, one must understand the following:

· Which regulations apply to the use of a particular cloud computing service.

· Which regulations apply to the cloud service provider, and where the demarcation
line falls for responsibilities.

· How the cloud service provider will support the need for information associated with
regulation.

· How to work with the regulator to provide the necessary information, regardless of
who had the responsibility to collect the data.

Traditional service providers are much more likely than cloud service providers to be the
subject of security certifications and external audits of their facilities and procedures. That makes
the willingness of a cloud service provider to subject its service to regulatory compliance scrutiny
an important factor in selecting that provider over another. In the case of a cloud service
provider who shows reluctance to allow, or limits, scrutiny of its operations, it is probably wise to
use the service in ways that limit the exposure to risk. For example, although encrypting stored
data is always a good policy, one might also want to consider not storing any sensitive information
on that provider’s system.

As it stands now, clients must guarantee their own regulatory compliance, even when their
data is in the care of the service provider. One must ensure that the data is secure and that its
integrity has not been compromised. When multiple regulatory entities are involved, as there
surely are between site locations and different countries, then the burden of satisfying the laws of
those governments is also the client’s responsibility.

For any company with clients in multiple countries, the burden of regulatory compliance is
onerous. While organizations such as the EEC (European Economic Community) or Common
Market provide some relief for European regulation, countries such as the United States, Japan,
China, and others each have their own sets of requirements. This makes regulatory compliance
one of the most actively developing and important areas of cloud computing technology. This
situation is likely to change. On March 1, 2010, a Massachusetts law took effect that requires
companies handling sensitive personal information on Massachusetts residents to encrypt that
data when transmitted and stored on their systems. Businesses are required to limit the amount of
personal data collected, monitor data usage, keep a data inventory, and be able to present a
security plan on how they will keep the data safe. The steps require that companies verify that
any third-party services they use conform to these requirements and that there be language in all
SLAs enforcing these protections.

6.15 Establishing Identity and Presence


Identities are tied to the concept of accounts and can be used for contacts or “ID cards.”
Identities are also important from a security standpoint because they can be used to authenticate
client requests for services in a distributed network system such as the Internet or, in this case,
for cloud computing services. Identity management is a primary mechanism for controlling
access to data in the cloud, preventing unauthorized use, maintaining user roles, and complying
with regulations.

The sections that follow describe some of the different security aspects of identity and the
related concept of “presence.” For this conversation, one can consider presence to be the
mapping of an authenticated identity to a known location. Presence is important in cloud computing
because it adds context that can modify services and service delivery.

Cloud computing requires the following:

• That one establishes an identity

• That the identity be authenticated

• That the authentication be portable

• That authentication provide access to cloud resources

When applied to a number of users in a cloud computing system, these requirements


describe systems that must provision identities, provide mechanisms that manage credentials
and authentication, allow identities to be federated, and support a variety of user profiles and
access policies. Automating these processes can be a major management task, just as they
are for on-premises operations.

6.16 Identity Protocol Standards


The protocols that provide identity services have been, and still are, under active development,
and several form the basis for efforts to create interoperability among services. OpenID 2.0
(http://openid.net/) is the standard associated with creating an identity and having a third-party
service authenticate the use of that digital identity. It is the key to creating Single Sign-On (SSO)
systems. Some cloud service providers have adopted OpenID as a service, and its use is growing.
OpenID doesn’t specify the means for authentication of an identity, and it is up to the particular
system how the authentication process is executed. Authentication can be by a challenge-and-response
protocol such as CHAP, through a physical smart card, or through a biometric measurement such as
a fingerprint or iris scan. In OpenID, the authentication procedure has the following steps:

· The end-user uses a program such as a browser (called a user agent) to enter an
OpenID identifier, which is in the form of a URL or XRI. An OpenID might take the
form of name.openid.provider.org.

· The OpenID is presented to a service that provides access to the resource that is
desired.

· An entity called a relying party queries the OpenID identity provider to authenticate
the veracity of the OpenID credentials.

· The authentication result is sent back to the relying party from the identity provider,
and access is either provided or denied.
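
The redirect that carries the authentication request can be sketched as follows. This is a simplified illustration only: it omits the discovery, association and signature-verification steps that a real OpenID library performs, and the provider endpoint, identifier and return URL are placeholders.

# Schematic sketch (illustrative only) of the redirect a relying party sends the
# user agent to so the identity provider can authenticate the OpenID credentials.
from urllib.parse import urlencode

provider_endpoint = "https://openid.provider.org/auth"      # placeholder, found via discovery
params = {
    "openid.ns": "http://specs.openid.net/auth/2.0",
    "openid.mode": "checkid_setup",                          # interactive sign-in request
    "openid.identity": "https://name.openid.provider.org/",  # the claimed identifier
    "openid.claimed_id": "https://name.openid.provider.org/",
    "openid.return_to": "https://relying-party.example/openid/return",
    "openid.realm": "https://relying-party.example/",
}

redirect_url = f"{provider_endpoint}?{urlencode(params)}"
print(redirect_url)   # the user agent is redirected here to authenticate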

The second kind of protocol used to present identity-based claims in cloud computing is a set of
authorization markup languages that create files in the form of XACML and SAML. SAML
(Security Assertion Markup Language) is gaining growing acceptance among cloud service
providers. It is an OASIS standard: an XML standard for passing authentication and authorization
information between an identity provider and the service provider. SAML is a complementary
mechanism to OpenID and is used to create SSO systems. Taken as a unit, OpenID and SAML are
being positioned to be the standard authentication mechanism for clients accessing cloud services.

Shared access is particularly important for services such as mashups that draw information from
two or more data services. An open standard called OAuth (http://oauth.net/) provides a token
service that can be used to present validated access to resources. OAuth is similar to OpenID, but
provides a different mechanism for shared access. The use of OAuth tokens allows clients to
present credentials that contain no account information (user ID or password) to a cloud service.
The token comes with a defined period after which it can no longer be used. Several important
cloud service providers have begun to make OAuth APIs available based on the OAuth 2.0
standard, most notably Facebook’s Graph API and the Google Data API.
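
A hedged sketch of presenting such a token is shown below; it assumes the third-party requests package, and the endpoint and token values are placeholders rather than any real provider's API.

# Minimal sketch (assumes the third-party "requests" package) of presenting an
# OAuth 2.0 bearer token instead of a user ID and password. The API URL and the
# token value are placeholders; real tokens are obtained from the provider's
# authorization server and expire after a defined period.
import requests

ACCESS_TOKEN = "example-expiring-token"            # placeholder, normally issued by the provider

response = requests.get(
    "https://api.example-cloud-service.com/v1/profile",    # placeholder endpoint
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},    # no account credentials are sent
    timeout=10,
)

if response.status_code == 401:
    print("Token expired or revoked - request a new one from the authorization server")
else:
    print(response.json())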

The Data Portability Project (http://dataportability.org/) is an industry working group that promotes
data interoperability between applications and open identity standards, and the group’s work touches
on a number of the emerging standards mentioned in this section. A number of vendors have created
server products, such as Identity and Access Managers (IAMs), to support these various standards.

Summary
· Cloud computing is a practical approach to realizing direct cost benefits, and it
has the potential to transform a data center from a capital-intensive setup to a
variable-priced environment. The idea of cloud computing is based on the very
fundamental principle of reusability of IT capabilities.

· Cloud computing is defined as a pool of abstracted, highly scalable, and managed
compute infrastructure capable of hosting end-customer applications and billed by
consumption.

· The two types of cloud models are the NIST Model and the Cloud Cube Model.

· The NIST Model covers two categories, namely Deployment Models and Service Models.

· The Cloud Cube Model is also called the “Jericho model”.

· Abstraction: Cloud computing abstracts the details of system implementation from


users and developers. Applications run on physical systems that aren’t specified,
data is stored in locations that are unknown, administration of systems is outsourced
to others, and access by users is ubiquitous.

· Virtualization: Cloud computing virtualizes systems by pooling and sharing resources.


Systems and storage can be provisioned as needed from a centralized infrastructure,
costs are assessed on a metered basis, multi-tenancy is enabled, and resources
are scalable with agility.

· The different service models include Software as a Service (SaaS), Platform as a
Service (PaaS), and Infrastructure as a Service (IaaS).

· Different deployment models are Public Cloud, Private Cloud, Hybrid Cloud and
Community Cloud.

· Cloud Provider: Person, organization or entity responsible for making a service


available to Cloud Consumers.

· Cloud Auditor - A party that can conduct independent assessment of cloud services,
information system operations, performance and security of the cloud
implementation.

· Cloud Carrier – The intermediary that provides connectivity and transport of cloud
services between Cloud Providers and Cloud Consumers, providing access to cloud
consumers through network, telecommunication and other access devices.

· Multi-Tenancy is a major characteristic of Cloud Computing and a major dimension


in the Cloud security problem that needs a vertical solution from the Software-as-a-
Service (SaaS) down to Infrastructure-as-a-Service (IaaS).

Check your answers


· Write short notes on

· Cloud computing

· Types of cloud computing

· Multi-tenancy

· NIST Model

· Cloud Cube Model

· Cloud Broker

· Cloud Auditor

· Cloud Provider

· Cloud Consumer

Reference
· http://www.rgcetpdy.ac.in/Notes/IT/IV%20YEAR/ELECTIVE-CLOUD%20COMPUTING/Unit%204.pdf

· Jay Heiser and Mark Nicolett, “Assessing the Security Risks of Cloud Computing,”
Gartner Group (http://www.gartner.com/DisplayDocument?id=685308)

· https://blog.kennasecurity.com/2013/02/the-role-of-security-mapping-in-vulnerability-
management/

· https://www.computerweekly.com/opinion/Six-essential-processes-for-keeping-data-
secure

· Security Assertion Markup Language: http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security

· http://dataportability.org/

MODEL QUESTION PAPER


M.Sc. CYBER FORENSICS AND INFORMATION SECURITY
FIRST YEAR - FIRST SEMESTER
CORE PAPER-IV
IT Infrastructure & Cloud Computing
Time: 3 hours                                                        Maximum: 80 marks

Section-A

Answer the following in 50 words each (10 x 2 = 20)

1. What is Primary Memory?

2. What is Central Processing Unit?

3. What is Client Server Operating System?

4. What are the advantages and disadvantages of a CLI?

5. Explain the processor sequence.

6. What are Buses?

7. List the parts of the motherboard.

8. Differentiate between little endian and big endian.

9. Draw a neat diagram of the IT infrastructure ecosystem.

10. What is Cloud Cube model?

Section-B

Answer any five of the following in 250 words each (5 x 6 = 30)

1. Explain Monitors and their types.

2. Write in Brief about Client Server system Architecture.

3. Explain POST and the booting sequence.

4. What is a server? Explain its types.



5. Explain the multi-tenancy model.

6. What is Single Sign-On? Elaborate on its correlation with Kerberos.

SECTION – C

Answer the following in about 500 words each (3 x 10 = 30)

1. Describe in detail about NIST cloud Models with suitable examples.

2. What is a GPO and what is its structure? How will you configure the settings for

a. Permission and Privilege

b. Password setting

c. Account settings

d. USB enable and disable

3. What is System Memory? What are the Different types of RAM?
