POSTGRADUATE COURSE
M.Sc., Cyber Forensics and Information Security
FIRST YEAR
FIRST SEMESTER
CORE PAPER - IV
IT INFRASTRUCTURE AND
CLOUD COMPUTING
WELCOME
Warm Greetings.
I invite you to join the CBCS in Semester System to gain rich knowledge leisurely at
your will and wish. Choose the right courses at right times so as to erect your flag of
success. We always encourage and enlighten to excel and empower. We are the cross
bearers to make you a torch bearer to have a bright future.
DIRECTOR
M.Sc., Cyber Forensics and Information Security CORE PAPER - IV
FIRST YEAR - FIRST SEMESTER IT INFRASTRUCTURE
AND CLOUD COMPUTING
Dr. N. Kala
Director i/c.
Cyber Forensics and Information Security,
University of Madras
Dr. S. Thenmozhi
Associate Professor
Department of Psychology
Institute of Distance Education
University of Madras
Chepauk, Chennai - 600 005.
M.Sc., Cyber Forensics and Information Security
FIRST YEAR
FIRST SEMESTER
Core Paper - IV
o Functions of Server operating system
o Introduction to Command line operation
· Basics on files and directories
· Details about system files and boot process
· Introduction to device drivers
· Trust Relationships
· Object – Creation, Modification, Management and Deletion
o User
o Group
o Computer
o OU
o Domain
· Group Policy (GPO) Management
o Structure of GPO
o Permissions and Privileges
o GPO Security Settings
§ Password Settings
§ Account Lockout Settings
§ Account Timeout Settings
§ USB Enable/ Disable Settings
§ Screen Saver Settings
§ Audit Logging Settings
§ Windows Update Settings
§ User Restriction Settings
o Creation of GPO
o Linking a GPO
o Application of GPO
§ Linking a GPO
§ Enforcing a GPO
§ GPO Status
§ Inclusion / Exclusion of Users/ Groups in a GPO
o Precedence of GPO
o Loopback Processing of GPO
o Fine-Grain Policy / Fine-Grain Password Policy
· Addition of Windows Workstations to Domain and Group Policy Synchronisation
· Addition of Non-Windows Workstations in AD Environment
· Integrating Finger-Print, Smart Card, RSA or secondary authentication source to
Active Directory
· Single-Sign On Integration
· Active Directory Hardening Guidelines
Unit 5: Cloud Computing
· Concept – Fundamentals of Cloud Computing
· Types of clouds
· Security Design and Architecture
· Cloud Computing Service Models
· The Characteristics of Cloud Computing
· Multi Tenancy Model
· Cloud Security Reference Model
· Cloud Computing Deploying Models
· Cloud Identity and Access Management
o Identity Provisioning – Authentication
o Key Management for Access Control – Authorization
o Infrastructure and Virtualization Security
o Hypervisor Architecture Concerns.
FIRST YEAR
FIRST SEMESTER
Core Paper - IV
UNIT I
COMPUTER HARDWARE BASICS
Learning Objectives
Basics of motherboard
Chipsets
Components of a CPU
System memory
Virtual memory
Protected memory
Hard Disks
Optical Drives
Structure
1.1 Introduction
1.3 Chipsets
1.17 USB
1.1. Introduction
Personal computers became possible in 1974 when a small company named Intel started selling inexpensive computer chips called 8080 microprocessors. A single 8080 microprocessor contained all of the electronic circuits necessary to create a programmable computer. Almost immediately, a few primitive computers were developed using this microprocessor. By the early 1980s, Steve Jobs and Steve Wozniak were mass marketing Apple computers and Bill Gates was working with IBM to mass market IBM personal computers. In England, the Acorn and Sinclair computers were being sold. The Sinclair, a small keyboard that plugged into a standard television and used an audio cassette player for memory storage, was revolutionary in 1985. By supplanting expensive, centralized mainframes, these small, inexpensive computers made Bill Gates's dream of putting a computer in every home a distinct possibility. Additionally, the spread of these computers around the world made a global network of computers the next logical step.
1.2.1.Basics of motherboard
The main printed circuit board in a computer is known as the motherboard. Other names
for this central computer unit are system board, mainboard, or printed wired board (PWB). The
motherboard is sometimes shortened to Mobo.
Numerous major components, crucial for the functioning of the computer, are attached to
the motherboard. These include the processor, memory, and expansion slots. The motherboard
connects directly or indirectly to every part of the PC.
The CPU is the core of any computer. Everything depends on the CPU's ability to process instructions that it receives. So the first stage in the boot process is to get the CPU started –
reset – with an electric pulse. This pulse is usually generated when the power switch or button
is activated but can also be initiated over a network on some systems. Once the CPU is reset it
starts the computer’s basic input output system (BIOS).
Figure 1.2: An Electrical Pulse resets the CPU, which in turn, activates the BIOS
The processor chip is identified by the processor type and the manufacturer. This
information is usually inscribed on the chip itself. For example, Intel 386, Advanced Micro Devices
(AMD) 386, Cyrix 486, Pentium MMX, Intel Core 2 Duo, or Intel Core i7.
If the processor chip is not on the motherboard, it can be identified by the processor
socket, such as Socket 1 to Socket 8 or LGA 775, among others. This can help to identify the processor
that fits in the socket. For example, a 486DX processor fits into Socket 3.
Random Access Memory, or RAM, usually refers to computer chips that temporarily store
dynamic data to enhance computer performance while working.
In other words, it is the working place of the computer, where active programs and data
are loaded so that any time the processor requires them, it doesn’t have to fetch them from the
hard disk.
Random access memory is volatile, meaning it loses its contents once power is turned
off. This is different from non-volatile memory, such as hard disks and flash memory, which do
not require a power source to retain data.
When a computer shuts down properly, all data located in RAM is written back to permanent storage on the hard drive or flash drive. At the next boot-up, RAM begins to fill with the programs automatically loaded at startup, a process called booting. Later on, the user opens other files and programs, which are then also loaded into memory.
BIOS stands for Basic Input/Output System. BIOS is a “read-only” memory, which consists
of low-level software that controls the system hardware and acts as an interface between the
operating system and the hardware. Most people know the term BIOS by another name—
device drivers, or just drivers. BIOS is essentially the link between the computer hardware and
software in a system.
All motherboards include a small block of Read Only Memory (ROM) which is separate
from the main system memory used for loading and running software. On PCs, the BIOS contains
all the code required to control the keyboard, display screen, disk drives, serial communications,
and a number of miscellaneous functions.
The system BIOS is a ROM chip on the motherboard used during the startup routine
(boot process) to check out the system and prepare to run the hardware. The BIOS is stored on
a ROM chip because ROM retains information even when no power is being supplied to the
computer. Some BIOS programs allow an individual to set a password and then until the password
is typed in the BIOS will not run and the computer will not function.
Motherboards also include a small separate block of memory made from CMOS RAM chips, which is kept alive by a battery (known as a CMOS battery) even when the PC's power is off. This prevents the configuration from being lost, so the PC does not have to be reconfigured every time it is powered on.
The CMOS RAM is used to store basic information about the PC's configuration, for instance:
· RAM size
Other important data kept in CMOS memory is the time and date, which is updated by a Real Time Clock (RTC).
The BIOS contains a program called the power-on self-test (POST) that tests the fundamental components of the computer. When the CPU first activates the BIOS, the POST program is initiated. To be safe, the first test verifies the integrity of the CPU and the POST program itself. The rest of the POST verifies that all of the computer's components are functioning properly, including the disk drives, monitor, RAM and keyboard. Notably, after the BIOS is activated and before the POST is complete, there is an opportunity to interrupt the boot process and have it perform specific actions. For instance, Intel-based computers allow the user to open the Complementary Metal Oxide Semiconductor (CMOS) configuration tool at this stage.
Computers use CMOS and RAM chips to retain the date, time, hard drive parameters and other configuration details while the computer's main power is off. A small battery powers the CMOS chips – older computers may not boot even when the main power is turned on because this CMOS battery is depleted, causing the computer to "forget" its hardware settings. Using the CMOS configuration tool, it is possible to determine the system date and time, ascertain whether the computer will try to find an operating system on the primary hard drive or on another disk first, and change basic computer settings as needed. When collecting digital evidence from a computer, it is often necessary to interrupt the boot process and examine CMOS settings such as the system date and time, the configuration of hard drives, and the boot sequence. In some instances it may be necessary to change the CMOS settings to ensure that the computer will boot from a floppy diskette rather than the evidentiary hard drive. In many computers the results of POST are checked against a permanent record stored in a CMOS microchip. If there is a problem at any stage in the POST, the computer will emit a series of beeps and possibly display an error message on the screen. The combination of beep sounds indicates various errors. When all of the hardware tests are complete, the BIOS instructs the CPU to look for a disk containing an operating system.
Most CPUs have an internal cache memory (built into the processor) which is referred to
as Level 1 or primary cache memory. This can be supplemented by external cache memory
fitted on the motherboard. This is the Level 2 or secondary cache.
In modern computers, Levels 1 and 2 cache memory are built into the processor die. If a
third cache is implemented outside the die, it is referred to as the Level 3 (L3) cache.
CACHE:
A Cache is a small and very fast temporary storage memory. It is designed to speed up
the transfer of data and instructions. It is located inside or close to the CPU chip. It is faster than
RAM and the data/instructions that are most recently or most frequently used by CPU are
stored in cache.
The data and instructions are retrieved from RAM when CPU uses them for the first time.
A copy of that data or instructions is stored in cache. The next time the CPU needs that data or
instructions, it first looks in cache. If the required data is found there, it is retrieved from cache
memory instead of main memory. It speeds up the working of CPU.
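The check-the-cache-first behaviour described above can be sketched in a few lines of Python. The dictionary standing in for the cache and the read_from_ram() helper are illustrative assumptions only; real caches are implemented in hardware, not software.

```python
# Minimal sketch of the "check the cache first" behaviour described above.
cache = {}                                             # address -> data, small and fast
ram = {0x10: "instruction A", 0x20: "operand B"}       # pretend main memory

def read_from_ram(address):
    # In real hardware this is the slow path.
    return ram[address]

def cpu_read(address):
    if address in cache:              # cache hit: fast path
        return cache[address]
    data = read_from_ram(address)     # cache miss: go to RAM
    cache[address] = data             # keep a copy for next time
    return data

print(cpu_read(0x10))  # miss: fetched from RAM and cached
print(cpu_read(0x10))  # hit: served from the cache
```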
A computer can have several different levels of cache memory. The level numbers refer to the distance from the CPU, where Level 1 is the closest. All levels of cache memory are faster than RAM. The cache closest to the CPU is always faster but generally costs more and stores less data than the other levels of cache.
It is also called primary or internal cache. It is built directly into the processor chip. It has a small capacity, ranging from 8 KB to 128 KB.
It is slower than L1 cache. Its storage capacity is larger, i.e. from 64 KB to 16 MB. Current processors contain an advanced transfer cache on the processor chip, which is a type of L2 cache. The common size of this cache is from 512 KB to 8 MB.
This cache is separate from the processor chip, on the motherboard. It exists on computers that use L2 advanced transfer cache. It is slower than L1 and L2 cache. A personal computer often has up to 8 MB of L3 cache.
An expansion bus is an input/output pathway from the CPU to peripheral devices and it is
typically made up of a series of slots on the motherboard. Expansion boards (cards) plug into
the bus. PCI is the most common expansion bus in a PC and other hardware platforms. Buses
carry signals such as data, memory addresses, power, and control signals from component to
component. Other types of buses, such as ISA and EISA, are detailed in Unit 3.
Expansion buses enhance the PC's capabilities by allowing users to add missing features to their computers by slotting adapter cards into expansion slots.
1.3. Chipsets
A chipset is a group of small circuits that coordinate the flow of data to and from a PC’s
key components. These key components include the CPU itself, the main memory, the secondary
cache, and any devices situated on the buses. A chipset also controls data flow to and from
hard disks and other devices connected to the IDE channels. For further details, refer to Unit 3.
For example, a 200 MHz CPU receives 200 million pulses per second from the clock. A 2
GHz CPU gets two billion pulses per second. Similarly, in any communications device, a clock
may be used to synchronize the data pulses between sender and receiver.
A “real-time clock,” also called the “system clock,” keeps track of the time of day and
makes this data available to the software. A “time-sharing clock” interrupts the CPU at regular
intervals and allows the operating system to divide its time between active users and/or
applications.
· Jumper pins are small protruding pins on the motherboard. A jumper cap or bridge is used to connect or short a pair of jumper pins. When the bridge is connected to any two pins via a shorting link, it completes the circuit and a certain configuration is achieved.
· Jumper caps are metal bridges that close an electrical circuit. Typically, a jumper
consists of a plastic plug that fits over a pair of protruding pins. Jumpers are
sometimes used to configure expansion boards. By placing a jumper plug over a
different set of pins, a board's parameters can be changed.
NOTE: The jumper pins and jumper cap at the back of an IDE hard disk and a CD/DVD
ROM/Writer can be checked.
· Central processing unit (CPU), the hardware within a computer that executes a program
· Graphics processing unit (GPU), a processor designed for doing dedicated graphics-
rendering computations
§ Image processor, a specialized DSP used for image processing in digital cameras,
mobile phones or other devices
· Coprocessor
· Floating-point unit
· Multi-core processor, a single component with two or more independent CPUs (called "cores") on the same chip carrier or on the same die
1.7.1.Control Unit
The Control Unit is an internal part of a CPU that co-ordinates the instructions and data
flow between CPU and other components of the computer. It is the CU that directs the operations
of a central processing unit by sending timing and control signals.
The ALU is an internal electronic circuitry of a CPU that performs all the arithmetic and
logical operations in a computer. The ALU receives three types of inputs.
· Data (operands) to be operated on
· The operation code, sent from the control unit, indicating which operation to perform
· Status information carried over from previous operations
When all the instructions have been operated on, the output consists of data, which is stored in memory, and status information, which is stored in the internal registers of the CPU.
All CPUs, regardless of their origin or type, perform a basic instruction cycle that consists of three steps: Fetch, Decode and Execute.
1.7.3.1. Fetch
In the fetch step, the CPU reads the next instruction from memory, at the address held in the instruction pointer (program counter).
1.7.3.2. Decode
A circuitry called instruction decoder decodes all the instructions fetched from the memory.
The instructions are decoded to various signals that control other areas of CPU.
1.7.3.3. Execute
In the last step, the CPU executes the instruction. For example, it stores a value in a particular register, and the instruction pointer then points to the next instruction stored in the next address location.
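The three-step cycle can be illustrated with a toy interpreter in Python. The instruction names, the single ACC register and the IP pointer are invented purely for illustration; real CPUs decode binary opcodes, not text.

```python
# Toy fetch-decode-execute loop for the three-step instruction cycle above.
memory = ["LOAD 5", "ADD 3", "HALT"]   # pretend program held in main memory
registers = {"ACC": 0, "IP": 0}        # accumulator and instruction pointer

while True:
    instruction = memory[registers["IP"]]      # fetch
    opcode, *operand = instruction.split()     # decode
    registers["IP"] += 1                       # point to the next instruction
    if opcode == "LOAD":                       # execute
        registers["ACC"] = int(operand[0])
    elif opcode == "ADD":
        registers["ACC"] += int(operand[0])
    elif opcode == "HALT":
        break

print(registers["ACC"])   # prints 8
```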
The speed of a processor is measured by the number of clock cycles the CPU can perform in a second. The more clock cycles, the more instructions (calculations) it can carry out. CPU speed is measured in Hertz. Modern-day processors have speeds in the order of GHz (1 GHz = one billion cycles per second).
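As a quick worked check of this arithmetic (assuming, for simplicity, one instruction per clock cycle, which modern superscalar processors routinely exceed):

```python
# Rough instruction-rate estimate from the clock rate.
clock_rate_hz = 2 * 10**9          # a 2 GHz CPU: two billion cycles per second
instructions_per_cycle = 1         # simplifying assumption; real CPUs vary
print(clock_rate_hz * instructions_per_cycle)   # 2,000,000,000 instructions per second
```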
Volatile memory is computer memory that requires power to maintain the stored information.
Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM
(DRAM). SRAM retains its contents as long as the power is connected and is easy for interfacing,
but uses six transistors per bit. Dynamic RAM is more complicated for interfacing and control,
needing regular refresh cycles to prevent losing its contents, but uses only one transistor and
one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit
costs.
SRAM is not worthwhile for desktop system memory, where DRAM dominates, but it is used for CPU cache memories. SRAM is commonplace in small embedded systems, which might only need tens of kilobytes or less. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM.
· Non-volatile memory
Non-volatile memory is computer memory that can retain the stored information even
when not powered. Examples of non-volatile memory include read-only memory, flash memory,
most types of magnetic computer storage devices (e.g. hard disk drives, floppy disks and magnetic tape), optical discs, and early computer storage methods such as paper tape and punched cards.
A third category, often called semi-volatile memory, retains data for only a limited time or under limited conditions after power is removed. For example, some non-volatile memory types can wear out, where a "worn" cell has increased volatility but otherwise continues to work. Data locations which are written frequently
can thus be directed to use worn circuits. As long as the location is updated within some known
retention time, the data stays valid. If the retention time “expires” without an update, then the
value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows
a high write rate while avoiding wear on the not-worn circuits.
As a second example, an STT-RAM can be made non-volatile by building large cells, but
the cost per bit and write power go up, while the write speed goes down. Using small cells
improves cost, power, and speed, but leads to semi-volatile behavior. In some applications the
increased volatility can be managed to provide many benefits of a non-volatile memory, for
example by removing power but forcing a wake-up before data is lost; or by caching read-only
data and discarding the cached data if the power-off time exceeds the non-volatile threshold.
The term semi-volatile is also used to describe semi-volatile behavior constructed from
other memory types. For example, a volatile and a non-volatile memory may be combined,
where an external signal copies data from the volatile memory to the non-volatile memory, but
if power is removed without copying, the data is lost. Or, a battery-backed volatile memory, and
if external power is lost there is some known period where the battery can continue to power the
volatile memory, but if power is off for an extended time, the battery runs down and data is lost.
Virtual memory is a system where all physical memory is controlled by the operating
system. When a program needs memory, it requests it from the operating system. The operating
system then decides what physical location to place the memory in.
This offers several advantages. Computer programmers no longer need to worry about
where the memory is physically stored or whether the user’s computer will have enough memory.
It also allows multiple types of memory to be used. For example, some memory can be stored
in physical RAM chips while other memory is stored on a hard drive (e.g. in a swapfile), functioning
as an extension of the cache hierarchy. This drastically increases the amount of memory available
to programs. The operating system will place actively used memory in physical RAM, which is
much faster than hard disks. When the amount of RAM is not sufficient to run all the current
programs, it can result in a situation where the computer spends more time moving memory
from RAM to disk and back than it does accomplishing tasks; this is known as thrashing.
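On a running system, the split between physical RAM and the swap/paging file described above can be inspected with the third-party psutil package, as in the sketch below (install it with pip install psutil; the field names follow psutil's documented API).

```python
# Inspecting physical RAM and swap usage with psutil.
import psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"Physical RAM  : {ram.total // 2**20} MB total, {ram.percent}% in use")
print(f"Swap/page file: {swap.total // 2**20} MB total, {swap.percent}% in use")
# A persistently high swap percentage alongside full RAM is a hint that the
# system may be paging heavily (the thrashing condition described above).
```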
Virtual memory systems usually include protected memory, but this is not always the
case.
Protected memory is a system where each program is given an area of memory to use
and is not permitted to go outside that range. Use of protected memory greatly enhances both
the reliability and security of a computer system.
Without protected memory, it is possible that a bug in one program will alter the memory
used by another program. This will cause that other program to run off of corrupted memory
with unpredictable results. If the operating system’s memory is corrupted, the entire computer
system may crash and need to be rebooted. At times programs intentionally alter the memory
used by other programs. This is done by viruses and malware to take over computers. It may
also be used benignly by desirable programs which are intended to modify other programs; in
the modern age, this is generally considered bad programming practice for application programs,
but it may be used by system development tools such as debuggers, for example to insert
breakpoints or hooks.
Protected memory assigns programs their own areas of memory. If the operating system
detects that a program has tried to alter memory that does not belong to it, the program is
terminated (or otherwise restricted or redirected). This way, only the offending program crashes,
and other programs are not affected by the misbehavior (whether accidental or intentional).
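Address-space isolation can also be observed from Python: in the sketch below, a child process changes its own copy of a variable while the parent's copy stays untouched, because each process runs in its own protected memory. This is only an operating-system level illustration of the idea, not a description of the hardware mechanism.

```python
# Each process gets its own protected address space, so the child's change
# to a "global" variable never reaches the parent's copy.
from multiprocessing import Process

counter = 100   # lives in the parent's address space

def child_task():
    global counter
    counter = 999                     # changes only the child's private copy
    print("child sees :", counter)

if __name__ == "__main__":
    p = Process(target=child_task)
    p.start()
    p.join()
    print("parent sees:", counter)    # still 100 - memory is isolated
```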
To understand how RAM works and the role it plays in a computer a few of its important
properties that are to be kept in mind are:
1. RAM is blazing fast compared to hard drives - Even the latest and greatest solid
state drives are embarrassingly slow when pitted against RAM. While top end solid
state drives can achieve transfer rates of more than 1,000 MB/s, modern RAM
modules are already hitting speeds in excess of 15,000 MB/s.
2. RAM storage is volatile (temporary) - Any data stored in RAM will be lost once the
computer is turned off. Comparing computer storage to the human brain, RAM
works like short term memory while hard drives resemble our long term memories.
Whenever a program is run (e.g. the operating system, applications) or a file is opened (e.g. videos, images, music, documents), it is loaded temporarily from the hard drive into RAM. Once loaded into RAM, it can be accessed smoothly with minimal delays.
Once one runs out of RAM, the operating system will begin dumping some of the open programs and files to the paging file. The paging file is stored on the much slower hard drive, so instead of everything running through RAM, part of it is being accessed from the hard drive.
This is when slow loading times, stuttering and general unresponsiveness begin to appear.
Having enough RAM allows the computer to be more responsive, multitask better and run
memory-intensive programs (e.g. video editors, databases, virtual machines) with ease.
TYPES OF RAM
DRAM stands for Dynamic Random Access Memory. It is used in most of the computers.
It is the least expensive kind of RAM. It requires an electric current to maintain its electrical
state. The electrical charge of DRAM decreases with time, which may result in loss of data.
DRAM is recharged or refreshed again and again to maintain its data. The processor cannot
access the data of DRAM when it is being refreshed. That is why it is slow.
SRAM stands for Static Random Access Memory. It can store data without any need of
frequent recharging. CPU does not need to wait to access data from SRAM during processing.
That is why it is faster than DRAM. It utilizes less power than DRAM. SRAM is more expensive
as compared to DRAM. It is normally used to build a very fast memory known as cache memory.
MRAM stands for Magneto resistive Random Access Memory. It stores data using magnetic
charges instead of electrical charges. MRAM uses far less power than other RAM technologies
so it is ideal for portable devices. It also has greater storage capacity. It has faster access time
than RAM. It retains its contents when the power is removed from computer.
FPM DRAM: Fast page mode dynamic random access memory was the original form of
DRAM. It waits through the entire process of locating a bit of data by column and row and then
reading the bit before it starts on the next bit. Maximum transfer rate to L2 cache is approximately
176 MBps.
EDO DRAM: Extended data-out dynamic random access memory does not wait for all of
the processing of the first bit before continuing to the next one. As soon as the address of the
first bit is located, EDO DRAM begins looking for the next bit. It is about five percent faster than
FPM. Maximum transfer rate to L2 cache is approximately 264 MBps.
SDRAM: Synchronous dynamic random access memory takes advantage of the burst
mode concept to greatly improve performance. It does this by staying on the row containing the
requested bit and moving rapidly through the columns, reading each bit as it goes. The idea is
that most of the time the data needed by the CPU will be in sequence. SDRAM is about five
percent faster than EDO RAM and is the most common form in desktops today. Maximum
transfer rate to L2 cache is approximately 528 MBps.
DDR SDRAM: Double data rate synchronous dynamic RAM is just like SDRAM except
that it has higher bandwidth, meaning greater speed. Maximum transfer rate to L2 cache is approximately 1,064 MBps (for DDR SDRAM at 133 MHz).
RDRAM: Rambus dynamic random access memory is a radical departure from the
previous DRAM architecture. Designed by Rambus, RDRAM uses a Rambus in-line memory
module (RIMM), which is similar in size and pin configuration to a standard DIMM. What makes
RDRAM so different is its use of a special high-speed data bus called the Rambus channel.
RDRAM memory chips work in parallel to achieve a data rate of 800 MHz, or 1,600 Mbps. Since
they operate at such high speeds, they generate much more heat than other types of chips. To
help dissipate the excess heat Rambus chips are fitted with a heat spreader, which looks like a
long thin wafer. Just like there are smaller versions of DIMMs, there are also SO-RIMMs, designed
for notebook computers.
Credit Card Memory: Credit card memory is a proprietary self-contained DRAM memory
module that plugs into a special slot for use in notebook computers.
PCMCIA Memory Card: Another self-contained DRAM module for notebooks, cards of
this type are not proprietary and should work with any notebook computer whose system bus
matches the memory card’s configuration.
CMOS RAM: CMOS RAM is a term for the small amount of memory used by your computer
and some other devices to remember things like hard disk settings. This memory uses a small
battery to provide it with the power it needs to maintain the memory contents.
Computers rely on hard disk drives (HDDs) to store data permanently. They are storage
devices used to save and retrieve digital information that will be required for future reference.
Hard drives are non-volatile, meaning that they retain data even when they do not have
power. The information stored remains safe and intact unless the hard drive is destroyed or
interfered with.
Hard disk drives were introduced in 1956 by IBM. At the time, they were being used with
general purpose mainframes and minicomputers. Like other electronic devices, these have
witnessed numerous technological advancements over the years. This is in terms of capacity,
size, shape, internal structure, performance, interface, and modes of storing data.
These numerous changes have ensured that HDDs are here to stay, unlike other devices that become obsolete soon after they are introduced to the market.
IDE is a type of hard drive interface which bundles the hard drive and its controller into one unit. This allows for much simpler installation of the hard drive into the system by removing the difficulties associated with separate components and controllers in the predecessor drives. This advancement has led to several hard drive options which are both very large and very fast. These types of hard drives can be used in nearly any modern computer system.
These were the first types of hard disk drives and they made use of the Parallel ATA
interface standard to connect to computers. These types of drives are the ones that are referred
to as Integrated Drive Electronics (IDE) and Enhanced Integrated Drive Electronics (EIDE)
drives.
These PATA drives were introduced by Western Digital back in 1986. They provided a
common drive interface technology for connecting hard drives and other devices to computers.
Data transfer rate can go up to 133MB/s and a maximum of 2 devices can be connected to a
drive channel. Most of the motherboards have a provision of two channels, thus a total of 4
EIDE devices can be connected internally.
They make use of a 40 or 80 wire ribbon cable transferring multiple bits of data
simultaneously in parallel. These drives store data by the use of magnetism. The internal structure
is one made of mechanical moving parts. They have been superseded by serial ATA.
These hard drives have replaced the PATA drives in desktop and laptop computers. The main physical difference between the two is the interface, although their method of connecting to a computer is the same. Here are some advantages of SATA hard disk drives; worth noting is that their capacities vary a lot, and so do their prices. SATA drives can transfer data faster than PATA types by using serial signaling technology.
· SATA cables are thinner and more flexible than PATA cables.
· Disks do not share bandwidth because there is only one disk drive allowed per
SATA controller chip on the computer motherboard.
· They consume less power. They only require 250 mV as opposed to 5V for PATA.
These are quite similar to IDE hard drives but they make use of the Small Computer
System Interface to connect to the computer. SCSI drives can be connected internally or externally. Devices that are connected in a SCSI chain have to be terminated at the end. Here are some of their advantages.
4. DVD ROM.
1. CD-ROM – It is an optical ROM in which pre-recorded data can be read out. The manufacturer writes data on CD-ROMs. The disk is made up of a resin, such as polycarbonate. It is coated with a material which will change when a high-intensity laser beam is focused on it. The coating material is highly reflective, usually aluminum. It is also called a laser disk.
Information on a CD-ROM is written by creating pits on the disk surface by shining a laser beam. As the disk rotates, the laser beam traces out a continuous spiral. The sharply focused beam creates a circular pit of around 0.8 micrometer diameter wherever a 1 is to be written, and no pit (also called a land) where a zero is to be written.
The CD-ROM with pre-recorded information is read by a CD-ROM reader, which uses a laser beam for reading. A laser head moves in and out to the specified position. As the disk rotates, the head senses pits and lands. These are converted to 1s and 0s by the electronic interface and sent to the computer.
The advantages of CD-ROM are its high storage capacity, the ease of mass-copying the stored information, and its removability from the computer. Its main disadvantage is a longer access time compared to that of a magnetic hard disk. It cannot be updated because it is a read-only memory. It is suitable for storing information which is not to be changed.
To write data on the disk, a laser beam of modest intensity is employed, which forms pits or bubbles on the disk surface. Its disk controller is somewhat more expensive than that for a CD-ROM. For the writing operation, the required laser power is more than that required for reading. Its advantages are its high capacity, better reliability and longer life. The drawback is its greater access time compared to that of a hard disk.
· It is more reliable.
The drawback is its longer access time compared to that of a hard disk.
4. DVD ROM – DVD stands for Digital Versatile Disk. A DVD stores much more data than a CD-ROM. Its capacities are 4.7 GB, 8.5 GB, 17 GB, etc.; the capacity depends on whether it is a single-layer, double-layer single-sided or double-sided disk. A DVD ROM uses the same principle as a CD-ROM for reading and writing, but a smaller-wavelength laser beam is used. A lens system is used to focus on two different layers of the disk. Data is recorded on each layer, so the capacity can be doubled. Further, the recording beam is sharper compared to a CD-ROM and the distance between successive tracks on the surface is smaller. The total capacity of a double-layer DVD ROM is 8.5 GB. In a double-sided DVD ROM, two such disks are stuck back to back, which allows recording on both sides; this requires the disk to be reversed to read the reverse side. Hence, a double-sided DVD ROM's capacity is 17 GB. However, double-sided DVD ROMs should be handled carefully: as both sides hold data, they are thinner and could be accidentally damaged.
Also known as “pen drives,” “thumb drives” or “flash drives,” these are identifiable by the
rectangular metal connector that you insert into the computer. Like other removable storage
devices, USB drives are used to transport the files from one place to another.
Memory cards, also called “memory sticks” or “SD cards,” connect to the computer via a
special slot. Not every computer has these slots, but adapters are available that allow one to
read a memory card via a USB port. Memory cards are used in MP3 players and other portable
gadgets like the Canon PowerShot digital camera.
1.11.3 Smartphones
Handsets like the Samsung Galaxy S4 also have SD cards for storage and can connect to the computer with a USB cable, such as the T-Mobile universal charging cable. Such a cable may have come packaged with the phone, and will also charge the phone while it is connected to the computer.
An external hard drive is like the drive inside the computer, but it comes in a protective
case and connects to the computer via a USB cable. If there’s a natural disaster or a break-in,
or if the computer crashes irreparably, one can copy the files from the external drive onto a new
computer and be back in business.
Tape backup is the practice of periodically copying data from a primary storage device to
a tape cartridge so the data can be recovered if there is a hard disk crash or failure. Tape backups
can be done manually or be programmed to happen automatically with appropriate software.
Tape backup systems exist for needs ranging from backing up the hard disk on a personal
computer to backing up large amounts of data storage for archiving and disaster recovery (DR)
purposes in a large enterprise. Tape backups can also restore data to storage devices when
needed.
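The following Python sketch is a minimal stand-in for such a backup job, streaming a directory into a single compressed archive much as backup software streams data sequentially to a tape cartridge. The source and destination paths are hypothetical examples.

```python
# Minimal stand-in for a scheduled backup job.
import tarfile
from datetime import date

source_dir = "/home/user/documents"                     # data to protect (example path)
archive = f"/mnt/backup/docs-{date.today()}.tar.gz"     # destination (example path)

with tarfile.open(archive, "w:gz") as tar:
    tar.add(source_dir, arcname="documents")            # stream the directory into one archive

print("backup written to", archive)
```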
Tape can be one of the best options for fixing an unstructured data backup problem
because of its inexpensive operational and ownership cost, capacity and speed. Magnetic tape
is especially attractive in an era of massive data growth. Customers can copy and store archival
and backup data on tape for use with cloud seeding.
The data transfer rate for tape can be significantly faster than disk and on par with flash
drive storage, with native write rates of at least 300 megabytes per second (MBps). For anyone
concerned with backups increasing the latency of production storage, flash-to-tape, disk-to-
disk-to-tape or other data buffering strategies can mask the tape write operation.
Because disk is easier to restore data from, more secure and benefits from technologies
such as data deduplication, it has replaced tape as the preferred medium for backup. Tape is
still a relevant medium for archiving, however, and remains in use in large enterprises that may
have petabytes of data backed up on tape libraries.
Magnetic tape is well-suited for archiving because of its high capacity, low cost and
durability. Tape is a linear recording system that is not good for random access. But in an
archive, latency is less of an issue.
A file allocation table (FAT) is a file system developed for hard drives that originally used
12 or 16 bits for each cluster entry into the file allocation table. It is used by the operating
system (OS) to manage files on hard drives and other computer systems. It is often also found
in flash memory, digital cameras and portable devices. It is used to store file information and
extend the life of a hard drive.
Most hard drives require a process known as seeking; this is the actual physical searching
and positioning of the read/write head of the drive. The FAT file system was designed to reduce
the amount of seeking and thus minimize the wear and tear on the hard disc.
FAT was designed to support hard drives and subdirectories. The earlier FAT12 used 12-bit cluster addresses with up to 4,078 clusters; it allowed up to 4,084 clusters with UNIX. The more efficient FAT16 moved to 16-bit cluster addresses, allowing up to 65,517 clusters per volume with 512-byte clusters and 32 MB of space, and had a larger file system; with four sectors per cluster, the cluster size was 2,048 bytes.
FAT16 was introduced in 1983 by IBM with the simultaneous releases of IBM's personal computer AT (PC AT) and Microsoft's MS-DOS (disk operating system) 3.0 software. In 1987, Compaq DOS 3.31 released an expansion of the original FAT16 and increased the disc sector count to 32 bits. Because the disc was designed for a 16-bit assembly language, the whole disc had to be altered to use 32-bit sector numbers.
In 1997 Microsoft introduced FAT32. This FAT file system increased size limits and allowed
DOS real mode code to handle the format. FAT32 has a 32-bit cluster address with 28 bits used
to hold the cluster number for up to approximately 268 million clusters. The highest level division
of a file system is a partition. The partition is divided into volumes or logical drives. Each logical
drive is assigned a letter such as C, D or E.
A FAT file system has four different sections, each as a structure in the FAT partition. The
four sections are:
· Boot Sector: This is also known as the reserved sector; it is located on the first part of the disc. It contains the OS's necessary boot loader code to start a PC system, the partition table known as the master boot record (MBR) that describes how the drive is organized, and the BIOS parameter block (BPB) which describes the physical layout of the data storage volume (see the parsing sketch after this list).
· FAT Region: This region generally encompasses two copies of the File Allocation
Table which is for redundancy checking and specifies how the clusters are assigned.
· Data Region: This is where the directory data and existing files are stored. It uses
up the majority of the partition.
· Root Directory Region: This region is a directory table that contains the information
about the directories and files. It is used with FAT16 and FAT12 but not with other
FAT file systems. It has a fixed maximum size that is configured when created.
FAT32 usually stores the root directory in the data region so it can be expanded if
needed.
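For a forensic examiner, a few of these boot-sector (BPB) fields can be read directly from a raw image of a FAT volume. The sketch below uses the standard BPB offsets; the image file name is a hypothetical example (e.g. the first 512 bytes dumped from a FAT-formatted drive).

```python
# Reading a few BIOS Parameter Block (BPB) fields from a FAT boot sector.
import struct

with open("fat_volume.img", "rb") as f:      # hypothetical raw image of a FAT volume
    boot_sector = f.read(512)

oem_name            = boot_sector[3:11].decode("ascii", errors="replace")
bytes_per_sector,   = struct.unpack_from("<H", boot_sector, 11)   # offset 11, little-endian
sectors_per_cluster = boot_sector[13]
reserved_sectors,   = struct.unpack_from("<H", boot_sector, 14)
number_of_fats      = boot_sector[16]

print("OEM name           :", oem_name)
print("Bytes per sector   :", bytes_per_sector)
print("Sectors per cluster:", sectors_per_cluster)
print("Reserved sectors   :", reserved_sectors)
print("Copies of the FAT  :", number_of_fats)
print("Cluster size       :", bytes_per_sector * sectors_per_cluster, "bytes")
```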
1.12.2 NTFS (NT file system; sometimes New Technology File System)
NTFS (NT file system; sometimes New Technology File System) is the file system that
the Windows NT operating system uses for storing and retrieving files on a hard disk. NTFS is
the Windows NT equivalent of the Windows 95 file allocation table (FAT) and the OS/2 High
Performance File System (HPFS). However, NTFS offers a number of improvements over FAT
and HPFS in terms of performance, extendibility, and security.
· Information about a file's clusters and other data is stored with each cluster, not just in a governing table (as with FAT)
· Support for very large files (up to 2 to the 64th power bytes, or approximately 16 exabytes, in size)
· An access control list (ACL) that lets a server administrator control who can access
specific files
When a hard disk is formatted (initialized), it is divided into partitions or major divisions of
the total physical hard disk space. Within each partition, the operating system keeps track of all
the files that are stored by that operating system. Each file is actually stored on the hard disk in
one or more clusters or disk spaces of a predefined uniform size. Using NTFS, the sizes of
clusters range from 512 bytes to 64 kilobytes. Windows NT provides a recommended default
cluster size for any given drive size. For example, for a 4 GB (gigabyte) drive, the default cluster
size is 4 KB (kilobytes). Note that clusters are indivisible. Even the smallest file takes up one
cluster and a 4.1 KB file takes up two clusters (or 8 KB) on a 4 KB cluster system.
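The cluster arithmetic above can be checked with a short calculation; the 4 KB cluster size and 4.1 KB file size are the figures used in the example.

```python
# Even a 4.1 KB file occupies two whole 4 KB clusters, wasting some "slack" space.
import math

cluster_size = 4 * 1024          # 4 KB clusters, the default cited above
file_size    = int(4.1 * 1024)   # a 4.1 KB file

clusters_used = math.ceil(file_size / cluster_size)
space_on_disk = clusters_used * cluster_size
slack         = space_on_disk - file_size

print(clusters_used, "clusters =", space_on_disk, "bytes on disk")
print("slack space:", slack, "bytes")
```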
The selection of the cluster size is a trade-off between efficient use of disk space and the
number of disk accesses required to access a file. In general, using NTFS, the larger the hard
disk the larger the default cluster size, since it’s assumed that a system user will prefer to
increase performance (fewer disk accesses) at the expense of some amount of space inefficiency.
When a file is created using NTFS, a record about the file is created in a special file, the
Master File Table (MFT). The record is used to locate a file’s possibly scattered clusters. NTFS
tries to find contiguous storage space that will hold the entire file (all of its clusters).
Each file contains, along with its data content, a description of its attributes (its metadata).
RAID allows the same data to be stored redundantly (in multiple places) in a balanced way to improve overall performance. RAID disk drives are used frequently on servers but aren't generally necessary for personal computers.
Provides data striping (spreading out blocks of each file across multiple disk drives) but
no redundancy. This improves performance but does not deliver fault tolerance. If one drive
fails, then all data in the array is lost.
Provides disk mirroring. Level 1 provides twice the read transaction rate of single disks
and the same write transaction rate as single disks.
Not a typical implementation and rarely used, Level 2 stripes data at the bit level rather
than the block level.
Provides byte-level striping with a dedicated parity disk. Level 3, which cannot service
simultaneous multiple requests, also is rarely used.
Provides data striping at the byte level and also stripe error correction information. This
results in excellent performance and good fault tolerance. Level 5 is one of the most popular
implementations of RAID.
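The idea behind striped parity, as used in Level 5, can be shown with a small XOR example: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The byte values below are arbitrary sample data.

```python
# Parity lets an array survive the loss of one disk.
disk1 = bytes([0x41, 0x42, 0x43, 0x44])
disk2 = bytes([0x10, 0x20, 0x30, 0x40])
disk3 = bytes([0xAA, 0xBB, 0xCC, 0xDD])

# Parity block: XOR of the corresponding bytes on every data disk.
parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

# Suppose disk2 fails: rebuild it from the remaining disks plus the parity.
rebuilt_disk2 = bytes(a ^ c ^ p for a, c, p in zip(disk1, disk3, parity))
assert rebuilt_disk2 == disk2
print("recovered:", rebuilt_disk2.hex())
```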
Provides block-level striping with parity data distributed across all disks.
Not one of the original RAID levels, multiple RAID 1 mirrors are created, and a RAID 0
stripe is created over these.
Some devices use more than one level in a hybrid or nested arrangement, and some
vendors also offer non-standard proprietary RAID levels. Examples of non-standard RAID levels
include the following:
Not one of the original RAID levels, two RAID 0 stripes are created, and a RAID 1 mirror
is created over them. Used for both replicating and sharing data among disks.
1.14.11. Level 7
1.14.12. RAID 1E
A RAID 1 implementation with more than two disks. Data striping is combined with mirroring
each written stripe to one of the remaining disks in the array.
1.14.13. RAID S
Also called Parity RAID, this is EMC Corporation’s proprietary striped parity RAID system
used in its Symmetrix storage systems.
Since HFS was not originally designed to handle large hard disks, such as the 100 GB+ hard disks that are common today, Apple introduced an updated file system called HFS+, or HFS Extended, with the release of Mac OS 8.1. HFS+ allows for smaller clusters or block sizes,
which reduces the minimum size each file must take up. This means disk space can be used
much more efficiently on large hard disks. Mac OS X uses the HFS+ format by default and also
supports journaling, which makes it easier to recover data in case of a hard drive crash.
The main function of a computer port is to act as a point of attachment, where the cable
from the peripheral can be plugged in and allows data to flow from and to the device.
In Computers, communication ports can be divided into two types based on the type or
protocol used for communication. They are Serial Ports and Parallel Ports.
A serial port is an interface through which peripherals can be connected using a serial
protocol which involves the transmission of data one bit at a time over a single communication
line. The most common type of serial port is a D-Subminiature or a D-sub connector that carry
RS-232 signals.
A parallel port, on the other hand, is an interface through which the communication between
a computer and its peripheral device is in a parallel manner i.e. data is transferred in or out in
parallel using more than one communication line or wire. Printer port is an example of parallel
port.
1.16.1 PS/2
The PS/2 connector was developed by IBM for connecting a mouse and keyboard. It was introduced with IBM's Personal System/2 series of computers, hence the name PS/2 connector. PS/2 connectors are color coded: purple for the keyboard and green for the mouse.
Even though the pinouts of the mouse and keyboard PS/2 ports are the same, computers do not recognize the device when it is connected to the wrong port.
The PS/2 port is now considered a legacy port, as the USB port has superseded it, and very few modern motherboards still include it.
Even though the communication in PS/2 and USB is serial, technically the term Serial Port is used to refer to the interface that is compliant with the RS-232 standard. There are two types of serial ports that are commonly found on a computer: DB-25 and DE-9.
1.16. 3. DB-25
DB-25 is a variant of D-sub connector and is the original port for RS-232 serial
communication. They were developed as the main port for serial connections using RS-232
protocol but most of the applications did not require all the pins.
Hence, DE-9 was developed for RS-232 based serial communication while DB-25 was
rarely used as a serial port and often used as a parallel printer port as a replacement of the
Centronics Parallel 36 pin connector.
DE-9 is the main port for RS-232 serial communication. It is a D-sub connector with an E shell and is often miscalled DB-9. A DE-9 port is also called a COM port and allows full duplex serial communication between the computer and its peripheral.
Some of the applications of DE-9 port are serial interface with mouse, keyboard, modem,
uninterruptible power supplies (UPS) and other external RS-232 compatible devices.
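From software, such a COM port is typically driven through a serial library. The sketch below uses the third-party pyserial package (pip install pyserial); the port name, baud rate and the AT command are examples only and depend on the attached device.

```python
# Opening an RS-232 (COM) port with pyserial and exchanging a few bytes.
import serial

ser = serial.Serial("COM1", baudrate=9600, timeout=1)   # "/dev/ttyS0" on Linux
ser.write(b"AT\r\n")          # e.g. poll a serial modem (device-dependent)
reply = ser.readline()        # bits arrive one at a time over a single line
print(reply)
ser.close()
```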
The use of DB-25 and DE-9 ports for communication is in decline; they have been replaced by USB and other ports.
A parallel port is an interface between a computer and peripheral devices, like printers, that use parallel communication. The Centronics port is a 36-pin port that was developed as an interface for printers and scanners; hence a parallel port is also called a Centronics port.
Before the wide use of USB ports, parallel ports were very common on printers. The Centronics port was later replaced by the DB-25 port with a parallel interface.
Audio ports are used to connect speakers or other audio output devices with the computer.
The audio signals can be either analogue or digital and depending on that the port and its
corresponding connector differ.
It is the most commonly found audio port and can be used to connect stereo headphones or surround sound channels. A 6-connector system is included on the majority of computers for audio out as well as a microphone connection.
The 6 connectors are color coded as Blue, Lime, Pink, Orange, Black and Grey. These 6 connectors can be used for a surround sound configuration of up to 8 channels.
The Sony/Phillips Digital Interface Format (S/PDIF) is an audio interconnect used in home
media. It supports digital audio and can be transmitted using a coaxial RCA Audio cable or an
optical fiber TOSLINK connector.
Most computer and home entertainment systems are equipped with S/PDIF over TOSLINK. TOSLINK (Toshiba Link) is the most frequently used digital audio port and can support 7.1-channel surround sound with just one cable. In the following image, the port on the right is an S/PDIF port.
The VGA port is found on many computers, projectors, video cards and High Definition TVs. It is a D-sub connector consisting of 15 pins in 3 rows, called DE-15.
The VGA port is the main interface between computers and older CRT monitors. Even modern LCD and LED monitors support VGA ports, but the picture quality is reduced. VGA carries analogue video signals up to a resolution of 640X480.
With the increase in use of digital video, VGA ports are gradually being replaced by HDMI
and Display Ports. Some laptops are equipped with on-board VGA ports in order to connect to
external monitors or projectors. The pinout of a VGA port is shown below.
DVI is a high speed digital interface between a display controller like a computer and a
display device like a monitor. It was developed with an aim of transmitting lossless digital video
signals and replace the analogue VGA technology.
There are three types of DVI connectors based on the signals it can carry: DVI-I, DVI-D
and DVI-A. DVI-I is a DVI port with integrated analogue and digital signals. DVI-D supports only
digital signals and DVI-A supports only analogue signals.
The digital signals can be either single link or dual link where a single link supports a
digital signal up to 1920X1080 resolution and a dual link supports a digital signal up to 2560X1600
resolution. The following image compares the structures of DVI-I, DVI-D and DVI-A types along
with the pinouts.
1.16.7.3 Mini-DVI
It is a 32 pin port and is capable of transmitting DVI, composite, S-Video and VGA signals
with respective adapters. The following image shows a Mini-DVI port and its compatible cable.
1.16.7.4 Micro-DVI
Micro-DVI port, as the name suggests is physically smaller than Mini-DVI and is capable
of transmitting only digital signals.
This port can be connected to external devices with DVI and VGA interfaces and respective
adapters are required. In the following image, a Micro-DVI port can be seen adjacent to
headphone and USB ports.
Display Port is a digital display interface with optional multiple-channel audio and other forms of data. Display Port was developed with the aim of replacing VGA and DVI ports as the main interface between a computer and a monitor.
The Display Port has a 20-pin connector, far fewer pins than a DVI port, and offers better resolution.
The RCA connector can carry composite video and stereo audio signals over three cables. Composite video transmits analogue video signals, and its connector is the yellow-colored RCA connector.
The video signals are transmitted over a single channel along with the line and frame
synchronization pulses at a maximum resolution of 576i (standard resolution).
The red and white connectors are used for stereo audio signals (red for right channel and
white for left channel).
Component Video is an interface where the video signal is split into more than two channels, and the quality of the video signal is better than that of Composite video.
Like composite video, component video transmits only video signals and two separate
connectors must be used for stereo audio. Component video port can transmit both analogue
and digital video signals.
The commonly found Component video port uses 3 connectors, color coded as Green, Blue and Red.
1.16.7.8 S-Video
S-Video or Separate Video connector is used for transmitting only video signals. The
picture quality is better than that of Composite video but has a lesser resolution than Component
video.
The S-Video port is generally black in color and is present on all TVs and most computers.
S-Video port looks like a PS/2 port but consists of only 4 pins.
Out of the 4 pins, one pin is used to carry the intensity signals (black and white) and other
pin is used to carry color signals. Both these pins have their respective ground pins.
1.16.7.9 HDMI
HDMI can be used to carry uncompressed video and compressed or uncompressed audio
signals.
The HDMI connector consists of 19 pins and the latest version of HDMI i.e. HDMI 2.0 can
carry digital video signal up to a resolution of 4096×2160 and 32 audio channels.
1.17 USB
Universal Serial Bus (USB) replaced serial ports, parallel ports, PS/2 connectors, game
ports and power chargers for portable devices.
USB port can be used to transfer data, act as an interface for peripherals and even act as
power supply for devices connected to it. There are three kinds of USB ports: Type A, Type B or
mini USB and Micro USB.
USB Type A - The USB Type-A port is a 4-pin connector. There are different versions of Type-A USB ports: USB 1.1, USB 2.0 and USB 3.0. USB 3.0 is the common standard and supports a data rate of 400 MBps.
USB 3.1 has also been released and supports a data rate of up to 10 Gbps. USB 2.0 ports are color coded black and USB 3.0 ports blue.
USB Type-C is the latest specification of USB and is a reversible connector. USB Type-C is intended to replace Types A and B and is considered future proof.
The USB Type-C port consists of 24 pins. The pinout diagram of USB Type-C is shown below. USB Type-C can handle a current of 3 A.
This ability to handle a high current is used in the latest fast charging technology, where a smartphone's battery reaches full charge in very little time.
1.17.2. RJ-45
Ethernet is a networking technology that is used to connect a computer to the Internet and to communicate with other computers or networking devices.
The interface used for computer networking and telecommunications is known as a Registered Jack (RJ), and the RJ-45 port in particular is used for Ethernet over cable. The RJ-45 connector is an 8 pin – 8 contact (8P – 8C) type modular connector.
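Application traffic that eventually leaves the machine through this Ethernet port can be generated with Python's standard socket module, as in the brief sketch below; the host name is an example, and any reachable web server would do.

```python
# A minimal TCP exchange over the network interface.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")   # tiny HTTP request
    print(sock.recv(200).decode(errors="replace"))                  # first bytes of the reply
```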
The latest Ethernet technologies, Gigabit Ethernet and 10 Gigabit Ethernet, support data transfer rates of 1 and 10 gigabits per second respectively. The Ethernet or LAN port with the 8P – 8C type connector, along with the male RJ-45 cable, is shown below.
1.17.3. RJ-11
RJ-11 is another type of Registered Jack that is used as an interface for telephone, modem or ADSL connections. Even though computers are almost never equipped with an RJ-11 port, it is the main interface in all telecommunication networks.
RJ-45 and RJ-11 ports look alike, but RJ-11 is a smaller port and uses a 6 point – 4 contact (6P – 4C) connector, even though a 6 point – 2 contact (6P – 2C) connector is sufficient. The following is a picture of an RJ-11 port and its compatible connector.
1.17.4. e-SATA
e-SATAp ports are hybrid ports capable of supporting both e-SATA and USB. Neither the SATA organization nor the USB organization has officially approved the e-SATAp port, and it must be used at the user's own risk.
The keyboard is the most common and very popular input device, which helps to input data to the computer. The layout of the keyboard is like that of a traditional typewriter, although there are some additional keys provided for performing additional functions.
Keyboards come in two sizes, 84 keys or 101/102 keys, but keyboards with 104 keys or 108 keys are now also available for Windows and the Internet.
1.17.5.2. Mouse
Mouse is the most popular pointing device. It is a very famous cursor-control device
having a small palm size box with a ball at its base, which senses the movement of the mouse
and sends corresponding signals to the CPU when the mouse buttons are pressed.
Generally, it has two buttons called the left and the right button and a wheel is present
between the buttons. A mouse can be used to control the position of the cursor on the screen,
but it cannot be used to enter text into the computer.
1.17.5.3. Advantages
· Easy to use
· Moves the cursor faster than the arrow keys of the keyboard.
1.17.5.4. Joystick
Joystick is also a pointing device, which is used to move the cursor position on a monitor
screen. It is a stick having a spherical ball at its both lower and upper ends. The lower spherical
ball moves in a socket. The joystick can be moved in all four directions.
The function of the joystick is similar to that of a mouse. It is mainly used in Computer
Aided Designing (CAD) and playing computer games.
Light pen is a pointing device similar to a pen. It is used to select a displayed menu item
or draw pictures on the monitor screen. It consists of a photocell and an optical system placed
in a small tube.
When the tip of a light pen is moved over the monitor screen and the pen button is
pressed, its photocell sensing element detects the screen location and sends the corresponding
signal to the CPU.
A track ball is an input device that is mostly used in notebook or laptop computers, instead of a mouse. It is a ball which is half inserted into the device, and by moving fingers over the ball, the pointer can be moved.
Since the whole device is not moved, a track ball requires less space than a mouse. A
track ball comes in various shapes like a ball, a button, or a square.
1.17.5.7. Scanner
Scanner is an input device, which works more like a photocopy machine. It is used when
some information is available on paper and it is to be transferred to the hard disk of the computer
for further manipulation.
Scanner captures images from the source which are then converted into a digital form
that can be stored on the disk. These images can be edited before they are printed.
1.17.5.8. Digitizer
Digitizer is an input device which converts analog information into digital form. Digitizer
can convert a signal from the television or camera into a series of numbers that could be stored
in a computer. They can be used by the computer to create a picture of whatever the camera
had been pointed at.
Digitizer is also known as Tablet or Graphics Tablet as it converts graphics and pictorial
data into binary inputs. A graphic tablet as digitizer is used for fine works of drawing and image
manipulation applications.
1.17.5.9. Microphone
Microphone is an input device to input sound that is then stored in a digital form.
The microphone is used for various applications such as adding sound to a multimedia
presentation or for mixing music.
1.17.5.10. MICR
The MICR input device is generally used in banks, as there are a large number of cheques to be processed every day. The bank's code number and cheque number are printed on the cheques with a special type of ink that contains particles of magnetic material and is machine readable.
This reading process is called Magnetic Ink Character Recognition (MICR). The main advantage of MICR is that it is fast and less error-prone.
1.17.5.11. Optical Character Reader (OCR)
OCR scans the text optically, character by character, converts it into a machine-readable code, and stores the text in the system memory.
1.17.5.12. Bar Code Reader
A Bar Code Reader is a device used for reading bar-coded data (data in the form of light and dark lines). Bar-coded data is generally used in labelling goods, numbering books, etc. It may be a hand-held scanner or may be embedded in a stationary scanner.
A Bar Code Reader scans a bar code image and converts it into an alphanumeric value, which is then fed to the computer that the bar code reader is connected to.
1.17.5.13. Optical Mark Reader (OMR)
OMR is a special type of optical scanner used to recognize the type of mark made by pen or pencil. It is used where one out of a few alternatives is to be selected and marked.
It is specially used for checking the answer sheets of examinations having multiple choice questions.
1.18.1 VGA
A Video Graphics Array (VGA) cable is a type of computer cable that carries visual display data from the computer to the monitor. A complete VGA cable consists of a cable and a connector at each end, and the connectors are typically blue. A VGA cable is used primarily to link a computer to a display device: one end of the VGA cable is attached to the port on the graphics card on the computer motherboard, and the other to the port on the display device. When the computer is running, the video card transmits video display signals via the VGA cable, which are then displayed on the display device. VGA cables are available in different types, where shorter cables with coaxial cable and insulation provide better video/display quality.
1.18.2 SVGA
A Super Video Graphics Array (SVGA) monitor is an output device which uses the SVGA
standard. SVGA is a video-display-standard type developed by the Video Electronics Standards
Association (VESA) for IBM PC compatible personal computers (PCs).
SVGA includes an array of computer display standards utilized for the manufacturing of
computer monitors and screens. It features a screen resolution of 800x600 pixels.
Monitors that use the SVGA graphic standard are intended to perform better than normal
VGA monitors. SVGA monitors make use of a VGA connector (DE-15 a.k.a HD-15).
A VGA monitor generally displays graphics in 640x480 pixels, or may be an even smaller
320x200 pixels while SVGA monitors display a better resolution of 800x600 pixels or more.
When comparing SVGA with other display standards like Extended Graphics Array (XGA)
or VGA, the standard resolution of SVGA is identified as 800x600 pixels.
The SVGA standard was initially defined as a graphics resolution of 800x600 with 4-bit pixels (480,000 pixels in total). This implies that every single pixel can be one of 16 different colors. Later, this definition was extended to a resolution of 1024x768 with 8-bit pixels, which means that there is a selection of 256 colors.
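The pixel and colour counts quoted above follow directly from the resolution and the bits per pixel, as this short Python check (plain arithmetic, using only the figures in the text) shows:

# Pixel count and number of colours for the display standards discussed above.
def pixels(width, height):
    return width * height

def colours(bits_per_pixel):
    return 2 ** bits_per_pixel

print(pixels(800, 600), colours(4))    # SVGA as first defined: 480,000 pixels, 16 colours
print(pixels(1024, 768), colours(8))   # the later extension:   786,432 pixels, 256 colours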
1.18.3 AGP
AGP stands for "Accelerated Graphics Port". AGP is a type of expansion slot designed specifically for graphics cards. It was developed in 1996 as an alternative to the PCI standard. Since the AGP interface provides a dedicated bus for graphics data, AGP cards are able to render graphics faster than comparable PCI graphics cards.
Like PCI slots, AGP slots are built into a computer’s motherboard. They have a similar
form factor to PCI slots, but can only be used for graphics cards. Additionally, several AGP
specifications exist, including AGP 1.0, 2.0, and 3.0, which each use a different voltage. Therefore,
AGP cards must be compatible with the specification of the AGP slot they are installed in.
Since AGP cards require an expansion slot, they can only be used in desktop computers.
While AGP was popular for about a decade, the technology has been superseded by PCI Express,
which was introduced in 2004. For a few years, many desktop computers included both AGP
and PCI Express slots, but eventually AGP slots were removed completely. Therefore, most
desktop computers manufactured after 2006 do not include an AGP slot.
A video card (also called a display card, graphics card, display adapter or graphics adapter) is an expansion card which generates a feed of output images to a display (such as a computer monitor). Frequently, these are advertised as discrete or dedicated graphics cards, emphasizing the distinction between them and integrated graphics. At the core of both is the graphics processing unit (GPU), which is the main component that performs the actual computations; it should not be confused with the video card as a whole, although "GPU" is often used to refer to video cards.
Most video cards are not limited to simple display output. Their integrated graphics processor can perform additional processing, removing this task from the central processor of the computer. For example, cards produced by Nvidia and AMD (ATI) implement graphics pipelines such as OpenGL and DirectX at the hardware level. Usually the graphics card is made in the form of a printed circuit board (expansion board) and inserted into an expansion slot, universal or specialized (AGP, PCI Express). Some have been made using dedicated enclosures, which are connected to the computer via a docking station or a cable.
These monitors employ CRT technology, which was used most commonly in the
manufacturing of television screens. With these monitors, a stream of intense high energy
electrons is used to form images on a fluorescent screen. A cathode ray tube is basically a
vacuum tube containing an electron gun at one end and a fluorescent screen at another end.
While CRT monitors can still be found in some organizations, many offices have stopped
using them largely because they are heavy, bulky, and costly to replace should they break.
While they are still in use, it would be a good idea to phase these monitors out for cheaper,
lighter, and more reliable monitors.
The LCD monitor incorporates one of the most advanced technologies available today. Typically, it consists of a layer of color or monochrome pixels arranged schematically between a pair of transparent electrodes and two polarizing filters. The optical effect is produced by polarizing the light in varying amounts and making it pass through the liquid crystal layer. The two types of LCD technology available are the active matrix (TFT) and the passive matrix. TFT generates better picture quality and is more reliable. Passive matrix, on the other hand, has a slow response time and is gradually becoming outdated.
The advantages of LCD monitors include their compact size, which makes them lightweight. They also do not consume as much electricity as CRT monitors and can be run off batteries, which makes them ideal for laptops.
Images transmitted by these monitors don’t get geometrically distorted and have little
flicker. However, this type of monitor does have disadvantages, such as its relatively high price,
an image quality which is not constant when viewed from different angles, and a monitor resolution
that is not always constant, meaning any alterations can result in reduced performance.
LED monitors are the latest type of monitor on the market today. These are flat-panel, or slightly curved, displays which make use of light-emitting diodes for back-lighting, instead of the cold cathode fluorescent lamp (CCFL) back-lighting used in LCDs. LED monitors use much less power than CRT and LCD monitors and are considered far more environmentally friendly.
The advantages of LED monitors are that they produce images with higher contrast, have less negative environmental impact when disposed of, are more durable than CRT or LCD monitors, and feature a very thin design. They also do not produce much heat while running. The only downside is that they can be more expensive, especially high-end monitors such as the new curved displays being released.
Being aware of the different types of computer monitors available should help you choose
one that’s most suited to your needs.
An impact printer makes contact with the paper. It usually forms the print image by pressing
an inked ribbon against the paper using a hammer or pins. Following are some examples of
impact printers.
The dot-matrix printer uses print heads containing from 9 to 24 pins. These pins produce patterns of dots on the paper to form the individual characters. A 24-pin dot-matrix printer produces more dots than a 9-pin dot-matrix printer, which results in much better quality and clearer characters. The general rule is: the more pins, the clearer the letters on the paper. The pins strike the ribbon individually as the print mechanism moves across the entire print line in both directions, i.e., from left to right, then right to left, and so on. The user can produce colour output with a dot-matrix printer by replacing the black ribbon with a ribbon that has colour stripes. Dot-matrix printers are inexpensive and typically print at speeds of 100-600 characters per second.
In order to get the quality of type found on typewriters, a daisy-wheel impact printer can be used. It is called a daisy-wheel printer because the print mechanism looks like a daisy; at the end of each "petal" is a fully formed character which produces solid-line print. A hammer strikes a "petal" containing a character against the ribbon, and the character prints on the paper. Its speed is slow, typically 25-55 characters per second.
A drum printer consists of a solid, cylindrical drum that has raised characters in bands on
its surface. The number of print positions across the drum equals the number available on the
page. This number typically ranges from 80-132 print positions. The drum rotates at a rapid
speed. For each possible print position there is a print hammer located behind the paper. These
hammers strike the paper and the ink ribbon against the proper character on the drum as it
passes. One revolution of the drum is required to print each line. This means that all characters
on the line are not printed at exactly the same time, but the time required to print the entire line
is fast enough to call them line printers. Typical speeds of drum printers are in the range of 300
to 2000 lines per minute.
A chain printer uses a chain of print characters wrapped around two pulleys. Like the
drum printer, there is one hammer for each print position. Circuitry inside the printer detects
when the correct character appears at the desired print location on the page. The hammer then
strikes the page, pressing the paper against a ribbon and the character located at the desired
print position. An impression of the character is left on the page. The chain keeps rotating until
all the required print positions on the line have been filled. Then the page moves up to print the next
line. Speeds of chain printers range from 400 to 2500 characters per minute.
A band printer operates similar to chain printer except it uses a band instead of a chain
and has fewer hammers. Band printer has a steel band divided into five sections of 48 characters
each. The hammers on a band printer are mounted on a cartridge that moves across the paper
to the appropriate positions. Characters are rotated into place and struck by the hammers. Font
styles can easily be changed by replacing a band or chain.
Non-impact printers do not use a striking device to produce characters on the paper; because these printers do not hammer against the paper, they are much quieter. Following are some examples of non-impact printers.
Ink-jet printers work in the same fashion as dot-matrix printers in that they form images or characters with little dots. However, the dots are formed by tiny droplets of ink. Ink-jet printers form characters on paper by spraying ink from tiny nozzles through an electrical field that arranges the charged ink particles into characters, at a rate of approximately 250 characters per second. The ink is absorbed into the paper and dries instantly. Various colors of ink can also be used.
One or more nozzles in the print head emit a steady stream of ink drops. Droplets of ink are electrically charged after leaving the nozzle. The droplets are then guided to the paper by electrically charged deflecting plates (one plate has a positive charge (the upper plate) and the other a negative charge (the lower plate)). A nozzle for black ink may be all that is needed to print text, but full-color printing is also possible with the addition of three extra nozzles for the cyan, magenta, and yellow primary colors. If a droplet is not needed for the character or image being formed, it is recycled back to its input nozzle.
Several manufacturers produce color ink-jet printers. Some of these printers come with all their color inks in a single cartridge. These printers produce less noise and print with better quality and greater speed.
A laser printer works like a photocopy machine. Laser printers produce images on paper by directing a laser beam at a mirror which bounces the beam onto a drum. The drum has a special coating to which toner (an ink powder) sticks. Using patterns of small dots, the laser beam conveys information from the computer to the positively charged drum, neutralizing the areas it strikes; the toner detaches from those areas of the drum that have been neutralized. As the paper rolls past the drum, the toner is transferred to the paper, printing the letters or other graphics on the paper. A hot roller then bonds the toner to the paper.
Laser printers use buffers that store an entire page at a time. When a whole page is loaded, it is printed. Laser printers are fast and print quietly, without producing much noise. Many home-use laser printers can print eight pages per minute, but faster office laser printers can print approximately 21,000 lines per minute, or about 437 pages per minute if each page contains 48 lines. When high-speed laser printers were introduced they were expensive.
Developments in the last few years have provided relatively low-cost laser printers for use in
small businesses.
Summary
· The main printed circuit board in a computer is known as the motherboard
· Random Access Memory, or RAM, usually refers to computer chips that temporarily
store dynamic data to enhance computer performance while you are working.
· The CPU is the core of any computer. Everything depends on the CPU's ability to process the instructions that it receives.
· Monitor types include cathode ray tube (CRT), LCD and LED monitors.
Reference
1. https://searchwindowsserver.techtarget.com/definition/NTFS
2. https://www.techopedia.com/definition/1369/file-allocation-table-fat
3. https://opensource.com/life/16/10/introduction-linux-filesystems
UNIT 2
OPERATING SYSTEM
Learning Objectives
· Client-Server Model
· Command-line interface
· OS inter-process communication
· Functions
· Device driver
Structure
2.1 Basic Operating System Concepts
· Enforcer of sharing, fairness and security with the goal of better overall performance
a. Sockets
b. Inter-process communication
client on each computer. The term “client” may also be applied to computers or devices that run
the client software or users that use the client software.
A client is part of a client–server model, which is still used today. Clients and servers may
be computer programs run on the same machine and connect via inter-process communication
techniques. Combined with Internet sockets, programs may connect to a service operating on
a possibly remote system through the Internet protocol suite. Servers wait for potential clients
to initiate connections that they may accept.
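A minimal sketch of this model in Python is shown below: a server waits for a connection on a socket, and a client initiates the connection, sends a request and prints the reply. The loopback address, port number 50007 and message are arbitrary assumptions chosen purely for illustration.

# Minimal client-server sketch using Internet sockets on one machine.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # assumed local address and port

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)                      # wait for a potential client
        conn, _ = srv.accept()             # accept the connection the client initiates
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"server received: " + data)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                            # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))              # the client initiates the connection
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())         # prints: server received: hello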
The term was first applied to devices that were not capable of running their own stand-
alone programs, but could interact with remote computers via a network. These computer
terminals were clients of the time-sharing mainframe computer.
Examples of server operating systems include:
· Windows Server
· Mac OS X Server
The two main architectures are the 2-tier and the 3-tier architecture.
2-tier client-server system architecture: The client has access to the database directly, without involving any intermediary. It is also used to perform application logic, whereby the application code is assigned to each client workstation.
3-tier client-server system architecture: This architecture involves the client PC, a database server and an application server. In this architecture, the client contains presentation logic only, so fewer resources and less coding are needed on the client. It supports one server being in charge of many clients and provides more resources in the server.
The CLI was the primary means of interaction with most computer systems on computer
terminals in the mid-1960s, and continued to be used throughout the 1970s and 1980s on
OpenVMS, Unix systems and personal computer systems including MS-DOS, CP/M and Apple
DOS. The interface is usually implemented with a command line shell, which is a program that
accepts commands as text input and converts commands into appropriate operating system
functions.
Today, many end users rarely, if ever, use command-line interfaces and instead rely upon
graphical user interfaces and menu-driven interactions. However, many software developers,
system administrators and advanced users still rely heavily on command-line interfaces to perform
tasks more efficiently, configure their machine, or access programs and program features that
are not available through a graphical interface.
Alternatives to the command line include, but are not limited to, text user interface menus (see IBM AIX SMIT for example), keyboard shortcuts, and various other desktop metaphors centered on the pointer (usually controlled with a mouse). Examples of this include Windows versions 1, 2, 3, 3.1, and 3.11 (an OS shell that runs in DOS), DOS Shell, and Mouse Systems Power Panel.
Programs with command-line interfaces are generally easier to automate via scripting.
Command-line interfaces for software other than operating systems include a number of
programming languages such as Tcl/Tk, PHP, and others, as well as utilities such as the
compression utility WinZip, and some FTP and SSH/Telnet clients.
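As a brief, hedged example of such automation, the Python script below runs another command-line program (here the Python interpreter itself, asking for its version number) and captures its output; any other command could be substituted.

# Automating a command-line program from a script instead of typing it by hand.
import subprocess
import sys

result = subprocess.run([sys.executable, "--version"], capture_output=True, text=True)
print("exit code:", result.returncode)
print("output   :", (result.stdout or result.stderr).strip())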
Operating system (OS) command line interfaces are usually distinct programs supplied
with the operating system. A program that implements such a text interface is often called a
command-line interpreter, command processor or shell.
Although the term ‘shell’ is often used to describe a command-line interpreter, strictly
speaking a ‘shell’ can be any program that constitutes the user-interface, including fully graphically
oriented ones. For example, the default Windows GUI is a shell program named
EXPLORER.EXE, as defined in the SHELL=EXPLORER.EXE line in the WIN.INI configuration
file. These programs are shells, but not CLIs.
Application programs (as opposed to operating systems) may also have command line
interfaces. An application program may support none, any, or all of these three major types of
command line interface mechanisms:
Interactive command line sessions: After launch, a program may provide an operator
with an independent means to enter commands in the form of text.
Some applications support only a CLI, presenting a CLI prompt to the user and acting
upon command lines as they are entered. Other programs support both a CLI and a GUI. In
some cases, a GUI is simply a wrapper around a separate CLI executable file. In other cases,
a program may provide a CLI as an optional alternative to its GUI. CLIs and GUIs often support
different functionality. For example, all features of MATLAB, a numerical analysis computer
program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features.
Advantages of a command-line interface:
· This type of interface needs much less memory (Random Access Memory) to use compared with other types of user interfaces.
· This type of interface does not use as much CPU processing time as others.
· A low-resolution, cheaper monitor can be used with this type of interface.
Disadvantages of a command-line interface:
· If the user mistypes an instruction, it is often necessary to start from scratch again.
· There are a large number of commands to be learned; in the case of Unix there can be more than a hundred.
· The user cannot simply guess what an instruction might be and cannot just 'have a go'.
A directory is a file that acts as a folder for other files. A directory can also contain other
directories (subdirectories); a directory that contains another directory is called the parent directory
of the directory it contains.
A directory tree includes a directory and all of its files, including the contents of all
subdirectories. (Each directory is a “branch” in the “tree.”) A slash character alone (`/’) is the
name of the root directory at the base of the directory tree hierarchy; it is the trunk from which
all other files or directories branch.
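A short sketch of these ideas using Python's pathlib module follows; the path /usr/local/bin is just an assumed example.

# Directories, parent directories and the root directory, as described above.
from pathlib import PurePosixPath

p = PurePosixPath("/usr/local/bin")   # an assumed example path
print(p.parent)                       # /usr/local  -- the parent directory of bin
print(p.parents[-1])                  # /           -- the root at the base of the tree
print(list(p.parents))                # every directory on the way up to the root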
2.7.1. Functions
· Naming Files: How to give names to the files and directories.
A system file is a file critical to the proper functioning of an operating system which, if deleted or modified, may cause the system to stop working. Often these files are hidden and cannot be deleted because they are in use by the operating system. In Windows or DOS, 'system' is also an attribute that can be added to any file, and many system files use the .sys file extension. Although the attribute tells the operating system that the file is important, setting it does not by itself make the file a true system file.
Booting (also known as booting up) is the initial set of operations that a computer system
performs when electrical power is switched on. The process begins when a computer that has
been turned off is re-energized, and ends when the computer is ready to perform its normal
operations. On modern general purpose computers, this can take tens of seconds and typically
involves performing power-on self-test, locating and initializing peripheral devices, and then
finding, loading and starting an operating system. Many computer systems also allow these
operations to be initiated by a software command without cycling power, in what is known as a
soft reboot, though some of the initial operations might be skipped on a soft reboot. A boot
loader is a computer program that loads the main operating system or runtime environment for
the computer after completion of self-tests.
The computer term boot is short for bootstrap or bootstrap load and derives from the
phrase to pull oneself up by one’s bootstraps. The usage calls attention to the paradox that a
computer cannot run without first loading software but some software must run before any
software can be loaded. Early computers used a variety of ad-hoc methods to get a fragment of
software into memory to solve this problem. The invention of integrated circuit Read-only memory
(ROM) of various types solved the paradox by allowing computers to be shipped with a start-up
program that could not be erased, but growth in the size of ROM has allowed ever more elaborate
start up procedures to be implemented.
There are numerous examples of single and multi-stage boot sequences that begin with
the execution of boot program(s) stored in boot ROMs. During the booting process, the binary
code of an operating system or runtime environment may be loaded from nonvolatile secondary
storage (such as a hard disk drive) into volatile, or random-access memory (RAM) and then
executed. Some simpler embedded systems do not require a noticeable boot sequence to
begin functioning and may simply run operational programs stored in read-only memory (ROM)
when turned on.
In order for a computer to successfully boot, its BIOS, operating system and hardware
components must all be working properly; failure of any one of these three elements
will likely result in a failed boot sequence.
When the computer’s power is first turned on, the CPU initializes itself, which is
triggered by a series of clock ticks generated by the system clock. Part of the CPU’s
initialization is to look to the system’s ROM BIOS for its first instruction in the startup
program. The ROM BIOS stores the first instruction, which is the instruction to run
the power-on self-test (POST), in a predetermined memory address. POST begins
by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect
a battery failure, it then continues to initialize the CPU, checking the inventoried
hardware devices (such as the video card), secondary storage devices, such as
hard drives and floppy drives, ports and other hardware devices, such as the keyboard
and mouse, to ensure they are functioning properly.
Once the POST has determined that all components are functioning properly and
the CPU has successfully initialized, the BIOS looks for an OS to load.
The BIOS typically looks to the CMOS chip to tell it where to find the OS, and in
most PCs, the OS loads from the C drive on the hard drive even though the BIOS
has the capability to load the OS from a floppy disk, CD or ZIP drive. The order of
drives that the CMOS looks to in order to locate the OS is called the boot sequence,
which can be changed by altering the CMOS setup. Looking to the appropriate boot
drive, the BIOS will first encounter the boot record, which tells it where to find the
beginning of the OS and the subsequent program file that will initialize the OS.
Once the OS initializes, the BIOS copies its files into memory and the OS basically
takes over control of the boot process. Now in control, the OS performs another
inventory of the system’s memory and memory availability (which the BIOS already
checked) and loads the device drivers that it needs to control the peripheral devices,
such as a printer, scanner, optical drive, mouse and keyboard. This is the final
stage in the boot process, after which the user can access the system’s applications
to perform tasks.
When a new hardware device is attached to a computer, the appropriate device drivers should be installed. A device driver essentially converts the more general input/
output instructions of the operating system to messages that the device type can understand.
Some Windows programs are virtual device drivers. These programs interface with the
Windows Virtual Machine Manager. There is a virtual device driver for each main hardware
device in the system, including the hard disk drive controller, keyboard, and serial and parallel
ports. They’re used to maintain the status of a hardware device that has changeable settings.
Virtual device drivers handle software interrupts from the system rather than hardware interrupts.
In Windows operating systems, a device driver file usually has a file name suffix of DLL or
EXE. A virtual device driver usually has the suffix of VXD.
Protection (in cooperation with the OS) – Only authorized applications can use
the device
Multiplexing (in cooperation with the OS) – Multiple applications can use the
device concurrently
Summary
An operating system is basically an intermediary agent between the user and the computer hardware.
3-tier client-server system architecture: This architecture involves the client PC,
Database server and Application server.
A file is a collection of data that is stored on disk and that can be manipulated as a
single unit by its name.
A directory is a file that acts as a folder for other files. A directory can also contain
other directories (subdirectories); a directory that contains another directory is called
the parent directory of the directory it contains.
A device driver is a program that controls a particular type of device that is attached
to the computer. There are device drivers for printers, displays, CD-ROM readers,
diskette drives, and so on. When an operating system is bought, many device drivers
are built into the product.
Keywords
Operating system
Client OS
Server OS
Command line
Drivers
Reference
https://searchenterprisedesktop.techtarget.com/definition/device-driver
http://faculty.salina.k-state.edu/tim/ossg/Introduction/intro.html
https://smallbusiness.chron.com/primary-function-clientserver-system-46753.html
https://www.techopedia.com/definition/30145/server-operating-system-server-os
https://pdfs.semanticscholar.org/e1d2/133541a5d22d0ee60ee39a0fece970a4ddbf.pdf
http://dsl.org/cookbook/cookbook_8.html
https://en.wikipedia.org/wiki/System_file
https://www.computerhope.com/jargon/s/systfile.htm
UNIT-3
COMPUTER PRINCIPLES AND A
BLACK BOX MODEL OF THE PC
Learning Objectives
· Computer Principles
· Format of instructions
· Motherboard
· Components of PC
Structure
3.1 Computer Principles
3.4 Buses
3.9 Motherboard
The part of the computer that carries out the function of executing instructions is called the processor, and the relationship between this element and the memory is what needs to be examined in more detail. This can be done by means of a worked example, showing step by step the principles involved and how data in the memory is interpreted and manipulated by the processor.
Memory can be either volatile or non-volatile. Volatile memory is memory that loses its contents when the computer or hardware device loses power; computer RAM is an example of volatile memory. This is why data that has not been saved is lost when a computer freezes or reboots while a program is being worked on.
Figure 3.1 represents a very simple diagram showing a processor and a memory as two black boxes connected together by two arrowed lines. The black boxes are shown as separate because it is very likely that they will be implemented using different electronic chips: a processor chip and a memory chip (or possibly a set of memory chips). They are connected together by flexible cables (or tracks on a printed circuit board) which are made up of several wires in parallel. Such multiple connections are called buses.
The basic mechanism for our example processor is very simple. The idea of the stored program concept, as implemented in a modern computer, was first expounded by John von Neumann (1945). This idea decrees that instructions are held sequentially in the memory and that the processor executes each one, in turn, from the lowest address in memory to the highest address in memory, unless otherwise instructed. To achieve this the processor maintains a record of where it has got to so far in executing instructions. It does this using an internal store that is variously called the counter register, the sequence control register or the program counter.
Again, for the purposes of our example, this sequence has been simplified into four steps:
· fetch
· interpret
· update and
· execute
3.2.2.1. Fetch
In the fetch step, the processor will first of all use its program counter to send a signal to the main memory requesting that it be sent a copy of the next instruction to be executed. It will do this using the address bus. The memory will then respond by sending back a copy of the binary patterns that it holds at the address it has been given. It will do this using the data bus. The processor will then take the binary patterns that represent the instruction from the data bus and place them in its instruction registers in readiness for decoding.
3.2.2.2. Interpret
Once the transfer is complete, the processor will then enter the interpret step, where it will interpret or decode the patterns as an imperative instruction. Part of the pattern will be used to select the action that the processor should perform, and part will be used to determine the object to which this action should be applied, as described above.
3.2.2.3. Update
On completion of its preparations to perform the instruction, the processor will then enter the update step. In this step, the processor prepares its program counter so that it is ready for the next instruction in sequence. In general, it does this by calculating the length in bytes of the current instruction and adding that value to its program counter. Given that the system is set up to obey a sequence of instructions, one after the other, from lower address to higher address, the program counter, having had this length added, will thus be pointing to the start of the next instruction in the sequence.
3.2.2.4. Execute
Finally, the processor enters the execute step, where the action defined in the interpret step is applied to the object defined in the interpret step. To do this, it may well use an additional register as a scratchpad for interim results, and this is sometimes known as an accumulator or general purpose register. After that, the processor repeats the cycle starting with the fetch step once again.
Fig. 3.3 shows a more detailed view of the two black boxes considered earlier, now rotated through 90° and expanded so that what is contained within them can be seen. Here it is possible to see into a small portion of the main memory on the left-hand side and observe exactly what patterns are in the bytes with addresses 3, 4 and 5 and 31 through to 36.
All that is required for the processor, on the right-hand side, is a small element of internal memory for the registers and a four-step cyclic control mechanism, which can be compared with the four-stroke internal combustion engine. Where the internal combustion engine has the four strokes "suck", "squeeze", "bang" and "blow", the processor cycle has the four steps "fetch", "interpret", "update" and "execute".
One rather important difference between the two models, however, is their rotational speed. In the case of a typical modern processor, the Intel Pentium 4 for example, the speed of operation can be as high as 10,000 MIPS or more. This suggests that, since each "revolution" causes one instruction to be carried out, the equivalent "rotational speed" is 10,000 million revolutions per second, compared with the 4,000 or so revolutions per minute of an internal combustion engine.
The processor is shown connected to the main memory by the two buses, the address bus at the top and the data bus at the bottom. There is a third bus, not shown on the diagram for the sake of clarity, known as the control bus, and this is concerned with control activities, such as the direction of data flow on the data bus and the general timing of events throughout the system.
As described above, the program counter in the processor holds the address of where in main memory the next instruction that the processor is to execute can be found (in this example, address 31), and the doing and using registers are our versions of the instruction registers used by the processor to interpret the current instruction. The gp register is the general purpose scratchpad register that was also referred to earlier. We have used throughout registers that are only one byte in size so as to keep the example simple. Again, this does not affect the principles, but modern practical processors are likely to have two, four and even eight byte registers.
The in-built control mechanism of our example processor causes it to cycle clockwise
through the four steps: fetch, interpret, update and execute, over and over again, repeating the
same cycle continuously all the while that the processor is switched on.
The rate at which the processor cycle is executed is controlled by a system clock and, as
mentioned above, this might well be running at several thousands of millions of cycles per
second.
The major elements are the address and data buses. Recalling that they are simply sets of electrical connections, it will be no surprise to note that they tend to be implemented as parallel tracks on a printed circuit board (PCB). This brings us then to the most important component of all, the motherboard. This normally hosts the processor and the memory chips, and as a result the buses between them are usually just parallel tracks on the motherboard. Also on the motherboard is the chipset that carries out all the housekeeping needed to keep control of the information transfers between the processor, the memory and all the peripheral devices. In addition, the motherboard hosts the real-time clock, which contains within it the battery-backed memory known as the CMOS (Complementary Metal Oxide Semiconductor) memory, and the Basic Input Output System (BIOS) Read-Only Memory (ROM). One particularly clever idea in the original design of the PC was to arrange for the various buses to be accessible in a standard form so that expansion cards could be fitted into expansion slots on the motherboard and thus gain access to all the buses. The motherboard normally has a number of these expansion slot connectors either directly fitted onto the motherboard itself, or fitted onto a separate riser board or daughterboard that may be connected at right angles to the motherboard.
The next component that needs to be looked at is the processor. This technology has advanced at an unprecedented rate, in terms of both performance and price, over the past 25 years.
3.4. Buses
One bus (the address bus) has a single arrow on it, indicating a one-way transfer of data, and the second bus (the data bus) has two arrows, indicating a two-way transfer of data. If binary patterns are required to pass between the processor and the memory, we might consider an appropriate unit as being the byte that is being processed. Recalling that a byte consists of eight binary bits, a suitable form of connection that would permit all eight bits of a byte to be transferred in one go would be eight parallel lines: a separate line for each bit. This is precisely the form that a bus takes: a set of parallel lines that permits the transfer of several bits of data all at once. Buses come in many sizes, and the eight-bit data bus is just one example.
The buses are no more than a set of parallel electrical connections: one connection for each bit of information. Hence an eight-bit bus can transfer eight bits, or one byte, of information at a time. From this, it becomes apparent that although the speed at which the processor operates is a very important factor in the overall performance of the system, it is the data transfer rates across the system buses which effectively act as bottlenecks and limit the performance of the whole. For this reason, there has been much development of buses throughout the life of the PC to try to overcome these various performance bottlenecks as the major elements of the system have all become so much faster.
Autonomous devices are devices that can operate without every action being controlled by the main processor. The writing of a memory block to a hard disk drive, for instance, would be initiated by the processor, but the disk controller might then carry out the detailed transfer of each byte of memory autonomously, referring back to the processor with an interrupt only when the transfer was complete. This is sometimes also referred to as Direct Memory Access (DMA). The address bus provides the means for the processor or an autonomous device to specify the address of some other device (or the address of part of some other device, such as a memory byte) with which it wishes to communicate.
The data bus, in the above diagram, provides the means by which the data bits are
passed, in parallel, between the memory and the processor after the address of the required
byte has been specified by the address bus.
The control bus carries, as can be expected, a number of control lines concerned with the housekeeping that is necessary to make all of this work. Examples of such control lines include signals that indicate whether valid data is present on the data bus and in which direction it should flow.
In addition, a number of clock timing signals are also distributed by means of the control bus.
The three-bus model derives from the early processors, with their sets of data, address
and control pins, which were used to construct the first PCs. The buses are implemented in
such a way as to provide a standard interface to other devices. Using this standard interface,
expansion cards containing new devices can easily be slotted into spare sockets on the
motherboard and be connected directly to the three buses.
· The wider the bus, the more data that can be passed in parallel on each machine
cycle and hence the faster the overall system should be able to run.
· Very early processors are known as 8 bit, because they have only 8 pins for access
to their external data bus.
· In the mid- to late 1970s came the first of the 16 bit processors, and the Intel
Pentium processors of today are 64 bit, which means that they can transfer 8 bytes
at a time over their external data bus.
· One point worth noting, in passing, is that modern processors are likely to have
much larger internal data buses, which interact with their on-chip caches, than the
external data buses that are evident to the rest of the system.
· In the case of the Intel Pentium 4, the internal data bus, on the chip itself, is 256 bits
wide.
· The width of the address bus, on the other hand, determines the maximum number
of different devices or memory bytes that can be individually addressed.
· In practice, it imposes a limit on the size of the memory that is directly accessible to
the processor, and thus dictates the memory capacity of the system.
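A quick check of this last point, using the address-bus widths mentioned in this unit (the 20-bit bus of the Intel 8088 and a 32-bit bus for comparison):

# Maximum directly addressable memory for a given address-bus width.
def addressable_bytes(address_bus_bits):
    return 2 ** address_bus_bits

print(addressable_bytes(20))   # 1048576 bytes, i.e. 1 MiB for the 8088's 20-bit address bus
print(addressable_bytes(32))   # 4294967296 bytes, i.e. 4 GiB for a 32-bit address bus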
One standard packaging arrangement that has been around since the early days of the PC is the Dual In Line (DIL) chip, as shown at Fig. 3.6 (from Microsoft ClipArt Gallery 2.0), and this is often known as a Dual In line Package (DIP).
· For the processor at the heart of the original IBM PC, the Intel 8088, the DIL package
has 40 pins, with 20 down each side.
o The data bus is 8 bits wide and the address bus is 20 bits wide, but 20 pins on the
package are also needed for control signals and for the power supply.
o In order to fit all of this onto a 40 pin package, many of the pins have to be used for
more than one purpose at different times in the processor cycle.
· With the Intel 8088, the address pins 0 to 7 also double up as the eight data bus
pins and the address pins 16 to 19 carry status information as well.
· DIL packages with more than 40 legs were found to be very unwieldy and difficult to
plug into their sockets, although the Texas Instruments TMS9900 had 64 pins in a
DIL package (see Adams, 1981).
· With this packaging, now often referred to as the form factor of the chip, we see the
more frequent use of Zero Insertion Force (ZIF) sockets, which allow the relatively
easy replacement and upgrading of pin grid array processor chips. A ZIF socket
allows a chip to be inserted into the socket without using any significant force.
· When the chip is properly seated in the socket, a spring-loaded locking plate is
moved into place by means of a small lever, which can be seen to the left of Fig.3.7,
and this grips all the pins securely making good electrical contact with them. In Fig.
3.7 the lever is shown in the down (locked) position on a Socket 939 ZIF socket.
· The form factors of processor chips for the PC introduced by Intel over the years
have seen a variety of pin grid array systems, initially known as Socket 1 through to
Socket 8, as shown at Table 3.2. Socket 8 is a Staggered Pin Grid Array (SPGA),
which was specially designed for Pentium Pro with its integrated L2 cache. Intel
also introduced what they called a Single Edge Contact (SEC) cartridge for some of
the Pentium II and III processors. This form factor is called Slot 1 and is a 242
contact daughter card slot.
· They then increased the number of contacts on the SEC cartridge to 330 and this
became known as Slot 2. Other manufacturers produced Slot A and Slot B SEC
form factors.
· Subsequently, for the Pentium III and Pentium 4, the Socket form factor returned to
favour and a variety of different Socket numbers were produced by Intel with the
Socket number indicating the number of pins on the PGA.
· Some examples are: Socket 370, Socket 423, Socket 478, Socket 479, Socket 775
and so forth. In addition, other manufacturers produced their own versions, such
as:
o Socket 754,
o Socket 939 (the one shown in Fig. 3.7 for an AMD chip),
· A more radical approach to the packaging problem is to place the die (or silicon
chip) directly onto the printed circuit board and bond the die connections straight
onto lands set up for that purpose on the PCB. The die is then covered with a blob
of resin for protection.
· This technique is known as Chip on Board (COB) or Direct Chip Attach (DCA) and
is now frequently found in the production of Personal Digital Assistants (PDAs) and
electronic organizers.
These rules specify what is to be done to the binary patterns that are the data, and it is these program rule patterns that are to be interpreted by the second of the two black boxes shown in the diagram: the processor.
The idea can be quite difficult to grasp. There are binary patterns in one part of the
memory. These binary patterns are interpreted by the processor as a sequence of rules. The
processor executes this sequence of rules and, in so doing, carries out a series of actions.
These actions, typically, manipulate binary patterns in another part of the memory. These
manipulations then confer specific interpretations onto the manipulated binary patterns. This process mimics, in a very simple form, our mental interpretation of a binary pattern.
The pattern in the second byte is to represent the object on which the doing code action is to be carried out. This is called the using code. In Fig. 3.2 this pattern is 11000101, which in decimal is 197. In many cases, the value of this second byte will refer to a starting place in memory where the object to be manipulated resides; that is, it will often be a memory byte address. The two-byte pattern may therefore be interpreted as an instruction, or rule, which states: "subtract the thing in byte 197".
In a practical processor, there would probably be a wide variety of different doing codes available, known collectively as the order code for the processor, and these would associate specific patterns in the doing byte with specific actions available in the hardware of the processor.
· add a byte
· subtract a byte
· multiply a byte
· divide a byte
· input a byte
· output a byte
· move a byte
There may be similar actions which relate to two or more bytes taken together. The range and functionality of these doing codes are defined by the hardware of the processor. For our example processor, however, let us consider four such doing codes, namely:
· load a byte
· store a byte
· add a byte
· subtract a byte
and we will decree that load a byte is to be 00000001, store a byte is to be 00000010, add a byte is to be 00000100, and subtract a byte is to be 00000101, as shown in Table 3.1.
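The following is a toy sketch, in Python, of how the example processor might step through such two-byte instructions using the doing codes of Table 3.1. The memory contents and addresses below are invented purely for illustration and are not the values used in the book's figures.

# A toy interpreter for the example processor: each instruction is two bytes,
# a "doing" code followed by a "using" byte (a memory address).
LOAD, STORE, ADD, SUBTRACT = 0b00000001, 0b00000010, 0b00000100, 0b00000101

memory = {31: LOAD, 32: 197, 33: SUBTRACT, 34: 198, 35: STORE, 36: 199,  # program
          197: 20, 198: 5, 199: 0}                                       # invented data

pc, gp = 31, 0                                  # program counter and general-purpose register
for _ in range(3):                              # this toy program has three instructions
    doing, using = memory[pc], memory[pc + 1]   # fetch the two bytes and interpret them
    pc += 2                                     # update: point at the next instruction
    if doing == LOAD:                           # execute the decoded action
        gp = memory[using]
    elif doing == STORE:
        memory[using] = gp
    elif doing == ADD:
        gp += memory[using]
    elif doing == SUBTRACT:
        gp -= memory[using]

print(memory[199])   # 15, i.e. 20 - 5: load from 197, subtract 198, store to 199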
Note:
Big Endian and Little Endian are the terms that describe the order in which a sequence of
bytes is stored in computer memory.
Big endian is an order in which the "big end" (the most significant value in the sequence) is stored first, at the lowest storage address. For example, in a big endian format the two bytes required for the hexadecimal number 4F52 would be stored as 4F52 in storage: if 4F is stored at storage address 1000, 52 will be stored at storage address 1001. IBM's 370 mainframes, most RISC-based computers and Motorola microprocessors use the big endian approach. TCP/IP also uses the big endian approach, and hence it is sometimes called network order. For those who use languages that read left to right, this seems the natural way to think of storing a string of characters or numbers: in the same order as one would expect to see it, just as one would read a string.
Little endian is an order in which the "little end" (the least significant value in the sequence) is stored first. In a little endian system, the above-mentioned two bytes of information would be stored as 524F; that is, if 52 is stored at storage address 1000, then 4F will be stored at storage address 1001. Intel processors and DEC Alphas use little endian.
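The two byte orders described above can be checked directly with Python's struct module:

# Storing the 16-bit value 4F52 (hexadecimal) in big-endian and little-endian order.
import struct
import sys

value = 0x4F52
print(struct.pack(">H", value).hex())   # 4f52 -- big endian ("network order")
print(struct.pack("<H", value).hex())   # 524f -- little endian (e.g. Intel processors)
print(sys.byteorder)                    # byte order of the machine running this script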
Two approaches have been adopted by processor chip manufacturers: designs with large numbers of complex instructions, known as Complex Instruction Set Computers (CISC), and designs with a minimal set of high-speed instructions, known as Reduced Instruction Set Computers (RISC).
As noted earlier, the counter register, sequence control register or program counter is a small element of memory, internal to the processor, which normally holds the address in the main memory of the next instruction that the processor is about to execute. The processor will go through a series of steps to execute an instruction.
In order to try to reduce these bottlenecks, a number of different buses were introduced which were tailored to connect particular parts of the system together. In the early designs, these buses might be called, for example, the processor bus, the I/O (input–output) bus and the memory bus.
In Fig. 3.8 we see a typical case, where the processor bus connects the processor both to the bus controller chipset and to the external cache memory (ignoring for the moment the connection to the local bus).
This processor bus is a high-speed bus, which for the Pentium might have 64 data lines, 32 address lines and various control lines, and would operate at the external clock rate. For a 66 MHz motherboard clock speed, this means that the maximum transfer rate, or bandwidth, of the processor data bus would be 66 × 64 = 4224 Mbit per second (a short check of this arithmetic follows the memory bus discussion below). Continuing with our example case, the memory bus is used to transfer information from the processor to the main dynamic random access memory (DRAM) chips of the system.
This bus is often controlled by special memory controller chips in the bus controller chipset because the DRAM operates at a significantly slower speed than the processor.
The main memory data bus will probably be the same size as the processor data bus, and this is what defines a bank of memory. When adding more DRAM to a system, it has to be added, for example, 32 bits at a time if the processor has a 32-bit data bus. For 30-pin, 8-bit SIMMs (see the later section on memory), four modules have to be added at a time; for 72-pin, 32-bit SIMMs, only one module needs to be added at a time.
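Returning to the bandwidth figure quoted for the 66 MHz processor bus above, the calculation is simply clock rate × bus width; the second line below is an illustrative comparison for a 32-bit bus at 33 MHz, not a figure from this text.

# Peak transfer rate of a bus: clock frequency (MHz) x data-bus width (bits).
def bandwidth_mbit_per_s(clock_mhz, bus_width_bits):
    return clock_mhz * bus_width_bits

print(bandwidth_mbit_per_s(66, 64))    # 4224 Mbit/s, the processor-bus figure above
print(bandwidth_mbit_per_s(33, 32))    # 1056 Mbit/s for a 32-bit bus at 33 MHz (comparison)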
In the figure above the I/O bus is the main bus of the system. It connects the processor, through the chipset, to all the internal I/O devices, such as the primary and secondary IDE (Integrated Drive Electronics) controllers, the floppy disk controller, the serial and parallel ports, the video controller and, possibly, an integrated mouse port. It also connects the processor, through the chipset, to the expansion slots.
Newer chipsets were designed to incorporate what is called bus mastering, a technique whereby a separate bus controller processor takes control of the bus and executes instructions independently of the main processor. I/O bus architectures have evolved since the first PC, albeit rather slowly.
The requirement has always been quite clear. In order to capitalize on the rapid improvements that have taken place in chip and peripheral technologies, there is a need to increase significantly the amount of data that can be transferred at one time and the speed at which it can be done. The reason for the relatively slow rate of change in this area has been the need to maintain backward compatibility with existing systems, particularly with respect to expansion cards.
The original IBM PC bus architecture used an 8-bit data bus which ran at 4.77 MHz and became known as the Industry Standard Architecture (ISA).
With the introduction of the PC AT, the ISA data bus was increased to 16 bits and this ran first at 6 MHz and then at 8 MHz. However, because of the need to support both 8-bit and 16-bit expansion cards, the industry eventually standardized on 8.33 MHz as the maximum transfer rate for both sizes of bus, and developed an expansion slot connector which would accept both kinds of cards.
ISA connector slots on motherboards are rarely seen today. When the 32-bit processors became available, manufacturers started to look at extensions to the ISA bus which would permit 32 data lines. Rather than extend the ISA bus again, IBM developed a proprietary 32-bit bus to replace ISA called Micro Channel Architecture (MCA).
Because of royalty issues, MCA did not achieve wide industry acceptance and a competing 32-bit data bus architecture was established called Extended Industry Standard Architecture (EISA), which can handle 32 bits of data at 8.33 MHz.
All three of these bus architectures (ISA, MCA and EISA) run at relatively low speed and, as Graphical User Interfaces (GUIs) became prevalent, this speed restriction proved to be an unacceptable bottleneck, particularly for the graphics display.
One early solution to this was to move some of the expansion card slots from the traditional I/O bus and connect them directly to the processor bus. This became known as a local bus, and an example of this is shown in Fig. 3.8. The most popular local bus design was known as the Video Electronics Standards Association (VESA) Local Bus, or just VL-Bus, and this provided much improved performance to both the graphics and the hard disk controllers.
Several weaknesses were seen to be inherent in the VL-Bus design. In 1992 a group led by Intel produced a completely new specification for a replacement bus architecture. This is known as Peripheral Component Interconnect (PCI). Whereas VL-Bus links directly into the very delicate processor bus, PCI inserts a bridge between the processor bus and the PCI local bus. This bridge also contains the memory controller that connects to the main DRAM chips. The PCI bus operates at 33 MHz and at the full data bus width of the processor. New expansion sockets that connect directly to the PCI bus were designed and these, together with expansion sockets for updated versions of this bus, are what are likely to be found on most modern motherboards.
The design also incorporates an interface to the traditional I/O bus, whether it be ISA, EISA or MCA, and so backward compatibility is maintained. Further development of this approach led to the Northbridge and Southbridge chipsets that we find in common use today.
In Fig. 3.9 a typical layout diagram of a motherboard that uses these chipsets is shown. The Northbridge chip connects via a high-speed bus, known as the Front Side Bus (FSB), directly to the processor. We have attempted, in the diagram, to give some idea of the relative performance of the various buses by making the thickness of the connecting lines indicative of their transfer rates.
It may be noted that the memory slots are connected to the Northbridge chip, as is the Accelerated Graphics Port (AGP). More recently, we find high-performance PCI Express slots connected to both the Northbridge and Southbridge chips. This is a very fast serial bus consisting of between 1 and 32 lanes, with each lane having a transfer capability of up to 2.5 gigabits per second.
Intel then introduced the Intel Hub Architecture (IHA) where, effectively, the Northbridge chip is replaced by the Memory Controller Hub (MCH) and the Southbridge chip is replaced by the I/O Controller Hub (ICH). There is also a 64-bit PCI Controller Hub (P64H). The Intel Hub Architecture is said to be much faster than the Northbridge/Southbridge design because the latter connected all the low-speed ports to the PCI bus, whereas the Intel architecture separates them out.
Two other technologies which are in widespread use are FireWire and USB. FireWire is a serial bus technology
with very high transfer rates which has been designed largely for audio and video multimedia
devices. Most modern camcorders include this interface, which is sometimes known as i.Link.
The official specifications for FireWire are IEEE 1394-1995, IEEE 1394a-2000 and IEEE
1394b (Apple Computer Inc., 2006), and it supports up to 63 devices daisy chained to a single
adapter card.
The second technology is that of Universal Serial Bus (USB) (USB, 2000), which is also a
high-speed serial bus that allows for up to a theoretical maximum of 127 peripheral devices to be
daisy chained from a single adapter card. The current version, USB 2.0, is up to 40 times faster
than the earlier USB 1.1. A good technical explanation of USB can be found in
Peacock (2005). With modern Microsoft Windows systems, "hot swapping" of hard disk drives
can be achieved using either FireWire or USB connections. This is of significance to the forensic
analyst in that it enables the possible collection of evidence from a system that is kept running
for a short while when first seized. This might be required when, for example, an encrypted
container is found open on a computer that is switched on.
Recent news headlines feature data breaches, lost drives and laptops, and stolen identities. Data leakage
is a serious threat that organizations cannot afford. In order to protect critical
organizational information from being stolen by employees or contractors, encrypted
USB devices such as IronKey security solutions protect data and digital identities.
3.9. Motherboard
In Fig. 3.10 is shown a typical modern motherboard, an Asus A8N32-SLI (Asus, 2005). On
the left-hand side of the diagram we can see clearly the three PCI expansion slots. This modern
board, as expected, has no ISA or VESA slots, but it does have three of the relatively new PCI
Express slots.
Two of these slots are PCI Express × 16 with what is known as Scalable Link Interface
(SLI) support, and this provides the motherboard with the capability for fitting two identical graphics
cards in order to improve overall graphics performance. These two slots are of a darker colour
than the PCI slots and slightly offset from them.
o The other, which is marked "PCI Express" in the diagram, is to the right of the third PCI
slot.
o The third PCI Express slot is a × 4 slot, which is much smaller and is located just to
the right of this second PCI Express slot.
The ZIF Socket 939 for the AMD processor can be seen in the figure. The two
IDE sockets for the ribbon cables to the Primary and Secondary parallel ATA hard disks are at
the bottom of the diagram close to the ATX power socket and the floppy disk controller socket.
This motherboard also has four Serial ATA sockets to the left of the Primary IDE parallel
socket, and at the top of the diagram can be seen in addition a Serial ATA RAID socket.
At the bottom left of the diagram can be seen an 8 Mbyte flash EPROM, which contains
the BIOS, and the motherboard is controlled by Northbridge and Southbridge chips which, as
can be seen, are connected together by a copper heatpipe. This is said to provide an
innovative fanless design for a much quieter motherboard.
This motherboard is also fitted with a Super I/O chip, as we discussed above. Along the
left-hand side of the diagram we note the COM1 port socket, USB and FireWire (IEEE 1394)
sockets, and the CR2032 lithium cell battery which provides power for the real-time clock and
the CMOS memory.
Along the top we note gigabit Local Area Network (LAN) sockets, more USB sockets, the
audio sockets, the parallel port and the PS2 mouse and keyboard sockets.
The main random access memory is fitted into DIMM (Dual In-line Memory Module) slots, of
which four 184-pin Double Data Rate (DDR) slots can be seen in the diagram, although two are
darker in colour and are not quite so evident.
The Intel 8088 is a later version of the Intel 8086, a processor chip that was first produced
in 1978.
Microcomputer systems of this time were all 8-bit, and the 8086, which was one of the first
chips to have an external data bus of 16 bits, did not immediately gain widespread support,
mainly because both the chip and the 16-bit motherboard designed to support it were, at the
time, very expensive.
In 1979, Intel introduced the 8088, which is almost identical (Intel, 1979) to the 8086, but
has an 8-bit external data bus rather than the 16 bits of the 8086. Both these processors have a
16-bit internal data bus and fourteen 16-bit registers. They are packaged as 40-pin DIL chips and
have an address bus size of 20 bits, enabling them to address up to 2^20 bytes; that is, up to
1,048,576 bytes or 1 Mbyte.
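The 1 Mbyte figure follows directly from the 20-bit address bus; a one-line calculation makes the relationship between bus width and addressable memory explicit:

# Addressable memory as a function of address-bus width.
# A 20-bit bus (8086/8088) gives 2**20 = 1,048,576 bytes, i.e. 1 Mbyte.
for bits in (16, 20, 24, 32):
    addressable = 2 ** bits
    print(f"{bits}-bit address bus: {addressable:>13,} bytes "
          f"({addressable // 1024} Kbyte)")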
Because the XT architecture was designed around the 8088 chip, it was able to use the then industry-standard
8-bit chip sets and printed circuit boards that were in common use and relatively cheap.
Bus connections in the original XT architecture were very simple.
Everything was connected to everything else using the same data bus width of 8 bits and
the same data bus speed of 4.77 MHz. This was the beginning of the 8-bit ISA bus that we
discussed above.
The layout of the PC memory map is shown in Fig. 3.11, and part of the basic design
of the PC is a consequence of the characteristics of these Intel 8088 and 8086 processors.
The memory map is, of course, limited to 1 Mbyte, which is the address space of this processor
family (20 bits).
The first 1024 bytes of this address space are reserved by the processor for its interrupt
vectors, each of which is a four-byte pointer to an interrupt handling routine located elsewhere
in the address space. To ensure a flexible and upgradeable system, the interrupt vectors are
held in RAM so that they can be modified.
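A short sketch may help to picture how that 1024-byte table is laid out: each of the 256 vectors occupies four bytes (a 16-bit offset followed by a 16-bit segment), so vector n starts at address 4 × n and points to physical address segment × 16 + offset. The byte values below are made up purely for illustration.

# Real-mode interrupt vector table layout: 256 vectors x 4 bytes = 1024 bytes.
# Each entry is a 16-bit offset followed by a 16-bit segment (little-endian).

def vector_address(n: int) -> int:
    """Start address of interrupt vector n in the table at the bottom of memory."""
    return 4 * n

def decode_vector(entry: bytes) -> int:
    """Turn a 4-byte table entry into the 20-bit physical address it points to."""
    offset = int.from_bytes(entry[0:2], "little")
    segment = int.from_bytes(entry[2:4], "little")
    return (segment << 4) + offset              # physical = segment * 16 + offset

# Hypothetical entry for INT 10h (the BIOS video service): offset 0x1234, segment 0xC000.
entry = (0x1234).to_bytes(2, "little") + (0xC000).to_bytes(2, "little")
print(hex(vector_address(0x10)))                # 0x40   -> where the vector sits
print(hex(decode_vector(entry)))                # 0xc1234 -> where the handler lives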
In addition, when the processor is first switched on, and before any volatile memory has
yet been loaded with programs, it expects to start executing code from an address that is 16
bytes from the top of the address space. This indicates that this area will have to be ROM.
The memory map that results is thus not surprising. The entire address space of 1 Mbyte
cannot all be allocated to RAM. The compromise made was to arrange for the lower 640 kbyte to
be available as the main RAM and the upper part of the address space to be taken up with the
ROM BIOS, with the video RAM, and to give room for future expansion with BIOS extensions. The
reason for the 640 kbyte figure is said to be that the original designers looked at the then current
microprocessor systems, with their address buses of 16 bits and their consequent user address
spaces of 64 kbyte of RAM, and felt that ten times this amount was a significant improvement
for the new PC.
In practice, of course, the transient program area in which the user's application programs
run does not get the whole of the 640 kbyte. Some is taken up by the interrupt vectors and by
the BIOS data, and some by the disk operating system (DOS).
The basic philosophy behind the design is very sound. The ROM BIOS, produced for the
manufacturer of the motherboard, provides the programs for dealing in detail with all the vagaries
of the different kinds and variations of the specific hardware related to that motherboard.
The operating system and the application programs can interact with the standard interface
of the BIOS and, provided that this standard is kept constant, both the operating system and the
application programs are transportable to any other PC that observes this same standard. The
standard BIOS interface utilizes yet another feature of this processor family, that of the software
interrupt. This works in a very similar manner to the hardware interrupt.
On detection of a particular interrupt number, the processor saves the current state of the
system, causes the interrupt vector associated with that number to be loaded and then transfers
control to the address to which the vector points.
In the case of a hardware interrupt, this will be the start location of where code to deal
with some intervention request from the hardware resides. In the case of a software interrupt,
which calls on the BIOS, this will have been issued as an INT instruction code by some calling
program, and will cause an appropriate part of the BIOS ROM code to be executed. In both
cases, when the interrupt is complete, the original state of the system, saved at the time of the
interrupt, will be restored. One of the major benefits of this approach is the ability to change the
interrupt vectors, because they are held in RAM.
Let us consider, for example, that we are using the original BIOS to control our graphics
display and that this therefore contains a set of programs which control the actual display controller
chip which is on our motherboard. When one of the applications uses the display, it will issue a
standard BIOS software interrupt and the associated interrupt vector will have been set up to
transfer control to where these original BIOS graphics programs reside.
Components of PC
Hardware interrupts are transmitted along Interrupt Request channels (IRQs), which are
used by various hardware devices to signal to the processor that a request needs to be dealt
with. Such a request may arise, for example, because input data is now available from the
hardware and needs processing, or because output data has now been dealt with by the hardware
and it is ready for the next tranche.
There are a limited number of IRQs available and each has its own specific address in
the interrupt vector table which points to the appropriate software driver to handle the hardware
that is assigned to that IRQ. Many IRQs are pre-assigned by the system to internal devices and
allocation of IRQs to expansion cards has to be carried out with great care, since the system is
not able to distinguish between two hardware devices which have been set to use the same IRQ
channel.
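Since the system cannot tell apart two devices configured on the same IRQ line, the administrator's job amounts to keeping a conflict-free allocation table. The short sketch below models that bookkeeping; the device names and channel numbers are illustrative only.

# Toy IRQ allocation table: assigning the same channel twice is a conflict.
assigned = {0: "system timer", 1: "keyboard", 6: "floppy controller"}   # illustrative

def assign_irq(irq: int, device: str) -> None:
    """Give a free IRQ channel to a device, or refuse if it is already taken."""
    if irq in assigned:
        raise ValueError(f"IRQ {irq} conflict: already used by {assigned[irq]!r}")
    assigned[irq] = device

assign_irq(5, "sound card")          # free channel: accepted
try:
    assign_irq(6, "network card")    # clashes with the floppy controller
except ValueError as err:
    print(err)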
Often, an expansion card will have DIP (Dual Inline Package) switches which enable one
of a number of different IRQ channels to be selected for a given configuration in an attempt
to avoid IRQ conflicts. Autonomous data transfer, which is the sending of data between a hardware
device and the main memory without involving the main processor, is provided by Direct Memory
Access (DMA) channels, and these too are a limited resource.
Again, some of the channels are pre-assigned by the system and others are available for
use by expansion cards and may also be set by DIP switches on the card. Conflicts can arise if
two different hardware devices are trying to use the same DMA channel at the same time, though
it is possible for different hardware devices to share channels providing that they are not using
them at the same time. The third system resource is the I/O port address.
The Intel 8088 processor, in addition to being able to address 1 Mbyte of main memory,
can also address, quite separately, up to 65,535 I/O ports. Many hardware device functions are
associated with an I/O port address. For example, the issuing by the processor of an IN instruction
to a particular port address may obtain from the hardware associated with that address the
current contents of its status register. Similarly, the issuing by the processor of an OUT instruction
to a port address may transfer a byte of data to the hardware. This type of activity is known as
Programmed I/O (PIO) or Processor I/O, as opposed to Memory Mapped I/O (MMIO), where
the 65,535 port addresses are each assigned space in the overall main memory map.
Using MMIO, any memory access instruction that is permitted by the processor can be
used to access a port address. Normally a particular hardware device will be allocated a range
of port addresses.
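The difference between the two schemes is easiest to see as two separate address spaces versus one. The sketch below models each approach with plain dictionaries and byte arrays; the port number and register location used are invented for illustration and nothing here touches real hardware.

# Port-mapped I/O (PIO): a separate space of port addresses reached only
# through IN/OUT-style operations.
io_ports = {}

def out_byte(port: int, value: int) -> None:       # models the OUT instruction
    io_ports[port] = value & 0xFF

def in_byte(port: int) -> int:                     # models the IN instruction
    return io_ports.get(port, 0xFF)

# Memory-mapped I/O (MMIO): the device's registers occupy ordinary memory
# addresses, so any memory access instruction can reach them.
memory = bytearray(1 << 20)                        # the 1 Mbyte real-mode map
STATUS_REG_ADDR = 0xC8000                          # hypothetical MMIO location

out_byte(0x3F8, 0x41)                              # PIO write to a port
memory[STATUS_REG_ADDR] = 0x80                     # MMIO write via a normal memory access
print(in_byte(0x3F8), memory[STATUS_REG_ADDR])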
The final system resource, and perhaps the one in greatest demand, is that of main memory
address space itself. MMIO is rarely used in the PC because it unnecessarily takes up valuable
main memory address space in the upper part of the memory map, space that is required for
the use of any BIOS extensions in particular.
When a new expansion card is fitted, therefore, consideration has to be given to which of
these limited system resources it is going to require. It may have to be allocated an IRQ, a DMA
channel, a set of port addresses and, possibly, some address space in the upper part of the
memory map for a BIOS extension.
The concept of Plug and Play (PnP) was introduced with Microsoft Windows 95 to try to
automate this process of assigning these limited system resources. The system BIOS, the
operating system and the PnP-compatible hardware devices have to collaborate in order to identify
the card, assign and configure the resources and find and load a suitable driver.
A modern PC is both simple and complicated. It is simple in the sense that over the years,
many of the components used to construct a system have become integrated with other
components into fewer and fewer actual parts. It is complicated in the sense that each part in a
modern system performs many more functions than did the same types of parts in older systems. The principal components of a typical modern PC include:
· Motherboard
· Processor
· Memory (RAM)
· Case/chassis
· Power supply
· Floppy drive
· Hard disk
· Keyboard
· Mouse
· Video card
· Monitor (display)
· Sound card
· Speakers
· Modem
Summary
· The part of the computer that carries out the function of executing instructions is
called the processor
· Within the memory box binary patterns have been indicated as both objects and
rules. The rules are ordered sequences of instructions that are to be interpreted by
the processor and which will cause it to carry out a series of specific actions. Such
sequences of rules are called programs and the idea that the computer holds in its
memory instructions to itself is sometimes referred to as the stored program concept.
· Typical examples might include: add a byte, subtract a byte, multiply a byte, divide
a byte, input a byte, output a byte, move a byte, compare a byte and so forth.
· Big Endian and Little Endian are the terms that describe the order in which a sequence
of bytes is stored in computer memory (a short worked example follows this summary).
· This processor bus is a high-speed bus, which for the Pentium might have 64 data
lines, 32 address lines and various control lines, and would operate at the external
clock rate.
· The motherboard is controlled by Northbridge and Southbridge chips which, as can
be seen, are connected together by a copper heat pipe.
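The byte-order point in the summary is easy to demonstrate with Python's built-in integer conversions; the value below is arbitrary.

# The same 32-bit value stored in the two byte orders.
value = 0x12345678

big = value.to_bytes(4, byteorder="big")        # most significant byte first
little = value.to_bytes(4, byteorder="little")  # least significant byte first

print(big.hex())      # 12345678
print(little.hex())   # 78563412

# Reading bytes back with the wrong assumed order yields a different number.
print(hex(int.from_bytes(little, byteorder="big")))   # 0x78563412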
UNIT – 4
ENTERPRISE INFRASTRUCTURE INTEGRATION
Learning Objectives
Structure
4.1 Overview of Enterprise Infrastructure Integration
IT infrastructure includes client machines and server machines, as well as modern mainframes.
Blade servers are ultrathin servers, intended for a single dedicated application, that are
mounted in space-saving racks.
Platforms include client computers, dominated by Windows operating systems, and servers,
dominated by the various forms of UNIX or Linux. Operating systems are software that manage
the resources and activities of the computer and act as an interface for the user.
Enterprise software applications include SAP, Oracle and PeopleSoft, and middleware
software is used to link a firm's existing application systems.
Data management and storage is handled by database management software, and storage
devices include traditional storage methods, such as disk arrays and tape libraries, and newer
network-based technologies such as storage area networks (SANs). SANs connect multiple
storage devices on dedicated high-speed networks.
Internet-related infrastructure includes the hardware, software and services to maintain
corporate websites, intranets and extranets, including web hosting services and web software
application development tools. A web hosting service maintains a large web server, or series of
servers, and provides fee-paying subscribers with space to maintain their websites.
Consulting and system integration services are relied on for integrating technology
and infrastructure by providing expertise in implementing new infrastructure along with relevant
changes in business processes, training and software integration.
Figure 4.1 represents the seven major IT infrastructure ecosystems.
The independent components in each of these seven major infrastructures are portrayed
in Figure 4.2.
These components can be spread across multiple data centres. These decentralized data centres can
be controlled by the organization or by a third party: the organization may be the owner, and the
third party may be a cloud provider or colocation facility.
While 2010 was the year for talking about the cloud, 2011 will be the year for
implementation. It is for this reason that it is important for service providers and enterprises to
gain a better understanding of exactly what is needed to build their cloud infrastructure. For both
enterprises and service providers, the successful creation and deployment of cloud services
will become the foundation for their IT operations for years to come, making it essential to get it
right from the start.
For the architect tasked with building out a cloud infrastructure, there are seven key
requirements that need to be addressed when building a cloud strategy. These requirements
include:
Not only should cloud management solutions leverage the latest hardware, virtualization
and software solutions, but they should also support a data centre’s existing infrastructure.
While many of the early movers based their solutions on commodity and open source solutions
like general x86 systems running open source Xen and distributions like CentOS, larger service
providers and enterprises have requirements around both commodity and proprietary systems
when building out their clouds. Additionally, cloud management providers must integrate with
traditional IT systems in order to truly meet the requirements of the data center. Companies that
don’t support technologies from the likes of Cisco, Red Hat, NetApp, EMC, VMware and Microsoft
will fall short in delivering a true cloud product that fits the needs of the data center.
A service offering is a quantified set of services and applications that end users can consume through the provider — whether the
cloud is private or public. Service offerings should include resource guarantees, metering rules,
resource management and billing cycles. The service management functionality should tie into
the broader offering repository such that defined services can be quickly and easily deployed
and managed by the end user.
In order for a cloud to be truly on-demand and elastic while consistently able to meet
consumer service level agreements (SLAs), the cloud must be workload- and resource-aware.
Cloud computing raises the level of abstraction to make all components of the data center
virtualized, not just compute and memory. Once abstracted and deployed, it is critical that
management solutions have the ability to create policies around workload and data management
to ensure that maximum efficiency and performance is delivered to the system running in the
cloud. This becomes even more critical as systems hit peak demand. The system must be able
to dynamically prioritize systems and resources on-the-fly based on business priorities of the
various workloads to ensure that SLAs are met.
While the model and infrastructure for how IT services are delivered and consumed may
have changed with cloud computing, it is still critical for these new solutions to support the
same elements that have always been important for end users. Whether the cloud serves as a
test bed for developers prototyping new services and applications or it is running the latest
version of a popular social gaming application, users expect it to be functioning every minute of
every day. To be fully reliable and available, the cloud needs to be able to continue to operate
while data remains intact in the virtual data center regardless if a failure occurs in one or more
components. Additionally, since most cloud architectures deal with shared resource pools across
multiple groups both internal and external, security and multi-tenancy must be integrated into
every aspect of an operational architecture and process. Services need to be able to provide
access to only authorized users and in this shared resource pool model the users need to be
able to trust that their data and applications are secure.
Many components of traditional data center management still require some level of
integration with new cloud management solutions, even though the cloud is a new way of
consuming IT. Within most data centres, a variety of tools are used for provisioning, customer
care, billing, systems management, directory, security and much more. Cloud computing
management solutions do not replace these tools, and it is important that there are open application
programming interfaces (APIs) that integrate into existing operation, administration, maintenance
and provisioning (OAM&P) systems out of the box. These include not only current virtualization
tools from VMware and Citrix but also the larger data center management tools from companies
like IBM and HP.
The need to manage cloud services from a performance, service level, and reporting
perspective becomes paramount to the success of the deployment of the service. Without
strong visibility and reporting mechanisms the management of customer service levels, system
performance, compliance and billing becomes increasingly difficult. Data center operations
have the requirement of having real-time visibility and reporting capabilities within the cloud
environment to ensure compliance, security, billing and chargebacks as well as other instruments,
which require high levels of granular visibility and reporting.
One of the primary attributes and successes of existing cloud-based services on the
market comes from the fact that self-service portals and deployment models shield the complexity
of the cloud service from the end user. This helps by driving adoption and by decreasing operating
costs as the majority of the management is offloaded to the end user. Within the self-service
portal, the consumer of the service should be able to manage their own virtual data center,
create and launch templates, manage their virtual storage, compute and network resources
and access image libraries to get their services up and running quickly. Similarly, administrator
interfaces must provide a single pane view into all of the physical resources, virtual machine
instances, templates, service offerings, and multiple cloud users. On top of core interfaces, all
of these features need to be available to developers and third parties through common
APIs.
Cloud computing is a paradigm shift in how data centres and service providers are
architecting and delivering highly reliable, highly scalable services to their users in a manner
that is significantly more agile and cost effective than previous models. This new model offers
early adopters the ability to quickly realize the benefits of improved business agility, faster time
to market and an overall reduction in capital expenditures. However, enterprises and service
providers need to understand what elements their cloud must contain in order to build a truly
successful cloud.
This infrastructure supports the data center hardware with power, cooling and building
elements. This hardware includes:
· Servers
· Storage
Organizations must ensure that data is secure and protected from unauthorised personnel
stealing information using malicious software and causing damage to the organization. Hence,
it becomes inevitable for data centres to have physical security inside the premises of the data
centre as a part of IT infrastructure security. These include:
Figure 4.3: The Right Balance between Traditional IT Vs. Digital Enterprise
Growing businesses need a server solution that supports changing demands. It is important
for an organization to develop a server strategy that will help to achieve optimum performance,
availability, efficiency and business value from the investment.
· Office 365
· Antivirus
· Email encryption
A server is a device with a set of programs that provide services requested by clients. The
word server also refers to the specialized computer or hardware on which the server software runs
and which serves other computers or clients. Servers have many functions and come in different
types to facilitate different uses. Together, a server and its clients form a client-server network,
which provides routing and centralized access to information, resources and stored data. At the
most basic level, one can consider a server a technology solution that serves files, data, print and
fax resources to multiple computers. Advanced server versions, such as Windows Small Business
Server, enable the user to handle accounts and passwords, allow or limit access to shared
resources, automatically back up data and access business information remotely. For example,
a file server is a machine that allows clients or users to upload or download files from it; similarly,
a web server hosts websites and allows users to access them. Clients mainly include computers,
printers, faxes or other devices that can be connected to the server. By using a server one can
securely share files and resources such as fax machines and printers, and with a server network
employees can access the internet or company email simultaneously.
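At its simplest, the client-server relationship described above is just one program listening for requests and another program sending them. The sketch below uses Python's standard socket module; both ends run in a single process, and the loopback address and port number are arbitrary choices made only so the example is self-contained.

# Minimal client-server exchange over TCP.
import socket
import threading

ready = threading.Event()

def server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 50007))
        srv.listen(1)
        ready.set()                             # tell the client it can connect
        conn, _addr = srv.accept()              # wait for one client
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"served: " + request)

threading.Thread(target=server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 50007))
    cli.sendall(b"list of files, please")
    print(cli.recv(1024).decode())              # -> served: list of files, please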
Common server types include the following. Mail servers transfer and store mail over
corporate networks, through LANs and WANs, and across the internet. News servers serve as a
distribution and delivery source for many public newsgroups, approachable over the Usenet
network. Proxy servers operate between a client program and an external server to filter requests,
improve performance and share connections. Telnet servers enable users to log on to a host
computer and execute tasks as if they were working on a remote computer. Virtual servers behave
just like a physical computer because each is committed to an individual customer's demands,
can be individually booted and maintains the privacy of a separate computer, sitting basically
between shared and dedicated hosting; hosting on virtual servers has now become omnipresent
in the data center. Web servers provide static content to a web browser by loading a file from the
disk and transferring it across the network to the user's browser; this exchange is mediated by
the browser and the server communicating using HTTP. Other types of servers include the
open-source Gopher server, which serves plain documents similar to the World Wide Web but
with the hypertext being absent, and the name server, which applies the name-service protocol.
The various servers can be categorized according to their applications. Servers, along with
managing network resources, are also dedicated; that is, the platform performs no tasks other
than its server tasks.
A system administrator can be seen as a know-it-all person who knows how every single
element in the data centre is connected; this is a very powerful, sophisticated engineer. However,
not all data centres look huge. Sometimes data centres might be a complete mess where it is
difficult to find where things are located and how they are connected, and in these cases the
system administrator cannot do much but somehow ensure nothing untoward happens, nothing
breaks and systems continue operating. Today, setting up a data centre is quite simple; all that is
required is a kind of blueprint. In the early 1990s the network diagram was very complicated. Today
the network blueprint is rather easy to understand because the layout can be made with
PowerPoint or Visio, and it can clearly show the different devices such as servers, clients,
switches and access points, and also the access to the internet. However, these are static diagrams,
not too useful to system administrators, since an administrator needs to monitor the performance
of every single node connected to the network.
Networks are built of a series of elements. The figure shows the typical elements found
on any local area network. If a business is using IT, chances are all of these elements, or
at least most of them, are going to be part of its IT infrastructure. Within this network one can find
servers, clients and network devices.
4.4.2. Servers
A server is a computer that serves or supplies the data and applications used by clients. Servers
get their names based on the functions they perform.
A file server has the function of sharing files with users, for example Linux-based Samba or
more traditional File Transfer Protocol (FTP) servers. These servers allow one to share files with
users. They have largely been replaced by modern cloud-based systems such as Google Drive or
Dropbox.
Print servers manage the queue of printing by all users in a firm. In a large firm one does
not print directly to a printer but rather sends the job to a print server, and the server distributes the
workload across the printers in the firm.
Web browsers connect to web servers, and several software packages can perform the function
of a web server: whenever the user requests a webpage, the browser connects to the web
server, which returns the page for display. The open-source Apache server is by far the most popular web
server in the world. Lighter-weight servers such as Nginx have emerged as an alternative to Apache; they
are very slimmed down, performing very few functions in a very efficient manner.
A more classic web server is Microsoft Internet Information Services (IIS).
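Serving static content, as a web server does, can be demonstrated with nothing more than Python's standard library; the port number below is an arbitrary choice and the current working directory stands in for a document root.

# A minimal static-content web server using only the standard library.
# It serves files from the current working directory over HTTP.
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8080   # arbitrary local port for the example

if __name__ == "__main__":
    httpd = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
    print(f"Serving ./ at http://localhost:{PORT} - press Ctrl+C to stop")
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        httpd.server_close()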
Application servers run the enterprise applications, such as an ERP system (for example SAP) or
any other application developed locally by the firm.
Mail servers, which were widely used before the emergence of Office 365 or Google's
Gmail, such as Microsoft Exchange Server or Zimbra, were used in firms to handle all user
mail.
Another type of server that is going to be in any firm is a database
server. These servers organize the data used by all the information systems of the firm.
Examples of these servers are Microsoft SQL Server, Oracle and the open-source MySQL.
Media servers are used for video streaming or to share photographs in a hosted gallery.
There are also servers that enable users to collaborate with each other and work concurrently,
such as Microsoft SharePoint or IBM Lotus. Another way we can name servers is based
on their platform, and by platform we mean their hardware and operating system. Servers can
be referred to by their hardware make and model, for example an IBM PC server. Next is the
server's operating system: just as laptops can run an operating system such
as Windows or macOS, a server can run an operating system such as Microsoft Windows
Server or a distribution of Linux such as Red Hat Linux, Debian, Ubuntu or others. More classic
servers may run UNIX as their operating system.
Another way of classifying servers is based on their features and organization. They
include mainframe, high-availability, cluster and virtual servers.
4.4.10. Mainframe
A mainframe is a very large multi-functional piece of equipment. These servers are capable of running
a huge number of transactions and the workloads of thousands of users; they generally
cost millions of dollars and are generally found in Fortune 500 companies or big financial firms
that have enough workload to justify the investment in one of these humongous servers. For example,
the IBM zEnterprise System delivers availability, security and manageability to address the challenge
of today's multi-platform data centres. Its Unified Resource Manager provides the revolutionary ability
to centrally govern and manage IBM System z and Power and System x blades as a single
integrated system, allowing one to monitor and optimize application performance, availability
and security end to end based on business policies, while applications run where they run
best. The zEnterprise extends System z qualities to deliver significant operational, business
and organizational advantages. It can reduce energy consumption, floor space and operating
costs. With its new 5.2 GHz superscalar processors, scalability up to 96 cores, up to 3 terabytes
of high-availability main memory and hot-pluggable drawers, the zEnterprise is claimed to be the world's fastest,
most scalable enterprise system. It allows running CPU-intensive workloads like Java up to
60% faster than previous systems. New integrated workload optimizers and select IBM blades
offer additional advantages; for example, the Smart Analytics Optimizer delivers accelerated
performance for complex queries. It includes the new Crypto Express3 feature that uses elliptic curve
cryptography for security.
4.4.11. High Availability Servers
These are very powerful PCs or powerful servers that have elements that make them
highly available. For example, one of the typical things that will fail on a server is one of its hard
drives, but a high-availability server will have multiple hard disk drives; if one of
these hard drives fails, the others continue performing the same functions. Sometimes these are
arranged in what is called a RAID array (a redundant array of inexpensive disks), which
offers redundancy to the hard drives. These servers also do not rely on a single power supply, the
element through which electricity flows into the server. Network interfaces or network cards can burn
out, hence there are multiple of them; in case one of them fails, the others take over.
4.4.12. Cluster Servers
Groups of servers that perform the same function in parallel are called cluster servers. In
this sense one can have multiple web servers or multiple database servers that distribute workloads
among themselves and are scalable. The fact that, in order to increase capacity, one does not
have to make a single server larger but rather adds more servers to the architecture is what makes
cluster servers particularly scalable. There are several ways in which cluster servers can
be organized. Organizations can have a primary server and secondary (slave) servers that
will only come online if the first server happens to go down. Sometimes this is referred to as
a cold backup, in the sense that usually only one server is running, and only if this primary
server goes down does its slave or secondary server come online. Sometimes one can also have
hot redundancy, in which two or more servers run performing the same functions,
acting as mirrors of each other. Having a cluster increases
the availability of the servers not by making a single server more powerful or more available
but rather by having redundancy across different servers.
4.4.13. Virtual Servers
In layman's terms, virtual servers are "servers within a server". Take a big physical
host; this could be a high-availability server or it could be a mainframe. A software
tool called a hypervisor is installed on this server, and on top of this hypervisor
multiple virtual machines can be run. In terms of the IT stack, to run a software application the first
requirement is hardware: the CPU, memory and storage devices that are part of any server. On
this hardware, an operating system is installed; for example, one can install Linux on top of the
hardware. Within this Linux, a hypervisor is installed. The hypervisor is a software tool that allows
one to segment or partition the entire server into multiple portions and then create virtual
servers, or smaller-sized servers, within the hypervisor. One can then have multiple virtual servers
running within a hypervisor. The most common hypervisor in corporate environments
is offered by VMware, in particular VMware ESXi; this software is traditionally used by
corporates. An open-source alternative is the Xen hypervisor, which has become particularly popular in the cloud
context because Citrix offers a more refined version of Xen that is used by
cloud infrastructure providers such as Amazon or Rackspace. Cloud infrastructure providers
offer virtual servers to their customers on their public clouds. Other examples of hypervisors are
the software used on laptops, and in particular Mac computers, to run Windows within a Mac:
VMware Fusion, Oracle VirtualBox and Parallels. These are software packages that can be installed on a
laptop running one operating system in order to run another operating system within a virtual machine.
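A hypervisor's basic job of carving one physical host into several virtual servers can be pictured as simple bookkeeping over CPU and memory. The class below is only a conceptual model (the VM names and capacities are invented); it is not how ESXi or Xen are actually implemented.

# Conceptual model of a hypervisor partitioning a physical host's resources.
class Hypervisor:
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> None:
        """Carve out a slice of the host for a new virtual server."""
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            raise RuntimeError(f"not enough free resources for {name}")
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

host = Hypervisor(cpus=32, ram_gb=256)           # one big physical server
host.create_vm("web-01", cpus=4, ram_gb=16)      # virtual servers within it
host.create_vm("db-01", cpus=8, ram_gb=64)
print(host.vms, host.free_cpus, host.free_ram)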
4.4.14. Clients
Clients are the devices that access the servers. They are the hardware used for
input and output of information by end users, and they are the means through
which end users access the servers and other clients. Examples are:
4.4.15. Network Devices
Network devices are the devices that interconnect all the servers and clients. These devices
include access points, switches, routers and modems.
4.4.15.1. Access Points
Access points (APs) are devices, typically mounted on walls, that have antennas; they are the
means by which devices such as laptops or smartphones connect to a local area network.
Laptops connect to the AP, and the AP in turn connects to the LAN of the firm; thus,
through this wireless device, one is able to access all the services on the LAN. Some devices
come as a combo that acts as an AP, router and switch.
4.4.15.2. Switches
A switch is nothing more than a junction point where cables come in and go out. The smallest
of these might be an eight-port switch. Data coming in through any of the ports of the switch can
go out by any of the other ports, so it is basically used by everyone to communicate
with everyone else connected to the switch. In a firm such small switches are rarely found;
rather, one finds switches with 24 or 48 ports, slim and organized in racks. Switches
interconnect all the devices within the network. Once communication needs to go out of the
network, the traffic has to be routed from the LAN towards other networks, and for this we
use a router.
4.4.15.3. Router
A router isgoing to be a device that interconnects different LAN. For a larger corporate
like for example an ISP the router might look like the size of an entire cabinet or entire rack.
Two basic types of media are used to connect all the devices. They are wired or cables
and wireless.
4.4.15.4.1. Cables
a) Wired
A network cable is also called an Ethernet cable or UTP cable; UTP stands for Unshielded
Twisted Pair, i.e. twisted pairs of copper wires. The jacks of these cables look
very similar to traditional phone jacks, and most laptops have network ports to which one of
these cables can be connected, although laptops nowadays often connect only through wireless
means. Another type of copper cable used for transmission is the coaxial cable. Through these
cables one can cover larger distances than with the UTP cables mentioned before (for example,
television cables); data can also be transmitted through coaxial cables, and if an ISP is working
with cable modems, it is using coaxial cables. In order to increase the speed and
distance of transmission, fibre-optic cables are used, which carry light signals rather than the
electrical signals carried over copper wires.
b) Wireless
WI-FI hotspot is nothing more than an access point that allows connecting to a network in
particular a LAN. Sometimes one can access the internet, however one should connect to a
LAN and from there a router takes the transmission onto the internet. That being said, there are
much more wireless communication media than just WI-FI. For example take cellular networks
3G, 4G and whatever there is to come. Communication to microwave antennas is characterized
by needing the line-of –sight that needs to have a clear straight line from one antenna to
another to have direct communication. Finally another form of wireless communication is satellite
communications. These however go into outer space but they also qualify as wireless
communications. So far a wide array of devices and servers has been discussed.
4.4.16.1. LAN
A local area network (LAN) occupies the space of a room or a building, with all the
elements, namely switches, routers, servers, clients and printers; it interconnects anything within
a room or within a building. Typical speeds of these networks range between 100
Mbit/s and 1 Gbit/s. They are meant for nearby communication between devices.
4.4.16.2. Backbone
A firm can have multiple buildings, each with its own facility: for example, a manufacturing
facility, administrative staff, inventory warehouses and so on. The mechanism by which each of
these buildings is interconnected is called a backbone network. The scale of the backbone
network is going to be less than a few kilometres. The elements that compose a backbone are
the local area networks and the devices that interconnect the LANs of each building, including
high-speed switches and routers and also high-speed circuits (for example, fibre-optic cables).
Speeds range from 1 to 40 Gbit/s.
4.4.16.3. MAN
A firm having multiple branches in multiple locations within a city or a metropolitan area
has a metropolitan area network (MAN). The scale of a MAN is going to be more than a few kilometres.
Circuits between buildings in different locations are likely to be leased from public providers,
ISPs or telecom companies that have already laid out their own fibre-optic cables. An alternative
to fibre-optic cables is to have point-to-point connections through microwave antennas;
sometimes an internet-based tunnel or channel is used, perhaps a VPN tunnel that relies on the
internet. Typical speeds for these kinds of networks range from 64 kbit/s to 10
Gbit/s; 64 kbit/s is about the lowest one can use to have a VoIP conversation, although most
links have at least a few megabits of capacity. Extending the concept of a MAN to much
larger, perhaps even international, scales gives a wide area network.
4.4.16.4. WAN
Wide area networks (WANs) are the private networks that are used to
interconnect multiple operations across the globe for a single firm, using leased lines or satellite-based
networks; the speeds of these networks range between 64 kbit/s and 10 Gbit/s.
4.4.16.5. Intranets
Types of networks can also be classified based on who can access them. These include the intranet
and the extranet. An intranet is only accessible to the members of the organization, that is, its
regular collaborators. For example, everyone connecting over the local area network to the ERP
system or to their central information system is connecting through the intranet. It is quite
common that collaborators who work remotely, be it from home or on a business trip in
a hotel, use a VPN to connect to the firm's intranet.
4.4.16.6. Extranets
A network accessible to people or entities external to the organization is called an extranet.
Clients and providers may log into an inventory system over an extranet. A widely known example
of a big extranet is how Walmart offers access to its stock levels to its suppliers: the suppliers
themselves know when it is necessary for them to start shipping more goods into Walmart's
warehouses, and they know this by connecting to Walmart's extranet. End users going into a
public e-commerce website such as amazon.com, or any other firm that offers an online portal,
are using a part of that firm's enterprise extranet. A public Wi-Fi network offered by a retail store
for its customers could also be called an extranet for customers.
Summary
A service offering is a quantified set of services and applications that end users can
consume through the provider — whether the cloud is private or public. Service
offerings should include resource guarantees, metering rules, resource management
and billing cycles.
Cloud computing management solutions do not replace these tools, and it is important
that there are open application programming interfaces (APIs) that integrate into
existing operation, administration, maintenance and provisioning (OAM&P) systems
out of the box. These include not only current virtualization tools from VMware and
Citrix but also the larger data center management tools from companies like IBM
and HP.
Cloud computing is a paradigm shift in how data centres and service providers are
architecting and delivering highly reliable, highly scalable services to their users in
a manner that is significantly more agile and cost effective than previous models.
Chat server: serves users exchanging data in an environment similar to an internet
newsgroup, providing real-time discussion capabilities.
Fax server: one of the best options for organizations that want to minimise
incoming and outgoing telephone resources but still need to send and receive actual documents.
FTP server: works on one of the oldest of the internet services; the FTP protocol provides
secure file transfer between computers while ensuring file security and transfer
control.
Groupware server: software designed to enable users to work together,
irrespective of location, through the internet and to function together in a virtual
atmosphere.
IRC server: an ideal option for those looking for real-time discussion capabilities over the
internet; Internet Relay Chat comprises different network servers which enable
users to connect to each other through an IRC network.
List server: provides a better way of managing mailing lists. The server can host open
interactive discussions or one-way lists that deliver
announcements, newsletters or advertising.
Mail server: transfers and stores mail over corporate networks, through LANs and
WANs, and across the internet.
News server: serves as a distribution and delivery source for many public newsgroups,
approachable over the Usenet network.
Proxy Servers operate between a client program and an external server to filter
requests, improve performance and share connections.
Telnet server: enables users to log on to a host computer and execute
tasks as if they were working on a remote computer.
Virtual servers: behave just like a physical computer because each is committed to an
individual customer's demands, can be individually booted and maintains the privacy of a
separate computer, sitting basically between shared and dedicated hosting; hosting on
virtual servers has now become omnipresent in the data center.
Web server: provides static content to a web browser by loading a file from the disk
and transferring it across the network to the user's browser.
Cloud computing
Servers and desktops
Service offering
Types of servers
Cloud computing management
UNIT 5
ENTERPRISE ACTIVE DIRECTORY
INFRASTRUCTURE
Learning Objectives
· Kerberos
· LDAP
· Forest
· Domain
· Trust Relationships
o User
o Group
o OU
o Domain
o Structure of GPO
§ Password Settings
o Application of GPO
§ Linking a GPO
§ Enforcing a GPO
§ GPO Status
o Precedence of GPO
· Single-Sign On Integration
Structure
5.1 Overview of Active Directory. Use Security Baselines and Benchmarks
5.2 Kerberos
5.5 Forest
5.6 Domain
5.20 Integration of Physical Access Security and Logical Access Security using
Microsoft Active Directory
Active Directory was initially released with Windows 2000 Server and revised with additional
features in Windows Server 2008. Active Directory provides a common interface for organizing
and maintaining information related to resources connected to a variety of network directories.
The directories may be systems-based (like Windows OS), application-specific or network
resources, like printers. Active Directory serves as a single data store for quick data access to
all users and controls access for users based on the directory’s security policy.
· Security service using the principles of Secure Sockets Layer (SSL) and Kerberos-
based authentication
Active Directory is internally structured with a hierarchical framework. Each node in the
tree-like structure is referred to as an object and associated with a network resource, such as a
user or service. Like a database schema, the Active Directory schema is used
to specify the attributes and type for a defined Active Directory object, which facilitates searching for
connected network resources based on assigned attributes. For example, if a user needs to
use a printer with color printing capability, the object attribute may be set with a suitable keyword,
so that it is easier to search the entire network and identify the object’s location based on that
keyword.
Novell’s directory service, an Active Directory alternative, contains all server data within
the directory itself, unlike Active Directory.
5.2 Kerberos
Kerberos is a network protocol that uses secret-key cryptography to authenticate client-
server applications. Kerberos requests an encrypted ticket via an authenticated server sequence
to use services.
The protocol gets its name from the three-headed dog (Kerberos, or Cerberus) that guarded
the gates of Hades in Greek mythology.
Kerberos was developed by Project Athena - a joint project between the Massachusetts
Institute of Technology (MIT), Digital Equipment Corporation and IBM that ran between 1983
and 1991.
An authentication server uses a Kerberos ticket to grant server access and then creates
a session key based on the requester’s password and another randomized value. The ticket-
granting ticket (TGT) is sent to the ticket-granting server (TGS), which is required to use the
same authentication server.
The requester receives an encrypted TGS key with a time stamp and service ticket, which
is returned to the requester and decrypted. The requester sends the TGS this information and
forwards the encrypted key to the server to obtain the desired service. If all actions are handled
correctly, the server accepts the ticket and performs the desired user service, which must decrypt
the key, verify the timestamp and contact the distribution center to obtain session keys. This
session key is sent to the requester, which decrypts the ticket.
If the keys and timestamp are valid, client-server communication continues. The TGS
ticket is time stamped, which allows concurrent requests within the allotted time frame.
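The exchange is easier to follow as a toy simulation. The sketch below only mimics the shape of the protocol (the authentication server issues a TGT, the ticket-granting server swaps it for a service ticket, and the target service accepts that ticket), using dictionaries in place of real encryption, timestamps and key distribution; it is not a usable Kerberos implementation.

# Toy Kerberos-style flow: AS -> TGT, TGS -> service ticket, service -> access.
# "Encryption" is faked by tagging data with the name of the key that locks it.
import time

def seal(key_name: str, payload: dict) -> dict:       # stands in for encryption
    return {"locked_with": key_name, "payload": payload}

def unseal(key_name: str, box: dict) -> dict:         # stands in for decryption
    assert box["locked_with"] == key_name, "wrong key"
    return box["payload"]

# 1. Authentication Server: the client proves itself and receives a TGT.
def as_issue_tgt(user: str) -> dict:
    session = f"session-key-for-{user}"
    return seal("tgs-secret", {"user": user, "session": session,
                               "issued": time.time()})

# 2. Ticket-Granting Server: the TGT is exchanged for a service ticket.
def tgs_issue_service_ticket(tgt: dict, service: str) -> dict:
    info = unseal("tgs-secret", tgt)                  # only the TGS can open the TGT
    return seal(f"{service}-secret", {"user": info["user"], "service": service})

# 3. Target service: accepts the ticket that it alone can open.
def service_accept(ticket: dict, service: str) -> str:
    info = unseal(f"{service}-secret", ticket)
    return f"access granted to {info['user']} for {service}"

tgt = as_issue_tgt("alice")
ticket = tgs_issue_service_ticket(tgt, "fileserver")
print(service_accept(ticket, "fileserver"))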
· The root directory (the starting place or the source of the tree), which branches out
to
· Individuals (which includes people, files, and shared resources such as printers)
An LDAP directory can be distributed among many servers. Each server can have a
replicated version of the total directory that is synchronized periodically. An LDAP server is
called a Directory System Agent (DSA). An LDAP server that receives a request from a user
takes responsibility for the request, passing it to other DSAs as necessary, but ensuring a
single coordinated response for the user.
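In practice, a directory client talks to a DSA with LDAP operations such as bind and search. A minimal sketch using the third-party ldap3 package is shown below; the server name, bind DN, password and search base are all placeholders, so this illustrates the call pattern rather than any working configuration.

# Minimal LDAP bind-and-search sketch (requires: pip install ldap3).
# All connection details below are placeholders.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldap.example.com", port=389, get_info=ALL)
conn = Connection(server,
                  user="cn=reader,dc=example,dc=com",
                  password="change-me",
                  auto_bind=True)                    # bind (authenticate) to the DSA

# Search the subtree for person objects and pull back two attributes.
conn.search(search_base="ou=people,dc=example,dc=com",
            search_filter="(objectClass=person)",
            search_scope=SUBTREE,
            attributes=["cn", "mail"])

for entry in conn.entries:                           # the DSA returns coordinated results
    print(entry.cn, entry.mail)

conn.unbind()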
A ticket-granting ticket is also known as an authentication ticket. In a TGT model, the first
tiny ticket or data set is issued in order to approve the beginning of authentication. An additional
ticket goes to the server with client identity and other information. Like other tickets, the initial
small ticket is also encrypted. In this ticket granting system, Kerberos uses some specific
protocols. The client first sends the ticket-granting ticket as a request for server credentials to
the server. The encrypted reply comes back with a key that is used for authentication purposes.
The client uses the TGT to “self-authenticate” with the ticket-granting server (TGS) for a
secure session.
5.5. Forest
An Active Directory forest is the highest level of organization within Active Directory. Each
forest shares a single database, a single global address list and a security boundary. By default,
a user or administrator in one forest cannot access another forest.
5.6. Domain
A domain, in the context of networking, refers to any group of users, workstations, devices,
printers, computers and database servers that share different types of data via network resources.
There are also many types of subdomains. A domain has a domain controller that governs all
basic domain functions and manages network security. Thus, a domain is used to manage all
user functions, including username/password and shared system resource authentication and
access. A domain is also used to assign specific resource privileges, such as user accounts. In
a simple network domain, many computers and/or workgroups are directly connected. A domain
comprises combined systems, servers and workgroups. Multiple server types may exist in
one domain - such as Web, database and print - and depend on network requirements.
An organizational unit (OU) is a subdivision within an Active Directory into which one can
place users, groups, computers, and other organizational units. One can create organizational
units to mirror the organization’s functional or business structure. Each domain can implement
its own organizational unit hierarchy. If the organization contains several domains, one can
create organizational unit structures in each domain that are independent of the structures in
the other domains.
The term “organizational unit” is often shortened to “OU” in casual conversation. “Container”
is also often applied in its place, even in Microsoft’s own documentation. All terms are considered
correct and interchangeable.
At Indiana University, most OUs are organized first around campuses, and then around
departments; sub-OUs are then individual divisions within departments. For example,
the BL container represents the Bloomington campus; the BL-UITS container is a subdivision
that represents the University Information Technology Services (UITS) department on the
Bloomington campus, and there are subcontainers below that. This method of organization is
not an enforced rule at IU; it is merely chosen for convenience, and there are exceptions.
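The nesting of OUs is what gives an object its distinguished name (DN). The small sketch below walks a hypothetical DN modelled on the campus-then-department pattern described above; the DN itself, including the domain components, is invented for illustration and the parsing is deliberately naive (real DNs can contain escaped commas).

# A distinguished name read right-to-left walks down the OU hierarchy.
dn = "CN=Jane Doe,OU=BL-UITS,OU=BL,DC=ads,DC=example,DC=edu"   # hypothetical DN

parts = [rdn.split("=", 1) for rdn in dn.split(",")]           # naive split
ous = [value for kind, value in parts if kind == "OU"]
domain = ".".join(value for kind, value in parts if kind == "DC")

print("object:", parts[0][1])                     # Jane Doe
print("OU path:", " / ".join(reversed(ous)))      # BL / BL-UITS (campus, then department)
print("domain:", domain)                          # ads.example.edu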
Trusted Domain. A trusted domain is a domain that the local system trusts to authenticate
users. In other words, if a user or application is authenticated by a trusted domain, this
authentication is accepted by all domains that trust the authenticating domain.
Trust relationships are an administration and communication link between two domains.
A trust relationship between two domains enables user accounts and global groups to be used
in a domain other than the domain where the accounts are defined.
5.10.2. Groups
Groups are used to collect user accounts, computer accounts, and other groups into
manageable units. Working with groups instead of with individual users helps simplify network
maintenance and administration.
Security groups can provide an efficient way to assign access to resources on your network.
User rights are assigned to a security group to determine what members of that group can do
within the scope of a domain or forest. User rights are automatically assigned to some security
groups when Active Directory is installed to help administrators define a person’s administrative
role in the domain. Group Policy can also be used to assign user rights to security groups in order to delegate specific
tasks. Permissions are different from user
rights. Permissions are assigned to the security group for the shared resource. Permissions
determine who can access the resource and the level of access, such as Full Control. Some
permissions that are set on domain objects are automatically assigned to allow various levels of
access to default security groups, such as the Account Operators group or the Domain Admins
group.
Security groups are listed in DACLs that define permissions on resources and objects.
When assigning permissions for resources (file shares, printers, and so on), administrators
should assign those permissions to a security group rather than to individual users. The
permissions are assigned once to the group, instead of several times to each individual user.
Each account that is added to a group receives the rights that are assigned to that group in
Active Directory, and the user receives the permissions that are defined for that group.
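The administrative payoff of assigning permissions to groups rather than to individual users can be seen in a few lines; the group names and permissions below are invented for illustration and the structure is only a simplified model of a DACL.

# Permissions live on groups; users get them only through membership.
group_members = {
    "Accounting Users": {"alice", "bob"},
    "Domain Admins": {"carol"},
}
share_dacl = {                        # simplified DACL for one file share
    "Accounting Users": {"read", "write"},
    "Domain Admins": {"read", "write", "full control"},
}

def user_permissions(user: str) -> set:
    """Collect every permission the user inherits via group membership."""
    perms = set()
    for group, members in group_members.items():
        if user in members:
            perms |= share_dacl.get(group, set())
    return perms

print(user_permissions("alice"))                   # read and write
group_members["Accounting Users"].add("dave")      # one change grants dave the same access
print(user_permissions("dave"))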
An organizational unit (OU) is a subdivision within an Active Directory into which users, groups,
computers, and other organizational units can be placed. Organizational units can be created to
mirror an organization's functional or business structure. Each domain can implement its own
organizational unit hierarchy. If an organization contains several domains, organizational unit
structures can be created in each domain that are independent of the structures in the other
domains.
124
5.10.2.4. Domain
A server running Active Directory Domain Services (AD DS) is called a domain controller.
It authenticates and authorizes all users and computers in a Windows domain type network—
assigning and enforcing security policies for all computers and installing or updating software.
Group Policies are applied in the following order; the last one applied can overwrite policies from any level above. The default order is:
Local
Site
Domain
Organizational Unit (OU)
By default, a Group Policy Object carries the following permission entries:
Domain Administrators – Read, Write, Create All Child Objects, Delete All Child Objects,
Special Permissions
Enterprise Administrators – Read, Write, Create All Child Objects, Delete All Child
Objects, Special Permissions
System – Read, Write, Create All Child Objects, Delete All Child Objects, Special
Permissions
For example, for elevated accounts, passwords should be set to at least 15 characters,
and for regular accounts at least 12 characters. Setting a lower value for minimum password
length creates unnecessary risk. The default setting is “zero” characters, so one will have to
specify a number:
2. In the right pane, double-click “Minimum password length” policy, select “Define
this policy setting” checkbox.
If the password expiration age is set to a lengthy period of time, users will not have to change it very frequently, which means it is more likely a password could get stolen. Shorter password expiration periods are always preferred.
Windows’ default maximum password age is set to 42 days. The following screenshot
shows the policy setting used for configuring “Maximum Password Age”. Perform the following
steps:
b. To set the Account Lockout Threshold policy setting, right click it and
select Properties from the drop down list.
c. The Account Lockout Threshold properties dialog box opens. For our example, we
amend the lockout threshold number to 12. Click OK to apply the changes.
d. A message informs that, since the Account Lockout Threshold policy setting has been
given a value, Windows Server automatically defines and applies a security setting
of 30 minutes to the other policy settings (Account Lockout Duration and Reset
Account Lockout Counter After). Click OK to continue.
e. The Account Lockout Threshold has now been successfully configured. The other
policy settings, Account Lockout Duration and Reset Account Lockout Counter After,
also have been updated.
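The same password and lockout values discussed above can also be applied from PowerShell. The sketch below is illustrative only and assumes the ActiveDirectory module; the domain name is a placeholder and the numbers mirror the examples in the text (12-character minimum, 42-day maximum age, lockout threshold of 12, 30-minute duration and reset window).

    Import-Module ActiveDirectory
    Set-ADDefaultDomainPasswordPolicy -Identity example.com `
        -MinPasswordLength 12 `
        -MaxPasswordAge (New-TimeSpan -Days 42) `
        -LockoutThreshold 12 `
        -LockoutDuration (New-TimeSpan -Minutes 30) `
        -LockoutObservationWindow (New-TimeSpan -Minutes 30)

    # Verify what is now in force for the domain
    Get-ADDefaultDomainPasswordPolicy -Identity example.com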
a. Navigate to Computer Configuration -> Policies -> Windows Settings -> Security
Settings -> Local Policies -> Security Options -> Interactive logon: Machine inactivity
limit
c. Double-click Configure Automatic Updates and set to Enabled, then configure the
update settings and click OK.
a. Select the Group Policy Object in the Group Policy Management Console (GPMC), then click on the “Delegation” tab and then click on the “Advanced” button.
b. Select the “Authenticated Users” security group and then scroll down to the “Apply
Group Policy” permission and un-tick the “Allow” security setting.
c. Now click on the “Add” button and select the group (recommended) to which one wants this policy to apply. Then select the group (e.g. “Accounting Users”) and scroll
the permission list down to the “Apply group policy” option and then tick the “Allow”
permission.
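The same security filtering can be scripted with the GroupPolicy module as a rough equivalent of the GPMC steps above. The GPO and group names are placeholders; note that the group removed from “Apply Group Policy” should still keep read access so the policy can be evaluated.

    Import-Module GroupPolicy
    # Leave Authenticated Users with read-only access (no Apply)
    Set-GPPermission -Name "Accounting Policy" -TargetName "Authenticated Users" `
        -TargetType Group -PermissionLevel GpoRead -Replace
    # Grant Apply Group Policy to the intended group only
    Set-GPPermission -Name "Accounting Policy" -TargetName "Accounting Users" `
        -TargetType Group -PermissionLevel GpoApply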
· In the Select GPO dialog under Group Policy Objects, select the GPO one wants to
link and click OK.
In the right pane, the new GPO will be listed. GPOs with a lower link order number, i.e. those that appear higher up the list, take priority over those with higher numbers. Link GPOs to
AD sites and domains in the same way that it’s possible to link them to OUs. The GPO settings
will be applied to AD objects that fall in scope, i.e. in this example any computer accounts
located in the Domain Controllers OU.
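For reference, the same link can be created from PowerShell with the GroupPolicy module; the GPO name and target distinguished name below are placeholders.

    Import-Module GroupPolicy
    # Link an existing GPO to an OU and give it link order 1 (highest precedence)
    New-GPLink -Name "Domain Controller Hardening" `
        -Target "OU=Domain Controllers,DC=example,DC=com" -LinkEnabled Yes -Order 1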
The gpupdate /force command is useful when working manually with clients and servers
to get GPO settings to apply. However, it is also important for some GPO settings to be forced
during the standard refresh cycle, which is typically every 90 minutes. This is possible by
configuring one or more GPO settings to coincide with the different Group Policy Extensions
that are embedded in each GPO.
Access the following path in the GPO, which contains the settings that need to be forced
on each refresh: Computer Configuration | Administrative Templates | System | Group Policy.
Figure 5.11. Policy processing settings can force the application of GPO settings.
Within each of these policies there is an option to “Process even if the Group Policy
objects have not changed,” as shown in Figure: 5.12.
Figure 5.12. GPO setting to process GPO settings even if there have been no changes.
With this setting configured, the settings in the GPO will apply on each refresh, even if
there are no changes to the GPO. This ensures that the settings are applied consistently.
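On Windows Server 2012 or later, a refresh can also be pushed to remote machines from PowerShell instead of running gpupdate locally; a minimal sketch, with a placeholder computer name:

    Import-Module GroupPolicy
    # Schedule an immediate Group Policy refresh on a remote computer
    Invoke-GPUpdate -Computer "WKS-001" -Force -RandomDelayInMinutes 0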
Using ADManager Plus ‘GPO Management’, it becomes quite simple for administrators to know all the required details and the status of all the required GPOs in an instant.
The first step in managing GPOs is to know the list of all available GPOs in the domain.
ADManager Plus provides this information, instantly.
Procedure:
In the ‘GPO Management’ section, click on the ‘Group Policy Objects’ container in the
required domain to view the list of all available GPOs in that domain.
Steps:
· Click the ‘AD Mgmt’ tab.
· In ‘GPO Management’, click the ‘GPO Management’ link.
· In the ‘Group Policy Management’ pane on the left hand side, click on ‘All Domains’ to expand the link and view all the configured domains.
· Click on the ‘+’ icon beside the required domain. This will list all the containers in the
domain.
· Click on ‘Group Policy Objects’ container to view all the available GPOs in this
particular domain.
· Using the ‘Enable’ and ‘Disable’ options located just above the list of GPOs, one
can enable/disable the required GPOs completely or partially (user/computer
configuration settings).
Note: If one clicks on the ‘domain’ itself instead of the ‘+’ icon beside the domain name, one will be able to view only those GPOs that are linked to this domain, instead of all the available GPOs of this domain.
Administrators can instantly view the list of all the GPOs that are linked to any specific
Domain/OU/Site using this option.
Procedure:
In the ‘GPO Management’ section, click on the required domain to view all the GPOs that are linked to that domain.
Steps:
· In the ‘Group Policy Management’ pane on the left hand side, click on ‘All Domains’
to expand the link and view all the configured domains.
· Click on the required Domain/OU. This will display all the GPOs that are linked to
that specific container.
· To select a site, click on ‘All Sites’ and then the forest in which the required site is
located. Then, click on the required site to view all the GPOs linked to this site.
· One can manage the links of all GPOs linked to this container through the ‘Manage’
and ‘Enforce’ options located just above the list of linked GPOs.
o Click on the ‘Link GPOs’ option located in the top right corner of this page.
o In the ‘Select GPOs to be linked’ window that opens up, select the domain in which
the required GPO is located.
o This will list all the GPOs in the domain. One can also locate the GPO using the
search option.
o One will see a summary of the action just performed along with the linking status,
for each GPO.
This option enables the administrators to know in detail, all the containers that any specific
GPO is linked to.
Procedure:
In the ‘GPO Management’ section, in the ‘Group Policy Objects’ container, click on the
required GPO to view the list of all the containers to which this GPO is linked to, along with the
link status.
Steps:
· In the ‘Group Policy Management’ pane on the left hand side, click on ‘All Domains’
to expand the link and view all the configured domains.
· Click on ‘Group Policy Objects’ container to view all the GPOs available in the
domain. For each GPO, the status of the ‘user configuration settings’ and also the
‘computer configuration settings’ are shown.
· From the list of all available GPOs, click on the required GPO. This will list all the
containers to which this GPO is linked along with the link status, enforce status and
the canonical name of the linked location.
· From this page, one can manage the links of this particular GPO through the ‘Manage’
and ‘Enforce’ options located just above the list of linked containers.
· One can also view the status of this particular GPO, in the ‘GPO Status’ located in
top right corner of this page. Using the change option located beside it, one can
also change the GPO status, as required.
Note:
To view the links from all the sites, in the ‘display links from’ option located just above the
list of linked containers; select ‘All Sites’ from the options.
Right-click the newly created policy and choose Edit. Since this needs to apply on a per-computer basis, in the Group Policy Management Editor console expand Computer
Configuration > Preferences > Control Panel Settings and click on Local Users and Groups. As
one can see, there are other items that can be configured here too, such as shortcuts, printers, and enabling or disabling services on clients; if one opens the Windows Settings folder one can find more.
Feel free to explore and test them, but right now do a right-click on Local Users and Groups and
choose New > Local Group.
On the Action drop-down box one has multiple choices. To create a new local group on the clients, go with the Create option; to replace a local group with the one named here, go with Replace, and so on. Right now choose Update and, from the Group name drop-down box, select the local group on which one wants to make changes. The local Administrators and local Remote Desktop Users groups are the most used ones. If the group to be updated is not listed here one can type its name, but do not click the ellipsis button and search for it, because that will search the domain rather than the local computer.
Click the Add button in the Members section and add the domain users and/or groups (groups are recommended) that one wants to be part of the group selected in the Group name box. On the Action drop-down box make sure Add to this group is selected and click OK.
Leave the rest of the settings unchanged.
To be able to see the changes without waiting until the policy is applied automatically (between 90 and 120 minutes), run gpupdate /force on some of the clients to re-read the policies from the domain controller(s) and apply them, or use the Group Policy Update option if one has 2012 domain controllers. After the policy is applied, one can go ahead and check if it worked. Launch the Local Users and Groups console (Start > Run > lusrmgr.msc) on a client PC, click the Groups folder, then open the properties of the group that was updated through Group Policy Preferences. The domain users and/or groups should be member(s) of this local group.
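The check described above can also be done from PowerShell on a client running Windows 10 or Server 2016 and later; the group name below assumes the local Administrators group was the one updated.

    # Refresh policy, then list the members of the local group that the
    # Group Policy Preference item manages
    gpupdate /force
    Get-LocalGroupMember -Group "Administrators"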
If one wants to be more granular with the policy, it can be set so that it applies only to specific operating systems, or to computers that have a specific MAC address. Just click on the Common tab on the Group Policy Preferences item and check the Item-level targeting check box, then hit the Targeting button. As one can see, there are quite a few settings from which one can choose.
In time, if one wants to remove some of the members from the local group(s), do not just go and delete the Group Policy Preferences item(s), because that will not accomplish what one wants. The item needs to be updated again instead. From the policy, open the item properties, select the domain user or group one wants to remove, click the Change button, then in the new window select Remove from this group. Click OK.
Fig 5.22. Updating and Changing
Fig 5.23. Updating Local Group Member
Leave it a few days or weeks, just in case some of the users are traveling and they did not
connect to the company’s network. After the membership was removed from the local group(s)
one can go ahead and delete the member(s) from the Group Policy Preference item, or delete
the item itself if no other members are present or one doesn’t need it anymore.
And that’s it, simple and effective. By using this method one can add domain members to
whatever local groups one wants without typing any bits of code. Also, one can create, modify,
and remove those local groups as needed.
Depending on who has designed or organized the Active Directory OU structure, one will
typically have a set of containers or folders similar to the layout of a file system. These folders
(OUs) can contain any AD object like Users, Computers, Groups, etc. Even though they contain
these objects, all Group Policy Objects contain built-in filtering. When we create a new GPO,
we will see there are two main configuration options available (built-in filtering). These
are Computer Configuration and User Configuration. We can apply configurations to
both Users and Computers within the same GPO, but we can also specify one or the other as
well.
Domain based Group Policy Objects are far more common in organizations, mostly
because setting up a new domain creates a “Default Domain Policy” at the root of that domain.
This policy contains a few default settings like a password policy for the users, but most
organizations change these. Additionally, some organizations modify this default policy and
add their own specifications and settings.
One can definitely add to and edit the Default Domain Policy, but one may be better off
just creating a new GPO at the root of the domain. If one decides to modify the existing Default
Domain Policy or create a new GPO, please be aware one should apply certain settings to the
root domain and not subsequent locations like OUs. It is possible to set these settings in alternate
locations, but not recommended. One can only set these settings once per domain, and thus
the best practice is to apply these at the root of the domain.
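As a sketch of that best practice, a new GPO can be created and linked at the domain root with the GroupPolicy module; the GPO name and domain distinguished name are placeholders.

    Import-Module GroupPolicy
    # Create a fresh GPO and link it at the root of the domain, leaving the
    # Default Domain Policy untouched
    New-GPO -Name "Corp Password Settings" -Comment "Domain-wide account settings" |
        New-GPLink -Target "DC=example,DC=com" -LinkEnabled Yes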
Now that we understand how Windows applies Local Group Policy settings, we move
toward understanding how an organization that has Active Directory (AD) can apply GPOs. At
the topmost layer, Group Policy Objects can apply to the “site” level. To understand how a site-
based Group Policy could work, we must first generally understand how large organizations
might structure their environment.
On the local system, one can view and edit the Local Group Policy settings by searching
the computer. Using the Start Menu, begin typing (searching) for “Edit Group Policy.” One can
configure settings for the local system or account, but all subsequent Group Policy layers (site,
domain, and OU) that have the same setting configured or enabled can overwrite these settings.
This means one can configure Group Policies locally, but those settings can be overwritten whenever site, domain, or OU GPOs applied to the system or user account are configured to set the same policies.
Essentially loopback processing changes the standard group policy processing in a way
that allows user configuration settings to be applied based on the computer's GPO scope during
logon. This means that user configuration options can be applied to all users who log on to a
specific computer.
Common scenarios where this policy is used include publicly accessible terminals, machines
acting as application kiosks, terminal servers and any other environment where the user settings
should be determined by the computer account instead of the user account.
Computer Configuration > Administrative Templates > System > Group Policy > User
Group Policy loopback processing mode
When enabled, one must select which mode loopback processing will operate in: Replace
or Merge.
Replace mode will completely discard the user settings that normally apply to any users
logging on to a machine applying loopback processing and replace them with the user settings
that apply to the computer account instead.
Merge mode will apply the user settings that apply to any users logging on to a machine
applying loopback processing as normal and then will apply the user settings that apply to the
computer account; in the case of a conflict between the two, the computer account user settings
will overwrite the user account user settings.
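Loopback processing is normally enabled by editing the GPO at the path shown above; the registry-backed equivalent can also be set with Set-GPRegistryValue, as in the hedged sketch below. The GPO name is a placeholder, and the assumption here is that a UserPolicyMode value of 1 corresponds to Merge and 2 to Replace.

    Import-Module GroupPolicy
    # Enable loopback processing (Replace mode) in an existing GPO
    Set-GPRegistryValue -Name "Kiosk Policy" `
        -Key "HKLM\Software\Policies\Microsoft\Windows\System" `
        -ValueName "UserPolicyMode" -Type DWord -Value 2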
Loopback processing affects the way in which the GetGPOList function operates. Normally, when a user logs on, the GetGPOList function collects a list of all in-scope GPOs and arranges
them in precedence order for processing.
When loopback processing is enabled in Merge mode the GetGPOList function also
collects all in scope GPOs for the computer account and appends them to the list of GPOs
collected for the user account; these then run at a higher precedence than the user's GPOs.
When loopback processing is enabled in Replace mode the GetGPOList function does
not collect the user's in-scope GPOs.
So, without loopback enabled, policy processing looks a little like this:
1. Computer Node policies from all GPOs in scope for the computer account object are
applied during start-up (in the normal Local, Site, Domain, OU order).
2. User Node policies from all GPOs in scope for the user account object are applied
during logon (in the normal Local, Site, Domain, OU order).
With loopback enabled in Merge mode, processing looks like this:
1. Computer Node policies from all GPOs in scope for the computer account object are applied during start-up (in the normal Local, Site, Domain, OU order); the computer flags that loopback processing (Merge Mode) is enabled.
2. User Node policies from all GPOs in scope for the user account object are applied
during logon (in the normal Local, Site, Domain, OU order).
3. As the computer is running in loopback (Merge Mode) it then applies all User Node
policies from all GPOs in scope for the computer account object during logon (Local, Site,
Domain and OU). If any of these settings conflict with what was applied during step 2, the computer account setting will take precedence.
With loopback enabled in Replace mode, processing looks like this:
1. Computer Node policies from all GPOs in scope for the computer account object are applied during start-up (in the normal Local, Site, Domain, OU order); the computer flags that loopback processing (Replace Mode) is enabled.
2. User Node policies from all GPOs in scope for the user account object are not applied
during logon (as the computer is running loopback processing in Replace mode no list of user
GPOs has been collected).
3. As the computer is running in loopback (Replace Mode) it then applies all User Node
policies from all GPOs in scope for the computer account object during logon (Local, Site,
Domain and OU).
If one wants to add an exception to this rule (for example, if loopback processing has been used to secure a terminal server in Replace mode, but the server administrators should not receive the settings), then one can set a security group containing the administrators' accounts to Deny for the Apply group policy option on the Delegation tab of the GPO(s), as viewed from the Group Policy Management Console (GPMC). This will have to be set for all GPOs that contain user settings one wishes to deny and that are in scope for the computer account.
With Fine-Grained Password Policies, the policy is created in the Active Directory Administrative Center and users are added to it without touching the default password policy.
· The domain functional level needs to be on Windows Server 2008 and above
· Only the Active Directory Administrative Center and PowerShell can be used to
manage it
To get started, open ADAC, enable Tree View in the console and go to the System container and then the Password Settings Container.
In the Password Settings Container, right click and click on new and fill the details of the
new Policy.
Lockout options can also be set up here, so the password policy and a strong lockout policy can be configured from a single menu.
Once all the settings are set, users need to be added and Apply should be clicked. Multiple policies can be created and applied to users and groups (dynamic and regular).
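Since fine-grained password policies can also be managed with PowerShell, a minimal sketch is shown below; the policy name, precedence, values and target group are illustrative, not prescribed by this text.

    Import-Module ActiveDirectory
    # Create a Password Settings Object and apply it to one group only
    New-ADFineGrainedPasswordPolicy -Name "Admin-PSO" -Precedence 10 `
        -MinPasswordLength 15 -ComplexityEnabled $true `
        -MaxPasswordAge (New-TimeSpan -Days 42) `
        -LockoutThreshold 3 `
        -LockoutDuration (New-TimeSpan -Minutes 30) `
        -LockoutObservationWindow (New-TimeSpan -Minutes 30)
    Add-ADFineGrainedPasswordPolicySubject -Identity "Admin-PSO" -Subjects "Domain Admins"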
For Active Directory Federation Services (AD FS) to function, each computer that functions as a federation server must be joined to a domain. Federation server proxies may be joined to a domain, but this is not a requirement.
1. On the Start screen, type Control Panel, and then press ENTER.
3. Under Computer name, domain, and workgroup settings, click Change settings.
5. Under Member of, click Domain, type the name of the domain that this computer
will join, and then click OK.
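The same join can be performed from an elevated PowerShell session; the domain name below is a placeholder, and the command prompts for credentials before restarting the machine.

    # Join the computer to the domain and restart it
    Add-Computer -DomainName "example.com" -Credential (Get-Credential) -Restart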
GPA enables one to match multiple copies of a GPO to a single GPO known as a master
GPO. A master GPO is one that is selected to use as a controlling source for other GPOs. The
GPOs one selects to match the master GPO are controlled GPOs. The process of matching
controlled GPOs to a master GPO is called GPO synchronization.
To synchronize GPOs:
1. Log on to the GPA Console computer with an account that has GPO synchronization
permissions.
2. Start the GPA Console in the NetIQ Group Policy Administrator program group.
4. Expand the appropriate domain hierarchy to the GPO one want to identify as a
master GPO, and then select the GPO.
7. Select the Make this GPO a master GPO check box, and then click OK.
10. If one wants to select GPOs from the GP Repository, accept the default selection,
and then click OK.
11. If one wants to select GPOs from an Enterprise Consistency Check report XML file,
select ECC Wizard XML file, and then browse to the location of the file.
12. If one wants to determine whether the controlled GPOs are in sync with the master
GPO, select the controlled GPOs one wants to check, and then click Run Sync
Report. GPA generates an Enterprise Consistency Check report on the master GPO
and the selected controlled GPOs in the GP Repository.
13. If one wants to synchronize a controlled GPO with the master GPO, select the
controlled GPO and click Synchronize. One does not need to perform this step if the In
Sync column indicates Yes.
Samba implements the Server Message Block (SMB) protocol in Red Hat Enterprise
Linux. The SMB protocol is used to access resources on a server, such as file shares and
shared printers.
One can use Samba to authenticate Active Directory (AD) domain users to a Domain
Controller (DC). Additionally, one can use Samba to share printers and local directories to other
SMB clients in the network.
Samba’s winbindd service provides an interface for the Name Service Switch (NSS) and
enables domain users to authenticate to AD when logging into the local system.
Using winbindd provides the benefit that one can enhance the configuration to share
directories and printers without installing additional software.
If one wants to join an AD domain and use the Winbind service, use the realm join --client-software=winbind domain_name command. The realm utility automatically updates the configuration files, such as those for Samba, Kerberos, and PAM.
5.18.1. Authentication
Authentication in a physical access control involves checking the identity of a user against
the logical data stored within the repository. Figure shows examples of existing techniques
which can be used to authenticate users. In addition to the authentication techniques shown, a
Digital Signature can be used to verify data at its origin by producing an identity that can be
verified by all parties involved in the transaction taking place. As one can see from Figure there
are a number of different methods for authentication classified as:
• Something you know - username and password are the standard form of this kind of authentication.
• A combination of what one has and what one knows - this combination is commonly called two-factor authentication. Two-factor authentication confirms a user's identity using something they have (an identity token) and something they know (an authorization code or PIN code). The problem here is that one still has to remember the PIN in order to use the system. People may be inclined to write the PIN down somewhere in order to remember it.
• Something unique about the user, or something you are - these are biometrics.
“A smart card resembles a credit card in shape and size, but contains an embedded
microprocessor.” The microprocessor is used to hold specific information in encrypted format.
Smart cards are defined according to the following:
• The type of chip embedded within the card, and its capabilities.
Smart cards come in two main types, ‘Contact’ and ’Contactless’. Contactless smart cards
avoid the ‘wear and tear’ of contact smart cards by avoiding the need for physical contact
between the card and card reader.
Smart cards can be used to support ‘single sign-on’ and as a single form of authentication
to buildings and to IT applications. A smart card can be used to gain access to a particular
building or area through doors or gates equipped with smart card readers. “The same smart
card, already employed as a form of ‘mobile’ authentication, can then also be used for ‘logical’
authentication in the IT environment i.e. the user’s smart card can be required as a form of
identification for logon to a computer, office network, VPN or other resource.”
5.20.1. Biometric
There are two main phases to biometric based systems; enrolment and recognition. In
the enrolment phase a master template is constructed from a number of biometric scans. When
a user wishes to gain access, their on-the-fly scan is either verified or identified. In verification,
the user must declare their identity (username or smart card) and only one comparison must
take place against the template identified by their username. If the user’s identity is unknown,
the identity of the user must be matched against a database of master templates, and therefore
more processing must take place.
It is suggested that Biometrics can be assessed in terms of their usability and security.
Figure shows a graph of the biometric usability against biometric security.
Smart cards have a higher usability than finger-vein and iris recognition, although less
secure. Voice and face recognition have a higher usability still, but it is easy for someone to
record a voice or put up a picture of a face to fool the system into authenticating them falsely.
There are a number of physiological and medical factors that can affect the usability and efficiency
of biometrics, e.g. something as common as arthritis may affect usability (it may be difficult to
position the finger and/or hand correctly for recognition). For these reasons smart cards are
preferred over biometrics, but it will not be long before biometrics catch up in terms of usability,
cost and standardization.
5.20.3. RFID
RFID tags come in a number of different types: active, passive and semi-passive. Active tags are powered by a battery, and are able to broadcast to a reader over distances of over 100 feet.
Passive tags are not battery powered, but draw their power from a low-power radio signal through the antenna. The main disadvantage with passive tags is that they can only transmit over shorter distances, but they are lower in cost than active tags. Lacking their own power, they also have less encryption and are
left open to power consumption attacks and eavesdropping attacks. Semi-passive tags lie
between passive and active tags. Like active tags, they have a battery, but still use the readers’
power to transmit a message back to the RFID reader. Semi-passive tags thus have the read reliability of an active tag but the read range of a passive tag. RFIDs also come in a range of frequencies
with different suitable uses:
• Low Frequency (125/134KHz) - Most commonly used for access control and asset
tracking.
• Mid-Frequency (13.56 MHz) - Used where medium data rate and read ranges are
required.
• Ultra High-Frequency (850 MHz to 950 MHz and 2.4 GHz to 2.5 GHz) - offer the
longest read ranges and high reading speeds.
Chips can hold a “kill” or self-destruct feature which stops the chip responding to commands
when a certain command is sent to the chip. The kill feature would be useful in the case of a lost
RFID used for physical access purposes. Contactless smart cards are a combination of smart
cards, wireless protocols and the passive powering used in RFID. The amount of data that RFID tags are able to hold increases over time as RFID technology advances; it is proposed that low-cost tags will have more bits and thus be able to support increasingly complex RFID viruses.
Vulnerability in a physical access system may allow access to unauthorized users or a hacker
to take control of the system. Before implementing an RFID based system the implications of
an RFID virus would have to be understood further.
After installation of Authentication Agent for AD FS, one must register it with Authentication
Manager.
Before starting:
Make sure the Agent Name that was specified when installing the Agent for AD FS is known.
Procedure
1. Sign into the RSA Security Console.
3. Enter the required information. Make sure the Agent Type is set to Standard Agent
(default setting).
4. Click Save.
After installing RSA Authentication Agent for Microsoft AD FS on all federation servers in
the AD FS deployment, one must register the agent on the primary federation server using the
RSA
Procedure
1. Sign into the primary AD FS server where one installed the agent.
.\MFAAuthProviderConfigSettings.ps1
Requirements
Smart Card Authentication to Active Directory requires that Smartcard workstations, Active
Directory, and Active Directory domain controllers be configured properly. Active Directory must
trust a certification authority to authenticate users based on certificates from that CA. Both
Smartcard workstations and domain controllers must be configured with correctly configured
certificates.
As with any PKI implementation, all parties must trust the Root CA to which the issuing
CA chains. Both the domain controllers and the smartcard workstations trust this root.
· One must be able to configure Azure Active Directory for the organization in Microsoft
Azure.
4. Search for Amplitude in the app gallery. Select the Amplitude entry and click “Add”
in the bottom right of the app summary.
7. These are the “Entity ID” and “Assertion Consumer Service URL” respectively in
the SSO settings in Amplitude.
1. Server configurations: Every Windows server has some basic configuration such as administrator users, network settings, file sharing, etc. Check the configuration of the workstations that are managed by the Active Directory that is to be reviewed.
2. Services: List the servers that provide specific functionality or services to the network
such as DHCP, DNS, Exchange, and File Servers.
4. For specifying the permissions in the domain object, always use global or universal
groups. Never use the local group for setting permissions to any domain object.
5. Check default user groups and their members. Remove unnecessary groups and their corresponding default user rights.
10. Check whether server software is updated with the Microsoft recommended security
patches.
11. Secure the DNS. Though it is a separate service and can reside on servers that are not hosting Active Directory, DNS helps Active Directory to locate the domain controllers and other necessary services in the network.
3. Disable boot from any removable devices except the boot disk.
4. Run only the services needed to run the server. Disable the rest. The services that
can be disabled include IIS, SMTP, Fax, Indexing, Shell Hardware Detection, Distributed Link Tracking Client, Upload Manager, Portable Media Serial Number, Windows Audio and Utility Manager.
Before starting to harden the security of Active Directory, try to collect the complete topology of the network, including the number of domains, sub-domains, and forests. Also determine whether the Active Directory is only used locally or whether other external offices of the organization are under the Active Directory. Besides, make a list of administrators: service admin, data admin,
enterprise admin, domain admin, backup operators and forest owners.
· Domain controller logon policy should allow “logon locally” and “system shutdown”
privileges to the following administrators: 1. Administrators; 2.Backup operators;
3. Server operators
· The domain controller security policy should be defined in a separate GPO, which should be linked to the Organizational Unit (OU) containing the domain controllers.
· Set the domain Account lockout duration to ‘0’ and lockout threshold to three.
· Check the domain Kerberos policy for logon restrictions and the maximum lifetime for service tickets and user tickets. Also check the clock synchronization tolerance; ideally it should be 3 to 5 minutes.
· Check the domain controller event log policy; in particular, pay attention to the log
retention time and access. Disable the guests group from accessing the log.
Tips on securing domain admins, local administrators, audit policies, monitoring AD for
compromise, password policies, vulnerability scanning and much more are discussed below:
There should be no day-to-day user accounts in the Domain Admins group; the only exception is the default Domain Administrator account.
Members of the DA group are too powerful. They have local admin rights on every domain
joined system (workstation, servers, laptops, etc).
Microsoft recommends that when DA access is needed, the account has to be temporarily
placed in the DA group. When the work is done the account should be removed from the DA
group.
This process is also recommended for the Enterprise Admins, Backup Admins and Schema
Admin groups.
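A hedged PowerShell sketch of that workflow (the account name is a placeholder):

    Import-Module ActiveDirectory
    # Elevate the admin account only for the duration of the task
    Add-ADGroupMember -Identity "Domain Admins" -Members "jdoe.admin"
    # ... perform the administrative work ...
    Remove-ADGroupMember -Identity "Domain Admins" -Members "jdoe.admin" -Confirm:$false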
Day-to-day logons should not be done with an account that is a local admin or has privileged access (Domain Admin).
It is recommended to create two accounts: a regular account with no admin rights and a privileged account that is used only for administrative tasks. Do not put the secondary account in the Domain Admins group, at least not permanently.
Follow the least privilege administrative model. Basically, this means all users should log
on with an account that has the minimum permissions to complete their work.
The built in Administrator account should only be used for the domain setup and disaster
recovery (restoring Active Directory).
Anyone requiring administrative level access to servers or Active Directory should use
their own individual account.
No one should know the Domain Administrator account password. Set a really long 20+
characters password and lock it in a vault. Again the only time this is needed is for recovery
purposes.
In addition, Microsoft has several recommendations for securing the built in Administrator
Account. These settings can be applied to group policy and applied to all computers.
The local administrator account is a well known account in Domain environments and is
not needed.
An individual account should be used that has the necessary rights to complete tasks.
1. It is a well-known account; even if renamed, its SID is the same and is well known to attackers.
2. It is often configured with the same password on every computer in the domain.
To perform admin tasks on the computer (install software, delete files, etc) use the individual
account, not the local admin account.
Even if the account is disabled, booting can be done in safe mode and the local
administrator account can be used.
Local Administrator Password Solution (LAPS) is a popular tool to handle the local admin
password on all computers.
LAPS is a Microsoft tool that provides management of local account password of domain
joined computers. It will set a unique password for every local administrator account and store
it in Active Directory for easy access.
LAPS is built upon the Active Directory infrastructure so there is no need to install additional
servers.
The solution uses the group policy client side extension to perform all the management
tasks on the workstations. It is supported on Active Directory 2003 SP1 and above and client
Vista Service Pack 2 and above.
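Once LAPS is deployed, an authorized administrator can read a machine's managed password with the AdmPwd.PS module that ships with the tool; a minimal sketch with a placeholder computer name:

    Import-Module AdmPwd.PS
    # Retrieve the current local administrator password and its expiry
    Get-AdmPwdPassword -ComputerName "WKS-001" |
        Select-Object ComputerName, Password, ExpirationTimestamp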
A secure admin workstation is a dedicated system that should only be used to perform
administrative tasks with a privileged account.
It should not be used for checking email or browsing the internet. In fact, internet access
should be restricted.
· Group Policy
Basically, when it is need to use a privileged account to perform admin tasks it should be
done from a SAW.
Daily use workstations are more vulnerable to compromise from pass the hash, phishing
attacks, fake websites, keyloggers and more.
Using a secure workstation for an elevated account provides much greater protection
from those attack vectors.
Since attacks can come from internal and external sources, it is best to adopt an assume-breach security posture.
Due to the continuous threats and changes to technology the methodology on how to
deploy a SAW keeps changing. There is also PAW and jump servers to make it even more
confusing.
· Block internet
Ensure the following Audit Policy settings are configured in group policy and applied to all
computers and servers.
Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Advanced
Audit Policy Configuration
Account Logon
Account Management
Detailed Tracking
Logon/Logoff
Object Access
Policy Change
Privilege Use
System
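After the GPO applies, the effective audit policy on a machine can be spot-checked with the built-in auditpol utility, for example:

    # Show the effective advanced audit settings for two of the categories above
    auditpol /get /category:"Account Logon"
    auditpol /get /category:"Logon/Logoff"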
Malicious activity often starts on workstations; if continuous monitoring is not in place, early signs of an attack can be missed.
The following Active Directory events should be monitored, as they will help detect compromise and abnormal behavior on the network.
Here are some events that should be monitored and reviewed on a weekly basis.
· Account lockouts
· Logon/Logoff events
The best way is to collect all the logs on a centralized server then use log analyzing
software to generate reports.
Some log analyzers come pre built with Active Directory security reports.
· Elk Stack
· Lepide
· Splunk
· ManageEngine ADAudit Plus
In this screenshot, one can see a huge spike in logon failures. Without a log analyzer,
these events would be hard to spot.
9. Password Complexity
Passphrases are simply two or more random words put together. One can add numbers
and characters if needed.
· use passphrases
Bucketguitartire22
Screenjugglered
RoadbluesaltCloud
The above examples are totally random. These would take a very long time to crack and
most likely no one would guess them.
Ireallylikepizza22
Theskyisblue44
NIST recently updated their password policy guidelines in Special Publication 800-63 to
address new requirements for password policies.
If the organization must meet certain standards then make sure those standards support
these password recommendations.
Applying permissions to resources with security groups and not individual accounts makes managing resources much easier. Security groups should not be given a generic name like helpdesk or HR Training.
One needs to have a procedure in place to detect unused user and computer accounts in
Active Directory.
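One simple way to implement such a procedure is the Search-ADAccount cmdlet; the 90-day window below is an illustrative choice, not a value from this text.

    Import-Module ActiveDirectory
    # Users and computers with no logon in the last 90 days
    Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
        Select-Object Name, LastLogonDate
    Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -ComputersOnly |
        Select-Object Name, LastLogonDate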
Domain controllers should have limited software and roles installed on them.
DC’s are critical to the enterprise, nobody wants to increase security risks by having
additional software running on them.
Windows Server Core is a great option for running the DC role and other roles such as
DHCP, DNS, print servers and file servers.
Server Core runs without a GUI and requires fewer security patches due to its smaller
footprint.
Regular scanning and patching of software will remediate discovered vulnerabilities and reduce the risk of an attack.
· Scan all systems at least once a month to identify all potential vulnerabilities. If one
can scan more frequently it’s better.
· Prioritize the findings of the vulnerability scans and first fix the ones that have known
vulnerabilities in the wild.
· Identify out of date software that is no longer supported and get it updated.
One can prevent a lot of malicious traffic from entering the network by blocking malicious
DNS lookups.
Anytime a system needs to access the internet it will in most cases use a domain name.
There are several services available that check DNS queries for malicious domains and
blocks them. These DNS services gather intelligence about malicious domains from various
public and private sources. When it gets a query for a domain that it has flagged as malicious
it will block access when the system attempts to contact them.
Here is an example:
Step 3: The DNS service checks if the domain is on its threat list; it is, so it returns a block reply.
In the above example since the DNS query returned a block, no malicious traffic ever
entered into the network.
Quad9
OpenDNS
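As a quick illustration, a lookup can be pointed at a filtering resolver such as Quad9 (9.9.9.9) from PowerShell; a domain on the resolver's threat list is typically answered with a block (NXDOMAIN) instead of a real address. The domain name below is a placeholder.

    # Query a domain through a filtering DNS resolver
    Resolve-DnsName -Name "suspicious-example.com" -Server 9.9.9.9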
Also, most IPS (Intrusion Prevention System) products support the ability to check DNS lookups against a list of malicious domains.
With each new version of Windows OS, Microsoft includes built in security features and
enhancements. Just staying on the latest OS will increase overall security.
Compromised accounts are very common and this can provide attackers remote access
to the systems through VPN, Citrix, or other remote access systems.
One of the best ways to protect against compromised accounts is two factor authentication.
This will also help against password spraying attacks.
· DUO
· RSA
· Microsoft MFA
One should know what is connected to the network, if there are multiple locations with
lots of users and computers this can be challenging.
There are ways to ensure that only authorized devices can connect, but this can be costly and a lot of work to set up.
Another method that is already available is to monitor the DHCP logs for connected devices.
Most connections start with a DNS query. All domain joined systems should be set up to
use a local Windows DNS server.
With this setup, one can log every internal and external DNS lookup. When a client device
makes a connection to a malicious site it will log that site name in the DNS logs.
These malicious domains are usually odd, random-character domains that raise red flags.
Here are some screenshots of suspicious DNS lookups from certain logs.
ADFS has some great security features. These features will help with password spraying,
account compromise, phishing and so on.
· Attack Simulations – one should be doing regular phishing tests to help train end users. Microsoft will be releasing phish simulator software very soon.
· Custom bad passwords – Ability to add custom banned passwords to check against.
Cyber attacks can shut down systems and bring business operations to a halt.
The City of Atlanta was shut down by a cyber attack; this prevented residents from paying online utility bills. In addition, police officers had to write reports by hand.
A good incident response plan could have limited the impact and enabled services back
online much faster.
· Prioritize servers
The best way to control access to Active Directory and related resources is to use Security
Groups.
Create custom groups with very specific names, document who has rights and a process
for adding new users.
Don’t just allow users to be added to these custom groups without an approval process.
This is just another way permissions can get out of control.
Service accounts are those accounts that run an executable, task or service, handle AD authentication, etc.
These are widely used and often have a password set to never expire.
These accounts often end up with too many permissions and, more often than not, are a member of the Domain Admins group.
· Require vendors to make their software work without domain admin rights
SMBv1 is 30 years old and Microsoft says to stop using it (They have been saying that for
a long time).
SMB (Server Message Blocks) is a network file and printer sharing protocol.
Many viruses can spread and exploit flaws in the SMBv1 protocol.
In addition to the security issues with SMBv1, it is not an efficient protocol; one will lose performance with this old version.
Beginning with the Windows 10 Fall Creators Update, SMBv1 is disabled by default.
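A hedged sketch of checking for and disabling SMBv1 on a Windows system (the optional-feature command applies to client editions such as Windows 8.1/10):

    # Is the SMBv1 server still enabled?
    Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
    # Turn it off on the SMB server side
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
    # Remove the SMB1 optional feature entirely (client editions)
    Disable-WindowsOptionalFeature -Online -FeatureName "SMB1Protocol" -NoRestart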
A default install of the Windows Operating system has many features, services, default
settings and enabled ports that are not secure.
Establishing a secure configuration on all systems can reduce the attack surface while
maintaining functionality.
Microsoft has a Security Compliance Toolkit that allows one to analyze and test against
Microsoft recommended security configuration baselines.
It also provides security configuration baselines. In addition, it provides tools that can
scan a system and provide a report on failures.
Most of the recommended settings can be set up using Group Policy and deployed to all
computers.
CIS SecureSuite can also scan other systems such as Cisco, VMware, Linux, and more.
This utility was designed to monitor Active Directory and other critical applications. It will
quickly spot domain controller issues, prevent replication failures, track failed logon attempts
and much more.
Summary
• Authentication in a physical access control involves checking the identity of a user
against the logical data stored within the repository
• A smart card resembles a credit card in shape and size, but contains an embedded
microprocessor. The microprocessor is used to hold specific information in encrypted
format.
• An RFID system consists of tags that carry data for the object, readers designed to decode the data on the tag, and a host system or server that processes and manages the information gathered.
References
• (https://www.techopedia.com/definition/25/active-directory)
• https://securitywing.com/active-directory-security/
• For step by step instructions on installing LAPS see this article, How to Install Local
Administrator Password Solution (LAPS)
• https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-
access/privileged-access-workstations
• https://cloudblogs.microsoft.com/enterprisemobility/2018/03/05/azure-ad-and-adfs-
best-practices-defending-against-password-spray-attacks/
• Bruce Schneier. Secrets & Lies - Digital Security in a Networked World. Wiley,
2004.
• Jacqueline Emigh. Getting clever with smart cards. Access Control & Security
Systems, May, 2004.
• Bruno Crispo Melanie R. Rieback and Andrew S. Tanenbaum. Is Your Cat Infected
with a Computer Virus? PhD thesis, VrijeUniversiteit Amsterdam, 2006.
• Argus Solutions. Monitor and manage critical assets to guard against unauthorised
usage or theft. Technical report, Argus Solutions, 2006.
• Symbol Mobility Learning Centre. Rfid key issues. Technical report, Symbol Mobility
Learning Centre, 2004.
• (https://www.techopedia.com/definition/3996/kerberos)
• https://ldap.com/
• https://www.techopedia.com/definition/30222/ticket-granting-ticket-tgt
• https://searchwindowsserver.techtarget.com/definition/Active-Directory-forest-AD-
forest
• https://www.techopedia.com/definition/1326/domain-networking
• https://kb.iu.edu/d/atvu
• http://www.itprotoday.com/windows-8/how-do-i-configure-trust-relationship
• https://www.techopedia.com/definition/3841/modification-mod
• https://www.techopedia.com/definition/30735/object-class
• https://www.techopedia.com/definition/25949/create-retrieve-update-and-delete-crud
• https://en.wikipedia.org/wiki/Group_Policy
• https://blogs.technet.microsoft.com/musings_of_a_technical_tam/2012/02/13/group-
policy-basics-part-1-understanding-the-structure-of-a-group-policy-object/
• https://www.serverwatch.com/tutorials/article.php/1497871/Group-Policy-
Structures.htm
• http://techgenix.com/defaultgpopermissions/
• https://www.lepide.com/blog/top-10-most-important-group-policy-settings-for-
preventing-security-breaches/
• https://www.it-support.com.au/how-to-configure-account-lockout-policy-on-windows-
server/2013/07/
• https://prajwaldesai.com/how-to-disable-usb-devices-using-group-policy/
• https://social.technet.microsoft.com/Forums/en-US/3b5f46b6-9d95-487d-b02d-103a75ae3814/create-group-policy-to-set-screensaver-timeout-in-registry?forum=winserverGP
• https://www.manageengine.com/products/active-directory-audit/help/getting-started/
eventlog-settings-workstation-auditing.html
• https://www.itprotoday.com/windows-8/group-policy-settings-wsus
• http://www.grouppolicy.biz/2010/05/how-to-apply-a-group-policy-object-to-individual-
users-or-computer/
• https://social.technet.microsoft.com/Forums/ie/en-US/b452101f-d2d3-4a6f-96f1-
e101e99107dd/server-2012-r2-lock-screen-timeout-settings?forum=winserver8gen
• https://www.tech-recipes.com/rx/35777/windows-8-using-group-policy-to-prevent-
screen-saver-changes/
• https://www.petenetlive.com/KB/Article/0001283
• https://www.addictivetips.com/windows-tips/enable-account-lockout-policy-set-
threshold-duration-in-windows-8/
• https://www.askvg.com/how-to-enable-group-policy-editor-gpedit-msc-in-windows-
7-home-premium-home-basic-and-starter-editions/
• https://www.petri.com/how-to-create-and-link-a-group-policy-object-in-active-
directory
• https://searchwindowsserver.techtarget.com/tip/Enforcing-Group-Policy-Object-
settings
• https://www.manageengine.com/products/ad-manager/help/gpo-management/view-
gpo-gpolinks-details.html
• http://www.vkernel.ro/blog/add-domain-users-to-local-groups-using-group-policy-
preferences
• https://emeneye.wordpress.com/2016/02/16/group-policy-order-of-precedence-faq/
• https://4sysops.com/archives/understanding-group-policy-order/
• https://www.experts-exchange.com/articles/1876/Understanding-Group-Policy-
Loopback-Processing.html
• https://www.msptechs.com/how-to-configure-fine-grained-password-policies-on-
windows-server-2016/
• https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/deployment/join-a-
computer-to-a-domain
• https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/
windows_integration_guide/winbind
• https://community.rsa.com/servlet/JiveServlet/downloadBody/93418-102-4-231820/
auth_agent20ADFS_admin_guide.pdf
• https://support.microsoft.com/en-in/help/281245/guidelines-for-enabling-smart-card-
logon-with-third-party-certification
UNIT 6
CLOUD COMPUTING
After reading this lesson you will be able to understand
· Cloud Types
Structure
6.1 Overview
6.13 Encryption
6.1. Overview
Cloud computing is a computing paradigm, where a large pool of systems are connected
in private or public networks, to provide dynamically scalable infrastructure for application, data
and file storage. With the advent of this technology, the cost of computation, application hosting,
content storage and delivery is reduced significantly.
The term Cloud refers to a network or the Internet. In other words, we can say that the Cloud is something which is present at a remote location. The Cloud can provide services over a network, i.e., on public networks or on private networks such as a WAN, LAN or VPN. Applications such as e-mail,
web conferencing, customer relationship management (CRM), all run in cloud.
The concept of Cloud Computing came into existence in the 1950s with the implementation of mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones, and from software to services. The following diagram explains the evolution of cloud computing.
We need not install a piece of software on our local PC, and this is how cloud computing overcomes platform dependency issues. Hence, cloud computing makes our business applications mobile and collaborative.
6.2.1. Definition
Cloud computing is a subscription-based service where one can obtain networked storage
space and computer resources.
Cloud computing takes the technology, services, and applications that are similar to those
on the Internet and turns them into a self-service utility. The use of the word “cloud” makes reference to two essential concepts: abstraction and virtualization. The details of system implementation are abstracted away from users: data is stored in locations that are unknown, administration of systems is outsourced to others, and access by users is ubiquitous.
NIST Model
· Deployment Models
· Service Models
· Jericho model
The United States government is a major consumer of computer services and, therefore,
one of the major users of cloud computing networks. They separate cloud computing into deployment models and service models. Cloud computing is a relatively new business model in
the computing world. According to the official NIST definition, “cloud computing is a model for
enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable
computing resources (e.g., networks, servers, storage, applications and services) that can be
rapidly provisioned and released with minimal management effort or service provider interaction.”
The NIST definition lists five essential characteristics of cloud computing: on-demand self-
service, broad network access, resource pooling, rapid elasticity or expansion, and measured
service. It also lists three “service models” (software, platform and infrastructure), and four
“deployment models” (private, community, public and hybrid) that together categorize ways to
deliver cloud services. The definition is intended to serve as a means for broad comparisons of
cloud services and deployment strategies, and to provide a baseline for discussion from what is
cloud computing to how to best use cloud computing.
It is also called the Open Group Jericho Forum model. It categorizes a cloud network based on four dimensions:
• Physical location of the data: Internal (I) / External (E) determines the organization's boundaries.
• Ownership: Proprietary (P) / Open (O) is a measure of not only the technology ownership,
but of interoperability, ease of data transfer, and degree of vendor application lock-in.
The following figure illustrates both NIST model and Cloud Cube Model.
· Reduced Cost
· Increased Storage
· Flexibility
· Resource Availability
· On-Demand Service
· Rapid Elasticity
· Resource Pooling
Table 6.1 lists the benefits and characteristics of cloud computing.
Table 6.1: Benefits of Cloud Computing
Cloud providers offer services that can be grouped into three categories. They also provide deployment models, which can be classified into four categories.
Examples:
· GoogleApps
· Oracle On Demand
· SalesForce.com
· SQL Azure
Examples
· Force.com
· GoGrid CloudCenter
· Google AppEngine
IaaS provides basic storage and computing capabilities as standardized services over
the network. Servers, storage systems, networking equipment, data center space etc. are pooled
and made available to handle workloads. The customer would typically deploy his own software
on the infrastructure. Some common examples are Amazon, GoGrid and 3 Tera.
• Eucalyptus [Elastic Utility Computing Architecture for Linking the Programs To Useful
Systems]
• GoGrid
• FlexiScale
• Linode
• RackSpace Cloud
• Terremark
6.5.3.1. Public cloud: The public cloud infrastructure is available for public use or, alternatively, for use by a large industry group, and is owned by an organization selling cloud services.
6.5.3.2. Private cloud: The private cloud infrastructure is operated for the exclusive use
of an organization. The cloud may be managed by that organization or a third party. Private
clouds may be either on- or off-premises.
6.5.3.3. Hybrid cloud: A hybrid cloud combines multiple clouds (private, community of
public) where those clouds retain their unique identities, but are bound together as a unit. A
hybrid cloud may offer standardized or proprietary access to data and applications, as well as
application portability.
6.5.3.4. Community cloud: A community cloud is one where the cloud has been organized
to serve a common function or purpose.
It may be for one organization or for several organizations, but they share common
concerns such as their mission, policies, security, regulatory compliance needs, and so on. A
community cloud may be managed by the constituent organization(s) or by a third party.
Public clouds are owned and operated by third parties; they deliver superior economies of
scale to customers, as the infrastructure costs are spread among a mix of users, giving each
individual client an attractive low-cost, “Pay-as-you-go” model.
All customers share the same infrastructure pool with limited configuration, security
protections, and availability variances. These are managed and supported by the cloud provider.
One of the advantages of a public cloud is that it may be larger than an enterprise's own cloud,
thus providing the ability to scale seamlessly, on demand. Cloud Integrators can play a vital part
in determining the right cloud path for each organization. A cloud integrator is a product or
service that helps a business negotiate the complexities of cloud migrations. A cloud integrator
service (sometimes referred to as Integration-as-a-Service) is like a systems integrator (SI)
that specializes in cloud computing.
The Public Cloud allows systems and services to be easily accessible to the general public; for example, Google, Amazon and Microsoft offer cloud services via the Internet.
Private clouds are built exclusively for a single enterprise. They aim to address concerns
on data security and offer greater control, which is typically lacking in a public cloud.
The Private Cloud allows systems and services to be accessible within an organization.
The Private Cloud is operated only within a single organization. However, it may be managed internally or by a third party.
The following Figure illustrates benefits of on-premise and externally hosted private cloud:
The following table 6.2. illustrates the on-premise private cloud and externally hosted
private cloud:
On-premise private cloud: Also known as an internal cloud, it is hosted within one's own data center. This model provides a more standardized process and protection, but is limited in size and scalability.
Externally hosted private cloud: This type of private cloud is hosted externally with a cloud provider, where the provider facilitates an exclusive cloud environment with a full guarantee of privacy. This is best suited for enterprises that do not prefer a public cloud due to the sharing of physical resources.
A community cloud allows systems and services to be shared among two or more organizations that have similar cloud requirements. A community cloud in computing is a collaborative effort in which infrastructure is shared between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost-savings potential of cloud computing is realized (Figure 6.9: Community Cloud).
Hybrid Clouds combine both public and private cloud models. With a Hybrid Cloud, service
providers can utilize 3rd party Cloud Providers in a full or partial manner thus increasing the
flexibility of computing. The Hybrid cloud environment is capable of providing on-demand,
externally provisioned scale. The ability to augment a private cloud with the resources of a
public cloud can be used to manage any unexpected surges in workload. The same is illustrated
in Figure.6.10.
Enterprises would need to align their applications, so as to exploit the architecture models
that Cloud Computing offers. The following figure illustrates the benefits of cloud computing
The following figure illustrates the essential characteristics of deployment and service
models.
Cloud computing builds on the following enabling technologies:
· Virtualization
· Grid Computing
· Utility Computing
6.6.1. Virtualization
Virtualization abstracts physical computing resources such as servers, storage and networks into virtual instances that can be pooled, provisioned on demand and shared by multiple workloads; it is the foundation on which cloud platforms pool resources.
6.6.2. Grid Computing
Grid Computing refers to distributed computing in which a group of computers from multiple
locations are connected with each other to achieve common objective. These computer
resources are heterogeneous and geographically dispersed. Grid Computing breaks complex
task into smaller pieces. These smaller pieces are distributed to CPUs that reside within the
grid.
6.6.3. Utility Computing
Utility computing is based on a pay-per-use model. It offers computational resources on demand as a metered service. Cloud computing, grid computing and managed IT services are based on the concept of utility computing. The same is illustrated in the figure.
Front End - refers to the client part of cloud computing system. It consists of interfaces
and applications that are required to access the cloud computing platforms, e.g., Web Browser
Back End - refers to the cloud itself. It consists of all the resources required to provide cloud computing services. It comprises huge data storage, virtual machines, security mechanisms, services, deployment models, servers, etc.
Each of the ends is connected through a network, usually the Internet. The following diagram shows the graphical view of the cloud computing architecture. It is the responsibility of the back end to provide built-in security mechanisms, traffic control and protocols. The server employs certain protocols, known as middleware, which help the connected devices communicate with each other. A minimal sketch of this front end / back end split follows.
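The sketch below assumes a hypothetical back-end endpoint: the front end is any HTTP client (here Python's requests library standing in for a browser), and the back end sits behind a service interface that the network and middleware deliver the call to. The URL and token are invented for illustration.

import requests

BASE_URL = "https://api.example-cloud.test/v1"   # hypothetical back-end endpoint

def fetch_report(report_id: str, token: str) -> dict:
    # Front end: send the request over the network; the back end owns storage,
    # virtual machines and the security mechanism that validates the token.
    response = requests.get(
        f"{BASE_URL}/reports/{report_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example usage (requires a real endpoint and access token):
# print(fetch_report("1234", token="..."))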
The NIST Cloud Computing Reference Architecture consists of five major actors. Each
actor plays a role and performs a set of activities and functions. Among the five actors, cloud
brokers are optional, as cloud consumers may obtain service directly from a cloud provider.
(Table: cloud computing actors and their definitions.)
The major activities of consumers depend on the type of consumer. Table 6.4 lists the consumer types, their major activities, and who the users are.
Cloud Provider: Person, organization or entity responsible for making a service available
to Cloud Consumers.
• The activities of cloud providers are discussed in greater detail from the perspectives
of Service Deployment, Service Orchestration, Cloud Service Management, Security
and Privacy.
The following figure 6.18 illustrates the top-level view of cloud provider. They include
Service Deployment, Service Orchestration, Cloud Service Management, Security and Privacy.
Cloud Auditor: A party that can conduct independent assessment of cloud services, information system operations, performance and security of the cloud implementation.
A cloud auditor can evaluate the services provided by a cloud provider in terms of security controls, privacy impact, performance, etc. For security auditing, a cloud auditor can make an assessment of the security controls in the information system to determine the extent to which the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements for the system.
Auditing is especially important for federal agencies and “agencies should include a
contractual clause enabling third parties to assess security controls of cloud providers”
Cloud Broker: An entity that manages the use, performance and delivery of cloud services and negotiates relationships between Cloud Providers and Cloud Consumers.
As cloud computing evolves, the integration of cloud services can be too complex for
cloud consumers to manage.
6.7.4.1. Service Intermediation: A cloud broker enhances a given service by improving some specific capability and provides the value-added service to cloud consumers.
6.7.4.2. Service Aggregation: A cloud broker combines and integrates multiple services
into one or more new services. The broker will provide data integration and ensure the secure
data movement between cloud consumer and multiple cloud providers.
6.7.4.3. Service Arbitrage: Service arbitrage is similar to service aggregation, with the
difference in that the services being aggregated aren’t fixed. Service arbitrage allows flexible
and opportunistic choices for the broker. For example, the cloud broker can use a credit scoring
service and select the best score from multiple scoring agencies.
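The credit-scoring example can be sketched as follows. This is only an illustration: the three "agencies" are stub functions standing in for calls to different providers' scoring services, and the broker simply picks the best result.

from typing import Callable, Dict, Tuple

# Hypothetical scoring services exposed by three different providers (stubs).
def agency_a(customer_id: str) -> int: return 712
def agency_b(customer_id: str) -> int: return 698
def agency_c(customer_id: str) -> int: return 731

AGENCIES: Dict[str, Callable[[str], int]] = {
    "AgencyA": agency_a, "AgencyB": agency_b, "AgencyC": agency_c,
}

def broker_best_score(customer_id: str) -> Tuple[str, int]:
    # Service arbitrage: the aggregated services are not fixed; query them all
    # and opportunistically select the best (highest) score.
    scores = {name: fn(customer_id) for name, fn in AGENCIES.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

print(broker_best_score("cust-42"))   # e.g. ('AgencyC', 731)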
Cloud Carrier: The intermediary that provides connectivity and transport of cloud services between Cloud Providers and Cloud Consumers. It provides access to cloud consumers through networks, telecommunications and other access devices.
• Example: Network access devices include computers, laptops, mobile phones, mobile internet devices (MIDs), etc. Distribution can be provided by network and telecommunication carriers or a transport agent.
Information security provides protection for information and information systems from unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction.
Based on a study for the Cloud Security Alliance (CSA), there are seven top threats that organizations will face in adopting cloud computing: Abuse and Nefarious Use of Cloud Computing; Insecure Application Programming Interfaces (APIs); Malicious Insiders; Shared Technology Vulnerabilities; Data Loss/Leakage; Account, Service and Traffic Hijacking; and Unknown Risk Profile. Multi-tenancy is recognized as one of the unique implications of security and privacy in cloud computing.
Figure 6.21: Multi-Tenancy
Multi-tenancy means sharing the application software between multiple users who have different needs. Allocating a single instance of an application, i.e., a cloud, to multiple users is called multi-tenancy; each user is called a tenant. Users who need similar types of resources are allocated a single instance of the cloud, so that the cost is shared between the users and access to the instance becomes cost-effective. Multi-tenancy allows users to easily access, maintain, configure and manipulate data stored in a single database running on the same operating system. The data storage mechanism remains the same for all users, who share the same hardware and software resources. In a multitenant architecture, users cannot share or see each other's data; this is how security and privacy are provided. A sketch of tenant-scoped data access follows.
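A common way to enforce this isolation in a shared database is to key every row by a tenant identifier and scope every query to the authenticated tenant. The sketch below uses SQLite purely for illustration; the table and column names are invented.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, invoice_no TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("tenant_a", "INV-1", 100.0), ("tenant_b", "INV-2", 250.0)],
)

def invoices_for(tenant_id: str):
    # Every query is filtered by the caller's tenant, so tenants sharing the same
    # database instance never see each other's rows.
    cur = conn.execute(
        "SELECT invoice_no, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    )
    return cur.fetchall()

print(invoices_for("tenant_a"))   # [('INV-1', 100.0)] only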
Multi-tenancy is the key technique for delivering IaaS, SaaS and PaaS services in both public and private clouds. When people discuss clouds, they often speak about IaaS services. Both private and public cloud architectures go beyond virtualization toward the concept of IT-as-a-Service, with payment or charge-back based on metered usage in the case of private clouds. An IaaS service adds features such as Service Level Agreements (SLAs), Identity and Access Management for security (IDAM), fault tolerance, disaster recovery, dynamic resource allocation and many other important properties. By injecting all these key services at the infrastructure level, clouds become multitenant to a degree. Multi-tenancy in IaaS then extends beyond that layer into the PaaS layer and finally the SaaS (application) layer. The IaaS layer contains servers, storage and networking components; the PaaS layer consists of the platform for applications, such as Java virtual machines, compilers and application servers; and the SaaS layer consists of the applications themselves: business logic, workflow, databases and user interfaces.
6.8.1.1. Virtual Multi-Tenancy: In this model, computing and storage resources are shared among multiple users. Multiple tenants are served from virtual machines that execute concurrently on top of the same computing and storage resources.
· Infrastructure layer
· Application layer
6.8.1.2.1. Data centre layer: This configuration provides the highest level of security, if implemented correctly, with firewalls and access controls to meet business requirements as well as defined security access to the physical location of the infrastructure providing the SaaS. At the data centre layer, multi-tenancy mostly takes the form of a service provider that rents cages to companies that host their hardware, network and software in the same building.
Because all users access their services from the same technology platform, it is much easier to receive automatic and frequent updates. There is no longer a need to pay for report customizations or to add new functionalities.
Multi-tenancy provides companies of all sizes the ability to reside in the same infrastructure
and data centre.
Before, when we wanted to roll-out a new update, it was a lengthy process because we
had to code the change separately for each client instance to ensure that it was compatible with
their customizations, perform QA, and then put the change into production. With more than 100
customers, it was a time-consuming task for our support team. Now with our multi-tenant
environment, because every customer’s instance has the same base code, the roll-out of new
releases will be very seamless and provide faster access to innovative features to manage IT
and communication expenses.
This capability provides our customers with the ability to meet their requirements and
communication styles to manage all IT and communication expenses.
· Security
· Capacity
· Flexibility
6.8.3.1. Security
There is also the threat of hackers: no encryption is completely secure against an attacker with the right knowledge. A hacker who breaks the encryption of a multitenant database will be able to steal the data of the hundreds of businesses whose data is stored on it.
6.8.3.2. Capacity
Database administrators need the tools and the knowledge to understand which tenant should be deployed on which network in order to maximise capacity and reduce costs.
6.8.3.3. Availability
When failures occur or when certain services generate abnormal loads, service delivery can be interrupted – yet business clients will often request high availability. Therefore, monitoring the service delivery and its availability is critical to ensure that the service is properly delivered.
6.8.3.4. Flexibility
Using the multi-tenancy characteristics of cloud computing, customers can require that, for example, French customer data be stored on servers located inside France, German customer data inside Germany, and so on.
The benefits outweigh the drawbacks and the model is worth exploring. Some common
challenges are:
· Data Protection
· Management Capabilities
Data security is a crucial element that warrants scrutiny. Enterprises are reluctant to rely on a vendor's assurance of business data security. They fear losing data to competitors and compromising the data confidentiality of consumers. In many instances, the actual storage location is not
disclosed, adding onto the security concerns of enterprises. In the existing models, firewalls
across data centres (owned by enterprises) protect this sensitive information. In the cloud model,
Service providers are responsible for maintaining data security and enterprises would have to
rely on them.
All business applications have Service Level Agreements (SLAs) that are stringently followed. Operational teams play a key role in the management of service level agreements and runtime governance of applications. In production environments, operational teams support services such as:
· Data Replication
· Disaster recovery
If any of the above mentioned services is under-served by a cloud provider, the damage
& impact could be severe.
Despite there being multiple cloud providers, the management of platform and infrastructure
is still in its infancy. Features like auto-scaling, for example, are a crucial requirement for many enterprises. There is huge potential to improve on the scalability and load-balancing features
provided today.
· distributed services
· procurement
· service negotiation
· A list of possible uses of SRAs, including their value for SLAs, for certification of
services, monitoring, testing, and others.
· A way to evaluate the security of cloud systems, which can have more general
application.
To prevent the spread of infected VMIs (virtual machine images), they are scanned and filtered before being stored in the VMI Repository. The following figure shows the resulting secure
IaaS architecture pattern. In this model, the subsystem Authenticator is an instance of the
Authenticator pattern and enables the Cloud Controller to authenticate Cloud Consumers/
Administrators. Instances of the Security Logger/Auditor pattern are used to keep track of any
access to cloud resources such as VMs, VMMs, and VMIs. The Reference Monitor enforces
authorization rights defined by the RBAC instances. The Filter scans created virtual machines
in order to remove malicious code. At the SaaS level the responsibility for security is in the hands of the corresponding Service Provider (SP); in the case of a travel application, for example, it is necessary to provide authentication, authorization, encryption, etc., to clients. These security services must
be supported at the IaaS level, including security administration. The same situation
occurs at the PaaS level where the corresponding SP must provide control of the components
at this level.
Users in the Cloud Computing environment have to complete the user authentication process required by the service provider whenever they use a new Cloud service. Generally, a user registers by providing personal information, and the service provider issues the user an ID (identification) and an authentication method once registration is done. The user then uses the ID and the authentication method each time he or she accesses a Cloud Computing service. Unfortunately, there is a possibility that the characteristics and safety of the authentication method can be compromised by an attack during the authentication process, which could cause severe damage. Hence, user authentication for Cloud Computing must provide not only security but also interoperability.
Access to AWS KMS requires credentials that AWS can use to authenticate the requests.
The credentials must have permissions to access AWS resources, such as AWS KMS customer
master keys (CMKs).
The following sections provide details about how one can use AWS Identity and Access
Management (IAM) and AWS KMS to help secure the resources by controlling who can access
them.
· Authentication
· Access Control
6.11.3.1. Authentication
AWS account root user – When signing up for AWS, one provides an email address and password for the AWS account. These are the root credentials, and they provide complete access
to all of the AWS resources. For security reasons, it is recommended that the root credentials
are used only to create an administrator user, which is an IAM user with full permissions to the
AWS account. Then, one can use this administrator user to create other IAM users and roles
with limited permissions.
IAM user – An IAM user is an identity within the AWS account that has specific permissions
(for example, to use a KMS CMK). One can use an IAM user name and password to sign in to
secure AWS webpages like the AWS Management Console, AWS Discussion Forums, or
the AWS Support Center.
In addition to a user name and password, one can also create access keys for each user
to enable the user to access AWS services programmatically, through one of the AWS SDKs or
the command line tools. The SDKs and command line tools use the access keys to
cryptographically sign API requests. If the AWS tools are not used, one must sign API requests
oneself.
IAM role – An IAM role is another IAM identity one can create in their account that has
specific permissions. It is similar to an IAM user, but it is not associated with a specific person.
An IAM role enables one to obtain temporary access keys to access AWS services and resources
programmatically.
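A hedged sketch of obtaining such temporary access keys with boto3 and AWS STS follows; the role ARN is a placeholder, and the caller is assumed to be already authenticated and permitted to assume the role.

import boto3

sts = boto3.client("sts")

# Assume a role and receive short-lived access keys (the ARN is a placeholder).
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ExampleKmsUserRole",
    RoleSessionName="kms-demo-session",
)
creds = assumed["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# Use the temporary credentials for subsequent, permission-scoped API calls.
kms = boto3.client(
    "kms",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print("Temporary credentials expire at:", creds["Expiration"])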
Federated user access – Instead of creating an IAM user, one can use pre-existing user
identities from AWS Directory Service, the enterprise user directory, or a web identity provider.
These are known as federated users. Federated users use IAM roles through an identity provider.
Cross-account access – One can use an IAM role in their AWS account to allow another
AWS account permissions to access their account’s resources.
AWS service access – One can use an IAM role in their account to allow an AWS service
permissions to access their account’s resources. For example, one can create a role that allows
Amazon Redshift to access an S3 bucket on one's behalf and then load data stored in the S3
bucket into an Amazon Redshift cluster.
One can have valid credentials to authenticate requests, but one also needs permissions to make AWS KMS API requests to create, manage, or use AWS KMS resources. For example, one must have permissions to create a KMS CMK, to manage the CMK, and to use the CMK for cryptographic operations (such as encryption and decryption), and so on.
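Assuming the caller has been granted kms:Encrypt and kms:Decrypt on a CMK, the basic cryptographic operations look roughly like the sketch below; the key alias is a placeholder.

import boto3

kms = boto3.client("kms", region_name="us-east-1")
KEY_ID = "alias/example-app-key"   # placeholder CMK alias

# Encrypt a small secret under the customer master key (requires kms:Encrypt).
ciphertext = kms.encrypt(KeyId=KEY_ID, Plaintext=b"database-password")["CiphertextBlob"]

# Decrypting requires kms:Decrypt on the same CMK; the key material never leaves KMS.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"database-password"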
Virtualization security is the collective measures, procedures and processes that ensure
the protection of a virtualization infrastructure / environment.
· Securing virtual machines, virtual networks and other virtual appliances against attacks and vulnerabilities that surface from the underlying physical device.
Many IT professionals worry about virtual environment security, concerned that malicious
code and malware may spread between workloads. Virtualization abstracts applications from
the physical server hardware running underneath, which allows the servers to run multiple
workloads simultaneously and share some system resources. Though the security threats are
very real, modern feature sets now offer better protection, and the type of hypervisor one chooses
to deploy can also make a big difference. Admins should understand hypervisor vulnerabilities
and the current concepts used to maintain security on virtual servers, as well as ways to minimize
the hypervisor’s system footprint and thus the potential attack surface.
Given that Type 1 and Type 2 hypervisors deploy in the environment differently and interact
differently with their infrastructure components, it follows that one would also secure each
hypervisor using different techniques. Moreover, it’s often easier to code Type 1, or bare-metal,
hypervisors, and they also provide better native VM security than Type 2 hypervisors, which
must share data between the host and guest OSes.
Hypervisors should also be kept patched and updated to the latest versions. Just be sure any software installed includes digital signatures to ensure malware doesn't make its way into the system.
Firewall and Active Directory integration, auditing and software acceptance features are
just some of the ways today’s hypervisors offer enhanced security. But these features will only
benefit the infrastructure when deployed correctly. Installing only essential system roles, for
example, will minimize the OS footprint and attack surface. In addition, strong logon credentials
will help ensure that admin and management tools remain secure. Isolating management traffic
also minimizes the potential for hackers to access important data.
• Auditing
• Data integrity
• Privacy
• Recovery
• Regulatory compliance
The risks in any cloud deployment depend upon the particular cloud service model chosen and the type of cloud on which one deploys the applications. In order to evaluate the risks, one needs to perform the following analysis:
· Determine the sensitivity of the resource to risk. Risks that need to be evaluated are loss of privacy, unauthorized access by others, loss of data, and interruptions in availability.
· Determine the risk associated with the particular cloud type for a resource. Cloud types include public, private (both external and internal), hybrid, and shared community types. With each type, one needs to consider where data and functionality will be maintained.
· Take into account the particular cloud service model that one will be using. Different models such as IaaS, SaaS, and PaaS require their customers to be responsible for security at different levels of the service stack.
· If one has selected a particular cloud service provider, one needs to evaluate its system to understand how data is transferred, where it is stored, and how to move data both in and out of the cloud.
· One may want to consider building a flowchart that shows the overall mechanism of the system one is intending to use or is currently using.
Many vendors maintain a security page where they list their various resources, certifications, and credentials. One of the more developed offerings is the AWS Security Center, where one can download backgrounders, white papers, and case studies related to Amazon Web Services' security controls and mechanisms.
In order to concisely discuss security in cloud computing, one needs to define the particular model of cloud computing that applies. This nomenclature provides a framework for understanding what security is already built into the system, who has responsibility for a particular security mechanism, and where the boundary between the responsibility of the service provider is separate from the responsibility of the customer. Deployment models are cloud types: community, hybrid, private, and public clouds. Service models follow the SPI Model for three forms of service delivery: Software, Platform, and Infrastructure as a Service. In the NIST model, as one may recall, it was not required that a cloud use virtualization to pool resources, nor did that model require that a cloud support multitenancy.
It is just these factors that make security such a complicated proposition in cloud computing. The CSA is an industry working group that studies security issues in cloud computing and offers recommendations to its members.
• Datacenter operations
• Application security
• Virtualization
Security boundaries are usually defined by a set of systems that are under a single administrative control. These boundaries occur at various levels, from smaller to larger, and vulnerabilities can become apparent as data “crosses” each one. Security is checked only at application boundaries. That is, for two components in the same application, when one component calls the other, no security check will be done. However, if two applications share the same process and a component in one calls a component in the other, a security check is done because an application boundary is crossed. Likewise, if two applications reside in different server processes and a component in the first application calls a component in the second application, a security check is done.
Therefore, if one has two components and one wants security checks to be done when one calls the other, one needs to put the components in separate COM+ applications. Because COM+ library applications are hosted by other processes, there is a security boundary between the library application and the hosting process. Additionally, the library application doesn't control process-level security, which affects how one needs to configure security for it. Determining whether a security check must be carried out on a call into a component is based on the security property on the object context created when the configured component is instantiated.
For a COM+ server application, one has the choice of enforcing access checks either at the component level or at the process level. When one selects component-level access checking, one enables fine-grained role assignments. One can assign roles to components, interfaces, and methods and achieve an articulated authorization policy. This will be the standard configuration for applications using role-based security. For COM+ library applications, one must select component-level security if one wants to use roles. Library applications cannot use process-level security.
One should select component-level access checking if one is using programmatic role-based security. Security call context information is available only when component-level security is enabled. Additionally, when one selects component-level access checking, the security property will be included on the object context. This means that security configuration can play a role in how the object is activated.
Process-level checks apply only to the application boundary. That is, the roles that one has defined for the whole COM+ application will determine who is granted access to any resource within the application. No finer-grained role assignments apply. Essentially, the roles are used to create a security descriptor against which any call into the application's components is validated. In this case, one would not want to construct a detailed authorization policy with multiple roles. The application will use a single security descriptor. For COM+ library applications, one would not select process-level access checks. The library application will run hosted in the client's process and hence will not control process-level security. With process-level access checks enabled, security call context information is not available. This means that one cannot do programmatic security when using only process-level security. Additionally, the security property will not be included on the object context. This means that when using only process-level access checks, security configuration will never play a role in how the object is activated.
Because Internet links that connect sites and users to service providers are involved, along with prevailing local Internet topologies between the edges of that global network and local elements of its core, this geography tends to be more compressed and to be subject to strange or interesting hops between locations. Of course, this reflects the peering partners at various points of presence for SONET and other high-speed infrastructures, and doesn't always reflect the same kind of geographical proximity one might see on a country or continental map. Nevertheless, keeping track of where threats and vulnerabilities are occurring is incredibly useful. By following lines of “Internet topography”, spikes in detection (which indicate upward trends in proliferation, or frequency of attack) are useful in prioritizing threats based on location. For one thing, networks that are geographically nearby in the Internet topography are more likely to get exposed to such threats, so it makes sense to use this kind of proximity to escalate risk assessments of exposure. For another thing, traffic patterns for attacks and threats tend to follow other typical traffic patterns, so increasing threat or vulnerability profiles can also help to drive all kinds of predictive analytics as well.
It's always interesting to look at real-time threat maps or network “weather reports” from various sources to see where issues may be cropping up and how fast they're spreading. Akamai's Real-Time Web Monitor provides an excellent and visually interesting portrayal of this kind of monitoring and analysis at work. In the following screen capture, for example, we see a handful of US states where attacks have been detected in the last 24 hours. In general, threat, vulnerability and attack mapping work well because such data makes for intelligible and compelling visual displays. Human viewers are familiar with maps, and quickly learn how to develop an intuitive sense for threat priority or urgency based on proximity and the nature of the threats involved. That's why so many security service providers use maps to help inform security administrators about safety and security in their neighbourhoods, and around the planet.
Data is one of the most valuable assets a business has at its disposal, covering anything from financial transactions to important customer and prospect details. Using data effectively, and keeping it secure, is therefore essential; the following practices help:
· Know exactly what you have and where you keep it: Understanding what data the organisation has, where it is and who is responsible for it is fundamental to building a good data security strategy. Constructing and maintaining a data asset log will ensure that any preventative measures introduced will refer to and include all the relevant data assets.
· Train the troops: Data privacy and security are a key part of the new general data protection regulation (GDPR), so it is crucial to ensure the staff are aware of their importance. The most common and destructive mistakes are due to human error. For example, the loss or theft of a USB stick or laptop containing personal information about the business could seriously damage the organisation's reputation, as well as lead to severe financial penalties. It is vital that organisations consider an engaging staff training programme to ensure all employees are aware of the valuable asset they are dealing with and the need to manage it securely.
· Maintain a list of employees with access to sensitive data – then minimise it: Sadly, the most likely cause of a data breach is the staff. Maintaining controls over who can access data and what data they can obtain is extremely important. Minimise their access privileges to just the data they need.
· Additionally, data watermarking will help prevent malicious data theft by staff and ensure one can identify the source in the event of a data breach. It works by allowing one to add unique tracking records (known as “seeds”) to the database and then monitor how the data is being used – even when it has moved outside the organisation's direct control. The service works for email, physical mail, landline and mobile telephone calls and is designed to build a detailed picture of the real use of the data.
· Carry out a data risk assessment: One should undertake regular risk assessments to identify any potential dangers to the organisation's data. This should review all the threats one can identify – everything from an online data breach to more physical threats such as power cuts. This will let one identify any weak points in the organisation's current data security system, and from here one can formulate a plan of how to remedy this, and prioritise actions to reduce the risk of an expensive data breach.
· Install trustworthy virus/malware protection software and run regular scans: One of the most important measures for safeguarding data is also one of the most straightforward. Using active prevention and regular scans, one can minimise the threat of a data leakage through hackers or malicious malware, and help ensure the data does not fall into the wrong hands. There is no single software that is absolutely flawless in keeping out cyber criminals, but good security software will go a long way to help keep the data secure.
· With the GDPR still set to come into force in the UK despite the results of the recent referendum, it is vital for companies to start re-evaluating their systems now. Businesses need to plan how to minimise the risks, keep data secure and put the necessary processes in place should they need to deal with any of the data security threats.
The problem with data that is stored in the cloud is that it can be located anywhere in the cloud service provider's system: in another datacenter, another state or province, and in many cases even in another country. With other types of system architectures, such as client/server, one could count on a firewall to serve as the network's security perimeter; cloud computing has no physical system that serves this purpose. Therefore, to protect the cloud storage assets, one wants to find a way to isolate data from direct client access.
One approach to isolating storage in the cloud from direct client access is to create layered access to the data. In one scheme, two services are created: a broker with full access to storage but no access to the client, and a proxy with no access to storage but access to both the client and the broker. The location of the proxy and the broker is not important (they can be local or in the cloud); what is important is that these two services are in the direct data path between the client and the data stored in the cloud. Under this system, when a client makes a request for data, here's what happens:
· The request goes to the external service interface (or endpoint) of the proxy, which has only a partial trust.
· The proxy, using its internal interface, forwards the request to the broker.
· The broker requests the data from the cloud storage system.
· The storage system returns the data to the broker, which returns it to the proxy.
· The proxy completes the response by sending the data requested to the client.
This design relies on the proxy service to impose some rules that allow it to safely request data that is appropriate to that particular client, based on the client's identity, and relay that request to the broker. The broker does not need full access to the cloud storage, but it may be configured to grant READ and QUERY operations while not allowing APPEND or DELETE. The proxy has a limited trust role, while the broker can run with higher privileges or even as native code. The use of multiple encryption keys can further separate the proxy service from the storage account. If one uses two separate keys to create two different data zones – one for the untrusted communication between the proxy and broker services, and another, trusted zone between the broker and the cloud storage – one creates further separation between the different service roles. In the multi-key solution, one has not only eliminated all internal service endpoints, but also the need to have the proxy service run at a reduced trust level. A sketch of this layering follows.
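The layering can be sketched in a few lines of Python. The class and method names are the author's own; the point is simply that the client only ever talks to the proxy, and only the broker holds a storage credential.

class CloudStorage:
    # Stands in for the cloud storage system; real code would call the provider's API.
    def __init__(self):
        self._objects = {"alice/report.txt": b"quarterly figures"}
    def read(self, key: str, credential: str) -> bytes:
        assert credential == "broker-secret"   # only the broker holds this credential
        return self._objects[key]

class Broker:
    # Full access to storage, no direct exposure to clients; READ/QUERY only.
    def __init__(self, storage: CloudStorage):
        self._storage = storage
    def read(self, key: str) -> bytes:
        return self._storage.read(key, credential="broker-secret")

class Proxy:
    # Client-facing endpoint with partial trust; applies per-client access rules.
    def __init__(self, broker: Broker):
        self._broker = broker
    def get(self, client_id: str, key: str) -> bytes:
        if not key.startswith(f"{client_id}/"):   # simple identity-based rule
            raise PermissionError("a client may only read its own data")
        return self._broker.read(key)

proxy = Proxy(Broker(CloudStorage()))
print(proxy.get("alice", "alice/report.txt"))    # allowed
# proxy.get("alice", "bob/secret.txt")           # would raise PermissionError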
Some cloud service providers negotiate, as part of their Service Level Agreements, to contractually store and process data in locations that are predetermined by their contract. Not all do. If one can get the commitment for a specific data storage site, then one should also make sure the cloud vendor is under contract to conform to local privacy laws. Because data stored in the cloud usually comes from multiple tenants, each vendor has its own unique method for segregating one customer's data from another. It is important to have some understanding of how the specific service provider maintains data segregation.
Another question to ask a cloud storage provider is who is provided privileged access to storage. The more one knows about how the vendor hires its IT staff and the security mechanisms put into place to protect storage, the better. Most cloud service providers store data in an encrypted form. While encryption is important and effective, it does present its own set of problems. When there is a problem with encrypted data, the result is that the data may not be recoverable. It is worth considering what type of encryption the cloud provider uses and to check that the system has been planned and tested by security experts.
6.13 Encryption
Strong encryption technology is a core technology for protecting data in transit to and from the cloud as well as data stored in the cloud. It is or will be required by law. The goal of encrypted cloud storage is to create a virtual private storage system that maintains confidentiality and data integrity while maintaining the benefits of cloud storage: ubiquitous, reliable, shared data storage. Encryption should separate stored data (data at rest) from data in transit. Depending upon the particular cloud provider, one can create multiple accounts with different keys, as seen in the example with the Windows Azure Platform in the previous section. Microsoft allows up to five security accounts per client, and one can use these different accounts to create different zones. On Amazon Web Services, one can create multiple keys and rotate those keys during different sessions. Although encryption protects the data from unauthorized access, it does nothing to prevent data loss. Indeed, a common means of losing encrypted data is to lose the keys that provide access to the data. Therefore, one needs to approach key management seriously. Keys should have a defined lifecycle. Among the schemes used to protect keys are the creation of secure key stores that have restricted role-based access, automated key store backup, and recovery techniques. It is a good idea to separate key management from the cloud provider that hosts the data. One standard for interoperable cloud-based key management is the OASIS Key Management Interoperability Protocol (KMIP). IEEE 1619.3 also covers both storage encryption and key management for shared storage. A small illustrative sketch follows.
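A minimal sketch of the "separate keys for separate zones" idea, using the Fernet primitive from the Python cryptography package. The zone names are illustrative; in practice the keys would live in a key store managed separately from the cloud provider that hosts the data.

from cryptography.fernet import Fernet

# Separate keys for separate trust zones, stored outside the cloud that holds the data.
key_at_rest = Fernet.generate_key()   # e.g. broker <-> cloud storage zone
key_transit = Fernet.generate_key()   # e.g. proxy <-> broker zone

storage_zone = Fernet(key_at_rest)
transit_zone = Fernet(key_transit)

record = b"customer ledger entry"
stored_blob = storage_zone.encrypt(record)      # data at rest
in_flight = transit_zone.encrypt(stored_blob)   # re-wrapped for transit

# Losing either key makes that zone unrecoverable, hence the need for key lifecycle
# management, secure key stores, backup and recovery.
assert storage_zone.decrypt(transit_zone.decrypt(in_flight)) == record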
Logging and auditing are unfortunately among the weaker aspects of early cloud computing service offerings. Cloud service providers often have proprietary log formats that one needs to be aware of. Whatever monitoring and analysis tools one uses need to be aware of these logs and able to work with them. Often, providers offer monitoring tools of their own, many in the form of a dashboard with the potential to customize the information one sees, either through the interface or programmatically using the vendor's API. One wants to make full use of those built-in services. Because cloud services are both multitenant and multisite operations, the logging activity and data for different clients may not only be co-located, they may also be moving across a landscape of different hosts and sites. One cannot simply expect that an investigation will be provided with the necessary information at the time of discovery unless it is part of the Service Level Agreement. Even an SLA with the appropriate obligations contained in it may not be enough to guarantee one will get the information one needs when the time comes. It is wise to determine whether the cloud service provider has been able to successfully support investigations in the past. A hedged example of programmatic audit-log access follows.
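On AWS, for instance, management-plane activity is recorded by CloudTrail and can be queried programmatically. The sketch below is a hedged example; the event name and time window are arbitrary choices, and appropriate CloudTrail permissions are assumed.

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Pull the last 24 hours of console sign-in events as a simple audit check.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])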
When engaging a cloud service provider, it is important to determine:
· Which regulations apply to the cloud service provider and where the demarcation line falls for responsibilities.
· How the cloud service provider will support the need for information associated with regulation.
· How to work with the regulator to provide the information necessary regardless of who had the responsibility to collect the data.
Traditional service providers are much more likely to be the subject of security certifications and external audits of their facilities and procedures than cloud service providers. That makes the willingness of a cloud service provider to subject its service to regulatory compliance scrutiny an important factor in the selection of that provider over another. In the case of a cloud service provider who shows reluctance towards, or limits, the scrutiny of its operations, it is probably wise to use the service in ways that limit the exposure to risk. For example, although encrypting stored data is always a good policy, one also might want to consider not storing any sensitive information on that provider's system.
As it stands now, clients must guarantee their own regulatory compliance, even when their data is in the care of the service provider. One must ensure that the data is secure and that its integrity has not been compromised. When multiple regulatory entities are involved, as there surely are between site locations in different countries, the burden of satisfying the laws of those governments remains the client's responsibility.
For any company with clients in multiple countries, the burden of regulatory compliance is onerous. While organizations such as the EEC (European Economic Community) or Common Market provide some relief for European regulation, countries such as the United States, Japan, China, and others each have their own sets of requirements. This makes regulatory compliance one of the most actively developing and important areas of cloud computing technology. This situation is likely to change. On March 1, 2010, Massachusetts passed a law that requires companies that provide sensitive personal information on Massachusetts residents to encrypt data transmitted and stored on their systems. Businesses are required to limit the amount of personal data collected, monitor data usage, keep a data inventory, and be able to present a security plan on how they will keep the data safe. The steps require that companies verify that any third-party services they use conform to these requirements and that there be language in all SLAs that enforce these protections.
The sections that follow describe some of the different security aspects of identity and the related concept of “presence.” For this conversation, one can consider presence to be the mapping of an authenticated identity to a known location. Presence is important in cloud computing because it adds context that can modify services and service delivery.
The first protocol used to present identity-based claims in cloud computing is OpenID. It works as follows:
· The end-user uses a program like a browser, called a user agent, to enter an OpenID identifier, which is in the form of a URL or XRI. An OpenID might take the form of name.openid.provider.org.
· The OpenID is presented to a service that provides access to the resource that is desired.
· An entity called a relying party queries the OpenID identity provider to authenticate the veracity of the OpenID credentials.
· The authentication is sent back to the relying party from the identity provider, and access is either provided or denied.
The second protocol used to present identity-based claims in cloud computing is a set of authorization markup languages, notably XACML and SAML. SAML is gaining growing acceptance among cloud service providers. It is an OASIS standard and an XML standard for passing authentication and authorization information between an identity provider and the service provider. SAML is a complementary mechanism to OpenID and is used to create SSO (single sign-on) systems. Taken as a unit, OpenID and SAML are being positioned to be the standard authentication mechanism for clients accessing cloud services.
OAuth is particularly important for services such as mashups that draw information from two or more data services. An open standard, OAuth (http://oauth.net/) provides a token service that can be used to present validated access to resources. OAuth is similar to OpenID, but provides a different mechanism for shared access. The use of OAuth tokens allows clients to present credentials that contain no account information (user ID or password) to a cloud service. The token comes with a defined period after which it can no longer be used. Several important cloud service providers have begun to make OAuth APIs available based on the OAuth 2.0 standard, most notably Facebook's Graph API and the Google Data API. A hedged sketch of the token flow follows.
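A hedged sketch of the OAuth 2.0 client-credentials flow: exchange a client ID and secret for a short-lived bearer token, then present only the token to the service. The token endpoint, API URL and credentials below are placeholders, not real provider values.

import requests

TOKEN_URL = "https://auth.example-cloud.test/oauth2/token"   # placeholder endpoint

# Exchange client credentials for an access token; no user ID or password is sent
# to the resource service itself.
token_resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials",
          "client_id": "my-app-id",           # placeholder
          "client_secret": "my-app-secret"},  # placeholder
    timeout=10,
)
access_token = token_resp.json()["access_token"]

# Present only the bearer token; it expires after a defined period.
api_resp = requests.get(
    "https://api.example-cloud.test/v1/profile",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(api_resp.status_code)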
The Data Portability Project is an industry working group that promotes data interoperability and open identity standards between applications, and the group's work touches on a number of the emerging standards mentioned in this section. A number of vendors have created server products, such as Identity and Access Managers (IAMs), to support these various standards.
Summary
· Cloud computing is a practical approach to experience direct cost benefits and it
has the potential to transform a data center from a capital-intensive set up to a
variable-priced environment. The idea of cloud computing is based on a very fundamental principle of reusability of IT capabilities.
· Cloud computing is defined as, a pool of abstracted, highly scalable, and managed
compute infrastructure capable of hosting end-customer applications and billed by
consumption.
· Two types of cloud models are the NIST Model and the Cloud Cube Model.
· The NIST Model comprises two groupings, namely Deployment Models and Service Models.
· Different deployment models are Public Cloud, Private Cloud, Hybrid Cloud and
Community Cloud.
· Cloud Auditor - A party that can conduct independent assessment of cloud services,
information system operations, performance and security of the cloud
implementation.
· Cloud Carrier - The intermediary that provides connectivity and transport of cloud services between Cloud Providers and Cloud Consumers; it provides access to cloud consumers through network, telecommunication and other access devices.
Keywords
· Cloud computing
· Multi-tenancy
· NIST Model
· Cloud Broker
· Cloud Auditor
· Cloud Provider
· Cloud Consumer
Reference
· http://www.rgcetpdy.ac.in/Notes/IT/IV%20YEAR/ELECTIVE-CLOUD%20COMPUTING/Unit%204.pdf
· Jay Heiser and Mark Nicolett, Gartner Group, http://www.gartner.com/DisplayDocument?id=685308
· https://blog.kennasecurity.com/2013/02/the-role-of-security-mapping-in-vulnerability-
management/
· https://www.computerweekly.com/opinion/Six-essential-processes-for-keeping-data-
secure
· http://dataportability.org/
Section-A
Section-B
Section-C
2. What is a GPO and what is its structure? How will you configure the following settings?
b. Password settings
c. Account settings