Haramaya COA Assignment 2022


HARAMAYA UNIVERSITY COLLEGE OF COMPUTING AND INFORMATICS

DEPARTMENT OF COMPUTER SCIENCE
COA ASSIGNMENT
Introduction to RAID – Redundant Array of Independent Disks Technology

• The storage used in laptop computers is a single disk, but servers, data centers, and cloud computing systems use multiple disks, which is where RAID technology comes in.
• RAID saves data across multiple disks.
• A RAID array is a set of multiple hard disks that the system treats as a single logical unit.
• Just as additional memory in the form of a cache can improve system performance, additional disks can also improve system performance.
• Since there are many disks, separate and multiple I/O requests can be handled in parallel if the required data is on separate disks.
• RAID is a way of storing the same data in different places on multiple hard disks or solid-state drives to protect data in the case of a drive failure.
• There are different RAID levels, however, and not all of them have the goal of providing redundancy.
How RAID works

• RAID is a technique that uses a combination of multiple disks instead of a single disk for increased performance, data redundancy, or both.
• Key evaluation points for a RAID system:
• Reliability: how many disk faults the system can tolerate.
• Availability: what fraction of the total time the system is in uptime mode.
• Performance: how good the response time is.
• Capacity: given a set of N disks, each with B blocks, how much useful capacity is available to the user.
• Arrays of small and inexpensive disks:
• Increase potential throughput by having many disk drives.
• Data is spread over multiple disks.
• Multiple accesses are made to several disks at a time.
• Reliability is lower than that of a single disk.
• But availability can be improved by adding redundant disks, since lost information can be reconstructed from the redundant information.
• MTTR: the mean time to repair a disk is on the order of hours.
• MTTF: the mean time to failure of a disk is far longer, typically measured in years, which is why a Redundant Array of Inexpensive Disks can stay available while a failed disk is replaced.
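The availability point above can be made concrete with a small sketch; the MTTF and MTTR figures below are illustrative assumptions, not numbers from the assignment:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Fraction of total time the system is up: MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Example: a hypothetical disk with an MTTF of 100,000 hours and an MTTR of 10 hours.
print(round(availability(100_000, 10), 5))  # → 0.9999
```

Because MTTF dwarfs MTTR, even a single-disk system spends almost all of its time up; redundancy pushes the array's availability higher still.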
Different RAID levels

• RAID-0: blocks are "striped" across disks


• In a RAID 0 system, data is split up into blocks.
• These blocks are written across all of the drives simultaneously.
• Using multiple disks at the same time offers better I/O performance.
• It uses all of the disks' storage and is easy to implement due to its simplicity.
• However, RAID 0 is not fault-tolerant: if any one drive fails, the data of the whole array is lost.
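The striping idea can be sketched as a round-robin distribution of blocks (the block labels and disk count are made-up examples):

```python
def stripe(blocks, n_disks):
    """Distribute data blocks round-robin across n_disks, as RAID 0 does."""
    disks = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)
    return disks

# Six logical blocks spread over three drives; reads and writes can then
# proceed on all three drives in parallel.
print(stripe(["B0", "B1", "B2", "B3", "B4", "B5"], 3))
# → [['B0', 'B3'], ['B1', 'B4'], ['B2', 'B5']]
```

Note that no block is stored twice, which is exactly why the loss of one drive loses the array.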
RAID Level 1 – Mirroring

• Data is duplicated onto an additional disk drive.


• This system provides excellent read speed, as the data is available on an additional disk drive.
• It also provides better fault tolerance, as there is a backup of the data.
• If one disk drive fails, the mirror disk drive is used instead.
• However, the biggest disadvantage of RAID 1 is that it requires twice as much storage for the same usable capacity.
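A minimal sketch of the mirroring behaviour (drive lists and the `None`-marks-failed convention are assumptions for illustration):

```python
def mirror_write(block, disks):
    """Write the same block to every drive in the mirror set (RAID 1)."""
    for disk in disks:
        disk.append(block)

def mirror_read(index, disks):
    """Read from the first drive that is still alive (None marks a failed drive)."""
    for disk in disks:
        if disk is not None:
            return disk[index]
    raise IOError("all mirrors have failed")

primary, mirror = [], []
mirror_write("data0", [primary, mirror])
# Simulate a failure of the primary drive: the mirror still serves the read.
print(mirror_read(0, [None, mirror]))  # → data0
```

Every write costs two physical writes, which is the "twice the resources" trade-off in code form.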

RAID Level 2
• RAID 2 stripes data at the bit level and uses a Hamming code for error correction.
• The disks are synchronized by the controller to spin at the same angular orientation, so the array generally cannot service multiple requests simultaneously.
• Extremely high data transfer rates are possible
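The Hamming-code idea behind RAID 2 can be sketched with the classic Hamming(7,4) code; this is a generic illustration of single-bit error correction, not the exact code layout a RAID 2 controller uses:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    """Locate and flip a single corrupted bit using the parity syndrome."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode(1, 0, 1, 1)
corrupted = list(word)
corrupted[2] ^= 1                    # flip one bit, as a failing drive might
print(hamming74_correct(corrupted) == word)  # → True
```

In RAID 2 each bit of the codeword lives on its own disk, so a whole-disk failure looks like a single-bit error per codeword and can be corrected the same way.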
RAID Level 3

• Uses byte-level striping with a dedicated parity disk.


• Data striping with byte interleaving and parity checking.
• RAID 3 is similar to Level 2 but more reliable.
• Data striping is done across the drives, one byte at a time.
• Typically four or five drives are used, providing very high data transfer rates.
• One drive is dedicated to storing parity information.
• The failure of a single drive can be compensated for by using the parity drive to reconstruct the failed drive's contents.
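The parity-based reconstruction described above boils down to XOR; the byte values below are arbitrary examples:

```python
from functools import reduce

def parity(stripe_bytes):
    """XOR all data bytes in a stripe to produce the parity byte."""
    return reduce(lambda a, b: a ^ b, stripe_bytes, 0)

data = [0x4F, 0x12, 0xA9, 0x3C]          # one byte per data drive
p = parity(data)

# Drive 2 fails; XOR-ing the parity with the surviving bytes rebuilds it.
rebuilt = p ^ data[0] ^ data[1] ^ data[3]
print(rebuilt == data[2])  # → True
```

XOR-ing any stripe together with its parity always yields zero, which is why one missing value can always be recovered from the rest.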
RAID Level 4

• Consists of block-level striping.


• RAID 4 is a RAID configuration that uses a dedicated parity disk and
block-level striping across multiple disks.
• Because data is striped in RAID 4, records can be read from any disk.
However, since every write must also update the dedicated parity disk, this
causes a performance bottleneck for write operations.
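The write bottleneck follows from the small-write update rule: every data-block write also rewrites the one parity disk. A sketch with hypothetical block values:

```python
def update_parity(old_parity, old_block, new_block):
    """RAID 4 small-write rule: new parity = old parity XOR old data XOR new data."""
    return old_parity ^ old_block ^ new_block

blocks = [0x10, 0x22, 0x3F]              # data blocks, one per data disk
old_parity = blocks[0] ^ blocks[1] ^ blocks[2]

# Rewrite block 1; only the changed block and the parity disk are touched,
# so every write in the array funnels through the single parity disk.
new_block = 0x55
new_parity = update_parity(old_parity, blocks[1], new_block)
blocks[1] = new_block
print(new_parity == blocks[0] ^ blocks[1] ^ blocks[2])  # → True
```

Reads can fan out across all data disks, but writes serialize on the parity disk, which is the bottleneck the text describes.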
Multiprocessor

• What Does Multiprocessor Mean?


• A multiprocessor is a computer system with two or more central processing units (CPUs) sharing
full access to a common RAM.
• Each processor shares the common main memory as well as the peripherals.
• This helps in the simultaneous processing of programs.
• The key objective of using a multiprocessor is to boost the system's execution speed, with
other objectives being fault tolerance and application matching.
• There are two types of multiprocessors:
• In a shared-memory multiprocessor, all the CPUs share the common memory.
• In a distributed-memory multiprocessor, every CPU has its own private memory.
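The shared-memory model can be illustrated with threads updating one variable in a common address space (a minimal Python sketch; on a real multiprocessor these instruction streams would run on separate CPUs):

```python
import threading

counter = 0                      # lives in memory shared by every thread
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 40000
```

The lock is what makes concurrent access to the common memory safe; in a distributed-memory design each worker would instead hold its own counter and exchange messages.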
• Applications of Multiprocessor
• As a uniprocessor: a single instruction stream operating on a single data stream (SISD).
• Inside a single system, executing multiple independent instruction streams on multiple
data streams (MIMD).
• A single instruction stream applied to multiple data streams (SIMD), which is usually
used for vector processing.
• Multiple instruction streams applied to a single data stream (MISD).
Benefits of using a multiprocessor include
• Enhanced performance
• Multiple applications
• Multiple users
• Multi-tasking inside an application
• Hardware sharing among CPUs
The interconnection structure of multiprocessor

• The components that form a multiprocessor system are CPUs, IOPs (I/O processors) connected to I/O devices, and
memory units.
• The interconnection between the components can have different physical configurations, depending
on the number of available transfer paths:
• between the processors and memory in a shared-memory system;
• among the processing elements in a loosely coupled system.
• The advantage is that a higher transfer rate can be achieved because of the multiple paths.
• The disadvantage is that it requires expensive memory control logic and a large number of cables and connections.
• One such interconnection is the binary n-cube (hypercube) architecture.
• In it, a node can also be a memory module or an I/O interface, not necessarily a processor.
• LRU (Least Recently Used)
• LRU replaces the line in the cache that has been in the cache the longest with no reference to it.
• It works on the idea that the more recently used blocks are more likely to be referenced again.
• LRU is the most commonly used algorithm, as it gives fewer page faults compared to the other algorithms.
Characteristics of LRU

• It has been observed that pages that have been heavily used recently will probably also be heavily used in the
upcoming instructions, and this observation forms the basis for LRU.
• When a page requested by the program is not present in RAM, a page fault occurs; if every page frame is full,
the page that has not been used for the longest period is removed.
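The replacement policy just described can be sketched with an ordered dictionary; the frame count and reference string below are made-up examples:

```python
from collections import OrderedDict

class LRUPageTable:
    """Toy page table with a fixed number of frames and LRU eviction."""

    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()   # insertion order doubles as recency order
        self.faults = 0

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)        # now the most recently used
        else:
            self.faults += 1                    # page fault
            if len(self.pages) >= self.frames:
                self.pages.popitem(last=False)  # evict the least recently used
            self.pages[page] = True

table = LRUPageTable(frames=3)
for page in [1, 2, 3, 1, 4, 5]:
    table.access(page)
print(table.faults)  # → 5
```

Re-accessing page 1 moves it to the back of the queue, so page 2 is the one evicted when page 4 arrives; that recency bookkeeping is exactly the overhead the disadvantages below refer to.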

Advantages of LRU
1. It is open to full analysis.
2. It replaces the page that was least recently used.
3. It is easy to identify a page that has faulted and has not been used for a long time.
4. The algorithm is efficient in practice.
Disadvantages of LRU

• There is more overhead, as we have to keep track of which pages were referenced.
• It is difficult to implement, as hardware assistance is required.
• It requires an additional data structure to be implemented.
• The hardware cost is high.
• In LRU, error detection is more difficult than in other algorithms.
• It has limited applicability.
• LRU can be very costly to operate.
