UNIT - 4 Notes

computer architecture notes unit-4 for engineering 2nd year computer science students

Uploaded by Abhinay

UNIT – IV - NOTES

Design of Control Unit :

The Control Unit is classified into two major categories:


1. Hardwired Control
2. Microprogrammed Control

Hardwired Control :
A hardwired control unit is a control unit that uses a fixed set of logic gates and circuits to execute
instructions. The control signals for each instruction are hardwired into the control unit, so the control
unit has a dedicated circuit for each possible instruction. Hardwired control units are simple and fast,
but they can be inflexible and difficult to modify.
Hardwired Control Unit: The control hardware can be viewed as a state machine that changes from
one state to another in every clock cycle, depending on the contents of the instruction register, the
condition codes, and the external inputs. The outputs of the state machine are the control signals. The
sequence of the operation carried out by this machine is determined by the wiring of the logic elements
and hence named “hardwired”.
 Fixed logic circuits that correspond directly to the Boolean expressions are used to generate the
control signals.
 Hardwired control is faster than micro-programmed control, so a controller that uses this approach can operate at high speed.
 RISC architectures are typically based on a hardwired control unit.

The Hardwired Control organization involves the control logic to be implemented with gates, flip-flops,
decoders, and other digital circuits.

The following image shows the block diagram of a Hardwired Control organization.
Designing of Hardwired Control Unit

The following are some of the ways for constructing hardwired control logic that have been proposed:
Sequence Counter Method − It is the most practical way to design a somewhat complex controller.
Delay Element Method – For creating the sequence of control signals, this method relies on the usage of timed
delay elements.
State Table Method − This method applies the standard algorithmic approach of designing the controller using the classical
state table technique.

Working of a Hardwired Control Unit

The basic data for control signal creation is contained in the operation code of an instruction. The operation code is
decoded in the instruction decoder. The instruction decoder is a collection of decoders that decode various fields of
the instruction opcode.

As a result, only a few of the instruction decoder's output lines carry active signal values. These output lines are
coupled to the inputs of a matrix that generates control signals for the computer's executive units. This matrix
combines the decoded signals from the instruction opcode with the outputs of a second matrix, which generates signals
representing consecutive control unit states, as well as signals from the outside world, such as interrupt signals. The
matrices are constructed in the same way as programmable logic arrays.

A Hard-wired Control consists of two decoders, a sequence counter, and a number of logic gates.
An instruction fetched from the memory unit is placed in the instruction register (IR).
An instruction register word consists of the I bit (bit 15), the operation code (bits 12 through 14), and the address part (bits 0 through 11).
The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder.
The outputs of the decoder are designated by the symbols D0 through D7.
Bit 15 of the instruction is transferred to a flip-flop designated by the symbol I.
Bits 0 through 11 (the address part) are applied to the control logic gates.
The Sequence counter (SC) can count in binary from 0 through 15.

Microprogrammed control unit (MCU) :

o A microprogrammed control unit (MCU) is a control unit that stores binary control values as words in
memory and uses a programming approach to implement a series of micro-operations. The MCU's control
memory stores a microprogram made up of microinstructions.
o The MCU operates by generating specific signal collections at every system clock beat, which in turn,
direct the instructions to be executed. Each output signal generates one micro-operation, including register
transfer.
o The main difference between microprogrammed structures and the hardwired control unit structure is the
existence of the control store. Hardwired control units are usually faster than microprogrammed
designs. Hardwired control units use a state counter and a PLA circuit to generate all the control signals
needed inside the CPU. Microprogrammed control units are relatively simple logic circuits that can
sequence through microinstructions and generate the control signals needed to execute each microinstruction.
o Microprogramming has its advantages, such as being very flexible. The instruction sets can be very robust
or very simple, but still very powerful.

Characteristics of Micro-programmed Control Unit

 The microinstruction address is specified in the control memory address register.


 All the control information is saved in the control memory, which is considered to be a ROM.
 The microinstruction received from memory is stored in the control register.
 A control word in the microinstruction specifies one or multiple micro-operations for a data processor.
 The next address is calculated in the circuit of the next address generator and then transferred to the control
address register for reading the next microinstruction when the micro-operations are being executed.
 Because it determines the sequence of addresses received from control memory, the next address generator
is also known as a microprogram sequencer.
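The control-memory / sequencer loop described above can be sketched as follows. The microinstruction encoding and the micro-operation strings are invented for illustration; only the structure (control memory, control address register, next-address generator) is taken from the notes.

```python
# Minimal sketch of a microprogrammed control unit: control memory holds
# microinstructions of the form (control_word, next_address); each clock cycle
# the control address register (CAR) selects the microinstruction to issue,
# and the next-address field feeds the CAR for the following cycle.

CONTROL_MEMORY = {
    0: ("FETCH: MAR <- PC", 1),
    1: ("IR <- M[MAR], PC <- PC + 1", 2),
    2: ("DECODE: branch on opcode", 0),   # a real sequencer branches here
}

def run_microprogram(steps: int):
    car = 0                               # control address register
    trace = []
    for _ in range(steps):
        control_word, next_addr = CONTROL_MEMORY[car]
        trace.append(control_word)        # "issue" the control signals
        car = next_addr                   # next-address generator -> CAR
    return trace
```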
Memory Hierarchy :

In computer system design, the Memory Hierarchy is an enhancement that organizes memory so as to minimize access time. It was developed based on a program behaviour known as locality of reference. The figure below demonstrates the different levels of the memory hierarchy.

The Memory Hierarchy is one of the most important aspects of computer memory, as it helps in optimizing the memory available in the computer. There are multiple levels of memory, each with a different size, cost, and speed. Some types of memory, such as cache and main memory, are faster than others but have a smaller capacity and are more expensive, whereas other types offer a larger capacity but are slower. Access to data is likewise not uniform across the types of memory: some have faster access, whereas others have slower access.

Types of Memory Hierarchy :

This Memory Hierarchy Design is divided into 2 main types:

 External Memory or Secondary Memory: Comprises magnetic disk, optical disk, and magnetic tape, i.e. peripheral
storage devices that are accessible by the processor via an I/O module.

 Internal Memory or Primary Memory: Comprises main memory, cache memory, and CPU registers. This
is directly accessible by the processor.
Memory Hierarchy Design

1. Registers :Registers are small, high-speed memory units located in the CPU. They are used to store the
most frequently used data and instructions. Registers have the fastest access time and the smallest storage capacity,
typically ranging from 16 to 64 bits.

2. Cache Memory :Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used
data and instructions that have been recently accessed from the main memory. Cache memory is designed to
minimize the time it takes to access data by providing the CPU with quick access to frequently used data.

3. Main Memory :
Main memory, also known as RAM (Random Access Memory), is the primary memory of a computer system. It has
a larger storage capacity than cache memory, but it is slower. Main memory is used to store data and instructions
that are currently in use by the CPU.
Types of Main Memory
 Static RAM: Static RAM stores binary information in flip-flops, and the information remains valid as long as power
is supplied. It has a faster access time and is used in implementing cache memory.
 Dynamic RAM: It stores the binary information as a charge on the capacitor. It requires refreshing circuitry to
maintain the charge on the capacitors after a few milliseconds. It contains more memory cells per unit area as
compared to SRAM.

4. Secondary Storage

Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile memory unit that
has a larger storage capacity than main memory. It is used to store data and instructions that are not currently in use
by the CPU. Secondary storage has the slowest access time and is typically the least expensive type of memory in
the memory hierarchy.
Characteristics of Memory Hierarchy :
 Capacity: It is the global volume of information the memory can store. As we move from top to bottom in the
Hierarchy, the capacity increases.
 Access Time: It is the time interval between the read/write request and the availability of the data. As we move
from top to bottom in the Hierarchy, the access time increases.
 Performance: Earlier when the computer system was designed without a Memory Hierarchy design, the speed
gap increased between the CPU registers and Main Memory due to a large difference in access time. This
results in lower performance of the system and thus, enhancement was required. This enhancement was made in
the form of Memory Hierarchy Design because of which the performance of the system increases. One of the
most significant ways to increase system performance is minimizing how far down the memory hierarchy one
has to go to manipulate data.
 Cost Per Bit: As we move from bottom to top in the Hierarchy, the cost per bit increases i.e. Internal Memory
is costlier than External Memory.
Advantages of Memory Hierarchy
 It helps in organizing and managing the memory in a better way.
 It allows data to be distributed across the different levels of the computer system.
 It saves the user's cost and time.

Main Memory :

The main memory acts as the central storage unit in a computer system. It is a relatively large and fast memory which is used to
store programs and data during the run time operations.

The primary technology used for the main memory is based on semiconductor integrated circuits. The integrated circuits for the
main memory are classified into two major units.

1. RAM (Random Access Memory) integrated circuit chips

2. ROM (Read Only Memory) integrated circuit chips

RAM integrated circuit chips

The RAM integrated circuit chips are further classified into two possible operating modes, static and dynamic.

The primary components of a static RAM are flip-flops that store the binary information. The nature of the stored information
is volatile, i.e. it remains valid as long as power is applied to the system. Static RAM is easy to use and takes less time
to perform read and write operations than dynamic RAM.

Dynamic RAM stores the binary information in the form of electric charges on capacitors. The capacitors
are integrated inside the chip using MOS transistors. Dynamic RAM consumes less power and provides a larger storage capacity
in a single memory chip.
RAM chips are available in a variety of sizes and are used as per the system requirement. The following block diagram
demonstrates the chip interconnection in a 128 * 8 RAM chip.

o A 128 * 8 RAM chip has a memory capacity of 128 words of eight bits (one byte) per word. This requires a
7-bit address and an 8-bit bidirectional data bus.
o The 8-bit bidirectional data bus allows the transfer of data either from memory to CPU during
a read operation or from CPU to memory during a write operation.
o The read and write inputs specify the memory operation, and the two chip select (CS) control inputs are for
enabling the chip only when the microprocessor selects it.
o The bidirectional data bus is constructed using three-state buffers.
o The output generated by three-state buffers can be placed in one of the three possible states which include a
signal equivalent to logic 1, a signal equal to logic 0, or a high-impedance state.

The following function table specifies the operations of a 128 * 8 RAM chip.

From the functional table, we can conclude that the unit is in operation only when CS1 = 1 and CS2 = 0.
The bar on top of the second select variable indicates that this input is enabled when it is equal to 0.
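The address-line and chip-select arithmetic above can be checked with a short sketch. The helper names are invented; the numbers come straight from the 128 x 8 RAM chip (and the 512 x 8 ROM chip described in the next section).

```python
# Back-of-the-envelope check for the RAM/ROM chip organization: a chip with W
# words needs log2(W) address lines, and the chip is enabled only when CS1 = 1
# and the active-low chip select (CS2 with a bar) is 0.

import math

def address_lines(words: int) -> int:
    """Number of address lines needed to select one of `words` locations."""
    return int(math.log2(words))

def chip_enabled(cs1: int, cs2_bar: int) -> bool:
    """CS1 is active-high, CS2 is active-low (shown with a bar in the table)."""
    return cs1 == 1 and cs2_bar == 0

assert address_lines(128) == 7          # 128 x 8 RAM chip: 7-bit address
assert address_lines(512) == 9          # 512 x 8 ROM chip: 9-bit address
assert chip_enabled(1, 0)               # selected: unit operates
assert not chip_enabled(0, 0)           # otherwise: data bus in high impedance
```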
ROM integrated circuit

The primary component of the main memory is RAM integrated circuit chips, but a portion of memory may be constructed with
ROM chips. A ROM memory is used for keeping programs and data that are permanently resident in the computer.

Apart from the permanent storage of data, the ROM portion of main memory is needed for storing an initial program called
a bootstrap loader. The primary function of the bootstrap loader program is to start the computer software operating when
power is turned on.

ROM chips are also available in a variety of sizes and are also used as per the system requirement. The following block diagram
demonstrates the chip interconnection in a 512 * 8 ROM chip.

o A ROM chip has a similar organization to a RAM chip. However, a ROM can only perform read operations; the data
bus can only operate in output mode.

o The 9-bit address lines in the ROM chip specify any one of the 512 bytes stored in it.
o The value for chip select 1 and chip select 2 must be 1 and 0 for the unit to operate. Otherwise, the data bus is said to be
in a high-impedance state.

Virtual memory :
Virtual memory is a memory management technique where secondary memory can be used as if it were a part of
the main memory. Virtual memory is a common technique used in a computer's operating system (OS).

Virtual memory uses both hardware and software to enable a computer to compensate for physical memory
shortages, temporarily transferring data from random access memory (RAM) to disk storage. Mapping chunks of
memory to disk files enables a computer to treat secondary memory as though it were main memory.

Today, most personal computers (PCs) come with at least 8 GB (gigabytes) of RAM. But, sometimes, this is not
enough to run several programs at one time. This is where virtual memory comes in. Virtual memory frees up RAM
by swapping data that has not been used recently over to a storage device, such as a hard drive or solid-state drive
(SSD).

Virtual memory is important for improving system performance, multitasking and using large programs. However,
users should not overly rely on virtual memory, since it is considerably slower than RAM. If the OS has to swap
data between virtual memory and RAM too often, the computer will begin to slow down -- this is called thrashing.
Virtual memory was developed at a time when physical memory -- also referenced as RAM -- was expensive.
Computers have a finite amount of RAM, so memory will eventually run out when multiple programs run at the
same time. A system using virtual memory uses a section of the hard drive to emulate RAM. With virtual memory, a
system can load larger or multiple programs running at the same time, enabling each one to operate as if it has more
space, without having to purchase more RAM.

Techniques that automatically move program and data blocks into the physical main memory when they are
required for execution are called virtual-memory techniques. Programs, and hence the processor, reference an
instruction and data space that is independent of the available physical main memory space. The binary addresses
that the processor issues for either instructions or data are called virtual or logical addresses. These addresses are
translated into physical addresses by a combination of hardware and software components. If a virtual address refers
to a part of the program or data space that is currently in the physical memory, then the contents of the appropriate
location in the main memory are accessed immediately. On the other hand, if the referenced address is not in the
main memory, its contents must be brought into a suitable location in the memory before they can be used.
Therefore, an address used by a programmer will be called a virtual address, and the set of such addresses
the address space. An address in main memory is called a location or physical address. The set of such locations is
called the memory space, which consists of the actual main memory locations directly addressable for processing.
As an example, consider a computer with a main-memory capacity of 32M words. Twenty-five bits are needed to
specify a physical address in memory, since 32M = 2^25. Suppose that the computer has auxiliary memory available
for storing 2^35 words, that is, 32G words. Thus, the auxiliary memory has a capacity for storing information equivalent to
the capacity of 1024 main memories. Denoting the address space by N and the memory space by M, we then have
for this example N = 32 Giga words and M = 32 Mega words.
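The arithmetic in this example can be verified directly:

```python
# Worked check of the example's sizes: a 32M-word main memory needs 25-bit
# physical addresses (2**25 = 32M), and a 32G-word auxiliary memory needs
# 35-bit virtual addresses (2**35 = 32G), i.e. 1024 times the main memory.

M_WORDS = 32 * 2**20      # memory space M: 32M words of main memory
N_WORDS = 32 * 2**30      # address space N: 32G words of auxiliary memory

assert M_WORDS == 2**25             # 25-bit physical address
assert N_WORDS == 2**35             # 35-bit virtual address
assert N_WORDS // M_WORDS == 1024   # 1024 main memories
```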

The portion of the program that is shifted between main memory and secondary storage can be of fixed size
(pages) or of variable size (segments). Virtual memory also permits a program’s memory to be physically
noncontiguous , so that every portion can be allocated wherever space is available. This facilitates process
relocation. Virtual memory, apart from overcoming the main memory size limitation, allows sharing of main
memory among processes.

Even though the programs generate virtual addresses, these addresses cannot be used to access the physical
memory. Therefore, the virtual to physical address translation has to be done. This is done by the memory
management unit (MMU). The mapping is a dynamic operation, which means that every address is translated
immediately as a word is referenced by the CPU. This concept is depicted diagrammatically in Figures 30.1 and
30.2. Figure 30.1 gives a general overview of the mapping between the logical addresses and physical addresses.
Figure 30.2 shows how four different pages A, B, C and D are mapped. Note that, even though they are contiguous
pages in the virtual space, they are not so in the physical space. Pages A, B and C are available in physical memory
at non-contiguous locations, whereas, page D is not available in physical storage.
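The page mapping of Figure 30.2 can be sketched as a tiny page-table lookup. The page size and frame numbers here are assumptions chosen for illustration; the key points from the text are that contiguous virtual pages map to non-contiguous physical frames, and that referencing a non-resident page (like D) causes a page fault.

```python
# Sketch of the MMU's virtual-to-physical translation: split the virtual
# address into (page number, offset), look the page up in the page table,
# and rebuild the physical address from the frame number and offset.

PAGE_SIZE = 4096                      # assumed 4 KB pages
PAGE_TABLE = {0: 5, 1: 2, 2: 7}       # pages A, B, C -> non-contiguous frames
                                      # page 3 (D) absent: not in physical memory

def translate(virtual_addr: int) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in PAGE_TABLE:
        # In a real system this traps to the OS, which loads the page from disk.
        raise LookupError(f"page fault: virtual page {page} not resident")
    return PAGE_TABLE[page] * PAGE_SIZE + offset
```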
Cache Memory :
Cache memory is a small, high-speed storage area in a computer. The cache is a smaller and faster memory that
stores copies of the data from frequently used main memory locations. There are various independent caches in a
CPU, which store instructions and data. The most important use of cache memory is that it is used to reduce the
average time to access data from the main memory.
By storing this information closer to the CPU, cache memory helps speed up the overall processing time. Cache
memory is much faster than the main memory (RAM). When the CPU needs data, it first checks the cache. If the
data is there, the CPU can access it quickly. If not, it must fetch the data from the slower main memory.
Characteristics of Cache Memory
 Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU.
 Cache Memory holds frequently requested data and instructions so that they are immediately available to the
CPU when needed.
 Cache memory is costlier than main memory or disk memory but more economical than CPU registers.
 Cache Memory is used to speed up and synchronize with a high-speed CPU.

Cache Memory

Levels of Memory

 Level 1 or Registers: This is the memory in which data that is immediately required by the CPU is stored and
accessed. The most commonly used registers are the accumulator, program counter, address register, etc.
 Level 2 or Cache memory: It is the fastest memory after registers, with a shorter access time, where data is
temporarily stored for faster access.
 Level 3 or Main Memory: It is the memory on which the computer currently works. It is small in size, and
once power is off, data no longer stays in this memory.
 Level 4 or Secondary Memory: It is external memory that is not as fast as main memory, but data stays
permanently in this memory.
Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a corresponding entry
in the cache.
 If the processor finds that the memory location is in the cache, a Cache Hit has occurred and data is read
from the cache.
 If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache miss,
the cache allocates a new entry and copies in data from the main memory, then the request is fulfilled from
the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.

Hit Ratio (H) = hits / (hits + misses) = no. of hits / total accesses

Miss Ratio = misses / (hits + misses) = no. of misses / total accesses = 1 - Hit Ratio (H)

Cache performance can be improved by using a larger cache block size and higher associativity, reducing the miss rate,
reducing the miss penalty, and reducing the time to hit in the cache.
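The hit- and miss-ratio formulas can be checked with a small helper (the function name is illustrative):

```python
# Hit ratio and miss ratio from raw hit/miss counts, per the formulas above.

def cache_ratios(hits: int, misses: int):
    total = hits + misses                 # total accesses
    hit_ratio = hits / total
    miss_ratio = misses / total           # equals 1 - hit_ratio
    return hit_ratio, miss_ratio

h, m = cache_ratios(hits=90, misses=10)   # 90 hits out of 100 accesses
assert h == 0.9 and m == 0.1
assert abs((h + m) - 1.0) < 1e-12         # miss ratio = 1 - hit ratio
```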

Advantages
 Cache Memory is faster in comparison to main memory and secondary memory.
 Programs stored by Cache Memory can be executed in less time.
 The data access time of Cache Memory is less than that of the main memory.
 Cache Memory stored data and instructions that are regularly used by the CPU, therefore it increases the
performance of the CPU.
Disadvantages
 Cache Memory is costlier than primary memory and secondary memory.
 Data is stored on a temporary basis in Cache Memory.
 Whenever the system is turned off, data and instructions stored in cache memory get destroyed.
 The high cost of cache memory increases the price of the Computer System.

secondary storage :

A secondary storage device refers to any non-volatile storage device that is internal or external to the computer. It
can be any storage device beyond the primary storage that enables permanent data storage. A secondary storage
device is also known as an auxiliary storage device, backup storage device, tier 2 storage, or external storage. These
devices store virtually all programs and applications on a computer, including the operating system, device drivers,
applications and general user data.

The Secondary storage media can be fixed or removable. Fixed Storage media is an internal storage medium like a
hard disk that is fixed inside the computer. A storage medium that is portable and can be taken outside the computer
is termed removable storage media. The main advantage of using secondary storage devices is:

o In secondary storage devices, the stored data might not be under the direct control of the operating system.
For example, many organizations store their archival data or critical documents on secondary storage
drives that their main network cannot access, to ensure their preservation in the event of a data breach.

o Since these drives do not interact directly with the main infrastructure and can be situated in a remote or
secure site, it is unlikely that a hacker can access these drives unless they're physically stolen.
Why do we need Secondary Storage?

Computers use main memory such as random access memory (RAM) and cache to hold data that is being processed.
However, this type of memory is volatile, and it loses its data when the computer is switched off. General-purpose
computers, such as personal computers and tablets, need to store programs and data for later use.

Types of Secondary Storage Device

Here are the two types of secondary storage devices, i.e., fixed storage and removable storage.

1. Fixed Storage

Fixed storage is an internal media device used by a computer system to store data. Usually, these are referred to as
the fixed disk drives or Hard Drives.

Despite the name, fixed storage devices can be removed from the system for repair, maintenance,
or upgrades. In general, however, this cannot be done without a proper toolkit to open up the
computer system and provide physical access, which needs to be done by an engineer.

Technically, almost all data, i.e. being processed on a computer system, is stored on some built-in fixed storage
device. We have the following types of fixed storage:

o Internal flash memory (rare)

o SSD (solid-state disk) units

o Hard disk drives (HDD)

2. Removable Storage

Removable storage is an external media device that is used by a computer system to store data. Usually, these are
referred to as removable disk drives or external drives. Removable storage is any storage device that can
be removed from a computer system while the system is running. Examples of external devices include CDs, DVDs,
Blu-ray disk drives, diskettes, and USB drives. Removable storage makes it easier for a user to transfer data from
one computer system to another.

The main benefit of removable disks in storage factors is that they can provide the fast data transfer rates associated
with storage area networks (SANs). We have the following types of Removable Storage:

o Optical discs (CDs, DVDs, Blu-ray discs)

o Memory cards, Floppy disks ,Magnetic tapes

o Disk packs, Paper storage (punched tapes, punched cards)


Classification of Secondary Storage Devices

The following image shows the classification of commonly used secondary storage devices.

Sequential Access Storage Device

It is a class of data storage devices that read stored data in sequence. This is in contrast to random-access
storage, where data can be accessed in any order; magnetic tape is the most common sequential access storage device.

i. Magnetic tape: It is a medium for magnetic recording, made of a thin, magnetizable coating on a long,
narrow strip of plastic film. Devices that record and play audio and video using magnetic tape are tape
recorders and videotape recorders. A device that stores computer data on magnetic tape is known as a tape
drive.
It was a key technology in early computer development, allowing unparalleled amounts of data to be
mechanically created, stored for long periods, and rapidly accessed.

Direct Access Storage Devices

A direct-access storage device (DASD) is another name for secondary storage devices that store data in discrete
locations with a unique address, such as hard disk drives, optical drives and most magnetic storage devices.

1. Magnetic disks: A magnetic disk is a storage device that uses a magnetization process to write, rewrite and
access data. It is covered with a magnetic coating and stores data in the form of tracks, spots and sectors. Hard disks,
zip disks and floppy disks are common examples of magnetic disks.

i. Floppy Disk: A floppy disk is a flexible disk with a magnetic coating on it, and it is packaged inside a
protective plastic envelope. These are among the oldest portable storage devices that could store up to 1.44
MB of data, but now they are not used due to very little memory storage.

ii. Hard Disk Drive (HDD): A hard disk drive comprises a series of circular disks called platters, arranged one
over the other, almost half an inch apart, around a spindle. Platters are made of non-magnetic material such as
aluminium alloy and coated with 10-20 nm of magnetic material. Early disks had a standard diameter of 14
inches; modern platters are typically 3.5 or 2.5 inches. They rotate at speeds varying from 4200 rpm (rotations per minute)
for personal computers to 15000 rpm for servers.
Data is stored by magnetizing or demagnetizing the magnetic coating. A magnetic reader arm is used to
read data from and write data to the disks. A typical modern HDD has a capacity in terabytes (TB).

2. Optical Disk: An optical disk is any computer disk that uses optical storage techniques and technology to read
and write data. It is a computer storage disk that stores data digitally and uses laser beams to read and write data.

i. CD Drive: CD stands for Compact Disk. CDs are circular disks that use optical rays, usually lasers, to read
and write data. They are very cheap as you can get 700 MB of storage space for less than a dollar. CDs are
inserted in CD drives built into the CPU cabinet. They are portable as you can eject the drive, remove the
CD and carry it with you. There are three types of CDs:

o CD-ROM (Compact Disk - Read Only Memory): Data is recorded on these CDs by the manufacturer
and can only be read by the user. Proprietary software, audio, or video are released on CD-ROMs.

o CD-R (Compact Disk - Recordable): The user can write data once on the CD-R. It cannot be
deleted or modified later.

o CD-RW (Compact Disk - Rewritable): Data can repeatedly be written and deleted on these
optical disks.

ii. DVD Drive: DVD stands for Digital Versatile Disc (also Digital Video Disc). A DVD is an optical device that can
store several times the data held by a CD; a single-layer DVD holds 4.7 GB versus a CD's 700 MB. They are usually
used to store rich multimedia files that need high storage capacity. DVDs also come in three varieties - read-only,
recordable, and rewritable.

iii. Blu Ray Disk: Blu Ray Disk (BD) is an optical storage media that stores high definition (HD) video and
other multimedia files. BD uses a shorter wavelength laser than CD/DVD, enabling the writing arm to
focus more tightly on the disk and pack in more data. BDs can store up to 128 GB of data.

3. Memory Storage Devices: A memory storage device contains trillions of interconnected memory cells that store data.
Each cell is built from transistors that are switched on or off to represent the 1s and 0s of binary code, allowing a
computer to read and write information. This category includes USB drives, flash memory devices, and SD and memory cards,
which you'll recognize as the storage medium used in digital cameras.

i. Flash Drive: A flash drive is a small, ultra-portable storage device. USB flash drives make it easy to move
files from one device to another. Flash drives connect to computers and other devices via a
built-in USB Type-A or USB-C plug, making one a USB device and cable combination.
Flash drives are often referred to as pen drives, thumb drives, or jump drives. The terms USB
drive and solid-state drive (SSD) are also sometimes used, but most of the time those refer to larger, not-
so-mobile USB-based storage devices like external hard drives.
These days, a USB flash drive can hold up to 2 TB of storage. They're more expensive per gigabyte than an
external hard drive, but they have prevailed as a simple, convenient solution for storing and transferring
smaller files.
A pen drive has the following advantages in computer organization:

o Transfer Files: A pen drive is a device plugged into a USB port of the system that is used to
transfer files, documents, and photos to a PC and vice versa.

o Portability: The lightweight nature and smaller size of a pen drive make it possible to carry it
from place to place, making data transportation an easier task.

o Backup Storage: Most pen drives now come with the feature of password
encryption, so important information related to family, medical records, and photos can be stored on
them as a backup.

o Transport Data: Professionals or Students can now easily transport large data files and video,
audio lectures on a pen drive and access them from anywhere. Independent PC technicians can
store work-related utility tools, various programs, and files on a high-speed 64 GB pen drive and
move from one site to another.

ii. Memory card: A memory card or memory cartridge is an electronic data storage device used for storing
digital information, typically using flash memory. These are commonly used in portable electronic devices,
such as digital cameras, mobile phones, laptop computers, tablets, PDAs, portable media players, video
game consoles, synthesizers, electronic keyboards, and digital pianos, and they allow adding memory to such
devices without compromising ergonomics, as the card is usually contained within the device rather than
protruding like a USB flash drive.

Memory Management Hardware :

Memory Management Hardware in computer architecture plays a crucial role in ensuring efficient and effective use
of a computer's memory. It is responsible for organizing and allocating memory resources to different processes and
applications running on the system. Without proper memory management, computing systems would quickly
become overwhelmed, leading to crashes, slow performance, and other issues.

The history of memory management hardware dates back to the early days of computing, when physical memory
was limited and manual management was required. Over the years, advancements in technology have led to the
development of hardware-based memory management systems that automate the process, improving performance
and reliability. Today, memory management hardware solutions are integrated into modern computer architectures,
enabling seamless multitasking, efficient memory allocation, and effective utilization of available memory
resources.
Memory management hardware in computer architecture plays a crucial role in optimizing system performance and
resource allocation. It includes features such as memory caches, memory controllers, and memory management units
(MMUs). These components work together to ensure efficient access to data, reduce latency, and enhance overall
system responsiveness. Additionally, memory management hardware helps in handling memory allocation and
deallocation, virtual memory management, and protection mechanisms. This advanced hardware technology is
essential for modern computer systems to deliver optimal performance and meet the demands of complex
applications and multitasking environments.

Components of Memory Management Hardware


The memory management hardware consists of several key components that work together to ensure efficient
memory usage and allocation. These components include:

 Memory Management Unit (MMU): The MMU is a critical component of memory management hardware. It
translates virtual addresses to physical addresses, enabling the system to access the correct memory location.
 Translation Lookaside Buffer (TLB): The TLB is a cache that stores recently used virtual-to-physical address
translations, speeding up the address translation process.
 Memory Segmentation Unit: This unit divides memory into segments (typically variable-sized, unlike
fixed-size pages) to organize and manage memory resources efficiently.
 Memory Protection Unit (MPU): The MPU ensures the security and protection of memory by enforcing access
permissions and preventing unauthorized access.

Memory Management Unit (MMU)

The Memory Management Unit (MMU) is a key component of memory management hardware in computer
architecture. It performs the essential task of translating virtual addresses generated by the CPU into physical
addresses, allowing the system to access the correct memory location.

The MMU works in conjunction with the operating system's memory management software to allocate and manage
memory resources effectively. It uses a technique called address translation, which involves converting virtual
addresses to physical addresses by utilizing page tables or translation tables.
The MMU also plays a vital role in memory protection by implementing memory access control and prevention
mechanisms. It enforces access permissions, ensuring that each process can only access its allocated memory and
preventing unauthorized access to sensitive information.
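The permission checks described above can be pictured as a lookup of per-page access bits before every memory access. The page numbers and permission sets below are made up purely for illustration; real MMUs keep these bits in page-table entries:

```python
# Toy per-page permission bits, kept alongside the translation in a real MMU.
permissions = {
    0: {"read", "write"},   # a process data page
    1: {"read"},            # a shared read-only page
}

def check_access(page, mode):
    """Raise a protection fault if the access mode is not permitted on the page."""
    if mode not in permissions.get(page, set()):
        raise PermissionError(f"protection fault: {mode} on page {page}")
    return True

check_access(0, "write")        # allowed: page 0 is read/write
try:
    check_access(1, "write")    # page 1 is read-only -> fault
except PermissionError as e:
    print(e)                    # prints the protection-fault message
```

In hardware this check happens in parallel with address translation, so a faulting access is stopped before it reaches physical memory.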

Translation Lookaside Buffer (TLB)

The Translation Lookaside Buffer (TLB) is a cache in the memory management hardware that stores recently used
virtual-to-physical address translations. It acts as a high-speed memory for address translation, improving the overall
performance of the system.

When the CPU generates a virtual address, the TLB checks if the translation for that address is available in its cache.
If the translation is found, the TLB provides the corresponding physical address, eliminating the need for a time-
consuming lookup in the page tables or translation tables.

The TLB operates on the principle of locality, which states that recently accessed memory locations are likely to be
accessed again in the near future. By storing frequently used translations, the TLB reduces the overhead of address
translation, improving system performance.
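The TLB lookup path above can be sketched as a small cache sitting in front of the page-table walk. The capacity, the LRU eviction policy, and the toy page table here are illustrative assumptions; real TLBs hold dozens to hundreds of entries:

```python
from collections import OrderedDict

PAGE_TABLE = {0: 7, 1: 2, 2: 9}   # toy page table: virtual page -> physical frame
TLB_CAPACITY = 2                   # illustrative; real TLBs are much larger

tlb = OrderedDict()                # OrderedDict gives us simple LRU bookkeeping
hits = misses = 0

def lookup(vpn):
    """Return the frame for a virtual page, consulting the TLB first."""
    global hits, misses
    if vpn in tlb:
        hits += 1
        tlb.move_to_end(vpn)       # refresh this entry's LRU position
        return tlb[vpn]
    misses += 1
    frame = PAGE_TABLE[vpn]        # slow path: full page-table walk
    if len(tlb) >= TLB_CAPACITY:
        tlb.popitem(last=False)    # evict the least recently used entry
    tlb[vpn] = frame
    return frame

# Locality in action: repeated accesses to the same pages hit the TLB.
for vpn in [0, 1, 0, 0, 1]:
    lookup(vpn)
print(hits, misses)                # 3 hits, 2 misses
```

The hit/miss counts show why locality matters: only the first touch of each page pays the cost of a page-table walk.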

Functions of Memory Management Hardware

The memory management hardware performs several critical functions to ensure efficient memory usage and
allocation. These functions include:

 Address Translation: The memory management hardware translates virtual addresses into physical addresses,
allowing the CPU to access the correct memory location.
 Memory Allocation: It allocates and deallocates memory resources to processes, ensuring that each process has
sufficient memory to execute efficiently.

 Memory Protection: The memory management hardware enforces access permissions and prevents unauthorized
access to memory areas.
 Virtual Memory Management: It manages the mapping of virtual addresses to physical addresses, enabling the
efficient use of limited physical memory by utilizing disk-based virtual memory.

Address Translation

Address translation is one of the primary functions of memory management hardware. It involves converting virtual
addresses generated by the CPU into physical addresses, allowing the system to access the correct memory location.

During address translation, the memory management hardware uses page tables or translation tables to map virtual
addresses to physical addresses. This process gives each process its own isolated memory space, protecting it
from interference by other processes.

Address translation is essential for enabling the efficient use of physical memory and facilitating the execution of
multiple processes concurrently. By translating virtual addresses to physical addresses, the memory management
hardware provides each process with a unique memory address space.
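The page-table mapping described above splits a virtual address into a page number and an offset, replacing the page number with a frame number. The 4 KB page size and the small page table below are assumptions chosen for the example:

```python
PAGE_SIZE = 4096  # assumed 4 KB pages, so the low 12 bits are the offset

# Toy page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 9, 2: 3}

def translate(virtual_address):
    """Translate a virtual address to a physical address via the page table."""
    vpn = virtual_address // PAGE_SIZE       # virtual page number
    offset = virtual_address % PAGE_SIZE     # offset is unchanged by translation
    if vpn not in page_table:
        raise KeyError(f"page fault: page {vpn} not mapped")
    frame = page_table[vpn]
    return frame * PAGE_SIZE + offset

# Virtual address 4100 = page 1, offset 4 -> frame 9, offset 4 = 36868.
print(translate(4100))
```

Note that only the page number changes; the offset within the page passes straight through, which is what makes page-granular translation cheap.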

Memory Allocation

Memory allocation is another crucial function of memory management hardware. It involves assigning memory
resources to processes and deallocating them when no longer needed.
The memory management hardware tracks the available memory blocks, allocating them to processes based on their
memory requirements. It ensures that each process has a sufficient amount of memory to execute efficiently,
preventing resource contention.

Efficient memory allocation allows for the simultaneous execution of multiple processes, maximizing system
productivity. By managing memory resources effectively, the memory management hardware minimizes wastage
and fragmentation, optimizing overall system performance.
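The block tracking and allocation described above can be sketched with a free list and a first-fit strategy. The first-fit policy and the 1024-unit memory size are assumptions for illustration; real allocators use a variety of policies and also coalesce adjacent free blocks:

```python
# Free list of (start, size) blocks in a toy 1024-unit memory.
free_list = [(0, 1024)]

def allocate(size):
    """First-fit: take the first free block big enough, splitting off the rest."""
    for i, (start, block_size) in enumerate(free_list):
        if block_size >= size:
            remainder = block_size - size
            if remainder:
                free_list[i] = (start + size, remainder)  # shrink the block
            else:
                free_list.pop(i)                          # exact fit: remove it
            return start
    raise MemoryError("no free block large enough")

def deallocate(start, size):
    """Return a block to the free list (no coalescing, for brevity)."""
    free_list.append((start, size))

a = allocate(100)   # -> 0
b = allocate(200)   # -> 100
deallocate(a, 100)  # the block at 0 goes back on the free list
c = allocate(700)   # -> 300, carved from the remaining large block
d = allocate(80)    # -> 0, reusing the freed block
```

The final allocation reusing address 0 illustrates why deallocation tracking matters: without it, the freed block would be wasted and the last request would fail.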
