OS (Unit 5)


Operating System (Unit 5) Prepared By: Sujesh Manandhar

Unit 5
Memory Management

Memory management is the functionality of an operating system that handles or manages primary memory, keeps track of each and every memory location, and moves processes back and forth between main memory and disk during execution. It is concerned not only with accessing memory but also with the correct operation and protection required for smooth execution. It decides how much memory to allocate to a process and which process will get memory at what time, keeps track of which portions of memory are free and which are allocated, and updates this status.
Basic hardware:
Main memory and the registers built into the processor itself are the only general-purpose storage that the CPU can access directly. Any instruction in execution, and any data being used by that instruction, must be in one of these directly accessible storage devices. If the data are not in memory, they must be moved there before the CPU can operate on them. For faster access, a fast cache memory can be added between the CPU and main memory. The hardware manages the cache built into the CPU automatically, speeding up memory access without any operating-system control.
For proper system operation, the operating system must be protected from access by user processes. On a multiprogramming system, user processes must also be protected from one another, i.e. each process should have a separate memory space. Separate per-process memory protects the processes from each other, and this protection must be provided by hardware. To separate the memory spaces, the range of legal addresses that a process may access must be determined, and it must be ensured that the process can access only those legal addresses. This protection can be provided using two registers: the base register, which holds the smallest legal physical memory address, and the limit register, which specifies the size of the range. For example, if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 to 420939.
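As a sketch, the base/limit check that the hardware performs on every memory reference can be written as follows. The numbers follow the example above; `check_access` is an illustrative name, not a real hardware interface.

```python
BASE = 300040    # base register: smallest legal physical address
LIMIT = 120900   # limit register: size of the legal range

def check_access(addr):
    # Hardware-style check: a reference is legal only if it falls in
    # [BASE, BASE + LIMIT); anything else traps to the operating system.
    if BASE <= addr < BASE + LIMIT:
        return True
    raise MemoryError(f"trap: illegal address {addr}")

print(check_access(300040))   # lowest legal address → True
print(check_access(420939))   # highest legal address → True
# check_access(420940) would raise MemoryError: one past the legal range
```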
The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction. Since privileged instructions can be executed only in kernel mode, and only the operating system runs in kernel mode, only the operating system can load the base and limit registers. This allows the operating system to load user programs into user memory, to dump out those programs in case of error, to perform I/O to and from user memory, and to provide many other services.

For: BIM 8th Semester

Figure: A base and limit register define a logical address space

Figure: Hardware address protection with base and limit registers


Address Binding:
The process address space is the set of logical addresses that a process references in its code. The operating system maps these logical addresses to physical addresses when memory is allocated to the program. Address binding refers to this mapping of logical addresses to physical addresses; each binding is a mapping from one address space to another.
For execution, a program must be brought into memory. The programs on disk waiting to be brought into memory form the input queue. In most cases a user program goes through several steps, and during these steps addresses can be represented in different ways:
• Symbolic addresses: the addresses used in the source code. Variable names, constants, and instruction labels are the basic elements of symbolic addresses.
• Relocatable (relative) addresses: at compile time, the compiler binds symbolic addresses to relocatable addresses.
• Physical addresses: the linkage editor or loader in turn binds the relocatable addresses to absolute addresses (such as 74014). The loader generates these addresses when the program is loaded into memory.
The binding of instructions and data to memory addresses can be done at any of the following steps:
• Compile Time:
If it is known at compile time where the process will reside in memory, then absolute code (with real addresses) can be generated. For example, if it is known that a user process will reside starting at location R, then the generated code will start at that location and extend up from there. If at some later time the starting location changes, the code must be recompiled.
• Load Time:
If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case final binding is delayed until load time. If the starting address changes, only the user code needs to be reloaded to incorporate the changed value. The loader translates relocatable addresses into absolute addresses: the base address of the process in main memory is added to every logical address to generate the absolute address.
• Execution Time: if the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware must be available for this scheme. Additional memory may be allocated and deallocated at this time. Most general-purpose operating systems use this method.

Logical and Physical address space:

An address generated by the CPU is commonly referred to as a logical address. It is generated while a program is running and is also known as a virtual address, since it does not exist physically. The CPU uses this address as a reference to access a physical memory location. The set of all logical addresses generated by a program is its logical address space.
An address seen by the memory unit, i.e. the one loaded into the memory address register, is commonly referred to as a physical address. It identifies the physical location of the required data in memory. The user never deals with physical memory directly but accesses it through the corresponding logical addresses. The set of all physical addresses corresponding to the logical addresses is the physical address space.
For the compile-time and load-time address-binding methods, logical and physical addresses are identical. The execution-time binding scheme, however, results in differing logical and physical addresses. The user program generates only logical addresses and behaves as though the process runs in locations 0 to max, but the program needs physical memory for its execution, so these logical addresses must be mapped to physical addresses before they are used.
Mapping Logical address to Physical address:
The run-time mapping of logical addresses to physical addresses is done by a hardware device called the memory management unit (MMU). The logical address generated by the CPU is fed into the MMU, whose relocation register (base register) contains the base address in physical memory. This base value is added to every address generated by the user process (the logical address), and the result is the physical address, which is sent to memory.
For example, if the base address is 14000, then an attempt by the user process to access location 0 is dynamically relocated to location 14000, and an access to location 346 is mapped to location 14346.
The following figure shows the process of mapping a logical address to a physical address:


Figure: Mapping logical to physical address using relocation register
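As a sketch, the relocation performed by the MMU on the numbers above can be written as follows (`mmu_translate` is an illustrative name, not a real API):

```python
RELOCATION = 14000  # contents of the relocation (base) register

def mmu_translate(logical_addr):
    # The MMU adds the relocation register to every CPU-generated address.
    return RELOCATION + logical_addr

print(mmu_translate(0))    # → 14000
print(mmu_translate(346))  # → 14346
```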

Difference between logical and physical address:

• A logical address is generated by the CPU from the perspective of the program; a physical address is the exact location that exists in the memory unit.
• The set of all logical addresses generated by the CPU for a program is the logical address space; the set of all physical addresses corresponding to those logical addresses is the physical address space.
• The logical address does not exist physically in memory; the physical address is a location in memory that can be accessed physically.
• The user can view the logical addresses of a program but can never view the physical addresses of memory.
• The user uses logical addresses to access physical addresses; a logical address must be converted to a physical address for the program to execute.
• Logical addresses are generated by the CPU; physical addresses are computed by the MMU.


Dynamic Loading:
Dynamic loading is a mechanism in which a routine is not loaded into memory until it is called. Only the main program is loaded into memory first; other routines are loaded when they are required. All routines are kept on disk in a relocatable load format and are loaded into memory whenever needed.
Advantages:
• A routine is loaded only when it is required.
• Useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.
• Does not require special support from the OS. It is the responsibility of the user to design programs to take advantage of this method.

Dynamic linking:
Dynamic linking is the mechanism in which required system libraries are not linked to a user program until execution time. For example, suppose a program calls a function whose body resides in a separate system library. Only when that function is called is the required library routine loaded into memory to provide the body of the function. This mechanism is called dynamic linking.
With dynamic linking, a stub is included in the image for each library-routine reference. A stub is a small piece of code that indicates how to locate the appropriate library routine, or how to load the library if the routine is not already present. When a stub is executed, it checks whether the needed routine is already in memory; if it is not, the program loads the routine into memory.
Advantages:
• Without this scheme, each program in a system must include a copy of every library routine it references in its executable image. This wastes both disk space and main memory.


Memory Management Schemes or Memory Allocation:

1. Single contiguous memory allocation:
In this strategy, a user job or process is given complete control of the CPU and memory until the job completes or an error occurs. Once one process has acquired the memory, no other requesting process gets memory until the executing process leaves it. The user job is the only program resident in memory apart from the operating system.
The following figure represents this scenario, in which all of user memory is allocated to one program.

Figure: single contiguous memory allocation

The following example shows how single contiguous memory allocation works. Suppose memory has a total size of 70 KB, of which 20 KB is allocated to the OS and another 20 KB is used by program 1, leaving 30 KB free. Under this strategy, the free 30 KB cannot be utilized, because memory has already been given to program 1 and no other program is allowed to use the remainder.

For: BIM 8th Semester Page | 7


Operating System (Unit 5) Prepared By: Sujesh Manandhar

Figure: example of single contiguous memory allocation (a 20 KB job waiting)


Disadvantages:
• Leads to wastage of memory, known as fragmentation.
• Permits only uniprogramming; cannot be used for multiprogramming.
• Leads to wastage of CPU time: when the current job in memory has to wait for some external event (e.g. an input/output operation), the CPU remains idle.

2. Fixed partition memory allocation:

Here, memory is divided into several fixed-size partitions, and each partition may contain exactly one process. When a partition is free, a process is selected from the input queue and loaded into it. This allows several user jobs to reside in memory at once.
The following figure shows fixed-partition memory allocation. Suppose we have 70 KB of memory in total. Of this, 20 KB is consumed by the OS and the remaining 50 KB is divided into four partitions: the first of 10 KB, the second of 10 KB, the third of 20 KB, and the last of 10 KB. If a requesting job is larger than 20 KB, it must wait, because the largest partition is only 20 KB.


Figure: example of fixed size partitioning


Disadvantage: this strategy suffers from both external and internal fragmentation. The following examples show how each occurs.
External fragmentation:

Figure: example of external fragmentation on fixed size partitioning:


In the figure above, memory is divided into fixed partitions and job 2 is currently consuming 10 KB of memory. If another job requiring 40 KB now makes a request, it cannot be allocated because of the lack of contiguous memory, i.e. the available memory in each partition is not enough to hold the 40 KB job. This leads to external fragmentation.


Internal fragmentation:

Figure: Example of internal fragmentation in fixed size partitioning


In the figure above, memory is divided into fixed partitions and job 2 is currently consuming 10 KB of memory. Suppose job 4, which requires 10 KB, is allocated to a partition of size 20 KB. Here 10 KB of memory is wasted, because the process needs only 10 KB but the partition holds 20 KB. This internal wastage of memory is known as internal fragmentation.
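Under the fixed-partition scheme just described, allocation and internal fragmentation can be sketched as follows. The partition sizes and job names here are illustrative, not taken from the figures.

```python
# Illustrative fixed partitions (sizes in KB) after the OS region.
partitions = [10, 10, 20, 10]
occupant = [None] * len(partitions)   # which job sits in each partition

def allocate(job, size):
    """Place a job in the first free partition large enough.
    Returns internal fragmentation in KB, or None if nothing fits."""
    for i, cap in enumerate(partitions):
        if occupant[i] is None and cap >= size:
            occupant[i] = job
            return cap - size          # space wasted inside the partition
    return None                        # job must wait

print(allocate("job A", 10))  # fits a 10 KB partition exactly → 0
print(allocate("job B", 15))  # only the 20 KB partition fits → 5 KB internal fragmentation
print(allocate("job C", 40))  # larger than every partition → None (must wait)
```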
3. Variable size partitioning:
This technique allocates exactly the amount of memory a job requires. The operating system maintains a table indicating which parts of memory are available and which are occupied. Initially all memory is available for user processes and is considered one large block of available memory, known as a hole. When a process requests allocation, exactly the space it needs is carved out, and when a process terminates it releases its memory.
For example, if the total memory is 10 KB and process 1 needs 5 KB, then 5 KB is allocated and the remaining 5 KB can be allocated to another process.
Memory is allocated to processes until the requirement of the next process cannot be satisfied, i.e. until no available block (hole) is large enough to hold the process.
Following figure shows the variable size partitioning:
Following figure shows the variable size partitioning:


Figure: example of variable partitioning:


In the figure above, of the total 70 KB of memory, 20 KB is used by the OS, leaving 50 KB free. Job 1 requires 10 KB, so that much is allocated. Of the remaining 40 KB, 10 KB is used by job 2, 20 KB by job 3, and 10 KB is free. If a 40 KB job now makes a request, it cannot be allocated, because no single free region is large enough to hold 40 KB. This leads to external fragmentation.
Many requests can arrive at one time, and the main concern is how to satisfy a request from the list of free holes. The following three strategies can be used to allocate memory under variable-size partitioning:
I. First fit: allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended, and it can stop as soon as a large-enough free hole is found. When a process requests memory, the search proceeds from the beginning of the memory addresses, and the first hole whose size is greater than or equal to the process's size is allocated. For the next process, the search again starts from the beginning.
II. Best Fit: allocate the smallest hole that is big enough, i.e. the hole that equals or just exceeds the process's size. The entire list of holes must be searched, unless it is ordered by size. This strategy produces the smallest leftover hole.


III. Worst fit: allocate the largest hole. Here too the entire list must be searched, unless it is sorted by size. This strategy produces the largest leftover hole.
An example of first fit, best fit, and worst fit follows:
Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order) using variable partitioning? Which algorithm makes the most efficient use of memory?
Solution:
First Fit:

Here, the 212 KB process comes first and the search is made from the start of memory. The first block whose size is greater than or equal to 212 KB is allocated to it. Since variable partitioning is used, the leftover (unused) space in that block remains usable by another process. The procedure repeats until the remaining blocks cannot satisfy a process's memory requirement.

Best Fit:
Here, the 212 KB process comes first and a search is made for the memory block that equals or just exceeds 212 KB.


In this case the 300 KB block is just slightly larger than 212 KB, so that block is allocated. The remaining placements are shown in the figure below:

Worst Fit:
Here, the 212 KB process comes first and the search selects the largest memory block of all. In this case 600 KB is the largest block, so it is allocated to the 212 KB process.
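The three strategies can be checked against this exercise with a small simulation. This is a sketch: `place_all` and the convention of returning hole indices (positions in the original partition list, or None when a process must wait) are our own.

```python
def first_fit(holes, size):
    # Scan holes from the beginning; take the first one large enough.
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # Take the smallest hole that is still large enough.
    candidates = [i for i, h in enumerate(holes) if h >= size]
    return min(candidates, key=lambda i: holes[i]) if candidates else None

def worst_fit(holes, size):
    # Take the largest hole.
    candidates = [i for i, h in enumerate(holes) if h >= size]
    return max(candidates, key=lambda i: holes[i]) if candidates else None

def place_all(strategy, holes, processes):
    holes = list(holes)          # work on a copy
    placements = []
    for p in processes:
        i = strategy(holes, p)
        if i is None:
            placements.append(None)   # process must wait
        else:
            placements.append(i)
            holes[i] -= p             # variable partitioning: leftover stays usable
    return placements

holes = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]
print(place_all(first_fit, holes, procs))  # → [1, 4, 1, None]
print(place_all(best_fit, holes, procs))   # → [3, 1, 2, 4]  (all placed)
print(place_all(worst_fit, holes, procs))  # → [4, 1, 4, None]
```

Only best fit places all four processes here, so for this input it makes the most efficient use of memory.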


Note: further examples of first fit, best fit, and worst fit using variable partitioning and fixed partitioning were done in class, so refer to your class notes for further solutions.

4. Contemporary Allocation Strategy (Dynamic allocation strategy):

Here partitions are of variable length and number, and each process is allocated exactly as much memory as it requires. Allocation happens at run time.
The following figure shows an example of the dynamic allocation strategy. Suppose memory has a total size of 64 MB, of which 8 MB is consumed by the OS, leaving 56 MB free for user processes.
Suppose process 1, of size 20 MB, requests memory; 20 MB is allocated from the 56 MB, leaving 36 MB (56 − 20). If process 2, of 14 MB, then makes a request, it too is allocated, since the available memory exceeds the process's size (36 > 14). Similarly, process 3, of 18 MB, is allocated, since 22 MB is left and 22 > 18. Now 4 MB is free. If process 4, of size 8 MB, now makes a request, it cannot be allocated, as only 4 MB is free, and it must wait.
Suppose at some point process 2 releases its 14 MB; that freed memory can then be used for another process. Process 4, which was waiting, now gets memory, because its 8 MB fits in the freed 14 MB region.
This effect is shown in the figure below.


Figure: effect of dynamic partitioning

External and Internal fragmentation:

External fragmentation is the problem that exists when there is enough total memory to satisfy a request, but the request cannot be granted for lack of contiguous memory: storage is fragmented into a large number of small holes, so no single block of the required size can be allocated. Both best fit and first fit suffer from external fragmentation.
Internal fragmentation is the problem that exists when a process cannot fully utilize the memory allocated to it, because the process is smaller than the unit of allocation: it is unused memory internal to a partition.
For example, if there is a 10 MB hole and a process requesting 9 MB is given the whole block, then 1 MB inside the allocation is left unused. This phenomenon is internal fragmentation. Fixed-size partitioning suffers from both internal and external fragmentation.
Solutions to external and internal fragmentation:
One solution to external fragmentation is compaction: shuffling the memory contents so as to place all free memory together in one large block. It is possible only if relocation is dynamic and done at execution time. The simplest compaction algorithm moves all processes toward one end of memory, with all holes moving in the other direction, producing one large hole of available memory.
Another solution to external fragmentation is to break physical memory into fixed-size blocks and allocate memory in units of the block size, allowing a process's memory to be noncontiguous (this is the idea behind paging); the cost is some internal fragmentation in the last allocated block.
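The compaction step can be sketched as follows. The addresses and block sizes are illustrative, not from the text.

```python
# Illustrative layout: (start, size) of allocated blocks in a 1000 KB memory.
allocated = [(0, 100), (250, 300), (700, 150)]

def compact(blocks, total):
    """Slide every allocated block toward address 0, producing one large hole."""
    new_layout = []
    next_free = 0
    for start, size in sorted(blocks):
        new_layout.append((next_free, size))  # block relocated to next_free
        next_free += size
    hole = (next_free, total - next_free)     # single hole at the top of memory
    return new_layout, hole

layout, hole = compact(allocated, 1000)
print(layout)  # → [(0, 100), (100, 300), (400, 150)]
print(hole)    # → (550, 450): one contiguous 450 KB hole
```

Note that every moved block's addresses change, which is why compaction requires dynamic, execution-time relocation.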


Memory Manager Strategies:

1. Swapping:
A process must be in memory to be executed. A process can, however, be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Swapping is the mechanism by which a process is moved temporarily from main memory to secondary storage (disk), making that memory available to another process. Later the swapped-out process is brought back from secondary storage, normally into the same physical memory space it occupied before, to continue execution.
Swapping involves moving processes between main memory and a backing store. The backing store is commonly a fast disk, large enough to accommodate copies of all memory images for all users, and it must provide direct access to these images. A variant called roll out, roll in is used for priority-based scheduling: a lower-priority process is swapped out so that a higher-priority process can be loaded and executed.

Figure: swapping process

Working mechanism:
• The system maintains a ready queue consisting of all processes whose memory images are on the backing store.
• Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The dispatcher checks whether the next process in the queue is in memory.
• If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process. To be swapped out, a process must be idle.
The operation of swapping for fixed partition and variable partition is shown
below:
For fixed partition:

Figure: Swapping in fixed partition


Here, memory is divided into fixed blocks and only one process can reside in each block. At first only process A is in memory; then processes B and C are swapped in. As memory becomes full, process A is swapped out when it is idle and another process, D, is swapped in. Similarly, B goes out and A comes back in. A is now at a different location, so its addresses must be relocated, either by software when it is swapped in or by hardware during program execution.
For variable partition:

Figure: swapping for variable partition


Here, each process gets exactly the memory it needs. Memory is not divided into fixed parts; whole free blocks are allocated to user processes.
2. Virtual memory:
Virtual memory is a technique that allows the execution of processes that are not completely in memory. The combined size of the program, data, and stack may exceed the amount of physical memory available. The entire program is rarely all needed at the same time, so under virtual memory the OS keeps the parts of the program that are currently in use in physical memory and the rest on secondary disk.
Virtual memory involves the separation of logical memory, as perceived by users, from physical memory. This separation allows an extremely large virtual memory to be provided to users even when only a smaller physical memory is available. The virtual address space of a process is the logical view of how the process is stored in memory: the process begins at some logical address (say 0) and exists in contiguous memory.
For example, a 512 MB program can run on a 256 MB system by carefully choosing which 256 MB of the program to keep in memory at each instant, with pieces of the program being swapped between disk and memory as needed.

Figure: virtual address space


The large blank space (or hole) between the heap and the stack is part of the virtual address space but requires actual physical pages only if the heap or stack grows.


Implementing Virtual memory:

Virtual memory is commonly implemented by demand paging, in which only the pages that are needed (demanded) during execution are loaded. Pages that are never accessed are never loaded into physical memory. Logical memory is divided into fixed-size parts known as pages, and physical memory is divided into parts of the same size known as frames. A process is likewise divided into multiple pages, e.g. a function on one page, the main program on another, a subroutine on another; pages are referenced by page number.
A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory. When a process needs to execute, rather than swapping the entire process into memory, only the required pages are swapped into physical memory.
Working procedure:
A page table is maintained that records which page is in which frame of main memory. A valid–invalid bit is used for this purpose: when the bit is set to valid, the required page is in main memory; when it is invalid, the page either is not valid (not in the logical address space of the process), or is valid but currently on secondary disk rather than in main memory.
If the CPU references a page that has not been brought into main memory, a page fault occurs. If it references a page that is already in main memory, that is a page hit.
A page fault causes a trap to the operating system. The operating system consults an internal table, stored in the process control block, to determine whether the reference is valid. If it is valid, the operating system finds where the page resides on secondary disk, then looks for a free frame in main memory and allocates the page to that frame.
The page table and the internal table of the process are then modified to indicate that the page is now in main memory, and the instruction that was interrupted by the trap is restarted. The page can now be accessed as though it had always been in memory.


Figure: Steps in handling a page fault


Example:
Suppose the CPU requests page number 1 of process 1. If the page table marks this page as not present, a trap occurs. The operating system finds where the page resides on secondary disk and then looks for a free (idle) frame in main memory. Suppose frame number 3 is free; the page is loaded into frame 3 and the page table is updated. The page can now be accessed by the CPU.
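A minimal sketch of this page-table lookup with demand loading, assuming free frames are always available (page replacement, covered next, handles the full case; `access` and the frame count are illustrative):

```python
FRAMES = 4
page_table = {}                  # page number → frame number (valid entries only)
free_frames = list(range(FRAMES))
page_faults = 0

def access(page):
    """Translate a page reference; load the page on a fault (demand paging)."""
    global page_faults
    if page in page_table:       # valid bit set: page hit
        return page_table[page]
    page_faults += 1             # invalid: trap to the operating system
    frame = free_frames.pop(0)   # assume a free frame exists
    page_table[page] = frame     # update the page table; restart the instruction
    return frame

print(access(1))   # page fault: page 1 loaded into frame 0
print(access(1))   # page hit: already in frame 0
print(access(5))   # page fault: page 5 loaded into frame 1
print(page_faults) # → 2
```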
Page replacement:
If there is a free frame, the page can be placed in it. If no frame is free, a decision must be made about which page to replace (swap out) so that the requested page can be brought into a main-memory frame. A page-replacement algorithm is used to select a victim frame, whose contents are transferred to secondary disk; this frees a frame, into which the requested page can be loaded. The main page-replacement algorithms are:
• First-in first-out (FIFO) page replacement
• Optimal page replacement
• Least recently used (LRU) page replacement


FIFO Page Replacement:

This is the simplest page-replacement algorithm. It associates with each page the time when that page was brought into memory; when a page must be replaced, the oldest page is chosen. When a page is brought into memory it is inserted at the tail of a queue, and when a page must be replaced, the one at the head of the queue is removed.
The following example illustrates FIFO page replacement:

Here we have 3 frames, initially empty. The first three references (7, 0, and 1) cause page faults and are brought into these frames. The next reference, 2, replaces 7, because 7 was brought in first. The next reference is 0; since 0 is already in a frame, it causes no page fault and no replacement. The process continues up to the last reference.
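FIFO can be simulated directly. This sketch assumes the standard textbook reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1, which matches the first references in the walkthrough above:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()          # head = oldest page in memory
    faults = 0
    for p in refs:
        if p in frames:
            continue          # page hit: no change
        faults += 1
        if len(frames) == nframes:
            frames.popleft()  # evict the page that was brought in first
        frames.append(p)      # new page joins at the tail
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # → 15 page faults with 3 frames
```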
Optimal Page Replacement:
This method replaces the page that will not be used for the longest period of time. It guarantees the lowest possible page-fault rate for a fixed number of frames. It is difficult or impossible to implement, because it requires future knowledge of the reference string, but it serves as a standard against which other algorithms are compared.
The following example illustrates the working of optimal page replacement.


Here we have 3 frames, initially empty. The first three references (7, 0, and 1) cause page faults and are brought into these frames. The next reference, 2, replaces 7 because, of 7, 0, and 1, page 7 will not be used for the longest period of time. This process continues up to the last reference in the string.
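Although impractical on real hardware, optimal replacement is easy to simulate, since the whole reference string is known in advance. The sketch below assumes the standard textbook reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1, consistent with the walkthrough above:

```python
def optimal_faults(refs, nframes):
    frames = set()
    faults = 0
    for i, p in enumerate(refs):
        if p in frames:
            continue                      # page hit
        faults += 1
        if len(frames) == nframes:
            # Evict the page whose next use is farthest in the future (or never).
            def next_use(q):
                future = refs[i + 1:]
                return future.index(q) if q in future else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # → 9 page faults, the minimum possible here
```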

LRU Page Replacement:

LRU page replacement associates with each page the time of that page's last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time. It is like optimal page replacement, but looking backward in time rather than forward.
The following figure shows the working mechanism of LRU page replacement:


Here we have 3 frames, initially empty. The first three references (7, 0, and 1) cause page faults and are brought into these frames. The next reference, 2, replaces 7 because, of 7, 0, and 1, page 7 was least recently used (it has gone unused the longest). The following reference, 0, causes no replacement, as 0 is already in a frame. When reference 3 arrives, it replaces 1 because, of 2, 0, and 1, page 1 has been unused the longest. This process continues up to the last reference in the string.
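LRU can be simulated by keeping pages ordered by recency of use. As with the walkthrough above, the sketch assumes the standard textbook reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1:

```python
def lru_faults(refs, nframes):
    frames = []               # least recently used at the front, most recent at the end
    faults = 0
    for p in refs:
        if p in frames:
            frames.remove(p)  # page hit: refresh to most-recent position
            frames.append(p)
            continue
        faults += 1
        if len(frames) == nframes:
            frames.pop(0)     # evict the least recently used page
        frames.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # → 12 page faults with 3 frames
```

On this string LRU's 12 faults fall between optimal (9) and FIFO (15), illustrating why LRU is a popular practical approximation of optimal.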

3. Shared Memory Multiprocessor:

A system in which multiple processors share, and execute from, a common memory is known as a shared-memory multiprocessor. A program consists of a collection of executable subprogram units, referred to as tasks or programming grains, which must be defined, scheduled, and coordinated by the hardware and software before or during program execution. Interprocess communication takes place through the common memory.

Figure: shared bus multiprocessor organization.


The simplest multiprocessor has a single bus connecting at least two processors, shared among all of them. All processors must synchronize on the single bus and on memory access. When a processor wants to access memory, it first checks whether the bus is free, then sends the request to memory and waits for the requested data to become available on the bus.
An issue associated with interprocessor communication is memory coherency, which ensures that the transmitting and receiving elements have the same coherent picture of the contents of memory, at least for data communicated between two tasks.
When each processor has a separate cache, it is possible to have many copies of one operand: one in main memory and one in each cache.


When one copy of an operand is changed, the other copies must be changed as well. Cache coherency ensures that changes to the values of shared operands are propagated throughout the system in a timely fashion.
There are three classes of shared-memory multiprocessor:
• Uniform memory access (UMA), in which all processors share a single centralized primary memory, so each CPU has the same memory-access time.
• Non-uniform memory access (NUMA), in which the logical address space is shared but physical memory is distributed among the processors, so access time to data depends on whether the data resides in local or remote memory.
• Cache-only memory access (COMA), in which data have no specific or permanent location (no fixed memory address): they can be read (copied into a local cache) and modified (first in the cache, then in their current location).
