
OS

Unit-IV
Bare machine:
A bare machine is the raw hardware that executes programs on the processor without any operating system. So far we have studied that a process cannot be executed without an operating system, but on a bare machine it can. Before operating systems were developed, instructions were executed directly on the hardware with no intervening software. The main drawback was that a bare machine accepted instructions only in machine language, so only people with sufficient knowledge of the computer field were able to operate a computer. After the development of the operating system, the bare machine came to be regarded as inefficient.

Resident Monitor:
If we ask how code runs on a bare machine, this is the component that makes it possible: the Resident Monitor is the code that runs on a bare machine.
The resident monitor works like an operating system that controls the instructions and performs all necessary functions. It also works like a job sequencer, because it sequences the jobs and sends them to the processor.
After scheduling the jobs, the resident monitor loads the programs one by one into the main memory according to their sequence. An important property of the resident monitor is that there is no gap between one program's execution and the next, so processing is faster.
The resident monitor is divided into 4 parts:
1. Control Language Interpreter
2. Loader
3. Device Driver
4. Interrupt Processing
These are explained below.
1. Control Language Interpreter: The first part of the resident monitor is the control language interpreter, which is used to read and carry out the instructions of a job, moving from one step to the next.

2. Loader: The second part of the resident monitor, and its main part, is the loader, which loads all the necessary system and application programs into the main memory.

3. Device Driver: The third part of the resident monitor is the device driver, which is used to manage the input-output devices connected to the system. It is basically the interface between the user and the system, working as an interface between request and response: the user makes a request, and the device driver delivers the response that the system produces to fulfill it.

4. Interrupt Processing: The fourth part, as the name suggests, processes all the interrupts that occur in the system.
Fixed Partitioning :
Multi-programming with fixed partitioning is a contiguous memory management technique in which the main memory is divided into fixed-size partitions, which can be of equal or unequal size. Whenever a process must be allocated memory, a free partition big enough to hold the process is found, and the memory is allocated to the process. If no suitable free partition is available, the process waits in a queue to be allocated memory. It is one of the oldest memory management techniques and is easy to implement. Multiprogramming with fixed partitions is also known as static partitioning.

Advantages:-

 Simplicity: It is straightforward to implement because the partitions are static and do not
change.

 Predictability: The operating system can ensure a minimum amount of memory for each
process.
 Security: Processes are isolated in their own partitions, which can prevent them from
interfering with each other’s memory space.

Disadvantages:

 Internal fragmentation: If a process’s memory requirements are smaller than the partition size, the remaining memory within the partition goes unused.

 Limitation on process size: A process cannot be larger than the largest partition, which
imposes a limitation on the size of processes that can be loaded into memory.

 Degree of multiprogramming is fixed: The number of processes that can run concurrently is limited by the number of partitions, and the system may not be able to accommodate as many processes as it could with variable partitioning.
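As a rough sketch of how fixed partitioning behaves (the partition and process sizes below are made up for illustration), the following Python snippet allocates each process to the first free partition large enough to hold it and reports the internal fragmentation:

# Illustrative fixed partitioning: first-fit into static partitions.
partitions = [{"size": s, "pid": None} for s in (100, 200, 300, 400)]  # KB

def allocate(pid, size):
    for part in partitions:
        if part["pid"] is None and part["size"] >= size:
            part["pid"] = pid
            waste = part["size"] - size          # internal fragmentation
            print(f"{pid}: {part['size']} KB partition, {waste} KB wasted")
            return True
    print(f"{pid}: must wait, no free partition is large enough")
    return False

allocate("P1", 150)   # fits the 200 KB partition, wasting 50 KB
allocate("P2", 500)   # larger than the largest partition: can never run

This also shows the size limitation: P2 can never be loaded, no matter how many partitions are free.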
Variable Partitioning :
Multi-programming with variable partitioning is a contiguous memory management technique in which the main memory is not divided into fixed partitions; instead, each process is allocated a chunk of free memory that is big enough for it to fit. The space that is left over is considered free space, which can be used by other processes. It also provides the concept of compaction: in compaction, the scattered free spaces (the spaces not allocated to any process) are combined into a single large block of memory.

Advantages: -

 No internal fragmentation: Partitions are exactly the size needed for the process, so no
space within a partition goes unused.
 No limit on the degree of multiprogramming: More processes can be accommodated in
memory at once, as there is no wasted space.

 No limitation on the size of the process: Unlike fixed partitioning, where a process
cannot be larger than the largest partition, variable partitioning allows any size of process
as long as there is enough free memory.

Disadvantages:

 Difficult implementation: Allocating memory at run-time is more complex than with pre-allocated fixed partitions.

 External fragmentation: Although internal fragmentation is avoided, external fragmentation can occur when there are small blocks of free memory scattered throughout the system that cannot be used effectively.
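A small sketch of why external fragmentation matters in variable partitioning (the hole sizes and request below are hypothetical): the total free memory can exceed the request even though no single hole is big enough, and compaction merges the scattered holes into one usable block:

# Illustrative external fragmentation and compaction.
holes = [50, 120, 80, 60]     # free holes (KB) scattered through memory
request = 250                 # size (KB) of the incoming process

print("Any single hole fits it?", any(h >= request for h in holes))  # False
print("Total free memory:", sum(holes), "KB")                        # 310 KB

holes = [sum(holes)]          # compaction: slide processes together
print("Fits after compaction?", any(h >= request for h in holes))    # True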

Difference between Fixed Partitioning and Variable Partitioning:


S.N | Fixed partitioning | Variable partitioning
1. | The main memory is divided into fixed-size partitions. | The main memory is not divided into fixed-size partitions.
2. | Only one process can be placed in a partition. | The process is allocated a chunk of free memory of the size it needs.
3. | It does not utilize the main memory effectively. | It utilizes the main memory effectively.
4. | There is both internal and external fragmentation. | There is external fragmentation.
5. | The degree of multi-programming is less. | The degree of multi-programming is higher.
6. | It is easier to implement. | It is harder to implement.
7. | There is a limitation on the size of a process. | There is no limitation on the size of a process.
Protection Scheme:-

Memory protection is a crucial component of operating systems which permits them to prevent one process's memory from being used by another. Memory protection is vital in contemporary operating systems since it enables various programs to run in tandem without tampering with each other's storage space.

The primary goal of memory protection is to prevent an application from accessing RAM without permission. Whenever a process attempts to use memory that it does not have permission to access, the operating system stops and terminates the process. This hinders the program from obtaining memory that it should not.

Memory protection is frequently carried out using a hardware memory management unit (MMU). An MMU is a hardware component that maps the virtual addresses used by a program to actual locations in memory. The MMU is in charge of converting virtual addresses to physical addresses and guaranteeing that the program only has access to the memory that it has been granted.

Memory protection usually happens within contemporary operating systems using an approach known as memory virtualization. Virtual memory enables every program to operate in a virtual address space of its own, which the MMU maps to physical memory. This enables several programs to run concurrently, each having a different virtual address space but sharing the same physical storage space.


Different Scheme of Memory Protection

Segmentation

Memory is divided into segments, each of which can have a separate set of access rights. An OS kernel segment, for instance, might be read-only, whereas a user data segment could be designated as read-write.

Example

As an illustration, User A may be running a text-editing program while User B is running a web browser. A distinct set of segments is given to each user's program for its code, data, and stack. The segments of the text-editing program used by User A are entirely separate from those of the web browser used by User B.
The word processing program used by User A can only use or alter data that is located in its designated segments. A segmentation fault, or access violation, will happen if the program tries to access RAM outside of its segments, and the OS terminates the program to stop unauthorized access to other segments.

Paged Virtual Memory

Memory is divided into pages in paged virtual memory, and each page can be stored in its own place in physical memory. In order to keep track of where pages are kept, the OS uses a page table. This gives the operating system the ability to move pages to various parts of physical memory, where they can be secured against unauthorized access.

Example

The OS sets access permissions on every page to safeguard memory. For instance, a game's data pages could be granted read-write permissions so that the game can change its internal state, whereas its code pages might be marked read-only to safeguard against unintentional alteration. Depending on their needs, system processes' pages might be granted various access permissions.

When an application attempts to reach a specific memory location, the memory management unit uses the page table to convert the virtual address to a physical address. The page table identifies the exact position of the information in physical memory by mapping virtual page numbers to physical frame numbers.

Protection keys

Each page of RAM has a set of bits called protection keys. Access to the page can be controlled using these bits. A protection key could be used, for instance, to specify whether a page may be read, written to, or executed.

Example

Suppose that on the same server, User A operates a database application which holds private client information, and User B is running a machine learning algorithm. Memory protection between these two programs is enforced by the OS using protection keys.

The database application can only access memory tagged with the protection key linked to User A's data. Likewise, a separate protection key makes certain that neither the database application nor other system processes have access to the memory locations used by User B's machine learning method. User B's machine learning algorithm works within the confines of the protection key that was given to it. This prevents unauthorized access to User A's information or other system assets and limits User B's access to just its own memory.

Advantages

 Improved Stability − Memory security prevents one program from accessing another
procedure's memory area, which can enhance system stability and prevent the loss of vital
information.
 Increased Security − Memory protection helps to prevent the unauthorized access of
private information, as the OS will interrupt and terminate any application attempting to
access unauthorized RAM, preventing security breaches.
 Better Resource Management − Memory protection allows multiple processes to run concurrently without affecting each other's memory space, improving the overall efficiency of the system's resource management.
 More Efficient Memory Usage − Virtual memory protection schemes can optimize the use of memory while decreasing the amount of RAM the system requires, allowing multiple programs to share the same physical memory.
 Facilitates Multitasking − Memory protection enables multiple processes to run
simultaneously, allowing for multitasking and running multiple programs at the same time.

Disadvantages
 Overhead − Guarding memory requires additional software and hardware resources,
which can lead to higher costs and reduced system efficiency.
 Complexity − Memory protection adds complexity to the operating system, making
development, testing, and maintenance more difficult.
 Memory Fragmentation − Virtual memory can cause memory fragmentation, where physical memory is broken into small, non-contiguous blocks.
 Limitation − Memory protection is not foolproof and can be circumvented in certain
situations. For example, a malicious user might exploit vulnerabilities in the OS to gain
access to another process's memory area.
 Compatibility Issues − Some older software programs may be incompatible with memory
protection features, limiting the operating system's ability to protect memory from
unauthorized access

Paging:-
In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into the main memory in the form of pages.

The main idea behind paging is to divide each process into pages. The main memory is likewise divided into frames.
One page of the process is stored in one of the frames of memory. The pages can be stored at different locations in memory, but the priority is always to find contiguous frames or holes.

Pages of the process are brought into the main memory only when they are required; otherwise they reside in secondary storage.

Different operating systems define different frame sizes, but all frames must be of equal size. Since pages are mapped one-to-one onto frames in paging, the page size must be the same as the frame size.

Example

Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will therefore be divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each, so that one page can be stored in one frame.
Initially all the frames are empty, so the pages of the processes are stored contiguously.

Frames, pages and the mapping between the two are shown in the image below.

Let us consider that P2 and P4 are moved to the waiting state after some time. Now 8 frames become empty, and other pages can be loaded in that empty space. The process P5, of size 8 KB (8 pages), is waiting in the ready queue.

Given that we have 8 non-contiguous frames available in memory, and that paging provides the flexibility of storing a process at different places, we can load the pages of process P5 in the place of P2 and P4.
Memory Management Unit

The purpose of the Memory Management Unit (MMU) is to convert the logical address into the physical address. The logical address is the address generated by the CPU for every page, while the physical address is the actual address of the frame where that page is stored.

When the CPU accesses a page using its logical address, the operating system needs to obtain the corresponding physical address to access that page physically.

The logical address has two parts.

1. Page Number
2. Offset

Memory management unit of OS needs to convert the page number to the frame number.
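A minimal sketch of this translation in Python, assuming a 1 KB page size and a made-up page table (the page-to-frame mapping below is arbitrary):

PAGE_SIZE = 1024                  # 1 KB pages, so the offset is 10 bits
page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page -> frame mapping

def translate(logical_address):
    # Split the logical address into page number and offset,
    # then swap the page number for its frame number.
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]      # in hardware, the MMU performs this lookup
    return frame * PAGE_SIZE + offset

print(translate(1030))            # page 1, offset 6 -> frame 2 -> 2054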

Key features of paging in an operating system:


1. Fixed Size Pages: Memory is divided into fixed-size blocks called pages (in logical
memory) and page frames (in physical memory). The size of a page is typically a power of
2, ranging from 512 bytes to several megabytes.
2. Logical and Physical Address Mapping: Paging translates logical addresses (used by a
program) into physical addresses (used by the hardware). This is done through a page table.
3. Page Table: Each process has a page table that maps logical page numbers to physical
frame numbers. The page table is used to look up the physical frame number corresponding
to a logical page number.
4. Protection and Sharing: Paging provides protection by keeping each process's address
space separate. However, it also allows sharing of pages (e.g., code segments) among
processes by mapping multiple logical pages to the same physical frame.
5. Efficient Memory Use: Since physical memory is allocated in fixed-size blocks, external fragmentation (wasted space between allocated regions) is eliminated, and internal fragmentation (wasted space within allocated regions) is limited to the unused part of each process's last page.
6. Swapping: Pages can be swapped in and out of physical memory to disk (secondary
storage) to free up space. This is a fundamental feature of virtual memory systems that use
paging.
7. Demand Paging: Pages are loaded into memory only when they are needed, which is
referred to as demand paging. This reduces the amount of memory used and allows for
more processes to be loaded into memory.
8. Page Replacement Algorithms: When a page needs to be loaded into memory, but there
is no free space, a page replacement algorithm decides which page to remove. Common
algorithms include Least Recently Used (LRU), First-In-First-Out (FIFO), and Optimal
Page Replacement.
9. TLB (Translation Lookaside Buffer): A hardware cache that stores recent translations of
virtual addresses to physical addresses to speed up memory access. If the TLB doesn't have
the required translation (a TLB miss), the page table must be consulted.
10. Segmentation with Paging: Some systems combine segmentation and paging to provide
benefits of both. In this scheme, memory is divided into segments, each of which is further
divided into pages.
11. Paging Levels: Modern systems use multi-level paging to manage large address spaces.
This reduces the size of each page table and can optimize memory usage.
12. Page Fault Handling: When a page that is not in physical memory is accessed, a page
fault occurs. The operating system then loads the required page from secondary storage
into physical memory.
13. Security and Isolation: Paging helps enforce memory protection and process isolation, as
each process operates within its own set of pages, preventing unauthorized access to other
processes' memory.
14. Hardware Support: Paging requires hardware support, typically through a Memory
Management Unit (MMU) that handles the translation of virtual to physical addresses and
checks access rights.
Segmentation:-
Segmentation is a memory management technique in which the memory is divided into the
variable size parts. Each part is known as a segment which can be allocated to a process.

The details about each segment are stored in a table called a segment table, which is itself stored in one (or more) of the segments.

The segment table mainly contains two pieces of information about each segment:

1. Base: the base address of the segment.
2. Limit: the length of the segment.

Why Segmentation is required?

Till now, we have been using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all processes into pages regardless of the fact that a process may have related parts or functions which need to be loaded on the same page.

The operating system doesn't care about the user's view of the process. It may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.

It is better to have segmentation, which divides the process into segments. Each segment contains the same type of content: for example, the main function can be included in one segment and the library functions in another.
Translation of Logical address into physical address by segment table

CPU generates a logical address which contains two parts:

1. Segment Number
2. Offset

For Example:

Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset. The maximum segment size is then 4096 bytes, and the maximum number of segments that can be referred to is 16.

When a program is loaded into memory, the segmentation system tries to locate space that is large enough to hold the first segment of the process; information about free space is obtained from the free list maintained by the memory manager. Then it tries to locate space for the other segments. Once adequate space is located for all the segments, it loads them into their respective areas.

The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system can easily
translate a logical address into physical address on execution of a program.

The segment number is used as an index into the segment table. The limit of the respective segment is compared with the offset: if the offset is less than the limit, the address is valid; otherwise an invalid-address error is thrown.

In the case of valid addresses, the base address of the segment is added to the offset to get the
physical address of the actual word in the main memory.

The above figure shows how address translation is done in case of segmentation.
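A small sketch of this check in Python, matching the 16-bit example above; the base and limit values in the segment table are made up:

# Hypothetical segment table: segment number -> (base, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:               # invalid address: trap to the OS
        raise MemoryError("segmentation fault: offset exceeds limit")
    return base + offset              # valid address: base + offset

print(translate(2, 53))               # 4300 + 53 = 4353
try:
    translate(1, 500)                 # offset 500 >= limit 400
except MemoryError as err:
    print(err)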

Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.

Disadvantages

1. It can have external fragmentation.


2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.

SN | Paging | Segmentation
1 | Non-contiguous memory allocation. | Non-contiguous memory allocation.
2 | Paging divides the program into fixed-size pages. | Segmentation divides the program into variable-size segments.
3 | The OS is responsible for dividing the program. | The compiler is responsible.
4 | Paging is faster than segmentation. | Segmentation is slower than paging.
5 | Paging is closer to the operating system. | Segmentation is closer to the user.
6 | It suffers from internal fragmentation. | It suffers from external fragmentation.
7 | There is no external fragmentation. | There is no internal fragmentation.
8 | The logical address is divided into page number and page offset. | The logical address is divided into segment number and segment offset.
9 | The page table is used to maintain the page information. | The segment table maintains the segment information.
10 | A page table entry has the frame number and some flag bits representing details about the page. | A segment table entry has the base address of the segment and some protection bits for the segment.
Virtual Memory in OS:-

Virtual memory is a storage scheme that provides the user with the illusion of having a very big main memory. This is done by treating a part of secondary memory as if it were main memory.

In this scheme, the user can load processes bigger than the available main memory, under the illusion that enough memory is available to load the process.

Instead of loading one big process into the main memory, the operating system loads different parts of more than one process into the main memory.

By doing this, the degree of multiprogramming is increased, and therefore the CPU utilization also increases.

How Virtual Memory Works?

Virtual memory has become quite common in modern systems. In this scheme, whenever some pages need to be loaded into the main memory for execution and the memory is not available for that many pages, then instead of stopping the pages from entering the main memory, the OS searches for the areas of RAM that have been least recently used or not referenced, and copies them into secondary memory to make space for the new pages in the main memory.

In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a
segmentation system. Demand segmentation can also be used to provide virtual memory.

Since all of this happens automatically, it makes the computer feel as if it has unlimited RAM.

Advantages of Virtual Memory

1. The degree of multiprogramming is increased.
2. The user can run large applications with less physical RAM.
3. There is no need to buy more RAM.

Disadvantages of Virtual Memory

1. The system becomes slower, since swapping takes time.
2. Switching between applications takes more time.
3. The user has less hard disk space available for other use.

Demand Paging:-
Demand paging suggests keeping all pages of a process in secondary memory until they are required. In other words: do not load any page into the main memory until it is required.

Whenever a page is referenced for the first time, it is brought in from secondary memory.

After that, it may or may not be present in the main memory depending upon the page replacement
algorithm.

A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program’s pages out to the disk or any of the new program’s pages into the main memory. Instead, it just begins executing the new program after loading the first page and fetches that program’s pages as they are referenced.
While executing a program, if the program references a page which is not available in the main memory because it was swapped out a little while ago, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system to demand the page back into memory.

Advantage:-

1. Large virtual memory.
2. More efficient use of memory.
3. There is no limit on the degree of multiprogramming.

Disadvantage:-

1. Increased Latency: Accessing a page not in memory results in a page fault, causing delays
as the system fetches the page from secondary storage.
2. Page Fault Overhead: Frequent page faults can lead to significant overhead, as each fault
requires handling by the operating system, increasing processing time.
3. Thrashing: If the system does not have enough physical memory to handle the working
set of active processes, it can lead to thrashing, where the system spends more time
swapping pages in and out of memory than executing processes.

Performance of demand paging:-


The performance of demand paging can be influenced by several factors. While demand paging
has the advantage of efficient memory usage and the ability to run larger programs on systems
with limited physical memory, its performance can vary based on the following considerations:

1. Page Fault Rate:


o The frequency of page faults is a critical determinant of demand paging
performance. A high page fault rate can significantly degrade system performance
as the overhead of handling each fault can be substantial.
2. Page Replacement Algorithm:
o The efficiency of the page replacement algorithm (e.g., Least Recently Used
(LRU), First-In-First-Out (FIFO), or Optimal Page Replacement) affects
performance. An effective algorithm can reduce the number of page faults by
predicting and preloading necessary pages.
3. Locality of Reference:
o Programs that exhibit strong locality of reference, where frequently accessed data
is clustered together, will perform better under demand paging. This reduces the
frequency of page faults and enhances performance.
4. Disk I/O Speed:
o The speed of secondary storage (e.g., hard drives or SSDs) impacts the time taken
to fetch pages from disk into memory. Faster disks can mitigate some of the
performance penalties associated with page faults.
5. Available Physical Memory:
o Sufficient physical memory reduces the likelihood of frequent page faults. Systems
with more physical memory can keep a larger working set of pages in memory,
enhancing performance.
6. Process Behavior:
o The behavior of running processes, including their memory access patterns, affects
demand paging performance. Processes that frequently access a large number of
pages can lead to thrashing, significantly degrading performance.
7. System Load:
o The overall system load and the number of concurrently running processes
influence performance. High system load can increase competition for memory and
CPU resources, impacting the efficiency of demand paging.
8. Prefetching Techniques:
o Effective prefetching strategies, where the system anticipates and loads pages
before they are needed, can improve performance by reducing the number of page
faults.
9. Virtual Memory Size:
o The size of the virtual memory space and the proportion of it that is actively used
by processes affect demand paging performance. Larger virtual memory can lead
to more frequent page faults if not managed well.
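These factors are often summarized in a single number, the effective access time: if p is the page fault rate, ma the memory access time, and pft the page fault service time, then

effective access time = (1 - p) * ma + p * pft

A small sketch in Python (the 200 ns access time and 8 ms fault service time are illustrative assumptions, not measurements):

ma = 200e-9                         # assumed memory access time: 200 ns
pft = 8e-3                          # assumed page fault service time: 8 ms
for p in (0.0, 1e-6, 1e-3):         # page fault rates to compare
    eat = (1 - p) * ma + p * pft    # effective access time
    print(f"p = {p}: effective access time = {eat * 1e9:.0f} ns")

Even a fault rate of one in a thousand accesses raises the effective access time from 200 ns to roughly 8200 ns, which is why keeping the page fault rate low matters so much.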

Page Replacement Algorithms:-


There are three types of Page Replacement Algorithms. They are:

o Optimal Page Replacement Algorithm


o First In First Out Page Replacement Algorithm
o Least Recently Used (LRU) Page Replacement Algorithm

First in First out Page Replacement Algorithm

This is the first basic page replacement algorithm. A fixed number of frames is available, and pages are loaded into those frames as they are referenced; this loading on demand is what demand paging provides. Once all the frames are filled, the actual problem starts.

After the frames fill up, the next page in the waiting queue tries to enter a frame. If the page is already present in one of the allocated frames, there is no problem, because the page being searched for is already in memory.

If the page to be searched is found among the frames, it is known as a Page Hit.

If the page to be searched is not found among the frames, it is known as a Page Fault.

When a page fault occurs and no frame is free, the First In First Out page replacement algorithm comes into the picture.

The First In First Out (FIFO) page replacement algorithm removes the page that was allotted a frame longest ago. That is, the page which has been in a frame for the longest time is removed, and the new page from the ready queue is allowed to occupy the freed frame.

Let us understand this First In First Out Page Replacement Algorithm working with the help of an
example.

Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a memory with three frames and calculate the number of page faults by using the FIFO (First In First Out) page replacement algorithm.

Points to Remember

Page Not Found - - - > Page Fault

Page Found - - - > Page Hit

Reference String:
Number of Page Hits = 8

Number of Page Faults = 12

The Ratio of Page Hit to the Page Fault = 8 : 12 - - - > 2 : 3 - - - > 0.66

The Page Hit Percentage = 8 *100 / 20 = 40%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%

Explanation

First, fill the frames with the initial pages. Then, after the frames are filled, we need to create space in the frames for a new page to occupy. With the First In First Out page replacement algorithm, we remove the page that is oldest among the pages in the frames. By removing the oldest page, we give the new page access to the empty space that was created.
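The counts above can be reproduced with a short simulation. A minimal sketch in Python, using a deque as the FIFO queue of resident pages:

from collections import deque

def fifo_faults(reference_string, num_frames):
    frames = deque()                 # oldest resident page at the left
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1                # page hit: already resident
        else:
            faults += 1              # page fault: load the page
            if len(frames) == num_frames:
                frames.popleft()     # evict the page loaded longest ago
            frames.append(page)
    return hits, faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(fifo_faults(ref, 3))           # (8, 12): 8 hits, 12 faults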

OPTIMAL Page Replacement Algorithm

This is the second basic page replacement algorithm. The setup is the same as for FIFO: a fixed number of frames is filled on demand, a page found among the frames is a Page Hit, and a page not found is a Page Fault. When a page fault occurs and no frame is free, the OPTIMAL page replacement algorithm comes into the picture.

The OPTIMAL page replacement algorithm works on a certain principle. The principle is:

Replace the page which will not be used for the longest period of time in the future.

This principle means that after all the frames are filled, look at the future references of the pages currently in the frames, and choose as the victim the page whose next use is farthest away.

Example:

Suppose the reference string is:

0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0

and 6, 1, 2 are currently occupying the frames.

Now we need to bring 0 into a frame by removing one page, so let us check which resident page is used last in the future.

From the subsequence 0, 3, 4, 6, 0, 2, 1, we can see that among the resident pages, 1 occurs last. So 0 can be placed in the frame by removing 1.

Let us understand this OPTIMAL Page Replacement Algorithm working with the help of an
example.

Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0 for a memory with three frames and calculate the number of page faults by using the OPTIMAL page replacement algorithm.

Points to Remember

Page Not Found - - - > Page Fault

Page Found - - - > Page Hit

Reference String:
Number of Page Hits = 9

Number of Page Faults = 11

The Ratio of Page Hits to Page Faults = 9 : 11 - - - > 0.82

The Page Hit Percentage = 9 * 100 / 20 = 45%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 45 = 55%

Explanation

First, fill the frames with the initial pages. Then, after the frames are filled, we need to create space in the frames for a new page to occupy.

While there are still empty frames, we simply fill them with the incoming pages. The problem occurs when there is no space left; then, as stated above, we replace the page which will not be used for the longest period of time in the future.

A question arises: what if a page that is in a frame never appears again in the reference string?

Suppose the reference string is:

0, 2, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0

and 6, 1, 5 are occupying the frames.

Here, we can see that page number 5 is not present in the reference string, but it is present in a frame. Since page 5 is never referenced again, we remove it when a frame is required, and another page can occupy that position.
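A sketch of the OPTIMAL algorithm in Python; pages never used again are treated as farthest away, matching the rule just described:

def optimal_faults(reference_string, num_frames):
    frames = []
    hits = faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest in the
        # future; a page never used again is the perfect victim.
        future = reference_string[i + 1:]
        victim = max(frames, key=lambda p: future.index(p)
                     if p in future else len(future))
        frames[frames.index(victim)] = page
    return hits, faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0]
print(optimal_faults(ref, 3))        # (9, 11): 9 hits, 11 faults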

Least Recently Used (LRU) Replacement Algorithm

This is the last basic page replacement algorithm. The setup is again the same as for FIFO: a fixed number of frames is filled on demand, a page found among the frames is a Page Hit, and a page not found is a Page Fault. When a page fault occurs and no frame is free, the Least Recently Used (LRU) page replacement algorithm comes into the picture.

The Least Recently Used (LRU) page replacement algorithm works on a certain principle. The principle is:

Replace the page whose most recent use lies farthest back in the past, i.e., the least recently used page.

Example:

Suppose the Reference String is:

6, 1, 1, 2, 0, 3, 4, 6, 0

The pages with page numbers 6, 1, 2 are occupying the frames.

Now, we need to allot a space for the page numbered 0.

Now, we need to look back into the past to check which page can be replaced.

Among the pages in the frames, 6 is the one used least recently.

So, replace 6 with the page numbered 0.

Let us understand this Least Recently Used (LRU) Page Replacement Algorithm working with the
help of an example.

Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a memory with three frames and calculate the number of page faults by using the Least Recently Used (LRU) page replacement algorithm.

Points to Remember

Page Not Found - - - > Page Fault


Page Found - - - > Page Hit

Reference String:

Number of Page Hits = 7

Number of Page Faults = 13

The Ratio of Page Hits to Page Faults = 7 : 13 - - - > 0.54

The Page Hit Percentage = 7 * 100 / 20 = 35%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%

Explanation

First, fill the frames with the initial pages. Then, after the frames are filled, we need to create space in the frames for a new page to occupy.

While there are still empty frames, we simply fill them with the incoming pages. The problem occurs when there is no space left; then we replace the page which has not been used for the longest period of time in the past, i.e., the page whose last use is farthest back.
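A sketch of LRU in Python; the list is kept ordered from least to most recently used:

def lru_faults(reference_string, num_frames):
    frames = []                      # least recently used page at index 0
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            frames.remove(page)      # re-insert at the most-recent end
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)
    return hits, faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(lru_faults(ref, 3))            # (7, 13): 7 hits, 13 faults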

Thrashing:-
Thrashing is when page faults and swapping happen very frequently, at such a high rate that the operating system has to spend more of its time swapping pages than doing useful work. This state in the operating system is known as thrashing. Because of thrashing, CPU utilization becomes low or negligible.
The basic concept involved is that if a process is allocated too few frames, then there will be too
many and too frequent page faults. As a result, no valuable work would be done by the CPU, and
the CPU utilization would fall drastically.

The long-term scheduler would then try to improve the CPU utilization by loading some more processes into the memory, thereby increasing the degree of multiprogramming. Unfortunately, this results in a further decrease in CPU utilization, triggering a chain reaction of higher page faults followed by an increase in the degree of multiprogramming; this is thrashing.

Algorithms during Thrashing

Whenever thrashing starts, the operating system tries to apply either the Global page replacement
Algorithm or the Local page replacement algorithm.

1. Global Page Replacement

Since global page replacement can bring any page, it tries to bring more pages whenever thrashing
is found. But what actually will happen is that no process gets enough frames, and as a result, the
thrashing will increase more and more. Therefore, the global page replacement algorithm is not
suitable when thrashing happens.

2. Local Page Replacement

Unlike the global page replacement algorithm, local page replacement will select pages which only
belong to that process. So there is a chance to reduce the thrashing. But it is proven that there are
many disadvantages if we use local page replacement. Therefore, local page replacement is just an
alternative to global page replacement in a thrashing scenario.

Causes of Thrashing

Programs or workloads may cause thrashing, and it results in severe performance problems, such
as:
o If CPU utilization is too low, the degree of multiprogramming is increased by introducing new processes, and a global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming.
o CPU utilization is plotted against the degree of multiprogramming.
o As the degree of multiprogramming increases, CPU utilization also increases.
o If the degree of multiprogramming is increased further, thrashing sets in, and CPU
utilization drops sharply.
o So, at this point, to increase CPU utilization and to stop thrashing, we must decrease the
degree of multiprogramming.


How to Eliminate Thrashing

Thrashing has negative impacts on hard drive health and system performance, so it is necessary to take action to avoid it. The following methods can help resolve thrashing:

o Adjust the swap file size: If the system swap file is not configured correctly, disk thrashing can also occur.
o Increase the amount of RAM: As insufficient memory can cause disk thrashing, one solution is to add more RAM to the machine. With more memory, the computer can handle tasks more easily and does not have to work excessively. Generally, this is the best long-term solution.
o Decrease the number of applications running on the computer: If too many applications are running in the background, they will consume a large share of system resources, and the shortage that remains can result in thrashing. Closing some applications releases their resources, so you can avoid thrashing to some extent.
o Replace programs: Replace programs that occupy a lot of memory with equivalents that use less memory.

Techniques to Prevent Thrashing

The Local Page replacement is better than the Global Page replacement, but local page replacement
has many disadvantages, so it is sometimes not helpful. Therefore below are some other techniques
that are used to handle thrashing:

1. Locality Model
A locality is a set of pages that are actively used together. The locality model states that as a process
executes, it moves from one locality to another. Thus, a program is generally composed of several
different localities which may overlap.

For example, when a function is called, it defines a new locality where memory references are
made to the function call instructions, local and global variables, etc. Similarly, when the function
is exited, the process leaves this locality.


2. Working-Set Model

This model is based on the above-stated concept of the Locality Model.

The basic principle states that if we allocate enough frames to a process to accommodate its current
locality, it will only fault whenever it moves to some new locality. But if the allocated frames are
lesser than the size of the current locality, the process is bound to thrash.

According to this model, based on parameter A, the working set is defined as the set of pages in
the most recent 'A' page references. Hence, all the actively used pages would always end up being
a part of the working set.

The accuracy of the working set depends on the value of the parameter A. If A is too large, the working set may span several localities. On the other hand, for smaller values of A, the current locality might not be covered entirely.


If D is the total demand for frames and WSSi is the working set size for process i, then

D = Σ WSSi

Now, if 'm' is the number of frames available in the memory, there are two possibilities:

o D>m, i.e., total demand exceeds the number of frames, then thrashing will occur as some
processes would not get enough frames.
o D<=m, then there would be no thrashing.

If there are enough extra frames, then some more processes can be loaded into the memory. On
the other hand, if the summation of working set sizes exceeds the frames' availability, some of the
processes have to be suspended (swapped out of memory).

This technique prevents thrashing along with ensuring the highest degree of multiprogramming
possible. Thus, it optimizes CPU utilization.
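A minimal sketch of the working-set computation in Python; the reference string and the window parameter A are made up for illustration:

def working_set(reference_string, t, A):
    # Pages referenced in the most recent A references up to time t.
    window = reference_string[max(0, t - A + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 4, 4, 4, 3, 3, 2]    # hypothetical reference string
ws = working_set(refs, t=9, A=5)         # look at the last 5 references
print(ws, "WSS =", len(ws))              # {2, 3, 4} WSS = 3

Summing WSS over all processes gives D; if D exceeds the number of available frames m, a process is suspended, exactly as described above.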

3. Page Fault Frequency


A more direct approach to handle thrashing is the one that uses the Page-Fault Frequency concept.

The problem associated with thrashing is the high page fault rate, and thus, the concept here is to
control the page fault rate.

If the page fault rate is too high, it indicates that the process has too few frames allocated to it. On
the contrary, a low page fault rate indicates that the process has too many frames.

Upper and lower limits can be established on the desired page fault rate, as shown in the diagram.

If the page fault rate falls below the lower limit, frames can be removed from the process. Similarly, if the page fault rate exceeds the upper limit, more frames can be allocated to the process.

Cache memory Organization:-

Cache memory is a small, high-speed storage area in a computer. The cache is a smaller and
faster memory that stores copies of the data from frequently used main memory locations. There
are various independent caches in a CPU, which store instructions and data. The most important use of cache memory is to reduce the average time to access data from the main memory.
By storing this information closer to the CPU, cache memory helps speed up the overall
processing time. Cache memory is much faster than the main memory (RAM). When the CPU
needs data, it first checks the cache. If the data is there, the CPU can access it quickly. If not, it
must fetch the data from the slower main memory.
Characteristics of Cache Memory
 Cache memory is an extremely fast memory type that acts as a buffer between RAM and
the CPU.
 Cache Memory holds frequently requested data and instructions so that they are
immediately available to the CPU when needed.
 Cache memory is costlier than main memory or disk memory but more economical than
CPU registers.
 Cache Memory is used to speed up and synchronize with a high-speed CPU.


Levels of Memory
 Level 1 or Register: Registers hold the data and instructions that the CPU is working with at that instant. Commonly used registers are the accumulator, program counter, address register, etc.
 Level 2 or Cache memory: It is a very fast memory with a short access time, where data is temporarily stored for faster access.
 Level 3 or Main Memory: It is the memory on which the computer currently works. It is small in size compared to secondary memory, and once the power is off, data no longer stays in this memory.
 Level 4 or Secondary Memory: It is external memory that is not as fast as the main
memory but data stays permanently in this memory.
Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache.
 If the processor finds that the memory location is in the cache, a Cache Hit has occurred
and data is read from the cache.
 If the processor does not find the memory location in the cache, a cache miss has occurred.
For a cache miss, the cache allocates a new entry and copies in data from the main memory,
then the request is fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.
Hit Ratio(H) = hit / (hit + miss) = no. of hits/total accesses
Miss Ratio = miss / (hit + miss) = no. of miss/total accesses = 1 - hit ratio(H)
We can improve cache performance by using a larger cache block size, higher associativity, reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache.
Cache Mapping
There are three different types of mapping used for the purpose of cache memory which is as
follows:
 Direct Mapping
 Associative Mapping
 Set-Associative Mapping
1. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line. In direct mapping, each memory block is assigned to a specific line in the cache. If a line is already occupied by a memory block when a new block needs to be loaded, the old block is trashed. The address is split into two parts, an index field and a tag field; the tag is stored in the cache along with the data, while the index selects the cache line. Direct mapping's performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache

Direct Mapping

For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory. In most contemporary machines, the address is at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s-r bits (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache. The line field serves as the index bits in direct mapping.

Direct Mapping – Structure
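A small sketch of this address split in Python, with made-up field widths (w = 2 word bits and r = 3 line bits, so m = 2^3 = 8 cache lines):

W, R = 2, 3                        # assumed: 4-byte blocks, 8 cache lines

def split_address(addr):
    word = addr & ((1 << W) - 1)           # low w bits: word within block
    line = (addr >> W) & ((1 << R) - 1)    # next r bits: cache line i
    tag = addr >> (W + R)                  # remaining s - r bits: tag
    return tag, line, word

addr = 0b1011001101                # block number j = addr >> W = 179
print(split_address(addr))         # (22, 3, 1)
print((addr >> W) % (1 << R))      # 3, confirming i = j modulo m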

2. Associative Mapping
In this type of mapping, associative memory is used to store the content and addresses of the
memory word. Any block can go into any line of the cache. This means that the word id bits are
used to identify which word in the block is needed, but the tag becomes all of the remaining bits.
This enables the placement of any word at any place in the cache memory. It is considered to be
the fastest and most flexible mapping form. In associative mapping, the index bits are zero.

Associative Mapping – Structure

3. Set-Associative Mapping
This form of mapping is an enhanced form of direct mapping where the drawbacks of direct
mapping are removed. Set associative addresses the problem of possible thrashing in the direct
mapping method. It does this by saying that instead of having exactly one line that a block can
map to in the cache, we will group a few lines together creating a set. Then a block in memory
can map to any one of the lines of a specific set. Set-associative mapping allows two or more main memory blocks that share the same index to be present in the cache at the same time. Set-associative cache mapping combines the best of the direct and associative cache mapping techniques. In set-associative mapping, the index bits are given by the set offset bits. In
this case, the cache consists of a number of sets, each of which consists of a number of lines.

Set-Associative Mapping
Relationships in the Set-Associative Mapping can be defined as:

m = v * k
i = j mod v

where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set

Set-Associative Mapping – Structure
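A sketch of these relationships in Python, with made-up parameters (m = 8 lines, k = 2 lines per set, so v = 4 sets):

m, k = 8, 2                  # assumed: 8 cache lines, 2-way set associative
v = m // k                   # number of sets, from m = v * k

for j in (5, 9, 13):         # hypothetical main memory block numbers
    i = j % v                # i = j mod v: the set this block maps to
    print(f"block {j} -> set {i} (any of its {k} lines)")

Blocks 5, 9 and 13 all map to set 1, but unlike direct mapping they need not evict one another until both lines of the set are in use.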

Application of Cache Memory
Here are some of the applications of Cache Memory.
 Primary Cache: A primary cache is always located on the processor chip. This cache is
small and its access time is comparable to that of processor registers.
 Secondary Cache: Secondary cache is placed between the primary cache and the rest of
the memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also
housed on the processor chip.
 Spatial Locality of Reference: Spatial locality says that an element close to a recently referenced location is likely to be accessed soon, so on a miss the cache brings in the neighborhood of the referenced word, not just the word itself.
 Temporal Locality of Reference: Temporal locality says that a recently used item is likely to be used again soon, which is why replacement policies such as Least Recently Used work well. When a miss occurs on a word, the complete block containing it is loaded rather than the word alone, because spatial locality suggests that the neighboring words will be referenced next.
Advantages
 Cache Memory is faster in comparison to main memory and secondary memory.
 Programs stored by Cache Memory can be executed in less time.
 The data access time of Cache Memory is less than that of the main memory.
 Cache Memory stores data and instructions that are regularly used by the CPU, and therefore it increases the performance of the CPU.
Disadvantages
 Cache Memory is costlier than primary memory and secondary memory.
 Data is stored on a temporary basis in Cache Memory.
 Whenever the system is turned off, data and instructions stored in cache memory get
destroyed.
 The high cost of cache memory increases the price of the Computer System.

Locality of reference:-

Locality of reference refers to the tendency of a computer program to access the same set of memory locations over a particular period of time. The property of locality of reference is mainly exhibited by loops and subroutine calls in a program.

On an abstract level there are two types of localities which are as follows −

 Temporal locality
 Spatial locality

Temporal locality
This type of optimization includes bringing in the frequently accessed memory references to a
nearby memory location for a short duration of time so that the future accesses are much faster.

For example, if a variable in a program is accessed very frequently, we bring that variable into a register, the level nearest in the memory hierarchy, for faster access.

Spatial locality

This type of optimization assumes that if a memory location has been accessed it is highly likely
that a nearby/consecutive memory location will be accessed as well and hence we bring in the
nearby memory references too in a nearby memory location for faster access.

For example, traversal of a one-dimensional array in any instruction set will benefit from this
optimization.
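As a sketch of the access-pattern difference, consider row-major versus column-major traversal of a 2-D array. In Python the cache effect is muted (lists store references), so treat this purely as an illustration of the pattern; in C or Fortran arrays the difference is pronounced:

N = 1024
matrix = [[1] * N for _ in range(N)]

def row_major():        # walks elements in memory order: good spatial locality
    total = 0
    for i in range(N):
        for j in range(N):
            total += matrix[i][j]
    return total

def column_major():     # jumps between rows each step: poor spatial locality
    total = 0
    for j in range(N):
        for i in range(N):
            total += matrix[i][j]
    return total

print(row_major() == column_major())   # same sum, different access pattern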

These optimizations can greatly improve the efficiency of programs, and they can be implemented at the hardware level or at the software level.
