
UNIT-4

MEMORY MANAGEMENT

What is Main Memory:

Main memory is central to the operation of a modern computer. It is a large array of words or bytes, ranging in size from hundreds of thousands to billions of units. Main memory is a repository of rapidly available information shared by the CPU and I/O devices: it is where programs and data are kept while the processor is actively using them. Because main memory is closely coupled to the processor, moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). It is volatile memory: RAM loses its data when a power interruption occurs.

Figure 1: Memory hierarchy

What is Memory Management :

In a multiprogramming computer, the operating system resides in a part of memory and the rest is used
by multiple processes. The task of subdividing the memory among different processes is called memory
management. Memory management is a method in the operating system to manage operations between
main memory and disk during process execution. The main aim of memory management is to achieve
efficient utilization of memory.
Why Memory Management is required:

 To allocate and de-allocate memory before and after process execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
 To maintain data integrity while a process executes.
We now discuss the concepts of logical address space and physical address space:

Logical and Physical Address Space:

Logical Address space: An address generated by the CPU is known as a "Logical Address". It is also known as a virtual address. The logical address space can be defined as the size of the process. A logical address can be changed.
Physical Address space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is commonly known as a "Physical Address". A physical address is also known as a real address. The set of all physical addresses corresponding to the logical addresses is known as the physical address space. The run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU), which computes the physical address. The physical address always remains constant.
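As a concrete illustration of this mapping, the simplest MMU scheme adds the value of a relocation (base) register to every logical address and traps addresses outside the process's bounds. The base and limit values below are assumed for illustration only.

```python
# Minimal sketch of relocation-register translation (illustrative only):
# physical address = relocation base + logical address.
RELOCATION_BASE = 14000   # assumed base address of the process in memory
LIMIT = 3000              # assumed size of the process's logical address space

def translate(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryError("trap: logical address outside process bounds")
    return RELOCATION_BASE + logical_address

print(translate(346))  # -> 14346
```

The program only ever sees logical addresses in [0, LIMIT); the physical address it actually touches depends on where the OS placed it.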

Static and Dynamic Loading:

Loading a process into main memory is done by a loader. There are two different types of loading:
 Static loading: The entire program (and all of its data) is loaded into physical memory at a fixed address before execution begins, so the size of a process is limited by the size of physical memory. It requires more memory space.
 Dynamic loading: To gain better memory utilization, dynamic loading is used. In dynamic loading, a routine is not loaded until it is called; all routines reside on disk in a relocatable load format. One advantage of dynamic loading is that a routine that is never used is never loaded. This is useful when large amounts of code (for example, error-handling routines) are needed only occasionally.

Static and Dynamic linking:

To perform a linking task a linker is used. A linker is a program that takes one or more object files
generated by a compiler and combines them into a single executable file.
 Static linking: In static linking, the linker combines all necessary program modules into a single
executable program. So there is no runtime dependency. Some operating systems support only static
linking, in which system language libraries are treated like any other object module.
 Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a "stub" is included for each appropriate library routine reference. A stub is a small piece of code. When the stub is executed, it checks whether the needed routine is already in memory; if not, it loads the routine into memory.
Swapping:
A process must reside in main memory to be executed. Swapping is the act of temporarily moving a process out of main memory, which is fast, into secondary storage (the backing store). Swapping allows more processes to be run than can fit into memory at one time. The main cost of swapping is the transfer time, which is directly proportional to the amount of memory swapped. Swapping is also known as roll out, roll in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process, then load and execute the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.

Contiguous Memory Allocation :


Main memory must accommodate both the operating system and the various user processes. Therefore, the allocation of memory is an important task of the operating system. Memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously, so we must consider how to allocate available memory to the processes waiting in the input queue to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory.
Memory allocation:

To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods is to divide memory into several fixed-sized partitions, each containing exactly one process. Thus, the degree of multiprogramming is bounded by the number of partitions.
Fixed partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
Variable (multiple) partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a "hole". When a process arrives and needs memory, we search for a hole that is large enough for it. If one is found, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests. Allocating memory this way raises the dynamic storage allocation problem, which concerns how to satisfy a request of size n from a list of free holes. There are some solutions to this problem:
First fit:-
In the first fit, the first free hole that is large enough to satisfy the process's requirement is allocated.

Here, in this diagram 40 KB memory block is the first available free hole that can store process A (size of
25 KB), because the first two blocks did not have sufficient memory space.
Best fit:-
In the best fit, the smallest hole that is big enough for the process's requirements is allocated. For this, we must search the entire list, unless the list is ordered by size.

Here in this example, first, we traverse the complete list and find the last hole 25KB is the best suitable
hole for Process A(size 25KB).
In this method memory utilization is maximum as compared to other memory allocation techniques.
Worst fit:- In the worst fit, the largest available hole is allocated to the process. This method produces the largest leftover hole.

Here in this example, Process A (Size 25 KB) is allocated to the largest available memory block which is
60KB. Inefficient memory utilization is a major issue in the worst fit.
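The three placement strategies above can be sketched as simple searches over a list of hole sizes. The hole list below mirrors the running examples (a 25 KB process; the 40 KB block is the first that fits, the 25 KB block the best fit, the 60 KB block the worst fit), but the exact list is assumed for illustration.

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole large enough, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole large enough, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [10, 20, 40, 60, 25]     # free-hole sizes in KB (assumed example)
print(first_fit(holes, 25))      # -> 2 (the 40 KB hole)
print(best_fit(holes, 25))       # -> 4 (the 25 KB hole)
print(worst_fit(holes, 25))      # -> 3 (the 60 KB hole)
```

Best fit minimizes the leftover fragment here (0 KB), while worst fit leaves the largest one (35 KB), matching the trade-off described above.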

Fragmentation:

Fragmentation arises when processes are loaded into and removed from memory after execution: each removal leaves behind a small free hole. These holes often cannot be assigned to new processes, either because they are not combined or because they do not fulfill a process's memory requirement. To maintain a high degree of multiprogramming, we must reduce this waste of memory. Operating systems distinguish two types of fragmentation:
Internal fragmentation:
Internal fragmentation occurs when memory blocks are allocated to the process more than their requested
size. Due to this some unused space is leftover and creates an internal fragmentation problem.
Example: Suppose fixed partitioning is used for memory allocation, with blocks of 3 MB, 6 MB, and 7 MB in memory. Now a new process p4 of size 2 MB arrives and demands a block of memory. It gets the 3 MB block, but 1 MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.
External fragmentation:
In external fragmentation, we have a free memory block, but we can not assign it to process because
blocks are not contiguous.
Example: Continuing the above example, suppose three processes p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB respectively, and are allocated the 3 MB, 6 MB, and 7 MB blocks respectively. After allocation, p1 and p2 leave 1 MB and 2 MB unused. Suppose a new process p4 arrives and demands a 3 MB block of memory. That much memory is free in total, but we cannot assign it because the free space is not contiguous. This is called external fragmentation.
Both the first fit and best fit strategies for memory allocation are affected by external fragmentation. To overcome the external fragmentation problem, compaction is used: all free memory is combined into one large block, so that this space can be used effectively by other processes.
Another possible solution to external fragmentation is to allow the logical address space of a process to be noncontiguous, permitting the process to be allocated physical memory wherever it is available.

Paging:

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical
memory. This scheme permits the physical address space of a process to be non-contiguous.
 Logical Address or Virtual Address (represented in bits): An address generated by the CPU
 Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all
logical addresses generated by a program
 Physical Address (represented in bits): An address actually available on a memory unit
 Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses
The mapping from virtual to physical address is done by the memory management unit (MMU) which is
a hardware device and this mapping is known as the paging technique.
 The Physical Address Space is conceptually divided into several fixed-size blocks, called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size
Let us consider an example:
 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)
The address generated by the CPU is divided into
 Page number (p): the number of bits required to represent the pages in the Logical Address Space.
 Page offset (d): the number of bits required to represent a particular word within a page (i.e., the page size of the Logical Address Space).
The Physical Address is divided into:
 Frame number (f): the number of bits required to represent the frames in the Physical Address Space.
 Frame offset (d): the number of bits required to represent a particular word within a frame (i.e., the frame size of the Physical Address Space).
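Using the example above (a 13-bit logical address and 1 K-word pages, so d = 10 bits and p = 3 bits), the split is just a shift and a mask. This is a sketch of the arithmetic only, not of any particular hardware.

```python
PAGE_SIZE = 1024    # 1 K words, so the offset d occupies 10 bits
OFFSET_BITS = 10

def split_logical(addr):
    """Split a logical address into (page number p, page offset d)."""
    return addr >> OFFSET_BITS, addr & (PAGE_SIZE - 1)

p, d = split_logical(5637)  # an arbitrary 13-bit address
print(p, d)  # -> 5 517, since 5 * 1024 + 517 = 5637
```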
The hardware implementation of the page table can be done using dedicated registers, but registers are satisfactory only if the page table is small. If the page table contains a large number of entries, it is kept in main memory, and a TLB (Translation Look-aside Buffer), a special small fast-lookup hardware cache, is used to speed up translation.
 The TLB is an associative, high-speed memory.
 Each entry in TLB consists of two parts: a tag and a value.
 When this memory is used, then an item is compared with all tags simultaneously. If the item is
found, then the corresponding value is returned.

Let the main memory access time be m. If the page table is kept in main memory, then:
Effective access time = m (to access the page table entry) + m (to access the word itself) = 2m
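The doubling of access time, and the way a TLB mitigates it, can be checked numerically. The access times and hit ratio below are assumed values, and the TLB formula is the standard one (weighting the one-extra-lookup hit path against the two-memory-access miss path); it is not derived in the text above.

```python
m = 100  # assumed main-memory access time in ns

# Page table kept in main memory: one access for the page-table entry,
# one access for the word itself.
eat_no_tlb = m + m
print(eat_no_tlb)  # -> 200

# With a TLB: lookup time t and hit ratio h (both assumed values).
t, h = 20, 0.9
eat_tlb = h * (t + m) + (1 - h) * (t + 2 * m)
print(round(eat_tlb))  # -> 130
```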
Fixed Partitioning

The earliest and one of the simplest techniques for loading more than one process into main memory is fixed partitioning, also called contiguous memory allocation.

In this technique, main memory is divided into partitions of equal or different sizes. The operating system always resides in the first partition, while the other partitions can be used to store user processes. Memory is assigned to processes in a contiguous way.

In fixed partitioning,

1. The partitions cannot overlap.

2. A process must be contiguously present in a partition for the execution.

There are various cons of using this technique.


1. Internal Fragmentation

If the size of the process is less than the total size of the partition, then some part of the partition is wasted and remains unused. This wastage of memory is called internal fragmentation.

As shown in the image below, a 4 MB partition is used to load only a 3 MB process, and the remaining 1 MB is wasted.

2. External Fragmentation

The total unused space across the various partitions cannot be used to load a process, even though enough space is available in total, because it is not contiguous.

As shown in the image below, the remaining 1 MB of each partition cannot be used as a unit to store a 4 MB process. Despite the fact that sufficient space is available in total, the process will not be loaded.
3. Limitation on the size of the process

If a process is larger than the largest partition, that process cannot be loaded into memory. Therefore, a limitation is imposed on the process size: it cannot exceed the size of the largest partition.

4. Degree of multiprogramming is less

By degree of multiprogramming, we simply mean the maximum number of processes that can be loaded into memory at the same time. In fixed partitioning, the degree of multiprogramming is fixed and quite low, because the partition sizes cannot be varied according to the sizes of processes.

Dynamic/ Variable Partitioning

Dynamic partitioning tries to overcome the problems caused by fixed


partitioning. In this technique, the partition size is not declared initially. It is
declared at the time of process loading.
The first partition is reserved for the operating system. The remaining space is
divided into parts. The size of each partition will be equal to the size of the
process. The partition size varies according to the need of the process so that
the internal fragmentation can be avoided.

Advantages of Dynamic/Variable Partitioning over fixed partitioning

1. No Internal Fragmentation

Given that the partitions in dynamic partitioning are created according to the needs of each process, there will not be any internal fragmentation, because no unused space is left over inside a partition.

2. No Limitation on the size of the process

In fixed partitioning, a process larger than the largest partition could not be executed due to the lack of sufficient contiguous memory. In dynamic partitioning, the process size is not restricted in this way, since the partition size is decided according to the process size.
3. Degree of multiprogramming is dynamic

Due to the absence of internal fragmentation, there will not be any unused
space in the partition hence more processes can be loaded in the memory at
the same time.

Disadvantages of dynamic partitioning

External Fragmentation

Absence of internal fragmentation doesn't mean that there will not be external
fragmentation.

Let's consider three processes, P1 (1 MB), P2 (3 MB), and P3 (1 MB), loaded into their respective partitions of main memory.

After some time, P1 and P3 complete and their assigned space is freed. Now there are two unused partitions (1 MB and 1 MB) available in main memory, but they cannot be used to load a 2 MB process, since they are not contiguously located.

The rule says that the process must be contiguously present in the main
memory to get executed. We need to change this rule to avoid external
fragmentation.
Complex Memory Allocation

In fixed partitioning, the list of partitions is made once and never changes, but in dynamic partitioning, allocation and deallocation are complex, since the partition size varies every time a partition is assigned to a new process, and the OS has to keep track of all the partitions.

Because allocation and deallocation happen very frequently in dynamic memory allocation, and the partition size changes each time, it is difficult for the OS to manage everything.

Structure of Page Table in Operating Systems


In this tutorial, we will cover some of the most common techniques used for
structuring the Page table.

The data structure that is used by the virtual memory system in the operating
system of a computer in order to store the mapping between physical and
logical addresses is commonly known as Page Table.

As noted earlier, the logical address generated by the CPU is translated into the physical address with the help of the page table.

 Thus the page table mainly provides the corresponding frame number (the base address of the frame) where that page is stored in main memory.

The above diagram shows the paging model of Physical and logical memory.

Characteristics of the Page Table

Some of the characteristics of the Page Table are as follows:

 It is stored in the main memory.


 Generally, the number of entries in the page table = the number of pages into which the process is divided.

 PTBR (page table base register) is used to hold the base address of the page table of the current process.

 Each process has its own independent page table.

Techniques used for Structuring the Page Table

Some of the common techniques that are used for structuring the Page table
are as follows:

1. Hierarchical Paging

2. Hashed Page Tables

3. Inverted Page Tables

Let us cover these techniques one by one;

Hierarchical Paging

Another name for Hierarchical Paging is multilevel paging.

 There might be a case where the page table is too big to fit in a
contiguous space, so we may have a hierarchy with several levels.

 In this type of paging, the logical address space is broken up into multiple page tables.

 Hierarchical Paging is one of the simplest techniques and for this


purpose, a two-level page table and three-level page table can be used.

Two Level Page Table

Consider a system having 32-bit logical address space and a page size of 1 KB
and it is further divided into:

 Page Number consisting of 22 bits.

 Page Offset consisting of 10 bits.

Since we page the page table itself, the 22-bit page number is further divided into:
 An outer page number (P1) consisting of 12 bits.
 An inner page-table index (P2) consisting of 10 bits.

Thus the Logical address is as follows:

In the above diagram,

P1 is an index into the outer page table.

P2 is the displacement within the page of the inner page table.

Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table.

The figure below shows the address translation scheme for a two-level page table.
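For the 32-bit example above (P1 = 12 bits, P2 = 10 bits, d = 10 bits), the split is pure bit arithmetic; the sample address is arbitrary.

```python
def split_two_level(addr):
    """Split a 32-bit logical address into (p1, p2, d), where
    p1 = 12 bits (outer table index), p2 = 10 bits (inner table
    index), and d = 10 bits (offset within the 1 KB page)."""
    d  = addr & 0x3FF           # low 10 bits: offset within the page
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: inner page-table index
    p1 = addr >> 20             # high 12 bits: outer page-table index
    return p1, p2, d

print(split_two_level(0x12345678))  # -> (291, 277, 632)
```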

Three Level Page Table

For a system with a 64-bit logical address space, a two-level paging scheme is not appropriate. Suppose the page size in this case is 4 KB. If we use the two-level scheme, the addresses will look like this:
To avoid such a large outer page table, the solution is to divide the outer page table further, which results in a three-level page table:

Hashed Page Tables

This approach is used to handle address spaces larger than 32 bits.

 The virtual page number is hashed into a page table.

 Each entry of this page table contains a chain of elements that hash to the same location.

Each element mainly consists of :

1. The virtual page number

2. The value of the mapped page frame.

3. A pointer to the next element in the linked list.

Given below figure shows the address translation scheme of the Hashed Page
Table:
The above Figure shows Hashed Page Table

The Virtual Page numbers are compared in this chain searching for a match; if
the match is found then the corresponding physical frame is extracted.
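The chain-walking lookup described above can be sketched in a few lines; the table size and virtual page numbers below are assumed for illustration.

```python
# Illustrative sketch of a hashed page table: each bucket holds a chain
# of (virtual page number, frame number) elements.
TABLE_SIZE = 8   # assumed small table for illustration

table = [[] for _ in range(TABLE_SIZE)]

def insert(vpn, frame):
    """Add a mapping to the chain at the hashed bucket."""
    table[hash(vpn) % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    """Walk the chain at the hashed bucket, comparing VPNs for a match."""
    for entry_vpn, frame in table[hash(vpn) % TABLE_SIZE]:
        if entry_vpn == vpn:
            return frame        # match found: extract the physical frame
    return None                 # no mapping: this would be a page fault

insert(0x1A2B, 7)
insert(0x9C3D, 3)
print(lookup(0x1A2B))  # -> 7
print(lookup(0xFFFF))  # -> None (would raise a page fault)
```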

In this scheme, a variation for 64-bit address space commonly uses clustered
page tables.

Clustered Page Tables

 These are similar to hashed page tables, but here each entry refers to several pages (for example, 16) rather than one.

 They are mainly used for sparse address spaces, where memory references are non-contiguous and scattered.

Inverted Page Tables

The Inverted Page table basically combines A page table and A frame table into
a single data structure.

 There is one entry for each real page (frame) of memory.

 Each entry consists of the virtual address of the page stored in that real memory location, along with information about the process that owns the page.
 Though this technique decreases the memory needed to store the page tables, it increases the time needed to search the table whenever a page reference occurs.

Given below figure shows the address translation scheme of the Inverted Page
Table:

In this scheme, we need to keep track of the process id in each entry, because many processes may use the same logical addresses.

Also, many entries can map to the same index in the page table after going through the hash function, so chaining is used to handle this.
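A minimal sketch of an inverted-table lookup: one entry per physical frame holding (process id, virtual page number), searched linearly, with the matching index being the frame number. The table contents are assumed example values.

```python
# Sketch of an inverted page table: the list index is the frame number,
# and each entry records which (process id, virtual page) occupies it.
inverted = [("p1", 0), ("p2", 4), ("p1", 2), ("p2", 0)]  # assumed contents

def translate(pid, vpn, offset):
    """Linear search of the inverted table; the matching index is the
    frame number, so the physical address is (frame, offset)."""
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpn):
            return frame, offset
    raise LookupError("page fault: no entry for this (pid, vpn)")

print(translate("p1", 2, 123))  # -> (2, 123)
```

The linear search is what makes lookups slow in practice, which is why real implementations hash into the table, with chaining for collisions, as the text notes.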

Types of Page Replacement Algorithms

There are various page replacement algorithms. Each algorithm has a different
method by which the pages can be replaced.

1. Optimal Page Replacement algorithm → this algorithm replaces the page that will not be referred to for the longest time in the future. Although it cannot be implemented in practice, it is used as a benchmark: other algorithms are compared to it in terms of optimality.

2. Least recently used (LRU) page replacement algorithm → this algorithm replaces the page that has not been referred to for the longest time. It is the mirror of the optimal page replacement algorithm: we look at the past instead of the future.
3. FIFO → in this algorithm, a queue is maintained. The page that was assigned a frame first will be replaced first; in other words, the page at the front of the queue (the oldest page) is replaced on every page fault.

Numerical on Optimal, LRU and FIFO

Q. Consider a reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. the number of frames in the


memory is 3. Find out the number of page faults respective to:

1. Optimal Page Replacement Algorithm

2. FIFO Page Replacement Algorithm

3. LRU Page Replacement Algorithm

Optimal Page Replacement Algorithm

Number of Page Faults in Optimal Page Replacement Algorithm = 5

LRU Page Replacement Algorithm

Number of Page Faults in LRU = 6


FIFO Page Replacement Algorithm

Number of Page Faults in FIFO = 6
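The three fault counts above can be reproduced by simulating each policy over the reference string with 3 frames. This is a sketch of the textbook algorithms, not production code.

```python
from collections import deque

REF = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]

def fifo(ref, n):
    """FIFO: evict the page that entered memory earliest."""
    frames, queue, faults = set(), deque(), 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == n:
                frames.discard(queue.popleft())
            frames.add(p)
            queue.append(p)
    return faults

def lru(ref, n):
    """LRU: keep frames ordered least- to most-recently used."""
    frames, faults = [], 0
    for p in ref:
        if p in frames:
            frames.remove(p)          # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n:
                frames.pop(0)         # evict least recently used
        frames.append(p)
    return faults

def optimal(ref, n):
    """Optimal: evict the page whose next use is farthest away (or never)."""
    frames, faults = [], 0
    for i, p in enumerate(ref):
        if p in frames:
            continue
        faults += 1
        if len(frames) == n:
            future = ref[i + 1:]
            victim = max(frames, key=lambda q: future.index(q)
                         if q in future else len(future) + 1)
            frames.remove(victim)
        frames.append(p)
    return faults

print(optimal(REF, 3), lru(REF, 3), fifo(REF, 3))  # -> 5 6 6
```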


Belady’s Anomaly in FIFO –


Assuming a system that has no pages loaded in the memory and uses the FIFO Page
replacement algorithm. Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Case 1: If the system has 3 frames, the given reference string using the FIFO page replacement algorithm yields a total of 9 page faults. The diagram below illustrates the pattern of page faults occurring in this example.
Case 2: If the system has 4 frames, the given reference string using the FIFO page replacement algorithm yields a total of 10 page faults. The diagram below illustrates the pattern of page faults occurring in this example.

It can be seen from the above example that on increasing the number of frames while
using the FIFO page replacement algorithm, the number of page faults increased from
9 to 10.
Note – Not every reference string triggers Belady's anomaly in FIFO, but certain kinds of reference strings worsen FIFO performance as the number of frames increases.
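Belady's anomaly can be demonstrated by simulating FIFO on the reference string above with 3 frames and then with 4:

```python
from collections import deque

def fifo_faults(ref, n):
    """Count FIFO page faults for reference string `ref` with n frames."""
    frames, queue, faults = set(), deque(), 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == n:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(p)
            queue.append(p)
    return faults

REF = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(REF, 3))  # -> 9
print(fifo_faults(REF, 4))  # -> 10  (more frames, yet more faults)
```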

Segmentation in Operating System


A process is divided into segments: the chunks into which a program is divided, which are not necessarily all of the same size. Segmentation gives the user's view of the process, which paging does not; here the user's view is mapped onto physical memory.
There are two types of segmentation:
1. Virtual memory segmentation –
Each process is divided into a number of segments, not all of which are
resident at any one point in time.
2. Simple segmentation –
Each process is divided into a number of segments, all of which are loaded
into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical
addresses in segmentation. A table stores the information about all such
segments and is called Segment Table.
Segment Table – It maps the two-dimensional logical address into a one-dimensional physical address. Each of its entries has:
 Base Address: It contains the starting physical address where the
segments reside in memory.
 Limit: It specifies the length of the segment.
Translation of a two-dimensional logical address to a one-dimensional physical address:

Address generated by the CPU is divided into:


 Segment number (s): Number of bits required to represent the segment.
 Segment offset (d): Number of bits required to represent the offset within the segment (i.e., the maximum segment size).
Advantages of Segmentation –
 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation –
 As processes are loaded and removed from the memory, the free memory
space is broken into little pieces, causing External fragmentation.
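The base/limit translation described above can be sketched as follows; the segment table contents are assumed example values.

```python
# Sketch of segment-table translation: each entry is (base, limit),
# indexed by segment number. The values below are assumed examples.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(s, d):
    """Map the two-dimensional address (segment s, offset d) to a
    one-dimensional physical address, trapping on a limit violation."""
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d

print(translate(2, 53))  # -> 4353, i.e. base 4300 + offset 53
```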

Virtual Memory in Operating System

Virtual Memory is a storage allocation scheme in which secondary memory can


be addressed as though it were part of the main memory. The addresses a
program may use to reference memory are distinguished from the addresses
the memory system uses to identify physical storage sites, and program-
generated addresses are translated automatically to the corresponding
machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in
computer memory.
1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be
swapped in and out of the main memory such that it occupies different places in the
main memory at different times during the course of execution.
2. A process may be broken into a number of pieces and these pieces need not be
continuously located in the main memory during execution. The combination of
dynamic run-time address translation and use of page or segment table permits this.
If these characteristics are present then, it is not necessary that all the pages or
segments are present in the main memory during execution. This means that the
required pages need to be loaded into memory whenever required. Virtual memory is
implemented using Demand Paging or Demand Segmentation.

Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs)
is known as demand paging.
The process includes the following steps :
1. If the CPU tries to refer to a page that is currently not available in the main memory,
it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to
proceed the OS must bring the required page into the memory.
3. The OS will search for the required page on secondary storage (the backing store).
4. The required page will be brought from secondary storage into physical memory. If no frame is free, a page replacement algorithm is used to decide which page in physical memory to replace.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place
the process back into the ready state.
Hence whenever a page fault occurs these steps are followed by the operating system
and the required page is brought into memory.
Advantages :
 More processes may be maintained in the main memory: Because we are going to
load only some of the pages of any particular process, there is room for more
processes. This leads to more efficient utilization of the processor because it is more
likely that at least one of the more numerous processes will be in the ready state at
any particular time.
 A process may be larger than all of the main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in the
main memory as required.
 It allows greater multiprogramming levels by using less of the available (primary)
memory for each process.
Page Fault Service Time :
The time taken to service the page fault is called page fault service time. The page fault
service time includes the time taken to perform all the above six steps.
Let the main memory access time be m, the page fault service time be s, and the page fault rate be p. Then:
Effective memory access time = (p * s) + (1 - p) * m
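Plugging assumed values into the formula shows how heavily even a tiny fault rate weighs on the effective access time (the numbers below are illustrative, not from the text):

```python
m = 200          # assumed main-memory access time, ns
s = 8_000_000    # assumed page fault service time, ns (8 ms)
p = 0.001        # assumed page fault rate: 1 fault per 1000 accesses

effective = p * s + (1 - p) * m
print(round(effective, 1))  # -> 8199.8 ns, roughly 41x slower than m
```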
Swapping:
Swapping a process out means removing all of its pages from memory, or marking them
so that they will be removed by the normal page replacement process. Suspending a
process ensures that it is not runnable while it is swapped out. At some later time, the
system swaps back the process from the secondary storage to the main memory. When
a process is busy swapping pages in and out then this situation is called thrashing.

Thrashing :
At any given time, only a few pages of any process are in the main memory and
therefore more processes can be maintained in memory. Furthermore, time is saved
because unused pages are not swapped in and out of memory. However, the OS must
be clever about how it manages this scheme. In the steady-state practically, all of the
main memory will be occupied with process pages, so that the processor and OS have
direct access to as many processes as possible. Thus when the OS brings one page in,
it must throw another out. If it throws out a page just before it is used, then it will just
have to get that page again almost immediately. Too much of this leads to a condition
called Thrashing. The system spends most of its time swapping pages rather than
executing instructions. So a good page replacement algorithm is required.
In the given diagram, up to a certain degree of multiprogramming (the point lambda), CPU utilization is very high and system resources are utilized 100%. But if we increase the degree of multiprogramming further, CPU utilization falls drastically: the system spends most of its time on page replacement, and the time taken to complete process execution increases. This situation is called thrashing.
Causes of Thrashing :
1. High degree of multiprogramming : If the number of processes keeps on
increasing in the memory then the number of frames allocated to each process will
be decreased. So, fewer frames will be available for each process. Due to this, a
page fault will occur more frequently and more CPU time will be wasted in just
swapping in and out of pages and the utilization will keep on decreasing.
For example:
Let free frames = 400
Case 1: Number of process = 100
Then, each process will get 4 frames.
Case 2: Number of processes = 400
Each process will get 1 frame.
Case 2 is a condition of thrashing, as the number of processes is increased, frames
per process are decreased. Hence CPU time will be consumed in just swapping
pages.
2. Lack of frames: If a process has too few frames, fewer of its pages can reside in memory, so more frequent swapping in and out is required. This may lead to thrashing. Hence a sufficient number of frames must be allocated to each process to prevent thrashing.
Recovery of Thrashing :
 Do not allow the system to go into thrashing, by instructing the long-term scheduler not to bring processes into memory beyond the threshold.
 If the system is already thrashing, instruct the medium-term scheduler to suspend some of the processes so that the system can recover from thrashing.
