
Operating Systems

Unit 4
Memory Management:

• Basic bare machine, Resident monitor
• Multiprogramming with fixed partitions
• Multiprogramming with variable partitions
• Protection schemes
• Paging
• Segmentation
• Paged segmentation
• Virtual memory concepts
• Demand paging
• Performance of demand paging
• Page replacement algorithms
• Thrashing
• Cache memory organization
• Locality of reference
Bare Machine and Resident Monitor
• A bare machine is the raw hardware that executes programs on the processor directly, without using an operating system.
• A resident monitor is the code that runs on a bare machine and controls the execution of programs.
• The resident monitor is divided into 4 parts:
• Control Language Interpreter: The first part of the resident monitor; it reads and carries out the instructions from one level to the next.
• Loader: The second and main part of the resident monitor; it loads all the necessary system and application programs into main memory.
• Device Driver: The third part of the resident monitor; it manages the input-output devices connected to the system. It is essentially the interface between the user and the system: it forwards the request the user makes and returns the response the system produces to fulfil that request.
• Interrupt Processing: The fourth part; as the name suggests, it processes all interrupts that occur in the system.
Multiprogramming
• Multiprogramming: A multiprogramming operating system executes multiple processes by monitoring their process states and switching between them. It runs multiple programs to avoid CPU and memory underutilization. It is also called a multiprogram task system. It processes jobs faster than a batch processing system.
Advantages of multiprogramming:
• The CPU never becomes idle
• Efficient resource utilization
• Shorter response time
• Short jobs complete faster than long jobs
• Increased throughput
Difference between Multiprogramming and Multi-tasking
S. No. | Multiprogramming | Multi-tasking
1. | Both of these concepts are for a single CPU. | Both of these concepts are for a single CPU.
2. | The concept of context switching is used. | The concepts of context switching and time sharing are used; the processor typically runs in time-sharing mode.
3. | In a multiprogrammed system, the operating system simply switches to, and executes, another job when the current job needs to wait. | Switching happens either when the allotted time expires or when the current process needs to wait for some other reason (for example, to do I/O).
4. | Multiprogramming increases CPU utilization by organising jobs. | Multi-tasking also increases CPU utilization, and it additionally increases responsiveness.
5. | The idea is to reduce CPU idle time as much as possible. | The idea is to further extend CPU utilization by increasing responsiveness through time sharing.
Memory Management Techniques
Contiguous memory Management
• Contiguous memory management schemes: In these schemes, each program occupies a single contiguous block of storage locations, i.e., a set of memory locations with consecutive addresses.
• Single contiguous memory management scheme: This is the simplest memory management technique. All of memory, apart from a small portion reserved for the operating system, is available to a single application. A related approach is fixed-size partitioning, which divides memory into fixed-size partitions (which may or may not be of the same size). A whole partition is allotted to a process, and if there is some wasted space inside the partition, that waste is called internal fragmentation.
Multiple partitioning schemes
• Multiple Partitioning: The single contiguous memory management scheme is inefficient, as it limits the computer to executing only one program at a time, wasting memory space and CPU time. The problem of inefficient CPU use can be overcome using multiprogramming, which allows more than one program to run concurrently. To switch between two processes, the operating system needs to load both processes into main memory. It therefore divides the available main memory into multiple parts, so that multiple processes can reside in main memory simultaneously.
• The multiple partitioning schemes can be of two types:
• Fixed Partitioning
• Dynamic Partitioning
Static or Fix Partitioning
• Fixed Partitioning: In a fixed partition memory management scheme, also called static partitioning, main memory is divided into several fixed-size partitions.
• These partitions can be of the same size or different sizes. Each partition can hold a single process.
• The number of partitions determines the degree of multiprogramming, i.e., the maximum number of processes in memory.
• These partitions are made at system generation time and remain fixed after that.
• 1. Internal Fragmentation: If the size of the process is less than the total size of the partition, then some of the partition goes unused. This wasted memory is called internal fragmentation.
• As shown in the image, the 4 MB partition is used to load only a 3 MB process, and the remaining 1 MB is wasted.
• 2. External Fragmentation: The total unused space across the various partitions cannot be used to load a process, even though enough space is available in total, because it is not contiguous.
• As shown in the image, the remaining 1 MB of each partition cannot be combined to store a 4 MB process. Despite the fact that sufficient space is available in total, it cannot be allocated because it is not contiguous.
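The internal-fragmentation arithmetic above can be sketched as a small helper; the 4 MB partition and 3 MB process are the figures from the example (a minimal sketch, not any real allocator API):

```python
def internal_fragmentation(partition_size_mb, process_size_mb):
    """Wasted space inside one fixed partition, in MB."""
    if process_size_mb > partition_size_mb:
        raise ValueError("process does not fit in the partition")
    return partition_size_mb - process_size_mb

# A 3 MB process loaded into a 4 MB partition wastes 1 MB.
print(internal_fragmentation(4, 3))  # -> 1
```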
Difference between Internal and External fragmentation
• Internal Fragmentation: Internal fragmentation happens when memory is split into fixed-size blocks. Whenever a process requests memory, a fixed-size block is allotted to it. When the memory allotted to the process is somewhat larger than the memory requested, the difference between the allotted and requested memory is called internal fragmentation.

• External Fragmentation: External fragmentation happens when there is a sufficient quantity of free memory in total to satisfy a process's memory request, but the request cannot be fulfilled because the free memory is non-contiguous. Whether you apply a first-fit or best-fit memory allocation strategy, external fragmentation can still arise.
Difference between Internal fragmentation and External fragmentation
S. No. | Internal fragmentation | External fragmentation
1. | Fixed-size memory blocks are assigned to processes. | Variable-size memory blocks are assigned to processes.
2. | Happens when the allotted memory block is larger than the process. | Happens when processes are removed from memory.
3. | The solution is the best-fit block. | The solutions are compaction and paging.
4. | Occurs when memory is divided into fixed-size partitions. | Occurs when memory is divided into variable-size partitions based on the size of processes.
5. | The difference between the memory allocated and the required memory is called internal fragmentation. | The unused spaces formed between non-contiguous memory fragments, too small to serve a new process, are called external fragmentation.
6. | Occurs with paging and fixed partitioning. | Occurs with segmentation and dynamic partitioning.
7. | Occurs when a process is allocated a partition greater than its requirement; the leftover space degrades system performance. | Occurs even though each process is allocated exactly the memory space it requires.
8. | Occurs in the worst-fit memory allocation method. | Occurs in the best-fit and first-fit memory allocation methods.
Advantages and Disadvantages of Fixed Partitioning
• Advantages
• Easy to implement: Algorithms needed to implement Fixed Partitioning are easy to implement. It simply
requires putting a process into a certain partition without focusing on the emergence of Internal and External
Fragmentation.
• Little OS overhead: Fixed Partitioning requires little extra or indirect computational power.
• Disadvantages
• Internal Fragmentation: Main memory use is inefficient. Any program, no matter how small, occupies an
entire partition. This can cause internal fragmentation.
• External Fragmentation: The total unused space (as stated above) of various partitions cannot be used to
load the processes even though there is space available but not in the contiguous form (as spanning is not
allowed).
• Limit on process size: A process larger than the largest partition in main memory cannot be accommodated. The partition size cannot be varied according to the size of the incoming process. Hence, the process of size 32 MB in the above-stated example is invalid.
• Limitation on Degree of Multiprogramming: Partitions in main memory are made before execution or during system configuration, so main memory is divided into a fixed number of partitions. If there are n1 partitions in RAM and n2 processes, then the condition n2 <= n1 must hold. Having more processes than partitions in RAM is invalid in Fixed Partitioning.
Dynamic or Variable Partitioning

• Dynamic Partitioning: In a dynamic partitioning scheme, each process occupies only as much memory as it requires when loaded for processing.
• Requested processes are allocated memory
until the entire physical memory is exhausted
or the remaining space is insufficient to hold
the requesting process.
• In this scheme the partitions used are of
variable size, and the number of partitions is
not defined at the system generation time.
Advantages and Disadvantages of Variable Partitioning
• Advantages
• No Internal Fragmentation: In variable Partitioning, space in main memory is
allocated strictly according to the need of process, hence there is no case of
internal fragmentation. There will be no unused space left in the partition.
• No restriction on Degree of Multiprogramming: More processes can be accommodated due to the absence of internal fragmentation. Processes can be loaded until main memory is full.
• No Limitation on the size of the process: In Fixed partitioning, the process
with the size greater than the size of the largest partition could not be loaded
and process can not be divided as it is invalid in contiguous allocation
technique. Here, In variable partitioning, the process size can’t be restricted
since the partition size is decided according to the process size.
• Disadvantages:
• Difficult Implementation: Implementing variable partitioning is harder than fixed partitioning, as it involves allocating memory at run-time rather than during system configuration.
• External Fragmentation: There will be external fragmentation in spite of
absence of internal fragmentation.
• For example, suppose process P1 (2 MB) and process P3 (1 MB) complete their execution, leaving two holes of 2 MB and 1 MB. Now suppose a process P5 of size 3 MB arrives. The empty space in memory cannot be allocated to it, as spanning is not allowed in contiguous allocation: the rule says a process must be contiguously present in main memory to get executed. Hence this results in external fragmentation.
Algorithms for Memory Allocations
• For both fixed and dynamic memory allocation schemes, the operating system must keep a list of memory locations noting which are free and which are busy. Then, as new jobs come into the system, free partitions must be allocated. These partitions may be allocated in 4 ways:
1. First-Fit Memory Allocation: The first fit approach allocates the first free partition or hole large enough to accommodate the process. It finishes after finding the first suitable free partition.
2. Best-Fit Memory Allocation: Best fit allocates the smallest free partition that meets the requirement of the requesting process. This algorithm searches the entire list of free partitions and chooses the smallest hole that is adequate, i.e., the hole closest to the actual process size needed.
3. Worst-Fit Memory Allocation: The worst fit approach locates the largest available free portion, so that the portion left over will be big enough to be useful. It is the reverse of best fit.
4. Next-Fit Memory Allocation: Next fit works like first fit, but each search starts from the partition following the previously allocated one rather than from the beginning of the list.
Memory Allocations Algorithms Important Points
• The first fit algorithm is often considered the best among these because:
• It takes less time compared to the other algorithms.
• It leaves bigger holes that can be used to load other processes later on.
• It is the easiest to implement.
GATE Question 1
• Consider allocation of memory to a new process. Assume that none of the
existing holes in the memory will exactly fit the process's memory
requirement. Hence, a new hole of smaller size will be created if allocation is
made in any of the existing holes. Which one of the following statements is
TRUE?

• Answer C
GATE Question 2
• Consider five memory partitions of size 100 KB, 500 KB, 200 KB, 450 KB and 600 KB in same order. If
sequence of requests for blocks of size 212 KB, 417 KB, 112 KB and 426 KB in same order come, then which
of the following algorithm makes the efficient use of memory?
A. Best fit algorithm
B. First fit algorithm
C. Next fit algorithm
D. Both next fit and best fit results in same
1) First Fit: 212 in 500 → 288 remaining; 417 in 450 → 33 remaining; 112 in 288 (of 500) → 176 remaining; 426 in 600 → 174 remaining. Total remaining = 100 + 176 + 200 + 33 + 174 = 683 KB.

2) Best Fit: 212 in 450 → 238 remaining; 417 in 500 → 83 remaining; 112 in 200 → 88 remaining; 426 in 600 → 174 remaining. Total remaining = 100 + 83 + 88 + 238 + 174 = 683 KB.

3) Next Fit: start like first fit, but each search begins from the partition after the previous allocation: 212 in 500 → 288 remaining; 417 in 450 → 33 remaining; 112 in 600 → 488 remaining; 426 in 488 → 62 remaining. Total remaining = 100 + 288 + 200 + 33 + 62 = 683 KB.
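The three workings above can be checked with a short simulation. This is a sketch, assuming each satisfied request simply reduces the chosen partition's free space; the `allocate` helper and strategy names are illustrative, not from any library:

```python
def allocate(partitions, requests, strategy):
    """Allocate each request to a partition; return total leftover KB."""
    free = partitions[:]          # remaining space in each partition
    last = 0                      # search start position for next fit
    for req in requests:
        if strategy == "next":
            order = list(range(last, len(free))) + list(range(last))
        else:
            order = list(range(len(free)))
        candidates = [i for i in order if free[i] >= req]
        if not candidates:
            return None           # request cannot be satisfied
        if strategy == "best":
            idx = min(candidates, key=lambda i: free[i])
        else:                     # first fit / next fit take the first hit
            idx = candidates[0]
        free[idx] -= req
        last = idx
    return sum(free)

parts, reqs = [100, 500, 200, 450, 600], [212, 417, 112, 426]
for s in ("first", "best", "next"):
    print(s, allocate(parts, reqs, s))   # -> 683 remaining for each
```

With these partition sizes all three strategies satisfy every request and leave the same 683 KB free, matching the hand workings.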
Non-Contiguous allocation
• In contiguous allocation, memory must be allocated to the whole process as a single block; a free space too small for a whole process remains unallocated.
• In non-contiguous allocation, a process can be divided into different parts, which fill scattered free spaces in main memory. In this example, process P can be divided into two parts of equal size, 2 KB each. One part of process P can then be allocated to the first 2 KB space of main memory and the other part to the second 2 KB space.
• To avoid this time-consuming process at load time, we divide each process in secondary memory in advance, before it reaches main memory for execution. Every process is divided into parts of equal size called pages. Main memory is likewise divided into parts of equal size called frames.
• Size of a page in a process = size of a frame in memory
Memory Protection in Operating Systems
In the diagram, when the scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Every address generated by the CPU is checked against these two registers, so the operating system, other programs, and user data are protected from being altered by the running process.
Memory Protection in Operating Systems
• In memory protection, we have to protect the operating system from user processes, which can be done by using a relocation register together with a limit register.
• The relocation register holds the value of the smallest physical address.
• The limit register holds the range of logical addresses.
• Each logical address must be less than the value in the limit register.
• The memory management unit translates the logical address by dynamically adding the value in the relocation register; the translated (or mapped) address is then sent to memory.
• Need for memory protection:
• Memory protection prevents a process from accessing unallocated memory: it stops software from seizing control of an excessive amount of memory, which could damage other software currently in use or cause a loss of saved data.
• Memory protection also helps in detecting malicious or harmful applications that may otherwise damage the processes of the operating system.
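The relocation/limit check described above can be sketched in a few lines. The register values (14000 and 3000) are hypothetical examples, not from the source:

```python
RELOCATION = 14000   # smallest physical address of the process (hypothetical)
LIMIT = 3000         # range of legal logical addresses (hypothetical)

def translate(logical):
    """Return the physical address, or raise a trap for an illegal access."""
    if logical >= LIMIT:          # every CPU-generated address is checked
        raise MemoryError("trap: addressing error")
    return RELOCATION + logical   # the MMU adds the relocation register

print(translate(100))    # -> 14100
```

Any logical address at or beyond the limit raises a trap before it ever reaches memory, which is exactly how the OS and other processes are protected.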
Methods of memory protection
• There are various methods for protecting a process from accessing memory that has not been allocated and
some of the commonly used methods are given below:
• Memory Protection using Keys: Memory protection keys are found in most modern computers that use paged memory organization and dynamic distribution of memory among parallel running programs. The keys are based on special codes used to verify the correspondence between arrays of memory cells and the running programs. This key method lets users impose page-based protections without any modification to the page tables.
• Memory Protection using Rings: In computer science, ordered protection domains are called protection rings. This method helps improve fault tolerance and provides security. The rings are arranged in a hierarchy from most privileged to least privileged. In a single-level sharing OS, every segment has a protection ring for reading, writing, and executing operations. If a process uses a ring number higher than the one permitted for a segment, a fault is generated. There are, however, methods for safely calling procedures that run in a lower ring number and then returning to the higher ring number.
• Capability-based addressing: A method of memory protection rarely seen in modern commercial computers. Here, pointers (objects containing a memory address) are replaced by capability objects that can only be created with protected instructions and may only be executed by the kernel, or by another process that is authorized to execute them. This gives the advantage of preventing unauthorized processes from creating additional separate address spaces in memory.
Methods of memory protection
• Memory Protection using masks: Masks are used to protect memory in paged organization. In this method, page numbers are assigned to each program before execution and reserved for the placement of its directives. The pages allocated to the program are placed under the control of the operating system in the form of a mask code (an n-bit binary code), formed for every working program and determined by the number of its pages.
• Memory Protection using Segmentation: It is a method of dividing the system memory into
different segments. The data structures of x86 architecture of OS like local descriptor table and
global descriptor table are used in the protection of memory.
• Memory Protection using Simulated segmentation: With this technique, a simulator monitors the program by interpreting the machine code instructions of the system architecture. The simulator can protect memory by applying a segmentation-like scheme and validating the target address of every instruction in real time.
• Memory Protection using Dynamic tainting: Dynamic tainting is a technique that consists of
marking and tracking certain data in a program at runtime as it protects the process from illegal
memory accesses. In tainting technique, we taint a program to mark two kinds of data i.e.,
memory in the data space and the pointers.
Paging
• Paging is a memory management scheme that
eliminates the need for contiguous allocation
of physical memory. This scheme permits the
physical address space of a process to be non
– contiguous.
• Logical Address or Virtual Address
(represented in bits): An address generated by
the CPU
• Logical Address Space or Virtual Address
Space( represented in words or bytes): The
set of all logical addresses generated by a
program
• Physical Address (represented in bits): An
address actually available on memory unit
• Physical Address Space (represented in words
or bytes): The set of all physical addresses
corresponding to the logical addresses
Paging Example
• Let us consider the main memory size 16
Kb and Frame size is 1 KB therefore the
main memory will be divided into the
collection of 16 frames of 1 KB each.
• There are 4 processes in the system that
is P1, P2, P3 and P4 of 4 KB each. Each
process is divided into pages of 1 KB
each so that one page can be stored in
one frame.
• Initially, all the frames are empty
therefore pages of the processes will get
stored in the contiguous way.
• Frames, pages and the mapping
between the two is shown in the image.
Paging Example
• Let us consider that, P2 and P4 are
moved to waiting state after some
time. Now, 8 frames become empty
and therefore other pages can be
loaded in that empty place.
• The process P5 of size 8 KB (8 pages)
is waiting inside the ready queue.
• We have 8 non-contiguous frames available in memory, and paging provides the flexibility of storing a process at different places.
• Therefore, we can load the pages of
process P5 in the place of P2 and P4.
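The scenario above can be sketched as a toy frame allocator: 16 one-KB frames, four 4 KB processes loaded in order, P2 and P4 freed, and then P5's 8 pages placed in whichever frames are free. The `load` helper is illustrative, not a real OS interface:

```python
frames = [None] * 16                   # 16 frames of 1 KB each

def load(process, n_pages):
    """Place each page of a process in the first free frame found."""
    placed = []
    for page in range(n_pages):
        idx = frames.index(None)       # raises ValueError if memory is full
        frames[idx] = (process, page)
        placed.append(idx)
    return placed

for p in ("P1", "P2", "P3", "P4"):     # four 4 KB processes, 4 pages each
    load(p, 4)

# P2 and P4 move to the waiting state; their 8 frames are freed.
for i, f in enumerate(frames):
    if f and f[0] in ("P2", "P4"):
        frames[i] = None

print(load("P5", 8))   # -> [4, 5, 6, 7, 12, 13, 14, 15]
```

P5's pages land in two separate runs of frames, which is exactly the non-contiguous placement that paging allows.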
Physical and Logical Address Space
• Physical address space in a system can be defined as the size of the main memory. It is important to compare the process size with the physical address space: the process size must be less than the physical address space.
If physical address space = 64 KB = 2^6 KB = 2^6 × 2^10 bytes = 2^16 bytes, and word size = 8 bytes = 2^3 bytes,
then physical address space (in words) = 2^16 / 2^3 = 2^13 words.
Therefore, physical address = 13 bits.
In general, if physical address space = N words, then physical address = log2 N bits.

• Logical address space can be defined as the size of the process. The process must be small enough to reside in the main memory.
If logical address space = 128 MB = 2^7 × 2^20 bytes = 2^27 bytes, and word size = 4 bytes = 2^2 bytes,
then logical address space (in words) = 2^27 / 2^2 = 2^25 words.
Therefore, logical address = 25 bits.
In general, if logical address space = L words, then logical address = log2 L bits.
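Both calculations follow one formula: address bits = log2 of the address space measured in words. A quick sketch (the helper name is illustrative):

```python
from math import log2

def address_bits(space_bytes, word_bytes):
    """Number of address bits for a word-addressable space."""
    return int(log2(space_bytes // word_bytes))

KB, MB = 2**10, 2**20
print(address_bits(64 * KB, 8))    # physical address -> 13 bits
print(address_bits(128 * MB, 4))   # logical address  -> 25 bits
```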
Page Table
• Page Table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical addresses.

Physical Address Space = M words → Physical Address = log2 M = m bits
Logical Address Space = L words → Logical Address = log2 L = l bits
Page Size = P words → Page offset = log2 P = p bits
Logical address and Physical Address
Parameter | Logical Address | Physical Address
Basic | generated by the CPU | location in a memory unit
Address Space | the set of all logical addresses generated by the CPU in reference to a program | the set of all physical addresses mapped to the corresponding logical addresses
Visibility | the user can view the logical address of a program | the user can never view the physical address of a program
Access | the user uses the logical address to access the physical address | the user can access the physical address indirectly, but not directly
Editable | the logical address can change | the physical address does not change
Existence | does not exist physically in memory; generated by the CPU from the perspective of a program | a location that physically exists in the memory unit and can be accessed physically
Example of Mapping
Important Formulas: Paging
• For Main Memory-
• Physical Address Space = Size of main memory
• Size of main memory = Total number of frames × Page size
• Frame size = Page size
• If number of frames in main memory = 2^X, then number of bits in frame number = X bits
• If page size = 2^X bytes, then number of bits in page offset = X bits
• If size of main memory = 2^X bytes, then number of bits in physical address = X bits
• For Process-
• Virtual Address Space = Size of process
• Number of pages the process is divided into = Process size / Page size
• If process size = 2^X bytes, then number of bits in virtual address space = X bits
• For Page Table-
• Size of page table = Number of entries in page table × Page table entry size
• Number of entries in page table = Number of pages the process is divided into
• Page table entry size = Number of bits in frame number + Number of bits used for optional fields, if any
• NOTE-
• In general, if the given address consists of n bits, then 2^n locations are possible.
• Then, size of memory = 2^n × size of one location.
• If the memory is byte-addressable, then size of one location = 1 byte; thus, size of memory = 2^n bytes.
• If the memory is word-addressable where 1 word = m bytes, then size of one location = m bytes; thus, size of memory = 2^n × m bytes.
Practice Numerical Paging
• Calculate the size of memory if its address consists of 22 bits and the memory is 2-byte addressable.
Number of locations possible with 22 bits = 2^22 locations.
Given: size of one location = 2 bytes.
Thus, size of memory = 2^22 × 2 bytes = 2^23 bytes = 8 MB.
• Calculate the number of bits required in the address for a memory of size 16 GB. Assume the memory is 4-byte addressable.
Let n bits be required. Then, size of memory = 2^n × 4 bytes.
Since the given memory has a size of 16 GB: 2^n × 4 bytes = 16 GB
2^n × 2^2 = 2^34 ⇒ 2^n = 2^32 ⇒ n = 32 bits
• Consider a system with byte-addressable memory, 32-bit logical addresses, 4 KB page size and page table entries of 4 bytes each. The size of the page table in the system in megabytes is _____.
Given: number of bits in logical address = 32 bits, page size = 4 KB, page table entry size = 4 bytes.
Process size: with 32 bits of logical address, process size = 2^32 B = 4 GB.
Number of entries in page table: number of pages the process is divided into = process size / page size = 4 GB / 4 KB = 2^20 pages, so the page table has 2^20 entries.
Page table size = number of entries × entry size = 2^20 × 4 bytes = 4 MB.
Practice Numerical Paging
• Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is the approximate size of the page table?
Given: size of main memory = 64 MB, number of bits in virtual address space = 32 bits, page size = 4 KB. We will consider the memory byte-addressable.
Number of bits in physical address: size of main memory = 64 MB = 2^26 B, so the physical address has 26 bits.
Number of frames in main memory = size of main memory / frame size = 64 MB / 4 KB = 2^26 B / 2^12 B = 2^14, so the frame number has 14 bits.
Number of bits in page offset: page size = 4 KB = 2^12 B, so the page offset has 12 bits.
Process size: number of bits in virtual address space = 32 bits, so process size = 2^32 B = 4 GB.
Number of entries in page table: number of pages the process is divided into = process size / page size = 4 GB / 4 KB = 2^20 pages, so the page table has 2^20 entries.
Page table size = number of entries in page table × page table entry size
= number of entries in page table × number of bits in frame number
= 2^20 × 14 bits
≈ 2^20 × 16 bits (approximating 14 bits up to 16 bits)
= 2^20 × 2 bytes = 2 MB
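Both page-table computations above use the same formula: entries = process size / page size, and table size = entries × entry size. A sketch (the helper name is illustrative; the 2-byte entry in the second case comes from rounding the 14-bit frame number up to 16 bits, as done above):

```python
KB, MB = 2**10, 2**20

def page_table_size(process_size, page_size, entry_size):
    """Page table size in bytes: one entry per page of the process."""
    return (process_size // page_size) * entry_size

# 32-bit logical address, 4 KB pages, 4-byte entries -> 4 MB table.
print(page_table_size(2**32, 4 * KB, 4) // MB)   # -> 4

# 32-bit virtual address, 4 KB pages, 14-bit frame number rounded
# up to a 2-byte entry -> 2 MB table.
print(page_table_size(2**32, 4 * KB, 2) // MB)   # -> 2
```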
Segmentation
• A process is divided into Segments. The chunks that a
program is divided into which are not necessarily all of the
same sizes are called segments. Segmentation gives user’s
view of the process which paging does not give. Here the
user’s view is mapped to physical memory.
• There are two types of segmentation:
Virtual memory segmentation –Each process is divided
into a number of segments, not all of which are resident at
any one point in time.
Simple segmentation –Each process is divided into a
number of segments, all of which are loaded into memory
at run time, though not necessarily contiguously.
• There is no simple relationship between logical addresses
and physical addresses in segmentation. A table stores the
information about all such segments and is called Segment
Table.
Segment Table – It maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:
Base Address: It contains the starting physical address
where the segments reside in memory.
Limit: It specifies the length of the segment.
Translation of Two-dimensional Logical Address to One-dimensional Physical Address using Segmentation

• The logical address generated by the CPU is divided into:
Segment number (s): the number of bits required to represent the segment.
Segment offset (d): the number of bits required to represent the size of the segment.
Practice Numerical Segmentation
• Consider the following segment table-Which of the following
logical address will produce trap addressing error?
A. 0, 430 B. 1, 11 C. 2, 100
In a segmentation scheme, the generated logical address consists of two parts: segment number and segment offset. The segment offset must always lie in the range [0, limit-1]. If the segment offset becomes greater than or equal to the limit of the segment, a trap addressing error is produced.
A. Segment number = 0, segment offset = 430. In the segment table, the limit of segment 0 is 700, so the offset must lie in the range [0, 699]. The generated offset lies in this range, so the request is valid and no trap is produced. Physical address = 1219 + 430 = 1649.
B. Segment number = 1, segment offset = 11. The limit of segment 1 is 14, so the offset must lie in the range [0, 13]. The generated offset lies in this range, so the request is valid and no trap is produced. Physical address = 2300 + 11 = 2311.
C. Segment number = 2, segment offset = 100. The limit of segment 2 is 100, so the offset must lie in the range [0, 99]. The generated offset does not lie in this range, so the request is invalid and a trap is produced.
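The three checks can be sketched with a small segment table. The bases for segments 0 and 1 (1219 and 2300) come from the worked answers; the base for segment 2 is a hypothetical placeholder, since the trap fires before any translation happens:

```python
# (base, limit) per segment; base 90 for segment 2 is hypothetical.
SEGMENT_TABLE = [(1219, 700), (2300, 14), (90, 100)]

def translate(segment, offset):
    """Translate (segment, offset) or raise a trap addressing error."""
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:              # offset must lie in [0, limit-1]
        raise MemoryError("trap: addressing error")
    return base + offset

print(translate(0, 430))   # -> 1649
print(translate(1, 11))    # -> 2311
# translate(2, 100) raises the trap addressing error
```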
Segmented Paging
• Process is first divided into segments and
then each segment is divided into pages.
• These pages are then stored in the frames
of main memory.
• A page table exists for each segment that
keeps track of the frames storing the pages
of that segment.
• Each page table occupies one frame in the
main memory.
• Number of entries in the page table of a
segment = Number of pages that segment is
divided.
• A segment table exists that keeps track of
the frames storing the page tables of
segments.
• Number of entries in the segment table of a
process = Number of segments that process
is divided into.
• The base address of the segment table is
stored in the segment table base register.
Practice Numerical Paged Segmentation
• A certain computer system has the segmented paging architecture for virtual memory. The memory is byte addressable.
Both virtual and physical address spaces contain 2^16 bytes each. The virtual address space is divided into 8 non-overlapping
equal size segments. The memory management unit (MMU) has a hardware segment table, each entry of which contains the
physical address of the page table for the segment. Page tables are stored in the main memory and consist of 2-byte page
table entries. What is the minimum page size in bytes so that the page table for a segment requires at most one page to
store it?
Given- Virtual Address Space = Process size = 2^16 bytes,
Physical Address Space = Main Memory size = 2^16 bytes
Process is divided into 8 equal size segments,
Page table entry size = 2 bytes
Let page size = n bytes. Since the page table has to be stored in a single page, we must have-
Size of page table <= Page size
Size of each segment = Process size / Number of segments = 2^16 bytes / 8 = 2^16 bytes / 2^3 = 2^13 bytes = 8 KB
Number of pages each segment is divided into = Size of segment / Page size = 8 KB / n bytes = (8K / n) pages
Size of each page table = Number of entries in page table x Page table entry size
= Number of pages the segment is divided into x 2 bytes = (8K / n) x 2 bytes = (16K / n) bytes
Substituting values in the above condition, we get-
(16K / n) bytes <= n bytes → n^2 >= 16K = 2^14 → n >= 2^7
Thus, minimum page size possible = 2^7 bytes = 128 bytes.
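The derived condition can be checked numerically (a small Python sketch; the loop searches power-of-two page sizes, matching the derivation above):

```python
# Condition: the page table of one segment (8K/n entries of 2 bytes
# each) must fit inside a single page of n bytes.
segment_size = 2**13   # 8 KB per segment (2^16 bytes / 8 segments)
pte_size = 2           # page table entry size in bytes

def page_table_fits(n):
    num_pages = segment_size // n          # pages per segment
    return num_pages * pte_size <= n       # page table fits in one page?

# Smallest power-of-two page size that satisfies the condition:
n = 1
while not page_table_fits(n):
    n *= 2
print(n)  # 128
```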
Advantage and Disadvantage
• Advantages of Segmentation –
• Segment Table consumes less space in comparison to Page table in paging.
• It allows dividing the program into modules, which provides better visualization.
• It solves the problem of internal fragmentation.
• Disadvantage of Segmentation –
• There is an overhead of maintaining a segment table for each process.
• The time taken to fetch the instruction increases since now two memory accesses are required.
• Segments of unequal size are not suited for swapping.
• It suffers from external fragmentation as the free space gets broken down into smaller pieces with
the processes being loaded and removed from the main memory.
• Advantages of segmented paging-
• Segment table contains only one entry corresponding to each segment.
• It reduces memory usage.
• The size of Page Table is limited by the segment size.
• It solves the problem of external fragmentation.
• Disadvantages of segmented paging-
• Segmented paging suffers from internal fragmentation.
• The complexity level is much higher as compared to paging.
Virtual Memory
• In this scheme, whenever some pages need to be loaded into the main memory for
execution and the memory is not available for that many pages, then instead of
stopping the pages from entering the main memory, the OS searches for the areas
of RAM that have been least recently used or that are not referenced, and copies
them into the secondary memory to make space for the new pages in the main
memory. The following are situations when the entire program is not required to
be loaded fully in main memory:
1. User-written error handling routines are used only when an error occurs in
the data or computation.
2. Certain options and features of a program may be used rarely.
3. Many tables are assigned a fixed amount of address space even though only a
small amount of the table is actually used.
• The ability to execute a program that is only partially in memory would confer
many benefits:
1. Fewer I/O operations would be needed to load or swap each user program into
memory.
2. A program would no longer be constrained by the amount of physical memory
that is available.
3. Each user program could take less physical memory, so more programs could be
run at the same time, with a corresponding increase in CPU utilization and
throughput.
Virtual Memory Management
• Let us assume 2 processes, P1 and P2,
containing 4 pages each. Each page size is 1
KB. The main memory contains 8 frames of
1 KB each. The OS resides in the first two
partitions. In the third partition, the 1st page
of P1 is stored, and the other frames are
also shown filled with the different pages of
the processes in the main memory.
• The page tables of both the processes are 1
KB in size each, and therefore each can fit
in one frame. The page tables of both the
processes contain various information that
is also shown in the image.
• The CPU contains a register which holds
the base address of the page table, which is
5 in the case of P1 and 7 in the case of P2.
This page table base address will be added
to the page number of the logical address
when accessing the actual corresponding
entry.
Optional Bits in Page table
• Present/Absent bit – The present/absent bit says whether the particular page you are
looking for is present or absent. If it is not present, that is called a Page Fault. It is
set to 0 if the corresponding page is not in memory. It is used by the operating system
to handle page faults and support virtual memory. Sometimes this bit is also known as
the valid/invalid bit: v = in-memory (memory resident), i = not-in-memory.
• Protection bit – The protection bit specifies what kind of access is allowed on that
page frame (read, write, etc.).
• Referenced bit – The referenced bit says whether this page has been referenced in the
last clock cycle or not. It is set to 1 by hardware when the page is accessed.
• Caching enabled/disabled – Sometimes we need fresh data. Say the user is typing some
information on the keyboard and the program must run according to that input. The
information then comes into the main memory, so main memory holds the latest
information typed by the user. If that page were cached, the cache could still show the
old information. So whenever freshness is required, we do not want to go through the
cache or the other levels of the memory hierarchy: the information in the level closest
to the CPU and the information just provided by the user might differ, and we want
them to be consistent, i.e. whatever the user has entered, the CPU should see it as
soon as possible. That is the reason we may want to disable caching. This bit enables
or disables caching of the page.
• Modified bit – The modified bit says whether the page has been modified, i.e. whether
something has been written to the page. If a page is modified, then whenever it is
replaced with some other page, the modified contents must be written back (saved) to
the hard disk. The bit is set to 1 by hardware on a write access to the page, and it is
used to avoid an unnecessary write-back when an unmodified page is swapped out.
Sometimes this modified bit is also called the Dirty bit.
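The bits above are typically packed into a single page table entry word. A minimal sketch (bit positions and names assumed for illustration, not from any particular architecture):

```python
# Hypothetical bit positions within a page table entry word.
PRESENT, PROTECTION_RW, REFERENCED, CACHE_DISABLED, MODIFIED = (
    1 << 0, 1 << 1, 1 << 2, 1 << 3, 1 << 4)

entry = 0
entry |= PRESENT      # page is resident in memory (valid)
entry |= REFERENCED   # set by hardware on access
entry |= MODIFIED     # set by hardware on write (dirty bit)

print(bool(entry & PRESENT))   # True
# A set dirty bit means the page must be written back on eviction:
print(bool(entry & MODIFIED))  # True
```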
Demand Paging
• Demand Paging : The process of loading the page
into memory on demand (whenever page fault
occurs) is known as demand paging.
• The process includes the following steps :
• If the CPU tries to refer to a page that is
currently not available in the main memory, it
generates an interrupt indicating a memory
access fault.
• The OS puts the interrupted process in a
blocking state. For the execution to proceed the
OS must bring the required page into the
memory.
• The OS will locate the required page on the
backing store (secondary memory).
• The required page will be brought into a free
frame of physical memory. If no frame is
free, a page replacement algorithm is used
for the decision-making of replacing a page
in physical address space.
• The page table will be updated accordingly.
• The signal will be sent to the CPU to continue
the program execution and it will place the
process back into the ready state.
Performance of Demand Paging-TBL
• The process size may sometimes be big, so the
required page table will also be big, and registers
cannot hold all the Page Table Entries of the page
table. To overcome this, a high-speed cache for
page table entries is set up, called a Translation
Lookaside Buffer (TLB).
• The Translation Lookaside Buffer (TLB) is a
special cache used to keep track of recently used
translations. The TLB contains the page table
entries that have been most recently used.
• Effective memory access time(EMAT) : TLB is used to
reduce effective memory access time as it is a high
speed associative cache.
EMAT = h*(c + m) + (1-h)*(c + 2m)
where, h = hit ratio of TLB
m = Memory access time
c = TLB access time
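The EMAT formula can be evaluated directly. A short sketch; the hit ratio and timings below are illustrative assumptions, not values from the text:

```python
# EMAT = h*(c + m) + (1 - h)*(c + 2m)
# On a TLB hit: one TLB lookup + one memory access.
# On a TLB miss: one TLB lookup + two memory accesses (page table + data).
def emat(h, c, m):
    return h * (c + m) + (1 - h) * (c + 2 * m)

# e.g. 90% TLB hit ratio, 10 ns TLB access, 100 ns memory access:
print(round(emat(0.9, 10, 100), 2))  # 120.0 ns
```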
Swapping
• The interchange of data between virtual
memory and real memory is called swapping, and
the space on disk is called swap space. Swap space
helps the computer's operating system pretend
that it has more RAM than it actually has. It is
also called a swap file.
• Swapping a process out means removing all of its
pages from memory, or marking them so that
they will be removed by the normal page
replacement process. Suspending a process
ensures that it is not runnable while it is
swapped out. At some later time, the system
swaps back the process from the secondary
storage to the main memory.
• The action of moving a process out from main
memory to secondary memory is called Swap
Out.
• The action of moving a process from
secondary memory back to main memory is
called Swap In.
Page Fault Handling
• A page fault will happen if a program tries to access a
piece of memory that does not exist in physical
memory (main memory). The fault notifies the
operating system, which must locate the data in
virtual memory (on secondary storage such as a hard
disk) and bring it into primary memory.
• Now, let's understand the procedure of page fault
handling in the OS:
1. Firstly, an internal table for this process is
checked to determine whether the reference was
a valid or an invalid memory access.
2. If the reference is invalid, the process is
terminated. Otherwise, the page will be
paged in.
3. After that, a free frame is found from the
free-frame list.
4. Now, a disk operation is scheduled to get
the required page from the disk.
5. When the I/O operation is completed, the
process's page table is updated with the new
frame number, and the invalid bit is changed.
Now it is a valid page reference.
6. Finally, the instruction that caused the page
fault is restarted.
Page Replacement Algorithms
• In case of a page fault, Operating System might
have to replace one of the existing pages with
the newly needed page. Different page
replacement algorithms suggest different ways
to decide which page to replace. The target for
all algorithms is to reduce the number of page
faults.
• Page Replacement Algorithms :
1. First In First Out (FIFO)
2. Optimal Page replacement
3. Least Recently Used
First-In-First-Out (FIFO) Algorithm
• This is the simplest page replacement algorithm. In this
algorithm, the operating system keeps track of all pages
in the memory in a queue, the oldest page is in the front
of the queue. When a page needs to be replaced, the page at
the front of the queue is selected for removal.
• Example-1: Consider the page reference string
1, 3, 0, 3, 5, 6, 3
with 3 page frames. Find the number of page faults.
• Initially, all slots are empty, so when 1, 3, 0 come
they are allocated to the empty slots → 3 Page
Faults.
• When 3 comes, it is already in memory → 0 Page
Faults.
• Then 5 comes; it is not available in memory, so it
replaces the oldest page, i.e. 1 → 1 Page Fault.
• 6 comes; it is also not available in memory, so it
replaces the oldest page, i.e. 3 → 1 Page Fault.
• Finally, when 3 comes it is not available, so it
replaces 0 → 1 Page Fault.
• Total = 6 page faults.
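The trace above can be reproduced with a short simulation (a Python sketch; pages are evicted strictly in arrival order, matching FIFO):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest resident page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6
```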
Belady’s Anomaly
• Generally, on increasing the number of frames allocated to a process’s
virtual memory, its execution becomes faster as fewer page faults occur.
Sometimes the reverse happens, i.e. more page faults occur when more
frames are allocated to a process. This most unexpected result is termed
Belady’s Anomaly.
• Belady’s anomaly is the name given to the phenomenon where increasing
the number of page frames results in an increase in the number of page
faults for a given memory access pattern.
• Belady’s Anomaly can never occur in the Optimal and LRU algorithms for
any reference string, as they belong to the class of stack-based page
replacement algorithms.
• Reason for Belady’s Anomaly in FIFO – A stack-based algorithm is one for
which it can be shown that the set of pages in memory for N frames is
always a subset of the set of pages that would be in memory with N + 1
frames. For LRU replacement, the set of pages in memory would be the n
most recently referenced pages. If the number of frames increases, these
n pages will still be the most recently referenced and so will still be in
memory. In FIFO, however, if a page b came into physical memory before a
page a, then the priority of replacing b is greater than that of a; but this
ordering is not independent of the number of page frames, and hence FIFO
does not follow a stack page replacement policy and therefore suffers from
Belady’s Anomaly.
• Assume a system that has no pages loaded in memory and uses the FIFO
page replacement algorithm. Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 3 frames this string causes 9 page faults, but with 4 frames it causes 10.
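The anomaly for this string can be checked with a small FIFO simulation (Python sketch): counting faults at 3 and 4 frames shows more frames producing more faults.

```python
from collections import deque

def fifo_faults(refs, num_frames):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest resident page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```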
Optimal Page Replacement
• In this algorithm, the page replaced is the one that
will not be used for the longest duration of time in
the future.
• Optimal page replacement is perfect, but not
possible in practice as the operating system cannot
know future requests. The use of Optimal page
replacement is to set up a benchmark against
which other replacement algorithms can be
analysed.
• Example-2: Consider the page reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
with 4 page frames. Find the number of page faults.
• Initially, all slots are empty, so when 7, 0, 1, 2 come
they are allocated to the empty slots → 4 Page Faults.
• 0 is already there → 0 Page Faults.
• When 3 comes, it takes the place of 7 because 7 is
not used for the longest duration of time in the
future → 1 Page Fault.
• 0 is already there → 0 Page Faults.
• 4 takes the place of 1 → 1 Page Fault.
• For the rest of the reference string → 0 Page Faults,
because the pages are already available in memory.
• Total = 6 page faults.
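The optimal trace above can be verified with a simulation (a Python sketch; on a fault, the resident page whose next use lies farthest in the future is evicted):

```python
def optimal_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            # Distance to each resident page's next use (inf if never used again).
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```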
Least Recently Used
• In this algorithm, the page replaced is the one
that was least recently used.
• Example-3: Consider the page reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
with 4 page frames. Find the number of page faults.
• Initially, all slots are empty, so when 7, 0, 1, 2 are
allocated to the empty slots → 4 Page Faults.
• 0 is already there → 0 Page Faults.
• When 3 comes, it takes the place of 7 because 7
is the least recently used → 1 Page Fault.
• 0 is already in memory → 0 Page Faults.
• 4 takes the place of 1 → 1 Page Fault.
• For the rest of the reference string → 0 Page
Faults, because the pages are already available in
the memory.
• Total = 6 page faults.
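The LRU trace above can likewise be checked with a simulation (a Python sketch; on a fault, the page that was referenced least recently is evicted):

```python
def lru_faults(refs, num_frames):
    frames, faults = [], 0        # frames ordered: least recent first
    for page in refs:
        if page in frames:
            frames.remove(page)   # refresh: move to most-recent end
            frames.append(page)
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)     # evict the least recently used page
            frames.append(page)
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```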
Thrashing
• As the degree of multiprogramming increases, CPU utilization initially rises, up
to some point (lambda) at which the system resources are utilized 100%. But if we
further increase the degree of multiprogramming, CPU utilization drastically
falls: the system spends most of its time only on page replacement, and the time
taken to complete the execution of each process increases. This situation in the
system is called thrashing.
• Causes of Thrashing :
1. High degree of multiprogramming : If the number of processes keeps on
increasing in the memory then the number of frames allocated to each process
will be decreased. So, fewer frames will be available for each process. Due to
this, a page fault will occur more frequently and more CPU time will be wasted in
just swapping in and out of pages and the utilization will keep on decreasing.
For example:
Let free frames = 400
Case 1: Number of processes = 100. Then each process gets 4 frames.
Case 2: Number of processes = 400. Then each process gets 1 frame.
Case 2 is a condition of thrashing: as the number of processes increases,
frames per process decrease, and CPU time is consumed in just
swapping pages.
2. Lack of Frames: If a process has fewer frames, then fewer pages of that
process will be able to reside in memory and hence more frequent swapping in
and out will be required. This may lead to thrashing. Hence sufficient amount of
frames must be allocated to each process in order to prevent thrashing.
• Recovery from Thrashing:
1. Do not allow the system to go into thrashing by instructing the long-term
scheduler not to bring new processes into memory after the threshold.
2. If the system is already thrashing, instruct the medium-term scheduler to
suspend some of the processes so that the system can recover from
thrashing.
Cache memory Organisation
• Cache Performance: The performance of the cache
is measured in terms of the hit ratio. When the CPU
refers to memory and finds the data or instruction in
the Cache Memory, it is known as a cache hit. If the
desired data or instruction is not found in the cache
memory and the CPU refers to the main memory to
find it, it is known as a cache miss.
• Hit + Miss = Total CPU References
• Hit Ratio (h) = Hit / (Hit + Miss)
• Consider a memory system with two levels: cache
and main memory. If Tc is the time to access the
cache memory and Tm is the time to access the
main memory, then we can write:
Tavg = Average time to access memory
Tavg = h * Tc + (1-h)*(Tm + Tc)
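The average access time formula can be evaluated directly (a Python sketch; the hit ratio and timings below are illustrative assumptions):

```python
# Tavg = h*Tc + (1 - h)*(Tm + Tc): a hit costs one cache access,
# a miss costs a cache access followed by a main-memory access.
def avg_access_time(h, tc, tm):
    return h * tc + (1 - h) * (tm + tc)

# e.g. hit ratio 0.95, cache access 2 ns, main memory access 100 ns:
print(round(avg_access_time(0.95, 2, 100), 2))  # 7.0 ns
```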
Locality of Reference
• Locality of reference refers to a phenomenon in which a computer program
tends to access the same set of memory locations over a particular time period.
• Locality of Reference refers to the tendency of the computer program to
access instructions whose addresses are near one another. The property of
locality of reference is mainly shown by loops and subroutine calls in a
program.
• There are two ways with which data or instruction is fetched from main
memory and get stored in cache memory. These two ways are the following:
• Temporal Locality –Temporal locality means current data or instruction
that is being fetched may be needed soon. So we should store that data
or instruction in the cache memory so that we can avoid again searching
in main memory for the same data. When CPU accesses the current
main memory location for reading required data or instruction, it also
gets stored in the cache memory which is based on the fact that same
data or instruction may be needed in near future. This is known as
temporal locality. If some data is referenced, then there is a high
probability that it will be referenced again in the near future.
• Spatial Locality –Spatial locality means instruction or data near to the
current memory location that is being fetched, may be needed soon in
the near future. This is slightly different from the temporal locality. Here
we are talking about nearly located memory locations while in temporal
locality we were talking about the actual memory location that was
being fetched.
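Spatial locality can be illustrated with a traversal experiment (a rough Python sketch; absolute timings are machine-dependent, and pure-Python lists show the cache effect only weakly compared to contiguous arrays in C):

```python
import timeit

# Row-by-row traversal touches adjacent elements (good spatial locality);
# column-first traversal jumps between rows.
N = 500
matrix = [[0] * N for _ in range(N)]

def row_major():
    s = 0
    for i in range(N):
        for j in range(N):
            s += matrix[i][j]
    return s

def col_major():
    s = 0
    for j in range(N):
        for i in range(N):
            s += matrix[i][j]
    return s

print(timeit.timeit(row_major, number=5))
print(timeit.timeit(col_major, number=5))
```

Both functions compute the same sum; any timing gap comes purely from the order in which memory is touched.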