

Notes on Unit-4

Memory Management in Operating System (OS)


What do you mean by memory management?
Memory is an important part of the computer, used to store data. Its management is
critical to the computer system because the amount of main memory available in a computer
system is very limited. At any time, many processes compete for it. Moreover, to
increase performance, several processes are executed simultaneously. For this, several
processes must be kept in the main memory, so it is even more important to manage them
effectively.

Role of Memory management


Following are the important roles of memory management in a computer system:
o The memory manager keeps track of the status of each memory location, whether it is
free or allocated. It abstracts primary memory so that software perceives a large
block of memory as allocated to it.
o Memory manager permits computers with a small amount of main memory to execute
programs larger than the size or amount of available memory. It does this by moving
information back and forth between primary memory and secondary memory by using
the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each
process from being corrupted by another process. If this is not ensured, then the
system may exhibit unpredictable behavior.
o Memory managers should enable sharing of memory space between processes. Thus,
two programs can reside at the same memory location although at different times.
Memory Management Techniques:
The memory management techniques can be classified into following main categories:
o Contiguous memory management schemes
o Non-Contiguous memory management schemes

Contiguous memory management schemes:


In a Contiguous memory management scheme, each program occupies a single contiguous
block of storage locations, i.e., a set of memory locations with consecutive addresses.
Single contiguous memory management schemes:
The Single contiguous memory management scheme is the simplest memory management
scheme used in the earliest generation of computer systems. In this scheme, the main memory
is divided into two contiguous areas or partitions. The operating systems reside permanently
in one partition, generally at the lower memory, and the user process is loaded into the other
partition.
Advantages of Single contiguous memory management schemes:
o Simple to implement.
o Easy to manage and design.
o In a Single contiguous memory management scheme, once a process is loaded, it is
given the full processor's time, and no other process will interrupt it.
Disadvantages of Single contiguous memory management schemes:
o Wastage of memory space due to unused memory as the process is unlikely to use all
the available memory space.
o The CPU remains idle, waiting for the disk to load the binary image into the main
memory.
o A program cannot be executed if it is too large to fit in the entire available main
memory space.
o It does not support multiprogramming, i.e., it cannot handle multiple programs
simultaneously.
Multiple Partitioning:
The single contiguous memory management scheme is inefficient as it limits computers to
executing only one program at a time, resulting in wasted memory space and CPU time. The
problem of inefficient CPU use can be overcome using multiprogramming that allows more
than one program to run concurrently. To switch between two processes, the operating
systems need to load both processes into the main memory. The operating system needs to
divide the available main memory into multiple parts to load multiple processes into the main
memory. Thus multiple processes can reside in the main memory simultaneously.
The multiple partitioning schemes can be of two types:
o Fixed Partitioning
o Dynamic Partitioning
Fixed Partitioning
The main memory is divided into several fixed-sized partitions in a fixed partition memory
management scheme or static partitioning. These partitions can be of the same size or
different sizes. Each partition can hold a single process. The number of partitions determines
the degree of multiprogramming, i.e., the maximum number of processes in memory. These
partitions are made at the time of system generation and remain fixed after that.
Advantages of Fixed Partitioning memory management schemes:
o Simple to implement.
o Easy to manage and design.
Disadvantages of Fixed Partitioning memory management schemes:
o This scheme suffers from internal fragmentation.
o The number of partitions is specified at the time of system generation.
Dynamic Partitioning
The dynamic partitioning was designed to overcome the problems of a fixed partitioning
scheme. In a dynamic partitioning scheme, each process occupies only as much memory as
it requires when loaded for processing. Requested processes are allocated memory until the
entire physical memory is exhausted or the remaining space is insufficient to hold the
requesting process. In this scheme the partitions used are of variable size, and the number of
partitions is not defined at the system generation time.
Advantages of Dynamic Partitioning memory management schemes:
o Simple to implement.
o Easy to manage and design.
Disadvantages of Dynamic Partitioning memory management schemes:
o This scheme suffers from external fragmentation.
o Memory allocation and deallocation are complex, since the number and size of partitions change at run time rather than being fixed at system generation.
Non-Contiguous memory management schemes:
In a Non-Contiguous memory management scheme, the program is divided into different
blocks and loaded at different portions of the memory that need not necessarily be adjacent to
one another. This scheme can be classified depending upon the size of blocks and whether the
blocks reside in the main memory or not.
What is paging?
Paging is a technique that eliminates the requirements of contiguous allocation of main
memory. In this, the main memory is divided into fixed-size blocks of physical memory
called frames. The size of a frame should be kept the same as that of a page so as to make full use of
the main memory and avoid external fragmentation.
Advantages of paging:
o Pages reduce external fragmentation.
o Simple to implement.
o Memory efficient.
o Due to the equal size of frames, swapping becomes very easy.
o It is used for faster access of data.
What is Segmentation?
Segmentation is a technique that eliminates the requirements of contiguous allocation of main
memory. In this, the main memory is divided into variable-size blocks of physical memory
called segments. It is based on the way the programmer follows to structure their programs.
With segmented memory allocation, each job is divided into several segments of different
sizes, one for each module. Functions, subroutines, stack, array, etc., are examples of such
modules.

Fixed Partitioning
The earliest and one of the simplest techniques that can be used to load more than one
process into the main memory is fixed partitioning, or contiguous memory allocation.
In this technique, the main memory is divided into partitions of equal or different sizes. The
operating system always resides in the first partition, while the other partitions can be used to
store user processes. The memory is assigned to the processes in a contiguous way.
In fixed partitioning,
1. The partitions cannot overlap.
2. A process must be contiguously present in a partition for the execution.
There are various cons of using this technique.
1. Internal Fragmentation
If the size of the process is less than the total size of the partition, then part of the
partition is wasted and remains unused. This wastage of memory is called internal
fragmentation.
As shown in the image below, a 4 MB partition is used to load only a 3 MB process, and the
remaining 1 MB is wasted.
2. External Fragmentation
The total unused space of various partitions cannot be used to load the processes even though
there is space available but not in the contiguous form.
As shown in the image below, the remaining 1 MB of each partition cannot be used as
a unit to store a 4 MB process. Even though sufficient space is available in total, the
process will not be loaded because the space is not contiguous.
3. Limitation on the size of the process
If the process size is larger than the size of the largest partition, then that process cannot
be loaded into the memory. Therefore, a limitation is imposed on the process size: it
cannot be larger than the largest partition.
4. Degree of multiprogramming is less
The degree of multiprogramming simply means the maximum number of processes that
can be loaded into the memory at the same time. In fixed partitioning, the degree of
multiprogramming is fixed and low, because the size of a partition cannot be
varied according to the size of the process.
Dynamic Partitioning
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this
technique, the partition size is not declared initially. It is declared at the time of process
loading.
The first partition is reserved for the operating system. The remaining space is divided into
parts. The size of each partition will be equal to the size of the process. The partition size
varies according to the need of the process so that the internal fragmentation can be avoided.
Advantages of Dynamic Partitioning over fixed partitioning

1. No Internal Fragmentation
Since the partitions in dynamic partitioning are created according to the needs of
the process, there will not be any internal fragmentation: no unused space remains
inside a partition.
2. No Limitation on the size of the process
In fixed partitioning, a process larger than the largest partition
could not be executed due to the lack of sufficient contiguous memory. In dynamic
partitioning, the process size is not restricted in this way, since the partition size is decided
according to the process size.
3. Degree of multiprogramming is dynamic
Due to the absence of internal fragmentation, there will not be any unused space in the
partition hence more processes can be loaded in the memory at the same time.
Disadvantages of dynamic partitioning

External Fragmentation
Absence of internal fragmentation doesn't mean that there will not be external fragmentation.
Let's consider three processes P1 (1 MB) and P2 (3 MB) and P3 (1 MB) are being loaded in
the respective partitions of the main memory.
After some time P1 and P3 got completed and their assigned space is freed. Now there are
two unused partitions (1 MB and 1 MB) available in the main memory but they cannot be
used to load a 2 MB process in the memory since they are not contiguously located.
The rule says that the process must be contiguously present in the main memory to get
executed. We need to change this rule to avoid external fragmentation.

Complex Memory Allocation


In Fixed partitioning, the list of partitions is made once and will never change but in dynamic
partitioning, the allocation and deallocation is very complex since the partition size will be
varied every time when it is assigned to a new process. OS has to keep track of all the
partitions.
Due to the fact that the allocation and deallocation are done very frequently in dynamic
memory allocation and the partition size will be changed at each time, it is going to be very
difficult for OS to manage everything.

Need for Paging


Disadvantage of Dynamic Partitioning
The main disadvantage of dynamic partitioning is external fragmentation. This can
be removed by compaction, but as we have discussed earlier, compaction makes the
system inefficient.
We need to find out a mechanism which can load the processes in the partitions in a more
optimal way. Let us discuss a dynamic and flexible mechanism called paging.
Need for Paging
Let's consider a process P1 of size 2 MB and a main memory divided into three
partitions, two of which are holes of size 1 MB each.
P1 needs 2 MB of space in the main memory to be loaded. We have two holes of 1 MB each, but
they are not contiguous.
Although there is 2 MB of space available in the main memory in the form of those holes,
it remains useless until it becomes contiguous. This is a serious problem to address.
We need to have some kind of mechanism which can store one process at different locations
of the memory.
The idea behind paging is to divide the process into pages so that we can store them in the
memory in different holes. We will discuss paging with examples in the next sections.

Paging in OS (Operating System)


In Operating Systems, Paging is a storage mechanism used to retrieve processes from the
secondary storage into the main memory in the form of pages.
The main idea behind the paging is to divide each process in the form of pages. The main
memory will also be divided in the form of frames.
One page of the process is to be stored in one of the frames of the memory. The pages can be
stored at the different locations of the memory but the priority is always to find the
contiguous frames or holes.
Pages of the process are brought into the main memory only when they are required
otherwise they reside in the secondary storage.
Different operating systems define different frame sizes, but all frames must be of
equal size. Since pages are mapped to frames in paging, the page size
must be the same as the frame size.

Example
Let us consider the main memory size 16 Kb and Frame size is 1 KB therefore the main
memory will be divided into the collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is
divided into pages of 1 KB each so that one page can be stored in one frame.
Initially, all the frames are empty, so the pages of the processes are stored in a
contiguous way.
Frames, pages and the mapping between the two is shown in the image below.

Let us consider that P2 and P4 are moved to the waiting state after some time. Now, 8 frames
become empty, and other pages can be loaded in that empty place. The process P5 of
size 8 KB (8 pages) is waiting inside the ready queue.
We have 8 non-contiguous frames available in the memory, and paging
provides the flexibility of storing the process at different places. Therefore, we can load
the pages of process P5 in place of P2 and P4.
Memory Management Unit
The purpose of Memory Management Unit (MMU) is to convert the logical address into the
physical address. The logical address is the address generated by the CPU for every page
while the physical address is the actual address of the frame where each page will be stored.
When a page is to be accessed by the CPU by using the logical address, the operating system
needs to obtain the physical address to access that page physically.
The logical address has two parts.
1. Page Number
2. Offset
Memory management unit of OS needs to convert the page number to the frame number.
Example
Basics of Binary Addresses
The computer system assigns binary addresses to the memory locations. The number of bits
used in an address determines how many memory locations it can address.
Using 1 bit, we can address two memory locations. Using 2 bits we can address 4 and using 3
bits we can address 8 memory locations.
A pattern can be identified in the mapping between the number of bits in the address and the
range of the memory locations.
We know,
1. Using 1 bit, we can represent 2^1 = 2 memory locations.
2. Using 2 bits, we can represent 2^2 = 4 memory locations.
3. Using 3 bits, we can represent 2^3 = 8 memory locations.
Generalizing, using n bits we can assign 2^n memory locations:

n bits of address → 2^n memory locations

These n bits can be divided into two parts: k bits for the page number and (n-k) bits for the offset.
Considering the above image, let's say that the CPU demands the 10th word of the 4th page of
process P1. Since page number 4 of process P1 is stored at frame number 9,
the 10th word of the 9th frame will be returned as the physical address.
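As an illustration of this split-and-map translation, here is a minimal Python sketch; the page size and the page-table contents are assumed values chosen to match the example above, not part of the original notes:

```python
PAGE_SIZE = 1024  # assumed 1 KB pages, matching the 1 KB frames above

# Hypothetical page table for one process: page number -> frame number.
page_table = {0: 2, 1: 5, 2: 7, 3: 6, 4: 9}

def translate(logical_address: int) -> int:
    """Split a logical address into (page number, offset) and map the
    page number to a frame number through the page table."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]  # a miss here is a page fault
    return frame_number * PAGE_SIZE + offset

# The 10th word (offset 10) of page 4 lands in frame 9:
print(translate(4 * PAGE_SIZE + 10))  # 9 * 1024 + 10 = 9226
```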

Physical and Logical Address Space


Physical Address Space
Physical address space in a system can be defined as the size of the main memory. It is really
important to compare the process size with the physical address space. The process size must
be less than the physical address space.

Physical Address Space = Size of the Main Memory

If, physical address space = 64 KB = 2 ^ 6 KB = 2 ^ 6 X 2 ^ 10 Bytes = 2 ^ 16 bytes

Let us consider,
word size = 8 Bytes = 2 ^ 3 Bytes

Hence,
Physical address space (in words) = (2 ^ 16) / (2 ^ 3) = 2 ^ 13 Words

Therefore,
Physical Address = 13 bits

In General,
If, Physical Address Space = N Words

then, Physical Address = log2 N


Logical Address Space
Logical address space can be defined as the size of the process. The process must be small
enough to reside in the main memory.
Let's say,
Logical Address Space = 128 MB = (2 ^ 7 X 2 ^ 20) Bytes = 2 ^ 27 Bytes
Word size = 4 Bytes = 2 ^ 2 Bytes

Logical Address Space (in words) = (2 ^ 27) / (2 ^ 2) = 2 ^ 25 Words


Logical Address = 25 Bits

In general,
If, logical address space = L words
Then, Logical Address = Log2L bits
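Both calculations follow the same pattern, which a short Python helper can capture (the sizes below are the example values from above):

```python
import math

def address_bits(space_bytes: int, word_bytes: int) -> int:
    """Bits needed to address every word in a space of the given size."""
    return int(math.log2(space_bytes // word_bytes))

print(address_bits(64 * 1024, 8))          # physical: 64 KB, 8 B words -> 13
print(address_bits(128 * 1024 * 1024, 4))  # logical: 128 MB, 4 B words -> 25
```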
What is a Word?
A word is the smallest addressable unit of the memory; it is a collection of bytes. Every operating
system defines its own word size, based on the n-bit address that is input to the
decoder and the 2^n memory locations produced by the decoder.
Page Table in OS
Page Table is a data structure used by the virtual memory system to store the mapping
between logical addresses and physical addresses.
Logical addresses are generated by the CPU for the pages of a process; therefore, they are
the addresses used by the processes.
Physical addresses are the actual frame address of the memory. They are generally used by
the hardware or more specifically by RAM subsystems.
The image given below considers,
Physical Address Space = M words
Logical Address Space = L words
Page Size = P words

Physical Address = log2 M = m bits

Logical Address = log2 L = l bits
Page offset = log2 P = p bits

Mapping from page table to main memory


In operating systems, there is always a requirement of mapping from logical address to the
physical address. However, this process involves various steps which are defined as follows.
1. Generation of logical address
CPU generates logical address for each page of the process. This contains two parts: page
number and offset.
2. Scaling
To determine the actual page number of the process, CPU stores the page table base in a
special register. Each time the address is generated, the value of the page table base is added
to the page number to get the actual location of the page entry in the table. This process is
called scaling.
3. Generation of physical Address
The frame number of the desired page is determined from its entry in the page table. A physical
address is generated, which also contains two parts: frame number and offset. The offset is
the same as the offset of the logical address, so it is copied from the logical
address.
4. Getting Actual Frame Number
The frame number and the offset from the physical address is mapped to the main memory in
order to get the actual word address.
The CPU always accesses the processes through their logical addresses. However, the main
memory recognizes physical address only.
In this situation, a unit named as Memory Management Unit comes into the picture. It
converts the page number of the logical address to the frame number of the physical address.
The offset remains same in both the addresses.
To perform this task, Memory Management unit needs a special kind of mapping which is
done by page table. The page table stores all the Frame numbers corresponding to the page
numbers of the page table.
In other words, the page table maps the page number to its actual location (frame number) in
the memory.
The image given below shows how the required word of the frame is accessed with the
help of the offset.
Page Offset (d) - It denotes the number of bits required to represent a word on a page,
i.e., the page size in address bits.
Size of the page table
The part of the process currently being executed by the CPU must be present in the
main memory during that time period. The page table must also be present in the main
memory at all times, because it has an entry for every page.
The size of the page table depends upon the number of entries in the table and the bytes
stored in one entry.
Let's consider,
1. Logical Address = 24 bits
2. Logical Address space = 2 ^ 24 bytes
3. Let's say, Page size = 4 KB = 2 ^ 12 Bytes
4. Page offset = 12 bits
5. Number of bits for the page number = Logical Address - Page Offset = 24 - 12 = 12 bits
6. Number of pages = 2 ^ 12 = 4 K pages
7. Let's say, Page table entry = 1 Byte
8. Therefore, the size of the page table = 4 K entries X 1 Byte = 4 KB
Here we are lucky enough to get the page table size equal to the frame size. Now, the page
table will be simply stored in one of the frames of the main memory. The CPU maintains a
register which contains the base address of that frame, every page number from the logical
address will first be added to that base address so that we can access the actual location of the
word being asked.
However, in some cases, the page table size and the frame size might not be the same. In those
cases, the page table is treated as a collection of pages and is stored across
multiple frames.
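The same calculation can be scripted. A small Python sketch using the example's assumed values (24-bit logical address, 4 KB pages, 1-byte entries):

```python
logical_address_bits = 24
page_size = 4 * 1024    # 4 KB pages -> 12 offset bits
entry_size = 1          # assumed 1-byte page table entries

offset_bits = page_size.bit_length() - 1                 # 12
page_number_bits = logical_address_bits - offset_bits    # 12
num_pages = 2 ** page_number_bits                        # 4096 (4 K pages)
page_table_size = num_pages * entry_size                 # 4096 bytes = 4 KB
print(offset_bits, page_number_bits, num_pages, page_table_size)
```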

Finding Optimal Page Size


We have seen that a bigger page table causes extra overhead, because we have to
divide the table into pages and then store those pages in the main memory.
Our concern must be about executing processes not on the execution of page table. Page table
provides a support for the execution of the process. The larger the page Table, the higher the
overhead.
We know that,
1. Page Table Size = number of page entries in page table X size of one page entry
2. Let's consider an example,
3. Virtual Address Space = 2 GB = 2 X 2 ^ 30 Bytes
4. Page Size = 2 KB = 2 X 2 ^ 10 Bytes
5. Number of Pages in Page Table = (2 X 2 ^ 30)/(2 X 2 ^ 10) = 1 M pages
There will be 1 million pages, which is quite a big number. However, try making the page size
larger, say 2 MB.
Then, Number of pages in page table = (2 X 2 ^ 30)/(2 X 2 ^ 20) = 1 K pages.
If we compare the two scenarios, we find that the page table size is inversely proportional
to the page size.
In paging, there is always wastage on the last page. If the virtual address space is not a
multiple of the page size, then some bytes remain and a full page must be assigned to
hold them. This is simply an overhead.
Let's consider,
1. Page Size = 2 KB
2. Virtual Address Space = 17 KB
3. Then number of pages = ceil(17 KB / 2 KB) = 9
The number of pages will be 9, although the 9th page contains only 1 KB and the
remaining 1 KB of that page is wasted.
In general,
1. If page size = p bytes
2. Entry size = e bytes
3. Virtual Address Space = S bytes
4. Then, overhead O = (S/p) X e + (p/2)
On an average, the wasted number of pages in a virtual space is p/2(the half of total number
of pages).
For the minimal overhead,
1. ∂O/∂p = 0
2. -(S X e)/(p^2) + 1/2 = 0
3. p = √ (2.S.e) bytes
Hence, if the page size is √(2.S.e) bytes, then the overhead will be minimal.
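The formula can be checked numerically. A small Python sketch (the 2 GB space and 4-byte entries are assumed example values, not from the notes):

```python
import math

def paging_overhead(S: float, e: float, p: float) -> float:
    """O(p) = (S/p)*e + p/2: page table space plus average last-page waste."""
    return (S / p) * e + p / 2

def optimal_page_size(S: float, e: float) -> float:
    """Minimizer of O(p): p = sqrt(2*S*e)."""
    return math.sqrt(2 * S * e)

S = 2 ** 31   # assumed 2 GB virtual address space
e = 4         # assumed 4-byte page table entries
p = optimal_page_size(S, e)
print(p, paging_overhead(S, e, p))  # 131072.0 (128 KB) and the minimal O(p)
```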
Virtual Memory in Operating System

Virtual memory is a memory management technique that provides an “idealized abstraction
of the storage resources” to the user while managing the actual physical memory of the
system. It creates the illusion for users that they have a very large (virtually infinite) memory,
even though the physical memory (RAM) is limited. It allows an operating system to run
large applications or multiple applications simultaneously, even when the system’s physical
memory is insufficient.

Key Concepts of Virtual Memory:


1. Logical and Physical Address Space:
o Logical Address Space: The set of addresses that the CPU generates during
program execution.
o Physical Address Space: The set of addresses corresponding to the actual
physical memory locations.
o Virtual memory separates these two, allowing programs to use a logical
address space that can be much larger than the actual physical memory.
2. Paging:
o Paging is a common implementation technique of virtual memory.
o The logical memory is divided into fixed-size blocks called pages, while the
physical memory is divided into blocks of the same size called frames.
o The operating system maintains a page table to keep track of where each page
is located in physical memory or secondary storage (like a hard drive).
3. Page Table:
o The page table maps the logical addresses generated by the CPU to the
physical addresses in memory.
o It includes information like the base address of each page in physical memory,
and whether the page is currently in memory or on disk.
4. Demand Paging:
o Demand paging loads pages into physical memory only when they are needed,
reducing the overall memory usage.
o If a page is not in physical memory when the CPU requests it, a page fault
occurs, and the operating system loads the required page from disk into
memory.
5. Page Fault:
o A page fault occurs when the CPU references a page that is not currently in
physical memory.
o When a page fault happens, the OS must retrieve the page from secondary
storage (e.g., hard disk) and load it into a free frame in physical memory.
6. Swapping:
o When physical memory is full, the operating system may use a technique
called swapping to free up memory. It swaps out pages (i.e., moves them)
from memory to a storage device (typically a hard disk) and swaps in the
required pages from the disk into memory.
7. Thrashing:
o Thrashing occurs when a system spends more time swapping pages in and out
of memory than executing processes. It often happens when there is
insufficient physical memory, causing excessive page faults.
8. Translation Lookaside Buffer (TLB):
o The TLB is a specialized cache used to improve the speed of virtual-to-
physical address translation.
o It stores a small number of recent translations of virtual page numbers to
physical frame numbers, allowing for faster retrieval without needing to
consult the page table every time.

Advantages of Virtual Memory:


1. Efficient Memory Use:
o Programs do not need to be entirely loaded into memory, allowing more
programs to run simultaneously.
2. Larger Address Space:
o Programs can use more memory than what is physically available in the
system.
3. Program Isolation:
o Virtual memory provides protection by ensuring that one program cannot
access the memory of another, improving security and stability.
4. Simplified Memory Management:
o Programmers do not need to worry about memory limitations and memory
allocation/deallocation issues, as the operating system handles it.
5. Process Relocation:
o Virtual memory allows processes to be moved in memory or even swapped out
without affecting the programs.

Disadvantages of Virtual Memory:


1. Increased Complexity:
o Virtual memory increases the complexity of the operating system’s memory
management.
2. Performance Overhead:
o Accessing a page not in memory (page fault) can be much slower, as it
requires disk I/O.
3. Thrashing:
o If too many page faults occur, the system may spend more time swapping
pages than doing useful work.

Types of Virtual Memory


In a computer, virtual memory is managed by the Memory Management Unit (MMU), which
is often built into the CPU. The CPU generates virtual addresses that the MMU translates into
physical addresses.
There are two main types of virtual memory:
 Paging
 Segmentation
Paging
Paging divides memory into small fixed-size blocks called pages. When the computer runs
out of RAM, pages that aren’t currently in use are moved to the hard drive, into an area called
a swap file. The swap file acts as an extension of RAM. When a page is needed again, it is
swapped back into RAM, a process known as page swapping. This ensures that the operating
system (OS) and applications have enough memory to run.
Demand Paging: The process of loading a page into memory on demand (whenever a page
fault occurs) is known as demand paging. It includes the following steps:
 If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.
 The OS puts the interrupted process in a blocking state. For the execution to proceed
the OS must bring the required page into the memory.
 The OS will search for the required page in the logical address space.
 The required page will be brought from logical address space to physical address
space. The page replacement algorithms are used for the decision-making of replacing
the page in physical address space.
 The page table will be updated accordingly.
 The signal will be sent to the CPU to continue the program execution and it will place
the process back into the ready state.
Hence whenever a page fault occurs these steps are followed by the operating system and the
required page is brought into memory.
What is Page Fault Service Time?
The time taken to service the page fault is called page fault service time. The page fault
service time includes the time taken to perform all the above six steps.
Let main memory access time = m, page fault service time = s, and page fault rate = p.
Then, Effective memory access time = (p*s) + (1-p)*m
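For instance, a one-line Python helper makes the formula concrete (the timing figures below are assumed, not from the notes):

```python
def effective_access_time(m: float, s: float, p: float) -> float:
    """EAT = p*s + (1 - p)*m, all times in the same unit."""
    return p * s + (1 - p) * m

# Assumed figures: 100 ns memory access, 8 ms (8,000,000 ns) fault service,
# and one fault per 1,000 accesses.
print(effective_access_time(m=100, s=8_000_000, p=0.001))  # ~8099.9 ns
```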
Segmentation
Segmentation divides virtual memory into segments of different sizes. Segments that aren’t
currently needed can be moved to the hard drive. The system uses a segment table to keep
track of each segment’s status, including whether it’s in memory, if it’s been modified, and its
physical address. Segments are mapped into a process’s address space only when needed.
Combining Paging and Segmentation
Sometimes, both paging and segmentation are used together. In this case, memory is divided
into pages, and segments are made up of multiple pages. The virtual address includes both a
segment number and a page number.
Virtual Memory vs Physical Memory
When talking about the differences between virtual memory and physical memory, the
biggest distinction is speed. RAM is much faster than virtual memory, but it is also more
expensive.
When a computer needs storage for running programs, it uses RAM first. Virtual memory,
which is slower, is used only when the RAM is full.

The main differences, feature by feature:
o Definition: Virtual memory is an abstraction that extends the available memory by using disk storage; physical memory (RAM) is the actual hardware that stores data and instructions currently being used by the CPU.
o Location: Virtual memory lives on the hard drive or SSD; physical memory sits on the computer's motherboard.
o Speed: Virtual memory is slower (due to disk I/O operations); physical memory is faster (accessed directly by the CPU).
o Capacity: Virtual memory is larger, limited by disk space; physical memory is smaller, limited by the amount of RAM installed.
o Cost: Virtual memory is lower cost (additional disk storage); physical memory is higher cost (RAM modules).
o Data Access: Virtual memory is accessed indirectly (via paging and swapping); physical memory is accessed directly by the CPU.
o Volatility: Virtual memory is non-volatile (data persists on disk); physical memory is volatile (data is lost when power is off).

What is Swapping?
Swapping a process out means removing all of its pages from memory, or marking them so
that they will be removed by the normal page replacement process. Suspending a process
ensures that it is not runnable while it is swapped out. At some later time, the system swaps
the process back from secondary storage to main memory. When a process is busy
swapping pages in and out, the situation is called thrashing.

What is Thrashing?
At any given time, only a few pages of any process are in the main memory, and therefore
more processes can be maintained in memory. Furthermore, time is saved because unused
pages are not swapped in and out of memory. However, the OS must be clever about how it
manages this scheme. In the steady state, practically all of the main memory will be occupied
with process pages, so that the processor and OS have direct access to as many processes as
possible. Thus, when the OS brings one page in, it must throw another out. If it throws out a
page just before it is used, then it will just have to get that page again almost immediately.
Too much of this leads to a condition called Thrashing. The system spends most of its time
swapping pages rather than executing instructions. So a good page replacement algorithm is
required.
In the given diagram, as the degree of multiprogramming increases up to a certain point
(lambda), the CPU utilization is very high and the system resources are utilized 100%.
But if we increase the degree of multiprogramming further, the CPU utilization
falls drastically, the system spends more time on page replacement, and
the time taken to complete the execution of processes increases. This situation in the
system is called thrashing.
Causes of Thrashing

1. High Degree of Multiprogramming: If the number of processes in memory keeps on
increasing, then the number of frames allocated to each process decreases. With fewer
frames available per process, page faults occur more frequently, more CPU time is
wasted just swapping pages in and out, and utilization keeps decreasing.
For example:
Let free frames = 400
Case 1: Number of processes = 100
Then, each process will get 4 frames.
Case 2: Number of processes = 400
Each process will get 1 frame.
Case 2 is a condition of thrashing, as the number of processes is increased, frames per process
are decreased. Hence CPU time will be consumed just by swapping pages.
2. Lack of Frames: If a process has fewer frames, then fewer pages of that process can
reside in memory, and hence more frequent swapping in and out is required. This
may lead to thrashing. Hence, a sufficient number of frames must be allocated to each process
in order to prevent thrashing.
Recovery of Thrashing
 Do not allow the system to go into thrashing by instructing the long-term scheduler
not to bring the processes into memory after the threshold.
 If the system is already thrashing then instruct the mid-term scheduler to suspend
some of the processes so that we can recover the system from thrashing.
Performance in Virtual Memory
 Let p be the page fault rate (0 <= p <= 1).
 If p = 0, there are no page faults.
 If p = 1, every reference is a fault.
Effective access time (EAT) = (1-p)* Memory Access Time + p * Page fault time.
Page fault time = page fault overhead + swap out + swap in +restart overhead
The performance of a virtual memory management system depends on the total number of
page faults, which in turn depends on "paging policies" and "frame allocation".
Frame Allocation
The number of frames allocated to each process is either static or dynamic.
 Static Allocation: The number of frame allocations to a process is fixed.
 Dynamic Allocation: The number of frames allocated to a process changes.
Paging Policies
 Fetch Policy: It decides when a page should be loaded into memory.
 Replacement Policy: It decides which page in memory should be replaced.
 Placement Policy: It decides where in memory should a page be loaded.
Applications of Virtual memory
Virtual memory has the following important characteristics that increase the capabilities of
the computer system:
 Increased Effective Memory: One major practical application of virtual memory is
that it enables a computer to have more memory than the physical memory
by using disk space. This allows for the running of larger applications and numerous
programs at one time while not necessarily needing an equivalent amount of DRAM.

Demand paging is a memory management scheme used in operating systems to improve
memory usage and system performance. Let's understand demand paging with a real-life
example. Imagine you are reading a very thick book, but you don't want to carry the entire
book around because it's too heavy. Instead, you decide to bring only the pages you need as
you read through the book. When you finish with one page, you can put it away and grab the
next page you need.
In a computer system, the book represents the entire program, and the pages are parts of the
program called “pages” of memory. Demand paging works similarly: instead of loading the
whole program into the computer’s memory at once (which can be very large and take up a
lot of space), the operating system only loads the necessary parts (pages) of the program
when they are needed.
This concept says that we should not load any pages into the main memory until we need
them, or keep all pages in secondary memory until we need them.
What is Demand Paging?
Demand paging is a technique used in virtual memory systems where pages enter main
memory only when requested or needed by the CPU. In demand paging, the operating system
loads only the necessary pages of a program into memory at runtime, instead of loading the
entire program into memory at the start. A page fault occurs when the program needs to
access a page that is not currently in memory.
The operating system then loads the required pages from the disk into memory and updates
the page tables accordingly. This process is transparent to the running program and it
continues to run as if the page had always been in memory.
What is Page Fault?
The term “page miss” or “page fault” refers to a situation where a referenced page is not
found in the main memory.
When a program tries to access a page (a fixed-size block of memory) that isn't currently
loaded in physical memory (RAM), an exception known as a page fault happens. Before
the program can access the required page, the operating system must bring it into
memory from secondary storage (such as a hard drive) in order to handle the page fault.
In modern operating systems, page faults are a common component of virtual memory
management. By enabling programs to operate with more data than can fit in physical
memory at once, they enable the efficient use of physical memory. The operating system is
responsible for coordinating the transfer of data between physical memory and secondary
storage as needed.
What is Thrashing?
Thrashing is the term used to describe a state in which excessive paging activity takes place
in computer systems, especially in operating systems that use virtual memory, severely
impairing system performance. Thrashing occurs when a system's high memory demand and
low physical memory capacity cause it to spend a large amount of time moving pages
between main memory (RAM) and secondary storage, which is typically a hard disk.
It is caused by insufficient physical memory, overloading, and poor memory management.
The operating system may use a variety of techniques to lessen thrashing, including lowering
the number of running processes, adjusting paging parameters, and improving memory
allocation algorithms. Increasing the system’s physical memory (RAM) capacity can also
lessen thrashing by lowering the frequency of page swaps between RAM and the disc.
Pure Demand Paging
Pure demand paging is a specific implementation of demand paging in which the operating
system loads pages into memory only when the program needs them. In pure demand paging,
no pages are loaded into memory when the program starts, and all pages are initially
marked as being on disk.
Operating systems that use pure demand paging as a memory management strategy do not
preload any pages into physical memory before a task starts. A process's address space is
brought into memory one step at a time, with only the parts of the process that are actively
being used loaded from disk as needed.
It is useful for executing huge programs that might not fit totally in memory or for computers
with limited physical memory. If the program accesses a lot of pages that are not in memory
right now, it could also result in a rise in page faults and possible performance overhead.
Operating systems frequently use caching techniques and improve page replacement
algorithms to lessen the negative effects of page faults on system performance as a whole.
Working Process of Demand Paging
Let us understand this with the help of an example. Suppose we want to run a process P
which have four pages P0, P1, P2, and P3. Currently, in the page table, we have pages P1 and
P3.

The operating system‘s demand paging mechanism follows a few steps


in its operation.
 Program Execution: Upon launching a program, the operating system allocates a
certain amount of memory to the program and establishes a process for it.
 Creating Page Tables: To keep track of which program pages are currently in
memory and which are on disk, the operating system makes page tables for each
process.
 Handling Page Fault: When a program tries to access a page that isn’t in memory at
the moment, a page fault happens. In order to determine whether the necessary page is
on disk, the operating system pauses the application and consults the page tables.
 Page Fetch: The operating system loads the necessary page into memory by
retrieving it from the disk if it is there.
 The page’s new location in memory is then reflected in the page table.
 Resuming The Program: The operating system picks up where it left off when the
necessary pages are loaded into memory.
 Page Replacement: If there is not enough free memory to hold all the pages a
program needs, the operating system may need to replace one or more pages currently
in memory with pages from the disk. The page replacement
algorithm used by the operating system determines which pages are selected for
replacement.
 Page Cleanup: When a process terminates, the operating system frees the memory
allocated to the process and cleans up the corresponding entries in the page tables.
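A toy Python model of this fault-and-load flow is sketched below; the page numbers, frame numbers, and free-frame list are invented for illustration, with pages P1 and P3 starting in memory to match the example:

```python
# Page table with a valid bit per page; P1 and P3 start resident (assumed
# frames 4 and 6), P0 and P2 start on "disk".
page_table = {
    0: {"frame": None, "valid": False},
    1: {"frame": 4, "valid": True},
    2: {"frame": None, "valid": False},
    3: {"frame": 6, "valid": True},
}
free_frames = [2, 7]  # assumed free frames for demand loads

def access(page: int) -> int:
    """Return the frame holding `page`, loading it on a page fault."""
    entry = page_table[page]
    if not entry["valid"]:                  # page fault
        frame = free_frames.pop(0)          # assume a free frame exists
        # ... the OS would now read the page from disk into `frame` ...
        entry["frame"], entry["valid"] = frame, True
        print(f"page fault: loaded page {page} into frame {frame}")
    return entry["frame"]

access(1)   # hit: P1 is already resident
access(2)   # fault: P2 is loaded from disk
access(2)   # hit on the second reference
```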
How Demand Paging in OS Affects System Performance?
Demand paging can improve system performance by reducing the memory needed for
programs and allowing multiple programs to run simultaneously. However, if not
implemented properly, it can cause performance issues. When a program needs a part that
isn’t in the main memory, the operating system must fetch it from the hard disk, which takes
time and pauses the program. This can cause delays, and if the system runs out of memory, it
will need to frequently swap pages in and out, increasing delays and reducing performance.
Common Algorithms Used for Demand Paging in OS
Demand paging is a memory management technique that loads parts of a program into
memory only when needed. If a program needs a page that isn’t currently in memory, the
system fetches it from the hard disk. Several algorithms manage this process:
 FIFO (First-In-First-Out): Replaces the oldest page in memory with a new one. It’s
simple but can cause issues if pages are frequently swapped in and out, leading to
thrashing.
 LRU (Least Recently Used): Replaces the page that hasn’t been used for the longest
time. It reduces thrashing more effectively than FIFO but is more complex to
implement.
 LFU (Least Frequently Used): Replaces the page used the least number of times. It
helps reduce thrashing but requires extra tracking of how often each page is used.
 MRU (Most Recently Used): Replaces the page that was most recently used. It’s
simpler than LRU but not as effective in reducing thrashing.
 Random: Randomly selects a page to replace. It’s easy to implement but
unpredictable in performance.
What is the Impact of Demand Paging in Virtual Memory Management?
With demand paging, the operating system swaps memory pages between the main memory
and secondary storage based on need. When a program needs a page not currently in memory,
the operating system retrieves it from secondary storage, a process called a page fault.
Demand paging significantly impacts virtual memory management by allowing the operating
system to use virtual memory efficiently, improving overall system performance. Its main
advantage is reducing the physical memory required, enabling more applications to run at
once and allowing larger programs to run.
However, demand paging has some drawbacks. The page fault mechanism can delay program
execution because the operating system must retrieve pages from secondary storage. This
delay can be minimized by optimizing the page replacement algorithm.
Demand Paging in OS vs Pre-Paging
Demand paging and pre-paging are two memory management techniques used in operating
systems.
Demand paging loads pages from disk into main memory only when they are needed by a
program. This approach saves memory space by keeping only the required pages in memory,
reducing memory allocation costs and improving memory use. However, the initial access
time for pages not in memory can delay program execution.
Pre-paging loads multiple pages into main memory before they are needed by a program. It
assumes that if one page is needed, nearby pages will also be needed soon. Pre-paging can
speed up program execution by reducing delays caused by demand paging but can lead to
unnecessary memory allocation and waste.
Advantages of Demand Paging
So in the Demand Paging technique, there are some benefits that provide efficiency of the
operating system.
 Efficient use of physical memory: Demand paging allows for more efficient use
of memory because only the necessary pages are loaded into memory at any given time.
 Support for larger programs: Programs can be larger than the physical memory
available on the system because only the necessary pages will be loaded into memory.
 Faster program start: Because only part of a program is initially loaded into
memory, programs can start faster than if the entire program were loaded at once.
 Reduced memory usage: Demand paging can help reduce the amount of memory a
program needs, which can improve system performance by reducing the amount of
disk I/O required.
Disadvantages of Demand Paging
 Page Fault Overload: The process of swapping pages between memory and disk can
cause a performance overhead, especially if the program frequently accesses pages
that are not currently in memory.
 Degraded Performance: If a program frequently accesses pages that are not
currently in memory, the system spends a lot of time swapping out pages, which
degrades performance.
 Fragmentation: Demand paging can cause physical memory fragmentation, degrading
system performance over time.
 Complexity: Implementing demand paging in an operating system can be complex,
requiring sophisticated algorithms and data structures to manage page tables and swap
space.

Demand Paging in OS
Consider a main memory with five page frames and the following sequence of page
references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Compare the page replacement
policies First-In-First-Out (FIFO) and Least Recently Used (LRU) on this sequence.
Number of frames = 5
FIFO
According to FIFO, the page that comes into memory first goes out first.
Number of Page Faults = 9
Number of hits = 6
LRU
According to LRU, the page that has not been requested for the longest time is replaced
with the new one.

Number of Page Faults = 9


Number of Hits = 6
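These counts can be checked mechanically. Below is a small Python sketch (not from the original notes) that simulates both policies; with 5 frames it reports 9 faults for each on this reference string, leaving 15 - 9 = 6 hits:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:          # memory full: evict oldest
                mem.remove(order.popleft())
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)              # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)     # evict least recently used
            mem[p] = True
    return faults

refs = [3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3]
print(fifo_faults(refs, 5), lru_faults(refs, 5))  # 9 9 -> 6 hits each
```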

Page Replacement Algorithms in Operating Systems


In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page needs to be replaced when a new page comes
in. Page replacement becomes necessary when a page fault occurs and no free page frames
are in memory.
However, another page fault would arise if the replaced page is referenced again. Hence it is
important to replace a page that is not likely to be referenced in the immediate future. Before
proceeding to actual page replacement algorithms, let’s first discuss Paging and Virtual
Memory.
What is Paging?
Paging is a memory management technique that operating systems use to
optimize computer memory usage. It divides memory into fixed-size pages that are mapped
to physical memory frames, reducing fragmentation and improving system performance.
This method allows for more efficient use of memory, which is crucial for modern operating
systems and their ability to handle multitasking effectively.
What is Page Fault?
A page fault happens when a running program accesses a memory page that is mapped into
the virtual address space but not loaded in physical memory. Since actual physical memory is
much smaller than virtual memory, page faults happen. In case of a page fault, the Operating
System might have to replace one of the existing pages with the newly needed page. Different
page replacement algorithms suggest different ways to decide which page to replace. The
target for all algorithms is to reduce the number of page faults.
What is Virtual Memory in OS?
Virtual memory in an operating system is a memory management technique that creates an
illusion of a large block of contiguous memory for users. It uses both physical
memory (RAM) and disk storage to provide a larger virtual memory space, allowing
systems to run larger applications and handle more processes simultaneously. This helps
improve system performance and multitasking efficiency.
Page Replacement Algorithms
Page replacement algorithms are techniques used in operating systems to
manage memory efficiently when the virtual memory is full. When a new page needs to be
loaded into physical memory , and there is no free space, these algorithms determine which
existing page to replace.
If no page frame is free, the virtual memory manager performs a page replacement operation
to replace one of the pages existing in memory with the page whose reference caused the
page fault. It is performed as follows: The virtual memory manager uses a page replacement
algorithm to select one of the pages currently in memory for replacement, accesses the page
table entry of the selected page to mark it as “not present” in memory, and initiates a page-out
operation for it if the modified bit of its page table entry indicates that it is a dirty page.
Common Page Replacement Techniques
 First In First Out (FIFO)
 Optimal Page replacement
 Least Recently Used
 Most Recently Used (MRU)
First In First Out (FIFO)
This is the simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in the memory in a queue, with the oldest page at the front of the queue.
When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes; it is not
available in memory, so it replaces the oldest page, i.e., 1 —> 1 Page Fault. 6 comes; it is
also not available in memory, so it replaces the oldest page, i.e., 3 —> 1 Page Fault.
Finally, when 3 comes, it is not available, so it replaces 0 —> 1 Page Fault. Total: 6 page faults.
Belady’s anomaly proves that it is possible to have more page faults when increasing the
number of page frames while using the First in First Out (FIFO) page replacement algorithm.
For example, if we consider the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 slots, we
get 9 total page faults, but if we increase the slots to 4, we get 10 page faults (see the check below).
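As a quick check of Belady's anomaly, the fifo_faults() helper from the earlier sketch can be run on this reference string with 3 and then 4 frames (this reuses the assumed helper from the previous code block):

```python
# Reusing fifo_faults() from the sketch above:
refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3))   # 9 faults with 3 frames
print(fifo_faults(refs, 4))   # 10 faults with 4 frames: more frames, more faults
```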
Optimal Page Replacement
In this algorithm, pages are replaced which would not be used for the longest duration of time
in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page
Faults.
0 is already there so —> 0 Page Fault. When 3 comes, it takes the place of 7 because 7 is
not used for the longest duration of time in the future —> 1 Page Fault. 0 is already there so
—> 0 Page Fault. 4 takes the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults, because the pages are already
available in the memory. Total: 6 page faults.
Optimal page replacement is perfect, but not possible in practice as the operating system
cannot know future requests. The use of Optimal Page replacement is to set up a benchmark
so that other replacement algorithms can be analyzed against it.
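A small Python sketch of such a benchmark implementation is shown below; the look-ahead uses the full reference string, which is exactly the information a real OS lacks. The function name and tie-breaking are my own choices:

```python
def optimal_faults(refs, frames):
    """Count faults under OPT: evict the page whose next use is farthest."""
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            rest = refs[i + 1:]
            # Distance to each resident page's next use (infinite if unused).
            def next_use(q):
                return rest.index(q) if q in rest else float("inf")
            mem.remove(max(mem, key=next_use))
        mem.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))  # 6 page faults, matching the walkthrough
```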
Least Recently Used
In this algorithm, the page that is least recently used is replaced.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page
Faults.
0 is already there so —> 0 Page Fault. When 3 comes, it takes the place of 7 because 7 is
least recently used —> 1 Page Fault.
0 is already in memory so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults, because the pages are already
available in the memory. Total: 6 page faults.

Copy on Write
Copy on Write, or simply COW, is a resource management technique. One of its main uses is
in the implementation of the fork system call, which shares the virtual memory (pages) of
the OS.
In UNIX-like operating systems, the fork() system call creates a duplicate of the parent
process, called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both
processes initially share the same pages in memory, and these shared pages are marked as
copy-on-write. If either process tries to modify a shared page, only then is a copy of that
page created; the modification is made on the copy by that process, thus not affecting the
other process.
Suppose, there is a process P that creates a new process Q and then process P modifies page
3.
The figures below show what happens before and after process P modifies page 3.
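On a Unix-like system this sharing can be observed from Python with os.fork(). The sketch below shows the semantics that COW enables (a write in the child leaves the parent's copy untouched), not the page-level mechanism itself:

```python
import os

# Unix-only sketch: fork() gives the child a copy-on-write view of the
# parent's pages. The write below forces a private copy of the touched
# page in the child, so the parent's data is unchanged.
data = [0] * 1_000_000          # pages shared between parent and child

pid = os.fork()
if pid == 0:                    # child process
    data[3] = 99                # triggers copying of the affected page
    print("child sees:", data[3])    # 99
    os._exit(0)
else:                           # parent process
    os.waitpid(pid, 0)
    print("parent sees:", data[3])   # still 0
```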
Allocation of Frames in OS
The main memory of the operating system is divided into frames. A process's pages are
stored in these frames, and once a process is loaded into frames, the CPU may run it. The
operating system must therefore set aside enough frames for each process, and it uses
various algorithms in order to assign the frames.
Demand paging is used to implement virtual memory, an essential operating system feature.
It requires the development of a page replacement mechanism and a frame allocation system.
If you have multiple processes, the frame allocation techniques are utilized to define how
many frames to allot to each one. A number of factors constrain the strategies for allocating
frames:
1. You cannot assign more frames than the total number of frames available.
2. A specific number of frames should be assigned to each process. This limitation is
due to two factors. The first is that when the number of frames assigned drops, the
page fault ratio grows, decreasing the process's execution performance. Second, there
should be sufficient frames to hold all the multiple pages that any instruction may
reference.
There are mainly five ways of frame allocation algorithms in the OS. These are as follows:
1. Equal Frame Allocation
2. Proportional Frame Allocation
3. Priority Frame Allocation
4. Global Replacement Allocation
5. Local Replacement Allocation
Equal Frame Allocation
In equal frame allocation, the available frames are divided equally among the processes in the OS.
For example, if the system has 30 frames and 7 processes, each process will get 4 frames. The
2 frames that are not assigned to any process may be used as a free-frame buffer pool
in the system.
Disadvantage
In a system with processes of varying sizes, assigning equal frames to each process makes
little sense. Many allotted empty frames will be wasted if many frames are assigned to a
small task.
Proportional Frame Allocation
The proportional frame allocation technique assigns frames based on the size needed for
execution and the total number of frames in memory.
The allocated frames for a process pi of size si are ai = (si / S) × m, where S is the sum of the
sizes of all processes and m is the total number of frames in the system.
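As an illustrative case, suppose m = 62 free frames and two processes with sizes s1 = 10 pages and s2 = 127 pages, so S = 137. The first process receives a1 = (10/137) × 62 ≈ 4 frames and the second receives a2 = (127/137) × 62 ≈ 57 frames, with the fractional parts truncated.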
Disadvantage
The only drawback of this algorithm is that it doesn't allocate frames based on priority.
Priority frame allocation solves this problem.
Priority Frame Allocation
Priority frame allocation assigns frames based on process priority together with the number
of frames each process requires. A high-priority process that requires more frames is
allocated that many frames first; lower-priority processes are allocated frames after it.
Global Replacement Allocation
When a process requires a page that isn't currently in memory, it may bring the page in and
select a frame from the set of all frames, even if that frame is already allocated to another
process. In other words, one process may take a frame from another.
Advantages
Process performance is not hampered, resulting in higher system throughput.
Disadvantages
A process cannot fully control its own page fault ratio; the paging behavior of other
processes also influences how many of its pages remain in memory.
Local Replacement Allocation
When a process requires a page that isn't already in memory, it can bring it in and assign it a
frame from its set of allocated frames.
Advantages
The paging behavior of a specific process has an effect on the pages in memory and the page
fault ratio.
Disadvantages
A low priority process may obstruct a high priority process by refusing to share its frames.
Global Vs. Local Replacement Allocation
The number of frames assigned to a process does not change under a local replacement
strategy. On the other hand, under global replacement, a process may also take frames
granted to other processes and thus increase the number of frames allocated to it.

Memory Mapped Files in OS


We can use standard system calls like read(), lseek(), open(), and so on to perform a sequential
read of a file present on the disk. Thus, to access a file from the disk, we need system calls
and disk access. Memory mapping is a technique that allows a part of the virtual address
space to be associated with a file logically. This technique of memory mapping leads to a
significant increase in performance.
Basic Mechanism of Memory Mapping
 The Operating System uses virtual memory for memory mapping a file. It is
performed by mapping a disk block to a page present in the physical memory.
Initially, the file is accessed through demand paging. If a process references an
address that does not exist in the physical memory, then page fault occurs and the
Operating System takes charge of bringing the missing page into the physical
memory.
 A page-sized portion of the file is read from the file system into a physical page.
 Manipulating the files through the use of memory rather than incurring the overhead
of using the read() and write() system calls not only simplifies but also speeds up file
access and usage.
 Multiple processes may be allowed to map a single file simultaneously to allow
sharing of data.
 If any of the processes write data in the virtual memory, then the modified data will be
visible to all the processes that map the same section of the file.
 The memory mapping system calls support copy-on-write functionality which allows
processes to share a file in read-only mode but the processes can have their own
copies of data that they have modified.
The sharing of memory is depicted with the help of a diagram.
[Figure: Memory Mapped Files]
Types of Memory Mapped Files
Basically, there are two types of memory mapped files:
 Persisted: Persisted files are connected with a source file on a disk. After completing
the final process, the data is saved to the source file on disk. These types of memory-mapped
files are appropriate for working with very large source files.
 Non-persisted: Non-persisted files are not connected to any disk-based files. The data
is lost when the last process using the file completes its required task. These files are useful
for creating the shared memory needed for inter-process communication (IPC).
Advantages of Memory Mapped Files
 It increases the I/O performance especially when it is used on large files.
 Accessing memory mapped file is faster than using direct system calls like read() and
write().
 Another advantage is lazy loading, where only a small amount of RAM is needed even for a
very large file.
 Shared memory is often implemented by memory mapping files. Thus, it supports
data sharing.
Disadvantages of Memory Mapped Files
 In some cases, memory mapped file I/O may be substantially slower as compared to
standard file I/O.
 Only hardware architectures that have an MMU (Memory Management Unit) can support
memory-mapped files.
 Expanding the size of a memory-mapped file is not easy.
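A minimal sketch of memory mapping on a POSIX system is shown below; the input file name is an illustrative assumption. After mmap() succeeds, the file's contents are read through an ordinary pointer rather than through repeated read() calls.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);     /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat sb;
    if (fstat(fd, &sb) < 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only into the process's address space. */
    char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Access the file's contents like ordinary memory. */
    for (off_t i = 0; i < sb.st_size; i++)
        putchar(p[i]);

    munmap(p, sb.st_size);
    close(fd);
    return 0;
}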

Allocating Kernel Memory in OS


Allocating kernel memory in an operating system is a critical process since it deals with
memory that the OS itself uses for internal operations, drivers, and various kernel-level tasks.
Kernel memory management differs significantly from user-space memory management due
to its direct interaction with hardware and the necessity for high reliability.
Here are the main concepts related to allocating kernel memory:
1. Contiguous vs. Non-Contiguous Memory Allocation
 Contiguous Memory Allocation: Some parts of the kernel require memory to be
allocated in contiguous blocks. This is critical for device drivers and DMA (Direct
Memory Access) operations that require continuous physical memory.
o Buddy System: Used for managing memory allocation in contiguous chunks.
It divides memory into blocks whose sizes are powers of two, which helps in
allocating and deallocating memory efficiently (a brief illustration follows this list).
 Non-Contiguous Memory Allocation: In many cases, the kernel doesn't need
physically contiguous memory. The OS can allocate non-contiguous physical pages
and manage them using paging or segmentation mechanisms.
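As a brief illustration of the buddy system: in a 256 KB region, a request for 21 KB is rounded up to the next power of two, 32 KB. The allocator splits 256 KB into two 128 KB buddies, splits one of those into two 64 KB buddies, splits one of those into two 32 KB buddies, and hands out one 32 KB block. When that block is freed, it can be coalesced with its free buddy back into a 64 KB block, and so on up the sizes.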
2. Kernel Memory Allocators
 Slab Allocator: The slab allocator is used for managing the allocation and
deallocation of small chunks of memory. It's efficient for objects of similar size and is
used heavily in Linux kernel memory management.
o Slab: A cache of pre-allocated small objects of a particular size.
o SLUB (Simplified Slab Allocator): A more efficient variant used in modern
Linux kernels.
 SLOB (Simple List of Blocks): A minimalist allocator for embedded systems or
systems with limited resources.
 kmalloc() and kfree(): These are the kernel-space analogs of malloc() and free() in
user space. They allocate and free kernel memory; the memory returned by
kmalloc() is physically contiguous.
3. Page Allocator
 The kernel uses a page allocator to manage memory at the page level (typically 4 KB
per page). This is more efficient for larger allocations or for allocating virtual
memory.
o alloc_pages(): This function is used to allocate pages. It allocates 2^order
physically contiguous pages, where order is the requested power of two.
o __get_free_pages(): Used to allocate a range of free pages. It returns the
kernel virtual address of the first of 2^order allocated pages, based on the
requested order.
4. vmalloc()
 Unlike kmalloc(), which allocates physically contiguous memory, vmalloc() allocates
virtually contiguous memory. The physical memory may be scattered, but the kernel
ensures that the virtual memory addresses form a continuous range. This is useful for
allocating large chunks of memory that do not need to be physically contiguous.
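The contrast between kmalloc() and vmalloc() can be seen in a minimal kernel-module sketch, assuming a standard Linux kernel build environment; the buffer sizes are illustrative.

#include <linux/module.h>
#include <linux/slab.h>      /* kmalloc(), kfree() */
#include <linux/vmalloc.h>   /* vmalloc(), vfree() */

static void *small_buf;      /* physically contiguous */
static void *large_buf;      /* only virtually contiguous */

static int __init alloc_demo_init(void)
{
    /* kmalloc: physically contiguous, suited to small objects and DMA */
    small_buf = kmalloc(4096, GFP_KERNEL);
    if (!small_buf)
        return -ENOMEM;

    /* vmalloc: large allocation; the pages may be scattered physically */
    large_buf = vmalloc(4 * 1024 * 1024);
    if (!large_buf) {
        kfree(small_buf);
        return -ENOMEM;
    }
    pr_info("alloc_demo: buffers allocated\n");
    return 0;
}

static void __exit alloc_demo_exit(void)
{
    vfree(large_buf);
    kfree(small_buf);
}

module_init(alloc_demo_init);
module_exit(alloc_demo_exit);
MODULE_LICENSE("GPL");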
5. Kernel Memory Protection
 Since the kernel runs in privileged mode, memory corruption in kernel space can
cause a system crash. Modern operating systems have protection mechanisms such as:
o Kernel Address Space Layout Randomization (KASLR): This randomizes
the location of the kernel in memory to make it harder for attackers to predict
and exploit vulnerabilities.
o Memory barriers: Ensure proper ordering of memory operations.
o Guard pages: Placed around sensitive memory regions to detect and prevent
buffer overflows.
6. Swapping and Kernel Memory
 In many systems, kernel memory is not swappable, meaning it cannot be moved to
disk when memory is low. This is because kernel memory must remain resident for
the system to function properly.
7. Memory Fragmentation
 Internal Fragmentation: Occurs when memory allocated is larger than what is
actually needed. The slab allocator helps minimize this by managing memory
efficiently in smaller blocks.
 External Fragmentation: Occurs when free memory is fragmented into small, non-
contiguous blocks, making it difficult to allocate larger contiguous blocks. The buddy
system helps reduce external fragmentation.

Disk Structure in an Operating System


The structure of a disk is organized to store data efficiently, ensuring that read/write
operations are optimized for speed and reliability.
 Disk Layout:
o Platters: A hard disk is composed of one or more platters, which are circular
disks coated with magnetic material.
o Tracks: Each platter is divided into concentric circles called tracks.
o Sectors: Tracks are subdivided into smaller arcs called sectors, which are the
smallest unit of storage on the disk.
o Cylinders: A set of tracks that are aligned vertically on multiple platters forms
a cylinder.
o Blocks/Clusters: The OS groups sectors into blocks or clusters for data
management. A block is the smallest logical unit of storage used by the OS.
 Logical vs. Physical Addressing:
o Physical Addressing: Refers to the actual position of data on the disk,
including details like cylinder, track, and sector.
o Logical Block Addressing (LBA): The OS abstracts physical addresses into
logical blocks, where each block is identified by a single number. LBA
simplifies the interface between the OS and storage devices (a small
conversion sketch follows this list).
 File Systems:
o The OS organizes disk storage using file systems (like NTFS, FAT32, ext4).
File systems manage the mapping of files and directories to disk blocks,
ensuring proper allocation, retrieval, and deletion of data.
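As an illustration of the mapping between the two addressing schemes mentioned above, here is a small C sketch using the classic CHS-to-LBA conversion; the disk geometry values are hypothetical.

#include <stdio.h>

/* Classic conversion: sectors are numbered from 1, heads and cylinders from 0. */
unsigned long chs_to_lba(unsigned long c, unsigned long h, unsigned long s,
                         unsigned long heads, unsigned long sectors_per_track)
{
    return (c * heads + h) * sectors_per_track + (s - 1);
}

int main(void)
{
    /* Hypothetical geometry: 16 heads, 63 sectors per track. */
    printf("LBA = %lu\n", chs_to_lba(2, 4, 7, 16, 63));  /* (2*16+4)*63+6 = 2274 */
    return 0;
}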

Disk-Attached Management
Disks are attached to systems in different ways, and the OS must manage this attachment
efficiently.
 Types of Disk Attachment:
o Host-Attached Storage: Disks directly connected to the system (e.g., internal
hard drives, SSDs).
 Managed by the OS using drivers and device interfaces (e.g., SATA,
NVMe).
o Network-Attached Storage (NAS): Disks connected via a network, accessed
over protocols like NFS or SMB.
 Managed using network file systems that abstract the network layer.
o Storage Area Network (SAN): Dedicated network providing access to block-
level storage, typically used in enterprise environments for large databases or
virtual machines.
 Managed using specialized network interfaces (e.g., Fibre Channel).
 Device Drivers and Controllers:
o The OS communicates with disks using device drivers, which act as an
interface between the disk hardware and the software.
o Disk Controllers manage the actual communication with the storage device
(e.g., read/write commands, buffer management).
o Direct Memory Access (DMA): Used to offload data transfer between the
disk and system memory to the disk controller, reducing CPU overhead.
 I/O Buffering and Caching:
o The OS buffers disk data in memory to reduce disk I/O operations. It may also
cache frequently accessed disk blocks to improve performance.
o Write-back caching stores data in memory before writing it to disk, while
write-through caching writes data to both the cache and the disk
simultaneously.

Disk Scheduling Algorithms in OS (Operating System)


As we know, a process needs two types of time: CPU time and I/O time. For I/O, it
requests the operating system to access the disk.
However, the operating system must be fair enough to satisfy each request and, at the
same time, must maintain the efficiency and speed of process execution.
The technique that the operating system uses to determine which request is to be
satisfied next is called disk scheduling.
Seek Time:
Seek time is the time taken in locating the disk arm to a specified track where the
read/write request will be satisfied.
Rotational Latency:
It is the time taken by the desired sector to rotate to the position where the
read/write (R/W) head can access it.
Transfer Time:
It is the time taken to transfer the data.
Disk Access Time
Disk access time is given as,
Disk Access Time = Rotational Latency + Seek Time + Transfer Time
Disk Response Time
It is the average time each request spends waiting for its I/O operation to be serviced.
Purpose of Disk Scheduling
The main purpose of disk scheduling algorithm is to select a disk request from the
queue of IO requests and decide the schedule when this request will be processed.
Goal of Disk Scheduling Algorithm
o Fairness
o High throughput
o Minimal traveling head time
Disk Scheduling Algorithms
The various disk scheduling algorithms are listed below. Each algorithm carries its
own advantages and disadvantages, and the limitations of each algorithm have led to the
evolution of the next.
o FCFS scheduling algorithm
o SSTF (shortest seek time first) algorithm
o SCAN scheduling
o C-SCAN scheduling
o LOOK Scheduling
o C-LOOK scheduling

FCFS Scheduling Algorithm


It is the simplest disk scheduling algorithm. It services the I/O requests in the order in which
they arrive. There is no starvation in this algorithm; every request is serviced.
Disadvantages
o The scheme does not optimize the seek time.
o The request may come from different processes therefore there is the possibility of
inappropriate movement of the head.
Example
Consider the following disk request sequence for a disk with 100 tracks 45, 21, 67, 90, 4, 50,
89, 52, 61, 87, 25
Head pointer starting at 50 and moving in left direction. Find the number of head movements
in cylinders using FCFS scheduling.
Solution

Number of cylinders moved by the head


= (50-45)+(45-21)+(67-21)+(90-67)+(90-4)+(50-4)+(89-50)+(89-52)+(61-52)+(87-61)+(87-25)
= 5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62
= 403
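The total can be verified with a short C sketch; the request queue and starting head position are those of the example above.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int requests[] = {45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25};
    int n = sizeof(requests) / sizeof(requests[0]);
    int head = 50, total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(requests[i] - head);   /* seek distance to the next request */
        head = requests[i];
    }
    printf("Total head movement: %d cylinders\n", total);   /* prints 403 */
    return 0;
}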

SSTF Scheduling Algorithm


Shortest seek time first (SSTF) algorithm selects the disk I/O request which requires the least
disk arm movement from its current position regardless of the direction. It reduces the total
seek time as compared to FCFS.
It allows the head to move to the closest track in the service queue.
Disadvantages
o It may cause starvation for some requests.
o Switching direction frequently slows the working of the algorithm.
o It is not the most optimal algorithm.
Example
Consider the following disk request sequence for a disk with 100 tracks
45, 21, 67, 90, 4, 89, 52, 61, 87, 25
Head pointer starting at 50. Find the number of head movements in cylinders using SSTF
scheduling.
Solution:

Starting at 50, the nearest pending request is 52, so SSTF services the requests in the order
52, 45, 61, 67, 87, 89, 90, 25, 21, 4.
Number of cylinders = 2 + 7 + 16 + 6 + 20 + 2 + 1 + 65 + 4 + 17 = 140
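A minimal C sketch of SSTF for this example follows: at each step it services the pending request nearest the current head position.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int req[] = {45, 21, 67, 90, 4, 89, 52, 61, 87, 25};
    int n = sizeof(req) / sizeof(req[0]);
    int done[10] = {0};
    int head = 50, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)          /* find the nearest unserved request */
            if (!done[i] && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        done[best] = 1;
    }
    printf("Total head movement: %d cylinders\n", total);   /* prints 140 */
    return 0;
}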

SCAN and C-SCAN algorithm


Scan Algorithm
It is also called the Elevator Algorithm. In this algorithm, the disk arm moves in a particular
direction till the end of the disk, satisfying all the requests coming in its path, and then it
turns back and moves in the reverse direction, satisfying the requests coming in its path.
It works the way an elevator works: the elevator moves in one direction completely till the
last floor of that direction and then turns back.
Example
Consider the following disk request sequence for a disk with 200 tracks
98, 137, 122, 183, 14, 133, 65, 78
Head pointer starting at 54 and moving in left direction. Find the number of head movements
in cylinders using SCAN scheduling.
Number of Cylinders = 40 + 14 + 65 + 13 + 20 + 24 + 11 + 4 + 46 = 237
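The movement can be reproduced with a small C sketch of SCAN moving left first on a 200-track disk; the head sweeps down to track 0 and then reverses.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void)
{
    int req[] = {98, 137, 122, 183, 14, 133, 65, 78};
    int n = sizeof(req) / sizeof(req[0]);
    int head = 54, total = 0;

    qsort(req, n, sizeof(req[0]), cmp);    /* 14 65 78 98 122 133 137 183 */

    int split = 0;                          /* first request at or above the head */
    while (split < n && req[split] < head)
        split++;

    for (int i = split - 1; i >= 0; i--) {  /* sweep left, servicing in descending order */
        total += head - req[i];
        head = req[i];
    }
    total += head;                          /* continue to the disk's left edge (track 0) */
    head = 0;

    for (int i = split; i < n; i++) {       /* reverse and sweep right */
        total += req[i] - head;
        head = req[i];
    }
    printf("Total head movement: %d cylinders\n", total);   /* prints 237 */
    return 0;
}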
C-SCAN algorithm
In C-SCAN algorithm, the arm of the disk moves in a particular direction servicing requests
until it reaches the last cylinder, then it jumps to the last cylinder of the opposite direction
without servicing any request then it turns back and start moving in that direction servicing
the remaining requests.
Example
Consider the following disk request sequence for a disk with 200 tracks
98, 137, 122, 183, 14, 133, 65, 78
Head pointer starting at 54 and moving in left direction. Find the number of head movements
in cylinders using C-SCAN scheduling.
No. of cylinders crossed = 40 + 14 + 199 + 16 + 46 + 4 + 11 + 24 + 20 + 13 = 387

Swap Space Management in OS


Swap space management in an operating system (OS) plays a crucial role in managing
memory, especially when the system’s physical memory (RAM) is fully utilized. Swap space
acts as an extension of physical memory by providing additional space on the disk for
temporarily storing inactive memory pages.
Here’s an in-depth look at swap space management in the OS:

1. What is Swap Space?


Swap space is a dedicated area on the disk used as virtual memory to supplement physical
RAM. When the RAM is insufficient to hold all the running processes and their data, the OS
moves some of the less frequently used pages from RAM to the swap space on the disk. This
process is known as swapping.
 Virtual Memory: The combination of physical memory (RAM) and swap space.
 Paging: The process of moving individual memory pages between RAM and swap
space.
 Swapping: The broader term that can refer to moving entire processes or pages
between RAM and swap space.

2. Purpose of Swap Space


The main purposes of swap space in an OS are:
 Extend Physical Memory: When the system runs out of physical memory, swap
space is used to store data that doesn’t fit into RAM.
 Support Hibernation: On some systems, swap space is used to save the entire
system state (contents of RAM) when the system hibernates.
 Improve System Performance: By swapping out inactive memory pages, the OS can
free up RAM for more active processes, improving overall performance when
memory usage is high.

3. Swap Management Process


Here’s how the OS manages swap space:
3.1. Swapping and Paging:
 Swapping: This is the process of moving entire processes between RAM and disk.
Swapping out occurs when a process is no longer active, freeing up RAM for other
processes. When a swapped-out process becomes active again, it is swapped back into
RAM.
 Paging: Modern OSs, including Linux, Windows, and macOS, generally use paging
instead of full-process swapping. Paging moves only the parts of a process (individual
memory pages) that are not currently needed by the CPU to the swap space, while
keeping the rest in physical memory.
3.2. Page Faults:
 A page fault occurs when a process tries to access a page that is not in RAM but has
been moved to swap space. The OS must retrieve the page from swap and bring it
back into memory, potentially swapping out another page to free up space.
 Minor page faults happen when the required page is still in memory but not mapped
to the current process's address space.
 Major page faults involve retrieving the page from the swap space, which incurs a
delay due to the disk’s slower speed compared to RAM.
3.3. Demand Paging:
 The OS doesn’t load all pages of a process into memory at once; instead, pages are
loaded as they are needed. This is known as demand paging. It helps optimize
memory usage by only keeping frequently accessed pages in RAM.
3.4. Page Replacement Algorithms:
 When the OS needs to swap out a page from RAM to swap space, it uses a page
replacement algorithm to decide which page to move out. Common algorithms
include:
o Least Recently Used (LRU): Pages that haven’t been used for the longest
time are swapped out first.
o First-In, First-Out (FIFO): Pages are swapped out in the order they were
loaded.
o Clock Algorithm (Second Chance): Gives pages a second chance before they
are swapped out based on usage.
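A minimal sketch of the clock (second chance) policy in C follows; the frame count and reference string are illustrative assumptions. Each frame carries a reference bit: the hand skips frames whose bit is set, clearing it, and evicts the first frame whose bit is already clear.

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int frames[NFRAMES], refbit[NFRAMES] = {0};
    int hand = 0, loaded = 0, faults = 0;
    int refs[] = {1, 2, 3, 2, 4, 1, 5, 2};
    int n = sizeof(refs) / sizeof(refs[0]);

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < loaded; f++)
            if (frames[f] == refs[i]) { refbit[f] = 1; hit = 1; break; }
        if (hit) continue;

        faults++;
        if (loaded < NFRAMES) {            /* a free frame is still available */
            frames[loaded] = refs[i];
            refbit[loaded++] = 1;
            continue;
        }
        while (refbit[hand]) {             /* give referenced pages a second chance */
            refbit[hand] = 0;
            hand = (hand + 1) % NFRAMES;
        }
        frames[hand] = refs[i];            /* evict and replace */
        refbit[hand] = 1;
        hand = (hand + 1) % NFRAMES;
    }
    printf("Page faults: %d\n", faults);   /* prints 7 for this string */
    return 0;
}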

4. Swap Space Allocation


The way swap space is allocated varies across operating systems:
4.1. Linux Swap Space Management:
 Swap Partition: A dedicated partition on the disk specifically for swap.
 Swap File: A file on a regular file system that the OS can use as swap space. This is
more flexible than a partition and allows for dynamic resizing if necessary.
 Swappiness: Linux provides a tunable parameter called swappiness, which controls
the OS’s tendency to use swap space. A high swappiness value (e.g., 100) means the
system will swap more aggressively, while a low value (e.g., 10) reduces swapping in
favor of keeping pages in RAM.
 Swap Priorities: Linux can have multiple swap spaces (files or partitions), each with
different priorities. Higher-priority swap spaces are used before lower-priority ones.
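As a quick way to inspect the current setting programmatically, a small C sketch can read the value from procfs (this assumes a Linux system exposing /proc/sys/vm/swappiness):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/proc/sys/vm/swappiness", "r");
    if (!fp) { perror("fopen"); return 1; }

    int swappiness;
    if (fscanf(fp, "%d", &swappiness) == 1)
        printf("vm.swappiness = %d\n", swappiness);

    fclose(fp);
    return 0;
}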
4.2. Windows Swap Space Management (Paging File):
 Page File: In Windows, swap space is managed using a page file (often called
pagefile.sys), a hidden system file on the disk. Windows automatically manages the
size of this file, although users can configure it manually.
 Automatic vs. Manual Management: By default, Windows dynamically adjusts the
size of the page file based on system needs, but users can specify custom sizes or even
disable it (not recommended).
4.3. macOS Swap Management:
 macOS uses a swap file system, similar to Linux, dynamically creating swap files as
needed in /private/var/vm.
 macOS doesn’t allow the user to manually configure swap space, as the OS
dynamically manages it without user intervention.

5. Advantages and Disadvantages of Swap Space


Advantages:
 Prevents Crashes: Swap space allows the system to continue operating even when
RAM is fully used.
 Supports Large Applications: Applications that require more memory than available
RAM can still run by swapping parts of their memory to disk.
 Improves System Performance: When properly managed, swap space can help
optimize memory usage and ensure smoother multitasking.
 Hibernation: Swap space allows the system to save its entire state during hibernation.
Disadvantages:
 Slower than RAM: Accessing data from swap space is much slower than accessing it
from RAM, since disk speeds are significantly lower than RAM speeds.
 Frequent Disk Access: Excessive swapping can lead to performance degradation due
to constant read/write operations on the disk, known as thrashing.
 Disk Space: Swap space occupies valuable disk space, which may be limited in
systems with smaller storage capacities.

6. Thrashing and Swap Space Overuse


Thrashing occurs when the OS spends more time swapping pages in and out of memory than
executing actual processes. This happens when:
 There’s insufficient physical memory.
 Processes request more memory than is available, leading to constant swapping.
 The system has a very high swappiness value, causing frequent swapping even when
it’s not necessary.
To mitigate thrashing:
 Increase the size of physical RAM.
 Optimize the number of running processes.
 Adjust the OS's swappiness settings (in Linux, reduce the value).

7. Swap Size Considerations


The size of the swap space required depends on several factors:
 System RAM Size: The general rule of thumb used to be that swap space should be
1.5 to 2 times the size of the physical RAM, but this is less relevant with modern
systems that have large amounts of RAM.
 Workload: Systems running memory-intensive applications (e.g., video editing, large
databases) may require larger swap space.
 Hibernation: If hibernation is enabled, the swap space should be at least equal to the
size of the physical RAM, as the system state needs to be saved to swap during
hibernation.
RAID STRUCTURE
RAID (Redundant Array of Independent Disks) is a technology used to improve the
performance, reliability, and/or storage capacity of a system by combining multiple physical
disks into a single logical unit. RAID structures allow data to be distributed across several
disks to provide fault tolerance (redundancy) and improve data access speeds. There are
several RAID levels, each with its own configuration, advantages, and trade-offs.
Here’s an overview of the key concepts and RAID levels:

1. Why Use RAID?


 Performance: RAID can improve read/write speeds by distributing data across
multiple disks.
 Redundancy: Some RAID levels provide fault tolerance, ensuring that data is
protected if a disk fails.
 Capacity: RAID can combine the capacity of multiple disks to create a single large
storage volume.

2. RAID Structure Types


2.1. RAID 0 (Striping):
 Structure: Data is split into blocks and written evenly across two or more disks
(striping).
 Advantages:
o High Performance: Because data is written/read from multiple disks
simultaneously, read and write speeds are significantly faster.
o Full Storage Utilization: All disk space is available for data storage since no
disk is used for redundancy.
 Disadvantages:
o No Redundancy: If one disk fails, all data is lost.
 Use Case: Best suited for applications requiring high performance and where data
redundancy is not important (e.g., gaming, video editing).
2.2. RAID 1 (Mirroring):
 Structure: Data is duplicated (mirrored) across two or more disks.
 Advantages:
o Redundancy: If one disk fails, data can still be retrieved from the mirrored
disk.
o Read Performance: Can improve read performance since the system can read
from both disks simultaneously.
 Disadvantages:
o Storage Inefficiency: Only half of the total disk space is usable, as the other
half is used for mirroring.
o Write Performance: Writing can be slower since data must be written to both
disks.
 Use Case: Ideal for applications where data protection is critical, such as databases or
file servers.
2.3. RAID 5 (Striping with Parity):
 Structure: Data and parity information (used for error correction) are striped across
three or more disks. Parity is distributed evenly among the disks.
 Advantages:
o Redundancy: Can recover from a single disk failure.
o Efficient Storage: More efficient than mirroring since only one disk's worth
of space is used for parity.
o Good Performance: Stripes data across multiple disks, improving read
speeds.
 Disadvantages:
o Write Performance Overhead: Writing is slower because the parity
information must be calculated and written to disk.
o Rebuild Time: If a disk fails, rebuilding the array can take a long time and
impact performance.
 Use Case: Common in servers where both performance and data protection are
needed (e.g., file and application servers).
2.4. RAID 6 (Striping with Double Parity):
 Structure: Similar to RAID 5 but with an additional parity block, allowing for the
failure of two disks.
 Advantages:
o Redundancy: Can tolerate the failure of two disks.
o Efficient Storage: Requires two disks' worth of space for parity, but the rest is
usable for data.
 Disadvantages:
o Write Performance: Slightly slower than RAID 5 due to the extra parity
calculation.
o Rebuild Time: Rebuilding from two disk failures can take a long time and
stress the remaining disks.
 Use Case: Suited for critical systems that require high fault tolerance (e.g., large-scale
storage systems).
2.5. RAID 10 (1+0 or Mirroring + Striping):
 Structure: Combines RAID 1 (mirroring) and RAID 0 (striping). Data is first
mirrored and then striped across pairs of disks.
 Advantages:
o High Performance: Offers the performance benefits of RAID 0 (striping)
while also providing redundancy through RAID 1 (mirroring).
o Fault Tolerance: Can tolerate the failure of multiple disks as long as the failed
disks are not in the same mirror set.
 Disadvantages:
o Storage Inefficiency: Like RAID 1, only 50% of the total disk space is
usable.
 Use Case: Ideal for environments requiring both high performance and high
reliability, such as databases or high-transaction environments.
2.6. RAID 50 (Striping + RAID 5):
 Structure: A combination of RAID 0 (striping) and RAID 5 (striping with parity). It
requires at least six drives.
 Advantages:
o Performance and Redundancy: Combines the performance benefits of
striping with the fault tolerance of RAID 5.
o Fault Tolerance: Can tolerate the failure of one disk per RAID 5 array.
 Disadvantages:
o Complexity: More complex to set up and maintain.
o Storage Efficiency: Less efficient than RAID 5 because of the need for extra
drives.
 Use Case: High-performance applications where both speed and redundancy are
required.
2.7. RAID 60 (Striping + RAID 6):
 Structure: Similar to RAID 50 but uses RAID 6 arrays (striping with double parity)
instead of RAID 5.
 Advantages:
o High Fault Tolerance: Can survive the failure of two disks per RAID 6 array.
o Performance and Redundancy: Provides a balance of performance and fault
tolerance.
 Disadvantages:
o Complex and Costly: Requires a large number of drives.
o Rebuild Time: Rebuilds from two-disk failures can be slow.
 Use Case: Used in environments where very high redundancy is required and where
long rebuild times can be tolerated.

3. Parity and Redundancy


 Parity: A method used in some RAID levels (RAID 5, RAID 6) to provide fault
tolerance. Parity information is calculated from the data and written to the disk. In the
event of a disk failure, the parity information can be used to reconstruct the missing
data.
o Simple Parity: Used in RAID 5, allows recovery from a single disk failure.
o Double Parity: Used in RAID 6, allows recovery from two disk failures.
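The parity idea can be demonstrated with a toy C sketch: the parity block is the byte-wise XOR of the data blocks, so any single missing block can be rebuilt by XOR-ing the survivors. The block contents are arbitrary illustrative bytes.

#include <stdio.h>

#define BLOCK 4

int main(void)
{
    unsigned char d0[BLOCK] = {0x12, 0x34, 0x56, 0x78};
    unsigned char d1[BLOCK] = {0x9a, 0xbc, 0xde, 0xf0};
    unsigned char parity[BLOCK], rebuilt[BLOCK];

    for (int i = 0; i < BLOCK; i++)    /* compute the parity block */
        parity[i] = d0[i] ^ d1[i];

    for (int i = 0; i < BLOCK; i++)    /* "disk 1" fails: rebuild d1 from the rest */
        rebuilt[i] = d0[i] ^ parity[i];

    for (int i = 0; i < BLOCK; i++)
        printf("%02x %s\n", rebuilt[i], rebuilt[i] == d1[i] ? "ok" : "MISMATCH");
    return 0;
}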

4. Hot Spares and Disk Failures


 Hot Spare: A disk that is part of the RAID array but remains inactive until another
disk fails. When a disk failure occurs, the hot spare is automatically used to rebuild
the RAID array, reducing downtime and risk of data loss.
 Rebuilding: When a disk in a RAID array fails, the data from the failed disk is
reconstructed using the remaining disks and the parity information (if applicable). The
OS or RAID controller automatically manages this process. During a rebuild,
performance may degrade, and the array is vulnerable to additional disk failures,
especially if the array does not support multiple disk failures (e.g., RAID 5).

5. Software vs. Hardware RAID


 Software RAID: Managed by the operating system without the need for dedicated
hardware. It is cheaper and easier to implement but may result in lower performance,
as it relies on the system's CPU for RAID operations.
 Hardware RAID: Managed by a dedicated RAID controller card with its own
processor and memory. It offers better performance and advanced features (e.g.,
battery-backed cache) but is more expensive.

6. Choosing a RAID Level


The choice of RAID level depends on the use case and the balance between performance,
redundancy, and storage efficiency. Here’s a general guide:
 High Performance (No Redundancy Needed): RAID 0
 Data Redundancy and Improved Read Performance: RAID 1
 Balanced Performance and Fault Tolerance: RAID 5
 High Redundancy with Dual Disk Failure Protection: RAID 6
 High Performance + Redundancy: RAID 10
 Enterprise-Level, High Redundancy + Performance: RAID 50 or RAID 60

File Access Methods


Let's look at various ways to access files stored in secondary memory.
Sequential Access

Most operating systems access files sequentially; in other words, most files need to be
accessed sequentially by the operating system.
In sequential access, the OS reads the file word by word. A pointer is maintained which
initially points to the base address of the file. If the user wants to read the first word of the
file, the pointer provides that word and increases its value by one word. This process
continues till the end of the file.
Modern systems do provide the concepts of direct access and indexed access, but the
most used method is sequential access, due to the fact that most files, such as text files,
audio files, and video files, need to be accessed sequentially.
Direct Access
The Direct Access is mostly required in the case of database systems. In most of the cases,
we need filtered information from the database. The sequential access can be very slow and
inefficient in such cases.
Suppose every block of the storage stores 4 records and we know that the record we need is
stored in the 10th block. In that case, sequential access will not be efficient because it will
traverse all the blocks in order to reach the needed record.
Direct access gives the required result even though the operating system has to perform
some complex tasks, such as determining the desired block number. It is generally
implemented in database applications.
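A minimal sketch of direct access in C follows; the file name and fixed record size are illustrative assumptions. fseek() jumps straight to the record's byte offset instead of reading every earlier record.

#include <stdio.h>

#define RECORD_SIZE 64

int main(void)
{
    char record[RECORD_SIZE];
    FILE *fp = fopen("records.dat", "rb");    /* hypothetical data file */
    if (!fp) { perror("fopen"); return 1; }

    long r = 37;                              /* read the 38th record (0-based) */
    fseek(fp, r * RECORD_SIZE, SEEK_SET);     /* seek directly to its offset */
    if (fread(record, RECORD_SIZE, 1, fp) == 1)
        printf("record %ld: %.16s...\n", r, record);

    fclose(fp);
    return 0;
}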

Indexed Access
If a file can be sorted on any of its fields, then an index can be assigned to a group of certain
records, and a particular record can be accessed through its index. The index is essentially
the address of a record in the file.
With indexed access, searching a large database becomes very quick and easy, but some
extra space in memory is needed to store the index values.
File concept, access method, directory and disk structure, file system
mounting, file sharing protection
Understanding the file concept, access methods, directory and disk structure, file system
mounting, and file sharing protection is essential for managing files in operating systems.
Here’s a detailed overview of each aspect:
1. File Concept
A file is a collection of related information or data that is stored on a storage device. The file
concept in an operating system encompasses several key characteristics:
 Attributes: Each file has metadata, including its name, type, size, creation date,
modification date, and permissions.
 Content: The actual data stored in the file, which could be text, images, audio, video,
or executable code.
 Types of Files:
o Regular Files: Contain user data (e.g., documents, images).
o Directory Files: Contain references to other files or directories, effectively
organizing the file system.
o Special Files: Include device files that represent hardware components and
FIFO (named pipes) for inter-process communication.
2. Access Methods
Access methods determine how data is read from and written to files. Common access
methods include:
 Sequential Access: Data is read or written in a linear order, from the beginning to the
end of the file. This method is simple but not efficient for random access.
 Random Access: Data can be read or written in any order, allowing direct access to
specific locations within the file. This is useful for databases and applications
requiring fast access to specific records.
 Indexed Access: An index is maintained for fast access to file records, allowing the
system to locate records quickly without scanning the entire file.
3. Directory and Disk Structure
The organization of files on a disk is crucial for efficient data retrieval. Key components
include:
 Directory Structure:
o Single-Level Directory: All files are stored in one directory, making it simple
but challenging to manage as the number of files grows.
o Two-Level Directory: Each user has their directory containing their files,
helping organize data but complicating user management.
o Hierarchical Directory: Directories can contain subdirectories, forming a
tree-like structure. This method is the most common, allowing for better
organization and easier navigation.
 Disk Structure:
o Blocks: Data is stored in fixed-size blocks on the disk. Each block can hold a
portion of a file.
o Disk Partitions: Disks can be divided into partitions for different file systems
or purposes, improving management and performance.
4. File System Mounting
File system mounting is the process of making a file system accessible to the operating
system by attaching it to a directory in the existing file system hierarchy. This process
involves:
 Mounting Points: A directory in the existing file system where the new file system
will be attached. For example, a USB drive may be mounted to /mnt/usb.
 Mount Command: In Unix-like systems, the mount command is used to mount file
systems. Syntax:
mount /dev/sdXn /mnt/mount_point
Here, /dev/sdXn refers to the device and partition, and /mnt/mount_point is the directory
where it will be mounted.
 Unmounting: To safely detach a file system, the umount command is used, ensuring
that all processes using the file system have finished.
5. File Sharing and Protection
File sharing allows multiple users or processes to access the same file, while protection
mechanisms ensure data security. Key aspects include:
 File Sharing:
o Concurrent Access: Multiple users can read or write to a file simultaneously,
depending on the file system's capabilities and configurations.
o Network File Systems: Protocols like NFS (Network File System) and SMB
(Server Message Block) enable file sharing over networks.
 Protection Mechanisms:
o Access Control Lists (ACLs): Specify which users or groups have
permissions (read, write, execute) for a file or directory.
o File Permissions: Basic permissions are typically categorized into read (r),
write (w), and execute (x), which can be set for the owner, group, and others.
o Encryption: Sensitive files can be encrypted to prevent unauthorized access,
even if someone gains access to the file system.
o Audit Trails: Logging access attempts and modifications to files can help
track unauthorized access and changes.
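As an illustration of the read (r), write (w), and execute (x) bits for owner, group, and others, here is a small C sketch that inspects a file's mode with stat(); the file name is hypothetical.

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat sb;
    if (stat("example.txt", &sb) < 0) { perror("stat"); return 1; }

    printf("owner: %c%c%c\n",
           (sb.st_mode & S_IRUSR) ? 'r' : '-',
           (sb.st_mode & S_IWUSR) ? 'w' : '-',
           (sb.st_mode & S_IXUSR) ? 'x' : '-');
    printf("group: %c%c%c\n",
           (sb.st_mode & S_IRGRP) ? 'r' : '-',
           (sb.st_mode & S_IWGRP) ? 'w' : '-',
           (sb.st_mode & S_IXGRP) ? 'x' : '-');
    printf("other: %c%c%c\n",
           (sb.st_mode & S_IROTH) ? 'r' : '-',
           (sb.st_mode & S_IWOTH) ? 'w' : '-',
           (sb.st_mode & S_IXOTH) ? 'x' : '-');
    return 0;
}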
