Fixed Partitioning
The earliest and one of the simplest techniques used to load more than one process into
main memory is fixed partitioning, also called contiguous memory allocation.
In this technique, the main memory is divided into partitions of equal or different sizes. The
operating system always resides in the first partition while the other partitions can be used to
store user processes. The memory is assigned to processes in a contiguous way.
In fixed partitioning,
1. The partitions cannot overlap.
2. A process must be contiguously present in a partition for the execution.
There are several drawbacks to this technique.
1. Internal Fragmentation
If the size of a process is less than the total size of its partition, then part of the partition
is wasted and remains unused. This wasted memory is called internal fragmentation.
As shown in the image below, a 4 MB partition is used to load only a 3 MB process, and the
remaining 1 MB is wasted.
2. External Fragmentation
The total unused space across the partitions cannot be used to load a process even though
enough space is available overall, because it is not contiguous.
As shown in the image below, the remaining 1 MB of each partition cannot be combined to
store a 4 MB process. Despite the fact that sufficient space is available in total, the process
will not be loaded.
3. Limitation on the size of the process
If a process is larger than the largest partition, it cannot be loaded into memory at all.
Fixed partitioning therefore imposes a limit on the size of a process: it cannot be larger
than the largest partition.
4. Degree of multiprogramming is less
By degree of multiprogramming, we simply mean the maximum number of processes that
can be loaded into memory at the same time. In fixed partitioning, the degree of
multiprogramming is fixed and low, because the partition sizes cannot be varied according
to the sizes of the processes.
Dynamic Partitioning
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this
technique, the partition size is not declared initially. It is declared at the time of process
loading.
The first partition is reserved for the operating system. The remaining space is divided into
parts. The size of each partition will be equal to the size of the process. The partition size
varies according to the need of the process so that the internal fragmentation can be avoided.
Advantages of Dynamic Partitioning over fixed partitioning
1. No Internal Fragmentation
Given that the partitions in dynamic partitioning are created according to the needs of each
process, there will not be any internal fragmentation, because no unused space remains
inside a partition.
2. No Limitation on the size of the process
In fixed partitioning, a process larger than the largest partition could not be executed for
lack of sufficient contiguous memory. In dynamic partitioning, the process size is not
restricted, since the partition size is decided according to the process size.
3. Degree of multiprogramming is dynamic
Due to the absence of internal fragmentation, there is no unused space inside the
partitions, so more processes can be loaded into memory at the same time.
Disadvantages of dynamic partitioning
External Fragmentation
The absence of internal fragmentation doesn't mean there will be no external fragmentation.
Consider three processes P1 (1 MB), P2 (3 MB), and P3 (1 MB) loaded into their respective
partitions of main memory.
After some time, P1 and P3 complete and their assigned space is freed. Now there are two
unused partitions (1 MB each) in main memory, but they cannot be used to load a 2 MB
process, since they are not contiguously located.
The rule says that a process must be contiguously present in main memory to be executed.
We need to change this rule to avoid external fragmentation.
Paging
Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB. The main memory
will be divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3, and P4, of 4 KB each. Each process is
divided into pages of 1 KB each, so that one page can be stored in one frame.
Initially, all the frames are empty, so the pages of the processes are stored contiguously.
Frames, pages, and the mapping between the two are shown in the image below.
Let us consider that P2 and P4 are moved to the waiting state after some time. Now 8 frames
become empty, so other pages can be loaded into that empty space. The process P5, of
size 8 KB (8 pages), is waiting in the ready queue.
We have 8 non-contiguous frames available in memory, and paging provides the flexibility
of storing a process's pages in different places. Therefore, we can load the pages of
process P5 in place of P2 and P4.
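The bookkeeping behind this can be made concrete with a minimal Python sketch (our own
illustration, not OS code; the frame table and the load helper are hypothetical names):

frames = [None] * 16                      # frame table: frame -> (process, page)

def load(process, n_pages):
    # Place each page into any free frame: paging needs no contiguity.
    placed = {}
    for page in range(n_pages):
        frame = frames.index(None)        # first free frame, wherever it is
        frames[frame] = (process, page)
        placed[page] = frame
    return placed                         # the process's page table: page -> frame

for p in ("P1", "P2", "P3", "P4"):
    load(p, 4)                            # initially the pages are stored contiguously

# P2 and P4 move to the waiting state: their 8 frames are freed, non-adjacently.
for i, owner in enumerate(frames):
    if owner and owner[0] in ("P2", "P4"):
        frames[i] = None

print(load("P5", 8))   # {0: 4, 1: 5, 2: 6, 3: 7, 4: 12, 5: 13, 6: 14, 7: 15}

Because each page gets its own page-table entry, it does not matter that P5's frames
(4-7 and 12-15) are not adjacent.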
Memory Management Unit
The purpose of the Memory Management Unit (MMU) is to convert logical addresses into
physical addresses. The logical address is the address generated by the CPU for every page,
while the physical address is the actual address of the frame where that page is stored.
When a page is to be accessed by the CPU by using the logical address, the operating system
needs to obtain the physical address to access that page physically.
The logical address has two parts.
1. Page Number
2. Offset
The memory management unit needs to convert the page number into the frame number.
Example
Basics of Binary Addresses
Computer systems assign binary addresses to memory locations, and the number of bits in
an address determines how many locations can be addressed.
Using 1 bit, we can address two memory locations. Using 2 bits we can address 4, and using
3 bits we can address 8 memory locations.
A pattern can be identified in the mapping between the number of bits in the address and the
range of the memory locations.
We know:
1. Using 1 bit, we can represent 2^1, i.e. 2 memory locations.
2. Using 2 bits, we can represent 2^2, i.e. 4 memory locations.
3. Using 3 bits, we can represent 2^3, i.e. 8 memory locations.
Generalizing, using n bits we can address 2^n memory locations:
n bits of address → 2^n memory locations
These n bits can be divided into two parts: k bits and (n - k) bits.
Considering the above image, let's say that the CPU demands the 10th word of the 4th page
of process P3. Since page number 4 of process P3 is stored at frame number 9, the 10th
word of the 9th frame will be returned as the physical address.
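A hedged Python model of this lookup (the page-table contents and the 1 KB page size are
assumptions for illustration, not values fixed by the text):

OFFSET_BITS = 10                          # 1 KB page = 2^10 words -> 10 offset bits
page_table = {4: 9}                       # hypothetical page table: page 4 -> frame 9

def translate(logical_address):
    page   = logical_address >> OFFSET_BITS               # high bits: page number
    offset = logical_address & ((1 << OFFSET_BITS) - 1)   # low bits: word within page
    frame  = page_table[page]                             # page-table lookup
    return (frame << OFFSET_BITS) | offset                # frame number + same offset

print(translate((4 << OFFSET_BITS) + 10)) # 10th word of page 4 -> 9*1024 + 10 = 9226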
Let us consider,
Physical address space = 64 KB = 2^16 Bytes
Word size = 8 Bytes = 2^3 Bytes
Hence,
Physical address space (in words) = (2^16) / (2^3) = 2^13 words
Therefore,
Physical Address = 13 bits
In general,
If Physical Address Space = N words,
Then Physical Address = log2(N) bits
Similarly,
If Logical Address Space = L words,
Then Logical Address = log2(L) bits
What is a Word?
A word is the smallest addressable unit of memory and is a collection of bytes. Every
system defines its own word size, based on the n-bit address that is input to the decoder
and the 2^n memory locations that the decoder produces.
Page Table in OS
Page Table is a data structure used by the virtual memory system to store the mapping
between logical addresses and physical addresses.
Logical addresses are generated by the CPU for the pages of the processes therefore they are
generally used by the processes.
Physical addresses are the actual frame address of the memory. They are generally used by
the hardware or more specifically by RAM subsystems.
The image given below considers,
Physical Address Space = M words
Logical Address Space = L words
Page Size = P words
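As a hedged worked case (the sizes below are illustrative, not taken from the figure): if
L = 2^24 words, M = 2^20 words, and P = 2^10 words, then
Number of pages = L / P = 2^24 / 2^10 = 2^14 pages
Number of frames = M / P = 2^20 / 2^10 = 2^10 frames
so the page table needs 2^14 entries, each wide enough to hold a frame number of
log2(2^10) = 10 bits.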
Demand Paging
When the CPU tries to refer to a page that is not currently available in main memory, the
following steps are taken:
1. The CPU generates an interrupt indicating a memory access fault (a page fault).
2. The OS puts the interrupted process into a blocked state; for execution to proceed, the
OS must bring the required page into memory.
3. The OS searches for the required page in secondary storage (the backing store).
4. The required page is brought into physical memory. If no frame is free, a page
replacement algorithm decides which resident page to replace.
5. The page table is updated accordingly.
6. A signal is sent to the CPU to continue the program execution, and the process is placed
back into the ready state.
Hence, whenever a page fault occurs, the operating system follows these steps and the
required page is brought into memory.
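The following is a minimal runnable sketch of these six steps in Python; every name in it
(free_frames, backing_store, the placeholder victim choice) is an illustrative stand-in,
not a real OS interface:

free_frames   = []                         # no free frames: replacement will be needed
page_table    = {("P1", 7): 3}             # (process, page) -> frame
memory        = {3: "old data"}            # frame -> page contents
backing_store = {("P2", 0): "new data"}    # pages kept on secondary storage

def handle_page_fault(proc, page):
    # 2. the faulting process is blocked while the fault is serviced
    data = backing_store[(proc, page)]     # 3. locate the page on secondary storage
    if free_frames:
        frame = free_frames.pop()
    else:                                  # 4. a replacement algorithm picks a victim
        victim, frame = next(iter(page_table.items()))   # placeholder policy (FIFO/LRU/...)
        del page_table[victim]
    memory[frame] = data                   #    bring the page into that frame
    page_table[(proc, page)] = frame       # 5. update the page table
    # 6. the process returns to the ready state and re-executes the access
    return frame

print(handle_page_fault("P2", 0))          # -> 3: the new page replaces the victim in frame 3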
What is Page Fault Service Time?
The time taken to service the page fault is called page fault service time. The page fault
service time includes the time taken to perform all the above six steps.
Let main memory access time be m, page fault service time be s, and page fault rate be p.
Then, Effective memory access time = p × s + (1 - p) × m
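For instance (the numbers are illustrative): with m = 200 ns, s = 8 ms = 8,000,000 ns, and
p = 0.001,
Effective memory access time = 0.001 × 8,000,000 + 0.999 × 200 = 8,000 + 199.8 ≈ 8,200 ns
so even one fault per thousand accesses makes memory appear roughly forty times slower
than it really is.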
Segmentation
Segmentation divides virtual memory into segments of different sizes. Segments that aren’t
currently needed can be moved to the hard drive. The system uses a segment table to keep
track of each segment’s status, including whether it’s in memory, if it’s been modified, and its
physical address. Segments are mapped into a process’s address space only when needed.
Combining Paging and Segmentation
Sometimes, both paging and segmentation are used together. In this case, memory is divided
into pages, and segments are made up of multiple pages. The virtual address includes both a
segment number and a page number.
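As a sketch of how such a combined address might be decoded (the field widths here are
assumptions chosen for illustration):

PAGE_BITS, OFFSET_BITS = 10, 10            # assumed layout: |segment|page(10)|offset(10)|

def split(vaddr):
    offset  = vaddr & ((1 << OFFSET_BITS) - 1)
    page    = (vaddr >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
    segment = vaddr >> (OFFSET_BITS + PAGE_BITS)
    return segment, page, offset

print(split((2 << 20) | (5 << 10) | 7))    # -> (2, 5, 7): segment 2, page 5, offset 7

The segment number selects a segment table entry, which points to that segment's page
table; the page number then selects the frame, exactly as in pure paging.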
Virtual Memory vs Physical Memory
When talking about the differences between virtual memory and physical memory, the
biggest distinction is speed. RAM is much faster than virtual memory, but it is also more
expensive.
When a computer needs storage for running programs, it uses RAM first. Virtual memory,
which is slower, is used only when the RAM is full.
Definition:
o Virtual memory: an abstraction that extends the available memory by using disk storage.
o Physical memory: the actual hardware (RAM) that stores data and instructions currently
being used by the CPU.
What is Swapping?
Swapping a process out means removing all of its pages from memory, or marking them so
that they will be removed by the normal page replacement process. Suspending a process
ensures that it is not runnable while it is swapped out. At some later time, the system
swaps the process back from secondary storage into main memory. When a process is busy
swapping pages in and out, the situation is called thrashing.
What is Thrashing?
At any given time, only a few pages of any process are in main memory, and therefore more
processes can be maintained in memory. Furthermore, time is saved because unused pages
are not swapped in and out of memory. However, the OS must be clever about how it
manages this scheme. In the steady state, practically all of main memory will be occupied
by process pages, so that the processor and OS have direct access to as many processes as
possible. Thus, when the OS brings one page in, it must throw another out. If it throws
out a page just before it is used, it will have to fetch that page again almost
immediately. Too much of this leads to a condition called thrashing: the system spends
most of its time swapping pages rather than executing instructions. So a good page
replacement algorithm is required.
In the given diagram, up to some degree of multiprogramming (the point lambda), CPU
utilization is very high and system resources are fully utilized. But if we increase the
degree of multiprogramming further, CPU utilization falls drastically: the system spends
most of its time on page replacement, and the time taken to complete the execution of
processes increases. This situation is called thrashing.
Causes of Thrashing
The chief cause is a degree of multiprogramming pushed so high that each process is left
with too few frames, so every process faults continually and the CPU does little besides
servicing page faults.
Page Replacement Algorithms
Consider a main memory with five page frames and the following sequence of page
references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which of the following is true
with respect to the page replacement policies First-In-First-Out (FIFO) and Least Recently
Used (LRU)?
Number of frames = 5
FIFO
According to FIFO, the page that comes into memory first is the first to go out.
Number of Page Faults = 9
Number of hits = 6
LRU
According to LRU, the page that has not been referenced for the longest time is replaced
by the new one.
Initially, all five frames are empty, so 3, 8, and 2 are placed into empty frames —> 3
page faults. 3 is referenced again —> 0 page faults. 9 and 1 fill the remaining empty
frames —> 2 page faults; the frames now hold 3, 8, 2, 9, 1. When 6 comes, it replaces 8,
the least recently used page —> 1 page fault. 3 is a hit. 8 replaces 2 —> 1 page fault.
9, 3, and 6 are hits. 2 replaces 1 —> 1 page fault. 1 replaces 8 —> 1 page fault.
Finally, 3 is a hit.
Number of Page Faults = 9
Number of hits = 6
So for this reference string, FIFO and LRU incur the same number of page faults.
Belady’s anomaly proves that it is possible to have more page faults when increasing the
number of page frames while using the First in First Out (FIFO) page replacement algorithm.
For example, with the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 frames, we
get 9 total page faults, but if we increase the number of frames to 4, we get 10 page faults.
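These counts are easy to check mechanically. Below is a small simulator sketch of our own
(not part of the original question):

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == n_frames:    # evict the page that entered first
                frames.discard(queue.pop(0))
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, n_frames):
    frames, faults = [], 0                 # list ordered from least to most recent
    for p in refs:
        if p in frames:
            frames.remove(p)               # hit: refresh its recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)              # evict the least recently used page
        frames.append(p)
    return faults

refs = [3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3]
print(fifo_faults(refs, 5), lru_faults(refs, 5))       # -> 9 9: same faults for both

belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # -> 9 10: Belady's anomaly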
Optimal Page Replacement
In this algorithm, the page that will not be used for the longest duration of time in the
future is the one replaced.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
with 4 page frames. Find the number of page faults.
Initially, all slots are empty, so 7, 0, 1, and 2 are allocated to the empty slots —> 4
page faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7, because 7 is
not used for the longest duration of time in the future —> 1 page fault. 0 is already
there —> 0 page faults. 4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are already
available in memory. Total: 6 page faults.
Optimal page replacement is perfect but not possible in practice, as the operating system
cannot know future requests. Its use is to set up a benchmark against which other
replacement algorithms can be analyzed.
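A hedged sketch of the policy (our own illustration): it must scan the remainder of the
reference string, which is precisely the future knowledge a real OS does not have.

def opt_faults(refs, n_frames):
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            future = refs[i + 1:]
            # evict the resident page whose next use is farthest away (or never comes)
            victim = max(frames,
                         key=lambda q: future.index(q) if q in future else float("inf"))
            frames.discard(victim)
        frames.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(opt_faults(refs, 4))                 # -> 6, matching the count worked out above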
Least Recently Used
In this algorithm, the page that is least recently used is the one replaced.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find number of page faults.
Initially, all slots are empty, so 7, 0, 1, and 2 are allocated to the empty slots —> 4
page faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7, because 7 is
the least recently used —> 1 page fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are already
available in memory. Total: 6 page faults.
Copy on Write
Copy on Write, or simply COW, is a resource management technique. One of its main uses is
in the implementation of the fork system call, where the parent and child process
initially share virtual memory pages.
In UNIX-like operating systems, the fork() system call creates a duplicate of the parent
process, called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both
processes initially share the same pages in memory, and these shared pages are marked as
copy-on-write. If either process tries to modify a shared page, only then is a copy of
that page created; the modification is made on the copy by that process, so the other
process is unaffected.
Suppose a process P creates a new process Q, and then P modifies page 3.
The figures below show what happens before and after process P modifies page 3.
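The observable effect can be demonstrated with a minimal Python sketch (Unix-only, since
it relies on os.fork; the kernel performs the page copying transparently, and the program
merely observes the result):

import os

data = ["original"]            # after fork, this page is shared copy-on-write
pid = os.fork()
if pid == 0:                   # child process
    data[0] = "modified"       # the write triggers a private copy of the page
    print("child sees:", data[0])
    os._exit(0)
else:                          # parent process
    os.waitpid(pid, 0)
    print("parent sees:", data[0])   # still "original": the parent is unaffected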
Allocation of Frames in OS
Main memory is divided into frames. A process's pages are stored in these frames, and once
its pages are loaded, the CPU may run the process. The operating system must therefore set
aside enough frames for each process, and it uses various algorithms to assign them.
Demand paging is used to implement virtual memory, an essential operating system feature.
It requires the development of a page replacement mechanism and a frame allocation system.
If you have multiple processes, the frame allocation techniques are utilized to define how
many frames to allot to each one. A number of factors constrain the strategies for allocating
frames:
1. You cannot assign more frames than the total number of frames available.
2. A specific number of frames should be assigned to each process. This limitation is
due to two factors. The first is that when the number of frames assigned drops, the
page fault ratio grows, decreasing the process's execution performance. Second, there
should be sufficient frames to hold all the multiple pages that any instruction may
reference.
There are mainly five ways of frame allocation algorithms in the OS. These are as follows:
1. Equal Frame Allocation
2. Proportional Frame Allocation
3. Priority Frame Allocation
4. Global Replacement Allocation
5. Local Replacement Allocation
Equal Frame Allocation
In equal frame allocation, the available frames are divided equally among the processes in the OS.
For example, if the system has 30 frames and 7 processes, each process will get 4 frames. The
2 frames that are not assigned to any system process may be used as a free-frame buffer pool
in the system.
Disadvantage
In a system with processes of varying sizes, assigning equal frames to each process makes
little sense: frames assigned to a small process beyond its needs simply sit empty and are
wasted.
Proportional Frame Allocation
The proportional frame allocation technique assigns frames based on the size needed for
execution and the total number of frames in memory.
The allocated frames for a process pi of size si are ai = (si / S) × m, where S is the sum
of all process sizes and m is the number of frames in the system.
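For example (the sizes here are illustrative): with m = 62 free frames and two processes
of sizes s1 = 10 pages and s2 = 127 pages, S = 137, so
a1 = (10 / 137) × 62 ≈ 4 frames and a2 = (127 / 137) × 62 ≈ 57 frames.
The larger process receives proportionally more frames, and the one leftover frame
(62 - 4 - 57 = 1) can remain in the free pool.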
Disadvantage
The only drawback of this algorithm is that it doesn't allocate frames based on priority.
Priority frame allocation solves this problem.
Priority Frame Allocation
Priority frame allocation assigns frames based on process priority as well as need. If a
process has a high priority and requires more frames, that many frames are allocated to
it; lower-priority processes are allocated their frames after that.
Global Replacement Allocation
When a process requires a page that isn't currently in memory, it may bring it in and
select a frame from the set of all frames, even if that frame is already allocated to
another process. In other words, one process may take a frame from another.
Advantages
Process performance is not hampered, resulting in higher system throughput.
Disadvantages
A process cannot solely control its own page fault ratio; the paging behavior of other
processes also influences how many of its pages are in memory.
Local Replacement Allocation
When a process requires a page that isn't already in memory, it can bring it in and assign it a
frame from its set of allocated frames.
Advantages
Only the paging behavior of the process itself affects which of its pages are in memory
and what its page fault ratio is.
Disadvantages
A low priority process may obstruct a high priority process by refusing to share its frames.
Global Vs. Local Replacement Allocation
Under a local replacement strategy, the number of frames assigned to a process does not
change. Under global replacement, on the other hand, a process may take frames allocated
to other processes and thereby increase the number of frames allocated to it.
Disk-Attached Management
Disks are attached to systems in different ways, and the OS must manage this attachment
efficiently.
Types of Disk Attachment:
o Host-Attached Storage: Disks directly connected to the system (e.g., internal
hard drives, SSDs).
Managed by the OS using drivers and device interfaces (e.g., SATA,
NVMe).
o Network-Attached Storage (NAS): Disks connected via a network, accessed
over protocols like NFS or SMB.
Managed using network file systems that abstract the network layer.
o Storage Area Network (SAN): Dedicated network providing access to block-
level storage, typically used in enterprise environments for large databases or
virtual machines.
Managed using specialized network interfaces (e.g., Fibre Channel).
Device Drivers and Controllers:
o The OS communicates with disks using device drivers, which act as an
interface between the disk hardware and the software.
o Disk Controllers manage the actual communication with the storage device
(e.g., read/write commands, buffer management).
o Direct Memory Access (DMA): Used to offload data transfer between the
disk and system memory to the disk controller, reducing CPU overhead.
I/O Buffering and Caching:
o The OS buffers disk data in memory to reduce disk I/O operations. It may also
cache frequently accessed disk blocks to improve performance.
o Write-back caching stores data in memory before writing it to disk, while
write-through caching writes data to both the cache and the disk
simultaneously.
Sequential Access
Most operating systems access files sequentially. In other words, most files need to be
accessed sequentially by the operating system.
In sequential access, the OS reads the file word by word. A pointer is maintained which
initially points to the base address of the file. If the user wants to read the first word
of the file, the pointer provides that word to the user and increases its value by one
word. This process continues till the end of the file.
Modern systems do provide the concepts of direct access and indexed access, but the most
used method is sequential access, due to the fact that most files, such as text files,
audio files, and video files, need to be accessed sequentially.
Direct Access
Direct access is mostly required in the case of database systems. In most cases, we need
filtered information from the database, and sequential access can be very slow and
inefficient for that.
Suppose every block of storage stores 4 records, and we know that the record we need is
stored in the 10th block. Sequential access would have to traverse all the preceding
blocks to reach the needed record, so it is not used here.
Direct access gives the required result directly, despite the fact that the operating
system has to perform some complex tasks, such as determining the desired block number. It
is generally used in database applications.
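In code, direct access is simply a seek to a computed offset. A minimal Python sketch (the
file name and the record and block sizes are hypothetical):

RECORD_SIZE = 128                     # bytes per record -> 4 records per block
BLOCK_SIZE  = 4 * RECORD_SIZE

with open("data.bin", "rb") as f:     # "data.bin" is a hypothetical file
    f.seek(9 * BLOCK_SIZE)            # jump straight to the 10th block (index 9)
    block = f.read(BLOCK_SIZE)        # blocks 1-9 are never read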
Indexed Access
If a file can be sorted on any of its fields, then an index can be assigned to a group of
records, and a particular record can be accessed by its index. The index is essentially
the address of a record in the file.
With indexed access, searching a large database becomes very quick and easy, but extra
space is needed in memory to store the index.
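A toy illustration of the idea (the index contents and record size are made up): the index
maps a key to the record's byte address in the file, so one seek replaces a full scan.

index = {"emp_1042": 0, "emp_2177": 256, "emp_3310": 512}   # key -> record address

def fetch(f, key, record_size=256):
    f.seek(index[key])                # the index is the address of the record
    return f.read(record_size)        # read just that one record

# usage: with open("employees.db", "rb") as f: record = fetch(f, "emp_2177")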
File concept, access methods, directory and disk structure, file system
mounting, file sharing and protection
Understanding the file concept, access methods, directory and disk structure, file system
mounting, and file sharing protection is essential for managing files in operating systems.
Here’s a detailed overview of each aspect:
1. File Concept
A file is a collection of related information or data that is stored on a storage device. The file
concept in an operating system encompasses several key characteristics:
Attributes: Each file has metadata, including its name, type, size, creation date,
modification date, and permissions.
Content: The actual data stored in the file, which could be text, images, audio, video,
or executable code.
Types of Files:
o Regular Files: Contain user data (e.g., documents, images).
o Directory Files: Contain references to other files or directories, effectively
organizing the file system.
o Special Files: Include device files that represent hardware components and
FIFO (named pipes) for inter-process communication.
2. Access Methods
Access methods determine how data is read from and written to files. Common access
methods include:
Sequential Access: Data is read or written in a linear order, from the beginning to the
end of the file. This method is simple but not efficient for random access.
Random Access: Data can be read or written in any order, allowing direct access to
specific locations within the file. This is useful for databases and applications
requiring fast access to specific records.
Indexed Access: An index is maintained for fast access to file records, allowing the
system to locate records quickly without scanning the entire file.
3. Directory and Disk Structure
The organization of files on a disk is crucial for efficient data retrieval. Key components
include:
Directory Structure:
o Single-Level Directory: All files are stored in one directory, making it simple
but challenging to manage as the number of files grows.
o Two-Level Directory: Each user has their own directory containing their files,
helping organize data but complicating user management.
o Hierarchical Directory: Directories can contain subdirectories, forming a
tree-like structure. This method is the most common, allowing for better
organization and easier navigation.
Disk Structure:
o Blocks: Data is stored in fixed-size blocks on the disk. Each block can hold a
portion of a file.
o Disk Partitions: Disks can be divided into partitions for different file systems
or purposes, improving management and performance.
4. File System Mounting
File system mounting is the process of making a file system accessible to the operating
system by attaching it to a directory in the existing file system hierarchy. This process
involves:
Mounting Points: A directory in the existing file system where the new file system
will be attached. For example, a USB drive may be mounted to /mnt/usb.
Mount Command: In Unix-like systems, the mount command is used to mount file
systems. Syntax:
mount /dev/sdXn /mnt/mount_point
Here, /dev/sdXn refers to the device and partition, and /mnt/mount_point is the directory
where it will be mounted.
Unmounting: To safely detach a file system, the umount command is used, ensuring
that all processes using the file system have finished.
5. File Sharing and Protection
File sharing allows multiple users or processes to access the same file, while protection
mechanisms ensure data security. Key aspects include:
File Sharing:
o Concurrent Access: Multiple users can read or write to a file simultaneously,
depending on the file system's capabilities and configurations.
o Network File Systems: Protocols like NFS (Network File System) and SMB
(Server Message Block) enable file sharing over networks.
Protection Mechanisms:
o Access Control Lists (ACLs): Specify which users or groups have
permissions (read, write, execute) for a file or directory.
o File Permissions: Basic permissions are typically categorized into read (r),
write (w), and execute (x), which can be set for the owner, group, and others
(see the sketch after this list).
o Encryption: Sensitive files can be encrypted to prevent unauthorized access,
even if someone gains access to the file system.
o Audit Trails: Logging access attempts and modifications to files can help
track unauthorized access and changes.
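As a small illustration of the r/w/x triplets (the filename is hypothetical; os.chmod is
the standard Python call for setting permission bits):

import os
os.chmod("report.txt", 0o640)   # owner: read+write, group: read, others: none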