OS Unit 1 & 2


BCA-4TH SEM/B.Sc. IT 2nd SEM www.sbs.ac.in

CLASS: BCA/BSC IT
BATCH:

SUBJECT: OPERATING SYSTEMS

Notes as per IKGPTU Syllabus

NAME OF FACULTY: AARIF KHAN

FACULTY OF COMPUTER & IT, SBS COLLEGE, LUDHIANA

SBS @PROPRIETARY

UNIT-I

OPERATING SYSTEM DEFINITION


An operating system acts as an intermediary between the user of a computer and
computer hardware. The purpose of an operating system is to provide an environment in
which a user can execute programs conveniently and efficiently.
An operating system is software that manages computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system
and to prevent user programs from interfering with the proper operation of the system.
 An operating system is a program that controls the execution of application
programs and acts as an interface between the user of a computer and the computer
hardware.
 A more common definition is that the operating system is the one program running
at all times on the computer (usually called the kernel), with all else being
application programs.
 An operating system is concerned with the allocation of resources and services, such
as memory, processors, devices, and information. The operating system
correspondingly includes programs to manage these resources, such as a traffic
controller, a scheduler, a memory management module, I/O programs, and a file
system.

Features of Operating system – Operating system has the following features:


1. Convenience: An OS makes a computer more convenient to use.
2. Efficiency: An OS allows the computer system resources to be used efficiently.

Major Functionalities of Operating System:

 Resource Management: When multiple users access the system at the same time, the OS
works as a resource manager. Its responsibility is to allocate the hardware among the
users, which balances the load on the system.
 Process Management: It includes tasks such as the creation, scheduling, and
termination of processes. It is done with the help of CPU scheduling algorithms.
 Memory Management: Refers to the management of primary memory. The
operating system has to keep track of how much memory has been used and by

whom. It has to decide which process needs memory space and how much. OS also
has to allocate and deallocate the memory space.
 Security/Privacy Management: The operating system also provides privacy by means of
passwords, so that unauthorized applications cannot access programs or data. For
example, Windows uses Kerberos authentication to prevent unauthorized access to data.

ROLE OF KERNEL AND SHELL

The kernel is the central component of an operating system that manages the operations of
the computer and its hardware, chiefly memory and CPU time. It is the core component of an
operating system. The kernel acts as a bridge between applications and the data processing
performed at the hardware level, using inter-process communication and system calls.
The kernel is loaded into memory first when an operating system starts, and it remains in
memory until the operating system is shut down. It is responsible for various tasks such as
disk management, task management, and memory management.

o As discussed above, the kernel is the core part of an OS (operating system); hence it has full
control over everything in the system. Every operation of hardware and software is
managed and administered by the kernel.


o It acts as a bridge between applications and data processing done at the hardware level. It
is the central component of an OS.
o It is the part of the OS that always resides in computer memory and enables the
communication between software and hardware components.
o It is the first program loaded on system start-up (after the bootloader). Once loaded, it
manages the rest of start-up. It also handles memory, peripheral, and I/O requests from
software, translating all I/O requests into data processing instructions for the CPU.
o The kernel is usually kept in, and loaded into, a separate memory area known as protected
kernel space. It is protected from access by application programs or less critical parts of
the OS.
o Other application programs, such as browsers, word processors, and audio & video players,
use a separate memory area known as user space.
o Because of these two separate spaces, user data and kernel data do not interfere with each
other and do not cause instability or slowness.

Functions of a Kernel
A kernel of an OS is responsible for performing various functions and has control over the
system. Some main responsibilities of Kernel are given below:

o Device Management
To perform various actions, processes require access to peripheral devices such as a mouse,
keyboard, etc., that are connected to the computer. A kernel is responsible for controlling these
devices using device drivers. Here, a device driver is a computer program that helps or enables
the OS to communicate with any hardware device.
A kernel maintains a list of all the available devices; this list may be known in advance,
configured by the user, or detected by the OS at runtime.
o Memory Management
The kernel has full control over access to the computer's memory. Each process requires some
memory to work, and the kernel enables processes to access that memory safely. The first step
in allocating memory is virtual addressing, which is done by paging or segmentation. Virtual
addressing is the process of providing a virtual address space to each process; this prevents
applications from interfering with each other.
o Resource Management
One of the important functions of the kernel is to share resources between various processes.
It must share resources in such a way that each process gets fair access to them.
The kernel also provides a way for synchronization and inter-process communication (IPC). It
is responsible for context switching between processes.
o Accessing Computer Resources
A kernel is responsible for mediating access to computer resources such as RAM and I/O
devices. RAM (Random-Access Memory) holds both data and instructions. Each program needs
memory to execute and often wants more memory than is available. For such cases, the kernel
decides which memory each process will use and what to do when the required memory is not
available.
The kernel also arbitrates requests from applications to use I/O devices such as keyboards,
microphones, printers, etc.

SHELL
Your interface to the operating system is called a shell. The shell is the layer of programming
that understands and executes the commands a user enters. In some systems, the shell is called a
command interpreter.

Shells provide a way for you to communicate with the operating system. This communication is carried
out either interactively (input from the keyboard is acted upon immediately) or as a shell script. A shell
script is a sequence of shell and operating system commands that is stored in a file.


Types of Operating Systems (OS)


An operating system is a well-organized collection of programs that manages the computer
hardware. It is a type of system software that is responsible for the smooth functioning of the
computer system.

Batch Operating System


In the 1970s, batch processing was very popular. In this technique, similar types of jobs were
batched together and executed one after another. Users typically shared a single computer,
called a mainframe.

In a batch operating system, access is given to more than one person; users submit their
respective jobs to the system for execution.

The system puts all of the jobs in a queue on a first-come, first-served basis and then executes
them one by one. Users collect their respective outputs when all the jobs have been executed.
The main purpose of this operating system was to transfer control from one job to the next as
soon as a job completed.


Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it eliminates CPU idle time
between two jobs.

Disadvantages of Batch OS

1. Starvation

Batch processing suffers from starvation.

2. Not Interactive

Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires
the input of two numbers from the console, then it will never get it in the batch processing
scenario since the user is not present at the time of execution.

Multiprogramming Operating System


Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each
process needs two types of system time: CPU time and IO time.

In a multiprogramming environment, when a process does its I/O, The CPU can start the
execution of other processes. Therefore, multiprogramming improves the efficiency of the
system.

Advantages of Multiprogramming OS
o Throughput increases, as the CPU always has some program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems resources are used
efficiently, but they do not provide any user interaction with the computer system.

Multiprocessing Operating System


In Multiprocessing, Parallel computing is achieved. There are more than one processors present
in the system which can execute more than one process at the same time. This will increase the
throughput of the system.


Advantages of Multiprocessing operating system:

o Increased reliability: Processing tasks can be distributed among several processors. This
increases reliability, because if one processor fails, the task can be given to another
processor for completion.
o Increased throughput: With several processors, more work can be done in less time.

Disadvantages of Multiprocessing operating System

o A multiprocessing operating system is more complex and sophisticated, as it has to
manage multiple CPUs simultaneously.

Multitasking Operating System


The multitasking operating system is a logical extension of a multiprogramming system that
enables the execution of multiple programs simultaneously. It allows a user to perform more
than one computer task at the same time.

Advantages of Multitasking operating system


o This operating system is more suited to supporting multiple users simultaneously.
o The multitasking operating systems have well-defined memory management.

Disadvantages of Multitasking operating system


o Multiple processors are busy at the same time completing tasks in a multitasking
environment, so the CPU generates more heat.


Network Operating System

An operating system that includes software and associated protocols to communicate with
other computers via a network conveniently and cost-effectively is called a Network Operating
System.

Advantages of Network Operating System


o In this type of operating system, network traffic reduces due to the division between clients and
the server.
o This type of system is less expensive to set up and maintain.

Disadvantages of Network Operating System


o In this type of operating system, the failure of any node in a system affects the whole system.
o Security and performance are important issues. So trained network administrators are required for
network administration.

Distributed Operating System

The Distributed Operating System is not installed on a single machine; it is divided into parts,
and these parts are loaded onto different machines. A part of the distributed operating system
is installed on each machine to make their communication possible. Distributed operating
systems are much more complex, large, and sophisticated than network operating systems
because they also have to take care of varying networking protocols.


Advantages of Distributed Operating System


o The distributed operating system provides sharing of resources.
o This type of system is fault-tolerant.

Disadvantages of Distributed Operating System


o Protocol overhead can dominate computation cost.

PROGRAM VS PROCESS

Program: It is a set of instructions that has been designed to complete a certain task.
Process: It is an instance of a program that is currently being executed.

Program: It is a passive entity.
Process: It is an active entity.

Program: It resides in the secondary memory of the system.
Process: It is created when a program is executed and is loaded into main memory.

Program: It exists in a single place and continues to exist until it is explicitly deleted.
Process: It exists for a limited amount of time and is terminated once its task has been completed.

Program: It is considered a static entity.
Process: It is considered a dynamic entity.

Program: It has minimal resource requirements.
Process: It has a high resource requirement.

Program: It requires memory space to store its instructions.
Process: It requires resources such as CPU, memory addresses, and I/O while working.

Program: It does not have a control block.
Process: It has its own control block, known as the Process Control Block (PCB).


PCB
The Process Control Block is a data structure that contains all the information related to a
process. The process control block is also known as a task control block, or an entry of the
process table. It is very important for process management, as the data structuring for
processes is done in terms of the PCB. It also records the current state of the process.

When the process is created by the operating system it creates a data structure to store the
information of that process. This is known as Process Control Block (PCB).

For example: there are MS Word processes, PDF processes, printing processes, and many
background processes currently running on the CPU. How will the OS identify and manage
each process without knowing the identity of each one?

So, here PCB comes into play as a data structure to store information about each process.

Therefore, whenever a user triggers a process (like a print command), a process control
block (PCB) is created for that process, and the operating system uses it to execute and
manage the process.
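The kind of bookkeeping a PCB holds can be sketched as a small data structure. This is an illustrative Python sketch only; real kernels store many more fields, and the field names used here are assumptions, not any real OS layout.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block; real kernels store far more."""
    pid: int                      # unique process identifier
    state: str = "new"            # new / ready / running / waiting / terminated
    program_counter: int = 0      # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU register contents
    open_files: list = field(default_factory=list)  # I/O status information

# The OS keeps a process table: a mapping from PID to PCB.
process_table = {}

def create_process(pid):
    process_table[pid] = PCB(pid)
    return process_table[pid]

pcb = create_process(42)
print(pcb.pid, pcb.state)   # 42 new
```

When the OS schedules or suspends a process, it reads and updates exactly this kind of record.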

Process State:

A process, from its creation to completion, goes through different states. Generally, a process
may be in one of the following 5 states during its execution:


 New: This state contains the processes which are ready to be loaded by the operating
system into the main memory.

 Ready: This state contains the processes which are ready to be executed and are
currently in the main memory of the system. The operating system brings processes
from secondary memory (hard disk) to main memory (RAM). As these processes are in
main memory waiting to be assigned to the CPU, their state is known as the ready state.

 Running: This state contains the processes which are currently being executed by the
CPU. If there are x CPUs in the system, then at most x processes can be running at any
given time.

 Block or wait: A process may move from the running state to the blocked or wait state,
depending on the scheduling algorithm or the internal behavior of the process (for
example, the process explicitly waits for I/O).

 Termination: A process that completes its execution comes to the terminated state. All
the contents of that process (its process control block) are then deleted by the operating
system.
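The five states and their legal transitions can be modeled as a small table-driven state machine. This is an illustrative sketch of the textbook diagram, not any real kernel's implementation:

```python
# Allowed transitions in the classic five-state process model.
TRANSITIONS = {
    "new": {"ready"},                               # admitted by the long-term scheduler
    "ready": {"running"},                           # dispatched by the short-term scheduler
    "running": {"ready", "waiting", "terminated"},  # preempted, blocked, or finished
    "waiting": {"ready"},                           # I/O or event completed
    "terminated": set(),                            # no way out
}

def move(state, new_state):
    """Apply one transition, rejecting anything the model does not allow."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical life cycle: admitted, dispatched, blocks on I/O, resumes, finishes.
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # terminated
```

Note that a process never goes directly from waiting to running; it must pass through the ready queue first, which the transition table enforces.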


What is a Process Scheduler in an Operating System?

Process scheduling is responsible for selecting a process for the CPU based on a scheduling
algorithm, as well as for removing a process from the CPU. It is a crucial component of a
multiprogramming operating system. Process scheduling makes use of a variety of scheduling
queues. The scheduler's purpose is to implement the virtual machine abstraction, so that each
process appears to the user to be running on its own computer.

Long-Term Scheduler
The job scheduler is another name for Long-Term scheduler. It selects processes from the pool
(or the secondary memory) and then maintains them in the primary memory’s ready queue.

Short-Term Scheduler
It chooses one job from the ready queue and then sends it to the CPU for processing. The Short-
Term scheduler’s task can be essential in the sense that if it chooses a job with a long CPU burst
time, all subsequent jobs will have to wait in a ready queue for a long period. This is known as
hunger, and it can occur if the Short-Term scheduler makes a mistake when selecting the work


Medium-Term Scheduler
The Medium-Term scheduler handles the switched-out processes. If a running process requires
some I/O time to complete, its state must be changed from running to waiting. This is
accomplished by the Medium-Term scheduler: it stops the process from executing in order to
make space for other processes. Such processes are swapped out, and the operation is known
as swapping. The Medium-Term scheduler is thus in charge of suspending and resuming
processes.


What is a Thread?
Within a program, a thread is a separate execution path. It is a lightweight process that the
operating system can schedule and run concurrently with other threads. The operating system
creates and manages threads, and they share the same memory and resources as the program
that created them. This enables multiple threads to collaborate and work efficiently within a
single program.
The idea is to achieve parallelism by dividing a process into multiple threads. For
example, in a browser, multiple tabs can be different threads. MS Word uses multiple
threads: one thread to format the text, another thread to process inputs, etc.
Types of Threads
There are two types of threads:
 User Level Thread
 Kernel Level Thread

User Level Thread

The operating system does not recognize the user-level thread. User threads can be easily
implemented and it is implemented by the user.

Kernel Level Thread

Kernel-level threads are recognized by the operating system. There is a thread control block
and a process control block in the system for each thread and process. Kernel-level threads are
implemented by the operating system: the kernel knows about all the threads and manages
them, and it offers system calls to create and manage threads from user space. The
implementation of kernel threads is more difficult than that of user threads.
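As a concrete illustration, Python's `threading.Thread` creates threads that share the creating program's memory (in CPython these are backed by kernel-level threads, e.g., pthreads on Linux). This is a minimal sketch, not a statement about any particular OS:

```python
import threading

results = []                 # threads share the creating program's memory
lock = threading.Lock()

def worker(name, n):
    total = sum(range(n))    # some per-thread computation
    with lock:               # guard the shared list against concurrent appends
        results.append((name, total))

# Three threads running concurrently within one program.
threads = [threading.Thread(target=worker, args=(f"t{i}", 5)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()                 # wait for all threads to finish
print(sorted(results))
```

All three threads see the same `results` list because they live in one process's address space, which is exactly the sharing the text describes.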

Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization issues in a
concurrent system.
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other, and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization techniques
such as semaphores, monitors, and critical sections are used.


On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of
other processes.
 Cooperative Process: A process that can affect or be affected by other processes
executing in the system
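A classic illustration of synchronization is several threads incrementing a shared counter inside a critical section guarded by a lock. This is a Python sketch of mutual exclusion in general, not of any particular OS primitive:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    """Increment the shared counter; read-modify-write is the critical section."""
    global counter
    for _ in range(times):
        with lock:           # only one thread at a time may execute this block
            counter += 1

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 — without the lock, lost updates could make it smaller
```

Without the lock, two threads could both read the same old value of `counter` and one update would be lost: exactly the race condition that process synchronization is meant to prevent.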

CPU Scheduling

CPU scheduling is a process that allows one process to use the CPU while another process
is delayed (on standby) due to the unavailability of some resource such as I/O, thus making
full use of the CPU. The purpose of CPU scheduling is to make the system more efficient,
faster, and fairer.
Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue for execution. This selection is done by the short-term (CPU) scheduler, which
chooses from among the processes in memory that are ready to execute and assigns the CPU
to one of them.

What is CPU- I/O Burst Cycle?

In the CPU–I/O burst cycle, process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states. Process execution begins with a CPU burst that is
followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so
on. Eventually, the final CPU burst ends with a system request to terminate execution.

What is Preemptive Scheduling?

Preemptive scheduling is a method that may be used when a process switches from the
running state to the ready state or from the waiting state to the ready state. The CPU is
assigned to a process for a particular time and then taken away. If the process still has
remaining CPU burst time, it is placed back in the ready queue, where it remains until it gets
another chance to execute.

When a high-priority process arrives in the ready queue, it doesn't have to wait for the running
process to finish its burst time. Instead, the running process is interrupted in the middle of its
execution and placed in the ready queue while the high-priority process uses the CPU. As a
result, each process gets some CPU time. This adds the overhead of switching processes
between the running and ready states, but it increases scheduling flexibility. Preemptive
scheduling may or may not include SJF and Priority scheduling.


Advantages

1. It is a more robust method because a process may not monopolize the processor.
2. Each event causes an interruption in the execution of ongoing tasks.
3. It improves the average response time.
4. It is more beneficial when you use this method in a multi-programming environment.
5. The operating system ensures that all running processes get a fair share of CPU time.

Disadvantages

1. It spends limited computational resources on scheduling and context switching.


2. It takes more time suspending the executing process, switching the context, and
dispatching the new incoming process.
3. If several high-priority processes arrive at the same time, the low-priority process would
have to wait longer.

What is Non-Preemptive Scheduling?


Non-preemptive scheduling is a method that may be used when a process terminates or
switches from the running to the waiting state. Once the CPU is assigned to a process, the
process keeps it until it terminates or reaches a waiting state. When the processor starts
executing a process, it must complete it before executing another process; it may not be
interrupted in the middle.

When a non-preemptive process with a high CPU burst time is running, the other process would
have to wait for a long time, and that increases the process average waiting time in the ready
queue. However, there is no overhead in transferring processes from the ready queue to the CPU
under non-preemptive scheduling. The scheduling is strict because the execution process is not
even preempted for a higher priority process.

Advantages

1. It provides a low scheduling overhead.


2. It is a very simple method.
3. It uses less computational resources.
4. It offers high throughput.


Disadvantages

1. It has a poor response time for the process.


2. A machine can freeze up due to bugs.

SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU. There are many different CPU-scheduling algorithms.


First-Come, First-Served Scheduling


By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling
algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue. When a process
enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue. The running process is then removed from the
queue. The code for FCFS scheduling is simple to write and understand. On the negative side,
the average waiting time under the FCFS policy is often quite long, especially when a process
with a long CPU burst arrives ahead of processes with short bursts.
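The FCFS computation can be sketched in a few lines of Python. The workload below is hypothetical; the process names and burst times are illustrative assumptions, not taken from the text:

```python
# Hypothetical workload: (pid, burst_ms); all processes arrive at time 0.
processes = [("P1", 24), ("P2", 3), ("P3", 3)]

def fcfs(procs):
    """Serve in arrival order; return {pid: waiting_time_ms}."""
    clock, waits = 0, {}
    for pid, burst in procs:
        waits[pid] = clock    # time spent in the ready queue before running
        clock += burst        # CPU is busy for this process's whole burst
    return waits

waits = fcfs(processes)
avg = sum(waits.values()) / len(waits)
print(waits, avg)  # {'P1': 0, 'P2': 24, 'P3': 27} 17.0
```

Note how the long burst of P1 makes the short jobs P2 and P3 wait, which is why FCFS average waiting time is often poor.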

Shortest-Job-First Scheduling (SJF)

This algorithm associates with each process the length of the process’s next CPU burst. When
the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next
CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Note that a
more appropriate term for this scheduling method would be the shortest-next-CPU-burst
algorithm, because scheduling depends on the length of the next CPU burst of a process, rather
than its total length. When the next CPU bursts are known, SJF gives the minimum average
waiting time for a given set of processes.
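A minimal sketch of non-preemptive SJF, again with a hypothetical workload (the burst values are illustrative assumptions):

```python
# Hypothetical workload: (pid, next_cpu_burst_ms); all arrive at time 0.
processes = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]

def sjf(procs):
    """Non-preemptive SJF: run the shortest next burst first.

    Python's sort is stable, so equal bursts keep arrival order,
    which matches the FCFS tie-breaking rule in the text.
    """
    order = sorted(procs, key=lambda p: p[1])
    clock, waits = 0, {}
    for pid, burst in order:
        waits[pid] = clock
        clock += burst
    return waits

waits = sjf(processes)
print(waits)  # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
avg = sum(waits.values()) / len(waits)  # 7.0
```

Running the shortest job first (P4, then P1, P3, P2) pulls the average wait down compared with serving the same set in arrival order.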

Round Robin Scheduling Algorithm


The Round Robin scheduling algorithm is one of the most popular scheduling algorithms and
can actually be implemented in most operating systems. It is the preemptive version of
first-come, first-served scheduling. The algorithm focuses on time sharing: every process is
executed in a cyclic way. A certain time slice, called the time quantum, is defined in the
system. Each process in the ready queue is assigned the CPU for that time quantum; if the
process completes within that time, it terminates, otherwise it goes back to the ready queue
and waits for its next turn to finish execution.
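The cyclic behaviour described above can be sketched as a queue simulation. The process names, burst times, and quantum below are illustrative assumptions:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (pid, burst); all arrive at 0. Returns {pid: completion_time}."""
    queue = deque(procs)
    clock, done = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for one quantum, or less if finishing
        clock += run
        remaining -= run
        if remaining:                   # burst unfinished: back of the ready queue
            queue.append((pid, remaining))
        else:
            done[pid] = clock
    return done

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

With quantum 2, the short job P3 finishes quickly at time 5 instead of waiting behind P1's whole burst, which is the time-sharing benefit Round Robin aims for.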

Advantages
1. It is actually implementable in a system because it does not depend on burst time.
2. It doesn't suffer from the problem of starvation or the convoy effect.
3. All jobs get a fair allocation of CPU.

Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context switching overhead in the system.


3. Deciding on a suitable time quantum is a genuinely difficult task.

What is address binding in the operating system?


Address binding refers to the translation of virtual address spaces to actual physical memory
addresses for executing the program. There are different stages at which addresses can be
bound to their final values.

Types of Address Binding:

Address binding is divided into three types, as follows.


1. Compile-time Address Binding
2. Load time Address Binding
3. Execution time Address Binding

Compile-time Address Binding:


 If the compiler is responsible for performing address binding, then it is called
compile-time address binding.
 It is done before loading the program into memory.
 The compiler requires interaction with the OS memory manager to perform compile-
time address binding.

Load time Address Binding:


 It is done after loading the program into memory.
 This type of address binding is done by the OS memory manager, i.e., the loader.

Execution time or dynamic Address Binding:


 It is postponed even after loading the program into memory.
 The program may keep changing its location in memory until the time of program
execution.
 This dynamic type of address binding is done by the processor at the time of program
execution.

Note:
Most operating systems in practice implement dynamic loading, dynamic linking, and
dynamic address binding; for example, Windows, Linux, and UNIX all do.
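Execution-time binding can be illustrated with the classic base/limit (relocation-register) scheme. All addresses and sizes below are made up for illustration:

```python
# Sketch of execution-time (dynamic) binding with a relocation register.
BASE = 0x4000      # relocation (base) register: where the program was loaded
LIMIT = 0x1000     # limit register: size of the program's logical address space

def translate(logical_addr):
    """MMU-style check-and-add: logical address -> physical address."""
    if not (0 <= logical_addr < LIMIT):
        raise MemoryError("addressing error: trap to the OS")
    return BASE + logical_addr

print(hex(translate(0x0346)))  # 0x4346
```

Because binding happens at every access, the OS could move the program and only the `BASE` value would need to change, which is exactly why dynamic binding permits relocation.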


What is Linking and Loading in OS

Linking: The process of collecting and combining pieces of code and data into a single file is
known as linking in the operating system.
Loading: Loading is the process of bringing the program from secondary memory into main
memory for execution.

Linking: Linking is used to join all the modules of a program.
Loading: Loading is used to allocate addresses to all executable files; this task is done by the
loader.

Linking: Linking is performed with the help of a linker. In an operating system, the linker is a
program that links the object modules of a program into a single object file. It is also called a
link editor.
Loading: A loader is a program that places programs into memory and prepares them for
execution.

Linking: Linkers are an important part of the software development process because they
enable separate compilation. Instead of organizing a large application as one monolithic
source file, we can decompose it into smaller, more manageable modules that can be modified
and compiled separately.
Loading: The loader is responsible for the allocation, linking, relocation, and loading of
programs in the operating system.

Paging & Segmentation

Paging is a fixed-size partitioning scheme. In paging, both main memory and secondary
memory are divided into equal fixed-size partitions. The partitions of secondary memory are
called pages, and the partitions of main memory are called frames. Paging is a memory
management method used to fetch processes from secondary memory into main memory in
the form of pages. In paging, each process is split into parts, where the size of each part is the
same as the page size.
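The page-to-frame translation can be sketched as follows; the page size and page-table contents are illustrative assumptions:

```python
PAGE_SIZE = 1024                 # bytes per page (illustrative)
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (illustrative)

def translate(logical_addr):
    """Split the address into (page, offset), then swap the page for its frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]     # a missing entry here would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```

The offset passes through unchanged; only the page number is replaced by a frame number, which is why pages and frames must be the same size.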


Segmentation
In paging, we blindly divide the process into pages of fixed size, but in segmentation, we
divide the process into modules for a better view of the process. Here each segment or
module consists of the same type of functions. For example, the main function is placed in
one segment, library functions are kept in another segment, and so on. As the size of
segments may vary, memory is divided into variable-size parts.

Virtual Memory
A computer can address more memory than the amount physically installed on the system.
This extra memory is called virtual memory, and it is a section of the hard disk set up to
emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical
memory.

In modern microprocessors intended for general-purpose use, a memory management unit, or
MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical
addresses.

Virtual memory is commonly implemented by demand paging.

What is Demand Paging in OS (Operating System)?


According to the concept of virtual memory, in order to execute a process, only a part of the
process needs to be present in main memory; that is, only a few of its pages will be present in
main memory at any time.

However, deciding which pages need to be kept in main memory and which can be kept in
secondary memory is difficult, because we cannot say in advance that a process will require a
particular page at a particular time.

Therefore, to overcome this problem, a concept called demand paging is introduced. It
suggests keeping all pages in secondary memory until they are required; in other words, do
not load a page into main memory until it is needed.

Whenever a page is referenced for the first time, it is not yet in main memory; a page fault
occurs, and the page is brought in from secondary memory.
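A toy simulation of this "load on first reference" behaviour is sketched below. Everything here is illustrative; there is no replacement policy, and real systems add one once free frames run out:

```python
# Toy demand-paging simulation: pages are loaded only on first reference.
frames = {}                      # page -> frame, for pages currently in main memory
next_free_frame = 0
faults = 0

def access(page):
    """Reference a page; load it from 'disk' on a page fault."""
    global next_free_frame, faults
    if page not in frames:       # page fault: the page is still on disk
        faults += 1
        frames[page] = next_free_frame
        next_free_frame += 1     # no replacement policy in this sketch
    return frames[page]

for p in [0, 1, 0, 2, 1, 0]:
    access(p)
print(faults)  # 3 — only the first reference to each page faults
```

Repeated references to pages already in memory cost nothing extra; only the three first-time references fault, which is the saving demand paging provides.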
