Os Unit 1 & 2.
CLASS: BCA/BSC IT
BATCH:
SBS @PROPERITARY
BCA-4TH SEM/B.Sc. IT 2nd SEM www.sbs.ac.in
UNIT-I
whom. It has to decide which process needs memory space and how much. OS also
has to allocate and deallocate the memory space.
Security/Privacy Management: The operating system also provides privacy by means of passwords, so that unauthorized applications cannot access programs or data. For example, Windows uses Kerberos authentication to prevent unauthorized access to data.
o As discussed above, the kernel is the core part of an OS (Operating System); hence it has full
control over everything in the system. Every operation of hardware and software is
managed and administered by the kernel.
o It acts as a bridge between applications and data processing done at the hardware level. It
is the central component of an OS.
o It is the part of the OS that always resides in computer memory and enables the
communication between software and hardware components.
o It is the first program loaded when the system starts up (after the bootloader). Once loaded,
it manages the remaining start-up tasks. It also manages memory, peripherals, and I/O
requests from software, and translates all I/O requests into data processing instructions for
the CPU. It handles other tasks as well, such as memory management, task management,
and disk management.
o The kernel is usually loaded into a separate memory space, known as protected kernel
space, which is protected from access by application programs or less critical parts of
the OS.
o Other application programs, such as browsers, word processors, and audio & video players,
use a separate memory space known as user space.
o Due to these two separate spaces, user data and kernel data don't interfere with each other
and do not cause any instability and slowness.
Functions of a Kernel
A kernel of an OS is responsible for performing various functions and has control over the
system. Some main responsibilities of Kernel are given below:
o Device Management
To perform various actions, processes require access to peripheral devices such as a mouse,
keyboard, etc., that are connected to the computer. A kernel is responsible for controlling these
devices using device drivers. Here, a device driver is a computer program that helps or enables
the OS to communicate with any hardware device.
A kernel maintains a list of all the available devices; this list may be known in advance,
configured by the user, or detected by the OS at runtime.
o Memory Management
The kernel has full control over access to the computer's memory. Each process requires some
memory to work, and the kernel enables processes to access memory safely. The first step in
allocating memory is virtual addressing, achieved through paging or segmentation. Virtual
addressing gives each process its own virtual address space, which prevents applications from
interfering with one another.
o Resource Management
One of the important functions of the kernel is to share resources between various processes.
It must share them in a way that gives each process uniform access to a resource.
The kernel also provides a way for synchronization and inter-process communication (IPC). It
is responsible for context switching between processes.
o Accessing Computer Resources
A kernel is responsible for accessing computer resources such as RAM and I/O devices. RAM, or
Random-Access Memory, is used to hold both data and instructions. Each program needs memory
to execute and often wants more than is available. In such cases, the kernel decides which memory
each process will use and what to do when the required memory is not available.
The kernel also arbitrates requests from applications to use I/O devices such as keyboards,
microphones, and printers.
SHELL
Your interface to the operating system is called a shell. The shell is the layer of programming
that understands and executes the commands a user enters. In some systems, the shell is called a
command interpreter.
Shells provide a way for you to communicate with the operating system. This communication is carried
out either interactively (input from the keyboard is acted upon immediately) or as a shell script. A shell
script is a sequence of shell and operating system commands that is stored in a file.
BATCH OPERATING SYSTEM
In a batch operating system, access is given to more than one person; users submit their respective
jobs to the system for execution.
The system puts all of the jobs in a queue on a first come, first served basis and then executes
them one by one. Users collect their respective output once all the jobs have been executed. The
main purpose of this operating system was to transfer control from one job to the next as soon
as a job completed.
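The first come, first served batch queue described above can be sketched in a few lines; the job names and run times below are purely illustrative:

```python
from collections import deque

def run_batch(jobs):
    """Execute jobs strictly first come, first served; return finish times."""
    queue = deque(jobs)               # jobs sit in the queue in submission order
    clock = 0
    finish = {}
    while queue:
        name, burst = queue.popleft() # take the next job from the front
        clock += burst                # the CPU runs it to completion
        finish[name] = clock
    return finish

print(run_batch([("J1", 4), ("J2", 2), ("J3", 1)]))
# {'J1': 4, 'J2': 6, 'J3': 7}
```

Note how J3, despite being the shortest job, must wait for every job submitted before it; this is the source of the starvation problem listed below.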
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it eliminates CPU idle time
between two jobs.
Disadvantages of Batch OS
1. Starvation
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires
the input of two numbers from the console, then it will never get it in the batch processing
scenario since the user is not present at the time of execution.
MULTIPROGRAMMING OPERATING SYSTEM
In a multiprogramming environment, when a process performs its I/O, the CPU can start the
execution of other processes. Therefore, multiprogramming improves the efficiency of the
system.
Advantages of Multiprogramming OS
o Throughput increases, as the CPU always has some program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which the various system resources are
used efficiently, but they do not provide any user interaction with the computer system.
MULTIPROCESSING OPERATING SYSTEM
In multiprocessing, parallel computing is achieved. More than one processor present in the
system can execute more than one process simultaneously, which increases the throughput of
the system.
NETWORK OPERATING SYSTEM
An operating system that includes software and associated protocols to communicate with
other computers via a network conveniently and cost-effectively is called a Network Operating
System.
DISTRIBUTED OPERATING SYSTEM
A Distributed Operating System is not installed on a single machine; it is divided into parts,
and these parts are loaded on different machines. A part of the distributed operating system is
installed on each machine to make communication between them possible. Distributed operating
systems are much more complex, large, and sophisticated than network operating systems because
they also have to take care of varying networking protocols.
PROGRAM VS PROCESS
Program: It is a set of instructions designed to complete a certain task.
Process: It is an instance of a program that is currently being executed.
Program: It resides in the secondary memory of the system.
Process: It is created when a program is in execution and is loaded into the main memory.
Program: It exists in a single place and continues to exist until it has been explicitly deleted.
Process: It exists for a limited amount of time and gets terminated once its task has been completed.
Program: It requires memory space to store its instructions.
Process: It requires resources such as CPU time, memory addresses, and I/O during its working.
Program: It doesn't have a control block.
Process: It has its own control block, known as the Process Control Block (PCB).
PCB
A Process Control Block is a data structure that contains information related to a process.
The process control block is also known as a task control block or an entry in the process table.
It is very important for process management, as the data structuring for processes is done in
terms of the PCB. It also records the current state of the process.
When a process is created, the operating system creates a data structure to store the
information of that process. This is known as the Process Control Block (PCB).
For example: there may be MS Word processes, PDF processes, printing processes, and many
background processes running concurrently on the CPU. How will the OS identify and manage
each process without knowing the identity of each one?
This is where the PCB comes into play, as a data structure that stores information about each
process. Therefore, whenever a user triggers a process (like a print command), a process control
block (PCB) is created for that process, which the operating system uses to execute and manage it.
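A PCB can be pictured as a record of per-process bookkeeping fields. The sketch below shows a representative subset of those fields, not an exhaustive or standardized list:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A minimal model of a Process Control Block (illustrative fields only)."""
    pid: int                     # unique process identifier
    state: str = "new"           # new / ready / running / waiting / terminated
    program_counter: int = 0     # address of the next instruction to execute
    registers: dict = field(default_factory=dict)  # saved CPU register contents
    open_files: list = field(default_factory=list) # file descriptors held

# The OS would create one PCB per process, e.g. when a print command starts:
pcb = PCB(pid=42)
pcb.state = "ready"              # the scheduler updates the state field
```

A real OS kernel stores many more fields (scheduling priority, memory-map pointers, accounting data), but the idea is the same: everything the kernel must remember to pause and later resume the process lives in this one record.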
Process State:
A process, from its creation to completion, goes through different states. Generally, a process
may be present in one of the following 5 states during its execution:
New: This state contains the processes which are ready to be loaded by the operating
system into the main memory.
Ready: This state contains the processes which are both ready to be executed and
currently present in the main memory of the system. The operating system brings these
processes from secondary memory (hard disk) into main memory (RAM). As they are
present in the main memory and are waiting to be assigned to the CPU, they are said to
be in the Ready state.
Running: This state contains the processes which are currently being executed by the
CPU. If there are x CPUs in the system, then the maximum number of running processes
at any given time is also x.
Block or wait: A process may move from the running state to the blocked or waiting state,
depending on the scheduling algorithm or because of the internal behavior of the process
(the process explicitly wants to wait).
Termination: A process that completes its execution comes to the termination state. All
the contents of that process (its process control block) are then deleted by the operating
system.
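The five states above and the legal moves between them can be sketched as a small transition table; the state names follow the list above, and the transition set is the standard five-state model, not anything specific to one OS:

```python
# Legal transitions of the five-state process model
TRANSITIONS = {
    "new":        {"ready"},                          # admitted into main memory
    "ready":      {"running"},                        # dispatched by the scheduler
    "running":    {"ready", "waiting", "terminated"}, # preempted, blocks, or exits
    "waiting":    {"ready"},                          # I/O or event completes
    "terminated": set(),                              # no way out
}

def move(state, target):
    """Apply one transition, rejecting moves the model does not allow."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# A process that runs, blocks on I/O, resumes, and finishes:
s = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    s = move(s, nxt)
```

Note that a waiting process cannot go straight back to running: it must re-enter the ready queue and be dispatched again.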
Long-Term Scheduler
The job scheduler is another name for Long-Term scheduler. It selects processes from the pool
(or the secondary memory) and then maintains them in the primary memory’s ready queue.
Short-Term Scheduler
It chooses one job from the ready queue and then sends it to the CPU for processing. The Short-
Term scheduler's task can be critical in the sense that if it chooses a job with a long CPU burst
time, all subsequent jobs will have to wait in the ready queue for a long period. This is known as
starvation, and it can occur if the Short-Term scheduler makes a mistake when selecting the next job.
Medium-Term Scheduler
The Medium-Term scheduler handles switched-out processes. If a running process requires some
I/O time to complete, its state must be changed from running to waiting; this is accomplished by
the Medium-Term scheduler. It suspends a process from executing in order to make room for
other processes. Such suspended processes are said to be swapped out, and the operation is
known as swapping. The Medium-Term scheduler is thus in charge of suspending and resuming
processes.
What is a Thread?
Within a program, a thread is a separate execution path. It is a lightweight process that the
operating system can schedule and run concurrently with other threads. The operating system
creates and manages threads, and they share the same memory and resources as the program
that created them. This enables multiple threads to collaborate and work efficiently within a
single program.
The idea is to achieve parallelism by dividing a process into multiple threads. For
example, in a browser, multiple tabs can be different threads. MS Word uses multiple
threads: one thread to format the text, another thread to process inputs, etc.
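A short sketch of this idea, using Python's standard threading module: several threads run concurrently inside one program and share its memory, so access to the shared list is guarded. The worker names and workload are illustrative:

```python
import threading

results = []                       # shared by all threads (same address space)
results_lock = threading.Lock()    # guards the shared list

def worker(name, n):
    total = sum(range(n))          # each thread computes independently
    with results_lock:             # the list is shared, so serialize appends
        results.append((name, total))

# Four threads of one program, scheduled concurrently by the OS
threads = [threading.Thread(target=worker, args=(f"t{i}", 1000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                       # wait for every thread to finish

print(len(results))                # 4
```

Because the threads share `results` directly, no inter-process communication mechanism is needed; that sharing is exactly what makes threads lighter than full processes.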
Types of Threads
There are two types of threads:
User Level Thread
Kernel Level Thread
The operating system does not recognize user-level threads. User threads can be easily
implemented, and they are implemented in user space.
The operating system recognizes kernel-level threads. There is a thread control block and a
process control block in the system for each thread and process at the kernel level. Kernel-
level threads are implemented by the operating system; the kernel knows about all the threads
and manages them. The kernel offers system calls to create and manage threads from user
space. The implementation of kernel threads is more difficult than that of user threads.
Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization issues in a
concurrent system.
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other, and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization techniques
such as semaphores, monitors, and critical sections are used.
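A binary semaphore guarding a shared counter illustrates the idea: the acquire/release pair brackets the critical section, so increments from different threads cannot interleave and lose updates. The sketch uses threads rather than full processes for brevity, and the counts are illustrative:

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore protecting the shared counter

def deposit(times):
    global counter
    for _ in range(times):
        sem.acquire()          # entry section: wait()
        counter += 1           # critical section: read-modify-write on shared data
        sem.release()          # exit section: signal()

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                 # 40000: every increment is preserved
```

Without the semaphore, two threads could both read the same old value of `counter`, each add one, and write back, so one increment would vanish; that lost update is the race condition the synchronization prevents.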
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: The execution of one process does not affect the execution of
other processes.
Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
CPU Scheduling
CPU Scheduling is a process that allows one process to use the CPU while another
process is delayed (on standby) due to the unavailability of some resource, such as I/O,
thus making full use of the CPU. The purpose of CPU scheduling is to make the system
more efficient, faster, and fairer.
Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue for execution. This selection is carried out by the short-term (CPU) scheduler,
which chooses among the processes in memory that are ready to execute and assigns the CPU
to one of them.
CPU–I/O Burst Cycle: Process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states. Process execution begins with a CPU burst that is
followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so
on. Eventually, the final CPU burst ends with a system request to terminate execution.
Preemptive scheduling is a method that may be used when a process switches from the running
state to the ready state or from the waiting state to the ready state. The CPU is assigned to a
process for a limited time and then taken away. If the process still has remaining CPU burst
time, it is placed back in the ready queue, where it remains until it is given a chance to execute
again.
When a high-priority process arrives in the ready queue, it does not have to wait for the running
process to finish its burst time. Instead, the running process is interrupted in the middle of its
execution and placed in the ready queue until the high-priority process has used the resources. As
a result, each process gets some CPU time. Switching a process between the running and ready
states incurs overhead, but it increases scheduling flexibility. SJF and Priority scheduling may or
may not be preemptive.
Advantages
1. It is a more robust method because a process may not monopolize the processor.
2. Each event causes an interruption of ongoing tasks.
3. It improves the average response time.
4. It is especially beneficial in a multi-programming environment.
5. The operating system ensures that all running processes get a fair share of CPU time.
Non-Preemptive Scheduling
In non-preemptive scheduling, when a process with a high CPU burst time is running, the other
processes have to wait for a long time, which increases the average waiting time in the ready
queue. However, there is no overhead in transferring processes from the ready queue to the CPU
under non-preemptive scheduling. The scheduling is strict, because the executing process is not
preempted even for a higher-priority process.
SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU. There are many different CPU-scheduling algorithms.
Shortest-Job-First (SJF) Scheduling
This algorithm associates with each process the length of the process's next CPU burst. When
the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next
CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Note that a
more appropriate term for this scheduling method would be the shortest-next-CPU-burst
algorithm, because scheduling depends on the length of the next CPU burst of a process, rather
than its total length. As an example of SJF scheduling, consider the following set of processes,
with the length of the CPU burst given in milliseconds:
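The example table itself is not reproduced above, so the sketch below uses assumed burst times purely for illustration; it computes the waiting time of each process under non-preemptive SJF when all processes arrive at time 0:

```python
def sjf(processes):
    """Non-preemptive SJF for processes that all arrive at time 0.
    processes: list of (name, burst_ms). Returns waiting time per process."""
    order = sorted(processes, key=lambda p: p[1])  # shortest next burst first
    waiting, clock = {}, 0
    for name, burst in order:
        waiting[name] = clock   # time spent waiting before getting the CPU
        clock += burst          # run the chosen process to completion
    return waiting

# Assumed bursts (ms), not the textbook's own table
print(sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]))
# average waiting time = (0 + 3 + 9 + 16) / 4 = 7 ms
```

Running the shortest burst first minimizes the average waiting time, which is why SJF is provably optimal on this metric when all burst lengths are known in advance.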
Advantages
1. It is practically implementable in the system because it does not depend on the burst time.
2. It doesn't suffer from the problem of starvation or the convoy effect.
3. All the jobs get a fair allocation of CPU.
Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context-switching overhead in the system.
3. Deciding a perfect time quantum is a very difficult task.
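The advantages and disadvantages above describe time-quantum-based Round Robin scheduling, where each process runs for at most one quantum before rejoining the back of the ready queue. A minimal sketch (process names, bursts, and the quantum are illustrative):

```python
from collections import deque

def round_robin(processes, quantum):
    """All processes arrive at time 0; returns completion time per process."""
    queue = deque(processes)          # entries are (name, remaining burst)
    clock, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        clock += slice_               # run for one quantum (or less, if done)
        remaining -= slice_
        if remaining:
            queue.append((name, remaining))  # preempted: back of the queue
        else:
            completion[name] = clock         # finished within this slice
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

Notice both trade-offs in the run: the short job P3 still waits behind one quantum each of P1 and P2 (quantum too large hurts response time), while P1 is context-switched twice before finishing (quantum too small inflates switching overhead).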
Note:
The majority of operating systems practically implement dynamic loading, dynamic
linking, and dynamic address binding; for example, Windows, Linux, and UNIX, all
popular OSs, do so.
LINKING VS LOADING
Paging
Paging is a fixed-size partitioning scheme. In paging, both main memory and secondary
memory are divided into equal fixed-size partitions. The partitions of secondary memory
and of main memory are known as pages and frames, respectively. Paging is a memory
management method used to fetch processes from secondary memory into main memory in
the form of pages. In paging, each process is split into parts, where the size of every part is
the same as the page size.
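Address translation under paging splits a logical address into a page number and an offset, then swaps the page number for a frame number. The page size and page-table contents below are assumptions for illustration:

```python
PAGE_SIZE = 4096   # assumed 4 KB pages

# Assumed page table: logical page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # which page the address falls in
    offset = logical_address % PAGE_SIZE   # position inside that page
    frame = page_table[page]               # look up the page's frame
    return frame * PAGE_SIZE + offset      # same offset, different base

print(translate(4100))   # page 1, offset 4 -> frame 2: 2 * 4096 + 4 = 8196
```

The offset is never changed by translation; only the page-to-frame mapping differs, which is what lets the pages of one process be scattered across non-contiguous frames.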
Segmentation
In paging, we blindly divide the process into pages of fixed sizes, but in segmentation, we
divide the process into modules for a better view of the process. Here each segment, or
module, consists of the same type of functions. For example, the main function is included in
one segment, library functions are kept in another segment, and so on. As the size of segments
may vary, memory is divided into variable-size parts.
Virtual Memory
A computer can address more memory than the amount physically installed on the system. This
extra memory is called virtual memory, and it is a section of a hard disk that is set up to
emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical memory.
In modern microprocessors intended for general-purpose use, a memory management unit, or
MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical
addresses.
However, deciding which pages need to be kept in main memory and which in secondary
memory is difficult, because we cannot say in advance which page a process will require at a
particular time.
Therefore, to overcome this problem, a concept called Demand Paging was introduced. It
suggests keeping all pages in secondary memory until they are required; in other words, do not
load any page into main memory until it is required. Whenever a page is referenced for the
first time, it is fetched from secondary memory into main memory.