OS UNIT2

1. Explain Process concepts in operating systems

Process concepts are foundational to understanding how operating systems (OS) manage resources and execute programs. A process in an operating system is a program in execution. It acts as the primary unit of work within a computer system and requires resources like CPU time, memory, files, and I/O devices to function. Here's a breakdown of the key concepts:

1. Process States
A process undergoes different states during its execution lifecycle:
 New: The process is being created.
 Ready: The process is loaded into main memory and waiting for
CPU allocation.
 Running: The process is currently being executed by the CPU.
 Waiting (Blocked): The process is waiting for some I/O
operation or event to complete.
 Terminated: The process has finished execution or is aborted.

2. Process Control Block (PCB)


The OS maintains a PCB for each process, containing:
 Process ID (PID)
 Process state
 Program counter (address of the next instruction to execute)
 CPU registers
 Memory management information
 I/O status information
 Accounting information (e.g., CPU usage, time limits)

3. Process Scheduling
The OS uses scheduling algorithms to decide which process to run
next. Types of scheduling:
 Long-term scheduling: Decides which processes to admit into
the system.
 Short-term scheduling: Determines which process gets CPU
time (e.g., round-robin, priority scheduling).
 Medium-term scheduling: Temporarily removes processes from
memory to reduce the load (swapping).

4. Process Operations
Processes can be manipulated through various operations:
 Process Creation: Initiated by a system call like fork() in Unix. A
parent process creates child processes.
 Process Termination: A process ends via exit() system call or an
error.
 Process Hierarchy: Parent-child relationship forms a tree of
processes. Child processes may inherit resources from parents.
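As a minimal sketch of these operations (assuming a Unix-like system), the following C program creates a child with fork(), the child terminates via exit(), and the parent reaps it with waitpid():

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* process creation: clone the caller */

    if (pid < 0) {                     /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {             /* child process */
        printf("child PID=%d, parent PID=%d\n", getpid(), getppid());
        exit(0);                       /* process termination via exit() */
    } else {                           /* parent process */
        int status;
        waitpid(pid, &status, 0);      /* parent waits for its child */
        printf("parent PID=%d reaped child %d\n", getpid(), pid);
    }
    return 0;
}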

5. Inter-Process Communication (IPC)


Processes often need to communicate with one another. Common
IPC mechanisms include:
 Shared Memory: Multiple processes can access a common
memory area.
 Message Passing: Processes exchange information using
messages.
 Pipes, Semaphores, and Sockets: Used for synchronization and
communication.
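For instance, here is a minimal pipe-based IPC sketch (Unix-specific; the message text is arbitrary). The pipe is created before fork(), so both parent and child inherit its file descriptors:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                         /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                 /* child: writes a message */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                      /* parent: reads the message */
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}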

6. Threads
A thread is a lightweight subunit of a process. While a process is the
container, threads represent individual tasks within it. Threads share
the same process resources but execute independently.

7. Context Switching
When the CPU switches from executing one process to another, the
OS saves the current process's state (PCB) and loads the state of the
next process. This is known as context switching and incurs some
overhead.

8. Process Synchronization
When processes share resources, synchronization ensures correct
execution. Mechanisms include:
 Semaphores and Mutexes: Prevent race conditions.
 Monitors: High-level synchronization construct.
 Critical Section: Part of the code where shared resources are
accessed.

9. Deadlocks
A situation where two or more processes are waiting indefinitely for
resources held by each other. The OS uses methods like deadlock
prevention, avoidance (e.g., Banker's Algorithm), or detection and
recovery.
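The circular wait at the heart of a deadlock is easy to reproduce. In this illustrative sketch (using threads rather than processes, since the locking pattern is identical; compile with -pthread), each worker grabs one mutex and then blocks waiting for the other's:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

void *worker1(void *arg) {
    pthread_mutex_lock(&a);    /* holds a, then wants b */
    sleep(1);
    pthread_mutex_lock(&b);    /* blocks while worker2 holds b */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

void *worker2(void *arg) {
    pthread_mutex_lock(&b);    /* holds b, then wants a */
    sleep(1);
    pthread_mutex_lock(&a);    /* blocks while worker1 holds a: deadlock */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, worker1, NULL);
    pthread_create(&y, NULL, worker2, NULL);
    printf("both workers started; with this timing they deadlock\n");
    pthread_join(x, NULL);     /* likely never returns */
    pthread_join(y, NULL);
    return 0;
}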

Importance of Process Management


Process management ensures efficient CPU utilization, fair resource
allocation, and smooth multitasking, enabling modern operating
systems to execute multiple applications simultaneously.

2. Explain process state in operating systems

Process State in Operating Systems


A process state refers to the current status of a process at a specific
moment in its lifecycle. It represents what the process is doing and
determines the actions the operating system (OS) can take for that
process. A process transitions between states as it executes.
Key Process States
1. New
o The process is being created.
o The OS has allocated the required resources but hasn’t yet
admitted the process to the ready queue.
2. Ready
o The process is prepared to execute but is waiting for CPU
allocation.
o It resides in the ready queue, where processes await CPU
scheduling.
3. Running
o The process is actively being executed by the CPU.
o At any given time, only one process per CPU core is in this
state (in non-parallel systems).
4. Waiting (Blocked)
o The process is unable to continue execution until some
I/O operation or event completes (e.g., reading data from
a file, waiting for a network response).
o Once the event completes, the process transitions back to
the ready state.
5. Terminated
o The process has finished its execution or has been
explicitly killed.
o Resources used by the process are deallocated, and the
process exits the system.

Process State Transition Diagram


Below is a description of the typical state transitions:
1. New → Ready: Once a process is created and all necessary
resources are allocated, it enters the ready state.
2. Ready → Running: The CPU scheduler selects a process from
the ready queue, and it starts executing.
3. Running → Terminated: The process completes or is
terminated by a user or system call.
4. Running → Waiting: If the process requires an I/O operation or
is waiting for an event, it moves to the waiting state.
5. Waiting → Ready: When the I/O operation or event is
completed, the process returns to the ready queue.
6. Running → Ready: In preemptive multitasking, if a higher-
priority process needs the CPU, the running process is paused
and moved back to the ready state.
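These arcs can be captured in a small table. The sketch below is purely illustrative (no real kernel exposes its scheduler this way); it encodes the legal transitions just described as a C function:

/* Illustrative only: the state names and transition table mirror the
 * diagram above; they are not a real OS API. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Returns 1 if moving from 'from' to 'to' is one of the legal arcs. */
int legal_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;
    case READY:   return to == RUNNING;
    case RUNNING: return to == READY || to == WAITING || to == TERMINATED;
    case WAITING: return to == READY;
    default:      return 0;   /* TERMINATED has no outgoing arcs */
    }
}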

3. Explain process control block in operating systems

A Process Control Block (PCB) is a data structure that contains information about a process. The PCB is also known as a task control block, or an entry in the process table. It is very important for process management, since the OS represents each process by its PCB; taken together, the PCBs reflect the current state of the system.

Structure of the Process Control Block
The PCB stores many data items that are needed for efficient process management. Some of these data items are explained below.

The following are the data items:
 Process State: specifies the process state, i.e. new, ready, running, waiting, or terminated.
 Process Number: the unique ID (PID) of the particular process.
 Program Counter: contains the address of the next instruction to be executed in the process.
 Registers: the registers used by the process. They may include accumulators, index registers, stack pointers, general-purpose registers, etc.
 List of Open Files: the different files that are associated with the process.
 CPU Scheduling Information: the process priority, pointers to scheduling queues, and any other scheduling parameters.
 Memory Management Information: includes the page tables or the segment tables, depending on the memory system used, as well as the values of the base and limit registers.
 I/O Status Information: includes the list of I/O devices used by the process, the list of open files, etc.
 Accounting Information: the time limits, account numbers, amount of CPU used, process numbers, etc.
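A simplified PCB can be sketched as a C structure. All field names and sizes below are illustrative assumptions; a real PCB (for example, Linux's task_struct) is far larger:

#include <stdint.h>

#define MAX_OPEN_FILES 16

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

struct pcb {
    int         pid;                        /* process number             */
    proc_state  state;                      /* current process state      */
    uint64_t    program_counter;            /* next instruction address   */
    uint64_t    registers[16];              /* saved general registers    */
    int         priority;                   /* CPU scheduling information */
    struct pcb *next_in_queue;              /* link into a sched queue    */
    uint64_t    base_reg, limit_reg;        /* memory-management info     */
    int         open_files[MAX_OPEN_FILES]; /* I/O status: open fds       */
    uint64_t    cpu_time_used;              /* accounting information     */
};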

4. Explain scheduling in operating systems


CPU scheduling is the mechanism the operating system uses to decide which task or program gets to use the CPU at a particular time. Since many programs can be active at once, the OS must manage the CPU's time so that every program gets a proper chance to run. This matters because a CPU core can execute only one task at a time while there are usually many tasks waiting to be processed; the purpose of CPU scheduling is to make the system more efficient and responsive.

Put another way, CPU scheduling is the process of deciding which process will own the CPU while other processes are suspended. Its main function is to ensure that whenever the CPU becomes idle, the OS has selected one of the processes available in the ready queue.

Terminologies Used in CPU Scheduling


 Arrival Time: The time at which the process arrives in
the ready queue.
 Completion Time: The time at which the process
completes its execution.
 Burst Time: Time required by a process for CPU
execution.
 Turnaround Time: the difference between completion time and arrival time.
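As a quick worked example (the numbers are invented), suppose three processes run first-come-first-served:

Process  Arrival  Burst  Completion  Turnaround (Completion - Arrival)
P1       0        5      5           5 - 0 = 5
P2       1        3      8           8 - 1 = 7
P3       2        1      9           9 - 2 = 7

Waiting time, used in the criteria below, is turnaround time minus burst time: 0, 4, and 6 units respectively.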

CPU Scheduling Criteria

A scheduling algorithm is judged by the following criteria:


 CPU Utilization: the main purpose of any scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from about 40 to 90 percent depending on the load.
 Throughput: the number of processes completed per unit of time. It varies with the length or duration of the processes.
 Turnaround Time: for a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is the turnaround time. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
 Waiting Time: the scheduling algorithm does not affect the amount of CPU time a process needs once it has started executing; it affects only the waiting time of the process, i.e. the time spent waiting in the ready queue.
 Response Time: in an interactive system, turnaround time is not the best measure. A process may produce some output early and continue computing new results while previous results are shown to the user. A better measure is therefore the time from the submission of a request until the first response is produced. This is called response time.

Different Types of CPU Scheduling Algorithms


There are mainly two types of scheduling methods:
 Preemptive Scheduling: the OS may take the CPU away from a running process, for example when a higher-priority process becomes ready or a time slice expires. Scheduling decisions of this kind occur when a process switches from the running state to the ready state or from the waiting state to the ready state.
 Non-Preemptive Scheduling: once a process is given the CPU, it keeps it until it terminates or switches from the running state to the waiting state.
5. Explain queues in operating systems

In operating systems, queues are data structures used to manage and organize processes, tasks, or resources waiting to be executed or accessed. They follow specific scheduling policies, such as First In, First Out (FIFO), to determine the order in which elements are processed.

Types of Queues in Operating Systems


Queues play a vital role in various subsystems of the OS, such
as CPU scheduling, disk scheduling, and resource
management. The major types include:

1. Job Queue
 Contains all processes that are submitted to the system.
 Represents the pool of processes awaiting admission
into the system.

2. Ready Queue
 Contains all processes that are ready to execute and are
waiting for CPU time.
 Managed by the short-term scheduler.
 Implemented using data structures like linked lists,
circular queues, or priority queues.

3. Device Queue
 Stores processes waiting for access to specific I/O
devices (e.g., disk, printer).
 Each device may have its own queue, such as a disk
queue or a printer queue.

4. Waiting Queue (Blocked Queue)


 Contains processes that are waiting for an event (e.g.,
I/O completion or a signal).
 Once the event occurs, the processes are moved back to
the ready queue.

5. Priority Queue
 Processes are assigned priorities, and the queue ensures
higher-priority processes are executed first.
 Used in priority scheduling algorithms.

6. Multilevel Queue
 The ready queue is divided into multiple queues based
on specific criteria (e.g., foreground vs. background
processes).
 Each queue has its own scheduling policy, and the OS
decides how to allocate CPU time between queues.

How Queues Work in Process Management


1. Process Creation:
o When a process is created, it is placed in the job
queue.
2. Ready Queue:
o If admitted to the system, the process moves to the
ready queue.
3. Execution:
o The scheduler selects a process from the ready
queue and assigns it to the CPU.
4. I/O Requests:
o If the process requests I/O, it moves to the device
queue.
5. Completion or Termination:
o Once the process completes or is terminated, it
exits all queues.
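A ready queue is often implemented as a FIFO linked list of PCBs. The following minimal C sketch (node and function names are illustrative, and the queue holds bare PIDs for brevity) shows the enqueue/dequeue discipline:

#include <stdio.h>
#include <stdlib.h>

struct node  { int pid; struct node *next; };
struct queue { struct node *head, *tail; };

void enqueue(struct queue *q, int pid) {       /* admit a process */
    struct node *n = malloc(sizeof *n);
    n->pid = pid; n->next = NULL;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
}

int dequeue(struct queue *q) {                 /* dispatch the next process */
    if (!q->head) return -1;                   /* queue empty */
    struct node *n = q->head;
    int pid = n->pid;
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    free(n);
    return pid;
}

int main(void) {
    struct queue ready = { NULL, NULL };
    enqueue(&ready, 101); enqueue(&ready, 102); enqueue(&ready, 103);
    printf("dispatch %d\n", dequeue(&ready));  /* 101: first in, first out */
    printf("dispatch %d\n", dequeue(&ready));  /* 102 */
    return 0;
}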
6. Explain process scheduling in operating systems

Process scheduling is the mechanism used by an operating system (OS) to decide which process in the system will execute next. It involves allocating CPU time and other resources to processes to ensure efficient and fair system performance. Since multiple processes may need the CPU simultaneously, scheduling is crucial for multitasking and resource management.

Goals of Process Scheduling


1. Maximize CPU Utilization: Ensure the CPU is kept as
busy as possible.
2. Fairness: Prevent starvation by giving every process
access to the CPU.
3. Minimize Waiting Time: Reduce the time processes
spend in the ready queue.
4. Maximize Throughput: Complete as many processes as
possible in a given time.
5. Minimize Turnaround Time: Lower the total time taken
to execute a process.
6. Minimize Response Time: Enhance the responsiveness
of interactive systems.

Levels of Scheduling
Process scheduling can be categorized based on the stage of
the process lifecycle:
1. Long-Term Scheduling
 Decides which processes are admitted into the system
for processing.
 Controls the degree of multiprogramming (number of
processes in memory).
 Determines the balance between I/O-bound and CPU-
bound processes.
2. Medium-Term Scheduling
 Temporarily removes processes from memory
(swapping) to reduce system load.
 Reintroduces swapped-out processes later when
resources become available.
3. Short-Term Scheduling (CPU Scheduling)
 Selects which process from the ready queue will execute
next on the CPU.
 Happens frequently, as it directly controls the execution
of processes.

Process States and Scheduling


Process scheduling moves processes between the following
states:
 Ready State: Processes waiting in the ready queue for
CPU time.
 Running State: Process currently being executed by the
CPU.
 Waiting State: Process waiting for an event (e.g., I/O
completion) before returning to the ready state.

Types of Scheduling Algorithms


Scheduling algorithms determine how processes are
prioritized for execution. They can be classified as non-
preemptive or preemptive:
Non-Preemptive Scheduling Algorithms
1. First-Come, First-Served (FCFS):
o Processes are executed in the order they arrive.
o Simple but can lead to the convoy effect (long
waiting times for short processes).
2. Shortest Job Next (SJN):
o Executes the process with the shortest CPU burst
time.
o Efficient for batch systems but may cause starvation
for longer processes.
Preemptive Scheduling Algorithms
3. Round-Robin (RR):
o Each process gets a fixed time slice (time quantum) in turn; a small trace is sketched after this list.
o Ensures fairness and responsiveness, especially in interactive systems.
4. Priority Scheduling:
o Processes are prioritized based on their
importance.
o Can be preemptive or non-preemptive; starvation
can occur without aging.
5. Shortest Remaining Time First (SRTF):
o Preemptive version of SJN.
o CPU switches to the process with the shortest
remaining burst time.
6. Multilevel Queue Scheduling:
o Divides the ready queue into multiple queues
based on priority or process type.
o Each queue has its own scheduling algorithm.
7. Multilevel Feedback Queue Scheduling:
o Processes can move between queues based on
their behavior or priority.
o Adapts to process needs, reducing starvation.
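As promised above, here is a toy round-robin trace in C. The burst times and the quantum of 2 are invented, and all processes are assumed to arrive at time 0:

#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};            /* remaining CPU time per process */
    int n = 3, quantum = 2, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;          /* already finished */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            clock += slice;                       /* run for one time slice */
            burst[i] -= slice;
            printf("t=%2d  P%d ran %d unit(s)%s\n", clock, i + 1, slice,
                   burst[i] == 0 ? "  <- finished" : "");
            if (burst[i] == 0) done++;
        }
    }
    return 0;
}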

Real-Time Scheduling
Used in systems with strict timing constraints, such as
embedded or real-time systems:
 Hard Real-Time Scheduling: Ensures deadlines are
always met.
 Soft Real-Time Scheduling: Aims to meet deadlines but
is not critical.

7. Explain multithreaded programming in operating systems
Multithreading is a feature in operating systems that allows
a program to do several tasks at the same time. Think of it
like having multiple hands working together to complete
different parts of a job faster. Each “hand” is called a thread,
and they help make programs run more efficiently.
Multithreading makes your computer work better by using its
resources more effectively, leading to quicker and smoother
performance for applications like web browsers, games, and
many other programs you use every day.

The concept of multithreading requires a proper understanding of two terms: a process and a thread. A process is a program being executed. A process can be further divided into independent units of execution known as threads. A thread is like a small, lightweight process within a process; equivalently, a process can be viewed as a collection of threads.

Threading is used widely in almost every field, most visibly on the internet, where systems must handle many concurrent activities such as transaction processing of every type: recharges, online transfers, banking, and so on. Threading divides a program's work into small parts that are very lightweight and place little burden on CPU and memory, so they can be scheduled easily and complete their work efficiently. The concept of threading arose from the need to keep up with fast, continual changes in technology and to make better use of hardware; it enhances the capability of programming by letting a single program make progress on several tasks at once.

8. Explain threads in UNIX operating systems

In UNIX operating systems, threads are the smallest units of execution within a process. They share the process's resources such as memory, file descriptors, and code but have their own execution contexts like registers, stack, and program counter. Threads enable multitasking within a single process, improving performance and responsiveness.

Key Characteristics of Threads in UNIX


1. Shared Resources:
o Threads in a process share global variables,
memory, open files, and signal handlers.
o Each thread has its own stack, program counter,
and register set.
2. Lightweight:
o Threads are more lightweight than processes as
they avoid the overhead of resource duplication
during context switching.
3. Concurrency:
o Threads allow concurrent execution within a
process, enabling tasks like computation, I/O, and
user interaction to run in parallel.
Thread Models in UNIX
1. User-Level Threads:
o Managed by a user-space thread library.
o Kernel is unaware of their existence.
o Advantages:
 Lightweight and fast.
o Disadvantages:
 One thread's blocking system call can block the
entire process.
2. Kernel-Level Threads:
o Managed by the kernel, with each thread
represented by a kernel structure.
o Advantages:
 True parallelism on multicore systems.
o Disadvantages:
 Higher overhead due to kernel intervention.
3. Hybrid Model:
o Combines user-level and kernel-level threads,
mapping many user threads to kernel threads.
o Examples:
 Solaris supports this model.

Thread Management in UNIX

Creating Threads
Threads are created using pthread_create in UNIX.
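A minimal sketch (compile with cc -pthread; the worker function and its argument are illustrative): two threads are created with pthread_create, and the main thread waits for both with pthread_join before the process exits.

#include <pthread.h>
#include <stdio.h>

/* Each thread runs worker(); the argument distinguishes them. */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;                       /* equivalent to pthread_exit(NULL) */
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);            /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}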

Thread Termination
Threads can terminate using:
 pthread_exit: Explicitly exits the calling thread.
 return: Exits the thread function.
Synchronization
Synchronization is necessary when threads access
shared resources:
 Mutex (pthread_mutex_lock, pthread_mutex_unlock):
o Ensures mutual exclusion.
 Semaphores (sem_wait, sem_post):
o Controls access to a resource by multiple threads.
 Condition Variables (pthread_cond_wait,
pthread_cond_signal):
o Used for signaling between threads.
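As a sketch of mutex-based mutual exclusion (the counter and iteration count are arbitrary), two threads increment a shared counter inside a critical section; without the lock the final value would be unpredictable due to a race condition:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                              /* shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* enter critical section */
        counter++;                             /* safe: one thread at a time */
        pthread_mutex_unlock(&lock);           /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);        /* always 200000 with the mutex */
    return 0;
}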

Challenges of Thread Programming in UNIX


1. Race Conditions:
o Occur when multiple threads access shared
resources without proper synchronization.
2. Deadlocks:
o Threads waiting indefinitely for resources held by
each other.
3. Debugging Complexity:
o Multithreaded programs are harder to debug due
to non-deterministic behavior.
4. System Call Blocking:
o In user-level threads, a blocking system call can block all threads in the process.

9. Compare UNIX and Windows operating systems
