OS UNIT 2
1. Process States
A process undergoes different states during its execution lifecycle:
New: The process is being created.
Ready: The process is loaded into main memory and waiting for
CPU allocation.
Running: The process is currently being executed by the CPU.
Waiting (Blocked): The process is waiting for some I/O
operation or event to complete.
Terminated: The process has finished execution or is aborted.
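As a rough sketch (the structure and field names below are illustrative, not any real kernel's definitions), these states can be modeled in C as an enum stored in a process descriptor:

    /* Illustrative only: not an actual kernel definition. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int pid;
        enum proc_state state;
    };

    /* Example transition: a running process requests I/O and blocks. */
    void block_for_io(struct pcb *p) {
        if (p->state == RUNNING)
            p->state = WAITING;      /* Running -> Waiting (Blocked) */
    }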
3. Process Scheduling
The OS uses scheduling algorithms to decide which process to run
next. Types of scheduling:
Long-term scheduling: Decides which processes to admit into
the system.
Short-term scheduling: Determines which process gets CPU
time (e.g., round-robin, priority scheduling).
Medium-term scheduling: Temporarily removes processes from
memory to reduce the load (swapping).
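As a rough illustration of short-term scheduling, a round-robin scheduler can be sketched as cycling through a fixed table of ready processes (the table size and helper function below are hypothetical):

    #define NPROC 4

    static int current = -1;         /* index of the last process run */

    /* Round-robin: pick the next ready process in circular order.
       ready[i] is 1 if process i is ready, 0 otherwise. */
    int pick_next(int ready[NPROC]) {
        for (int i = 1; i <= NPROC; i++) {
            int candidate = (current + i) % NPROC;
            if (ready[candidate]) {
                current = candidate;
                return candidate;
            }
        }
        return -1;                   /* nothing is ready */
    }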
4. Process Operations
Processes can be manipulated through various operations:
Process Creation: Initiated by a system call like fork() in Unix. A
parent process creates child processes.
Process Termination: A process ends via the exit() system call or is
terminated because of an error.
Process Hierarchy: Parent-child relationship forms a tree of
processes. Child processes may inherit resources from parents.
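A minimal, runnable example of these operations on a Unix-like system: the parent creates a child with fork(), the child terminates with exit(), and the parent waits for it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* process creation */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                  /* child process */
            printf("child %d running\n", (int)getpid());
            exit(0);                     /* process termination */
        }
        wait(NULL);                      /* parent waits for the child */
        printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
        return 0;
    }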
6. Threads
A thread is a lightweight unit of execution within a process. While the
process is the container, threads represent individual tasks within it.
Threads share the process's resources but execute independently.
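A small sketch (names are illustrative) showing that threads share the process's address space: both threads read the same global variable set by main before they start. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    int shared_value;                    /* one copy, visible to all threads */

    void *worker(void *arg) {
        /* Each thread sees the same global set by main(). */
        printf("thread %ld sees shared_value = %d\n", (long)arg, shared_value);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        shared_value = 42;               /* written once before threads start */
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }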
7. Context Switching
When the CPU switches from executing one process to another, the
OS saves the current process's state in its Process Control Block (PCB)
and loads the saved state of the next process. This is known as context
switching, and it incurs some overhead because no useful work is done
while states are being saved and restored.
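Conceptually (this is a pedagogical sketch with hypothetical fields, not real kernel code), a context switch copies the CPU registers into the outgoing process's PCB and restores them from the incoming one:

    /* Illustrative only: real kernels do this in assembly. */
    struct cpu_regs { unsigned long pc, sp, gpr[16]; };

    struct pcb {
        int pid;
        struct cpu_regs regs;            /* saved CPU context */
    };

    /* Save the outgoing process's state, load the incoming one's.
       The save/restore work (plus cache and TLB effects) is the
       context-switch overhead. */
    void context_switch(struct pcb *out, struct pcb *in, struct cpu_regs *cpu) {
        out->regs = *cpu;                /* save state into the PCB */
        *cpu = in->regs;                 /* restore the next process's state */
    }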
8. Process Synchronization
When processes share resources, synchronization ensures correct
execution. Mechanisms include:
Semaphores and Mutexes: Prevent race conditions.
Monitors: High-level synchronization construct.
Critical Section: Part of the code where shared resources are
accessed.
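A minimal POSIX sketch (Linux, compile with -pthread; names are illustrative): a binary semaphore with initial value 1 guards the critical section so that only one thread updates the shared counter at a time.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t lock;                          /* binary semaphore used as a mutex */
    int counter;                         /* shared resource */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&lock);             /* enter critical section */
            counter++;                   /* access the shared resource */
            sem_post(&lock);             /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&lock, 0, 1);           /* initial value 1 => mutual exclusion */
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %d\n", counter);   /* 200000, no lost updates */
        sem_destroy(&lock);
        return 0;
    }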
9. Deadlocks
A deadlock is a situation in which two or more processes wait indefinitely
for resources held by each other. The OS handles deadlocks with methods
such as deadlock prevention, avoidance (e.g., the Banker's Algorithm), or
detection and recovery.
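For illustration, a classic deadlock occurs when two threads each hold one mutex and wait for the other's. The sketch below (illustrative names, compile with -pthread) applies deadlock prevention by making both threads acquire the locks in the same order:

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

    /* Both threads lock res_a first, then res_b.  If one thread instead
       locked res_b first, each could end up holding one resource while
       waiting forever for the other: a deadlock. */
    void *task(void *name) {
        pthread_mutex_lock(&res_a);
        pthread_mutex_lock(&res_b);
        printf("%s holds both resources\n", (char *)name);
        pthread_mutex_unlock(&res_b);
        pthread_mutex_unlock(&res_a);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task, "thread 1");
        pthread_create(&t2, NULL, task, "thread 2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }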
1. Job Queue
Contains all processes that are submitted to the system.
Represents the pool of processes awaiting admission
into the system.
2. Ready Queue
Contains all processes that are ready to execute and are
waiting for CPU time.
Managed by the short-term scheduler.
Implemented using data structures like linked lists,
circular queues, or priority queues.
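One possible sketch of a FIFO ready queue built from a singly linked list of PCB nodes (structure and field names are illustrative):

    #include <stdlib.h>

    struct node {                        /* one entry per ready process */
        int pid;
        struct node *next;
    };

    struct ready_queue { struct node *head, *tail; };

    /* Add a process to the tail of the ready queue. */
    void enqueue(struct ready_queue *q, int pid) {
        struct node *n = malloc(sizeof *n);
        n->pid = pid;
        n->next = NULL;
        if (q->tail) q->tail->next = n; else q->head = n;
        q->tail = n;
    }

    /* Short-term scheduler: remove and return the process at the head. */
    int dequeue(struct ready_queue *q) {
        if (!q->head) return -1;         /* queue is empty */
        struct node *n = q->head;
        int pid = n->pid;
        q->head = n->next;
        if (!q->head) q->tail = NULL;
        free(n);
        return pid;
    }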
3. Device Queue
Stores processes waiting for access to specific I/O
devices (e.g., disk, printer).
Each device may have its own queue, such as a disk
queue or a printer queue.
5. Priority Queue
Processes are assigned priorities, and the queue ensures
higher-priority processes are executed first.
Used in priority scheduling algorithms.
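A minimal sketch of the selection step in priority scheduling (here a larger number means higher priority; all names are illustrative):

    /* Return the index of the highest-priority ready process, or -1 if
       nothing is ready.  priority[i] is the priority of process i and
       ready[i] is 1 if process i is in the ready queue. */
    int pick_highest_priority(const int priority[], const int ready[], int n) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (ready[i] && (best == -1 || priority[i] > priority[best]))
                best = i;
        }
        return best;
    }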
6. Multilevel Queue
The ready queue is divided into multiple queues based
on specific criteria (e.g., foreground vs. background
processes).
Each queue has its own scheduling policy, and the OS
decides how to allocate CPU time between queues.
Levels of Scheduling
Process scheduling can be categorized based on the stage of
the process lifecycle:
1. Long-Term Scheduling
Decides which processes are admitted into the system
for processing.
Controls the degree of multiprogramming (number of
processes in memory).
Determines the balance between I/O-bound and CPU-
bound processes.
2. Medium-Term Scheduling
Temporarily removes processes from memory
(swapping) to reduce system load.
Reintroduces swapped-out processes later when
resources become available.
3. Short-Term Scheduling (CPU Scheduling)
Selects which process from the ready queue will execute
next on the CPU.
Happens frequently, as it directly controls the execution
of processes.
Real-Time Scheduling
Used in systems with strict timing constraints, such as
embedded or real-time systems:
Hard Real-Time Scheduling: Ensures deadlines are
always met.
Soft Real-Time Scheduling: Aims to meet deadlines, but
occasionally missing one is tolerable.
Creating Threads
Threads are created with the POSIX pthread_create() call on UNIX-like systems.
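A minimal example of creating a thread with pthread_create and waiting for it with pthread_join (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    void *say_hello(void *arg) {
        printf("hello from thread %ld\n", (long)arg);
        return NULL;                     /* returning also terminates the thread */
    }

    int main(void) {
        pthread_t tid;
        /* Create a thread that runs say_hello with argument 1. */
        if (pthread_create(&tid, NULL, say_hello, (void *)1L) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(tid, NULL);         /* wait for the thread to finish */
        return 0;
    }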
Thread Termination
Threads can terminate using:
pthread_exit: Explicitly exits the calling thread.
return: Returning from the thread's start routine also terminates the calling thread.
Synchronization
Synchronization is necessary when threads access
shared resources:
Mutex (pthread_mutex_lock, pthread_mutex_unlock): Ensures mutual exclusion.
Semaphores (sem_wait, sem_post): Control access to a resource shared by multiple threads.
Condition Variables (pthread_cond_wait, pthread_cond_signal): Used for signaling between threads.
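The sketch below (illustrative names, compile with -pthread) combines a mutex and a condition variable: one thread waits on the condition while another sets a shared flag and signals it.

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t ready_cond = PTHREAD_COND_INITIALIZER;
    int data_ready = 0;                  /* shared state protected by m */

    void *consumer(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m);
        while (!data_ready)              /* re-check after every wakeup */
            pthread_cond_wait(&ready_cond, &m);  /* releases m while waiting */
        printf("consumer: data is ready\n");
        pthread_mutex_unlock(&m);
        return NULL;
    }

    void *producer(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m);          /* mutual exclusion on shared state */
        data_ready = 1;
        pthread_cond_signal(&ready_cond); /* wake a waiting thread */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void) {
        pthread_t c, p;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(c, NULL);
        pthread_join(p, NULL);
        return 0;
    }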