Operating System
A brief introduction to OS
This study material gives a brief introduction to operating systems and their
various applications to the real world.
By
Hitesh Mahapatra
2/20/2010
Operating System
Hitesh Mahapatra
(B.E (I.T), M.Tech (CSE))
Chapter 1: Introduction
Distributed Systems
• Distribute the computation among several physical processors.
• Loosely coupled system – each processor has its own local memory;
processors communicate with one another through various
communications lines, such as high-speed buses or telephone lines.
• Advantages of distributed systems.
o Resource sharing
o Computation speed up – load sharing
o Reliability
o Communications
• Requires networking infrastructure.
• Local area networks (LAN) or Wide area networks (WAN)
• May be either client-server or peer-to-peer systems.
General Structure of Client-Server
Clustered Systems
• Clustering allows two or more systems to share storage.
• Provides high reliability.
• Asymmetric clustering: one server runs the application while the other
servers stand by.
• Symmetric clustering: all N hosts are running the application
Real-Time Systems
• Often used as a control device in a dedicated application such as
controlling scientific experiments, medical imaging systems,
industrial control systems, and some display systems.
• Well-defined fixed-time constraints.
• Real-Time systems may be either hard or soft real-time.
• Hard real-time:
o Secondary storage limited or absent; data stored in short-term
memory or read-only memory (ROM)
o Conflicts with time-sharing systems, not supported by general-
purpose operating systems.
• Soft real-time
o Limited utility in industrial control of robotics
o Useful in applications (multimedia, virtual reality) requiring
advanced operating-system features.
Handheld Systems
• Personal Digital Assistants (PDAs)
• Cellular telephones
• Issues:
o Limited memory
o Slow processors
o Small display screens.
Computing Environments
• Traditional computing
• Web-Based Computing
• Embedded Computing
Migration of Operating-System Concepts and Features
Chapter 4: Processes
Process Concept
Process Scheduling
Operations on Processes
Cooperating Processes
Interprocess Communication
Communication in Client-Server Systems
Process Concept
■ An operating system executes a variety of programs:
✦ Batch system – jobs
✦ Time-shared systems – user programs or tasks
■ Textbook uses the terms job and process almost interchangeably.
■ Process – a program in execution; process execution must progress in
sequential fashion.
■ A process includes:
✦ program counter
✦ stack
✦ data section
Process State
■ As a process executes, it changes state
✦ new: The process is being created.
✦ running: Instructions are being executed.
✦ waiting: The process is waiting for some event to occur.
✦ ready: The process is waiting to be assigned to a processor.
✦ terminated: The process has finished execution.
Diagram of Process State
Context Switch
• When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process.
• Context-switch time is overhead; the system does no useful work
while switching.
• Time dependent on hardware support.
Process Creation
• Parent process creates children processes, which, in turn, create other
processes, forming a tree of processes.
• Resource sharing
o Parent and children share all resources.
o Children share subset of parent’s resources.
o Parent and child share no resources.
• Execution
o Parent and children execute concurrently.
o Parent waits until children terminate.
• Address space
o Child duplicate of parent.
o Child has a program loaded into it.
• UNIX examples
o fork system call creates a new process
o exec system call used after a fork to replace the process’
memory space with a new program (see the sketch below).
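A minimal sketch of the fork/exec pattern described above, assuming a POSIX
environment; the program being exec'ed (/bin/ls) is only an illustration:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* create a child process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child: replace its memory space with a new program */
        execlp("/bin/ls", "ls", NULL);
        perror("execlp");            /* reached only if exec fails */
        exit(1);
    } else {                         /* parent: wait for the child to terminate */
        wait(NULL);
        printf("Child complete\n");
    }
    return 0;
}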
Processes Tree on a UNIX System
Process Termination
• Process executes last statement and asks the operating system to
delete it (exit).
o Output data from child to parent (via wait).
o Process’ resources are deallocated by operating system.
• Parent may terminate execution of children processes (abort).
o Child has exceeded allocated resources.
o Task assigned to child is no longer required.
o Parent is exiting.
 Some operating systems do not allow a child to continue if its
parent terminates; in that case all of its children are terminated
as well (cascading termination).
Cooperating Processes
• Independent process cannot affect or be affected by the execution of
another process.
• Cooperating process can affect or be affected by the execution of
another process
• Advantages of process cooperation
o Information sharing
o Computation speed-up
o Modularity
o Convenience
Producer-Consumer Problem
• Paradigm for cooperating processes: a producer process produces
information that is consumed by a consumer process.
o unbounded-buffer places no practical limit on the size of the
buffer.
o bounded-buffer assumes that there is a fixed buffer size.
Threads
• Overview
• Multithreading Models
• Threading Issues
• Pthreads
• Solaris 2 Threads
• Windows 2000 Threads
• Linux Threads
• Java Threads
CPU Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Multiple-Processor Scheduling
Real-Time Scheduling
Algorithm Evaluation
Basic Concepts
• Maximum CPU utilization obtained with multiprogramming
• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU
execution and I/O wait.
• CPU burst distribution
Alternating Sequence of CPU and I/O Bursts
Histogram of CPU-burst Times
CPU Scheduler
• Selects from among the processes in memory that are ready to
execute, and allocates the CPU to one of them.
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready state.
4. Terminates.
• Scheduling under 1 and 4 is nonpreemptive.
• All other scheduling is preemptive.
Dispatcher
• Dispatcher module gives control of the CPU to the process selected by
the short-term scheduler; this involves:
✦ switching context
✦ switching to user mode
✦ jumping to the proper location in the user program to restart
that program
• Dispatch latency – time it takes for the dispatcher to stop one process
and start another running.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time
unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready
queue
• Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-
sharing environment)
Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
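Reconstructed from the burst times above (arrival order P1, P2, P3), the
schedule is:

|        P1        | P2 | P3 |
0                  24   27   30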
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
Suppose that the processes arrive in the order
P2 , P3 , P1 .
• The Gantt chart for the schedule is:
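Reconstructed in the same way for arrival order P2, P3, P1:

| P2 | P3 |        P1        |
0    3    6                  30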
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case.
• Convoy effect: short processes wait behind a long process.
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time.
• Two schemes:
✦ nonpreemptive – once the CPU is given to the process it cannot be
preempted until it completes its CPU burst.
✦ preemptive – if a new process arrives with CPU burst length
less than the remaining time of the currently executing process,
preempt. This scheme is known as
Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – gives minimum average waiting time for a given set
of processes.
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive)
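Reconstructed from the arrival and burst times above, the non-preemptive SJF
schedule is:

|    P1    | P3 |  P2  |  P4  |
0          7    8      12     16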
• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)
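Reconstructed from the same data, the preemptive (SRTF) schedule is:

| P1 | P2 | P3 | P2 |  P4  |     P1     |
0    2    4    5    7      11           16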
• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
Determining Length of Next CPU Burst
• Can only estimate the length.
• Can be done by using the length of previous CPU bursts, using
exponential averaging.
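For reference, the exponential average is usually written as follows (the
standard textbook formulation), where t(n) is the measured length of the nth
CPU burst, tau(n) is the predicted value for that burst, and 0 <= alpha <= 1:

tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)

With alpha = 0 recent history is ignored; with alpha = 1 only the most recent
burst counts; alpha = 1/2 weights recent and past history equally.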
Process Synchronization
Background
The Critical-Section Problem
Synchronization Hardware
Semaphores
Classical Problems of Synchronization
Critical Regions
Monitors
Synchronization in Solaris 2 & Windows 2000
Background
Concurrent access to shared data may result in data inconsistency.
Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes.
Shared-memory solution to the bounded-buffer problem (Chapter 4)
allows at most n – 1 items in the buffer at the same time. A solution
where all N buffers are used is not simple.
✦ Suppose that we modify the producer-consumer code by adding
a variable counter, initialized to 0 and incremented each time a
new item is added to the buffer
Bounded-Buffer
Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
Producer process
item nextProduced;
while (1) {
   /* produce an item in nextProduced */
   while (counter == BUFFER_SIZE)
      ; /* do nothing: buffer is full */
   buffer[in] = nextProduced;
   in = (in + 1) % BUFFER_SIZE;
   counter++;
}
Consumer process
item nextConsumed;
while (1) {
   while (counter == 0)
      ; /* do nothing: buffer is empty */
   nextConsumed = buffer[out];
   out = (out + 1) % BUFFER_SIZE;
   counter--;
   /* consume the item in nextConsumed */
}
The statements
counter++;
counter--;
must be performed atomically.
Atomic operation means an operation that completes in its entirety
without interruption.
The statement “counter++” may be implemented in machine language
as:
register1 = counter
register1 = register1 + 1
counter = register1
The statement “counter--” may be implemented as:
register2 = counter
register2 = register2 – 1
counter = register2
If both the producer and consumer attempt to update the buffer
concurrently, the assembly language statements may get interleaved.
Interleaving depends upon how the producer and consumer processes
are scheduled.
Assume counter is initially 5. One interleaving of statements is:
producer: register1 = counter (register1 = 5)
producer: register1 = register1 + 1 (register1 = 6)
consumer: register2 = counter (register2 = 5)
consumer: register2 = register2 – 1 (register2 = 4)
producer: counter = register1 (counter = 6)
consumer: counter = register2 (counter = 4)
The value of counter may be either 4 or 6, where the correct result
should be 5.
Race Condition
■ Race condition: the situation where several processes access and
manipulate shared data concurrently. The final value of the shared
data depends upon which process finishes last.
Semaphore Implementation
■ Define a semaphore as a record
typedef struct {
int value;
struct process *L;
} semaphore;
■ Assume two simple operations:
✦ block suspends the process that invokes it.
✦ wakeup(P) resumes the execution of a blocked process P.
Implementation
■ Semaphore operations now defined as
wait(S):
S.value--;
if (S.value < 0) {
add this process to S.L;
block;
}
signal(S):
S.value++;
if (S.value <= 0) {
remove a process P from S.L;
wakeup(P);
}
Semaphore as a General Synchronization Tool
■ Execute B in Pj only after A executed in Pi
■ Use semaphore flag initialized to 0
■ Code:
Pi:                    Pj:
   …                      …
   A                      wait(flag);
   signal(flag);          B
Deadlock and Starvation
■ Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes.
■ Let S and Q be two semaphores initialized to 1
P0                     P1
wait(S);               wait(Q);
wait(Q);               wait(S);
…                      …
signal(S);             signal(Q);
signal(Q);             signal(S);
■ Starvation – indefinite blocking. A process may never be removed
from the semaphore queue in which it is suspended.
Two Types of Semaphores
■ Counting semaphore – integer value can range over an unrestricted
domain.
■ Binary semaphore – integer value can range only between 0 and 1;
can be simpler to implement.
■ Can implement a counting semaphore S as a binary semaphore
Implementing S as a Binary Semaphore
■ Data structures:
binary-semaphore S1, S2;
int C;
■ Initialization:
S1 = 1
S2 = 0
C = initial value of semaphore S
Implementing S
■ wait operation
wait(S1);
C--;
if (C < 0) {
signal(S1);
wait(S2);
}
signal(S1);
■ signal operation
wait(S1);
C++;
if (C <= 0)
signal(S2);
else
signal(S1);
Classical Problems of Synchronization
■ Bounded-Buffer Problem
■ Readers and Writers Problem
■ Dining-Philosophers Problem
Bounded-Buffer Problem
■ Shared data
semaphore full, empty, mutex;
Initially:
full = 0, empty = n, mutex = 1
Bounded-Buffer Problem Producer Process
do {
   …
   produce an item in nextp
   …
   wait(empty);
   wait(mutex);
   …
   add nextp to buffer
   …
   signal(mutex);
   signal(full);
} while (1);
Bounded-Buffer Problem Consumer Process
do {
   wait(full);
   wait(mutex);
   …
   remove an item from buffer to nextc
   …
   signal(mutex);
   signal(empty);
   …
   consume the item in nextc
   …
} while (1);
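A minimal compilable sketch of the same scheme using POSIX threads and
semaphores; BUF_SIZE, the int item type, and the fixed iteration count are
illustrative assumptions rather than part of the original pseudocode:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 10

static int buffer[BUF_SIZE];
static int in = 0, out = 0;

static sem_t empty;   /* counts empty slots, initially BUF_SIZE */
static sem_t full;    /* counts full slots, initially 0 */
static sem_t mutex;   /* binary semaphore protecting the buffer, initially 1 */

static void *producer(void *arg)
{
    for (int i = 0; i < 100; i++) {
        int item = i;              /* produce an item */
        sem_wait(&empty);          /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;         /* add item to buffer */
        in = (in + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&full);           /* signal that a slot is filled */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);           /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];    /* remove item from buffer */
        out = (out + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&empty);          /* signal that a slot is free */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, BUF_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}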
Readers-Writers Problem
■ Shared data
semaphore mutex, wrt;
Initially
mutex = 1, wrt = 1, readcount = 0
Readers-Writers Problem Writer Process
wait(wrt);
   …
   writing is performed
   …
signal(wrt);
Readers-Writers Problem Reader Process
wait(mutex);
readcount++;
if (readcount == 1)
   wait(wrt);
signal(mutex);
   …
   reading is performed
   …
wait(mutex);
readcount--;
if (readcount == 0)
   signal(wrt);
signal(mutex);
Dining-Philosophers Problem
■ Shared data
semaphore chopstick[5];
Initially all values are 1
Dining-Philosophers Problem
■ Philosopher i:
do {
   wait(chopstick[i]);
   wait(chopstick[(i+1) % 5]);
   …
   eat
   …
   signal(chopstick[i]);
   signal(chopstick[(i+1) % 5]);
   …
   think
   …
} while (1);
Note that this simple solution can deadlock if every philosopher picks up
the left chopstick at the same time.
Critical Regions
■ High-level synchronization construct
■ A shared variable v of type T, is declared as:
v: shared T
■ Variable v accessed only inside statement
region v when B do S
where B is a boolean expression.
■ While statement S is being executed, no other process can access
variable v.
■ Regions referring to the same shared variable exclude each other in
time.
■ When a process tries to execute the region statement, the Boolean
expression B is evaluated. If B is true, statement S is executed. If it is
false, the process is delayed until B becomes true and no other process
is in the region associated with v.
Example – Bounded Buffer
■ Shared data:
struct buffer {
int pool[n];
int count, in, out;
}
Bounded Buffer Producer Process
■ Producer process inserts nextp into the shared buffer
region buffer when (count < n) {
   pool[in] = nextp;
   in = (in + 1) % n;
   count++;
}
Bounded Buffer Consumer Process
■ Consumer process removes an item from the shared buffer and puts it
in nextc
region buffer when (count > 0) {
   nextc = pool[out];
   out = (out + 1) % n;
   count--;
}
Implementation of region x when B do S
■ Associate with the shared variable x, the following variables:
semaphore mutex, first-delay, second-delay;
int first-count, second-count;
■ Mutually exclusive access to the critical section is provided by mutex.
■ If a process cannot enter the critical section because the Boolean
expression B is false, it initially waits on the first-delay semaphore;
moved to the second-delay semaphore before it is allowed to
reevaluate B.
Implementation
■ Keep track of the number of processes waiting on first-delay and
second-delay, with first-count and second-count respectively.
■ The algorithm assumes a FIFO ordering in the queuing of processes
for a semaphore.
■ For an arbitrary queuing discipline, a more complicated
implementation is required.
Monitors
■ High-level synchronization construct that allows the safe sharing of an
abstract data type among concurrent processes.
monitor monitor-name
{
   shared variable declarations
   procedure body P1 (…) { ... }
   procedure body P2 (…) { ... }
   …
   procedure body Pn (…) { ... }
   { initialization code }
}
■ To allow a process to wait within the monitor, a condition variable
must be declared, as
condition x, y;
■ Condition variable can only be used with the operations wait and
signal.
✦ The operation
x.wait();
means that the process invoking this operation is suspended until another
process invokes
x.signal();
✦ The x.signal operation resumes exactly one suspended process.
If no process is suspended, then the signal operation has no
effect.
Schematic View of a Monitor
Monitor With Condition Variables
Dining Philosophers Example
monitor dp
{
enum {thinking, hungry, eating} state[5];
condition self[5];
void pickup(int i) // following slides
void putdown(int i) // following slides
void test(int i) // following slides
void init() {
for (int i = 0; i < 5; i++)
state[i] = thinking; }}
Dining Philosophers
void pickup(int i) {
   state[i] = hungry;
   test(i);
   if (state[i] != eating)
      self[i].wait();
}
void putdown(int i) {
state[i] = thinking;
// test left and right neighbors
test((i+4) % 5);
test((i+1) % 5); }
Dining Philosophers
void test(int i) {
   if ((state[(i + 4) % 5] != eating) &&
       (state[i] == hungry) &&
       (state[(i + 1) % 5] != eating)) {
      state[i] = eating;
      self[i].signal();
   }
}
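A philosopher i would then use the monitor as follows (a usage sketch; the
enclosing think/eat loop is assumed rather than taken from the original):

dp.pickup(i);
   …
   eat
   …
dp.putdown(i);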
Monitor Implementation Using Semaphores
■ Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next-count = 0;
■ Each external procedure F will be replaced by
wait(mutex);
…
body of F;
…
if (next-count > 0)
signal(next)
else
signal(mutex);
■ Mutual exclusion within a monitor is ensured.
Monitor Implementation
■ For each condition variable x, we have:
semaphore x-sem; // (initially = 0)
int x-count = 0;
■ The operation x.wait can be implemented as:
x-count++;
if (next-count > 0)
signal(next);
else
signal(mutex);
wait(x-sem);
x-count--;
■ The operation x.signal can be implemented as:
if (x-count > 0) {
next-count++;
signal(x-sem);
wait(next);
next-count--;
}
■ Conditional-wait construct: x.wait(c);
✦ c – integer expression evaluated when the wait operation is
executed.
✦ value of c (a priority number) stored with the name of the
process that is suspended.
✦ when x.signal is executed, process with smallest associated
priority number is resumed next.
■ Check two conditions to establish correctness of system:
✦ User processes must always make their calls on the monitor in a
correct sequence.
✦ Must ensure that an uncooperative process does not ignore the
mutual-exclusion gateway provided by the monitor, and try to
access the shared resource directly, without using the access
protocols.
Solaris 2 Synchronization
■ Implements a variety of locks to support multitasking, multithreading
(including real-time threads), and multiprocessing.
■ Uses adaptive mutexes for efficiency when protecting data from short
code segments.
■ Uses condition variables and readers-writers locks when longer
sections of code need access to data.
■ Uses turnstiles to order the list of threads waiting to acquire either an
adaptive mutex or reader-writer lock.
Windows 2000 Synchronization
■ Uses interrupt masks to protect access to global resources on
uniprocessor systems.
■ Uses spinlocks on multiprocessor systems.
■ Also provides dispatcher objects, which may act as either mutexes or
semaphores.
■ Dispatcher objects may also provide events. An event acts much like a
condition variable.
Unit 3--Chapter 8: Deadlocks
The Deadlock Problem
■ A set of blocked processes each holding a resource and waiting to
acquire a resource held by another process in the set.
■ Example
✦ System has 2 tape drives.
✦ P1 and P2 each hold one tape drive and each needs another one.
■ Example
✦ semaphores A and B, initialized to 1
P0                     P1
wait(A);               wait(B);
wait(B);               wait(A);
Resource-Allocation Graph (notation used in the figures that follow)
* Process: drawn as a circle.
* Resource type with 4 instances: drawn as a rectangle with one dot per
instance.
* Pi requests an instance of Rj: request edge Pi → Rj.
* Pi is holding an instance of Rj: assignment edge Rj → Pi.
Example of a Resource Allocation Graph
Resource Allocation Graph With A Deadlock
Resource Allocation Graph With A Cycle But No Deadlock
Basic Facts
■ If graph contains no cycles ⇒ no deadlock.
■ If graph contains a cycle ⇒
✦ if only one instance per resource type, then deadlock.
✦ if several instances per resource type, possibility of deadlock.
Methods for Handling Deadlocks
■ Ensure that the system will never enter a deadlock state.
■ Allow the system to enter a deadlock state and then recover.
■ Ignore the problem and pretend that deadlocks never occur in the
system; used by most operating systems, including UNIX.
Deadlock Prevention
Restrain the ways request can be made.
■ Mutual Exclusion – not required for sharable resources; must hold
for nonsharable resources.
■ Hold and Wait – must guarantee that whenever a process requests a
resource, it does not hold any other resources.
✦ Require process to request and be allocated all its resources
before it begins execution, or allow process to request resources
only when the process has none.
✦ Low resource utilization; starvation possible.
■ No Preemption –
✦ If a process that is holding some resources requests another
resource that cannot be immediately allocated to it, then all
resources currently being held are released.
✦ Preempted resources are added to the list of resources for which
the process is waiting.
✦ Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
■ Circular Wait – impose a total ordering of all resource types, and
require that each process requests resources in an increasing order of
enumeration.
Deadlock Avoidance
Requires that the system has some additional a priori information
available.
■ Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need.
■ The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a circular-
wait condition.
■ Resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.
Safe State
■ When a process requests an available resource, system must decide if
immediate allocation leaves the system in a safe state.
■ System is in safe state if there exists a safe sequence of all processes.
■ Sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi
can still request can be satisfied by the currently available resources plus
the resources held by all the Pj, with j < i.
✦ If Pi resource needs are not immediately available, then Pi can
wait until all Pj have finished.
✦ When Pj is finished, Pi can obtain needed resources, execute,
return allocated resources, and terminate.
✦ When Pi terminates, Pi+1 can obtain its needed resources, and
so on.
Basic Facts
■ If a system is in safe state ⇒ no deadlocks.
■ If a system is in unsafe state ⇒ possibility of deadlock.
■ Avoidance ⇒ ensure that a system will never enter an unsafe state.
Resource-Allocation Graph Algorithm
■ Claim edge Pi → Rj indicates that process Pi may request resource Rj;
represented by a dashed line.
■ Claim edge converts to request edge when a process requests a
resource.
■ When a resource is released by a process, assignment edge reconverts
to a claim edge.
■ Resources must be claimed a priori in the system.
Safe, Unsafe , Deadlock State
Resource-Allocation Graph For Deadlock Avoidance
Unsafe State In Resource-Allocation Graph
Banker’s Algorithm
■ Multiple instances.
■ Each process must a priori claim maximum use.
■ When a process requests a resource it may have to wait.
■ When a process gets all its resources it must return them in a finite
amount of time.
Data Structures for the Banker’s Algorithm
Let n = number of processes, and m = number of resources types.
■ Available: Vector of length m. If available [j] = k, there are k
instances of resource type Rj available.
■ Max: n x m matrix. If Max [i,j] = k, then process Pi may request at
most k instances of resource type Rj.
■ Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently
allocated k instances of Rj.
■ Need: n x m matrix. If Need[i,j] = k, then Pi may need k more
instances of Rj to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 1, 2, …, n.
2. Find an i such that both:
(a) Finish[i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
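A minimal sketch of this safety algorithm in C, assuming NPROC processes and
NRES resource types and that avail, alloc, and need have already been filled
in from Available, Allocation, and Need; the function itself is illustrative:

#include <stdbool.h>

#define NPROC 5
#define NRES  3

/* Returns true if the state described by avail, alloc, and need is safe. */
bool is_safe(int avail[NRES], int alloc[NPROC][NRES], int need[NPROC][NRES])
{
    int work[NRES];
    bool finish[NPROC] = { false };

    for (int j = 0; j < NRES; j++)             /* step 1: Work = Available */
        work[j] = avail[j];

    for (;;) {
        bool progressed = false;
        for (int i = 0; i < NPROC; i++) {      /* step 2: find i with Finish[i] == false and Need_i <= Work */
            if (finish[i])
                continue;
            bool fits = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < NRES; j++) /* step 3: Work = Work + Allocation_i */
                    work[j] += alloc[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
        if (!progressed)                       /* no such i exists: go to step 4 */
            break;
    }

    for (int i = 0; i < NPROC; i++)            /* step 4: safe iff Finish[i] is true for all i */
        if (!finish[i])
            return false;
    return true;
}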
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi
wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition,
since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the
resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as
follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi.
• If unsafe ⇒ Pi must wait, and the old resource-allocation
state is restored
Example of Banker’s Algorithm
■ 5 processes P0 through P4; 3 resource types: A (10 instances),
B (5 instances), and C (7 instances).
■ Snapshot at time T0:
        Allocation   Max      Available
        A B C        A B C    A B C
   P0   0 1 0        7 5 3    3 3 2
   P1   2 0 0        3 2 2
   P2   3 0 2        9 0 2
   P3   2 1 1        2 2 2
   P4   0 0 2        4 3 3
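Computing Need = Max – Allocation for this snapshot gives:

        Need
        A B C
   P0   7 4 3
   P1   1 2 2
   P2   6 0 0
   P3   0 1 1
   P4   4 3 1

Applying the safety algorithm, the system is in a safe state: for example, the
sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.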