
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

OPERATING SYSTEMS II

By OGUNTUNDE, B.O (PHD)


REDEEMER’S UNIVERSITY, EDE.
OUTLINE
• The following topics should be treated in detail:
• Process management
• Memory Management
• I/O management
• File management

• TEXT:
Modern Operating Systems, 2nd edition, by Andrew S. Tanenbaum. Prentice-Hall International.
INTRODUCTION
• An operating system is a program that acts as
an interface between the user and the
computer hardware and controls the
execution of all kinds of programs.
OPERATING SYSTEM
Important functions
• Memory Management
• Processor Management
• Device Management
• File Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and
users
Operating System ─ Types
• Batch Operating System: programs /jobs with
similar requirements sorted into batches.
• Problems: Lack of interaction between the
user and the job.
• CPU is often idle, because the speed of the
mechanical I/O devices is slower than the
CPU.
• Difficult to provide the desired priority.
Time-sharing Operating Systems
• enables many people, located at various terminals, to
use a particular computer system at the same time.
Time-sharing/multitasking is a logical extension
of multiprogramming.
• Advantages :
– quick response
– Avoids duplication of software
– Reduces CPU idle time
• Disadvantages
– Problem of reliability
– Question of security and integrity of user programs and
data
– Problem of data communication
Distributed Operating System
• multiple central processors to serve multiple real-time
applications and multiple users
• processors communicate with one another through various
communication lines (such as high-speed buses or telephone lines)
• advantages:
• With resource sharing facility, a user at one site may be able
to use the resources available at another.
• Speedup the exchange of data with one another via electronic
mail.
• If one site fails in a distributed system, the remaining sites can
potentially continue operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing
Network Operating System
• runs on a server and provides the server the capability to
manage data, users, groups, security, applications, and
other networking functions.
• primary purpose: to allow shared file and printer access
among multiple computers in a network, typically a LAN, a
private network or to other networks.
• advantages:
– Centralized servers are highly stable.
– Security is server managed.
– Upgrades to new technologies and hardware can be easily
integrated into the system.
– Remote access to servers is possible from different locations
and types of systems.
• disadvantages:
– High cost of buying and running a server.
– Dependency on a central location for most operations.
– Regular maintenance and updates are required.
Real-Time Operating System
• data processing system in which the time interval required
to process and respond to inputs is so small that it controls
the environment.
• types
• Hard real-time systems: guarantee that critical tasks
complete on time. In hard real-time systems, secondary
storage is limited or missing and the data is stored in ROM.
In these systems, virtual memory is almost never found.
• Soft real-time systems: are less restrictive. A critical real-
time task gets priority over other tasks and retains the
priority until it completes.
• Soft real-time systems have more limited utility than hard
real-time systems. Examples: multimedia, virtual reality, and
advanced scientific projects such as undersea exploration and
planetary rovers.
Operating System ─ Services
• Operating System provides services to both the users
and to the programs.
– It provides programs an environment to execute.
– It provides users the services to execute the programs in a
convenient manner.
• common services provided by an operating system:
• Program execution
• I/O operations
• File System manipulation
• Communication
• Error Detection
• Resource Allocation
• Protection
Program Execution
• Activities
– Loads a program into memory
– Executes the program
– Handles program's execution
– Provides a mechanism for process synchronization
– Provides a mechanism for process communication
– Provides a mechanism for deadlock handling
I/O Operation
• Operating System manages the
communication between user and device
drivers.
• I/O operation means read or write
operation with any file or any specific I/O
device.
– Operating system provides the access to the
required I/O device when required.
File System Manipulation
• A program needs to read a file or write a file.
• The operating system gives the program permission to operate on the file.
• Permissions vary: read-only, read-write, denied, and so on.
• The operating system provides an interface to the user to create/delete files.
• The operating system provides an interface to the user to create/delete directories.
• The operating system provides an interface to create backups of the file system.
Communication
• Two processes often require data to be
transferred between them.
• The two processes can be on the same computer, or on
different computers connected through a computer
network.
• Communication may be implemented by two
methods, either by Shared Memory or by
Message Passing.
Error Handling
• The OS constantly checks for possible errors.
• The OS takes an appropriate action to ensure
correct and consistent computing
Resource Management
• The OS manages all kinds of resources using
schedulers.
• CPU scheduling algorithms are used for better
utilization of CPU.
Protection
• The OS ensures that all access to system
resources is controlled.
• The OS ensures that external I/O devices are
protected from invalid access attempts.
• The OS provides authentication features for
each user by means of passwords.
PROCESS MANAGEMENT
OS Activities
• Creation and deletion of user and system
processes.
• Suspension and resumption of processes.
• A mechanism for process synchronization.
• A mechanism for process communication.
• A mechanism for deadlock handling.
Process
• Process is
• A program in execution
• A program instance that has been loaded into
memory and managed by the OS.
Content
• Current value of the Program Counter (PC)
• Contents of the processor's registers
• Values of the variables
• The process stack (SP), which typically contains
temporary data such as subroutine parameters,
return addresses, and temporary variables
• A data section that contains global variables.
Process and program
• A process is active; a program is passive
• A process is dynamic; a program is static
• A process is an activity; the program is like the
recipe for that activity (the recipe is the program,
the cooking is the process)
Process creation
• System initialization : at booting
• Calls by a running process
• A user requests to create a new process
• Initiation of a batch job
Process Termination
• Normal exit (voluntary) finish its work
• Error exit (voluntary) discovers an error
• Fatal error(involuntary): the process itself
caused an error
• Killed by another process (involuntary)
Process State
Possible Transitions
• Scheduler dispatches
• I/O or event wait
• Interrupt
• I/O or event completion
Process Implementation
• OS maintains a data structure Process Control
Block (PCB)
• Process state:
• Program counter
• Memory allocation
• Status of its opened files
• Accounting information
• I/O status Information
Process Control Block
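As an illustration of the fields listed above, a minimal C sketch of a PCB is shown below; the field names and sizes are illustrative assumptions, not the layout of any real kernel.

```c
/* Illustrative sketch of a Process Control Block (PCB); field names and
 * sizes are teaching assumptions, not a real kernel's layout. */
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;                 /* process identifier              */
    proc_state_t   state;               /* current process state           */
    unsigned long  program_counter;     /* saved program counter           */
    unsigned long  registers[16];       /* saved general-purpose registers */
    unsigned long  mem_base, mem_limit; /* memory allocation               */
    int            open_files[16];      /* status of its opened files      */
    unsigned long  cpu_time_used;       /* accounting information          */
    int            pending_io;          /* I/O status information          */
    struct pcb    *next;                /* link for ready/wait queues      */
} pcb_t;

int main(void) {
    pcb_t p = { .pid = 1, .state = READY, .program_counter = 0 };
    printf("PCB: pid=%d state=%d\n", p.pid, p.state);
    return 0;
}
```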
Threads
• A thread is a lightweight process
• Threads allow multiple flows of control in the same process environment
• Each thread contains its own
– PC
– Registers
– Stack
• Multithreading: running several threads within one process
Single and Multithreaded Processes
Benefits of Multithreading
• Responsiveness
• Easy Resource Sharing
• Economy
• Utilization of Multi-processor Architectures
Inter Process Communication
Processes often need to communicate
– Use shared memory
– Need a structured way to facilitate IPC
– Maintain integrity of the system
– Ensure predictable behaviour
– IPC: techniques and mechanism that facilitate
communication between processes.
Issues
• How one process can pass information to
another
• Ensuring that 2 or more processes do not get
into each other’s way when engaging in
critical activities
• Proper sequencing when dependencies are
present
Race Condition
• 2 or more processes are reading or writing some shared
data and the final result depends on who runs precisely when.
• Example (ATM-style update of a shared balance):
– Two processes P1 and P2 share variable B, with initial value 2
– P1: (1) C = B - 1   (2) B = 2 * C
– P2: (3) D = 2 * B   (4) B = D - 1
• Possible interleavings and outcomes:
– Case 1 (1,2,3,4): C = 2-1 = 1; B = 2*1 = 2; D = 2*2 = 4; B = 4-1 = 3; final B = 3
– Case 2 (1,3,2,4): C = 2-1 = 1; D = 2*2 = 4; B = 2*1 = 2; B = 4-1 = 3; final B = 3
– Case 3 (1,3,4,2): C = 2-1 = 1; D = 2*2 = 4; B = 4-1 = 3; B = 2*1 = 2; final B = 2
– Case 4 (3,4,1,2): D = 2*2 = 4; B = 4-1 = 3; C = 3-1 = 2; B = 2*2 = 4; final B = 4
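The same race can be reproduced with two threads sharing a variable; the pthread sketch below mirrors P1 and P2 above, and the final value of B depends on the interleaving.

```c
/* Race-condition sketch: two threads update shared B with no synchronization,
 * mirroring P1 (C = B - 1; B = 2 * C) and P2 (D = 2 * B; B = D - 1).
 * The final value of B depends on how the statements interleave. */
#include <pthread.h>
#include <stdio.h>

int B = 2;                         /* shared, initial value 2 */

void *p1(void *arg) { int C = B - 1; B = 2 * C; return NULL; }
void *p2(void *arg) { int D = 2 * B; B = D - 1; return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("B = %d\n", B);         /* may print 2, 3, or 4 */
    return 0;
}
```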
Critical Region/section
• part of the program where the shared memory is
accessed

– Execute as quickly as possible.


– mutual exclusion must be enforced
– Be coded carefully to reduce any possibility of errors
(e.g., infinite loops).
– Ensure termination housekeeping if process in critical
region aborts.
– Must release mutual exclusion so other processes can
enter region
Conditions
• No 2 processes may be simultaneously inside
their critical regions
• No assumption may be made about speeds or
the number of CPUs
• No process running outside its critical region
may block other processes
• No process should have to wait forever to
enter its critical region
Mutual Exclusion
Techniques
• Two approaches
• Mutual exclusion with Busy Waiting
– Disabling Interrupts
– Lock variables
– Strict Alternation
– Peterson’s Solution
– The TSL instruction (Test and Set Lock)
Techniques contd
• Mechanisms with blocking or suspension of
the waiting process
• Sleep and wakeup
• Semaphores
• Monitors
• Message passing
Disabling Interrupts
• disables all interrupts just after entering its critical
region
• re-enables them just before leaving it,
• no clock interrupts can occur.
• advantage: process inside critical region may update
shared resources without any risk of races,
• disadvantage: if interrupts are not re-enabled after leaving
the region, the whole system may hang; moreover, it is
useless in multiprocessor architectures,
• may be used inside operating system kernel when
some system structures are to be updated, but is not
recommended for implementation of mutual exclusion
in user space.
• Unattractive, because it is unwise to give user process
power to turn off interrupts.
Lock Variables
• a single, shared (Lock) variable initially 0,
• if lock = 0, set lock to 1 and enter critical region;
if not, wait until lock becomes 0
• Thus
• Lock= 0 means that no process in critical region
• Lock= 1 means some process in its critical region.
• race condition can occur
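A sketch of the flawed lock-variable idea, assuming simple enter_region/leave_region helpers; because testing the lock and setting it are two separate steps, both processes can observe lock == 0 and enter together.

```c
/* Naive lock variable: NOT safe, shown only to illustrate the flaw.
 * Between the test (lock != 0) and the assignment (lock = 1) the other
 * process may be scheduled, also see lock == 0, and enter as well. */
int lock = 0;                      /* 0 = free, 1 = held */

void enter_region(void) {
    while (lock != 0)              /* wait until the lock is free        */
        ;                          /* busy wait                          */
    lock = 1;                      /* race: another process may have     */
                                   /* slipped in before this assignment  */
}

void leave_region(void) {
    lock = 0;
}
```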
Strict Alternation

• integer variable “turn” initially 0,


• process A inspects ‘turn’, finds it to be 0
• enters its critical region.
• Process B also finds it to be 0
• sits in a tight loop continually testing turn to see when
it becomes 1.
• When process A leaves the critical region, it sets ‘turn’
to 1, to allow process B to enter its critical region.
– It wastes CPU time busy waiting
– starvation
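Strict alternation written out as the usual pair of fragments (in the textbook's style); the busy wait on turn is what wastes CPU time and can block the faster process.

```c
/* Strict alternation: the integer 'turn' says whose turn it is to enter
 * the critical region; each process busy-waits until it is its turn.
 * Drawback: busy waiting, and a process can be held up by the other even
 * when the other is nowhere near its critical region. */
int turn = 0;                      /* shared; initially process 0's turn */

void process_0(void) {
    while (1) {
        while (turn != 0)          /* busy wait                          */
            ;
        /* critical_region(); */
        turn = 1;                  /* let process 1 in next              */
        /* noncritical_region(); */
    }
}

void process_1(void) {
    while (1) {
        while (turn != 1)          /* busy wait                          */
            ;
        /* critical_region(); */
        turn = 0;
        /* noncritical_region(); */
    }
}
```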
Peterson’s Solution
• Combines strict alternation and lock variables
• Before entering critical region, each process
calls enter-region with its own process
number, 0 or 1 as parameter.
• calls leave-region after leaving CR to indicate
that it is done and to allow the other process
to enter
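A sketch of Peterson's enter_region/leave_region in the textbook's style; note that on modern out-of-order CPUs and optimizing compilers these plain int accesses would additionally need memory barriers or atomics.

```c
/* Peterson's solution for two processes (0 and 1). */
#define FALSE 0
#define TRUE  1
#define N     2                    /* number of processes              */

int turn;                          /* whose turn is it?                */
int interested[N];                 /* all values initially 0 (FALSE)   */

void enter_region(int process)     /* process is 0 or 1                */
{
    int other = 1 - process;       /* number of the other process      */
    interested[process] = TRUE;    /* show that you are interested     */
    turn = process;                /* set flag                         */
    while (turn == process && interested[other] == TRUE)
        ;                          /* busy wait                        */
}

void leave_region(int process)
{
    interested[process] = FALSE;   /* indicate departure from the CR   */
}
```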
Test and Set Lock (TSL) Instruction
• TSL reads the contents of the memory word ‘lock’ into
register RX
• stores a non-zero value at the memory address lock.
• Read and store operation are indivisible
• shared variable, lock coordinates access to shared memory.
• lock = 0, process set it to 1 using the TSL instruction and
then read or write the shared memory.
• When it is done, the process sets lock back to 0 using a
move instruction
• the processes must call enter-region and leave-region at
the correct times
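The TSL idea can be approximated in portable C with C11's atomic_flag, whose test-and-set performs the same indivisible read-and-store described above; this stands in for the hardware TSL/XCHG instruction.

```c
/* Test-and-Set-Lock sketch using C11 atomics as a stand-in for the
 * hardware TSL instruction: atomic_flag_test_and_set() indivisibly reads
 * the old value and stores "locked", like TSL RX,LOCK. */
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;         /* clear = free */

void enter_region(void) {
    while (atomic_flag_test_and_set(&lock))  /* was it already set?    */
        ;                                    /* yes: busy wait (spin)  */
}

void leave_region(void) {
    atomic_flag_clear(&lock);                /* store 0 back into lock */
}
```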
Disadvantages of solutions with busy
waiting
• Waste CPU time
• Possibility of deadlock/starvation in systems
with multi-priority scheduling
Solution with blocking
Sleep and Wakeup
• Two system calls: sleep() and wakeup()
– sleep(): the calling process is suspended until another
process wakes it by calling wakeup()
– the wakeup call has one parameter, the process to
wake up.
– both sleep and wakeup can take an extra parameter, a
memory address used to match up sleeps with
wakeups.
Example: producer-consumer problem
• a common buffer with limited capacity
• the producer puts information into the buffer, the consumer takes it
out
• If the producer attempts to put an item into a full buffer, the producer is suspended
• If the consumer attempts to take an item from an empty buffer, the consumer is suspended
• To keep track of the number of items in the buffer:
• a variable count, with buffer size N
• Producer: if (count == N) go to sleep, else add the item and count++
• Consumer: if (count == 0) go to sleep, else take the item and count--
• A race condition can still occur because access to count is
unconstrained.
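The producer-consumer logic above can be written out as the following pseudocode-style sketch; sleep(), wakeup(), produce_item(), insert_item(), remove_item() and consume_item() are hypothetical primitives, and the comments mark where the lost-wakeup race occurs.

```c
/* Producer-consumer with sleep/wakeup (pseudocode-style sketch; the
 * called primitives are hypothetical).  The race: the consumer may read
 * count == 0, be preempted before it sleeps, the producer then calls
 * wakeup() on a process that is not yet asleep, the wakeup is lost, and
 * eventually both processes sleep forever. */
#define N 100                     /* number of slots in the buffer */
int count = 0;                    /* number of items in the buffer */

void producer(void) {
    int item;
    while (1) {
        item = produce_item();
        if (count == N) sleep();            /* buffer full: go to sleep  */
        insert_item(item);
        count = count + 1;
        if (count == 1) wakeup(consumer);   /* was empty: wake consumer  */
    }
}

void consumer(void) {
    int item;
    while (1) {
        if (count == 0) sleep();            /* buffer empty: go to sleep */
        item = remove_item();
        count = count - 1;
        if (count == N - 1) wakeup(producer); /* was full: wake producer */
        consume_item(item);
    }
}
```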
Semaphores
• integer variable to count the number of
wakeups saved for future use
• Semaphore ≥ 0
• Two operations
• down: checks if (count> 0) then (count--) else
(sleep)
• Up: (count++) and (wakeup)
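A runnable sketch of the producer-consumer problem solved with semaphores, using POSIX sem_wait/sem_post as the down/up operations (assumes a POSIX system; compile with -pthread).

```c
/* Producer-consumer with semaphores: sem_wait = down, sem_post = up. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N     8                   /* slots in the buffer            */
#define ITEMS 32                  /* items to produce in this demo  */

int buffer[N];
int in = 0, out = 0;

sem_t empty;                      /* counts empty slots (init N)    */
sem_t full;                       /* counts full slots  (init 0)    */
sem_t mutex;                      /* binary semaphore for the buffer */

void *producer(void *arg) {
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty);         /* down(empty): wait for a free slot  */
        sem_wait(&mutex);         /* down(mutex): enter critical region */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);         /* up(mutex): leave critical region   */
        sem_post(&full);          /* up(full): one more filled slot     */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```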
Mutexes
• simplified version of semaphore (binary
semaphore)
• can be in one of 2 states, unlocked or locked.
Only 1 bit is required to represent it: 0 means
unlocked, any other value means locked.
– Used when there is no requirement to count
signal occurrences but only to organize mutual
exclusion
– Efficient and simple implementation
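A minimal pthread mutex example: two threads increment a shared counter, and the lock/unlock pair organizes mutual exclusion around the critical region.

```c
/* Mutex example: two threads increment a shared counter; the
 * pthread_mutex_lock/unlock pair enforces mutual exclusion. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);        /* enter critical region */
        counter++;
        pthread_mutex_unlock(&m);      /* leave critical region */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 200000 with the mutex */
    return 0;
}
```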
Monitors
• High-level synchronization mechanism
• A set of procedures, variables and data structures grouped
in one construct
• Only one process can be active inside the monitor at a time
• Condition variables were introduced, with two operations:
wait() and signal()
• wait(): when an operation cannot continue, wait is performed
on a condition variable and the process executing the monitor
procedure is suspended
• Another process may then enter the critical region; when it
leaves (or waits), it performs signal() to wake up the
suspended process
Monitor’s main features
• The wait()/signal() pair protects against lost
wakeup signals
• Not all high-level languages offer monitors
• Some languages offer incomplete mechanisms
• Monitor-based solutions are not suited to distributed
environments because they require access to
shared memory
Message Passing
• Based on two system calls
– Send(destination, &message);
– Receive(source, &message);
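As a concrete stand-in for the Send/Receive primitives above, the sketch below passes messages between a parent and child process through a POSIX pipe: write() plays the role of send and read() the role of receive; no memory is shared.

```c
/* Message-passing sketch: the parent "sends" integers through a pipe,
 * the child "receives" them.  write() plays send(), read() plays receive(). */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) < 0) return 1;

    if (fork() == 0) {                  /* child = receiver/consumer */
        int msg;
        close(fd[1]);
        while (read(fd[0], &msg, sizeof msg) == sizeof msg)
            printf("received %d\n", msg);
        return 0;
    }
    close(fd[0]);                       /* parent = sender/producer  */
    for (int msg = 1; msg <= 5; msg++)
        write(fd[1], &msg, sizeof msg);
    close(fd[1]);                       /* EOF tells the child to stop */
    wait(NULL);
    return 0;
}
```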
Class Activity
• four groups of equal number, each group to
implement one of the following
– Dining philosophers problem
– Sleeping barber problem
– Readers and writers problem
– Producer-consumer problem
CPU Scheduling
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
Basic Concepts
Maximum CPU utilization obtained
with multiprogramming.
• CPU–I/O Burst Cycle
– Process execution consists of
a cycle of CPU execution
and I/O wait.
– Example: Alternating
Sequence of CPU And I/O
Bursts
– An I/O-bound program typically has many
very short CPU bursts.
– A CPU-bound program typically has a few
very long CPU bursts.
CPU Scheduler
• The CPU scheduler (short-term scheduler) selects from among the
processes in memory that are ready to execute, and allocates the
CPU to one of them.
• A ready queue may be implemented as a FIFO queue, a priority
queue, a tree, or an unordered linked list.
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state (e.g., I/O request).
2. Switches from running to ready state (e.g., an interrupt occurs).
3. Switches from waiting to ready state (e.g., completion of I/O).
4. Terminates.
• Scheduling under 1 and 4 is nonpreemptive; otherwise it is called
preemptive.
• Under nonpreemptive scheduling, once the CPU has been allocated
to a process, the process keeps the CPU until it releases the CPU,
either by terminating or by switching to the waiting state.
Dispatcher
• The dispatcher module gives control of the CPU to the
process selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program
to restart that program
• Dispatch latency – the time it takes for the dispatcher to
stop one process and start another running.
Scheduling Algorithm Metrics
• CPU utilization – keep the CPU as busy as possible.
• Throughput – number of processes that complete their
execution per time unit.
• Turnaround time – amount of time to execute a particular
process; the interval from the time of submission of a process
to the time of completion (includes the periods spent waiting
to get into memory, waiting in the ready queue, executing on
the CPU, and doing I/O).
• Waiting time – sum of the periods spent waiting in the ready
queue.
• Response time – the time from the submission of a request
until the first response is produced (i.e., the time it takes to
start responding, not the time it takes to output that
response).
Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
Scheduling Algorithms
• CPU scheduling deals with the problem of choosing a
process from the ready queue to be executed by the CPU.
• The following CPU scheduling algorithms will be described:
– First-Come, First-Served (FCFS)
– Shortest-Job-First (SJF)
– Priority
– Round-Robin (RR)
– Multilevel Queue
– Multilevel Feedback Queue
First-Come, First-Served (FCFS)
Scheduling
• It is the simplest CPU scheduling algorithm.
• The process that requests the CPU first is
allocated the CPU first.
• The average waiting time under FCFS is long.
• Example: P1 takes 24 s, P2 takes 3 s, P3 takes 3 s.
• If the processes arrive in the order P1, P2, P3:
– Waiting time: P1 = 0, P2 = 24, P3 = 27; average = (0 + 24 + 27) / 3 = 17
– Turnaround time: P1 = 24, P2 = 27, P3 = 30; average = (24 + 27 + 30) / 3 = 27
– Throughput: 3 processes in 30 s
• If the processes arrive in the order P2, P3, P1:
– Waiting time: P2 = 0, P3 = 3, P1 = 6; average = (0 + 3 + 6) / 3 = 3
– Turnaround time: P2 = 3, P3 = 6, P1 = 30; average = (3 + 6 + 30) / 3 = 13
– Throughput: 3 processes in 30 s
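A small C sketch that computes the FCFS waiting and turnaround times for the example above (bursts 24, 3, 3, all arriving at time 0); reordering the burst array reproduces the second case.

```c
/* FCFS scheduling: compute waiting and turnaround times for processes
 * served in arrival order (all arriving at time 0 in this example). */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};               /* P1, P2, P3 in arrival order */
    int n = 3, start = 0;
    double total_wait = 0, total_turn = 0;

    for (int i = 0; i < n; i++) {
        int waiting = start;                /* time spent in the ready queue  */
        int turnaround = start + burst[i];  /* completion minus arrival (= 0) */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_turn += turnaround;
        start += burst[i];
    }
    printf("average waiting=%.2f turnaround=%.2f\n",
           total_wait / n, total_turn / n);
    return 0;
}
```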
Shortest job first (SJF)
• Consider the following processes given their
arrival times and burst times

Average waiting time = (0 + 6 + 3 + 7) / 4 = 4
Shortest Remaining Time Next
Is a pre-emptive version of SJF.

Average waiting time =


Priority scheduling
• Waiting time and response time depend on
priority
• deadlines can be met by giving processes with
deadlines higher priority
• starvation of lower priority processes is
possible
• solution : AGING:- as time progresses increase
process priority
Round Robin (RR)

• Example (time quantum = 20; CPU burst times P1 = 53, P2 = 17, P3 = 68, P4 = 24)
• Waiting time of each process:
– P1: 0 + (77 - 20) + (121 - 97) = 81
– P2: 20
– P3: 37 + (97 - 57) + (134 - 117) = 94
– P4: 57 + (117 - 77) = 97
• Average waiting time = (81 + 20 + 94 + 97) / 4 = 73
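A compact round-robin simulation that reproduces the waiting times above; the burst times 53, 17, 68, 24 and quantum 20 are inferred from the worked figures.

```c
/* Round-Robin simulation: quantum = 20; bursts 53, 17, 68, 24; all
 * processes arrive at time 0.  Waiting time = completion - burst here. */
#include <stdio.h>

int main(void) {
    int burst[]   = {53, 17, 68, 24};
    int remain[]  = {53, 17, 68, 24};
    int finish[4] = {0};
    int n = 4, quantum = 20, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;           /* already finished     */
            int slice = remain[i] < quantum ? remain[i] : quantum;
            time += slice;                          /* run for one quantum  */
            remain[i] -= slice;
            if (remain[i] == 0) { finish[i] = time; done++; }
        }
    }
    double total = 0;
    for (int i = 0; i < n; i++) {
        int waiting = finish[i] - burst[i];         /* arrival time is 0    */
        printf("P%d: waiting=%d\n", i + 1, waiting);
        total += waiting;
    }
    printf("average waiting=%.2f\n", total / n);
    return 0;
}
```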


Multilevel Queue
• The ready queue is partitioned into separate queues,
e.g. foreground and background, each queue with its
own scheduling algorithm.
• Jobs are assigned to queues by class (processes are
divided into different groups), e.g. foreground
(interactive) and background (batch) processes.
• For example: batch jobs at the lowest level, real-time
processes at the highest level, CPU-bound jobs at a
high level, and jobs with a mixture of I/O and CPU at
the second level.
Multilevel feedback queue
• N priority levels
• Priority scheduling between levels
• RR within a level
• Quantum size decreases as priority level
increases
• Process in a given level is not scheduled until all
higher priority queues are empty
• If process does not complete in a given quantum
at a priority level, it is moved to the next lower
priority level
DEADLOCKS
• A set of processes is deadlocked if each process in
the set is waiting for an event that only another
process in the set can cause (including itself). Two
or more processes are waiting for an event that
can never occur, so they wait forever.

• Resources: CPU, disk, files, database, memory etc


• Pre-emptable: resources that can be taken away from the
process holding them without ill effect, e.g. memory, CPU
• Non-pre-emptable: resources that cannot be taken away
without failing the computation, e.g. disks, files, a held
mutex, a CD-ROM recorder
Conditions for deadlock
• Mutual exclusion: each resource is either currently assigned
to exactly one process or is available

• Hold and wait: processes currently holding resources granted
earlier can request new resources

• No pre-emption: resources previously granted cannot be
forcibly taken away from a process

• Circular wait: there must be a circular chain of two or more
processes, each waiting for a resource held by the next
member of the chain
Deadlock Modelling
Deadlock Detection with Multiple
Resources of Each Type
• Resources in existence: E = (E1, E2, E3, ..., Em)
• Resources available: A = (A1, A2, A3, ..., Am)
• Current allocation matrix C: row i gives the resources
currently allocated to process i
• Request matrix R: row i gives the resources process i
still needs
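The detection algorithm can be sketched directly from E, A, C and R; the numeric values below are illustrative only. A process whose outstanding request can be satisfied is assumed to run to completion and return its resources; any process never marked is deadlocked.

```c
/* Deadlock detection with multiple resources of each type (sketch).
 * A = resources available, C = current allocation, R = requests. */
#include <stdio.h>

#define NPROC 3
#define NRES  4

int main(void) {
    int A[NRES]        = {2, 1, 0, 0};                      /* available  */
    int C[NPROC][NRES] = {{0,0,1,0}, {2,0,0,1}, {0,1,2,0}}; /* allocation */
    int R[NPROC][NRES] = {{2,0,0,1}, {1,0,1,0}, {2,1,0,0}}; /* requests   */
    int done[NPROC]    = {0};

    int progress = 1;
    while (progress) {
        progress = 0;
        for (int p = 0; p < NPROC; p++) {
            if (done[p]) continue;
            int ok = 1;
            for (int r = 0; r < NRES; r++)
                if (R[p][r] > A[r]) { ok = 0; break; }
            if (ok) {                        /* p can finish: release C[p] */
                for (int r = 0; r < NRES; r++) A[r] += C[p][r];
                done[p] = 1;
                progress = 1;
            }
        }
    }
    for (int p = 0; p < NPROC; p++)
        if (!done[p]) printf("process %d is deadlocked\n", p);
    return 0;
}
```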


example
Resource Allocation Graph (RAG)
Strategies for Dealing with Deadlock
• Ignore the problem altogether, it is the user’s
fault (Ostrich approach)
• Detection and recovery: let deadlock occur,
detect them and take action to fix the
problem.
• Dynamic avoidance by careful resource
allocation
• Prevention by structurally negating one of the
4 conditions necessary to cause a deadlock.
Banker’s Algorithm
Safe state
• there is at least one way for all users to finish,
the state of figure 2 is safe because, with 2
units left, the banker can delay any request
except C’s, thus letting C finish and release all
4 resources. With 4 units in hand, the banker
can let either D or B have the necessary units
and so on.
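A sketch of the safety check for the single-resource banker; the loan and credit-line figures below follow the worked example (2 units free, C able to finish first), with B's and D's exact numbers assumed for illustration.

```c
/* Banker's algorithm, single resource type (sketch).
 * has[i] = units currently lent to customer i, max[i] = declared maximum.
 * A state is safe if some ordering lets every customer finish. */
#include <stdio.h>

#define N 4

int is_safe(int has[], int max[], int free_units) {
    int done[N] = {0}, finished = 0, progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < N; i++) {
            if (!done[i] && max[i] - has[i] <= free_units) {
                free_units += has[i];   /* i finishes and returns its loan */
                done[i] = 1;
                finished++;
                progress = 1;
            }
        }
    }
    return finished == N;
}

int main(void) {
    int has[N] = {1, 1, 2, 4};          /* A, B, C, D: current loans       */
    int max[N] = {6, 5, 4, 7};          /* credit lines (maximum needs)    */
    printf("state is %s\n", is_safe(has, max, 2) ? "safe" : "unsafe");
    return 0;
}
```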
MEMORY MANAGEMENT
• to keep track of which parts of memory are in
use and which parts are not
• to allocate memory to processes when they
need it and deallocate it when they are done
• to manage swapping between main memory
and disk when main memory is too small to
hold all the processes.
Mono-programming without swapping
or paging
Multiprogramming with Fixed
Partitions
Swapping
Memory management algorithms
• First fit: Use first hole big enough
• Next fit: Use next hole big enough
• Best fit: Search list for smallest hole big
enough
• Worst fit: Search list for largest hole available
• Quick fit: Separate lists of commonly
requested sizes
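A first-fit sketch over a simple list of free holes; the hole sizes and requests are illustrative. Best fit would scan the whole list for the smallest adequate hole, worst fit for the largest.

```c
/* First-fit memory allocation sketch over an array of free holes.
 * Returns the index of the first hole big enough, or -1 if none fits. */
#include <stdio.h>

#define NHOLES 5

int first_fit(int hole[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (hole[i] >= request) {
            hole[i] -= request;          /* shrink the hole by the request */
            return i;
        }
    return -1;
}

int main(void) {
    int hole[NHOLES] = {100, 500, 200, 300, 600};  /* illustrative sizes   */
    int requests[]   = {212, 417, 112, 426};

    for (int i = 0; i < 4; i++) {
        int h = first_fit(hole, NHOLES, requests[i]);
        if (h >= 0) printf("request %d -> hole %d\n", requests[i], h);
        else        printf("request %d cannot be satisfied\n", requests[i]);
    }
    return 0;
}
```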
Virtual Memory
Page fault
• A page fault occurs when a program tries to use
an unmapped page, and the CPU traps to the
OS. When a page fault occurs, the OS picks a
little-used page frame and writes its contents
back to the disk. It then fetches the page just
referenced into the page frame just freed,
changes the map, and restarts the trapped
instruction.
PAGE REPLACEMENT ALGORITHMS
• Optimal replacement
• Not recently used (NRU) replacement
• First-in, first-out (FIFO) replacement
• Second chance replacement
• Clock page replacement
• Least recently used (LRU) replacement
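A FIFO page-replacement sketch on a short, illustrative reference string; the other algorithms listed differ only in how the victim page is chosen (e.g. LRU evicts the page unused for the longest time).

```c
/* FIFO page replacement sketch: on a page fault the oldest loaded page
 * is the victim.  Counts page faults for a small reference string. */
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof ref / sizeof ref[0];
    int frame[FRAMES] = {-1, -1, -1};    /* -1 = empty frame                */
    int next = 0, faults = 0;            /* next = oldest frame (FIFO hand) */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frame[f] == ref[i]) { hit = 1; break; }
        if (!hit) {                       /* page fault: replace oldest page */
            frame[next] = ref[i];
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults = %d (out of %d references)\n", faults, n);
    return 0;
}
```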
Segmentation
• Segmentation is a memory management
scheme that supports the user's view of memory.
A program is a collection of segments; a
segment is a logical unit, e.g. main program,
function, object, procedure, method, local
variables, stack, arrays.
Segmentation
• Characteristic
– Varying sizes (could be 0 –max allowed)
– Cannot be exhausted
• Advantages
– Because segments have separate address spaces, they
can grow without affecting one another
– They are rarely filled up because they are very large
– Segments are easy to modify; only the starting
address needs updating
– Segmentation facilitates sharing of procedures
– Protection: each segment can have its own
protection (different protection mechanisms)
Comparison of paging and
segmentation
Segmentation contd….
Input/output management
categories
• Human readable
– Used to communicate with the user
– Printers and terminals
– Video display
– Keyboard
– Mouse etc
• Machine readable
– Used to communicate with electronic equipment
– Disk drives
– USB keys
– Sensors
– Controllers
– Actuators
• Communication
– Used to communicate with remote devices
– Digital line drivers
– Modems
Differences in I/O Devices
• Devices differ in a number of areas
– Data Rate
– Application
– Complexity of Control
– Unit of Transfer
– Data Representation
– Error Conditions
Speed of I/O devices
• Device Data rate
• Keyboard 10 bytes/sec
• Mouse 100 bytes/sec
• 56K modem 7 KB/sec
• Printer / scanner 200 KB/sec
• USB 1.5 MB/sec
• Digital camcorder 4 MB/sec
• Fast Ethernet 12.5 MB/sec
• Hard drive 20 MB/sec
• FireWire (IEEE 1394) 50 MB/sec
• XGA monitor 60 MB/sec
• PCI bus 500 MB/sec
I/O Device Data Rates
Device controllers
• I/O devices have components
– Mechanical component
– Electronic component
• Electronic component controls the device
– May be able to handle multiple devices
– May be more than one controller per mechanical
component (example: hard drive)
• Controller's tasks
– Convert serial bit stream to block of bytes
– Perform error correction as necessary
– Make available to main memory
Performing I/O
• Programmed I/O
– Process is busy-waiting for the operation to complete
• Interrupt-driven I/O
– I/O command is issued
– Processor continues executing instructions
• Direct Memory Access (DMA)
– DMA module controls exchange of data between
main memory and the I/O device
– Processor interrupted only after entire block has
been transferred
Relationship Among Techniques
Evolution of the I/O Function
• Processor directly controls a peripheral device
• Controller or I/O module is added
– Processor uses programmed I/O without
interrupts
– Processor does not need to handle details of
external devices
Evolution of the I/O Function
• I/O module is a separate processor
• I/O processor
– I/O module has its own local memory
– It is a computer in its own right
Evolution of the I/O Function
• Controller or I/O module with interrupts
– Processor does not spend time waiting for an I/O
operation to be performed
• Direct Memory Access
– Blocks of data are moved into memory without
involving the processor
– Processor involved at beginning and end only
Direct Memory Access
• Processor delegates I/O operation to the DMA
module
• DMA module transfers data directly to or from
memory
• When transfer is complete, DMA module
sends an interrupt signal to the processor
Operating System Design Issues
• Efficiency
– Most I/O devices extremely slow compared to
main memory
– Use of multiprogramming allows for some
processes to be waiting on I/O while another
process executes
– I/O cannot keep up with processor speed
– Swapping is used to bring in additional ready
processes, but swapping is itself an I/O operation
• Generality
– Desirable to handle all I/O devices in a uniform
manner
– Hide most of the details of device I/O in lower-
level routines
Goals of I/O software
• Device independence
– Programs can access any I/O device
– No need to specify device in advance
• Uniform naming
– Name of a file or device is a string or an integer
– Doesn’t depend on the machine (underlying hardware)
• Error handling
– Done as close to the hardware as possible
– Isolate higher-level software
• Synchronous vs. asynchronous transfers
– Blocked transfers vs. interrupt-driven
• Buffering
– Data coming off a device cannot always be stored directly in its final destination
• Sharable vs. dedicated devices
