Operating System


Chapter one

Introduction to operating system

9/3/2019 2
What is an operating system

oA program that acts as an intermediary between a user of a computer and the
computer hardware.

oOperating system goals:
 Execute user programs and make solving user problems easier.
 Make the computer system convenient to use.
 Use the computer hardware in an efficient manner.

History of operating system

 Since operating systems have historically been closely tied to the
architecture of the computers on which they run, we will look at
successive generations of computers to see what their operating
systems were like.
 First generation(early system-bare system)
 Second generation(simple batch system)
 Third generation(multiprogramming and time sharing)
 Fourth generation(personal computer)

First generation(early system-bare machine)
o Structure
 Large machines run from console
 Single user system
 Programmer/User as operator
 Paper tape or punched cards
o Early Software
 Assemblers
 Loaders
 Linkers
 Libraries of common subroutines
 Compilers
 Device drivers
o Secure
o Inefficient use of expensive resources
 Low CPU utilization
 Significant amount of setup time
Second generation(simple batch system)
 Use an operator (somebody to work the machine)
 Add a card reader (a device to read programs written on punched
cards)
 Reduce setup time by batching similar jobs
 Automatic job sequencing - automatically transfers control from one
job to another. First rudimentary operating system.
 Resident monitor
 initial control in monitor
 control transfers to job
 when job completes control transfers back to monitor

Second generation(con’t…)
Problems:
1. How does the monitor know about the nature of the job (e.g.,
Fortran versus Assembly) or which program to execute?
2. How does the monitor distinguish a) job from job? b) data from
program?
 Solution: introduce control cards
Control Cards:
 Special cards that tell the resident monitor which programs to
run.

Second generation(con’t…)
o Parts of resident monitor
o Control card interpreter - responsible for reading and carrying
out instructions on the cards.
o Loader - loads systems programs and application programs
into memory.
o Device drivers - know special characteristics and properties
for each of the system's I/O devices.
o Problem: Slow Performance - since I/O and CPU could not overlap,
and card reader very slow.
o Solution: Off-line operation - speed up computation by loading jobs
into memory from tapes, with card reading and line printing done
off-line using smaller machines.

Third generation-Multiprogramming and Time Sharing

Multiprogramming
 Several jobs are kept in main memory at the same time, and the CPU is shared
between them. Each job is called a process.
OS Features Needed for Multiprogramming
 I/O routine supplied by the system.
 Memory management - the system must allocate the memory to several jobs.
 CPU scheduling - the system must choose among several jobs ready to run.
 Allocation of devices.
Time-Sharing Systems- Interactive Computing
 Most efficient for many users to share a large computer.
 The CPU is shared between several processes.
 Each process belongs to a user and I/O is to/from a separate terminal for each
user.
 On-line file system must be available for users to access data and code.

Fourth-generation(personal computer)
o Personal computers - computer system dedicated to a single user.

o I/O devices - keyboards, mice, display screens, small printers.

o User convenience and responsiveness.

o Can adopt technology developed for larger operating systems; often
individuals have sole use of the computer and do not need advanced CPU
utilization or protection features.

Fourth-generation(con’t…)
Parallel Systems - multiprocessor systems with more than one CPU in close
communication.
 Tightly coupled system - processors share memory and a clock; communication
usually takes place through the shared memory.
 Advantages of parallel systems:
 Increased throughput
 Economical
 Increased reliability
 Symmetric multiprocessing
 Each processor runs an identical copy of the operating system.
 Many processes can run at once without performance deterioration.
 Asymmetric multiprocessing
 Each processor is assigned a specific task; master processor schedules and
allocates work to slave processors.
 More common in extremely large systems.
Fourth-generation(con’t…)
Distributed Systems - distribute the computation among several physical
processors.
 Loosely coupled system - each processor has its own local memory;
processors communicate with one another through various communication
lines, such as high-speed networks.
 Advantages of distributed systems:
 Resource sharing
 Computation speed up - load sharing
 Reliability
 Communication
Real-Time Systems
 Often used as a control device in a dedicated application such as controlling
scientific experiments, medical imaging systems, industrial control systems,
and some display systems.
 Well-defined fixed-time constraints.
 OS must be able to respond very quickly.
Computer system organization
Computer system can be divided into four components
–Hardware – provides basic computing resources
•CPU, memory, I/O devices
–Operating system
•Controls and coordinates use of hardware among various applications and
users
–Application programs – define the ways in which the system resources are used
to solve the computing problems of the users
•Word processors, compilers, web browsers, database systems, video games
–Users
•People, machines, other computers

Computer system operation
•One or more CPUs and device controllers connect through a common bus providing
access to shared memory
•Concurrent execution of CPUs and devices competing for memory cycles

Computer system operation(con’t…)
o I/O devices and the CPU can execute concurrently.
oEach device controller is in charge of a particular device type.
oEach device controller has a local buffer.
oCPU moves data from/to main memory to/from local buffers
oI/O is from the device to local buffer of controller.
Device controller informs CPU that it has finished its operation by
causing an interrupt.
• An operating system is interrupt driven.

Storage structure
oMain memory – the only large storage medium that the CPU can access directly.
oSecondary storage – extension of main memory that provides large
nonvolatile storage capacity.
oMagnetic disks – rigid metal or glass platters covered with magnetic
recording material
Disk surface is logically divided into tracks, which are subdivided into
sectors.
The disk controller determines the logical interaction between the device
and the computer.
oCaching – copying information into a faster storage system; main memory can
be viewed as the last cache for secondary storage.
oStorage systems organized in hierarchy.
Speed
Cost
Volatility

Storage-Device Hierarchy

(Figure: storage-device hierarchy; devices near the top are high cost and fast, those near the bottom are lower cost and slow.)

Migration of A From Disk to Register

oData transfer between cache and CPU is a hardware function, performed without
OS intervention.
oTransfer from disk to memory is usually controlled by the OS.

Operating system operation
• Interrupt driven by hardware
• Software error or request creates exception or trap
– Division by zero, request for operating system service
• Other process problems include infinite loop, processes modifying each other or the
operating system
• Dual-mode operation allows OS to protect itself and other system components
– User mode: execution done on behalf of a user.
– Monitor mode (also kernel mode or system mode) : execution done on behalf of
operating system.

Purpose of operating systems
• Hiding the complexities of hardware from the user.
• Managing the hardware's resources, which include the
processors, memory, data storage and I/O devices.
• Handling "interrupts" generated by the I/O controllers.
• Sharing of I/O between many programs using the CPU.

9/3/2019 Wolkite University OS(SEng2043) 20


Operating system Functions(services)
One set of operating system services provides functions that are helpful
to the user.
• Program execution - ability to load a program into memory and to run
it.
• I/O operations - since user programs cannot execute I/O operations
directly, the operating system must provide some means to perform
I/O.
• File-system manipulation - capability to read, write, create, and
delete files.
• Communications - exchange of information between processes
executing either on the same computer or on different systems tied
together by a network. Implemented via shared memory or message
passing.
• Error detection - ensure correct computing by detecting errors in
the CPU and memory hardware, in I/O devices, or in user programs.
Operating system functions (con’t…)
Additional operating-system functions exist not for helping the user,
but rather for ensuring efficient system operation.
Resource allocation - allocating resources to multiple users or multiple
processes running at the same time.

 Accounting - keep track of and record which users use how much and what
kinds of computer resources for account billing or for accumulating usage
statistics.

 Protection - ensuring that all access to system resources is controlled.

Control program - controls the execution of user programs and operation of I/O
devices.
 Kernel - the one program running at all times (all else being application
programs).

Operating system common component
oUser interface
oProcess Management
oMain Memory Management
oSecondary-Storage Management
oI/O System Management
oFile Management
oProtection System
oNetworking

Process Management
A process is a program in execution. A process needs certain resources,
including CPU time, memory, files, and I/O devices, to accomplish its
task.
• The operating system is responsible for the following activities in
connection with process management.
Process creation and deletion.
Process suspension and resumption.
Provision of mechanisms for:
process synchronization
process communication

Main-Memory Management
oMemory is a large array of words or bytes, each with its own address.
It is a repository of quickly accessible data shared by the CPU and I/O
devices.
oThe operating system is responsible for the following activities in
connection with memory management:
 Keep track of which parts of memory are currently being used and by whom.
Decide which processes to load when memory space becomes available.
Allocate and deallocate memory space as needed.

Secondary-Storage Management
Since main memory (primary storage) is volatile and too small to
accommodate all data and programs permanently, the computer system
must provide secondary storage to back up main memory.
Most modern computer systems use disks as the principal on-line
storage medium, for both programs and data.
The operating system is responsible for the following activities in
connection with disk management:
Free space management
Storage allocation
Disk scheduling

File Management
 A file is a collection of related information defined by its creator.
Commonly, files represent programs (both source and object forms) and
data.
 The operating system is responsible for the following activities in
connections with file management:
File creation and deletion.
Directory creation and deletion.
Support of primitives for manipulating files and directories.
Mapping files onto secondary storage.
File backup on stable (nonvolatile) storage media.

Protection System
Protection refers to a mechanism for controlling access by programs,
processes, or users to both system and user resources.
The protection mechanism must:
distinguish between authorized and unauthorized usage.
specify the controls to be imposed.
provide a means of enforcement.

Chapter 2

Process & Thread

2.1 Process

Process concept
 Process
 The entity that can be assigned to and executed on a processor
 An activity of some kind which has a program, input, output,
and a state.
 a program in execution; process execution must progress in
sequential fashion
 Conceptually, each process has its own virtual CPU.
 In reality, of course, the real CPU switches back and forth from
process to process.
 Provide the illusion of parallelism, which is sometimes called
pseudo-parallelism.

Program Vs Process
• Program
o It is a sequence of instructions defined to perform some task
o It is a passive entity
• Process
o It is a program in execution
o It is an instance of a program running on a computer
o It is an active entity
o A processor performs the actions defined by a process

Process creation
o In systems designed for running only a single application, it
may be possible to have all the processes that will ever be
needed be present when the system comes up.
o In general-purpose systems some way is needed to create
processes as needed during operation.
o There are four principal events that cause processes to be
created:
1. System initialization.
2. Execution of a process creation system call by a
running process.
3. A user request to create a new process.
4. Initiation of a batch job.

Process creation con’t….
1. System initialization:
 When an operating system is booted, typically several
processes are created.
 These processes can be:
 Foreground processes : processes that interact with (human) users
and perform work for them.
 Background processes: processes which are not
associated with particular users, but instead have some
specific function.
2. Execution of a process creation system call by a running process
 A running process will issue system calls to create one or more
new processes to help it do its job.
 Creating new processes is particularly useful when the work to
be done can easily be formulated in terms of several related,
but otherwise independent interacting processes.
Process creation con’t….
3. A user request to create a new process.
 In interactive systems, users can start a program by typing a
command or (double-)clicking an icon.
 Taking either of these actions starts a new process and runs the
selected program in it.
4. Initiation of a batch job.
 users can submit batch jobs to the system (possibly remotely).
 When the operating system decides that it has the resources to
run another job, it creates a new process and runs the next job
from the input queue in it.

Process termination
o After a process has been created, it starts running and does
whatever its job is.
o However, nothing lasts forever, not even processes.
o Sooner or later the new process will terminate, usually due to one
of the following conditions:
1. Normal exit (voluntary).
2. Error exit (voluntary).
3. Fatal error (involuntary).
4. Killed by another process (involuntary).

Process termination(con’t…)
1. Normal exit (voluntary)
o Most processes terminate because they have done their work.
 Example ,When a compiler has compiled the program, it executes a
system call to tell the operating system that it is finished. This call is exit in
UNIX and ExitProcess in Windows
o Screen-oriented programs also support voluntary termination.
 Example Word processors, Internet browsers and similar programs always
have an icon or menu item that the user can click to tell the process to
remove any temporary files it has open and then terminate.
2. Error exit (voluntary)
o The second reason for termination is an error caused by the
process, often due to a program bug.
 Examples include executing an illegal instruction, referencing nonexistent
memory, or dividing by zero.

Process termination(con’t…)
3. Fatal error (involuntary)
o A process terminates if it discovers a fatal error.
 For example, if a user types the command cc foo.c to compile
the program foo.c and no such file exists, the compiler simply
exits.
4. Killed by another process (involuntary)
o The fourth reason a process might terminate is that the process
executes a system call telling the operating system to kill some
other process.
o In UNIX this call is kill. The corresponding Win32 function is
TerminateProcess.

Process state
o The process state defines the current activity of the process.
o As a process executes, it changes state.
o The states a process may be in differ from one system to another.
o Below we see three states a process may be in:
1. Running : Instructions of program are being executed.
2. Ready: The process is waiting to be assigned to a processor.
3. Blocked :The process is waiting for some event to occur

2.2 Thread

Thread concept
• The process model is based on two independent concepts: resource
grouping and execution.
• One way of looking at a process is that it is a way to group
related resources together.
• A process has an address space containing program text and
data, as well as other resources. These resources may include
open files, child processes, pending alarms, signal handlers,
accounting information, and more.
• By putting them together in the form of a process, they can be
managed more easily.
• The other concept a process has is a thread of execution, usually
shortened to just thread.
• The thread has a program counter that keeps track of which
instruction to execute next.
Thread concept(con’t..)

• It has registers, which hold its current working variables.


• It has a stack, which contains the execution history, with one
frame for each procedure called but not yet returned from.
• Processes are used to group resources together; threads are the
entities scheduled for execution on the CPU.
• The term multithreading is also used to describe the situation of
allowing multiple threads in the same process.

Thread concept(con’t..)
• Inter-process communication is simple and easy when used
occasionally
• If there are many processes sharing many resources, then the
mechanism becomes cumbersome and difficult to handle.
 Threads are created to make this kind of resource sharing simple &
efficient
• A thread is a basic unit of CPU utilization that consists of:
• thread id
• program counter
• register set
• stack
• Threads belonging to the same process share:
• its code
• its data section
• other OS resources

Processes and Threads
• A thread of execution is the smallest unit of processing that can be
scheduled by an OS.
• The implementation of threads and process differs from one OS to another,
but in most cases, a thread is contained inside a process.
• Multiple threads can exist within the same process and share resources
such as memory, while different processes do not share these resources.
• Like process states, threads also have states:
• New, Ready, Running, Waiting and Terminated
• Like processes, the OS will switch between threads (even though they
belong to a single process) for CPU usage
• Like process creation, thread creation is supported by APIs
• Creating threads is inexpensive (cheaper) compared to processes
• They do not need new address space, global data, program code or operating
system resources
• Context switching is faster as the only things to save/restore are program
counters, registers and stacks

Processes and Threads
Similarities:
• Both share the CPU, and only one thread/process is active (running) at a time.
• Like processes, threads within a process execute sequentially.
• Like processes, threads can create children.
• Like processes, if one thread is blocked, another thread can run.
Differences:
• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another.
Examples of Threads
In a word processor,
• a background thread may check spelling and grammar, while a foreground
thread processes user input ( keystrokes ), while yet a third thread loads
images from the hard drive, and a fourth does periodic automatic backups
of the file being edited

In a spreadsheet program,
• one thread could display menus and read user input, while another thread
executes user commands and updates the spreadsheet
In a web server,
• Multiple threads allow for multiple requests to be satisfied
simultaneously, without having to service requests sequentially or to fork
off separate processes for every incoming request.

Multithreading
• Multithreading refers to the ability of an operating system to support multiple
threads of execution within a single process.
• A traditional (heavy weight) process has a single thread of control
• There’s one program counter and a set of instructions carried out at a
time
• If a process has multiple threads of control, it can perform more than one task at
a time
• Each thread has its own program counter, stack and registers
• But they share common code, data and some operating system data
structures like files

Multi-threading(cont...)
• Traditionally there is a single thread of execution per process.
• Example: MSDOS supports a single user process and single thread.
• Traditional UNIX supports multiple user processes but only one thread per process.
• Multithreading
• Java run time environment is an example of one process with multiple threads.
• Examples of supporting multiple processes, with each process supporting multiple threads
• Windows 2000, Solaris, Linux, Mach, and OS/2

(Figure: instruction traces for four configurations: one process with one thread; one process with multiple threads; multiple processes with one thread per process; multiple processes with multiple threads per process.)

Single and Multithreaded Processes
 In a single threaded process model, the representation of a process includes its PCB,
user address space, as well as user and kernel stacks.
When a process is running, the contents of these registers are controlled by that
process, and saved when the process is not running.
In a multithreaded environment
• There is a single PCB and address space,
• However, there are separate stacks for each thread as well as separate control
blocks for each thread containing register values, priority, and other thread related
state information.

Benefits of Multithreading
There are four major benefits of multi-threading:
• Responsiveness
• one thread can give response while other threads are blocked or slowed down
doing computations
• Resource Sharing
• Threads share common code, data and resources of the process to which they
belong. This allows multiple tasks to be performed within the same address space
• Economy
• Creating and allocating processes is expensive, while creating threads is
cheaper as they share the resources of the process to which they belong.
Hence, it’s more economical to create and context-switch threads.
• Scalability/utilization of multi-processor architectures
• The benefits of multithreading are increased in a multi-processor architecture,
where threads can execute in parallel. A single-threaded process can run on only
one CPU, no matter how many are available.

Multithreading Models
There are two types of threads in modern operating systems: user
threads & kernel threads
1. Kernel threads
• are supported by the OS kernel itself. All modern OS support kernel threads
• Need user/kernel mode switch to change threads
2. User threads
• are threads that application programmers put in their programs. They are
managed without kernel support and do not require OS support
•They have problems with blocking system calls
•They cannot support multiprocessing
• There must be a relationship between the kernel threads and the user threads.
•There are 3 common ways to establish this relationship:
1. Many-to-One
2. One-to-One
3. Many-to-Many

Multithreading Models: Many-to-One

•It maps many user-level threads onto one kernel thread
•Thread management is done by a thread library in user space
•It's efficient, but if a thread makes a blocking system call, the whole process blocks
•Only one thread can access the kernel at a time, so multiple threads cannot
run in parallel on multiprocessor systems
•Used on systems that do not support kernel threads.

Multithreading models: One-to-One
• Each user-level thread maps to kernel thread.
• A separate kernel thread is created to
handle each user-level thread
• It provides more concurrency and solves the
problems of blocking system calls
• Managing the one-to-one model involves
more overhead, which can slow down the system
• Drawback: creating user thread requires
creating the corresponding kernel thread.
• Most implementations of this model put
restrictions on the number of threads
created

Multithreading Models: Many-to-Many
• allows the mapping of many user threads onto
many kernel threads
• Allows the OS to create a sufficient number of
kernel threads
• It combines the best features of the one-to-one
and many-to-one models
• Users have no restrictions on the number of
threads created
• Blocking kernel system calls do not block the
entire process.

Multithreading Models: Many-to-Many
• Processes can be split across multiple
processors

• Individual processes may be allocated
variable numbers of kernel threads,
depending on the number of CPUs present
and other factors

• One popular variation of the many-to-many
model is the two-tier model, which allows
either many-to-many or one-to-one
operation.
Two-tier model

Thread usage
o There are several reasons for having multiple threads:
o Many applications need multiple activities going on at once.
 By decomposing such an application into multiple sequential threads that
run in quasi-parallel, the programming model becomes simpler.
o Because they are lighter weight than processes, they are easier (i.e., faster)
to create and destroy than processes.
o Having multiple threads within an application can provide higher
performance.
• If there is substantial computing and also substantial I/O, having
threads allows these activities to overlap, thus speeding up the
application.
o Threads are useful on systems with multiple CPUs

Thread library
o Thread libraries provide programmers an API to create and
manage threads
There are three basic libraries used:
POSIX pthreads
•They may be provided as either a user or kernel library, as an extension to the
POSIX standard
• Systems like Solaris, Linux and Mac OS X implement pthreads specifications
WIN32 threads
• These are provided as a kernel-level library on Windows systems.
Java threads
• Since Java generally runs on a Java Virtual Machine, the implementation of threads is
based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or
Win32 threads depending on the system.

Thread implementation
o There are two main ways to implement a threads package: in user space and
in the kernel.
Implementing Threads in User Space
All code and data structures reside in user space.
Invoking a function in the library results in a local procedure call in user
space, not a system call.
the kernel is not aware of the existence of threads.
Advantage:
 To do thread switching, it calls a run-time system procedure, which is at least an
order of magnitude (maybe more) faster than trapping to the kernel
 They allow each process to have its own customized scheduling algorithm.
Disadvantage:
problem of how blocking system calls are implemented
Problem of page faults
no other thread in that process will ever run unless the first thread voluntarily
gives up the CPU.
Thread implementation
Implementing Threads in kernel Space
o All code and data structures reside in kernel space.
o Invoking a function in the library results in a system call.
o The kernel is aware of the existence of threads.
Advantage:
 All calls that might block a thread are implemented as system calls
 if one thread in a process causes a page fault, the kernel can easily check to
see if the process has any other runnable threads, and if so, run one of them
while waiting for the required page to be brought in from the disk.
o While kernel threads solve some problems, they do not solve all problems
o what happens when a multithreaded process forks?
In many cases, the best choice depends on what the process is planning
to do next.

Implementing Threads in kernel Space(con’t..)
o When a signal comes in, which thread should handle it?
Possibly threads could register their interest in certain signals, but there
may be two or more threads registered for the same signal.
Hybrid Implementations
o Use kernel-level threads and then multiplex user-level threads
onto some or all of the kernel threads.

Chapter 3

Memory management

3.1 Main memory

Content

o Background
o Logical versus Physical Address Space
o Swapping
o Contiguous Allocation
o Paging
o Segmentation
o Segmentation with Paging

Background
o Program must be brought into memory and placed within a process for it to
be executed.
o In a multiprogramming system, the “user” part of memory must be further
subdivided to accommodate multiple processes.
o The task of subdivision is carried out dynamically by the operating system
and is known as memory management.
o Main memory and registers are the only storage the CPU can access directly.
o Register access takes one CPU clock cycle (or less); main memory can take many
cycles
o Cache sits between main memory and CPU registers
o Memory needs to be allocated efficiently to pack as many processes into
memory as possible.
o Problem
 How to manage the relative speed of accessing physical memory?
 How to ensure correct operation, protecting the operating system from being
accessed by user processes and user processes from one another?

Background(con’t….)
To provide the above protection, we need to ensure that each
process has a separate address space:
Determine the legal addresses that the process can access.
Ensure that the process can access only these legal addresses.
This protection can be done using two registers:
• Base register: holds the physical address where the program begins in memory.
• Limit register: holds the length of the program.
Together they define the range of legal addresses.
o Hardware compares every address generated in user space with the
registers' values, ensuring memory protection.
o A trap is generated for any attempt by a user process to access memory
beyond the limit.
o Base and limit registers are loaded only by the OS, using special
privileged instructions

Background(con’t….)
o A pair of base and limit registers define the logical address
space.

Address Binding
o Usually a program resides on a disk in the form of an executable binary
file.
o It is brought to memory for execution (it may be moved between disk
and memory in the meantime).
o When a process is executed it accesses instructions and data from
memory. When execution is completed the memory space will be
freely available.
o A user program may be placed at any part of the memory.
o A user program passes through a number of steps before being
executed.
o Addresses may be represented in different ways during these steps.
 Symbolic addresses: addresses in the source program (e.g., count)
 Re-locatable addresses: (e.g., 14 bytes from the beginning of
this module)
 Absolute addresses: (e.g., 74014)

9/3/2019 68
Address Binding(con’t…)
o Address binding of instructions and data to memory addresses
can happen at three different stages.
• Compile time:
• If the memory location is known a priori, absolute code can be generated; code must be recompiled if the starting location changes.
• Load time:
• Must generate re-locatable code if memory location is not
known at compile time.
• Execution time:
•Binding delayed until runtime if the process can be moved
during its execution from one memory segment to another
•Need hardware support for address maps (e.g. base and limit
registers).

9/3/2019 69
Binding time tradeoffs

• Early binding
compiler - produces efficient code
allows checking to be done early
allows estimates of running time and space
• Delayed binding
Linker, loader
produces efficient code, allows separate compilation
portability and sharing of object code
• Late binding
VM, dynamic linking/loading, overlaying, interpreting
code less efficient, checks done at runtime
flexible, allows dynamic reconfiguration

9/3/2019 70
Logical vs. Physical Address Space
o The concept of a logical address space that is bound to a separate

and physical addresses?


What’s the difference between logical
physical address space is central to proper memory management.
• Logical Address: or virtual address - generated by CPU
• Set of logical addresses generated by a program is called logical
address space.
• Physical Address: address seen by memory unit.
• Set of physical addresses corresponds to logical space is called
physical address space
• Logical and physical addresses are the same in compile-time and load-
time address-binding schemes; logical (virtual) and physical addresses
differ in execution-time address-binding scheme.
Memory Management Unit (MMU)
—Hardware device that maps virtual to physical address.
—In MMU scheme, the value in the relocation register is added to every
address generated by a user process at the time it is sent to memory.
—The user program deals with logical addresses; it never sees the real
physical address.

9/3/2019 71
Dynamic relocation using a relocation register

9/3/2019 72
Swapping
 A process can be swapped temporarily out of memory to a
backing store and then brought back into memory for continued
execution.

• Backing store - fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images.

• Roll out, roll in - swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.

• The system maintains a ready queue of ready-to-run processes which have memory images on disk.

9/3/2019 73
Schematic View of Swapping

Swapping is done when
o The quantum expires
o A priority issue arises

Conditions for swapping
o The process to be executed is not in memory
o There is not sufficient free memory

9/3/2019 74
Contiguous Allocation
• Each process is contained in a single contiguous section of memory.
• Main memory usually divided into two partitions
 Resident Operating System, usually held in low memory with
interrupt vector.
 User processes then held in high memory.
Single partition allocation
 Relocation register scheme used to protect user processes from
each other, and from changing OS code and data.
 Relocation register contains value of smallest physical address;
limit register contains range of logical addresses - each logical
address must be less than the limit register.
 When CPU scheduler selects a process for execution, the
dispatcher loads the relocation and limit registers with the correct
values.
 Every address generated by the CPU is compared against these values; this ensures that the OS and other processes are protected from being modified by the running process.

9/3/2019 75
Contiguous Allocation(con’t…)
Multiple partition Allocation
o Hole - block of available memory; holes of various sizes are
scattered throughout memory.
o When a process arrives, it is allocated memory from a hole
large enough to accommodate it.
o Operating system maintains information about which partitions
are :
• allocated partitions
• free partitions (hole)

[Figure: four memory snapshots. The OS, process 5, and process 2 stay resident; process 8 terminates leaving a hole, process 9 is allocated into part of that hole, and process 10 later fills part of the remainder.]

9/3/2019 76
Fixed Partitioning
• Divide memory into several fixed-sized partitions.
• Each partition contains one process.
• Degree of multiprogramming is bound by the number of partitions.
• When a partition is free, a process is selected from the input queue and is loaded into the free partition.
• When the process terminates, the partition becomes available for another process.

9/3/2019 77
Fixed Partitioning

Equal size partition unequal size partition

9/3/2019 78
Fixed Partitioning
Equal-size partitions
• any process whose size is less than or equal to the partition
size can be loaded into an available partition
• if all partitions are full, the operating system can swap a
process out of a partition
• a program may not fit in a partition. The programmer must
design the program with overlays
• because all partitions are of equal size, it does not matter
which partition is used
Unequal-size partitions
• can assign each process to the smallest partition within
which it will fit
• queue for each partition
• processes are assigned in such a way as to minimize
wasted memory within a partition.

9/3/2019 79
Fixed Partitioning(con’t…)
 It is no longer used because it has various drawbacks:
1. Degree of multiprogramming is bounded by the number of partitions.
2. Internal fragmentation.
• Internal fragmentation: wasted space internal to a partition, which arises because the block of data loaded is smaller than the partition.

9/3/2019 80
Internal fragmentation

[Figure: three fixed 10 KB partitions hold Process 1 (7 KB), Process 2 (9 KB), and Process 3 (8 KB); the unused space in each partition is internal fragmentation. An input queue on disk holds jobs of 4 KB, 8 KB, 9 KB, and 7 KB.]
As shown:
• This method suffers from internal fragmentation.
• The degree of multiprogramming is limited to 3, although it could be 4.
9/3/2019 81
Dynamic Partitioning
• Partitions are of variable length and number.
• Initially, all memory is available for user processes and is considered one large block of available memory.
• When a process arrives and needs memory, we search for a hole large enough for this process using: first fit, best fit, or worst fit.
• If we find one, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.
9/3/2019 82
Dynamic Partitioning
o The operating system satisfies a request of size n from a list of free holes using one of these algorithms:
1. First fit: allocate the first hole that is big enough (fastest method).
2. Best fit: allocate the smallest hole that is big enough (produces the smallest leftover hole).
3. Worst fit: allocate the largest hole (produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach).

9/3/2019 83
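The three placement algorithms can be sketched over a free-hole list (a simplified model: holes are (start, size) pairs; the hole sizes below are assumptions chosen to match the memory maps on the following slides):

```python
def first_fit(holes, size):
    """Index of the first hole big enough, or None."""
    for i, (_, hsize) in enumerate(holes):
        if hsize >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole big enough, or None."""
    fits = [(hsize, i) for i, (_, hsize) in enumerate(holes) if hsize >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole, if it is big enough."""
    if not holes:
        return None
    hsize, i = max((hsize, i) for i, (_, hsize) in enumerate(holes))
    return i if hsize >= size else None

# Assumed free list: (start, size) pairs
holes = [(1040, 30), (1100, 25), (1150, 50)]
print(holes[first_fit(holes, 20)][0])   # 1040: first hole that fits
print(holes[best_fit(holes, 20)][0])    # 1100: smallest hole that fits
print(holes[worst_fit(holes, 20)][0])   # 1150: largest hole
```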
[Figure: under dynamic partitioning, memory holds the OS and processes 1 through 5, each allocated exactly the memory it needs; the input queue on disk holds jobs of 4 KB, 8 KB, 9 KB, and 7 KB.]
9/3/2019 84
Memory allocation using First Fit
[Figure: a 20-word executable is loaded by the loader at start address 1040, the first hole large enough. The base register holds 1040 and the limit register 20, giving the legal range 1040-1060; the rest of the hole remains free.]

9/3/2019 85
Memory allocation using Best Fit
[Figure: the 20-word executable is loaded at start address 1100, the smallest hole that is big enough; it occupies 1100-1120, leaving only a small leftover hole. The base register holds 1100 and the limit register 20.]

9/3/2019 86
Memory allocation using Worst Fit
[Figure: the 20-word executable is loaded at start address 1150, the largest available hole; it occupies 1150-1170, leaving the largest leftover hole. The base register holds 1150 and the limit register 20.]

9/3/2019 87
Dynamic Partitioning
o The degree of multiprogramming changes according to the number of processes in memory (in the ready queue).
o This method suffers from external fragmentation.
External fragmentation: the phenomenon in which memory that is external to all partitions becomes increasingly fragmented.

9/3/2019 88
External fragmentation

[Figure: memory holds the OS and processes 4, 2, 3, 9, and 20, with small free holes scattered between them; jobs of 4 KB, 8 KB, 9 KB, and 7 KB wait in the input queue on disk, but no single hole is large enough for them.]

As shown, dynamic partitioning suffers from external fragmentation.
9/3/2019 89
Compaction

[Figure: before compaction, processes 4, 2, 3, 9, and 20 are separated by scattered holes; after compaction they are moved together, merging the holes into one large hole that can store a new process from the input queue.]

Compaction:
• Is a movement of the memory contents to place all free memory in one large block sufficient to store a new process.
• It is a solution for external fragmentation, but it is expensive and is not always possible.
9/3/2019 90
Paging and Segmentation

9/3/2019 91
Basic method
o Paging is a memory-management scheme that permits the
physical-address space of a process to be noncontiguous.
o it is commonly used in most operating systems.
o Divide physical memory into fixed-sized blocks called frames.
o Divide Process into blocks of same size called pages
o size is power of 2, between 512 bytes and 8,192 bytes
o Use a page table which contains base address of each page in
physical memory.
o To run a program of size n pages, need to find n free frames
and load program
o Set up a page table to translate logical to physical addresses

9/3/2019 92
Basic method
 When a process arrives the size in pages is examined
 Each page of process needs one frame.
 If n frames are available these are allocated, and page table is updated with frame
number.

Before allocation After allocation


9/3/2019 93
Address translation
o Address generated by CPU is divided into:
 Page number (p) – used as an index into a page table which contains
base address of each page in physical memory.
 Page offset (d) – combined with base address to define the physical
memory address that is sent to the memory unit.
o Page number is an index to the page table.
o The page table contains base address of each page in physical memory.
o The base address is combined with the page offset to define the physical
address that is sent to the memory unit.
o The size of the logical address space is 2^m and the page size is 2^n addressing units.
o The higher-order m-n bits designate the page number.
o The n lower-order bits indicate the page offset.

logical address:  | page number p (m-n bits) | page offset d (n bits) |
9/3/2019 94
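The page-number/offset split above is just a bit shift and a mask. A minimal sketch, assuming 2^12 = 4096-byte pages and a made-up page table:

```python
PAGE_BITS = 12                 # n: assume 2**12 = 4096-byte pages
PAGE_SIZE = 1 << PAGE_BITS

def split(logical):
    """Return (page number p, offset d) of a logical address."""
    p = logical >> PAGE_BITS             # high-order m-n bits
    d = logical & (PAGE_SIZE - 1)        # low-order n bits
    return p, d

def translate(logical, page_table):
    """Combine the frame's base address with the page offset."""
    p, d = split(logical)
    return page_table[p] * PAGE_SIZE + d

page_table = {0: 5, 1: 2}                # hypothetical page -> frame map
print(split(0x1ABC))                     # (1, 2748): page 1, offset 0xABC
print(translate(0x1ABC, page_table))     # 2*4096 + 2748 = 10940
```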
Address Translation Architecture

9/3/2019 95
More on Paging
o In paging scheme, Pages are allocated as units.
o In this scheme there is no external fragmentation.
o But internal fragmentation is inevitable.
Example:-
― assume page size=2048 bytes.
― a process of 61442 bytes needs 30 pages plus 2 bytes. Since
units are managed in terms of pages 31 pages are allocated.
―Internal fragmentation=2046 bytes!!!!.
o In the worst case a process needs n pages plus 1 byte.
So it will be allocated n+1 pages.

Fragmentation =(page size-1 byte) ~ entire page.

9/3/2019 96
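The slide's arithmetic can be checked directly (a short sketch; the helper names are mine):

```python
import math

def pages_needed(process_size, page_size):
    """Number of frames a process of this size occupies."""
    return math.ceil(process_size / page_size)

def internal_fragmentation(process_size, page_size):
    """Wasted bytes in the last, partially filled page."""
    rem = process_size % page_size
    return 0 if rem == 0 else page_size - rem

# The slide's example: a 61442-byte process with 2048-byte pages
print(pages_needed(61442, 2048))            # 31 (30 full pages + 2 bytes)
print(internal_fragmentation(61442, 2048))  # 2046 bytes
```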
Page table
o page table is used to map virtual pages onto page frames.
o Page table can have the following important fields.
Page frame number: represented the page frame number in physical
memory.
Present/absent bit: indicate whether the page to which the entry
belongs is currently in memory or not.
 bit 1- the entry is valid and can be used.
 Bit 0 the virtual page is not currently in memory. Accessing a page table entry with this bit
set to 0 causes a page fault.
Protection bits: tell what kinds of access are permitted. In the simplest
form, this field contains 1 bit, with 0 for read/write and 1 for read only.
Modified bit: is bit set when page is written . If the page in it has been
modified, it must be written back to the disk. If it has not been

9/3/2019 97
Page table(con’t…)
Referenced bit: is set whenever a page is referenced, either for reading or
writing.
 Its value is to help the operating system choose a page to evict when a
page fault occurs and plays an important role in several of the page
replacement algorithms
Cache disable: With this bit, caching can be turned off. This feature is
important for pages that map onto device registers rather than memory.

9/3/2019 98
Implementation of Page Table
o Two options: Page table can be kept in registers or main memory
o Page table is kept in main memory due to bigger size.
• Ex: address space = 2^32 to 2^64 bytes
• Page size = 2^12 bytes
• Number of page-table entries = 2^32 / 2^12 = 2^20
• If each entry consists of 4 bytes, the page table size = 4 MB.
o Page-table base register (PTBR) points to the page table.
o Page-table length register (PTLR) indicates the size of the page table.
o PTBR and PTLR are maintained in registers.
o In this scheme every data/instruction access requires two memory accesses. One
for the page table and one for the data/instruction.
• Memory access is slowed by a factor of 2.
• Swapping might be better !
• The two memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers (TLBs)
What’s the purpose
of using TLB?
9/3/2019 99
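The page-table size quoted above follows from this arithmetic (a quick check, assuming a 32-bit address space, 4 KB pages, and 4-byte entries as in the slide):

```python
address_bits = 32        # 2**32-byte logical address space
page_bits = 12           # 2**12-byte (4 KB) pages
entry_bytes = 4          # assumed size of one page-table entry

entries = 2 ** (address_bits - page_bits)   # one entry per page
table_bytes = entries * entry_bytes

print(entries)                       # 1048576 = 2**20 entries
print(table_bytes // (1024 * 1024))  # 4 (MB): too big for registers
```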
Associative Memory
• Is associative high speed memory.
• Each entry in TLB consists two parts: key and value.
• If TLB is presented with item, item compared with all keys
simultaneously.
• If item is found corresponding field is returned.
• TLB is used for page table with page number and frame as its two
column.
• Associative memory contains only few page table entries.
Page #(p) Frame #(f)

• Address translation (p, d)


• If p is in associative register, get frame # out
• Otherwise get frame # from page table in memory

9/3/2019 100
Paging Hardware With TLB

9/3/2019 101
Effective Access Time
o Associative lookup = ε time units; assume the memory cycle time is 100 nanoseconds
o Hit ratio - percentage of times that a page number is found in the associative registers; ratio related to number of associative registers
o Example:
o Hit: 20 ns for the TLB search + 100 ns for the memory access = 120 ns
o Miss: 20 ns for the TLB search + 100 ns to get the frame number from the page table + 100 ns for the memory access = 220 ns
• Effective access time = hit ratio * hit time + miss ratio * miss time
• For e.g. a 98-percent hit ratio, we have
• Effective access time = 0.98*120 + 0.02*220
= 122 nanoseconds

9/3/2019 102
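The effective-access-time formula can be checked with a short sketch (20 ns TLB search and 100 ns memory cycle, as in the example):

```python
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    """EAT = hit_ratio * hit_time + (1 - hit_ratio) * miss_time.
    Hit:  TLB search + one memory access           = 120 ns
    Miss: TLB search + page-table lookup + access  = 220 ns"""
    hit_time = tlb_ns + mem_ns
    miss_time = tlb_ns + 2 * mem_ns
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(0.98))   # ~122 ns, as in the slide
print(effective_access_time(0.80))   # ~140 ns: a worse hit ratio hurts
```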
Segmentation
o Memory-management scheme that supports user view of
memory.
o A program is a collection of segments. A segment is a logical
unit such as:
main program,
procedure,
function,
local variables, global variables,
common block,
stack,
symbol table, arrays

9/3/2019 103
User’s View of a Program

9/3/2019 104
Logical View of Segmentation
[Figure: segments 1, 2, 3, and 4 of the user's logical view are mapped by the segment table to scattered, non-contiguous regions of physical memory.]

user space                physical memory space

9/3/2019 105
Segmentation Architecture
o Logical address consists of a two tuple
 <segment-number, offset>
o Segment Table
• Maps two-dimensional user-defined addresses into one-
dimensional physical addresses.
• Each table entry has
• Base - contains the starting physical address where the
segments reside in memory.
• Limit - specifies the length of the segment.
• Segment-table base register (STBR) points to the segment table's location in memory.
• Segment-table length register (STLR) indicates the number of segments used by a program.
 Note: segment number s is legal if s < STLR.

9/3/2019 106
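Address translation through the segment table can be sketched as follows (the segment table contents are hypothetical):

```python
def translate(s, offset, segment_table, stlr):
    """Translate <segment-number, offset> into a physical address.
    segment_table maps s -> (base, limit); s must be below STLR and
    the offset must be below the segment's limit, or the MMU traps."""
    if s >= stlr:
        raise MemoryError(f"trap: segment {s} >= STLR {stlr}")
    base, limit = segment_table[s]
    if offset >= limit:
        raise MemoryError(f"trap: offset {offset} beyond limit {limit}")
    return base + offset

# Hypothetical segment table: segment -> (base, limit)
table = {0: (1400, 1000), 1: (6300, 400)}
print(translate(1, 53, table, stlr=2))   # 6300 + 53 = 6353
```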
Segmentation Architecture (cont.)
o Relocation is dynamic - by segment table
o Sharing
―Code sharing occurs at the segment level.
―Shared segments must have same segment number.
o Protection
• protection bits associated with segments
• read/write/execute privileges
• array in a separate segment - hardware can check for illegal
array
o Allocation - dynamic storage allocation problem
―use best fit/first fit, may cause external fragmentation.

9/3/2019 107
Segmented Paged Memory
o Segment-table entry contains not the base address of the
segment, but the base address of a page table for the
segment.
―Overcomes external fragmentation problem of segmented memory.
―Paging also makes allocation simpler; time to search for a suitable
segment (using best-fit etc.) reduced.
―Introduces some internal fragmentation and table space overhead.

9/3/2019 108
Segmented Paged Memory

9/3/2019 109
Chapter 4: Process Management
Deadlocks

9/3/2019 110
Deadlock
• System Model
• Deadlock Characterization
• Methods for handling Deadlock
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection
• Recovery from Deadlock

9/3/2019 111
System Model
• A system contains a finite number of resources (R1, R2, . . ., Rm) to be distributed among competing processes
• The resources are partitioned into several types (e.g. files, I/O devices, CPU cycles, memory), each consisting of a number of identical instances
• A process must request a resource before using it and release it after making use
of it. Each process utilizes a resource as follows:
• Request
• A process requests for an instance of a resource type. If the resource is
free, the request will be granted. Otherwise the process should wait
until it acquires the resource
• Use
• The process uses the resource for its operations
• Release
• The process releases the resource

9/3/2019 112
Deadlock
• Deadlock can be defined as a permanent blocking of processes that either
compete for system resources or communicate with each other
• The set of blocked processes each hold a resource and wait to acquire a
resource held by another process in the set
• All deadlocks involve conflicting needs for resources by two or more
processes
• A set of processes or threads is deadlocked when each process or thread is
waiting for a resource to be freed which is controlled by another process
• Example 1
• Suppose a system has 2 disk drives
• If P1 is holding disk 2 and P2 is holding disk 1 and if P1 requests for disk
1 and P2 requests for disk 2, then deadlock occurs

9/3/2019 113
Traffic gridlock is an everyday
example of a deadlock situation.
When two trains approach each other at a crossing, both shall
come to a full stop and neither shall start up again until the
other has gone

9/3/2019 114
Deadlock characterization
Deadlock can arise if four conditions hold simultaneously in
a system:
1. Mutual exclusion: only one process at a time can use a resource. No process
can access a resource unit that has been allocated to another process
2. Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes.
3. No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task.
4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that:
• P0 is waiting for a resource that is held by P1,
• P1 is waiting for a resource that is held by P2, …,
• Pn–1 is waiting for a resource that is held by Pn, and
• Pn is waiting for a resource that is held by P0

9/3/2019 115
Resource Allocation Graph
• Deadlock can be better described by using a directed graph called
resource allocation graph
• The graph consists of a set of vertices V and a set of edges E
V is partitioned into two types:
• P = {P1, P2, …, Pn}, the set consisting of all the processes in the
system.
• R = {R1, R2, …, Rm}, the set consisting of all resource types in
the system.
• request edge - directed edge Pi → Rj
• assignment edge - directed edge Rj → Pi
• If a Resource Allocation Graph contains a cycle, then a deadlock
may exist

9/3/2019 116
Resource Allocation Graph /RAG (cont.)
• Diagrammatically, processes and resources in RAG are represented as follow:

• Process

• Resource Type with 4 instances

Pi Rj
• Pi requests instance of Rj
Pi Rj
• Pi is holding an instance of Rj

9/3/2019 117
Example of Resource Allocation Graph

9/3/2019 118
Graph With A Cycle But No Deadlock
Basic Facts
• If the graph contains no cycles, then there is no deadlock.
• If the graph contains a cycle, there are two possible situations:
• if there is only one instance per resource type, then a deadlock has occurred
• if there are several instances per resource type, the cycle does not necessarily mean a deadlock

9/3/2019 119
Example of Resource Allocation Graph
• The RAG shown here tells us about the following
situation in a system:
• P= {P1, P2, P3}
• R= {R1,R2, R3, R4}
•E ={P1R1,P2R3, R1P2, R2P1,R3P3}

• The process states:
• P1 is holding an instance of R2 and is waiting for an instance of R1
• P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of R3
• P3 is holding an instance of R3

9/3/2019 120
Example of Resource Allocation Graph
• Resource Allocation Graph With a Deadlock

• There are two cycles in this graph:
• P1→R1→P2→R3→P3→R2→P1
• P2→R3→P3→R2→P2
• Processes P1, P2 and P3 are deadlocked:
• P1 is waiting for P2 to release R1
• P2 is waiting for R3 held by P3
• P3 is waiting for either P1 or P2 to release R2

9/3/2019 121
Methods for handling Deadlocks
• Deadlock problems can be handled in one of the following 3 ways:

1. Use a protocol that prevents or avoids deadlock by ensuring that the system will never enter a deadlock state; deadlock prevention and deadlock avoidance schemes are used

2. Allow the system to enter a deadlock state and then recover

3. Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX

9/3/2019 122
Deadlock Prevention
• By ensuring at least one of the necessary conditions for deadlock will not hold,
deadlock can be prevented.
 This is mainly done by restraining how requests for resources can be
made

• Deadlock prevention methods fall into two classes:

• An indirect method prevents the occurrence of one of the first three necessary conditions (items 1 through 3)
• A direct method prevents the occurrence of a circular wait (item 4)

9/3/2019 123
Deadlock Prevention (contd.)
1. Mutual Exclusion – This is not required for sharable resources; however to prevent a
system from deadlock, the mutual exclusion condition must hold for non-sharable
resources

2. Hold and Wait – in order to prevent the occurrence of this condition in a system, we
must guarantee that whenever a process requests a resource, it does not hold any other
resources. Two protocols are used to implement this:
1. Require a process to request and be allocated all its resources before it begins
execution or
2. Allow a process to request resources only when the process has none
• Both protocols have two main disadvantages:
o Since resources may be allocated but not used for a long period, resource utilization
will be low
o A process that needs several popular resources has to wait indefinitely because one
of the resources it needs is allocated to another process. Hence starvation is
possible.

9/3/2019 124
Deadlock Prevention (contd.)
3. No Preemption
• If a process holding certain resources is denied further request, that process
must release its original resources allocated to it
• If a process requests a resource allocated to another process waiting for some
additional resources, and the requested resource is not being used, then the
resource will be preempted from the waiting process and allocated to the
requesting process
• Preempted resources are added to the list of resources for which the process
is waiting
• Process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting
• This approach is practical for resources whose state can easily be saved and later restored

9/3/2019 125
Deadlock Prevention (contd.)
4. Circular Wait
• A linear ordering of all resource types is defined, and each process requests resources in an increasing order of enumeration.

• So, if a process is initially allocated instances of resource type R, it can subsequently request only instances of resource types that follow R in the ordering.

9/3/2019 126
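The resource-ordering protocol can be illustrated with locks (a minimal sketch; the rank assignments are an assumption for the example):

```python
import threading

# Assign each resource type a fixed rank in the global ordering;
# the ranks here (e.g. 1=disk, 2=printer, 3=tape) are assumed examples.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(needed):
    """Acquire the locks for the needed ranks strictly in increasing
    order, so no circular wait can ever form among processes."""
    order = sorted(needed)
    for rank in order:
        locks[rank].acquire()
    return order

def release_all(held):
    # Release in reverse order of acquisition.
    for rank in reversed(held):
        locks[rank].release()

held = acquire_in_order({3, 1})   # rank 1 is always taken before rank 3
print(held)                       # [1, 3]
release_all(held)
```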
Deadlock Avoidance
• Deadlock avoidance scheme requires each process to declare the maximum
number of resources of each type that it may need in advance

• Having this full information about the sequence of requests and release of
resources, we can know whether or not the system is entering unsafe state

• The deadlock-avoidance algorithm dynamically examines the resource-


allocation state to ensure that there can never be a circular-wait condition
• Resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes
• A state is safe if the system can allocate resources to each process in some
order avoiding a deadlock.
• A deadlock state is an unsafe state.

9/3/2019 127
Deadlock avoidance: Safe State
Basic Facts
• If a system is in a safe state,
then there are no deadlocks.

• If a system is in unsafe state,


then there is a possibility of
deadlock

• Deadlock avoidance method


ensures that a system will
never enter an unsafe state

9/3/2019 128
Deadlock Avoidance Algorithms
• Based on the concept of safe state, we can define
algorithms that ensures the system will never deadlock.

• If there is a single instance of a resource type,


• Use a resource-allocation graph

• If there are multiple instances of a resource type,


• Use the Dijkstra’s banker’s algorithm

9/3/2019 129
Deadlock Avoidance Algorithms (contd.)
Resource-Allocation Graph Scheme
• A new type of edge (Claim edge), in addition to the request and assignment edge is
introduced.
• Claim edge Pi  Rj indicates that process Pi may request resource Rj at some point in the
future. The edge resembles a request edge but is represented by a dashed line in the
graph
• Claim edge is converted to request edge when a process requests a resource
• Request edge is converted to an assignment edge when the resource is allocated to
the process
• When a resource is released by a process, assignment edge reconverts to a claim
edge
• If no cycle exists in the allocation, then system is in safe state otherwise the system is
in unsafe state
• Resources must be claimed a priori in the system
• i.e, before a process starts executing, all its claim edge must show up in the
allocation graph

9/3/2019 130
Deadlock Avoidance Algorithms (contd.)
Resource-Allocation Graph Algorithm
• Suppose that process Pi requests a resource Rj

• The request can be granted only if converting the request edge to an


assignment edge does not result in the formation of a cycle in the
resource allocation graph

• If we suppose P2 requests R2. We can not


allocate it since it will create a cycle.

9/3/2019 131
Deadlock Avoidance Algorithms (contd.)
Resource-Allocation Graph algorithm

• If P1 requests for R2 and P2 requests


for R1, then deadlock will occur

Unsafe State Resource-Allocation

9/3/2019 132
Deadlock Avoidance Algorithms (contd.)
Banker’s Algorithm
• This algorithm is used when there are multiple instances of resources

• When a process enters a system, it must declare the maximum number


of each instance of resource types it may need

• The number, however, may not exceed the total number of resources of that type in the system

• When a process requests a resource it may have to wait

• When a process gets all its resources it must return them in a finite
amount of time

9/3/2019 133
Deadlock Avoidance Algorithms (contd.)
Banker’s Algorithm
• The following data structures are used in the algorithm:
Let n = number of processes, and
m = number of resources types.
• Available: Vector of length m.
• If available [j] = k, there are k instances of resource type Rj available.
• Max: n x m matrix.
• If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj.
• Allocation: n x m matrix.
• If Allocation[i,j] = k then Pi is currently allocated k instances of Rj.
• Need: n x m matrix.
• If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task

Need [i,j] = Max[i,j] – Allocation [i,j]

9/3/2019 134
Deadlock Avoidance Algorithms (contd.)
Safety Algorithm
• It is used to identify whether or not a system is in a safe state. The algorithm
can be described as follow:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish [i] = false for i = 0, 1, …, n- 1.
2. Find an i such that both:
(a) Finish [i] = false
(b) Needi  Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish [i] = true
go to step 2.
4. If Finish [i] == true for all i, then the system is in a safe state.

9/3/2019 135
Deadlock Avoidance Algorithms (contd.)
Banker’s Algorithm
Examples of Banker’s algorithm:
• Assume a system has
• 5 processes P0 through P4;
• 3 resource types:
A (10 instances), B (5instances), and C (7 instances)
• Snapshot at time T0:
     Allocation   Max     Available
     A B C        A B C   A B C
P0   0 1 0        7 5 3   3 3 2
P1   2 0 0        3 2 2
P2   3 0 2        9 0 2
P3   2 1 1        2 2 2
P4   0 0 2        4 3 3

9/3/2019 136
Deadlock Avoidance Algorithms (contd.)
Banker’s Algorithm
Example (contd.)
• The content of the matrix Need is defined to be Max – Allocation
     Need
     A B C
P0   7 4 3
P1   1 2 2
P2   6 0 0
P3   0 1 1
P4   4 3 1
• The system is in a safe state since the sequence < P1, P3, P4, P2, P0>
satisfies safety criteria

9/3/2019 137
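The safety algorithm from the previous slide can be run on this snapshot (a sketch in Python; note that more than one safe sequence may exist: a simple lowest-index-first scan finds <P1, P3, P0, P2, P4>, while the slide lists <P1, P3, P4, P2, P0>, and both are safe):

```python
def is_safe(available, max_need, allocation):
    """Safety algorithm: repeatedly find a process whose Need fits in
    Work, pretend it runs to completion, and reclaim its allocation.
    Returns (True, safe sequence) or (False, None)."""
    n, m = len(allocation), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work, finish, sequence = list(available), [False] * n, []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, None           # no runnable process remains
    return True, sequence

# Snapshot from the example above (resources A, B, C)
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_need   = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
print(is_safe([3,3,2], max_need, allocation))   # (True, [1, 3, 0, 2, 4])
```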
Deadlock Detection
• If a system does not implement either deadlock
prevention or avoidance, deadlock may occur. Hence the
system must provide

• A deadlock detection algorithm that examines the state of the


system if there is an occurrence of deadlock

• An algorithm to recover from the deadlock

9/3/2019 138
Deadlock Detection:
1. Single Instance of Each Resource Type
• If there are single instances of each resources in a system, then an algorithm
that uses a type of resource allocation graph called wait-for graph will be used
• The wait-for graph is obtained from the resource allocation graph by removing
the resource nodes and collapsing the corresponding edges
• An edge Pi → Pj in a wait-for graph indicates that Pi is waiting for Pj to release a resource that Pi needs

• To detect deadlocks, the system needs to periodically invoke an algorithm that


searches for a cycle in the graph. If there is a cycle, there exists a deadlock.
• An algorithm to detect a cycle in a graph requires an order of n2 operations,
where n is the number of vertices in the graph.

9/3/2019 139
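Cycle detection in a wait-for graph can be sketched with a depth-first search (the process IDs below are hypothetical):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph with a depth-first search.
    wait_for maps each process to the processes it waits on; a cycle
    in this graph means the processes involved are deadlocked."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True               # back edge: cycle found
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color.get(p, WHITE) == WHITE and dfs(p) for p in wait_for)

print(has_cycle({1: [2], 2: [3], 3: [1]}))   # True  -> deadlock
print(has_cycle({1: [2], 2: [3], 3: []}))    # False -> no deadlock
```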
Deadlock Detection:
2. Several Instances of a Resource Type
• When there are multiple instances of a resource type in a resource allocation
system, the wait-for graph is not applicable. Hence, a deadlock detection
algorithm is used
• The algorithm uses several data structures similar to the ones in banker’s
algorithm
• Available: A vector of length m indicates the number of available resources of
each type.
• Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process.
• Request: An n x m matrix indicates the current request of each process. If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.

9/3/2019 140
Deadlock Detection Algorithm Usage
• When, and how often, to invoke the detection algorithm depends on:
1. How often a deadlock is likely to occur?
2. How many processes will be affected (need to be rolled back)?

• If deadlock occurs frequently, then the algorithm is invoked frequently,


o Resources allocated to deadlocked processes will be idle until the deadlock can be
broken
o The number of processes in the deadlock may increase
• If the algorithm is invoked for every resource request not granted, it will incur a
computation time overhead on the system

9/3/2019 141
Recovery from Deadlock
• Once a deadlock has been detected, recovery strategy is needed. There are two possible
recovery approaches:
• Process termination
• Resource preemption
Process Termination
• Abort all deadlocked processes
• Abort one process at a time until the deadlock cycle is eliminated

• In which order should we choose a process to abort? Choose the process with
• Least amount of processor time consumed so far
• Least amount of output produced so far
• Most estimated time remaining
• Least total resources allocated so far
• Lowest priority

Recovery from Deadlock
Resource Preemption
• In this recovery strategy, we successively preempt resources and allocate them to another
process until the deadlock is broken
• While implementing this strategy, there are three issues to be considered
• Selecting a victim – which resources and process should be selected to minimize cost
just like in process termination. The cost factors may include parameters like the
number of resources a deadlocked process is holding, number of resources it used so
far
• Rollback – if a resource is preempted from a process, then the process cannot
continue its normal execution
• The process must be rolled back to some safe state and restarted from there
• Starvation – the same process may be picked as victim several times; as a result,
starvation may occur. The best solution to this problem is to allow a process to
be picked as a victim only a limited, finite number of times. This can be done by
including the number of rollbacks in the cost factor

Chapter 5: CPU
scheduling

9/3/2019 Wolkite Unveristy OS(Seng2043) 144


Contents:

 Introduction to scheduling
 Categories of scheduling algorithm
 CPU scheduling
 Scheduling criteria
 Scheduling algorithms
 Multiprocessor scheduling
 Threads scheduling



Introduction to scheduling

 When a computer is multiprogrammed, it frequently has
multiple processes competing for the CPU at the same time.
 When more processes are there in the ready state than the
number of available CPUs, the operating system must decide
which process to run first.
 The part of the operating system that makes the choice is called
the scheduler and the algorithm it uses is called the scheduling
algorithm.



Process scheduling queues
o The objective of multi-programming
• To have some process running at all times.
o Timesharing: Switch the CPU frequently that users can interact
the program while it is running.
o If there are many processes, the rest have to wait until CPU is free.
o Scheduling is to decide which process to execute and when.
o Scheduling queues:-Several queues used for scheduling:
a) Job queue – set of all processes in the system.
b) Ready queue – set of all processes residing in main memory,
ready and waiting to execute.
c) Device queues – set of processes waiting for an I/O device.
• Each device has its own queue.
o Process migrates between the various queues during its life time.



schedulers
o A process in a job-queue is selected in some fashion and
assigned to memory/CPU.
o The selection process is carried out by a scheduler. Schedulers
are of three types:
1. Long-term scheduler (or job scheduler) – selects which
processes should be brought into the ready queue from the
job queue (determine the degree of multi-programming)

2. Short-term scheduler (or CPU scheduler) – selects which
process should be executed next and allocates the CPU

3. Medium-term scheduler (or swapper) – swaps processes out of
memory (the ready queue) and swaps them in again
later (it decreases the degree of multiprogramming).
[Figure: Passive programs on disk are opened into the job queue; the long-term
scheduler selects a process from the job queue into memory (ready queue); the
short-term scheduler assigns the CPU to a process from the ready queue; the
medium-term scheduler swaps a process from the ready queue back out to the job
queue.]
Degree of multi-programming is the number of processes
that are placed in the ready queue waiting for execution
by the CPU.

Process 1
Process 2
Process 3 Degree of
Process 4 Multi-Programming
Process 5

Memory



• Since the long-term scheduler selects which processes are
brought into the ready queue, it increases the degree
of multiprogramming.

Long Term
Process 1
Disk Scheduler Process 2
Process 3
Degree of
Process 4 Multi-Programming
Process 5

Memory
Job Queue



Since the medium-term scheduler picks some processes from
the ready queue and swaps them out of memory, it decreases
the degree of multiprogramming.

Medium Term
Process 1
Disk Scheduler Process 2
Process 3
Degree of
Process 4 Multi-Programming
Process 5

Memory
Job Queue



Categories of Scheduling Algorithms
o Scheduling algorithms can be divided into two categories
with respect to how they deal with clock interrupts.
 Preemptive scheduling: allows the currently executing process to
be released from the CPU when another process (e.g., one with a
higher priority) arrives and needs execution.

 Non-preemptive scheduling: once the CPU has been allocated to a
process, the process keeps the CPU until it releases it (by
terminating or by blocking).



Preemptive
Scheduling

CPU

Non- Preemptive
Scheduling

CPU
CPU Scheduling



CPU scheduling
 CPU scheduling is the method of selecting a process from the ready
queue to be executed by the CPU whenever the CPU becomes idle.
o CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates



Scheduling Criteria
CPU Utilization:
 The percentage of time the CPU is busy out of the total time (time the
CPU is busy + time it is idle). Hence, it measures the benefit obtained
from the CPU.

 To maximize utilization, keep the CPU as busy as possible.

 CPU utilization ranges from 40% (for lightly loaded systems) to 90% (for
heavily loaded systems). (Explain why CPU utilization cannot reach 100%:
because of the context switches between active processes.)

 CPU Utilization = (Time CPU Busy / Total Time) × 100



System Throughput:
 The number of processes that are completed per time unit (e.g., per hour)

Turnaround time:
 For a particular process, it is the total time needed for process execution
(from the time of submission to the time of completion).
 It is the sum of process execution time and its waiting times (to get memory,
perform I/O, ….).

Waiting time:
 The waiting time for a specific process is the sum of all periods it spends
waiting in the ready queue.

Response time.
 It is the time from the submission of a process until the first response is
produced (the time the process takes to start responding).



It is desirable to:

 Maximize:
 CPU utilization.
 System throughput.

 Minimize:
 Turnaround time.
 Waiting time.
 Response time.



Scheduling Algorithms
First Come First Serviced (FCFS) algorithm
 The process that comes first will be executed first.
 Not preemptive (the first job is allowed to run as long as it
wants).
 It is easy to understand and equally easy to program.
 With this algorithm, a single linked list keeps track of all
ready processes.
Weakness
 A single process may monopolize the CPU time.
 It is not good for time-sharing tasks.
 FCFS discriminates against short jobs, since any
short job arriving after a long job will have a
longer waiting time.
First Come First Serviced (FCFS) algorithm(con’t..)

Ready queue

FCFS Scheduling

CPU



Consider the following set of processes, with the length of the CPU burst
(execution) time given in milliseconds. The processes arrive in the order
P1, P2, P3, all at time 0.

Process   Burst Time
P1        24
P2        3
P3        3

 Gantt chart: | P1 (0-24) | P2 (24-27) | P3 (27-30) |

 Waiting times and turnaround times for each process are:

Process                 P1   P2   P3
Waiting Time (WT)       0    24   27
Turnaround Time (TAT)   24   27   30

 Note: TAT = WT + execution time.
 Hence, average waiting time = (0+24+27)/3 = 17 milliseconds
Repeat the previous example, assuming that the processes arrive in the order
P2, P3, P1, all at time 0.

Process   Burst Time
P1        24
P2        3
P3        3

 Gantt chart: | P2 (0-3) | P3 (3-6) | P1 (6-30) |

 Waiting times and turnaround times for each process are:

Process                 P1   P2   P3
Waiting Time (WT)       6    0    3
Turnaround Time (TAT)   30   3    6

 Hence, average waiting time = (6+0+3)/3 = 3 milliseconds
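The two FCFS examples can be reproduced with a short sketch (process names and burst times as in the examples; all arrivals at t=0 are assumed):

```python
def fcfs(bursts):
    """Waiting and turnaround times for FCFS, all processes arriving at t=0.

    bursts: list of (name, burst_time) in arrival order.
    """
    t, wt, tat = 0, {}, {}
    for name, burst in bursts:
        wt[name] = t            # waited until the CPU became free
        t += burst
        tat[name] = t           # turnaround = completion time (arrival is 0)
    return wt, tat

wt, _ = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(sum(wt.values()) / len(wt))       # 17.0
wt2, _ = fcfs([("P2", 3), ("P3", 3), ("P1", 24)])
print(sum(wt2.values()) / len(wt2))     # 3.0
```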
Shortest-Job-First (SJF) scheduling
• When CPU is available, it will be assigned to the process with
the smallest CPU burst (non preemptive).
 If two processes have the same next CPU burst, FCFS is used.
 Shortest job first is provably optimal when all the jobs are
available simultaneously.
 Mainly used in the long-term-scheduler.
[Figure: SJF scheduling — waiting processes with execution times 10, 5, 18, 7
are reordered so that they reach the CPU in the order 5, 7, 10, 18. Numbers
indicate the process execution times.]
Consider the following set of processes, with the length of the CPU burst time
given in milliseconds. The processes arrive in the order P1, P2, P3, P4, all
at time 0.

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

1. Using FCFS
 Gantt chart: | P1 (0-6) | P2 (6-14) | P3 (14-21) | P4 (21-24) |

 Waiting times and turnaround times for each process are:

Process                 P1   P2   P3   P4
Waiting Time (WT)       0    6    14   21
Turnaround Time (TAT)   6    14   21   24

 Hence, average waiting time = (0+6+14+21)/4 = 10.25 milliseconds
2. Using SJF

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

 Gantt chart: | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

 Waiting times and turnaround times for each process are:

Process                 P1   P2   P3   P4
Waiting Time (WT)       3    16   9    0
Turnaround Time (TAT)   9    24   16   3

 Hence, average waiting time = (3+16+9+0)/4 = 7 milliseconds
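With all processes arriving at t=0, non-preemptive SJF is just FCFS over the bursts sorted in increasing order, as this sketch (using the same data as the example) shows:

```python
def sjf(bursts):
    """Non-preemptive SJF, all arrivals at t=0: serve bursts shortest-first.

    bursts: list of (name, burst_time); returns waiting time per process.
    """
    order = sorted(bursts, key=lambda p: p[1])   # shortest burst first
    t, wt = 0, {}
    for name, burst in order:
        wt[name] = t
        t += burst
    return wt

wt = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(wt)                              # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(wt.values()) / len(wt))      # 7.0
```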
Shortest-Remaining-Time-First (SRTF)
 It is a preemptive version of the Shortest Job First
 It allows a new process to gain the processor if its
execution time less than the remaining time of the
currently processing one.
 When a new job arrives, its total time is compared to
the current process' remaining time.
 If the new job needs less time to finish than the
current process, the current process is suspended and
the new job started

[Figure: SRTF scheduling — a newly arriving process preempts the running one
if its execution time is less than the current process's remaining time.]
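A minimal tick-by-tick SRTF simulation might look like this; the arrival times and bursts in the usage line are illustrative assumptions, not taken from the slides:

```python
def srtf(procs):
    """Shortest-Remaining-Time-First (preemptive SJF) sketch.

    procs: list of (name, arrival_time, burst_time).
    Runs one time unit at a time, always picking the ready process with the
    least remaining work; returns each process's completion time.
    """
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: at for name, at, _ in procs}
    t, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:                        # CPU idle until the next arrival
            t = min(arrival[n] for n in remaining)
            continue
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1                    # run the chosen process one tick
        t += 1
        if remaining[n] == 0:
            done[n] = t
            del remaining[n]
    return done

# P2 (burst 4) arrives at t=1 and preempts P1 (remaining 7).
print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
```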


Round Robin scheduling
• Round Robin is one of the oldest, simplest, fairest, and most widely
used algorithms.
 Allocate the CPU for one Quantum time (also called time slice)
Q to each process in the ready queue.
 If the process has blocked or finished before the quantum has
elapsed, the CPU switching is done when the process blocks, of
course.
 This scheme is repeated until all processes are finished.
 A new process is added to the end of the ready queue.
 setting the quantum too short causes too many process
switches and lowers the CPU efficiency, but setting it too long
may cause poor response to short interactive requests.



Round Robin scheduling(con’t..)
• A quantum of around 20-50 msec is often a reasonable
compromise
• RR—treats all jobs equally (giving them equal bursts of CPU
time) so short jobs will be able to leave the system faster since
they will finish first.

Round Robin Scheduling

Q Q
Q Q

CPU
Consider the following set of processes, with the length of the CPU burst time
given in milliseconds. The processes arrive in the order P1, P2, P3, all at
time 0. Use RR scheduling with Q=2 and Q=4.

Process   Burst Time
P1        24
P2        3
P3        3

RR with Q=4

 Gantt chart: | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-30) |

 Waiting times and turnaround times for each process are:

Process                 P1   P2   P3
Waiting Time (WT)       6    4    7
Turnaround Time (TAT)   30   7    10

 Hence, average waiting time = (6+4+7)/3 = 5.66 milliseconds
RR with Q=2

Process   Burst Time
P1        24
P2        3
P3        3

 Gantt chart: | P1 (0-2) | P2 (2-4) | P3 (4-6) | P1 (6-8) | P2 (8-9) |
              | P3 (9-10) | P1 (10-30) |

 Waiting times and turnaround times for each process are:

Process                 P1   P2   P3
Waiting Time (WT)       6    6    7
Turnaround Time (TAT)   30   9    10

 Hence, average waiting time = (6+6+7)/3 = 6.33 milliseconds
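Both RR examples can be checked with a small simulation (all arrivals at t=0 assumed; waiting time is computed as completion time minus burst time):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin with all arrivals at t=0; returns waiting time per process."""
    burst = dict(bursts)
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)
    t, wt = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)              # back to the tail of the ready queue
        else:
            wt[name] = t - burst[name]      # waiting = completion - burst
    return wt

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], 4))
print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], 2))
```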
Explain why decreasing the quantum time slows down the execution of the
processes.

Sol:
 Because decreasing the quantum time will increase the number of context
switches (the time needed by the processor to switch between the
processes in the ready queue), which will increase the time needed to
finish the execution of the active processes; hence, this slows down
the system.



Priority scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority (smallest
integer).
 It is often convenient to group processes into priority classes and use
priority scheduling among the classes but round-robin scheduling
within each class.
 There are two types:
 Preemptive
 nonpreemptive
[Figure: Priority scheduling — waiting processes with priorities 10, 5, 18, 7
are reordered so that they reach the CPU in the order 5, 7, 10, 18. Numbers
indicate the process priorities.]
Problems with Priority scheduling

 Problem  Starvation (infinite blocking)– low priority


processes may never execute
 Solution  Aging – as time progresses increase the priority of
the process

[Figure: without aging, a very low priority process starves behind
higher-priority processes; with aging, its priority gradually increases
until it eventually gets the CPU.]
Consider the following set of processes, with the length of the CPU burst time
given in milliseconds. The processes arrive in the order P1, P2, P3, P4, P5,
all at time 0.

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

1. Using priority scheduling
 Gantt chart: | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

 Waiting times and turnaround times for each process are:

Process                 P1   P2   P3   P4   P5
Waiting Time (WT)       6    0    16   18   1
Turnaround Time (TAT)   16   1    18   19   6

 Hence, average waiting time = (6+0+16+18+1)/5 = 8.2 milliseconds
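The priority example can be reproduced with a sketch of non-preemptive priority scheduling (smaller number = higher priority, all arrivals at t=0):

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all arrivals at t=0.

    procs: list of (name, burst, priority); the smallest priority number is
    the highest priority. Returns waiting time per process.
    """
    t, wt = 0, {}
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        wt[name] = t
        t += burst
    return wt

wt = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                        ("P4", 1, 5), ("P5", 5, 2)])
print(wt)                              # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(wt.values()) / len(wt))      # 8.2
```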
Multi-level queuing scheduling
• Ready queue is partitioned into separate queues:
• foreground (interactive)
• background (batch)
• Each queue has its own scheduling algorithm,
• foreground – RR
• background – FCFS
• Scheduling must be done between the queues.
•Fixed priority scheduling: (i.e., serve all from foreground then
from background). Possibility of starvation.
•Time slice: each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR;20% to background in FCFS
 There are two types:
Without feedback: processes cannot move between queues.
With feedback: processes can move between queues.
Multi-level queuing without feedback:
• Divide the ready queue into several queues.
• Each queue has a specific priority and its own scheduling algorithm
(FCFS, …).

[Figure: several queues ordered from the high priority queue down to the
low priority queue.]
Multi-level queuing with feedback:
 Divide ready queue into several queues.
 Each queue has specific Quantum time as shown in figure.
 Allow processes to move between queues.

Queue 0

Queue 1

Queue 2

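A minimal sketch of a multilevel feedback queue: the per-level quanta and the demote-on-quantum-expiry rule below are assumptions chosen for illustration, not the only possible policy:

```python
from collections import deque

def mlfq(bursts, quanta=(2, 4, 8)):
    """Multilevel feedback queue sketch: a process that uses up its quantum
    is demoted one level; the last level runs processes to completion.

    bursts: list of (name, burst); quanta: time slice per level (assumed).
    Returns the order in which processes finish.
    """
    levels = [deque() for _ in quanta]
    for name, burst in bursts:
        levels[0].append((name, burst))     # new processes enter the top queue
    finished = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)   # highest nonempty
        name, rem = levels[lvl].popleft()
        run = min(quanta[lvl], rem) if lvl < len(levels) - 1 else rem
        rem -= run
        if rem == 0:
            finished.append(name)
        else:                               # quantum expired: demote one level
            levels[min(lvl + 1, len(levels) - 1)].append((name, rem))
    return finished

# Short jobs finish in the upper levels; the long job sinks to the bottom.
print(mlfq([("P1", 3), ("P2", 20), ("P3", 1)]))   # ['P3', 'P1', 'P2']
```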


Multiple-Processor Scheduling
•CPU scheduling more complex when multiple CPUs are available.
•In Symmetric Multiprocessors systems all CPUs can perform
scheduling independently(complex task).
•Asymmetric multiprocessor systems :- only one processor(Master
CPU) handles all the scheduling tasks.
•Asymmetric multiprocessing – only one processor accesses the system
data structures, alleviating the need for data sharing.
•Load sharing :Load must be fairly distributed among processors to
maximize processors use. Load balancing is especially important when
each processor has its own private queue.
•Two general approaches.
• push migration: keeping the load balanced by pushing processes
from an overloaded processor to an idle one.
• pull migration: an idle processor pulls processes from an
overloaded one.
Thread scheduling
•Recall that there are two types of threads.
User level threads and kernel level threads.
•On OS systems supporting them, it is kernel-level-threads -not
processes- that are scheduled by the operating system.
•User level-threads are managed by the thread library, and the kernel
is unaware of them.
•To run on CPU, user-level threads must be mapped to an associated
kernel level thread
•On systems implementing the many-to-one and many-to-many models, the
thread library schedules user-level threads on the available resources;
this scheme is called process-contention scope (PCS), since threads of
the same process compete for the CPU.
•To decide which kernel thread to schedule on the CPU, the kernel uses
system-contention scope (SCS). Competition for the CPU with SCS takes
place among all threads in the system. Systems using the one-to-one
model (such as Windows XP, Solaris 9, and Linux) use only SCS.
Chapter 7
File system
contents
o File concept
o File naming
o File type
o File access
o File attribute
o File operation
o File structure
o Directory
o Directory structure
o Directory operation
o File system implementation
o Implementing directory
File concept
o All computer applications need to store and retrieve
information.
o While a process is running, it can store a limited
amount of information within its own address space.
o But the following requirements yields for long-term
information storage:
 It must be possible to store a very large amount of
information.
 The information must survive the termination of the
process using it.
 Multiple processes must be able to access the
information concurrently.
o Magnetic disks have been used for years for this long-
term storage.
File concept (con’t…)
o Disks support two operations: read block and write
block.
o With these two operations one could, in principle,
solve the long-term storage problem.
o Just a few of the questions that quickly arise are:
 How do you find information?
 How do you keep one user from reading another
user's data?
 How do you know which blocks are free?
o To solve these problems, the operating system uses
a new abstraction called the file: a logical unit of
information created by processes.
o As a whole, the part of the operating system
dealing with files is known as the file system.
File naming
o Files are an abstraction mechanism and provide a way
to store information on the disk and read it back later.
o The most important characteristic of any abstraction
mechanism is the way the objects being managed are
named.
o when a process creates a file, it gives the file a name.
When the process terminates, the file continues to
exist and can be accessed by other processes using its
name.
o The exact rules for file naming vary somewhat from
system to system, but all current operating systems
allow strings of one to eight letters as legal file names.
File naming (con’t… )
o Frequently digits and special characters are also
permitted.
o Some file systems distinguish between upper and
lower case letters, whereas others do not. UNIX falls
in the first category; MS-DOS falls in the second.
o Many operating systems support two-part file names,
with the two parts separated by a period, as in prog.c.
o In some systems (e.g., UNIX), file extensions are just
conventions and are not enforced by the operating
system.
o In contrast, Windows is aware of the extensions and
assigns meaning to them.
File type
o Many operating systems support several types of files.
o UNIX and Windows, for example, have regular files and directories.
o UNIX also has character and block special files.
Regular files: are the ones that contain user information. They are
either ASCII files or binary files.
e.g word file, excel file
Application programs can understand the content and structure of
specific regular file
ASCII files: It consists line of text, user can understand the content,
it can be displayed and printed as it is
e.g C, C++, HTML files
Binary files: contain formatted information that only certain applications
and processors can understand.
Binary files must be run on appropriate software or processors before
humans can read them.
e.g. executable files, compiled programs, compressed files, graphic or
image files
oDirectories files: are system files for
maintaining the structure of the file system.
oCharacter special files: are related to
input/output and used to model serial I/0
devices, such as terminals, printers, and
networks.
oBlock special files: are used to model block devices that are accessed
one block at a time (1 block = 512 bytes to 32 KB), such as disks,
DVD/CD-ROMs, and memory regions.
File access
o File can be access either sequentially or randomly.
o Sequential access: a process could read all the bytes
or records in a file in order, starting at the beginning,
but could not skip around and read them out of order.
o Random access: Files bytes or records can be read in
any order. Used when the storage medium was
magnetic disk. Essential for many applications.
o This method is used in UNIX and Windows.
File attribute
o Every file has a name and its data.
o In addition, all operating systems associate other
information with each file, for example, the date and
time the file was last modified and the file's size.
o We will call these extra items the file's attributes ,
Some people call them metadata.
o The list of attributes varies considerably from system
to system.
File attribute(con’t…)
File operation
o Different systems provide different operations to
allow storage and retrieval.
o Create: The file is created with no data. The purpose
of the call is to announce that the file is coming and to
set some of the attributes.
o Delete: When the file is no longer needed, it has to be
deleted to free up disk space.
o Open: Before using a file, a process must open it. The
purpose of the open call is to allow the system to
fetch the attributes and list of disk addresses into
main memory for rapid access on later calls.
File operation(con’t…)
o Close: When all the access are finished, the
attributes and disk addresses are no longer needed, so
the file should be closed to free up internal table
space.
o Read: Data are read from file. Usually, the bytes come
from the current position. The caller must specify how
much data are needed and must also provide a buffer
to put them in.
o Write: Data are written to the file, again usually at
the current position. If the current position is the end
of the file, the file’s size increases. If the current
position is in the middle of the file, existing data are
overwritten and lost forever.
File operation(con’t…)
o Append: This call is a restricted form of write.
It can only add data to the end of the file.
o Seek: For random access files, a method is needed to
specify from where to take the data. One common
approach is a system call, seek, that repositions the
file pointer to a specific place in the file. After this
call has completed, data can be read from, or written
to, that position.
o Get Attributes: Processes often need to read file
attributes to do their work.
File operation(con’t…)
o Set Attributes: Some of the attributes are user
settable and can be changed after the file has been
created. This system call makes that possible.
o The protection mode information is an obvious example.
o Most of the flags also fall in this category.
o Rename: It frequently happens that a user needs to
change the name of an existing file. This system call
makes that possible
File structure
oFiles can be structured in any of several ways.
oThree common possibilities are:
1. Byte sequence: All it sees are bytes. Any meaning must
be imposed by user-level programs.
• Both UNIX and Windows use this approach.
2. Record sequence: Central to the idea of a file being a
sequence of records is the idea that the read operation
returns one record and the write operation overwrites
or appends one record.
• Used in database system.
3. Tree: The file consists of a tree of records, not necessarily
all the same length, each containing a key field in a fixed
position in the record. The tree is sorted on the key field, to
allow rapid searching for a particular key.
File structure (con’t….)

(a) Byte sequence. (b) Record sequence.


(c) Tree.
Directory
oTo keep track of files, file systems normally
have directories or folders, which in many
systems are themselves files, containing
information about the files.
Directory structure
o Defines the organization or logical structure of
directory.
Single-Level Directory Systems:
o The simplest form of directory system is having one
directory containing all the files.
o All files must have unique names, and the single directory is
sometimes called the root directory.
o The advantages of this scheme are its simplicity and
the ability to locate files quickly-there is only one
place to look, after all.
o It is often used on simple embedded devices such as
telephones, digital cameras, and some portable music
players.
Directory structure(con’t…)
Hierarchical Directory Systems:
o With this approach, there can be as many directories as
are needed to group the files in natural ways.
o When the file system is organized as a directory tree,
some way is needed for specifying file names.
o Two different methods are commonly used.
o In the first method, each file is given an absolute path name
consisting of the path from the root directory to the file.
o Example : /usr/ast/mailbox
o The other kind of name is the relative path name. This is
used in conjunction with the concept of the working
directory (also called the current directory). A user can
designate one directory as the current working directory
o For example, if the current working directory is /usr/ast,
then the file whose absolute path is /usr/ast/mailbox can
be referenced simply as mailbox.
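The absolute and relative path naming above can be illustrated with Python's posixpath module (the /usr/ast names mirror the example; joining a relative name onto the working directory yields the absolute path):

```python
import posixpath

# The working directory from the example above.
cwd = "/usr/ast"

# A relative name is resolved against the working directory...
print(posixpath.join(cwd, "mailbox"))   # /usr/ast/mailbox

# ...and ".." components are collapsed by normpath.
print(posixpath.normpath(posixpath.join(cwd, "../jim/notes")))  # /usr/jim/notes
```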
Directory operation
oThe allowed system calls for managing directories exhibit
more variation from system to system than the system calls
for files.
oCreate: A directory is created. It is empty except for dot
and dot dot, which are put there automatically by the
system .
oDelete: A directory is deleted. Only an empty directory
can be deleted.
o Opendir: Directories can be read. Before a directory can
be read, it must be opened, analogous to opening and
reading a file.
oClosedir: When a directory has been read, it should be
closed to free up internal table space.
Directory operation(con’t…)
o Readdir: This call returns the next entry in an open directory, no
matter which of the possible directory structures is being used.
o Rename: In many respects, directories are just like files and can
be renamed the same way files can be.
o Link: Linking is a technique that allows a file to appear in more
than one directory. This system call specifies an existing file and
a path name, and creates a link from the existing file to the name
specified by the path. A link of this kind, which increments a
counter in the file's metadata (to keep track of the number of
directory entries containing the file), is sometimes called a hard link.
o Unlink: A directory entry is removed. If the file being unlinked is
only present in one directory (the normal case), it is removed
from the file system. If it is present in multiple directories, only
the path name specified is removed. The others remain.
o In UNIX, the system call for deleting files (discussed earlier) is,
in fact, unlink.
File system implementation
oUsers are concerned with how files are named,
what operations are allowed on them, what the
directory tree looks like, and similar interface
issues.
oImplementers are interested in how files and
directories are stored, how disk space is
managed, and how to make everything work
efficiently and reliably
File system layout
o Most disks can be divided up into one or more
partitions, with independent file systems on each
partition.
o MBR (Master Boot Record) is used to boot the
computer.
o The end of the MBR contains the partition table and
gives the starting and ending addresses of each
partition.
o One of the partitions in the table is marked as active
o When the computer is booted, the BIOS reads in and
executes the MBR.
o The first thing the MBR program does is locate the
active partition, read in its first block, called the boot
block, and execute it.
File system layout(con’t…)
o The program in the boot block loads the operating
system contained in that partition.
o every partition starts with a boot block, even if it
does not contain a bootable operating system.
o Often the file system will contain some of the following items.
o Superblock: contains all the key parameters about the
file system and is read into memory when the
computer is booted or the file system is first touched.
o Typical information in the superblock includes: a magic
number, to identify the file system type, the number
of blocks, in the file system, and other key
administrative information.
File system layout(con’t…)
o Free space management: includes information about
free blocks in the file system, for example in the form
of a bitmap or a list of pointers. This might be
followed by the
o i-nodes :an array of data structures, one per file,
telling all about the file.
o root directory : which contains the top of the file
system tree.
o Finally, the remainder of the disk contains all the
other directories and files.
File system layout(con’t…)

A possible file system layout.


Implementing files
o Probably the most important issue in implementing file storage
is keeping track of which disk blocks go with which file.
o Various methods are used in different operating systems.
o Contiguous allocation
o Linked list allocation
o i-node
Contiguous allocation
o The simplest allocation scheme is to store each file as a
contiguous run of disk blocks.
o it is simple to implement because keeping track of where a file's
blocks are is reduced to remembering two numbers: the disk
address of the first block and the number of blocks in the file
o the read performance is excellent because the entire file can be
read from the disk in a single operation.
o Only one seek is needed (to the first block). After that, no more
seeks or rotational delays are needed, so data come in at the full
bandwidth of the disk.
o Thus contiguous allocation is simple to implement and has high
performance.
o contiguous allocation also has a fairly significant drawback: over
the course of time, the disk becomes fragmented.
Linked list allocation
oThe second method is to keep each file as a linked list of disk blocks.
oThe first word of each block is used as a pointer
to the next one and the rest of the block is for
data.
oUnlike contiguous allocation, every disk block can
be used in this method. No space is lost to disk
fragmentation (except for internal fragmentation
in the last block).
oit is sufficient for the directory entry to merely
store the disk address of the first block. The
rest can be found starting there.
oalthough reading a file sequentially is
straightforward, random access is extremely slow.
Linked list allocation(con’t…)
oThe amount of data storage in a block is no longer a
power of two because the pointer takes up a few
bytes.
oBoth disadvantages of the linked list allocation can be
eliminated by taking the pointer word from each disk
block and putting it in a table in memory.
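This table-in-memory scheme (the idea behind MS-DOS's FAT, the File Allocation Table) can be sketched as follows; the block numbers and sentinel values are illustrative:

```python
# FAT-style linked allocation: the per-block "next" pointers live in one
# in-memory table instead of inside the data blocks themselves.
FREE, EOF = -2, -1    # sentinel values (an assumption for this sketch)

def read_chain(fat, first_block):
    """Follow the table from a file's first block to its end-of-file marker.

    The directory entry only needs to store first_block; the rest of the
    file's blocks are found by chasing pointers in the table.
    """
    blocks, b = [], first_block
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

# Hypothetical 8-block disk: a file starts at block 4 and occupies 4 -> 7 -> 2.
fat = [FREE, FREE, EOF, FREE, 7, FREE, FREE, 2]
print(read_chain(fat, 4))   # [4, 7, 2]
```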
i-node
o A method for keeping track of which blocks belong to
which file is to associate with each file a data
structure called an i-node (index-node), which
lists the attributes and disk addresses of the file's
blocks.
o The big advantage of this scheme over linked files
using an in-memory table is that the i-node need only
be in memory when the corresponding file is open.
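A toy i-node might be sketched like this; the field names, block size, and direct-block layout are assumptions for illustration, not a real on-disk format:

```python
from dataclasses import dataclass, field

@dataclass
class INode:
    """Sketch of an i-node: file attributes plus the disk addresses of its blocks."""
    size: int = 0
    owner: str = ""
    permissions: int = 0o644
    link_count: int = 1                 # number of hard links (directory entries)
    direct_blocks: list = field(default_factory=list)   # disk addresses of data

    def block_for_offset(self, offset, block_size=4096):
        """Random access: map a byte offset straight to a disk block address."""
        return self.direct_blocks[offset // block_size]

# A 10000-byte file whose data lives in disk blocks 18, 7, and 93.
ino = INode(size=10000, owner="ast", direct_blocks=[18, 7, 93])
print(ino.block_for_offset(5000))   # offset 5000 falls in the second block: 7
```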
Implementing directory
o The main function of the directory system is to map
the ASCII name of the file onto the information
needed to locate the data.

o A directory consists of a list of fixed-size entries, one
per file, containing a (fixed-length) file name, a
structure of the file attributes, and one or more disk
addresses (up to some maximum) telling where the
disk blocks are.
Implementing directory(con’t…)
o Nearly all modern operating systems support longer,
variable-length file names.
How can these be implemented?
o The simplest approach is to set a limit on file name
length, typically 255 characters
o One alternative is to give up the idea that all directory
entries are the same size.
o With this method, each directory entry contains a
fixed portion, typically starting with the length of the
entry, followed by data with a fixed format (owner,
creation time, protection information, and other
attributes), and then the variable-length file name.
o A disadvantage of this method is that when a file is
removed, a variable-sized gap is introduced into the
directory into which the next file to be entered may
not fit.
Implementing directory(con’t…)
oAnother way to handle variable-length names is to make
the directory entries themselves all fixed length and keep
the file names together in a heap at the end of the
directory.
oThis method has the advantage that when an entry is
removed, the next file entered will always fit there.
oIn all of the designs so far, directories are searched
linearly from beginning to end when a file name has to be
looked up.
oOne way to speed up the search is to use a hash table in
each directory.
oThe table entry corresponding to the hash code is
inspected.
Implementing directory(con’t…)
o If it is unused, a pointer is placed there to the file
entry.
o If that slot is already in use, a linked list is
constructed, headed at the table entry and threading
through all entries with the same hash value.
o A different way to speed up searching large
directories is to cache the results of searches.
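The hash-with-chaining lookup described above can be sketched as follows (table size and entries are made up): only one bucket's chain is searched instead of the whole directory.

```python
TABLE_SIZE = 8
table = [[] for _ in range(TABLE_SIZE)]   # bucket -> chain of (name, inode)

def add_entry(name, inode):
    """Append the entry to the chain headed at its hash slot."""
    table[hash(name) % TABLE_SIZE].append((name, inode))

def lookup(name):
    """Search only the chain of entries sharing this name's hash value."""
    for entry_name, inode in table[hash(name) % TABLE_SIZE]:
        if entry_name == name:
            return inode
    return None

add_entry("notes.txt", 12)
add_entry("report.doc", 47)
print(lookup("notes.txt"))   # 12
print(lookup("missing"))     # None
```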
Chapter 8
Security and protection
Contents of Security
oSecurity problem
oProgram threat
oNetwork and system threat
oSecurity tools
oCryptography
oAuthentication
oIntrusion detection
oFirewall
Security problem
o Security must consider external environment of the system, and protect it from:
• unauthorized access.
• malicious modification or destruction
• accidental introduction of inconsistency.
• These are management, rather than system, problems.
o Easier to protect against accidental than malicious misuse.
o We say that the system is secure if its resources are used and accessed as intended
under all circumstances.
Security problem(con’t…)
o Security has many facets. Three of the more important ones are the nature of the
threats, the nature of intruders, and accidental data loss.
Threats: compromises of data confidentiality, integrity, availability, etc.
Intruders: people who are nosing around places where they have no business being are
called intruders, or sometimes adversaries.
o Intruders act in two different ways.
 Passive intruders just want to read files they are not authorized to read
 Active intruders are more malicious; they want to make unauthorized changes to
data.
Security problem(con’t…)
Accidental data loss:
o In addition to threats caused by malicious intruders, valuable
data can be lost by accident.
o Some of the common causes of accidental data loss are
1. Acts of God: fires, floods, earthquakes, wars, riots, or rats
gnawing backup tapes.
2. Hardware or software errors: CPU malfunctions, unreadable
disks or tapes, telecommunication errors, program bugs.
3. Human errors: incorrect data entry, wrong tape or CD-ROM
mounted, wrong program run, lost disk or tape, or some
other mistake.
Security Violation Categories
o Breach of confidentiality
• Unauthorized reading of data
o Breach of integrity
• Unauthorized modification of data
o Breach of availability
• Unauthorized destruction of data
o Theft of service
• Unauthorized use of resources
o Denial of service (DOS)
• Prevention of legitimate use
Security Violation Methods
o Masquerading (breach authentication)
• Pretending to be an authorized user to
escalate privileges
o Replay attack
• As is or with message modification
o Man-in-the-middle attack
• Intruder sits in data flow, masquerading as
sender to receiver and vice versa
o Session hijacking
• Intercept an already-established session to
bypass authentication
Standard Security Attacks
Security Measure Levels
o Impossible to have absolute security, but make cost to
perpetrator sufficiently high to deter most intruders
o Security must occur at four levels to be effective:
• Physical
• Against armed or surreptitious entry by
intruders.
• Human
• Careful screening of users to reduce the
chance of unauthorized access.
• Network
• No one should intercept the data on the
network.
• Operating system
• The system must protect itself from
accidental or purposeful security breaches.
• A weakness at a high level of security allows
circumvention of low-level measures.
Security measures at OS level
o User authentication
• Verifying the user’s identity
o Program threats
• Harmful or unexpected misuse of programs
o System threats
• Worms and viruses
o Intrusion detection
• Detect attempted intrusions or successful
intrusions and initiate appropriate responses to the
intrusions.
o Cryptography
• Ensuring protection of data over network
Program Threats
o Many variations, many names
o Trojan Horse
• Code segment that misuses its environment
• Exploits mechanisms for allowing programs written by
users to be executed by other users
• Spyware, pop-up browser windows, covert channels
• Up to 80% of spam delivered by spyware-infected
systems
o Trap Door
• The designer of the code might leave a hole in the
software that only she is capable of using.
• Specific user identifier or password that circumvents
normal security procedures
• Could be included in a compiler
o Logic Bomb
• Program that initiates a security incident under certain
circumstances
Program Threats(con’t…)
o Stack and Buffer Overflow
• Exploits a bug in a program (overflow either the stack or
memory buffers.)
o The attacker determines the vulnerability and writes a
program to do the following.
• Overflow an input-field, command-line argument, or input
buffer until it writes into the stack.
• Overwrite the current return address on the stack with
the address of the exploit code in the next step.
• Write a simple set of code for the next space in the
stack that includes commands that the attacker wishes
to execute, for example, spawn a shell.
System Threats
o Viruses
• Code fragment embedded in legitimate program
• Self-replicating, designed to infect other computers
• Very specific to CPU architecture, operating system,
applications
• Usually borne via email or as a macro
• Visual Basic Macro to reformat hard drive
Sub AutoOpen()
Dim oFS
Set oFS = CreateObject("Scripting.FileSystemObject")
vs = Shell("c:command.com /k format c:", vbHide)
End Sub
System Threats (Cont.)
o Virus dropper inserts virus onto the system
o Many categories of viruses, literally many thousands of
viruses
• File / parasitic
• Boot / memory
• Macro
• Source code
• Polymorphic to avoid having a virus signature
• Encrypted
• Stealth
• Tunneling
• Multipartite
• Armored
System Threats(con’t…)
o Worms – use spawn mechanism; standalone program
• The worm spawns copies of itself, using up systems
resources and perhaps locking out system use by all
other processes.
o Internet worm
• Exploited UNIX networking features (remote access)
and bugs in finger and sendmail programs.
• Grappling hook program uploaded main worm program.
o Denial of Service
• Overload the targeted computer preventing it from
doing any useful work.
• Downloading of a page.
• Partially started TCP/IP sessions could eat up all
resources.
• Difficult to prevent denial of service attacks.
Threat Continues
o Attacks still common, still occurring
o Attacks moved over time from science experiments to tools
of organized crime
• Targeting specific companies
• Creating botnets to use as tool for spam and DDOS
delivery
• Keystroke logger to grab passwords, credit card numbers
The Morris Internet Worm
Threat Monitoring
o Check for suspicious patterns of activity – i.e., several
incorrect password attempts may signal password
guessing.
o Audit log – records the time, user, and type of all
accesses to an object; useful for recovery from a
violation and developing better security measures.
o Scan the system periodically for security holes; done
when the computer is relatively unused.
Threat Monitoring (Cont.)
o Check for:
• Short or easy-to-guess passwords
• Unauthorized set-uid programs
• Unauthorized programs in system directories
• Unexpected long-running processes
• Improper directory protections
• Improper protections on system data files
• Dangerous entries in the program search path
(Trojan horse)
• Changes to system programs: monitor checksum
values
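The last check above (monitoring checksum values of system programs) can be sketched as follows; the file path is a temporary stand-in for a real system binary.

```python
import hashlib
import os
import tempfile

def checksum(path):
    """Return the SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Record a baseline checksum, then detect a later modification:
path = os.path.join(tempfile.mkdtemp(), "prog")
with open(path, "wb") as f:
    f.write(b"original binary")
baseline = checksum(path)
with open(path, "wb") as f:
    f.write(b"trojaned binary")
print(checksum(path) != baseline)  # True -- the program was altered
```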
FireWall
o A firewall is placed between trusted and untrusted
hosts.
• A firewall is a computer or router that sits between
trusted and untrusted systems. It monitors and logs
all connections.
o The firewall limits network access between these two
security domains.
o Spoofing: An unauthorized host pretends to be an
authorized host by meeting some authorization
criterion.
Network Security Through Domain Separation Via Firewall
DMZ: Demilitarized zone
Intrusion Detection
o Detect attempts to intrude into computer systems.
o Wide variety of techniques
• The time of detection
• The type of inputs examined to detect intrusion activity
• The range of response capabilities.
• Alerting the administrator, killing the intrusion process, false resource is
exposed to the attacker (but the resource appears to be real to the
attacker) to gain more information about the attacker.
o The solutions are known as intrusion detection systems.
o Detection methods:
• Auditing and logging.
• Install logging tool and analyze the external accesses.
• Tripwire (UNIX software that checks if certain files and directories have been
altered – I.e. password files)
• Integrity checking tool for UNIX.
• It operates on the premise that a large class of intrusions results in
anomalous modification of system directories and files.
• It first enumerates the directories and files to be monitored for changes
and deletions or additions. Later it checks for modifications by comparing
signatures.
o System call monitoring
• Detects when a process is deviating from expected system call behavior.
Intrusion Detection System(IDS)
• IDSs used to monitor for “suspicious activity” on a network
• Can protect against known software exploits, like buffer overflows
• IDSs serve three essential security functions; monitor, detect and respond
to unauthorized activity
• IDS can also response automatically (in real-time) to a security breach event
such as logging off a user, disabling a user account and launching of some
scripts
• It is a reactive rather than a pro-active agent.
FIREWALL VS IDS
• Firewall cannot detect security breaches associated with traffic that does not pass
through it. Only IDS is aware of traffic in the internal network
• Firewall does not inspect the content of the permitted traffic
• IDS is capable of monitoring messages coming from any sources
• A firewall is likely to be attacked more often than an IDS
Cryptography
o Eliminate the need to trust the network.
o Cryptography enables a recipient of a message to verify that
the message was created by some computer possessing a
certain key.
o Keys are designed to be computationally infeasible to derive
from the messages
o Means to constrain potential senders (sources) and / or
receivers (destinations) of messages
• Based on secrets (keys)
• Enables
• Confirmation of source
• Receipt only by certain destination
• Trust relationship between sender and receiver
Encryption
o Constrains the set of possible receivers of a message
o Encrypt clear text into cipher text.
o Properties of good encryption technique:
• Relatively simple for authorized users to encrypt and decrypt
data.
• Encryption scheme depends not on the secrecy of the
algorithm but on a parameter of the algorithm called the
encryption key.
• Extremely difficult for an intruder to determine the
encryption key.
o Data Encryption Standard substitutes characters and rearranges
their order on the basis of an encryption key provided to
authorized users via a secure mechanism. Scheme only as secure
as the mechanism.
• RSA : public/private key algorithm is popular
Encryption(con’t…)
o Constrains the set of possible receivers of a message
o Encryption algorithm consists of
• Set K of keys
• Set M of Messages
• Set C of ciphertexts (encrypted messages)
• A function E : K → (M → C). That is, for each k ∈ K,
Ek is a function for generating ciphertexts from
messages
• Both E and Ek for any k should be efficiently
computable functions
• A function D : K → (C → M). That is, for each k ∈ K,
Dk is a function for generating messages from
ciphertexts
• Both D and Dk for any k should be efficiently
computable functions
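A toy XOR cipher makes the E : K → (M → C) and D : K → (C → M) notation concrete. XOR with a one-byte key is NOT a secure cipher; the sketch only illustrates that Dk(Ek(m)) = m and that both functions are cheap to compute.

```python
def E(k):
    """Ek : M -> C -- encrypt each byte by XOR with the key."""
    return lambda m: bytes(b ^ k for b in m)

def D(k):
    """Dk : C -> M -- XOR is its own inverse, so decryption is the same op."""
    return lambda c: bytes(b ^ k for b in c)

k = 0x5A
c = E(k)(b"attack at dawn")
print(D(k)(c))  # b'attack at dawn'
```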
Encryption (Cont.)
o An encryption algorithm must provide this essential
property: Given a ciphertext c ∈ C, a computer can
compute m such that Ek(m) = c only if it possesses k
• Thus, a computer holding k can decrypt
ciphertexts to the plaintexts used to produce
them, but a computer not holding k cannot
decrypt ciphertexts
• Since ciphertexts are generally exposed (for
example, sent on the network), it is important
that it be infeasible to derive k from the
ciphertexts
Symmetric Encryption
o Same key used to encrypt and decrypt
• Therefore k must be kept secret
o DES was most commonly used symmetric block-encryption
algorithm (created by US Govt)
• Encrypts a block of data at a time
• Keys too short so now considered insecure
o Triple-DES considered more secure
• Algorithm used 3 times using 2 or 3 keys
o 2001 NIST adopted new block cipher - Advanced Encryption
Standard (AES)
• Keys of 128, 192, or 256 bits, works on 128 bit blocks
o RC4 is most common symmetric stream cipher, but known to have
vulnerabilities
• Encrypts/decrypts a stream of bytes (i.e., wireless
transmission)
• Key is an input to pseudo-random-bit generator
• Generates an infinite keystream
Secure Communication over Insecure Medium
Asymmetric Encryption
o Public-key encryption based on each user having two
keys:
• public key – published key used to encrypt data
• private key – key known only to individual user
used to decrypt data
o Must be an encryption scheme that can be made
public without making it easy to figure out the
decryption scheme
• Most common is RSA block cipher
• Efficient algorithm for testing whether or not a
number is prime
• No efficient algorithm is known for finding the
prime factors of a number
Cryptography (Cont.)
Note:
symmetric cryptography based on transformations
asymmetric based on mathematical functions
• Asymmetric much more compute intensive
• Typically not used for bulk data encryption
Authentication (Cont.)
o For a message m, a computer can generate an authenticator a
∈ A such that Vk(m, a) = true only if it possesses k
o Thus, computer holding k can generate authenticators on
messages so that any other computer possessing k can verify
them
o Computer not holding k cannot generate authenticators on
messages that can be verified using Vk
o Since authenticators are generally exposed (for example, they
are sent on the network with the messages themselves), it
must not be feasible to derive k from the authenticators
o Practically, if Vk(m, a) = true then we know m has not been
modified and that the sender of the message has k
• If we share k with only one entity, we know where the
message originated
User Authentication
o Crucial to identify user correctly, as protection systems depend on
user ID
o User identity most often established through passwords, can be
considered a special case of either keys or capabilities
o Passwords must be kept secret
• Frequent change of passwords
• History to avoid repeats
• Use of “non-guessable” passwords
• Log all invalid access attempts (but not the passwords
themselves)
• Unauthorized transfer
o Passwords may also either be encrypted or allowed to be used only
once
• Does encrypting passwords solve the exposure problem?
• Might solve sniffing
• Consider shoulder surfing
• Consider Trojan horse keystroke logger
• How are passwords stored at authenticating site?
Passwords
o Encrypt to avoid having to keep secret
• But keep secret anyway (e.g., Unix stores them in the
superuser-only readable file /etc/shadow)
• Use algorithm easy to compute but difficult to invert
• Only encrypted password stored, never decrypted
• Add “salt” to avoid the same password being encrypted to the
same value
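The one-way-plus-salt scheme above can be sketched with `hashlib` (a single SHA-256 round, purely for illustration; real systems use deliberately slow functions such as bcrypt or scrypt): only the salt and hash are stored, and the random salt makes equal passwords hash to different values.

```python
import hashlib
import hmac
import os

def store(password):
    """Return (salt, hash) to store -- the plaintext is never kept."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).digest()

def verify(password, salt, stored_hash):
    """Re-hash the candidate with the stored salt and compare."""
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored_hash)  # constant-time compare

salt, h = store("s3cret")
print(verify("s3cret", salt, h))   # True
print(verify("guess", salt, h))    # False
```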
o One-time passwords
• Use a function based on a seed to compute a password, both user
and computer
• Hardware device / calculator / key fob to generate the password
• Changes very frequently
o Biometrics
• Some physical attribute (fingerprint, hand scan)
o Multi-factor authentication
• Need two or more factors for authentication
• i.e. USB “dongle”, biometric measure, and password
Protection
Contents of Protection
oGoals of protection
oPrinciples of protection
oAccess matrices
oAccess matrix implementation
oCapability based protection system
oLanguage based protection system
Goals of protection
oIn one protection model, computer consists of a collection of
objects, hardware or software
oEach object has a unique name and can be accessed through a
well-defined set of operations
oProtection problem - ensure that each object is accessed
correctly and only by those processes that are allowed to do so
Principles of protection
oGuiding principle – principle of least privilege
• Programs, users and systems should be given just enough privileges to
perform their tasks
• Limits damage if entity has a bug, gets abused
• Can be static (during life of system, during life of process)
• Or dynamic (changed by process as needed) – domain switching, privilege
escalation
• “Need to know” a similar concept regarding access to data
Principles of protection
oMust consider “grain” aspect
• Rough-grained privilege management easier, simpler, but least privilege now
done in large chunks
• For example, traditional Unix processes either have abilities of the associated user, or of
root
• Fine-grained management more complex, more overhead, but more
protective
• File ACL lists, RBAC
oDomain can be user, process, procedure
Domain structure
o Access-right = <object-name, rights-set>
where rights-set is a subset of all valid
operations that can be performed on the
object
o Domain = set of access-rights
Access matrix
oView protection as a matrix (access matrix)
oRows represent domains
oColumns represent objects
oAccess(i,j) is the set of operations that a process
executing in Domaini can invoke on Objectj
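A toy access matrix as a nested dict (domains, objects, and rights are made-up examples): rows are domains, columns are objects, and each entry is the set of permitted operations.

```python
access = {
    "D1": {"F1": {"read"}, "F3": {"read", "write"}},
    "D2": {"F2": {"execute"}},
}

def allowed(domain, obj, op):
    """op is permitted only if it appears in Access(domain, obj)."""
    return op in access.get(domain, {}).get(obj, set())

print(allowed("D1", "F3", "write"))  # True
print(allowed("D2", "F3", "write"))  # False -- empty matrix entry
```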
Use of Access Matrix
oIf a process in Domain Di tries to do “op” on
object Oj, then “op” must be in the access
matrix
oUser who creates object can define access
column for that object
oCan be expanded to dynamic protection
• Operations to add, delete access rights
• Special access rights:
• owner of Oi
• copy op from Oi to Oj (denoted by “*”)
• control – Di can modify Dj access rights
• transfer – switch from domain Di to Dj
• Copy and Owner applicable to an object
• Control applicable to domain object
Access Matrix of Figure A with Domains as Objects
Access Matrix with Copy Rights
Access Matrix With Owner Rights
Modified Access Matrix of Figure B
Implementation of Access Matrix
o Generally, a sparse matrix
o Option 1 – Global table
• Store ordered triples <domain, object,
rights-set> in table
• A requested operation M on object Oj within
domain Di -> search table for < Di, Oj, Rk >
• with M ∈ Rk
• But table could be large -> won’t fit in main
memory
• Difficult to group objects (consider an object
that all domains can read)
Implementation of Access Matrix (Cont.)
oOption 2 – Access lists for objects
• Each column implemented as an access list for
one object
• Resulting per-object list consists of ordered
pairs <domain, rights-set> defining all
domains with non-empty set of access rights
for the object
• Easily extended to contain default set -> If M
∈ default set, also allow access
Implementation of Access Matrix (Cont.)
oEach column = Access-control list for one
object
Defines who can perform what operation
Domain 1 = Read, Write
Domain 2 = Read
Domain 3 = Read
oEach Row = Capability List (like a key)
For each domain, what operations allowed
on what objects
Object F1 – Read
Object F4 – Read, Write, Execute
Object F5 – Read, Write, Delete, Copy
Implementation of Access Matrix (Cont.)
o Option 3 – Capability list for domains
• Instead of object-based, list is domain based
• Capability list for domain is list of objects together with
operations allowed on them
• Object represented by its name or address, called a
capability
• Execute operation M on object Oj, process requests
operation and specifies capability as parameter
• Possession of capability means access is allowed
• Capability list associated with domain but never directly
accessible by domain
• Rather, protected object, maintained by OS and
accessed indirectly
• Like a “secure pointer”
• Idea can be extended up to applications
Implementation of Access Matrix (Cont.)
o Option 4 – Lock-key
• Compromise between access lists and capability
lists
• Each object has list of unique bit patterns,
called locks
• Each domain as list of unique bit patterns
called keys
• Process in a domain can only access object if
domain has key that matches one of the locks
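The lock-key compromise can be sketched with bit patterns modeled as small integers (the objects, domains, and patterns are hypothetical): access is granted when any of the domain's keys matches one of the object's locks.

```python
object_locks = {"printer": {0b1010, 0b0110}}             # per-object lock list
domain_keys = {"D1": {0b1010, 0b0001}, "D2": {0b1111}}   # per-domain key list

def can_access(domain, obj):
    """Granted iff some key of the domain matches some lock of the object."""
    return bool(domain_keys[domain] & object_locks[obj])

print(can_access("D1", "printer"))  # True -- key 0b1010 matches a lock
print(can_access("D2", "printer"))  # False -- no matching key
```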
Comparison of Implementations
o Many trade-offs to consider
• Global table is simple, but can be large
• Access lists correspond to needs of users
• Determining set of access rights for domain
non-localized so difficult
• Every access to an object must be checked
• Many objects and access rights -> slow
• Capability lists useful for localizing information for
a given process
• But revocation capabilities can be inefficient
• Lock-key effective and flexible, keys can be passed
freely from domain to domain, easy revocation
Comparison of Implementations (Cont.)
o Most systems use combination of access lists and
capabilities
• First access to an object -> access list searched
• If allowed, capability created and attached
to process
• Additional accesses need not be checked
• After last access, capability destroyed
• Consider file system with ACLs per file
Access Control
o Protection can be applied to non-
file resources
o Oracle Solaris 10 provides role-
based access control (RBAC) to
implement least privilege
• Privilege is right to execute
system call or use an option
within a system call
• Can be assigned to
processes
• Users assigned roles
granting access to privileges
and programs
• Enable role via password
to gain its privileges
• Similar to access matrix
Revocation of Access Rights
o Various options to remove the access right of a domain
to an object
• Immediate vs. delayed
• Selective vs. general
• Partial vs. total
• Temporary vs. permanent
o Access List – Delete access rights from access list
• Simple – search access list and remove entry
• Immediate, general or selective, total or partial,
permanent or temporary
Revocation of Access Rights (Cont.)
o Capability List – Scheme required to locate capability in the
system before capability can be revoked
• Reacquisition – periodic delete; a capability must be
reacquired, and reacquisition is denied if revoked
• Back-pointers – set of pointers from each object to all
capabilities of that object (Multics)
• Indirection – capability points to global table entry which
points to object – delete entry from global table, not
selective (CAL)
• Keys – unique bits associated with capability, generated
when capability created
• Master key associated with object, key matches
master key for access
• Revocation – create new master key
• Policy decision of who can create and modify keys –
object owner or others?
Capability-Based Systems
o Hydra
• Fixed set of access rights known to and interpreted by the
system
• i.e. read, write, or execute each memory segment
• User can declare other auxiliary rights and register those
with protection system
• Accessing process must hold capability and know name of
operation
• Rights amplification allowed by trustworthy procedures for
a specific type
• Interpretation of user-defined rights performed solely by
user's program; system provides access protection for use of
these rights
• Operations on objects defined procedurally – procedures are
objects accessed indirectly by capabilities
• Solves the problem of mutually suspicious subsystems
• Includes library of prewritten security routines
Capability-Based Systems (Cont.)
o Cambridge CAP System
• Simpler but powerful
• Data capability - provides standard read,
write, execute of individual storage segments
associated with object – implemented in
microcode
• Software capability -interpretation left to the
subsystem, through its protected procedures
• Only has access to its own subsystem
• Programmers must learn principles and
techniques of protection
Language-Based Protection
o Specification of protection in a programming language allows
the high-level description of policies for the allocation and use
of resources
o Language implementation can provide software for protection
enforcement when automatic hardware-supported checking is
unavailable
o Interpret protection specifications to generate calls on
whatever protection system is provided by the hardware and
the operating system
Any Questions?
9/3/2019 Wolkite University OS(Seng2043) 277