
Course Name: Operating System Code: IPCC22IS34

UNIT 1
Chapter-1: Introduction to Operating System, System Structures

 What Operating Systems Do

 Operating System Structure

 Operating System Operations

What Operating Systems Do


 An operating system is software that controls and manages the computer hardware.

 It acts as an interface between the user and the hardware of a computer system.

 It provides an environment for executing programs in a convenient and efficient way.

 It is a program that acts as an intermediary between a user of a computer and the computer hardware.

What Operating Systems Do: Goals


 User Computations

 Execute user programs and make solving user problems easier

 Utilization Efficiency

 Ensure good resource utilization and take appropriate corrective action when utilization becomes low

 User Convenience

 Make the computer system convenient to use

Four Components of a Computer System:

 Hardware

 provides basic computing resources

 CPU, memory, I/O devices

 Operating system


 Controls and coordinates use of hardware among various applications and users

 Application programs

 Define the ways in which system resources are used to solve the computing problems of the users

 Word processors, compilers, web browsers, database systems, video games

 Users

 People, machines, other computers

Operating System: Different Views


 User view

 Personal Computer: Ease of use, performance

 Mainframe: Resource utilization

 System view

 Resource allocator

 Control program

 OS is a resource allocator

 Manages all resources

 Decides between conflicting requests for efficient and fair resource use

 OS is a control program

 Controls execution of programs to prevent errors and improper use of the computer

Defining Operating System


 “The one program running at all times on the computer” is the kernel.

 Everything else is either a system program (ships with the operating system) or an
application program

Operating System Structure: Multiprogramming


 Needed for efficiency

 Single user cannot keep CPU and I/O devices busy at all times

 Multiprogramming organizes jobs (code and data) so CPU always has one to execute


 A subset of total jobs in system is kept in memory

 One job is selected and run via job scheduling

 When it has to wait (for I/O for example), OS switches to another job and so on

 I/O routines are supplied by the system and shared among many programs.

 CPU scheduling: the system must choose one job among several jobs ready to run.

 Allocation of devices should be done efficiently and fairly.

Timesharing (multitasking):
 Time-Sharing is a logical extension of Multiprogramming where many users share the
system simultaneously.

 The CPU executes several jobs that are kept in memory by switching among them.

 CPU switching is so fast that the user can interact with each program while it is
running.

 Response time is < 1 second

 A few processes should be kept available in memory, ready to run

 Several jobs are ready to be brought into memory → job scheduling

 Several jobs ready to run at the same time → CPU scheduling

 If processes don't fit in memory, swapping moves them in and out of memory → memory management

 Virtual memory allows execution of processes not completely in memory

Operating-System Operations


 OS is interrupt driven

 A trap or exception is a software generated interrupt

 Division by zero or invalid memory access

 Other process problems include infinite loop, processes modifying each other
or the operating system

 Request for operating system service

 For each type of interrupt, a separate service routine in the OS is responsible for handling it

Dual mode Operation


 As the OS and users share hardware and software resources, they should not interfere with each other's operations.

 Dual-mode operation allows OS to protect itself and other system components

 User mode

 Kernel mode or system mode or privileged mode

 A bit, mode bit, is added to computer hardware to indicate the current mode

 Kernel (0)

 User (1)

 When a user process requests a service from the OS via a system call, it transitions from user mode to kernel mode to fulfill the request

 At boot time, hardware starts in kernel mode, then OS is loaded (mode bit = 0)

 Then OS starts user applications in user mode (mode bit = 1)

 Whenever a trap occurs hardware switches from user to kernel mode (mode bit = 0)


 Mode bit provided by hardware

 Provides the means for protecting the OS from erroneous programs and
erroneous programs from each other.

 Provides the ability to distinguish when the system is running in user mode or kernel mode


 Some instructions designated as privileged, only executable in kernel mode

 If any attempt is made to execute a privileged instruction in user mode, the hardware does not execute the instruction and instead treats it as a trap to the OS
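The mode-bit mechanism described above can be sketched as a small simulation (illustrative Python, not real hardware; the instruction names and the TrapError exception are invented for the sketch):

```python
# Simulation of dual-mode operation: privileged instructions execute
# only when the mode bit is 0 (kernel); in user mode they trap.
KERNEL, USER = 0, 1


class TrapError(Exception):
    """Raised when a privileged instruction is attempted in user mode."""


def execute(instruction: str, mode_bit: int) -> str:
    privileged = {"io", "set_timer", "halt"}  # illustrative set
    if instruction in privileged and mode_bit == USER:
        # Hardware does not execute the instruction; it traps to the OS.
        raise TrapError(f"privileged instruction '{instruction}' in user mode")
    return "done"
```

Ordinary instructions run in either mode; only the privileged set is gated by the mode bit.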

 MS-DOS written for Intel 8088 architecture which has no mode bit and
therefore no dual mode.

 A user program may wipe the OS by writing over it.

 Pentium provides dual mode; and Windows, Linux, Solaris make use of the feature
and provide protection to OS

Transition from User to Kernel Mode

Timer:
 A user program must not get stuck in an infinite loop and fail to return control to the OS.

 A timer can be used to prevent a process from looping forever or hogging resources

 A timer can be set to interrupt the computer after a specified period.

 The period may be fixed (1/60 second) or variable (from 1 millisecond to 1 second)

 A variable timer is implemented by a clock and a counter

 At each clock tick the OS decrements the counter

 At 0, an interrupt occurs

 Set up timer before scheduling process to regain control or terminate program that
exceeds allotted time.
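The variable-timer scheme above (a clock tick decrements a counter; an interrupt fires when it reaches zero) can be sketched as:

```python
def ticks_until_interrupt(counter: int) -> int:
    """Sketch of the variable timer: the OS decrements the counter at
    each clock tick; when it reaches 0 an interrupt occurs and control
    returns to the OS. Returns how many ticks elapsed before the interrupt."""
    ticks = 0
    while counter > 0:
        counter -= 1   # OS decrements counter at each tick
        ticks += 1
    return ticks       # interrupt occurs here (counter == 0)
```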

Operating System Services from the user's point of view


 An operating system provides the environment within which programs are executed.

 Services provided by OS

 Interface to users and programmers

 Its components and their interconnection


 User Interface

 Program execution

 I/O operations

 File-system manipulation

 Communications

 Error detection

 Resource allocation

 Accounting

 Protection and Security

 User Interface

 Almost all operating systems have a user interface (UI)

 Command-Line Interface (CLI) uses text commands

 Batch Interface uses files containing commands and directives for execution

 Graphical User Interface (GUI) uses windows with a pointing device, keyboard, menus, etc.

 Some systems use two or all three types.

 Program execution

(system capability to load a program into memory and to run it)


 Load the instructions and data into main memory

 Initialize I/O devices and required files

 Make resources available for execution

 End execution, either normally or abnormally (indicating error)

 I/O operations

 A running program may require I/O which may involve a file to load in
memory or an I/O device like keyboard

 Since user programs cannot execute I/O operations directly, for efficiency and protection reasons, the operating system must provide a means to perform I/O.

 A uniform interface to hide the hardware details


 File-system manipulation

 Programs need to read and write files and directories

 Create and delete files and directories

 Search for files and list file information

 Permission management to allow or deny access to files and directories based


on ownership and privileges

 Communications

 Exchange of information between processes executing either on the same


computer or on different systems tied together by a network.

 Implemented via shared memory or message passing.

Error detection
 OS needs to be constantly aware of possible errors

 OS ensures correct and consistent computing by detecting errors in the CPU,


memory hardware, I/O devices, or in user programs.

 Illegal memory location

 Printer not ready and other I/O devices related

 Power failure and Connection failure in network

 Software errors like arithmetic errors and division by zero

 Debugging facilities can greatly enhance the user’s and programmer’s abilities
to efficiently use the system

Operating System Services from the system's point of view


 Resource allocation

 Allocating resources to multiple users or multiple jobs running at the same


time

 CPU cycle (CPU scheduling)

 Memory (Memory management)

 File storage (Disk scheduling)

 I/O devices (request and release)

 Registers (Job scheduling)

 Accounting


 To keep track of which users use how much and what kinds of computer
resources

 This record keeping may be used for account billing or for accumulating usage
statistics.

 Usage statistics are a valuable tool for researchers who wish to reconfigure the system to improve computing services

 Protection and Security–ensuring that access to system resources is controlled

 The owners of information stored in a multiuser or networked computer system may want to control use of that information

 Concurrent processes should not interfere with each other or with OS itself

 Protection involves ensuring that all access to system resources is controlled

 Security of the system from outsiders requires user authentication, extends to


defending external I/O devices from invalid access attempts

 If a system is to be protected and secure, precautions must be instituted


throughout it.

 Important issues related to Protection and Security are:

 Authentication

 Controlled access of files

 Authorization & privileges

User Operating System Interface:


 Command line Interface

 Graphical User Interface

Command Line Interface (CLI) :


CLI allows direct command entry
 Some OS include command interpreter in kernel

 Some OS (Windows) treat command interpreter as a special program

 Some systems (Unix) have multiple command interpreters, these interpreters are
known as shells

 Main function of command interpreter is to get and execute the command

 Commands manipulate files: create, delete, list, print, copy, execute, etc.

 Commands can be implemented in the following two ways:


 Command Interpreter itself contains the code to execute the command.

 Commands are implemented through system programs; the command interpreter does not itself understand the command, but uses it to identify a file to be loaded into memory and executed.

 Command Interpreter itself contains the code to execute the command

 A command to delete a file may cause the command interpreter to jump to a section of code that sets up the parameters and makes the appropriate system call.

 The number of commands determines the size of the command interpreter

 Modification may be difficult

 Commands are implemented through system programs

 UNIX command to delete a file

rm file.text
 The shell will search for a file named rm, load it into memory, and execute it with file.text as a parameter

 Command interpreter is smaller.

 Addition of a new command is easy
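On a Unix-like system, the "commands as system programs" approach can be sketched with the standard subprocess module: the interpreter just locates the program named on the command line, loads it, and runs it (run_command is a hypothetical helper name; the sketch assumes echo is on the PATH):

```python
import subprocess


def run_command(argv):
    """Sketch of a shell that implements commands via system programs:
    argv[0] names the program to locate on the PATH; the remaining
    entries are passed to it as parameters. Returns (exit code, stdout)."""
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.returncode, result.stdout
```

With this scheme the interpreter stays small, and adding a new command only means adding a new program file.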

Graphical User Interface


 User-friendly desktop metaphor interface

 GUI provides mouse and menu based interface

 Mouse is moved to position its pointer on images and icons on the screen that
represent programs, files, directories etc.

 Clicking various mouse buttons over objects in the interface causes various actions: provide information, present options, execute a function, open a directory (known as a folder)

 User-friendly desktop metaphor interface

 Invented at Xerox PARC: Xerox Alto computer in 1973

 Became widespread with the advent of Apple Macintosh computers in 1980s


i.e. Mac OS.

 Microsoft's first GUI version has since advanced to Windows 7.

 Traditionally UNIX uses a CLI, but GUIs such as the Common Desktop Environment (CDE) and X-Windows are available with commercial versions of UNIX such as Solaris and IBM's AIX.


 KDE and GNOME by GNU project run on Linux and Unix


 Many systems now include both CLI and GUI interfaces

 Microsoft Windows is GUI with CLI “command” shell

 Apple Mac OS X has the "Aqua" GUI interface with a UNIX kernel underneath, and shells are available

 Solaris is CLI with optional GUI interfaces

System Calls:
 System calls provide an interface to the services provided by the OS

 These routines are typically written in a high-level language (C or C++); low-level tasks may be written in assembly language

 System call sequence to copy the contents of one file to another file
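A minimal sketch of such a copy sequence, using Python's low-level wrappers around the open/read/write/close system calls (copy_file is an illustrative helper, not an OS API):

```python
import os


def copy_file(src: str, dst: str, bufsize: int = 4096) -> int:
    """Copy src to dst via the open/read/write/close system-call
    wrappers, mirroring the system-call sequence a copy program makes.
    Returns the number of bytes copied."""
    in_fd = os.open(src, os.O_RDONLY)                       # open input file
    out_fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    copied = 0
    while True:
        chunk = os.read(in_fd, bufsize)                     # read from input
        if not chunk:                                       # end of file
            break
        os.write(out_fd, chunk)                             # write to output
        copied += len(chunk)
    os.close(in_fd)                                         # close both files
    os.close(out_fd)
    return copied
```

Each call in the loop corresponds to one system call crossing into the kernel; a real program would also check for and report errors at every step.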

 System calls are mostly accessed by programs via a high-level Application Program
Interface (API) rather than directly using system call.

 The API specifies a set of functions that are available to an application programmer,
including the parameters that are passed with each function.

 Three most common APIs are Win32 API for Windows, POSIX API for POSIX-
based systems (including all versions of UNIX, Linux, and Mac OS X), and Java API
for the Java virtual machine (JVM)

 One benefit of using API is program portability.

 A program written to an API can be compiled and run on any system that supports the same API.

 Actual system calls are often more detailed and more difficult to work with directly.

 A run-time support system for most of the programming languages provides a system
call interface that serves as the link to system calls made available by OS.

 A run-time support system provides a set of functions built into libraries included
with the compiler

 A number is associated with each system call.

 System-call interface maintains a table indexed according to these numbers.

 The system call interface invokes intended system call in OS kernel and returns status
of the system call with its return value.

 The programmers need not know about the system call implementation and just need
to obey API

 Most of the details of OS interface are hidden from programmer by API and managed
by run-time support library (set of functions built into libraries included with
compiler)

 More information like parameters is required than simply identity of desired system
call

 Three general methods are used to pass parameters between a running program and
the operating system.

 Pass parameters in registers (simplest)

 Store the parameters in a block of memory, and the memory address is passed
as a parameter in a register as in Linux, Solaris

 Push the parameters onto the stack by the program, and pop off the stack by
operating system.

Types of System Calls


 Process control

 File management

 Device management

 Information maintenance

 Communications

Process control
 To halt the execution of a running program either normally (end) or abnormally (abort), the OS transfers control to the command interpreter

 A process executing one program may load or execute another program

 For coordination of concurrent processes, system calls may be wait event, signal event

 Another set of system calls help in debugging the program

 end, abort

 load, execute

 create process, terminate process

 get process attributes, set process attributes


 wait for time

 wait event, signal event

 allocate and deallocate memory
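The create-process/wait pattern from this list can be sketched with the POSIX fork and wait calls via Python's os module (assumes a Unix-like system; the helper name is illustrative):

```python
import os


def run_child_and_wait(exit_code: int) -> int:
    """Sketch of process-control calls: create a child process (fork),
    let it terminate with a status (here via _exit; a real shell would
    exec another program), and collect that status in the parent (waitpid)."""
    pid = os.fork()
    if pid == 0:
        # Child process: terminate with the given exit code.
        os._exit(exit_code)
    # Parent process: wait for the child and extract its exit status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```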

System Calls:
Examples: MS-DOS environment; FreeBSD running multiple programs

File management:
 Create and delete a file (or directory); these calls need a file name and a few attributes

 Open to read, write and reposition file

 Close the file

 Determine the values of various file attributes like file name, file type, protection
codes

 Calls to move and copy files

 Some OS may provide API and system programs

 create file, delete file

 open, close

 read, write, reposition

 get file attributes, set file attributes
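The get/set-file-attribute calls above can be illustrated with the stat and chmod system-call wrappers (Unix-style permission bits assumed; the helper name is invented for the sketch):

```python
import os
import stat


def set_and_get_mode(path: str, mode: int) -> int:
    """Illustrative 'set file attributes' / 'get file attributes' pair:
    chmod sets the protection bits, stat reads them back."""
    os.chmod(path, mode)                         # set file attributes
    return stat.S_IMODE(os.stat(path).st_mode)   # get file attributes
```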

Device management
 A process needs several resources for execution like main memory, disk drives, files
etc.

 All resources may be considered as devices which can be requested and released.
(same as open and close a file)


 request device, release device

 read, write, reposition

 get device attributes, set device attributes

 logically attach or detach devices

Information maintenance
 Many system calls exist for the purpose of transferring the information between the
user program and OS

 System calls to return time and date

 System calls to return information about the system such as number of current users,
OS version, amount of free memory or disk

 System calls to get and set information for files, processes, and devices

 get time or date, set time or date

 get system data, set system data

 get process, file or device attributes

 set process, file or device attributes
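A few of these information-maintenance calls, as exposed through Python's standard wrappers (the dictionary shape is an illustration, not an OS interface):

```python
import os
import time


def system_info() -> dict:
    """Sketch of information-maintenance calls: the process's own id,
    its parent's id, and the current time, each obtained via a system call."""
    return {
        "pid": os.getpid(),     # get process attributes: process id
        "ppid": os.getppid(),   # parent process id
        "time": time.time(),    # get time or date
    }
```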

Communications
 Types of inter-process communication:

 Message passing

 Shared memory

 Message can be exchanged directly or indirectly

 Connection to be established

 Communicators (processes) must be specified; they may be identified by process IDs, on the same computer or on different computers identified by IP address

 Message transfer through read and write calls

 Communication can take place through shared memory

 Shared memory to be created

 Gaining access to regions of shared memory owned by other processes

 Processes can exchange the information by reading and writing the data in
shared memory


 Message passing is useful for exchanging smaller amounts of data

 Shared memory is faster but limited to the same computer, and synchronization is an issue

 create, delete communication connection

 send, receive messages

 transfer status information

 attach or detach remote devices
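A minimal message-passing sketch: a parent and a forked child exchange data through an OS pipe (assumes a Unix-like system; the helper name is invented for the sketch):

```python
import os


def send_via_pipe(message: bytes) -> bytes:
    """Message passing between two processes: the child writes the
    message into a pipe, the parent reads it out (the pipe plays the
    role of the 'communication connection' from the list above)."""
    r, w = os.pipe()                 # create communication connection
    pid = os.fork()
    if pid == 0:
        os.close(r)
        os.write(w, message)         # child: send message
        os.close(w)
        os._exit(0)
    os.close(w)
    chunks = []
    while True:
        data = os.read(r, 4096)      # parent: receive message
        if not data:
            break
        chunks.append(data)
    os.close(r)
    os.waitpid(pid, 0)               # delete connection / reap child
    return b"".join(chunks)
```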

Operating System Design and Implementation:


Design goals
 Internal structure of different Operating Systems can vary widely

 Start by defining goals and specifications

 Design of the system is affected by the choice of hardware and the type of system: time-shared, single-tasking, multi-tasking, multiuser, distributed, real-time, etc.

 Requirements can be divided into two groups:

 User goals.

 System goals.

 User goals

 Operating system should be convenient to use, easy to learn, reliable, safe, and
fast

 System goals

 Operating system should be easy to design, implement, and maintain, as well


as flexible, reliable, error-free, and efficient

 Important principle to separate

 Policy: What will be done?

 Mechanism: How to do it?

 The separation of policy from mechanism is a very important principle, it


allows maximum flexibility if policy decisions are to be changed later

 Traditionally, operating systems were written in assembly language.

 Now mostly written in C / C++

 The first OS written in a high-level language (MCP) was written in a variant of ALGOL.


 Linux and Windows XP are written in C, but some code sections, such as device drivers and code for saving and restoring register state, are written in assembly

 Advantages using high level language for OS

 Fast implementation

 Compact

 Easy to understand and debug

 Easy to port

 MS DOS was written in Assembly language for Intel 8088 and


available only on Intel family of CPUs

 Linux is written (mostly) in C and is available on a number of different CPUs like Intel 80X86, Motorola 680X0, SPARC, MIPS RX000, etc.

 Disadvantages using high level language for OS

 Reduced speed

 Increased storage requirements

 Solution

 Modern processors have deep pipelining and multiple functional units that can
handle complex dependencies.

 Better data structures and algorithms can be used than would be practical in assembly language

 Critical routines can be replaced in assembly language.

UNIT 2: Process Management


Chapter-1: Process Concept
Contents:
3.1 Process concept
3.2 Process scheduling

3.1 The Process concept


 An operating system executes a variety of programs:

 Batch system – jobs

 Time-shared systems – user programs or tasks

 Program is a passive entity.

 A process is a program in execution.

 An active entity that can be assigned to and executed on a processor.


 A process is a unit of work.

 It has limited time span.

 A process needs certain resources, including CPU time, memory, files, and I/O
devices, to accomplish its task.

3.1.1 The Process
 A program is a passive entity i.e. a file containing a list of instructions stored on disk
called as executable file.

 A process is an active entity with a program counter specifying the next instruction to
execute and a set of associated resources.

 A program becomes a process when an executable file is loaded into memory. Two common techniques for loading executable files are:

 Double click on icon representing executable file

 Entering the name of the executable file on the command line (e.g., prog.exe or a.out)

 A process includes:

 Text section (Program code)


 Value of program counter and contents of processor’s registers (PCB)
 Stack (temporary data like function parameters, return addresses, local
variables)
 Data section (global variables)
 Heap (memory dynamically allocated to process to hold intermediate
computation data at run time)

Process in memory
 Two processes may be associated with the same program, but they are nevertheless considered two separate processes, e.g., several copies of one mail program.

 A process may spawn many processes as it runs

3.1.2 Process State


 As a process executes, it changes state.


 The state of a process is defined in part by the current activity of that process.

 Each process may be in one of the following states:

 new: The process is being created.

 ready: The process is waiting to be assigned to a processor.

 running: Instructions are being executed.

 waiting: The process is waiting for some event to occur (such as I/O completion or reception of a signal).

 terminated: The process has finished execution.

 Only one process can be running on any processor at any instant.
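The five states and their legal transitions can be captured in a small table (a sketch of the standard state diagram, not an OS data structure):

```python
# Legal transitions of the five-state process model: e.g. a running
# process may be preempted (ready), block on an event (waiting), or finish.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}


def can_transition(src: str, dst: str) -> bool:
    """Return True if a process in state src may move directly to dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note that a waiting process cannot be dispatched directly: it must first return to the ready state when its event occurs.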

Diagram of process state

3.1.3 Process Control Block (PCB)


 Each process is represented by a process control block (PCB) also called as task
control block.

Process Control Block(PCB)


 PCB is repository of information

 Information associated with each process:

 Process state

 Program counter

 CPU registers


 CPU scheduling information

 Memory-management information

 Accounting information

 I/O status information

1. Process state

 The state may be new, ready, running, waiting, halted, and so on

2. Program counter

 The counter indicates the address of next instruction to be executed for this
process.

3. CPU registers

 Number and type of registers

 They include accumulators, index registers, stack pointers, general purpose


registers and any condition-code information.

 Along with the program counter, this state information must be saved when
an interrupt occurs, to allow the process to be continued correctly afterward.

4. CPU scheduling information

 This includes a process priority, pointers to scheduling queues, and any other
scheduling parameters.

5. Memory-management information

 Value of base and limit registers, page table or segment table

6. Accounting information

 Amount of CPU and real time used, time limits, account numbers , job or
process numbers.

7. I/O status information

 List of I/O devices allocated to the process


 list of open files
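As a sketch, a PCB holding the fields listed above might look like this (the field names and types are assumptions for illustration, not a real kernel layout):

```python
from dataclasses import dataclass, field


@dataclass
class PCB:
    """Illustrative process control block: one record per process,
    grouping the seven kinds of information listed above."""
    pid: int                                        # process number
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling info
    memory_limits: tuple = (0, 0)                   # base/limit (memory mgmt)
    cpu_time_used: float = 0.0                      # accounting info
    open_files: list = field(default_factory=list)  # I/O status info
```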
----------------------------------------------------------------------------------------------------------------
-------------------------
3.2 Process Scheduling:
 The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.


 The objective of time sharing is to switch CPU among the processes so frequently that
users can interact with each process while it is running.
 To meet these objectives, process scheduler selects an available process for execution
on CPU.
 For a single-processor system, there will never be more than one running process.
 If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.

Fig: The ready queue and various I/O device queues

3.2.1 Scheduling Queues:
 Job queue

 Set of all processes in the system

 Ready queue

 Set of all processes residing in main memory, ready and waiting to execute

 Device queues

 Set of processes waiting for an I/O device

 Processes migrate among the various queues

 Ready queue

 Set of all processes residing in main memory, ready and waiting to execute.

 Header of the queue contains the pointers to first and last PCB.

 Each PCB contains a pointer to next PCB.
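The ready queue behaves as a FIFO list of PCBs; a minimal sketch (PCBs reduced to process IDs for brevity, and a deque standing in for the linked list of PCBs):

```python
from collections import deque


def fifo_schedule(pids):
    """Sketch of a ready queue: PCBs (here just pids) wait in FIFO
    order; the scheduler repeatedly dispatches the one at the head.
    Returns the order in which processes were dispatched."""
    ready = deque(pids)      # queue header points at first and last PCB
    order = []
    while ready:
        order.append(ready.popleft())   # dispatch the PCB at the head
    return order
```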


 I/O Device queues

 Set of processes waiting for an I/O device

 A new process is initially put in the ready queue, where it waits to be selected for execution on the CPU.

 Once process is allocated CPU, following events may occur:

 The process issues an I/O request, and then be placed in an I/O device queue.

 The process creates a new subprocess and waits for its termination.

 The process is removed forcibly from the CPU as a result of interrupt or


expired time slice.

 The process ends.

 A process migrates between the various scheduling queues throughout its life time.

 The operating system must select processes from these scheduling queues in some fashion.


 The selection process is carried out by schedulers.

Process Scheduling: Types of Schedulers


 Long-term scheduler (or job scheduler)

 Selects which processes should be brought into the ready queue.

 Short-term scheduler (or CPU scheduler)

 Selects process from ready queue which should be executed next and allocates
CPU.

 Medium-term scheduler

 Swaps processes out of memory and later swaps them back into the ready queue.

Fig: Queueing diagram showing the long-term, medium-term, and short-term schedulers

Process Scheduling: Schedulers


 Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast.

 Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ may be slow.

 The long-term scheduler controls the degree of multiprogramming.

 Processes can be described as

 I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts may be needed.

 CPU-bound process – spends more time doing computations; long CPU bursts
are required.

 System should have mix of both type of processes so I/O waiting queue and ready
queue will have equal and balanced work load.

Context Switch
 Many user processes and system processes run simultaneously.

 Switching the CPU from one process to another is a context switch.


 The system must save the state of the old process and load the saved state of the new
process.

 Context-switch time is overhead and the system does no useful work while switching.

 Switching speed varies from machine to machine depending upon memory, number of
registers and hardware support.

 Information in saved process image

 User data

 Program counter

 System stack

 PCB
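The save/load step of a context switch can be sketched as follows (CPU state and PCBs modeled as plain dictionaries; purely illustrative, since the real operation is done by the kernel in a few machine instructions):

```python
def context_switch(cpu: dict, old_pcb: dict, new_pcb: dict) -> None:
    """Sketch of a context switch: save the running process's CPU
    state into its PCB, then load the next process's saved state
    from its PCB onto the CPU."""
    old_pcb["registers"] = dict(cpu)            # save state of old process
    cpu.clear()
    cpu.update(new_pcb.get("registers", {}))    # load saved state of new one
```

During this operation the system does no useful work, which is why context-switch time is pure overhead.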

CPU Switch from Process to Process

UNIT 2: Process Management


Chapter-2 : Multithreaded Programming

Contents:

 Overview


 Multithreading Models

Overview
Multithreaded Processes
 An application may need to be implemented as a set of related units of execution. Two possible ways:

 Multiple processes

 Heavy weight

 Less efficient

 Multiple threads

 A thread

 is referred to as a lightweight process (LWP).

 is a basic unit of CPU utilization

 has a thread ID, a program counter, a register set, and a stack.

 shares a code section, data section, open files etc. with other threads belonging
to the same process.
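That sharing can be demonstrated with Python's threading module: several threads of one process update the same data section (a lock guards the shared list; kernel-level details are hidden by the library, so this is an illustration of the concept rather than of any particular thread model):

```python
import threading


def parallel_append(n_threads: int, per_thread: int) -> int:
    """Threads of one process share the data section: all workers
    append to the same list. A lock prevents lost updates.
    Returns the final length of the shared list."""
    shared = []
    lock = threading.Lock()

    def worker():
        for _ in range(per_thread):
            with lock:
                shared.append(1)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(shared)
```

Separate processes would each see their own private copy of the list; threads see one.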

Single and Multithreaded Processes

Multithreaded Processes
 A traditional (or heavy weight) process has a single thread of control.

 If a process can have multiple threads, it can do more than one task.

 Multiple threads

 Light weight because of shared memory

 Thread creation and termination is less time consuming

 Context switch between two threads takes less time

 Communication between threads is fast because they share address space,


intervention of kernel is not required.

 Multithreading refers to the ability of an OS to support multiple threads of execution


within a single process.


 A multithreaded process contains several different flows of control within the same
address space.

 Threads are tightly coupled.

 A single PCB & user space are associated with the process and all the threads belong
to this process.

 Each thread has its own thread control block (TCB) having following information:

 ID, counter, register set and other information.

Why multithreading
 Use of traditional processes incurs high overhead due to process switching.

 Two reasons for high process switching overhead:

 Unavoidable overhead of saving the state of running process and loading the
state of new process.

 Process is considered to be a unit of resource allocation, resource accounting


and interprocess communication.

 Use of threads splits the process state into two parts – resource state remain with the
process while execution state is associated with a thread.

 The thread state consists only of the state of the computation.

 This state needs to be saved and restored while switching between threads of a
process. The resource state remains with the process.

 Resource state is saved only when the kernel needs to perform switching
between threads of different processes.

Examples
 Web browser

 Displaying message or text or image

 Retrieves data from network

 Word processor

 Displaying graphics

 Reading key strokes

 Spelling and grammar checking

Benefits of Multithreading
 Responsiveness


 A process continues running even if one thread is blocked or performing


lengthy operations, thus increased response.

 Resource Sharing

 Threads share memory and other resources.

 It allows an application to have several different threads of activity, all within


the same address space.

 Economy

 Sharing the memory and other resources

 Thread management (creation, termination, switching between threads) is less


time consuming than process management.

 Utilization of MP Architectures

 Each thread may be running in parallel on different processor – increases


concurrency.

 A single threaded process can run on only one processor irrespective of


number of processors present.

Multithreading Models: User & Kernel Threads


 Support for threads may be provided either at user level i.e. user threads or by kernel
i.e. kernel threads.

 User threads are supported above the kernel and are managed without kernel support.

 Kernel threads are supported by OS.

Multithreading Models: User Threads


 These are supported above the kernel and are implemented by a thread library at user
level.

 Library provides thread creation, scheduling management with no support from


kernel.

 No kernel intervention is required.

 User level threads are fast to create and manage.

 Examples:

 POSIX Pthreads

 Mach C-threads

 Solaris threads
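The thread libraries listed above all expose roughly the same primitives: create a thread, let it run a function, and join it. The sketch below uses POSIX Pthreads; note that on most modern systems Pthreads are kernel-backed rather than pure user-level threads, and the function and variable names (`run_two_threads`, `results`) are our own illustration, not part of the library:

```c
#include <pthread.h>

/* Both threads run in the same address space as the process,
 * so they can write into the shared `results` array directly. */
static int results[2];

static void *worker(void *arg) {
    int id = *(int *)arg;
    results[id] = (id + 1) * 10;     /* stand-in for real work */
    return NULL;
}

/* Create two threads, wait for both, and return the combined result. */
int run_two_threads(void) {
    pthread_t tid[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);  /* block until thread i finishes */
    return results[0] + results[1];
}
```

Because the threads share the process's address space, no copying or message passing is needed to collect their results — this is the resource-sharing benefit described above.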


Multithreading Models: Kernel Threads


 Kernel performs thread creation, scheduling, management in kernel space.

 Kernel threads are slower to create and manage.

 If a thread performs a blocking system call, the kernel can schedule another kernel
thread.

 In multiprocessor environment, the kernel can schedule threads on different


processors.

 Most of the operating system support kernel threads also:

 Windows XP

 Solaris

 Mac OS X

 Tru64 UNIX

 Linux

Multithreading Models
 There must exist a relationship between user threads and kernel threads

 Many-to-One

 One-to-One

 Many-to-Many

Many-to-One Model
 Many user-level threads are mapped to single kernel thread.

 Thread management is done by the thread library in user space.

 It is fast, efficient.

 Drawback: If one user thread makes a blocking system call, the entire process blocks, because only a single kernel thread is available to run it.

 Green threads: a library for Solaris

 GNU Portable Threads


One-to-One Model
 Each user-level thread is mapped to one kernel thread

 It provides more concurrency than many to one model.

 When one thread makes a blocking system call, the kernel can schedule another thread, so the whole process is not blocked.

 It allows multiple threads to run in parallel.

 Drawback: Creating a user thread requires the creation of corresponding kernel thread
which is overhead.

 Windows 95/98/NT/2000/XP

 Linux

 Solaris 9 and later

Many-to-Many Model
 Many user-level threads are mapped to a smaller or equal number of kernel threads.

 Number of kernel threads may be specific to either application or a particular


machine.

 It allows many user level threads to be created.


 Corresponding kernel threads can run in parallel on multiprocessor.

Variation of Many-to-Many Model: two-level model


 The two-level model is a variation of the many-to-many model that also allows a user thread to be bound to a kernel thread (one-to-one).

 Examples

 IRIX

 HP-UX

 Tru64 UNIX

 Solaris 8 and earlier

Scheduling Algorithms
Basic Concepts: Multiprogrammed execution


[Figure: multiprogrammed execution — the CPU alternates between Program 1 and Program 2: P1 P2 P1 P2 P1 P2]
Basic Concepts
 How to obtain maximum CPU utilization with multiprogramming?

 CPU scheduling is the task of selecting a waiting process from the ready queue and
allocating the CPU to it.

Basic Concepts: CPU--I/O Burst Cycle


 CPU–I/O Burst Cycle

 Process execution consists of a cycle of CPU execution and I/O wait.

 CPU bound process

 Long CPU burst, Short I/O

 I/O bound process

 Short CPU burst, Long I/O

Basic Concepts: Histogram of CPU-burst Times

 Measured CPU-burst durations typically show a large number of short CPU bursts and a small number of long CPU bursts; I/O-bound programs produce many short bursts, CPU-bound programs a few long ones.

CPU Scheduler
 CPU Scheduler (short term scheduler) selects one process from the processes in
memory that are ready to execute, and allocates the CPU.

 The ready queue may be implemented as a FIFO queue, a tree, or a linked list, in which PCBs wait for the CPU to become available.

 CPU scheduling decisions may take place when a process:

 Switches from running to waiting state.

 Switches from running to ready state.

 Switches from waiting to ready.

 Terminates.

Nonpreemptive
 The running process is not forcibly removed from the CPU.

 Once the CPU has been allocated to a process, it keeps the CPU and leaves the running state only due to:

 I/O request


 Some OS service

 When a process terminates by itself.

Preemptive
 The running process can be forcibly removed from the CPU.

 The running process is interrupted by the OS and moved back to the ready state.

 It may occur when a new process arrives.

 Cost is more in preemptive scheduling.

Preemptive & Nonpreemptive


 Scheduling under nonpreemptive

 When a process switches from running to waiting state.

 When a process terminates.

 All other scheduling is preemptive

 When a process switches from running to ready state.

 When a process switches from waiting to ready.

 Windows 3.x used nonpreemptive scheduling; Windows 95 introduced preemptive scheduling.

Dispatcher
 Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:

 Switching context

 Switching to user mode

 Jumping to the proper location in the user program to restart that program

 Dispatch latency

 It is the time taken by the dispatcher to stop one process and start another running.

 Dispatcher should be very fast and dispatch latency should be less.

Scheduling Criteria
 Different CPU scheduling algorithms have different properties and choice of one
algorithm depends on different criteria.

 CPU utilization – Aim is to keep the CPU as busy as possible (lightly loaded: 40%,
heavily loaded: 90%)

(High)


 Throughput – number of processes that complete their execution per unit time (very
long process: 1 process/hour, short: 10 processes/sec)

(High)
 Turnaround time – amount of time to execute a particular process (from submission to
completion, includes execution time, I/O time and waiting time)

(Least)
 Waiting time – amount of time a process has been waiting in the ready queue (CPU
scheduling algorithm does not affect execution time and time required for I/O, it can
only reduce waiting time)

(Least)
 Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)

(Least)
Optimization Criteria for scheduling
 Max CPU utilization

 Max throughput

 Min turnaround time

 Min waiting time

 Min response time

Scheduling Algorithms
 First-Come, First-Served (FCFS)

 Shortest-Job-First (SJF)

 Priority Scheduling

 Round Robin (RR)

 Multilevel Queue Scheduling

 Multilevel Feedback Queue Scheduling

First-Come, First-Served (FCFS) Scheduling


 First request for CPU is served first.

 Implementation is managed by FIFO queue.

 A newly entered process is linked to the tail of the queue.

 When CPU is free, it is allotted to the process at the head of the queue.


 It is non-preemptive.

Process Burst Time (ms)


P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order:

P1 , P2 , P3
 Gantt Chart for the schedule

P1 P2 P3

0 24 27 30
 Waiting time for P1 = 0

P2 = 24
P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 51/3 = 17

 Process Burst Time

 P1 24

 P2 3

 P3 3

 Suppose that the processes arrive in the order

 P2 , P3 , P1 .

 The Gantt chart for the schedule

P2 P3 P1

0 3 6 30
 Waiting time for P1= 6; P2 = 0; P3 = 3

 Average waiting time: (6 + 0 + 3)/3 = 9/3 = 3

 Much better than previous case.
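The two orderings above can be checked mechanically. A minimal sketch (the helper name `fcfs_avg_wait` is ours): with all processes arriving at time 0, each process waits for the sum of the bursts scheduled before it.

```c
/* FCFS with all processes arriving at time 0: the waiting time of the
 * i-th scheduled process is the sum of the bursts before it. */
double fcfs_avg_wait(const int burst[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i waited until now */
        elapsed += burst[i];     /* it then runs for burst[i] */
    }
    return (double)total_wait / n;
}
```

For the two orderings above, `fcfs_avg_wait((int[]){24, 3, 3}, 3)` gives 17.0 and `fcfs_avg_wait((int[]){3, 3, 24}, 3)` gives 3.0, matching the hand computations.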

Convoy effect
 Thus the average waiting time under FCFS varies substantially if the CPU burst times vary greatly.

 Convoy effect is due to short processes behind a long process.

 All the processes wait for the one big process to get off the CPU.


 It results in lower CPU and device utilization.

 It is not good for time-sharing system.

Shortest-Job-First (SJF) Scheduling


 SJF associates with each process the length of its next CPU burst (Shortest-next-CPU
burst algorithm)

 CPU is assigned to the process with the shortest next CPU burst time.

 If two processes have the same length CPU burst, FCFS is used to break the tie.

 SJF is optimal and gives minimum average waiting time for a given set of processes.

Example of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3
 Gantt Chart for the schedule

P4 P1 P3 P2

0 3 9 16 24

Waiting time for each process

 P1 → 3, P2 → 16, P3 → 9, P4 → 0

 Average waiting time = (3 + 16 + 9 + 0)/4 = 28/4 = 7 ms
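With all processes arriving at time 0, nonpreemptive SJF is just FCFS applied to the bursts in ascending order. A sketch (helper name `sjf_avg_wait` is ours; it assumes at most 16 processes):

```c
#include <stdlib.h>

static int cmp_burst(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Nonpreemptive SJF with all processes arriving at time 0: sort the
 * bursts ascending, then compute waiting times as in FCFS.
 * Assumes n <= 16. */
double sjf_avg_wait(const int burst[], int n) {
    int sorted[16];
    for (int i = 0; i < n; i++) sorted[i] = burst[i];
    qsort(sorted, n, sizeof(int), cmp_burst);
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* this process waited until now */
        elapsed += sorted[i];
    }
    return (double)total_wait / n;
}
```

`sjf_avg_wait((int[]){6, 8, 7, 3}, 4)` gives 7.0, matching the example above.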

Shortest-Job-First (SJF) Scheduling


Two schemes:
 Nonpreemptive SJF

 Once CPU is given to the process it cannot be preempted until completes its
CPU burst.

 Preemptive SJF

 If a new process arrives with a CPU burst length less than the remaining time of the currently executing process, the current process is preempted. This scheme is known as

 Shortest-Remaining-Time-First (SRTF).

Example of Non-Preemptive SJF


Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4


P3 4.0 1
P4 5.0 4
 Gantt Chart for the schedule

P1 P3 P2 P4

0 7 8 12 16
 Average waiting time = (0 + (7-4) + (8-2) + (12-5))/4 = 16/4 = 4

Example of Preemptive SJF


Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 Gantt Chart for the schedule

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16
 Average waiting time = (9 + 1 + 0 +2)/4 = 12/4 = 3ms
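SRTF is easiest to verify by simulating one time unit at a time. A sketch (helper name `srtf_avg_wait` is ours; ties on remaining time are broken by lower index, and at most 16 processes are assumed):

```c
/* Preemptive SJF (SRTF), simulated one time unit at a time: run the
 * arrived, unfinished process with the smallest remaining burst.
 * Assumes n <= 16. */
double srtf_avg_wait(const int arrival[], const int burst[], int n) {
    int rem[16], finish[16], done = 0, t = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }  /* CPU idle: nothing arrived */
        rem[pick]--;
        t++;
        if (rem[pick] == 0) { finish[pick] = t; done++; }
    }
    int total_wait = 0;
    for (int i = 0; i < n; i++)           /* wait = turnaround - burst */
        total_wait += finish[i] - arrival[i] - burst[i];
    return (double)total_wait / n;
}
```

`srtf_avg_wait((int[]){0, 2, 4, 5}, (int[]){7, 4, 1, 4}, 4)` gives 3.0, matching the example above.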

Priority Scheduling
 A priority number (integer) is associated with each process.

 Priorities can be defined internally or externally.

 The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority).

 Equal priority processes are scheduled in FCFS order.

Priority Scheduling (nonpreemptive)


 A newly arrived process with the highest priority is placed at the head of the ready queue.

 Priority scheduling can be:

 Preemptive

 Nonpreemptive


Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
 The Gantt chart for the schedule is:

P2 P5 P1 P3 P4

0 1 6 16 18 19

 Average waiting time= (6+0+16+18+1)/5 = 41/5 = 8.2
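The nonpreemptive priority schedule above can be reproduced with a small helper (the name `prio_avg_wait` is ours): repeatedly pick the unfinished process with the smallest priority number and run it to completion.

```c
/* Nonpreemptive priority scheduling, all processes arriving at time 0:
 * pick the unfinished process with the smallest priority number
 * (highest priority) and run it to completion. Assumes n <= 16. */
double prio_avg_wait(const int burst[], const int prio[], int n) {
    int done[16] = {0}, t = 0, total_wait = 0;
    for (int k = 0; k < n; k++) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && (pick < 0 || prio[i] < prio[pick]))
                pick = i;
        total_wait += t;     /* picked process waited until now */
        t += burst[pick];    /* it now runs to completion */
        done[pick] = 1;
    }
    return (double)total_wait / n;
}
```

`prio_avg_wait((int[]){10, 1, 2, 1, 5}, (int[]){3, 1, 4, 5, 2}, 5)` gives 8.2, matching the example above.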

 Priority of newly arrived process is compared with already running process.

 Scheduler will preempt the CPU if the priority of newly arrived process is higher than
priority of running process.

Process Burst Time Priority Arrival Time


P1 6 3 0
P2 1 1 2
P3 2 4 3
P4 1 5 5
P5 3 2 6
 The Gantt chart for the schedule is:

P1 P2 P1 P5 P1 P3 P4

0 2 3 6 9 10 12 13

 Average waiting time= (4+0+7+7+0)/5 = 18/5 = 3.6

Priority Scheduling
 Problem is Starvation i.e. low priority processes may wait for indefinite time or never
execute.

(In a heavily loaded system, a steady stream of higher priority process can prevent a
low priority process from ever getting the CPU)


 Solution of starvation is Aging i.e. technique of gradually increasing the priority of


processes that wait in the system for a long time.

(e.g. if priorities range from 100 (lowest) to 0 (highest), the priority of a waiting process could be decremented by one every 10 minutes; eventually it reaches 0, the highest priority)
Round Robin (RR) Scheduling
 It is preemptive scheduling & designed for time-sharing system.

 Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds.

 After this time elapses, the process is preempted and added to the end of the ready
queue.

 CPU scheduler goes around the ready queue allocating the CPU to each process for a
time interval of up to 1 time quantum.

 Implementation is through FIFO queue of processes.

 New processes are added to the tail of the ready queue.

 CPU scheduler picks the first process from the ready queue, sets a timer to interrupt
after 1 time quantum and dispatches the process.

 After 1 time quantum, process either will finish the execution or preempted and
scheduler will schedule other process from head of the queue.

Example of RR with Time Quantum = 4


Process Burst Time
P1 24
P2 3
P3 3
 The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

 Average waiting time = ((10-4)+4+7)/3 = 17/3 = 5.67

 Process Burst Time

 P1 14

 P2 10

 P3 8

 The Gantt chart is:

P1 P2 P3 P1 P2 P3 P1 P2 P1

0 4 8 12 16 20 24 28 30 32

 Average waiting time = (18+20+16)/3 = 18

Round Robin (RR) Scheduling


 Performance of RR depends on the size of time quantum.

 If q (the time quantum) is large, RR behaves like FCFS.

 If q is small, the overhead due to context switching becomes too high.

 The time quantum should therefore be chosen optimally.
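The effect of the quantum can be seen by simulating RR directly. A sketch for the case where all processes arrive at time 0 (the helper name `rr_avg_wait` is ours; at most 16 processes assumed):

```c
/* Round Robin with all processes arriving at time 0: cycle through the
 * unfinished processes, giving each up to `quantum` units per turn.
 * Assumes n <= 16. */
double rr_avg_wait(const int burst[], int n, int quantum) {
    int rem[16], finish[16], done = 0, t = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int slice = rem[i] < quantum ? rem[i] : quantum;
            t += slice;                       /* run for one slice */
            rem[i] -= slice;
            if (rem[i] == 0) { finish[i] = t; done++; }
        }
    }
    int total_wait = 0;
    for (int i = 0; i < n; i++)
        total_wait += finish[i] - burst[i];   /* wait = turnaround - burst */
    return (double)total_wait / n;
}
```

For the two RR examples above, `rr_avg_wait((int[]){24, 3, 3}, 3, 4)` gives 17/3 ≈ 5.67 and `rr_avg_wait((int[]){14, 10, 8}, 3, 4)` gives 18.0.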

Process Management
1)The Process concept
 An operating system executes a variety of programs:

 Batch system – jobs

 Time-shared systems – user programs or tasks

 Program is a passive entity.

 A process is a program in execution.

 An active entity that can be assigned to and executed on a processor.

 A process is unit of work.

 It has limited time span.

 A process needs certain resources, including CPU time, memory, files, and I/O
devices, to accomplish its task.

1.1)The Process
 A program is a passive entity, i.e. a file containing a list of instructions stored on disk, called an executable file.

 A process is an active entity with a program counter specifying the next instruction to
execute and a set of associated resources.

 A program becomes a process when an executable file is loaded into memory. Two common techniques for loading executable files are:

 Double-clicking an icon representing the executable file

 Entering the name of the executable file on the command line (e.g. prog.exe or a.out)


 A process includes:

 Text section (Program code)


 Value of program counter and contents of processor’s registers (PCB)
 Stack (temporary data like function parameters, return addresses, local
variables)
 Data section (global variables)
 Heap (memory dynamically allocated to process to hold intermediate
computation data at run time)

Process in memory
 Two processes may be associated with the same program, but they are considered two separate execution sequences (e.g. several users running copies of the same mail program).

 A process may spawn many processes as it runs

1.2)Process State
 As the process executes, it changes state.

 The state of a process is defined in part by the current activity of that process.

 Each process may be in one of the following states:


 new: The process is being created.

 ready: The process is waiting to be assigned to a processor.

 running: Instructions are being executed.

 waiting: The process is waiting for some event to occur (I/O completion or reception of a signal).

 terminated: The process has finished execution.
 Only one process can be running on any processor at any instant.


Diagram of process state

1.3)Process Control Block (PCB)


 Each process is represented by a process control block (PCB), also called a task control block.

Process Control Block(PCB)


 PCB is repository of information

 Information associated with each process:

 Process state

 Program counter

 CPU registers

 CPU scheduling information

 Memory-management information

 Accounting information

 I/O status information

1. Process state

 The state may be new, ready, running, waiting, halted, and so on

2. Program counter

 The counter indicates the address of next instruction to be executed for


this process.

3. CPU registers

 Number and type of registers


 They include accumulators, index registers, stack pointers, general


purpose registers and any condition-code information.

 Along with the program counter, this state information must be saved
when an interrupt occurs, to allow the process to be continued correctly
afterward.

4. CPU scheduling information

 This includes a process priority, pointers to scheduling queues, and any


other scheduling parameters.

5. Memory-management information

 Value of base and limit registers, page table or segment table

6. Accounting information

 Amount of CPU and real time used, time limits, account numbers , job or
process numbers,

7. I/O status information

 List of I/O devices allocated to the process


 list of open files
----------------------------------------------------------------------------------------------------------------
2)Process Scheduling:
 The objective of multiprogramming is to have some processes running at all
time, to maximize CPU utilization.
 The objective of time sharing is to switch CPU among the processes so frequently
that users can interact with each process while it is running.
 To meet these objectives, process scheduler selects an available process for
execution on CPU.
 For a single-processor system, there will never be more than one running process.
 If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.


The ready queue and various I/O device queues

2.1)Scheduling Queues:
 Job queue

 Set of all processes in the system

 Ready queue

 Set of all processes residing in main memory, ready and waiting to execute

 Device queues

 Set of processes waiting for an I/O device

 Processes migrate among the various queues

 Ready queue

 Set of all processes residing in main memory, ready and waiting to execute.

 Header of the queue contains the pointers to first and last PCB.

 Each PCB contains a pointer to next PCB.

 I/O Device queues

 Set of processes waiting for an I/O device


 A new process initially put in ready queue, where it waits for CPU for execution.

 Once process is allocated CPU, following events may occur:

 The process issues an I/O request, and then be placed in an I/O device
queue.

 The process creates a new subprocess and waits for its termination.

 The process is removed forcibly from the CPU as a result of interrupt or


expired time slice.

 The process ends.

 A process migrates between the various scheduling queues throughout its life
time.

 The operating system must select processes from these scheduling queues in some fashion.

 The selection process is carried out by schedulers.


Process Scheduling: Types of Schedulers


 Long-term scheduler (or job scheduler)

 Selects which processes should be brought into the ready queue.

 Short-term scheduler (or CPU scheduler)

 Selects process from ready queue which should be executed next and
allocates CPU.

 Medium-term scheduler

 Swaps processes out of memory to secondary storage and later swaps them back into the ready queue.

[Figure: queueing diagram showing the long-term, medium-term and short-term schedulers]

Process Scheduling: Schedulers


 Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast.

 Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ may be slow.

 The long-term scheduler controls the degree of multiprogramming.

 Processes can be described as

 I/O-bound process – spends more time doing I/O than computations,


many short CPU bursts may be needed.

 CPU-bound process – spends more time doing computations; long CPU


bursts are required.

 System should have mix of both type of processes so I/O waiting queue and ready
queue will have equal and balanced work load.

Context Switch
 Many user processes and system processes run simultaneously.

 Switching CPU from one process to another process is context switch.


 The system must save the state of the old process and load the saved state of the
new process.

 Context-switch time is overhead and the system does no useful work while
switching.

 Switching speed varies from machine to machine depending upon memory,


number of registers and hardware support.

 Information in saved process image

 User data

 Program counter

 System stack

 PCB

CPU Switch from Process to Process

Operations on Processes
 In multiprogramming, processes may be created and deleted dynamically.

Operations on Processes: Process Creation


 One process (Parent process) can create several other processes (children
processes), which, in turn create other processes, forming a tree of processes.

 Each process is identified by a unique process identifier (PID), typically an integer.

Processes Tree on Solaris System


Process Creation
 Each child process needs resources, resource sharing can be handled as:

 Parent and child share no resources; the child obtains its resources directly from the OS.

 Parent and children share all the resources.

 Children share a subset of the parent’s resources; restricting a child to a subset prevents subprocesses from overloading the system.

 Execution

 Parent and children execute concurrently.

 Parent waits until children terminate.

 Address space

 Child is duplicate of parent (same data and program as parent has).

 Child has a program loaded into it.

 UNIX examples

 fork() system call creates new process

 exec() system call used after a fork to replace the process’ memory space
with a new program.
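The fork()/exec()/wait() pattern can be sketched as follows. This is POSIX-only illustration code; the helper name `spawn_and_wait` is ours, and the child simply exits with a recognizable status instead of actually calling exec():

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of the UNIX pattern: fork() duplicates the calling process;
 * the child would normally call an exec*() function to load a new
 * program, but here it just exits with a recognizable status so the
 * parent can observe it via waitpid(). */
int spawn_and_wait(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                    /* fork failed */
    if (pid == 0) {
        /* child: in real code, e.g. execlp("ls", "ls", NULL); */
        _exit(42);
    }
    int status;
    waitpid(pid, &status, 0);         /* parent blocks until child ends */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The return value is the child's exit status, which is how a child "may return output data to its parent via wait()", as noted in the next section.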


Process Termination
 Process executes its last statement and asks the OS to terminate it through exit().

 Child process may return output data to its parent via wait().

 Process’ resources like virtual memory, open files, I/O buffers are
deallocated by operating system.

 A process can cause the termination of another process through an appropriate system call (e.g. TerminateProcess() in Win32).

 A parent may terminate execution of its children using abort(). Reasons may be:

 Child has exceeded allocated resources.

 Task assigned to child is no longer required.

 Parent is exiting:

 Some operating systems do not allow a child to continue if its parent terminates; thus cascading termination takes place.

Cooperating Processes
 The concurrent processes executing in the OS may be either independent
processes or cooperating processes.

 Independent processes cannot affect or be affected by the execution of another


process.

 Cooperating processes (share data with other) can affect or be affected by the
execution of another process.

Advantages of process cooperation


 Information sharing

 Same information may be used by more processes (e.g. shared file)

 Computation speed-up

 A complex problem can be divided in subtasks and submitted for


execution in parallel (but multiple processing elements are required)

Advantages of process cooperation


 Modularity

 System in modular fashion through separate processes and threads.

 Convenience

 An individual user may have many tasks on which to work at same time
(editing, printing, compiling).


 Interprocess communication allows the processes to exchange data and


information.

 Models for Interprocess Communication

 Shared Memory

 Message passing

 Shared Memory

 A region of memory is shared, processes can exchange information by


reading and writing data to the shared region.

 It is fast and useful for large data.

 Message passing

 Communication takes place by message passing between the processes

 It is useful for exchanging small amount of data.

 Mostly used in distributed environment

Cooperating Processes: Shared Memory


 This requires establishing a region of shared memory, which resides in the address space of the process creating the shared-memory segment.

 Communicating processes can exchange information by reading and writing


data in the shared area.

 The shared area needs to be maintained by the communicating processes.

Example: producer- consumer process


 Producer process produces information that is consumed by a consumer
process


 A buffer in memory is declared that can be filled by producer and


emptied by consumer.

 Producer and consumer must be synchronized as consumer should not


try to consume the data which has not been produced yet.

Example: producer- consumer process


 Unbounded-buffer solution

 It has no practical limit on the size of the buffer

 Consumer may wait for new data but producer can always
produce new data.

 Bounded-buffer solution

 It has fixed buffer size

 Consumer must wait if buffer is empty and producer must wait if


buffer is full.

Bounded-buffer solution
 Shared buffer is implemented as a circular array with two logical pointers in and
out.

 Variable in points to next free position in the buffer and out points to first full
position in the buffer.

 Buffer is empty when

in == out
 Buffer is full when

((in + 1) % BUFFER_SIZE) == out


#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Producer Process
item nextProduced;
while (true) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
Consumer Process


item nextConsumed;
while (true) {
    while (in == out)
        ; /* do nothing -- buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}
