
Module 2

Process Scheduling



• Multiple processes exist concurrently in main memory.

• Each process alternates between using a processor and waiting for some event to occur.

• The processor is kept busy by executing one process while the others wait.

• The key to multiprogramming is scheduling.

• Four kinds of scheduling are involved: long-term, medium-term, short-term, and I/O scheduling.

TYPES OF PROCESSOR SCHEDULING
• The aim of processor scheduling is to assign processes to be executed by the processor or
processors over time, in a way that meets system objectives such as response time,
throughput, and processor efficiency.

• The names of the scheduling functions suggest the relative time scales on which they are
performed.



• Long-term scheduling is performed when a new process is created. This is a decision
whether to add a new process to the set of processes that are currently active.

• Medium-term scheduling is a part of the swapping function. This is a decision whether to add
a process to those that are at least partially in main memory and therefore available for
execution.

• Short-term scheduling is the actual decision of which ready process to execute next.



• Scheduling affects the performance of the system because it determines which processes will
wait and which will progress.

• Scheduling is fundamentally about managing queues: the aim is to minimize queueing delay
and to optimize performance in a queueing environment.



Levels of Scheduling

Long-Term Scheduling
• The long-term scheduler determines which programs are admitted to the system for
processing. Thus, it controls the degree of multiprogramming.

• Once admitted, a job or user program becomes a process and is either

(i) added to the queue for the short-term scheduler, or

(ii) swapped out and added to a queue for the medium-term scheduler.



• In a batch system, or for the batch portion of an OS, newly submitted jobs are routed to disk
and held in a batch queue.

• The long-term scheduler creates processes from the queue when it can.

• There are two decisions involved.

(i) The scheduler must decide when the OS can take on one or more additional processes.

(ii) The scheduler must decide which job or jobs to accept and turn into processes.



• The decision as to when to create a new process is generally driven by the desired degree of
multiprogramming.

• The more processes that are created, the smaller the percentage of time that each process can
be executed.

• Thus, the long-term scheduler may limit the degree of multiprogramming to provide
satisfactory service to the current set of processes.

• The long-term scheduler may decide to add one or more new jobs

(i) each time a job terminates, or

(ii) if the fraction of time that the processor is idle exceeds a certain threshold.



• The decision as to which job to admit next can be made on a simple FCFS basis.

• The criteria used may include priority, expected execution time, and I/O requirements.

• For interactive programs in a time-sharing system, a process is generated when the user attempts to
log in.

• Time-sharing users are not simply queued up and kept waiting until the system can accept them.

• The OS will accept all authorized users until the system is saturated, using some predefined
measure of saturation.

• At that point, a connection request is met with a message indicating that the system is full and the user
should try again later.



Medium- and Short-Term Scheduling
• Medium-term scheduling is part of the swapping function, to manage the degree of
multiprogramming

• The short-term scheduler, also known as the dispatcher, executes most frequently and makes
the fine-grained decision of which process to execute next.

• The short-term scheduler is invoked whenever an event occurs that may lead to the blocking
of the current process or that may provide an opportunity to preempt a currently running
process in favor of another.

• Examples of such events include:

• Clock interrupts, I/O interrupts, Operating system calls, Signals (e.g., semaphores)



SCHEDULING ALGORITHMS - Short-Term Scheduling Criteria

• The main objective of short-term scheduling is to allocate processor time in a way that
optimizes system behavior.

• A set of criteria is established against which various scheduling policies may be evaluated.

• These criteria are categorized as user-oriented or system-oriented.

• User-oriented criteria relate to the behavior of the system as observed by the individual user
or process, e.g., response time in an interactive system.

• System-oriented criteria focus on effective and efficient utilization of the processor,
e.g., throughput, the rate at which processes are completed. This is a measure of
system performance that should be maximized.



• Criteria can also be classified as

(i) performance related (eg: response time and throughput)

(ii) not directly performance related (eg: predictability)

• Good response time requires a scheduling algorithm that switches between processes
frequently, which increases system overhead and reduces throughput.

• Thus, the design of a scheduling policy involves compromising among competing
requirements.

• The relative weight given to the various requirements depends on the nature and intended use
of the system.



The Use of Priorities

• In a priority-based scheme, the scheduler always chooses a ready process of higher priority
over one of lower priority; typically, a separate ready queue is maintained for each priority
level.


• Disadvantage:

• Lower-priority processes may suffer in a pure priority scheduling scheme, if there is always a
steady supply of higher-priority ready processes.

• If this behavior is not desirable, the priority of a process can be changed with its age or
execution history.



Alternative Scheduling Policies
• Each scheduling policy includes a selection function that determines which process, among
ready processes, is selected next for execution.

• The function may be based on priority, resource requirements, or the execution
characteristics of the process.

• In the latter case, three quantities are useful:
w = time spent in system so far, waiting
e = time spent in execution so far
s = total service time required by the process, including e


• Decision mode specifies the instants in time at which the selection function is exercised.

• Two general categories are: Nonpreemptive and Preemptive.

Nonpreemptive: Once a process is in the Running state, it continues to execute until
(a) it terminates, or
(b) it blocks itself to wait for I/O or to request some OS service.

Preemptive: The currently running process may be interrupted and moved to the Ready
state by the OS.
The decision to preempt may be made
(a) when a new process arrives,
(b) when an interrupt occurs that places a blocked process in the Ready state, or
(c) periodically, based on a clock interrupt.



• Preemptive policies have greater overhead than nonpreemptive ones but may provide better
service to the total population of processes, because they prevent any one process from
monopolizing the processor for very long.

• Also, the cost of preemption may be kept relatively low by using efficient process-switching
mechanisms and by providing a large main memory to keep a high percentage of programs in
main memory.

• Turnaround time (TAT) is the residence time Tr, the total time that the item spends in the
system (waiting time plus service time). Normalized turnaround time is the ratio Tr/Ts, where
Ts is the service time; it indicates the relative delay experienced by a process, and its minimum
possible value is 1.0.



Scheduling Policies at a Glance



Scheduling Policy - FIRST-COME-FIRST-SERVED
• Also known as FIFO (first-in, first-out).

• As each process becomes ready, it joins the ready queue.

• When the currently running process ceases to execute, the process that has been in the ready
queue the longest is selected for running.



• The normalized turnaround time for process Y is way out of line compared to the other
processes: the total time that it is in the system is 100 times the required processing time.
This will happen whenever a short process arrives just after a long process.

• Process Z has a turnaround time that is almost double that of Y, but its normalized residence
time is under 2.0.
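
The example table the slides refer to is not reproduced here. A minimal sketch that reproduces the quoted ratios, assuming a hypothetical workload (W: arrive 0, serve 1; X: arrive 1, serve 100; Y: arrive 2, serve 1; Z: arrive 3, serve 100):

```python
# Hypothetical (name, arrival, service) workload chosen so that the
# turnaround ratios match the ones quoted above.
procs = [("W", 0, 1), ("X", 1, 100), ("Y", 2, 1), ("Z", 3, 100)]

t = 0
for name, arrival, service in procs:   # FCFS: run in arrival order
    t = max(t, arrival) + service      # finish time
    tat = t - arrival                  # turnaround time Tr
    print(name, "Tr =", tat, "Tr/Ts =", round(tat / service, 2))
# Y: Tr = 100, Tr/Ts = 100.0    Z: Tr = 199, Tr/Ts = 1.99
```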

• Another drawback of FCFS is that it tends to favor processor-bound processes over I/O-bound processes.



• When a processor-bound process is running, all of the I/O bound processes must wait.

• Some of these may be in I/O queues (blocked state) but may move back to the ready queue
while the processor-bound process is executing.

• At this point, most or all of the I/O devices may be idle, even though there is potentially work
for them to do.

• When the currently running process leaves the Running state, the ready I/O-bound processes
quickly move through the Running state and become blocked on I/O events.

• If the processor-bound process is also blocked, the processor becomes idle.

• Thus, FCFS may result in inefficient use of both the processor and the I/O devices.



• FCFS is not an attractive alternative on its own for a uniprocessor system.

• However, it is often combined with a priority scheme to provide an effective scheduler.

• The scheduler may maintain a number of queues, one for each priority level, and dispatch
within each queue on a first-come-first-served basis



ROUND ROBIN
• Round robin reduces the penalty that short jobs suffer under FCFS by using preemption
based on a clock.

• A clock interrupt is generated at periodic intervals.

• When the interrupt occurs, the currently running process is placed in the ready queue, and the
next ready job is selected on an FCFS basis.

• This technique is also known as time slicing because each process is given a slice of time
before being preempted

• With round robin, the principal design issue is the length of the time quantum (slice) to be
used.

• If the quantum is very short, then short processes will move through the system relatively
quickly.

• However, a short quantum increases the overhead of handling the clock interrupt and
performing the scheduling and dispatching functions. Thus, very short time quanta should be
avoided.

• A useful guide is to set the time quantum slightly greater than the time required for a typical
interaction or process function.

• If it is less, then most processes will require at least two time quanta.



• Round robin is particularly effective in a general-purpose time-sharing system or transaction
processing system.

• One drawback of round robin is its relative treatment of processor-bound and I/O-bound
processes: an I/O-bound process typically runs only briefly before blocking for I/O, so it
tends to receive less than its fair share of processor time.

• A solution is the virtual round robin (VRR) scheduler, in which processes returning from
I/O are placed in an auxiliary queue that gets preference over the main ready queue.

SHORTEST PROCESS NEXT
• This is a nonpreemptive policy in which the process with the shortest expected processing
time is selected next.

• Thus, a short process will jump to the head of the queue past longer jobs.

• One difficulty with the SPN policy is the need to know or at least estimate the required
processing time of each process.

• For batch jobs, the system may require the programmer to estimate the value and supply it to
the OS.

• One approach to estimating the required time is a simple average of the observed burst
times: S_{n+1} = (1/n) Σ_{i=1..n} T_i, where T_i is the processor execution time for the i-th
instance and S_{n+1} is the predicted value for the next instance.

• Each term in this summation is given equal weight; that is, each term is multiplied by the
same constant 1/n. Typically, we would like to give greater weight to more recent instances,
which leads to exponential averaging: S_{n+1} = α·T_n + (1 − α)·S_n, with 0 < α < 1.
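
A minimal sketch comparing the two estimators (the burst history, α value, and initial guess are assumptions for illustration):

```python
# Burst-time estimation for SPN: equal-weight average vs. exponential average.
def simple_average(history):
    # Every observed burst T_i is weighted equally by 1/n.
    return sum(history) / len(history)

def exponential_average(history, alpha=0.8, s0=10.0):
    # S_{n+1} = alpha*T_n + (1 - alpha)*S_n, seeded with a default guess s0.
    s = s0
    for t in history:
        s = alpha * t + (1 - alpha) * s
    return s

bursts = [6, 4, 6, 4, 13, 13, 13]    # hypothetical observed burst times
print(simple_average(bursts))         # ~8.43: slow to track the shift to 13
print(exponential_average(bursts))    # ~12.9: weights recent bursts more
```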



Process   Burst time (ms)
P1        24
P2        3
P3        3

• FCFS (arrival order P1, P2, P3): average turnaround time = (24 + 27 + 30)/3 = 27 ms



• SJF/SPN (execution order P2, P3, P1): average turnaround time = (3 + 6 + 30)/3 = 13 ms
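
A small check of both results (all three processes are assumed to arrive at t = 0, as in the example):

```python
# FCFS vs. SPN turnaround for the workload above.
bursts = {"P1": 24, "P2": 3, "P3": 3}   # all arrive at t = 0

def avg_turnaround(order):
    t, total = 0, 0
    for p in order:
        t += bursts[p]       # finish time of p
        total += t           # turnaround = finish - arrival (arrival = 0)
    return total / len(order)

print(avg_turnaround(["P1", "P2", "P3"]))  # FCFS: 27.0 ms
print(avg_turnaround(["P2", "P3", "P1"]))  # SPN:  13.0 ms
```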

Round Robin
• Assume a time quantum of 4 ms.

• Waiting time for P1 = 0 + (10 − 4) = 6 ms
• Waiting time for P2 = 4 ms
• Waiting time for P3 = 7 ms

• Hence average waiting time = 17/3 ≈ 5.66 ms
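
A minimal round-robin simulator that reproduces these numbers (all processes assumed to arrive at t = 0); the quantum is a parameter, so the same function can be rerun for other quanta:

```python
# Round-robin waiting times; assumes every process arrives at t = 0.
from collections import deque

def rr_waiting_times(bursts, quantum):
    remaining = dict(bursts)
    last_ran = {p: 0 for p in bursts}    # when p last left the CPU
    waiting = {p: 0 for p in bursts}
    queue, t = deque(bursts), 0
    while queue:
        p = queue.popleft()
        waiting[p] += t - last_ran[p]    # time spent ready since last run
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        last_ran[p] = t
        if remaining[p] > 0:
            queue.append(p)              # not finished: back of the queue
    return waiting

print(rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, 4))
# {'P1': 6, 'P2': 4, 'P3': 7}  -> average 17/3 ≈ 5.66 ms
```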



• Exercise: Draw the Gantt chart and find the average waiting time and turnaround time,
assuming a time quantum of 2 ms.



• Waiting time for P1 = 0 + (6 − 2) + (10 − 8) + (13 − 12) = 7 ms
• Waiting time for P2 = 2 + (8 − 4) + (12 − 10) = 8 ms
• Waiting time for P3 = 4 ms

• Hence average waiting time = 19/3 ≈ 6.33 ms



Characteristics of Various Scheduling Policies



Multilevel Queue scheduling
• Multilevel queue scheduling is a type of CPU scheduling in which the processes in the
ready state are divided into different groups, each group having its own scheduling needs.

• The ready queue is divided into different queues according to different properties of the
process like memory size, process priority, or process type.

• Each queue can be managed differently; i.e., each queue can have its own scheduling
algorithm.



Properties of Multilevel Queue Scheduling

• Multilevel Queue Scheduling distributes the processes into multiple queues based on the
properties of the processes.

• Each queue has its own priority level and scheduling algorithm to manage the processes
inside that queue.

• Queues can be arranged in a hierarchical structure.

• High-priority queues might use round robin scheduling, while low-priority queues might use
first-come-first-served scheduling.

• Processes cannot move between queues. This algorithm prioritizes different types of
processes and ensures fair resource allocation.

How are the Queues Scheduled?
• Scheduling among the queues decides which queue's processes get the CPU first. Two
methods are employed to do this:

Fixed priority preemptive scheduling

Time slicing.



Fixed Priority Preemptive Scheduling Method:

Every queue has absolute priority over all lower-priority queues: unless Queue1 is empty, no
process in Queue2 can execute, and so on.

Time Slicing:

Each queue gets a slice or portion of the CPU time for scheduling its own processes.

For example, Queue1 might get 40% of the CPU time, with the remaining 60% of the CPU
time assigned as 40% to Queue2 and 20% to Queue3.
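
A minimal sketch of the fixed-priority preemptive method (the queue names and their contents are hypothetical):

```python
# Fixed-priority dispatch across queues: always serve the highest-priority
# nonempty queue; Queue2 runs only while Queue1 is empty, and so on.
queues = {
    "Queue1": ["sysA"],              # highest priority (e.g., system processes)
    "Queue2": ["inter1", "inter2"],  # interactive processes
    "Queue3": ["batch1"],            # lowest priority (e.g., batch processes)
}

def pick_next():
    # Scan from highest to lowest priority and dispatch from the first
    # nonempty queue.
    for name in ("Queue1", "Queue2", "Queue3"):
        if queues[name]:
            return name, queues[name].pop(0)
    return None

print(pick_next())   # ('Queue1', 'sysA')
print(pick_next())   # ('Queue2', 'inter1'), once Queue1 is empty
```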



Advantages:
• Users can apply different scheduling methods to every queue to distinguish the processes.
• The scheduling overhead is very low.

Disadvantages:
• A process may go into starvation if it has low priority.
• This scheduling is rigid and difficult to implement.



Multilevel Feedback Scheduling
• It enables a process to switch between queues.

• If a process consumes too much processor time, it will be switched to the lowest priority
queue

• A process waiting in a lower priority queue for too long may be shifted to a higher priority
queue. This type of aging prevents starvation



The parameters of the multilevel feedback queue scheduler are as follows:

• The scheduling algorithm for every queue in the system.

• The number of queues in the system.

• The method used to determine when a process should be demoted to a lower-priority queue.

• The method used to determine when a process should be upgraded to a higher-priority queue.

• The method used to determine which queue a process will enter when it needs service.

• A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of
8 milliseconds.

• If it does not finish within this time, it is moved to the tail of queue 1.

• If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds.

• If it does not complete, it is preempted and is put into queue 2.

• Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
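
A minimal sketch of this three-queue scheme (burst times are hypothetical, all work is assumed present at t = 0, so preemption of queue 1 or 2 by newly arriving queue-0 work is not modeled):

```python
# Three-level MLFQ: quanta of 8 and 16 ms, FCFS at the bottom.
from collections import deque

def mlfq(bursts):
    q = [deque(bursts.items()), deque(), deque()]   # queue 0, 1, 2
    quantum = [8, 16, None]                         # None = FCFS, run to completion
    t = 0
    while any(q):
        level = next(i for i in range(3) if q[i])   # highest nonempty queue
        name, rem = q[level].popleft()
        run = rem if quantum[level] is None else min(quantum[level], rem)
        t += run
        rem -= run
        if rem > 0:                                  # used its full quantum:
            q[min(level + 1, 2)].append((name, rem)) # demote one level
        else:
            print(f"{name} finishes at t={t}")

mlfq({"A": 5, "B": 30})   # A finishes at t=5; B runs 8 ms, then 16, then 6 (FCFS)
```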



Advantages of MFQS

• This is a flexible scheduling algorithm.

• This scheduling algorithm allows different processes to move between different queues.

• In this algorithm, a process that waits too long in a lower priority queue may be moved to a
higher priority queue which helps in preventing starvation.

Disadvantages of MFQS

• This algorithm is more complex than the alternatives.

• Moving processes between queues produces additional CPU overhead.

• Selecting the best values for all the parameters (number of queues, quanta, upgrade and
demote rules) requires some other means of tuning.



Process and Threads
• Processes are programs that are dispatched from the ready state and scheduled on the CPU for
execution. A process is represented in the OS by its process control block (PCB). A process can create
other processes, which are known as child processes. A process takes more time to terminate than a
thread, and it is isolated: it does not share memory with any other process.

• A process can be in states such as new, ready, running, waiting, terminated, and suspended.

• A thread is a segment of a process, meaning a process can contain multiple threads. A thread has
three states: Running, Ready, and Blocked.

• A thread takes less time to terminate than a process, and unlike processes, threads within the same
process are not isolated from one another.



• A thread is also known as a lightweight process.

• The main motives for threads:

• Support multiple activities in a single application at the same time
• Being lightweight, threads are easier to create and destroy than processes
• Performance enhancement

• The concept of a process is more complex and subtle than presented so far; it in fact embodies
two separate and potentially independent concepts: resource ownership and execution.

• These two characteristics are independent and could be treated independently by the OS.



• Resource ownership: A process includes a virtual address space that holds the process image
(program, data, stack, and the attributes defined in the PCB).

• From time to time, a process may be allocated control or ownership of resources, such as
main memory, I/O channels, I/O devices, and files.

• The OS performs a protection function to prevent unwanted interference between processes
with respect to resources.



• Scheduling/execution: The execution of a process includes execution of one or more
programs.

• This execution may be interleaved with that of other processes.

• Thus, a process has an execution state (Running, Ready, etc.) and a dispatching priority and
is the entity that is scheduled and dispatched by the OS.



• This has led to the development of a construct known as the thread.

• To distinguish the two characteristics:

the unit of dispatching is usually referred to as a thread or lightweight process

the unit of resource ownership is usually referred to as a process or task



TYPES OF THREADS
• User-Level Threads (ULT) and Kernel-Level Threads (KLT)

• USER-LEVEL THREADS: In a pure ULT facility, all of the work of thread management is
done by the application, and the kernel is not aware of the existence of threads.

• Any application can be programmed to be multithreaded by using a threads library, which is
a package of routines for ULT management.

• The threads library contains code for creating and destroying threads, for passing messages
and data between threads, for scheduling thread execution, and for saving and restoring
thread contexts.



A user thread is an entity used by programmers to handle multiple flows of control within a
program.

The API for handling user threads is provided by the threads library.

A user thread only exists within a process; a user thread in process A cannot reference a user
thread in process B.



• By default, an application begins with a single thread and begins running in that thread.

• This application and its thread are allocated to a single process managed by the kernel.

• At any time while it is running, the application may spawn a new thread to run within the same
process.

• Spawning is done by a procedure call to the spawn utility in the threads library.

• The threads library creates a data structure for the new thread and then passes control to one of the
threads within this process that is in the Ready state, using some scheduling algorithm.

• When control is passed to the library, the context of the current thread is saved, and when control is
passed from the library to a thread, the context of that thread is restored.

• The context essentially consists of the contents of user registers, the program counter, and stack
pointers.
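
The mechanism can be sketched in a few lines. Here Python generators stand in for threads, and `yield` plays the role of the library call that saves the current context; everything runs in user space on a single kernel-scheduled thread (an illustration of the idea, not a real threads library):

```python
# A minimal user-level "threads library" sketch using Python generators.
from collections import deque

ready = deque()                      # ready queue managed by the "library"

def spawn(fn, *args):
    ready.append(fn(*args))          # create thread state (a generator)

def run():                           # the library's round-robin scheduler
    while ready:
        thread = ready.popleft()
        try:
            next(thread)             # resume: context restored implicitly
            ready.append(thread)     # it yielded: back to the ready queue
        except StopIteration:
            pass                     # thread finished

def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                        # voluntarily return control to the library

spawn(worker, "T1", 2)
spawn(worker, "T2", 2)
run()    # T1/T2 interleave with no kernel involvement in the scheduling
```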



• All of the thread activity takes place in user space and within a single process.

• The kernel is unaware of this activity.

• The kernel continues to schedule the process as a unit and assigns a single execution state
(Ready, Running, Blocked, etc.) to that process

• When the kernel later switches control back to a process, execution resumes in whichever thread of
that process was last running.

• A process can be interrupted, either by exhausting its time slice or by being preempted by a
higher priority process, while it is executing code in the threads library.

• Thus, a process may be in the midst of a thread switch from one thread to another when
interrupted.

• When that process is resumed, execution continues within the threads library, which
completes the thread switch and transfers control to another thread within that process



KERNEL-LEVEL THREADS
• In a pure KLT facility, all of the work of thread management is done by the kernel.

• There is no thread management code at the application level, simply an application
programming interface (API) to the kernel thread facility.



• The kernel maintains context information for the process as a whole and for the individual
threads within the process.

• Scheduling by the kernel is done on a thread basis.

• Advantages over the ULT approach:

1. Kernel can simultaneously schedule multiple threads from the same process on multiple processors.

2. If one thread in a process is blocked, the kernel can schedule another thread of the same process.

3. Kernel routines themselves can be multithreaded.

• The principal disadvantage of the KLT approach compared to the ULT approach is that the transfer of
control from one thread to another within the same process requires a mode switch to the kernel.
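
For illustration, Python's `threading` module creates kernel-scheduled threads on the major platforms, so a blocking call in one thread (here `sleep`, standing in for I/O) does not stall its siblings, matching advantage 2 above:

```python
# Kernel-level threads: one thread blocking in the kernel does not
# prevent another thread of the same process from being scheduled.
import threading, time

def io_bound():
    time.sleep(0.5)                    # blocks in the kernel, as I/O would
    print("io_bound: done")

def cpu_bound():
    for i in range(3):
        print(f"cpu_bound: step {i}")  # keeps running while io_bound blocks
        time.sleep(0.1)

t1 = threading.Thread(target=io_bound)
t2 = threading.Thread(target=cpu_bound)
t1.start(); t2.start()
t1.join(); t2.join()
```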



Multithreading
• Multithreading is the ability of a program or an OS to enable more than one user at a time
without requiring multiple copies of the program running on the computer.

• Multithreading can also handle multiple requests from the same user.

• Multithreading refers to the ability of an OS to support multiple, concurrent paths of
execution within a single process.

• The traditional approach, in which there is a single thread of execution per process and the
concept of a thread is not recognized, is referred to as the single-threaded approach.



• Each user request for a program or system service is tracked as a thread with a separate
identity.

• As programs work on behalf of the initial thread request and are interrupted by other
requests, the work status of the initial request is tracked until the work is completed

• Fast CPU speed and large memory capacities are needed for multithreading.

• A single processor executes pieces, or threads, of various programs so fast that it appears the
computer is handling multiple requests simultaneously.

• In a multithreaded environment, a process is defined as

1. the unit of resource allocation, and

2. the unit of protection.

• The following are associated with processes:

• A virtual address space that holds the process image

• Protected access to processors, other processes (for inter process communication), files,
and I/O resources (devices and channels)



Within a process, there may be one or more threads, each with the following:

• A thread execution state (Running, Ready, etc.)

• A saved thread context when not running; one way to view a thread is as an independent
program counter operating within a process

• An execution stack

• Some per-thread static storage for local variables

• Access to the memory and resources of its process, shared with all other threads in that
process



Distinction between threads and processes
• In a single-threaded process model, the representation of a process includes its PCB, user
address space, and user and kernel stacks to manage the call/return behavior of the execution
of the process.

• While the process is running, it controls the processor registers. The contents of these
registers are saved when the process is not running.



Distinction between threads and processes
• In a multithreaded environment, there is still a single PCB and user address space associated
with the process.

• But there are

• separate stacks for each thread
• a separate control block for each thread



• All of the threads of a process share the state and resources of that process.

• They reside in the same address space and have access to the same data.

• When one thread alters an item of data in memory, other threads see the change when
accessed.

• If one thread opens a file with read privileges, other threads in the same process can also read
from that file.



The key benefits of threads
• It takes far less time to create a new thread in an existing process than to create a brand-new
process.

• It takes less time to terminate a thread than a process.

• It takes less time to switch between two threads within the same process than to switch
between processes

• Threads enhance efficiency in communication between executing entities, since threads
within the same process can communicate without invoking the kernel.

Thus, if there is an application or function that should be implemented as a set of related units of
execution, it is far more efficient to do so as a collection of threads rather than a collection of
separate processes.
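
A rough way to observe the first benefit (absolute numbers are platform-dependent; the point is the gap between the two loops):

```python
# Thread creation vs. process creation timing sketch.
import threading, multiprocessing, time

def noop():
    pass

if __name__ == "__main__":          # required for multiprocessing's spawn mode
    start = time.perf_counter()
    for _ in range(100):
        t = threading.Thread(target=noop)
        t.start(); t.join()
    print("100 threads:  ", time.perf_counter() - start, "s")

    start = time.perf_counter()
    for _ in range(100):
        p = multiprocessing.Process(target=noop)
        p.start(); p.join()
    print("100 processes:", time.perf_counter() - start, "s")  # much slower
```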



Thread Functionality - THREAD STATES
• A thread can be in one of three states: Running, Ready, and Blocked.

• Four basic thread operations are associated with a change in thread state:

Spawn: When a new process is spawned, a thread for that process is also spawned. A thread
within a process may also spawn another thread within the same process, providing an
instruction pointer and arguments for the new thread. The new thread is provided with its
own register context and stack space and placed on the ready queue.

Block: When a thread needs to wait for an event, it blocks. The processor may then turn to
the execution of another ready thread in the same or a different process.

Unblock: When the event for which a thread is blocked occurs, the thread is moved to the
Ready queue.

Finish: When a thread completes, its register context and stacks are deallocated.



THREAD SYNCHRONIZATION
• All of the threads of a process share the same address space and other resources, such as
open files.

• Any alteration of a resource by one thread affects the environment of the other threads in the
same process.

• It is therefore necessary to synchronize the activities of the various threads so that they do
not interfere with each other or corrupt data structures.
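
A minimal sketch of why this is needed and how a lock provides it (the shared counter and iteration counts are illustrative):

```python
# Two threads updating shared data; the Lock prevents lost updates,
# because "counter += 1" is not atomic (read, add, write back).
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:             # mutual exclusion around the critical section
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                 # 200000, deterministically, thanks to the lock
```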



Multi-Threading Models

• Multithreading allows the execution of multiple parts of a program at the same time.

• These parts are known as threads and are lightweight processes available within the process.
Therefore, multithreading leads to maximum utilization of the CPU by multitasking.

• The main models for multithreading:

1. one to one model

2. many to one model

3. many to many model.



One to One Model
• The one-to-one model maps each user thread to a kernel thread.

• Multiple threads can run on multiple processors; i.e., many threads can run in parallel on a
multiprocessor.

• Since each user thread is backed by a different kernel thread, if any user thread makes a
blocking system call, the other user threads are not blocked.



Many to One Model
• Here, many user threads are mapped to a single kernel thread.

• This model is quite efficient because thread management is handled in user space.

• But when a user thread makes a blocking system call, the entire process blocks.

• As only one kernel thread exists, only one user thread can access the kernel at a time, so
multiple threads cannot run on multiple processors at the same time.



Many to Many Model
• Maps many user threads to an equal or smaller number of kernel threads.

• The number of kernel threads depends on the application or machine.

• There can be as many user threads as required, and their corresponding kernel threads can
run in parallel on a multiprocessor.

• If a user thread is blocked, other user threads can be scheduled onto other kernel threads;
thus, the system does not block when a particular thread is blocked.



Multiprocessor scheduling
• Multiprocessor scheduling concerns the design of the scheduling function for a system with
more than one processor.

• In multiprocessor scheduling, multiple CPUs share the load so that various processes run
simultaneously.

• The multiple CPUs in the system are in close communication and share a common bus,
memory, and other peripheral devices, so the system is said to be tightly coupled.





Symmetric vs. Asymmetric Multiprocessing

• Symmetric multiprocessing: Each processor is self-scheduling. All processes may be in a
common ready queue, or each processor may have its own private queue of ready processes.
The scheduler for each processor examines the ready queue and selects a process to execute.

• Asymmetric multiprocessing: All scheduling decisions and I/O processing are handled by a
single processor called the master server; the other processors execute only the user code.
This is simple and reduces the need for data sharing.
