Load Scheduling


Scheduling means dividing the processes equally among the processors; in other words, scheduling means equi-partitioning.

Load scheduling is a new approach to load balancing in which all processors cooperate to schedule the work. It deals with the processing of massive loads that can be partitioned into smaller load fractions and distributed to processing nodes over a network, and it provides high-quality load balancing.
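The partitioning of a load into fractions for the processing nodes can be sketched as follows. This is an illustrative sketch only, assuming identical node speeds and a load measured in discrete units; the function name and parameters are made up for illustration.

```python
def partition_load(total_units, nodes):
    """Split a divisible load into near-equal fractions, one per node.

    Assumes identical node speeds; the names here are illustrative,
    not taken from any particular load-scheduling framework.
    """
    base, extra = divmod(total_units, nodes)
    # The first `extra` nodes take one extra unit so the whole load is covered.
    return [base + 1 if i < extra else base for i in range(nodes)]

shares = partition_load(10, 4)   # [3, 3, 2, 2]
```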

Fig1. LOAD SCHEDULING

2. SCHEDULING SYSTEM
Every instance of the scheduling problem consists of three main components:
 Consumers
 Resources
 Policy
Resource management is essentially a mechanism or policy used to manage, efficiently and effectively, the access to and use of a resource by its various consumers.
Fig2. SCHEDULING SYSTEM

The functioning of a scheduler may be judged by observing the effect it has on its environment. Two properties may be considered in evaluating a scheduling system:
 Performance
 Efficiency
Performance is the consumers' satisfaction with how well the scheduler manages the resources. Efficiency is the consumers' satisfaction with how difficult or costly it is to access the management resource itself.

3. TYPES OF SCHEDULING

Scheduling is mainly of two types:


1. Local scheduling
2. Global scheduling

3.1 Local scheduling


Local scheduling is performed by the operating system of a processor and consists of assigning processes to the time slices of that processor. The job of local scheduling is left to the operating system of the processor to which the process is ultimately allocated.

3.2 Global scheduling


Global scheduling is the process of deciding where to execute a process in a multiprocessor system. It increases autonomy while reducing centralized responsibility.
Fig3. CLASSIFICATION OF LOAD SCHEDULING METHODS

Global scheduling is of two types:


1. Static scheduling
2. Dynamic scheduling

1. STATIC SCHEDULING
In static scheduling, the assignment of tasks to processors is done before execution begins. Information regarding process execution times and processing resources is assumed to be known at compile time, and a process is always executed on the processor to which it is assigned. Static scheduling methods are non-preemptive.
Static scheduling is also referred to as compile-time scheduling: the characteristics of a parallel program, including task processing times, data dependencies, and synchronizations, are known before program execution.

The goal of static scheduling is to minimize the overall execution time of the concurrent program and to minimize communication delays. It attempts to:

 Predict the program's execution behaviour at compile time, estimating the process or task execution times and communication delays.
 Partition fine-grained tasks into coarser-grain processes in an attempt to reduce the communication cost.
 Allocate processes to processors.

Advantages of static scheduling:


1. All the overhead of the scheduling process is incurred at compile time.
2. This results in a more efficient execution-time environment compared to dynamic
scheduling methods.

Disadvantages of static scheduling:


1. Static scheduling methods can be classified into optimal and suboptimal scheduling. In
general, generating optimal schedules is an NP-complete problem; it is only possible to
generate optimal solutions in restricted cases.
2. The lack of efficient and accurate methods for estimating task execution times and
communication delays can cause unpredictable performance degradation. The
compile-time estimation of the execution time of a program task or function is often
difficult due to conditional and loop constructs, whose condition values or iteration
counts are unknown before execution. Estimating communication delays at compile
time is also impractical because of run-time network contention delays.
3. Static scheduling methods often ignore the data-distribution issue. This omission causes
performance degradation due to run-time communication delays when accessing data at
remote sites.
Various Static Scheduling Algorithms are:

1. List Scheduling
2. PSRS
3. SMART
4. SMART-FFIA

1. List Scheduling

List Scheduling is a basic and popular combinatorial algorithm for scheduling a set of
tasks on a single processor or even on parallel processors. In this algorithm, tasks are fed
from a pre-specified list, and whenever a processor becomes idle, the first available task on
the list is scheduled and removed from the list. A task is available when it has been released
and, if there are precedence constraints, all its predecessors have already been processed.
The algorithm terminates when all tasks from the list are scheduled. Its complexity is
O(n). In the multiprocessor case, the processor with the minimal current finish time is
taken in every iteration of the algorithm, and the others are left unchanged.

2. PSRS

The PSRS algorithm generates preemptive schedules. It is an offline algorithm and requires
the execution times of the jobs. In addition, it needs support for time sharing; therefore, it
cannot be applied to a target machine without modification.

The basic steps of the PSRS algorithm are:

 All jobs are ordered by their modified Smith ratio (largest ratio goes first).
 A greedy list schedule is applied for all jobs requiring at most 50% of the machine
nodes. If a job needs more than half of all nodes and has been waiting for some time,
then all running jobs are preempted and the parallel job is executed. After the
completion of the parallel job, the execution of the preempted jobs is resumed.
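The ordering step above can be sketched as follows. The classical Smith ratio of a job is its weight divided by its processing time; the exact "modified" ratio PSRS uses may differ, so this sorting key is an illustrative stand-in, and the job representation is made up for the example.

```python
def smith_order(jobs):
    """Order jobs by Smith ratio, largest first (the first PSRS step).

    Each job is a (weight, processing_time) pair; the classical Smith
    ratio is weight / processing_time. The 'modified' ratio used by
    PSRS may differ -- this is an illustrative stand-in.
    """
    return sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)

jobs = [(1, 4), (3, 2), (2, 2)]
ordered = smith_order(jobs)   # ratios 0.25, 1.5, 1.0 -> [(3, 2), (2, 2), (1, 4)]
```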
Types of static scheduling:
1. Optimal scheduling
2. Suboptimal scheduling

1. OPTIMAL SCHEDULING
In optimal scheduling, all information regarding the state of the system, as well as the
resource needs of a process, is known, and an optimal assignment can be made based on
some criterion function.
For example:
Minimizing total process completion time.
Maximizing system throughput.

2. SUBOPTIMAL SCHEDULING
When optimal scheduling problems are computationally infeasible, suboptimal solutions must be
tried. Suboptimal scheduling has two general categories:
a) Approximate scheduling
b) Heuristic scheduling

a) APPROXIMATE SCHEDULING
In approximate scheduling, instead of searching the entire solution space for an optimal
solution, we are satisfied when we find a "good" one; such a solution is called
suboptimal-approximate. The difference from an optimal solution may not be significant,
and in cases where a metric is available for evaluating a solution, this technique can be
used to decrease the time taken to find an acceptable one.
The factors that determine whether this approach is worthwhile include:
 Availability of a function to evaluate a solution.
 The time required to evaluate a solution.
 The ability to judge, according to some metric, the value of an optimal solution.
 Availability of a mechanism for intelligently pruning the solution space.
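The stop-at-a-good-solution idea can be sketched as follows. The evaluation function, threshold, and candidate representation are all assumptions for illustration, not part of any specific scheduler.

```python
def approximate_search(candidates, evaluate, good_enough):
    """Return the first solution whose score meets `good_enough`,
    rather than scanning the whole solution space (suboptimal-approximate).

    `evaluate` is the assumed criterion function (lower is better);
    all names here are illustrative.
    """
    best = None
    for cand in candidates:
        score = evaluate(cand)
        if best is None or score < best[1]:
            best = (cand, score)
        if score <= good_enough:       # acceptable: prune the remaining search
            return cand
    return best[0]                     # no acceptable one: fall back to best seen

# Scores stand in for e.g. the makespan of candidate schedules.
result = approximate_search([9, 7, 4, 3], evaluate=lambda x: x, good_enough=5)
# stops at 4 without ever evaluating 3
```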

b) HEURISTIC SCHEDULING
Heuristic scheduling represents the category of static algorithms that make realistic
assumptions about prior knowledge of process and system loading characteristics. It
represents solutions to the static scheduling problem that require a reasonably small amount
of time and other system resources to perform their function.
The most distinguishing feature of heuristic schedulers is that they make use of special
parameters which affect the system in indirect ways. The parameter being monitored is
correlated with system performance indirectly rather than directly, and this alternate
parameter is much simpler to monitor or calculate.
For example:
Clustering groups of processes that communicate heavily onto the same processor, while
physically separating processes that would benefit from parallelism, directly decreases the
overhead of passing information between processors and reduces the interference among
processes that may run without synchronizing with one another. This has an impact on the
overall service that users receive, but cannot be directly related to system performance.
Static Heuristics Algorithms

Heuristic algorithms do not guarantee optimal solutions for every instance of an
optimization problem, but with an appropriate choice of heuristic they often provide
acceptable results with very good time and memory complexity.

Various Static Heuristics Algorithms are:

1. The Earliest Starting Time First Rule


2. The Earliest Completion Time First Rule
3. The Longest Processing Time First Rule
4. The Shortest Processing Time First Rule

The Earliest Starting Time First Rule: Reorder the tasks in the list in non-decreasing order of
starting time before applying the List Scheduling Algorithm. Its time complexity is
O(n log n).

The Earliest Completion Time First Rule: Reorder the tasks in the list in non-decreasing order of
completion time in every iteration of the List Scheduling Algorithm. Its time complexity is
O(n log n).

The Longest Processing Time First Rule: Reorder the tasks in the list in non-increasing order of
processing time before applying the List Scheduling Algorithm. Its time complexity is
O(n log n).

The Shortest Processing Time First Rule: Reorder the tasks in the list in non-decreasing order of
processing time before applying the List Scheduling Algorithm. Its time complexity is
O(n log n).
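All four rules are sort-then-schedule variants, which is why each costs O(n log n). A sketch with illustrative task fields (the dictionary keys and rule abbreviations are assumptions for this example):

```python
def apply_rule(tasks, rule):
    """Reorder a task list before list scheduling, per the four rules above.

    Each task is a dict with illustrative keys: 'start' (earliest start
    time), 'completion' (estimated completion time), 'proc_time'
    (processing time). Sorting costs O(n log n), matching the stated
    complexities.
    """
    keys = {
        "EST": lambda t: t["start"],        # earliest starting time first
        "ECT": lambda t: t["completion"],   # earliest completion time first
        "LPT": lambda t: -t["proc_time"],   # longest processing time first
        "SPT": lambda t: t["proc_time"],    # shortest processing time first
    }
    return sorted(tasks, key=keys[rule])

tasks = [{"start": 2, "completion": 6, "proc_time": 4},
         {"start": 0, "completion": 3, "proc_time": 3}]
spt_first = apply_rule(tasks, "SPT")   # the 3-unit task comes first
```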
2. DYNAMIC SCHEDULING
Dynamic scheduling is based on the redistribution of processes among the processors during
execution time. This redistribution is performed by transferring tasks from the heavily loaded
processors to the lightly loaded processors (load balancing) with the aim of improving the
performance of the application.
In dynamic scheduling, few assumptions about the parallel program can be made before
execution.

The goal of dynamic scheduling includes not only minimizing the program completion
time but also minimizing the scheduling overhead, which can represent a significant portion of
the cost of running the scheduler.

A typical load balancing algorithm is defined by 3 inherent policies:


Information policy: specifies the amount of load information made available to
job-placement decision makers.
Transfer policy: determines the conditions under which a job should be transferred.
Placement policy: identifies the processing element to which a job should be transferred.
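A minimal sketch of how the three policies interact in one balancing step; the threshold value and the globally visible load vector are assumptions for illustration.

```python
def balance_step(loads, threshold=2):
    """One illustrative load-balancing step combining the three policies.

    Information policy: every node's load count is globally visible (the
    `loads` list). Transfer policy: move a job when the gap between the
    busiest and idlest node exceeds `threshold`. Placement policy: send
    the job to the least-loaded node. All names and values are assumptions.
    """
    src = max(range(len(loads)), key=lambda i: loads[i])   # heaviest node
    dst = min(range(len(loads)), key=lambda i: loads[i])   # lightest node
    if loads[src] - loads[dst] > threshold:
        loads[src] -= 1
        loads[dst] += 1
        return (src, dst)       # one job moved from src to dst
    return None                 # system considered balanced

loads = [7, 1, 3]
move = balance_step(loads)      # moves one job from node 0 to node 1
```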

Advantage of dynamic load scheduling


1. The system need not be aware of the run-time behaviour of the application before execution.
2. Dynamic scheduling is useful in systems consisting of a network of workstations, in which
the primary performance goal is maximizing utilization of the processing power rather
than minimizing the execution time of the applications.

Disadvantages of dynamic scheduling


Run-time overhead is incurred due to:
1. The load information transfer among processors
2. The decision making process for the selection of processes and processors for job
transfers.
3. Communication delays due to task relocation.
DYNAMIC SCHEDULING ALGORITHMS

Various Dynamic Scheduling Algorithms are:

1. Dynamic Scheduling Strategies for Shared-Memory Multiprocessors


2. Self-Adjusting Dynamic Scheduling Algorithms

2. Self-Adjusting Dynamic Scheduling Algorithms


A system consisting of a scheduling algorithm and a set of processors that execute the
parallel tasks can be assumed to perform operations in two phases, the scheduling phase and
execution phase. During the scheduling phase the scheduling algorithm searches for a
solution to a given problem instance and during the execution phase, the system executes the
scheduled tasks.

The scheduling and execution phases may be sequential, in which case the system statically
plans a complete solution before executing a single task.

The scheduling and execution phases may be interleaved, in which case partial task
assignments computed by the scheduling algorithm are placed on processor queues for
execution. In the interleaved case, some processors remain idle while tasks are being scheduled.

The scheduling and execution phases may be overlapped, in which case the overhead of the
scheduling effort is completely masked by the execution of previously scheduled tasks.

Dynamic scheduling is of two types:


1. Distributed scheduling
2. Non distributed scheduling

1. DISTRIBUTED SCHEDULING
In distributed scheduling, the work involved in making decisions is physically distributed among
the processors; the distinction here concerns the logical authority over the decision-making process.
Distributed Scheduling is of two types:
a) Cooperative Scheduling
b) Non cooperative Scheduling

a) COOPERATIVE SCHEDULING
In cooperative scheduling, there is cooperation between the distributed components. Each
processor has the responsibility to carry out its own portion of the scheduling task, but all
processors work toward a common system-wide goal. In other words, each processor's local
operating system makes decisions in concert with the other processors in the system in order
to achieve some global goal, instead of making decisions based only on how they will affect
local performance.

b) NON COOPERATIVE SCHEDULING


In non-cooperative scheduling, individual processors make decisions independently of the
actions of other processors. They act alone as autonomous entities and arrive at decisions
regarding the use of their resources without regard to the effect of those decisions on the
rest of the system.

2. NON DISTRIBUTED SCHEDULING


In non distributed scheduling, the responsibility for the task of global dynamic scheduling
physically resides in a single processor.

4. STATIC Vs DYNAMIC SCHEDULING


In static scheduling, information regarding the total mix of processes in the system, as well as
the independent subtasks involved in a job or task force, is assumed to be available by the time
the program object modules are linked into load modules. Each executable process has a static
assignment to a particular processor, and each time that process is submitted it executes on
that processor.
In dynamic scheduling, little prior knowledge is available about the resource needs of a
process, and it is unknown in what environment the process will execute during its lifetime.
No decision is made until a process begins its life in the dynamic environment of the system;
it is the responsibility of the running system to decide where a process is to execute.
