1. LOAD SCHEDULING
In its simplest form, scheduling means equi-partitioning of the load among the available processors.
Load scheduling is a new approach to load balancing in which all processors cooperate to schedule the work. Load scheduling deals with the processing of massive loads that can be partitioned into smaller load fractions and distributed to processing nodes over a network. It provides high-quality load balancing.
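As a minimal sketch of this idea (the function and figures below are illustrative, not taken from any particular system), a divisible load can be split into fractions proportional to node speed; with identical nodes this reduces to equi-partitioning:

def partition_load(total_load, speeds):
    """Split a divisible load into fractions proportional to node speed."""
    return [total_load * s / sum(speeds) for s in speeds]

# Identical nodes: equi-partitioning of a load of 1200 units across 4 nodes.
print(partition_load(1200, [1, 1, 1, 1]))   # [300.0, 300.0, 300.0, 300.0]
# A node that is twice as fast receives twice the fraction.
print(partition_load(1200, [2, 1, 1]))      # [600.0, 300.0, 300.0]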
2. SCHEDULING SYSTEM
Every instance of the scheduling problem consists of three main components.
Consumers
Resources
Policy
Resource management is basically a mechanism or policy used to efficiently and effectively manage the access to and use of a resource by its various consumers.
Fig. 2. Scheduling system
The functioning of a scheduler may be evaluated by observing the effect it has on its environment. Two properties may be considered in evaluating a scheduling system:
Performance
Efficiency
The satisfaction of the consumers with how well the scheduler manages the resources is called performance. The satisfaction of the consumers with how difficult or costly it is to access the management resource itself is called efficiency.
3. TYPES OF SCHEDULING
1. STATIC SCHEDULING
In static scheduling the assignment of tasks to processors is done before execution begins.
Information regarding process execution time and processing resources is assumed to be known
at compile time and the process is always executed on the processor to which it is assigned.
Static scheduling methods are non-preemptive.
In static scheduling, which is also referred to as compile-time scheduling, the characteristics of a
parallel program, including task processing times, data dependencies and synchronizations, are
known before program execution.
The goal of static scheduling is to minimize the overall execution time of a concurrent program and to minimize communication delays. It attempts to:
Predict the program execution behaviour at compile time, estimating the process or task execution times and communication delays.
Partition smaller tasks into coarser-grain processes in an attempt to reduce the communication cost.
Allocate processes to processors.
1. List Scheduling
2. PSRS
3. SMART
4. SMART-FFIA
1. List Scheduling
List Scheduling is a basic and popular combinatorial algorithm intended for scheduling a set of tasks on a single processor or even on parallel processors. In this algorithm, tasks are fed from a pre-specified list, and whenever a processor becomes idle, the first available task on the list is scheduled and removed from the list. Availability of a task means that the task has been released and, if there are precedence constraints, all of its predecessors have already been processed. The algorithm terminates when all tasks from the list are scheduled. Its complexity is O(n). In the multiprocessor case, the processor with the minimal time is taken in every iteration of the algorithm and the others are relaxed.
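The sketch below is one way to realise this description in Python; the task names, durations, and precedence structure are illustrative and not drawn from the text. Whenever a processor becomes free (the processor with minimal time), the first task on the list whose predecessors have all been processed is scheduled and removed from the list.

import heapq

def list_schedule(tasks, durations, predecessors, num_procs):
    """tasks: the pre-specified list of task ids (in list order).
    durations: task -> processing time; predecessors: task -> set of tasks
    that must be processed first. Returns task -> (processor, start time)."""
    finish = {}                                    # task -> completion time
    procs = [(0.0, p) for p in range(num_procs)]   # (time the processor is free, id)
    heapq.heapify(procs)
    schedule, pending = {}, list(tasks)
    while pending:
        free_at, p = heapq.heappop(procs)          # processor with minimal time
        # first task on the list whose predecessors have all been processed
        i = next(i for i, t in enumerate(pending)
                 if all(q in finish for q in predecessors.get(t, ())))
        task = pending.pop(i)
        preds_done = max((finish[q] for q in predecessors.get(task, ())), default=0.0)
        start = max(free_at, preds_done)
        finish[task] = start + durations[task]
        schedule[task] = (p, start)
        heapq.heappush(procs, (finish[task], p))
    return schedule

# Four tasks on two processors; T3 cannot start before T1 and T2 have finished.
print(list_schedule(["T1", "T2", "T3", "T4"],
                    {"T1": 3, "T2": 2, "T3": 4, "T4": 1},
                    {"T3": {"T1", "T2"}}, num_procs=2))

The heap simply keeps track of which processor becomes free first, matching the rule that the processor with minimal time is taken in every iteration.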
2. PSRS
The PSRS algorithm generates preemptive schedules. It is an offline algorithm and requires the execution times of the jobs. In addition, it needs support for time sharing; therefore, it cannot be applied to a target machine without modification.
All jobs are ordered by their modified Smith ratio (the largest ratio goes first).
A greedy list schedule is applied to all jobs requiring at most 50% of the machine nodes. If a job needs more than half of all nodes and has been waiting for some time, then all running jobs are preempted and the parallel job is executed. After the completion of the parallel job, the execution of the preempted jobs is resumed.
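A simplified, non-authoritative sketch of the ordering step is given below. The exact form of the modified Smith ratio is not specified in the text; purely for illustration it is assumed here to be the job's weight divided by its resource consumption (execution time times requested nodes), and the preemption of running jobs by a long-waiting wide job is only indicated, not implemented.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    weight: float    # importance of the job
    runtime: float   # execution time, known in advance (offline algorithm)
    nodes: int       # number of machine nodes the job requires

def modified_smith_ratio(job):
    # Assumed form only: weight per unit of consumed resource (time x nodes).
    return job.weight / (job.runtime * job.nodes)

def psrs_order(jobs, total_nodes):
    """Order jobs by modified Smith ratio (largest first) and separate the
    'narrow' jobs (at most 50% of the nodes), which the greedy list schedule
    handles, from the 'wide' jobs, which trigger the preemption rule once
    they have waited long enough."""
    ordered = sorted(jobs, key=modified_smith_ratio, reverse=True)
    narrow = [j for j in ordered if j.nodes <= total_nodes // 2]
    wide = [j for j in ordered if j.nodes > total_nodes // 2]
    return narrow, wide

narrow, wide = psrs_order([Job("A", 4, 10, 8), Job("B", 1, 2, 2), Job("C", 3, 5, 64)],
                          total_nodes=100)
print([j.name for j in narrow], [j.name for j in wide])   # ['B', 'A'] ['C']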
Types of static scheduling
1. Optimal scheduling
2. Sub optimal scheduling
1. OPTIMAL SCHEDULING
In optimal scheduling, all information regarding the state of the system, as well as the resource needs of each process, is known, and an optimal assignment can be made based on some criterion function.
For example:
Minimizing total process completion time.
Maximizing system throughput.
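As a toy illustration (entirely hypothetical, not from the text), an optimal assignment under the completion-time criterion can be found by exhaustive search over all task-to-processor assignments; the exponential cost of doing so is precisely what motivates the suboptimal methods described next.

from itertools import product

def optimal_assignment(times, num_procs):
    """times: execution time of each task. Returns the assignment (one
    processor index per task) that minimises the makespan."""
    best = (float("inf"), None)
    for assignment in product(range(num_procs), repeat=len(times)):
        load = [0.0] * num_procs
        for task, proc in enumerate(assignment):
            load[proc] += times[task]
        best = min(best, (max(load), assignment))   # criterion: completion time
    return best

# Five tasks on two processors: the optimum balances the load at 7 time units.
print(optimal_assignment([4, 3, 3, 2, 2], num_procs=2))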
2. SUBOPTIMAL SCHEDULING
When optimal scheduling problems are computationally infeasible, suboptimal solutions must be tried. Suboptimal scheduling has two general categories:
a) Approximate scheduling
b) Heuristic scheduling
a) APPROXIMATE SCHEDULING
In approximate scheduling, instead of searching the entire solution space for an optimal solution, we are satisfied when we find a "good" one; such a solution is called suboptimal-approximate. The assumption that a good solution can be recognized may not be so insignificant, but in cases where a metric is available for evaluating a solution, this technique can be used to decrease the time taken to find an acceptable solution.
The factors which determine whether this approach is worthwhile include:
Availability of a function to evaluate a solution.
The time required to evaluate a solution.
The ability to judge according to some metric the value of an optimal solution.
Availability of a mechanism for intelligently pruning the solution space.
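The following sketch (names, thresholds, and the random-sampling strategy are illustrative assumptions) shows the approximate idea in code: candidate assignments are evaluated with a metric, and the search stops as soon as one is judged "good" against a threshold, rather than after the entire solution space has been examined.

import random

def approximate_assignment(times, num_procs, good_enough, max_tries=10000, seed=0):
    """Stop at the first sampled assignment whose makespan is <= good_enough."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(max_tries):
        assignment = [rng.randrange(num_procs) for _ in times]
        load = [0.0] * num_procs
        for task, proc in enumerate(assignment):
            load[proc] += times[task]
        best = min(best, (max(load), assignment))
        if best[0] <= good_enough:            # the metric says this is "good"
            break
    return best

# Accept any schedule within 10% of the perfectly balanced load.
times = [4, 3, 3, 2, 2]
print(approximate_assignment(times, num_procs=2, good_enough=1.1 * sum(times) / 2))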
b) HEURISTIC SCHEDULING
Heuristic scheduling represents the category of static algorithms which make realistic assumptions about a priori knowledge concerning process and system loading characteristics. It represents solutions to the static scheduling problem which require the most reasonable amount of time and other system resources to perform their function.
The most distinguishing feature of heuristic schedulers is that they make use of special parameters which affect the system in indirect ways. The parameter being monitored is correlated to system performance in an indirect rather than a direct way, and this alternate parameter is much simpler to monitor or calculate.
For example:
Clustering groups of processes which communicate heavily on the same processor, and physically separating processes which would benefit from parallelism, directly decreases the overhead involved in passing information between processors while reducing the interference among processes which may run without synchronization with one another. This result has an impact on the overall service that users receive, but cannot be directly related to system performance.
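A greedy sketch of this clustering heuristic is shown below; the data structures, the capacity limit, and the communication volumes are assumptions made for illustration, not part of the original description. Heavily communicating pairs are co-located on one processor as long as it has room.

def cluster_by_communication(processes, comm, num_procs, capacity):
    """comm: dict (a, b) -> communication volume between processes a and b.
    The heaviest-communicating pairs are co-located while a processor has room."""
    placement, load = {}, [0] * num_procs
    def place(proc, *to_place):
        for p in to_place:
            placement[p] = proc
            load[proc] += 1
    for (a, b), _vol in sorted(comm.items(), key=lambda kv: -kv[1]):
        if a in placement and b not in placement and load[placement[a]] < capacity:
            place(placement[a], b)
        elif b in placement and a not in placement and load[placement[b]] < capacity:
            place(placement[b], a)
        elif a not in placement and b not in placement:
            proc = min(range(num_procs), key=lambda i: load[i])
            if load[proc] + 2 <= capacity:
                place(proc, a, b)
    for p in processes:                       # leftovers go to the lightest processor
        if p not in placement:
            place(min(range(num_procs), key=lambda i: load[i]), p)
    return placement

comm = {("P1", "P2"): 90, ("P2", "P3"): 80, ("P3", "P4"): 10}
print(cluster_by_communication(["P1", "P2", "P3", "P4"], comm, num_procs=2, capacity=2))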
Static Heuristic Algorithms
Heuristic algorithms are not guaranteed to find optimal solutions for every instance of an optimization problem. With an appropriate choice of heuristic, however, they often provide acceptable results with very good time and memory complexity.
The Earliest Starting Time First Rule: reorder the tasks in the list in non-decreasing order of starting time before applying the List Scheduling algorithm. Its time complexity is O(n log n).
The Earliest Completion Time First Rule: reorder the tasks in the list in non-decreasing order of completion time in every iteration of the List Scheduling algorithm. Its time complexity is O(n log n).
The Longest Processing Time First Rule: reorder the tasks in the list in non-increasing order of processing time before applying the List Scheduling algorithm. Its time complexity is O(n log n).
The Shortest Processing Time First Rule: reorder the tasks in the list in non-decreasing order of processing time before applying the List Scheduling algorithm. Its time complexity is O(n log n).
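The first, third, and fourth rules are simply a sort applied to the task list before List Scheduling, which is where the O(n log n) bound comes from; a brief sketch (task names and times are illustrative) follows. The Earliest Completion Time rule differs in that it reorders in every iteration rather than once up front.

def est_order(tasks, start_times):
    """Earliest Starting Time First: non-decreasing starting (release) time."""
    return sorted(tasks, key=lambda t: start_times[t])

def lpt_order(tasks, proc_times):
    """Longest Processing Time First: non-increasing processing time."""
    return sorted(tasks, key=lambda t: -proc_times[t])

def spt_order(tasks, proc_times):
    """Shortest Processing Time First: non-decreasing processing time."""
    return sorted(tasks, key=lambda t: proc_times[t])

proc_times = {"T1": 3, "T2": 7, "T3": 1}
print(lpt_order(["T1", "T2", "T3"], proc_times))   # ['T2', 'T1', 'T3']
print(spt_order(["T1", "T2", "T3"], proc_times))   # ['T3', 'T1', 'T2']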
2. DYNAMIC SCHEDULING
Dynamic scheduling is based on the redistribution of processes among the processors during
execution time. This redistribution is performed by transferring tasks from the heavily loaded
processors to the lightly loaded processors (load balancing) with the aim of improving the
performance of the application.
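A minimal sketch of this redistribution step is given below (the queue representation, the transfer policy of moving the cheapest task, and the imbalance threshold are all assumptions for illustration): tasks migrate from the most heavily loaded queue to the most lightly loaded one until the imbalance is small enough.

def rebalance(queues, times, threshold=0):
    """queues: one task list per processor; times: task -> remaining time.
    Repeatedly move a task from the heaviest queue to the lightest queue."""
    def load(q):
        return sum(times[t] for t in q)
    while True:
        heavy = max(range(len(queues)), key=lambda i: load(queues[i]))
        light = min(range(len(queues)), key=lambda i: load(queues[i]))
        gap = load(queues[heavy]) - load(queues[light])
        if gap <= threshold or not queues[heavy]:
            return queues
        task = min(queues[heavy], key=times.get)   # cheapest task to transfer
        if times[task] >= gap:                     # moving it would not reduce the gap
            return queues
        queues[heavy].remove(task)
        queues[light].append(task)

times = {"A": 5, "B": 4, "C": 1, "D": 1}
print(rebalance([["A", "B", "C"], ["D"]], times, threshold=2))   # [['A'], ['D', 'C', 'B']]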
In dynamic scheduling, few assumptions about the parallel program can be made before
execution.
The goal of dynamic scheduling includes not only the minimization of the program completion time but also the minimization of the scheduling overhead, which represents a significant portion of the cost paid for running the scheduler.
The scheduling and execution phases may be sequential in which the system statically plans
a complete solution before executing a single task.
The scheduling and execution phases may be interleaved, in which case partial task assignments are computed by the scheduling algorithm and are placed on processor queues for execution. In the interleaved case, some processors may remain idle while the tasks are scheduled.
The scheduling and execution phases may be overlapped in which overhead of the
scheduling effort is completely masked by the execution of previously scheduled tasks.
1. DISTRIBUTED SCHEDULING
In distributed scheduling, the work involved in making decisions is physically distributed among
the processors. The concern is with the logical authority of the decision making process.
Distributed Scheduling is of two types:
a) Cooperative Scheduling
b) Non cooperative Scheduling
a) COOPERATIVE SCHEDULING
In cooperative scheduling, there is cooperation between the distributed components: each processor has the responsibility to carry out its own portion of the scheduling task, but all of the processors work toward a common, system-wide goal. In other words, each processor's local operating system is concerned with making decisions in concert with the other processors in the system in order to achieve some global goal, instead of making decisions based only on the way in which the decision will affect local performance.