RTOS

2.1 What is RTOS
RTOS comprises two components, namely "Real-Time" and "Operating System".

2.1.1 Real-Time
Real-Time indicates an expected response or reaction to an event at the instant of its occurrence. The expected response reflects the logical correctness of the result produced; the instant of the event's occurrence defines the deadline for producing that result.

2.1.2 Operating System
An Operating System (OS) is a system program that provides an interface between hardware and application programs. An OS is commonly equipped with features such as multitasking, synchronization, interrupt and event handling, input/output, inter-task communication, timers and clocks, and memory management, with which it fulfills its primary role of managing hardware resources to meet the demands of application programs.

An RTOS is therefore an operating system that supports real-time applications and embedded systems by providing logically correct results within the required deadlines. These capabilities define its deterministic timing behavior and its limited use of resources.

2.2 Why RTOS for Real-Time Applications
An RTOS is not a required component of every real-time application in an embedded system; the embedded system in a simple electronic rice cooker, for example, does not need one. But as the complexity of an application expands beyond simple tasks, the benefits of having an RTOS far outweigh the associated costs. Embedded systems become more complex hardware-wise with every generation, and as more features are added in each iteration, the application programs running on these platforms become increasingly difficult to manage while still meeting system response requirements. An RTOS allows real-time applications to be designed and expanded more easily while still meeting the required performance.

2.3 Classification of RTOS
RTOSs are broadly classified into three types, namely hard, firm, and soft real-time, as described below:
• Hard real-time: the degree of tolerance for missed deadlines is extremely small or zero; a missed deadline has catastrophic consequences for the system.
• Firm real-time: missing a deadline results in an unacceptable reduction in quality.
• Soft real-time: deadlines may be missed and recovered from; a reduction in system quality is acceptable.

2.4 Misconceptions about RTOS
a) An RTOS must be fast. The responsiveness of an RTOS depends on its deterministic behavior, not on its processing speed. The ability of an RTOS to respond to events within a defined time bound does not imply that it is fast.
b) An RTOS introduces a considerable amount of CPU overhead. In fact, an RTOS typically requires only 1% to 4% of CPU time.
c) All RTOSs are the same. RTOSs are generally designed for the three types of real-time systems (hard, firm, and soft), and they are further classified according to the types of hardware devices (e.g. 8-bit, 16-bit, or 32-bit MPUs) they support.

2.5 Features of RTOS
The design of an RTOS is essentially a balance between providing a reasonably rich feature set for application development and deployment, and not sacrificing predictability and timeliness. A basic RTOS will be equipped with the following features:

i. Multitasking and Preemptibility
An RTOS must be multitasking and preemptible to support the multiple tasks of a real-time application. The scheduler must be able to preempt any task in the system and allocate the CPU to the task that needs it most, even at peak load.

ii. Task Priority
Preemption requires the capability to identify the task that needs a resource the most and to give it control of that resource. In an RTOS, this capability is achieved by assigning each task an appropriate priority level; it is therefore important for an RTOS to provide this feature.

iii. Reliable and Sufficient Inter-Task Communication Mechanisms
For multiple tasks to communicate in a timely manner and to preserve data integrity among one another, reliable and sufficient inter-task communication and synchronization mechanisms are required.

iv. Priority Inheritance
To allow applications with stringent priority requirements to be implemented, an RTOS must have a sufficient number of priority levels when using priority scheduling.

v. Predefined Short Latencies
An RTOS needs to have accurately defined, short timing bounds for its system calls. The relevant metrics are:
• Task switching latency: the time needed to save the context of the currently executing task and switch to another task; this should be short.
• Interrupt latency: the time elapsed between the execution of the last instruction of the interrupted task and the first instruction of the interrupt handler.
• Interrupt dispatch latency: the time from the last instruction of the interrupt handler to the start of the next task scheduled to run.

vi. Control of Memory Management
To ensure a predictable response to an interrupt, an RTOS should provide a way for a task to lock its code and data into real memory.

2.6 RTOS Architecture
The architecture of an RTOS depends on the complexity of its deployment. Good RTOSs are scalable to meet different sets of requirements for different applications. For simple applications, an RTOS usually comprises only a kernel. For more complex embedded systems, an RTOS can be a combination of various modules, including the kernel, networking protocol stacks, and other components, as illustrated in Figure 3.

Exokernel
The exokernel concept is orthogonal to that of micro- versus monolithic kernels: it gives the application efficient control over the hardware. An exokernel runs only the services that protect resources (i.e. tracking ownership, guarding usage, revoking access, etc.), providing a low-level interface to library operating systems (libOSes) and leaving resource management to the application.

An RTOS generally avoids implementing the kernel as a large monolithic program. Instead, the kernel is developed as a micro-kernel with added configurable functionality. This implementation increases system configurability, since each embedded application requires a specific set of system services suited to its characteristics. The kernel of an RTOS provides an abstraction layer between the application software and the hardware. This abstraction layer comprises six main types of common services provided by the kernel to the application software, as shown in Figure 7.
2.6.2 Task Management
Task management allows programmers to design their software as a number of separate "chunks" of code, each handling a distinct goal and deadline. This service encompasses mechanisms such as the scheduler and dispatcher that create and maintain task objects.

Task Object
To achieve concurrency in a real-time application, the application is decomposed into small, schedulable, sequential program units known as "tasks". In a real-time context, the task is the basic unit of execution and is governed by three time-critical properties: release time, deadline, and execution time. The release time is the point in time from which the task can execute. The deadline is the point in time by which the task must complete. The execution time is the time the task takes to execute. A task object is defined by the following components:
• Task Control Block (task data structures residing in RAM, accessible only by the RTOS)
• Task Stack (data defined in the program, residing in RAM, accessed through the stack pointer)
• Task Routine (program code residing in ROM)
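To make the task-object structure concrete, here is a minimal sketch in C of what a task control block might contain. The field names (task_id, priority, state, sp, entry) and the four-state enum are illustrative assumptions, not the layout of any particular RTOS.

    /* A minimal sketch of a task control block with hypothetical
       field names; real RTOSs add many more fields (timers, queue
       links, stack bounds, etc.). */

    #include <stdint.h>

    typedef enum { DORMANT, READY, RUNNING, BLOCKED } task_state_t;

    typedef struct tcb {
        uint32_t      task_id;       /* unique identifier of the task        */
        uint8_t       priority;      /* priority level used by the scheduler */
        task_state_t  state;         /* current state in the task life cycle */
        uint32_t     *sp;            /* saved stack pointer (task context)   */
        void        (*entry)(void);  /* task routine, typically in ROM       */
        struct tcb   *next;          /* link used by ready/waiting lists     */
    } tcb_t;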

Each task exists in one of four states: running, ready, blocked, or dormant, as shown in Figure 9. During the execution of an application program, individual tasks continuously change from one state to another. However, only one task is in the running state (i.e. has control of the CPU) at any point in the execution. When CPU control passes from one task to another, the context of the to-be-suspended task is saved and the context of the to-be-executed task is retrieved. This process of saving the context of the task being suspended and restoring the context of the task being resumed is called context switching.
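Building on the hypothetical tcb_t above, a context switch can be sketched as follows. The save_context() and restore_context() routines stand in for the CPU-specific register save/restore code, which is normally written in assembly; both names are assumptions for illustration.

    /* Sketch of a context switch between two tasks, assuming the
       tcb_t above. The register save/restore is CPU-specific and
       represented here by hypothetical helper routines. */
    extern void save_context(tcb_t *t);    /* push registers, store SP  */
    extern void restore_context(tcb_t *t); /* reload SP, pop registers  */

    void context_switch(tcb_t *from, tcb_t *to)
    {
        save_context(from);     /* save context of the suspended task  */
        from->state = READY;    /* assuming it was preempted, not blocked */
        to->state = RUNNING;    /* only one task is RUNNING at a time   */
        restore_context(to);    /* restore context of the resumed task  */
    }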

Scheduler
The scheduler keeps a record of the state of each task, selects from among those that are ready to execute, and allocates the CPU to one of them. A scheduler helps maximize CPU utilization among the tasks of a multitasking program and minimize waiting time. There are generally two types of schedulers: non-preemptive and priority-based preemptive.

Non-preemptive scheduling, or cooperative multitasking, requires tasks to cooperate with one another by explicitly giving up control of the processor. When a task releases the processor, the most important task that is ready to run executes next. A task that is newly assigned a higher priority gains control of the processor only when the currently executing task voluntarily gives it up. Figure 10 gives an example of non-preemptive scheduling.

Priority-based preemptive scheduling requires that control of the processor always be given to the highest-priority ready task. Whenever an event makes a higher-priority task ready to run, the current task is immediately suspended and control of the processor is given to the higher-priority task. Figure 11 shows an example of preemptive scheduling.
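The selection step of a priority-based preemptive scheduler can be sketched as below, again using the hypothetical tcb_t. A production kernel would keep one ready queue per priority level (or a priority bitmap) to make this selection O(1) rather than scanning a list.

    /* Sketch: pick the highest-priority READY task from a linked
       list of task control blocks. Assumes the hypothetical tcb_t
       from the earlier sketch. */
    tcb_t *schedule_next(tcb_t *ready_list)
    {
        tcb_t *best = NULL;
        for (tcb_t *t = ready_list; t != NULL; t = t->next) {
            if (t->state == READY &&
                (best == NULL || t->priority > best->priority))
                best = t;     /* higher number = higher priority here */
        }
        return best;          /* NULL means run the idle task */
    }

At a preemption point (e.g. an interrupt that makes a higher-priority task ready), the kernel would call schedule_next() and, if the result differs from the current task, perform a context switch.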

Dispatcher
The dispatcher gives control of the CPU to the task selected by the scheduler by performing the context switch and changing the flow of execution. At any time while an RTOS is running, the flow of execution passes through one of three areas: the program code of a task, an interrupt service routine, or the kernel.

2.6.3 Intertask Communication
Intertask communication involves the sharing of data among tasks, for instance through shared memory or the transmission of messages. Mechanisms available for intertask communication include:
• Message queues
• Pipes
• Remote procedure calls (RPC)

A message queue is an object through which tasks send and receive messages placed in shared memory. Tasks and ISRs send and receive messages to and from the queue through services provided by the kernel. A task that seeks a message from an empty queue is blocked, either for a set duration or until a message arrives. The sending and receiving of messages to and from the queue may follow (1) First In First Out (FIFO), (2) Last In First Out (LIFO), or (3) priority (PRI) order. A message queue usually comprises an associated queue control block (QCB), a name, a unique ID, memory buffers, a queue length, a maximum message length, and one or more task waiting lists. A message queue with a length of 1 is commonly known as a mailbox.

A pipe is an object that provides a simple communication channel for unstructured data exchange among tasks. A pipe can be opened, closed, written to, and read from. Traditionally, a pipe is a unidirectional data-exchange facility with two descriptors, one at each end, for reading and writing. Data is written into the pipe as an unstructured byte stream via one descriptor and read from the pipe in FIFO order via the other. Unlike a message queue, a pipe stores a stream of bytes rather than discrete messages, and data flow through a pipe cannot be prioritized.

A remote procedure call (RPC) component permits distributed computing, in which a task can invoke the execution of another task on a remote computer as if that task ran on the same computer.
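As an illustration of the message-queue concept, here is a minimal sketch of a mailbox (a message queue of length 1). The function names and the interrupt-masking helpers are assumptions, not the API of any particular RTOS; task blocking and wake-up are omitted.

    /* Sketch of a mailbox: a message queue of length 1, protected by
       disabling interrupts. The irq helpers are hypothetical stand-ins
       for the port-specific critical section; a real kernel would also
       block and wake waiting tasks. */
    #include <stdbool.h>

    extern void irq_disable(void);
    extern void irq_enable(void);

    typedef struct {
        void *msg;       /* the single message slot   */
        bool  full;      /* true when a message waits */
    } mailbox_t;

    bool mailbox_send(mailbox_t *mb, void *msg)
    {
        bool ok = false;
        irq_disable();               /* protect shared data from ISRs */
        if (!mb->full) { mb->msg = msg; mb->full = true; ok = true; }
        irq_enable();
        return ok;                   /* false: mailbox already full   */
    }

    bool mailbox_receive(mailbox_t *mb, void **out)
    {
        bool ok = false;
        irq_disable();
        if (mb->full) { *out = mb->msg; mb->full = false; ok = true; }
        irq_enable();
        return ok;                   /* false: caller would block     */
    }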
2.6.4 Memory Management
An embedded RTOS usually strives for a small footprint by including only the functionality needed for the user's applications. There are two types of memory management in RTOSs: stack management and heap management. In a multitasking RTOS, each task must be allocated an amount of memory for storing its context (i.e. volatile information such as register contents, the program counter, etc.) for context switching. This allocation is handled through the task-control-block model (see section 2.6.2). This set of memory is commonly known as the kernel stack, and the process of managing it is termed stack management.

Upon completion of program initialization, the physical memory of the MCU or MPU is typically occupied by program code, program data, and the system stack. The remaining physical memory is called the heap. Heap memory is typically used by the kernel for dynamic allocation of data space for tasks. The memory is divided into fixed-size blocks that can be requested by tasks. When a task finishes using a memory block, it must return the block to the pool. This process of managing the heap memory is known as heap management.
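A fixed-size block pool of the kind just described might be sketched as follows; the free list is threaded through the free blocks themselves, so no extra bookkeeping memory is needed. The names and sizes are illustrative, and a real kernel would guard these operations with a critical section.

    /* Sketch of a fixed-size block allocator over a static heap area.
       Each block either holds user data or, while free, a link to the
       next free block; the union also guarantees pointer alignment. */
    #include <stddef.h>

    #define BLOCK_SIZE  64
    #define NUM_BLOCKS  32

    typedef union block {
        union block  *next;
        unsigned char data[BLOCK_SIZE];
    } block_t;

    static block_t  pool[NUM_BLOCKS];
    static block_t *free_list;

    void pool_init(void)
    {
        free_list = NULL;
        for (int i = 0; i < NUM_BLOCKS; i++) {  /* chain all blocks */
            pool[i].next = free_list;
            free_list = &pool[i];
        }
    }

    void *pool_alloc(void)          /* request one fixed-size block  */
    {
        block_t *blk = free_list;   /* in a real RTOS, guard this    */
        if (blk)                    /* with a critical section       */
            free_list = blk->next;
        return blk;                 /* NULL when the pool is empty   */
    }

    void pool_free(void *p)         /* return a block to the pool    */
    {
        block_t *blk = p;
        blk->next = free_list;
        free_list = blk;
    }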
2.6.5 Timer Management
In embedded systems, system and user tasks are often scheduled to run after a specified delay. To provide such scheduling, a periodic interrupt is needed to keep track of delays and timeouts. Most RTOSs today offer both "relative timers", which work in units of ticks, and "absolute timers", which work with calendar date and time. For each kind of timer, RTOSs provide a "task delay" service as well as a "task alert" service based on a signaling mechanism (e.g. event flags). A further timer service helps tasks meet their deadlines by cooperating with the task scheduler to determine whether tasks have met or missed their real-time deadlines.
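The tick bookkeeping behind relative timers can be sketched as below. The tick_isr() hook and helper names are assumptions; the unsigned wrap-around comparison keeps the test correct when the tick counter overflows.

    /* Sketch of tick bookkeeping for relative timers. A hardware
       timer interrupt (hypothetical tick_isr) advances a global tick
       count; helpers convert a relative delay into an absolute
       deadline and test it later. */
    #include <stdint.h>
    #include <stdbool.h>

    static volatile uint32_t tick_count;     /* advances every tick  */

    void tick_isr(void)                      /* periodic interrupt   */
    {
        tick_count++;                        /* one tick elapsed     */
    }

    /* Unsigned wrap-around arithmetic keeps these correct when the
       counter overflows. */
    uint32_t deadline_after(uint32_t ticks)  { return tick_count + ticks; }
    bool     deadline_reached(uint32_t dl)   { return (int32_t)(tick_count - dl) >= 0; }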

2.6.6 Interrupt and Event Handling
An interrupt is a hardware mechanism used to inform the CPU that an asynchronous event has occurred. A fundamental challenge in RTOS design is supporting interrupts and thereby allowing asynchronous access to internal RTOS data structures. The interrupt and event handling mechanism of an RTOS provides the following functions:
• defining interrupt handlers
• creating and deleting ISRs
• referencing the state of an ISR
• enabling and disabling interrupts
• changing and referencing an interrupt mask
and helps to ensure:
• data integrity, by preventing interrupts from occurring while a data structure is being modified
• minimum interrupt latency, by disabling interrupts only while the RTOS performs critical operations
• the fastest possible interrupt response, which marks the preemptive performance of an RTOS
• the shortest possible interrupt completion time, with minimum overhead

2.6.7 Device I/O Management
An RTOS kernel is often equipped with a device I/O management service that provides a uniform framework (an application programmer's interface, or API) and a supervision facility through which an embedded system can organize and access large numbers of diverse hardware device drivers. However, most device driver APIs and supervisors are "standard" only within a specific RTOS.

3. Selection of RTOS
An RTOS tends to be the default choice for many embedded projects. But is an RTOS always necessary? The answer lies in a careful analysis of what the application needs to deliver, to determine whether an RTOS is a requirement or an extravagance. Most programmers are not familiar with RTOS constraints and requirements, and an RTOS is often chosen based on its performance or on one's comfort and familiarity with the product. Such selection criteria are insufficient. To make matters worse, there is a wide variety of RTOSs to choose from, ranging from commercial and open-source RTOSs to internally developed ones. It is therefore incumbent upon programmers to exercise extra caution in the selection process. The selection criteria can be broadly classified into two main areas: the technical features of the RTOS and the commercial aspects of the implementation.

3.1 Technical Considerations
3.1.1 Scalability
Size, or memory footprint, is an important consideration. Most RTOSs are scalable, so that only the code actually required is included in the final memory footprint. Looking for granular scalability in an RTOS is a worthwhile endeavor, as it minimizes memory usage.

3.1.2 Portability
A current application may outgrow the hardware it was originally designed for as the requirements of the product increase. A portable RTOS can be moved between processor architectures and between specific target systems.

3.1.3 Run-time facilities
Run-time facilities refer to the services of the kernel (i.e. intertask communication, task synchronization, interrupt and event handling, etc.). Different application systems have different sets of requirements, and RTOSs are frequently compared on the kernel-level facilities they provide.

3.1.5 Development tools
A sufficient set of development tools, including a debugger, compiler, and performance profiler, can shorten development and debugging time and improve the reliability of the code. Commercial RTOSs usually come with a complete set of tools for analyzing and optimizing the RTOS's behavior, whereas open-source RTOSs often do not.

3.2 Commercial Considerations
3.2.1 Costs
Cost is a major consideration in the selection of an RTOS. There are currently more than 80 RTOS vendors. Some RTOS packages are complete operating systems, including not only the real-time kernel but also an input/output manager, windowing systems, a file system, networking, language interface libraries, debuggers, and cross-platform compilers. The cost of an RTOS ranges from US$70 to over US$30,000. The vendor may also require royalties on a per-target-system basis, which may vary from US$5 to more than US$250 per unit. In addition, maintenance can easily cost between US$100 and US$5,000 per year.

3.2.2 License
An RTOS vendor usually offers a few license models. A perpetual license lets customers purchase the development kit and pay an annual maintenance fee that entitles them to upgrades and bug fixes. An alternative, the subscription model, lets customers "rent" the development kit while paying an annual fee to renew access. The subscription model offers lower technology-acquisition fees, but the annual renewal fees can escalate over many years.

3.2.3 Supplier stability/longevity
Development with an RTOS is not a problem-free process. Reliable and consistent supplier support is a critical factor in the prompt completion of a project, and supplier longevity helps determine the long-term availability of that support.

Event-driven Scheduling – An Introduction
In this lesson, we discuss the various algorithms for event-driven scheduling. From the previous lesson, we may recollect the following points: clock-driven schedulers are those in which the scheduling points are determined by interrupts received from a clock; in event-driven schedulers, the scheduling points are defined by the occurrence of certain events other than clock interrupts; hybrid schedulers use both clock interrupts and event occurrences to define their scheduling points.

Cyclic schedulers are very efficient. However, a prominent shortcoming of cyclic schedulers is that it becomes very complex to determine a suitable frame size, and a feasible schedule, as the number of tasks increases. Further, in almost every frame some processing time is wasted (since the frame size is larger than all task execution times), resulting in sub-optimal schedules. Event-driven schedulers overcome these shortcomings, and they can also handle aperiodic and sporadic tasks more proficiently. On the flip side, event-driven schedulers are less efficient because they deploy more complex scheduling algorithms; they are therefore less suitable for small embedded applications, which must be of small size, low cost, and minimal power consumption. It should now be clear why event-driven schedulers are invariably used in moderate and large-sized applications having many tasks, whereas cyclic schedulers are predominantly used in small applications. In event-driven scheduling, the scheduling points are defined by task completion and task arrival events. This class of schedulers is normally preemptive: when a higher-priority task becomes ready, it preempts any lower-priority task that may be running.

1.1. Types of Event-Driven Schedulers
We discuss three important types of event-driven schedulers:
• Simple priority-based
• Rate Monotonic Analysis (RMA)
• Earliest Deadline First (EDF)
The simplest of these is the foreground-background scheduler, which we discuss next. In section 1.4 we discuss EDF, and in section 1.5 we discuss RMA.

1.2. Foreground-Background Scheduler
A foreground-background scheduler is possibly the simplest priority-driven preemptive scheduler. In foreground-background scheduling, the real-time tasks in an application are run as foreground tasks; the sporadic, aperiodic, and non-real-time tasks are run as background tasks. Among the foreground tasks, at every scheduling point the highest-priority task is taken up for scheduling. A background task can run only when none of the foreground tasks is ready; in other words, the background tasks run at the lowest priority.

Let us assume that in a certain real-time system there are n foreground tasks, denoted T1, T2, ..., Tn. As already mentioned, the foreground tasks are all periodic. Let TB be the only background task, and let eB be its processing time requirement. Then the completion time ctB of the background task is given by:

$ct_B = \frac{e_B}{1 - \sum_{i=1}^{n} e_i / p_i}$ … (3.1/2.7)

This expression is easy to interpret. When any foreground task is executing, the background task waits. The average CPU utilization due to foreground task Ti is $e_i/p_i$, since $e_i$ amount of processing time is required over every period $p_i$. It follows that all foreground tasks together produce a CPU utilization of $\sum_{i=1}^{n} e_i/p_i$, so the average time available for execution of the background task in every unit of time is $1 - \sum_{i=1}^{n} e_i/p_i$. Hence, Expr. 2.7 follows easily.
We now illustrate the applicability of Expr. 2.7 through the following three simple examples.

1.3. Examples
Example 1: Consider a real-time system in which tasks are scheduled using foreground-background scheduling. There is only one periodic foreground task Tf: (φf = 0, pf = 100 msec, ef = 50 msec, df = 100 msec), and the background task is TB = (eB = 1000 msec). Compute the completion time of the background task.
Solution: Using Expr. 2.7 to compute the task completion time, we have ctB = 1000 / (1 − 50/100) = 2000 msec. So the background task TB takes 2000 milliseconds to complete.
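Expr. 2.7 is easy to evaluate mechanically; the following small C function is a sketch of that computation (the function name is illustrative).

    /* Sketch: completion time of the background task per Expr. 2.7,
       ct_B = e_B / (1 - sum(e_i / p_i)). Returns a negative value if
       the foreground tasks already saturate the CPU. */
    double background_completion(double eB, const double e[],
                                 const double p[], int n)
    {
        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += e[i] / p[i];          /* foreground utilization      */
        if (u >= 1.0) return -1.0;     /* background task never runs  */
        return eB / (1.0 - u);
    }

For Example 1, background_completion(1000.0, (double[]){50}, (double[]){100}, 1) computes a foreground utilization of 0.5 and returns 2000.0 msec.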
Example 2: In a simple priority-driven preemptive scheduler, two periodic tasks T1 and T2 and a background task are scheduled. The periodic task T1 has the highest priority and executes once every 20 milliseconds, requiring 10 milliseconds of execution time each time. T2 requires 20 milliseconds of processing every 50 milliseconds. T3 is a background task and requires 100 milliseconds to complete. Assuming that all the tasks start at time 0, determine the time at which T3 will complete.
Solution: The total utilization due to the foreground tasks is $\sum_{i=1}^{2} e_i/p_i = 10/20 + 20/50 = 90/100$. The fraction of time remaining for the background task to execute is therefore $1 - \sum_{i=1}^{2} e_i/p_i = 10/100$. The background task thus gets 1 millisecond in every 10 milliseconds, and so takes 100 / (1/10) = 1000 milliseconds to complete.

Example 3: Suppose that in Example 1 an overhead of 1 msec for every context switch is to be taken into account. Compute the completion time of TB.

[Fig. 30.1: Task schedule for Example 3: foreground and background task executions alternate along a time axis in milliseconds; the context-switching times at 0, 1, 51, 52, and 100 are shown shaded.]

Solution: The very first time the foreground task runs (at time 0), it incurs a context-switching overhead of 1 msec, shown as a shaded rectangle in Fig. 30.1. Each subsequent time the foreground task runs, it preempts the background task and incurs one context switch; on completing, it hands the CPU back to the background task at the cost of another context switch. With this observation, to simplify the computation of the actual completion time of TB, we can imagine that the execution time of every instance of the foreground task is increased by two context-switch times (one for itself and one for the background task that resumes after it completes). The net effect of the context switches is thus to increase the effective execution time of the foreground task from 50 to 52 milliseconds, as shown pictorially in Fig. 30.1. Now, using Expr. 2.7, the time required by the background task to complete is 1000 / (1 − 52/100) = 2083.4 milliseconds.

In the following two sections, we examine two important event-driven schedulers: EDF (Earliest Deadline First) and RMA (Rate Monotonic Algorithm). EDF is the optimal dynamic-priority real-time task scheduling algorithm, and RMA is the optimal static-priority real-time task scheduling algorithm.
1.4. Earliest Deadline First (EDF) Scheduling
In Earliest Deadline First (EDF) scheduling, at every scheduling point the task having the shortest deadline is taken up for scheduling. The basic principle of this algorithm is intuitive and simple to understand. The schedulability test for EDF is also simple: a task set is schedulable under EDF if and only if the total processor utilization due to the task set does not exceed 1. For a set of periodic real-time tasks {T1, T2, ..., Tn}, the EDF schedulability criterion can be expressed as:

$\sum_{i=1}^{n} e_i/p_i = \sum_{i=1}^{n} u_i \le 1$ … (3.2/2.8)

where $u_i$ is the average utilization due to task Ti and n is the total number of tasks in the task set. Expr. 3.2 is both a necessary and a sufficient condition for a set of tasks to be EDF schedulable. EDF has been proven to be an optimal uniprocessor scheduling algorithm: if a set of tasks is not schedulable under EDF, then no other scheduling algorithm can feasibly schedule it.

In the simple schedulability test for EDF (Expr. 3.2), we assumed that the period of each task equals its deadline. In practical problems, however, the period of a task may at times differ from its deadline, and the schedulability test must then be changed. If pi > di, then each task needs ei amount of computing time every min(pi, di) duration of time, so we can rewrite Expr. 3.2 as:

$\sum_{i=1}^{n} e_i / \min(p_i, d_i) \le 1$ … (3.3/2.9)

However, if pi < di, a set of tasks may be EDF schedulable even when it fails to meet Expr. 3.3. Expr. 3.3 is therefore conservative when pi < di: it is not a necessary condition, but only a sufficient one, for a given task set to be EDF schedulable.

Example 4: Consider the following three periodic real-time tasks to be scheduled using EDF on a uniprocessor: T1 = (e1=10, p1=20), T2 = (e2=5, p2=50), T3 = (e3=10, p3=35). Determine whether the task set is schedulable.
Solution: The total utilization due to the three tasks is $\sum_{i=1}^{3} e_i/p_i = 10/20 + 5/50 + 10/35 = 0.89$. This is less than 1; therefore, the task set is EDF schedulable.

Though EDF is a simple as well as an optimal algorithm, it has a few shortcomings which render it almost unusable in practical applications. The main problems with EDF are discussed in Sec. 1.4.3. Next, we discuss the concept of task priority in EDF and then how EDF can be practically implemented.
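A small sketch of the EDF utilization test of Expr. 3.2, with the min(p, d) refinement of Expr. 3.3, follows; the function name is illustrative.

    /* Sketch: EDF schedulability per Expr. 3.2/3.3. With d[i] == p[i]
       this is the exact (necessary and sufficient) test of Expr. 3.2;
       with d[i] != p[i] it applies the min(p, d) bound of Expr. 3.3,
       which the text notes is only sufficient when p < d. */
    #include <stdbool.h>

    bool edf_schedulable(const double e[], const double p[],
                         const double d[], int n)
    {
        double u = 0.0;
        for (int i = 0; i < n; i++) {
            double window = (p[i] < d[i]) ? p[i] : d[i];  /* min(p, d) */
            u += e[i] / window;
        }
        return u <= 1.0;
    }

For Example 4, edf_schedulable((double[]){10, 5, 10}, (double[]){20, 50, 35}, (double[]){20, 50, 35}, 3) computes a utilization of about 0.89 and returns true.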
1.4.1. Is EDF Really a Dynamic Priority Scheduling Algorithm?
We stated earlier that EDF is a dynamic priority scheduling algorithm. Was it correct to assert this? If EDF were a dynamic priority algorithm, we should be able to determine the precise priority value of a task at any point in time and show how it changes with time. If we reflect on our discussion of EDF in this section, EDF scheduling does not require any priority value to be computed for any task at any time. In fact, EDF has no notion of a priority value for a task; tasks are scheduled solely based on the proximity of their deadlines. However, the longer a task waits in the ready queue, the higher its chance (probability) of being taken up for scheduling. We can therefore imagine a virtual priority value associated with each task that keeps increasing with time until the task is taken up for scheduling. It is nevertheless important to understand that in EDF the tasks neither carry any priority value, nor does the scheduler perform any priority computations to determine the schedulability of a task, at either run time or compile time.
1.4.2. Implementation of EDF
A naive implementation of EDF would maintain all tasks that are ready for execution in a queue. Any freshly arriving task would be inserted at the end of the queue, and every node in the queue would contain the absolute deadline of the task. At every preemption point, the entire queue would be scanned from the beginning to find the task having the shortest deadline. This implementation is very inefficient: each task insertion takes O(1) (constant) time, but selecting the task to run next, and deleting it, takes O(n) time, where n is the number of tasks in the queue.

A more efficient implementation of EDF maintains all ready tasks in a sorted priority queue, which can be implemented efficiently using a heap data structure. In the priority queue, the tasks are always kept sorted according to the proximity of their deadlines. When a task arrives, a record for it can be inserted into the heap in O(log n) time, where n is the total number of tasks in the priority queue. At every scheduling point, the next task to run is found at the top of the heap in O(1) time; removing it and restoring the heap order takes O(log n) time.

A still more efficient implementation of EDF can be achieved under the assumption that the number of distinct relative deadlines that tasks in an application can have is restricted. In this approach, whenever a task arrives, its absolute deadline is computed from its release time and its relative deadline. A separate FIFO queue is maintained for each distinct relative deadline that tasks can have, and the scheduler inserts a newly arrived task at the end of the corresponding relative-deadline queue. Clearly, the tasks in each queue are ordered according to their absolute deadlines. To find the task with the earliest absolute deadline, the scheduler only needs to examine the heads of the FIFO queues. If the number of queues maintained by the scheduler is Q, then the order of searching is O(Q), a constant; the time to insert a task is also O(1).
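The per-relative-deadline FIFO scheme just described might be sketched as follows; the array-of-queues layout and the names are illustrative assumptions.

    /* Sketch: EDF with one FIFO queue per distinct relative deadline.
       Each queue is automatically ordered by absolute deadline, so
       the scheduler only compares the Q queue heads (O(Q), constant). */
    #include <stddef.h>
    #include <stdint.h>

    #define Q 4                          /* distinct relative deadlines  */

    typedef struct job {
        uint32_t abs_deadline;           /* release time + rel. deadline */
        struct job *next;
    } job_t;

    static job_t *head[Q], *tail[Q];     /* one FIFO per deadline class  */

    void enqueue(int cls, job_t *j)      /* O(1) insertion at the tail   */
    {
        j->next = NULL;
        if (tail[cls]) tail[cls]->next = j; else head[cls] = j;
        tail[cls] = j;
    }

    job_t *pick_earliest(void)           /* O(Q) scan of queue heads     */
    {
        job_t *best = NULL;
        int best_cls = -1;
        for (int c = 0; c < Q; c++)
            if (head[c] &&
                (!best || head[c]->abs_deadline < best->abs_deadline)) {
                best = head[c];
                best_cls = c;
            }
        if (best) {                      /* dequeue the chosen job       */
            head[best_cls] = best->next;
            if (!head[best_cls]) tail[best_cls] = NULL;
        }
        return best;
    }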
1.4.3. Shortcomings of EDF
In this subsection, we highlight some important shortcomings of EDF when used for scheduling real-time tasks in practical applications.

Transient Overload Problem: Transient overload denotes an overload of the system for a very short time. Transient overload occurs when some task takes more time to complete than was originally planned at design time. A task may take longer to complete for many reasons: for example, it might enter an infinite loop, or encounter an unusual condition and enter a rarely used branch due to abnormal input values. When EDF is used to schedule a set of periodic real-time tasks, a task overshooting its completion time can cause some other task(s) to miss their deadlines. It is usually very difficult to predict during program design which task might miss its deadline when a transient overload occurs due to a low-priority task overshooting its deadline. The only prediction that can be made is that the task (or tasks) running immediately after the task causing the transient overload would get delayed and might miss its (or their) respective deadline(s); but at different times a task might be followed in execution by different tasks, so this does not help us identify which task might miss its deadline. Even the most critical task might miss its deadline because a very low priority task overshot its planned completion time. It should be clear, then, that under EDF no amount of careful design can guarantee that the most critical task will meet its deadline under transient overload. This is a serious drawback of the EDF scheduling algorithm.

Resource Sharing Problem: When EDF is used to schedule a set of real-time tasks, unacceptably high overheads might have to be incurred to support resource sharing among the tasks without causing tasks to miss their respective deadlines. We examine this issue in some detail in the next lesson.
Efficient Implementation Problem: The efficient implementation discussed in Sec. 1.4.2 is often not practicable, because it is difficult to restrict the number of tasks with distinct relative deadlines to a reasonable number: the implementation that achieves O(1) overhead assumes the number of relative deadlines is restricted, which may be unacceptable in some situations. For a more flexible EDF algorithm, the tasks must be kept ordered by their deadlines in a priority queue. Whenever a task arrives, it is inserted into the priority queue, and the complexity of inserting an element into a priority queue is O(log n), where n is the number of tasks to be scheduled. This represents a high runtime overhead, since most real-time tasks are periodic with small periods and strict deadlines.
1.5. Rate Monotonic Algorithm (RMA)
We have already pointed out that RMA is an important event-driven scheduling algorithm. It is a static priority algorithm and is extensively used in practical applications. RMA assigns priorities to tasks based on their rates of occurrence: the lower the occurrence rate of a task, the lower the priority assigned to it, and the task with the highest occurrence rate (lowest period) is accorded the highest priority. RMA has been proved to be the optimal static-priority real-time task scheduling algorithm.

In RMA, the priority of a task is directly proportional to its rate (or, equivalently, inversely proportional to its period). That is, the priority of any task Ti is computed as priority = k / pi, where pi is the period of task Ti and k is a constant. Using this simple expression, plots of the priority values of tasks with different periods can easily be obtained; these are shown in Fig. 30.10(a) and Fig. 30.10(b). It can be observed from these figures that the priority of a task increases linearly with its arrival rate and falls inversely with its period.
1.5.1. Schedulability Test for RMA
An important problem addressed during the design of a uniprocessor-based real-time system is checking whether a set of periodic real-time tasks can feasibly be scheduled under RMA. Schedulability of a task set under RMA can be determined from a knowledge of the worst-case execution times and the periods of the tasks. A pertinent question at this point is how a system developer can determine the worst-case execution time of a task even before the system is developed; worst-case execution times are usually determined experimentally or through simulation studies. The following are some important criteria that can be used to check the schedulability of a set of tasks under RMA.

1.5.1.1 Necessary Condition
A set of periodic real-time tasks is not RMA schedulable unless it satisfies the following necessary condition:

$\sum_{i=1}^{n} e_i/p_i = \sum_{i=1}^{n} u_i \le 1$

where $e_i$ is the worst-case execution time and $p_i$ the period of task Ti, n is the number of tasks to be scheduled, and $u_i$ is the CPU utilization due to task Ti. This test simply expresses the fact that the total CPU utilization due to all the tasks in the task set should be at most 1.

1.5.1.2 Sufficient Condition
The derivation of the sufficiency condition for RMA schedulability is an important result obtained by Liu and Layland in 1973. A formal derivation of Liu and Layland's result from first principles is beyond the scope of this discussion; we will subsequently refer to the sufficiency condition as Liu and Layland's condition. A set of n real-time periodic tasks is schedulable under RMA if

$\sum_{i=1}^{n} u_i \le n(2^{1/n} - 1)$ … (3.4/2.10)

where $u_i$ is the utilization due to task Ti. Let us now examine the implications of this result. If a set of tasks satisfies the sufficient condition, then it is guaranteed to be RMA schedulable. Consider the case where there is only one task in the system, i.e. n = 1. Substituting n = 1 into Expr. 3.4, we get $\sum_{i=1}^{1} u_i \le 1(2^{1/1} - 1) = 1$. Similarly, for n = 2 we get $\sum_{i=1}^{2} u_i \le 2(2^{1/2} - 1) \approx 0.828$, and for n = 3 we get $\sum_{i=1}^{3} u_i \le 3(2^{1/3} - 1) \approx 0.78$. For n → ∞, the bound becomes $\lim_{n \to \infty} n(2^{1/n} - 1)$; applying L'Hôpital's rule, we can verify that this limit evaluates to $\log_e 2 \approx 0.693$.

From the above computations, it is clear that the maximum CPU utilization that can be achieved under RMA is 1, attained when there is only a single task in the system. As the number of tasks increases, the achievable CPU utilization falls, and as n → ∞ it stabilizes at $\log_e 2$, approximately 0.693. This is shown pictorially in Fig. 30.3. We now illustrate the applicability of the RMA schedulability criteria through a few examples.

1.5.2. Examples

Example 5: Check whether the following set of periodic real-time tasks is schedulable under RMA on a uniprocessor: T1 = (e1=20, p1=100), T2 = (e2=30, p2=150), T3 = (e3=60, p3=200).
Solution: Let us first compute the total CPU utilization due to the three given tasks: $\sum_{i=1}^{3} u_i = 20/100 + 30/150 + 60/200 = 0.7$. This is less than 1, so the necessary condition for schedulability is satisfied. Now checking the sufficient condition: the task set is schedulable under RMA if Liu and Layland's condition (Expr. 3.4) is satisfied. The maximum achievable utilization for three tasks is $3(2^{1/3} - 1) = 0.78$, and the total utilization has already been found to be 0.7. Substituting into Liu and Layland's criterion, $\sum_{i=1}^{3} u_i \le 3(2^{1/3} - 1)$, we get 0.7 < 0.78. Expr. 3.4, a sufficient condition for RMA schedulability, is satisfied; therefore, the task set is RMA schedulable.
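A sketch of the Liu and Layland test of Expr. 3.4 follows; the function name is illustrative. Remember that the test is only sufficient: failing it, as in Example 6 below, does not by itself prove unschedulability.

    /* Sketch: Liu and Layland's sufficient condition for RMA,
       sum(u_i) <= n * (2^(1/n) - 1). Passing guarantees RMA
       schedulability; failing is inconclusive (see Lehoczky's test). */
    #include <math.h>
    #include <stdbool.h>

    bool rma_liu_layland(const double e[], const double p[], int n)
    {
        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += e[i] / p[i];                    /* total utilization */
        double bound = n * (pow(2.0, 1.0 / n) - 1.0);
        return u <= bound;
    }

For Example 5, rma_liu_layland((double[]){20, 30, 60}, (double[]){100, 150, 200}, 3) computes U = 0.7 against a bound of about 0.78 and returns true.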

Example 6: Check whether the following set of three periodic real-time tasks is schedulable under RMA on a uniprocessor: T1 = (e1=20, p1=100), T2 = (e2=30, p2=150), T3 = (e3=90, p3=200).
Solution: Let us first compute the total CPU utilization due to the given task set: $\sum_{i=1}^{3} u_i = 20/100 + 30/150 + 90/200 = 0.85$. Now checking the Liu and Layland criterion, $\sum_{i=1}^{3} u_i \le 0.78$: since 0.85 is not ≤ 0.78, the task set fails the test.

The Liu and Layland test (Expr. 2.10) is pessimistic in the following sense: if a task set passes the test, it is guaranteed to be RMA schedulable; but even if a task set fails the test, it may still be RMA schedulable. It follows that when a task set fails Liu and Layland's test, we should not conclude that it is not schedulable under RMA; we need to test further. A test that can be performed to check whether a task set is RMA schedulable when it fails the Liu and Layland test is Lehoczky's test, which has been expressed as Theorem 3.
