Real-Time Systems: Design Principles for Distributed Embedded Applications. Hermann Kopetz. Second Edition, page 73.
Thus, the server task preserves its execution time until it is needed by a sporadic request. The sporadic server task is scheduled dynamically in response to the sporadic request event.

Mode Changes. During the operation of most real-time applications, a number of different operating modes can be distinguished. Consider the example of a flight control system in an airplane. When a plane is taxiing on the ground, a different set of services is required than when the plane is flying. Better resource utilization can be realized if only those tasks that are needed in a particular operating mode must be scheduled. If the system leaves one operating mode and enters another, a corresponding change of schedules must take place. During system design, one must identify all possible operating and emergency modes. For each mode, a static schedule that will meet all deadlines is calculated off-line.
Mode changes are analyzed and the appropriate mode change schedules are developed. Whenever a mode change is requested at run time, the applicable mode change schedule will be activated immediately. We conclude this section with a comment by Xu and Parnas [Xu91, p. 134]:

For satisfying timing constraints in hard real-time systems, predictability of the system's behavior is the most important concern; pre-run-time scheduling is often the only practical means of providing predictability in a complex system.

10.4 Dynamic Scheduling

After the occurrence of a significant event, a dynamic scheduling algorithm determines on-line which task out of the ready task set must be serviced next.
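The mode-change principle described above can be sketched as a lookup into pre-computed per-mode schedule tables; the mode names, task names, and schedule data below are hypothetical illustrations, not taken from the book:

```python
# Sketch of mode-dependent static scheduling (hypothetical data).
# Each mode maps to an off-line computed cyclic schedule: a list of
# (start_offset_ms, task_name) pairs within the schedule period.
MODE_SCHEDULES = {
    "taxiing": [(0, "brake_control"), (5, "steering"), (10, "cabin_io")],
    "flight":  [(0, "flight_control"), (2, "navigation"), (8, "cabin_io")],
}

current_schedule = MODE_SCHEDULES["taxiing"]

def request_mode_change(new_mode):
    """Activate the applicable pre-computed schedule immediately."""
    global current_schedule
    current_schedule = MODE_SCHEDULES[new_mode]

request_mode_change("flight")
assert current_schedule[0] == (0, "flight_control")
```

The point of the design is that no schedule is computed at run time: the mode change only selects among tables whose deadlines were verified off-line.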
The algorithms differ in the assumptions about the complexity of the task model and the future task behavior [But04].

10.4.1 Scheduling Independent Tasks

The classic algorithm for scheduling a set of periodic independent hard real-time tasks in a system with a single CPU, the rate monotonic algorithm, was published in 1973 by [Liu73].

Rate Monotonic Algorithm. The rate monotonic algorithm is a dynamic preemptive algorithm based on static task priorities. It makes the following assumptions about the task set:

1. The requests for all tasks of the task set {Ti} for which hard deadlines must be satisfied are periodic.
2. All tasks are independent of each other. There exist no precedence constraints or mutual exclusion constraints between any pair of tasks.
3. The deadline interval of every task Ti is equal to its period pi.
4. The required maximum computation time ci of each task is known a priori and is constant.
5. The time required for context switching can be ignored.
6. The sum of the utilization factors μ of the n tasks with periods pi satisfies

μ = Σ ci/pi ≤ n(2^(1/n) − 1).

The term n(2^(1/n) − 1) approaches ln 2, i.e., about 0.7, as n goes to infinity.

The rate monotonic algorithm assigns static priorities based on the task periods. The task with the shortest period gets the highest static priority, and the task with the longest period gets the lowest static priority. At run time, the dispatcher selects the task request with the highest static priority. If all the assumptions are satisfied, the rate monotonic algorithm guarantees that all tasks will meet their deadlines.
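The utilization bound of assumption (6) can be checked mechanically. A minimal sketch follows; the task set shown is hypothetical, and note that the bound is a sufficient condition, not a necessary one:

```python
def rm_utilization_test(tasks):
    """Sufficient schedulability test for the rate monotonic algorithm:
    the total utilization sum(ci/pi) must not exceed n * (2**(1/n) - 1).
    tasks: list of (ci, pi) pairs (WCET, period); deadline == period.
    For harmonic periods the bound can be relaxed up to 1."""
    n = len(tasks)
    mu = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return mu <= bound

# Hypothetical task set: utilization 0.25 + 0.2 + 0.3 = 0.75,
# below the n = 3 bound of 3 * (2**(1/3) - 1) ≈ 0.780.
print(rm_utilization_test([(1, 4), (1, 5), (3, 10)]))  # prints True
```

A task set that fails this test may still be schedulable; an exact answer requires a response-time analysis at the critical instant discussed below.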
The algorithm is optimal for single processor systems. The proof of this algorithm is based on the analysis of the behavior of the task set at the critical instant. A critical instant of a task is the moment at which the request of this task will have the longest response time. For the task system as a whole, the critical instant occurs when requests for all tasks are made simultaneously. Starting with the highest priority task, it can be shown that all tasks will meet their deadlines, even in the case of the critical instant. In a second phase of the proof it must be shown that any scenario can be handled if the critical-instant scenario can be handled.
For the details of the proof refer to [Liu73]. It is also shown that assumption (6) above can be relaxed in case the task periods are harmonic, i.e., they are multiples of the period of the highest priority task. In this case the utilization factor μ of the n tasks,

μ = Σ ci/pi ≤ 1,

can approach the theoretical maximum of unity in a single processor system. In recent years, the rate monotonic theory has been extended to handle a set of tasks where the deadline interval can be different from the period [But04].

Earliest-Deadline-First (EDF) Algorithm.
This algorithm is an optimal dynamic preemptive algorithm for single processor systems, based on dynamic priorities. Assumptions (1) to (5) of the rate monotonic algorithm must hold. The processor utilization μ can go up to 1, even when the task periods are not multiples of the smallest period.
After any significant event, the task with the earliest deadline is assigned the highest dynamic priority. The dispatcher operates in the same way as the dispatcher for the rate monotonic algorithm.

Least-Laxity (LL) Algorithm. In single processor systems, the least laxity algorithm is another optimal algorithm.
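Both dynamic-priority rules reduce to a selection over the ready task set at each scheduling decision instant. The following sketch is illustrative only (the task data is hypothetical); it contrasts the EDF choice with the least-laxity choice:

```python
# Each ready task is (name, deadline_interval_d, remaining_computation_c),
# with both times measured from the current scheduling decision instant.
# Hypothetical ready set, chosen so the two rules pick different tasks:
ready = [("T1", 10, 2), ("T2", 6, 1), ("T3", 8, 7)]

# EDF: highest dynamic priority goes to the earliest deadline.
edf_choice = min(ready, key=lambda t: t[1])

# LL: highest dynamic priority goes to the smallest laxity l = d - c.
ll_choice = min(ready, key=lambda t: t[1] - t[2])

print(edf_choice[0])  # prints T2 (deadline 6 is the earliest)
print(ll_choice[0])   # prints T3 (laxity 8 - 7 = 1 is the smallest)
```

On a single processor both rules are optimal, yet as the example shows they need not select the same task; on multiprocessors each can handle scenarios the other cannot.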
It makes the same assumptions as the EDF algorithm. At any scheduling decision instant, the task with the shortest laxity l, i.e., the difference between the deadline interval d and the computation time c,

l = d − c,

is assigned the highest dynamic priority. In multiprocessor systems, neither the earliest-deadline-first nor the least-laxity algorithm is optimal, although the least-laxity algorithm can handle task scenarios which the earliest-deadline-first algorithm cannot handle, and vice versa.

10.4.2 Scheduling Dependent Tasks

From a practical point of view, results on how to schedule tasks with precedence and mutual exclusion constraints are much more important than the analysis of the independent task model.
Normally, the concurrently executing tasks must exchange information and access common data resources to cooperate in the achievement of the overall system objectives. The observation of given precedence and mutual exclusion constraints is thus rather the norm than the exception in distributed real-time systems. To solve this problem, the priority ceiling protocol was developed by [Sha90]. The priority ceiling protocol can be used to schedule a set of periodic tasks that have exclusive access to common resources protected by semaphores.
These common resources, e.g., common data structures, can be utilized to realize an inter-task communication.

The priority ceiling of a semaphore is defined as the priority of the highest priority task that may lock this semaphore. A task T is allowed to enter a critical section only if its assigned priority is higher than the priority ceilings of all semaphores currently locked by tasks other than T. Task T runs at its assigned priority unless it is in a critical section and blocks higher priority tasks.
In this case it inherits the highest priority of the tasks it blocks. When it exits the critical section, it resumes the priority it had at the point of entry into the critical section.

The example of Fig. 10.7, taken from [Sha90], illustrates the operation of the priority ceiling protocol. A system of three tasks, T1 (highest priority), T2 (middle priority), and T3 (lowest priority), competes for three critical regions protected by the three semaphores S1, S2, and S3.

Schedulability Test for the Priority Ceiling Protocol. The following sufficient schedulability test for the priority ceiling protocol has been given by [Sha90]. Assume a set of periodic tasks {Ti} with periods pi and computation times ci. We denote the worst-case blocking time of a task Ti by lower priority tasks by Bi.
The set of n periodic tasks {Ti} can be scheduled if the following set of inequalities holds for all i, 1 ≤ i ≤ n:

(c1/p1 + c2/p2 + ... + ci/pi + Bi/pi) ≤ i(2^(1/i) − 1)

T1: ..., P(S1), ..., V(S1), ... (highest priority)
T2: ..., P(S2), ..., P(S3), ..., V(S3), ..., V(S2), ... (middle priority)
T3: ..., P(S3), ..., P(S2), ..., V(S2), ..., V(S3), ... (lowest priority)

[Timing diagram of Fig. 10.7: the command sequences executed by tasks T1, T2, and T3 over events 1-16, with the critical sections guarded by S1 (ceiling: high), S2 (ceiling: middle), and S3 (ceiling: middle).]

Event  Action
1      T3 begins execution.
2      T3 locks S3.
3      T2 is started and preempts T3.
4      T2 becomes blocked when trying to access S2, since the priority of T2 is not higher than the priority ceiling of the locked S3. T3 resumes the execution of its critical section at the inherited priority of T2.
5      T1 is initiated and preempts T3.
6      T1 locks the semaphore S1. The priority of T1 is higher than the priority ceiling of all locked semaphores.
7      T1 unlocks semaphore S1.
8      T1 finishes its execution. T3 continues with the inherited priority of T2.
9      T3 locks semaphore S2.
10     T3 unlocks S2.
11     T3 unlocks S3 and returns to its lowest priority. At this point T2 can lock S2.
12     T2 locks S3.
13     T2 unlocks S3.
14     T2 unlocks S2.
15     T2 completes. T3 resumes its operation.
16     T3 completes.

Fig. 10.7 The priority ceiling protocol (example taken from [Sha90])

In these inequalities the effect of preemptions by higher priority tasks is considered in the first i terms (in analogy to the rate monotonic algorithm), whereas the worst-case blocking time due to all lower priority tasks is represented in the term Bi/pi. The blocking term Bi/pi, which can become very significant if a task with a short period (i.e., small pi) is blocked for a significant fraction of its time, effectively reduces the CPU utilization of the task system.
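The schedulability test above lends itself to a direct implementation. The following is a minimal sketch under the stated assumptions (tasks ordered by decreasing priority, i.e., increasing period under rate monotonic assignment); the task set used in the example is hypothetical:

```python
def pcp_schedulability_test(tasks):
    """Sufficient schedulability test from [Sha90] for the priority
    ceiling protocol. tasks: list of (ci, pi, Bi) triples, ordered by
    decreasing priority. Checks, for every i with 1 <= i <= n:
        c1/p1 + ... + ci/pi + Bi/pi <= i * (2**(1/i) - 1)."""
    preemption = 0.0  # running sum of the first i utilization terms
    for i, (c, p, B) in enumerate(tasks, start=1):
        preemption += c / p
        # Add the worst-case blocking term Bi/pi for this priority level.
        if preemption + B / p > i * (2 ** (1 / i) - 1):
            return False
    return True

# Hypothetical task set: (WCET, period, worst-case blocking time),
# highest priority (shortest period) first.
print(pcp_schedulability_test([(1, 4, 1), (1, 6, 1), (2, 13, 0)]))  # prints True
```

Because the test is only sufficient, a set that fails it is not necessarily unschedulable; but the Bi/pi term makes visible how blocking of a short-period task erodes the usable utilization.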