It can load arbitrary code and execute it at run time; it can interact with users through applications. At the same time, the operating system must support real-time functionality, especially where communication functions are concerned. This combination of requirements makes scheduling interesting.

Because of the real-time requirements, Symbian OS is implemented as a real-time operating system. It is built to run on multiple phone platforms, without specialized hardware, so the operating system is considered to be a soft real-time system.
It needs enough real-time capabilities to run the protocols for mobile protocol stacks, such as GSM and 3G (not to mention future protocols). In fact, Symbian OS considers scheduling to be such a basic service that the nanokernel provides it.

The combination of general-purpose functionality with real-time system requirements means that the best choice for implementation is a system that uses a static, monotonic scheduling strategy, augmented by time slices. Static, monotonic scheduling is a simple strategy to use – it organizes processes with the shortest deadline first – and the introduction of time slices means that processes with the same deadline (or no deadline) can be assigned time slices and scheduled using a priority-scheduling scheme. There are 64 levels of priority in Symbian OS.

As we discussed before, a key to soft real-time performance is predictable execution time.
If an operating system can predict how long a process will run, then a static, monotonic scheduling strategy will work, since it makes some big assumptions about run time. Predicting execution time is based on the process and several system characteristics. There are several important characteristics that must be predictable, including:

• latency times: an important benchmark is the latency of handling interrupts – the time from an interrupt to a user thread and from an interrupt to a kernel thread

• the time to get information about processes: for example, the time it takes to find the highest-priority thread in the ready state

• the time to move threads between queues and the CPU: manipulating scheduling queues – for example, moving processes to and from the ready queue – must be bounded.
This functionality is used all the time and it must have a bound on it or the system cannot predict performance.

Predicting these quantities is important and is reflected in the design of the scheduler. For example, in order for Symbian OS to predict the time for finding the highest-priority thread, the operating system uses 64 separate queues, one for each priority level.
In addition, there is a 64-bit mask, in which a set bit indicates that there are processes in the corresponding queue. This means that, to choose a process, the operating system scans the mask and takes the first process from the highest-priority non-empty queue, instead of searching through a single queue with an unknown number of processes in it.

5.6 Summary

In this chapter, we have looked at the requirements of sharing a computer's CPU between processes. We began the chapter by outlining what we mean by 'sharing' and what criteria can be used to assess how good a sharing strategy is. We then examined several different strategies that are used to schedule a CPU between multiple processes. Finally, we described three scheduling implementations: for Linux, for general microkernels and for Symbian OS.

We have examined how to build the 'illusion' of supporting multiple processes executing at the same time on a single processor.
The next chapter examines how to continue this illusion by discussing how concurrently running processes can communicate with each other.

Exercises

1. We discussed pre-emptive and non-pre-emptive scheduling. List computing environments and state whether pre-emptive or non-pre-emptive scheduling would be best used in each. Give environments that cannot use either pre-emptive or non-pre-emptive scheduling.

2. Describe why non-pre-emptive scheduling should not be used in a soft real-time environment.

3. How should an I/O call be handled by a non-pre-emptive scheduler?

4. Consider the following set of processes, listed with the time needed and the scheduling priority requested. Assume that 1 is the highest priority and that the requests arrive in the order P1 to P5.
Assume that a time slice is 2 time units.

   Process    Time Needed    Priority
   P1         21             2
   P2         19             4
   P3         3              3
   P4         10             6
   P5         13             5

a. Draw time bars to indicate the scheduling sequence for these processes under an FCFS, an SJF, a round-robin and a pre-emptive priority scheduling scheme.

b. Compute the turnaround time for each process under each of the scheduling strategies.

c. Compute the average waiting time for each process under each of the scheduling strategies.

d. Which of the scheduling scenarios has the best turnaround time?

e. Which of the scheduling scenarios has the least waiting time?

5. Suppose a new scheduling strategy, called least processor time first (LPTF), is invented. In an LPTF strategy, the next process chosen for the CPU is the one that has used the least processor time. Why does this favor I/O-bound processes? Is it effective in eliminating starvation?

6. How would you tune the Linux scheduling strategy to better support real-time computing? Give at least three suggestions.

7. Suppose a new Linux strategy is invented that looks for higher-priority processes between the scheduling of each process.
If one is waiting, it is scheduled for an extra time slice. Why does this method encourage starvation?

8. Should interrupts be disabled during a context switch? Describe how disabling and enabling them affects a scheduling algorithm.

9. How does queue maintenance affect a scheduling algorithm? In other words, explain how the careful placing of a process's PCB in a queue, after removing the process from the CPU, affects a scheduling algorithm.

6 Process Concurrency and Synchronization

As children, most people are taught to take turns. Taking turns is a way to share something between two or more people in an organized fashion so that everyone has time to use the object, but no one can exclusively take it as their own.
Taking turns is a great way to share, but it is often not done correctly, especially with children. If turns are unfair or someone is seen as taking more than her turn, several bad things can happen. Usually, the result is a fight among the children sharing the object or damage to the object being shared. Often an adult must step in to monitor the situation.

When processes must share objects in a computer system, the same careful attention must be given to proper sharing.
If an object – say, a device or a portion of memory – is not shared correctly, several bad things can happen. The shared object could be corrupted, or the processes involved could deadlock as they fight for access. In all cases, things are best handled when the operating system steps in and provides some kind of supervision.

This chapter deals with how an operating system shares objects between processes. We have established in previous chapters that processes can be seen as executing concurrently, even when they share a single processor. We examine what it takes to get those concurrent processes to coordinate over shared objects and to communicate with each other within an operating environment.
We also examine how to avoid the pitfalls of sharing, specifically deadlocks.

6.1 Concepts and Models for Concurrency

In an operating system, multiple processes run virtually at the same time and share all the resources of a computer system. It is the nature of modern operating systems to support applications that work this way. Modern operating systems are built assuming that processes share memory space, devices and other resources with each other. Much of the time, dwelling in this shared world is easy and requires no special action. However, there are times when processes must cooperate to properly share something between them.
This section introduces the concepts and ideas necessary to discuss sharing and cooperation.

Understanding the Environment

Let's begin by looking at the environment in which cooperating processes execute. A typical execution environment contains any number of cooperating sequential processes, which run in parallel with each other. Each process runs sequential code within its own process space. Even though other processes are executing concurrently, each process is unaware of the actions taken by the others.

For example, consider a process that reads data from a memory buffer that is generated by a second process.