In this chapter, we have examined how to build the 'illusion' of supporting multiple processes executing at the same time on a single processor. We began the chapter by outlining what we mean by 'sharing' and what criteria can be used to assess how good a sharing strategy is. We then examined several different strategies that are used to schedule a CPU to be used by multiple processes. Finally, we described three scheduling implementations: for Linux, for general microkernels and for Symbian OS.
The next chapter examines how to continue this illusion by discussing how concurrently running processes can communicate with each other.

Exercises

1. We discussed pre-emptive and non-pre-emptive scheduling. List computing environments and state whether pre-emptive or non-pre-emptive scheduling would be best used. Give environments that cannot use either pre-emptive or non-pre-emptive scheduling.

2. Describe why non-pre-emptive scheduling should not be used in a soft real-time environment.

3. How should an I/O call be used by a non-pre-emptive scheduler?

4. Consider the following set of processes, listed with the time needed and the scheduling priority requested. Assume that 1 is the highest priority and that the requests arrive in the order P1 to P5. Assume that a time slice is 2 time units.

       Process   Time Needed   Priority
       P1        21            2
       P2        19            4
       P3        3             3
       P4        10            6
       P5        13            5

   a. Draw time bars to indicate the scheduling sequence for these processes under an FCFS, an SJF, a round-robin and a pre-emptive priority scheduling scheme.
   b. Compute the turnaround time for each process for each of the scheduling strategies.
   c. Compute the average waiting time for each process for each of the scheduling strategies.
   d. Which of the scheduling scenarios has the best turnaround time?
   e. Which of the scheduling scenarios has the least waiting time?

5. Suppose a new scheduling strategy, called least processor time first (LPTF), is invented. In an LPTF strategy, the next process chosen for the CPU is the one that has used the least processor time. Why does this favor I/O-bound processes? Is it effective in eliminating starvation?

6. How would you tune the Linux scheduling strategy to better support real-time computing? Give at least three suggestions.

7. Suppose a new Linux strategy is invented that looks for higher-priority processes between the scheduling of each process. If one is waiting, it is scheduled for an extra time slice. Why does this method encourage starvation?

8. Should interrupts be disabled during a context switch? Describe how disabling and enabling interrupts affects a scheduling algorithm.

9. How does queue maintenance affect a scheduling algorithm? In other words, explain how careful placing of a process's PCB in a queue after removing the process from the CPU affects a scheduling algorithm.

6 Process Concurrency and Synchronization

As children, most people are taught to take turns.
Taking turns is a way to share something between two or more people in an organized fashion so that everyone has time to use the object, but no one can exclusively take it as their own. Taking turns is a great way to share, but it is often not done correctly, especially with children. If turns are unfair or someone is seen as taking more than her turn, several bad things can happen. Usually, the result is a fight among the children sharing the object or damage to the object being shared. Often an adult must step in to monitor the situation.

When processes must share objects in a computer system, the same careful attention must be given to proper sharing. If an object – say, a device or a portion of memory – is not shared correctly, several bad things can happen. The shared object could be corrupted or the processes involved could deadlock as they fight for access. In all cases, things are best handled when the operating system steps in and provides some kind of supervision.

This chapter deals with how objects in an operating system are shared between processes. We have established in previous chapters that processes can be seen as executing concurrently, even when they share a single processor. We examine what it takes to get those concurrent processes to coordinate their access to shared objects and to communicate with each other within an operating environment.
We also examine how to avoid the pitfalls of sharing, specifically deadlocks.

6.1 Concepts and Models for Concurrency

In an operating system, multiple processes run virtually at the same time and share all the resources of a computer system.
It is the nature of modern operating systems to support applications that work this way. Modern operating systems are built assuming that processes share memory space, devices and other resources with each other. Much of the time, dwelling in this shared world is easy and requires no special action. However, there are times when processes must cooperate to properly share something between them. This section introduces the concepts and ideas necessary to discuss sharing and cooperation.

Understanding the Environment

Let's begin by looking at the environment in which cooperating processes execute. A usual execution environment contains any number of cooperating sequential processes, which run in parallel with each other. Each process runs sequential code within its own process space. Even though other processes are executing concurrently, each process is unaware of the actions taken by the others.

For example, consider a process that reads data from a memory buffer that is generated by a second process. This could occur, for example, if one process is producing timing data and the other process is reading that data and displaying it.
Consider, for example, the consumer's code below:

    while (true)
    {
        while (timing_count == 0)
            ; // do nothing
        timing_data = time_buffer[time_out];
        time_out = (time_out + 1) % buffer_size;
        timing_count--;
        display(timing_data);
    }

The consumer waits – doing nothing – until the variable timing_count, which indicates how much timing data there is in the buffer to be read, is greater than zero.
This timing data is produced by a data producer that executes code similar to that below:

    while (true)
    {
        while (timing_count == buffer_size)
            ; // do nothing
        timing_data = timer();
        time_buffer[time_in] = timing_data;
        time_in = (time_in + 1) % buffer_size;
        timing_count++;
    }

These two code fragments run in parallel. Note that the processes share time_buffer, which is filled and emptied of time data. Each process keeps its own record of where it is in the buffer. Finally, note that the count timing_count is also shared between the processes. The variables time_out and time_in are private variables and are meant to keep track of each process's position on a circular basis.
The shared timing_count variable is meant to indicate how much data is in the buffer that should be displayed.

It is easy to demonstrate how these code sections can work together well. For example, if a producer section is run before a consumer section, all objects are shared correctly, as in the sequence of code below (the time-data producer's statements are indented):

        timing_data = timer();
        time_buffer[time_in] = timing_data;
        time_in = (time_in + 1) % buffer_size;
        timing_count++;
    timing_data = time_buffer[time_out];
    time_out = (time_out + 1) % buffer_size;
    timing_count--;
    display(timing_data);

Even certain interleavings of the code work correctly:

    timing_data = time_buffer[time_out];
    time_out = (time_out + 1) % buffer_size;
        timing_data = timer();
        time_buffer[time_in] = timing_data;
        time_in = (time_in + 1) % buffer_size;
    timing_count--;
        timing_count++;
    display(timing_data);

These are 'macro-style' interleavings.
That is, they interleave entire statements, which are themselves made up of instructions. Interleaving of process execution ultimately happens at the instruction level. Consider what happens if we interleave the instructions that these statements comprise a bit differently. Let's assume that the execution of timing_count++ and timing_count-- is implemented like this:

    load register from timing_count location
    add (or subtract) 1 to (or from) register
    store register into timing_count location

Now consider the following interleaving of instructions:

    load register from timing_count location      (producer)
    add 1 to register                             (producer)
    load register from timing_count location      (consumer)
    subtract 1 from register                      (consumer)
    store register into timing_count location     (producer)
    store register into timing_count location     (consumer)

If the value of timing_count is 10 at the beginning of this sequence, then the producer sets timing_count to 11 and the consumer then sets it to 9. In this case, both values are wrong; the correct result of this interleaving should leave timing_count at 10.

We can see, therefore, that certain interleavings of statements are correct and others leave corrupted data.
Our goal is to derive ideas about parallel execution that allow us to ensure correct manipulation of data all the time.

The Goal: Serializability

It is clear that, without proper precautions, data manipulation can only be guaranteed to be correct when processes are not concurrent. That is, resource corruption cannot occur when only one process at a time is using the resource. This is a dilemma, however, because, as we saw in Chapter 5, running processes one at a time does not make sense for the performance of a system. The goal, then, is to make shared access to resources look as if it is done sequentially. This property is called serializability.

We must invent ways for processes to cooperate and be coordinated that allow concurrency, yet make access to shared resources look like serial access. We start by identifying the code sections that access shared resources. We call these code sections critical sections. We must coordinate the access these critical sections have to shared resources so as to adhere to the following criteria:

• mutual exclusion: a guarantee that, when a process is executing inside a critical section, it is the only process in a critical section accessing the shared resource

• no starvation: if a process wants to enter its critical section and no other processes are in their critical sections, then it is eventually allowed to do so

• bounded waiting: there must be a limit to the number of times other processes can enter their critical sections between the time a process requests to enter its critical section and the time that request is granted.

In other words, we must ensure that our serialization mechanism is effective, with no starvation and no indefinite postponement.

While we consider ways to guarantee these criteria, we must keep reminding ourselves that the statements in a program that make up a process's critical section are actually instructions.
At some point, as we drill down to the machine language making up a program, we have to assume that something executes without interruption. (If any process can be interrupted at any time, we can say very little about guaranteeing the criteria above.) We assume that machine-language instructions execute without interruption – atomically. This means that the execution of one instruction completes before the execution of an instruction from another process begins.

Synchronization of Two Processes

Let us start our consideration of process synchronization with the case of only two processes executing concurrently. We branch out from this view in the next section, but restricting ourselves to two processes allows us to examine the issues more closely.

If we are only looking at two processes, a first solution to sharing a resource might be to take turns by using a turn variable that indicates whose turn it is. When the turn variable holds a process's ID, it is that process's turn.