This is enough to show that no starvation can occur and waiting is bounded.

6.2 Semaphores

As we saw in the previous sections, using algorithms to guarantee our three criteria – mutual exclusion, no starvation and bounded waiting – is complicated and clumsy. These algorithms also carry some overhead. Their complexity arises because the atomicity of operations cannot be ensured; for example, interleaving of statements can cause problems with the implementation. If the statements from one process could be executed atomically – without interleaving – we would be in much better shape.

We can avoid these cumbersome methods by using a tool called a semaphore.
A semaphore is a data object – often a simple integer – that is supplied by the operating system and guaranteed to be manipulated atomically. Using a semaphore to make processes cooperate takes the place of using the methods from Section 6.1 for critical sections.

Semaphores are accessed through two operations: wait() and signal(). These two operations can be thought of in the following generic fashion:

    wait(S)
    {
        while (S <= 0) ;
        S--;
    }

    signal(S)
    {
        S++;
    }

These methods require an initial value for the semaphore S, which is usually considered to be the number of processes that can access a particular resource. These methods also assume that operations on S are atomic and cannot be interrupted. This includes arithmetic operations as well as testing.
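To make the initial value concrete, consider a resource pool rather than a critical section. The sketch below is purely illustrative – the semaphore name printers, its declaration syntax and the two-printer scenario are assumptions, not taken from the text – but it shows a counting semaphore at work:

    semaphore printers = 2;    // two identical printers, so at most
                               // two processes may print at once

    wait(printers);            // acquire a printer; blocks if both are busy
    // ... send the job to whichever printer is free ...
    signal(printers);          // release the printer for another process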
To see how we can use semaphores, consider this code sequence:

    while (true)
    {
        wait(CS);
        // critical section
        signal(CS);
        // whatever else needs to be done
    }

In this example, processes share a semaphore, CS, initialized to 1 (there can only be one process in a critical section at a time). When a process wants to enter its critical section, it waits for CS to be equal to 1, then decrements it, all in one uninterruptible action. When it is finished with its critical section, the process increments the semaphore in an atomic manner, thereby signaling to other processes that they may enter the critical section.

Consider the implementation of semaphores.
The methods we outlined in Section 6.1 were all based on busy waiting. Busy waiting occurs when a process is waiting for something that it needs to check constantly. This type of waiting is extremely inefficient, because it consumes CPU time that another process could otherwise be using. Instead, semaphores are implemented more like devices: waiting for a semaphore causes movement of the process from the running queue to the waiting queue. This means that a process is blocked while it waits for a semaphore to be available, but it does not consume CPU cycles. It is the use of the signal() operation by some other process that unblocks the waiting process and moves it to the ready queue again.
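As a rough sketch of how such a blocking semaphore might be implemented inside the operating system – the field names value and waiting, the helpers enqueue(), dequeue() and currentProcess, and the block() and wakeup() calls are all illustrative assumptions, and each operation is presumed to run atomically – wait() and signal() could look something like this:

    wait(S)
    {
        S.value--;                            // a negative value counts the waiters
        if (S.value < 0)
        {
            enqueue(S.waiting, currentProcess);
            block();                          // running queue -> waiting queue
        }
    }

    signal(S)
    {
        S.value++;
        if (S.value <= 0)
        {
            P = dequeue(S.waiting);           // pick one blocked process
            wakeup(P);                        // waiting queue -> ready queue
        }
    }

Note that, unlike the busy-waiting version, no CPU time is spent in a loop; a waiting process simply sleeps until another process signals.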
The result is an efficient method of coordination that uses no CPU time while waiting and abides by our synchronization criteria.

Binary semaphores are a special case of semaphore. They have one of two values: 0, indicating no availability, or 1, indicating availability. This gives the user a means of indicating ‘taken’ and ‘available’ – or ‘lock’ and ‘unlock’ as we see in Section 6.3.

6.3 Locks, Monitors and Other Abstractions

For some people, using a semaphore is too ‘close’ to system operations. That is to say, semaphores are not a sufficiently abstract idea. Other, more abstract, concepts have been invented to hide the use of semaphores.

A lock is an abstraction that masks a semaphore. Locks are usually associated with data items, not sections of code.
The operation of binary locking is usually done on a data item and guarantees that all manipulations on that data item by the owner of the lock are done atomically, without interruption. Trying to lock a locked data item usually causes the lock operation to block until the data item is unlocked. Locks can be of other types than binary and can sometimes allow certain operations to happen concurrently. For example, read locks are usually distinct from write locks. If a read lock is held on a data object, then only read operations – from any process – can proceed in parallel.
Write locks guarantee mutual exclusion: the owner of a write lock is guaranteed atomicity with respect to the locked data item.

Locks can be implemented by semaphores. Binary semaphores are used, with the lock operation implemented by wait() and the unlock operation implemented with signal().
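To illustrate both ideas at once – locks built from semaphores, and read locks that admit concurrent readers while write locks exclude everyone else – here is a sketch of the simplest reader–writer lock. The names readcount, mutex and writeLock are assumptions made for the sake of the example, not from the text:

    int readcount = 0;           // readers currently holding the read lock
    semaphore mutex = 1;         // protects readcount
    semaphore writeLock = 1;     // held by one writer, or by the group of readers

    read_lock()
    {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(writeLock);     // the first reader locks out writers
        signal(mutex);
    }

    read_unlock()
    {
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(writeLock);   // the last reader lets writers in again
        signal(mutex);
    }

    write_lock()   { wait(writeLock); }     // lock is just wait() ...
    write_unlock() { signal(writeLock); }   // ... and unlock is just signal()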
A critical region is a programming language construct designed to ensure access to critical sections by focusing on shared variables. Critical regions require two types of special syntax, for variables and for critical sections of code. For example, if we were to design a concurrent queue, we might declare that queue as a concurrent C++ struct as follows:¹

    struct concurrentQueue
    {
        int queue[];
        int head, tail;
    };

    time_buffer: shared struct concurrentQueue;

Notice the last line, which declares the time_buffer variable to be a shared concurrent queue.

¹ This and other examples of code are written in a fictitious enhancement to C++. There is no ‘shared’ keyword, for example, but I explain the syntax and it should be clear what the code is used for.
We can specify a consumer that uses this shared queue as in the following code:

    region time_buffer when (timing_count > 0)
    {
        timing_data = time_buffer[time_out];
        time_out = (time_out + 1) % buffer_size;
        timing_count--;
        display(timing_data);
    }

The code’s syntax incorporates a guard condition – in our case this is timing_count > 0 – and focuses the critical section of code on the shared resource. The intent of this kind of syntax is to guard against programmer errors associated with manipulating semaphores. This abstraction takes away the semaphore manipulation and focuses on syntax.
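The text shows only the consumer; purely as an illustration, the matching producer might look like this in the same fictitious syntax, assuming a hypothetical time_in index and a read_timing() routine that produces the data:

    region time_buffer when (timing_count < buffer_size)
    {
        time_buffer[time_in] = read_timing();   // hypothetical data source
        time_in = (time_in + 1) % buffer_size;
        timing_count++;
    }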
Another form of abstraction that hides semaphores is a monitor. A monitor is a programming language construct similar to a class or other object-oriented abstraction. In object-oriented languages, classes are an encapsulation of data definitions and the procedures that operate on those definitions. A monitor is an extended definition of a class where any use of a monitor’s operations guarantees atomicity with respect to the data items defined in the monitor. Consider the following specification of a monitor:

    public monitor concurrentQueue
    {
        int queue[];
        int head, tail;

        concurrentQueue() { /* constructor code */ }
        void enqueue(int aQueueItem) { ... }
        int dequeue() { ... }
        int length() { ... }
    }
This example is certainly not that mysterious; you could replace the keyword ‘monitor’ with the keyword ‘class’ and have a class specification written in Java.

A monitor is implemented with semaphores and critical sections. Use of a monitor’s methods implies locking the data items defined in the monitor’s declaration first, then making the method call. The data items need to be unlocked just before the method returns. And locking, as we stated previously, can be implemented with semaphores. So again we have semaphores as the basis for higher-level abstractions.
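To make that concrete, here is a sketch – purely illustrative, assuming a binary semaphore monitorLock initialized to 1 and a buffer size MAX – of how a compiler might expand the monitor’s enqueue() method into plain semaphore operations:

    void enqueue(int aQueueItem)
    {
        wait(monitorLock);              // lock the monitor's data items on entry
        queue[tail] = aQueueItem;
        tail = (tail + 1) % MAX;
        signal(monitorLock);            // unlock just before the method returns
    }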
The concepts we have just discussed – such as critical regions and monitors – were developed as experimental concepts by researchers trying to understand how best to program with concurrent processes in an operating system.
Some of the first researchers in this field were Per Brinch Hansen and C.A.R. Hoare. These scientists invented new languages or augmented existing ones so that they could experiment with how these new constructs worked. Languages such as Concurrent Pascal were existing languages (Pascal) that were augmented, while Mesa and CSP (and later, Occam) were new languages influenced by their work.

6.4 The Dining Philosophers: A Classic Problem

A classic problem in concurrency has to do with five philosophers sitting around a table. These five philosophers have nothing to do except eat and think.
They can think on their own, but to eat, they need a shared bowl of rice and two chopsticks. The table is laid out so that each of the five philosophers has a single chopstick to her left and to her right, each shared with the neighboring philosopher (see Figure 6.1).

Figure 6.1  The philosophers’ dining table

The problem is that a philosopher cannot eat unless she has two chopsticks. This has several implications.
If a philosopher is eating, then her neighbors cannot eat. If a philosopher picks up one chopstick, but her neighbor has also picked up one, then she cannot eat. The goal in this problem is to allow every philosopher to go through eat–think cycles without starvation.

The obvious solution is shown in the following code:

    while (true)
    {
        think();
        take_chopstick(i);
        take_chopstick((i+1) % 5);
        eat();
        drop_chopstick(i);
        drop_chopstick((i+1) % 5);
    }

The call to take_chopstick(i) waits until the specified chopstick is available and then takes it.
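One natural way to realize take_chopstick() and drop_chopstick() – shown here only as a sketch, assuming an array chopstick[5] of binary semaphores, each initialized to 1 – is to treat every chopstick as a semaphore:

    semaphore chopstick[5] = { 1, 1, 1, 1, 1 };   // one semaphore per chopstick

    take_chopstick(int i)
    {
        wait(chopstick[i]);      // blocks until chopstick i is free, then takes it
    }

    drop_chopstick(int i)
    {
        signal(chopstick[i]);    // puts chopstick i back on the table
    }

This provides the waiting behavior just described for a single chopstick; how well the ‘obvious solution’ built on top of it behaves overall is a separate question.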