Concepts with Symbian OS, page 25
The following code shows how this might work:

   while (true)
   {
      while (turn != myID) ;
      // critical section
      turn = nextID;
      // whatever else needs to be done
   }

The process IDs of the two processes involved are stored in myID and nextID. This method ensures some of our criteria but not all of them. For only two processes, this method does indeed ensure mutual exclusion: the turn variable can only have one value at a time and changes only when the critical section is complete. There is also a bound on waiting: once a process is ready to enter its critical section, the other process can enter at most once before it. However, the no-starvation criterion is violated: if a process is ready to enter its critical section and the other process is not, the ready process can still enter only when it is its turn.
A process can starve another process simply by not taking its turn.

The method failed because we did not have enough information about the processes. If we knew whether a process was ready to enter its critical section, we might be able to fix this. The following code shows a way around this:

   while (true)
   {
      ready[myID] = true;
      while (ready[nextID]) ;
      // critical section
      ready[myID] = false;
      // whatever else needs to be done
   }

In this method, we establish two flags – an array of Booleans – that indicate whether each process is ready to enter its critical section. By setting its own flag and checking the other process's flag, a process can implement the idea of taking turns while avoiding starvation. This still does not satisfy all our criteria.
It does indeed satisfy mutual exclusion and goes some way towards meeting the starvation requirement. However, it does not completely work: since the ready flags are set and tested in separate statements, the two processes can interleave those statements before either enters its critical section. That is, we could have the following interleaving:

   ready[myID] = true;        // process 1
   ready[nextID] = true;      // process 2
   while (ready[nextID]) ;    // process 1
   while (ready[myID]) ;      // process 2

Now both processes are stuck in their waiting loops and each process starves.

The correct solution lies in combining the ideas of both of the previous methods: take turns, but only if the other process is not ready.
For this solution, we need both the turn variable and the ready array:

   while (true)
   {
      ready[myID] = true;
      turn = nextID;
      while (ready[nextID] && turn == nextID) ;
      // critical section
      ready[myID] = false;
      // whatever else needs to be done
   }

Synchronization of Multiple Processes

We have seen a two-process solution; now we need to generalize to a method that works in a multiple-process environment. There are many methods of multiple-process synchronization; this topic remains fodder for many research projects and doctoral dissertations. Examples include the colored ticket algorithm, clock algorithms, and many unnamed mutual-exclusion algorithms.
For further reading, [Lamport 1987] is an excellent paper that surveys synchronization methods.

We outline the bakery method – sometimes called the grocery-store or shop method. The bakery method is based on an algorithm used in bakeries and shops: customers organize themselves by taking a number and waiting until their number is called, and the customer with the lowest number is the next to be served. We simulate this type of behavior:

   do
   {
      choosing[pid] = true;
      number[pid] = max(number[0], ..., number[n-1]) + 1;
      choosing[pid] = false;
      for (i = 0; i < num_processes; i++)
      {
         while (choosing[i]) ;   // do nothing
         while ( (number[i] != 0) &&
                 ( (number[i], i) < (number[pid], pid) ) ) ;
      }
      // critical section
      number[pid] = 0;
      // whatever else needs to be done
   } while (true);

We start by picking a number, which we assume is greater than all the other numbers already chosen.
However, we cannot guarantee that multiple processes do not choose the same number. We then check all the processes, waiting for each to finish choosing and then waiting for each process holding a smaller ticket to finish with its critical section. The notation (number[i], i) is meant to convey a lexicographic comparison: one ticket is smaller than another if its number is less or, if the numbers are the same, if its process ID is less.
To see that this works, consider that if a process Pi is in its critical section and Pj (where j is not the same as i) is waiting with an already-selected number, then it must be true that (number[i], i) < (number[j], j). In this algorithm, we can see that mutual exclusion is adhered to by observing that when Pi is in its critical section, number[i] != 0. When a second process Pj wants to enter its critical section, then (number[i], i) < (number[j], j).
So the algorithm makes Pj wait until Pi is done and the statement number[pid] = 0 is executed. Notice that processes enter their critical sections as they need to, on a first-come, first-served basis. This is enough to show that no starvation can occur and that waiting is bounded.

6.2 Semaphores

As we saw in the previous sections, using algorithms to guarantee our three criteria – mutual exclusion, no starvation and bounded waiting – is complicated and clumsy. These algorithms also carry a fair amount of overhead. Their complexity arises because the atomicity of operations cannot be ensured; for example, interleaving of statements can cause problems with the implementation.
If the statements from one process could be executed atomically – without interleaving – we would be in much better shape.

We can avoid these complicated methods by using a tool called a semaphore. A semaphore is a data object – often a simple integer – that is supplied by the operating system and guaranteed to be manipulated atomically. Using a semaphore to make processes cooperate takes the place of the methods from Section 6.1 for protecting critical sections. Semaphores are accessed through two operations: wait() and signal().
These two operations can be thought of in the following generic fashion:

   wait(S)
   {
      while (S <= 0) ;
      S--;
   }

   signal(S)
   {
      S++;
   }

These methods require an initial value for the semaphore S, which is usually considered to be the number of processes that can access a particular resource. These methods also assume that operations on S are atomic and cannot be interrupted; this includes the arithmetic operations as well as the testing.

To see how we can use semaphores, consider this code sequence:

   while (true)
   {
      wait(CS);
      // critical section
      signal(CS);
      // whatever else needs to be done
   }

In this example, processes share a semaphore, CS, initialized to 1 (there can only be one process in a critical section at a time).
When a process wants to enter its critical section, it waits for CS to be positive (here, equal to 1), then decrements it, all in one uninterruptible action. When it is finished with its critical section, the process increments the semaphore in an atomic manner, thereby signaling to other processes that they may enter the critical section.

Consider the implementation of semaphores.
The methods we outlined in Section 6.1 were all based on busy waiting. Busy waiting occurs when a process must constantly check for something it is waiting on. This type of waiting is extremely inefficient because it consumes CPU time that another process could be using. Instead, semaphores are implemented more like devices: waiting for a semaphore moves the process from the running queue to the waiting queue.
This means that a process is blocked while it waits for a semaphore to become available, but it does not consume CPU cycles. It is the signal() operation, executed by some other process, that unblocks the waiting process and moves it back to the ready queue. The result is an efficient method of coordination that wastes no CPU time and abides by our synchronization criteria.

Binary semaphores are a special case of semaphore. They have one of two values: 0, indicating no availability, or 1, indicating availability. This gives the user a means of indicating 'taken' and 'available' – or 'locked' and 'unlocked', as we see in Section 6.3.

6.3 Locks, Monitors and Other Abstractions

For some people, using a semaphore is too 'close' to system operations. That is to say, semaphores are not a sufficiently abstract idea.
Other, more abstract, concepts have been invented to hide the use of semaphores. A lock is an abstraction that masks a semaphore. Locks are usually associated with data items, not with sections of code. A binary lock is usually applied to a data item and guarantees that all manipulations of that data item by the owner of the lock are done atomically, without interruption. Trying to lock a locked data item usually causes the lock operation to block until the data item is unlocked.
Locks can be of types other than binary and can sometimes allow certain operations to happen concurrently. For example, read locks are usually distinct from write locks. If a read lock is held on a data object, then only read operations – from any process – can proceed, and they may do so in parallel. Write locks guarantee mutual exclusion: the owner of a write lock is guaranteed atomicity with respect to the locked data item.

Locks can be implemented by semaphores.
Binary semaphores are used, with the lock operation implemented by wait() and the unlock operation implemented by signal().

A critical region is a programming-language construct designed to ensure access to critical sections by focusing on shared variables. Critical regions require two types of special syntax, one for variables and one for critical sections of code. For example, if we were to design a concurrent queue, we might declare that queue as a concurrent C++ struct as follows:

   struct concurrentQueue
   {
      int queue[];
      int head, tail;
   };

   time_buffer: shared struct concurrentQueue;

Notice the last line, which declares the time_buffer variable to be a shared concurrent queue. We can specify a consumer that uses this shared queue as in the following code:

   region time_buffer when (timing_count > 0)
   {
      timing_data = time_buffer.queue[time_out];
      time_out = (time_out + 1) % buffer_size;
      timing_count--;
      display(timing_data);
   }

The code's syntax incorporates a guard condition – in our case, timing_count > 0 – and focuses the critical section of code on the shared resource.