Distributed Algorithms. Nancy A. Lynch (1993)
If it gets the second fork immediately, then within an additional time s, $p_i$ goes critical. If not, then the other process has it, and because of the arrangement, since it is $p_i$'s second fork, it must be its neighbor's second fork also. So within time at most s + c + 2s, the neighbor gets to C, leaves, and returns both forks. Then within additional time s, $p_i$ will retest the second fork and succeed in getting it. Then additional time s will take $p_i$ to C. This analysis says that

$$S(n) \le s + \max\{s,\; s + c + 2s + s + s\} = 6s + c. \tag{16.5}$$

Putting Eqs. (16.4) and (16.5) together, we conclude that

$$T(n) \le 4s + c + 2(6s + c) = 16s + 3c.$$
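To make the accounting in Eq. (16.5) explicit, here is a term-by-term reading of the argument above (the annotation is mine, not the notes' notation):

$$S(n) \le \underbrace{s}_{\text{test second fork}} + \max\Bigl\{\underbrace{s}_{\text{fork free: enter } C},\;\; \underbrace{s + c + 2s}_{\text{neighbor reaches } C,\ \text{exits, returns forks}} + \underbrace{s}_{\text{retest fork}} + \underbrace{s}_{\text{enter } C}\Bigr\} = 6s + c.$$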
Note that the bound is independent of n. This is a very nice property for a distributed algorithm to have. It expresses a strong kind of locality, or "network transparency".

Extension to More General Resource Problems

Due to the nice features of the Left-Right algorithm, it is natural to ask whether we can extend this alternating left-right idea to more general resource allocation problems. In this section we show an extension that is not completely general, but rather applies only to conjunctive problems, where the resource requirement of a process is exactly one particular set.
(This case does not cover k-exclusion, for example, which is a disjunctive problem, with more than one alternative allowed.)

Again, for each resource we have an associated variable, shared by all processes that use that resource. The variable contains a FIFO queue to record who's waiting (as before).
As before, the processes will wait for the resources one at a time. To avoid deadlock, we arrange the resources in some global total ordering, and let each process seek its resources in order according to this ordering, smallest to largest. It is easy to see that this prevents deadlock: if process i waits for process j, then j holds the resource i is seeking, and any resource j in turn waits for is strictly larger; resources thus strictly increase along a waiting chain, and hence the process holding the largest-numbered resource can always advance. The FIFO nature of the queues also prevents lockout.
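To make the discipline concrete, here is a minimal sketch in Python (the names Resource and run_with_resources are mine, and a Lock merely stands in for the per-resource queue variable; none of this code is from the notes):

import threading

class Resource:
    """One resource; the Lock stands in for its FIFO queue variable.
    (threading.Lock does not guarantee FIFO wakeup; a faithful version
    would keep an explicit queue of waiting process ids.)"""
    def __init__(self, rid):
        self.rid = rid                    # position in the global total order
        self.lock = threading.Lock()

    def __lt__(self, other):
        return self.rid < other.rid

def run_with_resources(needed, critical_section):
    """Wait for resources one at a time, smallest to largest, then go critical."""
    for r in sorted(needed):              # global order: prevents deadlock
        r.lock.acquire()
    try:
        critical_section()
    finally:
        for r in needed:                  # release order is irrelevant
            r.lock.release()

# Two philosophers sharing fork 1: both order requests by fork number,
# so no cyclic wait can form.
forks = [Resource(i) for i in range(3)]
threads = [threading.Thread(target=run_with_resources,
                            args=([forks[i], forks[i + 1]], lambda: None))
           for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()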
However, the time performance of this strategy is not very good, in general. There is no limit on the length of waiting chains (except for the number of processes), and so the time bounds can be linearly dependent on the number of nodes in the network. We saw such an example above, for rings. So we must refine the ordering scheme by giving a way to choose a good total ordering, i.e., one that doesn't permit long waiting chains.
We use the following strategy for selecting a good total order. Define the resource problem graph, where the nodes represent the resources, and there is an edge from one node to another if some user needs both corresponding resources. See Figure 16.4 for an example of Dining Philosophers with 6 nodes.

Figure 16.4: Resource graph for six dining philosophers.

Now color the nodes of the graph so that no two adjacent nodes have the same color. We can try to minimize the number of colors: finding a minimal coloring is NP-hard, but a greedy algorithm is guaranteed to color the graph in no more than m + 1 colors.
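A minimal sketch of the greedy coloring step (plain Python; the graph representation and function name are mine, not the notes'):

def greedy_color(adjacency):
    """Color nodes so that no two adjacent nodes share a color.

    adjacency maps each node to the set of its neighbors; returns a dict
    node -> color, using at most (max degree + 1) colors.
    """
    color = {}
    for node in adjacency:                     # any fixed scan order works
        used = {color[n] for n in adjacency[node] if n in color}
        c = 0
        while c in used:                       # smallest color unused by neighbors
            c += 1
        color[node] = c
    return color

# The six-fork ring of Figure 16.4: fork i conflicts with forks i-1 and i+1.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(greedy_color(ring))   # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}: an even
                            # ring is 2-colorable, and this scan order finds it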
Now totally order the colors. This only partially orders the resources, but it does totally order the resources needed by any single process. See Figure 16.5 for an example.

Figure 16.5: Coloring of the resource graph of Figure 16.4.

Now each process accesses the forks in increasing order according to the color ordering. (Equivalently, take any topological ordering of the given partial ordering, and let all processes follow that total ordering in the earlier strategy.) Carrying on with our example, this boils down to the alternating left-right strategy.
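Concretely (continuing the hypothetical sketch, using the 2-coloring printed above):

colors = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}   # fork -> color, from greedy_color

def acquisition_order(forks, colors):
    """Sort a process's own resources by color; the fork id is only an
    arbitrary (but globally consistent) tie-break within a color class."""
    return sorted(forks, key=lambda f: (colors[f], f))

print(acquisition_order([0, 1], colors))   # p0: [0, 1], color-0 fork first
print(acquisition_order([1, 2], colors))   # p1: [2, 1], color-0 fork first

# Each philosopher grabs its color-0 fork first, so around the ring the
# processes reach left and right alternately: the left-right strategy.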
It is easy to see that the length of waiting chains is bounded by the number of colors: a process can be waiting for a resource of minimum color, which another process has; that process could only be waiting for a resource of "larger" color, and so on.

The algorithm has the nice property that the worst-case time is not directly dependent on the total number of processes and resources. Rather, let s be an upper bound on step time, and c an upper bound on critical section time, as before. Let k be the number of colors, and let m be the maximal number of users for a single resource. We show that the worst-case time bound is

$$O(m^k c + k m^k s).$$

That is, time depends only on "local" parameters. (We can make a reasonable case that in a realistic distributed network, these parameters are actually local, not dependent on the size of the network.) Note that this is a big bound: it is exponential in k, so the complexity is not simply proportional to the length of the waiting chain, but is actually more complicated. There are some complex interleavings that get close to this bound.

To prove the upper bound, we use the same strategy as before. Define $T_{i,j}$, for $1 \le i \le k$ indicating colors and $1 \le j \le m$ indicating positions on a queue, to be the worst-case time from when a process reaches position j on any queue of a resource with color i, until it reaches C.
Then set up inequalities as for the Left-Right algorithm. The base case is when the process already has the last resource:

$$T_{k,1} \le s. \tag{16.6}$$

Also, when a process is first on any queue, within time s it will be at worst at position m on the next higher queue:

$$T_{i,1} \le s + T_{i+1,m} \quad \text{for all } i < k. \tag{16.7}$$

And lastly, when a process is in position j > 1 on queue i, it only has to wait for its predecessor on the queue to get the resource and give it up, and then it gets the resource:

$$T_{i,j} \le T_{i,j-1} + c + ks + T_{i,1} \quad \text{for all } i \text{ and all } j > 1. \tag{16.8}$$

The claimed bound follows from solving for $T_{1,m}$ (after adding an extra s).
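To see how the exponential arises, one can evaluate the recurrences (16.6)-(16.8) directly; the following sketch (my own code, not part of the notes) does so:

def time_bound(k, m, s, c):
    """Evaluate (16.6)-(16.8) bottom-up and return s + T[1][m], the
    worst-case time from entering the trying region until reaching C."""
    T = [[0.0] * (m + 1) for _ in range(k + 2)]   # 1-based in i and j
    for i in range(k, 0, -1):                     # highest color first
        T[i][1] = s if i == k else s + T[i + 1][m]        # (16.6)/(16.7)
        for j in range(2, m + 1):
            T[i][j] = T[i][j - 1] + c + k * s + T[i][1]   # (16.8)
    return s + T[1][m]                            # the "extra s"

for k in range(1, 6):                 # with s = c = 1 and m = 2 the value
    print(k, time_bound(k, 2, 1, 1))  # grows geometrically in k, matching
                                      # the O(m^k c + k m^k s) bound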
Cutting Down the Time Bound

This result shows that a very general class of resource allocation problems can have time independent of global network parameters. But it would be nice to cut down the time bound from exponential to linear in the number of colors. There are some preliminary results by Awerbuch-Saks and Choy-Singh, but there is probably still more work to be done.

16.2 Safe and Regular Registers

In this section we switch from the very powerful read-modify-write shared memory model to some much weaker variants. We start with some definitions.

16.2.1 Definitions

We now focus on read-write objects (registers) again, and consider generalizations of the notion of an atomic read-write register: safe and regular registers.
These variants are weaker than atomic registers, but can still be used to solve interesting problems. They fit our general object definition; in particular, the inputs are $read_{i,x}$ and $write_{i,x}(v)$, with outputs as before. The registers are modeled as I/O automata as in Figure 16.6.

Figure 16.6: Concurrent read/write register X, with actions write(v), write-respond, read, and read-respond(v). Subscripts mentioned in the text are omitted from the diagram.

As before, the index i on the read and write operations corresponds to a particular line on which a process can access the register.
Also as before, we assume that operations on each line are invoked sequentially, i.e., no new operations are invoked on a line until all previous operations invoked on that line have returned. But otherwise, operations can overlap.

Recall the definition of atomic registers. An atomic register has three properties: it preserves well-formedness, it satisfies the atomicity condition, and fairness implies that operations complete. Also, depending on the architecture of the implementation, we can have a wait-free condition. Now we generalize the second (atomicity) condition, without changing the other conditions. This still fits the general object spec model, so the modularity results hold; this allows us to build objects hierarchically, even preserving the wait-free property.

For these weaker conditions, we only consider single-writer registers.
This means that writes never overlap one another. We require in all cases that any read that doesn't overlap a write gets the value of the closest preceding write (or the initial value if none). This is what would happen in an atomic register. The definitions differ from atomic registers in allowing for more possibilities when a read does overlap a write.

The weakest possibility is a safe register, in which a read that overlaps a write can return an arbitrary value. The other possibility is a regular register, which falls somewhere in between safe and atomic registers.
As above, a read operation on a regular register returns the correct value if it does not overlap any write operations. However, if a read overlaps one or more writes, it has to return the value of the register either before or after any of the writes it overlaps.

Figure 16.7: Reads overlapping writes.

For example, consider the scenario in Figure 16.7. The set of feasible writes for R1 is $\{W_0, W_1, W_2, W_3, W_4\}$, because R1 is allowed to return a value written by any of these write operations. Similarly, the set of feasible writes for R2 is $\{W_1, W_2, W_3\}$.
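The feasible-set rule is easy to state operationally. Here is a small sketch (Python; the interval representation and the sample timings are my reconstruction of Figure 16.7, not taken from the text):

def feasible_writes(read, writes):
    """Writes whose value a regular register may return for a given read.

    read is a (start, end) interval; writes is a list of (name, start, end),
    non-overlapping and in order (single writer). Feasible = every write
    overlapping the read, plus the closest write preceding it.
    """
    r_start, r_end = read
    overlapping, preceding = [], None
    for name, w_start, w_end in writes:
        if w_end <= r_start:
            preceding = name              # latest write so far preceding the read
        elif w_start < r_end:
            overlapping.append(name)      # overlaps the read
    return ([preceding] if preceding else []) + overlapping

writes = [("W0", 0, 1), ("W1", 2, 3), ("W2", 4, 5), ("W3", 6, 7), ("W4", 8, 9)]
print(feasible_writes((2.5, 8.5), writes))  # R1: ['W0','W1','W2','W3','W4']
print(feasible_writes((3.5, 6.5), writes))  # R2: ['W1','W2','W3']

Note that with these timings R1 may legitimately return the value of W4 while R2, which starts later, returns the older value of W1; this freedom is exactly what allows the inverted reads discussed next.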
The important difference between regular and atomic registers is that for regular registers, consecutive reads can return values in inverted order.

16.2.2 Implementation Relationships for Registers

Suppose we only have safe registers because that's all our hardware provides, and we would like to have atomic registers. We shall show that in fact, we can start with safe, single-reader single-writer, binary registers, and build all the way up to atomic, multi-reader multi-writer, k-ary registers for arbitrary k.