Distributed Algorithms. Nancy A. Lynch (1993)
Without loss of generality, suppose that α and α' differ only in the initial value of process i. Now consider any extension of α in which i fails right at the beginning and never takes a step. The rest of the processes must eventually decide, by the termination condition. Since α is 0-valent, this decision must be 0. Now run the same extension after α', still having i fail at the beginning. Since i fails at the beginning, and α and α' are the same except for process i, the other processes behave in the same way, and decide 0 in this case as well. But that contradicts the assumption that α' is 1-valent.

Remark. Note, for later reference, that this lemma still holds under a more constrained assumption about A: that it experiences at most a single fault. Technically, the lemma holds under the same conditions as above, with the additional constraint in the termination condition that there is at most one failure.

Next, we define a decider to be an input-first execution that is bivalent, but every one-step extension of which is univalent.
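As a concrete illustration of these definitions, the following sketch (not from the text) classifies a finite execution by its valence and checks the decider property. The helpers reachable_decisions and one_step_extensions are hypothetical stand-ins for the algorithm's reachability analysis.

```python
def valence(execution, reachable_decisions):
    """Classify a finite execution as '0-valent', '1-valent', or 'bivalent'."""
    decisions = reachable_decisions(execution)   # subset of {0, 1} reachable from here
    if decisions == {0}:
        return "0-valent"
    if decisions == {1}:
        return "1-valent"
    return "bivalent"

def is_decider(execution, reachable_decisions, one_step_extensions):
    """A decider is bivalent, but every one-step extension of it is univalent."""
    if valence(execution, reachable_decisions) != "bivalent":
        return False
    return all(valence(ext, reachable_decisions) != "bivalent"
               for ext in one_step_extensions(execution))
```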
We have the following lemma.

Lemma 3 A has a decider.

Proof: Suppose not. Then any bivalent input-first execution has a bivalent one-step extension. Starting with a bivalent initial execution (as guaranteed by Lemma 2), we can then produce an infinite (input-first) execution all of whose prefixes are bivalent. The construction is extremely simple: at each stage we have a bivalent execution, and we extend it by one step to another bivalent execution. Since any bivalent input-first execution has a bivalent one-step extension, we know we can always do this. But this is a contradiction, because some process does not fail in the resulting infinite execution, and so that process is supposed to eventually reach a decision.

Remark. Note that this lemma does require the full resiliency: assuming resiliency to only a single fault is not sufficient to guarantee the existence of a decider.

Now we can complete the proof of the theorem as follows. Define extension(α, i) to be the execution obtained by extending execution α by a single step of process i.
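For concreteness, here is a minimal sketch (not from the text) of this extension notation, treating an execution as a sequence of steps. The helper next_step is hypothetical; it encodes the process-determinism assumption that a process has exactly one possible next step from a given execution.

```python
def extension(execution, i, next_step):
    """extension(execution, i): append the unique next step of process i."""
    return execution + (next_step(execution, i),)
```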
By Lemma 3, we obtain a decider α. Suppose that p_i's step leads to a 0-valent execution, and p_j's step leads to a 1-valent execution (see Figure 12.2). That is, extension(α, i) is 0-valent and extension(α, j) is 1-valent. Clearly, i ≠ j. (Note that by the "process determinism" assumption, each process can only lead to a unique one-step extension of α.)

Figure 12.2: α is a decider. If i takes a step, then the resulting configuration is 0-valent, and if j takes a step, the resulting configuration is 1-valent.

We now proceed by case analysis, and get a contradiction for each possibility.

Figure 12.3: The extension β does not include steps of i, and therefore must result in decision value 1 in both cases.

Case 1: p_i's step is a read step.
Consider extending extension(α, j) in such a way that all processes except p_i continue to take steps. Eventually, they must decide 1. Now take the same extension, beginning with the step of p_j, and run it after extension(α, i) (see Figure 12.3). Since p_i's step is just a read, it leaves no trace that would cause the other processes to behave any differently. So in this case also they all decide 1, contradicting the assumption that extension(α, i) is 0-valent.

Case 2: p_j's step is a read step. This case is symmetric to Case 1, and the same argument applies.

Case 3: p_i's and p_j's steps are both writes. We distinguish between the following subcases.

Case 3a: p_i and p_j write to different variables. In this case, the result of running either p_i and then p_j, or p_j and then p_i, after α, is exactly the same global state (see Figure 12.4).

Figure 12.4: If i and j write to different variables, then applying j after i results in the same configuration as applying i after j.

But then we have a common global state that can follow either a 0-valent or a 1-valent execution. If we run all the processes from this state, they must reach a decision, but either decision yields a contradiction. For instance, if a decision of 0 is reached, we have a decision of 0 in an execution extending a 1-valent prefix.
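The commutativity used in Case 3a, and its failure for writes to the same variable (taken up as Case 3b below), can be seen in this small sketch, which is not from the text; shared memory is modeled as a dictionary and the write steps are hypothetical.

```python
def write(var, value):
    """Return a step function that writes `value` to shared variable `var`."""
    def step(memory):
        updated = dict(memory)
        updated[var] = value
        return updated
    return step

mem = {"x": 0, "y": 0}

# Writes to different variables commute: both orders give the same global state.
wx, wy = write("x", 1), write("y", 2)
assert wy(wx(mem)) == wx(wy(mem)) == {"x": 1, "y": 2}

# Writes to the same variable do not commute: the later write wins.
w1, w2 = write("x", 1), write("x", 2)
assert w2(w1(mem)) == {"x": 2, "y": 0}
assert w1(w2(mem)) == {"x": 1, "y": 0}
```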
Case 3b: The writes are to the same variable. As in Figure 12.3, we can run all processes but p_i until they decide 1. On the other hand, we can apply that same extension after extension(α, i) and obtain the same decision, because the very first step of the extension overwrites the value written by p_i. We thus reach decision value 1 from a 0-valent state, a contradiction as in Case 1.

In summary, we have contradictions in all possible cases, and thus we conclude that no such algorithm A can exist.

12.1.2 Impossibility for a Single Stopping Fault

We can strengthen Theorem 1 to show the impossibility of even a 1-resilient consensus algorithm. The stronger version is due to Loui and Abu-Amara, following the Fischer, Lynch, and Paterson proof for message-passing systems.

Theorem 4 There is no algorithm that solves consensus under the conditions above in the presence of a single stopping fault.

Proof: As noted above, Lemma 2 holds also for the case of one stopping fault, and hence we are still guaranteed that there exists a bivalent initial execution.
However, our proof will now proceed using a different argument. Specifically, we now show the following lemma.

Lemma 5 For every bivalent input-first execution α, and every process i, there is an extension α' of α such that extension(α', i) is bivalent.

Before we turn to proving the lemma, we show how it implies the theorem. Lemma 5 allows us to construct the following bad execution, in which no process fails and yet no process ever decides: we start with the given bivalent initial execution and repeatedly extend it, including at least one step of process 1 in the first extension, then at least one step of process 2 in the second extension, and so on in round-robin order, while keeping all prefixes bivalent. The key step is the extension for any particular process, and that is done by applying the lemma.
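The round-robin construction can be sketched as follows (not from the text); the helper extend_keeping_bivalent(execution, i) is hypothetical and stands for the extension α' followed by one step of i whose existence Lemma 5 guarantees.

```python
def build_nondeciding_execution(initial_bivalent, processes,
                                extend_keeping_bivalent, rounds):
    """Extend a bivalent execution round-robin so every prefix stays bivalent.

    As `rounds` grows without bound, every process takes arbitrarily many steps
    and no process fails, yet bivalence of every prefix means no one decides.
    """
    execution = initial_bivalent
    for _ in range(rounds):
        for i in processes:   # at least one step of each process per round
            execution = extend_keeping_bivalent(execution, i)
    return execution
```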
In the resulting execution, each process takes infinitely many steps, yet no process ever reaches a decision. So it remains only to prove the lemma.

Proof (of Lemma 5): By contradiction: suppose the lemma is false. Let us spell out what this means: we assume that there exist some bivalent input-first execution α and some process i such that for every extension α' of α, extension(α', i) is univalent.
This implies, in particular, that extension(α, i) is univalent; suppose without loss of generality that it is 0-valent. Since α is bivalent, there is some extension α'' of α leading to a decision of 1, and so extension(α'', i) must be 1-valent. Consider applying a step of i at every point along the path from α to α''. At the beginning we get 0-valence, at the end 1-valence, and at each intermediate point we have univalence. Therefore, there must be two consecutive points along the path such that applying i at the first yields a 0-valent extension and applying i at the second yields a 1-valent extension. See Figure 12.5.

Figure 12.5: Extending with a step of i yields a 0-valent configuration at one point along the path, and a 1-valent configuration one step later.
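The existence of such a pair of consecutive points is just a discrete intermediate-value argument, sketched below (not from the text): given the sequence of valences along the path, which starts at 0 and ends at 1, some adjacent pair must switch from 0 to 1.

```python
def first_switch(valences):
    """Return an index k with valences[k] == 0 and valences[k + 1] == 1."""
    for k in range(len(valences) - 1):
        if valences[k] == 0 and valences[k + 1] == 1:
            return k
    raise ValueError("sequence does not start at 0 and end at 1")

assert first_switch([0, 0, 0, 1, 1]) == 2
```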
Suppose that j is the process that takes the intervening step. We claim that j ≠ i. This is true because if j = i, we would have one step of i leading to 0-valence while two steps of i lead to 1-valence, contradicting process determinism. So somewhere we must have the configuration depicted in Figure 12.6.

Figure 12.6: the point in the execution which we analyze by cases.

We again proceed by case analysis, and obtain a contradiction for each case.

Case 1: i's step is a read. In this case, after ji and after ij, the values of all shared variables and the states of all processes other than i are identical. So from either of the two resulting states, run all processes except i; they must eventually decide. But one of the two states is 0-valent and the other 1-valent, so either decision value gives a contradiction.

Case 2: j's step is a read. This case is similar to Case 1. After i and after ji, the values of all shared variables and the states of all processes other than j are identical, and therefore, when we let all processes except j run from both states, we obtain the same contradiction as above.

Case 3: Both steps are writes.

Case 3a: The writes are to different variables. In this case we get the commutative scenario depicted in Figure 12.4, which immediately implies a contradiction.

Figure 12.7: Case 3b. Process j does not take steps in β.

Case 3b: The writes are to the same variable. In this case, in a ji extension, the i step overwrites the j step (see Figure 12.7). Thus, after i and after ji, the values of all shared variables and the states of all processes other than j are identical, and we obtain a contradiction as before.

This completes the proof of Theorem 4.

12.2 Modeling and Modularity for Shared Memory Systems

In this section we return to the idea of atomic objects, and show how they might be used to describe modular system designs. An atomic object is a concurrent "version" of a corresponding serial specification. We want to prove a theorem that says, roughly speaking, that any algorithm written for the "instantaneous access" model we have been using can also run correctly in a variant of the model where all the shared variables are replaced by corresponding atomic objects.
This would allow us to decompose the task of solving a problem into two parts: first solve it assuming some kind of instantaneous-access memory, and then separately implement atomic objects of the corresponding kind. This could be done in a series of stages, if the problem being solved is itself that of building a (more powerful) atomic object.

To make all of this precise, we should be a little more formal about the model we are using. Recall the model we have been assuming: state machines with input, output, local computation, and shared memory actions, where the latter have transitions of the form ((s, m), (s', m')). So far, all accesses to shared memory have been by instantaneous actions.
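To make the transition form concrete, here is a small sketch (not from the text) of instantaneous shared-memory actions as functions from a (local state, memory value) pair to a new pair; the step functions and their state encodings are hypothetical.

```python
def read_step(s, m):
    # A read copies the shared value into the local state; memory is unchanged.
    return (("got", m), m)

def write_step(s, m):
    # A write stores a value taken from the local state; the local state is unchanged.
    (_, value_to_write) = s
    return (s, value_to_write)

# Each call below is one indivisible transition ((s, m), (s', m')).
print(read_step("idle", 7))        # (('got', 7), 7)
print(write_step(("put", 3), 7))   # (('put', 3), 3)
```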
Now we would like to define a slightly different version of the model, where instead of sharing memory via instantaneous actions, the processes communicate with separate objects. The communication is no longer instantaneous, but involves separate invocations and responses. We must also say what kind of fairness we want here.

It is helpful at this point to introduce a very general, simple, and precise formal model for asynchronous concurrent systems: I/O Automata. This model is useful as a general foundation for the various kinds of asynchronous systems we will be studying. By adding particular extra structure, it can be used to describe the "instantaneous access" shared memory systems we have been studying. It is also very natural for describing non-instantaneous-access shared memory systems, as well as message-passing systems, dataflow systems, and so on: virtually any asynchronous model.

12.2.1 The Basic Input/Output Automaton Model

In this section we give the basic definitions for the I/O Automaton model for asynchronous concurrent computation.
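As a rough preview of the structure these definitions will give, the following is a minimal sketch (not from the text) of an I/O automaton as a data structure: an action signature split into input, output, and internal actions, a set of start states, and a transition relation. All field and function names here are illustrative, not the model's official presentation.

```python
from dataclasses import dataclass, field
from typing import Any, FrozenSet, Set, Tuple

State = Any
Action = str

@dataclass
class IOAutomaton:
    input_actions: FrozenSet[Action]      # controlled by the environment
    output_actions: FrozenSet[Action]     # controlled by the automaton
    internal_actions: FrozenSet[Action]   # invisible to the environment
    start_states: FrozenSet[State]
    # Transition relation: a set of (state, action, new_state) triples.
    transitions: Set[Tuple[State, Action, State]] = field(default_factory=set)

    def enabled(self, state: State, action: Action) -> bool:
        """An action is enabled in a state if some transition performs it there."""
        return any(s == state and a == action for (s, a, _) in self.transitions)
```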