Distributed Algorithms. Nancy A. Lynch (1993)
In this event, a new INITIATE broadcast emanates from the new core edge, and the newly-formed fragment begins once more to look for its overall MWOE. If an absorbing CONNECT occurs, from a lower-level to a higher-level fragment, then the node in the high-level fragment knows whether it has found its own MWOE, and thus whether to send back an INITIATE message to be broadcast in the lower-level fragment.

19.2.7 Complexity Analysis

Message complexity.
The analysis is similar to the synchronous case, giving the same bound of O(n log n + E). More specifically, in order to analyze the message complexity of the GHS algorithm, we apportion the messages into two different sets, resulting separately (as we will see) in the O(n log n) term and the O(E) term.

The O(E) term arises from the fact that each edge in the graph must be tested at least once: in particular, TEST messages and associated REJECT messages can occur at most once for each edge. This follows from the observation that once a REJECT message has been sent over an edge, that edge will never be tested again.

All other messages sent in the course of the algorithm (the TEST-ACCEPT pairs that go with the acceptances of edges, the INITIATE-REPORT broadcast-convergecast messages, and the CHANGEROOT-CONNECT messages that occur when fragments combine) can be considered as part of the overall process of finding the MWOE for a given fragment.
In performing this task for a fragment, there will be at most one of these messages associated with each node (each node receives at most one INITIATE and one ACCEPT; each sends at most one successful TEST, one REPORT, and one of either CHANGEROOT or CONNECT). Thus, the number of messages sent within a fragment in finding the MWOE is O(m), where m is the number of nodes in the fragment.

The total number of messages sent in the MWOE-finding process, therefore, is proportional to

    sum_{all fragments F} (number of nodes in F),

which equals

    sum_{all level numbers L} sum_{all fragments F of level L} (number of nodes in F).

Now, the total number of nodes in the inner sum at each level is at most n, since each node appears in at most one fragment at a given level number L.
And since the biggest possible value of L is log n, the sum above is bounded by

    sum_{L=1}^{log n} n = O(n log n).

Thus, the overall message complexity of the algorithm is O(E + n log n).

Time Complexity: It can be shown by induction on l that the time for all the nodes to reach level at least l is at most O(ln). Hence, the time complexity of the GHS algorithm is O(n log n).

19.2.8 Proving Correctness for the GHS Algorithm

A good deal of interesting work remains to be done in the field of proving correctness for communication algorithms like the Gallager-Humblet-Spira algorithm. The level of complexity of this code makes direct invariant assertion proofs quite difficult to do, at least at the detailed level of the code.
Note that most of the discussion in this lecture has been at the higher level of graphs, fragments, levels, MWOEs, etc. So it seems that a proof ought to take advantage of this high-level structure.

One promising approach is to apply invariant-assertion and other techniques to prove correctness for a high-level description of the algorithm, and then prove independently that the code in fact correctly simulates the high-level description (see [WelchLL88]).

A proof can be formalized by describing the high-level algorithm as an I/O automaton (in which the state consists of fragments, and actions include merge and absorb operations), describing the low-level code as another I/O automaton, and then proving that there is a simulation mapping between the two automata.

6.852J/18.437J Distributed Algorithms
Lecturer: Nancy Lynch
November 19, 1992

Lecture 20

20.1 Minimum Spanning Tree (cont.)

20.1.1 Simpler "Synchronous" Strategy

Note that in the asynchronous algorithm we have complications we did not encounter in the synchronous algorithm (e.g., absorbing vs.
merging, waiting for fragments to catch up). Another idea to cope with these difficulties is to try to simulate the synchronous algorithm more closely, somehow synchronizing the levels.

More specifically, we can design an algorithm based on combining fragments, where each fragment has an associated level number, as follows. Each individual node starts as a single-node fragment at level 0. This time, fragments of level l can only be combined into fragments of level l + 1, as in the synchronous algorithm.
Each node keeps a local-level variable, which is the latest level it knows of the fragment it belongs to; initially every node's local-level is 0, and when it finds out about its membership in a new fragment of level l, it raises its local-level to l. The basic idea is that a node at level l tries not to participate in the algorithm for finding its level-l fragment's MWOE until it knows that all the nodes in the system have reached level at least l. Implementing this test directly requires global synchronization. (This is barrier synchronization.) But in fact, a weaker kind of local synchronization suffices: each node only tests that all of its neighbors have local-level at least l. This requires each node to send a message on all of its incident edges every time its level is incremented. This algorithm has the same time performance as before, i.e., O(n log n).
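The local synchronization rule above can be sketched in a few lines. This is only an illustrative sketch: the class, method names, and the `("LEVEL", ...)` message format are assumptions for exposition, not part of the lecture's pseudocode.

```python
# Sketch of the local synchronization rule: a node defers its MWOE search
# at level l until every neighbor is known to have local-level >= l.
# Names and the message format are illustrative assumptions.

class Node:
    def __init__(self, node_id, neighbors):
        self.node_id = node_id
        self.local_level = 0
        # Last level heard from each neighbor (every node starts at level 0).
        self.neighbor_level = {v: 0 for v in neighbors}
        self.outbox = []  # (destination, message) pairs handed to the channels

    def raise_level(self, l):
        """Called when the node learns it belongs to a level-l fragment."""
        if l > self.local_level:
            self.local_level = l
            # Announce the new level on every incident edge; these extra
            # announcements are the source of the O(E log n) message cost.
            for v in self.neighbor_level:
                self.outbox.append((v, ("LEVEL", self.node_id, l)))

    def receive_level(self, sender, l):
        """Record a neighbor's level announcement."""
        self.neighbor_level[sender] = max(self.neighbor_level[sender], l)

    def may_search_mwoe(self):
        """Participate in the MWOE search only once all neighbors have
        reached at least this node's current level."""
        return all(lv >= self.local_level
                   for lv in self.neighbor_level.values())
```

Note that the test is purely local: a node never waits on anything farther away than its immediate neighbors, which is what makes this cheaper than a global barrier.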
The message complexity gets worse, however: it is now O(E log n).

20.2 Synchronizers

The idea discussed above suggests a general strategy for running a synchronous algorithm in an asynchronous network. This will only apply to a fault-free algorithm (i.e., no failed processes or messages being lost, duplicated, etc.), since, as we shall soon see, the fault-tolerance capability is very different in synchronous and asynchronous systems. We are working from Awerbuch, with some formalization and proofs done recently by Devarajan (a student).

The strategy works for general synchronous algorithms (without faults); the idea is to synchronize at each round, rather than every so often as in the MST algorithm sketched above. Synchronizing only every so often depends on special properties of the algorithm, e.g., that it works correctly if it is allowed to exhibit arbitrary interleavings of node behavior between synchronization points.
We shall only describe how to synchronize at every round.

20.2.1 Synchronous Model

Recall that in the synchronous model, each node had message-generation and transition functions. It is convenient to reformulate each node as an IOA with output actions synch-send and input actions synch-receive. The node must alternate these actions, starting with synch-send. At rounds at which the synchronous algorithm doesn't send anything, we assume that a synch-send() action occurs. We call such a node automaton a client automaton. The client automaton may also have other external actions, interacting with the outside world.

The rest of the system can be described as a synchronous message system (called "synch-sys" for short in the sequel), which at each round collects all the messages that are sent at that round in synch-send actions, and then delivers them to all the client automata in synch-receive actions.
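The round structure just described can be sketched as follows. The class and function names are illustrative assumptions; a real client automaton would of course also have state and other external actions.

```python
# Sketch of the client/synch-sys round structure: each client alternates
# synch-send and synch-receive, starting with synch-send; the synch-sys
# collects everything sent in a round and only then delivers it.
# All names here are illustrative assumptions.

class ClientAutomaton:
    def __init__(self, node_id):
        self.node_id = node_id
        self.expecting = "send"  # alternation starts with synch-send

    def synch_send(self):
        """Output action: return {neighbor_id: message} for this round.
        An empty dict models a round with nothing to send."""
        assert self.expecting == "send"
        self.expecting = "receive"
        return {}

    def synch_receive(self, messages):
        """Input action: accept this round's delivered messages."""
        assert self.expecting == "receive"
        self.expecting = "send"

def run_round(clients):
    """One round of the synch-sys: first collect every synch-send, then
    deliver to every client -- a global synchronization point per round."""
    sent = {c.node_id: c.synch_send() for c in clients}
    for c in clients:
        inbox = {src: msgs[c.node_id]
                 for src, msgs in sent.items() if c.node_id in msgs}
        c.synch_receive(inbox)
```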
In particular, this system synchronizes globally, after all the synch-send events and before all the synch-receive events of each round.

Note that this composition of IOAs is equivalent to the more tightly coupled synchronous model described earlier, where the message system was not modeled explicitly.

We will describe how to "simulate" the synch-sys using an asynchronous network, so that the client automata running at the network nodes cannot tell the difference between running in the simulation system and running in a real synchronous system.

20.2.2 High-Level Implementation Using a Synchronizer

The basic idea discussed by Awerbuch is to separate out the processing of the actual messages from the synchronization.
He splits up the implementation into several pieces: a front-end for each node, able to communicate with the front-ends of neighbors over special channels, and a pure synchronizer S. The job of each front-end is to process the messages received from the local client in synch-send events. At a particular round, the front-end collects all the messages that are to be sent to neighbors (note that a node might send to some neighbors and not to others). It actually sends those messages to the neighbors, along the special channels.
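A minimal sketch of the front-end's send side, under stated assumptions: the names, the `("MSG", ...)` tagging, and the channels-as-lists representation are all illustrative, and the synchronizer S and acknowledgement handling are omitted entirely.

```python
# Sketch of a front-end handling one round's synch-send: it forwards each
# of the client's messages over the special channel to the corresponding
# neighbor's front-end. Names and message tags are illustrative only.

class FrontEnd:
    def __init__(self, node_id, channels):
        self.node_id = node_id
        self.channels = channels  # {neighbor_id: channel, modeled as a list}
        self.round = 0

    def on_synch_send(self, messages):
        """Handle the local client's synch-send for this round.

        messages: {neighbor_id: payload}. A node may send to some
        neighbors and not others, so missing keys simply go unsent."""
        self.round += 1
        for dest, payload in messages.items():
            # Tag each message with sender and round number so the
            # receiving front-end knows which round it belongs to.
            self.channels[dest].append(
                ("MSG", self.node_id, self.round, payload))
```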