. . . , n} and < U, σ > →ⁱ < V, τ > is a transition in ξ caused by the activation of the ith component of U. Thus the labeled arrows →ⁱ are relations between configurations which are included in the overall transition relation →.

Recall from Section 2.1 that for arbitrary binary relations →1 and →2 the relational composition →1 ∘ →2 is defined as follows:

a →1 ∘ →2 b if for some c, a →1 c and c →2 b.

We say that →1 and →2 commute if

→1 ∘ →2 = →2 ∘ →1.

Prove that for i, j ∈ {1, . . . , n} the transition relations →ⁱ and →ʲ commute.

Hint. Use the Change and Access Lemma 3.4.

7.6. Prove that

Mtot[[[S1 ‖ . . . ‖ Sn]]] = Mtot[[S1; . . . ; Sn]]

using Exercise 7.5.

7.7. Call a program S determinate if for all proper states σ, Mtot[[S]](σ) is a singleton.

Prove that if S1, S2 are determinate and B is a Boolean expression, then

(i) S1; S2 is determinate,
(ii) if B then S1 else S2 fi is determinate,
(iii) while B do S1 od is determinate.

7.8. Provide an alternative proof of the Determinism Lemma 7.6 using Exercises 7.6 and 7.7.
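The notions of relational composition and commutation used in Exercises 7.5 and 7.6 are easy to experiment with. The following minimal Python sketch is our illustration, not from the book; the helper names compose and commute are ours. It represents a binary relation as a set of pairs and checks commutation for two transition relations that act on disjoint parts of a state, mirroring the labeled relations →ⁱ and →ʲ of disjoint components.

```python
# Illustrative sketch (not from the book): relations as sets of pairs.

def compose(r1, r2):
    """Relational composition r1 o r2:
    a (r1 o r2) b iff, for some c, a r1 c and c r2 b."""
    return {(a, b) for (a, c) in r1 for (c2, b) in r2 if c == c2}

def commute(r1, r2):
    """r1 and r2 commute iff r1 o r2 = r2 o r1."""
    return compose(r1, r2) == compose(r2, r1)

# Two transition relations on states (x, y): r1 updates only x,
# r2 updates only y (mod 3, so the state space is finite and closed
# under both relations). Since they access disjoint parts of the
# state, they commute.
states = [(x, y) for x in range(3) for y in range(3)]
r1 = {((x, y), ((x + 1) % 3, y)) for (x, y) in states}
r2 = {((x, y), (x, (y + 1) % 3)) for (x, y) in states}
print(commute(r1, r2))  # True
```

By contrast, two relations that both update the same variable generally fail this test, which is the essence of interference.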
7.9. Show that the correctness formula

{x = y} [x := x + 1 ‖ y := y + 1] {x = y}

cannot be proved in the proof systems PW + rule 24 and TW + rule 24.

7.10. Prove the correctness formula

{x = y} [x := x + 1 ‖ y := y + 1] {x = y}

in the proof system PW + rule A5 + rule 24.

7.6 Bibliographic Remarks

The symbol ‖ denoting parallel composition is due to Hoare [1972]. The interleaving semantics presented here is a widely used approach to modeling parallelism. Alternatives are the semantics of maximal parallelism of Salwicki and Müldner [1981] and semantics based on partial orders among the configurations in computations; see, for example, Best [1996] and Fokkinga, Poel and Zwiers [1993].

Abstract reduction systems as used in Section 7.2 are extensively covered in Terese [2003]. The sequentialization rule (rule 23) and the disjoint parallelism rule (rule 24) were first discussed in Hoare [1975], although on the basis of an informal semantics only. The proof of the Sequentialization Lemma 7.7 is based on the fact that transitions of disjoint programs commute; see Exercises 7.5 and 7.6. Semantic commutativity of syntactically nondisjoint statements was studied by Best and Lengauer [1989].

The need for auxiliary variables in correctness proofs was first realized by Clint [1973]. A critique of auxiliary variables ranging over an unbounded domain of values can be found in Clarke [1980]. The name of the Stuttering Lemma is motivated by the considerations of Lamport [1983].

The program FIND studied in Section 7.4 is a disjoint parallelism version of the program FINDPOS due to Rosen [1974].
Its correctness proof is a variation of the corresponding proof of FINDPOS in Owicki and Gries [1976a].

8 Parallel Programs with Shared Variables

8.1  Access to Shared Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
8.2  Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
8.3  Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
8.4  Verification: Partial Correctness . . . . . . . . . . . . . . . . . . . . . . 274
8.5  Verification: Total Correctness . . . . . . . . . . . . . . . . . . . . . . . 284
8.6  Case Study: Find Positive Element More Quickly . . . . . . 291
8.7  Allowing More Points of Interference . . . . . . . . . . . . . . . . . 294
8.8  Case Study: Parallel Zero Search . . . . . . . . . . . . . . . . . . . . . 299
8.9  Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
8.10 Bibliographic Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305

Disjoint parallelism is a rather restricted form of concurrency. In applications, concurrently operating components often share resources, such as a common database, a line printer or a data bus. Sharing is necessary when resources are too costly to have one copy for each component, as in the case of a large database.
Sharing is also useful to establish communication between different components, as in the case of a data bus. This form of concurrency can be modeled by means of parallel programs with shared variables, variables that can be changed and read by several components.

Design and verification of parallel programs with shared variables are much more demanding than those of disjoint parallel programs. The reason is that the individual components of such a parallel program can interfere with each other by changing the shared variables. To restrict the points of interference, we consider so-called atomic regions whose execution cannot be interrupted by other components.

After illustrating the problems with shared variables in Section 8.1, we introduce the syntax of this class of programs in Section 8.2. In Section 8.3 we define the semantics of these programs.

Next, we study the verification of parallel programs with shared variables. We follow here the approach of Owicki and Gries [1976a]. In Section 8.4 we deal with partial correctness. The proof rule for parallelism with shared variables includes a test of interference freedom of proof outlines for the component programs. Intuitively, interference freedom means that none of the proof outlines invalidates the (intermediate) assertions in any other proof outline.

In Section 8.5 we deal with total correctness. To prove termination of parallel programs with shared variables, we have to strengthen the notion of a proof outline for the total correctness of a component program and extend the test of interference freedom appropriately.

As a case study we prove in Section 8.6 the correctness of a more efficient version of the program FIND of Chapter 7 which uses shared variables.

In Section 8.7 we consider two program transformations that allow us to introduce in parallel programs more points of interference without changing the correctness properties.
We demonstrate the use of these transformations in Section 8.8, where we prove as a case study partial correctness of the zero search program ZERO-3 from Chapter 1.

8.1 Access to Shared Variables

The input/output behavior of a disjoint parallel program can be determined by looking only at the input/output behavior of its components. This is no longer the case when shared variables are allowed. Here the program executions have to be considered.

Example 8.1. Consider the two component programs

S1 ≡ x := x + 2 and S1′ ≡ x := x + 1; x := x + 1.

In isolation both programs exhibit the same input/output behavior, since they increase the value of the variable x by 2.
However, when executed in parallel with the component

S2 ≡ x := 0,

S1 and S1′ behave differently. Indeed, upon termination of

[S1 ‖ S2]

the value of x can be either 0 or 2, depending on whether S1 or S2 is executed first. On the other hand, upon termination of

[S1′ ‖ S2]

the value of x can be 0, 1 or 2.
The new value 1 is obtained when S2 is executed between the two assignments of S1′. ⊓⊔

The informal explanation of these difficulties clarifies that the input/output (i/o) behavior of a parallel program with shared variables critically depends on the way its components access the shared variables during its computation. Therefore any attempt at understanding parallelism with shared variables begins with the understanding of the access to shared variables. The explanation involves the notion of an atomic action.

In this context, by an action A we mean a statement or a Boolean expression. An action A within a component Si of a parallel program S ≡ [S1 ‖ . . . ‖ Sn] with shared variables is called indivisible or atomic if during its execution the other components Sj (j ≠ i) may not change the variables of A. Thus during the execution of A only A itself may change the variables in var(A). Computer hardware guarantees that certain actions are atomic.

The computation of each component Si can be thought of as a sequence of executions of atomic actions: at each instant of time each component is ready to execute one of its atomic actions.
The components proceed asynchronously; that is, there is no assumption made about the relative speed at which different components execute their atomic actions.

The executions of atomic actions within two different components of a parallel program with shared variables may overlap, provided these actions do not change each other's variables. But because of this restriction, their possibly overlapping execution can be modeled by executing them sequentially, in any order.
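Since overlapping executions of non-interfering atomic actions can be serialized, the possible outcomes of the programs in Example 8.1 can be computed by enumerating all interleavings of the components' atomic actions. The sketch below is our Python illustration, not part of the book; the helper names interleavings and final_x are ours.

```python
# Illustrative sketch (not from the book): brute-force interleavings.
from itertools import permutations

def interleavings(c1, c2):
    """All merges of the action sequences c1 and c2 that preserve
    the program order within each component."""
    tags = [0] * len(c1) + [1] * len(c2)
    for order in set(permutations(tags)):   # distinct merge orders
        its = {0: iter(c1), 1: iter(c2)}
        yield [next(its[t]) for t in order]

def final_x(schedule, x0=0):
    """Execute a schedule of atomic actions on shared variable x."""
    state = {"x": x0}
    for action in schedule:
        action(state)
    return state["x"]

inc1 = lambda s: s.update(x=s["x"] + 1)   # x := x + 1
inc2 = lambda s: s.update(x=s["x"] + 2)   # x := x + 2
zero = lambda s: s.update(x=0)            # x := 0

S1, S1prime, S2 = [inc2], [inc1, inc1], [zero]
# [S1 || S2] can only end with x = 0 or x = 2 ...
print(sorted({final_x(s) for s in interleavings(S1, S2)}))       # [0, 2]
# ... while [S1' || S2] can also produce the new value x = 1.
print(sorted({final_x(s) for s in interleavings(S1prime, S2)}))  # [0, 1, 2]
```

The initial value of x is irrelevant here, because every interleaving executes S2, which sets x to 0.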
This explains why asynchronous computations are modeled here by interleaving.

There still remains the question of what size of atomic actions can be assumed. We discuss this point in Sections 8.2 and 8.3.

8.2 Syntax

Formally, shared variables are introduced by dropping the disjointness requirement for parallel composition. Atomic regions may appear inside a parallel composition. Syntactically, these are statements enclosed in angle brackets ⟨ and ⟩.

Thus we first define component programs as programs generated by the same clauses as those defining while programs in Chapter 3 together with the following clause for atomic regions:

S ::= ⟨S0⟩,

where S0 is loop-free and does not contain further atomic regions.

Parallel programs with shared variables (or simply parallel programs) are generated by the same clauses as those defining while programs together with the following clause for parallel composition:

S ::= [S1 ‖ . . . ‖ Sn],
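The syntactic restrictions on atomic regions (loop-free bodies, no nesting, no parallel composition inside) can be stated as a well-formedness check over an abstract syntax tree. The following small Python sketch is our illustration; the class names and the checker legal_atomic_body are hypothetical, not notation from the book.

```python
# Illustrative sketch (not from the book): abstract syntax and a
# well-formedness check for atomic region bodies.
from dataclasses import dataclass

@dataclass
class Assign:                    # u := t
    var: str; expr: str

@dataclass
class Seq:                       # S1; S2
    first: object; second: object

@dataclass
class If:                        # if B then S1 else S2 fi
    cond: str; then: object; orelse: object

@dataclass
class While:                     # while B do S od
    cond: str; body: object

@dataclass
class Atomic:                    # <S0>
    body: object

@dataclass
class Par:                       # [S1 || ... || Sn]
    components: list

def legal_atomic_body(s):
    """The body S0 of an atomic region <S0> must be loop-free and
    contain no nested atomic regions (and no parallel composition)."""
    if isinstance(s, (While, Atomic, Par)):
        return False
    if isinstance(s, Seq):
        return legal_atomic_body(s.first) and legal_atomic_body(s.second)
    if isinstance(s, If):
        return legal_atomic_body(s.then) and legal_atomic_body(s.orelse)
    return True                  # assignments (and skip) are fine

# <x := x + 1; x := x + 1> is a legal atomic region ...
ok = Atomic(Seq(Assign("x", "x + 1"), Assign("x", "x + 1")))
# ... but an atomic region enclosing a loop is not.
bad = Atomic(While("x < 10", Assign("x", "x + 1")))
print(legal_atomic_body(ok.body), legal_atomic_body(bad.body))  # True False
```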