Indeed, along the path y := 0; y := 0 of the first loop body the proposed bound function max(x, 0) does not decrease. This path cannot be taken if S1 is executed in isolation, but it can be taken due to interference with S2, as the above example shows.

Unfortunately, the stronger premises in the new formation rule (xi) for total correctness proof outlines of while loops given in Definition 8.3 reduce its applicability.
For example, we have seen that the component program S1 terminates when considered in isolation. This can be easily proved using the loop II rule (rule 7), but we cannot record this proof as a proof outline for total correctness in the sense of Definition 8.3, because on the path y := 0 the variable x is not decreased.

However, as we are going to see, many parallel programs can be successfully handled in the way proposed here.

8.6 Case Study: Find Positive Element More Quickly

In Section 7.4 we studied the problem of finding a positive element in an array a : integer → integer. As a solution we presented a disjoint parallel program FIND.
Here we consider an improved program FINDPOS for the same problem. Thus it should satisfy the correctness formula

{true} FINDPOS {1 ≤ k ≤ N + 1 ∧ ∀(0 < l < k) : a[l] ≤ 0 ∧ (k ≤ N → a[k] > 0)}    (8.17)

in the sense of total correctness, where a ∉ change(FINDPOS). Just as in FIND, the program FINDPOS consists of two components S1 and S2 activated in parallel. S1 searches for an odd index k of a positive element and S2 searches for an even one.

What is new is that now S1 should stop searching once S2 has found a positive element and vice versa for S2.
Thus some communication should take place between S1 and S2. This is achieved by making oddtop and eventop shared variables of S1 and S2, by refining the loop conditions of S1 and S2 into

i < min(oddtop, eventop) and j < min(oddtop, eventop),

respectively. Thus the program FINDPOS is of the form

FINDPOS ≡ i := 1; j := 2; oddtop := N + 1; eventop := N + 1;
          [S1‖S2];
          k := min(oddtop, eventop),

where

S1 ≡ while i < min(oddtop, eventop) do
       if a[i] > 0 then oddtop := i
                   else i := i + 2
       fi
     od

and

S2 ≡ while j < min(oddtop, eventop) do
       if a[j] > 0 then eventop := j
                   else j := j + 2
       fi
     od.

This program is studied in Owicki and Gries [1976a].
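As an executable aside (a Python transliteration of ours, not part of the formal development), the behaviour of FINDPOS is easy to try out: the array is modelled as a list indexed from 1 and the two components run as threads sharing oddtop and eventop. The threads do not evaluate the loop condition atomically, so this only illustrates the intended behaviour, not the formal interleaving semantics.

import threading

def findpos(a, N):
    """Python transliteration of FINDPOS; a is indexed 1..N (a[0] unused)."""
    shared = {"oddtop": N + 1, "eventop": N + 1}

    def search(start, top_key):
        idx = start
        # Loop condition i < min(oddtop, eventop): stop as soon as the other
        # component has already found a positive element at a smaller index.
        while idx < min(shared["oddtop"], shared["eventop"]):
            if a[idx] > 0:
                shared[top_key] = idx
            else:
                idx += 2

    s1 = threading.Thread(target=search, args=(1, "oddtop"))   # odd indices
    s2 = threading.Thread(target=search, args=(2, "eventop"))  # even indices
    s1.start(); s2.start()
    s1.join(); s2.join()
    return min(shared["oddtop"], shared["eventop"])

print(findpos([0, -3, -1, 0, 2, 5], 5))   # prints 4: a[4] is the first positive element

On this input the result is 4 under every interleaving: whichever component first lowers its shared variable also shrinks the search bound of the other component.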
To prove (8.17) in the system TSV, we first construct appropriate proof outlines for S1 and S2. Let p1, p2 and t1, t2 be the invariants and bound functions introduced in Section 7.4; that is,

p1 ≡ 1 ≤ oddtop ≤ N + 1 ∧ odd(i) ∧ 1 ≤ i ≤ oddtop + 1
     ∧ ∀l : (odd(l) ∧ 1 ≤ l < i → a[l] ≤ 0)
     ∧ (oddtop ≤ N → a[oddtop] > 0),

t1 ≡ oddtop + 1 − i,

p2 ≡ 2 ≤ eventop ≤ N + 1 ∧ even(j) ∧ j ≤ eventop + 1
     ∧ ∀l : (even(l) ∧ 1 ≤ l < j → a[l] ≤ 0)
     ∧ (eventop ≤ N → a[eventop] > 0),

t2 ≡ eventop + 1 − j.

Then we consider the following standard proof outlines for total correctness. For S1

{inv : p1}{bd : t1}
while i < min(oddtop, eventop) do
  {p1 ∧ i < oddtop}
  if a[i] > 0 then {p1 ∧ i < oddtop ∧ a[i] > 0}
                   oddtop := i
              else {p1 ∧ i < oddtop ∧ a[i] ≤ 0}
                   i := i + 2
  fi
od
{p1 ∧ i ≥ min(oddtop, eventop)}

and there is a symmetric standard proof outline for S2.
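As a quick sanity check of this outline (not a replacement for the proof), one can execute S1 in isolation, that is, with eventop kept at N + 1, and test that p1 holds at every loop entry and that t1 strictly decreases from one iteration to the next. A minimal Python sketch of ours, with p1 and t1 transcribed literally:

def check_s1(a, N):
    """Run S1 in isolation, asserting the invariant p1 and the bound function t1."""
    i, oddtop, eventop = 1, N + 1, N + 1        # eventop frozen: S2 is not running

    def p1():
        return (1 <= oddtop <= N + 1 and i % 2 == 1 and 1 <= i <= oddtop + 1
                and all(a[l] <= 0 for l in range(1, i) if l % 2 == 1)
                and (oddtop > N or a[oddtop] > 0))

    prev_t1 = None
    while i < min(oddtop, eventop):
        assert p1(), "invariant p1 violated at loop entry"
        t1 = oddtop + 1 - i
        assert prev_t1 is None or t1 < prev_t1, "bound function t1 did not decrease"
        prev_t1 = t1
        if a[i] > 0:
            oddtop = i
        else:
            i += 2
    assert p1() and i >= min(oddtop, eventop)   # the postcondition of the outline

check_s1([0, -3, -1, 0, 2, 5], 5)               # passes silently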
Except for the new postconditions, which are the consequences of the new loop conditions, all other assertions are taken from the corresponding proof outlines in Section 7.4. Note that the invariants and the bound functions satisfy the new conditions formulated in Definition 8.3.

To apply the parallelism with shared variables rule 27 for the parallel composition of S1 and S2, we must show interference freedom of the two proof outlines. This amounts to checking 24 correctness formulas! Fortunately, 22 of them are trivially satisfied because the variable changed by the assignment does not appear in the assertion or bound function under consideration.
The only nontrivial cases deal with the interference freedom of the postcondition of S1 with the assignment to the variable eventop in S2 and, symmetrically, of the postcondition of S2 with the assignment to the variable oddtop in S1.

We deal with the postcondition of S1,

p1 ∧ i ≥ min(oddtop, eventop),

and the assignment eventop := j. Since pre(eventop := j) implies j < eventop, we have the following proof of interference freedom:

{p1 ∧ i ≥ min(oddtop, eventop) ∧ pre(eventop := j)}
{p1 ∧ i ≥ min(oddtop, eventop) ∧ j < eventop}
{p1 ∧ i ≥ min(oddtop, j)}
eventop := j
{p1 ∧ i ≥ min(oddtop, eventop)}.

An analogous argument takes care of the postcondition of S2. This finishes the overall proof of interference freedom of the two proof outlines.

An application of the parallelism with shared variables rule 27 now yields

{p1 ∧ p2}
[S1‖S2]
{p1 ∧ p2 ∧ i ≥ min(oddtop, eventop) ∧ j ≥ min(oddtop, eventop)}.

By the assignment axiom and the consequence rule,

{true}
i := 1; j := 2; oddtop := N + 1; eventop := N + 1;
[S1‖S2]
{min(oddtop, eventop) ≤ N + 1
 ∧ ∀(0 < l < min(oddtop, eventop)) : a[l] ≤ 0
 ∧ (min(oddtop, eventop) ≤ N → a[min(oddtop, eventop)] > 0)}.

Hence the final assignment k := min(oddtop, eventop) in FINDPOS establishes the desired postcondition of (8.17).
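As an executable aside, the essential step in the interference-freedom argument above is the implication i ≥ min(oddtop, eventop) ∧ j < eventop → i ≥ min(oddtop, j); the conjunct p1 is preserved trivially since it mentions neither eventop nor j. The following Python snippet of ours probes this implication over bounded random integers (purely illustrative, not a proof):

import random

# Randomized check of the consequence step used above: if i >= min(oddtop, eventop)
# and j < eventop, then the assignment eventop := j preserves i >= min(oddtop, eventop).
for _ in range(100_000):
    i, j, oddtop, eventop = (random.randint(-50, 50) for _ in range(4))
    if i >= min(oddtop, eventop) and j < eventop:
        eventop = j                              # the interfering assignment of S2
        assert i >= min(oddtop, eventop)
print("no counterexample found")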
8.7 Allowing More Points of Interference

The fewer points of interference there are, the simpler the correctness proofs of parallel programs become. On the other hand, the more points of interference parallel programs have, the more realistic they are. In this section we present two program transformations that allow us to introduce more points of interference without changing the program semantics. The first transformation achieves this by reducing the size of atomic regions.

Theorem 8.1. (Atomicity) Consider a parallel program of the form S ≡ S0; [S1‖...‖Sn] where S0 is a while program. Let T result from S by replacing in one of its components, say Si with i > 0, either

• an atomic region ⟨R1; R2⟩ where one of R1, R2 is disjoint from all components Sj with j ≠ i by

  ⟨R1⟩; ⟨R2⟩

or

• an atomic region ⟨if B then R1 else R2 fi⟩ where B is disjoint from all components Sj with j ≠ i by

  if B then ⟨R1⟩ else ⟨R2⟩ fi.

Then the semantics of S and T agree; that is,

M[[S]] = M[[T]] and Mtot[[S]] = Mtot[[T]].
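Before turning to the proof, the following Python sketch illustrates the claim on a toy instance of our own: each component is modelled as a list of atomic actions on a shared state, S keeps ⟨R1; R2⟩ as one action, T splits it, and R1 touches only the variables temp and x, which the second component neither reads nor writes. Enumerating all interleavings and comparing the sets of final states mimics only the partial-correctness half of M[[S]] = M[[T]], for this terminating example.

def interleavings(xs, ys):
    """All merges of two action sequences that preserve their relative orders."""
    if not xs:
        yield list(ys)
    elif not ys:
        yield list(xs)
    else:
        for rest in interleavings(xs[1:], ys):
            yield [xs[0]] + rest
        for rest in interleavings(xs, ys[1:]):
            yield [ys[0]] + rest

def final_states(comp1, comp2, init):
    """Final states reachable by executing all interleavings of two components."""
    results = set()
    for schedule in interleavings(comp1, comp2):
        state = dict(init)
        for action in schedule:
            action(state)
        results.add(tuple(sorted(state.items())))
    return results

def R1(s): s["temp"] = s["x"]          # disjoint from the second component
def R2(s): s["x"] = s["temp"] + s["y"]
def R1_R2(s): R1(s); R2(s)             # the atomic region <R1; R2> as one action
def other(s): s["y"] = 2 * s["y"]      # the (only) action of the second component

init = {"x": 1, "y": 10, "temp": 0}
S = final_states([R1_R2], [other], init)   # atomic region kept in one piece
T = final_states([R1, R2], [other], init)  # atomic region split into <R1>; <R2>
assert S == T                              # same reachable final states
print(S)

The assertion holds because the extra interleaving of T that separates R1 from R2 yields a final state that is already produced by an interleaving of S, namely the one that schedules the second component first.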
Proof. We treat the case when S has no initialization part S0 and when T results from S by splitting ⟨R1; R2⟩ into ⟨R1⟩; ⟨R2⟩. We proceed in five steps.

Step 1 We first define good and almost good (fragments of) computations for the program T. By an Rk-transition, k ∈ {1, 2}, we mean a transition occurring in a computation of T which is of the form
< [U1‖...‖⟨Rk⟩; Ui‖...‖Un], σ > → < [U1‖...‖Ui‖...‖Un], τ >.

We call a fragment ξ of a computation of T good if in ξ each R1-transition is immediately followed by the corresponding R2-transition, and we call ξ almost good if in ξ each R1-transition is eventually followed by the corresponding R2-transition.

Observe that every finite computation of T is almost good.
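For finite computations these notions are easy to phrase operationally. In the following sketch of ours (not part of the proof) a computation fragment is abstracted to its sequence of transition labels, with 'R1' and 'R2' marking the two halves of the split atomic region and any other label standing for the remaining transitions; matching an R1-transition with "the corresponding" R2-transition is unambiguous here because only component i is split.

def is_good(labels):
    """Each 'R1' is immediately followed by an 'R2'."""
    return all(k + 1 < len(labels) and labels[k + 1] == "R2"
               for k, lab in enumerate(labels) if lab == "R1")

def is_almost_good(labels):
    """Each 'R1' is eventually followed by an 'R2'."""
    return all("R2" in labels[k + 1:]
               for k, lab in enumerate(labels) if lab == "R1")

assert is_good(["j", "R1", "R2", "j"])
assert not is_good(["R1", "j", "R2"]) and is_almost_good(["R1", "j", "R2"])
assert not is_almost_good(["R1", "j", "j"])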
Step 2 To compare the computations of S and T, we use the i/o equivalence introduced in Definition 7.4. We prove the following two claims.

• Every computation of S is i/o equivalent to a good computation of T,
• every good computation of T is i/o equivalent to a computation of S.

First consider a computation ξ of S. Every program occurring in a configuration of ξ is a parallel composition of n components. For such a program U let the program split(U) result from U by replacing in the ith component of U every occurrence of ⟨R1; R2⟩ by ⟨R1⟩; ⟨R2⟩. For example, split(S) ≡ T.

We construct an i/o equivalent good computation of T from ξ by replacing

• every transition of the form
< [U1‖...‖⟨R1; R2⟩; Ui‖...‖Un], σ > → < [U1‖...‖Ui‖...‖Un], τ >

with two consecutive transitions

< split([U1‖...‖⟨R1; R2⟩; Ui‖...‖Un]), σ >
→ < split([U1‖...‖⟨R2⟩; Ui‖...‖Un]), σ1 >
→ < split([U1‖...‖Ui‖...‖Un]), τ >,

where the intermediate state σ1 is defined by

< ⟨R1⟩, σ > → < E, σ1 >,

• every other transition

< U, σ > → < V, τ >

with

< split(U), σ > → < split(V), τ >.

Now consider a good computation η of T. By applying the above replacement operations in the reverse direction we construct from η an i/o equivalent computation of S.

Step 3 For the comparison of computations of T we use i/o equivalence, but to reason about it we also introduce a more discriminating variant that we call “permutation equivalence”.

First consider an arbitrary computation ξ of T.
Every program occurring in a configuration of ξ is the parallel composition of n components. To distinguish between different kinds of transitions in ξ, we attach labels to the transition arrow →. We write

< U, σ > →^Rk < V, τ >

if k ∈ {1, 2} and < U, σ > → < V, τ > is an Rk-transition of the ith component of U,

< U, σ > →^i < V, τ >

if < U, σ > → < V, τ > is any other transition caused by the activation of the ith component of U, and

< U, σ > →^j < V, τ >

if j ≠ i and < U, σ > → < V, τ > is a transition caused by the activation of the jth component of U.

Hence, a unique label is associated with each transition arrow in a computation of T.
This enables us to define the following.

Two computations η and ξ of T are permutation equivalent if

• η and ξ start in the same state,
• for all states σ, η terminates in σ iff ξ terminates in σ,
• the possibly infinite sequences of labels attached to the transition arrows in η and ξ are permutations of each other.

Clearly, permutation equivalence of computations of T implies their i/o equivalence.

Step 4 We prove now that every computation of T is i/o equivalent to a good computation of T. To this end, we establish two simpler claims.

Claim 1 Every computation of T is i/o equivalent to an almost good computation of T.

Proof of Claim 1.
Consider a computation ξ of T that is not almost good. Then by the observation stated at the end of Step 1, ξ is infinite. More precisely, there exists a suffix ξ1 of ξ that starts in a configuration < U, σ > with an R1-transition and then continues with infinitely many transitions not involving the ith component, say,

ξ1 : < U, σ > →^R1 < U0, σ0 > →^j1 < U1, σ1 > →^j2 . . .,

where jk ≠ i for k ≥ 1. Using the Change and Access Lemma 3.4 we conclude the following: if R1 is disjoint from Sj with j ≠ i, then there is also an infinite transition sequence of the form

ξ2 : < U, σ > →^j1 < V1, τ1 > →^j2 . . .