, n} either Rj ≡ E or Rj ≡ at(T, Sj) for a normal subprogram T of Sj.
Hint. See Exercise 3.13.

8.5.
(i) Prove the correctness formula

  {x = 0} [x := x + 1 ‖ x := x + 2] {x = 3}

in the proof system PW + rule 27.
(ii) By contrast, show that the correctness formula

  {x = 0} [x := x + 1 ‖ x := x + 1] {x = 2}

is not a theorem in the proof system PW + rule 27.
(iii) Explain the difference between (i) and (ii), and prove the correctness formula of (ii) in the proof system PSV.

8.6. Prove the correctness formula

  {true} [x := x + 2; x := x + 2 ‖ x := 0] {x = 0 ∨ x = 2 ∨ x = 4}

in the proof system PSV.
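As a quick sanity check (a Python sketch added here, not part of the original text), one can enumerate all interleavings of the assignments, treating each assignment as one atomic step, and test the postconditions of Exercises 8.5 and 8.6 by brute force. The check indicates that the formula in 8.5(ii) holds in every interleaving, so part (ii) turns on provability in PW + rule 27 rather than on validity.

def interleavings(a, b):
    """Yield every interleaving of the atomic action sequences a and b."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(x, actions):
    """Execute a sequence of atomic actions on the single variable x."""
    for act in actions:
        x = act(x)
    return x

inc1 = lambda x: x + 1   # x := x + 1
inc2 = lambda x: x + 2   # x := x + 2
zero = lambda x: 0       # x := 0

# Exercise 8.5(i): {x = 0} [x := x + 1 || x := x + 2] {x = 3}
assert all(run(0, s) == 3 for s in interleavings([inc1], [inc2]))

# Exercise 8.5(ii): {x = 0} [x := x + 1 || x := x + 1] {x = 2}
# holds in every interleaving, although it is not derivable in PW + rule 27.
assert all(run(0, s) == 2 for s in interleavings([inc1], [inc1]))

# Exercise 8.6: {true} [x := x + 2; x := x + 2 || x := 0] {x = 0 or x = 2 or x = 4},
# checked for several initial values of x since the precondition is true.
for x0 in (-5, 0, 7):
    assert all(run(x0, s) in (0, 2, 4) for s in interleavings([inc2, inc2], [zero]))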
8.7. Show that the rule of disjoint parallelism (rule 24) is not sound for parallel programs.
Hint. Consider the component programs x := 0 and x := 1; y := x.

8.8. Consider the parallel program ZERO-3 from Case Study 8.8. Prove the correctness formula

  {∃u : f(u) = 0} ZERO-3 {f(x) = 0 ∨ f(y) = 0}

in the proof system PSV.
Hint. Introduce two Boolean auxiliary variables init1 and init2 to record whether the initializations x := 0 and y := 1 of the component programs S1 and S2 of ZERO-3 have been executed.
Thus instead of S1 consider

  S1′ ≡ ⟨x := 0; init1 := true⟩;
        while ¬found do
          x := x + 1;
          if f(x) = 0 then found := true fi
        od

and analogously with S2. Use

  p1 ≡ init1 ∧ x ≥ 0
       ∧ (found → (x > 0 ∧ f(x) = 0) ∨ (init2 ∧ y ≤ 0 ∧ f(y) = 0))
       ∧ (¬found ∧ x > 0 → f(x) ≠ 0)

as a loop invariant in S1′ and a symmetric loop invariant in S2′ to prove

  {¬found ∧ ¬init1 ∧ ¬init2} [S1′ ‖ S2′] {f(x) = 0 ∨ f(y) = 0}.

Finally, apply the rule of auxiliary variables (rule 25).
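The following Python sketch is only an illustration of the program being verified, not the proof asked for. It simulates one fair interleaving of the two components at roughly the granularity used in the hint, assuming (as the invariant suggests) that S2 decrements y from 1 through the non-positive integers; the auxiliary variables init1 and init2 are omitted because they matter only for the proof, not for the computation.

import random

def zero_search(f, seed=0):
    """Simulate one fair interleaving of the two components of ZERO-3."""
    rng = random.Random(seed)
    state = {"found": False, "x": None, "y": None}

    def s1():
        state["x"] = 0                       # x := 0   (init1 omitted)
        yield
        while not state["found"]:
            state["x"] += 1                  # x := x + 1
            yield
            if f(state["x"]) == 0:           # if f(x) = 0 then found := true fi
                state["found"] = True
            yield

    def s2():
        state["y"] = 1                       # y := 1   (init2 omitted)
        yield
        while not state["found"]:
            state["y"] -= 1                  # y := y - 1
            yield
            if f(state["y"]) == 0:           # if f(y) = 0 then found := true fi
                state["found"] = True
            yield

    active = [s1(), s2()]
    while active:
        comp = rng.choice(active)            # random, hence fair, scheduling
        try:
            next(comp)                       # execute one atomic step
        except StopIteration:
            active.remove(comp)
    return state

f = lambda u: u * u - 9                      # zeros at u = 3 and u = -3
final = zero_search(f)
assert f(final["x"]) == 0 or f(final["y"]) == 0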
8.9. Consider the parallel program ZERO-2 from Solution 2 in Section 1.1.
(i) Prove the correctness formula

  {true} ZERO-2 {f(x) = 0 ∨ f(y) = 0}

in the proof system PSV.
Hint. Introduce a Boolean auxiliary variable to indicate which of the components of ZERO-2 last updated the variable found.
(ii) Show that the above correctness formula is false in the sense of total correctness by describing an infinite computation of ZERO-2.

8.10. The parallel programs considered in Case Studies 7.4 and 8.6 both begin with the initialization part

  i := 1; j := 2; oddtop := N + 1; eventop := N + 1.

Investigate which of these assignments can be moved inside the parallel composition without invalidating the correctness formulas (8.15) in Section 7.4 and (8.17) in Section 8.6.
Hint. Apply the Initialization Theorem 8.2 or show that the correctness formulas (8.15) in Section 7.4 and (8.17) in Section 8.6 are invalidated.

8.11. Prove the Atomicity Theorem 8.1 for the cases when S has an initialization part and when T is obtained from S by splitting the atomic region ⟨if B then R1 else R2 fi⟩.
8.12. Prove the Initialization Theorem 8.2.

8.13. Prove the Sequentialization Lemma 7.7 using the Stuttering Lemma 7.9 and the Initialization Theorem 8.2.

8.14. Consider component programs S1, . . ., Sn and T1, . . ., Tn such that Si is disjoint from Tj whenever i ≠ j. Prove that the parallel programs

  S ≡ [S1 ‖ . . . ‖ Sn]; [T1 ‖ . . . ‖ Tn]
and

  T ≡ [S1; T1 ‖ . . . ‖ Sn; Tn]

have the same semantics under M, Mtot and Mfair. In the terminology of Elrad and Francez [1982] the subprograms [S1 ‖ . . . ‖ Sn] and [T1 ‖ . . . ‖ Tn] of S are called layers of the parallel program T.
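For a concrete feel of Exercise 8.14, here is a small Python check (an added illustration with variable names s, a, b chosen for the example, not the requested semantic proof). It compares the layered program [S1‖S2]; [T1‖T2] with the fully parallel [S1; T1 ‖ S2; T2] on one tiny instance by enumerating all interleavings of the atomic actions and comparing the sets of final states.

def interleavings(p, q):
    """Yield every interleaving of the atomic action sequences p and q."""
    if not p:
        yield list(q)
        return
    if not q:
        yield list(p)
        return
    for rest in interleavings(p[1:], q):
        yield [p[0]] + rest
    for rest in interleavings(p, q[1:]):
        yield [q[0]] + rest

def outcomes(p, q, starts):
    """All final states of [p || q] when started from each state in starts."""
    finals = []
    for start in starts:
        for schedule in interleavings(p, q):
            st = dict(start)
            for action in schedule:
                action(st)
            finals.append(st)
    return finals

# Atomic actions: s is shared by S1 and S2, a by S1 and T1, b is used only by T2,
# so Si is disjoint from Tj whenever i != j, as the exercise requires.
S1 = [lambda st: st.update(s=st["s"] + 1), lambda st: st.update(a=st["s"])]
S2 = [lambda st: st.update(s=2 * st["s"])]
T1 = [lambda st: st.update(a=10 * st["a"])]
T2 = [lambda st: st.update(b=st["b"] + 1)]

init = {"s": 1, "a": 0, "b": 0}
layered = outcomes(T1, T2, outcomes(S1, S2, [init]))    # [S1||S2]; [T1||T2]
parallel = outcomes(S1 + T1, S2 + T2, [init])           # [S1;T1 || S2;T2]

as_set = lambda states: {frozenset(st.items()) for st in states}
assert as_set(layered) == as_set(parallel)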
8.10 Bibliographic Remarks

As already mentioned, the approach to partial correctness and total correctness followed here is due to Owicki and Gries [1976a] and is known as the "Owicki/Gries method." A similar proof technique was introduced independently in Lamport [1977]. The presentation given here differs in the way total correctness is handled. Our presentation follows Apt, de Boer and Olderog [1990], in which the stronger formation rule for proof outlines for total correctness of while loops (formation rule (viii) given in Definition 8.3) was introduced.

The Owicki/Gries method has been criticized for its lack of compositionality, as manifested in the global test of interference freedom. This motivated research on compositional semantics and proof methods for parallel programs; see, for example, Brookes [1993] and de Boer [1994].

Atomic regions were considered by many authors, in particular Lipton [1975], Lamport [1977] and Owicki [1978]. The Atomicity Theorem 8.1 and the Initialization Theorem 8.2 presented in Section 8.7 are inspired by the considerations of Lipton [1975].

A systematic derivation of a parallel program for zero search is presented by Knapp [1992].
The derivation is carried out in the framework of UNITY.

The transformation of a layered program into a fully parallel program presented in Exercise 8.14 is called the law of Communication Closed Layers in Janssen, Poel and Zwiers [1991] and Fokkinga, Poel and Zwiers [1993] and is the core of a method for developing parallel programs.

9 Parallel Programs with Synchronization

9.1 Syntax
9.2 Semantics
9.3 Verification
9.4 Case Study: Producer/Consumer Problem
9.5 Case Study: The Mutual Exclusion Problem
9.6 Allowing More Points of Interference
9.7 Case Study: Synchronized Zero Search
9.8 Exercises
9.9 Bibliographic Remarks

FOR MANY APPLICATIONS the classes of parallel programs considered so far are not sufficient. We need parallel programs whose components can synchronize with each other. That is, components must be able to suspend their execution and wait or get blocked until the execution of the other components has changed the shared variables in such a way that a certain condition is fulfilled.
To formulate such waiting conditions we extend in Section 9.1 the program syntax by a synchronization construct, the await statement introduced in Owicki and Gries [1976a].

This construct permits a very flexible way of programming, but at the same time opens the door for subtle programming errors where the program execution ends in a deadlock. This is a situation where some components of a parallel program have not yet terminated and all nonterminated components are blocked because they wait eternally for a certain condition to become satisfied. The formal definition is given in Section 9.2 on semantics.

In this chapter we present a method of Owicki and Gries [1976a] for proving deadlock freedom.
For a clear treatment of this verification method we introduce in Section 9.3, besides the usual notions of partial and total correctness, an intermediate property called weak total correctness, which guarantees termination but not yet deadlock freedom.

As a first case study we prove in Section 9.4 the correctness of a typical synchronization problem: the producer/consumer problem. In Section 9.5 we consider another classical synchronization problem: the mutual exclusion problem. We prove correctness of two solutions to this problem, one formulated in the language without synchronization and another one in the full language of parallel programs with synchronization.

In Section 9.6 we restate two program transformations of Section 8.7 in the new setting where synchronization is allowed.
These transformations are used in the case study in Section 9.7 where we prove correctness of the zero search program ZERO-6 from Chapter 1.

9.1 Syntax

A component program is now a program generated by the same clauses as those defining while programs in Chapter 3 together with the following clause for await statements:

  S ::= await B then S0 end,

where S0 is loop free and does not contain any await statements. Thanks to this syntactic restriction no divergence or deadlock can occur during the execution of S0, which significantly simplifies our analysis.

Parallel programs with synchronization (or simply parallel programs) are then generated by the same clauses as those defining while programs, together with the usual clause for parallel composition:

  S ::= [S1 ‖ . . . ‖ Sn],
where S1, . . ., Sn are component programs (n > 1). Thus, as before, we do not allow nested parallelism, but we do allow parallelism within sequential composition, conditional statements and while loops. Note that await statements may appear only within the context of parallel composition. Throughout this chapter the notions of a component program and a parallel program always refer to the above definition.

To explain the meaning of await statements, let us imagine an interleaved execution of a parallel program where one component is about to execute a statement await B then S end. If B evaluates to true, then S is executed as an atomic region whose activation cannot be interrupted by the other components.
If B evaluates to false, the component gets blocked and the other components take over the execution. If during their execution B becomes true, the blocked component can resume its execution. Otherwise, it remains blocked forever. Thus await statements model conditional atomic regions.
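The await statement has no direct counterpart in mainstream programming languages, but its intended behaviour can be imitated. The sketch below is an added analogy, not the book's semantics: a Python condition variable plays the role of the scheduler, the body of an await runs while holding one global lock (so it is not interrupted), and a component whose guard B is false releases the lock and waits until another component changes the shared state. The helper name await_then and the buffer example are illustrative choices.

import threading

cond = threading.Condition()        # one global lock: every body is an atomic region
shared = {"buffer": None}           # shared variables of the components

def await_then(guard, body):
    """Rough analogue of 'await B then S end' for the components below."""
    with cond:
        while not guard():          # blocked as long as B is false
            cond.wait()             # release the lock; woken when the state changes
        body()                      # executed without interruption
        cond.notify_all()           # other guards may have become true

def producer():
    for i in range(3):
        await_then(lambda: shared["buffer"] is None,
                   lambda i=i: shared.update(buffer=i))

def consumer(received):
    def take():
        received.append(shared["buffer"])
        shared["buffer"] = None
    for _ in range(3):
        await_then(lambda: shared["buffer"] is not None, take)

received = []
p = threading.Thread(target=producer)
c = threading.Thread(target=consumer, args=(received,))
p.start(); c.start(); p.join(); c.join()
assert received == [0, 1, 2]

The while loop around the guard is essential: a woken component must re-check B, just as a blocked component in the semantics above only proceeds once B actually holds.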