[S1 ∥ … ∥ Sn],

where S1, …, Sn are component programs (n > 1). Again, we do not allow nested parallelism, but we allow parallelism within sequential composition, conditional statements and while loops.

Intuitively, an execution of [S1 ∥ … ∥ Sn] is obtained by interleaving the atomic, that is, noninterruptible, steps in the executions of the components S1, …, Sn. By definition,

• Boolean expressions,
• assignments and skip, and
• atomic regions

are all evaluated or executed as atomic steps.
An atomic region is required to be loop-free so that its execution is guaranteed to terminate; thus atomic steps are certain to terminate. An interleaved execution of [S1 ∥ … ∥ Sn] terminates if and only if the individual execution of each component terminates.

For convenience, we identify

  ⟨A⟩ ≡ A

if A is an atomic statement, that is, an assignment or skip.
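To make the interleaving idea concrete, here is a minimal Python sketch (ours, not from the book) that enumerates all interleavings of the components' atomic-step sequences; the step labels a1, a2, b1 are arbitrary placeholders. Later sketches in this section reuse this generator.

  def interleavings(components):
      """Yield all interleavings of the components' atomic-step sequences.

      Each component is a list of atomic steps; an interleaving preserves
      the relative order of the steps within every component.
      """
      if all(not comp for comp in components):
          yield []
          return
      for i, comp in enumerate(components):
          if comp:  # schedule the next atomic step of component i
              rest = components[:i] + [comp[1:]] + components[i + 1:]
              for tail in interleavings(rest):
                  yield [comp[0]] + tail

  # Two components with step sequences a1; a2 and b1 admit three interleavings:
  for run in interleavings([["a1", "a2"], ["b1"]]):
      print(run)
  # ['a1', 'a2', 'b1'], ['a1', 'b1', 'a2'], ['b1', 'a1', 'a2']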
By a normal subprogram of a program S we mean a subprogram of S not occurring within any atomic region of S. For example, the assignment x := 0, the atomic region ⟨x := x + 2; z := 1⟩ and the program x := 0; ⟨x := x + 2; z := 1⟩ are the only normal subprograms of x := 0; ⟨x := x + 2; z := 1⟩.

As usual, we assume that all considered programs are syntactically correct. Thus when discussing an atomic region ⟨S⟩ it is assumed that S is loop-free.

8.3 Semantics

The semantics of parallel programs is defined in the same way as that of disjoint parallel programs, by using the transition axioms and rules (i)–(vii) introduced in Section 3.2 together with transition rule (xvii) introduced in Section 7.2.
So, as in Chapter 7, parallelism is modeled here by means of interleaving. To complete the definition we still need to define the semantics of atomic regions. This is achieved by the following transition rule:

(xviii)
      < S, σ > →* < E, τ >
    ─────────────────────────
     < ⟨S⟩, σ > → < E, τ >

This rule formalizes the intuitive meaning of atomic regions by reducing each terminating computation of the “body” S of an atomic region ⟨S⟩ to a one-step computation of the atomic region. This reduction prevents interference of other components in a computation of ⟨S⟩ within the context of a parallel composition.
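The effect of rule (xviii) can be mimicked in a toy interpreter. The following Python sketch uses a program encoding of our own invention (pairs ("asgn", var, fun) for assignments and ("atomic", body) for atomic regions); it is an illustration, not the book's semantics.

  def step(prog, state):
      """Perform one transition: return (remaining program, new state).

      prog is a list of statements: ("asgn", x, f) assigns f(state) to x,
      and ("atomic", body) is an atomic region <body>.
      """
      stmt, rest = prog[0], list(prog[1:])
      if stmt[0] == "asgn":
          _, var, fun = stmt
          return rest, dict(state, **{var: fun(state)})
      if stmt[0] == "atomic":
          body, inner = list(stmt[1]), dict(state)
          while body:                          # rule (xviii): the whole body
              body, inner = step(body, inner)  # runs with no interleaving point
          return rest, inner
      raise ValueError(f"unknown statement: {stmt[0]!r}")

  # <x := x + 2; z := 1> executes as a single atomic step:
  prog = [("atomic", [("asgn", "x", lambda s: s["x"] + 2),
                      ("asgn", "z", lambda s: 1)])]
  print(step(prog, {"x": 0, "z": 0}))          # ([], {'x': 2, 'z': 1})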
As in Section 7.2, the following obvious lemma holds.

Lemma 8.1. (Absence of Blocking) Every configuration < S, σ > with S ≢ E and a proper state σ has a successor configuration in the transition relation →.

This leads us, as in Section 7.2, to two semantics of parallel programs: the partial correctness semantics M and the total correctness semantics Mtot, defined as before.

In the informal Section 8.1 we have already indicated that parallel programs with shared variables can exhibit nondeterminism.
Here we state thisfact more precisely.Lemma 8.2. (Bounded Nondeterminism) Let S be a parallel programand σ a proper state. Then Mtot [[S]](σ) is either finite or it contains ⊥.2728 Parallel Programs with Shared VariablesThis lemma stands in sharp contrast to the Determinism Lemma 7.6 fordisjoint parallelism. The proof combines a simple observation on the transition relation → with a fundamental result about trees due to König [1927](see also Knuth [1968,page 381]).Lemma 8.3. (Finiteness) For every parallel program S and proper state σ,the configuration < S, σ > has only finitely many successors in the relation→.Proof.
This lemma stands in sharp contrast to the Determinism Lemma 7.6 for disjoint parallelism. The proof combines a simple observation on the transition relation → with a fundamental result about trees due to König [1927] (see also Knuth [1968, page 381]).

Lemma 8.3. (Finiteness) For every parallel program S and proper state σ, the configuration < S, σ > has only finitely many successors in the relation →.

Proof. The lemma follows immediately from the shape of the transition axioms and rules (i)–(xviii) defining the transition relation. ⊓⊔

Lemma 8.4. (König’s Lemma) Any finitely branching tree is either finite or it has an infinite path.

Proof. Consider an infinite, but finitely branching tree T. We construct an infinite path in T, that is, an infinite sequence

  ξ : N0 N1 N2 …
of nodes so that, for each i ≥ 0, Ni+1 is a child of Ni. We construct ξ by induction on i so that every Ni is the root of an infinite subtree of T.

Induction basis: i = 0. As node N0 we take the root of T.

Induction step: i → i + 1. By the induction hypothesis, Ni is the root of an infinite subtree of T. Since T is finitely branching, there are only finitely many children M1, …, Mn of Ni.
Thus at least one of these children, say Mj, is a root of an infinite subtree of T. Then we choose Mj as node Ni+1. This completes the inductive definition of ξ. ⊓⊔

We now turn to the

Proof of Lemma 8.2. By Lemma 8.3, the set of computation sequences of S starting in σ can be represented as a finitely branching computation tree. By König’s Lemma 8.4, this tree is either finite or it contains an infinite path. Clearly, finiteness of the computation tree implies finiteness of Mtot[[S]](σ), and by definition an infinite path in the tree means that S can diverge from σ, thus yielding ⊥ ∈ Mtot[[S]](σ). ⊓⊔

Atomicity

According to the given transition rules, our semantics of parallelism assumes that Boolean expressions, assignments, skip and atomic regions are evaluated or executed as atomic actions.
But is this assumption guaranteed by conventional computer hardware? The answer is no. Usually, we may assume only that the hardware guarantees the atomicity of a single critical reference, that is, an exclusive read or write access to a single shared variable, either a simple one or a subscripted one.

For illustration, consider the program

  S ≡ [x := y ∥ y := x].

Under the single reference assumption, executing the assignment x := y requires two atomic variable accesses: first y is read and then x is changed (to the value of y). Symmetrically, the same holds for y := x.
Thus executing S in a state with x = 1 and y = 2 can result in the following three final states:

(i) x = y = 2,
(ii) x = y = 1,
(iii) x = 2 and y = 1.

Note that (iii) is obtained if both x and y are first read and then changed. This result is impossible in our semantics of parallelism, where the whole assignment is treated as one atomic action. Thus, in general, our semantics of parallelism does not model the reality of conventional hardware.
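We can check this with the interleavings generator sketched earlier. In the following hypothetical model (ours, not the book's) each critical reference is a separate atomic step: x := y becomes "read y into a private register, then write the register to x". Running all interleavings from x = 1, y = 2 yields exactly the final states (i)–(iii).

  def read(var, reg):
      def act(shared, regs):
          regs[reg] = shared[var]      # one atomic read of a shared variable
      return act

  def write(var, reg):
      def act(shared, regs):
          shared[var] = regs[reg]      # one atomic write of a shared variable
      return act

  C1 = [read("y", "r1"), write("x", "r1")]   # x := y, split into two accesses
  C2 = [read("x", "r2"), write("y", "r2")]   # y := x, split into two accesses

  finals = set()
  for schedule in interleavings([C1, C2]):
      shared, regs = {"x": 1, "y": 2}, {}
      for act in schedule:
          act(shared, regs)
      finals.add((shared["x"], shared["y"]))
  print(sorted(finals))   # [(1, 1), (2, 1), (2, 2)] -- states (ii), (iii), (i)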
Fortunately, this is not such a severe shortcoming as it might seem at first sight. The reason is that by using additional variables every component program can be transformed into an equivalent one where each atomic action contains exactly one shared variable access. For example, the program S above can be transformed into

  S′ ≡ [AC1 := y; x := AC1 ∥ AC2 := x; y := AC2].

The additional variables ACi represent local accumulators as used in conventional computers to execute assignments. Now our operational semantics of S′ mirrors exactly its execution on a conventional computer.
Indeed, for S′, the final states (i)–(iii) above are all possible.

Summarizing, in our semantics of parallelism the grain of atomicity was chosen to yield a simple definition. This definition is not realistic, but for programs all of whose atomic actions contain at most one shared variable access, it models exactly their execution on conventional computer hardware. Moreover, in correctness proofs of parallel programs it is most convenient not to be confined to the grain of atomicity provided by real computers, but to work with virtual atomicity, freely defined by atomic regions ⟨S0⟩.
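Virtual atomicity is easy to see in the same toy model: wrapping each assignment of S in an atomic region, as in [⟨x := y⟩ ∥ ⟨y := x⟩], collapses each assignment into a single step, and the final state (iii) disappears. (Again a sketch of ours, reusing the interleavings generator from above.)

  def assign(var, expr):
      def act(shared, regs):
          shared[var] = expr(shared)   # the whole assignment is one atomic step
      return act

  A1 = [assign("x", lambda s: s["y"])]     # <x := y>
  A2 = [assign("y", lambda s: s["x"])]     # <y := x>

  finals = set()
  for schedule in interleavings([A1, A2]):
      shared = {"x": 1, "y": 2}
      for act in schedule:
          act(shared, {})
      finals.add((shared["x"], shared["y"]))
  print(sorted(finals))   # [(1, 1), (2, 2)] -- outcome (iii) is excluded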
Generally speaking, we can observe the following dichotomy:

• The smaller the grain of atomicity, the more realistic the program.
• The larger the grain of atomicity, the easier the correctness proof of the program.

Further elaboration of this observation can be found at the end of Section 8.4 and in Sections 8.7 and 8.8.

8.4 Verification: Partial Correctness

Component Programs

Partial correctness of component programs is proved by using the rules of the system PW for the partial correctness of while programs plus the following rule dealing with atomic regions:

RULE 26: ATOMIC REGION

      {p} S {q}
    ──────────────
     {p} ⟨S⟩ {q}

This rule is sound for partial (and total) correctness of component programs because atomicity has no influence on the input/output behavior of individual component programs.

Proof outlines for partial correctness of component programs are generated by the formation rules (i)–(vii) given for while programs plus the following one:

(x)
      {p} S* {q}
    ───────────────
     {p} ⟨S*⟩ {q}

where, as usual, S* stands for an annotated version of S.

A proof outline {p} S* {q} for partial correctness is called standard if within S* every normal subprogram T is preceded by exactly one assertion, called pre(T), and there are no further assertions within S*. In particular, there are no assertions within atomic regions. The reason for this omission is that the underlying semantics of parallel programs with shared variables causes atomic regions to be executed as indivisible actions. This is explained more fully when we discuss the notion of interference freedom.
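For instance (an example of ours, not from the text), the assignment axiom and the composition rule yield

  {x = 0} x := x + 2; z := 1 {x = 2 ∧ z = 1},

and rule 26 turns this into a property of the atomic region from Section 8.2:

  {x = 0} ⟨x := x + 2; z := 1⟩ {x = 2 ∧ z = 1}.

Note that in a standard proof outline this triple appears exactly as written, with no assertion between the two assignments inside the atomic region.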
For while programs the connection between standard proof outlines and computations is stated in the Strong Soundness Theorem 3.3. We need here an analogous result. To this end we use the notation at(T, S) introduced in Definition 3.7, but with the understanding that T is a normal subprogram of a component program S.