Verification of Sequential and Concurrent Programs, Apt, Olderog (3rd edition, 2010)
This form of proof presentation is especially important in Chapters 8 and 9 when studying parallel programs.

In Section 3.5 we study completeness of the introduced proof systems, that is, the problem whether all true correctness formulas can be proved in the corresponding proof systems. Then, in Sections 3.6 and 3.7 we study two simple programming constructs that will be useful in the later chapters: the parallel assignment and the failure statement.

Next, in Section 3.8, we introduce auxiliary axioms and proof rules that allow us to organize correctness proofs in a different way. These axioms and proof rules are especially helpful when studying other classes of programs in this book.

In Section 3.9 we prove as a first case study the correctness of the well-known partition program. In Section 3.10 we explain an approach originated by Dijkstra [1976] allowing us to systematically develop programs together with their correctness proofs.
Finally, in Section 3.11, as a case study, we develop a small but not so obvious program for computing the so-called minimum-sum section of an array.

3.1 Syntax

A while program is a string of symbols including the keywords if, then, else, fi, while, do and od, that is generated by the following grammar:

S ::= skip | u := t | S1 ; S2 | if B then S1 else S2 fi | while B do S1 od.

Following the conventions of the previous chapter, the letter u stands for a simple or subscripted variable, t for an expression and B for a Boolean expression. We require that in an assignment u := t the variable u and the expression t are of the same type. Since types are implied by the notational conventions of the previous chapter, we do not declare variables in the programs.
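For illustration only, the grammar above can be mirrored by a small abstract syntax in Python. The class and field names here are our own choices, not notation from the book, and expressions and Boolean expressions are kept as plain strings rather than modelled in full:

```python
from dataclasses import dataclass

class Stmt:
    """Base class for while-program statements S."""

@dataclass
class Skip(Stmt):            # skip
    pass

@dataclass
class Assign(Stmt):          # u := t
    var: str                 # simple or subscripted variable u
    expr: str                # expression t, kept as plain text here

@dataclass
class Seq(Stmt):             # S1 ; S2
    first: Stmt
    second: Stmt

@dataclass
class If(Stmt):              # if B then S1 else S2 fi
    cond: str                # Boolean expression B, kept as plain text
    then_branch: Stmt
    else_branch: Stmt

@dataclass
class While(Stmt):           # while B do S1 od
    cond: str
    body: Stmt
```

For instance, `While("x > 0", Assign("x", "x - 1"))` encodes while x > 0 do x := x − 1 od; each grammar production corresponds to exactly one constructor.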
To avoid repeated qualifications, we assume that all programs considered in this book are syntactically correct. Sometimes instead of programs we talk about statements. As an abbreviation we introduce

if B then S fi ≡ if B then S else skip fi.

As usual, spaces and indentation are used to make programs more readable, but these additions are not part of the formal syntax. Here and elsewhere, programs are denoted by letters R, S, T.

Although we assume that the reader is familiar with while programs as defined above, we recall how they are executed.
The statement skip changes nothing and just terminates. An assignment u := t assigns the value of the expression t to the (possibly subscripted) variable u and then terminates. A sequential composition S1 ; S2 is executed by executing S1 and, when it terminates, executing S2. Since this interpretation of sequential composition is associative, we need not introduce brackets enclosing S1 ; S2. Execution of a conditional statement if B then S1 else S2 fi starts by evaluating the Boolean expression B. If B is true, S1 is executed; otherwise (if B is false), S2 is executed.
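A minimal sketch of this informal execution description as a recursive Python interpreter. The tuple encoding of programs, the use of dictionaries as states, and of Python callables as expressions and Boolean conditions are our own assumptions, not the book's formalism; the loop case follows the repetition rule described in the next paragraph:

```python
def execute(s, state):
    """Run program s (a tuple) from state (a dict) to a final state.

    Mirrors the informal rules; may fail to terminate on a loop."""
    tag = s[0]
    if tag == "skip":                 # changes nothing, just terminates
        return state
    if tag == "assign":               # evaluate t, then update u
        _, var, expr = s
        new = dict(state)
        new[var] = expr(state)
        return new
    if tag == "seq":                  # run S1, then S2 from its final state
        _, s1, s2 = s
        return execute(s2, execute(s1, state))
    if tag == "if":                   # evaluate B, pick one branch
        _, b, s1, s2 = s
        return execute(s1 if b(state) else s2, state)
    if tag == "while":                # test B, run the body, repeat
        _, b, body = s
        while b(state):
            state = execute(body, state)
        return state
    raise ValueError(f"unknown statement {tag!r}")

# example: sum x down to 0 into y
prog = ("seq",
        ("assign", "y", lambda st: 0),
        ("while", lambda st: st["x"] > 0,
         ("seq",
          ("assign", "y", lambda st: st["y"] + st["x"]),
          ("assign", "x", lambda st: st["x"] - 1))))
```

Running `execute(prog, {"x": 3})` yields the final state with y = 6 and x = 0, matching the informal reading of each construct.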
Execution of a loop while B do S od starts with the evaluation of the Boolean expression B. If B is false, the loop terminates immediately; otherwise S is executed. When S terminates, the process is repeated.

Given a while program S, we denote by var(S) the set of all simple and array variables that appear in S and by change(S) the set of all simple and array variables that can be modified by S. Formally,

change(S) = {x | x is a simple variable that appears in an assignment of the form x := t in S}
          ∪ {a | a is an array variable that appears in an assignment of the form a[s1, . . ., sn] := t in S}.

Both notions are also used in later chapters for other classes of programs.

By a subprogram S of a while program R we mean a substring S of R which is also a while program. For example,

S ≡ x := x − 1

is a subprogram of

R ≡ if x = 0 then y := 1 else y := y − x; x := x − 1 fi.

3.2 Semantics

You may be perfectly happy with this intuitive explanation of the meaning of while programs. In fact, for a long time this has been the style of describing what constructs in programming languages denote. However, this style has proved to be error-prone both for implementing programming languages and for writing and reasoning about individual programs.
To eliminate this danger, the informal explanation should be accompanied by (but not substituted for!) a rigorous definition of the semantics. Clearly, such a definition is necessary to achieve the aim of our book: providing rigorous proof methods for program correctness.

So what exactly is the meaning or semantics of a while program S? It is a mapping M[[S]] from proper (initial) states to (final) states, using ⊥ to indicate divergence. The question now arises how to define M[[S]].
There are two main approaches to such definitions: the denotational approach and the operational one.

The idea of the denotational approach is to provide an appropriate semantic domain for M[[S]] and then define M[[S]] by induction on the structure of S, in particular, using fixed point techniques to deal with loops, or more generally, with recursion (Scott and Strachey [1971], Stoy [1977], Gordon [1979], . . .). While this approach works well for deterministic sequential programs, it gets a lot more complicated for nondeterministic, parallel and distributed programs.

That is why we prefer to work with an operational approach proposed by Hennessy and Plotkin [1979] and further developed in Plotkin [1981]. Here, definitions remain very simple for all classes of programs considered in this book. "Operational" means that first a transition relation → between so-called configurations of an abstract machine is specified, and then the semantics M[[S]] is defined with the help of →.
Depending on the definition of a configuration, the transition relation → can model executions at various levels of detail.

We choose here a "high level" view of an execution, where a configuration is simply a pair < S, σ > consisting of a program S and a state σ. Intuitively, a transition

< S, σ > → < R, τ >        (3.1)

means: executing S one step in a proper state σ can lead to state τ with R being the remainder of S still to be executed. To express termination, we allow the empty program E inside configurations: R ≡ E in (3.1) means that S terminates in τ.
We stipulate that E; S and S; E are abbreviations of S. The idea of Hennessy and Plotkin is to specify the transition relation → by induction on the structure of programs using a formal proof system, called here a transition system. It consists of axioms and rules about transitions (3.1). For while programs, we use the following transition axioms and rules, where σ is a proper state:

(i)   < skip, σ > → < E, σ >,

(ii)  < u := t, σ > → < E, σ[u := σ(t)] >,

(iii) < S1, σ > → < S2, τ >
      ─────────────────────────
      < S1; S, σ > → < S2; S, τ >,

(iv)  < if B then S1 else S2 fi, σ > → < S1, σ >  where σ |= B,

(v)   < if B then S1 else S2 fi, σ > → < S2, σ >  where σ |= ¬B,

(vi)  < while B do S od, σ > → < S; while B do S od, σ >  where σ |= B,

(vii) < while B do S od, σ > → < E, σ >  where σ |= ¬B.

A transition < S, σ > → < R, τ > is possible if and only if it can be deduced in the above transition system.
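As a sketch under encoding assumptions of our own (tuples for programs, dictionaries for states, Python callables for expressions and Boolean conditions, and `None` standing for the empty program E), the transition axioms and rules (i)-(vii) can be rendered as a one-step function. Since each configuration then has at most one successor, iterating that function traces out a computation:

```python
def step(s, state):
    """One transition of rules (i)-(vii); returns (program', state'),
    with None standing for the empty program E."""
    tag = s[0]
    if tag == "skip":                  # (i): one step to E
        return None, state
    if tag == "assign":                # (ii): u := t in one step
        _, var, expr = s
        new = dict(state)
        new[var] = expr(state)         # σ[u := σ(t)]
        return None, new
    if tag == "seq":                   # (iii), with E; S treated as S
        _, s1, s2 = s
        rest, new = step(s1, state)
        return (s2 if rest is None else ("seq", rest, s2)), new
    if tag == "if":                    # (iv)/(v): evaluate B, pick branch
        _, b, s1, s2 = s
        return (s1 if b(state) else s2), state
    if tag == "while":                 # (vi)/(vii): unfold or stop
        _, b, body = s
        return (("seq", body, s) if b(state) else None), state
    raise ValueError(f"unknown statement {tag!r}")

def compute(s, state, fuel=10_000):
    """Iterate the unique successor until E; returns the final state,
    or None as a bounded stand-in for divergence (⊥)."""
    while s is not None and fuel > 0:
        s, state = step(s, state)
        fuel -= 1
    return state if s is None else None

# factorial of n: a terminating computation
fact = ("seq",
        ("assign", "f", lambda st: 1),
        ("while", lambda st: st["n"] > 0,
         ("seq",
          ("assign", "f", lambda st: st["f"] * st["n"]),
          ("assign", "n", lambda st: st["n"] - 1))))

# while true do skip od: a diverging computation
loop = ("while", lambda st: True, ("skip",))
```

Note how rule (iii) appears in the `seq` case: a step of S1 becomes a step of S1; S, and the stipulation E; S ≡ S lets the remainder collapse to S2 when S1 finishes.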
(For simplicity we do not use any provability symbol ⊢ here.) Note that the skip statement, assignments and evaluations of Boolean expressions are all executed in one step. This "high level" view abstracts from all details of the evaluation of expressions in the execution of assignments. Consequently, this semantics is a high-level semantics.

Definition 3.1. Let S be a while program and σ a proper state.

(i) A transition sequence of S starting in σ is a finite or infinite sequence of configurations < Si, σi > (i ≥ 0) such that

< S, σ > = < S0, σ0 > → < S1, σ1 > → . . . → < Si, σi > → . . . .

(ii) A computation of S starting in σ is a transition sequence of S starting in σ that cannot be extended.

(iii) A computation of S is terminating in τ (or terminates in τ) if it is finite and its last configuration is of the form < E, τ >.

(iv) A computation of S is diverging (or diverges) if it is infinite. S can diverge from σ if there exists an infinite computation of S starting in σ.

(v) To describe the effect of finite transition sequences we use the transitive, reflexive closure →* of the transition relation →:

< S, σ > →* < R, τ >

holds when there exist configurations < S1, σ1 >, . . ., < Sn, σn > with n ≥ 0 such that

< S, σ > = < S1, σ1 > → . . . → < Sn, σn > = < R, τ >

holds. In the case when n = 0, < S, σ > = < R, τ > holds.  ⊓⊔

We have the following lemmata.

Lemma 3.1. (Determinism) For any while program S and a proper state σ, there is exactly one computation of S starting in σ.

Proof. Any configuration has at most one successor in the transition relation →.  ⊓⊔

This lemma explains the title of this part of the book: deterministic programs. It also shows that for while programs the phrase "S can diverge from σ" may actually be replaced by the more precise statement "S diverges from σ." On the other hand, in subsequent chapters we deal with programs admitting various computations from a given state and for which we retain this definition.
For such programs, this phrase sounds more appropriate.

Lemma 3.2. (Absence of Blocking) If S ≢ E then for any proper state σ there exists a configuration < S1, τ > such that

< S, σ > → < S1, τ >.

Proof. If S ≢ E then any configuration < S, σ > has a successor in the transition relation →.  ⊓⊔

This lemma states that if S has not yet terminated then it can be executed for at least one step. Both lemmata clearly depend on the syntax of the programs considered here. The Determinism Lemma 3.1 will fail to hold for all classes of programs studied from Chapter 7 on, and the Absence of Blocking Lemma 3.2 will not hold for a class of parallel programs studied in Chapter 9 and for distributed programs studied in Chapter 11.

Definition 3.2.