2004. Precise Interprocedural Analysis through Linear Algebra, page 3
3 Affine Relations and Weakest Preconditions

An affine relation over the vector space F^k is an equation

    a0 + a1·x1 + ... + ak·xk = 0

for some ai ∈ F. Geometrically, it can be viewed as a hyperplane in F^k. We observe:

LEMMA 2. Let M denote a set of n×n matrices. Then:

a) For every W ∈ M, the set {a ∈ F^n : Wa = 0} forms a subspace of F^n.

b) As an intersection of vector spaces, the set {a ∈ F^n : Wa = 0 for all W ∈ M} forms a subspace of F^n.

c) For every a ∈ F^n, the following three statements are equivalent:
   – Wa = 0 for all W ∈ M;
   – Wa = 0 for all W ∈ Span(M);
   – Wa = 0 for all W in a basis of Span(M).

Here, Span(M) denotes the vector space generated by the elements in M, i.e., the vector space of all linear combinations of elements in M.

Based on these observations, we can determine the set of all affine relations valid at program point u from a basis of Span{W_r : r ∈ R(u)} and estimate the complexity of the resulting algorithm. (For simplicity we use, here and in the following, the unit cost measure for arithmetic operations.)

THEOREM 1. Assume we are given a basis B for the set Span{W_r : r ∈ R(u)}. Then we have:

a) The affine relation a ∈ F^(k+1) is valid at program point u iff Wa = 0 for all W ∈ B.

b) A basis for the subspace of all affine relations valid at program point u can be computed in time O(k^5).

PROOF. Statement a) follows directly from Lemma 1 and Lemma 2, c). For seeing b), consider that by a) the affine relation a is valid at u iff a is a solution of all the equations

    Σ_j w_ij · a_j = 0

for each matrix W = (w_ij) ∈ B and each i = 0, ..., k. The basis B contains at most O(k^2) matrices, each of which contributes k+1 equations. Thus, we must determine the solution of an equation system with O(k^3) equations over k+1 variables. This can be done, e.g., by Gaussian elimination, in time O(k^5).
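The computation in Theorem 1 b) can be sketched concretely. The following is a minimal illustration, not the authors' implementation: we take F = Q, represent each W ∈ B as a list of rows, stack all rows into one linear system, and compute a basis of its null space by exact Gaussian elimination (all function names are ours).

```python
from fractions import Fraction

def nullspace(rows, n):
    # Basis of {a in Q^n : r . a = 0 for every given row r},
    # by exact Gauss-Jordan elimination over the rationals.
    m = [[Fraction(x) for x in r] for r in rows]
    pivot_cols = []
    r = 0
    for c in range(n):
        p = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if p is None:
            continue  # no pivot in this column: a_c is a free variable
        m[r], m[p] = m[p], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivot_cols.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivot_cols]
    basis = []
    for fc in free:  # one basis vector per free variable
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivot_cols):
            v[pc] = -m[i][fc]
        basis.append(v)
    return basis

def valid_affine_relations(B, k):
    # Theorem 1 b): given a basis B of Span{W_r : r in R(u)},
    # every W in B contributes k+1 linear constraints on a = (a0,...,ak).
    rows = [row for W in B for row in W]
    return nullspace(rows, k + 1)
```

For instance, with B = {I_(k+1)} (only the identity transformer) the only solution is a = 0, i.e., no non-trivial affine relation is valid, as expected.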
So we are left with the task to compute, for every program point u, (a basis of) Span{W_r : r ∈ R(u)}. This subspace of F^((k+1)×(k+1)) can be seen as an abstraction of the set R(u) of program executions reaching u. We are going to compute it by an abstract interpretation of the constraint system for R(u) from Section 2.

Recall that the set of subspaces of a finite-dimensional F-vector space V forms a complete lattice (w.r.t. the ordering set inclusion) where the least element is given by the 0-dimensional vector space consisting of the 0-vector only. The least upper bound of two subspaces V1, V2 is given by

    V1 ⊔ V2 = Span(V1 ∪ V2) = {v1 + v2 : vi ∈ Vi}.

We denote the complete lattice of subspaces of V by Sub(V). The height of Sub(V), i.e., the maximal length of a strictly increasing chain, equals the dimension of V.

We conclude that we can work with Span(W), i.e., the subspace of F^((k+1)×(k+1)) generated by W, without losing interesting information. The dimension of this vector space equals (k+1)^2; hence, as a subspace, Span(W) can be described by a basis of at most (k+1)^2 matrices. Indeed, due to the special form of the matrices W_r — in the first column all but the first entry are zero — Span(W) can have at most dimension k^2 + k + 1.

The desired abstraction of run sets is described by the mapping α : 2^Runs → Sub(F^((k+1)×(k+1))):

    α(R) = Span{W_r : r ∈ R}.

The mapping α is monotonic (w.r.t. the subset orderings on sets of runs and on subspaces). Also, it is not hard to see that it commutes with arbitrary unions. Thus, α(∅) = Span(∅) = {0}, and α({r}) = Span{W_r} for a single run r. By Equation (2) we get for the empty run α({ε}) = Span{I_(k+1)}, because A_ε = I_k and b_ε = 0.

In order to solve the constraint system for the run sets R(u) over the abstract domain Sub(F^((k+1)×(k+1))), we need adequate abstract versions of the operators and constants in this constraint system. In particular, we need an abstract version of the concatenation of run sets. For M1, M2 ⊆ F^((k+1)×(k+1)), we define:

    M1 · M2 = Span{A1 · A2 : Ai ∈ Mi}.

First of all, we observe:

LEMMA 3. For all sets of matrices M1, M2: Span(M1) · Span(M2) = M1 · M2.

PROOF. Observe first that Mi ⊆ Span(Mi) and therefore M1 · M2 ⊆ Span(M1) · Span(M2) by monotonicity of "·". For the reverse inclusion, consider arbitrary elements Bi = Σ_j λ^i_j · A^i_j in Span(Mi) for suitable λ^i_j ∈ F and A^i_j ∈ Mi. Then

    B1 · B2 = Σ_m Σ_j λ^1_m · λ^2_j · A^1_m · A^2_j

by linearity of matrix multiplication. Since each A^1_m · A^2_j is contained in M1 · M2, B1 · B2 is contained in M1 · M2 as well. Therefore, also the inclusion "⊇" follows.

Accordingly, a generating system for M1 · M2 can be computed from generating systems G1, G2 for M1 and M2 by multiplying each matrix in G1 with each matrix in G2.
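This pairwise-product construction is easy to implement on bases. A sketch over F = Q (matrices as nested lists; the names are ours, not from the paper): the pruning step discards products that do not enlarge the span, so bases stay at most (k+1)^2 matrices long.

```python
from fractions import Fraction
from itertools import product

def matmul(A, B):
    # Plain matrix product of two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def basis_of_span(mats):
    # Prune a generating system to a linearly independent basis:
    # flatten each matrix to a vector and keep only those that enlarge
    # the span (incremental Gaussian elimination, exact over Q).
    echelon = []  # pairs (pivot_index, reduced_vector)
    basis = []
    for M in mats:
        v = [Fraction(x) for row in M for x in row]
        for p, row in echelon:
            if v[p] != 0:
                f = v[p] / row[p]
                v = [a - f * b for a, b in zip(v, row)]
        p = next((i for i, x in enumerate(v) if x != 0), None)
        if p is not None:  # M is independent of everything kept so far
            echelon.append((p, v))
            basis.append(M)
    return basis

def compose(G1, G2):
    # Lemma 3: a generating system for M1 . M2 from generating systems
    # G1, G2 -- all pairwise products, pruned to a basis.
    return basis_of_span([matmul(A, B) for A, B in product(G1, G2)])
```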
Secondly, we observe that "·" precisely abstracts the concatenation of run sets:

LEMMA 4. Let R1, R2 ⊆ Runs. Then α(R1 ; R2) = α(R1) · α(R2).

PROOF. Consider the auxiliary map W mapping run sets to sets of matrices by W(R) = {W_r : r ∈ R}. Then we have α(R) = Span(W(R)). We observe:

    W(R1 ; R2) = {A1 · A2 : Ai ∈ W(Ri)}.

This suffices, as the span construction commutes with composition by Lemma 3.

Let us now turn attention to the abstraction of base edges. Consider first a base edge e = (u, v) annotated by an affine assignment, i.e., A(e) ≡ x_j := t where t ≡ t0 + Σ_{i=1..k} ti·xi. Then S(e) = {x_j := t}. By (1) and (2), the corresponding abstract transformer is given by

    α({x_j := t}) = α(S(e)) = Span{W_{x_j := t}},

where W_{x_j := t} is the matrix obtained from the identity matrix I_(k+1) by replacing its (j+1)-th column with the vector (t0, ..., tk)^T. Informally, the weakest precondition for an affine relation a ∈ F^(k+1) is computed by substituting t into x_j of the corresponding affine combination.

Next, consider a base edge e = (u, v) annotated with x_j := ?. In this case, S(e) = {x_j := c : c ∈ F} — implying that we have to abstract an infinite set of runs if the field F is infinite. Clearly, the abstraction of this set again can be finitely represented. We obtain this representation by selecting two different values from F, e.g., 0 and 1. We find:

LEMMA 5. α(S(e)) = α({x_j := c : c ∈ F}) = Span{T0, T1}, where Tc = W_{x_j := c} is the matrix obtained from I_(k+1) by replacing the (j+1)-th column with (c, 0, ..., 0)^T.

PROOF. Only the second equation requires a proof. From Equations (1) and (2) we get α({x_j := c : c ∈ F}) = Span{Tc : c ∈ F}. We verify: Tc = (1 − c)·T0 + c·T1. Hence, Tc ∈ Span{T0, T1} and Span{Tc : c ∈ F} = Span{T0, T1}.
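Both families of matrices are straightforward to construct, and the interpolation identity Tc = (1 − c)·T0 + c·T1 from Lemma 5 can be checked mechanically. A small sketch (function names are ours; column 0 carries the constant coefficient a0, column i the coefficient of x_i):

```python
def wp_matrix(k, j, t):
    # Matrix W_{x_j := t} of the weakest-precondition transformer for the
    # affine assignment x_j := t0 + t1*x1 + ... + tk*xk, t = (t0,...,tk):
    # the identity I_{k+1} with column j replaced by t.
    W = [[1 if i == c else 0 for c in range(k + 1)] for i in range(k + 1)]
    for i in range(k + 1):
        W[i][j] = t[i]
    return W

def t_matrix(k, j, c):
    # T_c = W_{x_j := c}: identity with column j replaced by (c, 0, ..., 0).
    return wp_matrix(k, j, [c] + [0] * k)
```

Applying wp_matrix(k, j, t) to a coefficient vector a performs exactly the substitution of t into x_j described above.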
From the constraint systems S and R for run sets, we now construct the constraint systems Sα and Rα by application of α. The variables in the new constraint systems take subspaces of F^((k+1)×(k+1)) as values. We apply α to the occurring constant sets {ε} and S(e) and replace the concatenation operator ";" with "·":

    Sα(q)    ⊇ Span{I_(k+1)}      for each procedure q
    Sα(v)    ⊇ Sα(u) · α(S(e))    if e = (u, v) is a base edge
    Sα(v)    ⊇ Sα(u) · Sα(p)      if e = (u, v) calls p
    Rα(Main) ⊇ Span{I_(k+1)}
    Rα(p)    ⊇ Rα(u)              if u calls p
    Rα(u)    ⊇ Rα(p) · Sα(u)      if u ∈ N_p

The resulting constraint system can be solved by computing on bases. For estimating the complexity of the resulting algorithm, we assume that the basic statements in the given program have size O(1). Thus, we measure the size n of the given program by |N| + |E|. Note that program nodes typically have bounded outdegree, such that typically |N| + |E| = O(|N|).

THEOREM 2. For every program of size n with k variables the following holds:

a) The values
   Span{W_r : r ∈ S(u)}, u ∈ N,
   Span{W_r : r ∈ S(p)}, p a procedure,
   Span{W_r : r ∈ R(p)}, p a procedure, and
   Span{W_r : r ∈ R(u)}, u ∈ N,
   are the least solutions of the constraint systems Sα and Rα, respectively.

b) These values can be computed in time O(n · k^8).

c) The sets of all valid affine relations at the program points u, u ∈ N, can be computed in time O(n · k^8).

PROOF. Statement a) amounts to saying that the least solution of the constraint systems Sα and Rα is obtained from the least solution of S and R by applying the abstraction α. This follows from the Transfer Lemma known in fixpoint theory (see, e.g., [1, 4]), which can be applied since α commutes with arbitrary unions, the concatenation operator ";" is precisely abstracted by the operator "·" (Lemma 4), and the constant run sets {ε} and S(e) are replaced by their abstractions α({ε}) = Span{I_(k+1)} and α(S(e)), respectively.

For b) we show that the least solution of the abstracted constraint systems can be computed in time O(n · k^8). For that, recall that the lattice of all subspaces of F^((k+1)×(k+1)) has height (k+1)^2. Thus, a worklist-based fixpoint algorithm will evaluate at most O(n · k^2) constraints. Each constraint evaluation consists of multiplying two sets of at most (k+1)^2 matrices. The necessary (k+1)^4 matrix multiplications can be executed in time O(k^7). Finally, we must compute a basis for the span of the resulting (k+1)^4 matrices. By Gaussian elimination, this can be done in time O(k^8).
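The fixpoint computation described in this proof can be sketched as follows. This is a naive round-robin variant rather than a genuine worklist (a real implementation would re-evaluate only the constraints whose arguments changed); we take F = Q, represent each subspace by a basis of matrices, and all names are ours.

```python
from fractions import Fraction

def matmul(A, B):
    # Plain product of square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def reduce_basis(mats):
    # Keep only matrices that enlarge the span: incremental Gaussian
    # elimination on the flattened matrices, exact over Q.
    echelon, basis = [], []
    for M in mats:
        v = [Fraction(x) for row in M for x in row]
        for p, row in echelon:
            if v[p] != 0:
                f = v[p] / row[p]
                v = [a - f * b for a, b in zip(v, row)]
        p = next((i for i, x in enumerate(v) if x != 0), None)
        if p is not None:
            echelon.append((p, v))
            basis.append(M)
    return basis

def solve(variables, constraints):
    # Least solution of constraints X >= F(...) over subspaces given as
    # bases: `constraints` maps a variable to functions that, applied to
    # the current assignment, yield generating matrices. Iterate until no
    # span grows; the height bound (k+1)^2 guarantees termination.
    sol = {x: [] for x in variables}
    changed = True
    while changed:
        changed = False
        for x in variables:
            gen = list(sol[x])
            for f in constraints.get(x, []):
                gen += f(sol)
            new = reduce_basis(gen)
            if len(new) > len(sol[x]):
                sol[x] = new
                changed = True
    return sol
```

Since reduce_basis keeps the current basis as a prefix, a variable's basis only ever grows, and each growth step increases the dimension of the represented subspace by at least one.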