. . . , ph (but not at (p + 1)h as before). Then, in place of (2.13.7), we get

    y_{p+1} = y_p + h \sum_{i=0}^{p} \frac{(-1)^{p-i}}{i!\,(p-i)!}\, y'(ih) \int_p^{p+1} \prod_{j=0,\, j \ne i}^{p} (x - j)\, dx.    (2.13.17)

As before, we can write this in the more familiar form

    y_{n+1} = y_n + h \sum_{i=0}^{p} b_{-i}\, y'_{n-i}    (2.13.18)

where the numbers b_{-i} are now given by

    b_{-i} = \frac{(-1)^i}{(p-i)!\, i!} \int_p^{p+1} \prod_{j=0,\, j \ne p-i}^{p} (x - j)\, dx,    i = 0, 1, . . . , p.    (2.13.19)

We tabulate below these formulas in the cases p = 1, 2, 3, together with their error terms:

    y_{n+1} = y_n + \frac{h}{2}\,(3y'_n - y'_{n-1}) + \frac{5h^3}{12}\, y'''    (2.13.20)

    y_{n+1} = y_n + \frac{h}{12}\,(23y'_n - 16y'_{n-1} + 5y'_{n-2}) + \frac{3h^4}{8}\, y^{(iv)}    (2.13.21)

    y_{n+1} = y_n + \frac{h}{24}\,(55y'_n - 59y'_{n-1} + 37y'_{n-2} - 9y'_{n-3}) + \frac{251h^5}{720}\, y^{(v)}    (2.13.22)

Notice, for example, that the explicit fourth-order formula (2.13.22) has about 13 times as large an error term as the implicit fourth-order formula (2.13.15).
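The coefficients b_{-i} can be generated mechanically from (2.13.19) for any p. The following Python sketch (our own illustration, not part of the notes) does this with exact rational arithmetic: it expands the product polynomial, integrates term by term from p to p + 1, and applies the sign and factorials.

```python
from fractions import Fraction
from math import factorial

def adams_bashforth_coeffs(p):
    """Exact values of the b_{-i} of (2.13.19), for i = 0, 1, ..., p."""
    bs = []
    for i in range(p + 1):
        # poly holds the coefficients, constant term first, of the
        # product of (x - j) over j = 0..p with j != p - i.
        poly = [Fraction(1)]
        for j in range(p + 1):
            if j == p - i:
                continue
            new = [Fraction(0)] * (len(poly) + 1)
            for k, c in enumerate(poly):
                new[k] += -j * c     # the -j part of (x - j)
                new[k + 1] += c      # the x part of (x - j)
            poly = new
        # Integrate the product polynomial from p to p + 1, term by term.
        integral = sum(c * (Fraction(p + 1)**(k + 1) - Fraction(p)**(k + 1)) / (k + 1)
                       for k, c in enumerate(poly))
        bs.append((-1)**i * integral / (factorial(p - i) * factorial(i)))
    return bs

print(adams_bashforth_coeffs(1))  # Fractions 3/2, -1/2: formula (2.13.20)
print(adams_bashforth_coeffs(3))  # 55/24, -59/24, 37/24, -9/24: formula (2.13.22)
```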
This factor-of-13 disparity is typical for matched pairs of predictor-corrector formulas. As we have noted previously, a single application of a corrector formula reduces the error by about a factor of h\,\partial f/\partial y, so if we keep h small enough that h\,\partial f/\partial y is less than 1/13 or thereabouts, then a single application of the corrector formula will produce an estimate of the next value of y with full attainable accuracy.

We are now fully equipped with Adams formulas for prediction and correction, in matched pairs, of whatever accuracy is needed, all of them stable.
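As a concrete illustration, here is a minimal Python sketch (ours) of one propagation step with the fourth-order pair: predict with the explicit formula (2.13.22), then apply the corrector once. We take (2.13.15) to be the standard fourth-order implicit Adams formula y_{n+1} = y_n + (h/24)(9y'_{n+1} + 19y'_n - 5y'_{n-1} + y'_{n-2}), which is what the error comparison above assumes; the function name pc_step and its calling convention are our own.

```python
def pc_step(f, x_n, ys, fs, h):
    """One step y_n -> y_{n+1} of the fourth-order predictor-corrector pair.

    ys and fs are lists holding the last four values y_{n-3}..y_n and
    f_{n-3}..f_n; a single corrector pass suffices when h*|df/dy| is small.
    """
    # Predict with the explicit formula (2.13.22), dropping the error term.
    y_pred = ys[-1] + h / 24 * (55 * fs[-1] - 59 * fs[-2] + 37 * fs[-3] - 9 * fs[-4])
    # Correct once with the implicit formula (2.13.15), with y'_{n+1}
    # evaluated at the predicted value.
    y_corr = ys[-1] + h / 24 * (9 * f(x_n + h, y_pred)
                                + 19 * fs[-1] - 5 * fs[-2] + fs[-3])
    return y_corr
```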
The use of these pairs requires special starting formulas, since multistep methods cannot get themselves started or restarted without assistance. Once again the Lagrange interpolation formula comes to the rescue.

This time we begin with a slight variation of (2.13.6),

    y(mh) = y(0) + \int_0^{mh} y'(t)\, dt.    (2.13.23)

Next, replace y'(t) by the Lagrange polynomial that agrees with it at 0, h, 2h, . . . , ph (for p ≥ m). We then obtain

    y_m = y_0 + h \sum_{i=0}^{p} \frac{(-1)^{p-i}}{i!\,(p-i)!}\, y'_i \int_0^m \prod_{j=0,\, j \ne i}^{p} (t - j)\, dt,    m = 1, 2, . . . , p.    (2.13.24)
We can rewrite these equations in the form

    y_m = y_0 + h \sum_{j=0}^{p} \lambda_j\, y'_j,    m = 1, 2, . . . , p    (2.13.25)

where the coefficients \lambda_i are given by

    \lambda_i = \frac{(-1)^{p-i}}{i!\,(p-i)!} \int_0^m \prod_{j=0,\, j \ne i}^{p} (t - j)\, dt,    i = 0, . . . , p.    (2.13.26)

Of course, when these formulas are used on a differential equation y' = f(x, y), each of the values of y' on the right side of (2.13.25) is replaced by an f(x, y) value. Therefore equations (2.13.25) are a set of p simultaneous equations in the p unknowns y_1, y_2, . . . , y_p (y_0 is, of course, known).
We can solve them with an iteration in which we first guess all p of the unknowns, and then refine the guesses all at once by using (2.13.25) to give us the new guesses from the old, until sufficient convergence has occurred.

For example, if we take p = 3, then the starting formulas are

    y_1 = y_0 + \frac{h}{24}\,(9y'_0 + 19y'_1 - 5y'_2 + y'_3) - \frac{19h^5}{720}\, y^{(v)}

    y_2 = y_0 + \frac{h}{3}\,(y'_0 + 4y'_1 + y'_2) - \frac{h^5}{90}\, y^{(v)}    (2.13.27)

    y_3 = y_0 + \frac{3h}{8}\,(y'_0 + 3y'_1 + 3y'_2 + y'_3) - \frac{3h^5}{80}\, y^{(v)}.

The philosophy of using a matched pair of Adams formulas for propagation of the solution, together with the starter formulas shown above, has the potential of being implemented in a computer program in which the user could specify the desired precision by giving the value of p. The program could then calculate from the formulas above the coefficients in the predictor-corrector pair and the starting method, and then proceed with the integration. This would make a very versatile code.
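For the p = 3 case, that starting iteration might be sketched in Python as follows (our own illustration; a fixed number of sweeps stands in for a proper convergence test, and each y'_i in (2.13.27) is replaced by f(x_i, y_i) with the error terms dropped):

```python
def start_values(f, x0, y0, h, sweeps=10):
    """Starting values y_1, y_2, y_3 for the fourth-order Adams pair.

    Guesses all three unknowns, then refines them simultaneously from
    the starter formulas (2.13.27), as described in the text.
    """
    xs = [x0 + m * h for m in range(4)]
    ys = [y0] * 4                                # crude initial guesses
    for _ in range(sweeps):
        fv = [f(x, y) for x, y in zip(xs, ys)]   # y'_i -> f(x_i, y_i)
        ys[1] = y0 + h / 24 * (9 * fv[0] + 19 * fv[1] - 5 * fv[2] + fv[3])
        ys[2] = y0 + h / 3 * (fv[0] + 4 * fv[1] + fv[2])
        ys[3] = y0 + 3 * h / 8 * (fv[0] + 3 * fv[1] + 3 * fv[2] + fv[3])
    return ys[1], ys[2], ys[3]
```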
Chapter 3: Numerical linear algebra

3.1 Vector spaces and linear mappings

In this chapter we will study numerical methods for the solution of problems in linear algebra, that is to say, of problems that involve matrices or systems of linear equations. Among these we mention the following:

(a) Given an n × n matrix, calculate its determinant.

(b) Given m linear algebraic equations in n unknowns, find the most general solution of the system, or discover that it has none.

(c) Invert an n × n matrix, if possible.

(d) Find the eigenvalues and eigenvectors of an n × n matrix.

As usual, we will be very much concerned with the development of efficient software that will accomplish the above purposes.

We assume that the reader is familiar with the basic constructs of linear algebra: vector space, linear dependence and independence of vectors, Euclidean n-dimensional space, spanning sets of vectors, basis vectors.
We will quickly review some additional concepts that will be helpful in our work. For a more complete discussion of linear algebra, see any of the references cited at the end of this chapter.

And now, to business. The first major concept we need is that of a linear mapping. Let V and W be two vector spaces over the real numbers (we'll stick to the real numbers unless otherwise specified).
We say that T is a linear mapping from V to W if T associates with every vector v of V a vector Tv of W (so T is a mapping) in such a way that

    T(\alpha v' + \beta v'') = \alpha Tv' + \beta Tv''    (3.1.1)

for all vectors v', v'' of V and real numbers (or scalars) \alpha and \beta (i.e., T is linear).
Notice that the “+” signs are different on the two sides of (3.1.1). On the left we add two vectors of V; on the right we add two vectors of W.

Here are a few examples of linear mappings.

First, let V and W both be the same, namely the space of all polynomials of degree at most n. Consider the mapping that associates with a polynomial f of V its derivative Tf = f' in W. It's easy to check that this mapping is linear.

Second, suppose V is Euclidean two-dimensional space (the plane) and W is Euclidean three-dimensional space. Let T be the mapping that carries the vector (x, y) of V to the vector (3x + 2y, x − y, 4x + 5y) of W. For instance, T(2, −1) = (4, 3, 3). Then T is a linear mapping.
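This second example is a matrix in disguise. A short Python check (our own, using NumPy) makes both the computation and the linearity condition (3.1.1) concrete:

```python
import numpy as np

# The plane-to-space example above, written as a matrix: T(x, y) = A @ [x, y].
A = np.array([[3,  2],
              [1, -1],
              [4,  5]])

print(A @ np.array([2, -1]))   # [4 3 3], i.e. T(2, -1) = (4, 3, 3)

# Spot-check the linearity condition (3.1.1) on one choice of vectors and scalars.
v1, v2 = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
a, b = 2.0, -1.5
print(np.allclose(A @ (a * v1 + b * v2), a * (A @ v1) + b * (A @ v2)))   # True
```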
More generally, let A be a given m × n matrix of real numbers, let V be Euclidean n-dimensional space and let W be Euclidean m-space. The mapping T that carries a vector x of V into Ax of W is a linear mapping. That is, any matrix generates a linear mapping between two appropriately chosen (to match the dimensions of the matrix) vector spaces.

The importance of studying linear mappings in general, and not just matrices, comes from the fact that a particular mapping can be represented by many different matrices. Further, it often happens that problems in linear algebra that seem to be questions about matrices are in fact questions about linear mappings. This means that we can change to a simpler matrix that represents the same linear mapping before answering the question, secure in the knowledge that the answer will be the same.
For example, if we are given a square matrix and we want its determinant, we seem to confront a problem about matrices. In fact, any of the matrices that represent the same mapping will have the same determinant as the given one, and making this kind of observation and identifying simple representatives of the class of relevant matrices can be quite helpful.
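For instance, two matrices related by a change of basis, B = P^{-1}AP, represent the same mapping, and a quick numerical experiment (our own, using NumPy) shows that their determinants agree:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))   # a matrix representing some mapping
P = rng.standard_normal((4, 4))   # a change of basis (invertible with probability 1)
B = np.linalg.inv(P) @ A @ P      # the same mapping expressed in the new basis

print(np.linalg.det(A), np.linalg.det(B))   # equal, up to roundoff
```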
To get back to the matter at hand, suppose the vector spaces V and W are of dimensions m and n, respectively. Then we can choose in V a basis of m vectors, say e_1, e_2, e_3, . . . , e_m, and in W there is a basis of n vectors f_1, f_2, . . . , f_n. Let T be a linear mapping from V to W. Then we have the situation that is sketched in figure 3.1 below.

We claim now that the action of T on every vector of V is known if we know only its effect on the m basis vectors of V. Indeed, suppose we know T e_1, T e_2, . . . , T e_m.
Then let x be any vector in V. Express x in terms of the basis of V,

    x = \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_m e_m.    (3.1.2)

Now apply T to both sides and use the linearity of T (extended, by induction, to linear combinations of more than two vectors) to obtain

    Tx = \alpha_1 (T e_1) + \alpha_2 (T e_2) + \cdots + \alpha_m (T e_m).    (3.1.3)

The right side is known, and the claim is established.
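Applied to the earlier plane-to-space example, (3.1.3) says we can recover Tx for any x from the two vectors T e_1 and T e_2 alone. A small Python illustration of ours:

```python
import numpy as np

# Images of the standard basis vectors under the map T above.
Te = [np.array([3, 1, 4]),    # T e_1 = T(1, 0)
      np.array([2, -1, 5])]   # T e_2 = T(0, 1)

# For x = 2 e_1 - 1 e_2, formula (3.1.3) reassembles T x from those images.
alpha = [2, -1]
Tx = sum(a * te for a, te in zip(alpha, Te))
print(Tx)   # [4 3 3], agreeing with T(2, -1) computed directly
```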
So, to describe a linear mapping, “all we have to do” is describe its action on a set of basis vectors of V. If e_i is one of these, then T e_i is a vector in W. As such, T e_i can be written as a linear combination of the basis vectors of W. The coefficients of this linear combination will evidently depend on i, so we write

    T e_i = \sum_{j=1}^{n} t_{ji}\, f_j,    i = 1, . . . , m.    (3.1.4)

[Figure 3.1: The action of a linear mapping: T carries V, with basis e_1, . . . , e_m, to W, with basis f_1, . . . , f_n]

Now the mn numbers t_{ji}, i = 1, . . . , m, j = 1, . . . , n, together with the given sets of basis vectors for V and W, are enough to describe the linear operator T completely. Indeed, if we know all of those numbers, then by (3.1.4) we know what T does to every basis vector of V, and then by (3.1.3) we know the action of T on every vector of V.

To summarize, an n × m matrix t_{ji} represents a linear mapping T from a vector space V with a distinguished basis E = {e_1, e_2, . . .