Shampine, Allen, Pruess - Fundamentals of Numerical Computing
. . . , N. If we substitute the expansion into the definition of the residual, this becomes

    Σ_{j=1}^{N} (φ_i, ∇²φ_j) a_j + (φ_i, 1) = 0,    i = 1, . . . , N.

This is a system of linear equations Ca = b for the coefficients a_i, where

    C_{ij} = (φ_i, ∇²φ_j)

and b_i = -(φ_i, 1). When there is more than one independent variable, it is not so easy to piece together polynomial approximations over elements so that they connect smoothly, and it is usual to reduce the order of differentiation in the inner products C_{ij} from two to one by means of integration by parts (Green's theorem). With the global polynomial approximation of this example, there is no difficulty forming directly the Laplacian of the basis functions.

In the classic Galerkin method the integrals of the inner products are computed exactly. This is possible for our simple example, but in practice integrals are approximated numerically.
A common procedure for general functions is based on Gaussian quadrature. As we shall see in Chapter 5, a Gaussian quadrature formula of M points consists of a set of nodes η_i and weights A_i such that

    ∫_{-1}^{1} u(x) dx ≈ Σ_{i=1}^{M} A_i u(η_i)

provides the best possible approximation in a certain sense. In particular, the formula is exact for any polynomial u(x) of degree no more than 2M - 1. In Section 5.6 we discuss how such a formula can be used to obtain an approximation to the integral of a function of two variables:

    ∫∫ u(x, y) dx dy ≈ Σ_{i=1}^{M} Σ_{j=1}^{M} A_i A_j u(η_i, η_j).

Generally M is quite small, but for our example we take it to be 5.
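For concreteness, here is a minimal Python sketch (not from the text) of such a tensor-product rule with M = 5. It assumes the duct cross section is the square -1 ≤ x, y ≤ 1 and uses a hypothetical basis function φ_1(x, y) = (1 - x²)(1 - y²), chosen only because it vanishes on the boundary; the actual basis functions of the example are not reproduced here.

```python
import numpy as np

# Nodes eta_i and weights A_i of the 5-point Gauss-Legendre rule on [-1, 1].
eta, A = np.polynomial.legendre.leggauss(5)

def gauss2d(u):
    """Tensor-product approximation to the double integral of u over the square."""
    X, Y = np.meshgrid(eta, eta)      # all node pairs (eta_i, eta_j)
    W = np.outer(A, A)                # corresponding weight products A_i * A_j
    return np.sum(W * u(X, Y))

# Hypothetical basis function (for illustration only); it vanishes on the boundary.
def phi1(x, y):
    return (1.0 - x**2) * (1.0 - y**2)

# b_1 = -(phi_1, 1); the rule is exact here because the integrand is a polynomial
# of low degree in each variable.
b1 = -gauss2d(phi1)
print(b1)                             # -16/9 = -1.7777...
```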
The basis functions we use are polynomials, and inspection of the integrands for the cases N = 1, 2, 3 that we solve numerically shows that with M = 5 the integrals we need are computed exactly.

Often when solving PDEs, quantities computed from the solution are at least as interesting as the solution itself. Such a quantity for this example is the nondimensional flow rate, obtained by integrating w(x, y) over the cross section of the duct. The Galerkin approximations computed in the manner described gave results in good agreement with the series solution for w(x, y).

Figure 2.2  The flow rate w(x, 0) approximated by Galerkin’s method (3 terms). The “exact” values here were obtained from a series solution for w(x, y).
Figure 2.2 shows w_3(x, 0), which exhibits the kind of velocity distribution across the duct that we expect on physical grounds. These are exceptionally good approximations because we have incorporated so much information about the solution into the basis functions. It should be appreciated that if the solution did not have the symmetry properties of the basis functions, we could not expect such a good answer. Moreover, consistent results found on adding terms to the expansion might just be consistently poor results because it is not possible to represent the solution well with the basis functions chosen. Because finite element methods approximate the solution locally, they are better suited for a general problem.

Finite differences represent another approach to solving PDEs that leads to large systems of linear equations.
In this approach w(x, y) is approximated only on a mesh. For example, if we choose an integer N and define a mesh spacing h = 1/N, we might approximate w(ih, jh) by w_{i,j} for i, j = -N, -N + 1, . . . , N - 1, N. To satisfy the boundary conditions we take w_{i,j} = 0 when i = ±N or j = ±N. Taylor series expansion of a smooth solution w(x, y) shows that

    [w((i+1)h, jh) - 2w(ih, jh) + w((i-1)h, jh)] / h² = ∂²w/∂x² (ih, jh) + τ.

Further, there is a constant γ such that |τ| < γh² for all i, j and all sufficiently small h. We say that this is a difference approximation to the second derivative that is of order two. All we wish to do here is make plausible that the expression on the left imitates the partial derivative for small h.
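As a quick illustration (not from the text), the same centered difference applied to a one-dimensional test function shows the error falling by a factor of about four each time h is halved, consistent with an order two approximation:

```python
import numpy as np

# Centered difference approximation to the second derivative of a smooth test
# function; the error should shrink roughly like h**2 (order two).
x = 1.0
exact = -np.sin(x)                    # second derivative of sin at x
for h in [0.1, 0.05, 0.025, 0.0125]:
    approx = (np.sin(x + h) - 2.0*np.sin(x) + np.sin(x - h)) / h**2
    print(h, abs(approx - exact))     # error drops by about 4 when h is halved
```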
This expression and the differential equation suggest that we define the approximation w_{i,j} at mesh points (ih, jh) interior to the square by the equation

    [w_{i+1,j} - 2w_{i,j} + w_{i-1,j}] / h² + [w_{i,j+1} - 2w_{i,j} + w_{i,j-1}] / h² = -1.

The set of (2N - 1)² equations can be rewritten as

    4w_{i,j} - w_{i+1,j} - w_{i-1,j} - w_{i,j+1} - w_{i,j-1} = h².

For this discrete approximation to the PDE to be a reasonable one, the mesh spacing h must be small, but then we must have a large set of linear equations.
It is of the utmostpractical importance that the equations are sparse. Indeed, there are only five nonzerocoefficients in each row of the matrix. Historically such equations were solved by iterative methods. The methods used are very simple and self-correcting, so that a mistakedoes not prevent computing the correct answer. Both characteristics are very importantto hand computation. Also important is that the methods require storage only for thenonzero coefficients, the solution itself, and perhaps another vector of the same size asthe solution. For a problem as simple as this example, it is not even necessary to storethe coefficients of the matrix.
Iterative methods take advantage of the fact that we do not require an accurate solution. After all, w_{i,j} is only an approximation to w(ih, jh) of modest accuracy, so there is little point to computing it very accurately.

The classic iterative procedures for solving equations like those arising in this example are most naturally described in terms of the equation itself. First let us rewrite the equation as

    w_{i,j} = [w_{i+1,j} + w_{i-1,j} + w_{i,j+1} + w_{i,j-1} + h²] / 4.

The Jacobi iteration improves an approximation w^(k) by computing

    w^(k+1)_{i,j} = [w^(k)_{i+1,j} + w^(k)_{i-1,j} + w^(k)_{i,j+1} + w^(k)_{i,j-1} + h²] / 4

for all i, j. This amounts to saying that we define w^(k+1)_{i,j} so as to make the residual in the corresponding equation equal to zero. This is a very simple and inexpensive iteration that requires only storage for the current and next approximations. A refinement suggests itself: instead of continuing to use w^(k)_{i,j}, would it not be better to use w^(k+1)_{i,j} for the rest of the computation? Besides, if we do this, we would halve the storage required.
This is called the Gauss-Seidel iteration. There is a complication with this method. So far we have not written out the matrix explicitly, so we have not specified the order of the components in the vector w^(k). If we are to start using w^(k+1)_{i,j} as soon as it is computed, the order in which components are improved matters. A question that arises in the use of any iterative method is when to quit. A natural way is to measure the residual of the current solution and quit when it is less than a tolerance.

For the example we used the Gauss-Seidel iteration and improved the unknowns from left to right, bottom to top, that is, for each i = -N + 1, . . . , N - 1, we improved w_{i,j} for j = -N + 1, . . . , N - 1. We took N = 5, which led to a system of 81 equations for unknowns at points interior to the region. It was convenient to start with w^(0)_{i,j} = 0 for all i, j. With a tolerance of 5 × 10^-4, convergence was achieved in 89 iterations. The behavior of the residuals is displayed in Figure 2.3.

Figure 2.3  Residual versus iteration number for Gauss-Seidel iteration.
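A minimal Python sketch of this computation (not the authors' code) is given below. It assumes, as reconstructed above, that the PDE is ∇²w = -1 on the square -1 ≤ x, y ≤ 1 with w = 0 on the boundary, and it measures the residual of the difference equation in the form with the 1 present, using the maximum over the interior points as one natural choice of norm.

```python
import numpy as np

N = 5                       # mesh parameter: h = 1/N, (2N-1)**2 = 81 interior unknowns
h = 1.0 / N
tol = 5e-4                  # tolerance on the residual

# Array index 0..2N corresponds to i = -N..N; boundary values remain zero.
w = np.zeros((2*N + 1, 2*N + 1))

def max_residual(w):
    # Residual of the difference equation in the form with the 1 present.
    r = 0.0
    for i in range(1, 2*N):
        for j in range(1, 2*N):
            lap = (w[i+1, j] + w[i-1, j] + w[i, j+1] + w[i, j-1] - 4*w[i, j]) / h**2
            r = max(r, abs(lap + 1.0))
    return r

iterations = 0
while max_residual(w) > tol:
    # One Gauss-Seidel sweep: improved values are used as soon as they are available.
    for i in range(1, 2*N):         # left to right
        for j in range(1, 2*N):     # bottom to top
            w[i, j] = (w[i+1, j] + w[i-1, j] + w[i, j+1] + w[i, j-1] + h**2) / 4.0
    iterations += 1

print(iterations, w[N, N])          # w[N, N] approximates w(0, 0)
```

Note that, as remarked above, the matrix is never formed or stored; only the current approximation is kept.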
Except for the first few iterates, the residual was reduced by a factor close to 0.9 at each iteration. It might be remarked that the residual measured was that of the finite difference equation in the form with the 1 present because it provides a natural measure of scale. The approximation to w(0, 0) found in this way was 0.2923. A plot of the approximations to w(ih, 0) was in good agreement with that of Figure 2.2. We have had to solve a very much larger system of equations to get an approximation comparable to that of the global Galerkin method. On the other hand, the finite difference approximation does not depend on the special form of the solution. Another distinction appears when we ask about the flow rate, for we now have to compute the integral of a function defined only on a mesh.
How this might be done is discussed in Chapter 5, but for the sake of simplicity, we do not go into the matter here.

Let us now consider the Jacobi and Gauss-Seidel iterations for a general problem Ax = b and prove a simple result about convergence. One approach to iterative methods supposes that it is relatively easy to solve systems of the form My = c for a matrix M that is "close" to A. The idea is to rewrite the given problem as

    Mx = (M - A)x + b

and calculate a sequence of approximate solutions x^(k) by

    Mx^(k+1) = (M - A)x^(k) + b,    k = 0, 1, . . . .

The Jacobi iteration arises in this form when we take M to be the diagonal of A.
Similarly, the Gauss-Seidel iteration arises when we take M to be the diagonal of A and all the elements below the diagonal. Clearly it is very easy to solve linear systems involving M in either form. We have also seen another example in this chapter. Generally the factors L and U resulting from Gaussian elimination applied to A are very accurate in the sense that M = LU is very nearly A. By virtue of having a factorization of this M, it is easy to compute the iterates by forward and back substitution in the usual fashion. With this choice of M, the iteration is the iterative refinement discussed in Section 2.3.2.

Passing to the limit on both sides of the equation defining the iteration, we see that if the approximations converge at all, they must converge to x.
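As an illustration (not from the text), here is a small Python sketch of the general splitting iteration Mx^(k+1) = (M - A)x^(k) + b with both choices of M on a made-up test problem. A serious implementation would exploit the diagonal or triangular structure of M instead of calling a general solver.

```python
import numpy as np

def splitting_iteration(A, b, M, tol=1e-8, max_iter=500):
    """Iterate M x_{k+1} = (M - A) x_k + b until the residual b - A x is small."""
    x = np.zeros_like(b)
    for k in range(max_iter):
        x = np.linalg.solve(M, (M - A) @ x + b)
        if np.linalg.norm(b - A @ x, np.inf) < tol:
            return x, k + 1
    return x, max_iter

# Small diagonally dominant test problem (made up for illustration).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

M_jacobi = np.diag(np.diag(A))        # Jacobi: M is the diagonal of A
M_gauss_seidel = np.tril(A)           # Gauss-Seidel: diagonal plus the elements below it
for M in (M_jacobi, M_gauss_seidel):
    x, k = splitting_iteration(A, b, M)
    print(k, x)                       # both converge to the solution of Ax = b
```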