Such upwind differencing biases the approximation toward the passing front, reducing the tendency for unwanted oscillation. Upwind differencing is but the simplest of several approaches to dealing with sharp fronts and discontinuities. Of course, if the initial function is sufficiently smooth, such measures may not be required.

Example 11.3 Centered Versus Upwind Differencing for Sharp Front. Consider the one-way wave equation
$$u_t = -u_x$$
with initial function $u_0$ taken to be the step function defined by
$$u_0(x) = \begin{cases} 1 & \text{if } x \le 0, \\ 0 & \text{if } x > 0. \end{cases}$$
The discontinuity in $u_0$, a jump at $x = 0$, will be propagated to the right with time.
From the viewpoint of a particular point along the spatial axis, the solution will be 0 until the step function passes by, after which the solution will be 1 (i.e., for a fixed $x$, the solution will be a step function in $t$). We should expect finite difference methods to have some trouble following this sharp front.

Fig. 11.4 shows the computed solution as a function of $t$ at the point $x = 1$ (i.e., it is a slice of the solution surface parallel to the $t$ axis at the point $x = 1$).
The solution on the left was computed using centered spatial differencing of the form
$$u_x \approx \frac{u(t, x + \Delta x) - u(t, x - \Delta x)}{2\,\Delta x},$$
whereas the solution on the right was computed using upwind spatial differencing of the form
$$u_x \approx \frac{u(t, x) - u(t, x - \Delta x)}{\Delta x}.$$
The centered difference formula is second-order accurate and gives a closer approximation of the sharp front. It overshoots, however, and then goes into an oscillation that is not present in the true solution, which is the step function plotted on the same graph for comparison. The one-sided difference formula is only first-order accurate and captures the sharp front less well, but it is free of the undesirable oscillation exhibited by the centered method, and in this sense it may be a better solution for many purposes.
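For readers who want to reproduce this experiment, here is a minimal Python sketch: it semidiscretizes the one-way wave equation with each spatial difference and integrates the resulting ODE system in time with classical fourth-order Runge-Kutta. The book does not specify its grid or time integrator, so the domain, step sizes, and helper names below are illustrative assumptions.

```python
import numpy as np

# Semidiscretize u_t = -u_x on a uniform grid; march in time with RK4.
# Grid, Courant number, and function names are illustrative assumptions.

def centered_rhs(u, dx):
    # second-order centered difference for u_x at interior points
    dudt = np.zeros_like(u)
    dudt[1:-1] = -(u[2:] - u[:-2]) / (2 * dx)
    return dudt

def upwind_rhs(u, dx):
    # first-order upwind difference; the front arrives from the left
    dudt = np.zeros_like(u)
    dudt[1:] = -(u[1:] - u[:-1]) / dx
    return dudt

def rk4_march(rhs, u, dx, dt, nsteps):
    # classical fourth-order Runge-Kutta time stepping
    for _ in range(nsteps):
        k1 = rhs(u, dx)
        k2 = rhs(u + 0.5 * dt * k1, dx)
        k3 = rhs(u + 0.5 * dt * k2, dx)
        k4 = rhs(u + dt * k3, dx)
        u = u + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        u[0] = 1.0        # inflow boundary: the step has passed x = -1
    return u

x = np.linspace(-1.0, 3.0, 401)      # grid containing the point x = 1
dx = x[1] - x[0]
dt = 0.5 * dx                        # Courant number 0.5, stable for both
u0 = np.where(x <= 0.0, 1.0, 0.0)    # step-function initial condition
j = np.argmin(np.abs(x - 1.0))       # index of the point x = 1

for name, rhs in [("centered", centered_rhs), ("upwind", upwind_rhs)]:
    u = rk4_march(rhs, u0.copy(), dx, dt, nsteps=round(2.0 / dt))
    print(f"{name:8s} u(t=2, x=1) = {u[j]:.4f}")   # true value is 1
```

Recording u[j] after every step, rather than only at the final time, yields the two time slices plotted in Fig. 11.4: oscillatory for the centered scheme, smeared but monotone for the upwind scheme.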
Notice that the one-sided difference uses the adjacent point on the side from which the front is coming. If the front were coming from the opposite direction, we would use the adjacent point on that side (i.e., $x + \Delta x$).

Figure 11.4: Approximations to step function solution of one-way wave equation using centered (left) and upwind (right) differencing.

In marked contrast to the behavior just described, parabolic equations are dissipative. The solution tends toward a steady state with time, eventually "forgetting" the initial
conditions. Any lack of smoothness in the initial conditions, even possible inconsistency in the initial and boundary conditions, is damped out. This behavior makes parabolic equations very "forgiving" and relatively easy to solve numerically, as numerical errors tend to diminish with time (provided a stable method is used). Thus, centered differences tend to work well for parabolic problems, and high-accuracy solutions are relatively easy to obtain.

11.3 Time-Independent Problems

We now consider time-independent, elliptic PDEs in two space dimensions, such as the Helmholtz equation
$$u_{xx} + u_{yy} + \lambda u = f(x, y).$$
Important special cases of this equation include the Poisson equation ($\lambda = 0$) and the Laplace equation ($\lambda = 0$ and $f = 0$).
For simplicity, we consider this equation on the unit square, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. There are numerous possibilities for the boundary conditions that must be specified along each side of the square:

• Dirichlet boundary conditions, sometimes called essential boundary conditions, in which the solution $u$ is specified

• Neumann boundary conditions, sometimes called natural boundary conditions, in which one of the derivatives $u_x$ or $u_y$ is specified

• Mixed boundary conditions, in which a combination of solution values and derivative values is specified

11.3.1 Finite Difference Methods

Finite difference methods for elliptic boundary value problems proceed as we have seen before: we define a discrete mesh of points within the domain of the equation, replace the derivatives in the PDE by finite differences, and seek a numerical solution at each of the mesh points.
Unlike time-dependent problems, however, we do not produce the solution gradually by marching forward in time, but rather determine the approximate solution at all of the mesh points simultaneously by solving a single system of algebraic equations.

Example 11.4 Laplace Equation. We illustrate this procedure with a simple example. Consider the Laplace equation on the unit square
$$u_{xx} + u_{yy} = 0,$$
with boundary conditions as shown on the left in Fig. 11.5.
We define a discrete mesh in the domain, including boundaries, as shown on the right in Fig. 11.5. The interior grid points where we will compute the approximate solution are given by
$$(x_i, y_j) = (ih, jh), \quad i, j = 1, \ldots, n,$$
where in our example $n = 2$ and $h = 1/(n + 1) = 1/3$.
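As a trivial illustration of this indexing (variable names are mine, not the book's), the interior points for $n = 2$ can be generated as follows; the row-wise ordering shown here is also the ordering of the unknowns in the matrix below.

```python
n = 2
h = 1.0 / (n + 1)   # mesh spacing; h = 1/3 here
# interior mesh points in row-wise order: (x1,y1), (x2,y1), (x1,y2), (x2,y2)
pts = [(i * h, j * h) for j in range(1, n + 1) for i in range(1, n + 1)]
print(pts)
```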
Figure 11.5: Boundary conditions (left) and mesh (right) for Laplace equation example. The boundary values are $u = 1$ along the top side of the unit square and $u = 0$ along the other three sides.

Next we replace the second derivatives in the equation with the usual centered difference approximation at each interior mesh point to obtain the finite difference equation
$$\frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{h^2} = 0,$$
where $u_{i,j}$ is an approximation to the true solution $u(x_i, y_j)$ for $i, j = 1, \ldots, n$, and represents one of the given boundary values if $i$ or $j$ is 0 or $n + 1$. Simplifying and writing out the resulting four equations explicitly, we obtain
$$\begin{aligned}
4u_{1,1} - u_{0,1} - u_{2,1} - u_{1,0} - u_{1,2} &= 0,\\
4u_{2,1} - u_{1,1} - u_{3,1} - u_{2,0} - u_{2,2} &= 0,\\
4u_{1,2} - u_{0,2} - u_{2,2} - u_{1,1} - u_{1,3} &= 0,\\
4u_{2,2} - u_{1,2} - u_{3,2} - u_{2,1} - u_{2,3} &= 0.
\end{aligned}$$
Writing these four equations in matrix form, we have
$$\begin{bmatrix} 4 & -1 & -1 & 0\\ -1 & 4 & 0 & -1\\ -1 & 0 & 4 & -1\\ 0 & -1 & -1 & 4 \end{bmatrix}
\begin{bmatrix} u_{1,1}\\ u_{2,1}\\ u_{1,2}\\ u_{2,2} \end{bmatrix} =
\begin{bmatrix} u_{0,1} + u_{1,0}\\ u_{3,1} + u_{2,0}\\ u_{0,2} + u_{1,3}\\ u_{3,2} + u_{2,3} \end{bmatrix} =
\begin{bmatrix} 0\\ 0\\ 1\\ 1 \end{bmatrix}.$$
This system of equations can be solved for the unknowns $u_{i,j}$ either by a direct method based on factorization or by an iterative method, yielding the solution
$$\begin{bmatrix} u_{1,1}\\ u_{2,1}\\ u_{1,2}\\ u_{2,2} \end{bmatrix} =
\begin{bmatrix} 0.125\\ 0.125\\ 0.375\\ 0.375 \end{bmatrix}.$$
In a practical problem, the mesh size $h$ would be much smaller and the resulting linear system would be much larger than in the preceding example.
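As a concrete check, the following short Python sketch (layout and names are mine, not the book's) assembles the 4 × 4 system above and solves it with a dense direct solver, reproducing the solution just given.

```python
import numpy as np

# 5-point Laplacian system for the n = 2 example, unknowns ordered
# row-wise as (u11, u21, u12, u22); the right-hand side collects the
# sums of neighboring boundary values (u = 1 on top, u = 0 elsewhere).
A = np.array([[ 4, -1, -1,  0],
              [-1,  4,  0, -1],
              [-1,  0,  4, -1],
              [ 0, -1, -1,  4]], dtype=float)
b = np.array([0.0, 0.0, 1.0, 1.0])

u = np.linalg.solve(A, b)   # direct solve via LU factorization
print(u)                    # [0.125 0.125 0.375 0.375]
```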
The matrix would be very sparse, however, since each equation would still involve at most only five of the variables, thereby saving substantially on work and storage. We can be a bit more specific about the nonzero pattern of the matrix of such a linear system.
We have already seen in Section 10.4 how this type of finite difference method on a one-dimensional grid yields a tridiagonal system. A rectangular two-dimensional grid can be thought of as a one-dimensional grid of one-dimensional grids. Thus, with a row- or column-wise ordering of the grid points, the corresponding matrix will be block tridiagonal, with each nonzero block being tridiagonal or diagonal. Such a pattern is barely evident in the matrix of the previous example, where the blocks are only 2 × 2; for a slightly larger example, where the pattern is more evident, see Fig. 11.6.
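To make this structure concrete, the following sketch builds the matrix of the 5-point difference method (after multiplying the equations through by $h^2$) on an n × n interior grid from 1-D tridiagonal pieces using Kronecker products and prints its nonzero pattern. The construction is a standard identity, shown here as an illustration rather than as the book's own code.

```python
import numpy as np

def laplacian_2d(n):
    """Matrix of the 5-point difference equations (times h^2) on an
    n-by-n interior grid with row-wise ordering of the unknowns."""
    I = np.eye(n)
    # 1-D second-difference matrix: tridiagonal with 2 on the diagonal
    T = 2 * I - np.eye(n, k=1) - np.eye(n, k=-1)
    # kron(I, T) couples neighbors within a grid row; kron(T, I) couples
    # corresponding points in adjacent rows.  The sum is block
    # tridiagonal, with tridiagonal diagonal blocks and diagonal
    # off-diagonal blocks.
    return np.kron(I, T) + np.kron(T, I)

for row in laplacian_2d(3):
    print("".join("X" if a != 0 else "." for a in row))
```

For n = 2 this reproduces exactly the 4 × 4 matrix of Example 11.4, and for n = 3 the printed pattern shows the 3 × 3 blocks of the block tridiagonal structure.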
This pattern generalizes to a three-dimensional grid, which can be viewed as a one-dimensional grid of two-dimensional grids, so that the matrix would be block tridiagonal, with the nonzero blocks themselves being block tridiagonal and their subblocks being tridiagonal. Of course, for a less regular grid or mesh, or a more complicated finite difference stencil, the pattern would not be so simple, but sparsity would still prevail owing to the local connectivity among the grid points.

11.3.2 Finite Element Methods

In Section 10.5 we considered finite element methods for solving boundary value problems for ODEs.
Finite element methods are applicable to boundary value problems for PDEs as well. Conceptually, there is no change in going from one dimension to two or three dimensions: the solution is still represented as a linear combination of basis functions, and some criterion (e.g., Galerkin) is applied to derive a system of equations that determines the coefficients of the linear combination. The main practical difference is that instead of subintervals in one dimension, the elements usually become triangles or rectangles in two dimensions, or tetrahedra or hexahedra in three dimensions.