Heath, Scientific Computing — page 98
Can you find an optimal value for ω? Does the optimal ω become larger or smaller as h decreases?

11.7 The time-independent Schrödinger equation in two dimensions,

−(ψ_xx(x, y) + ψ_yy(x, y)) + V(x, y) ψ(x, y) = E ψ(x, y),

where we have chosen units so that the quantities are dimensionless, describes the wave function ψ of a particle of energy E subject to a potential V.
The square of the wave function, |ψ(x, y)|², can be interpreted as the probability of finding the particle at position (x, y). Assume that the particle is confined to a two-dimensional box, say, the unit square, within which it can move freely. Thus, the potential is zero within the unit square and infinite elsewhere. Since there is zero probability of finding the particle outside the box, the wave function must be zero at its boundaries. Thus, we have an eigenvalue problem for the elliptic PDE

−(ψ_xx(x, y) + ψ_yy(x, y)) = E ψ(x, y)

on the unit square, subject to the boundary condition ψ(x, y) = 0 on all boundaries. Note that the discrete eigenvalues E are the only energy levels permitted; this feature gives quantum mechanics its name. Use a finite difference discretization of the PDE to derive an algebraic eigenvalue problem whose eigenvalues and eigenvectors approximate those of the PDE, then compute the eigenvalues and eigenvectors using a library routine (see Section 4.6).
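The discretization step can be sketched as follows (a minimal sketch, not from the text: it uses the standard five-point Laplacian, an assumed m = 20 interior points per dimension with mesh size h = 1/(m + 1), and NumPy's symmetric eigensolver as the library routine):

```python
import numpy as np

m = 20                       # interior mesh points per dimension (assumed value)
h = 1.0 / (m + 1)            # mesh size

# 1-D second-difference matrix: tridiagonal with 2 on the diagonal, -1 off it
T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
I = np.eye(m)

# Five-point 2-D Laplacian as a Kronecker sum, scaled by 1/h^2, so that
# A v = E v approximates -(psi_xx + psi_yy) = E psi with psi = 0 on the boundary
A = (np.kron(I, T) + np.kron(T, I)) / h**2

E = np.linalg.eigvalsh(A)    # library routine for the symmetric eigenproblem
print(E[:3])                 # smallest eigenvalues: close to 2*pi^2, 5*pi^2, 5*pi^2
```

The eigenvectors returned by `np.linalg.eigh`, reshaped to m × m, approximate the eigenfunctions sampled at the interior mesh points.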
Experiment with various mesh sizes and observe how the eigenvalues behave. An analytical solution to this problem is easily obtained, which gives the eigenvalues

E_{k,j} = (k² + j²) π²,   k, j = 1, 2, …,

and corresponding eigenfunctions

ψ_{k,j}(x, y) = sin(kπx) sin(jπy).

How do your computed eigenvalues and eigenvectors compare with these analytical values as the mesh size of your discretization decreases? Try to characterize the error as a function of the mesh size. Note that a nonzero potential V would not seriously complicate the numerical solution of the Schrödinger equation but would generally make an analytical solution much more difficult to obtain.

11.8 Verify empirically that the Jacobian of the semidiscrete system of ODEs given in Section 11.2.1 has eigenvalues between −4c/(Δx)² and 0.
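One way to carry out this check (a sketch, not from the text; it takes c = 1 and assumes the tridiagonal Jacobian (c/(Δx)²)·tridiag(1, −2, 1) produced by the centered spatial discretization of the heat equation):

```python
import numpy as np

c = 1.0
for n in (10, 20, 40):                      # interior mesh points, so dx = 1/(n+1)
    dx = 1.0 / (n + 1)
    # Jacobian of the semidiscrete system y' = (c/dx^2) * tridiag(1, -2, 1) * y
    J = (c / dx**2) * (np.eye(n, k=1) - 2.0 * np.eye(n) + np.eye(n, k=-1))
    lam = np.linalg.eigvalsh(J)
    print(f"dx = {dx:.4f}: eigenvalues in [{lam.min():.1f}, {lam.max():.1f}],"
          f" bound -4c/dx^2 = {-4*c/dx**2:.1f}")
```

As Δx → 0 the most negative eigenvalue approaches −4c/(Δx)², which is what makes the semidiscrete system increasingly stiff.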
For this exercise, you may take c = 1. Experiment with several different mesh sizes Δx. How do the eigenvalues behave as Δx → 0?

11.9 Consider the semidiscrete system of ODEs obtained from centered spatial discretization of the one-way wave equation, as in Example 11.3. Examine the stability of Euler's method for this system of ODEs by computing the eigenvalues of its Jacobian matrix. Is there any value for the size of the time step that yields a stable method? Answer the same question for the semidiscrete system obtained from one-sided upwind differencing.

11.10 For the standard five-point approximation to Laplace's equation on a k × k grid, experiment with various values of the relaxation parameter ω in the SOR method to see how much difference this makes in the spectral radius of the resulting iteration matrix G and the rate of convergence of the algorithm.
Draw a plot of ρ(G) as a function of ω for 0 < ω < 2. How does the minimum of this curve compare with the theoretical minimum given by

ρ(G) = (1 − sin(πh)) / (1 + sin(πh)),

which occurs for

ω = 2 / (1 + sin(πh)),

where h = 1/(k + 1)? Experiment with various values of k, say, k = 10, 50, 100.

11.11 Verify empirically the spectral radius and rate of convergence data given in Tables 11.1 and 11.2.

11.12 Implement the steepest descent and conjugate gradient methods for solving symmetric positive definite linear systems.
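A minimal sketch of the two methods (one possible implementation, not the book's; it assumes dense NumPy arrays, a zero starting guess, and a fixed residual tolerance):

```python
import numpy as np

def steepest_descent(A, b, tol=1e-10, maxiter=100000):
    """Minimize 0.5*x'Ax - x'b by exact line search along the negative gradient."""
    x = np.zeros_like(b)
    r = b - A @ x                        # residual = negative gradient
    for _ in range(maxiter):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))  # optimal step length along r
        x += alpha * r
        r = b - A @ x
    return x

def conjugate_gradient(A, b, tol=1e-10, maxiter=100000):
    """CG: successive search directions are A-conjugate."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(maxiter):
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p        # next A-conjugate direction
        rr = rr_new
    return x

# sample SPD test problem: 1-D Laplacian matrix
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
for solver in (steepest_descent, conjugate_gradient):
    x = solver(A, b)
    print(solver.__name__, np.linalg.norm(A @ x - b))
```

Increasing n makes this test problem progressively more ill-conditioned; steepest descent's iteration count grows roughly with the condition number, while CG's grows only with its square root.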
Compare their performance, both in rate of convergence and in total time required, in solving a representative sample of test problems, both well-conditioned and ill-conditioned. How does the rate of convergence of the conjugate gradient method compare with the theoretical estimate given in Section 11.5.6?

11.13 Implement the Jacobi method for solving the n × n linear system Ax = b, where A is the matrix resulting from a finite difference approximation to the one-dimensional Laplace equation on an interval, with boundary values of zero at the endpoints. Thus, A is tridiagonal with diagonal elements equal to 2 and subdiagonal and superdiagonal elements equal to −1, and x represents the solution values at the interior mesh points. Take b = o, so that the exact solution is x = o, and therefore at any iteration the error is equal to the current value for x. For the initial starting guess, take

x_j = sin(jkπ/(n + 1)),   j = 1, …, n.

For any given value of k, the resulting vector x represents a discrete sample of a sine wave whose frequency depends on k. Thus, by choosing various values for k and then observing the resulting Jacobi iterations, we can determine the relative speeds with which components of the error of various frequencies are damped out. With n = 50, perform this experiment for each value of k, where k = 1, 5, 10, …, 25. For each value of k, make a plot of the solution x at the starting value and for each of the first ten iterations of the Jacobi method. For what values of k is the error damped out most rapidly and most slowly? It turns out that frequencies beyond k = n/2 (called the Nyquist frequency; see Section 12.1.3) simply repeat the previous frequencies, as you may wish to verify. Do your results suggest that the Jacobi method is a smoother, as discussed in Section 11.5.7? Try the same experiment using a starting value for x with all entries equal to 1.
Does the error decay rapidly or slowly, compared with your previous experiments?

11.14 Implement a two-grid version of the multigrid algorithm, as outlined in Section 11.5.7. If recursion is supported in the computing environment you use, you may find it almost as easy to implement multiple grid levels. Test your program on a simple elliptic boundary value problem, such as the Laplace equation in one dimension on the unit interval or in two dimensions on the unit square with given boundary conditions.

11.15 Using the standard five-point discretization of the Laplace equation on a k × k grid in two dimensions, and systematically varying k, verify empirically as many of the results given in Table 11.3 as you can.
Try to determine the approximate proportionality constant for the dominant term in each case.

Chapter 12: Fast Fourier Transform

12.1 Trigonometric Interpolation

We have already studied interpolation by polynomials, piecewise polynomials, and splines. In dealing with cyclic phenomena, it is often more appropriate and informative to use trigonometric functions, specifically, linear combinations of sines and cosines.
A function x(t) is periodic if there is some constant p > 0 such that x(t + p) = x(t) for all t. The smallest such p for a given function is called the period of the function. For example, sine and cosine are both periodic with period 2π.

Representing a periodic function as a linear combination of sines and cosines decomposes the function into its components of various frequencies, much as a prism resolves a light beam into its constituent colors. The resulting coefficients of the trigonometric basis functions tell us what frequencies are present in the function and in what amounts.
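As a small illustration (a sketch, not from the text; the particular signal and the use of NumPy's FFT routine are my own choices), sampling a signal built from frequencies 3 and 8 and taking its discrete Fourier transform makes exactly those frequencies appear as large coefficients:

```python
import numpy as np

n = 64
t = np.arange(n) / n                          # n equally spaced samples on [0, 1)
x = 2.0 * np.sin(2*np.pi*3*t) + np.cos(2*np.pi*8*t)

y = np.fft.fft(x) / n                         # normalized DFT coefficients
peaks = np.nonzero(np.abs(y) > 0.1)[0]        # indices with significant amplitude
print(peaks)                                  # [ 3  8 56 61]
```

The entries at n − 3 = 61 and n − 8 = 56 are the matching negative-frequency components of the two sinusoids.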
Moreover, this representation of the function in frequency space enables some of the manipulations of the function required in many applications, such as signal processing or solving differential equations, to be done much more efficiently than in the original time domain.

In representing a given function x(t) by a linear combination of sines and cosines, several cases may arise:

• The function x(t) may be defined for all t ∈ (−∞, ∞), or it may be defined only on some finite interval.
• The function x(t) may or may not be periodic.
• The function x(t) may be a continuous function whose value is known for any argument t, or its value may be given only at a discrete set of points t_k.

Depending on the particular situation, we may require the continuous Fourier transform (CFT), the Fourier series expansion, or the discrete Fourier transform (DFT) of the function x, all of which will be defined shortly.

In this chapter it will be convenient to use complex exponential notation, which is related
to ordinary trigonometric functions by Euler's identity,

e^{iθ} = cos θ + i sin θ,

where i = √−1. Since

e^{−iθ} = cos(−θ) + i sin(−θ) = cos θ − i sin θ,

we see that for a cosine wave of frequency k we have

cos(2πkt) = (e^{2πikt} + e^{−2πikt}) / 2,

and similarly for a sine wave of frequency k,

sin(2πkt) = i (e^{−2πikt} − e^{2πikt}) / 2,

which means that a pure cosine or sine wave of frequency k is equivalent to a sum or difference, respectively, of complex exponentials of half the amplitude and of frequencies k and −k.

For a given integer n, we will use the notation

w = e^{2πi/n}

for the primitive nth root of unity.
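A quick numerical check (a sketch) that the powers of w behave as claimed, taking n = 4:

```python
import numpy as np

n = 4
w = np.exp(2j * np.pi / n)          # primitive nth root of unity; equals i for n = 4
powers = [w**k for k in range(n)]
print(np.round(powers, 12))         # 1, i, -1, -i (up to rounding)
```

Raising w to any power beyond n − 1 simply cycles through these same four values, since w^n = 1.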
The nth roots of unity, sometimes called twiddle factors in this context, are then given by w^k, k = 0, …, n − 1, and are illustrated for the case n = 4 in Fig. 12.1.

[Figure 12.1: Roots of unity w^k = e^{2πik/n} in the complex plane for n = 4. The points 1 = w⁰, i = w¹, −1 = w², −i = w³ lie on the unit circle, 2π/4 radians apart, and the powers repeat with period 4 (w⁴ = w⁰, w⁵ = w¹, and so on).]

12.1.1 Continuous Fourier Transform

If x(t) is defined for all t ∈ (−∞, ∞), then we can write

x(t) = ∫_{−∞}^{∞} y(f) e^{−2πift} df,