MacKinnon - Computational Physics
If j runs from 1 to N then j = 0 and j = N + 1 will also be needed. This is best done by creating "ghost" cells at the ends of the arrays. In C(++) this can be done by a declaration such as

    double x[N+2], u[N+2], P[N+2];

and, after each loop in which x or u is updated, by setting x[0] = x[N] - N*dx0 and x[N+1] = x[1] + N*dx0, where N dx0 is the equilibrium length of the grid. Note that this requires you to leave a couple of empty array elements, but for large N this is not significant.

2.9.3 Initial Conditions

Sensible initial conditions might be

    x_j = j dx0 + (e/k) sin(k j dx0)        (2.58)
    u_j = e c cos(k j dx0)                  (2.59)

where e, the amplitude of the sound wave, is the ratio of u to the sound speed c (i.e. the Mach number).
Be careful to choose sensible values for the parameters: e.g. the values of x_j should rise monotonically with j, otherwise some cells will have negative fluid densities. The essential physics of the problem is independent of the absolute values of the equilibrium pressure P0 and density rho0, so you can set P0 = rho0 = 1. In addition you can choose units in which the sound speed c = 1. Note that the scheme is leap-frog, but not of the dangerous variety: alternate steps solve for x and u.

2.9.4 The Physics

Start with a relatively small value for e and show that the wave maintains its shape and moves at the correct velocity.
Then increase e to find out what happens. The stability analysis for a non-linear equation like this is difficult. Try halving the distance between grid points. How does the behaviour of the system change? Do the effects you observe describe real physics or are they numerical artefacts?

One common numerical problem only manifests itself for large e. Try running for a few steps at large e, look at the values of x and try to work out what has gone wrong.
Think about how to prevent this problem (Hint: you should find something very similar to the Courant–Friedrichs–Lewy condition).

2.9.5 An Improvement?

Consider the operation:

    u_j -> (1/4)(u_{j+1} + 2u_j + u_{j-1})        (2.60)

By applying the von Neumann trick you should be able to deduce the effect of such an operation. Now try applying the transformation (2.60) to x and u after every few iterations. How does the behaviour of your simulation change?

2.9.6 The Report

In your report you should discuss the behaviour of the simulation, being careful to distinguish between real physics and numerical artefacts.
Discuss what measures you employed to control the numerical problems and explain the reasoning behind them.

2.10 Project — Solitons

2.10.1 Introduction

The Korteweg–de Vries equation,

    du/dt + u du/dx + d³u/dx³ = 0        (2.61)

is one of a class of non-linear equations which have so-called soliton solutions. In this case a solution can be written in the form

    u(x,t) = 12 k² sech²[k(x - 4k²t)]        (2.62)

which has the form of a pulse which moves unchanged through the system. Ever since the phenomenon was first noted (on a canal in Scotland) it has been recognised in a wide range of different physical situations. The "bore" which occurs on certain rivers, notably the Severn, is one such. The dynamics of phase boundaries in various systems, such as domain boundaries in a ferromagnet, and some meteorological phenomena can also be described in terms of solitons.

2.10.2 Discretisation

The simplest discretisation of (2.61), based on the Euler method, gives the equation

    u_j^{n+1} = u_j^n + dt [ -u_j^n (u_{j+1}^n - u_{j-1}^n)/(2 dx)
                - (u_{j+2}^n - 2u_{j+1}^n + 2u_{j-1}^n - u_{j-2}^n)/(2 dx³) ]        (2.63)

You should check that this is indeed a sensible discretisation of (2.61) but that it is unstable.
Note that when analysing the non-linear term in (2.63) you should make the substitution u = u0 + delta_u and retain the terms linear in delta_u. Thereafter you should treat u0 as a constant, independent of j and n, and apply the von Neumann method to delta_u.

If you find the full stability analysis difficult you might consider the 2 limits of large and small u0. In the former case the 3rd derivative is negligible and (2.61) reduces to a non-linear advection equation, whereas in the latter the non-linear term is negligible and the equation is similar to the diffusion equation but with a 3rd derivative. In any case you will have to choose dt so that the equation is stable in both limits.

You are free to choose any method you wish to solve the equation but you will find the Runge–Kutta or Predictor–Corrector methods most reliable. Hence, treating u as a long vector y and the terms on the right-hand side of (2.63) as a vector function f(y), the R–K method can be written concisely as
    y' = y^n + (dt/2) f(y^n)        (2.64)
    y^{n+1} = y^n + dt f(y')        (2.65)

where f(y) is the quantity in square brackets in (2.63). Check that this method is stable, at least in the 2 limiting cases. Bear in mind that the Runge–Kutta method is usable for oscillatory equations, in spite of the small instability, as long as the unstable term remains small.

By studying the analytical solution (2.62) you should be able to choose a sensible value for dx in terms of k, and from the stability conditions you can deduce an appropriate dt.
Again, by looking at (2.62) youshould be able to decide on a sensible size for the total system.You should use periodic boundary conditions, so that your solitons can run around your system severaltimes if necessary. The easiest way to do this is by using “ghost” elements at each end of your arrays.Suppose your arrays should run from to .
Then you can add a couple of extra elements to each end: u_{-1}, u_0, u_1, ..., u_N, u_{N+1}, u_{N+2}. After each step you can then assign these values as e.g.

    u_{-1} = u_{N-1},   u_0 = u_N,   u_{N+1} = u_1,   u_{N+2} = u_2        (2.66)

so that the derivative terms in (2.63) can be calculated without having to take any special measures.

2.10.3 Physics

You should first check your program by studying the single soliton case. Use (2.62) at t = 0 as your initial condition and watch how the soliton develops.
Check that after it has gone round the system once it retains its shape.

Now you can try 2 solitons of different sizes. The initial conditions should be the simple sum of 2 solitons well separated from one another. Watch what happens when the solitons meet.

In your report you should discuss the behaviour of the solitons as well as the properties of the method chosen to solve the equations.

Chapter 3

Matrix Algebra

3.1 Introduction

Nearly every scientific problem which is solvable on a computer can be represented by matrices. However the ease of solution can depend crucially on the types of matrices involved. There are 3 main classes of problems which we might want to solve:
1. Trivial Algebraic Manipulation such as addition, A + B, or multiplication, AB, of matrices.

2. Systems of equations: Ax = b, where A and b are known and we have to find x. This also includes the case of finding the inverse, A^-1. The standard example of such a problem is Poisson's equation (section 3.4).

3. Eigenvalue Problems: Ax = ax.
This also includes the generalised eigenvalue problem: Ax = aBx. Here we will consider the time–independent Schrödinger equation.

In most cases there is no point in writing your own routine to solve such problems. There are many computer libraries, such as Numerical Algorithms Group (n.d.) and Lapack Numerical Library (n.d.) (for linear algebra problems and eigenvalue problems), which contain well tried routines. In addition vendors of machines with specialised architectures often provide libraries of such routines as part of the basic software.

3.2 Types of Matrices

There are several ways of classifying matrices depending on symmetry, sparsity etc.
Here we provide a list of types of matrices and the situations in which they may arise in physics.

Hermitian Matrices: H_ij = H*_ji, where at least some elements are complex. Many Hamiltonians have this property, especially those containing magnetic fields.

Real Symmetric Matrices: A_ij = A_ji and all A_ij are real. These are the commonest matrices in physics, as most Hamiltonians can be represented this way. This is a special case of Hermitian matrices.

Positive Definite Matrices: A special sort of Hermitian matrix in which all the eigenvalues are positive.
The overlap matrices used in tight–binding electronic structure calculations are like this. Sometimes matrices are non–negative definite and zero eigenvalues are also allowed. An example is the dynamical matrix describing vibrations of the atoms of a molecule or crystal, where the eigenvalues satisfy w² >= 0.

Unitary Matrices: The product of the matrix and its Hermitian conjugate is a unit matrix, U†U = I. A matrix whose columns are the eigenvectors of a Hermitian matrix is unitary; the unitarity is a consequence of the orthogonality of the eigenvectors. A scattering (S) matrix is unitary; in this case it is a consequence of current conservation.

Diagonal Matrices: All matrix elements are zero except the diagonal elements: A_ij = 0 when i != j. The matrix of eigenvalues of a matrix has this form.
Finding the eigenvalues is equivalent to diagonalisation.

Tridiagonal Matrices: All matrix elements are zero except the diagonal and first off-diagonal elements: only A_ii and A_{i,i±1} may be non-zero. Such matrices often occur in 1-dimensional problems and at an intermediate stage in the process of diagonalisation.

Upper and Lower Triangular Matrices: In an Upper Triangular Matrix all the matrix elements below the diagonal are zero: A_ij = 0 for i > j.
A Lower Triangular Matrix is the other way round: A_ij = 0 for i < j. These occur at an intermediate stage in solving systems of equations and inverting matrices.

Sparse Matrices: Matrices in which most of the elements are zero according to some pattern. In general sparsity is only useful if the number of non–zero matrix elements of an N x N matrix is proportional to N rather than N². In this case it may be possible to write a function which will multiply a given vector x by the matrix A to give Ax without ever having to store all the elements of A. Such matrices commonly occur as a result of simple discretisation of partial differential equations (see chapter 2), and in simple models to describe many physical phenomena.

General Matrices: Any matrix which doesn't fit into any of the above categories, especially non–square matrices.

There are a few extra types which arise less often:

Complex Symmetric Matrices: Not generally a useful symmetry.
There are however two related situations in which these occur in theoretical physics: Green's functions and scattering (S) matrices. In both these cases the real and imaginary parts commute with each other, but this is not true for a general complex symmetric matrix.

Symplectic Matrices: This designation is used in 2 distinct situations:

– The eigenvalues occur in pairs which are reciprocals of one another, l and 1/l. A common example is a Transfer Matrix.

– Matrices whose elements are Quaternions, which are 2 x 2 matrices of the form

      (  a   b  )
      ( -b*  a* )        (3.1)

  Such matrices describe systems involving spin–orbit coupling. All eigenvalues are doubly degenerate (Kramers degeneracy).

3.3 Simple Matrix Problems

3.3.1 Addition and Subtraction

In programming routines to add or subtract matrices it pays to remember how the matrix is stored in the computer.