c10-8 (Numerical Recipes in C)
Chapter 10. Minimization or Maximization of Functions

Quasi-Newton methods like dfpmin work well with the approximate line minimization done by lnsrch. The routines powell (§10.5) and frprmn (§10.6), however, need more accurate line minimization, which is carried out by the routine linmin.

Advanced Implementations of Variable Metric Methods

Although rare, it can conceivably happen that roundoff errors cause the matrix H_i to become nearly singular or non-positive-definite. This can be serious, because the supposed search directions might then not lead downhill, and because nearly singular H_i's tend to give subsequent H_i's that are also nearly singular.

There is a simple fix for this rare problem, the same as was mentioned in §10.4: in case of any doubt, you should restart the algorithm at the claimed minimum point, and see if it goes anywhere. Simple, but not very elegant.
Modern implementations of variable metric methods deal with the problem in a more sophisticated way.

Instead of building up an approximation to A^{-1}, it is possible to build up an approximation of A itself. Then, instead of calculating the left-hand side of (10.7.4) directly, one solves the set of linear equations

    A \cdot (x_m - x_i) = -\nabla f(x_i)    (10.7.11)

At first glance this seems like a bad idea, since solving (10.7.11) is a process of order N^3, and anyway, how does this help the roundoff problem? The trick is not to store A but rather a triangular decomposition of A, its Cholesky decomposition (cf. §2.9).
The updating formula used for the Cholesky decomposition of A is of order N^2 and can be arranged to guarantee that the matrix remains positive definite and nonsingular, even in the presence of finite roundoff. This method is due to Gill and Murray [1,2].
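To make the solve step of (10.7.11) concrete, here is a minimal self-contained sketch in C: factor a symmetric positive definite matrix A as A = L L^T and obtain the step p from A · p = -∇f by forward and back substitution. This is not the book's dfpmin, choldc, or cholsl routine, and it does not implement the Gill and Murray update itself; the function names cholesky and chol_solve and the small hard-coded matrix and gradient are illustrative assumptions only.

/* Sketch: solve A . p = -grad f via a Cholesky factorization A = L L^T.
   In a variable metric method, A would be the current Hessian approximation. */
#include <stdio.h>
#include <math.h>

#define N 3

/* Factor a into lower-triangular l with a = l l^T.
   Returns 0 on success, -1 if a nonpositive pivot appears
   (i.e., a is not positive definite to within roundoff). */
int cholesky(double a[N][N], double l[N][N])
{
    for (int i = 0; i < N; i++) {
        for (int j = 0; j <= i; j++) {
            double sum = a[i][j];
            for (int k = 0; k < j; k++) sum -= l[i][k] * l[j][k];
            if (i == j) {
                if (sum <= 0.0) return -1;
                l[i][i] = sqrt(sum);
            } else {
                l[i][j] = sum / l[j][j];
            }
        }
        for (int j = i + 1; j < N; j++) l[i][j] = 0.0;
    }
    return 0;
}

/* Solve L L^T x = b: forward substitution with L, then back substitution with L^T. */
void chol_solve(double l[N][N], double b[N], double x[N])
{
    double y[N];
    for (int i = 0; i < N; i++) {
        double sum = b[i];
        for (int k = 0; k < i; k++) sum -= l[i][k] * y[k];
        y[i] = sum / l[i][i];
    }
    for (int i = N - 1; i >= 0; i--) {
        double sum = y[i];
        for (int k = i + 1; k < N; k++) sum -= l[k][i] * x[k];
        x[i] = sum / l[i][i];
    }
}

int main(void)
{
    /* Hypothetical data: a small SPD Hessian approximation and a gradient. */
    double a[N][N] = {{4.0, 1.0, 0.0}, {1.0, 3.0, 1.0}, {0.0, 1.0, 2.0}};
    double grad[N] = {1.0, -2.0, 0.5};
    double l[N][N], rhs[N], step[N];

    for (int i = 0; i < N; i++) rhs[i] = -grad[i];   /* right-hand side of (10.7.11) */
    if (cholesky(a, l) != 0) {
        printf("matrix not positive definite\n");
        return 1;
    }
    chol_solve(l, rhs, step);
    for (int i = 0; i < N; i++) printf("step[%d] = %g\n", i, step[i]);
    return 0;
}

In an actual implementation only the factor L is stored and updated, so positive definiteness can be monitored directly through its diagonal, which is the point of working with the decomposition rather than with A or A^{-1}.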
CITED REFERENCES AND FURTHER READING:

Dennis, J.E., and Schnabel, R.B. 1983, Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Englewood Cliffs, NJ: Prentice-Hall). [1]

Jacobs, D.A.H. (ed.) 1977, The State of the Art in Numerical Analysis (London: Academic Press), Chapter III.1, §§3–6 (by K.W. Brodlie). [2]

Polak, E. 1971, Computational Methods in Optimization (New York: Academic Press), pp. 56ff. [3]

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), pp. 467–468.

10.8 Linear Programming and the Simplex Method

The subject of linear programming, sometimes called linear optimization, concerns itself with the following problem: For N independent variables x_1, ..., x_N, maximize the function
    z = a_{01} x_1 + a_{02} x_2 + \cdots + a_{0N} x_N    (10.8.1)

subject to the primary constraints

    x_1 \geq 0, \quad x_2 \geq 0, \quad \ldots, \quad x_N \geq 0    (10.8.2)
and simultaneously subject to M = m_1 + m_2 + m_3 additional constraints, m_1 of them of the form

    a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{iN} x_N \leq b_i \quad (b_i \geq 0), \qquad i = 1, \ldots, m_1    (10.8.3)
m_2 of them of the form

    a_{j1} x_1 + a_{j2} x_2 + \cdots + a_{jN} x_N \geq b_j \geq 0, \qquad j = m_1 + 1, \ldots, m_1 + m_2    (10.8.4)

and m_3 of them of the form

    a_{k1} x_1 + a_{k2} x_2 + \cdots + a_{kN} x_N = b_k \geq 0, \qquad k = m_1 + m_2 + 1, \ldots, m_1 + m_2 + m_3    (10.8.5)

The various a_{ij}'s can have either sign, or be zero. The fact that the b's must all be nonnegative (as indicated by the final inequality in the above three equations) is a matter of convention only, since you can multiply any contrary inequality by -1. There is no particular significance in the number of constraints M being less than, equal to, or greater than the number of unknowns N.

A set of values x_1 ... x_N that satisfies the constraints (10.8.2)–(10.8.5) is called a feasible vector. The function that we are trying to maximize is called the objective function. The feasible vector that maximizes the objective function is called the optimal feasible vector.
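To make the general form concrete, the following sketch evaluates the objective function (10.8.1) and tests a candidate vector against the primary constraints (10.8.2) and the M = m_1 + m_2 + m_3 additional constraints (10.8.3)–(10.8.5). It is not one of the book's routines; the function names objective and feasible, and the row layout (the m_1 "less-or-equal" rows first, then the m_2 "greater-or-equal" rows, then the m_3 equalities, stored in a flat array a of M rows by n columns) are assumptions made here for illustration.

/* Sketch: evaluate (10.8.1) and test feasibility against (10.8.2)-(10.8.5). */
#include <math.h>

/* Objective z = a0[0]*x[0] + ... + a0[n-1]*x[n-1], cf. (10.8.1). */
double objective(int n, const double a0[], const double x[])
{
    double z = 0.0;
    for (int j = 0; j < n; j++) z += a0[j] * x[j];
    return z;
}

/* Returns 1 if x satisfies all constraints to within tol, 0 otherwise.
   Row i of the constraint matrix is a[i*n + 0 .. i*n + n-1]; rows are ordered
   m1 "<=" rows, then m2 ">=" rows, then m3 "=" rows, as in the text. */
int feasible(int n, int m1, int m2, int m3,
             const double *a, const double b[], const double x[], double tol)
{
    for (int j = 0; j < n; j++)                 /* primary constraints x_j >= 0 */
        if (x[j] < -tol) return 0;
    int m = m1 + m2 + m3;
    for (int i = 0; i < m; i++) {
        double lhs = 0.0;
        for (int j = 0; j < n; j++) lhs += a[i*n + j] * x[j];
        if (i < m1)           { if (lhs > b[i] + tol) return 0; }        /* (10.8.3) */
        else if (i < m1 + m2) { if (lhs < b[i] - tol) return 0; }        /* (10.8.4) */
        else                  { if (fabs(lhs - b[i]) > tol) return 0; }  /* (10.8.5) */
    }
    return 1;
}

A routine like this only recognizes feasible vectors; finding the optimal feasible vector among them is the job of the simplex method developed in the rest of this section.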
An optimal feasible vector can fail to exist for two distinct reasons: (i) there are no feasible vectors, i.e., the given constraints are incompatible, or (ii) there is no maximum, i.e., there is a direction in N-space where one or more of the variables can be taken to infinity while still satisfying the constraints, giving an unbounded value for the objective function.

As you see, the subject of linear programming is surrounded by notational and terminological thickets.
Both of these thorny defenses are lovingly cultivated by a coterie of stern acolytes who have devoted themselves to the field. Actually, the basic ideas of linear programming are quite simple. Avoiding the shrubbery, we want to teach you the basics by means of a couple of specific examples; it should then be quite obvious how to generalize.

Why is linear programming so important? (i) Because "nonnegativity" is the usual constraint on any variable x_i that represents the tangible amount of some physical commodity, like guns, butter, dollars, units of vitamin E, food calories, kilowatt-hours, mass, etc. Hence equation (10.8.2). (ii) Because one is often interested in additive (linear) limitations or bounds imposed by man or nature: minimum nutritional requirement, maximum affordable cost, maximum on available labor or capital, minimum tolerable level of voter approval, etc.
Hence equations (10.8.3)–(10.8.5). (iii) Because the function that one wants to optimize may be linear, or else may at least be approximated by a linear function, since that is the problem that linear programming can solve. Hence equation (10.8.1). For a short, semipopular survey of linear programming applications, see Bland [1].

Here is a specific example of a problem in linear programming, which has N = 4, m_1 = 2, m_2 = m_3 = 1, hence M = 4:

    Maximize \quad z = x_1 + x_2 + 3 x_3 - \tfrac{1}{2} x_4    (10.8.6)
with all the x's nonnegative and also with

    x_1 + 2 x_3 \leq 740
    2 x_2 - 7 x_4 \leq 0
    x_2 - x_3 + 2 x_4 \geq \tfrac{1}{2}    (10.8.7)
    x_1 + x_2 + x_3 + x_4 = 9

The answer turns out to be (to 2 decimals) x_1 = 0, x_2 = 3.33, x_3 = 4.73, x_4 = 0.95. In the rest of this section we will learn how this answer is obtained. Figure 10.8.1 summarizes some of the terminology thus far.

[Figure 10.8.1 shows a two-variable sketch: some feasible vectors, a feasible basic vector (not optimal), the optimal feasible vector, the primary constraints, additional inequality and equality constraints, and contour lines of z from 2.4 to 3.1.]

Figure 10.8.1. Basic concepts of linear programming. The case of only two independent variables, x_1, x_2, is shown. The linear function z, to be maximized, is represented by its contour lines. Primary constraints require x_1 and x_2 to be positive. Additional constraints may restrict the solution to regions (inequality constraints) or to surfaces of lower dimensionality (equality constraints). Feasible vectors satisfy all constraints. Feasible basic vectors also lie on the boundary of the allowed region. The simplex method steps among feasible basic vectors until the optimal feasible vector is found.
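As a quick arithmetic check (plain substitution, not the simplex method), the quoted answer can be plugged back into (10.8.6) and (10.8.7). Since the answer is quoted to only two decimals, the constraints hold only to roughly that accuracy; for example 2 x_2 - 7 x_4 comes out to 0.01 rather than a value that is exactly nonpositive. A minimal sketch:

/* Substitute the quoted answer of the example (10.8.6)-(10.8.7) back into
   the objective and the constraints.  Verification arithmetic only. */
#include <stdio.h>

int main(void)
{
    double x1 = 0.0, x2 = 3.33, x3 = 4.73, x4 = 0.95;

    double z = x1 + x2 + 3.0 * x3 - 0.5 * x4;               /* objective (10.8.6) */
    printf("z                 = %.4f\n", z);

    printf("x1 + 2*x3         = %.4f  (must be <= 740)\n", x1 + 2.0 * x3);
    printf("2*x2 - 7*x4       = %.4f  (must be <= 0)\n",   2.0 * x2 - 7.0 * x4);
    printf("x2 - x3 + 2*x4    = %.4f  (must be >= 0.5)\n", x2 - x3 + 2.0 * x4);
    printf("x1 + x2 + x3 + x4 = %.4f  (must be  = 9)\n",   x1 + x2 + x3 + x4);
    return 0;
}

How the simplex method actually arrives at this answer is the subject of the rest of the section.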
Fundamental Theorem of Linear Optimization

Imagine that we start with a full N-dimensional space of candidate vectors. Then (in mind's eye, at least) we carve away the regions that are eliminated in turn by each imposed constraint. Since the constraints are linear, every boundary introduced by this process is a plane, or rather hyperplane. Equality constraints of the form (10.8.5)