Nash - Compact Numerical Methods for Computers (file 523163), page 49

TABLE 18.3. Solutions found for the Z Hassan problem via a Marquardt-type algorithm using numerical approximation of the Jacobian and elimination of one parameter via equation (18.14). The values marked * are for the eliminated parameter. All calculations run in BASIC on a Data General NOVA in 23-bit binary arithmetic.

b1         b2         b3            b4            b5          b6           Eliminated   Sum of squares†
-6·58771   0·910573   *1·58847E-2   -2·54972E-2   -46·1524    74·0813      b3           706·745 (104)
80·6448    0·587883   0·155615      *3·27508E-2   0·541558    0·113976     b4           749·862 (170)
43·9342    0·76762    0·167034      0·026591      *-59·629    -9·49263     b5           606·163 (67)
46·0481    0·757074   0·167033      2·76222E-2    -58·976     *-9·75284    b6           606·127 (67)

With analytic derivatives:
45·3623    0·760703   0·167029      2·72248E-2    -59·1436    *-9·64008    b6           606·106 (65)

Penalty method, analytic derivatives, initial w = 100:
44·9836    0·761652   0·165848      2·67492E-2    -58·9751    -8·85664     -            604·923 (73)
Increase to w = 1E4:
45·353     0·760732   0·167005      2·72233E-2    -59·1574    -9·63664     -            606·097 (+48)
Increase to w = 1E6:
45·3508    0·760759   0·167023      2·72204E-2    -59·1504    -9·63989     -            606·108 (+22)

† Figures in brackets after each sum of squares denote the total number of equivalent function evaluations (= (n+1)*(number of Jacobian calculations) + (number of function calculations)) to convergence.

The resulting function was minimised using the Nelder-Mead algorithm starting from b1 = b2 = 0 with a step of 0·01 to generate the initial simplex. The results are presented in table 18.4 together with the alternative method of assigning the function a very large value whenever one or more of the constraints is violated.

In this last approach it has been reported that the simplex may tend to 'collapse' or 'flatten' against the constraint. Swann discusses some of the devices used to counteract this tendency in the book edited by Gill and Murray (1974). Dixon (1972, chap 6) gives a discussion of various methods for constrained optimisation, with particular mention of some of the convergence properties of the penalty function techniques with respect to the weighting factors w and W.

TABLE 18.4. Results of solution of problem (18.24)-(18.26) by the Nelder-Mead algorithm.

Weighting                 Function value   Number of evaluations to converge   b1        b2
10                        -5·35397         113                                 1·46161   0·40779
1E4                       -5·35136         167                                 1·45933   0·40558
Set function very large   -5·35135         121                                 1·45924   0·405569

Calculations performed on a Data General NOVA in 23-bit binary arithmetic.
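To make the two devices compared in table 18.4 concrete, the following sketch minimises a constrained function by Nelder-Mead both with a quadratic penalty (weighting w) and by setting the function very large on any violation. The objective and constraint here are hypothetical stand-ins, since problem (18.24)-(18.26) is not reproduced on this page; scipy's Nelder-Mead routine stands in for the algorithm of the text, and its default starting simplex replaces the 0·01 step.

    # Sketch only: a hypothetical two-parameter constrained problem,
    # not problem (18.24)-(18.26) from the text.
    import numpy as np
    from scipy.optimize import minimize

    def objective(b):
        # stand-in objective in parameters b1, b2
        return (b[0] - 1.5) ** 2 + 3.0 * (b[1] - 0.4) ** 2

    def constraint(b):
        # feasible when constraint(b) >= 0
        return 1.0 - b[0] * b[1]

    def penalised(b, w):
        # penalty method: feasible points unchanged, violations
        # charged w * (violation)^2
        v = min(constraint(b), 0.0)
        return objective(b) + w * v * v

    def big_value(b):
        # alternative device: assign a very large value whenever
        # one or more of the constraints is violated
        return objective(b) if constraint(b) >= 0.0 else 1.0e30

    b0 = np.zeros(2)  # start from b1 = b2 = 0 as in the text
    for w in (10.0, 1.0e4):
        r = minimize(penalised, b0, args=(w,), method="Nelder-Mead")
        print("w =", w, "f =", r.fun, "b =", r.x, "evals =", r.nfev)
    r = minimize(big_value, b0, method="Nelder-Mead")
    print("big value: f =", r.fun, "b =", r.x, "evals =", r.nfev)

With the big-value device the simplex can press against the constraint boundary and flatten there, which is the tendency Swann's devices aim to counteract; the penalty form avoids this at the cost of having to choose, and possibly increase, w.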

18.4. A COMPARISON OF FUNCTION MINIMISATION AND NONLINEAR LEAST-SQUARES METHODS

It is quite difficult to compare the four basic algorithms presented in the preceding chapters in order to say that one of them is 'best'. The reason for this is that one algorithm may solve some problems very easily but run out of space on another. An algorithm which requires gradients may be harder to use than another which needs only function values, hence requiring more human time and effort even though it saves computer time. Alternatively, it may only be necessary to improve an approximation to the minimum, so that a method which will continue until it has exhausted all the means by which it may reduce the function value may very well be unnecessarily persistent.

Despite all these types of question, I have run an extensive comparison of the methods which have been presented. In this, each method was applied to 79 test problems, all of which were specified in the sum-of-squares form

S = f^T f.    (18.27)

Gradients, where required, were computed via

g = 2 J^T f.    (18.28)

Nash (1976) presents 77 of these problems, some of which have been used as examples in this text†.

† The two remaining functions are given by Osborne (1972).
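Equation (18.28) is simply the chain rule applied to (18.27): if J is the Jacobian of the residuals, J[i][j] = df_i/db_j, the gradient of S is 2 J^T f. A minimal sketch, using the Rosenbrock function written in sum-of-squares form as a stand-in for one of the test problems:

    import numpy as np

    def residuals(b):
        # Rosenbrock in residual form: f1 = 10*(b2 - b1^2), f2 = 1 - b1,
        # so that S = f^T f = 100*(b2 - b1^2)^2 + (1 - b1)^2
        return np.array([10.0 * (b[1] - b[0] ** 2), 1.0 - b[0]])

    def jacobian(b):
        # J[i, j] = d f_i / d b_j
        return np.array([[-20.0 * b[0], 10.0],
                         [-1.0, 0.0]])

    b = np.array([-1.2, 1.0])       # the standard starting point
    f = residuals(b)
    S = f @ f                       # (18.27): S = f^T f
    g = 2.0 * jacobian(b).T @ f     # (18.28): g = 2 J^T f
    print("S =", S, "g =", g)       # S = 24.2, g = [-215.6, -88.0]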

The convergence criteria in each algorithm were as stringent as possible, to allow execution to continue as long as the function was being reduced. In some cases this meant that the programs had to be stopped manually when convergence was extremely slow, and some judgement was then used to decide whether a satisfactory solution had been found. In programs using a numerical approximation to the derivative, the same kind of rather subjective judgement was used to decide whether a solution was acceptable based on the computed gradient.

This means that the results given below for minimisation methods employing the approximation detailed in §18.2 have used a more liberal definition of success than that applied to programs using analytic or no derivatives. The question this raises is: do we judge a program by what it is intended to do, or do we test to see if it finds some globally correct answer? I have chosen the former position, but the latter is equally valid. Such differences of viewpoint unfortunately give rise to many disputes and controversies in this field.

Having now described what has been done, a measure of success is needed.

For reliability, we can compare the number of problems for which acceptable (rather than correct) solutions have been found to the total number of problems run. Note that this total is not necessarily 79, because some problems have an initial set of parameters corresponding to a local maximum, and the methods which use gradient information will not generally be able to proceed from such points. Other problems may be too large to fit in the machine used for the tests (a partition of a Data General NOVA which uses 23-bit binary arithmetic). Also, some of the algorithms have intermediate steps which cause overflow or underflow to occur because of the particular scale of the problems at hand.

To measure the efficiency of an algorithm, we could time the execution.

This is generally quite difficult with a time-shared machine and is in any case highly dependent on the compiler or interpreter in use, as well as on a number of programming details, and may be affected quite markedly by special hardware features of some machines. Therefore a measure of work done is needed which does not require reference to the machine, and I have chosen to use equivalent function evaluations (efe's). This is the number of function evaluations required to minimise a function plus, in the case of algorithms requiring derivatives, the number of function evaluations which would have been required to approximate the derivatives numerically.

For a problem of order n, a program requiring ig gradient evaluations and ifn function evaluations is then assigned

efe = (n+1)*ig + ifn    (18.29)

equivalent function evaluations. I use the factor (n+1) rather than n because of the particular structure of my programs. The use of equivalent function evaluations, and in particular the choice of multiplier for ig, biases my appraisal against methods using derivatives, since by and large the derivatives are not n times more work to compute than the function. Hillstrom (1976) presents a more comprehensive approach to comparing algorithms for function minimisation, though in the end he is still forced to make some subjective judgements.
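The bookkeeping behind (18.29) is easy to mechanise: wrap the function and gradient routines in counters and combine the counts afterwards. A small sketch, in which scipy's BFGS stands in for a gradient-using method of the kind compared here, and the quadratic test function is a hypothetical placeholder:

    import numpy as np
    from scipy.optimize import minimize

    n = 4  # problem order

    def fun(b):
        fun.count += 1
        return float(np.sum((b - 1.0) ** 2))

    def grad(b):
        grad.count += 1
        return 2.0 * (b - 1.0)

    fun.count = 0
    grad.count = 0
    minimize(fun, np.zeros(n), jac=grad, method="BFGS")
    efe = (n + 1) * grad.count + fun.count   # (18.29)
    print("ifn =", fun.count, "ig =", grad.count, "efe =", efe)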

Having decided to use equivalent function evaluations as the measure of efficiency, we still have a problem in determining how to use them, since the omission of some problems for each of the methods means that a method which has successfully solved a problem involving many parameters (the maximum in any of the tests was 20, but most were of order five or less) may appear less efficient than another method which was unable to tackle this problem. To take account of this, we could determine the average number of equivalent function evaluations to convergence per parameter, either by averaging this quantity over the successful applications of a method or by dividing the total number of efe's for all successful cases by the total number of parameters involved. In practice, it was decided to keep both measures, since their ratio gives a crude measure of the performance of algorithms as problem order increases.

To understand this, consider that the work required to minimise the function in any algorithm is proportional to some power, a, of the order n, thus

w = p * n^a.    (18.30)

Treating the problem orders as if they were spread uniformly up to some maximum order m, and approximating sums by integrals, the expected value of the total work over all problems divided by the total number of parameters is approximated by

[integral(0 to m) p n^a dn] / [integral(0 to m) n dn] = 2 p m^(a-1) / (a+1).    (18.31)

The average of w/n over all problems, on the other hand, is approximately

(1/m) integral(0 to m) p n^(a-1) dn = p m^(a-1) / a.    (18.32)

Hence the ratio of (18.32) to (18.31),

r = [p m^(a-1) / a] / [2 p m^(a-1) / (a+1)] = (a+1) / (2a),    (18.33)

gives

a = 1 / (2r - 1)    (18.34)

as an estimate of the degree of the relationship between work and problem order, n, of a given method.
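Relation (18.34) can be checked numerically: generate synthetic problems whose work follows (18.30) exactly, form the two per-parameter measures, and recover a from their ratio r. A small sketch under those assumptions:

    import numpy as np

    p, a = 30.0, 1.5                   # assumed work law (18.30): w = p * n^a
    n = np.arange(1, 21, dtype=float)  # problem orders 1..20, as in the tests
    w = p * n ** a

    per_param_total = w.sum() / n.sum()   # cf. (18.31): total efe's / total parameters
    per_param_mean = (w / n).mean()       # cf. (18.32): average of w/n
    r = per_param_mean / per_param_total  # cf. (18.33): approximately (a+1)/(2a)
    a_est = 1.0 / (2.0 * r - 1.0)         # (18.34)
    print("r =", r, "estimated a =", a_est, "true a =", a)

The recovered exponent comes out near, but not equal to, the true value (about 1·42 for a = 1·5 here), which illustrates the caveat that follows about approximating sums by integrals.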

The limited extent of the tests and the approximation of sums by integrals, however, mean that the results of such an analysis are no more than a guide to the behaviour of algorithms. The results of the tests are presented in table 18.5.

The conclusions which may be drawn from the table are loosely as follows.

(i) The Marquardt algorithm 23 is generally the most reliable and efficient. Particularly if problems having 'large' residuals, which cause the Gauss-Newton approximation (17.16) to be invalid, are solved by other methods or by increasing the parameter phi in algorithm 23, it is extremely efficient, as might be expected since it takes advantage of the sum-of-squares form.

(ii) The Marquardt algorithm using a numerical approximation (18.4) for the Jacobian is even more efficient than its analytic-derivative counterpart on those problems it can solve.

It is less reliable, of course, than algorithms using analytic derivatives. Note, however, that in terms of the number of parameters determined successfully, only the variable metric algorithm and the Marquardt algorithm are more effective.

(iii) The Nelder-Mead algorithm is the most reliable of the derivative-free methods in terms of the number of problems solved successfully. However, it is also one of the least efficient, depending on the choice of measure w1 or w0, though in some ways this is due to the very strict convergence criterion and the use of the axial search procedure. Unfortunately, without the axial search, the number of problems giving rise to 'failures' of the algorithm is 11 instead of four, so I cannot recommend a loosening of the convergence criteria except when the properties of the function to be minimised are extremely well known.

(iv) The conjugate gradients algorithm, because of its short code length and low working-space requirements, should be considered whenever the number of parameters to be minimised is large, especially if the derivatives are inexpensive to compute.

The reliability and efficiency of conjugate gradients are lower than those measured for variable metric and Marquardt methods. However, this study, by using equivalent function evaluations and by ignoring the overhead imposed by each of the methods, is biased quite heavily against conjugate gradients, and I would echo Fletcher's comment (in Murray 1972, p 82) that 'the algorithm is extremely reliable and well worth trying'.

As a further aid to the comparison of the algorithms, this chapter is concluded with three examples to illustrate their behaviour.

Example 18.1. Optimal operation of a public lottery

In example 12.1 a function minimisation problem has been described which arises in an attempt to operate a lottery in a way which maximises revenue per unit
