Elements of the Theory of Computation, 2nd edition, Lewis and Papadimitriou (excerpt).
In an optimization problem we can also think that we have an exponentially large set of candidate solutions; however, this time each solution has a cost† associated with it, and we wish to find the candidate solution in S0 with the smallest cost.

The branch-and-bound algorithm is in general the one shown below (the algorithm shown only returns the optimal cost, but it can be easily modified to return the optimal solution).

A := {S0}, bestsofar := ∞
while A is not empty do
    choose a subproblem S and delete it from A
    choose a way of branching out of S, say to subproblems S1, ..., Sr
    for each subproblem Si in this list do
        if |Si| = 1 (that is, Si is a complete solution) then update bestsofar
        else if lowerbound(Si) < bestsofar then add Si to A
return bestsofar

The algorithm always remembers the smallest cost of any solution seen so far, initially ∞ (performance often improves a lot if bestsofar is initialized to the cost of a solution obtained by another heuristic).
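The pseudocode above can be sketched in Python as a generic minimization routine. The callback names (branch, is_complete, cost, lowerbound) and the tiny four-city TRAVELING SALESMAN instance used in the demonstration are illustrative assumptions, not from the text; the trivial lower bound used here is just the cost of the fixed part of the tour.

```python
import math

def branch_and_bound(root, branch, is_complete, cost, lowerbound):
    """Generic branch-and-bound for minimization, following the pseudocode:
    keep a set of active subproblems, branch, and prune by lower bound."""
    active = [root]                  # the set A of active subproblems
    best_so_far = math.inf
    while active:
        s = active.pop()             # choose a subproblem S and delete it from A
        for sub in branch(s):        # branch out of S into subproblems
            if is_complete(sub):     # a complete solution: update bestsofar
                best_so_far = min(best_so_far, cost(sub))
            elif lowerbound(sub) < best_so_far:
                active.append(sub)   # sub may still contain a better solution
    return best_so_far

# Demonstration on a small hypothetical 4-city TSP instance.
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
n = len(D)

def branch(path):                    # extend the partial tour by one city
    return [path + (c,) for c in range(n) if c not in path]

def is_complete(path):
    return len(path) == n

def cost(path):                      # tour cost, including the closing link
    return sum(D[path[i]][path[i + 1]] for i in range(n - 1)) + D[path[-1]][path[0]]

def path_cost(path):                 # trivial lower bound: cost of the fixed part
    return sum(D[path[i]][path[i + 1]] for i in range(len(path) - 1))

print(branch_and_bound((0,), branch, is_complete, cost, path_cost))
```

As the text notes, initializing best_so_far with the cost of a heuristic solution, instead of ∞, typically lets the pruning test discard far more subproblems.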

Every time a full solution to the original problem is found, bestsofar is updated. The key ingredient of a branch-and-bound algorithm (besides the design decisions (1) and (2) it shares with backtracking) is a method for obtaining a lower bound on the cost of any solution in a subproblem S. That is, the function lowerbound(S) returns a number that is guaranteed to be less than or equal to the lowest cost of any solution in S. The branch-and-bound algorithm above will always terminate with the optimal solution. This is because the only subproblems left unconsidered are those for which lowerbound(Si) ≥ bestsofar; that is, those subproblems whose optimal solution is provably no better than the best solution we have seen so far.

† We shall assume that the optimization problem in question is a minimization problem; maximization problems can be treated in a very similar way.

Naturally, there are many ways of obtaining lower bounds (lowerbound(S) = 0 would usually do). The point is that, if lowerbound(S) is a sophisticated algorithm returning a value that is usually very close to the optimum solution in S, then the branch-and-bound algorithm is likely to perform very well, that is, to terminate reasonably fast.

Example 7.4.7: Let us adapt the backtracking algorithm we developed for HAMILTON CYCLE to obtain a branch-and-bound algorithm for the TRAVELING SALESMAN PROBLEM. As before, a subproblem S is characterized by a path from a to b through a set T of cities.

What is a reasonable lower bound? Here is one idea: for each city outside T ∪ {a, b}, calculate the sum of its two shortest distances to another city outside T. For a and b, calculate their shortest distance to another city outside T. It is not hard to prove (see Problem 7.4.4) that half the sum of these numbers, plus the cost of the already fixed path from a to b through T, is a valid lower bound on the cost of any tour in the subproblem S. The branch-and-bound algorithm is now completely specified. There are far more sophisticated lower bounds for the TRAVELING SALESMAN PROBLEM. ◊

Local Improvement

Our final family of algorithms is inspired by evolution: what if we allow a solution of an optimization problem to change a little, and adopt the new solution if it has improved cost? Concretely, let S0 be the set of candidate solutions in an instance of an optimization problem (again, we shall assume that it is a minimization problem).
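This lower bound can be sketched in Python. The path representation (a tuple of city indices starting at a and ending at b) and the requirement that at least two cities remain outside the path are assumptions of the illustration; for the two endpoints, the sketch takes the shortest distance to a city outside the whole path, which is valid because the tour's remaining edge at each endpoint must go to such a city.

```python
def tsp_lowerbound(D, path):
    """Lower bound for the subproblem of tours extending `path` (a fixed path
    from a to b through the interior cities T).  D[i][j] is the distance
    between cities i and j.  Assumes at least two cities lie outside the path."""
    n = len(D)
    a, b = path[0], path[-1]
    interior = set(path[1:-1])                       # the set T
    outside = [c for c in range(n)
               if c not in interior and c not in (a, b)]
    fixed = sum(D[path[i]][path[i + 1]] for i in range(len(path) - 1))
    total = 0.0
    for c in outside:
        # two shortest distances from c to another city outside T
        d0, d1 = sorted(D[c][d] for d in outside + [a, b] if d != c)[:2]
        total += d0 + d1
    for endpoint in (a, b):
        # shortest distance from a path endpoint to a city outside the path
        total += min(D[endpoint][c] for c in outside)
    return fixed + total / 2

D = [[0, 2, 9, 10],                                  # hypothetical 4-city instance
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_lowerbound(D, (0, 1)))                     # bound for tours starting 0-1
```

On this instance the bound for the subproblem of tours beginning with the link 0-1 evaluates to 16.5, while the cheapest such tour costs 18, so the bound holds.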

Define a neighborhood relation N on the set of solutions, N ⊆ S0 × S0; it captures the intuitive notion of "changing a little." For s ∈ S0, the set {s' : (s, s') ∈ N} is called the neighborhood of s. The algorithm is simply this (see Figure 7-16 for a suggestive depiction of the operation of local improvement algorithms):

s := initialsolution
while there is a solution s' such that N(s, s') and cost(s') < cost(s) do: s := s'
return s

That is, the algorithm keeps improving s by replacing it with a neighbor s' with a better cost, until there is no s' in the neighborhood of s with better cost; in the latter case we say that s is a local optimum. Obviously, a local optimum is not guaranteed to be an optimal solution, unless of course N = S0 × S0. The quality of local optima obtained and the running time of the algorithm both depend critically on N: the larger the neighborhoods, the better the local optimum; on the other hand, large neighborhoods imply that the iteration of the algorithm (an execution of the while loop, and the ensuing search through the neighborhood of the current solution s) will be slower.
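The loop above can be sketched in Python as a routine parameterized by a neighborhood and a cost function. As a demonstration, the code uses tours of a hypothetical four-city TRAVELING SALESMAN instance under the 2-change neighborhood (two tours are neighbors when they differ in exactly two links, obtained here by reversing a segment of the tour); the function names are assumptions of the illustration.

```python
def local_improvement(initial, neighbors, cost):
    """Keep replacing s with a cheaper neighbor until s is a local optimum."""
    s = initial
    improved = True
    while improved:
        improved = False
        for t in neighbors(s):
            if cost(t) < cost(s):
                s, improved = t, True    # adopt the first improving neighbor
                break
    return s

# Demonstration: the 2-change neighborhood on tours.  Reversing a segment
# of the tour removes the two links at the segment's ends and replaces
# them with two new ones; all other links are preserved.
D = [[0, 2, 9, 10],                      # hypothetical 4-city instance
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]

def tour_cost(tour):
    return sum(D[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def two_change(tour):
    n = len(tour)
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            if i == 1 and j == n - 1:
                continue                 # reversing the whole tail: same tour
            yield tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

best = local_improvement((0, 2, 1, 3), two_change, tour_cost)
print(best, tour_cost(best))
```

Starting from the tour 0-2-1-3 of cost 29, the search descends to a local optimum of cost 18, which on this small instance is also the global optimum.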

Local improvement algorithms seek a favorable compromise in this trade-off. As usual, there are no general principles to guide us in designing a good neighborhood; the choice seems very problem-dependent, even instance-dependent, and is best made through experimentation.

Figure 7-16: Once the neighborhood relation has been fixed, the solutions of an optimization problem can be pictured as an energy landscape, in which local optima are depicted as valleys. Local improvement heuristics jump from solution to solution, until a local optimum is found.

Another issue that affects the performance of a local improvement algorithm is the method used in finding s'. Do we adopt the first better solution we find in the neighborhood of s, or do we wait to find the best? Is the longer iteration justified by the speed of descent, and do we want speedy descent anyway? Finally, the performance of a local improvement algorithm also depends on the procedure initialsolution. It is not clear at all that better initial solutions will result in better performance; often a mediocre starting point is preferable, because it gives the algorithm more freedom to explore the solution space (see Figure 7-16). Incidentally, the procedure initialsolution should best be randomized, that is, able to generate different initial solutions when called many times.

This allows us to restart the local improvement algorithm above many times, and obtain many local optima.

Example 7.4.8: Let us take again the TRAVELING SALESMAN PROBLEM. When should we consider two tours as neighbors? Since a tour can be considered as a set of n undirected inter-city "links," one plausible answer is, when they share all but very few links. Two is the minimum possible number of links in which two tours may differ, and this suggests a well-known neighborhood relation for the TRAVELING SALESMAN PROBLEM that we call 2-change (see Figure 7-17).

That is, two tours are related by N if and only if they differ in just two links. The local improvement algorithm using the 2-change neighborhood performs reasonably well in practice. However, much better results are achieved by adopting the 3-change neighborhood; furthermore, it is reported in the literature that 4-change does not return sufficiently better tours to justify the increase in iteration time.

Figure 7-17

Perhaps the best heuristic algorithm currently known for the TRAVELING SALESMAN PROBLEM, the Lin-Kernighan algorithm, relies on λ-change, a neighborhood so sophisticated and complex that it does not even fit in our framework (whether two solutions are neighbors depends on the distances). As its name suggests, λ-change allows arbitrarily many link changes in one step (but of course, not all possible such changes are explored; this would make the iteration exponentially slow). ◊

Example 7.4.9: In order to develop a local improvement algorithm for MAX SAT (the version of SATISFIABILITY in which we wish to satisfy as many clauses as possible; recall Theorem 7.2.4), we might choose to consider two truth assignments to be related by N if they only differ in the value of a single variable.
This immediately defines an interesting, and empirically successful, local improvement algorithm for MAX SAT.
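A sketch of this MAX SAT local search in Python. The clause encoding (a clause is a list of nonzero integers, where the literal v stands for the variable x_v and -v for its negation) and the small demonstration formula are assumptions of the illustration; the sketch adopts the best improving flip at each step.

```python
def num_satisfied(clauses, assignment):
    """Count clauses containing at least one true literal.
    assignment maps each variable index to a boolean value."""
    return sum(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def maxsat_local_search(clauses, assignment):
    """Flip-one-variable local search for MAX SAT: repeatedly flip the
    variable whose flip satisfies the most clauses, until no flip helps."""
    while True:
        current = num_satisfied(clauses, assignment)
        best_var, best_score = None, current
        for v in assignment:
            assignment[v] = not assignment[v]      # tentative flip of x_v
            score = num_satisfied(clauses, assignment)
            assignment[v] = not assignment[v]      # undo the flip
            if score > best_score:
                best_var, best_score = v, score
        if best_var is None:                       # local optimum reached
            return assignment, current
        assignment[best_var] = not assignment[best_var]

# Demonstration on a hypothetical formula over x1, x2, x3:
# (x1 or x2), (not x1), (not x2 or x3), (not x3)
clauses = [[1, 2], [-1], [-2, 3], [-3]]
assignment, satisfied = maxsat_local_search(clauses, {1: True, 2: True, 3: True})
print(satisfied)
```

On this formula the search reaches an assignment satisfying three of the four clauses, which is the maximum possible here, since the four clauses cannot all be satisfied simultaneously.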

It is apparently advantageous in this case to adopt as s' the best neighbor of s, instead of the first one found that is better than s. Also, it has been reported that it pays to make "lateral moves" (adopt a solution even if the inequality in the third line of the algorithm is not strict). This heuristic is considered a very effective way of obtaining good solutions to MAX SAT, and is often used to solve SATISFIABILITY (in this use, it is hoped that in the end the algorithm will return a truth assignment that satisfies all clauses). ◊

An interesting twist on local improvement algorithms is a method called simulated annealing. As the name suggests, the inspiration comes from the physics of cooling solids.

Simulated annealing allows the algorithm to "escape" from bad local optima (see Figure 7-18, and compare with Figure 7-17) by performing occasional cost-increasing changes.

Figure 7-18: Simulated annealing has an advantage over the basic local improvement algorithm because its occasional cost-increasing moves help it avoid early convergence in a bad local optimum. This often comes at a great loss of efficiency.

s := initialsolution, T := T0
repeat
    generate a random solution s' such that N(s, s'), and let Δ := cost(s') − cost(s)
    if Δ < 0 then s := s', else s := s' with probability e^(−Δ/T)
    update(T)
until T = 0
return the best solution seen

Intuitively, the probability that a cost-increasing change will be adopted is determined by the amount of the cost increase Δ, as well as by an important parameter T, the temperature. The higher the temperature, the more aggressively more expensive solutions are pursued.
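A runnable sketch of the simulated annealing loop. The geometric cooling rule used for update(T) and the stopping threshold T_min (standing in for "until T = 0") are assumptions of the illustration, since the text leaves the annealing schedule unspecified; the toy one-dimensional minimization used in the demonstration is likewise only illustrative.

```python
import math
import random

def simulated_annealing(initial, random_neighbor, cost,
                        T0=10.0, alpha=0.95, T_min=1e-3):
    """Simulated annealing for minimization.  A cost-increasing move of size
    delta is accepted with probability e^(-delta/T); the temperature T cools
    geometrically, an assumed schedule standing in for the text's update(T)."""
    s = initial
    best, best_cost = s, cost(s)
    T = T0
    while T > T_min:                       # stand-in for "until T = 0"
        t = random_neighbor(s)
        delta = cost(t) - cost(s)
        if delta < 0 or random.random() < math.exp(-delta / T):
            s = t                          # accept, possibly an uphill move
            if cost(s) < best_cost:
                best, best_cost = s, cost(s)
        T *= alpha                         # update(T): geometric cooling
    return best                            # the best solution seen

# Demonstration: minimize (x - 3)^2 over the integers, with neighbors
# x - 1 and x + 1, starting far from the minimum.
result = simulated_annealing(30,
                             lambda x: x + random.choice((-1, 1)),
                             lambda x: (x - 3) ** 2)
print(result)
```

Because the routine returns the best solution seen rather than the final one, its answer is never worse than the initial solution, even if the walk later wanders uphill.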

The way in which T is updated in the penultimate line of the algorithm (the annealing schedule of the algorithm, as it is called) is perhaps the most crucial design decision in these algorithms, besides, of course, the choice of neighborhood.

There are several other related genres of local improvement methods, many of them based, like the ones we described here, on some loose analogy with physical or biological systems (genetic algorithms, neural networks, etc.; see the references).

From the point of view of the formal criteria that we have developed in this book, the local improvement algorithms and their many variants are totally unattractive: they do not in general return the optimum solution, they tend to have exponential worst-case complexity, and they are not even guaranteed to return solutions that are in any well-defined sense "close" to the optimum.
