Introduction to Algorithms (2nd edition), Thomas Cormen, Charles Leiserson, Ronald Rivest (text from page 8)
For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database. In some searching applications, searches for absent information may be frequent.

The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1 .. j - 1] to insert element A[j]? On average, half the elements in A[1 .. j - 1] are less than A[j], and half the elements are greater.

On average, therefore, we check half of the subarray A[1 .. j - 1], so t_j = j/2. If we work out the resulting average-case running time, it turns out to be a quadratic function of the input size, just like the worst-case running time.

In some particular cases, we shall be interested in the average-case or expected running time of an algorithm; in Chapter 5, we shall see the technique of probabilistic analysis, by which we determine expected running times. One problem with performing an average-case analysis, however, is that it may not be apparent what constitutes an "average" input for a particular problem. Often, we shall assume that all inputs of a given size are equally likely. In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis.
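As a quick illustration of this average-case behavior, the following Python sketch (a hypothetical experiment added to this excerpt, not part of the book's text; the function name insertion_sort_with_count is this note's choice) counts how many elements insertion sort examines while placing each A[j] on randomly ordered inputs. The totals grow roughly quadratically with n, matching the t_j ≈ j/2 argument above.

import random

def insertion_sort_with_count(a):
    """Sort a in place and return the number of element comparisons
    made while finding the slot for each key."""
    comparisons = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            comparisons += 1          # a[i] examined and found greater than key
            a[i + 1] = a[i]
            i -= 1
        if i >= 0:
            comparisons += 1          # the final test that stopped the while loop
        a[i + 1] = key
    return comparisons

if __name__ == "__main__":
    random.seed(1)
    for n in (100, 200, 400, 800):
        trials = [insertion_sort_with_count([random.random() for _ in range(n)])
                  for _ in range(20)]
        avg = sum(trials) / len(trials)
        # The average stays close to n*(n - 1)/4, i.e. t_j ≈ j/2 summed over all j.
        print(n, round(avg), n * (n - 1) // 4)

Doubling n roughly quadruples the average count, which is exactly the quadratic growth claimed above.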

Order of growth

We used some simplifying abstractions to ease our analysis of the INSERTION-SORT procedure. First, we ignored the actual cost of each statement, using the constants c_i to represent these costs. Then, we observed that even these constants give us more detail than we really need: the worst-case running time is an² + bn + c for some constants a, b, and c that depend on the statement costs c_i.

We thus ignored not only the actual statement costs, but also the abstract costs c_i.

We shall now make one more simplifying abstraction. It is the rate of growth, or order of growth, of the running time that really interests us. We therefore consider only the leading term of a formula (e.g., an²), since the lower-order terms are relatively insignificant for large n. We also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. Thus, we write that insertion sort, for example, has a worst-case running time of Θ(n²) (pronounced "theta of n-squared"). We shall use Θ-notation informally in this chapter; it will be defined precisely in Chapter 3.

We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, this evaluation may be in error for small inputs. But for large enough inputs, a Θ(n²) algorithm, for example, will run more quickly in the worst case than a Θ(n³) algorithm.
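To see this crossover numerically, the following sketch (with made-up constant factors, purely for illustration; neither the constants nor the function names come from the book) compares a hypothetical Θ(n²) running time of 50n² against a Θ(n³) running time of 2n³. The cubic algorithm wins for small n but loses once n passes the crossover point.

def quadratic_cost(n):
    return 50 * n * n          # assumed constant factor 50 for the Θ(n²) algorithm

def cubic_cost(n):
    return 2 * n ** 3          # assumed constant factor 2 for the Θ(n³) algorithm

if __name__ == "__main__":
    for n in (5, 10, 20, 30, 100, 1000):
        q, c = quadratic_cost(n), cubic_cost(n)
        winner = "quadratic" if q < c else "cubic"
        print(f"n={n:5d}  50n²={q:12d}  2n³={c:14d}  faster: {winner}")
    # The crossover is at n = 25 (where 50n² = 2n³); beyond it the Θ(n²) algorithm always wins,
    # no matter which constant factors are chosen.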

Exercises

2.2-1

Express the function n³/1000 - 100n² - 100n + 3 in terms of Θ-notation.

2.2-2

Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2].

Continue in this manner for the first n - 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n - 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.
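For readers who want to experiment before writing their own pseudocode, here is a minimal Python sketch of the procedure described in Exercise 2.2-2 (the function name and the in-place style are this note's choices, not the book's): it repeatedly selects the smallest remaining element and swaps it into position, and the outer loop covers only the first n - 1 positions.

def selection_sort(a):
    """Sort list a in place by repeatedly selecting the smallest
    remaining element, as described in Exercise 2.2-2."""
    n = len(a)
    for i in range(n - 1):             # the last position is forced once the rest are placed
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a

print(selection_sort([5, 2, 4, 6, 1, 3]))   # -> [1, 2, 3, 4, 5, 6]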

2.2-3

Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.

2.2-4

How can we modify almost any algorithm to have a good best-case running time?

[4] There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, later in this book we might say "sort the points by x-coordinate," which, as we shall see, takes more than a constant amount of time. Also, note that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more.

That is, we separate the process of calling the subroutine (passing parameters to it, etc.) from the process of executing the subroutine.

[5] This characteristic does not necessarily hold for a resource such as memory. A statement that references m words of memory and is executed n times does not necessarily consume mn words of memory in total.

2.3 Designing algorithms

There are many ways to design algorithms.

Insertion sort uses an incremental approach: having sorted the subarray A[1 .. j - 1], we insert the single element A[j] into its proper place, yielding the sorted subarray A[1 .. j].

In this section, we examine an alternative design approach, known as "divide-and-conquer." We shall use divide-and-conquer to design a sorting algorithm whose worst-case running time is much less than that of insertion sort.

One advantage of divide-and-conquer algorithms is that their running times are often easily determined using techniques that will be introduced in Chapter 4.

2.3.1 The divide-and-conquer approach

Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related subproblems.

These algorithms typically follow a divide-and-conquer approach: they break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem.

The divide-and-conquer paradigm involves three steps at each level of the recursion:

• Divide the problem into a number of subproblems.
• Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.
• Combine the solutions to the subproblems into the solution for the original problem.

The merge sort algorithm closely follows the divide-and-conquer paradigm. Intuitively, it operates as follows.

• Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
• Conquer: Sort the two subsequences recursively using merge sort.
• Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion "bottoms out" when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.
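As a sketch of how these three steps might look in code (an illustration added to this excerpt, not the book's own pseudocode; the function name merge_sort and the use of Python's standard-library heapq.merge for the combine step are this note's choices), the recursion splits the subarray around its midpoint, sorts each half, and then merges the two sorted halves:

import heapq

def merge_sort(a, p, r):
    """Sort a[p .. r] in place following the Divide / Conquer / Combine steps."""
    if p < r:                                  # length-1 subsequences are already sorted
        q = (p + r) // 2                       # Divide: split around the midpoint
        merge_sort(a, p, q)                    # Conquer: sort the left half
        merge_sort(a, q + 1, r)                # Conquer: sort the right half
        # Combine: merge the two sorted halves (here via the standard library;
        # hand-written merges are sketched further below).
        a[p:r + 1] = heapq.merge(a[p:q + 1], a[q + 1:r + 1])

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
print(data)   # -> [1, 2, 2, 3, 4, 5, 6, 7]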

The key operation of the merge sort algorithm is the merging of two sorted sequences in the "combine" step. To perform the merging, we use an auxiliary procedure MERGE(A, p, q, r), where A is an array and p, q, and r are indices numbering elements of the array such that p ≤ q < r. The procedure assumes that the subarrays A[p .. q] and A[q + 1 .. r] are in sorted order. It merges them to form a single sorted subarray that replaces the current subarray A[p .. r].

Our MERGE procedure takes time Θ(n), where n = r - p + 1 is the number of elements being merged, and it works as follows. Returning to our card-playing motif, suppose we have two piles of cards face up on a table. Each pile is sorted, with the smallest cards on top.

We wish to merge the two piles into a single sorted output pile, which is to be face down on the table. Our basic step consists of choosing the smaller of the two cards on top of the face-up piles, removing it from its pile (which exposes a new top card), and placing this card face down onto the output pile.

We repeat this step until one input pile is empty, at which time we just take the remaining input pile and place it face down onto the output pile. Computationally, each basic step takes constant time, since we are checking just two top cards. Since we perform at most n basic steps, merging takes Θ(n) time.
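A direct transcription of this card-pile procedure might look as follows (a sketch; the pile framing and the name merge_piles are this note's, not the book's). It takes the smaller of the two top cards until one pile runs out, then appends whatever remains, performing at most n basic steps in total:

def merge_piles(left, right):
    """Merge two sorted 'piles' (smallest element first) into one sorted list,
    checking at each basic step whether either pile has been exhausted."""
    output = []
    i = j = 0
    while i < len(left) and j < len(right):    # both piles still have a top card
        if left[i] <= right[j]:
            output.append(left[i]); i += 1
        else:
            output.append(right[j]); j += 1
    output.extend(left[i:])                    # one pile is empty: place the rest
    output.extend(right[j:])                   # of the other pile in one go
    return output

print(merge_piles([2, 4, 5, 7], [1, 2, 3, 6]))   # -> [1, 2, 2, 3, 4, 5, 6, 7]

The pseudocode below avoids the exhaustion checks in the while condition by using sentinels.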

The following pseudocode implements the above idea, but with an additional twist that avoids having to check whether either pile is empty in each basic step. The idea is to put on the bottom of each pile a sentinel card, which contains a special value that we use to simplify our code. Here, we use ∞ as the sentinel value, so that whenever a card with ∞ is exposed, it cannot be the smaller card unless both piles have their sentinel cards exposed. But once that happens, all the nonsentinel cards have already been placed onto the output pile.

Since we know in advance that exactly r - p + 1 cards will be placed onto the output pile, we can stop once we have performed that many basic steps.

MERGE(A, p, q, r)
 1  n1 ← q - p + 1
 2  n2 ← r - q
 3  create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
 4  for i ← 1 to n1
 5       do L[i] ← A[p + i - 1]
 6  for j ← 1 to n2
 7       do R[j] ← A[q + j]
 8  L[n1 + 1] ← ∞
 9  R[n2 + 1] ← ∞
10  i ← 1
11  j ← 1
12  for k ← p to r
13       do if L[i] ≤ R[j]
14             then A[k] ← L[i]
15                  i ← i + 1
16             else A[k] ← R[j]
17                  j ← j + 1
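A rough Python transcription of this pseudocode (an addition to this excerpt: it uses 0-based indices, float('inf') as the ∞ sentinel, and the illustrative name merge_with_sentinels) is given below; the book's line-by-line explanation of the pseudocode follows it.

def merge_with_sentinels(A, p, q, r):
    """Transcription of MERGE(A, p, q, r) above, using 0-based indices and
    float('inf') as the sentinel placed at the bottom of each pile."""
    L = A[p:q + 1] + [float('inf')]       # lines 1, 3-5, 8: left pile plus sentinel
    R = A[q + 1:r + 1] + [float('inf')]   # lines 2, 3, 6-7, 9: right pile plus sentinel
    i = j = 0                             # lines 10-11
    for k in range(p, r + 1):             # lines 12-17: exactly r - p + 1 basic steps
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1

# Positions 0-7 are padding so the interesting subarray sits at indices 8 .. 15,
# echoing the call MERGE(A, 9, 12, 16) of Figure 2.3 in 0-based form.
A = [None] * 8 + [2, 4, 5, 7, 1, 2, 3, 6]
merge_with_sentinels(A, 8, 11, 15)
print(A[8:16])   # -> [1, 2, 2, 3, 4, 5, 6, 7]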

In detail, the MERGE procedure works as follows. Line 1 computes the length n1 of the subarray A[p .. q], and line 2 computes the length n2 of the subarray A[q + 1 .. r]. We create arrays L and R ("left" and "right"), of lengths n1 + 1 and n2 + 1, respectively, in line 3. The for loop of lines 4-5 copies the subarray A[p .. q] into L[1 .. n1], and the for loop of lines 6-7 copies the subarray A[q + 1 .. r] into R[1 .. n2]. Lines 8-9 put the sentinels at the ends of the arrays L and R. Lines 10-17, illustrated in Figure 2.3, perform the r - p + 1 basic steps by maintaining the following loop invariant:

• At the start of each iteration of the for loop of lines 12-17, the subarray A[p .. k - 1] contains the k - p smallest elements of L[1 .. n1 + 1] and R[1 .. n2 + 1], in sorted order. Moreover, L[i] and R[j] are the smallest elements of their arrays that have not been copied back into A.

Figure 2.3: The operation of lines 10-17 in the call MERGE(A, 9, 12, 16), when the subarray A[9 .. 16] contains the sequence ⟨2, 4, 5, 7, 1, 2, 3, 6⟩. After copying and inserting sentinels, the array L contains ⟨2, 4, 5, 7, ∞⟩, and the array R contains ⟨1, 2, 3, 6, ∞⟩. Lightly shaded positions in A contain their final values, and lightly shaded positions in L and R contain values that have yet to be copied back into A. Taken together, the lightly shaded positions always comprise the values originally in A[9 .. 16], along with the two sentinels. Heavily shaded positions in A contain values that will be copied over, and heavily shaded positions in L and R contain values that have already been copied back into A. (a)-(h) The arrays A, L, and R, and their respective indices k, i, and j prior to each iteration of the loop of lines 12-17. (i) The arrays and indices at termination. At this point, the subarray in A[9 .. 16] is sorted, and the two sentinels in L and R are the only two elements in these arrays that have not been copied into A.

We must show that this loop invariant holds prior to the first iteration of the for loop of lines 12-17, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.

• Initialization: Prior to the first iteration of the loop, we have k = p, so that the subarray A[p .. k - 1] is empty.
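To make the invariant concrete, the following sketch (an illustration added to this excerpt, not part of the book's text; 0-based indices, name merge_checking_invariant chosen here) re-runs the sentinel merge while asserting, before each iteration of the k loop, both parts of the invariant: that A[p .. k - 1] holds the k - p smallest values of L and R in sorted order, and that L[i] and R[j] are the smallest not-yet-copied elements of their arrays.

def merge_checking_invariant(A, p, q, r):
    """Sentinel merge that asserts the loop invariant of lines 12-17
    at the start of every iteration (0-based indices)."""
    L = A[p:q + 1] + [float('inf')]
    R = A[q + 1:r + 1] + [float('inf')]
    all_values = sorted(L[:-1] + R[:-1])       # the non-sentinel values, smallest first
    i = j = 0
    for k in range(p, r + 1):
        prefix = A[p:k]                                     # A[p .. k - 1] in the book's notation
        assert prefix == all_values[:k - p]                 # the k - p smallest values, in sorted order
        assert L[i] == min(L[i:]) and R[j] == min(R[j:])    # smallest not-yet-copied elements
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1

A = [2, 4, 5, 7, 1, 2, 3, 6]
merge_checking_invariant(A, 0, 3, 7)
print(A)   # -> [1, 2, 2, 3, 4, 5, 6, 7], with no assertion failures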
