Real-life problems are introduced side by side with their language representations (a distinction that is deliberately blurred), and their algorithmic questions are examined extensively. There is a separate NP-completeness chapter with a new, extensive, and, we think, pedagogically helpful suite of NP-completeness reductions, culminating with the equivalence problem for regular expressions -- closing a full circle to the first subject of the book. As mentioned above, the book ends with a section on algorithmic techniques for "coping with NP-completeness."

There are no logic chapters in the new edition. This was a difficult decision, made for two reasons: According to all evidence, these were the least read and taught chapters of the book; and there are now books that treat this subject better. However, there is extensive treatment of Boolean logic and its satisfiability problems in Chapter 6.

Overall, proofs and exposition have been simplified and made more informal at some key points.
On several occasions, as in the proof of the equivalence of context-free languages and pushdown automata, long technical proofs of inductive statements have become exercises. There are problems following each section.

As a result of these changes, there is now at least one more way of teaching out of the material of this book (besides the ones outlined in the first edition, and the ones that emerged from its use): A semester-length course aiming at the coverage of the basics of both the theory of computation and algorithms may be based on a selection of material from Chapters 2 through 7.

We want to express our sincere thanks to all of our students and colleagues who provided feedback, ideas, errors, and corrections during these fifteen years; it is impossible to come up with a complete list.
Special thanks to Martha Sideri for her help with the revision of Chapter 3. Also, many thanks to our editor, Alan Apt, and the people at Prentice-Hall -- Barbara Kraemer, Sondra Chavez, and Bayani de Leon -- who have been so patient and helpful.

In the present second printing, many errors have been corrected. We are indebted to Carl Smith, Elaine Rich, Ray Miller, James Grimmelmann, Rocio Guillen, Paliath Narendran,
Kuo-liang Chung, Zhizhang Shen, Hua Ren, Charles Wells, Eric Thomson, Eric Fried, Jeremy Dawson, and especially Mikkel Nygaard Hansen, for pointing out errors to us, and to Barbara Taylor-Laino for making sure they were corrected.

Finally, we would appreciate receiving error reports or other comments, preferably by electronic mail to the address elements@cs.berkeley.edu. Confirmed errors, corrections, and other information about the book can also be obtained by writing to this address.

ELEMENTS OF THE THEORY OF COMPUTATION

Introduction

Look around you. Computation happens everywhere, all the time, initiated by everybody, and affecting us all.
Computation can happen because computer scientists over the past decades have discovered sophisticated methods for managing computer resources, enabling communication, translating programs, designing chips and databases, and creating computers and programs that are faster, cheaper, easier to use, and more secure.

As is usually the case with all major disciplines, the practical successes of computer science build on its elegant and solid foundations. At the basis of the physical sciences lie fundamental questions such as what is the nature of matter? and what is the basis and origin of organic life? Computer science has its own set of fundamental questions: What is an algorithm? What can and what cannot be computed? When should an algorithm be considered practically feasible? For more than sixty years (starting even before the advent of the electronic computer) computer scientists have been pondering these questions, and coming up with ingenious answers that have deeply influenced computer science.

The purpose of this book is to introduce you to these fundamental ideas, models, and results that permeate computer science, the basic paradigms of our field.
They are worth studying, for many reasons. First, much of modern computer science is based more or less explicitly on them -- and much of the rest should... Also, these ideas and models are powerful and beautiful, excellent examples of mathematical modeling that is elegant, productive, and of lasting value. Besides, they are so much a part of the history and the "collective subconscious" of our field, that it is hard to understand computer science without first being exposed to them.

It probably comes as no surprise that these ideas and models are mathematical in nature.
Although a computer is undeniably a physical object, it is also true that very little that is useful can be said of its physical aspects, such as its molecules and its shape; the most useful abstractions of a computer are clearly mathematical, and so the techniques needed to argue about them are necessarily likewise. Besides, practical computational tasks require the ironclad guarantees that only mathematics provides (we want our compilers to translate correctly, our application programs to eventually terminate, and so on). However, the mathematics employed in the theory of computation is rather different from the mathematics used in other applied disciplines. It is generally discrete, in that the emphasis is not on real numbers and continuous variables, but on finite sets and sequences.
It is based on very few and elementary concepts, and draws its power and depth from the careful, patient, extensive, layer-by-layer manipulation of these concepts -- just like the computer. In the first chapter you will be reminded of these elementary concepts and techniques (sets, relations, and induction, among others), and you will be introduced to the style in which they are used in the theory of computation.

The next two chapters, Chapters 2 and 3, describe certain restricted models of computation capable of performing very specialized string manipulation tasks, such as telling whether a given string, say the word punk, appears in a given text, such as the collective works of Shakespeare; or for testing whether a given string of parentheses is properly balanced -- like () and (())(), but not )(). These restricted computational devices (called finite-state automata and pushdown automata, respectively) actually come up in practice as very useful and highly optimized components of more general systems such as circuits and compilers.
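Both tasks are easy to try out for yourself. The sketch below (our own illustration, not taken from the book; the function names `occurs` and `balanced` are ours) mimics the two devices: substring detection with a finite-state automaton whose state is the length of the pattern prefix matched so far, and the parenthesis test with a single counter standing in for a pushdown automaton's stack.

```python
def occurs(pattern: str, text: str) -> bool:
    """Decide whether `pattern` occurs in `text`, finite-automaton style.

    The automaton's state is the length of the longest prefix of
    `pattern` that matches a suffix of the text read so far.
    """
    state = 0
    for ch in text:
        # Transition: extend the matched prefix by ch, then shrink from
        # the front until it is again a prefix of the pattern.
        candidate = pattern[:state] + ch
        while candidate and not pattern.startswith(candidate):
            candidate = candidate[1:]
        state = len(candidate)
        if state == len(pattern):   # accepting state: pattern found
            return True
    return False


def balanced(s: str) -> bool:
    """Decide whether a string of parentheses is properly balanced.

    The counter `depth` plays the role of a pushdown automaton's stack:
    '(' pushes, ')' pops; the stack must never underflow and must be
    empty when the input is exhausted.
    """
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:           # a ')' with no matching '('
                return False
    return depth == 0
```

For example, `occurs("punk", "cyberpunk")` accepts while `occurs("punk", "puns")` rejects, and `balanced("(())()")` accepts while `balanced(")()")` rejects. Note that `balanced` needs its (one-symbol) stack: no fixed finite set of states can count arbitrarily deep nesting, which is exactly the gap between the two models that Chapters 2 and 3 make precise.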
Here they provide fine warm-up exercises in our quest for a formal, general definition of an algorithm. Furthermore, it is instructive to see how the power of these devices waxes and wanes (or, more often, is preserved) with the addition or removal of various features, most notably of nondeterminism, an intriguing aspect of computation which is as central as it is (quite paradoxically) unrealistic.

In Chapter 4 we study general models of algorithms, of which the most basic is the Turing machine,† a rather simple extension of the string-manipulating devices of Chapters 2 and 3 which turns out to be, surprisingly, a general framework for describing arbitrary algorithms. In order to argue this point, known as the Church-Turing thesis, we introduce more and more elaborate models of computation (more powerful variants of the Turing machine, even a random access Turing machine and recursive definitions of numerical functions), and show that they are all precisely equivalent in power to the basic Turing machine model.

The following chapter deals with undecidability, the surprising property of certain natural and well-defined computational tasks to lie provably beyond the reach of algorithmic solution.

†Named after Alan M. Turing (1912-1954), the brilliant English mathematician and philosopher whose seminal paper in 1936 marked the beginning of the theory of computation (and whose image, very appropriately, adorns the cover of this book). Turing also pioneered the fields of artificial intelligence and chess-playing by computer, as well as that of morphogenesis in biology, and was instrumental in breaking Enigma, the German naval code, during World War II. For more on his fascinating life and times (and on his tragic end at the hands of official cruelty and bigotry) see the book Alan Turing: The Enigma, by Andrew Hodges, New York: Simon & Schuster, 1983.