Need this statement be true if the requirement of finiteness is lifted?

1.5.6. Show that in any group of at least two people there are at least two persons that have the same number of acquaintances within the group. (Use the pigeonhole principle.)

1.5.7. Suppose we try to prove, by an argument exactly parallel to the proof of Theorem 1.5.2, that the set of all finite subsets of N is uncountable. What goes wrong?

1.5.8. Give examples to show that the intersection of two countably infinite sets can be either finite or countably infinite, and that the intersection of two uncountable sets can be finite, countably infinite, or uncountable.

1.5.9. Show that the difference of an uncountable set and a countable set is uncountable.

1.5.10. Show that if S is any set, then there is a one-to-one function from S to 2^S, but not vice versa.

1.5.11. Show that the set of all real numbers in the interval [0,1] is uncountable. (Hint: It is well known that each such number can be written in binary notation as an infinite sequence of 0s and 1s, such as .0110011100000... Assume that an enumeration of these sequences exists, and create a "diagonal" sequence by "flipping" the ith bit of the ith sequence.)


1.6 CLOSURES AND ALGORITHMS

Consider the two directed graphs R and R* in Figure 1-9(a) and (b). R* contains R; also, R* is reflexive and transitive (whereas R is neither). In fact, it is easy to see that R* is the smallest possible directed graph that has these properties; that is, it contains R, is reflexive, and is transitive (by "smallest" we mean the one with the fewest edges).
For this reason, R* is called the reflexive transitive closure of R. We next define this useful concept formally:

[Figure 1-9: (a) the relation R; (b) its reflexive transitive closure R*]

Definition 1.6.1: Let R ⊆ A² be a directed graph defined on a set A. The reflexive transitive closure of R is the relation

    R* = {(a, b) : a, b ∈ A and there is a path from a to b in R}.

Notice the interesting contrast between the definition "from below" articulated in Definition 1.6.1 and the informal definition "from above" whereby we introduced the reflexive transitive closure (the smallest relation that contains R and is reflexive and transitive).
It is perhaps intuitively clear that the two definitions are equivalent. Towards the end of this section we shall study more closely such "definitions from above," and the reason why they always correspond to an alternative definition "from below."

Algorithms

Definition 1.6.1 immediately suggests an algorithm for computing the reflexive transitive closure R* of any given binary relation R over some finite set A = {a1, a2, ..., an}:

    Initially R* := ∅
    for i = 1, ..., n do
        for each i-tuple (b1, ..., bi) ∈ A^i do
            if (b1, ..., bi) is a path in R then add (b1, bi) to R*.

The rigorous study of algorithms is in some sense what this book is all about; but to give a formal definition of an algorithm at this point would be to spoil the nice story we have to tell. Until a formal general model for algorithms is introduced, we shall be treating algorithms informally and somewhat nonrigorously, gradually increasing our insight and familiarity.
It will then be clear that our definition of an algorithm, when its time comes, indeed captures correctly this important concept. Accordingly, this section only contains an intuitive treatment of algorithms and an informal introduction to their analysis.

Fortunately, it is easy to tell an algorithm when you see one. The procedure we describe above for computing R* is a detailed and unambiguous sequence of instructions that produces a result: what we call R*. It consists of elementary steps which we may reasonably assume are easy to carry out: initializing R* to ∅, adding new elements to R*, testing whether (bj, bj+1) ∈ R; this latter has to be done i - 1 times in the last line of the algorithm to test whether (b1, ..., bi) is a path of R. We assume that somehow this algorithm operates on elements of A and R directly, so we need not specify how such sets and relations are represented for manipulation by the algorithm.

We shall next argue that the relation R* computed by this algorithm is indeed the reflexive transitive closure of R (that is, the algorithm is correct). The reason is that our algorithm is just a straightforward implementation of Definition 1.6.1. It adds to R*, initially empty, all pairs of elements of A that are connected by a path in R. Possible paths are checked one by one, in increasing length. We stop at sequences of length n because, by Theorem 1.5.1, if two nodes of A are connected by a path, then there is a path of length n or less that connects them.

It is thus clear that our algorithm will eventually terminate with the correct answer.
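For concreteness, here is one way the procedure just described might be transcribed in Python. This is only an illustrative sketch: the function name reflexive_transitive_closure, the representation of A and R as Python sets, and the use of itertools.product to enumerate the i-tuples are choices made here, not taken from the text.

    from itertools import product

    def reflexive_transitive_closure(A, R):
        """Compute R* by the path-enumeration procedure described above:
        for i = 1, ..., n, examine every i-tuple over A and record (b1, bi)
        whenever the tuple forms a path in R."""
        closure = set()
        n = len(A)
        for i in range(1, n + 1):
            for t in product(A, repeat=i):          # every i-tuple in A^i
                # (b1, ..., bi) is a path iff each consecutive pair is an edge;
                # for i = 1 the condition holds vacuously, which yields reflexivity
                if all((t[j], t[j + 1]) in R for j in range(i - 1)):
                    closure.add((t[0], t[-1]))
        return closure

    # Example: A = {a1, a2, a3} with edges a1 -> a2 -> a3
    A = {"a1", "a2", "a3"}
    R = {("a1", "a2"), ("a2", "a3")}
    print(sorted(reflexive_transitive_closure(A, R)))
    # [('a1', 'a1'), ('a1', 'a2'), ('a1', 'a3'), ('a2', 'a2'), ('a2', 'a3'), ('a3', 'a3')]

As the sketch makes plain, the procedure is correct but far from efficient: it examines every tuple of length up to n over A.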
One question that will prove most important and relevant to our concerns in this book is, after how many steps will it terminate? In the last part of the book we shall develop a whole theory of how the answer to such questions is calculated, and when the result is deemed satisfactory. But let us proceed informally for now.

What we need is an indication of how many elementary steps, how much "time," the algorithm requires when presented with a relation R as an input. As it is reasonable to expect that the algorithm will take more time on larger input relations, the answer will depend on how large the input relation is; more concretely, it will depend on the number n of elements in the set A. Thus, we are seeking a function f : N → N such that, for each n ≥ 1, if the algorithm is presented with a binary relation R ⊆ A × A with |A| = n, it will terminate after at most f(n) steps. As is typical in the analysis of algorithms, we shall allow f(n) to be a rough overestimate, as long as it has the correct rate of growth. Rates of growth are thus our next topic.

The Growth of Functions

Of these three functions from the set of natural numbers to itself, which is the largest?

    f(n) = 1,000,000·n;   g(n) = 10·n³;   h(n) = 2^n

Although for all values of n up to about a dozen we have f(n) > g(n) > h(n), it should be intuitively clear that the correct ranking is the exact opposite; if we take n large enough we shall eventually have f(n) < g(n) < h(n).
In this subsection we shall develop the concepts necessary for ranking such functions according to their ultimate potential for producing large values.

Definition 1.6.2: Let f : N → N be a function from the natural numbers to the natural numbers. The order of f, denoted O(f), is the set of all functions g : N → N with the property that there are positive natural numbers c > 0 and d > 0 such that, for all n ∈ N, g(n) ≤ c·f(n) + d. If in fact this inequality holds for all n, we say that g(n) ∈ O(f(n)) with constants c and d.

If for two functions f, g : N → N we have that f ∈ O(g) and g ∈ O(f), then we write f ≍ g. It is clear that the relation ≍ defined on functions is an equivalence relation: it is reflexive (because always f ∈ O(f), with constants 1 and 0), and it is symmetric (because the roles of f and g are completely interchangeable in the definition of ≍). Finally, it is transitive: suppose that f ∈ O(g) with constants c, d, and g ∈ O(h) with constants c', d'. Then for all n,

    f(n) ≤ c·g(n) + d ≤ c·(c'·h(n) + d') + d = (c·c')·h(n) + (d + c·d').

Thus f ∈ O(h) with constants c·c' and d + c·d'.

Thus, all functions from the set of natural numbers to itself are partitioned by ≍ into equivalence classes.
The equivalence class of f with respect to ≍ is called the rate of growth of f.

Example 1.6.1: Consider the polynomial f(n) = 31n² + 17n + 3. We claim that f(n) ∈ O(n²). To see this, notice that n² ≥ n, and so f(n) ≤ 48n² + 3, and thus f(n) ∈ O(n²) with constants 48 and 3. Of course, also n² ∈ O(f(n)), with constants 1 and 0. Hence n² ≍ 31n² + 17n + 3, and the two functions have the same rate of growth.

Similarly, for any polynomial of degree d with nonnegative coefficients

    f(n) = a_d·n^d + a_{d-1}·n^{d-1} + ... + a_1·n + a_0,

where a_i ≥ 0 for all i, and a_d > 0, it is easy to prove that f(n) ∈ O(n^d), with constants a_1 + a_2 + ... + a_d and a_0.
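The general claim can be checked in one line (a verification added here, not spelled out in the text): for n ≥ 1 every power n^i with 1 ≤ i ≤ d satisfies n^i ≤ n^d, so

    f(n) = a_d·n^d + ... + a_1·n + a_0 ≤ (a_1 + a_2 + ... + a_d)·n^d + a_0,

and for n = 0 the inequality simply reads a_0 ≤ a_0.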
All polynomials of the same degree have the same rate of growth.

Consider then two polynomials with different degrees; do they also have the same rate of growth? The answer here is negative. Since all polynomials with the same degree have the same rate of growth, it suffices to consider the two simplest representatives, namely two polynomials of the form n^i and n^j, with 0 < i < j. Obviously, n^i ∈ O(n^j), with constants 1 and 0. We shall prove that n^j ∉ O(n^i).

Suppose for the sake of contradiction that indeed n^j ∈ O(n^i) with constants c and d.
That is, for all n ∈ N, n^j ≤ c·n^i + d. But this is easily found to be absurd by trying n = c + d:

    (c + d)^j ≥ (c + d)^{i+1} = c·(c + d)^i + d·(c + d)^i > c·(c + d)^i + d,

since j ≥ i + 1 and (c + d)^i > 1.

To summarize, for any two polynomials f and g, if they have the same degree then f ≍ g. Otherwise, if g has larger degree than f, then f ∈ O(g) but g ∉ O(f); that is, g has a higher rate of growth than f. ◊

Example 1.6.2: The previous example suggests that the rate of growth of a polynomial is captured by its degree. The larger the degree of f, the higher the rate of growth. But there are functions with rate of growth higher than that of any polynomial. A simple example is the exponential function 2^n. Let us first establish that, for all n ∈ N, n ≤ 2^n.
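One standard way to establish this bound is by induction on n (a sketch added here; the book's own argument may differ):

    Base case: for n = 0 we have 0 ≤ 1 = 2^0.
    Inductive step: if n ≤ 2^n, then n + 1 ≤ 2^n + 1 ≤ 2^n + 2^n = 2^{n+1}.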