A.W. Appel, J. Palsberg - Modern Compiler Implementation in Java (Second Edition), page 32
If this leaves any block not ending with a JUMP or CJUMP, then a JUMP to the next block's label is appended to the block. If any block has been left without a LABEL at the beginning, a new label is invented and stuck there. We will apply this algorithm to each function-body in turn.
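As a rough illustration, the block-forming pass just described can be sketched as follows. This is a simplified stand-in for the book's Canon.BasicBlocks, not the real thing: the Stm record and Kind tags here are invented for the sketch rather than taken from the compiler's Tree classes.

```java
import java.util.ArrayList;
import java.util.List;

public class BasicBlocksSketch {
    enum Kind { LABEL, JUMP, CJUMP, OTHER }

    // A stand-in for Tree.Stm: just a kind tag and a printable text.
    record Stm(Kind kind, String text) {}

    static int fresh = 0;
    static Stm newLabel() { return new Stm(Kind.LABEL, "L" + (fresh++)); }

    static List<List<Stm>> toBlocks(List<Stm> body) {
        List<List<Stm>> blocks = new ArrayList<>();
        List<Stm> cur = null;
        for (Stm s : body) {
            if (s.kind() == Kind.LABEL) {
                // A label begins a new block; first close the previous block
                // with a jump to this label if it does not already end in one.
                if (cur != null && endsOpen(cur))
                    cur.add(new Stm(Kind.JUMP, "JUMP " + s.text()));
                cur = new ArrayList<>();
                blocks.add(cur);
                cur.add(s);
            } else {
                if (cur == null) {           // block without a leading label:
                    cur = new ArrayList<>(); // invent one and stick it there
                    blocks.add(cur);
                    cur.add(newLabel());
                }
                cur.add(s);
                if (s.kind() == Kind.JUMP || s.kind() == Kind.CJUMP)
                    cur = null;              // a jump ends the current block
            }
        }
        if (cur != null && endsOpen(cur))    // fall off the end: jump to the
            cur.add(new Stm(Kind.JUMP, "JUMP done"));  // epilogue label
        return blocks;
    }

    static boolean endsOpen(List<Stm> b) {
        Kind k = b.get(b.size() - 1).kind();
        return k != Kind.JUMP && k != Kind.CJUMP;
    }
}
```

Each pass over the statement list therefore yields blocks that all begin with a LABEL and all end with a JUMP or CJUMP, which is exactly the invariant the later trace-building phase relies on.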
The procedure "epilogue" (which pops the stack and returns to the caller) will not be part of this body, but is intended to follow the last statement. When the flow of program execution reaches the end of the last block, the epilogue should follow. But it is inconvenient to have a "special" block that must come last and that has no JUMP at the end. Thus, we will invent a new label done - intended to mean the beginning of the epilogue - and put a JUMP(NAME done) at the end of the last block.

In the MiniJava compiler, the class Canon.BasicBlocks implements this simple algorithm.

TRACES

Now the basic blocks can be arranged in any order, and the result of executing the program will be the same - every block ends with a jump to the appropriate place.
We can take advantage of this to choose an ordering of the blocks satisfying the condition that each CJUMP is followed by its false label. At the same time, we can also arrange that many of the unconditional JUMPs are immediately followed by their target label. This will allow the deletion of these jumps, which will make the compiled program run a bit faster.

A trace is a sequence of statements that could be consecutively executed during the execution of the program. It can include conditional branches. A program has many different, overlapping traces.
For our purposes in arranging CJUMPs and false-labels, we want to make a set of traces that exactly covers the program: Each block must be in exactly one trace. To minimize the number of JUMPs from one trace to another, we would like to have as few traces as possible in our covering set.

A very simple algorithm will suffice to find a covering set of traces. The idea is to start with some block - the beginning of a trace - and follow a possible execution path - the rest of the trace.
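In Java, that covering idea might be sketched like this. Blocks are reduced to plain label strings and a successor map; the real Canon.TraceSchedule works on actual basic blocks, so this is only a sketch of the control structure.

```java
import java.util.*;

public class TraceSketch {
    // q: all blocks of the program, in some order.
    // succ: for each block, the blocks it can branch to.
    static List<List<String>> traces(List<String> q, Map<String, List<String>> succ) {
        Set<String> marked = new HashSet<>();
        List<List<String>> result = new ArrayList<>();
        for (String start : q) {
            if (marked.contains(start)) continue;   // already in some trace
            List<String> t = new ArrayList<>();     // start a new trace
            String b = start;
            while (b != null && !marked.contains(b)) {
                marked.add(b);
                t.add(b);
                // Follow some unmarked successor, if there is one;
                // otherwise the trace ends here.
                String next = null;
                for (String c : succ.getOrDefault(b, List.of()))
                    if (!marked.contains(c)) { next = c; break; }
                b = next;
            }
            result.add(t);
        }
        return result;
    }
}
```

Every block lands in exactly one trace, because a block is appended only at the moment it is marked.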
Suppose block b1 ends with a JUMP to b4, and b4 has a JUMP to b6. Then we can make the trace b1, b4, b6. But suppose b6 ends with a conditional jump CJUMP(cond, b7, b3). We cannot know at compile time whether b7 or b3 will be next. But we can assume that some execution will follow b3, so let us imagine it is that execution that we are simulating. Thus, we append b3 to our trace and continue with the rest of the trace after b3. The block b7 will be in some other trace.

Algorithm 8.2 (which is similar to Canon.TraceSchedule) orders the blocks into traces as follows: It starts with some block and follows a chain of jumps, marking each block and appending it to the current trace. Eventually it comes to a block whose successors are all marked, so it ends the trace and picks an unmarked block to start the next trace.

ALGORITHM 8.2: Generation of traces.

Put all the blocks of the program into a list Q.
while Q is not empty
    Start a new (empty) trace, call it T.
    Remove the head element b from Q.
    while b is not marked
        Mark b; append b to the end of the current trace T.
        Examine the successors of b (the blocks to which b branches);
        if there is any unmarked successor c
            b ← c
    End the current trace T.

FINISHING UP

An efficient compiler will keep the statements grouped into basic blocks, because many kinds of analysis and optimization algorithms run faster on (relatively few) basic blocks than on (relatively many) individual statements.
For the MiniJava compiler, however, we seek simplicity in the implementation of later phases. So we will flatten the ordered list of traces back into one long list of statements.

At this point, most (but not all) CJUMPs will be followed by their true or false label. We perform some minor adjustments:

• Any CJUMP immediately followed by its false label we let alone (there will be many of these).
• For any CJUMP followed by its true label, we switch the true and false labels and negate the condition.
• For any CJUMP(cond, a, b, lt, lf) followed by neither label, we invent a new false label lf′ and rewrite the single CJUMP statement as three statements, just to achieve the condition that the CJUMP is followed by its false label:

    CJUMP(cond, a, b, lt, lf′)
    LABEL lf′
    JUMP(NAME lf)

The trace-generating algorithm will tend to order the blocks so that many of the unconditional JUMPs are immediately followed by their target labels.
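The three CJUMP adjustments can be sketched as follows. Statements are rendered as plain strings, the two compared operands (a and b above) are elided, and negate covers only the six signed relational operators; the compiler's real operator set is larger, so treat this purely as an illustration of the case analysis.

```java
import java.util.List;

public class CjumpFixup {
    // A CJUMP modeled as (condition, true label, false label).
    record Cjump(String cond, String t, String f) {}

    static String negate(String cond) {   // e.g. LT <-> GE
        return switch (cond) {
            case "LT" -> "GE"; case "GE" -> "LT";
            case "EQ" -> "NE"; case "NE" -> "EQ";
            case "GT" -> "LE"; case "LE" -> "GT";
            default -> throw new IllegalArgumentException(cond);
        };
    }

    static int fresh = 0;

    // Returns the statements to emit for a CJUMP, given the label that
    // immediately follows it in the flattened statement list.
    static List<String> fix(Cjump j, String nextLabel) {
        if (j.f().equals(nextLabel))      // already followed by its false label
            return List.of("CJUMP(" + j.cond() + "," + j.t() + "," + j.f() + ")");
        if (j.t().equals(nextLabel))      // followed by its true label: flip
            return List.of("CJUMP(" + negate(j.cond()) + "," + j.f() + "," + j.t() + ")");
        String lf2 = "L" + (fresh++);     // followed by neither: new false label
        return List.of(
            "CJUMP(" + j.cond() + "," + j.t() + "," + lf2 + ")",
            "LABEL " + lf2,
            "JUMP " + j.f());
    }
}
```

After this pass, every CJUMP in the statement list is immediately followed by its false label, which is what a real machine's fall-through conditional branch expects.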
We can remove such jumps.

OPTIMAL TRACES

For some applications of traces, it is important that any frequently executed sequence of instructions (such as the body of a loop) should occupy its own trace. This helps not only to minimize the number of unconditional jumps, but also may help with other kinds of optimizations, such as register allocation and instruction scheduling.

Figure 8.3 shows the same program organized into traces in different ways. Figure 8.3a has a CJUMP and a JUMP in every iteration of the while-loop; Figure 8.3b uses a different trace covering, also with a CJUMP and a JUMP in every iteration.
But Figure 8.3c shows a better trace covering, with no JUMP in each iteration.

Figure 8.3: Different trace coverings for the same program.

The MiniJava compiler's Canon module doesn't attempt to optimize traces around loops, but it is sufficient for the purpose of cleaning up the Tree-statement lists for generating assembly code.

FURTHER READING

The rewrite rules of Figure 8.1 are an example of a term rewriting system; such systems have been much studied [Dershowitz and Jouannaud 1990]. Fisher shows how to cover a program with traces so that frequently executing paths tend to stay within the same trace. Such traces are useful for program optimization and scheduling.

EXERCISES

*8.1 The rewriting rules in Figure 8.1 are a subset of the rules necessary to eliminate all ESEQs from expressions.
Show the right-hand side for each of the following incomplete rules:

a. MOVE(TEMP t, ESEQ(s, e)) ⇒
b. MOVE(MEM(ESEQ(s, e1)), e2) ⇒
c. MOVE(MEM(e1), ESEQ(s, e2)) ⇒
d. EXP(ESEQ(s, e)) ⇒
e. EXP(CALL(ESEQ(s, e), args)) ⇒
f. MOVE(TEMP t, CALL(ESEQ(s, e), args)) ⇒
g. EXP(CALL(e1, [e2, ESEQ(s, e3), e4])) ⇒

In some cases, you may need two different right-hand sides depending on whether something commutes (just as parts (3) and (4) of Figure 8.1 have different right-hand sides for the same left-hand sides).

8.2 Draw each of the following expressions as a tree diagram, and then apply the rewriting rules of Figure 8.1 and Exercise 8.1, as well as the CALL rule on page 168.

a.
MOVE(MEM(ESEQ(SEQ(CJUMP(LT, TEMPi, CONST0, Lout, Lok), LABELok), TEMPi)), CONST1)
b. MOVE(MEM(MEM(NAMEa)), MEM(CALL(TEMPf, [])))
c. BINOP(PLUS, CALL(NAMEf, [TEMPx]), CALL(NAMEg, [ESEQ(MOVE(TEMPx, CONST0), TEMPx)]))

*8.3 The directory $MINIJAVA/chap8 contains an implementation of every algorithm described in this chapter. Read and understand it.

8.4 A primitive form of the commute test is shown on page 164. This function is conservative: If interchanging the order of evaluation of the expressions will change the result of executing the program, this function will definitely return false; but if an interchange is harmless, commute might return true or false.

Write a more powerful version of commute that returns true in more cases, but is still conservative. Document your program by drawing pictures of (pairs of) expression trees on which it will return true.

*8.5 The left-hand side of a MOVE node really represents a destination, not an expression.
Consequently, the following rewrite rule is not a good idea:

MOVE(e1, ESEQ(s, e2)) → SEQ(s, MOVE(e1, e2))    if s, e1 commute

Write a statement matching the left side of this rewrite rule that produces a different result when rewritten.

Hint: It is very reasonable to say that the statement MOVE(TEMPa, TEMPb) commutes with expression TEMPb (if a and b are not the same), since TEMPb yields the same value whether executed before or after the MOVE.

Conclusion: The only subexpression of MOVE(TEMPa, e) is e, and the subexpressions of MOVE(MEM(e1), e2) are [e1, e2]; we should not say that a is a subexpression of MOVE(a, b).

8.6 Break this program into basic blocks.

1.  m ← 0
2.  v ← 0
3.  if v ≥ n goto 15
4.  r ← v
5.  s ← 0
6.  if r < n goto 9
7.  v ← v + 1
8.  goto 3
9.  x ← M[r]
10. s ← s + x
11. if s ≤ m goto 13
12. m ← s
13. r ← r + 1
14. goto 6
15. return m

8.7 Express the basic blocks of Exercise 8.6 as statements in the Tree intermediate form, and use Algorithm 8.2 to generate a set of traces.

Chapter 9: Instruction Selection

in-struc-tion: a code that tells a computer to perform a particular operation

Webster's Dictionary

OVERVIEW

The intermediate representation (Tree) language expresses only one operation in each tree node: memory fetch or store, addition or subtraction, conditional jump, and so on.
A real machine instruction can often perform several of these primitive operations. For example, almost any machine can perform an add and a fetch in the same instruction, corresponding to the tree.

Finding the appropriate machine instructions to implement a given intermediate representation tree is the job of the instruction selection phase of a compiler.

TREE PATTERNS

We can express a machine instruction as a fragment of an IR tree, called a tree pattern. Then instruction selection becomes the task of tiling the tree with a minimal set of tree patterns. For purposes of illustration, we invent an instruction set: the Jouette architecture.
The arithmetic and memory instructions of Jouette are shown in Figure 9.1. On this machine, register r0 always contains zero.

Figure 9.1: Arithmetic and memory instructions. The notation M[x] denotes the memory word at address x.

Each instruction above the double line in Figure 9.1 produces a result in a register. The very first entry is not really an instruction, but expresses the idea that a TEMP node is implemented as a register, so it can "produce a result in a register" without executing any instructions at all. The instructions below the double line do not produce results in registers, but are executed only for side effects on memory.

For each instruction, the tree patterns it implements are shown.
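To give the flavor of how such patterns are used, here is a rough sketch of matching the memory-fetch tiles for a load-style instruction that computes ri ← M[rj + c]. The Node records here are invented stand-ins for the compiler's Tree expression classes, and the emitted strings are only schematic assembly; the point is that a bigger tile, MEM(BINOP(PLUS, e, CONST c)), is tried before the smallest tile, a bare MEM(e).

```java
public class TileSketch {
    sealed interface Node permits Mem, Binop, Const, Temp {}
    record Mem(Node addr) implements Node {}
    record Binop(String op, Node left, Node right) implements Node {}
    record Const(int value) implements Node {}
    record Temp(String name) implements Node {}

    // Emit a load for the largest tile that matches at this MEM node.
    static String munchMem(Mem m) {
        if (m.addr() instanceof Binop b && b.op().equals("PLUS")) {
            if (b.right() instanceof Const c)    // MEM(+(e, CONST c))
                return "LOAD r <- M[" + munch(b.left()) + " + " + c.value() + "]";
            if (b.left() instanceof Const c2)    // MEM(+(CONST c, e))
                return "LOAD r <- M[" + munch(b.right()) + " + " + c2.value() + "]";
        }
        // Smallest tile: MEM(e), i.e. a fetch with constant offset zero.
        return "LOAD r <- M[" + munch(m.addr()) + " + 0]";
    }

    static String munch(Node n) {
        if (n instanceof Temp t) return t.name();
        if (n instanceof Mem m) return "(" + munchMem(m) + ")";
        // Other cases (BINOP as an add, CONST via r0, ...) are omitted here.
        return "?";
    }
}
```

Preferring the largest matching pattern at each node is the "maximal munch" idea that the chapter goes on to develop.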