A.W. Appel, J. Palsberg - Modern Compiler Implementation in Java (Second Edition), page 44
A naive spillCost that just returns 1 for every temporary will also work.

A simple implementation of the coloring algorithm without coalescing requires only one worklist: the simplifyWorklist, which contains all non-precolored, nonsimplified nodes of degree less than K. Obviously, no freezeWorklist is necessary. No spillWorklist is necessary either, if we are willing to look through all the nodes in the original graph for a spill candidate every time the simplifyWorklist becomes empty.

With only a simplifyWorklist, the doubly linked representation is not necessary: This worklist can be implemented as a singly linked list or a stack, since it is never accessed "in the middle."

ADVANCED PROJECT: SPILLING
Implement spilling, so that no matter how many parameters and locals a MiniJava program has, you can still compile it.

ADVANCED PROJECT: COALESCING
Implement coalescing, to eliminate practically all the MOVE instructions from the program.

FURTHER READING
Kempe [1879] invented the simplification algorithm that colors graphs by removing vertices of degree < K.
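The single-worklist scheme described above (Kempe-style simplification with a select stack, no coalescing) can be sketched as follows. This is an illustrative fragment, not the book's Color or RegAlloc interfaces; the class and method names are made up, and the example graph is chosen so that every node is always simplifiable, so no spill logic is shown.

```java
import java.util.*;

// Sketch of graph coloring with a single simplify worklist used as a
// stack: simplify (push nodes of degree < K), then select (pop and color).
public class KempeColor {
    static Map<Integer, Integer> color(Map<Integer, Set<Integer>> adj, int K) {
        // Work on a mutable copy of the graph; 'adj' keeps the full edges.
        Map<Integer, Set<Integer>> g = new HashMap<>();
        for (var e : adj.entrySet()) g.put(e.getKey(), new HashSet<>(e.getValue()));

        Deque<Integer> stack = new ArrayDeque<>();
        // Simplify: repeatedly remove some node of degree < K.
        while (!g.isEmpty()) {
            Integer n = g.keySet().stream()
                         .filter(v -> g.get(v).size() < K)
                         .findFirst().orElse(null);
            if (n == null) throw new IllegalStateException("would need to spill");
            for (int m : g.get(n)) g.get(m).remove(n);
            g.remove(n);
            stack.push(n);
        }
        // Select: pop nodes, giving each a color unused by its colored
        // neighbors; one always exists because degree was < K at removal.
        Map<Integer, Integer> coloring = new HashMap<>();
        while (!stack.isEmpty()) {
            int n = stack.pop();
            Set<Integer> used = new HashSet<>();
            for (int m : adj.get(n))
                if (coloring.containsKey(m)) used.add(coloring.get(m));
            for (int c = 0; c < K; c++)
                if (!used.contains(c)) { coloring.put(n, c); break; }
        }
        return coloring;
    }

    public static void main(String[] args) {
        // A 4-cycle: every node has degree 2 < K = 3, so simplify never blocks.
        Map<Integer, Set<Integer>> adj = new HashMap<>();
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}, {3, 0}};
        for (int v = 0; v < 4; v++) adj.put(v, new HashSet<>());
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }

        Map<Integer, Integer> c = color(adj, 3);
        System.out.println(c);
        // Interfering nodes always end up with distinct colors.
        for (int[] e : edges)
            if (c.get(e[0]).equals(c.get(e[1]))) throw new AssertionError();
    }
}
```

Because nodes are only ever pushed and popped, a plain stack suffices, which is exactly why the doubly linked worklist representation is unnecessary here.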
Chaitin [1982] formulated register allocation as a graph-coloring problem, using Kempe's algorithm to color the graph, and performed copy propagation by (nonconservatively) coalescing noninterfering move-related nodes before coloring the graph. Briggs et al. [1994] improved the algorithm with the idea of optimistic spilling, and also avoided introducing spills by using the conservative coalescing heuristic before coloring the graph. George and Appel [1996] found that there are more opportunities for coalescing if conservative coalescing is done during simplification instead of beforehand, and developed the work-list algorithm presented in this chapter.

Ershov [1958] developed the algorithm for optimal register allocation on expression trees; Sethi and Ullman [1970] generalized this algorithm and showed how it should handle spills.

EXERCISES

11.1 The following program has been compiled for a machine with three registers r1, r2, r3; r1 and r2 are (caller-save) argument registers and r3 is a callee-save register. Construct the interference graph and show the steps of the register allocation process in detail, as on pages 229-232.
When you coalesce two nodes, say whether you are using the Briggs or George criterion. Hint: When two nodes are connected by an interference edge and a move edge, you may delete the move edge; this is called constrain and is accomplished by the first else if clause of procedure Coalesce.

11.2 The table below represents a register-interference graph. Nodes 1-6 are precolored (with colors 1-6), and nodes A-H are ordinary (non-precolored). Every pair of precolored nodes interferes, and each ordinary node interferes with nodes where there is an × in the table. The following pairs of nodes are related by MOVE instructions: Assume that register allocation must be done for an 8-register machine.

a. Ignoring the MOVE instructions, and without using the coalesce heuristic, color this graph using simplify and spill. Record the sequence (stack) of simplify and potential-spill decisions, show which potential spills become actual spills, and show the coloring that results.

b. Color this graph using coalescing. Record the sequence of simplify, coalesce, freeze, and spill decisions. Identify each coalesce as Briggs- or George-style. Show how many MOVE instructions remain.

*c.
Another coalescing heuristic is biased coloring. Instead of using a conservative coalescing heuristic during simplification, run the simplify-spill part of the algorithm as in part (a), but in the select part of the algorithm,

i. When selecting a color for node X that is move-related to node Y, when a color for Y has already been selected, use the same color if possible (to eliminate the MOVE).

ii. When selecting a color for node X that is move-related to node Y, when a color for Y has not yet been selected, use a color that is not the same as the color of any of Y's neighbors (to increase the chance of heuristic (i) working when Y is colored).

Conservative coalescing (in the simplify phase) has been found to be more effective than biased coloring, in general; but it might not be on this particular graph. Since the two coalescing algorithms are used in different phases, they can both be used in the same register allocator.

*d.
Use both conservative coalescing and biased coloring in allocating registers. Show where biased coloring helps make the right decisions.

11.3 Conservative coalescing is so called because it will not introduce any (potential) spills. But can it avoid spills? Consider this graph, where the solid edges represent interferences and the dashed edge represents a MOVE:

a. 4-color the graph without coalescing. Show the select-stack, indicating the order in which you removed nodes. Is there a potential spill? Is there an actual spill?

b.
4-color the graph with conservative coalescing. Did you use the Briggs or George criterion? Is there a potential spill? Is there an actual spill?

11.4 It has been proposed that the conservative coalescing heuristic could be simplified. In testing whether MOVE(a, b) can be coalesced, instead of asking whether the combined node ab is adjacent to < K nodes of significant degree, we could simply test whether ab is adjacent to < K nodes of any degree. The theory is that if ab is adjacent to many low-degree nodes, they will be removed by simplification anyway. Show that this kind of coalescing cannot create any new potential spills.

a.
Demonstrate the algorithm on this graph (with K = 3):

b. *Show that this test is less effective than standard conservative coalescing. Hint: Use the graph of Exercise 11.3, with K = 4.

Chapter 12: Putting It All Together

de-bug: to eliminate errors in or malfunctions of
Webster's Dictionary

OVERVIEW

Chapters 2-11 have described the fundamental components of a good compiler: a front end, which does lexical analysis, parsing, construction of abstract syntax, type-checking, and translation to intermediate code; and a back end, which does instruction selection, dataflow analysis, and register allocation.

What lessons have we learned? We hope that the reader has learned about the algorithms used in different components of a compiler and the interfaces used to connect the components.
But the authors have also learned quite a bit from the exercise. Our goal was to describe a good compiler that is, to use Einstein's phrase, "as simple as possible - but no simpler." We will now discuss the thorny issues that arose in designing the MiniJava compiler.

Structured l-values Java (and MiniJava) has no record or array variables, as C, C++, and Pascal do. Instead, all object and array values are really just pointers to heap-allocated data. Implementing structured l-values requires some care but not too many new insights.

Tree intermediate representation The Tree language has a fundamental flaw: It does not describe procedure entry and exit. These are handled by opaque procedures inside the Frame module that generate Tree code. This means that a program translated to Trees using, for example, the PentiumFrame version of Frame will be different from the same program translated using SparcFrame - the Tree representation is not completely machine independent.

Also, there is not enough information in the trees themselves to simulate the execution of an entire program, since the view shift (page 128) is partly done implicitly by procedure prologues and epilogues that are not represented as Trees.
Consequently, there is not enough information to do whole-program optimization (across function boundaries).

The Tree representation is a low-level intermediate representation, useful for instruction selection and intraprocedural optimization. A high-level intermediate representation would preserve more of the source-program semantics, including the notions of nested functions (if applicable), nonlocal variables, object creation (as distinguished from an opaque external function call), and so on. Such a representation would be more tied to a particular family of source languages than the general-purpose Tree language is.

Register allocation Graph-coloring register allocation is widely used in real compilers, but does it belong in a compiler that is supposed to be "as simple as possible"? After all, it requires the use of global dataflow (liveness) analysis, construction of interference graphs, and so on.
This makes the back end of the compiler significantly bigger. It is instructive to consider what the MiniJava compiler would be like without it. We could keep all local variables in the stack frame, fetching them into temporaries only when they are used as operands of instructions. The redundant loads within a single basic block can be eliminated by a simple intrablock liveness analysis. Internal nodes of Tree expressions could be assigned registers using Algorithms 11.10 and 11.9. But other parts of the compiler would become much uglier: The TEMPs introduced in canonicalizing the trees (eliminating ESEQs) would have to be dealt with in an ad hoc way, by augmenting the Tree language with an operator that provides explicit scope for temporary variables; the Frame interface, which mentions registers in many places, would now have to deal with them in more complicated ways.
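The intrablock liveness analysis mentioned above can be sketched as a single backward pass over one basic block: a temporary is live after an instruction if some later instruction in the block (or the block's live-out set) uses it before it is redefined. The `Instr` record and its use/def sets below are illustrative stand-ins, not the book's Assem interface.

```java
import java.util.*;

// Backward liveness within a single basic block: defs kill, uses generate.
public class BlockLiveness {
    record Instr(Set<String> use, Set<String> def) {}

    // Returns, for each instruction i, the set of temps live just after i.
    static List<Set<String>> liveness(List<Instr> block, Set<String> liveOut) {
        List<Set<String>> liveAfter =
            new ArrayList<>(Collections.nCopies(block.size(), null));
        Set<String> live = new HashSet<>(liveOut);
        for (int i = block.size() - 1; i >= 0; i--) {
            liveAfter.set(i, new HashSet<>(live));
            live.removeAll(block.get(i).def());   // definitions kill liveness
            live.addAll(block.get(i).use());      // uses generate liveness
        }
        return liveAfter;
    }

    public static void main(String[] args) {
        // t1 := load a ; t2 := t1 + 1 ; store t2  (nothing live at block exit)
        List<Instr> block = List.of(
            new Instr(Set.of("a"), Set.of("t1")),
            new Instr(Set.of("t1"), Set.of("t2")),
            new Instr(Set.of("t2"), Set.of()));
        List<Set<String>> live = liveness(block, Set.of());
        System.out.println(live);
        // t1 is dead after instruction 1 and t2 after instruction 2, so a
        // register allocator (or load eliminator) may reuse their slots there.
    }
}
```

A load of a stack-resident variable is redundant exactly when the temporary holding its previous load is still live (and the variable has not been stored to in between), which is the information this pass provides.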