Cooper, Engineering a Compiler (Second Edition) — page 83
Finally, Figure 6.14c shows the initial AR for a computation. For the example program in Figure 6.14a, draw the set of its ARs just prior to the return from function F1. Include all entries in the ARs. Use line numbers for return addresses. Draw directed arcs for access links. Label the values of local variables and parameters. Label each AR with its procedure name.

11. Assume that the compiler is capable of analyzing the code to determine facts such as “from this point on, variable v is not used again in this procedure” or “variable v has its next use in line 11 of this procedure,” and that the compiler keeps all local variables in registers for the following three procedures:

    procedure main
      integer a, b, c;
      b = a + c;
      c = f1(a,b);
      call print(c);
    end;

    procedure f1(integer x, y)
      integer v;
      v = x * y;
      call print(v);
      call f2(v);
      return -x;
    end;

    procedure f2(integer q)
      integer k, r;
      ···
      k = q / r;
    end;

a. Variable x in procedure f1 is live across two procedure calls. For the fastest execution of the compiled code, should the compiler keep it in a caller-saves or callee-saves register? Justify your answer.

b. Consider variables a and c in procedure main. Should the compiler keep them in caller-saves or callee-saves registers, again assuming that the compiler is trying to maximize the speed of the compiled code? Justify your answer.

12. Consider the following Pascal program. Assume that the ARs follow the same layout as in problem 10, with the same initial condition, except that the implementation uses a global display rather than access links.

     1  program main(input, output);
     2    var x : integer;
     3        a : float;
     4    procedure p1();
     5      var g : character;
     6    begin
     7      ···
     8    end;
     9    procedure p2();
    10      var h : character;
    11      procedure p3();
    12        var h, i : integer;
    13      begin
    14        p1();
    15      end;
    16    begin
    17      p3();
    18    end;
    19  begin
    20    p2();
    21  end

Draw the set of ARs that are on the runtime stack when the program reaches line 7 in procedure p1.

Chapter 7
Code Shape

CHAPTER OVERVIEW

To translate an application program, the compiler must map each source-language statement into a sequence of one or more operations in the target machine’s instruction set.

The compiler must choose among many alternative ways to implement each construct. Those choices have a strong and direct impact on the quality of the code that the compiler eventually produces. This chapter explores some of the implementation strategies that the compiler can employ for a variety of common programming-language constructs.

Keywords: Code Generation, Control Structures, Expression Evaluation

7.1 INTRODUCTION

When the compiler translates application code into executable form, it faces myriad choices about specific details, such as the organization of the computation and the location of data.

Such decisions often affect the performance of the resulting code. The compiler’s decisions are guided by information that it derives over the course of translation. When information is discovered in one pass and used in another, the compiler must record that information for its own later use.

Often, compilers encode facts in the IR form of the program—facts that are hard to re-derive unless they are encoded.

For example, the compiler might generate the IR so that every scalar variable that can safely reside in a register is stored in a virtual register. In this scheme, the register allocator’s job is to decide which virtual registers it should demote to memory. The alternative, generating the IR with scalar variables stored in memory and having the allocator promote them into registers, requires much more complex analysis.

Engineering a Compiler. DOI: 10.1016/B978-0-12-088478-0.00007-4. Copyright © 2012, Elsevier Inc. All rights reserved.

Encoding knowledge into the IR name space in this way both simplifies the later passes and improves the compiler’s effectiveness and efficiency.

Conceptual Roadmap

The translation of source code constructs into target-machine operations is one of the fundamental acts of compilation. The compiler must produce target code for each source-language construct.

Many of the same issues arise when generating IR in the compiler’s front end and generating assembly code for a real processor in its back end. The target processor may, due to finite resources and idiosyncratic features, present a more difficult problem, but the principles are the same.

This chapter focuses on ways to implement various source-language constructs. In many cases, specific details of the implementation affect the compiler’s ability to analyze and to improve the code in later passes. The concept of “code shape” encapsulates all of the decisions, large and small, that the compiler writer makes about how to represent the computation in both IR and assembly code.

Careful attention to code shape can both simplify the task of analyzing and improving the code, and improve the quality of the final code that the compiler produces.

Overview

In general, the compiler writer should focus on shaping the code so that the various passes in the compiler can combine to produce outstanding code.

In practice, a compiler can implement most source-language constructs many ways on a given processor. These variations use different operations and different approaches. Some of these implementations are faster than others; some use less memory; some use fewer registers; some might consume less energy during execution. We consider these differences to be matters of code shape.

Code shape has a strong impact both on the behavior of the compiled code and on the ability of the optimizer and back end to improve it. Consider, for example, the way that a C compiler might implement a switch statement that switched on a single-byte character value. The compiler might use a cascaded series of if–then–else statements to implement the switch statement. Depending on the layout of the tests, this could produce different results.

If the first test is for zero, the second for one, and so on, then this approach devolves to linear search over a field of 256 keys. If characters are uniformly distributed, the character searches will require an average of 128 tests and branches per character—an expensive way to implement a case statement. If, instead, the tests perform a binary search, the average case would involve eight tests and branches, a more palatable number. To trade data space for speed, the compiler can construct a table of 256 labels and interpret the character by loading the corresponding table entry and jumping to it—with a constant overhead per character.

FIGURE 7.1 Alternate Code Shapes for x + y + z.
Source code: x + y + z. The three possible evaluation orders, as low-level, three-address code (each with its corresponding binary tree):

    r1 ← rx + ry      r1 ← rx + rz      r1 ← ry + rz
    r2 ← r1 + rz      r2 ← r1 + ry      r2 ← r1 + rx

All of these are legal implementations of the switch statement. Deciding which one makes sense for a particular switch statement depends on many factors.

In particular, the number of cases and their relative execution frequencies are important, as is detailed knowledge of the cost structure for branching on the processor. Even when the compiler cannot determine the information that it needs to make the best choice, it must make a choice. The differences among the possible implementations, and the compiler’s choice, are matters of code shape.

As another example, consider the simple expression x + y + z, where x, y, and z are integers.

Figure 7.1 shows several ways of implementing this expression. In source-code form, we may think of the operation as a ternary add, shown on the left. However, mapping this idealized operation into a sequence of binary additions exposes the impact of evaluation order. The three versions on the right show three possible evaluation orders, both as three-address code and as abstract syntax trees. (We assume that each variable is in an appropriately named register and that the source language does not specify the evaluation order for such an expression.) Because integer addition is both commutative and associative, all the orders are equivalent; the compiler must choose one to implement.

Left associativity would produce the first binary tree.

This tree seems “natural” in that left associativity corresponds to our left-to-right reading style. Consider what happens if we replace y with the literal constant 2 and z with 3. Of course, x + 2 + 3 is equivalent to x + 5. The compiler should detect the computation of 2 + 3, evaluate it, and fold the result directly into the code. In the left-associative form, however, 2 + 3 never occurs. The order x + z + y hides it, as well. The right-associative version exposes the opportunity for improvement.

For each prospective tree, however, there is an assignment of variables and constants to x, y, and z that does not expose the constant expression for optimization.

As with the switch statement, the compiler cannot choose the best shape for this expression without understanding the context in which it appears. If, for example, the expression x + y has been computed recently and neither the values of x nor y have changed, then using the leftmost shape would let the compiler replace the first operation, r1 ← rx + ry, with a reference to the previously computed value.

Often, the best evaluation order depends on context from the surrounding code.

This chapter explores the code-shape issues that arise in implementing many common source-language constructs. It focuses on the code that should be generated for specific constructs, while largely ignoring the algorithms required to pick specific assembly-language instructions. The issues of instruction selection, register allocation, and instruction scheduling are treated separately, in later chapters.

7.2 ASSIGNING STORAGE LOCATIONS

As part of translation, the compiler must assign a storage location to each value produced by the code. The compiler must understand the value’s type, its size, its visibility, and its lifetime.
