Because it is local and unambiguous, the compiler might try to keep it in a register throughout its lifetime. Because it is passed as a call-by-reference parameter, it must have a memory address at the point of the call.
a. Where should the compiler store x?
b. How should the compiler handle x at the call site?
c. How would your answers change if x was passed as a call-by-value parameter?
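A minimal C sketch of the scenario in this exercise may make the tension concrete; it is purely illustrative, the names example and update are hypothetical, and C models call-by-reference by passing the address of x.

void update(int *p);        /* callee that receives x by reference (by address) */

int example(void) {
    int x = 0;              /* local and unambiguous: no other name refers to x */

    for (int i = 0; i < 100; i++)
        x = x + i;          /* the compiler would like x in a register here     */

    update(&x);             /* at this call, x must have a memory address       */

    return x + 1;           /* and x is used again after the call               */
}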
19. The linkage convention is a contract between the compiler and any outside callers of the compiled code. It creates a known interface that can be used to invoke a procedure and obtain any results that it returns (while protecting the caller's runtime environment). Thus, the compiler should only violate the linkage convention when such a violation cannot be detected from outside the compiled code.
a. Under what circumstances can the compiler be certain that using a variant linkage is safe? Give examples from real programming languages.
b. In these circumstances, what might the compiler change about the calling sequence and the linkage convention?
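One kind of situation that part (a) of this exercise asks about can be sketched in C; the sketch is illustrative only, and the names helper and visible are hypothetical.

/* Because helper has internal linkage (static) and its address is never taken,
   every call site is visible in this translation unit; no outside caller can
   observe how it is invoked.                                                   */
static int helper(int a, int b) {
    return 2 * a + b;
}

int visible(int n) {
    /* A compiler that sees all of the calls could, for example, pass a and b
       in nonstandard registers or skip part of the usual register-save code.   */
    return helper(n, 1) + helper(n, 2);
}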
Chapter 8
Introduction to Optimization

CHAPTER OVERVIEW
To improve the quality of the code that it generates, an optimizing compiler analyzes the code and rewrites it into a more efficient form.
This chapter introduces the problems and techniques of code optimization and presents key concepts through a series of example optimizations. Chapter 9 expands on this material with a deeper exploration of program analysis. Chapter 10 provides a broader coverage of optimizing transformations.

Keywords: Optimization, Safety, Profitability, Scope of Optimization, Analysis, Transformation

8.1 INTRODUCTION
The compiler's front end translates the source-code program into some intermediate representation (ir). The back end translates the ir program into a form where it can execute directly on the target machine, either a hardware platform such as a commodity microprocessor or a virtual machine as in Java.
Between these processes sits the compiler's middle section, its optimizer. The task of the optimizer is to transform the ir program produced by the front end in a way that will improve the quality of the code produced by the back end. "Improvement" can take on many meanings. Often, it implies faster execution for the compiled code. It can also mean an executable that uses less energy when it runs or that occupies less space in memory. All of these goals fall into the realm of optimization.

This chapter introduces the subject of code optimization and provides examples of several different techniques that attack different kinds of inefficiencies and operate on different regions in the code.
Chapter 9 provides a deeper treatment of some of the techniques of program analysis that are used to support optimization. Chapter 10 describes additional code-improvement transformations.

Conceptual Roadmap
The goal of code optimization is to discover, at compile time, information about the runtime behavior of the program and to use that information to improve the code generated by the compiler.
Improvement can take many forms. The most common goal of optimization is to make the compiled code run faster. For some applications, however, the size of the compiled code outweighs its execution speed; consider, for example, an application that will be committed to read-only memory, where code size affects the cost of the system.
Other objectives for optimization include reducing the energy cost of execution, improving the code's response to real-time events, or reducing total memory traffic.

Optimizers use many different techniques to improve code. A proper discussion of optimization must consider the inefficiencies that can be improved and the techniques proposed for doing so.
For each source of inefficiency, the compiler writer must choose from multiple techniques that claim to improve efficiency. The remainder of this section illustrates some of the problems that arise in optimization by looking at two examples that involve inefficiencies in array-address calculations.

Safety: A transformation is safe when it does not change the results of running the program.
Profit: A transformation is profitable to apply at some point when the result is an actual improvement.

Before implementing a transformation, the compiler writer must understand when it can be safely applied and when to expect profit from its application. Section 8.2 explores safety and profitability.
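A small, illustrative C program (not taken from the text) shows why both questions matter: reassociating floating-point addition can change a program's result, so it is not safe, while replacing an integer multiply with a shift preserves the result but pays off only if the shift is actually cheaper on the target; the specific values below are chosen only to make the rounding effect visible.

#include <stdio.h>

int main(void) {
    double a = 1.0e16, b = -1.0e16, c = 1.0;

    /* Unsafe in general: reassociating floating-point addition changes the result. */
    printf("%f\n", (a + b) + c);   /* prints 1.000000                               */
    printf("%f\n", a + (b + c));   /* prints 0.000000: c is lost to rounding        */

    /* Safe (for nonnegative n) but profitable only if a shift is cheaper than a
       multiply on the target machine.                                              */
    int n = 21;
    printf("%d %d\n", n * 4, n << 2);   /* both print 84 */
    return 0;
}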
Section 8.3 lays out the different granularities, or scopes, over which optimization occurs. The remainder of the chapter uses select examples to illustrate different sources of improvement and different scopes of optimization. This chapter has no "Advanced Topics" section; Chapters 9 and 10 serve that purpose.

Overview
Opportunities for optimization arise from many sources. A major source of inefficiency arises from the implementation of source-language abstractions. Because the translation from source code into ir is a local process—it occurs without extensive analysis of the surrounding context—it typically generates ir to handle the most general case of each construct.
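As a hypothetical sketch of what that full generality can look like (this is not the text's own example), consider a two-dimensional array reference in C: a purely local translation computes the complete address polynomial for each occurrence of m[i][j], while a rewrite informed by context can compute it once. The names general_case and specialized and the row length N are assumptions made only for this sketch.

#define N 100   /* assumed row length for this sketch */

/* A context-free translation computes the full address of m[i][j], roughly
   base(m) + (i*N + j)*sizeof(double), once for the load and again for the store. */
void general_case(double m[N][N], double x, int i, int j) {
    m[i][j] = m[i][j] + x;
}

/* Knowing that both references name the same element, an optimizer could keep a
   single address, as this hand-rewritten version suggests.                       */
void specialized(double m[N][N], double x, int i, int j) {
    double *p = &m[i][j];   /* address computed once     */
    *p = *p + x;            /* both references reuse it  */
}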
With contextual knowledge, the optimizer can often determine that the code does not need that full generality; when that happens, the optimizer can rewrite the code in a more restricted and more efficient way.

A second significant source of opportunity for the optimizer lies with the target machine. The compiler must understand, in detail, the properties of the target that affect performance. Issues such as the number of functional units and their capabilities, the latency and bandwidth to various levels of the memory hierarchy, the various addressing modes supported in the instruction set, and the availability of unusual or complex operations all affect the kind of code that the compiler should generate for a given application.

Historically, most optimizing compilers have focused on improving the runtime speed of the compiled code.
Improvement can, however, take other forms. In some applications, the size of the compiled code is as important as its speed. Examples include code that will be committed to read-only memory, where size is an economic constraint, or code that will be transmitted over a limited-bandwidth communications channel before it executes, where size has a direct impact on time to completion.
Optimization for these applications should produce code that occupies less space. In other cases, the user may want to optimize for criteria such as register use, memory use, energy consumption, or response to real-time events.

Optimization is a large and detailed subject whose study could fill one or more complete courses (and books). This chapter introduces the subject and some of the critical ideas from optimization that play a role in Chapters 11, 12, and 13. The next two chapters delve more deeply into the analysis and transformation of programs.
Chapter 9 presents an overview of static analysis. It describes some of the analysis problems that an optimizing compiler must solve and presents practical techniques that have been used to solve them. Chapter 10 examines so-called scalar optimizations—those intended for a uniprocessor—in a more systematic way.

8.2 BACKGROUND
Until the early 1980s, many compiler writers considered optimization as a feature that should be added to the compiler only after its other parts were working well. This led to a distinction between debugging compilers and optimizing compilers. A debugging compiler emphasized quick compilation at the expense of code quality.
These compilers did not significantly rearrange the code, so a strong correspondence remained between the source code and the executable code. This simplified the task of mapping a runtime error to a specific line of source code; hence the term debugging compiler. In contrast, an optimizing compiler focuses on improving the running time of the executable code at the expense of compile time. Spending more time in compilation often produces better code. Because the optimizer often moves operations around, the mapping from source code to executable code is less transparent, and debugging is, accordingly, harder.

As risc processors have moved into the marketplace (and as risc implementation techniques were applied to cisc architectures), more of the burden for runtime performance has fallen on compilers.
To increase performance, processor architects have turned to features that require more support from the compiler. These include delay slots following branches, nonblocking memory operations, increased use of pipelines, and increased numbers of functional units. These features make processors more performance sensitive to both high-level issues of program layout and structure and to low-level details of scheduling and resource allocation. As the gap between processor speed and application performance has grown, the demand for optimization has grown to the point where users expect every compiler to perform optimization.

The routine inclusion of an optimizer, in turn, changes the environment in which both the front end and the back end operate. Optimization further insulates the front end from performance concerns.
To an extent, this simplifies the task of ir generation in the front end. At the same time, optimization changes the code that the back end processes. Modern optimizers assume that the back end will handle resource allocation; thus, they typically target an idealized machine that has an unlimited supply of registers, memory, and functional units. This, in turn, places more pressure on the techniques used in the compiler's back end.

If compilers are to shoulder their share of responsibility for runtime performance, they must include optimizers.