Assume that the allocated length of a is stored as an unsigned four-byte integer at an offset of -8 from the start of a.

13. Arbitrary string assignments can generate misaligned cases.
    a. Write the ILOC code that you would like your compiler to emit for an arbitrary PL/I-style character assignment, such as

           fee(i:j) = fie(k:l);

       where j-i = l-k. This statement copies the characters in fie, starting at location k and running through location l, into the string fee, starting at location i and running through location j. Include versions using character-oriented memory operations and versions using word-oriented memory operations. You may assume that fee and fie do not overlap in memory.
    b. The programmer can create character strings that overlap. In PL/I, the programmer might write

           fee(i:j) = fee(i+1:j+1);

       or, even more diabolically,

           fee(i+k:j+k) = fee(i:j);

       How does this complicate the code that the compiler must generate for the character assignment?
    c. Are there optimizations that the compiler could apply to the various character-copying loops that would improve runtime behavior? How would they help?
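To make the intended semantics of part a concrete, the following C fragment is a minimal sketch of a character-oriented copy for the non-overlapping case; the function name substring_assign and the convention that substring positions are 1-based are assumptions made here for illustration, not part of the exercise.

    #include <stddef.h>

    /* Copy fie(k:l) into fee(i:j), assuming 1-based positions,
       j - i == l - k, and no overlap between fee and fie.       */
    void substring_assign(char *fee, size_t i,
                          const char *fie, size_t k, size_t l) {
        size_t n = l - k + 1;               /* characters to copy      */
        for (size_t c = 0; c < n; c++)      /* character-oriented loop */
            fee[(i - 1) + c] = fie[(k - 1) + c];
    }

A word-oriented version would move four bytes at a time and handle misaligned prefixes and suffixes separately, which is exactly the complication that the ILOC versions must confront.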
Section 7.7

14. Consider the following type declarations in C:

        struct S2 {
          union U {
            int i;
            int f;
          };
          union U;
          int d;
        };

        struct S1 {
          float r;
          int a;
          struct S2;
          double b;
        };

    Build a structure-element table for S1. Include in it all the information that a compiler would need to generate references to elements of a variable of type S1, including the name, length, offset, and type of each element.
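As a hint about what such a table records, the following C sketch shows one possible entry format; the type name SymElement and its fields are assumptions introduced here for illustration, not a structure defined in the text.

    #include <stddef.h>

    /* One possible entry in a structure-element table: enough
       information to emit a load or store of the element.      */
    typedef struct {
        const char *name;    /* element name, e.g. "S1.a"        */
        const char *type;    /* source-level type, e.g. "int"    */
        size_t      offset;  /* byte offset from the start of S1 */
        size_t      length;  /* size of the element in bytes     */
    } SymElement;

A reference to an element of a variable of type S1 then reduces to the variable's base address plus the recorded offset.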
15. Consider the following declarations in C:

        struct record {
          int StudentId;
          int CourseId;
          int Grade;
        } grades[1000];
        int g, i;

    Show the code that a compiler would generate to store the value in variable g as the grade in the ith element of grades, assuming the following:
    a. The array grades is stored as an array of structures.
    b. The array grades is stored as a structure of arrays.
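For comparison, here is a minimal C sketch of the two storage layouts and the store that each one implies; the name grades_soa for the structure-of-arrays variant is an assumption introduced for illustration.

    /* (a) Array of structures: each student's record is contiguous. */
    struct record {
        int StudentId;
        int CourseId;
        int Grade;
    } grades[1000];

    /* (b) Structure of arrays: each field becomes its own array. */
    struct {
        int StudentId[1000];
        int CourseId[1000];
        int Grade[1000];
    } grades_soa;

    void store_grade(int i, int g) {
        grades[i].Grade = g;      /* (a) address = grades + i*12 + 8,
                                     assuming 4-byte ints and no padding */
        grades_soa.Grade[i] = g;  /* (b) address = grades_soa.Grade + i*4 */
    }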
Section 7.8

16. As a programmer, you are interested in the efficiency of the code that you produce. You recently implemented, by hand, a scanner. The scanner spends most of its time in a single while loop that contains a large case statement.
    a. How would the different case statement implementation techniques affect the efficiency of your scanner?
    b. How would you change your source code to improve the runtime performance under each of the case statement implementation strategies?

17. Convert the following C tail-recursive function to a loop:

        List * last(List *l) {
          if (l == NULL)
            return NULL;
          else if (l->next == NULL)
            return l;
          else
            return last(l->next);
        }
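As a sketch of one shape that such a conversion might take (assuming the obvious List type with a next field, which the exercise leaves implicit), the tail call can be replaced by an update of the parameter and a branch back to the top of the function:

    typedef struct List List;
    struct List { List *next; };

    /* Iterative form: the tail call last(l->next) becomes l = l->next. */
    List *last_iter(List *l) {
        if (l == NULL)
            return NULL;
        while (l->next != NULL)
            l = l->next;
        return l;
    }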
Section 7.9

18. Assume that x is an unambiguous, local, integer variable and that x is passed as a call-by-reference actual parameter in the procedure where it is declared. Because it is local and unambiguous, the compiler might try to keep it in a register throughout its lifetime. Because it is passed as a call-by-reference parameter, it must have a memory address at the point of the call.
    a. Where should the compiler store x?
    b. How should the compiler handle x at the call site?
    c. How would your answers change if x was passed as a call-by-value parameter?
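The situation that the exercise describes can be pictured with a small C fragment; the names caller and update are hypothetical and serve only to illustrate the pattern.

    void update(int *p);   /* callee receives x by reference */

    int caller(void) {
        int x = 0;         /* local and unambiguous: a good register candidate */
        x = x + 42;        /* ... x can live in a register here ...            */
        update(&x);        /* &x forces x to have a memory address, so its
                              register value must be stored before the call
                              and reloaded afterward                           */
        return x;
    }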
19. The linkage convention is a contract between the compiler and any outside callers of the compiled code. It creates a known interface that can be used to invoke a procedure and obtain any results that it returns (while protecting the caller's runtime environment). Thus, the compiler should only violate the linkage convention when such a violation cannot be detected from outside the compiled code.
    a. Under what circumstances can the compiler be certain that using a variant linkage is safe? Give examples from real programming languages.
    b. In these circumstances, what might the compiler change about the calling sequence and the linkage convention?
Chapter 8
Introduction to Optimization

CHAPTER OVERVIEW
To improve the quality of the code that it generates, an optimizing compiler analyzes the code and rewrites it into a more efficient form. This chapter introduces the problems and techniques of code optimization and presents key concepts through a series of example optimizations. Chapter 9 expands on this material with a deeper exploration of program analysis. Chapter 10 provides a broader coverage of optimizing transformations.

Keywords: Optimization, Safety, Profitability, Scope of Optimization, Analysis, Transformation

8.1 INTRODUCTION
The compiler's front end translates the source-code program into some intermediate representation (IR).
The back end translates the IR program into a form where it can execute directly on the target machine, either a hardware platform such as a commodity microprocessor or a virtual machine as in Java. Between these processes sits the compiler's middle section, its optimizer. The task of the optimizer is to transform the IR program produced by the front end in a way that will improve the quality of the code produced by the back end. "Improvement" can take on many meanings. Often, it implies faster execution for the compiled code. It can also mean an executable that uses less energy when it runs or that occupies less space in memory. All of these goals fall into the realm of optimization.

This chapter introduces the subject of code optimization and provides examples of several different techniques that attack different kinds of inefficiencies and operate on different regions in the code.
Chapter 9 provides a deeper treatment of some of the techniques of program analysis that are used to support optimization. Chapter 10 describes additional code-improvement transformations.

Conceptual Roadmap
The goal of code optimization is to discover, at compile time, information about the runtime behavior of the program and to use that information to improve the code generated by the compiler. Improvement can take many forms. The most common goal of optimization is to make the compiled code run faster. For some applications, however, the size of the compiled code outweighs its execution speed; consider, for example, an application that will be committed to read-only memory, where code size affects the cost of the system. Other objectives for optimization include reducing the energy cost of execution, improving the code's response to real-time events, or reducing total memory traffic.

Optimizers use many different techniques to improve code. A proper discussion of optimization must consider the inefficiencies that can be improved and the techniques proposed for doing so. For each source of inefficiency, the compiler writer must choose from multiple techniques that claim to improve efficiency.
The remainder of this section illustrates some of the problems that arise in optimization by looking at two examples that involve inefficiencies in array-address calculations.

Safety: A transformation is safe when it does not change the results of running the program.
Profit: A transformation is profitable to apply at some point when the result is an actual improvement.

Before implementing a transformation, the compiler writer must understand when it can be safely applied and when to expect profit from its application. Section 8.2 explores safety and profitability. Section 8.3 lays out the different granularities, or scopes, over which optimization occurs. The remainder of the chapter uses select examples to illustrate different sources of improvement and different scopes of optimization. This chapter has no "Advanced Topics" section; Chapters 9 and 10 serve that purpose.

Overview
Opportunities for optimization arise from many sources.
A major source of inefficiency arises from the implementation of source-language abstractions. Because the translation from source code into IR is a local process (it occurs without extensive analysis of the surrounding context), it typically generates IR to handle the most general case of each construct. With contextual knowledge, the optimizer can often determine that the code does not need that full generality; when that happens, the optimizer can rewrite the code in a more restricted and more efficient way.

A second significant source of opportunity for the optimizer lies with the target machine. The compiler must understand, in detail, the properties of the target that affect performance. Issues such as the number of functional units and their capabilities, the latency and bandwidth to various levels of the memory hierarchy, the various addressing modes supported in the instruction set, and the availability of unusual or complex operations all affect the kind of code that the compiler should generate for a given application.
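As a small, hypothetical illustration of the first point (the routine names and the array are assumptions, not an example drawn from the text), consider a C-level analogue of how a general translation of an array reference compares with a context-specialized one:

    /* General translation: recompute the address of a[i] from scratch
       on every iteration, as a context-free translator would.          */
    void zero_general(int *a, int n) {
        for (int i = 0; i < n; i++)
            *(int *)((char *)a + (long)i * sizeof(int)) = 0;
    }

    /* With the contextual knowledge that i simply steps through
       0..n-1, the address arithmetic collapses to a pointer that
       advances by sizeof(int) on each iteration.                       */
    void zero_specialized(int *a, int n) {
        for (int *p = a, *end = a + n; p < end; p++)
            *p = 0;
    }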
Historically, most optimizing compilers have focused on improving the runtime speed of the compiled code. Improvement can, however, take other forms. In some applications, the size of the compiled code is as important as its speed. Examples include code that will be committed to read-only memory, where size is an economic constraint, or code that will be transmitted over a limited-bandwidth communications channel before it executes, where size has a direct impact on time to completion. Optimization for these applications should produce code that occupies less space. In other cases, the user may want to optimize for criteria such as register use, memory use, energy consumption, or response to real-time events.

Optimization is a large and detailed subject whose study could fill one or more complete courses (and books). This chapter introduces the subject and some of the critical ideas from optimization that play a role in Chapters 11, 12, and 13. The next two chapters delve more deeply into the analysis and transformation of programs.