
# MIT 29 Partial Redundancy Elimination

## File description

A PDF file from the archive "MIT 29 Partial Redundancy Elimination", located in the "articles" category. It belongs to the seventh-semester course "Compiler Construction" in the file archive of Lomonosov Moscow State University. Despite the archive's direct connection to Lomonosov Moscow State University, it can also be found in other sections.

## Text from the PDF

CS 4120 Lecture 29: Partial Redundancy Elimination — 4 November 2011 — Lecturer: Andrew Myers

### 1 Cascaded dataflow analyses

Some analyses lead to optimizations that enable more optimization. For example, CSE plus copy propagation can lead to more CSE. To avoid rerunning analyses after optimizations, we can design analyses to take into account the optimizations that will be performed. For example, we can change live variable analysis to take into account the removal of dead code.
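The CSE-plus-copy-propagation cascade can be sketched with a minimal local value-numbering pass over one basic block. This is a hedged illustration, not the lecture's algorithm; the instruction format and variable names are made up. Because copies are tracked by value number, a redundancy exposed by an earlier replacement is caught in the same pass, without rerunning any analysis.

```python
import itertools

def value_number(block):
    """One pass of local value numbering over a basic block.

    Instructions are (dst, op, args) tuples; op is "copy" or a binary
    operator name such as "+".
    """
    fresh = itertools.count()
    val = {}    # variable -> value number
    expr = {}   # (op, vn, vn, ...) -> value number
    home = {}   # value number -> a variable known to hold that value
    out = []

    def vn_of(x):
        if x not in val:
            val[x] = next(fresh)
            home[val[x]] = x
        return val[x]

    for dst, op, args in block:
        if op == "copy":
            v = vn_of(args[0])          # dst aliases the source's value
            out.append((dst, op, args))
        else:
            key = (op,) + tuple(vn_of(a) for a in args)
            if key in expr:             # redundant: reuse the prior result
                v = expr[key]
                out.append((dst, "copy", (home[v],)))
            else:
                v = next(fresh)
                expr[key] = v
                home[v] = dst
                out.append((dst, op, args))
        val[dst] = v
    return out
```

On a block where `t2 = x + y` repeats `t1 = x + y` and `a` is a copy of `t1`, the pass replaces `t2` with a copy of `t1`, and because `a` and `t2` then share `t1`'s value number, a later `c = t1 + t2` is recognized as the same value as `b = a + t2` — a redundancy that plain CSE followed by copy propagation would need a second round to find.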

Like CSE, local value numbering is also a cascaded analysis, at least within the scope of an extended basic block.

### 2 Partial redundancy elimination

CSE eliminates computation of fully redundant expressions: those computed on all paths leading to a node. Partially redundant expressions are those computed at least twice along some path, but not necessarily along all paths. Partial redundancy elimination (PRE) eliminates these partially redundant expressions. PRE subsumes CSE and loop-invariant code motion.

*Figure 1: Partial redundancy elimination. (The diagram shows a = b+c at node X and d = b+c at node Y rewritten as a = t and d = t, with a computation t = b+c placed on the incoming edges.)*

Figure 1 shows an example of PRE. The computation b+c is redundant along some paths but not others. To make it fully redundant, we place a computation of b+c onto earlier edges so that it has always been computed at each point where it is needed.

### 2.1 Lazy code motion

The idea of lazy code motion is to eliminate all redundant computations while avoiding the creation of any unnecessary computation: computations are moved earlier in the CFG.
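Figure 1's rewrite can be mimicked in ordinary source code. This is a hedged sketch: the condition and variable names are made up, and the two functions stand in for the CFG before and after the transformation.

```python
# Before PRE: b + c is computed on the X branch and again at Y after
# the merge, so along the branch-taken path it is computed twice
# (partially redundant), while the other path computes it only once.
def before_pre(cond, b, c):
    if cond:
        a = b + c      # node X
    else:
        a = 0
    d = b + c          # node Y
    return a, d

# After PRE: t = b + c is placed on both incoming paths, making the
# computation at Y fully redundant so it can be replaced by t.
def after_pre(cond, b, c):
    if cond:
        t = b + c
        a = t          # node X becomes a = t
    else:
        t = b + c      # inserted on the other edge
        a = 0
    d = t              # node Y becomes d = t
    return a, d
```

Each dynamic path now computes b + c exactly once, and no path computes it more often than before — the "no wasted computation" requirement.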

Further, we want to make sure that although the computations are moved earlier in the CFG, they are postponed as long as possible, to avoid creating register pressure. The approach is first to identify candidate locations where the partially redundant expression could have been moved in order to make it fully redundant, without creating extra computations. Then among these candidates we choose the one that comes latest along each path that needs it.

### 2.2 Anticipated expressions

The anticipated expressions analysis (also known as very busy expressions) finds expressions that are needed along every path leaving a given node. If an expression is needed along every path leaving the node, then there can be no wasted computation if the expression is moved to that node. This is a backward analysis, in which the dataflow values are sets of expressions and the meet operator is ∩.

Once we know the anticipated expressions at each node, we tentatively place computations of these expressions and use an available expressions analysis to find expressions that are fully redundant under the assumption that the anticipated expressions are computed everywhere they are anticipated.
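The backward analysis just described can be sketched as a worklist-free fixpoint iteration. This is a hedged sketch: the `succ`, `use`, and `kill` inputs are illustrative, not taken from the lecture.

```python
def anticipated(succ, use, kill, exprs):
    """Anticipated (very busy) expressions: backward dataflow, meet = ∩.

    succ maps node -> list of successors; use[n]/kill[n] are expression
    sets; exprs is the universe of expressions (the top value).
    """
    anti_in = {n: set(exprs) for n in succ}   # optimistic start at top
    changed = True
    while changed:
        changed = False
        for n in succ:
            ss = succ[n]
            # meet over successors; exit nodes anticipate nothing
            out = set.intersection(*(anti_in[s] for s in ss)) if ss else set()
            new_in = use[n] | (out - kill[n])
            if new_in != anti_in[n]:
                anti_in[n] = new_in
                changed = True
    return anti_in
```

On a diamond CFG where b+c is used on one branch and again after the merge (as in Figure 1), b+c is anticipated even at the entry and on the other branch: every path leaving those nodes eventually needs the value.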

These fully redundant expressions are the expressions to which we can apply the PRE optimization.

### 2.3 Postponable expressions

At this point we know some set of nodes where the expression can be moved, and we know where it is used. We need to pick a set of edges that separate these two parts of the CFG, and put the computation of the expression on those edges. We want to postpone the computation as long as possible. The postponable expressions analysis finds expressions e that are anticipated at program point p but not yet used: every path from the start to p contains an anticipation of e and no use before p. This is a forward analysis with meet operator ∩.

Once postponable expressions have been computed, certain edges form a frontier where the expression transitions from postponable to not postponable. It is on these edges that the new node computing the expression is placed.
