The second pair of bound computations becomes useless and is also removed. The final result is shown in Figure 58(c).

7.3.3 Bounds Reduction

In Figure 58(c), the guards control which iterations of the loop perform computation. Since the desired set of iterations is a contiguous range, the compiler can achieve the same effect by changing the induction expressions to reduce the loop bounds [Koelbel 1990]. Figure 58(d) shows the result of the transformation.
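Figure 58 is not reproduced in this excerpt; the following sketch, written in the style of the paper's figures with a hypothetical guarded assignment and per-processor bounds LB and UB, illustrates the effect:

    do i = 1, n
      if (i >= LB .and. i <= UB) then
        a[i] = b[i]
      end if
    end do

(guard-controlled loop)

    do i = LB, UB
      a[i] = b[i]
    end do

(after bounds reduction)

The guard test disappears, and the loop simply runs over the contiguous range of iterations that the processor owns.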
7.4 Communication Optimization

An important part of compilation for distributed-memory machines is the analysis of an application's communication needs and the introduction of explicit message-passing operations into it. Before a statement is executed on a given processor, any data it relies on that is not available locally must be sent. A simpleminded approach is to generate a message for each nonlocal data item referenced by the statement, but the resulting overhead will usually be unacceptably high. Compiler designers have developed a set of optimizations that reduce the time required for communication.

The same fundamental issues arise in communication optimization as in optimization for memory access (described in Section 6.5): maximizing reuse, minimizing the working set, and making use of available parallelism in the communication system. However, the problems are magnified because, while the ratio of one memory access to one computation on S-DLX is 16:1, the ratio of the number of cycles required to send one word to the number of cycles required for a single operation on dMX is about 500:1.

Like vectors on vector processors, communication operations on a distributed-memory machine are characterized by a startup time, t_s, and a per-element cost, t_b, the time required to send one byte once the message has been initiated. On dMX, t_s = 10 μs and t_b = 100 ns, so sending one 10-byte message costs 11 μs while sending two 5-byte messages costs 21 μs.
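Restating those numbers as a formula (this is only the cost model just described, not an additional result from the paper), the time to transmit n bytes in a single message is

    t_message = t_s + n * t_b

so one 10-byte message costs 10 μs + 10 * 0.1 μs = 11 μs, while two 5-byte messages cost 2 * (10 μs + 5 * 0.1 μs) = 21 μs: splitting the transfer pays the startup cost twice.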
To avoid paying the startup cost unnecessarily, the optimizations in this section combine data from multiple messages and send them in a single operation.

7.4.1 Message Vectorization

Analysis can often determine the set of data items transferred in a loop. Rather than sending each element of an array in an individual message, the compiler can group many of them together and send them in a single block transfer. Because this is analogous to the way a vector processor interacts with memory, the optimization is called message vectorization [Balasundaram et al. 1990; Gerndt 1990; Zima et al. 1988].

Figure 59(a) shows a sample loop that combines each element of array a with the mirror element of array b. Figure 59(b) is an inefficient parallel version for processors 0 and 1 on dMX.
To simplify the code, we assume that the arrays have already been allocated to the processors using a block decomposition: the lower half of each array is on processor 0 and the upper half on processor 1. Each processor begins by computing the lower and upper bounds of the range that it is responsible for. During each iteration, it sends the element of b that the other processor will need and waits to receive the corresponding message. Note that Fortran's call-by-reference semantics convert the array reference implicitly into the address of the corresponding element, which is then used by the low-level communication routine to extract the bytes to be sent or received. When the message arrives, the iteration proceeds.

Figure 59(c) is a much more efficient version that handles all communication in a single message before the loop begins executing. Each processor computes the upper and lower bound for both itself and for the other processor so as to place the incoming elements of b properly. Message-passing libraries and network hardware often perform poorly when their internal buffer sizes are exceeded. The compiler may be able to perform additional communication optimization by using strip mining to reduce the message length, as shown in Figure 59(d).
    do i = 1, n
      a[i] = a[i] + b[n+1-i]
    end do

(a) original loop

    LB = Pid*(n/2) + 1
    UB = LB + (n/2)
    otherPid = 1 - Pid
    do i = LB, UB
      call SEND(b[i], 4, otherPid)
      call RECEIVE(b[n+1-i], 4)
      a[i] = a[i] + b[n+1-i]
    end do

(b) parallel loop

    LB = Pid*(n/2) + 1
    UB = LB + (n/2)
    otherPid = 1 - Pid
    otherLB = otherPid*(n/2) + 1
    otherUB = otherLB + (n/2)
    call SEND(b[LB], (n/2)*4, otherPid)
    call RECEIVE(b[otherLB], (n/2)*4, otherPid)
    do i = LB, UB
      a[i] = a[i] + b[n+1-i]
    end do

(c) parallel loop with vectorized messages

    do j = LB, UB, 256
      call SEND(b[j], 256*4, otherPid)
      call RECEIVE(b[otherLB+(j-LB)], 256*4, otherPid)
      do i = j, j+255
        a[i] = a[i] + b[n+1-i]
      end do
    end do

(d) after strip mining messages (assuming array size is a multiple of 256)

Figure 59. Message vectorization.

7.4.2 Message Coalescing

Once message vectorization has been performed, the compiler can further reduce the frequency of communication by grouping together messages that send overlapping or adjacent data. The Fortran D compiler [Tseng 1993] uses Regular Sections [Callahan and Kennedy 1988a], an array summarization strategy, to describe the array slice in each message. When two slices being sent to the same processor overlap or cover contiguous ranges of the array, the associated messages are combined.
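As a concrete illustration, written in the style of Figure 59 with hypothetical slice bounds that do not come from the paper, two overlapping slices of b destined for the same processor can be merged into one message:

    call SEND(b[1], 128*4, otherPid)     ! elements b[1:128]
    call SEND(b[100], 157*4, otherPid)   ! elements b[100:256], overlapping the first

    ! coalesced into a single message covering b[1:256]
    call SEND(b[1], 256*4, otherPid)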
7.4.3 Message Aggregation

Sending a message is generally much more expensive than performing a block copy locally on a processor. Therefore it is worthwhile to aggregate messages being sent to the same processor even if the data they contain is unrelated [Tseng 1993]. A simple aggregation strategy is to identify all the messages directed at the same target processor and copy the various strips of data into a single buffer. The target processor performs the reverse operation.
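A minimal sketch of this strategy, again in the style of Figure 59 and with hypothetical array names and sizes (x and y are unrelated arrays whose strips happen to be bound for the same processor):

    ! sender: pack two unrelated strips into one buffer, then one message
    buffer[1:100] = x[1:100]
    buffer[101:150] = y[1:50]
    call SEND(buffer[1], 150*4, otherPid)

    ! receiver: a single RECEIVE, then unpack the strips
    call RECEIVE(buffer[1], 150*4)
    x[1:100] = buffer[1:100]
    y[1:50] = buffer[101:150]

Two startup costs are traded for one, at the price of the local copies into and out of the buffer.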
7.4.4 Collective Communication

Many parallel architectures and message-passing libraries offer special-purpose communication primitives such as broadcast, hardware reduction, and scatter-gather. Compilers can improve performance by recognizing opportunities to exploit these operations, which are often highly efficient. The idea is analogous to idiom and reduction recognition on sequential machines. Li and Chen [1991] present compiler techniques that rely on pattern matching to identify opportunities for using collective communication.
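For instance, a loop that sends the same block of x to every other processor can be recognized and replaced by a single collective operation. In this sketch, BROADCAST is a hypothetical library primitive and P a hypothetical processor count; neither is defined in the paper, whose figures use only SEND and RECEIVE:

    do p = 0, P-1
      if (p .ne. Pid) then
        call SEND(x[1], n*4, p)
      end if
    end do

    ! recognized as a broadcast idiom and replaced by one collective primitive
    call BROADCAST(x[1], n*4)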
7.4.5 Message Pipelining

Another important optimization is to pipeline parallel computations by overlapping communication and computation. Studies have demonstrated that many applications perform very poorly without pipelining [Rogers 1991]. Many message-passing systems allow the processor to continue executing instructions while a message is being sent. Some support fully nonblocking send and receive operations. In either case, the compiler has the opportunity to arrange for useful computation to be performed while the network is delivering messages. A variety of algorithms have been developed to discover opportunities for pipelining and to move message transfer operations so as to maximize the amount of resulting overlap [Rogers 1991; Koelbel and Mehrotra 1991; Tseng 1993].
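As a sketch of the idea, with ISEND and WAIT standing in for whatever nonblocking primitives the message-passing system provides (they are not defined in the paper), the compiler can start a transfer, schedule work that does not depend on it, and block only when the message must be complete:

    call ISEND(b[LB], (n/2)*4, otherPid)   ! start the transfer without blocking
    do i = LB, UB
      a[i] = 2 * a[i]                      ! local work overlapped with the send
    end do
    call WAIT()                            ! wait only once the overlap is exhausted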
7.4.6 Redundant Communication Elimination

To avoid sending messages wherever possible, the compiler can perform a variety of transformations to eliminate redundant communication. Many of the optimizations covered earlier can also be used on messages. If a message is sent within the body of a loop but the data does not change from one iteration to the next, the SEND can be hoisted out of the loop. When two messages contain the same data, only one need be sent.

Messages offer further opportunities for optimization. If the contents of a message are subsumed by a previous communication, the message need not be sent; this situation is often created when SENDs are hoisted in order to maximize pipelining opportunities. If a message contains data, some of which has already been sent, the overlap can be removed to reduce the amount of data transferred. Another possibility is that a message is being sent to a collection of processors, some of which previously received the data.
The list of recipients can be pruned, reducing the amount of communication traffic. These optimizations are used by the PTRAN II compiler [Gupta et al. 1993] to reduce overall message traffic.
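A minimal sketch of the loop-invariant case, with a hypothetical loop body and bounds (x is data that no iteration modifies):

    do i = 1, m
      call SEND(x[1], n*4, otherPid)   ! x does not change from one iteration to the next
      a[i] = a[i] + x[1]
    end do

    ! after hoisting the loop-invariant SEND, the data is transmitted once
    call SEND(x[1], n*4, otherPid)
    do i = 1, m
      a[i] = a[i] + x[1]
    end do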
7.5 SIMD Transformations

SIMD architectures exhibit much more regular behavior than MIMD machines, eliminating many problematic synchronization issues. In addition to the alignment and decomposition strategies for distributed-memory systems (see Section 7.1.2), the regularity of SIMD interconnection networks offers additional opportunities for optimization. The compiler can use very accurate cost models to estimate the performance of a particular layout.

Early SIMD compilation work targeted IVTRAN [Millstein and Muntz 1975], a Fortran dialect for the Illiac IV that provided layout and alignment declarations. The compiler provided a parallelizing module called the Paralyzer [Presberg and Johnson 1975] that used an early form of dependence analysis to identify independent loops and applied linear transformations to optimize communication.

The Connection Machine Convolution Compiler [Bromley et al. 1991] targets the topology of the SIMD architecture explicitly with a pattern-matching strategy.