S = 0
X = A(1)
Y = A(1)
MINI = 1
CDVM$ PARALLEL ( I ) ON A( I ),
CDVM$* REDUCTION ( RG : SUM(S), MAX(X), MINLOC(Y,MINI) )
DO 10 I = 1, N
S = S + A(I)
X = MAX(X, A(I))
IF(A(I) .LT. Y) THEN
Y = A(I)
MINI = I
ENDIF
10 CONTINUE
CDVM$ REDUCTION_START RG
CDVM$ PARALLEL ( I ) ON B( I )
DO 20 I = 1, N
B(I) = C(I) + A(I)
20 CONTINUE
CDVM$ REDUCTION_WAIT RG
PRINT *, S, X, Y, MINI
While the reduction group is being executed, the values of the array B elements are computed, so the reduction is overlapped with the computation of the second loop.
7. Task Parallelism
The DVM parallelism model combines data parallelism and task parallelism.
Data parallelism is implemented by distributing arrays and loop iterations over a virtual processor subsystem. A virtual processor subsystem can comprise either the whole processor arrangement or a section of it.
Task parallelism is implemented by independent computations on disjoint sections of the processor arrangement.
Let us define the set of virtual processors on which a procedure is executed as the current virtual processor system. For the main program, the current system consists of the whole set of virtual processors.
A separate task group is defined by the following directives:
- declaration of a task array (TASK directive);
- mapping of the task array on sections of the processor arrangement (MAP directive);
- redistribution of arrays over the tasks (REDISTRIBUTE directive);
- distribution of computations (blocks of statements or iterations of a distributed loop) over the tasks (TASK_REGION construct).
Several tasks can be described in a procedure. Nested tasks are not allowed.
7.1. Declaration of Task Array
A task array is described by the following directive:
| task-directive | is TASK task-list |
| task | is task-name ( max-task ) |
The TASK directive declares a one-dimensional array whose elements will contain references to sections of the processor arrangement.
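For example, a group of three tasks can be declared as follows (the name MB is taken from the fragment in section 7.6):

CDVM$ TASK MB ( 3 )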
7.2. Mapping Tasks on Processors. MAP Directive
The mapping of a task on a processor arrangement section is performed by the following directive:
| map-directive | is MAP task-name ( task-index ) ONTO processors-name ( section-subscript-list ) |
The tasks of the same array must be mapped on disjoint sections of the processor arrangement. Several tasks can be mapped on the same section.
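For example, with NP = NUMBER_OF_PROCESSORS( ) / 3, three tasks can be mapped on disjoint thirds of a processor arrangement P (as in the fragment in section 7.6):

      NP = NUMBER_OF_PROCESSORS( ) / 3
CDVM$ MAP MB( 1 ) ONTO P( 1 : NP )
CDVM$ MAP MB( 2 ) ONTO P( NP+1 : 2*NP )
CDVM$ MAP MB( 3 ) ONTO P( 2*NP+1 : 3*NP )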
7.3. Array Distribution on Tasks
Array distribution on tasks is performed by the REDISTRIBUTE directive with the following extension:
| dist-target | is . . . |
| or task-name ( task-index) |
The array is distributed on the processor arrangement section provided to the specified task.
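For example, the array A1 can be distributed on the section of task MB( 1 ) (as in the fragment in section 7.6):

CDVM$ REDISTRIBUTE ( *, BLOCK ) ONTO MB( 1 ) :: A1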
7.4. Distribution of Computations. TASK_REGION Directive
The distribution of statement blocks over the tasks is described by the TASK_REGION construct:
| block-task-region | is task-region-directive |
| on-block | |
| [ on-block ]... | |
| end-task-region-directive | |
| task-region-directive | is TASK_REGION task-name |
| end-task-region-directive | is END TASK_REGION |
| on-block | is on-directive |
| block | |
| end-on-directive | |
| on-directive | is ON task-name ( task-index ) [ , new-clause ] |
| end-on-directive | is END ON |
A task region and each on-block are sequences of statements with a single entry (the first statement) and a single exit (after the last statement). The TASK_REGION construct is semantically equivalent to the parallel sections construct for shared memory systems. The difference is that a statement block in a task region can be executed on several processors in the data parallelism model.
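A minimal sketch of the block form (the subroutine names SUB1 and SUB2 are illustrative):

CDVM$ TASK_REGION MB
CDVM$ ON MB( 1 )
      CALL SUB1
CDVM$ END ON
CDVM$ ON MB( 2 )
      CALL SUB2
CDVM$ END ON
CDVM$ END TASK_REGION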
The distribution of distributed loop iterations over the tasks is performed by the following construct:
| loop-task-region | is task-region-directive |
| parallel-task-loop | |
| end-task-region-directive | |
| parallel-task-loop | is parallel-task-loop-directive |
| do-loop | |
| parallel-task-loop-directive | is PARALLEL ( do-variable ) ON task-name ( do-variable ) [ , new-clause ] |
The unit of distributed computation is an iteration of a one-dimensional distributed loop. The difference from a usual distributed loop is that each iteration is distributed on a processor arrangement section, the section being defined by a reference to an element of the task array.
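A minimal sketch of the loop form (the subroutine name SUB is illustrative):

CDVM$ TASK_REGION MB
CDVM$ PARALLEL ( I ) ON MB( I )
      DO 30 I = 1, 3
      CALL SUB( I )
30    CONTINUE
CDVM$ END TASK_REGION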
7.5. Data Localization in Tasks
A task is an on-block or a loop iteration. Tasks of the same group are subject to the following constraints on data:
- there are no data dependencies between the tasks;
- all used and computed data are allocated (localized) on the processor arrangement section of the given task;
- a task cannot change the distribution of an array that was distributed before entering the task;
- there is no input/output;
- a task can update only the values of arrays distributed on its section and the values of NEW-variables.
The semantics of NEW-variables is the same as that of NEW-variables of a distributed loop.
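For example, a scalar updated independently by each task can be specified in the NEW clause of the on-directive (the variable name EPS is illustrative):

CDVM$ ON MB( 1 ), NEW ( EPS )
      . . .
CDVM$ END ON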
7.6. Fragment of Static Multiblock Problem
The program fragment below describes the implementation of a three-block problem (fig. 6.2).
CHPF$ PROCESSORS P( NUMBER_OF_PROCESSORS( ) )
C arrays A1, A2, A3 - the function values on the previous iteration
C arrays B1, B2, B3 - the function values on the current iteration
REAL A1( M, N1+1 ), B1( M, N1+1 )
REAL A2( M1+1, N2+1 ), B2(M1+1, N2+1 )
REAL A3( M2+1, N2+1 ), B3(M2+1, N2+1 )
C declaration of task array
CDVM$ TASK MB (3)
C aligning arrays of each block
CHPF$ ALIGN B1( I, J ) WITH A1( I, J )
CHPF$ ALIGN B2( I, J ) WITH A2( I, J )
CHPF$ ALIGN B3( I, J ) WITH A3( I, J )
C
CHPF$ DISTRIBUTE :: A1, A2, A3
CDVM$ REMOTE_GROUP RS
. . .
C distribution of tasks on processor arrangement sections and
C distribution of arrays on tasks
C ( each section contains a third of all the processors )
NP = NUMBER_OF_PROCESSORS( ) / 3
CDVM$ MAP MB( 1 ) ONTO P( 1 : NP )
CDVM$ REDISTRIBUTE ( *, BLOCK ) ONTO MB( 1 ) :: A1
CDVM$ MAP MB( 2 ) ONTO P( NP+1 : 2*NP )
CDVM$ REDISTRIBUTE ( *, BLOCK ) ONTO MB( 2 ) :: A2
CDVM$ MAP MB( 3 ) ONTO P( 2*NP+1 : 3*NP )
CDVM$ REDISTRIBUTE ( *, BLOCK ) ONTO MB( 3 ) :: A3
. . .
DO 10 IT = 1, MAXIT
. . .
CDVM$ PREFETCH RS
C exchanging edges of adjacent blocks
. . .
C distribution of computations (statement blocks) on tasks
CDVM$ TASK_REGION MB
CDVM$ ON MB( 1 )
CALL JACOBY( A1, B1, M, N1+1 )
CDVM$ END ON
CDVM$ ON MB( 2 )
CALL JACOBY( A2, B2, M1+1, N2+1 )
CDVM$ END ON
CDVM$ ON MB( 3 )
CALL JACOBY( A3, B3, M2+1, N2+1 )
CDVM$ END ON
CDVM$ END TASK_REGION
10 CONTINUE
7.7. Fragment of Dynamic Multiblock Problem
Let us consider a program fragment that is dynamically tuned to the number of blocks and the sizes of the blocks.
C NA - maximal number of blocks
PARAMETER ( NA=20 )
CHPF$ PROCESSORS R( NUMBER_OF_PROCESSORS( ) )
C memory for dynamic arrays
REAL HEAP(100000)
C sizes of dynamic arrays
INTEGER SIZE( 2, NA )
C arrays of pointers for A and B
CDVM$ REAL, POINTER ( :, : ) :: PA, PB, P1, P2
INTEGER P1, P2, PA(NA), PB(NA)
CDVM$ TASK PT ( NA )
CDVM$ ALIGN :: PB, P2
CDVM$ DISTRIBUTE :: PA, P1
. . .
NP = NUMBER_OF_PROCESSORS( )
C distribution of arrays on tasks
C dynamic allocation of the arrays and execution of postponed
C DISTRIBUTE and ALIGN directives
IP = 1
DO 20 I = 1, NA
CDVM$ MAP PT( I ) ONTO R( IP : IP+1 )
P1 = ALLOCATE ( SIZE(1,I), HEAP )
CDVM$ REDISTRIBUTE ( *, BLOCK ) ONTO PT( I ) :: P1
P2 = ALLOCATE ( SIZE(1,I), HEAP )
CDVM$ REALIGN P2( I, J ) WITH P1( I, J )
PA(I) = P1
PB(I) = P2
IP = IP + 2
IF( IP .GT. NP ) IP = 1
20 CONTINUE
. . .
C distribution of computations on tasks
CDVM$ TASK_REGION PT
CDVM$ PARALLEL ( I ) ON PT( I )
DO 50 I = 1,NA
CALL JACOBY( HEAP(PA(I)), HEAP(PB(I)), SIZE(1, I), SIZE(2, I) )
50 CONTINUE
CDVM$ END TASK_REGION
The arrays (blocks) are distributed cyclically over sections of two processors each. If NA > NP/2, then several arrays will be distributed on some sections. The loop iterations distributed on the same section will be executed sequentially in the data parallelism model.
8. COMMON and EQUIVALENCE
Arrays distributed by default can be used in COMMON blocks and EQUIVALENCE statements without restrictions.
Arrays distributed by the DISTRIBUTE or ALIGN directives cannot be used in EQUIVALENCE statements. Moreover, these arrays cannot be associated with other data objects. Explicitly distributed arrays can be components of a COMMON block under the following conditions:
- the COMMON block must be described in the main program unit;
- every occurrence of the COMMON block must have the same number of components, and each corresponding component must have a storage sequence of the same size;
- if an explicitly mapped array is a component of the COMMON block, then the array declarations in different program units must specify the same data type and shape, and the DISTRIBUTE and ALIGN directives applied to the array must have identical parameters.
Example 8.1. Explicitly distributed array in COMMON block.
Declaration in main program.
PROGRAM MAIN
CDVM$ DISTRIBUTE B ( *, BLOCK )
COMMON /COM1/ X, Y(12), B(12,30)
Declaration in a subroutine. The error: a different number of components.
SUBROUTINE SUB1
CDVM$ DISTRIBUTE B1 ( *, BLOCK )
COMMON /COM1/ X, Y(12), Z, B1(12,30)
Declaration in a subroutine. The error: a different distribution of the array.
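A sketch of such an erroneous declaration (the ( BLOCK, * ) distribution is an illustrative assumption that differs from the ( *, BLOCK ) distribution in the main program):

SUBROUTINE SUB2
CDVM$ DISTRIBUTE B1 ( BLOCK, * )
COMMON /COM1/ X, Y(12), B1(12,30)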