Clearly the processes are operating in parallel, and the order of execution is somewhat random. This code is an excellent skeleton for handling a wide range of computations. In the next example, we perform an SPMD-style computation to solve the heat flow problem using PVM.
This is a rather complicated application, and in many ways it gives some insight into the work that the HPF environment performs for us. We will solve heat flow in a two-dimensional plate with four heat sources and the edges in zero-degree water, as shown in [link].
The data will be spread across all of the processes using a (*,BLOCK) distribution. Columns are distributed to processes in contiguous blocks, and all the row elements in a column are stored on the same process. As with HPF, the process that “owns” a data cell performs the computations for that cell after retrieving any data necessary to perform the computation.
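To make the mapping concrete: with TOTCOLS=200 and NPROC=4, process 0 owns global columns 1 through 50, process 1 owns columns 51 through 100, and so on. A minimal sketch of the owner computation follows; the function name IOWNER and its argument GLOBALCOL are illustrative names, not part of pheat.f:

* Sketch: which process owns a given global column under a
* (*,BLOCK) distribution of TOTCOLS columns over NPROC processes?
* IOWNER and GLOBALCOL are illustrative names, not from pheat.f.
      INTEGER FUNCTION IOWNER(GLOBALCOL)
      INTEGER GLOBALCOL
      INTEGER NPROC,TOTCOLS
      PARAMETER(NPROC=4,TOTCOLS=200)
* Integer division: columns 1-50 map to process 0,
* columns 51-100 to process 1, and so on.
      IOWNER = (GLOBALCOL-1) / (TOTCOLS/NPROC)
      RETURN
      END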
We use a red-black approach, but for simplicity we copy the data back at the end of each iteration. For a true red-black, you would perform the computation in the opposite direction every other time step.
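As a rough sketch, one iteration might average each cell with its four neighbors and then copy the result back, using the RED and BLACK arrays declared in the listing below. The five-point average shown here is an assumption for illustration, not necessarily the exact stencil in pheat.f:

* Sketch of one relaxation sweep: average each BLACK cell with its
* four neighbors into RED, then copy RED back for the next step.
* The five-point average is an assumption, not the exact pheat.f code.
      DO C=1,COLS
        DO R=1,ROWS
          RED(R,C) = ( BLACK(R,C) + BLACK(R,C-1) + BLACK(R-1,C) +
     +                 BLACK(R+1,C) + BLACK(R,C+1) ) / 5.0
        ENDDO
      ENDDO
* Copy the new values back, as described above
      DO C=1,COLS
        DO R=1,ROWS
          BLACK(R,C) = RED(R,C)
        ENDDO
      ENDDO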
Note that instead of spawning slave processes, the parent process spawns additional copies of itself. This is typical of SPMD-style programs. Once the additional processes have been spawned, all the processes wait at a barrier before they look up the process numbers of the members of the group. Once the processes have arrived at the barrier, they all retrieve a list of the different process numbers:
% cat pheat.f
      PROGRAM PHEAT
      INCLUDE '../include/fpvm3.h'

      INTEGER NPROC,ROWS,COLS,TOTCOLS,OFFSET,MAXTIME
      PARAMETER(NPROC=4,MAXTIME=200)
      PARAMETER(ROWS=200,TOTCOLS=200)
      PARAMETER(COLS=(TOTCOLS/NPROC)+3)
      REAL*8 RED(0:ROWS+1,0:COLS+1), BLACK(0:ROWS+1,0:COLS+1)
      LOGICAL IAMFIRST,IAMLAST
      INTEGER INUM,INFO,TIDS(0:NPROC-1),IERR
      INTEGER I,R,C
      INTEGER TICK
      CHARACTER*30 FNAME

* Get the SPMD thing going - Join the pheat group
      CALL PVMFJOINGROUP('pheat', INUM)

* If we are the first in the pheat group, make some helpers
      IF ( INUM.EQ.0 ) THEN
        DO I=1,NPROC-1
          CALL PVMFSPAWN('pheat', 0, 'anywhere', 1, TIDS(I), IERR)
        ENDDO
      ENDIF

* Barrier to make sure we are all here so we can look them up
      CALL PVMFBARRIER( 'pheat', NPROC, INFO )

* Find my pals and get their TIDs - TIDS are necessary for sending
      DO I=0,NPROC-1
        CALL PVMFGETTID('pheat', I, TIDS(I))
      ENDDO
At this point in the code, we have NPROC processes executing in an SPMD mode. The next step is to determine which subset of the array each process will compute. This is driven by the INUM variable, which ranges from 0 to 3 and uniquely identifies these processes.
We decompose the data and store only one quarter of the data on each process. Using the INUM variable, we choose our contiguous set of columns to store and compute. The OFFSET variable maps between a “global” column in the entire array and a local column in our local subset of the array.
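A minimal sketch of how each process might derive its geometry from INUM follows; the IAMFIRST, IAMLAST, and OFFSET computations shown here are a plausible reconstruction, not necessarily the exact statements in pheat.f:

* Sketch: derive this process's geometry from INUM.  Each process
* owns TOTCOLS/NPROC = 50 columns; OFFSET converts a local column
* index to a global one (global = local + OFFSET).
      IAMFIRST = ( INUM .EQ. 0 )
      IAMLAST  = ( INUM .EQ. NPROC-1 )
      OFFSET = ( TOTCOLS/NPROC ) * INUM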
[link] shows a map that indicates which processors store which data elements. The values marked with a B are boundary values and won’t change during the simulation. They are all set to 0. This code is often rather tricky to figure out. Performing a (BLOCK,BLOCK) distribution requires a two-dimensional decomposition and exchanging data with the neighbors above and below, in addition to the neighbors to the left and right:
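Because pheat.f uses the one-dimensional (*,BLOCK) distribution, each process only needs to exchange ghost columns with its left and right neighbors. Here is a minimal sketch of the left-going half of such an exchange using PVM pack/send/receive calls; the message tag 100, the BUFID variable, and the MYLEN column count are illustrative assumptions, not necessarily the names used in pheat.f:

* Sketch: exchange ghost columns under a (*,BLOCK) distribution.
* MYLEN (number of owned columns), BUFID, and message tag 100 are
* illustrative assumptions.
* Send my first owned column to my left neighbor...
      IF ( .NOT. IAMFIRST ) THEN
        CALL PVMFINITSEND( PVMDEFAULT, BUFID )
        CALL PVMFPACK( REAL8, BLACK(1,1), ROWS, 1, INFO )
        CALL PVMFSEND( TIDS(INUM-1), 100, INFO )
      ENDIF
* ...and receive my right neighbor's first column into my right
* ghost column.  The symmetric right-going exchange is analogous.
      IF ( .NOT. IAMLAST ) THEN
        CALL PVMFRECV( TIDS(INUM+1), 100, BUFID )
        CALL PVMFUNPACK( REAL8, BLACK(1,MYLEN+1), ROWS, 1, INFO )
      ENDIF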