
Once we have all of our ghost points from our neighbors, we can perform the algorithm on our subset of the space:


* Perform the flow
      DO C=1,MYLEN
        DO R=1,ROWS
          RED(R,C) = ( BLACK(R,C) +
     +      BLACK(R,C-1) + BLACK(R-1,C) +
     +      BLACK(R+1,C) + BLACK(R,C+1) ) / 5.0
        ENDDO
      ENDDO

* Copy back - Normally we would do a red and black version of the loop
      DO C=1,MYLEN
        DO R=1,ROWS
          BLACK(R,C) = RED(R,C)
        ENDDO
      ENDDO

Again, for simplicity, we don’t do the complete red-black computation. Note that you could do two time steps (one black-red-black iteration) if you exchanged two ghost columns at the top of the loop. We have no synchronization at the bottom of the loop because the messages implicitly synchronize the processes at the top of the next loop.
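
To make the two-time-step idea concrete, here is a rough sketch, not taken from the original program, of what exchanging two ghost columns might look like. It assumes the local strip carries two ghost columns on each side and that the neighbor variables (LEFTPROC and RIGHTPROC, set to -1 when there is no neighbor) follow the convention visible in the output later in this section:


* Sketch only -- not part of the original mheatc.f.  Exchange TWO
* ghost columns in each direction so that two sweeps can be run
* between communications.  LEFTPROC/RIGHTPROC are -1 at the edges
* of the plate, matching the output shown later in this section.
      SUBROUTINE EXCH2(BLACK,ROWS,MYLEN,LEFTPROC,RIGHTPROC)
      INCLUDE 'mpif.h'
      INTEGER ROWS,MYLEN,LEFTPROC,RIGHTPROC,IERR
      INTEGER STATUS(MPI_STATUS_SIZE)
      REAL*8 BLACK(0:ROWS+1,-1:MYLEN+2)

* Columns are contiguous in Fortran, so one message of 2*(ROWS+2)
* elements carries two whole columns.
      IF ( LEFTPROC .GE. 0 ) THEN
        CALL MPI_SEND(BLACK(0,1),2*(ROWS+2),MPI_DOUBLE_PRECISION,
     +                LEFTPROC,1,MPI_COMM_WORLD,IERR)
        CALL MPI_RECV(BLACK(0,-1),2*(ROWS+2),MPI_DOUBLE_PRECISION,
     +                LEFTPROC,2,MPI_COMM_WORLD,STATUS,IERR)
      ENDIF
      IF ( RIGHTPROC .GE. 0 ) THEN
        CALL MPI_SEND(BLACK(0,MYLEN-1),2*(ROWS+2),
     +                MPI_DOUBLE_PRECISION,RIGHTPROC,2,
     +                MPI_COMM_WORLD,IERR)
        CALL MPI_RECV(BLACK(0,MYLEN+1),2*(ROWS+2),
     +                MPI_DOUBLE_PRECISION,RIGHTPROC,1,
     +                MPI_COMM_WORLD,STATUS,IERR)
      ENDIF
      RETURN
      END


After one such exchange, a process with neighbors on both sides can run its first sweep over local columns 0 through MYLEN+1 and a second sweep over columns 1 through MYLEN before it needs to communicate again; at the edges of the plate the fixed boundary columns are, as before, left alone. Note that writing the exchange as plain MPI_SEND/MPI_RECV pairs like this relies on the MPI implementation buffering the outgoing messages; MPI_SENDRECV would avoid that assumption.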

Again, we dump out the data for verification. As in the PVM example, one good test of basic correctness is to make sure you get exactly the same results for varying numbers of processes:


* Dump out data for verification
      IF ( ROWS .LE. 20 ) THEN
        FNAME = '/tmp/mheatcout.' // CHAR(ICHAR('0')+INUM)
        OPEN(UNIT=9,NAME=FNAME,FORM='formatted')
        DO C=1,MYLEN
          WRITE(9,100)(BLACK(R,C),R=1,ROWS)
  100     FORMAT(20F12.6)
        ENDDO
        CLOSE(UNIT=9)
      ENDIF

To terminate the program, we call MPI_FINALIZE:


* Lets all go together
      CALL MPI_FINALIZE(IERR)
      END

As in the PVM example, we need a routine to store a value into the proper strip of the global array. This routine checks whether a particular global element falls within this process's strip and, if so, computes the proper location within the strip for the value. If the global element does not belong to this process, the routine returns without doing anything:


      SUBROUTINE STORE(RED,ROWS,COLS,S,E,R,C,VALUE,INUM)
      REAL*8 RED(0:ROWS+1,0:COLS+1)
      REAL VALUE
      INTEGER ROWS,COLS,S,E,R,C,I,INUM

      IF ( C .LT. S .OR. C .GT. E ) RETURN
      I = ( C - S ) + 1
*     PRINT *,'STORE, INUM,R,C,S,E,R,I',INUM,R,C,S,E,R,I,VALUE
      RED(R,I) = VALUE
      RETURN
      END
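
As an illustration (not taken from the original source), the main program might use STORE while setting up initial conditions, with every process calling it for every global element and letting STORE discard the ones outside its strip. The zero background and the single hot spot below are made-up values, and the variable names (RED, S, E, INUM) are assumed to match the rest of the program:


* Hypothetical initialization -- illustrative values only.  Every
* process sweeps the whole global index space; STORE keeps only the
* elements that land in this process's columns S..E.
      DO C=1,COLS
        DO R=1,ROWS
          CALL STORE(RED,ROWS,COLS,S,E,R,C,0.0,INUM)
        ENDDO
      ENDDO
* A single made-up heat source in the middle of the plate
      CALL STORE(RED,ROWS,COLS,S,E,ROWS/2,COLS/2,10.0,INUM)


Because STORE performs the ownership test itself, the calling loop does not need to know how the columns were divided among the processes.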

When this program is executed, it produces the following output:


% mpif77 -c mheatc.f
mheatc.f:
 MAIN mheatc:
 store:
% mpif77 -o mheatc mheatc.o -lmpe
% mheatc -np 4
 Calling MPI_INIT
 Back from MPI_INIT
 Back from MPI_INIT
 Back from MPI_INIT
 Back from MPI_INIT
 0 4 0 -1 1 1 50
 2 4 2 1 3 101 150
 3 4 3 2 -1 151 200
 1 4 1 0 2 51 100
%

As you can see, we call MPI_INIT to activate the four processes. The PRINT statement immediately after the MPI_INIT call appears four times, once for each of the activated processes. Then each process prints out the strip of the array it will process. We can also see the neighbors of each process, including -1 when a process has no neighbor to the left or right. Notice that Process 0 has no left neighbor, and Process 3 has no right neighbor. MPI has provided us with utilities that simplify the message-passing code we need to add to implement this type of grid-based application.
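
For reference, here is a minimal, self-contained sketch, again not the original source, of how a strip decomposition over the columns and the -1 "no neighbor" convention seen above might be computed. The variable names and the even split of 200 columns among the processes are assumptions:


      PROGRAM DECOMP
      INCLUDE 'mpif.h'
      INTEGER NPROC,INUM,IERR,S,E,MYLEN,LEFTPROC,RIGHTPROC,COLS
      PARAMETER (COLS=200)

      CALL MPI_INIT(IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD,NPROC,IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD,INUM,IERR)

* Evenly divide the COLS columns among NPROC processes
      MYLEN = COLS / NPROC
      S = INUM * MYLEN + 1
      E = S + MYLEN - 1
      IF ( INUM .EQ. NPROC-1 ) E = COLS
      MYLEN = E - S + 1

* Use -1 to mark a missing neighbor at either edge of the plate
      LEFTPROC = INUM - 1
      RIGHTPROC = INUM + 1
      IF ( INUM .EQ. 0 ) LEFTPROC = -1
      IF ( INUM .EQ. NPROC-1 ) RIGHTPROC = -1

      PRINT *, INUM, NPROC, LEFTPROC, RIGHTPROC, S, E
      CALL MPI_FINALIZE(IERR)
      END


Run with four processes, this prints one line per process similar in content (rank, neighbors, and column range) to the lines shown above.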





Source: OpenStax, High performance computing. OpenStax CNX, Aug 25, 2010. Download for free at http://cnx.org/content/col11136/1.5