
The topology we use is a one-dimensional decomposition that isn’t periodic. If we specified that we wanted a periodic decomposition, the far-left and far-right processes would be neighbors in a wrapped-around fashion making a ring. Given that it isn’t periodic, the far-left and far-right processes have no neighbors.

In our PVM example above, we declared that Process 0 was the far-right process, Process NPROC-1 was the far-left process, and the other processes were arranged linearly between those two. If we set REORDER to .FALSE., MPI also chooses this arrangement. However, if we set REORDER to .TRUE., MPI may choose to arrange the processes in some other fashion to achieve better performance, on the assumption that you are communicating mostly with close neighbors.
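The call that builds this communicator does not appear in this excerpt. A minimal sketch of how COMM1D might be created for the one-dimensional, non-periodic case described above (the variable names other than COMM1D and NPROC are assumptions, not the book's actual code) could look like:

```fortran
* Sketch: create a 1-D, non-periodic Cartesian communicator
      INTEGER COMM1D, IERR, DIMS(1)
      LOGICAL PERIODS(1), REORDER
* One dimension containing all NPROC processes
      DIMS(1) = NPROC
* Not periodic: the far-left and far-right processes have no neighbor
      PERIODS(1) = .FALSE.
* Allow MPI to reorder ranks for better performance
      REORDER = .TRUE.
      CALL MPI_CART_CREATE(MPI_COMM_WORLD, 1, DIMS, PERIODS,
     +                     REORDER, COMM1D, IERR)
```

Setting PERIODS(1) to .TRUE. instead would wrap the two end processes around into the ring arrangement mentioned above.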

Once the communicator is set up, we use it in all of our communication operations:


* Get my rank in the new communicator
      CALL MPI_COMM_RANK( COMM1D, INUM, IERR)

Within each communicator, each process has a rank from zero to the size of the communicator minus one. The MPI_COMM_RANK call tells each process its rank within the communicator. A process may have a different rank in the COMM1D communicator than in the MPI_COMM_WORLD communicator because of reordering.
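To see whether MPI actually reordered the processes, one could compare a process's rank in the two communicators. This is a sketch, not code from the book; WORLDNUM is an assumed variable name:

```fortran
* Sketch: compare rank in MPI_COMM_WORLD with rank in COMM1D
      INTEGER WORLDNUM, INUM, IERR
      CALL MPI_COMM_RANK( MPI_COMM_WORLD, WORLDNUM, IERR )
      CALL MPI_COMM_RANK( COMM1D, INUM, IERR )
      IF ( WORLDNUM .NE. INUM ) THEN
        PRINT *, 'Reordered: world rank', WORLDNUM, ' is rank',
     +           INUM, ' in COMM1D'
      ENDIF
```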

Given a Cartesian topology communicator (remember, each communicator may have a topology associated with it; a topology can be grid, graph, or none, and interestingly, the MPI_COMM_WORLD communicator has no topology associated with it), we can extract information from the communicator using the MPI_CART_GET routine:


* Given a communicator handle COMM1D, get the topology, and my position
* in the topology
      CALL MPI_CART_GET(COMM1D, NDIM, DIMS, PERIODS, COORDS, IERR)

In this call, all of the parameters are output values rather than input values as in the MPI_CART_CREATE call. The COORDS variable tells us our coordinates within the communicator. This is not so useful in our one-dimensional example, but in a two-dimensional process decomposition, it would tell us our current position in that two-dimensional grid:


* Returns the left and right neighbors 1 unit away in the zeroth dimension
* of our Cartesian map - since we are not periodic, our neighbors may
* not always exist - MPI_CART_SHIFT handles this for us
      CALL MPI_CART_SHIFT(COMM1D, 0, 1, LEFTPROC, RIGHTPROC, IERR)
      CALL MPE_DECOMP1D(TOTCOLS, NPROC, INUM, S, E)
      MYLEN = ( E - S ) + 1
      IF ( MYLEN.GT.COLS ) THEN
        PRINT *,'Not enough space, need',MYLEN,' have ',COLS
        PRINT *,TOTCOLS,NPROC,INUM,S,E
        STOP
      ENDIF
      PRINT *,INUM,NPROC,COORDS(1),LEFTPROC,RIGHTPROC, S, E

We can use MPI_CART_SHIFT to determine the rank numbers of our left and right neighbors, so we can exchange our common points with these neighbors. This is necessary because we can't simply send to INUM-1 and INUM+1 if MPI has chosen to reorder our Cartesian decomposition. If we are the far-left or far-right process, the neighbor that doesn't exist is set to MPI_PROC_NULL, which indicates that we have no neighbor. Later, when we perform the message sending, MPI checks this value and sends messages only to real processes. By not sending the message to the "null process," MPI has saved us an IF test.
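As a sketch of how the neighbor ranks might be used later, consider a ghost-cell exchange with MPI_SENDRECV. The array name FIELD, the count ROWS, and the tag are assumptions for illustration, not the book's actual code; the point is that a send or receive involving MPI_PROC_NULL completes as a no-op, so the edge processes need no special-case IF test:

```fortran
* Sketch: shift one column of boundary values to the right neighbor
* while receiving the matching column from the left neighbor.
* On the far-left/far-right processes, LEFTPROC or RIGHTPROC is
* MPI_PROC_NULL and that half of the exchange quietly does nothing.
      INTEGER STATUS(MPI_STATUS_SIZE)
      CALL MPI_SENDRECV( FIELD(1,E),   ROWS, MPI_DOUBLE_PRECISION,
     +                   RIGHTPROC, 100,
     +                   FIELD(1,S-1), ROWS, MPI_DOUBLE_PRECISION,
     +                   LEFTPROC,  100,
     +                   COMM1D, STATUS, IERR )
```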





Source:  OpenStax, High performance computing. OpenStax CNX. Aug 25, 2010 Download for free at http://cnx.org/content/col11136/1.5
