
Increasing the number of threads

(Figure: a table of run time and speedup for increasing numbers of processors.)

Diminishing returns

(Figure: a graph of speedup versus number of processors, with ideal and actual curves. The ideal curve rises steadily as processors are added; the actual curve follows it at first, peaks at a speedup of about 8 on 8 processors, then falls back toward zero at 16 processors.)
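
As a reminder, the speedup on P processors is simply the single-processor run time divided by the run time on P processors:

    Speedup(P) = T(1) / T(P)

Ideal speedup is linear: with P processors, the program runs P times faster. Real programs fall short of that, and, as the graph shows, can even get slower as processors are added.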

What has happened here? Things were going so well, and then they slowed down. We are running this program on a 16-processor system, and there are eight other active threads, as indicated below:


E6000: uptime
  4:00pm  up 19 day(s), 37 min(s),  5 users,  load average: 8.00, 8.05, 8.14
E6000:

Once we pass eight threads, there are no available processors left for our threads, so they must be time-shared between the processors, significantly slowing the overall operation. By the end we are executing 16 threads on eight processors: each thread gets at most half a processor, and the constant switching between threads adds overhead on top, so our performance is actually slower than with a single thread. It is important, then, not to create too many threads in these types of applications.

Compiler considerations

Improving performance by turning on automatic parallelization is an example of the “smarter compiler” we discussed in earlier chapters. The addition of a single compiler flag has triggered a great deal of analysis on the part of the compiler, including:

  • Which loops can execute in parallel, producing the exact same results as the sequential executions of the loops? This is done by checking for dependencies that span iterations. A loop with no interiteration dependencies is called a DOALL loop.
  • Which loops are worth executing in parallel? Generally very short loops gain no benefit and may execute more slowly when executing in parallel. As with loop unrolling, parallelism always has a cost. It is best used when the benefit far outweighs the cost.
  • In a loop nest, which loop is the best candidate to be parallelized? Generally the best performance occurs when we parallelize the outermost loop of a loop nest. This way the overhead associated with beginning a parallel loop is amortized over a longer parallel loop duration. (A short sketch after this list illustrates this, together with the DOALL case.)
  • Can and should the loop nest be interchanged? The compiler may detect that the loops in a nest can be done in any order. One order may work very well for parallel code while giving poor memory performance. Another order may give unit stride but perform poorly with multiple threads. The compiler must analyze the cost/benefit of each approach and make the best choice.
  • How do we break up the iterations among the threads executing a parallel loop? Are the iterations short with uniform duration, or long with wide variation of execution time? We will see that there are a number of different ways to accomplish this. When the programmer has given no guidance, the compiler must make an educated guess.
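
To make the first and third points above concrete, here is a minimal sketch of a DOALL loop nest, assuming a compiler with an automatic-parallelization flag like the -autopar flag shown later in this section:

      PROGRAM DOALL
      PARAMETER(N=1000)
      REAL*4 A(N,N), B(N,N)

      DO J=1,N
         DO I=1,N
            B(I,J) = 1.0
            A(I,J) = 0.0
         ENDDO
      ENDDO

C No iteration reads a value written by another iteration of the
C same loop, so both loops in this nest are DOALL candidates.
C Parallelizing the outer J loop amortizes the thread-startup cost
C over N*N element updates; the inner I loop then sweeps the
C columns of A and B at unit stride (Fortran is column-major).
      DO J=1,N
         DO I=1,N
            A(I,J) = A(I,J) + B(I,J) * 2.0
         ENDDO
      ENDDO
      END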

Even though it seems complicated, the compiler can do a surprisingly good job on a wide variety of codes. It is not magic, however. For example, in the following code we have a loop-carried flow dependency:


      PROGRAM DEP
      PARAMETER(NITER=300,N=1000000)
      REAL*4 A(N)

      DO ITIME=1,NITER
         CALL WHATEVER(A)
         DO I=2,N
            A(I) = A(I-1) + A(I) * C
         ENDDO
      ENDDO
      END
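
To see why the inner loop cannot safely run in parallel, write out the first few trips through it:

      A(2) = A(1) + A(2) * C
      A(3) = A(2) + A(3) * C
      A(4) = A(3) + A(4) * C

The new A(3) needs the new A(2), the new A(4) needs the new A(3), and so on: each iteration consumes a value produced by the iteration before it, so no two iterations can execute at the same time.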

When we compile the code, the compiler gives us the following message:


E6000: f77 -O3 -autopar -loopinfo -o dep dep.f
dep.f:
"dep.f", line 6: not parallelized, call may be unsafe
"dep.f", line 8: not parallelized, unsafe dependence (a)
E6000:

The compiler throws its hands up in despair and tells us that the loop at line 8 has an unsafe dependence, so it won’t automatically parallelize that loop. And indeed, when we run the code, adding a thread does not affect the execution time:


E6000: setenv PARALLEL 1
E6000: /bin/time dep

real       18.1
user       18.1
sys         0.0
E6000: setenv PARALLEL 2
E6000: /bin/time dep

real       18.3
user       18.2
sys         0.0
E6000:

A typical application has many loops, and not all of them will execute in parallel. It is a good idea to run a profile of your application and, in the routines that use most of the CPU time, check which loops are not being parallelized. Within a loop nest, the compiler generally chooses only one loop to execute in parallel.

Other compiler flags

In addition to the flags shown above, you may have other compiler flags available to you that apply across the entire program:

  • You may have a compiler flag to enable the automatic parallelization of reduction operations. Because the order of additions can affect the final value when computing a sum of floating-point numbers, the compiler needs permission to parallelize summation loops. (A sketch after this list shows why.)
  • Flags that relax the compliance with IEEE floating-point rules may also give the compiler more flexibility when trying to parallelize a loop. However, you must be sure that it’s not causing accuracy problems in other areas of your code.
  • Often a compiler has a flag called “unsafe optimization” or “assume no dependencies.” While this flag may indeed enhance the performance of an application with loops that have dependencies, it almost certainly produces incorrect results.
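
To see why the compiler needs permission for the first point, consider a simple summation loop. This is a minimal sketch; the exact flag that enables reduction parallelization varies from compiler to compiler:

      PROGRAM REDUCE
      PARAMETER(N=1000000)
      REAL*4 A(N), TOTAL

      DO I=1,N
         A(I) = 1.0E-7 * REAL(I)
      ENDDO

C Every iteration updates TOTAL, a loop-carried dependence. With a
C reduction flag, the compiler may give each thread a private
C partial sum and combine the partial sums when the loop finishes.
C Because floating-point addition is not associative, the combined
C result can differ in the low-order bits from the sequential sum,
C which is why the compiler asks your permission first.
      TOTAL = 0.0
      DO I=1,N
         TOTAL = TOTAL + A(I)
      ENDDO
      PRINT *, TOTAL
      END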

There is some value in experimenting with a compiler to find the particular combination of flags that yields good performance across a variety of applications. Then that set of compiler options can be used as a starting point when you encounter a new application.

Source:  OpenStax, High performance computing. OpenStax CNX. Aug 25, 2010 Download for free at http://cnx.org/content/col11136/1.5