The challenge is to try parallel computing, not just talk about it.
During the week of May 21st to May 26th in 2006, this author attended a workshop on Parallel and Distributed Computing given by the National Computational Science Institute. The workshop introduced parallel programming using multiple computers (a set of micro computers clustered together into a super-micro computer). The workshop emphasized several important points related to the computer industry:
The last of these points was emphasized for those of you beginning a career in computer programming: as you progress in your education, you should be aware of the changing nature of computer programming as a profession. Within a few years, all professional programmers will have to be familiar with parallel programming.
During the workshop this author wrote a program that sorts an array of 150,000 integers using two different approaches. The first approach did not use parallel processing; when compiled and executed on a single machine, it took 120.324 seconds (about 2 minutes) to run. The second approach redesigned the program so that parts of it could run on several processors at the same time; when compiled and executed on 11 machines within a cluster of micro-computers, it took 20.974 seconds to run. That is nearly 6 times faster. Thus, parallel programming will become a necessity in order to utilize the multi-processor hardware of the near future.
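The size of that timing difference is easiest to appreciate with a simple O(n²) sort. The sketch below is illustrative only (it is not the workshop's nonps8.cpp source); it fills an array of 150,000 integers with random values and times a bubble sort running on a single machine:

// Illustrative sketch only - not the workshop's nonps8.cpp source.
// Times a simple O(n^2) bubble sort of 150,000 integers on one machine.
#include &lt;cstdlib&gt;
#include &lt;ctime&gt;
#include &lt;iostream&gt;

const int SIZE = 150000;
int ages[SIZE];

void bubbleSort(int data[], int count)
{
    for (int pass = 0; pass < count - 1; ++pass)
    {
        for (int i = 0; i < count - 1 - pass; ++i)
        {
            if (data[i] > data[i + 1])
            {
                int temp = data[i];      // swap adjacent items
                data[i] = data[i + 1];
                data[i + 1] = temp;
            }
        }
    }
}

int main()
{
    std::srand(static_cast<unsigned>(std::time(0)));
    for (int i = 0; i < SIZE; ++i)
    {
        ages[i] = std::rand();           // fill with pseudo-random values
    }

    std::clock_t start = std::clock();
    bubbleSort(ages, SIZE);              // all of the work done by one processor
    std::clock_t stop = std::clock();

    std::cout << "Sorted " << SIZE << " integers in "
              << static_cast<double>(stop - start) / CLOCKS_PER_SEC
              << " seconds." << std::endl;
    return 0;
}

On a single processor, every comparison and swap must happen one after another; splitting the array across many machines lets each machine do a fraction of that work at the same time, which is where the speedup comes from.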
A distributed computing environment was set up in a normal computer lab using a Linux operating system that boots from a CD. After several computers have been booted from the CD, they can communicate with each other with the support of "Message Passing Interface" or MPI commands. This model, known as the Bootable Cluster CD (BCCD), is available from:
Bootable Cluster CD – University of Northern Iowa at: (External Link)
The source code files used during the above workshop were revised into an eighth version, thus an 8 appears in each filename. The non-parallel processing "super" code is named nonps8.cpp, and the parallel processing "super" code is named ps8.cpp. (Note: The parallel processing code contains comments that describe which part of the code is run by the machine identified as the "SERVER_NODE" and which part is run by the 10 other machines, the clients. The client machines communicate critical information to the server node using "Message Passing Interface" or MPI commands. A minimal sketch of this client/server split follows the download links below.)
You may need to right click on the link and select "Save Target As" in order to download these source code files.
Download the source code file from Connexions: nonps8.cpp
Download the source code file from Connexions: ps8.cpp
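The general shape of such a client/server split under MPI is sketched below. This is not the ps8.cpp source itself, only a minimal illustration of the idea: the process with rank 0 plays the role of the SERVER_NODE, scatters equal chunks of the array to the client processes, each process sorts its own chunk, and the sorted chunks are gathered back to the server for a final merge. The chunk arithmetic assumes the array size divides evenly by the number of processes.

// Illustrative MPI sketch only - not the workshop's ps8.cpp source.
// Rank 0 acts as the SERVER_NODE; the other ranks act as clients.
#include &lt;mpi.h&gt;
#include &lt;algorithm&gt;
#include &lt;cstdlib&gt;
#include &lt;ctime&gt;
#include &lt;iostream&gt;
#include &lt;vector&gt;

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    int processes = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &processes);

    const int SIZE = 150000;             // assumes SIZE divides evenly by processes
    const int chunk = SIZE / processes;

    std::vector<int> ages;
    if (rank == 0)                       // the SERVER_NODE fills the full array
    {
        ages.resize(SIZE);
        std::srand(static_cast<unsigned>(std::time(0)));
        for (int i = 0; i < SIZE; ++i)
        {
            ages[i] = std::rand();
        }
    }

    // Every process (server and clients) receives one chunk of the array.
    std::vector<int> myPart(chunk);
    MPI_Scatter(rank == 0 ? &ages[0] : 0, chunk, MPI_INT,
                &myPart[0], chunk, MPI_INT, 0, MPI_COMM_WORLD);

    std::sort(myPart.begin(), myPart.end());   // each machine sorts its own chunk

    // The sorted chunks are collected back on the SERVER_NODE.
    MPI_Gather(&myPart[0], chunk, MPI_INT,
               rank == 0 ? &ages[0] : 0, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
    {
        // Merge the sorted chunks, one chunk at a time, into one sorted array.
        for (int i = 1; i < processes; ++i)
        {
            std::inplace_merge(ages.begin(),
                               ages.begin() + i * chunk,
                               ages.begin() + (i + 1) * chunk);
        }
        std::cout << "Sorted " << SIZE << " integers using "
                  << processes << " processes." << std::endl;
    }

    MPI_Finalize();
    return 0;
}

The sorting itself happens simultaneously on all of the machines; only the scatter, gather, and final merge are sequential, which is why adding machines shortens the total run time.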
Two notable resources with supercomputer information were provided by presenters during the workshop:
Oklahoma University – Supercomputing Center for Education&Research at: (External Link)
Contra Costa College – High Performance Computing at: (External Link)
You can also "Google" the topic's key words and spend several days reading and experimenting with High Performance Computing.
Consider reviewing the "Educator Resources" links provided in the next section.
There are many sites that provide materials and assistance to those teaching the many aspects of High Performance Computing. A few of them are:
Shodor – A National Resource for Computational Science Education at: (External Link)
CSERD – Computational Science Education Reference Desk at: (External Link)
National Computational Science Institute at: (External Link)
Association of Computing Machinery at: (External Link)
Super Computing – Education at: (External Link)