Patternlets in Parallel Programming
Summary
This module presents short examples ("patternlets") illustrating two fundamental parallel programming techniques:
- message passing, used on clusters of distributed computers or on multiprocessors, and
- mutual exclusion between threads executing concurrently on a single shared-memory system.
Both sets of examples are illustrated in the C programming language, using standard, widely available libraries. The message passing examples use MPI (Message Passing Interface); the mutual exclusion/shared memory examples use OpenMP.
Each C code example has a makefile indicating the compiler flags needed.
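For instance, a makefile for one of the MPI examples might look like the sketch below; the file and target names here are hypothetical, not taken from the module's actual materials.

# Hypothetical makefile for an MPI patternlet (names are illustrative).
# mpicc is the standard MPI compiler wrapper; the OpenMP examples would
# instead use gcc with the -fopenmp flag.
sendReceive: sendReceive.c
	mpicc -o sendReceive sendReceive.c

clean:
	rm -f sendReceive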
These examples could be used as demonstrations in lecture or lab to introduce students to the basic constructs used in OpenMP and MPI.
Learning Goals
The primary learning goals for this module are:
Given basic C code examples using MPI, students should be able to recognize how to employ the following patterns in parallel program solutions (a representative sketch follows this list):
- The master-worker implementation strategy.
- Basic message passing using send and receive.
- Slicing data decomposition using parallel for loops.
- Blocking data decomposition using parallel for loops.
- Broadcast as a form of message passing.
- Reduction following task and data decomposition.
- Scatter and gather as forms of message passing.
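The following is a minimal sketch of the first two patterns above, the master-worker strategy built from basic sends and receives. It assumes a working MPI installation; the file and variable names are illustrative rather than the module's own.

/* A minimal sketch of the master-worker pattern using send/receive.
   Compile: mpicc -o sendReceive sendReceive.c
   Run:     mpirun -np 4 ./sendReceive                               */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int id, numProcs;
    char message[64];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);

    if (id != 0) {                  /* workers: send a greeting to the master */
        sprintf(message, "greetings from worker %d", id);
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
    } else {                        /* master: receive one message per worker */
        for (int i = 1; i < numProcs; i++) {
            MPI_Recv(message, 64, MPI_CHAR, i, 1, MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
    }

    MPI_Finalize();
    return 0;
}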
Given basic C code examples using OpenMP, students should be able to recognize how to employ the following patterns in parallel program solutions (again, a representative sketch follows this list):
- The master-worker implementation strategy.
- Striping data decomposition using parallel for loops.
- Blocking data decomposition using parallel for loops.
- Ensuring mutual exclusion, and its effect on performance.
- General task decomposition.
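As a sketch of the striping and blocking decompositions above (illustrative code, not the module's), OpenMP's parallel for divides a loop's iterations among threads; the schedule clause controls whether each thread gets a contiguous block or a round-robin stripe of iterations.

/* A minimal sketch of blocking vs. striping data decomposition with
   OpenMP's parallel for.
   Compile: gcc -fopenmp -o parallelLoop parallelLoop.c              */
#include <stdio.h>
#include <omp.h>

#define SIZE 16

int main() {
    /* With the (typical) default static schedule, each thread gets one
       contiguous block of iterations: blocking decomposition.  Changing
       the pragma to  schedule(static,1)  deals the iterations out one
       at a time, round-robin: striping decomposition.                 */
    #pragma omp parallel for
    for (int i = 0; i < SIZE; i++) {
        printf("thread %d handles iteration %d\n",
               omp_get_thread_num(), i);
    }
    return 0;
}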
Teaching Notes and Tips
The C code examples require certain hardware and software. The OpenMP examples require a machine with multiple cores and a compiler that supports OpenMP directives (the GNU gcc compiler is one such compiler). The MPI examples require an installed MPI implementation, and can be run on either a multiprocessor or a cluster.
OpenMP enables multithreading. MPI instead uses multiple processes, which communicate via message passing because processes (unlike threads) share no memory. The message-passing model is more generally applicable than the shared-memory model: the processes of a message-passing program can run anywhere (a distributed-memory multiprocessor, a shared-memory multiprocessor, or a uniprocessor), whereas the threads of a multithreaded program cannot be distributed across a distributed-memory multiprocessor.
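A minimal sketch of this point (illustrative, not one of the module's examples): each MPI process gets its own copy of every variable, so an assignment in one process is invisible to the others.

/* Each MPI process has a private copy of x; no memory is shared.
   Compile: mpicc -o privateCopy privateCopy.c
   Run:     mpirun -np 4 ./privateCopy                               */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int id;
    int x = 0;                 /* every process has its own copy */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    if (id == 0) x = 42;       /* only the master's copy changes */

    /* Process 0 prints 42; every other process still prints 0. */
    printf("process %d sees x = %d\n", id, x);

    MPI_Finalize();
    return 0;
}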
Note that the makefile provided with each example makes it easy for students to compile and run it. Some examples contain comments instructing students to change the code by uncommenting certain lines, then re-compile, run again, and observe what changes.
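A sketch of what such an exercise might look like, in the spirit of the mutual-exclusion patternlets (this particular code is illustrative, not taken from the module): run as-is, the threads race on the shared counter and the final value is usually wrong; uncommenting the critical directive makes the result correct, at a cost in speed.

/* A race-condition exercise: uncomment the pragma and re-run.
   Compile: gcc -fopenmp -o raceCondition raceCondition.c            */
#include <stdio.h>
#include <omp.h>

int main() {
    int balance = 0;

    #pragma omp parallel for
    for (int i = 0; i < 1000000; i++) {
        /* Uncomment the next line to make the update mutually exclusive: */
        /* #pragma omp critical */
        balance += 1;          /* unprotected read-modify-write: a race */
    }

    printf("balance = %d (expected 1000000)\n", balance);
    return 0;
}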