
Patternlets in Parallel Programming

Material originally created by Joel Adams, Calvin College

Compiled by Libby Shoop, Macalester College


This concept module contains simple examples of basic elements that are combined to form patterns often used in parallel programs. The examples are organized around two major coordination patterns:
  1. message passing, used on clusters of distributed computers or on multiprocessors, and
  2. mutual exclusion between threads executing concurrently on a single shared-memory system.

Both sets of examples are written in the C programming language, using standard, widely available libraries. The message passing examples use the MPI (Message Passing Interface) library; the mutual exclusion/shared memory examples use the OpenMP library.

Each C code example has a makefile indicating the compiler flags needed.
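As a rough illustration of what those makefiles convey (the file and target names below are hypothetical, not the module's actual files), an OpenMP example needs the `-fopenmp` flag with gcc, while an MPI example is built with the `mpicc` wrapper:

```make
# Hypothetical sketch; the module's own makefiles may differ.
CC    = gcc
MPICC = mpicc

ompExample: ompExample.c
	$(CC) -fopenmp -o ompExample ompExample.c

mpiExample: mpiExample.c
	$(MPICC) -o mpiExample mpiExample.c

clean:
	rm -f ompExample mpiExample
```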

These examples could be used as demonstrations in lecture or lab to introduce students to the basic constructs used in OpenMP and MPI.

Learning Goals

The primary learning goals for this module are:

Given basic C code examples using MPI, students should be able to recognize how the message-passing patterns they illustrate are employed in parallel program solutions.

Given basic code examples using OpenMP, students should be able to recognize how the shared-memory patterns they illustrate are employed in parallel program solutions.

Context for Use

These code examples can be used as the starting point for introducing students to either MPI or OpenMP and for discussing common patterns used in parallel programs.

Description and Teaching Materials

Patternlets module web page, with link to code

Patternlets module latex format tarball (latex.tar.gz) This includes a PDF version.

Patternlets module MS Word docx format (ParallelPatternlets.docx)

Teaching Notes and Tips

The C code examples require certain hardware and libraries. The OpenMP examples require a machine with multiple cores and a compiler that supports OpenMP directives (GNU gcc is one such compiler). The MPI examples require an installed MPI implementation and can be run on either a multiprocessor or a cluster.

OpenMP enables multithreading. MPI uses multiprocessing rather than multithreading, and its processes communicate via message passing, since processes (unlike threads) share no memory. The message-passing model is more generally applicable than the shared-memory model: the processes of a message-passing program can run anywhere (a distributed-memory multiprocessor, a shared-memory multiprocessor, or a uniprocessor), whereas the threads of a multithreaded program cannot be distributed across a distributed-memory machine.
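A minimal sketch of this message-passing style (illustrative, not one of the module's patternlets; it assumes an MPI installation and a launcher such as mpirun):

```c
#include <stdio.h>
#include <mpi.h>

/* Each worker process sends one int to process 0 (the master),
   which receives and prints the values.
   Run with, e.g.:  mpirun -np 4 ./a.out */
int main(int argc, char **argv) {
    int id, numProcs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);

    if (id == 0) {
        for (int src = 1; src < numProcs; src++) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("master received %d from process %d\n", value, src);
        }
    } else {
        int value = id * id;               /* stand-in for real work */
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Because processes share no memory, the only way `value` reaches the master is through the explicit `MPI_Send`/`MPI_Recv` pair, which is exactly the coordination pattern the MPI patternlets exercise.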

Note that the makefile provided with each example makes it easy for students to compile and run it. Some examples contain comments instructing students to uncomment certain lines, re-compile, run again, and observe what changes.


Assessment

Assessment of this module can be done by assigning programming problems that require students to combine the patternlets provided here to solve a problem.

References and Resources

MPI Standard: documentation and information

OpenMP information

