Initial Publication Date: March 29, 2016

CCSC Central Plains 2016 Pre-Conference Workshop

Teaching Parallel and Distributed Computing with MPI

Abstract: CS 2013 brings parallelism into the CS curricular mainstream. The Message Passing Interface (MPI) is a platform-independent, industry-standard library for parallel and distributed computing (PDC). The MPI standard includes support for C, C++, and Fortran; third parties have created implementations for Python and Java. This hands-on workshop introduces MPI basics using parallel patterns, including single-program, multiple-data (SPMD) execution, send-receive message passing, master-worker, parallel loop, and the broadcast, reduction, scatter, gather, and barrier patterns. Participants will explore 12 short programs designed to help students understand MPI basics, plus longer programs that use MPI to solve significant problems. The intended audience is CS educators who want to learn how message passing can be used to teach PDC. No prior experience with PDC or MPI is required; familiarity with a C-family language and the command line is helpful but not required. The workshop includes: (i) self-paced, hands-on experimentation with the working MPI programs, and (ii) a discussion of how these may be used to achieve the goals of CS 2013. Participants will work on a remote Beowulf cluster accessed via SSH, and will either need a laptop or tablet with an SSH client (e.g., BitVise, iSSH) installed, or use desktops available at the venue with pre-installed SSH clients.


9:00 - 9:45

Introduction (Joel), 20 minutes

Slides: Teaching Distributed-Memory Parallel Concepts with MPI (Acrobat (PDF) 13.6MB Mar29 16)

Using the CDER cluster (Joel), 10 minutes

Handout: Getting started guide

Introduction to the MPI patternlets (Libby), 15 minutes

9:45 - 10:45

Self-paced, hands-on exploration of MPI patternlets

Module: Patternlets in Parallel Programming

10:45 - 11:00

Introduction to MPI exemplars (Libby)

11:00 - 11:45

Self-paced, hands-on exploration of MPI exemplars

Module: Concept: Data Decomposition Pattern

This module provides accessible code for beginners, illustrating vector addition.

Module: Distributed Computing Fundamentals

This module is also accessible, and is designed to introduce beginners to MPI through classic examples.

Module: Pandemic Exemplar using MPI

This module is an interesting example containing more sophisticated code, intended for students already fairly well versed in C and MPI programming.

11:45 - 12:00

Discussion: Where in the curriculum should students learn about distributed-computing parallelism? (Joel)

Assessment: Please fill out this CCSC survey about this workshop


If you are interested, you can find more information about the rest of our modules via the Modules link in the left menu of this page.