CCSC Midwest 2017 Workshop
Teaching Parallel and Distributed Computing with MPI
Friday, September 22, 2017
Instructors: Joel Adams (Calvin College) and Libby Shoop (Macalester College)
Abstract: CS 2013 brings parallelism into the CS curricular mainstream. The Message Passing Interface (MPI) is a platform-independent, industry-standard library for parallel and distributed computing (PDC). The MPI standard includes support for C, C++, and Fortran; third parties have created implementations for Python and Java. This hands-on workshop introduces MPI basics using parallel patterns, including single program, multiple data (SPMD) execution, send-receive message passing, the master-worker and parallel-loop patterns, and the broadcast, reduction, scatter, gather, and barrier patterns. Participants will explore 12 short programs designed to help students understand MPI basics, plus longer programs that use MPI to solve significant problems. The intended audience is CS educators who want to learn how message passing can be used to teach PDC. No prior experience with PDC or MPI is required; familiarity with a C-family language and the command line is helpful but not required. The workshop includes: (i) self-paced, hands-on experimentation with the working MPI programs, and (ii) a discussion of how these may be used to achieve the goals of CS 2013. Participants will learn how to compile and run MPI programs in a lab of Linux workstations.
Schedule
30 minutes 4:15-4:45 PM:
Introduction (Joel), 10 minutes
Slides: Teaching Distributed-Memory Parallel Concepts with MPI (PDF)
Resource: How to build a cluster of NVIDIA Jetson TK1 boards
Using MPI in Calvin's Lab (Joel), 10 minutes
Handout: Getting started guide
Introduction to the MPI patternlets (Libby), 10 minutes
30 minutes 4:45-5:15 PM:
Self-paced, hands-on exploration of MPI patternlets
Module: Patternlets in Parallel Programming
20 minutes 5:15-5:35 PM:
Introduction to MPI exemplars (Libby, Joel)
Video: Integration using trapezoidal rule, 1 process
Video: Integration using trapezoidal rule, 4 processes
Self-paced, hands-on exploration of MPI exemplars
5 minutes 5:35-5:40 PM:
Discussion: Where in the curriculum should we teach students about distributed computing? (Joel)
5 minutes 5:40-5:45 PM:
Assessment
Explore
If you are interested, you can find more information about the rest of our modules via the Modules link in the left menu of this page.