
Distributed Computing Fundamentals

Professor Libby Shoop, Macalester College
Sophors Khut, Macalester College


Message Passing Interface (MPI) is a programming model widely used for parallel programming on a cluster. Using MPI, programmers can divide a massive data set into portions, apply the same task to each portion, and distribute those portions to multiple processing units within the cluster. Alternatively, distinct tasks can be assigned to separate processes running on different machines in the cluster. In this module, we will learn how to solve larger problems more efficiently by writing programs using the MPI 'distributed memory' programming model and then executing them on a cluster.
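The distributed-memory model described above can be seen in the module's first example: every process runs the same program, but each discovers its own rank and the total process count through the MPI library. The sketch below assumes an installed MPI implementation (such as OpenMPI or MPICH); the file name hello_mpi.c is our own choice, not the module's.

```c
/* hello_mpi.c — a minimal sketch of the SPMD "Hello World" pattern.
 * Requires an MPI implementation; compile with mpicc, run with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id (0..size-1)*/
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes    */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down the MPI runtime    */
    return 0;
}
```

A typical build-and-run session on a cluster looks like `mpicc hello_mpi.c -o hello_mpi` followed by `mpirun -np 4 ./hello_mpi`, which launches four copies of the program, each printing its own rank.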

We will start our MPI journey by learning about MPI communication using the simple example "Hello World". Then we will learn about compiling and running MPI programs by computing π using calculus (the area under a curve). Finally, we will learn about MPI decomposition by performing matrix-vector multiplication and matrix-matrix multiplication.

Module Characteristics

Language Supported: C
Relevant Parallel Computing Concepts: Data Parallelism
Operating System Supported: Linux
Possible Course Use: Programming Languages, Hardware Design, Parallel Computing Systems
Recommended Teaching Level: Intermediate, Advanced

Learning Goals

Context for Use

Description and Teaching Materials

You can visit the module in your browser:

Distributed Computing Fundamentals

or you can download the module in either PDF or LaTeX format.

PDF Format: Distributed Computing Fundamentals.pdf.

LaTeX Format: Distributed Computing Fundamentals.tar.gz.

Word Formats:

Distributed Computing Fundamentals.docx.

Distributed Computing Fundamentals.doc.

Teaching Notes and Tips


We are still developing an assessment instrument for this module.

References and Resources

