Distributed Computing Fundamentals
Summary
Message Passing Interface (MPI) is a programming model widely used for parallel programming on clusters. Using MPI, programmers can divide a large data set among multiple processing units in the cluster, with each unit running the same task on its own portion of the data. Alternatively, distinct tasks can be assigned to separate processes running on different machines in the cluster. In this module, we will learn how to solve larger problems more efficiently by writing programs with the MPI 'distributed memory' programming model and executing them on a cluster.
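As a concrete illustration of this model, here is a minimal sketch of an MPI "Hello World" program in C (the module's own code may differ in its details). Every process runs the same executable; each one learns its own rank and the total number of processes from the MPI runtime.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id, 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut down the MPI runtime */
    return 0;
}

Such a program is typically compiled with an MPI wrapper compiler such as mpicc and launched with mpirun or mpiexec; the exact commands depend on the MPI installation on your cluster.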
We will start our MPI journey by learning MPI Communications using the simple 'Hello World' example. Then we will learn MPI Compiling by computing pi numerically as the area under a curve. Finally, we will learn MPI Decomposition through Matrix-Vector Multiplication and Matrix-Matrix Multiplication.
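A common way to compute pi as the area under a curve is the midpoint rule applied to 4/(1 + x^2) on [0, 1]. The sketch below assumes that approach and a simple cyclic division of rectangles among processes, with MPI_Reduce combining the partial sums on rank 0; the module's own code may organize the computation differently.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    const long n = 1000000;          /* number of rectangles (illustrative value) */
    const double h = 1.0 / n;        /* width of each rectangle */
    double local = 0.0, pi = 0.0;
    long i;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process sums every size-th rectangle, starting at its own rank. */
    for (i = rank; i < n; i += size) {
        double x = h * (i + 0.5);             /* midpoint of rectangle i */
        local += 4.0 / (1.0 + x * x) * h;     /* area of that strip under 4/(1+x^2) */
    }

    /* Combine the partial sums onto rank 0. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.15f\n", pi);

    MPI_Finalize();
    return 0;
}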
Module Characteristics
Language Supported: C
Relevant Parallel Computing Concepts: Data Parallelism
Operating System Supported: Linux
Possible Course Use: Programming Languages, Hardware Design, Parallel Computing Systems
Recommended Teaching Level: Intermediate, Advanced
Learning Goals
- Students should be able to describe the basic ideas of communicating processes in the MPI Programming Model and of clusters of computers in which each node has its own memory.
- Students should be able to write programs using the MPI Programming Model and then execute them on a cluster (provided independently).
- Students should be able to design programs that clearly express the concept of decomposition of data into tasks using the MPI Programming Model (a sketch of one such decomposition follows this list).
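As a sketch of what such a decomposition can look like, the program below distributes the rows of a matrix with MPI_Scatter, broadcasts the vector, lets each process compute its own slice of the product, and gathers the slices with MPI_Gather. It assumes the matrix dimension divides evenly by the number of processes; the module's own examples may decompose the data differently.

#include <stdio.h>
#include <mpi.h>

#define N 8   /* matrix dimension; assumed divisible by the number of processes */

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                      /* rows handled by each process */
    double A[N][N], x[N], y[N];               /* full data, used only on rank 0 */
    double local_A[rows][N], local_y[rows];   /* this process's block of rows */

    if (rank == 0) {                          /* rank 0 builds A and x */
        for (int i = 0; i < N; i++) {
            x[i] = 1.0;
            for (int j = 0; j < N; j++)
                A[i][j] = i + j;
        }
    }

    /* Decomposition: each process gets a contiguous block of rows of A and a
       full copy of x, computes its slice of y = A*x, and the slices are
       gathered back on rank 0. */
    MPI_Scatter(A, rows * N, MPI_DOUBLE, local_A, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Bcast(x, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (int i = 0; i < rows; i++) {
        local_y[i] = 0.0;
        for (int j = 0; j < N; j++)
            local_y[i] += local_A[i][j] * x[j];
    }

    MPI_Gather(local_y, rows, MPI_DOUBLE, y, rows, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < N; i++)
            printf("y[%d] = %.1f\n", i, y[i]);

    MPI_Finalize();
    return 0;
}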
Context for Use
- This module can be taught in a C-based course or in a course whose students have prior C programming experience. Students with little or no knowledge of C will find the materials in this module difficult to follow, so an adequate C background is mandatory.
- It is designed for use as a lab.
- Depending on the curriculum, this module could be considered to be at an "intermediate" or "advanced" level.
Description and Teaching Materials
You can visit the module in your browser:
Teaching Notes and Tips
- This module has been used on the hardware platform LittleFe, a 6-node distributed memory cluster in which each node is equipped with an Intel® Atom CPU and an NVIDIA® GT218 GPU, and all nodes are connected via a network router. LittleFe is a Linux-based cluster running the Bootable Cluster CD (BCCD), which provides the modified operating system and software.
- Although this module was developed on the LittleFe platform, instructors with any working Linux-based cluster can also use it for teaching.
- Note: This module is written progressively; each chapter is more complex and difficult than the one before. We designed it this way so that instructors can decide how many chapters to include in their own schedule. Based on our estimates, if you have one or two course periods, we advise covering only the Introduction to MPI and MPI Communications chapters. If you have more than a week, we advise finishing all the chapters.
Assessment
References and Resources