Modules

Choose a Visual Search Option

Hardware/Software Visual Module Search
Cross-Curriculum Visual Module Search

Want to know more about modules?

Find out more about modules and their contents.

Have a module of your own?

Contribute to the site by submitting your own module. Your submission will be reviewed by CS In Parallel to determine what categories it should be listed under. After that process, it will become available to all viewers of this site.

Full-Text Search Across the Module Collection




Results 1 - 20 of 26 matches

MPI Programming Exemplars
Elizabeth Shoop, Macalester College
Four complete examples that use MPI. They can be used to study parallel patterns and learn how to time code.
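
As a rough illustration of the kind of timing exercise these exemplars support (this sketch is not taken from the module; the computation is a placeholder), an MPI program can be timed with MPI_Wtime:

    /* Minimal sketch: timing work across MPI processes with MPI_Wtime.
       The loop below is a placeholder for the module's real computations. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        MPI_Barrier(MPI_COMM_WORLD);             /* line up all processes */
        double start = MPI_Wtime();

        double local = 0.0;                      /* placeholder work, split by rank */
        for (long i = rank; i < 10000000; i += size)
            local += 1.0 / (i + 1);

        double total;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        double elapsed = MPI_Wtime() - start;
        if (rank == 0)
            printf("%d processes finished in %f seconds\n", size, elapsed);

        MPI_Finalize();
        return 0;
    }

A program like this would typically be built with mpicc and launched with mpirun; the exact commands depend on the local MPI installation.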

Introducing Students to MapReduce using Phoenix++
Suzanne Matthews, United States Military Academy
This module introduces MapReduce using Phoenix++, a shared-memory implementation of the map-reduce framework. Through the provided code, students learn to implement mapper and reducer functions for the classic word-count example in C++ for use with Phoenix++.

Monte Carlo Simulations: Parallelism in CS1/CS2
David Valentine, Slippery Rock University of Pennsylvania
Use Monte Carlo Simulations in CS1/CS2 to expose students to parallel programming with OpenMP.
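
For readers unfamiliar with the approach, a minimal sketch of the classic Monte Carlo estimate of pi with OpenMP follows; it only illustrates the style of exercise and is not code from the module itself.

    /* Sketch: estimate pi by sampling random points in the unit square. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const long trials = 10000000;
        long hits = 0;

        #pragma omp parallel reduction(+:hits)
        {
            unsigned int seed = 1234 + omp_get_thread_num();   /* per-thread seed */
            #pragma omp for
            for (long i = 0; i < trials; i++) {
                double x = rand_r(&seed) / (double)RAND_MAX;
                double y = rand_r(&seed) / (double)RAND_MAX;
                if (x * x + y * y <= 1.0)       /* inside the quarter circle? */
                    hits++;
            }
        }

        printf("pi is approximately %f\n", 4.0 * hits / trials);
        return 0;
    }

The reduction clause lets each thread count hits privately and combine the counts at the end, which is the central shared-memory idea students meet in this kind of exercise.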

GPU Programming
Elizabeth Shoop, Macalester College; Yu Zhao, Macalester College
In this module, we will learn how to create programs that intentionally use the GPU for execution. More specifically, we will learn how to solve parallel problems more efficiently by writing programs in the CUDA C programming language and executing them on GPUs based on the CUDA architecture.

Concurrent Access to Data Structures in C++
Richard Brown, Saint Olaf College
This module enables students to experiment with creating a task-parallel solution to the problem of crawling the web by using C++ with Boost threads and thread-safe data structures available in the Intel Threading Building Blocks library.

Map-reduce Computing for Introductory Students using WebMapReduce
Professor Richard Brown, St. Olaf College; Professor Libby Shoop, Macalester College
This module emphasizes data-parallel problems and solutions, the so-called 'embarrassingly parallel' problems where processing of input data can easily be split among several parallel processes. Students use a web application called WebMapReduce (WMR) to write map and reduce functions that operate on portions of a massive dataset in parallel.

WMR Exemplar: LastFM million-song dataset
Elizabeth Shoop, Macalester College
This module demonstrates how Hadoop and WMR can be used to analyze the LastFM million-song dataset. It incorporates several advanced Hadoop techniques such as job chaining and multiple inputs.

WMR Exemplar: Flixster network data
Elizabeth Shoop, Macalester College
The exercises in this module use a network of friendships on the social movie recommendation site Flixster. Students will use it to learn how to analyze networks and chain jobs, using the WebMapReduce interface.

WMR Exemplar: UK Traffic Incidents
Elizabeth Shoop, Macalester College
Working with data about traffic incidents published by the United Kingdom Department of Transportation, students can explore the dataset and perform analyses using map-reduce techniques.

Concept: Data Decomposition Pattern
Elizabeth Shoop, Macalester College
This module consists of reading material and code examples that depict the data decomposition pattern in parallel programming, using a small example of vector addition (sometimes called the "Hello, World" of parallel programming).
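
As a rough sketch of the pattern (OpenMP C code written here for illustration, not the module's own examples), vector addition decomposes naturally because each element of the result can be computed independently:

    /* Sketch: data decomposition via a work-sharing loop.
       The OpenMP runtime hands each thread its own chunk of the index range. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1000000;
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        double *c = malloc(n * sizeof *c);

        for (int i = 0; i < n; i++) {       /* initialize the input vectors */
            a[i] = i;
            b[i] = n - i;
        }

        #pragma omp parallel for            /* decompose the iterations across threads */
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];

        printf("c[0] = %.1f, c[n-1] = %.1f\n", c[0], c[n - 1]);
        free(a); free(b); free(c);
        return 0;
    }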

Drug Design Exemplar
Richard Brown, Saint Olaf College
An important problem in the biological sciences is that of drug design: finding small molecules, called ligands, that are good candidates for use as drugs. We introduce the problem and provide several different parallel solutions, in the context of parallel program design patterns.

Patternlets in Parallel Programming
Material originally created by Joel Adams, Calvin College; compiled by Libby Shoop, Macalester College
Short, simple C programming examples of basic shared memory programming patterns using OpenMP and basic distributed memory patterns using MPI.
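
To give a sense of scale, a shared-memory patternlet is typically on the order of the following fork-join example (written here for illustration; the module's actual patternlets may differ in detail):

    /* Sketch of the basic fork-join (SPMD) pattern in OpenMP. */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        printf("Before the parallel region: one thread.\n");

        #pragma omp parallel                   /* fork a team of threads */
        {
            int id = omp_get_thread_num();
            int nthreads = omp_get_num_threads();
            printf("Hello from thread %d of %d\n", id, nthreads);
        }                                      /* join: back to a single thread */

        printf("After the parallel region: one thread again.\n");
        return 0;
    }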

Heterogeneous Computing
Elizabeth Shoop, Macalester College
Message Passing Interface (MPI) is a programming model widely used for parallel programming on a cluster. NVIDIA's CUDA, a parallel computing platform and programming model, uses the GPU to solve parallel computation problems. This module explores ways to combine these two parallel computing platforms to make parallel computation more efficient.

Parallel Sorting
Elizabeth Shoop, Macalester College
This module, targeted for algorithms and data structures courses, examines the theoretical PRAM model and its use when designing a parallel version of the mergesort algorithm.

Concurrency and Map-Reduce Strategies in Various Programming Languages
Professor Richard Brown, St. Olaf College
This concept module explores how concurrency and parallelism have been established in programming languages and how one can implement map-reduce in several high-level programming languages taught in a CS curriculum, including Scheme, C++, Java, and Python.

Parallel Computing Concepts
Richard Brown, Saint Olaf College
This concept module will introduce a core of parallel computing notions that CS majors and minors should know in preparation for the era of manycore computing, including parallelism categories, concurrency issues and solutions, and programming strategies.

Concurrent Access to Data Structures
Professor Libby Shoop, Macalester College
This module enables students to experiment with creating a task-parallel solution to the problem of crawling the web by using Java threads and thread-safe data structures available in the java.util.concurrent package.

Pandemic Exemplar using MPI
Yu Zhao, Macalester College
This module develops a simple agent-based model of infectious disease, designs a parallel algorithm based on the model, provides a coded implementation of that algorithm, and explores how the implementation scales on high-performance cluster resources.

Parallel Processes in Python
Steven Bogaerts, DePauw University
This module is designed for use in the latter half of a semester-long CS1 course. It introduces students to forking child processes to do work in parallel and to coordinating multiple concurrent processes through a shared data queue.

Instructor Example: Optimizing CUDA for GPU Architecture
Elizabeth Shoop, Macalester College
This module, designed for instructors to use as an example, explains how to take advantage of the CUDA GPU architecture to provide maximum speedup for your CUDA applications using a Mandelbrot set generator as an example.

