- https://www.nat-esm.de/services/workshops-and-trainings/events/supercomputing-academy-parallel-programming-with-mpi
- Supercomputing Academy: Parallel Programming with MPI
- 2025-11-03T10:00:00+01:00
- 2025-12-05T15:00:00+01:00
- This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.
Nov 03, 2025, 10:00 AM to Dec 05, 2025, 03:00 PM (Europe/Berlin / UTC+1)
Online (flexible)
MPI is a communication protocol for parallel programming based on message exchange between individual processes. These processes can run on distributed-memory systems with multiple nodes, which makes MPI scalable beyond a single computer. MPI offers a range of tools to maintain the flow of information between the processes, so that a program can be divided into several smaller parts and executed in parallel. The communication this requires always incurs overhead, which normally limits the scalability of a parallel program. A properly optimized program, however, can use MPI on a distributed-memory system (e.g., a cluster or supercomputer) with satisfactory efficiency on thousands or tens of thousands of nodes. This course also offers the opportunity for intensive exchange with the instructors and other course participants.
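To give a concrete flavor of the message-passing model described above, here is a minimal C sketch (not taken from the course material) in which process 0 sends a single integer to process 1. The file name and build/run commands in the comments are illustrative; the exact commands depend on your MPI installation.

```c
/* Minimal illustration of MPI's message-passing model:
 * process 0 sends an integer to process 1.
 * Illustrative only, not course material.
 * Typical build/run (may differ per system):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -np 2 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my process number         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (rank == 0 && size > 1) {
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Rank 0 sent %d to rank 1\n", payload);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();                         /* shut down MPI */
    return 0;
}
```

Every process executes the same program and branches on its rank; this single-program, multiple-data pattern is the basis of the communication techniques covered in the course.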
Location
Flexible online course: Combination of self-study and live seminars (HLRS Supercomputing Academy)
Organizer: HLRS, University of Stuttgart, Germany
Prerequisites
- Basic knowledge in C or Fortran
- Basic knowledge in Linux
- Basic understanding of computer hardware
Content levels
- Beginners: 12 hours
- Intermediate: 16 hours
- Advanced: 12 hours
Learn more about course curricula and content levels.
Target audience
This course is intended for, but is not limited to, the following groups:
- Software developers
- Software architects
- Computer scientists
- IT enthusiasts
- Simulation engineers
Learning outcomes
Please refer to the course overview.
This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.
Instructor
Dr. Rolf Rabenseifner (HLRS) rolf.rabenseifner(at)hlrs.de
Agenda
- Week 1: The Beginners' MPI
  Introduces the fundamentals of MPI, including the process model, basic communication techniques, and non-blocking communication.
- Week 2: Beginner & Intermediate MPI
  Dives deeper into intermediate MPI concepts, including collective communication, error handling, derived datatypes, and virtual topologies.
- Week 3: Advanced MPI - Part 1
  Explores advanced MPI topics, including one-sided communication, shared memory communication, and synchronization rules.
- Week 4: Advanced MPI - Part 2
  Continues with advanced MPI topics, focusing on collective communication and virtual topologies. It wraps up with the parallelization of a simulation of the 2-dimensional heat equation (see the sketch after this agenda).
- Week 5: Advanced MPI - Additional optional content
  Parallel file I/O, advanced topics on communicators and derived datatypes, and a short tour through other MPI topics. It wraps up with a lecture on best practices.
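As an illustration of the kind of code the week-4 wrap-up leads to, the following sketch shows a non-blocking halo exchange for a heat-equation grid distributed row-wise over the ranks. The decomposition, array layout, and constants (NX, NY) are assumptions made for this example and need not match the course exercise.

```c
/* Hypothetical sketch: non-blocking halo exchange for a 2-D grid
 * decomposed by rows over the MPI ranks. Not the course exercise itself.
 */
#include <mpi.h>
#include <stdlib.h>

#define NX 64          /* local rows (excluding the two halo rows) */
#define NY 64          /* columns                                  */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* local grid with one halo row above (row 0) and below (row NX+1) */
    double *u = calloc((size_t)(NX + 2) * NY, sizeof *u);

    /* neighbors; MPI_PROC_NULL turns the edge exchanges into no-ops */
    int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    MPI_Request req[4];
    /* exchange boundary rows with the neighbors (non-blocking) */
    MPI_Irecv(&u[0],             NY, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(&u[(NX + 1) * NY], NY, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(&u[1 * NY],        NY, MPI_DOUBLE, up,   1, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(&u[NX * NY],       NY, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &req[3]);

    /* interior points could be updated here, overlapping with communication */

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);

    /* the stencil update using the freshly received halo rows would follow */

    free(u);
    MPI_Finalize();
    return 0;
}
```

Posting the receives and sends before touching the interior points allows communication to overlap with computation, which is a typical motivation for non-blocking communication.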
Dates for the Seminars and Exam (Preliminary schedule)
- Seminars are scheduled on Mondays, 17:30-19:00: Nov. 3 (kick-off), and Nov. 10, 17, 24, and Dec. 1 (discuss the content of weeks 1-4). It is highly recommended to participate in these seminars, but if you have a conflict, you may view the recording.
- Exam is scheduled for Friday, Dec. 12. You may start the approximately 2-hour exam anytime between 06:00 and 23:00. The official course dates reflect the course weeks only, not your exam preparation or the exam itself.
- Although the schedule is preliminary, we strongly recommend that you reserve these dates when you register for this course.
Registration information
Register here.
We encourage you to register for the waiting list if the course is full; places might become available.
Registration closes on October 26, 2025.