Hybrid Programming in HPC – MPI+X

Room 0.439 / Rühle Saal
HLRS, University of Stuttgart
Nobelstraße 19, 70569 Stuttgart, Germany

Maksym Deliyergiyev (HLRS, University of Stuttgart)
Description

Learn how to use and program HLRS's system Hunter.

Most HPC systems are clusters of shared memory nodes. To use such systems efficiently, both memory consumption and communication time have to be optimized. Hybrid programming therefore combines distributed memory parallelization across the node interconnect (e.g., with MPI) with shared memory parallelization within each node (e.g., with OpenMP or MPI-3.0 shared memory).
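
A minimal sketch of this hybrid pattern in C, assuming one MPI process per node with OpenMP threads inside it (illustrative only, not part of the course material):

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;

        /* Request a thread support level; MPI_THREAD_FUNNELED means only
           the master thread will make MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        #pragma omp parallel
        {
            /* Shared-memory parallel work within the node. */
            printf("MPI rank %d of %d: OpenMP thread %d of %d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        /* Distributed-memory communication (e.g., halo exchanges)
           remains on the master thread in this model. */
        MPI_Finalize();
        return 0;
    }

Compiled with, e.g., mpicc -fopenmp and launched with one process per node, this is the typical MPI+OpenMP starting point.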

This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket, multi-core systems in highly parallel environments are given special consideration. MPI-3.0 introduced a shared memory programming interface, which can be combined with inter-node MPI communication. It can be used for direct neighbor accesses (similar to OpenMP) or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and with pure MPI.
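
A hedged sketch of how the MPI-3.0 shared memory interface can look in practice: ranks on the same node allocate one shared window and read each other's segments directly instead of exchanging messages (synchronization simplified here to MPI_Win_fence):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Split COMM_WORLD into per-node communicators. */
        MPI_Comm nodecomm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &nodecomm);

        int noderank;
        MPI_Comm_rank(nodecomm, &noderank);

        /* Each node-local rank contributes one double to a shared window. */
        double *mine;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                                nodecomm, &mine, &win);
        *mine = (double)noderank;

        MPI_Win_fence(0, win);
        if (noderank > 0) {
            /* Query the neighbor's segment and read it directly,
               a "direct neighbor access" as with OpenMP shared arrays. */
            MPI_Aint sz; int disp; double *left;
            MPI_Win_shared_query(win, noderank - 1, &sz, &disp, &left);
            printf("rank %d sees neighbor value %.1f\n", noderank, *left);
        }
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Comm_free(&nodecomm);
        MPI_Finalize();
        return 0;
    }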

Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming. Hands-on sessions are included on all days. Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section.
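
As a taste of the pinning "how-to", the sketch below (an assumption about a typical exercise, not course material) lets every OpenMP thread of every MPI rank report the core it runs on; the placement itself is then steered with, e.g., OMP_PLACES=cores and OMP_PROC_BIND=close plus the launcher's binding options:

    #define _GNU_SOURCE   /* for the Linux-specific sched_getcpu() */
    #include <mpi.h>
    #include <omp.h>
    #include <sched.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each thread reports its current core; compare the output
           against the intended pinning. */
        #pragma omp parallel
        printf("rank %d, thread %d -> core %d\n",
               rank, omp_get_thread_num(), sched_getcpu());

        MPI_Finalize();
        return 0;
    }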

This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.

This course is a joint training event of SIDE and EuroCC-Austria, the German and Austrian National Competence Centres for High-Performance Computing. It is organized by the HLRS in cooperation with the VSC Research Center, TU Wien and NHR@FAU.

    • Welcome
    • Day 1: Introduction to Hybrid Programming in HPC – MPI+X
      • Hunter's hardware architecture and its programming models (Dr. Christian Simmendinger, HPE; Igor Pasichnyk, AMD; Johanna Potyka, AMD)
      • 10:00 AM: Coffee Break
      • Introduction to Hybrid Programming in HPC – MPI+X
      • Programming Models
      • Programming Models - MPI + OpenMP
      • Practical (how to compile and start)
      • 12:30 PM: Lunch
      • MPI + OpenMP
      • 2:45 PM: Coffee Break
      • MPI + OpenMP
      • Practical (how to do pinning)
      • Q&A
    • Day 2: Overlapping Communication and Computation
      • MPI + OpenMP
      • Case study: Simple 2D stencil smoother
      • Practical (hybrid through OpenMP parallelization)
      • 10:45 AM: Coffee Break
      • Overlapping Communication and Computation
      • Practical (taskloops)
      • MPI + OpenMP Conclusions
      • 12:30 PM: Lunch
      • MPI + Accelerators
      • 3:00 PM: Coffee Break
      • MPI + Accelerators
      • Q&A
    • Day 3: MPI Memory Models and Synchronization
      • Programming Models (continued)
      • MPI + MPI-3.0 Shared Memory
      • 10:00 AM: Coffee Break
      • MPI Memory Models and Synchronization
      • 11:00 AM: Coffee Break
      • Optimized node-to-node communication
      • Recap: MPI Virtual Topologies
      • 12:05 PM: Lunch
      • Topology Optimization
      • Conclusions
      • 2:30 PM: Coffee Break
      • Practical (replicated data)
      • Q&A