The EPiGRAM project, which concluded in 2016, worked on programming models for exascale systems. Exascale computing power is likely to be reached in the next decade. While the precise system architectures are still evolving, one can safely assume that they will be largely based on deep hierarchies of multicore CPUs with similarly deep memory hierarchies, likely supported by accelerators.
Appropriate programming models are needed to allow applications to run efficiently at large scale on these platforms. The Message Passing Interface (MPI) has emerged as the de facto standard for parallel programming on current petascale machines, but Partitioned Global Address Space (PGAS) languages and libraries are increasingly being considered as alternatives or complements to MPI. These models will likely also play an important role on exascale systems. However, both approaches have limitations that would prevent them from reaching exascale performance. In the EPiGRAM project, we addressed some of the main limitations of the MPI and PGAS programming models by: investigating new disruptive concepts and algorithms for exascale programming models; providing prototype implementations; and validating our approach with three real-world applications that have the potential to reach exascale performance.
The GASPI programming model, with its GPI-2 implementation developed and maintained by Fraunhofer ITWM, is a PGAS representative. EPiGRAM made GASPI and GPI-2 more complete and robust by closing gaps that prevented them from scaling toward exascale. Within EPiGRAM, the RTM application was shown to scale almost linearly up to 60,000 cores on SuperMUC.