Global Address Space Programming Interface

GPI revolutionizes algorithmic development for powerful software. It is considered a key to enabling the next generation of supercomputers: exascale computers that are 1,000 times faster than today's machines.

GPI – Programming Model for Supercomputers of the Future

The demand for faster, more effective, and more energy-efficient computer clusters is growing in almost every sector. Our asynchronous programming model, the Global Address Space Programming Interface (GPI), may become a key building block on the way to the next generation of supercomputers.

High-performance computing is one of the key technologies for numerous applications that we have come to take for granted – everything from Google searches to weather forecasting and from climate simulation to bioinformatics requires an ever increasing amount of computing resources. Big data analysis additionally is driving the demand for even faster, more effective, and also energy-saving computer clusters.

The number of processors per system has now reached the millions and looks set to grow even faster in the future. Yet one thing has to a large extent remained unchanged over the past 20 years: the programming model for these supercomputers. The Message Passing Interface (MPI) ensures that the microprocessors in distributed systems can communicate. For some time now, however, it has been reaching the limits of its capability.

Asynchronous Communication and Universal Programming Interface

GPI is based on a completely new approach: an asynchronous communication model, which is supplemented by remote completion. With this approach, each processor can directly access all data – regardless of its location and without affecting other parallel processes.

GPI has not been developed as a parallel programming language, but as a parallel programming interface, which means it can be used universally. The demand for such a scalable, flexible, and fault-tolerant interface is large and growing, especially in relation to the exponential growth in the number of processors in supercomputers.
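The core idea above can be made concrete with a small model. The following is a conceptual sketch in plain Python, not the real GPI-2 C API: the class and method names (`Segment`, `write_notify`, `notify_waitsome`) are illustrative stand-ins for the corresponding GASPI-style operations. The point it demonstrates is remote completion: the sender deposits data directly into the receiver's memory segment and posts a notification, and the receiver waits only on that notification, never on the sender itself.

```python
# Conceptual model of one-sided, asynchronous communication with
# remote completion (GASPI-style semantics, illustrative names only).
import threading

class Segment:
    """A node's globally accessible memory region plus notification slots."""
    def __init__(self, size, num_notifications=16):
        self.mem = bytearray(size)
        self._flags = [threading.Event() for _ in range(num_notifications)]

    def write_notify(self, offset, data, notification_id):
        """One-sided put: deposit data remotely, then set the notification."""
        self.mem[offset:offset + len(data)] = data
        self._flags[notification_id].set()

    def notify_waitsome(self, notification_id, timeout=None):
        """Receiver blocks only on the notification, not on the transfer."""
        return self._flags[notification_id].wait(timeout)

# "Rank 1" exposes a segment; "rank 0" writes into it in the background.
remote = Segment(64)
sender = threading.Thread(target=remote.write_notify, args=(0, b"halo data", 0))
sender.start()                      # communication proceeds asynchronously
# ... the receiving rank could keep computing here ...
arrived = remote.notify_waitsome(0, timeout=5.0)   # remote completion
sender.join()
assert arrived
assert bytes(remote.mem[:9]) == b"halo data"
```

Because no process ever waits inside another process's critical path, parallel work on both sides continues undisturbed while data is in flight, which is what allows GPI to decouple communication from computation.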

Advantages of GPI

  • One-sided, asynchronous and parallel communication of all threads
  • Separation of data synchronization from data transfer
  • Communication at wire speed
  • Perfect overlap of communication and computation
  • Fault tolerant
  • Energy efficient
  • Robust and industry proven
  • Not a new programming language, GPI is an API supporting C++, Fortran and C
  • Works with Pthreads, OpenMP or its own manycore thread package
  • Supports Hybrid Systems
  • Zero-copy communication between NVIDIA GPUs
  • Available for InfiniBand, Cray Gemini and Aries, Ethernet, and Intel Omni-Path Architecture (OPA)
  • Easy to use
  • Open-source implementation of the GASPI specification, free to use for researchers and developers
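Two of the advantages above, the separation of synchronization from data transfer and the overlap of communication and computation, can be sketched in a few lines. This is plain Python, not the GPI API; the `transfer` function is a hypothetical stand-in for a background RDMA transfer.

```python
# Sketch of communication/computation overlap: post a transfer, keep
# computing while it runs, and synchronize only when the data is needed.
from concurrent.futures import ThreadPoolExecutor
import time

def transfer(buf):
    time.sleep(0.05)          # stand-in for a background (RDMA-style) transfer
    return bytes(buf)

with ThreadPoolExecutor() as pool:
    future = pool.submit(transfer, b"boundary values")  # post transfer, return at once
    partial = sum(i * i for i in range(100_000))        # compute while data moves
    received = future.result()                          # synchronize only here

assert received == b"boundary values"
```

When the computation takes at least as long as the transfer, the communication cost disappears behind useful work, which is the "perfect overlap" claimed in the list above.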
GPI Delivers Excellent Performance and Scalability
© Fraunhofer ITWM
GPI delivers excellent performance and scalability for applications on current- and next-generation high-performance computers. Using a finite-difference application as an example, the graphic shows the strong scalability that GPI makes possible on the SuperMUC cluster at the Leibniz Supercomputing Centre (LRZ).

Application Examples

High-performance computing has become a universal tool in science and business, a fixed part of the design process in fields such as automotive and aircraft manufacturing.

Example Aerodynamics: TAU, one of the cornerstone simulation codes in the European aerospace sector, was ported to GPI in a project with the German Aerospace Center (DLR). GPI allowed the team to significantly increase parallel efficiency.

HD Video: Programming Model for Supercomputers of the Future

[only available in German]

Example Projects

Project INTERTWinE

Exascale Modeling and Implementation

The project INTERTWinE addresses the problem of programming model design and implementation for the exascale era. The first exascale computers will be very highly parallel systems consisting of a hierarchy of architectural levels. To program such systems effectively and portably, programming APIs with efficient and robust implementations must be ready on the appropriate timescale.


We contribute the programming model GASPI and its implementation GPI to the project and evaluate the interoperability requirements with a number of applications, such as the computational fluid dynamics code TAU of DLR (the German Aerospace Center) for aerodynamic simulation.

Project ExaNoDe

Exascale Computing

Together with 13 European partners we are involved in the project ExaNoDe (European Exascale Processor Memory Node Design). This project will investigate, develop, integrate and pilot the building blocks for a highly efficient, highly integrated, multi-way, high-performance, heterogeneous compute element aimed towards Exascale computing. 


We are developing Fraunhofer's GASPI/GPI, an open-source library for communication between compute nodes. The API has been designed with a variety of possible memory spaces in mind: GASPI/GPI provides configurable memory segments that map the hardware configuration and make it available to the application.

BMBF Project


The solution of partial differential equations with computer assistance is used in many areas of science to predict the behavior of complex systems. One example is the prediction of abrasion in the human knee joint, where bones, muscles and ligaments interact with each other.

In the HighPerMeshes project, led by the Paderborn Center for Parallel Computing at the University of Paderborn, we are jointly developing simulation methods and the corresponding software to investigate such processes. We contribute our expertise in the development and application of new software tools such as GPI-2.