GPI – Global Address Space Programming Interface

Programming Model for Supercomputers of the Future

The demand for even faster, more effective, and also energy-saving computer clusters is growing in almost every sector. Our asynchronous programming model, the Global Address Space Programming Interface (GPI), could become a key building block in realizing the next generation of supercomputers.

»High Performance Computing« is one of the key technologies for numerous applications that we have come to take for granted – everything from Google searches to weather forecasting and from climate simulation to bioinformatics requires an ever-increasing amount of computing resources. Big data analysis is additionally driving the demand for even faster, more effective, and also energy-saving computer clusters.

The number of processors per system has now reached the millions and looks set to grow even faster in the future. Yet one thing has remained largely unchanged over the past 20 years: the programming model for these supercomputers. The Message Passing Interface (MPI) ensures that the microprocessors in these distributed systems can communicate, but for some time now it has been reaching the limits of its capability.

Asynchronous Communication and Universal Programming Interface

Our Global Address Space Programming Interface is based on a completely new approach: an asynchronous communication model, supplemented by remote completion. With this approach, each processor can directly access all data – regardless of where it is stored and without affecting other parallel processes.
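The put-with-notification idea behind this model can be pictured in plain Python – a conceptual simulation using threads, not the actual GPI API; all names below are illustrative. One rank writes directly into another rank's memory and then raises a notification; the target waits only on that notification, never posting a matching receive:

```python
import threading

class Segment:
    """A partition of the global address space owned by one rank (simulated)."""
    def __init__(self, size):
        self.mem = [0] * size
        self.notifications = {}          # notification id -> threading.Event
        self.lock = threading.Lock()

    def notification(self, nid):
        with self.lock:
            return self.notifications.setdefault(nid, threading.Event())

def write_notify(target, offset, data, nid):
    """One-sided put: write into the target's memory, then flag remote completion.
    The target rank is not interrupted and posts no matching receive."""
    target.mem[offset:offset + len(data)] = data
    target.notification(nid).set()

def waitsome(segment, nid):
    """Target side: wait only on the notification, not on the transfer itself."""
    segment.notification(nid).wait()

# rank 1 owns a segment; rank 0 writes into it asynchronously
seg = Segment(8)
writer = threading.Thread(target=write_notify, args=(seg, 0, [1, 2, 3], 7))
writer.start()
waitsome(seg, 7)          # rank 1 learns of remote completion
writer.join()
print(seg.mem[:3])        # -> [1, 2, 3]
```

The key property sketched here is that the data transfer and its synchronization are decoupled: the writer never blocks on the reader, and the reader synchronizes on a single flag rather than on the communication itself.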

We have designed the Global Address Space Programming Interface not as a parallel programming language but as a parallel programming interface, which means it can be used universally. The demand for such a scalable, flexible, and fault-tolerant interface is large and growing in the field of »High Performance Computing«, especially given the exponential growth in the number of processors in supercomputers.

Advantages of GPI

  • One-sided, asynchronous and parallel communication of all threads
  • Separation of data synchronization from data transfer
  • Communication at wire speed
  • Perfect overlap of communication and computation
  • Fault-tolerant and energy-efficient
  • Robust and industry proven
  • Not a new programming language but an API supporting C, C++, and Fortran
  • Works with Pthreads, OpenMP or any threading package
  • Supports Hybrid Systems
  • Zero-copy communication between NVIDIA GPUs
  • Available for all major HPC interconnects
  • Open source implementation of the GASPI specifications and free to use for researchers and developers
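The »separation of data synchronization from data transfer« and the »overlap of communication and computation« listed above amount to the following pattern, sketched here as a thread-based Python simulation (not GPI's real API): post a transfer, keep computing, and synchronize only when the result is needed.

```python
import threading

def post_write(dest, data, done):
    # the transfer proceeds in the background while the issuing rank computes
    dest.extend(data)
    done.set()

dest = []
done = threading.Event()
t = threading.Thread(target=post_write, args=(dest, [10, 20], done))
t.start()                                    # non-blocking: returns immediately

partial = sum(x * x for x in range(1000))    # computation overlapped with the transfer

done.wait()                                  # synchronize only when the data is needed
t.join()
print(dest)                                  # -> [10, 20]
```

In a real GPI program the background transfer is performed by the network hardware at wire speed, so the compute time in the middle genuinely hides the communication cost.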
GPI delivers excellent performance and scalability for applications on current and next-generation supercomputers. The graph shows the strong scalability achieved with GPI on the SuperMUC cluster of the Leibniz Supercomputing Centre (LRZ), using a finite difference application as an example.
© Fraunhofer ITWM

Application Examples

High-performance computing has become a universal tool in science and business and a fixed part of the design process in fields such as automotive and aircraft manufacturing.

Example – Aerodynamics: The software TAU, one of the cornerstones of simulation in the European aerospace sector, was ported to the GPI platform in a project with the German Aerospace Center (DLR). GPI allowed parallel efficiency to be increased significantly.

The GPI Ecosystem

To increase productivity when working with GPI, we offer additional software tools within the GPI ecosystem.

GaspiCxx is an easy-to-use C++ interface for GPI. It manages GPI-2 communication resources within the application completely on its own, without limiting the underlying performance, so the application no longer has to take care of them. This eliminates much of the implementation work normally required when developing GPI applications, making it easier than ever to develop GPI applications and benefit from the associated advantages, such as good scalability. In addition, GaspiCxx offers collective operations that go beyond the normal scope of a GPI implementation.
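The automatic resource management described above can be pictured as a scope-bound handle, sketched here as a hypothetical Python context manager. The real GaspiCxx API is C++ and differs from this; every name below is made up purely to illustrate the design idea of resources being acquired and released without user bookkeeping:

```python
class CommResources:
    """Hypothetical stand-in for library-managed communication resources
    (segments, queues, buffers): acquired on entry, released automatically."""
    def __init__(self):
        self.allocated = []

    def buffer(self, size):
        # the library hands out a registered buffer and remembers it
        buf = [0] * size
        self.allocated.append(buf)
        return buf

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # resources are handed back without any user bookkeeping
        self.allocated.clear()
        return False

with CommResources() as comm:
    buf = comm.buffer(4)
    buf[0] = 99
# on scope exit, all communication resources have been released automatically
```

The design choice this illustrates is tying resource lifetime to scope, so forgotten releases – a common source of bugs in hand-managed communication code – cannot occur.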

Many simulations in the engineering field are based on CFD and FEM methods, for example the determination of aerodynamic properties of aircraft or the analysis of building statics. Ultimately, a complex system of equations has to be solved. Such simulations now benefit directly from the GPI programming model when using GaspiLS.
GaspiLS is a scalable linear solver library. It makes consistent use of the parallel programming methods and tools we have developed and is tuned for maximum efficiency. The complexities typically associated with an efficient hybrid-parallel implementation are encapsulated in the matrices, vectors, and solvers provided by GaspiLS and are completely transparent to the user.
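The kind of iterative Krylov method such a solver library provides can be illustrated with a plain, sequential conjugate gradient sketch in pure Python. This is not the GaspiLS API; a real hybrid-parallel implementation distributes exactly these vector and matrix-vector operations across ranks and threads:

```python
def cg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient for a symmetric positive definite system Ax = b,
    with A given as a dense list of rows (sequential illustration only)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                     # residual b - A x, starting from x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # matrix-vector product: the operation a parallel library distributes
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 SPD system: [[4, 1], [1, 3]] x = [1, 2]  ->  x = [1/11, 7/11]
x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print([round(v, 4) for v in x])   # -> [0.0909, 0.6364]
```

In CFD and FEM simulations the matrix is sparse and very large, so the efficiency of the distributed matrix-vector products and reductions inside such a loop dominates overall solver performance.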

Video: Programming Model for Supercomputers of the Future

[Only available in German] The GPI programming model was awarded the Fraunhofer Research Prize. Dr. Carsten Lojewski, Dr. Christian Simmendinger, and Dr. Rui Machado (from left to right) developed a programming model that makes maximally efficient use of high-performance computers.