Scalable Parallel Programming

Starting at the level of single-core performance, the group develops methods and tools for the efficient utilization of all available hardware resources, up to the level of the complete supercomputer.

Tuning

We follow a holistic approach to software optimization. Our aim is to pair a comprehensive understanding of methods, algorithms, and their implementation with deep knowledge of the underlying architecture and of the available tools in order to deliver optimal performance.

Numerical Solver

GaspiLS is a numerical solver library built entirely on the principles of the GPI-2 programming model. As such, it is designed for optimal scalability.
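
The sketch below is a generic illustration, not GaspiLS's actual interface: it shows the kind of building block such a solver library rests on, a globally distributed dot product as it occurs in every iteration of a Krylov-type solver, with the reduction performed by GPI-2's collective allreduce.

    #include <GASPI.h>

    /* Generic sketch of a solver building block, not GaspiLS's API:
     * each rank holds a local slice of the vectors x and y; the global
     * dot product is formed with GPI-2's collective allreduce. */
    double dot_product (const double *x, const double *y, int n_local)
    {
      double local = 0.0, global = 0.0;

      for (int i = 0; i < n_local; ++i)
      {
        local += x[i] * y[i];
      }

      gaspi_allreduce ( &local, &global, 1
                      , GASPI_OP_SUM, GASPI_TYPE_DOUBLE
                      , GASPI_GROUP_ALL, GASPI_BLOCK
                      );

      return global;
    }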

GPI

GPI-2 is the communication library of first choice when it comes to highly scalable applications. GPI-2 allows truly asynchronous and parallel communication by all threads and achieves optimal overlap of communication with computation. Fast and partially cost-free notifications of remote components and a well-defined system state in case of a failure make GPI-2 the world-leading communication library.
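
The following minimal example illustrates this communication style (error handling is omitted, and segment ids, offsets, sizes, and queue numbers are chosen for illustration only): rank 0 writes one-sided into the segment of rank 1 and attaches a notification; rank 1 waits for exactly that notification instead of matching a message.

    #include <GASPI.h>

    /* Minimal GPI-2 sketch, to be run with at least two processes:
     * rank 0 writes 4 KiB into the segment of rank 1 and attaches a
     * notification; rank 1 waits for it. Error checks are omitted. */
    int main (int argc, char *argv[])
    {
      gaspi_proc_init (GASPI_BLOCK);

      gaspi_rank_t rank;
      gaspi_proc_rank (&rank);

      /* a 1 MiB segment, registered with all ranks */
      gaspi_segment_create ( 0, 1 << 20, GASPI_GROUP_ALL
                           , GASPI_BLOCK, GASPI_MEM_INITIALIZED );

      if (rank == 0)
      {
        gaspi_write_notify ( 0, 0, 1        /* local segment, offset, target rank */
                           , 0, 0, 4096     /* remote segment, offset, size */
                           , 0, 1           /* notification id and value (!= 0) */
                           , 0, GASPI_BLOCK /* queue, timeout */ );
        gaspi_wait (0, GASPI_BLOCK);        /* local completion of queue 0 */
      }
      else if (rank == 1)
      {
        gaspi_notification_id_t id;
        gaspi_notification_t value;
        gaspi_notify_waitsome (0, 0, 1, &id, GASPI_BLOCK);
        gaspi_notify_reset (0, id, &value);
        /* the data written by rank 0 is now visible in the local segment */
      }

      gaspi_proc_term (GASPI_BLOCK);
      return 0;
    }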

GPI-Space

GPI-Space abstracts away the complexity of big machines without sacrificing efficiency. Based on a generic, fault-tolerant, and scalable distributed runtime system operating on a dynamic set of resources, a Petri net based workflow engine, and a scalable virtual memory layer, GPI-Space allows for the development of domain-specific development and runtime systems.
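
To make the Petri net idea concrete, here is a toy sketch of the firing rule such a workflow engine is built on (this shows the execution model only, not GPI-Space's actual interface): a transition, and hence a task, can run as soon as all of its input places hold tokens, which exposes the available parallelism automatically.

    #include <stdio.h>

    /* Toy Petri net: a transition fires when every input place holds
     * enough tokens; firing consumes the inputs and produces outputs.
     * This illustrates the execution model only, not GPI-Space's API. */
    enum { NUM_PLACES = 3 };

    typedef struct
    {
      int inputs[NUM_PLACES];   /* tokens required per place */
      int outputs[NUM_PLACES];  /* tokens produced per place */
    } transition_t;

    static int enabled (const int *marking, const transition_t *t)
    {
      for (int p = 0; p < NUM_PLACES; ++p)
      {
        if (marking[p] < t->inputs[p]) return 0;
      }
      return 1;
    }

    static void fire (int *marking, const transition_t *t)
    {
      for (int p = 0; p < NUM_PLACES; ++p)
      {
        marking[p] += t->outputs[p] - t->inputs[p];
      }
    }

    int main (void)
    {
      int marking[NUM_PLACES] = {1, 1, 0};      /* initial tokens */
      transition_t t = {{1, 1, 0}, {0, 0, 1}};  /* join: p0 + p1 -> p2 */

      while (enabled (marking, &t))  /* fire while inputs are available */
      {
        fire (marking, &t);          /* in GPI-Space: schedule the task */
      }
      printf ("tokens on p2: %d\n", marking[2]);
      return 0;
    }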

Example Projects

INTERTWinE – Exascale Modeling and Implementation

This project addresses the problem of programming model design and implementation for the Exascale. The first Exascale computers will be very highly parallel systems, consisting of a hierarchy of architectural levels. To program such systems effectively and portably, programming APIs with efficient and robust implementations must be ready in the appropriate timescale.

A single, »silver bullet« API which addresses all the architectural levels does not exist and seems very unlikely to emerge soon enough. We must therefore expect that using combinations of different APIs at different system levels will be the only practical solution in the short to medium term. Although there remains room for improvement in individual programming models and their implementations, the main challenges lie in interoperability between APIs. It is this interoperability, both at the specification level and at the implementation level, which this project seeks to address and to further the state of the art.
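
As a hedged sketch of what such API interoperability means in practice, the fragment below combines MPI and GASPI in one application; it assumes a GPI-2 build with MPI interoperability enabled, in which case MPI is initialized first and both libraries then address the same set of processes.

    #include <mpi.h>
    #include <GASPI.h>

    /* Sketch of MPI/GASPI coexistence, assuming a GPI-2 build with MPI
     * support: MPI is initialized first, gaspi_proc_init then attaches
     * to the same processes, and phases of both models can alternate. */
    int main (int argc, char *argv[])
    {
      MPI_Init (&argc, &argv);
      gaspi_proc_init (GASPI_BLOCK);

      /* ... legacy MPI phase, e.g. an existing solver ... */
      MPI_Barrier (MPI_COMM_WORLD);

      /* ... new GASPI phase, e.g. asynchronous one-sided communication ... */
      gaspi_barrier (GASPI_GROUP_ALL, GASPI_BLOCK);

      gaspi_proc_term (GASPI_BLOCK);
      MPI_Finalize ();
      return 0;
    }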

Fraunhofer ITWM brings the programming model GASPI and its implementation GPI into the project and evaluates the interoperability requirements against a number of applications, such as the computational fluid dynamics code TAU of DLR (German Aerospace Center), which is used for aerodynamic simulation.

Further information on the project on the INTERTWinE Website

EPiGRAM – Programming Models for Exascale Systems

The EPiGRAM project, which concluded in 2016, worked on programming models for Exascale systems. Exascale computing power is likely to be reached in the next decade. While the precise system architectures are still evolving, one can safely assume that they will be largely based on deep hierarchies of multicore CPUs with similarly deep memory hierarchies, and likely also supported by accelerators.

Appropriate programming models are needed to allow applications to run efficiently at large scale on these platforms. The Message Passing Interface (MPI) has emerged as the de facto standard for parallel programming on current Petascale machines, but Partitioned Global Address Space (PGAS) languages and libraries are increasingly being considered as alternatives or complements to MPI. These models will likely also play an important role in Exascale systems. However, both approaches have problems that will prevent them from reaching Exascale performance. In the EPiGRAM project, we addressed some of the main limitations of message passing and PGAS programming models by investigating new disruptive concepts and algorithms in Exascale programming models, providing prototypical implementations, and validating our approach with three real-world applications that have the potential to reach Exascale performance.

The GASPI programming model with its GPI implementation, developed and maintained by Fraunhofer ITWM, is a PGAS representative. EPiGRAM made GASPI and GPI-2 more complete and robust by closing the gaps that prevented them from reaching Exascale. Within EPiGRAM, almost linear strong scaling of the RTM application up to 60k cores on SuperMUC was demonstrated.

Further information on the project on the EPiGRAM Website

EXA2CT – Creation of Exascale Codes

The EXA2CT project, which concluded in 2016, brought together experts at the cutting edge of solver development and related algorithmic techniques with HPC software architects for programming models and communication.

Within the scope of the project, modular open-source proto-applications were developed that demonstrate the algorithms and programming techniques devised in the project and help bootstrap the creation of genuine Exascale codes. The technologies developed in EXA2CT range from advanced libraries such as ExaShark and GASPI/GPI, which help to program massively parallel machines, to solver algorithms improved by better overlapping communication with computation and by increasing arithmetic intensity. All of this was verified on industrially relevant prototype applications.
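
The overlap technique mentioned above can be sketched in a few lines of GPI-2 (the functions compute_interior and compute_boundary and the segment layout are hypothetical placeholders): the halo transfer is started asynchronously, the interior of the domain is computed while the data is in flight, and only the boundary computation waits for the incoming notification.

    #include <GASPI.h>

    /* Illustrative overlap of communication and computation in one
     * step of a stencil-like solver; the compute_* functions and the
     * segment layout are placeholders, not EXA2CT code. */
    extern void compute_interior (void);
    extern void compute_boundary (void);

    void solver_step (gaspi_rank_t neighbor)
    {
      /* start the asynchronous halo transfer: 4 KiB from local
       * segment 0 into segment 0 of the neighbor, notification id 0 */
      gaspi_write_notify ( 0, 0, neighbor
                         , 0, 0, 4096
                         , 0, 1
                         , 0, GASPI_BLOCK );

      compute_interior ();  /* overlaps with the transfer in flight */

      /* block only where the neighbor's halo data is actually needed */
      gaspi_notification_id_t id;
      gaspi_notification_t value;
      gaspi_notify_waitsome (0, 0, 1, &id, GASPI_BLOCK);
      gaspi_notify_reset (0, id, &value);

      compute_boundary ();  /* safe to use the received halo now */
    }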

Tom van der Aa (ExaScience Lifelab at imec, Belgium), coordinator of the EXA2CT project, states: »In the project it could be shown that Fraunhofer ITWM’s programming API GPI-2 can genuinely outperform MPI.«

Further information on the project on the EXA2CT Website

SafeClouds – Cutting Edge Technologies for Aviation Safety Assurance

SafeClouds is a research project driven by a full spectrum of aviation stakeholders that develops cutting-edge technologies for aviation safety assurance in a cost-effective manner. SafeClouds proposes a data-driven approach to achieve a deeper understanding of the dynamics of the system, in which risks are proactively identified and mitigated in a continuous effort to enhance the already excellent European aviation safety record.

SafeClouds develops an innovative approach to aviation safety data analysis. Currently, each stakeholder owns isolated datasets, and data-sharing paradigms are rare. However, the combination of these datasets is critical for discovering unknown safety hazards and for understanding and defining a performance-based system safety concept. The new data-driven paradigm, capable of extracting safety intelligence in a fast, connected, and inexpensive way, requires aviation and IT entities to collaborate and share their raw datasets, tools, techniques, and information.

We provide the IT infrastructure for this project. Our runtime framework GPI-Space will be used throughout the project.

Further information on the project on the SafeClouds Website

ExaNoDe – European Project for Exascale Computing

The structure of the ExaNoDe project. © ExaNoDe

The Fraunhofer Institute for Industrial Mathematics ITWM in Kaiserslautern, Germany, is one of thirteen partners from six European countries involved in the ExaNoDe (European Exascale Processor Memory Node Design) project.

This project will investigate, develop, integrate and pilot (using a hardware-emulated interconnect) the building blocks for a highly efficient, highly integrated, multi-way, high-performance, heterogeneous compute element aimed towards Exascale computing. It will build on multiple European initiatives for scalable computing, utilizing a low-power architecture and advanced nanotechnologies.

Fraunhofer ITWM develops GASPI/GPI (Global Address Space Programming Interface), an open-source communication library for communication between compute nodes. The API has been designed with a variety of possible memory spaces in mind: GASPI/GPI provides configurable memory segments that map the hardware configuration and make it available to the application.
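
A brief sketch of this segment concept follows; the segment ids, sizes, and the intended mapping to particular memory types are assumptions for illustration, since the actual placement is a property of the GPI-2 implementation on the target platform.

    #include <GASPI.h>

    /* Sketch: one GASPI segment per memory space. Which segment ends
     * up in which physical memory is an illustrative assumption; the
     * mapping is decided by the GPI-2 implementation on the platform. */
    void setup_segments (void)
    {
      /* e.g. segment 0: small segment in fast, node-local memory */
      gaspi_segment_create ( 0, 1 << 20, GASPI_GROUP_ALL
                           , GASPI_BLOCK, GASPI_MEM_UNINITIALIZED );

      /* e.g. segment 1: large segment in capacity memory */
      gaspi_segment_create ( 1, 1 << 28, GASPI_GROUP_ALL
                           , GASPI_BLOCK, GASPI_MEM_UNINITIALIZED );

      /* the application obtains local pointers into the segments and
       * places its data structures accordingly */
      gaspi_pointer_t fast_mem, big_mem;
      gaspi_segment_ptr (0, &fast_mem);
      gaspi_segment_ptr (1, &big_mem);
    }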

In the context of the ExaNoDe hardware, GASPI/GPI will be implemented using the RDMA primitives provided by the hardware and the interconnect. The ExaNoDe UNIMEM architecture will allow the user to run single processes across a larger number of sockets; the PGAS approach of GASPI/GPI matches such an architecture perfectly. Within the scope of the project, we collaborate closely with the hardware developers to provide sufficient functionality and to consider the use of processing power to enable low-latency communication.

Further information on the project on the ExaNoDe Website