Computing Power for Deep Learning

Deep Learning approaches, such as the object segmentation algorithm shown in the picture from our project partner at Heidelberg University (Kirillov et al.), require enormous computing power. With the help of HP-DLF, large models can be trained faster.

BMBF Project »High Performance Deep Learning Framework«

The goal of the BMBF project »High Performance Deep Learning Framework« (HP-DLF) is to provide researchers and developers in the »Deep Learning« domain with easy access to current and future high-performance computing systems.

How does an autonomously driving car recognize pedestrians and other road users? How does speech recognition for everyday use work? How does an Internet search engine recognize people in photos? The answer is: with machine learning algorithms. In recent years, considerable progress has been made in the field of machine learning. A significant part of this success is due to the further development of so-called »Deep Learning« algorithms.

Scheme »High Performance Deep Learning Framework«.
© Fraunhofer ITWM


In the course of this development, ever larger and more complex artificial neural networks are being designed and trained. This approach, which has proven successful for many practical applications, requires enormous computing effort and a great deal of training data. The further development of »Deep Learning« therefore depends on methods and infrastructures that keep the training of increasingly complex neural networks computationally feasible in the future.

Goals of the Project

  • Support a new, large user group in adopting HPC, with innovative tools available right from the start.
  • Hide the complexity of the hardware from users and guide them towards a highly scalable and energy-efficient solution.
  • Not only make existing HPC methods accessible to new users, but also gain insight into the system requirements of an HPC application that will be highly important in the future.

To this end, a new software framework is to be developed that automates the highly complex parallelization of the training of large neural networks on heterogeneous computing clusters.
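
As a rough illustration of what such automation has to coordinate, the following minimal Python sketch (not project code, and greatly simplified) shows synchronous data-parallel training: each worker computes gradients on its own data shard, and the averaged gradient updates a shared set of model parameters. HP-DLF aims to generate and schedule exactly this kind of distribution across many nodes automatically, so that the user never has to write it.

```python
# Minimal sketch (not HP-DLF code): synchronous data-parallel training,
# the kind of coordination the framework is meant to automate.
# Each "worker" computes gradients on its own data shard; the averaged
# gradient is applied to a shared copy of the model parameters.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_features = 4, 8
w = np.zeros(n_features)                        # shared model parameters
shards = [(rng.normal(size=(64, n_features)),   # per-worker inputs
           rng.normal(size=64))                 # per-worker targets
          for _ in range(n_workers)]

def local_gradient(w, X, y):
    """Gradient of a least-squares loss on one worker's data shard."""
    return X.T @ (X @ w - y) / len(y)

for step in range(100):
    # In a distributed setting this map/reduce pattern runs across many
    # compute nodes; here it runs sequentially for illustration only.
    grads = [local_gradient(w, X, y) for X, y in shards]
    w -= 0.05 * np.mean(grads, axis=0)          # synchronous gradient averaging
```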

 

The Software Framework Focuses on

  • Scalability and energy efficiency
  • High portability
  • User transparency

The training of networks designed in existing frameworks is to scale to several hundred compute nodes without any additional effort on the user's part.

 

GPI-Space as the Basis

The basis is the generic parallelization framework GPI-Space, developed at our institute, which uses Petri nets to describe data and task parallelism effectively. Within the project, its programming model is being further developed and generically linked to the leading Deep Learning frameworks (e.g. Caffe and TensorFlow) by means of a domain-specific compiler.
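
To make the Petri-net idea concrete, the following hypothetical Python sketch (not GPI-Space's own workflow description language, only an illustration of the principle) models a tiny training workflow: places hold tokens, and a transition may fire as soon as all of its input places are marked. Because several transitions can be enabled at the same time, the net itself exposes the task and data parallelism that a scheduler can exploit.

```python
# Hypothetical sketch of the Petri-net principle behind GPI-Space
# (the real framework uses its own workflow description, not this code).
# Places hold tokens (here: data batches, gradients, parameters); a
# transition may fire whenever all of its input places hold a token.
from collections import Counter

places = Counter({"batch_ready": 2, "params": 1})   # initial marking
transitions = {
    "forward_backward": (["batch_ready", "params"], ["gradient", "params"]),
    "update":           (["gradient", "params"],    ["params"]),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(places[p] >= Counter(inputs)[p] for p in inputs)

def fire(name):
    inputs, outputs = transitions[name]
    for p in inputs:
        places[p] -= 1
    for p in outputs:
        places[p] += 1

# Fire transitions until no transition is enabled any more.
while any(enabled(t) for t in transitions):
    fire(next(t for t in transitions if enabled(t)))

print(places)   # all batches consumed and both gradients applied
```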

 

Project Partners

 

Project Duration

The work in the project is scheduled for a period of three years (November 1, 2017 to October 31, 2020).