In recent years, Machine Learning (ML) has evolved into one of the most dynamic research areas, with a major impact on our everyday lives now and in the future. Astonishing progress has been made in applying ML algorithms to areas such as speech recognition, automatic image analysis, and scene understanding. Machine Learning enables computers to drive cars autonomously or to learn how to play video games, pushing the frontier towards abilities that were previously exclusive to humans.
This rapid development is accompanied by the constantly increasing complexity of the underlying models. However, significant hurdles remain before many existing machine learning approaches can be used in everyday practice. One of them is the enormous demand for computing power: with current methods, particularly deep learning, it is not unusual for a single training run to require several days of computing time.
In the area of data analysis and machine learning, we work on new algorithms for the efficient distributed execution of learning methods and on their implementation on specialized hardware. The focus of our work is the development of scalable optimization algorithms for the distributed parallelization of large machine learning problems. The basis for this work is the set of HPC components developed at the CC HPC, such as the parallel file system BeeGFS and the programming framework GPI 2.0, which are the first to enable the efficient implementation of new algorithms such as ASGD (Asynchronous Stochastic Gradient Descent).
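To illustrate the core idea behind ASGD, the following Python sketch runs several workers that each compute stochastic gradients on their own data shard and write updates to a shared parameter vector without waiting for one another (a lock-free, Hogwild!-style scheme). It is a minimal single-process illustration under assumed names and toy data, not the GPI-2-based implementation developed at the CC HPC.

```python
import threading
import numpy as np

def make_data(n=1000, d=10, seed=0):
    # Toy linear-regression problem with known ground-truth weights.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

def worker(w, X, y, lr=0.01, steps=2000, batch=32, seed=0):
    # Each worker samples mini-batches from its own shard and applies
    # updates directly to the shared vector w, with no synchronization.
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        idx = rng.integers(0, len(y), size=batch)
        grad = (2.0 / batch) * X[idx].T @ (X[idx] @ w - y[idx])
        w -= lr * grad  # in-place, asynchronous update

X, y = make_data()
w = np.zeros(X.shape[1])  # shared parameters, updated by all workers
shards = np.array_split(np.arange(len(y)), 4)
threads = [threading.Thread(target=worker, args=(w, X[s], y[s]),
                            kwargs={"seed": i})
           for i, s in enumerate(shards)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final training loss:", float(np.mean((X @ w - y) ** 2)))
```

Because workers never block on each other, stale reads and overlapping writes can occur; the practical appeal of ASGD is that, for sparse or well-conditioned problems, convergence is largely preserved while the synchronization overhead of lock-step SGD is removed. In a genuinely distributed setting, the shared vector would live in partitioned global memory (as provided by frameworks like GPI 2.0) rather than in a single process.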