Simulation of Deep Learning Models

Project TensorQuant

Machine learning methods are increasingly being used in industry and the service sector. Artificial neural networks, i.e. Deep Learning, in particular have a strong impact on the development of intelligent systems. Research continuously produces new Deep Learning methods, opening up a wide range of practical application scenarios for these algorithms.

However, a significant technological hurdle on the way to such applications in production is the enormous computational effort required to compute and evaluate Deep Learning models.

[Image: Simulation of the well-known GoogleNet model. The concrete choice of number representation has a considerable influence on the performance of Deep Learning applications, which is difficult to estimate in advance without simulation in TensorQuant. © Fraunhofer ITWM]

[Image: TensorQuant allows the automatic simulation of given TensorFlow models with arbitrary number representations for individual tensor operations. © Fraunhofer ITWM]

[Image: Evaluation of the well-known ResNet-50 model. Here, too, the choice of number representation has a considerable influence on performance and is difficult to estimate without simulation in TensorQuant. © Fraunhofer ITWM]

This explains why the development of specialized Deep Learning hardware has recently come into focus. In the future, new chip and memory architectures will provide high-performance hardware components that save energy and, at the same time, extend the use of Deep Learning to areas such as autonomous vehicles, mobile phones, and integrated production controls.


Learning Does Not Require High Precision in Numerical Processing

We exploit a largely mathematical property of Deep Learning: training and evaluating models can be reduced to the numerical computation of a small number of tensor-algebra operations (for example, matrix multiplications). In addition, these tensor computations tolerate far less numerical precision than is typically required for physical simulations. Compared to general-purpose processing units such as CPUs and GPUs, these properties enable highly efficient hardware implementations.
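
As a rough illustration of this tolerance, the following sketch (plain NumPy, not part of TensorQuant; the bit widths are arbitrary example values) rounds the operands and the result of a matrix multiplication to a fixed-point grid and compares the outcome with the float32 reference:

import numpy as np

def quantize_fixed_point(x, integer_bits=8, fractional_bits=8):
    """Round x to a signed fixed-point grid with the given bit widths."""
    scale = 2.0 ** fractional_bits
    max_val = 2.0 ** integer_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, -max_val, max_val)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)

# Reference result in full float32 precision.
exact = a @ b

# Same matrix multiplication with inputs and output rounded to fixed point.
approx = quantize_fixed_point(quantize_fixed_point(a) @ quantize_fixed_point(b))

rel_error = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative error of the low-precision result: {rel_error:.2e}")

For many Deep Learning workloads, errors of this magnitude barely affect predictive accuracy, whereas a physical simulation would typically not tolerate them.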


TensorQuant Allows the Simulation of Machine Learning Hardware

In the development of Deep Learning applications on specialized hardware, difficulties arise because the minimum requirements for computational precision vary significantly between individual models. As a result, it is difficult to optimize Deep Learning models and hardware simultaneously for computational performance, power consumption, and predictive accuracy. Our TensorQuant software lets developers identify critical tensor operations and emulate Deep Learning models with arbitrary number representations and computational precision, which in turn accelerates development. TensorQuant has already been used in collaborative research projects with the automobile industry.
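
To give an idea of this kind of emulation, the sketch below (generic NumPy code, not TensorQuant's actual interface; layer names and the quantizer are hypothetical) wraps the tensor operations of a single layer in a fixed-point quantizer and measures how far the simulated model deviates from the float32 reference:

import numpy as np

def fixed_point(x, fractional_bits):
    """Simulate a fixed-point number representation by rounding to a grid."""
    scale = 2.0 ** fractional_bits
    return np.round(x * scale) / scale

def dense_layer(x, weights, quantizer=None):
    """Plain dense layer; if a quantizer is given, its tensor operations
    are emulated in the reduced number representation."""
    if quantizer is None:
        return np.maximum(x @ weights, 0.0)           # float32 reference
    y = quantizer(quantizer(x) @ quantizer(weights))  # quantized operands and result
    return np.maximum(y, 0.0)

# Hypothetical two-layer model: compare a float32 run against a run in which
# only the first layer is emulated with 8 fractional bits.
rng = np.random.default_rng(1)
x = rng.standard_normal((1, 32))
w1, w2 = rng.standard_normal((32, 64)), rng.standard_normal((64, 10))

reference = dense_layer(dense_layer(x, w1), w2)
simulated = dense_layer(dense_layer(x, w1, quantizer=lambda t: fixed_point(t, 8)), w2)

print("max deviation caused by the quantized layer:",
      np.max(np.abs(reference - simulated)))

Repeating such a comparison per operation and per bit width is what makes it possible to locate the critical tensor operations of a model before committing to a hardware design.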