AI-Services: NASE – Neural Architecture Search Engine

NASE: Multi-Target Neural Architecture Optimization

The amount of data is continuously increasing, and many companies already leverage the high economic value of the information hidden inside it. Edge devices in particular, such as mobile phones and vehicles with their many sensors, produce more and more data and offer great potential for innovation.

One hurdle is that the data often has to be evaluated in real time on site, yet the computing effort can be enormous, so efficient solutions are essential. Efficiency can mean speed, power consumption, model size, or a combination of these.

However, one of the most difficult and time-consuming tasks is designing a neural network architecture that meets all criteria. Creating such optimal architectures requires senior deep learning scientists with extensive experience.

Efficiency Begins With the Algorithm

For us, efficiency begins with the algorithm. We use state-of-the-art methods of automated neural architecture search, which deliver networks that are efficient in several respects at once. Our search takes the peculiarities of the target platform into account and incorporates them into the network design: the algorithm adapts to the hardware.

We offer you our technology and computing capacity to find an optimal, individual neural architecture that meets your requirements. You provide us with your data sets, and we use our supercomputers and our framework to automatically search for the best model. You can then use the network directly for your tasks. If networks already exist, they can serve as a starting point, and parts of the architecture can be designed by hand.
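As a rough illustration of what such an automated search does, here is a minimal random-search sketch. The search space, the cost proxy, and all function names are hypothetical; in a real search, scoring a candidate would involve training and evaluating it on your data.

```python
import random

# Hypothetical search space (illustrative only, not NASE's actual API):
# each candidate architecture is described by depth, width, and kernel size.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "channels": [8, 16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def sample_architecture(rng):
    """Draw one random candidate from the search space."""
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def estimate_cost(arch):
    """Toy proxy for model size: more layers and channels -> more parameters."""
    return arch["num_layers"] * arch["channels"] * arch["kernel_size"]

def random_search(num_trials, seed=0):
    """Return the cheapest sampled architecture.

    In practice the score would combine accuracy, latency, and memory
    measured on the target hardware, not just this size proxy.
    """
    rng = random.Random(seed)
    best = None
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        if best is None or estimate_cost(arch) < estimate_cost(best):
            best = arch
    return best
```

Real NAS methods replace the random sampling with smarter strategies (evolutionary search, gradient-based relaxations, etc.), but the loop structure — propose, evaluate, keep the best — is the same.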

We Need

  • labeled data
  • your requirements (e.g. accuracy, memory consumption, execution speed)

You Get

  • Pareto-optimal networks with respect to several desired criteria, such as detection rate, false-alarm rate, speed, number of parameters, memory consumption, and number of layers
  • optimal use of the computing resources on your target platform, achieved with methods such as quantization and pruning
  • individual consulting and integration support
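»Pareto-optimal« means that no other candidate is better in every criterion at once. A minimal sketch of such a selection, assuming each model is scored on a few lower-is-better criteria (the example numbers are illustrative):

```python
def dominates(a, b):
    """True if candidate a is at least as good as b in every criterion
    and strictly better in at least one (all criteria: lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Each tuple: (false-alarm rate, latency in ms, parameters in thousands).
models = [(0.15, 12.0, 72), (0.18, 5.0, 6), (0.20, 14.0, 80), (0.15, 5.0, 40)]
front = pareto_front(models)
```

Here the first model is dominated (the last one is as accurate but faster and smaller), so the front contains only the genuine trade-offs; picking among them is then a business decision, not a technical one.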
Figure: Scheme of the Multi-Target Neural Architecture Optimization (© Fraunhofer ITWM)
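Pruning and quantization, mentioned above, can be sketched roughly as follows. The functions are simplified, list-based illustrations of the two ideas — dropping small weights and snapping the rest onto a coarse grid — not the actual NASE implementation:

```python
def prune_by_magnitude(weights, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping keep_ratio of them."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(abs(w) for w in weights)[-k]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize_uniform(weights, num_bits=8):
    """Map weights onto a symmetric uniform grid with 2**(num_bits-1)-1 levels
    per sign, so each weight can be stored in num_bits bits."""
    levels = 2 ** (num_bits - 1) - 1
    scale = max(abs(w) for w in weights) / levels or 1.0
    return [round(w / scale) * scale for w in weights]
```

Zeroed weights can be skipped entirely, and low-bit weights need less memory and cheaper arithmetic, which is why both techniques pay off on embedded targets.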

Our Common Work Process in Three Steps

  1. To make sure we can deliver satisfying results, we first meet to discuss your goals and the available dataset.
  2. If we conclude that we can deliver, we enter the search process in step two and discuss the different results together.
  3. In a final step, we support you in integrating the deep neural architecture into your system.
Custom Hardware Design: Together with our associated partners, we are able to deliver fully customized hardware designs (FPGA, ASIC) optimized for your neural network architecture and requirements. If you are interested, please contact us.

Use Case: Neural Architecture Optimization for ECG Data Analysis on Embedded Platforms

To detect atrial fibrillation in 2-channel ECG data, we manually created a network topology with 72K parameters that meets the criteria of a detection rate above 90 percent and a false-alarm rate below 20 percent. For a network targeting embedded devices, memory consumption and computation should be as low as possible, since this directly translates into lower energy consumption and hence longer battery lifetimes. With the help of the Fraunhofer automated topology search framework, we reduced the number of parameters within our design goals to about 6K – a 12-fold reduction.
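Parameter figures such as 72K and 6K result from summing the weights of each layer. The sketch below shows this arithmetic for a hypothetical 1-D CNN; the layer shapes are illustrative and not the actual ITWM topology:

```python
def conv1d_params(in_ch, out_ch, kernel):
    """Weights plus one bias per output channel of a 1-D convolution."""
    return in_ch * out_ch * kernel + out_ch

# Hypothetical layer shapes for a 2-channel ECG input (illustrative only):
# a hand-sized network vs. a slimmed-down variant with 4x fewer channels.
wide = conv1d_params(2, 32, 7) + conv1d_params(32, 64, 5) + conv1d_params(64, 64, 3)
slim = conv1d_params(2, 8, 7) + conv1d_params(8, 16, 5) + conv1d_params(16, 16, 3)

# Conv parameters scale with in_ch * out_ch, so shrinking every width by 4x
# cuts the count by roughly an order of magnitude.
reduction = wide / slim
```

This is why architecture search pays off: small, coordinated changes to layer widths multiply through the network, and the search can find which widths can shrink without violating the accuracy constraints.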

Statement by Dominik Loroch about NASE

Video made at the trade fair Embedded World 2024

At the »Embedded World« trade fair in Nuremberg, Dominik Loroch from Fraunhofer ITWM presents our work at the joint Fraunhofer booth (Hall 4, Booth 422) from April 9 to 11, 2024.