Next Generation Computing

Next Generation Computing Is Based on Three Pillars

Computers are an indispensable part of everyday life, and the digital transformation is constantly generating new business models and innovations. However, conventional computing technologies are gradually reaching the limits of their speed, performance and energy efficiency. Time for Next Generation Computing! Fraunhofer is driving the development of hardware and technologies for the next generation and is setting priorities in trusted computing, neuromorphic computing and quantum computing.

Interview With Dr. Jens Krüger, Consultant for Next Generation Computing at Fraunhofer ITWM

Dr. Jens Krüger from our »High Performance Computing« department is a Fraunhofer consultant for the strategic research field »Next Generation Computing«. He ventures a look into the future and describes which computing technologies will shape the way we work and perhaps even our everyday lives.

What will the next generation of computing be based on?

The next generation of computers will be diverse. It rests on three different pillars: the first is classical architectures as we know them today, but specialized for particular fields of application; the second is analog and neuromorphic computers, which function in much the same way as our brain; and the third is quantum computers. Here, too, Fraunhofer is at the forefront and has been operating an IBM Quantum System One near Stuttgart since June 2021.
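
To give a flavor of the third pillar: quantum programs are typically expressed as circuits. The following minimal sketch (illustrative code, not from Fraunhofer) uses IBM's open-source Qiskit SDK – the toolchain used to program systems such as the IBM Quantum System One – to prepare an entangled two-qubit state and simulate it locally:

```python
# Minimal sketch: preparing a Bell state, the "hello world" of quantum
# computing. Simulated locally here; the same circuit could be submitted
# to real quantum hardware.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

circuit = QuantumCircuit(2)
circuit.h(0)      # put qubit 0 into an equal superposition
circuit.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(circuit)
print(state.probabilities_dict())  # ≈ {'00': 0.5, '11': 0.5}
```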

What challenges can we tackle with the next generation of computers?

Digitization presents us with major challenges, for example in the areas of health, mobility and the energy transition – more and more data has to be processed ever faster. One approach to solving this is neuromorphic computing, in which computers imitate the human brain. Our brain processes information extremely efficiently, is very good at pattern recognition and at the same time consumes very little energy. We want to imitate this system of neurons and synapses by processing data directly within the network – not after it has been transported from memory to the processing unit, but while it is being forwarded. A major advantage is the energy saving, because only those neurons of a network that are actually needed are activated.
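
This event-driven principle can be illustrated with a toy sketch in plain Python (all sizes, weights and thresholds here are invented for illustration – this is not a neuromorphic hardware implementation). Only neurons that actually receive a spike perform any work; idle neurons cost nothing:

```python
import numpy as np

# Toy illustration of event-driven, neuromorphic-style processing.
rng = np.random.default_rng(0)
n_neurons = 8
# Sparse synapses: each neuron feeds only a few downstream neurons.
synapses = {src: {dst: rng.uniform(0.3, 0.9)
                  for dst in rng.choice(n_neurons, size=3, replace=False)}
            for src in range(n_neurons)}
threshold = 1.0
potential = np.zeros(n_neurons)  # membrane potential per neuron

def propagate(spikes):
    """Deliver spikes event by event; only targeted neurons do any work."""
    fired = []
    for src in spikes:
        for dst, weight in synapses[src].items():
            potential[dst] += weight          # integrate the incoming spike
            if potential[dst] >= threshold:   # threshold crossed: fire ...
                potential[dst] = 0.0          # ... and reset
                fired.append(dst)
    return fired

# Inject one input spike and follow a few waves of activity.
wave = [0]
for step in range(5):
    if not wave:
        break
    print("step", step, "firing:", wave)
    wave = propagate(wave)
```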

On an IBM Quantum System One, industry and research can now develop application-oriented quantum software under German law, test it and build up their expertise.
© IBM Research

What are the distinguishing features of high-efficiency trusted computing?

At the core is trustworthy microelectronics, i.e. hardware that is protected against hacker attacks on infrastructures, for example, or that prevents data communications from being decrypted by unauthorized parties. These microprocessors are also to be developed in Europe in order to become more independent of global partners and to retain access to this key technology even in times of supply bottlenecks such as those we are currently experiencing.
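
As a software-level illustration of what protected data communication means, the sketch below uses authenticated encryption (AES-GCM) from the widely used Python `cryptography` package; a trusted hardware platform would additionally keep the key inside a secure element rather than in program memory:

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in trusted hardware: never leaves the chip
nonce = os.urandom(12)                     # must be unique per message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"sensor reading: 42", b"header")
print(aesgcm.decrypt(nonce, ciphertext, b"header"))  # b'sensor reading: 42'

# A manipulated message is rejected instead of being silently decrypted.
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])
try:
    aesgcm.decrypt(nonce, tampered, b"header")
except InvalidTag:
    print("tampering detected, message rejected")
```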


How will classic processors develop?

There will be more diversity – many different types of chip architectures as well as new market players. One example of a new generation of European processors is EPI, the European Processor Initiative, in which 28 partners from ten European countries are jointly developing the first European high-performance computing processors and accelerator units. The EPI processors are designed to run simulation applications such as weather forecasts and fluid-dynamics simulations efficiently. We are focusing on the development of an energy-efficient simulation accelerator called STX, which will be deployed in a European exascale system.
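
Such simulation workloads are typically dominated by stencil computations. The simplified 2-D heat-diffusion kernel below (illustrative NumPy code, not EPI or STX software) shows the memory-bound neighbor-access pattern that simulation accelerators of this kind are built to speed up:

```python
import numpy as np

# Illustrative sketch (not EPI/STX code): a 2-D heat-diffusion stencil,
# representative of the kernels behind weather and fluid-dynamics codes.
def diffuse(field, alpha=0.1, steps=100):
    """Explicit finite-difference update; each cell mixes with its neighbors."""
    f = field.copy()
    for _ in range(steps):
        f[1:-1, 1:-1] += alpha * (
            f[:-2, 1:-1] + f[2:, 1:-1]      # neighbors above / below
            + f[1:-1, :-2] + f[1:-1, 2:]    # neighbors left / right
            - 4.0 * f[1:-1, 1:-1]           # center cell
        )
    return f

grid = np.zeros((64, 64))
grid[32, 32] = 100.0          # a single hot spot ...
print(diffuse(grid).max())    # ... spreads out and flattens
```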

What is NASE – the Neural Architecture Search Engine – all about?

Data volumes are constantly increasing, and many companies are already aware of the high economic value of the information hidden within them. Edge devices in particular – such as cell phones and vehicles with their large numbers of sensors – produce more and more data that offers great potential for innovation. One hurdle is that the data often has to be analyzed in real time on site, yet the computing effort required for this is usually enormous. Efficient solutions are therefore needed, where efficiency can mean speed, power consumption, model size, and more.

One of the most difficult and time-consuming tasks is designing an optimal neural network architecture that meets all of these criteria; creating such architectures normally requires scientists with a wealth of experience. For us, efficiency starts with the algorithm: we use state-of-the-art methods of automatic neural architecture search, which deliver networks that are efficient with respect to several criteria at once.
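
The basic idea behind automatic architecture search can be sketched in a few lines. The toy random search below is not the NASE algorithm – its scoring function is an invented stand-in – but it shows the principle: sample candidate architectures, score them against the efficiency criteria, keep the best. In a real system, each candidate's score would come from training and measuring it on the target hardware:

```python
import random

# Toy random architecture search (NOT the NASE implementation).
random.seed(0)

def parameter_count(widths, n_inputs=64, n_outputs=10):
    """Parameters of a fully connected net with the given hidden widths."""
    dims = [n_inputs, *widths, n_outputs]
    return sum(a * b + b for a, b in zip(dims, dims[1:]))  # weights + biases

def score(widths):
    capacity = sum(widths)                      # crude quality proxy
    size_penalty = parameter_count(widths) / 1e4
    return capacity / (1.0 + size_penalty)      # "quality per model size"

def sample_architecture():
    depth = random.randint(1, 4)
    return [random.choice([16, 32, 64, 128]) for _ in range(depth)]

candidates = [sample_architecture() for _ in range(200)]
best = max(candidates, key=score)
print("best hidden widths:", best, "| parameters:", parameter_count(best))
```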

We support you in the design and integration of your optimal, individual neural network. You can find all the information here: AI-Services: NASE – Neural Architecture Search Engine

Lecture by Dr. Jens Krüger on Next Generation Computing


Due to climate change and the steadily growing world population, efficiency improvements are essential in all areas of our society. Digitization is an important part of this process. But how do we manage to process ever-increasing amounts of data with ever more complex algorithms without digitization itself contributing to the problem rather than the solution through excessive energy consumption? This online lecture, held at Fraunhofer ITWM, presents promising computing technologies and trends from a technical perspective that could help to address these challenges. [Only available in German]