Generative Machine Learning Methods in Decision Support

Innovative Approaches for Better Decisions

Modeling Approaches with VAEs for the Detection of Rare Events

The Variational Autoencoder (VAE) is a flexible generative model consisting of an encoder and a decoder. The encoder compresses input data into a latent space, while the decoder uses these latent variables to reconstruct the original data. The training objective is to reconstruct the input data as faithfully as possible while keeping the learned latent distribution close to a chosen prior.
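The following minimal sketch illustrates this encoder-decoder structure and the combined training objective. It assumes PyTorch; the layer sizes, dimensions, and the names VAE and vae_loss are illustrative choices, not the implementation used in our research.

```python
# Minimal VAE sketch in PyTorch (illustrative; architecture and dimensions are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=8):
        super().__init__()
        # Encoder: compresses the input into mean and log-variance of the latent distribution.
        self.enc = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        # Decoder: reconstructs the input from a latent sample.
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, input_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Draw z ~ N(mu, sigma^2) in a differentiable way (reparameterization trick).
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior (negative ELBO).
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```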

The VAE thus offers several advantages for anomaly detection. Among other things, our research focuses on the interpretability of the latent space representation. The VAE computes a compressed and structured representation of the input data. This makes it possible to identify anomalies as data points whose latent representations deviate significantly from the distribution expected for normal data. Analyzing the latent structure provides valuable information for interpreting the underlying characteristics.
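One simple way to turn this idea into a score is sketched below: the per-sample KL divergence between the encoded latent distribution and the standard normal prior serves as an anomaly indicator. The function name and the use of the prior as the reference distribution are illustrative assumptions building on the VAE sketch above.

```python
# Sketch of a latent-space anomaly score: how strongly an input's latent distribution
# deviates from the standard normal prior. Assumes a trained `model` of the VAE class above.
import torch

@torch.no_grad()
def latent_anomaly_score(model, x):
    mu, logvar = model.encode(x)
    # Per-sample KL( q(z|x) || N(0, I) ); large values mark latent representations
    # that deviate strongly from the distribution expected for normal data.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

# Usage: flag the samples with the highest scores as anomaly candidates.
# scores = latent_anomaly_score(model, batch)
# candidates = torch.topk(scores, k=10).indices
```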

We are researching different modeling approaches with different distribution assumptions and analyzing their significance for anomaly detection – especially with regard to very rare events.
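As a small illustration of what a distribution assumption changes in practice, the sketch below swaps the decoder likelihood and, with it, the reconstruction term of the loss. The function and the two options shown are assumptions for illustration, not an exhaustive list of the approaches we study.

```python
# Sketch: different likelihood assumptions for the decoder lead to different reconstruction terms.
import torch.nn.functional as F

def reconstruction_loss(x, x_hat, likelihood="gaussian"):
    if likelihood == "gaussian":
        # Continuous data with a fixed-variance Gaussian decoder -> squared error.
        return F.mse_loss(x_hat, x, reduction="sum")
    if likelihood == "bernoulli":
        # Binary or [0, 1]-scaled data with a Bernoulli decoder -> binary cross-entropy
        # (here x_hat is interpreted as logits).
        return F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    raise ValueError(f"unknown likelihood: {likelihood}")
```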

Challenges and Innovative Approaches in Anomaly Detection 

A central challenge in the detection of anomalies is that very different types of anomalies can occur – including previously unknown ones. The application is often an explorative process in which experience is continuously gathered. As anomalies are usually very rare events, it is crucial to incorporate existing experience into the modeling process. We are researching partially supervised training approaches that allow existing experience to be integrated into the training in a targeted way.
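A hedged sketch of one such idea follows: the standard VAE objective is applied to unlabeled (predominantly normal) data, while a small number of labeled anomalies contribute a penalty that discourages the model from reconstructing them well. The margin formulation, weighting, and function name are illustrative assumptions, not our published method.

```python
# Sketch of a partially supervised objective building on the VAE sketch above.
# `is_anomaly` is a boolean tensor marking the few labeled anomalies in the batch.
import torch
import torch.nn.functional as F

def partially_supervised_loss(model, x, is_anomaly, margin=10.0, weight=1.0):
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x, reduction="none").sum(dim=1)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    elbo_loss = recon + kl

    # Unlabeled / normal samples: ordinary VAE training signal.
    loss = elbo_loss[~is_anomaly].sum()
    if is_anomaly.any():
        # Labeled anomalies: penalize reconstruction errors *below* the margin,
        # i.e. discourage the model from explaining known anomalies well.
        loss = loss + weight * F.relu(margin - recon[is_anomaly]).sum()
    return loss
```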

Importance of Interpretability and Explainability in Machine Learning Methods  

The European Commission's »Regulation on Artificial Intelligence« (AI Act) aims to create a uniform legal framework for the use of artificial intelligence (AI) within the EU. A central concern is the demand for transparency with regard to the functioning of AI systems. Interpretability and explainability are therefore crucial aspects for the modeling of machine learning processes.

Counterfactual explanations answer the question »What would have happened if ...?« and help users understand and analyze alternative scenarios. The VAE offers the possibility of generating new data from the learned latent distributions.

We are researching approaches that not only predict anomalies, but also generate an optimal alternative – a »non-anomalous data point« – as a counterfactual explanation.
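The sketch below illustrates the basic mechanism with a simple latent-space search: starting from the latent code of an anomalous input, it looks for a nearby code whose decoding is both plausible under the prior and close to the original input. The objective terms, weights, and step count are illustrative assumptions, and using proximity to the prior as a proxy for »non-anomalous« is a simplification of the approaches we investigate.

```python
# Sketch of counterfactual generation via latent-space optimization,
# building on the VAE sketch above.
import torch

def counterfactual(model, x_anomalous, steps=200, lr=0.05, proximity_weight=0.1):
    with torch.no_grad():
        mu, _ = model.encode(x_anomalous)
    for p in model.parameters():
        p.requires_grad_(False)  # only the latent code z is optimized in this illustration
    z = mu.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_cf = model.dec(z)
        # Stay plausible under the prior (z near the origin) ...
        prior_term = z.pow(2).sum()
        # ... while remaining close to the original input (minimal change).
        proximity = (x_cf - x_anomalous).pow(2).sum()
        loss = prior_term + proximity_weight * proximity
        loss.backward()
        opt.step()
    # Decoded counterfactual: an alternative, "non-anomalous" version of the input.
    return model.dec(z).detach()
```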