Machine and Deep Learning Seminar  /  April 28, 2022, 10:00 – 11:00

Aliasing Coincides With CNNs Vulnerability Towards Adversarial Attacks

Speaker: Julia Grabinski (Fraunhofer ITWM, Division »High Performance Computing«)

Abstract:

Many commonly well-performing convolutional neural network (CNN) models have been shown to be susceptible to input data perturbations, indicating low model robustness. While much effort has been invested in designing more robust networks and training schedules, research analyzing the sources of a model's vulnerability remains scarce.

In this seminar, we analyze adversarially trained, robust models with respect to a particularly suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and therefore suffer significantly less from aliasing than baseline models.
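
To make the aliasing effect concrete, here is a minimal, illustrative sketch (not code from the talk; the signal, filter, and all names are assumptions) of the signal-processing phenomenon behind the abstract: plain stride-2 downsampling, as performed by strided convolutions or pooling in a CNN, folds frequencies above the new Nyquist limit back into the spectrum, while a simple low-pass ("blur") filter applied beforehand strongly attenuates the aliased component:

```python
import numpy as np

n = 256
t = np.arange(n)
f0 = 0.375  # cycles/sample: above the Nyquist limit (0.25) of a stride-2 grid
x = np.sin(2 * np.pi * f0 * t)

naive = x[::2]                            # strided downsampling, no filter
kernel = np.array([0.25, 0.5, 0.25])      # binomial low-pass ("blur") filter
antialiased = np.convolve(x, kernel, mode="same")[::2]

def amplitude_at(y, f):
    """Amplitude of the frequency bin closest to f (cycles/sample)."""
    spectrum = np.abs(np.fft.rfft(y)) / (len(y) / 2)
    return spectrum[round(f * len(y))]

# After stride-2 downsampling, f0 appears at 2*f0 = 0.75 cycles/sample,
# which aliases back to 1 - 0.75 = 0.25 cycles/sample.
alias_f = 1.0 - 2 * f0
print(f"aliased component, naive:        {amplitude_at(naive, alias_f):.3f}")
print(f"aliased component, anti-aliased: {amplitude_at(antialiased, alias_f):.3f}")
```

In this toy setting, the naive downsampling preserves the full amplitude of the folded-back frequency, whereas blurring first shrinks it to a small fraction; the seminar's claim is analogous: robust models behave more like the anti-aliased path in their downsampling layers.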