Deep Learning Seminar  /  November 12, 2020, 10:00 – 11:00

My Latest Papers at ICPR20 and ACCV20

Speaker: Kalun Ho (Fraunhofer ITWM)

Abstract:

In this talk, I will give an overview of my latest accepted papers: »Learning Embeddings for Image Clustering: An Empirical Study of Triplet Loss Approaches« (ICPR20) and »A Two-Stage Minimum Cost Multicut Approach to Self-Supervised Multiple Person Tracking« (ACCV20).

Learning Embeddings for Image Clustering: An Empirical Study of Triplet Loss Approaches

We evaluate two image clustering objectives, k-means clustering and correlation clustering, in the context of feature space embeddings induced by the Triplet Loss. Specifically, we train a CNN to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation that shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet Loss formulations for k-means and correlation clustering on the CIFAR-10 image classification dataset.
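
As a point of reference, below is a minimal sketch of the standard margin-based Triplet Loss; the two popular variants studied in the paper and the newly proposed formulation are not reproduced here, and the margin value and squared Euclidean distance are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Standard margin-based Triplet Loss: push the anchor-negative
    # distance to exceed the anchor-positive distance by `margin`.
    # Inputs are embedding batches of shape (batch, dim); the margin
    # and squared Euclidean distance are illustrative choices.
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Example with random stand-ins for CNN embeddings.
a, p, n = (torch.randn(32, 128) for _ in range(3))
loss = triplet_loss(a, p, n)
```

Embeddings trained this way can then be handed to either clustering objective, which is what allows the two objectives to be compared on equal footing.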

A Two-Stage Minimum Cost Multicut Approach to Self-Supervised Multiple Person Tracking

Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require domain knowledge or supervision to correctly associate detections into tracks. We present a self-supervised MOT approach based on visual features and minimum cost lifted multicuts.
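
For intuition, the sketch below shows a greedy agglomerative heuristic for the plain (unlifted) minimum cost multicut objective: edges carry signed costs, and clusters are merged as long as doing so removes positive cost from the cut. This is an illustrative toy solver under assumed sign conventions, not the lifted multicut machinery used in the paper.

```python
from collections import defaultdict

def greedy_multicut(num_nodes, edges):
    # Greedy agglomerative heuristic for minimum cost multicut.
    # `edges` holds (u, v, cost) triples: a positive cost rewards
    # keeping u and v in the same cluster, a negative cost rewards
    # cutting the edge. Illustrative sketch only.
    clusters = {i: {i} for i in range(num_nodes)}   # cluster id -> member nodes
    between = defaultdict(float)                    # cluster pair -> summed cost
    for u, v, c in edges:
        if u != v:
            between[frozenset((u, v))] += c

    while between:
        pair, gain = max(between.items(), key=lambda kv: kv[1])
        if gain <= 0:
            break                                   # no merge lowers the cut cost
        a, b = tuple(pair)
        clusters[a] |= clusters.pop(b)              # merge cluster b into a
        del between[pair]
        for other in list(between):                 # redirect b's edges to a
            if b in other:
                (c_other,) = tuple(other - {b})
                w = between.pop(other)
                if c_other != a:
                    between[frozenset((a, c_other))] += w
    return list(clusters.values())

# Toy affinity graph over five detections.
edges = [(0, 1, 2.0), (1, 2, 1.5), (2, 3, -1.0), (3, 4, 3.0)]
print(greedy_multicut(5, edges))  # e.g. [{0, 1, 2}, {3, 4}]
```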

Our method is based on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the appearance invariances required for the tracking task at hand and to train an auto-encoder that generates suitable latent representations. The resulting latent representations can then serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted.
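
To make the kind of cue concrete: one cheap spatio-temporal signal of the sort described above is bounding-box overlap between consecutive frames. The sketch below mines positive and negative detection pairs from two neighboring frames without any annotations; the IoU threshold and pair-mining scheme are assumptions for illustration, not the paper's exact procedure.

```python
def iou(box_a, box_b):
    # Intersection over union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def mine_pairs(dets_t, dets_t1, pos_thresh=0.7):
    # Label detection pairs from frames t and t+1 without annotations:
    # strongly overlapping pairs are treated as the same person
    # (positives for embedding training), the rest as negatives.
    # The threshold is an illustrative assumption.
    positives, negatives = [], []
    for i, a in enumerate(dets_t):
        for j, b in enumerate(dets_t1):
            (positives if iou(a, b) >= pos_thresh else negatives).append((i, j))
    return positives, negatives
```

Pairs mined this way can supervise the auto-encoder's latent space in place of human labels, which is what makes the overall pipeline self-supervised.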

Despite being trained without the provided annotations, our model achieves competitive results on the challenging MOT benchmark for pedestrian tracking.