Google Research

Deep Bregman Divergence for Contrastive Learning of Visual Representations

arXiv (2021)


Deep Bregman divergences measure the divergence between data points using neural networks, going beyond Euclidean distance and capturing divergence over distributions. In this paper, we propose deep Bregman divergences for contrastive learning of visual representations, where we aim to enhance the contrastive loss used in self-supervised learning by training an additional network based on functional Bregman divergence. In contrast to conventional contrastive learning methods, which are based solely on divergences between single points, our framework can capture the divergence between distributions, which improves the quality of the learned representations. By combining the conventional contrastive loss with the proposed contrastive divergence loss, we considerably outperform previous methods for self-supervised and semi-supervised learning on multiple classification and object detection tasks and datasets.
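As background for the functional divergence the abstract refers to: a classical (pointwise) Bregman divergence is defined by a strictly convex generator φ as D_φ(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩. The sketch below is illustrative only (the function names are ours, not the paper's) and shows the well-known special case where φ(x) = ‖x‖² recovers the squared Euclidean distance; the paper's contribution is to learn the generator with a neural network instead.

```python
import numpy as np

def bregman_divergence(phi, grad_phi, x, y):
    """Pointwise Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# Special case: phi(x) = ||x||^2 yields the squared Euclidean distance.
sq_norm = lambda v: np.dot(v, v)
grad_sq_norm = lambda v: 2.0 * v

x = np.array([1.0, 2.0])
y = np.array([0.0, 1.0])
d = bregman_divergence(sq_norm, grad_sq_norm, x, y)
# d equals ||x - y||^2 = 2.0
```

Other choices of φ give other familiar divergences, e.g. the negative entropy generator yields the (generalized) KL divergence, which is why a learned φ can express divergences richer than Euclidean distance.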
