- Adriano Cardace
- Alessio Tonioni
- Luca De Luigi
- Luigi Di Stefano
- Pierluigi Zama Ramirez
- Samuele Salti
Abstract
The availability of labelled data is the major obstacle to deploying deep learning algorithms to solve computer vision tasks in new domains. Recent works have shown that it is possible to leverage correlations between features learned by neural networks for different tasks on different domains to reduce the need for full supervision. This is achieved by learning to transfer features across both tasks and domains. In this work, we show that constraining the structure of the source and target feature spaces is the key to improving the performance of such a transfer framework. In particular, we demonstrate the benefits of: learning features able to capture fine-grained details of the image and aligning the spaces across tasks by means of an auxiliary task; and aligning the feature spaces across domains by means of a novel norm discrepancy loss. We achieve state-of-the-art results in synthetic-to-real adaptation scenarios for this novel setting.
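The abstract does not define the norm discrepancy loss; the sketch below is a hypothetical illustration of one plausible form, in which the loss penalizes the gap between the average L2 norms of source- and target-domain features. The function name and the exact formulation are assumptions, not the paper's actual method.

```python
import numpy as np

def norm_discrepancy_loss(source_feats, target_feats):
    """Hypothetical norm discrepancy loss: penalize the difference between
    the mean L2 feature norms of the source and target domains.

    source_feats, target_feats: arrays of shape (batch, feature_dim).
    """
    # Average L2 norm of the feature vectors in each domain
    src_norm = np.linalg.norm(source_feats, axis=1).mean()
    tgt_norm = np.linalg.norm(target_feats, axis=1).mean()
    # Squared gap between the two average norms
    return (src_norm - tgt_norm) ** 2

# Example: features drawn from the same distribution incur (near-)zero loss,
# while rescaled target features incur a positive penalty.
rng = np.random.default_rng(0)
src = rng.normal(size=(32, 64))
tgt = src * 3.0  # simulate a domain with systematically larger feature norms
print(norm_discrepancy_loss(src, src))  # 0.0
print(norm_discrepancy_loss(src, tgt) > 0)  # True
```

Minimizing such a term during training would push the network to produce features with comparable magnitudes in both domains, one simple way to align feature spaces across domains.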