Fotios Iliopoulos

I have been a research scientist at Google Research since August 2021. Previously, I was a post-doctoral scholar at the Institute for Advanced Study from September 2019 to August 2020, and at the Princeton TCS group from September 2020 to August 2021. I received my Ph.D. from the University of California, Berkeley, where I was advised by Alistair Sinclair. I am broadly interested in Theoretical Computer Science and Machine Learning, and I am currently focusing mainly on algorithmic techniques for efficient deep learning. I have also worked on the design and analysis of stochastic local search algorithms for finding and sampling solutions of constraint satisfaction problems, and for statistical inference.
Authored Publications
Google Publications
Other Publications
Distilling knowledge from a large teacher model to a lightweight one is a widely successful approach for generating compact, powerful models in the semi-supervised learning setting, where only a limited amount of labeled data is available. In large-scale applications, however, the teacher tends to provide a large number of incorrect soft-labels that impair student performance. The sheer size of the teacher additionally constrains the number of soft-labels that can be queried, due to prohibitive computational and/or financial costs. The difficulty of achieving simultaneous efficiency (i.e., minimizing soft-label queries) and robustness (i.e., training with correct soft-labels) hinders the widespread application of knowledge distillation to many modern tasks. In this paper, we present a parameter-free approach with provable guarantees for querying the soft-labels of points that are simultaneously informative and correctly labeled by the teacher. At the core of our work lies a game-theoretic formulation that explicitly considers the inherent trade-off between the informativeness and correctness of input instances. We establish bounds on the expected performance of our approach that hold even in worst-case distillation instances. We present empirical evaluations on popular benchmarks that demonstrate the improved distillation performance enabled by our work relative to that of state-of-the-art active learning and active distillation methods.
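The querying step described above can be illustrated with a minimal sketch: rank unlabeled points by a combined informativeness/correctness score and send only the top few to the teacher, under a query budget. The linear scoring rule, the `lam` trade-off parameter, and both input arrays below are illustrative assumptions, not the paper's game-theoretic formulation.

```python
import numpy as np

def select_queries(informativeness, correctness_est, budget, lam=1.0):
    """Return indices of the `budget` points to query from the teacher.

    The score is a hypothetical linear trade-off between how informative
    a point is for the student and how likely the teacher is to label it
    correctly; the actual method in the paper derives this trade-off
    from a game-theoretic formulation.
    """
    score = informativeness + lam * correctness_est
    # Highest-scoring points first; keep only `budget` of them.
    return np.argsort(-score)[:budget]

# Toy pool of 4 unlabeled points (both arrays are made-up estimates).
informativeness = np.array([0.9, 0.2, 0.7, 0.4])
correctness_est = np.array([0.5, 0.9, 0.8, 0.3])
idx = select_queries(informativeness, correctness_est, budget=2)
```

With these toy numbers, points 2 and 0 have the highest combined scores (1.5 and 1.4), so they are the ones queried.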
Distillation with unlabeled examples is a popular and powerful method for training deep neural networks in settings where the amount of labeled data is limited: a large "teacher" neural network is trained on the available labeled data and then used to generate labels on an unlabeled dataset (typically much larger in size). These labels are then used to train the smaller "student" model that will actually be deployed. The main drawback of the method is that the teacher often generates inaccurate labels, confusing the student. This paper proposes a principled approach for addressing this issue based on importance reweighting. Our method is hyper-parameter free, efficient, data-agnostic, and simple to implement, and it applies to both "hard" and "soft" distillation. We accompany our results with a theoretical analysis that rigorously justifies the performance of our method in certain settings. Finally, we demonstrate significant improvements on popular academic datasets compared to conventional (unweighted) distillation with unlabeled examples.