Gaurav Menghani
Authored Publications
Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better
ACM Computing Surveys (2023)
Abstract
Deep Learning has revolutionized the fields of Computer Vision, Natural Language, Speech, Information Retrieval, and more. However, as Deep Learning models have grown, their parameter counts, latency, and the resources required to train them have all increased significantly.
Consequently, it has become important to focus on the footprint of a model, not just its quality. We present and motivate the problem of efficiency in Deep Learning, followed by a thorough survey of the five core areas of model efficiency and the seminal work in each.
We also present an experiment-based guide for practitioners to optimize their models. We believe this is the first comprehensive survey of the Efficient Deep Learning space. Our hope is that this survey provides the reader with both a mental model and the understanding of the field needed to, first, apply generic efficiency techniques for immediate, sizeable improvements and, second, generate ideas for experimentation that achieve additional gains.
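To make the "generic efficiency techniques" mentioned above concrete, here is a minimal sketch of one such technique, post-training magnitude pruning, using PyTorch's built-in pruning utilities. The toy model, the 50% sparsity level, and the choice of PyTorch are illustrative assumptions, not prescriptions from the survey.

```python
# A minimal sketch of unstructured magnitude pruning applied post-training.
# The model and sparsity level below are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(          # stand-in for a trained model
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 50% smallest-magnitude weights in every Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# The pruned model keeps its architecture but has a sparse weight footprint;
# a short fine-tuning pass is typically used afterwards to recover accuracy.
zeros = sum((m.weight == 0).sum().item() for m in model.modules()
            if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model.modules()
            if isinstance(m, nn.Linear))
print(f"overall weight sparsity: {zeros / total:.2%}")
```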
Abstract
Distilling knowledge from a large teacher model to a lightweight one is a widely successful approach for generating compact, powerful models in the semi-supervised learning setting, where a limited amount of labeled data is available. In large-scale applications, however, the teacher tends to provide a large number of incorrect soft-labels that impair student performance. The sheer size of the teacher additionally constrains the number of soft-labels that can be queried, due to prohibitive computational and/or financial costs. The difficulty of achieving simultaneous efficiency (i.e., minimizing soft-label queries) and robustness (i.e., training with correct soft-labels) hinders the widespread application of knowledge distillation to many modern tasks. In this paper, we present a parameter-free approach with provable guarantees for querying the soft-labels of points that are simultaneously informative and correctly labeled by the teacher. At the core of our work lies a game-theoretic formulation that explicitly considers the inherent trade-off between the informativeness and correctness of input instances. We establish bounds on the expected performance of our approach that hold even in worst-case distillation instances. We present empirical evaluations on popular benchmarks that demonstrate the improved distillation performance enabled by our work relative to state-of-the-art active learning and active distillation methods.
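As an illustration of the selection problem described in this abstract, the sketch below scores unlabeled points by a simple combination of student uncertainty (informativeness) and teacher confidence (a correctness proxy). The scoring rule, the mixing weight `lam`, and the function names are assumptions made for illustration; this is not the paper's game-theoretic algorithm, and it carries none of its guarantees.

```python
# A simplified sketch: choose which points' soft-labels to query from the
# teacher, favoring points that look informative and likely correctly labeled.
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of each row of an (n, classes) probability matrix."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_query_indices(student_probs, teacher_probs, budget, lam=1.0):
    """Return indices of `budget` points to query from the teacher.

    student_probs, teacher_probs: (n, classes) predicted distributions.
    lam: trade-off between informativeness and the correctness proxy.
    """
    informativeness = entropy(student_probs)       # student is uncertain
    correctness_proxy = teacher_probs.max(axis=1)  # teacher is confident
    score = informativeness + lam * correctness_proxy
    return np.argsort(-score)[:budget]

# Example usage with random predictions for 1000 points and 10 classes.
rng = np.random.default_rng(0)
student = rng.dirichlet(np.ones(10), size=1000)
teacher = rng.dirichlet(np.ones(10) * 5, size=1000)
picked = select_query_indices(student, teacher, budget=64)
print(picked[:10])
```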
Weighted distillation with unlabeled examples
Vasilis Kontonis
Cenk Baykal
Khoa Trinh
NeurIPS (2022)
Abstract
Distillation with unlabeled examples is a popular and powerful method for training deep neural networks in settings where the amount of labeled data is limited: a large "teacher" neural network is trained on the available labeled data and then used to generate labels on an unlabeled dataset (typically much larger in size). These labels are then used to train the smaller "student" model that will actually be deployed. The main drawback of the method is that the teacher often generates inaccurate labels, confusing the student. This paper proposes a principled approach for addressing this issue based on importance reweighting. Our method is hyper-parameter-free, efficient, data-agnostic, and simple to implement, and it applies to both "hard" and "soft" distillation. We accompany our results with a theoretical analysis that rigorously justifies the performance of our method in certain settings. Finally, we demonstrate significant improvements on popular academic datasets when compared to conventional (unweighted) distillation with unlabeled examples.
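The sketch below illustrates the overall shape of the training objective described in this abstract: a student trained on teacher-generated soft-labels with per-example importance weights. How those weights are derived is the paper's contribution and is not reproduced here; the weights, function name, and temperature parameter are assumptions for illustration.

```python
# A minimal sketch of weighted distillation on teacher-labeled examples.
# The per-example weights are passed in as given (placeholder values below).
import torch
import torch.nn.functional as F

def weighted_soft_distillation_loss(student_logits, teacher_probs, weights,
                                    temperature=1.0):
    """Per-example-weighted cross-entropy against teacher soft-labels.

    student_logits: (n, classes) raw student outputs.
    teacher_probs:  (n, classes) teacher-generated soft-labels.
    weights:        (n,) non-negative importance weights.
    """
    log_p = F.log_softmax(student_logits / temperature, dim=1)
    per_example = -(teacher_probs * log_p).sum(dim=1)  # soft cross-entropy
    return (weights * per_example).sum() / weights.sum()

# Example usage with toy tensors.
n, c = 8, 5
student_logits = torch.randn(n, c, requires_grad=True)
teacher_probs = F.softmax(torch.randn(n, c), dim=1)
weights = torch.rand(n)  # placeholder weights, not the paper's scheme
loss = weighted_soft_distillation_loss(student_logits, teacher_probs, weights)
loss.backward()
print(loss.item())
```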