Kartikeya Badola
Kartikeya is a Software Engineer at Google Research India working on Multi-turn Dialogue Evaluations for Large Language Models (LLMs). Previously, he worked on Multilingual Semantic Parsing in collaboration with the Google Assistant team. He completed his undergraduate degree in Electrical Engineering at the Indian Institute of Technology, Delhi, where he worked on Multilingual Distantly Supervised Relation Extraction. He was also an intern at Mila, where he worked on Modular Neural Networks.
Authored Publications
Parameter-Efficient Finetuning for Robust Continual Multilingual Learning
Findings of the Association for Computational Linguistics: ACL 2023
Abstract
We introduce and study the problem of Continual Multilingual Learning (CML), where a previously trained multilingual model is periodically updated using new data arriving in stages. If the new data is present only in a subset of languages, we find that the resulting model shows improved performance only on the languages included in the latest update (and a few closely related languages), while its performance on all the remaining languages degrades significantly. We address this challenge by proposing LAFT-URIEL, a parameter-efficient finetuning strategy which aims to increase the number of languages on which the model improves after an update, while reducing the magnitude of loss in performance for the remaining languages. LAFT-URIEL uses linguistic knowledge to balance overfitting and knowledge sharing across languages, resulting in a 25% increase in the number of languages whose performance improves during an update and a 78% relative decrease in the average magnitude of losses on the remaining languages.
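As a rough illustration of the kind of parameter-efficient, linguistically informed update the abstract describes, the sketch below pairs per-language bottleneck adapters with a similarity-weighted propagation of an update to related languages. It is not the paper's LAFT-URIEL implementation; the module sizes, language codes, and similarity values are hypothetical stand-ins for URIEL-derived distances.

```python
# Illustrative sketch only: per-language adapters plus a similarity-weighted
# sharing rule for a new-language update. Not the paper's implementation.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter that would sit after a frozen transformer layer."""
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation intact.
        return x + self.up(torch.relu(self.down(x)))

# Hypothetical per-language adapters for a model covering three languages.
adapters = {lang: Adapter() for lang in ["en", "hi", "bn"]}

# Hypothetical linguistic similarity of each language to the update language "hi"
# (in practice these would be derived from URIEL-style distance features).
similarity = {"en": 0.2, "hi": 1.0, "bn": 0.7}

def share_update(update_lang: str, scale: float = 0.1) -> None:
    """After finetuning the update language's adapter on the new data,
    softly interpolate related languages' adapters toward it, weighted
    by linguistic similarity, so closely related languages benefit more."""
    with torch.no_grad():
        src = adapters[update_lang]
        for lang, tgt in adapters.items():
            if lang == update_lang:
                continue
            w = scale * similarity[lang]
            for p_src, p_tgt in zip(src.parameters(), tgt.parameters()):
                p_tgt.mul_(1 - w).add_(w * p_src)

share_update("hi")
```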