Jonas Pfeiffer

I am a Research Scientist at Google Research working on Natural Language Processing. My research focuses on modular representation learning in multi-task, multilingual, and multi-modal contexts, as well as in low-resource scenarios.
Authored Publications
    Modular Deep Learning
    Sebastian Ruder, Ivan Vulić, Edoardo Ponti
    arXiv (2023)
    Abstract: Transfer learning has recently become the dominant paradigm of machine learning. Pre-trained models fine-tuned for downstream tasks achieve better performance with fewer labelled examples. Nonetheless, it remains unclear how to develop models that specialise towards multiple tasks without incurring negative interference and that generalise systematically to non-identically distributed tasks. Modular deep learning has emerged as a promising solution to these challenges. In this framework, units of computation are often implemented as autonomous parameter-efficient modules. Information is conditionally routed to a subset of modules and subsequently aggregated. These properties enable positive transfer and systematic generalisation by separating computation from routing and updating modules locally. We offer a survey of modular architectures, providing a unified view over several threads of research that evolved independently in the scientific literature. Moreover, we explore various additional purposes of modularity, including scaling language models, causal inference and discovery, programme simulation, and hierarchical reinforcement learning. Finally, we report various concrete applications where modularity has been successfully deployed, such as cross-lingual and cross-modal knowledge transfer. More information on modular deep learning is available at www.modulardeeplearning.com/.
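
The abstract's framing of modularity can be made concrete with a small sketch: parameter-efficient modules (here, bottleneck adapters), a learned router that conditionally selects a subset of modules per input, and a weighted aggregation of their outputs. The PyTorch code below is an illustrative sketch under assumed names and hyperparameters (Adapter, RoutedModularLayer, a bottleneck of 16, top-2 routing over 4 modules); it is not the implementation from the paper.

    # Minimal sketch: parameter-efficient modules, conditional routing to a
    # subset, and aggregation of their outputs. Names are illustrative.
    import torch
    import torch.nn as nn


    class Adapter(nn.Module):
        """Parameter-efficient bottleneck module: down-project, nonlinearity, up-project."""
        def __init__(self, d_model: int, d_bottleneck: int = 16):
            super().__init__()
            self.down = nn.Linear(d_model, d_bottleneck)
            self.up = nn.Linear(d_bottleneck, d_model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Residual connection keeps the (frozen) backbone representation intact.
            return x + self.up(torch.relu(self.down(x)))


    class RoutedModularLayer(nn.Module):
        """Routes each input to its top-k modules and aggregates their outputs."""
        def __init__(self, d_model: int, n_modules: int = 4, k: int = 2):
            super().__init__()
            self.experts = nn.ModuleList(Adapter(d_model) for _ in range(n_modules))
            self.router = nn.Linear(d_model, n_modules)  # learned routing scores
            self.k = k

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            scores = self.router(x)                       # (batch, n_modules)
            topk = scores.topk(self.k, dim=-1)
            weights = torch.softmax(topk.values, dim=-1)  # normalise over the selected subset
            out = torch.zeros_like(x)
            for slot in range(self.k):
                idx = topk.indices[:, slot]               # which module each example uses in this slot
                w = weights[:, slot].unsqueeze(-1)
                for m, module in enumerate(self.experts):
                    mask = (idx == m).unsqueeze(-1).float()
                    if mask.any():
                        # Apply the selected module and aggregate by weighted sum.
                        out = out + w * mask * module(x)
            return out


    # Usage: route a batch of hidden states through the modular layer.
    layer = RoutedModularLayer(d_model=64)
    hidden = torch.randn(8, 64)
    print(layer(hidden).shape)  # torch.Size([8, 64])

Routing to only the top-k modules keeps computation sparse, and updating individual adapters locally leaves the rest of the network untouched; these are the properties the abstract credits for positive transfer and systematic generalisation.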