Deep Fusion: Efficient Network Training via Pre-trained Initializations

Hanna Mazzawi
Michael Wunder
Sammy Jerome
ICML (2025)

Abstract

In recent years, deep learning has made remarkable progress across a wide range of domains, with a particularly notable impact on natural language processing tasks. One of the central challenges in training deep neural networks is the large amount of computational resources and time required. In this paper, we present Deep Fusion, an efficient approach to network training that leverages pre-trained initializations of smaller networks. We show that Deep Fusion accelerates training, reduces computational requirements, and improves generalization performance on a variety of NLP tasks and T5 model sizes. Our experiments demonstrate that Deep Fusion is a practical and effective approach to reducing training time and resource consumption while matching, or even surpassing, the performance of traditional training methods.
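To make the core idea concrete, below is a minimal sketch of one plausible way a larger layer could be initialized from smaller pre-trained networks. The block-diagonal fusion rule, the function name fuse_linear_layers, and the toy dimensions are illustrative assumptions for exposition, not the paper's actual fusion operator.

```python
import numpy as np

def fuse_linear_layers(w_a: np.ndarray, w_b: np.ndarray) -> np.ndarray:
    """Initialize a wider linear layer from two smaller pre-trained ones.

    Places the two weight matrices on the block diagonal of the larger
    matrix, so each sub-network's learned features are preserved at
    initialization. (Hypothetical fusion rule for illustration only;
    the paper's actual operator may differ.)
    """
    d_in = w_a.shape[0] + w_b.shape[0]
    d_out = w_a.shape[1] + w_b.shape[1]
    w = np.zeros((d_in, d_out), dtype=w_a.dtype)
    w[: w_a.shape[0], : w_a.shape[1]] = w_a  # top-left block: network A
    w[w_a.shape[0]:, w_a.shape[1]:] = w_b    # bottom-right block: network B
    return w

# Example: fuse two pre-trained 4x4 layers into one 8x8 layer.
rng = np.random.default_rng(0)
w_small_1 = rng.normal(size=(4, 4))  # stands in for pre-trained weights
w_small_2 = rng.normal(size=(4, 4))
w_large = fuse_linear_layers(w_small_1, w_small_2)
assert w_large.shape == (8, 8)
```

Under this toy rule, the fused layer reproduces each small network's features on its own slice of the input at step zero, which is the kind of warm start that lets the larger model begin training from learned representations rather than from random weights.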