Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
Abstract
Pre-training large neural networks has become widely successful in Natural Language Processing.
Pre-training these large models on unlabeled data is, however, computationally costly and often not feasible.
We therefore concentrate on publicly available checkpoints.
While most of these checkpoints target Natural Language Understanding, we investigate initializing Transformer-based sequence-to-sequence models with them for both Natural Language Understanding and Natural Language Generation tasks.
Using these pre-trained checkpoints, we achieve new state-of-the-art results on Machine Translation, Summarization, and Sentence Splitting/Fusion.
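To make the idea of warm-starting a Transformer-based sequence-to-sequence model from a publicly available checkpoint concrete, the sketch below uses the Hugging Face transformers library and the bert-base-uncased checkpoint. Both the library and the checkpoint name are illustrative assumptions and are not prescribed by this paper.

```python
# Minimal sketch (assumption: Hugging Face `transformers` and the
# `bert-base-uncased` checkpoint; neither is specified by the paper).
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Initialize both encoder and decoder weights from the public checkpoint;
# the decoder's cross-attention layers are added and randomly initialized.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Generation-related settings the encoder-decoder wrapper does not infer itself.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size

# The warm-started model would be fine-tuned on a supervised
# sequence-to-sequence task (e.g. summarization) before generating.
inputs = tokenizer("A long input document to be summarized.", return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sharing the same checkpoint for encoder and decoder is only one possible design choice; encoder and decoder could equally be initialized from different publicly available checkpoints.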