A Hybrid Convolutional Variational Autoencoder for Text Generation
Abstract
In this paper we explore the effect of architectural choices on Variational Autoencoder (VAE) models for text.
In contrast to the previously introduced VAE model for text, where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends a fully feed-forward convolutional and deconvolutional component with a recurrent language model. This architecture exhibits several attractive properties, such as fast run time and an improved ability to handle long sequences; more importantly, we demonstrate that our model helps to avoid some of the major difficulties posed by training VAE models on textual data.
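As a rough illustration of the hybrid design described above, the following PyTorch sketch wires a convolutional encoder and a deconvolutional decoder around a Gaussian latent variable, and feeds the deconvolutional features into a recurrent language model. This is a minimal sketch for orientation, not the authors' implementation; all layer sizes, kernel choices, and names are illustrative assumptions.

```python
# Minimal sketch of a hybrid convolutional/deconvolutional VAE with a
# recurrent language model on top. Hyperparameters are assumptions, not
# values from the paper.
import torch
import torch.nn as nn

class HybridVAE(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256,
                 latent=64, seq_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Fully feed-forward convolutional encoder over token embeddings.
        self.encoder = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        enc_len = seq_len // 4  # two stride-2 convolutions halve length twice
        self.to_mu = nn.Linear(hidden * enc_len, latent)
        self.to_logvar = nn.Linear(hidden * enc_len, latent)
        # Deconvolutional decoder: expands z back into a feature sequence.
        self.from_z = nn.Linear(latent, hidden * enc_len)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, emb_dim, kernel_size=4, stride=2, padding=1),
        )
        # Recurrent LM conditioned on the deconvolutional features.
        self.lm = nn.LSTM(emb_dim * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)
        self.enc_len, self.hidden = enc_len, hidden

    def forward(self, tokens):
        x = self.embed(tokens).transpose(1, 2)           # (B, emb, T)
        h = self.encoder(x).flatten(1)                   # (B, hidden*enc_len)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        f = self.from_z(z).view(-1, self.hidden, self.enc_len)
        feats = self.decoder(f).transpose(1, 2)          # (B, T, emb)
        # The LSTM sees each token's embedding concatenated with the
        # deconvolutional feature at that position. (A real training loop
        # would shift the token inputs by one step for teacher forcing.)
        inp = torch.cat([self.embed(tokens), feats], dim=-1)
        logits = self.out(self.lm(inp)[0])               # (B, T, vocab)
        return logits, mu, logvar
```

Under these assumptions, training would combine a token-level reconstruction loss on `logits` with the usual KL divergence between the approximate posterior `N(mu, exp(logvar))` and a standard Gaussian prior.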