GANterpretations
Abstract
Since the introduction of Generative Adversarial Networks (GANs) [Goodfellow et al., 2014] there
has been a regular stream of both technical advances (e.g., Arjovsky et al. [2017]) and creative uses
of these generative models (e.g., [Karras et al., 2019, Zhu et al., 2017, Jin et al., 2017]). In this work
we propose an approach for using the power of GANs to automatically generate videos to accompany
audio recordings by aligning to spectral properties of the recording. This allows musicians to
explore new forms of multi-modal creative expression, where musical performance can induce an
AI-generated musical video that is guided by said performance, as well as a medium for creating a
visual narrative to follow a storyline.