Julian Salazar
Personal Website • Google Scholar • X (Twitter) • LinkedIn
The publications below are only those via Google Research. See Google DeepMind publications.
Authored Publications
Zero-Shot Mono-to-Binaural Speech Synthesis
Alon Levkovitch
Nadav Bar
Bastiaan Kleijn
Interspeech (2025), pp. 4168-4172
We present ZeroBAS, a neural method to synthesize binaural speech from monaural speech recordings and positional information without training on any binaural data. To our knowledge, this is the first published zero-shot neural approach to mono-to-binaural speech synthesis. Specifically, we show that parameter-free geometric time warping and amplitude scaling based on source location suffice to produce an initial binaural synthesis that can be refined by iteratively applying a pretrained denoising vocoder. Furthermore, we find this leads to generalization across room conditions, which we measure by introducing a new dataset, TUT Mono-to-Binaural, to evaluate state-of-the-art monaural-to-binaural synthesis methods on unseen conditions. Our zero-shot method is perceptually on par with supervised methods on the previous standard mono-to-binaural dataset, and even surpasses them on our out-of-distribution TUT Mono-to-Binaural dataset.
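The geometric stage described above is parameter-free: each ear's channel is the mono signal delayed by its propagation time from the source and attenuated by inverse distance. A minimal sketch of that idea (the ear geometry, head radius, and exact formulation here are illustrative assumptions, and the paper's refinement via a pretrained denoising vocoder is not shown):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.09      # assumed ear offset from head center, in meters

def geometric_warp(mono, fs, src_pos):
    """Sketch of geometric time warping + amplitude scaling:
    per-ear propagation delay and 1/r gain derived from source position.
    (ZeroBAS further refines this output with a pretrained vocoder.)"""
    src = np.asarray(src_pos, dtype=float)
    ears = np.array([[-HEAD_RADIUS, 0.0, 0.0],   # left ear
                     [ HEAD_RADIUS, 0.0, 0.0]])  # right ear
    out = np.zeros((2, len(mono)))
    for ch, ear in enumerate(ears):
        r = np.linalg.norm(src - ear)                 # ear-to-source distance
        delay = int(round(r / SPEED_OF_SOUND * fs))   # delay in samples
        gain = 1.0 / max(r, 1e-3)                     # inverse-distance attenuation
        out[ch, delay:] = gain * mono[:len(mono) - delay]
    return out

# A source to the listener's left arrives at the left ear earlier and louder.
impulse = np.zeros(200)
impulse[0] = 1.0
binaural = geometric_warp(impulse, fs=16000, src_pos=(-1.0, 0.0, 0.0))
```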
Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM
Alon Levkovitch
Roy Hirsch
Chulayuth Asawaroengchai
Ehud Rivlin
ICLR (2024)
We present Spectron, a novel approach to adapting pre-trained large language models (LLMs) to perform spoken question answering (QA) and speech continuation. By endowing the LLM with a pre-trained speech encoder, our model becomes able to take speech inputs and generate speech outputs. The entire system is trained end-to-end and operates directly on spectrograms, simplifying our architecture. Key to our approach is a training objective that jointly supervises speech recognition, text continuation, and speech synthesis using only paired speech-text data, enabling a 'cross-modal' chain-of-thought within a single decoding pass. Our method surpasses existing spoken language models in speaker preservation and semantic coherence. Furthermore, the proposed model improves upon direct initialization in retaining the knowledge of the original LLM, as demonstrated through spoken QA datasets. We release our audio samples and spoken QA dataset via our website.
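The joint objective above supervises three segments of one decoded sequence: the transcript (recognition), the text continuation (language modeling), and the predicted spectrogram frames (synthesis). A toy sketch of such a combined loss, assuming equal weighting and an L2 reconstruction term (the paper's exact loss terms and weights may differ):

```python
import numpy as np

def joint_loss_sketch(logits_transcript, transcript_ids,
                      logits_continuation, continuation_ids,
                      pred_spec, target_spec):
    """Toy joint objective: cross-entropy on transcript tokens (ASR) +
    cross-entropy on continuation tokens (text LM) +
    L2 regression on spectrogram frames (synthesis).
    Illustrative only; weights and terms are assumptions."""
    def ce(logits, ids):
        # Numerically stable softmax cross-entropy, averaged over positions.
        logits = logits - logits.max(axis=-1, keepdims=True)
        logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
        return -np.mean(logp[np.arange(len(ids)), ids])
    l_asr = ce(logits_transcript, transcript_ids)
    l_text = ce(logits_continuation, continuation_ids)
    l_spec = np.mean((pred_spec - target_spec) ** 2)
    return l_asr + l_text + l_spec

# Logits strongly favoring the target ids and a perfect spectrogram
# prediction should yield a loss near zero.
good_logits = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
spec = np.zeros((4, 3))
loss = joint_loss_sketch(good_logits, [0, 1], good_logits, [0, 1], spec, spec)
```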