Ahmed Omran

I'm an AI Resident at Google Zurich, working on neural-network-based audio models for sound compression and processing. In a previous life, I graduated with a degree in Engineering Physics from the Technical University of Munich in 2011 and did my PhD at the Max Planck Institute of Quantum Optics in Garching, building a quantum simulator for electronic lattice models using ultracold atomic gases. After graduating in 2016, I spent four years as a postdoc at Harvard University, developing a platform for experimental quantum simulation using cold atoms trapped in optical tweezers.
Authored Publications
Google Publications
Other Publications
    We present a method to separate speech signals from noisy environments in the embedding space of a neural audio codec. We introduce a new training procedure that allows our model to produce structured encodings of audio waveforms given by embedding vectors, where one part of the embedding vector represents the speech signal, and the rest represent the environment. We achieve this by partitioning the embeddings of different input waveforms and training the model to faithfully reconstruct audio from mixed partitions, thereby ensuring each partition encodes a separate audio attribute. As use cases, we demonstrate the separation of speech from background noise or from reverberation characteristics. Our method also allows for targeted adjustments of the audio output characteristics.
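The embedding-partitioning step described in the abstract can be sketched in a few lines: the encoder output is split into a speech slice and an environment slice, and training pairs are formed by swapping slices between two waveforms. The function name, the flat-vector embedding shape, and the fixed split index below are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mix_partitions(emb_a, emb_b, split):
    """Combine the speech part of emb_a with the environment part of emb_b.

    emb_a, emb_b: 1-D codec embeddings for two different waveforms.
    split: index dividing speech dims [:split] from environment dims [split:].
    (Both the flat-vector shape and the fixed split point are hypothetical.)
    """
    return np.concatenate([emb_a[:split], emb_b[split:]])

# During training, the decoder would be asked to reconstruct "speech of a in
# the environment of b" from this mixed embedding, which pushes each partition
# to encode only its own attribute.
emb_a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
emb_b = np.array([-1.0, -2.0, -3.0, -4.0, -5.0, -6.0])
mixed = mix_partitions(emb_a, emb_b, split=3)  # → [1., 2., 3., -4., -5., -6.]
```

Because the decoder must produce a plausible waveform from any such recombination, information about the environment cannot leak into the speech slice, which is what makes the downstream separation and targeted adjustments possible.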
    SoundStream: An End-to-End Neural Audio Codec
    Neil Zeghidour
    Alejandro Luebs
    IEEE/ACM Transactions on Audio, Speech, and Language Processing (2021)
    We present SoundStream, a novel neural audio codec that can efficiently compress speech, music and general audio at bitrates normally targeted by speech-tailored codecs. SoundStream relies on a model architecture composed of a fully convolutional encoder/decoder network and a residual vector quantizer, which are trained jointly end-to-end. Training leverages recent advances in text-to-speech and speech enhancement, which combine adversarial and reconstruction losses to allow the generation of high-quality audio content from quantized embeddings. By training with structured dropout applied to quantizer layers, a single model can operate across variable bitrates from 3 kbps to 18 kbps, with a negligible quality loss when compared with models trained at fixed bitrates. In addition, the model is amenable to a low-latency implementation, which supports streamable inference and runs in real time on a smartphone CPU. In subjective evaluations using audio at 24 kHz sampling rate, SoundStream at 3 kbps outperforms Opus at 12 kbps and approaches EVS at 9.6 kbps. Moreover, we are able to perform joint compression and enhancement either at the encoder or at the decoder side with no additional latency, which we demonstrate through background noise suppression for speech.
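The residual vector quantizer at the core of SoundStream can be illustrated with a minimal numpy sketch: each codebook quantizes the residual left over by the previous one, and dropping later codebooks trades reconstruction quality for bitrate, which is how a single model covers a range of bitrates. Codebook sizes, shapes, and the function name here are toy assumptions for illustration only.

```python
import numpy as np

def residual_vector_quantize(x, codebooks, n_quantizers=None):
    """Residual VQ: each codebook quantizes the residual of the previous stage.

    x: (d,) embedding vector to quantize.
    codebooks: list of (k, d) arrays, one per quantizer stage (toy sizes here).
    n_quantizers: use only the first n codebooks; picking this at random per
    training step mimics the structured dropout that enables variable bitrate.
    Returns the quantized vector and the chosen codeword index per stage.
    """
    if n_quantizers is None:
        n_quantizers = len(codebooks)
    residual = x.astype(float)
    quantized = np.zeros_like(residual)
    indices = []
    for cb in codebooks[:n_quantizers]:
        # nearest codeword to the current residual
        i = int(np.argmin(np.sum((cb - residual) ** 2, axis=1)))
        indices.append(i)
        quantized += cb[i]
        residual = residual - cb[i]
    return quantized, indices

# toy example: two codebooks of 4 codewords each in 2-D
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 2)) for _ in range(2)]
x = rng.normal(size=2)
q_full, idx_full = residual_vector_quantize(x, codebooks)            # full bitrate
q_low, idx_low = residual_vector_quantize(x, codebooks, n_quantizers=1)  # lower bitrate
```

At inference the decoder only needs the per-stage indices, so the bitrate is set by how many stages are transmitted per frame.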