Praneet Dutta

I received an M.S. in Computer Engineering (2017) from Carnegie Mellon University and a B.Tech in ECE (2016) from the Vellore Institute of Technology. At Google, I work on Reinforcement Learning and Generative Modeling.
Authored Publications
Google Publications
Other Publications
    3D Conditional Generative Adversarial Networks to enable large-scale seismic image enhancement
    Bruce Power
    Adam Halpert
    Carlos Ezequiel
    Aravind Subramanian
    Chanchal Chatterjee
    Sindhu Hari
    Kenton Prindle
    Vishal Vaddina
    Andrew Leach
    Raj Domala
    Laura Bandura
    NeurIPS (2019) (to appear)
    Abstract: We propose GAN-based image enhancement models for frequency enhancement of 2D and 3D seismic images. Seismic imagery is used to understand and characterize the Earth's subsurface for energy exploration. Because these images often suffer from resolution limitations and noise contamination, our proposed method performs large-scale seismic volume frequency enhancement and denoising. The enhanced images reduce uncertainty and improve decisions, such as optimal well placement, that often rely on low signal-to-noise ratio (SNR) seismic volumes. We explored the impact of adding lithology class information to the models, which improved performance on PSNR and SSIM metrics over a baseline model with no conditional information.
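    The conditioning idea in the abstract, supplying per-voxel lithology class information alongside the seismic volume, can be sketched as channel-wise concatenation of a one-hot class map. This is a minimal NumPy illustration of that input construction, not the paper's actual model; the function name, shapes, and class count are assumptions.

    ```python
    import numpy as np

    def condition_seismic_volume(volume, lithology, num_classes):
        """Concatenate a one-hot lithology map to a seismic volume channel-wise.

        volume:    (D, H, W) float array of seismic amplitudes
        lithology: (D, H, W) int array of per-voxel lithology class labels
        Returns a (num_classes + 1, D, H, W) array: the kind of conditional
        input a 3D generator network could consume. (Illustrative only.)
        """
        # One-hot encode the class labels: (D, H, W) -> (D, H, W, C)
        one_hot = np.eye(num_classes, dtype=volume.dtype)[lithology]
        # Move the class axis to the front: (C, D, H, W)
        one_hot = np.moveaxis(one_hot, -1, 0)
        # Stack amplitude channel plus class channels
        return np.concatenate([volume[None], one_hot], axis=0)

    # Usage: a tiny 4x4x4 volume with 3 hypothetical lithology classes
    vol = np.random.randn(4, 4, 4).astype(np.float32)
    lith = np.random.randint(0, 3, size=(4, 4, 4))
    x = condition_seismic_volume(vol, lith, num_classes=3)
    print(x.shape)  # (4, 4, 4, 4)
    ```

    The same concatenation pattern extends to 2D images by dropping the depth axis.
    
    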
    AutoML for Contextual Bandits
    Man Kit (Joe) Cheuk
    Jonathan Kim
    REVEAL Workshop @ ACM RecSys 2019 Conference, Copenhagen (2019) (to appear)
    Abstract: Contextual bandits are a widely used technique in applications such as personalization, recommendation systems, mobile health, and causal marketing. As a dynamic approach, they can be more efficient than standard A/B testing in minimizing regret. We propose an end-to-end meta-learning pipeline to approximate the optimal Q function for contextual bandits problems. Our model performs much better than random exploration, being more regret-efficient and able to converge with a limited number of samples, while remaining general and easy to use thanks to the meta-learning approach. We used a linearly annealed ε-greedy exploration policy to define the exploration-vs-exploitation schedule. We tested the system on a synthetic environment to characterize it fully, and we evaluated it on several open-source datasets. Our model outperforms or performs comparably to other models while requiring no tuning or feature engineering.
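    The linearly annealed ε-greedy schedule mentioned in the abstract can be sketched in a few lines: the exploration rate decays linearly from a start value to a floor, and each decision either explores uniformly or exploits the current Q estimates. This is a generic sketch of the technique, not the paper's implementation; all parameter values and names are assumptions.

    ```python
    import random

    def annealed_epsilon(step, eps_start=1.0, eps_end=0.05, anneal_steps=10_000):
        """Linearly anneal the exploration rate from eps_start to eps_end."""
        frac = min(step / anneal_steps, 1.0)
        return eps_start + frac * (eps_end - eps_start)

    def select_arm(q_values, step):
        """Epsilon-greedy arm selection under the annealed schedule."""
        if random.random() < annealed_epsilon(step):
            return random.randrange(len(q_values))            # explore
        return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

    # Usage: early on the policy mostly explores; late it mostly exploits
    arm = select_arm([0.1, 0.9, 0.2], step=50_000)
    ```

    In a contextual setting, `q_values` would come from a model evaluated on the current context rather than a fixed table.
    
    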