Edwin Toh

I'm a UX Engineer on the AIUX team, working with PAIR.
Authored Publications
    In this paper, we present a natural language code synthesis tool, GenLine, backed by a large generative language model and a set of task-specific prompts. To understand the user experience of natural language code synthesis with these types of models, we conducted a user study in which participants applied GenLine to two programming tasks. Our results indicate that while natural language code synthesis can sometimes provide a magical experience, participants still faced challenges. In particular, participants felt that they needed to learn the model's "syntax," despite their input being natural language. Participants also faced challenges in debugging model input, and demonstrated a wide range of variability in the scope and specificity of their requests. From these findings, we discuss design implications for future natural language code synthesis tools built using generative language models.
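The abstract above describes a tool built from a large generative language model plus task-specific prompts. As a rough illustration of that pattern, here is a minimal Python sketch of prompt-backed code synthesis; the prompt template, the synthesize_code helper, and the stubbed model are hypothetical stand-ins, not GenLine's actual implementation.

```python
# A minimal sketch of prompt-backed code synthesis in the spirit described above.
# The prompt template, the `generate` callable, and the example request are
# hypothetical stand-ins, not the paper's actual implementation.

CODE_SYNTHESIS_PROMPT = """\
You write small JavaScript snippets for web pages.

Request: make the heading red
Code: document.querySelector('h1').style.color = 'red';

Request: {request}
Code:"""


def synthesize_code(request: str, generate) -> str:
    """Fill the task-specific prompt with a natural language request and
    return whatever code the language model produces."""
    prompt = CODE_SYNTHESIS_PROMPT.format(request=request)
    completion = generate(prompt)             # call into any large language model
    return completion.split("\n")[0].strip()  # keep only the first code line


if __name__ == "__main__":
    # Stubbed model so the sketch runs without an API; a real tool would call an LLM here.
    fake_model = lambda prompt: "document.body.style.fontSize = '18px';"
    print(synthesize_code("make the text bigger", fake_model))
```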
    Prototyping is notoriously difficult to do with machine learning (ML), but recent advances in large language models may lower the barriers to prototyping with ML through the use of natural language prompts. This case study reports on the real-world experiences of industry professionals (e.g. designers, program managers, front-end developers) prototyping new ML-powered feature ideas via prompt-based prototyping. Through interviews with eleven practitioners during a three-week sprint and a workshop, we find that prompt-based prototyping reduced barriers to access by substantially broadening who can prototype with ML, sped up the prototyping process, and grounded communication between collaborators. Yet it also introduced new challenges, such as the need to reverse-engineer prompt designs, source example data, and debug and evaluate prompt effectiveness. Taken together, this case study provides important implications that lay the groundwork for a new future of prototyping with ML.
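Because the abstract centers on prototyping ML features purely through prompts, a small sketch may help make that workflow concrete. The feature idea (tagging customer messages), the few-shot examples, and the `complete` callable below are illustrative assumptions, not material from the case study.

```python
# A minimal sketch of prompt-based prototyping: a few-shot prompt stands in
# for a trained model so collaborators can try out a feature idea quickly.
# The feature, the examples, and the stub model are illustrative assumptions.

FEW_SHOT_PROMPT = """\
Label each customer message with one tag: billing, bug, or feature_request.

Message: I was charged twice this month.
Tag: billing

Message: The export button crashes the app.
Tag: bug

Message: {message}
Tag:"""


def prototype_tagger(message: str, complete) -> str:
    """Run the prototype by filling the prompt and asking the model for a tag."""
    return complete(FEW_SHOT_PROMPT.format(message=message)).strip()


if __name__ == "__main__":
    # Stub model keeps the sketch self-contained; swap in a real LLM call to
    # iterate on the prompt wording and example data.
    stub = lambda prompt: " feature_request"
    print(prototype_tagger("Please add dark mode.", stub))
```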
    Tone Transfer: In-Browser Interactive Neural Audio Synthesis
    Michelle Carney
    Chong Li
    Ping Yu
    https://hai-gen2021.github.io/ (2021) (to appear)
    Tone Transfer lets you transform everyday sounds into musical instruments. Record and upload audio directly in the browser and hear our machine learning models re-render it into saxophones, flutes, and more! Don't fancy singing? Play around with a curated set of samples that will get your creative juices flowing! Tone Transfer was born from a year-long collaboration between two teams within Google Research: Magenta and AIUX. AI researchers, UX engineers, and designers worked together to create an experience that opens up the magic of audio machine learning to a wider audience, from musicians to non-coders. Tone Transfer is built on a technology Magenta open-sourced earlier this year called Differentiable Digital Signal Processing (DDSP). At first, Magenta's only demo was a technical Colab notebook intended for folks with coding backgrounds. Through many iterations of design explorations and user research, the AIUX team developed and refined an experience that makes DDSP's sound transformation approachable for everyone and more fun than ever to play with!
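For readers curious what the "digital signal processing" part of DDSP looks like at a high level, the sketch below rebuilds audio from a fundamental-frequency curve and a handful of harmonic amplitudes using plain NumPy. It is only an illustration of the underlying idea; it is not the Magenta DDSP library or its API, and the parameter choices are arbitrary.

```python
# An illustrative sketch of harmonic synthesis, the kind of signal operation
# DDSP makes differentiable: audio is rebuilt from a per-sample fundamental
# frequency plus per-harmonic amplitudes. Plain NumPy, not the Magenta DDSP API.
import numpy as np

SAMPLE_RATE = 16000


def harmonic_synth(f0_hz: np.ndarray, harmonic_amps: np.ndarray) -> np.ndarray:
    """Render audio from a per-sample fundamental frequency (shape [n_samples])
    and a fixed set of per-harmonic amplitudes (shape [n_harmonics])."""
    phase = 2 * np.pi * np.cumsum(f0_hz) / SAMPLE_RATE   # integrate f0 into phase
    audio = np.zeros_like(f0_hz)
    for k, amp in enumerate(harmonic_amps, start=1):
        audio += amp * np.sin(k * phase)                  # add the k-th harmonic
    return audio / np.max(np.abs(audio))                  # normalize to [-1, 1]


if __name__ == "__main__":
    # One second of a 220 Hz tone; the harmonic mix below is an arbitrary example.
    f0 = np.full(SAMPLE_RATE, 220.0)
    audio = harmonic_synth(f0, harmonic_amps=np.array([1.0, 0.5, 0.3, 0.2]))
    print(audio.shape)  # (16000,)
```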