Jason Lawrence
I am a research scientist at Google in Seattle, working at the intersection of 3D computer vision, machine learning, and computer graphics. I co-founded and currently lead the research and engineering team behind Project Starline (video). A longer bio and more information about my work are available at my personal webpage.
Authored Publications
Project Starline: A high-fidelity telepresence system
Supreeth Achar, Gregory Major Blascovich, Joseph G. Desloge, Tommy Fortes, Eric M. Gomez, Sascha Häberling, Hugues Hoppe, Andy Huibers, Claude Knaus, Brian Kuschak, Ricardo Martin-Brualla, Harris Nover, Andrew Ian Russell, Steven M. Seitz, Kevin Tong
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), vol. 40(6) (2021)
Abstract: We present a real-time bidirectional communication system that lets two people, separated by distance, experience a face-to-face conversation as if they were copresent. It is the first telepresence system that is demonstrably better than 2D videoconferencing, as measured using participant ratings (e.g., presence, attentiveness, reaction-gauging, engagement), meeting recall, and observed nonverbal behaviors (e.g., head nods, eyebrow movements). This milestone is reached by maximizing audiovisual fidelity and the sense of copresence in all design elements, including physical layout, lighting, face tracking, multi-view capture, microphone array, multi-stream compression, loudspeaker output, and lenticular display. Our system achieves key 3D audiovisual cues (stereopsis, motion parallax, and spatialized audio) and enables the full range of communication cues (eye contact, hand gestures, and body language), yet does not require special glasses or body-worn microphones/headphones. The system consists of a head-tracked autostereoscopic display, high-resolution 3D capture and rendering subsystems, and network transmission using compressed color and depth video streams. Other contributions include a novel image-based geometry fusion algorithm, free-space dereverberation, and talker localization. (presentation video)
Image Perforation: Automatically Accelerating Image Pipelines by Intelligently Skipping Samples
Liming Lou, Paul Nguyen, Connelly Barnes
ACM Transactions on Graphics, vol. 35 (2016)
Abstract: Image pipelines arise frequently in modern computational photography systems and consist of multiple processing stages where each stage produces an intermediate image that serves as input to a future stage. Inspired by recent work on loop perforation [Sidiroglou-Douskos et al. 2011], this article introduces image perforation, a new optimization technique that allows us to automatically explore the space of performance-accuracy tradeoffs within an image pipeline. Image perforation works by transforming loops over the image at each pipeline stage into coarser loops that effectively "skip" certain samples. These missing samples are reconstructed for later stages using a number of different interpolation strategies that are relatively inexpensive to perform compared to the original cost of computing the sample. We describe a genetic algorithm for automatically exploring the resulting combinatoric search space of which loops to perforate, in what manner, by how much, and using which reconstruction method. We also present a prototype language that implements image perforation along with several other domain-specific optimizations and show results for a number of different image pipelines and inputs. For these cases, image perforation achieves speedups of 2× - 10× with acceptable loss in visual quality and significantly outperforms loop perforation.
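The core transformation in the abstract above can be illustrated with a minimal sketch: run an expensive pipeline stage only on a strided subset of pixels, then reconstruct the skipped samples cheaply. This is not the paper's implementation (which explores many perforation and reconstruction variants via a genetic search); it is one hand-picked instance, using a stride-2 grid and nearest-neighbor reconstruction, with a hypothetical `perforate_stage` helper and a toy per-pixel stage:

```python
import numpy as np

def perforate_stage(stage, image, stride=2):
    """Evaluate `stage` only on every `stride`-th pixel in each dimension,
    then fill in the skipped samples by nearest-neighbor reconstruction.
    Illustrative sketch only; the paper searches over which loops to
    perforate, by how much, and which reconstruction method to use."""
    h, w = image.shape[:2]
    # Coarse loop: the stage runs on stride**2 times fewer samples.
    coarse = stage(image[::stride, ::stride])
    # Cheap reconstruction: repeat each computed sample over its block.
    full = np.repeat(np.repeat(coarse, stride, axis=0), stride, axis=1)
    return full[:h, :w]

# Toy "expensive" per-pixel stage (stands in for a real pipeline stage).
def stage(img):
    return np.sqrt(img)

img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
exact = stage(img)
approx = perforate_stage(stage, img, stride=2)
# Stride 2 does 4x less work at this stage, at some reconstruction error.
err = np.abs(exact - approx).mean()
```

Samples that fall on the coarse grid are exact; only the reconstructed in-between samples carry error, which is the performance-accuracy tradeoff the paper's search automates across all stages of a pipeline.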