Eric Lee Turner
I studied Electrical and Computer Engineering at CMU, receiving my B.S. in 2011. I earned my M.S. and Ph.D. at the Video and Image Processing Lab at U.C. Berkeley in 2013 and 2015, respectively; my thesis focused on indoor modeling and surface reconstruction.
While at Google, I have focused on two topics: Foveated Rendering for Virtual Reality systems, and depth sensing.
Authored Publications
Experiencing Real-time 3D Interaction with Depth Maps for Mobile Augmented Reality in DepthLab
Maksym Dzitsiuk
Luca Prasso
Ivo Duarte
Jason Dourgarian
Joao Afonso
Jose Pascoal
Josh Gladstone
Nuno Moura e Silva Cruces
Shahram Izadi
Konstantine Nicholas John Tsotsos
Adjunct Publication of the 33rd Annual ACM Symposium on User Interface Software and Technology, ACM (2020), pp. 108-110
Abstract
We demonstrate DepthLab, a wide range of experiences using the ARCore Depth API that allows users to detect the shape and depth of the physical environment with a mobile phone. DepthLab encapsulates a variety of depth-based UI/UX paradigms, including geometry-aware rendering (occlusion, shadows, texture decals), surface interaction behaviors (physics, collision detection, avatar path planning), and visual effects (relighting, 3D-anchored focus and aperture effects, 3D photos). We have open-sourced our software at https://github.com/googlesamples/arcore-depth-lab to facilitate future research and development in depth-aware mobile AR experiences. With DepthLab, we aim to help mobile developers effortlessly integrate depth into their AR experiences and amplify the expression of their creative vision.
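To make the geometry-aware occlusion paradigm above concrete, here is a minimal sketch of the underlying test: a virtual pixel is hidden whenever the real-world depth sampled at that pixel is closer to the camera than the virtual fragment. This is an illustrative NumPy sketch, not DepthLab's actual code; the function name and the soft-blend parameter are invented for the example.

```python
import numpy as np

def occlusion_mask(real_depth_m, virtual_depth_m, softness_m=0.05):
    """Per-pixel visibility of virtual content against a real-world depth map.

    real_depth_m:    HxW array of metric depths from the depth sensor/API.
    virtual_depth_m: HxW array of metric depths of rendered virtual fragments
                     (np.inf where no virtual content was drawn).
    Returns an HxW alpha in [0, 1]: 1 = virtual pixel fully visible,
    0 = fully occluded by real geometry. A small linear ramp ('softness_m',
    an illustrative parameter) hides hard edges caused by depth-map noise.
    """
    # Positive when the virtual fragment is in front of the real surface.
    signed_gap = real_depth_m - virtual_depth_m
    return np.clip(0.5 + signed_gap / (2.0 * softness_m), 0.0, 1.0)

# Toy example: a 2x2 depth map, virtual quad at 1.0 m everywhere.
real = np.array([[0.8, 0.8], [2.0, 2.0]])  # top row: real wall closer than quad
virt = np.full((2, 2), 1.0)
print(occlusion_mask(real, virt))          # top row ~0 (occluded), bottom row 1
```

In practice this comparison would run per fragment in a shader; the soft ramp is one common way to hide depth-map noise at occlusion boundaries.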
DepthLab: Real-time 3D Interaction with Depth Maps for Mobile Augmented Reality
Maksym Dzitsiuk
Luca Prasso
Ivo Duarte
Jason Dourgarian
Joao Afonso
Jose Pascoal
Josh Gladstone
Nuno Moura e Silva Cruces
Shahram Izadi
Konstantine Nicholas John Tsotsos
Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, ACM (2020), pp. 829-843
Abstract
Mobile devices with passive depth sensing capabilities are ubiquitous, and recently active depth sensors have become available on some tablets and VR/AR devices. Although real-time depth data is accessible, its rich value to mainstream AR applications has been sorely under-explored. Adoption of depth-based UX has been impeded by the complexity of performing even simple operations with raw depth data, such as detecting intersections or constructing meshes. In this paper, we introduce DepthLab, a software library that encapsulates a variety of depth-based UI/UX paradigms, including geometry-aware rendering (occlusion, shadows), surface interaction behaviors (physics-based collisions, avatar path planning), and visual effects (relighting, depth-of-field effects). We break down depth usage into localized depth, surface depth, and dense depth, and describe our real-time algorithms for interaction and rendering tasks. We present the design process, system, and components of DepthLab to streamline and centralize the development of interactive depth features. We have open-sourced our software to external developers, conducted performance evaluation, and discussed how DepthLab can accelerate the workflow of mobile AR designers and developers. We envision that DepthLab may help mobile AR developers amplify their prototyping efforts, empowering them to unleash their creativity and effortlessly integrate depth into mobile AR experiences.
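As one concrete illustration of the "localized depth" category described above, a single depth-map sample can be back-projected through the camera intrinsics to turn a screen tap into a 3D point for anchor placement or avatar path planning. The sketch below is a generic pinhole-camera computation with assumed intrinsics, not DepthLab's API.

```python
import numpy as np

def hit_test(depth_map_m, u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) using its metric depth into a camera-space 3D point.

    depth_map_m: HxW metric depth map aligned with the color image.
    (fx, fy, cx, cy): pinhole intrinsics of that image (assumed values below).
    Returns (x, y, z) in camera coordinates, z along the optical axis.
    """
    z = float(depth_map_m[v, u])   # depth at the tapped pixel, in meters
    x = (u - cx) / fx * z          # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) / fy * z
    return np.array([x, y, z])

# Toy usage: 640x480 depth map, everything 2 m away, tap near the image center.
depth = np.full((480, 640), 2.0)
point = hit_test(depth, u=320, v=240, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(point)  # ~[0, 0, 2]: a 3D anchor two meters in front of the camera
```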
Depth from motion for smartphone AR
Julien Valentin
Neal Wadhwa
Max Dzitsiuk
Michael John Schoenberg
Vivek Verma
Ambrus Csaszar
Ivan Dryanovski
Joao Afonso
Jose Pascoal
Konstantine Nicholas John Tsotsos
Mira Angela Leung
Mirko Schmidt
Sameh Khamis
Vladimir Tankovich
Shahram Izadi
Christoph Rhemann
ACM Transactions on Graphics (2018)
Abstract
Augmented reality (AR) for smartphones has matured from a technology for early adopters, available only on select high-end phones, to one that is truly available to the general public. One of the key breakthroughs has been in low-compute methods for six degree of freedom (6DoF) tracking on phones using only the existing hardware (camera and inertial sensors). 6DoF tracking is the cornerstone of smartphone AR, allowing virtual content to be precisely locked on top of the real world. However, to really give users the impression of believable AR, one requires mobile depth. Without depth, even simple effects such as a virtual object being correctly occluded by the real world are impossible. However, requiring a mobile depth sensor would severely restrict access to such features. In this article, we provide a novel pipeline for mobile depth that supports a wide array of mobile phones, and uses only the existing monocular color sensor. Through several technical contributions, we provide the ability to compute low-latency dense depth maps using only a single CPU core of a wide range of (medium-high) mobile phones. We demonstrate the capabilities of our approach on high-level AR applications including real-time navigation and shopping.
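The paper's pipeline is not reproduced here, but the core principle of motion stereo it builds on can be sketched: two views captured by one moving camera, with the relative pose known from 6DoF tracking, constrain a matched point's depth. The snippet below assumes idealized rectified views and a purely sideways baseline; the function name and numbers are illustrative, not the paper's algorithm.

```python
def depth_from_motion_stereo(disparity_px, baseline_m, focal_px):
    """Depth of a point seen in two rectified views captured by one moving camera.

    disparity_px: horizontal shift (pixels) of the point between the two frames.
    baseline_m:   translation of the camera between the frames, in meters,
                  recovered from 6DoF tracking (camera + inertial sensors).
    focal_px:     camera focal length in pixels.
    Returns metric depth; larger disparity => closer point.
    """
    if disparity_px <= 0:
        raise ValueError("point must shift between frames to be triangulated")
    return focal_px * baseline_m / disparity_px

# Toy example: the phone moved 5 cm sideways, a feature shifted 10 px, f = 500 px.
print(depth_from_motion_stereo(disparity_px=10.0, baseline_m=0.05, focal_px=500.0))
# -> 2.5 meters
```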
Limits of Peripheral Acuity and Implications for VR System Design
David Morris Hoffman
Zoe Meraz
Journal of the Society for Information Display (2018), pp. 13
Abstract
At different locations in the visual field, we measured the visual system's sensitivity to a number of artifacts that can be introduced in near-eye display systems. One study examined the threshold level of downsampling that an image can sustain at different positions on the retina and found that temporally stable approaches, both blurred and aliased, were much less noticeable than temporally volatile approaches. Also, boundaries between zones of different resolution had low visibility in the periphery. We also examined the minimum duration needed for the visual system to detect a low-resolution region in an actively tracked system and found that low-resolution images presented for less than 40 ms before being replaced with a high-resolution image are unlikely to be visibly degraded. We also found that the visual system shows a rapid fall-off in its ability to detect chromatic aberration in the periphery. These findings can inform the design of high-performance and computationally efficient near-eye display systems.
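A hedged sketch of how measurements like these typically feed a foveated-rendering design rule: model the minimum angle of resolution (MAR) as growing roughly linearly with eccentricity and convert it to a tolerable sample spacing per display region. The constants below are illustrative placeholders, not the thresholds reported in the paper.

```python
def max_sample_spacing_deg(eccentricity_deg, mar0_deg=1.0 / 60.0, slope=0.01):
    """Tolerable angular spacing between rendered samples at a given eccentricity.

    Uses a simple linear minimum-angle-of-resolution model:
        MAR(e) = mar0 + slope * e   (degrees)
    mar0_deg and slope are illustrative, not the study's fitted values.
    """
    return mar0_deg + slope * eccentricity_deg

def downsampling_factor(eccentricity_deg, display_pixel_pitch_deg):
    """How many display pixels one rendered sample may cover at this eccentricity."""
    return max(1.0, max_sample_spacing_deg(eccentricity_deg) / display_pixel_pitch_deg)

# Example: a display with roughly 1.5 arcmin per pixel (0.025 deg).
for ecc in (0, 10, 20, 40):
    print(ecc, round(downsampling_factor(ecc, 0.025), 1))
```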
Sensitivity to peripheral artifacts in VR display systems
David Morris Hoffman
Zoe Meraz
Proceedings of the Society for Information Display, Society for Information Display (2018), pp. 4
Abstract
We evaluated the visual system’s sensitivity to different classes of image impairment that are closely associated with rendering in VR display systems. Even in the far periphery, the visual system was highly sensitive to volatile downsampling solutions. Temporally stable downsampling in the periphery was generally acceptable even with sample spacing up to half a degree.
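To make the "temporally stable" versus "volatile" distinction concrete, the sketch below downsamples the same static texture with a sample grid whose phase is either fixed (stable) or shifted every frame (volatile), and measures frame-to-frame change as a crude proxy for visible flicker. This is an illustration of the terminology only, not the stimuli or analysis used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(image, factor, phase):
    """Nearest-neighbor downsample then upsample, sampling at 'phase' offset."""
    idx = (np.arange(0, image.shape[0], factor) + phase) % image.shape[0]
    coarse = image[np.ix_(idx, idx)]
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

# A fixed random texture viewed over several frames.
texture = rng.random((64, 64))
stable, volatile = [], []
for frame in range(8):
    stable.append(downsample(texture, factor=4, phase=0))        # grid never moves
    volatile.append(downsample(texture, factor=4, phase=frame))  # grid shifts per frame

# Frame-to-frame change (a crude proxy for visible flicker).
print("stable  :", np.mean(np.abs(np.diff(np.stack(stable), axis=0))))
print("volatile:", np.mean(np.abs(np.diff(np.stack(volatile), axis=0))))
```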
Phase-Aligned Foveated Rendering for Virtual Reality Headsets
Haomiao Jiang
Damien Saint-Macary
Behnam Bastani
The 25th IEEE Conference on Virtual Reality and 3D User Interfaces (2018)
Abstract
We propose a novel method of foveated rendering for virtual reality, targeting head-mounted displays with large fields of view or high pixel densities. Our foveation method removes motion-induced flicker in the periphery by aligning the rendered pixel grid to the virtual scene content during rasterization and upsampling. This method dramatically reduces detectability of motion artifacts in the periphery without complex interpolation or anti-aliasing algorithms.
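A minimal sketch of the phase-alignment idea: the low-resolution peripheral samples are defined along fixed world-space directions, so a head rotation changes only the final resampling to the display, not which scene content each coarse sample covers. The names and the single-axis head rotation below are simplifications invented for illustration, not the paper's rasterization pipeline.

```python
import numpy as np

def world_aligned_sample_dirs(grid_size):
    """Low-resolution sample directions for one face of a world-aligned cube.

    Because the directions are defined in *world* space, they do not change
    when the head rotates; only the final resampling to the display does.
    """
    u = (np.arange(grid_size) + 0.5) / grid_size * 2.0 - 1.0
    uu, vv = np.meshgrid(u, u)
    dirs = np.stack([uu, vv, np.ones_like(uu)], axis=-1)  # +Z face of the cube
    return dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)

def world_to_head(yaw_rad):
    """Rotation taking world-frame directions into the head/display frame
    (illustrative single-axis head pose)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

# The coarse peripheral grid is shaded along these fixed world directions...
world_dirs = world_aligned_sample_dirs(grid_size=8)
# ...and each frame only the reprojection into the display depends on head pose:
for yaw in (0.00, 0.01, 0.02):                        # small head rotations per frame
    display_dirs = world_dirs @ world_to_head(yaw).T  # same shading, new reprojection
```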