Eric J. Gonzalez

Eric J. Gonzalez is a Research Scientist in the Blended Interaction Research & Devices (BIRD) Lab at Google AR. His work focuses on enabling interactions and experiences that bridge the physical and digital in immersive mixed reality.
Authored Publications
    Interactions with applications on extended reality head-mounted devices (XR HMDs) require precise, intuitive, and efficient input methods. Current approaches either rely on power-intensive sensors, such as cameras for hand tracking, or on specialized hardware in the form of handheld controllers. As an alternative, prior work has explored devices the user already carries, such as smartphones and smartwatches, as practical input solutions. However, this approach risks interaction overload: how can one determine whether the user's gestures on the watch face or phone screen are directed at the mobile device itself or at the XR device? To this end, we propose a novel framework for cross-device input routing and device arbitration that employs the inertial measurement units (IMUs) within these devices. We validate our approach in a user study with six participants. Using the relative orientation between the headset and the target input device, our method estimates the intended device of interaction with 93.7% accuracy, offering a seamless, energy-efficient alternative for input management in XR that supports natural and ergonomic interactions.
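    The arbitration idea above, routing input based on the relative orientation between the headset and a candidate device, can be illustrated with a minimal sketch. This is not the paper's implementation: the quaternion helpers, the 35-degree threshold, and the routing rule are illustrative assumptions.

```python
import numpy as np

def quat_conjugate(q):
    """Conjugate of a unit quaternion q = (w, x, y, z); equals its inverse."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def relative_angle_deg(q_headset, q_device):
    """Magnitude of the rotation between the headset and device IMU frames."""
    q_rel = quat_multiply(quat_conjugate(q_headset), q_device)
    return np.degrees(2.0 * np.arccos(min(abs(q_rel[0]), 1.0)))

def route_input(q_headset, device_orientations, threshold_deg=35.0):
    """Pick the device the user is most plausibly attending to.

    Returns the best-aligned device name, or None to route the gesture to
    the XR app itself. The threshold value is an illustrative assumption,
    not the paper's calibrated parameter.
    """
    best = min(device_orientations,
               key=lambda n: relative_angle_deg(q_headset, device_orientations[n]))
    if relative_angle_deg(q_headset, device_orientations[best]) <= threshold_deg:
        return best
    return None
```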
    We present XDTK, an open-source Unity/Android toolkit for prototyping multi-device interactions in extended reality (XR). With the Unity package and Android app provided in XDTK, data from any number of devices (phones, tablets, or wearables) can be streamed to and surfaced within a Unity-based XR application. ARCore-supported devices also provide self-tracked pose data. Devices on the same local network are automatically discovered by the Unity server, and their inputs are routed using a custom event framework. We designed XDTK to be modular and easily extensible, enabling fast, simple, and effective prototyping of multi-device experiences by both researchers and developers.
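    XDTK's automatic discovery of devices on the local network can be sketched with a simple UDP broadcast scheme, shown below. This is a minimal illustration of the idea, not XDTK's actual wire protocol; the port number and message format are assumptions.

```python
import json
import socket
import time

DISCOVERY_PORT = 49152  # illustrative port, not XDTK's actual protocol

def announce(device_name, interval_s=1.0):
    """Device side: periodically broadcast presence on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    msg = json.dumps({"device": device_name, "caps": ["touch", "imu"]}).encode()
    while True:
        sock.sendto(msg, ("255.255.255.255", DISCOVERY_PORT))
        time.sleep(interval_s)

def discover(timeout_s=5.0):
    """Server side: collect device announcements until the timeout elapses."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(timeout_s)
    devices = {}
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            data, (addr, _port) = sock.recvfrom(1024)
        except socket.timeout:
            break
        devices[addr] = json.loads(data)  # keep latest announcement per address
    return devices
```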
    Hovering Over the Key to Text Input in XR
    Diar Abdlkarim
    Arpit Bhatia
    Stuart Macgregor
    Jason Fotso-Puepi
    Hasti Seifi
    Massimiliano Di Luca
    Karan Ahuja
    Virtual, mixed, and augmented reality (XR) technologies hold immense potential for transforming productivity beyond the PC, so there is a critical need for improved text input solutions for XR. However, achieving efficient text input in these environments remains a significant challenge. This paper examines the current landscape of XR text input techniques, focusing on the importance of keyboards (both physical and virtual) as essential tools. We discuss the unique challenges and opportunities presented by XR, synthesizing key trends from existing solutions.
    WindowMirror is a framework for using XR headsets in productivity scenarios. The toolkit provides users with simulated, extended screen real estate, allowing them to interact with multiple desktop applications in real time within an XR environment. Our architecture has two main modules, a Unity package and a Python backend, which makes it easy to use and extend. WindowMirror supports traditional desktop interaction methods such as mouse, keyboard, and hand tracking. Furthermore, it features a Cylindrical Window Layout, an emerging design pattern that is particularly effective for single-user, egocentric perspectives. WindowMirror aims to set a foundation for future research in XR screen-focused productivity scenarios.
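    The Cylindrical Window Layout mentioned above can be sketched as a small placement function: windows are spaced along an arc of a cylinder centered on the user, each rotated to face inward. The radius, arc span, and coordinate conventions below are illustrative assumptions, not WindowMirror's actual parameters.

```python
import math

def cylindrical_layout(num_windows, radius_m=1.5, span_deg=120.0, height_m=0.0):
    """Place windows on a cylinder around the user, each facing the center.

    Returns (x, y, z, yaw_deg) per window; y is up, the user sits at the
    origin, and forward is +z in this sketch. All values are illustrative.
    """
    if num_windows == 1:
        angles = [0.0]
    else:
        step = span_deg / (num_windows - 1)
        angles = [-span_deg / 2 + i * step for i in range(num_windows)]
    placements = []
    for a in angles:
        rad = math.radians(a)
        x = radius_m * math.sin(rad)
        z = radius_m * math.cos(rad)
        yaw = -a  # rotate the window so its normal points back at the user
        placements.append((x, height_m, z, yaw))
    return placements

# Example: five desktop windows spread across a 120-degree arc
for p in cylindrical_layout(5):
    print("pos=(%.2f, %.2f, %.2f) yaw=%.1f deg" % p)
```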
    Augmented Object Intelligence with XR-Objects
    Mustafa Doga Dogan
    Karan Ahuja
    Andrea Colaco
    Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST), ACM (2024), pp. 1-15
    Seamless integration of physical objects as interactive digital entities remains a challenge for spatial computing. This paper explores Augmented Object Intelligence (AOI) in the context of XR: an interaction paradigm that aims to blur the lines between the digital and the physical by equipping real-world objects with the ability to interact as if they were digital, so that every object can serve as a portal to digital functionalities. Our approach uses real-time object segmentation and classification, combined with the power of Multimodal Large Language Models (MLLMs), to facilitate these interactions without the need for object pre-registration. We implement the AOI concept as XR-Objects, an open-source prototype system that provides a platform for users to engage with their physical environment in contextually relevant ways using object-based context menus. This system enables analog objects not only to convey information but also to initiate digital actions, such as querying for details or executing tasks. Our contributions are threefold: (1) we define the AOI concept and detail its advantages over traditional AI assistants, (2) we detail the XR-Objects system's open-source design and implementation, and (3) we demonstrate its versatility through various use cases and a user study.
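    At a high level, the XR-Objects pipeline (segment and classify an object, attach a context menu, and route a selected action to an MLLM with visual context) can be sketched as follows. All names here (`DetectedObject`, `handle_selection`, the menu actions, and the prompt format) are hypothetical placeholders for illustration, not the system's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    """One object found by real-time segmentation/classification."""
    label: str     # e.g., "coffee bag"
    bbox: tuple    # (x, y, w, h) in camera-image coordinates
    anchor: tuple  # 3D anchor where the context menu is rendered
    actions: list = field(default_factory=list)

def build_context_menu(obj: DetectedObject) -> list:
    """Attach object-based actions; no pre-registration of the object needed.

    The action set is an illustrative assumption.
    """
    return ["Ask a question", "Compare prices", "Set a reminder"]

def crop_to_bbox(frame, bbox):
    """Crop the camera frame (assumed to be an array) to the object's box."""
    x, y, w, h = bbox
    return frame[y:y + h, x:x + w]

def handle_selection(obj, user_query, camera_frame, mllm):
    """Route a menu selection to a multimodal model with visual context.

    `mllm` stands in for any multimodal LLM client; the prompt format is
    an assumption, not XR-Objects' actual prompt.
    """
    crop = crop_to_bbox(camera_frame, obj.bbox)
    prompt = f"The user points at a {obj.label} and asks: {user_query}"
    return mllm.generate(image=crop, text=prompt)
```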