Mar Gonzalez-Franco
Mar Gonzalez-Franco, PhD, is a computer scientist and neuroscientist at Google working on a new generation of immersive technologies. With a background in real-time systems, her research draws on multiple disciplines, including virtual reality, augmented reality, AI, computer graphics, computer vision, avatars, and haptics, to build better interactions for immersive technologies, all while studying human behavior, perception, and neuroscience.
Apart from her scientific contributions, she has a deep interest in helping the community grow more diverse, and she was awarded the 2022 IEEE VGTC VR New Researcher Award.
She leads the BIRD lab, working on Blended Interactions Research and Devices.
Authored Publications
For Extended Reality (XR) headsets, a key aim is natural interaction in 3D space beyond what the traditional keyboard, mouse, and touchscreen can offer. With the release of the Apple Vision Pro, a novel interaction paradigm is now widely available, in which users seamlessly navigate content through the combined use of their eyes and hands. However, blending these modalities poses unique design challenges due to their dynamic nature and the absence of established principles and standards.
In this article, we present five design principles and issues for the Gaze + Pinch interaction technique, informed by eye-hand research in the human-computer interaction field. The design principles encompass mechanisms like division of labor and minimalistic timing, which are crucial for usability, alongside enhancements for the manipulation of objects, indirect interactions, and drag & drop. Whether in design, technology, or research domains, this exploration offers valuable perspectives for navigating the evolving landscape of 3D interaction.
Most of our interactions with digital content currently occur on 2D screens; moving from that format to immersive setups brings a paradigm shift, from content inside the screen to users inside the content. This change requires revisiting how we blend the analog and the digital and how we transfer content between the two modes, and perhaps calls for new guidelines as well. While different solutions appear in this space, the dynamic range only seems to widen. We can begin to see what works and what does not, through empirical or ethnographic approaches that go beyond laboratory studies. But if we want to accelerate adoption, we need to deepen our understanding of how current tasks can be improved, and how this new form of interaction can increase productivity. In this paper we focus on analyzing and converging on what we think works, and on envisioning how this new set of immersive devices and interactions can enable productivity beyond existing tools.
Hovering Over the Key to Text Input in XR
Diar Abdlkarim
Arpit Bhatia
Stuart Macgregor
Jason Fotso-Puepi
Hasti Seifi
Massimiliano Di Luca
Karan Ahuja
Virtual, Mixed, and Augmented Reality (XR) technologies hold immense potential for transforming productivity beyond the PC, which creates a critical need for improved text input solutions for XR. However, achieving efficient text input in these environments remains a significant challenge. This paper examines the current landscape of XR text input techniques, focusing on the importance of keyboards (both physical and virtual) as essential tools. We discuss the unique challenges and opportunities presented by XR, synthesizing key trends from existing solutions.
WindowMirror is a framework for using XR headsets in productivity scenarios. The toolkit provides users with simulated, extended screen real estate, allowing them to interact with multiple desktop applications in real time within an XR environment. Our architecture has two main modules, a Unity package and a Python backend, which makes it easy to use and extend. WindowMirror supports traditional desktop interaction methods such as mouse, keyboard, and hand tracking. Furthermore, it features a Cylindrical Window Layout, an emerging design pattern that is particularly effective for single-user, egocentric perspectives. WindowMirror aims to set a foundation for future research in XR screen-focused productivity scenarios.
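The geometry behind a cylindrical window layout can be illustrated with a minimal sketch: windows are spaced evenly along an arc of a cylinder centered on the user, each rotated to face inward. This is an illustrative reconstruction of the general pattern, not code from the WindowMirror toolkit, and all names and parameters here are assumptions.

```python
import math

def cylindrical_layout(num_windows, radius=2.0,
                       angular_span=math.radians(120), height=1.5):
    """Place windows evenly on a cylinder arc in front of the user.

    Returns one (x, y, z, yaw_degrees) tuple per window: a position on
    the cylinder and the yaw needed to face the user at the origin.
    """
    placements = []
    for i in range(num_windows):
        # Fraction across the arc; a single window sits at the center.
        t = 0.5 if num_windows == 1 else i / (num_windows - 1)
        theta = (t - 0.5) * angular_span      # angle from the forward axis
        x = radius * math.sin(theta)
        z = radius * math.cos(theta)          # forward is +z
        yaw = math.degrees(theta)             # rotate the window to face inward
        placements.append((x, height, z, yaw))
    return placements
```

With three windows, for example, the middle window lands straight ahead at the cylinder radius with zero yaw, while the outer two sit at the edges of the arc, angled toward the user.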
Interactions with Extended Reality head-mounted devices (XR HMDs) require precise, intuitive, and efficient input methods. Current approaches either rely on power-intensive sensors, such as cameras for hand tracking, or on specialized hardware in the form of handheld controllers. As an alternative, prior work has explored using devices the user already carries, such as smartphones and smartwatches, as practical input solutions. However, this approach risks interaction overload: how can one determine whether the user's gestures on the watch face or phone screen are directed toward control of the mobile device itself or of the XR device? To this end, we propose a novel framework for cross-device input routing and device arbitration that employs the Inertial Measurement Units (IMUs) within these devices. We validate our approach in a user study with six participants. By using the relative orientation between the headset and the target input device, we can estimate the intended device of interaction with 93.7% accuracy. Our method offers a seamless, energy-efficient alternative for input management in XR, enhancing user experience through natural and ergonomic interactions.
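The core idea of orientation-based arbitration can be sketched as follows. The paper's exact classifier is not reproduced here; this is a simplified sketch under the assumption that a device whose screen faces back toward the headset is being used as an XR input surface. The function names and the 60-degree threshold are illustrative, not from the paper.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

def route_input(headset_forward, device_normal, threshold_deg=60.0):
    """Decide which device a touch gesture is intended for.

    If the device's screen normal points roughly back at the headset
    (the two vectors are nearly opposed), treat gestures as XR input;
    otherwise assume the user is operating the mobile device itself.
    """
    # Angle between the gaze direction and the *reversed* screen normal.
    opposition = angle_between(headset_forward, [-c for c in device_normal])
    return "xr_headset" if opposition < threshold_deg else "mobile_device"
```

A phone held up with its screen facing the user would route gestures to the headset; a phone lying flat while the user looks forward would keep them local.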
Augmented Object Intelligence with XR-Objects
Mustafa Doga Dogan
Karan Ahuja
Andrea Colaco
Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST), ACM (2024), pp. 1-15
Seamless integration of physical objects as interactive digital entities remains a challenge for spatial computing. This paper explores Augmented Object Intelligence (AOI) in the context of XR, an interaction paradigm that aims to blur the lines between digital and physical by equipping real-world objects with the ability to interact as if they were digital, where every object has the potential to serve as a portal to digital functionalities. Our approach utilizes real-time object segmentation and classification, combined with the power of Multimodal Large Language Models (MLLMs), to facilitate these interactions without the need for object pre-registration. We implement the AOI concept in the form of XR-Objects, an open-source prototype system that provides a platform for users to engage with their physical environment in contextually relevant ways using object-based context menus. This system enables analog objects to not only convey information but also to initiate digital actions, such as querying for details or executing tasks. Our contributions are threefold: (1) we define the AOI concept and detail its advantages over traditional AI assistants, (2) detail the XR-Objects system’s open-source design and implementation, and (3) show its versatility through various use cases and a user study.
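The object-to-menu step of such a pipeline can be sketched in outline: a detected physical object is wrapped in a data structure and assigned digital actions without pre-registration. This is a structural sketch only; the class names and the static action table are hypothetical stand-ins for what XR-Objects derives at runtime from segmentation, classification, and an MLLM.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    label: str                       # class predicted by the object detector
    bbox: tuple                      # 2D bounding box in screen space
    actions: list = field(default_factory=list)

# Illustrative per-class actions; the real system queries an MLLM for
# contextually relevant actions instead of using a fixed table.
DEFAULT_ACTIONS = {
    "coffee_maker": ["ask about", "set timer"],
    "book": ["ask about", "summarize"],
}

def build_context_menu(obj):
    """Attach a context menu to a detected object, no pre-registration needed."""
    obj.actions = DEFAULT_ACTIONS.get(obj.label, ["ask about"])
    return obj
```

The key property mirrored here is that unrecognized objects still get a generic entry point, so every physical object can act as a portal to digital functionality.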
We present XDTK, an open-source Unity/Android toolkit for prototyping multi-device interactions in extended reality (XR). With the Unity package and Android app provided in XDTK, data from any number of devices (phones, tablets, or wearables) can be streamed to and surfaced within a Unity-based XR application. ARCore-supported devices also provide self-tracked pose data. Devices on the same local network are automatically discovered by the Unity server, and their inputs are routed using a custom event framework. We designed XDTK to be modular and easily extendable, enabling fast, simple, and effective prototyping of multi-device experiences by both researchers and developers.
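Automatic discovery of devices on a local network, as described above, is commonly done with broadcast packets. The following is a minimal sketch of that general pattern, not XDTK's actual protocol: the message format, port, and function names are all assumptions for illustration.

```python
import socket

DISCOVERY_PORT = 5005  # illustrative; not the port the real toolkit uses

def handle_discovery_packet(known_devices, data, addr):
    """Register a device from a broadcast 'HELLO <device-id>' packet.

    Returns the device id on success, or None for unrelated traffic.
    """
    msg = data.decode(errors="ignore")
    if not msg.startswith("HELLO "):
        return None
    device_id = msg.split(" ", 1)[1].strip()
    known_devices[device_id] = addr      # device-id -> (ip, port)
    return device_id

def run_discovery_server(known_devices, max_packets=10):
    """Listen on the LAN for discovery broadcasts and register each sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DISCOVERY_PORT))
    for _ in range(max_packets):
        data, addr = sock.recvfrom(1024)
        handle_discovery_packet(known_devices, data, addr)
    sock.close()
```

In this pattern each device periodically broadcasts a hello message, and the server builds a registry of device ids to network addresses, which an event framework can then use to route per-device input.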
This workshop aims to unite experts and practitioners in XR and AI to envision the future of AI-enabled virtual, augmented, and mixed reality experiences. Our expansive discussion includes a variety of key topics: Generative XR, Large Language Models (LLMs) for XR, Adaptive and Context-Aware XR, Explainable AI for XR, and harnessing AI to enhance and prototype XR experiences. We aim to identify the opportunities and challenges of how recent advances in AI could enable new XR experiences that were not possible before, with a keen focus on the seamless blending of our digital and physical worlds.
The Work Avatar Face-Off: Knowledge Worker Preferences for Realism in Meetings
Kristin Moore
22nd IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2023) (to appear)
While avatars have grown in popularity in social settings, their use in the workplace is still debated. We conducted a large-scale survey to evaluate knowledge worker sentiment toward avatars, particularly the effect of realism on their acceptability for work meetings. In our survey, 2,509 knowledge workers from multiple countries rated five avatar styles for use by managers, known colleagues, and unknown colleagues.
In all scenarios, participants favored higher realism, but fully realistic avatars were sometimes perceived as uncanny. Less realistic avatars were rated worse when interacting with an unknown colleague or manager, as compared to a known colleague. Avatar acceptability varied by country, with participants from the United States and South Korea rating avatars more favorably. We supplemented our quantitative findings with a thematic analysis of open-ended responses to provide a comprehensive understanding of factors influencing work avatar choices.
In conclusion, our results show that realism had a significant positive correlation with acceptability. Non-realistic avatars were seen as fun and playful, but only suitable for occasional use.
With the arrival of immersive technologies, virtual avatars have gained a prominent role in the future of social computing. However, there is a lack of free resources that provide researchers with diverse sets of virtual avatars, and the few that are available have not been validated. In this paper, we present VALID, a new, freely available 3D avatar library. VALID includes 210 fully rigged avatars that were modeled through an iterative design process and represent the seven ethnicities recommended by U.S. Census Bureau research. We validated the avatars through a user study with participants (n = 132) from 33 countries, and we provide statistically validated labels for each avatar's perceived ethnicity and gender. Through our validation, we also advance the understanding of avatar ethnicity and show that it can replicate the psychological phenomenon of own-race bias in face recognition.