Publications

Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.

1 - 15 of 182 publications
    Interactions with Extended Reality Head-Mounted Device (XR HMD) applications require precise, intuitive, and efficient input methods. Current approaches rely either on power-intensive sensors, such as cameras for hand tracking, or on specialized hardware in the form of handheld controllers. As an alternative, past work has explored devices already present with the user, in the form of smartphones and smartwatches, as practical input solutions. However, this approach risks interaction overload: how can one determine whether the user's interaction gestures on the watch face or phone screen are directed toward control of the mobile device itself or of the XR device? To this end, we propose a novel framework for cross-device input routing and device arbitration that employs the Inertial Measurement Units (IMUs) within these devices. We validate our approach in a user study with six participants. By using the relative orientation between the headset and the target input device, we can estimate the intended device of interaction with 93.7% accuracy. Our method offers a seamless, energy-efficient alternative for input management in XR, enhancing user experience through natural and ergonomic interactions.
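
The arbitration idea above, routing input to whichever device the user is oriented toward, can be sketched with plain quaternion math. A minimal illustration only: the function names, the forward/normal axis conventions, and the 35° threshold are all assumptions, not the authors' implementation.

```python
import math

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    # t = 2 * cross(q.xyz, v)
    tx = 2 * (y * v[2] - z * v[1])
    ty = 2 * (z * v[0] - x * v[2])
    tz = 2 * (x * v[1] - y * v[0])
    # v' = v + w*t + cross(q.xyz, t)
    return (
        v[0] + w * tx + (y * tz - z * ty),
        v[1] + w * ty + (z * tx - x * tz),
        v[2] + w * tz + (x * ty - y * tx),
    )

def angle_between(a, b):
    """Angle in degrees between two 3-vectors."""
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

def arbitrate(headset_q, device_qs, threshold_deg=35.0):
    """Route input to the device whose screen most directly faces the
    headset's gaze; otherwise keep input on the XR headset itself."""
    forward = rotate(headset_q, (0.0, 0.0, -1.0))  # headset gaze direction
    best, best_angle = None, 180.0
    for name, q in device_qs.items():
        normal = rotate(q, (0.0, 0.0, 1.0))        # screen normal, facing out
        # The screen faces the user when its normal opposes the gaze vector.
        ang = angle_between(forward, tuple(-c for c in normal))
        if ang < best_angle:
            best, best_angle = name, ang
    return best if best_angle <= threshold_deg else "xr_headset"
```

For example, a watch held flat toward the headset would win the arbitration, while one rotated away from the user's gaze would leave input routed to the headset.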
    XDTK: A Cross-Device Toolkit for Input & Interaction in XR
    Eric Gonzalez
    Karan Ahuja
    Khushman Patel
    IEEE VR, IEEE (2024)
    We present XDTK, an open-source Unity/Android toolkit for prototyping multi-device interactions in extended reality (XR). With the Unity package and Android app provided in XDTK, data from any number of devices (phones, tablets, or wearables) can be streamed to and surfaced within a Unity-based XR application. ARCore-supported devices also provide self-tracked pose data. Devices on the same local network are automatically discovered by the Unity server, and their inputs are routed using a custom event framework. We designed XDTK to be modular and easily extendable to enable fast, simple, and effective prototyping of multi-device experiences by both researchers and developers.
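
The streaming model described above, many devices pushing sensor readings to one Unity server, can be illustrated with simple newline-delimited JSON framing. This is a hypothetical wire format for illustration only; XDTK's actual protocol and message schema may differ.

```python
import json
import time

def encode_packet(device_id, sensor, values):
    """Frame one sensor reading as a newline-delimited JSON message
    (hypothetical wire format; not XDTK's actual schema)."""
    msg = {"id": device_id, "sensor": sensor, "t": time.time(), "v": list(values)}
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode_stream(buf):
    """Split a received byte buffer into complete messages plus any
    leftover partial message to be prepended to the next read."""
    *lines, rest = buf.split(b"\n")
    return [json.loads(line) for line in lines if line], rest
```

A server loop would append each socket read to a buffer, call `decode_stream`, dispatch the complete messages, and keep the remainder for the next read.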
    APG: Audioplethysmography for Cardiac Monitoring in Hearables
    David Pearl
    Rich Howard
    Longfei Shangguan
    Trausti Thormundsson
    MobiCom 2023: The 29th Annual International Conference On Mobile Computing And Networking (MobiCom), Association for Computing Machinery (ACM) (to appear)
    This paper presents Audioplethysmography (APG), a novel cardiac monitoring modality for active noise cancellation (ANC) headphones. APG sends a low-intensity ultrasound probing signal using an ANC headphone's speakers and receives the echoes via the on-board feedback microphones. We observed that, as the volume of the ear canal changes slightly with blood vessel deformations, heartbeats modulate these ultrasound echoes. We built mathematical models to analyze the underlying physics and propose a multi-tone APG signal processing pipeline to derive heart rate (HR) and heart rate variability (HRV) in both constrained and unconstrained settings. APG enables robust monitoring of cardiac activity using mass-market ANC headphones in the presence of music playback and body motion such as running. We conducted an eight-month field study with 153 participants to evaluate APG in various conditions. Our studies conform to the Institutional Review Board (IRB) policies of our company. The presented technology, experimental design, and results have been reviewed and further improved by feedback from our internal Health, Product, User Experience (UX), and Legal teams. Our results demonstrate that APG achieves consistently high HR (3.21% median error across 153 participants in all scenarios) and HRV (2.70% median error in interbeat interval, IBI) measurement accuracy. Our UX study further shows that APG is resilient to variation in skin tone, sub-optimal seal conditions, and ear canal size.
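
At its core, deriving a heart rate from the modulated echoes is a spectral peak search in the cardiac band. The toy sketch below is only illustrative: the band limits, the brute-force DFT scan, and the function name are assumptions, and the paper's multi-tone pipeline is far more involved.

```python
import math

def heart_rate_bpm(signal, fs, lo_hz=0.7, hi_hz=3.5):
    """Estimate heart rate by locating the dominant spectral peak in the
    cardiac band (here assumed 0.7-3.5 Hz) of a demodulated echo signal.
    Brute-force single-bin DFT scan; illustrative only."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]          # remove DC component
    best_f, best_p = lo_hz, 0.0
    f = lo_hz
    while f <= hi_hz:
        re = sum(xi * math.cos(2 * math.pi * f * i / fs) for i, xi in enumerate(x))
        im = sum(xi * math.sin(2 * math.pi * f * i / fs) for i, xi in enumerate(x))
        power = re * re + im * im
        if power > best_p:
            best_f, best_p = f, power
        f += 0.01                            # 0.01 Hz scan resolution
    return 60.0 * best_f                     # convert Hz to beats per minute
```

A 1.2 Hz periodic component in the input, for instance, would be reported as roughly 72 bpm.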
    Design Earable Sensing Systems: Perspectives and Lessons Learned from Industry
    Trausti Thormundsson
    Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2023 ACM International Symposium on Wearable Computers, ACM (to appear)
    Earable computing is an emerging research area, as the industry has witnessed the rise of True Wireless Stereo (TWS) Active Noise Canceling (ANC) earbuds over the past ten years. There is an increasing amount of newly initiated earable research spanning mobile health, user interfaces, speech processing, and context awareness. Head-worn devices are anticipated to be the next-generation mobile computing and human-computer interaction (HCI) platform. In this paper, we share our design experiences and lessons learned in building hearable sensing systems from an industry perspective. We also offer our perspective on future directions of earable research.
    UE Security Reloaded: Developing a 5G Standalone User-Side Security Testing Framework
    Aanjhan Ranganathan
    Christina Pöpper
    Evangelos Bitsikas
    Syed Khandker
    16th ACM Conference on Security and Privacy in Wireless and Mobile Networks (2023)
    Security flaws and vulnerabilities in cellular networks lead directly to severe security threats, given the data-plane services involved, from calls to messaging and Internet access. While the 5G Standalone (SA) system is currently being deployed worldwide, practical security testing of user equipment has only been conducted for 4G/LTE and earlier network generations. In this paper, we develop and present the first security testing framework for 5G SA user equipment. To that end, we modify the functionality of open-source suites (Open5GS and srsRAN) and develop a broad set of test cases for the 5G NAS and RRC layers. We apply our testing framework in a proof-of-concept manner to 5G SA mobile phones, report the identified vulnerabilities, and provide detailed insights from our experiments.
    Keynote at the srsRAN Project Workshop, October 2023: https://srs.io/srsran-project-workshop-october-23-24/ The talk summarizes the impact that open-source tools and software-defined radio have had on cellular security research in academia over the last 15 years. It covers 2G security research in roughly 2008-2012 and how the first open-source tools for LTE (openLTE and srsLTE) were a game changer for the field, enabling a tremendous surge of excellent cellular security research.
    RetroSphere: Self-Contained Passive 3D Controller Tracking for Augmented Reality
    Ananta Narayanan Balaji
    Clayton Merrill Kimber
    David Li
    Shengzhi Wu
    Ruofei Du
    David Kim
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(4) (2022), 157:1-157:36
    Advanced AR/VR headsets often have a dedicated depth sensor or multiple cameras, high processing power, and a high-capacity battery to track hands or controllers. However, these approaches are not compatible with the small form factor and limited thermal capacity of lightweight AR devices. In this paper, we present RetroSphere, a self-contained 6 degree of freedom (6DoF) controller tracker that can be integrated with almost any device. RetroSphere tracks a passive controller with just 3 retroreflective spheres using a stereo pair of mass-produced infrared blob trackers, each with its own infrared LED emitters. As the controller is completely passive, no electronics or recharging are required. Each object-tracking camera provides a tiny Arduino-compatible ESP32 microcontroller with the 2D positions of the spheres. A lightweight stereo depth estimation algorithm that runs on the ESP32 performs 6DoF tracking of the passive controller. RetroSphere also provides an auto-calibration procedure for the stereo IR tracker setup. Our work builds upon Johnny Lee's Wii remote hacks and aims to enable a community of researchers, designers, and makers to use 3D input in their projects with affordable off-the-shelf components. RetroSphere achieves a tracking accuracy of about 96.5%, with errors as low as ~3.5 cm over a 100 cm tracking range, validated against ground-truth 3D data obtained with a LIDAR camera, while consuming around 400 mW. We provide implementation details, evaluate the accuracy of our system, and demonstrate example applications, such as mobile AR drawing and 3D measurement, with our RetroSphere-enabled AR glasses prototype.
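
The stereo depth step described above reduces, for a rectified pinhole stereo pair, to classic disparity triangulation. A minimal sketch with illustrative parameter names, not the authors' ESP32 code:

```python
def triangulate(u_left, u_right, v, f_px, baseline_m, cx, cy):
    """Recover a 3D point (in meters) from matched 2D blob centers in a
    rectified stereo pair, assuming a pinhole camera model.
    u_left/u_right: horizontal pixel coordinates in each camera;
    v: shared vertical coordinate; f_px: focal length in pixels;
    baseline_m: camera separation; (cx, cy): principal point."""
    disparity = u_left - u_right          # pixels; positive for points in front
    z = f_px * baseline_m / disparity     # depth along the optical axis
    x = (u_left - cx) * z / f_px          # back-project to metric X
    y = (v - cy) * z / f_px               # back-project to metric Y
    return x, y, z
```

Running this for each of the three retroreflective spheres yields three 3D points, from which a full 6DoF controller pose can be fit.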
    Consumer electronics are increasingly using everyday materials to blend into home environments, often placing LEDs or symbol displays under textile meshes. Our surveys (n=1499 and n=1501) show interest in interactive graphical displays for hidden interfaces; however, covering such displays significantly limits brightness, material possibilities, and legibility. To overcome these limitations, we leverage parallel rendering to enable ultrabright graphics that can pass through everyday materials. We unlock expressive hidden interfaces using rectilinear graphics on low-cost, mass-produced passive-matrix OLED displays. A technical evaluation across materials, shapes, and display techniques suggests a 3.6-40x brightness increase compared to more complex active-matrix OLEDs. We present interactive prototypes that blend into wood, textile, plastic, and mirrored surfaces. Survey feedback (n=1572) on our prototypes suggests that smart mirrors are particularly desirable. A lab evaluation (n=11) reinforced these findings and allowed us to also characterize performance through hands-on interaction with different content, materials, and varying lighting conditions.
    We introduce a framework for adapting a virtual keyboard to individual user behavior by modifying a Gaussian spatial model to use personalized key-center offset means and, optionally, learned covariances. Through numerous real-world studies, we determine the importance of training data quantity and weights, as well as the number of clusters into which to group keys to avoid overfitting. While past research has shown the potential of this technique using artificially simple virtual keyboards and games or fixed typing prompts, we demonstrate effectiveness using the highly tuned Gboard app with a representative set of users and their real typing behaviors. Across a variety of top languages, we achieve small but significant improvements in both typing speed and decoder accuracy.
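
The personalized spatial model amounts to scoring each touch against per-key Gaussians whose means are shifted by learned, user-specific offsets. A minimal isotropic-variance sketch, with everything below (names, numbers, the flat dict layout) illustrative; the paper's model optionally uses full learned covariances, and a production decoder combines this with a language model:

```python
import math

def key_log_likelihood(touch, center, offset, var):
    """Log-likelihood of a touch point under an isotropic 2D Gaussian whose
    mean is the key center shifted by a per-user learned offset."""
    mx = center[0] + offset[0]
    my = center[1] + offset[1]
    d2 = (touch[0] - mx) ** 2 + (touch[1] - my) ** 2
    return -d2 / (2 * var) - math.log(2 * math.pi * var)

def most_likely_key(touch, keys):
    """keys maps a character to (center, learned_offset, variance);
    returns the character with the highest spatial likelihood."""
    return max(keys, key=lambda k: key_log_likelihood(touch, *keys[k]))
```

With a learned rightward offset on a key, a touch landing slightly right of that key's geometric center is still decoded as that key rather than its neighbor.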
    Over the years, the mobile device landscape has grown to hundreds of OEMs and tens of thousands of device models. This landscape has made it difficult to develop quality mobile applications that are user-friendly, stable, and performant. To accelerate mobile development and to help developers build better-performing, more stable apps, Google built a large Mobile Device Farm that allows developers to test their mobile applications. In this document we share lessons learned while building the Mobile Device Farm, including pitfalls and successes, to help the mobile development community leverage our experience and build better mobile apps. While we describe both Android and iOS, we primarily focus on the Android Open Source Project (AOSP) because it is highly diverse and dynamic. We have scaled from tens of devices to tens of thousands, and from hundreds of tests a day to millions.
    PriGen: Towards Automated Translation of Android Applications' Code to Privacy Captions
    Vijayanta Jain
    Sanonda Datta Gupta
    Sepideh Ghanavati
    Research Challenges in Information Science, Springer International Publishing (2021), pp. 142-151
    Mobile applications are required to give privacy notices to users when they collect or share personal information. Creating consistent and concise privacy notices can be a challenging task for developers. Previous work has attempted to help developers create privacy notices through questionnaires or predefined templates. In this paper, we propose a novel approach and framework, called PriGen, that extends this prior work. PriGen uses static analysis to identify code segments in Android applications that process personal information (i.e., permission-requiring code segments) and then leverages a Neural Machine Translation model to translate them into privacy captions. We present an initial analysis of our translation task for ~300,000 code segments.
    EdgeSharing: Edge Assisted Real-time Localization and Object Sharing in Urban Streets
    Marco Gruteser
    The 40th IEEE International Conference on Computer Communications (IEEE INFOCOM 2021)
    Collaborative object localization and sharing at smart intersections promises to improve the situational awareness of traffic participants in key areas where hazards exist due to visual obstructions. By sharing a moving object's location between different camera-equipped devices, it effectively extends the vision of traffic participants beyond their field of view. However, accurately sharing objects between moving clients is extremely challenging because of the high accuracy required for localizing both the client position and the positions of its detected objects. Existing approaches based on direct sharing between devices are limited by the computational resources of (potentially legacy) vehicles and by poor flexibility. To address these challenges, we introduce EdgeSharing, a novel localization and object-sharing system that leverages the resources of edge cloud platforms. EdgeSharing holds a real-time 3D feature map of its coverage region on the edge cloud and uses it to provide accurate localization and object-sharing services to client devices passing through the region. We further propose several optimization techniques to increase localization accuracy, reduce bandwidth consumption, and decrease the offloading latency of the system. The results show that the system achieves a mean vehicle localization error of 0.2813-1.2717 meters, an object-sharing accuracy of 82.3%-91.44%, and a 54.68% increase in object awareness in urban streets and intersections. In addition, the proposed optimization techniques reduce bandwidth consumption by 70.12% and end-to-end latency by 40.09%.
    ASAP: Fast Mobile Application Switch via Adaptive Prepaging
    Sam Son
    Seung Yul Lee
    Jonghyun Bae
    Yunho Jin
    Jinkyu Jeong
    Tae Jun Ham
    Jae W. Lee
    USENIX Association, pp. 365-380
    With ever-increasing demands for memory capacity from mobile applications, along with a steady increase in the number of applications running concurrently, memory capacity is becoming a scarce resource on mobile devices. When memory pressure is high, current mobile OSes often kill application processes that have not recently been used in order to reclaim memory space. This leads to a long delay when the user relaunches the killed application, which degrades the user experience. Even if this mechanism is disabled in favor of a compression-based in-memory swap mechanism, relaunching the application still incurs a substantial latency penalty, as it requires decompressing compressed anonymous pages and issuing a stream of I/O accesses to retrieve file-backed pages into memory. This paper identifies conventional demand paging as the primary source of this inefficiency and proposes ASAP, a mechanism for fast application switch via adaptive prepaging on mobile devices. Specifically, ASAP performs prepaging effectively by combining i) high-precision switch-footprint estimators for both file-backed and anonymous pages, and ii) an efficient implementation of the prepaging mechanism that minimizes wasted CPU cycles and disk bandwidth during an application switch. Our evaluation of ASAP using eight real-world applications on a Google Pixel 4 demonstrates that ASAP reduces switch time by 22.2% on average (33.3% at maximum) over vanilla Android 10.
    Holography-Based Target Localization and Health Monitoring Technique using UHF Tag Array
    Aline Eid
    Jiang Zhu
    Jimmy G. D. Hester
    Manos M. Tentzeris
    IEEE Internet of Things Journal (2021)
    Radio technologies are appealing for unobtrusive and remote monitoring of human activities. Radar-based human activity recognition has proven successful, for example in Project Soli, developed by Google. However, it is expensive to scale up for multi-user environments. In this paper, we propose a solution, the HoloTag system, which circumvents the multi-channel radar scaling problem through the use of a quasi-virtual, ultra-low-cost UHF RFID array over which a holographic projection of its environment is measured and used both to localize and to monitor the health of several targets. The method is first described in detail, before the image reconstruction process, employing the known beamforming algorithms Delay & Sum and Capon, is shown and its scaling properties simulated. Then, the idiosyncrasies of implementing HoloTag with low-cost off-the-shelf hardware are explained, before its ability to simultaneously measure the breathing rates and positions of multiple real and synthetic targets, with accuracies better than 0.8 bpm and 20 cm, is demonstrated.
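
Of the two beamformers named in the abstract, Delay & Sum is the simpler: phase-align the signal received at each array element toward a candidate direction and sum. A narrowband linear-array sketch; the element layout, names, and parameters are illustrative, not HoloTag's implementation:

```python
import cmath
import math

def delay_and_sum(element_signals, positions, angle_deg, wavelength):
    """Narrowband Delay & Sum beamformer for a linear array.
    element_signals: complex baseband sample at each element;
    positions: element coordinates along the array axis (meters);
    angle_deg: steering angle from broadside. Phase-aligns each element
    toward the steering direction and averages."""
    k = 2 * math.pi / wavelength          # wavenumber
    theta = math.radians(angle_deg)
    out = 0j
    for s, x in zip(element_signals, positions):
        # Undo the geometric phase delay a plane wave from angle_deg
        # would impose at position x, then accumulate.
        out += s * cmath.exp(-1j * k * x * math.sin(theta))
    return out / len(element_signals)
```

Scanning `angle_deg` over a grid and plotting the output magnitude yields the spatial "image" from which target positions can be read off; a peak appears where the steering angle matches the true arrival angle.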