Mobile Systems

Mobile devices are the prevalent computing platform in many parts of the world, and mobile Internet usage is expected to outpace desktop usage worldwide over the coming years. Google is committed to realizing the potential of the mobile web to transform how people interact with computing technology. Google engineers and researchers work on a wide range of problems in mobile computing and networking, including new operating systems and programming platforms (such as Android and ChromeOS); new interaction paradigms between people and devices; advanced wireless communications; and optimizing the web for mobile settings. In addition, many of Google's core product teams, such as Search, Gmail, and Maps, have groups focused on optimizing the mobile experience, making it faster and more seamless. We take a cross-layer approach to research in mobile systems and networking, cutting across applications, networks, operating systems, and hardware. The tremendous scale of Google's products and the Android and Chrome platforms makes this a very exciting place to work on these problems.

Some representative projects include mobile web performance optimization; new features in Android to greatly reduce network data usage and energy consumption; new platforms for developing high-performance web applications on mobile devices; wireless communication protocols that will yield vastly greater performance over today's standards; and multi-device interaction based on Android, which is now available on a wide variety of consumer electronics.

Recent Publications

APG: Audioplethysmography for Cardiac Monitoring in Hearables
David Pearl
Rich Howard
Longfei Shangguan
Trausti Thormundsson
MobiCom 2023: The 29th Annual International Conference On Mobile Computing And Networking (MobiCom), Association for Computing Machinery (ACM) (to appear)
Abstract: This paper presents Audioplethysmography (APG), a novel cardiac monitoring modality for active noise cancellation (ANC) headphones. APG sends a low-intensity ultrasound probing signal through an ANC headphone's speakers and receives the echoes via the on-board feedback microphones. We observed that, as the volume of the ear canal changes slightly with blood vessel deformation, heartbeats modulate these ultrasound echoes. We built mathematical models to analyze the underlying physics and propose a multi-tone APG signal processing pipeline to derive heart rate and heart rate variability in both constrained and unconstrained settings. APG enables robust monitoring of cardiac activity using mass-market ANC headphones in the presence of music playback and body motion such as running. We conducted an eight-month field study with 153 participants to evaluate APG under various conditions. Our studies conform to our company's Institutional Review Board (IRB) policies. The presented technology, experimental design, and results were reviewed and further improved by feedback from our internal Health, Product, User Experience (UX), and Legal teams. Our results demonstrate that APG achieves consistently high accuracy for heart rate (3.21% median error across 153 participants in all scenarios) and heart rate variability (2.70% median error in interbeat interval, IBI). Our UX study further shows that APG is resilient to variation in skin tone, sub-optimal seal conditions, and ear canal size.
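The paper's multi-tone pipeline is considerably more sophisticated, but the core idea — a periodic cardiac signal modulating a measured echo — can be illustrated with a toy sketch. The sketch below (not the authors' method; all parameters are hypothetical) recovers heart rate from a heartbeat-modulated amplitude signal by finding the dominant spectral peak in a plausible cardiac frequency band:

```python
import numpy as np

def estimate_heart_rate(echo_amplitude, fs, lo_bpm=40, hi_bpm=180):
    """Estimate heart rate (BPM) from a heartbeat-modulated echo
    amplitude signal by locating the dominant spectral peak within a
    plausible cardiac frequency band."""
    x = echo_amplitude - np.mean(echo_amplitude)   # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)    # bin frequencies in Hz
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic example: a 72 BPM modulation sampled at 100 Hz for 30 s.
fs = 100.0
t = np.arange(0, 30, 1.0 / fs)
signal = 1.0 + 0.05 * np.sin(2 * np.pi * (72 / 60.0) * t)
print(round(estimate_heart_rate(signal, fs), 1))  # → 72.0
```

A real pipeline must additionally separate cardiac modulation from motion, music playback, and breathing, which is where the multi-tone design and the paper's physical models come in.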
Design Earable Sensing Systems: Perspectives and Lessons Learned from Industry
Trausti Thormundsson
Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2023 ACM International Symposium on Wearable Computers, ACM (to appear)
Abstract: Earable computing is an emerging research community, as the industry has witnessed the rapid rise of True Wireless Stereo (TWS) Active Noise Canceling (ANC) earbuds over the past ten years. There is an increasing amount of newly initiated earable research spanning mobile health, user interfaces, speech processing, and context awareness. Head-worn devices are anticipated to be the next-generation mobile computing and Human-Computer Interaction (HCI) platform. In this paper, we share our design experiences and lessons learned in building hearable sensing systems from an industry perspective, and offer our views on future directions for earable research.
RetroSphere: Self-Contained Passive 3D Controller Tracking for Augmented Reality
Ananta Narayanan Balaji
Clayton Merrill Kimber
David Li
Shengzhi Wu
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 6(4) (2022), 157:1-157:36
Abstract: Advanced AR/VR headsets often have a dedicated depth sensor or multiple cameras, high processing power, and a high-capacity battery to track hands or controllers. However, these approaches are not compatible with the small form factor and limited thermal capacity of lightweight AR devices. In this paper, we present RetroSphere, a self-contained six-degree-of-freedom (6DoF) controller tracker that can be integrated with almost any device. RetroSphere tracks a passive controller with just three retroreflective spheres using a stereo pair of mass-produced infrared blob trackers, each with its own infrared LED emitters. As the controller is completely passive, no electronics or recharging is required. Each object-tracking camera provides a tiny Arduino-compatible ESP32 microcontroller with the 2D positions of the spheres, and a lightweight stereo depth estimation algorithm running on the ESP32 performs 6DoF tracking of the passive controller. RetroSphere also provides an auto-calibration procedure for the stereo IR tracker setup. Our work builds upon Johnny Lee's Wii remote hacks and aims to enable a community of researchers, designers, and makers to use 3D input in their projects with affordable off-the-shelf components. RetroSphere achieves a tracking accuracy of about 96.5%, with errors as low as ~3.5 cm over a 100 cm tracking range, validated against ground-truth 3D data from a LIDAR camera, while consuming around 400 mW. We provide implementation details, evaluate the accuracy of our system, and demonstrate example applications, such as mobile AR drawing and 3D measurement, with our RetroSphere-enabled AR glasses prototype.
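The depth-from-disparity step at the heart of any stereo blob tracker is standard geometry, and a minimal sketch helps make it concrete. The sketch below is not RetroSphere's implementation; it shows textbook triangulation for a rectified stereo pair, with hypothetical focal length and baseline values:

```python
def triangulate(x_left, x_right, y_left, focal_px, baseline_cm):
    """Recover a 3D point (camera-centered, cm) from matched 2D blob
    centers in a rectified stereo pair. Depth follows from the
    pinhole-camera disparity relation Z = f * B / d."""
    disparity = x_left - x_right          # pixels; positive for points in front
    if disparity <= 0:
        raise ValueError("non-positive disparity: blobs not matched correctly")
    z = focal_px * baseline_cm / disparity   # depth along the optical axis
    x = x_left * z / focal_px                # lateral offset
    y = y_left * z / focal_px                # vertical offset
    return x, y, z

# Hypothetical numbers: 600 px focal length, 10 cm baseline, and a blob
# pair 60 px apart in image space, giving a point 100 cm away.
print(triangulate(30.0, -30.0, 12.0, 600.0, 10.0))  # → (5.0, 2.0, 100.0)
```

Applying this to all three retroreflective spheres yields three 3D points, from which the controller's full 6DoF pose (position plus orientation) can be recovered.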
Abstract: We introduce a framework for adapting a virtual keyboard to individual user behavior by modifying a Gaussian spatial model to use personalized key-center offset means and, optionally, learned covariances. Through numerous real-world studies, we determine the importance of training data quantity and weights, as well as the number of clusters into which to group keys to avoid overfitting. While past research has shown the potential of this technique using artificially simple virtual keyboards and games or fixed typing prompts, we demonstrate its effectiveness using the highly tuned Gboard app with a representative set of users and their real typing behaviors. Across a variety of top languages, we achieve small but significant improvements in both typing speed and decoder accuracy.
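To illustrate what a Gaussian spatial model with personalized offsets means in practice, here is a minimal sketch (not Gboard's decoder; the layout, offsets, and covariances are invented for illustration). Each key is modeled as a 2D Gaussian whose center is shifted by a learned per-user offset, and a touch point is scored against each key by log-likelihood:

```python
import math

def key_log_likelihood(touch, center, offset, cov):
    """Log-likelihood of a touch point under a key's 2D Gaussian.
    The nominal key center is shifted by a learned per-user offset;
    cov is a learned diagonal covariance (var_x, var_y) in px^2."""
    dx = touch[0] - (center[0] + offset[0])
    dy = touch[1] - (center[1] + offset[1])
    var_x, var_y = cov
    return (-0.5 * (dx * dx / var_x + dy * dy / var_y)
            - 0.5 * math.log(4 * math.pi ** 2 * var_x * var_y))

def most_likely_key(touch, keys):
    """keys: dict mapping a key label to (center, offset, cov)."""
    return max(keys, key=lambda k: key_log_likelihood(touch, *keys[k]))

# Hypothetical two-key layout: this user tends to hit 'f' a few pixels
# right of its nominal center, so the model shifts 'f' toward the touch.
keys = {
    "f": ((40.0, 50.0), (3.0, 0.0), (25.0, 30.0)),
    "g": ((60.0, 50.0), (0.0, 0.0), (25.0, 30.0)),
}
print(most_likely_key((47.0, 51.0), keys))  # → f
```

In the paper's framework these offsets (and optionally the covariances) are fit from a user's historical touch data, with keys grouped into clusters so that sparse data does not overfit individual keys.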
Abstract: Consumer electronics increasingly use everyday materials to blend into home environments, often placing LEDs or symbol displays under textile meshes. Our surveys (n=1499 and n=1501) show interest in interactive graphical displays for hidden interfaces; however, covering such displays significantly limits brightness, material possibilities, and legibility. To overcome these limitations, we leverage parallel rendering to enable ultrabright graphics that can pass through everyday materials. We unlock expressive hidden interfaces using rectilinear graphics on low-cost, mass-produced passive-matrix OLED displays. A technical evaluation across materials, shapes, and display techniques suggests a 3.6-40X brightness increase compared to more complex active-matrix OLEDs. We present interactive prototypes that blend into wood, textile, plastic, and mirrored surfaces. Survey feedback (n=1572) on our prototypes suggests that smart mirrors are particularly desirable. A lab evaluation (n=11) reinforced these findings and allowed us to characterize performance through hands-on interaction with different content, materials, and varying lighting conditions.
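The intuition behind parallel rendering on a passive-matrix display can be shown with a toy model (a deliberate simplification, not the paper's driving scheme). A conventional scan lights one row at a time, so each row's duty cycle is 1/N; if the content is rectilinear, rows with identical pixel patterns can share one scan slot, and the per-row duty cycle — and hence brightness — improves roughly in proportion to how few distinct row patterns the frame contains:

```python
def parallel_scan_gain(frame):
    """Toy brightness-gain model for parallel rendering on a
    passive-matrix display: rows with identical pixel patterns share a
    scan slot, so per-row duty cycle improves by
    total_rows / distinct_row_patterns versus line-by-line scanning."""
    distinct_patterns = {tuple(row) for row in frame}
    return len(frame) / len(distinct_patterns)

# A rectilinear 8-row glyph with only 3 distinct row patterns.
frame = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
]
print(parallel_scan_gain(frame))  # → 2.666...
```

This is why the approach favors rectilinear graphics: arbitrary imagery has many distinct row patterns and gains little, while boxy glyphs and segment-style content collapse into few patterns and can be driven far brighter.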
Abstract: Over the years, the mobile device landscape has grown to hundreds of OEMs and tens of thousands of device models. This landscape has made it difficult to develop quality mobile applications that are user-friendly, stable, and performant. To accelerate mobile development and help developers build better-performing, more stable apps, Google built a large Mobile Device Farm that allows developers to test their mobile applications. In this document, we share lessons learned while building the Mobile Device Farm, including pitfalls and successes, to help the mobile development community leverage our experience and build better mobile apps. While we describe both Android and iOS, we primarily focus on the Android Open Source Project (AOSP) because it is highly diverse and dynamic. We have scaled from tens of devices to tens of thousands, and from hundreds of tests a day to millions.
