Xiaojun Bi
Xiaojun Bi is a Human-Computer Interaction research scientist at Google in Mountain View, California. His research focuses on building interactive systems, implementing interaction techniques, and studying fundamental issues of user interface design, especially on mobile devices. He has pioneered a number of techniques for mobile text entry systems, pen- and touch-based interactive systems, and large-display interaction. His recent innovations, such as the keyboard correction and completion algorithm (CHI 2013), bimanual gesture typing (UIST 2012), and personalized language models for text entry (CHI 2015), have been integrated into the Android keyboard, which is used by more than 100 million users. His keyboard evaluation system, Octopus (CHI 2013), has been widely used in product development. Mobile Interaction Research at Google, a Google Research blog article, highlights some of his recent research integrated into Google products.
Xiaojun Bi has authored 20 publications in premier HCI venues such as CHI, UIST, and the Human-Computer Interaction journal, and holds 20 US patents (11 issued and 9 pending). His research papers have received awards at CHI, the flagship conference in HCI. His paper studying the speed-accuracy tradeoff of finger touch input (FFitts Law, CHI 2013) won the Google 2013 Influential Paper Award. Xiaojun Bi is active in the HCI academic community. He regularly serves as an Associate Chair on the CHI and UIST program committees, and organizes and co-organizes CHI workshops to promote research in computational interaction design and text input technology. He is currently co-editing the book Computational Interaction Design. He was a program co-chair for Chinese CHI 2015 and is now a general co-chair for Chinese CHI 2016. Xiaojun Bi earned his Ph.D. from the Department of Computer Science at the University of Toronto, and received his Master's and Bachelor's degrees from Tsinghua University. As a high school student, he won first place in his home province in the National Mathematical Olympiad (China) and was recruited by Tsinghua University with the national college entrance examination waived. Here is his personal webpage.
Authored Publications
M3 Gesture Menu: Design and Experimental Analyses of Marking Menus for Touchscreen Mobile Interaction
Kun Li
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 249:1-249:14
Despite their learning advantages in theory, marking menus have faced adoption challenges in practice, even on today's touchscreen-based mobile devices. We address these challenges by designing, implementing, and evaluating multiple versions of M3 Gesture Menu (M3), a reimagination of marking menus targeted at mobile interfaces. M3 is defined on a grid rather than in a radial space, relies on gestural shapes rather than directional marks, and has constant and stationary space use. Our first controlled experiment on expert performance showed M3 was faster and less error-prone by a factor of two than traditional marking menus. A second experiment on learning demonstrated for the first time that users could successfully transition to recall-based execution of a dozen commands after three ten-minute practice sessions with both M3 and Multi-Stroke Marking Menu. Together, M3, with its demonstrated resolution, learning, and space use benefits, contributes to the design and understanding of menu selection in the mobile-first era of end-user computing.
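A toy sketch of the grid idea: if each command is bound to a path through the cells of a 3x3 grid, recognition can reduce to mapping the finger's trace to the ordered sequence of cells it visits. The cell size, command bindings, and function names below are invented for illustration; this is not the paper's algorithm.

def cell_of(point, cell_size=100):
    """Map an (x, y) touch point to a (row, col) cell of a 3x3 grid."""
    x, y = point
    return (min(int(y // cell_size), 2), min(int(x // cell_size), 2))

def gesture_to_cells(trace, cell_size=100):
    """Collapse a finger trace into the ordered sequence of distinct cells."""
    cells = []
    for p in trace:
        c = cell_of(p, cell_size)
        if not cells or cells[-1] != c:
            cells.append(c)
    return tuple(cells)

# Hypothetical command bindings, invented for illustration.
COMMANDS = {
    ((0, 0), (0, 1), (0, 2)): "copy",   # swipe across the top row
    ((0, 0), (1, 0), (2, 0)): "paste",  # swipe down the left column
}

def recognize(trace):
    return COMMANDS.get(gesture_to_cells(trace))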
Effects of Language Modeling and its Personalization on Touchscreen Typing Performance
Andrew Fowler
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2015), ACM, New York, NY, USA, pp. 649-658
Modern smartphones correct typing errors and learn user-specific words (such as proper names). Both techniques are useful, yet little has been published about their technical specifics and concrete benefits. One reason is that typing accuracy is difficult to measure empirically on a large scale. We describe a closed-loop, smart touch keyboard (STK) evaluation system that we have implemented to solve this problem. It includes a principled typing simulator for generating human-like noisy touch input, a simple-yet-effective decoder for reconstructing typed words from such spatial data, a large web-scale background language model (LM), and a method for incorporating LM personalization. Using the Enron email corpus as a personalization test set, we show for the first time at this scale that a combined spatial/language model reduces word error rate from a pre-model baseline of 38.4% down to 5.7%, and that LM personalization can improve this further to 4.6%.
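A minimal sketch of the noisy-channel idea behind such a combined spatial/language decoder, assuming each key press is modeled as an independent 2D Gaussian centered on the intended key. The layout, noise parameter, and interpolation weight below are illustrative assumptions, not the paper's values.

import math

# Hypothetical key centers on a unit-spaced layout: char -> (x, y).
KEY_CENTERS = {'q': (0.5, 0.5), 'w': (1.5, 0.5), 'a': (0.5, 1.5), 's': (1.5, 1.5)}
SIGMA = 0.4  # assumed touch-noise standard deviation, in key widths

def log_spatial_likelihood(word, touches):
    """log P(touches | word) under independent 2D Gaussians per key."""
    total = 0.0
    for ch, (tx, ty) in zip(word, touches):
        kx, ky = KEY_CENTERS[ch]
        total += -((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * SIGMA ** 2)
    return total

def decode(touches, lexicon, log_prior):
    """Pick the word maximizing log P(word) + log P(touches | word)."""
    candidates = [w for w in lexicon if len(w) == len(touches)]
    return max(candidates,
               key=lambda w: log_prior[w] + log_spatial_likelihood(w, touches))

def personalized_log_prior(word, bg_logp, user_logp, lam=0.2):
    """LM personalization sketched as linear interpolation of a background
    LM with a user-specific model (lam is an assumed mixing weight)."""
    return math.log((1 - lam) * math.exp(bg_logp[word]) + lam * math.exp(user_logp[word]))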
Optimizing Touchscreen Keyboards for Gesture Typing
Brian Smith
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2015), ACM, New York, NY, USA, pp. 3365-3374
Despite its growing popularity, gesture typing suffers from a major problem not present in touch typing: gesture ambiguity on the Qwerty keyboard. By applying rigorous mathematical optimization methods, this paper systematically investigates the optimization space related to the accuracy, speed, and Qwerty similarity of a gesture typing keyboard. Our investigation shows that optimizing the layout for gesture clarity (a metric measuring how unique word gestures are on a keyboard) drastically improves the accuracy of gesture typing. Moreover, if we also accommodate gesture speed, or both gesture speed and Qwerty similarity, we can still reduce error rates by 52% and 37% over Qwerty, respectively. In addition to investigating the optimization space, this work contributes a set of optimized layouts such as GK-D and GK-T that can immediately benefit mobile device users.
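A rough sketch of how a gesture-clarity style metric could be computed, assuming each word's ideal trace is the polyline through its key centers resampled to a fixed number of points. The sampling density and distance measure are assumptions for illustration, not the paper's exact metric.

import numpy as np

def ideal_trace(word, key_centers, n_points=32):
    """Polyline through the word's key centers, resampled to n_points."""
    pts = np.array([key_centers[c] for c in word], dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    length = cum[-1] if cum[-1] > 0 else 1.0
    targets = np.linspace(0.0, length, n_points)
    xs = np.interp(targets, cum, pts[:, 0])
    ys = np.interp(targets, cum, pts[:, 1])
    return np.stack([xs, ys], axis=1)

def gesture_clarity(lexicon, freqs, key_centers):
    """Frequency-weighted distance from each word's trace to its nearest
    neighbor: larger values mean word gestures are easier to tell apart."""
    traces = {w: ideal_trace(w, key_centers) for w in lexicon}
    total = 0.0
    for w in lexicon:
        nearest = min(
            float(np.mean(np.linalg.norm(traces[w] - traces[v], axis=1)))
            for v in lexicon if v != w
        )
        total += freqs[w] * nearest
    return total / sum(freqs.values())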
Both Complete and Correct? Multi-Objective Optimization of Touchscreen Keyboard
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2014), ACM, New York, NY, USA, pp. 2297-2306
Bayesian Touch: A Statistical Criterion of Target Selection with Finger Touch
Proceedings of UIST 2013 – The ACM Symposium on User Interface Software and Technology, ACM, New York, NY, USA, pp. 51-60
To improve the accuracy of target selection for finger touch, we conceptualize finger touch input as an uncertain process, and derive a statistical target selection criterion, the Bayesian Touch Criterion, by combining the basic Bayes’ rule of probability with the generalized dual Gaussian distribution hypothesis of finger touch. The Bayesian Touch Criterion states that the selected target is the candidate with the shortest Bayesian Touch Distance to the touch point, which is computed from the touch point to target center distance and the size of the target. We give the derivation of the Bayesian Touch Criterion and its empirical evaluation with two experiments. The results show that for 2D circular target selection, the Bayesian Touch Criterion is significantly more accurate than the commonly used Visual Boundary Criterion (i.e., a target is selected if and only if the touch point falls within its boundary) and its two variants.
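A minimal sketch of the shape of such a selection rule, not the paper's exact derivation: assume each target's touch points follow a 2D Gaussian centered on the target, with variance that grows with target size per the dual-Gaussian idea, and select the target with the smallest negative log-likelihood. The variance model constants here are assumptions, not the paper's fitted values.

import math

def bayesian_touch_score(touch, target_center, target_width,
                         alpha=0.01, sigma_a=1.0):
    """Negative log-likelihood of the touch under an isotropic 2D Gaussian
    centered on the target. Variance model sigma^2 = alpha*W^2 + sigma_a^2
    (alpha and sigma_a are assumed constants)."""
    sigma2 = alpha * target_width ** 2 + sigma_a ** 2
    dx = touch[0] - target_center[0]
    dy = touch[1] - target_center[1]
    # The log-normalizer depends on target size, so larger targets are not
    # automatically favored over smaller, closer ones.
    return (dx * dx + dy * dy) / (2 * sigma2) + math.log(2 * math.pi * sigma2)

def select_target(touch, targets):
    """targets: list of (center, width) pairs; return the candidate with the
    smallest score, i.e. the shortest distance in this Bayesian sense."""
    return min(targets, key=lambda t: bayesian_touch_score(touch, t[0], t[1]))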
FFitts Law: Modeling Finger Touch with Fitts’ Law
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2013), ACM, New York, NY, USA, pp. 1363-1372
Fitts’ law has proven to be a strong predictor of pointing performance under a wide range of conditions. However, it has been insufficient in modeling small-target acquisition with finger-touch based input on screens. We propose a dual-distribution hypothesis to interpret the distribution of the endpoints in finger touch input. We hypothesize that the movement endpoint distribution is the sum of two independent normal distributions: one reflects the relative precision governed by the speed-accuracy tradeoff rule in the human motor system, and the other captures the absolute precision of finger touch, independent of the speed-accuracy tradeoff effect. Based on this hypothesis, we derived the FFitts model, an expansion of Fitts’ law for finger touch input. We present three experiments in 1D target acquisition, 2D target acquisition, and touchscreen keyboard typing tasks, respectively. The results showed that FFitts law is more accurate than Fitts’ law in modeling finger input on touchscreens. With R² values of 0.91 or greater, FFitts’ index of difficulty accounts for significantly more variance than the conventional Fitts’ index of difficulty based on either a nominal or an effective target width in all three experiments.
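A compact sketch of the model's form under the dual-distribution hypothesis, assuming the standard effective-width formulation of Fitts' law; treat this as an illustration of the idea rather than a verbatim reproduction of the paper's equations.

import math

def fitts_id_effective(A, sigma):
    """Conventional effective-width Fitts' ID, with We = sqrt(2*pi*e)*sigma,
    where sigma is the observed endpoint standard deviation."""
    return math.log2(A / (math.sqrt(2 * math.pi * math.e) * sigma) + 1)

def ffitts_id(A, sigma, sigma_a):
    """FFitts-style ID: subtract the absolute touch-precision variance
    sigma_a^2 from the observed endpoint variance sigma^2 before computing
    the effective width (requires sigma > sigma_a)."""
    sigma_r = math.sqrt(sigma ** 2 - sigma_a ** 2)
    return math.log2(A / (math.sqrt(2 * math.pi * math.e) * sigma_r) + 1)

# Predicted movement time then follows MT = a + b * ID for empirically
# fitted constants a and b.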
Octopus: Evaluating Touchscreen Keyboard Correction and Recognition Algorithms via “Remulation”
Shiri Azenkot
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2013), ACM, New York, NY, USA, pp. 543-552
The time and labor demanded by a typical laboratory-based keyboard evaluation are limiting resources for algorithmic adjustment and optimization. We propose Remulation, a complementary method for evaluating touchscreen keyboard correction and recognition algorithms. It replicates prior user study data through real-time, on-device simulation. To demonstrate Remulation, we have developed Octopus, an evaluation tool that enables keyboard developers to efficiently measure and inspect the impact of algorithmic changes without conducting resource-intensive user studies. It can also be used to evaluate third-party keyboards in a “black box” fashion, without access to their algorithms or source code. Octopus can evaluate both touch keyboards and word-gesture keyboards. Two empirical examples show that Remulation can efficiently and effectively measure many aspects of touchscreen keyboards at both macro and micro levels. Additionally, we contribute two new metrics to measure keyboard accuracy at the word level: the Ratio of Error Reduction (RER) and the Word Score.
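A rough sketch of the replay idea: feed previously logged touch points for each intended word back through a keyboard's decoder and score the output at the word level. The function names are invented for illustration, and the RER definition shown is one plausible reading, not the paper's exact formula.

def remulate(decoder, touch_log):
    """Replay logged study data: touch_log is a list of (intended_word,
    touch_points) pairs; decoder maps touch points to an output word."""
    return [(intended, decoder(touches)) for intended, touches in touch_log]

def word_error_rate(results):
    """Fraction of words the decoder got wrong."""
    errors = sum(1 for intended, decoded in results if decoded != intended)
    return errors / len(results)

def ratio_of_error_reduction(decoded_results, literal_results):
    """One plausible reading of RER (an assumption, not the paper's exact
    definition): the fraction of literal, uncorrected word errors that the
    correction algorithm eliminates."""
    literal_errors = sum(1 for i, d in literal_results if d != i)
    remaining = sum(1 for i, d in decoded_results if d != i)
    return (literal_errors - remaining) / literal_errors if literal_errors else 0.0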
Bimanual gesture keyboard
Proceedings of UIST 2012 – The ACM Symposium on User Interface Software and Technology, ACM, New York, NY, USA, pp. 137-146
Preview abstract
Gesture keyboards represent an increasingly popular way to input text on mobile devices today. However, current gesture keyboards are exclusively unimanual. To take advantage of the capability of modern multi-touch screens, we created a novel bimanual gesture text entry system, extending the gesture keyboard paradigm from one finger to multiple fingers. To address the complexity of recognizing bimanual gestures, we designed and implemented two related interaction methods, finger-release and space-required, both based on a new multi-stroke gesture recognition algorithm. A formal experiment showed that bimanual gesture behaviors were easy to learn. They improved comfort and reduced physical demand relative to unimanual gestures on tablets. The results indicated that these new gesture keyboards are valuable complements to unimanual gesture and regular typing keyboards.
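A minimal sketch of one way multi-stroke recognition can reduce to single-stroke matching: concatenate the strokes in time order and compare the combined trace against each word's ideal template. This is purely illustrative and does not reproduce the paper's actual algorithm.

def recognize_multistroke(strokes, templates, trace_distance):
    """strokes: list of point lists, one per finger stroke, in time order.
    templates: dict mapping word -> ideal trace through its key centers.
    trace_distance: metric over traces, e.g. mean pointwise distance after
    resampling both traces to the same length."""
    combined = [p for stroke in strokes for p in stroke]
    return min(templates, key=lambda w: trace_distance(combined, templates[w]))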