Authored Publications
Probability Weighting in Interactive Decisions: Evidence for Overuse of Bad Assistance, Underuse of Good Assistance
Andy Cockburn
Carl Gutwin
Zhe Chen
Pang Suwanaposee
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, ACM, New York, NY
Abstract
The effective use of assistive interfaces (i.e. those that offer suggestions or reform the user’s input to match inferred intentions) depends on users making good decisions about whether and when to engage or ignore assistive features. However, prior work from economics and psychology shows systematic decision-making biases in which people overreact to low probability events and underreact to high probability events – modelled using a probability weighting function. We examine the theoretical implications of this probability weighting for interaction, including its suggestion that users will overuse inaccurate interface assistance and underuse accurate assistance. We then conduct a new analysis of data from a previously published study, quantifying the degree of bias users exhibited, and demonstrating conformance with these predictions. We discuss implications for design, including strategies that could be used to mitigate the deleterious effects of the observed biases.
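The paper models this bias with a probability weighting function, but the abstract does not reproduce the specific functional form or fitted parameters. The sketch below uses the common one-parameter Tversky & Kahneman (1992) form purely for illustration; the function name weight and the value of gamma are assumptions, not values from the study.

    # Illustrative one-parameter probability weighting function
    # (Tversky & Kahneman, 1992 form). The study's fitted function and
    # parameters are not reproduced here; gamma below is arbitrary.

    def weight(p: float, gamma: float = 0.61) -> float:
        """Subjective decision weight w(p) for an objective probability p."""
        return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

    if __name__ == "__main__":
        for p in (0.05, 0.25, 0.50, 0.75, 0.95):
            print(f"p = {p:.2f}  ->  w(p) = {weight(p):.3f}")

With this illustrative gamma, a 5% chance is weighted at roughly 13% and a 95% chance at roughly 79%: the overweighting of unlikely events and underweighting of likely events that the analysis links to overuse of inaccurate assistance and underuse of accurate assistance.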
Active Edge: Designing Squeeze Gestures for the Google Pixel 2
Claire Lee
Melissa Barnhart
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, New York, NY, 274:1-274:13
Abstract
Active Edge is a feature of Google Pixel 2 smartphone devices that creates a force-sensitive interaction surface along their sides, allowing users to perform gestures by holding and squeezing their device. Supported by strain gauge elements adhered to the inner sidewalls of the device chassis, these gestures can be more natural and ergonomic than on-screen (touch) counterparts. Developing these interactions is an integration of several components: (1) an insight and understanding of the user experiences that benefit from squeeze gestures; (2) hardware with the sensitivity and reliability to sense a user's squeeze in any operating environment; (3) a gesture design that discriminates intentional squeezes from innocuous handling; and (4) an interaction design to promote a discoverable and satisfying user experience. This paper describes the design and evaluation of Active Edge in these areas as part of the product's development and engineering.
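The abstract names a gesture design that separates intentional squeezes from innocuous handling as one of the four components, but does not describe the recognizer itself. Below is a minimal, hypothetical heuristic over a stream of strain-gauge force samples, intended only to illustrate the kind of discrimination involved; the thresholds, the function name detect_squeeze, and the two-sided force representation are all assumptions, not the Active Edge implementation.

    # Hypothetical squeeze heuristic over (left, right) strain-gauge force
    # samples in newtons. NOT the Active Edge classifier; thresholds and
    # structure are illustrative only.

    FORCE_THRESHOLD_N = 8.0    # force that must be exceeded on both sides
    MIN_DURATION_S = 0.10      # intentional squeezes are briefly sustained
    MAX_DURATION_S = 1.50      # very long presses look like ordinary holding

    def detect_squeeze(samples, sample_rate_hz=100.0):
        """Return True if the sample stream contains one plausible squeeze."""
        run = 0
        for left, right in samples:
            if min(left, right) >= FORCE_THRESHOLD_N:
                run += 1
            else:
                if MIN_DURATION_S <= run / sample_rate_hz <= MAX_DURATION_S:
                    return True
                run = 0
        return MIN_DURATION_S <= run / sample_rate_hz <= MAX_DURATION_S

A deployed recognizer must also reject the many forms of innocuous handling the paper refers to (grip adjustments, pocketing, and the like), which this toy heuristic does not attempt.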
Estimating Touch Force with Barometric Pressure Sensors
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, New York, NY, 689:1-689:7
Abstract
Finger pressure offers a new dimension for touch interaction, where input is defined by its spatial position and orthogonal force. However, the limited availability and complexity of integrated force-sensing hardware in mobile devices is a barrier to exploring this design space. This paper presents a synthesis of two features in recent mobile devices - a barometric sensor (pressure altimeter) and ingress protection - to sense a user's touch force. When a user applies force to a device's display, it flexes inward and causes an increase in atmospheric pressure within the sealed chassis. This increase in pressure can be sensed by the device's internal barometer. However, this change is uncontrolled and requires a calibration model to map atmospheric pressure to touch force. This paper derives such a model and demonstrates its viability on four commercially-available devices (including two with dedicated force sensors). The results show this method is sensitive to forces of less than 1 N, and is comparable to dedicated force sensors.
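The abstract describes a calibration model that maps the internal pressure change to touch force, but does not state the model's form. The sketch below assumes a simple linear least-squares fit with made-up calibration data; the function names and numbers are illustrative, not the model derived in the paper.

    # Sketch of a calibration mapping barometer deltas (Pa) to touch force (N).
    # A linear fit is assumed purely for illustration; the calibration pairs
    # below are invented, not measured data from the paper.
    import numpy as np

    def fit_calibration(pressure_delta_pa, force_n):
        """Fit force ~ a * delta_pressure + b by least squares; return (a, b)."""
        A = np.vstack([pressure_delta_pa, np.ones_like(pressure_delta_pa)]).T
        (a, b), *_ = np.linalg.lstsq(A, force_n, rcond=None)
        return a, b

    def estimate_force(pressure_delta_pa, a, b):
        return a * pressure_delta_pa + b

    # Invented calibration pairs: (pressure rise in Pa, applied force in N).
    deltas = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
    forces = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    a, b = fit_calibration(deltas, forces)
    print(estimate_force(3.0, a, b))   # about 0.75 N for a 3 Pa rise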
Abstract
Word–Gesture keyboards allow users to enter text using continuous input strokes (also known as gesture typing or shape writing). We developed a production model of gesture typing input based on a human motor control theory of optimal control (specifically, modeling human drawing movements as a minimization of jerk—the third derivative of position). In contrast to existing models, which consider gestural input as a series of concatenated aiming movements and predict a user’s time performance, this descriptive theory of human motor control predicts the shapes and trajectories that users will draw. The theory is supported by an analysis of user-produced gestures that found qualitative and quantitative agreement between the shapes users drew and the minimum jerk theory of motor control. Furthermore, by using a small number of statistical via-points whose distributions reflect the sensorimotor noise and speed–accuracy trade-off in gesture typing, we developed a model of gesture production that can predict realistic gesture trajectories for arbitrary text input tasks. The model accurately reflects features in the figural shapes and dynamics observed from users and can be used to improve the design and evaluation of gestural input systems.
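The closed-form minimum-jerk profile for a single point-to-point movement (Flash & Hogan, 1985) underlies this kind of model; the paper's contribution is the chaining of statistical via-points, which is not reproduced here. The sketch below shows only the basic segment profile, with function and parameter names that are mine rather than the paper's.

    # Minimum-jerk position profile for one straight segment (Flash & Hogan,
    # 1985): x(tau) = x0 + (x1 - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5).
    # The paper's full production model chains statistical via-points; only a
    # single point-to-point segment is sketched here.
    import numpy as np

    def minimum_jerk_segment(p0, p1, n_samples=50):
        """Return an (n_samples, 2) array of positions moving from p0 to p1."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        tau = np.linspace(0.0, 1.0, n_samples)        # normalised time
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5    # minimum-jerk time scaling
        return p0 + np.outer(s, p1 - p0)

    # e.g. a stroke between two hypothetical key centres (units arbitrary):
    path = minimum_jerk_segment((0.0, 0.0), (120.0, 40.0))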
A Cost–Benefit Study of Text Entry Suggestion Interaction
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, ACM, New York, NY, pp. 83-88
Abstract
Mobile keyboards often present error corrections and word completions (suggestions) as candidates for anticipated user input. However, these suggestions are not cognitively free: they require users to attend, evaluate, and act upon them. To understand this trade-off between suggestion savings and interaction costs, we conducted a text transcription experiment that controlled interface assertiveness: the tendency for an interface to present itself. Suggestions were either always present (extraverted), never present (introverted), or gated by a probability threshold (ambiverted). Results showed that although increasing the assertiveness of suggestions reduced the number of keyboard actions to enter text and was subjectively preferred, the costs of attending to and using the suggestions impaired average time performance.
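The three assertiveness conditions amount to a gating policy on when the suggestion strip is shown. The sketch below restates that policy in code; the threshold value, the policy names as strings, and the notion of a single top-candidate probability are assumptions for illustration, not the study's implementation.

    # Sketch of the three assertiveness policies described in the abstract.
    # The threshold is a made-up value; the study's gating criterion is not
    # reproduced here.

    AMBIVERT_THRESHOLD = 0.8   # illustrative only

    def should_show_suggestions(policy, top_probability):
        """Decide whether to render the suggestion strip for this input step."""
        if policy == "extraverted":    # always present
            return True
        if policy == "introverted":    # never present
            return False
        if policy == "ambiverted":     # gated by a probability threshold
            return top_probability >= AMBIVERT_THRESHOLD
        raise ValueError(f"unknown policy: {policy}")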