Enabling E-Textile Microinteractions: Gestures and Light through Helical Structures

May 15, 2020

Posted by Alex Olwal, Research Scientist, Google Research



Textiles have the potential to help technology blend into our everyday environments and objects by improving aesthetics, comfort, and ergonomics. Consumer devices have started to leverage these opportunities through fabric-covered smart speakers and braided headphone cords, while advances in materials and flexible electronics have enabled the incorporation of sensing and display into soft form factors, such as jackets, dresses, and blankets.
A scalable interactive E-textile architecture with embedded touch sensing, gesture recognition and visual feedback.
In “E-textile Microinteractions” (Proceedings of ACM CHI 2020), we bring interactivity to soft devices and demonstrate how machine learning (ML) combined with an interactive textile topology enables parallel use of discrete and continuous gestures. This work extends our previously introduced E-textile architecture (Proceedings of ACM UIST 2018). This research focuses on cords due to their modular use as drawstrings in garments and as wired connections for data and power across consumer devices. By exploiting techniques from textile braiding, we integrate both gesture sensing and visual feedback along the surface through a repeating matrix topology.

For insight into how this works, please see this video about E-textile microinteractions and this video about the E-textile architecture.
E-textile microinteractions combining continuous sensing with discrete motion and grasps.
The Helical Sensing Matrix (HSM)
Braiding generally refers to the diagonal interweaving of three or more material strands. While braids are traditionally used for aesthetics and structural integrity, they can also be used to enable new sensing and display capabilities.

Whereas cords can be made to detect basic touch gestures through capacitive sensing, we developed a helical sensing matrix (HSM) that enables a larger gesture space. The HSM is a braid consisting of electrically insulated conductive textile yarns and passive support yarns, where conductive yarns in opposite directions take the role of transmit and receive electrodes to enable mutual capacitive sensing. The capacitive coupling at their intersections is modulated by the user’s fingers, and these interactions can be sensed anywhere on the cord since the braided pattern repeats along the length.
Left: A Helical Sensing Matrix based on a 4×4 braid (8 conductive threads spiraled around the core). Magenta/cyan are conductive yarns, used as receive/transmit lines. Grey are passive yarns (cotton). Center: Flattened matrix, which illustrates how the 4×4 matrices (colored circles 0–F) repeat along the length of the cord. Right: Yellow are fiber optic lines, which provide visual feedback.
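As a rough illustration of how such a transmit/receive topology can be read out, the sketch below scans each transmit yarn in turn and samples every receive yarn to build a small capacitance frame. The hardware hooks (`drive_tx`, `read_rx`) and the 4×4 dimensions are assumptions for illustration, not the published implementation.

```python
import numpy as np

NUM_TX = 4  # conductive yarns spiraling in one direction (transmit)
NUM_RX = 4  # conductive yarns spiraling in the other direction (receive)

def drive_tx(tx, enable):
    """Hypothetical hardware hook: excite (or stop exciting) one transmit yarn."""
    pass  # placeholder: no hardware attached in this sketch

def read_rx(rx):
    """Hypothetical hardware hook: sample the signal coupled onto one receive yarn."""
    return 1.0 + 0.01 * np.random.randn()  # simulated baseline reading

def scan_frame():
    """Scan all TX/RX intersections into one 4x4 mutual-capacitance frame.

    A finger near an intersection changes the coupling between that TX/RX
    pair; because the braided pattern repeats, this single frame captures
    touches anywhere along the cord.
    """
    frame = np.zeros((NUM_TX, NUM_RX))
    for tx in range(NUM_TX):
        drive_tx(tx, enable=True)
        for rx in range(NUM_RX):
            frame[tx, rx] = read_rx(rx)
        drive_tx(tx, enable=False)
    return frame
```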
Rotation Detection
A key insight is that the two axial columns in an HSM that share a common set of electrodes (and color in the diagram of the flattened matrix) are 180º opposite each other. Thus, pinching and rolling the cord activates a set of electrodes and allows us to track relative motion across these columns. Rotation detection identifies the current phase with respect to the set of time-varying sinusoidal signals that are offset by 90º. The braid allows the user to initiate rotation anywhere, and is scalable with a small set of electrodes.
Rotation is deduced from horizontal finger motion across the columns. The plots below show the relative capacitive signal strengths, which change with finger proximity.
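One simple way to recover a rotation angle from signals offset by 90º is the classic in-phase/quadrature construction sketched below. This is an illustrative reading of the description above, not the published pipeline.

```python
import numpy as np

def finger_angle(columns):
    """Estimate the finger's angle around the cord from the four column signals.

    `columns` holds the relative signal strengths of the four axial columns,
    whose responses are offset by 90º as the finger rolls around the cord.
    Columns 0/2 and 1/3 sit 180º apart, so their differences form an
    in-phase / quadrature pair.
    """
    i = columns[0] - columns[2]  # in-phase component
    q = columns[1] - columns[3]  # quadrature component
    return np.arctan2(q, i)      # phase in radians

def relative_twist(angles):
    """Unwrap successive phase estimates into a continuous rotation signal."""
    return np.unwrap(np.asarray(angles))
```

Rolling the cord sweeps the phase continuously; unwrapping removes the ±180º jumps so that twist can be accumulated over time and used for relative control.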
Interaction Techniques and Design Guidelines
This e-textile architecture makes the cord touch-sensitive, but its softness and malleability limit suitable interactions compared to rigid touch surfaces. With the unique material in mind, our design guidelines emphasize:
  • Simple gestures. We design for short interactions where the user either makes a single discrete gesture or performs a continuous manipulation.

  • Closed-loop feedback. We want to help the user discover functionality and get continuous feedback on their actions. Where possible, we provide visual, tactile, and audio feedback integrated in the device.
Based on these principles, we leverage our e-textile architecture to enable interaction techniques based on our ability to sense proximity, contact area, contact time, roll, and pressure.
Our e-textile enables interaction based on capacitive sensing of proximity, contact area, contact time, roll, and pressure.
The inclusion of fiber optic strands that can display colors of varying intensity enables dynamic, real-time feedback to the user.
Braided fiber optic strands create the illusion of directional motion.
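The heuristics below are purely illustrative of how some of these quantities might be derived from a baseline-subtracted sensing frame (peak deviation for proximity, a thresholded count for contact area, total signal for pressure); they are not the paper's feature definitions.

```python
import numpy as np

def touch_descriptors(frame, baseline, contact_threshold=0.2):
    """Derive coarse touch descriptors from one sensing-matrix frame.

    frame, baseline: 4x4 arrays of raw and resting mutual-capacitance readings.
    Returns illustrative estimates of proximity, contact area, and pressure;
    contact time and roll would be tracked over a sequence of frames.
    """
    delta = np.asarray(frame, dtype=float) - np.asarray(baseline, dtype=float)
    return {
        "proximity": float(delta.max()),                  # peak deviation
        "area": int((delta > contact_threshold).sum()),   # intersections touched
        "pressure": float(np.clip(delta, 0, None).sum()), # total coupled signal
    }
```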
Motion Gestures (Flicks and Slides) and Grasping Styles (Pinch, Grab, Pat)
We conducted a gesture elicitation study, which showed opportunities for an expanded gesture set. Inspired by these results, we decided to investigate five motion gestures based on flicks and slides, along with single-touch gestures (pinch, grab, and pat).
Gesture elicitation study with imagined touch sensing.
We collected data from 12 new participants, resulting in 864 gesture samples (12 participants performed eight gestures each, repeating nine times), each with 16 features linearly interpolated to 80 observations over time. Participants performed the eight gestures in their own style without feedback, as we wanted to accommodate individual differences; classification is highly dependent on user style (“contact”), preference (“how to pinch/grab”), and anatomy (e.g., hand size). Our pipeline was thus designed for user-dependent training to enable individual styles with differences across participants, such as the inconsistent use of clockwise/counterclockwise, overlap between temporal gestures (e.g., flick vs. flick and hold), and similar pinch and grab gestures. For a user-independent system, we would need to address such differences, for example with stricter instructions for consistency, data from a larger population, and more diverse settings. Real-time feedback during training would also help mitigate differences as the user learns to adjust their behavior.
Twelve participants (horizontal axis) performed 9 repetitions (animation) for the eight gestures (vertical axis). Each sub-image shows 16 overlaid feature vectors, interpolated to 80 observations over time.
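The preprocessing described above (16 features linearly interpolated to 80 observations over time) can be sketched as follows; the array shapes are taken from the description, while the function itself is illustrative.

```python
import numpy as np

def resample_gesture(recording, num_observations=80):
    """Linearly interpolate a variable-length gesture to a fixed length.

    `recording` is a (T, 16) array: 16 sensor features over T frames of one
    gesture. The output is (num_observations, 16), giving every sample the
    fixed size expected by the classifier.
    """
    recording = np.asarray(recording, dtype=float)
    t_src = np.linspace(0.0, 1.0, len(recording))
    t_dst = np.linspace(0.0, 1.0, num_observations)
    return np.stack(
        [np.interp(t_dst, t_src, recording[:, f]) for f in range(recording.shape[1])],
        axis=1,
    )
```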
We performed cross-validation for each user by training on eight repetitions of each gesture and testing on the held-out ninth, across all nine permutations, and achieved a gesture recognition accuracy of ~94%. This result is encouraging, especially given the expressivity enabled by such a low-resolution sensor matrix (eight electrodes).
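A minimal sketch of this per-user, leave-one-repetition-out evaluation is shown below. The classifier is a stand-in (a linear SVM from scikit-learn); the post does not specify the model used, only that it trains quickly on limited data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def per_user_accuracy(X, y, repetition):
    """Leave-one-repetition-out cross-validation for a single user.

    X: (n_samples, n_features) flattened gesture samples (e.g., 80 x 16 = 1280)
    y: (n_samples,) gesture labels
    repetition: (n_samples,) repetition index (0..8) of each sample
    """
    scores = []
    for held_out in np.unique(repetition):
        train, test = repetition != held_out, repetition == held_out
        model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        model.fit(X[train], y[train])                 # train on eight repetitions
        scores.append(model.score(X[test], y[test]))  # test on the ninth
    return float(np.mean(scores))  # averaged over all nine permutations
```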

Notably, the inherent relationships in the repeated sensing matrices are well suited to machine learning classification. The ML classifier used in our research enables quick training with limited data, which makes a user-dependent interaction system reasonable. In our experience, training a typical gesture takes less than 30 s, comparable to the amount of time required to train a fingerprint sensor.

User-Independent, Continuous Twist: Quantifying Precision and Speed
The per-user trained gesture recognition enabled eight new discrete gestures. For continuous interactions, we also wanted to quantify how well user-independent, continuous twist performs for precision tasks. We compared our e-textile with two baselines, a capacitive multi-touch trackpad (“Scroll”) and the familiar headphone cord remote control (“Buttons”). We designed a lab study where the three devices controlled 1D movement in a targeting task.

We analyzed three dependent variables across the 1,800 trials (12 participants, three techniques): time on task (milliseconds), total motion, and end-of-trial motion. Participants also provided qualitative feedback through rankings and comments.

Our quantitative analysis suggests that our e-textile’s twisting is faster than existing headphone button controls and comparable in speed to a touch surface. Qualitative feedback also indicated a preference for e-textile interaction over headphone controls.
Left: Weighted average subjective feedback. We mapped the 7-point Likert scale to scores in the range [-3, 3], weighted each score by the number of times the technique received that rating, and averaged over all responses. Right: Mean completion times for each target distance show that Buttons were consistently slower.
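For reference, the score computation described in the caption amounts to the small calculation below; the `rating_counts` histogram is a hypothetical stand-in for the study data.

```python
import numpy as np

def weighted_likert_score(rating_counts):
    """Average subjective score for one technique.

    `rating_counts[k]` is how many times the technique received Likert rating
    k+1 (on the 7-point scale). Ratings map to scores -3..+3, each score is
    weighted by its count, and the result is averaged over all responses.
    """
    scores = np.arange(-3, 4)  # ratings 1..7 -> scores -3..+3
    counts = np.asarray(rating_counts, dtype=float)
    return float((scores * counts).sum() / counts.sum())
```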
These results are particularly interesting given that our e-textile was more sensitive than the rigid input devices. One explanation might be its expressiveness: users can twist quickly or slowly anywhere on the cord, and the actions are symmetric and reversible. Conventional buttons on headphones require users to locate them and change grips for different actions, which makes pressing the wrong button costly. We use a high-pass filter to limit the effect of accidental skin contact, but further work is needed to characterize robustness and evaluate long-term performance in actual contexts of use.
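A minimal sketch of one common form of such a filter is shown below: a slowly adapting baseline is subtracted so that gradual drift (e.g., the cord resting against skin or clothing) is suppressed while quick, deliberate gestures pass through. The one-pole design and the `alpha` constant are assumptions, not the paper's exact filter.

```python
import numpy as np

def high_pass(signal, alpha=0.05):
    """One-pole high-pass filter: subtract a slowly adapting baseline.

    alpha controls how quickly the baseline tracks the input; small values
    reject slow drift while preserving fast changes caused by deliberate
    touches and twists.
    """
    signal = np.asarray(signal, dtype=float)
    baseline = signal[0]
    filtered = np.empty_like(signal)
    for i, x in enumerate(signal):
        baseline += alpha * (x - baseline)
        filtered[i] = x - baseline
    return filtered
```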

Gesture Prototypes: Headphones, Hoodie Drawstrings, and Speaker Cord
We developed different prototypes to demonstrate the capabilities of our e-textile architecture: e-textile USB-C headphones to control media playback on the phone, a hoodie drawstring to invisibly add music control to clothing, and an interactive cord for gesture controls of smart speakers.
Left: Tap = Play/Pause; Center: Double-tap = Next track; Right: Roll = Volume +/-
Interactive speaker cord for simultaneous use of continuous (twisting/rolling) and discrete gestures (pinch/pat) to control music playback.
Conclusions and Future Directions
We introduce an interactive e-textile architecture for embedded sensing and visual feedback, which can enable both precise small-scale and large-scale motion in a compact cord form factor. With this work, we hope to advance textile user interfaces and inspire the use of microinteractions for future wearable interfaces and smart fabrics, where eyes-free access and casual, compact and efficient input is beneficial. We hope that our e-textile will inspire others to augment physical objects with scalable techniques, while preserving industrial design and aesthetics.

Acknowledgements
This work is a collaboration across multiple teams at Google. Key contributors to the project include Alex Olwal, Thad Starner, Jon Moeller, Greg Priest-Dorman, Ben Carroll, and Gowa Mainini. We thank the Google ATAP Jacquard team for our collaboration, especially Shiho Fukuhara, Munehiko Sato, and Ivan Poupyrev. We thank Google Wearables, and Kenneth Albanowski and Karissa Sawyer, in particular. Finally, we would like to thank Mark Zarich for illustrations, Bryan Allen for videography, Frank Li for data processing, Mathieu Le Goc for valuable discussions, and Carolyn Priest-Dorman for textile advice.