Google Research

T(ether): Spatially-Aware Handhelds, Gestures and Proprioception for Multi-User 3D Modeling and Animation

  • Dávid Lakatos
  • Matthew Blackshaw
  • Alex Olwal
  • Zachary Barryte
  • Ken Perlin
  • Hiroshi Ishii
Proceedings of SUI 2014 (ACM Symposium on Spatial User Interaction), ACM, pp. 90-93

Abstract

T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users' heads, hands, fingers, and pinch gestures, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand's position above, behind, or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model, in addition to controlling environment properties. We report initial user observations from a 3D modeling experiment, which indicate T(ether)'s potential for embodied viewport control and 3D modeling interactions.