Modern Interactive Techniques

Interacting with digital devices usually relies on large touch panels or external input hardware such as keyboards. Wearable and IoT devices, however, are inherently small and lack the space and resources for such input modules, so shrinking touch panels onto them compromises usability. To provide a better interaction experience on wearables, we take a ubiquitous-sensing approach and design rich yet reliable gesture-based control with minimal or no hardware overhead on the device. Specifically, we either repurpose the device’s existing sensors or attach miniature sensors to support expressive, around-the-device gesture recognition. This design methodology significantly extends the small native interaction space of these devices and lets users control them with far greater flexibility.

This line of research focuses on interaction with traditional and emerging wearables, including smartwatches, smart eyeglasses, and earbuds. On these platforms we build applications such as localization, motion tracking, gesture recognition, and text entry.

Related publications:

  • [PerCom 2024] L. Ge, W. Xie, J. Zhang, Q. Zhang, “BLEAR: Practical Wireless Earphone Tracking under BLE Protocol”, 2024 IEEE International Conference on Pervasive Computing and Communications (PerCom), Biarritz, France, 2024.


  • [IoTJ 2024] L. Ge, Q. Zhang, J. Zhang, H. Chen, “EHTrack: Earphone-Based Head Tracking via Only Acoustic Signals”, IEEE Internet of Things Journal, 11 (3), 2024.


    Introduction: Head tracking measures human focus and attention to improve HCI. Current vision- and motion-based methods struggle with accuracy, usability, and COTS compatibility. EHTrack addresses these limits with an earphone-based system that performs head tracking using only acoustic signals. Two speakers emit a periodic sound field detected by the user’s earphones; by estimating distance and angle changes between earphones and speakers, EHTrack models head movement and orientation. Evaluations show high accuracy: 2.98 cm average error for movement, 1.83° for orientation, and 89.2% accuracy for focus direction in an exhibition deployment.
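
    As a rough illustration of the kind of processing involved, the sketch below estimates the path-length change between one speaker and one earphone microphone from the phase of a received tone. This is a minimal Python sketch under assumed parameters (sampling rate, probe frequency, demodulation scheme), not EHTrack's actual pipeline, which combines distance and angle estimates across multiple speaker-earphone pairs.

        # Minimal sketch: path-length change from the phase of a received tone.
        # The carrier frequency, sampling rate, and block size are assumptions
        # for illustration, not EHTrack's actual parameters.
        import numpy as np

        FS = 48000        # microphone sampling rate (Hz), assumed
        F_TONE = 18000    # near-inaudible probe tone (Hz), assumed
        C = 343.0         # speed of sound (m/s)

        def distance_change(rx, fs=FS, f_tone=F_TONE):
            """Relative path-length change (m) over time for one speaker-mic pair."""
            t = np.arange(len(rx)) / fs
            # Coherent demodulation against the known carrier
            baseband = rx * np.exp(-2j * np.pi * f_tone * t)
            # Block averaging acts as a low-pass filter on the demodulated signal
            block = 480                                   # 10 ms blocks
            n_blocks = len(baseband) // block
            iq = baseband[:n_blocks * block].reshape(n_blocks, block).mean(axis=1)
            phase = np.unwrap(np.angle(iq))
            # A longer path delays the tone, so the demodulated phase decreases
            wavelength = C / f_tone
            return -(phase - phase[0]) * wavelength / (2 * np.pi)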


  • [ACM IMWUT 2024] W. Xie, H. Chen, J. Wei, J. Zhang, Q. Zhang, “RimSense: Enabling Touch-based Interaction on Eyeglass Rim Using Piezoelectric Sensors”, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7 (4), 2023.


    Introduction: Smart eyewear typically relies on temple-mounted touch panels, but because the touch plane is misaligned with the display plane, intuitive gesture-to-object mapping breaks down. RimSense offers an alternative: touch interaction on the eyewear rim. Using two commercial piezoelectric (PZT) transducers, RimSense turns the rim into a touch-sensitive surface and captures touch-induced structural changes as channel frequency response (CFR) patterns. A buffered chirp probe provides fine granularity and noise resilience, while a deep learning framework with a finite-state machine (FSM) maps fine-grained prediction sequences to event-level gestures and their durations. In evaluations with 30 subjects, RimSense recognizes eight gestures plus a negative class with an F1-score of 0.95 and estimates gesture duration with 11% relative error. Real-time prototypes and a user study with 14 participants show strong performance, usability, learnability, and enjoyment, with interview feedback informing future eyewear design.
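
    The core sensing primitive, estimating a channel frequency response from a known chirp probe, can be sketched as follows. This is a minimal Python illustration under assumed chirp parameters (band, duration, sampling rate), not RimSense's actual implementation; touch events would then show up as deviations of successive CFR frames from a no-touch baseline.

        # Minimal sketch: CFR estimation from a known chirp probe.
        # Chirp band, duration, and sampling rate are illustrative assumptions.
        import numpy as np

        FS = 48000              # sampling rate (Hz), assumed
        F0, F1 = 17000, 22000   # chirp band (Hz), assumed
        DUR = 0.01              # chirp duration (s), assumed

        def reference_chirp(fs=FS, f0=F0, f1=F1, dur=DUR):
            """Known transmitted linear chirp."""
            t = np.arange(int(fs * dur)) / fs
            k = (f1 - f0) / dur                       # sweep rate (Hz/s)
            return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

        def estimate_cfr(rx_frame, fs=FS):
            """CFR over the probe band for one received chirp frame."""
            tx = reference_chirp(fs)
            n = len(tx)
            RX = np.fft.rfft(rx_frame[:n])
            TX = np.fft.rfft(tx)
            freqs = np.fft.rfftfreq(n, d=1 / fs)
            band = (freqs >= F0) & (freqs <= F1)
            # Frequency-domain deconvolution; epsilon avoids division by ~0
            return freqs[band], RX[band] / (TX[band] + 1e-9)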


  • [ACM IMWUT 2021] W. Xie, Q. Zhang, J. Zhang, “Acoustic-based Upper Facial Action Recognition for Smart Eyewear”, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5 (2), 2021.


    Introduction: Smart eyewear represents the next leap in wearables, yet reliance on obtrusive touchpads hinders usability. We propose a hands-free, acoustic-based system that recognizes Upper Facial Actions (UFAs) using glasses-mounted speakers and microphones. To cope with severe multipath fading and capture subtle skin deformations, the system uses OFDM-based Channel State Information (CSI) estimation and time-frequency analysis. The resulting patterns feed into a CNN that classifies six actions, including winks, blinks, and brow movements. Experiments with 26 subjects demonstrate an average F1-score of 0.92.
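
    The CSI estimation step can be illustrated with a minimal Python sketch: divide the subcarriers of a received OFDM symbol by the known transmitted pilots to obtain per-subcarrier channel estimates, then stack their magnitudes over time. The symbol length, cyclic-prefix length, and pilot values are assumptions for illustration, not the system's actual parameters.

        # Minimal sketch: OFDM-based CSI estimation for acoustic sensing.
        # FFT size, cyclic prefix, and pilots are illustrative assumptions.
        import numpy as np

        N_FFT = 64    # subcarriers per OFDM symbol, assumed
        CP = 16       # cyclic-prefix length in samples, assumed

        def estimate_csi(rx_symbol, tx_pilots):
            """Per-subcarrier channel estimate H[k] = Y[k] / X[k]."""
            no_cp = rx_symbol[CP:CP + N_FFT]     # strip the cyclic prefix
            rx_freq = np.fft.fft(no_cp)          # back to the subcarrier domain
            return rx_freq / tx_pilots           # pilots assumed known and nonzero

        def csi_magnitude_map(rx_stream, tx_pilots):
            """Stack |CSI| across consecutive symbols into a time-frequency map."""
            sym_len = N_FFT + CP
            n_syms = len(rx_stream) // sym_len
            frames = rx_stream[:n_syms * sym_len].reshape(n_syms, sym_len)
            return np.abs(np.stack([estimate_csi(f, tx_pilots) for f in frames]))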


  • [ACM IMWUT 2020] L. Ge, J. Zhang, Q. Zhang, “Acoustic Strength-based Motion Tracking”, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4 (4), 2020.


    Introduction: Accurate motion tracking is vital for VR/AR, yet existing acoustic methods rely on distance estimation and therefore need large arrays (>1 m) that are impractical for compact devices. To address this, we propose the Acoustic Strength-based Angle Tracking (ASAT) system. ASAT generates a periodically changing sound field; as the device moves, the period of the received signal shifts, allowing precise angle derivation. This approach achieves 5 cm localization accuracy within a 3 m range without requiring bulky hardware.
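
    A minimal Python sketch of the underlying idea follows: extract the envelope of the received signal and measure its phase at the known sound-field rate; the drift of this phase across frames reflects the receiver's angular motion relative to the speakers. The field rate, window size, and envelope method below are illustrative assumptions, not ASAT's actual design.

        # Minimal sketch: phase of a periodically varying sound-field envelope.
        # Sampling rate, field rate, and window size are illustrative assumptions.
        import numpy as np

        FS = 48000          # sampling rate (Hz), assumed
        FIELD_HZ = 10.0     # sound-field variation rate (Hz), assumed

        def envelope(rx, fs=FS, win_ms=5):
            """Short-window RMS envelope of the received signal and its rate."""
            win = int(fs * win_ms / 1e3)
            n = len(rx) // win
            frames = rx[:n * win].reshape(n, win)
            return np.sqrt((frames ** 2).mean(axis=1)), fs / win

        def field_phase(rx, fs=FS, field_hz=FIELD_HZ):
            """Phase (rad) of the envelope at the known field rate."""
            env, env_rate = envelope(rx, fs)
            env = env - env.mean()
            t = np.arange(len(env)) / env_rate
            # Single-bin DFT at the field frequency
            return np.angle(np.sum(env * np.exp(-2j * np.pi * field_hz * t)))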


Demos:

Acoustic-based Upper Facial Action Recognition for Smart Eyewear

RimSense: Enabling Touch-based Interaction on Eyeglass Rim Using Piezoelectric Sensors