Visuo-tactile AR for Enhanced Safety Awareness in HRI

We describe our approach to developing a multimodal AR system that combines visual and tactile cues to enhance the safety awareness of humans in human-robot interaction (HRI) tasks. Motivated by the competition for attentional resources between maintaining safety and accomplishing a primary task, we employ multimodal cues that inform the user about unsafe proximity to dangerous areas. The system augments the scene with both visual output rendered via AR glasses and tactile stimuli produced by vibration motors embedded in a belt. The tactile belt allows users to focus their visual attention on the primary task while remaining safety-aware. The visual representation additionally rendered into the scene provides visual grounding; this feedback benefits both the user and external observers in training and supervision scenarios. We tested the system with informed and naive users to iterate over the design and to gain initial insights into the utility of our multimodal approach.
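To make the proximity-to-tactile-cue mapping concrete, here is a minimal sketch of how a vibrotactile belt command could be derived from the user's distance and bearing to a dangerous area. The abstract does not specify this mapping; the function name, the linear intensity ramp, the warning radius, and the eight-motor belt layout are all assumptions for illustration only.

```python
import math

def vibration_command(user_xy, hazard_xy, warn_radius=1.5, num_motors=8):
    """Hypothetical mapping from proximity to a belt vibration command.

    Assumptions (not from the original work): intensity ramps linearly
    from 0 at the warning radius to 1 at the hazard, and the motor
    pointing toward the hazard is activated. Coordinates are in metres,
    in the user's frame (x forward, y left).
    """
    dx = hazard_xy[0] - user_xy[0]
    dy = hazard_xy[1] - user_xy[1]
    dist = math.hypot(dx, dy)
    if dist >= warn_radius:
        return None  # outside the warning zone: no tactile cue
    intensity = 1.0 - dist / warn_radius  # 0.0 at the edge .. 1.0 at the hazard
    angle = math.atan2(dy, dx) % (2 * math.pi)  # bearing to the hazard
    motor = int(round(angle / (2 * math.pi) * num_motors)) % num_motors
    return motor, intensity
```

Such a directional mapping would let the belt convey not only that a hazard is near but also where it is, keeping the visual channel free for the primary task.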

