Teleoperation

Teleoperation platforms

This project involves the design and implementation of a multi-sensory control system that enables a single user to easily operate and coordinate multiple autonomous mobile robots in safety-critical applications such as surveillance.

The platform is a visual, audio and haptic human-robot interaction system intended to extend a person's sensing and mobility to several remote locations simultaneously, through a set of omnidirectional mobile robots that navigate an environment autonomously. Each robot is equipped with a video camera system, a loudspeaker/microphone set and several tactile sensors distributed along key contact points of its structure. Over a broadband wireless link, the visual, audio and tactile data are transmitted to a server station, where they are processed to give the user a real-time view of each robot's activities.
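
As a rough illustration of this data path from each robot to the server station, the Python sketch below packages timestamped tactile readings together with references to the separately streamed video and audio data and sends them over the network. The packet fields, rates, addresses and UDP transport are assumptions made for illustration, not the project's actual protocol.

# Hypothetical per-robot sensing packet and its upload to the server station.
# All field names, rates and the UDP transport are illustrative assumptions.
import json
import socket
import time
from dataclasses import dataclass, field, asdict

SERVER_ADDR = ("192.168.1.10", 9000)   # assumed address of the server station

@dataclass
class SensorPacket:
    robot_id: int
    timestamp: float
    video_frame_id: int                 # reference to a separately streamed video frame
    audio_chunk_id: int                 # reference to a separately streamed audio chunk
    tactile: dict = field(default_factory=dict)  # contact-point name -> pressure value

def read_tactile() -> dict:
    # Placeholder for the real readout of the tactile sensors on the chassis.
    return {"front_bumper": 0.0, "left_side": 0.0, "right_side": 0.0}

def stream(robot_id: int, rate_hz: float = 20.0) -> None:
    """Send lightweight sensor packets to the server station at a fixed rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame = chunk = 0
    while True:
        packet = SensorPacket(robot_id, time.time(), frame, chunk, read_tactile())
        sock.sendto(json.dumps(asdict(packet)).encode(), SERVER_ADDR)
        frame += 1
        chunk += 1
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    stream(robot_id=1)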

The user may interrupt the autonomous navigation of any robot at any time, take full control of its actions and switch control among the robots. Through a virtual reality helmet and glove, the user receives the active robot's visual and audio streams in real time, together with force feedback. In turn, the user can reposition the robots with the glove, steer their cameras with the helmet and speak through the robots' loudspeakers: a fairly global user-environment interaction.
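
The sketch below shows one plausible way to organize this switching between autonomous navigation and direct user control of a single selected robot. The class and method names are hypothetical and are not taken from the project's implementation.

# Minimal sketch of the autonomous/teleoperated mode switch described above.
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    TELEOPERATED = auto()

class Robot:
    def __init__(self, robot_id: int):
        self.robot_id = robot_id
        self.mode = Mode.AUTONOMOUS

    def apply_glove_command(self, vx: float, vy: float, wz: float) -> None:
        # Glove motion commands are forwarded only while under direct user control.
        if self.mode is Mode.TELEOPERATED:
            print(f"robot {self.robot_id}: velocity command ({vx}, {vy}, {wz})")

class OperatorStation:
    """Tracks which robot, if any, the single user is currently driving."""
    def __init__(self, robots: list[Robot]):
        self.robots = {r.robot_id: r for r in robots}
        self.active: Robot | None = None

    def take_control(self, robot_id: int) -> None:
        # Release the previously driven robot back to autonomous navigation,
        # then interrupt the selected robot and place it under user control.
        if self.active is not None:
            self.active.mode = Mode.AUTONOMOUS
        self.active = self.robots[robot_id]
        self.active.mode = Mode.TELEOPERATED

    def release(self) -> None:
        if self.active is not None:
            self.active.mode = Mode.AUTONOMOUS
            self.active = None

if __name__ == "__main__":
    station = OperatorStation([Robot(1), Robot(2), Robot(3)])
    station.take_control(2)                    # interrupt robot 2's autonomy
    station.robots[2].apply_glove_command(0.3, 0.0, 0.1)
    station.take_control(3)                    # switch control to robot 3
    station.release()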


Contact: José Antonio Aguilera, Jorge Varona and Ramiro Velázquez