FaceDrive

FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms

Abstract

Supernumerary Robotic Arms (SRAs) can make physical activities easier, but they require cooperation with the operator. To improve this cooperation, we predict the operator's intentions from his/her Facial Expressions (FEs). We propose to map FEs to SRA commands (e.g., grab, release). To measure FEs, we use an optical sensor-based approach (here, with the sensors embedded inside an HMD); the sensor data are fed to a Support Vector Machine (SVM) that classifies them into FEs. The SRAs can then carry out commands by predicting the operator's FEs (and, arguably, the operator's intention). We built a Virtual Environment (VE) with SRAs and a synchronizable avatar to investigate the most suitable mapping between FEs and SRA commands. At SIGGRAPH Asia 2019, users can manipulate the virtual SRAs using their FEs.
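
To illustrate the recognition pipeline described above, the sketch below shows how optical sensor readings from an HMD could be classified into FEs with an SVM and then mapped to SRA commands. This is a minimal illustration, not the implementation from the paper: the sensor count, expression labels, command mapping, and scikit-learn pipeline are all assumptions.

    # Minimal sketch, not the paper's implementation. The sensor count,
    # expression labels, and FE-to-command mapping are assumed for
    # illustration only.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    N_SENSORS = 16  # hypothetical number of optical sensors in the HMD
    EXPRESSIONS = ["neutral", "smile", "frown", "pucker"]  # assumed labels
    FE_TO_COMMAND = {  # assumed mapping from FEs to SRA commands
        "neutral": "idle",
        "smile": "grab",
        "frown": "release",
        "pucker": "point",
    }

    def train_fe_classifier(X, y):
        # X: (n_samples, N_SENSORS) raw sensor intensities; y: expression
        # labels. Standardize the readings, then fit an RBF-kernel SVM.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X, y)
        return clf

    def sra_command(clf, sensor_frame):
        # Classify one frame of sensor readings and return the SRA command.
        fe = clf.predict(np.asarray(sensor_frame).reshape(1, -1))[0]
        return FE_TO_COMMAND.get(fe, "idle")

    # Usage with random stand-in data (a real system would use labeled
    # sensor recordings of each expression):
    rng = np.random.default_rng(0)
    X = rng.random((200, N_SENSORS))
    y = rng.choice(EXPRESSIONS, size=200)
    clf = train_fe_classifier(X, y)
    print(sra_command(clf, rng.random(N_SENSORS)))

In practice, one would likely smooth the per-frame predictions over several consecutive frames before issuing a command, so that a momentary misclassification does not trigger a spurious grab or release.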

Members

Masaaki Fukuoka,
Adrien Verhulst,
Fumihiko Nakamura,
Ryo Takizawa,
Katsutoshi Masai,
Maki Sugimoto

Publication

  1. Masaaki Fukuoka, Adrien Verhulst, Fumihiko Nakamura, Ryo Takizawa, Katsutoshi Masai, and Maki Sugimoto. FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms. In Proceedings of ICAT-EGVE 2019 – International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, pp. 17-24, Tokyo, Japan, Sept. 11-13, 2019.
     URL: https://doi.org/10.2312/egve.20191275