Project

Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display

Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto

Abstract

Facial expressions enrich communication via avatars. However, in common immersive virtual reality systems, the head-mounted display (HMD) occludes part of the face, making it difficult to capture the user's expressions. The mouth in particular plays an important role in facial expressions and is essential for rich interaction. In this paper, we propose a technique that classifies mouth shapes into six classes using optical sensors embedded in an HMD, and that automatically labels the training dataset by vowel recognition. We conducted an experiment with five subjects to compare the recognition rates of machine learning under manual and automated labeling conditions. Results show that our method achieves average classification accuracies of 99.9% and 96.3% under the manual and automated labeling conditions, respectively. These findings indicate that automated labeling is competitive with manual labeling, even though its classification accuracy is slightly lower. Furthermore, we developed an application that reflects the mouth shape on avatars: it blends the six mouth shapes and applies the blended shape to the avatar.
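The final step described above, blending the six mouth-shape classes and applying the result to an avatar, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, the softmax weighting of classifier scores, and the identity stand-in for blendshape vectors are all assumptions made for the example.

```python
import numpy as np

# Hypothetical six mouth-shape classes; the paper's exact labels are not
# reproduced here, so vowel-like classes plus neutral are an assumption.
CLASSES = ["a", "i", "u", "e", "o", "neutral"]

def blend_weights(scores):
    """Turn per-class classifier scores into blend weights via softmax."""
    scores = np.asarray(scores, dtype=float)
    exp = np.exp(scores - scores.max())  # subtract max for numerical stability
    return exp / exp.sum()               # weights are non-negative and sum to 1

def blend_mouth(weights, shape_vectors):
    """Weighted sum of per-class mouth-shape vectors (e.g. blendshape targets)."""
    return np.tensordot(weights, shape_vectors, axes=1)

# Example: the classifier strongly favors the "a" class.
w = blend_weights([4.0, 0.5, 0.2, 0.1, 0.3, 0.0])
shapes = np.eye(6)  # stand-in: one unit blendshape axis per class
mouth = blend_mouth(w, shapes)
```

Because the weights form a convex combination, the blended mouth shape varies continuously as the sensor-driven classifier scores change, rather than snapping between the six discrete classes.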

Publications

Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto. Mouth Shape Recognition Using Optical Sensors Attached to a Head-Mounted Display (in Japanese). 136th SIG Meeting of the Human Interface Society, 2016.

Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto. Mouth Shape Recognition with HMD-Embedded Optical Sensors and Labeling of Training Data Using Vowel Recognition (in Japanese). Transactions of the Virtual Reality Society of Japan, Vol. 22, No. 4, pp. 523-534, 2017.

Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto. Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display. International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, pp. 9-16, Tokyo, Japan, Sept. 11-13, 2019. [DOI]

Acknowledgements

This research was supported by JSPS KAKENHI Grant Number 16H05870.