Facial expressions are a powerful way for us to exchange information nonverbally; they can give us insights into how people feel and think. Much prior work in computer vision addresses facial expression detection. However, most of it relies on camera-based systems installed in the environment, which makes it difficult to track users over a wide area: a user's facial expression can be recognized only in a limited set of places.
We present a recognition system based on photo sensors and machine learning that can detect facial expressions anytime, anywhere. Our system categorizes facial expressions by measuring the distance between sensors embedded in eyewear and the skin surface of the user's face. The recognizable states are: neutral, angry (2 states), smile, laugh, sad, and surprise. With our method, individual differences are accommodated through user-dependent training.
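The classification step above can be sketched as a nearest-neighbor lookup over per-user calibration data. This is a minimal illustration, not the system's actual implementation: the sensor count, the distance values, and the choice of a nearest-neighbor classifier are all assumptions made for the example.

```python
import math

# Hypothetical per-user calibration: each expression is paired with a vector of
# photo sensor readings (sensor-to-skin distances, arbitrary units).
# The values below are illustrative placeholders, not measurements from the paper.
TRAINING_DATA = {
    "neutral":  [(5.0, 5.0, 5.0, 5.0)],
    "smile":    [(3.5, 4.0, 6.0, 6.2)],
    "surprise": [(7.0, 7.2, 6.8, 7.1)],
}

def classify(sample):
    """Return the expression whose calibration vector is nearest (Euclidean)."""
    best_label, best_dist = None, float("inf")
    for label, examples in TRAINING_DATA.items():
        for ex in examples:
            d = math.dist(sample, ex)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

print(classify((3.6, 4.1, 5.9, 6.0)))  # → smile (closest to the "smile" calibration)
```

Because the calibration table is recorded per user, the same code handles individual differences in face shape: retraining is just replacing the table with that user's readings.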

Project Members

Katsutoshi Masai
Yuta Sugiura
Masa Ogata
Katsuhiro Suzuki
Fumihiko Nakamura
Sho Shimamura
Kai Kunze
Masahiko Inami
Maki Sugimoto



Publications

Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, Sho Shimamura, Kai Kunze, Masahiko Inami, and Maki Sugimoto. 2015. AffectiveWear: toward recognizing facial expressions. In ACM SIGGRAPH 2015 Emerging Technologies (SIGGRAPH '15). ACM, New York, NY, USA.

Katsutoshi Masai, Yuta Sugiura, Katsuhiro Suzuki, Sho Shimamura, Kai Kunze, Masa Ogata, Masahiko Inami, and Maki Sugimoto. 2015. AffectiveWear: towards recognizing affect in real life. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (UbiComp ’15). ACM, New York, NY, USA, 357-360.