WiFi and Vision Multimodal Learning for Accurate and Robust Device-Free Human Activity Recognition

Han Zou, Jianfei Yang, Hari Prasanna Das, Huihan Liu, Yuxun Zhou, Costas J. Spanos; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019

Abstract


Human activity recognition plays an indispensable role in a myriad of emerging context-aware services. Accurate activity recognition systems usually require the user to carry mobile or wearable devices, which is inconvenient for long-term usage. In this paper, we design WiVi, a novel human activity recognition scheme that identifies common human activities accurately and in a device-free manner via multimodal machine learning, using only commercial WiFi-enabled IoT devices and a camera. For sensing with WiFi, a new platform is developed to extract fine-grained WiFi channel information and transform it into WiFi frames. A tailored convolutional neural network model extracts high-level representative features from the WiFi frames to produce a human activity estimate. A variant of the C3D model is utilized for activity sensing via vision. WiVi then performs multimodal fusion at the decision level, combining the strengths of WiFi and vision by constructing an ensemble DNN model. Extensive experiments conducted in an indoor environment demonstrate that WiVi achieves 97.5% activity recognition accuracy and remains robust under unfavorable situations, as each modality provides complementary sensing when the other faces its limiting conditions.
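The decision-level fusion described above can be sketched as a weighted combination of the per-class probabilities produced by the two branches. This is a minimal illustration only: the function names, fixed weights, and four-class setup are assumptions for demonstration, not the paper's actual ensemble DNN, which learns the fusion from data.

```python
import math

def softmax(logits):
    """Convert a branch's raw class scores into probabilities."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_decisions(wifi_logits, vision_logits, w_wifi=0.5, w_vision=0.5):
    """Hypothetical decision-level fusion: weighted average of the
    WiFi branch's and vision branch's class probabilities.
    In WiVi the combination is learned by an ensemble DNN; fixed
    weights are used here purely as a stand-in."""
    p_wifi = softmax(wifi_logits)
    p_vision = softmax(vision_logits)
    fused = [w_wifi * a + w_vision * b for a, b in zip(p_wifi, p_vision)]
    return fused.index(max(fused)), fused

# Example with 4 activity classes: the WiFi branch is near-uniform
# while the vision branch is confident in class 2, so the fused
# decision follows the more confident modality.
label, probs = fuse_decisions([0.1, 0.2, 0.3, 0.1], [0.0, 0.1, 2.0, 0.2])
```

Because the fusion operates on probabilities rather than raw features, either branch can dominate when the other modality faces its limiting conditions (e.g., occlusion for the camera), which is the complementary behavior the abstract highlights.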

Related Material


@InProceedings{Zou_2019_CVPR_Workshops,
author = {Zou, Han and Yang, Jianfei and Prasanna Das, Hari and Liu, Huihan and Zhou, Yuxun and Spanos, Costas J.},
title = {WiFi and Vision Multimodal Learning for Accurate and Robust Device-Free Human Activity Recognition},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}