Building Secure and Engaging Video Communication by Using Monitor Illumination

Jun Myeong Choi, Johnathan Leung, Noah Frahm, Max Christman, Gedas Bertasius, Roni Sengupta; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 4377-4386

Abstract


In this paper, we develop a neural network that detects a mismatch between the light emitted by a monitor and the light reflected from the face of a user sitting in front of a monitor-webcam setup. This can be used to detect the presence of a deepfake virtual avatar or an inattentive attendee, e.g., a student in a virtual education environment, to create a secure and engaging virtual communication platform. We perform this detection passively, without requiring the authenticator to intermittently project specific patterns on the screen; hence it neither disrupts the meeting flow nor alerts bad actors. We develop a personalized model in which the authenticator requires each team member to watch ~30 minutes of video content only once on their monitor while their face is captured with a webcam. We then train a neural network that learns to predict the monitor content from the facial image and compares it with the intended monitor content to distinguish `on-task' (real) from `off-task' (fake). This personalized network can then detect `off-task' scenarios, where the monitor lighting does not match the face, for any unseen user appearance. Our method achieves a binary classification accuracy of 70%, surpassing a baseline that always predicts `on-task' with 58% accuracy.
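The core decision step described above, comparing content predicted from the face against the content actually shown, can be sketched as a similarity test between two embeddings. This is a minimal illustrative sketch, not the authors' implementation: the function names, the toy vectors standing in for network outputs, and the cosine-similarity threshold are all assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_on_task(predicted_content, intended_content, threshold=0.5):
    # 'on-task' if the monitor content predicted from the user's face
    # is sufficiently similar to the content actually displayed.
    # The threshold value here is a hypothetical placeholder.
    return cosine_similarity(predicted_content, intended_content) >= threshold

# Toy embeddings standing in for the network's predicted/intended content.
intended = np.array([1.0, 0.0, 0.0])
matching = np.array([0.9, 0.1, 0.0])     # face lighting consistent with screen
mismatched = np.array([0.0, 1.0, 0.0])   # face lighting inconsistent (deepfake/off-task)

print(classify_on_task(matching, intended))    # True  -> 'on-task'
print(classify_on_task(mismatched, intended))  # False -> 'off-task'
```

In the paper, the prediction of monitor content from the facial image is learned per user from the ~30-minute enrollment video; the sketch only shows how a mismatch could be turned into a binary decision.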

Related Material


@InProceedings{Choi_2024_CVPR,
    author    = {Choi, Jun Myeong and Leung, Johnathan and Frahm, Noah and Christman, Max and Bertasius, Gedas and Sengupta, Roni},
    title     = {Building Secure and Engaging Video Communication by Using Monitor Illumination},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {4377-4386}
}