Future Frame Prediction for Anomaly Detection – A New Baseline

Wen Liu, Weixin Luo, Dongze Lian, Shenghua Gao; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6536-6545

Abstract


Anomaly detection in videos refers to the identification of events that do not conform to expected behavior. However, almost all existing methods tackle the problem by minimizing the reconstruction errors of training data, which cannot guarantee a larger reconstruction error for an abnormal event. In this paper, we propose to tackle the anomaly detection problem within a video prediction framework. To the best of our knowledge, this is the first work that leverages the difference between a predicted future frame and its ground truth to detect an abnormal event. To predict a future frame with higher quality for normal events, in addition to the commonly used appearance (spatial) constraints on intensity and gradient, we also introduce a motion (temporal) constraint into video prediction by enforcing the optical flow between predicted frames and ground truth frames to be consistent; this is the first work that introduces a temporal constraint into the video prediction task. Such spatial and motion constraints facilitate future frame prediction for normal events, and consequently help identify abnormal events that do not conform to the expectation. Extensive experiments on both a toy dataset and several publicly available datasets validate the effectiveness of our method in terms of its robustness to the uncertainty in normal events and its sensitivity to abnormal events.
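To make the core idea concrete, the following is a minimal NumPy sketch (not the authors' code) of how a per-frame prediction error can be turned into an anomaly score: compute the PSNR between each predicted frame and its ground truth, then min-max normalize the PSNRs over the video so that low scores flag frames that were predicted poorly, i.e. likely anomalies. The frame shapes and noise levels below are illustrative assumptions.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted frame and its
    ground truth; higher PSNR means the frame was predicted well."""
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def regularity_scores(psnrs):
    """Min-max normalize per-frame PSNRs to [0, 1]; low scores
    indicate large prediction error, i.e. likely abnormal frames."""
    psnrs = np.asarray(psnrs, dtype=float)
    lo, hi = psnrs.min(), psnrs.max()
    return (psnrs - lo) / (hi - lo + 1e-8)

# Toy usage: frames 0-3 are predicted accurately; frame 4 gets a
# large (simulated) prediction error standing in for an anomaly.
rng = np.random.default_rng(0)
gt = [rng.random((8, 8)) for _ in range(5)]
pred = [g + rng.normal(0.0, 0.01, g.shape) for g in gt[:4]]
pred.append(gt[4] + rng.normal(0.0, 0.5, gt[4].shape))
scores = regularity_scores([psnr(p, g) for p, g in zip(pred, gt)])
```

In the full method, a frame is flagged as anomalous when its score falls below a threshold; the prediction itself comes from a network trained only on normal data under the spatial and motion constraints described above.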

Related Material


BibTeX:
@InProceedings{Liu_2018_CVPR,
author = {Liu, Wen and Luo, Weixin and Lian, Dongze and Gao, Shenghua},
title = {Future Frame Prediction for Anomaly Detection – A New Baseline},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}