Relaxed Spatio-Temporal Deep Feature Aggregation for Real-Fake Expression Prediction

Savas Ozkan, Gozde Bozdagi Akar; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 3094-3100

Abstract


Frame-level visual features are generally aggregated over time with techniques such as LSTM, Fisher Vectors, NetVLAD etc. to produce a robust video-level representation. Here, we introduce a learnable aggregation technique whose primary objective is to retain the short-time temporal structure between frame-level features, together with their spatial interdependencies, in the representation. Moreover, it can easily be adapted to cases where training samples are very scarce. We evaluate the method on a real-fake expression prediction dataset to demonstrate its superiority. Our method obtains a 65% score on the test dataset in the official MAP evaluation, differing from the best reported result in the ChaLearn Challenge (i.e., 66.7%) by only one misclassified decision. Lastly, we believe that this method can be extended to other problems such as action/event recognition in the future.
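
To make the aggregation idea concrete, the sketch below is a minimal illustration of window-based spatio-temporal pooling, not the authors' implementation; the function name aggregate_features and the window_size parameter are hypothetical choices for illustration only. It groups frame-level features into short temporal windows, concatenates features within each window to preserve local temporal ordering, and then pools across windows into a fixed-length video-level descriptor.

import numpy as np

def aggregate_features(frame_features: np.ndarray, window_size: int = 4) -> np.ndarray:
    """Aggregate per-frame features (T x D) into a single video-level vector.

    Frames are grouped into short non-overlapping temporal windows; features
    are concatenated within each window to keep the short-time temporal order,
    then the windows are average-pooled into a fixed-length descriptor.
    """
    T, D = frame_features.shape
    # Pad by repeating the last frame so T is a multiple of window_size.
    pad = (-T) % window_size
    if pad:
        frame_features = np.vstack(
            [frame_features, np.repeat(frame_features[-1:], pad, axis=0)]
        )
    # Reshape to (num_windows, window_size * D): concatenation inside each
    # window retains the local temporal structure of the frame features.
    windows = frame_features.reshape(-1, window_size * D)
    # Average-pool across windows to obtain the video-level representation.
    video_descriptor = windows.mean(axis=0)
    # L2-normalize, as is common for VLAD/Fisher-style encodings.
    return video_descriptor / (np.linalg.norm(video_descriptor) + 1e-12)

# Usage example with random frame-level features standing in for CNN activations.
if __name__ == "__main__":
    feats = np.random.rand(32, 128)              # 32 frames, 128-D features
    video_vec = aggregate_features(feats, 4)
    print(video_vec.shape)                       # (512,)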

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Ozkan_2017_ICCV,
author = {Ozkan, Savas and Bozdagi Akar, Gozde},
title = {Relaxed Spatio-Temporal Deep Feature Aggregation for Real-Fake Expression Prediction},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}