Knowing When to Stop: Evaluation and Verification of Conformity to Output-Size Specifications

Chenglong Wang, Rudy Bunel, Krishnamurthy Dvijotham, Po-Sen Huang, Edward Grefenstette, Pushmeet Kohli; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 12260-12269

Abstract


Neural architectures that can generate variable-length outputs are extremely effective for applications such as Machine Translation and Image Captioning. In this paper, we study the vulnerability of these models to attacks aimed at changing the output size, which can have undesirable consequences, including increased computation and faults in downstream modules that expect outputs of a certain length. We demonstrate the existence of such attacks and show how to construct them through two key contributions. First, to overcome the difficulties posed by the discrete search space and the non-differentiable adversarial objective, we develop an easy-to-compute differentiable proxy objective that can be used with gradient-based algorithms to find output-lengthening inputs. Second, we develop a verification approach to formally prove that the network cannot produce outputs longer than a certain length. Experimental results on Machine Translation and Image Captioning models show that our adversarial output-lengthening approach can produce outputs that are 50 times longer than the input, while our verification approach can, given a model and an input domain, prove that the output length is below a certain size.
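
To illustrate the first contribution, below is a minimal sketch (not the authors' code): it assumes a toy GRU captioning decoder and takes the differentiable proxy to be the summed log-probability of the end-of-sequence (EOS) token over a fixed number of decoding steps; minimizing this proxy by gradient descent on the continuous input suppresses EOS and so lengthens the greedily decoded output. The model, constants, and hyperparameters are illustrative assumptions, not the paper's exact setup.

# Minimal sketch (not the authors' code): gradient-based output-lengthening
# attack via a differentiable proxy. The proxy is the summed log P(EOS) over a
# fixed number of decoding steps; toy decoder and constants are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, FEAT = 50, 64, 32   # toy vocabulary / state / feature sizes
BOS, EOS, STEPS = 1, 0, 20         # assumed special-token ids and proxy horizon

class ToyCaptioner(nn.Module):
    """Tiny image-captioning-style decoder: feature vector -> token sequence."""
    def __init__(self):
        super().__init__()
        self.init_h = nn.Linear(FEAT, HIDDEN)   # features -> initial GRU state
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.cell = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def eos_proxy(self, feats, steps=STEPS):
        """Differentiable proxy: sum of log P(EOS) over `steps` greedy decoding
        steps. Minimizing it pushes the decoder away from emitting EOS."""
        h = torch.tanh(self.init_h(feats))
        tok = torch.full((feats.size(0),), BOS, dtype=torch.long)
        total = 0.0
        for _ in range(steps):
            h = self.cell(self.embed(tok), h)
            logits = self.out(h)
            total = total + F.log_softmax(logits, dim=-1)[:, EOS].sum()
            tok = logits.argmax(dim=-1)          # greedy feedback (no gradient on this path)
        return total

    @torch.no_grad()
    def decode_length(self, feats, max_len=200):
        """Greedy decode until EOS (or max_len); returns the output length."""
        h = torch.tanh(self.init_h(feats))
        tok = torch.full((feats.size(0),), BOS, dtype=torch.long)
        for t in range(max_len):
            h = self.cell(self.embed(tok), h)
            tok = self.out(h).argmax(dim=-1)
            if (tok == EOS).all():
                return t + 1
        return max_len

model = ToyCaptioner()
feats = torch.randn(1, FEAT, requires_grad=True)   # continuous input under attack
opt = torch.optim.Adam([feats], lr=0.05)
print("output length before attack:", model.decode_length(feats))
for _ in range(300):                               # gradient descent on the input
    opt.zero_grad()
    loss = model.eos_proxy(feats)                  # lower => EOS less likely at every step
    loss.backward()
    opt.step()
print("output length after attack:", model.decode_length(feats))

For discrete inputs such as source sentences in Machine Translation, the same proxy would have to be combined with a search or relaxation over tokens, which is the discrete-search-space difficulty the abstract refers to.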

Related Material


[bibtex]
@InProceedings{Wang_2019_CVPR,
author = {Wang, Chenglong and Bunel, Rudy and Dvijotham, Krishnamurthy and Huang, Po-Sen and Grefenstette, Edward and Kohli, Pushmeet},
title = {Knowing When to Stop: Evaluation and Verification of Conformity to Output-Size Specifications},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}