Authors
Ping-yeh Chiang, Michael J Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein
Publication date
2020/7/7
Conference
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020)
Description
Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date. While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive. This work is motivated by recent progress on certified classification by randomized smoothing. We start by presenting a reduction from object detection to a regression problem. Then, to enable certified regression, where standard mean smoothing fails, we propose median smoothing, which is of independent interest. We obtain the first model-agnostic, training-free, and certified defense for object detection against ℓ2-bounded attacks.
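The median-smoothing idea described above can be sketched in a few lines: replace the base regressor's output with the empirical median of its outputs under Gaussian input noise, and use shifted percentiles to bound how much an ℓ2-bounded perturbation can move that median. The sketch below is illustrative, not the paper's implementation; the function name and parameters are assumptions, and it uses raw empirical percentiles without the finite-sample confidence correction a real certificate would need.

```python
import numpy as np
from scipy.stats import norm

def median_smoothing_bounds(f, x, sigma, radius, n_samples=1000, seed=0):
    """Sketch of median smoothing for a scalar regressor f.

    Returns the smoothed (median) prediction and empirical lower/upper
    bounds on it under any perturbation with l2 norm <= radius.
    Hypothetical helper; omits the finite-sample confidence correction.
    """
    rng = np.random.default_rng(seed)
    # Evaluate f on Gaussian-perturbed copies of the input x.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    values = np.sort([f(x + d) for d in noise])
    # Smoothed prediction: empirical median of f under the noise.
    median = values[n_samples // 2]
    # An adversarial shift of size `radius` moves the Gaussian by at most
    # radius/sigma standard deviations, so the median of the shifted
    # distribution lies between these two percentiles of the original one.
    p_lo = norm.cdf(norm.ppf(0.5) - radius / sigma)
    p_hi = norm.cdf(norm.ppf(0.5) + radius / sigma)
    lower = values[int(np.floor(p_lo * n_samples))]
    upper = values[min(int(np.ceil(p_hi * n_samples)), n_samples - 1)]
    return median, lower, upper
```

Applied per coordinate of a detector's box-regression outputs, bounds of this kind let one certify that predicted boxes cannot move beyond a computable interval under any ℓ2-bounded attack, which is the reduction the paper exploits.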