CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks

Abstract

In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks: small modifications of the input that change the model's predictions. Besides the rigorously studied $\ell_p$-bounded additive perturbations, semantic perturbations (e.g., rotation, translation) raise serious concerns about deploying ML systems in the real world. It is therefore important to provide provable guarantees for deep learning models against semantically meaningful input transformations. In this paper, we propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds that can be used in general attack settings. We estimate the probability that a model fails if the attack is sampled from a certain distribution. Our theoretical findings are supported by experimental results on different datasets.
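To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how a Chernoff-style failure bound can be estimated from samples. It assumes attack parameters (here, rotation angles) are drawn from a known distribution and that a nonpositive classification margin counts as a failure; the helper name model_margin and the margin statistic are illustrative assumptions, and the empirical moment estimate would need an additional concentration argument, as in the paper, to yield a rigorous certificate.

import numpy as np

def chernoff_failure_bound(stats, t, lambdas):
    # Chernoff bound: P(X >= t) <= min over lambda > 0 of
    #   exp(-lambda * t) * E[exp(lambda * X)].
    # E[exp(lambda * X)] is replaced by an empirical mean over i.i.d.
    # samples of X; a rigorous certificate must also bound the estimation
    # error of this mean (handled in the paper, omitted in this sketch).
    stats = np.asarray(stats, dtype=float)
    bounds = [np.exp(-lam * t) * np.mean(np.exp(lam * stats)) for lam in lambdas]
    return min(min(bounds), 1.0)  # a probability bound never exceeds 1

def certify_rotation(model_margin, x, n_samples=10_000, max_angle=30.0, seed=0):
    # model_margin(x, angle) is a hypothetical helper returning the score of
    # the true class minus the best other class after rotating x by `angle`.
    rng = np.random.default_rng(seed)
    angles = rng.uniform(-max_angle, max_angle, size=n_samples)
    # Failure means the margin drops to 0 or below, so we bound
    # P(-margin >= 0) by taking X = -margin and t = 0.
    stats = np.array([-model_margin(x, a) for a in angles])
    lambdas = np.linspace(0.01, 5.0, 100)  # grid search over lambda
    return chernoff_failure_bound(stats, t=0.0, lambdas=lambdas)

The grid search over lambda is the simplest way to approximate the minimization in the Chernoff bound; any lambda gives a valid (if looser) bound, so the grid only affects tightness, not correctness.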

Publication
In the AAAI Conference on Artificial Intelligence 2022 (AAAI 2022)
Aleksandr Petiushko
Sr. Director, Head of AI Research / Adjunct Professor / PhD

Principal R&D Researcher (15+ years of experience), R&D Technical Leader (10+ years of experience), and R&D Manager (8+ years of experience). Runs and manages industrial research and academic collaborations (35+ publications, 30+ patents). Hires and builds AI/ML teams. Inspired by theoretical computer science and how it changes the world.