Real-world Attack on MTCNN Face Detection System

Abstract

Recent studies have shown that deep learning approaches achieve remarkable results on the face detection task. On the other hand, these advances have given rise to a new problem concerning the security of deep convolutional neural network (DCNN) models, unveiling potential risks for DCNN-based applications. Even minor input changes in the digital domain can result in the network being fooled. It has since been shown that some deep learning-based face detectors are prone to adversarial attacks not only in the digital domain but also in the real world. In this paper, we investigate the security of the well-known cascaded CNN face detection system, MTCNN, and introduce an easily reproducible and robust way to attack it. We propose different face attributes printed on an ordinary black-and-white printer and attached either to a medical face mask or to the face directly. Our approach is capable of breaking the MTCNN detector in a real-world scenario.
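To make the evaluation scenario concrete, the sketch below shows one way such an attack could be verified in practice: run MTCNN on a clean photo and on a photo with the printed patch attached, and compare the number of detected faces. This is a minimal illustration assuming the open-source `mtcnn` Python package (the ipazc implementation of the same cascaded detector) and OpenCV; the image file names are placeholders, and this is not the authors' evaluation code.

```python
# Minimal sketch: compare MTCNN detections on a clean photo vs. a patched one.
# Assumes `pip install mtcnn opencv-python`; file names below are hypothetical.
import cv2
from mtcnn import MTCNN

def count_detected_faces(image_path: str) -> int:
    """Run MTCNN on an image and return the number of detected faces."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # MTCNN expects RGB input
    detector = MTCNN()
    return len(detector.detect_faces(rgb))

# Placeholder file names: a clean face photo and the same face wearing
# the printed adversarial patch.
for path in ["face_clean.jpg", "face_with_patch.jpg"]:
    print(path, "->", count_detected_faces(path), "face(s) detected")
```

Under this setup, a successful physical attack would drive the detection count for the patched photo to zero while the clean photo still yields one detection.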

Publication
In 2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON 2019)
Aleksandr Petiushko
Director, Head of ML Research / Adjunct Professor / PhD

Principal R&D Researcher (15+ years of experience), R&D Technical Leader (10+ years of experience), and R&D Manager (7+ years of experience). Running and managing industrial research and academic collaborations (35+ publications, 30+ patents). Inspired by theoretical computer science and how it changes the world.