Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
Ben Y. Zhao
Proceedings of the 40th IEEE Symposium on Security and Privacy (Oakland 2019)
[Full Text in PDF Format, 639KB]
Lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor attacks, where hidden associations or triggers override normal classification to produce unexpected results. For example, a backdoored model always identifies a face as Bill Gates if a specific symbol is present in the input. Backdoors can stay hidden indefinitely until activated by an input, and they pose a serious risk to many security- or safety-critical applications, e.g., biometric authentication systems or self-driving cars.
We present the first robust and generalizable detection and mitigation system for DNN backdoor attacks. Our techniques identify backdoors and reconstruct possible triggers. We identify multiple mitigation techniques via input filters, neuron pruning, and unlearning. We demonstrate their efficacy via extensive experiments on a variety of DNNs, against two types of backdoor injection methods identified by prior work. Our techniques also prove robust against a number of variants of the backdoor attack.
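As a rough illustration of the detection idea: after reverse-engineering a candidate trigger for each output label, a label whose trigger is abnormally small (in L1 norm) is a likely backdoor target. The sketch below shows one plausible way to flag such outliers with a median-absolute-deviation (MAD) test; the function name, the norm values, and the threshold of 2.0 are illustrative assumptions, not taken from the paper's full text.

```python
import numpy as np

def flag_backdoor_labels(l1_norms, threshold=2.0):
    """Flag labels whose reverse-engineered trigger is abnormally small.

    l1_norms: one L1 norm per output label, measured on the candidate
    trigger reconstructed for that label. An infected label tends to
    admit a much smaller trigger than clean labels do.
    """
    norms = np.asarray(l1_norms, dtype=float)
    median = np.median(norms)
    mad = np.median(np.abs(norms - median))
    # 1.4826 scales MAD to a consistent std-dev estimate under normality
    anomaly_index = np.abs(norms - median) / (1.4826 * mad)
    # Only unusually *small* triggers suggest a backdoor target label
    flagged = [i for i, (n, a) in enumerate(zip(norms, anomaly_index))
               if a > threshold and n < median]
    return flagged, anomaly_index

# Hypothetical per-label trigger norms; label 3's trigger is tiny
norms = [52.0, 49.5, 55.1, 6.2, 50.8, 53.3]
flagged, idx = flag_backdoor_labels(norms)
print(flagged)  # → [3]
```

Here label 3 is flagged because its trigger norm (6.2) sits far below the median of the other labels' norms, which is exactly the asymmetry the detection exploits.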