Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Y. Zhao
Proceedings of the 29th USENIX Security Symposium (USENIX Security 2020)
[Full Text in PDF Format, 946KB]
Today's proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train highly accurate facial recognition models of individuals without their knowledge. We need tools to protect ourselves from potential misuses of unauthorized facial recognition systems. Unfortunately, no practical or effective solutions exist.
In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes (we call them "cloaks") to their own photos before releasing them. When used to train facial recognition models, these "cloaked" images produce functional models that consistently cause normal images of the user to be misidentified. We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are "leaked" to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate. We achieve 100% success in experiments against today's state-of-the-art facial recognition services. Finally, we show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt image cloaks.
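The cloaking step can be pictured as a small, budgeted perturbation that drags an image's feature-space embedding toward another identity. The sketch below illustrates that idea only; it is not the authors' implementation. Fawkes uses face-specific feature extractors and a DSSIM perceptual budget, whereas this sketch assumes a generic torchvision ResNet embedding and an L-infinity pixel budget, and the `cloak` helper is hypothetical.

```python
# Minimal sketch of feature-space cloaking (illustrative only, not Fawkes itself).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# Stand-in feature extractor; Fawkes uses face recognition feature extractors.
model = resnet50(weights=ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # penultimate features serve as the embedding
model.eval()

def cloak(image, target_image, budget=0.03, steps=100, lr=0.01):
    """Perturb `image` (a [0,1] tensor, shape 1x3x224x224) so its embedding
    approaches that of `target_image`, keeping |delta| <= budget per pixel."""
    with torch.no_grad():
        target_feat = model(target_image)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = model((image + delta).clamp(0, 1))
        loss = F.mse_loss(feat, target_feat)  # pull embedding toward the target identity
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)     # keep the change imperceptible
    return (image + delta).detach().clamp(0, 1)
```

A tracker who scrapes only cloaked photos fits its model to the shifted embeddings, so normal, uncloaked photos of the user later fall outside the learned region and are misidentified.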