Face Anonymization: A survey of what works and what doesn’t
November 20, 2020
Nikhil Nagaraj


From schools to shopping centers, closed-circuit and monitoring cameras are now found everywhere. With their growing prevalence have come concerns about individual privacy and data protection. Even when these cameras do not use facial-recognition technology, the current reality is that wherever there are security cameras, whether in large cities, buildings, or other locations, there are people looking at the footage, irrespective of the GDPR or other regulations. Watching this footage invades the privacy of its subjects, and the information obtained is protected only by the discretion of the observer. Anonymizing these images would improve compliance with data protection regulations while also increasing public confidence in such monitoring systems.

A major step in any endeavor to protect the privacy of individuals is the removal of personally identifiable information, of which facial information is a large part. While simple techniques such as blurring or obfuscating detected faces serve this purpose, there is a risk of the identity being revealed if the face detection algorithm fails for a few frames. One way to mitigate this is to replace real faces with artificially generated ones. If a real face is not replaced for a few frames, this is far less obvious to someone watching the video than a momentary lapse in blurring or obfuscation.
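To make the baseline concrete, here is a minimal sketch of the blurring/obfuscation approach mentioned above, using pixelation of a detected face's bounding box. This is an illustrative example, not code from DeepPrivacy or CLEANIR; the bounding box is assumed to come from some upstream face detector, and NumPy alone is used to keep the sketch self-contained.

```python
import numpy as np

def pixelate_region(frame, box, block=8):
    """Anonymize a face by pixelating its bounding box (x, y, w, h).

    Each block x block tile inside the box is replaced by its mean color,
    which is the classic mosaic-style obfuscation. If the detector misses
    the face for a frame, the region is left untouched -- exactly the
    failure mode the blog post warns about.
    """
    x, y, w, h = box
    # Crop to whole tiles so the reshape below is exact.
    h, w = h - h % block, w - w % block
    face = frame[y:y + h, x:x + w]
    # Average over block x block tiles, then tile the means back up.
    tiles = face.reshape(h // block, block, w // block, block, -1).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(tiles, block, axis=0), block, axis=1)
    frame[y:y + h, x:x + w] = coarse
    return frame

# Example: a synthetic 64x64 single-channel "frame" with a 32x32 "face" region.
frame = np.arange(64 * 64, dtype=float).reshape(64, 64, 1)
out = pixelate_region(frame.copy(), (16, 16, 32, 32))
```

In a real pipeline, `box` would be produced per frame by a face detector, which is precisely where the frame-level fragility comes from: one missed detection leaves one fully identifiable frame.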

This blog explores two recently released facial anonymization techniques, DeepPrivacy and CLEANIR, and expounds on their strengths and weaknesses.

Interested in the results and more? Read the full interactive blogpost on our Medium blog.
