What the Fawkes: Facial Recognition, Digital Masking, and AI
The Guy Fawkes mask is often associated with hacktivists and protesters, but is there a deeper lesson in privacy and responsibility in technology to be learned?
In recent years, facial recognition technology has gone through cycles of falling in and out of favor, adding to complex questions about artificial intelligence and privacy. Neither facial recognition nor AI is new, but the ways they are applied open up business potential as well as liability.
An interview for an upcoming InformationWeek series on generative AI shed additional light on the intersection of these technologies and why the public may need proactive deterrents for both.
Ben Y. Zhao, professor of computer science at the University of Chicago, led a team that worked on a tool called Fawkes, which is meant to disrupt unknown third parties from creating facial recognition profiles to track individuals by using images found on social media. The conversation with Zhao on Fawkes, and his later work on Nightshade, will appear next week on InformationWeek.
The name of the Fawkes tool was derived from Guy Fawkes, whose likeness was adopted in the form of a mask as a symbol by hacktivist group Anonymous and others who seek to maintain anonymity while engaging in various forms of protest.
The Fawkes tool was developed around the time Clearview.ai was sued for scraping data from social media to flesh out its facial recognition database and tools, which were sold to law enforcement.
Not long after that lawsuit, Clearview.ai was back in headlines, this time offering its services to Ukraine for the potential use of facial recognition to spot Russian agents.
This exemplifies some of the back and forth associated with facial recognition. Meta bowed out of facial recognition for tagging photos on Facebook. The Federal Trade Commission banned Rite Aid from using facial recognition for five years after it was discovered the pharmacy chain’s use of biometric technology unjustly targeted and tagged minorities as potential shoplifters.
And those are just some recent steps in this dance between innovating with AI and facial recognition and addressing the issues they raise. Over the past five years there have been discussions of facial recognition's risks, the liability concerns associated with AI, and potential uses of facial recognition that respect civil rights and individual privacy.
This particular dilemma between privacy, security, and opportunity dates back even further. The rush of interest in biometrics that arose after the Sept. 11 terrorist attacks -- just a few years shy of a quarter century ago -- never completely went away. The effort to identify potential attackers became paramount, a natural reaction steeped in a desire to restore a sense of safety.
This led to a boom of business for companies that promised to sift through digital images to catch bad actors. This also opened the door for potential authoritarian abuses. Regardless, businesses continue to look for ways to make facial recognition part of the identification equation. But is that viable in a world where the right to privacy and the right to be forgotten continue to gain momentum and strength?
This episode of DOS Won’t Hunt explores the tension at that intersection of AI, security, facial recognition, and lessons that could still be learned from Guy Fawkes and other masks.