The Coming Wave of Regulation over Facial Recognition
More applications using facial recognition are inevitable in the public and private sectors, and they will generate plenty of new rules and regulations.
Facial recognition is one of the most widely adopted applications of artificial intelligence. It’s almost certainly the most controversial.
Riding on the smart cameras and computer vision algorithms that have started to permeate every device, facial recognition is AI’s ultimate double-edged sword. It’s a privacy advocate’s nightmare but also the key to simpler, faster, stronger, and more seamless device-level authentication and powerfully personalized customer service for consumers everywhere. It’s an authoritarian government’s ultimate control tool but also an amazingly effective resource for identifying and nabbing criminals.
As AI comes under increased scrutiny in many countries, we are sure to see calls for tighter regulation of the technology’s use in facial recognition. Some of the new regulations may apply across the board to all applications of facial recognition. Others may be incrementally applied within the context of existing regulations governing law enforcement, healthcare, e-commerce, social media, autonomous vehicles, and other domains.
With privacy protections a key concern, many regulations over facial recognition will focus on giving consumers the right to opt out of its uses, examine how it’s being used to target them, receive a full accounting of how their facial data is being managed, and request that the data be permanently expunged from corporate databases. In fact, these rights may already apply in many jurisdictions, now that the EU’s General Data Protection Regulation (GDPR) and similar mandates are on the books.
But given the complex issues surfaced by AI-driven facial recognition, there are almost certainly going to be additional regulations specifically focused on this technology. Over the next several years, the following new regulatory structures are likely to be imposed:
Citizen oversight: At the national level, we’re likely to see citizen oversight boards, established through legislation or referendum, that provide watchdog functions over the use of facial recognition in the public and private sectors. These will be established on the model of boards that oversee enforcement of laws on racial profiling, civil rights, police brutality, government corruption, and other sensitive matters of civic concern. With regard to facial recognition, their core job will be to flag violations or abuses of the laws on the books, and possibly to propose new regulations that close gaps in current legislation. They will also catalyze public discussions on how to encourage the socially beneficial use of facial recognition while mitigating its downsides.
Prior consent: In such relationship-oriented domains as e-commerce, mobile applications, and social media, there will be more laws requiring that companies obtain prior consent before collecting individuals’ images for facial recognition. These regulations will specify the locations and circumstances where such prior consent must be obtained, as well as the procedures, formats, and channels through which consumers must provide consent, and through which they may opt out of, modify, or revoke that consent.
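To make that consent lifecycle concrete, here is a minimal sketch in TypeScript of how a facial-recognition consent record might be modeled, covering grant, scope checks, and revocation. The type and function names are illustrative assumptions, not drawn from any specific regulation or standard schema.

```typescript
// Hypothetical consent record for facial-recognition data capture.
// All names and fields are illustrative assumptions, not a standard schema.
type ConsentScope = "authentication" | "personalization" | "analytics";

interface FacialRecognitionConsent {
  subjectId: string;                       // pseudonymous identifier for the consumer
  grantedAt: Date;                         // when consent was first obtained
  channel: "web" | "mobile" | "in-store";  // where consent was captured
  scopes: ConsentScope[];                  // specific uses the subject agreed to
  revokedAt?: Date;                        // set when the subject withdraws consent
}

// Consent is valid only if it covers the requested use and has not been revoked.
function isConsentValid(c: FacialRecognitionConsent, scope: ConsentScope): boolean {
  return c.scopes.includes(scope) && c.revokedAt === undefined;
}

// Revocation keeps an audit trail rather than deleting the record outright.
function revokeConsent(c: FacialRecognitionConsent): FacialRecognitionConsent {
  return { ...c, revokedAt: new Date() };
}
```

Keeping the revoked record, rather than deleting it, is one plausible way to satisfy both a revocation request and the need to prove afterward that consent was once obtained.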
Visible notifications: In both public and private spaces, more regulations will mandate that facility owners post visible notices that they are using facial recognition, and perhaps also push those notices, via proximity-sensing technologies, to people’s portable devices. Rather than go through the cumbersome process of gaining prior consent through online forms and the like, more people will consent to facial recognition, or withhold consent, simply by entering or avoiding a space that is equipped with the requisite signage. And as more facial recognition happens surreptitiously online, such as through browser-based machine learning tools, there will be calls for visible on-screen notifications in web and mobile apps to signal that facial recognition features are being used.
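As one illustration of an on-screen notification, the sketch below shows how a web app might gate a browser-based face-recognition feature behind an explicit visible notice. The dialog text and the loadFaceModel/startFaceRecognition functions are hypothetical placeholders for whatever detection library the application actually uses.

```typescript
// Hypothetical gate: show a visible notice before any in-browser face
// recognition runs. loadFaceModel() and startFaceRecognition() stand in
// for whatever detection library the application uses.
async function enableFaceFeatures(): Promise<void> {
  const notice =
    "This page uses facial recognition to personalize your experience. " +
    "Continue to allow it, or cancel to opt out.";

  // window.confirm is a stand-in for a proper, accessible consent dialog.
  const accepted = window.confirm(notice);
  if (!accepted) {
    console.info("Face recognition disabled: user declined the notice.");
    return;
  }

  await loadFaceModel();    // e.g., fetch model weights over HTTPS
  startFaceRecognition();   // begin processing the camera stream
}

// Placeholder declarations so the sketch is self-contained.
declare function loadFaceModel(): Promise<void>;
declare function startFaceRecognition(): void;
```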
Ad-hoc opt-outs: In the public sphere, countries that wish to safeguard anonymity may legislate protections for people who want to block facial recognition in ad-hoc circumstances. For example, some people may choose to wear special face coverings or eyeglasses, apply pigmentation, or use other adversarial masking techniques that prevent the algorithms from positively identifying them in real time. At the very least, some jurisdictions may decide that people have the right to apply selective post-hoc masking to their images when they are stored in non-sensitive commercial and public databases. This will be a fraught topic in societies that ban face masks and other cloaking devices on the legitimate grounds that they may prevent terrorists and other criminals from being identified.
Subject sovereignty: Consistent with the requirements of GDPR and similar mandates, many jurisdictions will pass regulations ensuring that facial-recognition subjects have the legal right to know what photos of them have been collected, how those photos are being stored and processed, and how they can de-identify, limit, or deny downstream uses of those photos as well as any inferences derived from facial recognition. Many countries will also strengthen regulations ensuring that facial-recognition subjects can appeal when they believe they have been misidentified by a facial recognition system or when they believe that an accurate facial recognition is being used for an illegal or inappropriate purpose. Just as important, there will almost certainly be laws governing the circumstances in which facial recognition evidence is and isn’t admissible in criminal and civil court proceedings.
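As a rough illustration of what honoring those rights could look like at the application level, the sketch below models a data-subject request handler over stored facial data. The record shape, the request types, and the function names are hypothetical; a real implementation would need to align with the specific statute involved.

```typescript
// Hypothetical handler for data-subject requests over stored facial data.
// Record shapes and names are illustrative, not tied to any real system.
interface StoredFaceRecord {
  recordId: string;
  subjectId: string;
  capturedAt: Date;
  source: string;            // e.g., "store-camera-7" or "mobile-app"
  downstreamUses: string[];  // e.g., ["ad-targeting", "loss-prevention"]
}

type SubjectRequest =
  | { kind: "access"; subjectId: string }                            // what do you hold on me?
  | { kind: "erase"; subjectId: string }                             // expunge my facial data
  | { kind: "restrict"; subjectId: string; disallowedUse: string };  // limit a downstream use

// Returns the database as it stands after the request is honored.
function handleRequest(db: StoredFaceRecord[], req: SubjectRequest): StoredFaceRecord[] {
  const mine = db.filter((r) => r.subjectId === req.subjectId);
  switch (req.kind) {
    case "access":
      // A real system would return a formal report to the subject.
      console.info(`Access report: ${mine.length} record(s) held for this subject.`);
      return db;
    case "erase":
      // Permanent expungement of the subject's records.
      return db.filter((r) => r.subjectId !== req.subjectId);
    case "restrict":
      // Strip the disallowed downstream use from the subject's records.
      return db.map((r) =>
        r.subjectId === req.subjectId
          ? { ...r, downstreamUses: r.downstreamUses.filter((u) => u !== req.disallowedUse) }
          : r
      );
  }
}
```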
Debiasing requirements: Given all the concern about facial recognition being used for racial, sexual, and other profiling, many jurisdictions will mandate that enterprises regularly demonstrate that their algorithms have been debiased with respect to gender, race, ethnicity, sexual orientation, and other protected attributes. This will require regular retraining and recertification of facial recognition algorithms so that their rates of false positives and false negatives are consistent across these attributes. Just as important, there will be requirements to ensure that members of protected groups are not inadvertently and falsely tagged as criminals, as animals, or with other unfortunate and spurious labels.
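A hedged sketch of what such a recertification check might look like: compute false-positive and false-negative rates per protected group and flag any group whose error rates diverge from a baseline by more than a chosen tolerance. The data structures and the 2-percentage-point default tolerance are illustrative assumptions, not regulatory thresholds.

```typescript
// Illustrative per-group error-rate parity check. The counts and the
// tolerance are assumptions for the sketch, not regulatory values.
interface GroupOutcomes {
  group: string;            // e.g., a gender, race, or age bracket
  falsePositives: number;   // non-matches incorrectly flagged as matches
  falseNegatives: number;   // true matches the system missed
  negatives: number;        // total genuine non-matches evaluated
  positives: number;        // total genuine matches evaluated
}

function errorRates(g: GroupOutcomes): { fpr: number; fnr: number } {
  return {
    fpr: g.falsePositives / g.negatives,
    fnr: g.falseNegatives / g.positives,
  };
}

// Flag groups whose false-positive or false-negative rate drifts more
// than `tolerance` (absolute) from the rates of a reference baseline.
function findDisparities(
  groups: GroupOutcomes[],
  baseline: GroupOutcomes,
  tolerance = 0.02
): string[] {
  const base = errorRates(baseline);
  return groups
    .filter((g) => {
      const r = errorRates(g);
      return (
        Math.abs(r.fpr - base.fpr) > tolerance ||
        Math.abs(r.fnr - base.fnr) > tolerance
      );
    })
    .map((g) => g.group);
}
```

A report from a check like this, run against a held-out evaluation set after each retraining, is the kind of artifact a recertification regime could plausibly ask enterprises to produce.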
Embedded technological safeguards: Though this is a bit speculative, it may in the future become possible to incorporate logic into AI accelerator chipsets that deactivates facial recognition algorithms under programmatic control, or automatically under various temporal, geospatial, and other environmental circumstances. Or it may become feasible to build those chip-level safeguards into image capture systems. It’s safe to assume that, as such technological possibilities unfold, regulators may require that they be built into mass-market computer vision systems, smartphones, and other devices used for facial recognition.
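Since the chip-level behavior is still speculative, the sketch below only illustrates, in software, what deactivation under temporal and geospatial circumstances could mean: a policy gate a device or driver might consult before running recognition. The policy shape, the quiet-hours rule, and the geofence math are all illustrative assumptions.

```typescript
// Speculative sketch of a policy gate consulted before face recognition
// runs. Field names, the quiet-hours rule, and the geofence math are
// illustrative assumptions, not a real chipset or driver interface.
interface RestrictionZone {
  latitude: number;
  longitude: number;
  radiusMeters: number;     // recognition disabled inside this circle
}

interface RecognitionPolicy {
  quietHours: { startHour: number; endHour: number };  // local 24-hour clock
  restrictedZones: RestrictionZone[];
}

// Rough equirectangular distance; adequate for small geofences.
function distanceMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371000;
  const toRad = (d: number) => (d * Math.PI) / 180;
  const x = toRad(lon2 - lon1) * Math.cos(toRad((lat1 + lat2) / 2));
  const y = toRad(lat2 - lat1);
  return Math.sqrt(x * x + y * y) * R;
}

function recognitionAllowed(
  policy: RecognitionPolicy,
  now: Date,
  lat: number,
  lon: number
): boolean {
  const hour = now.getHours();
  const { startHour, endHour } = policy.quietHours;
  const inQuietHours =
    startHour <= endHour
      ? hour >= startHour && hour < endHour
      : hour >= startHour || hour < endHour;  // window wraps past midnight

  const inRestrictedZone = policy.restrictedZones.some(
    (z) => distanceMeters(lat, lon, z.latitude, z.longitude) <= z.radiusMeters
  );

  return !inQuietHours && !inRestrictedZone;
}
```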
Hopefully, these safeguards will be implemented proactively by technology vendors, as Microsoft President Brad Smith recently proposed, so that regulators won’t encounter inertia and resistance as they address growing popular concerns over the technology’s misuse.
Like it or not, facial recognition is coming to every facet of our lives. Let’s hope that the consumer, commercial, government, and legal establishments converge on a consensus approach for realizing the technology’s promise while mitigating the risks and keeping the inevitable regulatory regime from becoming unduly draconian.