Will Facial Recognition Thrive in the Post-Pandemic Economy?
Facial recognition is at the heart of the new normal of our lives, and many uses don’t violate privacy or civil rights. But the political and regulatory controversies will continue to burn hot for many years.
Facial recognition has become one of the most polarizing applications of artificial intelligence. Demonized for its use in government surveillance and biased algorithmic decision making, facial recognition has become a lightning rod for popular unrest in many countries.
Fueled by deepening distrust of AI, regulations are expected to tighten their grip on facial recognition over the next several years. In fact, regulating facial recognition has become one of the most salient issues in this year’s US presidential election campaign.
Defend or defund?
Biases in facial recognition applications are especially worrisome when the technology is used to direct predictive policing programs in urban areas with large disadvantaged minority populations. Indeed, a 2019 study by the US government’s National Institute of Standards and Technology found that many facial recognition algorithms appear to perpetuate racial bias, “misidentifying Asian- and African-Americans far more often than Caucasians.” The American Civil Liberties Union is tracking a growing number of wrongful arrests stemming from police misuse of facial recognition, including one recent incident.
Understood in that context, we can see legitimate concerns behind calls, such as a recent article in The Atlantic, to “defund facial recognition.” Trying to get ahead of this issue before the November elections, the Democratic-controlled US House of Representatives passed a sweeping police reform bill in June that would prohibit federal law enforcement’s use of real-time AI-powered facial recognition. The measure, which is far from certain to pass the Republican-controlled Senate or be signed by President Trump, applies only to facial recognition software in police body cameras. It also falls short of the all-out ban or moratorium that many activists have called for at the local, state, and federal levels over the past year.
Where facial recognition is concerned, the current patchwork of state and local laws -- as well as the spotty nature of regulations overseas -- exposes citizens to surveillance, privacy violations, bias, and other abuses of this technology without clear or consistent legal recourse. By the same token, vendors and users face inordinate risk if they deploy the technology aggressively.
Much of the controversy surrounding facial recognition has to do with potential abuses in law enforcement. At many AI solution vendors, employees have mounted extensive grass-roots efforts to push their employers to take a strong stand against police abuses of facial recognition. In June alone, as the Black Lives Matter protests heated up, employees at Amazon Web Services called on the firm to sever its police contracts, and over 250 Microsoft employees published an open letter demanding that the company end its work with police departments.
Some vendors have been taking a proactive stance on the matter for some time. In late 2018, Google Cloud temporarily stopped offering general-purpose facial recognition APIs, pending ongoing reviews of technology and policy implications. In 2019, Microsoft's concern over misuse of facial-recognition technologies led it to reject a request by California law enforcement to use its system in police cars and body cameras. Indeed, Microsoft first called for federal facial recognition regulation two years ago.
AI vendors navigating a minefield of issues
Sensing that the political and cultural landscape is shifting under their feet, AI solution providers have been ramping up their visibility on this issue over the past few months. Trying to hold onto potential business opportunities while navigating a minefield of political, legal, and business risks, high-profile vendors made several splashy announcements in June to address popular concerns surrounding potential abuses of facial recognition. Those announcements reflect a variety of hedging strategies that solution providers are taking on this issue:
Pausing: Every AI vendor knows that the opportunities in the facial recognition market are too lucrative to sacrifice indefinitely. In that regard, Amazon Web Services announced a yearlong pause -- in other words, a moratorium -- on offering its Rekognition technology for “police use.” The service uses AI to automate facial analysis and facial search for user verification, people counting, and public safety use cases (see the code sketch after this list). AWS didn’t state when the moratorium will begin, how it will be enforced, or whether it will apply to federal law enforcement agencies as well as state and local police forces. Nevertheless, AWS specifically stated that the moratorium will not extend to uses of Rekognition in rescuing victims of human trafficking or reuniting missing children with their families. Nor does the company’s announcement affect current use of Rekognition software by AWS customers among US police forces.
Exiting: Every AI vendor must consider whether to exit specific facial-recognition niche markets that are too politically sensitive. To that end, IBM announced that it “has sunset its general-purpose facial recognition and analysis software products.” IBM stated that it “firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” However, the announcement left IBM with plenty of wiggle room to offer special-purpose or custom facial recognition software for such opportunities as multifactor authentication and advanced visual query.
Escalating: Every AI vendor knows that it may incur insupportable liabilities for abuse of facial recognition unless legal guardrails are put in place. To that end, Microsoft explicitly escalated the issue to lawmakers, announcing that it won’t sell facial recognition technology to US police departments until there is a national law in place that is “grounded in the protection of human rights.” AWS said its moratorium should give the US Congress “enough time to implement appropriate rules” and that the company stands “ready to help if requested.” IBM called for US national policy to “encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.” All of these vendors expressed interest in working with governments to institute stronger regulations governing the ethical use of facial recognition technology.
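To make the user-verification use case mentioned under “Pausing” concrete, here is a minimal sketch of how a customer application might call Rekognition’s face-comparison API through the boto3 SDK. The file names and the 90% similarity threshold are illustrative assumptions, not values taken from any vendor documentation or policy.

```python
# Minimal sketch: verify a user by comparing a stored reference photo with a
# newly captured selfie via Amazon Rekognition's CompareFaces API.
# File names are placeholders; AWS credentials/region come from the usual config.
import boto3


def verify_user(reference_path: str, selfie_path: str, threshold: float = 90.0) -> bool:
    """Return True if the selfie matches the reference photo above the threshold."""
    client = boto3.client("rekognition")

    with open(reference_path, "rb") as ref, open(selfie_path, "rb") as selfie:
        response = client.compare_faces(
            SourceImage={"Bytes": ref.read()},
            TargetImage={"Bytes": selfie.read()},
            SimilarityThreshold=threshold,
        )

    # CompareFaces only returns matches at or above SimilarityThreshold.
    return any(match["Similarity"] >= threshold
               for match in response.get("FaceMatches", []))


if __name__ == "__main__":
    print(verify_user("employee_badge_photo.jpg", "login_selfie.jpg"))
```

The similarity threshold is the key policy lever in a deployment like this: raising it reduces false matches at the cost of more rejected legitimate users, which is exactly the kind of trade-off regulators are now scrutinizing.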
Be that as it may, some facial recognition vendors are standing firm and continuing to sell their wares to police agencies. One of the most controversial is Clearview AI, which has assembled a database of more than 3 billion facial images posted on the internet -- and sells its system to hundreds of police departments. Rather than present itself as a rogue actor, Clearview AI encourages its customers to use its facial recognition solution responsibly. The company provides a search engine into which a police officer can upload a picture of someone’s face and get back matching images along with links to the person’s identity. The company built its database by scraping people’s images from news sites and social media profiles.
In May, the ACLU filed a lawsuit against Clearview in Illinois, charging that the company violated state law by collecting people’s facial data without their permission. In its defense, Clearview says its solution isn’t intended as a surveillance tool and that it is interested in “working with government and policy makers to help develop appropriate protocols for the proper use of facial recognition.”
Central to the new normal, regulations notwithstanding
So far there is little consensus on viable frameworks for regulating the use, deployment, and management of facial recognition programs, beyond a general sense that bias should be eliminated from public- and private-sector programs that touch people’s lives. On that score, I’d call your attention to my own framework, which I first published two years ago in InformationWeek.
However, the popular furor surrounding this technology is unlikely to stop facial recognition from gaining traction in people’s lives. Already, according to the US National Institute of Standards and Technology, there are at least 45 vendors offering real-time facial recognition services, with more entering the marketplace all the time. A 2019 study by Markets & Markets estimated that the facial recognition software market will generate $7 billion of revenue by 2024, growing at a compound annual growth rate of 16% through that year.
One point that’s lost in this controversy is that facial recognition is at the heart of the new normal of our lives -- and that many uses don’t violate privacy or anybody’s civil rights in any way. Indeed, a recent Capgemini global survey found that adoption of facial recognition is likely to keep growing among large businesses in every sector and geography even as the pandemic recedes. According to the study, the COVID-19 crisis is boosting demand for a wide range of contactless technologies, with facial recognition as their centerpiece.
User experience will be a key benefit of facial recognition in many business and consumer apps. In the Capgemini study, over 75% of respondents reported that they have increased their organizations’ use of touch-free interfaces, such as facial and voice recognition, in order to spare employees and customers from having to make direct contact with other people, screens, and devices. Sixty-two percent of respondents expect to continue prioritizing contactless technologies after the COVID-19 threat recedes.
Takeaway
Where facial recognition is concerned, these political and regulatory controversies will continue to burn hot for many years. However, the average consumer is already warming up to facial recognition to a degree that today’s headlines ignore.
During the COVID-19 crisis, social distancing has made many people more receptive to facial recognition as a contactless option for strong authentication to device-level and online services. The embedding of facial recognition into the iPhone and other devices will ensure that this technology remains a key tool in everybody’s personal tech portfolio.
Just as important, businesses will incorporate facial recognition into internal and customer-facing applications for biometric authentication, image/video auto-tagging, query by image, and other valuable uses.
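For readers who want a feel for what an auto-tagging workflow actually involves, here is a minimal sketch using the open-source face_recognition library. It labels faces in a new photo against a small gallery of known people; the directory names, file names, and 0.6 distance cutoff are placeholder assumptions for illustration, not a recommended production configuration.

```python
# Minimal sketch of image auto-tagging with the face_recognition library:
# label faces found in a new photo against a small gallery of known people.
# Paths and the 0.6 distance cutoff are placeholders chosen for illustration.
import face_recognition

# Build a tiny "gallery" of known faces (one reference photo per person).
known_people = {
    "alice": "gallery/alice.jpg",
    "bob": "gallery/bob.jpg",
}
known_names = []
known_encodings = []
for name, path in known_people.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                      # skip photos where no face was detected
        known_names.append(name)
        known_encodings.append(encodings[0])

# Tag every face detected in a newly uploaded photo.
photo = face_recognition.load_image_file("uploads/team_event.jpg")
tags = []
for encoding in face_recognition.face_encodings(photo):
    distances = face_recognition.face_distance(known_encodings, encoding)
    matches = [n for n, d in zip(known_names, distances) if d < 0.6]
    tags.append(matches[0] if matches else "unknown")

print(tags)   # e.g. ['alice', 'unknown'], depending on the input photo
```

The same pattern, with a consent-managed gallery and a tuned distance cutoff, underlies most of the benign enterprise uses discussed above, from photo-library tagging to query by image.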
For more on facial recognition and ethical uses, follow up with these articles:
How Machine Learning is Influencing Diversity & Inclusion
Tech Giants Back Off Selling Facial Recognition AI to Police