FTC Prescribes Ban on Rite Aid’s AI Facial Recognition Use
The FTC’s 5-year ban on the struggling pharmacy chain’s biometric surveillance use highlights the dangers of artificial intelligence misuse.
At a Glance
- The FTC says Rite Aid misused biometric surveillance, wrongly accusing customers at several stores in urban areas.
- Rite Aid says it stopped using the technology three years ago but agreed to the 5-year ban.
- IT leaders should expect 2024 to be an important year for responsible AI initiatives and regulations, experts say.
The US Federal Trade Commission (FTC) on Tuesday said Rite Aid misused an artificial intelligence facial recognition system that mistakenly tagged customers -- often African Americans, Latinos, and women -- as shoplifters.
The FTC’s complaint says that from 2012 to 2020, Rite Aid used flawed facial recognition technology to identify potential shoplifters based on security camera images. The FTC alleges Rite Aid failed to take reasonable measures to protect innocent consumers, many of whom were falsely accused of wrongdoing after the AI software flagged them as shoplifters or “troublemakers” based on previous security images. The technology was deployed without customers’ knowledge, the FTC says, in stores in several major US cities.
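How such a false match can happen is straightforward to illustrate. The sketch below is a generic, hypothetical pattern for gallery-based face matching, not Rite Aid’s actual system; every name, embedding, and threshold in it is invented. It shows how a permissive similarity threshold can turn a lookalike, or a blurry frame, into a “match” against a watchlist built from old security stills.

```python
# Illustrative sketch only: a generic pattern for gallery-based face
# matching. All names, vectors, and thresholds are invented; this does
# not reflect Rite Aid's actual system.

import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical "enrollment" gallery built from prior security-camera stills.
watchlist = {
    "entry_1042": [0.21, 0.80, 0.55],
    "entry_2291": [0.90, 0.10, 0.42],
}

def flag_if_match(probe_embedding, threshold=0.85):
    """Return watchlist entries whose similarity clears the threshold.

    A low-quality probe image plus a permissive threshold is how a
    different person can end up flagged as a prior "troublemaker".
    """
    return [
        entry for entry, gallery_emb in watchlist.items()
        if cosine_similarity(probe_embedding, gallery_emb) >= threshold
    ]

# A frame of a different but similar-looking person clears the bar:
print(flag_if_match([0.25, 0.78, 0.50]))  # wrongly returns ["entry_1042"]
```

Raising the threshold trades false matches for missed ones, which is one reason a flag from such a system needs human review and testing before anyone acts on it.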
“Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement. “… the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.”
Rite Aid has agreed to settle the charges by implementing comprehensive safeguards to prevent similar future AI failings. The settlement also requires Rite Aid to discontinue using such technology if it cannot control potential risks, the FTC said in a statement.
Rite Aid, which is in the middle of bankruptcy proceedings, agreed to the FTC’s ban, but disagreed with the commission’s findings. “We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy,” the company said in a press release. “However, we fundamentally disagree with the facial recognition allegations in the agency’s complaint. The allegations relate to a facial recognition technology pilot program the company deployed in a limited number of stores.”
Rite Aid goes on to say it stopped using the technology more than three years ago, before the FTC’s investigation began. “We are pleased to reach an agreement with the FTC and put this matter behind us,” the company said.
A Baseline for Fairness
FTC Commissioner Alvaro Bedoya released a statement detailing Rite Aid’s AI failings, calling out “the blunt fact that surveillance can hurt people.” He pointed to specific examples, including the charge that a Rite Aid employee stopped and searched an 11-year-old girl because of a false match, and other incidents of people wrongly searched, accused, and thrown out of stores.
“It has been clear for years that facial recognition systems can perform less effectively for people with darker skin and women,” Bedoya wrote. “In spite of this, we allege that Rite Aid was more likely to deploy face surveillance in stores located in plurality non-White areas than in other areas. Rite Aid then failed to take some of the most basic precautions.”
The company, he said, took only two risks into consideration when implementing the system: media attention and customer acceptance.
Bedoya said the settlement produces a baseline of standards that should be considered when implementing a “comprehensive algorithmic fairness program.”
“No one should walk away from this settlement thinking the Commission affirmatively supports the use of biometric surveillance in commercial settings,” he wrote. “… there is a powerful policy argument that there are some decisions that should not be automated at all … I urge legislators who want to see greater protections against biometric surveillance to write those protections into legislation and enact them into law.”
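One of the “most basic precautions” Bedoya alludes to is measuring error rates separately for different groups before and during deployment. The following minimal sketch, with invented group labels and audit records, shows the core of such a check: computing a false match rate per group and comparing them. It illustrates the general idea of an algorithmic fairness audit, not anything prescribed by the FTC’s order.

```python
# Hypothetical sketch of one "basic precaution": auditing match
# decisions per demographic group. Group labels and records below are
# invented for illustration.

from collections import defaultdict

# Each record: (group label, system said "match", ground truth "same person")
audit_log = [
    ("group_a", True, False),   # false match
    ("group_a", True, True),
    ("group_a", False, False),
    ("group_b", True, False),   # false match
    ("group_b", True, False),   # false match
    ("group_b", False, False),
]

def false_match_rate(records):
    """Share of non-matching pairs the system wrongly flagged as matches."""
    non_matches = [r for r in records if not r[2]]
    if not non_matches:
        return 0.0
    return sum(1 for r in non_matches if r[1]) / len(non_matches)

by_group = defaultdict(list)
for rec in audit_log:
    by_group[rec[0]].append(rec)

for group, records in sorted(by_group.items()):
    print(f"{group}: false match rate = {false_match_rate(records):.0%}")
    # A large gap between groups is the kind of disparity Bedoya describes.
```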
What IT Leaders Need to Know
AI bias is one of the chief concerns surrounding AI safety, but it is often overshadowed by more fantastical doomsday fears about the emerging technology. Real-world deployments, such as AI’s use as a recruitment tool, are what keep fears about bias and safety grounded in practice.
Geoff Schaefer, head of responsible AI at Booz Allen, says in an email interview that organizations suffer without responsible AI guardrails in place. “We think about responsible AI as a formula: Responsible AI equals ethics plus governance plus safety … there are basic guardrails that organizations should put in place to ensure their AI use is ethical, well-governed, and safe for customers.”
Schaefer says the coming year will be an important one for AI safety.
“AI is now sufficiently mature that the lack of such guardrails will constitute negligence by an organization, so I think it’s instructive that the term ‘negligence’ was used explicitly in this case. 2024 will be the year that organizations must get from ‘zero to one’ in their responsible AI capacity so they can execute their work in measurably responsible ways.”
Var Shankar, executive director of the RAI Institute, tells InformationWeek, “This order is a continuing reflection of the FTC’s expectation that organizations use automated systems responsibly. As AI becomes ubiquitous, it is crucial that organizations improve their AI governance, monitoring and risk mitigation efforts on an ongoing basis.”