According to the FTC, Rite Aid’s ‘covert surveillance program’ wrongly identified customers as shoplifters

Rite Aid is banned from using facial recognition surveillance technology for five years to settle Federal Trade Commission charges that it failed to protect consumers in hundreds of its stores, the agency said Tuesday.

Rite Aid used a “covert surveillance program” based on AI to ID potential shoplifters from 2012 to 2020, the FTC said in a complaint filed in the U.S. District Court for the Eastern District of Pennsylvania. Based on the faulty system, the pharmacy chain’s workers erroneously accused customers of wrongdoing in front of friends and relatives, in some cases searching them, ordering them to leave the store or reporting them to the police, according to the complaint. 

According to the FTC, the retailer hired two companies to help create a database of tens of thousands of images of people Rite Aid believed had committed, or intended to commit, crimes at one of its locations. Collected from security cameras, employee phone cameras and even news stories, many of the images were of poor quality, and the system generated thousands of false positives, the FTC alleges.

Rite Aid failed to test the system for accuracy, and deployed the technology even though the vendor expressly stated it couldn’t vouch for its reliability, according to the agency.

Preventing the misuse of biometric information is a high priority for the FTC, the agency said in its statement. 

“Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Today’s groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.”

11-year-old girl searched by Rite Aid employee

During one five-day period, Rite Aid generated more than 900 separate alerts in more than 130 stores from New York to Seattle, all claiming to match a single person in its database. “Put another way, Rite Aid’s facial recognition technology told employees that just one pictured person had entered more than 130 Rite Aid locations from coast to coast more than 900 times in less than a week,” according to an FTC blog post.

In one incident, a Rite Aid worker stopped and searched an 11-year-old girl based on a false match, with the child’s mother reporting having to miss work because her daughter was so distraught, the complaint stated.

Black, Asian, Latino and women consumers were at increased risk of being incorrectly matched, the FTC stated. 

Further, Rite Aid didn’t tell consumers it used the technology and specifically instructed workers not to tell patrons or the media, the agency relayed.

Rite Aid said it was pleased to put the matter behind it, but disputed the allegations in the agency’s complaint. 

“The allegations relate to a facial recognition technology pilot program the company deployed in a limited number of stores. Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began,” stated the retailer, which is in bankruptcy court and currently restructuring. 

