How not to catch a shoplifter

Lessons from the frontlines of AI deployment*

Imagine you run a nationwide retail chain. Every year 2-3% of products walk out the door in the pockets of shoplifters. On gross margins of 20%, that’s a big bite out of your business. In fact, that cost alone is turning your operating profit into a loss. Shareholders are unhappy, and your board wants you to solve it.
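
One simplified way to see the arithmetic (the figures below are assumptions made up for the example, not any retailer’s actual numbers):

# Back-of-the-envelope arithmetic with illustrative, assumed figures.
revenue = 100.0                   # index everything to 100 units of revenue
gross_profit = 0.20 * revenue     # 20% gross margin -> 20
operating_expenses = 18.0         # assumed: leaves a thin ~2% operating margin
shrinkage = 0.025 * revenue       # assumed: 2.5% of goods walk out the door
operating_profit = gross_profit - operating_expenses - shrinkage
print(operating_profit)           # -0.5: theft alone has tipped the business into a loss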

Along comes a technology vendor offering a solution: AI-powered facial recognition. Since you know that half of the thefts are committed by a small number of repeat offenders, this seems like a great idea: if you could only identify them when they enter the store, you could stop them.

The vendor explains that for this to work you need to train the system on images of shoplifters. No problem. You’ve been in business a long time and have stopped many, many shoplifters, whom you captured on security cameras or photographed once they were held. All of that goes into the database. To make sure you have lots of images, you incentivise employees from stores all over the country to add as many photos as possible.

Now the system is ready to go. You install the cameras in hundreds of stores, tell employees they will get alerted when a shoplifter comes in, give them instructions on how to stop them, and look forward to an improvement in your bottom line. But then this:


In numerous instances, Rite Aid’s employees acted on match alerts that were false positives. As a result, numerous consumers were mistakenly identified as shoplifters or wrongdoers. […] Rite Aid employees:

a. surveilled and followed consumers around the store;

b. instructed consumers to leave Rite Aid stores and prevented them from making needed or desired purchases, including prescribed and over-the-counter medications and other health aids;

c. subjected consumers to unwarranted searches;

d. publicly and wrongly accused consumers of shoplifting, including, according to consumer complaints, in front of the consumers’ coworkers, employers, children, and others; or

e. called the police to confront or remove the consumer.

[…] Consumers complained to Rite Aid that they had experienced humiliation and feelings of stigmatization as a result of being confronted by Rite Aid’s employees based on false positive facial recognition matches.

Moreover, some of the consumers enrolled in Rite Aid’s database or approached by Rite Aid’s employees as a result of facial recognition match alerts were children. For example, Rite Aid employees stopped and searched an 11-year-old girl on the basis of a false positive facial recognition match. The girl’s mother told Rite Aid that she had missed work because her daughter was so distraught by the incident.

Multiple consumers told Rite Aid that they believed the false-positive facial recognition stops were a result of racial profiling. One consumer wrote to Rite Aid: “…. [E]very black man is not [a] thief nor should they be made to feel like one.”


That’s from the FTC complaint against Rite Aid, the US pharmacy chain, announced just before the holidays. This is a sobering, useful and important case, because:

  1. The company’s actions read like a roadmap of what not to do when deploying an AI system.
  2. The case tells us all we need to know about how the FTC is going to make use of its Section 5 powers to regulate ‘automated decision-making’, which is legal-speak for most-of-the-harms-AI-can-do-today.

What not to do

In its complaint the FTC describes in gory detail how Rite Aid went about developing and deploying its shoplifter-catching system. The company did not ask its vendors about accuracy or bias, nor did it do reasonable diligence on their technology. It did not consider the risk of misidentification, and it did not try to measure or test the output against any standard of reliability. In fact, its two vendors both contractually disclaimed any warranty as to the accuracy of results.

Rite Aid had a theoretical quality threshold for images going into the system, but it did not enforce it. Many of the training images came from security cameras or mobile phones. Internal documents show that the company was aware this might lead to false positives, but it went ahead anyway.
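
Enforcing that kind of threshold is not technically demanding. Here is a minimal sketch of an enrollment gate; the quality metrics, field names and cut-offs are assumptions made up for illustration, not Rite Aid’s or any vendor’s actual system:

# Hypothetical enrollment gate: reject low-quality images before they enter the
# watchlist database. All metrics and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EnrollmentImage:
    source: str          # e.g. "cctv", "mobile_phone"
    face_size_px: int    # shorter side of the detected face crop, in pixels
    blur_score: float    # 0.0 (sharp) to 1.0 (unusable), from some blur detector

MIN_FACE_SIZE = 112      # assumed minimum usable face crop
MAX_BLUR = 0.4           # assumed maximum tolerable blur

def accept_for_enrollment(img: EnrollmentImage) -> bool:
    """Only let an image into the database if it clears the stated quality bar."""
    return img.face_size_px >= MIN_FACE_SIZE and img.blur_score <= MAX_BLUR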

The system would send match alerts to employees, with instructions on what to do depending on the perceived severity: walk the person out the door, or call the police. Even though the system tagged each match with a confidence score, that score was not shown to employees. Finally, the company did not monitor ongoing performance, ie it did not track false positives or flag unusual results (a sketch of what such checks might look like follows the excerpt below), including:

[D]uring a five-day period, Rite Aid’s facial recognition technology generated over 900 match alerts for a single enrollment. The match alerts occurred in over 130 different Rite Aid stores (a majority of all locations using facial recognition technology), including hundreds of alerts each in New York and Los Angeles, over 100 alerts in Philadelphia, and additional alerts in Baltimore; Detroit; Sacramento; Delaware; Seattle; Manchester, New Hampshire; and Norfolk, Virginia. In multiple instances, Rite Aid employees took action, including asking consumers to leave stores, based on matches to this enrollment.
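
None of the missing safeguards would have been exotic. Below is a minimal sketch of the kind of check that could have caught both problems, ie suppressing low-confidence matches and flagging an enrollment that generates anomalous alert volumes. Every name, field and threshold here is a made-up assumption, not Rite Aid’s or its vendors’ actual system:

# Hypothetical alert triage and monitoring. All names, fields and thresholds are
# illustrative assumptions, not Rite Aid's or its vendors' actual system.
from __future__ import annotations
from collections import Counter
from dataclasses import dataclass

@dataclass
class MatchAlert:
    enrollment_id: str   # which watchlist entry was matched
    store_id: str
    confidence: float    # the score the system computed but never showed to staff

CONFIDENCE_FLOOR = 0.90          # assumed: below this, don't alert staff at all
MAX_ALERTS_PER_ENROLLMENT = 50   # assumed: more than this per review window is suspicious

def triage(alerts: list[MatchAlert]) -> tuple[list[MatchAlert], set[str]]:
    """Drop low-confidence matches, then flag enrollments generating anomalous volumes
    (think 900+ alerts for one enrollment across 130 stores in five days)."""
    confident = [a for a in alerts if a.confidence >= CONFIDENCE_FLOOR]
    counts = Counter(a.enrollment_id for a in confident)
    suspicious = {eid for eid, n in counts.items() if n > MAX_ALERTS_PER_ENROLLMENT}
    # Alerts tied to a suspicious enrollment go to human review instead of store staff.
    actionable = [a for a in confident if a.enrollment_id not in suspicious]
    return actionable, suspicious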

As it happens, the system was deployed more widely in Rite Aid stores in poorer, non-white neighborhoods, which further compounded the errors caused by racial bias in the system.

For more on how this played out, Reuters ran a great investigative piece back in 2020.

Automated decision-wha?

This case is very relevant now because of what it means for AI regulation in the US.

The US has no federal privacy law and no law regulating AI (yet). Europe has the GDPR and a draft AI Act. But the US does have a very powerful consumer protection law in Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices”. And the current FTC has made very clear[1] that it is prepared to use those powers to take action against automated decision-making (ADM) systems that cause consumer harms resulting from bias, inaccuracy or privacy breaches (among others).

ADM is a powerful and useful concept that made its way into the popular legal lexicon via the GDPR’s Article 22, which affirmed the right of people not to be subject to a decision based solely on automated processing (whether by a simple rules-based algorithm or a sophisticated AI) if it has a ‘legal’ or similarly significant effect on them. Such systems are now getting a lot more attention because when they get it wrong they can cause real harm, as the Dutch childcare benefits scandal showed.

But in the US you might assume there is no enforcement framework to prevent these kinds of harms. Not so. At this week’s Technology Summit on AI hosted by the FTC, there were really two core themes: ensuring a level playing field for AI companies[2]; and preventing consumer harms in the here and now.

The FTC intends to be proactive in using its mandate to fight unfair trade practices by looking at how automated systems impact consumers[3]. This is not about regulating foundation AI models or AI chatbots. As Commissioner Alvaro Bedoya highlighted at the summit, this is about the companies deploying the systems: they will be held responsible for the effects those systems have on consumers. This is true even if, perhaps especially if, the deploying company is not particularly tech-savvy.

I think this approach can go quite far in the absence of new regulation specifically addressing AI. Ultimately, ‘unfairness’ covers a very wide range of consumer harms that can be (are being) supercharged by AI: from racist credit scoring, to targeting teens with harmful content, to deception via deepfakes, to mass production of personalised spam, etc.

This FTC will have plenty of sway in regulating AI, even if Congress does nothing.


This article first appeared on my Substack. If you like it and would like timely delivery of future posts directly in your inbox, please consider subscribing.


* OK, so maybe Rite Aid's implementation was not exactly the 'front lines of AI deployment', given that it used rudimentary tech by today's standards, but the lessons apply nonetheless ;-).

[1] There have been more than a few hints under this FTC administration: Keep your AI claims in check (Feb 2023); Chatbots, deepfakes, and voice clones: AI deception for sale (March 2023); FTC Report Warns About Using Artificial Intelligence to Combat Online Problems (June 2022); Aiming for truth, fairness, and equity in your company’s use of AI (April 2021).

[2] More on this shortly.

[3] alongside its sister agencies, which are also using existing tools to cover AI, eg the CFPB protects consumers from unfair adverse credit decisions; the DOJ’s civil rights division enforces statutes against discrimination, including in housing, education, voting, etc; and the EEOC enforces the Americans with Disabilities Act, which prohibits using systems (including AI) to discriminate in hiring.

