Alba Kapoor of Amnesty International UK has urged the Metropolitan Police to abandon plans to scale up live facial recognition deployments, warning that wider use will entrench racial discrimination and endanger privacy, peaceful assembly and equality. Campaigners point to wrongful stops such as Shaun Thompson’s detention and research from NIST and the Gender Shades project to demand a moratorium, independent audits and stronger legal safeguards.
Alba Kapoor of Amnesty International UK has urged the Metropolitan Police to abandon plans to expand live facial recognition, arguing the technology will further entrench racial discrimination in policing and put basic civil liberties at risk. Writing in The Guardian on 8 August, Kapoor said the systems are already known to misidentify people from marginalised communities and warned that deploying them more widely at events such as Notting Hill Carnival threatens the rights to privacy, peaceful assembly, expression and equality. She called for the Met’s plans to be scrapped immediately.
The Met says it intends to more than double its live facial recognition deployments, from four uses over two days to as many as ten across five days, a change force officials explain as part of a restructure driven by budget cuts and reductions in officer numbers. Police spokespeople argue the technology helps to identify wanted offenders at public events, but campaigners counter that scaling up a system with known error rates risks producing more false matches and more intrusive stops.
The human cost of those false matches was underscored by recent reporting about Shaun Thompson, a community worker who was wrongly flagged while returning from a volunteering shift. According to the BBC, officers detained and questioned him for 20 to 30 minutes and asked for fingerprints and identity documents before accepting his passport and releasing him; Thompson told the BBC the episode was “intrusive” and that he felt he had been “presumed guilty.” Such incidents feed wider concerns that biometric tools can translate algorithmic mistakes into real-world harms.
Technical research provides a clear basis for those concerns. The National Institute of Standards and Technology’s landmark Face Recognition Vendor Test report on demographic effects, published in 2019, found persistent demographic differentials across roughly 200 algorithms, documenting higher error rates for women and people with darker skin while also noting substantial variation between vendors — with top-performing systems in some tests approaching parity. Earlier academic work, notably the Gender Shades project led by Joy Buolamwini and Timnit Gebru, showed the same pattern: off‑the‑shelf systems performed far better on lighter‑skinned men than on darker‑skinned women, a finding that helped catalyse vendor reassessments and wider debate about dataset representativeness and transparency.
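To make the idea of a “demographic differential” concrete, the sketch below uses entirely synthetic numbers (the threshold, score distributions and group labels are illustrative assumptions, not NIST’s data or methodology) to show how a false match rate can be computed per group at a single global decision threshold and then compared.

```python
# Illustrative sketch only: synthetic scores, not NIST data or methodology.
# A "demographic differential" is typically framed as: fix one global match
# threshold, then measure the error rate separately for each demographic group.
import random

random.seed(0)

THRESHOLD = 0.80  # hypothetical global match threshold


def false_match_rate(impostor_scores, threshold):
    """Fraction of impostor (non-matching) comparisons wrongly accepted as matches."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)


# Synthetic impostor score distributions for two hypothetical groups.
# Group B is simulated with slightly higher impostor scores to mimic the kind
# of differential reported in the research; the numbers themselves are made up.
group_a = [random.gauss(0.55, 0.10) for _ in range(10_000)]
group_b = [random.gauss(0.62, 0.10) for _ in range(10_000)]

fmr_a = false_match_rate(group_a, THRESHOLD)
fmr_b = false_match_rate(group_b, THRESHOLD)

print(f"False match rate, group A: {fmr_a:.4f}")
print(f"False match rate, group B: {fmr_b:.4f}")
if fmr_a > 0:
    print(f"Differential (B relative to A): {fmr_b / fmr_a:.1f}x")
```

At a fixed threshold, the group with the higher impostor-score distribution records several times more false matches, which is the shape of disparity that campaigners warn translates into more wrongful stops when such systems are deployed at scale.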
Civil society has long warned that technical fixes alone cannot eliminate the human-rights harms of mass biometric surveillance. Amnesty International led a 2021 coalition of more than 170 organisations calling for a global ban on public‑space biometric systems, arguing they allow people to be identified, tracked and singled out without consent and that the risks fall disproportionately on marginalised groups. Against that backdrop, critics of the Met say the absence of a clear legal framework or independent oversight leaves decisions about when, where and how to deploy such intrusive tools to police discretion.
Policymakers now face a choice between imposing strict limits — including moratoria on public‑space deployments, mandatory independent auditing, transparent procurement and stronger data‑protection safeguards — or permitting a continued, ad hoc rollout that campaigners say will reproduce and amplify existing inequalities. The Met insists the technology is a necessary tool for public safety; human‑rights groups and technical experts insist its costs are too high without robust regulation, transparency and redress. For now, Amnesty’s intervention adds weight to calls for immediate restraint while lawmakers and regulators consider whether the existing patchwork of rules is fit for purpose.
Reference Map:
- Paragraph 1 – [1], [2], [7]
- Paragraph 2 – [3], [1]
- Paragraph 3 – [4], [1]
- Paragraph 4 – [5], [6]
- Paragraph 5 – [7], [1], [2]
- Paragraph 6 – [3], [5], [7]
Source: Noah Wire Services
- https://www.theguardian.com/technology/2025/aug/08/facial-recognition-technology-discriminates-against-people-of-colour – Please view link – unable to access data
- https://www.theguardian.com/technology/2025/aug/08/facial-recognition-technology-discriminates-against-people-of-colour – Alba Kapoor of Amnesty International UK argues that the Metropolitan Police’s expanded use of live facial recognition is discriminatory and risks entrenching racism in policing. The letter highlights recent reporting about Shaun Thompson’s misidentification and broader evidence that algorithms are less accurate for people of colour, raising the prospect of wrongful arrest and harassment at events such as Notting Hill Carnival. Kapoor warns that these systems violate privacy, freedom of assembly, expression, and equality, especially given the absence of government regulation and independent oversight. The piece calls for a halt to the Met’s plans to deploy live facial recognition now.
- https://www.theguardian.com/technology/2025/jul/31/met-police-to-more-than-double-use-of-live-facial-recognition – The Guardian reports that the Metropolitan Police plans to more than double deployments of live facial recognition, increasing operations from four uses over two days to up to ten uses across five days. The article explains this expansion forms part of a restructure driven by budget cuts and officer reductions, with officials arguing the technology helps identify wanted offenders at public events. Critics including civil liberties groups warn of inadequate legal framework, potential civil rights infringements, and the dangers of scaling a system which can produce false matches, particularly for marginalised communities. The piece details police statements and campaigner responses.
- https://www.bbc.com/news/technology-69055945 – The BBC reports that Shaun Thompson, a London community worker, was wrongly flagged by the Metropolitan Police’s live facial recognition system while returning from a volunteering shift. Officers detained and questioned him for about twenty to thirty minutes, requesting fingerprints and identity documents before accepting his passport and releasing him. Thompson described the episode as intrusive and likened it to being presumed guilty; authorities suggested a possible family resemblance caused the error. The story is presented alongside broader concerns about LFR’s accuracy and impact on civil liberties, with the BBC noting campaigners’ calls for clearer safeguards, transparency and legal regulation.
- https://www.nist.gov/publications/face-recognition-vendor-test-part-3-demographic-effects – The National Institute of Standards and Technology’s Face Recognition Vendor Test Part 3: Demographic Effects summarises testing of around 200 face recognition algorithms, analysing performance differences across age, sex and race. Published in December 2019, the report documents demographic differentials with algorithms often showing higher error rates for women and people with darker skin tones. NIST emphasises variability between vendors: top-performing systems can approach parity while many others exhibit marked disparities. The report provides technical measures, datasets and guidance intended to inform policymakers and practitioners about the limits and risks of facial recognition. It remains influential in policy debates worldwide.
- https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212 – Researchers Joy Buolamwini and Timnit Gebru’s Gender Shades project revealed performance disparities in facial analysis systems, showing higher error rates for women with darker skin compared with lighter-skinned men. Published in 2018, findings tested products from major companies and showed systems performed very well for lighter-skinned males while failing more frequently for darker-skinned females. The study brought public attention to algorithmic bias, prompted vendors to pause or reassess facial recognition offerings, and catalysed calls for transparency, representative datasets and regulatory oversight. Gender Shades remains a foundational reference in debates about fairness, data quality and the societal impacts of biometric technologies.
- https://www.amnesty.org/en/latest/press-release/2021/06/amnesty-international-and-more-than-170-organisations-call-for-a-ban-on-biometric-surveillance/ – Amnesty International’s 2021 open letter, joined by over 170 organisations, calls for a global ban on facial recognition and remote biometric technologies that enable mass and discriminatory surveillance. The press release argues these tools can identify, track and single out people in public spaces without consent, undermining rights to privacy, freedom of expression, peaceful assembly and non-discrimination. Amnesty highlights documented misuses, wrongful arrests, and evidence that systems disproportionately harm marginalised groups, warning that technical fixes cannot remove the human-rights harms. The letter demands laws to prohibit public-space deployments, curb government procurement and protect individuals from biometric-driven discrimination and ensure accountability.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score:
10
Notes:
The narrative is fresh, published on 8 August 2025, with no prior substantially similar content found. The article is based on a press release from Amnesty International UK, which typically warrants a high freshness score.
Quotes check
Score:
10
Notes:
The only direct quotes are brief remarks by Shaun Thompson drawn from earlier BBC reporting; the remainder of the text contains no direct quotes, consistent with original or exclusive content.
Source reliability
Score:
10
Notes:
The narrative originates from The Guardian, a reputable organisation, enhancing its credibility.
Plausibility check
Score:
10
Notes:
The claims align with established research on facial recognition technology’s biases against people of colour. The article references a recent case of misidentification, supporting the plausibility of the narrative.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is fresh, original, and originates from a reputable source. The claims are plausible and supported by recent events, indicating a high level of credibility.