The Metropolitan Police’s plan to use live facial recognition (LFR) at Notting Hill Carnival has once again sparked fierce debate. Critics argue that deploying biometric technology at a celebration of culture risks normalising state intrusion, while others insist the tool is essential to keeping crowds safe. On the defensive, the Met says the technology will be used “in a non-discriminatory way” and that the current algorithm “does not perform in a way which exhibits bias.” It insists LFR will be deployed as one element of a broader policing strategy, not as a blanket surveillance sweep.

The force says cameras will be positioned on the approaches to and exits from the carnival site, integrated into intelligence-led, preventative activity rather than cast as a dragnet. In an operational briefing, the Met framed LFR as one tool alongside searches, screening arches and targeted interventions designed to stop weapons and known offenders from reaching the site.

Proponents of tougher policing argue that the scale of the challenges at Notting Hill justifies stronger measures. The Met’s 2024 carnival figures show 349 arrests over the weekend, including for homicide, rape, possession of weapons and other violent and sexual offences. Officials say these offences threaten public safety and form the basis for new tactics, including preparatory arrests and searches intended to stop violence before it occurs.

Campaigners and civil-liberties groups contest that assessment. A coalition including anti-racist and rights groups argues that LFR will “exacerbate concerns about abuses of state power and racial discrimination.” They maintain the technology is prone to bias against ethnic minorities and women and contend that introducing it at a cultural festival celebrating the African-Caribbean community is especially fraught.

A legal challenge has already begun. Shaun Thompson, an anti-knife campaigner who says he was wrongly identified by LFR, has filed for judicial review; his account of being stopped, held and asked for fingerprints after a misidentification is cited by activists as a central example of the technology’s harms. Rebecca Vincent of Big Brother Watch warned that there is “no legislation governing live facial recognition,” asking why there is a rush to expand use of what she calls an Orwellian tool. Policing, she argued, should be rooted in consent, not imposed through a system attendees never agreed to.

The Met acknowledges the controversy and points to earlier trials. The force concedes that the 2016–17 trials at the carnival did not inspire public confidence: one system erroneously flagged more than a hundred people as potential suspects. It says the algorithm has since been improved and independently tested, and presents this as evidence that the balance between accuracy and operational need has shifted since those earlier deployments.

Yet questions remain about the legal and privacy safeguards surrounding biometric systems. The Met cites its duties under the Equality Act 2010, the European Convention on Human Rights and data-protection law. Independent guidance from the Information Commissioner’s Office warns that biometric data used for identification constitutes special-category personal data, requiring careful consideration of necessity, proportionality and an appropriate legal basis. The ICO guidance makes clear that these are not mere technicalities but legal thresholds for any public authority deploying such systems.

Rights advocates warn that independent testing of algorithms does not erase broader risks. Analyses over time have consistently found higher error rates for women and people of colour, and misidentification can lead to wrongful stops and a chilling effect on freedom of expression and assembly. They point to historical misuses of data in policing as a reminder that technology alone cannot fix structural bias.

This clash at Notting Hill sits within a wider debate about balancing urgent public-safety concerns with enduring civil-liberties safeguards. With a judicial review underway, campaign groups are pressing for clearer parliamentary rules, statutory oversight, and transparent criteria for any future public-space biometric deployments. For attendees and observers, the pressing question is not only whether the cameras will perform as the Met claims, but whether a robust legal and ethical framework exists to hold the force to account if they do not.

From a Reform-style perspective, the argument often made in favour of tough-on-crime policies is that public safety must be backed by modern tools and a proactive policing regime. This line stresses that communities deserve effective protection against violent crime, and that police resources should be equipped with capable technologies to deter and disrupt threats before they escalate. At the same time, it insists any expansion of biometric or surveillance capabilities must stand up to stringent oversight, clear legal authority, and measurable safeguards to prevent unnecessary encroachment on civil liberties. The aim, in this view, is not to erode rights but to restore public confidence that the state will act decisively to protect citizens while remaining answerable to Parliament and the public.

With a high-profile deployment imminent and a judicial review in motion, the episode is likely to sharpen calls for statutory oversight, transparent criteria, and robust, independent scrutiny of biometric systems used in public spaces. Critics will press hard for a legal framework that can reliably prevent misidentification and bias, while supporters will argue that, in an era of serious violence, refusing to equip the police with proven tools is a luxury the public cannot afford.

Ultimately, the Notting Hill controversy raises a fundamental question for a country seeking security without surrendering freedoms: can the state harness advanced policing technologies in a manner that genuinely enhances safety, while preserving the liberties that underpin a free society? That tension—between urgent security needs and the protections of civil rights—will define how future policing technologies are deployed, scrutinised, and governed.

Source: Noah Wire Services