Justice Victoria Sharp condemns the use of AI-generated fictitious legal precedents in court, highlighting risks to judicial integrity and calling for stronger regulatory oversight amid rising cases of ‘hallucinated’ citations.
A significant warning has emerged from England’s High Court regarding the use of artificial intelligence (AI) in legal proceedings. High Court Justice Victoria Sharp highlighted alarming instances in which lawyers cited fake cases generated by AI, potentially jeopardising the integrity of the legal system. The caution follows two recent rulings that raise serious questions about lawyers’ responsibility to ensure the accuracy of their submissions.
Justice Sharp warned that the misuse of legal AI could carry severe repercussions, including contempt of court and criminal charges. In two notable recent cases, lawyers relied on AI tools that generated fictitious legal precedents, leading to misleading arguments being presented in court. Her statement underscores the threat this poses not only to the judicial process but also to public confidence in the legal framework. “Artificial intelligence is a powerful technology and a useful tool,” she remarked, emphasising that its deployment must be matched by oversight and adherence to ethical standards.
In one alarming instance, involving a £90 million lawsuit against Qatar National Bank, a lawyer cited 18 non-existent cases and attributed the error to his client, Hamad Al-Haroun, who expressed remorse and accepted responsibility for the misinformation. Sharp noted it was “extraordinary” for the lawyer to depend on the client for accurate legal research, reversing the typical relationship. The admission points to a troubling trend among legal professionals, in which reliance on AI-generated content erodes due diligence.
A second case featured barrister Sarah Forey, who faced scrutiny for citing five fictitious cases in a tenant’s housing claim against the London Borough of Haringey. Forey denied using AI, yet failed to offer a coherent account of how the fake citations appeared in her submissions, prompting the judges, including Justice Jeremy Johnson, to refer her to professional regulators. They stopped short of harsher penalties but warned that knowingly presenting false material to a court could amount to perverting the course of justice, an offence punishable in the most serious cases by life imprisonment.
These incidents illustrate a broader challenge facing legal systems worldwide as they adapt to the rapid integration of generative AI tools. There have been urgent calls for the profession’s regulatory bodies to strengthen existing guidance on AI usage, as current measures appear insufficient to mitigate the risks posed by fabricated legal arguments. Speaking to media outlets, legal experts have echoed Sharp’s concerns, arguing that a lack of rigorous verification processes could endanger the foundational principles of justice.
The discussion is not confined to the UK. Similar incidents have arisen globally, including cases in the United States and Canada, where lawyers faced sanctions for relying on AI-generated legal precedents. While the firms involved acknowledged the mistakes, critics have pointed to the ethical implications of using AI in legal research. The phenomenon has been termed “hallucination”: AI tools can generate plausible but entirely fictitious content, underscoring the responsibility lawyers bear to verify the authenticity of their citations.
The High Court’s stark reminder serves as both a caution and a call to action for legal professionals. As AI continues to permeate the legal landscape, maintaining rigorous oversight and ethical standards is paramount to protecting the justice system from the potential hazards posed by these advanced technologies.
Reference Map:
- Paragraph 1 – [1], [2]
- Paragraph 2 – [1], [3]
- Paragraph 3 – [1], [4]
- Paragraph 4 – [2], [6]
- Paragraph 5 – [5], [3]
Source: Noah Wire Services
- https://www.1news.co.nz/2025/06/08/uk-judge-warns-lawyers-citing-fake-ai-generated-cases-in-court/ – Please view link – unable to access data
- https://www.reuters.com/world/uk/lawyers-face-sanctions-citing-fake-cases-with-ai-warns-uk-judge-2025-06-06/ – A senior UK judge has issued a stern warning to lawyers using artificial intelligence (AI) to cite non-existent legal cases, highlighting the potential for severe consequences including contempt of court and criminal charges. The warning follows two recent cases in London’s High Court where lawyers appeared to have relied on AI tools such as ChatGPT to generate supporting arguments based on fictitious case law. Judge Victoria Sharp emphasized the serious threat this misuse of AI poses to the integrity of the justice system and public confidence in legal proceedings. She called upon legal regulators and industry leaders to implement more effective measures to ensure lawyers recognize and uphold their ethical duties. While existing guidance on AI use exists, Sharp stressed it is not sufficient to curb misuse. In extreme instances, submitting deliberately false material to court could constitute the criminal offense of perverting the course of justice. This ruling adds to global concerns about the rapid adoption of generative AI in legal practice without adequate oversight.
- https://apnews.com/article/46013a78d78dc869bdfd6b42579411cb – A UK High Court judge has warned about the risk to the justice system after lawyers cited fake legal cases generated by artificial intelligence (AI) in court. Justice Victoria Sharp noted the serious implications of such misuse for public trust and legal integrity. In one case involving a £90 million lawsuit with Qatar National Bank, a lawyer cited 18 non-existent cases generated by AI, relying on the client, Hamad Al-Haroun, for legal research. In another case, barrister Sarah Forey referenced five fictitious cases in a housing claim. Though Forey denied using AI, she failed to provide a clear explanation. The judges, including Jeremy Johnson, referred both attorneys to professional regulators. Sharp emphasized that knowingly presenting false information could lead to contempt of court or, in severe cases, charges such as perverting the course of justice—an offense punishable by life imprisonment. She acknowledged AI as a powerful and useful legal tool but stressed the importance of accurate oversight and adherence to ethical standards to maintain public confidence in the justice system.
- https://www.legalcheek.com/2025/05/judge-fury-after-fake-cases-cited-by-rookie-barrister-in-high-court/ – A High Court judge has issued a scathing ruling after multiple fictitious legal authorities were included in court submissions. The case concerned a homeless claimant seeking accommodation from Haringey council. Things took a sharp turn when the defendant discovered five ‘made-up’ cases in the claimant’s submissions. Although the judge could not rule on whether artificial intelligence (AI) had been used by the lawyers for the claimant, who had not been sworn or cross-examined, he left little doubt about the seriousness of the lapse, stating: ‘These were not cosmetic errors, they were substantive fakes and no proper explanation has been given for putting them into a pleading’ said Mr Justice Ritchie, adding: ‘I have a substantial difficulty with members of the Bar who put fake cases in statements of facts and grounds.’ He added: ‘On the balance of probabilities, I consider that it would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading. However, I am not in a position to determine whether she did use AI. I find as a fact that Ms Forey intentionally put these cases into her statement of facts and grounds, not caring whether they existed or not, because she had got them from a source which I do not know but certainly was not photocopying cases, putting them in a box and tabulating them, and certainly not from any law report. I do not accept that it is possible to photocopy a non-existent case and tabulate it.’ Judge Ritchie found that the junior barrister in question, Sarah Forey of 3 Bolt Court Chambers, instructed by Haringey Law Centre solicitors, had acted improperly, unreasonably and negligently. He ordered both Forey and the solicitors to personally pay £2,000 each to Haringey Council’s legal costs. Certainly the judge’s warning will echo across the profession: ‘It would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading.’ This case has sparked discussion on social media. Writing on LinkedIn, Adam Wagner KC of Doughty Street Chambers commented on the judgment, noting that while the court didn’t confirm AI was responsible for the fake cases, ‘it seems a very reasonable possibility.’
- https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt – Two lawyers in the United States have been fined for submitting fake court citations generated by ChatGPT, an artificial intelligence tool. The case involved a personal injury lawsuit where the lawyers, Steven Schwartz and Peter LoDuca, included fabricated legal precedents in their court filings. The judge, P. Kevin Castel, stated that while using AI for legal assistance is not inherently improper, lawyers must ensure the accuracy of their filings. The lawyers’ firm, Levidow, Levidow & Oberman, acknowledged the mistake but disagreed with the court’s assessment of bad faith. This incident highlights the risks associated with relying on AI-generated content without proper verification in legal proceedings.
- https://www.theguardian.com/world/2024/feb/29/canada-lawyer-chatgpt-fake-cases-ai – A Canadian lawyer is under scrutiny for submitting fake cases created by the AI chatbot ChatGPT. The lawyer, Ke, used ChatGPT to generate instances of previous case law applicable to her client’s circumstances. However, two of the cases cited could not be found upon investigation. When confronted, Ke apologized, stating she had no intention to mislead the court. The incident underscores the potential risks of using AI in legal research, as AI tools can produce convincing but false information, known as ‘hallucinations.’ The case has raised concerns about the ethical implications of AI use in the legal profession.
- https://www.independent.co.uk/news/uk/home-news/chatgot-woman-court-case-ai-b2462142.html – A woman who used nine fabricated AI-generated cases in her court appeal has lost her case. The Law Society Gazette reported that Lord Justice Birss stated that while AI is a useful tool, he takes full personal responsibility for his judgments and does not delegate this to others. The incident highlights the risks of relying on AI-generated content without proper verification, as AI tools can produce convincing but false information, known as ‘hallucinations.’ The case has raised concerns about the ethical implications of AI use in the legal profession.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes:
The narrative was first published on June 6, 2025, by Reuters, with similar reports from AP News on June 7, 2025, and 1News on June 8, 2025. The 1News article references the Reuters and AP News reports, indicating it is a republished version; the Reuters report carries the earliest known publication date. The 1News article includes updated material but recycles older reporting, which should be flagged even though the updates support a solid freshness score. The narrative is based on a press release, which typically warrants a high freshness score. No discrepancies in figures, dates, or quotes were found, and no version of the narrative appeared more than seven days earlier.
Quotes check
Score: 9
Notes:
The direct quotes from Justice Victoria Sharp in the 1News article match those in the Reuters and AP News reports, indicating they are not original. No online matches were found for other direct quotes, suggesting they may be original or exclusive content.
Source reliability
Score: 7
Notes:
The narrative originates from 1News, a New Zealand-based outlet that may be less well-known internationally. It references reputable organisations such as Reuters and AP News, indicating it is a republished version; the Reuters report carries the earliest known publication date.
Plausibility check
Score: 8
Notes:
The narrative reports on a warning issued by High Court Justice Victoria Sharp regarding the use of AI-generated fake cases in court proceedings. Similar incidents have occurred globally, including in the United States and Canada, where lawyers faced sanctions for relying on AI-generated legal precedents. The language and tone are appropriately formal and consistent with typical legal and official reporting, with no excessive or off-topic detail unrelated to the claim.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is a republished version of earlier reports from Reuters and AP News, with no discrepancies found. The direct quotes from Justice Victoria Sharp are not original, but other quotes may be. The primary source, 1News, is less well-known internationally, but the content is consistent with similar reports from reputable organizations. The plausibility of the claims is supported by similar incidents globally, and the language and tone are appropriate.