The High Court of England has issued a grave warning to lawyers, indicating that the use of AI-generated false materials could lead to criminal charges and undermine public confidence in the justice system.
High Court of England Issues Strong Warning Against Use of Fake AI-Generated Legal Materials

A senior judge cautions lawyers about the potential criminal repercussions of utilizing fictitious AI-generated evidence in legal cases.
The Royal Courts of Justice, home to England’s High Court, recently underscored a pressing concern about the use of artificial intelligence in legal proceedings. In a ruling reported on June 6, 2025, the court described two recent cases in which fabricated, AI-generated material was submitted, raising alarm among judicial authorities.
In a notable intervention, Judge Victoria Sharp, president of the King’s Bench Division of the High Court, sitting alongside Judge Jeremy Johnson, found that existing regulations had been inadequate to curb the misuse of AI and called for urgent reform of the guidance provided to legal practitioners. In one of the cases, an individual and their legal counsel acknowledged that an AI tool had produced “inaccurate and fictitious” evidence in litigation involving two financial institutions.
In the other case, concluded in April, a lawyer for a plaintiff challenging the actions of a local council faced scrutiny after being unable to account for non-existent legal precedents cited in the arguments. Sharp emphasized that the judiciary must enforce adherence to ethical standards, noting that misuse of AI could jeopardize justice and erode public trust in legal processes.
Drawing upon rarely exercised powers, Judge Sharp stated, “There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” indicating that lawyers who present fabricated AI-generated content might face prosecution or bans from the legal profession.
As AI technology becomes more integrated into daily life, the need for regulatory measures to uphold the integrity of the legal system is increasingly critical. The High Court's warning serves as a reminder of the responsibilities lawyers hold in ensuring the accuracy and credibility of their arguments and materials used in court.