Can AI be trusted for
legal research?
by James Tumbridge, Robert Peake, Ryan Abbott
In the first of a two-part series, Technology partners
James Tumbridge and Robert Peake, and consultant
solicitor Ryan Abbott consider the use of Artificial
Intelligence (AI) in legal research, comparing the
approach in the UK with that in the US and Canada.
AI has been thought of as the solution to everything
for the past couple of years. The use of AI in legal
disputes presents positive opportunities, but problems
have been spotted, resulting in various guidelines
and rules being published. Concerns are now
growing in the UK following the embarrassment of
lawyers and trade mark attorneys whose use of AI
produced inaccurate outputs.
England has been looking to technology, and
potentially AI, to help with cases for some time.
In March 2024, Lord Justice Birss explained that
algorithm-based digital decision making was already
working behind the scenes in the justice system,
solving a problem at the online money claims
service: a formula is applied where defendants
accept a debt but ask for time to pay. Looking to
the future, Birss LJ said: “AI used properly
has the potential to enhance the work of lawyers
and judges enormously.” In October 2024, the
Lord Chancellor and Secretary of State for Justice,
Shabana Mahmood MP, and the Lady Chief Justice,
The Right Honourable the Baroness Carr of Walton-on-the-Hill, also echoed the potential of technology
for the future of the courts and justice system.
Not all AI is Generative AI (genAI), meaning AI
that generates an output in response to a request.
It is the generative product of AI that has
caused most concern in legal proceedings. The first
reported case of lawyers relying on genAI
occurred in the US in May 2023. It involved two
New York lawyers who used an AI tool, ChatGPT, for
legal research, which produced results that included
made-up cases. These results were submitted in
federal court filings without being reviewed or
validated by the attorneys, resulting in Judge Castel
demanding that the legal team explain itself. Despite
the widespread attention this case garnered, American
attorneys continue to submit ChatGPT output
without review or validation. There are also
similar examples from Canada, including the April
2025 case of Hussein v. Canada, where the lawyer
apparently relied on a tailored legal genAI tool
called Visto.ai, designed for Canadian immigration
cases, but still ended up citing fake cases in the
submissions, as well as citing real cases for
the wrong points. Canada requires disclosure of the
use of AI, but that did not stop these mistakes.
The judge commented:
“[39] I do not accept that this is permissible. The use of
generative artificial intelligence … must be declared and as
a matter of both practice, good sense and professionalism,
its output must be verified by a human…”
However, alongside accuracy, there is concern
about the ethics of AI use. On ethical AI
and international standards, the UK promotes the
Ethical AI Initiative and the international standard
ISO 42001, the AI management system standard.
This may be adopted as a standard in English
procedure at some point. In April 2025, the judiciary
updated its guidance to judicial office holders on the
use of AI. Yet all this guidance seems to go unheeded:
there is a clear need for better understanding of the
rules and policing of lawyers.
Use of AI in American courts
As discussed above, the first case to involve a lawyer
caught submitting inaccurate ChatGPT-generated
content was Mata v. Avianca, Inc., No. 1:22-cv-01461
(S.D.N.Y. 2023), involving attorneys Steven Schwartz
and Peter LoDuca and their firm Levidow, Levidow
& Oberman. The judge sanctioned both attorneys
and their firm, levying a $5,000 fine for misleading
the court. The judge found that the lawyers acted
in bad faith and made “acts of conscious avoidance and
false and misleading statements to the court” in order to
obfuscate their conduct. The judge found that Schwartz
did not understand ChatGPT’s limitations, did not
verify the AI-generated results himself, and relied on
ChatGPT’s self-verification.
Use of AI in English courts
The English courts do not ban the use of AI, but
both judges and lawyers have been told that they are
responsible for the material produced in their name.
In England & Wales, AI can be used, but the human
user is responsible for its accuracy and for any
errors. In November 2023, the Solicitors Regulation
Authority issued guidance on AI use, and the Bar
Council published guidance in January 2024. More
recently, in 2025, the Chartered Institute of
Arbitrators also issued guidance.