as discussed above, could become an incredibly powerful tool once its accuracy rate improves. AI's sweet spot in analysing and summarising data means it is in a prime position to review a hearing transcript quickly, picking out key themes relevant to future preparation as well as any inconsistencies in the testimony given by witnesses or experts that can be seized upon to a party's advantage. Being able to analyse evidence given on the stand against written statements at speed is a game changer that will allow teams to use technology to gain an advantage during trials. In cases where there are multiple witnesses and experts, client representatives can obtain regular updates on the progress of the hearing and may notice key points which warrant further reflection. In that scenario, using AI to turn around a summary analysis quickly following receipt of a daily transcript may give technologically literate teams the edge.

Risks
The highest profile risk when using AI, about which practitioners and clients may be preoccupied, is the problem of hallucinations, principally of hallucinated (i.e. fictional) case references. Stories from jurisdictions around the world have already shown how lawyers can get themselves into a lot of trouble when using LLM research tools without proper scrutiny. As at the date of this article, there are already 633 cases worldwide in which a hallucinated case reference has been created by AI.31 Here are some high-profile examples.
In England and Wales:
• A junior barrister was handed a wasted costs order for relying on five authorities that did not exist. The barrister has been referred to the Bar Standards Board for disciplinary action, and the High Court considered whether their conduct amounted to contempt of court.32
• 45 citations within a witness statement drafted by a solicitor were found to be false in some way, including 18 which did not exist at all. The solicitor was referred to the Solicitors Regulation Authority for disciplinary action.33
In the United States:
• A law firm and an individual attorney received a joint sanction of US$5,500 and a mandatory requirement to attend a course on the dangers of AI after filing a brief containing fake quotations and non-existent authority.34
• Three attorneys received public reprimands from the court for making false statements following the submission of two motions which contained fabricated citations. They were removed from the case and reported to the Alabama State Bar.35
In Canada:
• A lawyer with over 30 years of experience relied on fabricated cases in a memorandum submitted to the court. The court stated that “counsel who misrepresent the law, submit fake case precedents, or who utterly misrepresent the holdings of cases cited as precedents, violate their duties to the court”.36
A seasoned practitioner will understand that the phrase “don’t trust, always verify” means that even human-generated research should be properly vetted and stress tested to ensure the accuracy not only of the answer but of the sources themselves. When it comes to using AI for research, tools that are specifically designed for legal practitioners are likely to yield more trustworthy results than open-source LLM platforms, because “guard rails” have been developed around the training data collated for legal industry AI tools. Nevertheless, this is not an automatic guarantee of accuracy; checking source materials and independently searching legal databases for the cited authorities is vital for avoiding the embarrassment, and the potential sanctions, that come from falling into the fake case-citation trap.

Moreover, not only do sources require verification, but the answer to a research question generated by AI should not automatically be trusted to be correct. It is a well-known problem with LLMs that they will prefer to answer in the affirmative, i.e. to give you the answer that you want and to avoid telling you no. This is why it is so important to stress test the reasoning given to you by the AI to ascertain whether it is a sound and defensible response. Even AI tools which are legally trained will sometimes use the wrong source material, or material that does not provide sufficient support for a proposition, to give an affirmative answer that pleases the user rather than answering in the negative or giving no answer at all. For example, you should check whether an answer comes from a valid case citation or whether it has come from a precedent document or template that has no legal force. The latter might be included in a legal database as part of the training data of a platform, and it can therefore still result in a hallucinated response.
Senior lawyers need to understand how LLM platforms work and the type of results that are likely to be generated, so that they can properly supervise the juniors working for them, especially the next generation of trainees, paralegals and junior lawyers. For them, using AI will be as normal as using email was to most senior lawyers at the start of