Expert Witness Journal, Issue 64, December 2025
their careers. It is likely that the next generation of
lawyers will use AI far more readily than any other
generation of practitioner. Being able to properly
supervise and train these lawyers to spot the pitfalls
of using AI when conducting “manual” research
will be vital to ensure that there is not a drop in
the quality of the supervision given to these junior
lawyers, and therefore in the quality of their work
product and, ultimately, the service to clients.
These risks are heightened by the public and
accretive nature by which LLMs gather, store and
share data. More importantly, however, when it
comes to AI e-discovery, commentators have
expressed the view that the use of such technology
needs to be tested before the courts, and guidance
and principles laid down, to ensure that it can be
utilised effectively without the risk of disclosing
privileged information such as review logs and
prompting.
Perhaps even more troubling than accidental
reliance on fake citations is the potential deliberate
use of falsified evidence in legal proceedings. The
alarming quality of so-called deepfakes, which use
the image or voice of a person to depict them saying
or doing something they never actually said or did,
poses an acute risk to dispute resolution, since the
veracity of evidence may become increasingly
questionable.37
Given that the plain-language nature of prompting
will bring review protocols closer into line with case
strategies, there is a danger that using AI too liberally,
or without proper consideration of privilege, may
result in accidental over-disclosure of one’s strategy
to the other side. AI in e-discovery may have its place
for the time being in an initial internal review stage,
where the initial universe of documents is analysed
for relevance to the dispute and for key documents.
However, when disclosure requests and Redfern
schedules are in play, the use of AI is likely to be
limited to avoid over-disclosure, and those who use
it should proceed with caution to ensure that they
do not give away more than they would when using
search terms.
When faced with a document that one party alleges
is fraudulent (or which appears questionable),
how does an arbitral tribunal carry out the task of
ascertaining the veracity of the evidence? While
tools exist which claim to be able to spot deepfakes,
testing has shown that these platforms are not
yet reliable at spotting falsified evidence, and are
therefore of limited utility in these
circumstances.38, 39, 40 If a tribunal cannot rely on
technology to ascertain whether something has
been created by AI, how can it equip itself to make
that decision? Should it simply decide that the
document in question holds no weight? Should it
seek submissions from the parties on the issue?
Should it engage forensic analysis? The answer will
be one for each tribunal to determine, depending
on the specific circumstances of each case.
Regulations and procedures
As noted above, there is a growing call for arbitrators
to use all the powers at their disposal to better
control arbitration, while clients are calling for
their lawyers to be innovative in their approach to
dispute resolution. AI will prove to be a catalyst
for this increased focus on the efficacy and efficiency
of arbitration, if all parties involved use the tools at
their disposal appropriately.
To that end, discussions regarding the use of AI
in arbitration are likely to become part of the
early conversations both with clients and, more
importantly, with opposing counsel and the
tribunal. Seeking to set the parameters for the use
of AI is likely to become part of the negotiation of
the terms of reference or first procedural order of
an arbitration as parties seek to use the tools to
their advantage whilst catering for ethical and legal
obligations.
Nevertheless, being alive to the evidential issues
that AI can cause is already important, and will
only become more so as the level of AI-generated
content within arbitration increases. In that context,
the Global Investigative Journalism Network has
released a guide to detecting AI-generated content,41
in which it identified seven categories of AI detection
and advocated three levels of checking based on
the time available for review: a 30-second red flag
check, a five-minute technical verification, and a
deep investigation.
Maintaining data security and privilege is another
area where practitioners will need to be extremely
careful in the adoption of AI. Lawyers will need
to ensure that their LLM products do not ingest
privileged material into their training data and then
apply that data in a way that inadvertently waives
privilege. The security arrangements even for
internal LLM platforms will need to be thoroughly
scrutinised in order to assess the risks attached to
them. External platforms will require explicit client
waivers as to confidentiality, GDPR and privilege
before data can be uploaded to a public LLM.