Expert Witness Journal, Issue 64, December 2025
As arbitral institutions issue more guidance and rules on the use of AI, and national courts develop their approach to the use of AI in court proceedings, a more detailed body of guiding principles will emerge to help arbitrators establish the boundaries for effective AI use. Guidelines already exist from certain arbitral institutions setting out what they consider to be effective governance of AI in international arbitration. Two major developments are as follows:

•	The Silicon Valley Arbitration & Mediation Center42 was a first-mover, consulting on its guidelines, which were published on 30 April 2024. Among the provisions contained within the guidelines are:

	•	a requirement for parties and their representatives to demonstrate competence and diligence in the use of AI, and respect for the integrity of the arbitration and the evidence used within it, placing a duty on practitioners to ensure that they understand the tools they are using and to safeguard against the inappropriate use of those tools, whether by failing to interrogate the output from the AI or by using AI in a way that harms the integrity of the arbitration, including by falsifying evidence;

	•	for arbitrators, a prohibition on delegating the decision-making function of their mandate to AI, together with duties to ensure the integrity of the proceedings by preventing information outside the record from being introduced through AI, and to verify sources.

•	The Chartered Institute of Arbitrators (CIArb) has issued guidelines43 which set out the benefits and risks of AI in arbitration, make recommendations for the proper use of AI, and address arbitrators' powers to give directions and rulings on the use of AI by parties in arbitration. The CIArb guidelines can be distinguished from the ICDR AI-arbitration product mentioned in paragraph 4, above, since they prohibit decision-making being delegated to AI. Instead, the CIArb guidelines provide arbitrators with tools by way of a template agreement on the use of AI in arbitration and a template procedural order on the use of AI in arbitration. These templates allow the parties to agree parameters establishing either (i) which tools counsel may use, or (ii) which functions and tasks AI may be used for. They also set out a list of obligations on the parties to ensure that they understand the tools they are to use, their limitations, and the impact of their use, including ethical/bias concerns, confidentiality and data security, and a duty not to mislead. The template protocol also provides governance for so-called "High Risk AI Use", described as a use case that risks a breach of privacy/confidentiality or data security obligations, has the potential to undermine procedural integrity, or has the potential to assert a non-human influence on the award.

Further publications have been issued by other global institutions, with many law firms and barristers' chambers also now providing their own guidance on the use of AI in arbitration. As governments and international bodies issue laws and regulations on the use of AI (including the EU's AI Act), and national courts issue not only judgments on the use of AI but also guidance and practice directions on how it should be used in court, practitioners and arbitrators will need to stay abreast of their legal and regulatory obligations. This will help them to ensure that their use of AI in arbitration complies not only with the relevant arbitral rules governing their dispute, but also with the law of the seat of the arbitration, the governing law of the arbitration and, indeed, their own professional obligations.

Conclusion
If we are to meet the challenge that AI sets for us, and also meet the expectations of our clients as they evolve alongside the development of these tools, it will not be enough to stick with the "tried and tested", nor will it be sufficient to rely on specialists or younger team members who have more experience and facility in using AI technology. As the famous computer pioneer Admiral Grace Hopper observed, commenting on the future of data processing as far back as 1976, "the most dangerous phrase a [data processing] manager can use is 'We've always done it this way.'"44 Arbitrators, experienced practitioners, experts and all levels of the legal profession, both in-house and in private practice, must make sure that they learn and appreciate the impact that AI is having, and will have, on how disputes are conducted. They need to learn this because AI is already here. Failing to understand it will not only mean being left behind, but may also run the risk of being caught out.
matthew.walker@klgates.com
www.klgates.com