“AI”, “artificial intelligence”, “big data” – these are phrases we hear around us daily. As AI continues to seep into our social structures at an ever-increasing pace, the question arises: how are we affected?

So far, AI-assisted technology is helping us conduct research, compose emails, draft essays, do our homework, and even write code. The reach of AI is growing at an astonishing pace, raising the possibility that it will eventually outpace human intelligence.

Over 60 countries have adopted national policies on the use of AI, and the EU is engaged in these discussions as well. In April 2021, the European Commission put forward a new regulation, Laying Down Harmonised Rules on Artificial Intelligence (the Artificial Intelligence Act), which addresses the risks of AI and positions Europe to play a leading role in regulating the field.

In a very recent development, a group of AI experts and industry executives signed an open letter calling for a six-month pause in developing advanced AI. The signatories consider that “contemporary AI systems are now becoming human-competitive at general tasks” and AI labs are “in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”. A six-month pause is therefore needed to develop and implement safety protocols.

On the bright side, AI is speeding up sluggish processes, particularly in the legal industry: research and drafting have become easier, and even legal reasoning is said to benefit. Legal research that would take hours, or even days, can now be generated by AI in a matter of seconds. That research is not necessarily flawless, however: it can contain errors because it draws on unverified internet sources and suffers from data disparities that, for now, only humans can catch. Technology is also helping lawyers with other routine legal tasks, such as contract analysis and the review of voluminous documents. As a result, lawyers are becoming more efficient, spending less time on mundane tasks and focusing instead on more sophisticated legal work.

But can decision-making be outsourced to robots? In February 2023, a judge in Colombia used a language-processing tool to decide whether an autistic child’s insurance policy should cover his medical treatment. While this caused quite a stir, Judge Padilla (of the city of Cartagena) used AI only to supplement an otherwise standard legal reasoning process drawing on precedent. He included his conversations with the chatbot in the ruling, among them the following exchange: (Padilla) “Is an autistic minor exonerated from paying fees for their therapies?” (AI) “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.”

As AI spreads, international arbitration is bound to feel the effect as well.

Last week our Paris office hosted a panel discussion dedicated to the topic, organized as part of Paris Arbitration Week (a yearly one-week gathering of arbitration lawyers from around the globe). Participants of our conference agreed that the use of AI in arbitration will increase, most probably in the very near future. The systems currently in use show remarkable capability and can process large volumes of data quickly and precisely. The goal of these technological developments is to enhance human productivity and, consequently, to enable lawyers to focus their attention on strategic issues. Whether this will render certain entry-level legal jobs (or legal functions) obsolete remains to be seen. In any event, lawyers unable to use artificial intelligence effectively may soon find themselves replaced by those who can.

But these systems are fundamentally input-oriented and prone to mistakes. They also lack critical reasoning: they cannot make judgement calls, let alone draw moral or ethical conclusions. While AI can help summarize information about a dispute and the procedure, and can accelerate the issuance of an arbitral award, it cannot reason on behalf of an arbitrator. Some argue that AI can ensure bias-free and reasonable decision making, but it cannot replace human intelligence and the sense of fairness that often drives decision-making. Even if AI strengthens the logos (logical reasoning) of advocacy and legal decision-making, such processes, as Aristotle highlighted, are also shaped by human ethos (credibility or expertise) and pathos (emotion). These areas remain far beyond the reach of AI.

For AI in dispute resolution, including arbitration, the answer to the question of how it should be embraced seems to be this: it has the potential to greatly assist, not supplant, the legal decision-making process.