Infra.Law


Navigating AI in Dispute Resolution: Insights from LIDW's Core Conference

By Melanie Tomlin

Mel Tomlin attended LIDW’s core conference at Westminster’s QEII Centre, which brought together leaders from across the dispute resolution world to discuss the most topical issues of the moment under the theme “Innovation in Dispute Resolution: Navigating Global Risks”.

This article draws together key insights from the conference’s high-profile panellists on the use of AI as a tool in the dispute resolution offering within the legal sector. Many diverse and interesting topics were explored at LIDW, an event now firmly established in the hearts and minds of the international disputes community as London’s flagship event for global dispute resolution.

The conference kicked off with renowned legal and geopolitical experts, including Cherie Blair OBE KC, exploring the legal challenges presented by the increasing geopolitical instability of recent years and how tumultuous world events are shaping dispute resolution. Of the many notable panellists, some of the content drawn upon in this article came from:

  • Lorraine Medcraft (Vice President of Court Reporting Sales, Epiq), who was introduced as someone at the coalface of technology.
  • James Besley (Co-Head of Legal at Google DeepMind), whose illuminating discussion had a particular focus on in-house legal teams. This was set against the context that Google DeepMind’s own mission is to build AI responsibly to benefit humanity, and a recognition that AI is more to the fore of the public’s imagination now than ever.
  • Greg Harman (Managing Director, BRG), who gave some interesting insights on AI from an expert’s perspective.

It was described at LIDW as a really exciting time to be at the forefront of technology because it is evolving so quickly. Technologies to assist in disputes, such as generative AI (document classification, identification, summarisation and analysis), will change the way in which dispute practitioners of the future work, if they have not changed it already.

The role of AI within in-house legal teams


What should in-house legal teams consider?

  • Do they have a balanced AI and tech literacy programme? For example, will the in-house team have lawyers who are partly responsible for counselling the business and partly responsible for managing the technology side – such as bringing in AI tools, tailoring them and beta testing them?
  • Do they have the space to experiment with the AI technology – to educate themselves with it, understand what it can do, see what its limitations are, and look for opportunities to introduce it to optimise their workflows?
  • Establish a data governance and data strategy; specifically, how the data is being used. For example, if sensitive client information from court documents is put into the AI tool, will it be used to train future iterations of the model and, if so, is there a risk it might regurgitate some of that sensitive information later down the line? In-house teams should do their diligence on AI tools to understand how they operate.
  • Approach AI technology as a collaboration or assistance tool: something that can help identify gaps in people’s knowledge and then close those gaps, rather than something that completely replaces the need for junior team members to learn.
  • Learn how to prompt these tools properly and help junior lawyers to understand how to contextualise the output.
  • Retain human oversight of outputs. To put it another way, AI has to be seen as a means to an end, not an end in itself. This was a common theme throughout the conference.

One of the challenges highlighted in the panel discussion on managing AI risks is the emergence of regulatory regimes in different jurisdictions, which are developing at different paces. There are real challenges in counselling in this kind of environment, where advisors will need to take account of all of these different regimes, particularly when the technology is globally accessible. A premium therefore remains on human judgment when dealing with complex fact scenarios and with situations that warrant in-person interaction.

Potential limitations of AI technology and how to address them


Against the context of the importance of tech literacy, it is also necessary to understand the limitations. Most will be aware of the risk that an AI tool may produce ‘hallucinations’ – inaccurate or fabricated information. As a general point, one panellist reiterated that provenance and watermarking will be critical, particularly where disputes are based on incorrect information. Fact-checking is vital: for example, an AI tool that can show the citation for where everything comes from, so that it is possible to corroborate and verify it. The limitations of AI were similarly discussed by LIDW’s judicial panel – one of the highlights of the main conference – in the session “How are the courts around the world innovating?”. The distinguished speakers included:

  • Lord Justice Birss (Lord Justice of Appeal and Deputy Head of Civil Justice, England and Wales),
  • The Hon. Wayne Martin AC KC (Chief Justice of the Dubai International Financial Centre Courts),
  • Judge Elizabeth S. Strong (US Bankruptcy Court, Eastern District of New York) and
  • The Rt. Hon. The Lord Thomas of Cwmgiedd (President of the Qatar International Court).

Views expressed by the judicial panel echoed the earlier point that AI should be seen as a tool for getting to the end result, not a result in itself. Rather than using AI to produce the end result, it should be seen as a tool that provides leads, with every statement then verified by reference to the evidence and to propositions of law. With this approach, the potential dangers of AI can be limited, because hallucinations will be identified through the checking process.
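
By way of illustration only, the sketch below shows what that ‘leads, then verification’ workflow might look like if reduced to code. It assumes a hypothetical output format in which each AI-generated statement carries a citation to a source document; the names and structures are illustrative placeholders, not any particular product’s API.

    from dataclasses import dataclass

    @dataclass
    class Statement:
        text: str      # a proposition produced by the AI tool
        citation: str  # the document ID the tool cites in support

    def triage(statements, corpus):
        """Split AI output into checkable leads and possible hallucinations.
        A citation that resolves is only a lead: a human reviewer must still
        read the cited document and confirm it supports the statement."""
        leads, suspect = [], []
        for s in statements:
            (leads if s.citation in corpus else suspect).append(s)
        return leads, suspect

    # Illustrative use: one verifiable statement, one citing a non-existent document.
    corpus = {"DOC-001": "The contract was signed on 1 May 2020."}
    statements = [
        Statement("The contract was signed on 1 May 2020.", "DOC-001"),
        Statement("The claimant waived the penalty clause.", "DOC-999"),
    ]
    leads, suspect = triage(statements, corpus)
    print(len(leads), len(suspect))  # 1 1 – the unverifiable statement is flagged

The point of the structure is that the tool never produces the ‘end’ directly: its output is narrowed into material a human can check against the evidence and the law.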


One panellist discussed the scenario of a large case with massive volumes of data that need to be reviewed and summarised. Should the task be given to an associate, with a longer wait for the summary, or should a summary be obtained in minutes, or even moments, using AI? If an associate or someone who knows the case undertakes the review – someone who has learned the documents, and seen the handwritten note in the margin or the words which got circled – they will likely pick up on things which perhaps AI would not, but which may be critical to the outcome of the case. The suggestion was made that practitioners should be encouraged to think about this not as ‘either/or’ but as ‘both’: these are yet more tools in a toolkit, and one needs to be thoughtful about how to deploy them, mindful of the benefits as well as of what one might be giving up.

It was interesting to hear the judicial panel’s enthusiastic endorsement of AI, and the key takeaway that AI should be viewed as a means by which you can arrive at a result, subject to all the normal constraints of checking data, legal authorities and the like. Whilst acknowledging the need for sensitivity to these guardrails, the judicial panel were otherwise fully behind the potential value that AI can bring. Among the interesting insights on AI given from an expert’s perspective, the suggested limitations included that:

  • It is based on a particular set of data, and it has been taught, or trained, by human intervention.
  • Weights are given to the different elements that the AI tool should focus on, and so forth.
  • AI is very good at looking at structured, text-based problems, but very specific questions, such as “Who is going to win the Champions League next year?”, are difficult for it. In the expert process – as in the legal process – there is more to the work than summarising documents. Different generative AI chatbots will likely give different answers to the same question, whereas in the expert world, experts want the “right answer”, or at least a “balanced answer”. From an expert’s perspective, the output still needs to be reviewed and given context, and sophisticated questions need to be put through that filter.

Conclusion


To conclude, using the same apt quotation from Vladimir Lenin given at the LIDW conference: “There are decades where nothing happens, and there are weeks where decades happen”. Whilst AI has been a popular topic for the past decade, there has now been an explosion in the capabilities of these AI models. Whilst most of the panellists at the LIDW conference were discussing AI within the context of ‘generative AI’, more recently the term ‘agentic AI’ has also emerged. In the context of legal practice, ‘agentic AI’ is where the AI tool can, either autonomously or semi-autonomously, take multiple steps in a row to produce an output. So, for example, it could review documents and then draft a statement of claim, or review a data room and draft various documents based on it, with potentially limited human oversight. This was described by one panellist as like having a virtual assistant that you can ask questions of; it will go into the document base and come back with responses, with the potential for further discussion about, for example, the underlying case law, why positions have been taken, and what positions to go back with. Of course, retaining human oversight of the outputs is key.
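
To make that distinction concrete, the sketch below shows the general shape of such an agentic loop: the model chooses each next step, tools do the work, and the final output still goes to a human for review. Everything here is a hypothetical stand-in – the scripted ‘model’ and the tool names are placeholders, not any vendor’s API.

    # A purely illustrative sketch of a semi-autonomous 'agentic' loop.

    TOOLS = {
        "search_documents": lambda q: f"[documents matching '{q}']",
        "draft_claim": lambda notes: f"[draft statement of claim based on {notes}]",
    }

    # Scripted decisions standing in for a real chat-completion call.
    SCRIPT = iter([
        {"action": "search_documents", "input": "delay and disruption"},
        {"action": "draft_claim", "input": "search results"},
        {"action": "finish", "output": "draft ready for human review"},
    ])

    def call_model(history):
        return next(SCRIPT)  # a real agent would send `history` to an LLM here

    def run_agent(task, max_steps=10):
        history = [f"Task: {task}"]
        for _ in range(max_steps):  # cap the autonomy: no unbounded loops
            decision = call_model(history)
            if decision["action"] == "finish":
                return decision["output"]  # human oversight takes over from here
            result = TOOLS[decision["action"]](decision["input"])
            history.append(result)  # feed the tool output back to the model
        raise RuntimeError("step limit reached without a final output")

    print(run_agent("review the data room and prepare a statement of claim"))

The step cap and the human review of the final output are the ‘semi’ in semi-autonomous: the loop acts on its own between checkpoints, but never beyond them.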

Our AI Law Business Guide


We recently published our AI Business Guide, which aims to help both in-house lawyers and senior executives improve their understanding of AI and some of the key legal issues associated with it. The Guide includes contributions from practice area experts from across the Firm, providing a perspective on the legal implications of AI and on how the rapid development of different types and branches of AI creates additional, more complex issues that require consideration.
