Litigation Trends 2025

AI in International Arbitration

Like many other industries, the legal industry has sought to harness the power of artificial intelligence. AI has broad applicability to legal services, including international arbitration: it can streamline the management of arbitration proceedings, most notably by assisting with discovery and document review, which can alleviate the time and cost pressures associated with arbitration. However, risks and potential misuse are also front and center of considerations, particularly given the lack of general regulation relating to AI.

Several arbitration institutions, including the Silicon Valley Arbitration & Mediation Center and the SCC Arbitration Institute, have published guidelines offering best practices for the use of AI in arbitration. These guidelines are non-binding and drafted broadly, giving parties and tribunals the flexibility to adopt and apply them as necessary.

General Guidelines

CONFIDENTIALITY
Participants should ensure that AI software used during proceedings has appropriate safeguards in place to protect the confidentiality of client information. Without these safeguards, AI models may retain confidential information input by users to train the software, which could result in sensitive information becoming available to third parties. Parties may seek to instruct experts to advise on the AI tools best suited for privacy.

DISCLOSURE
While there is no strict obligation to disclose the use of AI tools in arbitration, disclosure may be useful to avoid misleading another party or disrupting due process. Given that AI-generated outputs are reflective of their inputs, it may additionally be necessary to provide other parties with the material submitted to the AI model.

NON-DELEGATION OF DECISION-MAKING
This guideline applies specifically to arbitrators and underscores the fact that an arbitrator's personal decision-making role is non-delegable. Arbitrators may deploy AI tools to analyse information and help draft awards, but liability for any inaccuracies ultimately resides with them. There is no replacement for independent human analysis, particularly given that AI output may be biased depending on the data on which it has been trained.

RISK AND MITIGATION
To reduce errors or so-called "hallucinations" (results that are incorrect but appear prima facie reasonable) in AI-generated responses, users can employ "prompt engineering" to communicate with the software in a way that will generate the most accurate responses. Examples of potential hallucinations include incorrect case citations and footnoted sources. Human input is therefore crucial both to identify these errors as they arise and to operate AI in a way that generates optimal results. There should be human oversight of AI models, and any AI outputs utilised in the arbitration process should be subject to human review.

OTHER
Arbitral institutions are not solely considering AI in the context of arbitration procedure, but also AI companies as the subject of disputes, particularly with regard to issues such as copyright infringement and data privacy. Judicial Arbitration and Mediation Services (an alternative dispute resolution service provider) published guidelines in April 2024 which clarify the correct approach for cases in which AI companies are the subject. Companies operating in the AI space will no doubt be paying close attention to future developments, particularly since, among other topics, the guidelines focus on confidentiality safeguards for information originating in AI systems.

CONCLUSION
The guidelines discussed in this article represent a framework for the approach taken by arbitral institutions in relation to AI. In the coming years, it is likely that other international arbitration institutions will follow this example and, in addition, that there may be a need to regulate the safe use of AI in arbitration. As AI technology develops further, particularly in relation to decision-making and data analysis, the question of regulation will come into ever sharper focus.

International Arbitration
Jamie Maples, Head, London
jamie.maples@weil.com

Weil, Gotshal & Manges LLP
