There have not been any significant court decisions in the last couple of days; SCOTUS, for one, has been busy with other, more urgent, matters. So, today we get a chance to consider three really good articles on ADR, addressing arbitrator conflicts and the use of AI in arbitration and mediation.
Conflicts
I could not find any current studies quantifying the number of awards vacated for an arbitrator’s failure to disclose a potential conflict. However, particularly in the realm of international arbitration, such failures have resulted in several very visible cases. In 2014, the International Bar Association adopted Guidelines on Conflicts of Interest in International Arbitration. Last year, the association updated those standards. Erica Stein addresses those changes in the IBA’s journal, Dispute Resolution International, Stein, The 2024 IBA Guidelines on Conflicts of Interest in International Arbitration, 18 DRI 123 (2024), available on Lexis at The 2024 IBA Guidelines on Conflicts of Interest in International Arbitration. While going into detail on all the changes, the article highlights revisions to the Guidelines’ well-known “traffic light” tests.
The Red List sets forth conflicts which, except for a subset which are waivable after full disclosure, require the proposed arbitrator to turn down the assignment. The revisors have changed the list to recognize the reality that lawyers sharing a large, international firm may never have even met each other. Therefore, while the arbitrator’s direct involvement in representing a party is still a non-waivable Red List item, a conflict may be waived when “the firm, without the arbitrator’s direct involvement, represents a party and derives significant financial income therefrom.”
The Orange List, which governs circumstances in which the arbitrator must make a disclosure, now specifically includes situations in which the arbitrator has served as an expert or mock arbitrator, or serves concurrently with a fellow arbitrator in another matter.
The Green List, i.e., those items that ordinarily do not require disclosure, has been revised to add “the fact that an expert appearing before the arbitrator in one matter also appeared before the arbitrator in another matter.” However, Attorney Stein points out that this change still requires proposed arbitrators to consult the changes to the Orange List regarding experts.
The Guideline revisions “clarify” the scope of their application, by stating that they apply to “all international arbitration[s],” not just commercial matters. Further, they do not override “codes of conduct or other binding instruments chosen by the parties.” While Attorney Stein states that the purpose of the provision is to accommodate the 2024 UNCITRAL Code of Conduct for Arbitrators, does the exception also incorporate professional Codes of Conduct, like those governing attorneys?
Even for those not doing international arbitration, the article is a great reminder of the types of disclosures that we arbitrators need to make. However, remember that the principles in the Guidelines are not always a shield against vacatur based on an arbitrator’s conflict. Forums like the American Arbitration Association and FINRA have broader rules under which items protected by the Green List must still be disclosed. So, view the Lists as a guide, but, remember, it is a lot better to disclose – and maybe lose an appointment – than to incur the reputational damage of a published opinion overturning your award.
Artificial Intelligence
News flash – AI is a hot issue in the ADR world. (My family accuses me of undue snark; maybe, they are right). Two articles address the relationship between dispute resolution and AI.
Bradford Newman and Daniel Garrie discuss in depth the problems which lawyers encounter when using AI (remember the false citation cases) and the requirements that courts and bar associations impose when counsel uses the tool, Newman and Garrie, The Current State of US Regulation of the Use of AI in Dispute Resolution, 18 DRI 105 (2024), available on Lexis at The Current State of US Regulation of the Use of AI in Dispute Resolution. Four themes ring through their article and the extensive citations therein – the need for transparency and disclosure as to the use of the tools; human oversight of the process; counsel’s verification that the results are accurate; and appropriate limitation of its use. While not mentioned specifically in the article, arbitrators and counsel appearing before them need to familiarize themselves with the rules or guidelines of the relevant arbitral forum, see, e.g., Silicon Valley Arbitration and Mediation Center, Guidelines on the Use of Artificial Intelligence (AI) in International Arbitration (1st Ed. 2024), available at SVAMC-AI-Guidelines-First-Edition.pdf; American Arbitration Association, Principles Supporting the Use of AI in Alternative Dispute Resolution (2024), available at AAAi lab principles press release.indd.
Where there is innovation, disputes follow. The article also addresses JAMS’s new rules covering disputes regarding AI systems. Mr. Garrie was a co-creator of those procedures.
There is ongoing speculation as to whether AI will eventually replace human mediation as a method for resolving disputes or, at least, will help the parties and mediator find solutions which they might not discover themselves. In Ringort and Sela, An Information Flow Model of Online Mediation: Jeopardizing Privacy and Autonomy in the Shadow of Innovation, 25 Cardozo J. Conflict Resolution 443 (Summer 2024), available at Ringort+&+Sela-.pdf, the authors discuss two risks associated with using any such platforms. The first and more readily recognized concern is that the decision engine, in building its ever-growing library of information, might impinge on confidentiality by utilizing information in ways of which the parties would not approve. Ringort and Sela also go into a less-considered area, however, addressing the risk that AI may skew a party’s decision-making by promoting certain solutions through the analytical operations of the AI system. To manage these risks, the authors propose several “norms” to “protect confidentiality of the process and the self-determination of the parties.” The article is exhaustively researched.[1] Ringort is a PhD student and research fellow at Bar-Ilan University; Sela teaches law there and is an Innovation Fellow at Stanford Law School. Their analytical bent is evident in the work. Nor is it a casual read; the data-driven approach requires the reader’s close attention. But everyone who is considering any use of AI in decision-making needs to at least skim the article to understand the world into which they would be entering. Those who want to learn more about the issues AI raises in mediation can use the article and its extensive citations as a starting point for their own thinking.
As I was writing this yesterday, I was looking out the window as the day lengthened here in Connecticut. I thought of the contrast between that pastoral scene and the devastation so many are facing in Los Angeles. Please support the organizations dedicated to providing food, shelter, health care, and emotional support to those affected by the fire storms. The need is so great.
David Reif, FCIArb
Reif ADR
Dreif@reifadr.com
[1] There are 203 footnotes.