Cases were slim yesterday – in fact, virtually non-existent. So, I’ll use this “Highlights” to point you to some articles on the issue of the day. Will AI put arbitrators out of business?
Artificial Intelligence and Arbitration
The discussion over Artificial Intelligence takes me back to the scene in the 1968 movie, 2001: A Space Odyssey, where HAL, the AI operating system running the spacecraft, decides to kill the astronauts because it figures that they are jeopardizing the mission. Maybe because one of the astronauts was David Bowman, I get a little paranoid about all this. For great dialogue between HAL and David, go to https://www.imdb.com/title/tt0062622/quotes/. So, out of curiosity, I decided to look at some literature on AI and arbitration. Here’s a sampling. The articles are all deeply analytical, heavily footnoted, and well worth reading. But don’t expect to skim any of them; they will take time to absorb. All of the articles are available on LEXIS.
Horst Eidenmuller and Faidon Varesis, professors at the University of Oxford, tee up the basic question in Article: What Is an Arbitration? Artificial Intelligence and the Vanishing Human Arbitrator, 17 N.Y.U. J.L. & Bus. 49 (Fall 2020). They argue that “there is nothing in the concept of arbitration that requires human control, governance, or even input.” They further argue that the “existing legal framework” of the New York Convention “is capable of adapting to and accommodating fully AI-powered arbitrations.”
Dimitrios Ioannidis, a Boston-based attorney and arbitrator, takes a similar tack in Article: Will Artificial Intelligence Replace Arbitrators under the Federal Arbitration Act?, 28 Rich. J.L. & Tech. 505 (2022). Much of the article looks back to the struggles over the development of the then-innovative FAA. Attorney Ioannidis argues that, just as lawyers adjusted to the FAA, institutions, “such as the American Arbitration Association,” will “eventually be forced to incorporate an AI arbitrator platform as part of its suite of available services,” and that, because of client demands, lawyers will have to go along. The article’s conclusion is encapsulated in one sentence: “I [the author] can imagine a DABUS-like arbitrator that can know that it is thinking or creating inventions but also having a potentially powerful stream of consciousness that can logically evaluate factual and legal patterns in resolving disputes.” Like Eidenmuller and Varesis, he concludes that there is nothing in the existing rules structure, and specifically nothing in the FAA, that would prohibit the parties from designating an AI system as their arbitrator. The article dives deeply into technology, so be ready to test your engineering mettle as you read it.
Gizem Halis Kasap, an attorney licensed in New York and Istanbul, is less willing to leap into the AI arbitration world than the previous two authors. In Article: Can Artificial Intelligence (“AI”) Replace Human Arbitrators? Technological Concerns and Legal Implications, 2021 J. Disp. Resol. 208 (Spring 2021), she argues that implementation of such systems requires more study. More than the authors of the two articles cited above, Kasap looks at the non-technical aspects of AI decision making. While she acknowledges that existing rules may allow systems to act as arbitrators, she argues that those rules “assume” that arbitrators are “mortal.” For example, she points out that the FAA uses the pronouns “he” and “they;” the English Arbitration Act of 1996 stipulates that an arbitrator’s authority ceases “on his death.” In one of the most interesting portions of the article, Kasap addresses due process concerns in using machine learning to decide cases. For example, to what degree can we ensure that the system’s training data was not biased and did, in fact, “represent the real world?” Further, in light of the “black box” nature of an AI system’s reasoning and the proprietary nature of such programs, can counsel or the courts discover or establish such bias? Finally, can AI produce novel results or does it “only mimic existing thought patterns or combine them to produce what appears to be a novel outcome?” (Emphasis added).
Cole Disney argues that arbitration is an “inherently human process” and critiques the ability of AI to undertake the non-linear reasoning that such a process requires. In Article: Hypothetical AI Arbitrators: A Deficiency in Empathy and Intuitive Decision-Making, 13 Arb. L. Rev. 126 (2021), he cites studies that “have found emotions are essential to human cognition, and that emotions play a pivotal role in rational decision-making.” However, AI decision-making “currently is without an intuitive element.” (Remember that this is a three-year-old article in a fast-moving world.) It may not be able to view the dispute from the perspective of both parties. As such, it may lack the empathy needed to reach results that are acceptable to real people. He closes by comparing results from a human arbitrator and a hypothetical deep learning system in two cases. One example involves resolving a contract case in which the parties’ actual understanding of the transaction and their intentions regarding the deal were relevant. He argues that, to the extent the case merely involves interpreting a contract, AI can do well. However, absent human empathy, it becomes far more difficult for a machine to determine intent and actual understanding. In the second case, involving the law of multiple jurisdictions, he argues that the factual situation may simply be too complicated for a system to learn without ongoing adjustment during the arbitration. He concludes, however, that, as AI develops, the ability to empathize may eventually be built into the system.
These are great reads on a contemporary subject. Next time you’re on a plane or killing time, throw them in your briefcase or backpack. You’ll be rewarded.
David A. Reif, FCIArb
ReifADR
Dreif@Reifadr.com
ReifADR.com