By Jeanne Somma, Director, Analytics and Managed Review
When I first joined RVM, I had a meeting with a law firm client that was supposed to focus on how RVM could use technology to cut costs for the firm and ultimately their clients. What it turned into, though, was a frank conversation about the firm’s reluctance to rely on technology without something more to back it up. The client put a name to that “something more,” calling it “legal legs.” His frustration was that technology companies are selling tech to lawyers without the proper legal legs to warrant complete adoption.
At the time I found that to be an interesting and frustrating stance. There are case law, metrics, and a whole host of other things that I believed to be the legal legs necessary to back up the use of technology. I left that meeting with a sticky note that simply read “tech needs legal legs” and a fear that I wouldn’t be able to bridge the gap between what is possible when using technology and what is acceptable to the lawyers who use it.
It’s no surprise that many of the conversations and presentations at this year’s Legalweek centered on arguably the most polarizing technology, Artificial Intelligence (AI). The use of the term in the legal sector has grown exponentially, and at Legalweek the talk seemed to reach a fever pitch. But even as session after session described the amazing things that AI can do for the legal community, I wondered whether the mass adoption promised by presenters is reasonable in the current environment. Put another way, we need to focus on evolving the conversation, rather than just evolving the technology.
One of the sessions that I was excited to attend was The AI Bootcamp. There the conversation was more of the same, with unfailing optimism that not only could the technology solve the problems of the legal sector, but that the sector would soon embrace it for those reasons. The feeling in the room was one of wide agreement that AI would be adopted and would permeate the legal sector. That is, until the questions started.
The questions from the audience dissected the great divide between the technology’s capabilities and an attorney’s faith that the result will be the same or better than if they had used the processes they are accustomed to. With each comment from the practicing attorneys in the room, I was reminded more and more of that sticky note in my office – “legal legs.” The technology is ready, but it seems the lawyers may not be.
As I listened to the dialogue between the adopters and the more reticent participants, the real difference of opinion boiled down to defensibility. Some attorneys were finding it hard to rely on a system that made decisions using algorithms that were not easily articulated. These attorneys wanted to be able to tell a judge or opposing party that the results of an AI exercise would be the same or better than if they had done it without the technology. How can they do that when the machine is doing the thinking?
Looking at my notes and seeing the word “artificial,” I realized that that was my stumbling block. It’s the implication that the results are being driven by a machine, which is not accurate. The type of technology that we use across the legal sector – whether in contract analysis, legal research, or predictive coding – is meant to take human understanding and amplify it in order to provide the user with an efficient way to get to their result. The process is the same – a human with the required intelligence on the subject must train the machine on their thought process and desired results. The machine then simply takes that knowledge and applies it across large data sets faster than the human could. The machine isn’t deciding anything that it wasn’t already told by the human. What it does do is amplify the human’s intelligence. In other words, people are still driving the results, but with assistance from the technology.
Termed amplification rather than artificial, the process loses its mystique. We bring the real definition to light – a process that leverages human intellect to train a machine to do the resulting work more quickly. The result is the same because the human’s intelligence and input at the outset is the same. So is the ability to check the work. The only thing that’s changed is the speed with which the process can be completed.
As technology and legal service providers, we need to change the conversation. We need to focus on the amplification power of AI solutions and the defensibility of relying on the human subject matter expert. Until we can show the legal community that AI is not artificial at all, we will continue to have this battle between capabilities and adoption.
I for one want to solve this problem – if only so I can finally throw out that sticky note.