The Biggest Challenges in Law That AI Can Help Us to Solve
Tobias Jensen writes the Futuristic Lawyer newsletter. In his words: “On Futuristic Lawyer I try to make sense of the grey areas between law, tech, and ethics. Each Tuesday morning (Central European Time) I send out a newsletter where I share legal and ethical considerations about new developments in tech. I usually write long essays (1,500+ words), often about heavy topics, but I do my best not to be boring.
If you have any comments or questions, please don’t hesitate to reach out. I am always looking for sparring, feedback, or potentially new partnerships.”
Tobias and I had a very fun conversation over here, and I was struck by the high quality of his questions. That is why I invited him to feature on this newsletter and share his insights on the AI for Law space. This article, combining Tobias’s perspectives as both a lawyer and a technologist, has a lot of valuable insights on the unique challenges associated with law, where AI can help, and where tools can fall prey to deceptive marketing. I’m sure it will provide a lot of value to you.
If you like this article, please consider becoming a premium subscriber to AI Made Simple so I can spend more time researching and sharing information on truly important topics. We have a pay-what-you-can model, which lets you support my efforts to bring high-quality AI Education to everyone for less than the price of a cup of coffee.
I provide various consulting and advisory services. If you’d like to explore how we can work together, reach out to me through any of my socials over here or reply to this email.
In today’s post, we will try to answer the question: What is the biggest challenge in law that AI can help us solve?
Before I give my attempt at answering, I’ll share a few reflections about the court system to set the scene.
The Court System
The court system undertakes a vitally important function in society. It serves as a central governance mechanism and provides citizens with access to justice.
Without laws and an authority to interpret and apply those laws, it would be extremely difficult to coordinate activities across a society or do business with strangers. In general, we could never trust that other people were playing by the same rules as we are.
Further, and much worse, it would be impossible to maintain a democracy without courts. We have witnessed time and time again throughout history that if the justice system is weak or dysfunctional, the tyrants become rulers (or the rulers become tyrants).
The court system should help to keep decision-makers in check and safeguard against the abuse of power. Access to justice is so important that I believe — without data to back it up — we can find a strong correlation between the fairness and independence of the court system in a given area and the general quality of life and well-being of its populace.
As a result of the fundamentally important function the court system fulfills in a society, judicial processes can oftentimes be painfully slow and cumbersome, not to mention expensive. That is because the process of seeking justice in the courts is embedded in tradition, rules, norms, and rituals. Trying to change that and make the court system more efficient to keep up with the speedy pace of the modern internet era is likely an impossible task. And that’s how it should be. Sacrificing or trying to optimize some of the formal procedures would jeopardize fairness, the right to due process, and the legitimacy of the system.
We can contrast the slow process of seeking justice with the fast-moving pace of tech. One of my favorite quotes is by Andrew Grove, former CEO of Intel. He said:
“High tech runs three times faster than normal businesses, and the government runs three times slower than normal businesses. So, we have a nine-times gap.”
We can likely put the court system in the same category as governments. However, I doubt that a rhetorical nine-times gap even comes close to covering the difference in speed between the public sector and high tech. Comparing the speed of AI development over the last two years with court cases like The New York Times v OpenAI and Microsoft, RIAA v Suno and Udio, or The Authors Guild v OpenAI, it sounds more accurate to say that tech companies are moving 9,000 (nine thousand) times faster than the courts.
If and when the courts reach a decision in cases like the three mentioned above, AI technology could be in a whole new place. To speculate wildly: the Big Tech companies could have run out of data, power, or energy to train increasingly larger models, or the ethically dubious practice of using unlicensed data to train AI models could have changed for the better due to public resistance.
The law could influence the development of AI in several directions. I wrote more about it here. An equally, if not more, important question is how AI can help us optimize slow and cumbersome judicial processes without sacrificing the procedural rules and democratic principles that contribute to shaping modern society.
The New Legal AI Tools
Right now, the greatest beneficiaries of AI’s productivity and time-saving benefits are law firms and legal departments in medium to large enterprises, mainly in English-speaking countries, as the market for AI-driven legal tech products is much more advanced in the US and the UK than in the rest of the world. There are literally hundreds of top-notch AI legal tools to choose from.
Harvey has received a lot of attention for its large funding rounds, secrecy, and early backing from OpenAI’s startup fund (I think it’s a red flag that the company is named after Harvey Specter from the TV series Suits, but maybe that’s just me). Harvey has raised $206 million over three funding rounds and is valued at $1.5 billion. It offers an AI assistant that can be integrated into “all” (it claims) legal workflows, such as drafting contracts, analyzing text, answering questions, and doing legal research.
Harvey faces stiff competition from Luminance, Casetext, Draftwise, Spellbook, Robin AI, and Dioptra AI, among many others. All of these tech companies are scrambling to build the best legal copilot and ultimately the first artificial lawyer that customers can hire to save on lawyer fees. See my deep dive on the legal copilot concept here.
In a report by Wolters Kluwer from November 2023, a survey of 700 lawyers from the US and Europe showed that 73% planned to use generative AI in their legal work within the next year. Among the major law firms and companies with big legal departments, I would be surprised if 100% of them were not already experimenting with generative AI in some workflows today.
How Can Legal AI Tools Help Lawyers?
A common adage among legaltech enthusiasts is that AI-driven tools will take care of routine, monotonous tasks so lawyers can focus more on the strategic, high-value work that makes a really big difference to their clients.
Legaltech companies that are building copilots for lawyers are typically striving to recreate the success of AI coding tools such as GitHub Copilot and Cursor that have now become indispensable for many Product/Development teams all over the world.
The main task all aspiring legal copilots are trying to solve is contract review. On the surface, contract review is comparable to coding: both disciplines are rule-based, follow common standards, norms, and patterns, and try to describe a specific, desired outcome.
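To make the contract-review use case concrete, here is a minimal, hypothetical sketch of the pattern these copilots automate: compare a clause against a firm’s negotiation playbook and flag deviations for a human lawyer. It is not how Harvey, Dioptra, or any named product works internally; the model name, prompt, and sample clause are all illustrative assumptions.

```python
# Hypothetical sketch of an LLM-based contract-review step, not any vendor's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CLAUSE = """
The Supplier may terminate this Agreement at any time without notice.
The Customer shall indemnify the Supplier against all claims of any kind.
"""

PLAYBOOK = """
- Termination requires at least 30 days' written notice from either party.
- Indemnities must be mutual and capped at 12 months of fees.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; swap in whatever model you actually use
    messages=[
        {
            "role": "system",
            "content": (
                "You are a contract-review assistant. Compare the clause against the "
                "playbook, list each deviation with a short explanation and a suggested "
                "redline, and do not invent facts or cite law you cannot verify."
            ),
        },
        {"role": "user", "content": f"Playbook:\n{PLAYBOOK}\n\nClause:\n{CLAUSE}"},
    ],
)

# The output is a draft issues list for a lawyer to review, not a final answer.
print(response.choices[0].message.content)
```

Even in this toy form, the sketch shows why the coding analogy only holds on the surface: the playbook encodes a few bright-line rules, but whether a deviation actually matters depends on business context the prompt never sees.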
When you research the booming market for legal copilots, you sometimes come across rather shocking claims.
For example, the Y Combinator-backed Dioptra AI makes the eye-catching claim that its tool is 95% “consistently accurate” at detecting issues in first-party contracts (agreements made between two parties). No documentation for the claim is provided.
In a similar vein, a study conducted by the software company Onit (I wrote about the study here) measured the performance of GPT-4-32K and Claude 2.1 against junior lawyers and legal process outsourcers (LPOs) on the task of identifying legal issues in a contract, benchmarked against the answers of senior lawyers.
The study came to the startling conclusion that the LLMs performed at the same level of accuracy as the trained lawyers (the LPOs) and exceeded the performance of the junior lawyers. Again, no documentation was provided.
I will dismiss both Dioptra’s and Onit’s claims as misleading, and not just because they lack documentation. What the claims miss is that law is not so much a science as it is a humanities discipline. In my experience, factually right or wrong answers are not that common in a business law context; most relevant issues are up for debate.
Reviewing and negotiating contracts requires intimate knowledge of a business’s operations, mission, culture, risk profile, and more. Despite what it may seem like, this knowledge is difficult to represent in a matrix or an algorithm, or to streamline according to certain standards.
The legal copilots that will succeed should be developed and branded with a focus on the time savings and productivity benefits they can provide to lawyers, not as golden-answer machines that can outperform lawyers in accuracy. We still need humans behind the steering wheel, and we don’t want to give lawyers the impression that they are training their own replacements.
The Impact of Legal AI Tools
Legal copilots will inevitably drive down the price of legal services and make legal knowledge more accessible to non-lawyers.
The time savings generative AI offers to law firms and companies will naturally spill over to clients and customers. In some jurisdictions, this is mandated directly by the law. For example, the guidance of the American Bar Association on generative AI tools refers to case law and concludes:
“GAI tools may provide lawyers with a faster and more efficient way to render legal services to their clients, but lawyers who bill clients an hourly rate for time spent on a matter must bill for their actual time.”
AI will also raise the requirements for expertise and skills among lawyers. ChatGPT is certainly not suited to serve the role of a legal advisor, but we can expect new, specialized legal AI tools to enter the market that are. One project worth keeping an eye on is Safe Sign, a team of lawyers and technologists from Cambridge, Harvard, Oxford, DeepMind, and Lenovo that has built an independent LLM that surpasses GPT-4’s benchmark performance on legal reasoning tasks.
Law firms are facing substantial disruption and transformation because of generative AI, but even the court systems are, and will be, affected by ripples of the technology’s impact.
The UK Courts and Tribunals Judiciary has issued guidelines to judicial office holders on the responsible use of generative AI. The guidelines identify AI as potentially useful for summarizing large bodies of text (if care is taken to ensure the summaries are accurate), for helping to write presentations, and for administrative tasks such as composing emails and memoranda. Tasks not recommended for generative AI include legal research to find new information that cannot be independently verified, and legal analysis, since current AI tools do not produce convincing analysis or reasoning.
An example from the real world of a court system embracing the technology is Brazil’s Attorney General’s Office (AGU for its acronym in Portuguese), which adopted GPT-4 last year through Microsoft’s Azure cloud-computing platform. At the time of adoption, AGU was reviewing 20 million lawsuits, along with 10,000 summonses and 80,000 subpoenas coming in each day on average. GPT-4 is used to speed up the screening and review of all these lawsuits and court documents. Eduardo Lang, director of AGU’s Legal Intelligence and Innovation Department, said in March 2023 about AGU’s use of GPT-4:
“The assistant has the role of generating summaries and analysis of processes, which can often be long and have several pages and assist in the preparation of appeals and decisions. AI is being used to become a copilot of our team.”
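AGU’s actual pipeline is not public, but conceptually the screening step Lang describes looks something like the minimal sketch below against the Azure OpenAI service. The deployment name, environment variables, API version, and prompt are all assumptions made for illustration, not details from AGU or Microsoft.

```python
# Illustrative sketch only: AGU's real system is not public.
# Assumes an Azure OpenAI resource with a GPT-4 deployment named "gpt-4",
# with the endpoint and key supplied via environment variables.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

def summarize_filing(document_text: str) -> str:
    """Produce a short screening summary of a court document for human triage."""
    response = client.chat.completions.create(
        model="gpt-4",  # the Azure *deployment* name, assumed here
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the following court document in a few bullet points: "
                    "parties, claims, deadlines, and requested relief. Flag anything "
                    "you are unsure about instead of guessing."
                ),
            },
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

# A human reviewer still reads the summary alongside the source document:
# print(summarize_filing(open("filing.txt").read()))
```

The point of the sketch is the division of labor: the model compresses long filings into a triage summary, while the decision about what to do with each case stays with the legal team.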
A use case like this one may sound incredible on paper. However, due to the critical and sensitive function of the court system and substantial concerns regarding security, data confidentiality, and hallucination risks, there is a hard cap on just how influential AI can be in a court setting. For ethical, technical, and legal reasons, AI should never supplant human decision-making power or take the driver’s seat, especially considering how flawed and inaccurate the output of even leading AI legal tools tends to be.
Concluding Thoughts
Generative AI will make a substantial impact on the legal industry and even influence the processes of courts. Most importantly, the technology will help solve some of the age-old challenges citizens have faced for centuries: expensive lawyer fees, painfully slow case processing times, and a lack of access to legal knowledge and understanding.
The biggest challenge in law that AI can help us solve is the expensive, slow, and cumbersome court processes that have historically prevented many people from accessing justice. However, the court system is slow-moving and extremely heavy machinery for a very good reason: it needs to adhere to many formal procedures to live up to its role as a central governing body in a democratic society.
In the years ahead, I expect that generative AI will drive down prices for legal services, help lawyers and courts optimize certain processes and workflows — without replacing legal professionals — and that specialized AI tools will become available to anyone who needs initial support with a legal dilemma.
By no means is generative AI a miracle cure. Hallucination risks, along with security and data confidentiality concerns, call for tremendous caution and common sense when using and implementing AI tools. Whereas tech companies can afford to experiment, make mistakes, and put the cart before the horse, lawyers and court systems are bound by much higher ethical standards, which — for sound reasons — limit the potential impact of the new technology.
If you liked this article and wish to share it, please refer to the following guidelines.
That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow. You can share your testimonials over here.
I put a lot of work into writing this newsletter. To do so, I rely on you for support. If a few more people choose to become paid subscribers, the Chocolate Milk Cult can continue to provide high-quality and accessible education and opportunities to anyone who needs it. If you think this mission is worth contributing to, please consider a premium subscription. You can do so for less than the cost of a Netflix Subscription.
Many companies have a learning budget, and you can expense your subscription through that budget. You can use the following for an email template.
I regularly share mini-updates on what I read on the microblogging sites X (https://twitter.com/Machine01776819), Threads (https://www.threads.net/@iseethings404), and TikTok (https://www.tiktok.com/@devansh_ai_made_simple) - so follow me there if you’re interested in keeping up with my learnings.
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
Check out my other articles on Medium: https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819