Debunking the AI legal revolution: What’s really happening?
A reality check on legal tech
There has been a great deal of talk about AI revolutionising businesses and professions, including bold claims that the technology could eliminate lawyers altogether. But does the hype match the reality?
One of the first widely reported tests of AI in the law came to light in 2023, when lawyers in an airline liability case submitted a list of previously decided cases (lawyers call them precedents) to a court in New York.
The move was in support of the lawyers’ argument that their case shouldn’t be struck out. The judge and the defendant’s lawyers struggled to find the cases cited. Why? Because ChatGPT had made them up.
Partly as a result of this case, the New York State Bar Association (NYSBA), an organisation established in 1876 that represents around 74,000 lawyers around the world, set up a special AI Task Force to look at the impact of AI in the law.
“The hype did not match the reality”
Given the need for urgent action, the Task Force, a combination of judges, academics and practitioners, worked quickly, issuing its 92-page report on the impact of AI on the law in spring 2024.
For full disclosure, I sat on the Task Force. We looked at the evidence that existed on the use of AI in the law. Our report found that the hype did not match the reality, citing academic research which found that Large Language Models (LLMs) give wrong answers at least 75% of the time when asked questions about a court's core ruling.
One of the issues is the data GenAI trains on. Identifying which precedents and legal opinions are authoritative is quite an exact science, and it seems GenAI models are not trained on the right data. There are rumours that some GenAI is trained in part on Reddit chat discussions – and you don't need to be an experienced lawyer to know that that's not the best place to start.
This is an issue recognised by the courts in England and Wales, which issued Judicial Guidance on AI in December 2023. The guidance warned: "Public AI chatbots do not provide answers from authoritative databases. They generate new text using an algorithm based on the prompts they receive and the data they have been trained upon."
The gap between hype and reality was also highlighted by a recent intervention by the Federal Trade Commission (FTC) in the US against DoNotPay, a company operating an online service in the UK and USA that claims to help with legal disputes, including challenging parking tickets, and bills itself as "the home of the world's first robot lawyer."
In 2021, DoNotPay reached a valuation of £210m, and the company's founder has claimed that more than 250,000 parking tickets were challenged using DoNotPay in London and New York. But DoNotPay has itself now been the subject of litigation in the US, with allegations that it misled customers and misrepresented its product.
In September 2024, the FTC began enforcement action against the company, saying its marketing did not match reality. DoNotPay agreed to penalties, including paying a fine to the FTC.
So it's fair to say that early attempts to build a consumer AI product to streamline the law haven't gone well. But there are AI products that genuinely help lawyers and the legal system. They include eDiscovery tools, which, in the right hands, can sift thousands of documents at a time to support litigation or investigations. AI can also help in other areas, such as property deals and due diligence for corporate transactions.
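To make the eDiscovery idea concrete, here is a deliberately simple sketch in Python of the principle behind it: score each document against a set of search terms and surface the likeliest matches for human review. It is a toy illustration, not any vendor's actual product, and every file name and document in it is hypothetical.

    # Toy illustration of relevance filtering, the basic idea behind
    # eDiscovery: score documents against search terms and surface
    # the best candidates for a lawyer to review. All data is made up.
    from collections import Counter

    def relevance_score(text: str, terms: set[str]) -> int:
        """Count how many times any search term appears in the document."""
        words = Counter(word.strip(".,;:").lower() for word in text.split())
        return sum(words[term] for term in terms)

    def sift(documents: dict[str, str], terms: set[str], top_n: int = 3):
        """Rank documents by relevance; return the top matches for review."""
        scored = [(name, relevance_score(text, terms))
                  for name, text in documents.items()]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return [(name, score) for name, score in scored[:top_n] if score > 0]

    if __name__ == "__main__":
        docs = {
            "email_001.txt": "Please review the indemnity clause before the merger closes.",
            "email_002.txt": "Lunch on Friday? The usual place.",
            "memo_017.txt": "Counsel flagged the indemnity and warranty provisions in the draft merger agreement.",
        }
        for name, score in sift(docs, {"indemnity", "merger", "warranty"}):
            print(f"{name}: score {score}")

Real eDiscovery platforms go far beyond keyword counts, using machine learning and what litigators call technology-assisted review, but the human-in-the-loop principle is the same: the software narrows the pile, and the lawyers make the judgment calls.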
AI will definitely improve, particularly if developers can find and access reliable training data. That's not as easy a task as it might sound, and the hunt for good-quality legal data has itself led to litigation. But is AI good enough now to help in court? The jury is certainly out on that.
Jonathan Armstrong is a partner at Punter Southall Law. He serves on the New York State Bar Association’s AI Task Force and sits on the Law Society AI Group.