- AI models like ChatGPT and GPT-4 are adept at core legal tasks like parsing complex documents, searching laws, and pulling relevant evidence. This could automate repetitive legal work.
- However, these models still make mistakes and need human oversight. Lawyers are accustomed to reviewing others’ work, but building trust in AI poses challenges.
- AI adoption in law has been slow so far, but new advances may accelerate it. There are still risks around bias, expertise erosion, and overconfidence in unreliable technology.
- The impact could be a boost in productivity but also job losses, though we likely have some time before widespread displacement of attorneys.
New advances in artificial intelligence tend to be followed by anxieties around jobs. This latest wave of AI models, like ChatGPT and OpenAI’s new GPT-4, is no different. First, there was the launch of the systems. Now, there are predictions of automation.
In a report released this week, Goldman Sachs predicted that AI advances could cause 300 million jobs, representing roughly 18% of the global workforce, to be automated in some way. OpenAI also recently released its own study with the University of Pennsylvania, which claimed that ChatGPT could affect over 80% of the jobs in the US.
The numbers sound scary, but the wording of these reports can be frustratingly vague. “Affect” can mean a whole range of things, and the details are murky.
People whose jobs deal with language could, unsurprisingly, be particularly affected by large language models like ChatGPT and GPT-4. Take lawyers as an example: examining how the legal industry is likely to be affected by new AI models turns up as much cause for optimism as for concern.
The antiquated, slow-moving legal industry has been a candidate for technological disruption for some time. In an industry with a labor shortage and a need to deal with reams of complex documents, a technology that can quickly understand and summarize texts could be immensely useful. So how should we think about the impact these AI models might have on the legal industry?
First off, recent AI advances are particularly well suited for legal work. GPT-4 recently passed the Uniform Bar Exam, the standardized test required to license lawyers. However, that doesn’t mean AI is ready to be a lawyer.
The model could have been trained on thousands of practice tests, which would make it an impressive test-taker but not necessarily a great lawyer. (We don’t know much about GPT-4’s training data, because OpenAI hasn’t released that information.)
Still, the system is very good at parsing text, which is of the utmost importance for lawyers.
“Language is the coin of the realm in the legal industry and in the field of law. Every road leads to a document. Either you have to read, consume, or produce a document … that’s really the currency that folks trade in,” says Daniel Katz, a law professor at Chicago-Kent College of Law who conducted GPT-4’s exam.
Secondly, legal work has lots of repetitive tasks that could be automated, such as searching for applicable laws and cases and pulling relevant evidence, according to Katz.
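To make that concrete, here is a deliberately toy sketch of the kind of repetitive lookup task Katz describes: ranking a pile of case snippets against a query and surfacing the most relevant ones. This is purely illustrative, with invented example data; real legal-research products use far more sophisticated retrieval than word overlap.

```python
# Toy relevance search over case snippets -- an illustration of the
# repetitive "find the applicable material" task, not a real system.

def score(query: str, document: str) -> int:
    """Count how many query terms appear in the document (toy metric)."""
    doc_words = set(document.lower().split())
    return sum(1 for term in query.lower().split() if term in doc_words)

def search(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents ranked by the toy relevance score."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

# Invented example snippets, for illustration only.
cases = [
    "Tenant withheld rent after landlord failed to repair heating.",
    "Driver found negligent after running a red light.",
    "Landlord sued tenant for unpaid rent and property damage.",
]

results = search("landlord rent dispute", cases)
```

Here the two landlord-tenant snippets outrank the unrelated traffic case; a language model adds value over this kind of keyword matching precisely because it can judge relevance from meaning rather than shared words.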
One of the researchers on the bar exam paper, Pablo Arredondo, has been secretly working with OpenAI since this fall to use GPT-4 in its legal product, Casetext. Casetext uses AI to conduct “document review, legal research memos, deposition preparation and contract analysis,” according to its website.
Arredondo says he has grown more and more enthusiastic about GPT-4’s potential to assist lawyers as he has used it. He says that the technology is “incredible” and “nuanced.”
AI in law isn’t a new trend, though. It has already been used to review contracts and predict legal outcomes, and researchers have recently explored how AI might help get laws passed. Recently, the consumer rights company DoNotPay considered having its AI, dubbed the “robot lawyer,” argue a case in court by delivering AI-written arguments to a defendant through an earpiece. (DoNotPay did not go through with the stunt and is being sued for practicing law without a license.)
Despite these examples, these kinds of technologies still haven’t achieved widespread adoption in law firms. Could that change with these new large language models?
Third, lawyers are used to reviewing and editing work.
Large language models are far from perfect, and their output would have to be closely checked, which is burdensome. But lawyers are very used to reviewing documents produced by someone—or something—else. Many are trained in document review, meaning that the use of more AI, with a human in the loop, could be relatively easy and practical compared with adoption of the technology in other industries.
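The human-in-the-loop pattern described above can be sketched in a few lines: nothing a model drafts is released until a named reviewer signs off. The class and function names here are assumptions for illustration, not any firm's or vendor's actual system.

```python
# Illustrative human-in-the-loop workflow: an AI-generated draft must be
# approved by a human reviewer before it can be released.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def review(draft: Draft, reviewer: str, approve: bool) -> Draft:
    """Record a human reviewer's decision on an AI-generated draft."""
    draft.reviewer = reviewer
    draft.approved = approve
    return draft

def release(draft: Draft) -> str:
    """Refuse to release any draft a human has not signed off on."""
    if not draft.approved:
        raise ValueError("Draft requires human approval before release.")
    return draft.text

# Hypothetical usage: a model-drafted memo gated behind attorney review.
memo = Draft(text="Summary of precedent on contract rescission ...")
review(memo, reviewer="J. Associate", approve=True)
```

The point of the design is that the gate lives in the release step, so skipping review is a hard error rather than an easy habit.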
The big question is whether lawyers can be convinced to trust a system rather than a junior attorney who spent three years in law school.
Finally, there are limitations and risks. GPT-4 sometimes makes up very convincing but incorrect text, and it will misuse source material. One time, Arredondo says, GPT-4 had him doubting the facts of a case he had worked on himself. “I said to it, You’re wrong. I argued this case. And the AI said, You can sit there and brag about the cases you worked on, Pablo, but I’m right and here’s proof. And then it gave a URL to nothing.” Arredondo adds, “It’s a little sociopath.”
Katz says it’s essential that humans stay in the loop when using AI systems and highlights the professional obligation of lawyers to be accurate: “One should not just take the outputs of these systems, not review them, and then give them to people.”
Others are even more skeptical. “This is not a tool I would trust with making sure important legal analysis was updated and appropriate,” says Ben Winters, who leads the Electronic Privacy Information Center’s projects on AI and human rights. Winters characterizes the culture of generative AI in the legal field as “overconfident, and unaccountable.” It’s also been well-documented that AI is plagued by racial and gender bias.
There are also the long-term, high-level considerations. If attorneys have less practice doing legal research, what does that mean for expertise and oversight in the field?
But we are likely still a while away from that, for now.
This week, the editor at large of Tech Review, David Rotman, wrote a piece analyzing the new AI age’s impact on the economy—in particular, jobs and productivity.
“The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth.”