For years it’s been promised that artificial intelligence would lead to a revolution in law. But will AI change how people access legal services? And could it really replace human lawyers?
AI and robotics expert Professor Kristen Thomasen weighs in.
The use and popularity of AI chatbots like ChatGPT have seemingly exploded over the past few months. Will this actually be a revolution in the way we work – in the legal profession or elsewhere – or is this mostly hype?
There is a lot of hype right now, partly driven by the fact that the scale of some of these large data-driven chatbot systems is fairly new. And with that larger scale come new possibilities, both for benefit and for harm.
The hype can help catch the attention of people who wouldn’t otherwise have been thinking about the issues with chatbots, or with AI more generally, but it’s really important to frame those issues accurately – which isn’t always what we see in the promotion or media coverage of the technologies coming out right now. It’s good to question and be critical of the hype. We need to think more about who is developing these technologies and for what uses.
Should we be concerned about the use of chatbots or machine learning, particularly in a legal context?
Chatbots have existed for a long time, and there’s over a decade of legal scholarship on the ethical and legal concerns these tools raise. Computer scientist Joseph Weizenbaum, an early investigator of human-chatbot interaction, created ELIZA, widely considered the first chatbot, in the 1960s – but he later became critical of the way chatbots could be used to manipulate people.
Research has repeatedly shown that when a chatbot or machine learning system is trained on data about people, it can replicate the biases that exist in that data, such as repeating sexist, misogynistic, or racist tropes. If the people who design or use these systems aren’t attuned to those concerns, the default is to reproduce the status quo. We’re basically saying, “quantify the world as it is right now, or as it has been in the past, and continue to repeat that, because it’s more efficient.”
In one example from the United States, courts used a machine learning system to assess the risk that a defendant would commit another criminal offence. These assessments can affect sentencing in a criminal trial, and research demonstrated that racial bias was deeply entrenched in the system: Black defendants received longer sentences than white defendants, including white defendants who later committed more severe crimes.
The system also didn’t explain its recommendations, raising the risk that the judges reviewing them would simply defer to the machine because of a perception that it was unbiased or more accurate.
There are increasing calls for better regulation of AI, or even a pause in its development. Would a pause be prudent, and could new laws provide safeguards?
There’s a lot of work that laws can do to respond to concerns about AI systems – though there is also a perception among law- and policy-makers that innovation is almost inherently beneficial and that we need to allow it to happen. So I’m a bit skeptical about whether that work will actually be done through the law.
One area where the legal system should have a role is in proactively mitigating foreseeable harms. Laws can establish clearer structures around how AI is used, including in administrative decision-making within government. These tools can be given the power to deny someone benefits, for example, which can be utterly life-destroying.
I’d like to see more legal scaffolding around how AI systems are used, and that could include a pause or moratorium on the development or use of certain kinds of technologies, especially in particular contexts. For example, there are calls for a ban on facial recognition systems, which are an anonymity-destroying technology.
I’m not saying “do not create systems that can parse through data and identify patterns or insights,” but there need to be strong boundaries and limits on when and how that kind of system can be used, along with human accountability, recourse, and oversight.
Looking ahead 10 or 20 years, could AI ever replace lawyers?
I don’t think a computer system can ever truly replace the work of a lawyer. It can aid that work, but lawyering is also interpersonal and relational, so I don’t see a computer system replacing it. Wealthy people will almost certainly continue to benefit from human lawyers and the more comprehensive, hands-on approach an actual lawyer can provide.
That said, some lawsuits now involve so much material that no team of articling students or lawyers could get through it all. AI systems could help reduce the human effort needed to review documents and prepare for legal action. Some law firms are already building their own in-house AI tools, which can improve aspects of legal work while maintaining client confidentiality.
But in a lot of instances, what we’re seeing is more hype than reality, and many systems are more limited than they’re being sold as. There’s an associated risk of shifting public policy based on technologies that won’t pan out in the ways they’ve been promised.
For example, it concerns me that the growing number of systems that purport to help individuals with their legal claims could become a justification for governments to stop investing in legal aid and in making sure human lawyers are accessible. People who can’t afford lawyers could be stuck with automated systems that aren’t relational, don’t explain themselves, and might not be accurate. Under the guise of improving access to justice, we’d be deepening the access-to-justice crisis. I hope this doesn’t come to pass.