Risks of Litigants in Person (LiPs) Using AI and ChatGPT for Self-Representation
AI | 21.05.2024
Technology is incredibly helpful in our everyday lives, but there is one rapidly emerging area that needs to be better understood before it is widely adopted in the legal sector: artificial intelligence (AI).
The rapid advancement and use of generative AI tools such as ChatGPT are now causing real concern within the legal sector, especially when these tools are used by litigants in person. You are a litigant in person (LiP) if you go to court without the legal representation of a Solicitor or Barrister. In this article, we will look at the risks of using AI as a litigant in person.
What is the problem?
It is understandable that an individual who is unable to fund the cost of a Solicitor or Barrister to defend them would turn to free tools such as ChatGPT for help. The problem is that even if the answers provided by ChatGPT seem plausible to a layperson, they may not stand up in court when relied on by a litigant in person.
As Megan Shirley, a Senior Lecturer at Nottingham Law School, recently wrote in the Solicitor’s Journal, “How can we be surprised that litigants in person are asking generative AI software such as ChatGPT for help? The computer programme will draft a statement of case or a written submission to court within minutes. The text that is produced will appear to be well-written and structured; it will use legal terminology, refer to statute and case law and it will probably make a persuasive argument. A litigant in person will no doubt believe that they have hit the jackpot when they compare it with what they could have produced themselves”.
Even ChatGPT 4.0 acknowledges its own shortcomings. When asked for the three biggest risks of its use by litigants in person (LiPs), it says:
1. Accuracy and ‘hallucinations’
2. Confidentiality and privacy risks
3. Embedded bias and misjudgements
Let’s look at each in turn.
‘Hallucinations’ and lack of accuracy
It is well known that generative AI tools such as ChatGPT can sometimes produce ‘hallucinations’: responses that appear plausible but are actually completely fictitious. Indeed, this was the case in Harber v. Revenue and Customs Commissioners (2023), in which an AI tool made up legal citations, causing considerable confusion and wasting judicial resources.
Confidentiality and privacy risks
Another AI risk highlighted by the Solicitors Regulation Authority (SRA) in November 2023 concerns confidentiality and security. This is particularly problematic if the details of individuals and their legal cases are entered into public AI tools such as ChatGPT, because this information may then be used as training data to generate responses for other users. This is an absolute ‘no-no’ for law firm staff as it puts the confidentiality of clients at risk, but it also jeopardises the privacy of LiPs using the same public tools. The reality is that, at present, not enough is known about how publicly available AI tools may compromise the security of confidential and personal information.
Embedded bias and misjudgements
The SRA has also raised concerns that AI systems are prone to bias and misjudgements, which typically arise from poor training data. As the SRA explains, “Just as with humans, AI can have biases. If not spotted and corrected, these could lead to unfair or incorrect outcomes. For example, any AI bias in criminal litigation could lead to miscarriages of justice”.
What do the courts think of LiPs’ use of AI?
Judges are very much alert to the use of AI tools and their shortcomings. The Guidance for Judicial Office Holders on Artificial Intelligence (AI), published in December 2023, advised judges to educate themselves on the limitations and potential biases of public AI chatbots. It warns that:
• AI outputs should not be treated as definitive facts, and any citations or research generated through AI tools should be verified.
• AI does not necessarily provide the most accurate answer because it does not have access to answers from authoritative databases.
• Even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased.
• Much of the training data used by AI chatbots is based heavily, though not exclusively, on US law.
Final words
LiPs considering using AI tools such as ChatGPT should recognise the inherent risks of relying on them for self-representation. Unrepresented litigants do not have the legal expertise to critically evaluate the output from AI tools and may inadvertently put themselves in a precarious legal position. In addition to the likelihood of incorrect legal information being produced, there are confidentiality risks when inputting sensitive data into public platforms, as well as the possibility of biased or incomplete advice. AI may play a greater role in law in the coming years, but it does not replace the need for legal expertise from a Solicitor who understands your individual circumstances and needs.
Pearcelegal has a dedicated team of solicitors who provide practical legal advice and support to clients across the UK. To make an appointment, please contact us on 0121 270 2700 or enquire through our contact form.
Expert advice for you: book a free consultation.
The team at Pearcelegal will be delighted to discuss your legal matters and give you a no-obligation quote.