Navigating the Legal Landscape with AI – A Cautionary Tale

The legal profession stands on the cusp of a technological revolution, with artificial intelligence (AI) poised to transform the way lawyers work. From drafting contracts to conducting research, AI promises efficiency and speed. However, a recent study from Stanford’s Human-Centered AI Institute (HAI) raises significant concerns about the reliability of AI in legal settings [1].

The Hallucination Problem

AI tools, particularly generative AI, have been found to “hallucinate” – generate false information – in 1 out of 6 (or more) benchmarking queries. This phenomenon is not just a minor hiccup; it has real-world consequences. For instance, a New York lawyer faced sanctions for citing non-existent cases generated by an AI tool. The study by Stanford RegLab and HAI researchers highlights the risks of incorporating AI into legal practice without rigorous benchmarking and public evaluations [1].

The Risks for Legal Professionals

The implications for legal professionals are clear: while AI can be a powerful assistant, it can also lead to errors that undermine the integrity of legal work. The study tested leading legal research services from LexisNexis and Thomson Reuters and found that even these specialised tools produced incorrect information more than 17% and 34% of the time, respectively [1].

Declaring the Use of Generative AI

Whilst not specifically part of this study, there is a growing trend among judges to require lawyers to disclose the use of generative AI in legal document preparation. The aim is to ensure transparency and adherence to accuracy standards, with some courts mandating a certificate of accuracy for AI-generated content, verified by traditional legal research methods. The movement seeks to maintain the integrity of legal proceedings as AI tools become more integrated into practice.

In New Zealand, the judiciary has developed guidelines for lawyers using generative AI, emphasising the importance of existing professional obligations and the need for caution due to the risks and limitations of AI tools [2].

[1] Stanford HAI – AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries

[2] Courts of New Zealand – https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Lawyers.pdf
