Many law firms have already embraced the AI revolution. Even if your firm has not officially adopted generative artificial intelligence (gen AI), some lawyers within your firm have likely at least dabbled with AI tools. The benefits of generative AI in law firms are compelling. Lawyers and paralegals can reclaim their time from the legal grunt work of discovery and research. Additionally, lawyers can “stress test” legal arguments and contracts using refined prompts and built-for-purpose tools.
However, gen AI also poses unique risks in a legal setting, with several high-profile failures illustrating the perils of botched AI usage. In a recent Virginia case, the presiding judge threatened to sanction the plaintiff's lawyers when the court discovered that several case citations were nonexistent. These fictional citations stemmed from a flaw in gen AI known as a “hallucination,” in which an AI fabricates plausible-sounding but false material, in this instance fake citations and even invented quotes attributed to Supreme Court justices.
In response, some judges have banned the use of generative AI outright, while other courtrooms require lawyers to disclose when they use it in case materials. To help sort out the matter, the American Bar Association (ABA) released new guidance to help lawyers navigate the ethics of implementing generative AI responsibly. This post will take a closer look at the risks and rewards of generative AI in law firms and review the critical points of the recent ABA opinion on the matter.
The Risks of AI in a Legal Setting
Inaccuracies and hallucinations
Generative AI works by extrapolation: it examines patterns in the data available to it and, based on the user prompt, makes an educated guess about what comes next in the sequence. So, if an AI has access to a limited or inaccurate data set, it can reach inaccurate, biased, or downright bizarre conclusions. If a model were hypothetically trained on data that falsely stated that “the sky is green,” it could only answer incorrectly when a user asks, “What color is the sky?”
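The underlying mechanism can be illustrated with a minimal sketch of pattern-based next-word prediction. This is a deliberately simplified toy, not how production language models actually work, and the training text (including the false claim about the sky) is invented for illustration:

```python
# Toy sketch: a "model" that predicts the next word purely from patterns
# in its training data. The training text (including the false claim that
# the sky is green) is invented for illustration.
from collections import defaultdict

training_text = "the sky is green . the grass is green . the sun is bright ."

# Count, for each word, which words follow it in the training data.
bigrams = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` during training."""
    followers = bigrams[word]
    return max(followers, key=followers.get) if followers else "<unknown>"

# Completing "the sky is ..." yields "green": the model has no concept of
# truth, only of the patterns present in its training data.
print(predict_next("is"))  # -> green
```

A real large language model is vastly more sophisticated, but it shares the same core limitation: it reproduces patterns from its training data and has no independent way to verify truth.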
Security and confidentiality
Cyber attackers are discovering novel methods to manipulate or breach artificial intelligence systems. In adversarial attacks, threat actors manipulate machine learning inputs to produce false results. Additionally, if an AI trains on sensitive or confidential client data, that data can inadvertently become accessible to other users of the same public AI tool.
Reputational damage
Firms revealed to have submitted inaccurate AI-generated legal documents face public embarrassment and potential sanctions from judges.
AI Use Cases in Law Firms
With proper oversight, a law firm can manage these risks and enjoy the benefits of gen AI tools. The key advantage is time saved: lawyers can offload the tedious, high-volume work that leads to all-night research sessions.
Critical use cases include:
- Discovery – Technology-assisted review (TAR) has been available to lawyers for years, but gen AI has made it more accessible and easier to use. Gen AI can review hundreds of pages of documents for discoverable evidence and other relevant information in a fraction of the time it would take a human counterpart.
- Research and fact-finding – AI can hunt down legal precedents and other vital data to build a case. However, human oversight is essential to fact-check citations and other relevant information for accuracy.
- Writing legal documents – Gen AI can draft legal briefs, contracts, and other documents. Additionally, AI can write or rewrite a document for different audiences, such as a judge, a clerk, or a client. AI tools can also proofread a document for proper grammar and spelling.
- Document analysis – AI can probe documents, such as contracts, for vague wording and potential weaknesses.
- Risk analysis – Another existing technology that gen AI is making more accessible is predictive analytics. These tools analyze case data to estimate the likely outcome of litigation.
ABA's New Guidance on the Use of AI
The ABA recently released its opinion on generative AI use in law firms, dividing its guidance across several categories. In a nutshell, it recommends the following:
- Competence – Lawyers are responsible for continuously staying up to date with new AI tools so they can effectively supervise other lawyers in the firm, advise clients on AI use, and ensure accuracy in all legal documents generated with AI. The ABA calls reading the terms and conditions of the firm's AI tools “the baseline” of competence.
- Confidentiality – Lawyers must perform due diligence to ensure client data stays confidential. If you input sensitive data into a public AI tool, other users could surface that data with the right prompt. Lawyers can avert this issue by working with technology experts and relying on a private GPT instance rather than a public tool.
- Duties to the tribunal – Even unintentional misinformation, such as a false citation generated by AI, qualifies as misrepresentation. Lawyers must carefully review AI output for accuracy before submitting it to the court.
- Communication – AI disclosure laws are evolving, and most states do not yet have statutes on the books. However, if a specific judge or jurisdiction requires disclosure of AI usage, be sure to comply. If client consent for AI is required in your region, the lawyer must disclose usage and fully explain the extent of AI use to the client. Even where it is not required, the ABA concludes that you must disclose gen AI usage if a client inquires about your work practices.
- Supervisory responsibilities – Supervising lawyers at a firm need to be competent in understanding and using AI. They must create clear AI guidelines for the lawyers and non-lawyer staff under their management, and they must provide training for subordinate lawyers covering risks, basic operation, and best practices.
- Reasonable fees and billing – If a task that typically takes 10 hours of billable time is accomplished in 10 minutes with an AI tool, you cannot reasonably bill the client as if the work had taken 10 hours.
Navigating the Risk of AI in Law Firms
When responsibly implemented, gen AI tools can save lawyers hundreds of hours annually, allowing them to focus on higher-level priorities such as building client relationships. But the risks are genuine. Over-reliance on AI can lead to dire consequences, such as inaccurate legal documents or faulty legal advice.
Because of these dangers, supervisory lawyers may be tempted to ban the use of AI in their firms. However, if you ban AI use outright, some attorneys will risk using it anyway to save time on stacks of research. It is better to set sensible guidance on how best to use AI and when to avoid it. Law firms should treat AI as an assistant, a supplement to professional judgment rather than a replacement for years of legal training and expertise.
Protexure offers a range of insurance products that help attorneys mitigate risk, including professional liability and cyber risk management. We specialize in insurance and advice for small and solo firms.
Questions about AI in your law firm? Contact one of our experts to help you evaluate the risks.
Disclaimer: This article is not intended to replace legal advice. Always seek the opinion of a certified attorney to address the specifics of an individual case and learn about recent legal developments.