[Image: A judge in a black robe sits at a courtroom desk with a gavel and legal books, while a man writes behind a protective screen.]

The integration of artificial intelligence (AI) into the legal sector has been met with both enthusiasm and caution. While AI promises to enhance efficiency in tasks such as legal research and document drafting, recent incidents of AI-generated inaccuracies—commonly referred to as "hallucinations"—have raised significant concerns among law firms.
In February 2025, the prominent U.S. law firm Morgan & Morgan faced scrutiny when two of its attorneys submitted court filings containing fictitious case citations. These citations were generated by an AI tool that produced convincing but nonexistent legal precedents. The firm responded by issuing an internal memo warning its lawyers against unverified use of AI, emphasizing that such actions could lead to termination. This incident underscores the potential risks associated with relying on AI without proper oversight. (economictimes.indiatimes.com)
Similarly, in June 2023, two New York lawyers were fined $5,000 for submitting a legal brief that included six non-existent case citations generated by ChatGPT. The court highlighted the importance of verifying AI-generated content, emphasizing that attorneys are responsible for the accuracy of their submissions. (economictimes.indiatimes.com)
These instances have prompted law firms to reassess their approach to AI adoption. Many firms are now drafting policies to guide the use of AI tools, ensuring that partners and employees understand the dos and don'ts of AI integration. Clients are also becoming more cautious, restricting AI usage and seeking transparency from their legal representatives.
The phenomenon of AI hallucinations, where AI systems generate plausible but false information, poses a significant challenge. A study titled "Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models" found that legal hallucinations are alarmingly prevalent, with rates ranging from 58% of the time for ChatGPT-4 to 88% for Llama 2 when these models are asked specific, verifiable questions about random federal court cases. This highlights the need for caution when integrating AI into legal tasks. (arxiv.org)
Despite these challenges, some law firms are cautiously exploring AI's potential. For instance, Addleshaw Goddard has reviewed AI offerings from over 70 companies and selected eight for pilot projects. These projects focus on using AI for tasks like document review and translating complex contracts into plain English. However, the firm acknowledges the technology's flaws, such as inconsistent responses and verbosity. (ft.com)
The legal community is also grappling with ethical considerations surrounding AI use. The American Bar Association has emphasized that attorneys must vet and stand by their court filings, even when using AI tools. This underscores the non-delegable responsibility of lawyers to ensure the accuracy and reliability of their work. (ebglaw.com)
In India, the Supreme Court has begun using AI software to transcribe arguments and has constituted a committee to study AI's potential in augmenting the justice delivery system. However, experts caution that while AI can assist in legal processes, it cannot replace human judgment and the nuanced understanding required in legal practice. (moneycontrol.com)
In conclusion, while AI offers promising tools for the legal industry, the risks associated with AI hallucinations necessitate a cautious and informed approach. Law firms must balance the potential benefits of AI with the imperative to maintain accuracy, ethical standards, and client trust.

Source: Mint https://www.livemint.com/technology/ai-adoption-law-firms-policy-employees-google-gemini-chatgpt-openai-microsoft-s-copilot-khaitan-co-big-tech-amazon-11746087176116.html