
In a striking example of how artificial intelligence (AI) can complicate legal proceedings when used without proper oversight, Gordon Rees Scully Mansukhani LLP, one of the largest law firms in the United States, has publicly apologized after a bankruptcy court filing contained numerous fabricated citations and inaccuracies produced by an AI tool.
The firm, which was representing a creditor in an ongoing hospital bankruptcy case in Montgomery, Alabama, admitted that portions of a recent court filing included errors originating from generative AI software. The incident quickly caught the attention of the U.S. Bankruptcy Court for the Middle District of Alabama, prompting concern over the use of AI in legal research and drafting without adequate human verification.
Court Flags “Pervasive and Misleading” Errors
The issue came to light when U.S. Bankruptcy Judge Christopher L. Hawkins reviewed the document and found that it contained “pervasive inaccurate, misleading, and fabricated citations, quotations, and representations of legal authority.” The court’s findings revealed that multiple case citations and quotes were either entirely nonexistent or misattributed, creating a misleading impression of established legal precedent.
In response, Gordon Rees submitted a formal apology to the court, acknowledging its failure to prevent the submission of inaccurate materials. The firm emphasized that it had since taken “immediate corrective actions” to prevent a recurrence, including the adoption of a new firm-wide AI policy and a mandatory cite-checking process for any filing involving AI-generated content.
Attorney Acknowledges AI Use Despite Initial Denial
The lawyer directly involved in the filing, Cassie Preston, initially claimed she was unaware of any use of AI in preparing the document. However, during a later hearing, she admitted that generative AI tools had, in fact, been used to assist in drafting portions of the filing.
Preston expressed remorse for the oversight, explaining that she had been under significant personal stress and had taken on an excessive workload, which contributed to the lack of careful review. “I should have verified every citation and source before submitting the document,” she stated during the hearing.
While her acknowledgment earned some sympathy, the court stressed that personal difficulties could not excuse the submission of false or misleading materials. Judge Hawkins noted that even unintentional AI-related errors could undermine the integrity of court proceedings if left unchecked.
Financial and Ethical Repercussions
Taking responsibility for the error, Gordon Rees voluntarily agreed to pay more than $35,000 in attorneys’ fees to the lender and more than $20,000 to the debtor’s counsel, compensating both parties for the additional work required to identify and correct the AI-generated inaccuracies.
The firm also confirmed that it has implemented new compliance measures and enhanced training for all attorneys and staff regarding the use of artificial intelligence in legal work. These include requiring written supervisory approval before using AI tools and conducting human verification for every legal citation or authority referenced in a court filing.
The apology and corrective steps were well-received by some in the legal community as a sign of accountability, but they also underscore growing concerns about the ethical risks and professional liabilities associated with AI in the legal sector.
AI in the Legal Profession: A Double-Edged Sword
This incident is the latest in a series of high-profile cases highlighting how overreliance on AI can lead to serious professional consequences. In recent years, courts across the country have faced similar issues, with lawyers sanctioned or reprimanded for submitting filings that contained AI-generated “hallucinations” — fabricated citations that appear legitimate but have no basis in actual case law.
Legal experts have warned that while AI tools like ChatGPT and other large language models can significantly increase efficiency in drafting and research, they must be used responsibly. The American Bar Association (ABA) and other professional bodies have urged firms to develop internal guidelines and safeguards to prevent AI misuse that could compromise a lawyer’s duty of competence and candor toward the tribunal.
Industry-Wide Wake-Up Call
For many observers, the Gordon Rees case serves as a cautionary tale for the entire legal industry. As AI continues to transform the practice of law, firms are under increasing pressure to balance innovation with accountability.
The incident reinforces that technology can assist legal professionals but cannot replace sound judgment, experience, and ethical oversight. Courts are expected to scrutinize filings more closely for potential AI-related errors, and some have already begun requiring attorneys to disclose whether AI tools were used in preparing documents.
A Path Forward
In the aftermath of the controversy, Gordon Rees reiterated its commitment to transparency, legal integrity, and responsible technology use. The firm’s prompt acknowledgment and financial restitution may help restore its credibility, but the episode also highlights a broader question confronting the profession: How should law firms integrate AI without compromising legal accuracy and ethics?
As artificial intelligence becomes a permanent fixture in modern legal practice, firms that establish clear internal policies, training programs, and human verification protocols are likely to avoid similar pitfalls.
As the legal industry continues to adapt to AI-driven change, it’s more important than ever for attorneys to stay informed about professional standards, compliance, and technology’s evolving role in the courtroom.
Visit LawCrossing.com to explore top legal job opportunities, gain insight into emerging trends, and connect with firms leading the way in responsible innovation.