Legal AI · February 10, 2025

General-Purpose AI Goes Wrong: Australian Lawyers Face Consequences for Fake Cases

Real-world examples of general-purpose AI failures in Australian courts reveal why professional verification and AI literacy are critical for legal practitioners in 2025.

Numbat.ai Team

The integration of artificial intelligence into legal practice has reached a critical turning point in Australia. While AI promises unprecedented efficiency gains, a recent investigation by The Guardian reveals a troubling pattern: lawyers across Australia are submitting AI-generated content containing fabricated case citations, fake quotes, and non-existent legal authorities—leading to court referrals, financial penalties, and new regulatory restrictions.

The Immigration Case That Highlighted the Problem

The case that exemplified the dangers began innocuously enough. A lawyer with a back injury and tight deadlines turned to ChatGPT for help preparing submissions in an immigration appeal. He inserted case-relevant terms into the AI system, received what appeared to be a well-written summary of cases, and incorporated the references into his submissions without verification.

Two weeks later, the immigration minister's response revealed a shocking truth: 17 of the cases cited in the applicant's documents did not exist.

When the matter came before the federal court, the lawyer was "deeply embarrassed and apologetic," according to The Guardian's reporting. But the minister took a strong view that the use of generative AI was a matter of public concern and that it was important to "nip in the bud" the practice of lawyers relying on AI output without proper verification.

The court referred the matter to the NSW Legal Services Commissioner for review—a warning shot to the entire profession about the consequences of unchecked AI use.

A Pattern Emerges Across Australian Jurisdictions

This wasn't an isolated incident. Courts across Australia have encountered similar problems:

The Melbourne Family Law Case

A Melbourne lawyer was referred to the Victorian legal complaints body after admitting to using AI software in a family court case that generated false case citations and caused a hearing to be adjourned.

The solicitor, representing a husband in a marital dispute, provided the judge with a list of prior cases that had been requested by the court. Neither the judge nor her associates could identify any of the cases in the list.

When the matter returned to court, the lawyer confirmed the list had been prepared using legal software Leap, which includes a generative AI component. The software specifically allows legal practitioners to verify AI output, but the lawyer failed to do this when the technology "hallucinated" citations.

The Victorian Court of Appeal Decision

In a decision regarding a university's dismissal of a student, the Victorian Court of Appeal noted that some documents filed by the applicant "contained some case citations to cases that do not exist."

Justice Kristen Walker took the extraordinary step of noting: "I have not reproduced those citations, lest they contribute to the problem of LLM [large language model] AI inventing case citations."

This judicial concern about even citing fake cases—for fear of perpetuating their spread—reveals how seriously Australian courts are taking this issue.

The ACT Supreme Court Character Reference

It's not just lawyers facing scrutiny. In one case, an offender tendered a personal reference from his brother, which the ACT Supreme Court said "strongly suggest[ed]" it was written by a large language model such as ChatGPT.

The giveaway? A statement that the brother had known the offender "both personally and professionally for an extended period"—phrasing that raised doubts about whether the brother had actually prepared it himself.

How Widespread Is AI Use in the Legal Profession?

According to The Guardian, the full extent of AI use in the legal profession remains unknown. However, a Thomson Reuters survey of 869 private practice professionals in Australia revealed:

  • 40% worked at firms experimenting with AI but proceeding with caution
  • 9% of lawyers were actively using AI in daily operations
  • Nearly one in three lawyers wanted a generative AI legal assistant

These numbers suggest widespread interest in AI adoption, even as high-profile failures continue to emerge.

Industry Response: Technology Providers Weigh In

Christian Beck, CEO of Leap (the software involved in the Melbourne family law case), emphasized that his company encourages "correct and ethical use" of integrated AI products.

"At Leap, we encourage the correct and ethical use of our integrated AI products, and have implemented a range of mitigation, education and professional development measures as part of our service offering," Beck stated, according to The Guardian.

He noted that Leap's LawY feature "provides users with a free lawyer verification process that is underpinned by experienced, local lawyers" and uses technology that doesn't train large language models, avoiding confidentiality risks.

However, these safeguards only work if lawyers actually use them—as the Melbourne case demonstrated.

New Restrictions: NSW Supreme Court Takes Action

The mounting concerns led to concrete regulatory action. The Guardian reports that in February 2025 a NSW Supreme Court practice note came into force placing strict limits on generative AI use by lawyers, including:

Prohibited Uses:

  • Generating affidavits
  • Creating witness statements
  • Writing character references
  • Producing other material tendered in evidence or used in cross-examination

This practice note represents one of the most restrictive approaches to AI use in Australian courts, reflecting serious judicial concerns about AI-generated content in evidentiary contexts.

Regulatory Bodies Sound the Alarm

Legal regulators across Australia are paying close attention:

"It is still early days and we do anticipate we may receive more complaints as the use of generative AI becomes more commonplace," a spokesperson told The Guardian.

Megan Mahon warned that technology is changing how the public engages with lawyers, stating: "A self-help approach is one thing, but ensuring AI does not further enable unlawful and unqualified delivery of legal services is of great concern."

Regulators have also identified lawyers' improper use of AI as a key risk, reminding practitioners: "It's important for lawyers to remember that it's their duty to provide accurate legal information, not the duty of an AI program. Unlike a professionally trained lawyer, AI can't exercise superior judgment, or provide ethical and confidential services."

Expert Analysis: An AI Literacy Problem

Professor Jeannie Paterson, Director of the Centre for AI and Digital Ethics at the University of Melbourne, provides crucial context. As quoted in The Guardian:

"I think this is an AI literacy problem as much as a slack solicitor problem."

Professor Paterson argues that errors may be more common among lawyers with fewer resources or experience, highlighting the need for proper AI training:

"Train people in where it is useful, because once you start using it in a conscious way, you realise it's actually not good at these sorts of things. Our legal system is going to implode if we don't have that sort of literacy."

This perspective shifts the conversation from simply condemning AI use to ensuring lawyers understand both the capabilities and critical limitations of these tools.

Why AI Generates Fake Cases

Understanding why AI generates fake cases is crucial for preventing these errors:

How Large Language Models Work

Generative AI systems like ChatGPT, Claude, and similar tools are trained on massive text datasets. They predict what words should come next based on patterns in their training data—but they don't actually "know" whether something is true or false.

When asked for case law, AI systems generate text that looks like legal citations because they've seen millions of real citations in their training data. But they may:

  • Combine real case names with wrong dates or courts
  • Invent entirely fictitious case names that sound plausible
  • Create quotes that were never written by any judge
  • Mix elements from multiple real cases into one fake citation

Why Verification Fails

Many lawyers assume that if an AI provides a specific citation, it must be real. They may check one or two references, find them accurate, and then trust the rest without verification—a dangerous assumption given AI's tendency to hallucinate intermittently.
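To see why spot-checking is such a weak safeguard, consider a rough back-of-the-envelope calculation. The sketch below is illustrative only: the 20% hallucination rate and the filing size are assumed numbers, not measurements of any particular tool. The point is that checking two citations out of seventeen will usually come back clean even when the filing almost certainly contains at least one fake.

```python
# Illustrative arithmetic only. The hallucination rate and filing size below
# are assumptions for the sake of the example, not measured figures.
HALLUCINATION_RATE = 0.20   # assume 1 in 5 AI-generated citations is fabricated
TOTAL_CITATIONS = 17        # hypothetical filing with 17 AI-supplied citations
SPOT_CHECKED = 2            # citations the lawyer verifies by hand

# Probability that the filing contains at least one fabricated citation.
p_any_fake = 1 - (1 - HALLUCINATION_RATE) ** TOTAL_CITATIONS

# Probability that a two-citation spot check happens to hit a fabricated one.
p_spot_check_catches = 1 - (1 - HALLUCINATION_RATE) ** SPOT_CHECKED

print(f"Chance the filing contains at least one fake citation: {p_any_fake:.0%}")            # ~98%
print(f"Chance a {SPOT_CHECKED}-citation spot check flags a problem: {p_spot_check_catches:.0%}")  # ~36%
```

Under these assumptions, roughly two times out of three the spot check passes while fabricated citations remain in the filing, which is why every citation needs to be verified, not a sample.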

The Cost of AI Errors: Real Consequences

The consequences for lawyers caught submitting AI-generated fake cases are significant:

Professional Referrals:

  • Referrals to state legal services commissioners
  • Potential disciplinary proceedings
  • Complaints to regulatory bodies

Financial Penalties:

  • One Western Australian lawyer was ordered to pay costs exceeding $8,000 after submitting AI-generated documents citing four non-existent cases
  • Court costs for adjourned hearings
  • Potential liability to clients for inadequate representation

Reputational Damage:

  • Public embarrassment
  • Loss of client trust
  • Damage to professional standing

Client Impact:

  • Delayed proceedings
  • Weakened legal positions
  • Potential case dismissals

International Parallels: A Global Problem

The Guardian notes that this isn't uniquely an Australian problem. In Minnesota, USA, an expert witness called by the attorney general was rebuked after using fake article citations generated by AI to support the state's arguments about misinformation.

The irony—an expert on misinformation using AI-generated misinformation—underscores how even experienced professionals can fall victim to AI hallucinations.

Best Practices for Lawyers Using AI

Given these high-profile failures, what should lawyers do?

1. Never Trust AI-Generated Citations Without Verification

Every single case citation from AI must be independently verified through:

  • Official court databases (AustLII, Jade)
  • Professional legal research platforms
  • Direct access to court records
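For firms that want to automate the first pass of this check, a very small script can compare AI-supplied citations against a locally maintained index of citations that have already been confirmed in AustLII, Jade, or the court record. The sketch below is a minimal illustration under that assumption: the CSV file, its "citation" column, and every case name shown are hypothetical, and a match in a local index is still no substitute for reading the judgment itself.

```python
import csv

def load_verified_citations(path: str) -> set[str]:
    """Load a locally maintained index of citations already checked against
    AustLII, Jade, or official court records. The CSV path and its
    'citation' column are hypothetical assumptions for this sketch."""
    with open(path, newline="") as f:
        return {row["citation"].strip().lower() for row in csv.DictReader(f)}

def flag_unverified(ai_citations: list[str], verified: set[str]) -> list[str]:
    """Return every AI-supplied citation that is NOT in the verified index.
    Anything returned here still needs to be looked up and read by a human."""
    return [c for c in ai_citations if c.strip().lower() not in verified]

if __name__ == "__main__":
    # Inline demo data so the sketch runs as-is; in practice you would call
    # load_verified_citations("verified_citations.csv") or similar.
    verified = {"smith v jones [2019] fca 123"}              # hypothetical citation
    draft = [
        "Smith v Jones [2019] FCA 123",                       # matches the index
        "Re Application of Nguyen [2021] NSWSC 456",          # hypothetical, unmatched
    ]
    for citation in flag_unverified(draft, verified):
        print(f"UNVERIFIED - confirm against AustLII/Jade before filing: {citation}")
```

A script like this only narrows the list of citations that need manual attention; the professional obligation to verify each authority remains with the lawyer.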

2. Understand AI's Appropriate Use Cases

AI can be valuable for:

  • Initial research direction
  • Brainstorming arguments
  • Drafting initial document structures
  • Identifying potential research areas

AI should NOT be used for:

  • Final case citations without verification
  • Affidavits or witness statements (now prohibited under the NSW Supreme Court practice note)
  • Direct quotations from judgments without checking
  • Any content submitted without human review

3. Choose Purpose-Built Legal Research Tools

Consumer AI platforms like ChatGPT are not designed for legal research. Professional legal AI tools differ by:

  • Grounding responses in verified legal databases
  • Providing transparent citations to real cases
  • Including verification mechanisms
  • Maintaining confidentiality and data security

How Numbat.ai Prevents These Failures:

Numbat.ai is specifically designed for Australian legal research to address the exact problems highlighted in these court cases:

  • Verified Sources Only: Unlike ChatGPT, Numbat.ai draws exclusively from verified legal databases like AustLII, eliminating the risk of hallucinated cases
  • Transparent Citations: Every case reference links directly to the source material, making verification instant and effortless—no need to manually search each citation
  • Australian-Focused: Built specifically for Australian law, ensuring relevant and jurisdiction-appropriate results across federal, state, and territory jurisdictions
  • No Hallucinations: By grounding all responses in actual legal documents using Retrieval-Augmented Generation (RAG), Numbat.ai cannot fabricate cases: if a case doesn't exist in the verified database, it simply won't be cited (a generic sketch of this retrieval pattern appears after this list)
  • Professional Compliance: Designed with legal ethics and professional obligations in mind, not as a general-purpose chatbot
  • Time Savings with Confidence: Lawyers get the efficiency benefits they seek from AI without the career-ending risk of fake citations
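For readers curious what "grounding responses in actual legal documents" looks like in practice, the sketch below shows the general retrieval-augmented pattern in deliberately simplified form. It is not Numbat.ai's implementation: the corpus structure, function names, and example case are all hypothetical, and real systems use vector search rather than keyword overlap. The property worth noting is that a citation can only ever come from a document that already exists in the verified corpus; if nothing relevant is retrieved, the system declines to answer rather than inventing authority.

```python
# A generic, simplified sketch of retrieval-augmented generation (RAG).
# It is NOT Numbat.ai's implementation; every name here is hypothetical.

def retrieve_passages(query: str, corpus: list[dict], top_k: int = 3) -> list[dict]:
    """Naive keyword retrieval over a corpus of verified documents, each a dict
    like {"citation": ..., "text": ...}. Real systems use vector search, but the
    principle is the same: only documents that exist in the corpus can be returned."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(doc["text"].lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def answer_with_sources(query: str, corpus: list[dict]) -> str:
    """Compose an answer only from retrieved passages, and decline to answer
    when nothing relevant is found instead of guessing."""
    passages = retrieve_passages(query, corpus)
    if not passages:
        return "No supporting authority found in the verified corpus."
    cited = "\n".join(f"- {p['citation']}: {p['text'][:120]}..." for p in passages)
    # In a full system, an LLM would draft prose constrained to these passages.
    return f"Relevant verified sources for '{query}':\n{cited}"

if __name__ == "__main__":
    corpus = [
        {"citation": "Example v Sample [2020] HCA 1",   # hypothetical case
         "text": "Procedural fairness requires adequate notice of the case to be met."},
    ]
    print(answer_with_sources("procedural fairness notice", corpus))
```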

4. Implement Firm-Wide AI Policies

Law firms should establish clear policies on:

  • Which AI tools are approved for use
  • Mandatory verification procedures
  • Training requirements for AI users
  • Confidentiality protocols for AI inputs
  • Disclosure requirements when AI is used

5. Invest in AI Literacy Training

As Professor Paterson emphasized, this is fundamentally an education issue. Lawyers need training on:

  • How AI systems actually work
  • Where AI excels and where it fails
  • Verification best practices
  • Ethical obligations when using AI
  • Confidentiality considerations

The Path Forward: Balanced AI Integration

The cases highlighted by The Guardian represent cautionary tales, not reasons to abandon AI entirely. The technology offers genuine benefits when used appropriately:

Benefits of Proper AI Use

  • Faster initial research
  • More comprehensive case discovery
  • Improved efficiency for routine tasks
  • Better resource allocation for complex analysis

The Critical Requirement: Human Oversight

The difference between successful AI integration and career-damaging failures comes down to one principle: AI should augment, never replace, professional judgment.

Lawyers remain ultimately responsible for:

  • Verifying all AI-generated content
  • Exercising professional judgment
  • Ensuring accuracy and reliability
  • Maintaining ethical standards
  • Protecting client confidentiality

Conclusion: Learning from Failures to Build Better Practices

The wave of AI-related court incidents across Australia serves a valuable purpose: highlighting what can go wrong and pushing the profession toward better practices.

As The Guardian reports, complaints to legal regulators remain relatively few so far—"still early days," according to the NSW Legal Services Commissioner. But regulators anticipate more complaints as AI use becomes more commonplace.

The legal profession stands at a crossroads. One path leads to continued high-profile failures, increasing restrictions, and damaged public trust. The other leads to responsible AI integration through:

  • Proper training and AI literacy
  • Use of professional legal research tools designed with verification built in
  • Mandatory verification procedures
  • Clear ethical guidelines
  • Transparent disclosure when AI is used

At Numbat.ai, we're committed to the second path—developing AI tools specifically designed for Australian legal research with verified sources, transparent citations, and built-in safeguards that help prevent the kinds of failures documented by The Guardian.

What Makes Numbat.ai Different:

The lawyers in these cases turned to general-purpose AI tools like ChatGPT and encountered catastrophic failures. Numbat.ai was built from the ground up to prevent exactly these problems:

  1. Source-Grounded Research: Every answer is based on actual legal documents from verified databases, not pattern prediction
  2. Instant Verification: Click any citation to view the source document immediately—verification takes seconds, not minutes
  3. Australian Legal Expertise: Trained and optimized specifically for Australian law, not general knowledge
  4. Ethical by Design: Built with legal professional obligations and ethics requirements as core features
  5. Risk Mitigation: Eliminates the single biggest risk lawyers face with AI—fake citations leading to professional consequences

The choice isn't between AI and no AI. It's between using tools designed for lawyers with appropriate safeguards, or using general-purpose chatbots and risking professional referrals, financial penalties, and reputational damage.

The technology is here to stay. The question is whether the legal profession will learn from these early failures to build better, more responsible practices—or whether courts will need to impose increasingly restrictive limitations on AI use.

The choice, ultimately, belongs to legal practitioners themselves.


This blog post is based on reporting by Josh Taylor for The Guardian Australia, published February 10, 2025. The article provides comprehensive coverage of AI-related incidents in Australian courts and the regulatory response from legal bodies across the country.

Related Articles

The Risks of Using AI Without Legal Training
Recent Australian research reveals 84 cases where AI use in court led to serious consequences. Learn why professional legal research tools matter.

AI for Lawyers in Australia: Introductory Guide (2025)
A practical 2025 guide for Australian lawyers on AI/GenAI: opportunities, risks, ethics, court protocols, and guidance.

AI and the Courts in 2025: A Federal Court Approach on the 'Watch and Learn'
Justice Needham's comprehensive analysis of AI in Australian courts reveals varying approaches across jurisdictions and critical lessons for legal professionals.