
AI and the Courts in 2025: The Federal Court's 'Watch and Learn' Approach

Justice Needham's comprehensive analysis of AI in Australian courts reveals varying approaches across jurisdictions and critical lessons for legal professionals.

Jun 27, 2025 · 9 min read · By Numbat.ai Team

Australian courts are navigating uncharted territory as artificial intelligence becomes increasingly prevalent in legal practice. In a landmark speech delivered on 27 June 2025, Justice Jane Needham of the Federal Court of Australia provided a comprehensive examination of where Australian courts stand on AI use, how different jurisdictions are responding, and what the future might hold for AI in legal proceedings.

Understanding AI in the Legal Context

Justice Needham begins by demystifying artificial intelligence, tracing its roots back to the 1956 coining of the term "artificial intelligence" at the Dartmouth Conference. For legal professionals, understanding what AI actually is—and more importantly, what it isn't—has become essential to competent practice.

Generative AI, the technology now commonly used in legal research and drafting, is defined in the speech as "software systems that create content... based on a user's prompts." These systems learn from vast amounts of training data and can produce text, but they do so without true understanding or verification of accuracy.
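In practice, "content based on a user's prompts" is often a single API call. The snippet below is a minimal sketch, assuming the OpenAI Python client and an illustrative model name (neither is mentioned in the speech): the system returns fluent prose, and nothing in the exchange verifies that the prose is accurate.

```python
# Minimal sketch of prompting a generative AI system. Assumes the
# OpenAI Python client (pip install openai) with an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not an endorsement
    messages=[{"role": "user", "content":
               "Summarise the duty of candour a lawyer owes to the court."}],
)

# Fluent text comes back -- but the API offers no guarantee of accuracy.
print(response.choices[0].message.content)
```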

The Critical Limitation: AI Hallucinations

One of the most significant risks identified is the phenomenon of "hallucinations"—when AI systems generate plausible but completely fabricated information. This isn't a minor technical glitch; it's a fundamental characteristic of how these systems operate, and it has already led to serious consequences in courtrooms.
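The mechanics of a hallucination are easy to illustrate. The toy script below (my own illustration, not any real model) stitches random fragments into the surface pattern of a citation: each output is fluent and well-formed precisely because it copies the form of real citations, while referring to nothing at all.

```python
import random

# Toy illustration of why hallucinated citations look authentic:
# real citations follow a predictable surface pattern, so filling
# that pattern with random fragments yields convincing fakes.
parties = ["Smith", "Jones", "Hartley", "Commissioner of Taxation"]
court_codes = ["HCA", "FCA", "FCAFC", "NSWSC"]

def fake_citation() -> str:
    applicant, respondent = random.sample(parties, 2)
    return (f"{applicant} v {respondent} "
            f"[{random.randint(2010, 2024)}] "
            f"{random.choice(court_codes)} {random.randint(1, 900)}")

for _ in range(3):
    print(fake_citation())  # well-formed, plausible -- and entirely fictitious
```

A language model that hallucinates does something analogous at far greater sophistication: it produces text that fits the learned pattern, with no step that checks the pattern against reality.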

Australian Courts Take Different Approaches

Justice Needham's analysis reveals that Australian courts have adopted markedly different strategies for managing AI use in legal proceedings:

Federal Court: The "Watch and Learn" Approach

The Federal Court of Australia issued a Notice to the Profession on 28 March 2025 that takes what Justice Needham describes as a "watch and learn" approach. Rather than imposing strict prohibitions, the Federal Court is monitoring developments while encouraging responsible use.

As Justice Needham states: "While I have a somewhat vested interest, I am of the view that the 'watch and learn' approach of the Federal Court of Australia has much to commend it."

This measured stance allows the court to understand how AI is being used in practice before determining whether more stringent regulations are necessary.

New South Wales: Strict Restrictions

The NSW Supreme Court has taken a more restrictive position through Practice Note SC GEN 23, imposing strict limitations on AI use in legal documents submitted to the court. This conservative approach prioritizes immediate risk mitigation over technological innovation.

Victoria and Queensland: Flexible Guidelines

Victoria's Supreme Court has developed guidelines that attempt to balance innovation with risk management, while Queensland Courts have issued their own Generative AI Guidelines that provide specific direction to legal practitioners.

Real-World Consequences: Cases Where AI Went Wrong

Justice Needham highlights several cases that demonstrate the very real risks of improper AI use in legal proceedings:

Australian Cases with AI Issues

Luck v Secretary, Services Australia [2025] FCAFC 26: This Federal Court Full Court case illustrates issues that can arise when AI-generated content enters legal submissions without proper verification.

LJY v Occupational Therapy Board of Australia [2025] QCAT 96: A Queensland tribunal case that further demonstrates the challenges courts face when dealing with AI-assisted legal work.

Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95: An immigration case in which submissions prepared with the assistance of generative AI cited authorities that could not be located, drawing judicial scrutiny of the practitioner's conduct.

International Warnings

The speech also references significant international cases:

  • Kohls v Ellison (US District Court): Where court filings contained fictitious legal authorities generated by AI
  • Ayinde v London Borough of Haringey (UK): A British case in which submissions cited authorities that did not exist
  • Snell v United Specialty Insurance Company (US): A further American example of courts confronting the use of generative AI

These cases serve as cautionary tales for Australian legal practitioners about the consequences of failing to verify AI-generated content.

Key Risks for Legal Professionals

Justice Needham's analysis identifies several critical risks that every legal professional must understand:

1. Generation of False Case Citations

AI systems can create entirely fictitious case names, citations, and legal principles that appear completely authentic. Without careful verification, these fabrications can find their way into court submissions, potentially destroying a lawyer's credibility and harming their client's case.
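A practical mitigation is to treat every citation in an AI-assisted draft as unverified until checked against an authoritative source. The sketch below is one way to automate the first step: a rough regular expression for Australian medium neutral citations flags anything not yet confirmed, where KNOWN_GOOD is a placeholder for whatever verified source (AustLII, a chambers library) the practitioner actually consults.

```python
import re

# Rough pattern for Australian medium neutral citations,
# e.g. "Luck v Secretary, Services Australia [2025] FCAFC 26".
CITATION = re.compile(r"\[(19|20)\d{2}\]\s+[A-Z][A-Za-z]+\s+\d+")

# Placeholder for citations already confirmed against an authoritative source.
KNOWN_GOOD = {"[2025] FCAFC 26", "[2025] QCAT 96"}

def flag_unverified(draft: str) -> list[str]:
    """Return every citation-shaped string not yet confirmed against KNOWN_GOOD."""
    return [m.group(0) for m in CITATION.finditer(draft)
            if m.group(0) not in KNOWN_GOOD]

draft = "See Luck v Secretary [2025] FCAFC 26 and Hartley v Chen [2023] FCA 412."
for citation in flag_unverified(draft):
    print("VERIFY BEFORE FILING:", citation)
```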

2. Copyright and Privacy Violations

AI training often involves "scraping" vast amounts of internet content, potentially including copyrighted materials and personal information. Using AI tools may inadvertently expose lawyers and clients to copyright infringement or privacy breach claims.

3. Confidentiality Concerns

Inputting client information into commercial AI platforms may constitute a breach of lawyer-client confidentiality. The speech emphasizes that lawyers must carefully consider where their data goes when using AI tools.
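Even where an AI tool is used, identifying details can be stripped before any text leaves the firm. The sketch below is a deliberately crude illustration of that idea; the patterns and placeholder tokens are mine, and real redaction requires far more than two regular expressions.

```python
import re

# Crude redaction pass: replace obvious identifiers with placeholders
# before text is sent to any external AI service. Real-world redaction
# needs a much richer approach (names, addresses, matter numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[23478]\d{8}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Client Jane Citizen (jane@example.com, 0412345678) instructs us to settle."
print(redact(note))
```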

4. Bias in Training Data

AI systems learn from historical data, which may contain embedded biases related to race, gender, socioeconomic status, or other factors. These biases can be perpetuated or amplified in AI-generated content, potentially leading to unfair outcomes.
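A first, modest check is simply to measure how training data is distributed across groups before relying on anything trained on it. The toy audit below counts outcomes per group in an invented dataset; heavily skewed counts are a warning that a model may reproduce the skew.

```python
from collections import Counter

# Toy audit: if historical outcomes are skewed across an attribute,
# a model trained on them will tend to reproduce that skew.
records = [
    {"group": "A", "outcome": "granted"},
    {"group": "A", "outcome": "granted"},
    {"group": "B", "outcome": "refused"},
    {"group": "B", "outcome": "refused"},
    {"group": "B", "outcome": "granted"},
]

counts = Counter((r["group"], r["outcome"]) for r in records)
for (group, outcome), n in sorted(counts.items()):
    print(f"group {group}: {outcome} x{n}")
```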

5. Lack of Nuanced Understanding

While AI can process vast amounts of information, it lacks the nuanced understanding of legal context, precedent hierarchy, and strategic considerations that experienced lawyers bring to their work.

Technical Understanding for Lawyers

Justice Needham explains several technical concepts that legal professionals should understand:

Neural Networks: The underlying technology that allows AI systems to process information in ways loosely inspired by human brain function.

Large Language Models (LLMs): The specific type of AI system most commonly used in legal applications, trained on enormous text datasets to predict and generate language.

Scraping: The process by which AI systems collect training data from across the internet, often without explicit permission from content creators.
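To make the LLM definition above concrete, the toy model below learns only which word tends to follow which (a bigram model, my own illustration) and then generates by sampling likely next words. It is the next-token mechanism in miniature: the output is statistically plausible, with no understanding and no check against reality.

```python
import random
from collections import defaultdict

# A bigram model in miniature: learn which word tends to follow which,
# then generate by repeatedly sampling a likely next word. LLMs perform
# the same kind of next-token prediction at vastly greater scale.
corpus = (
    "the court held that the appeal failed . "
    "the court found that the evidence was weak . "
    "the appeal was dismissed with costs ."
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])
    output.append(word)
    if word == ".":
        break

print(" ".join(output))
```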

Understanding these technical foundations helps lawyers make informed decisions about which AI tools to use and how to use them responsibly.

Best Practices Emerging from the Federal Court Approach

While the Federal Court's "watch and learn" approach doesn't impose strict prohibitions, it implicitly suggests several best practices:

1. Complete Verification of AI Output

Every piece of AI-generated content must be thoroughly verified against reliable legal sources. This includes checking that case citations are real, that legal principles are accurately stated, and that the analysis is sound.
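Parts of this verification can be systematised. Beyond confirming that citations exist, a simple check is whether a quoted passage actually appears in the authentic judgment text; in the sketch below, load_judgment_text is a hypothetical stand-in for however the practitioner obtains that text.

```python
def load_judgment_text(citation: str) -> str:
    """Hypothetical stand-in: read the authentic judgment text,
    e.g. previously downloaded from an authoritative database."""
    with open(f"judgments/{citation}.txt", encoding="utf-8") as f:
        return f.read()

def _collapse(text: str) -> str:
    return " ".join(text.split())  # normalise whitespace and line breaks

def quote_appears(quote: str, source: str) -> bool:
    return _collapse(quote) in _collapse(source)

quote = "the discretion must be exercised judicially"
source = load_judgment_text("2025_FCAFC_26")  # hypothetical file name
print("verified" if quote_appears(quote, source)
      else "VERIFY: quote not found in source text")
```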

2. Disclosure of AI Tool Usage

Courts are increasingly expecting—and in some jurisdictions requiring—disclosure when AI tools have been used in preparing legal documents. Transparency helps maintain trust in the legal system.

3. Maintain Professional Judgment

AI should augment, not replace, professional legal judgment. Lawyers remain ultimately responsible for the accuracy and quality of their work product, regardless of what tools they use.

4. Stay Informed About Court Guidelines

As Justice Needham's speech demonstrates, different courts have different expectations. Legal practitioners must stay current with the specific requirements of each jurisdiction where they practice.

5. Use Professional Legal AI Tools

The speech implicitly distinguishes between general-purpose AI platforms and professional legal research tools. Platforms specifically designed for legal work, with verified sources and appropriate safeguards, present fewer risks than consumer-grade AI tools.

The Future of AI in Australian Courts

Justice Needham's "watch and learn" philosophy suggests that Australian courts are still determining the optimal balance between embracing technological innovation and protecting the integrity of legal proceedings.

Potential Developments

As courts gather more experience with AI use, we may see:

  • Unified national guidelines that provide consistency across jurisdictions
  • Mandatory AI disclosure requirements in court submissions
  • Professional competency standards requiring lawyers to understand AI capabilities and limitations
  • Specialized AI verification protocols for complex legal proceedings
  • Enhanced penalties for misuse of AI in legal practice

The Role of Professional Legal Technology

The Federal Court's measured approach creates space for professional legal technology to develop responsible AI solutions. Tools like Numbat.ai that are specifically designed for legal research, with verified Australian legal sources and transparent citation systems, represent the future of AI-assisted legal work.

These professional platforms address many of the concerns Justice Needham raises by:

  • Grounding all responses in verified legal databases (a sketch of this pattern follows the list)
  • Providing transparent citations to actual cases and legislation
  • Understanding Australian jurisdictional nuances
  • Maintaining client confidentiality and data security
  • Supporting professional oversight and verification
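"Grounding" in this sense is the retrieval-augmented generation pattern: fetch the relevant authorities first, then instruct the model to answer only from what was retrieved, citing as it goes. The sketch below shows the shape of that pattern; the document store, retrieval step, and prompt wording are all hypothetical and are not Numbat.ai's actual implementation.

```python
# Minimal retrieval-augmented generation pattern (hypothetical, not any
# vendor's actual implementation): answer only from retrieved sources.
DOCUMENTS = {
    "[2025] FCAFC 26": "...full text of the judgment...",
    "Practice Note SC GEN 23": "...full text of the practice note...",
}

def retrieve(question: str, k: int = 2) -> dict[str, str]:
    """Hypothetical retrieval step: a real system would search a verified
    legal database; this toy ignores the question and returns k documents."""
    return dict(list(DOCUMENTS.items())[:k])

def build_prompt(question: str) -> str:
    sources = retrieve(question)
    context = "\n\n".join(f"[{cite}]\n{text}" for cite, text in sources.items())
    return (
        "Answer using ONLY the sources below, and cite each source you rely on.\n"
        "If the sources do not answer the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt("What does the Federal Court's notice say about AI use?"))
```

Because the model is confined to supplied sources and told to cite them, a practitioner can check each citation against the retrieved text rather than the open internet.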

Practical Implications for Legal Practitioners

Justice Needham's analysis has immediate practical implications for how Australian lawyers should approach AI:

What You Should Do

  1. Educate yourself about AI capabilities and limitations
  2. Verify everything generated by AI systems
  3. Use professional legal AI tools designed with appropriate safeguards
  4. Stay current with court guidelines in your jurisdictions
  5. Disclose AI use when preparing legal documents
  6. Maintain professional judgment as the ultimate check on AI output

What You Should Avoid

  1. Don't blindly trust AI-generated legal research or citations
  2. Don't input confidential client information into public AI platforms
  3. Don't assume AI understands legal nuance or strategic context
  4. Don't ignore jurisdiction-specific guidelines about AI use
  5. Don't abdicate responsibility for accuracy to AI systems

Conclusion: Embracing Innovation Responsibly

Justice Needham's comprehensive analysis reveals that Australian courts are at a critical juncture in determining how to integrate AI into legal proceedings. The Federal Court's "watch and learn" approach represents a balanced philosophy: neither rushing to embrace every new technology uncritically nor reflexively rejecting innovation out of fear.

For legal professionals, the message is clear: AI offers significant potential benefits for efficiency and comprehensiveness in legal research and practice, but only when used with full understanding of its limitations and appropriate verification protocols.

The future of legal practice will undoubtedly involve AI, but it will be a future where technology augments rather than replaces professional judgment, where verification is standard practice, and where the tools used are specifically designed for the unique requirements of legal work.

As Australian courts continue to "watch and learn," legal practitioners who embrace professional AI tools responsibly while maintaining rigorous verification standards will be best positioned to serve their clients effectively in this evolving landscape.


This blog post is based on the speech delivered by Justice Jane Needham of the Federal Court of Australia on 27 June 2025. The analysis reflects the current state of AI in Australian courts and the varying approaches different jurisdictions are taking to manage this emerging technology.
