AI Clampdown At Fair Work Commission

The Fair Work Commission is tightening how people use generative AI in workplace claims, aiming to keep access to justice open while curbing a flood of weak or misleading applications that have blown out its caseload by about 70 per cent in three years.
The commission is now handling a much heavier workload than it did only a few years ago, and a large part of that spike appears to come from people using online AI tools to draft unfair dismissal and other employment claims. These tools make it easier and faster for workers and employers to lodge documents without professional help, but they also blur the line between helpful guidance and automated guesswork. The tribunal has warned that the usual link between the state of the jobs market and the number of dismissal claims no longer holds, because AI-assisted filings are pushing volumes higher regardless of economic conditions.

To get the situation under control, the commission has released a draft guidance note that sets out three requirements for anyone who relies on generative AI to prepare applications, responses, submissions or witness material:

1. Disclose AI use. Users must state in the document that they used AI to prepare it.

2. Verify the content. Users must carefully check the content, confirm that the facts and legal references are accurate and relevant, and clearly state that this review has been completed. Lawyers and paid agents must also add working links to any court or tribunal decisions they rely on.

3. Confirm witness material. For witness statements or declarations, the person signing must confirm in the document that the content reflects their own knowledge and is true to the best of their understanding.

These steps are intended to stop AI from generating imaginary laws, cases or facts that waste the time of other parties and of the tribunal.

Taken together, the new rules attempt to balance innovation with responsibility: people can still use AI to help draft paperwork, but they cannot offload their judgment or honesty onto a chatbot. If the approach works, it could ease pressure on the commission's resources and let it focus on big systemic issues such as flexible work and gig employment rather than wading through poorly grounded AI-generated claims. It remains uncertain how strictly the requirements will be enforced and how quickly frequent users will adapt, but the move looks like an early sign of how courts and tribunals may regulate AI-generated legal content across the wider justice system.
