The case concerns a Melbourne drink-driving appeal in which a solicitor from a Sydney-based criminal defence firm filed written submissions that immediately troubled the Victorian Court of Appeal. The firm is well known for representing clients in serious organised crime matters, but in this instance it was handling a comparatively routine licence cancellation dispute. The court, already operating under guidelines on responsible AI use, found itself facing a textbook example of what can go wrong when the technology is used without careful supervision.
As the judge’s chambers examined the material line by line, staff discovered that seven cited authorities did not exist and that 12 separate legal quotations could not be found in the reported cases they were said to come from. These problems emerged only after the judge instructed an associate to check every citation in the filing, a task that revealed what appeared to be extensive AI-generated hallucinations embedded in the submissions. When the court contacted the solicitor and the firm’s managing partner for an explanation, none was provided, leaving the judge to rely on the documentary trail and the court’s own AI guidelines.
The episode appears to be a signal moment for how Australian courts and regulators will respond to AI in legal practice. The judge stressed that using generative tools does not relax the duty to verify sources or the diligence expected of a reasonably competent lawyer, and noted that the state’s Supreme Court rules already require practitioners to declare AI use and check outputs for accuracy. The court stopped short of a formal misconduct finding and instead referred the matter to the state legal services regulator. Even so, the case is likely to influence how firms train staff, document their use of AI and weigh the promise of faster drafting against the reputational damage that follows when fake authorities reach the courtroom.

