Artificial intelligence now plays a central role on both sides of cybercrime, with hackers using it to carry out sophisticated attacks and security teams applying it to neutralise threats. While AI helps attackers identify weak points in networks and generate convincing phishing content, Australian tech companies and security authorities are deploying AI-powered bots to intercept scammers and stop data breaches before damage occurs.
As scams grow more complex and personalised, authorities have also reported a sharp rise in major breaches, including those at Medibank and Optus, which compromised the data of millions of Australians. In the first half of 2025 alone, a government agency recorded over 108,000 scam reports, with financial losses nearing $174 million. Cybercriminal groups are becoming increasingly structured and business-like, using tools such as AI-generated prompts and deepfakes to accelerate and amplify attacks, often without relying on large teams of developers.
AI makes it easier to break into online accounts by rapidly testing vast numbers of stolen credentials, which are traded on underground platforms. Criminals also use AI to mimic voices, manipulating victims into sharing sensitive data or authorising fraudulent transactions. On the defensive side, cybersecurity professionals say AI strengthens digital protection by detecting breaches faster and isolating threats more effectively.
The technological balance could tip in either direction. While AI lowers the bar for inexperienced criminals, it also gives security teams powerful tools to identify and disrupt attackers. In Australia, research-driven firms have rolled out tens of thousands of conversational AI bots that thwart scam calls and collect intelligence. These bots, designed to sound human with distinct voices and personalities, waste scammers’ time while quietly tracking their methods. Deployed on platforms such as WhatsApp and Telegram, they help businesses and law enforcement stay a step ahead.