The regulator has written to the broader financial sector, stating it is dissatisfied with current oversight of artificial intelligence used in banking and insurance operations. Australian Prudential Regulation Authority (APRA) executives stress that organisations must put robust guardrails around AI systems or face consequences.
APRA makes clear it can pursue institutions that fail to lift standards and that this may escalate to formal enforcement action.
The watchdog describes an AI revolution under way in financial services, with algorithms increasingly shaping lending, risk assessment and customer decisions. Regulators highlight that such tools open new avenues for fraud, cyber attacks and systemic errors when they are not closely governed.
APRA points to risks arising both from in-house deployment of AI and from third-party providers whose technology underpins critical financial functions. Supervisors argue that many firms have not aligned their risk frameworks with the speed and scale of AI adoption.
APRA stops short of announcing new binding rules specifically for artificial intelligence. Instead, it tells banks and insurers that existing prudential standards already require them to properly monitor, test and control the technology they deploy.
The regulator expects to see material improvement in how institutions identify and close gaps between AI capability and their capacity to oversee it. APRA warns that those that treat AI like any other IT tool risk heightened regulatory scrutiny, while those that demonstrate active control and governance are more likely to satisfy supervisors.

