Australia’s financial regulator has warned financial firms that AI agent governance and assurance practices are poorly governed. The warning comes as banks and superannuation trustees expand AI in internal and customer-facing operations.
The Australian Prudential Regulation Authority said it conducted a targeted review of selected large regulated entities in late 2025 to assess AI adoption and related prudential risks. It found that AI was being used in all entities reviewed, but maturity varied in risk management and operational resilience. APRA said boards showed strong interest in AI for productivity and customer experience. However, it found that many were still building management of AI risks.
The regulator also raised concerns about reliance on vendor presentations and summaries. It said boards were not always giving enough scrutiny to risks like unpredictable model behaviour and the effect of AI failures on critical operations.
APRA said boards should develop a better understanding of AI in order to set strategy and oversight coherently. It said AI strategy should align with an institution’s risk appetite and include monitoring and defined procedures that should be taken in the event of errors.
APRA noted regulated entities were trialling or introducing AI in software engineering, claims triage, and loan application processing. Other use cases cited included fraud and scam disruption and customer interaction.
Some entities were treating AI risk in the same terms as that of other technologies, but that approach doesn’t account for models’ behaviour and bias.
It identified gaps in model behaviour monitoring, change management, and decommissioning, and stated a need for inventories of AI tools and named-person ownership of AI instances. It also pointed out the requirement for human involvement in high-risk decisions.
Cybersecurity was another area of concern. APRA said AI adoption was changing the threat environment by adding additional attack pathways such as prompt injection and insecure integrations.
Identity and access management practices had not adjusted in some instances to non-human elements such as AI agents. The volume of AI-assisted software development was placing pressure on change and release controls.
APRA said entities should apply controls on agentic and autonomous workflows which included privileged access management, configuration, and patching. It also called for security testing of AI-generated code.
Some institutions had become dependent on a single provider for many of their AI instances, ARPA noted, and only a few had been able to show an exit plan or substitution strategy for AI suppliers.
APRA said AI can be present in upstream dependencies, which entities may not be aware of.
Identity and access
The focus on identity and permission controls is also reflected in new standards work by the FIDO Alliance. The group has formed an Agentic Authentication Technical Working Group and is developing specifications for agent-initiated commerce.
FIDO said some existing authentication and authorisation models were designed for human interaction, not delegated actions performed by software. It said service providers need ways to verify who or what authorises actions and under what conditions.
Vendors have presented their solutions to FIDO for review, including Google’s Agent Payments Protocol and Mastercard’s Verifiable Intent framework. The Centre for Internet Security, a non-profit funded largely by the Department for Homeland Security, has published AI security companion guides that map CIS Controls v8.1 to large language models, AI agents, and Model Context Protocol environments.
Its LLM guide covers prompt and sensitive-data issues, and an MCP guide focuses on secure access by software tools, non-human identities, and network interactions.
(Photo by julien Tromeur)
See also: Google warns malicious web pages are poisoning AI agents
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post AI agent governance takes focus as regulators flag control gaps appeared first on AI News.