Signal: AI regulation is no longer a thought experiment
Australia debates, Europe acts—and smart companies prepare now.
Welcome to The Signal, a new mini-series on Modern Risk. Each week, we share a fast, early heads-up on emerging developments that could reshape risk, regulation, or strategy for forward-thinking businesses. Think of it as your early-warning system for what’s coming over the horizon.
Australia’s AI policy is heating up. The government is drafting a National AI Capability Plan, aiming to position the country as a global leader by 2028. Business groups, including the Business Council of Australia, are urging against over-regulation that could stifle innovation, but the direction is clear: mandatory guardrails for high-risk AI systems are coming.
Meanwhile, the EU is already there. The EU AI Act entered into force in August 2024, and its obligations are phasing in now: penalties and the rules for general-purpose AI models apply from August 2025, with most high-risk requirements following in 2026. Any company deploying AI in areas like hiring, healthcare, critical infrastructure, or financial services will need to meet strict standards for data quality, transparency, and human oversight.
This is no longer just a tech issue; it’s a strategic, financial, and reputational one.
Why it matters
1. AI risk is now regulatory risk.
Cyber moved from IT to the boardroom; AI is heading the same way. If your business uses AI to make decisions that affect people or financial performance, you'll be expected to demonstrate governance and control.
2. Global clients and capital will expect compliance.
Even if you don’t operate in the EU, expect to be asked about your AI controls. Corporate buyers, investors, and insurers are starting to screen for AI maturity just like they do with cybersecurity.
3. Insurance won’t cover governance gaps.
Insurers are watching. Expect changes in policy wordings, with exclusions around “algorithmic error” or “automated decision-making.” Poor governance could translate to limited cover (or no cover at all).
What to do this quarter
1. Map where AI is already in use
☐ Identify internal and third-party systems using AI or automation, and log them in a simple register (see the sketch after this checklist)
☐ Prioritise high-risk use cases (e.g. hiring, scoring, underwriting, customer service)
2. Assign ownership
☐ Nominate an exec-level sponsor for AI governance
☐ Get risk, legal, tech, and operations in the same room
3. Benchmark against emerging standards
☐ Review frameworks like NIST AI RMF or ISO/IEC 42001
☐ Note any gaps in explainability, documentation, or human-in-the-loop controls
4. Review risk transfer and legal exposure
☐ Ask your broker how AI exclusions are evolving in cyber, PI, and D&O policies
☐ Audit contracts with AI vendors. Who carries the liability?
5. Brief the board
☐ Add AI risk to your next board or risk committee agenda
☐ Frame it as both a compliance horizon and a trust-building opportunity
6. Make a 90-day plan
☐ Don’t wait for regulation. Start with a short internal roadmap
☐ Show employees, investors, and partners that you’re ahead of the curve
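For teams that want to make step 1 concrete today, here is a minimal sketch of what an AI use-case register could look like in code; the gap notes from step 3 fit the same structure. Everything in it is illustrative: the field names, the vendor "HireCo", and the gap labels are assumptions rather than a prescribed schema, and a shared spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"    # e.g. hiring, scoring, underwriting
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class AIUseCase:
    """One row in the AI use-case register (all fields illustrative)."""
    name: str
    owner: str                 # exec-level sponsor accountable for the system
    vendor: str | None         # None for internally built systems
    risk_tier: RiskTier
    gaps: list[str] = field(default_factory=list)  # e.g. "no human-in-the-loop"

# Hypothetical entries; "HireCo" is a made-up vendor for illustration.
register = [
    AIUseCase(
        name="CV screening", owner="Head of People", vendor="HireCo",
        risk_tier=RiskTier.HIGH,
        gaps=["no explainability report", "no human review of rejections"],
    ),
    AIUseCase(
        name="FAQ chatbot", owner="Head of CX", vendor=None,
        risk_tier=RiskTier.LOW,
    ),
]

def board_briefing(entries: list[AIUseCase]) -> list[AIUseCase]:
    """High-risk use cases with open gaps: the ones to escalate first."""
    return [e for e in entries if e.risk_tier is RiskTier.HIGH and e.gaps]

for e in board_briefing(register):
    print(f"{e.name} (owner: {e.owner}): {'; '.join(e.gaps)}")
```

The point of the structure is that every AI use case gets a named owner and an explicit risk tier from day one, so the board briefing in step 5 starts from a filtered list rather than a blank page.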
Bottom line:
The voluntary window for AI governance is closing. Aligning early isn’t just smart risk management—it’s a competitive advantage.