Australia's Moment: Why the Iran Strike Should Change How We Think About AI Governance

4 March 2026 by Arnaud Couvreur

There is a particular kind of cognitive dissonance that only happens in the age of AI. Two articles caught my attention this week, published days apart, and the contrast between them has stayed with me. The first was a thoughtful framework by respected management scholar Dave Ulrich on moving AI from hype to real organisational impact. The second, broken by the Wall Street Journal and Axios, reported that generative AI had been used to plan the American military strikes on Iran. The tool in question was Claude, developed by Anthropic, a company that, only hours before the first bombs fell, had been banned by Donald Trump and declared a national security risk.

Same technology. Same week. Completely different worlds.

I have been sitting with that juxtaposition ever since. Not simply because it is dramatic, though it is, but because it crystallises something I have been trying to articulate for some time about why AI governance is not a compliance exercise. At its core, it is a question about who we are and what we are willing to build. And yet, in my recent conversations with recruiters and selection committees, not once has the question been raised. That silence, I find, is almost as telling as the story itself.

For those of us who think seriously about leadership, this matters beyond the technology. Effective leadership has always required the same things: the capacity to read an environment clearly, to sense where misalignment lives before it becomes a crisis, and the integrity to act on what you find rather than on what is convenient. AI does not change that equation. It raises the stakes. A leader who cannot articulate what their organisation's AI systems are permitted to do, who owns the decisions those systems influence, and what happens when those systems are wrong, is not leading the technology. The technology is leading them.


When Ethics Becomes a Red Line, Not a Tagline


The Anthropic story deserves more nuance than it has received in most commentary. Dario Amodei, the company's CEO, did not oppose military use of AI on principle. His refusal was precise. He drew two specific limits: no mass domestic surveillance of American citizens, and no fully autonomous lethal weapons systems. These were not vague ideological positions. They were targeted, defensible, and in many respects, modest constraints.

He was punished for them anyway. The Pentagon declared Anthropic a supply-chain risk, a designation normally reserved for hostile foreign entities like Huawei. And yet, as the executive orders were still being signed, US Central Command was using Claude to assess intelligence, identify targets, and simulate battlefield scenarios over Iranian airspace.

The paradox is almost too neat. The company that maintained ethical limits was banned. The company that dropped them (OpenAI signed its Pentagon agreement on the same day) was rewarded with access and contracts. Elon Musk's xAI, never encumbered by such hesitations, was already operational inside classified systems.

What this tells us is not simply that the Trump administration has little patience for principled constraints. It tells us something more structural: when AI ethics exists only as a values statement, something written into a company's website and its public positioning, it is, in the end, negotiable. It becomes a bargaining chip. It holds only as long as the power dynamics permit it to hold.

Governance architecture is different. Built into the system before someone arrives with an ultimatum, it is load-bearing, not decorative.


A Formula Worth Revisiting


In his article, Dave Ulrich proposes what he calls the HI × AI formula, Human Ingenuity multiplied by Artificial Intelligence, as the core equation for organisational progress in the age of AI. It is a compelling frame, and largely correct. The best outcomes do emerge from the combination of human judgment and machine capability, not from the replacement of one by the other.

But this formula, as elegant as it is, remains incomplete. In earlier work I proposed a correction:

True Talent Advantage = (HI × AI) / Human Dependency Ratio

The denominator matters. As organisations embed AI more deeply into their decision-making, as tasks, then judgments, then strategic choices are progressively delegated to systems that most people do not understand and cannot interrogate, the dependency ratio climbs. And as it climbs, the advantage shrinks, even if the surface metrics look impressive.
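
To see why the denominator matters, consider a deliberately simplified illustration. The numbers below are invented for the purpose of the sketch, not drawn from any real measurement. Suppose an organisation rates its human ingenuity at 8 and its AI capability at 8, with a dependency ratio of 1, meaning every AI-assisted decision can still be understood, interrogated, and if necessary overridden by a human:

True Talent Advantage = (8 × 8) / 1 = 64

Now let the AI capability double to 16 while the dependency ratio climbs to 4, because judgments are being delegated to systems nobody inside the organisation can explain:

True Talent Advantage = (8 × 16) / 4 = 32

The machine got twice as capable. The advantage halved.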

The Iran story is a live demonstration of what happens when the denominator is ignored entirely. The US military acknowledged it would need three to six months to replace Claude's capabilities, so embedded had the tool become in classified infrastructure. That is not augmentation. That is dependency. And dependency, when it meets a governance crisis, produces exactly the kind of institutional incoherence we witnessed: a government banning a tool in the morning and using it to conduct airstrikes in the afternoon.

There is also a deeper risk embedded in the formula that the Iran story does not fully surface, but that is equally serious. The HI variable, Human Ingenuity, is not a fixed quantity. It is something we cultivate, or erode. When students stop developing critical thinking because AI writes their essays, when professionals stop exercising judgment because algorithms produce their recommendations, when leaders stop imagining alternatives because the system presents them with a ranked list of options, the numerator itself begins to shrink. We are not just risking dependency. We are risking the gradual degradation of the very human capacities that make the formula work in the first place.


Australia Is Not a Bystander


The dependency ratio problem is not an American problem. It lands here too, in every Australian boardroom that has signed an enterprise AI agreement without reading what it actually permits, in every government agency that has outsourced a consequential decision to a model it cannot explain, in every organisation that has confused adoption with strategy.

There is a tendency in Australian public discourse to observe these geopolitical and technological dramas as things that happen elsewhere, in Washington, in Silicon Valley, in the corridors of the Pentagon. We consume the news, form opinions, and move on. The tyranny of distance, reinvented for the digital age.

Australia is not a bystander in this story, and it cannot afford to behave like one.

The Australian government has been developing its approach to AI governance with real seriousness. The voluntary AI Safety Standard, released in 2024, established ten guardrails for responsible AI use in high-risk settings. The National AI Centre has been building capability and awareness across industry. These are not nothing. But they remain, at this stage, largely advisory: frameworks that organisations can adopt, calibrate, or quietly set aside when commercial pressures push in the other direction.

Meanwhile, the Australian organisations actually deploying AI at scale, the major banks, the superannuation funds, the large healthcare providers, the federal agencies, are doing so primarily with tools and infrastructure built and governed in the United States. The models are American. The cloud infrastructure is largely American. The terms of use, the ethical constraints, and the limits of what can and cannot be done with these systems are set in San Francisco and Seattle, not in Canberra or Sydney.

A sovereignty concern of this kind is not abstract. When Robodebt, Australia's automated welfare compliance system, collapsed under the weight of its own injustice, the fundamental problem was not a technical failure. It was a governance failure: a decision to delegate consequential judgments about citizens' lives to an automated system, without adequate oversight, without accountability, and without the ethical architecture that would have required someone to answer, clearly and in advance, what the system was actually permitted to do to people. The Federal Court found it unlawful. The Royal Commission found it caused serious harm. And yet the lessons have not been fully absorbed into how Australian institutions are now approaching AI deployment.

The Commonwealth Bank's experience offers a more recent and instructive example. After announcing significant workforce reductions partly attributed to AI efficiency gains, CBA subsequently acknowledged that its AI-assisted customer service tools had in some cases increased call volumes rather than reducing them, as customers sought human support to navigate or correct AI-generated outcomes. The efficiency assumption had been built into the business case before the governance questions had been answered. What happens when the system is wrong? Who is responsible? What is the escalation path? These are not afterthoughts. They are the architecture.


The Window Is Narrow


Picture a room somewhere in Canberra where a procurement officer is reviewing an enterprise AI contract. The cybersecurity boxes are ticked. Legal has signed off. The vendor is reputable. What the contract does not specify is who is accountable when the system is wrong, what the escalation path looks like, or which decisions the organisation is actually willing to let a machine make on its behalf. That room exists in dozens of agencies right now. And the contract gets signed.

The rules of AI governance are being written in exactly these moments, not in white papers or parliamentary inquiries, but in procurement decisions, in defence agreements, in the quiet negotiations between technology companies and governments that rarely make the front page. Australia's absence from the harder questions in those rooms is not a neutral position. It is a choice to be governed by frameworks designed for someone else's interests and someone else's risk appetite.

Australia has something real to offer this global conversation. As a mid-sized democracy with strong institutional traditions and genuine reach across both European governance frameworks and Indo-Pacific partners, it is well placed to advocate for AI standards that neither mimic the libertarian minimalism of the current American approach nor default to centralised control. France, which has argued consistently within the European Union for digital sovereignty and a values-based approach to technology regulation, is a natural partner in this work. The bilateral relationship between the two countries, rebuilt carefully after the AUKUS rupture, has more to offer than submarines and critical minerals. Rebuilding trust between nations, like rebuilding governance frameworks within organisations, is a long process and rarely a visible one. The shortcuts, in diplomacy as in governance, tend to surface at the worst possible moment.

What concrete leadership looks like is not complicated to describe: mandatory AI impact assessments for government deployments with public reporting, procurement standards that require explainability and human oversight as baseline conditions, and boards that treat AI governance as a strategic responsibility rather than a compliance function delegated to legal. The Robodebt Royal Commission gave Australia a detailed, painful, and publicly documented account of what happens when none of that is in place. The question is whether it was absorbed as a lesson or filed as history.


What I Keep Coming Back To


I want to be clear about something. I am not arguing against AI. I have spent the better part of the last several years helping organisations understand and implement it, and I believe in its capacity to augment human capability in ways that are meaningful and good. The HI × AI insight is real.

The technology, like any powerful tool, takes its character from the governance structures within which it operates. A hammer is not intrinsically a weapon, but it can become one. The question is always who decides, under what constraints, and with what accountability.

The Anthropic story is instructive precisely because Dario Amodei was not, by most accounts, an idealist operating outside the real world. He signed a $200 million government contract. He built a company that deployed AI into classified military infrastructure. His red lines were narrow. And still, when the moment of pressure came, they were treated as an inconvenience to be removed.

That is the world Australian organisations are operating in. The pressure to remove guardrails, in the name of efficiency, competitiveness, speed, or national interest, is real and will intensify. The organisations and institutions that will fare well are not those that rely on goodwill and stated values. They are the ones that have done the harder work of building governance into the architecture before someone arrives with an ultimatum.

That is, in the end, a leadership question as much as a technology one. And the leaders who will matter in this decade are those willing to ask it out loud, before the contracts are signed and the systems are running.

