An article in a recent French publication caught my attention with a striking claim: American insurers are threatening to stop covering risks associated with artificial intelligence. Major players like AIG, Great American and W.R. Berkley have reportedly sought regulators' approval to carve AI-related exposures out of corporate policies. Some have gone further, introducing exclusions that could erode protection worth billions in potential claims.
This development prompted an immediate question: what does this mean for Australia? Our businesses have embraced AI with comparable enthusiasm, embedding machine learning into credit decisions, medical diagnostics and operational planning. Yet our insurance landscape differs markedly from the American market. We lack the litigation culture that drives much of US insurance pricing, but we also lack the regulatory clarity that European firms now navigate under the EU’s AI Act. This positions Australian enterprises in a peculiar middle ground.
In boardrooms across Sydney, Melbourne and Perth, directors increasingly ask their risk managers whether company insurance covers AI deployments. The answer reveals an uncomfortable truth: nobody quite knows. This uncertainty reflects a challenge facing Australian enterprises in 2025, caught between regulators demanding robust AI governance and insurers reconsidering their exposure to AI-related incidents.
The Silent Transformation of Business Operations
Australian businesses have embedded artificial intelligence into operations with remarkable speed. Financial institutions deploy machine learning algorithms for credit assessments and fraud detection. Healthcare providers analyse medical imaging through AI-powered diagnostics. Manufacturing operations anticipate equipment failures using predictive maintenance systems. What began as experimental programmes barely three years ago now forms core operating infrastructure across industries.
This transformation has largely occurred below the threshold of deliberate risk assessment. Previous technological shifts allowed insurance coverage to evolve alongside adoption. Artificial intelligence differs: liability crystallises faster than protective mechanisms can form. Companies that implemented AI solutions to gain competitive advantage now discover they may have simultaneously created uninsured exposures of uncertain magnitude.
Consider specific deployments observed in the Australian market. A logistics company uses AI to optimise delivery routes, creating algorithmic decision-making that affects driver employment conditions. A property developer employs machine learning for valuations, introducing questions of professional liability if the models produce inaccurate assessments. A retailer implements AI-driven inventory management that makes autonomous purchasing decisions. Each represents a different risk profile. Existing insurance frameworks struggle to address any of them comprehensively.
The Regulatory Paradox: Accountability Without Clarity
The Australian Prudential Regulation Authority and the Australian Securities and Investments Commission elevated AI governance to a strategic priority for 2025-26, signalling heightened supervisory scrutiny across the financial services sector. APRA announced targeted supervisory engagements to understand emerging practices and potential risks associated with AI deployment. ASIC, through its report “Beware the Gap: Governance Arrangements in the Face of AI Innovation”, urges financial services and credit licensees to ensure their governance practices keep pace with accelerating AI adoption.
Directors and senior executives face immediate pressure from this regulatory attention. The Financial Accountability Regime (FAR), which commenced for insurance companies in March 2025, extends the banking sector’s executive responsibility framework. The FAR imposes strengthened accountability requirements. Executives potentially face income loss, sector disqualification and individual civil penalties for organisational contraventions involving AI governance failures.
Australia deliberately eschews prescriptive regulation, favouring voluntary frameworks instead. The National AI Centre’s Guidance for AI Adoption, introducing six essential practices known as AI6, represents the primary government reference point for organisations using AI. This guidance remains voluntary, despite its comprehensive scope. The Australian government paused work on standalone AI-specific legislation and mandatory guardrails in December 2025, instead relying on existing technology-neutral laws and sector regulators.
Directors operate under frameworks like FAR whilst lacking clear compliance benchmarks specific to AI deployment. Compare this with the European Union’s prescriptive AI Act or the divergent approaches emerging across different jurisdictions globally. Australian executives navigate ambiguity whilst competitors in other markets operate under clearer regulatory parameters.
Serious privacy violations now attract penalties of up to fifty million dollars, three times the benefit obtained, or thirty per cent of adjusted turnover, whichever is greater, under Australia's strengthened privacy regime, most recently reinforced by the Privacy and Other Legislation Amendment Act 2024. AI systems that process personal data for training, deployment or decision-making create substantial regulatory exposure. Determining precisely which AI applications trigger these penalties, and under what circumstances, requires navigating complex intersections between data protection law, AI governance guidance and sector-specific regulations.
The Coverage Void: Insurance Ambiguity in Practice
Australian insurance policies remain largely devoid of explicit AI exclusions, creating an appearance of protection that proves misleading upon examination. Professional indemnity policies, designed to cover errors and omissions in professional services, contain ambiguities around algorithmic decision-making attribution. When an AI system produces advice or analysis causing client loss, determining whether claims fall within policy coverage requires untangling questions the policy language never anticipated.
Directors and officers liability policies face similar challenges. These policies typically require causal links between claims against directors or officers and wrongful acts committed in their capacity as company leaders. Autonomous AI decision-making scenarios complicate establishing whose conduct gave rise to claims. Do the acts belong to humans who deployed the AI, to the AI itself as an autonomous agent, or to the software provider who created the system? This attribution challenge directly affects which policy responds, if any.
Product liability insurance encounters different complications. If an AI-powered device or system causes harm, traditional product liability frameworks assume defects in physical products. Algorithmic failures don’t fit these categories. A smart home device that malfunctions due to an AI software update presents questions straddling product liability, professional indemnity and cyber insurance, with genuine uncertainty about which policy should respond.
Herbert Smith Freehills, in recent analysis of the Australian insurance landscape, identified specific complications affecting claims as AI technology evolves. When AI causes harm, it may be unclear whether responsibility falls on the company, the AI provider or another party. Most policies don't yet include AI-specific exclusions. Insurers will likely take a view on whether to price in or exclude these risks as they become better defined. The parallel with cyber exclusions offers a cautionary precedent: many insurers introduced cyber exclusions to traditional policies before developing standalone cyber products, creating coverage gaps that took years to resolve.
The concept of “silent AI” coverage mirrors the silent cyber problem that plagued insurance over the past decade. Insurers unknowingly covered cyber incidents under general policies not designed for such risks. Silent AI may now be emerging, where insurers inadvertently cover AI risks including financial, operational, regulatory and reputational exposures arising from deployment and use. Proactive analysis of policy language becomes essential, particularly regarding exclusions, insuring clauses and definitions.
Lockton Australia emphasises that regulatory scrutiny of AI is increasing, particularly regarding data privacy and consumer protection. Under the strengthened privacy penalty regime, businesses could face fines reaching fifty million dollars for serious privacy violations involving AI systems. This regulatory exposure exists independently of insurance coverage, creating scenarios where organisations confront penalties that no current policy contemplates covering.
The Emerging Market Response: Limited Options, Uncertain Coverage
Nascent affirmative AI coverage exists globally, yet Australian businesses face limited domestic options. Munich Re developed policies covering losses when AI models fail to perform as expected. For instance, if a financial institution uses AI for property valuations and the model produces inaccurate results, such policies may respond. Coalition, a cyber insurance provider, recently added an AI endorsement to its cyber policies, broadening coverage for AI-driven incidents. Armilla Insurance, underwritten by Lloyd’s syndicates, offers warranties ensuring AI models perform as intended by developers.
These products represent efforts to recognise and insure AI exposures, potentially providing policyholders with clearer protection. They remain in early stages, with limited market penetration and uncertain scope. The insurance industry continues debating whether to absorb AI risks within existing policy structures or develop standalone products. This creates uncertainty for risk managers attempting to secure comprehensive protection.
Google’s partnership with Beazley Group, Chubb and Munich Re introduces tailored cyber insurance solutions specifically designed to provide affirmative AI coverage that Google Cloud customers can purchase. This collaboration signals market evolution towards explicit AI coverage. Such products remain accessible primarily to large enterprises with sophisticated risk management capabilities.
Mid-market Australian businesses face sparse options. Insurance brokers, according to recent industry analysis, have tended to reassure clients that existing policies suffice for AI unless apparent gaps exist. This conservative approach suggests many companies have not purchased AI-specific policies. Reviewing how specific AI scenarios would be addressed under current coverage provides greater confidence in risk management strategies, regardless of broker assurances.
The definitional challenges surrounding what qualifies as “artificial intelligence” create additional complications. Broad AI exclusions now appearing in some markets purport to exclude coverage for any claim “based upon, arising out of, or attributable to” the actual or alleged use, deployment or development of artificial intelligence. Such language, if adopted widely, could create vast coverage gaps given AI’s ubiquity in modern business operations.
Local Innovation: The Tricore Tech Model
Against this uncertain landscape, initiatives that prioritise ethical AI implementation offer instructive alternatives. Tricore Tech, a Perth-based company founded in 2025, demonstrates how organisations can approach AI deployment with integrated governance from inception. Their model embeds comprehensive AI governance, risk assessment and ethics frameworks aligned with Australian standards directly into technology solutions. This approach recognises that effective AI governance cannot be retrofitted onto existing deployments but must be architected from the beginning.
The company operates on the premise that technology should connect people rather than isolate them. Their multidisciplinary team combines expertise in development, AI systems, ERP integration, marketing, strategic thinking and compliance to bridge technology implementation with human connection. Grounded in rigorous ethical standards and Australian compliance frameworks, such initiatives demonstrate that innovation emerges when diverse perspectives unite to deploy technology responsibly.
This commitment to ethical AI governance addresses the concerns that regulators like ASIC and APRA have articulated. By integrating transparency, human oversight and a commitment to protecting people alongside efficiency gains, organisations can deploy AI's transformative power whilst mitigating the liability risks that concern insurers. The approach suggests that the apparent conflict between innovation and risk management may be a false dichotomy: properly governed AI deployment can simultaneously advance business objectives and reduce organisational exposure.
Supporting such initiatives represents more than endorsing particular vendors. It signals recognition that the insurance gap facing Australian businesses requires solutions beyond waiting for market products to emerge. Organisations that proactively implement robust AI governance frameworks, aligned with Australian standards like AI6 and sector-specific regulatory expectations, position themselves favourably for future insurance coverage and for demonstrating reasonable care in liability scenarios.
Strategic Implications for Australian Organisations
The convergence of regulatory scrutiny and insurance ambiguity demands proactive governance rather than reactive compliance. Organisations should begin by auditing current AI deployments comprehensively. Where has AI been embedded in business processes? Which functions rely on algorithmic decision-making? What data sources train these models? Who maintains oversight of AI system performance? These questions often reveal that AI adoption has outpaced organisational awareness.
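As a minimal illustration of what such an audit might capture, the sketch below models a simple AI deployment register in Python. The field names, governance flags and example entries are hypothetical and would need to be adapted to an organisation's own taxonomy and to frameworks such as AI6.

```python
# Hypothetical sketch of an AI deployment register for an internal audit.
# Field names and governance flags are illustrative only, not a compliance standard.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class AIDeployment:
    name: str                           # e.g. "credit decisioning model"
    business_function: str              # where the system is embedded
    makes_autonomous_decisions: bool    # does it act without human sign-off?
    training_data_sources: List[str]    # what data trains or feeds the model
    oversight_owner: Optional[str]      # who monitors performance, if anyone
    last_review: Optional[date] = None  # most recent documented performance review

def flag_governance_gaps(register: List[AIDeployment]) -> List[str]:
    """Return human-readable flags for deployments lacking basic oversight."""
    flags = []
    for d in register:
        if d.oversight_owner is None:
            flags.append(f"{d.name}: no named oversight owner")
        if d.makes_autonomous_decisions and d.last_review is None:
            flags.append(f"{d.name}: autonomous decisions with no documented review")
    return flags

# Example entries answering the audit questions above, captured as data.
register = [
    AIDeployment("credit decisioning model", "retail lending", True,
                 ["application data", "bureau data"], None),
    AIDeployment("predictive maintenance model", "plant operations", False,
                 ["sensor telemetry"], "reliability engineering", date(2025, 6, 30)),
]

for issue in flag_governance_gaps(register):
    print(issue)
```

Even a register this simple makes gaps visible. The point is not the tooling but forcing every deployment to answer the same questions in a form that boards, brokers and regulators can all read.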
Following this audit, assess which existing policies might respond to AI-related claims. Review professional indemnity coverage for limitations on algorithmic advice or analysis. Examine directors and officers policies for language around autonomous decision-making attribution. Consider product liability frameworks in the context of AI-powered devices or services. Scrutinise cyber policies for both AI-related coverage and potential AI exclusions. This assessment may reveal gaps requiring attention before claims arise.
Evaluate whether emerging affirmative AI products justify their premium costs for your organisation’s specific risk profile. A financial institution deploying AI for credit decisioning faces different exposures than a manufacturer using predictive maintenance algorithms. Tailored coverage that addresses your specific AI applications may provide value, whilst generic AI policies may duplicate existing coverage or leave critical gaps unaddressed.
Human oversight protocols become critical both for operational integrity and for demonstrating reasonable care in potential liability scenarios. ASIC’s guidance emphasises that AI-generated decisions should be reviewed by professionals to validate accuracy and reliability. This human-in-the-loop approach not only reduces error rates but also establishes evidence of reasonable governance should disputes arise. Documentation of oversight processes, including decision-making escalation procedures and performance monitoring, creates the evidentiary foundation for defending against claims of negligence or inadequate governance.
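To make the documentation point concrete, the short Python sketch below shows one hypothetical way to record human review of an AI-generated decision. The record fields are assumptions for illustration, not a format prescribed by ASIC or any insurer, and real records would need to follow the organisation's own privacy, retention and escalation policies.

```python
# Hypothetical structure for logging human review of AI-generated decisions.
# Fields are illustrative; they simply preserve who reviewed what, when and why.
import json
from datetime import datetime, timezone

def log_human_review(decision_id: str, model_output: str, reviewer: str,
                     outcome: str, rationale: str, escalated: bool = False) -> str:
    """Return a JSON audit record showing that a person reviewed the AI output."""
    record = {
        "decision_id": decision_id,
        "model_output": model_output,
        "reviewer": reviewer,
        "outcome": outcome,            # e.g. "accepted" or "overridden"
        "rationale": rationale,
        "escalated": escalated,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

# Example: an overridden credit recommendation, with the reasoning preserved.
print(log_human_review(
    decision_id="APP-2025-00417",
    model_output="decline",
    reviewer="senior.credit.officer@example.com",
    outcome="overridden",
    rationale="Model penalised a thin credit file; manual assessment approved.",
    escalated=True,
))
```

Records like this are mundane on any given day, but accumulated over time they are precisely the evidence of reasonable care that a claim or regulatory inquiry will ask for.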
Board-level engagement with AI governance cannot be delegated entirely to technology functions. Directors require sufficient understanding of AI deployments to discharge their oversight responsibilities under frameworks like FAR. This doesn’t mean directors need technical expertise in machine learning algorithms, but they must grasp the strategic implications of AI deployment, the risk landscape it creates and the governance mechanisms in place to manage those risks. Regular board reporting on AI governance, including incident reviews and compliance assessments, establishes the documented oversight that regulators expect and that may prove crucial in defending against regulatory action or shareholder claims.
The Urgency of Action: Why Delay Increases Exposure
The temptation to adopt a wait-and-see approach whilst insurance markets develop clearer products is understandable but misguided. Each month of delay represents additional AI deployment without adequate governance or insurance protection. Claims arising from current AI operations could materialise years into the future. A credit decision algorithm deployed today might generate discrimination claims in 2027. A product powered by current AI systems could fail in ways triggering liability in 2028. For claims-made covers such as professional indemnity and directors and officers insurance, the policy in force when the claim is made, not when the AI was deployed, typically determines coverage.
Insurers are moving faster than organisations anticipate. Australian policies currently lack widespread AI exclusions, yet global insurers are introducing them at increasing rates. W.R. Berkley's absolute AI exclusion for directors and officers, errors and omissions, and fiduciary liability policies illustrates how sweeping such exclusions could become if adopted as standard. Hamilton Insurance Group's generative AI exclusion removes coverage for any claim involving generative AI use. These exclusions, developed in offshore markets, often find their way into Australian policies through global insurance programmes and market precedents.
The regulatory landscape is similarly dynamic. Australia’s current approach emphasises voluntary guidance, yet international regulatory developments create compliance expectations affecting Australian operations. Multinational corporations must navigate divergent AI regulations across jurisdictions, creating directors and officers liability risks transcending national boundaries. The European Union’s AI Act, the United States’ evolving sectoral approach and Australia’s technology-neutral framework create complexity where missteps in one jurisdiction can generate claims affecting Australian policies.
Australia’s voluntary regulatory approach and emerging insurance landscape position businesses as primary risk bearers during this period. Treating this as merely a technical question or compliance checklist misreads the shift occurring. AI adoption without commensurate risk architecture doesn’t represent innovation but exposure, creating liabilities that may materialise years after the technology seems routine.
Organisations that integrate robust governance from inception, support initiatives prioritising ethical AI implementation and engage proactively with insurers about coverage needs position themselves to navigate this period successfully. The alternative is operating in an expanding liability gap where regulatory accountability increases whilst insurance protection contracts, leaving organisations exposed to risks they may not comprehend until claims arise.
The gap between regulatory expectations and insurance protection won't close by itself. Business leaders must act now, before liability materialises and coverage vanishes.
Learn more about Tricore Tech's AI Ethics Advisory and Commitment.
