Major Insurers Petition to Exclude AI-Related Risks from Standard Corporate Policies

Leading global insurers, including AIG, Great American, and WR Berkley, are petitioning U.S. state regulators to approve new policy language that would explicitly exclude liabilities arising from the use of artificial intelligence. The insurers argue that the novel, opaque, and potentially systemic nature of AI-driven risks makes them uninsurable under standard corporate policies like Commercial General Liability (CGL). This move signals a fundamental shift in the insurance market that could create a significant coverage gap for corporations, impacting technology adoption, risk management strategies, and regulatory frameworks.

STÆR | ANALYTICS

Context & What Changed

For decades, the Commercial General Liability (CGL) policy has been the bedrock of corporate risk management, providing broad coverage for bodily injury and property damage caused by a company's products, services, or operations. Its strength lies in its breadth: it is designed to cover unforeseen risks that were not explicitly excluded. Historically, new, large-scale risks such as asbestos, environmental pollution, and cyber threats have tested the boundaries of CGL policies, leading to protracted legal battles and, eventually, the introduction of specific exclusions and the creation of standalone insurance products.

Artificial Intelligence (AI) represents the latest, and arguably most complex, technological shift to challenge this established framework. AI systems, particularly advanced machine learning and generative models, introduce novel risk vectors. These include algorithmic bias leading to discriminatory outcomes, 'black box' decision-making processes that are difficult to audit or explain, the potential for emergent behaviors not anticipated by developers, and the capacity for a single flawed model deployed at scale to cause widespread, correlated harm. The insurance industry refers to this as a potential for systemic risk, where a single event can trigger a cascade of losses across multiple policyholders simultaneously, threatening insurer solvency.

The key change is the proactive and coordinated move by major carriers like AIG, Great American, and WR Berkley to address this ambiguity head-on (source: news.thestaer.com). Instead of waiting for courts to interpret existing policy language after a catastrophic event, these insurers are petitioning state regulators to approve explicit AI exclusions. This action seeks to formally define AI-related damages as a distinct, non-covered peril under standard policies, akin to how nuclear or war risks are treated. This shifts the burden of AI risk squarely back onto the policyholders and forces a market-wide conversation about how to price, manage, and transfer this emerging and rapidly evolving liability.

Stakeholders

Insurers & Reinsurers: The primary actors, motivated by the need to protect their balance sheets from unpredictable and potentially unbounded losses. Their core argument is that AI risks violate fundamental principles of insurability: the risks are not easily quantifiable, the historical data needed to price them accurately is lacking, and the potential for systemic events is high. They seek regulatory approval to create a clear boundary, enabling them to develop and price specialized, separate AI insurance products with specific terms, sub-limits, and higher premiums.

Policyholders (Corporations & Public Entities): This group faces the most immediate impact. The exclusion of AI from standard policies creates a critical protection gap. Businesses across all sectors—from finance and healthcare to manufacturing and transportation—are rapidly integrating AI into core operations. They will be forced to either retain this significant risk (self-insure), which may be untenable for all but the largest corporations, or purchase new, likely expensive, specialized coverage. This will increase operating costs and may slow the pace of AI adoption, particularly for small and medium-sized enterprises (SMEs).

Regulators (U.S. State Insurance Departments & NAIC): These bodies are tasked with a dual mandate: ensuring the solvency of insurance companies and protecting consumers by ensuring the availability and affordability of necessary coverage. They must evaluate whether the insurers' claims about the 'uninsurable' nature of AI risk are valid or an attempt to shed liability for a new but manageable risk class. Their decisions will set a crucial precedent for the future of technology risk transfer.

AI Developers & Technology Providers: Companies like Microsoft, Google, and OpenAI, as well as a vast ecosystem of smaller AI firms, will be indirectly affected. If their corporate clients cannot secure insurance for using AI tools, the demand for these tools could soften. Furthermore, liability may be pushed 'upstream' onto the developers themselves through contractual obligations or litigation, increasing their own risk profile and insurance needs.

Governments & Legislators: If a significant portion of the economy becomes uninsured against a major risk, it becomes a matter of national economic security. Governments may face pressure to intervene, potentially by establishing a federal backstop or reinsurance program, similar to the Terrorism Risk Insurance Act (TRIA) in the U.S., to stabilize the market and encourage coverage availability.

The Public: Individuals harmed by AI systems—whether through an autonomous vehicle accident, a biased loan application algorithm, or a medical misdiagnosis—could find it harder to receive compensation if the responsible corporation is uninsured and lacks the resources to cover a large judgment.

Evidence & Data

The insurance industry's move is rooted in historical precedent and the unique technical characteristics of AI. The evolution of cyber insurance provides the closest parallel. Initially, cyber-related losses were often covered under CGL or property policies. As the frequency and severity of cyberattacks grew, insurers experienced massive, unanticipated losses, leading them to introduce specific cyber exclusions and develop a standalone market. The global cyber insurance market is now projected to reach over $29 billion by 2026 (source: Allianz Global Corporate & Specialty), illustrating the potential scale of a new market for AI insurance.

The technical nature of AI risk supports the insurers' case for caution. Unlike a physical asset whose failure modes are relatively understood, AI models can fail in unpredictable ways. A 2023 Stanford University report highlighted the issue of 'emergent properties' in large language models, where new, unplanned capabilities appear as models scale (source: Stanford HAI). This unpredictability makes actuarial analysis exceptionally difficult.

Furthermore, the risk of correlated failures is significant. A single popular AI model or platform, if compromised or found to have a fundamental flaw, could impact thousands of businesses simultaneously. This systemic risk potential is a key concern for reinsurers, who provide insurance for insurance companies and are essential to absorbing catastrophic losses. The global AI market's rapid growth, projected to exceed $730 billion by 2030 (source: fortunebusinessinsights.com), means the aggregate financial exposure is expanding exponentially, adding urgency to the insurers' petition.

Regulatory frameworks are also creating new liabilities that insurers are wary of. The European Union's AI Act, for example, imposes risk-based obligations on providers and deployers of AI systems, creating clearer grounds for claims when those systems cause harm (source: aiact.eu). As similar legal and regulatory frameworks are adopted in the U.S. and elsewhere, the legal basis for large-scale claims will solidify, increasing the potential for massive payouts under policies that were never priced for such risks.

Scenarios

Scenario 1: Widespread Regulatory Approval & Market Bifurcation (Probability: 65%)

State regulators, persuaded by arguments about systemic risk and the need for insurer solvency, approve the AI exclusions across most jurisdictions. This leads to the rapid formalization of a separate, specialized market for AI liability insurance. This new market is characterized by high premiums, stringent underwriting (requiring AI model audits, transparency reports, and robust risk management controls), and initially limited capacity. Large corporations with sophisticated risk management departments secure this coverage, while SMEs and startups struggle with affordability, creating a bifurcated landscape where risk tolerance dictates the pace of AI adoption. This becomes the dominant model within 2-3 years.

Scenario 2: Patchwork Regulation and Protracted Litigation (Probability: 30%)

Regulators in different states reach different conclusions. Some approve the exclusions, while others reject them or demand modifications. This creates a fragmented and uncertain legal and commercial environment. In states without explicit exclusions, legal battles ensue following AI-related incidents to determine if CGL policies must respond. This ‘silent AI’ coverage ambiguity leads to market instability, with insurers charging higher CGL premiums in litigious states or withdrawing from certain lines of business altogether. This period of uncertainty lasts for 3-5 years until court precedents and further regulatory action create more clarity.

Scenario 3: Broad Regulatory Rejection and Market Intervention (Probability: 5%)

Driven by strong political pressure to avoid hindering national competitiveness in AI, a critical mass of influential state regulators rejects the proposed exclusions. They rule that AI risk, while new, is not fundamentally different from other operational risks that CGL policies are intended to cover. Faced with covering these unpriced risks, insurers drastically raise CGL premiums for all businesses, impose restrictive sub-limits for any activity involving AI, or exit certain high-risk sectors. A major AI-related catastrophic event could lead to insurer insolvencies, forcing federal government intervention in the form of a public reinsurance backstop to prevent a market collapse.
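For planning purposes, the three scenario probabilities above can be collapsed into a single probability-weighted estimate. The sketch below is purely illustrative: the probabilities come from the scenarios, but the cost multipliers (the relative increase in a firm's total cost of AI risk transfer under each outcome) are hypothetical placeholders that a risk officer would replace with their own figures.

```python
# Probability-weighted planning estimate across the three scenarios.
# Probabilities are taken from the scenarios above; the cost multipliers
# are hypothetical assumptions for illustration only.
scenarios = {
    "bifurcated_market":      {"prob": 0.65, "cost_multiplier": 3.0},
    "patchwork_litigation":   {"prob": 0.30, "cost_multiplier": 2.0},
    "rejection_intervention": {"prob": 0.05, "cost_multiplier": 4.0},
}

# Sanity check: the scenario probabilities should sum to 1.
assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

expected_multiplier = sum(
    s["prob"] * s["cost_multiplier"] for s in scenarios.values()
)
print(f"Expected cost multiplier: {expected_multiplier:.2f}x")  # 2.75x
```

The point of the exercise is not the specific number but the discipline: even a rough weighted estimate forces explicit assumptions about each outcome's cost, which can then be debated and refined.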

Timelines

Short-Term (0-12 months): State insurance departments will conduct reviews of the proposed exclusion language. Insurers and corporate policyholder groups will engage in intensive lobbying. The first specialist AI insurance products will be launched by niche carriers and Lloyd's of London syndicates, serving as market test cases.

Medium-Term (1-3 years): The first wave of regulatory decisions will be issued. We expect key states like New York, California, and Illinois to set important precedents. The first major court cases interpreting AI liability under existing, un-amended CGL policies will likely emerge.

Long-Term (3-5+ years): A dominant market structure will have emerged, most likely aligning with Scenario 1. The market for standalone AI insurance will mature, with more sophisticated underwriting and a broader range of products. International regulatory and insurance standards will begin to converge, influenced by early decisions in the U.S. and the E.U.

Quantified Ranges

Potential AI Insurance Market Size: Based on the growth trajectory of the cyber insurance market, a standalone AI insurance market could realistically grow to between $15 billion and $30 billion in annual gross written premiums globally within a decade.

Premium Impact: Specialized AI policies could cost 200% to 700% more than the portion of a CGL premium that might have implicitly covered such risks. Premiums will be highly sensitive to the application's risk profile (e.g., AI for drug discovery vs. an autonomous weapons system).

Potential Uninsured Loss: A systemic failure of a widely used AI system (e.g., a cloud-based enterprise AI platform or a financial trading algorithm) could result in correlated losses ranging from $5 billion to $50 billion in a single event. This scale of loss would be catastrophic for many businesses if uninsured.
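To make the premium range above concrete, the short sketch below converts the 200% to 700% uplift into dollar terms. The $50,000 baseline, representing the slice of an existing CGL premium assumed to have implicitly covered AI exposure, is a hypothetical figure chosen for illustration.

```python
# Translate the 200%-700% premium uplift range into absolute dollar terms.
# baseline_allocation is a hypothetical figure: the portion of a current
# CGL premium assumed to have implicitly covered AI-related exposure.
baseline_allocation = 50_000  # USD, assumed for illustration

def standalone_premium(baseline: float, uplift_pct: float) -> float:
    """Cost of a specialized AI policy priced uplift_pct above baseline."""
    return baseline * (1 + uplift_pct / 100)

low = standalone_premium(baseline_allocation, 200)   # 150,000
high = standalone_premium(baseline_allocation, 700)  # 400,000
print(f"Specialized AI premium range: ${low:,.0f} - ${high:,.0f}")
```

On these assumptions, a risk that once cost $50,000 to cover implicitly would cost $150,000 to $400,000 as a standalone policy, which is the kind of jump that changes project economics for an SME.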

Risks & Mitigations

Risk: Innovation Drag & Competitive Disadvantage: The high cost or unavailability of insurance could cause companies, especially SMEs, to delay or abandon AI projects, putting them at a competitive disadvantage.

Mitigation: Governments could offer targeted subsidies or tax incentives for purchasing AI insurance. Industry associations could form risk retention groups (a form of group self-insurance) to pool risks and increase buying power.

Risk: Systemic Economic Instability: A major AI-driven event could cause cascading insolvencies among uninsured companies, impacting supply chains and the financial system.

Mitigation: Regulators should consider establishing a government-backed reinsurance program for catastrophic AI events, ensuring private insurers can offer coverage without risking insolvency. Mandating minimum capital reserves for companies deploying high-consequence AI systems could also be explored.

Risk: Concentration of Liability: Liability could become concentrated with a few large technology providers if they are forced to indemnify their uninsured customers.

Mitigation: Policymakers should enact clear legislation that apportions liability across the AI value chain—from data providers and model developers to deployers and end-users. This would create a more insurable and equitable distribution of risk.

Sector/Region Impacts

Sectors: The impact will be most acute in sectors with high-consequence AI applications: Autonomous Transportation (vehicles, drones), Healthcare (diagnostic and surgical AI), Financial Services (algorithmic trading, credit scoring), and Critical Infrastructure (energy grid management, defense systems).

Regions: The U.S. is the immediate focus of these petitions, and regulatory decisions there will set a global precedent. The European Union, with its AI Act already in place, will face intense pressure to align its insurance market regulations to ensure the Act's liability provisions are commercially viable. Other technology-forward regions in Asia will watch closely and likely follow the U.S. or E.U. model.

Recommendations & Outlook

For Public Sector Leaders (Regulators, Ministers of Finance & Technology):

1. Convene an Expert Task Force: Immediately establish a multi-agency group including insurance regulators, technology experts, and economists to model the systemic risk implications of a potential AI insurance gap.
2. Develop a Regulatory Roadmap: Do not approach insurer petitions on an ad hoc basis. Develop a national or state-level framework for assessing novel technology risks and guiding the evolution of insurance coverage.
3. Explore a Federal Backstop: Proactively design the architecture for a potential public-private partnership or reinsurance backstop for catastrophic AI risks. It is better to have a plan before a crisis forces a reactive and likely suboptimal solution.

For Private Sector Leaders (Boards, CFOs, Chief Risk Officers):

1. Conduct an AI Risk Assessment: Immediately audit all current and planned uses of AI to identify and quantify potential liabilities. This cannot wait for an insurance renewal.
2. Engage with Brokers and Insurers: Begin urgent discussions about the future of your coverage. Understand the timeline for potential exclusions and inquire about the emerging options for specialized AI policies.
3. Budget for Increased Risk Costs: Assume that the cost of risk transfer for AI will increase significantly. Factor this into the ROI calculations for all new AI initiatives and into overall corporate budgets.
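Recommendation 3 can be folded directly into project appraisal. The sketch below uses hypothetical figures for a single AI initiative to show how an added risk-transfer cost erodes projected ROI; every number here is an assumption, not a figure from this analysis.

```python
# ROI for an AI initiative before and after pricing in a specialized
# AI insurance premium. All figures are hypothetical.
def roi(benefit: float, cost: float) -> float:
    """Return on investment as a fraction of total cost."""
    return (benefit - cost) / cost

annual_benefit = 1_200_000  # projected value of the initiative (assumed)
base_cost = 800_000         # build and run cost, excluding insurance (assumed)
ai_premium = 250_000        # specialized AI policy premium (assumed)

print(f"ROI without insurance cost: {roi(annual_benefit, base_cost):.0%}")
print(f"ROI with insurance cost:    {roi(annual_benefit, base_cost + ai_premium):.0%}")
```

Under these assumptions the project's ROI falls from 50% to roughly 14% once the premium is included, which illustrates why insurance costs belong in the investment decision rather than in a later renewal discussion.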

Outlook:

(Scenario-based assumption) Our primary expectation aligns with Scenario 1. We believe regulators, prioritizing market stability and insurer solvency, will largely approve the proposed exclusions within the next 18-24 months. This will catalyze the creation of a dedicated AI insurance market. (Scenario-based assumption) This new market will initially be ‘hard,’ with high prices and stringent terms, creating a challenging environment for many businesses. The key variable will be the speed and sophistication with which AI risk assessment tools are developed, both by insurers for underwriting and by corporations to prove their insurability. (Scenario-based assumption) We anticipate that within five years, governments in major economies will be compelled to introduce some form of support or backstop to ensure that the benefits of AI can be broadly realized without creating an unmanageable class of uninsured corporate and public-sector liabilities.

By Lila Klopp · November 24, 2025