AI is too risky to insure, say people whose job is insuring risk
Major insurers including AIG, Great American, and WR Berkley are asking U.S. regulators for permission to exclude AI-related liabilities from corporate insurance policies. Citing the unpredictable, "black box" nature of artificial intelligence models, underwriters are signaling that the emerging risks are becoming uninsurable under standard coverage. This move could have profound implications for companies developing or deploying AI technologies.
Context & What Changed
Corporate insurance, particularly policies like Errors & Omissions (E&O) and Directors & Officers (D&O), is a bedrock of modern commerce. It allows companies to innovate and operate by transferring catastrophic, unpredictable risks to insurers. For decades, software-related risks have been understood and priced within these frameworks. However, the rapid proliferation of advanced artificial intelligence, especially generative and autonomous systems, has introduced a new class of risk that defies traditional actuarial methods. These systems can produce unexpected, harmful, or biased outputs—often referred to as 'hallucinations' or emergent behaviors—from processes that are not fully understood even by their creators, a phenomenon known as the "black box" problem.
The significant change is the formal move by major, established insurers like AIG, Great American, and WR Berkley to seek regulatory approval for broad exclusions of AI-related claims from standard corporate policies (source: techcrunch.com). This is not a tentative adjustment of premiums but a fundamental declaration that the risk profile of AI is, in their view, currently unquantifiable and potentially unlimited. This action signals a market failure: the primary mechanism for managing corporate risk is withdrawing from one of the most significant technological transformations in modern history. This shift moves AI risk from a transferable operational cost to a direct, and potentially massive, contingent liability on corporate balance sheets.
Stakeholders
1. Insurers & Reinsurers (e.g., AIG, WR Berkley, Munich Re, Swiss Re): Their primary objective is to maintain solvency and profitability by avoiding unbounded risk. By seeking exclusions, they are protecting themselves from a new generation of claims they cannot accurately price or predict, ranging from algorithmic discrimination lawsuits to catastrophic failures of AI-controlled physical systems.
2. AI Developers & Providers (e.g., OpenAI, Google, Microsoft, Anthropic): These firms are at the top of the liability chain. They face pressure to indemnify their customers, which could expose them to colossal and concentrated risk. The inability of their clients to get insurance could slow adoption of their products.
3. Corporate Adopters (All Large-Cap Industries): This group includes virtually every major company, from banks using AI for credit scoring and fraud detection to manufacturers using AI for robotic automation and supply chain optimization. Without insurance, they face a stark choice: halt or slow AI deployment, proceed with the risk uninsured, or attempt to self-insure, which is only feasible for the very largest corporations.
4. Governments & Regulators (e.g., US National Association of Insurance Commissioners, European Commission): They are now in the critical position of having to approve or deny these exclusions. Their decisions will shape the future of AI innovation. They must balance the solvency of the insurance industry against the need to foster technological progress and economic competitiveness. This will likely force them to accelerate the development of comprehensive AI liability legislation.
5. Public Sector & Infrastructure Operators: Government agencies and operators of critical infrastructure (e.g., energy grids, water systems, transportation networks) are increasingly reliant on AI. The lack of insurance coverage presents a direct threat to public safety and national security, as these entities may be forced to operate high-consequence systems without a financial backstop for potential failures.
Evidence & Data
The core evidence consists of the filings by insurers seeking to add AI exclusion clauses to their policies (source: Financial Times). The rationale provided by underwriters is the inherent unpredictability of AI outputs. This is not a theoretical concern: there are already documented cases of AI systems producing harmful or biased results, such as chatbots generating malicious code or loan-approval algorithms exhibiting discriminatory patterns.
The scale of the economic activity at risk is immense. The global AI market was valued at approximately USD 241.8 billion in 2023 and is projected to grow to over USD 738.8 billion by 2030 (source: fortunebusinessinsights.com). This growth is predicated on the ability of companies to deploy these technologies safely and with manageable liability.
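As a rough cross-check, the implied compound annual growth rate follows directly from the two cited endpoints. A minimal sketch in Python, using only the numbers quoted above:

```python
# Implied compound annual growth rate (CAGR) from the cited market figures.
value_2023 = 241.8   # global AI market, USD billions (source above)
value_2030 = 738.8   # projected market size, USD billions (source above)
years = 2030 - 2023  # seven-year horizon

cagr = (value_2030 / value_2023) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 17.3% per year
```

A sustained growth rate of this magnitude is exactly what a broad withdrawal of insurance coverage would put at risk.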
Regulatory frameworks are struggling to keep pace. The European Union's AI Act is the most comprehensive attempt to date, establishing a risk-based approach and assigning liability, particularly for 'high-risk' AI systems (source: ec.europa.eu). Even so, it does not fully solve the problem of insuring against catastrophic, unforeseen 'black swan' events. The actions by US insurers highlight a gap between the pace of technological development and the ability of legal and financial systems to adapt.
This situation has historical parallels. The initial rollout of cyber risk and environmental liability coverage faced similar challenges of undefined risk and lack of historical data. This led to the creation of specialized insurance products and new regulatory regimes, such as the US Superfund law for environmental cleanup. The AI insurance crisis is likely to follow a similar, albeit accelerated, trajectory.
Scenarios & Probabilities
Scenario 1: Regulatory Patchwork and Market Fragmentation (Probability: 60%)
In this scenario, US regulators approve some forms of AI exclusions, leading to a fragmented market. A small, specialized insurance market emerges, offering high-cost coverage for narrow, well-defined, and auditable AI applications (e.g., a specific diagnostic tool in healthcare). However, broad, general-purpose AI systems remain largely uninsurable under standard policies. Corporations respond by limiting AI use in high-stakes environments, expanding their internal risk-management and legal teams, and demanding indemnification from AI vendors. Innovation slows in critical sectors like finance and infrastructure due to liability fears. The result is a complex compliance environment for multinational corporations operating across jurisdictions with different rules.
Scenario 2: Government Intervention and Public-Private Backstops (Probability: 30%)
Recognizing that AI is critical national infrastructure and a key driver of economic competitiveness, governments in the US and EU intervene directly. They establish clear legislative frameworks that cap liability for certain types of AI failures, similar to the Price-Anderson Act for the nuclear industry or the PREP Act for pandemic countermeasures in the US. This involves creating public-private reinsurance pools or government backstops for catastrophic AI events. This intervention stabilizes the insurance market and allows for the continued deployment of AI, but it socializes the ultimate tail risk, placing a potential future burden on taxpayers.
Scenario 3: Market-Driven Correction and 'Insurable AI' (Probability: 10%)
The insurance gap proves too large and government intervention too slow. The market seizes up. Faced with stalling adoption and massive contingent liabilities, the technology industry is forced to pivot. The focus shifts from raw capability enhancement to developing ‘Insurability by Design’. AI developers prioritize building models that are more transparent, auditable, and predictable, with robust containment mechanisms. This slows the pace of cutting-edge development but fosters a new generation of safer, more reliable AI systems. A new standards-and-certification industry emerges to validate these ‘insurable’ models, eventually allowing the insurance market to re-engage with confidence.
Timelines
Short-Term (0-12 months): State insurance commissioners in the US will make initial rulings on the proposed exclusions. Corporations will see AI exclusion clauses appear in their 2026/2027 policy renewals. Demand for legal and consulting services to assess and mitigate uninsured AI risk will surge.
Medium-Term (1-3 years): The first major court cases testing AI liability in an uninsured context will set critical legal precedents. Specialized AI insurance products with limited coverage and high premiums will become available. Legislatures in the US and EU will actively debate and draft comprehensive AI liability laws.
Long-Term (3-5+ years): A new market equilibrium, shaped by one of the scenarios above, will be established. AI risk management will become a formal, board-level governance function, analogous to cybersecurity, with established frameworks, standards, and C-suite responsibility (e.g., Chief AI Officer).
Quantified Ranges
Uninsured Liabilities: Potential corporate losses from a single AI-related event could range from USD 10-50 million for a significant discrimination lawsuit to over USD 10 billion in a scenario involving the failure of an AI-managed critical infrastructure system (e.g., a regional power grid) (author's assumption based on comparable systemic failures).
Cost of Coverage: If and when specialized AI insurance becomes available, premiums for high-risk applications could be 200% to 500% higher than for traditional software E&O policies, reflecting the uncertainty and potential for catastrophic loss (author's assumption based on the early market for cyber insurance); a worked sketch of this arithmetic follows this list.
Economic Impact: In the 'Regulatory Patchwork' scenario, the friction and uncertainty could reduce projected AI-driven productivity gains in OECD economies by 15-30% over the next five years (author's assumption).
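For illustration only, the ranges above can be combined into rough expected-loss terms. The sketch below uses the standard expected-loss identity (expected annual loss = event frequency x severity); the event frequencies are hypothetical assumptions by this author, not figures from any source. Note also that premiums "200% to 500% higher" imply multipliers of 3x to 6x the baseline, not 2x to 5x.

```python
# Illustrative arithmetic only: the event frequencies below are hypothetical
# assumptions, not figures from the source.

# Severity ranges from the text, in USD.
discrimination_suit = (10e6, 50e6)      # significant discrimination lawsuit
infrastructure_failure = (10e9, 10e9)   # "over USD 10 billion" used as a floor

# Hypothetical annual event frequencies (assumptions for illustration).
freq_suit = 0.05    # one significant suit per 20 firm-years (assumed)
freq_infra = 0.001  # one catastrophic failure per 1,000 system-years (assumed)

def expected_loss(freq, severity_range):
    """Expected annual loss = frequency x midpoint of the severity range."""
    low, high = severity_range
    return freq * (low + high) / 2

print(f"E[loss], discrimination suits: ${expected_loss(freq_suit, discrimination_suit):,.0f}/yr")
print(f"E[loss], infrastructure failure: ${expected_loss(freq_infra, infrastructure_failure):,.0f}/yr")

# Premium arithmetic: "200% to 500% higher" than a baseline E&O premium
# means multipliers of 3.0x to 6.0x, not 2.0x to 5.0x.
baseline_premium = 100_000  # hypothetical annual E&O premium, USD
for pct_higher in (2.0, 5.0):
    print(f"{pct_higher:.0%} higher -> ${baseline_premium * (1 + pct_higher):,.0f}")
```

Even under these deliberately conservative assumed frequencies, the catastrophic tail dominates: a single infrastructure-scale failure outweighs decades of litigation-scale losses, which is precisely the shape of risk insurers say they cannot price.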
Risks & Mitigations
Risk: Systemic economic disruption caused by a major, uninsured AI failure leading to the bankruptcy of a systemically important company.
Mitigation: Governments must urgently clarify liability rules. Corporations must implement rigorous AI governance frameworks, including mandatory human oversight, kill switches for autonomous systems (see the sketch after this list), and independent third-party audits.
Risk: Loss of national competitiveness as innovation is stifled by legal uncertainty and lack of insurance.
Mitigation: Creation of regulatory 'sandboxes' to allow insurers and tech firms to pilot new coverage models. Public-private partnerships to develop standards for auditable and transparent AI.
Risk: Unfair market concentration, where only the largest tech companies can afford to self-insure and indemnify customers, crowding out smaller innovators.
Mitigation: Regulatory oversight of AI vendor contracts to ensure fair liability clauses. Potential creation of mutualized insurance pools for small and medium-sized enterprises.
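To make the "human oversight and kill switch" mitigation above concrete, here is a minimal sketch of an approval gate for autonomous actions. Every name in it (ActionRequest, require_human_approval, the consequence tiers) is a hypothetical illustration, not an established framework; a production system would wire this into real operator consoles and audit logs.

```python
# Minimal human-in-the-loop gate with a kill switch. Every name here is a
# hypothetical illustration, not an established framework or library.
from dataclasses import dataclass

@dataclass
class ActionRequest:
    system: str        # which AI system proposed the action
    description: str   # what the action would do
    consequence: str   # "low", "medium", or "high"

KILL_SWITCH_ENGAGED = False  # global halt flag, settable by human operators

def require_human_approval(request: ActionRequest) -> bool:
    """Block high-consequence actions until a human explicitly approves."""
    if KILL_SWITCH_ENGAGED:
        return False  # the kill switch halts everything, no exceptions
    if request.consequence != "high":
        return True   # low/medium-consequence actions proceed automatically
    # High-consequence actions pause for a human decision (console stub here).
    answer = input(f"Approve '{request.description}' from {request.system}? [y/N] ")
    return answer.strip().lower() == "y"

# Usage: the autonomous system calls the gate before acting.
req = ActionRequest("grid-balancer", "shed load in sector 7", "high")
if require_human_approval(req):
    print("Action executed.")
else:
    print("Action blocked and logged for audit.")
```

The design choice that matters for insurability is that denial is the default: a high-consequence action proceeds only on an explicit, auditable human "yes".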
Sector/Region Impacts
Sectors: The most acutely affected sectors will be those with high-consequence applications: financial services (algorithmic trading, credit), autonomous transportation, healthcare (AI diagnostics, robotic surgery), and energy/utilities (grid management).
Regions: The United States, with its highly litigious environment and central role in AI development, will be the epicenter of this crisis. The European Union, through its AI Act, has a regulatory head start but may find its rules are too static for the rapidly evolving technology. China's state-centric model may allow it to direct both its tech giants and state-owned insurers to manage the risk centrally, potentially creating a competitive advantage if Western markets falter.
Recommendations & Outlook
For Public Finance & Government Agencies:
Immediately establish a cross-agency task force (including Treasury, Commerce, and regulators) to develop a national AI liability framework. (Scenario-based assumption: The 'Government Intervention' scenario will become necessary for critical infrastructure sectors).
Audit all public-sector use of AI to identify uninsured liabilities and implement immediate risk-containment protocols.
For Infrastructure Operators:
Conduct an urgent review of all AI/ML systems controlling operational technology. Classify them by risk level and ensure robust human-in-the-loop controls for all high-consequence systems.
Engage with regulators and industry peers to advocate for clear liability safe harbors for operators who adhere to best-practice standards.
For Corporate Boards & CFOs:
Mandate a comprehensive inventory and risk assessment of all AI applications across the enterprise. This risk must be quantified and disclosed to the board and investors (a minimal sketch follows these recommendations).
Directly engage with insurance providers to understand the scope of forthcoming exclusions and explore alternative risk transfer solutions, including captive insurance programs or parametric bonds.
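A minimal sketch of the inventory-and-quantification exercise recommended above, assuming hypothetical systems and loss estimates; the fields shown (risk tier, worst-case loss, insured status) are illustrative choices, not a reporting standard:

```python
# Hypothetical AI risk-register sketch: inventory entries with quantified,
# uninsured exposure rolled up for board reporting. All figures illustrative.
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    name: str
    business_function: str
    risk_tier: str           # "high", "medium", or "low"
    est_max_loss_usd: float  # worst-case single-event loss estimate
    insured: bool            # whether current policies still cover it

register = [
    AISystemEntry("credit-scoring-model", "lending", "high", 50e6, False),
    AISystemEntry("support-chatbot", "customer service", "medium", 5e6, False),
    AISystemEntry("doc-summarizer", "internal ops", "low", 0.5e6, True),
]

uninsured_exposure = sum(e.est_max_loss_usd for e in register if not e.insured)
high_tier_gaps = [e.name for e in register
                  if e.risk_tier == "high" and not e.insured]

print(f"Total uninsured worst-case exposure: ${uninsured_exposure:,.0f}")
print(f"High-tier systems without coverage: {high_tier_gaps}")
```

A rollup like this gives boards a single uninsured-exposure figure to track as exclusion clauses take effect at renewal.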
Outlook:
The era of treating AI as just another form of software is over. The insurance industry’s retreat is a rational response to a new species of risk and serves as a critical, market-based warning. (Scenario-based assumption) We project a 24-36 month period of significant turbulence, legal challenges, and regulatory action. The most probable outcome is a hybrid of Scenarios 1 and 2, resulting in a complex, fragmented market where high-risk AI applications will require government-backed insurance schemes, while lower-risk applications will be covered by a new, expensive, and highly specialized insurance market. Companies that proactively build robust AI governance and transparency frameworks will be best positioned to navigate this new reality and secure a competitive advantage.