Zywave Cyber Front Page News
- Thursday, September 19, 2024
AI Risks, Litigation and Insurance Claims
By Vincent J. Vitkowsky, Vitkowsky Law
AI is evolving at breathtaking speed. As this paper is being written in late August 2024, the tech world is abuzz about Elon Musk’s release of his new Chatbots Grok-2 and Grok-2 mini, which are available to paid subscribers of his platform X (formerly known as Twitter). They include a tool called Flux, which can make remarkably realistic deepfakes, with very few “guardrails.” It generates celebrity and copyrighted images in various contexts, often humorous, and at times in compromising positions. Early testers have found some safeguards on generating images with nudity, but not with lingerie. Some prompts for images of criminal activity fail, but rewording the prompt can yield different results. Musk has said “Grok is the most fun AI in the world.” Others consider it totally reckless and irresponsible.[1]
The next versions of phones from Google and Apple will have advanced AI features, including tools to manipulate images. As some commentators have cautioned, this could “erode trust in everything.”[2]
Another key development is the appearance of Multimodal Large Language Models. These are the next generation of Chatbots that can read human emotions. They do this by reading sentiment in the text of prompts, vocal inflections (if the user speaks through a microphone), and facial cues (if the user interacts through video). In May 2024, OpenAI released GPT-4o, which has these capacities. There have been concerns that this enables targeted manipulation of users, interactively persuading them with a skill that meets or exceeds that of the greatest salesperson, vastly increasing the power to manipulate.[3]
More broadly, Bloomberg reports that OpenAI has internally shared its definitions for five levels of artificial general intelligence, as follows:
Chatbots: AI with conversational language
Reasoners: human-level problem solving
Agents: systems that take actions
Innovators: AI that can aid in invention
Organizations: AI that can do the work of an organization[4]
Bloomberg also reports that OpenAI researchers believe they are closing in on level 2, human-level reasoning. Of course, it is not precisely clear what terms like “reasoning” and “human-level problem solving” would mean in this context. But it is clear that AI is taking a step closer to “HAL,” the sentient computer in 2001: A Space Odyssey.
Finally, the hallucination problem has not been solved. LLMs still cannot distinguish truth from falsehood and continue to fabricate responses.[5]
Key Risks From AI
The risks from AI fall into categories of subject matter, exposures, and use cases that lack precise boundaries and overlap one another, so any definitional construct is somewhat subjective. The categories below attempt to present them in a coherent and useful grouping.
Algorithmic Performance. In the most basic exposure, an AI system may not perform as expected or represented, causing all manner of damage. Here the “Black Box” aspect of AI systems can be a complicating factor. A related concept is Algorithmic Liability, which arises when a system functions properly, but the decisions or actions taken cause some kind of damage.
Antitrust and Anti-Competitive Behavior. Risks arise when automated algorithms are used in pricing, marketing, and supply chain management.
Bias and Discrimination. Training data sets and specific algorithms can create bias and discrimination in areas such as employment, housing, and lending determinations.
Copyright and Intellectual Property Infringement. Risks arise when the training data comes from unauthorized sources or includes protected intellectual property. Many of the current matters in litigation are directed toward AI system providers themselves for copyright violations. Others are directed toward users for copyright, trademark, or trade secret violations.
Cybersecurity and Privacy Risks. AI expands the scope of cybersecurity and privacy risks profoundly. Generative AI can be used to write code to facilitate data breaches, ransomware, network attacks, and other cyber risks. This expands the range of potential threat actors enormously. It can hyper-charge social engineering schemes, tailoring them with remarkable precision to specific individuals.
Employees who use AI systems can accidentally provide access to their own company’s confidential information or proprietary trade secrets. This happens when they put such information into a prompt for a publicly available Chatbot. There is a well-known report of software engineers who entered their unreleased proprietary code into ChatGPT, thereby inadvertently making their code available to all ChatGPT users.[6]
Deepfake Risks. Deepfakes are among the most sophisticated and effective AI risks. Deepfakes use existing images, videos, and voice samples on the internet to impersonate a person or fabricate a scenario that can lead to harm through, for example, fraud or blackmail. They have become remarkably realistic. There is a prominent report of an employee who believed she was on a video call with her chief financial officer and company directors and was duped into providing $25 million to cybercriminals.[7]
Defamation/Right of Publicity. Deepfakes and other AI uses can generate defamatory comments, images, or references. Alternatively, misuse of a person’s image or voice can constitute violations of that individual’s right of publicity.
Directors & Officers and Management Liability. Risks can arise from claims of malfeasance with respect to the implementation and use of AI that leads to employment discrimination, IP infringement, breaches of privacy, or other losses. An important open question is how breach of fiduciary duty standards will apply when AI is involved in decision-making. There is also the risk of drops in stock prices arising from AI failures.
Alternatively, an AI-adjacent risk is exposure from misleading statements about the efficacy of the AI systems used, or the extent to which they are relied upon. This is referred to as “AI-Washing.” The SEC has brought several enforcement actions based on such statements.
Healthcare Risks. Healthcare is an especially exposed industry. In addition to the prominent privacy issues, there will be questions from the increased use of AI in healthcare diagnostics. Losses may come from undue reliance on AI, or alternatively, failure to use AI-diagnostic tools. Studies have indicated they can be more reliable than human diagnoses in some instances.
Physical Risks. AI systems can cause property damage and bodily injury. When incorporated into products or manufacturing processes, AI can and will malfunction. To date, most of the litigation on this front has come from AI-driven or assisted autonomous vehicles.
Regulatory Risks. A regulatory patchwork is starting to emerge. Most US states are considering statutes regulating aspects of the use of AI.
As of late July 2024, only one “comprehensive” statute has been enacted, the Colorado AI Act, which will go into effect in February 2026. It addresses developers and deployers of high-risk AI systems, and it aims to prevent algorithmic discrimination and ensure transparency.
Another statute with widespread application is the Utah Artificial Intelligence Policy Act, which became effective May 1, 2024. It requires disclosure that a person is interacting with generative AI, and not another person, if the user asks or prompts for that information. It also requires persons in “regulated occupations,” i.e., those requiring a license or certification to practice, to disclose any generative AI interaction or involvement in materials at the beginning of the interaction.
Tennessee enacted the ELVIS Act (an acronym for the Ensuring Likeness Voice and Image Security Act), which went into effect on July 1, 2024. It protects musicians against unauthorized audio deepfakes and voice cloning, providing criminal penalties as well as civil remedies and a private right of action.
Other states have statutes governing aspects of AI. The International Association of Privacy Professionals maintains a US State AI Governance Legislation Tracker, which is a useful resource for monitoring state legislative developments.[8]
These statutory requirements are in addition to unfair trade practices, consumer protection, and anti-bias laws in various states, most of which apply to AI systems.
One notable local regulation is New York City Local Law 144, regarding automated employment decision tools (the AI Hiring Law), which took effect in January 2023.
At the federal level, the Federal Trade Commission released guidance saying that Section 5 “unfair and deceptive” practices can include the use of AI to make decisions about consumers, as well as any incorrect statements that AI-generated decisions or products have been made by humans.
Many AI tech regulation bills have been proposed in Congress, but none have passed. None are likely to pass until 2025, if ever.
The European Union has enacted a comprehensive act known as the “EU AI Act.” Most of its provisions will become applicable 24 months after entry into force, which follows its publication in the EU Official Journal. This is an important development with potentially global application, worthy of an extended discussion that is beyond the scope of this paper.[9]
In general, as of mid-summer 2024, the key brokers are reporting that most GenAI-related claims arise through social engineering, which leads to fraudulent transfers and creates many vectors for ransomware attacks.
AI-Related Litigation So Far
George Washington University Law School (“GWU”) has created an online database of AI-Related Litigation.[10] It addresses the legal posture of the claims, not the insurance aspects.
The database identifies known ongoing and completed litigation that broadly involves or is relevant to AI, including machine learning. It identifies cases in federal and state courts throughout the United States, enforcement actions by US government agencies, and some cases from other countries, including Australia, Canada, China, and the United Kingdom. It is not totally comprehensive, but it includes most known cases and has a broad scope. As GWU describes it, the cases involve “everything from algorithms used in hiring and credit and criminal sentencing decisions to liability for accidents involving autonomous vehicles.” It links to the complaints or, if rendered, the decisions.
As of August 24, 2024, the database included 193 cases. It is searchable by Keyword, Algorithm Name, Application Area, Issues, Cause of Action, Caption, Jurisdiction Filed, and Date Action Filed. It is a tremendously useful resource.
A sampling of the litigated cases of greatest interest to insurers is set forth below. Many are representative of other cases, while others are one-offs. The complete citations can be found by referring to the database.
Altman et al v. Caesars Entertainment, Inc. et al alleges that casino hotel operators artificially boosted room rental rates using pricing algorithms in violation of US antitrust law.
Baker v. CVS Health Corporation alleges that an employment candidate was subjected to an AI-powered lie detector test without appropriate notification.
Banner v. Tesla and Kim v. Tesla allege that overselling of an autopilot function led to fatal crashes. There are several other actions against Tesla.
Barrows et al v. Humana Inc. alleges that plaintiffs who were receiving benefits had post-acute care improperly terminated because an insurer relied on AI tools to deny claims. There are similar actions against several insurers.
C.S. et al v. Saiki alleged that a new assessment algorithm vastly cut the hours indicated as needed by Oregon residents relying on in-home attendants. The court ordered restoration of the previous algorithm while Oregon develops another one.
Equal Employment Opportunity Commission v. iTutor Group Inc. alleges that hiring software automatically rejected older applicants. The case was settled. It is likely the first of many in this vein.
Huskey v. State Farm Fire & Casualty Company alleges that biased algorithms disproportionately subject claims made by Black policyholders to “greater suspicion” and “administrative process and delay” compared to those made by white policyholders.
In re BlueCrest Capital Mgmt Ltd. The SEC found violations for omissions and misstatements to potential and existing investors and independent directors about AI trading, which underperformed compared to live traders. The company also misrepresented how heavily it used the algorithm, telling investors it was merely experimental. The SEC has brought several proceedings with similar allegations.
In Re Clearview Litigation. This is a consolidation of 10 class actions brought against Clearview AI, Inc. for scraping billions of facial images off the web and selling them. Claims included violations of the Illinois Biometric Information Privacy Act (BIPA), unjust enrichment, and civil rights violations. The case was settled in June 2024.
In Re RealPage, Inc. Rental Software Antitrust Litigation. This is a consolidation of more than four dozen cases filed against a real estate software and data analytics company for allegedly facilitating a data-driven rental property cartel with landlords and property managers. Allegations include that the use of centralized pricing algorithms inflated prices, costing renters millions of dollars. In late August 2024, the US Department of Justice and the Attorneys General of eight states brought their own antitrust action.
Main Sequence, Ltd. v. Dudesy, LLC alleged that podcasters used AI to create a script and voice imitating the late George Carlin for a comedy routine.
Moffatt v. Air Canada is a Canadian case finding Air Canada liable for misinformation given by an AI Chatbot to a consumer about the airline’s policy for discounted bereavement fares.
Pandolfi v. AviaGames, Inc. alleges that AI was used to dupe gamers out of nearly $1 billion by leading them to believe they were competing against humans when they were actually competing against bots programmed to win.
The Most Affected Industries
Both Munich Re and Swiss Re have produced thoughtful, thorough, and insightful White Papers on AI’s impact on insurance.
In May 2024, the Swiss Re Institute released Tech-Tonic shifts: How AI could change industry risk landscapes.[11] It focuses on identifying and assessing the industries most affected by AI now, and those likely to be most affected in the next 8 to 10 years.
Swiss Re identified the industries with the greatest current exposures, measured by reference to severity and probability, ranked in order as follows:
IT services
Energy and utilities
Health and pharma
Other services/industries (retail, hospitality, real estate and legal services)
Mobility and transportation
Financial and insurance services
Government and education
Manufacturing
Media and communications, and
Agriculture, food, and beverages.
Swiss Re identified the industries with the greatest likely exposures in 8 to 10 years, measured by reference to severity and probability, again in rank order, as follows:
Health and pharma
Mobility and transportation
Energy and utilities
IT services
Media and communications
Government and education
Financial and insurance services
Manufacturing
Agriculture, food, and beverages, and
Other services/industries.
Swiss Re synthesized the various types of AI risks into six categories. These are (1) data bias or lack of fairness, (2) cyber-related risks, (3) algorithmic and performance risks, (4) lack of ethics, accountability, and transparency risk, (5) intellectual property risks, and (6) privacy risks. It described the risk profiles of the various industries. This discussion is highly recommended, and would be useful in designing policies for significant insureds.
The Munich Re Paper is Mind the Gap: A US-focused analysis of AI liability risks and the implications for insurance.[12] It provides an analytical overview assessing AI exposure, identifying coverages and gaps, and providing some interesting factual scenarios. This, too, is highly recommended. It is part of a series of cutting-edge White Papers Munich Re has released on AI in the insurance industry.
Insurance Claims from AI-Related Litigation
An AI system is basically complex software, so some types of insurance would seem to naturally encompass many AI claims. These include Cyber, Technology E & O, and Directors & Officers insurance. Many other policies might cover AI claims, unless specifically excluded. Thus, there could be “Silent AI” coverage.
Cyber Insurance
Cyber insurance policies will generally respond to AI-enabled or related claims arising from the following losses:
Ransomware attacks
Business interruption from cyber events
Network security attacks
System failures resulting from AI-system malfunctions
Cyber privacy liability
Data breaches and data breach liability
Failure to properly collect, use, store, access, or share data or confidential personal information
Notification and credit monitoring
Deepfake cyber extortion
Deepfake business email compromises, including phishing and vishing
In many policies, regulatory liability, including fines and penalties; note, however, that case law across the US varies by jurisdiction and is based on a case-specific analysis of the nature of the fine or penalty and the state’s public policy on punitive damages
Some policies cover damages from breach of another entity’s confidential or private information
In some policies, reputational damage
Some (but not all) policies cover claims of liability assumed by contract
But note that most policies do not extend to first-party coverage for accidental or unauthorized disclosure of the insured’s own source code or other proprietary and confidential information
Comprehensive Standalone Cyber Insurance with Media Liability
These policies will generally also respond to claims arising from:
Invasion of privacy
Infringement of the right of publicity
Defamation, including libel and slander, product disparagement and trade libel
Infringement of intellectual property rights
Piracy, plagiarism, and misappropriation of ideas
Claims based on user generated content
Some policies cover false advertising
Some policies cover software copyright infringement
Some policies cover economic harm to third parties relying on false or erroneous content
Technology E & O
Many third-party claims alleging failures in performing technology services or in the AI system itself
Note, in some cases, the “black box” nature of AI will complicate claims for under-performance
May also extend to IP Infringement claims based on the unlicensed use of certain information in AI training data
Most policies include breach of contract claims
Most policies exclude bodily injury or property damage (especially significant in the agriculture, energy, healthcare, manufacturing, and transportation sectors)
Could include claims for drops in stock prices following allegations of misrepresentation of capabilities of the AI systems utilized, or the extent of reliance on AI
Data breaches are not covered, unless there is a carve-back
Other policies are or may be implicated by AI-related losses, subject to broad electronic data and related exclusions.
CGL
May cover third-party bodily injury
May cover third-party property damage
May provide product liability coverage when an AI system is incorporated into physical products, such as cars, industrial machines, robots, and manufacturing equipment
May provide coverage for third-party bodily injury or product damage arising out of a service which uses an AI system
But note that coverage can be defeated by professional liability exclusions, especially those that exclude claims for bodily injury or property damage arising out of the selling, licensing, or furnishing of software.
Coverage B, Personal and Advertising Injury coverage, may apply, subject to broad exclusions for electronic and internet-related activities, to AI-related losses from:
Oral or written publication, in any manner, of material that slanders or libels a person or organization or disparages a person’s or organization’s goods, products, or services
Oral or written publication, in any manner, of material that violates a person’s right to privacy
The use of another’s advertising idea in the insured’s own advertisement, or
Infringement of another’s copyright, trade dress, or slogan in the insured’s own advertisement.
Commercial Property
May cover the insured’s own property damage
May cover direct damage to an insured’s building or contents caused by a physical peril involving an AI system
Business interruption coverage may exist when there is “direct physical loss of or damage to” insured property. This issue can be hotly contested, although courts in some jurisdictions have held that the loss of use or functionality of computer networks may constitute “physical loss.”
Crime
Crime policies are especially vulnerable to losses from deepfakes
Many crime policies exclude cyber-related risks, but others have been modified to cover them
The Crime Part of Comprehensive Standalone Cyber Policies affords coverage for many cyber-related losses
Unless excluded, there could be “Silent AI” coverage for these claims.
Brokers report that AI affirmative endorsements on crime policies are starting to appear in the market, but are not yet commonplace
Employment Practices Liability
May cover algorithmic bias exposures concerning discrimination in hiring, promotion, and termination claims
Intellectual Property
May cover litigation expenses when pursuing infringers
May cover litigation expenses when defending claims of infringement
Professional Liability
May cover claims based on erroneous advice, misinterpretations, and unexplainable decisions or diagnoses
Addressing AI Risks in Insurance Policies
AI risks should be addressed, with coverage either granted or excluded, whenever possible.
Exclusions
So far, widespread exclusions have not appeared. But there is at least one optional endorsement to media coverage that excludes “content created or posted for any third party that you created using generative artificial intelligence in the performance of your services.” Generative AI is defined to mean “content created through the use of any artificial intelligence application, tool, engine, or platform.”
Grants
On the other side, at least one MGA writing cyber insurance has introduced an Affirmative Artificial Intelligence Endorsement. Its press release says that it “expands the definition of a security failure or data breach to include an AI security event, where artificial intelligence technology caused a failure of computer systems’ security.” It “also expands the trigger for a funds transfer fraud event to include fraudulent instruction transmitted through the use of deepfakes or any other artificial intelligence technology.”[13]
Other insurers may explicitly offer coverage through enhancements via endorsements to policies that might otherwise leave “Silent AI” coverage in doubt. Some of these endorsement forms have been developed and are promoted by the leading insurance brokers.
Specialized Products
Given the scope and complexity of AI-system capacities, most insurers believe it would be difficult for a single comprehensive policy, or a single class of business, to emerge (but at least one major reinsurer disagrees).
There are some AI policies directed to individual product lines. One of the first companies to announce a semi-comprehensive policy is Vouch Insurance. Its insureds are AI startups, and it provides them broad affirmative coverage.
Other policies focus more narrowly on particular aspects. Several address performance guarantees of AI providers. For example, Swiss Re and Chaucer have recently backed Armilla Assurance in what it calls a product warranty, insuring that AI systems perform the way their sellers promised.[15] Since 2018, Munich Re has offered insurance for companies selling AI services.[16]
The Need To Refine Existing Policies
AI presents sufficient complexities to cause concern about coverage intent or gaps in many traditional policies. To remove doubts for both insurers and insureds, entire product lines must be reviewed and refined. A detailed, multi-textured and nuanced analysis is essential.
Particular subjects such as coverage for content moderation, facial recognition applications, and misleading chatbot responses require specific attention. Conscious decisions also should be made about whether the intention is to cover risks such as hallucinations, false information, bias, privacy, and IP violations. Finally, close attention is required to potential AI regulatory, ethics, and compliance coverage.
Portfolio reviews are essential. In view of the rapid evolution of AI systems, these reviews must be ongoing.
Author’s Note: This is a continuation of my series of White Papers on the implications of artificial intelligence for the insurance industry. The first Paper, Artificial Intelligence, Legal Liability, and Insurance, broadly described the background of Artificial Intelligence. It identified several challenging considerations, such as the Black Box issue, i.e., for many AI systems, it is not possible to trace the process and logic that gave rise to a particular outcome. It reviewed the general emergence and awareness in 2023 of Generative AI systems that can create new text, code, images, video, audio, or synthetic data. It identified some of generative AI’s unique concerns, such as hallucinations, inability to conduct reasoned analysis, and lack of moral, practical, or strategic judgment. The first Paper also reviewed the legal framework for AI failures in broad terms, identifying some of the potential claims and liable parties, and the potentially applicable bodies of law.
Vince Vitkowsky is an attorney in New York. His practice includes all aspects of cyber risks, liabilities, insurance, and litigation. Vince assists insurers in many lines of business, including cyber, tech E&O, professional liability, directors & officers, CGL, and property. He assists in complex claim evaluations, and at times, the defense of insureds in complex matters. He also assists in portfolio reviews, product development and drafting policies and endorsements. He can be reached at VJV@Vitkowskylaw.com.
Copyright 2024 by Vincent J. Vitkowsky. All rights reserved.
Please note that this White Paper is for informational purposes only and is not comprehensive. It does not constitute the rendering of legal advice or opinions on any subject matter. The distribution of this White Paper to any person does not constitute the establishment of an attorney-client relationship.