
Summer 2025 RT ProExec Insights

AI CONSIDERATIONS FOR MANAGEMENT LIABILITY

 

Over the past couple of years, artificial intelligence (AI) has dominated the headlines. Emerging AI tools and applications, along with the promises they offer, have challenged industry to adapt to and invest in new technologies. Forbes reports that “[g]enerative artificial intelligence … took the world by storm in 2023 and quickly moved from a futuristic business goal into a necessity for any company hoping to compete in 2024.” [1] AI technology has developed at a rapid pace, and businesses have raced to adopt AI for a variety of purposes, including increased productivity, enhanced decision-making, cost savings, and other business efficiencies. The speed with which AI is evolving, and its rapid penetration of various aspects of the economy, are notable. According to Goldman Sachs, “[t]he promise of generative AI technology to transform companies, industries, and societies is leading tech giants and beyond to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid.” [2] Further, the International Data Corporation estimates that worldwide spending on AI will more than double by 2028, when it is expected to reach $632 billion. [3] As we settle into the “AI Everywhere Era,” [4] key questions for insurance professionals include how the insurance industry is evolving to address AI exposures, and how insurance can help manage AI risk. This article will approach these questions through the lens of management liability.

 

D&O Exposures

AI-related liability for directors and officers includes regulatory actions, third-party lawsuits, and claims brought against a company by its shareholders (derivative actions). To date, regulatory actions and AI-related corporate and securities litigation have tended to focus on “AI washing,” i.e., the exaggeration of a company’s AI capabilities or prospects, along with other AI-related disclosure issues. The U.S. Securities and Exchange Commission (SEC) recently charged two investment advisers with making false and misleading statements about their use of AI, and the firms agreed to pay $400,000 in total civil penalties. [5] In recent AI-related securities lawsuits, plaintiffs have alleged false and misleading statements regarding a company’s use of AI. [6] In the derivative action context, shareholders might sue alleging harm to the company resulting from the misuse or mismanagement of AI by its directors or officers. Such suits are ripe for a coverage determination by a D&O carrier.

Looking ahead, Kevin LaCroix, an industry veteran, author of The D&O Diary blog, and Executive Vice President at RT Specialty, anticipates an uptick in AI-related corporate and securities litigation that goes beyond a company’s alleged misrepresentation of its AI capabilities or prospects to include misrepresentation of its AI-related risks, such as “allegations relating to AI misuse, the faulty deployment of AI tools, or the failure to adapt to or address the competitive urgency of AI development.” [7] LaCroix also anticipates that litigants will “seek to hold corporate managers and their employers accountable for the failure to avoid, in the deployment of AI tools, discrimination, privacy and IP violations, or consumer fraud.” [8]

As the law governing AI arguably lags behind the technology’s rapid progress, businesses are well advised to focus on the responsible use of AI within existing regulatory frameworks. Agents and brokers can expect D&O underwriters to ask, with increasing frequency, how a company is using AI in its operations, to what extent and how executives are providing oversight of AI, and what disclosures are being made about the company’s use of AI.

EPL Exposures

Currently, the most significant AI-related liability theory in the employment practices context is discrimination. AI-driven human resources tools have been employed for talent acquisition, performance monitoring, workflow optimization, training, and more. Résumé scanners have been used to prioritize certain applicants according to keywords in application documents, video interviewing software has been used to evaluate candidates, and other AI tools have been adopted to screen applicants. A concern with such AI tools is the potential for discrimination or bias arising from a flawed algorithm. In one of the first lawsuits of its kind, cloud-based enterprise resource planning software vendor Workday is facing allegations that its algorithm-based screening tools discriminated against job seekers age 40 or over. [9] Another potential source of employment practices liability introduced by AI is invasion of an employee’s privacy. Given the breadth of employee data accessible to human resources and other company personnel, proper training on the appropriate use of AI technology is important to guard against the unintentional disclosure of private information, potentially in violation of state and/or federal privacy laws. [10]

At the time of this writing, there is no federal regulatory framework addressing the use of AI in the employment process. Some states and local governments have implemented their own initiatives, but a lack of clarity and consensus leaves many businesses, especially those with operations in multiple jurisdictions, without crucial guidance as to the legal standards applicable to AI in the employment context. As the use of AI for employment-related functions increases, agents and brokers can expect EPL underwriters to ask how a company is using AI in the employment process and what controls are in place to guard against discrimination, invasion of privacy, and other employment-related exposures.

Additional AI Coverage Considerations

Intellectual Property

The ability of AI to generate text, images, and video has implications for intellectual property liability. For example, a recent copyright infringement lawsuit brought against ChatGPT parent OpenAI by The New York Times alleges that ChatGPT sometimes reproduces portions of Times articles verbatim or shares key parts of their content. [11] The lawsuit also alleges that ChatGPT “hallucinates” articles, inaccurately attributing information to the Times. In addition to alleging violation of copyright law, the Times has alleged that ChatGPT’s activity undermines the publication’s business model, which relies on subscriptions and ad revenue.

Importantly, generative artificial intelligence (GenAI) tools are often trained on copyrighted material. When companies use GenAI, there is a real possibility that the sourced material infringes a copyright held by another party. Academics, legal experts, and professionals are currently exploring whether and to what extent the use of copyrighted works to train GenAI models constitutes “fair use” under U.S. copyright law. [12] While this inquiry is primarily focused on intellectual property liability, how it is resolved may have implications for management liability policyholders. Most D&O policies contain an intellectual property liability exclusion, but the wording of this exclusion is not always absolute. Some D&O carriers offer an entity-only “IP” exclusion, or a version of the IP exclusion with A-Side and/or derivative action carvebacks.

It is not yet clear what the intersection of AI, intellectual property liability, and D&O coverage will mean for management liability insureds. Regardless, companies relying on GenAI tools should manage their risk by closely monitoring their data sourcing protocols and implementing AI-related risk mitigation strategies.

Data Privacy and Security

AI tools have the potential to improve efficiency, generate content, and support decision-making. Their applications and uses are growing by the day, across myriad industry sectors. However, data privacy and security are a challenge for companies engaging with and employing AI technology. Morgan Stanley notes that “[a]s AI evolves, concerns about data privacy and risk management for both individuals and businesses continue to grow.” [13] Morgan Stanley also notes that cybercriminals are using AI to carry out a variety of sophisticated attacks, from data poisoning to deepfakes. [14] Cyber coverage is beyond the scope of this article, but it should be considered as a risk transfer tool for businesses with AI exposures.

However, while some Cyber policies may cover certain AI-related losses (e.g., data breaches, business interruption resulting from AI-powered systems), Cyber coverage does not address the full spectrum of AI-associated risks to a business.

Returning to the management liability context, most D&O carriers do not intend for their policies to respond to losses related to data privacy and security. That said, narrow avenues for coverage may be found in a D&O policy, the most likely of which is a derivative action alleging mismanagement of data privacy and security that results in loss to the company.

Closing Thoughts

Management liability coverage for AI-related risks is complicated by the novelty of AI, the wide range of loss scenarios that can occur, regulatory and legislative frameworks that lag behind the technology, and an absence of strong actuarial data to inform coverage considerations, among other factors. Most management liability policies are currently silent on AI, and whether coverage is available for a particular matter will depend on the circumstances of the loss and the in-force policy terms and conditions in question. It is not unprecedented for the insurance industry to play catch-up to new risks, and this is likely what we will see with respect to AI. However, as reported by IRMI (Artificial Intelligence in Specialty Lines), at least one management liability insurance carrier may already be aiming to exclude “any actual or alleged use, deployment or development of Artificial Intelligence” from its D&O, EPL and/or Fiduciary coverage(s). If this trend continues, evaluating AI exclusionary language will become a necessary step for any management liability agent or broker.

For now, agents and brokers should (as always) aim to truly understand their clients’ exposures and, to the extent these include AI risk, consider how the recommended insurance program(s) do or do not address that risk. Further, as AI becomes increasingly mainstream, it will be important to review management liability terms for new AI-related exclusions and restrictions. In this rapidly developing area, the stakes are high for all parties involved in the insurance transaction.

 

The ProExec Advantage

RT ProExec is a leading specialty insurance practice focused exclusively on Executive, Professional and Transactional Liability. We provide cutting-edge product knowledge, innovative placement methodologies, and exceptional service to support retail clients and their insureds.

Why Should You Collaborate with Us?

We help our retail partners retain existing clients, win new prospects and grow their portfolios. While expert assistance from a wholesale broker can provide a notable competitive advantage anytime, it is particularly crucial during disrupted markets.

RT ProExec Delivers Market-Leading Scale and Depth

  • Dedicated industry verticals
  • Proprietary and exclusive products and enhancements
  • Creative problem solving
  • Robust educational resources and services
  • Claims advocacy and support

Contact

Email RT ProExec at rtproexecinfo@rtspecialty.com or contact your local RT ProExec broker at rtspecialty.com.

 

Sources

  1. Melendez, C. (2024, February 27). Enterprise AI Is Becoming A Business Necessity, But How Can Companies Pay For It? Forbes.
  2. Nathan, A., Grimberg, J., & Rhodes, A. (2024). Gen AI: Too Much Spend, Too Little Benefit? Top of Mind, 129. https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit
  3. International Data Corporation. (2024, August 19). Worldwide Spending on Artificial Intelligence Forecast to Reach $632 Billion in 2028, According to a New IDC Spending Guide. https://my.idc.com/getdoc.jsp?containerId=prUS52530724
  4. International Data Corporation. (2024, May 31). An Investor’s Guide to AI Everywhere – Tracking AI’s Dispersion Across the Value Chain. Retrieved May 16, 2025, from https://blogs.idc.com/2024/05/31/an-investors-guide-to-ai-everywhere/
  5. U.S. Securities and Exchange Commission. (2024, March 18). SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence [Press release]. http://sec.gov/newsroom/press-releases/2024-36
  6. Baker, H. G., & Dikkers, J. (2024, March 5). Increase in Securities Litigation and Regulatory Scrutiny Concerning Artificial Intelligence. https://www.pbwt.com/h-gregory-baker/securities-enforcement-litigation-insider/increase-in-securities-litigation-and-regulatory-scrutiny-concerning-artificial-intelligence
  7. LaCroix, K. (2025, February 23). AI-Related Risk and Regulation. The D&O Diary. http://dandodiary.com/2025/02/articles/artificial-intelligence/ai-related-risk-and-regulation/
  8. LaCroix, K. (2025, February 23). AI-Related Risk and Regulation. The D&O Diary. http://dandodiary.com/2025/02/articles/artificial-intelligence/ai-related-risk-and-regulation/
  9. Dorrian, P. (2025, May 16). Workday AI Bias Suit to Go Forward as Age Claim Class Action. Bloomberg Law.
  10. Markel, K. A., Mildner, A. R., & Lipson, J. L. (2023, September 29). AI and employee privacy: important considerations for employers. Reuters. https://www.reuters.com/legal/legalindustry/ai-employee-privacy-important-considerations-employers-2023-09-29/
  11. Reed, R. (2024, March 22). Harvard Law expert in technology and the law says the New York Times lawsuit against ChatGPT parent OpenAI is the first big test for AI in the copyright space. Harvard Law Today. https://hls.harvard.edu/today/does-chatgpt-violate-new-york-times-copyrights/
  12. Weslow, D. E., Nuzum, S., & Rigizadeh, S. (2025, May 16). Copyright Office Issues Key Guidance on Fair Use in Generative AI Training. https://www.wiley.law/alert-Copyright-Office-Issues-Key-Guidance-on-Fair-Use-in-Generative-AI-Training
  13. Morgan Stanley. (2024, September 11). AI and Cybersecurity: A New Era. https://www.morganstanley.com/articles/ai-cybersecurity-new-era
  14. Morgan Stanley. (2024, September 11). AI and Cybersecurity: A New Era. https://www.morganstanley.com/articles/ai-cybersecurity-new-era

 

This Article is provided for general information purposes only and represents RT Specialty’s opinion and observations on privacy and cybersecurity trends and does not constitute professional advice. No warranties, promises, and/or representations of any kind, express or implied, are given as to the accuracy, completeness, or timeliness of the information provided in this Article. No user should act on the basis of any material contained herein without obtaining professional advice specific to their situation.

RT ProExec is a part of the RT Specialty division of RSG Specialty, LLC, a Delaware limited liability company based in Illinois. RSG Specialty, LLC, is a subsidiary of Ryan Specialty, LLC. RT ProExec provides wholesale insurance brokerage and other services to agents and brokers. RT ProExec does not solicit insurance from the public. Some products may only be available in certain states, and some products may only be available from surplus lines insurers. In California: RSG Specialty Insurance Services, LLC (License #0G97516). ©2025 Ryan Specialty, LLC

The information contained in this material is for information purposes only. This material should not be relied on or treated as a substitute for specific advice relevant to any particular circumstances. Appropriate steps to manage any of the risks described herein will vary depending on particular circumstances. This material should be considered in addition to all other relevant information, including the advice of professional advisors, best practices suggested by relevant organizations and the requirements of any applicable policy of insurance. RT ProExec and/or RT Specialty shall not be liable for any loss alleged to relate to the provision of this material.