The rise of AI was one of the hottest topics of 2024, and it comes as no surprise that investors and regulators have focused on how companies are addressing the new technology.
As the use of AI proliferates at public companies – as well as at their competitors, customers, vendors and other relevant third parties – management needs to consider the impact on its company’s SEC disclosure obligations. Boards likewise need to ensure appropriate oversight of their respective companies’ expansion of AI integration.
The SEC has indicated that it may heighten its focus on AI-related disclosures and has warned against ‘AI washing’ – which involves exaggerating or making false claims with respect to a company’s AI capabilities or use. The agency brought several enforcement actions in 2024 and early 2025 based on AI-washing allegations. It has also warned against using generic or boilerplate language in AI-related risk disclosures.
As a result, companies should carefully consider whether AI-related risk factors are appropriate for their business. If they are, companies should consider making risk factor disclosures that are appropriately tailored to their specific business and operations.
Risk-factor disclosure
For example, if companies are using proprietary AI or relying on third-party service providers, it may be appropriate to discuss the risks associated with either approach. Companies that are developing their own AI may consider risks associated with this application and the resources required to advance it. Companies also may choose to consider risks associated with generative AI in terms of security, data privacy and reliability.
In addition to risk-factor disclosure, companies should consider whether other disclosures need to be updated, such as management's discussion and analysis and disclosures concerning regulatory and cyber-security oversight.
Boards face new challenges in overseeing a company’s expanded use of, and exposure to, AI. They should consider whether oversight should be handled by the full board, by an existing committee such as the audit committee, or by a new standing committee.
The appropriate board governance structure will depend on the level of oversight needed given a particular company's depth of involvement in AI-related activities. The board's oversight structure should provide for regular reviews of AI developments, particularly given the constantly evolving regulatory landscape. Boards also should understand management's governance structure for overseeing AI implementation and expansion.
In addition, boards should understand how AI is being developed and deployed within the company and the associated potential risks and benefits. This should include an understanding of management’s approach to AI-related risk assessments, such as any third-party frameworks that management may be using.
Understanding over months and years
The expansion of AI has significantly broadened cyber-security threats and likely will continue to do so. Boards should understand how management is considering and managing these risks, particularly those raised by generative AI.
This new technology has many public companies brimming with excitement over the seemingly endless possibilities for innovation. But alongside that potential for dynamic change, both within companies and among regulators, management and boards need to continually consider their respective obligations as the landscape evolves over the coming months and years.
Ryan Adams and Scott Lesmes are partners with Morrison Foerster