More than 400 publicly traded firms have recently begun flagging artificial intelligence (AI) as a significant business threat in their filings with the Securities and Exchange Commission (SEC), marking a sharp shift in how corporations view the technology. According to data compiled by research service AlphaSense, some 418 companies worth over $1 billion cited AI-related risk factors in 2025, a 46% increase from 2024 and nearly nine times the number from 2023.
These disclosures show companies raising concerns about a range of AI-related exposures: how biased or erroneous outputs could damage their reputation, how data breaches or misuse of AI might undermine trust, and how evolving laws and regulations could render existing systems non-compliant or even illegal. Gaming publisher Take-Two Interactive, for example, noted in its risk section that expanding AI use brings “more usage, more experimentation … the potential for more risk as well.” Financial-services firm Visa warned that agentic AI in commerce could lead to “erroneous or disputed payments, increased chargebacks and reputational harm.” Consumer-goods company Clorox flagged that AI tools “may compromise confidential or sensitive information.” And cosmetics brand e.l.f. Beauty pointed out that its “ability to adapt to new regulations, technological shifts, ethical standards, and stakeholder expectations regarding AI may materially impact our operations, financial condition, or reputation.”
Why is this happening now? Several converging forces explain the surge. First, AI has moved from being an experimental tech buzzword to being deeply embedded in many business models, from generative-AI content tools to automated decision systems and algorithmic processing of sensitive data. The scale of adoption means the potential for failure or misuse has grown. At the same time, legal and regulatory pressure is mounting: regulators like the SEC have made clear that companies must disclose risks whose probability may be uncertain but whose impacts could be material. As law professor M. Todd Henderson puts it, “You’ve got to disclose known unknowns.”
Second, investors are increasingly attuned to AI-related fallout: when AI systems make errors, cause regulatory breaches, or produce unethical outputs, the cost can be huge. Unlike earlier tech risks tied to platforms or the internet, AI’s failures touch on deeper problems: self-learning systems, hallucinated outputs, decisions made without clear human oversight, and systemically embedded bias. One analyst argued that “losing your data in a hack is one thing — but if you deploy AI and it hallucinates, that’s potentially a much bigger problem.”
Third, the disclosures themselves reflect a change in how companies think about risk. In prior years, companies tended to focus on business-model disruption from new technologies. Now they are mapping out second-order risks: what happens if the technology they rely on misbehaves, is regulated out of existence, or becomes a reputational liability. A recent report from law firm Goodwin found that 76% of S&P 500 companies either added or expanded descriptions of AI as a material risk in their 2025 annual filings.
What should you, as an investor or simply someone tracking business risk, look out for? First, pay attention to how detailed the AI risk disclosure is. Companies that simply say “we use AI and it could have risks” are less transparent than those that identify concrete scenarios (e.g., “our data-processing pipeline relies on AI models that could misclassify consumer data and generate regulatory liability of $X”). Second, check whether the company has published any mitigation strategies: AI governance frameworks, audit logs, human-in-the-loop oversight, bias-monitoring programs. A disclosure without mitigation may indicate that the risk is still theoretical for now, or that it is being downplayed. Third, think about how AI fits into the business model: firms with deep AI dependence (e.g., fintech, adtech, SaaS platforms) may face greater exposure than hardware firms whose AI usage is smaller in scope.
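To make that first pass concrete, here is a minimal sketch of how an automated keyword screen over a filing’s risk-factor text might look. Everything in it is an assumption for illustration: the keyword lists, the screen_risk_factors helper, and the risk_factors.txt input file (e.g., Item 1A text saved from SEC EDGAR) are all hypothetical, and matching terms is no substitute for actually reading the disclosure.

```python
# Illustrative first-pass screen of a 10-K risk-factor section.
# Keyword lists are hypothetical examples, not a vetted taxonomy.

AI_RISK_TERMS = [
    "artificial intelligence", "machine learning", "generative ai",
    "hallucinat",  # stem matches "hallucinate" / "hallucination"
    "algorithmic", "agentic", "model bias",
]
MITIGATION_TERMS = [
    "ai governance", "human-in-the-loop", "audit",
    "bias monitoring", "oversight", "responsible ai",
]

def screen_risk_factors(text: str) -> dict:
    """Count AI-risk and mitigation mentions in a risk-factor section."""
    lowered = text.lower()
    risk_hits = {t: lowered.count(t) for t in AI_RISK_TERMS if t in lowered}
    mitigation_hits = {t: lowered.count(t) for t in MITIGATION_TERMS if t in lowered}
    return {
        "risk_mentions": risk_hits,
        "mitigation_mentions": mitigation_hits,
        # Many risk mentions with no mitigation language is the pattern
        # the checklist above suggests treating as a yellow flag.
        "mitigation_gap": bool(risk_hits) and not mitigation_hits,
    }

if __name__ == "__main__":
    # "risk_factors.txt" is a placeholder for extracted Item 1A text.
    with open("risk_factors.txt", encoding="utf-8") as f:
        report = screen_risk_factors(f.read())
    for key, value in report.items():
        print(f"{key}: {value}")
```

A screen like this only flags filings worth a closer read; boilerplate mentions and substantive scenario disclosures count the same to a keyword match, which is exactly the distinction the checklist above asks you to judge by hand.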
Importantly, these disclosures don’t mean companies are pulling back from AI; quite the opposite. Many firms reiterated that they are increasing AI investment even as they warn of risk. The paradox is on full display: companies are signaling both that AI is core to future growth and that its downsides worry them. Investors, accordingly, may have to balance growth opportunities against emerging risk.
In short, the flood of new AI-risk warnings in SEC filings signals that businesses are taking the downside of AI seriously for the first time. AI isn’t just a tech project anymore; it’s a strategic risk, a center-of-gravity issue for corporate boards, and something investors will start demanding more visibility on. The next time you open a 10-K or proxy statement, scan not just for “AI growth” but for how the company frames “AI risk.” Because as much as companies want to win with AI, many already fear what happens if things go badly.
