When “We’ll Lock It Up” Stops Working
The AI industry spent years assuring everyone that its most valuable assets were safely behind walls. Last month, two incidents in five days challenged that assumption.
A supply chain attack on Mercor, a $10 billion AI recruiting startup, gave hackers access to training data from major labs including OpenAI and Anthropic. Days later, Anthropic itself accidentally exposed over 500,000 lines of source code for Claude Code through a misconfigured package; within hours, the code was mirrored across thousands of GitHub repositories. Y Combinator president Garry Tan called it a national security problem. Marc Andreessen went further, declaring it the end of the industry’s “we’ll lock it up” approach to security.
For public companies navigating SEC disclosure obligations, these incidents sharpen an already uncomfortable question: does your Item 1C cybersecurity disclosure really account for any of this?
Item 1C enters its second full year
The SEC’s cybersecurity disclosure framework, adopted in July 2023, requires public companies to describe their risk management processes, board oversight, and management’s role under Item 106 of Regulation S-K. A survey of S&P 100 filings found the average Item 1C disclosure runs about 980 words, with 78% of companies identifying a chief information security officer and 51% referencing the National Institute of Standards and Technology (NIST) — longer and more detailed than a year ago, though specificity across the field remains patchy.
The Mercor breach exploited a vulnerability in LiteLLM, a popular open-source library for connecting applications to AI services. That kind of third-party dependency is precisely what Item 1C’s risk management provisions are designed to surface. Yet companies that expanded their AI risk language last year largely stuck with generic descriptions, even as SEC staff flagged boilerplate AI disclosures as a problem at the 2024 AICPA conference.
The threat environment isn’t waiting for disclosure cycles
Regulators are moving faster than filing calendars. FINRA stood up its Financial Intelligence Fusion Center last month, a real-time threat-sharing portal for member firms. FBI data puts 2024 cyber-incident losses at $16.6 billion, a 33% jump year over year.
The enforcement backdrop is shifting as well. For a detailed look at how the SEC’s cybersecurity enforcement posture is evolving under the current administration, read our new report, Cybersecurity Risk Disclosures Rise as SEC Enforcement Recalibrates.
The locked door needs a different strategy
Item 1C disclosures that treat cybersecurity as a generic risk factor will look increasingly thin as AI-specific attack vectors multiply. The twin incidents last month offer a useful stress test for disclosure committees: Which supply chain dependencies are identified? How is board oversight of AI-related cyber risk described? Are risk factors updated to reflect threats that evolve faster than annual filing cycles? Boards that cannot answer those questions clearly may be courting D&O exposure they have not anticipated; cyber incidents are increasingly finding their way into securities litigation.
Andreessen’s line about the end of “we’ll lock it up” applies to disclosure strategy, too. Companies that figure that out before the next incident will be in a much better position than those that figure it out after.
—
Want faster, smarter insights? Intelligize+ AI™ puts the data you need at your fingertips. Request a free trial and see how it can transform your workflow.