SEC Committee Push for AI Disclosures Sparks Regulatory Deja Vu
Federal regulators are once again confronting a familiar question: who should regulate transformative technology when it affects the capital markets? We most recently saw it unfold with crypto. Now it’s AI’s turn.
Last week, the SEC’s Investor Advisory Committee issued a formal recommendation urging the Commission to adopt AI-specific disclosure rules. The Committee contends that AI’s growing impact on business demands more consistent transparency. Advisory recommendations don’t usually make headlines, but this one arrives at an awkward moment: several federal agencies have slowed their AI initiatives following the Trump administration’s rescission of a Biden-era executive order on AI. The Committee’s proposal pushes the SEC back into a role that its commissioners—and the White House—prefer to avoid.
Chair Paul Atkins and Commissioner Hester Peirce made their skepticism clear last week at the Investor Advisory Committee meeting. Atkins raised concerns about regulatory sprawl, nodding to the decade-long struggle to define the SEC’s crypto jurisdiction. Peirce was more direct in her remarks, cautioning against assuming the SEC should default to regulating AI and urging a more precise mapping of AI-related risks that fall within the agency’s investor-protection mandate.
Their concerns echo the early crypto years, when overlapping federal mandates, conflicting state rules, and incompatible international frameworks left issuers scrambling. But AI is different. It’s already used in business functions implicating privacy, consumer protection, and financial reporting, among other areas. While the EU has moved quickly, introducing an AI Act with risk-tiered obligations and extraterritorial requirements that apply to many U.S. public companies, things are far murkier stateside. No single U.S. regulator holds the full set of tools needed to govern AI. States have moved aggressively on AI governance, with California advancing automated-decision-making regulations, and New York and Colorado pushing new transparency and data-rights obligations. The viability of state regulation was dealt a blow this week, however, when President Trump promised an executive order to curb state AI laws.
The Investor Advisory Committee’s proposal places the SEC within this landscape by tying AI disclosures to financial materiality. It calls for disclosures covering governance structures, workforce impacts, model reliability, cybersecurity risks, and strategic dependencies. Those categories overlap with state privacy laws and international AI standards, raising a question of whether SEC rules will clean up the landscape or just pile on.
The Committee’s recommendation sweeps in work the SEC has been doing for years. Even after the White House rescinded its AI executive order, the SEC has continued to scrutinize AI-related investor disclosures, especially when companies oversell their systems’ capabilities. That scrutiny has focused on “AI washing”: regulators have made clear that misleading claims about AI capabilities can amount to material misstatements.
The Investor Advisory Committee’s proposal explicitly aims for early, clear, principles-based rules. Even so, adopting AI disclosure rules now could recreate crypto’s challenges: federal rules that lag state or global regimes, forcing companies to comply with multiple—sometimes contradictory—standards.
The Investor Advisory Committee’s recommendation opens a broader question: who will shape the regulatory architecture for AI, and can the SEC step into that role? The safest assumption for issuers is that AI oversight will evolve unevenly. Until clearer guidance emerges, companies operating in capital markets should prepare to navigate overlapping, competing, and potentially conflicting AI regulations.