Top Four Artificial Intelligence Risks on SEC’s Radar

Likely confounding an audience at Yale Law School accustomed to rote legal speeches, Securities and Exchange Commission Chair Gary Gensler in recent remarks on artificial intelligence offered riffs on Valentine’s Day, robots and his affinity for romantic comedies. “The streaming apps long ago figured out I’m a rom-com guy,” quipped the now-noted cinephile.

Amid the levity – and a surprising mention of the 2022 hit horror film M3GAN – Gensler got down to business on regulatory and policy issues involving AI, stressing the importance of balancing the potential benefits of the burgeoning technology with its risks. “Our role at the SEC is both allowing for issuers and investors to benefit from the great potential of AI while also ensuring that we guard against the inherent risks,” he said.

With that in mind, Intelligize has identified the top four AI risks that appear to be on the SEC’s radar.

AI “Washing”

As the number of references to AI in annual reports increases, so too do regulatory scrutiny and vigilance for misleading claims. Call it AI washing.

According to Bloomberg Law, roughly 40% of Standard & Poor’s 500 companies mentioned AI in their most recent annual reports. That constituted a significant increase from 2018, when AI was mentioned only “sporadically.” Some companies, such as CarMax Inc. and CVS Health Corp., described their use of AI in various business initiatives, while others focused on its competitive and security-related risks.

These references to AI-related opportunities and risks have caught the attention of Gensler and the SEC, and officials have warned companies that misleading AI-related claims could result in legal action.

“Don’t do it,” Gensler warned in December. “One shouldn’t greenwash, and one shouldn’t AI-wash.”

Conflicts of Interest

In his February 13 speech at Yale, Gensler expressed concern that conflicts of interest could arise when tech platforms use AI. Because AI-based models are increasingly able to infer individuals’ preferences from their responses to prompts, products and pricing, Gensler asked what happens when finance platforms act on those subtle preferences in ways that could adversely affect investors.

“If the optimization function in the AI system is taking the interest of the platform into consideration as well as the interest of the customer, this can lead to conflicts of interest,” Gensler noted. “In finance, when brokers or advisers act on those conflicts and optimize to place their interests ahead of their investors’ interests, investors may suffer financial harm.”

Inaccurate Information

As evidenced in the recent Lyft trading kerfuffle, forecasting errors can cause dramatic swings in the price of a company’s stock: The rideshare company’s shares rose 67% before a critical mistake was corrected. (Analysts were quick to warn that the Lyft incident could spur regulatory scrutiny.) Using AI to generate that kind of information in the future won’t guarantee its accuracy, and Gensler has stressed that companies remain responsible for releasing correct information to the public.

Because investment advisers and broker-dealers are prohibited from placing their interests ahead of investors’ interests, they should not provide investment advice or recommendations based on inaccurate or incomplete information, Gensler said. “You don’t want your broker or adviser recommending investments they hallucinated while on mushrooms,” he said. “So, when the broker or adviser uses an AI model, they must ensure that any recommendations or advice provided by the model is not based on a hallucination or inaccurate information.”

Unpredictable Harm

In the context of AI, we can easily identify cases of bad actors using such technology to commit outright fraud. Similarly, if using AI raises foreseeable risks of harm to investors, we can hold the parties using the technology to account.

But how should we interpret concepts such as negligence when it comes to the unpredictable harms that arise as a result of AI? The answer to that question seems murky, but Gensler has indicated he is at least pondering its significance in the context of the SEC enforcing anti-fraud laws. For now, according to the SEC chair, it’s a matter for the courts to decide.

Colorful as Gensler’s recent comments were, the potential market risks of AI are very real and are likely to grow more complex and troublesome as the nascent technology matures. The SEC has those risks on its radar, but it’s also clear that the onus is on corporations to ensure their adoption of AI doesn’t come at the expense of investors.
