AI: The Next Frontier in Antitrust Law and Regulation

Having completed the “first tech monopoly trial of the internet era” against Google over its search and advertising businesses, the federal government is now exploring potential antitrust issues in artificial intelligence.

According to Jonathan Kanter, the assistant attorney general in charge of the Department of Justice Antitrust Division, emerging issues in AI deserve immediate attention. In an interview with the Financial Times, Kanter said his team is digging into “monopoly choke points and the competitive landscape” in AI. The fear is that a few well-resourced companies have already gobbled up most of the market power over the latest transformative technology.

The Federal Trade Commission began its own inquiry in January, seeking more information from major tech companies about their investments and partnerships across the AI sector. “We are scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition across layers of the AI stack,” said FTC Chair Lina Khan. Specifically, Khan said the FTC wants to know whether some of the new AI partnerships represent end-runs around formal merger reviews.

In fact, the Justice Department and FTC have divvied up responsibility for overseeing competitors in the AI market: DOJ gets Silicon Valley phenom Nvidia Corp.; the FTC is taking on venerable tech giant Microsoft Corp. and OpenAI, the creator of ChatGPT. A similar project five years ago produced the antitrust lawsuit against Google and other cases involving big tech power players such as Apple and Amazon.

Regulators in Europe are probably wondering what took so long for AI scrutiny to ramp up in the U.S. European Union policymakers in December approved the AI Act, a groundbreaking law intended to govern the use of AI technologies. The EU regulations primarily aim to monitor AI applications that could do the most damage – infrastructure and security risks, for instance. Additionally, developers of AI systems would be subject to new transparency requirements, and makers of so-called deepfake images and videos would be required to label their outputs as AI-generated.

Despite the efforts of the Biden administration to police AI more vigorously, lawmakers in the U.S. seem far more reluctant than their European counterparts to intervene in the market. (Keep in mind that the first draft of the AI Act circulated in 2021.) In May, a bipartisan group of senators including Senate Majority Leader Chuck Schumer of New York proposed a $32 billion spending plan for AI research and development. Notably absent: any specific details about regulating the sector.

It shouldn’t come as a surprise that legislators would hesitate to put a leash on what some project will become a trillion-dollar business. But if they continue putting off serious efforts to regulate AI, who knows what the sector will look like when they finally decide to act?
