AI: The Next Frontier in Antitrust Law and Regulation

Having completed the “first tech monopoly trial of the internet era” against Google over its search advertising business, the federal government is now exploring potential antitrust issues in artificial intelligence.

According to Jonathan Kanter, the assistant attorney general in charge of the Department of Justice Antitrust Division, emerging issues in AI deserve immediate attention. In an interview with the Financial Times, Kanter said his team is digging into “monopoly choke points and the competitive landscape” in AI. The fear is that a few well-resourced companies have already gobbled up most of the market power over the latest transformative technology.

The Federal Trade Commission already began an inquiry in January seeking more information from major tech companies regarding their investments and partnerships across the AI sector. “We are scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition across layers of the AI stack,” said FTC Chair Lina Khan. Specifically, Khan said the FTC wants to know if some of the new AI partnerships represent end-runs around formal merger reviews.

In fact, the Justice Department and FTC have divvied up responsibility for overseeing competitors in the AI market: DOJ gets Silicon Valley phenom Nvidia Corp.; the FTC is taking on venerable tech giant Microsoft Corp. and OpenAI, the creator of ChatGPT. A similar project five years ago produced the antitrust lawsuit against Google and other cases involving big tech power players such as Apple and Amazon.

Regulators in Europe are probably wondering what took so long for AI scrutiny to ramp up in the U.S. European Union policymakers in December approved the AI Act, a groundbreaking law intended to govern the use of AI technologies. The EU regulations primarily aim to monitor AI applications that could do the most damage – infrastructure and security risks, for instance. Additionally, developers of AI systems would be subject to new transparency requirements, and makers of so-called deepfake images and videos would be required to label AI-generated outputs.

Despite the efforts of the Biden administration to police AI more vigorously, lawmakers in the U.S. seem far more reluctant than their European counterparts to intervene in the market. (Keep in mind that the first draft of the AI Act circulated in 2021.) In May, a bipartisan group of senators including Senate Majority Leader Chuck Schumer of New York proposed a $32 billion spending plan for AI research and development. Notably absent: any specific details about regulating the sector.

It shouldn’t come as a surprise that legislators would hesitate to put a leash on what some are projecting to be a trillion-dollar business. But if they continue putting off serious efforts to regulate AI, who knows what the sector will look like by the time they finally decide to act.
