AI: The Next Frontier in Antitrust Law and Regulation

Having completed the “first tech monopoly trial of the internet era” against Google over its search and search advertising business, the federal government is now exploring potential antitrust issues in artificial intelligence.

According to Jonathan Kanter, the assistant attorney general in charge of the Department of Justice Antitrust Division, emerging issues in AI deserve immediate attention. In an interview with the Financial Times, Kanter said his team is digging into “monopoly choke points and the competitive landscape” in AI. The fear is that a few well-resourced companies have already gobbled up most of the market power over the latest transformative technology.

The Federal Trade Commission already began an inquiry in January seeking more information from major tech companies regarding their investments and partnerships across the AI sector. “We are scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition across layers of the AI stack,” said FTC Chair Lina Khan. Specifically, Khan said the FTC wants to know if some of the new AI partnerships represent end-runs around formal merger reviews.

In fact, the Justice Department and FTC have divvied up responsibility for overseeing competitors in the AI market: DOJ gets Silicon Valley phenom Nvidia Corp.; the FTC is taking on venerable tech giant Microsoft Corp. and OpenAI, the creator of ChatGPT. A similar project five years ago produced the antitrust lawsuit against Google and other cases involving big tech power players such as Apple and Amazon.

Regulators in Europe are probably wondering what took so long for AI scrutiny to ramp up in the U.S. European Union policymakers in December approved the AI Act, a groundbreaking law intended to govern the use of AI technologies. The EU regulations primarily aim to monitor AI applications that could do the most damage – infrastructure and security risks, for instance. Additionally, developers of AI systems would be subject to new transparency requirements, and makers of tools that create so-called deepfake images and videos would be required to label AI-generated outputs.

Despite the efforts of the Biden administration to police AI more vigorously, lawmakers in the U.S. seem far more reluctant than their European counterparts to intervene in the market. (Keep in mind that the first draft of the AI Act circulated in 2021.) In May, a bipartisan group of senators including Senate Majority Leader Chuck Schumer of New York proposed a $32 billion spending plan for AI research and development. Notably absent: any specific details about regulating the sector.

It shouldn’t come as a surprise that legislators would hesitate to put a leash on what some are projecting to be a trillion-dollar business. However, if they continue putting off serious efforts to regulate AI, who knows what the sector will look like by the time they finally decide to act?
