Use of Anthropic’s Tools in Cyberattack Sounds Alarm on AI Risk

Artificial intelligence start-up Anthropic shocked the technology community when it announced on November 13 that it had uncovered the first documented case of a cyberattack executed largely by AI tools, with minimal human intervention.

In a detailed report, Anthropic said it was “highly confident” a state-sponsored Chinese group executed a sophisticated espionage operation using Claude Code, Anthropic’s AI coding agent built on its Claude large language model, to gather intelligence from roughly 30 organizations, including banks, technology companies and government agencies.

Anthropic suggested the speed and scale of the operation should serve as a wake-up call for organizations to strengthen their cyber defenses. “While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale,” the company said.

That wake-up call is a stark reminder of what some companies are beginning to say aloud: AI is as much a business risk as it is a business opportunity.

Meta has provided arguably the most comprehensive articulation of AI-related risk so far. Tucked into its 2024 Form 10-K is a warning to investors that the company’s artificial intelligence initiatives, especially generative AI, may not succeed – and that they could “adversely affect [its] business, reputation, or financial results.”

Meta’s language covers the expected categories, such as predictive accuracy and system reliability. It also extends to nefarious applications that may be harder to control – think deepfakes, misinformation, discrimination, toxicity, intellectual-property violations, data privacy, cybersecurity, and even sanctions and export-control exposure. The admission marked a rare moment of candor from a technology giant with strong commercial incentives to emphasize AI’s benefits.

Other companies are likely to follow Meta’s lead out of necessity. AI is proliferating across every business function, and boards of directors are broadly acknowledging that they are struggling to keep pace. At a Wall Street Journal summit this month, board members from brand-name companies like JPMorgan, Disney and Amgen appeared to agree that AI is moving faster than existing governance structures can handle.

The speed of AI’s evolution matters because the law is shifting, too. Under Delaware’s evolving fiduciary-oversight framework, corporate officers now have an affirmative duty to identify and escalate material risks within their remit – and AI easily qualifies. It touches data, customers, employees, operations, product development, compliance, IP, cybersecurity, and brand risk. Boards, meanwhile, must ensure their companies have monitoring systems capable of surfacing these issues. In this environment, companies are expected to address AI risks proactively in their public disclosures.

Then there’s the litigation angle. The New York Times has already spent big bucks pursuing litigation against OpenAI and Microsoft for allegedly using its work without authorization. The publisher spent $10.8 million on litigation in 2024 and another $2.4 million in the third quarter of 2025 alone. The Times has gone so far as to break out these costs as a special item in its corporate financial reporting.

And courts are letting more cases move forward. A group of authors, including George R. R. Martin, cleared a key hurdle when a judge held that a ChatGPT-generated plot outline could plausibly infringe their copyrights. Defamation claims are emerging, too, with some plaintiffs seeking eight-figure damages over AI-generated statements.

These aren’t theoretical risks; they’re expenses showing up on corporate financial statements. Zooming out, a pattern appears to be forming: companies are aggressively adopting AI to remain competitive in the marketplace, but they’re also beginning to warn investors that the technology carries meaningful risks – operational, governance, legal, and reputational. The Anthropic episode underscores the urgency of developing best practices for managing those risks.

If sophisticated attackers can misuse AI systems, companies should review their own safeguards for similar weaknesses. And those that fail to disclose such risks in a timely manner may face heightened scrutiny from shareholders.

***

The Intelligize blog is on hiatus for the Thanksgiving holiday and will return on Wednesday, December 3, 2025.
