Key fair use ruling clarifies when books can be used for AI training

24 June 2025 at 19:56

Artificial intelligence companies don't need permission from authors to train their large language models (LLMs) on legally acquired books, US District Judge William Alsup ruled Monday.

The first-of-its-kind ruling condoning AI training as fair use will likely be viewed as a big win for AI companies, but it also notably puts on notice any AI company expecting the same reasoning to apply to training on pirated copies of books—a question that remains unsettled.

In the specific case Alsup is weighing—which pits book authors against Anthropic—the judge found that "the purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative" and "necessary" to build world-class AI models.

© CookiesForDevo | iStock / Getty Images Plus

Anthropic releases custom AI chatbot for classified spy work

6 June 2025 at 21:12

On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The "Claude Gov" models were built in response to direct feedback from government clients and are designed to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, "refuse less" when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

© Anthropic

Reddit sues Anthropic over AI scraping that retained users’ deleted posts

5 June 2025 at 16:57

On the heels of an OpenAI controversy over deleted posts, Reddit sued Anthropic on Wednesday, accusing the AI company of "intentionally" training AI models on the "personal data of Reddit users"—including their deleted posts—"without ever requesting their consent."

Calling Anthropic two-faced for depicting itself as a "white knight of the AI industry" while allegedly lying about AI scraping, Reddit painted Anthropic as the worst among major AI players. While Anthropic rivals like OpenAI and Google paid Reddit to license data—and, crucially, agreed to "Reddit’s licensing terms that protect Reddit and its users’ interests and privacy" and require AI companies to respect Redditors' deletions—Anthropic wouldn't participate in licensing talks, Reddit alleged.

"Unlike its competitors, Anthropic has refused to agree to respect Reddit users’ basic privacy rights, including removing deleted posts from its systems," Reddit's complaint said.

© SOPA Images / Contributor | LightRocket

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

5 June 2025 at 14:35

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

© Bloomberg via Getty Images

Windsurf says Anthropic is limiting its direct access to Claude AI models

4 June 2025 at 00:48

Windsurf, the popular vibe coding startup that’s reportedly being acquired by OpenAI, says Anthropic has significantly reduced the startup’s first-party access to the Claude 3.7 Sonnet and Claude 3.5 Sonnet AI models. Windsurf CEO Varun Mohan said in a post on X Tuesday that Anthropic gave Windsurf little notice of the change, and the startup now has […]

Anthropic’s AI is writing its own blog — with human oversight

3 June 2025 at 20:29

Anthropic has given its AI a blog. A week ago, Anthropic quietly launched Claude Explains, a new page on its website that’s generated mostly by the company’s AI model family, Claude. Populated by posts on technical topics related to various Claude use cases (e.g. “Simplify complex codebases with Claude”), the blog is intended to be […]