Anthropic releases custom AI chatbot for classified spy work

On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released "Claude Gov" models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, "refuse less" when engaging with it, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

Reddit sues Anthropic over AI scraping that retained users’ deleted posts

On the heels of an OpenAI controversy over deleted posts, Reddit sued Anthropic on Wednesday, accusing the AI company of "intentionally" training AI models on the "personal data of Reddit users"—including their deleted posts—"without ever requesting their consent."

Calling Anthropic two-faced for depicting itself as a "white knight of the AI industry" while allegedly lying about AI scraping, Reddit painted Anthropic as the worst among major AI players. While Anthropic rivals like OpenAI and Google paid Reddit to license data—and, crucially, agreed to "Reddit’s licensing terms that protect Reddit and its users’ interests and privacy" and require AI companies to respect Redditors' deletions—Anthropic wouldn't participate in licensing talks, Reddit alleged.

"Unlike its competitors, Anthropic has refused to agree to respect Reddit users’ basic privacy rights, including removing deleted posts from its systems," Reddit's complaint said.

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

Windsurf says Anthropic is limiting its direct access to Claude AI models

Windsurf, the popular vibe coding startup that’s reportedly being acquired by OpenAI, says Anthropic significantly reduced Windsurf’s first-party access to the Claude 3.7 Sonnet and Claude 3.5 Sonnet AI models. Windsurf CEO Varun Mohan said in a post on X on Tuesday that Anthropic gave Windsurf little notice of the change, and the startup now has […]

Anthropic’s AI is writing its own blog — with human oversight

Anthropic has given its AI a blog. A week ago, Anthropic quietly launched Claude Explains, a new page on its website that’s generated mostly by the company’s AI model family, Claude. Populated by posts on technical topics related to various Claude use cases (e.g. “Simplify complex codebases with Claude”), the blog is intended to be […]

New Claude 4 AI model refactored code for 7 hours straight

On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company's return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours.

Alex Albert, Anthropic's head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. "Across all the companies out there that are building things, there's a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence," Albert said. "I think Opus is going to fit that groove perfectly."

Before we go further, a brief refresher on Claude's three AI model "size" names (introduced in March 2024) is probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price (in the API), speed, and capability.

Hidden costs in AI deployment: Why Claude models may be 20-30% more expensive than GPT in enterprise settings

It is a well-known fact that different model families can use different tokenizers. However, there has been limited analysis of how the process of “tokenization” itself varies across these tokenizers. Do all tokenizers result in the same number of tokens for a given input text? If not, how different are the generated tokens? How significant are the…
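
The cost difference described above comes down to token counts. A minimal sketch of the idea, using two toy tokenizers (these are not the actual Claude or GPT tokenizers, and `subword_tokenize` is a deliberately simplified stand-in for real subword schemes like BPE):

```python
# Toy illustration: different tokenization schemes yield different
# token counts -- and therefore different API costs -- for the same
# input text. Neither function below is a real production tokenizer.

def whitespace_tokenize(text: str) -> list[str]:
    """Split on whitespace only: one token per word."""
    return text.split()

def subword_tokenize(text: str, max_len: int = 4) -> list[str]:
    """Greedily chop each word into chunks of at most max_len characters,
    mimicking how subword tokenizers break rare words into pieces."""
    tokens = []
    for word in text.split():
        for i in range(0, len(word), max_len):
            tokens.append(word[i:i + max_len])
    return tokens

sample = "Enterprise deployments frequently process multilingual documentation"
print(len(whitespace_tokenize(sample)), len(subword_tokenize(sample)))
# Same text, very different token counts -- if pricing is per token,
# the second scheme makes the identical prompt several times pricier.
```

A real comparison would run the same corpus through each vendor's actual tokenizer, but even this toy version shows why per-token pricing alone doesn't determine cost.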

Researchers concerned to find AI models misrepresenting their “reasoning” processes

Remember when teachers demanded that you "show your work" in school? Some new types of AI models promise to do exactly that, but new research suggests that the "work" they show can sometimes be misleading or disconnected from the actual process used to reach the answer.

New research from Anthropic—creator of the ChatGPT-like Claude AI assistant—examines simulated reasoning (SR) models like DeepSeek's R1, and its own Claude series. In a research paper posted last week, Anthropic's Alignment Science team demonstrated that these SR models frequently fail to disclose when they've used external help or taken shortcuts, despite features designed to show their "reasoning" process.

(It's worth noting that OpenAI's o1 and o3 series SR models were excluded from this study.)

After months of user complaints, Anthropic debuts new $200/month AI plan

On Wednesday, Anthropic introduced a new $100- to $200-per-month subscription tier called Claude Max that offers expanded usage limits for its Claude AI assistant. The new plan arrives after many existing Claude subscribers complained of hitting rate limits frequently.

"The top request from our most active users has been expanded Claude access," wrote Anthropic in a news release. A brief stroll through user feedback on Reddit seems to confirm that sentiment, showing that many Claude users have been unhappy with Anthropic's usage limits over the past year—even on the Claude Pro plan, which costs $20 a month.

One downside of Claude's relatively large context window (the amount of text it can process at once) is that long conversations or many attached reference documents (such as code files) fill up usage limits quickly. That's because each time the user adds to the conversation, the entire text of the conversation (including any attached documents) is fed back into the AI model and re-evaluated. On the other hand, a large context window allows Claude to handle more complex projects within each session.
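
A rough sketch of why this re-feeding burns through limits so fast (this is not Anthropic's actual billing logic, just the arithmetic of resending the history each turn):

```python
# Sketch: each turn re-submits the entire conversation so far, so the
# cumulative input tokens processed grow quadratically with turn count,
# even though the conversation text itself grows only linearly.

def cumulative_input_tokens(turn_sizes: list[int]) -> int:
    """Total input tokens processed across a conversation in which every
    turn re-sends all prior turns plus the new message."""
    total = 0
    history = 0
    for size in turn_sizes:
        history += size   # the conversation grows by this turn's tokens
        total += history  # ...and the whole history is processed again
    return total

# Ten turns of ~500 tokens each: only 5,000 tokens of actual text,
# but far more tokens counted against usage limits.
turns = [500] * 10
print(sum(turns), cumulative_input_tokens(turns))
```

Attaching a large document early in a conversation makes this worse, since its full text is part of the history resent on every subsequent turn.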

Anthropic rolls out a $200-per-month Claude subscription

Anthropic announced on Wednesday that it’s launching a new, very expensive subscription plan for its AI chatbot Claude: Max. An answer to OpenAI’s $200-a-month ChatGPT Pro tier, Max comes with higher usage limits than Anthropic’s $20-per-month Claude Pro subscription, as well as priority access to the company’s newest AI models and features. A bit confusingly, […]

Genspark’s Super Agent ups the ante in the general AI agent race

This week, Palo Alto-based startup Genspark released what it calls Super Agent, a fast-moving autonomous system designed to handle real-world tasks across a wide range of domains – including some that raise eyebrows, like making phone calls to restaurants using a realistic synthetic voice.