โŒ

Normal view

Received before yesterday

Anthropic releases custom AI chatbot for classified spy work

6 June 2025 at 21:12

On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released "Claude Gov" models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, "refuse less" when engaging with classified information, and are tailored to intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

ยฉ Anthropic

Reddit sues Anthropic over AI scraping that retained usersโ€™ deleted posts

5 June 2025 at 16:57

On the heels of an OpenAI controversy over deleted posts, Reddit sued Anthropic on Wednesday, accusing the AI company of "intentionally" training AI models on the "personal data of Reddit users"โ€”including their deleted postsโ€”"without ever requesting their consent."

Calling Anthropic two-faced for depicting itself as a "white knight of the AI industry" while allegedly lying about AI scraping, Reddit painted Anthropic as the worst among major AI players. While Anthropic rivals like OpenAI and Google paid Reddit to license dataโ€”and, crucially, agreed to "Redditโ€™s licensing terms that protect Reddit and its usersโ€™ interests and privacy" and require AI companies to respect Redditors' deletionsโ€”Anthropic wouldn't participate in licensing talks, Reddit alleged.

"Unlike its competitors, Anthropic has refused to agree to respect Reddit usersโ€™ basic privacy rights, including removing deleted posts from its systems," Reddit's complaint said.

ยฉ SOPA Images / Contributor | LightRocket

โ€œIn 10 years, all bets are offโ€โ€”Anthropic CEO opposes decadelong freeze on state AI laws

5 June 2025 at 14:35

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

ยฉ Bloomberg via Getty Images

New Claude 4 AI model refactored code for 7 hours straight

22 May 2025 at 16:45

On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company's return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours.

Alex Albert, Anthropic's head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. "Across all the companies out there that are building things, there's a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence," Albert said. "I think Opus is going to fit that groove perfectly."

Before we go further, a brief refresher on Claude's three AI model "size" names (introduced in March 2024) is probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price (in the API), speed, and capability.
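To make the tier names concrete, here is a minimal sketch of how a developer might choose between those tiers when calling Anthropic's Python SDK. The specific model ID strings are assumptions for illustration and may not match Anthropic's current identifiers.

```python
# Minimal sketch: selecting a Claude model tier via Anthropic's Python SDK.
# The model ID below is an assumption; check Anthropic's docs for current IDs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-20250514",  # swap in a Sonnet or Haiku ID for cheaper, faster calls
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize what this function does."}],
)
print(message.content[0].text)
```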

ยฉ Anthropic

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup

14 May 2025 at 22:16

On Wednesday, OpenAI announced that ChatGPT users now have access to GPT-4.1, an AI language model previously available only through the company's API since its launch one month ago. The update brings what OpenAI describes as improved coding and web development capabilities to paid ChatGPT subscribers, with wider enterprise rollout planned in the coming weeks.
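For readers unfamiliar with what "available through the API" means in practice, the sketch below shows how GPT-4.1 might be called with OpenAI's Python SDK; it is an assumption-based illustration rather than OpenAI's documented quickstart.

```python
# Minimal sketch: calling GPT-4.1 through OpenAI's Python SDK (assumed usage).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # previously API-only; now also surfaced in ChatGPT's model picker
    messages=[{"role": "user", "content": "Draft a simple HTML landing page."}],
)
print(response.choices[0].message.content)
```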

GPT-4.1 and GPT-4.1 mini join an already complex ChatGPT model selection that includes GPT-4o, various specialized GPT-4o versions, o1-pro, o3-mini, and o3-mini-high. There are technically nine AI models available to ChatGPT Pro subscribers. Wharton professor Ethan Mollick recently lampooned the awkward situation on social media.

As of May 14, 2025, ChatGPT Pro users have access to eight main AI models, plus Deep Research. Credit: Benj Edwards

Deciding which AI model to use can be daunting for AI novices. Reddit users and OpenAI forum members alike commonly voice confusion about the available options. "I do not understand the reason behind having multiple models available for use," wrote one Reddit user in March. "Why would anyone use anything but the best one?" Another Redditor said they were "a bit lost" with the many ChatGPT models available after switching back from using Anthropic Claude.

ยฉ Getty Images

AI use damages professional reputation, study suggests

8 May 2025 at 20:23

Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also quietly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

"Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.

ยฉ demaerre via Getty Images

OpenAI scraps controversial plan to become for-profit after mounting pressure

5 May 2025 at 20:18

On Monday, ChatGPT-maker OpenAI announced it will remain under the control of its founding nonprofit board, scrapping its controversial plan to split off its commercial operations as a for-profit company after mounting pressure from critics.

In an official OpenAI blog post announcing the latest restructuring decision, CEO Sam Altman wrote: "We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware."

The move represents a significant shift in OpenAI's proposed restructuring. While the most recent previous version of the company's plan (which we covered in December) would have established OpenAI as a Public Benefit Corporation with the nonprofit merely holding shares and having limited influence, the revised approach keeps the nonprofit firmly in control of operations.

ยฉ Benj Edwards / OpenAI

โŒ