I Asked AI to Create a Pro-ICE Chant. Google and Meta Did. ChatGPT Said No.

Grok cheered. Claude refused. The results say something about who controls the AI, and what it's allowed to say.
As artificial intelligence has advanced, AI tools have emerged that make it easy to create digital replicas of lost loved ones, replicas that can be generated without the knowledge or consent of the person who died.
Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.
Because of these suspected harms, and perhaps a general revulsion at the idea, not everybody wants to become an AI ghost.
© Aurich Lawson
The worldโs leading artificial intelligence companies are stepping up efforts to deal with a growing problem of chatbots telling people what they want to hear.
OpenAI, Google DeepMind, and Anthropic are all working on reining in the sycophantic behavior of their generative AI products, which offer over-flattering responses to users.
The issue, stemming from how the large language models are trained, has come into focus at a time when more and more people have adopted the chatbots not only at work as research assistants, but in their personal lives as therapists and social companions.
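The training mechanism is easy to see in miniature. Below is a toy sketch, not any lab's actual pipeline; the preference data and reward function are invented for illustration, showing how pairwise labels that consistently favor agreeable answers can teach a reward model that flattery is what "good" looks like:

```python
# Toy illustration, not any lab's actual training pipeline: if the
# preference labels used for reinforcement learning from human feedback
# consistently favor agreeable answers, the learned reward drifts toward
# flattery, and the tuned model follows the reward.

# Hypothetical pairwise preference data: (chosen, rejected)
preferences = [
    ("You're absolutely right!", "Actually, the evidence cuts the other way."),
    ("Great idea!", "That plan has a flaw worth fixing first."),
    ("Brilliant question!", "The question rests on a false premise."),
]

FLATTERING = {"right", "great", "brilliant", "absolutely"}

def toy_reward(text: str) -> int:
    """Count flattering tokens -- standing in for a reward model that has
    latched onto agreeableness as its easiest predictive feature."""
    return sum(word.strip("!.,?'").lower() in FLATTERING for word in text.split())

for chosen, rejected in preferences:
    # The "chosen" side scores higher every time, so optimizing against
    # this reward pushes the model toward flattery.
    assert toy_reward(chosen) > toy_reward(rejected)
print("Toy reward prefers the flattering answer in every pair.")
```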
© FT montage
On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released "Claude Gov" models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.
The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, "refuse less" when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.
Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.
© Anthropic
Late Thursday, OpenAI confronted user panic over a sweeping court order requiring widespread chat log retention, including users' deleted chats, after moving to appeal the order that allegedly impacts the privacy of hundreds of millions of ChatGPT users globally.
In a statement, OpenAI Chief Operating Officer Brad Lightcap explained that the court order came in a lawsuit brought by The New York Times and other news organizations, which alleged that deleted chats may contain evidence of users prompting ChatGPT to generate copyrighted news articles.
To comply with the order, OpenAI must "retain all user content indefinitely going forward, based on speculation" that the news plaintiffs "might find something that supports their case," OpenAI's statement alleged.
© Leonid Korchenko | Moment
On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.
Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."
As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.
© Bloomberg via Getty Images
OpenAI is now fighting a court order to preserve all ChatGPT user logs, including deleted chats and sensitive chats logged through its API business offering, after news organizations suing over copyright claims accused the AI company of destroying evidence.
"Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying)," OpenAI explained in a court filing demanding oral arguments in a bid to block the controversial order.
In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its usersโ privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAIโs application programming interface (API), OpenAI said.
© nadia_bormotova | iStock / Getty Images Plus
One of the "godfathers" of artificial intelligence has attacked a multibillion-dollar race to develop the cutting-edge technology, saying the latest models are displaying dangerous characteristics such as lying to users.
Yoshua Bengio, a Canadian academic whose work has informed techniques used by top AI groups such as OpenAI and Google, said: "There's unfortunately a very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety."
The Turing Award winner issued his warning in an interview with the Financial Times, while launching a new non-profit called LawZero. He said the group would focus on building safer systems, vowing to "insulate our research from those commercial pressures."
© Andrej Ivanov/AFP/Getty Images
College students who have reportedly grown too dependent on ChatGPT are starting to face consequences for placing too much trust in chatbots after graduating and joining the workforce.
Last month, a recent law school graduate lost his job after using ChatGPT to help draft a court filing that ended up being riddled with errors.
The consequences arrived after a court in Utah ordered sanctions because the filing included the first AI-hallucinated fake citation ever discovered in the state.
© the-lightwriter | iStock / Getty Images Plus
On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company's return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours.
Alex Albert, Anthropic's head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. "Across all the companies out there that are building things, there's a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence," Albert said. "I think Opus is going to fit that groove perfectly."
Before we go further, a brief refresher on Claude's three AI model "size" names (introduced in March 2024) is probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price (in the API), speed, and capability.
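For API users, that tier tradeoff boils down to a model string on each request. Here's a minimal sketch using Anthropic's Python SDK; the model IDs follow Anthropic's published naming but change over time, so treat them as placeholders and verify against the current model list:

```python
# Minimal sketch of choosing a Claude tier via the API (pip install anthropic).
# The model IDs below follow Anthropic's naming conventions but change over
# time -- treat them as placeholders and check the current model list.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TIERS = [
    "claude-3-5-haiku-latest",   # cheapest and fastest
    "claude-sonnet-4-20250514",  # mid-range balance of price and capability
    "claude-opus-4-20250514",    # most capable, highest per-token price
]

for model in TIERS:
    reply = client.messages.create(
        model=model,
        max_tokens=100,
        messages=[{"role": "user", "content": "Summarize this diff in one line: ..."}],
    )
    print(f"{model}: {reply.content[0].text}")
```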
© Anthropic
We've been expecting it for a while, and now it's here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.
Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the side bar in the ChatGPT web app. Users enter a prompt and then click either "code" to have it begin producing code, or "ask" to have it answer questions and advise.
Whenever it's given a task, that task is performed in a distinct container that is preloaded with the user's codebase and is meant to accurately reflect their development environment.
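OpenAI hasn't published the internals, but the isolation model it describes is easy to picture: a throwaway container that starts with a copy of the repo and no outside access. Here's a rough sketch of that idea using the Docker Python SDK; the image, repo path, and test command are hypothetical stand-ins, not anything from OpenAI:

```python
# Rough sketch of the isolation model described above -- not OpenAI's
# published implementation. Each "task" runs in a disposable container
# preloaded with the user's codebase. Uses the Docker SDK (pip install docker).
import docker

client = docker.from_env()

logs = client.containers.run(
    image="python:3.12-slim",           # hypothetical base environment
    command="python -m pytest -q",      # the delegated task: run the test suite
    volumes={"/home/me/myrepo": {"bind": "/workspace", "mode": "rw"}},  # hypothetical repo path
    working_dir="/workspace",
    network_mode="none",                # no network access during the run
    remove=True,                        # container is discarded when done
)
print(logs.decode())
```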
© OpenAI
On Wednesday, OpenAI announced that ChatGPT users now have access to GPT-4.1, an AI language model that had been available only through the company's API since its launch one month ago. The update brings what OpenAI describes as improved coding and web development capabilities to paid ChatGPT subscribers, with a wider enterprise rollout planned in the coming weeks.
GPT-4.1 and GPT-4.1 mini join an already complex model selection that includes GPT-4o, various specialized GPT-4o versions, o1-pro, o3-mini, and o3-mini-high. There are technically nine AI models available to ChatGPT Pro subscribers. Wharton professor Ethan Mollick recently lampooned the awkward situation publicly on social media.
Deciding which AI model to use can be daunting for AI novices. Reddit users and OpenAI forum members alike commonly voice confusion about the available options. "I do not understand the reason behind having multiple models available for use," wrote one Reddit user in March. "Why would anyone use anything but the best one?" Another Redditor said they were "a bit lost" with the many ChatGPT models available after switching back from using Anthropic Claude.
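API users face the same choice in a more explicit form: the model is just a string parameter on each request. A minimal sketch with the official OpenAI Python SDK (the prompt is invented, and which models you can call varies by account):

```python
# Minimal sketch of model selection via the API (pip install openai).
# The model name is a per-request string; availability varies by account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # swap in "gpt-4o", "o3-mini", etc. -- same call shape
    messages=[{"role": "user", "content": "In one sentence, when should I pick a smaller model?"}],
)
print(response.choices[0].message.content)
```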
© Getty Images