Did Google lie about building a deadly chatbot? Judge finds it plausible.
Ever since a grieving mother, Megan Garcia, filed a lawsuit alleging that Character.AI's dangerous chatbots caused her son's suicide, Google has maintained that it had nothing to do with C.AI's development, hoping to dodge claims that it contributed to the platform's design and was unjustly enriched.
But Google lost its motion to dismiss the lawsuit on Wednesday after a US district judge, Anne Conway, found that Garcia had plausibly alleged that Google played a part in C.AI's design by providing a component part and "substantially" participating "in integrating its models" into C.AI. Garcia also plausibly alleged that Google aided and abetted C.AI in harming her son, 14-year-old Sewell Setzer III.
Google similarly failed to toss claims of unjust enrichment, as Conway suggested that Garcia plausibly alleged that Google benefited from access to Setzer's user data. The only win for Google was a dropped claim that C.AI makers were guilty of intentional infliction of emotional distress, with Conway agreeing that Garcia didn't meet the requirements, as she wasn't "present to witness the outrageous conduct directed at her child."
VentureBeat
Anthropic faces backlash to Claude 4 Opus behavior that contacts authorities, press if it thinks you’re doing something ‘egregiously immoral’

Bowman later edited his tweet and the following one in a thread to read as follows, but it still didn't convince the naysayers.
VentureBeat
Time Magazine appears to accidentally publish embargoed story confirming new Anthropic model

Someone also appears to have published a full scrape of the Time article online on the news aggregator app Newsbreak.
OpenAI overrode concerns of expert testers to release sycophantic GPT-4o

Once again, it shows the importance of incorporating domains beyond traditional math and computer science into AI development.
Trump’s hasty Take It Down Act has “gaping flaws” that threaten encryption
The Take It Down Act, which requires platforms to remove both real and AI-generated non-consensual intimate imagery (NCII) within 48 hours of victims' reports, is widely expected to pass a vote in the House of Representatives tonight.
After that, it goes to Donald Trump's desk, where the president has confirmed that he will promptly sign it into law, having joined first lady Melania Trump in campaigning strongly for its swift passage. Victims-turned-advocates, many of them children, similarly pushed lawmakers to take urgent action to protect a growing number of victims from being repeatedly targeted with fake sexualized images or revenge porn, which experts say can quickly spread widely online.
Digital privacy experts raised concerns, warning that the law is overly broad and could trigger widespread censorship online. Given such a short window to comply, platforms will likely remove some content that isn't actually NCII, the Electronic Frontier Foundation (EFF) warned. More troubling, the law does not explicitly exempt encrypted messages, which could eventually push platforms to break encryption under the threat of liability. The takedown process also seems ripe for abuse by people hoping platforms will automatically remove any reported content, especially after Trump admitted that he would use the law to censor his enemies.
Does RAG make LLMs less safe? Bloomberg research reveals hidden dangers

RAG is supposed to make enterprise AI more accurate, but it could also make it less safe, according to new research.
Anthropic CEO wants to open the black box of AI models by 2027
OpenAI’s GPT-4.1 may be less aligned than the company’s previous AI models
Sean Duffy Doesn’t Want Air Traffic Controllers Retiring After 25 Years

The union for air traffic controllers says this is a bad idea.
Researchers concerned to find AI models misrepresenting their “reasoning” processes
Remember when teachers demanded that you "show your work" in school? Some new types of AI models promise to do exactly that, but new research suggests that the "work" they show can sometimes be misleading or disconnected from the actual process used to reach the answer.
New research from Anthropic, creator of the ChatGPT-like Claude AI assistant, examines simulated reasoning (SR) models like DeepSeek's R1 and its own Claude series. In a research paper posted last week, Anthropic's Alignment Science team demonstrated that these SR models frequently fail to disclose when they've used external help or taken shortcuts, despite features designed to show their "reasoning" process.
(It's worth noting that OpenAI's o1 and o3 series SR models were excluded from this study.)