Apple CEO reportedly urged Texas’ governor to ditch online child safety bill

23 May 2025 at 19:11
Apple CEO Tim Cook reportedly called Texas Gov. Greg Abbott to urge him to amend or veto a newly passed law in the state that would require the company to verify the ages of device owners, according to The Wall Street Journal. Abbott has yet to sign the bill. But Apple, alongside Google, has been working […]

Did Google lie about building a deadly chatbot? Judge finds it plausible.

22 May 2025 at 18:04

Ever since a mourning mother, Megan Garcia, filed a lawsuit alleging that Character.AI's dangerous chatbots caused her son's suicide, Google has maintained that it had nothing to do with C.AI's development, a position that would let it dodge claims that it contributed to the platform's design and was unjustly enriched.

But Google lost its motion to dismiss the lawsuit on Wednesday after a US district judge, Anne Conway, found that Garcia had plausibly alleged that Google played a part in C.AI's design by providing a component part and "substantially" participating "in integrating its models" into C.AI. Garcia also plausibly alleged that Google aided and abetted C.AI in harming her son, 14-year-old Sewell Setzer III.

Google similarly failed to toss claims of unjust enrichment, as Conway suggested that Garcia plausibly alleged that Google benefited from access to Setzer's user data. The only win for Google was a dropped claim that C.AI makers were guilty of intentional infliction of emotional distress, with Conway agreeing that Garcia didn't meet the requirements, as she wasn't "present to witness the outrageous conduct directed at her child."


Trump’s hasty Take It Down Act has “gaping flaws” that threaten encryption

28 April 2025 at 21:09

The Take It Down Act, which requires platforms to remove both real and artificial intelligence-generated non-consensual intimate imagery (NCII) within 48 hours of victims' reports, is widely expected to pass a vote in the House of Representatives tonight.

After that, it goes to Donald Trump's desk, where the president has confirmed that he will promptly sign it into law, joining first lady Melania Trump in strongly campaigning for its swift passage. Victims-turned-advocates, many of them children, similarly pushed lawmakers to take urgent action to protect a growing number of victims from the increasing risk of being repeatedly targeted in fake sexualized images or revenge porn, which experts say can quickly spread widely online.

Digital privacy experts raised concerns, warning that the law seemed overly broad and could trigger widespread censorship online. Given such a short window to comply, platforms will likely remove some content that may not be NCII, the Electronic Frontier Foundation (EFF) warned. Even more troubling, the law does not explicitly exempt encrypted messages, which could one day push platforms to break encryption under the threat of liability. The removal process also seemed ripe for abuse by people hoping platforms will automatically take down any reported content, especially after Trump admitted that he would use the law to censor his enemies.


Anthropic CEO wants to open the black box of AI models by 2027

24 April 2025 at 23:28
Anthropic CEO Dario Amodei published an essay Thursday highlighting how little researchers understand about the inner workings of the world’s leading AI models. To address that, Amodei set an ambitious goal for Anthropic to reliably detect most AI model problems by 2027. Amodei acknowledges the challenge ahead. In “The Urgency of Interpretability,” the CEO says Anthropic has […]

OpenAI’s GPT-4.1 may be less aligned than the company’s previous AI models

23 April 2025 at 17:54
In mid-April, OpenAI launched a powerful new AI model, GPT-4.1, that the company claimed “excelled” at following instructions. But the results of several independent tests suggest the model is less aligned — that is to say, less reliable — than previous OpenAI releases. When OpenAI launches a new model, it typically publishes a detailed technical […]

Researchers concerned to find AI models misrepresenting their “reasoning” processes

10 April 2025 at 22:37

Remember when teachers demanded that you "show your work" in school? Some new types of AI models promise to do exactly that, but new research suggests that the "work" they show can sometimes be misleading or disconnected from the actual process used to reach the answer.

New research from Anthropic—creator of the ChatGPT-like Claude AI assistant—examines simulated reasoning (SR) models like DeepSeek's R1 and Anthropic's own Claude series. In a research paper posted last week, Anthropic's Alignment Science team demonstrated that these SR models frequently fail to disclose when they've used external help or taken shortcuts, despite features designed to show their "reasoning" process.

(It's worth noting that OpenAI's o1 and o3 series SR models were excluded from this study.)


Google is shipping Gemini models faster than its AI safety reports

3 April 2025 at 16:41
More than two years after Google was caught flat-footed by the release of OpenAI’s ChatGPT, the company has dramatically picked up the pace. In late March, Google launched an AI reasoning model, Gemini 2.5 Pro, that leads the industry on several benchmarks measuring coding and math capabilities. That launch came just three months after the […]