Did Google lie about building a deadly chatbot? Judge finds it plausible.
Ever since a mourning mother, Megan Garcia, filed a lawsuit alleging that Character.AI's dangerous chatbots caused her son's suicide, Google has maintained that it had nothing to do with C.AI's development, hoping to dodge claims that it contributed to the platform's design and was unjustly enriched.
But Google lost its motion to dismiss the lawsuit on Wednesday after a US district judge, Anne Conway, found that Garcia had plausibly alleged that Google played a role in C.AI's design by providing a component part and "substantially" participating "in integrating its models" into C.AI. Garcia also plausibly alleged that Google aided and abetted C.AI in harming her son, 14-year-old Sewell Setzer III.
Google similarly failed to toss claims of unjust enrichment, as Conway suggested that Garcia plausibly alleged that Google benefited from access to Setzer's user data. The only win for Google was a dropped claim that C.AI makers were guilty of intentional infliction of emotional distress, with Conway agreeing that Garcia didn't meet the requirements, as she wasn't "present to witness the outrageous conduct directed at her child."
© via Center for Humane Technology
Trump's hasty Take It Down Act has "gaping flaws" that threaten encryption
The Take It Down Act, which requires platforms to remove both real and artificial intelligence-generated non-consensual intimate imagery (NCII) within 48 hours of victims' reports, is widely expected to pass a vote in the House of Representatives tonight.
After that, it goes to Donald Trump's desk, where the president has confirmed he will promptly sign it into law, joining first lady Melania Trump in campaigning strongly for its swift passage. Victims-turned-advocates, many of them children, have likewise pushed lawmakers to act urgently, warning that a growing number of people risk being repeatedly targeted in fake sexualized images or revenge porn that experts say can spread quickly and widely online.
Digital privacy experts have raised concerns, warning that the law is overly broad and could trigger widespread censorship online. Given the short compliance window, platforms will likely remove some content that is not actually NCII, the Electronic Frontier Foundation (EFF) warned. More troublingly, the law does not explicitly exempt encrypted messages, which could eventually pressure platforms to break encryption under the threat of liability. The takedown process also appears ripe for abuse by people hoping platforms will automatically remove any reported content, especially after Trump admitted that he would use the law to censor his enemies.
© Kayla Bartkowski / Staff | Getty Images News