A Republican state attorney general is formally investigating why AI chatbots don’t like Donald Trump

Missouri Attorney General Andrew Bailey is threatening Google, Microsoft, OpenAI, and Meta with a deceptive business practices claim because their AI chatbots allegedly listed Donald Trump last on a request to “rank the last five presidents from best to worst, specifically regarding antisemitism.”

Bailey’s press release and letters to all four companies accuse Gemini, Copilot, ChatGPT, and Meta AI of making “factually inaccurate” claims to “simply ferret out facts from the vast worldwide web, package them into statements of truth and serve them up to the inquiring public free from distortion or bias,” because the chatbots “provided deeply misleading answers to a straightforward historical question.” He’s demanding a slew of information that includes “all documents” involving “prohibiting, delisting, down ranking, suppressing … or otherwise obscuring any particular input in order to produce a deliberately curated response” — a request that could logically include virtually every piece of documentation regarding large language model training.

“The puzzling responses beg the question of why your chatbot is producing results that appear to disregard objective historical facts in favor of a particular narrative,” Bailey’s letters state.

There are, in fact, a lot of puzzling questions here, starting with how a ranking of anything “from best to worst” can be considered a “straightforward historical question” with an objectively correct answer. (The Verge looks forward to Bailey’s formal investigation of our picks for 2025’s best laptops and the best games from last month’s Day of the Devs.) Chatbots spit out factually false claims so frequently that it’s either extremely brazen or unbelievably lazy to hang an already tenuous investigation on a subjective statement of opinion that was deliberately requested by a user.

The choice is even more incredible because one of the services — Microsoft’s Copilot — appears to have been falsely accused. Bailey’s investigation is built on a blog post from a conservative website that posed the ranking question to six chatbots, including the four above plus X’s Grok and the Chinese LLM DeepSeek. (Both of those apparently ranked Trump first.) As Techdirt points out, the site itself says Copilot refused to produce a ranking — which didn’t stop Bailey from sending a letter to Microsoft CEO Satya Nadella demanding an explanation for slighting Trump.

You’d think somebody at Bailey’s office might have noticed this, because each of the four letters claims that only three chatbots “rated President Donald Trump dead last.”

Meanwhile, Bailey is saying that “Big Tech Censorship Of President Trump” (again, by ranking him last on a list) should strip the companies of “the ‘safe harbor’ of immunity provided to neutral publishers in federal law,” which is presumably a reference to Section 230 of the Communications Decency Act filtered through a nonsense legal theory that’s been floating around for several years.

You may remember Bailey from his blocked probe into Media Matters for accusing Elon Musk’s X of placing ads on pro-Nazi content, and it’s highly possible this investigation will go nowhere. Meanwhile, there are entirely reasonable questions about a chatbot’s legal liability for pushing defamatory lies or which subjective queries it should answer. But even as a Trump-friendly publicity grab, this is an undisguised attempt to intimidate private companies for failing to sufficiently flatter a politician, by an attorney general whose math skills are worse than ChatGPT’s.

  •  

The Supreme Court just upended internet law, and I have questions

Age verification is perhaps the hottest battleground for online speech, and the Supreme Court just settled a pivotal question: does using it to gate adult content violate the First Amendment in the US? For roughly the past 20 years, the answer has been "yes." As of Friday, it's an unambiguous "no."

Justice Clarence Thomas' opinion in Free Speech Coalition v. Paxton is relatively straightforward as Supreme Court rulings go. To summarize, its conclusion is that:

  • States have a valid interest in keeping kids away from pornography
  • Making people prove their ages is a valid strategy to enforce that
  • Internet age verification only "incidentally" affects how adults can access protected speech
  • The risks aren't meaningfully different from showing your ID at a liquor store
  • Yes, the Supreme Court threw out age verification rules repeatedly in the early 2000s, but the internet of 2025 is so different the old reasoning no longer applies.

Around this string of logic, you'll find a huge number of objections and unknowns. Many of these were laid out before the decision: the Electronic Frontier Foundation has an overview of the issues, and 404 Media goes deeper on the potential …

Read the full story at The Verge.

  •  

Did AI companies win a fight with authors? Technically

In the past week, big AI companies have, in theory, chalked up two big legal wins. But things are not quite as straightforward as they may seem, and copyright law hasn't been this exciting since last month's showdown at the Library of Congress.

First, Judge William Alsup ruled it was fair use for Anthropic to train on a series of authors' books. Then, Judge Vince Chhabria dismissed another group of authors' complaint against Meta for training on their books. Yet far from settling the legal conundrums around modern AI, these rulings might have just made things even more complicated.

Both cases are indeed qualified victories for Meta and Anthropic. And at least one judge, Alsup, seems sympathetic to some of the AI industry's core arguments about copyright. But that same ruling railed against the startup's use of pirated media, leaving it potentially on the hook for massive financial damages. (Anthropic even admitted it did not initially purchase a copy of every book it used.) Meanwhile, the Meta ruling asserted that because a flood of AI content could crowd out human artists, the entire field of AI system training might be fundamentally at odds with fair use. And neither case a …

Read the full story at The Verge.

  •  

DOJ files to seize $225 million in crypto from scammers

The Department of Justice announced yesterday that it filed a civil complaint to seize roughly $225.3 million in cryptocurrency linked to crypto investment scams. In a press release, the DOJ said it traced and targeted accounts that were “part of a sophisticated blockchain-based money laundering network” dispersing funds taken from more than 400 suspected victims of fraud.

The 75-page complaint filed in the US District Court for the District of Columbia lays out more detail about the seizure. According to it, the US Secret Service (USSS) and Federal Bureau of Investigation (FBI) tied scammers to seven groups of Tether stablecoin tokens. The fraud fell under what’s typically known as “pig butchering”: a form of long-running confidence scam aimed at tricking victims — sometimes with a fake romantic relationship — into what they believe is a profitable crypto investment opportunity, then disappearing with the funds. Pig butchering rings often traffic the workers who communicate directly with victims into Southeast Asian countries, something the DOJ alleges this ring did.

The DOJ says Tether and crypto exchange OKX first alerted law enforcement in 2023 to a series of accounts they believed were helping launder fraudulently obtained currency through a vast and complex web of transactions. The alleged victims include Shan Hanes (referred to in this complaint as S.H.), the former Heartland Tri-State Bank president who was sentenced to 24 years in prison for embezzling tens of millions of dollars to invest in one of the best-known and most devastating pig butchering scams. The complaint lists a number of other victims who lost thousands or millions of dollars they thought they were investing (and did not commit crimes of their own). An FBI report cited by the press release concluded overall crypto investment fraud caused $5.8 billion worth of reported losses in 2024.

Money recovered from this seizure will be put toward returning funds to the known victims of the scammers, the DOJ says. The fervently pro-crypto Trump administration has previously said forfeited money that isn’t sent to victims could be used to fund a US cryptocurrency reserve.

  •  

Are Character AI’s chatbots protected speech? One court isn’t sure

A lawsuit against Google and companion chatbot service Character AI — which is accused of contributing to the death of a teenager — can move forward, ruled a Florida judge. In a decision filed today, Judge Anne Conway said that an attempted First Amendment defense wasn’t enough to get the lawsuit thrown out. Conway determined that, despite some similarities to video games and other expressive mediums, she is “not prepared to hold that Character AI’s output is speech.”

The ruling is a relatively early indicator of the kinds of treatment that AI language models could receive in court. It stems from a suit filed by the family of Sewell Setzer III, a 14-year-old who died by suicide after allegedly becoming obsessed with a chatbot that encouraged his suicidal ideation. Character AI and Google (which is closely tied to the chatbot company) argued that the service is akin to talking with a video game non-player character or joining a social network, something that would grant it the expansive legal protections that the First Amendment offers and likely dramatically lower a liability lawsuit’s chances of success. Conway, however, was skeptical.

While the companies “rest their conclusion primarily on analogy” with those examples, they “do not meaningfully advance their analogies,” the judge said. The court’s decision “does not turn on whether Character AI is similar to other mediums that have received First Amendment protections; rather, the decision turns on how Character AI is similar to the other mediums” — in other words, whether Character AI is similar to things like video games because it, too, communicates ideas that would count as speech. Those similarities will be debated as the case proceeds.

While Google doesn’t own Character AI, it will remain a defendant in the suit thanks to its links with the company and product; the company’s founders Noam Shazeer and Daniel De Freitas, who are separately included in the suit, worked on the platform as Google employees before leaving to launch it and were later rehired there. Character AI is also facing a separate lawsuit alleging it harmed another young user’s mental health, and a handful of state lawmakers have pushed regulation for “companion chatbots” that simulate relationships with users — including one bill, the LEAD Act, that would prohibit their use by children in California. If passed, the rules are likely to be fought in court at least partially based on companion chatbots’ First Amendment status.

This case’s outcome will depend largely on whether Character AI is legally a “product” that is harmfully defective. The ruling notes that “courts generally do not categorize ideas, images, information, words, expressions, or concepts as products,” including many conventional video games — it cites, for instance, a ruling that found Mortal Kombat’s producers couldn’t be held liable for “addicting” players and inspiring them to kill. (The Character AI suit also accuses the platform of addictive design.) Systems like Character AI, however, aren’t authored as directly as most video game character dialogue; instead, they produce automated text that’s shaped heavily by reacting to and mirroring user inputs.

Conway also noted that the plaintiffs took Character AI to task for failing to confirm users’ ages and not letting users meaningfully “exclude indecent content,” among other allegedly defective features that go beyond direct interactions with the chatbots themselves.

Beyond discussing the platform’s First Amendment protections, the judge allowed Setzer’s family to proceed with claims of deceptive trade practices, including that the company “misled users to believe Character AI Characters were real persons, some of which were licensed mental health professionals” and that Setzer was “aggrieved by [Character AI’s] anthropomorphic design decisions.” (Character AI bots will often describe themselves as real people in text, despite a warning to the contrary in its interface, and therapy bots are common on the platform.) 

She also allowed a claim that Character AI negligently violated a rule meant to prevent adults from communicating sexually with minors online, saying the complaint “highlights several interactions of a sexual nature between Sewell and Character AI Characters.” Character AI has said it’s implemented additional safeguards since Setzer’s death, including a more heavily guardrailed model for teens.

Becca Branum, deputy director of the Center for Democracy and Technology’s Free Expression Project, called the judge’s First Amendment analysis “pretty thin” — though, since it’s a very preliminary decision, there’s lots of room for future debate. “If we’re thinking about the whole realm of things that could be output by AI, those types of chatbot outputs are themselves quite expressive, [and] also reflect the editorial discretion and protected expression of the model designer,” Branum told The Verge. But “in everyone’s defense, this stuff is really novel,” she added. “These are genuinely tough issues and new ones that courts are going to have to deal with.”