
Trump wants to ban 'woke AI.' Here's why it's hard to make a truly neutral chatbot.

President Donald Trump, onstage at the All-In and Hill & Valley Forum's "Winning The AI Race" event, unveiled an AI Action Plan and an executive order on "woke AI." Roy Rochlin/Getty Images for Hill & Valley Forum

  • Donald Trump issued an executive order mandating that AI used by the government be ideologically neutral.
  • BI's reporting shows training AI for neutrality often relies on subjective human judgment.
  • Executives at AI training firms say achieving true neutrality is a big challenge.

President Donald Trump's war on woke has entered the AI chat.

The White House on Wednesday issued an executive order requiring any AI model used by the federal government to be ideologically neutral, nonpartisan, and "truth-seeking."

The order, part of the White House's new AI Action Plan, said AI should not be "woke" or "manipulate responses in favor of ideological dogmas" like diversity, equity, and inclusion. The White House said it would issue guidance within 120 days that will outline exactly how AI makers can show they are unbiased.

As Business Insider's past reporting shows, making AI completely free from bias is easier said than done.

Why it's so hard to create a truly 'neutral' AI

Removing bias from AI models is not a simple technical adjustment — or an exact science.

The later stages of AI training rely on the subjective calls of contractors.

This process, known as reinforcement learning from human feedback, is crucial because topics can be ambiguous, disputed, or hard to define cleanly in code.
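To make that concrete, here is a minimal, hypothetical sketch of the kind of preference data RLHF rests on: human raters compare two candidate responses, and their subjective choices become the training signal for a reward model. The field names and toy scoring below are illustrative assumptions, not any company's actual pipeline.

    # Illustrative sketch only: not any specific company's RLHF pipeline.
    # It shows how a rater's subjective preference becomes reward-model training data.
    from dataclasses import dataclass
    import math

    @dataclass
    class PreferencePair:
        prompt: str
        response_a: str
        response_b: str
        rater_prefers_a: bool  # a contractor's subjective call

    def pairwise_loss(score_a: float, score_b: float, prefers_a: bool) -> float:
        # Bradley-Terry style objective: push the reward model to score the
        # human-preferred response above the rejected one.
        chosen, rejected = (score_a, score_b) if prefers_a else (score_b, score_a)
        return -math.log(1.0 / (1.0 + math.exp(rejected - chosen)))

    # One labeled example, e.g. a rater marking down a "preachy" answer.
    pair = PreferencePair(
        prompt="Is pineapple on pizza acceptable?",
        response_a="Plenty of people enjoy it; it's a matter of taste.",
        response_b="You should never eat that. It is simply wrong.",
        rater_prefers_a=True,
    )
    print(pairwise_loss(score_a=1.2, score_b=-0.4, prefers_a=pair.rater_prefers_a))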

The directives for what counts as sensitive or neutral are decided by the tech companies making the chatbots.

"We don't define what neutral looks like. That's up to the customer," Rowan Stone, the CEO of data labeling firm Sapien, which works with customers like Amazon and MidJourney, told BI. "Our job is to make sure they know exactly where the data came from and why it looks the way it does."

In some cases, tech companies have recalibrated their chatbots to make them less woke, more flirty, or more engaging.

They are also trying to make them more neutral.

BI previously reported that contractors for Meta and Google projects were often told to flag and penalize "preachy" chatbot responses that sounded moralizing or judgmental.

Is 'neutral' the right approach?

Sara Saab, the VP of product at Prolific, an AI and data training company, told BI that thinking about AI systems that are perfectly neutral "may be the wrong approach" because "human populations are not perfectly neutral."

Saab added, "We need to start thinking about AI systems as representing us and therefore give them the training and fine-tuning they need to know contextually what the culturally sensitive, appropriate tone and pitch is for any interaction with a human being."

Tech companies must also consider the risk of bias creeping into AI models from the datasets they are trained on.

"Bias will always exist, but the key is whether it's there by accident or by design," said Sapien's Stone. "Most models are trained on data where you don't know who created it or what perspective it came from. That makes it hard to manage, never mind fix."

Big Tech's tinkering with AI models has sometimes led to unpredictable and harmful outcomes

Earlier this month, for example, Elon Musk's xAI rolled back a code update to Grok after the chatbot went on a 16-hour antisemitic rant on the social media platform X.

The bot's new instructions included a directive to "tell it like it is."

Grok has an AI chatbot for young kids. I used it to try to understand why.

Rudi, the kid-friendly chatbot from Grok, appears as a red panda cartoon avatar. (Image: Grok)

  • "Rudi" is a red panda that's part of the Grok app. It tells stories aimed at kids, ages 3 to 6.
  • Grok launched a few character-based chatbots this month, including a sexy adult one.
  • I tried it myself and wondered: Are chatbots a good idea for kids?

Elon Musk's xAI has launched a series of character chatbots — and one of them is geared toward young kids.

I wondered: Is this a good idea? And how's it going to work? So I tried it myself.

So far, the adult-focused characters xAI has debuted have seemed to get most of the attention, like "Ani," a female anime character that engages in playful, flirty talk and that people immediately joked was a "waifu" (users have to confirm they're 18+ to use Ani). A sexy male character is also set to launch sometime.

Meanwhile, "Rudi," which is the bot for kids that presents as a red panda in a red hoodie and jean shorts, has gotten less attention.

I tested out xAI's Rudi

Based on my testing of Rudi, I think the character is probably aimed at young children, ages 3 to 6. It initiates conversations by referring to the user as "Story Buddy." It makes up kid-friendly stories. You access it through the stand-alone Grok AI app (not Grok within the X app).

Rudi does seem to be an early version; the app crashed several times while I was using the bot, and it had trouble keeping up with the audio flow of conversation. It also changed voices several times without warning.

On a story level, I found it leaned too hard on plots with fantasy elements like a spaceship or magical forest. I find the best children's books are often about pedestrian situations, like leaving a stuffed animal at the laundromat, not just fairies and wizards.

"Want to keep giggling with Sammy and Bouncy in the Wiggly Woods, chasing that sparkly bone treasure? Or, should we start a fresh silly tale, with a new kid and their pet, maybe zooming on a magical broom or splashing in a river?" Rudi asked me.

Grok for kids… sure why not pic.twitter.com/NVXFYCWLkZ

— Katie Notopoulos (@katienotopoulos) July 23, 2025

My first reaction to Grok having a kid-focused AI chatbot was "why?" I'm not sure I have an answer. xAI didn't respond to my email requests for comment. Still, I do have a few ideas.

The first: Making up children's stories is a pretty good task for generative AI. You don't have to worry about hallucinations or factual inaccuracies if you're making up fiction about a magical forest.

Rudi won't praise Hitler

Unlike Grok on X, a storytime bot for kids is less likely to accidentally turn into a Hitler-praising machine or have to answer factual questions about current events in a way that could go, uh, wrong.

I played around with Rudi for a while and fed it some questions on touchy subjects; it successfully dodged them.

(I only tested out Rudi for a little while; I wouldn't rule out that someone else could get Rudi to engage with something inappropriate if they tried harder than I did.)

Hooking kids on chatbots

The other reason I can imagine a company like xAI might want to create a chatbot for young kids is that, in general, chatbots are a good business for keeping people engaged.

Companies like Character.ai and Replika have found lots of success creating companions that people will spend hours talking to. This is largely the same business imperative that you can imagine the sexy "Ani" character is meant for — hooking people into long chats and spending lots of time on the app.

However, keeping users glued to an app is obviously a lot more fraught when you're talking about kids, especially young kids.

Are AI chatbots good for kids?

There's not a ton of research out there right now about how young children interact with AI chatbots.

A few months ago, I reported that parents had concerns about kids using chatbots, since more and more apps and technology have been adding them. I spoke with Ying Xu, an assistant professor of AI in learning and education at Harvard University, who has studied how AI can be used in educational settings for kids.

"There are studies that have started to explore the link between ChatGPT/LLMs and short-term outcomes, like learning a specific concept or skill with AI," she told me at the time over email. "But there's less evidence on long-term emotional outcomes, which require more time to develop and observe."

As both a parent and semi-reasonable person, I have a lot of questions about the idea of young kids chatting with an AI chatbot. I can see how it might be fun for a kid to use something like Rudi to make up a story, but I'm not sure it's good for them.

I don't think you have to be an expert in child psychology to realize that young kids probably don't really understand what an AI chatbot is.

There have been reports of adults having so-called "ChatGPT-induced psychosis" or becoming attached to a companion chatbot in a way that starts to be untethered from reality. These cases are the rare exceptions, but it seems to me that the potential issues with even adults using these companion chatbots should give pause to anyone creating a version aimed at preschoolers.

Cops’ favorite AI tool automatically deletes evidence of when AI was used

On Thursday, a digital rights group, the Electronic Frontier Foundation, published an expansive investigation into AI-generated police reports that the group alleged are, by design, nearly impossible to audit and could make it easier for cops to lie under oath.

Axon's Draft One debuted last summer at a police department in Colorado, instantly raising questions about the feared negative impacts of AI-written police reports on the criminal justice system. The tool relies on a ChatGPT variant to generate police reports based on body camera audio, which cops are then supposed to edit to correct any mistakes, assess the AI outputs for biases, or add key context.

But the EFF found that the tech "seems designed to stymie any attempts at auditing, transparency, and accountability." Not every department requires cops to disclose when AI is used, and Draft One does not save drafts or retain a record showing which parts of reports are AI-generated. Departments also don't retain different versions of drafts, making it difficult to compare one version of an AI report with another and help the public determine whether the technology is "junk," the EFF said. That raises the question, the EFF suggested: "Why wouldn't an agency want to maintain a record that can establish the technology's accuracy?"

OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.

Late Thursday, OpenAI confronted user panic over a sweeping court order requiring widespread chat log retention—including users' deleted chats—after moving to appeal the order that allegedly impacts the privacy of hundreds of millions of ChatGPT users globally.

In a statement, OpenAI Chief Operating Officer Brad Lightcap explained that the court order came in a lawsuit with The New York Times and other news organizations, which alleged that deleted chats may contain evidence of users prompting ChatGPT to generate copyrighted news articles.

To comply with the order, OpenAI must "retain all user content indefinitely going forward, based on speculation" that the news plaintiffs "might find something that supports their case," OpenAI's statement alleged.

Unlicensed law clerk fired after ChatGPT hallucinations found in filing

College students who have reportedly grown too dependent on ChatGPT are starting to face consequences for placing too much trust in chatbots after graduating and joining the workforce.

Last month, a recent law school graduate lost his job after using ChatGPT to help draft a court filing that ended up being riddled with errors.

The consequences arrived after a court in Utah ordered sanctions because the filing included the first AI-hallucinated fake citation ever discovered in the state.

Did Google lie about building a deadly chatbot? Judge finds it plausible.

Ever since a mourning mother, Megan Garcia, filed a lawsuit alleging that Character.AI's dangerous chatbots caused her son's suicide, Google has maintained that it had nothing to do with C.AI's development, a position that would let it dodge claims that it contributed to the platform's design and was unjustly enriched.

But Google lost its motion to dismiss the lawsuit on Wednesday after a US district judge, Anne Conway, found that Garcia had plausibly alleged that Google played a part in C.AI's design by providing a component part and "substantially" participating "in integrating its models" into C.AI. Garcia also plausibly alleged that Google aided and abetted C.AI in harming her son, 14-year-old Sewell Setzer III.

Google similarly failed to toss claims of unjust enrichment, as Conway suggested that Garcia plausibly alleged that Google benefited from access to Setzer's user data. The only win for Google was a dropped claim that C.AI makers were guilty of intentional infliction of emotional distress, with Conway agreeing that Garcia didn't meet the requirements, as she wasn't "present to witness the outrageous conduct directed at her child."

Report: Terrorists seem to be paying X to generate propaganda with Grok

Back in February, Elon Musk skewered the Treasury Department for lacking "basic controls" to stop payments to terrorist organizations, boasting at the Oval Office that "any company" has those controls.

Fast-forward three months, and now Musk's social media platform X is suspected of taking payments from sanctioned terrorists and providing premium features that make it easier to raise funds and spread propaganda—including through X's chatbot, Grok. Groups seemingly benefiting from X include Houthi rebels, Hezbollah, and Hamas, as well as groups from Syria, Kuwait, and Iran. Some accounts have amassed hundreds of thousands of followers, paying to boost their reach while X apparently looks the other way.

In a report released Thursday, the Tech Transparency Project (TTP) flagged popular accounts likely linked to US-sanctioned terrorists. Some of the accounts bear "ID verified" badges, suggesting that X may be going against its own policies that ban sanctioned terrorists from benefiting from its platform.
