Two major AI coding tools wiped out user data after making cascading mistakes

24 July 2025 at 21:01

New types of AI coding assistants promise to let anyone build software by typing commands in plain English. But when these tools generate incorrect internal representations of what's happening on your computer, the results can be catastrophic.

Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding"—using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code.

The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed.
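For readers wondering how a simple move can destroy data rather than just fail: when a move command's destination directory does not exist, many file operations silently rename the source to the destination path instead of moving it into a folder, so each successive move overwrites the one before it. Here is a minimal Python sketch of that failure mode, using hypothetical file names rather than the actual commands Gemini CLI issued:

```python
import os
import shutil
import tempfile

# Hypothetical simulation of the reported failure mode: "moving" files
# into a destination directory that was never actually created.
workdir = tempfile.mkdtemp()
for name in ("a.txt", "b.txt", "c.txt"):
    with open(os.path.join(workdir, name), "w") as f:
        f.write(f"contents of {name}\n")

dest = os.path.join(workdir, "new_folder")  # note: os.mkdir(dest) never ran

for name in ("a.txt", "b.txt", "c.txt"):
    # Because dest is not an existing directory, each call renames the
    # file to the path "new_folder" instead of placing it inside a
    # folder, silently clobbering the result of the previous call.
    shutil.move(os.path.join(workdir, name), dest)

print(os.listdir(workdir))  # ['new_folder'] -- two of the three files are gone
print(open(dest).read())    # only "contents of c.txt" survives
```

This mirrors the article's framing: the model's internal picture of the filesystem no longer matched reality, so it kept issuing moves against a folder it believed existed.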

© Benj Edwards / Getty Images

Trump’s order to make chatbots anti-woke is unconstitutional, senator says

24 July 2025 at 18:21

The CEOs of every major artificial intelligence company received letters Wednesday urging them to fight Donald Trump's anti-woke AI order.

Trump's executive order requires any AI company hoping to contract with the federal government to jump through two hoops to win funding. First, they must prove their AI systems are "truth-seeking"—with outputs based on "historical accuracy, scientific inquiry, and objectivity" or else acknowledge when facts are uncertain. Second, they must train AI models to be "neutral," which is vaguely defined as not favoring DEI (diversity, equity, and inclusion), "dogmas," or otherwise being "intentionally encoded" to produce "partisan or ideological judgments" in outputs "unless those judgments are prompted by or otherwise readily accessible to the end user."

Announcing the order in a speech, Trump said that the US winning the AI race depended on removing allegedly liberal biases, proclaiming that "once and for all, we are getting rid of woke."

© Chip Somodevilla / Staff | Getty Images News

Grok has an AI chatbot for young kids. I used it to try to understand why.

23 July 2025 at 18:50

Rudi, the kid-friendly chatbot from Grok.

  • "Rudi" is a red panda that's part of the Grok app. It tells stories aimed at kids, ages 3 to 6.
  • Grok launched a few character-based chatbots this month, including a sexy adult one.
  • I tried it myself and wondered: Are chatbots a good idea for kids?

Elon Musk's xAI has launched a series of character chatbots — and one of them is geared toward young kids.

I wondered: Is this a good idea? And how's it going to work? So I tried it myself.

So far, the adult-focused characters xAI has debuted seem to have gotten most of the attention, like "Ani," a female anime character that engages in playful, flirty talk and that people immediately joked was a "waifu" (users have to confirm they're 18+ to use Ani). A sexy male character is also set to launch at some point.

Meanwhile, "Rudi," the bot for kids, which presents as a red panda in a red hoodie and jean shorts, has gotten less attention.

I tested out xAI's Rudi

Based on my testing of Rudi, I think the character is probably aimed at young children, ages 3 to 6. It initiates conversations by referring to the user as "Story Buddy." It makes up kid-friendly stories. You access it through the stand-alone Grok AI app (not Grok within the X app).

Rudi does seem to be an early version; the app crashed several times while I was using the bot, and it had trouble keeping up with the audio flow of conversation. It also changed voices several times without warning.

On a story level, I found it leaned too hard on plots with fantasy elements like spaceships and magical forests. The best children's books, I find, are often about pedestrian situations, like leaving a stuffed animal at the laundromat, not just fairies and wizards.

"Want to keep giggling with Sammy and Bouncy in the Wiggly Woods, chasing that sparkly bone treasure? Or, should we start a fresh silly tale, with a new kid and their pet, maybe zooming on a magical broom or splashing in a river?" Rudi asked me.

Grok for kids… sure why not pic.twitter.com/NVXFYCWLkZ

— Katie Notopoulos (@katienotopoulos) July 23, 2025

My first reaction to Grok having a kid-focused AI chatbot was "why?" I'm not sure I have an answer. xAI didn't respond to my email requests for comment. Still, I do have a few ideas.

The first: Making up children's stories is a pretty good task for generative AI. You don't have to worry about hallucinations or factual inaccuracies if you're making up fiction about a magical forest.

Rudi won't praise Hitler

Unlike Grok on X, a storytime bot for kids is less likely to accidentally turn into a Hitler-praising machine or have to answer factual questions about current events in a way that could go, uh, wrong.

I played around with Rudi for a while and fed it some questions on touchy subjects, which it successfully dodged.

(I only tested out Rudi for a little while; I wouldn't rule out that someone else could get Rudi to engage with something inappropriate if they tried harder than I did.)

Hooking kids on chatbots

The other reason I can imagine a company like xAI might want to create a chatbot for young kids is that chatbots are, in general, good business for keeping people engaged.

Companies like Character.ai and Replika have found lots of success creating companions that people will spend hours talking to. You can imagine the sexy "Ani" character serving largely the same business imperative: hooking people into long chats and lots of time spent on the app.

However, keeping users glued to an app is obviously a lot more fraught when you're talking about kids, especially young kids.

Are AI chatbots good for kids?

There's not a ton of research out there right now about how young children interact with AI chatbots.

A few months ago, I reported that parents had concerns about kids using chatbots, since more and more apps and technologies have been adding them. I spoke with Ying Xu, an assistant professor of AI in learning and education at Harvard University, who has studied how AI can be used in educational settings for kids.

"There are studies that have started to explore the link between ChatGPT/LLMs and short-term outcomes, like learning a specific concept or skill with AI," she told me at the time over email. "But there's less evidence on long-term emotional outcomes, which require more time to develop and observe."

As both a parent and semi-reasonable person, I have a lot of questions about the idea of young kids chatting with an AI chatbot. I can see how it might be fun for a kid to use something like Rudi to make up a story, but I'm not sure it's good for them.

I don't think you have to be an expert in child psychology to realize that young kids probably don't really understand what an AI chatbot is.

There have been reports of adults having so-called "ChatGPT-induced psychosis" or becoming attached to a companion chatbot in a way that starts to be untethered from reality. These cases are the rare exceptions, but it seems to me that the potential issues with even adults using these companion chatbots should give pause to anyone creating a version aimed at preschoolers.

NYT to start searching deleted ChatGPT logs after beating OpenAI in court

2 July 2025 at 16:34

Last week, OpenAI raised objections in court, hoping to overturn a court order requiring the AI company to retain all ChatGPT logs "indefinitely," including deleted and temporary chats.

But Sidney Stein, the US district judge reviewing OpenAI's request, immediately denied OpenAI's objections. He was seemingly unmoved by the company's claims that the order forced OpenAI to abandon "long-standing privacy norms" and weaken privacy protections that users expect based on ChatGPT's terms of service. Rather, Stein suggested that OpenAI's user agreement specified that user data could be retained as part of a legal process, which, he said, is exactly what is happening now.

The order was issued by magistrate judge Ona Wang just days after news organizations, led by The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users are likely to delete chats where they attempted to use the chatbot to skirt paywalls to access news content.

© Pakorn Supajitsoontorn | iStock / Getty Images Plus

The résumé is dying, and AI is holding the smoking gun

24 June 2025 at 17:25

Employers are drowning in AI-generated job applications, with LinkedIn now processing 11,000 submissions per minute—a 45 percent surge from last year, according to new data reported by The New York Times.

Due to AI, the traditional hiring process has become overwhelmed with automated noise. It's the résumé equivalent of the AI slop that currently haunts social media and the web with sensational pictures and misleading information (call it "hiring slop," perhaps). The flood of ChatGPT-crafted résumés and bot-submitted applications has created an arms race between job seekers and employers, with both sides deploying increasingly sophisticated AI tools in a bot-versus-bot standoff that is quickly spiraling out of control.

The Times illustrates the scale of the problem with the story of an HR consultant named Katie Tanner, who was so inundated with over 1,200 applications for a single remote role that she had to remove the post entirely and was still sorting through the applications three months later.

© sturti via Getty Images

Judge denies creating “mass surveillance program” harming all ChatGPT users

23 June 2025 at 17:33

After a court ordered OpenAI to "indefinitely" retain the ChatGPT logs of millions of users, including deleted chats, two panicked users tried and failed to intervene. The order sought to preserve potential evidence in a copyright infringement lawsuit brought by news organizations.

In May, Judge Ona Wang, who drafted the order, rejected the first user's request, made on behalf of his company, simply because the company should have hired a lawyer to draft the filing. But more recently, Wang rejected a second claim from another ChatGPT user in an order that went into greater detail, revealing how the judge is weighing opposition to the order ahead of oral arguments this week, which OpenAI urgently requested.

The second request to intervene came from a ChatGPT user named Aidan Hunt, who said that he uses ChatGPT "from time to time," occasionally sending OpenAI "highly sensitive personal and commercial information in the course of using the service."

© Yurii Karvatskyi | iStock / Getty Images Plus

To avoid admitting ignorance, Meta AI says man’s number is a company helpline

20 June 2025 at 15:12

Anyone whose phone number is just one digit off from a popular restaurant or community resource has long borne the burden of either screening or redirecting misdials. But now, AI chatbots could exacerbate this inconvenience by accidentally giving out private numbers when users ask for businesses' contact information.

Apparently, the AI helper that Meta created for WhatsApp may even be trained to tell white lies when users try to correct the dissemination of WhatsApp user numbers.

According to The Guardian, Barry Smethurst, a record shop worker in the United Kingdom, asked WhatsApp's AI helper for a contact number for TransPennine Express after his morning train never showed up.

© Moor Studio | DigitalVision Vectors

Scientists once hoarded pre-nuclear steel; now we’re hoarding pre-AI content

18 June 2025 at 11:15

Former Cloudflare executive John Graham-Cumming recently announced that he launched a website, lowbackgroundsteel.ai, that treats pre-AI, human-created content like a precious commodity—a time capsule of organic creative expression from a time before machines joined the conversation. "The idea is to point to sources of text, images and video that were created prior to the explosion of AI-generated content," Graham-Cumming wrote on his blog last week. The reason? To preserve what made non-AI media uniquely human.

The archive name comes from a scientific phenomenon from the Cold War era. After nuclear weapons testing began in 1945, atmospheric radiation contaminated new steel production worldwide. For decades, scientists needing radiation-free metal for sensitive instruments had to salvage steel from pre-war shipwrecks. Scientists called this steel "low-background steel." Graham-Cumming sees a parallel with today's web, where AI-generated content increasingly mingles with human-created material and contaminates it.

With the advent of generative AI models like ChatGPT and Stable Diffusion in 2022, it has become far more difficult for researchers to ensure that media found on the Internet was created by humans without using AI tools. ChatGPT in particular triggered an avalanche of AI-generated text across the web, forcing at least one research project to shut down entirely.

© National Nuclear Security Administration/Public domain

Reddit sues Anthropic over AI scraping that retained users’ deleted posts

5 June 2025 at 16:57

On the heels of an OpenAI controversy over deleted posts, Reddit sued Anthropic on Wednesday, accusing the AI company of "intentionally" training AI models on the "personal data of Reddit users"—including their deleted posts—"without ever requesting their consent."

Calling Anthropic two-faced for depicting itself as a "white knight of the AI industry" while allegedly lying about AI scraping, Reddit painted Anthropic as the worst among major AI players. While Anthropic rivals like OpenAI and Google paid Reddit to license data—and, crucially, agreed to "Reddit’s licensing terms that protect Reddit and its users’ interests and privacy" and require AI companies to respect Redditors' deletions—Anthropic wouldn't participate in licensing talks, Reddit alleged.

"Unlike its competitors, Anthropic has refused to agree to respect Reddit users’ basic privacy rights, including removing deleted posts from its systems," Reddit's complaint said.

© SOPA Images / Contributor | LightRocket

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

5 June 2025 at 14:35

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

© Bloomberg via Getty Images

Google Gemini has 350M monthly users, reveals court hearing

23 April 2025 at 15:52

Gemini, Google’s AI chatbot, had 350 million monthly active users around the globe as of March, according to internal data revealed in Google’s ongoing antitrust suit. The Information first reported the stat. Usage of Google’s AI offerings has exploded in the last year. Gemini had just 9 million daily active users in October 2024, but […]