AI chatbots tell users what they want to hear, and that’s problematic

12 June 2025 at 13:33

The world’s leading artificial intelligence companies are stepping up efforts to deal with a growing problem of chatbots telling people what they want to hear.

OpenAI, Google DeepMind, and Anthropic are all working to rein in the sycophantic tendency of their generative AI products to offer over-flattering responses to users.

The issue, which stems from how large language models are trained, has come into focus as more and more people adopt chatbots not only as research assistants at work but also as therapists and social companions in their personal lives.

“Godfather” of AI calls out latest models for lying to users

3 June 2025 at 14:35

One of the “godfathers” of artificial intelligence has attacked a multibillion-dollar race to develop the cutting-edge technology, saying the latest models are displaying dangerous characteristics such as lying to users.

Yoshua Bengio, a Canadian academic whose work has informed techniques used by top AI groups such as OpenAI and Google, said: “There’s unfortunately a very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety.”

The Turing Award winner issued his warning in an interview with the Financial Times, while launching a new non-profit called LawZero. He said the group would focus on building safer systems, vowing to “insulate our research from those commercial pressures.”

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”

16 May 2025 at 15:13

On Wednesday, the world was a bit perplexed by the Grok LLM's sudden insistence on turning practically every response toward the topic of alleged "white genocide" in South Africa. xAI now says that odd behavior was the result of "an unauthorized modification" to the Grok system prompt—the core set of directions for how the LLM should behave.

That prompt modification "directed Grok to provide a specific response on a political topic" and "violated xAI's internal policies and core values," xAI wrote on social media. The code review process in place for such changes was "circumvented in this incident," it continued, without providing further details on how such circumvention could occur.

To prevent similar problems from happening in the future, xAI says it has now implemented "additional checks and measures to ensure that xAI employees can't modify the prompt without review" and has put in place "a 24/7 monitoring team" to respond to any widespread issues with Grok's responses.
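
For context on what was changed: a system prompt is a standing block of instructions sent to the model ahead of every conversation, separate from any individual user's message, so a single edit to it shifts the model's behavior across all users at once. Here is a minimal sketch of the idea, assuming an OpenAI-compatible chat API of the kind xAI exposes; the endpoint, model name, and prompt text below are illustrative, not Grok's actual production configuration:

```python
# Minimal sketch: how a system prompt steers every response an LLM gives.
# Assumes an OpenAI-compatible chat endpoint; the base_url, model name,
# and prompt text are illustrative, not xAI's real configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",  # illustrative endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="grok-3",  # illustrative model name
    messages=[
        # The system prompt: standing instructions prepended to every chat.
        # Editing this one string changes behavior for all conversations,
        # which is why xAI now gates such edits behind mandatory review.
        {
            "role": "system",
            "content": "You are a helpful assistant. Answer factually "
                       "and do not push any particular political viewpoint.",
        },
        # The individual user's query.
        {"role": "user", "content": "What happened in the news today?"},
    ],
)
print(response.choices[0].message.content)
```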

Revelo’s LatAm talent network sees strong demand from US companies, thanks to AI

4 May 2025 at 15:00

Even as many tech companies mandate that their employees return to the office and emphasize building in-person teams, they are turning in droves to Latin America to find developer talent, especially for post-training AI models. Revelo, a full-stack platform of vetted developers in Latin America, is seeing a new surge in […]