
Meta beefs up disappointing AI division with $15 billion Scale AI investment

Meta has invested $15 billion in data-labeling startup Scale AI and hired its co-founder, Alexandr Wang, as part of its bid to attract talent from rivals in a fiercely competitive market.

The deal values Scale at $29 billion, double its valuation last year. Scale said it would “substantially expand” its commercial relationship with Meta “to accelerate deployment of Scale’s data solutions,” without giving further details. Scale helps companies improve their artificial intelligence models by providing labeled training data.

Scale will distribute proceeds from Meta’s investment to shareholders, and Meta will own 49 percent of Scale’s equity following the transaction.

© Getty Images | NurPhoto

  •  

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Users on X (formerly Twitter) love to tag the verified @grok account in replies to get the large language model's take on any number of topics. On Wednesday, though, that account started largely ignoring those requests en masse in favor of redirecting the conversation toward the topic of alleged "white genocide" in South Africa and the related song "Kill the Boer."

Searching the Grok account's replies for mentions of "genocide" or "Boer" currently returns dozens if not hundreds of posts in which the LLM responds to completely unrelated queries with quixotic discussions about alleged killings of white farmers in South Africa (though many were deleted shortly before this post went live; links in this story have been replaced with archived versions where appropriate). The sheer range of these non sequiturs is somewhat breathtaking; everything from questions about Robert F. Kennedy Jr.'s disinformation to discussions of MLB pitcher Max Scherzer's salary to a search for new group-specific put-downs sees Grok quickly turning the subject back toward the suddenly all-important topic of South Africa.

It's like Grok has become the world's most tiresome party guest, harping on its own pet talking points to the exclusion of any other discussion.

© Getty Images / Kyle Orland

  •  

AI isn’t ready to replace human coders for debugging, researchers say

There are few areas where AI has seen more robust deployment than the field of software development. From "vibe" coding to GitHub Copilot to startups building quick-and-dirty applications with support from LLMs, AI is already deeply integrated.

However, those claiming we're mere months away from AI agents replacing most programmers should adjust their expectations: the models aren't good enough at debugging, and debugging occupies most of a developer's time. That's the suggestion of Microsoft Research, which built a new tool called debug-gym to test and improve how AI models can debug software.

Debug-gym (available on GitHub and detailed in a blog post) is an environment that lets AI models try to debug any existing code repository, with access to debugging tools that haven't historically been part of the process for these models. Microsoft found that without this approach, models are notably bad at debugging tasks. With the approach, they're better but still a far cry from what an experienced human developer can do.
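
To make the pattern concrete, here is a minimal Python sketch of the general idea: an agent loop that lets a model issue debugger commands, read their output, and only then propose a fix. This is an illustration of the approach as described, not debug-gym's actual interface; the tool list, the query_model callback, and the failing_test.py entry point are all assumptions.

```python
# Illustrative sketch only -- NOT debug-gym's actual API. The tool names,
# query_model() helper, and failing_test.py entry point are hypothetical
# stand-ins for the general pattern: give the model an interactive debugger
# instead of asking it to patch code from a single static prompt.

import subprocess

DEBUGGER_TOOLS = ["pdb", "view", "rewrite", "eval"]  # hypothetical tool set


def run_pdb_command(repo_dir: str, command: str) -> str:
    """Run one pdb command against the repo's failing test and capture output."""
    proc = subprocess.run(
        ["python", "-m", "pdb", "-c", command, "-c", "quit", "failing_test.py"],
        cwd=repo_dir, capture_output=True, text=True, timeout=60,
    )
    return proc.stdout + proc.stderr


def debug_loop(repo_dir: str, bug_report: str, query_model, max_steps: int = 20):
    """Interactive loop: the model issues debugger commands, observes the
    results, and eventually proposes a patch."""
    history = [f"Bug report:\n{bug_report}\nAvailable tools: {DEBUGGER_TOOLS}"]
    for _ in range(max_steps):
        action = query_model("\n".join(history))   # e.g. "pdb: p some_variable"
        if action.startswith("patch:"):            # model believes it has a fix
            return action.removeprefix("patch:")
        observation = run_pdb_command(repo_dir, action.removeprefix("pdb:").strip())
        history.append(f"> {action}\n{observation}")
    return None  # gave up without producing a patch
```

The point of the loop is that the model's eventual patch is grounded in runtime observations (variable values, stack frames, test output) rather than in a single static read of the source.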

  •  

ChatGPT can now remember and reference all your previous chats

OpenAI today announced a significant expansion of ChatGPT's customization and memory capabilities. For some users, it will now be able to remember information from the full breadth of their prior conversations with it and adjust its responses based on that information.

This means ChatGPT will learn more about the user over time to personalize its responses, above and beyond just a handful of key facts.

Some time ago, OpenAI added a feature called "Memory" that allowed a limited number of pieces of information to be retained and used for future responses. Users often had to specifically ask ChatGPT to remember something to trigger this, though it occasionally tried to guess at what it should remember, too. (When something was added to its memory, there was a message saying that its memory had been updated.)
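
Mechanically, that older feature behaves like a small store of key facts injected into the model's context in later chats. The sketch below is a toy Python illustration of that pattern, not OpenAI's implementation; MAX_MEMORIES, remember(), and the call_llm callback are placeholders.

```python
# Toy illustration of the "memory" pattern described above -- not OpenAI's
# implementation. Remembered facts are strings prepended to the system prompt
# of later conversations; call_llm() is a hypothetical stand-in for whatever
# chat API is in use.

MAX_MEMORIES = 50  # the older feature retained only a limited number of items

memories: list[str] = []


def remember(fact: str) -> str:
    """Store a fact for future chats, evicting the oldest if the store is full."""
    if len(memories) >= MAX_MEMORIES:
        memories.pop(0)
    memories.append(fact)
    return "Memory updated."  # mirrors the notice users saw in ChatGPT


def build_system_prompt(base_prompt: str) -> str:
    """Prepend stored memories so later responses can be personalized."""
    if not memories:
        return base_prompt
    return base_prompt + "\n\nKnown facts about the user:\n- " + "\n- ".join(memories)


def chat(user_message: str, call_llm) -> str:
    system = build_system_prompt("You are a helpful assistant.")
    return call_llm(system=system, message=user_message)  # hypothetical signature
```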

© Benj Edwards / OpenAI

  •