OpenAI’s most capable AI model, GPT-5, may be coming in August

On Thursday, The Verge reported that OpenAI is preparing to launch GPT-5 as early as August, according to sources familiar with the company's plans. The report comes five months after OpenAI CEO Sam Altman first laid out a roadmap for a next-generation AI model that would unify the company's various AI capabilities. Altman said in a post on X last week that the company plans to release GPT-5 "soon."

According to The Verge's Tom Warren, Microsoft engineers began preparing server capacity for GPT-5 as early as late May, but testing and development challenges pushed the timeline back. During an appearance on Theo Von's podcast this week, Altman demonstrated the model's capabilities by having it answer a question he couldn't. "I put it in the model, this is GPT-5, and it answered it perfectly," Altman said, adding that it gave him a "weird feeling" to watch the AI model answer a question he could not.

GPT-5 has been a highly anticipated release since the launch of GPT-4 in March 2023. We first wrote about rumors of a GPT-5 launch in March 2024, but the model did not materialize last year; the company apparently saved the "GPT-5" name for a later release.


Mistral’s Le Chat chatbot gets a productivity push with new ‘deep research’ mode

French AI lab Mistral introduced a range of new features to its Le Chat chatbot on Thursday that bring it closer to the capabilities of rivals like OpenAI and Google. The new update includes a “deep research” mode, native multilingual reasoning, and advanced image editing. 

New study shows why simulated reasoning AI models don’t yet live up to their billing

There's a curious contradiction at the heart of today's most capable AI models that purport to "reason": They can solve routine math problems with high accuracy, yet when asked to formulate the deeper mathematical proofs found in competition-level challenges, they often fail.

That's the finding of eye-opening preprint research into simulated reasoning (SR) models, initially posted in March and updated in April, that mostly flew under the news radar. The research serves as an instructive case study on the mathematical limitations of SR models, despite sometimes grandiose marketing claims from AI vendors.

What sets simulated reasoning models apart from traditional large language models (LLMs) is that they have been trained to output a step-by-step "thinking" process (often called "chain-of-thought") to solve problems. Note that "simulated" in this case doesn't mean that the models do not reason at all but rather that they do not necessarily reason using the same techniques as humans. That distinction is important because human reasoning itself is difficult to define.
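
To make the distinction concrete, here is a minimal, dependency-free Python sketch of what a chain-of-thought response looks like compared to a direct answer. The stub model, the <think>...</think> delimiters, and the helper names are hypothetical illustrations for this explanation, not any vendor's actual API or output format.

import re

def fake_sr_model(prompt: str) -> str:
    # Stand-in for a simulated reasoning model: it emits a step-by-step
    # trace inside <think> tags, followed by a final answer.
    return (
        "<think>\n"
        "Average speed is distance divided by time.\n"
        "Distance = 120 miles, time = 2 hours, so 120 / 2 = 60.\n"
        "</think>\n"
        "The train's average speed is 60 mph."
    )

def split_trace(response: str) -> tuple[str, str]:
    # Separate the chain-of-thought trace from the final answer.
    # A traditional LLM response has no trace, so the trace comes back empty.
    match = re.search(r"<think>(.*?)</think>\s*(.*)", response, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", response.strip()

trace, answer = split_trace(fake_sr_model(
    "A train travels 120 miles in 2 hours. What is its average speed?"
))
print("trace:", trace)
print("answer:", answer)

The point of the sketch is only the output shape: the trace is extra text the model was trained to produce before its answer, and whether that text constitutes reasoning in the human sense is exactly what the research examines.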
