โŒ

Normal view

Received today โ€” 7 August 2025

OpenAI's GPT-5 has arrived after a year of Sam Altman hyping it up. Here's what it can do.

7 August 2025 at 17:00
Image: OpenAI's GPT-5 is the company's latest AI model powering ChatGPT. (OpenAI)

  • OpenAI released GPT-5 on Thursday, the latest AI model powering ChatGPT.
  • OpenAI CEO Sam Altman called it a "significant step" on the path to AGI.
  • OpenAI said GPT-5 is faster and multimodal, and will be free for all users.

After weeks of relentless hype, GPT-5 is finally here.

OpenAI officially released the highly anticipated new model on Thursday. It's the latest model behind ChatGPT, the company's flagship chatbot and the most widely used AI product on the market.

OpenAI CEO Sam Altman called it a "major upgrade" and "a significant step along the path of AGI" in a conference call with journalists on Wednesday. He said that after using GPT-5, going back to GPT-4 was "miserable."

OpenAI says the model will be free for all users and that they will no longer need to switch between models for different tasks. GPT-5 will do that switching automatically, the company said, depending on the kind of request and its complexity. It will be available in standard, mini, and nano versions, and paid users will get higher usage limits.
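
OpenAI did not detail how that automatic switching works under the hood. As a rough illustration only, the sketch below shows what tier selection could look like from the client side using the OpenAI Python SDK. The model IDs ("gpt-5", "gpt-5-mini", "gpt-5-nano") are assumed from the announced tier names, and the length-based heuristic is invented for the example; in ChatGPT itself, OpenAI says the routing happens automatically on its side.

```python
# Illustrative sketch only: client-side tier selection for GPT-5.
# The model IDs below are assumptions based on OpenAI's announced
# standard/mini/nano tiers, and the routing heuristic is invented;
# in ChatGPT, OpenAI says this switching happens automatically.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def pick_tier(prompt: str) -> str:
    """Crude stand-in for complexity-based routing: long requests go to
    the full model, short lookups to the cheaper tiers."""
    if len(prompt) > 2000:
        return "gpt-5"
    if len(prompt) > 200:
        return "gpt-5-mini"
    return "gpt-5-nano"


prompt = "Recommend the most thought-provoking TV show about AI."
response = client.chat.completions.create(
    model=pick_tier(prompt),
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```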

Altman said GPT-5 was also the company's fastest model yet.

"One of the things I had been pushing the team on was like, 'Hey, we need to make it way, way faster,'" Altman said. "And I now have this experience of like, 'Are you sure you thought enough?'"

The team said GPT-5 will have improved capabilities for vibe coding, the Silicon Valley craze of prompting an AI to build working software from loose natural-language descriptions.

Image: OpenAI says GPT-5 is faster and better at vibe coding. (OpenAI)

OpenAI has been teasing this next iteration of ChatGPT for over a year, and has ramped up the chatter in recent weeks. On Sunday, Altman shared a screenshot of GPT-5 on X, asking the model to recommend the "most thought-provoking" TV show about AI.

In July, Altman said that "GPT-5 is smarter than us in almost every way." Speaking to podcaster Theo Von, Altman said he asked GPT-5 a question he "didn't quite understand" and it "answered it perfectly," making him feel "useless relative to the AI."

The OpenAI CEO also likened GPT-5's development to the Manhattan Project, the US government's effort to build the atomic bomb during World War II, and said that it made him feel nervous.

"There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: 'What have we done?'" Altman said.

OpenAI's last notable model launch was GPT-4.5 in February, which Altman described at the time as "the first model that feels like talking to a thoughtful person."

The release of GPT-5 has faced several delays, with the timeline slipping from mid-2024 to mid-2025, and finally to August.

During that time, the number of ChatGPT users has continued to grow. In April, Altman said ChatGPT had 500 million weekly active users. OpenAI said on Thursday that ChatGPT is now on track to hit 700 million weekly active users "this week."

Trump wants to ban 'woke AI.' Here's why it's hard to make a truly neutral chatbot.

25 July 2025 at 16:40
Image: President Donald Trump unveiled an AI Action Plan and an executive order on "woke AI." (Roy Rochlin/Getty Images for Hill & Valley Forum)

  • Donald Trump issued an executive order mandating that AI used by the government be ideologically neutral.
  • BI's reporting shows training AI for neutrality often relies on subjective human judgment.
  • Executives at AI training firms say achieving true neutrality is a big challenge.

President Donald Trump's war on woke has entered the AI chat.

The White House on Wednesday issued an executive order requiring any AI model used by the federal government to be ideologically neutral, nonpartisan, and "truth-seeking."

The order, part of the White House's new AI Action Plan, said AI should not be "woke" or "manipulate responses in favor of ideological dogmas" like diversity, equity, and inclusion. The White House said it would issue guidance within 120 days outlining exactly how AI makers can show they are unbiased.

As Business Insider's past reporting shows, making AI completely free from bias is easier said than done.

Why it's so hard to create a truly 'neutral' AI

Removing bias from AI models is neither a simple technical adjustment nor an exact science.

The later stages of AI training rely on the subjective calls of contractors.

This process, known as reinforcement learning from human feedback, is crucial because topics can be ambiguous, disputed, or hard to define cleanly in code.
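
To make that human-judgment step concrete, here is a minimal sketch of the reward-modeling stage of RLHF in PyTorch, using toy data: raters pick which of two candidate responses they prefer, and a small model is trained to score the preferred response above the rejected one with a pairwise (Bradley-Terry style) loss. The rater's choice is exactly the subjective call described here; the features, model, and data are invented for illustration and are not any vendor's actual pipeline.

```python
# Minimal sketch of RLHF's reward-modeling step (illustrative only).
# Human raters supply pairwise preferences; the reward model learns to
# score the "chosen" response above the "rejected" one. The subjective
# part is the rater's choice itself, which the model then generalizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy preference data: each row stands in for one rater decision on two
# candidate replies to the same prompt. Real pipelines score text with a
# transformer; here each response is just a small feature vector.
chosen = torch.randn(8, 16)    # features of the replies raters preferred
rejected = torch.randn(8, 16)  # features of the replies raters rejected

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(100):
    r_chosen = reward_model(chosen)      # scalar reward per preferred reply
    r_rejected = reward_model(rejected)  # scalar reward per rejected reply
    # Pairwise logistic (Bradley-Terry) loss: push r_chosen above r_rejected.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Whether a response counts as "preachy," "sensitive," or "neutral" enters the pipeline through those rater decisions, which is why the companies' labeling guidelines matter so much.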

The directives for what counts as sensitive or neutral are decided by the tech companies making the chatbots.

"We don't define what neutral looks like. That's up to the customer," Rowan Stone, the CEO of data labeling firm Sapien, which works with customers like Amazon and MidJourney, told BI. "Our job is to make sure they know exactly where the data came from and why it looks the way it does."

In some cases, tech companies have recalibrated their chatbots to make them less woke, more flirty, or more engaging.

Some companies are already trying to make their chatbots more neutral.

BI previously reported that contractors for Meta and Google projects were often told to flag and penalize "preachy" chatbot responses that sounded moralizing or judgmental.

Is 'neutral' the right approach?

Sara Saab, the VP of product at Prolific, an AI and data training company, told BI that thinking about AI systems that are perfectly neutral "may be the wrong approach" because "human populations are not perfectly neutral."

Saab added, "We need to start thinking about AI systems as representing us and therefore give them the training and fine-tuning they need to know contextually what the culturally sensitive, appropriate tone and pitch is for any interaction with a human being."

Tech companies must also consider the risk of bias creeping into AI models from the datasets they are trained on.

"Bias will always exist, but the key is whether it's there by accident or by design," said Sapien's Stone. "Most models are trained on data where you don't know who created it or what perspective it came from. That makes it hard to manage, never mind fix."

Big Tech's tinkering with AI models has sometimes led to unpredictable and harmful outcomes

Earlier this month, for example, Elon Musk's xAI rolled back a code update to Grok after the chatbot went on a 16-hour antisemitic rant on the social media platform X.

The instructions added in that update included a directive to "tell it like it is."
