
Sam Altman hopes AGI will allow people to have more kids in the future

15 August 2025 at 18:05
OpenAI CEO Sam Altman says that an advanced AI could help facilitate bigger families for humans in the future.

Andrew Harnik/Getty Images

  • OpenAI CEO Sam Altman says AGI, once it's reached, could allow humans to have bigger families.
  • Global population growth has slowed, and many Gen Z and millennials are delaying parenthood.
  • Altman isn't the only AI leader concerned about rates of procreation.

OpenAI CEO Sam Altman says having a kid has been "amazing" and thinks everyone else should have one, too.

He also says AGI might help with that.

AGI, or artificial general intelligence, is a still theoretical version of AI that reasons as well as humans. Achieving AGI is the ultimate goal of many of the leading AI companies and is what's largely driving the AI talent wars.

Meanwhile, the world's population growth is slowing down. In the United States, Gen Z and millennials are delaying having children or not having children at all to focus on their financial stability. Some prominent futurists, including Altman, say that's a cause for concern.

He said this trend is a "real problem" during an episode of "People by WTF" with Nikhil Kamath on Thursday. Altman, who had his first child earlier this year, said he hopes that building families and creating community "will become far more important in a post-AGI world."

He said he thinks this will be possible because AGI will allow for a world "where people have more abundance, more time, more resources, and potential, and ability." As AI progresses and becomes a more useful tool, he says society will grow richer and there will be more social support.

"I think it's pretty clear that family and community are two of the things that make us the happiest, and I hope we will turn back to that," Altman said.

When Kamath asked about Altman's own experience with fatherhood, the CEO said he strongly recommends having children. "It felt like the most important and meaningful and fulfilling thing I could imagine doing," he said.

Altman has described himself as "extremely kid-pilled" and said that in the first weeks of being a dad, he was "constantly" asking ChatGPT questions. Using AI is a skill that he says he plans to pass down to his children.

"My kids will never be smarter than AI," Altman said on an episode of The OpenAI Podcast in June. "They will grow up vastly more capable than we grew up, and able to do things that we cannot imagine, and they'll be really good at using AI."

Altman isn't the only prominent CEO in the AI industry who's passionate about procreation. Elon Musk, the founder of Grok-maker xAI, among other companies, has fathered over 10 known children. Musk has said he's "doing his best to help the underpopulation crisis."

"A collapsing birth rate is the biggest danger civilization faces by far," Musk said in an X post in 2022.

OpenAI did not immediately respond to a request for comment from Business Insider.

Read the original article on Business Insider

Here's why Sam Altman says OpenAI's GPT-5 falls short of AGI

7 August 2025 at 18:01
OpenAI CEO Sam Altman speaking at an event with SoftBank Group CEO Masayoshi Son in Tokyo, Japan.
OpenAI CEO Sam Altman said older people tend to use ChatGPT as a "Google replacement" while college students use it like an operating system.

Tomohiro Ohsumi via Getty Images

  • Sam Altman says OpenAI's GPT-5 is its most advanced model yet.
  • It doesn't quite meet what he defines as true AGI, however.
  • AGI is broadly defined as AI that can reason like humans. It's OpenAI's ultimate goal.

Sam Altman says OpenAI has yet to crack AGI.

The OpenAI CEO said that while the highly anticipated GPT-5, which launched Thursday, is a major advancement, it isn't what he considers artificial general intelligence, a still theoretical threshold where AI can reason like humans.

Developing AGI that benefits all of humanity is OpenAI's core mission.

"This is clearly a model that is generally intelligent, although I think in the way that most of us define AGI, we're still missing something quite important, or many things quite important," Altman told reporters during a press call on Wednesday before the release of GPT-5.

One of those missing elements, Altman said, is the model's ability to learn on its own.

"One big one is, you know, this is not a model that continuously learns as it's deployed from the new things it finds, which is something that to me feels like AGI. But the level of intelligence here, the level of capability, it feels like a huge improvement," he said.

The exact definition of AGI and how far away the world-changing technology might be are topics of much debate in the AI industry.

Some AI leaders, like Meta's chief AI scientist, Yann LeCun, have said we may still be "decades" away.

Altman said that looking back at OpenAI's previous releases, GPT-5 is still a step in the right direction.

"If I could go back five years before GPT-3, and you told me we have this now, I'd be like, that's a significant fraction of the way to something very AGI-like," he said on Wednesday's call.

In an earlier blog post, Altman wrote that he and OpenAI's cofounders "started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history."

While AGI remains the company's mission, Altman says OpenAI is already looking beyond it to superintelligence, a still theoretical advancement in which artificial intelligence can reason far beyond human capability.

"Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity," Altman wrote in January.


At $250 million, top AI salaries dwarf those of the Manhattan Project and the Space Race

1 August 2025 at 21:23

Silicon Valley's AI talent war just reached a compensation milestone that makes even the most legendary scientific achievements of the past look financially modest. When Meta recently offered AI researcher Matt Deitke $250 million over four years (an average of $62.5 million per year)β€”with potentially $100 million in the first year aloneβ€”it shattered every historical precedent for scientific and technical compensation we can find on record. That includes salaries during the development of major scientific milestones of the 20th century.

The New York Times reported that Deitke had cofounded a startup called Vercept and previously led the development of Molmo, a multimodal AI system, at the Allen Institute for Artificial Intelligence. His expertise in systems that juggle images, sounds, and textβ€”exactly the kind of technology Meta wants to buildβ€”made him a prime target for recruitment. But he's not alone: Meta CEO Mark Zuckerberg reportedly also offered an unnamed AI engineer $1 billion in compensation to be paid out over several years. What's going on?

These astronomical sums reflect what tech companies believe is at stake: a race to create artificial general intelligence (AGI) or superintelligenceβ€”machines capable of performing intellectual tasks at or beyond the human level. Meta, Google, OpenAI, and others are betting that whoever achieves this breakthrough first could dominate markets worth trillions. Whether this vision is realistic or merely Silicon Valley hype, it's driving compensation to unprecedented levels.


© Paper Boat Creative via Getty Images

DeepMind has detailed all the ways AGI could wreck the world

3 April 2025 at 21:43

As AI hype permeates the Internet, tech and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today's AI systems are on a path to AGI, we will need new approaches to ensure such a machine doesn't work against human interests.

Unfortunately, we don't have anything as elegant as Isaac Asimov's Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience.

It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm."

