
OpenAI warns its future models will have a higher risk of aiding bioweapons development

  • OpenAI says its next generation of AI models could significantly increase the risk of biological weapon development, even enabling individuals with no scientific background to create dangerous agents. The company is boosting its safety testing as it anticipates some models will reach the high-risk tier of its preparedness framework.

OpenAI is warning that its next generation of advanced AI models could pose a significantly higher risk of biological weapon development, especially when used by individuals with little to no scientific expertise.

OpenAI executives told Axios they anticipate upcoming models will soon trigger the high-risk classification under the company’s preparedness framework, a system designed to evaluate and mitigate the risks posed by increasingly powerful AI models.  

OpenAI’s head of safety systems, Johannes Heidecke, told the outlet that the company is “expecting some of the successors of our o3 (reasoning model) to hit that level.”

In a blog post, the company said it was increasing its safety testing to mitigate the risk that models could help users create biological weapons. OpenAI is concerned that, without these mitigations, models will soon be capable of “novice uplift,” allowing people with limited scientific knowledge to create dangerous weapons.

“We’re not yet in the world where there’s like novel, completely unknown creation of bio threats that have not existed before,” Heidecke said. “We are more worried about replicating things that experts already are very familiar with.”

Part of the challenge is that the same capabilities that could unlock life-saving medical breakthroughs could also be used by bad actors for dangerous ends. According to Heidecke, this is why leading AI labs need highly accurate testing systems in place.

“This is not something where like 99% or even one in 100,000 performance is … sufficient,” he said. “We basically need, like, near perfection.”

Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.

Model misuse

OpenAI is not the only company concerned about the misuse of its models when it comes to weapons development. As models become more advanced, their potential for misuse and the risks they pose generally grow.

Anthropic recently launched its most advanced model, Claude Opus 4, with stricter safety protocols than any of its previous models, categorizing it as AI Safety Level 3 (ASL-3) under the company’s Responsible Scaling Policy. Previous Anthropic models have all been classified as AI Safety Level 2 (ASL-2) under the framework, which is loosely modeled on the U.S. government’s biosafety level (BSL) system.

Models that are categorized in this third safety level meet more dangerous capability thresholds and are powerful enough to pose significant risks, such as aiding in the development of weapons or automating AI R&D. Anthropic’s most advanced model also made headlines after it opted to blackmail an engineer to avoid being shut down in a highly controlled test.

Early versions of Anthropic’s Claude 4 were found to comply with dangerous instructions, for example, helping to plan terrorist attacks, if prompted. However, the company said this issue was largely mitigated after a dataset that was accidentally omitted during training was restored.

This story was originally featured on Fortune.com

© Sven Hoppe—picture alliance via Getty Images

Johannes Heidecke (R), OpenAI's head of safety systems, talks with Reinhard Heckel (L), professor of machine learning at the Department of Computer Engineering at TUM, and OpenAI CEO Sam Altman during a panel discussion at the Technical University of Munich (TUM) in May 2023.

Meta’s $100 million signing bonuses for OpenAI staff are just the latest sign of the extreme AI talent war

  • Big Tech is shelling out jaw-dropping compensation amid a fierce AI talent war. Meta is even offering $100 million signing bonuses to woo top OpenAI researchers, according to CEO Sam Altman. But as top AI companies scramble to retain staff with massive bonuses and noncompete deals, entry-level engineers are seeing fewer opportunities as junior hiring declines.

The AI talent war has been heating up between Big Tech companies as they vie for an increasingly small pool of elite AI researchers. According to OpenAI CEO Sam Altman, Meta has been aggressively going after the company’s top engineers, offering eye-watering compensation and multimillion-dollar signing bonuses.

Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

It’s the latest example of the intense competition for top talent and the lengths companies are willing to go to recruit and retain them.

Meta is particularly committed to its AI recruiting drive at the moment. The company has lost several of its top AI researchers in recent years and is currently fighting a narrative that it has fallen behind in the AI race after its newest Llama 4 model received a lukewarm reaction from developers.

This has kicked Zuckerberg into overdrive and reportedly led the CEO to personally recruit for a new 50-person “Superintelligence” AI team at Meta. Meta also recently invested $14.3 billion for a 49% stake in the training data company Scale AI, as part of a plan to hire the startup’s CEO, Alexandr Wang.

While Altman said that none of his best people had decided to take up Mark Zuckerberg’s generous offer, Meta has managed to lure other prominent AI researchers.

According to Bloomberg, Meta has also hired Jack Rae, a principal researcher at Google DeepMind, for the team, and brought on Johan Schalkwyk, a machine learning leader from the AI voice startup Sesame AI. Meta was reportedly unsuccessful in its efforts to poach top OpenAI researcher Noam Brown and Google AI architect Koray Kavukcuoglu.

Meta is also trailing fellow AI labs with a retention rate of 64%, according to SignalFire’s recently released 2025 State of Talent Report. At buzzy AI startup Anthropic, 80% of employees hired at least two years ago are still at the company, an impressive figure in an industry known for its high turnover.

Representatives for Meta did not immediately respond to a recent request for comment from Fortune, made outside the company’s normal working hours.

AI talent gap

Zuckerberg’s salary offers are reaching pro-athlete levels, which, as Fortune’s Sharon Goldman notes, is becoming par for the course in the industry.

Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

While Meta may be making headlines, it is not the only company going to extreme lengths to retain and recruit this talent. Google DeepMind is reportedly enforcing six- to 12-month noncompete clauses that prevent some AI researchers from joining competitors, paying them their full salaries even while they’re sidelined.

Over at OpenAI, the company is rumored to be offering sky-high compensation to retain talent, with top researchers earning over $10 million annually. According to Reuters, the company has offered more than $2 million in retention bonuses and equity packages exceeding $20 million to deter defections to Ilya Sutskever’s new venture, SSI.

While elite AI labs are working overtime to lock in top talent, the full picture for AI engineers, especially junior talent, is not quite so rosy. Several recent reports, including SignalFire’s 2025 State of Talent Report, have suggested that entry-level hiring in the tech industry is collapsing.

According to the report, hiring for mid- and senior-level roles has bounced back from the 2023 slump, but the cuts for new grads have just kept coming. Among Big Tech companies, new grads account for just 7% of hires, down 25% from 2023 and more than 50% from pre-pandemic levels in 2019. At startups, new grads make up less than 6% of new hires, down 11% from 2023 and more than 30% from pre-pandemic levels in 2019.

This story was originally featured on Fortune.com

© Shawn Thew/EPA/Bloomberg via Getty Images

The AI talent war has been heating up between Big Tech companies.

Canva’s cofounder is looking to hire ‘AI natives’ and university dropouts to train the rest of the company on the tech

  • Canva cofounder Cliff Obrecht is on the hunt for “AI natives”—even those who have dropped out of college. Speaking to Fortune at VivaTech in Paris, Obrecht said the company sees high value in hiring less traditional candidates who understand AI tools and workflows. As AI threatens to change the job market rapidly, Obrecht says that curiosity and adaptability are becoming more valuable than ever.

With anxiety mounting over the mass automation of entry-level jobs, job seekers with AI skills may now have an edge over those with university credentials. Canva cofounder and COO Cliff Obrecht told Fortune the company is actively hiring AI-savvy college students regardless of whether they finish their degrees.

Obrecht said Canva is increasingly looking for “AI natives” when hiring new staffers and is benefiting from university dropouts when it comes to engineering talent.

“We are looking to actually hire second- to fourth-year university graduates because they are AI natives,” Obrecht said in an interview at Viva Technology in Paris.

“Hiring a lot of junior people who are native at building agentic workflows and picking up AI first is just a different way of thinking about building products,” he added. “We are actually getting a lot of value from bringing in those university dropouts.”

Obrecht said most organizations are currently trying to upskill engineers on AI coding tools in hopes of productivity gains, but he was looking to hire less experienced talent who have a stronger grasp of the current AI tools on offer.

“They’re really good hires, especially when you add them to a nontechnical team and upskill the rest of the organization. They’re AI natives and become evangelists in the organization and really help drive that mindset shift,” he said.

What is an ‘AI native’?

The discourse around AI-fueled job losses, particularly concerning entry-level work, has been heating up recently and has created something of a divide in the tech industry.

Anthropic CEO Dario Amodei sparked a fierce debate with his recent prediction that AI could wipe out roughly 50% of all entry-level white-collar jobs within the next five years. While some, including Nvidia CEO Jensen Huang, have pushed back on Amodei’s predictions, recent data suggests that some entry-level work may already be under pressure from the rise of automation.

Companies are also increasingly looking to incorporate AI into their workflows in the hope of productivity gains, with some putting in formal requirements for workers to embrace the tech. But getting ahead of the curve when it comes to AI skills is less about using ChatGPT every day and more about being at the forefront of the technology, according to Obrecht.

“An AI native has got a deep understanding of the AI tools in their tool belt,” he said. “And they’re constantly at the forefront of creating agents, chaining multiple complex AI workflows together—maybe from different products and providers—to create unified experiences. They have a goal in mind, and that goal isn’t just delivered through single AIs. It’s connected to a bunch of different things.”

Obrecht sees AI natives as “curiosity-focused” rather than confined to one certain generation.

“You can be a hungry, curious person who sees this brand-new technology changing our world, and be someone who’s like, ‘I want to learn everything I can about this part of this.’ That curiosity is the key attribute that leads to someone being successful in companies now,” he said.

This story was originally featured on Fortune.com

© Brent Lewin—Bloomberg/Getty Images

“They’re really good hires, especially when you add them to a nontechnical team and upskill the rest of the organization,” Obrecht told Fortune.

OpenAI plans to continue working with Scale AI despite rival Meta’s $14.3 billion deal with the company, OpenAI’s CFO says

  • Despite Meta’s $14.3 billion investment in Scale AI that is shaking up the AI landscape, OpenAI plans to keep working with the startup, according to CFO Sarah Friar. Friar emphasized the importance of maintaining a diverse vendor ecosystem to support AI development. Meanwhile, Scale’s other key customers like Google, Microsoft, and xAI are reportedly looking to distance themselves from the startup.

OpenAI’s CFO, Sarah Friar, says the company plans to continue working with Scale AI despite the startup’s recent multi-billion-dollar partnership with rival Meta.

“We don’t just buy from Scale,” Friar said at the Viva Technology conference in Paris. “We work with many vendors on the data front.”

“As models have gotten smarter, you’re going into a place where you need real expertise…we have academics and experts telling us that they are finding novel things in their space,” she said. “We don’t want to ice the ecosystem, because acquisitions are going to happen and I think if we ice each other out, I think we’re actually going to slow the pace of innovation.”

Founded in 2016, Scale AI supplies large volumes of labeled and curated training data and works with several major AI companies, including Google, Microsoft, OpenAI, and Meta. On Thursday, Meta announced it was investing $14.3 billion for a 49% stake in the startup, a major boost to Meta’s AI capabilities but one that reportedly made some of Meta’s Big Tech rivals wary of continuing to use Scale’s services.

Scale intends to keep operating as an independent business but with deeper commercial ties to Meta. The company’s CEO, Alexandr Wang, will join Meta’s team working on “superintelligence,” with Jason Droege stepping in as Scale’s interim CEO. Wang will remain on Scale’s board and said in a note to employees that he would take a small number of “Scalien” employees with him to Meta, but did not name them.

Scale’s largest customer, Google, reportedly plans to cut ties with the AI data-labeling startup in the wake of the Meta deal. According to a report from Reuters, the tech giant has already held conversations with some of Scale’s rivals to shift much of the workload, representing a significant loss of business for the startup now valued at $29 billion. Google did not immediately respond to a request for comment made by Fortune.

Microsoft and Elon Musk’s xAI are also reportedly looking to pull back from Scale after the high-profile deal. And despite Friar’s comments, OpenAI reportedly made a similar decision to pull back on some of its business with the startup several months ago.

Representatives for OpenAI did not immediately respond to a request for comment made by Fortune outside of normal working hours.

Meta’s $14.3 billion AI bet on Scale

Meta’s deal with Scale AI bolsters the company’s AI credentials after Zuckerberg earlier this year reaffirmed Meta’s commitment to building technology that outstrips both human intelligence and its rivals.

Meta has trailed rivals in consumer-facing AI and, unlike competitors like Google and OpenAI, has chosen to release its Llama models as open source. The tech giant’s recent Llama 4 AI models received a lukewarm response from developers, and the company hasn’t yet released its most advanced model, Llama 4 Behemoth. The Behemoth release was paused over leadership concerns that the model didn’t represent a big enough advance over its predecessors, The Wall Street Journal reported.

Zuckerberg’s primary gain from the investment appears to be Wang. The 28-year-old will join a reported 50-person superintelligence AI team at Meta that is aiming to beat rivals like Google and OpenAI to artificial general intelligence (AGI). According to Bloomberg, Zuckerberg is personally recruiting for the team after the CEO was disappointed by the reaction to Llama 4.

This story was originally featured on Fortune.com

OpenAI's CFO, Sarah Friar, says the company plans to continue working with Scale AI.