Meta is making a $14.3 billion investment in artificial intelligence company Scale and recruiting its CEO Alexandr Wang to join a team developing “superintelligence” at the tech giant.
The deal announced Thursday reflects a push by Meta CEO Mark Zuckerberg to revive AI efforts at the parent company of Facebook and Instagram as it faces tough competition from rivals such as Google and OpenAI.
Meta announced what it called a “strategic partnership and investment” with Scale late Thursday. Scale said the $14.3 billion investment puts its market value at over $29 billion.
Scale said it will remain an independent company but the agreement will “substantially expand Scale and Meta’s commercial relationship.” Meta will hold a 49% stake in the startup.
Wang, though leaving for Meta with a small group of other Scale employees, will remain on Scale’s board of directors. Replacing him as interim CEO is Jason Droege, previously the company’s chief strategy officer, who has held executive roles at Uber Eats and Axon.
Zuckerberg’s increasing focus on the abstract idea of “superintelligence” — which rival companies call artificial general intelligence, or AGI — is the latest pivot for a tech leader who in 2021 went all-in on the idea of the metaverse, changing the company’s name and investing billions into advancing virtual reality and related technology.
It won’t be the first time since ChatGPT’s 2022 debut sparked an AI arms race that a big tech company has gobbled up talent and products at innovative AI startups without formally acquiring them. Microsoft hired key staff from startup Inflection AI, including co-founder and CEO Mustafa Suleyman, who now runs Microsoft’s AI division.
Google pulled in the leaders of AI chatbot company Character.AI, while Amazon made a deal with San Francisco-based Adept that sent its CEO and key employees to the e-commerce giant. Amazon also got a license to Adept’s AI systems and datasets.
Wang was a 19-year-old student at the Massachusetts Institute of Technology when he and co-founder Lucy Guo started Scale in 2016.
They won influential backing that summer from the startup incubator Y Combinator, which was led at the time by Sam Altman, now the CEO of OpenAI. Wang dropped out of MIT, following a trajectory similar to that of Zuckerberg, who quit Harvard University to start Facebook more than a decade earlier.
Scale’s pitch was to supply the human labor needed to improve AI systems, hiring workers to draw boxes around a pedestrian or a dog in a street photo so that self-driving cars could better predict what’s in front of them. General Motors and Toyota have been among Scale’s customers.
What Scale offered to AI developers was a more tailored version of Amazon’s Mechanical Turk, which had long been a go-to service for matching freelance workers with temporary online jobs.
More recently, the growing commercialization of AI large language models — the technology behind OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama — brought a new market for Scale’s annotation teams. The company claims to service “every leading large language model,” including from Anthropic, OpenAI, Meta and Microsoft, by helping to fine-tune their training data and test their performance. It’s not clear what the Meta deal will mean for Scale’s other customers.
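To make that annotation work concrete, here is a purely illustrative sketch of the two kinds of records described above; the field names and values are assumptions made for illustration, not Scale’s actual data format.

```python
# Purely illustrative sketch of the two kinds of labeling work described above;
# field names and values are hypothetical, not Scale's actual schema.

# A bounding-box label a human might draw for a self-driving-car dataset.
image_annotation = {
    "image_id": "street_00042.jpg",
    "label": "pedestrian",
    "bbox": [412, 135, 58, 170],  # x, y, width, height in pixels
}

# A preference judgment a human might record when fine-tuning a language model.
llm_annotation = {
    "prompt": "Explain photosynthesis to a 10-year-old.",
    "response_a": "Plants use sunlight to turn air and water into food.",
    "response_b": "Photosynthesis is a redox process in chloroplasts.",
    "preferred": "response_a",  # simpler answer judged better for the audience
}
```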
Wang has also sought to build close relationships with the U.S. government, winning military contracts to supply AI tools to the Pentagon and attending President Donald Trump’s inauguration. The head of Trump’s science and technology office, Michael Kratsios, was an executive at Scale for the four years between Trump’s first and second terms. Meta has also begun providing AI services to the federal government.
Meta has taken a different approach to AI than many of its rivals, releasing its flagship Llama system for free as an open-source product that enables people to use and modify some of its key components. Meta says more than a billion people use its AI products each month, but it’s also widely seen as lagging behind competitors such as OpenAI and Google in encouraging consumer use of large language models, also known as LLMs.
It hasn’t yet released its purportedly most advanced model, Llama 4 Behemoth, despite previewing it in April as “one of the smartest LLMs in the world and our most powerful yet.”
Meta’s chief AI scientist Yann LeCun, who in 2019 was a winner of computer science’s top prize for his pioneering AI work, has expressed skepticism about the tech industry’s current focus on large language models.
“How do we build AI systems that understand the physical world, that have persistent memory, that can reason and can plan?” LeCun asked at a French tech conference last year.
These are all characteristics of intelligent behavior that large language models “basically cannot do, or they can only do them in a very superficial, approximate way,” LeCun said.
Instead, he emphasized Meta’s interest in “tracing a path towards human-level AI systems, or perhaps even superhuman.” When he returned to France’s annual VivaTech conference again on Wednesday, LeCun dodged a question about the pending Scale deal but said his AI research team’s plan has “always been to reach human intelligence and go beyond it.”
“It’s just that now we have a clearer vision for how to accomplish this,” he said.
LeCun co-founded Meta’s AI research division more than a decade ago with Rob Fergus, a fellow professor at New York University. Fergus later left for Google but returned to Meta last month after a 5-year absence to run the research lab, replacing longtime director Joelle Pineau.
Fergus wrote on LinkedIn last month that Meta’s commitment to long-term AI research “remains unwavering” and described the work as “building human-level experiences that transform the way we interact with technology.”
Polly Pocket may one day be your digital assistant.
Mattel Inc., the maker of Barbie dolls and Hot Wheels cars, has signed a deal with OpenAI to use its artificial intelligence tools to design and in some cases power toys and other products based on its brands.
The collaboration is at an early stage, and its first release won’t be announced until later this year, Brad Lightcap, OpenAI’s chief operating officer, and Josh Silverman, Mattel’s chief franchise officer, said in a joint interview. The technology could ultimately result in the creation of digital assistants based on Mattel characters, or be used to make toys and games like the Magic 8 Ball or Uno even more interactive.
“We plan to announce something towards the tail end of this year, and it’s really across the spectrum of physical products and some experiences,” Silverman said, declining to comment further on the first product. “Leveraging this incredible technology is going to allow us to really reimagine the future of play.”
Mattel shares rose 1.8% to $19.59 Thursday morning in New York. The stock is up 10% this year.
Mattel isn’t licensing its intellectual property to OpenAI as part of the deal, Silverman said, and remains in full control of the products being created. Introductory talks between the two companies began late last year, he said.
Mattel Chief Executive Officer Ynon Kreiz has been looking to evolve the company from just a toy manufacturer into a producer of films, TV shows and mobile games based on its popular characters. OpenAI, meanwhile, has been courting companies with valuable intellectual property to aid them in developing new products based on iconic brands.
“The idea exploration phase of creative design for companies like Mattel and many others, that’s a critical part of the workflow,” Lightcap said. “As we think about how AI builds tools that extend that capability, I think we’re very lucky to have partners like Mattel that we can work with to better understand that problem.”
On Tuesday, OpenAI released its newest model — o3-pro — which can analyze files, search online and complete other tasks that made it score especially well with reviewers on “comprehensiveness, instruction-following and accuracy,” the company said.
OpenAI held meetings in Los Angeles with Hollywood studios, media executives and talent agencies last year to form partnerships in the entertainment industry and encourage filmmakers to integrate its new AI video generator into their work. In the meetings, led by Lightcap, the company demonstrated the capabilities of Sora, a service that at the time generated realistic-looking videos up to about a minute in length based on text prompts from users. OpenAI has not struck any deals with movie studios yet because it still has to establish a “level of trust” with Hollywood, Lightcap said in May at a Wall Street Journal conference in New York.
Tesla Inc. sued a former engineer with the company’s highly secretive Optimus program, accusing him of stealing confidential information about the humanoid robot and setting up a rival startup in Silicon Valley.
Zhongjie “Jay” Li worked at Tesla between August 2022 and September 2024, according to a complaint filed in federal court in San Francisco late Wednesday. Li worked on “advanced robotic hand sensors—and was entrusted with some of the most sensitive technical data in the program,” Tesla’s lawyers said in the complaint.
The suit, also filed against his company Proception Inc, alleges that in the weeks before his departure, Li downloaded Optimus-related files onto two personal smartphones and then formed his own firm.
“Less than a week after he left Tesla, Proception was incorporated,” according to the complaint. “And within just five months, Proception publicly claimed to have ‘successfully built’ advanced humanoid robotic hands—hands that bear a striking resemblance to the designs Li worked on at Tesla.”
Li, who lists himself as founder and CEO of Proception on LinkedIn, didn’t respond to requests for comment sent outside of normal working hours on the platform. The company didn’t immediately respond to an emailed message seeking comment or message sent through its website. Proception is based in Palo Alto, California.
No attorneys for Li or the company were yet listed in court filings.
Making a hand that is as dexterous as a human one is one of the biggest challenges in robotics. Tesla intends for Optimus to perform a range of tasks, from working in the electric automaker’s factories to everyday chores like grocery shopping and babysitting. On Tesla’s earnings call in January, CEO Elon Musk said that Optimus has the most sophisticated hand ever made.
“My prediction long-term is that Optimus will be overwhelmingly the value of the company,” Musk said.
An exhibit to the complaint includes an emailed reminder to the Optimus team from August 2024 telling staff that Tesla IT assets and networks are monitored and that “incidents of mishandling or suspected theft of Tesla property, including data and code, will be thoroughly investigated.”
Li’s “conduct is not only unlawful trade misappropriation — it also constitutes a calculated effort to exploit Tesla’s investments, insights, and intellectual property for their own commercial gain,” Tesla’s lawyers said in the filing.
Milan Kovac, the head of engineering for Optimus, left Tesla last week, Bloomberg first reported. Ashok Elluswamy, who leads Tesla’s Autopilot teams, will take over responsibility for Optimus.
Welcome to Eye on AI! In this edition…Disney and Universal join forces in lawsuit against AI image creator Midjourney…France’s Mistral gets a business boost thanks to fears over US AI dominance…Google names DeepMind’s Kavukcuoglu to lead AI-powered product development.
Mark Zuckerberg is rumored to be personally recruiting — reportedly at his homes in Lake Tahoe and Palo Alto — for a new 50-person “Superintelligence” AI team at Meta meant to gain ground on rivals like Google and OpenAI. The plan includes hiring a new head of AI research to work alongside Scale AI CEO Alexandr Wang, who is being brought in as part of a deal to invest up to $15 billion for a 49% stake in the training data company.
On the surface, it might appear that Zuckerberg could easily win this war for AI talent by writing the biggest checks.
And the checks Zuck is writing are, by all accounts, huge. Deedy Das, a VC at Menlo Ventures, told me that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor,” he said (a number that one AI researcher told me was “not outrageous at all” and “is likely low in certain sub-areas like LLM pre-training,” though most of the compensation would be in the form of equity). Later, on LinkedIn, Das went further, claiming that for candidates working at a big AI lab, “Zuck is personally negotiating $10M+/yr in cold hard liquid money. I’ve never seen anything like it.”
Some of these pro athlete-level offers are working. According to Bloomberg, Jack Rae, a principal researcher at Google DeepMind, is expected to join Meta’s “superintelligence” team, while it said Meta has also recruited Johan Schalkwyk, a machine learning lead at AI voice startup Sesame AI.
Money isn’t everything
But money alone may not be enough to build the kind of AI model shop Meta needs. According to Das, several researchers have turned down Zuckerberg’s offer to take roles at OpenAI and Anthropic.
There are several issues at play: For one thing, there simply aren’t that many top AI researchers, and many of them are happily ensconced at OpenAI, Anthropic, or Google DeepMind with high six- or low seven-figure salaries and access to all the computing capacity they could want. In a March Fortune article, I argued that companies are tracking top AI researchers and engineers like prized assets on the battlefield. The most intense fight is over a small pool of AI research scientists — estimated to be fewer than 1,000 individuals worldwide, according to several industry insiders Fortune spoke with — with the qualifications to build today’s most advanced large language models.
“In general, all these companies very closely watch each others’ compensation, so on average it is very close,” said Erik Meijer, a former senior director of engineering at Meta who left last year. However, he added that Meta uses “additional equity” which is a “special kind of bonus to make sure compensation is not the reason to leave.”
Beyond the financial incentives, personal ties to leading figures and adherence to differing philosophies about artificial intelligence have lent a tribal element to Silicon Valley’s AI talent wars. More than 19 OpenAI employees followed Mira Murati to her startup Thinking Machines earlier this year, for example. Anthropic was founded in 2021 by former OpenAI employees who disagreed with their employer’s strategic direction.
Das, however, said it really depends on the person. “I’d say a lot more people are mercenary than they let on,” he said. “People care about working with smart people and they care about working on products that actually work but they can be bought out if the price is right.” But for many, “they have too much money already and can’t be bought.”
Meta’s layoffs and reputation may drive talent decisions
Meta’s own sweeping layoffs earlier this year could also sour the market for AI talent, some told me. “I’ve decided to raise the bar on performance management and move out low-performers faster,” said Zuckerberg in an internal memo back in January. The memo said Meta planned to increasingly focus on developing AI, smart glasses and the future of social media. Following the memo, about 3,600 employees were laid off—roughly 5% of Meta’s workforce.
One AI researcher told me that he had heard about Zuckerberg’s high-stakes offers, but that people don’t trust Meta after the “weedwacker” layoffs.
Meta’s existing advanced AI research team FAIR (Fundamental AI Research) has increasingly been sidelined in the development of Meta’s Llama AI models and has lost key researchers. Joelle Pineau, who had been leading FAIR, announced her departure in April. Most of the researchers who developed Meta’s original Llama model have left, including two cofounders of French AI startup Mistral. And a trio of top AI researchers left a year ago to found AI agent startup Yutori.
Finally, there are hard-to-quantify issues, like prestige. Meijer expressed doubt that Meta could produce AI products that experts in the field would perceive as embodying breakthrough capabilities. “The bitter truth is that Meta does not have any leaders that are good at bridging research and product,” he said. “For a long time Reality Labs and FAIR could do their esoteric things without being challenged. But now things are very different and companies like Anthropic, OpenAI, Google, Mistral, DeepSeek excel at pushing out research into production at record pace, and Meta is left standing on the sidelines.”
In addition, he said, huge salaries and additional equity “will not stick if the company feels unstable or if it is perceived by peers as a black mark on your resume. Prestige compounds, that is why top people self-select into labs like DeepMind, OpenAI, or Anthropic. Aura is not for sale.”
That’s not to say that Zuck’s largesse won’t land him some top AI talent. The question is whether it will be enough to deliver the AI product wins Meta needs.
Scammers disguised as thousands of fake students are flooding colleges across the U.S. with enrollment applications. The “students” are registering under stolen or fabricated identities, getting accepted to schools, and then vanishing with financial aid and college-minted email addresses that give the fraudsters a veneer of legitimacy.
Jeannie Kim went to sleep thinking about budgets and enrollment challenges. She woke up to discover her college had been invaded by an army of phantom students.
“When we got hit in the fall, we got hit hard,” Kim, president of California’s Santiago Canyon College, told Fortune. “They were occupying our wait lists, and they were in our classrooms as if they were real humans—and then our real students were saying they couldn’t get into the classes they needed.”
Kim moved quickly to bring in an AI firm to help protect the college and strengthen its guardrails, she said. Santiago Canyon wound up dropping more than 10,000 enrollments tied to applicants who were not real students, Kim said. By spring 2025, ghost-student enrollments had fallen from 14,000 at the start of the term to fewer than 3,000.
Ghost students
Across America’s community colleges and universities, sophisticated criminal networks are using AI to deploy thousands of “synthetic” or “ghost” students—sometimes in the dead of night—to attack colleges. The hordes are cramming themselves into registration portals to enroll and illegally apply for financial aid. The ghost students then occupy seats meant for real students—and have even resorted to handing in homework just to hold out long enough to siphon millions in financial aid before disappearing.
The scope of the ghost-student plague is staggering. Jordan Burris, vice president at identity-verification firm Socure and former chief of staff in the White House’s Office of the Federal Chief Information Officer, told Fortune more than half the students registering for classes at some schools have been found to be illegitimate. Among Socure’s client base, between 20% and 60% of student applicants are ghosts.
“Imagine a world where 20% of the student population are fraudulent,” said Burris. “That’s the reality of the scale.”
At one college, more than 400 different financial-aid applications could be traced back to a handful of recycled phone numbers. “It was a digital poltergeist effectively haunting the school’s enrollment system,” said Burris.
The scheme has also proved incredibly lucrative. According to a Department of Education advisory, about $90 million in aid was doled out to ineligible students, and some $30 million was traced to dead people whose identities were used to enroll in classes. The issue has become so dire that the DOE announced this month it had found nearly 150,000 suspect identities in federal student-aid forms and is now requiring higher-ed institutions to validate the identities of first-time applicants for Free Application for Federal Student Aid (FAFSA) forms.
“Every dollar stolen by a ghost is a dollar denied to a real student attempting to change their life,” Burris explained. “That’s a misallocation of public capital we really can’t afford.”
Under siege
The strikes tend to unfold in the quiet evening hours when campuses are asleep, and with surgical precision, explained Laqwacia Simpkins, CEO of AMSimpkins & Associates, an edtech firm that works with colleges and universities to verify student identities with a fraud-detection platform called SAFE.
Bryce Pustos, director of administrative systems at Chaffey Community College, recalled last fall’s enrollment period when faculty members reported going to bed with zero students registered for classes and waking up to find a full class and a mile-long wait list.
Michael Fink, Chaffey’s chief technology officer, said the attacks took place at scale and within minutes. “We’ll see things like 50 applications coming in within two seconds and then somebody enrolling in all 36 seats in a class within the first minute,” Fink told Fortune.
Simpkins told Fortune the scammers have learned to strike on vulnerable days in the academic calendar, around holidays, enrollment deadlines, culmination, or at the start or end of term when staff are already stretched thin or systems are more loosely monitored.
“They push through hundreds and thousands of records at the same time and overwhelm the staff,” Simpkins said.
Plus, enrollment workers and faculty are just that, noted Simpkins; they’re educators who aren’t trained in detecting fraud. Their remit is focused on access and ensuring real students can get into the classes they need, she added, not policing fraud and fake students who are trying to trick their way to illicit financial gain. That aspect also makes the institutions more vulnerable to harm, said Simpkins.
“These are people who are admissions counselors who process applications and want to be able to admit students and give everybody an equal chance at an education,” she said.
Sadly, professors have dealt with cruel whiplash from the attacks, noted John Van Weeren, vice president of higher education at IT consulting firm Voyatek.
“One of the professors was so excited their class was full, never before being 100% occupied, and thought they might need to open a second section,” recalled Van Weeren. “When we worked with them as the first week of class was ongoing, we found out they were not real people.”
Follow the FAFSA
In a nightmare twist, community and technical colleges are seen as low-hanging fruit for this fraud scheme precisely because of how they’ve been designed to serve and engage with local communities and the public with as few barriers to entry as possible. Community colleges are often required to accept every eligible student and typically don’t charge fees for applying. While financial-aid fraud isn’t at all new, the fraud rings themselves have evolved from pandemic-era cash grabs and boogeymen in their mom’s basement, said Burris.
“There is an acceleration due to the proliferation of these automated technologies,” he said. “These are organized criminal enterprises—fraud rings—that are coming both from within the U.S., but also internationally.”
Maurice Simpkins, president and cofounder of AMSimpkins, says he has identified international fraud rings operating out of Japan, Vietnam, Bangladesh, Pakistan, and Nairobi that have repeatedly targeted U.S. colleges.
The attacks specifically zero in on coursework that maximizes financial-aid eligibility, said Mike McCandless, vice president of student services at Merced College. Social sciences and online-only classes with large numbers of students that allow for as many credits or units as possible are often choice picks, he said.
For the spring semester, Merced booted about half of its 15,000 initial registrations as fraudulent. Among the next tranche of about 7,500, some 20% were caught and removed from classes, freeing up space for real students.
The human cost
In addition to financial theft, the ghost student epidemic is causing real students to get locked out of classes they need to graduate. Oftentimes, students have planned their work or childcare schedule around classes they intend to take—and getting locked out has led to a cascade of impediments.
“When you have fraudulent people taking up seats in classes, you have actual students who need to take those classes who can’t now, and it’s a barrier,” said Pustos.
The scheme continues to evolve, however, requiring constant changes to the algorithms schools are using to detect ghost students and prevent them from applying for financial aid—making the problem all the more explosive. Multiple school officials and cybersecurity experts interviewed by Fortune were reluctant to disclose the current signs of ghost students, for fear of the scheme further iterating.
In the past 18 months, schools blocked thousands of bot applicants because they originated from the same mailing address, had hundreds of similar emails with a single-digit difference, or had phone numbers and email addresses created moments before applying for registration.
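As an illustration of how those published signals could be checked in practice, here is a toy sketch; it is not any school’s actual detection logic, and the sample records are invented.

```python
# Toy sketch of the published signals above: shared mailing addresses and
# near-duplicate emails that differ only by a digit. Illustrative only.
import re
from collections import defaultdict

applicants = [
    {"email": "jsmith101@mail.com", "address": "12 Main St"},
    {"email": "jsmith102@mail.com", "address": "12 Main St"},
    {"email": "ann.lee@mail.com", "address": "7 Oak Ave"},
]

by_email_stem = defaultdict(list)  # group emails that match once digits are stripped
by_address = defaultdict(list)     # group applicants sharing a mailing address
for a in applicants:
    by_email_stem[re.sub(r"\d", "", a["email"])].append(a["email"])
    by_address[a["address"]].append(a["email"])

flagged = [grp for grp in (*by_email_stem.values(), *by_address.values()) if len(grp) > 1]
print(flagged)  # both checks flag the two 'jsmith' applications
```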
Maurice Simpkins noted an uptick this year in the use of stolen American identities as more schools have engaged in hand-to-hand combat with the fraud rings. He’s seen college graduates who have had their identities stolen get reenrolled at their former university, or have their former education email address used to enroll at another institution.
Scammers are also using bizarre-looking short-term and disposable email addresses to register for classes in a 10-minute period before they can get their hands on a .edu email address, said Simpkins. That verified email address is “like a gold bar,” Simpkins explained. The fraudster then appears legitimate going forward, is eligible for student discounts on hardware and software, and can use the college’s cloud storage.
“We had a school that reached out to us because some fraudsters ordered some computers and devices and other materials and then had them delivered overseas,” said Simpkins. “And they did it using an account with the school’s .edu email address.”
McCandless said initially it was easy to tell if a fake student was disguised as a local applicant because their IP address was generated overseas. But just a few semesters later, IP addresses were local. When the college’s tech team looked deeper, they would find the address was from an abandoned building or somewhere in the middle of Lake Merced.
Every time the school did something to lock out fraudulent applicants, the scammers would learn and tweak, McCandless said. The school’s system is now designed to block ghost applicants right out of the gate and at multiple stages before they start enrolling in classes.
McCandless said professors are assigning students homework for the first day of class, but the ghost students are completing the assignments with AI. Faculty have caught the fake homework, however, by noticing that half the class handed in identical work, or detecting the use of ChatGPT, for instance.
“They’re very innovative, very good at what they do,” said McCandless. “I just think the consistency with which they continue to learn and improve—it’s a multimillion-dollar scheme, there’s money there, why wouldn’t you invest in it?”
‘Rampant fraud’
According to the DOE, the rate of financial fraud through stolen identities has reached a level that “imperils the federal student assistance programs under Title IV of the Higher Education Act.” In a statement, Secretary of Education Linda McMahon said the new temporary fix will help prevent identity theft fraud.
“When rampant fraud is taking aid away from eligible students, disrupting the operations of colleges, and ripping off taxpayers, we have a responsibility to act,” said McMahon.
Ultimately, what schools are trying to do is put in place hurdles that make it unappealing for scammers to attack because they have to do more front-end work to make the fraud scheme efficient, explained Jesse Gonzalez, assistant vice chancellor of IT services for Rancho Santiago Community College District. However, the schools are attempting to balance the delicate issue of accepting everyone eligible and remaining open to vulnerable or undocumented students, he said. “The more barriers you put in place, the more you’re going to impact students, and it’s usually the students who need the most help.”
Kim from Santiago Canyon College fears too many measures in place to root out fraud could make it more difficult for students and members of the community—who for various reasons might have a new email, phone number, or address—to access education and other resources that can help them improve their lives.
“Our ability to provide that democratic education to those that would not otherwise have access is at stake, and it’s in jeopardy because of these bad actors turning our system into their own piggy banks,” said Kim. “We have to continue to figure out ways to keep them out so the students can have those rightful seats—and keep it open-access.”
Nvidia CEO Jensen Huang isn’t sure about Anthropic CEO Dario Amodei’s recent predictions about AI-driven job automation. Speaking at VivaTech in Paris, Huang pushed back on the idea that AI could soon replace half of all entry-level office roles and questioned the philosophy behind limiting AI development to a few actors.
Jensen Huang is not on board with some of Anthropic CEO Dario Amodei’s predictions about advanced AI. Responding to a question about Amodei’s recent prediction that AI could automate up to half of all entry-level office jobs within five years, Huang said he “pretty much disagree[d] with almost everything” his fellow AI CEO says.
“One, he believes that AI is so scary that only they should do it,” Huang said of Amodei at a press briefing at Viva Technology in Paris. “Two, [he believes] that AI is so expensive, nobody else should do it … And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it.
“I think AI is a very important technology; we should build it and advance it safely and responsibly,” Huang continued. “If you want things to be done safely and responsibly, you do it in the open … Don’t do it in a dark room and tell me it’s safe.”
Anthropic was founded by Amodei and other former OpenAI employees in 2021 with safety as one of its core missions. Many of Anthropic’s founding team reportedly left OpenAI owing to disagreements about the direction and safety culture at the company.
Amodei has made several public statements about his belief in the potential existential risks of AI. He’s said that he believes humanity may one day lose control of AI systems if they become smarter than humans. He’s also raised concerns about rogue actors weaponizing advanced AI to create bioweapons, engineer cyberattacks, or unleash tools of mass disruption long before machines surpass human intelligence.
More recently, in an interview with Axios, he predicted AI could wipe out roughly 50% of all entry-level white-collar jobs and urged lawmakers to prepare now to protect people’s livelihoods.
Huang acknowledged that the tech may have some impact on employees, but dismissed Amodei’s recent bold claim.
“Everybody’s jobs will be changed. Some jobs will be obsolete, but many jobs are going to be created … Whenever companies are more productive, they hire more people,” he said.
Anthropic did not immediately respond to a request for comment from Fortune.
Quantum computing’s ‘inflection point’
Huang made the comments in a press briefing following Nvidia’s GTC Paris conference, where the company announced a new partnership with French startup Mistral as part of a push to develop European computing capacity.
Huang said Nvidia had more than 20 “AI factories” in the works across the continent, promising European researchers and startups that their “GPU shortage will be resolved” soon.
The CEO also touched on Nvidia’s quantum computing efforts, spotlighting Nvidia’s hybrid quantum-classical platform, CUDA-Q, and claiming that quantum computing is hitting an “inflection point.” Huang said that the tech could start solving real-world problems in the next few years.
An AI-related provision in the “Big Beautiful Bill” could restrict state-level legislation of energy-hungry data centers—and is raising bipartisan objections across the US.
TechRadar and Tom's Guide sat down with Apple's Craig Federighi and Greg Joswiak to talk about the company's latest plans for integrating Siri and Apple Intelligence.
Ohio State University is making AI literacy a requirement for all undergraduates starting in 2025. The university’s new “AI Fluency” initiative includes hands-on workshops and a dedicated course, aiming to equip students to use generative AI responsibly in their chosen fields.
Ohio State University is requiring all students to learn how to use AI. The university’s “AI Fluency” initiative, announced last week, aims to ensure all students graduate equipped to apply AI tools and applications in their fields.
“Through AI Fluency, Ohio State students will be ‘bilingual’ — fluent in both their major field of study and the application of AI in that area,” Ravi V. Bellamkonda, executive vice president and provost at Ohio State, said in a statement. “Grounded with a strong sense of responsibility and possibility, we will prepare Ohio State’s students to harness the power of AI and to lead in shaping the future of their area of study.”
Starting in fall 2025, hands-on experience with AI tools will become a core expectation for every undergraduate at the college, no matter their field of study.
Students will receive an introduction to generative AI in their first few weeks of college while further training will be threaded into the university’s First Year Success Series. These workshops will aim to give students early exposure to real-world applications of AI, and a broader slate of workshops will be available throughout the academic year.
“Ohio State’s faculty have long been pioneers in exploring the transformative potential of AI, driving innovation both in research and education,” said Peter Mohler, the university’s executive vice president for research, innovation, and knowledge. “Our university is leading the way in a multidisciplinary approach to harnessing AI’s benefits, significantly shaping the future of learning and discovery.”
Colleges are changing their view on AI
Colleges have been gradually changing their approach to AI use over the last year, with many beginning to incorporate the tech into classes. College campuses have become something of a flashpoint for wider tensions around AI, as the technology has created friction between students and professors.
Students were some of the early adopters of the tech after they realized tools like OpenAI’s ChatGPT were capable of producing decent-quality essays in seconds. This prompted a rise in the number of students using AI to cheat on assignments, but also led to a few false accusations from professors in return.
Most U.S. colleges have been trying to define and allow for some “acceptable” use of AI among students and professors, but the guidance has sometimes struggled to keep pace with technological advances. Ohio State University’s recent initiative goes further than most colleges and makes the argument that students need to skill up in AI before entering the workforce.
Entry-level jobs, which are typically taken by recent graduates, are some of the most exposed to AI automation. Some have argued recently that we are already seeing these jobs disappear.
The university’s president, Walter “Ted” Carter Jr., said in a statement: “Ohio State has an opportunity and responsibility to prepare students to not just keep up, but lead in this workforce of the future.”
“Artificial intelligence is transforming the way we live, work, teach, and learn. In the not-so-distant future, every job, in every industry, is going to be [affected] in some way by AI,” he added.
Today’s most advanced AI models are relatively useful for lots of things—writing software code, research, summarizing complex documents, writing business correspondence, editing, generating images and music, role-playing human interactions, the list goes on. But relatively is the key word here. As anyone who uses these models soon discovers, they remain frustratingly error-prone and erratic. So how could anyone think that these systems could be used to run critical infrastructure, such as electrical grids, air traffic control, communications networks, or transportation systems?
Yet that is exactly what a project funded by the U.K.’s Advanced Research and Invention Agency (ARIA) is hoping to do. ARIA was designed to be somewhat similar to the U.S. Defense Advanced Research Projects Agency (DARPA), with government funding for moonshot research that has potential governmental or strategic applications. The £59 million ($80 million) ARIA project, called The Safeguarded AI Program, aims to find a way to combine AI “world-models” with mathematical proofs that could guarantee that the system’s outputs were valid.
David Dalrymple, the machine learning researcher who is leading the ARIA effort, told me that the idea was to use advanced AI models to create a “production facility” that would churn out domain-specific control algorithms for critical infrastructure. These algorithms would be mathematically tested to ensure that they meet the required performance specifications. If the control algorithms pass this test, the controllers—but not the frontier AI models that developed them—would be deployed to help run critical infrastructure more efficiently.
Dalrymple (who is known by his social media handle Davidad) gives the example of the U.K.’s electricity grid. The grid’s operator currently acknowledges that if it could balance supply and demand on the grid more optimally, it could save £3 billion ($4 billion) that it spends each year essentially paying to have excess generation capacity up and running to avoid the possibility of a sudden blackout, he says. Better control algorithms could reduce those costs.
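As a rough illustration of the kind of optimization such a control algorithm performs, here is a minimal sketch that dispatches two hypothetical generators to meet demand at the lowest cost. The costs, capacities, and demand figures are invented, and this is not ARIA’s or the grid operator’s actual method.

```python
# Minimal sketch of supply-demand balancing as a linear program: run the cheap
# generator as hard as possible and cover the rest with the expensive one.
from scipy.optimize import linprog

costs = [30.0, 70.0]          # assumed £/MWh for a cheap and an expensive generator
capacity_mw = [600.0, 500.0]  # assumed maximum output of each generator
demand_mw = 900.0             # assumed forecast demand that must be covered

result = linprog(
    c=costs,                              # minimize total generation cost
    A_eq=[[1.0, 1.0]], b_eq=[demand_mw],  # total output must equal demand
    bounds=[(0.0, cap) for cap in capacity_mw],
    method="highs",
)
print(result.x, result.fun)  # [600. 300.] 39000.0 -> cheap plant runs flat out
```

In ARIA’s vision, a narrow AI system would generate far more sophisticated controllers of this general kind, and a separate mathematical proof would certify that their outputs always meet the required specification.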
Besides the energy sector, ARIA is also looking at applications in supply chain logistics, biopharmaceutical manufacturing, self-driving vehicles, clinical trial design, and electric vehicle battery management.
AI to develop new control algorithms
Frontier AI models may now be reaching the point where they can automate algorithmic research and development, Davidad says. “The idea is, let’s take that capability and turn it to narrow AI R&D,” he tells me. Narrow AI usually refers to AI systems that are designed to perform one particular, narrowly defined task at superhuman levels, rather than an AI system that can perform many different kinds of tasks.
The challenge, even with these narrow AI systems, is then coming up with mathematical proofs to guarantee that their outputs will always meet the required technical specification. There’s an entire field known as “formal verification” that involves mathematically proving that software will always provide valid outputs under given conditions—but it’s notoriously difficult to apply to neural network-based AI systems. “Verifying even a narrow AI system is something that’s very labor intensive in terms of a cognitive effort required,” Davidad says. “And so it hasn’t been worthwhile historically to do that work of verifying except for really, really specialized applications like passenger aviation autopilots or nuclear power plant control.”
Formally verified software of this kind won’t fail because a bug causes an erroneous output. It can, however, break down when it encounters conditions that fall outside its design specifications—for instance, a load-balancing algorithm for an electrical grid might not be able to handle an extreme solar storm that shorts out all of the grid’s transformers simultaneously. But even then, the software is usually designed to “fail safe” and revert to manual control.
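The sketch below illustrates those two ideas in miniature: a controller whose behavior is only specified within a stated operating envelope, and a fail-safe fallback to manual control outside it. All names and thresholds are hypothetical, and a real formal proof would establish the safety property once for all in-spec inputs rather than checking it at runtime as this toy does.

```python
# Illustrative sketch (not ARIA's design): a toy controller with an explicit
# operating envelope and a fail-safe fallback. All figures are hypothetical.

MAX_LOAD_MW = 1_000.0  # design specification: loads above this are out of scope


class OutOfSpecError(Exception):
    """Raised when the input lies outside the verified operating envelope."""


def balance(load_mw: float) -> float:
    """Return a generation setpoint; only specified for 0 <= load_mw <= MAX_LOAD_MW."""
    if not 0.0 <= load_mw <= MAX_LOAD_MW:
        raise OutOfSpecError(load_mw)
    return load_mw * 1.02  # toy rule: cover demand plus a 2% reserve margin


def run_step(load_mw: float):
    try:
        setpoint = balance(load_mw)
        # The property a formal proof would establish once, for ALL in-spec
        # inputs, rather than asserting it at runtime as this sketch does:
        assert setpoint >= load_mw
        return setpoint
    except OutOfSpecError:
        return "revert to manual control"  # fail-safe behavior


print(run_step(800.0))    # 816.0
print(run_step(5_000.0))  # revert to manual control
```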
ARIA is hoping to show that frontier AI models can be used to do the laborious formal verification of the narrow AI controller as well as develop the controller in the first place.
But will AI models cheat the verification tests?
But this raises another challenge. There’s a growing body of evidence that frontier AI models are very good at “reward hacking”—essentially finding ways to cheat to accomplish a goal—as well as at lying to their users about what they’ve actually done. The AI safety nonprofit METR (short for Model Evaluation & Threat Research) recently published a blog post on all the ways OpenAI’s o3 model tried to cheat on various tasks.
ARIA says it is hoping to find a way around this issue too. “The frontier model needs to submit a proof certificate, which is something that is written in a formal language that we’re defining in another part of the program,” Davidad says. This “new language for proofs will hopefully be easy for frontier models to generate and then also easy for a deterministic, human audited algorithm to check.” ARIA has already awarded grants for work on this formal verification process.
Models for how this might work are starting to come into view. Google DeepMind recently developed an AI model called AlphaEvolve that is trained to search for new algorithms for applications such as managing data centers, designing new computer chips, and even figuring out ways to optimize the training of frontier AI models. Google DeepMind has also developed a system called AlphaProof that is trained to develop mathematical proofs and write them in a coding language called Lean that won’t run if the answer to the proof is incorrect.
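To make the Lean point concrete, here is a minimal, self-contained example, unrelated to AlphaProof’s internals: the file only compiles if the proof is genuinely correct, which is exactly the machine-checkable property ARIA wants proof certificates to have.

```lean
-- Minimal Lean 4 example: this file only compiles if the proof is correct.
-- (Illustrative only; not taken from AlphaProof.)
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```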
ARIA is currently accepting applications from teams that want to run the core “AI production facility,” with the winner of the £18 million grant to be announced on October 1. The facility, the location of which is yet to be determined, is supposed to be running by January 2026. ARIA is asking those applying to propose a new legal entity and governance structure for this facility. Davidad says ARIA does not want an existing university or a private company to run it. But the new organization, which might be a nonprofit, would partner with private entities in areas like energy, pharmaceuticals, and healthcare on specific controller algorithms. He said that in addition to the initial ARIA grant, the production facility could fund itself by charging industry for its work developing domain-specific algorithms.
It’s not clear if this plan will work. For every transformational DARPA project, many more fail. But ARIA’s bold bet here looks like one worth watching.
With that, here’s more AI news…
Jeremy Kahn [email protected] @jeremyakahn Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Why not join me in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. We will dive deep into the latest on AI agents, examine the data center build out in Asia, and talk to top leaders from government, board rooms, and academia in the region and beyond. You can apply to attend here.
The U.K.'s Advanced Research and Invention Agency (ARIA) is funding a project to use frontier AI models to design and test new control algorithms for safety critical systems, such as nuclear power plants and power grids.
Meta’s decision to create an ambitious new “superintelligence” AI research lab headed by Scale AI’s Alexandr Wang is a bold bid for relevance in its fierce AI battle with OpenAI, Anthropic and Google. It is also far from a slam-dunk.
While the pursuit of an ill-defined superintelligence—typically meant as an AI system that could surpass the collective intelligence of humanity—would have seemed a quixotic, sci-fi quest in the past, it has become an increasingly common way for top AI companies to attract talent and secure a competitive edge.
Tapping the 28-year-old Wang to lead the new superintelligence effort, while in talks to invest billions of dollars into Scale AI, as reported today by the New York Times, clearly shows Mark Zuckerberg’s confidence in Wang and Scale. The startup, which Wang co-founded in 2016, primarily focuses on providing high-quality training data, the “oil” that powers today’s most powerful AI models. Meta invested in Scale’s last funding round, and also recently partnered with Scale and the U.S. Department of Defense on “Defense Llama,” a military-grade LLM based on Meta’s Llama 3 model.
Meta has struggled, however, with several reorganizations of its generative AI research and product teams over the past two years. And the high-stakes AI talent wars are tougher to win than ever. Meta has reportedly offered seven- to nine-figure compensation packages to dozens of top researchers, with some agreeing to join the new lab. But one VC posted on X that even with those offers on the table, he had heard of three instances in which Meta still lost candidates to OpenAI and Anthropic.
Meta already has a long-standing advanced AI research lab, FAIR (Fundamental AI Research Lab), founded by Meta chief scientist Yann LeCun in 2013. But FAIR has never claimed to be pursuing superintelligence, and LeCun has even eschewed the term AGI (artificial general intelligence), which is often defined as an AI system that would be as intelligent as an individual person. LeCun has gone on record as being skeptical that current approaches to AI, built around large language models (LLMs), will ever get to human-level intelligence.
In April, LeCun told Fortune that a spate of high-profile departures from FAIR, including that of former FAIR head Joelle Pineau, was not a sign of the lab’s “dying a slow death.” Instead, he said, it was a “new beginning” for FAIR, refocusing on the “ambitious and long-term goal of what we call AMI (advanced machine intelligence).”
Aside from FAIR, Meta CEO Mark Zuckerberg has spent billions on generative AI development in a bid to catch up to OpenAI, following the launch of that company’s wildly popular ChatGPT in November 2022. Zuckerberg rebuilt the entire company around the technology and succeeded in creating highly-successful open source AI models, branded as Llama, in 2023 and 2024. The Llama models helped Meta recover from an underwhelming pivot to the metaverse.
But Meta’s latest AI model, Llama 4, which was released in April 2025, was considered a flop. The model’s debut was dogged by controversy over a perceived rushed release, a lack of transparency, possibly inflated performance metrics, and indications that Meta was failing to keep pace with open-source AI rivals like China’s DeepSeek.
For the past year, Meta has been hemorrhaging top AI talent. Three top Meta AI researchers—Devi Parikh, Abhishek Das, and Dhruv Batra—left a year ago to found Yutori, a startup focused on AI agents. Damien Sereni, an engineering leader at Meta who led the team working on PyTorch, a framework underpinning most of today’s top LLMs, recently left the company. Boris Cherny, a software engineer, left Meta last year for Anthropic, where he created Claude Code. And Erik Meijer, a former Meta engineering leader, told Fortune recently that he has heard that several developers from PyTorch have recently left to join former OpenAI CTO Mira Murati’s Thinking Machines Lab.
Meta’s move to bring in Wang, along with a number of other Scale employees, while simultaneously investing in Scale, follows what has, over the past 18 months, become a standard playbook for big tech companies looking to grab AI know-how from startups. Microsoft used a similar deal structure, which stops short of a full acquisition yet still amasses talent and technical IP, to bring in Mustafa Suleyman from Inflection. Amazon then used the arrangement to hire key talent from Adept AI and Google used it to rehire Character AI cofounder Noam Shazeer. Because the deals are not structured as acquisitions, it is more difficult for antitrust regulators to block them.
It remains unclear whether Meta will be able to declare the Scale deal as a big win. It’s also not yet certain whether Yann LeCun will find himself marginalized within the Meta research ecosystem. But one big rising power player is undeniable: Alexandr Wang.
Wang became a billionaire with Scale by providing a global army of contractors that could label the data that companies including Meta and OpenAI use to train and improve their AI models. While it went on to help companies make custom AI applications, its core data business remains its biggest moneymaker. When Fortune spoke to Wang a year ago, he said that data was far from being commoditized for AI. “It’s a pivotal moment for the industry,” he said. “I think we are now entering a phase where further improvements and further gains from the models are not going to be won easily. They’re going to require increasing investments and are gonna require innovations and computation and efficient algorithms, innovations, and data. Our leg of that stool is to ensure that we continue innovating on data.”
Now, with a potential Meta investment, Wang’s efforts are paying off big time. Zuckerberg can only hope the deal works as well for him as it has for Wang.
AI is transforming how enterprise software gets bought—not by replacing users, but by becoming one.
The debate around AI and the workplace often centers on labor displacement: Will it replace workers? Where will it fall short? And indeed, some “AI-first” experiments have produced mixed results—Klarna reversed course on customer service automation, while Duolingo faced public backlash for an AI-focused growth strategy.
These outcomes complicate our understanding of Microsoft’s recent efficiency-driven layoffs. Unlike a premature overcommitment to automation (à la Klarna), Microsoft is restructuring to operate as “customer zero” for its own enterprise AI tools, fundamentally changing how the computing giant writes code, ships products, and supports clients. It’s a strategic shot in the arm—a painful one—that reveals what’s coming next: AI agents built not just to automate outcomes, but to make decisions about the tools, processes, and infrastructure used along the way.
AI agent as orchestrator
In the past, enterprise software was chosen through a familiar dance: evaluation, demos, stakeholder alignment, and procurement. But today, AI agents are building applications, provisioning infrastructure, and selecting tools—autonomously, and at scale. Ask an agent to spin up a customer feedback portal, and it might choose Next.js for the frontend, Neon for the cloud database, Vercel for hosting, and Clerk for authentication as a service. No human has to Google options, compare vendors, or meet with salespeople. The agent simply acts.
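As a hypothetical sketch of that decision step, the snippet below maps a task’s requirements to a stack without any human vendor evaluation; the tool names come from the example above, and the selection rules are invented for illustration rather than drawn from any real agent framework.

```python
# Hypothetical sketch of an agent's stack-selection step. Tool names follow the
# article's example; the rules are invented and not a real framework's API.

def pick_stack(task: dict) -> dict:
    stack = {"frontend": "Next.js", "hosting": "Vercel"}  # assumed defaults
    if task.get("needs_database"):
        stack["database"] = "Neon"   # serverless Postgres
    if task.get("needs_auth"):
        stack["auth"] = "Clerk"      # authentication as a service
    return stack

print(pick_stack({"goal": "customer feedback portal",
                  "needs_database": True,
                  "needs_auth": True}))
# {'frontend': 'Next.js', 'hosting': 'Vercel', 'database': 'Neon', 'auth': 'Clerk'}
```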
Internal telemetry from Neon shows that AI agents now create databases at 4 times the rate of human developers. And that pattern is extending beyond engineering. Agents will soon assemble sales pipelines, orchestrate onboarding flows, manage IT operations—and, along the way, select the tools that work.
Microsoft’s sales team re-org further hints at how this procurement will occur in the future. Corporate customers now have a single point of contact at Microsoft, rather than several salespeople for different products. In part, this may be because agentic AI tools will select vendors on their own—and copilots don’t need five sales reps. The agent won’t pause to ask, “Do you have a preferred vendor?” It will reason about the task at hand and continue on its code path, hurtling toward an answer.
Human-in-the-loop AI
This evolution from executor to decision-maker is powered by the human-in-the-loop (HITL) approach to AI model training.
For years, enterprise AI has been limited by expensive labeling processes, fragile automation, and underutilized human expertise, leading to failure in nuanced, high-stakes environments like finance, customer service, and health care.
HITL systems change that by embedding AI directly into the workforce. During real-time work, agents observe GUI-level interactions—clicks, edits, approvals—capturing rich signals from natural behavior. These human corrections serve as high-quality validation points, boosting operational accuracy to ~99% without interrupting the workflow. The result is a continuous learning loop where agents don’t just follow instructions, they learn how the work gets done. This also creates dynamic, living datasets tailored to real business processes within the organization.
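Here is an illustrative sketch, not any particular vendor’s system, of the loop just described: the agent proposes an action, the human’s actual GUI-level action is captured, and the pair is stored as a labeled example for later retraining. All names are hypothetical.

```python
# Illustrative HITL capture loop: every human override of an agent's proposal
# becomes a labeled training example. Names and fields are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Interaction:
    context: str        # what the agent observed (e.g., a support ticket)
    agent_action: str   # what the agent proposed
    human_action: str   # what the human actually did (click, edit, approval)

    @property
    def correction(self) -> bool:
        return self.agent_action != self.human_action


@dataclass
class TrainingBuffer:
    examples: List[Interaction] = field(default_factory=list)

    def record(self, interaction: Interaction) -> None:
        # Every human override is a high-quality validation point for retraining.
        self.examples.append(interaction)


buffer = TrainingBuffer()
buffer.record(Interaction("refund request over $500", "auto-approve", "escalate to manager"))
print([ex.correction for ex in buffer.examples])  # [True]
```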
This shift offers entirely new market opportunities.
On the development front, traditional supervised learning models are giving way to embedded learning systems that harvest real-world interaction signals, enabling cheaper, faster, more adaptive AI. This further offers a massive new training set for agentic AI systems without incurring the cost of hiring human knowledge workers to shepherd the AI. With lower development costs, high fidelity, and better dynamism, the next generation of copilots will blend automation with real-time human judgment, dominating verticals like customer service, security, sales, and internal operations.
Accordingly, these tools will require infrastructure for real-time monitoring, GUI-level interaction capture, dynamic labeling, and automated retraining—creating further platform opportunities.
Microsoft’s sense of urgency
While the internet abounds with zippy coverage of savvy employees “AI hacking” their workflows, the reality is most workers lack that kind of product-development acumen. (And same for their bosses.) Save for a small subset of the business world possessing rare tech fluency, most corporate outfits will see greater value in buying AI tools—those built, customized, and serviced by world-class talent to solve specific workflows.
Microsoft’s sense of urgency comes from its understanding that the question of “build or buy” is changing quickly. This “eureka” moment, technologically speaking, is what’s catalyzing an operator pivot at enterprise AI outfits. HITL represents a move away from read/write data integrations toward a richer, more dynamic GUI-interaction-based intelligence layer—one that mirrors how work actually gets done in the enterprise.
We’re seeing the beginning of a race toward enterprise AI dominance among the goliaths of the tech world. Signals like OpenAI’s investments into application-layer experiences (shopping agents, its acquisition of agentic developer Windsurf) highlight a clear trend: Mastering human-application-interaction capture is becoming the foundation for scalable agentic automation. As companies like Microsoft, OpenAI, and others absorb critical data environments and restructure themselves to serve as “customer zero,” they’re treating AI as the new chief procurement officer of their own ecosystems. These companies see the value of selling shovels in a gold rush—and know AI is finally sharp enough to start digging.
Tomasz Tunguz is the founder and general manager of Theory Ventures. He served as managing partner at Redpoint Ventures for 14 years.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
After stumbling out of the starting gate in Big Tech’s pivotal race to capitalize on artificial intelligence, Apple tried to regain its footing Monday during an annual developers conference that focused mostly on incremental advances and cosmetic changes in its technology.
The presummer rite, which attracted thousands of developers from nearly 60 countries to Apple’s Silicon Valley headquarters, was subdued compared with the feverish anticipation that surrounded the event in the last two years.
Apple highlighted plans for more AI tools designed to simplify people’s lives and make its products even more intuitive. It also provided an early glimpse at the biggest redesign of its iPhone software in a decade. In doing so, Apple executives refrained from issuing bold promises of breakthroughs that punctuated recent conferences, prompting CFRA analyst Angelo Zino to deride the event as a “dud” in a research note.
More AI, but what about Siri?
In 2023, Apple unveiled a mixed-reality headset that has been little more than a niche product, and at last year’s WWDC it trumpeted its first major foray into the AI craze with an array of new features highlighted by the promise of a smarter and more versatile version of its virtual assistant, Siri — a goal that has yet to be realized.
“This work needed more time to reach our high-quality bar,” Craig Federighi, Apple’s top software executive, said Monday at the outset of the conference. The company didn’t provide a precise timetable for when Siri’s AI upgrade will be finished but indicated it won’t happen until next year at the earliest.
“The silence surrounding Siri was deafening,” said Forrester Research analyst Dipanjan Chatterjee. “No amount of text corrections or cute emojis can fill the yawning void of an intuitive, interactive AI experience that we know Siri will be capable of when ready. We just don’t know when that will happen. The end of the Siri runway is coming up fast, and Apple needs to lift off.”
Is Apple, with its ‘liquid glass,’ still a trendsetter?
The showcase unfolded amid nagging questions about whether Apple has lost some of the mystique and innovative drive that has made it a tech trendsetter during its nearly 50-year history.
Instead of making a big splash as it did with the Vision Pro headset and its AI suite, Apple took a mostly low-key approach that emphasized its effort to spruce up the look of its software with a new design called “Liquid Glass” while also unveiling a new hub for its video games and new features like a “Workout Buddy” to help manage physical fitness.
Apple executives promised to make its software more compatible with the increasingly sophisticated computer chips that have been powering its products while also making it easier to toggle between the iPhone, iPad, and Mac.
“Our product experience has become even more seamless and enjoyable,” Apple CEO Tim Cook told the crowd as the 90-minute showcase wrapped up.
IDC analyst Francisco Jeronimo said Apple seemed to be largely using Monday’s conference to demonstrate the company still has a blueprint for success in AI, even if it’s going to take longer to realize the vision that was presented a year ago.
“This year’s event was not about disruptive innovation, but rather careful calibration, platform refinement and developer enablement — positioning itself for future moves rather than unveiling game-changing technologies,” Jeronimo said.
Apple’s next operating system will be iOS 26
Besides redesigning its software, Apple will switch to a method that automakers have used to telegraph their latest car models by linking them to the year after they first arrive at dealerships. That means the next version of the iPhone operating system due out this autumn will be known as iOS 26 instead of iOS 19 — as it would have been under the previous naming approach used since the device’s 2007 debut.
The iOS 26 upgrade is expected to be released in September around the same time Apple traditionally rolls out the next iPhone models.
Playing catchup in AI
Apple opened the proceedings with a short video clip featuring Federighi speeding around a track in a Formula 1 race car. Although it was meant to promote the June 27 release of the Apple film, “F1” starring Brad Pitt, the segment could also be viewed as an unintentional analogy to the company’s attempt to catch up to the rest of the pack in AI technology.
While some of the new AI tricks compatible with the latest iPhones began rolling out late last year as part of free software updates, the delays in a souped-up Siri became so glaring that the chastened company stopped promoting it in its marketing campaigns earlier this year.
While Apple has been struggling to make AI that meets its standards, the gap separating it from other tech powerhouses is widening. Google keeps packing more AI into its Pixel smartphone lineup while introducing more of the technology into its search engine to dramatically change the way it works. Samsung, Apple’s biggest smartphone rival, is also leaning heavily into AI. Meanwhile, OpenAI, the maker of ChatGPT, recently struck a deal that will bring former Apple design guru Jony Ive into the fold to work on a new device expected to compete against the iPhone.
Regulatory and trade challenges
Besides grappling with innovation challenges, Apple also faces regulatory threats that could siphon away billions of dollars in revenue that help finance its research and development. A federal judge is currently weighing whether proposed countermeasures to Google’s illegal monopoly in search should include a ban on long-running deals worth $20 billion annually to Apple while another federal judge recently banned the company from collecting commissions on in-app transactions processed outside its once-exclusive payment system.
On top of all that, Apple has been caught in the crosshairs of President Donald Trump’s trade war with China, a key manufacturing hub for the Cupertino, California, company. Cook successfully persuaded Trump to exempt the iPhone from tariffs during the president’s first administration, but he has had less success during Trump’s second term, in which the administration seems more determined to prod Apple to make its products in the U.S.
The multidimensional gauntlet facing Apple is spooking investors, causing the company’s stock price to plunge by 20% so far this year — a decline that has erased about $750 billion in shareholder wealth. After beginning the year as the most valuable company in the world, Apple now ranks third behind longtime rival Microsoft, another AI leader, and AI chipmaker Nvidia.
Apple’s shares closed down by more than 1% on Monday — an early indication the company’s latest announcements didn’t inspire investors.
On April 28, Duolingo cofounder and CEO Luis von Ahn posted an email on LinkedIn that he had just sent to all employees at his company. In it, he outlined his vision for the language-learning app to become an “AI-first” organization, including phasing out contractors if AI could do their work, and giving a team the ability to hire a new person only if they were not able to automate their work through AI.
The response was swift and scathing. “This is a disaster. I will cancel my subscription,” wrote one commenter. “AI first means people last,” wrote another. And a third summed up the general feeling of critics when they wrote: “I can’t support a company that replaces humans with AI.”
A week later, von Ahn walked back his initial statements, clarifying that he does not “see AI replacing what our employees do” but instead views it as a “tool to accelerate what we do, at the same or better level of quality.”
In a new interview, von Ahn says that he was shocked by the backlash he received. “I did not expect the amount of blowback,” he recently told the Financial Times. While he says he should have been more clear about his AI goals, he also feels that the negativity stems from a general fear that AI will replace workers. “Every tech company is doing similar things, [but] we were open about it,” he said.
Von Ahn, however, isn’t alone. Other CEOs have also been forthright about how their AI aspirations will affect their human workforce. The CEO of Klarna, for example, said in August of last year that the company had cut hundreds of jobs thanks to AI. Last month, he added that the new tech had helped the company shrink its workforce by 40%.
Anxiety among workers about the potential that they will be replaced by AI, however, is high. Around 40% of workers familiar with ChatGPT in 2023 were worried that the technology would replace them, according to a Harris poll done on behalf of Fortune. And a Pew study from earlier this year found that around 32% of workers fear AI will lead to fewer opportunities for them. Another 52% were worried about how AI could potentially impact the workplace in the future.
The leaders of AI companies themselves aren’t necessarily offering words of comfort to these worried workers. The Anthropic CEO, Dario Amodei, told Axios last month that AI could eliminate approximately half of all entry-level jobs within the next five years. He argued that there’s no turning back now.
“It sounds crazy, and people just don’t believe it,” he said. “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming.”
Klarna’s CEO has predicted that a recession could be around the corner as companies around the globe—including his own—reduce the headcount of well-paid, white-collar jobs and replace them with AI.
Sebastian Siemiatkowski, the boss of the Swedish Buy Now, Pay Later group, is once again sounding a pessimistic note on AI’s impact on the workforce. But as he embraces the potential positive effects of AI on his own bottom line, he may have to contend with the negative fallout at a company that has flirted with growing credit losses in the last year.
While he admitted that “making future statements about macroeconomics is like horoscopes,” Siemiatkowski’s well-documented feelings about AI’s impact on the labor market leave him making a pessimistic prediction about the economy.
“My suspicion…is that there will be an implication for white-collar jobs. And when that happens, that usually leads to at least a recession in the short term. And I think, unfortunately, I don’t see how we could avoid that with what’s happening from a technology perspective,” Siemiatkowski said on the Times Tech Podcast.
Siemiatkowski has long warned of the disruptive nature of AI on the labor market, using his experience of shifting recruiting practices at Klarna to support his argument that it will replace roles.
He told the podcast that the company’s headcount had fallen from 5,500 people to 3,000 in the space of two years. Speaking in August last year, Siemiatkowski said his ambition was to eventually reduce that figure to 2,000 through natural attrition rather than by engaging in layoffs.
In February last year, Klarna announced that its AI chatbot was doing the work of 700 customer service staff, a role previously filled by agents working for the French outsourcing firm Teleperformance.
While Siemiatkowski has faced criticism for his willingness to talk about AI’s disruptive potential, he indicated he felt it was more of a duty to be frank about the technology.
“Many people in the tech industry, especially CEOs, tend to downplay the consequences of AI on jobs, white-collar jobs in particular. I don’t want to be one of them.”
Indeed, Siemiatkowski implied that if he added up the headcount at the companies whose CEOs had called him to ask about making “efficiencies,” the figure would in itself make for a seismic economic event.
Recession indicator?
An AI-induced recession would combine a number of brewing themes for the Swedish tech group. Siemiatkowski’s comments come as the group reported widening credit losses, which rose by 17% to $136 million last year.
Siemiatkowski explained the losses as a result of the group taking on more customers, naturally leading to a rise in defaults. On a relative basis, the percentage increase in defaults was small, Siemiatkowski said.
The Swede added that because Klarna customers’ average indebtedness was £100, they were more likely to pay back their loans than holders of typical credit card debt, which he put at £5,000. The typical U.K. credit card holder in fact has an outstanding balance closer to £1,800, while in the U.S., the average is about $6,300.
Regardless of the variance, Siemiatkowski says the difference means customers are more likely to pay off their Klarna debts.
“We are very unsensitive to macroeconomic shifts. We can still see them, but they’re much less profound than if you’re a big bank, you have tons of mortgages. And for people to really increase losses, credit losses, what has to happen is people have to lose jobs.”
Despite that, predictions of mass layoffs among white-collar workers could spell higher risk for the company’s credit business.
While he said there wasn’t currently any sign of a recession, Siemiatkowski did observe falling consumer sentiment, which would weigh on spending.
Siemiatkowski’s views on AI in the labor force have evolved over time. Speaking to Bloomberg in May, Siemiatkowski was reported to have said the company was embarking on a recruitment drive, contrary to his previous statements about a workforce reduction.
Speaking with the Times, Siemiatkowski clarified that the company needed different types of workers to handle more complex customer service requests.
“When we started applying AI in our customer service, we realized that there will be a higher value to human connection,” he said.
Tools for Humanity, a startup co-founded by OpenAI’s Sam Altman, is rolling out its eyeball-scanning Orb devices to the UK as part of a global expansion of the company’s novel identification services.
Starting this week, people in London will be able to scan their eyes using Tools for Humanity’s proprietary Orb device, the company said in a statement on Monday. The service will roll out to Manchester, Birmingham, Cardiff, Belfast and Glasgow in the coming months.
The spherical Orbs will be at dedicated premises in shopping malls and on high streets, said Damien Kieran, chief legal and privacy officer at Tools for Humanity. Later, the company plans to partner with major retailers to provide self-serve Orbs that people can use as they would an ATM, Kieran added.
The company, led by co-founder and Chief Executive Officer Alex Blania, has presented its eye-scanning technology as a way for people to prove they are human at a time when artificial intelligence systems are becoming more adept at mimicking people. AI bots and deepfakes, including those enabled by generative AI tools created by Altman’s OpenAI, pose a range of security threats, including identity theft, misinformation and social engineering.
The Orb scan creates a digital credential, called World ID, based on the unique properties of a person’s iris. Those who agree to the scan can also receive a cryptocurrency token called Worldcoin through the company.
Tools for Humanity has faced regulatory scrutiny over privacy concerns about its technology in several markets, including investigations in Germany and Argentina, as well as bans in Spain and Hong Kong. The company said it doesn’t store any personal information or biometric data and that the verification information remains on the World ID holder’s mobile phone.
Kieran said Tools for Humanity had been meeting with data regulators including the UK’s Information Commissioner’s Office and privacy advocates ahead of the planned expansion.
So far, about 13 million people in countries including Mexico, Germany, Japan, Korea, Portugal and Thailand have verified their identities using Tools for Humanity’s technology, the company said. In April, the company announced plans to expand to six US cities.
There are 1,500 Orbs in circulation, Kieran said, but the company plans to ramp up production to ship 12,000 more over the next 12 months.
On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released "Claude Gov" models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.
The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, "refuse less" when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.
Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.
Meta Platforms Inc. is in talks to make a multibillion-dollar investment into artificial intelligence startup Scale AI, according to people familiar with the matter.
The financing could exceed $10 billion in value, some of the people said, making it one of the largest private company funding events of all time.
The terms of the deal are not finalized and could still change, according to the people, who asked not to be identified discussing private information.
A representative for Scale did not immediately respond to requests for comment. Meta declined to comment.
Scale AI, whose customers include Microsoft Corp. and OpenAI, provides data labeling services to help companies train machine-learning models and has become a key beneficiary of the generative AI boom. The startup was last valued at about $14 billion in 2024, in a funding round that included backing from Meta and Microsoft. Earlier this year, Bloomberg reported that Scale was in talks for a tender offer that would value it at $25 billion.
This would be Meta’s biggest-ever external AI investment, and a rare move for the company. The social media giant has until now mostly depended on its in-house research, plus a more open development strategy, to make improvements in its AI technology. Meanwhile, Big Tech peers have invested heavily: Microsoft has put more than $13 billion into OpenAI, while both Amazon.com Inc. and Alphabet Inc. have put billions into rival Anthropic.
Part of those companies’ investments have been through credits to use their computing power. Meta doesn’t have a cloud business, and it’s unclear what format Meta’s investment will take.
Chief Executive Officer Mark Zuckerberg has made AI Meta’s top priority, and said in January that the company would spend as much as $65 billion on related projects this year.
The company’s push includes an effort to make Llama the industry standard worldwide. Meta’s AI chatbot — already available on Facebook, Instagram and WhatsApp — is used by 1 billion people per month.
Scale, co-founded in 2016 by CEO Alexandr Wang, has been growing quickly: The startup generated revenue of $870 million last year and expects sales to more than double to $2 billion in 2025, Bloomberg previously reported.
Scale plays a key role in making AI data available for companies. Because AI is only as good as the data that goes into it, Scale uses scads of contract workers to tidy up and tag images, text and other data that can then be used for AI training.
Scale and Meta share an interest in defense tech. Last week, Meta announced a new partnership with defense contractor Anduril Industries Inc. to develop products for the US military, including an AI-powered helmet with virtual and augmented reality features. Meta has also granted approval for US government agencies and defense contractors to use its AI models.
The company is already partnering with Scale on a program called Defense Llama — a version of Meta’s Llama large language model intended for military use.
Scale has increasingly been working with the US government to develop AI for defense purposes. Earlier this year the startup said it won a contract with the Defense Department to work on AI agent technology. The company called the contract “a significant milestone in military advancement.”