A Political Battle Is Brewing Over Data Centers
Ohio State University is requiring all students to learn how to use AI. The university’s “AI Fluency” initiative, announced last week, aims to ensure all students graduate equipped to apply AI tools and applications in their fields.
“Through AI Fluency, Ohio State students will be ‘bilingual’ — fluent in both their major field of study and the application of AI in that area,” Ravi V. Bellamkonda, executive vice president and provost at Ohio State, said in a statement. “Grounded with a strong sense of responsibility and possibility, we will prepare Ohio State’s students to harness the power of AI and to lead in shaping the future of their area of study.”
Starting in fall 2025, hands-on experience with AI tools will become a core expectation for every undergraduate at the college, no matter their field of study.
Students will receive an introduction to generative AI in their first few weeks of college while further training will be threaded into the university’s First Year Success Series. These workshops will aim to give students early exposure to real-world applications of AI, and a broader slate of workshops will be available throughout the academic year.
“Ohio State’s faculty have long been pioneers in exploring the transformative potential of AI, driving innovation both in research and education,” said Peter Mohler, the university’s executive vice president for research, innovation, and knowledge. “Our university is leading the way in a multidisciplinary approach to harnessing AI’s benefits, significantly shaping the future of learning and discovery.”
Colleges have been gradually changing their approach to AI use over the last year, with many beginning to incorporate the tech into classes. College campuses have become something of a flashpoint for wider tensions around AI, with the technology sparking friction between students and professors.
Students were among the early adopters of the tech after realizing that tools like OpenAI’s ChatGPT could produce decent-quality essays in seconds. That prompted a rise in the number of students using AI to cheat on assignments, but it also led to some false accusations of cheating from professors.
Most U.S. colleges have been trying to define and allow for some “acceptable” use of AI among students and professors, but the guidance has sometimes struggled to keep pace with technological advances. Ohio State University’s recent initiative goes further than most colleges and makes the argument that students need to skill up in AI before entering the workforce.
Entry-level jobs, which are typically taken by recent graduates, are some of the most exposed to AI automation. Some have argued recently that we are already seeing these jobs disappear.
The university’s president, Walter “Ted” Carter Jr., said in a statement: “Ohio State has an opportunity and responsibility to prepare students to not just keep up, but lead in this workforce of the future.”
“Artificial intelligence is transforming the way we live, work, teach, and learn. In the not-so-distant future, every job, in every industry, is going to be [affected] in some way by AI,” he added.
This story was originally featured on Fortune.com
© Photo by Aaron M. Sprecher/Getty Images
Today’s most advanced AI models are relatively useful for lots of things—writing software code, research, summarizing complex documents, writing business correspondence, editing, generating images and music, role-playing human interactions, the list goes on. But relatively is the key word here. As anyone who uses these models soon discovers, they remain frustratingly error-prone and erratic. So how could anyone think that these systems could be used to run critical infrastructure, such as electrical grids, air traffic control, communications networks, or transportation systems?
Yet that is exactly what a project funded by the U.K.’s Advanced Research and Invention Agency (ARIA) is hoping to do. ARIA was designed to be somewhat similar to the U.S. Defense Advanced Research Projects Agency (DARPA), with government funding for moonshot research that has potential governmental or strategic applications. The £59 million ($80 million) ARIA project, called The Safeguarded AI Program, aims to find a way to combine AI “world-models” with mathematical proofs that could guarantee that the system’s outputs were valid.
David Dalrymple, the machine learning researcher who is leading the ARIA effort, told me that the idea was to use advanced AI models to create a “production facility” that would churn out domain-specific control algorithms for critical infrastructure. These algorithms would be mathematically tested to ensure that they meet the required performance specifications. If the control algorithms pass this test, the controllers—but not the frontier AI models that developed them—would be deployed to help run critical infrastructure more efficiently.
Dalrymple (who is known by his social media handle Davidad) gives the example of the U.K.’s electricity grid. The grid’s operator currently acknowledges that if it could balance supply and demand on the grid more optimally, it could save the £3 billion ($4 billion) it spends each year essentially paying to keep excess generation capacity up and running to avoid the possibility of a sudden blackout, he says. Better control algorithms could reduce those costs.
Besides the energy sector, ARIA is also looking at applications in supply chain logistics, biopharmaceutical manufacturing, self-driving vehicles, clinical trial design, and electric vehicle battery management.
Frontier AI models may now be reaching the point where they can automate algorithmic research and development, Davidad says. “The idea is, let’s take that capability and turn it to narrow AI R&D,” he tells me. Narrow AI usually refers to AI systems that are designed to perform one particular, narrowly-defined task at superhuman levels, rather than an AI system that can perform many different kinds of tasks.
The challenge, even with these narrow AI systems, is then coming up with mathematical proofs to guarantee that their outputs will always meet the required technical specification. There’s an entire field known as “formal verification” that involves mathematically proving that software will always provide valid outputs under given conditions—but it’s notoriously difficult to apply to neural network-based AI systems. “Verifying even a narrow AI system is something that’s very labor intensive in terms of a cognitive effort required,” Davidad says. “And so it hasn’t been worthwhile historically to do that work of verifying except for really, really specialized applications like passenger aviation autopilots or nuclear power plant control.”
This kind of formally verified software won’t fail because a bug causes an erroneous output. It can sometimes break down because it encounters conditions that fall outside its design specifications—for instance, a load balancing algorithm for an electrical grid might not be able to handle an extreme solar storm that shorts out all of the grid’s transformers simultaneously. But even then, the software is usually designed to “fail safe” and revert back to manual control.
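To make the fail-safe idea concrete, here is a minimal, hypothetical sketch (the controller, fallback, and threshold names are illustrative, not part of ARIA's design): a verified controller acts only while conditions stay inside its specified envelope, and otherwise hands control back to a manual procedure.

```python
# Hypothetical sketch of the "fail safe" pattern described above. The names
# and the threshold are illustrative assumptions, not a real grid spec.

MAX_RATED_FREQUENCY_DEVIATION_HZ = 0.5  # assumed design-envelope limit


def grid_control_step(measured_deviation_hz: float, verified_controller, manual_fallback):
    """Run one control step, reverting to manual control outside the spec."""
    if abs(measured_deviation_hz) > MAX_RATED_FREQUENCY_DEVIATION_HZ:
        # Conditions fall outside the verified specification (e.g. an extreme
        # solar storm), so the system fails safe instead of extrapolating.
        return manual_fallback(measured_deviation_hz)
    return verified_controller(measured_deviation_hz)


if __name__ == "__main__":
    nominal = lambda hz: f"verified controller handles {hz:+.2f} Hz"
    manual = lambda hz: f"fail safe: manual control at {hz:+.2f} Hz"
    print(grid_control_step(0.1, nominal, manual))
    print(grid_control_step(2.0, nominal, manual))
```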
ARIA is hoping to show that frontier AI models can be used to do the laborious formal verification of the narrow AI controller as well as to develop the controller in the first place.
But this raises another challenge. There’s a growing body of evidence that frontier AI models are very good at “reward hacking”—essentially finding ways to cheat to accomplish a goal—as well as at lying to their users about what they’ve actually done. The AI safety non-profit METR (short for Model Evaluation & Threat Research) recently published a blog on all the ways OpenAI’s o3 model tried to cheat on various tasks.
ARIA says it is hoping to find a way around this issue too. “The frontier model needs to submit a proof certificate, which is something that is written in a formal language that we’re defining in another part of the program,” Davidad says. This “new language for proofs will hopefully be easy for frontier models to generate and then also easy for a deterministic, human audited algorithm to check.” ARIA has already awarded grants for work on this formal verification process.
Models for how this might work are starting to come into view. Google DeepMind recently developed an AI model called AlphaEvolve that is trained to search for new algorithms for applications such as managing data centers, designing new computer chips, and even figuring out ways to optimize the training of frontier AI models. Google DeepMind has also developed a system called AlphaProof that is trained to develop mathematical proofs and write them in a coding language called Lean that won’t run if the answer to the proof is incorrect.
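As a toy illustration of that idea (not code from AlphaProof or the ARIA program), here is a short Lean 4 snippet: the proof checker accepts it only because the stated claim is actually true.

```lean
-- A correct claim: `rfl` closes the goal because both sides compute to 4.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- If the statement were changed to `2 + 2 = 5`, the same proof term would be
-- rejected and the file would fail to check, which is the "won't run if the
-- answer to the proof is incorrect" behavior described above.
```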
ARIA is currently accepting applications from teams that want to run the core “AI production facility,” with the winner of the £18 million grant to be announced on October 1. The facility, the location of which is yet to be determined, is supposed to be running by January 2026. ARIA is asking those applying to propose a new legal entity and governance structure for this facility. Davidad says ARIA does not want an existing university or a private company to run it. But the new organization, which might be a nonprofit, would partner with private entities in areas like energy, pharmaceuticals, and healthcare on specific controller algorithms. He said that in addition to the initial ARIA grant, the production facility could fund itself by charging industry for its work developing domain-specific algorithms.
It’s not clear if this plan will work. For every transformational DARPA project, many more fail. But ARIA’s bold bet here looks like one worth watching.
With that, here’s more AI news…
Jeremy Kahn
[email protected]
@jeremyakahn
Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Why not join me in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. We will dive deep into the latest on AI agents, examine the data center build out in Asia, and talk to top leaders from government, board rooms, and academia in the region and beyond. You can apply to attend here.
This story was originally featured on Fortune.com
© Milan Jaros—Bloomberg via Getty Images
Meta’s decision to create an ambitious new “superintelligence” AI research lab headed by Scale AI’s Alexandr Wang is a bold bid for relevance in its fierce AI battle with OpenAI, Anthropic and Google. It is also far from a slam-dunk.
While the pursuit of an ill-defined superintelligence (typically meant as an AI system that could surpass the collective intelligence of humanity) would have seemed a quixotic, sci-fi quest in the past, it has become an increasingly common way for top AI companies to attract talent and secure a competitive edge.
Tapping the 28-year-old Wang to lead the new superintelligence effort, while in talks to invest billions of dollars into Scale AI, as reported today by the New York Times, clearly shows Mark Zuckerberg’s confidence in Wang and Scale. The startup, which Wang co-founded in 2016, primarily focuses on providing high-quality training data, the “oil” that powers today’s most powerful AI models. Meta invested in Scale’s last funding round, and also recently partnered with Scale and the U.S. Department of Defense on “Defense Llama,” a military-grade LLM based on Meta’s Llama 3 model.
Meta has struggled, however, with several reorganizations of its generative AI research and product teams over the past two years. And the high-stakes AI talent wars are tougher to win than ever. Meta has reportedly offered seven- to nine-figure compensation packages to dozens of top researchers, with some agreeing to join the new lab. But one VC posted on X that even with those offers on the table, he had heard of three instances in which Meta still lost candidates to OpenAI and Anthropic.
Meta already has a long-standing advanced AI research lab, FAIR (Fundamental AI Research), founded by Meta chief scientist Yann LeCun in 2013. But FAIR has never claimed to be pursuing superintelligence, and LeCun has even eschewed the term AGI (artificial general intelligence), which is often defined as an AI system that would be as intelligent as an individual person. LeCun has gone on record as being skeptical that current approaches to AI, built around large language models (LLMs), will ever get to human-level intelligence.
In April, LeCun told Fortune that a spate of high-profile departures from FAIR, including that of former FAIR head Joelle Pineau, was not a sign of the lab’s “dying a slow death.” Instead, he said, it was a “new beginning” for FAIR, refocusing on the “ambitious and long-term goal of what we call AMI (advanced machine intelligence).”
Aside from FAIR, Meta CEO Mark Zuckerberg has spent billions on generative AI development in a bid to catch up to OpenAI, following the launch of that company’s wildly popular ChatGPT in November 2022. Zuckerberg rebuilt the entire company around the technology and delivered highly successful open-source AI models, branded as Llama, in 2023 and 2024. The Llama models helped Meta recover from an underwhelming pivot to the metaverse.
But Meta’s latest AI model, Llama 4, which was released in April 2025, was considered a flop. The model’s debut was marred by controversy around a perceived rushed release, a lack of transparency, possibly inflated performance metrics, and indications that Meta was failing to keep pace with open-source AI rivals like China’s DeepSeek.
For the past year, Meta’s been hemorrhaging top AI talent. Three top Meta AI researchers (Devi Parikh, Abhishek Das, and Dhruv Batra) left a year ago to found Yutori, a startup focused on AI agents. Damien Sereni, an engineering leader at Meta who led the team working on PyTorch, a framework underpinning most of today’s top LLMs, recently left the company. Boris Cherny, a software engineer, left Meta last year for Anthropic, where he created Claude Code. And Erik Meijer, a former Meta engineering leader, told Fortune recently that he has heard that several developers from PyTorch have recently left to join former OpenAI CTO Mira Murati’s Thinking Machines Lab.
Meta’s move to bring in Wang, along with a number of other Scale employees, while simultaneously investing in Scale, follows what has, over the past 18 months, become a standard playbook for big tech companies looking to grab AI know-how from startups. Microsoft used a similar deal structure, which stops short of a full acquisition yet still amasses talent and technical IP, to bring in Mustafa Suleyman from Inflection. Amazon then used the arrangement to hire key talent from Adept AI and Google used it to rehire Character AI cofounder Noam Shazeer. Because the deals are not structured as acquisitions, it is more difficult for antitrust regulators to block them.
It remains unclear whether Meta will be able to declare the Scale deal as a big win. It’s also not yet certain whether Yann LeCun will find himself marginalized within the Meta research ecosystem. But one big rising power player is undeniable: Alexandr Wang.
Wang became a billionaire with Scale by providing a global army of contractors that could label the data that companies including Meta and OpenAI use to train and improve their AI models. While Scale went on to help companies make custom AI applications, its core data business remains its biggest moneymaker. When Fortune spoke to Wang a year ago, he said that data was far from being commoditized for AI. “It’s a pivotal moment for the industry,” he said. “I think we are now entering a phase where further improvements and further gains from the models are not going to be won easily. They’re going to require increasing investments and are gonna require innovations in computation, efficient algorithms, and data. Our leg of that stool is to ensure that we continue innovating on data.”
Now, with a potential Meta investment, Wang’s efforts are paying off big time. Zuckerberg can only hope the deal works as well for him as it has for Wang.
This story was originally featured on Fortune.com
© Bloomberg / Getty Images
AI is transforming how enterprise software gets bought—not by replacing users, but by becoming one.
The debate around AI and the workplace often centers on labor displacement: Will it replace workers? Where will it fall short? And indeed, some “AI-first” experiments have produced mixed results—Klarna reversed course on customer service automation, while Duolingo faced public backlash for an AI-focused growth strategy.
These outcomes complicate our understanding of Microsoft’s recent efficiency-driven layoffs. Unlike a premature overcommitment to automation (à la Klarna), Microsoft is restructuring to operate as “customer zero” for its own enterprise AI tools, fundamentally changing how the computing giant writes code, ships products, and supports clients. It’s a strategic shot in the arm—a painful one—that reveals what’s coming next: AI agents built not just to automate outcomes, but to make decisions about the tools, processes, and infrastructure used along the way.
In the past, enterprise software was chosen through a familiar dance: evaluation, demos, stakeholder alignment, and procurement. But today, AI agents are building applications, provisioning infrastructure, and selecting tools—autonomously, and at scale. Ask an agent to spin up a customer feedback portal, and it might choose Next.js for the frontend, Neon for the cloud database, Vercel for hosting, and Clerk for authentication as a service. No human has to Google options, compare vendors, or meet with salespeople. The agent simply acts.
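A rough, hypothetical sketch of what that autonomous selection step could look like follows; the catalog, scoring function, and provision_service helper are placeholders for illustration, not real vendor or agent-framework APIs.

```python
# Hypothetical sketch of agentic tool selection as described above. The
# catalog, scorer, and provision_service() are illustrative assumptions.

CATALOG = {
    "frontend": ["Next.js", "SvelteKit"],
    "database": ["Neon", "Supabase"],
    "hosting": ["Vercel", "Fly.io"],
    "auth": ["Clerk", "Auth0"],
}


def choose_stack(task: str, score) -> dict:
    """Pick one service per category using whatever ranking the agent applies."""
    return {
        category: max(options, key=lambda option: score(task, option))
        for category, options in CATALOG.items()
    }


def provision_service(name: str) -> None:
    print(f"provisioning {name} ...")  # stand-in for a real provisioning call


if __name__ == "__main__":
    # Trivial stand-in scorer; a real agent would ask an LLM to rank options.
    stack = choose_stack("customer feedback portal", score=lambda task, option: len(option))
    for service in stack.values():
        provision_service(service)
```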
Internal telemetry from Neon shows that AI agents now create databases at 4 times the rate of human developers. And that pattern is extending beyond engineering. Agents will soon assemble sales pipelines, orchestrate onboarding flows, manage IT operations—and, along the way, select the tools that work.
Microsoft’s sales team re-org further hints at how this procurement will occur in the future. Corporate customers now have a single point of contact at Microsoft, rather than several salespeople for different products. In part, this may be because agentic AI tools will select vendors on their own—and copilots don’t need five sales reps. The agent won’t pause to ask, “Do you have a preferred vendor?” It will reason about the task at hand and continue on its code path, hurtling toward an answer.
This evolution from executor to decision-maker is powered by the human-in-the-loop (HITL) approach to AI model training.
For years, enterprise AI has been limited by expensive labeling processes, fragile automation, and underutilized human expertise, leading to failure in nuanced, high-stakes environments like finance, customer service, and health care.
HITL systems change that by embedding AI directly into the workforce. During real-time work, agents observe GUI-level interactions—clicks, edits, approvals—capturing rich signals from natural behavior. These human corrections serve as high-quality validation points, boosting operational accuracy to ~99% without interrupting the workflow. The result is a continuous learning loop where agents don’t just follow instructions, they learn how the work gets done. This also creates dynamic, living datasets tailored to real business processes within the organization.
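The loop described here can be sketched in a few lines; the event fields and training buffer below are hypothetical stand-ins for whatever telemetry a real HITL system would capture, not an actual product API.

```python
# Hypothetical sketch of human-in-the-loop capture: GUI-level events are
# logged, and any human correction of an agent action becomes a labeled
# example for later retraining. All names are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class InteractionEvent:
    action: str        # e.g. "approve_invoice"
    agent_output: str  # what the agent proposed
    human_output: str  # what the human actually did


@dataclass
class TrainingBuffer:
    examples: list = field(default_factory=list)

    def record(self, event: InteractionEvent) -> None:
        # A human correction is a high-quality validation point; an unchanged
        # action silently confirms the agent's behavior.
        label = "corrected" if event.agent_output != event.human_output else "confirmed"
        self.examples.append((event, label))


buffer = TrainingBuffer()
buffer.record(InteractionEvent("approve_invoice", "reject", "approve"))
buffer.record(InteractionEvent("tag_ticket", "billing", "billing"))
print(len(buffer.examples), "examples queued for retraining")
```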
This shift offers entirely new market opportunities.
On the development front, traditional supervised learning models are giving way to embedded learning systems that harvest real-world interaction signals, enabling cheaper, faster, more adaptive AI. This further offers a massive new training set for agentic AI systems without incurring the cost of hiring human knowledge workers to shepherd the AI. With lower development costs, high fidelity, and better dynamism, the next generation of copilots will blend automation with real-time human judgment, dominating verticals like customer service, security, sales, and internal operations.
Accordingly, these tools will require infrastructure for real-time monitoring, GUI-level interaction capture, dynamic labeling, and automated retraining—creating further platform opportunities.
While the internet abounds with zippy coverage of savvy employees “AI hacking” their workflows, the reality is that most workers lack that kind of product-development acumen. (And the same goes for their bosses.) Save for a small subset of the business world possessing rare tech fluency, most corporate outfits will see greater value in buying AI tools—those built, customized, and serviced by world-class talent to solve specific workflows.
Microsoft’s sense of urgency comes from its understanding that the question of “build or buy” is changing quickly. This “eureka” moment, technologically speaking, is what’s catalyzing an operator pivot at enterprise AI outfits. HITL represents a move away from read/write data integrations toward a richer, more dynamic GUI-interaction-based intelligence layer—one that mirrors how work actually gets done in the enterprise.
We’re seeing the beginning of a race toward enterprise AI dominance among the goliaths of the tech world. Signals like OpenAI’s investments into application-layer experiences (shopping agents, its acquisition of agentic developer Windsurf) highlight a clear trend: Mastering human-application-interaction capture is becoming the foundation for scalable agentic automation. As companies like Microsoft, OpenAI, and others absorb critical data environments and restructure themselves to serve as “customer zero,” they’re treating AI as the new chief procurement officer of their own ecosystems. These companies see the value of selling shovels in a gold rush—and know AI is finally sharp enough to start digging.
Tomasz Tunguz is the founder and general manager of Theory Ventures. He served as managing partner at Redpoint Ventures for 14 years.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
This story was originally featured on Fortune.com
© JASON REDMOND/AFP via Getty Images
After stumbling out of the starting gate in Big Tech’s pivotal race to capitalize on artificial intelligence, Apple tried to regain its footing Monday during an annual developers conference that focused mostly on incremental advances and cosmetic changes in its technology.
The presummer rite, which attracted thousands of developers from nearly 60 countries to Apple’s Silicon Valley headquarters, was subdued compared with the feverish anticipation that surrounded the event in the last two years.
Apple highlighted plans for more AI tools designed to simplify people’s lives and make its products even more intuitive. It also provided an early glimpse at the biggest redesign of its iPhone software in a decade. In doing so, Apple executives refrained from issuing bold promises of breakthroughs that punctuated recent conferences, prompting CFRA analyst Angelo Zino to deride the event as a “dud” in a research note.
In 2023, Apple unveiled a mixed-reality headset that has been little more than a niche product, and at last year’s WWDC it trumpeted its first major foray into the AI craze with an array of new features highlighted by the promise of a smarter and more versatile version of its virtual assistant, Siri — a goal that has yet to be realized.
“This work needed more time to reach our high-quality bar,” Craig Federighi, Apple’s top software executive, said Monday at the outset of the conference. The company didn’t provide a precise timetable for when Siri’s AI upgrade will be finished but indicated it won’t happen until next year at the earliest.
“The silence surrounding Siri was deafening,” said Forrester Research analyst Dipanjan Chatterjee. “No amount of text corrections or cute emojis can fill the yawning void of an intuitive, interactive AI experience that we know Siri will be capable of when ready. We just don’t know when that will happen. The end of the Siri runway is coming up fast, and Apple needs to lift off.”
The showcase unfolded amid nagging questions about whether Apple has lost some of the mystique and innovative drive that has made it a tech trendsetter during its nearly 50-year history.
Instead of making a big splash as it did with the Vision Pro headset and its AI suite, Apple took a mostly low-key approach that emphasized its effort to spruce up the look of its software with a new design called “Liquid Glass” while also unveiling a new hub for its video games and new features like a “Workout Buddy” to help manage physical fitness.
Apple executives promised to make its software more compatible with the increasingly sophisticated computer chips that have been powering its products while also making it easier to toggle between the iPhone, iPad, and Mac.
“Our product experience has become even more seamless and enjoyable,” Apple CEO Tim Cook told the crowd as the 90-minute showcase wrapped up.
IDC analyst Francisco Jeronimo said Apple seemed to be largely using Monday’s conference to demonstrate the company still has a blueprint for success in AI, even if it’s going to take longer to realize the vision that was presented a year ago.
“This year’s event was not about disruptive innovation, but rather careful calibration, platform refinement and developer enablement —positioning itself for future moves rather than unveiling game-changing technologies,” Jeronimo said.
Besides redesigning its software, Apple will switch to a method that automakers have used to telegraph their latest car models by linking them to the year after they first arrive at dealerships. That means the next version of the iPhone operating system due out this autumn will be known as iOS 26 instead of iOS 19 — as it would be under the previous naming approach that has been used since the device’s 2007 debut.
The iOS 26 upgrade is expected to be released in September around the same time Apple traditionally rolls out the next iPhone models.
Apple opened the proceedings with a short video clip featuring Federighi speeding around a track in a Formula 1 race car. Although it was meant to promote the June 27 release of the Apple film, “F1” starring Brad Pitt, the segment could also be viewed as an unintentional analogy to the company’s attempt to catch up to the rest of the pack in AI technology.
While some of the new AI tricks compatible with the latest iPhones began rolling out late last year as part of free software updates, the delays in a souped-up Siri became so glaring that the chastened company stopped promoting it in its marketing campaigns earlier this year.
While Apple has been struggling to make AI that meets its standards, the gap separating it from other tech powerhouses is widening. Google keeps packing more AI into its Pixel smartphone lineup while introducing more of the technology into its search engine to dramatically change the way it works. Samsung, Apple’s biggest smartphone rival, is also leaning heavily into AI. Meanwhile, OpenAI, the maker of ChatGPT, recently struck a deal that will bring former Apple design guru Jony Ive into the fold to work on a new device expected to compete against the iPhone.
Besides grappling with innovation challenges, Apple also faces regulatory threats that could siphon away billions of dollars in revenue that help finance its research and development. A federal judge is currently weighing whether proposed countermeasures to Google’s illegal monopoly in search should include a ban on long-running deals worth $20 billion annually to Apple while another federal judge recently banned the company from collecting commissions on in-app transactions processed outside its once-exclusive payment system.
On top of all that, Apple has been caught in the crosshairs of President Donald Trump’s trade war with China, a key manufacturing hub for the Cupertino, California, company. Cook successfully persuaded Trump to exempt the iPhone from tariffs during the president’s first administration, but he has had less success during Trump’s second term, as the administration seems more determined to prod Apple to make its products in the U.S.
The multidimensional gauntlet facing Apple is spooking investors, causing the company’s stock price to plunge by 20% so far this year — a decline that has erased about $750 billion in shareholder wealth. After beginning the year as the most valuable company in the world, Apple now ranks third behind longtime rival Microsoft, another AI leader, and AI chipmaker Nvidia.
Apple’s shares closed down by more than 1% on Monday — an early indication the company’s latest announcements didn’t inspire investors.
This story was originally featured on Fortune.com
© Jeff Chiu—AP
On April 28, Duolingo cofounder and CEO Luis von Ahn posted an email on LinkedIn that he had just sent to all employees at his company. In it, he outlined his vision for the language-learning app to become an “AI-first” organization, including phasing out contractors if AI could do their work, and giving a team the ability to hire a new person only if they were not able to automate their work through AI.
The response was swift and scathing. “This is a disaster. I will cancel my subscription,” wrote one commenter. “AI first means people last,” wrote another. And a third summed up the general feeling of critics when they wrote: “I can’t support a company that replaces humans with AI.”
A week later, von Ahn walked back his initial statements, clarifying that he does not “see AI replacing what our employees do” but instead views it as a “tool to accelerate what we do, at the same or better level of quality.”
In a new interview, von Ahn says that he was shocked by the backlash he received. “I did not expect the amount of blowback,” he recently told the Financial Times. While he says he should have been more clear about his AI goals, he also feels that the negativity stems from a general fear that AI will replace workers. “Every tech company is doing similar things, [but] we were open about it,” he said.
Von Ahn, however, isn’t alone. Other CEOs have also been forthright about how their AI aspirations will affect their human workforce. The CEO of Klarna, for example, said in August of last year that the company had cut hundreds of jobs thanks to AI. Last month, he added that the new tech had helped the company shrink its workforce by 40%.
Anxiety for workers around the potential that they will be replaced by AI, however, is high. Around 40% of workers familiar with ChatGPT in 2023 were worried that the technology would replace them, according to a Harris poll done on behalf of Fortune. And a Pew study from earlier this year found that around 32% of workers fear AI will lead to fewer opportunities for them. Another 52% were worried about how AI could potentially impact the workplace in the future.
The leaders of AI companies themselves aren’t necessarily offering words of comfort to these worried workers. The Anthropic CEO, Dario Amodei, told Axios last month that AI could eliminate approximately half of all entry-level white-collar jobs within the next five years. He argued that there’s no turning back now.
“It sounds crazy, and people just don’t believe it,” he said. “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming.”
This story was originally featured on Fortune.com
© Getty Images / Bloomberg
Klarna’s CEO has predicted that a recession could be around the corner as companies around the globe—including his own—reduce the headcount of well-paid, white-collar jobs and replace them with AI.
Sebastian Siemiatkowski, the boss of the Swedish Buy Now, Pay Later group, is once again sounding a pessimistic tone on AI’s impact on the workforce. But as he embraces the potential positive effects of AI on his own bottom line, he may have to contend with the negative fallout at a company that has flirted with growing credit losses in the last year.
While he admitted that “making future statements about macroeconomics is like horoscopes,” Siemiatkowski’s well-documented feelings about AI’s impact on the labor market leave him making a pessimistic prediction about the economy.
“My suspicion…is that there will be an implication for white-collar jobs. And when that happens, that usually leads to at least a recession in the short term. And I think, unfortunately, I don’t see how we could avoid that with what’s happening from a technology perspective,” Siemiatkowski said on the Times Tech Podcast.
Siemiatkowski has long warned of AI’s disruptive effect on the labor market, using his experience of shifting recruiting practices at Klarna to support his argument that the technology will replace roles.
He told the podcast that the company’s headcount had fallen from 5,500 people to 3,000 in the space of two years. Speaking in August last year, Siemiatkowski said his ambition was to eventually reduce that figure to 2,000 through workplace norms like attrition rather than by engaging in layoffs.
In February last year, Klarna announced that its AI chatbot was doing the work of 700 customer service staff, work previously handled by agents employed through the French outsourcing firm Teleperformance.
While Siemiatkowski has faced criticism for his willingness to talk about AI’s disruptive potential, he indicated he felt it was more of a duty to be frank about the technology.
“Many people in the tech industry, especially CEOs, tend to downplay the consequences of AI on jobs, white-collar jobs in particular. I don’t want to be one of them.”
Indeed, Siemiatkowski implied that if he added up the headcount at the companies whose CEOs had called him to ask about making “efficiencies,” that figure in itself would point to a seismic economic event.
An AI-induced recession would combine a number of brewing themes for the Swedish tech group. Siemiatkowski’s comments come as the group reported widening credit losses, which rose by 17% to $136 million last year.
Siemiatkowski explained the losses as a result of the group taking on more customers, naturally leading to a rise in defaults. On a relative basis, the percentage increase in defaults was small, Siemiatkowski said.
The Swede added that because Klarna customers’ average indebtedness was £100, they were more likely to pay back their loans compared with typical credit card debt, which he said was £5,000. The typical U.K. credit card holder has an outstanding credit balance closer to £1,800, while in the U.S., the average is about $6,300.
Regardless of the variance, Siemiatkowski says the difference means customers are more likely to pay off their Klarna debts.
“We are very unsensitive to macroeconomic shifts. We can still see them, but they’re much less profound than if you’re a big bank, you have tons of mortgages. And for people to really increase losses, credit losses, what has to happen is people have to lose jobs.”
Despite that, predictions of mass layoffs among white-collar workers could point to higher risk for the company’s credit business.
While there wasn’t any sign of a recession currently, Siemiatkowski did observe falling consumer sentiment, which would impact spending.
Siemiatkowski’s views on AI in the labor force have evolved over time. Speaking to Bloomberg in May, Siemiatkowski was reported to have said the company was embarking on a recruitment drive, contrary to his previous statements about a workforce reduction.
Speaking with the Times, Siemiatkowski clarified that the company needed different types of workers to handle more complex customer service requests.
“When we started applying AI in our customer service, we realized that there will be a higher value to human connection,” he said.
This story was originally featured on Fortune.com
© John Phillips/Getty Images for SXSW London
Tools for Humanity, a startup co-founded by OpenAI’s Sam Altman, is rolling out its eyeball-scanning Orb devices to the UK as part of a global expansion of the company’s novel identification services.
Starting this week, people in London will be able to scan their eyes using Tools for Humanity’s proprietary Orb device, the company said in a statement on Monday. The service will roll out to Manchester, Birmingham, Cardiff, Belfast and Glasgow in the coming months.
The spherical Orbs will be at dedicated premises in shopping malls and on high streets, said Damien Kieran, chief legal and privacy officer at Tools for Humanity. Later, the company plans to partner with major retailers to provide self-serve Orbs that people can use as they would an ATM, Kieran added.
The company, led by co-founder and Chief Executive Officer Alex Blania, has presented its eye-scanning technology as a way for people to prove they are human at a time when artificial intelligence systems are becoming more adept at mimicking people. AI bots and deepfakes, including those enabled by generative AI tools created by Altman’s OpenAI, pose a range of security threats, including identity theft, misinformation and social engineering.
The Orb scan creates a digital credential, called World ID, based on the unique properties of a person’s iris. Those who agree to the scan can also receive a cryptocurrency token called Worldcoin through the company.
Tools for Humanity has faced regulatory scrutiny over privacy concerns about its technology in several markets, including investigations in Germany and Argentina, as well as bans in Spain and Hong Kong. The company said it doesn’t store any personal information or biometric data and that the verification information remains on the World ID holder’s mobile phone.
Kieran said Tools for Humanity had been meeting with data regulators including the UK’s Information Commissioner’s Office and privacy advocates ahead of the planned expansion.
So far, about 13 million people in countries including Mexico, Germany, Japan, Korea, Portugal and Thailand have verified their identities using Tools for Humanity’s technology, the company said. In April, the company announced plans to expand to six US cities.
There are 1,500 Orbs in circulation, Kieran said, but the company plans to ramp up production to ship 12,000 more over the next 12 months.
This story was originally featured on Fortune.com
© Christina House / Los Angeles Times via Getty Images
On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released "Claude Gov" models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.
The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, "refuse less" when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.
Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.
© Anthropic
Meta Platforms Inc. is in talks to make a multibillion-dollar investment into artificial intelligence startup Scale AI, according to people familiar with the matter.
The financing could exceed $10 billion in value, some of the people said, making it one of the largest private company funding events of all time.
The terms of the deal are not finalized and could still change, according to the people, who asked not to be identified discussing private information.
A representative for Scale did not immediately respond to requests for comment. Meta declined to comment.
Scale AI, whose customers include Microsoft Corp. and OpenAI, provides data labeling services to help companies train machine-learning models and has become a key beneficiary of the generative AI boom. The startup was last valued at about $14 billion in 2024, in a funding round that included backing from Meta and Microsoft. Earlier this year, Bloomberg reported that Scale was in talks for a tender offer that would value it at $25 billion.
This would be Meta’s biggest-ever external AI investment, and a rare move for the company. The social media giant has until now mostly depended on its in-house research, plus a more open development strategy, to make improvements in its AI technology. Meanwhile, Big Tech peers have invested heavily: Microsoft has put more than $13 billion into OpenAI while both Amazon.com Inc. and Alphabet Inc. have put billions into rival Anthropic.
Part of those companies’ investments has been through credits to use their computing power. Meta doesn’t have a cloud business, and it’s unclear what form Meta’s investment will take.
Chief Executive Officer Mark Zuckerberg has made AI Meta’s top priority, and said in January that the company would spend as much as $65 billion on related projects this year.
The company’s push includes an effort to make Llama the industry standard worldwide. Meta’s AI chatbot — already available on Facebook, Instagram and WhatsApp — is used by 1 billion people per month.
Scale, co-founded in 2016 by CEO Alexandr Wang, has been growing quickly: The startup generated revenue of $870 million last year and expects sales to more than double to $2 billion in 2025, Bloomberg previously reported.
Scale plays a key role in making AI data available for companies. Because AI is only as good as the data that goes into it, Scale uses scads of contract workers to tidy up and tag images, text and other data that can then be used for AI training.
Scale and Meta share an interest in defense tech. Last week, Meta announced a new partnership with defense contractor Anduril Industries Inc. to develop products for the US military, including an AI-powered helmet with virtual and augmented reality features. Meta has also granted approval for US government agencies and defense contractors to use its AI models.
The company is already partnering with Scale on a program called Defense Llama — a version of Meta’s Llama large language model intended for military use.
Scale has increasingly been working with the US government to develop AI for defense purposes. Earlier this year the startup said it won a contract with the Defense Department to work on AI agent technology. The company called the contract “a significant milestone in military advancement.”
This story was originally featured on Fortune.com
© Drew Angerer—Getty Images
One of the biggest beneficiaries of the AI revolution warned that the technology could also create massive fissures in society—unless the industry works hard to prevent them.
Alex Karp, CEO of data-mining software company Palantir, was asked on CNBC on Thursday about AI’s implications for employment.
“Those of us in tech cannot have a tin ear to what this is going to mean for the average person,” he replied.
That comes as AI increasingly gets incorporated into the daily tasks of workers, boosting their productivity and efficiency. At the same time, there are also signs that AI is shrinking opportunities for young workers in entry-level jobs that traditionally have been stepping stones for launching careers.
Meanwhile, Palantir has been at the forefront of using AI at the enterprise level. The company is known for putting its AI-powered platforms to work in the defense and intelligence sectors, but it has also been expanding in the commercial space. Most recently, it partnered with TeleTracking, a provider of operations platforms for hospitals and health systems.
On Thursday, Karp said the kind of AI that Palantir is doing can be “net accretive to the workforce in America,” but only if “we work very, very hard at it.”
He pointed out that just because it can happen doesn’t mean it will happen. The industry has to make it so.
“We have to will it to be, because otherwise we’re going to have deep societal upheavals that I think many in our elite are just really ignoring,” Karp said.
The warning is especially notable coming from a leader in the AI field. But Karp has also urged the tech sector to take on bigger problems.
In a recent Atlantic essay adapted from their book The Technological Republic, Karp and Nicholas Zamiska, Palantir’s head of corporate affairs and legal counsel to the office of the CEO, blasted Silicon Valley for focusing on “trivial yet solvable inconveniences” and abandoning a long history of working with the government to tackle more pressing national issues.
Others in the AI field have also offered dire predictions about AI and the workforce lately. Last month, Anthropic CEO Dario Amodei said AI could wipe out roughly 50% of all entry-level white-collar jobs.
In an interview with Axios, he said that displacement could cause unemployment to spike to between 10% and 20%. The latest jobs report on Friday put the rate at 4.2%.
“Most of them are unaware that this is about to happen,” Amodei said. “It sounds crazy, and people just don’t believe it … We, as the producers of this technology, have a duty and an obligation to be honest about what is coming.”
And OpenAI CEO Sam Altman said this past week that AI agents are like interns, predicting that in the next year they can “help us discover new knowledge, or can figure out solutions to business problems that are very nontrivial.”
Meanwhile, Nvidia CEO Jensen Huang said at the Milken Institute’s Global Conference last month that while workers may not lose their jobs to AI, they will lose them to “someone who uses AI.”
This story was originally featured on Fortune.com
© Kevin Dietsch—Getty Images
It seems like hardly a day goes by anymore without a new version of Google's Gemini AI landing, and sure enough, Google is rolling out a major update to its most powerful 2.5 Pro model. This release is aimed at fixing some problems that cropped up in an earlier Gemini Pro update, and the word is, this version will become a stable release that comes to the Gemini app for everyone to use.
The previous Gemini 2.5 Pro release, known as the I/O Edition, or simply 05-06, was focused on coding upgrades. Google claims the new version is even better at generating code, with a new high score of 82.2 percent in the Aider Polyglot test. That beats the best from OpenAI, Anthropic, and DeepSeek by a comfortable margin.
While the general-purpose Gemini 2.5 Flash has left preview, the Pro version is lagging behind. In fact, the last several updates have attracted some valid criticism of 2.5 Pro's performance outside of coding tasks since the big 03-25 update. Google's Logan Kilpatrick says the team has taken that feedback to heart and that the new model "closes [the] gap on 03-25 regressions." For example, users will supposedly see more creativity with better formatting of responses.
© Ryan Whitwam
On the heels of an OpenAI controversy over deleted posts, Reddit sued Anthropic on Wednesday, accusing the AI company of "intentionally" training AI models on the "personal data of Reddit users"—including their deleted posts—"without ever requesting their consent."
Calling Anthropic two-faced for depicting itself as a "white knight of the AI industry" while allegedly lying about AI scraping, Reddit painted Anthropic as the worst among major AI players. While Anthropic rivals like OpenAI and Google paid Reddit to license data—and, crucially, agreed to "Reddit’s licensing terms that protect Reddit and its users’ interests and privacy" and require AI companies to respect Redditors' deletions—Anthropic wouldn't participate in licensing talks, Reddit alleged.
"Unlike its competitors, Anthropic has refused to agree to respect Reddit users’ basic privacy rights, including removing deleted posts from its systems," Reddit's complaint said.
© SOPA Images / Contributor | LightRocket