Normal view

Received before yesterday

Mark Zuckerberg has an AI talent problem—but money alone is unlikely to solve it

12 June 2025 at 16:49

Welcome to Eye on AI! In this edition…Disney and Universal join forces in lawsuit against AI image creator Midjourney…France’s Mistral gets a business boost thanks to fears over US AI dominance…Google names DeepMind’s Kavukcuoglu to lead AI-powered product development.

Mark Zuckerberg is rumored to be personally recruiting — reportedly at his homes in Lake Tahoe and Palo Alto — for a new 50-person “Superintelligence” AI team at Meta meant to gain ground on rivals like Google and OpenAI. The plan includes hiring a new head of AI research to work alongside Scale AI CEO Alexandr Wang, who is being brought in as part of a plan to invest up to $15 billion for a 49% stake in the training data company.

On the surface, it might appear that Zuckerberg could easily win this war for AI talent by writing the biggest checks.

And the checks Zuck is writing are, by all accounts, huge. Deedy Das, a VC at Menlo Ventures, told me that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor,” he said (a number that one AI researcher told me was “not outrageous at all” and “is likely low in certain sub-areas like LLM pre-training,” though most of the compensation would be in the form of equity). Later, on LinkedIn Das went further, claiming that for candidates working at a big AI lab, “Zuck is personally negotiating $10M+/yr in cold hard liquid money. I’ve never seen anything like it.”

Some of these pro athlete-level offers are working. According to Bloomberg, Jack Rae, a principal researcher at Google DeepMind, is expected to join Meta’s “superintelligence” team, while it said Meta has also recruited Johan Schalkwyk, a machine learning lead at AI voice startup Sesame AI. 

Money isn’t everything 

But money alone may not be enough to build the kind of AI model shop Meta needs. According to Das, several researchers have turned down Zuckerberg’s offer to take roles at OpenAI and Anthropic. 

There are several issues at play: For one thing, there simply aren’t that many top AI researchers, and many of them are happily ensconced at OpenAI, Anthropic, or Google DeepMind with high six- or low seven-figure salaries and access to all the computing capacity they could want. In a March Fortune article, I argued that companies are tracking top AI researchers and engineers like prized assets on the battlefield. The most intense fight is over a small pool of AI research scientists — estimated to be fewer than 1,000 individuals worldwide, according to several industry insiders Fortune spoke with — with the qualifications to build today’s most advanced large language models. 

“In general, all these companies very closely watch each others’ compensation, so on average it is very close,” said Erik Meijer, a former senior director of engineering at Meta who left last year. However, he added that Meta uses “additional equity” which is a “special kind of bonus to make sure compensation is not the reason to leave.” 

Beyond the financial incentives, personal ties to leading figures and adherence to differing philosophies about artificial intelligence have lent a tribal element to Silicon Valley’s AI talent wars. More than 19 OpenAI employees followed Mira Murati to her startup Thinking Machines earlier this year, for example. Anthropic was founded in 2021 by former OpenAI employees who disagreed with their employer’s strategic direction. 

Das, however, said it really depends on the person. “I’d say a lot more people are mercenary than they let on,” he said. “People care about working with smart people and they care about working on products that actually work but they can be bought out if the price is right.” But for many, “they have too much money already and can’t be bought.” 

Meta’s layoffs and reputation may drive talent decisions

Meta’s own sweeping layoffs earlier this year could also sour the market for AI talent, some told me. “I’ve decided to raise the bar on performance management and move out low-performers faster,” said Zuckerberg in an internal memo back in January. The memo said Meta planned to increasingly focus on developing AI, smart glasses and the future of social media. Following the memo, about 3,600 employees were laid off—roughly 5% of Meta’s workforce

One AI researcher told me that he had heard about Zuckerberg’s high-stakes offers, but that people don’t trust Meta after the “weedwacker” layoffs. 

Meta’s existing advanced AI research team FAIR (Fundamental AI Research) has increasingly been sidelined in the development of Meta’s Llama AI models and has lost key researchers. Joelle Pineau, who had been leading FAIR, announced her departure in April. Most of the researchers who developed Meta’s original Llama model have left, including two cofounders of French AI startup Mistral. And a trio of top AI researchers left a year ago to found AI agent startup Yutori. 

Finally, there are hard-to-quantify issues, like prestige. Meijer expressed doubt that Meta could produce AI products that experts in the field would perceive as embodying breakthrough capabilities. “The bitter truth is that Meta does not have any leaders that are good at bridging research and product,” he said. “For a long time Reality Labs and FAIR could do their esoteric things without being challenged. But now things are very different and companies like Anthropic, OpenAI, Google, Mistral, DeepSeek excel at pushing out research into production at record pace, and Meta is left standing on the sidelines.“

In addition, he said, huge salaries and additional equity “will not stick if the company feels unstable or if it is perceived by peers as a black mark on your resume. Prestige compounds, that is why top people self-select into labs like DeepMind, OpenAI, or Anthropic. Aura is not for sale.” 

That’s not to say that Zuck’s largesse won’t land him some top AI talent. The question is whether it will be enough to deliver the AI product wins Meta needs.

With that, here’s the rest of the AI news.

Sharon Goldman
[email protected]
@sharongoldman

This story was originally featured on Fortune.com

© DAVID PAUL MORRIS—Bloomberg/Getty Images

Meta CEO Mark Zuckerberg.

Exclusive: New Microsoft Copilot flaw signals broader risk of AI agents being hacked—‘I would be terrified’

11 June 2025 at 12:00

Microsoft 365 Copilot, the AI tool built into Microsoft Office workplace applications including Word, Excel, Outlook, PowerPoint, and Teams, harbored a critical security flaw that, according to researchers, signals a broader risk of AI agents being hacked.

The flaw, revealed today by AI security startup Aim Security and shared exclusively in advance with Fortune, is the first known “zero-click” attack on an AI agent, an AI that acts autonomously to achieve specific goals. The nature of the vulnerability means that the user doesn’t need to click anything or interact with a message for an attacker to access sensitive information from apps and data sources connected to the AI agent. 

In the case of Microsoft 365 Copilot, the vulnerability lets a hacker trigger an attack simply by sending an email to a user, with no phishing or malware needed. Instead, the exploit uses a series of clever techniques to turn the AI assistant against itself. 

Microsoft 365 Copilot acts based on user instructions inside Office apps to do things like access documents and produce suggestions. If infiltrated by hackers, it could be used to target sensitive internal information such as emails, spreadsheets, and chats. The attack bypasses Copilot’s built-in protections, which are designed to ensure that only users can access their own files—potentially exposing proprietary, confidential, or compliance-related data.

The researchers at Aim Security dubbed the flaw “EchoLeak.” Microsoft told Fortune that it has already fixed the issue in Microsoft 365 Copilot and that its customers were unaffected. 

“We appreciate Aim for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted,” a Microsoft spokesperson said in a statement. “We have already updated our products to mitigate this issue, and no customer action is required. We are also implementing additional defense-in-depth measures to further strengthen our security posture.”

The Aim researchers said that EchoLeak is not just a run-of-the-mill security bug. It has broader implications beyond Copilot because it stems from a fundamental design flaw in LLM-based AI agents that is similar to software vulnerabilities in the 1990s, when attackers began to be able to take control of devices like laptops and mobile phones. 

Adir Gruss, cofounder and CTO of Aim Security, told Fortune that he and his fellow researchers took about three months to reverse engineer Microsoft 365 Copilot, one of the most widely used generative AI assistants. They wanted to determine whether something like those earlier software vulnerabilities lurked under the hood and then develop guardrails to mitigate against them. 

“We found this chain of vulnerabilities that allowed us to do the equivalent of the ‘zero click’ for mobile phones, but for AI agents,” he said. First, the attacker sends an innocent-seeming email that contains hidden instructions meant for Copilot. Then, since Copilot scans the user’s emails in the background, Copilot reads the message and follows the prompt—digging into internal files and pulling out sensitive data. Finally, Copilot hides the source of the instructions, so the user can’t trace what happened. 

After discovering the flaw in January, Gruss explained that Aim contacted the Microsoft Security Response Center, which investigates all reports of security vulnerabilities affecting Microsoft products and services. “They want their customers to be secure,” he said. “They told us this was super groundbreaking for them.”

However, it took five months for Microsoft to address the issue, which, Gruss said, “is on the (very) high side of something like this.” One reason, he explained, is that the vulnerability is so new, and it took time to get the right Microsoft teams involved in the process and educate them about the vulnerability and mitigations.

Microsoft initially attempted a fix in April, Gruss said, but in May the company discovered additional security issues around the vulnerability. Aim decided to wait until Microsoft had fully fixed the flaw before publishing its research, in the hope that other vendors that might have similar vulnerabilities “will wake up.”

Gruss said the biggest concern is that EchoLeak could apply to other kinds of agents—from Anthropic’s MCP (Model Context Protocol), which connects AI assistants to other applications, to platforms like Salesforce’s Agentforce. 

If he led a company implementing AI agents right now, “I would be terrified,” Gruss said. “It’s a basic kind of problem that caused us 20, 30 years of suffering and vulnerability because of some design flaws that went into these systems, and it’s happening all over again now with AI.”

Organizations understand that, he explained, which may be why most have not yet widely adopted AI agents. “They’re just experimenting, and they’re super afraid,” he said. “They should be afraid, but on the other hand, as an industry we should have the proper systems and guardrails.”

Microsoft tried to prevent such a problem, known as an LLM scope violation vulnerability. It’s a class of security flaws in which the model is tricked into accessing or exposing data beyond what it’s authorized or intended to handle—essentially violating its “scope” of permissions. “They tried to block it in multiple paths across the chain, but they just failed to do so because AI is so unpredictable and the attack surface is so big,” Gruss said. 

While Aim is offering interim mitigations to clients adopting other AI agents that could be affected by the EchoLeak vulnerability, Gruss said the long-term fix will require a fundamental redesign of how AI agents are built. “The fact that agents use trusted and untrusted data in the same ‘thought process’ is the basic design flaw that makes them vulnerable,” he explained. “Imagine a person that does everything he reads—he would be very easy to manipulate. Fixing this problem would require either ad hoc controls, or a new design allowing for clearer separation between trusted instructions and untrusted data.” 

Such a redesign could be in the models themselves, Gruss said, citing active research into enabling the models to better distinguish between instructions and data. Or the applications the agents are built on top of could add mandatory guardrails for any agent. 

For now, “every Fortune 500 I know is terrified of getting agents to production,” he said, pointing out that Aim has previously done research on coding agents where the team was able to run malicious code on developers’ machines. “There are users experimenting, but these kind of vulnerabilities keep them up at night and prevent innovation.” 

This story was originally featured on Fortune.com

© FABRICE COFFRINI—AFP/Getty Images

Microsoft CEO Satya Nadella

Everyone’s using AI at work. Here’s how companies can keep data safe

11 June 2025 at 10:00

Companies across industries are encouraging their employees to use AI tools at work. Their workers, meanwhile, are often all too eager to make the most of generative AI chatbots like ChatGPT. So far, everyone is on the same page, right?

There’s just one hitch: How do companies protect sensitive company data from being hoovered up by the same tools that are supposed to boost productivity and ROI? After all, it’s all too tempting to upload financial information, client data, proprietary code, or internal documents into your favorite chatbot or AI coding tool, in order to get the quick results you want (or that your boss or colleague might be demanding). In fact, a new study from data security company Varonis found that shadow AI—unsanctioned generative AI applications—poses a significant threat to data security, with tools that can bypass corporate governance and IT oversight, leading to potential data leaks. The study found that nearly all companies have employees using unsanctioned apps, and nearly half have employees using AI applications considered high-risk. 

For information security leaders, one of the key challenges is educating workers about what the risks are and what the company requires. They must ensure that employees understand the types of data the organization handles—ranging from corporate data like internal documents, strategic plans, and financial records, to customer data such as names, email addresses, payment details, and usage patterns. It’s also critical to communicate how each type of data is classified—for example, whether it is public, internal-only, confidential, or highly restricted. Once this foundation is in place, clear policies and access boundaries must be established to protect that data accordingly.

Striking a balance between encouraging AI use and building guardrails

“What we have is not a technology problem, but a user challenge,” said James Robinson, chief information security officer at data security company Netskope. The goal, he explained, is to ensure that employees use generative AI tools safely—without discouraging them from adopting approved technologies.

“We need to understand what the business is trying to achieve,” he added. Rather than simply telling employees they’re doing something wrong, security teams should work to understand how people are using the tools, to make sure the policies are the right fit—or whether they need to be adjusted to allow employees to share information appropriately.

Jacob DePriest, chief information security officer at password protection provider 1Password, agreed, saying that his company is trying to strike a balance with its policies—to both encourage AI usage and also educate so that the right guardrails are in place. 

Sometimes that means making adjustments. For example, the company released a policy on the acceptable use of AI last year, part of the company’s annual security training. “Generally, it’s this theme of ‘Please use AI responsibly; please focus on approved tools; and here are some unacceptable areas of usage.’” But the way it was written caused many employees to be overly cautious, he said. 

“It’s a good problem to have, but CISOs can’t just focus exclusively on security,” he said. “We have to understand business goals and then help the company achieve both business goals and security outcomes as well. I think AI technology in the last decade has highlighted the need for that balance. And so we’ve really tried to approach this hand in hand between security and enabling productivity.” 

Banning AI tools to avoid misuse does not work

But companies who think banning certain tools is a solution, should think again. Brooke Johnson, SVP of HR and security at Ivanti, said her company found that among people who use generative AI at work, nearly a third keep their AI use completely hidden from management. “They’re sharing company data with systems nobody vetted, running requests through platforms with unclear data policies, and potentially exposing sensitive information,” she said in a message. 

The instinct to ban certain tools is understandable but misguided, she said. “You don’t want employees to get better at hiding AI use; you want them to be transparent so it can be monitored and regulated,” she explained. That means accepting the reality that AI use is happening regardless of policy, and conducting a proper assessment of which AI platforms meet your security standards. 

“Educate teams about specific risks without vague warnings,” she said. Help them understand why certain guardrails exist, she suggested, while emphasizing that it is not punitive. “It’s about ensuring they can do their jobs efficiently, effectively, and safely.” 

Agentic AI will create new challenges for data security

Think securing data in the age of AI is complicated now? AI agents will up the ante, said DePriest. 

“To operate effectively, these agents need access to credentials, tokens, and identities, and they can act on behalf of an individual—maybe they have their own identity,” he said. “For instance, we don’t want to facilitate a situation where an employee might cede decision-making authority over to an AI agent, where it could impact a human.” Organizations want tools to help facilitate faster learning and synthesize data more quickly, but ultimately, humans need to be able to make the critical decisions, he explained. 

Whether it is the AI agents of the future or the generative AI tools of today, striking the right balance between enabling productivity gains and doing so in a secure, responsible way may be tricky. But experts say every company is facing the same challenge—and meeting it is going to be the best way to ride the AI wave. The risks are real, but with the right mix of education, transparency, and oversight, companies can harness AI’s power—without handing over the keys to their kingdom.

This story was originally featured on Fortune.com

© Illustration by Simon Landrein

❌