Normal view

Received yesterday — 11 June 2025

Exclusive: New Microsoft Copilot flaw signals broader risk of AI agents being hacked—‘I would be terrified’

11 June 2025 at 12:00

Microsoft 365 Copilot, the AI tool built into Microsoft Office workplace applications including Word, Excel, Outlook, PowerPoint, and Teams, harbored a critical security flaw that, according to researchers, signals a broader risk of AI agents being hacked.

The flaw, revealed today by AI security startup Aim Security and shared exclusively in advance with Fortune, is the first known “zero-click” attack on an AI agent, an AI that acts autonomously to achieve specific goals. The nature of the vulnerability means that the user doesn’t need to click anything or interact with a message for an attacker to access sensitive information from apps and data sources connected to the AI agent. 

In the case of Microsoft 365 Copilot, the vulnerability lets a hacker trigger an attack simply by sending an email to a user, with no phishing or malware needed. Instead, the exploit uses a series of clever techniques to turn the AI assistant against itself. 

Microsoft 365 Copilot acts based on user instructions inside Office apps to do things like access documents and produce suggestions. If infiltrated by hackers, it could be used to target sensitive internal information such as emails, spreadsheets, and chats. The attack bypasses Copilot’s built-in protections, which are designed to ensure that only users can access their own files—potentially exposing proprietary, confidential, or compliance-related data.

The researchers at Aim Security dubbed the flaw “EchoLeak.” Microsoft told Fortune that it has already fixed the issue in Microsoft 365 Copilot and that its customers were unaffected. 

“We appreciate Aim for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted,” a Microsoft spokesperson said in a statement. “We have already updated our products to mitigate this issue, and no customer action is required. We are also implementing additional defense-in-depth measures to further strengthen our security posture.”

The Aim researchers said that EchoLeak is not just a run-of-the-mill security bug. It has broader implications beyond Copilot because it stems from a fundamental design flaw in LLM-based AI agents that is similar to software vulnerabilities in the 1990s, when attackers began to be able to take control of devices like laptops and mobile phones. 

Adir Gruss, cofounder and CTO of Aim Security, told Fortune that he and his fellow researchers took about three months to reverse engineer Microsoft 365 Copilot, one of the most widely used generative AI assistants. They wanted to determine whether something like those earlier software vulnerabilities lurked under the hood and then develop guardrails to mitigate against them. 

“We found this chain of vulnerabilities that allowed us to do the equivalent of the ‘zero click’ for mobile phones, but for AI agents,” he said. First, the attacker sends an innocent-seeming email that contains hidden instructions meant for Copilot. Then, since Copilot scans the user’s emails in the background, Copilot reads the message and follows the prompt—digging into internal files and pulling out sensitive data. Finally, Copilot hides the source of the instructions, so the user can’t trace what happened. 

After discovering the flaw in January, Gruss explained that Aim contacted the Microsoft Security Response Center, which investigates all reports of security vulnerabilities affecting Microsoft products and services. “They want their customers to be secure,” he said. “They told us this was super groundbreaking for them.”

However, it took five months for Microsoft to address the issue, which, Gruss said, “is on the (very) high side of something like this.” One reason, he explained, is that the vulnerability is so new, and it took time to get the right Microsoft teams involved in the process and educate them about the vulnerability and mitigations.

Microsoft initially attempted a fix in April, Gruss said, but in May the company discovered additional security issues around the vulnerability. Aim decided to wait until Microsoft had fully fixed the flaw before publishing its research, in the hope that other vendors that might have similar vulnerabilities “will wake up.”

Gruss said the biggest concern is that EchoLeak could apply to other kinds of agents—from Anthropic’s MCP (Model Context Protocol), which connects AI assistants to other applications, to platforms like Salesforce’s Agentforce. 

If he led a company implementing AI agents right now, “I would be terrified,” Gruss said. “It’s a basic kind of problem that caused us 20, 30 years of suffering and vulnerability because of some design flaws that went into these systems, and it’s happening all over again now with AI.”

Organizations understand that, he explained, which may be why most have not yet widely adopted AI agents. “They’re just experimenting, and they’re super afraid,” he said. “They should be afraid, but on the other hand, as an industry we should have the proper systems and guardrails.”

Microsoft tried to prevent such a problem, known as an LLM scope violation vulnerability. It’s a class of security flaws in which the model is tricked into accessing or exposing data beyond what it’s authorized or intended to handle—essentially violating its “scope” of permissions. “They tried to block it in multiple paths across the chain, but they just failed to do so because AI is so unpredictable and the attack surface is so big,” Gruss said. 

While Aim is offering interim mitigations to clients adopting other AI agents that could be affected by the EchoLeak vulnerability, Gruss said the long-term fix will require a fundamental redesign of how AI agents are built. “The fact that agents use trusted and untrusted data in the same ‘thought process’ is the basic design flaw that makes them vulnerable,” he explained. “Imagine a person that does everything he reads—he would be very easy to manipulate. Fixing this problem would require either ad hoc controls, or a new design allowing for clearer separation between trusted instructions and untrusted data.” 

Such a redesign could be in the models themselves, Gruss said, citing active research into enabling the models to better distinguish between instructions and data. Or the applications the agents are built on top of could add mandatory guardrails for any agent. 

For now, “every Fortune 500 I know is terrified of getting agents to production,” he said, pointing out that Aim has previously done research on coding agents where the team was able to run malicious code on developers’ machines. “There are users experimenting, but these kind of vulnerabilities keep them up at night and prevent innovation.” 

This story was originally featured on Fortune.com

© FABRICE COFFRINI—AFP/Getty Images

Microsoft CEO Satya Nadella

Everyone’s using AI at work. Here’s how companies can keep data safe

11 June 2025 at 10:00

Companies across industries are encouraging their employees to use AI tools at work. Their workers, meanwhile, are often all too eager to make the most of generative AI chatbots like ChatGPT. So far, everyone is on the same page, right?

There’s just one hitch: How do companies protect sensitive company data from being hoovered up by the same tools that are supposed to boost productivity and ROI? After all, it’s all too tempting to upload financial information, client data, proprietary code, or internal documents into your favorite chatbot or AI coding tool, in order to get the quick results you want (or that your boss or colleague might be demanding). In fact, a new study from data security company Varonis found that shadow AI—unsanctioned generative AI applications—poses a significant threat to data security, with tools that can bypass corporate governance and IT oversight, leading to potential data leaks. The study found that nearly all companies have employees using unsanctioned apps, and nearly half have employees using AI applications considered high-risk. 

For information security leaders, one of the key challenges is educating workers about what the risks are and what the company requires. They must ensure that employees understand the types of data the organization handles—ranging from corporate data like internal documents, strategic plans, and financial records, to customer data such as names, email addresses, payment details, and usage patterns. It’s also critical to communicate how each type of data is classified—for example, whether it is public, internal-only, confidential, or highly restricted. Once this foundation is in place, clear policies and access boundaries must be established to protect that data accordingly.

Striking a balance between encouraging AI use and building guardrails

“What we have is not a technology problem, but a user challenge,” said James Robinson, chief information security officer at data security company Netskope. The goal, he explained, is to ensure that employees use generative AI tools safely—without discouraging them from adopting approved technologies.

“We need to understand what the business is trying to achieve,” he added. Rather than simply telling employees they’re doing something wrong, security teams should work to understand how people are using the tools, to make sure the policies are the right fit—or whether they need to be adjusted to allow employees to share information appropriately.

Jacob DePriest, chief information security officer at password protection provider 1Password, agreed, saying that his company is trying to strike a balance with its policies—to both encourage AI usage and also educate so that the right guardrails are in place. 

Sometimes that means making adjustments. For example, the company released a policy on the acceptable use of AI last year, part of the company’s annual security training. “Generally, it’s this theme of ‘Please use AI responsibly; please focus on approved tools; and here are some unacceptable areas of usage.’” But the way it was written caused many employees to be overly cautious, he said. 

“It’s a good problem to have, but CISOs can’t just focus exclusively on security,” he said. “We have to understand business goals and then help the company achieve both business goals and security outcomes as well. I think AI technology in the last decade has highlighted the need for that balance. And so we’ve really tried to approach this hand in hand between security and enabling productivity.” 

Banning AI tools to avoid misuse does not work

But companies who think banning certain tools is a solution, should think again. Brooke Johnson, SVP of HR and security at Ivanti, said her company found that among people who use generative AI at work, nearly a third keep their AI use completely hidden from management. “They’re sharing company data with systems nobody vetted, running requests through platforms with unclear data policies, and potentially exposing sensitive information,” she said in a message. 

The instinct to ban certain tools is understandable but misguided, she said. “You don’t want employees to get better at hiding AI use; you want them to be transparent so it can be monitored and regulated,” she explained. That means accepting the reality that AI use is happening regardless of policy, and conducting a proper assessment of which AI platforms meet your security standards. 

“Educate teams about specific risks without vague warnings,” she said. Help them understand why certain guardrails exist, she suggested, while emphasizing that it is not punitive. “It’s about ensuring they can do their jobs efficiently, effectively, and safely.” 

Agentic AI will create new challenges for data security

Think securing data in the age of AI is complicated now? AI agents will up the ante, said DePriest. 

“To operate effectively, these agents need access to credentials, tokens, and identities, and they can act on behalf of an individual—maybe they have their own identity,” he said. “For instance, we don’t want to facilitate a situation where an employee might cede decision-making authority over to an AI agent, where it could impact a human.” Organizations want tools to help facilitate faster learning and synthesize data more quickly, but ultimately, humans need to be able to make the critical decisions, he explained. 

Whether it is the AI agents of the future or the generative AI tools of today, striking the right balance between enabling productivity gains and doing so in a secure, responsible way may be tricky. But experts say every company is facing the same challenge—and meeting it is going to be the best way to ride the AI wave. The risks are real, but with the right mix of education, transparency, and oversight, companies can harness AI’s power—without handing over the keys to their kingdom.

This story was originally featured on Fortune.com

© Illustration by Simon Landrein

Received before yesterday

Meta’s ‘superintelligence’ effort with Scale AI founder highlights its scramble to keep pace in AI race

10 June 2025 at 16:12

Meta’s decision to create an ambitious new “superintelligence” AI research lab headed by Scale AI’s Alexandr Wang is a bold bid for relevance in its fierce AI battle with OpenAI, Anthropic and Google. It is also far from a slam-dunk. 

While the pursuit of an ill-defined superintelligence—typically meant as an AI system that could surpass the collective intelligence of humanity–would have seemed a quixotic, sci-fi quest in the past, it has become an increasingly common way for top AI companies to attract talent and secure a competitive edge.

Tapping the 28-year-old Wang to lead the new superintelligence effort, while in talks to invest billions of dollars into Scale AI, as reported today by the New York Times, clearly shows Mark Zuckerberg’s confidence in Wang and Scale. The startup, which Wang co-founded in 2016, primarily focuses on providing high-quality training data, the “oil” that powers today’s most powerful AI models. Meta invested in Scale’s last funding round, and also recently partnered with Scale and the U.S. Department of Defense on “Defense Llama,” a military-grade LLM based on Meta’s Llama 3 model. 

Meta has struggled, however, with several reorganizations of its generative AI research and product teams over the past two years. And the high-stakes AI talent wars are tougher to win than ever. Meta has reportedly offered seven-to-nine figure compensation packages to dozens of top researchers, with some agreeing to join the new lab. But one VC posted on X that even with those offers on the table, he had heard of three instances in which Meta still lost candidates to OpenAI and Anthropic. 

Meta already has a long-standing advanced AI research lab, FAIR (Fundamental AI Research Lab), founded by Meta chief scientist Yann LeCun in 2013. But FAIR has never claimed to be pursuing superintelligence, and LeCun has even eschewed the term AGI (artificial general intelligence), which is often defined as an AI system that would be as intelligent as an individual person. LeCun has gone on record as being skeptical that current approaches to AI, built around large language models (LLMs), will ever get to human-level intelligence.

In April, LeCun  told Fortune that a spate of high-profile departures from FAIR, including that of former FAIR head Joelle Pineau, was not a sign of the lab’s “dying a slow death.” Instead, he said, it was a “new beginning” for FAIR, refocusing on the “ambitious and long-term goal of what we call AMI (advanced machine intelligence).” 

Aside from FAIR, Meta CEO Mark Zuckerberg has spent billions on generative AI development in a bid to catch up to OpenAI, following the launch of that company’s wildly popular ChatGPT in November 2022. Zuckerberg rebuilt the entire company around the technology and succeeded in creating highly-successful open source AI models, branded as Llama, in 2023 and 2024. The Llama models helped Meta recover from an underwhelming pivot to the metaverse. 

But Meta’s latest AI model, Llama 4, which was released in April 2025, was considered a flop. The model’s debut was attended by controversy around a perceived rushed release, lack of transparency, possibly inflated performance metrics, and indications that Meta was failing to keep pace with open-source AI rivals like China’s DeepSeek.

For the past year, Meta’s been hemorrhaging top AI talent. Three top Meta AI researchers–Devi Parikh, Abhishek Das and Dhruv Botra, left a year ago to found Yutori, a startup focused on AI agents. Damien Sereni, an engineering leader at Meta who led the team working on PyTorch, a framework underpinning most of today’s top LLMs, recently left the company. Boris Cherny is a software engineer who left Meta last year for Anthropic and created Claude Code. And Erik Meijer, a former Meta engineering leader, told Fortune recently that he has heard that several developers from PyTorch have recently left to join former OpenAI CTO Mira Murati’s Thinking Machine Labs.

Meta’s move to bring in Wang, along with a number of other Scale employees, while simultaneously investing in Scale, follows what has, over the past 18 months, become a standard playbook for big tech companies looking to grab AI know-how from startups. Microsoft used a similar deal structure, which stops short of a full acquisition yet still amasses talent and technical IP, to bring in Mustafa Suleyman from Inflection. Amazon then used the arrangement to hire key talent from Adept AI and Google used it to rehire Character AI cofounder Noam Shazeer. Because the deals are not structured as acquisitions, it is more difficult for antitrust regulators to block them.

It remains unclear whether Meta will be able to declare the Scale deal as a big win. It’s also not yet certain whether Yann LeCun will find himself marginalized within the Meta research ecosystem. But one big rising power player is undeniable: Alexandr Wang.

Wang became a billionaire with Scale by providing a global army of contractors that could label the data that companies including Meta and OpenAI use to train and improve their AI models. While it went on to help companies make custom AI applications, its core data business remains its biggest moneymaker. When Fortune spoke to Wang a year ago, he said that data was far from being commoditized for AI.  “It’s a pivotal moment for the industry,” he said. “I think we are now entering a phase where further improvements and further gains from the models are not going to be won easily. They’re going to require increasing investments and are gonna require innovations and computation and efficient algorithms, innovations, and data. Our leg of that school is to ensure that we continue innovating on data.” 

Now, with a potential Meta investment,Wang’s efforts are paying off big time. Zuckerberg can only hope the deal works as well for him as it has for Wang.

This story was originally featured on Fortune.com

© Bloomberg / Getty Images

Meta CEO Mark Zuckerberg is betting on Scale AI to help it regain ground in the AI race. Photographer: Davd Paul Morris/Bloomberg via Getty Images

Exclusive: Ex-Meta AI leaders debut an agent that scours the web for you in a push to ultimately give users their own digital ‘chief of staff’

10 June 2025 at 13:00

The trio is widely regarded as among the world’s most elite AI talent.  All three are veteran ex-Meta researchers who helped lead the company’s high-profile generative AI efforts—and before that, ran labs together at Georgia Tech. 

Devi Parikh led Meta’s multimodal AI research team. Dhruv Batra headed up embodied AI, building models that help robots navigate the physical world. And Abhishek Das was a research scientist at Meta’s Fundamental AI Research lab, or FAIR.

A year ago, Parikh and Das left Meta to launch Yutori, a startup named after the Japanese word for the mental spaciousness that comes from having room to think. Batra joined a couple of months later. 

Now, investors are betting big on the team’s vision for Yutori. Radical Ventures, Felicis, and a roster of top AI angels—including Elad Gil, Sarah Guo, Jeff Dean, and Fei-Fei Li—have backed Yutori’s $15 million seed round. The mission: to rethink how people interact with  AI agents—where the software, not the user, is the one doing the surfing to accomplish tasks like an AI ‘chief of staff.’

Taking daily digital chores off your plate

“The web is simultaneously one of humanity’s greatest inventions—and really, really clunky,” said Parikh. Yutori’s long-term dream, she explained, is to build AI personal assistants—in the form of web agents—that can take daily digital chores off your plate without you lifting a finger, leaving you with time to tackle whatever brings you joy. But to make agents people actually want to use, she said, the entire experience needs a redesign—from product and user interface to technical infrastructure. 

“That’s something that’s harder for larger entities to think through from scratch, since they are incentivized to think about their existing products,” said Parikh, adding that she saw a lot of that at Meta. “We have the luxury to be able to just think from scratch.” 

Parikh explained that Yutori’s focus is on improving  interaction with generative AI.  It should be dynamic and adapt to the task at hand, rather than using a rigid, pre-designed template like a chat box or a web page.

For example, if an AI agent is ordering food on DoorDash for you, it might need to show which restaurants it searched, what menu items it considered, and a few options you can quickly review and confirm. But if that same agent is monitoring the news and generating daily summaries, the format should be entirely different—perhaps organized like a briefing or timeline.

Ultimately, Parikh believes, a system should intelligently decide how to present information and how users can interact with the agent to refine or redirect the task. To get there, Yutori is building on top of existing models, including Meta’s Llama, with a singular focus on agents that can navigate the web and take actions on behalf of a user. 

Today, Yutori announced its first consumer product, Scouts, which the Yutori team explained is like having a team of agents that can monitor the web for anything you care about. Say you’re interested in buying a phone: You want to have a team of agents monitoring the web for whenever there is a discount on the Google Pixel 9. A Scout can notify you when that happens. Or if you are interested in a daily news update on an obscure topic, you can set up a Scout for that.

“Anything of this flavor where you want a team of agents to monitor the web and then notify you, either based on a condition or at a particular time, that’s  the use case we are going after,” said Das. His own very Scout, he explained, is one to reserve tennis courts in San Francisco. He asked his Scout to “Notify me whenever a tennis court in Buena Vista park becomes available for Mondays at 7:30am.” He gets timely notifications over email and he ends up booking the courts. Scouts is free to use, though there is a waitlist for access. 

Unlike traditional search tools – like Google Alerts, for example – Scouts work deeper behind the scenes, autonomously operating browsers and clicking through websites to gather details.  They can also monitor dozens of sites at the same time to find updates. While other companies like OpenAI may be going after the same kind of idea, Batra said that it’s “still early” in the AI agent space and that Yutori is not deterred: “I think we still have a shot.” 

A long-term consumer vision

While Yutori has launched its first product, both the founding team and its investors are clear: the initial $15 million investment is less about this specific release, and more about the team’s bona fides and its long-term consumer vision. For years, the three close friends—Parikh and Batra have also been married since 2010, while Batra advised Das’s PhD—had met weekly over brainstorming dinners and long discussed the possibility of starting a company focused on the future of AI agents. 

“In the early stages of a startup, the quality of the team is the single most important thing—more than the idea, more than the product, more than the market,” said Rob Toews, partner at Radical Ventures, which led Yutori’s seed round and has invested in AI startups including Cohere, Waabi, and Writer. He emphasized that only a “very, very small set of individuals” in the world have the technical depth and creative judgment to build cutting-edge AI systems.

“The Yutori founding team is very much in that upper echelon,” he said, referring to the three cofounders and their initial hires, totaling 15. “It’s just an incredibly dense talent team, top to bottom. Everyone they’ve hired so far is a highly coveted researcher or engineer from places like Meta, Google, and Tesla. Teams of this caliber just don’t come along very often.”

It’s a $15 million bet on what Batra called “an experiment” and “a hypothesis,” explaining that the goal of Scouts is to learn how people actually use autonomous agents in real life—and then iterate quickly. Take an agent that can monitor a gym’s scheduling page every 20 minutes: “No human wants to sit down and do that,” said Das.

For now, Yutori has no plans to charge for its products, but instead to keep experimenting to see what clicks with consumers. Ultimately, the founders say they aren’t selling AI for its own sake, but instead are focused on rethinking not just the tasks agents can take on, but the context in which they operate. Today’s digital assistants are mostly reactive—requiring users to reach for their phones, open an app, and manually explain what they need. Yutori’s vision is to remove that friction by building agents that understand what a user is doing in the moment and can proactively step in to help.

It’s a vision the founders have been working toward since their time at Meta, where they experimented with early versions of smart assistants in Meta’s smart glasses made in partnership with Ray-Ban. At Yutori, they’re continuing that work—testing different ways to deliver helpful support exactly when people need it.

This story was originally featured on Fortune.com

© Courtesy of Yutori

Yutori co-founders (left to right) Dhruv Batra, Devi Parikh and Abhishek Das.
❌