Mark Zuckerberg has named Shengjia Zhao, an artificial intelligence researcher who joined Meta Platforms Inc. from OpenAI in June, as the chief scientist for the social media company’s new superintelligence AI group.
Zhao was part of the team behind the original version of OpenAI’s popular chatbot, ChatGPT. He will help lead Meta’s high-profile group, which is aiming to build new AI models that can perform tasks as well as or better than humans. Zhao will report to Alexandr Wang, the former chief executive officer of Scale AI who also joined Meta in June as Chief AI Officer.
Meta has been spending aggressively to recruit AI experts to develop new models and keep pace with rivals like OpenAI and Google in the race for AI dominance. The company has been looking for a chief scientist for the group for months. Zhao is one of more than a dozen former OpenAI employees who have joined Meta’s AI unit in the past two months.
“Shengjia co-founded the new lab and has been our lead scientist from day one,” Zuckerberg, Meta’s CEO, wrote in a post announcing the news on Threads. “Now that our recruiting is going well and our team is coming together, we have decided to formalize his leadership role.”
Zhao was a co-author on the original ChatGPT research paper, and was also a key researcher on OpenAI’s first reasoning model, o1, which has helped popularize a wave of similar so-called “chain-of-thought” systems from labs such as DeepSeek, Google, and others. He was listed as one of over 20 “foundational researchers” on the project.
Yann LeCun, another AI researcher who has been at Meta for over a decade and holds the title of chief scientist, will continue to work at the company as chief scientist of an internal AI research group known as FAIR, according to a person familiar with the matter. He will report to Wang, they added.
Google has launched a tool that lets you virtually try on clothes online. Available now, the tool lets users upload a picture of themselves to see how an outfit would look on them. AI components account for how materials fold or stretch across people’s bodies.
Buying clothes online is always something of a crapshoot. What looks good on the webpage could look ridiculous on you in person (or worse, fit entirely too tight).
A few months ago at its I/O 2025 event, Google unveiled an AI-powered tool that could prevent those fashion disasters, letting you upload a full-body picture of yourself to virtually try on clothing found online. Now, that tool is finally available to users in the U.S.
To use the tool, just tap on any product listing across Google or an apparel product result on Google Images and hit the “try it on” icon. From there, you’ll upload a full-length photo and see what you’ll look like in the outfit.
The tool works on laptops, desktops, and mobile devices.
Google says it has also updated its price-alert feature, so you can track prices and grab that top, dress, or pair of shorts when the price falls into your range.
The tool doesn’t just overlay the clothing on top of the image you upload. Google used an AI image generation model to account for how materials fold or stretch across people’s bodies.
Later this year, Google will bring shopping features like this to its AI Mode, including one that lets people explore (and purchase) outfit and decor ideas directly from the chatbot.
One heads-up about this tool: While it does a good job of showing you what the clothes will look like on you, it does not account for body size, so it’s not currently useful as a guide to whether an outfit will fit you well. In our own tests, the tool shaved several pounds off of us to showcase the clothes. And while that was a nice fantasy, the reality would likely look a bit different.
In fairness, the tool does warn users “Fit and appearance won’t be exact.”
Agentic AI is the hottest new trend in today’s tech sector, as both large companies and buzzy startups promise that a legion of autonomous programs will soon be able to manage our personal and professional lives.
Sapna Chadha, vice president for Southeast Asia and South Asia frontier at Google Asia Pacific, described agentic AI as the logical next step in this new technology. “AI agents are where you take intelligent language models and give them access to tools,” she said Tuesday at the Fortune Brainstorm AI Singapore conference. This access allows the language models to stitch together complex and multi-step actions.
Vivek Luthra, Accenture’s Asia-Pacific data and AI lead, shared one example from Accenture’s own experience: Marketing teams could use an AI agent to manage campaigns, freeing human employees for more value-added work. (Accenture is a founding partner of Brainstorm AI Singapore.)
Chadha predicted that almost a third of all enterprise software will have agentic AI built in by 2028, and that it could automate almost 15% of day-to-day work and workflows.
But Luthra suggested that most companies aren’t there yet. Accenture’s clients fall into three stages of agentic AI adoption. The first is AI assistance, where staff members ask an agentic co-pilot for help in much the same way they might ask a fellow team member a question. The second is treating the AI as an advisor, increasing the overall capability of human employees and empowering them to make the right decisions. The final stage is giving autonomous agents the authority to handle entire processes on their own.
As of now, Luthra says, most clients are in the first and second stages, with fewer companies prepared to let AI agents truly handle things on their own.
According to Luthra, companies leading the way on AI begin by imagining new ways of structuring workflows, then assess what skills are needed in the workplace to make that happen. Then, they put agentic AI into practice with a cross-platform “workbench” that gives employees opportunities to integrate AI agents into their daily lives.
Robotaxis are starting to become a reality after years of hype and promises. Self-driving cars are now on the road in cities like San Francisco, Shenzhen and Wuhan, and robotaxi firms are foraying into new markets like Singapore and the United Arab Emirates. And on Thursday, Tesla CEO Elon Musk suggested that half the U.S. population will get access to a robotaxi by 2026.
At this point, those working in the autonomous vehicle space think the problem isn’t technology, but rather people, and regulators in particular.
“It’s clearly mature enough to scale, but we have to work with the government on the regulation part,” said Kerry Xu, WeRide’s general manager for Singapore, at the Fortune Brainstorm AI Singapore conference on Wednesday. “Public acceptance, data, transparency…I think it takes time for the community to completely accept AVs as part of their normal life.”
Earlier this year, WeRide hosted Singapore Acting Transport Minister Jeffrey Siow at its Guangzhou headquarters, where the minister announced ambitious plans to debut AVs in the city’s public housing estates by the end of the year. The Chinese startup also debuted Singapore’s first fully driverless bus system earlier in July.
ST Liew, president of Qualcomm’s Taiwan and Southeast Asia business, agreed that the private sector needed to help build an ecosystem of trust. “We always advocate that we should have cross-industry benchmark transparency in the training data and make sure that we are compliant,” he said.
“Take care of the safety, make sure that it is transparent, and then you can enjoy yourself while you’re in the car,” he continued.
Liew credited Qualcomm’s work on autonomous vehicles to the company’s four decades of experience working on semiconductors. He noted that the power of new chips allows for “AI at the edge,” making the process of learning, deduction, and decision-making more automated. This enables driverless cars to be deployed across different geographies and weather patterns, such as those found in Southeast Asia.
Cars built in Asia, and particularly China, are now becoming more sophisticated, offering a wide array of customer-friendly assisted driving and software features, turning cars into smartphones on wheels.
Liew pointed out that the car industry is now shifting to the “software-defined car,” which allows drivers to do much more inside the vehicle.
“If I can go into the car, then I can conduct my meetings, I can use AI to ask where I am, book my restaurants, and all that,” Liew said, “just like it’s an extension of my office or my home.”
President Donald Trump revealed he once considered breaking up AI darling Nvidia before learning more about its CEO, Jensen Huang, and the company’s market dominance. The president made the comments during an AI summit hosted in Washington on Wednesday. Nvidia recently became the first company to reach a $4 trillion market valuation, driven by its near-monopoly in AI chip technology.
U.S. President Donald Trump said he considered breaking up AI darling Nvidia before learning more about the chipmaker and its CEO, Jensen Huang.
“I said, ‘Look, we’ll break this guy up,’ before I learned the facts of life,” Trump said of Huang during a Wednesday speech about his new AI Action Plan. The U.S. president then appeared to recount an earlier conversation with an advisor about Nvidia’s market share, its CEO, and potentially breaking up the company.
“I said, who the hell is he? What’s his name…What the hell is Nvidia? I’ve never heard of it before,” the president said of the world’s most valuable tech company.
“I figured we could go in and we could sort of break them up a little bit, get them a little competition, and I found out it’s not easy in that business… Then I got to know Jensen and now I see why,” he said, inviting Huang, who was sitting in the audience, to stand up.
The president made the comments during an AI summit hosted in Washington on Wednesday, but it’s unclear when the original conversation about potentially breaking up the company took place.
Representatives for Nvidia did not immediately respond to a request for comment made by Fortune.
Huang’s relationship with Trump
Huang scored a win for Nvidia from the U.S. president earlier this month.
Following a meeting between Huang and Trump at the White House, the Trump administration lifted restrictions on Nvidia’s H20 AI chip exports to China, allowing the company to sell the chips in the lucrative market and reversing previous Trump administration restrictions.
Per the New York Times, Huang engaged in months of lobbying for the policy change, meeting with Trump, testifying before Congress, and working closely with White House allies like AI adviser David Sacks. The CEO argued that restricting chip sales would hurt U.S. tech leadership by allowing Chinese rivals to dominate, and emphasized that Nvidia’s chips were crucial for global AI standards.
The tech giant has been on something of a winning streak of late. Earlier this month, the company made history when it became the first in the world to reach a market value of $4 trillion. The company’s stock has soared over the past five years, with a nearly 18% gain registered year-to-date. Nvidia’s supercharged growth is driven by the AI boom and the company’s near-monopoly on AI chip manufacturing. The company’s graphics processing units (GPUs) are used by all major tech companies to maintain and develop AI models.
The company’s dominance in AI hardware has made it a key player in global tech geopolitics, particularly as governments scrutinize the export of advanced semiconductor technology amid rising U.S.-China tensions.
The private sector has been in the driver’s seat on AI development since ChatGPT’s release in late 2022. Big Tech companies like Microsoft, Google, and Alibaba and smaller startups like Anthropic and Mistral are all trying to monetize this new technology for future growth.
Yet at the Fortune Brainstorm AI Singapore conference on Wednesday, two experts called for a more humane and interdisciplinary approach to artificial intelligence.
AI needs to “think better,” not just faster and cheaper, said Anthea Roberts, founder of startup Dragonfly Thinking. Both human individuals and AI models can struggle to look beyond a particular perspective, whether based on a country or discipline in the case of people, or a “centrist approach” in the case of computers. Human-AI collaboration can help policymakers think through issues from different country, disciplinary, and domain perspectives, increasing the likelihood of success, she explained.
Artificial intelligence is a “civilization-changing technology” that requires a multi-stakeholder ecosystem of academia, civil society, government, and industry working to improve it, said Russell Wald, executive director at the Stanford Institute for Human-Centered AI.
“Industry really needs to be a leader in this space, but academia does too,” he said, pointing to its early support for frontier technology, its ability to train future “AI leaders,” and its willingness to publish information.
Stopping AI from being a ‘crazy uncle’
Despite rapid growth in AI use, many people remain skeptical of the technology, pointing to its penchant for hallucinating or going off the rails with strange or even offensive language.
Roberts suggested that most people fall into two camps. The first camp, which includes most industry players and even university students, engages in “uncritical use” of AI. The other follows “critical non-use”: Those concerned about bias, transparency, and inauthenticity simply refuse to join the AI bandwagon.
“I would invite people who aren’t Silicon Valley ‘tech bros’ to get involved in the making and shaping of how we use these products,” she said.
Wald said his institute has learned a lot about humanity in the process of training AI. “You want the right parts of humanity…not the crazy uncle at the Thanksgiving table,” he said.
Both experts said that getting AI right is critical, due to the momentous possible benefits this new technology could bring to society.
“You need to think [about] not just what people want—which is often their baser instincts—but what do they want to want, which is their more altruistic instincts,” Roberts explained.
Anthea Roberts, Founder and CEO of Dragonfly Thinking (left), and Russell Wald, Executive Director at Stanford Institute for Human-Centered AI, speaking at Fortune Brainstorm AI Singapore on July 23, 2025.
Search is changing at a breakneck pace, with Google rolling out new AI features so quickly it can be hard to keep up. So far, these AI implementations are being offered in addition to the traditional search experience. However, Google is now offering a sneak peek at how it may use AI to change the good old-fashioned list of blue links. The company says its new Web Guide feature is being developed to "intelligently organize" the results page, and you can try it now, if you dare.
Many Google searches today come with an AI Overview right at the top of the page. There's also AI Mode, which does away with the typical list of links in favor of a full chatbot approach. While Google contends that these features enhance the search experience and direct users to good sources, it's been easy to scroll right past the AI and get to the regular list of websites. That may change in the not-too-distant future, though.
Google's latest AI experiment, known as Web Guide, uses generative AI to organize the search results page. The company says Web Guide uses a custom version of Gemini to surface the most helpful webpages and organize the page in a more useful way. It uses the same fan-out technique as AI Mode, conducting multiple parallel searches to gather more data on your query.
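For readers curious what that fan-out looks like in practice, here is a minimal, hypothetical Python sketch. Google has not published Web Guide’s internals, so expand_query and search below are stand-ins for whatever Gemini-backed services it actually uses; only the shape of the idea, several sub-queries issued in parallel with results grouped by theme, comes from the description above.

```python
import asyncio

# Hypothetical illustration of a "fan-out" search. The real Web Guide
# internals are not public; these helpers only stand in for them.

async def search(query: str) -> list[str]:
    # Placeholder: pretend this hits a search backend and returns page URLs.
    await asyncio.sleep(0.1)
    return [f"https://example.com/{query.replace(' ', '-')}/{i}" for i in range(3)]

def expand_query(query: str) -> list[str]:
    # Placeholder: a model would derive related sub-queries from the original.
    return [query, f"{query} tutorial", f"{query} comparison", f"{query} reviews"]

async def fan_out(query: str) -> dict[str, list[str]]:
    sub_queries = expand_query(query)
    # Issue all sub-searches in parallel rather than one at a time.
    results = await asyncio.gather(*(search(q) for q in sub_queries))
    # Group results under the sub-query that produced them: the kind of
    # thematic bucketing Web Guide surfaces on the results page.
    return dict(zip(sub_queries, results))

if __name__ == "__main__":
    for topic, links in asyncio.run(fan_out("best travel camera")).items():
        print(topic, links)
```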
The CEOs of every major artificial intelligence company received letters Wednesday urging them to fight Donald Trump's anti-woke AI order.
Trump's executive order requires any AI company hoping to contract with the federal government to jump through two hoops to win funding. First, they must prove their AI systems are "truth-seeking"—with outputs based on "historical accuracy, scientific inquiry, and objectivity" or else acknowledge when facts are uncertain. Second, they must train AI models to be "neutral," which is vaguely defined as not favoring DEI (diversity, equity, and inclusion), "dogmas," or otherwise being "intentionally encoded" to produce "partisan or ideological judgments" in outputs "unless those judgments are prompted by or otherwise readily accessible to the end user."
Announcing the order in a speech, Trump said that the US winning the AI race depended on removing allegedly liberal biases, proclaiming that "once and for all, we are getting rid of woke."
Google DeepMind's Gemini AI won a gold medal at the International Mathematical Olympiad by solving complex math problems using natural language, marking a breakthrough in AI reasoning and human-level performance.
Could Walmart become a leader in the burgeoning agentic AI race?
After watching the retail company’s technology leaders discuss a host of new agents Wednesday at a New York City event, the answer might not be as far-fetched as it sounds.
The retail giant unveiled its vision for how AI agents are going to overhaul how customers shop on its digital platforms; how corporate and store employees do their jobs; and how vendors and sellers track their merchandise performance. In some cases, this autonomous technology is doing so already.
“Walmart is all in on agents,” the company’s chief technology officer, Suresh Kumar, told reporters at the event. “Agents can make life simpler for every aspect of what we do at Walmart,” he added.
Despite its roots as a brick-and-mortar retailer, Walmart has more recently been at the forefront of online commerce. In embracing AI agents, however, the company is positioning itself ahead of even many digital companies.
Agents, to many in the tech industry, are the next evolution in the current AI boom, where artificial intelligence not only acts as an assistant but can autonomously complete complex multistep actions with limited, or even no, human involvement. And for Walmart, the company’s leaders say it’s a natural next step in a technological transformation that has been underway inside the Arkansas-based retailer for the past few years. Kumar said he believes Walmart holds a key advantage over many competitors in this space, considering the depth and breadth of data the company holds, both from its massive customer base and from employee experiences as the world’s largest nongovernment employer.
He and other Walmart tech leaders showed off examples of four “super agents,” which essentially act as managers that route tasks to more specialized agents. For consumers, there’s Sparky, currently a generative AI digital assistant that can answer product questions and make suggestions, and which has been live in Walmart’s app for some time. In the future, the assistant will start to take actions: creating an order of weekly essential products based on a customer’s shopping behavior, for example, and placing the order with essentially a thumbs-up from the customer. The agent will also eventually be able to curate a multi-item order geared to an upcoming party or event, based on specifics such as theme, attendee count, and a shopper’s budget.
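As a rough illustration of that manager-and-specialists pattern, here is a hypothetical Python sketch. None of the agent names or routing rules come from Walmart; a production system would presumably classify intent with a model rather than keyword matching.

```python
from typing import Callable

# Hypothetical specialist agents; real ones would call models and backend APIs.
def product_qa_agent(task: str) -> str:
    return f"[product Q&A] answering: {task}"

def reorder_agent(task: str) -> str:
    return f"[reorder] drafting a weekly-essentials order for: {task}"

def event_planning_agent(task: str) -> str:
    return f"[event] curating a multi-item order for: {task}"

# The "super agent" maps a coarse intent to a specialist.
ROUTES: dict[str, Callable[[str], str]] = {
    "reorder": reorder_agent,
    "party": event_planning_agent,
    "event": event_planning_agent,
}

def super_agent(task: str) -> str:
    # Route the task to the first matching specialist; fall back to general Q&A.
    for keyword, specialist in ROUTES.items():
        if keyword in task.lower():
            return specialist(task)
    return product_qa_agent(task)

print(super_agent("reorder my usual groceries"))
print(super_agent("plan a dinosaur-themed party for 12 kids under $200"))
```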
Other leaders showcased internal agent use cases that the company says will more efficiently accomplish mundane and repetitive tasks for store workers, corporate staff, Walmart software engineers, and brands and other companies that sell through Walmart’s physical and digital storefronts.
While some of these agentic use cases are live today, others are coming soon, company execs said. But they were intent on making one point clear.
“It’s not vaporware,” one executive said, accurately reading between the lines of one of this reporter’s questions.
Critically, many questions remain unanswered. What exact impact will this so-called agentic future—if brought to full fruition—have on employee headcount at the world’s largest nongovernment employer?
“We expect jobs to evolve, and we don’t know what that looks like yet,” Walmart exec Dave Glick told Fortune.
Will the revenue and employee productivity gains outweigh the intense costs of using AI at scale, especially for a company known on Wall Street for consistently generating profits?
And at a broader industry level, is Walmart willing to participate in a possible future where consumers trust shopping agents from companies like OpenAI or Perplexity to autonomously make purchase decisions for them? Walmart U.S. CTO Hari Vasudev told Fortune that the company is building the technological capabilities to do so, but that the ultimate decision will lie elsewhere in the company.
“I don’t want to mandate the business model; I want to be able to build it as open as I can,” he said. “Whether the business decides to do it with a particular AI operator or not will depend on the economics and the business model and the relationships.”
Are you a current or former Walmart employee with thoughts on this topic or a tip to share? Contact Jason Del Rey at [email protected], [email protected], or through messaging apps Signal or WhatsApp at 917-655-4267. You can also message him on LinkedIn or at @delrey on X.
Yesterday, I recapped my day at “Winning the AI Race”—an event hosted by the All-In podcast and the Hill & Valley coalition—where Silicon Valley’s elite descended on Washington’s stately Andrew Mellon Auditorium to celebrate President Trump’s new AI Action Plan, which he signed onstage after a surreal afternoon that fused podcast spectacle with public policy. The only non–Silicon Valley touch seemed to be the sea of suits that replaced the typical tech uniform of hoodies and sneakers (though Nvidia CEO Jensen Huang refused to budge from his usual leather jacket and black jeans).
Trump’s speech before scrawling his signature went on so long that I missed my Amtrak train back to New Jersey. While I waited for the next one, I had plenty of time to reflect on the day—which, without question, was a victory lap for the so-called AI “accelerationists,” now led in Washington by David Sacks, Trump’s appointed AI and crypto czar and co-host of the All-In podcast.
Sacks—along with senior White House AI policy advisor Sriram Krishnan and Office of Science and Technology Policy director Michael Kratsios, both of whom were also present at the event—has been front and center pushing Silicon Valley’s pro-speed, pro-scale ideology, advocating for rapid deployment and minimal regulation of AI.
For the “accelerationists”—those who believe the rapid development and deployment of artificial intelligence should be pursued as quickly as possible—innovation, scale, and speed are everything. Over-caution and regulation? Ill-conceived barriers that will actually cause more harm than good. They argue that faster progress will unlock massive economic growth, scientific breakthroughs, and national advantage. And if superintelligence is inevitable, they say, the U.S. had better get there first—before rivals like China’s authoritarian regime.
AI ethics and safety has been sidelined
This worldview, articulated by Marc Andreessen in his 2023 blog post, has now almost entirely displaced the diverse coalition of people who worked on AI ethics and safety during the Biden Administration—from mainstream policy experts focused on algorithmic fairness and accountability, to the safety researchers in Silicon Valley who warn of existential risks. While they often disagreed on priorities and tone, both camps shared the belief that AI needed thoughtful guardrails. Today, they find themselves largely out of step with an agenda that prizes speed, deregulation, and dominance.
Whether these groups can claw their way back to the table is still an open question. The mainstream ethics folks—with roots in civil rights, privacy, and democratic governance—may still have influence at the margins, or through international efforts. The existential risk researchers, once tightly linked to labs like OpenAI and Anthropic, still hold sway in academic and philanthropic circles. But in today’s environment—where speed, scale, and geopolitical muscle set the tone—both camps face an uphill climb. If they’re going to make a comeback, I get the feeling it won’t be through philosophical arguments. More likely, it would be because something goes wrong—and the public pushes back.
Also: I hope you’ll check out my first-ever Fortune cover story, a deep dive into Meta’s superintelligence spending spree and Mark Zuckerberg’s massive bet on new chief AI officer and Scale AI founder Alexandr Wang. And don’t miss the marvelous feature from Jeremy Kahn about how Aravind Srinivas turned Perplexity into an $18 billion would-be Google killer. All part of our upcoming Most Powerful People issue!
Fortune recently unveiled a new ongoing series, Fortune AIQ, dedicated to navigating AI’s real-world impact. Our third collection of stories explores how businesses across virtually every industry are putting AI to work—and how their particular field is changing as a result.
How Walmart, Amazon, and other retail giants are using AI to reinvent the supply chain—from warehouse to checkout. Read more
Meet the legacy players and upstarts using AI to reinvent the energy business. Read more
AI isn’t just entering law offices—it’s challenging the entire legal playbook. Read more
How a bulldozer, crane, and excavator rental company is using AI to save 3,000 hours per week. Read more
AI is already touching nearly every corner of the medical field. Read more
U.S. President Donald Trump displays an executive order on artificial intelligence he signed at the "Winning the AI Race" AI Summit at the Andrew W. Mellon Auditorium in Washington, D.C., on July 23.
OpenAI CEO Sam Altman said AI will change education, and he doubled down on his previous sentiment that college isn’t the best path for everyone. Altman noted education is “going to feel very different” in perhaps 18 years, when a new generation will never have known a world without AI. However, the CEO claimed education and human jobs won’t go away, but will merely evolve.
OpenAI CEO Sam Altman is so skeptical of college he doesn’t think his own kid will attend.
Having dropped out himself—from Stanford University in 2005—the now-billionaire has often advised young people to look beyond a college education and not automatically follow the traditional path. In previous comments, Altman has downplayed his own decision to drop out, saying he always had the option to return if things didn’t work out.
Dating back more than a decade, Altman has long cautioned that young people shouldn’t go to college without dedicating themselves to worthwhile projects and connecting with ambitious people.
“Most people think about risk the wrong way—for example, staying in college seems like a non-risky path. However, getting nothing done for four of your most productive years is actually pretty risky,” he wrote in a blog post in 2013.
In an interview on the This Past Weekend podcast with comedian Theo Von published Thursday, Altman expanded on his thoughts, claiming his kid would “probably not” go to college.
In a world where young people grow up with advanced new technology such as AI, Altman noted, future kids, including his own, will never be smarter than AI and will never know a world where products and services aren’t smarter than them. This changes the game for education, he said.
“In that world, education is going to feel very different. I already think college is, like, maybe not working great for most people, but I think if you fast-forward 18 years it’s going to look like a very very different thing,” he said.
While Altman told Von he had “deep worries” about technology and how it is affecting kids and their development, especially the “dopamine hit” of short-form video, he noted the real challenge with advancing AI is whether adults will be able to catch up.
“I actually think the kids will be fine; I’m worried about the parents. If you look at the history of the world when there’s a new technology—people that grow up with it, they’re always fluent. They always figure out what to do. They always learn the new kinds of jobs. But if you’re like a 50-year-old and you have to kind of learn how to do things in a very different way, that doesn’t always work,” he said.
Altman clarified that the advent of new technology will likely eliminate some jobs, but that many more will evolve rather than disappear. Just as some claimed when Google first came online when he was in junior high, some now claim education may become useless thanks to AI.
Altman doesn’t buy into this idea. Rather, he points to new tech as yet another tool that helps people think better, come up with better ideas, and do new things.
“I’m sure the same thing happened with the calculator before, and now this is just a new tool that exists in the tool chain,” he said.
However, Altman cautioned, it’s impossible to know how education and jobs will evolve and which roles will exist in the future. He noted his own job as CEO of an AI company would likely have been unimaginable in the past. An AI CEO may even be on the horizon for OpenAI, he said, in which case his own job would have to change.
Altman isn’t a doomer about the future of work, though, because of the innate social nature of humans and their seemingly limitless capacity for creativity, purpose-seeking, and improving their social status.
Just as people from the time of the Industrial Revolution might view modern humans as leading a relatively easy existence, we may well think the same of people living 100 years from now. Either way, he said, he sees a bright future ahead.
“I think that’s beautiful. I think it’s great that those people in the past think we have it so easy. I think it’s great that we think those people in the future have it so easy,” Altman said. “That is the beautiful story of us all contributing to human progress and everybody’s lives getting better and better.”
Google’s AI Overviews are fundamentally changing how users interact with search results, according to new data from Pew Research Center. Just 8% of users who encountered an AI summary clicked on a traditional link — half the rate of those who did not. The shift could pose a major threat to publishers and content creators reliant on organic search traffic.
It’s official: Google’s AI Overviews are eating search.
Ever since Google first debuted the AI-generated search summaries, web creators have feared the overviews will siphon precious clicks and upend a search experience publishers have relied upon for years. Now, it seems they have their proof.
According to a new study from the Pew Research Center, Google users who are met with an AI summary are not only less likely to click through to other websites but are also more likely to end their browsing session entirely.
Researchers found that just 8% of users who were presented with Google’s AI-generated overviews clicked on a traditional search result link; users who did not encounter an AI summary clicked on a search result nearly twice as often.
Just over a quarter of searches that produced an AI summary were closed without users clicking through to any links, compared with 16% of pages with only traditional search results.
The summaries are also becoming more common. According to Pew, 18% of all the Google searches in the March 2025 study, or nearly one in five, produced an AI summary.
AI’s search revolution
It’s easy to see why the summaries are popular. Apart from a few minor user-experience tweaks, search has remained largely untouched since its inception. Until AI-powered search entered the scene, users were presented with a list of links, ranked by an ever-changing Google algorithm, in response to what is normally a natural-language query.
After the launch of AI-powered chatbots such as ChatGPT, the logical jump to the technology’s search potential was so obvious that Google declared a “code red” internally and began pouring resources into its AI development.
Fast forward three years, and Google’s AI Overviews are facing off against AI-powered search competitors like Perplexity and ChatGPT Search.
More often than not, users who come to search engines are looking for an answer to a question. AI allows for a new, cleaner way to provide these answers, one that utilizes natural language and speeds up the search process for users.
But the trade-off for this improved experience is the lack of click-through to other websites, potentially resulting in a catastrophic decline in website traffic, especially for sites that rely on informational content or rank highly for keywords.
The study found that Google is far more likely to serve up an AI Overview in response to longer, more natural-sounding queries or questions. Just 8% of one- or two-word searches produced an AI-generated summary. That figure jumps to 53% for searches containing ten words or more.
Queries phrased as full sentences, especially those that include both a noun and a verb, triggered summaries 36% of the time. Meanwhile, question-based searches were the most likely to invoke an AI response, with 60% of queries beginning with “who,” “what,” “when,” or “why” generating an Overview.
Common sources
While the overviews do link out and cite web sources, more often than not, the summaries lean heavily on a trio of Wikipedia, YouTube, and Reddit.
Collectively, these three platforms accounted for 15% of all citations in AI Overviews, almost mirroring the three sites’ 17% share of links in standard search results. Researchers found that AI Overviews were more likely to include links to Wikipedia and government websites, while standard search results featured YouTube links more prominently. Government sources represented 6% of AI-linked content, compared to just 2% in traditional results.
In another potentially ominous sign for publishers hoping to capitalize on the AI revolution, news organizations’ share of links remains largely flat in both formats, at just 5% in AI Overviews and standard search results alike.
Things are likely to get worse for web creators before they get better as Google leans further into AI in its search business. In May, Google unveiled a new “AI Mode” search feature that intends to provide more direct answers to user questions. The answers provided by the new feature are similar to AI Overviews, blending AI-generated responses with summarized and linked content from around the internet.
Google has continually brushed off concerns that the overviews could negatively affect web traffic for creators. The company did not immediately respond to Fortune’s request for comment on the Pew survey.
In a stark and urgent warning to the nation’s financial stewards, OpenAI CEO Sam Altman declared on Tuesday that artificial intelligence is now so adept at mimicking human voices it could spark a global “fraud crisis” in banking “very, very soon.” His remarks, delivered at a Federal Reserve conference in Washington, underscored how people will have to change fundamental things about the way they interact because of the relentless pace of advancements in this technology.
Altman addressed hundreds of regulators and banking executives while sitting down for an interview with Fed governor Michelle Bowman, the vice chair for supervision. Bowman, who has emerged as a contender to potentially succeed Fed chair Jerome Powell, prompted Altman to talk about the technology he helped pioneer and concerns about fraud.
Altman immediately brought up how powerful AI models are now capable of perfectly reproducing anyone’s voice based on just a few short audio samples and issued his warning: “A thing that terrifies me is apparently there are still some financial institutions that will accept the voiceprint as authentication for you to move a lot of money or do something else,” Altman told the audience. “That is a crazy thing to still be doing … AI has fully defeated that.”
The widespread adoption of voice authentication
To Altman’s point, banks have, for more than a decade, relied on voice authentication: Clients repeat a custom phrase, their “voiceprint,” to access accounts. But as generative AI has advanced, so have the tools available to would-be fraudsters. Altman described a near future where attackers will be able to call a bank, pass every test, and move money freely, all by simulating a customer’s voice.
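To see why that check is fragile, consider a toy sketch of threshold-based voice verification, written under stated assumptions: embed below stands in for a real speaker-embedding model, and nothing here reflects any bank’s actual system. The point is that the test reduces to a single similarity score, which a sufficiently good synthetic clone of the customer’s voice clears just as a live speaker does.

```python
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    # Placeholder: a real system maps audio to a fixed-size speaker embedding.
    # Here we derive a pseudo-embedding deterministically from the raw bytes.
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    return rng.standard_normal(128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_audio: np.ndarray, attempt_audio: np.ndarray,
           threshold: float = 0.8) -> bool:
    # The entire scheme reduces to one similarity test. Nothing in it
    # distinguishes a live speaker from a convincing synthetic clone.
    return cosine_similarity(embed(enrolled_audio), embed(attempt_audio)) >= threshold

enrolled = np.ones(16000)                 # stand-in for the enrollment recording
print(verify(enrolled, np.ones(16000)))   # True: identical audio, same embedding
print(verify(enrolled, np.zeros(16000)))  # almost certainly False: unrelated audio
```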
“Just because we are not releasing the technology does not mean it does not exist,” Altman said of the Pandora’s box that AI represents. “Some bad actor is going to release it—this is not a super difficult thing to do.”
The OpenAI chief described the scenario that keeps him up at night: a large-scale, coordinated attack where AI-generated voices rapidly defeat outdated security measures across the world’s biggest banks.
The threat isn’t limited to voice. Altman gave a glimpse into the next frontier: “video clones”—AI capable of mimicking an individual’s appearance and speech—heightening the stakes for personal security and institutional trust.
“Right now it is a voice call. Soon it is going to be a video FaceTime. It will be indistinguishable from reality,” he said.
A potential partner in Washington
Altman’s warning didn’t fall on deaf ears. Bowman agreed that collaboration between regulators and tech leaders going forward will be vital. “That might be something we can think about partnering on,” she said, signaling the central bank’s readiness to take action and eagerness to work with OpenAI.
OpenAI, for its part, is planning to expand its physical presence in Washington, D.C., aiming to facilitate more direct collaboration with regulators and policymakers, including the Federal Reserve. The company’s new D.C. office will host policy workshops and serve as a venue for hands-on collaboration and training related to AI deployments in government and regulated industries, a spokesperson for OpenAI told CNBC the day before Altman’s panel with Bowman.
The Fed frequently organizes similar roundtable discussions and panels with executives from tech, fintech, and financial institutions to explore the adoption and impact of AI, especially generative AI, in banking and broader economic sectors. The central bank also encourages partnerships between banks and fintechs, with the latter working to integrate advanced AI tools into regulated banking activity.
For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.
Sam Altman, chief executive officer of OpenAI Inc., speaks during the Federal Reserve Integrated Review of the Capital Framework for Large Banks Conference in Washington, DC, US, on Tuesday, July 22, 2025.
The remarks, which came during a keynote speech at a summit hosted by the All-In Podcast, follow President Donald Trump’s newly released AI Action Plan.