President Donald Trump revealed he once considered breaking up AI darling Nvidia before learning more about its CEO, Jensen Huang, and the company’s market dominance. The president made the comments during an AI summit hosted in Washington on Wednesday. Nvidia recently became the first company to reach a $4 trillion market valuation, driven by its near-monopoly in AI chip technology.
U.S. President Donald Trump said he considered breaking up AI darling Nvidia before learning more about the chipmaker and its CEO, Jensen Huang.
“I said, ‘Look, we’ll break this guy up,’ before I learned the facts of life,” Trump said of Huang during a Wednesday speech about his new AI Action Plan. The U.S. president then appeared to recount an earlier conversation with an advisor about Nvidia’s market share, its CEO, and potentially breaking up the company.
“I said, who the hell is he? What’s his name…What the hell is Nvidia? I’ve never heard of it before,” the president said of the world’s most valuable tech company.
“I figured we could go in and we could sort of break them up a little bit, get them a little competition, and I found out it’s not easy in that business… Then I got to know Jensen and now I see why,” he said, inviting Huang, who was sitting in the audience, to stand up.
The president made the comments during an AI summit hosted in Washington on Wednesday, but it’s unclear when the original conversation about potentially breaking up the company took place.
Representatives for Nvidia did not immediately respond to Fortune’s request for comment.
Huang’s relationship with Trump
Huang scored a win for Nvidia from the U.S. president earlier this month.
Following a meeting between Huang and Trump at the White House, the administration lifted its own earlier restrictions on exports of Nvidia’s H20 AI chips to China, allowing the company to resume selling the chips in the lucrative market.
Per the New York Times, Huang engaged in months of lobbying for the policy change, meeting with Trump, testifying before Congress, and working closely with White House allies like AI adviser David Sacks. The CEO argued that restricting chip sales would hurt U.S. tech leadership by allowing Chinese rivals to dominate, and emphasized that Nvidia’s chips were crucial for global AI standards.
The tech giant has been on something of a winning streak of late. Earlier this month, the company made history when it became the first in the world to reach a market value of $4 trillion. The company’s stock has soared over the past five years, with a nearly 18% gain registered year-to-date. Nvidia’s supercharged growth is driven by the AI boom and the company’s near-monopoly on AI chips. Its graphics processing units (GPUs) are used by all major tech companies to develop and maintain AI models.
The company’s dominance in AI hardware has made it a key player in global tech geopolitics, particularly as governments scrutinize the export of advanced semiconductor technology amid rising U.S.-China tensions.
Google’s AI Overviews are fundamentally changing how users interact with search results, according to new data from Pew Research Center. Just 8% of users who encountered an AI summary clicked on a traditional link — half the rate of those who did not. The shift could pose a major threat to publishers and content creators reliant on organic search traffic.
It’s official: Google’s AI Overviews are eating search.
Ever since Google first debuted the AI-generated search summaries, web creators have feared the overviews will siphon precious clicks and upend a search experience publishers have relied upon for years. Now, it seems they have their proof.
According to a new study from the Pew Research Center, Google users who are met with an AI summary are not only less likely to click through to other websites but are also more likely to end their browsing session entirely.
Researchers found that just 8% of users who were presented with Google’s AI-generated overviews clicked on a traditional search result link; users who did not encounter an AI summary clicked on a result nearly twice as often.
Just over a quarter of searches that produced an AI summary ended without users clicking through to any links, compared with 16% of searches that returned only traditional results.
The summaries are also becoming more common. According to Pew, roughly one in five Google searches in the study (18%) produced an AI summary in March 2025.
AI’s search revolution
It’s easy to see why the summaries are popular. Apart from a few minor user experience tweaks, search has remained largely untouched since its conception. Up until AI-powered search entered the scene, users had been presented with a list of links, ranked by an ever-changing Google algorithm, in response to what is normally a natural-language query.
After the launch of AI-powered chatbots such as ChatGPT, the technology’s potential to reshape search was so obvious that Google declared a “code red” internally and began pouring resources into its own AI development.
Fast forward three years, and Google’s AI Overviews are facing off against AI-powered search competitors like Perplexity and ChatGPT Search.
More often than not, users who come to search engines are looking for an answer to a question. AI allows for a new, cleaner way to provide these answers, one that utilizes natural language and speeds up the search process for users.
But the trade-off for this improved experience is the lack of click-through to other websites, potentially resulting in a catastrophic decline in website traffic, especially for sites that rely on informational content or rank highly for keywords.
The study found that Google is far more likely to serve up an AI Overview in response to longer, more natural-sounding queries or questions. Just 8% of one- or two-word searches produced an AI-generated summary. That figure jumps to 53% for searches containing ten words or more.
Queries phrased as full sentences, especially those that include both a noun and a verb, triggered summaries 36% of the time. Meanwhile, question-based searches were the most likely to invoke an AI response, with 60% of queries beginning with “who,” “what,” “when,” or “why” generating an Overview.
Common sources
While the overviews do link out and cite web sources, more often than not, the summaries lean heavily on a trio of Wikipedia, YouTube, and Reddit.
Collectively, these three platforms accounted for 15% of all citations in AI Overviews, almost mirroring the three sites’ 17% share of links in standard search results. Researchers found that AI Overviews were more likely to include links to Wikipedia and government websites, while standard search results featured YouTube links more prominently. Government sources represented 6% of AI-linked content, compared to just 2% in traditional results.
In another potentially ominous sign for publishers hoping to capitalize on the AI revolution, news organizations remain largely flat in both formats, making up just 5% of links in AI Overviews and standard search results alike.
Things are likely to get worse for web creators before they get better as Google leans further into AI in its search business. In May, Google unveiled a new “AI mode” search feature intended to provide more direct answers to user questions. The answers it produces are similar to AI Overviews, blending AI-generated responses with summarized and linked content from around the internet.
Google has continually brushed off concerns that the overviews could negatively affect web traffic for creators. The company did not immediately respond to Fortune’s request for comment on the Pew survey.
A patient in London was mistakenly invited to a diabetic screening after an AI-generated medical record falsely claimed he had diabetes and suspected heart disease. The summaries, created by Anima Health’s AI tool Annie, also included fabricated details like a fake hospital address. NHS officials have described the incident as a one-off human error, but the organization is already facing scrutiny over how AI tools are used and regulated.
AI use in healthcare has the potential to save time, money, and lives. But when technology that is known to occasionally lie is introduced into patient care, it also raises serious risks.
One London-based patient recently experienced just how serious those risks can be after receiving a letter inviting him to a diabetic eye screening—a standard annual check-up for people with diabetes in the UK. The problem: He had never been diagnosed with diabetes or shown any signs of the condition.
After opening the appointment letter late one evening, the patient, a healthy man in his mid-20s, told Fortune he briefly worried that he had been unknowingly diagnosed with the condition before concluding the letter must simply be an admin error. The next day, at a pre-scheduled routine blood test, a nurse questioned the diagnosis and, when the patient confirmed he wasn’t diabetic, the pair reviewed his medical history.
“He showed me the notes on the system, and they were AI-generated summaries. It was at that point I realized something weird was going on,” the patient, who asked for anonymity to discuss private health information, told Fortune.
After requesting and reviewing his medical records in full, the patient noticed the entry that had introduced the diabetes diagnosis was listed as a summary that had been “generated by Annie AI.” The record appeared around the same time he had attended the hospital for a severe case of tonsillitis. However, the record in question made no mention of tonsillitis. Instead, it said he had presented with chest pain and shortness of breath, attributed to a “likely angina due to coronary artery disease.” In reality, he had none of those symptoms.
The records, which were reviewed by Fortune, also noted the patient had been diagnosed with Type 2 diabetes late last year and was currently on a series of medications. It also included dosage and administration details for the drugs. However, none of these details were accurate, according to the patient and several other medical records reviewed by Fortune.
‘Health Hospital’ in ‘Health City’
Even stranger, the record attributed the address of the medical document it appeared to be processing to a fictitious “Health Hospital” located on “456 Care Road” in “Health City.” The address also included an invented postcode.
A representative for the NHS, Dr. Matthew Noble, told Fortune the GP practice responsible for the oversight employs a “limited use of supervised AI” and the error was a “one-off case of human error.” He said that a medical summariser had initially spotted the mistake in the patient’s record but had been distracted and “inadvertently saved the original version rather than the updated version [they] had been working on.”
However, the fictitious AI-generated record appears to have had downstream consequences, with the patient’s invitation to a diabetic eye screening appointment presumably based on the erroneous summary.
While most AI tools used in healthcare are subject to strict human oversight, another NHS worker told Fortune that the leap from the original symptoms (tonsillitis) to what was returned (likely angina due to coronary artery disease) raised alarm bells.
“These human error mistakes are fairly inevitable if you have an AI system producing completely inaccurate summaries,” the NHS employee said. “Many elderly or less literate patients may not even know there was an issue.”
The company behind the technology, Anima Health, did not respond to Fortune’s questions about the issue. However, Dr. Noble said, “Anima is an NHS-approved document management system that assists practice staff in processing incoming documents and actioning any necessary tasks.”
“No documents are ever processed by AI, Anima only suggests codes and a summary to a human reviewer in order to improve safety and efficiency. Each and every document requires review by a human before being actioned and filed,” he added.
AI’s uneasy rollout in the health sector
The incident is somewhat emblematic of the growing pains around AI’s rollout in healthcare. As hospitals and GP practices race to adopt automation tools that promise to ease workloads and reduce costs, they’re also grappling with the challenge of integrating still-maturing technology into high-stakes environments.
The pressure to innovate and potentially save lives with the technology is high, but so is the need for rigorous oversight, especially as tools once seen as “assistive” begin influencing real patient care.
Anima Health promises healthcare professionals can “save hours per day through automation.” The company offers services including automatically generating “the patient communications, clinical notes, admin requests, and paperwork that doctors deal with daily.”
Anima’s AI tool, Annie, is registered with the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) as a Class I medical device. This places it in the same low-risk category as examination lights or bandages: tools designed to assist clinicians rather than automate medical decisions.
AI tools in this category require their outputs to be reviewed by a clinician before action is taken or items are entered into the patient record. In the case of the misdiagnosed patient, however, the practice appears to have failed to correct the factual errors before they were added to his records.
The incident comes amid increased scrutiny within the UK’s health service of the use and categorization of AI technology. Last month, bosses for the health service warned GPs and hospitals that some current uses of AI software could breach data protection rules and put patients at risk.
In an email first reported by Sky News and confirmed by Fortune, NHS England warned that unapproved AI software falling short of minimum standards could put patients at risk of harm. The letter specifically addressed the use of Ambient Voice Technology, or “AVT,” by some doctors.
The main issue with AI transcribing or summarizing information is the manipulation of the original text, Brendan Delaney, professor of Medical Informatics and Decision Making at Imperial College London and a PT General Practitioner, told Fortune.
“Rather than just simply passively recording, it gives it a medical device purpose,” Delaney said. The recent guidance issued by the NHS, however, has meant that some companies and practices are playing regulatory catch-up.
“Most of the devices now that were in common use now have a Class One [categorization],” Delaney said. “I know at least one, but probably many others are now scrambling to try and start their Class 2a, because they ought to have that.”
Whether a device should be defined as a Class 2a medical device essentially depends on its intended purpose and the level of clinical risk. Under U.K. medical device rules, if the tool’s output is relied upon to inform care decisions, it could require reclassification as a Class 2a medical device, a category subject to stricter regulatory controls.
The U.K. government is embracing the possibilities of AI in healthcare, hoping it can boost the country’s strained national health system.
In a recent “10-Year Health Plan,” the British government said it aims to make the NHS the most AI-enabled care system in the world, using the tech to reduce admin burden, support preventive care, and empower patients through technology.
But rolling out this technology in a way that meets current rules within the organization is complex. Even the U.K.’s health minister appeared to suggest earlier this year that some doctors may be pushing the limits when it comes to integrating AI technology in patient care.
“I’ve heard anecdotally down the pub, genuinely down the pub, that some clinicians are getting ahead of the game and are already using ambient AI to kind of record notes and things, even where their practice or their trust haven’t yet caught up with them,” Wes Streeting said, in comments reported by Sky News.
“Now, lots of issues there—not encouraging it—but it does tell me that contrary to this, ‘Oh, people don’t want to change, staff are very happy and they are really resistant to change’, it’s the opposite. People are crying out for this stuff,” he added.
AI certainly has huge potential to dramatically improve the speed, accuracy, and accessibility of care, especially in areas like diagnostics, medical recordkeeping, and reaching patients in under-resourced or remote settings. However, walking the line between the technology’s potential and its risks is difficult in sectors like healthcare, which handle sensitive data and where errors can cause significant harm.
Reflecting on his experience, the patient told Fortune: “In general, I think we should be using AI tools to support the NHS. It has massive potential to save money and time. However, LLMs are still really experimental, so they should be used with stringent oversight. I would hate this to be used as an excuse to not pursue innovation but instead should be used to highlight where caution and oversight are needed.”
"He showed me the notes on the system, and they were AI-generated summaries. It was at that point I realized something weird was going on," the patient said.