At least $1 billion worth of Nvidia's advanced artificial intelligence processors were shipped to China in the three months after Donald Trump tightened chip export controls, exposing the limits of Washington's efforts to restrain Beijing's high-tech ambitions.
A Financial Times analysis of dozens of sales contracts, company filings, and multiple people with direct knowledge of the deals reveals that Nvidia's B200 has become the most sought-after, and most widely available, chip in a rampant Chinese black market for American semiconductors.
The processor is widely used by US powerhouses such as OpenAI, Google, and Meta to train their latest AI systems, but banned for sale to China.
On Tuesday, OpenAI announced a partnership with Oracle to develop 4.5 gigawatts of additional data center capacity for its Stargate AI infrastructure platform in the US. The expansion, which TechCrunch reports is part of a $30 billion-per-year deal between OpenAI and Oracle, will reportedly bring OpenAI's total Stargate capacity under development to over 5 gigawatts.
The data center has taken root in Abilene, Texas, a city of 127,000 located 150 miles west of Fort Worth. The city, which serves as the commercial hub of a 19-county region known as the "Big Country," offers an existing tech employment ecosystem, including Dyess Air Force Base and three universities. Abilene's economy has evolved over time from its agricultural and livestock roots to embrace the technology and manufacturing sectors.
"We have signed a deal for an additional 4.5 gigawatts of capacity with oracle as part of stargate. easy to throw around numbers, but this is a gigantic infrastructure project," wrote OpenAI CEO Sam Altman on X. "We are planning to significantly expand the ambitions of stargate past the $500 billion commitment we announced in January."
Commerce Secretary Howard Lutnick echoed Nvidia CEO Jensen Huang's view of why a US company should sell chips to China.
Andrew Harnik/Getty Images
The Trump administration is fine with Nvidia selling chips in China.
Commerce Secretary Howard Lutnick says the best chips will stay within the US.
Nvidia announced that it has received assurances it can resume selling its H20 chip in China.
The Trump White House says it's content to allow Nvidia to tap into the lucrative Chinese market.
"We don't sell them our best stuff, not our second best stuff, not even our third best," Commerce Secretary Howard Lutnick said on CNBC Tuesday afternoon. "I think fourth best is where we have come out that we're cool."
Nvidia announced on Monday that the Trump administration has signaled it will allow the company to sell its China-specific H20 chip once more. The news sent shares of the world's most valuable company, which eclipsed $4 trillion in market cap last week, even higher.
Nvidia's H20 was designed to be technologically inferior. As Lutnick said, the company also sells three other chips that far surpass the H20's power. Nvidia is already preparing its transition from Blackwell (its most powerful chip) to Blackwell Ultra and has plans for its next superchip, "Vera Rubin."
CEO Jensen Huang has pushed to sell the company's prized chips to China. Before the news, Nvidia said it had lost $8 billion on unshipped orders. The announcement came after Huang met with President Donald Trump at the White House last week.
Lutnick said that the administration shares Huang's view that cutting China off completely from the chips needed to power artificial intelligence advancements won't starve China's AI industry.
"So the idea is the Chinese are more than capable of building their own, right? So you want to keep one step ahead of what they can build so they keep buying our chips, because, remember, developers are the key to technology," Lutnick said.
In the end, Lutnick said, it's better if China becomes reliant on the US for chips.
"So you want to sell the Chinese enough that their developers get addicted to the American technology stack," he said. "And that's the thinking. Donald Trump is on it."
Chinese firms have begun rushing to order Nvidia's H20 AI chips as the company plans to resume sales to mainland China, Reuters reports. The chip giant expects to receive US government licenses soon so that it can restart shipments of the restricted processors just days after CEO Jensen Huang met with President Donald Trump, potentially generating $15 billion to $20 billion in additional revenue this year.
Nvidia said in a statement that it is filing applications with the US government to resume H20 sales and that "the US government has assured Nvidia that licenses will be granted, and Nvidia hopes to start deliveries soon."
Since the launch of ChatGPT in 2022, Nvidia's financial trajectory has been linked to the demand for specialized hardware capable of executing AI models with maximum efficiency. Nvidia designed its data center GPU to perform the massive parallel computations required by neural networks, processing countless matrix operations simultaneously.
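To make the idea concrete, here is a tiny, purely illustrative Python sketch of the kind of matrix operation described above (this is not Nvidia code; a GPU performs these multiply-accumulates across thousands of cores at once, while this toy version computes them one at a time):

```python
# Illustrative only: a tiny matrix multiply, the core operation GPUs
# parallelize when running neural networks. Each output element is an
# independent dot product, which is why thousands of GPU cores can
# compute them simultaneously.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

x = [[1, 2], [3, 4]]   # a 2x2 "activation" matrix
w = [[5, 6], [7, 8]]   # a 2x2 "weight" matrix
print(matmul(x, w))    # [[19, 22], [43, 50]]
```

A real model repeats this operation billions of times over far larger matrices, which is exactly the workload data center GPUs are built for.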
Nvidia is recommending a mitigation for customers of one of its GPU product lines that will degrade performance by up to 10 percent in a bid to protect users from exploits that could let hackers sabotage work projects and possibly cause other compromises.
The move comes in response to an attack a team of academic researchers demonstrated against Nvidia's RTX A6000, a widely used GPU for high-performance computing that's available from many cloud services. A vulnerability the researchers discovered opens the GPU to Rowhammer, a class of attack that exploits physical weakness in DRAM chip modules that store data.
Rowhammer allows hackers to change or corrupt data stored in memory by rapidly and repeatedly accessing, or hammering, a physical row of memory cells. By repeatedly hammering carefully chosen rows, the attack induces bit flips in nearby rows, meaning a digital zero is converted to a one or vice versa. Until now, Rowhammer attacks have been demonstrated only against memory chips for CPUs, used for general computing tasks.
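To see why a single flipped bit matters, consider this toy Python sketch (purely illustrative; a real Rowhammer attack induces the flip physically in DRAM, with no software write involved):

```python
# Illustrative only: what one Rowhammer-style bit flip does to stored data.
# Real attacks cause the flip electrically in a neighboring DRAM row;
# this function just shows the effect on the value.
def flip_bit(value: int, bit: int) -> int:
    """Toggle one bit: a 0 becomes a 1, or a 1 becomes a 0."""
    return value ^ (1 << bit)

stored = 0b01000001                  # the byte for ASCII 'A' (65)
corrupted = flip_bit(stored, 1)      # flip bit 1
print(chr(stored), chr(corrupted))   # A C
```

One flipped bit silently turns 'A' into 'C'; in a cryptographic key, a page table, or an AI model's weights, a single flip can be enough to compromise the system.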
As the AI chipmaker rockets past a $4 trillion valuation, CEO Jensen Huang lays out a stunning vision of a future with robot assistants and revived American factories, but admits the transition won't be painless.
On Wednesday, Nvidia became the first company in history to reach $4 trillion market valuation as shares rose more than 2 percent, reports CNBC. The GPU maker's stock has climbed 22 percent since the start of 2025, continuing a trend driven by demand for AI hardware following ChatGPT's late 2022 launch.
The milestone marks the highest market cap ever recorded for a publicly traded company, surpassing Apple's previous record of $3.8 trillion set in December. Nvidia first crossed $2 trillion in February 2024 and reached $3 trillion just four months later in June. The $4 trillion valuation represents a market capitalization larger than the GDP of most countries.
As we explained in 2023, Nvidia's continued success has been intimately tied to growth in demand for hardware that runs AI models as capably and efficiently as possible. The company's data center GPUs excel at performing billions of matrix multiplications necessary to train and run neural networks due to their parallel architecture; hardware architectures that originated as video game graphics accelerators now power the generative AI boom.
Despite the company dominating headlines and being at the forefront of many conversations around AI, some people still don't know how to pronounce its name.
Luckily, Nvidia cleared the confusion on its website and explained the proper pronunciation. We're sorry to tell you, but if you're one of the people calling the tech giant "NUH-vid-ee-uh," you've been saying it wrong.
The proper pronunciation of Nvidia is "en-VID-ee-uh," according to the company.
A screenshot of Nvidia's brand guidelines that detail the correct pronunciation of the company's name.
Nvidia
Founded in 1993 by CEO Jensen Huang, Chris Malachowsky, and Curtis Priem, the chipmaker actually got its name from its lack of one, Fortune previously reported. While the trio focused on developing the company, they put its title on the back burner and named files "NV" as an abbreviation for the "next version."
The three eventually decided on NVision before realizing the name was taken by a toilet-paper manufacturing company, The New Yorker reported. Finally, Huang suggested the chipmaker's current name, a spinoff of the word "invidia," which means envy in Latin, the report said.
Nvidia founder, president and CEO Jensen Huang displays his tattoo in September 2010.
Robert Galbraith/Reuters
Huang and his cofounders dreamed of creating a product that would make rivals "green with envy," Nvidia cofounder Priem said. Given Nvidia's nearly $3.9 trillion market cap and the long line of tech giants and startups angling for its latest AI chips, that vision seems to have come to fruition.
Years ago, to celebrate Nvidia's stock price hitting $100, Huang got the company's logo tattooed on his arm, an experience he later said "hurts way more than anybody tells you."
Over the last two years, Nvidia has used its ballooning fortunes to invest in over 80 AI startups. Here are the semiconductor giant's largest investments.
Mistral AI partners with Nvidia to launch European AI infrastructure platform, challenging US cloud giants while unveiling breakthrough reasoning models that rival OpenAI.
AI is the "great equalizer," Nvidia CEO Jensen Huang said at London Tech Week.
CARL COURT/POOL/AFP via Getty Images
Jensen Huang said programming AI is similar to the way "you program a person."
Speaking at London Tech Week, the Nvidia CEO said all anyone had to do to program AI was "just ask nicely."
He called AI "the great equalizer," saying it allows anyone to program computers using plain language.
Nvidia CEO Jensen Huang has said that programming AI is similar to "the way you program a person" and that "human" is the new coders' language.
"The thing that's really, really quite amazing is the way you program an AI is like the way you program a person," Huang told London Tech Week on Monday.
Huang shared an example, saying, "You say, 'You are an incredible poet. You are deeply steeped in Shakespeare, and I would like you to write a poem to describe today's keynote.' Without very much effort, this AI would help you generate such a wonderful poem.
"And when it answers, you could say, 'I feel like you could do even better.' And it will go off and think about it and it will come back and say, 'In fact, I can do better.' And it does do a better job."
Huang said that in the past, "technology was hard to use" and that to access computer science, "we had to learn programming languages, architect systems, and design very complicated computers.
"But now, all of a sudden, there's a new programming language. This new programming language is called human."
"Most people don't know C++, very few people know Python, and everybody, as you know, knows human."
Huang called AI "the great equalizer" for making technology accessible to everyone and called the shift "transformative."
"This way of interacting with computers, I think, is something that almost anybody can do," he said.
"The way you program a computer today is to ask the computer to do something for you, even write a program, generate images, write a poem โ just ask it nicely," Huang added.
At the World Government Summit in Dubai last year, Huang suggested the tech sector should focus less on coding and more on using AI as a tool across fields like farming, biology, and education.
"It is our job to create computing technology such that nobody has to program. And that the programming language is human, everybody in the world is now a programmer. This is the miracle of artificial intelligence," Huang said at the time.
When it comes to Nvidia's GeForce RTX 5060 graphics card, the GPU itself is less interesting than the storm Nvidia stirred up by trying to earn it better reviews. If you don't follow the twists and turns of graphics card launch metanarratives, allow me to recap the company's behavior for you.
Though the RTX 5060 launched on May 19, Nvidia and its partners were uncharacteristically slow to ship graphics cards to reviewers. For outlets that received pre-launch hardware, Nvidia didn't provide the pre-launch drivers that it usually sends out so that reviewers could run their own tests on the cards, informing reviewers on a call that drivers would be available to them and the public on the 19th.
Except! Nvidia did offer advance drivers to a handful of publications on the condition that they run a few benchmarks that had been pre-selected by Nvidia and that they only report numbers from tests performed with the 50 series' new DLSS Multi-Frame Generation (MFG) setting enabled.
Nvidia products, such as GPUs and software, are driving the AI boom.
Brittany Hosea-Small/REUTERS
Nvidia products, such as data center GPUs, are crucial for AI, making it the leader in the industry.
Nvidia's CUDA software stack supports GPU programming, enhancing its competitive edge.
Nvidia's automotive and consumer tech ventures expand its influence beyond data centers.
Nvidia products are at the heart of the boom in artificial intelligence.
Though Nvidia started in gaming and designs semiconductors that touch many diverse industries, the products it designs to go inside high-powered data centers are the most important to the company today, and to the future of AI.
Graphics processing units, designed to be clustered together in dozens of racks inside massive temperature-controlled warehouses, made Nvidia a household name. They also got Nvidia into the Dow Jones Industrial Average, and put it in the position to control the flow of a crucial but finite resource: artificial intelligence.
Nvidia's first generation of chips for the data center launched in 2017. That first generation was called Volta. Along with the Volta chips, Nvidia designed DGX (which stands for Deep GPU Xceleration) systems: the full stack of technologies and equipment necessary to bring GPUs online in a data center and make them work to the best of their ability. DGX was the first of its kind. As AI has become more mainstream, other companies such as Dell and Supermicro have put forth designs for running GPUs at scale in a data center too.
Ampere, Hopper, Blackwell, and Beyond
The next GPU generation designed for the data center, Ampere, which launched in 2020, can still be found in data centers today.
Though Ampere-generation GPUs are slowly fading into the background in favor of more powerful models, this generation supported the first iteration of Nvidia's Omniverse, a simulation platform the company positions as key to a future where robots work alongside humans on physical tasks.
The Hopper generation of GPUs is the one that has enabled much of the latest innovation in large language models and broader AI.
Nvidia's Hopper generation of chips, which include the H100 and the H200, debuted in 2022 and remain in high demand. The H200 model in particular has added capacity that has proven increasingly important as AI models grow in size, complexity, and capability.
The most powerful chip architecture Nvidia has launched to date is Blackwell. Jensen Huang announced the step change in accelerated computing in 2024 at GTC, Nvidia's developers conference, and though the rollout has been rocky, racks of Blackwells are now available from cloud providers.
Nvidia unveiled its Blackwell chip at the GTC conference in 2024.
Andrej Sokolow/picture alliance via Getty Images
Inside the data center, Nvidia does have competitors, even though it has the vast majority of the market for AI computing. Those competitors include AMD, Intel, Huawei, custom AI chips, and a cavalcade of startups.
The company has already teased that the next generation will be called "Blackwell Ultra," followed by "Rubin" in 2026. Nvidia also plans to launch a new CPU, or traditional processor, alongside Rubin, which it hasn't done since 2022. CPUs work alongside GPUs to triage tasks and direct the firepower that is parallel computing.
Nvidia is a software company, too
None of this high-powered computing is possible without software, and Nvidia recognized this need sooner than any other company.
Development of Nvidia's tentpole software stack, CUDA, or Compute Unified Device Architecture, began as early as 2006. CUDA is software that allows developers to use widely known coding languages to program GPUs; because these chips require layers of code to work, relatively few developers have the skills needed to program them directly.
Still, "CUDA developer" is a skill set, and there are millions who claim this ability, according to Nvidia.
When GPUs started going into data centers, CUDA was ready, which is why it's often touted as the basis for Nvidia's competitive moat.
Within CUDA are dozens of libraries that help developers use GPUs in specific fields such as medical imaging, data science, or weather analytics.
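The core idea CUDA exposes can be sketched in plain Python (this is not actual CUDA code, just an illustration of the data-parallel model): a developer writes one small function, the "kernel," and the hardware applies it to every element of an array at once.

```python
# Illustrative sketch of the data-parallel model CUDA exposes: one
# "kernel" function runs once per array index. On a GPU, each index
# would be handled by its own hardware thread; here a thread pool
# stands in for that parallelism.
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i, a, x, y, out):
    # Classic SAXPY: out[i] = a * x[i] + y[i], one element per "thread".
    out[i] = a * x[i] + y[i]

n = 8
x = list(range(n))
y = [10.0] * n
out = [0.0] * n
with ThreadPoolExecutor() as pool:
    for i in range(n):
        pool.submit(saxpy_kernel, i, 2.0, x, y, out)
print(out)  # [10.0, 12.0, 14.0, 16.0, 18.0, 20.0, 22.0, 24.0]
```

Because each output element is independent, the work scales across however many cores are available, which is the property CUDA's libraries exploit in fields from medical imaging to weather analytics.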
Nvidia began at home
Just two years after Nvidia's founding, the company released its first graphics card in 1995. For more than a decade, the chips mostly resided in homes and offices โ used by gamers and graphics professionals.
The current generation includes the GeForce RTX 5090 and 5080, which were released in May 2025. The RTX 4090, 4080, 4070, and 4060 were released in 2022 and 2023. In gaming, GPUs enabled the more sophisticated shadows, textures, and lighting that make games hyperrealistic.
In addition to consumer workstations, Nvidia partners with device-makers like Apple and ASUS to produce laptops and personal computers. Though gaming is now a minority of the company's revenue, the business continues to grow.
Nvidia has also made new efforts to enable high-powered computing at home for the machine-learning obsessed. It launched Project DIGITS, a personal-sized supercomputer capable of working with some of the largest large language models.
Nvidia in the car
Nvidia is angling to be a primary player in a future where self-driving cars are the norm, but the company has also been in the automotive semiconductor game for many years.
Nvidia first launched its DRIVE PX, for developing autopilot capabilities for vehicles, in 2015.
Kim Kulish/Corbis via Getty Images
It launched Nvidia DRIVE, a platform for autonomous vehicle development, in 2015, and over time it developed or acquired technologies for mapping, driver assist, and driver monitoring.
The company designs various chips for all of these functions in partnership with MediaTek and Foxconn. Nvidia's automotive customers include Toyota, Uber, and Hyundai.