Nvidia’s reported move to take a roughly $30bn equity stake in OpenAI marks a notable simplification of what had previously been framed as a far larger, capacity-linked partnership. Instead of an unfinished “up to $100bn” commitment that depended on rolling out massive compute infrastructure, the new approach looks more like a straightforward strategic investment aligned with OpenAI’s ongoing fundraising.
According to reporting on 20 Feb 2026, OpenAI is seeking a round expected to exceed $100bn, implying a valuation around $830bn. If that structure comes together, it would rank among the most consequential capital raises in modern tech, and it would further tie together the companies building frontier models and the companies selling the scarce hardware required to train and run them.
From a $100bn framework to a simpler $30bn stake
In September 2025, OpenAI and Nvidia publicly announced a letter of intent (LOI) built around infrastructure deployment at extraordinary scale: “at least 10 gigawatts” of Nvidia systems. Nvidia also said it intended to invest “up to $100bn” progressively, tied to deployment milestones rather than a single check written upfront.
The concept, as reported at the time, resembled a staged financing plan, often described as tranches (for example, “$10bn at a time”) that would be triggered as capacity came online. The LOI even included a timeline detail: the first 1 gigawatt was targeted for the second half of 2026 on the “NVIDIA Vera Rubin platform,” underscoring that the deal’s economic gravity depended on hardware roadmaps and buildout execution.
By late 2025, however, there were signs that the original framework still had moving parts. Nvidia's CFO was reported as saying the up-to-$100bn OpenAI agreement "has not been finalised," which set expectations that the LOI was directionally important but not yet a locked, bankable financing structure. The new ~$30bn stake appears to replace that earlier commitment framework, according to Reuters reporting on 20 Feb 2026, with Nvidia declining to comment.
What’s driving OpenAI’s mega-round and valuation math
Reporting on 20 Feb 2026 indicates OpenAI is seeking a round of "more than $100bn," which Reuters said implies a valuation of roughly $830bn. Those numbers are staggering even by late-stage tech standards, and they hint at a capital plan designed less like a typical growth round and more like a balance-sheet build for industrial-scale computing.
Financial Times reporting also cited OpenAI's revenue run-rate as having "recently exceeded $20bn" annualized. While revenue run-rate is not the same as audited annual revenue, it provides a directional signal that OpenAI is no longer a purely speculative R&D story; it is already monetizing at scale, likely through a mix of API usage, enterprise products, and platform distribution.
The same reporting pointed to OpenAI planning to spend roughly $600bn on computing infrastructure through 2030. That single figure helps explain why the round is so large: the frontier-model race is being shaped by capital intensity, long-lived infrastructure, and the need to secure supply chains for chips, networking, and power, plus the data center footprint to house it all.
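Taken together, the reported figures allow some illustrative back-of-envelope arithmetic. This is a sketch only: the actual deal terms, pre- vs post-money treatment, and stake sizes have not been disclosed, and the numbers below are the reported or implied headline figures.

```python
# Back-of-envelope arithmetic from the reported figures.
# All inputs are reported/implied numbers, not confirmed deal terms.
valuation_bn = 830      # implied valuation, $bn
nvidia_stake_bn = 30    # reported Nvidia investment, $bn
run_rate_bn = 20        # reported annualized revenue run-rate, $bn

# If the $30bn buys equity at the full implied valuation,
# Nvidia's ownership would be on the order of:
implied_ownership = nvidia_stake_bn / valuation_bn
print(f"Implied Nvidia ownership: {implied_ownership:.1%}")  # ~3.6%

# The implied valuation as a multiple of the current run-rate:
revenue_multiple = valuation_bn / run_rate_bn
print(f"Valuation / run-rate multiple: {revenue_multiple:.1f}x")  # ~41.5x
```

Even as a rough cut, a ~40x revenue multiple shows why the round is being read as a bet on continued hypergrowth rather than on current economics.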
Nvidia’s incentives: lock in demand, de-risk supply, shape the roadmap
For Nvidia, a $30bn stake can be interpreted as a more flexible way to secure strategic alignment without being bound to an unfinished deployment-linked financing mechanism. Equity investment can offer upside exposure to OpenAI’s platform economics while also reinforcing Nvidia’s role as the default compute substrate for training and inference.
Markets coverage has framed OpenAI’s giant fundraising as potentially positive for Nvidia because it signals sustained, expanding demand for GPUs, the central bottleneck in scaling frontier AI. Barron’s highlighted that an OpenAI mega-raise could be constructive for Nvidia as a key supplier, while other coverage noted that Nvidia and Microsoft were among the participants cited as the financing took shape.
There is also a practical, near-term loop: Reuters reporting cited a source familiar with the matter saying OpenAI is expected to use much of the fresh capital to buy Nvidia chips that power training and deployment. If true, this creates a self-reinforcing flywheel where OpenAI raises capital, spends heavily on Nvidia hardware, and (if commercialization continues to accelerate) uses the resulting capabilities to grow revenue, supporting the valuation underpinning the raise.
The round’s cast of characters: SoftBank, Amazon, MGX, Microsoft, and strategic tension
The reported financing is not just about Nvidia. Financial Times reporting described a round expected to exceed $100bn with other potential participants including SoftBank (around $30bn), Amazon (up to $50bn), MGX, Microsoft, and others. A syndicate of that scale would be less like a typical venture round and more like a geopolitically relevant capital coalition.
Each prospective investor brings a different strategic agenda. SoftBank has historically made concentrated bets on platform shifts; Amazon has both cloud ambitions and an interest in AI workloads; Microsoft already has deep commercial and platform ties to OpenAI; and MGX represents another pool of large-scale capital looking for exposure to the AI buildout.
At the same time, a multi-party round can introduce strategic tension. Nvidia benefits when OpenAI buys Nvidia chips; hyperscalers benefit when workloads run on their clouds; and OpenAI benefits when it can access compute across multiple channels at favorable economics. The more investors with overlapping incentives, the more carefully governance, procurement, and long-term infrastructure commitments must be structured.
Compute is the product: why infrastructure promises keep changing
When OpenAI and Nvidia announced their LOI in September 2025, the messaging was explicit about the centrality of compute. Sam Altman was quoted saying, “Everything starts with compute,” framing hardware capacity as the foundation of the future economy rather than a back-office expense line.
Jensen Huang, in the same announcement, described the partnership as “the next leap forward,” centered on deploying 10 gigawatts, language that treated infrastructure as the strategic differentiator. Yet the subsequent shift toward a simpler equity investment suggests an important reality: committing to capacity in public is easier than sequencing permitting, power, procurement, site selection, and platform timing in a way that cleanly maps to financing tranches.
OpenAI’s broader infrastructure context also shows how diversified and fluid its compute planning has become. In mid-2025, OpenAI was reported to have signed a cloud agreement with Oracle for about 4.5GW of computing power, described as roughly $30bn annually. Against that backdrop, it is plausible that OpenAI seeks redundancy and negotiating leverage across vendors and clouds, while Nvidia seeks to remain the premier chip supplier inside whichever data centers ultimately host the workloads.
What changes if OpenAI really spends ~$600bn on compute by 2030
A plan to spend on the order of ~$600bn on computing infrastructure through 2030 implies multi-year commitments across chips, networking, storage, cooling, real estate, and power generation or procurement. It also implies that OpenAI’s competitive edge is increasingly a function of capital access, execution discipline, and supply-chain coordination, areas where partnerships and investor alignment matter as much as model architecture.
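To get a rough sense of that scale, the reported total can be annualized. This is illustrative only: it assumes the spend is spread evenly over five years (2026–2030), whereas the real phasing is unknown and would likely be back-loaded.

```python
# Rough annualization of the reported ~$600bn compute plan.
# Assumes an even five-year spread (2026-2030), which is an
# illustrative simplification, not a reported schedule.
total_capex_bn = 600
years = 5
annual_capex_bn = total_capex_bn / years
print(f"Implied average spend: ${annual_capex_bn:.0f}bn per year")  # $120bn/yr

# Compared with the reported ~$20bn annualized revenue run-rate:
run_rate_bn = 20
ratio = annual_capex_bn / run_rate_bn
print(f"Capex-to-run-rate ratio: {ratio:.0f}x")  # 6x
```

An average outlay several times current revenue is exactly why capital access, rather than model architecture alone, becomes the binding constraint.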
If OpenAI is able to execute on that spend and translate it into product leadership, then Nvidia's equity stake is not just financial upside; it is a strategic hedge against platform risk. Nvidia captures value both ways: through direct chip sales that may be funded by the very capital it helps OpenAI raise, and through equity exposure to OpenAI's expanding revenue base.
But the scale also elevates risks. Massive capex plans are vulnerable to delays (power constraints, construction bottlenecks, export controls), price competition (from alternative accelerators), and demand cycles (enterprise adoption pacing). Even with a reported $20bn+ annualized revenue run-rate, sustaining the growth required to justify an ~$830bn valuation will demand continued breakthroughs in reliability, cost efficiency, and monetization.
Nvidia’s reported pivot from an unfinished $100bn deployment-linked framework to a roughly $30bn stake in OpenAI can be read as a pragmatic adjustment to how mega-infrastructure is actually financed. Equity is simpler, faster to execute, and easier to align with a broader syndicate, especially in a round expected to exceed $100bn.
For OpenAI, the story remains consistent: compute is destiny, and capital is the constraint. If the fundraising closes near the reported scale and OpenAI channels much of it into Nvidia-powered infrastructure, the deal will not just reshape ownership tables; it will shape the supply lines that determine who can train, deploy, and profit from the next generation of AI systems.