Nvidia’s $100 billion OpenAI pact tries to lock down AI compute with 10GW Vera Rubin build

San Francisco — Nvidia is preparing to pour as much as $100 billion into OpenAI, an unprecedented capital and compute pact that aims to stand up at least 10 gigawatts of artificial intelligence infrastructure and deliver the first tranche of systems in the second half of 2026. The plan, outlined by both companies on September 22, is not a simple chip sale. It blends staged equity financing with large purchases of full-stack Nvidia systems so OpenAI can scale training and inference for its next generation of models.

Ten gigawatts of AI capacity demands major grid connections and new substations [PHOTO: ABB].

The numbers are eye-popping even by hyperscale standards. Nvidia says the partnership will deploy “millions of GPUs” as part of a purpose-built platform branded Vera Rubin, with the initial one-gigawatt phase landing in late 2026, followed by a multi-year buildout to at least 10 gigawatts worldwide. Those claims are spelled out in the companies’ primary materials and corroborated by reporting in the Financial Times.
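
Those two figures hang together arithmetically. A minimal sketch, assuming an all-in draw of roughly 1.5 to 2 kilowatts per deployed accelerator once cooling, networking, and power conversion are counted — an illustrative assumption, not a figure either company has disclosed:

```python
# Rough consistency check: does 10 GW imply "millions of GPUs"?
# The per-accelerator draw below is an assumption for illustration,
# not a number disclosed by Nvidia or OpenAI.
total_power_w = 10e9  # 10 GW buildout target

for kw_per_gpu in (1.5, 2.0):  # assumed all-in kW per accelerator
    gpus = total_power_w / (kw_per_gpu * 1_000)
    print(f"At {kw_per_gpu} kW each: ~{gpus / 1e6:.1f} million accelerators")
```

At those assumed draws, 10 gigawatts works out to roughly five to seven million accelerators, which is consistent with the “millions of GPUs” framing.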

For OpenAI, the pact is oxygen. Its services now reach hundreds of millions of users, and the organization’s public roadmaps increasingly point toward larger, longer training runs and more capable multimodal systems. For Nvidia, the deal extends the company’s grip on AI compute at a moment when rivals are racing to catch up with accelerators and networking. It also formalizes the role Nvidia has played informally for two years as the supplier of the most coveted engines in machine learning.

Why 10 gigawatts changes the map

Ten gigawatts is a scale more often associated with national electricity planning than with a single vendor partnership. The International Energy Agency projects that global data centre power use will more than double by 2030, approaching 945 terawatt-hours in its base case. In other words, data centres will consume roughly as much electricity as Japan does today. The U.S. Department of Energy’s primer on grid units is helpful here. A single gigawatt equals one billion watts, on the order of a typical nuclear reactor’s output in continuous operation. Building 10 gigawatts of AI capacity therefore implies energy and siting choices that rival national infrastructure programs.
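
To put the headline number in energy terms, a back-of-envelope conversion — assuming, for simplicity, that the full 10 gigawatts ran continuously at nameplate capacity, which real fleets will not sustain:

```python
# Convert the 10 GW target into annual energy and compare it with the
# IEA's 2030 base-case projection cited above. Illustrative only:
# assumes continuous operation at full nameplate capacity.
HOURS_PER_YEAR = 24 * 365  # 8,760

capacity_gw = 10
annual_twh = capacity_gw * HOURS_PER_YEAR / 1_000  # GW x hours = GWh; /1,000 -> TWh

iea_2030_base_twh = 945  # IEA base case for global data centre use by 2030
print(f"{capacity_gw} GW continuous ≈ {annual_twh:.1f} TWh per year")
print(f"≈ {annual_twh / iea_2030_base_twh:.0%} of the IEA 2030 base case")
```

Even under that generous simplification, this single partnership would account for something like 88 TWh a year, nearly a tenth of the IEA’s projected global data centre consumption for 2030.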

The Eastern Herald has reported extensively on how AI demand is colliding with public utilities and aging transmission. Our recent investigation showed how AI data centres are already straining America’s grid and reshaping local permitting fights. The OpenAI–Nvidia buildout will intensify those debates. A single 100 megawatt campus using evaporative cooling can require millions of gallons of water per day during hot spells. The DOE’s Federal Energy Management Program outlines well-documented ways to curb that draw, including better cycles of concentration and tower management, but the baseline is still material. See FEMP guidance on cooling water efficiency and tower best practices. A recent Congressional Research Service brief puts US data centre energy use near 176 TWh in 2023 and cites studies estimating roughly seven cubic meters of water per MWh for typical operations. Expect water to be as contentious as megawatts when sites are announced.
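
The CRS figure makes the water claim easy to sanity-check. A minimal sketch, applying the brief’s roughly seven cubic meters per megawatt-hour to a hypothetical 100 MW campus running flat out — an upper-bound simplification of a real load profile:

```python
# Sanity check of "millions of gallons per day" using the CRS estimate
# of ~7 cubic meters of water per MWh. Assumes the campus draws its
# full 100 MW around the clock, which overstates a real load profile.
GALLONS_PER_M3 = 264.17

campus_mw = 100
mwh_per_day = campus_mw * 24           # 2,400 MWh per day
m3_per_day = mwh_per_day * 7           # 16,800 cubic meters per day
gallons_per_day = m3_per_day * GALLONS_PER_M3

print(f"~{gallons_per_day / 1e6:.1f} million gallons per day")  # ~4.4
```

That pencils out to roughly 4.4 million gallons a day for a single 100 MW evaporatively cooled campus, squarely in the “millions of gallons” range the paragraph describes.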

Liquid-cooling topologies are increasingly favored to curb water and energy footprint [PHOTO: Data Center Dynamics].

Supply chain reality check

Moving from press release to powered racks depends on supply chains that have been tight for two years. Packaging is still the chokepoint. Nvidia’s Blackwell generation transitions from CoWoS-S to CoWoS-L, and while total advanced packaging capacity has expanded, it remains the system limiter in most deliveries. TSMC says it is accelerating capacity, with multiple reports indicating plans to more than double CoWoS throughput from 2024 levels and to keep adding lines through 2026. The memory side is also evolving. SK hynix announced it has completed development of HBM4 and is readying mass production, which would align with OpenAI’s 2026 timing and Nvidia’s Rubin platforms. Samsung and Micron are close behind with their own HBM4 roadmaps, although the competitive pecking order could shift as vendors finalize yields and power envelopes.

Advanced packaging capacity remains a key limiter for Blackwell-class accelerators [PHOTO: Seeking Alpha].

The Eastern Herald has tracked these feeder stories across the sector. Broadcom, a key supplier for custom accelerators, recently guided higher on AI strength, underscoring how networking and ASIC programs are the other half of the buildout. Our summary of that outlook is here: Broadcom revenue jumps on soaring AI chips. On the equity side, Nvidia’s own market arc has been a story in itself. We chronicled its climb to the top of global market cap in this explainer and the more recent crosswinds around insider sales and valuation in our NVDA stock coverage. Those dynamics frame why a staged, milestone-based investment into OpenAI is both feasible for Nvidia and consequential for investors who now must price multi-year capex, energy, and regulatory risks.

Next-gen HBM4 supply is pivotal to OpenAI’s 2026 timeline [PHOTO: Techovedas].

Regulators are already circling

Any arrangement that binds the dominant AI chip supplier to a leading AI developer will draw scrutiny. In the U.K., the Competition and Markets Authority recently concluded that Microsoft’s complex relationship with OpenAI was not a reviewable merger under its rules, while still publishing detailed reasoning on control and material influence. In the U.S., the Federal Trade Commission has been probing the web of cloud–model partnerships for over a year. Its January staff study on AI investments lays out concrete concerns over access to compute, exclusive terms, and switching costs.

The Justice Department has also signaled a harder look at AI tie-ups, seeking notice requirements for new AI investments in related litigation and flagging the theme in speeches. Recent remarks by the head of DOJ Antitrust emphasize “real competition” in AI as an enforcement priority. Put together, those threads suggest the OpenAI–Nvidia deal will undergo a process check even if it ultimately clears.

Geopolitics and export rules still bite

The partnership unfolds within export regimes that define where high-end compute can be sold or provisioned. Since 2022, the U.S. has tightened rules on advanced accelerators and some services for destinations of concern, with additional clarifications this year. For baseline policy, see the Commerce Department’s Advanced Computing rule hub and January’s updated controls reported by Reuters. Beijing has responded with its own procurement directives and domestic buildout plans, which darken the near-term outlook for selling detuned accelerators into China. Against that backdrop, OpenAI’s capacity plan will likely concentrate in the US and allied jurisdictions with favorable siting, power, and regulatory certainty.

What it means for rivals and the cloud

Ten gigawatts is a capacity signal that every hyperscaler will read closely. Microsoft remains OpenAI’s strategic cloud partner for consumer and enterprise access. Oracle, Amazon, and Google are competing to land training and inference workloads through direct contracts and model marketplaces. The sector’s center of gravity is shifting from single-tenant superclusters to federated networks of training hubs and regional inference fabrics. That is why the plumbing matters as much as the processors. Packaging, HBM supply, Ethernet and NVLink fabrics, cooling topologies, and power interconnects will decide who hits time-to-utility targets.

OpenAI headquarters signage in San Francisco
OpenAI’s infrastructure expansion will anchor model training and inference across multiple regions [PHOTO: Tech Research Online].

Expect competitors to counterprogram. Meta is leaning into open models and proprietary clusters, a strategy we unpacked in our analysis of Zuckerberg’s $14 billion AI push. AMD and custom-silicon players will tout total cost of ownership and supply assurance as levers against Nvidia. Broadcom and Cisco will pitch network determinism for training jobs. The net effect is likely a capex supercycle that rewards suppliers up and down the stack while keeping delivery schedules tight through 2026.

The siting puzzle: megawatts, water, and neighbors

Where the first gigawatt lands will be a test case for the politics of AI infrastructure. States with spare capacity and friendly interconnection queues will court the jobs and tax base. Communities will ask hard questions about transmission upgrades, substation footprints, transformer supply, and noise from rooftop cooling. Some will focus on water. Policy analysts warn that inland campuses relying on evaporative systems can stress municipal supplies during heat events. CRS summarizes the tradeoffs well and points to documented cases of local strain. DOE’s guidance offers practical mitigations, but those require up-front design choices investors must underwrite. For a broader view of the grid consequences as AI scales, revisit our investigation into how data centres are breaking the grid.

What to watch next

  • Contract finalization and phasing. The companies described staged deployment tied to gigawatt milestones. The exact cadence and revenue recognition will matter for Nvidia’s guidance and for OpenAI’s product roadmap.
  • Packaging and HBM ramps. Signs that CoWoS-L lines are filling and that HBM4 is sampling at customer-qualified speeds will confirm the 2026 schedule.
  • Regulatory posture. The CMA’s Microsoft–OpenAI decision and the FTC’s AI partnerships study are the best compendium of questions regulators will ask. DOJ speeches and filings offer further tea leaves on remedies and disclosure expectations.
  • Energy deals. Expect bespoke power purchase agreements, on-site generation, and nuclear partnerships to surface. DOE primers on gigawatt-scale generation and FEMP’s data centre design guide illuminate the design space.
