OpenAI’s 6 GW chip binge with AMD, a risky bet on 2026

Six gigawatts on order, a penny warrant in hand, and a race to switch on a one gigawatt campus in 2026.

San Francisco — OpenAI has signed a multiyear pact to buy enough chips to power six gigawatts of computing, a scale more often associated with national electric grids than with any single company’s servers. The agreement binds the most visible developer of generative artificial intelligence to a new wave of accelerators and, through a warrant, gives the buyer a potential minority position in its supplier. The first tranche of hardware is slated to arrive in the second half of 2026, when OpenAI begins building a one gigawatt site that will run on the MI450 series, according to the companies.

The deal moves two numbers to the foreground. One is six, the cumulative gigawatts OpenAI says it will deploy over several years across multiple generations of systems. The other is ten, the approximate percentage stake the company could acquire if a penny-a-share warrant vests in full. That option, which allows the purchase of up to 160 million shares at one cent each, is contingent on volume and price milestones that stretch over the life of the agreement, as first detailed by Reuters. Taken together, the figures describe a partnership that links an appetite for compute to the supplier’s road map and incentives.
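The warrant arithmetic above can be sketched in a few lines. The 160 million share count and the one-cent strike come from the reporting; the share prices used below are purely illustrative assumptions, not figures from the filing.

```python
# Back-of-envelope intrinsic value of the penny warrant described above.
# WARRANT_SHARES and STRIKE come from the article; any share price passed
# in is an illustrative assumption.
WARRANT_SHARES = 160_000_000
STRIKE = 0.01  # dollars per share


def warrant_intrinsic_value(share_price: float, vested_fraction: float = 1.0) -> float:
    """Intrinsic value if `vested_fraction` of the warrant has vested."""
    shares = WARRANT_SHARES * vested_fraction
    return max(share_price - STRIKE, 0.0) * shares


# At a hypothetical $200 share price, fully vested versus half vested:
print(f"${warrant_intrinsic_value(200.0):,.0f}")
print(f"${warrant_intrinsic_value(200.0, vested_fraction=0.5):,.0f}")
```

The vesting milestones in the agreement mean only a fraction of those shares would be exercisable at any given point, which is what the `vested_fraction` parameter stands in for.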

Server racks in a high density data hall, a reminder that siting and power shape every AI build at this scale. [PHOTO: AnD Cable Products]

What makes this arrangement unusual is not only its size but also the way it braids technology plans with corporate finance. Vesting occurs in steps tied to deliveries and purchases, beginning when the initial one gigawatt deployment goes live, then unlocking further as orders accumulate toward the six gigawatt total. A separate ladder links vesting to share price thresholds, with an upper target that would require a far richer valuation than today, according to the 8-K filing. This turns the buyer into a strategic ally that shares upside if execution stays on schedule, and it gives the supplier a powerful incentive to hit dates, specs, and software readiness without drift.

Both sides are casting the partnership in mission terms. Company leaders talk about delivering AI compute at massive scale and building capacity for the next phase of AI. There is salesmanship in that language, but there is also a practical reading: no single vendor can meet this trajectory alone, and no single buyer can push a chip maker into the lead without deep coordination across the stack, from interconnects to racks to orchestration software.

The six gigawatt figure is a proxy for how far this build intends to stretch. Even conservative translations of gigawatts into accelerators and racks imply hundreds of thousands of high-end chips spread across multiple campuses. The initial one gigawatt slice, set for late 2026, would rank among the largest single-tenant AI builds to date. That timing overlaps with a parallel plan that targets at least ten gigawatts from a competing ecosystem, outlined in a letter of intent last month. For readers tracking that path, The Eastern Herald has a primer on why a ten gigawatt build changes the map, including the implications for power planning and supply chains.

AMD CEO Lisa Su during an Instinct keynote segment that set the stage for the next accelerator generation. [PHOTO: Reuters]

Under the hood, the choice reflects a bet on performance per watt, memory bandwidth, and system-level efficiency, not just raw peak numbers. Over the past two years the supplier has tried to narrow gaps in developer tools and frameworks that once limited share in large-scale training. The MI450 series is meant to extend those gains. Both companies describe the agreement as multigenerational, so the 2026 deployments are a starting point. The target is not only throughput on single benchmarks; it is reliability across fleets, serviceability on the floor, and a software stack that does not strand developers when they move workloads between clusters.

The competitive map remains crowded. The market leader continues to sell out runs of its highest-end parts and has built an ecosystem around networking, integration, and software that multiplies the value of each chip. That camp, in a separate announcement, outlined a partnership that would deploy at least ten gigawatts of systems beginning in 2026. The overlap matters. It tells suppliers that price and delivery will be judged against live alternatives, and it tells buyers that single-vendor risk can be managed by running two engines in parallel.

The energy footprint is now part of any story at this scale. Six gigawatts across several years is not the draw of a single campus; it is a running sum tied to how fast facilities come online, how efficiently each generation runs, and how much capacity goes to training versus serving. Even so, the number is large enough to force questions about siting, transmission, and regional grids. Earlier this summer, federal interruptions to a Plains transmission project illustrated how policy choices can ripple into data center timelines. For a deeper look at that intersection between power infrastructure and compute demand, see our coverage of grid upgrades pushed by AI data centers and how delays complicate multi-site rollouts.

There is also the matter of money. The buyer has generated several billion dollars of revenue in the first half of the year, and it has a major cloud backer supplying credits and capital, but the cash requirements for hardware, land, and construction at this tempo run to the tens of billions. The warrant gives equity exposure that could offset a slice of cost if execution drives the stock higher, but it does not replace the need to finance the builds. That is one reason the buyer has diversified partners across chips, cloud, and real estate, and why it has been willing to frame agreements in ways that align incentives close to the metal.

Investors marked up the supplier’s stock sharply on the news. The rally reflects more than a single customer. The thesis has been forming since the current accelerator generation launched. It goes like this: the AI compute market is so wide that even a second supplier can grow at extraordinary rates if it ships competitive hardware on cadence, closes the software gap, and wins trust from anchor customers. Coverage today characterized the arrangement as a multiyear engine for revenue and a re-rating story for a company that has spent years in the leader’s shadow, as Bloomberg framed it.

Operational questions will decide how much of that thesis sticks. Can foundry partners source and package enough high-bandwidth memory into modules that meet power and thermal budgets? Can system makers deliver racks that meet serviceability constraints at one gigawatt scale? Can the buyer train and retain enough engineers to run fleets this large without outages that erode reliability guarantees for enterprise customers? In data centers the answers travel a long chain, from mines that supply materials for semiconductors to crews that swap boards on raised floors.

OpenAI’s chief executive discussing the role of compute capacity in product roadmaps during a 2025 appearance. [PHOTO: TED]

On cadence, the overlap between platforms in 2026 sets up a straightforward comparison. The supplier’s data center lead has been touting the next generation as a clean leap, with confidence that software maturity will narrow historical gaps. Industry coverage captured that sentiment with a headline promise that the coming GPUs would surpass competitors’ announced architectures, as TechRadar reported. Claims are the easy part. The test will be delivered hardware, driver stability, compiler behavior, and rack-level throughput when the systems are live.

Scale also changes how companies think about networks. At campus size, performance is as much about fabric, topologies, and failure domains as it is about individual chips. The leader in this market has spent years tuning those layers around its own silicon, from link technology and switches to collective libraries for training at trillion-parameter scales. The challenger has partnered with system integrators to deliver full-rack designs that meet comparable serviceability and uptime targets. The gaps are narrowing, but they are not gone. That is one reason the buyer has been testing multiple pathways at once, including a separate letter of intent that would put millions of rival accelerators into service starting in 2026, and a set of efforts around custom silicon that reduce dependence on merchant parts over time.

Dense cabling on a GPU rack highlights why fabric design, cooling, and serviceability matter at gigawatt scale. [PHOTO: Nassau National Cable]

Regulators will study these alignments. One question is whether money that comes in the front door of a model developer could route back to its suppliers through purchase commitments, raising conflict concerns. Another is the reverse case, where a buyer acquires an option in a supplier while negotiating terms as a customer. The companies argue that the market is expanding fast enough that no single arrangement forecloses competition, and that their plans explicitly involve multiple sources. The eventual answer will depend on how these agreements translate into shipments and whether newcomers can find room to sell.

For readers looking to follow the paper trail, the outlines are public. The total capacity, the timing of the first one gigawatt deployment, and the multi-generation scope are described in the joint notices posted by the companies, including the buyer’s newsroom summary. The mechanics of the warrant, including volume triggers and references to price ladders, appear in the regulatory filing. And the independent framing of share issuance, vesting, and expected revenue lift comes through in wire coverage that set the tone of Monday’s trading, as Associated Press noted, and in a separate analysis of the share jump and revenue arc, as Reuters detailed.

There is precedent for anchor deals of this sort in other sectors. In aviation, a large order can shape production plans for years and influence which engine supplier gets the nod. In power markets, long term purchase agreements can finance entire wind farms. Here, the buyer is both the airline and the off-taker. It is committing to buy the capacity that makes its products possible and, through the warrant, it is taking a piece of the factory that builds the engines. That is new ground in Silicon Valley, but it matches the scale of what the leading labs are trying to build.

The ripple effects continue beyond the immediate parties. Suppliers in memory, substrates, and advanced packaging will read the six gigawatt line as a multi-year runway. Cloud partners, which have been balancing their own custom silicon against merchant chips, will treat the agreement as a marker of where demand is heading and how fast. Developers will care less about the politics of who supplied the racks and more about whether the frameworks, kernels, and container images behave the same in production as they do in a test cluster.

Policy is part of the backdrop. Washington’s tighter export controls and licensing regimes have already pushed vendors to create product variants for restricted markets. A recent change that forces major chip makers to hand over a portion of China revenues has become another line item in earnings calls. For context on that rule and its implications for both large suppliers, see our report on the revenue levy tied to China sales and what it means for pricing and margins in the quarters ahead.

Competition will not sit still. One supplier’s ecosystem benefits from years of moat building, from CUDA-class software to networking that knits millions of accelerators into a fabric. The other is racing to turn hardware leaps into developer-friendly platforms. That dynamic is healthy for buyers. It also sets up volatility for investors, because misses on software cadence or packaging yields can change the perception of a generation overnight. For a broader market read that places this week’s rally in context, see our coverage of another supplier whose AI-linked revenue forecasts have kept momentum in adjacent parts of the stack.

Scale has consumer-facing consequences. If the first one gigawatt campus stands up on schedule, the second half of 2026 would bring a step change in the buyer’s ability to train and serve new models. Some of that capacity will go to products that people can see, like multimodal assistants and creative tools. For a guide that explains where those tools already live, The Eastern Herald maintains a plain-English walkthrough of the app and a deeper explainer on how an AI search product works. The rest of the capacity goes to less visible work, like training successors to today’s models and running evaluations that decide what ships to the public.

What about the grid? Site selection will tell its own story. Hyperscale developers look for a mix of cheap generation, transmission headroom, and communities that can absorb industrial footprints without backlash. Water and waste heat are not afterthoughts at this scale. In colder climates, free cooling and heat recovery agreements can shave operating costs and soften the politics of megaprojects. In warmer regions, air and water constraints raise engineering difficulty and public scrutiny. The first campus tied to this agreement will be a signal of how the buyer is balancing speed to market with the long run cost of power.

There is a practical takeaway for developers. The headlines talk about billions and gigawatts. The day to day reality is a string of deadlines, each tied to a truck that needs to arrive and a rack that needs to pass tests. Tooling, kernel updates, and framework releases will decide whether new hardware translates into real throughput. For readers who prefer a translation of the big number into something tangible, industry coverage has tried to compare six gigawatts to household equivalents and power plants, useful shorthand that still comes with caveats, as TechCrunch noted.
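The household shorthand referenced above is easy to reproduce, and easy to see the caveats in. A minimal sketch, assuming an average U.S. household draws about 1.2 kW on a continuous-average basis (roughly 10,500 kWh per year), a figure that glosses over the difference between peak and average demand:

```python
# Shorthand comparison of a gigawatt figure to household demand.
# AVG_HOUSEHOLD_KW is an assumption (continuous average, not peak);
# the comparison is illustrative, not an engineering equivalence.
AVG_HOUSEHOLD_KW = 1.2


def household_equivalents(gigawatts: float) -> int:
    """Number of average households matching the given power draw."""
    return round(gigawatts * 1e6 / AVG_HOUSEHOLD_KW)


print(f"{household_equivalents(6):,} households")
```

On those assumptions, six gigawatts maps to millions of households, which is exactly why the comparison travels well in headlines and poorly in grid planning.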

The last piece is the cadence of announcements versus reality on the floor. From here, the milestones are clear. In 2026, the first shipments arrive. The first campus tied to this agreement lights up. The rival platform’s first phase stands up on the other side of the ledger. Financing decisions lock in sites two and three. The market will keep score along the way, with every quarterly update measured against the promises made in press releases and regulatory filings.

For now, one company has its headline and the other has its rally. The more interesting story will unfold over the next twelve to thirty-six months, when delivered hardware, stable software, and working campuses replace sketches on investor slides. If the plan holds, a buyer will have secured a second source at historic size, and a supplier will have proven it belongs at the center of the most coveted market in chips.


Author

News Room
The Eastern Herald’s Editorial Board validates, writes, and publishes the stories under this byline. That includes editorials, news stories, letters to the editor, and multimedia features on easternherald.com.
