OpenAI’s latest megadeals are testing its hyperscaler ambitions


Broadcom-OpenAI deal expected to be cheaper than current GPU options

Sam Altman didn’t set out to compete with Nvidia.

OpenAI began with a simple bet that better ideas, not better infrastructure, would unlock artificial general intelligence. But that view shifted years ago, as Altman realized that more compute, or processing power, meant more capability — and ultimately, more dominance.

On Monday morning, he unveiled his latest blockbuster deal, one that moves OpenAI squarely into the chipmaking business and further into competition with the hyperscalers.

OpenAI is partnering with Broadcom to co-develop racks of custom AI accelerators, purpose-built for its own models. It’s a big shift for a company that once believed intelligence would come from smarter algorithms, not bigger machines.

“In 2017, the thing that we found was that we were getting the best results out of scale,” the OpenAI CEO said in a company podcast on Monday. “It wasn’t something we set out to prove. It was something we really discovered empirically because of everything else that didn’t work nearly as well.”

That insight — that the key was scale, not cleverness — fundamentally reshaped OpenAI.

Now the company is extending that logic into hardware itself, designing and deploying racks of custom silicon optimized for its own workloads.

The deal gives OpenAI deeper control over its stack, from training frontier models to owning the infrastructure, distribution, and developer ecosystem that turns those models into lasting platforms.

Altman’s rapid series of deals and product launches is assembling a complete AI ecosystem, much like Apple did for smartphones and Microsoft did for PCs, with infrastructure, hardware, and developers at its core.

OpenAI expands hyperscaler ambitions with custom silicon, 10 GW Broadcom chip deal

Hardware

Through its partnership with Broadcom, OpenAI is co-developing custom AI accelerators, optimized for inference and tailored specifically to its own models.

Unlike Nvidia and AMD chips, which are designed for broader commercial use, the new silicon is built for vertically integrated systems, tightly coupling compute, memory, and networking into full rack-level infrastructure. OpenAI plans to begin deploying them in late 2026.

The Broadcom deal is similar to what Apple did with its M-series chips: control the semiconductors, control the experience.

But OpenAI is going even further and engineering every layer of the hardware stack, not just the chip.

The Broadcom systems are built on Broadcom's Ethernet networking stack and designed to accelerate OpenAI's core workloads, giving the company a physical advantage that's deeply entangled with its software edge.

At the same time, OpenAI is pushing into consumer hardware, a rare move for a model-first company.

Its $6.4 billion all-stock acquisition of Jony Ive's startup, io, brought the legendary Apple designer into its inner circle. It was a sign that OpenAI doesn't just want to power AI experiences; it wants to own them.

Ive and his team are exploring a new class of AI-native devices designed to reshape how people interact with intelligence, moving beyond screens…

