The AI investment cycle is entering a new phase, one in which excitement around models and demos is giving way to scrutiny of balance sheets and capex plans.
The amounts involved are extraordinary; some say this may be the largest capex boom in history.
And one company has dominated the discussion since late 2022: OpenAI.
Some even believe that the success of AI technology rests on Sam Altman’s shoulders.
But the past few weeks have brought developments worth a closer look, and perhaps a reassessment of that view.
The AI capex boom is not really concentrated
AI spending is no longer concentrated in one or two firms but is spread across the entire stack.
Nvidia continues to benefit as the primary supplier of high-end accelerators, with data centre revenue still growing at rates that justify aggressive capacity expansion.
Hyperscalers like Oracle are committing tens of billions each year to new data centres, power infrastructure, and networking, not as optional growth projects but as core strategy.
Microsoft, Amazon, Google, and Meta have all forecast materially higher AI capex through 2026 and beyond.
These investments are being made even as investors grow more sensitive to near-term margins.
This matters because it shows where confidence actually sits. Capital is flowing first to infrastructure and distribution.
Model developers sit downstream of those decisions.
Hyperscalers are spending, but on their own terms
The recent market reaction to Microsoft’s earnings illustrated the tension clearly.
Azure continues to grow, but AI-related spending rose faster than near-term monetisation. The result was one of Microsoft’s sharpest single-day drawdowns in years.
The concern was not whether AI works, but how long margins stay under pressure.
That reaction is instructive. Hyperscalers can afford to absorb that pressure because AI strengthens their ecosystems.
A prime example here is Google, which continues to roll out major Chrome integrations with its Gemini model.
AI workloads pull customers deeper into cloud platforms. They increase switching costs. They justify higher long-term pricing.
Even if margins dip temporarily, the strategic logic holds.
But that logic does not apply in the same way to OpenAI.
OpenAI’s economics look very different
OpenAI’s projected spending path is extraordinary. Estimates point to cumulative losses approaching $100 billion through 2028, driven primarily by compute.
Revenue is growing fast, but the gap between revenue and cost remains wide.
To close that gap, OpenAI needs two things to happen at once. Revenue must scale at historic speed, and compute costs must fall sharply.
Both outcomes depend on forces largely outside OpenAI’s control.
Unlike hyperscalers, OpenAI does not own the infrastructure it depends on. It buys compute from Microsoft and potentially Amazon.
That means OpenAI is exposed to whichever layer of the stack ends up capturing most of the economic value.
If margins compress at the model layer, OpenAI feels it immediately.
Vertical integration is becoming the real advantage
This is where structural differences matter most.
Google designs its own chips, runs its own cloud, and distributes AI directly through Search, Chrome, Android, and Workspace.
When Google trains Gemini, much of the cost stays inside the company.
Meta is pursuing a different version of the same logic.
It funds AI spending through advertising cash flows, trains models for internal use first, and deploys them across products with billions of users.
AI improves engagement and pricing power even before direct monetisation.
Amazon benefits from AI demand regardless of which model wins. Every training run and inference request strengthens AWS.
Even reported discussions around large investments in OpenAI should be viewed through that lens. Amazon is buying exposure to demand, not betting its future on one model.
OpenAI is trying to move in this direction through partial ownership of compute projects and custom chip design.
Those efforts help, but they do not match the control enjoyed by vertically integrated peers.
Competition is eroding the idea of a single winner
The speed at which Google closed the perceived performance gap with Gemini has changed market expectations.
Distribution now matters more than benchmarks. Gemini does not need to be better at every task; it only needs to be good enough and available everywhere.
In enterprise adoption, Anthropic has focused on regulated and technical use cases where long-term contracts are the prize.
Enterprise customers care less about being on the absolute frontier and more about reliability, security, and support.
Consumer AI remains fluid because switching costs are minimal. There are no network effects comparable to social media or operating systems.
Users rotate based on preference, price, and convenience.
That combination points toward a market with several strong players rather than a single dominant one.
Apple has taken a notably cautious approach.
Rather than racing to build the largest models, Apple is integrating AI into its devices and services, which strengthens hardware appeal and ecosystem lock-in.
Acquisitions such as Q.ai focus on device-level intelligence, not frontier compute.
Apple’s approach suggests that owning the user relationship may matter more than owning the most powerful model.
Now, Nvidia sits in a different position entirely. It sells the tools everyone needs, regardless of who wins at the model layer.
That is why reported hesitation around an outsized OpenAI investment is telling. Nvidia does not need OpenAI to dominate. It needs AI training and inference to continue scaling.
This distinction between enabling demand and owning demand is critical. It explains why infrastructure providers look increasingly attractive as a route to AI exposure, while pure model economics look harder.
A likely outcome investors are underestimating
A realistic scenario is now visible. AI adoption accelerates, spending remains high, and several models coexist at similar capability levels.
That means prices fall, while infrastructure, chips, and platforms capture most of the value. Model developers compete harder for thinner margins.
But at the same time, investors are becoming more sensitive to heavy spending without near-term results, as Microsoft's post-earnings drop showed.
What investors want to see is distribution for these models.
They are looking for defensible moats, such as the ones being built by Google and Anthropic.
Higher spending doesn’t cut it anymore. The market just doesn’t have the appetite for it.
OpenAI can remain important without being the defining economic winner of the AI era.
That outcome would feel uncomfortable because recent tech history trained investors to expect concentration.
The next phase of AI may reward control over infrastructure, distribution, and balance sheets instead.