Altman and Nadella Bet on AI’s Growing Energy Appetite Without Clear Limits
On November 3, 2025, OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella publicly acknowledged a critical uncertainty behind their massive AI ambitions: the escalating electricity consumption of AI systems and the lack of a defined ceiling on that growth. Both leaders are doubling down on AI’s expansion, implicitly accepting that it will demand increasingly vast power resources. Yet this indefinite appetite for energy creates a hidden constraint that investors and operators must reckon with as they back AI scaling at extraordinary rates.
Why Energy Consumption Is the Invisible Constraint Shaping AI’s Scalability
Altman and Nadella have positioned AI compute capacity as the primary bottleneck, and their companies have responded accordingly. OpenAI’s $38 billion multi-year commitment to Amazon Web Services locked in massive cloud infrastructure, while Microsoft’s GPU deals with companies like Lambda Labs and the Australia-based data center operator Iren secured hardware and regional cloud capacity. These moves secure the specialized chips and data center access essential for training and running the large language models that power products like ChatGPT.
Yet the complicating factor few highlight is the sharp, practically unbounded rise in electricity demand driven by these compute-intensive models. AI training and inference workloads are notorious energy hogs: training a single large foundation model can consume hundreds to thousands of megawatt-hours of electricity, costing tens to hundreds of thousands of dollars at commercial rates. Recent estimates peg the annual energy consumption of top-tier AI models in the tens of gigawatt-hours, comparable to the usage of a small city.
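To make those magnitudes concrete, here is a back-of-envelope sketch of a single training run’s electricity bill. Every input below (cluster size, per-GPU draw, PUE, duration) is an illustrative assumption, not a disclosed figure for any real model:

```python
# Back-of-envelope estimate of the electricity cost of one training run.
# Every input is an illustrative assumption, not a disclosed figure.

NUM_GPUS = 10_000        # assumed training cluster size
WATTS_PER_GPU = 300      # assumed average draw per accelerator
PUE = 1.3                # assumed power usage effectiveness (facility overhead)
TRAINING_DAYS = 30       # assumed wall-clock training duration
PRICE_PER_KWH = 0.10     # average U.S. commercial rate, $/kWh

# Facility power in kW, counting cooling and other overhead via PUE.
facility_kw = NUM_GPUS * WATTS_PER_GPU / 1_000 * PUE

# Energy over the full run, in kWh and MWh.
energy_kwh = facility_kw * 24 * TRAINING_DAYS
energy_mwh = energy_kwh / 1_000

electricity_cost = energy_kwh * PRICE_PER_KWH

print(f"Energy: {energy_mwh:,.0f} MWh, electricity: ${electricity_cost:,.0f}")
# With these assumptions: ~2,808 MWh and ~$280,800 for a single run.
```

Note how the facility overhead multiplier (PUE) alone adds 30 percent to the bill before a single token is generated; small changes in any assumption move the total by hundreds of megawatt-hours.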
Altman and Nadella’s admission that they don’t know “how much” energy AI will consume signals that the real constraint has shifted away from access to compute hardware and toward the availability and cost of electricity. Unlike GPUs or cloud contracts, electricity supply involves geographic, regulatory, and infrastructure complexities that are less fungible and take years to adjust. This power-demand constraint places an upper bound on AI scaling that no amount of chip acquisition or cloud agreements can circumvent.
Positioning AI to Run on Ever Greater Energy Inputs Without Defined Limits Reveals a Strategic Blind Spot
There are two ways to approach power constraints at scale: first, improve energy efficiency per model run; second, secure or build access to ever-increasing power sources. Currently, neither OpenAI nor Microsoft has outlined a concrete ceiling or clear strategy to contain energy growth. Instead, they place a long-term bet that power markets and infrastructure will accommodate AI’s relentless growth.
This bet provides leverage: the companies can focus on rapid model development and deployment without internal caps. But it also externalizes risk. For investors, this signals exposure to energy price volatility and potential regulatory restrictions, especially as governments tighten emissions targets and grid reliability concerns grow.
Importantly, this approach differs from other players in AI infrastructure who pursue distinctive leverage mechanisms around energy constraints. Lambda Labs’ specialized hardware contracts include geographic diversification into regions with lower-cost renewable energy, explicitly offsetting power cost risk. Similarly, Microsoft’s cloud deal with Australia-based Iren hinges on tapping renewables-heavy grids to stabilize energy expenses, a subtle move that shifts the constraint from raw energy supply to energy source quality and cost.
Failure to Define Energy Limits Alters the System’s Constraint, Elevating Regulatory and Infrastructure Risks
By not quantifying or capping their energy consumption trajectories, OpenAI and Microsoft implicitly define the AI scaling constraint as “unbounded compute power conditional on energy supply.” This reframes the bottleneck from technology licensing and GPU availability to the physical limits and economics of power grids.
This is a crucial system dynamic because building out new data centers or energy infrastructure involves multiyear timelines and hefty capital expenditure. Energy providers are often constrained by grid capacity and local regulations. Should electricity prices spike or carbon taxes rise, the marginal cost of AI compute runs could increase sharply, eroding profitability or forcing throttling of operations.
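To illustrate how such shocks propagate, the sketch below traces an electricity price spike and a hypothetical carbon tax through to the marginal electricity cost of a GPU-hour. The draw, overhead multiplier, and grid carbon intensity are assumptions for illustration only:

```python
# Sketch: how an electricity price spike or a carbon tax flows into
# the marginal electricity cost of one GPU-hour.
# All rates and the grid carbon intensity are illustrative assumptions.

GPU_KW = 0.3              # assumed GPU draw: 300 W
PUE = 1.3                 # assumed facility overhead multiplier
GRID_KGCO2_PER_KWH = 0.4  # assumed grid carbon intensity

def cost_per_gpu_hour(price_per_kwh: float, carbon_tax_per_tonne: float = 0.0) -> float:
    """Marginal electricity cost of one GPU-hour, with any carbon tax
    passed through to the effective power price."""
    carbon_adder = carbon_tax_per_tonne * GRID_KGCO2_PER_KWH / 1_000  # $/kWh
    return GPU_KW * PUE * (price_per_kwh + carbon_adder)

print(f"baseline:          ${cost_per_gpu_hour(0.10):.4f}/GPU-hr")       # ~$0.0390
print(f"price doubles:     ${cost_per_gpu_hour(0.20):.4f}/GPU-hr")       # ~$0.0780
print(f"$100/t carbon tax: ${cost_per_gpu_hour(0.10, 100):.4f}/GPU-hr")  # ~$0.0546, +40%
```

Under these assumptions, a $100-per-tonne carbon tax alone raises the marginal electricity cost of compute by roughly 40 percent, which is exactly the kind of shock an unbounded energy strategy leaves unhedged.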
Compared to traditional cloud computing growth strategies that factor in power cost caps and efficiency roadmaps, openly admitting uncertainty about energy usage is a positioning move that amplifies both operational leverage and risk, depending on how power and regulatory ecosystems evolve.
Concrete Examples of Energy Consumption’s Impact on AI Business Leverage
Consider ChatGPT’s continuous inference workload: GPUs drawing 250–300 watts each run 24/7 in data centers. Adding capacity to serve another 10 million users scales electricity demand roughly proportionally. At an average U.S. commercial electricity rate of $0.10/kWh, that can add millions of dollars per month in power costs alone.
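The arithmetic behind that claim can be sketched directly. The users-per-GPU ratio below is a hypothetical assumption, since real serving efficiency is not public; the other inputs follow the figures above:

```python
# Sketch of the added monthly electricity bill for serving 10 million
# more users. The users-per-GPU ratio is a hypothetical assumption;
# real serving efficiency is not public.

NEW_USERS = 10_000_000
USERS_PER_GPU = 200      # hypothetical: one GPU serves ~200 active users
WATTS_PER_GPU = 300      # upper end of the 250-300 W range above
PUE = 1.3                # assumed facility overhead
PRICE_PER_KWH = 0.10     # average U.S. commercial rate, $/kWh
HOURS_PER_MONTH = 24 * 30

gpus_needed = NEW_USERS / USERS_PER_GPU
facility_kw = gpus_needed * WATTS_PER_GPU / 1_000 * PUE
monthly_kwh = facility_kw * HOURS_PER_MONTH
monthly_cost = monthly_kwh * PRICE_PER_KWH

print(f"{gpus_needed:,.0f} GPUs -> {monthly_kwh:,.0f} kWh/month "
      f"-> ${monthly_cost:,.0f}/month")
# With these assumptions: 50,000 GPUs, ~14.0M kWh, ~$1.4M per month.
```

Because every term in this calculation is linear, doubling the user base or the electricity price doubles the bill, which is why inference demand ties operating costs so tightly to power markets.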
By contrast, companies like Google have disclosed multi-year AI sustainability targets aiming for net-zero emissions or capped energy budgets for AI workloads, effectively positioning their constraint around clean energy procurement and efficiency. This shifts the bottleneck from absolute compute to sustainable compute, a distinction that can determine AI system durability over the next decade.
This is the leverage distinction OpenAI and Microsoft have yet to embrace explicitly: they could innovate on energy-efficient model architectures or partner aggressively on renewable energy infrastructure, but until they do, energy remains a latent and volatile constraint.
What Operators Overlooking Energy Constraints Miss About AI’s Structural Limits
Many investors and analysts assume AI’s primary advantages come from proprietary models, data, and software. This overlooks the physical infrastructure: energy supply and grid access constitute the ultimate choke point for sustainable growth. OpenAI and Microsoft own or contract enormous compute firepower, but they do not control the electric grids powering it.
This matters for anyone managing AI investments or operations because it reframes where strategic effort needs to focus: securing favorable energy contracts, innovating on model efficiency, and forecasting regulatory trends become critical to maintaining a scalable AI advantage.
For context, our review of OpenAI’s AWS contract shows that compute availability alone is not enough: without sustainable power, scaling stalls. Meanwhile, rising energy costs have already disrupted data center expansion plans across the AI industry, forcing operators to rethink supply chains and deployment regions.
This hidden power consumption constraint represents a rare opportunity to differentiate AI companies: those who master energy sourcing innovation will transform a looming operational risk into a durable advantage.
Frequently Asked Questions
Why is energy consumption a critical concern in scaling AI systems?
Energy consumption is critical because top-tier AI models consume tens of gigawatt-hours annually, and a single training run can cost tens to hundreds of thousands of dollars in electricity alone. The lack of limits on energy use creates a hidden constraint affecting scalability, grid capacity, and operational costs.
How much can serving additional AI users impact electricity costs?
Serving an additional 10 million AI users can significantly increase electricity needs since GPUs running 24/7 consume 250–300 watts each. At a $0.10/kWh electricity rate, this scales to millions of dollars monthly in power expenses.
What strategies do AI companies use to address energy supply risks?
Some companies diversify geographically to access lower-cost renewable energy, like Lambda Labs, or secure cloud capacity on renewables-heavy grids, such as Microsoft’s deal with Australia-based Iren, to stabilize energy costs and mitigate power risk.
Why is electricity supply considered a more complex bottleneck than hardware availability?
Electricity supply involves geographic, regulatory, and infrastructure complexities that are less fungible and take years to adjust, unlike GPUs or cloud contracts. This makes energy availability and cost a decisive constraint on AI scaling.
What risks do investors face from AI's increasing energy demands?
Investors face exposure to energy price volatility, potential regulatory restrictions, carbon taxes, and infrastructure limitations that could increase compute operating costs or force throttling of AI workloads.
How are some tech companies addressing sustainability in AI compute?
Companies like Google adopt multi-year sustainability targets aiming for net-zero emissions or capped energy budgets for AI workloads, shifting focus from pure compute to sustainable compute to enhance long-term system durability.
Why haven’t leading AI companies set clear energy consumption limits?
OpenAI and Microsoft currently focus on model growth without defined energy caps, betting that power markets and infrastructure will support relentless AI expansion, which externalizes energy-related risks.
What makes energy sourcing innovation a potential competitive advantage in AI?
Mastering energy sourcing innovation can transform operational risk linked to power consumption into a durable advantage by ensuring scalable, cost-effective AI deployment amidst rising energy costs and regulatory pressures.