
The AGI Distraction: How We Traded Profit for a Promise

The AI industry will spend $1.5 trillion this year. OpenAI will lose $14 billion in 2026. 95% of enterprise AI pilots deliver zero ROI. None of that slowed the capital. Here's why AGI became the new 'path to profitability.'


The global AI industry will spend roughly $1.5 trillion this year. OpenAI alone will burn through an estimated $14 billion in operating losses in 2026, on top of the $5 billion it lost in 2024 on just $3.7 billion in revenue. Deutsche Bank projects OpenAI will accumulate $143 billion in negative cash flow before it sees a single profitable quarter, sometime around 2030, if the forecasts hold. For context: that is more than four times what the Manhattan Project cost in today's dollars. Meanwhile, a July 2025 MIT study found that 95% of enterprise AI pilots are delivering zero measurable return on investment. Ninety-five percent.
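That Manhattan Project multiple is easy to reproduce. A back-of-envelope sketch, assuming the commonly cited inflation-adjusted cost of roughly $30 to $35 billion in today's dollars (the exact deflator is an assumption here, not a figure from the reporting):

```python
# Back-of-envelope check on the Manhattan Project comparison.
# ASSUMPTION: the commonly cited inflation-adjusted cost of the
# Manhattan Project is roughly $30-35 billion in today's dollars.
projected_cumulative_losses_bn = 143          # Deutsche Bank figure cited above
manhattan_low_bn, manhattan_high_bn = 30, 35  # assumed range, $bn today

print(f"{projected_cumulative_losses_bn / manhattan_high_bn:.1f}x "
      f"to {projected_cumulative_losses_bn / manhattan_low_bn:.1f}x")
# -> 4.1x to 4.8x, consistent with "more than four times"
```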

None of that stopped the capital from coming. None of it slowed the narrative. And that is the point.

The market has largely decided that AGI is the answer to the profitability question, not a separate question from it. The implicit argument, widely accepted, goes something like this: scale creates capability, capability creates dependency, dependency creates pricing power, and pricing power eventually creates margins. It is a logical chain. It just has a lot of "eventually" in it and very few hard dates.

So here is the honest answer to a question nobody is asking loudly enough: has anyone actually explained how AGI unlocks profitability in a way that current models have not? Not really. The closest thing to a coherent argument is that AGI-level systems could command dramatically higher prices because they would replace entire functions rather than assist them. An AI that fully replaces a legal team, an accounting department, or a software engineering org is worth more per seat than a copilot. That is the pricing thesis underneath all the spending.
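To see why that thesis attracts so much capital, put rough numbers on it. The sketch below is illustrative only; the $30-per-month assistant seat, the $150,000 fully loaded employee, and the 50 percent value-capture rate are all hypothetical figures, chosen to show the order-of-magnitude gap between assistant pricing and replacement pricing:

```python
# Illustrative only: every number here is hypothetical, chosen to show
# the order-of-magnitude gap the replacement-pricing thesis depends on.
assistant_seat_per_year = 30 * 12    # hypothetical $30/month copilot seat
fully_loaded_employee = 150_000      # hypothetical annual cost of one role
vendor_capture_rate = 0.5            # assume the vendor captures half the savings

replacement_seat_per_year = fully_loaded_employee * vendor_capture_rate
print(replacement_seat_per_year / assistant_seat_per_year)   # ~208x per seat
```

Even under generous assumptions for the copilot, the replacement thesis implies a price roughly two orders of magnitude higher per seat, which is exactly the kind of gap that makes "eventually" sound worth funding.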

The problem is that thesis assumes the customer can and will pay, that competitors do not commoditize it first, and that the regulatory environment stays permissive. None of those are guaranteed. None of them are being stress-tested publicly either.

The market is not betting on a clear profitability mechanism. It is betting on inevitability. The reasoning is closer to "whoever controls AGI controls everything" than it is to any traditional unit economics argument. That is a power-capture thesis, not a business model. And power-capture theses attract enormous capital right up until the moment they do not.

The industry has not sold investors a path to profit through AGI. It has sold them the idea that asking about profit is thinking too small. That is a dangerous idea to sell. And it is a much harder position to hold anyone accountable for.

Profit used to be the proof

In earlier technology cycles, the pattern was different. Search, cloud, and social media all had speculative periods and silly valuations, but there was still a basic expectation that the thing had to make money eventually. Search had ads. Cloud had subscriptions and margins that actually held up. Platforms took a cut. You could disagree on the multiple, but you could at least see the mechanism.

Today, a significant number of serious, credentialed people accept "we're chasing AGI" as its own justification. Losses stop looking like a temporary bridge to sustainable economics and start looking like the permanent price of admission to the race. If you are not pouring staggering capital into GPUs and data centers, you are not considered a serious player. The implied threat is: miss this window and you miss everything.

That logic has a name in earlier technology cycles. It was called irrational exuberance. We just gave this version a better acronym.

The core problem is that AGI has become a financial fig leaf. Instead of asking whether a given model or product has coherent unit economics, the conversation jumps straight to what happens when we get there. The destination is allowed to erase the road. Losses at the current scale, which run into the tens of billions annually across the major frontier labs, are reframed not as a gap in the business model but as bold infrastructure investment in a future that is always arriving and never quite here.

Many leading AI labs are effectively subsidized by larger cloud, advertising, or platform businesses, not supported by standalone model revenue. That subsidy relationship is worth naming clearly, because it changes the accountability structure. When losses are absorbed by a parent company with a different profit engine, the pressure to demonstrate that the AI product itself can pay its way essentially disappears. AGI becomes the cover story for an indefinite R&D budget.

The new cold fusion

The comparison that should make people uncomfortable is fusion, cold or otherwise. Practical fusion power has spent decades in a strange limbo: always thirty years away, always just on the other side of the next breakthrough. Perpetually important, perpetually transformative, and perpetually not in the room.

AGI is drifting into that same space. Close enough to justify building entire industrial stacks around it. Far enough that almost no one is held accountable for missing timelines. The gap between the story and the balance sheet gets filled with phrases like "sooner than you think" and "we'll figure out monetization at scale."

Meanwhile, the meters keep spinning. Data centers now account for roughly 1.5 percent of global electricity consumption, and that share is expected to roughly double by 2030, almost entirely on the back of AI workloads. Global data center electricity demand could reach 900 to 1,000 terawatt-hours per year by the end of the decade, comparable to Japan's total current electricity consumption. In some regions, data centers already draw several percent of local electricity supply, and utilities are projecting sharp further increases under aggressive AI growth scenarios.
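Those figures hang together, which is worth verifying. A quick sanity check, assuming global electricity consumption of roughly 29,000 TWh per year and Japanese consumption of roughly 940 TWh per year (both are commonly cited estimates, not figures taken from this piece's sources):

```python
# Sanity check on the electricity figures.
# ASSUMPTIONS: global electricity consumption of ~29,000 TWh/year and
# Japanese consumption of ~940 TWh/year, both commonly cited estimates.
global_twh = 29_000
japan_twh = 940
dc_share_today = 0.015                   # ~1.5% of global consumption

today = global_twh * dc_share_today      # ~435 TWh/year
doubled = 2 * today                      # ~870 TWh/year
print(f"today ~{today:.0f} TWh, doubled ~{doubled:.0f} TWh, Japan ~{japan_twh} TWh")
# A doubled share lands in the same range as the 900-1,000 TWh projection.
```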

Water tells a similar story. A typical large data center uses around 300,000 gallons of water per day for cooling, roughly equivalent to the daily demand of 1,000 households. The largest facilities approach five million gallons per day. Training a single frontier-scale model has been estimated to evaporate hundreds of thousands of gallons of water. One independent analysis of a Grok-scale training run put total cooling water use at roughly 200 million gallons, on the order of 300 Olympic swimming pools, for a single run.
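The conversions behind those comparisons are straightforward. A minimal sketch, assuming a standard 2,500 m³ Olympic pool (about 660,000 US gallons) and the commonly cited US average of roughly 300 gallons per household per day:

```python
# Unit conversions behind the water figures.
# ASSUMPTIONS: a 2,500 m^3 Olympic pool (~660,000 US gallons) and the
# commonly cited US average of ~300 gallons per household per day.
GALLONS_PER_OLYMPIC_POOL = 660_000
GALLONS_PER_HOUSEHOLD_DAY = 300

data_center_daily = 300_000
print(data_center_daily / GALLONS_PER_HOUSEHOLD_DAY)         # 1,000 households

training_run_total = 200_000_000
print(round(training_run_total / GALLONS_PER_OLYMPIC_POOL))  # ~303 pools
```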

These are not abstract infrastructure footnotes. They are the physical balance sheet of the AGI story. And they are being underwritten on the assumption that the returns will eventually justify the draw.

The advantage fallacy

Assume, for a moment, that the optimists are right. The breakthroughs land. The infrastructure is built. Something credibly deserving of the AGI label arrives in five years, or ten, or twenty. Then what?

The standard pitch is that everyone wins. Productivity explodes, science accelerates, the hard problems get solved. But at the level where companies actually operate, the dynamics are less magical. When a new capability becomes broadly available, it does not stay a competitive advantage for long. It becomes the new baseline. Plumbing.

This has happened before, repeatedly. There was a time when having internet access, or broadband, or a smartphone was a genuine differentiator. Today those things are assumed. Nobody wins a deal because they use spellcheck. You only lose if you do not.

AI is on the same trajectory, and it is moving fast. The first firms to deploy serious models in production may enjoy a real window of outperformance. But as access to comparable tools spreads across the market, that edge does not disappear so much as it gets competed away. The gains show up as compressed timelines, higher output expectations, and new quotas, not as free afternoons or reliably higher pay.

Economic analysis of automation and labor suggests this pattern is not accidental. When powerful tools diffuse broadly, productivity gains tend to accrue more to capital owners and competitive price pressure than to median workers, particularly when institutional bargaining power is weak. The technology changes the quota before it changes the paycheck.

Consider what this means in practice. The optimistic version of the AI story is that an editor equipped with a serious AI assistant could do the work of ten people and reclaim three days a week. The more realistic version is that the editor is now expected to process ten times the volume, with tighter turnarounds and lower tolerance for error, at roughly the same compensation. The machine did not liberate the human. It redefined what a full day's work looks like.

Psychologists call a related dynamic hedonic adaptation: the tendency to adjust quickly to new conditions, such that what once felt like a meaningful improvement becomes the new normal. The same mechanism operates at the firm and industry level. Once everyone's workflows are rebuilt around AI, nobody feels like they are winning because of it. They are just trying to keep up with the new baseline.

The missing conversation

None of this would be particularly alarming if the industry's loudest voices spent more time on the boring stuff: revenue, margins, customer retention, opportunity cost. Instead the dominant tone is one of inevitability. AGI is coming. The scale of investment must therefore be unprecedented. The conclusion is baked into the premise.

There is very little serious discussion of what happens if the bet fails, or if AGI arrives in a weaker and weirder form than expected. What if the models that are good enough for most commercial tasks turn out to be substantially the ones already available, and the real gains from here come from better integration, better interfaces, and better governance rather than more parameters? In that world, the decision to burn through capital, energy, and public goodwill in pursuit of an ever-receding target starts to look less like visionary long-termism and more like an extremely expensive sunk cost.

The irony is that many existing narrow AI systems could probably be profitable if companies stopped chasing scale for its own sake. There is no law of physics that says language models, recommendation systems, and vision tools must lose money. That is a strategic choice, specifically the choice to treat current products as mere stepping stones to a future AGI rather than as businesses that have to stand on their own.

Analysts and investors have described the current environment as an AI industrial bubble, one where a meaningful share of deployed capital will not earn back its cost. That assessment is not fringe. It is showing up in serious financial analysis across the market. The question is whether anyone inside the industry is structuring their decisions around it.

Systemic exposure

The pursuit of AGI is not just a high-beta investment theme. It is happening inside a macroeconomic context that makes the downside risks larger than they look in isolation.

The current AI infrastructure build is occurring against a backdrop of already high public debt, constrained fiscal space, and elevated geopolitical risk. Data centers and chip fabs are financed with debt, equity, and public subsidies. Banks extend credit. Governments redirect tax revenue and development policy. Investors accept years of negative cash flow on the assumption of a future that makes it all pencil out.

What happens if that future arrives, or fails to arrive, in a world where balance sheets are already stretched and credit conditions have tightened? Entire sectors could end up with more automated capacity than solvent demand. Some data center portfolios may not cover their operating and financing costs. The fallback options that made earlier tech cycles survivable (rolling debt, selling assets to new entrants, waiting for the next round) get harder the third and fourth time around.

Analysts warn that an AI-driven correction could amplify existing debt and asset-bubble risks, particularly where capex has been justified more by AGI narrative than by near-term cash flows. You can imagine arriving at the starting line of something like AGI at the precise moment when the financial system, the grid, and the political order are least equipped to absorb another wave of disruption.

Put plainly: you can win the AGI race and still discover that your customers, your lenders, and your partners are no longer around to make meaningful use of it.

The abandoned path that leaves more standing

There is a different version of this story available. It does not require abandoning AI, dismissing the research, or pretending the technology is not genuinely powerful. It just requires asking the question that somehow fell out of the conversation: can this thing pay for itself, not in some future AGI scenario, but with the customers and use cases that actually exist today?

That framing would change some decisions. It would mean building models and products that can stand on their own economics in real markets. It would mean pacing infrastructure growth so that grids, water systems, and communities are not pushed past their limits to support one more training run. It would mean spreading the gains across more businesses and more time rather than concentrating the risk in a handful of balance sheets and a few overstressed regions.

A more grounded pace would also give more of the economy time to integrate AI into its actual foundations, rather than just its headlines. People would be more likely to see it as a tool they can build with, rather than a force that might make their entire function obsolete at the next version number.

The bigger picture is not that AI is bad or that the research should stop. The bigger picture is that intelligence, artificial or otherwise, ought to include the capacity to ask what we are trading away for the story we are telling. And whether a more profitable, more widely shareable, more durably grounded version of this technology might leave us with more actually standing when the narrative finally has to meet the balance sheet.

That question is not being asked loudly enough. It should be.

Sources and supporting data are drawn from independent energy demand analysis, data center water usage research, economic modeling of automation and labor markets, and analyst assessments of AI capital expenditure and return assumptions.