In 1865, an Economist Predicted Your AI Budget Would Explode. He Was Right.
Published: January 28, 2026 - 12 min read
This is Part 7 of the Tokenomics for Humans series. If you haven't read Part 6 on TCO and Hidden Costs, I recommend starting there.
At the end of Part 6, I asked you a question:
If AI tokens became 90% cheaper tomorrow, would your organization use less AI, the same amount of AI, or more AI?
If you said "less" or "the same," you're wrong.
If you said "more," you're catching on.
But here's the part that will really cook your brain: your organization would likely spend MORE money on AI, not less.
Cheaper prices. Higher spending.
That sounds insane. But it's not a prediction. It's a pattern that's been documented for over 160 years.
Let me tell you the story of a dead economist and why he matters to your AI budget.
The Coal Paradox of 1865
In 1865, a British economist named William Stanley Jevons noticed something strange happening in England.
Steam engines were getting more efficient.
The newer engines used less coal to produce the same amount of work. Engineers were celebrating. Politicians were relieved. Everyone assumed coal consumption would go down.
It didn't.
Coal consumption went up. Way up.
Jevons studied the data and realized what was happening: efficiency made steam engines economical for new applications.
Before the efficiency improvements, steam engines were too expensive to use for many purposes. But as they got more efficient:
- Factories that couldn't afford steam power now could
- Industries that relied on human labor switched to machines
- New applications became economically viable
- Existing users expanded their operations
The per-unit cost went down. But total consumption went up. And total spending went up even more.
JEVONS' PARADOX: THE ORIGINAL EXAMPLE
================================================================
BEFORE: Steam engines use 10 units of coal per hour
Coal costs $1 per unit
Cost to run engine: $10/hour
Result: Only wealthy factories use steam power
AFTER: Improved steam engines use 5 units of coal per hour
Coal still costs $1 per unit
Cost to run engine: $5/hour
Result: Four times as many factories now run steam engines
Each uses half the coal...
But total coal consumption DOUBLES
And total spending on coal INCREASES
================================================================
Per-unit efficiency UP
Total consumption UP
Total spending UP
================================================================
Jevons published his findings in a book called "The Coal Question." His insight is now called Jevons' Paradox.
And it's happening right now with AI.
Jevons' Paradox in AI
Let's apply Jevons' logic to AI tokens.
The Price Drop
Token prices have been falling dramatically:
| Year | Approximate Cost (per 1M tokens) | Trend |
|---|---|---|
| 2023 | $10.00 | Baseline |
| 2024 | $5.00 | 50% cheaper |
| 2025 | $2.00 | 80% cheaper than 2023 |
| 2026 | $0.50 | 95% cheaper than 2023 |
Note: These are illustrative figures showing the general trend. Actual prices vary by model and provider.
Looking at this, you might think: "Great! My AI bill should be going down."
But here's what actually happens:
JEVONS' PARADOX: THE AI VERSION
================================================================
YEAR    COST/MILLION TOKENS        TOTAL AI SPENDING
----    -------------------        -----------------
2023    $10.00                     $X
2024    $5.00 (50% cheaper)        $2X  (spending DOUBLED)
2025    $2.00 (80% cheaper)        $5X  (spending 5x higher)
2026    $0.50 (95% cheaper)        $15X (spending 15x higher)
================================================================
Each token costs less.
But you use SO many more tokens that total spending increases.
================================================================
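If you like to see the arithmetic, here's a minimal Python sketch of the same trend. The prices are the illustrative figures above; the token volumes are assumptions chosen to reproduce the 2x / 5x / 15x spending multiples, not real usage data.

```python
# Back-of-the-envelope: falling token prices vs. rising usage.
# All figures are illustrative assumptions, not real prices or usage data.

years = {
    # year: (price per 1M tokens in $, tokens used, in millions)
    2023: (10.00, 100),
    2024: (5.00, 400),       # price halves, usage quadruples -> spend doubles
    2025: (2.00, 2_500),
    2026: (0.50, 30_000),
}

baseline_spend = years[2023][0] * years[2023][1]

for year, (price, tokens_millions) in years.items():
    spend = price * tokens_millions
    print(f"{year}: ${price:5.2f}/1M tokens x {tokens_millions:>6,}M tokens"
          f" = ${spend:>7,.0f} ({spend / baseline_spend:.0f}x baseline spend)")
```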
Cheaper tokens don't lower your bill. They increase your usage.
Why This Happens: The Expansion Effect
When something becomes cheaper, three things happen:
1. New Use Cases Become Viable
At $10 per million tokens, you're careful. You use AI for high-value tasks where the ROI is obvious.
At $0.50 per million tokens? Suddenly you can use AI for:
- First drafts of everything
- Internal documentation
- Meeting summaries
- Email responses
- Data cleaning
- Quality checks on other AI outputs
- Experiments that might not work
Each of these seemed "too expensive" before. Now they're cheap enough to try.
2. Existing Users Use More
Your marketing team was using AI for 10 tasks. At lower prices, they use it for 50 tasks.
Your developers were running AI code review occasionally. Now they run it on every commit.
Your customer service was using AI for complex tickets. Now they use it for every ticket.
People don't keep their usage flat and pocket the savings. They use more.
3. AI Agents Multiply Demand
This is the new factor that Jevons couldn't have predicted: AI can consume AI.
AI agents and automated workflows run without human intervention. They can:
- Run 24/7 without breaks
- Process thousands of requests per hour
- Chain multiple AI calls together
- Spawn additional AI processes as needed
One human using AI might make 50 requests per day.
One AI agent might make 5,000 requests per day, or far more.
As token prices drop, deploying more agents becomes economical. Each agent consumes tokens continuously. Your usage doesn't increase linearly. It increases exponentially.
THE AGENT MULTIPLICATION EFFECT
================================================================
WITHOUT AGENTS:
10 employees x 50 requests/day = 500 requests/day
WITH AGENTS:
10 employees x 50 requests/day = 500 human requests
5 AI agents x 5,000 requests/day = 25,000 agent requests
---------------------------------------------------------
Total: 25,500 requests/day (51x increase)
================================================================
Cheaper tokens make agents economically viable.
Agents multiply token consumption exponentially.
================================================================
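The same arithmetic as a tiny Python sketch, using the head-counts and request rates above (all illustrative assumptions):

```python
# The agent multiplication effect, with the illustrative numbers above.
employees = 10
requests_per_employee_per_day = 50

agents = 5
requests_per_agent_per_day = 5_000

human_requests = employees * requests_per_employee_per_day   # 500 per day
agent_requests = agents * requests_per_agent_per_day         # 25,000 per day
total_requests = human_requests + agent_requests             # 25,500 per day

print(f"Human requests/day: {human_requests:,}")
print(f"Agent requests/day: {agent_requests:,}")
print(f"Total requests/day: {total_requests:,} "
      f"({total_requests / human_requests:.0f}x the human-only volume)")
```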
Real-World Evidence
This isn't theory. It's what's actually happening.
Deloitte's Findings
Deloitte's research (the same research behind this entire series) documented this pattern in enterprise AI spending:
- Year 1: Organizations used ~10 billion tokens
- Year 2: Organizations used ~300 billion tokens (30x increase)
- Year 3: Organizations used ~1 trillion tokens (100x from Year 1)
Token prices dropped during this period. But total spending increased at every step.
The OpenAI Example
When OpenAI dropped GPT-4 prices by 50%, did usage stay flat?
No. Usage exploded. More developers built more applications. More companies integrated AI. More use cases became viable.
OpenAI's revenue went up, not down.
The Cloud Computing Precedent
We've seen this movie before.
When AWS first launched cloud computing, prices were relatively high. As prices dropped over the years, companies didn't spend less on cloud. They spent more. Much more.
Cloud spending has increased every single year for the past 15+ years, even as per-unit costs have fallen dramatically.
AI is following the exact same pattern.
The Math of Jevons' Paradox
Let me show you exactly how this works with numbers.
Scenario: A Marketing Team
Year 1: High token prices
- Token cost: $10 per million
- Use case: AI-generated ad copy only (high ROI)
- Monthly usage: 2 million tokens
- Monthly cost: $20
Year 2: Token prices drop 80%
- Token cost: $2 per million
- New use cases now viable:
- Ad copy (original)
- Social media posts
- Email campaigns
- Blog post drafts
- Product descriptions
- A/B testing variations
- Monthly usage: 25 million tokens
- Monthly cost: $50
The result:
- Cost per token: DOWN 80%
- Token usage: UP 1,150%
- Total spending: UP 150%
JEVONS' PARADOX: THE MARKETING TEAM
================================================================
                         YEAR 1     YEAR 2     CHANGE
                         ------     ------     ------
Token price (per 1M)     $10.00     $2.00      -80%
Monthly tokens used      2M         25M        +1,150%
Monthly spending         $20        $50        +150%
================================================================
Cheaper prices enabled more use cases.
More use cases consumed more tokens.
Total spending increased despite lower unit costs.
================================================================
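The same percentages as a short Python sketch, using the scenario's numbers:

```python
# The marketing team scenario: unit cost down, usage up, total spend up.
year1_price, year1_tokens_m = 10.00, 2    # $ per 1M tokens, millions of tokens per month
year2_price, year2_tokens_m = 2.00, 25

year1_spend = year1_price * year1_tokens_m   # $20/month
year2_spend = year2_price * year2_tokens_m   # $50/month

def pct_change(old, new):
    return (new - old) / old * 100

print(f"Token price:    {pct_change(year1_price, year2_price):+,.0f}%")        # -80%
print(f"Token usage:    {pct_change(year1_tokens_m, year2_tokens_m):+,.0f}%")  # +1,150%
print(f"Total spending: {pct_change(year1_spend, year2_spend):+,.0f}%")        # +150%
```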
This is Jevons' Paradox in action.
Why Governance is Non-Optional
Here's the uncomfortable implication:
Without governance, your AI spending will grow. Forever.
Every time prices drop, usage expands. Every new use case that becomes viable gets adopted. Every efficiency gain gets reinvested into more AI usage.
This isn't bad. AI usage can generate tremendous value. But if you're not actively managing it, costs will spiral.
The Governance Gap
Most organizations have governance for:
- Capital expenditures over certain thresholds
- New software purchases
- Headcount additions
But AI spending often flies under the radar because:
- Individual transactions are small (fractions of a cent)
- It's classified as a subscription or utility
- It grows incrementally, not in big jumps
- No one person "approves" each use
By the time someone notices the bill, it's already 10x what it was.
What Governance Looks Like
Effective AI governance includes:
Visibility:
- Track token consumption by project/team/application
- Monitor spending in real-time
- Understand which use cases consume the most
Controls:
- Set budget caps with alerts (sketched in code after the framework diagram below)
- Require approval for high-usage projects
- Establish usage policies
Optimization:
- Right-size models for each task (don't use Opus when Sonnet works)
- Eliminate wasteful consumption
- Review and rationalize use cases regularly
Accountability:
- Charge costs back to business units
- Require ROI justification
- Regular spending reviews
AI GOVERNANCE FRAMEWORK
================================================================
VISIBILITY                        CONTROLS
+-- Token tracking                +-- Budget caps
+-- Real-time monitoring          +-- Approval thresholds
+-- Usage attribution             +-- Usage policies
+-- Cost dashboards               +-- Rate limiting

OPTIMIZATION                      ACCOUNTABILITY
+-- Model right-sizing            +-- Cost allocation
+-- Waste elimination             +-- ROI justification
+-- Use case review               +-- Regular audits
================================================================
Without this framework, Jevons' Paradox wins.
Your spending grows faster than your value.
================================================================
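To make "budget caps with alerts" concrete, here's a minimal Python sketch. The team names, caps, thresholds, and the record_usage() helper are all hypothetical; a real implementation would pull usage from your provider's billing or usage exports rather than being called by hand.

```python
from collections import defaultdict

# Minimal sketch of per-team budget caps with alert thresholds.
# Teams, caps, and prices are hypothetical; real usage data would come from
# your provider's usage/billing exports, not manual calls.

MONTHLY_CAPS_USD = {"marketing": 500.0, "engineering": 2_000.0, "support": 750.0}
ALERT_THRESHOLDS = (0.5, 0.8, 1.0)     # alert at 50%, 80%, and 100% of the cap

spend_usd = defaultdict(float)

def record_usage(team: str, tokens: int, price_per_million_usd: float) -> None:
    """Attribute token spend to a team and emit an alert each time a threshold is crossed."""
    before = spend_usd[team]
    spend_usd[team] += tokens / 1_000_000 * price_per_million_usd
    cap = MONTHLY_CAPS_USD[team]
    for threshold in ALERT_THRESHOLDS:
        if before < cap * threshold <= spend_usd[team]:
            print(f"ALERT: {team} has used {spend_usd[team] / cap:.0%} "
                  f"of its ${cap:,.0f} monthly AI budget")

# Example: a burst of agent traffic pushes the support team past 50% of its cap.
record_usage("support", tokens=200_000_000, price_per_million_usd=2.00)
```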
What This Means For You
If You're a CFO or Finance Lead
Jevons' Paradox means you cannot assume falling prices will reduce spending.
Practical recommendations:
1. Budget for growth, not decline. Even with falling token prices, plan for AI spending to increase. How much depends on your adoption trajectory.
2. Implement visibility before you need it. By the time spending is out of control, it's harder to rein in. Build tracking systems now.
3. Create approval thresholds. Just like capital expenditures, AI projects over certain spend levels should require approval.
4. Watch for agent multiplication. Autonomous AI agents can multiply consumption exponentially. Know where they're deployed and how much they consume.
5. Question "it's cheap" justifications. When someone says "it's only a few dollars per day," multiply by users, by agents, by months, and by the expansion that will happen when it works. (A quick back-of-the-envelope sketch follows this list.)
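For point 5, here's that back-of-the-envelope sketch in Python. Every number is a hypothetical assumption, which is exactly the point: plug in your own figures and see how quickly "a few dollars a day" compounds.

```python
# "It's only a few dollars a day" -- multiplied out. All numbers are hypothetical.
cost_per_user_per_day = 3.00      # the "it's cheap" figure
users = 200
agents = 5
cost_per_agent_per_day = 40.00    # agents run continuously, so they cost more per day
expansion_factor = 3              # usage tends to grow once the pilot proves itself

daily = users * cost_per_user_per_day + agents * cost_per_agent_per_day
annual = daily * 365 * expansion_factor

print(f"Daily spend:  ${daily:,.0f}")                       # $800
print(f"Annual spend (with expansion): ${annual:,.0f}")     # $876,000
```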
If You're a Tech-Forward Manager
You're probably the one proposing new AI use cases. Own the Jevons effect.
Practical recommendations:
1. Be honest about expansion. If a pilot works, usage will grow. Build that into your projections.
2. Track from day one. Don't wait until someone asks. Know your team's token consumption.
3. Anticipate the next use case. Every successful AI deployment leads to "what else can we use this for?" Plan for that.
4. Right-size your models. Using the most powerful model for every task is wasteful. Match model capability to task complexity.
5. Build off-switches. The ability to pause or limit AI usage is as important as the ability to deploy it. (A small sketch combining this with right-sizing follows below.)
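Here's a minimal Python sketch of points 4 and 5 together: route simple tasks to a cheaper model and refuse all calls when the off-switch is flipped. The model names, prices, and the call_model() stand-in are hypothetical placeholders, not a real provider API.

```python
# Sketch of model right-sizing plus an off-switch.
# Model names, prices, and call_model() are placeholders, not a real provider SDK.

AI_ENABLED = True                      # the off-switch: flip to False to pause all AI calls
MODELS = {
    "small": {"price_per_1m": 0.50},   # good enough for summaries, drafts, triage
    "large": {"price_per_1m": 10.00},  # reserved for complex, high-stakes tasks
}

def choose_model(task_complexity: str) -> str:
    """Match model capability to task complexity instead of defaulting to the biggest model."""
    return "large" if task_complexity == "complex" else "small"

def call_model(model: str, prompt: str) -> str:
    # Stand-in so the sketch runs; a real implementation would call your provider's SDK.
    return f"[{model}] response to: {prompt[:40]}"

def run_task(prompt: str, task_complexity: str = "simple") -> str:
    if not AI_ENABLED:
        raise RuntimeError("AI usage is paused by the off-switch")
    return call_model(choose_model(task_complexity), prompt)

print(run_task("Summarize this meeting transcript."))               # routed to "small"
print(run_task("Draft the acquisition risk analysis.", "complex"))  # routed to "large"
```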
The Silver Lining
Jevons' Paradox sounds scary. But it's not all bad news.
If usage is expanding, value is probably expanding too.
The reason people use more AI when it's cheaper is because AI is useful. The marketing team didn't expand to 25 million tokens per month for no reason. Those tokens are generating value.
The key is ensuring that value grows faster than cost.
That's what governance is for:
- Ensure high-value use cases get priority
- Eliminate low-value consumption
- Measure ROI (even imperfectly, as we discussed in Part 6)
- Make intentional decisions about expansion
Jevons' Paradox isn't a reason to avoid AI. It's a reason to manage AI deliberately.
Coming Up Next
Part 8: Latency, Throughput, and Why Your AI Feels Slow
We've talked about costs. Now let's talk about performance.
In Part 8, we'll cover:
- What latency and throughput actually mean
- Why some AI requests feel instant and others crawl
- How to optimize for speed when it matters
Your Homework for Part 8
Think about your AI usage:
- Which AI tasks need to be fast? (Customer-facing, real-time)
- Which can afford to be slow? (Background processing, batch jobs)
- Have you ever waited frustratingly long for an AI response?
Understanding the speed/cost trade-off is the next piece of the puzzle.
See you in Part 8.
As always, thanks for reading!