There’s a quiet shift happening.
Not in what we can automate—but in what we shouldn’t.
For the last two years, the conversation has been dominated by capability: AI copilots, agent stacks, workflow automation, local models, prompt engineering. And to be fair, the upside is real. Organizations are seeing measurable gains—faster processes, reduced manual work, and improved efficiency.
But something is off.
We’re optimizing for throughput, not outcomes.
And that’s where the math breaks.

The Problem: Automation Without a Cost Model
Here’s the pattern I keep seeing:
A rational, capable professional looks at a task and thinks:
“This is automatable.”
And they’re right.
So they build a workflow. Or wire up an agent. Or duct-tape together prompts and APIs.
And it works—kind of.
But what’s missing isn’t technical sophistication.
It’s economics.
Specifically: expected value.
Because automation isn’t free. It just hides its costs better.
The Missing Equation
Most automation decisions today implicitly assume:
If it saves time, it creates value.
That assumption is wrong.
A more accurate model looks like this:
Expected Value = (Time Saved × Value of Time)
– (Error Cost × Error Rate)
– (Review Time × Value of Time)
– (Trust Overhead × Value of Time)
We’re very good at estimating the first term.
We’re terrible at estimating the rest.
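To make this concrete, here’s the model as a small Python sketch. It prices review and trust time at the same hourly rate. Every number below is made up; plug in your own.

```python
# A minimal sketch of the expected-value model above.
# All inputs are illustrative, not measurements.

def automation_ev(
    time_saved_hours: float,      # manual hours avoided per run
    value_of_time: float,         # $/hour of your time
    error_cost: float,            # $ impact when a failure slips through
    error_rate: float,            # probability of failure per run
    review_hours: float,          # hours spent checking each output
    trust_overhead_hours: float,  # double-checking, re-running, worrying
) -> float:
    gain = time_saved_hours * value_of_time
    risk = error_cost * error_rate
    overhead = (review_hours + trust_overhead_hours) * value_of_time
    return gain - risk - overhead

# A task that "obviously" saves 2 hours at $100/hr:
print(automation_ev(2.0, 100, error_cost=5_000, error_rate=0.02,
                    review_hours=0.5, trust_overhead_hours=0.25))
# 200 - 100 - 75 = $25 per run. Barely positive.
```

Notice what dominated: not the time saved, but everything we usually leave out.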
The Hidden Costs (Where the Model Breaks)
1. Error Cost Is Non-Linear
Not all mistakes are equal.
- A formatting error in a report? Annoying.
- A hallucinated legal clause? Expensive.
- A silent data corruption in a financial model? Catastrophic.
What matters isn’t just how often the system fails—but how bad it is when it does.
There’s emerging research showing that automation risk scales with both failure probability and the severity of downstream impact—not just model accuracy.
Yet most people treat errors as a rounding error.
They’re not.
They’re the whole game.
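A toy calculation shows why. Both failure profiles below are invented; the asymmetry is the point.

```python
# Expected loss = probability x severity, per failure mode.
# Numbers are invented for illustration.

def expected_loss(modes):
    return sum(p * cost for p, cost in modes)

formatting_errors = [(0.05, 50)]        # 5% chance of a $50 fix
silent_corruption = [(0.001, 250_000)]  # 0.1% chance of a $250k incident

print(round(expected_loss(formatting_errors), 2))  # 2.5  ($2.50 per run)
print(round(expected_loss(silent_corruption), 2))  # 250.0 ($250 per run)
# The second system makes 50x fewer errors and costs 100x more.
```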
2. Review Time Eats Your Gains
This one is subtle.
You automate a task that used to take 30 minutes.
Now it takes 5 minutes to run… and 15 minutes to check.
Did you save time?
Maybe. Maybe not.
In practice, verification burden is one of the largest—and least modeled—costs in AI workflows. In some cases, expected productivity gains actually reverse once review time is included.
We don’t eliminate work.
We shift it—from execution to validation.
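Put numbers on it. The 1-in-10 redo rate below is an assumption; yours may be better or worse.

```python
# The 30-minute task above, with a hypothetical redo rate:
# assume 1 output in 10 fails review and gets redone by hand.
manual_minutes = 30
run_minutes = 5
review_minutes = 15

nominal_saving = manual_minutes - (run_minutes + review_minutes)
print(nominal_saving)  # 10 minutes per run. Looks like a win.

# Per 10 runs, with one manual redo:
automated = 10 * (run_minutes + review_minutes) + 1 * manual_minutes  # 230
manual = 10 * manual_minutes                                          # 300
print((manual - automated) / 10)  # 7.0 minutes per run. Smaller than it looked.
```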
3. Trust Overhead Is Real Work
This is the one nobody talks about.
If you don’t fully trust the system, you:
- Double-check outputs
- Cross-reference sources
- Re-run tasks “just to be sure”
- Keep a mental model of where it might fail
That cognitive load is work.
And it compounds.
Over time, low-trust automation becomes a tax on attention.
4. Integration Friction Is the Silent Killer
Most automation doesn’t fail because the model is bad.
It fails because it doesn’t fit cleanly into how work actually happens.
- Edge cases break flows
- Inputs aren’t as structured as expected
- Outputs require translation into other systems
Even when tools promise 4–5x productivity gains, those gains assume ideal conditions that rarely exist in real workflows.
Reality is messier.
Why This Matters Now
We’re entering a new phase.
The first wave of AI adoption asked:
“What can I automate?”
The current wave is asking:
“How do I automate more?”
But the next—and more important—question is:
“What should I not automate?”
Because here’s the uncomfortable truth:
A large percentage of automation efforts don’t produce meaningful value. Some estimates suggest the majority of generative AI pilots fail to deliver expected outcomes.
Not because the technology doesn’t work.
But because the economics don’t.
The Inversion: Start With Failure
A better approach is to invert the problem.
Instead of asking:
“How can I automate this?”
Ask:
“How does this automation fail—and what does that cost me?”
Work backward:
1. Enumerate failure modes
   - Wrong output
   - Partial output
   - Misleading confidence
   - Silent failure
2. Assign cost to each
   - Time
   - Money
   - Reputation
   - Decision quality
3. Estimate frequency
   - Not ideal-case performance
   - Real-world, messy-input performance
4. Add review and trust costs
   - Time to validate
   - Cognitive overhead
Only then do you compare against the upside.
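Here’s that worksheet as a sketch. Every frequency and cost below is a placeholder; the structure is what matters.

```python
# Failure-first worksheet, sketched. All values are placeholders.

failure_modes = {
    # mode: (real-world frequency per run, $ cost when it happens)
    "wrong output":          (0.03,    200),
    "partial output":        (0.05,     50),
    "misleading confidence": (0.01,  2_000),
    "silent failure":        (0.005, 10_000),
}

expected_failure_cost = sum(p * c for p, c in failure_modes.values())  # ~$78.50

value_of_time = 100                     # $/hour
upside = 1.5 * value_of_time            # 1.5 hours saved per run
review_and_trust = 0.5 * value_of_time  # validation + cognitive overhead

print(f"${upside - expected_failure_cost - review_and_trust:.2f} per run")
# $21.50 per run -- and only now do you compare it against the upside.
```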
A Practical Heuristic
If you don’t want to build a full model, use this (there’s a code version after the lists below):
Only automate tasks where:
- Errors are cheap
- Outputs are easy to verify
- Trust can be high (or irrelevant)
This is why automation works so well in:
- Data transformation
- Formatting
- Low-stakes content generation
And struggles in:
- Strategy
- Legal reasoning
- Financial decision-making
- Anything with asymmetric downside
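The same heuristic as a gate, if you want it in code. The three inputs are judgment calls, not measurements.

```python
# The heuristic as a gate. Any single "no" is a veto.
def should_automate(errors_are_cheap: bool,
                    easy_to_verify: bool,
                    trust_high_or_irrelevant: bool) -> bool:
    return errors_are_cheap and easy_to_verify and trust_high_or_irrelevant

print(should_automate(True, True, True))    # formatting: go ahead
print(should_automate(False, True, False))  # legal reasoning: don't
```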
Where This Connects to FRICT
If FRICT helped answer:
“Which problems are worth solving?”
Then this is the next layer:
“Which solutions are worth automating?”
It’s not just selection logic anymore.
It’s economic discipline.
Because automation isn’t a capability problem.
It’s a capital allocation problem—just with time, attention, and trust instead of dollars.
The Takeaway
We’re very early in understanding the real economics of AI-assisted work.
Right now, most people are:
- Overestimating gains
- Underestimating costs
- Ignoring variance
And that combination leads to systematically bad decisions.
The fix isn’t more tooling.
It’s better thinking.
Before you automate your next workflow, ask one simple question:
“If this fails quietly, how expensive is that?”
If you don’t like the answer, you already know what to do.
* AI tools were used as a research assistant for this piece; the moderation and writing are human. The included images are AI-generated.