There’s a certain kind of exhaustion that doesn’t come from hard problems.
It comes from repeated problems.
The kind you’ve solved before. The kind you’ll solve again tomorrow. The kind that makes you think, “Why am I still doing this by hand?”
Over the past few years—whether in cybersecurity operations, advisory work, or just wrangling my own digital life—I’ve noticed something: most people don’t struggle to build automation.
They struggle to choose the right things to automate.

So here’s a methodology I’ve been refining. It’s practical. It’s testable. And it’s surprisingly reliable.
I call it FRICT.
Step 1: Run the FRICT Filter
Before you automate anything, run it through this filter.
If a task is:
- Frequent (weekly or more often)
- Rules-based (clear decision criteria)
- Information-moving (copy/paste, reformatting, summarizing, transforming)
- Checklist-driven (same steps each time)
- Templated (same structure, different inputs)
…it’s a strong automation candidate.
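As a rough sketch, the filter amounts to a five-item checklist. Here is one way to encode it (the `Task` type and field names are my own illustration, not part of the method as published):

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A candidate task, judged against the five FRICT criteria."""
    frequent: bool            # done weekly or more often
    rules_based: bool         # clear decision criteria
    information_moving: bool  # copy/paste, reformatting, summarizing
    checklist_driven: bool    # same steps each time
    templated: bool           # same structure, different inputs

def passes_frict(task: Task) -> bool:
    """A task is a strong automation candidate only if all five hold."""
    return all([
        task.frequent,
        task.rules_based,
        task.information_moving,
        task.checklist_driven,
        task.templated,
    ])

# Example: a weekly status report ticks every box
weekly_report = Task(True, True, True, True, True)
print(passes_frict(weekly_report))  # → True
```

Treating FRICT as an all-or-nothing gate keeps it honest: one missing criterion is usually where the automation later breaks.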
Why This Works
High leverage tends to live inside repeated, structured work.
Think about your week:
- Generating recurring reports
- Moving data between systems
- Creating customer follow-ups
- Reviewing logs for defined patterns
- Reformatting notes into documentation
These aren’t “hard” problems. They’re structured problems. And structured problems are automation-friendly by nature.
In cybersecurity operations, we’ve seen this repeatedly. Log triage. Ticket enrichment. Asset tagging. Compliance evidence collection. They’re not intellectually trivial—but they are structured.
And structure is oxygen for automation.
The Caveat
Some frequent tasks still require deep contextual judgment. Executive communications. Incident response war rooms. Strategic advisory decisions.
Those may be frequent—but they’re not always safely automatable.
FRICT gets you to the right neighborhood. It doesn’t mean you bulldoze the house.
Step 2: Score Before You Build
This is where most people go wrong.
They automate what’s annoying, not what’s valuable.
Before building anything, score the candidate task across five axes, 0–5 each:
- Time saved per month
- Error reduction
- Risk if wrong (this score is subtracted, so lower is better)
- Data access feasibility
- Repeatability
Then use this formula:
(Time + Error + Repeatability + Feasibility) − Risk ≥ 10
If it scores 10 or higher, it’s worth serious consideration.
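The formula is trivial to encode. A minimal sketch, assuming the 0–5 axis scale above (the function names and input validation are my own additions):

```python
def automation_score(time_saved: int, error_reduction: int,
                     repeatability: int, feasibility: int,
                     risk_if_wrong: int) -> int:
    """Compute (Time + Error + Repeatability + Feasibility) − Risk.

    Each axis is scored 0–5; risk is subtracted, so riskier
    tasks score lower overall.
    """
    for axis in (time_saved, error_reduction, repeatability,
                 feasibility, risk_if_wrong):
        if not 0 <= axis <= 5:
            raise ValueError("each axis must be scored 0-5")
    return (time_saved + error_reduction + repeatability
            + feasibility) - risk_if_wrong

def worth_building(score: int, threshold: int = 10) -> bool:
    """Apply the >= 10 cutoff."""
    return score >= threshold

# A weekly client status report, scored as in the example below:
score = automation_score(time_saved=4, error_reduction=3,
                         repeatability=5, feasibility=4,
                         risk_if_wrong=2)
print(score, worth_building(score))  # → 14 True
```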
Why This Works
This forces you to think in terms of:
- ROI
- Operational safety
- Feasibility
- System access realities
In security consulting, we’ve learned this lesson the hard way. Automating the wrong control can introduce more risk than it removes. Automating something that saves 20 minutes a month but takes 12 hours to build? That’s hobby work, not leverage.
This scoring model prevents premature enthusiasm.
It also forces you to confront a truth:
Just because something is automatable doesn’t mean it’s worth automating.
A Quick Example
Let’s say you generate a weekly client status report.
FRICT check:
- Frequent? ✔ Weekly
- Rules-based? ✔ Same metrics
- Information-moving? ✔ Pulling data from systems
- Checklist-driven? ✔ Same sections
- Templated? ✔ Same structure
Score it:
- Time saved/month: 4
- Error reduction: 3
- Risk if wrong: 2
- Data feasibility: 4
- Repeatability: 5
Formula:
(4 + 3 + 5 + 4) − 2 = 14
That’s automation gold.
Now compare that to “automate strategic roadmap planning.”
FRICT? Weak.
Score? Probably low repeatability, high risk.
That’s a human job.
The Subtle Insight: Automation Is Risk Management
In cybersecurity, we obsess over reducing human error.
But here’s the uncomfortable truth:
Most organizations still rely heavily on manual, repetitive, error-prone workflows.
Automation isn’t about convenience.
It’s about:
- Reducing variance
- Increasing consistency
- Making controls measurable
- Freeing human judgment for non-templated work
The irony? The more strategic your role becomes, the more your value depends on eliminating the structured tasks beneath you.
FRICT helps you find them.
The scoring model helps you prioritize them.
Together, they create something better than random automation experiments.
They create a system.
What This Looks Like in Practice
If you want to apply this method this week:
- List every recurring task you do for 7 days.
- Mark the ones that pass FRICT.
- Score the top five.
- Only build the ones that cross the ≥10 threshold.
- Re-evaluate quarterly.
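If you keep your candidates in a simple list, the weekly triage reduces to a filter-and-sort. A hypothetical sketch (the task names and scores are invented for illustration):

```python
# Each candidate: (name, time, error, repeatability, feasibility, risk),
# every axis scored 0-5 as in the method.
candidates = [
    ("weekly client status report",   4, 3, 5, 4, 2),
    ("log triage for known patterns", 5, 4, 5, 3, 3),
    ("strategic roadmap planning",    2, 1, 1, 2, 5),
]

def score(c):
    """(Time + Error + Repeatability + Feasibility) − Risk."""
    _name, time, error, rep, feas, risk = c
    return (time + error + rep + feas) - risk

# Keep only candidates at or above the ≥10 threshold, best first.
build_list = sorted(
    (c for c in candidates if score(c) >= 10),
    key=score, reverse=True,
)
for c in build_list:
    print(f"{score(c):>2}  {c[0]}")
```

Run quarterly, this makes the "re-evaluate" step a two-minute exercise instead of a debate.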
You’ll be surprised how quickly this surfaces 2–3 high-leverage opportunities.
And here’s the part people don’t expect:
Once you start doing this intentionally, you begin redesigning your work to be more automatable.
That’s when things get interesting.
The Contrary View
There’s one important caveat.
Some strategic automations score low at first—but unlock long-term leverage.
Examples:
- Building a normalized data model
- Creating unified dashboards
- Establishing an API integration layer
They may not immediately score ≥10.
But they create compounding effects.
That’s where experience comes in. Use the formula as a guardrail—not a prison.
Final Thought: Automate the Machine, Not the Mind
If you automate everything, you lose your edge.
If you automate nothing, you waste your edge.
The sweet spot is this:
Automate the predictable.
Protect the contextual.
Elevate the human.
FRICT isn’t magic.
But it’s not random either.
And in a world racing toward AI-first everything, having a disciplined way to decide what should be automated may be the most valuable skill of all.
Method Summary
FRICT Filter
Frequent + Rules-based + Information-moving + Checklist-driven + Templated
Scoring Formula
(Time + Error + Repeatability + Feasibility) − Risk ≥ 10
Now I’m curious:
What’s one task you’ve been doing repeatedly that probably shouldn’t require your brain anymore?
* AI tools were used as a research assistant for this content, but the writing and moderation are human. The included images are AI-generated.