AI That Removes the Boring Parts
Everyone's talking about AI replacing jobs. Meanwhile, engineers are drowning in review tasks that nobody wants to do and nobody does properly.
The firewall team manually reviews every rule change. The WAF policy gets tuned once at deployment and never again. The security review backlog grows faster than it shrinks.
These aren't hard problems. They're tedious problems. And that's exactly where AI helps today.
The Tasks Nobody Does Properly
Here's a pattern I keep seeing in enterprise infrastructure teams:
The policy says: Every firewall rule must be reviewed by perimeter security before deployment.
The reality: 1,000 rules a month. Most are low-risk internal changes. The team rubber-stamps 80% of them because they don't have time to properly assess each one. The 20% that actually need scrutiny get the same cursory glance as everything else.
The policy says: WAF policies should be tuned based on production traffic patterns.
The reality: The WAF gets configured at deployment. Someone spends a week tuning it. Then it never gets touched again because who has time to continuously review logs and adjust policies?
These aren't failures of competence. They're failures of capacity. The boring work expands to fill all available time, leaving no room for the interesting work.
Where AI Actually Helps
AI is good at:
- Classification against known criteria
- Pattern matching across large datasets
- Continuous re-evaluation when context changes
- Flagging anomalies for human review
AI is not good at:
- Making judgment calls on novel situations
- Understanding business context that isn't documented
- Taking responsibility for decisions
The match is obvious. Give AI the classification and pattern matching. Keep humans for the judgment calls.
Example: Firewall Rule Classification
Here's a practical example we're working on.
The problem: Every firewall rule change requires perimeter security review. The team reviews 100% of requests manually, but most are routine internal changes that don't warrant deep analysis.
The approach:
- AI analyses the rule against context documentation - what networks are involved, what's the exposure, does it match known safe patterns
- Produces a classification: high, medium, or low risk
- A second AI checks the analysis for completeness
- Low/medium risk rules proceed automatically. High risk goes to human review.
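A minimal sketch of that flow in Python. The model calls are stubbed out, and the helper names (`classify_rule`, `check_classification`, `route_rule`), field names, and risk labels are illustrative assumptions, not a production implementation:

```python
from dataclasses import dataclass

@dataclass
class RuleChange:
    source_zone: str
    dest_zone: str
    port: int
    justification: str

def classify_rule(rule: RuleChange, context_docs: str) -> str:
    """First model: classify the rule against context documentation -
    which networks are involved, what the exposure is, whether it matches
    a known safe pattern. Returns 'low', 'medium', or 'high'.
    Placeholder body: defaults to 'high' until a real model is wired in."""
    return "high"

def check_classification(rule: RuleChange, risk: str, context_docs: str) -> bool:
    """Second model: checks the first analysis for completeness.
    Placeholder body: always escalates until a real model is wired in."""
    return False

def route_rule(rule: RuleChange, context_docs: str) -> str:
    """Decide whether a rule change needs a human or can proceed."""
    risk = classify_rule(rule, context_docs)
    if risk == "high" or not check_classification(rule, risk, context_docs):
        return "queue_for_perimeter_security"   # the ~20% that needs real scrutiny
    return "auto_approve"                       # low/medium risk, decision logged
```

The design choice that matters here is the default: anything the second model can't confirm goes to a human, not through.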
The outcome: Perimeter security reviews 20% of rules instead of 100%. But that 20% gets actual scrutiny instead of a rubber stamp. And when context changes - a network gets reclassified, a new threat emerges - the system can retrigger and re-evaluate existing rules.
The senior engineer's judgment isn't replaced. It's focused on the cases that actually need it.
Example: WAF Policy Tuning
The problem: WAF policies are tuned once at deployment based on initial traffic patterns. Production traffic evolves. The policy doesn't. Six months later, you're either blocking legitimate traffic or missing threats that the original tuning didn't anticipate.
The approach:
- External process pulls WAF logs from Splunk periodically
- AI analyses logs against current policy - what's being blocked, what's being allowed, what patterns are emerging
- A second AI validates the analysis
- System either auto-updates low-risk tuning or flags changes for the load-balancing team to review
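Sketched below under the same caveats - the Splunk pull, the risk tags, and the helper names are hypothetical placeholders for whatever the real integration looks like:

```python
import time

POLL_INTERVAL_SECONDS = 6 * 60 * 60   # illustrative: re-evaluate every six hours

def fetch_waf_logs(window_hours: int) -> list[dict]:
    """Pull recent WAF events from Splunk (e.g. via its REST API or SDK).
    Placeholder body: returns no events until wired to a real query."""
    return []

def propose_tuning(logs: list[dict], policy: dict) -> list[dict]:
    """First model: compare observed traffic against the current policy -
    what's blocked, what's allowed, what new patterns are emerging - and
    propose tuning changes, each tagged with an estimated risk. Placeholder."""
    return []

def validate_proposals(proposals: list[dict], policy: dict) -> list[dict]:
    """Second model: check each proposal for safety and completeness;
    anything it can't confirm gets bumped out of 'low'. Placeholder."""
    return proposals

def apply_change(policy: dict, change: dict) -> None:
    print("auto-applying low-risk tuning:", change)

def flag_for_review(change: dict) -> None:
    print("flagging for the load-balancing team:", change)

def tuning_loop(policy: dict) -> None:
    while True:
        logs = fetch_waf_logs(window_hours=6)
        for change in validate_proposals(propose_tuning(logs, policy), policy):
            if change.get("risk") == "low":
                apply_change(policy, change)
            else:
                flag_for_review(change)
        time.sleep(POLL_INTERVAL_SECONDS)
```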
The outcome: Continuous tuning instead of one-off tuning. The WAF policy evolves with production traffic. When logs show new patterns, the system re-evaluates automatically.
The security team isn't replaced. They're freed from the log analysis they were never going to do anyway.
The Pattern
Both examples share a structure:
- AI handles classification and analysis - the volume work that humans can't do consistently at scale
- AI validates AI - a second model checks for safety and completeness
- Humans handle exceptions - the cases that need judgment, not just pattern matching
- Context changes trigger re-evaluation - the system doesn't just run once, it continuously adapts
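That last point is the one most pipelines miss, so here's a rough sketch of it in isolation. The trigger itself - a webhook, a queue message, a scheduled diff of the context docs - is left open, and the callables are assumptions standing in for the pipeline and notification hook:

```python
from typing import Callable, Iterable

def on_context_change(
    changed_network: str,
    existing_rules: Iterable,                    # previously approved rule changes
    context_docs: str,                           # the updated context documentation
    reclassify: Callable[[object, str], str],    # e.g. route_rule from the sketch above
    escalate: Callable[[object], None],          # hypothetical hook to notify the team
) -> None:
    """Triggered when context changes - a network is reclassified, a new
    threat is documented. Re-runs the same classify -> check -> route
    pipeline over existing rules touching the changed network, so the
    system evaluates more than just new requests."""
    for rule in existing_rules:
        if changed_network not in (rule.source_zone, rule.dest_zone):
            continue
        if reclassify(rule, context_docs) == "queue_for_perimeter_security":
            # A rule that was auto-approved under the old context now needs
            # human eyes again.
            escalate(rule)
```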
This isn't AGI. It's not even particularly sophisticated. It's using AI for what it's genuinely good at: processing volume, maintaining consistency, and never getting bored.
The Question to Ask
Stop asking "will AI take my job?"
Start asking "which parts of my job do I never do properly because there isn't enough time?"
That's where AI helps today. Not replacing engineers, but removing the tasks that were never getting done right anyway.
The firewall team isn't worse at their jobs because AI classifies rule risk. They're better, because they can actually focus on the rules that matter.
The security team isn't obsolete because AI tunes WAF policies. They're more effective, because the policies actually get tuned.
The boring parts were never the job. They were just in the way.
How We're Exploring This
At NetAutomate, we're building these patterns into NetOrca Pack - our agentic AI layer that sits on top of NetOrca's declarative platform.
Pack adds three things that make this work at enterprise scale:
- Customer intent as the foundation - Every AI action traces back to a validated declaration of what the customer actually asked for. Not what someone typed into a prompt, but structured, schema-validated intent.
- Auditability - Every classification, every analysis, every decision is logged. When the auditor asks "why did this rule get auto-approved?", there's an answer.
- Iteration - When the AI gets it wrong, the system learns. Service owners can add comments, retrigger with feedback, and the loop improves over time.
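To make the first point concrete, here's what a schema-validated declaration might look like. The field names and schema are invented for illustration - this is not NetOrca's actual declaration format - using the `jsonschema` library:

```python
from jsonschema import validate, ValidationError   # pip install jsonschema

# Hypothetical schema - invented for illustration, not NetOrca's format.
FIREWALL_REQUEST_SCHEMA = {
    "type": "object",
    "required": ["application", "source_zone", "dest_zone", "port", "justification"],
    "properties": {
        "application": {"type": "string"},
        "source_zone": {"type": "string"},
        "dest_zone": {"type": "string"},
        "port": {"type": "integer", "minimum": 1, "maximum": 65535},
        "justification": {"type": "string", "minLength": 10},
    },
}

declared_intent = {
    "application": "billing-api",
    "source_zone": "internal-app",
    "dest_zone": "internal-db",
    "port": 5432,
    "justification": "Billing API needs read access to the reporting DB",
}

try:
    validate(instance=declared_intent, schema=FIREWALL_REQUEST_SCHEMA)
except ValidationError as err:
    # Malformed intent never reaches the AI layer; it's rejected up front.
    raise SystemExit(f"invalid declaration: {err.message}")

# Only validated, structured intent is handed to the classification pipeline,
# and every downstream AI decision is logged against this declaration.
```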
The examples above - firewall classification, WAF tuning - are patterns we're actively exploring with Pack. The goal isn't to replace the infrastructure team. It's to remove the 80% they were never going to do properly anyway, so they can focus on the 20% that actually matters.