AI Factories Are Coming. Who Controls Them?

NVIDIA's GTC 2026 keynote made one thing crystal clear: AI isn't a feature anymore — it's infrastructure. Jensen Huang didn't talk about models or chatbots. He talked about AI factories — purpose-built facilities that manufacture intelligence the way power plants manufacture electricity.

The numbers are staggering. The Vera Rubin platform promises 35-50x performance leaps. Cost per token is collapsing. Entire data centres are being redesigned as single computers, with rack-scale liquid cooling and NVLink fabric connecting thousands of GPUs into one coherent system.

But here's what Jensen didn't talk about: who actually governs what these factories produce?

The Raw Capability Problem

NVIDIA is solving the compute problem. They're doing it brilliantly. But raw capability without operational control is like building a power grid with no circuit breakers.

Consider what's happening right now in enterprise infrastructure:

  • Teams are deploying AI agents that make real changes to production systems
  • These agents operate in continuous reasoning loops — assess, decide, execute, verify
  • The speed of AI-driven change is orders of magnitude faster than human review cycles
  • Traditional change management (ServiceNow tickets, CAB meetings, manual approvals) can't keep up

Jensen called this "agentic AI" — autonomous agents doing real work. He described it as the next wave. But he framed it as a compute problem. It's not. It's a governance problem.

Intelligence Without Governance Is Just Fast Chaos

When an AI agent can assess a security policy, generate a firewall configuration, and deploy it to production in seconds, the bottleneck isn't compute. It's trust.

  • Who validated the agent's decision?
  • Against what policy was it assessed?
  • What happens when two agents make conflicting changes?
  • How do you audit a decision that happened in milliseconds?

These aren't theoretical questions. They're the questions every enterprise CISO, CTO, and compliance officer is asking right now. And NVIDIA's answer — more compute, faster inference, cheaper tokens — doesn't address any of them.

The Control Plane Gap

In networking, we solved this decades ago. The data plane moves packets. The control plane decides where they go. You'd never build a network where every router made independent forwarding decisions with no coordination. That's how you get routing loops, and routing loops bring down networks.

Yet that's exactly what we're building with agentic AI. Thousands of AI agents, each making independent decisions, each operating on their own context, with no unified control plane governing the overall system.

What's needed is an AI control plane — a layer that:

  • Declares intent — teams state what they need, not how to do it
  • Assesses against policy — every AI-driven action is validated before execution
  • Provides independent verification — a second AI assessment catches what the first missed
  • Maintains auditability — every decision, every reasoning chain, fully logged
  • Enables human override — the AI operates autonomously within bounds, but humans set the bounds
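The last point — autonomy within human-set bounds — can be sketched in a few lines. This is an illustrative toy, not any real product's API: the action names, the allowlist, and the queue are all assumptions.

```python
# Hypothetical sketch of bounded autonomy: humans set the bounds (which
# action types the agent may execute unaided); the agent operates freely
# inside them and escalates everything else for human review.

AUTONOMOUS_ACTIONS = {"open_port", "close_port"}       # human-set bounds
ESCALATION_QUEUE: list[dict] = []                      # human override path
AUDIT_LOG: list[dict] = []                             # every decision logged

def execute(action: dict) -> str:
    AUDIT_LOG.append(action)
    if action["type"] in AUTONOMOUS_ACTIONS:
        return "executed"                              # within bounds: act
    ESCALATION_QUEUE.append(action)                    # outside bounds: defer
    return "escalated"

print(execute({"type": "open_port", "port": 443}))     # executed
print(execute({"type": "delete_vpc", "id": "vpc-1"}))  # escalated
```

The point of the shape: the agent never decides its own bounds. Widening `AUTONOMOUS_ACTIONS` is a human act, and every action — executed or escalated — lands in the audit log.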

The Declarative Model

The key insight is that governance doesn't have to mean slow. It has to mean declarative.

Instead of humans reviewing every change, humans define the policy. The AI assesses every change against that policy. Automatically. In milliseconds. At scale.

This is exactly what we've built with NetOrca. A team declares: "I need payments-api to communicate with auth-service on port 443." NetOrca's AI engine, Pack:

  1. Assesses the request against security policy
  2. Independently verifies the assessment with a second AI review
  3. Deploys the change to the target platform if approved
  4. Logs the full decision chain for audit

No ServiceNow ticket. No CAB meeting. No two-week wait. But also no unvalidated change reaching production.
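The four steps above can be sketched end to end. To be clear, this is not NetOrca's or Pack's actual API — it's a minimal Python illustration of the pattern, where the policy (an allowlist of encrypted ports), the function names, and the disagreement-handling rule are all assumptions.

```python
# Hypothetical policy: only encrypted ports are allowed between services.
ALLOWED_PORTS = {443, 8443}

def assess(request: dict) -> dict:
    """Step 1: assess the request against security policy."""
    ok = request["port"] in ALLOWED_PORTS
    return {"approved": ok, "reason": "encrypted port" if ok else "unencrypted port"}

def independent_verify(request: dict, first: dict) -> dict:
    """Step 2: a second, independent assessment. Here both reviewers apply
    the same rule; in practice the second pass would use a different model
    so it can catch what the first missed. Disagreement blocks the change."""
    second = assess(request)
    agreed = second["approved"] == first["approved"]
    return {"approved": second["approved"] and agreed,
            "reason": second["reason"] if agreed else "reviewers disagree"}

def deploy(request: dict) -> str:
    """Step 3 (placeholder): push the change to the target platform."""
    return f"deployed {request['source']}->{request['destination']}:{request['port']}"

def handle(request: dict, log: list) -> str:
    first = assess(request)                                         # step 1
    final = independent_verify(request, first)                      # step 2
    outcome = deploy(request) if final["approved"] else "rejected"  # step 3
    log.append({"request": request, "first": first,
                "final": final, "outcome": outcome})                # step 4: full chain
    return outcome

log: list[dict] = []
print(handle({"source": "payments-api", "destination": "auth-service",
              "port": 443}, log))
# deployed payments-api->auth-service:443
```

Note that rejection and approval take the same path through the same log: the audit trail records the reasoning chain either way, which is what makes millisecond decisions reviewable after the fact.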

And critically — this isn't a point-in-time assessment. It's continuous posture validation. The same rules that were approved six months ago get re-assessed against today's security policy, automatically, as often as you need — daily, weekly, on every policy update.

Security posture isn't a snapshot. It's a state that needs to be maintained over time. When the policy changes, every existing rule is re-evaluated. What was low-risk last quarter might be high-risk today. The control plane catches that drift before it becomes an incident.
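Drift detection is the easiest part of this to show concretely. A minimal sketch, assuming a toy rule store and a port-allowlist policy (none of this reflects a real product's data model):

```python
# Hypothetical sketch of continuous posture validation: rules approved in
# the past are re-assessed whenever the policy changes, and any rule whose
# verdict flips is flagged as drift before it becomes an incident.

approved_rules = [
    {"id": "r1", "port": 443},
    {"id": "r2", "port": 8080},   # approved last quarter
]

def assess(rule: dict, allowed_ports: set) -> bool:
    return rule["port"] in allowed_ports

def reassess_all(rules: list, allowed_ports: set) -> list:
    """Runs on every policy update, or on a schedule (daily, weekly).
    Returns the ids of previously approved rules that no longer pass."""
    return [r["id"] for r in rules if not assess(r, allowed_ports)]

print(reassess_all(approved_rules, {443, 8080}))  # old policy: []  (no drift)
print(reassess_all(approved_rules, {443}))        # new policy: ['r2']  (drift)
```

Nothing about `r2` changed — the policy did. That's the asymmetry the snapshot model misses: a rule's risk is a function of the current policy, so validation has to re-run whenever the policy does.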

Cost Per Token Changes Everything

This is where NVIDIA's compute story and the governance story converge.

When inference was expensive, running two AI assessments on every infrastructure change felt wasteful. When cost per token collapses — as Jensen demonstrated it will — running ten assessments on every change becomes trivially cheap.

The economics flip: it becomes more expensive NOT to have AI governance than to have it.

Every unvalidated change that causes an outage costs orders of magnitude more than the tokens required to assess it. Every compliance finding from a missed policy check costs more than a year of AI-driven assessment.
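The back-of-envelope arithmetic is worth making explicit. Every number below is an assumption chosen for illustration — token prices, assessment sizes, change volume, and outage costs vary widely — but the orders of magnitude are the point:

```python
# Illustrative numbers only: all prices, volumes, and outage figures
# are assumptions, not measurements.

price_per_million_tokens = 2.00      # USD, assumed inference price
tokens_per_assessment = 5_000        # assumed size of one policy review
assessments_per_change = 10          # run ten reviews on every change
changes_per_year = 100_000

governance_cost = (changes_per_year * assessments_per_change
                   * tokens_per_assessment / 1_000_000
                   * price_per_million_tokens)

outage_cost = 300_000                # USD, assumed cost of one outage
print(f"governance: ${governance_cost:,.0f}/yr")            # $10,000/yr
print(f"one outage covers {outage_cost / governance_cost:.0f} years of it")
```

Under these assumptions, ten AI reviews on every one of 100,000 yearly changes costs about $10,000 — and a single prevented outage pays for thirty years of it. Halve the token price and the ratio doubles again, which is why collapsing inference cost flips the economics.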

Cheaper compute doesn't just enable more AI. It enables more trustworthy AI. Because you can afford to verify, double-check, and audit at a scale that was previously impossible.

The Real Industrial Revolution

Jensen is right that we're entering a new industrial revolution. But industrial revolutions aren't defined by their engines — they're defined by their control systems.

The steam engine existed for decades before it transformed industry. What changed wasn't the engine. It was the governor — James Watt's centrifugal governor that automatically regulated steam flow. Without it, engines ran too fast and exploded. With it, they powered factories reliably for a century.

AI factories need their governor. Not to slow them down — to make them safe enough to run at full speed.

The companies that build the AI control plane for this new infrastructure won't just participate in the industrial revolution. They'll be the ones that make it actually work.


The gap between AI capability and AI governance is the largest unsolved problem in enterprise infrastructure. The compute is coming. The control plane is what's missing. That's what we're building at NetOrca.