
AI Factories Are Coming. Who Controls Them?

NVIDIA's GTC 2026 keynote made one thing crystal clear: AI isn't a feature anymore — it's infrastructure. Jensen Huang didn't talk about models or chatbots. He talked about AI factories — purpose-built facilities that manufacture intelligence the way power plants manufacture electricity.

The numbers are staggering. The Vera Rubin platform promises 35-50x performance leaps. Cost per token is collapsing. Entire data centres are being redesigned as single computers, with rack-scale liquid cooling and NVLink fabric connecting thousands of GPUs into one coherent system.

But here's what Jensen didn't talk about: who actually governs what these factories produce?

Building a Cost Optimisation Loop for AI Agents

AI models are getting cheaper. But most organisations can't tell you what their AI agent actually costs per outcome.

They see monthly bills. They see token counts. What they don't see is the per-outcome figure: "It cost us £0.47 to deploy that load balancer, and £2.30 to configure that firewall rule."

Without that visibility, you can't optimise. You're just hoping costs go down as models get cheaper.

There's a better way: build a system that finds the cheapest path to the correct outcome, automatically.
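One way to sketch that system is a model cascade: try the cheapest model first, verify the outcome, and escalate only when verification fails, recording the accumulated spend per outcome as you go. Everything below is illustrative — the model names, per-token prices, and the `run_task`/`verify` stubs are placeholders standing in for real agent calls and real outcome checks.

```python
# Model cascade sketch: cheapest model first, escalate on failed verification.
# Prices and model names are hypothetical, not real vendor quotes.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # hypothetical price, in pounds

# Ordered cheapest-first; we walk down the ladder only on failure.
LADDER = [
    Model("small", 0.0002),
    Model("medium", 0.003),
    Model("large", 0.03),
]

def run_task(task: str, model: Model) -> tuple[str, int]:
    """Placeholder for a real agent call; returns (result, tokens_used)."""
    tokens = 500 * (LADDER.index(model) + 1)
    result = "ok" if model.name != "small" else "fail"  # pretend small fails
    return result, tokens

def verify(result: str) -> bool:
    """Placeholder outcome check (tests, health probe, policy lint...)."""
    return result == "ok"

def cheapest_correct(task: str) -> dict:
    """Walk the ladder, accumulating spend, until verification passes."""
    total_cost = 0.0
    for model in LADDER:
        result, tokens = run_task(task, model)
        total_cost += tokens / 1000 * model.cost_per_1k_tokens
        if verify(result):
            return {"model": model.name, "cost": round(total_cost, 4)}
    raise RuntimeError("no model produced a verified outcome")

print(cheapest_correct("deploy load balancer"))
```

The key design point is that cost attribution falls out for free: because every attempt is charged to the outcome that triggered it, you get the "£0.47 per load balancer" number instead of an undifferentiated monthly bill.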

AI That Removes the Boring Parts

Everyone's talking about AI replacing jobs. Meanwhile, engineers are drowning in review tasks that nobody wants to do and nobody does properly.

The firewall team manually reviews every rule change. The WAF policy gets tuned once at deployment and never again. The security review backlog grows faster than it shrinks.

These aren't hard problems. They're tedious problems. And that's exactly where AI helps today.
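A concrete way to start on the firewall-review backlog is to carve off the tedious part deterministically before anything reaches a human (or an AI reviewer): auto-approve obviously-safe changes and queue only the risky ones. This is a minimal triage sketch under assumed conventions — the field names, the risky-port list, and the rule-change shape are illustrative, not from any real firewall API.

```python
# Triage sketch for firewall rule changes: auto-approve the obviously
# safe, queue only the risky for review. Field names are illustrative.

RISKY_PORTS = {22, 23, 3389}  # SSH, Telnet, RDP — remote-access ports

def triage(change: dict) -> str:
    """Return 'auto-approve' or 'needs-review' for one rule change."""
    if change.get("action") == "deny":
        return "auto-approve"          # tightening access is low risk
    if change.get("source") == "0.0.0.0/0":
        return "needs-review"          # opening a rule to the internet
    if change.get("port") in RISKY_PORTS:
        return "needs-review"          # remote-access ports need eyes
    return "auto-approve"

queue = [
    {"action": "allow", "source": "10.0.0.0/8", "port": 443},
    {"action": "allow", "source": "0.0.0.0/0", "port": 22},
]
print([triage(c) for c in queue])  # ['auto-approve', 'needs-review']
```

The AI steps in on the `needs-review` residue — summarising intent, checking against policy, drafting a recommendation — while the trivial bulk never consumes anyone's attention at all.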

You Already Trust Code You've Never Read

When was the last time you reviewed the assembly output of your compiler?

I'm guessing never. You write Python or TypeScript or Go, hit build, and trust that something correct comes out the other end. The compiler is a black box. You don't understand its internals. You don't need to.

As InfoWorld notes, when high-level languages first required compilers, many thought no machine could write better assembly than humans. That concern was put to rest long ago.

So why do we treat AI differently?

MCP: The New Vendor Lock-In, Dressed Up as a Standard

Every few years, the infrastructure industry invents a new way to sell you complexity.

First it was hardware appliances: proprietary boxes that went end-of-life every three years, forcing expensive upgrades. Then it was proprietary software platforms: lock-in disguised as "integrated solutions." Now it's MCP.

The Model Context Protocol is being positioned as the "USB-C for AI": a universal standard for connecting AI agents to external systems. 97 million SDK downloads. Big tech backing. The narrative says it's already won.

I don't buy it.

2026: The Year AI Agents Finally Have to Prove Themselves

The AI industry is having an accountability moment.

According to recent research, only 6-8% of enterprises have AI agents deployed in production. Meanwhile, Gartner predicts 40% of enterprise applications will embed AI agents by the end of 2026.

That's a massive gap to close in 12 months. And it raises a question the industry has been avoiding: what does a production-ready AI agent actually look like?