Why We Don’t Use Kubernetes for Live Video at GlobalM

Kubernetes is everywhere. It’s become the default choice for modern application orchestration, and fair enough: for certain use cases, it makes sense. But when you’re building a global, ultra-low-latency network for live video contribution and distribution, you have to make decisions that fit the problem, not the hype.

At GlobalM, we chose not to use Kubernetes. Instead, we built our orchestration layer directly on top of systemd, and we’re convinced it was the right call.

Here’s why.

Live video isn’t your average workload

Live video isn’t stateless. It’s not transactional. It’s not the kind of thing that can tolerate “eventual consistency” or 5 seconds of downtime while a container restarts. It’s persistent, it’s sensitive to jitter and latency, and it demands deterministic behaviour under load.

When a broadcaster is pushing out a live feed, say from a tier-one football match or a breaking news event, they’re not going to wait for Kubernetes to decide it’s time to reschedule a node. That’s unacceptable. And it’s not just about failover time; it’s about the complexity, networking, debugging, and added operational overhead that come with managing a system that wasn’t built for this type of job.

Kubernetes brings unnecessary baggage

To be clear, Kubernetes is fantastic for abstracting infrastructure. It does great work at scale for web apps and microservices where elasticity is key. But to get it production-ready for live video, you end up bolting on custom networking, persistent media flows, and real-time health checks that Kubernetes just wasn’t designed to handle natively.

In our testing, Kubernetes introduced latency at the orchestration level, made service recovery harder to guarantee in time-critical paths, and frankly added layers of indirection that made things more fragile, not less.

Let’s also talk observability and troubleshooting. Kubernetes can make it harder to trace a single stream’s journey through the system, because you’re abstracting away the very thing you sometimes need to see clearly. With systemd, what you configure is what you get on the specific machine where you configure it. Logs, dependencies, health checks, restarts and, most importantly, resource allocation and limits: it’s all right there, and it behaves exactly as expected.
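
As a rough sketch of what that looks like on a host (the unit name below is hypothetical, not one of our actual services), tracing a single stream comes down to a couple of standard commands:

    # Trace one stream’s service on the machine it runs on.
    # "gm-stream@match-42.service" is an illustrative unit name.
    systemctl status gm-stream@match-42.service   # current state, PID, recent log lines
    journalctl -u gm-stream@match-42.service -f   # follow that stream’s full log
    systemd-cgtop                                 # live per-unit CPU, memory, and I/O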

Systemd gave us speed and reliability

With systemd, we orchestrate our live media services directly on the host, using its native service manager. No container overhead. No abstracted networking. No orchestration lag. It gives us predictable start-up, proper dependency management, robust isolation, fine-grained resource allocation and limits, and extremely fast recovery times if something does go wrong.
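
To make that concrete, here’s a minimal sketch of the kind of unit we mean. The service name, paths, and values are illustrative rather than our production configuration; the directives themselves are standard systemd:

    # /etc/systemd/system/gm-relay.service
    # Illustrative name, paths, and values; directives are standard systemd.
    [Unit]
    Description=GlobalM live media relay (sketch)
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/gm-relay --config /etc/globalm/relay.conf
    # Recover in about a second; no rescheduling round trip.
    Restart=always
    RestartSec=1
    # Kernel-enforced resource allocation: prioritise the media path
    # and hard-cap memory for this unit, not the whole host.
    CPUWeight=900
    MemoryMax=2G
    # Isolation without a container runtime.
    ProtectSystem=strict
    PrivateTmp=yes

    [Install]
    WantedBy=multi-user.target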

We know exactly where every service is, how it behaves, and what its logs say when it misbehaves. When we deploy updates, we know the impact and can tightly control the sequence. That level of control and reliability is gold when you’re handling critical live traffic for tier-one broadcasters and sports rights holders.
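
For illustration only (the host and unit names are made up), that kind of tightly controlled sequence doesn’t need special tooling; it’s plain shell against plain hosts:

    # Hypothetical staged rollout: one host at a time, verify before moving on.
    for host in edge-01 edge-02 edge-03; do
        ssh "$host" 'systemctl restart gm-relay.service && systemctl is-active --quiet gm-relay.service' \
            || { echo "halting rollout at $host" >&2; break; }
    done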

Systemd is also scriptable, audit-friendly, and rock solid across environments. It doesn’t require a Kubernetes cluster to keep it alive. It just works. And in live video, “just works” is what matters.
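
“Audit-friendly” in practice means the units are plain text you can lint and diff like anything else; for example (unit name hypothetical again):

    systemd-analyze verify /etc/systemd/system/gm-relay.service   # lint the unit file
    systemctl cat gm-relay.service                                # exact config in force, drop-ins included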

Not anti-Kubernetes

This isn’t a rant against Kubernetes. It’s a reminder that not every workload benefits from its model. At GlobalM, our priority is making sure live video flows deterministically, without glitches, handovers, or added latency, whether it’s a point-to-point link or a full global distribution.

Sometimes that means swimming against the trend. And we’re more than okay with that.
