Right-Sizing Orchestration: Kubernetes Isn’t Always the Answer

As cloud maturity spreads across defense and federal landscapes, the pressure to adopt “modern” tooling can easily outpace the practical needs of a mission. Kubernetes has become the emblem of next-gen infrastructure—an orchestration powerhouse promising portability, scalability, and robust automation. For many, it also signals technical credibility, especially in DevSecOps-first cultures. But the reality is more complex: Kubernetes is powerful, yes, but it comes with real overhead, and sometimes that overhead delivers diminishing returns.

At SkyDryft, we regularly encounter environments where teams are leaning hard into container orchestration—sometimes by necessity, sometimes by inertia, and sometimes because someone saw it on a slide. But the truth is, not every workload demands a platform-as-a-service abstraction, and not every team is equipped to carry the operational burden that comes with it. Sometimes, a simpler approach wins—not just in speed, but in sustainability.

Kubernetes as the Middleware Between Infrastructure and Application Delivery

At its best, Kubernetes sits at a useful intersection: it bridges the world of infrastructure-as-code (IaC) with that of modern software delivery. IaC tools like OpenTofu or Terraform handle baseline infrastructure—VPCs, IAM roles, persistent storage, access boundaries—while Kubernetes acts as a dynamic orchestrator for application-level workloads. Helm charts, deployment manifests, and GitOps flows layer on top of static provisioning, creating a flexible and repeatable pipeline from code to service.
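To make that division of labor concrete, here is a minimal sketch, assuming an AWS environment managed with OpenTofu or Terraform. The resource names, CIDR range, and output are illustrative assumptions rather than prescriptions; the point is that the static baseline lives in IaC, and whatever runs above it, whether Helm releases, GitOps controllers, or a plain container host, consumes those outputs rather than redefining them.

```hcl
# Illustrative baseline owned by the IaC layer; names and CIDR are assumptions.
resource "aws_vpc" "mission" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Identity boundary that application workloads assume at runtime,
# whether they land on Kubernetes nodes or a single container host.
data "aws_iam_policy_document" "assume_ec2" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "workload" {
  name               = "mission-workload"
  assume_role_policy = data.aws_iam_policy_document.assume_ec2.json
}

# The application layer (Helm charts, manifests, GitOps flows, or Compose)
# builds on top of these outputs rather than being defined here.
output "vpc_id" {
  value = aws_vpc.mission.id
}
```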

In this model, Kubernetes becomes a kind of middleware—abstracting complexity for developers while enforcing deployment and scaling logic defined by platform teams. This is especially valuable when operating multi-tenant platforms, microservices, or high-availability systems with burstable demand. But this value doesn’t come free.

Running Kubernetes means running a control plane. It means managing networking, autoscaling, ingress controllers, certificate rotation, secrets, service mesh options, persistent volumes, and sometimes your own observability stack just to see what’s happening under the hood. For teams without a mature DevOps culture—or those just migrating out of legacy systems—that’s a lot of surface area.

And the question must be asked: does the mission justify the orchestration?

Simplicity Can Be Secure and Scalable—When Done Right

There are plenty of scenarios where traditional compute platforms and IaC tools are not just adequate—they're optimal. A tightly scoped workload behind an Application Load Balancer (ALB), running on a single EC2 instance with Docker Compose, can meet performance, security, and compliance requirements without requiring a cluster. If you don't need multi-service coordination, auto-healing, or pod-level scaling, a container on a hardened host with appropriate logging and identity controls can be perfectly acceptable.
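As a point of reference, a minimal sketch of that simpler pattern in OpenTofu or Terraform might look like the following. The AMI filter, instance size, port, and resource names are illustrative assumptions, and the ALB, target group, security group, and instance profile are presumed to be defined elsewhere in the same configuration.

```hcl
# Hardened base image; the filter values are placeholders for whatever
# image pipeline the environment actually uses.
data "aws_ami" "hardened" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["hardened-docker-host-*"]
  }
}

# One instance, one responsibility: run the Compose-managed containers.
resource "aws_instance" "app" {
  ami                    = data.aws_ami.hardened.id
  instance_type          = "t3.medium"
  iam_instance_profile   = aws_iam_instance_profile.app.name
  vpc_security_group_ids = [aws_security_group.app.id]

  user_data = <<-EOT
    #!/bin/bash
    set -euo pipefail
    # Compose file is baked into the image or pulled by config management.
    docker compose -f /opt/app/docker-compose.yml up -d
  EOT

  tags = {
    Name = "mission-app-host"
  }
}

# Register the instance with the ALB target group that fronts it.
resource "aws_lb_target_group_attachment" "app" {
  target_group_arn = aws_lb_target_group.app.arn
  target_id        = aws_instance.app.id
  port             = 8080
}
```

Nothing in this sketch forecloses growth: the same definition can later become a launch template behind an Auto Scaling group, or the workload can move onto a cluster, without rewriting the surrounding network and identity baseline.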

This isn’t regression—it’s restraint. It’s choosing an operational model aligned with the workload's needs rather than chasing architectural purity. A sidecar proxy pattern isn’t necessary if the service doesn’t talk to anything else. A CI/CD pipeline doesn’t need to push to a service mesh if the runtime changes once a quarter. And you don’t need a Kubernetes ingress gateway if your system has a single entry point behind a managed load balancer.

The key is recognizing when you're solving a problem you don't actually have.

Too often, teams adopt Kubernetes not to solve scale, availability, or platform engineering challenges, but to stay current. The result is overhead without upside: upgrade complexity, ongoing control plane cost, and technical debt as developers struggle to debug YAML manifests instead of delivering business logic.

When the workload is well understood, bounded in scope, and operated by a small team, there’s nothing wrong with building on more traditional patterns. EC2 instances deployed via IaC, running containers with minimal dependencies, remain a viable deployment model—especially in environments where compliance, identity integration, and persistent storage are more important than ephemeral scalability.

Design for the Mission, Not the Trend

There’s a tendency to equate Kubernetes with modernity and maturity. And to be fair, many of the most progressive infrastructure teams use Kubernetes to great effect. But platform complexity must always be proportional to mission need. Going “cloud native” for its own sake is a poor strategy. It imposes cognitive load, extends onboarding time, and creates more surfaces for failure—unless the mission truly benefits from the capabilities Kubernetes provides.

The same goes for PaaS models more broadly. Abstracting infrastructure away sounds appealing until you need to troubleshoot network pathing or integrate with legacy systems that don't speak modern APIs. Platforms that promise simplicity often deliver opinionated constraints, and those constraints may not align with government-grade compliance, auditability, or system interdependence.

Choosing not to use Kubernetes isn’t a sign of being behind. It’s a signal that you understand the tradeoffs—and are confident enough to design to purpose, not popularity.

Conclusion: Platform Choices Are Operational Commitments

The tools you choose define the operational shape of your environment. Kubernetes is not a “just run it” technology. It is a platform that demands investment—of time, expertise, and resources. When used intentionally, it can enable velocity, scale, and modularity. When used reactively, it can bury teams in technical noise, distract from mission outcomes, and delay delivery under the guise of engineering progress.

Sometimes, all you need is a secure, well-configured virtual machine running Docker Compose behind an ALB. And if that gets the job done—securely, repeatably, and with enough room to grow—then that’s not a shortcut. That’s good design.

Don’t confuse architecture patterns with capability. Choose tools that reflect the mission—not the market trend.
