The Future Is Modular: What a Decade of Running Kubernetes Teaches Us About Platform Engineering


Kubernetes is 10+ years old now, which in technology years means it’s old enough to have strong opinions about observability stacks, security baselines, and whether “platform” is a product or a philosophical argument you have at 2 a.m. during an incident.

In February 2026, Giant Swarm published a short but pointed piece arguing that the future of Kubernetes platforms isn’t another bigger “everything included” bundle—it’s modularity: pick-and-choose platform capabilities that can integrate with what you already run, rather than forcing you to replace it. The post is titled The future is modular: what a decade of running Kubernetes taught us about platforms and it’s written by Oliver Thylmann. It’s only a few minutes long, but it tees up a debate that’s been quietly intensifying across platform engineering: are we building internal developer platforms (IDPs) as curated ecosystems… or buying vendor bundles that look curated until you try to plug in your existing SIEM?

This article expands on Giant Swarm’s argument with industry context, examples, and a field guide for platform teams trying to avoid what Thylmann calls the “bundle trap.” Along the way we’ll talk about why Kubernetes adoption keeps rising while many developers touch it less directly, what “modular” actually means in practice (and what it definitely doesn’t), and how to choose modules without assembling your own Frankenplatform.

What Giant Swarm is actually saying (and why it matters)

Thylmann’s core claim is simple: no two organizations need the same platform. They may share broad problems—security, delivery speed, compliance, multi-cluster operations—but they start from different constraints and existing investments. A platform that ignores those realities doesn’t reduce complexity; it adds a new layer of it.

That’s the rationale for Giant Swarm’s evolution toward a modular platform model: instead of pushing every customer into the same fixed stack, offer capabilities that can be adopted incrementally and integrated with what teams already trust (or are contractually married to until 2028).

It’s also a reflection of where the Kubernetes market is in 2026. In the early years, buying a big pre-assembled platform made sense: the cloud native ecosystem was younger, noisier, and full of sharp edges. But as organizations matured, they developed preferences and competencies: maybe Grafana + Prometheus is sacred, maybe Splunk is non-negotiable, maybe the security team has already standardized on a specific policy engine and scanner.

In other words: once you’ve learned how to run pieces of the stack well, a bundle can become a constraint instead of an accelerator.

The “bundle trap”: why all-in-one platforms often underdeliver

Giant Swarm calls it the bundle trap: you pay for a comprehensive platform, use only a fraction, and keep parallel tooling to fill the gaps.

If you’ve ever seen “standardization” turn into “we now run two CI/CD systems because the platform one can’t meet compliance,” you’ve met the bundle trap. It happens for several reasons:

  • Existing investment is real. Teams already paid money and time for tools, integrations, dashboards, training, and runbooks.
  • Enterprise requirements diverge fast. A bank’s controls for change management aren’t the same as a manufacturing company’s constraints at the edge.
  • Bundled platforms optimize for the vendor’s roadmap. You get what they ship, when they ship it, and sometimes “we’ll support that soon” means “next fiscal year, maybe.”
  • Replacing everything increases risk. Platform rewrites are exciting until you realize your incident response process is also being rewritten.

The punchline: bundles can still be valuable, but they’re often sold as simplification while quietly shifting the complexity cost to the customer—especially integration complexity.

Why Kubernetes adoption can rise while direct Kubernetes usage falls

One of the most interesting bits in the Giant Swarm post is the data point about a widening gap between organizational Kubernetes adoption and developer-level direct usage. Thylmann cites CNCF survey data showing widespread Kubernetes production usage, while a CNCF/SlashData report suggests only around 30% of backend developers say they use Kubernetes directly (down from a higher peak).

Even if you don’t obsess over the exact percentage (surveys vary), the trend is intuitively right: Kubernetes is increasingly infrastructure, not a daily developer tool. More teams interact with it through platforms, paved roads, and managed services rather than kubectl and YAML handcrafting.

This matters because it reframes where “platform value” sits:

  • The platform layer shapes developer experience. If developers aren’t talking to Kubernetes directly, they’re talking to your IDP, templates, pipelines, policies, and golden paths.
  • Abstraction isn’t optional. It’s what allows Kubernetes to scale across many teams without turning every engineer into a cluster whisperer.
  • Platform decisions become product decisions. You’re designing an experience, not just wiring components.

So what is a “modular platform,” really?

“Modular” can mean anything from “we sell add-ons” to “we’re a collection of loosely related Helm charts.” The useful interpretation is narrower:

  • Capabilities are separable: Kubernetes lifecycle management, observability, policy/security, networking, application delivery, and (increasingly) AI infrastructure can be adopted independently.
  • Interfaces are consistent: modules don’t each invent a new control plane. They integrate through stable APIs, automation patterns, and shared conventions.
  • Integration work is curated: the vendor (or your platform team) has already tested component combinations, upgrade paths, and operational runbooks.

Giant Swarm’s docs describe a platform architecture built around Kubernetes, with a central management cluster providing a Platform API and hosting operators/controllers for cluster and capability management, while being cloud-agnostic and based on Cluster API.

That “API-first, Kubernetes-native control plane” approach is what can make modularity feasible without devolving into a spreadsheet of incompatible versions.

The trade-offs Giant Swarm admits (and why you should too)

Modularity is not a free lunch. Giant Swarm explicitly notes the downside: you make more decisions upfront, you need to consider integration, and there’s a risk of creating a messy patchwork of tools no one fully understands.

That last point deserves a tattoo (metaphorically, unless you’re really into platform engineering): modular doesn’t mean random. Modular without curation becomes what I’ll call the “CNCF shopping spree problem”—you come home with 14 projects, 3 overlapping policy engines, and no plan for upgrades.

The practical lesson: modularity shifts effort from “adopt the bundle” to “choose the right modules and standardize how they fit.” The question isn’t whether work exists. It’s whether the work buys you flexibility and better alignment with the business.

Cost: modularity as a spotlight, not a miracle

Thylmann argues modularity makes cost visible—you pay for what you use instead of negotiating a bundle price that includes shelf-ware.

This maps to a broader industry theme: waste reduction has become a top priority for FinOps practitioners, reflecting how much spend is tied up in underused capacity and services.

But let’s be precise. Modular pricing doesn’t automatically reduce spend. What it does is:

  • Make underused capabilities easier to identify
  • Let you stage investment (start small, expand as value is proven)
  • Prevent “we bought it so we must use it” platform decisions

In mature organizations, that transparency is often more valuable than a discount, because it supports governance and prioritization.

Real-world modularity: three scenarios where it actually helps

1) The observability migration you can’t do all at once

Many enterprises have years of dashboards, alerts, and incident runbooks built around a specific stack. Forcing a “standard observability bundle” into place can create a huge operational risk window. Modularity lets you:

  • Keep your existing logging/SIEM pipeline while standardizing metrics
  • Adopt curated Kubernetes monitoring first, then evolve app-level SLO tooling later
  • Run a dual-stack period intentionally (with defined exit criteria)

The key is that platform teams can define interfaces (how teams publish metrics/logs/traces) while keeping the backend tooling flexible.

2) Security hardening driven by compliance deadlines

Security rarely arrives as “we feel like it.” It arrives as “the auditor is coming.” Modular capabilities allow you to add policy enforcement, scanning, and hardening baselines when the requirement appears—without waiting for a monolithic platform release train.

That said, this only works if security modules are integrated into delivery workflows (GitOps/CI/CD), not bolted on as afterthoughts.

3) Edge and hybrid: where “one platform” is usually a lie

Giant Swarm explicitly contrasts very different environments—manufacturing at the edge versus financial services multi-cloud—arguing they shouldn’t be forced into the same bundle.

At the edge you care about constrained hardware, intermittent connectivity, and operational simplicity. In regulated multi-cloud you care about identity boundaries, network integration, and audit trails. Modularity helps because it lets you standardize some layers (cluster lifecycle, baseline security, GitOps) while varying others (connectivity, observability backends, data residency choices).

Where Cluster API and GitOps fit into the modular story

Giant Swarm’s platform direction leans heavily on Cluster API as an open standard for cluster lifecycle management across providers, and on GitOps patterns (including Flux) for declarative operations.

This is important because modular platforms need a dependable “spine.” In most Kubernetes-centric platforms, that spine is:

  • Declarative APIs (Kubernetes resources and CRDs, or compatible abstractions)
  • Reconciliation loops (controllers that converge actual state to desired state)
  • Versioned configuration (Git as the audit-friendly source of truth)

When you have that spine, modules can be “just” operators, policies, Helm releases, and configurations—meaning they’re installable, upgradable, and removable in a controlled way.
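The spine described above can be sketched in a few lines. This is an illustrative toy, not a real controller framework: `reconcile` diffs desired state (from versioned configuration) against actual state (from the cluster) and emits actions, the way a Kubernetes controller converges resources.

```python
# Minimal sketch of a reconciliation loop: diff desired vs. actual state,
# emit create/update/delete actions, apply them, converge. All names and
# module versions here are illustrative.

def reconcile(desired: dict, actual: dict) -> list:
    """Compute the actions needed to converge actual state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

def apply_actions(actions: list, actual: dict) -> dict:
    """Apply the computed actions, returning the new actual state."""
    state = dict(actual)
    for op, name, spec in actions:
        if op == "delete":
            state.pop(name, None)
        else:
            state[name] = spec
    return state

# Desired state comes from Git (the audit-friendly source of truth);
# actual state comes from the cluster. The loop runs until no actions remain.
desired = {"ingress-nginx": {"version": "1.9"}, "kyverno": {"version": "1.12"}}
actual = {"ingress-nginx": {"version": "1.8"}, "legacy-tool": {"version": "0.1"}}

converged = apply_actions(reconcile(desired, actual), actual)
assert converged == desired
assert reconcile(desired, converged) == []  # steady state: nothing left to do
```

The point of the sketch is the shape, not the code: because modules are expressed declaratively, installing, upgrading, or removing one is just a change to the desired state that the same loop converges.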

Modular doesn’t mean unmanaged: the curation problem

Giant Swarm differentiates between “here’s a catalog of open source components, good luck” and “here’s a set of modules curated and integrated across 150+ production clusters.”

That’s the hard part of modularity: integration and lifecycle management. The moment you modularize, you inherit new responsibilities:

  • Compatibility matrices (Kubernetes version X with CNI Y with policy engine Z)
  • Upgrade choreography (what upgrades first, what breaks, what rollbacks look like)
  • Operational ownership (who is on-call for which module?)
  • Golden path design (how developers onboard without reading 40 pages of docs)

Platform teams who want modularity but don’t want chaos need a rule: you can have choice, but not infinite choice. “Modular” should mean “a curated set of supported options,” not “every team installs whatever looked good in a conference talk.”
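A compatibility matrix is the simplest curation artifact to make concrete. A hedged sketch, with entirely hypothetical module names and version numbers: before a module version is enabled, the platform checks it against the curated set of supported combinations.

```python
# Sketch of a curated compatibility matrix. The entries (modules, versions,
# Kubernetes minors) are hypothetical examples, not vendor guarantees.

SUPPORTED = {
    # (kubernetes_minor, module) -> set of supported module versions
    ("1.29", "cilium"): {"1.14", "1.15"},
    ("1.29", "kyverno"): {"1.11", "1.12"},
    ("1.30", "cilium"): {"1.15"},
}

def is_supported(k8s_minor: str, module: str, version: str) -> bool:
    """True only if this exact combination has been curated and tested."""
    return version in SUPPORTED.get((k8s_minor, module), set())

assert is_supported("1.29", "cilium", "1.15")
assert not is_supported("1.30", "cilium", "1.14")  # curated out: upgrade Cilium first
```

Encoding the matrix as data the platform can enforce (rather than a wiki page) is what turns “choice” into “a curated set of supported options.”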

AI infrastructure: the newest module everyone argues about

Thylmann explicitly calls out AI infrastructure as a capability some organizations need now, while others won’t need for months or years—making it a strong candidate for a modular add-on rather than a default bundle component.

Giant Swarm also points to CNCF’s Kubernetes AI Conformance effort and says it is among the first platforms certified under that program.

Whether you buy that from Giant Swarm or build it yourself, the industry direction is clear: AI workloads are pulling platform engineering into new territory—GPU scheduling, cost governance, model serving, and observability for inference pipelines. If your platform strategy is still “Kubernetes + ingress + logs,” you’re about to have an interesting year.

How to adopt modularity without building a toolchain Jenga tower

If you’re a platform team considering a modular approach, here’s a practical adoption path that avoids the two common failure modes (over-bundling and over-fragmentation).

Step 1: Define your non-negotiables

These are the platform invariants that shouldn’t vary by team:

  • Identity and access model (SSO, RBAC conventions, audit logging)
  • Baseline security controls (policies, image provenance, patching expectations)
  • Delivery workflow (GitOps vs imperative, release approvals)
  • Support model (what “supported” means, and by whom)
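Non-negotiables only hold if they are checked mechanically, not socially. A minimal sketch of invariants as code, with hypothetical rule and field names (a real platform would enforce these via an admission controller or CI policy check):

```python
# Sketch: platform invariants checked as code. The rules and workload
# fields below are illustrative stand-ins for real policy checks.

def check_invariants(workload: dict) -> list:
    """Return a list of violations; an empty list means the workload passes."""
    problems = []
    # Identity/audit convention: every workload is attributable to a team.
    if not workload.get("labels", {}).get("team"):
        problems.append("missing team label (RBAC/audit convention)")
    # Baseline security: images must be pinned, never floating tags.
    image = workload.get("image", "")
    if ":" not in image or image.endswith(":latest"):
        problems.append("image must be pinned to a version (provenance baseline)")
    # Delivery workflow: everything arrives through the declared GitOps path.
    if not workload.get("via_gitops", False):
        problems.append("must be delivered through the GitOps workflow")
    return problems

ok = {"labels": {"team": "payments"},
      "image": "registry.example/app:1.4.2",
      "via_gitops": True}
assert check_invariants(ok) == []
assert check_invariants({"image": "app:latest"})  # multiple violations
```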

Step 2: Split “capabilities” from “implementations”

Example: “observability” is a capability. Prometheus/Grafana, Datadog, New Relic, or an internal stack are implementations. A modular platform should standardize the capability contract (telemetry formats, labels, SLO expectations) while allowing a small set of implementations.
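The capability/implementation split maps naturally onto an interface. A sketch, assuming an illustrative metrics contract (the class and method names are invented for this example, not a real SDK): the platform fixes the contract and label conventions; the backend is swappable.

```python
# Sketch of capability vs. implementation: "observability" is the capability
# contract; backends are interchangeable implementations. Names illustrative.
from typing import Protocol

class MetricsBackend(Protocol):
    """The capability contract every implementation must satisfy."""
    def emit(self, name: str, value: float, labels: dict) -> None: ...

class InMemoryBackend:
    """One implementation; a Prometheus or Datadog adapter would be another."""
    def __init__(self) -> None:
        self.samples = []
    def emit(self, name: str, value: float, labels: dict) -> None:
        self.samples.append((name, value, labels))

def record_request(backend: MetricsBackend, team: str, latency_ms: float) -> None:
    # Platform-enforced convention: every metric carries a "team" label.
    backend.emit("http_request_latency_ms", latency_ms, {"team": team})

backend = InMemoryBackend()
record_request(backend, "payments", 42.0)
assert backend.samples == [("http_request_latency_ms", 42.0, {"team": "payments"})]
```

Teams program against `MetricsBackend`; swapping the observability stack then changes an implementation, not every caller.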

Step 3: Curate modules like products

Each module should have:

  • A clear owner
  • A documented lifecycle (how it’s installed, upgraded, and removed)
  • Compatibility guarantees
  • Runbooks and SLOs

This is “platform as a product” thinking applied to modularity.
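That checklist can itself be data the platform team lints in CI. A sketch with hypothetical field names, turning the four bullets above into a validation any module must pass before it enters the catalog:

```python
# Sketch: module-as-product metadata, lintable in CI. Field names are
# illustrative, not a real catalog schema.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    owner: str = ""                          # clear owner (incl. on-call)
    lifecycle_doc: str = ""                  # install/upgrade/removal runbook
    supported_k8s: list = field(default_factory=list)  # compatibility guarantees
    slo: str = ""                            # operational commitment

def validate(module: Module) -> list:
    """Return a list of curation gaps; an empty list means 'shippable'."""
    gaps = []
    if not module.owner:
        gaps.append("no owner")
    if not module.lifecycle_doc:
        gaps.append("no lifecycle runbook")
    if not module.supported_k8s:
        gaps.append("no compatibility guarantees")
    if not module.slo:
        gaps.append("no SLO")
    return gaps

m = Module(name="policy-engine", owner="platform-security",
           lifecycle_doc="runbooks/policy-engine.md",
           supported_k8s=["1.29", "1.30"], slo="99.9% admission availability")
assert validate(m) == []
```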

Step 4: Make the default path boring (in a good way)

Developers shouldn’t have to become experts in your module system. Provide a default paved road that works for 80% of teams, and an exception process for the remaining 20%—with the explicit acknowledgment that exceptions cost money and time.

A quick comparison: modular platform vs bundle vs DIY

  • Bundle: Fast to buy, slower to align with existing tools; risk of shelf-ware and forced migrations.
  • DIY: Maximum flexibility; also maximum integration and operational cost; depends heavily on senior engineering availability.
  • Modular (curated): Flexibility with guardrails; requires strong platform governance and/or a vendor that does real integration work.

Giant Swarm’s argument is essentially that the industry is moving from bundle-first to modular-curated because organizations are mature enough to know what they want, and because the platform layer is now the primary determinant of developer experience.

Conclusion: the future is modular… if you do the hard parts

The most persuasive thing about Giant Swarm’s post isn’t the word “modular.” It’s the realism: modularity creates decisions, integration needs, and the possibility of a patchwork mess.

But the alternative—pretending one platform bundle fits everyone—has its own predictable outcome: parallel stacks, wasted spend, and platform teams becoming full-time exception handlers. In 2026, Kubernetes isn’t the hard part. The hard part is building a platform layer that respects what’s already working, provides a paved road for what isn’t, and can evolve as new domains (hello, AI infrastructure) become non-optional.

Or, to translate into platform engineer: “Your platform should be a set of Lego bricks, not a single Duplo cube you throw at every problem.”

Sources

Giant Swarm: “The future is modular: what a decade of running Kubernetes taught us about platforms,” by Oliver Thylmann (February 2026)

Bas Dorland, Technology Journalist & Founder of dorland.org