Kubernetes v1.36 Sneak Peek: The Security-First Release That’s Quietly Rewiring Your Cluster

Kubernetes is about to do that thing it does best: change just enough to keep platform teams employed, while making clusters faster, safer, and, if we're honest, slightly more confusing for anyone who hasn't read release notes in the past decade.

On March 30, 2026, the Kubernetes project published a preview post titled Kubernetes v1.36 Sneak Peek, authored by Chad Crowell, Kirti Goyal, Sophia Ugochukwu, Swathi Rao, and Utkarsh Umre. It outlines some of the highlights slated for the next minor release, along with a clear warning that details may still shift before release day.

As of today (April 3, 2026), the upstream release is planned for Wednesday, April 22, 2026.

In this article I'll expand the sneak peek into a practical, operator-focused briefing: what's changing, why Kubernetes is doing it now, what could break, and how to prepare without sacrificing your weekend. We'll zoom out into the broader ecosystem too, because v1.36 isn't happening in a vacuum: it's landing right as the community is also dealing with the retirement of Ingress NGINX, one of the most widely deployed ingress controllers in the wild.

Release timing: what v1.36 means for your upgrade calendar

Kubernetes minor releases arrive on a roughly quarterly cadence. That predictability is great, right up until you realize your production platform is now effectively on a treadmill. The v1.36 release cycle started in mid-January 2026 and targets an April 22, 2026 release date, with the typical milestones (enhancements freeze, code freeze, docs freeze) already behind the project by early April.

Why should you care about the schedule? Because most managed Kubernetes platforms (EKS, GKE, AKS, OpenShift, and so on) follow upstream with a lag. If you're in a regulated environment, or you're trying to avoid "extended support" fees, you probably already run upgrades as a recurring operational program rather than a once-a-year fire drill. The v1.36 changes, especially the removals and deprecations, are exactly the kind of change that punishes "we'll upgrade later" cultures.

Quick timeline reality check

  • March 30, 2026: upstream publishes the v1.36 sneak peek.
  • April 3, 2026: today, as I’m writing this.
  • April 22, 2026: planned upstream release date for Kubernetes v1.36.0.

If you manage clusters at scale, treat this as your signal to start pre-upgrade scanning: find APIs you’re using that now warn, hunt for legacy volume types, and validate security posture for multi-tenant clusters.

The headline theme: security-driven pruning and safer defaults

The official sneak peek calls out two particularly notable lifecycle changes:

  • Service .spec.externalIPs is being deprecated (warnings now; removal planned later).
  • The legacy gitRepo volume driver is removed/disabled (and can’t be re-enabled).

Those might look like routine cleanup items, but they reflect a deeper trend: Kubernetes is increasingly willing to make uncomfortable changes when the security model can't be repaired without sharp edges. The project's deprecation policy tries to balance stability with forward motion, but some features are simply too risky to keep around forever.

On top of that, Kubernetes v1.36 also spotlights performance and device-management improvements aimed at modern workloads—especially AI/ML clusters and hardware scheduling—through upgrades to Dynamic Resource Allocation (DRA) and SELinux volume labeling.

Deprecation: Service .spec.externalIPs (and why it’s been a security headache for years)

Let’s start with the one that will show up in your logs first: deprecation warnings for Service .spec.externalIPs.

According to the Kubernetes v1.36 sneak peek, the field is deprecated in v1.36 and is planned for removal in v1.43.

What externalIPs does (in plain English)

A Kubernetes Service normally routes traffic to a set of Pods. The externalIPs field is a way to instruct nodes to accept traffic destined for certain IPs and forward that traffic into the Service.

That sounds convenient, especially in on-prem and bare-metal environments where you might not have a cloud LoadBalancer. But convenience is often how security incidents begin—with a Jira ticket labeled “temporary.”
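For reference, here is a minimal sketch of the pattern in question (the names are invented, and 203.0.113.10 is drawn from the TEST-NET documentation range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-ip-svc        # hypothetical name
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:               # the field deprecated in v1.36
    - 203.0.113.10           # nodes accept traffic for this IP and route it into the Service
```

Starting in v1.36, applying a Service like this should draw deprecation warnings from the API server.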

The CVE context: CVE-2020-8554

The reason this deprecation matters is strongly tied to CVE-2020-8554, a medium-severity design issue disclosed in 2020. The Kubernetes security advisory explains that an attacker who can create or edit Services and Pods may be able to intercept traffic by abusing spec.externalIPs (and in some cases LoadBalancer IP handling) in multi-tenant clusters.

This CVE is commonly described as "unpatchable" in the sense that it stems from how some Service features were designed, not a one-line bug fix. Datadog Security Labs has also written about mitigations and why policy controls (admission control, Gatekeeper, etc.) are key.

So: deprecating externalIPs is less about removing a beloved feature and more about gradually shutting a door that never should have been left open in multi-tenant buildings.

What you should do now

  • Inventory usage: find any Services using spec.externalIPs.
  • Decide a replacement path: LoadBalancer (cloud or MetalLB), NodePort (carefully), or Gateway API depending on your environment and requirements.
  • Harden multi-tenant clusters: even before removal, block or constrain externalIPs via admission policy where appropriate.

The sneak peek itself suggests alternatives like LoadBalancer services, NodePort, or Gateway API for more flexible and secure external traffic handling.

Removal: the gitRepo volume plugin is finally, truly gone

Next up: gitRepo volumes. If you’re new to Kubernetes, you may have never seen this volume type in production. If you’ve been around long enough, you might remember it as one of those features that felt clever in 2018 and now feels like leaving your house key under a doormat labeled “KEY.”

The Kubernetes sneak peek states that the gitRepo volume plugin, deprecated since v1.11, is permanently disabled starting in Kubernetes v1.36 and cannot be turned back on.

Why remove it now?

The upstream post frames the rationale clearly: it's a security risk because it can allow code execution as root on the node.

Stepping back, the underlying issue is that “clone arbitrary repo contents onto the node during Pod startup” is a dangerously powerful primitive. Even with guardrails, it blurs the line between build-time and run-time, and it can create ugly supply-chain and privilege boundaries.

What breaks when you upgrade?

If any workload uses:

  • volumes: - gitRepo: ...

…that Pod spec will stop working as of v1.36. It’s not a warning. It’s a hard stop.

Migration strategy: what to use instead

Kubernetes has recommended alternatives for years, and the sneak peek reiterates them: use an init container, or an external git-sync style sidecar/tooling approach.

In practice, platform teams typically pick from:

  • Init container that pulls code into an emptyDir (works for many internal apps, but you must handle auth, integrity, and repeatability).
  • CI builds immutable images (the “container-native” best practice: put code in the image, not pulled at runtime).
  • GitOps operators (Flux / Argo CD) to deploy manifests, not to inject runtime code into a Pod filesystem.

From a security standpoint, moving “get code from git” into CI and artifact signing is usually a net win. Runtime git pulls expand your attack surface and complicate provenance.
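As a minimal sketch of the init-container pattern (the image, repository URL, and names here are placeholders; a real setup also needs credentials, pinned revisions, and integrity checks):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-source            # hypothetical
spec:
  initContainers:
    - name: clone-source
      image: alpine/git            # any image with a git binary works
      args: ["clone", "--depth=1", "https://example.com/org/app.git", "/src"]
      volumeMounts:
        - name: src
          mountPath: /src
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical application image
      volumeMounts:
        - name: src
          mountPath: /src
          readOnly: true
  volumes:
    - name: src
      emptyDir: {}                 # replaces the removed gitRepo volume
```

Unlike gitRepo, the clone here runs as an ordinary container under your Pod's security context, not as kubelet-driven logic on the node.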

SELinux volume labeling goes GA: faster mounts, fewer startup stalls

One of the most practically useful upgrades mentioned in the sneak peek is faster SELinux labeling for volumes reaching general availability (GA).

The problem it solves

On SELinux-enforcing nodes, Kubernetes historically relied on recursive relabeling of mounted volumes. If your Pod mounts a volume with lots of files (think language package caches, model artifacts, or even just “node_modules but on a persistent volume”), recursive relabeling becomes a silent tax on startup time.

The Kubernetes project previously described how mounting volumes with the correct SELinux context can avoid that recursive walk, providing constant-time labeling with mount options like -o context=....

What’s new in v1.36

The sneak peek explains that Kubernetes replaces recursive file relabeling with applying the correct label at mount time (using mount options), reducing Pod startup delays and improving consistency on SELinux systems.

Why this matters right now: SELinux-enforcing distributions (and Kubernetes distributions built around them) are a major part of enterprise Kubernetes, and performance improvements that also strengthen isolation tend to get adopted quickly.
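In practice, the mount-time optimization works when the kubelet knows the Pod's SELinux context up front, so Pods that declare one explicitly benefit most. A sketch, with placeholder image, level, and claim values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-labeled            # hypothetical
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"        # explicit context lets the kubelet mount with
                                   # -o context=... instead of relabeling every file
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc        # hypothetical claim backing a large volume
```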

Operational implications (aka, what could surprise you)

  • CSI driver support matters: CSI drivers typically need to declare SELinux mount support (the older blog post emphasizes CSI driver vendor support and configuration).
  • Edge cases exist with shared volumes, subPaths, and mixed labels. That’s why Kubernetes previously included feature-gates and opt-out mechanisms.

Net-net: if you run SELinux enforcing nodes and have workloads with big or frequently mounted volumes, this is a quality-of-life improvement you’ll actually feel—like shaving seconds off every deploy, multiplied by every rollout, multiplied by every team that thinks “just restart it” is a debugging strategy.

External signing of ServiceAccount tokens: a quieter security upgrade with big implications

The sneak peek also points to support for external signing of ServiceAccount tokens as an expected GA graduation in v1.36.

ServiceAccount tokens are core to how workloads authenticate to the Kubernetes API and, in many deployments, to cloud IAM systems via federation. External signing aims to let the API server delegate signing to an external service—useful in environments with centralized key management, strict compliance, HSM-backed signing, or highly controlled certificate/token lifecycles.

Kubernetes documentation for service account administration describes configuration for an external JWT signer using the --service-account-signing-endpoint flag (via a Unix domain socket), including how the API server fetches public keys that it trusts for validating tokens.
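Concretely, per those docs, wiring in an external signer is an API server configuration change rather than a new API object. A sketch (the socket path is invented for illustration):

```
# Sketch: delegate ServiceAccount token signing to an external signer
# reachable over a Unix domain socket (path is an example).
kube-apiserver \
  --service-account-issuer=https://kubernetes.default.svc \
  --service-account-signing-endpoint=/var/run/kube-signer/signer.sock
```

The endpoint flag takes the place of a locally held signing key file, and the API server fetches the trusted public keys from the signer for token validation.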

Why platform teams care

  • Key isolation: signing keys can live outside the API server’s filesystem.
  • Compliance and auditing: centralized token issuance can align with org-wide signing policies.
  • Incident response: faster key rotation patterns can become easier to implement consistently.

This is the kind of feature that won’t trend on social media (because it’s not flashy) but will make security architects nod approvingly in meetings—right before they ask you to document it and roll it out to 200 clusters.

Dynamic Resource Allocation (DRA) steps forward: device taints, tolerations, and partitionable accelerators

Now for the part of v1.36 that screams “Kubernetes is adapting to modern hardware workloads.” The sneak peek highlights two DRA-related items:

  • DRA driver support for device taints and tolerations moving to beta.
  • DRA support for partitionable devices to split accelerators into logical units for sharing.

DRA device taints and tolerations: safer scheduling for “special” hardware

Kubernetes already has node taints/tolerations to keep workloads from landing on nodes unless they explicitly tolerate certain taints. Extending that idea to devices makes sense: not all GPUs, NICs, or specialized accelerators are equal, and some might be degraded, reserved, or restricted.

The Kubernetes docs on Dynamic Resource Allocation explain device taints and tolerations, how effects like NoSchedule and NoExecute work, and how drivers can publish taints through ResourceSlices (and admins can apply DeviceTaintRules).

The sneak peek says this feature graduates to beta in v1.36, meaning it is enabled by default (no feature gate required) and ready for broader feedback.

For cluster operators, that means tighter control over where high-value devices get allocated, and potentially fewer “why is my training job running on the flaky GPU again?” conversations.
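For a flavor of the API shape, an admin-applied rule might look like the following. This mirrors the alpha DeviceTaintRule described in the Kubernetes DRA docs for earlier releases; the group/version and field names may differ in the v1.36 beta, and the driver and key names here are invented:

```yaml
apiVersion: resource.k8s.io/v1alpha3
kind: DeviceTaintRule
metadata:
  name: degraded-gpu-0           # hypothetical
spec:
  deviceSelector:
    driver: gpu.example.com      # hypothetical DRA driver
    pool: worker-1
    device: gpu-0
  taint:
    key: example.com/degraded
    value: "true"
    effect: NoSchedule           # keeps new claims off this device; NoExecute would also evict
```

Workloads that should still land on the device can declare a matching toleration in their ResourceClaim.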

Partitionable devices: the GPU utilization story Kubernetes has been waiting for

The second DRA highlight—partitionable devices—targets a common cost problem: full-device allocation leads to underutilization. Many workloads don’t need an entire accelerator; they need a slice. Being able to split a physical device into multiple logical units can improve utilization and reduce cost per workload.

The sneak peek frames this as particularly useful for GPUs and other high-cost accelerators, enabling platform teams to allocate only the portion needed while maintaining isolation and control.

This aligns with industry trends: AI and data workloads increasingly run on shared clusters, and organizations want both higher utilization and stronger policy control. DRA is Kubernetes’ path toward first-class device lifecycle management that’s not bolted on via vendor-specific hacks.

Ingress NGINX retirement: the ecosystem backdrop you can’t ignore

Even though it’s not a “Kubernetes v1.36 feature” per se, the sneak peek explicitly calls out the Ingress NGINX retirement as an example of lifecycle discipline in the Kubernetes ecosystem.

Here’s the blunt reality: upstream Ingress NGINX maintenance halted in March 2026, and after retirement there are no further releases, bugfixes, or security updates from the Kubernetes community maintainers.

If your org still runs ingress-nginx, it may continue functioning for a long time—right up until a nasty vulnerability drops and there’s no upstream patch. The Kubernetes Steering and Security Response Committees were unambiguous about the risk of continuing to run it after retirement.

What are teams doing instead?

There isn’t a single drop-in replacement for every feature set, but common paths include:

  • Gateway API adoption (more expressive routing, more standardized evolution path).
  • Vendor ingress controllers (commercial support, but you trade simplicity for contracts and product roadmaps).
  • Fork/extended maintenance options. For example, Chainguard announced they’re providing maintained ingress-nginx images to give users time to migrate.

This matters to v1.36 planning because platform teams often bundle change: “We’re upgrading Kubernetes” quickly becomes “We’re upgrading Kubernetes and re-platforming ingress” which becomes “We’re rethinking network policy, cert management, and auth annotations that have been accumulating like geological strata.”

Practical upgrade checklist for v1.36 (operator edition)

If you want the “I have 15 minutes before my next incident call” version of preparation, here’s a pragmatic checklist focused on the changes called out in the sneak peek.

1) Scan for Service externalIPs usage

  • Find Services that set spec.externalIPs.
  • Decide whether they should become LoadBalancers, NodePorts, or be replaced via Gateway API or external infrastructure.
  • If you operate a multi-tenant cluster, review admission policies that restrict this behavior (given CVE-2020-8554 context).
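To make step 1 concrete, here is a small, hypothetical helper that flags externalIPs on manifests you have already parsed (for example, from the JSON output of kubectl get svc -A -o json). The function name and the sample Service are invented for illustration:

```python
import json

def find_external_ips(manifest):
    """Return the spec.externalIPs list of a Service manifest, or [] if unset."""
    if manifest.get("kind") != "Service":
        return []
    return manifest.get("spec", {}).get("externalIPs", [])

# A hypothetical Service that will start drawing deprecation warnings in v1.36.
svc = json.loads("""
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {"name": "legacy-svc"},
  "spec": {
    "selector": {"app": "web"},
    "ports": [{"port": 80}],
    "externalIPs": ["203.0.113.10"]
  }
}
""")

print(find_external_ips(svc))  # -> ['203.0.113.10']
```

Running something like this across a dump of all namespaces gives you the migration worklist well before the planned v1.43 removal.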

2) Hunt down gitRepo volume specs before they hunt you

  • Search manifests, Helm charts, and live workloads for gitRepo volumes.
  • Refactor to init containers or build-time inclusion.
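A small, hypothetical scanner helps here too: gitRepo can hide inside Deployment, StatefulSet, or CronJob pod templates, so a recursive walk over parsed manifests is the simplest catch-all. The function and the sample Deployment below are invented for illustration:

```python
def find_git_repo_volumes(obj):
    """Recursively collect names of volumes that use the removed gitRepo driver."""
    hits = []
    if isinstance(obj, dict):
        volumes = obj.get("volumes")
        if isinstance(volumes, list):
            for vol in volumes:
                if isinstance(vol, dict) and "gitRepo" in vol:
                    hits.append(vol.get("name", "<unnamed>"))
        for value in obj.values():
            hits.extend(find_git_repo_volumes(value))
    elif isinstance(obj, list):
        for item in obj:
            hits.extend(find_git_repo_volumes(item))
    return hits

# Hypothetical Deployment whose pod template still uses a gitRepo volume.
deploy = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"volumes": [
        {"name": "repo-src", "gitRepo": {"repository": "https://example.com/app.git"}},
        {"name": "scratch", "emptyDir": {}},
    ]}}},
}

print(find_git_repo_volumes(deploy))  # -> ['repo-src']
```

Anything this turns up is a hard blocker for the v1.36 upgrade, not merely a warning.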

3) Validate SELinux + CSI drivers in your environment

  • If SELinux is enforcing, test high-churn workloads and large volumes in a staging cluster on a v1.36 pre-release build (or as soon as your distro provides it).
  • Confirm CSI drivers correctly advertise SELinux mount capabilities where needed (the earlier Kubernetes blog post details driver expectations).

4) If you run accelerators, revisit scheduling policy

  • Track DRA maturity and whether your device plugins/drivers support new taints/tolerations or partitioning.
  • Plan how you want to represent “degraded,” “reserved,” or “restricted” devices.

5) Make ingress strategy explicit (especially post-March 2026)

  • Confirm whether you run the retired upstream ingress-nginx.
  • If yes: define a migration plan (Gateway API or alternative controller) and timelines.

So, is Kubernetes v1.36 a “big” release?

It depends on what you run:

  • If you’re a typical web/app platform team on a managed service, v1.36 may feel like a steady incremental upgrade—unless you rely on deprecated networking shortcuts or legacy volumes.
  • If you run multi-tenant clusters, v1.36’s push away from risky knobs like externalIPs is part of a broader security hardening story that aligns with real CVE history.
  • If you run regulated enterprise Linux stacks with SELinux enforcing, the GA SELinux mount labeling improvements can be a real performance win.
  • If you run AI/ML clusters with scarce accelerators, DRA improvements hint at a future where Kubernetes becomes much better at treating expensive hardware like a first-class, policy-controlled resource rather than a set of vendor-specific exceptions.

In other words: Kubernetes v1.36 is not a “new shiny UI” kind of release. It’s a “tighten your seatbelt and stop doing the sketchy things” kind of release. Historically, those are the releases that age well.

Conclusion: treat v1.36 as a forcing function for better hygiene

With the upstream v1.36 release scheduled for April 22, 2026, you have a narrow window to do the prep work that makes upgrades boring—in the best possible way.

The sneak peek is a clear signal that Kubernetes is continuing to:

  • deprecate risky API surfaces (externalIPs),
  • remove legacy features with disproportionate security risk (gitRepo volumes),
  • promote security and performance improvements into defaults (SELinux volume labeling), and
  • modernize scheduling for real-world hardware constraints (DRA evolutions).

If your platform strategy is “we’ll see what breaks after we upgrade,” Kubernetes will continue to teach you expensive lessons. If your strategy is “we’ll upgrade when we’re ready,” v1.36 offers a tidy roadmap for what “ready” should look like.

Bas Dorland, Technology Journalist & Founder of dorland.org