
Kubernetes releases are like seasonal flu shots: you can skip them for a while, but eventually you’ll be forced to deal with consequences that are way less fun than reading release notes on a Friday night.
On March 30, 2026, the Kubernetes project published “Kubernetes v1.36 Sneak Peek”, a forward-looking post outlining deprecations, removals, and a handful of enhancements that are likely to land in Kubernetes v1.36. The post is credited to Chad Crowell, Kirti Goyal, Sophia Ugochukwu, Swathi Rao, and Utkarsh Umre. That Kubernetes blog entry is our original RSS source and the foundation for this deeper dive.
v1.36 is planned for Wednesday, April 22, 2026 (per the Kubernetes release schedule). In other words, by the time you finish arguing about whether “platform engineering” is just DevOps in a trench coat, the release will probably be here.
This article focuses on what the sneak peek signals for real-world cluster operators: what might break, what becomes safer by default, and what you can do now so your next upgrade feels less like defusing a bomb and more like updating a dependency with mild annoyance.
What this sneak peek is (and what it isn’t)
The Kubernetes v1.36 sneak peek is not the final changelog. The post explicitly notes that the listed enhancements are likely to be included, but not guaranteed, and that the formal “what’s new” announcement will arrive in the v1.36 CHANGELOG when the release ships.
Still, sneak peeks are valuable because they emphasize things that tend to cause the most operational pain:
- Deprecations that start warning now but remove functionality later
- Removals where a previously “deprecated but still works” feature finally stops working
- Key enhancements that change defaults or unlock better performance/capacity utilization
In Kubernetes, that usually translates to: “the cluster is fine” until the day it isn’t, and then you’re grepping YAML at 2 a.m. while your incident channel fills with memes that are not helping.
Deprecation: Service .spec.externalIPs is finally being put on a timer
The headline deprecation in v1.36 is Service spec.externalIPs. Starting with Kubernetes v1.36, using externalIPs triggers deprecation warnings, and the field is slated for full removal in v1.43.
Why Kubernetes is doing this: security, specifically CVE-2020-8554
If you’ve ever treated externalIPs as a quick-and-dirty way to route traffic, you’re not alone. It was convenient. It was also a long-running security headache. The sneak peek calls out that externalIPs has enabled man-in-the-middle attacks, linking this directly to CVE-2020-8554.
CVE-2020-8554 is one of those “design-level” Kubernetes issues that isn’t about a single buffer overflow; it’s about how Services can be abused when an attacker has certain permissions. Security researchers and vendors have documented how this class of issue can be mitigated (for example, by blocking externalIPs via admission control or policy).
The Kubernetes enhancement tracking issue for this deprecation (KEP-5707) was opened on November 26, 2025, and explicitly targets v1.36 for the deprecation stage.
What you should do instead (and the tradeoffs)
The sneak peek recommends alternative approaches depending on your environment:
- LoadBalancer Services for cloud-managed ingress
- NodePort Services for simple port exposure
- Gateway API for a more flexible, security-oriented external traffic model
Those are sensible defaults, but they’re not equivalent replacements. externalIPs was often used precisely because it bypassed “proper” ingress management. If your environment is bare metal, air-gapped, or “creative,” you may be using externalIPs as glue between legacy networks and Kubernetes. That glue is now being labeled “flammable.”
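To make the cloud-managed path concrete, here is a minimal before/after sketch of the most common migration: an `externalIPs` Service replaced by a `LoadBalancer` Service. The Service name, selector, ports, and IP are all hypothetical; the point is that the external address moves from a hand-maintained field to something your load balancer integration allocates and owns.

```yaml
# Before: legacy Service relying on the deprecated externalIPs field.
apiVersion: v1
kind: Service
metadata:
  name: legacy-svc        # hypothetical name
spec:
  selector:
    app: legacy
  ports:
    - port: 443
      targetPort: 8443
  externalIPs:
    - 203.0.113.10        # hand-managed IP; the source of the CVE-2020-8554 risk
---
# After: equivalent exposure via a LoadBalancer Service.
apiVersion: v1
kind: Service
metadata:
  name: legacy-svc
spec:
  type: LoadBalancer      # external address allocated by the cloud/LB controller
  selector:
    app: legacy
  ports:
    - port: 443
      targetPort: 8443
```

On bare metal, the “after” half typically needs a load balancer implementation (a MetalLB-style address pool) before `type: LoadBalancer` actually provisions anything.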
A practical migration checklist (before the warnings become removals)
Because removal is planned for v1.43, you have time—but also enough time to procrastinate until it becomes urgent. Don’t. Here’s a pragmatic checklist:
- Inventory usage: find Services with `spec.externalIPs` across all namespaces.
- Classify intent: is it exposing HTTP(S), TCP, internal legacy routing, or “someone tried something in 2019 and it worked”?
- Choose a replacement:
- If you’re in a cloud: strongly consider LoadBalancer or a Gateway API-based approach.
- If you’re on-prem/bare metal: evaluate a dedicated load balancer solution (MetalLB-style patterns), or Gateway implementations that fit your environment.
- Add policy guardrails: even before removal, consider blocking new uses of `externalIPs` with admission control/policy.
- Canary the upgrade: ensure your staging clusters run without warnings you plan to ignore.
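The “policy guardrails” step can be done with in-tree machinery, no external admission controller required. Here is a sketch using a ValidatingAdmissionPolicy (GA since Kubernetes v1.30) that rejects Services carrying `externalIPs`; the policy name and message are my own, and you may want to scope the binding to specific namespaces rather than cluster-wide.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-external-ips          # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["services"]
  validations:
    # CEL: allow the Service only if externalIPs is absent or empty.
    - expression: "!has(object.spec.externalIPs) || size(object.spec.externalIPs) == 0"
      message: "spec.externalIPs is deprecated (removal planned for v1.43); use LoadBalancer, NodePort, or Gateway API."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: deny-external-ips
spec:
  policyName: deny-external-ips
  validationActions: ["Deny"]      # hard-fail; switch to ["Warn"] for a dry-run phase
```

Starting with `Warn` instead of `Deny` gives teams a grace period while you work through the inventory from the checklist above.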
Yes, this is work. But it’s predictable work—my favorite kind.
Removal: the gitRepo volume type is now permanently disabled
The second big operational landmine is a removal, not a deprecation: the gitRepo volume type. It’s been deprecated since Kubernetes v1.11, but in Kubernetes v1.36 the sneak peek states it becomes permanently disabled and “cannot be turned back on.”
Why now? Because “deprecated” isn’t the same as “safe”
The sneak peek frames this as a security measure: using gitRepo could allow an attacker to run code as root on the node.
Separately, the Kubernetes security tracking for CVE-2024-10220 documents an issue where a user able to create a Pod and attach a gitRepo volume could achieve arbitrary command execution beyond the container boundary, leveraging a repository hooks folder. The write-up includes affected/fixed kubelet versions and recommends moving Git clone operations into an init container instead.
Even if you weren’t directly exposed to that specific CVE, the pattern is clear: “git clone as an in-tree volume plugin” is not a modern security posture. It’s a convenience feature that aged into a liability.
What to use instead: init containers, git-sync patterns, or external pipeline steps
The sneak peek suggests migrating to supported approaches such as init containers or external “git-sync style” tools.
In practice, that usually breaks down into a few architectural patterns:
- Build-time fetch: pull code during CI, bake into the image. (Most deterministic; best for production.)
- Init container clone: clone a repo into an `emptyDir` (or PVC), mount into the main container.
- Sidecar git-sync: continuously sync a repo for config/content refresh use cases.
Each option has tradeoffs in auditability, supply-chain integrity, and runtime complexity. But all of them are easier to reason about than an in-tree plugin doing privileged-ish operations on your nodes.
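The init-container pattern is the most direct drop-in for a gitRepo volume, and it is the replacement the CVE-2024-10220 write-up points to. A minimal sketch, with a hypothetical repo URL and images:

```yaml
# gitRepo-style clone done the supported way: an init container clones
# into an emptyDir, which the main container mounts read-only.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-source           # hypothetical name
spec:
  initContainers:
    - name: clone
      image: alpine/git:latest    # any image with a git binary works
      args: ["clone", "--depth=1", "https://example.com/org/repo.git", "/src"]
      volumeMounts:
        - name: src
          mountPath: /src
  containers:
    - name: app
      image: example.com/app:latest   # hypothetical application image
      volumeMounts:
        - name: src
          mountPath: /src
          readOnly: true              # the app only reads the checkout
  volumes:
    - name: src
      emptyDir: {}
```

Unlike the in-tree plugin, the clone here runs with the Pod’s own (unprivileged) credentials and security context, which is exactly the isolation gitRepo lacked.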
Real-world impact: who gets hurt by this?
You will feel this removal if:
- You run legacy manifests that still specify `gitRepo` volumes
- You inherited a chart that “helpfully” clones a repo at runtime
- You’ve got internal developer platforms that scaffold workloads with outdated templates
The bad news: it will stop working in v1.36. The good news: the migration path is well understood, and you can usually fix it with a targeted template update.
Featured enhancements: performance and security improvements that matter in production
The sneak peek highlights several enhancements likely to be included in v1.36. Let’s unpack the ones that look small on paper but big in production.
Faster SELinux labeling for volumes reaches GA
Kubernetes v1.36 makes the SELinux volume mounting improvement generally available (GA). The key idea is to avoid slow, recursive relabeling by applying the SELinux context at mount time using a mount option (e.g., mount -o context=XYZ). The Kubernetes blog notes this reduces Pod startup delays on SELinux-enforcing systems.
If you’ve ever had a StatefulSet take forever to come up because a volume had “a lot of files” (technical term), you already understand why this matters. Recursive relabeling is one of those operations that scales with the number of inodes, not your patience.
But what about edge cases? Opt-out exists for a reason
SELinux labeling via mount options isn’t always a drop-in replacement for recursive relabeling—especially when multiple Pods interact with the same volume in complex ways. Kubernetes has had feature gates and opt-out mechanisms for this area, including the ability for Pods to opt out by setting spec.securityContext.seLinuxChangePolicy to Recursive. That behavior is documented in Kubernetes security context guidance.
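For Pods that genuinely need the old behavior, the opt-out lives in the Pod-level security context. A sketch, assuming a hypothetical workload and PVC name (the SELinux level shown is illustrative):

```yaml
# Explicit opt-out back to recursive relabeling, e.g. for a volume shared
# between Pods with different SELinux contexts.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod         # hypothetical name
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"       # illustrative MCS level
    seLinuxChangePolicy: Recursive  # opt out of mount-time labeling (the GA fast path)
  containers:
    - name: app
      image: example.com/app:latest   # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data        # hypothetical PVC
```

Pods without this field get the fast mount-option path where the volume type supports it, so the opt-out should stay rare.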
In other words: v1.36 is trying to make the fast path the default, but it still acknowledges that “storage is hard” and “security is harder.”
External signing of ServiceAccount tokens (expected GA)
The sneak peek also calls out an authentication/security enhancement: the API server can delegate service account token signing to external systems like cloud KMS or hardware security modules, and the post says this is expected to graduate to stable (GA) in v1.36.
The admin documentation explains how kube-apiserver can be configured with --service-account-signing-endpoint to use an ExternalJWTSigner gRPC service over a Unix domain socket, and notes that the API server requires the external signer to be healthy at startup.
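In a kubeadm-style cluster, that wiring lands in the kube-apiserver static Pod manifest. A fragment of what it might look like, with a hypothetical socket path (the flag replaces, rather than supplements, the usual --service-account-signing-key-file):

```yaml
# Fragment of a kube-apiserver static Pod spec delegating ServiceAccount
# token signing to an ExternalJWTSigner over a Unix domain socket.
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --service-account-issuer=https://kubernetes.default.svc
        - --service-account-signing-endpoint=/var/run/signer/signer.sock  # hypothetical path
        # ...all other existing flags unchanged...
      volumeMounts:
        - name: signer-socket
          mountPath: /var/run/signer
  volumes:
    - name: signer-socket
      hostPath:
        path: /var/run/signer     # the external signer must expose its socket here
        type: Directory
```

Because the API server requires a healthy signer at startup, the signer process effectively joins your control-plane availability budget; plan its deployment and monitoring accordingly.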
Why this matters: service account tokens are foundational to how in-cluster workloads authenticate. Moving signing into centralized key management can help organizations that need stronger key custody controls, standardized rotation, or compliance-friendly audit trails.
Of course, it also means you’ve introduced a new dependency: if your external signer is down, your API server may have a very bad day. So treat this as a platform capability you adopt intentionally, not a box you tick because “GA sounds good.”
DRA continues its march toward practical AI hardware management
Dynamic Resource Allocation (DRA) has been one of Kubernetes’ most important “future-facing” subsystems in the last few releases—especially because modern workloads increasingly depend on expensive, specialized hardware (GPUs, accelerators, and devices that cost more than your first car).
The v1.36 sneak peek highlights two DRA-related enhancements likely to land:
- Device taints and tolerations graduating to beta
- Support for partitionable devices (splitting a single accelerator into multiple logical units)
DRA device taints and tolerations: safer scheduling for “don’t touch that GPU” scenarios
The sneak peek explains that DRA drivers can mark devices as tainted so they aren’t used unless explicitly tolerated, and that administrators can define DeviceTaintRule objects to taint devices matching criteria. The feature is described as graduating to beta in v1.36, meaning it becomes available by default without a feature flag (per the blog).
Operationally, this is a big deal for shared clusters where certain devices are:
- Reserved for specific teams (e.g., ML research vs. batch inference)
- Known-bad (flaky hardware you haven’t replaced yet)
- Subject to compliance constraints (yes, even GPUs can have compliance drama)
Without device-level scheduling controls, you end up implementing “GPU governance” out-of-band—through node pools, manual labeling, custom admission policies, or the ancient art of yelling in Slack. DRA is Kubernetes trying to make this boring and automatable.
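To give a feel for the administrator-facing side, here is a sketch of a DeviceTaintRule quarantining one flaky device. The API group/version and field names follow KEP-5055 at the time of writing and may shift as the feature moves through beta; the driver, pool, and device names are hypothetical.

```yaml
# Taint a single known-bad device so no ResourceClaim lands on it unless
# it explicitly tolerates the taint. (Schema per KEP-5055; subject to change.)
apiVersion: resource.k8s.io/v1alpha3
kind: DeviceTaintRule
metadata:
  name: quarantine-flaky-gpu      # hypothetical name
spec:
  deviceSelector:
    driver: gpu.example.com       # hypothetical DRA driver
    pool: node-1-gpus             # hypothetical device pool
    device: gpu-3                 # the specific device to quarantine
  taint:
    key: example.com/flaky
    value: "true"
    effect: NoSchedule            # existing allocations keep running; new ones are blocked
```

The same mechanism generalizes to the other cases above: reserve devices for a team by tainting them and handing that team the matching toleration in their claims.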
Partitionable devices: making expensive accelerators less wasteful
The sneak peek states that Kubernetes v1.36 expands DRA with support for partitionable devices, letting a single accelerator be split into multiple logical units and shared across workloads—particularly useful for GPUs where dedicating the entire device can cause underutilization.
This is the Kubernetes platform acknowledging a simple truth: the economics of AI infrastructure are brutal. If you allocate full GPUs for tiny inference jobs, you burn money at a rate that makes cloud CFOs develop a thousand-yard stare.
Partitionable devices are also a conceptual shift. Historically, Kubernetes resources were largely treated as indivisible units at scheduling time (with some exceptions). As AI workloads become more granular—multiple models, multiple tenants, fluctuating demand—hardware sharing becomes a first-class requirement.
There are still questions operators should ask before betting the farm on this feature:
- What hardware and drivers support partitioning in my environment?
- How do isolation guarantees work between partitions?
- How will observability/accounting be handled (cost attribution, chargeback)?
But directionally, this is Kubernetes moving toward “GPU efficiency as a platform primitive.”
Industry context: Kubernetes is tightening defaults while the ecosystem shifts underfoot
v1.36 is arriving in a broader moment where Kubernetes is doing three things at once:
- Removing sharp edges that have been known security liabilities for years (`externalIPs`, `gitRepo`)
- Making hardened configurations more practical (faster SELinux labeling, external token signing)
- Modernizing around AI infrastructure (DRA improvements)
Meanwhile, the Kubernetes networking world is also dealing with a very public “end of an era”: the Kubernetes project stated that Ingress NGINX will be retired in March 2026, and that after retirement there will be no releases for bug fixes or security patches. The statement (by Kat Cosgrove on January 29, 2026) strongly urges migration to alternatives such as Gateway API.
That matters here because it reinforces the broader theme: Kubernetes is trying to get the community to move from legacy, convenient patterns toward more sustainable and secure interfaces (like Gateway API), even if it requires engineering work.
What this means for platform teams: an “upgrade readiness” agenda for April 2026
If you operate Kubernetes at any meaningful scale, v1.36 isn’t just “another minor.” It’s a reminder that safe operations require continuous hygiene.
1) Treat deprecations as active work, not future work
The worst deprecations are the ones you ignore because they don’t break anything yet. externalIPs is the classic case: it will warn in v1.36, and removal is planned for v1.43. That’s a multi-year runway in Kubernetes time, which means it’s dangerously easy to forget.
Action item: file a ticket now to eliminate usage, and add a policy that prevents new usage. Future you will send present you a thank-you note (probably in the form of fewer pages).
2) Assume something “deprecated” might still be exploitable
The gitRepo situation is a lesson in “deprecated doesn’t mean dead.” The volume type lingered for years, and security issues continued to surface, including CVE-2024-10220. Disabling it permanently is Kubernetes choosing safety over backwards compatibility.
Action item: scan your cluster configs (and Helm chart repositories) for gitRepo usage and replace it with init containers or build-time fetch patterns.
3) Re-evaluate SELinux assumptions if you run hardened nodes
SELinux in Kubernetes has historically been a source of performance surprises. The move toward mount-time labeling is designed to remove a common bottleneck. v1.36 making this GA suggests the project believes it’s mature enough for broad use.
Action item: if you operate SELinux-enforcing clusters, run performance tests on representative workloads and validate whether any Pods need seLinuxChangePolicy: Recursive as an explicit opt-out.
4) If you’re building AI platforms, start tracking DRA seriously
DRA has moved from “interesting idea” to “how Kubernetes will do device management at scale.” Device taints/tolerations in beta and partitionable devices support are strong signals that Kubernetes wants to be a better substrate for shared accelerators.
Action item: if you currently manage GPUs through node selectors and prayer, evaluate DRA’s roadmap and test drivers that support your hardware. Even if you don’t adopt it immediately, you want to be literate in the direction the platform is heading.
A quick “who should care” matrix
- Security teams: care about the `externalIPs` deprecation and `gitRepo` removal; these reduce cluster attack surface.
- Platform/SRE teams: care about upgrade risk, policy enforcement, and SELinux startup-time improvements.
- ML platform teams: care about DRA device governance and partitionable accelerators for utilization.
- Developers: care only after you remove `gitRepo` and their workload stops starting. (This is not a criticism; it’s a law of nature.)
Looking ahead: what to watch between now and April 22, 2026
Between April 11, 2026 (today) and the planned release date of April 22, 2026, the most important thing to watch is the official Kubernetes v1.36 CHANGELOG/release notes when the release goes final. The sneak peek is directionally accurate, but the final list can still shift, and release notes will include the concrete “action required” details.
Also watch for:
- Any last-minute changes to timelines (rare, but it happens)
- Docs updates and migration guidance linked from KEPs (often where the real operational details live)
- Ecosystem vendor notes (managed Kubernetes providers, security scanners, ingress/controller projects)
And if you’re still running an older release: Kubernetes only maintains release branches for the most recent three minor versions. As of this writing, that’s 1.35, 1.34, and 1.33, so staying current isn’t just “best practice,” it’s how you keep getting fixes.
Sources
- Kubernetes v1.36 Sneak Peek (Chad Crowell, Kirti Goyal, Sophia Ugochukwu, Swathi Rao, Utkarsh Umre)
- Kubernetes v1.36 Release Information (schedule)
- Kubernetes Releases (supported branches and release history)
- KEP-5707 tracking issue: Deprecate service.spec.externalIPs
- Datadog Security Labs: CVE-2020-8554 analysis
- Palo Alto Networks Unit 42: CVE-2020-8554
- Red Hat Bugzilla: CVE-2020-8554
- Kubernetes issue: CVE-2024-10220 (gitRepo volume)
- Kubernetes docs: Managing Service Accounts (external signer / ExternalJWTSigner)
- Kubernetes blog: Efficient SELinux volume relabeling (background)
- KEP-740 tracking issue: external signing of ServiceAccount tokens
- KEP-5055 tracking issue: DRA device taints and tolerations
- KEP-4815 tracking issue: DRA partitionable devices
- Kubernetes blog: Ingress NGINX statement (retirement March 2026)
Bas Dorland, Technology Journalist & Founder of dorland.org