Kubernetes 1.34 – What you need to know

Kubernetes 1.34 is right around the corner, and there is a lot to unpack! Excluding enhancements with a status of “Deferred” or “Removed from Milestone”, there are 59 enhancements in total listed in the official tracker. So, what’s new in 1.34?

Kubernetes 1.34 brings a wide range of useful enhancements, including 45 changes tracked as graduating in this release. Of these, 23 enhancements are graduating to stable, including support for Direct Server Return (DSR) and overlay networking in Windows kube-proxy.

12 new alpha features are also listed in the enhancements tracker, including a built-in capability for pods to authenticate to the kube-apiserver using mTLS. This is especially cool because it's built from pieces that external projects can easily reuse to deliver additional functionality using X.509 certificates. DevOps teams will also benefit from new container restart rules, which declare container-specific logic for customising restart behaviour when a container exits or crashes.

If you don’t have time to read the entire release report right now, Gerome Grignon provides an easy-to-consume status overview of every feature from release 1.19 through to 1.34 at https://kaniuse.gerome.dev. The Cloudsmith team is really excited about this release and everything that comes with it! Let’s jump into the major features and changes in Kubernetes 1.34.

Kubernetes 1.34 – Editor’s pick:

Here are a few of the changes that Cloudsmith employees are most excited about in this release:

#3962 Mutating Admission Policies

“From a personal perspective, I find the introduction of CEL-based mutating admission policies in Kubernetes 1.34 incredibly exciting because it brings clarity and simplicity to an area that’s historically been complex and operationally heavy. For those dealing with the overhead of managing external mutating webhooks (whether debugging latency issues, ensuring HA, or handling failure edge cases), this feels like a major leap forward. Being able to express common mutations declaratively using CEL, directly within the API server, eliminates an entire layer of moving parts while still retaining flexibility. It also aligns beautifully with the Kubernetes philosophy of declarative configuration, and I’m particularly excited about how this could streamline GitOps workflows and simplify secure, consistent policy enforcement across environments.”

Nigel Douglas - Head of Developer Relations


#4427 Relaxed DNS search string validation

“I'm really happy to see this KEP graduate to stable in Kubernetes 1.34 because it finally addresses a long-standing pain point when dealing with legacy or standards-bending DNS setups. In the past, trying to use technically non-compliant but practically common patterns like underscores or a single dot in dnsConfig.searches meant hitting a frustrating wall with RFC-1123 validation. This often forced awkward workarounds or even blocked legacy workloads from being migrated into Kubernetes entirely. With this KEP now stable, Kubernetes is acknowledging real-world usage patterns while still maintaining safe, opt-in behaviour.

From a practical standpoint, it's empowering for me to be able to configure a pod with dnsConfig.searches like abc_d.example.com without having to fight the platform. These are small changes, but they remove major friction, especially when you're dealing with SRV records or short-name resolution in systems not designed with strict DNS compliance in mind. The design is also thoughtful: it's gated, it respects compatibility boundaries, and it won't break clusters on upgrade or downgrade. To me, this shows Kubernetes is maturing in a way that balances standards compliance with usability, and that's worth celebrating.”

Esteban Garcia - Principal Software Engineer


#4639 VolumeSource: OCI Artifact and/or Image

“This is a game-changer. Allowing OCI images to be mounted as volumes is incredibly exciting for us. It means our customers can now leverage their existing OCI registries to seamlessly distribute ML models, configurations, and binaries directly to their pods. This move greatly simplifies artifact management, streamlines MLOps and CI/CD pipelines, and enhances security. All of it while using familiar, standard tooling. I can’t wait to document some new examples about this.”

Pablo Javier López Zaldívar - Technical Writer

Apps in Kubernetes 1.34


#3939 Allow recreation of pods once fully terminated in job controller
Stage: Graduating to Stable
Feature group: sig-apps

Many machine learning frameworks like TensorFlow and JAX require dedicated pods per index, but issues arise when these pods are prematurely terminated due to factors like preemption or eviction. Currently, replacement pods are created immediately, which often results in failure to start - especially in clusters with limited resources or strict budgets. This premature creation can delay scheduling, as resources may not be available until the terminating pod has fully exited. Additionally, if cluster autoscaling is enabled, these early replacements may trigger unnecessary scale-ups.

To address this, the proposed change introduces more control and visibility into pod replacement behaviour for Kubernetes Jobs. The Job controller will now differentiate between active, failed, and terminating pods. A new status field will track the number of terminating pods, and an optional spec field will allow users to configure whether replacement pods should wait for termination to complete. This information can also support queueing controllers like Kueue in better quota calculation. The goal is to offer more flexible and efficient scheduling without affecting other workload APIs.
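
As an illustration, here is a minimal sketch of an Indexed Job that opts into waiting for full termination before creating replacements, using the podReplacementPolicy field this KEP introduces (the image and counts are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-training-job
spec:
  completions: 4
  parallelism: 4
  completionMode: Indexed
  # Only create a replacement once the previous pod has fully terminated,
  # rather than as soon as it starts terminating (TerminatingOrFailed).
  podReplacementPolicy: Failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: registry.example.com/ml/trainer:latest  # placeholder image

The number of pods currently terminating is surfaced in the Job's status.terminating field, which controllers like Kueue can use for quota calculations.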


#961 maxUnavailable for StatefulSets

Stage: Graduating to Beta
Feature group: sig-apps

This enhancement introduces support for a maxUnavailable parameter in the RollingUpdate strategy of Kubernetes StatefulSets. Currently, StatefulSets update their pods one at a time, which can be slow and inefficient for certain workloads. By allowing multiple pods to be updated simultaneously (up to the specified maxUnavailable value), this change enables faster and more flexible rolling updates for stateful applications.

The motivation stems from several real-world scenarios. For instance, StatefulSets preserve pod identities, making them ideal for applications that publish consistent metrics or rely on fixed pod names - something not possible with Deployments. Additionally, applications that perform lengthy startup tasks or can tolerate multiple replicas going down (e.g., follower nodes in a distributed system) would benefit from the faster updates enabled by maxUnavailable. Furthermore, StatefulSets use ControllerRevisions for tracking updates, offering simpler and more predictable revision management compared to Deployments. The proposal aims to make StatefulSets more feature-rich and deployment-friendly, while retaining their core identity-preserving capabilities.
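
For example, a StatefulSet could allow two pods to be unavailable at once during a rolling update with a spec along these lines (names, replica count, and image are illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 6
  serviceName: web
  selector:
    matchLabels:
      app: web
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2  # update up to 2 pods at a time instead of the default 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.27  # placeholder image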


#3973 Consider Terminating Pods in Deployments

Stage: Graduating to Alpha
Feature group: sig-apps

Deployments in Kubernetes currently show inconsistent behaviour when dealing with terminating pods, depending on the rollout strategy or scaling activity. In some cases, it might be more efficient to wait for pods to fully terminate before creating new ones, while in others, launching new pods immediately is preferable. To address this inconsistency, a new field, .spec.podReplacementPolicy, is proposed to give users control over when replacement pods should be created during deployment updates.

The goal is to let users define whether a Deployment should wait for pods to terminate before spinning up new ones or proceed without delay, all while respecting the chosen deployment strategy. Additionally, the status of Deployments and ReplicaSets would be enhanced to reflect the number of managed terminating pods. This distinction is important, as pods marked for deletion (with a deletionTimestamp) aren't currently accounted for in the .status.replicas field. This can lead to temporary over-provisioning - especially during rollouts or unexpected deletions like evictions - causing resource strain, potential scheduling issues, and even unnecessary autoscaling, which may increase cloud costs in large-scale or tightly resourced environments.
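
Assuming the new field mirrors the Job API (the value name below is borrowed from the Job KEP and may differ in the final alpha), a Deployment that waits for old pods to fully terminate before creating replacements might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  # Proposed alpha field: only create replacement pods once the old ones
  # have fully terminated (value assumed to mirror the Job API).
  podReplacementPolicy: Failed
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.27  # placeholder image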

API in Kubernetes 1.34

#2340 Consistent Reads from Cache

Stage: Graduating to Stable
Feature group: sig-api-machinery


This proposal outlines a plan to enhance Kubernetes scalability and performance by enabling consistent reads directly from the watch cache instead of querying etcd. The core idea is to ensure that cached data is up-to-date by leveraging etcd's progress events, which indicate the latest revision observed and guarantee that all future events will be newer. This mechanism allows Kubernetes to verify how fresh the watch cache is and safely serve consistent (quorum) reads from it, without needing to go back to etcd each time.

The motivation behind this change is significant: serving reads from the cache is far more efficient than querying etcd, filtering, deserialising, and garbage-collecting large numbers of objects. This is particularly impactful for node-originated requests, such as kubelets listing pods scheduled on their nodes. Instead of listing and filtering hundreds of thousands of pods from etcd, the cache can immediately return just the relevant data. Beyond performance, this also resolves the long-standing “stale read” issue, where reflectors could receive outdated data on startup due to defaulting to non-consistent reads.

The implementation plan includes using a consistent read to get the latest resource version, waiting for the watch cache to reach that revision (with repeated WatchProgressRequest calls every 100ms if necessary), and then serving the request once the cache is fresh enough. This approach introduces little to no extra scalability cost and enables Kubernetes to maintain correctness while improving efficiency - particularly in large-scale clusters.

#3962 Mutating Admission Policies

Stage: Graduating to Beta
Feature group: sig-api-machinery

The proposed enhancement introduces CEL-based mutating admission policies as a simpler, declarative alternative to traditional mutating admission webhooks in Kubernetes. Many common mutation scenarios (such as adding labels or injecting containers) can now be expressed using concise CEL (Common Expression Language) expressions, significantly reducing operational complexity.

This method eliminates the need for running and managing external webhooks and provides better introspection and performance. Because these policies run in-process within the Kubernetes API server, they avoid the latency and reliability issues associated with webhooks and support automatic re-invocation when needed.

The new API structure mirrors that of ValidatingAdmissionPolicy, with the use of:

  • MutatingAdmissionPolicy for policy definition
  • MutatingAdmissionPolicyBinding for configuration binding
  • Parameter resources (e.g., custom resources or ConfigMaps)

Mutations are specified using either ApplyConfiguration (structured merge) or JSONPatch, depending on the desired semantics and compatibility needs. The structured merge strategy is preferred for its declarative nature and integration with Server-Side Apply.

A key feature is the use of CEL expressions to dynamically construct the partial object to be merged into the original request. For example, injecting a sidecar initContainer can be done using the following concise snippet:

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicy
metadata:
  name: "sidecar-policy.example.com"
spec:
  paramKind:
    apiVersion: mutations.example.com/v1
    kind: Sidecar
  matchConstraints:
    resourceRules:
    - apiGroups:   [""]
      apiVersions: ["v1"]
      operations:  ["CREATE"]
      resources:   ["pods"]
  matchConditions:
    - name: does-not-already-have-sidecar
      expression: "!object.spec.initContainers.exists(ic, ic.name == params.name)"
  failurePolicy: Fail
  reinvocationPolicy: IfNeeded
  mutations:
    - patchType: "ApplyConfiguration"
      expression: >
        Object{
          spec: Object.spec{
            initContainers: [
              Object.spec.initContainers{
                name: params.name,
                image: params.image,
                args: params.args,
                restartPolicy: params.restartPolicy
              }
            ]
          }
        }
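
To take effect, the policy above is paired with a MutatingAdmissionPolicyBinding that points it at a parameter resource. A hedged sketch follows; the paramRef name and namespace are illustrative, and the apiVersion matches the alpha example above (the beta API is expected under v1beta1):

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicyBinding
metadata:
  name: "sidecar-policy-binding.example.com"
spec:
  policyName: "sidecar-policy.example.com"
  paramRef:
    name: meshproxy        # a Sidecar parameter resource (illustrative)
    namespace: mesh-system
    parameterNotFoundAction: Deny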

This enhancement provides a Kubernetes-native way to express and apply mutations with clarity, speed, and declarative consistency - without sacrificing flexibility. It also paves the way for policy frameworks to be built around in-tree mechanisms and reused across systems like GitOps or CI/CD, all without relying on external webhook infrastructure.

#5366 Graceful Leader Transition

Stage: Net New to Alpha
Feature group: sig-api-machinery
Feature gate: GracefulLeaderTransition (default: false)

This proposal introduces a new approach to handling leader election transitions for key Kubernetes components (kube-scheduler, kube-controller-manager, and cloud-controller-manager) by allowing them to gracefully release their leadership lock and revert to a follower state without restarting the entire process. Currently, when a component loses leadership, it forcefully exits using os.Exit(), requiring the kubelet to restart it. This results in unnecessary overhead and prevents clean shutdowns. The proposed change, gated behind a new feature flag (GracefulLeaderTransition), aims to improve high-availability (HA) behaviour by enabling components to stop their internal controllers, release resources, and then attempt to reacquire the lease - all without exiting.

To implement this, the components' control loops will be updated to support proper context cancellation and cleanup when leadership is lost. Key technical changes include modifying the OnStoppedLeading callback, enhancing health check de-registration, and deferring or resetting resources upon leadership loss. While this offers clear benefits in reduced latency and better lifecycle handling, there are associated risks - such as memory leaks, failure to respect shutdown signals, and regressions from future code changes. These risks are mitigated through audits, new tests, best practice documentation, and conditional gating of shutdown logic.

CLI in Kubernetes 1.34

#5295 Kubernetes YAML (KYAML)

Stage: Net New to Alpha
Feature group: sig-cli

This enhancement proposal introduces KYAML, a new output format for kubectl (invoked via kubectl get -o kyaml). KYAML is a strict subset (or dialect) of YAML designed to be compatible with existing YAML tooling while avoiding many of YAML’s most common pitfalls. It emphasises syntactical rules that prevent ambiguous behaviour, such as always quoting strings to avoid unintentional type coercion (for example "no" being interpreted as a boolean), using explicit flow-style syntax ([] for lists, {} for maps), and eliminating reliance on whitespace sensitivity. These design choices make KYAML easier to read, write, patch, and template, particularly beneficial in tools like Helm.

The KEP also proposes making KYAML the standard format for all Kubernetes project-owned documentation and examples. The motivation stems from YAML’s complexity and its tendency to produce valid but incorrectly interpreted files due to issues like indentation or ambiguous values. While JSON is an alternative, it lacks comments and is less human-friendly. KYAML aims to blend the safety and clarity of JSON with YAML’s readability and flexibility. By updating examples and tooling to use KYAML, the project hopes to guide the ecosystem toward more reliable and error-resistant configuration practices.
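
To give a feel for the format, the snippet below approximates what a ConfigMap might look like in KYAML, based on the rules described in the KEP (always-quoted strings, flow-style maps and lists, trailing commas); it is an illustration rather than verbatim kubectl output:

{
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: {
    name: "kyaml-example",
    namespace: "default",
  },
  data: {
    "enable-feature": "no",  # stays a string, no accidental boolean coercion
  },
}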

#3104 Implement a .kuberc file to separate user preferences from cluster configs

Stage: Graduating to Beta
Feature group: sig-cli

During the Beta phase, this feature will be enabled through the KUBECTL_KUBERC=true environment variable, with the default kuberc file located at ~/.kube/kuberc. Users can override this location using the --kuberc flag (kubectl --kuberc /var/kube/rc). The kuberc file serves as an optional configuration file that separates user preferences from cluster credentials and server configurations. The proposal introduces a file dedicated to user preferences because kubeconfig currently under-utilises its preferences field: a new kubeconfig file is typically created per cluster, mixing credentials with preferences.

The kuberc file will provide a clear separation between server configurations and user preferences, allowing users to define command aliases, default flags, and other customisations. The file will be versioned to support easy future updates, and users will be able to maintain a single preferences file, regardless of the --kubeconfig flag or $KUBECONFIG environment variable. This proposal also suggests deprecating the kubeconfig preferences field to streamline configuration management. The kuberc file will be entirely optional, providing users with an opt-in way for custom kubectl behaviour without impacting any existing setups.
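
As a sketch of what such a preferences file might contain (the field names follow the shape described in the KEP and may differ between the alpha and beta schemas), ~/.kube/kuberc could define a command alias and a default flag value like this:

apiVersion: kubectl.config.k8s.io/v1beta1
kind: Preference
aliases:
- name: getn              # "kubectl getn" expands to "kubectl get -o wide"
  command: get
  options:
  - name: output
    default: wide
defaults:
- command: delete
  options:
  - name: interactive     # always prompt before deleting
    default: "true"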

Kubernetes 1.34 Networking

#4427 Relaxed DNS search string validation

Stage: Graduating to Stable
Feature group: sig-network
Feature gate: RelaxedDNSSearchValidation (default: false)

Kubernetes currently enforces strict validation for dnsConfig.searches fields in Pods based on RFC-1123 hostname rules. However, this can be overly restrictive, particularly for legacy workloads or DNS use cases like SRV records that may use underscores ( _ ) or require a single dot ( . ) in the search path. These characters, although non-compliant with RFC-1123, are often used in real-world DNS environments and are generally accepted by DNS servers.

To address this, a new proposal introduces a feature gate called RelaxedDNSSearchValidation. When enabled, it allows underscores in domain labels and accepts a single dot as a valid search path. This change supports legacy systems and helps users configure Pods that need to resolve short DNS names, especially in domains like _sip._tcp.abc_d.example.com.

For example, a current Pod spec that tries to use an underscore in the search path fails validation:

apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: dns-example
spec:
  containers:
    - name: test
      image: nginx
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 1.2.3.4
    searches:
      - abc_d.example.com


This results in an error because abc_d.example.com violates RFC-1123. Similarly, using a single dot ( . ) to suppress internal DNS lookups also fails under the current validation.

With the new feature gate enabled, Kubernetes will relax the validation only for the searches field in dnsConfig, maintaining safety for other components while improving compatibility. The feature is disabled by default, and safe downgrade behaviour is ensured by distinguishing between new and existing resources.

#3015 PreferSameNode Traffic Distribution

Stage: Graduating to Beta
Feature group: sig-network

The Kubernetes community has decided to deprecate the ambiguous PreferClose value in TrafficDistribution due to its inconsistent interpretation and replace it with a clearer, more explicit option: PreferSameZone. This change ensures that traffic routing semantics are well-defined, avoiding the confusion caused by the vague notion of "topologically proximate," which could mean anything from the same node to the same region depending on the implementation. While PreferClose will remain in the API for compatibility, it will now strictly be treated as an alias for PreferSameZone.

Additionally, a new value, PreferSameNode, is being introduced. This allows service traffic to be routed to endpoints on the same node as the client whenever possible, enhancing efficiency for cases like node-local DNS resolution. This new behaviour supports use cases where locality matters for performance or reliability, while still allowing fallback to remote endpoints when necessary. Although locking down the semantics of these routing preferences could limit the flexibility of future proxy optimisations, the clarity and control provided to users outweigh these risks, especially since other mechanisms like autoscaling and topology spread constraints can manage load concerns independently.
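
For example, a Service can opt into node-local routing simply by setting the trafficDistribution field (the Service name, selector, and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: node-local-dns
spec:
  selector:
    app: node-local-dns
  ports:
  - port: 53
    protocol: UDP
  # Prefer endpoints on the same node as the client, falling back to other
  # endpoints when no local one is available.
  trafficDistribution: PreferSameNode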

#4762 Allows setting any FQDN as the pod's hostname

Stage: Net New to Alpha
Feature group: sig-network

This KEP advocates for allowing pods to arbitrarily set their internal kernel hostname. Currently, Kubernetes enforces a specific naming convention for pod hostnames (eg: pod-name.namespace.pod.cluster-domain), which can hinder migration of legacy applications that rely on specific hostname configurations. Such enforcement complicates the onboarding of older systems (like Kerberos) that tightly couple service behaviour or authentication to the internal hostname.

A concrete example is the Kerberos replication daemon, kpropd, which uses its internal hostname to determine authentication credentials, while the sending node uses an external hostname to generate its cryptographic key. If the internal and external hostnames don’t match, authentication fails. Current workarounds include significant changes to upstream application code or brittle reconfigurations of clusters and services. This proposal argues that internal hostnames should be decoupled from Kubernetes' DNS enforcement, as they are only relevant to applications inside the pod. Removing these constraints would broaden Kubernetes' compatibility with legacy systems and simplify service migrations.
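
Assuming the field name proposed in the KEP (hostnameOverride, alpha and feature-gated), a Pod could request an arbitrary FQDN as its in-pod hostname along these lines:

apiVersion: v1
kind: Pod
metadata:
  name: kprop-legacy
spec:
  # Proposed alpha field: sets the kernel hostname inside the pod to an
  # arbitrary FQDN instead of the generated pod-name-based value.
  hostnameOverride: kpropd.legacy.example.com
  containers:
  - name: kpropd
    image: registry.example.com/legacy/kpropd:latest  # placeholder image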

Kubernetes 1.34 Authentication

#4633 Only allow anonymous auth for configured endpoints

Stage: Graduating to Stable
Feature group: sig-auth
Feature gate: AnonymousAuthConfigurableEndpoints (default: false)

Kubernetes currently treats unauthenticated API requests as anonymous by default, assigning them the identity system:anonymous and group system:unauthenticated. This behaviour is controlled by the --anonymous-auth flag, which defaults to true. However, this can be a security risk, particularly when powerful RoleBindings or ClusterRoleBindings are mistakenly granted to anonymous users - an issue highlighted in real-world incidents.

To mitigate this risk while preserving legitimate use cases like health checks (/healthz, /readyz, /livez) or kubeadm bootstrapping, this proposal introduces a more granular control mechanism. A new configuration option in the kube-apiserver's AuthenticationConfiguration allows administrators to enable anonymous authentication for only a specific set of paths. This avoids the need to completely disable anonymous access (which can break critical functionality) while minimising the attack surface.


apiVersion: apiserver.config.k8s.io/v1alpha1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  conditions:
    - path: "/healthz"
    - path: "/readyz"
    - path: "/livez"

This configuration permits anonymous access only to the specified endpoints and blocks it elsewhere, even if misconfigured bindings exist. The enhancement is gated behind a new feature flag AnonymousAuthConfigurableEndpoints, ensuring backward compatibility and intentional adoption.

#4601 Authorise with Field and Label Selectors

Stage: Graduating to Stable
Feature group: sig-auth

Kubernetes is extending its authorisation attributes to include field selectors and label selectors for the List, Watch, and DeleteCollection verbs. This enhancement enables authorisers (especially out-of-tree or custom ones) to leverage selector information when making authorisation decisions. It strengthens security for per-node workloads and allows for fine-grained access control, such as restricting node clients to only list or watch pods assigned to them (eg: spec.nodeName=$nodeName).

To support this, the authorisation system and related APIs like SubjectAccessReview, SelfSubjectAccessReview, and LocalSubjectAccessReview are being updated. These now carry parsed selector “requirements” alongside an optional raw string representation, rawSelector. Webhook authors are encouraged to honour the requirements field and ignore rawSelector to avoid ambiguity or inconsistencies in parsing, which could introduce security vulnerabilities.

Importantly, the changes are backward-compatible: older authorisers that don’t understand these fields will default to granting broader access, ensuring a fail-closed posture. The CEL authoriser is also being enhanced to support selectors in its expressions, enabling complex rule logic like .fieldSelector('spec.nodeName=foo'). Future versions of Kubernetes may extend selector support to other verbs (eg: Get, Create, Update), but for now, selectors are only honoured in List, Watch, and DeleteCollection requests.
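
To illustrate, a SubjectAccessReview that checks whether a node client may list only its own pods would carry the parsed selector roughly as follows (a sketch; the node name and exact field placement are illustrative):

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:node:node-1
  resourceAttributes:
    verb: list
    group: ""
    resource: pods
    fieldSelector:
      requirements:
      - key: spec.nodeName
        operator: In
        values:
        - node-1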

#740 API for external signing of Service Account tokens

Stage: Graduating to Beta
Feature group: sig-auth

Kubernetes currently relies on service account keys loaded from disk at kube-apiserver startup for JWT signing and authentication. This static method limits key rotation flexibility and poses security risks if signing materials are exposed via disk access. To address these limitations, a proposal suggests enabling integration with external key management systems (such as HSMs and cloud KMSes). This would allow service account JWT signing to occur out-of-process without restarting the kube-apiserver, improving both ease of rotation and security.

The proposal introduces a new gRPC API (ExternalJWTSigner) to support signing and key management externally, similar to Kubernetes’ existing KMS API model. It maintains backward compatibility by preserving current file-based behaviour unless a new signing endpoint is explicitly configured. The key goals for this update include supporting public key listing, ensuring token compatibility with existing standards, and avoiding performance regression for in-process keys.

#4412 Projected service account tokens for Kubelet image credential providers

Stage: Graduating to Beta
Feature group: sig-auth

In Kubernetes 1.34, the ServiceAccount token integration for kubelet credential providers is expected to reach beta and be enabled by default. This feature allows the kubelet to authenticate to container registries using short-lived, automatically rotated ServiceAccount tokens scoped to individual Pods. Introduced in alpha status, this mechanism replaces the need for long-lived image pull secrets and aligns image pull authentication with modern, identity-aware practices, reducing security risks and operational overhead.

Historically, Kubernetes has moved away from long-lived credentials toward ephemeral, OIDC-compliant tokens. This evolution enables external services to validate tokens for secure operations like secret retrieval from external vaults without requiring stored secrets in the cluster. Extending this model to image pulls enables per-pod identity-based authentication before pod startup, eliminating reliance on long-lived secrets stored in the Kubernetes API or node-wide kubelet credential providers.

Kubernetes 1.34 Nodes

#4818 Allow zero value for Sleep Action of PreStop Hook

Stage: Graduating to Stable
Feature group: sig-node
Feature gate: PodLifecycleSleepActionAllowZero (default: false)

This KEP proposes a change to Kubernetes’ PreStop lifecycle hook behaviour, specifically enhancing the sleep action introduced in KEP 3960. Currently, configuring a PreStop hook with a sleep duration of zero seconds results in a validation error, even though the underlying Go time.After() function supports a zero (or even negative) duration, returning immediately. This enhancement aims to allow zero as a valid sleep duration, enabling users to define no-op PreStop hooks when necessary, such as for admission webhook validation purposes where a hook must exist but actual sleep is not required.

To implement this change safely, the proposal introduces a feature gate named PodLifecycleSleepActionAllowZero, which is disabled by default. When enabled, it updates the validation logic in the API server to accept zero as a valid duration (but still disallows negative values). The kubelet already supports a zero duration due to its reliance on time.After(), so the main update is in API validation logic. The feature ensures backward compatibility and safe downgrade behaviour by differentiating between new resources and updates to existing ones. Overall, this change makes Kubernetes lifecycle hooks more flexible without compromising existing behaviour or introducing risk.
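
With the gate enabled, a no-op PreStop hook is expressed simply as a zero-second sleep (the Pod and image below are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: noop-prestop
spec:
  containers:
  - name: web
    image: nginx:1.27  # placeholder image
    lifecycle:
      preStop:
        sleep:
          seconds: 0  # previously rejected by validation; now a valid no-op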

#4369 Allow printable ASCII characters in environment variables

Stage: Graduating to Stable
Feature group: sig-node
Feature gate: RelaxedEnvironmentVariableValidation (default: false)

Kubernetes will now support environment variable names using any printable ASCII character, excluding "=". This change is driven by the need for flexibility in supporting a wide variety of applications, some of which rely on characters like ( : ) - commonly used in frameworks such as .NET Core - to structure configuration keys. Previously, Kubernetes enforced strict naming rules that limited adoption in such scenarios.

The proposal introduces a feature gate, RelaxedEnvironmentVariableValidation, which defaults to false and allows users to opt into relaxed validation. With this enabled, environment variable names can include any printable ASCII character (excluding =), bypassing the more restrictive regex that only allowed characters within the C_IDENTIFIER scope. Validation logic will now accommodate both strict and relaxed modes depending on the feature gate's state and the context (create vs. update operations). While the change introduces some risks, particularly for upgrade and rollback compatibility, these are mitigated by user-controlled opt-in via the feature gate. This also extends to allow ConfigMap and Secret keys outside the traditional scope to be used in EnvFrom.
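
With the gate enabled, an environment variable name that would previously have been rejected, such as a .NET-style key containing colons, can be set directly (illustrative image and values):

apiVersion: v1
kind: Pod
metadata:
  name: dotnet-app
spec:
  containers:
  - name: api
    image: registry.example.com/apps/dotnet-api:latest  # placeholder image
    env:
    - name: "Logging:LogLevel:Default"  # colons fail the old C_IDENTIFIER check
      value: "Warning"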

#4639 VolumeSource: OCI Artifact and/or Image

Stage: Graduating to Beta
Feature group: sig-node

The proposed Kubernetes enhancement introduces a new VolumeSource that enables direct mounting of OCI (Open Container Initiative) images as volumes within a pod. This allows users to package and share files between containers without embedding them in the primary container image. Such an approach streamlines image creation, reduces the attack surface by minimising the image size, and aligns Kubernetes more closely with OCI standards. By leveraging OCI-compatible registries, this enhancement opens up use cases beyond traditional container workloads, including distributing configuration files, machine learning models, binaries, and other artifacts.

The motivation stems from the desire to make Kubernetes a more flexible platform for distributing and mounting content packaged in OCI images. Notably, it allows for secure, standards-based delivery of artifacts using image pull secrets and familiar credentials. This supports common DevOps, security, and AI/ML scenarios (like separating malware signatures, packaging CI/CD artifacts, or deploying large ML model weights) without requiring custom base images. The enhancement deliberately limits scope to mounting directories (not single files) and defers runtime specifics and complex use cases like manifest list handling for future versions.

The proposed Kubernetes API extension introduces a new image field in the volume spec, enabling users to mount the OCI image contents into the container file system:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  volumes:
  - name: oci-volume
    image:
      reference: docker.cloudsmith.io/acme-corporation-acme-repo-one/my-html-pod:latest
      pullPolicy: IfNotPresent
  containers:
  - name: my-container
    image: busybox
    volumeMounts:
    - mountPath: /data
      name: oci-volume

This design promotes reuse of existing OCI tooling and infrastructure while maintaining security and performance considerations, such as enforcing read-only mounts and trusted registry access.

#4381 DRA: structured parameters

Stage: Graduating to Stable
Feature group: sig-node

Dynamic Resource Allocation (DRA) in Kubernetes introduces a flexible framework for requesting and managing specialised hardware resources (like GPUs, FPGAs, and custom accelerators) beyond the traditional CPU and memory model. Introduced in Kubernetes v1.30 and targeting stability in v1.34, DRA enables workloads to claim devices using structured, opaque parameters through new API types such as ResourceClaim, ResourceClaimTemplate, DeviceClass, and ResourceSlice. This approach draws inspiration from dynamic volume provisioning and adds a new .spec.resourceClaims field to Pod definitions. It empowers device drivers and administrators to define reusable device classes, while Kubernetes allocates suitable devices and schedules Pods accordingly. Key features include CEL-based filtering, centralised device categorisation, and streamlined Pod resource requests.

DRA addresses several longstanding limitations of the existing device plugin model. It supports device initialisation and cleanup (for example, securely configuring and resetting FPGAs), partial device allocation (like NVIDIA MIG or SR-IOV), and optional device usage, allowing workloads to run with or without accelerators. It also lays the groundwork for support of non-node-local resources, such as devices accessed over the network fabric. These capabilities are not possible with today’s static, node-local device plugin interface, which lacks the flexibility to dynamically provision or share resources. By leveraging Container Device Interface (CDI), DRA provides a secure, extensible way to expose hardware to containers. Future goals include enabling autoscaling with resource claims and supporting both local and fabric-attached devices using vendor-defined parameters.

Scheduling in Kubernetes 1.34

#4247 Per-plugin callback functions for accurate re-queueing in kube-scheduler

Stage: Graduating to Stable
Feature group: sig-scheduling

This enhancement introduces a new feature called QueueingHint to the scheduler. The KEP aims to improve scheduling throughput by reducing unnecessary retries of Pods that are unlikely to be scheduled. Currently, plugins use a broad set of events (via EventsToRegister) to determine when to requeue Pods, which often leads to retries triggered by irrelevant updates (like minor Node changes that don't affect Pod schedulability). With QueueingHint, plugins gain finer control, enabling them to selectively requeue Pods only when changes meaningfully improve the chance of successful scheduling.

Additionally, the KEP addresses inefficiencies in DRA scenarios. Sometimes, DRA plugins reject Pods while waiting for updates from external sources (such as device drivers), but the scheduler unnecessarily delays retries due to its backoff mechanism. The proposal allows plugins to skip backoff when appropriate, enabling faster re-scheduling and better responsiveness. The overall goals include improving the tracking and handling of rejected Pods in the scheduling queue without changing the user-facing API or completely removing backoff, ensuring backward compatibility and targeted improvements.

#3902 Decouple TaintManager from NodeLifecycleController

Stage: Graduating to Stable
Feature group: sig-scheduling

This update introduces a structural refactor within Kubernetes by decoupling the TaintManager (responsible for evicting pods from tainted nodes) from the NodeLifecycleController (which applies taints to unhealthy nodes such as Unreachable or NotReady). Currently, NodeLifecycleController handles both identifying unhealthy nodes and triggering taint-based pod evictions. The proposal aims to split these responsibilities into two independent controllers: the existing NodeLifecycleController, which would continue to apply NoExecute taints, and a new TaintEvictionController, dedicated solely to managing pod evictions based on those taints.

The main motivation behind this separation is to enhance code clarity, modularity, and maintainability. By isolating these logically distinct operations, it becomes easier to improve or customise taint-based eviction logic without impacting the core lifecycle management of nodes. While the separation opens the door for cluster operators to build or swap in custom eviction controllers, this is considered a beneficial side-effect rather than the primary objective. The proposal carefully avoids introducing any behavioural changes or additional complexity in the existing APIs, focusing purely on internal reorganisation for long-term maintainability.

#3243 Respect PodTopologySpread after rolling upgrades

Stage: Graduating to Beta
Feature group: sig-scheduling
Feature gate: MatchLabelKeysInPodTopologySpread (default: true)

The Kubernetes enhancement proposes adding a new field called matchLabelKeys to the TopologySpreadConstraint API, improving how pods are grouped for topology spreading. Currently, users rely solely on LabelSelector, which requires specifying exact label key-value pairs for identifying related pods. This approach fails to isolate spreading behaviour during rolling updates, especially in Deployments where new and old ReplicaSets coexist briefly, often resulting in unbalanced pod distribution. The new matchLabelKeys field allows users to specify only label keys, and at pod creation time, Kubernetes merges the pod’s label values with the existing selector. This dynamic merging enables skew calculations to apply specifically within a deployment revision, ensuring proper balance without manual label management or external tools like a de-scheduler.

This enhancement is especially useful during rolling updates, where it’s desirable to limit spreading calculations to the new ReplicaSet only. With matchLabelKeys, developers can leverage automatically generated labels such as pod-template-hash to distinguish between deployment revisions. This avoids the pitfalls of statically defined selectors and eliminates the need for manual tuning or infrastructure complexity. Kubernetes will gate this functionality behind a feature flag (MatchLabelKeysInPodTopologySpread) and handle backward compatibility and safe upgrades thoughtfully, aligning the design with similar concepts in PodAffinity.

Here's how a pod spec would use the new matchLabelKeys field:

apiVersion: v1
kind: Pod
metadata:
  name: sample
  labels:
    app: sample
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector: {}  # Initially empty
    matchLabelKeys:
    - app  # Label keys to match from the incoming pod

Upon pod creation, Kubernetes automatically transforms this to:

labelSelector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - sample  # Extracted from the pod's label

This dynamic merging makes pod spreading smarter, safer, and more maintainable across rolling updates.

Storage in Kubernetes 1.34

#3476 VolumeGroupSnapshot

Stage: Graduating to Beta
Feature group: sig-storage

This proposal introduces a Kubernetes API to support crash-consistent group snapshots of multiple persistent volumes via the Container Storage Interface (CSI). The API enables users to snapshot multiple volumes simultaneously using a label selector, allowing for write-order consistency across volumes without needing to quiesce applications. This is particularly valuable for stateful applications that store data and logs across separate volumes, ensuring consistency during backup and restore scenarios like disaster recovery.

The proposal defines new Custom Resource Definitions (CRDs): VolumeGroupSnapshot, VolumeGroupSnapshotContent, and VolumeGroupSnapshotClass. A user can create a VolumeGroupSnapshot by applying a label selector that matches the desired PVCs. If the CSI driver supports the required capabilities, the snapshot controller and the csi-snapshotter sidecar will coordinate to generate group and individual snapshot resources. The CSI function CreateVolumeGroupSnapshot must return a list of individual snapshot handles, which will be recorded in the status of the VolumeGroupSnapshotContent.

Each resulting individual VolumeSnapshot includes metadata linking it to the VolumeGroupSnapshot, and finalisers prevent them from being deleted independently. This mechanism allows for consistent, automated handling of group snapshots and their corresponding individual snapshots.

Here's a simplified example of the resulting individual snapshot object:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot1
  labels:
    volumeGroupSnapshotName: groupSnapshot1
spec:
  source:
    persistentVolumeClaimName: vsc1
status:
  volumeGroupSnapshotName: groupSnapshot1

This design is distinct from broader backup/restore KEPs (for example #1051) and focuses solely on consistent group snapshotting, not group volume management or replication. Future enhancements (like the Liens feature) may further streamline safety and lifecycle management of these snapshots.

#3751 VolumeAttributesClass

Stage: Graduating to Stable
Feature group: sig-storage

This improvement introduces a new VolumeAttributesClass resource, enabling dynamic, cloud-provider-agnostic management of volume performance attributes such as IOPS and throughput. While current Kubernetes functionality allows volume capacity to be modified after creation, performance parameters like IOPS and throughput are immutable once set through the StorageClass. This forces users to directly interact with cloud provider APIs to make changes, ultimately leading to management friction and operational complexity.

With this enhancement, users can now assign a VolumeAttributesClass to a PersistentVolumeClaim (PVC), allowing storage performance parameters to be updated non-disruptively by simply switching to a different class (for example, upgrading from “silver” to “gold” for higher IOPS). Cluster administrators define these classes, and enforcement is delegated to the CSI driver. This decouples performance tuning from capacity management and simplifies lifecycle operations for applications like databases or stateful services that may need to scale performance characteristics independently over time.
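
Cluster administrators define the classes themselves; a “gold” class might look roughly like this (the driverName and parameters are placeholders for whatever the CSI driver in use supports):

apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: gold
driverName: ebs.csi.aws.com  # placeholder CSI driver
parameters:
  iops: "16000"
  throughput: "600"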

Example PVC using VolumeAttributesClass:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName: csi-sc-example
  volumeAttributesClassName: gold  # Can be changed later to adjust IOPS/throughput
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi

This design bridges a crucial gap by enabling inline performance updates in Kubernetes without relying on vendor-specific tooling, improving both usability and automation of storage resources.

#1790 Support recovery from volume expansion failure

Stage: Graduating to Stable
Feature group: sig-storage

This KEP improves the reliability and user experience of PVC expansion by allowing users to recover from failed volume resize attempts. Often, users may attempt to expand a PVC to a size unsupported by the underlying storage provider (for example, from 10Gi to 500Gi), which causes the expansion controller to repeatedly and unsuccessfully attempt the operation. To address this, the proposal introduces the ability for users to reduce the requested size of a PVC (provided the new value remains larger than the size already allocated), thereby enabling them to retry expansion with more realistic values without being locked into the failing configuration.

To prevent abuse of this resizing capability, such as users gaming the quota system by repeatedly expanding and shrinking volumes, the proposal adds a new pvc.Status.AllocatedResources field to track the maximum requested size for quota enforcement. Quotas are calculated using the maximum of pvc.Spec.Resources and pvc.Status.AllocatedResources, ensuring quota usage reflects attempted expansions even if the user later reduces the request. A new field, pvc.Status.AllocatedResourceStatus, is also introduced to track the detailed state of volume expansion at both the controller and node levels. These mechanisms allow controlled rollback of expansion requests and ensure quota accounting remains accurate, even in failure or edge cases like ReadWriteMany (RWX) volumes and rapid successive expansion-shrink attempts. Overall, the changes maintain backward compatibility and uphold the integrity of quota systems while improving usability and error recovery for volume resizing.
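
In practice, recovery is just an edit to the PVC: after a failed attempt to grow a volume from 10Gi to 500Gi, the user lowers the request to something the backend can actually satisfy, as long as it stays at or above the size already allocated (the values below are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi  # reduced from the failed 500Gi request; quota still
                      # reflects the maximum recorded in status.allocatedResources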

Timeline of the Kubernetes v1.34 Release

Kubernetes users can expect the v1.34 release process to unfold throughout August 2025, with key milestones including two release candidates on August 6th and 20th, followed by the official v1.34.0 release on August 27th. Supporting events include the docs freeze (August 6th) and the release blog publication on the same day as the final release.

Kubernetes v1.34 Release Information

What is happening?           By whom?         And when?
Docs Freeze                  Docs Lead        Wednesday 6th August 2025
v1.34.0-rc.0 released        Branch Manager   Wednesday 6th August 2025
v1.34.0-rc.1 released        Branch Manager   Wednesday 20th August 2025
v1.34.0 release announced    Comms            Wednesday 27th August 2025

