The Kubernetes project has released version 1.35 under the code name ‘Timbernetes’. The release introduces 60 new and updated features. The most notable is the introduction of in-place updates of Pod resources, which simplifies the management of Kubernetes workloads. At the same time, the release marks the retirement of the Ingress NGINX controller and the removal of cgroup v1 support.
The name ‘Timbernetes’ refers to Yggdrasil, the world tree from Norse mythology. The symbol fits the philosophy of the project, at least according to its creators. Like a tree, Kubernetes grows ring by ring, shaped by a global community of contributors. The release emphasizes that growth; below are the most important innovations.
In-place updates for Pod resources
The most important new stable feature is the ability to adjust CPU and memory resources of running Pods without having to restart them. Previously, administrators had to completely recreate Pods for resource changes, which caused disruptions to stateful applications. The new functionality allows workloads to be scaled up and down without downtime, a significant improvement.
Previous Kubernetes versions only allowed infrastructure settings to be changed for existing Pods. With v1.35, administrators can now intervene immediately in the event of resource shortages or surpluses. Recent releases have already focused on more lifecycle management capabilities, as seen in version 1.33. This trend is therefore continuing.
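Concretely, in-place resizing is driven by a per-container resize policy. The sketch below shows a Pod that permits CPU changes without a container restart; the names (“web”, “app”) and values are illustrative.

```yaml
# Illustrative Pod spec: resizePolicy controls whether changing a given
# resource restarts the container or applies in place.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app
    image: nginx
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # CPU can be resized in place
    - resourceName: memory
      restartPolicy: RestartContainer # memory changes restart the container
    resources:
      requests:
        cpu: "500m"
        memory: 256Mi
```

A running Pod can then be resized through the dedicated subresource, for example with `kubectl patch pod web --subresource resize`, rather than by deleting and recreating it.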
Native Pod certificates for workload identity
Another notable beta feature is native support for Pod certificates. Until now, providing certificates to Pods required external controllers such as cert-manager or SPIFFE/SPIRE, plus complex CRD orchestration and secrets management. Kubernetes v1.35 integrates this functionality directly into the platform. Although Kubernetes is not secure by default (it always needs more hardening than it provides out of the box), such improvements do save IT teams a lot of headaches.
The kubelet (read: the node agent) now independently generates keys, requests certificates via PodCertificateRequest, and writes credential bundles directly to the Pod’s file system. The kube-apiserver enforces node restrictions during issuance, preventing a common error with third-party signers: without such restrictions, a node could claim identities for workloads that are not actually running on it.
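From the workload’s perspective, this surfaces as a projected volume source. The fragment below is a sketch based on the beta API; the signer name is illustrative and the exact field names may shift while the feature is in beta.

```yaml
# Illustrative Pod volume: the kubelet generates a key, files a
# PodCertificateRequest with the named signer, and mounts the resulting
# credential bundle into the Pod. The signer name is a placeholder.
volumes:
- name: workload-certs
  projected:
    sources:
    - podCertificate:
        signerName: example.com/internal-ca   # hypothetical signer
        keyType: ED25519
        credentialBundlePath: credentialbundle.pem
```

The application reads the bundle from the mounted path like any other file, with rotation handled by the kubelet rather than by an external controller.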
Declaring node features for better scheduling
As an alpha feature, v1.35 introduces a framework that allows nodes to declare their supported Kubernetes features. This solves a practical problem where control planes enable new features but nodes are still running old versions, causing the scheduler to place Pods on incompatible nodes.
Nodes now report which features they do and do not support via a new .status.declaredFeatures field. The kube-scheduler, admission controllers, and third-party components use this information to schedule Pods only on compatible nodes.
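As an alpha feature the exact schema may still change, but the reported data would look roughly like the sketch below; the feature names listed are purely illustrative.

```yaml
# Hypothetical excerpt of a Node object's status in v1.35 (alpha feature;
# schema and feature names are illustrative, not authoritative).
status:
  declaredFeatures:
  - InPlacePodVerticalScaling
  - PodCertificateProjection
```

An administrator could inspect this with something like `kubectl get node <name> -o jsonpath='{.status.declaredFeatures}'`, and the scheduler uses the same data to avoid placing Pods that depend on a feature onto nodes that do not declare it.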
The end of the Ingress NGINX controller
In addition to new features, v1.35 also bids farewell to legacy components. The Ingress NGINX controller, which has been the standard for traffic management of Kubernetes workloads for years, is being phased out permanently. The project is struggling with a shortage of maintainers and mounting technical debt. Best-effort maintenance will continue until March 2026, after which it will be archived. Security issues surrounding this component have already come to light in a painful way this year.
The recommended migration route is via the Gateway API, which offers more modern and extensible traffic management. For organizations that rely heavily on Ingress NGINX, this is a fairly significant change that requires planning and may involve some rough edges during the transition.
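To give a sense of the migration, the sketch below shows the Gateway API equivalent of a simple host-based Ingress rule. The gateway class, hostname, and Service name are illustrative; the gateway class depends on which Gateway API implementation the cluster runs.

```yaml
# Minimal Gateway API sketch replacing a basic Ingress rule.
# "example-gw-class", "app.example.com", and "web-svc" are placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: example-gw-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
  - name: web-gateway
  hostnames:
  - app.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web-svc
      port: 80
```

Unlike Ingress, the Gateway API splits infrastructure (Gateway) from routing (HTTPRoute), which is part of why it scales better for multi-team clusters.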
In addition, support for cgroup v1 will disappear completely. Kubernetes already introduced cgroup v2 support in 2022 with version 1.25 due to better resource isolation and a cleaner hierarchy. Administrators who still run nodes on old Linux distributions without cgroup v2 will have to migrate, otherwise the kubelet will no longer start.
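Before upgrading, administrators can verify which cgroup version a node runs. On cgroup v2 the unified hierarchy is mounted as `cgroup2fs`; this is a standard check, not specific to Kubernetes.

```shell
# Detect the cgroup version on a Linux node: on cgroup v2 the filesystem
# type of /sys/fs/cgroup is "cgroup2fs", on v1 it is typically "tmpfs".
fstype=$(stat -fc %T /sys/fs/cgroup)
if [ "$fstype" = "cgroup2fs" ]; then
  echo "cgroup v2: kubelet v1.35 supported"
else
  echo "cgroup v1 ($fstype): migrate this node before upgrading"
fi
```

Nodes that report cgroup v1 need a newer distribution or a boot parameter change to the unified hierarchy before the v1.35 kubelet will start.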
The IPVS mode in kube-proxy will also eventually disappear. Although the mode remains available, kube-proxy now displays a warning when it is used. For Linux nodes, nftables is the recommended replacement. The change is due to maintenance overhead and the desire to focus on modern standards. It appears that the Kubernetes team has looked at this issue across the board and is eliminating legacy wherever possible.
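Switching backends is a one-line change in the kube-proxy configuration; the fragment below shows the relevant field.

```yaml
# KubeProxyConfiguration fragment selecting the nftables backend
# instead of the deprecated ipvs (or the older iptables) mode.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"
```

Note that nftables mode requires a reasonably recent kernel and nft userspace tooling on the nodes, so the node OS should be checked alongside the configuration change.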
Improvements to Dynamic Resource Allocation
Dynamic Resource Allocation (DRA) reached stable status in v1.34 and remains always enabled in v1.35. Several alpha features within DRA have been significantly improved, including extended resource requests, device taints and tolerations, and partitionable devices.
DRA makes it possible to allocate specialized hardware such as GPUs to workloads more flexibly. The improvements focus on edge cases that previously did not work well, such as reusing devices in init containers and reporting problems without directly impacting scheduling.
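The basic DRA pattern pairs a ResourceClaim with a Pod that references it. The sketch below assumes a GPU driver that installs a device class; the class name (`gpu.example.com`) is a placeholder that depends on the installed driver.

```yaml
# Sketch: claim one device from a hypothetical GPU device class via DRA,
# then consume the claim from a Pod. Names are illustrative.
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      exactly:
        deviceClassName: gpu.example.com   # placeholder device class
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: gpu-claim
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    resources:
      claims:
      - name: gpu
```

Compared to the older device-plugin model, the claim is a first-class API object, which is what allows the scheduler and drivers to negotiate details such as partitions, taints, and tolerations on the device itself.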