
Kubernetes 1.26 Release

Kubernetes 1.26 "Electrifying" delivers 37 improvements—including CRI v1 enforcement, storage migrations, Windows HostProcess GA, and new admission controls—requiring platform teams to plan upgrade rehearsals and resilience testing.

Reviewed for accuracy by Kodi C.


Kubernetes v1.26, codenamed “Electrifying,” was released on 8 December 2022 with 37 tracked improvements: eleven graduated to stable, ten advanced to beta, and sixteen entered alpha. The release enforces the Container Runtime Interface (CRI) v1 API, advances storage migration to CSI drivers, graduates Windows HostProcess containers to general availability, and introduces new extensibility mechanisms such as Common Expression Language (CEL)-powered validating admission policies and dynamic resource allocation. Platform engineering teams must orchestrate multi-stage upgrade rehearsals, dependency validation, and outcome testing to absorb the new capabilities without jeopardising cluster availability.

The release cadence spanned fourteen weeks (5 September – 9 December 2022) and involved contributions from over 6,800 individuals across 976 teams. Version 1.26 maintains compatibility with Kubernetes’ standard skew policy—allowing upgrades from 1.24 or 1.25—but drops support for deprecated APIs and runtime hooks. Operators should scrutinise release notes for component-specific changes that require action before upgrading production clusters.

Core platform changes

  • CRI v1 enforcement. The v1alpha2 Container Runtime Interface API is removed, so the kubelet speaks only CRI v1. Nodes must run container runtime versions that support CRI v1—containerd 1.6.0 or newer, or a current CRI-O release—before the Kubernetes upgrade. Clusters that previously relied on the built-in Docker Engine integration (dockershim, removed in v1.24) must already be running a CRI v1-compatible shim such as cri-dockerd.
  • Storage migration milestones. CSI migration for Azure File and VMware vSphere storage drivers graduated to stable, completing the removal of their in-tree plugins. Delegated fsGroup handling for CSI drivers is also stable, enabling drivers to set POSIX permissions without kubelet intervention. Deprecated in-tree GlusterFS and OpenStack Cinder drivers were removed, reinforcing the shift toward externally managed CSI drivers.
  • Windows HostProcess GA. Windows privileged container support—implemented as HostProcess pods—reached general availability. This enables platform teams to run node maintenance agents, CNI configuration scripts, and monitoring daemons on Windows worker nodes with host-level access.
  • Kubelet credential provider GA. Credential providers for image registries are now stable, improving security for external registry authentication.
  • Service networking improvements. Service Internal Traffic Policy, LoadBalancer mixed protocol support, and service IP reservation are stable, giving operators finer-grained traffic routing controls.
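For clusters completing the Azure File migration above, workloads provision storage through a StorageClass that names the external CSI driver rather than the removed in-tree plugin. A minimal sketch, assuming illustrative names and an illustrative SKU parameter (the `file.csi.azure.com` provisioner is the real driver name; everything else here is a placeholder):

```yaml
# Hypothetical StorageClass targeting the external Azure File CSI driver
# (file.csi.azure.com) in place of the removed in-tree azure-file plugin.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi            # assumed name
provisioner: file.csi.azure.com  # external CSI driver, not the in-tree plugin
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  skuName: Standard_LRS          # illustrative Azure storage SKU
```

Existing PersistentVolumeClaims that referenced the in-tree provisioner are translated by CSI migration, but new classes should reference the CSI driver directly.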

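A HostProcess workload is expressed as an ordinary pod whose Windows security context opts into host access. This sketch shows the GA shape of that spec; the pod name, image, and command are placeholders, not a definitive agent deployment:

```yaml
# Sketch of a HostProcess pod for a Windows node agent (names are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: windows-node-agent                      # assumed name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                         # run as a host process
      runAsUserName: "NT AUTHORITY\\SYSTEM"     # host identity for the container
  hostNetwork: true                             # required for HostProcess pods
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: agent
    image: example.com/windows-node-agent:1.0   # placeholder image
    command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 3600"]
```

Because such pods run with host-level privileges, they belong behind RBAC and admission controls just like privileged Linux DaemonSets.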
New extensibility primitives

Kubernetes v1.26 adds experimental capabilities that expand the platform’s flexibility:

  • Dynamic Resource Allocation (alpha). A new framework allows third-party device plugins to request resources through resource claims similar to persistent volumes. The feature uses the Container Device Interface (CDI) for device injection and enables more expressive scheduling for GPUs, FPGAs, DPUs, and storage accelerators. It is guarded by the DynamicResourceAllocation feature gate.
  • ValidatingAdmissionPolicy (alpha). Administrators can define admission policies using the Common Expression Language (CEL) without deploying external webhooks. This reduces latency and operational complexity for enforcing guardrails such as label policies, image provenance checks, or namespace restrictions.
  • Pod scheduling readiness (alpha). The PodSchedulingReadiness feature introduces .spec.schedulingGates, allowing controllers to postpone scheduling until prerequisites (for example, sidecar provisioning or policy approval) are satisfied.
  • Node inclusion policy in topology spread (beta). Refinements to topology spread constraints let teams control whether taints and tolerations influence pod distribution, aiding high-availability planning.
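To illustrate the CEL-based admission primitive above, the following sketch enforces a label guardrail without a webhook. It assumes the ValidatingAdmissionPolicy feature gate and the admissionregistration.k8s.io/v1alpha1 API are enabled on the cluster; the policy name, binding name, and label key are illustrative:

```yaml
# Hypothetical CEL admission policy: Deployments must carry a 'team' label.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-team-label               # assumed name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "has(object.metadata.labels) && 'team' in object.metadata.labels"
    message: "Deployments must carry a 'team' label."
---
# A binding activates the policy; an empty matchResources applies it broadly.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-team-label-binding       # assumed name
spec:
  policyName: require-team-label
  matchResources: {}
```

Because evaluation happens in-process in the API server, there is no webhook endpoint to operate or secure, which is the latency and complexity win described above.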

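Pod scheduling readiness is equally small on the wire: a pod created with a non-empty `.spec.schedulingGates` list is simply ignored by the scheduler until a controller removes the gates. A minimal sketch, assuming a hypothetical gate name managed by an external quota controller:

```yaml
# Sketch of a gated pod; the scheduler will not place it until the gate
# below is removed from .spec.schedulingGates by a controller.
apiVersion: v1
kind: Pod
metadata:
  name: gated-workload                  # assumed name
spec:
  schedulingGates:
  - name: example.com/quota-approved    # hypothetical gate, removed externally
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

Gates can only be removed, not added, after creation, so controllers must attach them at admission time.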
Security and supply chain

Release artifact signing with Sigstore cosign advanced to beta. Every binary and container image published by the release team can be verified using keyless signatures, strengthening supply chain assurance. Clusters adopting Kubernetes 1.26 should incorporate cosign verification into build and deployment pipelines.

The release also continues Kubernetes’ security hardening journey: removal of legacy logging flags, in-tree credential code, and kube-proxy userspace mode reduces attack surface. Operators must confirm that automation scripts or monitoring integrations no longer rely on deprecated flags.

Deprecations and removals

Twelve features were deprecated or removed, including:

  • Removal of the CRI v1alpha2 API.
  • Retirement of the flowcontrol.apiserver.k8s.io/v1beta1 and autoscaling/v2beta2 APIs.
  • Removal of dynamic kubelet configuration and kubectl legacy flags.
  • Deletion of in-tree GlusterFS and OpenStack Cinder volume plugins.

Upgrade playbooks must include schema migrations for manifests, admission controllers, and automation that referenced these APIs or flags.
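For the autoscaling removal specifically, the migration is usually a one-line API version change, since the autoscaling/v2 schema is structurally the same as v2beta2 for common cases. A sketch with illustrative names:

```yaml
# Before (removed in v1.26): apiVersion: autoscaling/v2beta2
# After - same spec shape under the GA API version:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                  # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # assumed target workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

FlowSchema and PriorityLevelConfiguration objects on flowcontrol.apiserver.k8s.io/v1beta1 need the analogous version bump to a still-served version.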

Upgrade readiness checklist

  • Prerequisites. Ensure control plane and worker nodes run supported container runtimes with CRI v1. Validate etcd and CoreDNS versions meet compatibility matrices.
  • Manifest audit. Scan manifests for deprecated API versions using kubectl convert or pluto. Update HorizontalPodAutoscaler resources to autoscaling/v2 and flow control objects to v1beta3 or v1beta2.
  • Storage migration. Verify clusters using Azure File or vSphere have deployed external CSI drivers and removed in-tree references. Test backup and restore workflows after migration.
  • Windows operations. For hybrid clusters, run smoke tests on HostProcess workloads to confirm policies, RBAC, and monitoring agents function with GA semantics.
  • Admission policies. If experimenting with CEL admission policies, deploy them in non-production clusters first, validate expressions, and monitor audit logs before rolling into production.

Outcome testing and observability

To verify that upgrades succeeded and that new features behave as expected:

  • Upgrade drills. Perform blue/green or canary upgrades in staging clusters. Capture metrics for control plane availability, API server latency, and workload disruptions.
  • Regression suites. Run conformance tests (for example, with Sonobuoy), chaos engineering scenarios, and application-specific integration tests post-upgrade.
  • Security validation. Verify cosign signatures for release artifacts, run cluster vulnerability scans, and confirm removal of deprecated flags in configuration management.
  • Performance baselines. Monitor CPUManager and DeviceManager stability, especially with workloads using dedicated CPU or device assignments. Track pod scheduling latency when enabling new scheduling features.

Roadmap considerations

Kubernetes 1.26 will receive patch support for approximately 14 months (12 months of standard support plus a two-month maintenance window), so teams should plan upgrades to 1.27 or newer within that window. Many alpha features introduced here—dynamic resource allocation, CEL-based admission, scheduling gates—will evolve quickly; teams should participate in SIG feedback loops and contribute test results to accelerate maturation.

By investing in structured upgrade governance, observability, and experimentation, platform teams can capitalize on Kubernetes 1.26’s new capabilities while maintaining reliability and compliance.

Stakeholder communication

Product owners and application teams should be briefed on feature timelines, particularly when alpha capabilities (Dynamic Resource Allocation, ValidatingAdmissionPolicy) are enabled in sandbox environments. Establish release notes summaries that translate technical changes into business language—such as highlighting how signed artifacts reduce supply chain risk or how Windows HostProcess GA enables hybrid automation scenarios. Update service catalogs and platform documentation to reflect new add-ons, deprecations, and support SLAs.

Enterprise PMOs may require evidence that platform changes underwent change advisory board (CAB) review. Capture upgrade plans, risk assessments, test results, and rollback procedures in change tickets. Communicate maintenance windows well in advance and coordinate with disaster recovery teams to ensure secondary clusters mirror configuration updates.



