Runtime Briefing — Kubernetes 1.26 Release
Kubernetes 1.26 "Electrifying" delivers 37 enhancements—including CRI v1 enforcement, storage migrations, Windows HostProcess GA, and new admission controls—requiring platform teams to plan upgrade rehearsals and resilience testing.
Executive briefing: Kubernetes v1.26, codenamed “Electrifying,” was released on 8 December 2022 with 37 tracked enhancements: eleven graduated to stable, ten advanced to beta, and sixteen entered alpha. The release enforces the Container Runtime Interface (CRI) v1 API, advances storage migration to CSI drivers, graduates Windows HostProcess containers to general availability, and introduces new extensibility mechanisms such as Common Expression Language (CEL)-powered validating admission policies and dynamic resource allocation. Platform engineering teams must orchestrate multi-stage upgrade rehearsals, dependency validation, and outcome testing to absorb the new capabilities without jeopardising cluster availability.
The release cadence spanned fourteen weeks (5 September – 9 December 2022) and involved contributions from over 6,800 individuals across 976 organisations. Version 1.26 follows Kubernetes' standard version skew policy: control planes upgrade one minor version at a time (from 1.25), while kubelets within the documented skew window remain supported. The release also drops deprecated APIs and runtime hooks, so operators should scrutinise the release notes for component-specific changes that require action before upgrading production clusters.
Core platform changes
- CRI v1 enforcement. The v1alpha2 Container Runtime Interface API is removed. Nodes running container runtimes such as containerd or CRI-O must upgrade to versions that support CRI v1 before the Kubernetes upgrade. Clusters that relied on the older Docker Engine integration must already have transitioned to a CRI v1-compatible shim such as cri-dockerd.
- Storage migration milestones. CSI migration for the Azure File and VMware vSphere storage drivers graduated to stable, routing in-tree volume operations to the external CSI drivers. Delegated fsGroup handling for CSI drivers is also stable, enabling drivers to set POSIX permissions without kubelet intervention. The deprecated in-tree GlusterFS and OpenStack Cinder drivers were removed, reinforcing the shift toward external CSI implementations.
- Windows HostProcess GA. Windows privileged container support, implemented as HostProcess pods, reached general availability. This lets platform teams run node maintenance agents, CNI configuration scripts, and monitoring daemons on Windows worker nodes with host-level access.
- Kubelet credential provider GA. Credential providers for image registries are now stable, improving security for external registry authentication.
- Service networking improvements. Service Internal Traffic Policy, LoadBalancer mixed protocol support, and service IP reservation are stable, giving operators finer-grained traffic routing controls.
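As a concrete illustration of the traffic policy control, here is a minimal Service manifest (names are placeholders) that keeps in-cluster traffic on the originating node via the now-stable internalTrafficPolicy field:

```yaml
# Illustrative Service: deliver in-cluster traffic only to endpoints on the same node.
apiVersion: v1
kind: Service
metadata:
  name: node-local-metrics        # hypothetical name
spec:
  selector:
    app: metrics-agent            # hypothetical selector
  ports:
    - port: 9100
      targetPort: 9100
  internalTrafficPolicy: Local    # stable in v1.26; the default remains Cluster
```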
New extensibility primitives
Kubernetes v1.26 adds experimental capabilities that expand the platform’s flexibility:
- Dynamic Resource Allocation (alpha). A new framework lets third-party resource drivers request devices through resource claims, similar to persistent volume claims. The feature uses the Container Device Interface (CDI) for device injection and unlocks more expressive scheduling for GPUs, FPGAs, DPUs, and storage accelerators. It is guarded by the DynamicResourceAllocation feature gate.
- ValidatingAdmissionPolicy (alpha). Administrators can define admission policies using the Common Expression Language (CEL) without deploying external webhooks. This reduces latency and operational complexity for enforcing guardrails such as label policies, image provenance checks, or namespace restrictions; see the sketch after this list.
- Pod scheduling readiness (alpha). The PodSchedulingReadiness feature introduces .spec.schedulingGates, allowing controllers to postpone scheduling until prerequisites (e.g., sidecar provisioning or policy approval) are satisfied; a gated Pod example follows this list.
- Node inclusion policy in topology spread (beta). Refinements to topology spread constraints let teams control whether node taints and node affinity rules influence pod distribution, aiding high-availability planning.
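To make the CEL-based admission model concrete, the sketch below assumes the ValidatingAdmissionPolicy feature gate and the admissionregistration.k8s.io/v1alpha1 API are enabled in a sandbox cluster; the policy name and label rule are hypothetical:

```yaml
# Hypothetical guardrail: every Deployment must carry a "team" label.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-team-label
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "has(object.metadata.labels) && 'team' in object.metadata.labels"
      message: "Deployments must carry a 'team' label."
---
# The binding activates the policy; no parameter resource is used in this sketch.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-team-label-binding
spec:
  policyName: require-team-label
```

A gated Pod, in turn, might look like the following (the gate name is hypothetical); the scheduler ignores the Pod until a controller removes the gate from .spec.schedulingGates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-workload
spec:
  schedulingGates:
    - name: example.com/sidecar-ready     # hypothetical gate, removed by a controller once prerequisites are met
  containers:
    - name: app
      image: registry.example.com/app:1.0 # placeholder image
```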
Security and supply chain
Release artifact signing with Sigstore cosign advanced to beta. Every binary and container image published by the release team can be verified using keyless signatures, strengthening supply chain assurance. Clusters adopting Kubernetes 1.26 should incorporate cosign verification into build and deployment pipelines.
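A minimal verification step, assuming cosign v2.x and the signing identity that the Kubernetes documentation lists for release images, could look like:

```shell
# Verify the keyless signature on a v1.26 release image.
cosign verify registry.k8s.io/kube-apiserver:v1.26.0 \
  --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
  --certificate-oidc-issuer https://accounts.google.com
```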
The release also continues Kubernetes’ security hardening journey: removal of legacy logging flags, in-tree credential code, and kube-proxy userspace mode reduces attack surface. Operators must confirm that automation scripts or monitoring integrations no longer rely on deprecated flags.
Deprecations and removals
Twelve features were deprecated or removed, including:
- Removal of the CRI v1alpha2 API.
- Retirement of the flowcontrol.apiserver.k8s.io/v1beta1 and autoscaling/v2beta2 APIs.
- Removal of dynamic kubelet configuration and legacy kubectl flags.
- Deletion of the in-tree GlusterFS and OpenStack Cinder volume plugins.
Upgrade playbooks must include schema migrations for manifests, admission controllers, and automation that referenced these APIs or flags.
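For instance, a HorizontalPodAutoscaler previously served from autoscaling/v2beta2 usually needs only its apiVersion moved to the stable API; a minimal sketch with hypothetical names:

```yaml
# HPA on the stable API; autoscaling/v2beta2 is no longer served in v1.26.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                   # hypothetical workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```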
Upgrade readiness checklist
- Prerequisites. Ensure control plane and worker nodes run supported container runtimes with CRI v1. Validate etcd and CoreDNS versions meet compatibility matrices.
- Manifest audit. Scan manifests for deprecated API versions using kubectl convert or pluto; see the example commands after this list. Update HorizontalPodAutoscaler resources to autoscaling/v2 and flow control objects to v1beta3 or v1beta2.
- Storage migration. Verify that clusters using Azure File or vSphere have deployed the external CSI drivers and removed in-tree references. Test backup and restore workflows after migration.
- Windows operations. For hybrid clusters, run smoke tests on HostProcess workloads to confirm policies, RBAC, and monitoring agents function with GA semantics.
- Admission policies. If experimenting with CEL admission policies, deploy them in non-production clusters first, validate expressions, and monitor audit logs before rolling into production.
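The commands below sketch the audit step from the checklist, assuming the pluto CLI and the kubectl convert plugin are installed and that manifests live under ./manifests (paths and file names are illustrative):

```shell
# Flag manifests that use API versions deprecated or removed as of v1.26.
pluto detect-files -d ./manifests --target-versions k8s=v1.26.0

# Rewrite a single manifest to a still-served API version.
kubectl convert -f hpa-v2beta2.yaml --output-version autoscaling/v2
```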
Outcome testing and observability
To demonstrate successful upgrades and new feature adoption:
- Upgrade drills. Perform blue/green or canary upgrades in staging clusters. Capture metrics for control plane availability, API server latency, and workload disruptions.
- Regression suites. Run conformance tests (for example, with sonobuoy), chaos engineering scenarios, and application-specific integration tests post-upgrade; see the commands after this list.
- Security validation. Verify cosign signatures for release artifacts, run cluster vulnerability scans, and confirm removal of deprecated flags in configuration management.
- Performance baselines. Monitor CPUManager and DeviceManager stability, especially with workloads using dedicated CPU or device assignments. Track pod scheduling latency when enabling new scheduling features.
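As one concrete regression pass, a conformance run with the sonobuoy CLI against the upgraded staging cluster might be scripted as:

```shell
# Run the full CNCF conformance suite and summarise the outcome.
sonobuoy run --mode=certified-conformance --wait
results_tarball=$(sonobuoy retrieve)
sonobuoy results "$results_tarball"
sonobuoy delete --wait    # clean up the sonobuoy namespace afterwards
```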
Roadmap considerations
Kubernetes 1.26 will receive patch support for roughly one year, so organisations should plan upgrades to 1.27 or newer within that window. Many alpha features introduced here—dynamic resource allocation, CEL-based admission, scheduling gates—will evolve quickly; teams should participate in SIG feedback loops and contribute test results to accelerate maturation.
By investing in structured upgrade governance, observability, and experimentation, platform teams can capitalise on Kubernetes 1.26’s new capabilities while maintaining reliability and compliance.
Stakeholder communication
Product owners and application teams should be briefed on feature timelines, particularly when alpha capabilities (Dynamic Resource Allocation, ValidatingAdmissionPolicy) are enabled in sandbox environments. Establish release notes summaries that translate technical changes into business language—such as highlighting how signed artifacts reduce supply chain risk or how Windows HostProcess GA unlocks hybrid automation scenarios. Update service catalogs and platform documentation to reflect new add-ons, deprecations, and support SLAs.
Enterprise PMOs may require evidence that platform changes underwent change advisory board (CAB) review. Capture upgrade plans, risk assessments, test results, and rollback procedures in change tickets. Communicate maintenance windows well in advance and coordinate with disaster recovery teams to ensure secondary clusters mirror configuration updates.