Platform Briefing — Kubernetes 1.24 'Stargazer' Release
Kubernetes 1.24 removes Dockershim, advances Pod Security admission and NetworkPolicy status reporting, and adds workload enhancements, demanding runtime migrations, policy updates, and vendor coordination before production upgrades.
Executive briefing: Kubernetes 1.24, codenamed “Stargazer,” was released on 3 May 2022 with 46 enhancements. The release removes Dockershim outright, continues the beta rollout of Pod Security admission (the replacement for the deprecated PodSecurityPolicy), introduces alpha status reporting for NetworkPolicy objects, and promotes gRPC probes to beta. Platform teams must ensure that clusters, container runtimes, and ecosystem tooling are ready for these changes before upgrading production workloads.
Runtime transition and Dockershim removal
The most consequential change is the removal of Dockershim, the built-in kubelet component that translated Container Runtime Interface (CRI) calls into Docker Engine API calls. Clusters upgrading to 1.24 must use a CRI-compliant runtime such as containerd or CRI-O, or retain Docker Engine through the externally maintained cri-dockerd adapter. Operations teams should verify runtime compatibility, update node bootstrap scripts, and adjust monitoring dashboards that previously scraped Docker-specific metrics. For managed services (EKS, GKE, AKS), confirm that control planes and worker node images ship with supported runtimes and that autoscaling groups reference updated Amazon Machine Images or node pools.
Migration steps include installing containerd packages, configuring /etc/containerd/config.toml, updating kubelet flags (--container-runtime-endpoint; note that the --container-runtime flag is deprecated in 1.24 and accepts only “remote”), and validating image pull secrets. Regression testing should cover logging (containerd's CRI plugin writes the CRI log format rather than Docker's JSON files), metrics exporters, and security agents (Falco, Aqua, Prisma Cloud) that relied on the Docker socket. Ensure that admission controllers and build pipelines no longer assume Docker CLI availability on worker nodes; use crictl or runtime-specific tooling for debugging.
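A minimal sketch of the node-level checks, assuming a kubeadm-style layout where kubelet flags live in a systemd drop-in and containerd listens on its default socket:

```
# Confirm each node's kubelet points at containerd rather than Dockershim.
grep -r 'container-runtime-endpoint' /etc/systemd/system/kubelet.service.d/
# Expected: --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# crictl replaces the Docker CLI for node-level debugging; point it at the same socket.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
```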
Security and policy updates
Pod Security admission, the built-in replacement for the deprecated PodSecurityPolicy, runs enabled by default in 1.24 as a beta feature and graduates to stable in 1.25. Cluster administrators should define namespace-level labels (pod-security.kubernetes.io/enforce, .../audit, .../warn) aligned with the baseline, restricted, or privileged profiles. Update manifests, Helm charts, and CI validations to ensure workloads comply with the chosen policies, restricting hostPath volumes, privilege escalation, and added capabilities. Consider using Kyverno or Open Policy Agent Gatekeeper to extend policy coverage beyond the built-in Pod Security levels.
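For example, the commands below (namespace name hypothetical) apply the restricted profile to a namespace; running the enforce label through a server-side dry run first surfaces warnings for existing pods that would violate it:

```
# Preview which running pods would breach the restricted profile.
kubectl label --dry-run=server --overwrite namespace payments \
  pod-security.kubernetes.io/enforce=restricted

# Apply enforce, audit, and warn labels once violations are resolved.
kubectl label --overwrite namespace payments \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted
```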
The release also introduces alpha status reporting for NetworkPolicy objects, allowing network plugin controllers to record whether a policy has been accepted and enforced. Operations teams should integrate these status conditions into observability dashboards to detect invalid or partially applied policies. Security engineers can enhance drift detection by alerting on policies whose conditions report failure or that never reach an accepted state, improving assurance that micro-segmentation rules are active.
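A sketch of what such a check might look like, assuming a CNI plugin that implements the alpha status subresource (the NetworkPolicyStatus feature gate must be enabled; the policy name and namespace are hypothetical):

```
# Inspect the conditions a network plugin recorded on a policy.
kubectl get networkpolicy deny-all -n payments \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
```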
Workload and developer experience enhancements
Specialised hardware (GPUs, FPGAs, smart NICs) continues to be served by the device plugin framework in 1.24; Dynamic Resource Allocation (DRA), the richer claim-based model sometimes associated with this release, does not arrive until its alpha debut in Kubernetes 1.26. Organisations exploring AI/ML or high-performance computing workloads should validate existing device plugins against containerd-based nodes now, coordinate with hardware vendors on driver support, and track DRA as it matures towards multi-step resource claims.
gRPC probes graduate to beta in 1.24 and are enabled by default, allowing kubelet health checks to run natively over gRPC and reducing reliance on HTTP wrappers for microservices that speak gRPC. Teams can update readiness and liveness configurations in development clusters now and promote the new probe type as it stabilises. The release also graduates the in-tree Azure Disk volume migration to the CSI driver to stable and continues beta refinement of topology-aware hints, advancing data locality for stateful applications.
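A minimal probe sketch, assuming the container serves the standard grpc.health.v1.Health service on port 9090 (the image name is hypothetical):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo
spec:
  containers:
  - name: server
    image: registry.example.com/grpc-health-demo:1.0   # hypothetical image implementing gRPC health checks
    ports:
    - containerPort: 9090
    readinessProbe:
      grpc:
        port: 9090
    livenessProbe:
      grpc:
        port: 9090
      initialDelaySeconds: 10
EOF
```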
API deprecations and version skew
Plan API migrations around the deprecation schedule: certificates.k8s.io/v1beta1 was already removed in 1.22, and events.k8s.io/v1beta1 and autoscaling/v2beta1 are slated for removal in 1.25, so use the 1.24 window to move manifests, custom controllers, and Helm charts to the stable APIs (events.k8s.io/v1, autoscaling/v2, certificates.k8s.io/v1). Validate CustomResourceDefinitions (CRDs) against the structural schema requirements that became mandatory when the v1beta1 CRD API was removed in 1.22. Kubernetes 1.24 also updates the bundled etcd to 3.5.3, which avoids a data corruption issue in earlier 3.5.x releases, and the default security posture tightens as distributions move nodes to containerd and the opt-in SeccompDefault gate (alpha) applies RuntimeDefault seccomp profiles. Plan etcd upgrades carefully, taking snapshots and verifying that control plane components are compatible.
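A quick audit sketch for locating stragglers (the manifests/ and charts/ paths are a hypothetical repository layout):

```
# Flag manifests and chart templates still pinned to the outgoing API versions.
grep -rnE 'apiVersion: *(events\.k8s\.io/v1beta1|autoscaling/v2beta1|certificates\.k8s\.io/v1beta1)' \
  manifests/ charts/
```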
Version skew policy remains unchanged: the kubelet may run up to two minor versions behind kube-apiserver but must never be newer, and kubectl may be one minor version ahead of or behind the API server. Upgrade processes should follow the standard pattern, control plane first and then node pools, while ensuring that add-ons (CNI plugins, CSI drivers, Ingress controllers) support 1.24. Review release notes for each add-on vendor, as some may require configuration changes to accommodate containerd logging paths or cgroup v2.
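A pre-flight sketch to confirm skew and runtime versions across the fleet:

```
# Client and server versions (kubectl should be within one minor of the API server).
kubectl version --short

# Per-node kubelet and runtime versions (kubelet must not be newer than the API server).
kubectl get nodes -o custom-columns='NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion,RUNTIME:.status.nodeInfo.containerRuntimeVersion'
```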
Operational roadmap
Assessment (Weeks 1–2). Conduct cluster discovery to identify versions, runtimes, and add-ons. Inventory workloads dependent on Docker-specific tooling (e.g., building images on nodes, accessing Docker socket). Gather statements from platform vendors (Red Hat OpenShift, VMware Tanzu, Rancher) regarding 1.24 support timelines.
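One discovery sketch for the Docker-socket inventory (requires jq; it only catches pods that mount the socket via hostPath, so pair it with vendor questionnaires):

```
# List pods that mount the Docker socket directly; these break once Dockershim is gone.
kubectl get pods --all-namespaces -o json | jq -r '
  .items[]
  | select([.spec.volumes[]?.hostPath.path] | index("/var/run/docker.sock"))
  | .metadata.namespace + "/" + .metadata.name'
```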
Preparation (Weeks 3–6). Build staging clusters or sandboxes running Kubernetes 1.24 with the target runtime. Execute conformance tests, performance benchmarks, and chaos engineering drills to validate behaviour. Update documentation and runbooks describing container lifecycle operations under containerd (e.g., using ctr or crictl for troubleshooting). Train operations staff on new logging paths (/var/log/pods) and metrics collection.
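A troubleshooting sketch for staff accustomed to `docker logs` (run on a node; the container name is hypothetical):

```
# Tail a container's logs through CRI tooling instead of the Docker CLI.
CID=$(crictl ps -q --name my-app | head -n1)   # "my-app" is a hypothetical container name
crictl logs --tail 50 "$CID"

# The kubelet now writes pod logs under /var/log/pods/<namespace>_<pod>_<uid>/.
ls /var/log/pods
```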
Upgrade execution (Weeks 6–10). Follow cloud provider or distribution-specific guidance. For EKS, migrate managed node groups to Bottlerocket or updated Amazon Linux 2 AMIs; for GKE, switch node pools to containerd-based node images; for self-managed clusters, upgrade kubeadm, kubelet, and kubectl sequentially. Perform canary upgrades on non-critical namespaces, monitor application telemetry, and confirm pod security labels. Validate CSI snapshot compatibility and network policy status reporting.
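For self-managed clusters, the kubeadm path looks roughly like this sketch (node name hypothetical; package pinning assumes Debian/Ubuntu apt repositories; repeat the drain-upgrade-uncordon loop per worker):

```
# Control plane first.
kubeadm upgrade plan
kubeadm upgrade apply v1.24.0

# Then each worker in turn.
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
apt-get install -y kubelet=1.24.0-00 kubectl=1.24.0-00
systemctl restart kubelet
kubectl uncordon worker-1
```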
Post-upgrade (Weeks 10–12). Retire legacy tooling that depended on Docker, update incident response procedures, and refresh security baselines. Audit cluster role bindings to ensure least privilege, adopt seccomp profile defaults, and verify that PodSecurity enforcement metrics meet compliance targets. Capture lessons learned and incorporate runtime lifecycle tracking into platform roadmaps.
Sourcing and ecosystem considerations
Vendors providing Kubernetes platforms, observability, and security tools must confirm 1.24 readiness. Request updated certification matrices from container security vendors (Sysdig, Aqua, Lacework), Ingress controllers (NGINX, HAProxy), and service meshes (Istio 1.14+, Linkerd 2.11+) that align with containerd-based nodes. Evaluate support contracts to ensure timely patches for cgroup v2 compatibility, CRI integration, and PodSecurity enforcement. If relying on managed Kubernetes, review service-level agreements for upgrade windows and maintenance controls, particularly for clusters supporting regulated workloads.
For supply chain security, coordinate with build platform teams to ensure that image build pipelines remain functional. Where developers previously used Docker-in-Docker within clusters, transition to tools like Kaniko, BuildKit with rootless mode, or Tekton Chains running on containerd. Update documentation for developers debugging pods, highlighting kubectl debug with ephemeral containers and crictl commands.
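As an illustration, a throwaway Kaniko build pod might look like the following sketch (the repository and registry are hypothetical, and credential handling is omitted; a real pipeline would mount a docker-registry secret at /kaniko/.docker for push access):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --context=git://github.com/example/app.git   # hypothetical source repository
    - --destination=registry.example.com/app:dev   # hypothetical target registry
EOF
```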
Risk management
Primary risks include runtime incompatibility, policy enforcement gaps, and unplanned downtime during upgrades. Mitigate by implementing upgrade playbooks with go/no-go checkpoints, maintaining snapshots of etcd, and scheduling maintenance windows with rollback contingencies. Monitor cluster telemetry for container start failures, image pull errors, and crash loops that may indicate runtime misconfiguration. Establish dedicated communication channels between platform teams and application owners to triage regressions quickly.
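A first-pass sketch for spotting runtime misconfiguration after a node migrates:

```
# Warning events often surface image pull, sandbox, and probe failures first.
kubectl get events --all-namespaces --field-selector type=Warning \
  --sort-by=.lastTimestamp | tail -n 20

# Pods stuck Pending or crash-looping deserve immediate triage.
kubectl get pods --all-namespaces --field-selector status.phase=Pending
```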
Security monitoring should verify that PodSecurity policies prevent privileged containers from launching unexpectedly. Use admission control testing frameworks to simulate policy violations and ensure audit events are captured. For compliance frameworks (PCI DSS, HIPAA, ISO 27001), document the upgrade process, testing evidence, and runtime transition as part of change management.
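One way to exercise enforcement without creating anything, as a sketch (the namespace is hypothetical and assumed to carry the enforce=restricted label): a server-side dry run passes through admission, so this privileged pod should be rejected with a PodSecurity error.

```
kubectl apply --dry-run=server -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: policy-violation-check
  namespace: payments
spec:
  containers:
  - name: shell
    image: busybox:1.35
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
EOF
```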
Kubernetes 1.24 solidifies the project’s move away from Docker-specific dependencies while advancing security and workload capabilities. Organisations that plan migrations carefully, retrain teams, and modernise tooling will benefit from improved portability, clearer policy enforcement, and future-ready runtime architectures.