Kubernetes 1.24 'Stargazer' Release
Dockershim is officially gone. Kubernetes 1.24 removes the Docker-to-CRI shim, which means your clusters need containerd or CRI-O now, not later. Also in this release: PodSecurity admission, the beta replacement for the deprecated PodSecurityPolicies, continues to mature ahead of its 1.25 graduation, and gRPC probes reach beta. If you have not migrated off Docker Engine yet, do it before upgrading.
Accuracy-reviewed by the editorial team
Kubernetes 1.24, codenamed “Stargazer,” was released on 3 May 2022 with 46 enhancements. It marks the official removal of Dockershim, graduates features such as storage capacity tracking and volume expansion to stable, promotes gRPC probes to beta, and introduces alpha NetworkPolicy status reporting. Platform teams must ensure that clusters, container runtimes, and ecosystem tooling are ready for these changes before upgrading production workloads.
Runtime transition and Dockershim removal
The most consequential change is the removal of Dockershim, the legacy kubelet component that translated Container Runtime Interface (CRI) calls into Docker Engine API calls. Clusters upgrading to 1.24 must use a CRI-compliant runtime such as containerd or CRI-O, or run Docker Engine behind the external cri-dockerd adapter. Operations teams should verify runtime compatibility, update node bootstrap scripts, and adjust monitoring dashboards that previously scraped Docker-specific metrics. For managed services (EKS, GKE, AKS), confirm that control planes and worker node images ship with supported runtimes and that autoscaling groups reference updated Amazon Machine Images or node pools.
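Before upgrading, it helps to confirm which runtime each node actually reports. A minimal sketch, assuming kubectl access to the cluster; the `printf` sample stands in for real output so the filtering step is visible end to end (node names `node-a` and `node-b` are hypothetical):

```shell
# With cluster access, list each node's runtime (uncomment to run for real):
# kubectl get nodes -o custom-columns='NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion'

# Flag nodes still reporting Docker Engine; sample output is piped in for illustration.
printf 'node-a containerd://1.6.4\nnode-b docker://20.10.12\n' \
  | awk '$2 ~ /^docker:/ {print $1}'
```

Any node the filter prints still needs cri-dockerd or a runtime migration before the 1.24 upgrade.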
Migration steps include installing containerd packages, configuring /etc/containerd/config.toml, updating kubelet flags (--container-runtime-endpoint; note that in 1.24 the --container-runtime flag is deprecated and accepts only "remote"), and validating image pull secrets. Regression testing should cover logging (containerd's CRI plugin writes logs in the CRI format under /var/log/pods rather than Docker's JSON files), metrics exporters, and security agents (Falco, Aqua, Prisma Cloud) that relied on the Docker socket. Ensure that admission controllers and build pipelines no longer assume Docker CLI availability on worker nodes; use crictl or runtime-specific tooling for debugging.
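A minimal sketch of the containerd configuration touched during migration; the systemd cgroup setting is the commonly recommended value on systemd hosts, and the sandbox image tag is illustrative for the 1.24 era, not output from any specific cluster:

```toml
# /etc/containerd/config.toml (excerpt)
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "k8s.gcr.io/pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # Must match the kubelet's cgroup driver (systemd is recommended on systemd hosts).
  SystemdCgroup = true
```

The kubelet side then points at the containerd socket, for example `--container-runtime-endpoint=unix:///run/containerd/containerd.sock`.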
Security and policy updates
Kubernetes 1.24 ships PodSecurity admission in beta (enabled by default since 1.23; it graduates to stable in 1.25) as the replacement for the deprecated PodSecurityPolicies, which are removed in 1.25. Cluster administrators should define namespace-level labels (pod-security.kubernetes.io/enforce, /audit, /warn) aligned with the privileged, baseline, or restricted profiles. Update manifests, Helm charts, and CI validations to ensure workloads comply with the chosen policies, restricting hostPath volumes, privilege escalation, and added capabilities. Consider using Kyverno or Open Policy Agent Gatekeeper to extend policy coverage beyond the built-in PodSecurity levels.
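A namespace carrying PodSecurity labels might look like the following sketch; the namespace name and the choice to enforce baseline while warning on restricted are illustrative, not prescriptive:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments        # hypothetical namespace
  labels:
    # Reject pods that violate the baseline profile.
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.24
    # Surface, but do not block, violations of the stricter restricted profile.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Pinning enforce-version prevents policy behavior from shifting silently when the cluster is later upgraded.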
The release also adds NetworkPolicy status as an alpha feature, allowing controllers to report whether policies are accepted and enforced. Operations teams should integrate the status conditions into observability dashboards to detect invalid or partially applied policies. Security engineers can improve drift detection by alerting on policies whose conditions indicate they have not been accepted or enforced, improving assurance that micro-segmentation rules are active.
Workload and developer experience improvements
Dynamic Resource Allocation (DRA), sometimes associated with this release, did not actually ship in 1.24; it debuted as an alpha feature in Kubernetes 1.26. Teams exploring AI/ML or high-performance computing workloads that need to request and claim specialized hardware (GPUs, FPGAs, smart NICs) should track DRA's maturation and coordinate with hardware vendors on driver support, while continuing to rely on the existing device plugin framework on 1.24.
gRPC probes graduate to beta in 1.24 (feature gate GRPCContainerProbe, now enabled by default), allowing health checks directly over gRPC and reducing reliance on HTTP wrappers or sidecar binaries such as grpc-health-probe for microservices that speak gRPC natively. Teams can update readiness and liveness configurations to use the new probe type. The release also advances CSI migration for in-tree volume plugins, including Azure Disk, and continues improvements to topology-aware hints, advancing data locality for stateful applications.
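A liveness probe using the new gRPC type can be sketched as follows; the pod name, image, and port are hypothetical, and the workload's server must implement the standard gRPC health checking protocol for the probe to succeed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-demo                        # hypothetical
spec:
  containers:
  - name: server
    image: example.com/grpc-server:1.0   # hypothetical image
    ports:
    - containerPort: 9090
    livenessProbe:
      grpc:
        port: 9090
        # Optional: named service passed to the Check RPC.
        # service: my.package.Service
      initialDelaySeconds: 5
      periodSeconds: 10
```

This removes the need to package an HTTP shim or a health-probe binary alongside the service purely for kubelet checks.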
API deprecations and version skew
Kubernetes 1.24 itself removes few APIs, but several long-deprecated beta versions are on the way out: certificates.k8s.io/v1beta1 was already removed in 1.22, and events.k8s.io/v1beta1 and autoscaling/v2beta1 disappear in 1.25. Ensure manifests, custom controllers, and Helm charts target the stable APIs (certificates.k8s.io/v1, events.k8s.io/v1, autoscaling/v2). Validate CustomResourceDefinitions (CRDs) for compatibility with the structural schema requirements enforced since 1.22. Kubernetes 1.24 also raises the minimum recommended etcd version to 3.5.3 (earlier 3.5.x patch releases carry a data corruption risk) and tightens the default security posture (containerd-based nodes, opt-in default seccomp profiles). Plan etcd upgrades carefully, taking snapshots and verifying that control plane components are compatible.
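A quick way to find manifests still referencing these beta API versions is a recursive grep; the sample manifest created under /tmp is purely illustrative:

```shell
# Create a sample manifest directory for illustration (hypothetical content).
mkdir -p /tmp/manifest-audit
cat > /tmp/manifest-audit/hpa.yaml <<'EOF'
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
EOF

# Flag files still referencing deprecated or removed beta API versions.
grep -rlE 'apiVersion: *(events\.k8s\.io/v1beta1|autoscaling/v2beta1|certificates\.k8s\.io/v1beta1)' \
  /tmp/manifest-audit
```

Static scanning catches checked-in manifests; for objects already in the cluster, tools such as kubectl-convert or Pluto can report deprecated API usage.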
The version skew policy remains unchanged: the kubelet may be up to two minor versions older than kube-apiserver (never newer), and kubectl is supported within one minor version of the API server in either direction. Upgrade processes should follow the standard pattern, control plane first and then node pools, while ensuring that add-ons (CNI plugins, CSI drivers, Ingress controllers) support 1.24. Review release notes for each add-on vendor, as some may require configuration changes to accommodate containerd logging paths or cgroup v2.
Operational roadmap
Assessment (Weeks 1–2). Conduct cluster discovery to identify versions, runtimes, and add-ons. Inventory workloads dependent on Docker-specific tooling (for example, building images on nodes, accessing Docker socket). Gather statements from platform vendors (Red Hat OpenShift, VMware Tanzu, Rancher) regarding 1.24 support timelines.
Preparation (Weeks 3–6). Build staging clusters or sandboxes running Kubernetes 1.24 with the target runtime. Execute conformance tests, performance benchmarks, and chaos engineering drills to validate behavior. Update documentation and runbooks describing container lifecycle operations under containerd (for example, using ctr or crictl for troubleshooting). Train operations staff on new logging paths (/var/log/pods) and metrics collection.
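Runbooks should document the CRI log format containerd writes under /var/log/pods: each line carries an RFC 3339 timestamp, the stream name, a partial/full flag, and the message. A sketch, with a sample line standing in for a real log file:

```shell
# Real logs live under /var/log/pods/<namespace>_<pod>_<uid>/<container>/*.log
# Format per line: <timestamp> <stdout|stderr> <P|F> <message>
# Sample line parsed for illustration:
echo '2022-05-03T10:15:30.123456789Z stdout F ready to serve' \
  | awk '{print $2}'
```

For live debugging, `crictl pods`, `crictl ps`, and `crictl logs <container-id>` replace the familiar docker equivalents on containerd nodes.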
Upgrade execution (Weeks 6–10). Follow cloud provider or distribution-specific guidance. For EKS, migrate managed node groups to the Bottlerocket or updated Amazon Linux 2 AMIs; for GKE, enable containerd node images; for self-managed clusters, upgrade kubeadm, kubelet, and kubectl sequentially. Perform canary upgrades on non-critical namespaces, monitor application telemetry, and confirm pod security labels. Validate CSI snapshot compatibility and network policy status reporting.
Post-upgrade (Weeks 10–12). Retire legacy tooling that depended on Docker, update incident response procedures, and refresh security baselines. Audit cluster role bindings to ensure least privilege, adopt seccomp profile defaults, and verify that PodSecurity enforcement metrics meet compliance targets. Capture lessons learned and incorporate runtime lifecycle tracking into platform roadmaps.
Sourcing and ecosystem considerations
Vendors providing Kubernetes platforms, observability, and security tools must confirm 1.24 readiness. Request updated certification matrices from container security vendors (Sysdig, Aqua, Lacework), Ingress controllers (NGINX, HAProxy), and service meshes (Istio 1.14+, Linkerd 2.11+) that align with containerd-based nodes. Evaluate support contracts to ensure timely patches for cgroup v2 compatibility, CRI integration, and PodSecurity enforcement. If relying on managed Kubernetes, review service-level agreements for upgrade windows and maintenance controls, particularly for clusters supporting regulated workloads.
For supply chain security, coordinate with build platform teams to ensure that image build pipelines remain functional. Where developers previously used Docker-in-Docker within clusters, transition to tools such as Kaniko, rootless BuildKit, or Tekton Pipelines running on containerd. Update documentation for developers debugging pods, highlighting kubectl debug with ephemeral containers and crictl commands.
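An in-cluster build with Kaniko avoids the Docker socket entirely. The sketch below assumes a Git build context and a registry credential secret; every name (repository URL, image destination, secret, version tag) is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build                               # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.8.1   # illustrative tag
    args:
    - --dockerfile=Dockerfile
    - --context=git://github.com/example/app.git   # hypothetical repo
    - --destination=registry.example.com/app:1.0   # hypothetical registry
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker
  volumes:
  - name: docker-config
    secret:
      secretName: registry-credentials             # hypothetical secret
      items:
      - key: .dockerconfigjson
        path: config.json
```

Because Kaniko executes Dockerfile instructions in userspace, the pod needs no privileged mode and no host runtime socket.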
Infrastructure improvements
Infrastructure teams should inventory clusters to identify which are affected by the runtime change and prioritize remediation by exposure and criticality. Patch and upgrade processes should account for 1.24's specific requirements, such as containerd configuration and cgroup v2 compatibility, and staging tests should confirm that node upgrades do not disrupt workloads before production rollout.
Monitoring should continue post-upgrade to verify that nodes registered with the new runtime, that workloads rescheduled cleanly, and that no pipelines or agents still depend on Docker-specific tooling during the migration window.
Further reading
- Kubernetes Blog — Kubernetes v1.24: Stargazer — kubernetes.io
- Kubernetes Blog — Kubernetes is Moving on From Dockershim — kubernetes.io
- ISO/IEC 27017:2015 — Cloud Service Security Controls — International Organization for Standardization