Kubernetes 1.18 general availability
Kubernetes 1.18 landed with server-side apply advancing through beta, an alpha `kubectl debug` command for troubleshooting running pods, and the Topology Manager graduating to beta, which matters if you are running latency-sensitive workloads pinned to specific NUMA nodes.
Kubernetes 1.18 landed on March 25, 2020, and while a new minor release does not usually make headlines, this one included changes that affect how you'll run production clusters. Server-side apply matured in beta, topology-aware service routing arrived, and IngressClass finally gave us a sane way to handle multiple ingress controllers. If you are running Kubernetes in production, these are not just feature bullet points; they are operational improvements you'll actually use.
Server-side apply is ready for real workloads
Server-side apply's progress through beta is bigger than it sounds: 1.18 ships its second beta, which tracks field ownership for objects even when they are modified outside of apply. Client-side apply (plain `kubectl apply`) has been the source of countless merge conflicts and unexpected behavior when multiple tools or operators modify the same resources. Server-side apply tracks field ownership on the API server itself, making conflicts explicit and resolution predictable.
In practice, this means your GitOps tools, operators, and manual kubectl commands can coexist without stepping on each other. When two controllers try to manage the same field, you get an explicit conflict instead of last-writer-wins chaos. For organizations running complex deployments with multiple automation systems, this is a fundamental improvement in reliability.
The migration path matters: server-side apply is opt-in per operation. You can start using it for new resources while legacy workflows continue working. Test your CI/CD pipelines with server-side apply in non-production environments before enabling it cluster-wide.
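Opting in per operation is a kubectl flag. A minimal sketch, assuming a manifest at `deployment.yaml` (the filename and field-manager names here are illustrative):

```shell
# Apply with server-side apply; the API server records this client
# as the owner of every field the manifest sets.
kubectl apply --server-side --field-manager=ci-pipeline -f deployment.yaml

# If another manager (say, an operator) already owns a field this
# manifest changes, the apply fails with a conflict instead of
# silently winning. Overriding is an explicit decision:
kubectl apply --server-side --force-conflicts --field-manager=ci-pipeline -f deployment.yaml
```

Field ownership is recorded in the object's `metadata.managedFields`, which is the first place to look when diagnosing who owns a contested field.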
Topology-aware routing reduces latency and cost
Service Topology, new in 1.18 as an alpha feature, lets a Service prefer endpoints on the same node, in the same zone, or in the same region before routing to more distant endpoints. For latency-sensitive services, or anything paying cross-zone transfer costs, the operational benefit is immediate.
Cloud providers charge for cross-zone network traffic. In large clusters spanning multiple zones, services routing randomly to any endpoint accumulate significant transfer costs. Topology-aware routing lets you prefer local endpoints without sacrificing availability—traffic stays in-zone when endpoints are healthy, failing over to other zones only when necessary.
Implementation requires setting the `topologyKeys` field in the Service spec to define the preference order, with the ServiceTopology feature gate enabled on the cluster. Start with non-critical services to validate behavior before enabling it on production workloads with strict latency requirements.
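A sketch of the alpha API, assuming the ServiceTopology feature gate is enabled (the service name and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cache            # illustrative name
spec:
  selector:
    app: cache
  ports:
    - port: 6379
  # Preference order: same node first, then same zone, then same
  # region; "*" falls back to any endpoint so the Service never
  # black-holes traffic when nothing local is healthy.
  topologyKeys:
    - "kubernetes.io/hostname"
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"
    - "*"
```

Always ending the list with `"*"` is the conservative choice: you get locality when it is available and full availability when it is not.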
IngressClass brings sanity to multi-ingress clusters
If you have ever run multiple ingress controllers in the same cluster—nginx for public traffic, an internal ALB for private services, specialized controllers for specific use cases—you know how fragile selection via the `kubernetes.io/ingress.class` annotation was. IngressClass provides a proper API resource for specifying which controller handles which Ingress.
The default IngressClass mechanism is particularly useful: you can designate a default controller that handles Ingresses without explicit class selection, while other controllers only pick up Ingresses explicitly assigned to them. This eliminates the multiple-controllers-fighting-over-resources problem that plagued earlier multi-ingress setups.
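A minimal sketch of the pattern (controller and class names are illustrative; `networking.k8s.io/v1beta1` is the API version in 1.18):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: public-nginx
  annotations:
    # Ingresses with no ingressClassName land on this controller.
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: internal-api     # illustrative
spec:
  # Explicit assignment; only the matching controller reconciles it.
  ingressClassName: internal-alb
  rules:
    - host: api.internal.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: api
              servicePort: 80
```

With exactly one class marked as default, unannotated Ingresses have a single, predictable owner instead of being claimed by whichever controller notices them first.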
Migration from annotation-based selection to IngressClass requires updating existing Ingress resources. Plan this carefully—the transition period where both mechanisms coexist can cause confusion if not documented clearly for teams managing Ingresses.
Pod disruption budget improvements
PodDisruptionBudgets gained better handling of unhealthy pods. Previously, unhealthy pods could count against disruption budgets, preventing voluntary disruptions even when those pods were not serving traffic anyway. The new behavior is more sensible: unhealthy pods do not block necessary cluster operations.
This particularly matters during node draining and cluster upgrades. Clusters with flaky workloads or long-running health check failures experienced upgrade gridlock when PDBs prevented progress. The 1.18 behavior keeps PDBs effective for protecting healthy capacity while not blocking operations on already-broken pods.
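For reference, a PDB protecting healthy capacity during drains looks like this (the name, selector, and threshold are illustrative):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: api-pdb          # illustrative
spec:
  # Voluntary disruptions (drains, evictions) are refused whenever
  # they would leave fewer than two healthy replicas.
  minAvailable: 2
  selector:
    matchLabels:
      app: api
```

The improvement described above changes what counts toward that threshold, not how the budget is declared: healthy capacity stays protected while already-broken pods stop holding up drains.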
HPA improvements for more responsive scaling
The Horizontal Pod Autoscaler gained configurable scaling policies, via the new `behavior` field in `autoscaling/v2beta2`, with finer control over scale-up and scale-down behavior. You can now set different rates for scaling up versus down, preventing the oscillation that plagued earlier HPA versions when metrics fluctuated around threshold boundaries.
The stabilization window prevents rapid flapping: HPA considers the highest (for scale-down) or lowest (for scale-up) recommendation over a configurable window before acting. This smooths out metric spikes without sacrificing responsiveness to sustained load changes.
For workloads with variable load patterns, these controls enable tuning HPA behavior to match application characteristics. Scale up aggressively on traffic spikes, scale down conservatively to handle returning traffic—configurations that required custom controllers before are now native HPA capabilities.
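A sketch of that asymmetric tuning using the `behavior` field (the deployment name and all thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa          # illustrative
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleUp:
      # React to spikes immediately: no stabilization window, and
      # up to a 100% increase in replicas every 30 seconds.
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 30
    scaleDown:
      # Come down slowly: require 5 minutes of consistently lower
      # recommendations, then shed at most 10% of replicas per minute.
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
```

The asymmetry is the point: aggressive scale-up absorbs traffic spikes, while the long scale-down window keeps capacity around for traffic that returns.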
What this means for your upgrade planning
Kubernetes releases follow a predictable cadence: new minor version every three months, support for three minor versions at a time. If you are running 1.15 or earlier, you are already outside the supported window. 1.18 gives you a good upgrade target with mature features and stability improvements.
Upgrade testing should focus on: API deprecations (check release notes for removed APIs), workload compatibility (run your test suites against 1.18 clusters), and add-on compatibility (ingress controllers, CNI plugins, monitoring stacks all need version alignment).
Multi-cluster operators should plan staged rollouts: upgrade development clusters first, then staging, then production. Each stage provides learning opportunities before higher-stakes environments.
Practical upgrade checklist
- Review the full release notes for API deprecations and breaking changes affecting your workloads.
- Test server-side apply with your GitOps tooling and operators in non-production before enabling cluster-wide.
- Evaluate topology-aware routing for services with latency sensitivity or cross-zone cost concerns.
- Plan IngressClass migration if running multiple ingress controllers.
- Update HPA configurations to use new scaling policies where appropriate.
- Verify ingress controller, CNI plugin, and monitoring stack compatibility with 1.18.
- Execute staged rollout: dev → staging → production with validation at each stage.
Kubernetes 1.18 is a solid release for production clusters. The features that went GA or beta address real operational pain points rather than adding complexity for its own sake. Organizations that upgrade thoughtfully will benefit from improved reliability, lower costs, and more predictable multi-controller environments.