CKA Cluster Upgrade with kubeadm: Step-by-Step Guide (2026)

Complete walkthrough for upgrading a Kubernetes cluster using kubeadm — control plane and worker nodes, with the exact apt commands, drain sequence, and verification steps you'll need on exam day.

By Sailor Team, April 27, 2026

Cluster upgrade is the most procedural question on the CKA exam — long, with many small steps, and unforgiving if you skip one. It’s also one of the highest-point questions when it appears. Get it right and you bank 10-12 points on a question that intimidates most candidates. Get it wrong and you lose them all, plus 15 minutes you can’t get back.

This guide gives you the exact apt commands, kubeadm flags, drain sequence, and verification steps for upgrading a cluster from one Kubernetes version to the next. Drill it on a real cluster until you can complete it in 12-15 minutes — that’s the bar.

What the Exam Actually Asks

The CKA upgrade question follows a predictable shape:

Upgrade the cluster from Kubernetes v1.29.x to v1.30.x. Upgrade the control plane node first (controlplane), then the worker node (worker-1). All worker workloads must be drained before upgrade. After upgrade, both nodes must be Ready.

You’ll have SSH access to both nodes. Plan to spend 15 minutes on this question — set a hard timer.

The Two Golden Rules

Before any commands, internalize these rules. Breaking either one usually ends the question.

  1. Upgrade kubeadm first, then use it to upgrade the cluster, then upgrade kubelet and kubectl. Never upgrade kubelet first — it’ll fail to start because the control plane is still on the old version.
  2. Drain before upgrading kubelet on a node, uncordon after. If you skip the drain, workloads get killed mid-upgrade with no graceful shutdown.

Memorize the order: kubeadm → kubeadm upgrade → kubelet/kubectl → restart kubelet → uncordon.

The Full Upgrade Sequence

There are four phases:

  1. Upgrade the control plane node (kubeadm + kubeadm upgrade + kubelet/kubectl).
  2. (For HA clusters only) Upgrade other control plane nodes with kubeadm upgrade node.
  3. Upgrade each worker node (drain → kubeadm + kubeadm upgrade node → kubelet/kubectl → uncordon).
  4. Verify the cluster is healthy and on the new version.

Phase 1: Control Plane Node

SSH into the control plane node.

1a. Find the exact version string

sudo apt update
apt-cache madison kubeadm | head -5

You’ll see entries like 1.30.0-1.1 (or 1.30.0-00 on the legacy repository). The exact suffix depends on which package repository is configured, so copy it from the output rather than guessing.
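
For reference, the output looks roughly like this (versions and repository URLs here are illustrative; use whatever your repo actually lists):

kubeadm | 1.30.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
kubeadm | 1.30.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages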

1b. Upgrade kubeadm

# Unhold (kubeadm is held to prevent accidental upgrades)
sudo apt-mark unhold kubeadm

# Install the target version (use the exact string from apt-cache madison)
sudo apt-get install -y kubeadm=1.30.0-1.1

# Re-hold
sudo apt-mark hold kubeadm

# Verify
kubeadm version

1c. Drain the control plane node

# From the control plane (kubectl is configured)
kubectl drain controlplane --ignore-daemonsets

If you see “cannot delete Pods with local storage”, add --delete-emptydir-data. Check carefully — the exam sometimes includes pods with PVCs that legitimately should not be evicted.
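
If you hit it, the error and the retry look roughly like this (pod name illustrative):

error: cannot delete Pods with local storage (use --delete-emptydir-data to override): default/cache-7c9d4

# If that data is safe to discard, retry:
kubectl drain controlplane --ignore-daemonsets --delete-emptydir-data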

1d. Plan the upgrade

sudo kubeadm upgrade plan

This shows which components will be upgraded and to what version. Read the output — it confirms you’re on the path you expect. Look for the line:

You can now apply the upgrade by executing the following command:
  kubeadm upgrade apply v1.30.0

1e. Apply the upgrade

sudo kubeadm upgrade apply v1.30.0

Type y to confirm. This step takes 2-4 minutes. You’ll see output for each control plane component (etcd, apiserver, controller-manager, scheduler) being upgraded.

When it finishes, you’ll see:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.30.0".

1f. Upgrade kubelet and kubectl

sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1
sudo apt-mark hold kubelet kubectl

1g. Restart kubelet

sudo systemctl daemon-reload
sudo systemctl restart kubelet

1h. Uncordon the control plane

kubectl uncordon controlplane

1i. Verify

kubectl get nodes
# controlplane should show: Ready, version v1.30.0
# worker-1 should show: Ready (or NotReady), version v1.29.x

Phase 2: Worker Nodes

For each worker node, the sequence is similar — but there are two key differences: kubeadm upgrade node instead of apply, and you drain from kubectl on the control plane.

2a. Drain the worker (from the control plane)

kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

2b. SSH to the worker and upgrade kubeadm

ssh worker-1

sudo apt update
sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.30.0-1.1
sudo apt-mark hold kubeadm

2c. Run kubeadm upgrade on the worker

sudo kubeadm upgrade node

Note: kubeadm upgrade node, not kubeadm upgrade apply. apply is only for the first control plane node. node is for every other node (additional control planes and all workers).

2d. Upgrade kubelet and kubectl

sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1
sudo apt-mark hold kubelet kubectl

sudo systemctl daemon-reload
sudo systemctl restart kubelet

2e. Uncordon (back on the control plane)

exit  # back to control plane
kubectl uncordon worker-1

2f. Verify

kubectl get nodes
# Both nodes should be Ready, both on v1.30.0

Verification Checklist

The grader checks specific things. Cover all of them before declaring done:

# All nodes Ready
kubectl get nodes

# All nodes on the target version
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'

# Control plane components healthy
kubectl get pods -n kube-system

# kubectl client matches server (within one minor version)
kubectl version
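
If the jsonpath output is hard to scan, custom-columns prints a per-node table with standard kubectl:

kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion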

If any node shows NotReady after the upgrade, the most likely cause is kubelet failing to start:

ssh <node>
sudo systemctl status kubelet
sudo journalctl -u kubelet -n 100 --no-pager

Common kubelet startup failures after upgrade:

  • kubelet binary version doesn’t match the kubeadm-upgraded config: re-run apt-get install with the exact target version.
  • Container runtime socket changed: check /var/lib/kubelet/kubeadm-flags.env.
  • Swap is on: sudo swapoff -a and remove the swap entry from /etc/fstab. (Quick checks for the last two are sketched below.)
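
A quick check sequence for the last two items, assuming the standard kubeadm file layout:

# Runtime socket and other kubelet flags written by kubeadm
cat /var/lib/kubelet/kubeadm-flags.env

# Is swap active? No output means swap is off
swapon --show
sudo swapoff -a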

Skipping the Container Runtime Upgrade

The CKA upgrade question doesn’t typically include container runtime upgrades (containerd, CRI-O). If a question asks for it, the steps are:

sudo apt-mark unhold containerd
sudo apt-get install -y containerd=<version>
sudo apt-mark hold containerd

sudo systemctl daemon-reload
sudo systemctl restart containerd
sudo systemctl restart kubelet

Always restart kubelet after restarting the container runtime.
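
A one-line sanity check that both services came back up (prints one state per unit; both should read active):

systemctl is-active containerd kubelet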

Common Upgrade Pitfalls (and Fixes)

Pitfall 1: Upgrading kubelet before kubeadm

# WRONG
sudo apt-get install -y kubelet=1.30.0-1.1   # before kubeadm upgrade

The kubelet will fail to register because the API server is still on the old version. Fix: always upgrade kubeadm and run kubeadm upgrade apply (or node) first.
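
If you’ve already made this mistake, recovery is just finishing the correct sequence. A sketch, using the same target version as the rest of this guide:

# Upgrade kubeadm, run the cluster upgrade, then restart the (already new) kubelet
sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.30.0-1.1
sudo apt-mark hold kubeadm
sudo kubeadm upgrade apply v1.30.0   # 'kubeadm upgrade node' on workers
sudo systemctl restart kubelet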

Pitfall 2: Forgetting to uncordon

The grader checks kubectl get nodes. A SchedulingDisabled node fails the test even if it’s on the right version. Fix: always end each node’s upgrade with kubectl uncordon <node>.
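
A quick guard before declaring the question done; any output here means a node is still cordoned:

kubectl get nodes | grep SchedulingDisabled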

Pitfall 3: Wrong package version syntax

# Probably wrong
sudo apt-get install -y kubeadm=1.30.0

# Correct (the suffix is required)
sudo apt-get install -y kubeadm=1.30.0-1.1

Always run apt-cache madison kubeadm | head -5 first and copy the exact version.

Pitfall 4: Holding/unholding the wrong packages

The order matters: unhold BEFORE install, hold AFTER. If you forget to unhold, apt refuses to change the held package, so you stay on the old version.
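
You can confirm the hold state at any point:

apt-mark showhold
# kubeadm, kubelet, and kubectl should all be listed when you finish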

Pitfall 5: Skipping the drain

If you upgrade kubelet without draining, running pods get killed mid-upgrade. The grader may check that pods on the upgraded node were rescheduled cleanly. Fix: always kubectl drain <node> --ignore-daemonsets before touching kubelet.

Pitfall 6: Forgetting --ignore-daemonsets on drain

DaemonSet pods can’t be evicted normally. Without the flag, drain fails. Fix: always include --ignore-daemonsets.
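
The failure looks roughly like this (the pod name here is illustrative):

error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-proxy-x7k2m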

How to Practice This

Build a kubeadm cluster on an older version (e.g., v1.29.0). Practice upgrading to v1.30.0 end-to-end. Then tear down and rebuild from a snapshot, and do it again. The fifth time you do it should take under 12 minutes.

Practice scenarios to drill:

  1. Single control plane + single worker (the standard CKA setup).
  2. Failed kubelet after upgrade — practice diagnosis with journalctl.
  3. Wrong drain command — practice recovering from a partial drain.

For the lab setup, see our Kubernetes lab setup for CKA guide.

A Quick Mental Checklist for Exam Day

When you see the upgrade question, before typing anything:

  • Confirm which version you’re upgrading to and from
  • Confirm the control plane node name (the question states it)
  • Confirm the worker node name(s)
  • Note any constraints (e.g., “do not upgrade etcd”)

Then execute in this exact order:

  1. Control plane: drain → kubeadm → upgrade plan → upgrade apply → kubelet+kubectl → restart → uncordon
  2. Each worker: drain (from control plane) → SSH → kubeadm → upgrade node → kubelet+kubectl → restart → exit → uncordon
  3. Verify: kubectl get nodes on the new version, all Ready

Validate Your Speed With a Real Mock

Drilling on your own cluster builds the muscle memory. The CKA tests whether you can do it under time pressure on an unfamiliar cluster. The only way to validate your speed is a scored, exam-realistic simulator.

Our CKA Mock Exam Bundle includes upgrade questions in every simulator with the same UI and version skew as the real exam. You’ll find out exactly how long an upgrade takes you under pressure — and that’s the number that predicts whether you pass.

Frequently Asked Questions

Q: How long does the upgrade question take in the exam? A: Aim for 12-15 minutes. The cluster commands themselves take ~5-7 minutes; the rest is drain, verification, and reading.

Q: What versions are in scope on the 2026 exam? A: The exam tracks the current Kubernetes minor version and the previous one. In 2026, that’s roughly v1.29 → v1.30 or v1.30 → v1.31.

Q: Do I need to upgrade etcd separately? A: No. kubeadm upgrade apply upgrades etcd (when running as a static pod) along with the other control plane components.
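
If you want to confirm, the etcd static pod’s image tag shows its version after the upgrade (the label and namespace are kubeadm defaults):

kubectl -n kube-system get pods -l component=etcd \
  -o jsonpath='{.items[*].spec.containers[0].image}'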

Q: What’s the difference between kubeadm upgrade apply and kubeadm upgrade node? A: apply is run only on the first control plane node and upgrades the cluster’s components. node is run on every other node (additional control planes and workers) to upgrade their kubeadm-managed configs.

Q: Can I skip a minor version (e.g., 1.28 → 1.30)? A: Kubernetes only supports upgrading one minor version at a time. The exam will not ask you to skip versions.

Q: What if kubeadm upgrade apply fails partway through? A: Read the error. The most common cause is a control plane component pod not coming up — check with kubectl get pods -n kube-system. If etcd is unhealthy, see our CKA etcd backup and restore guide.

Q: Do I need to upgrade kubectl on the control plane? A: Yes. The exam typically checks that kubectl on the control plane node matches the server version.

Q: Do worker nodes need kubeadm upgrade plan? A: No. plan is only for the first control plane. Workers go straight to kubeadm upgrade node.

Ready to make cluster upgrade an automatic 10 points on your CKA? Drill it on a real cluster, then validate your speed with our CKA Mock Exam Bundle.
