
How to Set Up a Kubernetes Lab for CKA Practice

Complete guide to setting up Kubernetes lab environments for CKA exam prep. Compare minikube, kind, kubeadm, and cloud options.

By Sailor Team, April 1, 2026

Setting up a proper Kubernetes lab environment is essential for CKA exam preparation. You need a space to experiment, break things intentionally, and practice without affecting production systems. This comprehensive guide walks you through multiple options for creating a Kubernetes lab, comparing their strengths and helping you choose the right approach for your situation.

Kubernetes Lab Options Comparison

| Option | Complexity | Cost | Realism | Best For |
| --- | --- | --- | --- | --- |
| minikube | Easy | Free | Low | Beginners, learning basics |
| kind (Kubernetes in Docker) | Easy | Free | Medium | Local development, quick setups |
| kubeadm Multi-Node | Hard | Free+ | Very High | Production-like environments |
| Vagrant + kubeadm | Medium | Free+ | High | Repeatable multi-node setups |
| Cloud (GKE/EKS/AKS) | Medium | $ | High | Real cloud practice |
| Hybrid Approach | Varies | Free-$ | High | Comprehensive training |

Option 1: minikube - Easiest for Beginners

minikube creates a single-node Kubernetes cluster in a VM or container. It’s perfect for learning fundamentals but doesn’t simulate real multi-node clusters.

Installation

macOS

# Using Homebrew
brew install minikube

# Or download directly
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
chmod +x minikube-darwin-amd64
sudo mv minikube-darwin-amd64 /usr/local/bin/minikube

Linux

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube

Windows

choco install minikube
# Or download from GitHub releases

Starting a Cluster

# Start minikube with default settings
minikube start

# Start with specific Kubernetes version
minikube start --kubernetes-version=v1.29.0

# Start with more resources
minikube start --cpus=4 --memory=8192 --disk-size=40g

# Start with Docker driver (faster on Linux)
minikube start --driver=docker

# Start with multiple nodes
minikube start --nodes=3

Accessing the Cluster

# minikube sets up kubeconfig automatically
kubectl cluster-info
kubectl get nodes

# SSH into the node for debugging
minikube ssh

# Access Kubernetes dashboard
minikube dashboard

# Stop the cluster
minikube stop

# Delete the cluster
minikube delete

Limitations for CKA Study

  • Single node only (basic mode) or limited multi-node
  • Doesn’t teach kubeadm cluster creation
  • Limited networking complexity
  • No etcd backup/restore practice
  • No cluster upgrade practice

Best For

  • Learning basic kubectl commands
  • Understanding pod and deployment concepts
  • Quick experimentation
  • Beginners new to Kubernetes

Option 2: kind (Kubernetes in Docker) - Best for Local Development

kind runs Kubernetes clusters in Docker containers. Multiple nodes run in containers on your machine, making it lightweight and fast.

Installation

# Download the binary
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Or install with Go
go install sigs.k8s.io/kind@latest

Creating a Multi-Node Cluster

# Simple single-node cluster
kind create cluster --name test-cluster

# Multi-node cluster with configuration
cat > kind-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9191203ad86603deed650453c5be0c4b95bffd92beb8
- role: worker
  image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9191203ad86603deed650453c5be0c4b95bffd92beb8
- role: worker
  image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9191203ad86603deed650453c5be0c4b95bffd92beb8
EOF

kind create cluster --config kind-config.yaml --name multi-node

Managing Clusters

# List clusters
kind get clusters

# Switch context
kubectl config use-context kind-multi-node

# Delete a cluster
kind delete cluster --name multi-node

# Get kubeconfig
kind get kubeconfig --name multi-node

Advantages

  • Free, no VM required
  • Multi-node setup easy
  • Fast to create and destroy
  • Perfect for testing
  • Great for CI/CD pipelines

Limitations for CKA

  • Docker-based, not Linux VMs
  • Can’t practice the full kubeadm init process
  • Limited for networking practice
  • No etcd direct access

Best For

  • Quick multi-node testing
  • CI/CD pipelines
  • Local development
  • Testing Kubernetes manifests

Option 3: kubeadm Multi-Node - Most Realistic for CKA

This is the gold standard for CKA preparation. You create a real multi-node Kubernetes cluster using kubeadm, simulating production environments.

Prerequisites

  • 3+ Linux VMs or cloud instances (1 control plane, 2+ workers)
  • Ubuntu 20.04+ or compatible Linux
  • Minimum resources: 2 CPUs, 2GB RAM per node
  • Network connectivity between nodes

Step-by-Step Setup

Step 1: Prepare Each Node

# Run on all nodes
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg

# Install and configure the container runtime (containerd) first
sudo apt install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# kubeadm expects the systemd cgroup driver with containerd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd

# Add the Kubernetes APT key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet, kubeadm, kubectl and pin their versions
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet

Step 2: Configure Networking

# Load the bridge netfilter module and enable IP forwarding (all nodes)
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf

# Disable swap (kubelet refuses to start with swap enabled by default)
sudo swapoff -a
# Comment out swap in /etc/fstab
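
Rather than editing /etc/fstab by hand, the comment-out step can be scripted. A sketch using a sed pattern that comments any line containing a swap entry (verify the result afterwards, since fstab layouts vary):

```shell
# Keep a backup, then comment out every fstab line that mounts swap.
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Every swap line should now start with '#'.
grep swap /etc/fstab
```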

Step 3: Initialize Control Plane

# On control plane node only
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.29.0

# Set up kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify control plane
kubectl get nodes
kubectl get pods -n kube-system

Step 4: Install Network Plugin

# Install Flannel for networking
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Or use Calico (install the operator, then its custom resources)
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml

# Wait for nodes to be Ready
kubectl get nodes

Step 5: Join Worker Nodes

# On control plane, get the join command
kubeadm token create --print-join-command

# Output will be something like:
# kubeadm join 10.0.0.10:6443 --token abc123.xyz789 --discovery-token-ca-cert-hash sha256:...

# On each worker node, run the join command
sudo kubeadm join 10.0.0.10:6443 --token abc123.xyz789 --discovery-token-ca-cert-hash sha256:...

# Verify all nodes are joined
kubectl get nodes

Step 6: Verify Cluster Health

# Check nodes
kubectl get nodes

# Check system pods
kubectl get pods -n kube-system

# Check component status (deprecated API, but still informative)
kubectl get componentstatuses

# Create a test pod
kubectl run nginx --image=nginx
kubectl get pods

Advantages

  • Most realistic for CKA exam simulation
  • Practice full kubeadm process
  • Multi-node environment
  • Can practice cluster upgrade
  • Can backup/restore etcd
  • Closest to production setup
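
The etcd backup/restore advantage is worth exercising early, since it appears on the CKA. A minimal sketch, assuming a default kubeadm install where etcd runs as a static pod with certificates under /etc/kubernetes/pki/etcd (paths may differ on your cluster):

```shell
# Snapshot etcd on the control plane node (etcdctl v3 API).
sudo ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Inspect the snapshot.
sudo ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-backup.db --write-out=table

# Restore into a fresh data directory, then point the etcd static pod's
# --data-dir (and its hostPath volume) at it.
sudo ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
  --data-dir=/var/lib/etcd-restored
```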

Limitations

  • Requires multiple VMs (or cloud instances)
  • More complex to manage
  • Takes longer to set up
  • Requires Linux knowledge

Best For

  • Comprehensive CKA preparation
  • Production-like training
  • Advanced troubleshooting practice
  • Full cluster lifecycle management

Option 4: Vagrant + kubeadm - Repeatable Infrastructure

Vagrant automates VM creation, making kubeadm setup repeatable and shareable.

Vagrant Installation

# macOS
brew install --cask vagrant virtualbox

# Linux
sudo apt install vagrant virtualbox

# Windows
choco install vagrant virtualbox

Create Vagrantfile

# Save as Vagrantfile

Vagrant.configure("2") do |config|
  # Control plane node
  config.vm.define "control-plane" do |cp|
    cp.vm.box = "ubuntu/jammy64"
    cp.vm.hostname = "control-plane"
    cp.vm.network "private_network", ip: "192.168.56.10"
    cp.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
    cp.vm.provision "shell", path: "init-common.sh"
    cp.vm.provision "shell", path: "init-control-plane.sh"
  end

  # Worker nodes
  (1..2).each do |i|
    config.vm.define "worker-#{i}" do |w|
      w.vm.box = "ubuntu/jammy64"
      w.vm.hostname = "worker-#{i}"
      w.vm.network "private_network", ip: "192.168.56.#{20 + i}"
      w.vm.provider "virtualbox" do |v|
        v.memory = 2048
        v.cpus = 2
      end
      w.vm.provision "shell", path: "init-common.sh"
      w.vm.provision "shell", path: "init-worker.sh", args: ["192.168.56.10"]
    end
  end
end
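
The Vagrantfile above references provisioning scripts that aren't shown. A hypothetical init-common.sh might simply replay the node preparation from Option 3 (a sketch; adjust the Kubernetes version to match your target):

```shell
#!/usr/bin/env bash
# init-common.sh - sketch of a shared provisioning script; mirrors the
# kubeadm node prep from Option 3. Runs as root on every VM.
set -euo pipefail

apt-get update
apt-get install -y apt-transport-https ca-certificates curl gpg containerd

# Kubernetes apt repository
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key \
  | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" \
  > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# Kernel modules, sysctl, and swap prerequisites
modprobe br_netfilter
sysctl -w net.ipv4.ip_forward=1
swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```

init-control-plane.sh would then run kubeadm init, and init-worker.sh the kubeadm join command against the control plane IP passed as an argument.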

Quick Vagrant Commands

# Start all VMs
vagrant up

# SSH into a VM
vagrant ssh control-plane

# Reload a VM
vagrant reload control-plane

# Destroy all VMs
vagrant destroy -f

# Check status
vagrant status

Advantages

  • Repeatable and shareable setup
  • Automates VM creation
  • Easy to recreate from scratch
  • Good for team training

Best For

  • Teams training together
  • Repeatable lab environments
  • Infrastructure as code practice

Option 5: Cloud Provider Free Tiers

Cloud providers offer free or trial clusters for practice.

Google Kubernetes Engine (GKE)

# Create a free tier cluster
gcloud container clusters create cka-lab \
  --zone=us-central1-a \
  --num-nodes=3 \
  --machine-type=e2-medium \
  --enable-ip-alias

# Get credentials
gcloud container clusters get-credentials cka-lab --zone=us-central1-a

# Delete when done
gcloud container clusters delete cka-lab --zone=us-central1-a

Cost: the GKE free tier credit covers the management fee of one zonal cluster (~$0.10/hour otherwise); the worker node VMs are still billed.

Amazon EKS

  • Control plane costs ~$0.10/hour (not covered by the AWS free tier)
  • Worker nodes billed as standard EC2 instances
  • Good for AWS practice

Azure Kubernetes Service (AKS)

  • Free control plane
  • Pay only for nodes (~$0.10/hour per node)
  • Generous free tier

Advantages

  • Real managed Kubernetes
  • High availability
  • No VM management
  • Easy scaling

Disadvantages

  • Cost considerations
  • Can’t practice kubeadm
  • Limited etcd access
  • Less suitable for disaster recovery practice

Best For

  • Cloud-specific practice
  • Managed Kubernetes experience
  • Real-world scenarios

Recommended Hybrid Approach

Combine multiple options for comprehensive preparation:

Phase 1: Fundamentals (Week 1-2)

  • Use minikube or kind
  • Learn basic kubectl commands
  • Understand pod and deployment concepts
  • Cost: Free

Phase 2: Multi-Node Setup (Week 3-6)

  • Use kubeadm with 3+ VMs
  • Practice cluster creation
  • Learn networking and storage
  • Practice RBAC configuration
  • Cost: Free (if using local VMs)
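
The RBAC practice in Phase 2 can start as small as one role and one binding. A sketch using imperative kubectl commands (the namespace and the user "alice" are placeholders):

```shell
# Create a namespace to scope the experiment.
kubectl create namespace rbac-lab

# Role: may only read pods in rbac-lab.
kubectl create role pod-reader \
  --verb=get,list,watch --resource=pods -n rbac-lab

# Bind the role to a hypothetical user named "alice".
kubectl create rolebinding alice-pod-reader \
  --role=pod-reader --user=alice -n rbac-lab

# Verify with impersonation: reads allowed, writes denied.
kubectl auth can-i list pods -n rbac-lab --as=alice
kubectl auth can-i delete pods -n rbac-lab --as=alice
```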

Phase 3: Advanced Scenarios (Week 7-8)

  • Use same kubeadm cluster
  • Practice cluster upgrades
  • Backup and restore etcd
  • Implement NetworkPolicies
  • Troubleshoot network issues
  • Cost: Free
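
For the cluster-upgrade item in Phase 3, the control-plane flow on a kubeadm cluster looks roughly like this (version numbers are illustrative; trust the output of `kubeadm upgrade plan` over anything written here):

```shell
# Point the apt repo at the next minor version first
# (edit /etc/apt/sources.list.d/kubernetes.list accordingly), then:
sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm='1.30.0-*'
sudo apt-mark hold kubeadm

# See what upgrades are available and apply one.
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.30.0

# Then upgrade kubelet/kubectl and restart the kubelet
# (kubectl drain the node first on multi-node clusters).
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet='1.30.0-*' kubectl='1.30.0-*'
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload && sudo systemctl restart kubelet
```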

Phase 4: Final Practice (Week 9-10)

  • Use kind for quick scenario recreation
  • Use kubeadm for complex scenarios
  • Mock exams on Sailor.sh
  • Cost: Free + exam platform subscription

Lab Setup Checklist

Before Starting

  • Decide on lab option (or hybrid approach)
  • Ensure sufficient resources (CPU, RAM, disk)
  • Plan network configuration
  • Prepare monitoring/logging strategy

After Setting Up

  • Verify all nodes are Ready: kubectl get nodes
  • Check system pods: kubectl get pods -n kube-system
  • Test networking: kubectl run test --image=busybox
  • Document your setup
  • Create snapshots/backups of working state
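
The checklist above can be bundled into a quick smoke test to run after every rebuild (a sketch, assuming kubectl is already pointed at the new cluster):

```shell
#!/usr/bin/env bash
# smoke-test.sh - quick post-setup sanity check.
set -euo pipefail

# All nodes Ready? System pods healthy?
kubectl get nodes
kubectl get pods -n kube-system

# Throwaway pod: exercises scheduling, image pull, and cluster DNS.
kubectl run smoke --image=busybox --restart=Never -i --rm -- \
  nslookup kubernetes.default
```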

Regular Maintenance

  • Keep Kubernetes version updated (1.29+)
  • Update security patches
  • Test backup and restore procedures
  • Clean up unused resources
  • Monitor resource usage

Kubernetes Versions for CKA

The CKA exam covers recent Kubernetes versions:

  • Currently: 1.29 and 1.30
  • Use Ubuntu, CentOS, or Debian as control plane OS
  • containerd, cri-o, or Docker as container runtime

Resource Requirements

Minimum for Single-Node Lab

  • CPU: 2 cores
  • RAM: 4GB
  • Disk: 20GB

Minimum for a Multi-Node (kubeadm) Lab

  • Control Plane Node: 2 cores, 4GB RAM, 20GB disk
  • Worker Nodes (each): 2 cores, 4GB RAM, 20GB disk
  • Total for a 3-node cluster: 6 cores, 12GB RAM, 60GB disk

Lab Practice Scenarios

Once your lab is set up, practice these scenarios:

Week 1-2: Basics

  • Deploy applications
  • Create services and exposures
  • Scale deployments
  • View logs and events

Week 3-4: Intermediate

  • Create RBAC policies
  • Implement NetworkPolicies
  • Set up persistent storage
  • Configure resource limits
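
For the NetworkPolicy item, a good starting exercise is the default-deny pattern: block all ingress in a namespace, then allow traffic selectively. A sketch (the "dev" namespace is a placeholder; enforcement requires a CNI that supports NetworkPolicy, such as Calico):

```shell
# Default-deny ingress for every pod in the "dev" namespace.
kubectl create namespace dev
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```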

Week 5-6: Advanced

  • Perform cluster upgrades
  • Practice backup and restore
  • Troubleshoot networking
  • Debug pod failures
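
For the pod-debugging item, these are the standard kubectl commands worth drilling until they are muscle memory (pod and node names are placeholders):

```shell
# Why is this pod not Running? Events are usually the fastest answer.
kubectl describe pod <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp

# Container-level diagnosis.
kubectl logs <pod-name> --previous      # logs from the last crashed run
kubectl exec -it <pod-name> -- sh       # poke around inside (if it runs)

# Node-side view when pods stay Pending or the kubelet misbehaves.
kubectl describe node <node-name>
```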

Week 7-8: Expert

  • Multi-failure scenarios
  • Complex networking debugging
  • Resource constraint scenarios
  • Production-like troubleshooting

Comparing Your Setup for CKA Readiness

| Skill | minikube | kind | kubeadm | Cloud |
| --- | --- | --- | --- | --- |
| kubectl commands | Yes | Yes | Yes | Yes |
| Pod/Deployment creation | Yes | Yes | Yes | Yes |
| Networking | Partial | Good | Excellent | Good |
| RBAC | Yes | Yes | Yes | Yes |
| Storage | Limited | Good | Excellent | Limited |
| Cluster creation | Limited | Limited | Excellent | No |
| Cluster upgrade | No | No | Excellent | Limited |
| etcd backup/restore | No | No | Yes | No |
| Troubleshooting | Limited | Limited | Excellent | Good |

FAQ

Q: What’s the minimum setup for CKA prep? A: Start with kind (free, multi-node), then move to kubeadm for realistic practice.

Q: Should I use cloud or local VMs? A: Local VMs for cost savings and full control; cloud for real-world experience.

Q: Can I prepare on just minikube? A: Not ideal. You’ll miss multi-node scenarios, kubeadm practice, and advanced troubleshooting.

Q: How long to set up a kubeadm cluster? A: 30-60 minutes for experienced users, 2-3 hours for beginners.

Q: Can I use different Linux distributions? A: Yes, Ubuntu, CentOS, Debian all work. Ubuntu is most common for CKA prep.

Q: How often should I rebuild my lab? A: At least 2-3 times during prep to practice the full setup process.

Ready to build your lab? Start with kind for quick multi-node practice, then scale to kubeadm for production-like environments. Use Sailor.sh practice exams within your lab to validate your setup and practice in realistic conditions.
