Practice questions are essential for CKA exam preparation. While hands-on lab experience is critical, testing your knowledge through realistic questions helps identify weak areas and builds confidence. This post provides 15 carefully crafted practice questions spanning all CKA domains with detailed explanations.
These questions range from beginner to advanced difficulty and cover the types of scenarios you’ll encounter on the real exam.
How to Use These Practice Questions
Recommended Approach:
- Attempt each question without looking at the answer
- Time yourself—allocate 5-10 minutes per question
- Write out your kubectl commands before checking the answer
- Review the explanation even if you answered correctly
- Note which domains need additional study
Difficulty Ratings:
- Beginner (B): 3-5 minutes, foundational concepts
- Intermediate (I): 6-8 minutes, intermediate complexity
- Advanced (A): 10-15 minutes, requires deep understanding
Practice Questions
Question 1: Pod Creation with Resource Limits (B)
Scenario:
Create a pod named memory-pod in the default namespace using the image nginx:latest. The pod should have:
- Memory request: 128Mi
- Memory limit: 256Mi
- CPU request: 100m
- CPU limit: 500m
Your Task: Write the kubectl command or YAML manifest to create this pod.
Answer
Solution 1: Using kubectl run (Imperative)
k run memory-pod --image=nginx:latest \
  --dry-run=client -o yaml > memory-pod.yaml
# Note: the old --requests/--limits flags were removed from kubectl run,
# so generate the YAML skeleton and add the resources block by hand.

Solution 2: Using YAML (Declarative)
apiVersion: v1
kind: Pod
metadata:
  name: memory-pod
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:latest
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "500m"

Explanation: Resource requests define the minimum guaranteed resources; limits define the maximum a container can consume. This pod will be scheduled only on nodes with at least 128Mi memory and 100m CPU available, and it cannot exceed its limits.
Key Points:
- Resource units: memory in Mi/Gi, CPU in m (millicores)
- 1000m = 1 CPU
- Set both requests and limits as a best practice
- Limits lower than requests are invalid
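Rather than repeating requests and limits in every pod spec, a namespace can supply defaults via a LimitRange. A minimal sketch (the object name and values here are illustrative, not part of the exam scenario):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits     # illustrative name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container omits requests
      memory: 128Mi
      cpu: 100m
    default:               # applied when a container omits limits
      memory: 256Mi
      cpu: 500m
```

Pods created in this namespace without a resources block pick up these values automatically.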
Question 2: RBAC Configuration (I)
Scenario:
Create a new ServiceAccount named dev-user in the development namespace. Then create a Role that allows the dev-user ServiceAccount to:
- Get, List, and Watch Pods
- Get and List Deployments
- Get and List Services
Finally, bind this Role to the ServiceAccount.
Your Task: Write the kubectl commands or YAML manifests.
Answer
Solution 1: ServiceAccount (Imperative)
k create serviceaccount dev-user -n development

Solution 2: Role (YAML)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-role
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]

Solution 3: RoleBinding (YAML)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-binding
  namespace: development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-role
subjects:
- kind: ServiceAccount
  name: dev-user
  namespace: development

Verification:
# Verify the user can get pods
k auth can-i get pods --as=system:serviceaccount:development:dev-user -n development
# Output: yes
# Verify the user cannot delete pods
k auth can-i delete pods --as=system:serviceaccount:development:dev-user -n development
# Output: no

Explanation: RBAC uses three components: ServiceAccount (identity), Role (permissions), and RoleBinding (the connection between them). This grants specific permissions to a service account in a specific namespace.
Key Points:
- apiGroups: [""] = core API group; "apps" = the apps API group
- Rules combine apiGroups, resources, and verbs
- Always verify permissions with kubectl auth can-i
Question 3: Network Policy (I)
Scenario:
You have two applications: a frontend running in the frontend namespace and a backend running in the backend namespace. Create a NetworkPolicy that:
- Denies all ingress traffic to the backend namespace by default
- Allows traffic from pods labeled app: frontend in the frontend namespace
Your Task: Write the NetworkPolicy manifests.
Answer
Solution:
# Step 1: Deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Step 2: Allow specific ingress from frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    # namespaceSelector and podSelector in the SAME list element = AND:
    # only pods labeled app=frontend in the namespace labeled name=frontend
    - namespaceSelector:
        matchLabels:
          name: frontend
      podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

Important Prerequisites:
# Label the frontend namespace
k label namespace frontend name=frontend

Testing:
# Try to connect from one backend pod to another backend pod
# Should fail: only frontend traffic is allowed in
# Try to connect from a frontend pod (labeled app=frontend) to a backend pod
# Should succeed (allowed by the second policy)

Explanation: By default, all traffic is allowed. Once any NetworkPolicy selects a pod, only traffic a policy explicitly allows can reach it. This pattern is called "default deny, allow by exception."
Key Points:
- An empty podSelector: {} applies the policy to all pods in the namespace
- namespaceSelector targets pods in specific namespaces
- Policies are additive (all matching policies apply)
- Default: if no NetworkPolicy selects a pod, traffic to it is allowed
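A subtlety worth memorizing: inside an ingress from list, whether namespaceSelector and podSelector appear as one element or two changes the meaning entirely. A sketch (the labels are illustrative):

```yaml
ingress:
- from:
  # ONE element (no dash before podSelector) = AND:
  # pods labeled app=frontend AND located in a namespace labeled name=frontend
  - namespaceSelector:
      matchLabels:
        name: frontend
    podSelector:
      matchLabels:
        app: frontend
- from:
  # TWO elements (a dash before each selector) = OR:
  # any pod in the frontend namespace, OR any local pod labeled app=frontend
  - namespaceSelector:
      matchLabels:
        name: frontend
  - podSelector:
      matchLabels:
        app: frontend
```

A stray dash is one of the most common NetworkPolicy mistakes, because both forms are valid YAML and both apply cleanly.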
Question 4: Deployment Rolling Update (I)
Scenario:
You have a Deployment named web-app with 5 replicas running image myapp:v1. Update the image to myapp:v2 and monitor the rollout. If the new version has issues, rollback to v1.
Your Task: Write kubectl commands to:
- Update the image
- Check rollout status
- View rollout history
- Perform a rollback if needed
Answer
Step 1: Update the image
k set image deployment/web-app myapp=myapp:v2
# --record is deprecated; document the change-cause with an annotation instead:
k annotate deployment/web-app kubernetes.io/change-cause="image updated to myapp:v2"
# Or patch the image directly:
k patch deployment web-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"myapp:v2"}]}}}}'

Step 2: Monitor rollout status
# Watch the rollout in real-time
k rollout status deployment/web-app --watch
# Check pod status
k get pods -o wide

Step 3: View rollout history
k rollout history deployment/web-app
# Output:
# REVISION CHANGE-CAUSE
# 1 <none>
# 2 kubectl set image deployment/web-app myapp=myapp:v2 --record

Step 4: Rollback if needed
# Rollback to previous revision
k rollout undo deployment/web-app
# Or rollback to specific revision
k rollout undo deployment/web-app --to-revision=1
# Verify the rollback
k get deployment web-app
k get pods

Verification:
# Check the current image
k get deployment web-app -o yaml | grep image

Explanation:
Rolling updates replace pods gradually to ensure zero downtime. The change-cause shown in rollout history comes from the kubernetes.io/change-cause annotation (the --record flag that used to set it is deprecated).
Key Points:
- maxSurge: extra pods allowed above the desired count during an update
- maxUnavailable: pods allowed below the desired count during an update
- Rollback creates a new revision; it doesn’t revert to the old pods
- Always test in dev environment first
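maxSurge and maxUnavailable live under the Deployment's update strategy; a sketch for the web-app Deployment (the values are illustrative, not required by the scenario):

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 6 pods may exist mid-update
      maxUnavailable: 1  # at least 4 pods stay available mid-update
```

Tightening these values slows the rollout but keeps more capacity online while it runs.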
Question 5: Storage Configuration (B)
Scenario: Create a PersistentVolume (PV) with:
- Name: pv-local
- Capacity: 10Gi
- Access mode: ReadWriteOnce
- Storage class: standard
- Host path: /data/pv
Then create a PersistentVolumeClaim (PVC) that binds to this PV.
Your Task: Write the YAML manifests for both.
Answer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /data/pv

PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-local
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi

Verification:
k apply -f pv-local.yaml
k apply -f pvc-local.yaml
# Check binding status
k get pv,pvc
# The PVC should show as Bound to pv-local

Pod Using the PVC:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-local

Explanation: PVs are cluster-level resources; PVCs are namespace-scoped requests. The kubelet mounts the claimed storage into containers.
Key Points:
- PV capacity ≥ PVC request
- accessModes must match
- storageClassName must match (or be unspecified)
- A PVC may bind to a PV larger than it requested; the extra capacity is not usable by other claims
Question 6: Service Exposure (I)
Scenario:
You have a Deployment named api-server with 3 replicas in the production namespace. The application listens on port 8080. Create:
- A ClusterIP Service for internal communication
- A NodePort Service for external access
- Expose the Service via Ingress with hostname api.example.com
Your Task: Write the kubectl commands and YAML manifests.
Answer
ClusterIP Service (Internal):
apiVersion: v1
kind: Service
metadata:
  name: api-server-clusterip
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: api-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

NodePort Service (External):
apiVersion: v1
kind: Service
metadata:
  name: api-server-nodeport
  namespace: production
spec:
  type: NodePort
  selector:
    app: api-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30080 # Optional: specify a port or let K8s assign one (30000-32767)

Ingress Resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-server-clusterip
            port:
              number: 80

Verification:
# Check services
k get svc -n production
# Test internal connectivity
k exec -it deploy/api-server -n production -- curl http://api-server-clusterip
# Check ingress
k get ingress -n production

Explanation: Each Service type serves a different purpose: ClusterIP for internal traffic, NodePort for external access via node ports, LoadBalancer for cloud load balancers, and Ingress for HTTP routing by host and path.
Key Points:
- port: service port, targetPort: container port
- NodePort range: 30000-32767
- Ingress requires an ingress controller (nginx, haproxy, etc.)
- Service endpoints route traffic to pods with matching selectors
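One related trick: targetPort may reference a named container port instead of a number, so the Service keeps working even if the container port changes. A sketch (the port name http is illustrative):

```yaml
# In the pod template:
ports:
- name: http
  containerPort: 8080
---
# In the Service:
ports:
- port: 80
  targetPort: http # resolves to the containerPort named "http"
```

If the application later moves to port 9090, only the pod template changes; the Service definition stays untouched.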
Question 7: Pod Affinity Rules (A)
Scenario: Deploy an application where:
- Frontend pods should not run on the same node as backend pods
- Frontend pods should preferably run together for resource efficiency
- Backend pods must run only on nodes with more than 4 CPU cores (at least 2 such nodes exist and are labeled accordingly)
Your Task: Write Deployment manifests with appropriate affinity rules.
Answer
Frontend Deployment (Pod Affinity):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        # Prefer to run together
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - frontend
              topologyKey: kubernetes.io/hostname
        # Must not run with backend
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - backend
            topologyKey: kubernetes.io/hostname
      containers:
      - name: frontend
        image: myapp-frontend:latest

Backend Deployment (Node Affinity):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cpus
                operator: In
                values:
                - large # nodes with 4+ CPUs carry the label cpus=large
      containers:
      - name: backend
        image: myapp-backend:latest

Prerequisites:
# Label nodes by CPU capacity (the affinity key above must match this label key)
k label nodes node-1 cpus=large
k label nodes node-2 cpus=large

Explanation: Affinity rules control pod placement. Pod affinity co-locates pods; pod anti-affinity separates them. Node affinity targets specific nodes.
Key Points:
- required…: the rule must be satisfied (hard constraint)
- preferred…: the rule is attempted but not guaranteed (soft constraint)
- topologyKey: the node-grouping level (hostname, zone, region)
- Rules apply at scheduling time; changes don’t move existing pods
Question 8: ConfigMap and Secrets (I)
Scenario: Create a ConfigMap containing application configuration:
DATABASE_HOST=postgres.default.svc.cluster.local
DATABASE_PORT=5432
LOG_LEVEL=info
Create a Secret containing:
DATABASE_USER=admin
DATABASE_PASSWORD=secretpassword123
Mount both in a pod using environment variables and volume.
Your Task: Create the ConfigMap, Secret, and Pod manifests.
Answer
ConfigMap:
k create configmap app-config --from-literal=DATABASE_HOST=postgres.default.svc.cluster.local --from-literal=DATABASE_PORT=5432 --from-literal=LOG_LEVEL=info

ConfigMap YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres.default.svc.cluster.local"
  DATABASE_PORT: "5432"
  LOG_LEVEL: "info"

Secret:
k create secret generic db-credentials --from-literal=DATABASE_USER=admin --from-literal=DATABASE_PASSWORD=secretpassword123

Secret YAML:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  DATABASE_USER: YWRtaW4= # base64 encoded 'admin'
  DATABASE_PASSWORD: c2VjcmV0cGFzc3dvcmQxMjM= # base64 encoded

Pod with Environment Variables:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: myapp:latest
    envFrom:
    # Load all ConfigMap keys as env vars
    - configMapRef:
        name: app-config
    # Load all Secret keys as env vars
    - secretRef:
        name: db-credentials
    # Or selectively:
    env:
    - name: DATABASE_USER
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: DATABASE_USER

Pod with Volumes:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod-volumes
spec:
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: config
      mountPath: /etc/config
    - name: secrets
      mountPath: /etc/secrets
  volumes:
  - name: config
    configMap:
      name: app-config
  - name: secrets
    secret:
      secretName: db-credentials

Verification:
k apply -f configmap.yaml
k apply -f secret.yaml
k apply -f pod.yaml
# Check env vars in pod
k exec -it app-pod -- env | grep DATABASE
# Check mounted files
k exec -it app-pod-volumes -- ls /etc/config
k exec -it app-pod-volumes -- cat /etc/config/DATABASE_HOST

Explanation: ConfigMaps store non-sensitive configuration as plain text; Secrets store sensitive data and are base64-encoded (an encoding, not encryption). Both can be exposed as environment variables or as files via volumes.
Key Points:
- ConfigMaps: max 1MB, non-sensitive
- Secrets: max 1MB, sensitive data
- Secrets are base64-encoded (not encrypted by default)
- Volume mount creates files named after keys
- envFrom loads all keys as env vars with the same names
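Because Secrets are only base64-encoded, anyone who can read them can decode them. You can round-trip the values from the Secret above in any shell:

```shell
# Encode the Secret values by hand (what kubectl create secret does for you);
# -n matters: a trailing newline would change the encoding
echo -n 'admin' | base64               # YWRtaW4=
echo -n 'secretpassword123' | base64   # c2VjcmV0cGFzc3dvcmQxMjM=
# Decoding needs no key, so this is encoding, not encryption
echo 'YWRtaW4=' | base64 -d            # admin
```

This is why protecting Secrets relies on RBAC and encryption at rest, not on the encoding itself.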
Question 9: Cluster Upgrade (A)
Scenario: Your Kubernetes cluster is running version 1.28.0. You need to upgrade it to 1.29.0 using kubeadm. Walk through the steps for upgrading both the control plane and worker nodes while maintaining cluster availability.
Your Task: Write the kubectl and kubeadm commands for a complete upgrade.
Answer
Step 1: Prepare Control Plane Node
# Check upgrade path
kubeadm upgrade plan
# Drain the control plane node
kubectl drain control-plane-1 --ignore-daemonsets
# Upgrade the kubeadm package on the control plane
apt-mark unhold kubeadm
apt-get update
apt-get install -y kubeadm=1.29.0-00
apt-mark hold kubeadm
# Plan upgrade again to verify
kubeadm upgrade plan

Step 2: Upgrade Control Plane Components
# Apply the upgrade
sudo kubeadm upgrade apply v1.29.0
# Upgrade kubelet and kubectl
apt-mark unhold kubelet kubectl
apt-get install -y kubelet=1.29.0-00 kubectl=1.29.0-00
apt-mark hold kubelet kubectl
# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
# Uncordon the node
kubectl uncordon control-plane-1
# Verify control plane
kubectl get nodes
kubectl get pods -n kube-system

Step 3: Upgrade Worker Nodes (One at a Time)
# For each worker node:
WORKER_NODE=worker-1
# 1. Drain the node (run from a machine with cluster access)
kubectl drain $WORKER_NODE --ignore-daemonsets --delete-emptydir-data
# 2. SSH to the node and upgrade kubeadm
# ssh user@worker-1
apt-mark unhold kubeadm kubelet kubectl
apt-get update
apt-get install -y kubeadm=1.29.0-00
# 3. Upgrade the node's kubelet configuration, then kubelet and kubectl
sudo kubeadm upgrade node
apt-get install -y kubelet=1.29.0-00 kubectl=1.29.0-00
apt-mark hold kubeadm kubelet kubectl
# 4. Reload and restart kubelet
systemctl daemon-reload
systemctl restart kubelet
# 5. Back on control plane, uncordon the node
kubectl uncordon $WORKER_NODE
# 6. Wait for node to be Ready
kubectl wait --for=condition=Ready node/$WORKER_NODE
# 7. Repeat for the next worker node

Verification:
# Check client and server versions
kubectl version
# Check all nodes are upgraded
kubectl get nodes -o wide
# The VERSION column should show v1.29.0 on every node

Important Notes:
- Always drain before upgrading (to move pods to other nodes)
- Drain with --ignore-daemonsets (DaemonSet pods are tied to their node)
- Upgrade one node at a time to maintain availability
- Leave at least one control plane node available during the upgrade
- If using multiple control plane nodes, upgrade them in sequence
Explanation: Cluster upgrades require careful sequencing to avoid downtime. The control plane handles requests; worker nodes run applications. Upgrade control plane first, then workers sequentially.
Key Points:
- kubectl drain: gracefully evicts pods from a node
- kubeadm upgrade apply: updates the control plane components
- Kubelet restart: required for changes to take effect
- Always have backup before major upgrades
Question 10: etcd Backup and Restore (A)
Scenario:
Your cluster’s etcd database contains critical data. Create a backup of etcd, then demonstrate how to restore it. Your etcd is running as a static pod in /etc/kubernetes/manifests/etcd.yaml with:
- Data directory: /var/lib/etcd
- Listening on: https://localhost:2379
Your Task: Write commands to backup and restore etcd.
Answer
Prerequisites: Identify etcd Details
# Get etcd pod details
kubectl get pod etcd-control-plane-1 -n kube-system -o yaml
# Check etcd certificate and key locations
# Usually: /etc/kubernetes/pki/etcd/
ls /etc/kubernetes/pki/etcd/

Step 1: Create etcd Backup
# SSH to control plane node with etcd
# Export etcd endpoint
export ETCDCTL_API=3
export ETCD_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCD_KEY=/etc/kubernetes/pki/etcd/server.key
export ETCD_CA=/etc/kubernetes/pki/etcd/ca.crt
# Create backup (from the host, not inside the pod);
# ETCDCTL_API=3 is passed inline because sudo does not inherit exported vars
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=$ETCD_CA \
  --cert=$ETCD_CERT \
  --key=$ETCD_KEY \
  snapshot save /backup/etcd-backup-$(date +%Y%m%d-%H%M%S).db
# Verify backup
ls -lah /backup/

Step 2: Verify Backup Integrity
sudo etcdctl --write-out=table snapshot status /backup/etcd-backup.db

Step 3: Restore from Backup (Disaster Recovery)
# Stop the API server and etcd. They run as static pods, so deleting them
# with kubectl only makes kubelet recreate them — move the manifests instead:
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp/
# Give kubelet time to stop the containers
sleep 10
# Restore the backup into a new data directory
sudo etcdctl snapshot restore /backup/etcd-backup.db \
  --data-dir=/var/lib/etcd.restored
# Keep the original data directory as a fallback
sudo mv /var/lib/etcd /var/lib/etcd.backup
# Use the restored directory
sudo mv /var/lib/etcd.restored /var/lib/etcd
# kubeadm's etcd runs as root; verify the restored directory is owned by root
sudo chmod -R 700 /var/lib/etcd
# Move the manifests back; kubelet restarts etcd and the API server
sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
# Wait for components to restart
sleep 30
kubectl get pods -n kube-system

Step 4: Verify Cluster Health
# Check all nodes are ready
kubectl get nodes
# Check all control plane pods are running
kubectl get pods -n kube-system
# Verify data integrity
kubectl get all -A

One-Liner Backup Script:
#!/bin/bash
BACKUP_DIR=/backup/etcd
mkdir -p $BACKUP_DIR
ETCD_BACKUP=$BACKUP_DIR/etcd-backup-$(date +%Y%m%d-%H%M%S).db
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save $ETCD_BACKUP
echo "Backup saved to: $ETCD_BACKUP"

Important Considerations:
- etcd backups are cluster-wide; backup only once per cluster
- Test backups periodically in non-production environments
- Store backups in secure, off-site locations
- Backup size typically 5-500MB depending on cluster size
- Restoration requires downtime
Explanation: etcd is the cluster’s database. Backups protect against data loss; restores recover from corruption or disasters. Only control plane nodes run etcd.
Key Points:
- ETCDCTL_API=3: specifies etcd API version
- snapshot save: creates backup file
- snapshot restore: prepares restored data
- Always stop etcd before restoring (or create new cluster with restored data)
Question 11: Pod Troubleshooting (I)
Scenario:
A pod named broken-pod in the default namespace is stuck in CrashLoopBackOff state. The pod was created from the manifest below. Diagnose the issue and fix it.
apiVersion: v1
kind: Pod
metadata:
name: broken-pod
spec:
containers:
- name: app
image: nginx:latest
command: ["/bin/sh"]
args: ["-c", "echo 'Starting'; invalid_command; sleep 3600"]
Your Task: Diagnose the issue using kubectl commands and fix it.
Answer
Step 1: Check Pod Status
kubectl get pod broken-pod
# Output: broken-pod 0/1 CrashLoopBackOff

Step 2: Describe the Pod
kubectl describe pod broken-pod
# Look for:
# - Last State: Terminated, Exit Code: 127 (command not found)
# - Reason: Error
# - Message: back-off restarting failed container

Step 3: Check Logs
# Check current logs
kubectl logs broken-pod
# Output: Starting
# Check previous container logs
kubectl logs broken-pod --previous
# Output: /bin/sh: invalid_command: not found

Step 4: Identify the Issue
The command invalid_command doesn’t exist, causing container exit code 127 (command not found).
Step 5: Fix the Problem
apiVersion: v1
kind: Pod
metadata:
  name: broken-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    command: ["/bin/sh"]
    args: ["-c", "echo 'Starting'; nginx -g 'daemon off;'"]

Step 6: Apply the Fix
# Delete the broken pod
kubectl delete pod broken-pod
# Apply the corrected manifest
kubectl apply -f broken-pod-fixed.yaml
# Verify it's running
kubectl get pod broken-pod
# Output: broken-pod 1/1 Running

Alternative Diagnosis Using Logs Directly:
# All logs in one command
kubectl logs broken-pod --all-containers=true
# Stream logs in real-time
kubectl logs -f broken-pod
# Get logs from exited container
kubectl logs broken-pod --previous

Explanation: CrashLoopBackOff means the container keeps crashing and the kubelet backs off between restart attempts. Check logs to find the exit reason. Common causes:
- Invalid command
- Missing dependencies
- Configuration errors
- Memory/resource issues
Key Points:
- Always check kubectl logs --previous for crashed containers
- Exit codes: 0 = success, 1 = generic error, 127 = command not found
- kubectl describe pod shows the restart count and last state
- restartPolicy controls Pod restart behavior; backoffLimit applies to Jobs, not Pods
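The exit codes in the key points are plain shell semantics, so the failure from this scenario can be reproduced locally without a cluster:

```shell
# A nonexistent command exits with 127 -- the code kubectl describe
# reported for broken-pod
sh -c 'invalid_command' 2>/dev/null; echo "exit: $?"   # exit: 127
# A generic application failure exits with 1
sh -c 'exit 1'; echo "exit: $?"                        # exit: 1
# Success exits with 0
sh -c 'true'; echo "exit: $?"                          # exit: 0
```

The container's last exit code appears under Last State in kubectl describe pod, which is usually faster to read than scrolling logs.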
Question 12: Network Connectivity Troubleshooting (A)
Scenario: Two applications in different namespaces cannot communicate:
- frontend pod in the web namespace needs to call the backend pod in the api namespace
- Backend service: backend-svc.api.svc.cluster.local:8080
- Connectivity is failing with a timeout
Your Task: Diagnose and fix the connectivity issue.
Answer
Step 1: Verify Services and Endpoints
# Check if backend service exists and has endpoints
kubectl get svc -n api
kubectl get endpoints -n api
# Both should show the backend service with valid endpoints

Step 2: Check DNS Resolution
# Test DNS from a throwaway pod in the web namespace
kubectl run debug-pod -n web --rm -it --restart=Never --image=busybox -- nslookup backend-svc.api.svc.cluster.local
# Should return the service IP
# If it fails: DNS issue, check CoreDNS

Step 3: Test Network Connectivity
# Start an interactive debug pod with curl
kubectl run debug-pod -n web --rm -it --restart=Never --image=curlimages/curl -- sh
# From inside the pod:
curl -v backend-svc.api.svc.cluster.local:8080
# Or use nc for a port check
nc -zv backend-svc.api.svc.cluster.local 8080

Step 4: Check for NetworkPolicies
# Check if NetworkPolicies are restricting traffic
kubectl get networkpolicy -n api
# If policies exist, check their rules
kubectl describe networkpolicy <policy-name> -n api
# Common issue: policy denies cross-namespace traffic

Step 5: Verify Service Configuration
# Check if service selectors match pod labels
kubectl get svc backend-svc -n api -o yaml | grep -A5 selector
kubectl get pods -n api --show-labels
# Selectors should match pod labels

Step 6: Check if Port is Correct
# Verify backend pod listens on correct port
kubectl exec -it <backend-pod> -n api -- netstat -tlnp | grep 8080
# Port should be listening

Possible Fixes (in order of likelihood)
Fix 1: NetworkPolicy Blocking Cross-Namespace Traffic
# If a blocking policy exists, allow cross-namespace traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-namespace
  namespace: api
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: web
    ports:
    - protocol: TCP
      port: 8080

Fix 2: Service Port Mapping
# Verify service port configuration
kubectl get svc backend-svc -n api -o yaml
# Check: containerPort, service port, targetPort mapping

Fix 3: Backend Pod Not Running
# Ensure backend pod is healthy
kubectl get pods -n api -o wide
kubectl logs <backend-pod> -n api

Complete Debugging Workflow:
# Systematic diagnosis
1. Verify service exists: kubectl get svc -n api
2. Check endpoints: kubectl get endpoints -n api
3. Check pod labels: kubectl get pods -n api --show-labels
4. Test DNS: nslookup from frontend
5. Test port: nc -zv
6. Check policies: kubectl get networkpolicy -n api
7. Check logs: kubectl logs <backend-pod>

Explanation: Cross-namespace communication issues usually stem from NetworkPolicies, DNS, or service misconfiguration. Verify each layer systematically.
Key Points:
- Service FQDN: service-name.namespace.svc.cluster.local
- kubectl get endpoints shows the pods backing a service
- NetworkPolicy defaults to allow unless a policy selects the pod
- Pod labels must match service selectors
Question 13: Node Maintenance (I)
Scenario:
You need to perform maintenance on worker node node-2 (updating kernel, running system patches). All pods must be evicted gracefully with rescheduling to other nodes.
Your Task: Write the commands to safely drain the node and return it to service.
Answer
Step 1: Cordon the Node (Prevent New Pods)
kubectl cordon node-2
# Verify node is unschedulable
kubectl get nodes
# node-2 should show SchedulingDisabled

Step 2: Drain the Node (Evict Existing Pods)
# Standard drain (respects PodDisruptionBudgets; skips DaemonSet pods)
kubectl drain node-2 --ignore-daemonsets
# With additional options:
kubectl drain node-2 \
  --ignore-daemonsets \
  --delete-emptydir-data \
  --force \
  --timeout=5m
# Options explanation:
# --ignore-daemonsets: skip DaemonSet pods (tied to the node)
# --delete-emptydir-data: allow deleting pods that use emptyDir storage
# --force: also evict pods not managed by a controller (they are not recreated)
# --timeout: max wait time for graceful termination

Step 3: Monitor Eviction
# Watch pod migration in real-time
kubectl get pods --all-namespaces -o wide --watch
# Watch drain progress
kubectl get nodes -o wide | grep node-2

Step 4: Perform Maintenance
# SSH to the node
ssh user@node-2
# Run system updates (example):
sudo apt update
sudo apt upgrade -y
sudo reboot

Step 5: Return Node to Service
# After node reboots and rejoins cluster
kubectl get nodes
# Wait for node-2 to be Ready
# Uncordon the node
kubectl uncordon node-2
# Verify node is schedulable again
kubectl get nodes
# node-2 should show Ready, no SchedulingDisabled

Step 6: Verify Cluster Health
# Check all pods are running
kubectl get pods --all-namespaces | grep -v Running | head
# Check node resources
kubectl top nodes
kubectl describe node node-2

Troubleshooting: Drain Hangs
# If drain gets stuck, check for problematic pods
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
# Check pod disruption budgets
kubectl get pdb --all-namespaces
# Force delete if necessary (last resort)
kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0

Quick Reference:
# Drain with most common options
kubectl drain node-2 --ignore-daemonsets --delete-emptydir-data
# Uncordon after maintenance
kubectl uncordon node-2

Important Notes:
- Cordon + drain is safer than deleting the node
- DaemonSets remain on all nodes (can’t drain them)
- Static pods bound to a node don’t migrate
- PodDisruptionBudgets may prevent complete drainage
- Pods that tolerate the node's NoExecute taints may remain on the node
Explanation: Node maintenance follows a sequence: cordon (stop new scheduling), drain (migrate existing pods), maintain, then uncordon (resume scheduling).
Key Points:
- Drain is graceful eviction, not forced deletion
- Pods are recreated on other nodes automatically
- Node itself is not deleted or destroyed
- Cordon remains until explicitly uncordoned
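The PodDisruptionBudgets mentioned above are what make a drain pause. A minimal sketch (the app=web label and threshold are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2   # eviction requests during a drain are refused if they
  selector:         # would leave fewer than 2 matching pods running
    matchLabels:
      app: web
```

If a drain hangs, kubectl get pdb --all-namespaces (as shown in the troubleshooting step) reveals which budget is blocking eviction.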
Question 14: Storage Troubleshooting (I)
Scenario: A pod is pending because its PersistentVolumeClaim cannot be bound to a PersistentVolume. Diagnose why the binding is failing.
Your Task: Identify the issue and fix it.
Answer
Step 1: Check PVC Status
kubectl get pvc
# Look for status: Pending (not Bound)

Step 2: Describe the PVC
kubectl describe pvc <pvc-name>
# Look for events and error messages
# Common messages:
# - "no persistent volumes available"
# - "no storage class with name..."
# - "access modes not matching"
# - "size not matching"

Step 3: Check Available PVs
kubectl get pv
# Check if PVs exist and are Available
# PV phases: Available, Bound, Released, or Failed

Step 4: Compare PVC and PV Specifications
# Get PVC details
kubectl get pvc <pvc-name> -o yaml
# Get PV details
kubectl get pv <pv-name> -o yaml
# Check matching:
# - accessModes: must match exactly
# - storage capacity: PV >= PVC request
# - storageClassName: must match

Common Issues and Fixes
Issue 1: Access Mode Mismatch
# PVC wants ReadWriteMany
# But PV only supports ReadWriteOnce
# Fix: Create a new PV with matching access modes
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-new
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany # Match PVC
  storageClassName: standard
  nfs:
    server: nfs-server.example.com
    path: "/data"

Issue 2: StorageClass Mismatch
# Check requested storageClass
kubectl get pvc <pvc-name> -o yaml | grep storageClassName
# Create matching StorageClass
kubectl get storageclass
# If missing, create one
# Fix: Recreate PVC with correct storageClassName
kubectl delete pvc <pvc-name>
# Edit YAML to specify correct storageClassName
kubectl apply -f <pvc-yaml>

Issue 3: Insufficient Capacity
# PVC requests 10Gi but all PVs are smaller
# Fix: Create a larger PV
kubectl apply -f - << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-large
spec:
  capacity:
    storage: 10Gi # Match or exceed PVC request
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /data/large
EOF

Issue 4: No StorageClass and No Manual PV
# If using dynamic provisioning but no provisioner exists
# Create a StorageClass
kubectl apply -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/gce-pd # substitute your cluster's provisioner
parameters:
  type: pd-ssd
EOF

Comprehensive Check Script:
#!/bin/bash
echo "=== Checking PVC ==="
kubectl get pvc
echo "=== Checking PV ==="
kubectl get pv
echo "=== Describing problematic PVC ==="
kubectl describe pvc <pvc-name>
echo "=== Checking StorageClasses ==="
kubectl get storageclass
echo "=== Checking events ==="
kubectl get events --sort-by='.lastTimestamp'

Resolution Workflow:
1. kubectl get pvc (identify pending)
2. kubectl describe pvc <name> (read error)
3. kubectl get pv (check available)
4. kubectl get pv <name> -o yaml (verify specs)
5. kubectl describe pv <name> (check status)
6. Fix the mismatch (access mode, size, or class)
7. kubectl get pvc (verify now Bound)

Explanation: PVC-PV binding requires exact matches on access modes, sufficient capacity, and matching storage classes. Any mismatch prevents binding.
Key Points:
- accessModes must match exactly
- PV storage >= PVC request
- storageClassName must match
- PV must be in Available state to bind
- Dynamic provisioning auto-creates PVs if provisioner exists
Question 15: RBAC Verification (I)
Scenario:
Verify that a specific user or service account has required permissions. The user [email protected] should be able to:
- Get, list, and watch pods in the dev namespace
- Create and delete deployments in the dev namespace
- NOT access services or secrets
Your Task: Verify these permissions using kubectl auth commands.
Answer
Step 1: Check Individual Permissions
# Can get pods?
kubectl auth can-i get pods --as [email protected] -n dev
# Output: yes/no
# Can list pods?
kubectl auth can-i list pods --as [email protected] -n dev
# Can watch pods?
kubectl auth can-i watch pods --as [email protected] -n dev
# Can create deployments?
kubectl auth can-i create deployments --as [email protected] -n dev
# Can delete deployments?
kubectl auth can-i delete deployments --as [email protected] -n dev
# Can access services (should be no)?
kubectl auth can-i get services --as [email protected] -n dev
# Can access secrets (should be no)?
kubectl auth can-i get secrets --as [email protected] -n dev
Step 2: Check Wildcard Permissions
# Can do anything with pods?
kubectl auth can-i '*' pods --as [email protected] -n dev
# Can do anything in the namespace?
kubectl auth can-i '*' '*' --as [email protected] -n dev
Step 3: Identify the Role/Binding
# Find which RoleBindings apply to the user
kubectl get rolebinding -n dev -o wide
# Filter for the specific user
kubectl get rolebinding -n dev -o yaml | grep -A5 [email protected]
Step 4: Audit the Role
# Get the Role definition
kubectl get role -n dev
# Examine the specific role
kubectl describe role dev-role -n dev
# View YAML
kubectl get role dev-role -n dev -o yaml
Step 5: Create Correct RBAC if Not Exists
---
# Role with correct permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-role
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "delete", "get", "list"]
---
# RoleBinding to attach to user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer-role
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
Step 6: Verify After Creating RBAC
# Rerun verification commands
kubectl auth can-i get pods --as [email protected] -n dev
# Should now output: yes
# All checks should now return the expected results
Comprehensive Verification Script:
#!/bin/bash
USER="[email protected]"
NAMESPACE="dev"
echo "=== Checking Permissions for $USER in $NAMESPACE ==="
echo ""
echo "Should be YES:"
kubectl auth can-i get pods --as=$USER -n $NAMESPACE
kubectl auth can-i list pods --as=$USER -n $NAMESPACE
kubectl auth can-i watch pods --as=$USER -n $NAMESPACE
kubectl auth can-i create deployments --as=$USER -n $NAMESPACE
kubectl auth can-i delete deployments --as=$USER -n $NAMESPACE
echo ""
echo "Should be NO:"
kubectl auth can-i get services --as=$USER -n $NAMESPACE
kubectl auth can-i get secrets --as=$USER -n $NAMESPACE
Output Examples:
=== Checking Permissions for [email protected] in dev ===
Should be YES:
yes
yes
yes
yes
yes
Should be NO:
no
no
Troubleshooting Unexpected Results
Issue: All queries return ‘no’
# Check if RoleBinding exists
kubectl get rolebinding -n dev
# Check Role exists
kubectl get role -n dev
# Verify subject matches exactly
kubectl describe rolebinding developer-binding -n dev
Issue: User has more permissions than expected
# User might be bound to admin or other roles
kubectl get rolebinding -n dev -o yaml | grep -A10 [email protected]
kubectl get clusterrolebinding -o yaml | grep -A10 [email protected]
Explanation:
RBAC verification ensures users have exactly the permissions they need—no more, no less. Use kubectl auth can-i to test every permission systematically.
Key Points:
- kubectl auth can-i is the primary verification tool
- Always specify -n <namespace> for namespace-scoped roles
- User must exist in the authentication system first
- Multiple bindings stack (OR logic)
- Absence of allow = deny
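The scenario also mentions service accounts: the same can-i checks work for them by impersonating the account's full username, which always has the form system:serviceaccount:<namespace>:<name>. A sketch, using a hypothetical app-sa service account in the dev namespace:

```shell
# Impersonate a ServiceAccount instead of a User.
# Its RBAC username is system:serviceaccount:<namespace>:<serviceaccount-name>
kubectl auth can-i get pods --as=system:serviceaccount:dev:app-sa -n dev
kubectl auth can-i create deployments --as=system:serviceaccount:dev:app-sa -n dev

# Or enumerate everything the account is allowed to do in the namespace
kubectl auth can-i --list --as=system:serviceaccount:dev:app-sa -n dev
```

Binding a Role to a service account uses the same RoleBinding pattern as Step 5, with subjects kind ServiceAccount instead of User.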
Summary and Next Steps
These 15 practice questions cover all major CKA domains with a mix of difficulty levels. Use them to:
- Assess your current knowledge - Try questions without looking at answers first
- Identify weak areas - Note which domains were challenging
- Practice problem-solving - Write out your solution before checking the answer
- Build muscle memory - Repeatedly practice common patterns
After working through these questions, you should:
- Understand hands-on pod and deployment creation
- Be comfortable with RBAC configuration and verification
- Know how to implement network policies and troubleshoot connectivity
- Understand storage configuration and troubleshooting
- Be familiar with cluster maintenance procedures
- Feel confident diagnosing and fixing cluster issues
Next: Full Mock Exams
These sample questions provide valuable practice, but full-length mock exams are essential for exam preparation. Take Sailor.sh’s realistic CKA mock exams to:
- Simulate real exam conditions (2-hour time limit)
- Practice across all domains under pressure
- Identify remaining weak areas
- Build confidence before exam day
Start with free practice questions on Sailor.sh, then progress to full mock exams when ready.