Security isn’t an afterthought in Kubernetes—it must be built into every layer of your architecture. From RBAC and network policies to image scanning and runtime monitoring, each security control serves a specific purpose in defending your cluster.
This guide covers the most important Kubernetes security best practices, with practical implementation examples and kubectl commands you can use immediately.
1. Implement Role-Based Access Control (RBAC) Correctly
RBAC is your first line of defense against unauthorized access. Implemented incorrectly, it provides a false sense of security.
The Principle of Least Privilege
Every service account, user, and deployment should have the minimum permissions necessary to function.
# ❌ WRONG: Admin access everywhere
kubectl create clusterrolebinding admin-all --clusterrole=cluster-admin --serviceaccount=default:default
# ✓ CORRECT: Minimal permissions
kubectl create role reader --verb=get,list --resource=pods
kubectl create rolebinding reader-binding --role=reader --serviceaccount=default:reader-sa
Implementing Least Privilege RBAC
# Example: Monitoring application that only reads pods
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-sa
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
# Can read pods but not modify or delete
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
# Can read pod logs
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
# Cannot access secrets, config, or anything else
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: monitoring-sa
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
Testing RBAC Permissions
# Verify what a service account can do
kubectl auth can-i get pods --as=system:serviceaccount:production:monitoring-sa
# Output: yes/no
# List all permissions for a service account
kubectl auth can-i --list --as=system:serviceaccount:production:monitoring-sa
# Test specific resource access
kubectl auth can-i get secrets --as=system:serviceaccount:production:monitoring-sa
# Output: no
# Review existing bindings across the cluster
kubectl get rolebindings --all-namespaces
kubectl get clusterrolebindings
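A quick way to surface risky bindings during that review is to filter for cluster-admin grants with jq (assumes jq is installed and you have cluster access):

```shell
# List every subject bound to cluster-admin (this list should be nearly empty)
kubectl get clusterrolebindings -o json \
  | jq -r '.items[]
      | select(.roleRef.name == "cluster-admin")
      | .metadata.name as $b
      | .subjects[]?
      | "\($b): \(.kind)/\(.name)"'
```

Any unexpected ServiceAccount in this output is a candidate for a scoped Role instead.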
ServiceAccount Best Practices
# ✓ CORRECT: Disable automatic token mounting
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: production
automountServiceAccountToken: false
---
# Mount a token explicitly only when the pod actually needs one
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  namespace: production
spec:
  serviceAccountName: my-app
  automountServiceAccountToken: false  # The projected volume below replaces the default mount
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600  # Short-lived token, auto-rotated by the kubelet
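Since Kubernetes 1.24, short-lived tokens can also be requested ad hoc through the TokenRequest API rather than projected into the pod:

```shell
# Issue a one-hour token for the my-app service account
kubectl create token my-app -n production --duration=1h
# The token expires automatically and is never persisted in a Secret object
```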
2. Enforce Network Policies for Zero-Trust Networking
Network policies implement microsegmentation—treat every pod as untrusted until explicitly allowed.
Deny-All Default Policy
Start by denying all traffic, then explicitly allow what’s needed:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # Applies to all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
  # No rules = nothing allowed (deny-all)
Allow Specific Traffic
# Frontend can accept traffic from users
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Ingress
  ingress:
  # Allow from outside the cluster (ingress controller)
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
---
# Frontend can connect to backend API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Egress
  egress:
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Allow to backend
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 8080
---
# Backend can only receive from frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 8080
---
# Backend can connect to the database only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Egress
  egress:
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Allow to database
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432
Testing Network Policies
# Deploy test pods to verify policies
# Deploy a test pod to verify policies
kubectl run frontend-test -it --rm --image=nicolaka/netshoot \
  -n production --labels="tier=frontend" -- sh
# From the frontend test pod, try to reach the backend
nc -zv backend 8080
# Should succeed with the policy in place, and time out from unlabeled pods
# Verify policies are applied
kubectl get networkpolicies -n production
kubectl describe networkpolicy backend-ingress -n production
3. Implement Pod Security Standards
Pod Security Standards (PSS) define security profiles at the namespace level.
Enforce Restricted Profile
apiVersion: v1
kind: Namespace
metadata:
  name: secure
  labels:
    # Enforce restricted - block non-compliant pods
    pod-security.kubernetes.io/enforce: restricted
    # Audit mode - log violations
    pod-security.kubernetes.io/audit: restricted
    # Warn users about violations
    pod-security.kubernetes.io/warn: restricted
    # Pin the policy version (optional; defaults to latest)
    pod-security.kubernetes.io/enforce-version: latest
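Before turning on enforcement for an existing namespace, a server-side dry run reports which running pods would violate the profile (the namespace name here is illustrative):

```shell
# Warns about every pod that would be rejected under "restricted"
kubectl label --dry-run=server --overwrite ns production \
  pod-security.kubernetes.io/enforce=restricted
```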
What “Restricted” Means
Pods must comply with these requirements:
# ❌ This pod will be REJECTED in a restricted namespace
apiVersion: v1
kind: Pod
metadata:
  name: non-compliant
spec:
  containers:
  - name: app
    image: ubuntu:latest
    # ❌ Runs as root
    # ❌ Can escalate privileges
    # ❌ Has writable root filesystem
    # ❌ Has all capabilities
---
# ✓ This pod COMPLIES with the restricted profile
apiVersion: v1
kind: Pod
metadata:
  name: compliant
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault  # Required by the restricted profile
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: var
      mountPath: /var
  volumes:
  - name: tmp
    emptyDir: {}
  - name: var
    emptyDir: {}
Exempting System Pods
True exemptions (for specific users, namespaces, or runtime classes) are configured in the API server's Pod Security admission plugin configuration, not via namespace labels. For system namespaces, the simpler approach is to apply a less restrictive profile:
# kube-system components often need privileges that "restricted" forbids
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    pod-security.kubernetes.io/enforce: baseline
4. Secure Container Images
Scan for Vulnerabilities
# Scan an image before deployment
trivy image nginx:latest
# Scan with severity filter
trivy image --severity HIGH,CRITICAL nginx:latest
# Output JSON for processing
trivy image --format json --output report.json nginx:1.23.0
# Scan manifests for misconfigurations
trivy config deployment.yaml
# Skip the vulnerability DB download (faster for repeated scans)
trivy image --skip-update alpine:latest | grep -i "CVE"
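In a CI pipeline, Trivy's --exit-code flag turns scan results into a build gate:

```shell
# Fail the pipeline if any CRITICAL vulnerability is present
trivy image --exit-code 1 --severity CRITICAL myapp:latest
# Exit code 0 = clean, 1 = findings at or above the given severity
```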
Block Vulnerable Images at Deployment
# ValidatingWebhook to block vulnerable images
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-vulnerability-check
webhooks:
- name: image-check.security.io
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      name: image-scanner
      namespace: security
      path: "/validate"
    caBundle: LS0tLS1CRUdJTi... # certificate
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  failurePolicy: Fail
  sideEffects: None
Use Private Registries
# Create secret for private registry
kubectl create secret docker-registry gcr-secret \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)"
---
# Use the secret in the pod spec
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  imagePullSecrets:
  - name: gcr-secret
  containers:
  - name: app
    image: gcr.io/myproject/myapp:v1.0.0
Sign and Verify Images
# Sign image with cosign
cosign sign --key cosign.key gcr.io/project/myapp:v1.0.0
# Verify image signature before running
cosign verify --key cosign.pub gcr.io/project/myapp:v1.0.0
# NOTE: a Gatekeeper label constraint cannot actually verify signatures.
# Use a policy engine with signature support instead - for example,
# Kyverno's verifyImages rule (sketch; adjust registry and key):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-image-signature
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-signature
    match:
      any:
      - resources:
          kinds: ["Pod"]
    verifyImages:
    - imageReferences:
      - "gcr.io/project/*"
      attestors:
      - entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              ...
              -----END PUBLIC KEY-----
5. Manage Secrets Securely
Encrypt Secrets at Rest
# Generate encryption key
head -c 32 /dev/urandom | base64
# Add to kube-apiserver manifest
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml
--encryption-provider-config-automatic-reload=true
Encryption Config:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-32-byte-key>
  - identity: {}  # Fallback for reading old data
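Enabling the provider only encrypts newly written data; existing Secrets stay plaintext in etcd until they are rewritten through the API server:

```shell
# Rewrite every existing Secret so it is stored encrypted
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
# Spot-check in etcd: the stored value should now begin with k8s:enc:aescbc:v1:
```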
Best Practices for Secret Usage
# ❌ WRONG: Secrets in environment variables
apiVersion: v1
kind: Pod
metadata:
  name: bad-secret-usage
spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    # ❌ Exposed in the process environment; can leak via logs,
    #    crash dumps, and child processes
---
# ✓ CORRECT: Secrets mounted as files
apiVersion: v1
kind: Pod
metadata:
  name: good-secret-usage
spec:
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: db-credentials
      defaultMode: 0400  # Owner read-only
      items:
      - key: password
        path: db_password
Audit Secret Access
# View audit logs for secret access
cat /var/log/kubernetes/audit/audit.log | jq 'select(.objectRef.resource=="secrets")' | head -20
# Track who accessed secrets
cat /var/log/kubernetes/audit/audit.log | jq 'select(.objectRef.resource=="secrets" and .verb=="get") | {user:.user.username, secret:.objectRef.name, time:.requestReceivedTimestamp}'
# Set up audit logging to catch secret exfiltration
# Configure audit policy with RequestResponse level for secrets
6. Enable Audit Logging for Compliance
Configure Audit Policy
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log all secret access with request/response bodies
- level: RequestResponse
  verbs: ["get", "list", "create", "update", "patch", "delete"]
  resources:
  - group: ""
    resources: ["secrets"]
  omitStages:
  - RequestReceived
# Log RBAC changes with bodies
- level: RequestResponse
  verbs: ["create", "delete", "update"]
  resources:
  - group: rbac.authorization.k8s.io
    resources: ["*"]
  omitStages:
  - RequestReceived
# Log pod exec commands (security investigation)
- level: RequestResponse
  verbs: ["create"]
  resources:
  - group: ""
    resources: ["pods/exec"]
  omitStages:
  - RequestReceived
# Default: log metadata only
- level: Metadata
  omitStages:
  - RequestReceived
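The policy file does nothing on its own; the API server must be started with audit flags pointing at it (paths follow kubeadm conventions):

```shell
# Flags to add to the kube-apiserver manifest
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit/audit.log
--audit-log-maxage=30      # days of logs to retain
--audit-log-maxbackup=10   # rotated log files to keep
--audit-log-maxsize=100    # megabytes before rotation
```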
Analyze Audit Logs
# Find all secret access
cat /var/log/kubernetes/audit/audit.log | jq 'select(.objectRef.resource=="secrets")'
# Track failed API calls (potential attacks)
cat /var/log/kubernetes/audit/audit.log | jq 'select(.status.code >= 400)'
# Find privilege escalation attempts
cat /var/log/kubernetes/audit/audit.log | jq 'select(.verb=="create" and .objectRef.resource=="clusterrolebindings")'
# Monitor default service account token usage
cat /var/log/kubernetes/audit/audit.log | jq 'select(.objectRef.name // "" | startswith("default-token"))'
7. Detect Runtime Threats with Falco
Deploy Falco
# Install Falco via Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
--namespace falco --create-namespace \
--set falco.grpc.enabled=true
Common Falco Rules for CKS
# Detect when a shell spawns in a container (suspicious!)
- rule: Suspicious Shell in Container
  desc: Detect shell execution in container
  condition: >
    spawned_process and
    container and
    (proc.name = bash or proc.name = sh or proc.name = zsh)
  output: >
    Shell spawned in container
    (user=%user.name shell=%proc.name container=%container.name)
  priority: WARNING

# Detect privilege escalation attempts
- rule: Privilege Escalation Attempt
  desc: Non-root user attempting privilege escalation
  condition: >
    spawned_process and
    container and
    (proc.name = sudo or proc.cmdline contains "sudo") and
    user.uid != 0
  output: >
    Privilege escalation detected
    (user=%user.name container=%container.name)
  priority: CRITICAL

# Detect package installation (container modification)
- rule: Container Package Installation
  desc: Package installation in running container
  condition: >
    spawned_process and
    container and
    (proc.name in (apt, apt-get, yum, dnf, apk, pip))
  output: >
    Package manager detected in container
    (user=%user.name package_manager=%proc.name container=%container.name)
  priority: WARNING

# Detect suspicious file writes to system directories
- rule: Write to System Directory
  desc: Writing to system directories in container
  condition: >
    open_write and container and
    fd.name glob "/etc/*"
  output: >
    System file modification detected
    (user=%user.name file=%fd.name container=%container.name)
  priority: CRITICAL
Monitor Falco Alerts
# View Falco alerts in real-time
kubectl logs -f -n falco -l app.kubernetes.io/name=falco
# Search for specific alerts
kubectl logs -n falco -l app.kubernetes.io/name=falco | grep "CRITICAL"
# Export alerts to external system (Splunk, ELK, etc.)
# Configure Falco output to send to your SIEM
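To confirm the rules actually fire, trigger one deliberately (the pod name here is illustrative):

```shell
# Spawning a shell in any running container should trip the shell-detection rule
kubectl exec -it test-pod -- /bin/sh -c "id"
# Then look for the corresponding alert
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=50 | grep "Shell spawned"
```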
8. Restrict System Calls with Seccomp
Create a Minimal Seccomp Profile
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "defaultErrnoRet": 1,
  "archMap": [
    { "architecture": "SCMP_ARCH_X86_64" }
  ],
  "syscalls": [
    {
      "names": [
        "accept4", "arch_prctl", "bind", "clone", "close",
        "connect", "dup", "dup2", "dup3", "epoll_create1",
        "epoll_ctl", "epoll_wait", "exit", "exit_group", "fcntl",
        "fstat", "futex", "getcwd", "getpeername", "getpid",
        "getrandom", "getsockname", "getsockopt", "listen",
        "lseek", "madvise", "mmap", "mprotect", "munmap",
        "openat", "poll", "prctl", "pread64", "pwrite64",
        "read", "recvfrom", "recvmsg", "rt_sigaction",
        "rt_sigprocmask", "rt_sigreturn", "sched_getaffinity",
        "sched_yield", "sendmsg", "sendto", "set_robust_list",
        "set_tid_address", "setitimer", "setsockopt", "sigaltstack",
        "socket", "stat", "statx", "write"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
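Localhost profiles are read from the node's filesystem, so the JSON file must be distributed to every node the pod can schedule onto:

```shell
# On each node: place the profile under the kubelet's seccomp root
sudo mkdir -p /var/lib/kubelet/seccomp
sudo cp minimal.json /var/lib/kubelet/seccomp/minimal.json
```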
Apply Seccomp to Pods
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-protected
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: minimal.json  # relative to /var/lib/kubelet/seccomp on the node
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
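You can verify the filter is active by reading the process status inside the container (assumes the image provides grep):

```shell
# "Seccomp: 2" means seccomp is running in filtering mode
kubectl exec seccomp-protected -- grep Seccomp /proc/1/status
```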
9. Use AppArmor for Additional Hardening
Create AppArmor Profile
#include <tunables/global>

profile restrict-web-server flags=(attach_disconnected) {
  #include <abstractions/base>
  #include <abstractions/nameservice>

  # Explicitly deny sensitive paths
  deny /root/** rwkl,
  deny /home/** rwkl,
  deny /proc/sys/** rw,

  # Allow web server config/content
  /etc/nginx/** r,
  /var/www/** r,
  /usr/share/nginx/html/** r,
  /var/log/nginx/** w,

  # Network access
  network inet stream,
  network inet dgram,

  # Capabilities
  capability setuid,
  capability setgid,
  capability net_bind_service,
  deny capability sys_admin,
  deny capability sys_module,
}
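Like seccomp profiles, AppArmor profiles live on the node: they must be loaded into the kernel on every node before a pod can reference them:

```shell
# Load (or reload) the profile into the kernel on each node
sudo apparmor_parser -r /etc/apparmor.d/restrict-web-server
# Confirm it is loaded
sudo aa-status | grep restrict-web-server
```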
Apply to Pod
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-protected
  annotations:
    container.apparmor.security.beta.kubernetes.io/app: localhost/restrict-web-server
spec:
  containers:
  - name: app
    image: nginx:latest
10. Complete Security Checklist
Use this checklist to audit your Kubernetes security:
# RBAC
- [ ] No cluster-admin service accounts except control plane
- [ ] All service accounts have minimal permissions
- [ ] Can-i testing verifies least privilege
- [ ] Audit logging enabled for RBAC changes
# Network Security
- [ ] Network policies deployed in all namespaces
- [ ] Default deny-all policies enforced
- [ ] Ingress/egress rules explicitly allow only needed traffic
- [ ] Policies tested with kubectl debug pods
# Pod Security
- [ ] Pod Security Standards enforced in all namespaces
- [ ] No privileged containers (unless absolutely necessary)
- [ ] All containers run as non-root
- [ ] Read-only root filesystems where possible
- [ ] Capabilities dropped except for explicitly needed ones
# Image Security
- [ ] All images scanned for vulnerabilities
- [ ] Vulnerable images blocked at deployment time
- [ ] Private registries used for sensitive images
- [ ] Image signing/verification enabled
# Secrets
- [ ] Encryption at rest enabled
- [ ] Secrets mounted as volumes, not environment variables
- [ ] Secret access audited
- [ ] Rotation policy implemented
# Monitoring
- [ ] Audit logging comprehensive and monitored
- [ ] Falco deployed and monitoring runtime behavior
- [ ] Alerts configured for suspicious activities
- [ ] Log retention meets compliance requirements
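For the rotation item, a simple pattern is to overwrite the Secret with a dry-run manifest piped into apply (the names and key here are illustrative):

```shell
# Generate a new credential and replace the secret in place
kubectl create secret generic app-secrets \
  --from-literal=password="$(openssl rand -base64 24)" \
  --dry-run=client -o yaml | kubectl apply -f -
# Mounted secret volumes refresh automatically after a short delay;
# restart the workload if it only reads the secret at startup
kubectl rollout restart deployment/web-app -n production
```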
Real-World Implementation Example
Here’s a complete, production-ready setup:
---
# Secure namespace
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
---
# Service account with minimal permissions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-app
  namespace: production
automountServiceAccountToken: true
---
# Role with exactly the needed permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: web-app-role
  namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["app-config"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-secrets"]
  verbs: ["get"]
---
# Bind role to service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: web-app-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: web-app
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: web-app-role
---
# Network policy - deny all ingress by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Network policy - allow frontend traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
    ports:
    - protocol: TCP
      port: 8080
---
# Secure deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      serviceAccountName: web-app
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 2000
        seccompProfile:
          type: RuntimeDefault  # Required by the restricted profile
      containers:
      - name: app
        image: myregistry.azurecr.io/web-app:v1.2.3  # Private registry
        imagePullPolicy: Always
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        ports:
        - containerPort: 8080
          name: http
        volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true
        - name: config
          mountPath: /etc/config
          readOnly: true
        - name: tmp
          mountPath: /tmp
        - name: var
          mountPath: /var/tmp
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
      volumes:
      - name: secrets
        secret:
          secretName: app-secrets
          defaultMode: 0400
      - name: config
        configMap:
          name: app-config
          defaultMode: 0444
      - name: tmp
        emptyDir:
          sizeLimit: 100Mi
      - name: var
        emptyDir:
          sizeLimit: 100Mi
      imagePullSecrets:
      - name: registry-credentials
Getting Exam-Ready with These Practices
All of these security practices are tested extensively on the CKS exam. Master them with hands-on practice at Sailor.sh.
Start practicing security implementations with realistic exam scenarios and immediate feedback.
FAQ
What’s the most important Kubernetes security practice?
RBAC is foundational—everything flows from proper access control. If RBAC is weak, all other security measures are compromised.
Can I implement all these practices at once?
No. Start with RBAC and network policies (foundations), then add PSS, image scanning, secrets management, and finally monitoring. Staged implementation allows time for team education.
Which security practices are hardest to implement?
Image signing/verification and Falco rule writing require the most expertise. Practice these heavily for CKS.
Are these practices cloud-provider specific?
These are Kubernetes-native practices that work across all cloud providers. Cloud-specific security (IAM, network security groups) supplements but doesn’t replace these.
What if my organization can’t implement all practices?
Start with: RBAC (least privilege) → Network Policies (deny-all) → Pod Security Standards (restricted profile) → Image Scanning. These four cover 80% of the threat landscape.
How do I maintain these security practices over time?
Automate what you can (policy enforcement), audit what you can’t (RBAC access), monitor everything (Falco). Regular security reviews catch drift.