Testing Challenges
Comprehensive guide to testing challenges locally before submission.
Tip: The `kubeasy dev` commands simplify the challenge development loop — scaffold, deploy, validate, and iterate without needing API login or the OCI registry. See the Dev Commands Reference for the full guide.
Thorough testing ensures your challenge works correctly and provides a good learning experience. This guide covers strategies for testing challenges at every stage.
Testing workflow
Follow this workflow when testing a challenge:
- Setup -- Create a clean test environment
- Deploy -- Start the challenge
- Verify problem -- Confirm the issue is reproducible
- Apply fix -- Solve the challenge
- Submit -- Verify all objectives pass
- Clean up -- Reset for next test
Setting up a test environment
Using Kubeasy CLI (recommended)
The Kubeasy CLI sets up everything automatically:
```bash
# Setup creates the Kind cluster and installs all components
kubeasy setup
```
This command will:
- Create a Kind cluster named `kubeasy`
- Install Kyverno for policy enforcement
- Install local-path-provisioner for PersistentVolume storage
- Configure kubectl context
Manual setup
If you prefer to set up components manually:
Create the cluster:
```bash
kind create cluster --name kubeasy-test
```
Install dependencies
1. Install Kyverno:
```bash
kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.17.0/install.yaml

# Wait for Kyverno to be ready
kubectl wait --for=condition=ready pod \
  -l app.kubernetes.io/name=kyverno \
  -n kyverno \
  --timeout=300s
```
2. Install local-path-provisioner:
```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.34/deploy/local-path-storage.yaml

# Wait for it to be ready
kubectl wait --for=condition=ready pod \
  -l app=local-path-provisioner \
  -n local-path-storage \
  --timeout=120s
```
Testing the broken state
1. Start the challenge
```bash
kubeasy challenge start <challenge-slug>
```
Or, if testing locally before the challenge is published, apply manifests and policies manually:
```bash
# Create namespace
kubectl create namespace <challenge-slug>

# Apply manifests
kubectl apply -f manifests/ -n <challenge-slug>

# Apply Kyverno policies
kubectl apply -f policies/
```
2. Verify resources are created
```bash
# Check resources in the namespace
kubectl get all -n <challenge-slug>
```
3. Confirm the problem exists
Depending on the challenge type:
For pod failures:
```bash
kubectl get pods -n <challenge-slug>
kubectl describe pod <pod-name> -n <challenge-slug>
```
Expected: Pod should show errors in events or status.
For connectivity issues:
```bash
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -n <challenge-slug> \
  -- curl http://service-name:port
```
Expected: Connection should fail or return an error.
For log-based issues:
```bash
kubectl logs <pod-name> -n <challenge-slug>
```
Expected: Logs should show error messages.
Testing Kyverno policies
Apply Kyverno policies and verify they work:
```bash
kubectl apply -f policies/
```
Check policy status:
```bash
kubectl get clusterpolicy
kubectl describe clusterpolicy <policy-name>
```
Test enforcement -- try a change the policy should block:
```bash
# This should be rejected if your policy protects the image
kubectl set image deployment/my-app app=nginx:latest -n <challenge-slug>
```
Expected output:
```
Error from server: admission webhook "validate.kyverno.svc" denied the request:
resource Deployment/my-app was blocked due to the following policies
protect-challenge-image:
  preserve-image: 'Cannot change the application image'
```
Testing the solution
1. Apply the fix
Manually apply the fix that solves the challenge:
```bash
# Example: Increase memory limits
kubectl patch deployment data-processor -n <challenge-slug> \
  --type='json' -p='[
    {"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "128Mi"},
    {"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/memory", "value": "64Mi"}
  ]'
```
2. Verify resources are healthy
```bash
kubectl get pods -n <challenge-slug>
kubectl describe pod <pod-name> -n <challenge-slug>
```
Expected: Pods should be Running and Ready.
3. Submit and verify objectives pass
```bash
kubeasy challenge submit <challenge-slug>
```
Review the output. Each objective shows whether it passed or failed, with a message explaining why.
Example output:
```
Condition Validation
  pod-ready-check: PASSED - All condition checks passed
Event Validation
  no-crash-events: PASSED - No forbidden events found
Status Validation
  low-restarts: PASSED - All status checks passed

All validations passed!
```
Testing individual validation types
Testing condition checks
Check resource conditions directly:
```bash
kubectl get pod <pod-name> -n <challenge-slug> -o jsonpath='{.status.conditions}'
```
Verify the condition you're checking exists and has the expected status.
Testing status checks
Check specific status fields:
```bash
# Check restart count
kubectl get pod <pod-name> -n <challenge-slug> \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'

# Check ready replicas
kubectl get deployment <name> -n <challenge-slug> \
  -o jsonpath='{.status.readyReplicas}'
```
Testing log checks
Verify logs contain expected content:
```bash
kubectl logs <pod-name> -n <challenge-slug>
```
Ensure the strings from your `expectedStrings` list appear in the output.
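A small loop can check every expected string at once before you submit. This is a sketch: the `logs` variable is canned output standing in for real `kubectl logs`, and the strings in `expected` are hypothetical values from an objective's `expectedStrings` list:

```bash
#!/bin/bash
# Canned log output standing in for `kubectl logs <pod-name> -n <challenge-slug>`
logs=$'Server started\nListening on :8080\nReady to accept connections'

# Hypothetical expectedStrings values from the objective being tested
expected=("Server started" "Ready to accept connections")

# Flag any expected string that is absent from the logs
missing=0
for s in "${expected[@]}"; do
  grep -qF -- "$s" <<<"$logs" || { echo "MISSING: $s"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all expected strings present"
```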
Testing event checks
Check for forbidden events:
```bash
kubectl get events -n <challenge-slug> --sort-by='.lastTimestamp'
```
After applying the fix, forbidden reasons (like OOMKilled) should stop appearing.
Testing connectivity checks
Test HTTP connectivity from within the cluster:
```bash
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -n <challenge-slug> \
  -- curl -v http://service-name:port/path
```
Automated testing script
Create a test script for consistent testing:
```bash
#!/bin/bash
# test-challenge.sh
set -e

CHALLENGE_SLUG="memory-pressure"

echo "Cleaning up any previous test..."
kubectl delete namespace $CHALLENGE_SLUG --ignore-not-found=true
kubectl delete clusterpolicy --selector challenge=$CHALLENGE_SLUG --ignore-not-found=true

echo "Creating namespace..."
kubectl create namespace $CHALLENGE_SLUG

echo "Applying broken manifests..."
kubectl apply -f manifests/ -n $CHALLENGE_SLUG

echo "Applying Kyverno policies..."
kubectl apply -f policies/

echo "Waiting for resources..."
sleep 10

echo "Checking broken state..."
kubectl get pods -n $CHALLENGE_SLUG

echo "Applying fix..."
kubectl patch deployment data-processor -n $CHALLENGE_SLUG \
  --type='json' -p='[
    {"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "128Mi"},
    {"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/memory", "value": "64Mi"}
  ]'

echo "Waiting for pod to stabilize..."
sleep 30

echo "Checking fixed state..."
kubectl get pods -n $CHALLENGE_SLUG

echo "Verifying pod is Ready..."
POD_READY=$(kubectl get pods -n $CHALLENGE_SLUG -l app=data-processor \
  -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}')
echo "Pod Ready: $POD_READY"

if [ "$POD_READY" == "True" ]; then
  echo "Test passed!"
  exit 0
else
  echo "Test failed - pod not ready"
  kubectl describe pods -n $CHALLENGE_SLUG -l app=data-processor
  exit 1
fi
```
Make it executable:
```bash
chmod +x test-challenge.sh
./test-challenge.sh
```
Testing edge cases
Multiple resources
If your target matches multiple resources, ensure all pass validation:
```bash
# Deploy multiple pods
kubectl scale deployment app --replicas=3 -n <challenge-slug>

# Wait for them to be ready
kubectl wait --for=condition=ready pod -l app=myapp -n <challenge-slug>

# Submit to verify all pods pass
kubeasy challenge submit <challenge-slug>
```
Timing issues
Some validations use `sinceSeconds` to check recent events or logs. Make sure:
- `sinceSeconds` is long enough for the check to capture relevant data
- The default is 300 seconds (5 minutes) for both `event` and `log` types
- For `connectivity` checks, the default `timeoutSeconds` is 5
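As an illustration, a log objective with an explicit window might look like the sketch below. This is a hypothetical fragment: the field names follow the terms used in this guide (`type`, `sinceSeconds`, `expectedStrings`), but check the Validation Reference for the exact schema before relying on it:

```yaml
# Hypothetical objective sketch -- verify field names against the Validation Reference
- name: app-logs-healthy
  type: log
  sinceSeconds: 300        # default; widen this if the fix takes longer to surface
  expectedStrings:
    - "Ready to accept connections"
```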
Multiple valid solutions
Test that different valid fixes all pass your objectives:
```bash
# Solution 1: Increase memory limit
kubectl patch deployment data-processor -n <challenge-slug> \
  --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "128Mi"}]'

# Verify it passes
kubeasy challenge submit <challenge-slug>

# Reset and try solution 2: increase the memory limit to a different value
kubeasy challenge reset <challenge-slug>
kubeasy challenge start <challenge-slug>
kubectl patch deployment data-processor -n <challenge-slug> \
  --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "256Mi"}]'

# Verify this also passes
kubeasy challenge submit <challenge-slug>
```
Debugging validation issues
Objectives fail unexpectedly
1. Check the submission output -- it shows exactly which objective failed and why.
2. Verify the resource state manually:
```bash
# For condition checks
kubectl get pod <pod-name> -n <challenge-slug> -o jsonpath='{.status.conditions}' | jq .

# For status checks
kubectl get pod <pod-name> -n <challenge-slug> -o jsonpath='{.status}' | jq .

# For log checks
kubectl logs <pod-name> -n <challenge-slug>

# For event checks
kubectl get events -n <challenge-slug> --field-selector reason=OOMKilled
```
3. Ensure labels match -- validation targets use `labelSelector` to find resources:
```bash
kubectl get pods -n <challenge-slug> --show-labels
```
Challenge works locally but fails in CI
- Ensure all images are publicly accessible
- Verify the challenge doesn't depend on local-only resources
- Check that timing assumptions are reasonable (pods may start slower in CI)
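One way to make timing assumptions explicit is to replace fixed `sleep`s with bounded polling that gives up after a deadline. A minimal sketch follows; the `check` function is a stub that succeeds on its third call, standing in for a real kubectl or kubeasy probe:

```bash
#!/bin/bash
# Poll a command until it succeeds or a deadline passes.
retry_until() {
  local timeout=$1 interval=$2; shift 2
  local deadline=$((SECONDS + timeout))
  until "$@"; do
    if (( SECONDS >= deadline )); then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# Stub probe: succeeds on the third call (simulates a pod becoming Ready).
ATTEMPTS=0
check() {
  ATTEMPTS=$((ATTEMPTS + 1))
  (( ATTEMPTS >= 3 ))
}

retry_until 10 1 check && echo "ready after $ATTEMPTS attempts"
```

In a real test script, `check` would wrap something like a `kubectl get` probe, and where a condition is the target, `kubectl wait --for=condition=ready --timeout=...` does the same job in one command.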
Testing checklist
Before submitting, ensure:
- Challenge deploys with a reproducible broken state
- Problem is observable via standard kubectl commands
- Kyverno policies prevent obvious bypasses
- Applying the fix makes all objectives pass
- Multiple valid solutions are accepted (where applicable)
- Objective titles don't reveal the solution
- Challenge works on a fresh Kind cluster
- Estimated time is reasonable
- Clean up removes all resources
Clean up
After testing:
```bash
# Reset challenge (removes resources AND server progress)
kubeasy challenge reset <challenge-slug>

# Or just remove local resources (keeps server progress)
kubeasy challenge clean <challenge-slug>
```
Next Steps
- Review Contributing Guidelines for submission requirements
- See Validation Reference for complete type specifications
- Check existing challenges for testing examples