A complete guide for challenge creators: from idea to Pull Request using the kubeasy dev mode commands (v2.5.3+).
Paul Brissaud
5 min read
#beginner
Got an idea for a Kubeasy challenge? A production incident you've been through, an RBAC config that always trips people up, a Job that fails silently? Here's how to turn it into a playable challenge in minutes — no account needed, no API, nothing to deploy online.
The dev mode is a dedicated subsystem of the Kubeasy CLI built for challenge creators. It lets you scaffold, test, and iterate locally against your Kind cluster, using the same tools you'd use in real production. No login, no OCI registry, no backend required.
Available since kubeasy-cli v2.5.3, the dev mode turns challenge contribution into a smooth workflow: create → lint → apply → validate → PR
Prerequisites
Before starting, you need:
kubeasy CLI v2.5.3 or later installed
Docker running (required by Kind)
The local cluster initialized at least once: kubeasy setup
The challenges repo forked on GitHub:
```bash
git clone https://github.com/<your-username>/challenges.git
cd challenges
git checkout -b challenge/memory-pressure
```
kubeasy setup creates a Kind (Kubernetes IN Docker) cluster named kubeasy and installs the required infrastructure — Kyverno (policy engine) and a local volume provisioner.
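Before going further, you can confirm the cluster is ready with kind and kubectl directly (the `kyverno` namespace is the engine's default install location; your setup may place it elsewhere):

```bash
kind get clusters            # should include: kubeasy
kubectl get pods -n kyverno  # Kyverno pods should be Running
```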
Scaffolding — kubeasy dev create
Start by generating the challenge structure. dev create works in two modes: interactive (TTY prompts) or non-interactive (flags).
In interactive mode, the CLI walks you through name, type, theme, difficulty, and estimated time. The slug is auto-generated from the name. In non-interactive mode, you pass the same answers as flags.
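A sketch of a non-interactive invocation (the flag names here are assumptions inferred from the interactive prompts; check `kubeasy dev create --help` for the exact spelling):

```bash
# Flag names are illustrative -- verify with `kubeasy dev create --help`
kubeasy dev create "Memory Pressure" \
  --type fix \
  --theme resources-scaling \
  --difficulty easy \
  --estimated-time 20
```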
Either way, the heart of the scaffold is challenge.yaml. It holds the metadata, the description, and the validations (called objectives).
```yaml
title: "Memory Pressure"
type: "fix"
theme: "resources-scaling"
difficulty: "easy"
estimatedTime: 20
description: |
  A data processing application deployed in production
  has been restarting in a loop since this morning. The team
  hasn't touched the code.
initialSituation: |
  A pod is deployed in the namespace. It starts, runs for a
  few seconds, then gets killed. It enters CrashLoopBackOff
  and keeps restarting.
objective: |
  Make the pod stable. Understand why Kubernetes is killing it.
objectives:
  - key: pod-running
    title: "Application Running"
    description: "The pod must be in Ready state"
    order: 1
    type: condition
    spec:
      target:
        kind: Pod
        labelSelector:
          app: memory-pressure
      checks:
        - type: Ready
          status: "True"
  - key: no-crashes
    title: "Stable Operation"
    description: "No crash or eviction events"
    order: 2
    type: event
    spec:
      target:
        kind: Pod
        labelSelector:
          app: memory-pressure
      forbiddenReasons:
        - "OOMKilled"
        - "Evicted"
      sinceSeconds: 300
```
- `description` — Describes the symptoms, never the cause. The user must investigate.
- `initialSituation` — What the user sees when they arrive: cluster state, deployed resources. No hints about the problem.
- `objective` — The goal to achieve, not the method.
- `objectives` — Validations that verify the solution. They must test the outcome, not the implementation.
⚠️ Hard rule: never reveal the cause in objective titles or descriptions.
| ❌ Bad | ✅ Good |
| --- | --- |
| "Fix the memory limit to 256Mi" | "Stable Operation" |
| "The ConfigMap has invalid JSON" | "The app crashes shortly after startup" |
| "Create a Role with get/list on pods" | "CI Runner has required access" |
The behavior also varies by type. For fix, the initial state is broken — manifests have an intentional bug, the user diagnoses and fixes. For build, the environment is empty or minimal — the user creates missing resources. For migrate, the initial state is working (v1 config) and the user must evolve it to v2 without breaking anything.
Writing the manifests
The files in manifests/ define the initial cluster state when a user starts the challenge. For a fix challenge, the manifest should reflect a realistic "going wrong" state:
- One problem at a time.
- A realistic state, like an actual prod incident.
- An internal comment marking the bug (removed before the PR).
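For the memory-pressure example, the broken state could be sketched like this; the 64Mi limit is an assumed value picked to guarantee the OOMKill, and the labels match the objectives' labelSelector:

```yaml
# manifests/deployment.yaml (illustrative sketch, not the canonical challenge)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-pressure
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-pressure
  template:
    metadata:
      labels:
        app: memory-pressure
    spec:
      containers:
        - name: app
          image: memory-pressure:latest
          imagePullPolicy: Never
          resources:
            limits:
              memory: "64Mi"  # BUG: far below what the app allocates (note removed before PR)
```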
Custom Docker Images
Some challenges require a broken application that doesn't exist as a public image — a process that eats memory, an API that returns wrong data, a server with a misconfigured TLS cert. For these cases, you can ship a custom Docker image directly in your challenge.
Just add an image/ directory with a Dockerfile at the root of your challenge. When kubeasy dev apply detects it, it automatically runs docker build, exports the image as a tar archive, and loads it directly into every node of the Kind cluster — no registry, no docker push.
The image tag is always <slug>:latest. Reference it in your manifest with imagePullPolicy: Never — without it, Kubernetes will try to pull from a registry and fail with ImagePullBackOff.
```yaml
containers:
  - name: app
    image: memory-pressure:latest
    imagePullPolicy: Never
```
Example — a memory hog that reliably OOMKills with limits set too low:
```dockerfile
# image/Dockerfile
FROM python:3.11-slim
COPY app.py /app.py
CMD ["python", "/app.py"]
```
```python
# image/app.py
import time

data = []
while True:
    data.append(" " * 10_000_000)  # ~10MB per iteration
    time.sleep(0.1)
```
The image is rebuilt and reloaded on every kubeasy dev apply, including in watch mode.
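A quick back-of-envelope shows why this hog is reliable: it allocates roughly 10 MB every 0.1 s, so any realistic limit is crossed within seconds. The 256Mi limit below is an assumed value for illustration, not taken from the challenge:

```python
# How long until the hog above crosses a hypothetical 256Mi memory limit.
LIMIT_MIB = 256           # assumed container memory limit
CHUNK_BYTES = 10_000_000  # bytes appended per iteration in app.py
INTERVAL_S = 0.1          # sleep between iterations

chunk_mib = CHUNK_BYTES / (1024 * 1024)  # ~9.5 MiB per iteration
iterations = LIMIT_MIB / chunk_mib       # ~27 iterations to cross the limit
seconds = iterations * INTERVAL_S
print(f"limit crossed after ~{seconds:.1f}s")  # ~2.7s
```

The pod therefore dies fast enough for the event objective to catch the OOMKilled reason well within its `sinceSeconds: 300` window.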
Validate the structure — kubeasy dev lint
Before deploying anything, validate the challenge.yaml with the built-in linter. No cluster required.
```bash
kubeasy dev lint memory-pressure
```
The linter checks required fields, valid values (type, theme, difficulty), and objective structure. Always run it before deploying — it's fast and avoids unnecessary round-trips with the cluster.
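What the linter does can be approximated in a few lines. This is a simplified Python sketch of dev-lint-style checks, not kubeasy's actual implementation; the allowed type values come from this guide, and the difficulty set is an assumption:

```python
# Illustrative dev-lint-style checks (not kubeasy's real code).
REQUIRED = {"title", "type", "theme", "difficulty", "estimatedTime", "objectives"}
ALLOWED_TYPES = {"fix", "build", "migrate"}        # types named in this guide
ALLOWED_DIFFICULTIES = {"easy", "medium", "hard"}  # assumed value set

def lint(challenge: dict) -> list[str]:
    # Missing required fields first, then value and structure checks.
    errors = [f"missing required field: {f}" for f in sorted(REQUIRED - challenge.keys())]
    if challenge.get("type") not in ALLOWED_TYPES:
        errors.append(f"invalid type: {challenge.get('type')!r}")
    if challenge.get("difficulty") not in ALLOWED_DIFFICULTIES:
        errors.append(f"invalid difficulty: {challenge.get('difficulty')!r}")
    for i, obj in enumerate(challenge.get("objectives", [])):
        if "key" not in obj or "type" not in obj:
            errors.append(f"objectives[{i}]: 'key' and 'type' are required")
    return errors

for e in lint({"title": "Memory Pressure", "type": "fix", "difficulty": "easy"}):
    print(e)
```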
Deploy and test
```bash
kubeasy dev apply memory-pressure --clean
```
--clean deletes existing resources before redeploying — useful when iterating with modified manifests. Use dev status and dev logs to confirm the challenge is in the expected broken state:
```bash
kubeasy dev status memory-pressure   # shows pods + recent events
kubeasy dev logs memory-pressure --follow
```
Then run the validations:
```bash
kubeasy dev validate memory-pressure
```
For rapid iteration, open two terminals — terminal 1 auto-redeploys on file changes, terminal 2 re-validates every 5 seconds:
```bash
kubeasy dev apply memory-pressure --watch      # terminal 1
kubeasy dev validate memory-pressure --watch   # terminal 2
```
Or do everything in one command:
```bash
kubeasy dev test memory-pressure --clean --watch
```
Dev commands quick reference
| Command | Role | Cluster needed |
| --- | --- | :---: |
| `dev create` | Scaffold a new challenge | No |
| `dev get` | Display local challenge metadata | No |
| `dev lint` | Validate YAML structure | No |
| `dev apply` | Deploy local manifests | Yes |
| `dev validate` | Run validations | Yes |
| `dev test` | Apply + validate in one step | Yes |
| `dev status` | Show pods + events | Yes |
| `dev logs` | Stream pod logs | Yes |
| `dev clean` | Remove challenge resources | Yes |
Challenge design best practices
Kubeasy challenges are built on 4 principles:
- Realism over pedagogy — The challenge should feel like a real production incident, not a classroom exercise.
- Preserve the mystery — The description shows symptoms. Never the cause. The user investigates with kubectl, logs, and events.
- Autonomy first — The user solves with standard Kubernetes tools. No artificial constraints on the approach.
- Failure is learning — The environment is safe. Break things, start over. Validations give feedback without revealing the solution.
There are 5 validation types available:
| Type | What it checks | Typical use in a challenge |
| --- | --- | --- |
| `condition` | Kubernetes status conditions on any resource | Pod is Ready, Deployment is Available, Job is Complete |
| `event` | Kubernetes events emitted for a resource | No OOMKilled or Evicted events in the last few minutes |
Open a Pull Request on github.com/kubeasy-dev/challenges. After merge, the CI/CD pipeline takes over: the challenge is built as an OCI artifact, published to ghcr.io/kubeasy-dev/challenges/memory-pressure:latest, and becomes available via kubeasy challenge start memory-pressure for all users.
Your challenge could be the next one played by thousands of developers learning Kubernetes.
Wrapping up
The full workflow fits in a single line: create → lint → apply → validate → PR. From a blank directory to a challenge playable by anyone in the world, everything happens locally, with tools you already know.
Dev mode was built so that contributing to Kubeasy feels as natural as writing code — no friction, no external dependencies, no waiting on a pipeline to find out your YAML is broken. Just you, your cluster, and an idea worth sharing.
If you want to discuss a challenge idea before diving in, the Kubeasy Slack is the right place. And if you're ready to go, the challenges repo is open — PRs welcome.
Written by
Paul Brissaud
Paul Brissaud is a DevOps / Platform Engineer and the creator of Kubeasy. He believes Kubernetes education is often too theoretical and that real understanding comes from hands-on, failure-driven learning.