Kubernetes ImagePullBackOff: Causes and Fixes
Why your pods can't pull images and how to fix it. Covers registry authentication, image names, network issues, and private registries.
Your pod is stuck. kubectl get pods shows ImagePullBackOff or ErrImagePull, and the container never starts. This means Kubernetes tried to pull the container image and failed. The good news: the error messages are usually very specific about what went wrong.
Quick Answer
Kubernetes can't download the container image. To find out why:
```shell
kubectl describe pod <pod-name>
```

Scroll to the Events section. You'll see something like:
```text
Warning  Failed  kubelet  Failed to pull image "myapp:v2":
rpc error: code = NotFound desc = failed to pull and unpack image:
manifest for docker.io/library/myapp:v2 not found
```

The message tells you the exact problem: wrong image name, missing tag, authentication failure, or network issue.
ImagePullBackOff vs ErrImagePull
These two statuses are related but different:
| Status | What's happening |
|---|---|
| ErrImagePull | The pull just failed. Kubernetes will retry immediately |
| ImagePullBackOff | Multiple pulls failed. Kubernetes is waiting longer between retries (exponential backoff) |
ErrImagePull always comes first. After a few failed attempts, the status transitions to ImagePullBackOff. The backoff delay increases: 10s, 20s, 40s, up to 5 minutes. The pod will keep retrying indefinitely, but the issue won't resolve itself unless you fix the root cause.
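You can watch this transition yourself; a quick sketch (requires a running cluster, and `<pod-name>` is a placeholder):

```shell
# Watch the STATUS column flip between ErrImagePull and ImagePullBackOff
kubectl get pods -w

# Or query the current waiting reason for the first container directly
kubectl get pod <pod-name> \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
```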
Common Causes
| Cause | Error Message Pattern |
|---|---|
| Image doesn't exist | manifest unknown or not found |
| Wrong tag | manifest for image:tag not found |
| Typo in image name | repository does not exist |
| Private registry, no auth | unauthorized or 401 |
| Wrong imagePullSecret | unauthorized or no basic auth credentials |
| Network/firewall issue | timeout or dial tcp: lookup ... no such host |
| Registry rate limit | toomanyrequests or 429 |
Step-by-Step Troubleshooting
Step 1: Read the Error Message
The most important step. Always start here:
```shell
kubectl describe pod <pod-name>
```

Look at the Events section. The Failed to pull image message tells you exactly what happened. Read the full message: the specific error code and description point to the cause.
Step 2: Verify the Image Exists
Test whether the image is actually pullable from your local machine or a node:
```shell
# For Docker Hub images
docker pull myapp:v2

# For private registries
docker pull registry.example.com/myapp:v2

# Check available tags on Docker Hub
curl -s "https://hub.docker.com/v2/repositories/library/nginx/tags/?page_size=10" | jq '.results[].name'
```

If `docker pull` fails on your machine too, the image genuinely doesn't exist.
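For a private registry, Docker Hub's tag endpoint doesn't apply; most registries implement the Docker Registry HTTP API v2, whose tag-list endpoint you can query directly (hostname, repository name, and credentials below are placeholders):

```shell
# List tags in a private registry via the Registry v2 API
curl -s -u user:password \
  https://registry.example.com/v2/myapp/tags/list | jq '.tags'
```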
Step 3: Check the Image Reference
Verify exactly what the pod is trying to pull:
```shell
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].image}'
```

Common issues:

- Missing registry prefix: `myapp:v1` pulls from Docker Hub (`docker.io/library/myapp:v1`), not your private registry
- Wrong tag: `latest` might not exist if you only pushed versioned tags
- Architecture mismatch: the image might exist but not for your node's architecture (amd64 vs arm64)
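The first issue trips people up most often. As a sketch of the expansion rules a runtime applies to short image names (a hypothetical helper, ignoring digests and registry ports for brevity):

```shell
# Hypothetical helper showing how a short image reference is expanded:
# no registry -> docker.io, no namespace -> library/, no tag -> :latest
normalize_image() {
  ref="$1"
  case "$ref" in
    *:*) name="${ref%:*}"; tag="${ref##*:}" ;;
    *)   name="$ref";      tag="latest" ;;
  esac
  first="${name%%/*}"
  case "$name" in
    */*)
      case "$first" in
        *.*|localhost) echo "$name:$tag" ;;  # first part looks like a registry host
        *) echo "docker.io/$name:$tag" ;;    # Docker Hub user/repo
      esac ;;
    *) echo "docker.io/library/$name:$tag" ;;  # bare official image
  esac
}

normalize_image "myapp:v1"                       # docker.io/library/myapp:v1
normalize_image "myuser/myapp"                   # docker.io/myuser/myapp:latest
normalize_image "registry.example.com/myapp:v2"  # registry.example.com/myapp:v2
```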
Step 4: Check imagePullSecrets
If you're using a private registry:
```shell
# Check if the pod has an imagePullSecret
kubectl get pod <pod-name> -o jsonpath='{.spec.imagePullSecrets}'

# Check if the secret exists
kubectl get secret <secret-name> -n <namespace>

# Verify the secret content (decode and check)
kubectl get secret <secret-name> -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq
```

Solutions by Cause
Cause A: Image Doesn't Exist or Wrong Tag
Symptoms: manifest unknown, not found, or repository does not exist.
The image name or tag is simply wrong.
Fix: Correct the image reference in your deployment:
```yaml
containers:
  - name: myapp
    image: myapp:v1  # Fix the tag to one that exists
```

Always use explicit tags. Avoid `latest` in production: it's ambiguous and can be overwritten at any time.

```shell
# Verify the tag exists before deploying
docker manifest inspect myapp:v1
```

Cause B: Private Registry Authentication
Symptoms: unauthorized, 401 Unauthorized, or no basic auth credentials.
The image is in a private registry and Kubernetes doesn't have credentials.
Fix step 1 — Create an imagePullSecret:
```shell
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=password \
  -n <namespace>
```

Fix step 2 — Reference it in the pod spec:
```yaml
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: myapp
      image: registry.example.com/myapp:v1
```

Pro tip: To avoid adding imagePullSecrets to every deployment, attach the secret to the default ServiceAccount:

```shell
kubectl patch serviceaccount default -n <namespace> \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
```

Every pod using the default ServiceAccount in that namespace will automatically use the credentials.
Cause C: Docker Hub Rate Limits
Symptoms: toomanyrequests, 429 Too Many Requests, or You have reached your pull rate limit.
Docker Hub rate-limits pulls: anonymous pulls have historically been capped at 100 per 6 hours per IP, and free authenticated accounts at 200 per 6 hours. The exact numbers change over time, so check Docker's current documentation.
Fix option 1 — Authenticate to Docker Hub to get a higher limit:
```shell
kubectl create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<token>
```

Fix option 2 — Set up a pull-through cache (Harbor, Nexus, or a registry mirror) so your nodes don't hit Docker Hub directly. With Docker as the runtime, point `daemon.json` at the mirror:

```json
{
  "registry-mirrors": ["https://mirror.example.com"]
}
```

Fix option 3 — Use `imagePullPolicy: IfNotPresent` so nodes reuse cached images instead of pulling every time:
```yaml
containers:
  - name: myapp
    image: myapp:v1
    imagePullPolicy: IfNotPresent
```

This is actually the default for any tag other than `latest`, but it's worth being explicit.
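A note on Fix option 2: the `registry-mirrors` JSON above is Docker's `/etc/docker/daemon.json`, but most current Kubernetes nodes run containerd. There, a mirror is configured per upstream registry; a sketch, assuming containerd's registry `config_path` is set to `/etc/containerd/certs.d` and `mirror.example.com` is a placeholder:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```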
Cause D: Network or DNS Issues
Symptoms: dial tcp: lookup registry.example.com: no such host, i/o timeout, or connection refused.
The node can't reach the registry at the network level.
Diagnose:
```shell
# Test DNS resolution from a node
kubectl run debug --rm -it --image=busybox -- nslookup registry.example.com

# Test connectivity
kubectl run debug --rm -it --image=busybox -- wget -qO- https://registry.example.com/v2/
```

Common network causes:
| Issue | Fix |
|---|---|
| DNS not resolving | Check CoreDNS pods and node DNS configuration |
| Firewall blocking | Open egress to registry on port 443 |
| Proxy required | Configure containerd/Docker with HTTP_PROXY and HTTPS_PROXY |
| Self-signed cert | Add the CA cert to the container runtime's trust store |
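For the proxy row, the usual approach on systemd-based nodes is a drop-in unit that sets the proxy environment for containerd; a sketch (the proxy host and NO_PROXY list are placeholders you'd adapt):

```shell
# Drop-in so containerd pulls images through the proxy
sudo mkdir -p /etc/systemd/system/containerd.service.d
sudo tee /etc/systemd/system/containerd.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,.svc,.cluster.local"
EOF
sudo systemctl daemon-reload
sudo systemctl restart containerd
```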
Cause E: Architecture Mismatch
Symptoms: no matching manifest for linux/arm64 or the pull succeeds but the container exits immediately.
This happens when you're running ARM nodes (like AWS Graviton or Apple Silicon) but the image only has an amd64 build.
Fix: Build and push multi-architecture images:
```shell
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:v1 --push .
```

Or, if you have mixed-architecture nodes, pin the pod to nodes whose architecture matches the image:

```yaml
spec:
  nodeSelector:
    kubernetes.io/arch: amd64
```

Understanding imagePullPolicy
The imagePullPolicy field controls when Kubernetes pulls images:
| Policy | Behavior |
|---|---|
| Always | Pull the image on every pod start |
| IfNotPresent | Pull only if the image isn't cached on the node |
| Never | Never pull. Only use images already on the node |
If you omit imagePullPolicy, Kubernetes sets the default based on how you reference the image:
| Image reference | Default policy |
|---|---|
| `myapp:latest` or `myapp` (no tag) | Always |
| `myapp:v1.2.3` (explicit tag, not latest) | IfNotPresent |
| `myapp@sha256:abc123...` (digest) | IfNotPresent |
This is an important detail: if you forget to set a tag, Kubernetes defaults to :latest and sets imagePullPolicy: Always, which means every pod start hits the registry. Using explicit versioned tags or digests avoids this and gives you IfNotPresent by default — faster pod starts and less registry traffic.
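The defaulting rules above are simple enough to express directly; a hypothetical sketch of the same logic (ignores registry ports, which also contain a colon):

```shell
# Hypothetical sketch of how the default imagePullPolicy is derived
# from the image reference when the field is omitted
default_pull_policy() {
  image="$1"
  case "$image" in
    *@sha256:*) echo "IfNotPresent" ;;  # pinned by digest
    *:latest)   echo "Always" ;;        # explicit :latest tag
    *:*)        echo "IfNotPresent" ;;  # any other explicit tag
    *)          echo "Always" ;;        # no tag -> :latest implied
  esac
}

default_pull_policy "myapp"            # Always
default_pull_policy "myapp:latest"     # Always
default_pull_policy "myapp:v1.2.3"     # IfNotPresent
default_pull_policy "myapp@sha256:abc" # IfNotPresent
```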
Note that imagePullPolicy is immutable on a running pod: you can't edit it in place. Update the pod template in your Deployment (or delete and recreate the pod) so Kubernetes creates new pods with the new policy.
Debugging Decision Tree
```text
ImagePullBackOff
│
├─ kubectl describe pod → Read the error message
│
├─ "manifest not found" / "repository does not exist"
│    → Verify image name and tag exist
│
├─ "unauthorized" / "401"
│    → Create imagePullSecret → attach to pod or ServiceAccount
│
├─ "toomanyrequests" / "429"
│    → Authenticate to registry or set up a mirror
│
├─ "no such host" / "timeout"
│    → Check node DNS and network connectivity to registry
│
└─ "no matching manifest for linux/arch"
     → Build multi-arch image or pin nodeSelector to matching arch
```

Prevention Tips
- Use explicit image tags — Never deploy with `:latest` in production. Use versioned tags like `myapp:v1.2.3` or SHA digests like `myapp@sha256:abc...`
- Set up imagePullSecrets at the ServiceAccount level — Less error-prone than adding them to every deployment
- Mirror public registries — A pull-through cache protects you from rate limits and registry outages
- Validate images in CI — Run `docker manifest inspect` in your pipeline before deploying to catch missing images early
- Use `imagePullPolicy: IfNotPresent` — Reduces pull failures and speeds up pod starts for immutable tags
- Monitor for ImagePullBackOff — Alert on `kube_pod_container_status_waiting_reason{reason="ImagePullBackOff"}` in Prometheus
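For the last tip, a minimal Prometheus alerting rule might look like this (requires kube-state-metrics; the 10m duration and warning severity are illustrative choices):

```yaml
groups:
  - name: image-pull
    rules:
      - alert: ImagePullBackOff
        expr: kube_pod_container_status_waiting_reason{reason="ImagePullBackOff"} > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.namespace }}/{{ $labels.pod }} is stuck in ImagePullBackOff"
```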
