
Kubernetes ImagePullBackOff: Causes and Fixes

Why your pods can't pull images and how to fix it. Covers registry authentication, image names, network issues, and private registries.

Paul Brissaud
•
February 5, 2026
•
4 min read
#troubleshooting #pods

Your pod is stuck. kubectl get pods shows ImagePullBackOff or ErrImagePull, and the container never starts. This means Kubernetes tried to pull the container image and failed. The good news: the error messages are usually very specific about what went wrong.

Quick Answer

Kubernetes can't download the container image. To find out why:

bash
kubectl describe pod <pod-name>

Scroll to the Events section. You'll see something like:

text
Warning  Failed   kubelet  Failed to pull image "myapp:v2":
  rpc error: code = NotFound desc = failed to pull and unpack image:
  manifest for docker.io/library/myapp:v2 not found

The message tells you the exact problem — wrong image name, missing tag, authentication failure, or network issue.

ImagePullBackOff vs ErrImagePull

These two statuses are related but different:

| Status | What's happening |
| --- | --- |
| ErrImagePull | The pull just failed. Kubernetes will retry immediately. |
| ImagePullBackOff | Multiple pulls failed. Kubernetes is waiting longer between retries (exponential backoff). |

ErrImagePull always comes first. After a few failed attempts, the status transitions to ImagePullBackOff. The backoff delay increases: 10s, 20s, 40s, up to 5 minutes. The pod will keep retrying indefinitely, but the issue won't resolve itself unless you fix the root cause.
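The backoff schedule can be sketched in a few lines of shell (the 10s base and 5-minute cap match the kubelet's defaults; this is an illustration, not the kubelet's actual code):

```bash
# Sketch of the kubelet's image-pull backoff: the delay doubles from
# 10s after each failed attempt and is capped at 300s (5 minutes).
delay=10
for attempt in 1 2 3 4 5 6 7; do
  echo "attempt $attempt: next retry in ${delay}s"
  delay=$((delay * 2))
  if [ "$delay" -gt 300 ]; then delay=300; fi
done
```

By attempt 6 the delay has hit the 300s ceiling and stays there; the pod keeps retrying at that interval until you fix the root cause.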

Common Causes

| Cause | Error message pattern |
| --- | --- |
| Image doesn't exist | manifest unknown or not found |
| Wrong tag | manifest for image:tag not found |
| Typo in image name | repository does not exist |
| Private registry, no auth | unauthorized or 401 |
| Wrong imagePullSecret | unauthorized or no basic auth credentials |
| Network/firewall issue | timeout or dial tcp: lookup ... no such host |
| Registry rate limit | toomanyrequests or 429 |
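As a quick triage aid, these patterns can be folded into a small shell function. This is a hypothetical helper, not part of kubectl, and the patterns are illustrative rather than exhaustive:

```bash
# Hypothetical triage helper: map a "Failed to pull image" message
# from kubectl describe to its most likely cause.
classify_pull_error() {
  case "$1" in
    *toomanyrequests*|*429*)   echo "registry rate limit" ;;
    *unauthorized*|*401*)      echo "missing or wrong registry credentials" ;;
    *"no such host"*|*timeout*) echo "network or DNS problem" ;;
    *"manifest unknown"*|*"not found"*|*"repository does not exist"*)
                               echo "wrong image name or tag" ;;
    *)                         echo "unknown - read the full event message" ;;
  esac
}

classify_pull_error "manifest for docker.io/library/myapp:v2 not found"
# wrong image name or tag
```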

Step-by-Step Troubleshooting

Step 1: Read the Error Message

The most important step. Always start here:

bash
kubectl describe pod <pod-name>

Look at the Events section. The Failed to pull image message tells you exactly what happened. Read the full message — the specific error code and description point to the cause.

Step 2: Verify the Image Exists

Test whether the image is actually pullable from your local machine or a node:

bash
# For Docker Hub images
docker pull myapp:v2

# For private registries
docker pull registry.example.com/myapp:v2

# Check available tags on Docker Hub
curl -s https://hub.docker.com/v2/repositories/library/nginx/tags/?page_size=10 | jq '.results[].name'

If docker pull fails on your machine too, the image genuinely doesn't exist.

Step 3: Check the Image Reference

Verify exactly what the pod is trying to pull:

bash
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].image}'

Common issues:

  • Missing registry prefix: myapp:v1 pulls from Docker Hub (docker.io/library/myapp:v1), not your private registry
  • Wrong tag: latest might not exist if you only pushed versioned tags
  • Architecture mismatch: the image might exist but not for your node's architecture (amd64 vs arm64)
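The "missing registry prefix" pitfall is easier to see if you sketch how a bare image name expands. This is a simplified illustration — a real runtime also treats a first path component containing a dot or port as a registry hostname, and handles digests:

```bash
# Sketch: expand a short image reference the way container runtimes do
# for Docker Hub. Simplified - ignores digests and registry hosts with ports.
canonical_ref() {
  ref="$1"
  case "$ref" in
    *:*) : ;;                  # already has a tag
    *)   ref="$ref:latest" ;;  # no tag -> :latest implied
  esac
  case "$ref" in
    */*/*) echo "$ref" ;;                   # registry + namespace already present
    */*)   echo "docker.io/$ref" ;;         # user/repo on Docker Hub
    *)     echo "docker.io/library/$ref" ;; # official image
  esac
}

canonical_ref "myapp:v1"   # docker.io/library/myapp:v1
canonical_ref "nginx"      # docker.io/library/nginx:latest
```

So a plain myapp:v1 in a pod spec never touches your private registry — it goes to Docker Hub's library namespace.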

Step 4: Check imagePullSecrets

If you're using a private registry:

bash
# Check if the pod has an imagePullSecret
kubectl get pod <pod-name> -o jsonpath='{.spec.imagePullSecrets}'

# Check if the secret exists
kubectl get secret <secret-name> -n <namespace>

# Verify the secret content (decode and check)
kubectl get secret <secret-name> -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq
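If you're unsure what a healthy secret should contain, here is a sketch of the decoded .dockerconfigjson shape, with example (not real) credentials:

```bash
# What a decoded .dockerconfigjson looks like. The "auth" field is just
# base64("username:password") - example values only.
cfg='{"auths":{"registry.example.com":{"username":"user","password":"password","auth":"dXNlcjpwYXNzd29yZA=="}}}'

# The registry key must match the registry in your image reference exactly.
printf '%s' "$cfg" | grep -o 'registry.example.com'

# Recompute the auth field to verify it by hand:
printf '%s' 'user:password' | base64   # dXNlcjpwYXNzd29yZA==
```

A common failure mode: the secret is valid but its registry key (for example docker.io vs index.docker.io) doesn't match the registry the pod actually pulls from.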

Solutions by Cause

Cause A: Image Doesn't Exist or Wrong Tag

Symptoms: manifest unknown, not found, or repository does not exist.

The image name or tag is simply wrong.

Fix: Correct the image reference in your deployment:

yaml
containers:
- name: myapp
  image: myapp:v1  # Fix the tag to one that exists

Always use explicit tags. Avoid latest in production — it's ambiguous and can be overwritten at any time.

bash
# Verify the tag exists before deploying
docker manifest inspect myapp:v1

Cause B: Private Registry Authentication

Symptoms: unauthorized, 401 Unauthorized, or no basic auth credentials.

The image is in a private registry and Kubernetes doesn't have credentials.

Fix step 1 — Create an imagePullSecret:

bash
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=password \
  -n <namespace>

Fix step 2 — Reference it in the pod spec:

yaml
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: myapp
    image: registry.example.com/myapp:v1

Pro tip: To avoid adding imagePullSecrets to every deployment, attach the secret to the default ServiceAccount:

bash
kubectl patch serviceaccount default -n <namespace> \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Every pod using the default ServiceAccount in that namespace will automatically use the credentials.

Cause C: Docker Hub Rate Limits

Symptoms: toomanyrequests, 429 Too Many Requests, or You have reached your pull rate limit.

Docker Hub limits anonymous pulls to 100 per 6 hours, and authenticated free accounts to 200 per 6 hours.

Fix option 1 — Authenticate to Docker Hub to get a higher limit:

bash
kubectl create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<token>

Fix option 2 — Set up a pull-through cache (Harbor, Nexus, or a registry mirror) so your nodes don't hit Docker Hub directly:

json
{
  "registry-mirrors": ["https://mirror.example.com"]
}

Fix option 3 — Use imagePullPolicy: IfNotPresent so nodes reuse cached images instead of pulling every time:

yaml
containers:
- name: myapp
  image: myapp:v1
  imagePullPolicy: IfNotPresent

This is already the default for images with an explicit non-latest tag, but it's worth stating explicitly in the manifest.

Cause D: Network or DNS Issues

Symptoms: dial tcp: lookup registry.example.com: no such host, i/o timeout, or connection refused.

The node can't reach the registry at the network level.

Diagnose:

bash
# Test DNS resolution from a node
kubectl run debug --rm -it --image=busybox -- nslookup registry.example.com

# Test connectivity (use a curl image; busybox wget may lack TLS support)
kubectl run debug --rm -it --image=curlimages/curl --restart=Never -- curl -sv https://registry.example.com/v2/

Common network causes:

| Issue | Fix |
| --- | --- |
| DNS not resolving | Check CoreDNS pods and node DNS configuration |
| Firewall blocking | Open egress to the registry on port 443 |
| Proxy required | Configure containerd/Docker with HTTP_PROXY and HTTPS_PROXY |
| Self-signed cert | Add the CA cert to the container runtime's trust store |
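For the proxy case, containerd picks up proxy settings from its systemd unit environment. A sketch of the drop-in file (proxy host is a placeholder — adjust for your environment):

```bash
# Sketch of a systemd drop-in that lets containerd pull through an HTTP
# proxy. proxy.example.com is a placeholder. In production, write this to
# /etc/systemd/system/containerd.service.d/http-proxy.conf, then run
# systemctl daemon-reload && systemctl restart containerd.
cat > http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,.svc,.cluster.local"
EOF
```

Don't forget NO_PROXY for in-cluster addresses, or the kubelet's own traffic may get routed through the proxy too.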

Cause E: Architecture Mismatch

Symptoms: no matching manifest for linux/arm64 or the pull succeeds but the container exits immediately.

This happens when you're running ARM nodes (like AWS Graviton or Apple Silicon) but the image only has an amd64 build.

Fix: Build and push multi-architecture images:

bash
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:v1 --push .

Or, if you have mixed-architecture nodes, pin the pod to nodes whose architecture matches the image:

yaml
spec:
  nodeSelector:
    kubernetes.io/arch: amd64

Understanding imagePullPolicy

The imagePullPolicy field controls when Kubernetes pulls images:

| Policy | Behavior |
| --- | --- |
| Always | Pull the image on every pod start |
| IfNotPresent | Pull only if the image isn't cached on the node |
| Never | Never pull; only use images already on the node |

If you omit imagePullPolicy, Kubernetes sets the default based on how you reference the image:

| Image reference | Default policy |
| --- | --- |
| myapp:latest or myapp (no tag) | Always |
| myapp:v1.2.3 (explicit tag, not latest) | IfNotPresent |
| myapp@sha256:abc123... (digest) | IfNotPresent |

This is an important detail: if you forget to set a tag, Kubernetes defaults to :latest and sets imagePullPolicy: Always, which means every pod start hits the registry. Using explicit versioned tags or digests avoids this and gives you IfNotPresent by default — faster pod starts and less registry traffic.

Note that imagePullPolicy is immutable on a running pod. To change it, update the pod template (for example, in the Deployment) and let Kubernetes create new pods.

Debugging Decision Tree

text
ImagePullBackOff
│
├─ kubectl describe pod → Read the error message
│
├─ "manifest not found" / "repository does not exist"
│  → Verify image name and tag exist
│
├─ "unauthorized" / "401"
│  → Create imagePullSecret → attach to pod or ServiceAccount
│
├─ "toomanyrequests" / "429"
│  → Authenticate to registry or set up a mirror
│
├─ "no such host" / "timeout"
│  → Check node DNS and network connectivity to registry
│
└─ "no matching manifest for linux/arch"
   → Build multi-arch image or pin nodeSelector to matching arch

Prevention Tips

  1. Use explicit image tags — Never deploy with :latest in production. Use versioned tags like myapp:v1.2.3 or SHA digests like myapp@sha256:abc...
  2. Set up imagePullSecrets at the ServiceAccount level — Less error-prone than adding them to every deployment
  3. Mirror public registries — A pull-through cache protects you from rate limits and registry outages
  4. Validate images in CI — Run docker manifest inspect in your pipeline before deploying to catch missing images early
  5. Use imagePullPolicy: IfNotPresent — Reduces pull failures and speeds up pod starts for immutable tags
  6. Monitor for ImagePullBackOff — Alert on kube_pod_container_status_waiting_reason{reason="ImagePullBackOff"} in Prometheus
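For the last tip, here is a sketch of a Prometheus alerting rule. It assumes kube-state-metrics is installed; the alert name and 10-minute window are illustrative choices, not a standard:

```yaml
groups:
- name: image-pull-alerts
  rules:
  - alert: PodImagePullBackOff
    # kube_pod_container_status_waiting_reason comes from kube-state-metrics
    expr: kube_pod_container_status_waiting_reason{reason="ImagePullBackOff"} > 0
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.namespace }}/{{ $labels.pod }} cannot pull its image"
```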

Written by

Paul Brissaud

Paul Brissaud is a DevOps / Platform Engineer and the creator of Kubeasy. He believes Kubernetes education is often too theoretical and that real understanding comes from hands-on, failure-driven learning.
