# Kubernetes Service Not Reachable: Troubleshooting
Why Services don't route traffic and how to fix it. Selector mismatches, endpoint issues, and DNS problems.
Your pods are running, the Service exists, but nothing can reach it. Requests time out, `curl` hangs, and your application appears completely disconnected from the network. Service connectivity issues are among the most common Kubernetes problems, and they almost always come down to a few well-known misconfigurations.
## Quick Answer
Most Service connectivity issues fall into three categories: selector mismatches, endpoint problems, or DNS failures. Start here:
```bash
kubectl get endpoints <service-name>
```

If the `ENDPOINTS` column is empty, your Service isn't finding any pods. Check that the Service selector matches the pod labels:
```bash
kubectl get svc <service-name> -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels
```

## Step-by-Step Troubleshooting
### Step 1: Verify the Service Exists and Has the Right Configuration
Start by inspecting the Service:
```bash
kubectl get svc <service-name> -o wide
```

Check the output for:

- TYPE: Is it the type you expect? A `ClusterIP` Service won't be reachable from outside the cluster.
- CLUSTER-IP: Is an IP assigned? (If you see `None`, it's a headless Service.)
- PORT(S): Does the port match what your application expects?
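For reference, a healthy `ClusterIP` Service looks roughly like this (the name, IP, and ages are illustrative, not from a real cluster):

```
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
my-service   ClusterIP   10.96.45.123   <none>        80/TCP    5d    app=my-app
```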
For a deeper look:
```bash
kubectl describe svc <service-name>
```

### Step 2: Check If the Service Has Endpoints
This is the single most important diagnostic step. If your Service has no endpoints, it cannot route traffic:
```bash
kubectl get endpoints <service-name>
```

You should see pod IPs listed. If the `ENDPOINTS` column shows `<none>`, move to Step 3.
For more detail, check the EndpointSlice resources:
```bash
kubectl get endpointslices -l kubernetes.io/service-name=<service-name>
```

### Step 3: Diagnose Empty Endpoints
Empty endpoints mean the Service selector doesn't match any running, ready pods. This is the most common cause of unreachable Services.
Compare selectors and labels side by side:
```bash
# Get the Service's selector
kubectl get svc <service-name> -o jsonpath='{.spec.selector}' | jq .

# List pods with their labels
kubectl get pods --show-labels
```

Common selector mismatches:
| Problem | Service Selector | Pod Labels |
|---|---|---|
| Typo in label key | `app: web-server` | `app: webserver` |
| Wrong label value | `app: frontend` | `app: front-end` |
| Missing label entirely | `app: api, version: v2` | `app: api` (no version label) |
| Namespace mismatch | Service in `default` | Pods in `production` |
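A quick way to test for a mismatch is to feed the Service's own selector back into a pod query. This is a small sketch assuming `jq` is available and the Service is named `my-service`; if it prints no pods, the selector matches nothing:

```bash
# Convert the Service's selector map into a label-selector string,
# then list the pods it actually matches
SELECTOR=$(kubectl get svc my-service -o jsonpath='{.spec.selector}' \
  | jq -r 'to_entries | map("\(.key)=\(.value)") | join(",")')
kubectl get pods -l "$SELECTOR"
```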
Fix a selector mismatch by updating either the Service selector or the Deployment labels:
```yaml
# Option A: Update the Service selector to match existing pods
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app        # Must match pod template labels exactly
  ports:
  - port: 80
    targetPort: 8080
```

```yaml
# Option B: Update the Deployment labels to match the Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app    # Must match Service selector
    spec:
      containers:
      - name: my-app
        image: my-app:v1
        ports:
        - containerPort: 8080
```
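If you'd rather fix the live object than re-apply a manifest, the same Option A change can be made imperatively. A sketch, assuming the names used above:

```bash
# Point the Service's selector at the labels the pods actually carry.
# Note: a merge patch adds/overwrites keys; to drop a stale key
# (e.g. an old "version" entry), set it to null in the patch.
kubectl patch svc my-service --type merge \
  -p '{"spec":{"selector":{"app":"my-app"}}}'
```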
### Step 4: Verify Pods Are Ready

Even if selectors match, a pod must be in the Ready state to receive traffic. Pods that are running but not ready won't appear in the Service endpoints.
```bash
kubectl get pods -l <selector-labels>
```

If pods show `0/1` in the READY column, check why they're not ready:
```bash
kubectl describe pod <pod-name>
```

Look for readiness probe failures in the Events section. A failing readiness probe is a common reason for endpoints disappearing.
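For reference, this is a minimal sketch of a readiness probe on a pod; the image name and the `/healthz` path are assumptions, so substitute whatever health endpoint your application actually exposes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: my-app:v1        # hypothetical image
    ports:
    - containerPort: 8080
    readinessProbe:         # pod stays in the Service endpoints only while this passes
      httpGet:
        path: /healthz      # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```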
### Step 5: Check Port Configuration
A frequent but subtle mistake is a mismatch between the Service `port`, the `targetPort`, and the container's actual listening port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - port: 80           # Port the Service listens on
    targetPort: 8080   # Port on the pod to forward to
    protocol: TCP
```

The `targetPort` must match the port your container is actually listening on. Verify:
```bash
kubectl exec <pod-name> -- ss -tlnp
# or
kubectl exec <pod-name> -- netstat -tlnp
```

If your container listens on port 3000 but `targetPort` is set to 8080, traffic will never reach it.
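One way to keep these values from drifting apart is to name the container port and reference it by name in the Service. A sketch with illustrative names, not a required pattern:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: my-app:v1
    ports:
    - name: http            # name the port once here...
      containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: http        # ...and reference the name; stays correct if the number changes
```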
### Step 6: Test DNS Resolution
If endpoints exist but the Service is still unreachable by name, the problem might be DNS:
```bash
# Run a debug pod to test DNS
kubectl run dns-test --rm -it --image=busybox -- nslookup <service-name>
```

The Service should resolve to its ClusterIP. Try the fully qualified domain name if the short name doesn't resolve:

```bash
nslookup <service-name>.<namespace>.svc.cluster.local
```

If DNS doesn't resolve at all, check that CoreDNS is running:
```bash
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns
```
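If CoreDNS itself looks healthy, it can also be worth checking what DNS configuration a pod actually received; the search path in `resolv.conf` is what lets short Service names resolve. Any running pod will do:

```bash
# Show the DNS server and search domains the pod was given
kubectl exec <pod-name> -- cat /etc/resolv.conf
```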
### Step 7: Test Direct Pod Connectivity

To rule out Service-level issues, try connecting to the pod IP directly:
```bash
# Get a pod's IP
kubectl get pod <pod-name> -o jsonpath='{.status.podIP}'

# Test connectivity from another pod
kubectl run curl-test --rm -it --image=curlimages/curl -- curl -v http://<pod-ip>:<container-port>
```

If direct pod access works but the Service doesn't, the problem is in the Service configuration or kube-proxy.
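A useful intermediate check between the pod IP and the Service name is the Service's ClusterIP, which bypasses DNS but still exercises kube-proxy. A sketch along the same lines as the commands above:

```bash
# If the ClusterIP works but the name doesn't, the problem is DNS;
# if neither works while the pod IP does, suspect the Service or kube-proxy
CLUSTER_IP=$(kubectl get svc <service-name> -o jsonpath='{.spec.clusterIP}')
kubectl run ip-test --rm -it --image=curlimages/curl -- curl -v http://$CLUSTER_IP:<service-port>
```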
### Step 8: Check kube-proxy and Network Policies
If all the above checks pass but the Service is still unreachable:
Verify kube-proxy is running:
```bash
kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=50
```

Check for NetworkPolicies that might block traffic:
```bash
kubectl get networkpolicies
kubectl describe networkpolicy <policy-name>
```

A NetworkPolicy with restrictive ingress rules can silently block Service traffic even when everything else is correctly configured.
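For context, this is a sketch of what a policy that unblocks such traffic might look like; all names and labels here are illustrative:

```yaml
# Allow ingress to the backend pods on 8080, but only from pods labeled app: frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend     # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: my-app          # the pods behind the Service
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # the allowed clients
    ports:
    - protocol: TCP
      port: 8080           # the pod's port (targetPort), not the Service port
```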
## Complete Diagnostic Flowchart
| Check | Command | If Failing |
|---|---|---|
| Service exists? | `kubectl get svc` | Create the Service |
| Has endpoints? | `kubectl get endpoints <svc>` | Fix selector/labels (Step 3) |
| Pods are Ready? | `kubectl get pods -l <labels>` | Fix readiness probes (Step 4) |
| Ports match? | `kubectl describe svc` | Align `targetPort` with container (Step 5) |
| DNS resolves? | `nslookup <svc-name>` | Check CoreDNS (Step 6) |
| Direct pod access? | `curl <pod-ip>:<port>` | App issue, not Service issue |
| kube-proxy running? | `kubectl get pods -n kube-system` | Restart kube-proxy (Step 8) |
| NetworkPolicy? | `kubectl get netpol` | Update policy rules (Step 8) |
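If you run through this flowchart often, the first few checks are easy to bundle into a small script. A convenience sketch, not an official tool; it assumes your current kubeconfig context and namespace:

```bash
#!/usr/bin/env bash
# Usage: ./svc-check.sh <service-name>
set -euo pipefail
SVC="$1"

echo "--- Service ---"
kubectl get svc "$SVC" -o wide

echo "--- Endpoints ---"
kubectl get endpoints "$SVC"

echo "--- Selector vs. pod labels ---"
kubectl get svc "$SVC" -o jsonpath='{.spec.selector}'; echo
kubectl get pods --show-labels
```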
## Practice These Scenarios
Reinforce the steps above with these hands-on challenges:

- Debug a Service with 0 endpoints: identify the label mismatch between the Service and Deployment, then fix the configuration so traffic routes correctly.
- Create a Service from scratch to expose an application: learn how selectors, ports, and targetPorts connect, and verify connectivity between pods using the Service name.
- Investigate intra-cluster communication failures: debug NetworkPolicy rules that are silently blocking traffic, and restore connectivity while keeping security restrictions in place.
## Prevention Tips
- Always verify endpoints after creating a Service: run `kubectl get endpoints` immediately to confirm pods are registered.
- Use consistent labeling conventions: adopt a standard like `app.kubernetes.io/name` to reduce typos (see the example after this list).
- Add readiness probes: without one, a pod is marked Ready as soon as its containers start, so it can receive traffic before the application is actually able to serve requests.
- Test from inside the cluster: don't assume external tools reflect internal connectivity.
- Document your port mappings: keep a clear record of which Service port maps to which container port.
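As a quick way to audit labeling consistency, `kubectl` can print a label as its own column; a small example using the recommended `app.kubernetes.io/name` key:

```bash
# Pods missing the label show an empty cell, making outliers easy to spot
kubectl get pods -L app.kubernetes.io/name
```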
