Kubernetes Service Not Reachable: Troubleshooting

Why Services don't route traffic and how to fix it. Selector mismatches, endpoint issues, and DNS problems.

Paul Brissaud • February 9, 2026 • 3 min read

#troubleshooting #networking

Your pods are running, the Service exists, but nothing can reach it. Requests time out, curl hangs, and your application appears completely disconnected from the network. Service connectivity issues are among the most common Kubernetes problems, and they almost always come down to a few well-known misconfigurations.

💡
Need a refresher on how Services work? Check out Kubernetes Services Explained: ClusterIP vs NodePort vs LoadBalancer first.

Quick Answer

Most Service connectivity issues fall into three categories: selector mismatches, endpoint problems, or DNS failures. Start here:

bash
kubectl get endpoints <service-name>

If the ENDPOINTS column is empty, your Service isn't finding any pods. Check that the Service selector matches the pod labels:

bash
kubectl get svc <service-name> -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels
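
To compare selector and labels in one step, you can feed the Service's selector straight into a pod query (a minimal sketch, assuming a bash shell with jq installed):

bash
kubectl get pods -l "$(kubectl get svc <service-name> -o jsonpath='{.spec.selector}' \
  | jq -r 'to_entries | map("\(.key)=\(.value)") | join(",")')"

If this returns no pods, you've found your mismatch.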

Step-by-Step Troubleshooting

Step 1: Verify the Service Exists and Has the Right Configuration

Start by inspecting the Service:

bash
kubectl get svc <service-name> -o wide
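
With a hypothetical Service named my-service, the output looks roughly like this (names, IPs, and ages are illustrative):

bash
kubectl get svc my-service -o wide
# NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
# my-service   ClusterIP   10.96.45.12   <none>        80/TCP    5m    app=my-app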

Check the output for:

  • TYPE: Is it the type you expect? A ClusterIP Service won't be reachable from outside the cluster.
  • CLUSTER-IP: Is an IP assigned? (If you see None, it's a headless Service.)
  • PORT(S): Does the port match what your application expects?

For a deeper look:

bash
kubectl describe svc <service-name>

Step 2: Check If the Service Has Endpoints

This is the single most important diagnostic step. If your Service has no endpoints, it cannot route traffic:

bash
kubectl get endpoints <service-name>

You should see pod IPs listed. If the ENDPOINTS column shows <none>, move to Step 3.
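
With a hypothetical Service named my-service backed by two ready pods, healthy output looks roughly like this:

bash
kubectl get endpoints my-service
# NAME         ENDPOINTS                          AGE
# my-service   10.244.1.5:8080,10.244.2.7:8080    3m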

For more detail, check the EndpointSlice resources:

bash
kubectl get endpointslices -l kubernetes.io/service-name=<service-name>

Step 3: Diagnose Empty Endpoints

Empty endpoints mean the Service selector doesn't match any running, ready pods. This is the most common cause of unreachable Services.

Compare selectors and labels side by side:

bash
# Get the Service's selector
kubectl get svc <service-name> -o jsonpath='{.spec.selector}' | jq

# List pods with their labels
kubectl get pods --show-labels

Common selector mismatches:

Problem | Service Selector | Pod Labels
Typo in label key | app: web-server | app: webserver
Wrong label value | app: frontend | app: front-end
Missing label entirely | app: api, version: v2 | app: api (no version label)
Namespace mismatch | Service in default | Pods in production

Fix a selector mismatch by either updating the Service or the Deployment labels:

yaml
# Option A: Update the Service selector to match existing pods
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app  # Must match pod template labels exactly
  ports:
  - port: 80
    targetPort: 8080

yaml
# Option B: Update the Deployment labels to match the Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app  # Must match Service selector
    spec:
      containers:
      - name: my-app
        image: my-app:v1
        ports:
        - containerPort: 8080
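
If you'd rather patch the live Service than re-apply a manifest, a strategic merge patch works (a sketch using the hypothetical names above):

bash
kubectl patch service my-service -p '{"spec":{"selector":{"app":"my-app"}}}'

Re-run kubectl get endpoints my-service afterwards to confirm the pods register.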

Step 4: Verify Pods Are Ready

Even if selectors match, a pod must be in Ready state to receive traffic. Pods that are running but not ready won't appear in the Service endpoints.

bash
kubectl get pods -l <selector-labels>

If pods show 0/1 READY, check why they're not ready:

bash
kubectl describe pod <pod-name>

Look for readiness probe failures in the Events section. A failing readiness probe is a common reason for endpoints disappearing.
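
To read the Ready condition directly instead of scanning the full describe output, a jsonpath filter works (a sketch; <pod-name> is a placeholder):

bash
kubectl get pod <pod-name> -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'

The reason and message fields show which containers are not ready.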

Step 5: Check Port Configuration

A frequent but subtle mistake is a mismatch between the Service port, targetPort, and the container's actual listening port:

yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - port: 80          # Port the Service listens on
    targetPort: 8080   # Port on the pod to forward to
    protocol: TCP

The targetPort must match the port your container is actually listening on. Verify:

bash
kubectl exec <pod-name> -- ss -tlnp
# or
kubectl exec <pod-name> -- netstat -tlnp

If your container listens on port 3000 but targetPort is set to 8080, traffic will never reach it.
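
You can also compare the Service's port mapping against the ports the pod declares without exec'ing into the container (illustrative commands; note that containerPort is informational, so the process may still listen elsewhere):

bash
# Ports the Service exposes and forwards
kubectl get svc <service-name> -o jsonpath='{.spec.ports}'

# Ports the pod's containers declare
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].ports}'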

Step 6: Test DNS Resolution

If endpoints exist but the Service is still unreachable by name, the problem might be DNS:

bash
# Run a debug pod to test DNS
kubectl run dns-test --rm -it --image=busybox -- nslookup <service-name>

The Service should resolve to its ClusterIP. Try the fully qualified domain name if the short name doesn't resolve:

bash
kubectl run dns-test --rm -it --image=busybox -- nslookup <service-name>.<namespace>.svc.cluster.local

If DNS doesn't resolve at all, check that CoreDNS is running:

bash
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns
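
If the CoreDNS pods are running but resolution still fails, their logs often point to the cause (this assumes the standard k8s-app=kube-dns label, which most distributions use):

bash
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50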

Step 7: Test Direct Pod Connectivity

To rule out Service-level issues, try connecting to the pod IP directly:

bash
# Get a pod's IP
kubectl get pod <pod-name> -o jsonpath='{.status.podIP}'

# Test connectivity from another pod
kubectl run curl-test --rm -it --image=curlimages/curl -- curl -v http://<pod-ip>:<container-port>

If direct pod access works but the Service doesn't, the problem is in the Service configuration or kube-proxy.
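
To narrow things further, curl the Service's ClusterIP directly, bypassing DNS (a sketch; substitute your own IP and port):

bash
# Get the Service's ClusterIP
kubectl get svc <service-name> -o jsonpath='{.spec.clusterIP}'

# Curl it from another pod
kubectl run curl-test --rm -it --image=curlimages/curl -- curl -v http://<cluster-ip>:<service-port>

If the pod IP responds but the ClusterIP doesn't, suspect kube-proxy (Step 8); if the ClusterIP responds but the name doesn't, it's DNS (Step 6).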

Step 8: Check kube-proxy and Network Policies

If all the above checks pass but the Service is still unreachable:

Verify kube-proxy is running:

bash
kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=50

Check for NetworkPolicies that might block traffic:

bash
kubectl get networkpolicies
kubectl describe networkpolicy <policy-name>

A NetworkPolicy with restrictive ingress rules can silently block Service traffic even when everything else is correctly configured.
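
To see at a glance which policies exist and which pods each one selects, a jsonpath loop can help (a sketch; run it in the Service's namespace):

bash
kubectl get networkpolicies -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.podSelector}{"\n"}{end}'

An empty podSelector ({}) means the policy applies to every pod in the namespace.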

Complete Diagnostic Flowchart

Check | Command | If Failing
Service exists? | kubectl get svc | Create the Service
Has endpoints? | kubectl get endpoints <svc> | Fix selector/labels (Step 3)
Pods are Ready? | kubectl get pods -l <labels> | Fix readiness probes (Step 4)
Ports match? | kubectl describe svc | Align targetPort with container (Step 5)
DNS resolves? | nslookup <svc-name> | Check CoreDNS (Step 6)
Direct pod access? | curl <pod-ip>:<port> | App issue, not Service issue
kube-proxy running? | kubectl get pods -n kube-system | Restart kube-proxy (Step 8)
NetworkPolicy? | kubectl get netpol | Update policy rules (Step 8)

Practice These Scenarios

Wrong Selector (easy · Pods & Containers · 10 min)

The Service exists but can't find any backend pods. Requests to the Service time out, yet the pods are running fine. In this hands-on challenge, you'll:

  • Debug a Service with 0 endpoints
  • Identify a label mismatch between the Service and Deployment
  • Fix the configuration so traffic routes correctly

Expose Internally (easy · Pods & Containers · 10 min)

The application is running, but other services in the cluster can't reach it. It needs to be accessible from within the cluster. In this hands-on challenge, you'll:

  • Create a Service from scratch to expose an application
  • Learn how selectors, ports, and targetPorts connect
  • Verify connectivity between pods using the Service name

Partial Outage (easy · Networking · 10 min)

The frontend app is up, but some users report failures. It's not the code. Investigate the cluster's configuration before the incident spreads. In this hands-on challenge, you'll:

  • Investigate intra-cluster communication failures
  • Debug NetworkPolicy rules that are silently blocking traffic
  • Restore connectivity while keeping security restrictions in place

Prevention Tips

  1. Always verify endpoints after creating a Service — Run kubectl get endpoints immediately to confirm pods are registered (see the one-liner after this list)
  2. Use consistent labeling conventions — Adopt a standard like app.kubernetes.io/name to reduce typos
  3. Add readiness probes — Without a readiness probe, a pod counts as ready as soon as its containers start, so a running but unhealthy pod keeps receiving traffic
  4. Test from inside the cluster — Don't assume external tools reflect internal connectivity
  5. Document your port mappings — Keep a clear record of which Service port maps to which container port
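
For tip 1, the verify step can become a single habit-forming one-liner (a sketch, assuming a hypothetical manifest service.yaml that defines my-service):

bash
kubectl apply -f service.yaml && kubectl get endpoints my-service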

Written by

Paul Brissaud

Paul Brissaud is a DevOps / Platform Engineer and the creator of Kubeasy. He believes Kubernetes education is often too theoretical and that real understanding comes from hands-on, failure-driven learning.

