
Kubernetes Services Explained: ClusterIP vs NodePort vs LoadBalancer

Complete guide to Kubernetes Service types. When to use each, configuration examples, and real-world use cases.

Paul Brissaud • February 7, 2026 • 9 min read
#networking #beginner

Your app is running, but nobody can reach it. Or maybe other pods in the cluster need to talk to it, but they don't know where it lives. This is the problem Kubernetes Services solve. A Service gives your pods a stable network identity — a single DNS name and IP address that doesn't change even when pods are created, destroyed, or rescheduled.

But which Service type should you use? Kubernetes offers three main options — ClusterIP, NodePort, and LoadBalancer — and each one exposes your application differently. This article explains how each type works, when to pick one over another, and how they all fit together.

Quick Answer

| Service Type | Who can reach it | When to use it |
|---|---|---|
| ClusterIP | Only pods inside the cluster | Internal communication between microservices |
| NodePort | Anyone who can reach a node IP | Dev/test, or when you manage your own load balancer |
| LoadBalancer | Anyone on the internet (via cloud LB) | Production external traffic on cloud providers |

The key insight: these types build on each other. A NodePort Service also creates a ClusterIP. A LoadBalancer Service creates both a NodePort and a ClusterIP. Each layer adds more exposure.

Why Services Exist

Pods are ephemeral. Every time a pod restarts, it gets a new IP address. If you have three replicas of your backend, their IPs change constantly. Other services can't keep track of which IPs are valid right now.

A Service solves this by providing a stable virtual IP (the ClusterIP) and a DNS name. When a pod sends a request to backend-svc, Kubernetes resolves that name to the Service's ClusterIP and then distributes the traffic across all healthy pods that match the Service's selector.

The Selector: How Services Find Pods

The connection between a Service and its pods isn't based on names or namespaces — it's based on labels. The Service defines a selector, and any pod whose labels match gets added to the Service's endpoint list automatically.

yaml
# Deployment — defines the pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend   # Pods get this label
    spec:
      containers:
      - name: backend
        image: backend:v1
        ports:
        - containerPort: 8080
---
# Service — finds pods by label
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  selector:
    app: backend       # Matches pods with this label
  ports:
  - port: 80           # Service listens on port 80
    targetPort: 8080   # Forwards to pod port 8080

The Service continuously watches for pods with matching labels. When pods come and go, the endpoint list updates automatically. This is what makes Services resilient — you never hard-code pod IPs.
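You can check this wiring at any time by listing the Service's endpoints (the pod IPs below are illustrative):

bash
# List the pod IPs currently backing the Service
kubectl get endpoints backend-svc

# NAME          ENDPOINTS                                         AGE
# backend-svc   10.244.1.5:8080,10.244.2.7:8080,10.244.3.2:8080   2m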

Understanding Ports

Port mapping is a fundamental concept in Services, and it confuses a lot of people at first. There are up to three port values involved:

| Field | What it means | Example |
|---|---|---|
| port | The port the Service listens on (what clients use) | 80 |
| targetPort | The port your container is actually listening on | 8080 |
| nodePort | The port opened on every node (NodePort and LoadBalancer only) | 30080 |

With port: 80 and targetPort: 8080, pods inside the cluster call the Service on port 80, and Kubernetes forwards to the container on port 8080. The Service acts as a translator between what consumers expect and what the app provides.
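One detail worth knowing: targetPort can reference a named container port instead of a number, so the Service keeps working even if the port number changes. A minimal sketch, assuming the pod declares a port named http (the name is illustrative):

yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: http   # Matches the containerPort named "http" in the pod spec
                       # (e.g. ports: [{name: http, containerPort: 8080}])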

ClusterIP

ClusterIP is the default Service type. If you don't specify a type, you get a ClusterIP. It assigns an internal IP address that's only reachable from within the cluster.

yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  type: ClusterIP    # This is the default; you can omit it
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080

Once created, any pod in the cluster can reach this service at backend-svc (same namespace) or backend-svc.default.svc.cluster.local (fully qualified). Kubernetes handles the DNS resolution and load-balances across all matching pods.
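To try this yourself, spin up a throwaway pod and call the Service by name (curlimages/curl is just one convenient test image):

bash
# Run a temporary pod and curl the Service via its DNS name
kubectl run tmp-shell --rm -it --image=curlimages/curl --restart=Never \
  --command -- curl -s http://backend-svc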

Traffic Flow

text
Pod A → backend-svc:80 → ClusterIP (10.96.0.15) → kube-proxy → Pod B:8080

The ClusterIP is a virtual IP that doesn't belong to any network interface. kube-proxy, running on every node, programs iptables or IPVS rules so that any traffic hitting this IP gets redirected to one of the backing pods. The caller never knows which specific pod handled the request.
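If you want to see this in action, the rules are visible on any node when kube-proxy runs in iptables mode (the KUBE-SERVICES chain is created by kube-proxy; exact output varies by cluster):

bash
# On a node: list the NAT rules kube-proxy programs for Services
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.15

# Each match jumps to a per-Service chain that DNATs to one backing pod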

When to Use ClusterIP

Use ClusterIP for any service that only needs to be reached by other pods inside the cluster. This is the right choice for most microservices: API backends, databases, caches, message queues — anything that doesn't need direct external access. In a typical architecture, the vast majority of your Services will be ClusterIP.

Headless Services

There's a special variant: if you set clusterIP: None, you get a headless Service. Instead of a single virtual IP with load balancing, DNS returns the individual pod IPs directly.

yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - port: 5432

With this, nslookup db-headless returns multiple A records — one per pod — instead of a single ClusterIP. This is essential for StatefulSets where clients need to connect to specific pods (like database primaries vs replicas), or when an application handles its own load balancing (like Cassandra or Elasticsearch).
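You can see the difference with a quick lookup from inside the cluster (pod IPs illustrative):

bash
# Headless Service: DNS returns one A record per pod, no virtual IP
kubectl run dns-test --rm -it --image=busybox --restart=Never \
  --command -- nslookup db-headless

# Name:      db-headless.default.svc.cluster.local
# Address 1: 10.244.1.12
# Address 2: 10.244.2.8
# Address 3: 10.244.3.4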

NodePort

A NodePort Service extends ClusterIP by opening a specific port on every node in the cluster. Traffic to any node's IP on that port gets forwarded to the Service.

yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080    # Optional: Kubernetes assigns one if omitted

Traffic Flow

text
External Client → NodeIP:30080 → kube-proxy → ClusterIP → Pod:8080

Every node listens on port 30080, even nodes that don't have the target pod running. If a client hits a node that doesn't run the pod, kube-proxy routes the request to another node that does. The client doesn't need to know which node has the pod.

Note that a NodePort Service automatically creates a ClusterIP too. So internal pods can still reach it via frontend-svc:80 — the NodePort is just an additional way in.
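Testing it takes two commands (substitute one of your own node IPs for the placeholder):

bash
# Find a node IP
kubectl get nodes -o wide

# Hit the NodePort from outside the cluster
curl http://<node-ip>:30080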

Port Range and Assignment

NodePort values must be in the range 30000–32767 by default (configurable via the --service-node-port-range flag on the API server). If you don't specify a nodePort, Kubernetes picks one automatically from this range. You can pin a specific port if needed, but you risk collisions — only one Service can use a given NodePort across the entire cluster.
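If you let Kubernetes assign the port, you can read it back afterwards:

bash
# Print the auto-assigned nodePort for the Service's first port entry
kubectl get svc frontend-svc -o jsonpath='{.spec.ports[0].nodePort}'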

When to Use NodePort

NodePort is useful in specific scenarios:

  • Development and testing where you need quick external access without a cloud load balancer
  • Bare-metal clusters where LoadBalancer Services aren't natively supported
  • Behind your own load balancer (HAProxy, NGINX, F5) that you point at the node ports

For production on cloud providers, NodePort alone is usually not enough. You'll want either a LoadBalancer Service or an Ingress controller in front of it.

Limitations

NodePort has real drawbacks for production use:

  • Non-standard ports — Users have to remember port 30080 instead of port 80 or 443
  • One service per port — Only one Service can claim port 30080 cluster-wide
  • No built-in high availability — If a node goes down, clients pointing at that specific node lose connectivity
  • Larger attack surface — Every node exposes the port, even nodes not running the workload

LoadBalancer

A LoadBalancer Service extends NodePort by asking the cloud provider to provision an external load balancer that routes traffic to the node ports. This is the standard way to expose services to the internet on managed Kubernetes (EKS, GKE, AKS).

yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 8080

After creating this, your cloud provider provisions a load balancer and assigns an external IP or hostname:

bash
kubectl get svc web-app-svc

# NAME          TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
# web-app-svc   LoadBalancer   10.96.45.12    203.0.113.50     80:31234/TCP
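Provisioning isn't instant: EXTERNAL-IP shows <pending> until the cloud LB is ready. You can watch for the assignment, then extract the address (some providers return a hostname instead of an IP):

bash
# Watch until the cloud provider assigns an external address
kubectl get svc web-app-svc --watch

# Extract just the IP once it's ready (check .ingress[0].hostname
# instead if your provider hands out hostnames, as AWS does)
kubectl get svc web-app-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'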

Traffic Flow

text
Internet Client → Cloud LB (203.0.113.50:80) → NodeIP:31234 → kube-proxy → Pod:8080

A LoadBalancer Service creates all three layers: a ClusterIP for internal access, a NodePort for node-level routing, and the external load balancer on top. The cloud LB health-checks your nodes and only sends traffic to healthy ones, giving you built-in high availability.

When to Use LoadBalancer

Use LoadBalancer when you need to expose a service to the internet on a cloud-managed cluster. It handles external IP assignment, health checking, and traffic distribution automatically. No extra infrastructure to manage.

Cost: The One-LB-Per-Service Problem

The biggest architectural consideration: every LoadBalancer Service provisions a separate cloud load balancer. On AWS, each NLB/ALB has a base cost plus traffic charges. Ten services means ten load balancers — and ten bills.

This is why most production setups use an Ingress controller or the Gateway API instead. You deploy a single Ingress controller behind one LoadBalancer Service, then create routing rules that fan out traffic to many internal ClusterIP services based on hostname or URL path. One load balancer bill instead of ten.

On Bare Metal

If you're not on a cloud provider, LoadBalancer Services stay in Pending state forever — there's no controller to provision the external IP. The solution is MetalLB, which provides a LoadBalancer implementation for bare-metal clusters by assigning IPs from a configured pool and announcing them via ARP or BGP.
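Once MetalLB's controller is installed, a minimal configuration looks roughly like this (the address range is an example; pick unused IPs on your own network):

yaml
# IPAddressPool: the IPs MetalLB may assign to LoadBalancer Services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
# L2Advertisement: announce assigned IPs on the local network via ARP
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - lab-pool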

How They Build on Each Other

This layering is the most important concept to understand:

text
ClusterIP
  └── Internal virtual IP + DNS name
  └── kube-proxy load balancing to pods

NodePort = ClusterIP + 
  └── Port opened on every node (30000-32767)
  └── External access via NodeIP:NodePort

LoadBalancer = NodePort + ClusterIP +
  └── Cloud-provisioned external load balancer
  └── Public IP / hostname

When you create a LoadBalancer Service, you can still access it internally as a ClusterIP (web-app-svc:80 from any pod) and externally via node ports (NodeIP:31234). The LoadBalancer just adds another entry point on top.

This means switching from one type to another is straightforward — you're mostly adding or removing layers of exposure.
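For example, promoting an internal Service to external exposure is a one-field change (shown here as a patch; be deliberate about doing this on live Services):

bash
# Change an existing ClusterIP Service to LoadBalancer in place
kubectl patch svc backend-svc -p '{"spec":{"type":"LoadBalancer"}}'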

Full Comparison

| Feature | ClusterIP | NodePort | LoadBalancer |
|---|---|---|---|
| Accessible from | Inside cluster only | Node IP + port | External IP / hostname |
| Creates ClusterIP | Yes | Yes | Yes |
| Creates NodePort | No | Yes | Yes |
| External LB | No | No | Yes |
| Requires cloud provider | No | No | Yes (or MetalLB) |
| Port range | Any | 30000–32767 | Any (LB maps them) |
| Cost | Free | Free | Per-LB cloud charges |
| HA built-in | Yes (across pods) | No (node failure = outage) | Yes (LB health checks nodes) |
| Best for | Internal microservices | Dev / bare-metal | Production external traffic |

ExternalName: The Odd One Out

There's a fourth Service type that works completely differently. ExternalName doesn't route traffic to pods at all. It creates a DNS CNAME record pointing to an external hostname.

yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.production.example.com

When a pod resolves external-db, Kubernetes DNS returns db.production.example.com. No proxying, no ClusterIP, no selector. This is useful for referencing external services (like a managed RDS database) through a consistent in-cluster DNS name. If the external endpoint changes, you update the Service — not every app that references it.

ExternalName has one caveat: because it works at the DNS level, it doesn't support port remapping. The target must be reachable on the same port the client uses.

Service Discovery: How Pods Find Services

Kubernetes provides two mechanisms for service discovery:

DNS (Preferred)

Every Service gets a DNS entry automatically. Pods can reach a Service by name:

bash
# Same namespace — short name is enough
curl http://backend-svc

# Cross-namespace — use the fully qualified domain name
curl http://backend-svc.other-namespace.svc.cluster.local

The DNS format follows a predictable pattern: <service-name>.<namespace>.svc.cluster.local. CoreDNS, running inside the cluster, handles resolution. For ClusterIP Services, it returns the virtual IP. For headless Services, it returns individual pod IPs.

Environment Variables

When a pod starts, Kubernetes injects environment variables for every active Service in the same namespace:

bash
BACKEND_SVC_SERVICE_HOST=10.96.0.15
BACKEND_SVC_SERVICE_PORT=80

The catch: the Service must exist before the pod is created. If you create the Service after the pod, the variables won't be there. DNS doesn't have this ordering problem, which is why it's the preferred approach.
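You can inspect what a pod actually received (values illustrative; the pod name is a placeholder):

bash
# List the Service variables injected into a running pod
kubectl exec <pod-name> -- env | grep BACKEND_SVC

# BACKEND_SVC_SERVICE_HOST=10.96.0.15
# BACKEND_SVC_SERVICE_PORT=80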

Beyond Services: Ingress and Gateway API

Services handle L4 (TCP/UDP) routing — they forward traffic to pods based on IP and port. If you need L7 routing — hostname-based routing, URL path routing, TLS termination, header-based rules — you need an additional layer on top of Services.

Ingress

The established approach. You deploy an Ingress controller (NGINX, Traefik, HAProxy, etc.) as a Deployment behind a single LoadBalancer Service. Then you create Ingress resources that define the routing rules:

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc       # ClusterIP Service
            port:
              number: 80
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-svc  # ClusterIP Service
            port:
              number: 80

Two hostnames, two ClusterIP Services, one load balancer. This is how you avoid the one-LB-per-service cost problem.

Gateway API

The newer, more flexible replacement for Ingress. It separates infrastructure concerns (Gateway, managed by platform teams) from application routing (HTTPRoute, managed by app teams). Gateway API supports more advanced traffic management like traffic splitting, header modification, and cross-namespace routing.
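A sketch of what the app-team side looks like, assuming a platform-managed Gateway named main-gateway already exists (resource names are illustrative):

yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: main-gateway      # Hypothetical Gateway owned by the platform team
  hostnames:
  - api.example.com
  rules:
  - backendRefs:
    - name: api-svc         # Same ClusterIP Service as in the Ingress example
      port: 80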

Both approaches follow the same principle: use a single LoadBalancer as the entry point, and route internally to ClusterIP Services.

Choosing the Right Service Type

Here's a practical decision framework:

  • Is this service only called by other pods in the cluster? → Use ClusterIP. This is the default for a reason — most services are internal.
  • Do you need external access during development or on bare metal? → Use NodePort. Quick and simple, no cloud dependencies.
  • Do you need production external access on a cloud provider? → Use LoadBalancer for a single service, or an Ingress/Gateway behind one LoadBalancer for multiple services.
  • Do you need to reference an external service by DNS name? → Use ExternalName.

When in doubt, start with ClusterIP and only escalate when you have a real need for external exposure. You can always change the type later.

Common Mistakes

Selector Doesn't Match Pod Labels

The most frequent Service issue. The Service has a selector, but the pod labels don't match — so the Service routes to nothing. Labels must match exactly, both key and value. Always check kubectl get endpoints <svc> after creating a Service to verify pods are being selected.
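A quick way to compare both sides of the match (adapt the names to your cluster):

bash
# What the Service is selecting
kubectl get svc backend-svc -o jsonpath='{.spec.selector}'

# What labels the pods actually carry
kubectl get pods --show-labels

# Empty ENDPOINTS here means the selector matches nothing
kubectl get endpoints backend-svc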

Wrong targetPort

The targetPort must match the port your container is actually listening on. If your app listens on 8080 but the Service says targetPort: 80, connections will be refused. The port field (what clients use) can be anything, but targetPort must match reality.

Using LoadBalancer for Internal Services

Every LoadBalancer costs money. If your services only talk to each other, ClusterIP is all you need. Reserve LoadBalancer for the services that genuinely face the internet — and even then, consider whether an Ingress controller with one LoadBalancer can serve all of them.

Accidentally Exposing Services via NodePort

NodePort opens a port on every node in the cluster. If your nodes have public IPs and you didn't intend external access, that's a security issue. Use ClusterIP for internal services.

Try It Yourself

Concepts stick better when you practice them. These Kubeasy challenges let you work with Services in a real cluster:

  • Expose Internally (Pods & Containers, easy, ~10 min): A web application is running but other pods can't reach it. Create a ClusterIP Service from scratch to make it accessible within the cluster.
  • Wrong Selector (Pods & Containers, easy, ~10 min): The Service exists but has 0 endpoints. Pods are healthy, yet traffic goes nowhere. Find and fix the selector mismatch.
  • Partial Outage (Networking, easy, ~10 min): The frontend can reach some backends but not others. A NetworkPolicy is involved — figure out why intra-cluster communication is partially broken.

Written by

Paul Brissaud

Paul Brissaud is a DevOps / Platform Engineer and the creator of Kubeasy. He believes Kubernetes education is often too theoretical and that real understanding comes from hands-on, failure-driven learning.


Related Articles

  • Understanding Kubernetes Probes: Liveness, Readiness & Startup (Concepts)
  • Kubeasy vs Killercoda: Which Platform for Learning Kubernetes? (Comparisons)
  • Kubernetes Service Not Reachable: Troubleshooting (Troubleshooting)