Kubernetes Service: Examples, Basic Usage, and Troubleshooting

What is a Kubernetes Service?

In Kubernetes, a service is an abstraction that represents a set of pods running an application or functional component. The service holds access policies and is responsible for enforcing them on incoming requests.

The need for services arises from the fact that pods in Kubernetes are short-lived and can be replaced at any time. Kubernetes guarantees the availability of a given number of replicas, but not the liveness of any individual pod. This means that pods that need to communicate with another pod cannot rely on the IP address of a single underlying pod. Instead, they connect to the service, which routes their requests to a relevant, currently running pod.

The service is assigned a virtual IP address, known as a clusterIP, which persists until the service is explicitly deleted. The service acts as a reliable endpoint for communication between components or applications.

For Kubernetes native applications, an alternative to using services is to make requests directly through the Kubernetes API Server. The API Server automatically exposes and maintains an endpoint for running pods.

This is part of our series of articles about Kubernetes troubleshooting.

What are the Components of a Kubernetes Service?

A Kubernetes service associates a set of pods with an abstract service name and persistent IP address. This enables pods to discover each other and route requests to each other. A service uses labels and a selector to identify the pods it should route traffic to. For example, a service might connect the front end of an application to a back end, each running in a separate Deployment within the cluster.

The basic components of a Kubernetes service are a label selector that identifies the pods to route traffic to, a clusterIP, one or more port definitions, and an optional mapping of each incoming port to a targetPort on the pods. We’ll show examples of service configuration code in the following section.

Another, less popular option is to create a service without a pod selector. This lets you point a service to another namespace, another service in the cluster, or a static IP outside the cluster.

 

Tips from the expert

Itiel Shwartz

Co-Founder & CTO

Itiel is the CTO and co-founder of Komodor. He’s a big believer in dev empowerment and moving fast, and has worked at eBay, Forter, and Rookout (as the founding engineer). Itiel is a backend and infra developer turned “DevOps”, and an avid public speaker who loves talking about things such as cloud infrastructure, Kubernetes, Python, observability, and R&D culture.

In my experience, here are tips that can help you better manage Kubernetes services:

Choose the right service type

Select the appropriate service type (ClusterIP, NodePort, LoadBalancer, or ExternalName) based on your use case.

Leverage service discovery

Use Kubernetes DNS for seamless service discovery and communication within the cluster.

Monitor service health

Implement health checks and monitoring for services to detect and resolve issues promptly.

Utilize Ingress for HTTP routing

Use Ingress resources to manage HTTP and HTTPS traffic routing to services.

Employ NetworkPolicies

Define NetworkPolicies to control traffic flow between services and enhance security.

How to Create a Kubernetes Service

A Kubernetes service can be configured using a YAML manifest. Here is an example of a service YAML:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

Here are important aspects of the service manifest:

  • metadata.name: the logical name of the service, which also becomes the DNS name of the service when it is created.
  • spec.selector: identifies which pods should be included in the service. In this example, pods that have the label app: nginx become part of the service.
  • spec.ports: a list of one or more port configurations. Each port configuration defines a network protocol and port number. Optionally, it can define a targetPort, which is the port on the pods that the service forwards traffic to (it defaults to the same value as port if omitted).

To create a service object in your cluster, use the following command (substituting the path to your YAML file):

kubectl apply -f /path/to/service-manifest.yaml
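
To confirm the service was created and received a cluster IP, you can query it by name. The output below is illustrative; the IP and age will differ in your cluster:

kubectl get service my-service

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
my-service   ClusterIP   10.96.45.101   <none>        80/TCP    5s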

What are the Types of Kubernetes Services?

ClusterIP

ClusterIP is the default service type in Kubernetes. It receives a cluster-internal IP address, making the service reachable only from within the cluster. If necessary, you can set a specific clusterIP in the service manifest, but it must be within the cluster’s service IP range.

Manifest example:

apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  type: ClusterIP
  clusterIP: 10.10.5.10
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080

NodePort

A NodePort service builds on top of the ClusterIP service, exposing it on a port that is accessible from outside the cluster. If you do not specify a port number, Kubernetes automatically chooses a free port. The kube-proxy component on each node is responsible for listening on the node’s external ports and forwarding client traffic from the NodePort to the ClusterIP.

By default, all nodes in the cluster listen on the service’s NodePort, even if they are not running a pod that matches the service selector. If these nodes receive traffic intended for the service, it is handled by network address translation (NAT) and forwarded to the destination pod.

NodePort can be used to configure an external load balancer to forward network traffic from clients outside the cluster to a specific set of pods. For this to work, you must set a specific port number for the NodePort, and configure the external load balancer to forward traffic to that port on all cluster nodes. You also need to configure health checks in the external load balancer to determine whether a node is running healthy pods.

The nodePort field in the service manifest is optional, and lets you specify a custom port in the default NodePort range of 30000-32767.

Manifest example:

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30000
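
Once this manifest is applied, the service should be reachable from outside the cluster on port 30000 of any node, assuming <node-ip> is a routable address of one of your nodes and firewall rules permit the traffic:

curl http://<node-ip>:30000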

LoadBalancer

A LoadBalancer service is based on the NodePort service, and adds the ability to configure external load balancers in public and private clouds. It exposes services running within the cluster by forwarding network traffic to cluster nodes.

The LoadBalancer service type lets you dynamically provision external load balancers. This typically requires an integration (such as a cloud provider’s controller) running inside the Kubernetes cluster, which watches for LoadBalancer services.

Manifest example:

apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  clusterIP: 10.0.160.135
  loadBalancerIP: 168.196.90.10
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
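
After applying the manifest, the external load balancer is provisioned asynchronously by the cloud integration. You can watch the service until the EXTERNAL-IP column changes from <pending> to a real address (the output below is illustrative):

kubectl get service my-loadbalancer-service --watch

NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
my-loadbalancer-service   LoadBalancer   10.0.160.135   168.196.90.10   80:31245/TCP   2m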

ExternalName

An ExternalName service maps the service to a DNS name instead of to a set of pods selected by labels. You define the name using the spec.externalName parameter. DNS lookups for the service return a CNAME record matching the contents of the externalName field (for example, my.service.domain.com), without any proxying.

This type of service can be used to create services in Kubernetes that represent external components, such as databases running outside of Kubernetes. Another use case is allowing a pod in one namespace to communicate with a service in another namespace: the pod can access the ExternalName service as if it were a local service.

Manifest example:

apiVersion: v1
kind: Service
metadata:
  name: my-externalname-service
spec:
  type: ExternalName
  externalName: my.database.domain.com
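
From a pod inside the cluster, the service name should now resolve to a CNAME for the external domain. A quick check from any pod that has nslookup available:

nslookup my-externalname-service

The answer should include a canonical name entry for my.database.domain.com rather than a cluster IP.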

Discovering Kubernetes Services

There are two methods by which components in a Kubernetes cluster can discover a service:

  • DNS: when DNS is enabled, a DNS server is added to the Kubernetes cluster. This server watches the Kubernetes API and creates a DNS record for every new service. This allows all pods in the cluster to resolve services by name.
  • Environment variables: the kubelet adds environment variables for each active service to every pod running on a node (see the example below). This allows pods to discover services. However, this method only works if the service was created before the pods that use it; otherwise the required environment variables will not exist.
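
As a concrete illustration of the environment variable method, a service named my-service exposing port 80 results in variables like the following inside pods started after the service was created (the variable names are derived from the service name, upper-cased with dashes converted to underscores; the IP shown is illustrative):

kubectl exec <pod-name> -- env | grep MY_SERVICE

MY_SERVICE_SERVICE_HOST=10.96.45.101
MY_SERVICE_SERVICE_PORT=80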

Kubernetes Headless Services

In some cases, services do not require a clusterIP. You can create a “headless service” by setting the spec.clusterIP field of the service manifest to None. For headless services, Kubernetes does not perform load balancing or proxying, and kube-proxy ignores them.
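
Here is a minimal sketch of a headless service manifest; the selector and ports are assumptions that match the nginx examples above:

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080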

There are two ways to set up DNS configuration for a headless service:

  • With selectors: if a selector is defined, the endpoints controller creates Endpoints records in the API and modifies the DNS configuration to return A records that point directly to the pods backing the service.
  • Without selectors: if no selector is defined, the endpoints controller does not create any Endpoints records.

Debugging Kubernetes Services

What are the Possible Problems with a Service?

Assume you deployed pods in the cluster and set up a service that is supposed to route traffic to them. If a client attempts to access a pod and fails, this could indicate a number of problems with your service or the underlying pods:

  • Service does not exist
  • Service does not have the expected DNS name
  • DNS is not working in the cluster in general
  • Service is not defined correctly
  • Service does not map correctly to your pods
  • Pods are not working or unstable
  • There is a kube-proxy error on your nodes

How Can You See What a Pod Sees?

When debugging service issues, you will want to see the cluster from the perspective of a pod mapped to your service. There are two ways to do this:

Run a busybox pod

Here is how to run a BusyBox pod in your cluster. The open source BusyBox project lets you run many common Linux utilities in one tiny executable:

kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh

Bash into an existing pod

If you have a running pod that is supposed to be associated with your service, use the following command to execute shell commands in a container running on that pod:

kubectl exec <pod-name> -c <container-name> -- <commands to execute>
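
For example, to check DNS resolution of a service from inside an existing pod (the pod and container names are placeholders, and the container image is assumed to include nslookup):

kubectl exec my-pod -c my-container -- nslookup my-service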

 

Debugging Procedure

The following debugging procedure will help you identify which part of the service communication chain is broken.

In each step, if you encounter an error, stop and fix the problem. If things are working properly, proceed to the next step.

1. Check if the service exists:

kubectl get svc <service-name>

2. Try to access the service via DNS name in the same namespace:

nslookup <service-name>

3. Try to access the service via DNS name in another namespace:

nslookup <service-name>.<namespace>

If this succeeds, it means you need to change the client application to access the service in another namespace, or run it in the same namespace as the service.

4. Check if DNS works in the cluster:

nslookup kubernetes.default

5. Check if the service can be accessed by IP address—run this command from a pod in the cluster:

for i in $(seq 1 3); do
    wget -qO- <ip-of-service>:<service-port>
done

6. Check if the service is defined correctly—the following are common errors in a service manifest:

    • The service port that applications are trying to access is not listed in spec.ports
    • The targetPort defined in the service is different from the port the pods are actually listening on
    • The port is defined as a string instead of a number
    • For a named port, the name specified in the service’s targetPort is not the same as the port name declared by the pods (see the sketch below)
    • The protocol in the service definition is not the same as the protocol used by the pods
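
As an illustration of the named-port case, the service’s targetPort must refer to the exact name declared on the container. The snippets below are a sketch, assuming an nginx pod similar to the earlier examples:

# In the pod template, the container declares a named port:
    ports:
    - name: http-web
      containerPort: 8080

# In the service, targetPort refers to that name instead of a number:
  ports:
  - protocol: TCP
    port: 80
    targetPort: http-web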

7. Check if the labels defined in the service match your pods. Substitute <label> with the label used in the service’s spec.selector (for example, app=nginx):

kubectl get pods -l <label>

If you see that the pods have not been alive for long or have numerous restarts, this could indicate they are unstable and may have been down when the application attempted to access them.

Note: Services can also be defined without a selector. In that case, the service is matched to pods or external backends by manually creating an Endpoints object with the same name as the service, listing static IP addresses. The service then sends traffic directly to those IPs. To troubleshoot such a service, check whether an Endpoints object with the same name as the service exists in the cluster.
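
A minimal sketch of this pattern, assuming an external backend listening at 192.0.2.10:8080 (the IP and port are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-external-backend
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-backend   # must match the service name
subsets:
- addresses:
  - ip: 192.0.2.10
  ports:
  - port: 8080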

8. Check if the service has any endpoints:

kubectl get endpoints <service-name>

This returns the list of pod IP:port pairs matched to the service. If it returns <none>, the service didn’t match any pods. Make sure that exactly the same label is specified in your service definition and in the pod manifest.
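
Illustrative output for a healthy service backed by three pods (the IPs and ports will differ in your cluster):

NAME         ENDPOINTS                                   AGE
my-service   10.0.0.1:9376,10.0.0.2:9376,10.0.0.3:9376   3d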

9. Check if the pods accessed on the endpoint IPs are working:

for ep in 10.0.0.1:9376 10.0.0.2:9376 10.0.0.3:9376; do
    wget -qO- $ep
done

Note: If pods don’t exist, this doesn’t necessarily mean there is a problem. In some cases it is acceptable that no pods are currently available; the service should then return an error such as 503 to set the right expectations with the client.

Related content: Read our guide to Kubernetes Service 503 Error (coming soon)

10. Check for a kube-proxy error. In the default implementation of a service, the kube-proxy mechanism that runs on every Kubernetes node is responsible for implementing the service abstraction. If you have gotten this far, the problem could be a malfunction in kube-proxy.

See the Kubernetes documentation for more details on debugging kube-proxy issues.
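
A few quick starting points (the exact label and log location depend on how your cluster was installed, so treat these as a sketch):

# Confirm kube-proxy is running on the node
ps auxw | grep kube-proxy

# In many clusters, kube-proxy runs as a DaemonSet in the kube-system namespace
kubectl -n kube-system get pods -l k8s-app=kube-proxy
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50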

This procedure covers only the simplest cases of a Kubernetes service malfunction. In some cases, there could be an issue in multiple parts of the service communication chain, or other moving parts in the cluster that contribute to the error. These more complex cases are very difficult to diagnose and resolve without specialized tools—and that’s where Komodor comes in.

Kubernetes Troubleshooting with Komodor

The troubleshooting process in Kubernetes is complex and, without the right tools, can be stressful, ineffective and time-consuming. Some best practices can help minimize the chances of things breaking down, but eventually something will go wrong – simply because it can.

This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.

Acting as a single source of truth (SSOT) for all of your k8s troubleshooting needs, Komodor offers:

  • Change intelligence: Every issue is a result of a change. Within seconds we can help you understand exactly who did what and when.
  • In-depth visibility: A complete activity timeline, showing all code and config changes, deployments, alerts, code diffs, pod logs, and more. All within one pane of glass with easy drill-down options.
  • Insights into service dependencies: An easy way to understand cross-service changes and visualize their ripple effects across your entire system.
  • Seamless notifications: Direct integration with your existing communication channels (e.g., Slack) so you’ll have all the information you need, when you need it.
 
