Kubernetes is the de facto container-management technology in the cloud world due to its scalability and reliability. It also provides a very flexible and developer-friendly API, which is the foundation of its control plane.
The effectiveness of the Kubernetes API comes from how it manages the Kubernetes resources via metadata: labels and annotations. Metadata is essential for grouping resources, redirecting requests, and managing deployments. In addition, it is also used to troubleshoot Kubernetes applications.
In this blog post, you will learn the basics and best practices of using labels and annotations.
Kubernetes labels are metadata attached to Kubernetes resources so that you can group, view, and operate on them. Labels are key-value string pairs, where each key must be unique for a given resource.
Let’s take a look at them in action:
```shell
$ kubectl get node minikube -o json | jq .metadata.labels
{
  "beta.kubernetes.io/arch": "amd64",
  "beta.kubernetes.io/os": "linux",
  "kubernetes.io/arch": "amd64",
  "kubernetes.io/hostname": "minikube",
  "kubernetes.io/os": "linux",
  "minikube.k8s.io/commit": "a03fbcf166e6f74ef224d4a63be4277d017bb62e",
  "minikube.k8s.io/name": "minikube",
  "minikube.k8s.io/updated_at": "2021_08_24T15_22_19_0700",
  "minikube.k8s.io/version": "v1.22.0",
  "node-role.kubernetes.io/control-plane": "",
  "node-role.kubernetes.io/master": "",
  "node.kubernetes.io/exclude-from-external-load-balancers": ""
}
```
In the previous command, you retrieved the labels of the minikube node, which include information related to the operating system, hostname, and the minikube version running on the node. You can use the labels for retrieving and filtering the data from the Kubernetes API.
Let’s assume you want to get all the pods running the Kubernetes dashboard. You can use the selector k8s-app=kubernetes-dashboard over labels with the following command:
```shell
$ kubectl get pods -n kubernetes-dashboard --selector k8s-app=kubernetes-dashboard
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-6fcdf4f6d-7ks9d   1/1     Running   0          35m
```
The hidden gem of Kubernetes labels is that Kubernetes itself uses them heavily, for example to schedule pods to nodes, manage the replicas of deployments, and route network traffic to services.
Let’s look at some labels and how they are used as selectors in Kubernetes by checking the spec of the kubernetes-dashboard service:
```shell
$ kubectl -n kubernetes-dashboard get svc kubernetes-dashboard -o json | jq .spec
{
  "clusterIP": "10.109.105.207",
  "clusterIPs": [
    "10.109.105.207"
  ],
  "ipFamilies": [
    "IPv4"
  ],
  "ipFamilyPolicy": "SingleStack",
  "ports": [
    {
      "port": 80,
      "protocol": "TCP",
      "targetPort": 9090
    }
  ],
  "selector": {
    "k8s-app": "kubernetes-dashboard"
  },
  "sessionAffinity": "None",
  "type": "ClusterIP"
}
```
Kubernetes uses the labels defined in the selector section to distribute the incoming requests to the kubernetes-dashboard service. With a similar approach, replica sets track the number of pods to maintain replicas running on the cluster. Now let’s check the selector of the replica set for the dashboard:
```shell
$ kubectl -n kubernetes-dashboard get replicasets kubernetes-dashboard-6fcdf4f6d -o json | jq .spec.selector
{
  "matchLabels": {
    "k8s-app": "kubernetes-dashboard",
    "pod-template-hash": "6fcdf4f6d"
  }
}
```
The matchLabels field tells the replica set controller to keep enough pods with these labels running in the cluster. When you release a new version, the deployment generates a new pod-template-hash, and the replica set controller creates new pods to match it.
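Labels can also be managed imperatively with kubectl. A quick sketch (the pod name demo and the environment key are assumptions for illustration; these commands need a live cluster):

```shell
# Add a label to a running pod (pod name "demo" is hypothetical):
kubectl label pod demo environment=staging

# Change an existing label; --overwrite is required to replace a value:
kubectl label pod demo environment=production --overwrite

# Remove a label by suffixing its key with a dash:
kubectl label pod demo environment-
```

Because changing labels on live pods also changes what selectors match, prefer declarative manifests for anything beyond quick experiments.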
Kubernetes annotations are the second way of attaching metadata to the Kubernetes resources. They are pairs of key and value strings that are similar to labels, but which store arbitrary non-identifying data. For instance, you can keep the contact details of the responsible people in the deployment annotations. Similarly, you can attach logging, monitoring, or auditing information for the resources in the annotations format.
The main difference between annotations and labels is that annotations are not used to filter, group, or operate on the resources. Rather, they are used to easily access additional information about the Kubernetes resources.
For instance, in the following example, the CRI socket and volume controller annotations describe how the node operates rather than identifying its characteristics:
```shell
$ kubectl get nodes minikube -o json | jq .metadata.annotations
{
  "kubeadm.alpha.kubernetes.io/cri-socket": "/var/run/dockershim.sock",
  "node.alpha.kubernetes.io/ttl": "0",
  "volumes.kubernetes.io/controller-managed-attach-detach": "true"
}
```
Client tools and Kubernetes users can retrieve this metadata and operate accordingly. You can think of annotation data as something that could just as well live in a spreadsheet or database, except that it is attached directly to the resources. Consequently, the Kubernetes API provides no selector mechanism for annotations as it does for labels.
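Annotations can likewise be set and read with kubectl. A hedged sketch (the pod name demo and the example.com/on-call key are made-up examples; a live cluster is required):

```shell
# Attach an annotation to a pod (key and value are illustrative):
kubectl annotate pod demo example.com/on-call=alice

# Replace the value; --overwrite is required, just as with labels:
kubectl annotate pod demo example.com/on-call=bob --overwrite

# Read a single annotation back; dots in the key are escaped in jsonpath:
kubectl get pod demo -o jsonpath='{.metadata.annotations.example\.com/on-call}'

# Remove the annotation by suffixing the key with a dash:
kubectl annotate pod demo example.com/on-call-
```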
Now that we’ve covered the fundamentals of Kubernetes labels and annotations, it’s time to explore the best practices for using them most beneficially.
Annotations and labels are key-value pairs. Keys consist of two parts: an optional (but highly recommended) prefix and a name, separated by a slash:
k8s.vague-comma.flywheelstaging.com/
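As a rough illustration, the rules for the name part (at most 63 characters, alphanumeric at both ends, with dashes, underscores, and dots in between) plus an optional DNS-style prefix can be sketched as a small shell check. This is a simplified approximation for illustration, not the exact validation the Kubernetes API server performs:

```shell
# Simplified check of a label/annotation key: optional DNS-subdomain
# prefix ending in "/", then a name of 1-63 allowed characters.
valid_label_key() {
  echo "$1" | grep -Eq '^([a-z0-9]([a-z0-9.-]*[a-z0-9])?/)?[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$'
}

valid_label_key "app.kubernetes.io/version" && echo valid   # prints: valid
valid_label_key "-bad-key" || echo invalid                  # prints: invalid
```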
When the prefix is omitted, the label or annotation is considered private to your cluster and its users. When a prefix and name are used together, the key is intended to be shared between multiple clients and tools, as in the following examples:
app.kubernetes.io/version
app.kubernetes.io/component
helm.sh/chart
Using the correct syntax for labels and annotations makes it easier to communicate within your team and use the cluster with client tools and libraries such as kubectl, Helm, and operators. Therefore, it is suggested to choose a prefix for your company and sub-prefixes for your projects. This company-wide consensus will help you utilize labels and annotations to their full power.
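For example, a company-wide convention might look like the following fragment (the acme.example.com prefix and every key underneath it are hypothetical placeholders, not established conventions):

```yaml
metadata:
  labels:
    acme.example.com/team: payments          # company prefix + owning team
    acme.example.com/cost-center: "cc-1234"  # quoted so YAML keeps it a string
  annotations:
    billing.acme.example.com/owner: alice@acme.example.com  # project sub-prefix
```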
As mentioned earlier, the main difference between labels and annotations is whether they are identifiers or not. If you want to attach information to group resources and filter, you should keep the data as labels. Use annotations if the metadata is not an identifier, but rather additional data related to the Kubernetes resources.
For instance, the following pod has two labels and two annotations:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    environment: production
    app: nginx
  annotations:
    vague-comma.flywheelstaging.com/owner: alice
    vague-comma.flywheelstaging.com/owner-phone: "911"
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```
In the demo pod, labels classify it as being an nginx application running in production. Annotations show the owner and communication data. If you plan to group pods by owners in the future, it is suggested to move vague-comma.flywheelstaging.com/owner to labels.
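If ownership does become a grouping dimension, the adjusted metadata might look like this sketch, keeping the phone number, which is not an identifier, as an annotation:

```yaml
metadata:
  name: demo
  labels:
    environment: production
    app: nginx
    vague-comma.flywheelstaging.com/owner: alice   # now usable in selectors
  annotations:
    vague-comma.flywheelstaging.com/owner-phone: "911"
```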
Using labels and annotations with the correct use cases is vital to have an easy-to-operate cluster with automated tools. Therefore, ensure that your labels and annotations are not overlapping in terms of data and usage.
Kubernetes reserves all labels and annotations whose keys use the kubernetes.io domain and maintains a list of well-known ones in the official documentation. You may have seen some of them in the Kubernetes dashboard or in resource definitions, such as:
```yaml
labels:
  app.kubernetes.io/name: label-pod
  app.kubernetes.io/instance: test-1a
  app.kubernetes.io/version: "1.1.0"
  app.kubernetes.io/component: test
  app.kubernetes.io/managed-by: helm
```
The main advantage of this metadata is that the Kubernetes machinery automatically fills values of the standard labels and annotations. Thus, it is suggested to use the well-known labels and annotations in your daily operations and client tools, such as Helm, Terraform, or kubectl.
Releasing distributed microservices applications to the cloud is not straightforward, as you have a very large number of small applications, each with its own version. In practice, developers usually change the version of just one application out of hundreds and test it against the rest of the system. Fortunately, you can use labels to group and filter the applications running on Kubernetes.
Let’s assume you have a backend service that has multiple pods running behind it with the labels version:v1 and app:backend. You can deploy a new set of backend instances to the cluster and change the service label selector to version:v2 and app:backend. Now, all requests coming to the backend service will reach v2 instances. Luckily, switching back to v1 is pretty easy, as you only need to change the service specification.
This procedure is also known as the Blue/Green deployment strategy. In addition, you can easily implement A/B testing and canary release strategies with the help of Kubernetes labels.
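The traffic switch itself is a one-line change in the Service spec. A sketch, using the hypothetical backend example above (the Service name and port numbers are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
    version: v2   # was v1; changing this line redirects traffic to the v2 pods
  ports:
  - port: 80
    targetPort: 8080
```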
The last best practice is for the Kubernetes operators who need to debug applications running inside the cluster. Let’s assume you have a deployment with the following selector labels:
app.kubernetes.io/name: my-complex-app
app.kubernetes.io/instance: prod-1
app.kubernetes.io/version: "1.1.0"
All pods of the deployment will also carry the same set of labels. You cannot modify the deployment's pods through their template, but you can change the labels of a running pod so that it no longer matches the selector. The pod becomes orphaned, and you can exec into it for debugging.
Kubernetes will create a replacement pod with the correct labels, so your production setup keeps running as expected, with one extra pod that you can analyze further for troubleshooting. When you know how labels are designed and used by Kubernetes, you can step into its operations and troubleshoot your applications.
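A sketch of this relabeling trick (the pod name and hash suffix are placeholders; on a real cluster they will differ, and these commands need a live cluster):

```shell
# Pick one pod of the deployment (name is hypothetical):
POD=my-complex-app-7d4b9c6f5-x2x9z

# Change a selector label so the replica set disowns this pod;
# the deployment immediately creates a replacement with the original labels:
kubectl label pod "$POD" app.kubernetes.io/name=my-complex-app-debug --overwrite

# The orphaned pod keeps running; exec into it to investigate:
kubectl exec -it "$POD" -- /bin/sh
```

When you are done, delete the orphaned pod manually, since no controller owns it anymore.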
In this blog post, we covered the fundamentals of Kubernetes labels and annotations through examples and best practices that are essential to bringing the power of metadata tools to light. Using the correct syntax with the intended aim will make your labels and annotations more meaningful and maintainable. In addition, you can exploit the standard labels of Kubernetes with prepopulated data in your applications. Finally, labels are helpful for cloud-native release management and application debugging.
In order to gain overall control and visibility into your Kubernetes clusters, check out Komodor and our Kubernetes-native troubleshooting solution. This will simplify the complex and distributed environment of Kubernetes and help you understand what is actually happening in your clusters.
Sign up for a free trial to see how troubleshooting intelligently while leveraging your existing stack can make a difference.