With the rise of cloud computing, containerization, and microservices architecture, developers are adopting new approaches to building and deploying applications that are more scalable and resilient. Microservices architecture, in particular, has gained significant popularity due to its ability to break down monolithic applications into smaller, independent services.
Go is a great language choice for microservices because Go applications can handle many requests concurrently using goroutines and channels. To deploy a Go application as a microservice, you containerize it with a tool like Docker and then deploy it with a container orchestration platform like Kubernetes.
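As a concrete illustration of that concurrency model, Go's standard net/http server runs every incoming request in its own goroutine, so a few lines of code give you a service that stays responsive under parallel load. The snippet below is a minimal sketch of this idea; the route and message are illustrative and not taken from the tutorial's repository.

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // net/http spawns a new goroutine for each incoming request,
    // so slow requests don't block other clients.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from a concurrent Go handler")
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}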
In this tutorial, you’ll learn how to build a Golang microservice, package it as a Docker container, deploy it on Kubernetes, and monitor the deployment using Komodor.
You need the following prerequisites to follow along in this tutorial: Go and Git installed locally, Docker and a Docker Hub account, a Kubernetes cluster with kubectl configured to access it, Helm, and a Komodor account (free to create).
If you don’t already have a Kubernetes cluster set up, check out Komodor’s Guide to Getting Started with Kubernetes for step-by-step instructions.
Since you’ll be building the Docker image from code, you can obtain the code from this GitHub repo. Run the command below to clone the repo:
git clone https://github.com/vicradon/shopping-cart-microservice.git
The microservice we’ll be working with in this tutorial is a shopping cart API application with six routes. If you change directory into the application’s folder and start it with the commands below, the app will be available on port 8080. You can then interact with the microservice via a tool like Postman to perform CRUD operations on cart items.
cd shopping-cart-microservice
go run main.go
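For a rough idea of what such a route looks like in Go, here is a hedged sketch of a read-only cart endpoint built with the standard library. The CartItem fields, the /items path, and the in-memory slice are illustrative assumptions and may not match the handlers in the repo.

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// CartItem is a hypothetical shape for an item in the cart;
// the repo's actual model may differ.
type CartItem struct {
    ID       string  `json:"id"`
    Name     string  `json:"name"`
    Quantity int     `json:"quantity"`
    Price    float64 `json:"price"`
}

// items is an in-memory stand-in for whatever storage the real service uses.
var items = []CartItem{{ID: "1", Name: "sample item", Quantity: 2, Price: 9.99}}

// listItems writes the current cart contents as JSON.
func listItems(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(items)
}

func main() {
    http.HandleFunc("/items", listItems)
    log.Fatal(http.ListenAndServe(":8080", nil))
}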
What comes after building your application? Deployment!
To prepare the Golang microservice for deployment, you need to build a Docker image from the application code and push this image to a container registry. You can achieve this using the docker build and docker push commands. It’ll be helpful to tag your image for easy versioning. For this tutorial, I’m assuming you’re pushing to Docker Hub, which means you need to tag your image with your username. For example, if your username is johndoe, tag your image as johndoe/shopping-cart-microservice.
Run the command below to build the image:
docker build -t <your username>/shopping-cart-microservice .
After building your image, test it locally on port 4000 by running the command below:
docker run -p 4000:8080 <your username>/shopping-cart-microservice
If you navigate to http://localhost:4000, you should see Shopping Cart Microservice displayed on the screen.
To push the image, first authenticate the Docker CLI by running:
docker login
After completing the authentication, run:
docker push <your username>/shopping-cart-microservice
To deploy your microservice to Kubernetes, you need to create a deployment and a service. The deployment specifies how many replicas of your service run at a time, and the service exposes those replicas to traffic from outside the cluster.
In the repo, you’ll find a shopping-cart.yaml file that contains the deployment and service specifications. You can also find the file content in the snippet below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shopping-cart-microservice
  template:
    metadata:
      labels:
        app: shopping-cart-microservice
    spec:
      containers:
        - name: shopping-cart-container
          image: vicradon/shopping-cart-microservice:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: shopping-cart-service
spec:
  selector:
    app: shopping-cart-microservice
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30000
  type: NodePort
The manifest above specifies the deployment and service for the shopping cart. The deployment defines a container built from the vicradon/shopping-cart-microservice:latest image and runs three replicas of it, which you can increase or decrease to scale the app. The shopping-cart-service service forwards traffic directed at port 80 to port 8080 on the pods. The nodePort field specifies that the service is exposed on a static port on each worker node in the cluster (in this case, port 30000). You can access the service from outside the cluster using a node’s IP address and port 30000.
Remember to replace vicradon/shopping-cart-microservice:latest with the name of the image you pushed to Docker Hub.
Then run the kubectl command to apply the manifest to your Kubernetes cluster.
kubectl apply -f shopping-cart.yaml
The previous command should give the following output:
deployment.apps/shopping-cart-deployment created
service/shopping-cart-service created
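To confirm the NodePort exposure described above, look up one of your nodes’ IP addresses and send a request to port 30000. The commands below are one way to check this; which IP column you use depends on where your cluster runs, and the placeholder is yours to fill in:

kubectl get nodes -o wide
curl http://<node ip>:30000

If your cluster runs on minikube, minikube service shopping-cart-service --url prints a URL that is reachable from your machine instead.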
If you want to scale the deployment manually using kubectl, all you need to do is edit the manifest and reapply it. Assuming your shopping cart app hasn’t gotten much traction and you want to scale down, you can edit the file to the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shopping-cart-microservice
…
Then run kubectl apply -f shopping-cart.yaml again to apply the change.
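If you’d rather not edit the file for a quick change, kubectl can also scale a deployment directly; the one-liner below has the same effect as the manifest edit above:

kubectl scale deployment shopping-cart-deployment --replicas=1

Keep in mind that a later kubectl apply -f shopping-cart.yaml will set the replica count back to whatever the file says, so update the manifest if you want the change to stick.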
Now that your application has been deployed to your Kubernetes cluster, you can sit back and watch it work for your users. However, you should still monitor your app so that you can catch downtime and address issues that may affect your application’s health.
There are several options for monitoring Kubernetes services. One that’s easy to set up and get moving quickly with is Komodor, a dev-first platform for monitoring the health of services on Kubernetes.
You can install the Komodor suite on your Kubernetes cluster using Helm, or you can install it directly on Mac or Linux. This tutorial installs Komodor using Helm.
Install Helm, then apply the Helm chart to your cluster. Sign up for a Komodor account, if you haven’t already, to allow Komodor to show you the metrics of your Kubernetes environment.
As part of your sign-up process, you can either input a team name of your choice or choose one of the defaults.
When you complete sign-up, you’ll be required to set up Komodor locally. Since you’re using Helm, copy the command to install via Helm and paste it in a terminal that has access to your Kubernetes cluster.
Note that Helm Dash, Komodor’s open-source dashboard for Helm, is now part of Komodor’s commercial suite. Its chart visualization is going to make your next leadership meeting presentation super easy.
You should see a shopping-cart-deployment service on your dashboard.
Click on the service, and you’ll see a dashboard with different views into the deployment.
Notice the Scale button at the top of the Deployments page. Say your app doesn’t have much traffic right now, and you want to reduce the number of instances from 3 to 1. Click Scale and reduce the replica amount.
The scale command registers a new event, which should appear on your dashboard.
Thanks to its light weight, portability, and concurrency capabilities, Go is a popular language for containerized applications (Kubernetes itself was in fact built with Golang). Packaging your Golang microservices into containers and deploying them using Kubernetes means you can take advantage of its self-healing, auto-scaling, and load-balancing features. But deploying Go applications to Kubernetes is just your first move toward leveraging this massive container orchestration system effectively; monitoring your Kubernetes services is an important next step.
Komodor is a great tool for monitoring, operating, and troubleshooting the health of your services on Kubernetes. Installed in your Kubernetes environment, Komodor collects logs and metrics from the Kubernetes API and displays them in an easy-to-use dashboard. Try Komodor for free to quickly identify and resolve issues before they impact your users, or join our Slack Kommunity to learn more.