The need for automation grows by the day. Integrating newly written code with code that is already running, and publishing new code to live environments, is error-prone; static analysis, running tests, packaging, and versioning all demand a lot of manual effort.
Deploying the projects we develop to more than one environment, on more than one machine, is also a complex task without automation.
This is where continuous integration/continuous delivery (CI/CD) can help. CI is the automatic integration of newly written code with existing code. This includes static analysis of the written code, unit/integration tests, versioning, and packaging.
CD, on the other hand, focuses on distributing the new packages created during the CI phase to various environments. Say we have more than one environment and more than one machine: CD tools determine which applications should run on which machine in which environment, as well as how to update them when the environment or machine information changes.
GitOps is an approach where you can manage everything as git commits. Do you want to create a new version? Just send a new commit that includes your code, and that commit will trigger the CI/CD pipeline.
By applying GitOps in your CI/CD process, you can manage your deployments just by sending a new commit to your application.
GitOps is not limited to applications. It also lets you provision infrastructure and manage Terraform scripts with the same approach; this way, you can trace your infrastructure changes historically.
ArgoCD, a CNCF incubating project, defines itself as a continuous deployment project for Kubernetes. Let’s first review its core features.
ArgoCD has a built-in UI where you can watch/edit your deployments, resources, etc.
The automatic deployment feature enables you to deploy your application’s new versions without any manual actions required.
ArgoCD supports multiple templating tools for managing Kubernetes resources, such as Helm, Kustomize, and Jsonnet. It also integrates with other tools via its plugin-based config management system.
The ArgoCD command line interface lets you interact with your projects, resources, etc. You can even roll back your deployment to previous versions.
You can define various hooks for your project on ArgoCD to perform actions before or after a deployment. For example, resource hooks let you send notifications after a deployment succeeds or fails.
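For example, a minimal sketch of a PostSync hook; the Job, its image, and the webhook URL here are hypothetical, while the annotations are ArgoCD's documented hook mechanism:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: deploy-notification
  annotations:
    # Run this Job after the application sync completes
    argocd.argoproj.io/hook: PostSync
    # Delete the Job once it has succeeded
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - name: notify
          image: curlimages/curl
          # Hypothetical webhook endpoint for the notification
          args: ["-X", "POST", "https://hooks.example.com/deploy-finished"]
      restartPolicy: Never
```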
ArgoCD exposes various Prometheus metrics for your application, like its health and sync status, that you can then visualize on Grafana.
Built-in Dex support allows you to integrate with many OIDC providers, as well as LDAP, SAML, and more.
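As a sketch, here is a Dex GitHub connector configured in the argocd-cm ConfigMap; the URL, client ID, and organization name are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  url: https://argocd.example.com
  dex.config: |
    connectors:
      - type: github
        id: github
        name: GitHub
        config:
          clientID: <your-oauth-app-client-id>
          # References a key stored in the argocd-secret Secret
          clientSecret: $dex.github.clientSecret
          orgs:
            - name: <your-github-org>
```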
ArgoCD has a set of generators, available through ApplicationSets. With the help of the cluster generator, you can define multiple target clusters to deploy your application to, based on labels.
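A sketch of an ApplicationSet using the cluster generator; the env label and the application details are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: nginx-multi-cluster
  namespace: argocd
spec:
  generators:
    # Select every cluster registered in ArgoCD that carries this label
    - clusters:
        selector:
          matchLabels:
            env: staging
  template:
    metadata:
      name: 'nginx-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/mstrYoda/argocd-example.git
        path: application
        targetRevision: HEAD
      destination:
        server: '{{server}}'
        namespace: default
```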
ArgoCD defines itself as a “Declarative GitOps tool” that lets users build their CD process via Git. There is no need for an imperative action while using ArgoCD.
ArgoCD continuously watches the source repositories that you provide in a declarative way and updates the corresponding Kubernetes resources.
Installing ArgoCD to a Kubernetes cluster is quite easy. First, create a namespace for ArgoCD:
```shell
kubectl create namespace argocd
```
Then, deploy the ArgoCD manifests:
```shell
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
This creates all of the ArgoCD components in the argocd namespace.
Now, log into the Argo server.
At this stage, we will forward the ArgoCD server port to our localhost so that we can access it locally, using kubectl port-forward.
After that, we retrieve the initial admin secret of the Argo server, which we can use to log in via the ArgoCD CLI:
```shell
kubectl port-forward svc/argocd-server -n argocd 8080:443
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
argocd login --name local localhost:8080
```
Now, we’re ready to deploy our first application.
Let’s say we have a simple API application we want to access on Kubernetes. We can create a Deployment and a Service resource on Kubernetes to run our application Pod.
In our example, we'll use nginx as the application. We will put the two resources into a folder named application. The Deployment resource runs a Pod that contains the application container, and the Service resource provides access to our application inside the Kubernetes network.
Here is our Deployment resource in the application/deployment.yaml file, shown below as a minimal sketch (the exact manifest in the example repository may differ):
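```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.23
          ports:
            - containerPort: 80
```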
And our Service resource in application/service.yaml, again as a minimal sketch:
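```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```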
There are different ways to create an application on ArgoCD. We can use the ArgoCD CLI, we can use the UI, or we can create ArgoCD manifests.
We will create two manifest files for ArgoCD. The AppProject resource specifies a list of source repositories and allowed resource types that ArgoCD can create on Kubernetes. A minimal sketch (the project name demo-project is our own choice):
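```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: demo-project
  namespace: argocd
spec:
  description: Project for the nginx example application
  # Repositories that applications in this project may pull from
  sourceRepos:
    - https://github.com/mstrYoda/argocd-example.git
  # Clusters and namespaces that applications may deploy into
  destinations:
    - server: https://kubernetes.default.svc
      namespace: default
  # Namespaced resource kinds ArgoCD is allowed to create
  namespaceResourceWhitelist:
    - group: apps
      kind: Deployment
    - group: ""
      kind: Service
```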
The ArgoCD Application resource specifies our application that will be tracked and deployed by ArgoCD. A sketch mirroring the CLI command shown further below:
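```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx
  namespace: argocd
spec:
  project: demo-project
  source:
    repoURL: https://github.com/mstrYoda/argocd-example.git
    path: application
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    # Sync automatically and revert manual drift in the cluster
    automated:
      selfHeal: true
```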
We can create the same application via the CLI using the following command:
```shell
argocd app create nginx \
  --repo https://github.com/mstrYoda/argocd-example.git \
  --path application \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy auto \
  --self-heal
```
Now, let’s navigate to http://localhost:8080 on our browser to log in to the ArgoCD UI.
Use the admin secret we used to log in to the ArgoCD CLI.
Go to the Settings tab, and select Projects to create or view our projects in ArgoCD:
Figure 1: Settings tab in the ArgoCD UI
Under Projects, we can see our application has been created:
Figure 2: Application created
Now, let’s navigate to the Applications page and see our application.
You can see that the application’s status is Synced, which means that we made our first deployment successfully to Kubernetes:
Figure 3: Application is synced
You can view more details by clicking on the application, including created Kubernetes resources:
Figure 4: View more details about the application
Remember that if we make any change to the deployment or service resources on our Git repository, ArgoCD will detect those changes and automatically update the Kubernetes resources.
Congratulations, you’ve installed your first application with ArgoCD.
Flux v2 is an open-source, CNCF-incubating set of continuous and progressive delivery solutions for Kubernetes that originated at a company called Weaveworks. We're focusing specifically on v2 of Flux because v1 is effectively deprecated: only critical updates and bug fixes will be made for it, and the community now invests in Flux v2, which is under active development. The two versions are also entirely different and incompatible.
If you’re using v1, you can look at this migration documentation to learn how to upgrade to v2.
Flux v2 is a GitOps operator composed of a bunch of controllers, generally referred to as the GitOps Toolkit, that leverage CRDs (Custom Resource Definitions) to provide building blocks to define sources, customizations, and notifications:
Figure 5: GitOps Toolkit (Source: Flux)
“Flux in Short” gives you the complete list of core features supported in Flux v2. From our perspective, the following are some of the crucial ones.
Flux can manage resources outside of Kubernetes, which is crucial for managing infrastructure built on GitOps principles.
As sources, Flux supports S3-compatible buckets as well as all the major Git providers, including GitHub, GitLab, and Bitbucket.
Flux can automatically update container images and commit those updates back to Git on your behalf, with the help of the image-reflector-controller and image-automation-controller.
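A sketch of the relevant definitions, an ImageRepository to scan a registry and an ImagePolicy to select tags; the API version and the semver range are assumptions that may vary with your Flux release:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  # Registry image to scan for new tags
  image: ghcr.io/stefanprodan/podinfo
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: podinfo
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: podinfo
  # Pick the latest tag within the 6.x semver range
  policy:
    semver:
      range: 6.x
```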
Flux is compatible with all versions of Kubernetes and all widely used Kubernetes tooling because of its native design that relies on CRDs.
The Flux team publishes security-related blog posts to keep users informed. Two critical measures have been taken by the team: signing all Flux controllers using cosign, one of the tools provided by Sigstore, and generating SBOMs in SPDX format using Syft, one of the tools provided by Anchore, to make Flux more transparent.
Since GitOps is at the heart of Flux, we will follow GitOps principles when working with it. Flux v2 even installs itself the GitOps way: its CLI publishes all Flux v2 manifests to a Git repository and then runs reconciliations to keep the components in sync with the actual state of the Kubernetes cluster, achieving the desired state declared in the Git repository. You'll understand this better in a moment.
To get started with Flux, you’ll need a Kubernetes cluster, for which we’ll use KinD, and a hosted Git provider to store our source code, for which we’ll use GitHub; you will also need to have the Flux CLI installed.
Let’s first launch our local development cluster with Kind:
```shell
kind create cluster
```
Next, let's install Flux into it. To accelerate and simplify bootstrapping the Flux components, Flux added a bootstrap command to its CLI. The CLI is essential for getting started with Flux; to install it, visit Flux's own documentation. On macOS, you can simply install the Flux CLI through the brew package manager:
```shell
brew install fluxcd/tap/flux
```
One last thing: You have to create a personal access token to let Flux create a repository and commit the necessary changes back to Git:
```shell
export GITHUB_TOKEN=<token>
export GITHUB_USER=<username>
```
The Flux CLI provides a check command to verify whether you have everything in place to run Flux:
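```shell
flux check --pre
```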
If everything checks out, you can move on with the Flux components installation; the bootstrap command will take care of this:
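For example, bootstrapping against the GitHub repository and path used in the rest of this walkthrough (the repository name and branch are our assumptions):

```shell
flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=flux-gotk \
  --branch=main \
  --path=./clusters/my-cluster \
  --personal
```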
So, the bootstrap command above did the following: it created the Git repository if it didn't already exist, committed the Flux component manifests to it under the given path, installed the Flux controllers into the flux-system namespace, and configured the cluster to synchronize with the repository.
Even if the output says everything is properly configured and installed, you can run the check command one more time, this time without the --pre flag:
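```shell
flux check
```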
Congratulations, you’ve installed Flux v2 into your Kubernetes cluster successfully. Now, let’s move on with the deployment of a sample application to see how to leverage GitOps principles with Flux.
In this section, we’ll deploy the podinfo application, one of the popular example repositories used to demonstrate GitOps use cases with Flux.
In GitOps, we declare and store everything in Git as the source of truth for the desired state. This means that if there is any change, we edit the necessary files accordingly and commit them right back to Git to trigger the reconciliation of the actual state toward the desired state. Based on that, we start by cloning the GitHub repository:
```shell
git clone https://github.com/developer-guy/flux-gotk
cd flux-gotk
```
As the bootstrap command helped us install the Flux components into the cluster, the create command will let us create the necessary manifest files that include CRDs to be recognized by Flux GitOps Toolkit controllers.
So let’s create a GitRepository, which is one of the source definitions available in Flux. The following command creates a Flux manifest pointing to the podinfo repository’s master branch:
```shell
flux create source git podinfo \
  --url=https://github.com/stefanprodan/podinfo \
  --branch=master \
  --interval=30s \
  --export > ./clusters/my-cluster/podinfo-source.yaml
```
Now, commit and push this change:
```shell
git add -A && git commit -m "Add podinfo GitRepository"
git push
```
Once you push this change, Flux will immediately recognize that change and create the necessary resource, a GitRepository in this case. You should see the podinfo in the GitRepository resources list:
```shell
$ kubectl get gitrepositories.source.toolkit.fluxcd.io --namespace flux-system
NAME          URL                                            AGE   READY   STATUS
flux-system   ssh://git@github.com/developer-guy/flux-gotk   35m   True    stored artifact for revision 'main/40253e9e13257e5a7ae2847b587226ca06ea4f0f'
podinfo       https://github.com/stefanprodan/podinfo        70s   True    stored artifact for revision 'master/44157ecd84c0d78b17e4d7b43f2a7bb316372d6c'
```
Congratulations, you've created your first resource with Flux. Still, nothing has been deployed to the cluster yet.
If you look at the podinfo repository, you will notice that there is a directory named kustomize that includes all the Kubernetes manifests necessary to deploy the application. So, we will now create a Kustomization that applies the podinfo deployment. Again, we will do that by sticking to GitOps principles, as we did in the previous section:
```shell
flux create kustomization podinfo \
  --target-namespace=default \
  --source=podinfo \
  --path="./kustomize" \
  --prune=true \
  --interval=5m \
  --export > ./clusters/my-cluster/podinfo-kustomization.yaml
```
```shell
git add -A && git commit -m "Add podinfo Kustomization"
git push
```
What we expect to happen is that the application will be deployed with Flux, so we can use the get command to monitor the deployment of the podinfo application:
```shell
$ flux get kustomizations --watch
NAME          REVISION         SUSPENDED   READY   MESSAGE
flux-system   main/4e9c917     False       True    Applied revision: main/4e9c917
podinfo       master/44157ec   False       True    Applied revision: master/44157ec
```
Once it says the revision is applied, double-check by looking at the resources that were deployed:
```shell
$ kubectl -n default get deployments,services
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/podinfo   2/2     2            2           4m51s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP             65m
service/podinfo      ClusterIP   10.96.79.0   <none>        9898/TCP,9999/TCP   4m51s
```
Congratulations, you’ve installed your first application with Flux.
JenkinsX is a cloud-native CI/CD tool that aims to simplify deployments to Kubernetes. It integrates with several open-source tools and uses Tekton pipelines under the hood. Let's review its core features.
You can deploy your application to Kubernetes with GitOps using JenkinsX; it also allows itself to be upgraded in a GitOps way.
You can benefit from integrations with different secret providers like Vault and Google Cloud Secret Manager. It uses Kubernetes External Secrets to keep secrets secure.
JenkinsX comes with Tekton and allows you to create cloud-native pipelines using this tool.
The Lighthouse integration lets you trigger actions from pull request comments. You can launch your pipelines just by writing a comment on a pull request.
To install JenkinsX for the first time on Kubernetes, you need to follow the documentation for your Kubernetes cluster. JenkinsX uses a template repository for installation. You need to clone it and make the corresponding changes.
It installs a Git operator to your cluster, which continuously watches your project’s Git repositories. After making the installation, you can start creating projects.
You also need to install the JenkinsX CLI.
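On macOS, for example, you can install the JenkinsX CLI through Homebrew:

```shell
brew install jenkins-x/jx/jx
```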
JenkinsX has a dashboard for viewing the status of your pipelines:
Figure 6: JenkinsX Dashboard (https://jenkins-x.io/v3/develop/ui/dashboard/)
Figure 7: Further details offered by the JenkinsX Dashboard
All three tools discussed can achieve the same functionality. Some require more work than others, but in the end, you can accomplish what you want with any of them.
All of them have their own advantages and disadvantages. ArgoCD has a great UI and documentation. On the other hand, FluxCD is getting more and more popular in the cloud-native area. JenkinsX looks less popular, perhaps due to its complex setup process.
One of the main advantages of ArgoCD is that it has a user-friendly interface. The JenkinsX UI is quite handy for easily accessing your deployments. Unfortunately, FluxCD has no dashboard yet.
You can create notifications for your deployment process using any of the mentioned tools.
FluxCD provides rollouts without requiring any additional tools, which is an important feature. Unfortunately, JenkinsX requires additional work to achieve this.
If you have to deploy your applications to multiple clusters, you will benefit from ArgoCD’s cluster generator and Flux’s multi-tenancy support. However, you’d have to install the JenkinsX operator in all remote clusters for a multi-cluster deployment.
ArgoCD has wonderful documentation, and it’s easy to dive right in. The same is true for Flux. JenkinsX is lacking on the documentation front, at least for beginners, as it might be confusing to set up for the first time. Management and configuration of ArgoCD and FluxCD is also easier than with JenkinsX.
It’s easy to write plugins for ArgoCD, and there are built-in authentication integrations for it.
On the other hand, it’s currently not possible to write external binaries as a plugin for FluxCD or JenkinsX. So if you need to extend your CD tool, you’ll need to find workarounds for these two.
CI and CD cover different steps of an application's lifecycle. CI focuses on integrating new code with existing code, while CD focuses on how to distribute and roll back the packaged code. You may choose one tool that implements both CI and CD processes, or you can opt for two different tools to implement CI and CD separately. In both cases, you need CI and CD together.
An ideal continuous deployment process should include the features discussed above: a usable interface, notifications, automated rollouts and rollbacks, multi-cluster support, solid documentation, and extensibility.
It's hard to say which of the tools discussed you should select, as all three provide many good features; it ultimately depends on your requirements. If a user interface is a must, then you should select ArgoCD or JenkinsX. If you want to make edits on the dashboard, then you should go for ArgoCD. If your tool needs to be part of CNCF, then you should pick ArgoCD or Flux. ArgoCD and FluxCD are easier to quickly get started with, while JenkinsX requires a lot of work upfront.