
ArgoCD for K8s Done Right

Nir Shtein
Software Engineer, Komodor

Assaf Yacobi
Sr. Director of DevOps, Opsfleet

[Beginning of Recorded Material]

Udi: Hi everyone, thank you for joining. We’re going to give a few more minutes for others to join in, and then we will begin. A lot of people joining in, hello Anton, Daniel, Engen. Engen here is from Blooming. Go check him out on Twitter and on TikTok, he just did a really cool TikTok video about our open source project and dashboard. So many people.

I don’t think we’ve ever had that many people joining in live. So yes, welcome. Sid, yes. I think we’re good to go. So hi everyone again, and welcome to the ArgoCD for K8s Done Right webinar. We have our friends from Opsfleet, represented by Assaf Yacobi, a Senior Director of DevOps at Opsfleet. And we have our own Nir Shtein, who is a software engineer and open source contributor, one of the contributors to the Argo project actually.

And together, they will give us an overview of the different tools in the Argo project, and deep dive into the right way to implement GitOps with ArgoCD for Kubernetes. We have two strong players here, with a lot of knowledge to share. So without further ado, I’m going to pass over the stage to Assaf and Nir.

So take it away guys, and we’ll leave some time for Q&A at the end, so you can drop your questions below and then we’ll get around to them at the end of the presentation. So enjoy, and see you at the end.

Assaf: Okay. So thanks Udi, and before we start, let’s talk about what we’re going to talk about, because ArgoCD for Kubernetes done right, that’s a big topic. So what we’re going to do in this webinar is basically, Nir is going to go through the core concepts like what is GitOps, and the different projects under the Argo umbrella which are not ArgoCD.

Then it’s going to come back to me, and I’ll talk a little about an alternative to ArgoCD, which is to not use it. I’m going to focus on the different approaches when setting up your ArgoCD infra, and talk about day zero, stuff to do before you start working with ArgoCD, important stuff. And lastly, a quick demo of ArgoCD, some of the screens.

And then we’ll talk about an application life cycle, how an application usually grows, what decisions to make when an application starts, and then what other decision points we have. Then we’re going to go back to Nir with a Komodor demo, and then Q&A, and that’s it. And before we begin, I’ve got to say something. This is not an ArgoCD getting-started webinar, okay.

We’re not going to focus on diving into the YAMLs, and how you set up your first application and stuff like that. We’re going to point it more at the decisions, the different approaches when setting up ArgoCD, when migrating to GitOps, how you can do it, and what to do and what not to do. And we’re going to assume some ArgoCD familiarity. So you have some experience with ArgoCD, it’s not the first time you see it.

You don’t have to be an expert user, but you do have to know it a little bit. So first things first, let me just introduce myself. I’m Assaf, I’m a Senior Director of DevOps at Opsfleet. We’re a Kubernetes-centric DevOps consultancy. We specialize in providing DevOps leadership and service implementation for all of our clients. And we do it on all of the popular cloud providers that are out there. So, over to you, Nir.

Nir: Thank you, Assaf. So first of all hello to everyone, hello Assaf. Let me introduce myself, my name is Nir Shtein. I’m a software engineer at Komodor, and as Assaf and Udi said, at the end of this webinar, I will show a demo of the Komodor platform. So stay tuned. I also love to contribute to open source projects, to the Argo projects, to other projects, and especially our own project. You can check us out.

So I will do a really quick brief of what GitOps is and what the goals of the GitOps framework are. Very quick and very general. So the goal of the GitOps framework is to take one source, one source of truth, in Git, the Git repositories, that represents the desired state. And nowadays, Git not only represents application code. It also represents infrastructure configuration, application configuration, etc.

And the goal of this framework is to take the desired state from the Git repository, and to apply it to some platform. This platform is usually Kubernetes, and the platform holds the current state. And what GitOps does is take the desired state and sync it with the current state in Kubernetes. In this GitOps process, there are several components: the Git repositories, Kubernetes, CI/CD pipelines, and some configuration management tools. GitOps is one of the DevOps best practices, in order to reach high goals in your infrastructure and configuration and so on.

So also a little bit about ArgoCD and what it is. It is simply an implementation of the GitOps framework, and it lets you solve the challenges and do whatever you want in the GitOps world. It acts as a CD pipeline, as a CD tool, and is implemented as a Kubernetes controller. What ArgoCD generally does is two main things. One is to monitor what is happening on your Kubernetes clusters, it monitors your applications, and the second thing is it monitors your repository.

Like what has changed, what commits happened and when, and the main goal, the GitOps job, is to take what exists in the Git repository and to sync it with Kubernetes, very simple and straightforward. You can click. So as I said, I will do a very quick overview of the other three projects of Argo. The first one is Argo Workflows, and Argo Workflows is simply a workflow engine.

A workflow engine is just an engine that lets you execute a sequence of tasks in a DAG. A DAG is the graph that represents the flow. It’s a directed acyclic graph, meaning that all the tasks go in one direction. And how Argo implemented it, each step is a pod that runs some container. And the next project is Argo Events. So Argo Events is an event-driven workflow automation framework for Kubernetes.
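
For reference, here is a minimal sketch of what such a DAG looks like as an Argo Workflow manifest; the task names and image are hypothetical, not taken from the talk:

```yaml
# Minimal Argo Workflow DAG sketch (hypothetical names): each task runs as a
# pod with one container, and "dependencies" keep the graph directed and acyclic.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-example-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: build
            template: echo
            arguments:
              parameters: [{name: message, value: build}]
          - name: test
            dependencies: [build]          # runs only after "build" succeeds
            template: echo
            arguments:
              parameters: [{name: message, value: test}]
          - name: deploy
            dependencies: [test]
            template: echo
            arguments:
              parameters: [{name: message, value: deploy}]
    - name: echo
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.18
        command: [echo, "{{inputs.parameters.message}}"]
```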

As we all know, events are happening on the clusters, if you run kubectl get events, you get events. Those events can be some creation, deletion, or modification of Kubernetes resources. And Argo Events lets you very easily configure that after some event, it can be, as I said, a creation, a deletion, some notification, you can fire some triggers.

Those triggers can be some workflows, can be some HTTP requests, can be other Argo events, and they can also go outside of the cluster, like to Slack, some Slack notifications and so on. It’s very easy to implement, and very easy to run. So I highly recommend using Argo Events. And I personally also very much love this project and contribute to it.
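
For reference, a rough sketch of that event-to-trigger wiring; the resource names are hypothetical and this is only one of many possible EventSource/Sensor combinations:

```yaml
# Hedged sketch: an EventSource watching Deployment creations, and a Sensor
# that fires a simple log trigger whenever that event arrives.
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: k8s-resource-source            # hypothetical name
spec:
  resource:
    deployment-created:
      namespace: default
      group: apps
      version: v1
      resource: deployments
      eventTypes:
        - ADD                           # creation events only
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: notify-on-deploy               # hypothetical name
spec:
  dependencies:
    - name: dep
      eventSourceName: k8s-resource-source
      eventName: deployment-created
  triggers:
    - template:
        name: log-it
        log:
          intervalSeconds: 10          # could instead be an Argo Workflow, HTTP call, Slack, etc.
```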

So the fourth and last project of the Argo projects is Argo Rollouts. As we all know, Kubernetes simplified the rollout process very much, and now the rollout process in Kubernetes is very easy and comes out of the box. And the configuration of this rollout process is a rolling update with maxSurge, maxUnavailable, and these are basically most of the configurations that you can use while doing some rollout.

And what Argo Rollouts brings to the game is other strategies of doing a rollout. And two of the most popular strategies are canary and blue-green. Canary is basically when you take some part of the pods and put them in another state, so that you have A and B, and half of the traffic goes to A and the other half goes to B, or some other division.

And blue-green is also a very famous rollout strategy, where you have two duplicates of the same system. One is blue, one is green. And then you can choose to transfer the traffic between blue and green very easily. So when you want to do a rollback or a rollout, it’s very easy. And also, it’s very easy to implement Argo Rollouts and to use it. It’s just implemented as a CRD in Kubernetes and that’s it. So without further ado, let me pass the mic to my friend, Assaf.
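
For reference, a hedged sketch of a canary Rollout manifest; the app name, image, and step weights are placeholders, not values from the talk:

```yaml
# Hedged sketch: the Rollout CRD takes the place of a Deployment and adds a
# canary strategy, gradually shifting traffic/replicas to the new version.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-rollout                   # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:1.25            # placeholder image
  strategy:
    canary:
      steps:
        - setWeight: 25                # send 25% to the new version
        - pause: {duration: 5m}        # wait, watch metrics/alerts
        - setWeight: 50
        - pause: {duration: 5m}
        # after the last step, the rollout is promoted to 100%
```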

Assaf: Thanks, Nir. That was good. Okay, so before we dive into ArgoCD, and before we talk about what decisions we need to make and what the different approaches are, let’s take a second and talk about what you’re probably going to use if you’re not using ArgoCD. And the only real alternative out there at the time of this webinar is Flux v2.

And Flux v2, which is also a GitOps tool, has some similarities with ArgoCD and has some key differences. So let’s start with the similarities, they’re both CNCF projects, and they’re both currently in incubation mode. Actually, a quick detail about this is that Flux’s mother company is Weaveworks, and Argo is the other project.

And when Weaveworks thought about what they were going to do with Flux version 2, like what v2 would focus on, they decided to do a collaboration with the Argo project, and develop a project called the GitOps Engine. The GitOps Engine was supposed to be the under-the-hood of these two applications.

And when the guys from Flux came to developing Flux v2, they decided not to go with the GitOps Engine. They decided to develop a new project called the GitOps Toolkit. And basically, when you’re using ArgoCD, under the hood you’re using some code that was written with the Flux guys, so that’s pretty cool.

Both of the projects are currently on GitHub, you can check them out, GitOps Engine and GitOps Toolkit. So what are the differences between these two? Flux is considered, first of all, lightweight, or lighter weight. It’s not that Argo is heavyweight, but when you compare the two, Flux is a lightweight solution, a lighter-weight solution. It is written with a CLI-first approach.

That means that when the guys developing Flux thought about how you’re going to use Flux, they were thinking that you’re going to use it with the CLI. It does not come with a bundled UI, when you install Flux you don’t get any UI. You can install the UI as a plugin, but it’s also considered an experimental plugin. So basically, as I said, it’s a CLI-first approach.

The last thing that I think is worth mentioning, you’ve got a few more differences of course, but the last thing I think is worth mentioning is that Flux does not currently have SSO support. I know that for the guys who work on bigger projects, in bigger enterprises, this sounds really weird, and I agree with you. Maybe they will have it in the future, but currently, they do not have SSO support.

So okay, let’s dive into ArgoCD a little bit, and some decision making that we need to do. So basically, when we come to set up our first ArgoCD infrastructure, an ArgoCD server for our companies, what we usually see in most of our clients are these two types of use cases. You’ve got on one side of the slide the small startup working from home, working from the home garage.

No DevOps team, no infra team. They want to do rapid development, they want to grow really fast. And on the other side, you see a big enterprise company, with all the necessary teams and everything. And the guys from the small startup, they usually don’t want to waste a lot of time on integrations and they don’t want to spend a lot of time on security aspects.

And usually, the scale is like two environments, in some cases more. But usually, we’ve got the development environment and the production environment. Inside each of these environments, you’ve got your one Kubernetes cluster, sometimes two Kubernetes clusters, but usually it’s one, it’s very small. And the typical way of doing things here, and the typical decision that we make, is we say okay, let’s install ArgoCD inside our development or production Kubernetes cluster. What do we get from it? Basically, when ArgoCD is installed inside the cluster, it runs in an in-cluster model.

So no integration is needed. Right after ArgoCD is installed, it can deploy anything you want to the cluster. And on the other hand, you don’t need to do any security work regarding ArgoCD. It does not mean that you don’t have to follow best practices, like security best practices, with your Kubernetes cluster. Of course you do, but Argo does not bring anything else to the table.

It runs inside your cluster, it pulls data from outside the cluster, like Git repositories or Helm charts, into the cluster, and deploys it from within, so it does not bring any new security issues. So I’ll just say this warning about this setup: this setup does not scale well. If you look at a big company, an enterprise-level company, they usually have a lot of environments, development and staging and production, maybe more environments, a DR environment, and some on-demand environments.

And each environment usually has a lot of Kubernetes clusters, maybe dozens of clusters, which can add up to hundreds of clusters. Then if we think about taking the approach that we just talked about, we’re thinking about hundreds of ArgoCD servers, and this is kind of a hell to navigate, to maintain and even to upgrade. So what we would usually do is we won’t use a lot of ArgoCD servers, but we also don’t use just one.

Why don’t we use one? We don’t just set up one ArgoCD and integrate it with everything, because we don’t want a single point of failure for GitOps operations. And also when we come to upgrading or doing some configuration on these servers, we would probably want the separation to do it first on the development server, and then on the staging environment and production.

So what we would usually do is we would bundle together, or maybe if we don’t have to, because we’ve got only development, staging and production, we would make it an ArgoCD server per environment. Or we would bundle together a few environments that are logical to bundle together. Okay, this is per use case. And this is usually the setup that we’ll do in this case.

I’ve got to say that if you can afford to go directly to this setup, even though you’re small, and you know that you’re going to grow and you’re going to go fast, then take your time and do it. Because migrations are usually painful, okay. You start, and you don’t have time to change it, to do the migration when you’re growing, and when you’re big, it just becomes hell and you do the migration because you’re forced to.

So that’s it. So once you install it and you’ve got it up and running, usually the instinct, especially for the guys who want to move fast, is: okay, let’s go and start deploying stuff, let’s create applications and let’s do it. And I say no, just stop at day zero, you’ve got your servers up and running, you’ve got your infra up and running. And I’m going to mention here a few things that I think you should do before you just dive into Argo and start working.

First thing is set up SSO. I know that for the guys who are working with enterprise-level companies and have some SOC 2 requirements and so on, this comes naturally to you. But for the guys working for the smaller companies, for the startup companies, okay, this is not something that is really obvious. And I’ve got to say, don’t use the admin user, don’t use it, don’t pass it around.

Usually when stuff breaks, it’s because someone had privileges that they were not supposed to have. And they did something that we didn’t know was going to break production. So don’t use the admin user, integrate with your identity provider, and luckily for you, ArgoCD can integrate easily with all the popular identity providers out there.

Okta, OneLogin, if you’re using LDAP or using SAML, it comes bundled with Dex, and Dex is sort of the middleman between those and ArgoCD. Dex can also integrate with the others, like the ones I mentioned, like Okta and Auth0 and all of those, but you don’t have to; obviously, you can integrate with them directly. Second thing I want you to do is don’t use the default project.
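
For reference, a rough sketch of how SSO through the bundled Dex is typically wired up in the argocd-cm ConfigMap; the URL, connector, and org below are placeholders, not a working configuration:

```yaml
# Hedged sketch: ArgoCD reads its Dex connectors from the argocd-cm ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  url: https://argocd.example.com           # placeholder external URL
  dex.config: |
    connectors:
      - type: github                        # could also be oidc, ldap, saml, ...
        id: github
        name: GitHub
        config:
          clientID: $dex.github.clientID        # placeholder secret reference
          clientSecret: $dex.github.clientSecret
          orgs:
            - name: my-org                  # placeholder GitHub org
```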

Projects in ArgoCD, and as I said, I’m not going to deep dive into the details of ArgoCD, but projects in ArgoCD are a way to organize your applications, the different applications. And ArgoCD comes with a default project set up. So what I would ask you to do on your day zero with ArgoCD is think of the logic.

Maybe a few applications make up a project, or maybe all of a team’s applications are bundled into one project. Create a new project, and once you’ve created it, it gives you two strong abilities. One is to configure roles, okay. And policies can be attached to those roles. So if I say I’ve got the role of developer in an ArgoCD project, then I can’t do things on our applications that would change config maps.

And the other strong feature is sync windows. Sync windows basically give us the ability to create a time frame in which you can or cannot do syncs. And I’ll give you the most common example: let’s not do any changes on the weekend. I can define that from, I don’t know, Friday to Sunday, we cannot sync applications in this project. It’s a really cool and strong feature.
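
For reference, a hedged sketch combining the two abilities above in one AppProject; the project name, repos, role, and schedule are placeholders:

```yaml
# Hedged sketch: a custom project with a "developer" role and a weekend deny window.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a                               # placeholder project name
  namespace: argocd
spec:
  description: Team A applications
  sourceRepos:
    - https://github.com/example/*           # placeholder allowed repos
  destinations:
    - server: https://kubernetes.default.svc
      namespace: 'team-a-*'
  roles:
    - name: developer
      description: Can sync apps in this project, nothing more
      policies:
        - p, proj:team-a:developer, applications, sync, team-a/*, allow
  syncWindows:
    - kind: deny                             # block syncs...
      schedule: '0 0 * * FRI'                # ...starting Friday midnight
      duration: 72h                          # ...for the whole weekend
      applications:
        - '*'
```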

The last thing, and what I’m going to focus on for the next 10 minutes of this webinar, is the application itself, and thinking about scale and complexity. And I don’t mean by that make it big and make it complex, that’s not what I mean. I mean think about what you currently have in your hands.

Like if you’re migrating an existing application to GitOps, then think about what you have and what’s the best way to implement what you have with ArgoCD. And don’t go for the quick wins, don’t go for the quick wins, think where you’re going, think what’s the best solution for you. And I’m going to stop the presentation for a second, and I just want to show you the starting application, and show you some screens of ArgoCD just to make it a little bit more interesting.

So it’s not just me talking over slides. So this is basically the main screen of ArgoCD, okay. Once we have applications, we will see them here. And then if we go here to settings, we can see our configured repositories, we can see our clusters, the destinations that this Argo server can deploy to, and we can see the projects that I just talked about.

And if I want, I can create a new project with the UI and call it, I don’t know, maybe example two, whatever I want, just keep it lowercase. There are a lot of different configurations that I can do here, but to define a new role like I just talked about, you just click on the upper left side, and create a role, I don’t know, developer, and then I can start attaching policies, whatever I want.

This developer can update which applications, every application in this project, and is allowed to do updates. Pretty simple, not going to do it now, pretty simple. And sync windows, okay, we just talked about it. Allow or deny, am I allowing or denying things, and then basically you’ve got a nice UI for your cron, like cron expressions. Like on which minute of which hour on what day of what month, you can also define the time zone, you can also associate it with an application or a namespace, so that’s a pretty cool thing to do.

Just to show you that the stuff that I talked about is not supposed to take days, it’s supposed to take you, I don’t know, half a day to figure it out and go. And then just for a quick demo, I am here with this repo, this Git repo here, a public Git repo from GitHub, I’ve got my nginx, it’s a deployment YAML for Kubernetes, I just connected ArgoCD to it, no problem doing it.

And then what I can do, I can click here and create an application, just give it a name, I don’t know, let’s call it demo. A project name, I am going to use the default project just because that’s what’s here right now. And I’ve got to give it the path, let’s say, and the repo that it’s going to use. A cluster, like I said, inside the cluster. So I’m not going to use the URL, I’m going to tell it to give me the name, and then you would see.

It’s in-cluster, that means you don’t have to set up anything, it’s from within the cluster. Let’s give it a namespace, let’s call it, I don’t know, nginx, I did some tests, so I don’t want to destroy anything there. And if I click here on create, and everything works well, then we will see a new application is created. And if I go inside that application, you can see the nice ArgoCD diagrams. This is what is so tempting about just jumping into it, you just want to see it work.

And if I press sync, then, again, I’m not going to dive into all the different options, you will see it deploying on my local Kubernetes cluster. We can see here the pod, let me focus on the pod, you can see the logs and all these nice things, like all the recent logs, because I know it is up to date. And let’s get back to the presentation.
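
For reference, the UI flow above produces roughly the following Application manifest; the repo URL and namespace here are placeholders standing in for the demo values:

```yaml
# Hedged sketch of the Application created through the UI in the demo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/nginx-demo.git   # placeholder public repo
    targetRevision: HEAD
    path: .                                   # directory holding the deployment YAML
  destination:
    server: https://kubernetes.default.svc    # the "in-cluster" destination
    namespace: nginx                          # placeholder namespace
  # no syncPolicy: syncing stays manual, like clicking "Sync" in the UI
```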

So let’s just say, like I said, that we have our application, and we’re on day zero, like you just saw. We have a Git repository, it’s connected to our application, we’re working, everything’s good. It won’t take a lot of time before someone comes and says, okay, I want to install a message broker, I want to bring another source from somewhere. And what usually happens right now, in this use case, a small application starting in a Git repository and someone brings a Helm chart, is that we need to make a decision, okay. Are we going to split our application into two applications, one for each source? Or are we going to merge these two sources and create one?

And the most common decision that we see with our clients in this use case is to merge it all into one Git repo. One Git repo, we download the Helm chart, we put it in the Git repo, and if we need to update it, we just bring it in and create a branch, and then make merges, and it works, it works really well. If your application is at that scale and this is where you stop, then that might be a good solution. But in most cases, what happens is stuff starts to scale and starts to grow.

And then we say okay, we’ve got a message broker, but now we need a DB, we need a second DB, an in-memory DB. And we’ve got other teams working on another repo. And we’ve got some guys from another company or from abroad, and their repo is in GitLab. And if you think about merging all of those sources into one Git repo, and bringing updates and maintaining it, it becomes hell. So this is usually where the decision is made, split everything up into separate applications.

So we solve the sources part, everyone can work in their own Git repos, and we bring everything from the Helm registry, no problem. But we’ve created a new layer of applications that we now need to maintain. Maybe some common values, maybe a specific order in which you want to sync them.

First of all, you deploy the database, and then you want to deploy the message broker, and then you want to deploy some of the stuff that we did in our Git repo. And then, usually what we see is a new thing comes up, the Argo app of apps. It’s basically an Argo application that, instead of deploying the Kubernetes objects that you use, like your Helm files or whatever YAML files you have, deploys applications. And it’s got a strong feature here which is called sync waves.

A sync wave is a way for us to tell Argo which applications or which objects we want to sync in which order. Like sync wave one, sync wave two, right? It’s got its own hooks, like when to do stuff. And I’ve got to tell you, I’ve got to be honest, this is usually where most projects stop. This setup is okay for most of the projects, and it’s kind of scalable and it works and it’s maintainable.
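
For reference, a hedged sketch of how that ordering is usually expressed inside an app of apps: the parent Application points at a directory of Application manifests, and each child carries a sync-wave annotation so lower waves sync first. The names and repo below are hypothetical:

```yaml
# Hedged sketch: two child Applications living in the app-of-apps directory.
# Wave 0 (the database) syncs before wave 1 (the message broker).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: database
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  project: default
  source:
    repoURL: https://github.com/example/apps.git   # placeholder repo
    path: charts/database
  destination:
    server: https://kubernetes.default.svc
    namespace: database
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: message-broker
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "1"               # syncs after wave 0
spec:
  project: default
  source:
    repoURL: https://github.com/example/apps.git
    path: charts/message-broker
  destination:
    server: https://kubernetes.default.svc
    namespace: broker
```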

But sometimes, things scale beyond that. So we talked about having a lot of sources, and we get a lot of sources. I mean, new applications need to be created on a daily basis, stuff changes. And on the other hand, we’ve got lots of destinations. We’ve got a lot of clusters, a lot of Kubernetes clusters going up, going down, hundreds at a time.

And then our app of apps does not solve any problems, because we have to create new apps every day, and associate them with the app of apps, and you’ve got to change stuff a lot. And when we come to solve this problem, we use a relatively new feature of ArgoCD, which is called an ApplicationSet. An ApplicationSet basically introduces a new concept which is called generators.

And a generator, what it does, is it integrates with your Git repository, and integrates with your directory structure. Let’s just say, again, my Git repository has an applications directory. And under the applications directory, I create a directory per application. Then Argo would iterate over that directory, and generate an application from each of its subdirectories.

That means we no longer need to maintain the application layer of ArgoCD. It will iterate over a directory, and for every directory I add, it will add an application. One quick comment about it: I talked about sync waves when we talked about app of apps. Currently, since it’s a new feature, it does not support sync waves. You have no way of ordering stuff, it’s just a kind of flat creation of applications, which all sync at the same time.
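
For reference, a rough sketch of that directory-per-application pattern as an ApplicationSet with a Git directory generator; the repo URL and paths are placeholders:

```yaml
# Hedged sketch: one Application is generated per subdirectory of applications/.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/deployments.git   # placeholder repo
        revision: HEAD
        directories:
          - path: applications/*         # one subdirectory per application
  template:
    metadata:
      name: '{{path.basename}}'          # application named after its directory
    spec:
      project: default
      source:
        repoURL: https://github.com/example/deployments.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
```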

So that’s the sources side, but on the cluster side, basically, we have the same solution. We can define a generator which iterates over a list of clusters that we have to maintain, we have no other alternative here. But it does so for each application. So basically, if I have 100 directories that are applications, and I have 100 Kubernetes clusters, then I would get a hundred applications and each application would have a hundred destinations.

So I’ve got to say, and I’ve got to be honest, this solution is not for everyone, it’s not something to aspire to. It’s not something where you say okay, I’ve got to have this feature because this is the best one, no. In most cases you stop with app of apps. But in some cases, when you have to scale, this comes in really handy. So this is the time I’m going to pass the mic back to my friend, Nir, for the Komodor demo.

Nir: So, thank you very much, Assaf, great insights on Argo. Let me share my screen, and show you the Komodor platform.

So this is the Komodor platform. As you can see, these are all the services across all of my clusters. In each one of my clusters, I have a Komodor agent that is installed using Helm or Kustomize. And the Komodor agent just reports what is happening inside the cluster to the Komodor platform.

In the UI, I can see all the workloads, meaning deployments, and also Argo rollouts, since we’re talking about the Argo projects. So let’s deep dive into a specific service, and look at the insights of this service. So first of all, we can see some information about the service, some metadata. We can also see, in the main view, a timeline of this service.

We need to remember that Kubernetes, and also ArgoCD, are stateless, they don’t remember what happened in the past, and Komodor does. Komodor does remember what happened in the last day, in the last two days, in the last week.

And by knowing what happened last week, I can draw a timeline of the history and the lifetime of a service, when it was rolled out, as you can see here. When there was some availability issue, when some of the pods of this service weren’t available or had some problem, I can see everything within Komodor.

ArgoCD and Komodor really complement and complete each other, because as I said, ArgoCD is stateless. It only knows the current state. It can tell you I’m out of sync or my services are unhealthy. But this is not enough, I’m asking myself why my service is unhealthy.

And at that point, I just jump into the Komodor platform, and now I can investigate and try to find the problem. And within Komodor, I have all the tools and all the clicks to see what the problem is.

Another great component of Komodor is the comparison view. And as I’ve said, one of the methodologies for deploying Argo is having a single Argo instance for each one of your clusters. But let’s say one of your Argo instances doesn’t work. I think this can happen to everyone. And what is the first thing you do? You literally compare something that doesn’t work to something that works.

So this feature shows you two deployments, in this scenario, it can be another workload, and shows you the YAMLs. So at a high level, you can compare between those ArgoCD instances and see what the difference is. Maybe the problem is hidden there, maybe it’s the image, maybe some configuration in the YAML, I don’t know.

Another thing that I want to show you on the Komodor platform is the inspection view. It’s like running kubectl get pods, kubectl get nodes, whatever you want. You can see all the nodes that currently exist in your cluster, with a nice UI, and do all the actions from here without running another kubectl command. All the metrics are correlated with these nodes. You can also click there, and do some very basic actions.

Another main capability of Komodor, as I said, is the statefulness of Komodor. It can show you not only the current status of some pods, it can also show you the history, what happened in the last 48 hours, including the deleted pods. I can say that many of our customers use Argo Workflows, and one of their very painful problems is that they have a problem investigating deleted pods.

Because after a pod gets deleted, Kubernetes doesn’t remember what happened to this pod, it never existed for Kubernetes. But Komodor does know about this pod, and does know how to say what events happened to this pod. Like some backoff error, and then the pod was terminated.

So the last capability that I want to show you on the Komodor platform is the monitors, which are a little bit similar to Argo Events. As I said, with Argo Events you configure that when some event happens in the cluster, some action, some trigger will be fired. And Komodor is a little bit different. We see that some event happened in the cluster, it can be some PVC that is broken, some modification of some deployment, it can be, in this scenario, some event that happened to a node.

And after some event happened to a node, we run several checks on this node, to try to see what the problem is. Because we define ourselves as the experts of Kubernetes, and we know what the possible causes of a node issue can be. So by running those checks when the event happens, you can investigate and see what the real problem is, what the root cause of this node issue is.

I think in this scenario, this was some node pressure on the node, but it could be other problems on the node, and you can just investigate and see it. We have a lot of other monitors you can configure, you define a threshold, and then just send them to Slack or Opsgenie or PagerDuty, whatever you want. I think this is it right now. Of course, you’re welcome to try Komodor. We have a 14-day free trial, so this is it. Let me stop sharing my screen. Assaf, can you share your screen? Thank you. So, Udi?

Udi: Looks very good, the Komodor screens.

Assaf: Thank you.

Udi: Every time I look at it. So we have a few questions from the audience, and I’ll go over them quickly, and whoever is more keen can answer. I think the first one is really appropriate for a start, actually the first two, so let’s start. Assaf, can ArgoCD also be used on Red Hat OpenShift?

Assaf: Yes. The answer is yes, we actually did a few projects where we worked with OpenShift and we worked with ArgoCD on OpenShift. As with most things, OpenShift brings some, I won’t say difficulties, some security constraints that you need to approach. But basically, you can install ArgoCD on OpenShift, work with it, and get past those problems.

Udi: Cool. We have another question from Jeff. What happened to the ArgoCD-Flux integration? They seem to be going in different directions?

Assaf: So as I said on that topic, basically, they worked together, and they had a collaboration to develop a project which is called the GitOps Engine. And the GitOps Engine is currently a part of ArgoCD, it’s under the hood of ArgoCD.

For some reason, I don’t know why, we usually don’t know why, the Flux guys decided not to use it. They moved to the GitOps Toolkit. You never know with this stuff. I guess, if you look into it, I’m not a historian, but if you look into it, you’ll find some official reason, but you can never really know.

Udi: Cool. Next one is Stephen Bayer, he has two questions. To be very efficient I’m reading both of them together. Can I deploy ArgoCD on Kubernetes?

Nir: Yes, I think that’s definitely a yes. All the Argo projects are native to Kubernetes.

Assaf: I’m giving you a challenge to deploy ArgoCD not on Kubernetes.

Nir: Yes, exactly.

Udi: And the second part of the question, or the second question is, can I connect it to Keycloak?

Nir: As far as I know, you can. It’s one of the supported providers.

Udi: Yes. I think Keycloak has been bought or taken over by Red Hat, if I’m not mistaken.

Assaf: Yes. But anyway, for the question specifically, you can. You can integrate it with Keycloak, that’s the truth.

Udi: Okay, next question, I think.

Nir: Explore the Komodor platform.

Udi: Yes, somebody is asking about Komodor, it’s from Marcos Poto. Is it possible to look at events from different resources at the same time?

Nir: Yes. So definitely, yes. I didn’t show it, but you have an events screen that shows you all the events, from any resource, on the same screen. I think I can, like, never mind. Let’s continue.

Udi: So the answer is yes. Another question from Jeff, actually I don’t know if it’s a question or a statement. But Jeff is saying best practices, documentation for blue-green with ArgoCD?

Assaf: I’m going to take this not as a question but as a declaration. But I don’t know, I don’t fully understand the question. There are best practices, and there’s documentation, for working with blue-green environments with ArgoCD. I don’t know if it’s a specific question, it needs to be clarified a little bit.

Udi: Okay. Next up is Lee Chechik. What is the recommended way of having the application files? Inside a Git repo?

Assaf: Okay. So this is something we need to think about in detail. Like I said, usually when we start off, if we’re small, we would see it inside the Git repo. So for the small-scale companies, for the small-scale teams, where you’ve got only one Git repo, or maybe three, four microservices, you would see the YAML part over there.

But once you start scaling, and the handling of all the YAML files, regarding Argo and Kubernetes and stuff, moves on to a different team, there is some logic in separating it.

Just because it’s easier to maintain, and if you get a lot of pull requests on your configuration files, on your Kubernetes files, you don’t want them to be mixed with the functional pull requests, and also for the CI and stuff. It makes sense when you grow past a certain point to separate stuff. But both cases might be good for you, I don’t know. You need to look at the specific use case.

Udi: Okay. Next question is from Rodrigo, how does Komodor compare to other tools that might already save the history of Kubernetes events, like Datadog? That’s a really good question.

Nir: Yes, of course. So Komodor gives you other insights about events. It can be the container logs of a certain pod when some event happened. It can correlate you to other resources, like I have some issue with my deployment, but it’s related to some config maps or secrets or a PVC, I don’t know.

And Komodor shows you the relation between these events and the events of other resources that exist in Kubernetes. And more than that, in Komodor, you can see the full picture and the full image across all of your clusters.

Udi: Yes. And I would also add that with Komodor, you get very precise context for what you’re looking for. Datadog is very robust and sprawling and intricate, and it’s very hard to set up, like the onboarding is very long.

Nir: That’s not the purpose of Datadog, they have other purposes, in other areas.

Udi: Yes. Their purpose is APM, that’s what they do best.

Nir: We use Datadog.

Udi: We use Datadog ourselves, that’s true. And we also integrate with Datadog, and we are part of the Datadog marketplace. We don’t want to replace Datadog in any way. But we complement it.

We are like an extra layer of abstraction on top of it, to give even beginners, Kubernetes novices who don’t know infrastructure really deeply, the insights and the data that they need with Komodor, where in Datadog they would be lost, they would be confused by all the dashboards and all the data.

So this is, I think, the differentiation between Komodor and Datadog. So moving on to the next question, how effectively is ArgoCD being used across customers? Any other version control tool that can be integrated with ArgoCD?

Nir: Other than GitHub, I think they mean?

Assaf: No. Okay, so let’s separate it into the two questions. Whenever we see a Kubernetes cluster, like an EKS cluster or something like that, it usually does not take a lot of time until we see ArgoCD being used. Let’s just say that once you scale and once you get to a certain point, ArgoCD just makes sense. It’s the most popular GitOps tool right now. I know that we said Flux is a really good alternative, and it is a good alternative.

But if we measure it in popularity, then Argo leads the market. So it’s very popular, and it’s very effective. I cannot think of a real company right now deploying Kubernetes stuff that tried Argo and went back. I can’t think of any. And regarding the other question, Git as the configuration management tool, it’s the only one that Argo supports.

If you’re talking about the servers, like where it’s hosted, if it’s GitLab or GitHub or even your own Git server, then it’s fine. You can use whatever you want, as long as it’s Git.

Assaf: Yes. I think you can even do it with Azure DevOps, okay. So it’s something to check out. We’ve worked with Azure DevOps in the past, we did a lot of stuff with it.

Yes, we’ve also worked with a local Git. Okay, not a local Git repository, but a Git server that we spun up, which is neither of those. So as long as you’ve got a URL with a .git at the end, it takes you there.

Udi: Cool. So Ivan is saying, sorry I missed the part of...

Assaf: I’m going to take it as “I heard it and I did not completely understand it”, and not as “I came late”. And what I would say about that is the ApplicationSets, which was the feature where we explained generators, is kind of, how can I say, heavy lifting. It’s a feature that is not commonly used. It’s supposed to solve a specific problem, which is scaling, like a lot of sources and a lot of destinations.

A generator is basically a for loop, or for-each loop, that you put inside your YAML file. And you give it an array of values, never mind that in the example it’s a directory structure, it gets the values from the directory structure, and it just goes one by one. And for each one of those, it generates an Argo application.

Also the same for the servers, I hope it’s understood. But if you don’t fully get it, it’s okay. Chances are you won’t get around to using it if you don’t get it right now.

Udi: All right, next question, also comes with a side story. The question is, what is your favorite method of bootstrapping a Kubernetes cluster with ArgoCD? I’ll skip the story.

Assaf: Yes, I’ll take the small story, and just summarize it afterwards. I understand what he is talking about, like he’s bootstrapping a Kubernetes cluster, and then he takes care of the ingress and takes care of a lot of infra-related stuff. And then he installs ArgoCD, and then for some reason, he wants ArgoCD to manage those components that he just configured.

And I can get it, sometimes we don’t know exactly where the line goes, like where the border is between stuff that is infra and stuff that is application. What we usually follow, and I’ve got to emphasize this word, usually. What we usually do, because every use case is its own, is we create, what I would like to do is to create a separation.

And say okay, we’ve got, let’s just say, an AWS EKS cluster, and then we’ve got our infra components. Let’s just say, I don’t know, maybe an LB controller, and maybe an autoscaler, and ArgoCD, that stuff we would maybe bootstrap using Terraform, just for example, or whatever AWS can provide.

And all the stuff that comes later, when you get to the application layer, not the infra layer like the ingress and autoscaler and stuff like that, that stuff should be managed by ArgoCD, but nothing that comes before it. You don’t have to backwards-connect everything. So that’s the way I would do it.

Udi: Okay. Next question is from an anonymous attendee, could you share the pricing model of Komodor? So I will say that we have a two-week trial for free, and after that, if you want to pay for Komodor, it’s usage-based.

So we found that usage-based is the most convenient, and the most predictable, way of pricing tools. It’s currently $15 per node, per month. So this is the pricing model, hope this helps. We have another question from Lee Chechik, what is the downside of using ApplicationSets?

Assaf: Okay, I’m thinking, okay, I’m just organizing the answer. So basically, ApplicationSets come to answer, or come to solve, a specific problem which is scale. And currently, when you implement an ApplicationSet, you need to have a good reason for doing it.

First of all, because it brings a certain level of complexity, you don’t just add your applications as code, you do it with some logic and a generator that automatically generates them. So it’s something that you might prefer not to do if you don’t have to. And the second downside of using ApplicationSets, which I think is currently, for me, the most painful one, is that you lose the sync wave option.

Which means that if you have an application, because you’re using an ApplicationSet, which deploys a lot of other applications, you cannot do it in order. You cannot say, first of all, I want my databases to go up, and then I want my message brokers to go up, and then I’ve got a few applications that I want up, because we do some tests or whatever.

And then, I want my application to go. So these are the two downsides: complexity, and the lack of support for sync waves.

Udi: Okay, good answer. So we have a couple more. So Jeff is asking where he can find some good documentation on blue-green deployments. I want to share this guide by LaunchDarkly, I think.

Nir: Yes, it’s a great guide.

Udi: Yes. We use LaunchDarkly, I think it’s the best tool for feature flags at the moment. So I posted their guide to blue-green deployments, I just left the link in the chat for you, Jeff, and anyone else who might be interested. So moving on, we have Jonathan.

Great session guys, in a scenario where I’ll be using the Weave Terraform controller, how could Komodor help me? Is it possible to start with a monthly individual plan? So just to answer the last part of the question, no, we are billing yearly. Currently, this is the pricing model, and Nir, do you want to answer the first part of the question?

Nir: Yes. I’m not familiar with the Terraform controller for Kubernetes, but for everything that you want to do in Kubernetes, manage, troubleshoot, solve problems, do things more efficiently, Komodor is the solution, I hope that answers you.

Udi: Okay. So next question from Ivan. What comes first? Komodor or ArgoCD?

Nir: I don’t think there is an order, I think they come together. These are things that you do early, and both of them are recommended to set up before you start to be in production.

Assaf: But I would say, Nir, that if you know you’re going to use ArgoCD and you’re going to use Komodor, you would probably set up Komodor first. Because if you’ve got problems with ArgoCD, Komodor can help you.

Nir: Exactly, yes.

Assaf: Okay.

Udi: Next question is from Anton. What if I’m deploying multiple ArgoCD servers, deploying applications to the same Kubernetes cluster? Can each server cooperate with the others out of the box, or do I need to manually set this up?

Assaf: Okay. So basically, I would not recommend multiple ArgoCD servers deploying to the same Kubernetes cluster. It’s not recommended, okay. And it can work, a lot of stuff can work, but it’s usually not a good idea. And the reason is that ArgoCD, and we didn’t talk about it right now because we didn’t dive into a lot of details, but ArgoCD works like Kubernetes works, with desired state.

But ArgoCD also monitors its deployment after it does it. So if you say I want to have one pod of this, I don’t know, of this instance, let’s say. And then another ArgoCD deploys a configuration that says I want the same pod, but I want to have two replicas of it.

So now, what will happen is the second ArgoCD will deploy two replicas, the first ArgoCD will scale it down to one replica. And then they start fighting. It’s like giving two AIs the chance to talk, it can last forever.

Nir: And there is a collision.

Assaf: Yes. So don’t do it. I usually don’t say things in black and white, but on this, I would say don’t set up more than one ArgoCD instance to deploy to the same Kubernetes cluster, this is something I want to emphasize.

Udi: Cool. And we are out of time and out of questions, so we’re going to end this.

Assaf: Actually, just a recap?

Udi: Yes. Assaf, please give us a quick recap. And if you can switch to the final slide, so we can see your contact information. I’ll just say before we leave, that everyone who registered for the webinar will receive the recording and the deck, and you can review and watch it again in your free time. So no worries, and Assaf, give us the final roundup.

Assaf: Okay. So just a few takeaways from this lecture, not really technical, but at a high level. I think that the main point we wanted to tell you is don’t just jump in at first, don’t just run and do stuff. Think about where you’re going, think about what tools you want to use. Think about what the best approach is.

And don’t think about where you’re at right now, think where you’re going to be when you implement these solutions. Think what the end point of where you’re aiming is, you don’t have to aim at 20 years from now, a year from now is enough, and that’s it.

And if you’ve got any questions that we didn’t get around to, that we didn’t get a chance to answer, or you didn’t get the chance to ask, feel free to reach out to us, both Nir and I, we’re on LinkedIn. And that’s it, I had a lot of fun guys, thank you.

Nir: Yes, thank you everyone.

Udi: Cool. So thanks everyone. And as I said, we’re going to share the deck with you. And you will have the contact information for Komodor, and hopefully for Nir and Assaf. Thanks everyone for joining us for this webinar. I hope you all learned something new, and I will see you again next time.

We have a lot of cool events lined up for you, and just follow us on Twitter and LinkedIn. Or join our community on Slack.

One final plug: I’m going to share some links with you in the chat for Komodor and our open source project, and anything else that might be interesting to you, and that’s it. So thanks again for joining. Thank you, Assaf, for sharing your experience, thank you Nir, and see you next time. Bye.

Nir: Bye.

[End of Recorded Material]