Removing CI/CD Blockers: Navigating Kubernetes w/ Codefresh

Itiel Shwartz
Co-Founding CTO @Komodor

Kostis Kapelonis
Dev Advocate @Codefresh

You can also view the full presentation deck here.

Udi: Hi everyone, and welcome to the Removing CI/CD Blockers webinar. Our panelists for today are Komodor’s co-founding CTO, Itiel Shwartz, and our dear friend and neighbor from across the Mediterranean, a developer advocate at Codefresh, the one and only, Kostis Kapelonis.

We’re going to talk about Kubernetes troubleshooting and some CI/CD best practices. And we’ll have a quick Q&A session at the end, so feel free to drop your questions below. Kostis, take it away.

Kostis: Thank you. So these are your hosts today. I have Itiel with me, who is the co-founder of Komodor, and I work for Codefresh, which is a CI/CD and GitOps solution. Today, we’re going to talk first about the general problem, which is how you troubleshoot Kubernetes applications.

We’ll talk about why we should not use kubectl, however you pronounce it. If you think it’s the best tool for troubleshooting, it is not.

We’ll see a better tool today. We’ll also see that Komodor does not replace your metrics; it is something you use along with your existing metrics. So don’t fear that we will take away your metrics.

We will have a demo. Actually, I think we have two demos. So if the demo gods are good, you will see two demos today.

And then we’ll have a discussion and a Q&A. So I think it’s important for this particular webinar to give us a short intro on how it started. It’s not one of those webinars where we have Codefresh as a company, Komodor as a company.

And they come together, and they say, let’s have a webinar; no, it didn’t happen like this. I discovered Komodor as a user, as a person, not as a Codefresh employee. I found it, I think, via a Reddit ad, and I tried it, and I thought, this is the best thing ever, and more people should know about it.

So I sat down, and I wrote a blog post. If you haven’t read it, you can find the link here. It is essentially my review of Komodor and also an introduction to this new family of troubleshooting tools specifically for Kubernetes.

It really is a new family; I don’t think there is any other tool close to Komodor in this respect right now. So I wrote the article, and I don’t think many people paid attention at first. But when Komodor announced their funding, everybody said, let’s go and read the article.

So if you haven’t read it, go and read it and see why Komodor is so important. So after this blog post, we said, okay, maybe we should do a webinar so that we can answer questions.

And show Komodor with a demo, and see how people troubleshoot their own applications, if this is something interesting to them. So, why should I care? This is the most basic question in every webinar, because I find too many webinars start with the solution without explaining the problem.

Let’s talk about the problem first: why do you need to care? And the problem that we are going to talk about today is not Kubernetes specifically.

I mean, if you are here, you probably know that everybody’s moving to Kubernetes, or at least you’re trying to go to Kubernetes or adopting Kubernetes in production.

And if you look at the ecosystem right now (there is also the famous landscape from the CNCF, which I’m not going to show), we already have several tools for the existing aspects of our platform, like how you maintain a platform.

If you want to do CI/CD, there are many tools; Codefresh is the best, of course. If you want to do security scans, there are many tools. If you want to do testing or monitoring, there are many tools. That’s it.

But there are zero tools for troubleshooting applications, zero. Maybe there are some for other platforms, like for virtual machines or for hardware, but not for Kubernetes applications. And when I talk to people about troubleshooting, they almost always say, yes, kubectl.

So they open a terminal, and they start running commands, and what happens is that they play the game of 20 questions. Something’s down, and you start: kubectl get pods in the namespace, then you find your pods, then the logs, then you inspect stuff, and it takes too much time; it’s not a good tool for this.
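For reference, the game of 20 questions described here typically looks something like the following kubectl session; the application, pod, and namespace names are illustrative only:

```sh
kubectl get pods -n payments                                     # which pods are unhealthy?
kubectl describe pod moneymaker-7d4b9c5d8-x2k4p -n payments      # events, probe failures, restart counts
kubectl logs moneymaker-7d4b9c5d8-x2k4p -n payments --previous   # output of the last crashed container
kubectl get events -n payments --sort-by=.lastTimestamp          # what else changed recently?
kubectl rollout history deployment/moneymaker -n payments        # which revision is actually running?
```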

And the problem that we are going to talk about today is not how you look at a cluster when you know everything is fine. But like when timing is important. So it’s 3:00 AM, you are sleeping, you have your PagerDuty going off, and you need to resolve an issue right now. Is this the perfect opportunity to use Kubectl? No.

And you will almost certainly have horror stories where you open kubectl in the wrong terminal and try to delete something on the wrong cluster, and stuff like this. Kubectl is also destructive: it’s a tool that you can use to both read data and delete data. So if you ask me, it’s dangerous.

And there’s even the whole discussion, which we are not going to analyze today, of whether developers should have kubectl access to the cluster. So you need to make a choice as a company and say, hey, maybe I give my developers and operators kubectl access, and I take the risk.

Or I don’t give developers kubectl access, which then means that developers cannot debug stuff if you only have kubectl and nothing else. So it’s a lose-lose situation. And even if you don’t believe me, you should believe Kelsey Hightower, who is, let’s say, the number one developer advocate in the Kubernetes space; he works for Google.

And back in 2018, actually, so not this year, he said that kubectl is the new SSH. When he wrote that tweet, I said, okay, I don’t understand exactly what he’s talking about, but now I do. People abuse kubectl; they think it’s the new secure shell for clusters.

And they use it for everything. And if you ask somebody that is working on virtual machines, most probably they will have a specific tool for troubleshooting; they will not use secure shell.

They will use SSH as the last resort. When everything else has failed, then they will use SSH to go into the virtual machine and look around. And this is how you should treat kubectl: it is the last resort if nothing else works.

There are several solutions before you reach Kubectl, and Komodor is obviously the one to talk about today. So don’t use Kubectl or SSH for debugging Kubernetes applications.

So what is the challenge? What are we trying to solve? The scenario is that we have a problem: the application is down, and we need to debug the issue as fast as possible. Because if the application is down and we are paged, it means that it’s something important.

The company is losing money; you don’t have enough time to look at 5,000 systems. And also, ideally, the application that you are paged for is something that you’re familiar with, so you know how things work.

Maybe you have a runbook, maybe you have seen the same problem before. So you see it, you wake up, and you say, yes, I’ve seen this before. You fix it; then you go back to sleep. But this is not always the case.

Sometimes you might have to work with unfamiliar clusters that you’re not the expert on. One of the more classic scenarios is that you’re paged for your own application, and you spend some time looking at your own application.

You see that everything is fine, and then you realize that you need to go one step further and look at other applications from other teams. Maybe you have dependencies, and you are not an expert there.

So you essentially have two problems that need solutions. One is to solve the problem itself, but you also have the big problem of understanding who is responsible for solving it.

Maybe you can fix it, maybe you cannot, and you need to wake up somebody else. So Komodor is a great fit exactly for this scenario. I’m saying this because maybe you have another Kubernetes dashboard for the nine-to-five part of the job, when you’re sitting in your chair and the cluster is fine.

And you want a nice dashboard to look at stuff; yes, there are existing solutions right now, but they are not good for this particular scenario.

And usually, when there is an incident, no matter the company that you work for, no matter the application, the questions that you need to answer are mostly standard. The first thing is, what was the last change?

Because again, if you have worked in a big company, almost always, I would say 50 to 80% of the time, problems after a deployment are actually a configuration change. Something went bad, somebody changed something, either officially or not officially, and that was the problem.

Of course, there are also issues where no human made a change, and the problem comes from some other system. But in my experience, this is not, let’s say, the usual case.

So you need to know who made the last change. What was the last change? And sometimes, the last change is not passing through your CI/CD system, because some people like to change the cluster on their own.

You want to find the associated information for the service, so you want to know: where are the CI/CD pipelines? Where are your metrics? Where are your logs? There are other systems that you need to look at.

And that’s where you want to know the dependencies. Because maybe your application works okay, but you’re using another application that itself has an issue. So you need to answer these questions as well as possible.

And remember, the time is 3:00 AM, you were just sleeping, and you don’t have as much time as you want. You need something that helps you do this as fast as possible. And when I ask people, and also from my own experience, how do you actually solve this problem? The answer is complicated.

People say yes, maybe I go to my runbook. Maybe I look at the Wiki for previous issues. Maybe I open my logs; maybe I open my metrics. But the truth is that you need to open several systems at once because you don’t know in advance, where is the answer. So if it’s an existing problem that you have faced before, yes, the answer is in the runbook.

But if you don’t have a runbook, or it’s a new problem, you cannot use the runbook. Then maybe you want to go to your metrics. And the metrics usually, they say that something is wrong, but they don’t say why something is wrong. They don’t have the reason.

So you know that something’s down and when things happened, but not the actual reason. Of course, you can go to your internal wiki and search for stuff. And, you know, usually, when everything else fails, you go to Slack, and you start talking to people and asking, hey, maybe someone changed something, or who can wake up this person and that person in order to help.

So I don’t know about you, but this is not my idea of fun. If I’m waking up, I want to go to sleep as fast as possible.

So not having to look at five different systems, and instead having like a central hub that connects everything and gives me a full view, is my let’s say, dream of debugging Kubernetes applications.

So Komodor is exactly this; it helps you solve these two problems. Like, don’t visit five systems, five different systems unless it’s absolutely necessary. Start from a single system and, you know, go gradually and drill down to the problem.

And also, go and find information that you want and that you have in a self-service manner. So don’t wake up people and ask questions like where are the logs for this application? Where is my CI pipeline for this application? You should have this information at hand.

Those questions should already be answered, so you can focus only on actually finding the problem. You should have all the information at hand and know where to look when there is a problem.

This is what we want, and right now, most Kubernetes tools do not solve this particular issue.

So, enter Komodor; what is Komodor? Komodor is a dashboard that is designed exactly for this. Solving problems with Kubernetes applications is its main focus. You can install it very easily on the cluster; it’s a small agent.

You install the agent on the cluster, and there is a hosted dashboard; it’s a hybrid installation, if you know about that model. Codefresh also works in a similar way, as do other systems: you install something on your cluster, and then you get access to the dashboard.
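For readers who want a feel for what an agent install like this involves, it is typically a couple of Helm commands. The sketch below is illustrative only: the repository URL, chart, and value names are assumptions, so check the Komodor documentation for the current instructions.

```sh
# Illustrative sketch; repo, chart, and value names are assumptions, not official instructions.
helm repo add komodorio https://helm-charts.komodor.io
helm repo update
helm upgrade --install komodor-agent komodorio/k8s-watcher \
  --set apiKey=<YOUR_KOMODOR_API_KEY> \
  --set watcher.clusterName=production
```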

And you will see the details in the demo, but the goal of Komodor is to give you all the information that you need, at that point in time, as fast as possible so that you can focus on the problem.

So it’s a troubleshooting tool. I want to make it clear that it’s not a metric solution; it doesn’t take any metrics. It works with metrics solutions. It doesn’t deploy anything; it works with deployment solutions.

It doesn’t do anything with networking; it works with networking solutions. So it’s something that you install, and it has integrations with all the things that you already have in your company. It aggregates information from all those systems and shows you only what is important for the incident.

But it’s not a metrics solution, because some people might look at the dashboard and say, yes, this is a metrics solution. No, it’s not. For actual metrics, it integrates with things like DataDog and other solutions like this.

If you ask me, and this is why I’m so excited about Komodor, Komodor is introducing a new family of tools, a completely new family called Kubernetes troubleshooting. And as far as I know, and you can correct me if I’m wrong, this is the first and only tool in this category. So this is both good and bad, because it means that Komodor is revolutionary and doesn’t have any competitors, but it also makes it a bit difficult to describe to people.

Because people like to learn about new tools by comparing them with existing tools. So you say, hey, what is GitHub Actions? Oh, it’s like GitLab CI. You know what GitLab CI does, so you know what GitHub Actions does. But Komodor is not like any existing tool that you have. Maybe it looks like a Kubernetes dashboard, but it’s not the Kubernetes dashboard.

I mean the community Kubernetes dashboard. Maybe it looks like metrics; it’s not a metrics solution. And I want to make this clear, because I guess in the audience we have both developers and operators.

Right now, if you look at the Kubernetes ecosystem, operators are mostly happy. Most people are developing tools for operators, so they have their dashboards and their metrics; they can see what they want.

But developers are not so happy; there are not many solutions specifically for developers. And this is also true for local development, although there is a new family of tools for helping developers with local development on Kubernetes.

And Komodor, I think, is one solution that is not tied to one specific role; you can use it whether you are a developer or an operator, along with your metrics. And especially if you are a developer, one of the problems that you have with metrics is that metrics are not, let’s say, that useful to you.

Usually, the metrics solutions show too much information for a developer. I mean, as a developer, I don’t care about the PVCs or the storage or the networking. As a developer, I want to know: is my application up? Yes or no.

When was the last deployment? At what time? What changed? That’s it; that’s the only thing I want to know. So if you say you’re adopting DevOps, and you want developers and operators to work on the same, let’s say, level, you also have the idea of you build it, you run it.

You want developers to also maintain the application and not just say, hey, take it, and this is your problem, you know, the classic trope. I think Komodor is like the embodiment of this idea, that you should help everybody, and you should give something to everybody.

So you can keep both developers and operators happy. I think especially developers would be very happy with Komodor. And again, it doesn’t replace your metrics. As you will see in the demo, you can get links to your metrics from Komodor.

So if you already have metrics in place, Komodor, you know, can take advantage of them, and you can go and look at them. But if you want to keep both, let’s say developers and operators happy, Komodor is the way to go.

So today, we are going to look at a demo. I’m going to use Codefresh, but this webinar is not really about Codefresh. If you’re not familiar with Codefresh, it’s a combined CI/CD plus GitOps solution with three distinct products. The CI product is what you would normally do with pipelines. So it’s a pipeline that takes care of your unit tests, your security scans, your integration tests, your load testing.

And it runs whenever there is a commit. Then there is the CD part of the platform, where you have deployments; you can use Helm, plain manifests, or Kustomize, and you have a nice dashboard, again for developers.

So, where is the application deployed? And the third part is for GitOps; we use ArgoCD behind the scenes, if you know about ArgoCD, and we follow the GitOps principles, where everything passes through Git.

I’m mentioning this because today we’re going to see a demo with Codefresh, but Komodor can work with any CI/CD platform. It doesn’t require GitOps, you can put it on your cluster, and it’s always independent of your deployment solution. So I think that’s enough for the introductions.

We have talked about the problem; we have talked about the lack of existing tools in the space. We have talked about what Komodor is doing. So time for the demo, and actually, this is the first demo. I will show a very simple example of Komodor, like a real scenario.

And then, at the end, we’ll see our second demo with more advanced stuff; we didn’t want to show everything at once. So let’s say again, it’s 3:00 AM, I’m sleeping, and I’m getting paged. And the application is down; which application? The moneymaker application.

So it’s the main application that is making money for the company; it’s absolutely critical. I need to solve the problem as fast as possible. So I can open Komodor here on the application, and this is my focus.

And the main thing that Komodor offers is a timeline. You can see the timeline here in the middle of the window. It goes from left to right; you can see times here. And on the timeline, there are events.

And unlike your CI solution that only knows about your deployments or your metrics that only know about specific things, Komodor monitors your clusters all the time. It also aggregates events from other sources. So it has a superset of all the events. In this example, just to keep things easy.

This triangle is a deployment that happened with Codefresh, and you can see that this type of deployment happened at this point in time, and then everything was fine. And then, about an hour later, things went down.

So just by seeing this, I already get my first piece of information: the deployment happened, everything was fine, and things broke down later. So maybe the deployment is responsible for the problem; maybe it’s not responsible.

So the first thing that I want to show about Komodor is that Komodor allows you to annotate your application with useful information. So let’s say that I’m not familiar with this application, and I want to look at the metrics.

I could go to my Grafana dashboard and start looking, but here in Komodor, I have all the information at hand. So here you can see I have a link which I can click, and it takes me to my Grafana metrics for this particular application if I want to see the metrics.

This is like a global annotation, but you can also annotate your application in a dynamic manner. So in this particular deployment, as I click on it, I can show you some things about deployment.

But also, Komodor has dynamic annotations where you can say which particular CI pipeline made these deployments, which is really important.

So I can add the annotation here. And now, if I want to say how did you deploy this application? I can simply click on my deployment, scroll down. We’re not going to use Jenkins today; we’re going to use Codefresh.
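As a rough sketch of how links like these can be attached to a workload, annotations are one way to do it. The annotation keys below are hypothetical placeholders for illustration, not Komodor’s actual keys; the real ones are in the Komodor documentation.

```sh
# Hypothetical annotation keys, for illustration only.
kubectl annotate deployment/moneymaker -n payments --overwrite \
  "example.komodor.io/link-grafana=https://grafana.example.com/d/moneymaker" \
  "example.komodor.io/link-ci-pipeline=https://g.codefresh.io/pipelines/moneymaker"
```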

So I can click on it, and I go directly in the pipeline which deployed this application. So now we are in Codefresh. This is a basic CI pipeline; you can see it has multiple steps.

The first one is checking out the code; then maybe I check my quality for any errors. I compile the code; maybe I’m running some tests. I build my application, and then finally, I deploy it.

It’s all YAML, as in most systems, as you know. I’m not going to talk too much about Codefresh; I’m just going to highlight that for each particular step, you can use a different Docker image.

Unlike other systems, which I will not name, that force you to use a single image. So, for example, for this linting step, I’m using a GolangCI-Lint image. For this one, I’m using the Golang image, for the language itself.
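A minimal sketch of what such a codefresh.yml can look like, with a different Docker image per step, is shown below; the repository, step, and image names are illustrative, and the final deploy step is omitted for brevity.

```sh
cat > codefresh.yml <<'EOF'
version: "1.0"
stages:
  - clone
  - test
  - build
steps:
  clone:
    type: git-clone
    repo: example-org/moneymaker          # illustrative repository
    revision: main
    stage: clone
  lint:
    image: golangci/golangci-lint:v1.55.2 # one image for linting...
    working_directory: '${{clone}}'
    commands:
      - golangci-lint run
    stage: test
  unit_tests:
    image: golang:1.21                    # ...and a different image for tests
    working_directory: '${{clone}}'
    commands:
      - go test ./...
    stage: test
  build_image:
    type: build
    image_name: example-org/moneymaker
    tag: '${{CF_SHORT_REVISION}}'
    stage: build
EOF
```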

So that’s, let’s say, the two-minute intro to Codefresh. We’re not going to talk about it today. Also, notice that this is true continuous deployment: I’m starting from a commit, and I go straight to deployment.

There are no approval steps or promotions or stuff like that. Even though Codefresh supports this. So the important thing here is that everything looks green. Like as far as Codefresh is concerned, all the tests were passed. All the quality steps were finished.

The image was built and pulled successfully, and the deploy finished. So this reinforces my feeling that this deployment was not the problem, because as we saw in the dashboard, things were okay for about an hour and a half after the deployment.

So I go back to Komodor, and now the killer feature of Komodor. I think if you want to keep one thing from this presentation, it is that Komodor has this idea of dependent services.

And unlike other solutions that focus on build-time information, Komodor actually gathers this info during runtime. It tracks the dependencies between services as they are in the cluster.

And remember, I said before that Komodor is not replacing existing tools; it’s taking information from other tools. So Komodor gets this information from other systems. For example, if you have Istio, you can get this information for Istio.

If you have DataDog, you can get this information for DataDog. So you’d use an existing system and take only what you want because this is what matters to you right now. So I’m saying, hey, this application seems to be okay.

The deployment seems to be okay, but maybe there is a dependent application that I need to look at. And for this particular demo, I wanted to keep things simple, so there isn’t an integration here; but a very usual scenario is that the applications that are in the same namespace as this application are also dependencies.

So maybe they are affected. So right now, I’m only looking at the money-making application, but there are two more applications in the same namespace. So I look at the timeline here, and I’m focused on a single service. But I can click on other services and put them in the same timeline.

This is, for me, one of the most interesting pieces of Komodor, that it takes all the services and merges them on a new, let’s say, timeline with everything sorted, and I can see the complete timeline of services.

And you can see that so far, Komodor is serving me and answering only the questions that I need, instead of throwing all the information at me at once like a metrics solution would. First, I looked at my application; now I’m going to look at the dependent applications.

So in looking at the applications, this simple deployment doesn’t seem to be a problem. Because as you can see, nothing has happened for this. I mean, there isn’t any problem, so I can disregard it. I’m not going to look at it.

I’m going to focus on this one, which I named the hidden dependency for the demo. And you can instantly see that here, we had two deployments. So it looks like first the dependency was deployed, and then my application was deployed.

And I can look here and see all the dates, and I can reason about them. So you also get the granularity of the ordering: maybe this application needs to be up first, and then the second one, I don’t know.

Here, there is another event, and I think it’s important to show this. Someone went and changed the replicas. So replicas were two, and now they are four. And for every event, Komodor is smart, and it gives me all the information I need. So here, at the top, I can see what was changed in the cluster.

So a deployment happened here, because I can see the manifest is completely different. But on the lower level, I also have information from GitHub. So I can also see what happened on the application level.

So this particular deployment was part of a commit; I can see it here. But if I click on the things that were done manually, like the replica change, there isn’t any Git information. Okay, so that doesn’t look very good. Maybe somebody wanted to do something, but the application was still fine. Then there is another change of replicas; still fine.
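An out-of-band change like this replica bump is typically a one-liner from someone’s terminal, which is exactly why there is no commit or pipeline attached to it; the names are illustrative:

```sh
kubectl scale deployment/hidden-dependency -n payments --replicas=4   # no commit, no pipeline, no Git information
```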

And here is the most, let’s say, interesting thing; that’s the moment of truth. We can see that this hidden dependency had a new deployment. And immediately after this deployment, the hidden dependency had an issue, and then my application had an issue.

So I can tell from here that most probably these two applications are related, and this particular deployment broke things. So I can click here and see what changed in the cluster for the new release, and then here I can see the commit that caused the problem.

Somebody made a code change; in this particular case, in order to simulate a failure, I changed the liveness check of the application, if you’re familiar with liveness checks, and made it fail, but that’s just the example here. Looking at the commit, I might see something that is interesting to me.
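For readers unfamiliar with liveness checks, the kind of breaking change described here amounts to something like the sketch below: a probe that can never succeed, so CI passes but the kubelet keeps killing and restarting the container at runtime. The manifest and names are illustrative, not the actual demo application.

```sh
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hidden-dependency
spec:
  replicas: 2
  selector:
    matchLabels: {app: hidden-dependency}
  template:
    metadata:
      labels: {app: hidden-dependency}
    spec:
      containers:
        - name: app
          image: example-org/hidden-dependency:1.0.1      # illustrative image
          livenessProbe:
            httpGet: {path: /does-not-exist, port: 8080}  # never succeeds, so the pod is restarted repeatedly
            initialDelaySeconds: 5
            periodSeconds: 10
EOF
```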

So at the very least, I know now this commit was the problem. I might say that, okay, maybe I’m not responsible for this application, so I know who I need to wake up. And I have, again, all the information.

So I can verify, for example, that the deployment itself was fine. This was a runtime error. If we go back the same way to Codefresh, this is a similar deployment: as far as the code was concerned, everything was okay.

And the problem is at run time, because Kubernetes is not passing the liveness check. So I found all this, and you see, I didn’t have to look at metrics. I decided what information I wanted to look at, following this idea of drilling down. So I started from the general case, looking at my application.

Then I moved to the particular timeframe, and then I started looking at dependencies. And at all times, and I think this is important, I only looked at what I was interested in, instead of opening a metrics dashboard and looking at everything.

So that’s a very simple scenario for Komodor and why I think it’s important. I want to stress that this is a subset of the capabilities of Komodor; Itiel will talk more about what else it can do. So that was the demo, and now maybe Itiel can share his screen and talk about all the other stuff. I hope this was interesting for you.

Itiel: Yes, thank you very much, Kostis. Do we have any questions before we get started, just so we can see them and maybe answer? First of all, that was really a great explanation of the product, Kostis.

I’m not sure if I can outperform you, to be honest, even if it is Komodor. But I will try my best. And I think I’m going to start from the questions. We have: can I see the status of pods and the related events? So let me do a screen share.

Kostis: You can say that this is the new feature that you just added; people are asking about the new feature.

Itiel: Yes. I would say that it all started by giving a high-level overview, but we see more and more users basically trying to use Komodor to finish their entire troubleshooting cycle.

So they go to Komodor because we notify them that they have a problem, and then they do the deep dive inside Komodor, basically understanding what the issue is. And a lot of the time, they can even figure out how to remediate from within Komodor.

So let me share my screen with Komodor. So here we have the main Komodor screen, as Kostis showed us; let me close some tabs so you can see it better.

I can see that brain-consumer-pg is currently unhealthy, and I can go in and see that this service, in particular, had a lot of issues over the last 24 hours. And more than that, I can see that not only was the service unhealthy, it also had DataDog issues.

So because Komodor integrates with your existing tools, with your existing workflows, you can have a full picture of everything in one single place. I will answer the first question, about the GitHub commits, directly: would it be possible to use Bitbucket? Yes, and I’m going to show it here.

Here you have a real live example: you can see the commit directly from within Komodor, and you can go to GitHub to better understand what changed, which interesting files changed, all from within Komodor.

I will say that we support Bitbucket, GitHub, and Azure Repos, so pretty much all of the main Git providers. So I answered that part live. And let’s see if we can also see the live status of pods from within Komodor.

I see that some of our pods are in CrashLoopBackOff; one second. I’m having technical issues here, so I apologize for that.

Kostis: I think we can answer this question that, you know, one of the latest features of Komodor. I think it was exactly this, to get a live status of pods and their logs. So if you want to do it from Komodor, yes, you can do it.

Itiel: Yes. Here you can really see everything that changed for these specific pods. Basically, you can get the full history and timeline of the pods from within Komodor.

So you don’t need to go and run kubectl describe or get the logs. You can get everything you need from a single place, very easily. And I will say that a lot of our users really like this feature in particular, because it saves them time.

And if you have a multi-cluster system, and maybe, God forbid, multi-cloud and multi-cluster, then because Komodor is cloud-agnostic, you can see everything, or every interesting thing, in one single place.

And you can really see the diff, whether it’s a GitHub or GitLab diff or any other change, from within Kubernetes itself. I’m going to do another very small deep dive, on the new events view.

This view, in particular, is super interesting because it lets you see everything that changed across all of your clusters, all in one single place. So you can see health events, you can see deployments, you can see alerts. You can see everything that’s happening in your system in one single place. And you can also have predefined filters, so you can basically answer the question:

Okay, I get the fact that I am having an issue, but are other services also faulty? Maybe I have this issue across clusters, so maybe it’s something with the DB or something. So this view, in particular, is very useful for people who try to troubleshoot more complex issues than a single service that is currently down.

And I can also drill down and use the Komodor filters to better understand; I don’t know, show me all of the changes on a different cluster, from the data team, and I can see that the JSON file changed.

So maybe those are the interesting changes. So this is another very cool feature in Komodor. I will also say something about the notifications; we didn’t really talk about them. They let you know both about deployments and about unhealthy statuses.

And the cool thing about this feature, in particular, is that you configure it via Kubernetes annotations. Meaning you can have each team basically sending its notifications to the right Slack channel without you needing to configure things in the UI.

It’s very infra-as-code, writing it down as part of the Kubernetes YAML itself. Okay, I think I talked about all of the big features that I wanted to cover. Kostis, can you share the slides again, just to make sure I didn’t miss anything? Yes, so I think we talked about all of those.

The only thing I didn’t talk about is setting up Komodor and the integrations. I think it is worth mentioning that the installation is super easy and super smooth.

Meaning that after around 20 seconds, you can have Komodor installed in your clusters, giving you the power to troubleshoot without needing to worry about access, multi-cluster kubectl configuration, and so on. So the installation is very easy, and we have native integrations with all of the big monitoring tools, the source providers, and so on.

So once you go into Komodor, you can have all of this view, all of these timelines for like everything that happened in your system in a couple of seconds. And I think like a lot of our customers are really surprised by how easy it is to enable Komodor, and basically to use it. And so I think it’s one of the coolest things that we have.

Kostis: I also want to mention something specifically about the GitHub integration, which I think is important. Especially for people that follow GitOps, we’re not going to talk about it today.

But usually, when there is a deployment, sometimes there is a change in the manifest, so it’s a configuration change. Sometimes there is a change in the code, the application source code.

And, of course, there are cases where you have both. And most systems right now have a specific audience in mind, and they focus on only one or the other. ArgoCD is a good example: when you deploy something, it only shows you the manifests. It doesn’t know anything about the application.

But because Komodor has this smart GitHub integration, it can actually show you both parts of the same thing.

So you can see what changed in the manifests, if there is a change in the manifests, a change in the cluster, or a change in the application. This is especially useful for developers, because maybe if you’re an operator, you’re not concerned about your application source code.

But if you are a developer, it’s super helpful to see the whole picture and understand everything, both the configuration and the source code.

Itiel: Yes, totally, a great point. And I will say that one of the nice things about Komodor is that we help bridge the gap. We give a lot of value both for the operations people, who are probably troubleshooting every day and finding out what is happening,

but also for the developers themselves, who want to see the logs and want to understand how the source code has changed. So we see ourselves as a bridge between the ops people and the developers, and I think it’s super valuable.

Another question that we have is: where do we store the data? Komodor is a SaaS solution, so we store the data ourselves. We save each one of the events, and we store them for three months by default.

So, this next one is regarding the events. Yes, we talked about the events view, right? I showed it in the live demo.

Kostis: There’s also a question for Komodor about ConfigMap changes.

Itiel: Yes, we show ConfigMap changes. You can also enable us to show the Secrets themselves. So basically, everything that changed. We are now adding node changes. I know that for the ops people in the crowd, it’s super useful to know about issues in nodes, like new nodes or bad nodes.

So we took this feedback that we got, again, from customers and specifically ops people, to cover not only the application changes but the infra changes. How did the cluster change, how did the Kubernetes cluster change, how did the nodes change? And we’re now adding support for these as well.

So you can know: oh, I have an unhealthy pod, but now I can see in Komodor that the reason is that the underlying node is unhealthy. Instead of trying to figure out what is happening and why half of my services are alive and the other half is basically unhealthy.
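For comparison, checking whether an unhealthy pod is really a node problem by hand usually means a round of node-level commands like these; the node name is a placeholder:

```sh
kubectl get nodes                                                        # any node NotReady?
kubectl describe node <node-name> | grep -A8 Conditions                  # MemoryPressure, DiskPressure, Ready...
kubectl get pods -A -o wide --field-selector spec.nodeName=<node-name>   # what else is scheduled there?
```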

Kostis: Yes. I think this was not part of my demo, but I have actually gone to the command line with a kubectl command, and Komodor saw this as a generic event.

Most other deployment systems would know nothing about manual changes. And I’m not saying it’s good to make manual changes in the cluster; you should not make them. But if this happens, Komodor is smart enough to understand.

Itiel: Yes. The cool thing is that because we listen to the Kube API directly, to the control plane, you can’t hide anything from us. Once you submit a request to Kubernetes, Komodor will track it and will show it.

So you can see manual changes, GitHub changes, and CI/CD changes all in one place. And you can usually pick out the one commit or change among all of the CI/CD changes, like an important fix or an issue or something else.

Kostis: And then we have a question about Microsoft Teams.

Itiel: Yes, we support Slack and Microsoft Teams. So we support everything. We have a lot of Microsoft customers.

Kostis: I think we answered all the questions so far.

Itiel: Yes.

Udi: Yes, we got all the questions live. Itiel, is there anything else you want to show us?

Itiel: Nothing else I want to show. I will say that we have the nodes support coming up pretty soon, and it’s going to be a very helpful thing.

We know that troubleshooting node issues is particularly tricky, and we are going to address this in particular in the upcoming few weeks. I’m also going to say that we have a free trial, so it’s very easy to try. I think it’s on the next slide as well, just as a reminder.

Kostis: I was actually looking for this, to make my point from earlier: this part shows what changed in the cluster, so in the configuration. If you have installed ArgoCD, you could also get a similar thing there.

While this part is the application code, straight from GitHub. So this is the idea that developers are really happy about: they have the full picture of both source code changes and manifest changes at the same time, which, at least for me, is super important.

Itiel: Yes. Another thing I forgot to mention is that we also integrate with your existing configuration tools and feature flag tools.

So if you have LaunchDarkly, or if you do manual changes, you usually don’t have one place to see all of it: manual changes, feature flags, configuration changes over the DB, and your regular CI/CD flow. In Komodor, you can view everything in one single place.

Basically, I need to know: oh, I see that this alert just started; did it start before or after the LaunchDarkly change? I know those kinds of changes can be especially tricky to find out about, and we give you this kind of visibility, basically everything in one single place.

Udi: Itiel, do you want to share anything about what the future holds for Komodor? What other things you are working on?

Itiel: Yes. I talked about the nodes, and we have something even more unique coming up in the near future.

And we got the request from a lot of our customers: because we know Kubernetes and we know troubleshooting, they want us to help guide them once they have an issue, to allow them to understand basically where to look next or what to do next.

So our plan for the upcoming few months, and it’s still very early on, but we do have a couple of alpha users for this, is the ability to automate parts of the root-cause investigation, basically to do everything you would do yourself.

Once you have an issue: go to the logs, go to the nodes, go to maybe two other external tools, and give you some very precise and concise way of understanding everything that is happening that might be related to the issue.

So it’s like taking troubleshooting to the next level. Not only showing you all of the useful information but giving you like the next step in your troubleshooting in order to save your precious time.

Udi: Awesome. I think we’re going to end on this note. Sounds like a very bright future for Kubernetes troubleshooting. Thanks to everyone for joining us. We’re going to share the recording and the deck with you tomorrow or the day after. And thank you, Itiel and Kostis.

Itiel: No, thank you, Kostis.

Kostis: Thank you.

Udi: Kostis is the main man here, and we thank you for joining us. And thank you for this very illuminating webinar. Even for me, I learned a lot of new things today. So thank you guys and see you next time; Bye-bye.

Kostis: Bye.