You can also view the full presentation deck here.
Nic: Okay, so I think we are live now. Thanks everyone for joining today. Today we're going to be talking about stateful applications. It's going to be all about databases, and the title is 'Five tips to successfully migrate your database to Kubernetes'. I don't know if you have been following Kubernetes and the different events or reports that have been pushed out, but there's a big trend now to run stateful workloads within Kubernetes, and in particular databases. So, that's what we're going to be talking about today. My name is Nic Vermande, I'm a principal Developer Advocate with Ondat. Before joining Ondat, I worked for around four or five years on Kubernetes, especially around networking with the ACI CNI.
Now I've been with Ondat for about one month, focusing more specifically on the data service layer. And today I'm very happy and thrilled to have with me Guy from Komodor. So, Guy, if you want to introduce yourself.
Guy: Yeah, hey, nice to meet you. I'm from Komodor, where I've been working for about two months as a Solution Engineer. Previously, I used to develop private clouds at a company called Zadara. And before that, I ran OpenShift and Kubernetes for a long time. Very excited to be here and talk about databases and StatefulSets.
Nic: Yeah, hopefully this is going to be a fun topic. So, the program for today, what you can expect: we're going to bring a little bit of context behind why run databases and stateful workloads within Kubernetes, and why now. We're going to take a look at the patterns and Kubernetes-native benefits that you get out of running those stateful apps. And of course, we'll go a bit deeper in terms of considerations for databases and take a look at the DevOps ecosystem, and what we can leverage there to make your database migration into Kubernetes a success.
So, we're going to bring a couple of demos. Today we're going to be using MongoDB with the Kubernetes operator, and we're going to see different things: StatefulSets, how to operate the database. So hopefully, you will be able to learn a thing or two. In terms of prerequisites for today, hopefully you know a little bit about Kubernetes. That's the kind of expectation, so you can follow what we're talking about.
So, let's get started. First off, with the title: is it really a good idea to run your database in Kubernetes? Fundamentally, if you replace Kubernetes with another technology, there's nothing really new about this debate. If you are old enough like me, 10 years ago, if you were already in the industry, there were the same debates, but replacing Kubernetes with VMware or virtual machines, right. So, it took some time, but as the industry and the platform got more mature, the toolsets also evolved, and there were guarantees that the database would be secure, performant, and have a lot of enterprise-grade features. As the technology evolves, it makes sense to migrate your database to that platform to take advantage of all its benefits. And that is basically the same thing with Kubernetes.
And why? It's because in Kubernetes, typically, you can run two types of workloads. If we start with the classic stateless workload, this is what you run through a Deployment in Kubernetes, right. It's the familiar fight between pets and cattle. The Deployment which is displayed there is really the way to run stateless workloads in Kubernetes. The principle, just as a quick reminder to set the level here: a Deployment in Kubernetes is just a group of containers or pods that serve the same service. You can place them behind a VIP, which is the Service object inside Kubernetes, right. And all the different pods composing that particular Deployment do the same thing; each is exactly a clone of the application.
What is specific to Deployments is that all those pods access the same storage, right. So, when you create, in your declarative format, the storage requirement, the volume requirement, all the pods share the same requirement. Which means that if they need to write as individuals into their storage space, there's no other solution than providing shared storage. Obviously, in the context of stateful applications and databases in particular, that's a pretty bad idea, right. You can't really run a database on NFS; the performance won't be there, right?
There will be bad consequences for your performance. And here enter StatefulSets. At the very beginning, around 2016, StatefulSets were called PetSets, which again plays on this idea of cattle versus pets. In my personal experience, the first time I saw one I was like, okay, what is this? For me it was when I deployed the ELK software suite for the first time. It's kind of complicated to get your head around, but I'm going to try to put it very simply for you today. The idea of StatefulSets is to bring a unique set of pods that are considered totally independent pieces of computing.
Meaning that they all have their own identity. Their identity is stable, which means that they can be addressed with their fully qualified domain name. It also means that they each access their own persistent volumes. So, pod one will access persistent volume one, pod two will access persistent volume two. I'm going to go a bit deeper into how persistent volume claims (PVCs) and PVs interact later. But for now, what you have to realize is that every pod needs to have its own name and its own storage. And when the pod dies, it needs to be restarted on another node and still have the same identity, which is obviously a challenge.
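As a rough sketch of what Nic describes here (all names are illustrative, not taken from the demo), a StatefulSet paired with a headless Service is what gives each pod its stable DNS name and its own PersistentVolumeClaim:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db                  # headless Service: gives pods stable DNS names
spec:
  clusterIP: None           # "None" is what makes the Service headless
  selector:
    app: db
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db           # ties pod DNS records to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mongo
          image: mongo:5.0
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:     # each pod gets its OWN PVC: data-db-0, data-db-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

With this in place, each pod is addressable as `db-0.db.<namespace>.svc.cluster.local`, `db-1.db...`, and so on, and keeps the same name and PVC even after being rescheduled to another node.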
If you just rely on Kubernetes, Kubernetes won't provide any sort of replication or encryption or anything for your volumes. Which means that you have to bring your own solution to solve this issue: if a node fails, how do we ensure that the stateful pod located on that node can be taken over by another node that has access to the storage? This is where Ondat comes into play. So, StatefulSets should of course be used for stateful applications and databases, and on top of that there's another element, which is illustrated by a Gartner prediction around how people are going to run databases.
If you read this quickly: by the end of 2023, Gartner assumes that 50% of all database revenue will be coming from databases that run in the cloud. What that means is there are two ways to consume those databases, right. Database-as-a-service — Amazon RDS and those types of solutions — or you can rely on Kubernetes, which is our preferred cloud operating system. If you're already using EKS, GKE, that type of solution, it's still considered a cloud database. Later, we're going to see some arguments around why you would run the database in Kubernetes as opposed to running RDS or database-as-a-service natively.
But this particular slide is important. It's very recent — it was updated in October in Datadog's container report. What it's showing is that among the top images people are running in production, one-third are databases, right. Which is really amazing — I didn't expect that much, to be honest — but it brings even more weight to why we are doing this session today. So, Guy, now I'm going to hand it over to you, because obviously operating StatefulSet databases doesn't come without issues in terms of troubleshooting and instrumentation.
Guy: Yeah, it's tough. It's tough to troubleshoot Deployments and pods, and it's even tougher to troubleshoot StatefulSets, where you mostly don't know the code. Troubleshooting Kubernetes is very hard today. Most of the time you get alerts from one system, maybe PagerDuty or Opsgenie. Then you get alerts from more systems, like Datadog, and Slack is on fire, everyone asking why the system is unavailable, and you want to check your CI/CD and everything like that.
So basically, when you go into an incident, you have a lot of data that you need to coordinate; you need to pinpoint and find the right issue that is causing it. It becomes very difficult, especially these days with Kubernetes. We mapped out a workflow for just one scenario, an out-of-memory issue, and even that solution is very complex. You need to take a lot of time, run a lot of commands, check the output, move between systems — it becomes very complex. And this is only one workflow that our engineering team mapped out.
So, what Komodor does is actually help you solve and troubleshoot those incidents. We take the data from all the systems and integrate it into one place where you can see all your services, all of your clusters, and all your incidents from any other tools you've got. We show you a timeline of your services with all the code changes and deploys, and the health events if the service was unhealthy, with the ability to drill down into them. That's what Komodor does. We can even drill down into a specific pod and see the actual logs from the system very fast and easily — something I really like and that we are going to see in the live demo later. So, very excited to show you that. Maybe you can explain a little bit about the persistent volumes and how that is integrated?
Nic: Yeah, sure. We're going to go into the demo in one minute, but so you understand what we're going to see — because the command line can sometimes be a bit foggy. At the foundation, if you think about what we are doing with Ondat, you can think of us as the hyperconverged solution for Kubernetes. You can think of it like vSAN, but for Kubernetes. What we do is take the disk space from every node — it can be anything that is locally attached to the node — and aggregate it into a pool of storage, right. And of course, on top of that, we provide additional features for replication, encryption, compression, and caching, and we ensure that you have the right performance.
So, yeah, I would say, if you ask me, it's an elegant piece of software. But more importantly, what is specific to Ondat is that it's Kube-native. What I mean by that is, you can see here in the middle of the screen, feature labels. Anything you want to enable in Ondat, you can do just by applying labels at different levels. You can go to the storage class, you can go to the PVC level, and you can change it as you want, live. Of course, depending on the situation, you may have to redeploy the pods. But let's say you just want to change the number of replicas — that's fine. You can do it live, right?
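To make the "feature labels" idea concrete, here is a hedged sketch of what an Ondat-managed StorageClass might look like. The exact label keys and the provisioner name are assumptions based on Ondat/StorageOS conventions and may differ from the product's current documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ondat-replicated
provisioner: csi.storageos.com        # assumed Ondat CSI driver name
parameters:
  storageos.com/replicas: "2"         # assumed feature label: volume replica count
  storageos.com/encryption: "true"    # assumed feature label: encrypt the volume
```

The same style of label can reportedly be applied at the PVC level instead, so individual volumes can override what the StorageClass sets.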
So, if you scale the number of nodes, you can also scale the number of replicas, up to five if you want, right. If one node fails, we will always make sure that the pod can be restarted on another node safely, and as much as possible it will access its own local data. If it cannot access its local data, we also have a network protocol whereby the pod can access a remote volume, which obviously has some performance impact, but a pretty minimal one. And just as a quick reminder, in terms of the flow for a particular StatefulSet: how do you define the storage requirement, right? As I said, the first time I went into this, I was like, why is this so complicated? But actually, it's not that much.
So, let's take a look at it. We have the StatefulSet at the top. The StatefulSet, like anything in Kubernetes, is a first-class object. For every pod to be able to leverage its own storage space, we need a volume claim template, right. Because if you just put a volume PVC there, then it's like a Deployment: everyone will share the same backend storage. Here we want every pod to have a distinct piece of storage, a distinct persistent volume. So, for that, you create a volume claim template. And in this template, there are two options. Either you just say, I want to use this storage class, and give the amount, like how big you want the volume to be — that works. With Ondat, once the storage class is managed by Ondat, we will provision everything dynamically.
So, you just specify the storage class and then we will provide the PVC, the PV, and that's it — super simple. Alternatively, as we'll see in the demo, depending on what or who is creating the StatefulSet — here we're going to see it for MongoDB — we are going to be using an operator. It's a piece of software that itself creates the StatefulSet definition. In that case, it's using a persistent volume claim and making a reference to a storage class. If it's empty, it means it's just going to use the default. But essentially, you're going to see a definition of the persistent volume claim.
The persistent volume claim you can compare to a pod: a pod consumes compute units, a persistent volume claim consumes persistent volume units. And because everything is dynamic, everything is created at once. The StatefulSet configuration is sent to the Kubernetes API. If the storage class is managed by Ondat, or by any dynamic CSI provisioner, it's going to automatically provision the PVC and the PV and tie them together, right. And you end up effectively with every pod leveraging its own piece of storage, right?
So, now let me quickly show you a demo of that. Let me share my screen — I'll try not to get the wrong window, that would be a shame. Okay, I think everyone can see my screen. Let's take a quick look at the cluster we have today. The cluster is basically a Rancher cluster running on Linode. And for Ondat, let's say I do a get pods on the kube-system namespace — okay. These are all the containers required by the Ondat StorageOS engine. So yes, StorageOS is essentially the engine we are using for our Ondat solution. Most of the intelligence resides in the DaemonSet. On top of that, we just have the CSI provider; we interact with the Kubernetes API with those deployments. And remember when I said we will try to colocate the pod with its own local storage — this is why we have our own scheduler, to manage this and try to optimize the placement of the pod with its storage.
So, what I want to show you is my MongoDB namespace, with all the pods. You can see the operator here — this is the community operator. Hopefully you're familiar with operators, but quickly: the operator allows you to configure your database and encapsulate this in YAML. By extending the Kubernetes API, this operator will take all the configuration items you have configured in this YAML file — which we're going to see in a minute — and just deploy everything accordingly. In our case I just said, I want a MongoDB, I want three instances for this particular MongoDB cluster. And those instances are the pods composing the StatefulSet.
So essentially, if I do a get statefulsets in that namespace, I've got my database, right. The name is example-mongodb, and as part of this, three instances are already deployed. If we take a look at the YAML, I just want to highlight a couple of configurations. This is exactly what I was mentioning: it's managed by the operator. The operator is generating the configuration to deploy that StatefulSet. So, I wanted three replicas — replicas as in pods, like a ReplicaSet controller inside Kubernetes. Because there are two levels of replicas here. There is the pod or container replica, as in a ReplicaSet in Kubernetes, but there's also the replicated data from MongoDB — the database replicas — which is not the same thing, right. Here we're talking about the containers. Then we have the multiple containers that compose each of our MongoDB pods. But what I want to show you is the volume claim template, which is down here at the bottom. So, here we go.
So, remember, we have our volume claim templates, and I've got persistent volume claims defined there. I'm requesting a specific amount of storage, which is 10 gigs here. I want 10 gigs for the data volume and I also want 10 gigs for the logs volume. And you can see I don't specify any storage class, so it's coming from the default storage class. If I get the storage classes globally, I've got two. When you install Ondat, you will have by default a fast storage class and a default storage class that I've mapped to the Ondat solution.
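What Nic is pointing at on screen is roughly this shape (a reconstruction, not a capture of the demo; the volume names follow the MongoDB community operator's conventions but may differ in your version):

```yaml
volumeClaimTemplates:
  - metadata:
      name: data-volume          # MongoDB data files
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi          # the 10 gigs requested for data
      # no storageClassName -> the cluster's default StorageClass is used
  - metadata:
      name: logs-volume          # MongoDB logs
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi          # the 10 gigs requested for logs
```

Because `storageClassName` is omitted, whichever StorageClass is marked as the cluster default — here, the one mapped to Ondat — handles the dynamic provisioning.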
And if we go a bit deeper, what will we see? Remember, I said before that the only thing you need to do to configure Ondat — because it's super simple, super easy, but also super powerful — is to manipulate, in the case of the storage class, these parameters. Any feature you want to enable or change, you can just apply here at the storage class level, or you can also do it at the PVC layer, right. Essentially, this is what I mean by Ondat being a Kube-native solution: we're not using anything but native Kubernetes constructs to implement our features.
Okay, so the last thing I wanted to show you: of course, we have a command that allows you to take a look at what is provisioned from the Ondat perspective and see the number of replicas, and which node is the master, or primary, for a particular persistent volume. For example, here we have three pods, right. Some of these PVCs are older ones left over from my previous test that have not been released, but what is interesting here is to see all of those replicas. Where it says attached, it means that this is where the primary replica is located. And you also want to see that we have three replicas for each of these persistent volume claims.
I think this is all I wanted to show you in terms of the relation between StatefulSets and PVCs — oh, the last one, maybe get the PVs, alright. The PVs themselves are essentially the pieces of storage that have been effectively provisioned for consumption by the PVCs, right. And that's basically it. Okay, let me stop sharing my screen and get back to the slides. So, as you've seen, the first thing you really need to be aware of is that if you want to deploy a database, or really any sort of stateful workload, into Kubernetes — do yourself a favor, don't try to do this with a Deployment. You have to use a StatefulSet.
But again, as I've shown, there are still challenges. You need to take care of the enterprise-grade features you need to run this in Kubernetes: replication, performance, compression, encryption, all of that. And the second part of tip number one for your platform: use StatefulSets, but also a database operator, which will take care of the database deployment like we've shown here — but also, as we'll see later in the other demo, all the data operations, such as scaling your database, right. If you need to scale the number of replicas — typically in MongoDB you have one, right, but then you can add more read replicas to give you more performance for reads.
So, let's say you have three replicas for your data set. If suddenly you say, okay, now I've got more customers connecting to my app, I need two or three more replicas — if you do it in Kubernetes, that means you have to do two things. You have to scale the number of pods in the StatefulSet, but you also have to scale the number of instances in the database, right. Those are two separate actions, and the operator will actually take care of both for you. We're going to see this later. And just a quick reminder again: the operator, in principle, is a piece of software running as a pod — or as a deployment, should I say — that is going to listen to the Kubernetes API and extend the Kubernetes API to create your own custom objects.
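With the MongoDB community operator discussed in the demo, that "one change, two actions" workflow boils down to editing a single field on the custom resource. This is a trimmed sketch (the users/security sections the operator also requires are omitted, and the version string is just an example):

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: example-mongodb
spec:
  type: ReplicaSet
  version: "5.0.5"        # example MongoDB version
  members: 3              # change this to 5 and re-apply: the operator scales
                          # BOTH the StatefulSet pods and the MongoDB replica set
  # (users/security configuration omitted for brevity)
```

Applying the edited manifest with kubectl is all the human does; the operator reconciles the pod count and reconfigures the database replica set to match.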
Like MongoDB itself will be a first-class object in Kubernetes. And the way you configure and interact with MongoDB then is by using a declarative format, which would be YAML, right. That's pretty easy. So yeah, just use an operator, don't do it yourself. Now the counterpart of that — that's for the platform — is for the app. Of course, you want to make the app aware of where it's running. It's running inside Kubernetes, which means it can leverage different benefits. Kubernetes provides service discovery, and it can inject metadata as environment variables. So, this is something you should use: the Downward API, which allows you, in your YAML file, to reference existing metadata for your application — IP addresses, service names, all those kinds of things you can populate.
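The Downward API mentioned here looks like this in a pod spec (the container name and image are illustrative):

```yaml
containers:
  - name: app
    image: example/app:1.0           # illustrative image
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name # inject the pod's own name
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP  # inject the pod's IP address
```

Inside the container, the application simply reads `POD_NAME` and `POD_IP` from its environment, with no Kubernetes client library required.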
The second thing I wanted to mention: as I said before, there are two ways to run a database in a cloud format. You can do it in Kubernetes, which is the de facto cloud operating system, or you can do it in a PaaS, DB-as-a-service type of environment. If you take the example of RDS, or any solution leveraging — let's call it cloud persistent disks, which are usually network disks attached to your nodes — it can be very costly. So, always make the comparison. Here as an example, I've just been using the traditional cost explorer from AWS. If you want 100K IOPS — which is not that much, knowing that any NVMe drive today can do five times that — five instances doing this with 500 gigs, which is not that big: this is the cost per month, that's 24 grand per month. It's not cheap.
So, what is the alternative? Well, you can use things like instance store, where in a nutshell you just pay for the local drive attached to your EC2 instance, right. You just consume your normal EC2 price. You're not charged on top of that for the disk. Then what you can do is use Ondat to create a pool of all the storage from all your nodes, from all this instance store, and on top of that implement those enterprise-grade features, but at a fraction of the cost. We are probably talking about 10x or even more, right.
Another paradigm I want to mention as tip number three is to bring more automation into the picture, because in Kubernetes it is really easy to automate thanks to the declarative format. If we compare it to software practices, we can combine infrastructure requirements — so platform and Kubernetes requirements — together with code requirements. In the software development landscape, we have this notion of shifting left, which is the idea of bringing all the testing earlier in the process, closer to the left, essentially where you are developing. The idea here is shifting left your infrastructure, meaning reducing the friction between the developer on one side and the platform on the other side.
What that means is that now the developer can just use the YAML files I've shown you before — storage class, PVC — and say, okay, I need this amount of storage, I need this feature. Maybe for a test, I don't need a replica; for production, I need X replicas, I need encryption, all those kinds of things, right. By doing this, you're reducing the friction between dev and platform, and you go faster through your software development life cycle, right. And this automation is also made possible by GitOps. GitOps is this idea that instead of pushing stuff yourself into Kubernetes using kubectl, you again make use of an operator that sits in your Kubernetes cluster and, as you see on the left here, watches for changes in the Kubernetes manifests.
A traditional pipeline would be: a dev, at the top, changes the application and triggers the CI pipeline. The CI pipeline builds a new container image. The new image will, through Kustomize, update the Kubernetes manifests. The GitOps operator will notice that the manifests have changed and, as a result, will reconcile the state living in the cluster with the intent, which is what is sitting inside the Git repository where you have the Kubernetes manifests, right. So, it's bringing the automation one step further, and it also helps you shift left in terms of security and compliance. Because now, since this is all residing in your Git repo, you can apply a plethora of policies.
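The Kustomize step of that pipeline can be sketched like this (the image name is illustrative; the GitOps operator could be Flux, Argo CD, or similar):

```yaml
# kustomization.yaml — CI bumps newTag on each build and commits the change;
# the GitOps operator then reconciles the cluster to match the repository.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - statefulset.yaml
images:
  - name: example/app        # illustrative image name
    newTag: "1.0.42"         # updated by the CI pipeline on each build
```

In practice the CI job typically runs something like `kustomize edit set image example/app:1.0.42` and pushes the commit, so the Git history doubles as a deployment log.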
You can use Open Policy Agent, which uses Rego to implement your policy compliance, doing syntactic checks — saying, okay, if it's going to production, I need encryption enabled; if it's not enabled, I'm not going to deploy that. It enables a lot of capabilities that you were lacking before. So again, it's helping you migrate your database into production, because now you have much more security and compliance, right? But you also have requirements in terms of instrumentation. So, now, Guy, it's up to you.
Guy: Yeah, so basically my tip is: build your stack for doomsday. When you run a database, doomsday will come. You will eventually have downtime and go into an incident that involves your database. And you're going to be responsible — responsible for the database and responsible for the operator. So, you want to add visibility to anything related to it, just like the visibility you add to any of your services. For example, the performance, the IO, the backups — everything related, you want metered and monitored, and you want to see everything in one place.
The next thing is about making troubleshooting easier. Downtime is very bad. It can be very stressful for the company, waking people up at night. You want to make sure your production is up and running, and you need to build your stack and your tools for that. You need to know what has changed recently: the number of replicas, the code related to that database, whether someone changed any configuration in the values. You want to see everything in one place — that's very important. You want very fast access to logs, metrics, and your monitors, to know what's going on and why. Maybe you want to create a hub for all your services: see the logs and metrics in one place, or switch from one to another without going through a lot of filters and searches. This is something you need to build to make your troubleshooting much easier at incident time.
The next thing: as we talked about shifting left and automating things, we want to make everything related to the database automated too. For example, if you have backups, you want to test them and make sure you know how to recover — not during the incident, but on any other day. When you wake up in the morning, you want to know your environment is stable and that you know how to do all the operations. It's also about empowering the developers: what they are capable of doing, what they're responsible for. It can be a platform developer or an app developer — all of them need to practice and know what they can do in their own database.
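One Kubernetes-native way to automate the backup drills Guy describes is a CSI VolumeSnapshot of the database PVC. This is a hedged sketch: snapshot support depends on the CSI driver in your cluster, and both the snapshot class name and the PVC name below are illustrative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongodb-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass         # assumes a CSI snapshot class exists
  source:
    persistentVolumeClaimName: data-volume-example-mongodb-0   # illustrative PVC name
```

A recovery drill then creates a fresh PVC with `dataSource` pointing at this snapshot and verifies the database starts from it — so the restore procedure is rehearsed on an ordinary day, not during an incident.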
After a while, you're going to build your database expertise. This is not something that comes in one day. You need to build it, train on it, and teach your systems, automation, and scripts to handle it. So, when you're deploying to development and staging and then moving to production, make sure you know your database and operator very well. These things can change quite often, and you want to make sure everything is in place.
So, let's move to the next one, which is about choosing your database. Choosing your database is not as easy as it seems at first. We have the common ones, and most of them have operators, but operators can vary a lot. They have different capabilities: some of them can only install, and some can do data operations too. Some are community-driven and open-source, and some are enterprise-ready with a lot of enterprise features — for example, more security, maybe access via a web UI. And some of them support upgrades, which is something you want when you're moving to production, and some do not.
Most of the common databases have a few operators. For example, Postgres has many operators out there. So it's not only about choosing the database, it's about choosing the operator too. When you're choosing a database, you need to know how it is going to fail, and what actions you, or the StatefulSet, need to take in order to bring it back live again. You need to know whether it's active-backup, where you need to do failover the old classic way, or whether it uses a consensus algorithm, where all the nodes can decide together who the leader is right now, making for a healthier, faster, and smoother failover. You need to know that when you're choosing the database.
A database can be Kubernetes-friendly or not. Kubernetes is designed to fail: your pods are going to fail, your StatefulSet will fail eventually, and some databases know how to handle that well and some do not. Make sure you're choosing a database that is Kubernetes-friendly, has all the features, and really knows how to adapt to failure and move around. Also, something you have to take care of is day-two operations — something we sometimes forget at first deployment. You really want to know which day-two operations are required from you and are not covered by the operator: what you need to do in that case, what is on you, and what you are going to test and automate. So, choose your database wisely; make sure the database, the operator, and the whole ecosystem fit your needs and the scenarios you're going to face in the future.
So, next we are going to show the demo. Nic, I think you’ll start first, and then we will move to Komodor and see everything there.
Nic: Yeah, so what I’m going to do, because I just realized I was a bit fast on the previous demo and you didn’t have time to see it, that’s fine, is continue the demo from before, but this time focusing on day-two operations for the database. Let me share my screen. Right after that we’re going to see the power of Komodor, and actually that’s a good example, because we can move straight to what you will see in Komodor as I scale up the database. But first, I just want to give you an extract of what the operator configuration looks like.
So, this is effectively where you set your database configuration. This is what Guy was mentioning before: depending on the operator, you can create different things and interact with different parameters. Here, the only thing I’m really going to change is the number of members. Remember, members determines both the number of pods in the StatefulSet and the number of replicas of the database. You can also configure things like users and passwords, and for the StatefulSet specifically you can interact with annotations, volume claim templates and resources. So basically, instead of creating all these parameters yourself as distinct Kubernetes-native objects, we leverage the operator: we create a custom resource that now sits in the Kubernetes API, and the DB operator takes care of all the automation, for the deployment but also for day two.
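For readers following along without the recording, the custom resource Nic describes could look roughly like the sketch below, modeled on the MongoDB Community operator’s CRD. The exact schema varies by operator, and the names and values here are illustrative, not a copy of the demo manifest:

```yaml
# Illustrative custom resource for an operator-managed MongoDB replica set.
# Check your operator's documentation for the exact field schema.
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: example-mongodb
spec:
  type: ReplicaSet
  members: 3            # drives both StatefulSet pod count and DB replicas
  version: "6.0.5"
  users:
    - name: app-user
      db: admin
      passwordSecretRef:
        name: app-user-password    # references a pre-created Secret
      roles:
        - name: readWrite
          db: appdb
  statefulSet:          # optional overrides passed through to the StatefulSet
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            resources:
              requests:
                storage: 10Gi
```

The operator watches this single object and reconciles the StatefulSet, services, users and volumes from it, which is the automation Nic refers to.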
So, what we’re going to do now: let me just connect to the database once to show you the current situation. I can connect to the database from another pod, kind of simulating your application connecting to the database. Remember, I’ve got three pods: one primary and two secondary replicas for the database. You can see it here, ID 0. We have the stable identity provided by the StatefulSet: the first pod gets ordinal number 0, then the next pods will be dash one, dash two, dash three, and so on. So pod zero is a secondary, pod one is the primary, and pod two is another secondary. This is what our database is composed of right now.
So, what I’m going to do is just scale it. Again, in a declarative fashion, and it’s going to take some time to provision. While it’s provisioning, Guy, maybe you can show what’s happening — oh, not two, sorry. I cannot talk and write at the same time. Apparently, only women can do that; that’s what my wife says. We men are not able to do these kinds of things. So, number five, okay. I’ve got my screen here, good. Now it’s going to apply the configuration, and I should see a new pod in the creating state. You see it here. So, Guy, over to you.
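The scale-up itself is just a one-line change to the custom resource, re-applied declaratively (using the illustrative `members` field from the sketch above):

```yaml
spec:
  members: 5   # was 3; the operator adds pods -3 and -4 as new secondaries
```

After a `kubectl apply -f` of the updated manifest, the operator reconciles the underlying StatefulSet and the new pods initialize and join the replica set on their own.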
Guy: Sure, yeah. Now I’m going to share my screen and we will see the deployment in Komodor. Just a second, let me find the right tab. Okay, so this is the main screen of Komodor, where we can see all the services and their health. We can see all the clusters and namespaces, which services are healthy or unhealthy, and we can filter by them, as well as by any kind of runnable resource, for example Deployments, StatefulSets and DaemonSets. Now I want to drill down into the example MongoDB, which is a StatefulSet, and see what’s going on.
So, here we can see the service, and we can see it’s healthy right now. If it was unhealthy, we would be able to spot it here very fast with the red cube. We can also see the history of the service, which is very important when we are looking at an incident that has been running for a long time and we want to see everything related to it. If we connect more systems to Komodor, we will be able to see incidents from them too. For example, if you have a Datadog alert, we will be able to see it on the timeline of the same service, which is very useful. If I drill down into the deploy, I can see the deploy metadata and what actually changed. In this deploy you increased the replicas from 3 to 4, which is nice, and we can spot it here.
We can actually see all the definitions, with a diff of what changed, which is very nice. If the deployment is tracked in GitHub, we can see the corresponding commit changes in GitHub as well, so you can correlate the configuration change with the GitHub change. And what we can see —
Nic: Yeah, it’s really nice, right, because it helps you correlate the app with the platform, which is pretty impressive.
Guy: Yeah, we even have the option to add a few repos, so you can see both the app repo changes and the infra repo changes — for example if you have a common or shared chart or templates for the whole company, it’s very nice to see the impact. I think what’s nice here is that you’re able to track changes. It’s very difficult in Kubernetes right now to know what changed a few days ago and who did it. So, I can move back through all the deployments and all the changes. And let’s say I have an issue and I want to drill down and see what’s running in my system right now.
So, I click on ‘Pod status and logs’ and I can see all the pods that are running in the system right now, without using any kubectl. I find it difficult to move between clusters: run a kubectl command, see the pods, then move to another namespace and run the command again, maybe while comparing two clusters where one is healthy and one is not. Here I can see all the running pods and how many restarts they’ve had.
Nic: Yeah, so now we see that we have the five MongoDB pods. And I believe you guys also have related services — if you want to jump to another microservice that is related to it, you could do that as well, right?
Guy: Let’s say I have another application here, for example the Kubernetes Watcher. I’m able to bring other deploys into the same timeline, so if I have a correlated incident, I can see both in two different timelines. I can add a lot of things to it and go back, say, seven days. I can see the Kubernetes Watcher was unhealthy for a bit, and all the deploys for the operator too, so I can see the operator deployment. Then when I move to pod status and logs, I can see the describe output of the pod, which is very useful to understand what happened when we configured the CRD and what went into our StatefulSet. That’s not easy to do otherwise.
So, we can see everything here. We can see the events of each container, which is nice, especially with CrashLoopBackOff and things like that. And what is very nice is that we can see the logs of the running container. You can drill down into a specific container and see what’s going on inside it. For example, we added two new replicas and we want to verify that they were added fine. Basically, I want to search the logs: if I have an error, I want to see it in the log without moving into a separate logging system. And if the pod was killed or restarted, I am able to see the logs of the previous pod, which is very useful when you have a problem and the pod was killed or exited with some exit code, and you can tail the logs of the last one. That’s something a lot of customers are using.
And I like that Komodor brings a very nice way to navigate through databases without needing to be a database expert.
Nic: Yeah, I think when you migrate your database to Kubernetes, you don’t want the extra burden of having to learn everything again — both the database and the platform — so a tool like that is pretty neat. Now, just to double-check that everything is okay, let me share my screen again. In terms of the database, with Komodor we’ve seen that everything from the infra side is okay. In the container logs, we also saw that from a database perspective we should have the new replicas as well. So, let’s just check that again: we should have five replicas in total, right?
So, you can see it here: zero is a secondary, one is the primary — oops — two is a secondary, and now we have three and four as secondaries as well. So, I just changed one line and I’ve got more read replicas. When we talk about day-two operations and housekeeping around the database, it’s pretty easy. Now, I just want to show the Ondat storage configuration side. You can see that, because I did some scale-ups and scale-downs — remember when I said that with a StatefulSet you have a stable identity and you’re always attached to the same PVC — when I scale down, which kills the pod, and scale up again, it reattaches the same PVC. And because I don’t care where the pod is, I’ve got many different replicas on the Ondat side.
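The stable PVC attachment Nic mentions comes from the StatefulSet’s `volumeClaimTemplates`: each ordinal pod gets its own claim, named `<template>-<statefulset>-<ordinal>`, and re-creating that pod reattaches the same claim. A minimal excerpt, with illustrative names (the storage class in particular is an assumption, not the demo’s actual class):

```yaml
# Illustrative StatefulSet excerpt: replica N gets and keeps the PVC
# data-volume-example-mongodb-N across restarts and scale cycles.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-mongodb
spec:
  serviceName: example-mongodb-svc
  replicas: 5
  selector:
    matchLabels:
      app: example-mongodb
  template:
    metadata:
      labels:
        app: example-mongodb
    spec:
      containers:
        - name: mongod
          image: mongo:6.0
  volumeClaimTemplates:
    - metadata:
        name: data-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ondat-replicated   # illustrative class name
        resources:
          requests:
            storage: 10Gi
```

With a storage layer that replicates the volume itself, the claim can be satisfied from any node, which is the point Nic makes next about pods restarting anywhere.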
It can be restarted anywhere, right, which is not the case otherwise: without Ondat you would struggle to do this kind of thing. Same with EBS: if you have an EBS volume in your EKS cluster, you have to use some sort of snapshot to recover the disk onto another node in another AZ, that kind of thing. So yeah, we simplify a lot of that. Okay, what time is it — yeah, 10 minutes left, I think we’re doing good. Let’s go back to the slides for the last piece of content for today.
So, hopefully you learned a thing or two today. We’ve demonstrated some of the Kubernetes primitives and some of the ecosystem components you can use to deploy stateful apps, and in particular why they’re useful for databases. When you deploy your database in production, make sure to pay attention to all the features we’ve mentioned, especially around cost, operations, troubleshooting, performance and encryption. Try to leverage Kubernetes’ automation capabilities and the operator framework — or should I say paradigm. And also, try to collect your logs and all the data that’s relevant for instrumentation.
Try to use them in a meaningful way, and for that Komodor will for sure bring a lot of value: a single place where you have access to everything — the history, and the correlation between incidents. So again, you’re going to save a lot of time on operations, and day-two operations in particular. I don’t know, Guy, if you want to add something, and then we can open it up for questions as well.
Okay, so if you want to get started, this is where you want to go: ‘ondat.io’ for Ondat. And for Komodor, you can go there and sign up for a free trial. I’m sure they will be happy to give you a free license to test, right, Guy?
Guy: Yeah, sure. Just contact us and we will reach out to you and we’ll do a free trial for you.
Nic: Okay. So, do we have any questions? Okay, here’s one: “I have an issue where my DB has very high latency, and I see it’s because some of my pods are down and getting kicked out due to lack of memory. Will Komodor alert me on it? How can I find the root cause of this?” So, that’s for you, I guess.
Guy: Yeah, actually we have a very nice feature called Workflows, which helps you troubleshoot out-of-memory issues very fast. When any of your pods hits an out-of-memory condition, we run automated checks for you: we check the quality of service, the node statuses to verify the pod has enough memory to run, the pod’s limits, and configuration changes, plus a few other checks. Everything comes to you in one specific health event, so you get all the digested information in the right place.
So, whatever the actual reason is, we will help you figure it out as fast as we can, with all the automated checks built in. It’s a very nice feature; you can find it on our YouTube channel. And if you sign up for a free trial, we will show you a real live demo, even on your own system.
Nic: Yeah, I think that’s a good use case, so definitely something to follow up on. Okay, before we close — I don’t know, guys and girls, if you have more questions, it’s now or never. Seriously, you can reach us on social media, our website, anything, right? Same for you, Guy.
Guy: Yeah, sure. I think we’re both very communicative companies, so any channel — social media, our website — for both of us it will be fine.
Nic: Okay, let’s give it one last try. Going once, going twice. Okay, I think we can close about five minutes early, so that’s perfect — great timing. Thank you again for joining this webinar today. Hopefully, you’ve learned a thing or two, or more. And yeah, we’ll see you next time.
Guy: Yeah, thank you very much. Thank you very much, Nic, and thank you to everyone who came to the webinar.
Nic: See you. Thanks, Guy.