kubectl is the Kubernetes command-line tool that lets you run commands against Kubernetes clusters to deploy applications and inspect or modify cluster resources.
Containers and pods do not always terminate when an application fails, and in such cases you need to explicitly restart the Kubernetes pods. There is no kubectl restart pod command, but there are a few ways to achieve the same result using other kubectl commands.
We’ll describe the pod restart policy, which is part of a Kubernetes pod template, and then show how to manually restart a pod with kubectl.
This is part of a series of articles about the kubectl cheat sheet.
Every Kubernetes pod follows a defined lifecycle. It starts in the Pending phase and moves to Running once it has been scheduled and at least one of its primary containers has started successfully. It then ends in the Succeeded phase if all of its containers terminate successfully, or in the Failed phase if any container terminates with an error.
While the pod is running, the kubelet can restart each container to handle certain errors. Within the pod, Kubernetes tracks the state of the various containers and determines the actions required to return the pod to a healthy state.
A pod cannot repair itself: if the node where the pod is scheduled fails, Kubernetes deletes the pod. Similarly, pods cannot survive evictions caused by a lack of resources or by node maintenance. Instead, Kubernetes uses controllers, which provide a higher-level abstraction for managing pod instances.
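To see where a pod currently sits in this lifecycle, you can query its phase and restart counts directly. A minimal sketch, assuming a pod named demo-pod in the current namespace:

# Print the pod's lifecycle phase (Pending, Running, Succeeded, Failed, or Unknown)
kubectl get pod demo-pod -o jsonpath='{.status.phase}'

# Print the restart count of each container in the pod
kubectl get pod demo-pod -o jsonpath='{.status.containerStatuses[*].restartCount}'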
You can control a pod's restart policy through the restartPolicy field in the pod spec, at the same level at which you define the containers:
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-restartpolicy-job
spec:
  backoffLimit: 2
  template:
    metadata:
      name: demo-restartpolicy-pod
    spec:
      containers:
        - name: demo
          image: sonarsource/sonar-scanner-cli
      restartPolicy: Never
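To try the manifest out, you could save it to a file and apply it; here is a rough sketch assuming it is saved as demo-job.yaml (the job-name label is added automatically to the pods the Job creates):

# Create the Job from the manifest above
kubectl apply -f demo-job.yaml

# Watch the pods the Job creates; with restartPolicy: Never, failed containers are not restarted in place
kubectl get pods -l job-name=demo-restartpolicy-job -w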
You define the restart policy at the pod level, alongside the containers field, and it applies to all containers in the pod. You can set the policy to one of three options:
Always: restart the container whenever it terminates, even if it exited successfully.
OnFailure: restart the container only if it exits with a non-zero status.
Never: do not restart the container after it terminates.
If you don’t explicitly set a value, the kubelet will use the default setting (Always). Remember that the restart policy only refers to container restarts by the kubelet on a specific node.
If a container continues to fail, the kubelet will delay the restarts with exponential backoffs—i.e., a delay of 10 seconds, 20 seconds, 40 seconds, and so on for up to 5 minutes. After a container has been running for ten minutes, the kubelet will reset the backoff timer for the container.
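When this back-off is in effect, the pod typically shows a CrashLoopBackOff status. A quick way to observe it, assuming a failing pod named demo-pod:

# Watch the STATUS and RESTARTS columns change as the kubelet retries the container
kubectl get pod demo-pod -w

# The Events section shows the increasing back-off delays between restart attempts
kubectl describe pod demo-pod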
You may need to restart a pod for several reasons, for example to apply a configuration change, to recover a pod that is stuck in an error state or unresponsive, or to force it to pull an updated container image.
Itiel Shwartz, Co-Founder & CTO
In my experience, here are tips that can help you better handle pod restarts using kubectl:
Restart deployments gracefully with kubectl rollout restart deployment <name>.
Create scripts to automate restarts for multiple resources (see the sketch after this list).
Investigate pod logs and events before restarting to understand underlying issues.
Ensure your readiness probes are correctly configured to avoid unnecessary restarts.
Use monitoring tools to observe the impact of restarts on resource utilization.
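As a rough illustration of the scripting tip above, the loop below restarts a few deployments in sequence and waits for each rollout to complete. The deployment names and namespace are placeholders:

#!/bin/sh
# Restart each listed deployment and wait for its rollout to finish (names are assumptions)
for deploy in demo-api demo-worker; do
  kubectl rollout restart deployment "$deploy" -n demo-namespace
  kubectl rollout status deployment "$deploy" -n demo-namespace
done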
It is possible to restart Docker containers with the following command:
docker restart container_id
However, Kubernetes has no equivalent command for restarting pods, and you may not always have the original YAML manifest at hand to recreate them. The alternative is to use other kubectl commands to restart Kubernetes pods.
One way is to change the number of replicas of the pod that needs restarting through the kubectl scale command.
To restart a Kubernetes pod through the scale command:
Use the following command to scale the deployment down to zero replicas, which terminates its existing pods:
kubectl scale deployment demo-deployment --replicas=0
Then scale the deployment back up so the controller creates new pods:
kubectl scale deployment demo-deployment --replicas=1
Finally, verify that the new pods are up and running:
kubectl get pods
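Keep in mind that scaling to zero takes the workload offline until it is scaled back up, and the example assumes the deployment normally runs one replica. If it runs more, you may want to record the current replica count first so you can restore it exactly:

# Print the deployment's desired replica count before scaling it down
kubectl get deployment demo-deployment -o jsonpath='{.spec.replicas}'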
Related content: Read our guide to kubectl scale deployment and kubectl logs.
To restart Kubernetes pods with the rollout restart command:
Use the following command to restart the deployment's pods:
kubectl rollout restart deployment demo-deployment -n demo-namespace
The command triggers a rolling restart: the controller terminates the existing pods gradually, one by one, and uses the ReplicaSet to scale up replacement pods, so the workload stays available. The process continues until every pod in the deployment is newer than the moment the restart was requested.
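To follow the progress of the rolling restart, you can watch the rollout status and the pods being replaced, using the same deployment and namespace as in the example above:

# Block until the rolling restart completes
kubectl rollout status deployment demo-deployment -n demo-namespace

# Watch old pods terminate while their replacements start
kubectl get pods -n demo-namespace -w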
To restart Kubernetes pods with the delete command:
Use the following command to delete the pod API object:
kubectl delete pod demo-pod -n demo-namespace
Since the Kubernetes API is declarative, deleting the pod object contradicts the desired state declared by its controller. Hence, the controller recreates the pod to bring the cluster back in line with that desired state. Note that this only works for pods managed by a controller such as a ReplicaSet or Deployment; a standalone pod is simply deleted.
The above command can restart a single pod at a time. For restarting multiple pods, use the following command:
kubectl delete replicaset demo-replicaset -n demo-namespace
The above command deletes the entire ReplicaSet of pods and recreates them, effectively restarting each one.
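If you prefer to restart several pods at once without deleting the ReplicaSet object itself, you could also delete them by label selector; the controller recreates them immediately. The app=demo label here is an assumption about how the pods are labeled:

# Delete every pod carrying the assumed label; the ReplicaSet recreates them
kubectl delete pods -l app=demo -n demo-namespace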
A different approach to restarting Kubernetes pods is to update one of their environment variables. Because this changes the deployment's pod template, the pods automatically restart once the change goes through.
To restart Kubernetes pods through the set env command:
kubectl set env deployment nginx-deployment DATE=$(date)
The command sets the DATE environment variable to the current date and time, which updates the deployment's pod template and triggers a rolling restart. The existing pods move to the Terminating status while the replacement pods come up in the Running status. You can then confirm that the new pods carry the updated DATE value with kubectl describe.
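After the rollout finishes, kubectl set env can also list the environment variables now defined on the deployment, which lets you confirm the change without inspecting individual pods:

# List the environment variables set on the deployment's containers
kubectl set env deployment/nginx-deployment --list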
The troubleshooting process in Kubernetes is complex and, without the right tools, can be stressful, ineffective and time-consuming. Some best practices can help minimize the chances of things breaking down, but eventually something will go wrong – simply because it can.
This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.
Komodor acts as a single source of truth (SSOT) for all of your K8s troubleshooting needs.
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.