How to Restart Kubernetes Pods & Containers with kubectl

What Is kubectl Restart Pod?

kubectl is the Kubernetes command-line tool that lets you run commands against Kubernetes clusters to deploy applications and inspect and modify cluster resources.

Containers and pods do not always terminate when an application fails. In such cases, you need to explicitly restart the Kubernetes pods. There is no such command “kubectl restart pod”, but there are a few ways to achieve this using other kubectl commands.

We’ll describe the pod restart policy, which is part of a Kubernetes pod template, and then show how to manually restart a pod with kubectl.

This is part of a series of articles about the kubectl cheat sheet.

Kubernetes Pod Restart Policy

Every Kubernetes pod follows a defined lifecycle. It starts in the “pending” phase and moves to “running” once at least one of its primary containers has started successfully. Next, it goes to the “succeeded” or “failed” phase based on the success or failure of the containers in the pod.

While the pod is running, the kubelet can restart each container to handle certain errors. Within the pod, Kubernetes tracks the state of the various containers and determines the actions required to return the pod to a healthy state.

A pod cannot repair itself—if the node where the pod is scheduled fails, Kubernetes will delete the pod. Similarly, pods cannot survive evictions caused by a lack of resources or by node maintenance. Instead, Kubernetes uses controllers, which provide a higher-level abstraction for managing pod instances.

You can control a container’s restart policy through the pod spec’s restartPolicy field, defined at the same level as the containers:

apiVersion: batch/v1
kind: Job
metadata:
  name: demo-restartpolicy-job
spec:
  backoffLimit: 2
  template:
    metadata:
      name: demo-restartpolicy-pod
    spec:
      containers:
      - name: demo
        image: sonarsource/sonar-scanner-cli
      restartPolicy: Never

The restart policy is defined at the pod level and applies to all containers in the pod. You can set it to one of three options:

  • Always—the pod must always be running, so Kubernetes creates a new container whenever an existing one terminates.
  • OnFailure—the container only restarts if it exits with a return code other than 0. Containers that return 0 (successful) do not require restarting.
  • Never—the container does not restart.

If you don’t explicitly set a value, the kubelet will use the default setting (Always). Remember that the restart policy only refers to container restarts by the kubelet on a specific node.
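If you are not sure which policy a running pod uses, you can query it directly. The pod name below is illustrative; substitute the name reported by kubectl get pods:

kubectl get pod demo-restartpolicy-pod -o jsonpath='{.spec.restartPolicy}'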

If a container continues to fail, the kubelet will delay the restarts with exponential backoffs—i.e., a delay of 10 seconds, 20 seconds, 40 seconds, and so on for up to 5 minutes. After a container has been running for ten minutes, the kubelet will reset the backoff timer for the container.
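To observe the backoff in action, you can watch the pod’s restart count and then review its events, which will include back-off messages such as CrashLoopBackOff. The pod name below is illustrative; substitute one of your own pods:

kubectl get pod demo-restartpolicy-pod --watch

kubectl describe pod demo-restartpolicy-pod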

You may need to restart a pod for the following reasons:

  • Unexpected resource usage or software behavior—for example, if a container with a 600Mi memory limit tries to allocate more memory, the pod will terminate with an out-of-memory (OOM) error. In this case, you must restart the pod after changing the resource specification (see the example after this list).
  • A pod stuck in a shutdown state—this issue occurs when a pod continues to function when all its containers have terminated. It is usually the result of a cluster node shutting down unexpectedly and the controller or cluster scheduler failing to clean up the pods on the node.
  • Errors—you may need to terminate pods with unfixable errors.
  • Timeouts—the pod has exceeded the scheduled time.
  • Requesting an unavailable persistent volume—the pod cannot function as intended.
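For the out-of-memory case above, the usual fix is to raise the container’s memory limit before restarting the pod. The manifest below is only a sketch; the names, image, and values are examples:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: sonarsource/sonar-scanner-cli
    resources:
      requests:
        memory: "600Mi"
      limits:
        memory: "1Gi"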
 

Tips from the expert

Itiel Shwartz

Co-Founder & CTO

Itiel is the CTO and co-founder of Komodor. He’s a big believer in dev empowerment and moving fast, and has worked at eBay, Forter, and Rookout (as the founding engineer). Itiel is a backend and infra developer turned “DevOps”, and an avid public speaker who loves talking about cloud infrastructure, Kubernetes, Python, observability, and R&D culture.

In my experience, here are tips that can help you better handle pod restarts using kubectl:

Use kubectl rollout restart

Restart deployments gracefully with kubectl rollout restart deployment <name>.

Automate with scripts

Create scripts to automate restarts for multiple resources.
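For example, a short script along these lines restarts every deployment in a namespace; the namespace name is an example:

for deploy in $(kubectl get deployments -n demo-namespace -o name); do
  kubectl rollout restart "$deploy" -n demo-namespace
done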

Check for errors

Investigate pod logs and events before restarting to understand underlying issues.

Leverage readiness probes

Ensure your readiness probes are correctly configured to avoid unnecessary restarts.
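A probe along these lines is a common starting point; the container name, image, path, port, and timings are illustrative and should match your application:

containers:
- name: demo
  image: nginx
  readinessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10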

Monitor resource impact

Use monitoring tools to observe the impact of restarts on resource utilization.

4 Ways to Restart Kubernetes Pods Using kubectl

It is possible to restart Docker containers with the following command:

docker restart container_id

However, Kubernetes has no equivalent command to restart pods directly, especially if the pod was not created from a dedicated YAML file. The alternative is to use other kubectl commands to restart Kubernetes pods.

The Kubectl Scale Replicas Command

One way is to change the number of replicas of the deployment that manages the pod, using the kubectl scale command.

To restart a Kubernetes pod through the scale command:

  1. Use the following command to set the number of the pod’s replicas to 0:
    kubectl scale deployment demo-deployment --replicas=0
    The command will turn the Kubernetes pod off.
  2. Use the following command to set the number of the replicas to a number more than zero and turn it on:
    kubectl scale deployment demo-deployment --replicas=1

    The command creates new replicas of the pod that the previous command destroyed. However, the new replicas will have different names.
  3. Use the following command to check the status and new names of the replicas:
    kubectl get pods
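If you want to be certain the old pods have terminated before scaling back up, you can combine the two steps with kubectl wait. The deployment name and label selector below are examples:

kubectl scale deployment demo-deployment --replicas=0
kubectl wait --for=delete pod -l app=demo --timeout=60s
kubectl scale deployment demo-deployment --replicas=1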

Related content: Read our guide to kubectl scale deployment and kubectl logs.

The Kubectl Rollout Restart Command

To restart Kubernetes pods with the rollout restart command:

Use the following command to restart the pod:

kubectl rollout restart deployment demo-deployment -n demo-namespace

The command instructs the controller to terminate the pods gradually while the ReplicaSet scales up new ones. The rollout continues until every running pod is newer than the pods that existed when the command was issued.
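You can follow the progress of the restart with kubectl rollout status, which waits until the rollout has completed:

kubectl rollout status deployment demo-deployment -n demo-namespace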

The Kubectl Delete Pod Command

To restart Kubernetes pods with the delete command:

Use the following command to delete the pod API object:

kubectl delete pod demo-pod -n demo-namespace

Since the Kubernetes API is declarative, deleting the pod object contradicts the desired state. The controller therefore recreates the pod to restore consistency with that desired state.

The above command can restart a single pod at a time. For restarting multiple pods, use the following command:

kubectl delete replicaset demo-replicaset -n demo-namespace

The above command deletes the entire ReplicaSet of pods and recreates them, effectively restarting each one.
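If you prefer not to delete the ReplicaSet itself, you can instead delete every pod that matches a label selector; the selector below is an example:

kubectl delete pod -l app=demo -n demo-namespace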

The Kubectl Set Env Command

A different approach to restarting Kubernetes pods is to update one of the deployment’s environment variables. Because this changes the pod template, the pods restart automatically once the update is applied.

To restart Kubernetes pods through the set env command:

  1. Use the following command to set the environment variable:
    kubectl set env deployment nginx-deployment DATE=$()

    The above command sets the DATE environment variable to an empty value. The pods restart as soon as the deployment gets updated.
  2. Use the following command to retrieve information about the pods and confirm they are running: kubectl get pods. The output will show the old pods with a status of Terminating and the new ones with a status of Running.
  3. Run the following command to check that the DATE environment variable was updated (replace the pod name with one of the new pods):
    kubectl describe pod <pod-name>
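Alternatively, you can list the environment variables recorded in the deployment spec to confirm the change; the deployment name matches the example above:

kubectl set env deployment/nginx-deployment --list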

Kubernetes Troubleshooting with Komodor

The troubleshooting process in Kubernetes is complex and, without the right tools, can be stressful, ineffective and time-consuming. Some best practices can help minimize the chances of things breaking down, but eventually something will go wrong – simply because it can.

This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.

Acting as a single source of truth (SSOT) for all of your k8s troubleshooting needs, Komodor offers:

  • Change intelligence: Every issue is a result of a change. Within seconds we can help you understand exactly who did what and when.
  • In-depth visibility: A complete activity timeline showing all code and config changes, deployments, alerts, code diffs, pod logs, and more, all within one pane of glass with easy drill-down options.
  • Insights into service dependencies: An easy way to understand cross-service changes and visualize their ripple effects across your entire system.
  • Seamless notifications: Direct integration with your existing communication channels (e.g., Slack) so you’ll have all the information you need, when you need it.

If you are interested in checking out Komodor, use this link to sign up for a Free Trial.
