
Kubernetes Pod Statuses Explained

Kubernetes pod output showing different states of readiness and status

Deploying applications on Kubernetes can feel like deciphering a new language. After running a deployment, you often use kubectl get pods to check on your Pods, only to be greeted by cryptic status values like Running, Pending, CrashLoopBackOff, or 0/1 in a “Ready” column. What do all these statuses mean? In this guide, we’ll decode each Pod status and the “Ready” column in plain terms, so you can confidently understand what’s happening with your Pods.


Understanding the kubectl get pods output

When you run kubectl get pods, Kubernetes displays a table of your Pods with a few key columns:

  • NAME tells you which Pod you are looking at. See the next section to identify whether it is standalone, created by a Deployment or StatefulSet, or a static Pod.

  • READY is a fraction: ready containers / total containers in the Pod. 1/1 means the only container is ready to receive traffic; 0/1 means it is not ready yet.

  • STATUS is a human‑friendly summary. Sometimes it shows a Pod phase like Running or Pending. Other times it shows a reason like CrashLoopBackOff or Terminating. Do not confuse the display Status with the Pod’s internal phase field.

$ kubectl get pods
NAME                                 READY   STATUS             RESTARTS   AGE
api-deployment-75675f5897-qqcnn      1/1     Running            0          2m
web-0                                1/1     Running            0          3m
init-demo                            0/1     Init:0/1           0          10s
img-bad                              0/1     ImagePullBackOff   0          35s
api-fail                             0/1     CrashLoopBackOff   5          3m
batch-job                            0/1     Completed          0          20m
oldpod                               0/1     Terminating        0          5s

Decoding the NAME column: how to tell what created a pod

The reliable method: check the owner

Every Kubernetes object can record its owner. Pods created by controllers such as ReplicaSets or StatefulSets include an ownerReferences entry. Use it to see the controller that “owns” the Pod:

# Show the controlling owner kind and name 
kubectl get pod <pod-name> \
  -o jsonpath='{.metadata.ownerReferences[?(@.controller==true)].kind}{" "}{.metadata.ownerReferences[?(@.controller==true)].name}{"\n"}'

  • Empty output means the Pod is standalone.

  • Output shows ReplicaSet for Pods managed by a Deployment’s ReplicaSet.

  • Output shows StatefulSet for Pods managed by a StatefulSet.

  • You might also see Job, DaemonSet, and others. Owners and dependents are a first‑class concept in Kubernetes.

kubectl describe pod <pod> also prints a Controlled By line that points to the owning controller.
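For instance, running that command against the Deployment-managed Pod from the sample output above would print something like this (the ReplicaSet suffix is whatever hash your Deployment generated):

$ kubectl get pod api-deployment-75675f5897-qqcnn \
  -o jsonpath='{.metadata.ownerReferences[?(@.controller==true)].kind}{" "}{.metadata.ownerReferences[?(@.controller==true)].name}{"\n"}'
ReplicaSet api-deployment-75675f5897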


Quick visual hints in the NAME

These patterns are common, but always verify with ownerReferences.


Pod deployment categories with example names

Why pods look this way

  • Deployments create ReplicaSets. ReplicaSet names are [deployment-name]-[random-string] derived from the pod-template-hash. Pods inherit that RS name with an extra suffix for uniqueness.

  • StatefulSets give each Pod a stable identity and ordinal index.

  • Kubelet creates a mirror Pod on the API server for each static Pod and suffixes the name with the node hostname. The mirror Pod has annotation kubernetes.io/config.mirror.

  • If you create a Pod directly there is no controller owner. With generateName, the API server appends a unique suffix to your prefix.
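As a quick illustration, here is a minimal standalone Pod manifest that uses generateName (the prefix and image are placeholders). Because generateName needs the API server to pick the final name, create it with kubectl create -f rather than kubectl apply:

apiVersion: v1
kind: Pod
metadata:
  generateName: debug-    # the API server appends a random suffix, e.g. debug-x7k2p
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]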


Namespaces and kubectl get pods

By default, kubectl get pods shows Pods only in the current namespace. You can:

Target a specific namespace

kubectl get pods -n backend
# - or - 
kubectl get pods --namespace backend

List Pods across all namespaces

kubectl get pods -A

-A is shorthand for --all-namespaces.


Set a default namespace for your current context

kubectl config set-context --current --namespace=backend
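To confirm which namespace the current context now points to, one quick check is:

# Print the namespace of the current context (empty output means "default")
kubectl config view --minify --output 'jsonpath={..namespace}'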

To learn more, review the Kubernetes documentation on namespaces.


The READY Column: 0/1 vs 1/1 (What Does It Mean?)

When you look at the READY column in kubectl get pods, you are seeing how many containers inside the Pod are currently marked as ready to serve traffic compared to the total number of containers.

What the numbers mean

  • 1/1: The container is running and has passed readiness checks, so Kubernetes will route traffic to it.

  • 0/1: The container is not ready. This can happen if it is still starting, failing a probe, or restarting after a crash.

  • 1/2 or more: In multi-container Pods, some containers are ready and others are not. Kubernetes only considers the Pod as a whole Ready when all of its containers report Ready.


This explains why you can see a Pod with STATUS: Running but READY: 0/1. Running means the process exists on a node. Ready is a separate condition that decides if the Pod should receive requests. Kubernetes Services only send traffic to Ready Pods, so a Running Pod with 0/1 is alive but not serving until it passes its readiness checks.

Readiness is managed through probes such as HTTP checks, TCP checks, or commands. If a probe fails, the Pod is removed from Service endpoints until it recovers. Without probes, Kubernetes assumes a container is Ready as soon as it starts, but adding probes gives much better control over when your application is trusted to handle real traffic.
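As a minimal sketch, an HTTP readiness probe in a container spec might look like this; the image, path, port, and timings are illustrative assumptions, not values from this article:

containers:
- name: api
  image: my-registry/api:1.0        # hypothetical image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz                # health endpoint your app exposes
      port: 8080
    initialDelaySeconds: 5          # wait before the first check
    periodSeconds: 10               # re-check every 10 seconds
    failureThreshold: 3             # mark NotReady after 3 consecutive failures

Until this probe passes, the Pod shows 0/1 in READY and Services skip it.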


Tech Stacks Tip callout:

Running does not always mean Ready

A Pod may show STATUS: Running with READY: 0/1. The container is alive on the node, but Kubernetes will not send traffic until readiness checks pass. Think of Running as engine on and Ready as safe to drive.


Pending: Pod is Waiting to Start

A Pod in Pending has been accepted by the cluster but is not yet running. At this stage none of its containers are active.

Why a Pod might be Pending

  • Scheduling delay: The scheduler has not yet found a node with enough free CPU or memory to run the Pod.

  • Image pulling: The Pod has been scheduled but the node is still downloading its container image. Large images or slow networks often make this phase longer.


For example, a Pod showing 0/1 Pending most likely means the container has not started yet: the node may still be pulling the image or waiting for resources. If a Pod stays Pending for too long, the next step is to run kubectl describe pod <name> and review the events. The most common issues are insufficient resources or unbound PersistentVolumeClaims.
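One common way to end up Pending is to request more CPU or memory than any node can provide. The sketch below is deliberately oversized; the name, image, and request values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: too-big
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "64"        # 64 full cores, more than most nodes offer
        memory: 256Gi    # likewise far beyond a typical node

kubectl describe pod too-big should then show a FailedScheduling event explaining that no node has enough CPU or memory.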

You may also see ContainerCreating while a Pod is Pending. This indicates that Kubernetes is actively pulling the image, unpacking it, and creating the container. If a Pod remains stuck in ContainerCreating, it usually points to an issue such as an unreachable image registry or a failed volume mount.


Running: Pod is Up and (Mostly) Running

Running is the state most people want to see. It means the Pod has been scheduled to a node and at least one of its containers is active. The official definition is: “The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.”


When you see STATUS: Running, it usually means:

  • The Pod was successfully scheduled to a node.

  • All init containers have completed and the main containers have started.

  • At least one container is running, or in the process of starting or restarting.


For example, web-0 from the earlier output shows 1/1 Running, which tells us the container is active and has passed readiness checks, so Kubernetes will route traffic to it. If the same Pod showed 0/1 Running, the container process would be alive but not yet Ready, often because the application is still warming up.

It is important to remember that Running does not always mean Ready. Running only confirms the process exists on a node. Ready is a separate condition that controls whether Kubernetes considers the Pod available for traffic. A Pod will remain in Running until it exits successfully, fails, or enters a state like CrashLoopBackOff if it keeps crashing.


Succeeded (Completed): Pod Finished Successfully

A Pod enters the Succeeded phase when all of its containers have exited successfully with an exit code of 0 and are not going to restart. In kubectl get pods, you will usually see the status reported as Completed. This status most often appears in Jobs or CronJobs, which are designed to run to completion and then stop.


Key points about Completed Pods

  • All containers in the Pod exited normally.

  • The Pod will not be restarted because the restart policy is usually set to Never or OnFailure.

  • The READY column shows 0/1 (or 0/N for multi-container Pods) because there are no active containers left.


For example, the Pod batch-job from the earlier output showed 0/1 Completed. This indicated the container finished its work and exited cleanly. Since it will not restart, Kubernetes marked the Pod as Completed. The READY value being 0/1 is expected here because nothing is running anymore.
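A minimal Job that ends in this state might look like the sketch below. The name and command are placeholders, and Job Pods normally get a random suffix appended to the Job name; ttlSecondsAfterFinished is optional and enables the automatic cleanup described below.

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  ttlSecondsAfterFinished: 3600   # optional: auto-delete the Job and its Pod an hour after it finishes
  template:
    spec:
      restartPolicy: Never        # Jobs require Never or OnFailure
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo doing work; exit 0"]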

The Kubernetes documentation explains it clearly: “All containers in the Pod have terminated in success, and will not be restarted.” You can think of Completed as Kubernetes confirming that the Pod’s mission is accomplished and its job is done.

Completed Pods remain in the API even after they finish. They will stay visible in your kubectl get pods output until you clean them up or a TTL controller removes them automatically.

If you prefer not to see them cluttering your list, you can:

  • Manually delete them with kubectl delete pod <name>

  • Use a field selector to view or clean them up:

kubectl get pods --field-selector=status.phase=Succeeded

This makes it easier to separate successful one-off jobs from long-running workloads that should stay active.
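If you want to remove them in bulk rather than just list them, the same field selector works with delete (this only affects the current namespace):

# Delete every Pod in the Succeeded phase in the current namespace
kubectl delete pods --field-selector=status.phase=Succeeded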


Tech Stacks Tip callout:

Completed Pods are safe to delete

A Pod in STATUS: Completed has finished successfully and will not restart. These Pods often come from Jobs or CronJobs. They stay in your cluster until you remove them or a TTL controller cleans them up, but they are no longer serving any purpose.


Failed (Error): Pod Failed to Complete Work

A Pod enters the Failed phase when one or more of its containers terminate unsuccessfully and will not be restarted. In kubectl get pods, this often appears as STATUS: Error. Error means that a container exited with a non-zero status, which usually signals a crash or misconfiguration. Since Kubernetes is not going to restart it, the Pod is marked as failed.


What Error usually indicates

  • The container command or entrypoint failed immediately.

  • The application crashed on startup.

  • An init container failed and prevented the main containers from ever starting.

  • The Pod’s restart policy was set to Never or OnFailure, so Kubernetes gave up after the failure.


For example, a Pod showing 0/1 Error suggests the container exited with a problem, and because the restart policy did not allow retries, Kubernetes stopped there. The READY column shows 0/1 because no containers are running at all.

Troubleshooting starts with checking events and logs. Running kubectl describe pod <name> will show termination reasons and error messages, while kubectl logs <name> gives you the container’s output leading up to the crash. Common causes include missing environment variables, invalid configuration files, or application-level exceptions.

The difference between Error and CrashLoopBackOff is important. In Error, Kubernetes has given up restarting the container. In CrashLoopBackOff, it is still attempting restarts with backoff timing. If the Pod is part of a Deployment or ReplicaSet, the controller will eventually create a replacement Pod. If it is a standalone Pod or a Job with restart policy set to Never, it will simply stay in Error until you delete or replace it.
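A minimal way to see the difference for yourself is to run the same failing command under the two restart policies (the Pod names and command are only illustrative):

# restartPolicy Never: the Pod ends up in Error and stays there
kubectl run fail-once --image=busybox --restart=Never -- sh -c 'exit 1'

# restartPolicy Always (the default for kubectl run): the Pod cycles into CrashLoopBackOff
kubectl run fail-loop --image=busybox -- sh -c 'exit 1'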

The Kubernetes documentation summarizes it clearly: “All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.”


CrashLoopBackOff: Container Keeps Crashing and Restarting

CrashLoopBackOff is one of the most common and misunderstood Pod statuses in Kubernetes. It is not a Pod phase but a waiting state that indicates a container has crashed, Kubernetes tried to restart it, and the restart attempt failed again. The “loop” comes from the repeated crashes, and “backoff” describes Kubernetes delaying each new attempt.

In the earlier example, api-fail showed CrashLoopBackOff with 5 restarts. This tells us the container failed and was restarted five times, and Kubernetes is now waiting before trying again.


What happens under the hood

Kubernetes relies on the kubelet to manage Pod lifecycles on each node. If a container crashes, kubelet follows the Pod’s restart policy to decide what to do next. For Pods with restartPolicy: Always (the default in Deployments), kubelet immediately attempts to restart the container. If that restart also fails, kubelet keeps trying, but with an exponential backoff delay between attempts.


diagram depicting K8s exponential backoff

The delay starts at 10 seconds and doubles after each failed restart (10s, 20s, 40s, 80s, 160s) until it reaches the maximum of 300 seconds (5 minutes). Once capped, kubelet will attempt a restart roughly every five minutes.

If a container manages to run for about 10 minutes without crashing, kubelet resets the backoff timer. The next failure starts fresh at 10 seconds rather than immediately jumping back to 5 minutes.


During these waiting periods, kubectl get pods reports the Pod’s status as CrashLoopBackOff, which is Kubernetes’ way of saying “this container is stuck in a restart loop, and we are intentionally slowing down retries.”


Why Kubernetes uses backoff

Without backoff, a container that fails instantly would restart hundreds of times per minute. That kind of thrashing could overwhelm the node, flood your logs, and make debugging nearly impossible. Backoff introduces breathing room, giving you a chance to investigate and fix the root problem.


Common causes of CrashLoopBackOff

CrashLoopBackOff is a symptom, not the cause. Some of the most common triggers include:

  • Application bugs that cause the process to exit immediately

  • Missing dependencies such as config files, secrets, or environment variables

  • Misconfigured command or entrypoint that fails on startup

  • Failing init containers that prevent main containers from ever stabilizing

  • External systems being unavailable, causing the container to exit in error


Minimal reproduction

# A failing pod that exits immediately
kubectl run crashy --image=busybox -- /bin/sh -c 'echo oops; exit 1'

# Watch the backoff grow
kubectl get pod crashy -w

# In another terminal
kubectl describe pod crashy | tail -n 20   # the Events section appears at the end of the output
kubectl logs crashy --previous   # logs from the last crashed attempt

How to troubleshoot

The key to solving CrashLoopBackOff is to investigate the first crash, not the repeated backoff messages. Each new restart wipes the container’s state, so early events and logs are the most useful clues.

  1. Check logs from the last run

kubectl logs <pod> --previous

💡Note: The --previous flag shows the output from the container before it crashed.


  2. Inspect Pod events

kubectl describe pod <pod>

Scroll to the Events section to see restart attempts, exit codes, and backoff timing.

  3. Validate configuration

    Double-check container commands, environment variables, config maps, and secrets.

  4. Review init containers

    If the Pod has init containers, make sure they are completing successfully. Application containers will not start until init containers finish.

  5. Look for external dependencies

    If the container needs a database, API, or file system at startup, confirm that those systems are available.


Key takeaway

CrashLoopBackOff is not an error itself but a signal that something inside your container is repeatedly failing. Kubernetes will keep retrying with progressively longer waits until the container either stabilizes or you fix the root cause. Once the issue is resolved, the Pod will leave the backoff cycle and return to Running.


ImagePullBackOff: Issues Pulling the Container Image

ImagePullBackOff is another status that causes confusion because it looks like an error but is actually Kubernetes telling you it is retrying. This status appears when a Pod cannot start because the container image cannot be pulled from its registry. Kubernetes will keep trying with exponential backoff, just as it does with CrashLoopBackOff.


How it happens

When you first deploy a Pod, Kubernetes schedules it to a node. The kubelet on that node then tries to download the container image. If the image cannot be pulled, the Pod stays stuck. At first you may see ErrImagePull, which is a one-time error. If the problem persists, kubelet transitions the Pod into ImagePullBackOff and begins retrying.

Common reasons include:

  • The image name is incorrect (for example, a misspelled repository or missing tag).

  • The image does not exist in the specified registry.

  • The registry requires authentication, and the cluster does not have the correct imagePullSecrets.

  • The node cannot reach the registry because of network or firewall issues.


Backoff behavior

The retry timing for ImagePullBackOff mirrors CrashLoopBackOff. Kubernetes starts with a 10-second delay, then doubles with each failure until it reaches a maximum of 300 seconds (five minutes). If the pull succeeds later, the backoff resets. While in this state, the container never actually starts, so the READY column shows 0/N and the RESTARTS count remains at 0.


Example

In the earlier example, the Pod img-bad showed 0/1 ImagePullBackOff. The container had not started at all, which meant Kubernetes was still failing to fetch the image. If you ran kubectl describe pod img-bad and scrolled to the Events section, you would likely see messages such as:

Failed to pull image "img-bad:latest": image not found
Error: ErrImagePull
Back-off pulling image "img-bad:latest"

These events usually point directly to the cause, whether it is a missing tag, an authentication error, or an unreachable registry.


How to fix it

  1. Verify the image name and tag

    Make sure the image reference is spelled correctly and points to a tag that exists. Avoid relying on latest in production because it can lead to uncertainty about what version is being deployed.

  2. Check authentication

    If the image is in a private registry, configure a Kubernetes Secret with your credentials and reference it using imagePullSecrets in your Pod or ServiceAccount (see the sketch after this list).

  3. Confirm registry availability

    Test whether nodes can reach the registry endpoint. Network restrictions or firewalls can prevent successful pulls.

  4. Inspect cluster-wide settings

    Some managed Kubernetes environments require extra configuration for private registries. Double-check your cloud provider’s documentation if pulls consistently fail.
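For step 2, a typical private-registry setup looks roughly like this sketch; the registry address, credentials, secret name, and image are all placeholders:

# Create a registry credential Secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword

Then reference the Secret from the Pod spec (or attach it to the ServiceAccount):

apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: app
    image: registry.example.com/team/app:1.0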


Tech Stacks Tip callout:

ImagePullBackOff is not the error itself

It means Kubernetes is retrying to pull the image with exponential backoff. The actual cause is almost always revealed in the Events section of kubectl describe pod.

Key takeaway

Think of ImagePullBackOff as a special case of Pending where the blocker is image download. Kubernetes is essentially saying: “I am trying to start your Pod, but I cannot fetch the image. I will keep retrying with longer waits.” Until the pull succeeds, the Pod will never reach Running.


Terminating: Pod is Shutting Down

When you delete a Pod or scale down a controller such as a Deployment or StatefulSet, the Pod does not disappear instantly. Instead, it enters the Terminating state while Kubernetes shuts it down gracefully. In kubectl get pods, the status column will show Terminating until the cleanup is complete.


What Terminating means

Kubernetes is in the process of removing the Pod. During this phase it:

  • Sends a termination signal to the containers.

  • Runs any configured preStop hooks.

  • Waits for containers to exit.

  • Executes any finalizers defined on the Pod object.


The Pod remains in Terminating until all of these steps finish. Normally this happens quickly, often within the default grace period of 30 seconds, or whatever grace period you have set.
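Both the grace period and a preStop hook live in the Pod spec. The sketch below uses illustrative values; the hook command depends entirely on your application:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-web
spec:
  terminationGracePeriodSeconds: 60   # default is 30 seconds if unset
  containers:
  - name: web
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # placeholder: give a load balancer time to drain connections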


When Terminating takes longer

Most Pods shut down so quickly that you barely notice them in Terminating. But sometimes the process can hang. Common reasons include:

  • A container that ignores or blocks the termination signal.

  • A finalizer that is not completing as expected.

  • A node that has become unresponsive and cannot confirm the shutdown.

  • External resources such as volumes or network storage that are slow to detach.


In these cases the Pod may sit in Terminating for minutes or longer.


Example

In the earlier example, oldpod showed 0/1 Terminating. The Pod had just been deleted and Kubernetes was still finishing cleanup. The READY column showed 0/1 because the container had already stopped.


Forcing deletion

If a Pod stays stuck in Terminating, you can force deletion with:

kubectl delete pod oldpod --force --grace-period=0

This bypasses the normal graceful shutdown and immediately removes the Pod from the API. Force delete should only be used as a last resort because it can leave behind orphaned resources such as mounted volumes or unfinished network cleanup.
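Before forcing deletion, it is worth checking whether a finalizer is what is holding the Pod back:

# List any finalizers still attached to the Pod (empty output means none)
kubectl get pod oldpod -o jsonpath='{.metadata.finalizers}{"\n"}'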


Tech Stacks Tip callout:

Terminating means graceful shutdown

A Pod in Terminating is not failing. Kubernetes is signaling containers to exit, running preStop hooks, and cleaning up resources. If the Pod lingers in this state, check for stuck finalizers or unresponsive nodes. Use --force --grace-period=0 only as a last resort.

Key takeaway

Terminating is not an error. It is a normal part of the Pod lifecycle. A Pod showing Terminating simply means Kubernetes is in the middle of shutting it down. If you see Pods frequently stuck in this state, it usually points to cleanup issues such as persistent volumes that do not detach or webhook finalizers that block deletion.


Init:X/Y – Pods with Init Containers

Another status pattern you will see is the Init: prefix. This appears when a Pod has one or more init containers defined. Init containers run sequentially before any of the main application containers are allowed to start. Their purpose is usually setup work such as preparing configuration, running database migrations, or loading initial data.


k8s init container deployment order

What the status means

While init containers are running, the Pod’s STATUS will show something like Init:0/1 or Init:2/3. The fraction indicates how many init containers have successfully completed out of the total defined.


  • Init:0/1 means no init containers have finished yet.

  • Init:1/2 means the first init container has succeeded, and the Pod is now running the second.

  • Init:7/10 means seven init containers have completed successfully, and three remain.


Only once all init containers have succeeded will Kubernetes move on to starting the main containers. Until then, the Pod is not considered Running, even though work is happening in the background.
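As a concrete sketch, a Pod like init-demo from the sample output could be defined along these lines (the init container, its wait command, and the service name my-database are placeholders). It reports Init:0/1 while the init container loops, then moves to Running once the main container starts:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    # Block until the (hypothetical) my-database Service resolves in DNS
    command: ["sh", "-c", "until nslookup my-database; do echo waiting for db; sleep 2; done"]
  containers:
  - name: app
    image: nginx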


READY during init

While init containers are still running, the READY column will almost always remain 0/1 (or 0/N for multi-container Pods). This is because readiness is only checked for the main application containers, which do not begin until init containers finish. Once init containers succeed, the Pod transitions to Running, and readiness checks then apply to the main containers.


Example

The Crossplane project provides a clear demonstration of this behavior. When one of their Pods starts, it shows Init:0/1 while the init container is preparing the environment, with READY 0/1. A few seconds later, once the init container completes, the Pod transitions to Running with READY 1/1 as the application container comes online.


When things go wrong

Init containers can fail just like regular containers. In that case, you may see:

  • Init:Error if an init container exited unsuccessfully.

  • Init:CrashLoopBackOff if the init container repeatedly crashes and Kubernetes is backing off restarts.


Because application containers never start until init containers succeed, a failing init container will block the Pod indefinitely. Troubleshooting follows the same pattern as other failures: use kubectl logs <pod> -c <init-container-name> and kubectl describe pod <pod> to review errors and events.


Tech Stacks Tip callout:

Init containers must succeed first

A Pod will not start its main containers until all init containers have completed successfully. If you see Init:X/Y for too long, check the init container logs with: kubectl logs <pod> -c <init-container-name>

Key takeaway

Init containers are a normal part of Pod startup when defined. The Init:N/M status simply tells you how many of them have finished. Once they all succeed, the Pod moves forward to start its main workload. If an init container fails, focus your debugging there first because no application containers will launch until the init stage is complete.


Putting It All Together

Pod statuses give you a quick window into what is happening with your workloads. Each status points to a stage in the Pod lifecycle or a condition that needs attention:

  • Pending: The Pod has been accepted but is not running yet. It may be waiting for scheduling or still downloading images.

  • Running: At least one container is active. This is the desired state for long-running Pods, although they may not be Ready yet.

  • Succeeded/Completed: All containers exited successfully. The Pod’s work is done.

  • Failed/Error: One or more containers exited with an error and will not be restarted.

  • CrashLoopBackOff: A container is repeatedly crashing, and Kubernetes is delaying restarts. The RESTARTS count usually increases here.

  • ImagePullBackOff: The Pod cannot start because the container image is not pulling successfully. Check the image name, tag, and credentials.

  • Terminating: The Pod is shutting down after deletion or scale-down and is cleaning up resources.

  • Init:X/Y: The Pod is still running init containers before the main containers begin.

The READY column and statuses

Statuses tell you what stage the Pod is in. The READY column works alongside them to tell you how many containers inside the Pod are ready to serve traffic. A Pod can be Running but show 0/1 in READY if the container is still warming up. Completed or Failed Pods typically show 0/N because none of the containers are active anymore.


What to do when you see an unexpected status

  • Run kubectl describe pod <name> and check the Events section. It often explains exactly why the Pod is stuck, whether it is a failed image pull, a scheduling problem, or a crash.

  • Run kubectl logs <pod> to view container logs. Use -c <container> for multi-container Pods or --previous to see logs from the last crash.

  • If the Pod is Pending, confirm that nodes have enough CPU, memory, and storage. The describe output will also show if a PersistentVolumeClaim is waiting to bind.

  • For ImagePullBackOff, check the image reference and make sure the correct registry credentials are configured.


Key mindset for beginners

Do not panic when you see a Pod in one of these states. Each status is Kubernetes signaling what it is doing.

  • Pending or ContainerCreating means setup is still in progress.

  • Running means the Pod is alive, but it might not yet be Ready to serve.

  • CrashLoopBackOff or ImagePullBackOff means something is wrong, and Kubernetes is retrying with backoff. Investigate the root cause with logs and describe.

  • Completed or Error means the Pod has finished, either successfully or with a failure.

  • Terminating means the Pod is shutting down gracefully.


By understanding what each status represents and pairing it with the READY column, you can quickly determine whether your workloads are healthy or need attention. Kubernetes is complex, but these statuses are your first signal when something changes. Reading them well is the first step to diagnosing and fixing issues confidently.


Helpful commands while you troubleshoot

These commands are the first line of defense when a Pod shows an unexpected status:

# Show detailed Pod information and recent events from the scheduler and kubelet
kubectl describe pod <name>
# View logs from the current container attempt
kubectl logs <name>
# View logs from the last crashed attempt
kubectl logs <name> --previous
# Stream logs and follow them live
kubectl logs -f <name>
# Watch Pod status update in real time
kubectl get pods -w

Quick reference tables

Common STATUS values and where to look.

  • Pending: kubectl describe for scheduling events, resource requests, node selectors, taints, and image pulls.

  • Running but READY 0/1: Readiness probe configuration and container startup logs.

  • Completed: Normal for Jobs. Inspect Job history if needed.

  • Error: Exit code and termination reason in describe and logs.

  • CrashLoopBackOff: Container logs, prior crash logs with --previous, environment variables, command/args, and init container logs. Note that backoff doubles up to 5 minutes.

  • ImagePullBackOff: Image name and tag, registry access, imagePullSecrets. Backoff increases up to 5 minutes.

  • Init:X/Y: Logs of init containers. Main containers wait until all init containers succeed.

  • Terminating: Check for stuck finalizers or admission webhooks that might block deletion.

