
Using RollingUpdate Strategy in Kubernetes Deployments for Zero Downtime

April 7, 2026
#CI/CD #Containers #Deployments #DevOps #Jenkins #Kubernetes

Shipping updates without breaking live traffic is one of those things that sounds simple—until it isn't. If you've ever pushed a deployment and watched users hit errors for even a few seconds, you already understand why Kubernetes' RollingUpdate strategy exists.

Let’s walk through how it actually works, how to configure it properly, and where it fits into a Jenkins-driven CI/CD pipeline.

What RollingUpdate Really Does

At a high level, the RollingUpdate strategy replaces old Pods with new ones gradually instead of all at once. That means your application stays available while the update happens.

Instead of terminating everything and starting fresh (like the Recreate strategy), Kubernetes carefully balances:

  • How many new Pods can be created
  • How many old Pods can be taken down

This balance is controlled by two key parameters:

  • maxUnavailable
  • maxSurge

A Minimal Working Example

Here’s a standard Kubernetes Deployment using RollingUpdate:

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web-container
        image: myapp:v2
        ports:
        - containerPort: 80

This configuration ensures:

  • At most 1 Pod is unavailable during updates
  • At most 1 extra Pod is created beyond the desired count

How the Rollout Actually Happens

Let’s say you have 4 replicas running version v1, and you deploy v2.

Kubernetes will:

  1. Create 1 new Pod (total = 5)
  2. Wait until it's ready
  3. Terminate 1 old Pod (total = 4)
  4. Repeat until all Pods run v2

The key here is readiness probes. If your new Pods aren’t marked as “ready,” Kubernetes won’t continue the rollout.

Important: Readiness Probes Are Not Optional

A common mistake developers make is skipping readiness probes. Without them, Kubernetes assumes a Pod is ready immediately, which can cause traffic to hit containers that aren't fully initialized.

YAML
readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10

Where Jenkins Fits Into This

If you're using Jenkins for CI/CD, RollingUpdate becomes the backbone of safe deployments.

A typical Jenkins pipeline might:

  • Build and tag a Docker image
  • Push it to a registry
  • Update the Kubernetes Deployment image

For example, a simple pipeline step:

GROOVY
stage('Deploy to Kubernetes') {
  steps {
    sh "kubectl set image deployment/web-app web-container=myapp:${BUILD_NUMBER}"
  }
}

This triggers a RollingUpdate automatically—no extra scripting required.

Here’s where things get interesting: Jenkins doesn’t need to manage rollout logic. Kubernetes handles that for you.

Fine-Tuning RollingUpdate Behavior

Depending on your application, the default values may not be ideal.

When to Increase maxSurge

If Pod startup is slow, increasing maxSurge lets Kubernetes create more new Pods in parallel, shortening the overall rollout.

  • Good for: CPU-light apps
  • Risk: temporary resource spikes
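For example, here is a sketch of a more aggressive surge setting (the values are illustrative, not a recommendation). Both fields also accept percentages, which Kubernetes resolves against the replica count: maxSurge rounds up, maxUnavailable rounds down.

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    # With 4 replicas, 50% resolves to 2 extra Pods during the rollout
    maxSurge: 50%
```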

When to Reduce maxUnavailable

If uptime is critical, set maxUnavailable to 0.

YAML
maxUnavailable: 0
maxSurge: 2

This keeps full serving capacity available throughout the rollout, but it requires enough spare cluster capacity to schedule the surge Pods.

Watching a Deployment in Real Time

You don’t have to guess what’s happening. Kubernetes gives you visibility:

TEXT
kubectl rollout status deployment/web-app

Or for more detail:

TEXT
kubectl describe deployment web-app

This is especially useful when Jenkins triggers deployments and you need to debug rollout behavior.
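For a live view while a rollout is in progress, you can also watch the Pods directly (assuming the `app: web` label from the earlier example):

```shell
# Streams Pod status changes as old Pods terminate and new ones become Ready
kubectl get pods -l app=web -w
```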

Rollback: Your Safety Net

If something goes wrong, Kubernetes keeps revision history.

TEXT
kubectl rollout undo deployment/web-app

This rolls back to the previous working revision, and the rollback itself runs as a rolling update, so it stays zero-downtime. That safety net is another reason RollingUpdate is preferred in production environments.
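If you need to go further back than one revision, Kubernetes keeps a numbered history you can inspect and target (the revision number below is illustrative):

```shell
# List the recorded revisions for this Deployment
kubectl rollout history deployment/web-app

# Roll back to a specific revision rather than just the previous one
kubectl rollout undo deployment/web-app --to-revision=2
```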

RollingUpdate vs Recreate (Quick Reality Check)

Strategy      | Downtime                  | Use Case
RollingUpdate | None (if configured well) | Most production apps
Recreate      | Yes                       | Breaking schema changes, stateful transitions

If your app can’t run multiple versions simultaneously (for example, due to database incompatibility), RollingUpdate might not be safe without extra coordination.

Subtle Pitfalls to Watch For

  • Missing readiness probes → traffic hits unready containers
  • Resource limits too tight → new Pods can’t schedule
  • Long startup times → slow rollouts or timeouts
  • Stateful apps → version conflicts during overlap

These issues often show up only under load, which is why testing rollout behavior matters just as much as testing code.
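The scheduling pitfall in particular is worth a second look: during a rollout, surge Pods need capacity on top of the steady-state footprint, so the containers' resource requests matter. A sketch with illustrative values:

```yaml
containers:
- name: web-container
  image: myapp:v2
  resources:
    requests:        # what the scheduler reserves; surge Pods must fit in spare capacity
      cpu: 100m
      memory: 128Mi
    limits:          # hard caps; too-tight limits can crash or throttle new Pods
      cpu: 500m
      memory: 256Mi
```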

A Practical Deployment Flow

Putting it all together, a real-world flow looks like this:

  1. Developer pushes code
  2. Jenkins builds Docker image
  3. Image pushed to registry
  4. Jenkins updates Kubernetes Deployment
  5. Kubernetes performs RollingUpdate
  6. Health checks validate rollout
  7. Rollback triggered if needed
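That flow can be sketched as a minimal declarative Jenkinsfile. The registry name and timeout here are assumptions, not a drop-in pipeline:

```groovy
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        sh "docker build -t registry.example.com/web-app:${BUILD_NUMBER} ."
      }
    }
    stage('Push') {
      steps {
        sh "docker push registry.example.com/web-app:${BUILD_NUMBER}"
      }
    }
    stage('Deploy') {
      steps {
        // Updating the image is all it takes; Kubernetes starts the RollingUpdate
        sh "kubectl set image deployment/web-app web-container=registry.example.com/web-app:${BUILD_NUMBER}"
      }
    }
    stage('Verify') {
      steps {
        // Fail the build if the rollout doesn't complete in time
        sh "kubectl rollout status deployment/web-app --timeout=180s"
      }
    }
  }
}
```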

The beauty here is separation of concerns:

  • Jenkins handles automation
  • Kubernetes handles deployment safety

Why Teams Stick With RollingUpdate

It’s not just about zero downtime. RollingUpdate also gives:

  • Gradual exposure to new versions
  • Built-in rollback capability
  • Compatibility with autoscaling

For most stateless services, it’s the default for a reason.

If you're already using Jenkins, you get all of this almost “for free” by simply updating the image tag—Kubernetes does the heavy lifting behind the scenes.
