Shipping updates without breaking live traffic is one of those things that sounds simple—until it isn't. If you've ever pushed a deployment and watched users hit errors for even a few seconds, you already understand why Kubernetes' RollingUpdate strategy exists.
Let’s walk through how it actually works, how to configure it properly, and where it fits into a Jenkins-driven CI/CD pipeline.
## What RollingUpdate Really Does
At a high level, the RollingUpdate strategy replaces old Pods with new ones gradually instead of all at once. That means your application stays available while the update happens.
Instead of terminating everything and starting fresh (like the Recreate strategy), Kubernetes carefully balances:
- How many new Pods can be created
- How many old Pods can be taken down
This balance is controlled by two key parameters:
- maxUnavailable
- maxSurge
## A Minimal Working Example
Here’s a standard Kubernetes Deployment using RollingUpdate:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web-container
          image: myapp:v2
          ports:
            - containerPort: 80
```

This configuration ensures:
- At most 1 Pod is unavailable during updates
- At most 1 extra Pod is created beyond the desired count
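If it helps to see the arithmetic, here is a minimal Python sketch (the function name and structure are my own illustration) of the Pod-count window these two settings enforce. Kubernetes also accepts percentage values, in which case maxUnavailable rounds down and maxSurge rounds up:

```python
import math

def rollout_bounds(replicas, max_unavailable, max_surge):
    """Compute the Pod-count window enforced during a RollingUpdate.

    Values may be absolute ints or percentage strings like "25%".
    Percentages for maxUnavailable round down; maxSurge rounds up.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)

    unavailable = resolve(max_unavailable, round_up=False)
    surge = resolve(max_surge, round_up=True)
    min_ready = replicas - unavailable  # never fewer ready Pods than this
    max_total = replicas + surge        # never more total Pods than this
    return min_ready, max_total

# The manifest above: 4 replicas, maxUnavailable: 1, maxSurge: 1
print(rollout_bounds(4, 1, 1))  # → (3, 5)
```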
## How the Rollout Actually Happens
Let’s say you have 4 replicas running version v1, and you deploy v2.
Kubernetes will:
- Create 1 new Pod (total = 5)
- Wait until it's ready
- Terminate 1 old Pod (total = 4)
- Repeat until all 4 Pods run v2
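That sequence can be sketched as a toy simulation (my own illustration; the real Deployment controller may batch scale-ups and scale-downs more aggressively than this one-at-a-time trace):

```python
def trace_rollout(replicas=4):
    """Toy trace of a one-at-a-time rollout (maxSurge=1, maxUnavailable=1).

    Mirrors the sequence described above; the real controller can be
    more aggressive when the surge/unavailability budgets allow it.
    """
    old, new = replicas, 0
    steps = []
    while old > 0:
        new += 1  # surge: one new Pod comes up and passes its readiness probe
        steps.append(f"create v2 pod: {old} old + {new} new = {old + new} total")
        old -= 1  # one old Pod is terminated, back to the desired count
        steps.append(f"remove v1 pod: {old} old + {new} new = {old + new} total")
    return steps

for line in trace_rollout():
    print(line)
```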
The key here is readiness probes. If your new Pods aren’t marked as “ready,” Kubernetes won’t continue the rollout.
## Important: Readiness Probes Are Not Optional
A common mistake developers make is skipping readiness probes. Without them, Kubernetes assumes a Pod is ready immediately, which can cause traffic to hit containers that aren't fully initialized.
```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```

## Where Jenkins Fits Into This
If you're using Jenkins for CI/CD, RollingUpdate becomes the backbone of safe deployments.
A typical Jenkins pipeline might:
- Build and tag a Docker image
- Push it to a registry
- Update the Kubernetes Deployment image
For example, a simple pipeline step:
```groovy
stage('Deploy to Kubernetes') {
    steps {
        sh "kubectl set image deployment/web-app web-container=myapp:${BUILD_NUMBER}"
    }
}
```

This triggers a RollingUpdate automatically, with no extra rollout scripting required.
Here’s where things get interesting: Jenkins doesn’t need to manage rollout logic. Kubernetes handles that for you.
## Fine-Tuning RollingUpdate Behavior
Depending on your application, the default values may not be ideal.
### When to Increase maxSurge
If your application is slow to start, increasing maxSurge lets Kubernetes bring up more new Pods in parallel, which shortens the overall rollout.
- Good for: CPU-light apps
- Risk: temporary resource spikes
### When to Reduce maxUnavailable
If uptime is critical, set maxUnavailable to 0.
```yaml
maxUnavailable: 0
maxSurge: 2
```

This keeps the full replica count serving traffic throughout the rollout, but it requires enough spare cluster capacity to schedule the extra surge Pods.
## Watching a Deployment in Real Time
You don’t have to guess what’s happening. Kubernetes gives you visibility:
```bash
kubectl rollout status deployment/web-app
```

Or for more detail:

```bash
kubectl describe deployment web-app
```

This is especially useful when Jenkins triggers deployments and you need to debug rollout behavior.
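In a Jenkins context you often want the pipeline itself to block until the rollout finishes. A hypothetical helper (the function name and polling structure are mine; `kubectl rollout status --timeout` does the real waiting) might look like:

```python
import subprocess
import time

def wait_for_rollout(deployment, timeout=120, runner=subprocess.run):
    """Poll `kubectl rollout status` until it reports success or we give up.

    `runner` is injectable so the control flow can be exercised without a
    live cluster; in real use the default subprocess.run shells out.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = runner(
            ["kubectl", "rollout", "status", f"deployment/{deployment}",
             "--timeout=10s"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True   # rollout complete
        time.sleep(5)     # not done yet; poll again
    return False          # rollout never finished within our budget
```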
## Rollback: Your Safety Net
If something goes wrong, Kubernetes keeps revision history.
```bash
kubectl rollout undo deployment/web-app
```

This rolls the Deployment back to the previous recorded revision (the rollback itself runs as a RollingUpdate), another reason the strategy is preferred in production environments.
## RollingUpdate vs Recreate (Quick Reality Check)
| Strategy | Downtime | Use Case |
|---|---|---|
| RollingUpdate | None (if configured well) | Most production apps |
| Recreate | Yes | Breaking schema changes, stateful transitions |
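For comparison, opting into the Recreate strategy is a one-line change in the Deployment spec:

```yaml
spec:
  strategy:
    type: Recreate
```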
If your app can’t run multiple versions simultaneously (for example, due to database incompatibility), RollingUpdate might not be safe without extra coordination.
## Subtle Pitfalls to Watch For
- Missing readiness probes → traffic hits unready containers
- Resource limits too tight → new Pods can’t schedule
- Long startup times → slow rollouts or timeouts
- Stateful apps → version conflicts during overlap
These issues often show up only under load, which is why testing rollout behavior matters just as much as testing code.
## A Practical Deployment Flow
Putting it all together, a real-world flow looks like this:
- Developer pushes code
- Jenkins builds Docker image
- Image pushed to registry
- Jenkins updates Kubernetes Deployment
- Kubernetes performs RollingUpdate
- Health checks validate rollout
- Rollback triggered if needed
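The last two steps can be wired together in the deployment script. A hedged Python sketch of that logic (the helper name is mine, and the injectable `runner` exists only so the flow can be tested without a cluster):

```python
import subprocess

def deploy_or_rollback(deployment, container, image, runner=subprocess.run):
    """Set a new image, and roll back if the rollout does not complete.

    Returns True if the new image rolled out, False if we rolled back.
    """
    runner(["kubectl", "set", "image", f"deployment/{deployment}",
            f"{container}={image}"])
    status = runner(["kubectl", "rollout", "status",
                     f"deployment/{deployment}", "--timeout=120s"])
    if status.returncode == 0:
        return True
    # Rollout stalled or failed: restore the previous revision.
    runner(["kubectl", "rollout", "undo", f"deployment/{deployment}"])
    return False
```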
The beauty here is separation of concerns:
- Jenkins handles automation
- Kubernetes handles deployment safety
## Why Teams Stick With RollingUpdate
It’s not just about zero downtime. RollingUpdate also gives:
- Gradual exposure to new versions
- Built-in rollback capability
- Compatibility with autoscaling
For most stateless services, it’s the default for a reason.
If you're already using Jenkins, you get all of this almost “for free” by simply updating the image tag—Kubernetes does the heavy lifting behind the scenes.