
Implementing Canary Deployments with Flagger in a Jenkins Pipeline

April 7, 2026
#Canary Deployment #CI/CD #DevOps #Flagger #Jenkins #Kubernetes

Shipping code directly to production without guardrails is risky. Even with solid testing, real-world traffic often behaves differently from anything you exercised in staging. This is where canary deployments come in—and when paired with Flagger and Jenkins, you get a powerful, automated progressive delivery setup.

Instead of flipping traffic all at once, canary releases gradually shift user traffic to a new version while monitoring key metrics. If something goes wrong, rollback happens automatically. Let’s walk through how this works in practice using Jenkins as the CI/CD orchestrator.

Why Flagger for Canary Deployments?

Flagger is a Kubernetes operator that automates canary releases using metrics from systems like Prometheus, Datadog, or CloudWatch. It integrates with service meshes such as Istio, Linkerd, or App Mesh to control traffic shifting.

Here’s what makes Flagger useful in a Jenkins-driven workflow:

  • Automated traffic shifting based on success metrics
  • Built-in rollback if thresholds fail
  • Declarative configuration via Kubernetes CRDs
  • Works alongside CI pipelines without adding complexity to Jenkins itself

Where Jenkins Fits In

Jenkins doesn’t manage the canary logic directly. Instead, it:

  • Builds and pushes container images
  • Applies Kubernetes manifests
  • Triggers Flagger by updating deployment specs

Flagger then takes over inside the cluster.

A Minimal Flow

Let’s break down the flow developers typically implement:

  1. Developer pushes code
  2. Jenkins builds Docker image
  3. Jenkins updates Kubernetes deployment
  4. Flagger detects the change
  5. Traffic gradually shifts to the new version
  6. Metrics are evaluated
  7. Deployment is promoted or rolled back

Defining a Flagger Canary Resource

Before wiring Jenkins, you need a Flagger Canary resource in Kubernetes:

YAML
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
  namespace: default
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  service:
    port: 80
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500
        interval: 30s

This tells Flagger how aggressively to shift traffic and what metrics define a “healthy” deployment.
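
The analysis section can also be extended with webhooks. Flagger ships an optional load tester that generates synthetic traffic while metrics are being evaluated, which helps when the canary sees little organic traffic during the analysis window. Here's a minimal sketch of that extension to the spec.analysis block above—the loadtester URL and the hey command are illustrative assumptions, not something the resource above requires:

YAML
  analysis:
    # ...interval, thresholds and metrics as above...
    webhooks:
      - name: load-test
        # Assumes Flagger's optional loadtester is deployed in the "test" namespace
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          # Illustrative: drive traffic at the canary service for one minute
          cmd: "hey -z 1m -q 10 -c 2 http://my-app-canary.default/"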

Jenkins Pipeline Example

Here’s a simplified Jenkinsfile that integrates with this setup:

Groovy
pipeline {
  agent any

  environment {
    IMAGE = "myrepo/my-app:${BUILD_NUMBER}"
  }

  stages {
    stage('Build Image') {
      steps {
        sh 'docker build -t $IMAGE .'
      }
    }

    stage('Push Image') {
      steps {
        sh 'docker push $IMAGE'
      }
    }

    stage('Deploy to Kubernetes') {
      steps {
        sh "kubectl set image deployment/my-app my-app=$IMAGE"
      }
    }
  }
}

That last step is the trigger. Once the deployment image changes, Flagger detects it and begins the canary rollout automatically.

What Happens During the Canary Rollout

Here’s where things get interesting.

After Jenkins updates the deployment:

  • Flagger creates a canary version of the deployment
  • Traffic starts at a small percentage (e.g., 10%)
  • Metrics are continuously evaluated
  • If metrics pass, traffic increases step-by-step
  • If metrics fail, Flagger rolls back automatically

You don’t need to script any of this in Jenkins. That separation is intentional—it keeps pipelines simple and moves release intelligence into the cluster.

Common Pitfalls

A few things tend to trip teams up when first using Flagger with Jenkins:

  • No metrics configured: Flagger needs a metrics provider. Without it, analysis won’t work.
  • Service mesh missing: Traffic shifting depends on Istio, Linkerd, or similar.
  • Over-aggressive thresholds: Setting unrealistic success rates can cause constant rollbacks.
  • Pipeline timeouts: Jenkins jobs may finish before rollout completes—this is expected.

That last one surprises people. Jenkins doesn’t “wait” for the canary to finish. If you need visibility, you can query Flagger status separately.
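
For example, you could add an optional post-deploy stage that polls the Canary resource and fails the build if Flagger rolls back. A rough sketch, assuming the Jenkins agent has kubectl access to the cluster and you're willing to let the job run for the length of the analysis:

Groovy
    stage('Wait for Canary (optional)') {
      steps {
        // Poll Flagger's Canary status; phases include Progressing, Succeeded and Failed.
        // Names and namespace match the Canary resource defined earlier.
        sh '''
          for i in $(seq 1 30); do
            phase=$(kubectl get canary/my-app -n default -o jsonpath='{.status.phase}')
            echo "Canary phase: ${phase}"
            if [ "$phase" = "Succeeded" ]; then exit 0; fi
            if [ "$phase" = "Failed" ]; then exit 1; fi
            sleep 60
          done
          echo "Timed out waiting for canary analysis"
          exit 1
        '''
      }
    }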

Observability: Don’t Skip This

Canary deployments are only as good as the signals they rely on. Typical metrics include:

  • HTTP success rate
  • Latency (P95 or P99)
  • Error rate

Prometheus is the most common choice, and Flagger integrates with it out of the box. You can also plug in custom metrics if your system has specific SLIs.
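
A custom metric is defined with Flagger's MetricTemplate resource and then referenced from the Canary's analysis. Here's a rough sketch, assuming Prometheus is reachable at the address shown and that your app exposes an http_requests_total counter—adjust both to your own instrumentation:

YAML
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: error-rate
  namespace: default
spec:
  provider:
    type: prometheus
    # Assumption: in-cluster Prometheus service address
    address: http://prometheus.monitoring:9090
  query: |
    100 * sum(rate(http_requests_total{namespace="{{ namespace }}",status=~"5.."}[{{ interval }}]))
    /
    sum(rate(http_requests_total{namespace="{{ namespace }}"}[{{ interval }}]))

The template is then referenced from the Canary's analysis.metrics list, for example:

YAML
      - name: error-rate
        templateRef:
          name: error-rate
          namespace: default
        thresholdRange:
          max: 1
        interval: 1m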

When This Approach Makes Sense

Using Flagger with Jenkins shines in environments where:

  • You deploy frequently to Kubernetes
  • Downtime or regressions are costly
  • You already use or plan to use a service mesh

If your deployments are infrequent or simple, this setup may be overkill. But for high-velocity teams, it provides a safety net without slowing things down.

A Subtle Advantage: Decoupling CI from Release Strategy

One of the biggest wins here is architectural.

Jenkins focuses purely on building and shipping artifacts. Flagger owns rollout decisions. This separation keeps your CI pipeline clean and avoids embedding complex deployment logic into Jenkinsfiles.

In practice, that means fewer brittle pipelines and more consistent release behavior across services.

Final Thoughts

Combining Jenkins with Flagger gives you a practical path to progressive delivery in Kubernetes without rewriting your CI system. Jenkins triggers deployments, Flagger manages risk, and your users experience smoother rollouts.

If you’re already running Kubernetes and looking to reduce deployment anxiety, this setup is worth exploring.
