
How to Implement Canary Deployment in Kubernetes: A Step-by-Step Guide

Canary deployment is a powerful strategy for testing new features and updates in a production environment. It allows teams to release updates gradually, minimizing the risk of widespread issues. In this guide, we’ll show you how to implement a canary deployment in Kubernetes, ensuring smooth rollouts and improved user experience.

What is Canary Deployment?

A canary deployment is a phased release strategy where a small portion of users (referred to as “canaries”) are exposed to a new version of an application before it’s rolled out to the entire user base. This technique allows teams to monitor performance and catch potential issues early without affecting all users.

By using Kubernetes for canary deployments, teams can manage traffic routing efficiently and scale their application updates smoothly. As a result, the process of upgrading apps becomes safer and more manageable.

ZippyOPS offers consulting, implementation, and managed services to help optimize your DevOps, DevSecOps, Cloud, and MLOps workflows, making it easier for you to integrate and automate these deployment strategies seamlessly.

Diagram illustrating Canary Deployment in Kubernetes

Step 1: Pull the Docker Image

Start by downloading the Docker image that you want to deploy. In our example, we’ll use Nginx as the web server:

# docker pull nginx

You should see a message confirming the image download.

Next, confirm that the image is downloaded successfully by listing all local Docker images:

# docker image ls

This command will show all available images. For example:

REPOSITORY    TAG     IMAGE ID      CREATED          SIZE
nginx         latest  62d49f9b      7 days ago      133MB

Step 2: Create the Kubernetes Deployment

Next, create a Kubernetes deployment definition. In this step, you’ll specify the deployment configuration using a YAML file.

Create the YAML file for the deployment:

# vi nginx-deployment.yaml

Add the following content to define the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        version: "1.0"
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          resources:
            limits:
              memory: "128Mi"
              cpu: "50m"
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: index-html
      volumes:
        - name: index-html
          hostPath:
            path: /path/to/nginx/v1

This configuration creates three replicas of the Nginx pod labeled version “1.0”. Each pod mounts a host directory containing a sample index.html file that serves the version 1 “Hello World!” message. Note that the manifest pins the nginx:alpine image tag, which the cluster nodes pull automatically.
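The hostPath volume expects the version content to already exist on the node. A minimal sketch for creating both version directories (the /tmp/nginx location below is hypothetical; use whatever path your deployment manifests actually reference):

```shell
# Create sample index pages for both versions on the node.
# /tmp/nginx is a placeholder path; adjust it to match the
# hostPath entries in your deployment manifests.
mkdir -p /tmp/nginx/v1 /tmp/nginx/v2
echo '<h1>Hello World! (version 1.0)</h1>' > /tmp/nginx/v1/index.html
echo '<h1>Hello World! (version 2.0)</h1>' > /tmp/nginx/v2/index.html
cat /tmp/nginx/v1/index.html
```

The file inside each directory must be named index.html so Nginx serves it as the default page.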

Apply the deployment to your Kubernetes cluster:

# kubectl apply -f nginx-deployment.yaml

Verify the deployment was successful by running:

# kubectl get pods -o wide

Step 3: Create the Kubernetes Service

Now, create a Kubernetes service to route traffic to your Nginx pods. This service will balance the incoming requests across the three Nginx replicas.

Create the service definition file:

# vi nginx-deployment.service.yaml

Add the following YAML configuration:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: "1.0"
  ports:
    - port: 8888
      targetPort: 80

Apply the service configuration:

# kubectl apply -f nginx-deployment.service.yaml

Step 4: Verify the First Version of the Application

To check if the service is running correctly, open a browser and navigate to the external IP of the service. If you’re running Kubernetes locally, use localhost.

To find the external IP address:

# kubectl get service

In the browser, you should see the “Hello World” message from version 1 of the app.

Step 5: Implement the Canary Deployment

Now that the first version is running, it’s time to implement the canary deployment (version 2). This process involves deploying a new version of the application and routing a portion of the traffic to it.

Create the YAML configuration for the canary deployment:

# vi nginx-canary-deployment.yaml

Add the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        version: "2.0"
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          resources:
            limits:
              memory: "128Mi"
              cpu: "50m"
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: index-html
      volumes:
        - name: index-html
          hostPath:
            path: /path/to/nginx/v2

Create the canary deployment:

# kubectl apply -f nginx-canary-deployment.yaml

Verify that the new pods are running by checking:

# kubectl get pods -o wide

Step 6: Route Traffic to the Canary Deployment

To split traffic between version 1 and version 2, modify the service so its selector matches pods from both deployments.

Edit the service definition and remove the version label from the selector; with only the app label, the service matches both the version 1 and version 2 pods:

# vi nginx-deployment.service.yaml

Update the selector:

selector:
  app: nginx

Apply the updated service file:

# kubectl apply -f nginx-deployment.service.yaml

Now traffic will be load-balanced across both the original and the canary pods. Since each deployment runs three replicas, roughly half the requests reach the canary. Refresh the web page several times to see the two versions in action.
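When the service selects pods from both deployments, the expected canary share follows the replica ratio. A quick back-of-the-envelope check (the numbers mirror this guide's manifests):

```shell
# Expected canary share = canary replicas / total replicas behind the service.
stable=3
canary=3
echo "expected canary traffic share: $(( 100 * canary / (stable + canary) ))%"
# Scaling the canary down, e.g.
#   kubectl scale deployment nginx-canary-deployment --replicas=1
# shrinks its share of traffic accordingly (1 of 4 pods ≈ 25%).
```

This prints an expected share of 50% for the 3-and-3 setup used here; adjusting replica counts is the simplest way to tune how much traffic the canary receives with plain Kubernetes services.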

Step 7: Roll Back or Complete the Deployment

If the canary deployment isn’t performing as expected, you can roll back the changes:

# kubectl delete deployment.apps/nginx-canary-deployment

This will stop the canary deployment, and the service will route all traffic to version 1.

However, if the canary deployment is successful, you can proceed to roll out the full upgrade. There are a few ways to do this:

  1. Upgrade the original version: update the version 1 deployment to the new image and content, then delete the canary deployment.
  2. Remove the old version: update the service selector to version “2.0” and delete the version 1 deployment.

By managing deployments this way, you ensure a smoother transition and avoid potential issues from a full rollout.
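One way to sketch the second option is to write a selector patch and apply it with kubectl patch before retiring the old deployment. The resource names below are the ones assumed throughout this guide:

```shell
# Build a patch that pins the service selector to the canary version.
cat > /tmp/service-canary-patch.json <<'EOF'
{"spec": {"selector": {"app": "nginx", "version": "2.0"}}}
EOF

# Against a live cluster you would then run:
#   kubectl patch service nginx-service --patch-file /tmp/service-canary-patch.json
#   kubectl delete deployment nginx
cat /tmp/service-canary-patch.json
```

Patching the selector first means no request is ever routed to a pod that no longer exists; deleting the version 1 deployment afterwards is then safe.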


Conclusion

Implementing a canary deployment in Kubernetes allows you to test updates gradually and ensure minimal disruption to your users. By following this guide, you can effectively manage traffic, rollbacks, and upgrades, optimizing your deployment process.

For more guidance on optimizing your DevOps, DevSecOps, and Cloud workflows, ZippyOPS provides consulting, implementation, and managed services tailored to your business needs.

If you need professional assistance, reach out to us at sales@zippyops.com.
