Introduction to Canary Deployment on Kubernetes
Canary deployment on Kubernetes is an effective strategy for deploying and testing new versions of an application with minimal risk. This approach allows you to gradually introduce new features or updates in a controlled manner, ensuring that any potential issues are caught early. By directing a portion of traffic to a new version while maintaining the stability of the old version, teams can monitor performance and address any problems before full-scale deployment.
In this guide, we’ll walk you through the process of setting up Canary deployment on Kubernetes, from pulling Docker images to routing traffic between different versions. Whether you’re looking to refine your DevOps practices or leverage advanced Kubernetes features, this process offers a streamlined way to manage software rollouts.
At ZippyOPS, we specialize in consulting, implementation, and managed services across DevOps, DevSecOps, DataOps, and more. Our expertise ensures that your deployments are not only smooth but also secure and scalable. For more information, visit our services page.

Step 1: Pull Docker Image for Deployment
Before you can deploy a new version of your application, you need to pull the required Docker image. To begin, execute the following command to download the Nginx image:
docker pull nginx
After downloading, verify the image exists in your local repository with:
docker image ls
You should see a list of Docker images, including the Nginx image, confirming the download was successful. Note that on a multi-node cluster each node pulls the image itself when a pod is scheduled, so this local pull mainly serves as a quick sanity check (and pre-seeds the image on single-node setups such as minikube).
Step 2: Create the Kubernetes Deployment
With the Docker image ready, the next step is to define the deployment configuration for Kubernetes. This involves creating a YAML file that specifies the necessary components for the Nginx deployment.
Here’s an example YAML configuration for version 1 of the Nginx app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
      version: "1.0"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        version: "1.0"
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          resources:
            limits:
              memory: "128Mi"
              cpu: "50m"
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: index-html
      volumes:
        - name: index-html
          hostPath:
            path: /path/to/your/nginx/v1
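The hostPath volume above expects an index.html to exist on the node at that path (hostPath only works when the file is present on the node where the pod is scheduled, so this setup suits single-node clusters such as minikube). A minimal placeholder for version 1 might look like this; the content is hypothetical, and any HTML will do:

```html
<!-- place at /path/to/your/nginx/v1/index.html -->
<h1>Hello World - v1</h1>
```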
Once the file is created, deploy it using:
kubectl apply -f nginx-deployment.yaml
Verify the deployment by running:
kubectl get pods -o wide
This command should list three running Nginx pods as part of your deployment.
Step 3: Create the Kubernetes Service
Next, create a service to route traffic to your deployed pods. Use this YAML file to define the service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: "1.0"
  ports:
    - port: 8888
      targetPort: 80
Deploy the service with the following command:
kubectl apply -f nginx-deployment.service.yaml
To check the service status, use:
kubectl get service
If running locally, navigate to localhost:8888 in your browser to confirm the service is active and serving the index.html you mounted (for example, a “Hello World” page).
Step 4: Verify the First Version of the Cluster
At this stage, the first version of your service is up and running. Open your browser and point it to the service’s external IP address (or localhost:8888 when running locally) to see the output from version 1.
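You can script the same check from a shell. The tally helper below is a sketch: the "v1" and "v2" markers are assumed content in each version’s index.html, and the live request is shown as a comment so the logic itself is self-contained:

```shell
#!/bin/sh
# Sketch: determine which version of the page answered by looking for a
# distinguishing marker ("v1" or "v2") in the response body.
contains_marker() {
  # $1 = response body, $2 = marker; prints 1 if the marker is present, else 0
  case "$1" in
    *"$2"*) echo 1 ;;
    *)      echo 0 ;;
  esac
}

# Against the live cluster you would capture the body with:
#   BODY=$(curl -s http://localhost:8888/)
# Canned example body, matching the hypothetical v1 index.html:
BODY="<h1>Hello World - v1</h1>"
echo "v1: $(contains_marker "$BODY" v1)"
echo "v2: $(contains_marker "$BODY" v2)"
```

Running this repeatedly against the live service (once the canary is deployed in Step 6) lets you count how many responses came from each version.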
Step 5: Create the Canary Deployment
Now it’s time to create the canary deployment, which will serve as the second version of the application. Begin by creating a YAML file for version 2:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary-deployment
spec:
  selector:
    matchLabels:
      app: nginx
      version: "2.0"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        version: "2.0"
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          resources:
            limits:
              memory: "128Mi"
              cpu: "50m"
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: index-html
      volumes:
        - name: index-html
          hostPath:
            path: /path/to/your/nginx/v2
Deploy the canary version with:
kubectl apply -f nginx-canary-deployment.yaml
Check that the canary pods have been deployed by running:
kubectl get pods -o wide
The output should display both the original and canary deployment pods.
Step 6: Split Traffic Between Versions
To begin routing traffic to the canary deployment, update the service configuration so that it no longer pins traffic to a single version. Open the service file and remove the version line from the selector, so the service matches pods from both deployments:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 8888
      targetPort: 80
Apply the updated service definition:
kubectl apply -f nginx-deployment.service.yaml
As a result, traffic is now distributed between the original and canary versions, roughly in proportion to the number of pods each deployment runs. Refresh your browser several times to see responses from both versions of the app.
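Because kube-proxy spreads connections roughly evenly across all pods matching the selector, each version’s share of traffic is proportional to its replica count. A quick sketch of the arithmetic, using the replica counts from the two deployments above:

```shell
#!/bin/sh
# Expected canary traffic share when the Service selects both versions.
# kube-proxy distributes connections roughly evenly across all matching pods,
# so each version's share is proportional to its ready replica count.
V1_REPLICAS=3          # replicas in the "nginx" deployment
V2_REPLICAS=3          # replicas in "nginx-canary-deployment"
TOTAL=$((V1_REPLICAS + V2_REPLICAS))
CANARY_SHARE=$((100 * V2_REPLICAS / TOTAL))
echo "canary receives roughly ${CANARY_SHARE}% of traffic"
# To make the canary a smaller slice, scale it down, e.g.:
#   kubectl scale deployment nginx-canary-deployment --replicas=1
# which gives 1 / (3 + 1) = 25% of traffic to the canary.
```

This is also the main limitation of plain-Kubernetes canaries: the split is only as fine-grained as your pod counts, which is where Istio’s weighted routing (below) comes in.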
Advanced Traffic Routing with Istio
In more advanced scenarios, especially when using Istio for service mesh management, you can set routing rules to control traffic distribution based on specific criteria. For example, you might want to direct 10% of traffic to the canary deployment. This can be done by defining a routing rule like so:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-service
spec:
  hosts:
    - nginx-service
  http:
    - route:
        - destination:
            host: nginx-service
            subset: v1
          weight: 90
        - destination:
            host: nginx-service
            subset: v2
          weight: 10
In this setup, Istio routes traffic based on a set weight for each version, allowing for fine-grained control over the rollout.
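Note that the v1 and v2 subsets referenced by the VirtualService are not defined in it; Istio resolves them from a DestinationRule that maps subset names to pod labels. A minimal sketch, assuming the service is named nginx-service and the pods carry the version labels used in the deployments above:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx-service
spec:
  host: nginx-service
  subsets:
    - name: v1
      labels:
        version: "1.0"
    - name: v2
      labels:
        version: "2.0"
```

With both resources applied, you can shift traffic gradually (10% → 25% → 50% → 100%) just by editing the VirtualService weights, without changing replica counts.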
Conclusion on Canary Deployment on Kubernetes
Canary deployment on Kubernetes is a powerful method for managing application updates while ensuring stability. By gradually introducing new versions and testing them in a production-like environment, you can catch issues early and minimize the risk of disruptions. With tools like Kubernetes and Istio, you can easily control traffic distribution, making it easier to scale your deployments effectively.
ZippyOPS helps businesses implement robust, scalable Kubernetes solutions through our consulting, implementation, and managed services. If you’re looking for expert guidance on DevOps, DataOps, or AIOps, reach out to us for more information at sales@zippyops.com.



