

Kubernetes Canary Testing and Release with Istio

Sudip Sengupta
June 24, 2022 | 17 min read

Kubernetes offers multiple deployment strategies to fit various integration and delivery use cases. The Kubernetes deployment controller relies on these strategies to determine the suitable rollout pattern for containerized workloads and applications. The most commonly used strategies include recreate, ramped, blue-green deployment, A/B testing, and canary testing and release.

While each deployment strategy offers different features, organizations typically combine several of them based on workload types, organizational goals, and usability. The canary testing and release strategy, also known as the canary deployment strategy, is widely favored because it allows new features to be tested live against a small subset of users in production, with that subset growing gradually.

In this article, we’ll discuss Canary deployments in Kubernetes and how Istio can help perform seamless Canary upgrades.

What is a Canary Deployment?

The canary deployment strategy allows for the incremental release of new features or code in production to minimize the impact and risk of software updates. With the canary deployment pattern, DevOps teams can roll out a newer version of the application while the old one is still running. A small subset of the workload or user traffic is routed to the newer version, while the rest continues to use the older version. This subset is tested for bugs, security flaws, and user experience to ensure the new version is safe to deploy. Subsequent users and workloads are then gradually shifted to the newer version while quality and user experience are continuously tested.

Kubernetes offers in-built rollout strategy controls to help DevOps teams perform canary deployments. The most common method involves using the service resource object as a load balancer, pointing the incoming traffic to different pods within the cluster.

How Canary Testing & Release Works

Kubernetes does not provide canary functionality out of the box, but it does allow pods hosting a newer version of an application to be deployed alongside pods hosting the older version. Using Deployments with a rolling update strategy, DevOps teams can achieve rudimentary canary rollouts, but they cannot define the percentage of traffic directed to the old and new application versions, nor can they automate the gradual switch from the older version to the newer one.
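
For reference, the only rollout controls a plain Deployment exposes are surge and unavailability limits rather than traffic percentages. Below is a minimal sketch of such a strategy stanza; the fields are standard Deployment API fields, while the surrounding Deployment is assumed rather than shown:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow at most one extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count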


Canary Deployment Workflow

Using the Service object as a load balancer between two deployments gives finer control over traffic distribution. The service points to the pods of both deployment objects and splits traffic between them in proportion to how many pod replicas each one specifies. DevOps teams can gradually reduce the number of replicas in the older deployment until all traffic has transitioned to the canary version.
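
As a minimal illustration of this replica-ratio approach (the deployment names my-app-v1 and my-app-v2 are hypothetical, and both are assumed to be selected by the same Service), the split can be shifted with kubectl scale:

$ kubectl scale deployment/my-app-v1 --replicas=3   # roughly 75% of traffic
$ kubectl scale deployment/my-app-v2 --replicas=1   # roughly 25% of traffic (canary)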

What are the benefits of Canary Deployment?

Some advantages of canary deployments include:

  • Low Infrastructure Costs: The infrastructure cost for a canary deployment is typically lower than with other rollout strategies. Since changes are rolled out to a small subset of real users, teams only need to provision a small amount of extra infrastructure instead of setting up a whole new set of production-like infrastructure.
  • Eliminates Downtime: Canary deployment allows gradual switching of traffic from the current version to the newer version of the application, eliminating maintenance downtime. During the transition window both versions are live and receiving traffic, and if the newer version does not perform as expected, the change is rolled back.
  • Offers Deployment Flexibility: Because a canary release allows tests to be performed on a limited number of users in production without impacting the overall configuration and user experience, software development teams can confidently experiment with new features. Teams can easily compare how the two versions behave in production and progressively test the canary deployment for stability while it is running.

Introduction to the Istio Service Mesh

A service mesh is a dedicated infrastructure layer that helps implement security and observability for communication between services in cloud-native and microservice-based deployments. Service meshes help manage Kubernetes deployments by implementing visibility and security controls at the platform layer (Kubernetes), providing a unified view of how application services interact. A service mesh is typically deployed as a scalable set of proxies alongside the application code, acting as the entry point for security and observability features.

Most commonly, service meshes are used to perform sophisticated cluster activities, including authentication, encryption, A/B testing, canary testing & release deployments, load balancing, service discovery, monitoring, and failure recovery.

Istio is a modern, open-source service mesh that provides a transparent way to automate interactions between microservices. Istio enables teams to connect, secure, and monitor microservices in hybrid and multi-cloud production environments while enabling them to run secure, reliable Kubernetes applications anywhere. Some features of the Istio service mesh include:

  • TLS-based encryption for data in transit, enabling secure inter-service communication with strong identity-based authentication and authorization (see the policy sketch after this list).
  • Envoy proxies that manage traffic and perform automatic load balancing for gRPC, WebSocket, HTTP, and TCP traffic.
  • An extensible policy layer and API for implementing rate limits, quotas, and access controls.
  • Automated logging, metric, and trace collection for all cluster traffic, providing observability and visibility.
  • User-based routing for fine-grained network and access controls.
  • Security policy enforcement with ACLs and monitoring solutions for customizable access control.
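
As a concrete example of the identity-based security mentioned above, a single PeerAuthentication resource can require mutual TLS for all workloads in the mesh. This is a minimal sketch, assuming Istio is installed with istio-system as its root namespace:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace, so the policy applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext traffic between sidecars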

Performing a Kubernetes Canary Deployment with Istio

The following demo outlines the steps to perform canary testing in a cluster with Istio configured to control traffic routing. This article assumes the reader has the following:

  • An existing Kubernetes cluster (minikube, kind, or K3s) with an active Istio installation (a quick sanity check is shown after this list).
  • The kubectl CLI tool installed to interact with the cluster from the command line.
  • An active Docker Desktop installation and access to a Docker Hub account.
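
To confirm the Istio prerequisite is in place, you can check that the control plane and ingress gateway pods are running. If you want the demo pods to receive Envoy sidecars, you can also label the target namespace for automatic injection; this is an optional step, and the commands below assume the default istio-system and default namespaces:

$ kubectl get pods -n istio-system
$ istioctl version
$ kubectl label namespace default istio-injection=enabled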

Canary Deployment Workflow

To perform the canary deployment, follow the steps below:

Step 1: Deploying the Docker Images

This section describes how to build the images to be used in the containers running the production and canary versions of the application. Each image runs a vanilla web application containing a small HTML page that specifies the version of the application.

To start with, create a directory that will be used to build the images. In this demo, we use a directory named istio-canary:

$ mkdir istio-canary

Navigate to the created directory:

$ cd istio-canary

Create the HTML file for the production web application. For the purpose of this demo, we use a text editor to create an index.html file with contents as shown below:

<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Web App 1.0</title>
</head>
<body>
<p>This is version 1</p>
</body>
</html>

Next, create a Dockerfile that will be used to build the image. This file builds on the httpd web server image to serve the application and will look similar to:

FROM httpd:2.4
COPY index.html /usr/local/apache2/htdocs/
EXPOSE 80

Quick note: The Dockerfile should be named Dockerfile with no file extension so that Docker recognizes it as an image template.

Build this image with the docker build command. Since the image is to be published to Docker Hub, it should be tagged with the Docker Hub account's username and the image name (in our case, darwin-prod-image), as follows:

$ docker build -t [docker-hub-account-name]/darwin-prod-image .

A successful image build returns output similar to:

sha256:ed7cd06b48383368f4572d9ccf4173eb6519eed6585c1bb94a72969f4e73df4a 0.1s
=> => naming to docker.io/[docker-hub-account-name]/darwin-prod-image 0.1s
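
Optionally, the image can be smoke-tested locally before pushing it. The commands below map host port 8080 to the container's port 80 and use a throwaway container name (darwin-test is an arbitrary name chosen for this check):

$ docker run --rm -d -p 8080:80 --name darwin-test [docker-hub-account-name]/darwin-prod-image
$ curl http://localhost:8080/
$ docker stop darwin-test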

To create the image for the canary version, first edit index.html so that its contents look similar to:

<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Web App 2.0</title>
</head>
<body>
<p>This is version 2</p>
</body>
</html>

With the contents of the Dockerfile still the same, build the image using Docker’s build command. The command should look similar to:

$ docker build -t [docker-hub-account-name]/darwin-canary-image .

A successful build again returns output similar to:

sha256:5e57c737b1c626cb7872d3768d4846de25fa8abd34eacad953174287c2c9ed13 0.1s
=> => naming to docker.io/[docker-hub-account-name]/darwin-canary-image 0.1s

Confirm image creation using the docker images command.

Once the images have been created, push them to Docker Hub. Log in to Docker Hub using the following command:

$ docker login -u [docker-hub-account-name]

This prompts for the password. A successful login returns an acknowledgement:

Login Succeeded

Push the images individually to the Docker Hub account by running the following commands:

$ docker push [docker-hub-account-name]/darwin-canary-image
$ docker push [docker-hub-account-name]/darwin-prod-image

Confirm the deployment of the images by checking out the repositories section in Docker Hub’s Web UI.


Step 2: Creating the Application Deployment

This section describes how to build the manifest file containing the configurations for both the production and canary deployments, as well as the service that exposes their pods for virtual networking and traffic management.

Create a YAML configuration file named app-manifest.yaml and add the content below to it:

apiVersion: v1
kind: Service
metadata:
  name: darwin-app
  labels:
    app: darwin-app
spec:
  selector:
    app: darwin-app
  ports:
  - name: http
    port: 8080
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: darwin-app-v1
  labels:
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: darwin-app
  template:
    metadata:
      labels:
        app: darwin-app
        version: v1
    spec:
      containers:
      - name: darwin-app
        image: [docker-hub-account-name]/darwin-prod-image
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: darwin-app-v2
  labels:
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: darwin-app
  template:
    metadata:
      labels:
        app: darwin-app
        version: v2
    spec:
      containers:
      - name: darwin-app
        image: [docker-hub-account-name]/darwin-canary-image
        ports:
        - containerPort: 80

This file contains three configurations:

  • A Kubernetes service object named darwin-app that listens on port 8080 and forwards traffic to port 80 of any pods with the label app: darwin-app.
  • A Kubernetes deployment named darwin-app-v1 that creates one pod replica with the labels app: darwin-app and version: v1. The pod runs a container named darwin-app, built from the darwin-prod-image repository on Docker Hub. This deployment is the production build.
  • A Kubernetes deployment named darwin-app-v2 that creates one pod replica with the labels app: darwin-app and version: v2. The pod runs a container named darwin-app, built from the darwin-canary-image repository on Docker Hub. This deployment is the canary build.

Now let's apply the Kubernetes objects into our cluster by running the kubectl apply command below:

$ kubectl apply -f app-manifest.yaml

Which, upon successful deployment, will return the response:

service/darwin-app created
deployment.apps/darwin-app-v1 created
deployment.apps/darwin-app-v2 created

Confirm the creation of these resources using kubectl get commands as shown below:

$ kubectl get services

Which returns the result:

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
darwin-app   ClusterIP   10.107.54.79   <none>        8080/TCP   22s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    3h25m

To verify deployments, run the command:

$ kubectl get deployments

Which returns the result:

NAME            READY   UP-TO-DATE   AVAILABLE   AGE
darwin-app-v1   1/1     1            1           4m20s
darwin-app-v2   1/1     1            1           4m19s

Finally, verify the pods by running the command:

$ kubectl get pods

Which returns the result:

NAME                             READY   STATUS    RESTARTS   AGE
darwin-app-v1-5f9cd96475-ft4tx   1/1     Running   0          5m21s
darwin-app-v2-847955f85f-78gnm   1/1     Running   0          5m21s
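
Because the Istio subsets configured in the next step select pods by their version label, it can be worth confirming the labels on the running pods with an optional check:

$ kubectl get pods -l app=darwin-app -L version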

Step 3: Configuring the Istio Virtual Service

While the service allows pods to be discovered within the cluster, external traffic cannot reach the workloads running in those pods by default. Istio provides various API objects in the networking.istio.io/v1alpha3 API group to simplify load balancing, routing, and other traffic management functions. This section demonstrates how to distribute traffic between the production and canary releases using Istio's Gateway, VirtualService, and DestinationRule resources.

First, create a YAML file named istio.yaml that will be used to specify the configurations of the three API objects:

$ nano istio.yaml

Once istio.yaml is created, you can add the configurations of the Istio Gateway, the virtual service, and the destination rule to it as shown below. To configure the Istio Gateway, which describes the load balancer that receives incoming and outgoing connections, add a configuration similar to:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: darwin-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Now, add the virtual service to istio.yaml, which sets the routing rule for the distribution of requests. Use the virtual service to introduce both versions of the application to the mesh, with a configuration similar to:

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: darwin
spec:
  hosts:
  - "*"
  gateways:
  - darwin-app-gateway
  http:
  - route:
    - destination:
        host: darwin-app
        subset: v1
      weight: 80
    - destination:
        host: darwin-app
        subset: v2
      weight: 20

This configuration includes a destination routing rule that sends 80% of the traffic to version v1 (production build) and 20% of the traffic to version v2 (canary build). This is seen in the spec:http:route:weight specification in the file above.

Next, add the destination rule to istio.yaml, which applies distribution policies to the traffic after routing has been implemented:

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: darwin-app
spec:
  host: darwin-app
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Add these resources to the cluster using the kubectl apply command:

$ kubectl apply -f istio.yaml

Which returns the response:

gateway.networking.istio.io/darwin-app-gateway created
virtualservice.networking.istio.io/darwin created
destinationrule.networking.istio.io/darwin-app created
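
To confirm that Istio accepted the configuration, you can list the new resources and, optionally, run Istio's configuration analyzer against the current namespace:

$ kubectl get gateway,virtualservice,destinationrule
$ istioctl analyze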

Step 4: Canary Testing Deployments in Istio

To test how much of the traffic reaches the pods, first use autoscalers to manage replicas for the canary and production deployments:

$ kubectl autoscale deployment darwin-app-v1 --cpu-percent=50 --min=1 --max=5
$ kubectl autoscale deployment darwin-app-v2 --cpu-percent=50 --min=1 --max=5

Generate some load on the darwin-app service so that the autoscalers have traffic to react to.
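
The article does not prescribe a particular load generator; as a minimal sketch, you can send repeated requests through the Istio ingress gateway. The commands below assume a minikube cluster and the default istio-ingressgateway service exposed as a NodePort in the istio-system namespace:

$ export INGRESS_HOST=$(minikube ip)
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ for i in $(seq 1 500); do curl -s "http://$INGRESS_HOST:$INGRESS_PORT/" > /dev/null; done

Once some load has been generated, list the pods to see how the autoscalers have distributed the replicas: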

$ kubectl get pods | grep darwin-app

Which shows the result:

darwin-app-v1-3523621687-dt7n7   2/2   Running   0   23m
darwin-app-v1-3523621687-gdhq9   2/2   Running   0   18m
darwin-app-v1-3523621687-73642   2/2   Running   0   27m
darwin-app-v1-3523621687-7hs31   2/2   Running   0   15m
darwin-app-v2-3523621687-l8rjn   2/2   Running   0   33m

Notice that the autoscaler has spun up four replicas of v1 and one replica of v2, reflecting the routing rule that sends 80% of the traffic to v1 and only 20% to deployment v2.
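
You can also inspect the horizontal pod autoscalers directly to see their current utilization and replica counts:

$ kubectl get hpa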

Now, let's change the routing rule to send 60% of the traffic to v2. This is achieved by editing the spec:http:route:weight specification of the virtual service in istio.yaml, as follows:

http:
- route:
  - destination:
      host: darwin-app
      subset: v1
    weight: 40
  - destination:
      host: darwin-app
      subset: v2
    weight: 60

Apply the changes:

$ kubectl apply -f istio.yaml

Which returns the response:

gateway.networking.istio.io/darwin-app-gateway unchanged
virtualservice.networking.istio.io/darwin configured
destinationrule.networking.istio.io/darwin-app unchanged

Simulate traffic to the pods again and list them using the command:

$ kubectl get pods | grep darwin-app

Which returns the result:

darwin-app-v1-4095161145-t2ccm   0/2   Running   0   31m
darwin-app-v1-4095161145-c3dpj   2/2   Running   0   34m
darwin-app-v2-4095161145-9322m   2/2   Running   0   42m
darwin-app-v2-4095161145-963wt   0/2   Running   0   27m
darwin-app-v2-4095161145-57537   0/2   Running   0   39m

The autoscaler now scales the v1 replicas down to two and scales the v2 replicas up to three, matching the new 40:60 traffic split.
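
Once the canary version has proven stable, the rollout can be completed by shifting all of the traffic to v2. Below is a minimal sketch of the final routing rule in istio.yaml, after which the darwin-app-v1 deployment and its autoscaler can be removed:

http:
- route:
  - destination:
      host: darwin-app
      subset: v2
    weight: 100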

Summary

Canary deployment enables safe and incremental testing of new features and updates, lowers infrastructure requirements, and allows swift and smooth rollbacks. Owing to its flexibility, the canary testing and release strategy is gaining an edge in modern deployments and provides a great alternative to blue-green deployments.

Thank you for reading this far. I hope you learned how Istio can be used to simplify scalable canary deployments for Kubernetes workloads with intelligent routing features.

Simplified Kubernetes management with Edge Stack API Gateway

Routing traffic into your Kubernetes cluster requires modern traffic management. That's why we built Ambassador Edge Stack around a modern Kubernetes ingress controller that supports a broad range of protocols, including HTTP/3, gRPC, gRPC-Web, and TLS termination.


Edge Stack provides traffic management controls for resource availability. Learn more about Edge Stack API Gateway or Schedule a Demo Now.