Argo is an open source suite of projects that helps developers safely deploy code to production.
Within a GitOps context, Argo makes application deployment and lifecycle management easier, particularly as the line between developers and operators disappears: it automates deployments, simplifies rollbacks, and produces an audit trail for easier troubleshooting.
For this guide, we will build a CD pipeline to deploy an app from a repo into a Kubernetes cluster, then perform a canary release on that app to test incrementally rolling out a new version.
Argo can deploy Kubernetes projects formatted with different templating systems (Helm, Kustomize, etc.), but for this app we're just going to deploy a folder of static YAML files.
- kubectl installed and configured to use a cluster
- A GitHub account
You'll first need to install Edge Stack in your cluster. Follow the Edge Stack installation guide to install it in your cluster via Kubernetes YAML, Helm, or the command-line installer.
By default, Edge Stack routes via Kubernetes services. For best performance with canaries, we recommend you use endpoint routing. Enable endpoint routing on your cluster by saving the following configuration in a file called `resolver.yaml`:
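A minimal sketch of `resolver.yaml`, assuming Edge Stack's `KubernetesEndpointResolver` resource (the exact `apiVersion` depends on your Edge Stack version):

```yaml
# resolver.yaml - enables endpoint routing in Edge Stack
apiVersion: getambassador.io/v2
kind: KubernetesEndpointResolver
metadata:
  name: endpoint
```

Mappings that should use endpoint routing can then reference this resolver by name.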
Apply this configuration to your cluster:
```shell
kubectl apply -f resolver.yaml
```
First, if you're using Google Kubernetes Engine, grant your account the ability to create new Cluster Roles:
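A sketch of the standard GKE command for this, assuming you authenticate with the `gcloud` CLI:

```shell
# Grant your Google account cluster-admin rights so it can create ClusterRoles
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)
```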
Run the following commands to create the namespaces required for Argo and install the components:
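A sketch of the installation commands, using the install manifests published by the Argo project (the `stable` and `latest` manifest URLs are the ones documented upstream):

```shell
# Install Argo CD into its own namespace
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Install Argo Rollouts into its own namespace
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
```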
Next, you will need to install the Argo CD CLI (for building pipelines) and the Argo Rollouts plugin (for managing and visualizing rollouts) on your laptop:
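On macOS with Homebrew, for example (see the Argo docs for other platforms and install methods):

```shell
# Argo CD CLI
brew install argocd

# Argo Rollouts kubectl plugin
brew install kubectl-argo-rollouts
```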
First set up port forwarding to access the Argo API:
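This forwards the Argo CD API server's HTTPS port to localhost; leave it running:

```shell
kubectl port-forward svc/argocd-server -n argocd 8080:443
```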
In a new terminal window, retrieve the default password; it is auto-generated and stored in a Kubernetes Secret:
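For recent Argo CD versions, the initial password lives in the `argocd-initial-admin-secret` Secret; a sketch:

```shell
# Decode the auto-generated admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d && echo
```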
Authenticate against the API using the default username `admin` and the password you just retrieved (answer `y` to the certificate error):
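Log in through the port-forward you set up earlier:

```shell
argocd login localhost:8080
```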
Finally, set a new admin password:
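The CLI will prompt for the current and new passwords:

```shell
argocd account update-password
```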
Argo can quickly create pipelines and deploy apps using the CLI tool.
To start with, we'll deploy an app from the `echo` directory in this repo. Later in this guide you will need to edit part of the repo to perform a canary release, so fork this repo into your own GitHub account now. In the commands that reference the repo from here to the end of the guide, edit the GitHub URL to include your own username.
Now build the pipeline that deploys our app. The following command points Argo at the repo and the specific path containing the YAML files we want to deploy, and sets the destination to the local cluster. Finally, it syncs the app, which is the action that actually deploys the manifests to the cluster.
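A sketch using the standard `argocd app create` flags; the app name `echo` and the `.git` repo URL are assumptions based on the directory layout described above:

```shell
argocd app create echo \
  --repo https://github.com/<your GitHub username>/argo-qs.git \
  --path echo \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default

# Sync the app: this is what actually deploys the manifests
argocd app sync echo
```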
To access your deployed app, first get your load balancer IP:
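Assuming Edge Stack is installed as a `LoadBalancer` service named `edge-stack` in the `ambassador` namespace (names vary by install method):

```shell
kubectl get svc edge-stack -n ambassador \
  -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}"
```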
Now curl the service:
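Assuming the repo's manifests define a Mapping at `/echo/` and the address from the previous step is stored in `$LB_IP` (both hypothetical names):

```shell
curl -k https://$LB_IP/echo/
```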
You should get a reply saying `Successful Argo deployment!`
We'll start by removing the previously created app. This deletes from the cluster all the Kubernetes resources that Argo created.
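Assuming the app was created with the name `echo`:

```shell
argocd app delete echo
```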
Now we'll deploy a slightly different version of the app, this time from the `canary` directory. It contains a new Rollout file. A Rollout is similar to a Deployment, but it adds a rollout strategy section that defines how the rollout will incrementally happen once started. In this case, it will route 30% of traffic to the new service for 30 seconds, followed by 60% of the traffic for another 30 seconds, then 100% of the traffic.
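The strategy section described above would look roughly like this in `rollout.yaml`, using the Argo Rollouts canary `setWeight`/`pause` steps:

```yaml
  strategy:
    canary:
      steps:
      - setWeight: 30
      - pause: { duration: 30s }
      - setWeight: 60
      - pause: { duration: 30s }
      - setWeight: 100
```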
Deploy the app to your cluster (note the different value passed to `--path`):
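A sketch mirroring the earlier app-creation command; the app name `canary` is an assumption:

```shell
argocd app create canary \
  --repo https://github.com/<your GitHub username>/argo-qs.git \
  --path canary \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default

argocd app sync canary
```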
Curl again to test the app:
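Using the same hypothetical `$LB_IP` and `/echo/` route as before:

```shell
curl -k https://$LB_IP/echo/
```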
You should get a response of `Canary v1`.
It's time to roll out a new version of the service. Edit the `rollout.yaml` file in your fork at https://github.com/<your GitHub username>/argo-qs/edit/main/canary/rollout.yaml and change line 17 from `Canary v1` to `Canary v2`, then click Commit changes at the bottom.
Apply the rollout to the cluster. Argo will: 1) check the repo for anything that has changed since the app was created, 2) apply those changes (in this case, our update to the Rollout), and 3) begin rolling out version 2 of the service according to the Rollout strategy.
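Syncing the app again picks up the committed change (assuming the app name `canary` from earlier):

```shell
argocd app sync canary
```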
Verify that the canary is progressing appropriately by sending curl requests in a loop:
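A simple sketch of such a loop, again using the hypothetical `$LB_IP` and `/echo/` route:

```shell
# Hit the service repeatedly; Ctrl-C to stop
while true; do curl -k https://$LB_IP/echo/; sleep 0.2; done
```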
This will display a running list of responses from the service that gradually transitions from `Canary v1` strings to `Canary v2` strings.
In a new terminal window, you can monitor the status of your rollout at the command line:
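Assuming the Rollout resource in the repo is named `echo-rollout` (check `kubectl get rollouts` if it differs):

```shell
kubectl argo rollouts get rollout echo-rollout --watch
```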
This will display output similar to the following: