Continuous Delivery with Kubernetes

Once a developer is ready to ship their code, the work of shipping safely without breaking things means understanding and adopting a number of delivery concepts. Many of these, such as continuous integration/continuous delivery (CI/CD), are familiar to most developers, while others, such as continuous deployment, progressive delivery, and experiment-oriented rollout techniques, may be less familiar. Gaining an understanding of these concepts forms the basis for this next step in owning the full development lifecycle.

Continuous delivery: Shipping code safely and with speed

Shipping code safely relies on a number of delivery practices, and most software developers have experience with continuous integration/continuous delivery (CI/CD) regardless of their application architecture or target deployment platform/infrastructure. However, with the increasing adoption of cloud native technologies and approaches, CI/CD is evolving to include progressive delivery and continuous deployment, which automate deployment and release, giving developers faster feedback loops and safer deployments.

CI/CD is a familiar pattern for getting changes to features, configuration, bug fixes, and so on, into production safely. If these principles are already well-understood, progressive delivery and continuous deployment will be logical extensions to the delivery and deployment landscape.

Continuous integration/continuous delivery (CI/CD) forms a combined practice of integrating and delivering code into a production environment on a continuous basis. CI/CD is a well-known practice in developer circles.

Continuous integration (CI) is an automation process and development practice that lets teams of developers introduce code changes to an application, run test suites for quality assurance, and build software artifacts on a continuous basis.

Continuous delivery (CD) is a process that introduces changes, from artifacts and version updates to configuration changes, into a production environment as safely as possible. With continuous delivery, the final decision to release a change into production still rests with a human.

From continuous delivery to continuous deployment

Developing for fast-moving, cloud-native environments poses new challenges that increasingly call for something more than continuous delivery. When delivering larger sets of microservice-based applications at increasing velocity, traditional CD is often not enough to maintain the required speed at an acceptable level of risk. The combination of independent service teams all building and releasing concurrently, and multiple services collaborating at runtime to provide business functionality, can slow things down and create friction.

Continuous deployment is a software release process that uses automated integration and end-to-end testing to validate changes, along with the observability of the system’s health signals, to autonomously change the state of a production environment.

Continuous deployment extends continuous delivery. While both practices are automated, humans do not intervene in continuous deployment: deployment and release happen automatically, and the only way a change won't be deployed to production is if an automated check fails.

With continuous deployment, developers can get their code into real-world conditions and benefit from faster feedback loops, better decision-making, and the ability to safely deploy more code.

A CI/CD pipeline is a process specifying the steps that must be taken to deliver a new version of the software. A CI/CD pipeline consists of workflows, activities, and automation. Automation is key to scaling CD, accelerating application delivery, and making developers' lives easier in the cloud-native environment. For cloud native developers, GitOps is central to the evolution of CI/CD.
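As a concrete sketch, the CI half of such a pipeline is often expressed as a workflow file checked into the repository. The example below assumes a GitHub Actions setup; the registry URL, image name, and `make test` target are hypothetical placeholders, not from this course:

```yaml
# Hypothetical CI workflow: on every push, run tests, then build and
# publish a container image tagged with the commit SHA.
name: ci
on: [push]
jobs:
  build-test-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run test suite
        run: make test   # placeholder for your project's test command
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Push image to registry
        run: docker push registry.example.com/myapp:${{ github.sha }}
```

The image pushed here becomes the immutable artifact that the CD stages promote through testing, verification, and release.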

GitOps: Continuous deployment best practices for Kubernetes

GitOps is an approach to continuous deployment that relies on source control as a single source of truth for all infrastructure and configuration for a Kubernetes deployment. Both the source code itself and the deployment metadata that describes how the application should run inside the cluster live within this source control system.

In the GitOps model, configuration changes go through a specific pull-based workflow:

  • All configuration is stored in source control. The Git repository is the source of truth.
  • A configuration change is made via pull request.
  • The pull request is approved and merged into the production branch.
  • Automated systems (e.g., a build pipeline or Kubernetes Operator) ensure the configuration of the production branch is in full sync with actual production systems.
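The pull-based workflow above is typically encoded in a declarative resource that an automated agent reconciles. As a sketch, an Argo CD Application resource pointing at a Git repository might look like this (the repo URL, path, and names are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git  # the Git source of truth
    targetRevision: main                                   # the production branch
    path: k8s                                              # directory of manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}   # continuously sync the cluster with the repo
```

With `syncPolicy.automated` set, the agent in the cluster pulls and applies any change merged into the production branch, so the approved pull request is the last human step.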

Changes are automated via the source control system; GitOps enables Kubernetes clusters themselves to "pull" updates from source control manifests. The entire GitOps workflow is also self-service; an operations team does not need to be directly involved in managing the change process (except in the review/approval process).

The idea of GitOps for developers is ease of use: a developer writes code and opens a pull request via a version control system; once another person has reviewed it, no further human interaction is required. In a GitOps workflow the rollout is automated, taking the form, for example, of a canary deployment using a tool like Argo CD, a declarative GitOps continuous delivery tool for Kubernetes.

🚀 Hands-on! Continuous delivery and deployment

All of the above content about CI/CD and GitOps is summarized in this presentation:

🏆 Challenges! Continuous delivery and deployment

Answer the following questions to confirm your learning. At the end of the module you can complete a series of “checkpoint” questions and enter a competition to win prizes!

  • What two properties of software are we generally trying to achieve as a team/company by implementing effective continuous delivery (CD)?
  • With a Docker-based CD pipeline, what would be the typical artifact that is stored in a registry and used for all of the testing and verification steps?
  • Name one mechanism/pattern that can be used to separate the deployment and release of an application.
  • What four Kubernetes objects can you use to deploy an application via a CD pipeline?
  • What is the best Kubernetes object to use to run a one-off task, such as a database migration, when releasing an application?
  • Where should configuration be stored when adopting a GitOps model? When deploying to Kubernetes as part of a GitOps CD pipeline, should you apply this configuration using a CLI tool like kubectl?
  • What two features of Kubernetes are useful for defining custom configuration and synchronizing (converging) from config stored in a repository to the state of a cluster?

Check your answers →

Understanding progressive delivery and deploying apps with ArgoCD

With a GitOps deployment approach, Argo continuously monitors a Git repository with Kubernetes manifests for commits and actively pulls changes from the repo, syncing them with cluster resources. This pull-and-sync reconciliation process continuously harmonizes the state of the cluster configuration with the state described in Git.

Continuous monitoring and syncing helps to eliminate the common problem of configuration drift, which occurs when the configuration of running clusters diverges from what is defined in source control. Unexpected configuration differences are one of the most common reasons why deployments fail, but Argo can prevent this "drift", or at the very least provide a traceable path to understand the cluster deployment history and detect out-of-sync deployments.

Progressive delivery relies on automated rollouts to incrementally and iteratively release features and easily roll back if needed. This is designed to make the process safe and reduce the blast radius of any problems. Making the progressive delivery process easier, or more developer friendly, is where GitOps shines. Using GitOps means that everything is defined as code, which lives in Git.

🚀 Hands-on! K8s Continuous delivery with GitOps and Argo CD

Read this blog post from the Container Solutions team, which explains the differences (as of April 2020) between three different continuous delivery and GitOps tools for Kubernetes.

You'll need access to a Kubernetes cluster, either running remotely (e.g. GKE) or locally (e.g. kind).

We recommend a remote cluster for this lesson, as there may be some limitations with using a local cluster. Remote demo clusters are available via our Telepresence quickstart.

Install and explore ArgoCD

Follow the ArgoCD getting started guide.

  • For step 3, use the "Port forwarding" mechanism
  • For step 6, before creating the guestbook application, fork the guestbook repo into your own GitHub account and use your new repo URL

Be sure to view the sample guestbook application using "kubectl port-forward", e.g. "kubectl port-forward svc/guestbook-ui 8090:80", and open "localhost:8090" in your browser.

Install Edge Stack and deploy a Mapping via ArgoCD

Install the Ambassador Edge Stack into your cluster.

  • If you want to get hold of an edgestack.me domain name and configure TLS, the easier installation mechanism to use is "edgectl install", which is located under the "Quick CLI Install" instructions in Step 1

Commit an Edge Stack Mapping for the guestbook-ui service into the guestbook directory of your forked ArgoCD getting started repo.

  • Refresh and Sync via ArgoCD
  • View the guestbook via this Mapping
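As a sketch, a minimal Mapping for the guestbook-ui service might look like the following. The `prefix` value is an assumption for illustration, and the apiVersion may need adjusting to match your Edge Stack release:

```yaml
# Hypothetical Edge Stack Mapping routing /guestbook/ to the guestbook-ui service.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: guestbook-ui
spec:
  prefix: /guestbook/      # external URL path to expose
  service: guestbook-ui:80 # target Service name and port in the cluster
```

Once this file is committed to your forked repo and synced via ArgoCD, Edge Stack routes requests matching the prefix to the guestbook-ui Service.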

A step-by-step walkthrough of the above instructions is provided in the video below:

🏆 Challenges! K8s Continuous delivery with GitOps and Argo CD

Answer the following questions to confirm your learning. At the end of the module you can complete a series of “checkpoint” questions and enter a competition to win prizes!

  • Does Flux allow individual RBAC configuration for target deployment environments?
  • List three supported manifest formats for ArgoCD
  • What other software build/delivery feature does Jenkins X support in addition to continuous delivery (CD)?

After completing the hands-on tasks:

  • Examine the ArgoCD UI showing the new mapping resource in the guestbook application
  • Ensure that you can access the guestbook-ui service via the Edge Stack Mapping URL using a browser

Check your answers →

Getting deeper with progressive delivery and canary releases

Progressive delivery is a practice that builds on CI/CD principles but adds processes and techniques for gradually rolling out new features with good observability and tight feedback loops. Progressive delivery provides a fast-moving but risk-sensitive way to exert more fine-grained control over delivery.

Progressive delivery makes it possible to roll out new features and test them in a production environment without introducing significant disruption. Testing cloud applications in a staging environment cannot provide a realistic facsimile of the true production experience, which is why experiment-based progressive rollout techniques, such as canary releases, facilitate proactive risk mitigation with:

  • realistic test environments
  • the ability to apply fine-grained control of traffic and introduce automated rollbacks
  • the ability to see and measure what's happening, i.e. good observability

A canary release is so-called because, like sending a canary into a coal mine, only a small amount of traffic, for example, 1% or 5%, will be routed to the new version of a service at a time. Meanwhile, the majority of traffic continues to go to the original version. This incremental rollout creates an opportunity for observing how changes work in practice and enables easy rollback at the first sign of trouble, all while preventing too much disruption to users.
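In Kubernetes, this incremental traffic shifting can be declared with Argo Rollouts. The sketch below shows a canary strategy in a Rollout resource; the service name, image, and step weights are hypothetical, chosen to illustrate the pattern:

```yaml
# Hypothetical Argo Rollouts canary: shift a small slice of traffic to the
# new version first, then pause for observation before increasing the weight.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-service
spec:
  replicas: 4
  strategy:
    canary:
      steps:
        - setWeight: 5             # route 5% of traffic to the new version
        - pause: {}                # wait indefinitely for manual promotion
        - setWeight: 50
        - pause: {duration: 30s}   # observe for 30 seconds, then continue
        - setWeight: 100
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:v2  # placeholder image
```

Updating the `image` field triggers a new canary; a bare `pause: {}` step holds the rollout until it is promoted manually, while `pause: {duration: 30s}` advances automatically.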

🚀 Hands-on! Canary releases on Kubernetes with Argo Rollouts

Tutorial Prerequisites


A step-by-step walkthrough of all of the above instructions is provided in the video with guest presenter Kostis Kapelonis below:

🏆 Challenges! Canary releases on Kubernetes with Argo Rollouts

Answer the following questions to confirm your learning. At the end of the module you can complete a series of “checkpoint” questions and enter a competition to win prizes!

  • Why do we need Argo Rollouts? What are the disadvantages of the default deployment methods built into Kubernetes?
  • How do blue/green deployments work? What are the pros and cons compared with canary deployments?
  • How do canary deployments work? What are the pros and cons compared with blue/green deployments?

After completing the hands-on tutorial, ensure that you can:

  • View your application via a browser during the canary
  • Use the Argo Rollouts CLI in a terminal to watch a canary in progress.

Check your answers →

GitOps and canary releasing: Joining the dots by combining Argo CD and Rollouts

Now that you have a solid understanding of both GitOps and canary releasing and how these techniques can be implemented within Kubernetes, the next step is to combine the approaches to enable a progressive delivery workflow.

🚀 Hands-on! Combining Argo CD and Argo Rollouts for Progressive Delivery on Kubernetes


The goal of this tutorial is to trigger an Argo Rollout of the sample project from the previous hands-on section of this module without using kubectl to apply the changes to the Argo Rollout CRD.

  • First, you will need to combine the cluster configuration from the previous two hands-on sections of the “ship” module. The goal is to create a Kubernetes cluster with the following installed:
    • Argo CD (from “Hands-on! K8s Continuous Delivery with GitOps and Argo CD”)
    • Ambassador Edge Stack (from “Hands-on! K8s Continuous Delivery with GitOps and Argo CD”, and referenced in “Hands-on! Canary releases on Kubernetes with Argo Rollouts”)
    • Argo Rollouts (from “Hands-on! Canary releases on Kubernetes with Argo Rollouts”)
  • You will also need to have both the Argo CD and Argo Rollouts CLI tools installed locally.
  • Fork Kostis' sample manifest repo that he used to deploy his demo app into your own GitHub account
  • Next, create an Argo CD project using your forked demo repo as the target. You can do this either through the Argo CD UI or the CLI
  • Trigger sync on this project in Argo CD so that your sample app deploys into your cluster. View the /demo/ endpoint of the application via the Edge Stack to verify everything is working
  • Now update the “rollout.yaml” file within your forked repo with a new version of the demo app image e.g. “kostiscodefresh/summer-of-k8s-app:v2” and commit this to GitHub.
  • Trigger another Argo CD sync to begin the canary. As Kostis showed in the livestream, you will need to use the Argo Rollouts CLI to "promote" your canary several times so that a complete rollout occurs. Take a screenshot of the Argo CD UI during your canary rollout
  • Finally, modify the “rollout.yaml” with a new version of the image e.g. “kostiscodefresh/summer-of-k8s-app:v3”, and change the pauses to remove the need for manual promotion and instead add a 30-second pause between each step.
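The steps above can be sketched as a change to the canary strategy in “rollout.yaml”. The weights below are illustrative, not copied from Kostis' repo; the key change is replacing the bare `pause: {}` steps with timed pauses:

```yaml
# Before: `pause: {}` waited for a manual "promote" via the Argo Rollouts CLI.
# After: each pause lasts 30 seconds, so the canary progresses on its own.
strategy:
  canary:
    steps:
      - setWeight: 25
      - pause: {duration: 30s}
      - setWeight: 50
      - pause: {duration: 30s}
      - setWeight: 75
      - pause: {duration: 30s}
```

Committing this change and syncing via Argo CD should run the whole canary to completion without any manual promotion.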

🏆 Challenges! Combining Argo CD and Argo Rollouts for Progressive Delivery on Kubernetes

Answer the following questions to confirm your learning. At the end of the module you can complete a series of “checkpoint” questions and enter a competition to win prizes!

  • What is the standard practice for GitOps rollbacks? Should a GitOps tool be able to rewrite the configuration?
  • In the opinion of the Soluto team, what is a “good traffic percentage” to give a canary?
  • What are two examples of good metrics to measure when canarying?
  • What is the purpose of a smoke test in a CD pipeline?

After completing the hands-on section above, ensure that you can:

  • View the Argo Rollouts CLI "watching" the canary (e.g. running 'kubectl argo rollouts get rollout summer-k8s-rollout -n demo -w') at the second step of the canary release process.

Check your answers →

✅ Checkpoint! Test your learning and win prizes

When you submit a module checkpoint, you're automatically eligible for a $10 UberEats voucher and entered to win our weekly and monthly prizes!

UberEats vouchers are only issued to the first 50 valid checkpoint submissions every week (Monday through Sunday). Limit three total entries per developer, one entry per week per module. All fraudulent submissions will be disqualified from any prize eligibility. All prizes are solely distributed at the discretion of the Ambassador Labs community team.