
From Pre-Cloud CI/CD to Kubernetes Continuous Deployment

Before Kubernetes

Before cloud native architecture became the dominant approach to designing, deploying, and releasing software, the continuous delivery story was much simpler. Typically, a sysadmin would create a build server and install a version control system and a continuous integration tool such as Jenkins, TeamCity, or GoCD. In addition to continually building and integrating code, these tools could be augmented via plugins to perform rudimentary continuous deployment operations, such as FTPing binaries to VMs or uploading an artifact to a remote application server via a bespoke SDK/API.
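A deployment step of this era can be sketched as a short imperative script of the kind a CI plugin might invoke. The host name, artifact path, and service name below are hypothetical placeholders, and `DRY_RUN=1` (the default here) prints the commands instead of executing them, so the sketch can be read safely:

```shell
# Sketch of a pre-Kubernetes deployment step: copy a binary to a VM over
# SCP, then restart the application server over SSH. All names are
# illustrative; DRY_RUN=1 echoes each command rather than running it.

APP_SERVER="${APP_SERVER:-app01.internal.example.com}"
ARTIFACT="${ARTIFACT:-build/myapp-1.4.2.war}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "DRY-RUN: $*"
  else
    "$@"
  fi
}

# Copy the artifact to the VM, then restart the service remotely.
run scp "$ARTIFACT" "deploy@$APP_SERVER:/opt/myapp/releases/"
run ssh "deploy@$APP_SERVER" "sudo systemctl restart myapp"
```

Note that the script encodes *how* to deploy, step by step; if a command fails halfway through, the target VM is left in an undefined state, which is one reason these scripts were brittle and hard to template.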

This approach worked well when dealing with a small number of applications and a relatively static deployment environment. The initial configuration of a delivery pipeline was typically challenging and involved much trial and error. Once a successful configuration was discovered, it was used as a template and copy-pasted as more build jobs were added. Debugging a build failure often required specialist support.

After Kubernetes

The rise in popularity of containers and Kubernetes has meant that roles and responsibilities in relation to continuous delivery have changed. Operators may still set up the initial continuous integration and deployment tooling, but developers now want to self-service, as they are releasing and operating what they build. This means that the scope of the infrastructure a developer needs to understand and manage has expanded from pure development tools (e.g., IDEs, libraries, and APIs) to deployment infrastructure (e.g., container registries and deployment templates) and runtime infrastructure (e.g., API gateways and observability systems).

Traditional vs. Cloud Native

Number of services per application or system
  Traditional: ~1 large service
  Cloud Native: many small (micro)services

Deployment artifact
  Traditional: small number of language-specific packages or binaries
  Cloud Native: large number of container images

Artifact manifests
  Traditional: low-medium complexity (language/platform specific); a limited number of small-medium scripts
  Cloud Native: high complexity (language, OS, and framework); potentially a large number of long configuration files

CI infrastructure required
  Traditional: bare metal or VMs
  Cloud Native: bare metal, VMs, Docker/containers, Kubernetes

Deployment mechanisms
  Traditional: custom (imperative) scripts run via SSH, FTP, etc., or proprietary SDKs/APIs
  Cloud Native: well-defined declarative configuration applied via standardised APIs

Release mechanisms
  Traditional: deployment and release implicitly coupled; verification managed by eyeballing metrics systems and dashboards
  Cloud Native: release controlled via traffic shifting, e.g. canaries (north-south and east-west), with verification managed automatically through observability system integrations

Environment management
  Traditional: small number of manually curated test and staging environments; artifacts and configuration relatively static
  Cloud Native: large number of bespoke environments; artifacts and configuration highly dynamic and transient
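As an illustration of the "well-defined declarative configuration applied via standardised APIs" entry above, here is a minimal sketch of a Kubernetes Deployment manifest; the application name, image, and replica count are placeholder assumptions:

```yaml
# deployment.yaml -- declares the desired end state; the Kubernetes
# control plane converges the cluster toward it. Applied with:
#   kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # hypothetical application name
spec:
  replicas: 3                 # desired count; Kubernetes reconciles toward it
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
```

In contrast to an imperative deploy script, this file says nothing about *how* to reach the desired state; applying the same manifest twice is safe, and a failed rollout leaves the previous replicas serving traffic.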

Creating an Effective Kubernetes Continuous Deployment Pipeline

Creating an effective Kubernetes deployment pipeline does not depend on a single tool or technique; a combination of technologies is required:

  • Container Build Tools
  • YAML Templating and Package Managers
  • Continuous Integration Tooling
  • Continuous Deployment Tooling
  • Environment Management
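To show how the first of these pieces typically fits into CI tooling, here is a minimal sketch of a workflow that builds and pushes a container image, using GitHub Actions as one assumed example of CI tooling; the registry, image name, and file paths are placeholders:

```yaml
# .github/workflows/build.yaml -- hypothetical CI job producing the
# deployment artifact (a container image) on every push to main.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Push image to registry
        run: docker push registry.example.com/myapp:${{ github.sha }}
```

Tagging the image with the commit SHA (rather than `latest`) keeps each build traceable back to the source revision, which the deployment tooling further down the pipeline can then reference.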

Learning Journey

Ready to adopt continuous delivery within Kubernetes?

This learning journey walks you through the primary concepts and activities required to create and integrate an effective build pipeline for deploying applications to Kubernetes.

Skill level: Kubernetes beginner or experienced user
Time to complete: 25 minutes • 6 lessons

What you'll learn:
  • How the developer workflow and CI/CD are changing with cloud-native continuous delivery
  • How GitOps supports developer-friendly deployment
  • Why you should test in production
  • Deploying code safely
  • Using Argo Rollouts for canary releases
What you need: Nothing; we'll walk through learning the concepts and installing the tools you'll need as we go.


Frequently Asked Questions

  • Can I continue using my existing CI/CD tools when migrating to Kubernetes?
  • Should developers be responsible for defining Dockerfiles and Kubernetes YAML manifests?
  • Do developers need to learn Kubernetes YAML syntax?
  • How does continuous integration (CI) differ from continuous deployment (CD)? I often see them written together as CI/CD.