Introduction to Kubernetes for Application Developers

Kubernetes 101 for Developers: Install K8s and build containers

Kubernetes (or K8s) is an open-source platform for managing containerized workloads and services. A container holds an entire runtime environment: an application plus all of its dependencies, libraries, and configuration files. This makes applications portable and predictable across different computing environments.

Cloud-native application development, containers and Kubernetes, while requiring a shift in mindset and developer experience, enable:

  • Containerized workloads and increased automation
  • The write-once, run-everywhere concept and the elimination of complex dependencies or incompatibilities in or across different systems
  • Shared responsibility for managing deployments (operational activities become developer responsibilities)
  • Easier deployment through fully automated rollouts and rollbacks with fine-grained observability and no downtime/minimal end-user disruption
  • Faster feedback: Continuous code deployment and near-instant feedback

Essential Kubernetes terminology

Nodes: the VMs or physical servers on which Kubernetes runs containers. There are two types:

  • Master nodes (also called control plane nodes) host the ‘control plane’ functions and services; they maintain the desired state of the cluster by managing the scheduling of pods across the worker nodes.
  • Worker nodes are where applications actually run.

Cluster: a set of nodes for running containerized apps.

Namespaces: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.

Pods: the smallest, most basic unit of deployment in Kubernetes. To run an application in Kubernetes, it first needs to be packaged and run as a container, and these containers are then grouped into pods. A pod consists of one or more containers. Because the pod is the smallest unit Kubernetes can run, it is what Kubernetes recognizes and acts on: when a pod is deployed or destroyed, all of the containers inside it are started or killed together.
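As a minimal illustration (the names and image here are placeholders, not part of any walkthrough in this module), a single-container pod can be declared like this:

```yaml
# A minimal single-container pod. The pod, not the container,
# is the unit Kubernetes schedules and tracks.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # placeholder name
  labels:
    app: hello
spec:
  containers:
    - name: web          # one of possibly several containers in the pod
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```

Applying this manifest with `kubectl apply -f pod.yaml` creates one pod; deleting the pod stops all of its containers together.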

Deployments: define how pods should be deployed and how Kubernetes should manage them, for example how many replicas to run and how updates should be rolled out.

Services: an abstract way to expose an application running on a set of Pods as a network service. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
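Tying these two concepts together, here is a sketch of a Deployment and a Service for a hypothetical `hello` app (names, image, and ports are illustrative):

```yaml
# Deployment: keeps three replicas of the pod template running
# and manages rollouts of new versions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# Service: a stable DNS name and IP that load-balances across
# whichever pods currently match the app=hello label.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
```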

Kubectl: the command line interface for managing a Kubernetes cluster.
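In day-to-day use, the objects above are inspected and managed with a handful of kubectl subcommands. A few common ones (run against whatever cluster your current context points at; bracketed names are placeholders):

```shell
# List nodes, pods, deployments, and services in the current namespace
kubectl get nodes
kubectl get pods
kubectl get deployments
kubectl get services

# Describe a resource in detail and stream a pod's logs
kubectl describe pod <pod-name>
kubectl logs -f <pod-name>

# Apply (create or update) resources declared in a manifest file
kubectl apply -f manifest.yaml

# Work in a specific namespace
kubectl get pods -n <namespace>
```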

Armed with this basic terminology, it should be easier to get started with containerized K8s app development.

🧑‍💻 Hands-On! Run Kubernetes locally with Kind and deploy a 12 factor app

Follow these instructions to install and configure a local Kubernetes cluster using Kind and deploy the emojivoto application to this cluster.

  1. Read about The 12 Factor Container
  2. Read about The Twelve-Factor App
  3. Read Creating a Productive Local Development Environment within Kubernetes
  4. Create a Kind cluster
  5. Set up Kubie
  6. Deploy the Emojivoto sample application
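The cluster-creation steps above can be sketched as shell commands (the emojivoto manifest filename is a placeholder; use whichever manifest the walkthrough links to):

```shell
# 1. Create a local Kubernetes cluster named "dev" with Kind
kind create cluster --name dev

# 2. Use Kubie to switch into the new cluster's context
#    (Kind prefixes its contexts with "kind-")
kubie ctx kind-dev

# 3. Deploy the emojivoto sample application
#    (emojivoto.yml stands in for the manifest the walkthrough provides)
kubectl apply -f emojivoto.yml

# 4. Verify that the pods come up
kubectl get pods -n emojivoto
```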

A full walkthrough of the instructions can be seen in the video below:

🏆 Challenge! Run Kubernetes locally with Kind and deploy a 12 factor app

Answer the following questions to confirm your learning. At the end of the module you can complete a series of “checkpoint” questions and enter a competition to win prizes!

  • In Codebase (the 1st factor), what file would you typically write to package and run code within a container?
  • When storing and accessing config (the 3rd factor), where is the best place to do this in Kubernetes?
  • With "Build, Release, Run" (the 5th factor), should you consider using a multi-stage Dockerfile build? Bonus points if you can describe in one sentence what benefits multi-stage builds enable
  • For "Dev/Prod Parity" (10th factor) how many Dockerfiles should you consider creating and maintaining for each application?
  • What is the best practice in Kubernetes for running a one-off Admin Process (12th factor)?

Check your answers

Using Kubernetes to code and ship faster

Cloud-native development and containerization form a different way of designing and packaging software, and their adoption has been developer-led because of the speed and flexibility they offer. Kubernetes in particular has become the de facto standard for cloud-native application development and orchestration, largely because of its speed and modularity.

To enable the speed and autonomy that Kubernetes promises, developers need the right tools, practices, and configuration to be productive and to establish a fast development feedback loop. There are multiple ways to package code into containers, each with strengths and weaknesses.

🧑‍💻 Hands-on! Packaging code in containers with Docker and Buildpacks

  1. Follow the Docker quickstart for building a container image
  2. Read Best practices for writing Dockerfiles
  3. Read Use multi-stage builds
  4. Read Why Cloud Native Buildpacks?
  5. Install Pack and use this to build a container
  6. Install Dive and use this to view your containers
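To make the multi-stage idea concrete, here is a sketch of a two-stage Dockerfile for a hypothetical Go service (the base images and binary name are illustrative):

```dockerfile
# Stage 1: build the binary in a full toolchain image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy only the binary into a minimal runtime image.
# Build tools and source code never reach the final image, so
# it is smaller and has a smaller attack surface.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The Buildpacks route skips the Dockerfile entirely, e.g. `pack build my-image --builder paketobuildpacks/builder-jammy-base` (the Paketo builder is shown as one common example).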

A full walkthrough of the instructions can be seen in the video below:

🏆 Challenge! Packaging code in containers with Docker and Buildpacks

Answer the following questions to confirm your learning. At the end of the module you can complete a series of “checkpoint” questions and enter a competition to win prizes!

  • Should applications and the containers they are packaged in be designed/defined to be "ephemeral"?
  • What is the primary use of multi-stage Dockerfile builds?
  • Who first conceived/created the concept of buildpacks?
  • Name a popular buildpack provider
  • How many layers are included in the latest Ambassador Edge Stack (AES) image?

Check your answers

K8s fast local code-build-test feedback: Skaffold and Telepresence

Benefits and challenges of the cloud-native developer experience

The developer experience is the workflow a developer uses to develop, test, deploy, and release software.

Typically this experience consists of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and pushing code to version control triggers the outer dev loop.

The outer dev loop consists of everything else that happens leading up to release. This includes code merge, automated code review, test execution, deployment, controlled (canary) release, and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a GitOps workflow and a progressive delivery strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible.

Developers have lived comfortably in the inner dev loop for most of the history of app development, but now with the "shift left", in which developers assume more responsibility for how services behave "in the wild", the outer dev loop becomes a part of the new, cloud-native developer experience. The changing workflow accompanying this shift is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to code using the languages and tools with which they are most productive and comfortable.

Engineers now must design and build distributed service-based applications and take on responsibility for the full development life cycle. This means understanding and managing external dependencies, building containers, and implementing orchestration configuration (e.g. Kubernetes YAML). Each of these tasks may appear trivial at first glance, but together they add development time to the equation.

Speed up the inner dev loop & make the remote local

Bridging the gap between remote Kubernetes clusters and local development helps to recover much of the speed that gets lost in cloud-native development and reduces time to feedback in the dev loop. A fast and continuous feedback loop is essential for productivity and speed.

A good way to meet the goals of faster feedback, easier collaboration, and scale in a realistic production-like environment is the "single service local, everything else remote" setup. Developing entirely in a remote environment offers some benefits, but it gives developers the slowest possible feedback loop. Running one service locally against a remote cluster lets the developer retain considerable control while using tools like Telepresence, an open-source tool that lets developers code and test microservices locally against a remote Kubernetes cluster. Telepresence enables more efficient development workflows while removing the need to worry about other service dependencies.

Telepresence is designed to let Kubernetes developers code as though their laptop were in their Kubernetes cluster: the service runs locally while being proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower cycle of waiting for a container to build, pushing it to a registry, and redeploying it to the cluster.
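In practice, the inner-loop workflow with Skaffold and Telepresence comes down to a few commands (the service name and port below are placeholders):

```shell
# Skaffold: rebuild and redeploy to the cluster automatically
# on every source change (development mode)
skaffold dev

# Telepresence: connect your laptop to the remote cluster's network
telepresence connect

# List services in the cluster that can be intercepted
telepresence list

# Reroute the cluster's traffic for one service to the process
# running locally on port 8080 ("my-service" is a placeholder)
telepresence intercept my-service --port 8080
```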

🧑‍💻 Hands-on: Make code changes quickly with K8s, Skaffold, and Telepresence

A full walkthrough of the instructions can be seen in the video below:

🏆 Challenge: Make code changes quickly with K8s, Skaffold, and Telepresence

Answer the following questions to confirm your learning. At the end of the module you can complete a series of “checkpoint” questions and enter a competition to win prizes!

  • Do you write code in the inner or outer dev loop?
  • Do you run integration tests in the inner or outer dev loop?
  • Why would someone introduce Skaffold into their inner dev loop?
  • What command do you use to start Skaffold in development mode?
  • Why would someone introduce Telepresence into their inner dev loop?
  • What command connects your local machine to the remote cluster?
  • Bonus challenge! Run telepresence intercept on a service

Check your answers

Embrace Cloud-Native Continuous Integration (CI)

Moving CI to the cloud: Self-service CI for developers

Before cloud-native architecture became the dominant approach to designing, deploying, and releasing software, the continuous delivery story was much simpler. Typically a sysadmin would create a build server and install a version control system and continuous integration tool, such as Jenkins, TeamCity, or GoCD. In addition to continually building and integrating code, these tools could be augmented via plugins to perform rudimentary continuous deployment operations, such as FTPing binaries to VMs or uploading an artifact to a remote application server via a bespoke SDK/API.

This approach worked well when dealing with a small number of applications and a relatively static deployment environment. The initial configuration of a delivery pipeline was typically challenging and involved much trial and error. When a successful configuration was discovered, this was used as a template and then copy-pasted as more build jobs were added. Debugging a build failure often required specialist support.

The rise in popularity of containers and Kubernetes has meant that roles and responsibilities in relation to continuous delivery have changed. Operators may still set up the initial continuous integration and deployment tooling, but developers now want to self-service as they are releasing and operating what they build. This means that the scope of the infrastructure a developer needs to understand and manage has expanded from pure development tools (e.g., IDE, libraries and APIs) to deployment infrastructure (e.g., container registries and deployment templates) to runtime infrastructure (e.g., API gateways and observability systems).

All code within an application (and its subsystems) must be continuously integrated at the code level. Typically in a cloud-native system this requires building a language-specific package, artifact, or binary, and then packaging it into a container image.
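As a sketch, a minimal GitHub Actions workflow that builds a container image on every push might look like this (the branch, action versions, and image name are illustrative):

```yaml
# .github/workflows/ci.yml
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the container image from the repository's Dockerfile
      - name: Build image
        run: docker build -t my-app:${{ github.sha }} .
      # A real pipeline would also log in to a registry and push
      # the image here, e.g. with docker/login-action.
```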

🧑‍💻 Hands-on: Continuous Integration with GitHub Actions

A full walkthrough of the instructions can be seen in the video below:

🏆 Challenge! Continuous Integration with GitHub Actions

Answer the following questions to confirm your learning. At the end of the module you can complete a series of “checkpoint” questions and enter a competition to win prizes!

  • What problem did continuous integration solve?
  • Are all of the CI tools shown in the CNCF landscape open source?
  • Why did Jenkins X get developed when Jenkins still exists?
  • Bonus challenge! Complete the GitHub Actions tutorial and add a step for RUN ECHO "I’ve just completed the CI task for the Kubernetes Developer Learning Center!"

Check your answers

Checkpoint! Test your learning and win prizes

When you submit a module checkpoint, you're automatically eligible for a $10 UberEats voucher and entered to win our weekly and monthly prizes!

UberEats vouchers are only issued to the first 50 valid checkpoint submissions every week (Monday through Sunday). Limit three total entries per developer, one entry per week per module. All fraudulent submissions will be disqualified from any prize eligibility. All prizes are solely distributed at the discretion of the Ambassador Labs community team.