Creating a Productive Local Development Environment with Kubernetes
Tools, practices, and configuration for creating an effective local development loop when building and deploying apps to Kubernetes
Ask any developer what their top priority is when working with a new application or technology stack and they will point to creating an effective local development environment: installing and configuring all of the tools they need to be productive. The goal is always to establish a fast development feedback loop that is as production-like as possible.
The goal remains the same with cloud native technologies, but adopting containers and Kubernetes means there are a few more tools to install and a few more configurations to tweak.
This guide applies both to a developer simply experimenting with Kubernetes and to a new engineer onboarding onto a team that deploys to Kubernetes. The sooner a developer can get their local development environment configured, the sooner they can ship code to production. The gold standard is to ship code on the first day.
From Local Dev to Remote Deployment: Avoiding Cloud Complexity
Before Kubernetes
Before cloud native architecture became the dominant approach to designing, deploying, and releasing software, the local development story was much simpler. Typically a developer would install the language runtime on their machine, download the application source code, and build and run the (often monolithic) application locally via their favourite IDE.
After Kubernetes
As applications and their underlying frameworks increased in complexity, the start time of an app in development increased, resulting in a slow coding feedback loop. This led many web frameworks, IDEs, and custom tools to support “hot reloading”, which allows code changes to be quickly visible (and testable) in the locally running application without a redeployment or restart.
The rise in popularity of containers and Kubernetes has introduced more layers into the typical tech stack. This brings clear advantages, such as isolation and fault tolerance, but it has also increased the complexity of the local development setup.

Supercharging Your Local Kubernetes Development Environment
Being able to effectively configure a local development environment for services deployed in Kubernetes is not dependent on a single tool or technique. A combination of approaches is required:
Container Build Tools
The ability to quickly and repeatedly build containers locally is vital when changing code and configuration. Many teams want to adopt industry-approved container build standards, or simply don’t want the hassle of assembling their own container images; in both cases buildpacks are a popular choice.
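As a rough sketch, the Cloud Native Buildpacks `pack` CLI can build an OCI image from source without a Dockerfile; the image tag and builder shown here are examples, not a prescribed setup:

```bash
# Build a container image from the source in the current directory using a
# Cloud Native Buildpacks builder (image tag and builder name are examples)
pack build my-app:dev --path . --builder paketobuildpacks/builder-jammy-base
```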
Hot Reload
Developers want to be able to quickly see the results of their code changes without having to redeploy or restart all of their services.
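One way to get this loop against a Kubernetes cluster is a tool such as Skaffold; assuming a skaffold.yaml already exists in the project, a single command watches, rebuilds, and redeploys on every change:

```bash
# Watch source files, rebuild the image, redeploy to the current kube-context,
# and stream logs until interrupted (assumes a skaffold.yaml in the project root)
skaffold dev --port-forward
```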
K8s-Aware Command Line
When deploying applications via the CLI to multiple K8s environments, both local and remote, it is essential to be able to rapidly understand which context and namespace is being used. Being able to quickly and easily change context and namespaces is also valuable.
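For instance, plain kubectl can inspect and switch the active context and namespace, and helpers such as kubectx/kubens make this faster; the context and namespace names below are placeholders:

```bash
# Show all configured contexts and mark the active one
kubectl config get-contexts

# Switch cluster context and default namespace with plain kubectl
kubectl config use-context my-local-cluster
kubectl config set-context --current --namespace=my-team

# Or use the kubectx/kubens helpers for quicker switching
kubectx my-local-cluster
kubens my-team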
K8s Dashboard
Not every developer wants to be at the command line the entire time they are working. Being able to point and click around a high-level overview of an application’s deployment can support rapid learning and help identify problems such as high CPU or memory consumption.
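As one example, minikube bundles the standard Kubernetes Dashboard, and on other clusters the dashboard can be installed and reached via kubectl proxy (the pinned version below is illustrative; check the dashboard project for the current release):

```bash
# On minikube, open the bundled Kubernetes Dashboard in a browser
minikube dashboard

# On other clusters, install the dashboard and reach it through kubectl proxy
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```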
Collaborative Remote Testing
When building distributed (microservice-based) systems it is often the case that issues can only be recreated in certain environments. Being able to share access to these environments with both fellow developers and stakeholders enables a quicker find-fix-release dev loop.
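A minimal sketch with Telepresence, assuming access to a shared remote cluster (the service name and port mapping are examples): connect to the cluster’s network, then intercept traffic destined for a remote service so it reaches a locally running instance instead.

```bash
# Connect the local machine to the remote cluster's network
telepresence connect

# Route traffic for a remote service to a process running locally
# (service name and port mapping are examples)
telepresence intercept my-service --port 8080:http
```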
Frequently Asked Questions
Do I need to use a new IDE when developing applications for Kubernetes?
No. Many IDEs, such as VS Code and IntelliJ IDEA, offer plugins that add extra Kubernetes support, so there is no need to search for a new IDE.
What’s the difference between developing applications for Docker and developing applications for Kubernetes?
Building and deploying a Docker container-based application only requires a developer to have Docker installed locally. Deploying applications into Kubernetes requires access to a cluster, which can be running locally (e.g. minikube or k3s) or remotely (e.g. GKE or AWS EKS).
Docker Desktop (or Docker for Mac and Docker for Windows) now includes the option to install and run a local Kubernetes cluster. Developers who don’t want to install Docker or Kubernetes locally can also develop an application locally (without building a container) and connect and integrate with a remote cluster using a tool like Telepresence.
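For reference, a local cluster can be started with a single command using any of the common tools (cluster names here are examples):

```bash
# A few common ways to start a local Kubernetes cluster (pick one)
minikube start                   # runs a cluster in a local VM or container
k3d cluster create dev           # lightweight k3s cluster running in Docker
kind create cluster --name dev   # Kubernetes-in-Docker, popular for CI and local testing
```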
What’s the best practice for testing Kubernetes-based applications locally?
In addition to software development best practices like unit testing and component testing, for small applications a simple end-to-end test can be conducted by deploying all of the services that make up an application in a locally running Kubernetes cluster.
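A sketch of such an end-to-end check against a local cluster follows; the manifest directory, service name, and health endpoint are illustrative rather than prescribed:

```bash
# Deploy every service's manifests into the local cluster
kubectl apply -f k8s/

# Wait until all Deployments report as available
kubectl wait --for=condition=available deployment --all --timeout=120s

# Expose the entry-point service locally (in the background) and hit a health endpoint
kubectl port-forward svc/frontend 8080:80 &
curl -f http://localhost:8080/healthz
```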
When a local development machine runs out of CPU or memory to run all the services in a local cluster, using a local-to-remote development tool like Ksync, Skaffold or Telepresence can allow integration testing.
How can I develop and test my application when I can’t run all of my applications in a local Kubernetes cluster (minikube, k3s, kind etc)?
Using a local-to-remote development tool like Ksync, Skaffold or Telepresence can enable a fast development loop and integration testing.