Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.
Before you begin, you need Telepresence installed, along with either the Kubernetes command-line tool,
kubectl, or the OpenShift Container Platform command-line interface,
oc. This document uses kubectl in all example commands; OpenShift users can substitute oc.
This guide assumes you have a Kubernetes deployment and service that are publicly accessible through an ingress controller, and that you can run a copy of that service on your laptop.
With Telepresence, you can create global intercepts that intercept all traffic going to a service in your cluster and route it to your local environment instead.
Connect to your cluster by running
telepresence connect, which connects your workstation to the Kubernetes API server:
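The connection step can be sketched as follows; the follow-up kubectl command is illustrative and assumes your kubeconfig's current context points at the target cluster:

```shell
# Connect the workstation to the cluster's network
telepresence connect

# Any local tool can now reach in-cluster services, for example:
kubectl get services
```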
You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering
telepresence version and upgrade if needed.
Enter telepresence list and make sure the service you want to intercept is listed. For example:
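A minimal check might look like the following; the service name example-service and the output line are illustrative of the telepresence list format, not taken from a real cluster:

```shell
telepresence list
# Illustrative output:
#   example-service: ready to intercept (traffic-agent not yet installed)
```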
Get the name of the port you want to intercept on your service:
kubectl get service <service-name> --output yaml.
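For instance, assuming the service is named example-service (an illustrative name), the port name appears in the ports section of the output:

```shell
kubectl get service example-service --output yaml
# Look for the ports section, for example:
#   ports:
#   - name: http
#     port: 80
#     targetPort: 8080
```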
Intercept all traffic going to the service in your cluster:
telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>.
--port: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
--env-file: specify a file path where Telepresence writes the environment variables that are set in the pod. The example below shows Telepresence intercepting traffic going to the service
example-service. Requests reaching the service on port
http in the cluster are now routed to port
8080 on the workstation, and the environment variables of the service are written to the file passed to --env-file.
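Put together, the intercept from this example might be created as follows; the env-file path is illustrative:

```shell
# Route cluster traffic for port "http" of example-service to local port 8080
# and write the pod's environment variables to a file (path is an assumption)
telepresence intercept example-service --port 8080:http --env-file ./example-service-intercept.env
```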
- Start your local environment using the environment variables retrieved in the previous step.
The following are some examples of how to pass the environment variables to your local process:
- Docker: enter
docker run and provide the path to the file using the
--env-file argument. For more information about docker run commands, see the Docker command-line reference documentation.
- Visual Studio Code: specify the path to the environment variables file in the
envFile field of your configuration.
- JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.): use the EnvFile plugin.
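As one concrete sketch of the shell case, you can load the file into your environment before starting the service; the file contents here are made up for illustration:

```shell
# Create a stand-in for the env file Telepresence would write
cat > /tmp/example-service-intercept.env <<'EOF'
DATABASE_HOST=postgres.default
DATABASE_PORT=5432
EOF

# Export every variable in the file, then start the local service
set -a
. /tmp/example-service-intercept.env
set +a
echo "$DATABASE_HOST"
```

The same file works unchanged with docker run --env-file /tmp/example-service-intercept.env.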
Query the environment in which you intercepted the service and verify that your local instance is being invoked. All the traffic previously routed to your Kubernetes service is now routed to your local environment.
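For example, a request through your cluster's ingress should now reach the local process; the hostname below is hypothetical, so substitute your own ingress host:

```shell
# Hostname is illustrative; use the public URL of your service
curl http://example-service.example.com/
```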
You can now:
Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
Query services only exposed in your cluster's network.
Set breakpoints in your IDE to investigate bugs.