In this guide, we'll give you everything you need: a preconfigured demo cluster, the Telepresence CLI, a config file for connecting to the demo cluster, and code to run a cluster service locally.
Sign in to Ambassador Cloud to download your demo cluster archive. The archive contains all the tools and configurations you need to complete this guide.
Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).
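As a sketch, assuming your browser saved the archive to `~/Downloads` and the installer script is named `install.sh` (both the archive name and the script name are assumptions; adjust for your system):

```shell
# Assumed download location and file names; adjust for your system.
cd ~/Downloads
unzip ambassador-demo-cluster.zip   # hypothetical archive name
cd ambassador-demo-cluster
./install.sh                        # hypothetical installer script name
```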
Confirm that `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node:
kubectl get nodes
Confirm that the Telepresence CLI is now installed (we expect to see that the daemons are not running yet):
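One way to confirm this is the `status` subcommand, which reports the CLI version and whether the Telepresence daemons are running:

```shell
# At this point the daemons should show as not running,
# since we haven't connected to the cluster yet.
telepresence status
```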
Telepresence connects your local workstation to a remote Kubernetes cluster.
Connect to the cluster (this requires root privileges and will ask for your password):
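The connection is made with the `connect` subcommand (it starts a root daemon, which is why it prompts for your password):

```shell
telepresence connect
```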
Test that Telepresence is working properly by connecting to the Kubernetes API server:
curl -ik https://kubernetes.default
Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.
Clone the emojivoto app:
git clone https://github.com/datawire/emojivoto.git
Deploy the app to your cluster:
kubectl apply -k emojivoto/kustomize/deployment
Change the kubectl namespace:
kubectl config set-context --current --namespace=emojivoto
List the Services:
kubectl get svc
Since you've already connected Telepresence to your cluster, you can access the frontend service in your browser at http://web-app.emojivoto. This is the namespace-qualified DNS name, in the form `<service name>.<namespace>`.
Vote for some emojis and see how the leaderboard changes.
There is one emoji that causes an error when you vote for it. Vote for 🍩 and notice that the leaderboard does not update. An error also appears in the browser dev console:
GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)
The error is in a backend service, so while that bug is being fixed we can add an error page to the frontend to notify the user.
Now start up the `web-app` service on your laptop. We'll then make a code change and intercept the service so that we can see the results of the change immediately.
In a new terminal window, change into the repo directory and build the application:
cd <cloned repo location>/emojivoto
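The build command itself isn't shown here; assuming the project follows the standard Yarn workflow (an assumption — check the repo's README for the actual build steps), installing the dependencies would look like:

```shell
# Hypothetical build step: install the frontend's dependencies with Yarn.
# The actual command may differ; consult the repository's README.
yarn install
```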
Change into the service's code directory (`emojivoto-web-app`) and start the server:
yarn webpack serve
Access the application at http://localhost:8080 and see that voting for the 🍩 generates the same error as in the application deployed in the cluster.
We've now set up a local development environment for the app. Next, we'll make and locally test a code change that improves how the app handles voting for 🍩.
In the terminal running webpack, stop the server with `Ctrl+C`.
In your preferred editor, open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with the highlighted code snippet.
Run webpack to fully recompile the code, then start the server again:
yarn webpack serve
Reload the browser tab showing http://localhost:8080 and vote for 🍩. Notice that an error message is now shown to the user instead of the vote failing silently, improving the user experience.
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.
Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
telepresence intercept web-app --namespace emojivoto --port 8080
Go to the frontend service again in your browser at http://web-app.emojivoto. Voting for 🍩 should now show an error message to the user.
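When you're done testing, you can stop the intercept with the `leave` subcommand, passing the intercept name (which by default matches the workload name):

```shell
telepresence leave web-app
```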
Use preview URLs to collaborate with your colleagues and others outside of your organization.
While connected to the cluster, your laptop can interact with services as if it were another pod in the cluster.
Learn more about use cases and the technical implementation of Telepresence.