Telepresence Release Notes
Version 2.3.1 (June 14, 2021)
Feature: DNS Resolver Configuration
Telepresence now supports per-cluster configuration of custom DNS behavior, letting users choose which local and remote resolvers to use and which suffixes should be included or excluded.
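For illustration, assuming the per-cluster settings are placed in a telepresence.io extension of the cluster entry in your kubeconfig (the cluster name, server, and suffixes below are examples, not taken from this release), a DNS configuration sketch might look like:

```yaml
# Hypothetical kubeconfig excerpt -- names and suffixes are placeholders.
apiVersion: v1
clusters:
- name: example-cluster
  cluster:
    server: https://example-cluster.example.com:443
    extensions:
    - name: telepresence.io
      cluster:
        dns:
          include-suffixes: [.private]  # resolve these suffixes via the cluster
          exclude-suffixes: [.corp]     # never send these to the cluster resolver
```

Because the extension lives under a specific cluster entry, each cluster in the kubeconfig can carry its own DNS settings.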
Feature: AlsoProxy Configuration
Telepresence can now also proxy user-specified subnets, so that while connected users can reach external services that are only accessible from the cluster. Subnets are configured on a per-cluster basis, and each subnet is added to the TUN device so that requests to IPs within it are routed to the cluster.
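As a sketch, assuming these subnets are listed under an also-proxy key in a telepresence.io extension of the kubeconfig cluster entry (the cluster name and CIDR below are placeholders):

```yaml
# Hypothetical kubeconfig excerpt -- the CIDR is an example.
clusters:
- name: example-cluster
  cluster:
    server: https://example-cluster.example.com:443
    extensions:
    - name: telepresence.io
      cluster:
        also-proxy:
        - 10.20.0.0/16  # requests to this subnet are routed through the cluster
```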
Feature: Mutating Webhook for Injecting Traffic Agents
The Traffic Manager now contains a mutating webhook that automatically adds an agent to pods carrying the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in Git. For workloads without the annotation, Telepresence adds the agent the way it has in the past.
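For example, opting a workload in might look like the following (a hypothetical Deployment; only the annotation itself comes from the text above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
      annotations:
        # Pods with this annotation get a traffic agent injected by the webhook.
        telepresence.getambassador.io/traffic-agent: enabled
    spec:
      containers:
      - name: example-service
        image: example/service:latest
```

Because the annotation sits on the pod template rather than being patched into the live workload, the manifest in Git continues to match what is running in the cluster.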
Change: Traffic Manager Connect Timeout
The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, to accommodate the extra time it takes to apply everything needed for the mutating webhook.
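Assuming the client-side config.yml supports a timeouts section (only the trafficManagerConnect key name comes from the text above; the file path varies by OS), the default could be overridden like:

```yaml
# Hypothetical client config.yml excerpt.
timeouts:
  trafficManagerConnect: 60  # seconds to wait for the traffic manager
```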
Bug Fix: Fix for large file transfers
Fixed a TUN-device bug where large transfers from services in the cluster would sometimes hang indefinitely.
Change: Brew Formula Changed
Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so:
brew install datawire/blackbird/telepresence.
Version 2.3.0 (June 01, 2021)
Feature: Brew install Telepresence
Telepresence can now be installed via Brew on macOS, which makes it easier for users to stay up to date with the latest Telepresence version. To install via Brew, you can use the following command:
brew install datawire/blackbird/telepresence2.
Feature: TCP and UDP routing via Virtual Network Interface
Telepresence now routes outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN device that exists while Telepresence is connected. It makes the cluster's subnets available to the workstation, and it also routes DNS requests to the cluster and forwards them to intercepted pods, so pods with custom DNS configuration work as expected. Prior versions of Telepresence used firewall rules and were only capable of routing TCP.
Change: SSH is no longer used
All traffic between the client and the cluster is now tunneled via the Traffic Manager gRPC API. This means that Telepresence no longer uses SSH tunnels and that the manager no longer has sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the SFTP protocol directly, so the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration.
Feature: Running in a Docker container
Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously.
Feature: Configurable Log Levels
Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in its logs.
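As a sketch, assuming the client config.yml supports a logLevels section with per-daemon keys (the key and level names below are assumptions, and the file path varies by OS):

```yaml
# Hypothetical client config.yml excerpt.
logLevels:
  userDaemon: debug  # CLI / connector logs
  rootDaemon: info   # network (TUN device) logs
```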
Version 2.2.2 (May 17, 2021)
Feature: Legacy Telepresence subcommands
Telepresence can now translate common legacy Telepresence commands into their native equivalents. So if you want to get started quickly, you can use the same legacy Telepresence commands you are used to with the new Telepresence binary.
For a detailed list of all the changes in past releases, please consult the CHANGELOG.