Telepresence Release Notes

Version 2.3.7 (July 23, 2021)

Feature: Also-proxy in telepresence status

An also-proxy entry in the Kubernetes cluster config will show up in the output of the telepresence status command.

Feature: Non-interactive telepresence login

telepresence login now has an --apikey=KEY flag that allows for non-interactive logins. This is useful for headless environments where launching a web-browser is impossible, such as cloud shells, Docker containers, or CI.
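
For example, in a CI pipeline or other headless environment you might run something along these lines (TELEPRESENCE_API_KEY is an assumed variable name for illustration; supply the key from whatever secret store your environment uses):

```shell
# Non-interactive login using an API key injected by the CI system.
# TELEPRESENCE_API_KEY is a placeholder variable name, not something Telepresence defines.
telepresence login --apikey="$TELEPRESENCE_API_KEY"
```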

Bug Fix: Mutating webhook injector correctly hides named ports for probes

The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.

Bug Fix: telepresence current-cluster-id crash fixed

Fixed a regression introduced in 2.3.5 that caused `telepresence current-cluster-id` to crash.

Bug Fix: Better UX around intercepts with no local process running

Previously, requests would hang indefinitely when initiating an intercept before a local process was running. This has been fixed; such requests now receive an "Empty reply from server" response until you start a local process.

Bug Fix: API keys no longer show as "no description"

New API keys generated internally for communication with Ambassador Cloud no longer show up as "no description" in the Ambassador Cloud web UI. Existing API keys generated by older versions of Telepresence will still show up this way.

Bug Fix: Fix corruption of user-info.json

Fixed a race condition where rapidly logging in and out could cause memory corruption or corruption of the user-info.json cache file used when authenticating with Ambassador Cloud.

Bug Fix: Improved DNS resolver for systemd-resolved

Telepresence's systemd-resolved-based DNS resolver is now more stable, and if it fails to initialize, the overriding resolver that Telepresence falls back to will no longer cause general DNS lookup failures.

Bug Fix: Faster telepresence list command

The performance of telepresence list has been improved significantly by reducing the number of calls the command makes to the cluster.

Version 2.3.6 (July 20, 2021)

Bug Fix: Fix preview URLs

Fixed a regression introduced in 2.3.5 that caused preview URLs to not work.

Bug Fix: Fix subnet discovery

Fixed a regression introduced in 2.3.5 where the Traffic Manager's RoleBinding did not correctly reference the traffic-manager Role, preventing subnet discovery from working correctly.

Bug Fix: Fix root-user configuration loading

Fixed a regression introduced in 2.3.5 where the root daemon did not correctly read the configuration file, ignoring the user's configured log levels and timeouts.

Bug Fix: Fix a user daemon crash

Fixed an issue that could cause the user daemon to crash during shutdown because it unconditionally attempted to close a channel that might already be closed.

Version 2.3.5 (July 15, 2021)

Feature: traffic-manager in multiple namespaces

We now support installing multiple traffic managers in the same cluster. This allows operators to install deployments of Telepresence that are limited to certain namespaces.

Feature: No more dependence on kubectl

Telepresence no longer depends on having an external kubectl binary, which might not be present for OpenShift users (who have oc instead of kubectl).

Feature: Agent image now configurable

We now support configuring which agent image and registry to use in the config file. This enables users whose laptops are in air-gapped environments to create selective intercepts without requiring a login. It also makes it easier for those developing Telepresence to specify which agent image should be used. The TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY environment variables are no longer used.
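
As a rough sketch, the config file entry could look something like this (the images section and its key names are assumptions for illustration, and the registry and image values are placeholders; consult the configuration docs for the exact schema):

```yaml
# config.yml -- sketch only; section/key names assumed, values are placeholders
images:
  registry: registry.example.com/telepresence
  agentImage: tel2:2.3.5
```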

Feature: Max gRPC receive size now configurable

The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
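
For illustration, the limit might be raised with a config entry along these lines (the grpc section and key name shown are assumptions; adjust the size to your needs):

```yaml
# config.yml -- sketch only; section/key name assumed
grpc:
  maxReceiveSize: 10Mi
```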

Feature: CLI can be used in air-gapped environments

While Telepresence will auto-detect whether your cluster is in an air-gapped environment, we've added an option users can add to their config.yml to ensure the CLI behaves as if it is in an air-gapped environment. Air-gapped environments require a manually installed license.
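
A minimal sketch of what that config.yml option might look like (the cloud.skipLogin key shown here is an assumption; check the configuration docs for the exact name):

```yaml
# config.yml -- sketch only; key name assumed
cloud:
  skipLogin: true
```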

Version 2.3.4 (July 09, 2021)

Bug Fix: Improved IP log statements

Some log statements printed incorrect characters where IP addresses should have appeared. These statements have been fixed to produce more accurate and useful logging.

Bug Fix: Improved messaging when multiple services match a workload

If multiple services matched a workload when performing an intercept, Telepresence would crash. It now gives the correct error message, instructing the user on how to specify which service the intercept should use.

Bug Fix: Traffic-manager creates services in its own namespace to determine subnet

Telepresence now determines the service subnet by creating a dummy service in its own namespace instead of the default namespace, which previously caused RBAC permission issues in some clusters.

Bug Fix: Telepresence connect respects pre-existing clusterrole

When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the cluster, Telepresence will no longer try to update the clusterrole.

Bug Fix: Helm Chart fixed for clientRbac.namespaced

The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.

Version 2.3.3 (July 07, 2021)

Feature: Traffic Manager Helm Chart

Telepresence now supports installing the Traffic Manager via Helm. This will make it easy for operators to install and configure the server-side components of Telepresence separately from the CLI (which in turn allows for better separation of permissions).
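
As a rough example, an operator might install the chart with commands along these lines (the repository URL, chart name, and namespace are assumptions for illustration; follow the Helm chart documentation for the exact values):

```shell
# Sketch only -- repository URL and chart name are assumptions
helm repo add datawire https://app.getambassador.io
helm install traffic-manager datawire/telepresence --namespace ambassador --create-namespace
```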

Feature: Traffic-manager in custom namespace

As the traffic-manager can now be installed in any namespace via Helm, Telepresence can now be configured to look for the Traffic Manager in a namespace other than ambassador. This can be configured on a per-cluster basis.
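
For illustration, the per-cluster setting could be expressed as a kubeconfig extension roughly like this (the manager.namespace key under the telepresence.io extension is an assumption, and the cluster name, server, and namespace are placeholders):

```yaml
# kubeconfig cluster entry -- sketch only
clusters:
- name: example-cluster
  cluster:
    server: https://example-cluster.example.com
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
```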

Feature: Intercept --to-pod

telepresence intercept now supports a --to-pod flag that can be used to port-forward sidecars' ports from an intercepted pod.
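
For example (the service name and port numbers are placeholders):

```shell
# Intercept port 8080 of my-service and also forward the sidecar's port 8081 from the intercepted pod
telepresence intercept my-service --port 8080 --to-pod 8081
```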

Change: Change in migration from edgectl

Telepresence no longer automatically shuts down the old api_version=1 edgectl daemon. If migrating from such an old version of edgectl, you must now manually shut down the edgectl daemon before running Telepresence. This was already the case when migrating from the newer api_version=2 edgectl.

Bug Fix: Fixed error during shutdown

The root daemon no longer terminates when the user daemon disconnects from its gRPC streams; it now waits to be terminated by the CLI. The old behavior could cause problems with things not being cleaned up correctly.

Bug Fix: Intercepts will survive deletion of intercepted pod

An intercept will survive deletion of the intercepted pod provided that another pod is created (or already exists) that can take over.

Version 2.3.2 (June 18, 2021)

Feature: Service Port Annotation

The mutator webhook for injecting traffic-agents now recognizes a telepresence.getambassador.io/inject-service-port annotation to specify which port to intercept, bringing the functionality of the --port flag to users who use the mutator webhook in order to control Telepresence via GitOps.
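
For example, a workload's pod template metadata might carry the annotation like this (the port value is a placeholder):

```yaml
# Pod template metadata -- sketch only; port value is a placeholder
metadata:
  annotations:
    telepresence.getambassador.io/inject-service-port: "8080"
```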

Feature: Outbound Connections

Outbound connections are now routed through the intercepted pods, which means that, from the cluster's perspective, the connections originate from those pods. This allows service meshes to correctly identify the traffic.

Change: Inbound Connections

Inbound connections from an intercepted agent are now tunneled to the manager over the existing gRPC connection, instead of establishing a new connection to the manager for each inbound connection. This avoids interference from certain service mesh configurations.

Change: Traffic Manager needs new RBAC permissions

The Traffic Manager requires RBAC permissions to list Nodes and Pods, and to create a dummy Service in the manager's namespace.

Change: Reduced developer RBAC requirements

The on-laptop client no longer requires RBAC permissions to list the Nodes in the cluster or to create Services, as that functionality has been moved to the Traffic Manager.

Bug Fix: Able to detect subnets

Telepresence will now detect the Pod CIDR ranges even if they are not listed in the Nodes.

Bug Fix: Dynamic IP ranges

The list of cluster subnets that the virtual network interface will route is now configured dynamically and will follow changes in the cluster.

Bug Fix: No duplicate subnets

Subnets fully covered by other subnets are now pruned internally and thus never superfluously added to the laptop's routing table.

Change: Change in default timeout

The trafficManagerAPI timeout default has changed from 5 seconds to 15 seconds, in order to accommodate the extended time it takes for the traffic-manager to do its initial discovery of cluster info as a result of the above bug fixes.
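
If the new default still isn't enough, this timeout can be raised further in config.yml; a sketch (the timeouts section shown is assumed to be where this value lives, and 30s is an arbitrary example):

```yaml
# config.yml -- sketch only; section name assumed
timeouts:
  trafficManagerAPI: 30s
```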

Bug Fix: Removal of DNS config files on macOS

On macOS, files generated under /etc/resolver/ as the result of using include-suffixes in the cluster config are now properly removed on quit.

Bug Fix: Large file transfers

Telepresence no longer erroneously terminates connections early when sending a large HTTP response from an intercepted service.

Bug Fix: Race condition in shutdown

When shutting down the user-daemon or root-daemon on the laptop, telepresence quit and related commands no longer return before everything is fully shut down. You can now count on all of the side effects on the laptop having been cleaned up by the time the command returns.

Version 2.3.1 (June 14, 2021)

Feature: DNS Resolver Configuration

Telepresence now supports per-cluster configuration of custom DNS behavior, letting users determine which local and remote resolvers to use and which suffixes should be ignored and included.
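
A minimal sketch of what such a per-cluster entry might look like as a telepresence.io kubeconfig extension (the dns keys and the suffix values shown are illustrative assumptions):

```yaml
# kubeconfig cluster extension -- sketch only; keys and values assumed
extensions:
- name: telepresence.io
  extension:
    dns:
      include-suffixes: [.cluster.local]
      exclude-suffixes: [.com, .org]
```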

Feature: AlsoProxy Configuration

Telepresence now supports proxying additional user-specified subnets, so that while connected to Telepresence, users can access external services that are only reachable from the cluster. These subnets are configured on a per-cluster basis, and each one is added to the TUN device so that requests to IPs within that subnet are routed to the cluster.
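
A minimal sketch of an also-proxy entry in the cluster's telepresence.io kubeconfig extension (the extension key layout is an assumption and the subnet is a placeholder):

```yaml
# kubeconfig cluster extension -- sketch only; subnet is a placeholder
extensions:
- name: telepresence.io
  extension:
    also-proxy:
    - 10.128.0.0/16
```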

Feature: Mutating Webhook for Injecting Traffic Agents

The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/inject-traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in Git. For workloads without the annotation, Telepresence will add the agent the way it has in the past.
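
For example, a Deployment could opt its pods in with the annotation named above (sketch only; the surrounding Deployment structure is implied):

```yaml
# Deployment pod template -- sketch only
spec:
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-traffic-agent: enabled
```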

Change: Traffic Manager Connect Timeout

The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to accommodate the extended time it takes to apply everything needed for the mutator webhook.
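
As with other timeouts, this value can be tuned in config.yml if needed; a sketch (the timeouts section is assumed to be where this value lives, and 120s is an arbitrary example):

```yaml
# config.yml -- sketch only; section name assumed
timeouts:
  trafficManagerConnect: 120s
```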

Bug Fix: Fix for large file transfers

Fixed a TUN-device bug where large transfers from services in the cluster would sometimes hang indefinitely.

Change: Brew Formula Changed

Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence.

Version 2.3.0 (June 01, 2021)

Feature: Brew install Telepresence

Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2.

Feature: TCP and UDP routing via Virtual Network Interface

Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP.

Change: SSH is no longer used

All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses SSH tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the SFTP protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration.

Feature: Running in a Docker container

Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously.

Feature: Configurable Log Levels

Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log.
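
A sketch of what that configuration might look like (the logLevels section and key names shown are assumptions; check the configuration docs for the exact names):

```yaml
# config.yml -- sketch only; section/key names assumed
logLevels:
  userDaemon: debug
  rootDaemon: info
```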

Version 2.2.2 (May 17, 2021)

Feature: Legacy Telepresence subcommands

Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.

For a detailed list of all the changes in past releases, please consult the CHANGELOG.


Questions?

We’re here to help if you have questions.