Telepresence and VPNs

The test-vpn command

You can use the telepresence test-vpn command to diagnose issues with your VPN setup. It guides you through a series of steps to determine whether there are conflicts between your VPN configuration and Telepresence.

Prerequisites

Before running telepresence test-vpn you should ensure that your VPN is in split-tunnel mode. This means that only traffic that must pass through the VPN is directed through it; otherwise, the test results may be inaccurate.

You may need to configure this on both the client and server sides. Client-side, taking the Tunnelblick client as an example, you must ensure that the Route all IPv4 traffic through the VPN tickbox is not enabled:

[Image: Tunnelblick connection settings]

Server-side, taking AWS Client VPN as an example, you simply enable split-tunnel mode:

[Image: AWS Modify Client VPN endpoint dialog]

In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently.

Testing the VPN configuration

To run it, enter:
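The invocation is simply the command named at the top of this page:

```shell
telepresence test-vpn
```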

The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected, then press enter:

Once it's gathered information about your network configuration without an active connection, it will ask you to connect to the VPN:

It will then connect to the cluster:

And show you the results of the test:

Interpreting test results

Case 1: VPN masked by cluster

In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN routes:

This means that all VPN hosts within 10.0.0.0/19 will be rendered inaccessible while Telepresence is connected.

The ideal resolution in this case is to move the pods to a different subnet. This is possible, for example, in Amazon EKS by configuring a new CIDR range for the pods. Here, relocating the pods to 10.1.0.0/19 removes the conflict and allows you to reach hosts inside the VPC's 10.0.0.0/19.
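The overlap, and the effect of the fix, can be sanity-checked with Python's ipaddress module. A minimal sketch using the example subnets above:

```python
import ipaddress

vpn_cidr = ipaddress.ip_network("10.0.0.0/19")  # range routed by the VPN (the VPC)
pod_cidr = ipaddress.ip_network("10.0.0.0/19")  # pod subnet routed by Telepresence

# Telepresence's route for the pod subnet covers the same range the VPN
# routes, so VPN hosts in that range become unreachable while connected.
print(vpn_cidr.overlaps(pod_cidr))  # True -> conflict

# After moving the pods to 10.1.0.0/19, the ranges no longer overlap:
new_pod_cidr = ipaddress.ip_network("10.1.0.0/19")
print(vpn_cidr.overlaps(new_pod_cidr))  # False -> VPN hosts reachable again
```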

However, it is not always possible to move the pods to a different subnet. In these cases, you should use the never-proxy configuration to prevent certain hosts from being masked. This can be particularly important for DNS resolution. With AWS Client VPN, it is customary to set the .2 host as a DNS server (e.g. 10.0.0.2 in this case):

[Image: AWS Modify Client VPN endpoint dialog]

If this is the case for your VPN, you should place the DNS server in the never-proxy list for your cluster. In the values file that you pass to telepresence helm install [--upgrade] --values <values file>, add a client.routing entry like so:
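For example, to keep 10.0.0.2 reachable through the VPN, the values file could contain something like the following sketch (the exact key name, assumed here to be neverProxySubnets, may vary between Telepresence chart versions; check your chart's values):

```yaml
client:
  routing:
    neverProxySubnets:
      - 10.0.0.2/32   # the VPN's DNS server stays on the VPN route
```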

Case 2: Cluster masked by VPN

In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR that the VPN routes:

Typically this means that pods within 10.0.0.0/8 are not accessible while the VPN is connected.

As with the first case, the ideal resolution is to move the pods away, but this may not always be possible. In that case, your best bet is to broaden the VPN's CIDR (that is, shorten its prefix so that it routes more hosts), making Telepresence's routes win by virtue of being more specific. One easy way to do this is to disable split tunneling (see the prerequisites section for more on split-tunneling).
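Why a broader VPN route helps follows from longest-prefix-match routing: the most specific matching route wins. A small illustrative sketch (the routing-table entries are assumptions for the example):

```python
import ipaddress

# Hypothetical routing table: with split tunneling disabled, the VPN
# routes everything via a single, maximally unspecific default route.
routes = [
    ("vpn", ipaddress.ip_network("0.0.0.0/0")),             # VPN default route
    ("telepresence", ipaddress.ip_network("10.0.0.0/19")),  # cluster pod subnet
]

def winning_route(addr: str) -> str:
    """Return the name of the most specific route containing addr."""
    host = ipaddress.ip_address(addr)
    matches = [(name, net) for name, net in routes if host in net]
    return max(matches, key=lambda m: m[1].prefixlen)[0]

print(winning_route("10.0.0.7"))     # telepresence (the /19 is more specific)
print(winning_route("192.168.1.5"))  # vpn (only the default route matches)
```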

Note that once you fix this, you may find yourself back in Case 1 and may need never-proxy rules to keep hosts on the VPN reachable:
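As a sketch, such a values entry might look like the following (the range and the key name, assumed here to be neverProxySubnets, are examples; adjust them to your VPN and chart version):

```yaml
client:
  routing:
    neverProxySubnets:
      - 10.0.0.0/24   # example: VPN hosts that must remain reachable
```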