If you’re experiencing issues with Emissary-ingress and cannot diagnose them through the
/ambassador/v0/diag/ diagnostics endpoint, this document covers various approaches and advanced use cases for debugging Emissary-ingress issues.
The following sections assume that you already have a running Emissary-ingress installation.
A Note on TLS
TLS can appear intractable if you haven't set up certificates correctly. If you're having trouble with TLS, always check the logs of your Emissary-ingress Pods and look for certificate errors.
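A quick way to surface certificate-related errors is to filter the logs. This is only a sketch; adjust the namespace and Deployment name to match your installation:

```
kubectl logs -n emissary deployment/emissary-ingress | grep -i -E 'cert|tls'
```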
Check Emissary-ingress status
First, check the Emissary-ingress Deployment with the following:
kubectl get -n emissary deployments
After a brief period, the terminal will print something similar to the following:
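For example, a healthy three-replica installation looks something like this (with older kubectl versions; newer versions collapse the desired/current counts into a single READY column):

```
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
emissary-ingress   3         3         3            3           4m
```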
Check that the “desired” number of Pods matches the “current” and “available” number of Pods.
If they are not equal, check the status of the associated Pods with the following command:
kubectl get pods -n emissary
The terminal should print something similar to the following:
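Representative output for a healthy three-Pod installation (the hash suffixes in the Pod names will vary):

```
NAME                                READY   STATUS    RESTARTS   AGE
emissary-ingress-85c4cf67b-4pfj2    1/1     Running   0          4m
emissary-ingress-85c4cf67b-fqp9g    1/1     Running   0          4m
emissary-ingress-85c4cf67b-vg6p5    1/1     Running   0          4m
```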
The actual names of the Pods will vary. All the Pods should indicate
Running, and all should show 1/1 containers ready.
If the Pods do not seem reasonable, use the following command for details about the history of the Deployment:
kubectl describe -n emissary deployment emissary-ingress
Look for data in the “Replicas” field near the top of the output. For example:
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
Look for data in the “Events” field near the bottom of the output, which often displays issues such as a failed image pull, RBAC problems, or a lack of cluster resources. For example:
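An illustrative Events section for a Deployment that scaled up normally might look like this (ages and replica-set names will differ):

```
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  5m    deployment-controller  Scaled up replica set emissary-ingress-85c4cf67b to 3
```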
Additionally, use the following command to “describe” the individual Pods:
kubectl describe pods -n emissary <emissary-ingress-pod-name>
Look for data in the “Status” field near the top of the output and in the “Events” field near the bottom, as the latter will often show issues such as image pull failures, volume mount issues, and container crash loops. For example:
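An illustrative excerpt from a Pod that cannot pull its image (the image name here is a placeholder; your messages will name your actual image):

```
Status:       Pending
...
Events:
  Type     Reason   Age                From     Message
  ----     ------   ----               ----     -------
  Warning  Failed   2m (x4 over 3m)    kubelet  Failed to pull image "example.invalid/emissary:tag": ...
  Warning  BackOff  1m (x6 over 3m)    kubelet  Back-off restarting failed container
```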
For both the Deployment and the individual Pods, take the necessary action to address any discovered issues.
Emissary-ingress logging can provide information on anything that might be abnormal or malfunctioning. While there may be a large amount of data to sort through, look for key errors such as the Emissary-ingress process restarting unexpectedly, or a malformed Envoy configuration.
Emissary-ingress has two major log mechanisms: Emissary-ingress logging and Envoy logging. Both appear in the normal
kubectl logs output, and both can have additional debug-level logging enabled.
Emissary-ingress debug logging
Much of Emissary-ingress's logging is concerned with the business of noticing changes to Kubernetes resources that specify the Emissary-ingress configuration, and generating new Envoy configuration in response to those changes. Enabling debug logging for this part of the system is under the control of two environment variables:
AES_LOG_LEVEL=debug to debug the early boot sequence and Emissary-ingress's interactions with the Kubernetes cluster (finding changed resources, etc.).
AMBASSADOR_DEBUG=diagd to debug the process of generating an Envoy configuration from the input resources.
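One way to set these environment variables is with kubectl set env (a sketch; you can also edit the Deployment manifest directly). Note that changing a Deployment's environment triggers a rolling restart of its Pods:

```
kubectl set env -n emissary deployment/emissary-ingress AES_LOG_LEVEL=debug AMBASSADOR_DEBUG=diagd
```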
Emissary-ingress Envoy logging
Envoy logging is concerned with the actions Envoy is taking for incoming requests. Typically, Envoy will only output access logs, and certain errors, but enabling Envoy debug logging will show very verbose information about the actions Envoy is actually taking. It can be useful for understanding why connections are being closed, or whether an error status is coming from Envoy or from the upstream service.
It is possible to enable Envoy logging at boot, but for the most part, it's safer to
enable it at runtime, right before sending a request that is known to have problems.
To enable Envoy debug logging, use
kubectl exec to get a shell on the Emissary-ingress Pod.
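A sketch of the runtime toggle, assuming curl is available in the container, Envoy's admin interface is listening on its default port (8001), and the default log level is error:

```
kubectl exec -n emissary <emissary-ingress-pod-name> -- sh -c \
  'curl -s -XPOST "localhost:8001/logging?level=debug" && sleep 10 && curl -s -XPOST "localhost:8001/logging?level=error"'
```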
This will turn on Envoy debug logging for ten seconds, then turn it off again.
To view the logs from Emissary-ingress:
Use the following command to target an individual Emissary-ingress Pod:
kubectl get pods -n emissary
The terminal will print something similar to the following:
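Representative output (Pod names will vary):

```
NAME                                READY   STATUS    RESTARTS   AGE
emissary-ingress-85c4cf67b-4pfj2    1/1     Running   0          13m
emissary-ingress-85c4cf67b-fqp9g    1/1     Running   0          13m
```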
Then, run the following:
kubectl logs -n emissary <emissary-ingress-pod-name>
The terminal will print something similar to the following:
Note that many installations run multiple Emissary-ingress Pods, and the logs are independent for each Pod.
Examine Pod and container contents
You can examine the contents of the Emissary-ingress Pod to check for issues: whether volume mounts are correct, whether TLS certificates are present in the required directory, whether the Pod has the latest Emissary-ingress configuration, and whether the generated Envoy configuration is correct. In these instructions, we will look for problems related to the Envoy configuration.
To look into an Emissary-ingress Pod, get a shell on the Pod using
kubectl exec. For example,
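A sketch of getting an interactive shell on a Pod (sh is assumed to be present in the image; substitute your Pod's actual name):

```
kubectl exec -it -n emissary <emissary-ingress-pod-name> -- sh
```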
Determine the latest configuration. If you haven't overridden the configuration directory, the latest configuration will be in
/ambassador/snapshots. If you have overridden it, Emissary-ingress saves snapshots under the directory you configured instead.
In the snapshots directory:
- snapshot.yaml contains the full input configuration that Emissary-ingress has found;
- aconf.json contains the Emissary-ingress configuration extracted from the snapshot;
- ir.json contains the IR constructed from the Emissary-ingress configuration; and
- econf.json contains the Envoy configuration generated from the IR.
In the snapshots directory, the current configuration is stored in files with no digit suffix, and older configurations have increasing numbers. For example,
ir.json is the current IR, ir-1.json is the next oldest, then ir-2.json, and so on.
If something is wrong with
aconf, there is an issue with your configuration. If something is wrong with
econf, you should open an issue on GitHub.
The actual input provided to Envoy is split into two files:
- The bootstrap-ads.json file contains details about Envoy statistics, logging, authentication, etc.
- The envoy.json file contains information about request routing.
You may generally find it simplest to just look at the
econf.json files in the
snapshots directory, which include both kinds of configuration.
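If jq is available on your workstation, you can stream a snapshot out of the Pod and inspect it locally. This is a sketch assuming the default /ambassador/snapshots directory:

```
kubectl exec -n emissary <emissary-ingress-pod-name> -- cat /ambassador/snapshots/econf.json | jq keys
```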