This tutorial was originally published on Datawire.io in 2017. As a result, some of the tools mentioned may no longer be actively maintained. Please join our Slack if you have any questions.
Monitoring Envoy and Ambassador on Kubernetes with the Prometheus Operator
In the Kubernetes ecosystem, one of the emerging themes is how applications can best take advantage of the various capabilities of Kubernetes. The Kubernetes community has also introduced new concepts such as Custom Resources to make it easier to build Kubernetes-native software.
In late 2016, CoreOS introduced the Operator pattern and released the Prometheus Operator as a working example of the pattern. The Prometheus Operator automatically creates and manages Prometheus monitoring instances.
The operator model is especially powerful for cloud-native organizations deploying multiple services. In this model, each team can deploy their own Prometheus instance as necessary, instead of relying on a central SRE team to implement monitoring.
Envoy, Ambassador, and Prometheus
In this tutorial, we'll show how the Prometheus Operator can be used to monitor an Envoy proxy deployed at the edge. Envoy is an open source L7 proxy. One of the (many) reasons for Envoy's growing popularity is its emphasis on observability. Envoy emits its statistics in statsd format.
Instead of using Envoy directly, we'll use Ambassador. Ambassador is a Kubernetes-native API Gateway built on Envoy. Similar to the Prometheus Operator, Ambassador configures and manages Envoy instances in Kubernetes, so that the end user doesn't need to do that work directly.
Prerequisites
This tutorial assumes you're running Kubernetes 1.8 or later, with RBAC enabled.
Note: If you're running on Google Kubernetes Engine, you'll need to grant cluster-admin privileges to the account that will be installing Prometheus and Ambassador. You can do this with the commands below:

$ gcloud info | grep Account
Account: [username@example.org]
$ kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole=cluster-admin --user=username@example.org
Deploy the Prometheus Operator
The Prometheus Operator runs as a Kubernetes Deployment. We'll deploy the operator first, along with the ServiceAccount and RBAC rules it needs.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
- kind: ServiceAccount
  name: prometheus-operator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus-operator
rules:
- apiGroups:
  - extensions
  resources:
  - thirdpartyresources
  verbs:
  - "*"
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups:
  - monitoring.coreos.com
  resources:
  - alertmanagers
  - prometheuses
  - servicemonitors
  verbs:
  - "*"
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - pods
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources:
  - services
  - endpoints
  verbs: ["get", "create", "update"]
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - namespaces
  verbs: ["list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: prometheus-operator
  name: prometheus-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: prometheus-operator
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
        image: quay.io/coreos/prometheus-operator:v0.15.0
        name: prometheus-operator
        ports:
        - containerPort: 8080
          name: http
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 50Mi
      serviceAccountName: prometheus-operator
kubectl apply -f prom-operator.yaml
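If you'd like to verify that the operator came up cleanly before moving on, a quick check such as the following should work (the exact CRD names depend on the operator version, but they live under the monitoring.coreos.com group):

# The operator pod should reach Running
$ kubectl get pods -l k8s-app=prometheus-operator
# Shortly after startup, the operator registers CRDs such as
# prometheuses.monitoring.coreos.com and servicemonitors.monitoring.coreos.com
$ kubectl get customresourcedefinitions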
We'll also want to create an additional ServiceAccount (plus a ClusterRole and ClusterRoleBinding) for the actual Prometheus instances.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default
kubectl apply -f prom-rbac.yaml
The Operator functions as your virtual SRE: at all times, the Prometheus Operator ensures that you have a set of Prometheus servers running with the appropriate configuration.
Deploy Ambassador
Ambassador also functions as a virtual SRE: at all times, Ambassador ensures that you have a set of Envoy proxies running with the appropriate configuration.
We're going to deploy Ambassador into Kubernetes. On each Ambassador pod, we'll also deploy a sidecar container that runs the Prometheus statsd exporter. The exporter collects the statsd metrics emitted by Envoy over UDP and re-exposes them over HTTP in Prometheus format, ready for Prometheus to scrape.
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador-admin
  name: ambassador-admin
spec:
  type: NodePort
  ports:
  - name: ambassador-admin
    port: 8877
    targetPort: 8877
  selector:
    service: ambassador
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ambassador
rules:
- apiGroups: [""]
  resources:
  - services
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["create", "update", "patch", "get", "list", "watch"]
- apiGroups: [""]
  resources:
  - secrets
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ambassador
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ambassador
subjects:
- kind: ServiceAccount
  name: ambassador
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ambassador
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: ambassador
    spec:
      serviceAccountName: ambassador
      containers:
      - name: ambassador
        image: datawire/ambassador:0.21.0
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1
            memory: 400Mi
          requests:
            cpu: 200m
            memory: 100Mi
        env:
        - name: AMBASSADOR_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        livenessProbe:
          httpGet:
            path: /ambassador/v0/check_alive
            port: 8877
          initialDelaySeconds: 3
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /ambassador/v0/check_ready
            port: 8877
          initialDelaySeconds: 3
          periodSeconds: 3
      - name: statsd-sink
        image: datawire/prom-statsd-exporter:0.6.0
      restartPolicy: Always
kubectl apply -f ambassador-rbac.yaml
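To sanity-check the statsd-sink sidecar before wiring up Prometheus, you can hit the exporter directly. The exporter typically serves Prometheus-format metrics over HTTP on port 9102 (the same port the monitoring service later in this tutorial targets). A rough check, substituting your own pod name for the placeholder:

# Find an Ambassador pod (names will differ in your cluster)
$ kubectl get pods -l service=ambassador
# Forward the exporter's HTTP port and fetch the metrics it exposes
$ kubectl port-forward <ambassador-pod-name> 9102:9102
$ curl http://localhost:9102/metrics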
Ambassador is typically deployed as an API Gateway at the edge of your network. We'll deploy a service to map to the Ambassador deployment. Note: if you're not on AWS or GKE, you'll need to update the service below to be a NodePort instead of a LoadBalancer.
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador
  name: ambassador
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 80
    targetPort: 80
  selector:
    service: ambassador
kubectl apply -f ambassador.yaml
You should now have a working Ambassador and StatsD/Prometheus exporter that is accessible from outside your cluster.
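A quick way to confirm Ambassador itself is healthy is its admin port (8877), which serves the liveness and readiness endpoints from the probes above as well as Ambassador's diagnostic UI. For example, with a placeholder pod name:

$ kubectl port-forward <ambassador-pod-name> 8877:8877
$ curl http://localhost:8877/ambassador/v0/check_alive
# The diagnostic overview is served at /ambassador/v0/diag/ in a browser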
Configure Prometheus
We now have Ambassador/Envoy running, along with the Prometheus Operator. How do we hook this all together? Logically, the metrics data flows from Envoy to Prometheus in the following way:

Envoy (statsd over UDP) → statsd exporter (Prometheus metrics over HTTP) → Prometheus (scrape)
So far, we've deployed Envoy and the StatsD exporter, so now it's time to deploy the other components of this flow.
We'll first create a Kubernetes service that points to the statsd exporter. We'll then create a ServiceMonitor that tells Prometheus to add the service as a target.
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador-monitor
  labels:
    service: ambassador-monitor
spec:
  selector:
    service: ambassador
  type: ClusterIP
  clusterIP: None
  ports:
  - name: prometheus-metrics
    port: 9102
    targetPort: 9102
    protocol: TCP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ambassador-monitor
  labels:
    ambassador: monitoring
spec:
  selector:
    matchLabels:
      service: ambassador-monitor
  endpoints:
  - port: prometheus-metrics
kubectl apply -f statsd-sink-svc.yaml
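Because ambassador-monitor is a headless service that selects the Ambassador pods, one way to confirm the wiring is to look at its endpoints and read the ServiceMonitor back as a custom resource:

# Each Ambassador pod IP should be listed on port 9102
$ kubectl get endpoints ambassador-monitor
# The ServiceMonitor is just a custom resource, so kubectl can read it back
$ kubectl get servicemonitor ambassador-monitor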
Next, we need to tell the Prometheus Operator to create a Prometheus cluster for us. The Prometheus cluster is configured to collect data from any ServiceMonitor with the ambassador: monitoring label.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      ambassador: monitoring
  resources:
    requests:
      memory: 400Mi
kubectl apply -f prometheus.yaml
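The operator reacts to this Prometheus resource by creating the actual Prometheus pods (as a StatefulSet) and a prometheus-operated service. A rough way to confirm it did, assuming the labels the operator applies by default:

# The Prometheus custom resource we just created
$ kubectl get prometheus
# The generated pods carry the prometheus=<name> label
$ kubectl get pods -l prometheus=prometheus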
Finally, we can create a service to expose Prometheus to the rest of the world. Again, if you're not on AWS or GKE, you'll want to use a NodePort instead of a LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: LoadBalancer
  ports:
  - name: web
    port: 9090
    protocol: TCP
    targetPort: web
  selector:
    prometheus: prometheus
kubectl apply -f prom-svc.yaml
Testing
We've now configured Prometheus to monitor Envoy, so let's test it out. First, get the external IP address for Prometheus.
$ kubectl get services
NAME                  CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
ambassador            10.11.255.93    35.221.115.102   80:32079/TCP     3h
ambassador-admin      10.11.246.117   <nodes>          8877:30366/TCP   3h
ambassador-monitor    None            <none>           9102/TCP         3h
kubernetes            10.11.240.1     <none>           443/TCP          3h
prometheus            10.11.254.180   35.191.39.173    9090:32134/TCP   3h
prometheus-operated   None            <none>           9090/TCP         3h
In the example above, this is 35.191.39.173. Now, go to http://$PROM_IP:9090 to see the Prometheus UI. You should see a number of metrics automatically populate in Prometheus.
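If the UI loads but no Envoy metrics appear, check that the scrape target was actually picked up. The Status → Targets page in the Prometheus UI shows this, and on Prometheus 2.x you can also query the HTTP API (IP taken from the example output above):

# Lists every target Prometheus is scraping, including the ambassador-monitor endpoint
$ curl http://35.191.39.173:9090/api/v1/targets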
Troubleshooting
If the above doesn't work, there are a few things to investigate:
- Make sure all your pods are running (kubectl get pods)
- Check the logs on the Prometheus pod (kubectl logs $PROM_POD prometheus)
- Check the Ambassador diagnostics to verify Ambassador is working correctly
Get a service running in Envoy
The metrics so far haven't been very interesting, since we haven't routed any traffic through Envoy. We'll use Ambassador to set up a route from Envoy to the httpbin service. Ambassador is configured using Kubernetes annotations, so we'll add the route as an annotation on an httpbin service.
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: httpbin_mapping
      prefix: /httpbin/
      service: httpbin.org:80
      host_rewrite: httpbin.org
spec:
  ports:
  - port: 80
kubectl apply -f httpbin.yaml
Now, if we get the external IP address of Ambassador, we can route requests through Ambassador to the httpbin service:
$ kubectl get services
NAME                  CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
ambassador            10.11.255.93    35.221.115.102   80:32079/TCP     3h
ambassador-admin      10.11.246.117   <nodes>          8877:30366/TCP   3h
ambassador-monitor    None            <none>           9102/TCP         3h
kubernetes            10.11.240.1     <none>           443/TCP          3h
prometheus            10.11.254.180   35.191.39.173    9090:32134/TCP   3h
prometheus-operated   None            <none>           9090/TCP         3h
$ curl http://35.221.115.102/httpbin/ip
{"origin": "35.214.10.110"}
Run the curl command a few times, as shown above. Going back to the Prometheus dashboard, you'll see that a number of new metrics containing httpbin have appeared. Pick any of these metrics to explore further. For more information on Envoy stats, Matt Klein has written a detailed overview of Envoy's stats architecture. If you're interested in setting up a Grafana dashboard, Alex Gervais has published a sample Grafana/Ambassador dashboard.
Conclusion
Microservices are distributed systems, and the key to scaling a distributed system is loose coupling between its components. In a microservices architecture, the most painful source of coupling is often organizational rather than architectural. Design patterns such as the Prometheus Operator make teams more self-sufficient, reducing that organizational coupling and letting them ship faster.
Next Steps
- Learn more about monitoring ingress with Prometheus.
- Need some expert help? Speak with an expert to see how we might be able to help improve your current development workflow.
- Check out Telepresence and Ambassador for more info on each of our open source tools.