In cloud environments, provisioning a readily available network load balancer with Ambassador is the best option for handling ingress into your Kubernetes cluster. When running Kubernetes on a bare metal setup, where network load balancers are not available by default, we need to consider different options for exposing Ambassador.
The simplest way to expose an application in Kubernetes is via a NodePort service. In this configuration, we create the Ambassador service and set `type: NodePort` instead of `LoadBalancer`. Kubernetes will then create a service, assign it a port to be exposed externally, and direct traffic to Ambassador via the defined port.
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: NodePort
  ports:
  - name: http
    port: 8088
    targetPort: 8080
    nodePort: 30036 # Optional: Define the port you would like exposed
    protocol: TCP
  selector:
    service: ambassador
```
NodePort leaves Ambassador isolated from the host network, allowing the Kubernetes service to handle routing to Ambassador pods. You can drop this YAML in to replace the `LoadBalancer` service in the YAML installation guide and use `http://<External-Node-IP>:<NodePort>/` as the host for requests.
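If you also terminate TLS at Ambassador, the same service can expose a second NodePort for HTTPS traffic. A minimal sketch; the HTTPS port numbers here are illustrative assumptions, not values from this guide:

```yaml
ports:
- name: http
  port: 8088
  targetPort: 8080
  nodePort: 30036
  protocol: TCP
- name: https       # Assumed: Ambassador is configured to listen for TLS on 8443
  port: 8443
  targetPort: 8443
  nodePort: 30037   # Assumed: any free port in the cluster's NodePort range works
  protocol: TCP
```

As with the HTTP port, omitting `nodePort` lets Kubernetes pick a free port from its NodePort range automatically.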
When running Ambassador on a bare metal install of Kubernetes, you have the option to configure Ambassador pods to use the network of the host they are running on. This method allows you to bind Ambassador directly to port 80 or 443, so you won't need to specify a port in requests.
This can be configured by setting `hostNetwork: true` in the Ambassador deployment. `dnsPolicy: ClusterFirstWithHostNet` will also need to be set to tell Ambassador to use KubeDNS when attempting to resolve mappings.
```diff
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      service: ambassador
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
      labels:
        service: ambassador
        app.kubernetes.io/managed-by: getambassador.io
    spec:
+     hostNetwork: true
+     dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: ambassador
      containers:
      - name: ambassador
        image: docker.io/datawire/ambassador:1.5.5
        resources:
          limits:
            cpu: 1
            memory: 400Mi
          requests:
            cpu: 200m
            memory: 100Mi
        env:
        - name: AMBASSADOR_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        livenessProbe:
          httpGet:
            path: /ambassador/v0/check_alive
            port: 8877
          initialDelaySeconds: 30
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /ambassador/v0/check_ready
            port: 8877
          initialDelaySeconds: 30
          periodSeconds: 3
      restartPolicy: Always
```
This configuration does not require a defined Ambassador service, so you can remove that service if you have defined one.
Note: Before configuring Ambassador with this method, consider the functionality you lose by bypassing the Kubernetes service: only one Ambassador pod can bind to port 8080 or 8443 per node, and you lose any load balancing typically performed by Kubernetes services.
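Because only one host-network Ambassador pod can bind to those ports per node, one common way to run exactly one instance on every node is a DaemonSet instead of a Deployment. A minimal sketch, assuming the same labels, service account, and image as the Deployment above; trim or extend the container spec to match your install:

```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ambassador
spec:
  selector:
    matchLabels:
      service: ambassador
  template:
    metadata:
      labels:
        service: ambassador
    spec:
      hostNetwork: true                     # Bind directly to the node's network
      dnsPolicy: ClusterFirstWithHostNet    # Still resolve cluster services via KubeDNS
      serviceAccountName: ambassador
      containers:
      - name: ambassador
        image: docker.io/datawire/ambassador:1.5.5
```

The DaemonSet scheduler places one pod per node, so every node's ports 8080/8443 are served by a local Ambassador; clients still need an external mechanism (for example, DNS round-robin across node IPs) to spread traffic between nodes.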