Scalability is a great reason to move to Kubernetes, but it’s far from the only one.
When you think of companies that run Kubernetes, some really big names probably come to mind. Kubernetes has become synonymous with scale, and rightfully so. But scale is just one benefit of running on Kubernetes with Ambassador Edge Stack, and it pays to know the other advantages long before you need to handle hundreds of requests per second.
Running Ambassador Edge Stack on Kubernetes also puts a set of modern, powerful tools within reach, creating an environment that is built to integrate. In this short piece, we’d like to walk through those benefits through the lens of a smaller company: one where the teams aren’t huge yet, but there are established procedures around building and shipping releases and fixes.
There’s a great incentive to adopt (and stick with) CI/CD workflows.
For teams to be able to work on things safely, and often independently, a single source of truth for the way the current environment is deployed is absolutely critical. Tracking this in a versioned central repository ensures that everyone has transparency into what’s going on, and changes can be made quickly and with confidence.
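As a sketch of what that single source of truth can look like, here’s a minimal Ambassador Mapping manifest of the kind you might keep in a versioned repository (the `quote-backend` and `quote` names are hypothetical placeholders):

```yaml
# A hypothetical Mapping, tracked in git alongside the rest of the
# deployment configuration. Applying the repository's manifests with
# kubectl makes the repo itself the record of how routing is set up.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/   # requests under /backend/ ...
  service: quote      # ... are routed to the "quote" Service
```

Because the manifest is declarative, a pull request against this file is both the proposed change and the audit trail of who changed what, and when.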
CI/CD workflows become streamlined with tools like the AES Delivery Accelerator module. These act as guard rails that you appreciate, rather than yet more steps to take that still leave gaps in how you stand up and ship services.
Ideas become much cheaper to chase and explore.
While being able to stand up test environments and playgrounds safely and quickly, without having to involve a bunch of people, is a major ingredient in meeting shipping goals, the ability to just throw things away and start over without the fear of sunk costs can be equally powerful.
You can explore radical and potentially breaking ideas easily, without having to worry about putting everything back together - just delete the sandbox and re-create it if you take a wrong turn. If you’ve been thinking about splitting up an application into multiple services with different resource and environment requirements, this is the kind of platform automation that you need.
With incoming requests being managed by Ambassador, you can handle all the logic of what code gets selected to handle which request centrally, based on patterns and rules that you establish. This lets you ease new code in as you test it, and then ultimately decide whether or not it’s ready for production.
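To make that concrete, here’s a hedged sketch of one such rule: an Ambassador Mapping that uses the `weight` field to ease a small slice of traffic over to new code (the service names are hypothetical):

```yaml
# Send roughly 10% of /my-service/ traffic to the v2 deployment.
# The remaining traffic falls through to the existing Mapping for
# the same prefix, which still points at v1.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: my-service-canary
spec:
  prefix: /my-service/
  service: my-service-v2
  weight: 10
```

Raising the weight gradually, while watching the new version’s behavior, is how you “ease new code in” before deciding it’s ready for production.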
Self-service becomes a safe reality.
You want to be able to try new things, iterate quickly, and build on what works. It’s difficult to count the ideas that never made an impact simply because they were never tested - often because the work of bringing lots of stakeholders to the table outweighed what might turn out to be just a whim.
Self-service means being able to spin those ideas up quickly in a test environment, validate them, and then decide which way to go based on what you observe.
Using Kubernetes with Ambassador Edge Stack, developers can spin up a test bed, decide how they want to route traffic to it, manage any request re-writes or redirects, set up rate limits and everything else they need to move from a dancing skeleton or proof of concept to actual services that people depend on.
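As an illustrative sketch (all names here are hypothetical), the routing and rewriting side of such a test bed can be expressed declaratively in a single Mapping:

```yaml
# Route /testbed/ traffic to an experimental service, rewriting the
# path so the service sees requests at its own root.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: testbed
spec:
  prefix: /testbed/
  rewrite: /
  service: testbed-service
```

Concerns like rate limiting and authentication are layered on through their own Ambassador Edge Stack resources rather than baked into the service, so the test bed stays disposable.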
When a team has a new service ready to go, the integration conversation becomes significantly easier, because the idea is proven. Additionally, the new service can be ready to go automatically with documentation through the Ambassador Developer Portal.
Microservices start to become an interesting idea.
Code isn’t always executed proportionately. Developers know that features can drive funnels that result in 10% of the code running 90% of the time (or 90% of the code running 10% of the time). This means that loading even a minimal instance of an entire framework to run a couple of classes and models for a very popular API probably doesn’t make a whole lot of sense. In fact, you might feel like you could re-implement those bits in a compiled language to save significant overhead for every request. Speed is definitely a feature, and every millisecond matters.
You grab the functional part of the code, you grab that lightweight C++ web library you’d been eyeing, and you quickly stand up something that can take requests. Then you iterate as you apply more and more stress to it, and ultimately you change the ingress rules to send API traffic over to the service you just wrote, which alleviates real stress on the main website and customer portal.
And, you guessed it: auth, rate limiting, and whatever else needs to happen to the request and headers in order for your service to reply is handled by the platform - there’s no need to bloat your new code with additional logic.
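Shifting that traffic is, conceptually, one more Mapping (the prefix and service names below are hypothetical):

```yaml
# Point API traffic at the new, lightweight service; the rest of
# the site continues to be served by the existing application.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: api-fast-path
spec:
  prefix: /api/
  service: api-cpp-service
```

The new service stays small precisely because the edge handles the cross-cutting request logic for it.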
All of this is free to explore, right now.
You may not need to scale to hundreds or thousands of requests per second (though, arguably, that’s a great problem to have!), but you can probably make great use of the many features and workflows that come naturally with operating on Kubernetes with Ambassador Edge Stack.
Want to dive in? We’ve got a quick start guide that can get you up and running quickly. We also encourage you to try our K8s Initializer, where you can choose from a bunch of great components and automatically generate the configurations needed to launch your cluster by pasting a few commands into your terminal.