Microservices Essentials: The Key to High Velocity Software Development
August 4, 2016 | 14 min read
Companies thriving in the new world order have technology as a core competency. They build complex cloud applications. They constantly bring new capabilities and features to the market. And despite the constant iteration and updates to their cloud application, their software is rock-solid reliable. How do they achieve such agility and reliability?
Over the past few years, Amazon, Uber, Airbnb, Netflix, Yelp, and many other industry disruptors adopted a new paradigm for developing cloud applications – microservices. The velocity that microservices gives these disruptors is even raising software architecture onto board agendas.
Whether the term microservices is vaguely familiar or something you haven’t encountered yet, this article will cover what it is, why it matters, and what will change in your company when you adopt it.
What are microservices?
Traditionally, cloud applications were built as a single large application (popularly known as the monolith). Some describe microservices as a splintering of monolithic software applications into smaller pieces. That is a true but incomplete explanation that misses the essential benefit of microservices – each of your development teams can work on an independently shippable unit of code, called a microservice. It is better to describe microservices as ‘an architecture for the distributed development of cloud applications.’
For example, the original Amazon.com was a monolithic application, consisting of millions of lines of C++. Over time, Amazon has split the functionality of that single application into smaller services, so there is a separate service for recommendations, for payment, for search, and so forth. In turn, each of these separate services may consist of dozens of smaller microservices.
In the original Amazon architecture, a bug fix to the recommendation service would require changing some C++ in the monolithic application, waiting for other groups to complete their respective changes to the monolithic application, and testing the entire application for release. In the microservices architecture, the recommendation development team can make changes to their microservice – and release it without waiting to coordinate with the other feature teams.
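To make "independently shippable unit of code" concrete, here is a minimal sketch of a hypothetical recommendations microservice using only Python's standard library. The service name, port, catalog, and logic are all illustrative, not Amazon's actual system:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "recommendations" microservice -- one small, independently
# shippable unit. All names and logic here are illustrative placeholders.
CATALOG = ["book-101", "book-202", "book-303"]

def recommend(user_id):
    # Placeholder logic; a real service would consult a model or database.
    start = sum(map(ord, user_id)) % len(CATALOG)
    return (CATALOG[start:] + CATALOG[:start])[:2]

class RecommendationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /recommendations?user=alice
        user = self.path.rsplit("=", 1)[-1]
        body = json.dumps({"user": user, "items": recommend(user)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    # The owning team can fix recommend() and redeploy this one process
    # without coordinating with payment, search, or any other service.
    HTTPServer(("localhost", port), RecommendationHandler).serve_forever()
```

The point of the sketch is the boundary, not the logic: everything the recommendation team needs to change and ship lives inside this one deployable unit.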
Yelp has also adopted a microservices architecture, consisting of hundreds of services. Just loading the Yelp homepage invokes dozens of those microservices.
There are six major benefits to microservices:
- Increased agility. Microservices empower development organizations to respond much more quickly to market and customer feedback. Whether it’s a game-changing feature or a tweak that makes an existing feature markedly more usable, its release will no longer be delayed by the schedule of a single release train. Instead, each microservice can be released independently.
- Organizational scale. With a monolith, the risk of breaking each other’s code rises dramatically with each additional developer. This slows development, as additional testing, debugging, and integration is required to prevent inadvertent errors. In a microservices architecture, each team works on an independent code base, so bugs are isolated to a single microservice.
- Development efficiency. Monolith development teams are constrained to a common technology stack and process. Microservices architecture enables independent teams to choose the right processes and technology for a given service. For example, the PCI standard requires that any code base that handles credit card data be subject to compliance audits. With payment processing handled by a single or a set of microservices, the amount of code that is in scope is substantially reduced. Or a recommendation engine might be written in Python to use the TensorFlow machine learning library, while other services are written in Java.
- Cost-efficient scaling. In a traditional monolith, the entire application must be scaled when it reaches its limits. For example, if the application can no longer keep up with load, new servers must be added that are capable of running additional instances of the entire monolith. In a microservices architecture, each microservice can be scaled individually, so new servers are added only for the specific microservice that is the bottleneck. This more granular approach to scaling enables a more cost-efficient compute model.
- Faster onboarding. With microservices, new engineers can safely start coding on a small microservice. There’s no need for an engineer to learn a monolithic code base just to fix a bug.
- Attract more talented engineers. Microservices enables you to incrementally adopt and test new technologies – and good engineers want to work with the latest technology.
Microservices are not for everyone
While there are many benefits to microservices, microservices is not a fit for everyone. In particular, organizations with small engineering teams should consider a monolith-first approach. In these organizations, the development team does not have the capacity to independently iterate on multiple features at the same time. Adopting a monolith first, and adding microservices as the team grows, is typically a better strategy.
The impacts of adopting a microservices architecture
Adopting a microservices architecture yields significant benefits, but also drives considerable changes. There are three types of changes to anticipate: culture, deployment infrastructure, and developer infrastructure.
Teams gain greater autonomy
Perhaps the biggest change in adopting microservices is cultural. In order to increase agility and efficiency, microservices development teams make decisions that were previously out of their control, such as ship dates. Many organizations also empower teams to customize their QA strategy or technology stack.
This reallocation of decision-making authority requires changes in people and education. Handing off more responsibilities for ship-date decisions as well as technology and process selection to teams requires at least one team member capable of making these decisions or some up-front investment in training and mentoring to develop these skills.
This is not to say that technology stack standards or architectural review boards will disappear and chaos reign. But flexibility in technology and processes will increase and the review process will be more open to team needs.
The role of VP of Engineering will also evolve significantly. They will orchestrate the cultural change along with changes in job descriptions and training. They will evaluate and implement the tools and infrastructure described below. And they will revamp metrics: engineering throughput will no longer be measured by story points per sprint alone but also by the speed at which new and updated features are deployed, and application reliability will no longer be measured simply by Mean Time Between Failures but also by Mean Time To Recover for each microservice.
Deployment infrastructure becomes fully automated
At scale, a cloud application may have hundreds of individual microservices, each of which may run as multiple instances (for availability and scalability). Moreover, these microservices will be independently updated by their respective development teams. The ballooning number of moving parts and the frequency of updates quickly make a fully automated deployment workflow and infrastructure essential.
While there are many technology choices to fully automate deployment workflow, there are a few common capabilities in the deployment infrastructure that we’ve observed in all successful adopters of microservices.
- Full automation from code to customer. Typically, once a developer commits code to a source repository, a push-button process automatically takes the latest source code, builds it, packages it into a deployment artifact such as a container or AMI, and then deploys the artifact into staging or production. This is often referred to as Continuous Delivery.
- Elastic scaling. Microservices by nature are fairly ephemeral – new versions get deployed, old versions are retired, and new instances are added or removed based on utilization. A deployment infrastructure such as Amazon’s Elastic Compute Cloud, Docker Datacenter, or Kubernetes that supports elastic scaling is essential to support these use cases.
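The per-service scaling decision described above can be sketched in a few lines. This is an illustrative rule of the kind a platform such as Kubernetes or EC2 Auto Scaling applies per service; the function name and percentage-based math are assumptions for the example, not any platform's actual API:

```python
# Hypothetical autoscaling rule for one microservice: instances are added
# only where the bottleneck is, and other services are left untouched.
# Utilization is expressed in whole percent to keep the math integral.
def desired_instances(current, utilization_pct, target_pct=60, minimum=2):
    """Instance count needed to bring average utilization near target_pct."""
    if current == 0:
        return minimum
    needed = -(-current * utilization_pct // target_pct)  # ceiling division
    return max(minimum, needed)
```

For example, a busy recommendations service running 4 instances at 90% utilization grows to 6 instances, while an idle service shrinks toward the floor of 2 -- and no other microservice is touched.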
While many other capabilities, such as automated regression testing, can be added to the deployment infrastructure, we’ve seen organizations successfully adopt microservices by investing in just these two: full automation and elastic scaling.
Developer infrastructure for the microservices network
The last challenge in adopting microservices is perhaps its most poorly understood aspect. If a monolithic application resembles a house with different features in separate rooms, a microservices application is more like a neighborhood of houses, each hosting a microservice. Communication in the microservices neighborhood requires a different paradigm: like houses speaking by phone (instead of people yelling downstairs), microservices communicate over a network. This requires a set of common services and enabling technology. These tools and technologies are used by the developers who code microservices, and are distinct from the deployment infrastructure used by DevOps engineers.
*For a deeper discussion of the developer infrastructure needed to support microservices, please see the end of this article.
Eat or be eaten
Given that the benefits of microservices require new investment and trigger considerable change, you may ask, as a VP of Engineering was recently asked, “Why are you doing microservices?” He replied, “Because if we don’t do it, we will die from the competition moving faster.”
There is incredible momentum in adopting microservices because of the benefits around agility, efficiency, onboarding, and recruiting. Dozens of companies are investing in the training, tools, and infrastructure to simplify the adoption of microservices. The number of developers who have experience in adopting microservices is growing. The effort to adopt microservices is rapidly shrinking, and will continue to go down over time.
Videos from the first Microservices Practitioner Summit featuring Netflix, Uber, and others are also online at microservices.com.
*More about developer infrastructure
The basic use case for developer infrastructure is to provide a common protocol so that all microservices can connect to each other. Libraries that support these protocols need to be available in every technology stack used by the organization. HTTP is a popular choice of protocol in monolithic architectures, but the synchronous nature of HTTP erodes reliability as you add more microservices to your network. (Microservices connected via HTTP are analogous to Christmas tree lights wired in series: if one bulb goes out, the entire strand goes dark. More sophisticated protocols offer the ability to wire your services in parallel.)
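The Christmas-lights analogy can be made quantitative. If every synchronous hop succeeds independently with probability p, a request that must traverse n services in series succeeds with only p^n. The numbers below are illustrative and assume independent failures:

```python
# With serial (synchronous) wiring, availability compounds multiplicatively:
# every hop that must succeed multiplies in its own failure probability.
def chain_availability(per_service, hops):
    return per_service ** hops

# "Three nines" per service looks great in isolation (99.9%)...
single = chain_availability(0.999, 1)
# ...but a request crossing 30 services in series drops to roughly 97%.
deep = chain_availability(0.999, 30)
```

Timeouts, retries, and asynchronous fallbacks are what “wiring in parallel” buys you: one dark bulb no longer darkens the whole strand.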
In addition, in a microservices architecture a typical service request spans multiple microservices. For example, the process of loading all the tweets in your Twitter feed invokes advertising-related microservices, microservices that display a tweet (including adding hashtag links and images), and microservices that display tweets from the people you follow. Specialized tools designed to trace errors and performance to a specific microservice in this scenario are important to ensure developer productivity.
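A common tracing technique for this fan-out scenario (it is what tools like Zipkin, which grew out of Twitter, were built around) is to mint a correlation ID at the edge and stamp it on every downstream call. The sketch below uses hypothetical service names and an in-memory list standing in for a real tracing backend:

```python
import uuid

TRACE_LOG = []  # stand-in for a real tracing backend

def call_service(name, trace_id):
    # Stand-in for an HTTP call; a real client would pass the ID as a header.
    TRACE_LOG.append((trace_id, name))
    return {"service": name, "trace_id": trace_id}

def load_feed(user):
    # One user request fans out to several microservices, all stamped with
    # the same trace ID, so a slow or failing span can be attributed to a
    # specific service afterwards.
    trace_id = uuid.uuid4().hex
    for svc in ("timeline", "ads", "media"):
        call_service(svc, trace_id)
    return trace_id
```

Because every span carries the same ID, a developer can pull all the work done for one request out of the logs and see exactly which service was slow or failed.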
There are two fundamental capabilities required of microservices developer infrastructure:
- Enable loosely coupled services. Services need well-defined protocols to communicate, and dependencies need to be contained within each microservice. Ben Christensen, one of the architects of the Netflix microservices architecture, speaks about this challenge in his talk Don’t Build A Distributed Monolith. Loosely coupled services let you update the code for a single microservice without affecting any other services that depend on it. Your developer infrastructure needs to provide implementations of these protocols across all your microservices.
- Application resiliency. When a microservice is unavailable for whatever reason – software bug, network outage, machine failure – the entire cloud application must continue to function and recover gracefully with minimal human intervention. Techniques for service availability (e.g., load balancing), service isolation (e.g., circuit breakers), and service recovery (e.g., rollback) are an essential part of the developer toolkit.
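As one concrete instance of service isolation, a circuit breaker stops a caller from piling requests onto a dead dependency. The sketch below is a deliberately minimal illustration; it omits the half-open recovery timer that a production breaker such as Netflix’s Hystrix adds. After a run of consecutive failures the circuit “opens” and calls fail fast with a fallback:

```python
class CircuitBreaker:
    """Toy circuit breaker: opens after `threshold` consecutive failures."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, func, fallback):
        if self.open:
            # Fail fast: don't let a dead dependency tie up the caller.
            return fallback()
        try:
            result = func()
            self.failures = 0  # a success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            return fallback()
```

The fallback (a cached answer, a default, an empty list) is what lets the overall application keep functioning while one microservice is down.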
This post originally appeared at forentrepreneurs.com.