
Is Serverless Architecture Right For You?

Kay James
May 21, 2024 | 20 min read

We’re in the age of serverless. Serverless functions, serverless storage, serverless gateways, serverless everything.

Serverless computing has revolutionized the way we build and deploy applications. In 2023, the global market for serverless architecture was over $15 billion. This will only grow as more and more use cases for this technology are found.

But even as serverless grows and benefits organizations, that doesn’t automatically mean it will work for you. It’s common for developers to jump on the latest technologies. Serverless works well for the specific use cases it was built for, but sometimes organizations can waste a lot of time and resources going down the serverless rabbit hole only to find it doesn’t fit what they are trying to do.

The Benefits of Serverless Architecture

Let's start with why you might choose a serverless architecture.

The first reason is that serverless architectures are inherently scalable and elastic. Through serverless compute services like AWS Lambda, Azure Functions, or Google Cloud Functions, they automatically scale up or down based on the incoming workload without manual intervention.

[Figure: benefits of serverless architecture (Source: Datadog)]

These services dynamically allocate resources to handle incoming requests, ensuring that your application can handle sudden spikes in traffic or usage. Each function is triggered by an event, such as an HTTP request, and scales automatically based on the incoming workload. This eliminates the need for over-provisioning resources and allows your application to scale precisely to meet the demand, providing a highly responsive and efficient system.
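The event-driven model described above can be sketched as a minimal handler in the style of AWS Lambda. The event shape mimics an API Gateway HTTP request; the function name and fields are illustrative, not a real deployment:

```python
import json

def handler(event, context=None):
    """Minimal event-driven function: triggered per request, no server to manage.

    The platform invokes one instance per incoming event and scales the
    number of concurrent instances with the load.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Each invocation is independent; the platform may run many copies in parallel.
```

Because the handler holds no server state, the platform is free to run zero copies when idle and hundreds during a traffic spike.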

This then leads to cost-effectiveness. With serverless, you only pay for the actual execution time and resources consumed by your application. There is no need to pay for idle server time or unused capacity. Serverless platforms charge based on the number of requests, execution duration, and memory usage, allowing for fine-grained billing.

This pay-per-use model can lead to significant cost savings, especially for applications with variable or unpredictable workloads. It eliminates the need to invest in and maintain expensive infrastructure, making it particularly attractive for startups and small to medium-sized businesses.
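The pay-per-use billing described above can be estimated with simple arithmetic: a per-request fee plus a fee per GB-second of compute. The default prices below are placeholders in the ballpark of published serverless rates, not a quote from any provider:

```python
def monthly_serverless_cost(requests, avg_ms, memory_mb,
                            price_per_request=0.20 / 1_000_000,
                            price_per_gb_second=0.0000166667):
    """Estimate a pay-per-use bill: request fees + GB-seconds of compute.

    The two default prices are illustrative placeholders; substitute your
    provider's actual rates.
    """
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    return requests * price_per_request + gb_seconds * price_per_gb_second

# 2M requests/month at 120 ms average with 512 MB memory:
cost = monthly_serverless_cost(2_000_000, 120, 512)
```

At these assumed rates the bill comes to a few dollars a month; the same traffic served from an always-on server would cost the same whether or not the requests arrive.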

Cost-effectiveness also comes from reduced operational overhead. Serverless architectures abstract away the underlying server infrastructure, relieving developers and operations teams of the burden of managing servers. The cloud provider handles provisioning, scaling, patching, and maintenance, allowing teams to focus on writing and deploying code.

It can also lead to increased development speed. By abstracting away infrastructure concerns, serverless enables faster development cycles and quicker time-to-market. Developers build applications by composing small, event-driven functions, each encapsulating a piece of business logic that can be independently developed, tested, and deployed. This modular approach promotes code reusability and allows for parallel development, since different team members can work on separate functions at the same time.
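That composition style can be sketched with two single-purpose functions. In a real deployment each would be triggered by its own event (a queue message, an HTTP call); here they are chained directly, and all names are illustrative:

```python
def validate_order(event):
    """One function, one task: reject malformed orders."""
    order = event["order"]
    if not order.get("items"):
        raise ValueError("order has no items")
    return order

def price_order(order, prices):
    """A second, independently deployable function: compute the total."""
    return sum(prices[item] * qty for item, qty in order["items"].items())

# In production these would communicate via events; here we chain them directly.
order = validate_order({"order": {"items": {"widget": 3}}})
total = price_order(order, {"widget": 2.5})
```

Each function can be tested and redeployed on its own, which is exactly what enables the parallel development described above.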

The Limitations of Serverless Architecture

One of the most commonly cited drawbacks of serverless functions is cold start latency. When a function has not been invoked for a certain period, the cloud provider may release its allocated resources. Consequently, when a new request comes in, the function needs to be initialized again, leading to increased latency. This cold start latency can be particularly noticeable for tasks with significant dependencies or requiring extensive setup.
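A common way to soften cold starts is to do expensive initialization at module scope, which runs once per container rather than once per request. This is a sketch of the pattern; the "expensive resource" here is a stand-in for a real database client or model load:

```python
import time

# Module-level work runs once per container (the cold start), not per request.
_INIT_STARTED = time.perf_counter()
EXPENSIVE_RESOURCE = {"db": "connected"}   # placeholder for a real client/model
_INIT_SECONDS = time.perf_counter() - _INIT_STARTED

def handler(event, context=None):
    # Warm invocations reuse EXPENSIVE_RESOURCE instead of re-initializing,
    # so only the first request on a fresh container pays the setup cost.
    return {"resource": EXPENSIVE_RESOURCE["db"], "init_s": _INIT_SECONDS}
```

The setup cost still exists, but it is amortized across every warm invocation the container serves.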

Cold start latency also varies by runtime. In its State of Serverless report, Datadog found that Java runtimes can have cold starts roughly 2.7x slower than those of Python or Node.js serverless functions.

Serverless functions can have other performance issues. Functions typically have a maximum execution time limit, varying depending on the cloud provider. For example, AWS Lambda functions have a maximum execution time of 15 minutes. This limit can be restrictive for long-running or compute-intensive tasks. Additionally, serverless functions may have limitations on the package size (e.g., 100MB uncompressed for Google Cloud Functions) and the amount of memory (e.g., 1.5GB for Azure Functions) they can utilize. These constraints can impact the performance of specific workloads and may require architecting the application differently to work within these limitations.
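One way to work within an execution time limit is to watch the remaining budget and hand leftover work to a follow-up invocation. The `remaining_ms_fn` callback below stands in for a platform hook such as Lambda's `context.get_remaining_time_in_millis()`; the names are illustrative:

```python
def process_in_budget(items, work_fn, remaining_ms_fn, safety_ms=1000):
    """Process items until the time budget is nearly spent.

    Returns (results, leftover_items); the leftovers can be re-enqueued
    so a fresh invocation finishes the batch.
    """
    done = []
    for i, item in enumerate(items):
        if remaining_ms_fn() <= safety_ms:
            return done, items[i:]
        done.append(work_fn(item))
    return done, []

# Fake clock: pretend only 500 ms remain before the fourth item.
budget = iter([5000, 4000, 3000, 500])
done, leftover = process_in_budget([1, 2, 3, 4], lambda x: x * x, lambda: next(budget))
```

The safety margin leaves time to persist the leftovers before the platform forcibly terminates the function.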

These issues are compounded by the fact that debugging and monitoring serverless applications are more complex than for traditional server-based applications. Since serverless functions are ephemeral and only run when triggered, traditional debugging techniques may not apply. Developers rely on logs, traces, and distributed tracing to troubleshoot issues, and the distributed nature of serverless architectures can make gaining a holistic view of the application's behavior challenging. Serverless platforms provide monitoring and logging capabilities, but they may not be as comprehensive as the tools available for server-based applications.
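Because each invocation is short-lived, structured logs with a correlation id are what let you stitch a request's trail back together afterwards. A minimal sketch, assuming a `request_id` field in the incoming event (an illustrative name, not a platform convention):

```python
import json
import logging
import uuid

logger = logging.getLogger("orders-fn")

def handler(event, context=None):
    """Emit structured, correlated logs from an ephemeral function."""
    # Reuse an upstream id when present so logs from chained functions join up.
    request_id = event.get("request_id") or str(uuid.uuid4())

    def log(msg, **fields):
        # One JSON object per line: easy for log aggregators to parse and filter.
        logger.info(json.dumps({"request_id": request_id, "msg": msg, **fields}))

    log("start", path=event.get("path"))
    result = {"ok": True}
    log("done", status=200)
    return {"request_id": request_id, **result}
```

Returning the id to the caller lets the next function in the chain carry the same correlation id forward.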

The final two issues you see come down to control:

  1. Limited control over infrastructure: This is the inverse of the reduced operational overhead benefit. Serverless architectures abstract away the underlying infrastructure, meaning developers have limited control over the environment in which their functions run. This lack of control can be challenging when dealing with specific performance requirements, such as CPU or memory-intensive tasks. Additionally, serverless platforms may have restrictions on the runtime environments, supported languages, and available libraries, which can limit flexibility in certain scenarios.
  2. Vendor lock-in: Serverless architectures depend heavily on the cloud provider's ecosystem and proprietary services. As a result, migrating serverless applications from one cloud provider to another can be challenging. Each provider has its own serverless services, APIs, and tooling, which can lead to vendor lock-in.

Obviously, you have to weigh these limitations against the benefits of serverless architecture and consider them in the context of your specific application requirements. While serverless can be a powerful approach for particular use cases, it won't be the best fit for every scenario.

5 Cases Where Serverless is Not Ideal

Let's explore some situations where serverless might not be the ideal choice:

1. Long-running tasks

Function as a Service (FaaS), the core compute model of serverless architecture, lets developers deploy individual functions without managing the underlying infrastructure, but those functions are designed to execute quickly and have a maximum execution time limit, such as AWS Lambda's 15-minute cap. If your application requires tasks that run longer than that limit, serverless may not be suitable. Long-running tasks such as complex data processing, video encoding, or scientific simulations are a poor fit. For example, consider a video encoding application that must process high-resolution videos and apply complex transformations.

The encoding process can take several hours, making it unsuitable for serverless functions with limited execution time. These types of processing are better suited for traditional server-based architectures or distributed computing frameworks like Apache Spark.

Serverless platforms enforce these time limits to ensure efficient resource utilization and prevent individual functions from monopolizing resources. Exceeding the execution time limit will result in the function being forcibly terminated. For instance, if you have a serverless function that needs to process large datasets or perform complex mathematical computations that take hours, it would not be feasible to implement it within the serverless execution time constraints.
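When such a job must run on serverless anyway, one workaround is to make it resumable: each invocation advances the job by a checkpoint and persists its state (in a queue message or workflow engine) for the next invocation. A sketch with illustrative field names:

```python
def encode_chunk(state, chunk_size=10):
    """One invocation of a resumable job: advance by up to chunk_size frames.

    `state` would be persisted between invocations (e.g. in a queue message
    or workflow state); here it is just a dict.
    """
    start = state["next_frame"]
    end = min(start + chunk_size, state["total_frames"])
    # ... encode frames [start, end) within this invocation's time limit ...
    return {**state, "next_frame": end, "done": end == state["total_frames"]}

# Drive a 25-frame job to completion as a sequence of short invocations.
state = {"next_frame": 0, "total_frames": 25, "done": False}
invocations = 0
while not state["done"]:
    state = encode_chunk(state)
    invocations += 1
```

This adds real complexity (checkpointing, retries, ordering), which is why long-running jobs are often simply better served by traditional servers or frameworks like Apache Spark.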

2. High-performance computing

Serverless functions are not optimized for high-performance computing (HPC) workloads that require extensive computational power or specialized hardware.

If your application demands consistently high performance, such as real-time gaming, high-frequency trading, or machine learning model training, serverless may not provide the necessary performance characteristics. Consider a real-time multiplayer game that requires low-latency communication and high-performance processing for game physics and player interactions; serverless functions are unlikely to deliver the responsiveness such an application needs.

Serverless platforms typically do not offer direct access to high-performance hardware, such as GPUs or FPGAs, which are often required for computationally intensive tasks. Additionally, the inherent latency introduced by the serverless architecture, such as cold starts and network overhead, can impact the performance of latency-sensitive applications.

3. Predictable and consistent workloads

Serverless architecture is ideal for applications with variable and unpredictable workloads. However, serverless may not be the most cost-effective choice if your application has a predictable and consistent workload. With serverless, you pay for each function invocation and the associated resources consumed. If your application has a steady and predictable traffic pattern, using traditional server-based architectures with pre-provisioned resources might be more cost-efficient.

An example is a backend API service for an enterprise application with a consistent and predictable traffic pattern, serving a fixed number of clients with known request volumes. In this case, monitoring usage and provisioning dedicated servers with appropriate capacity may be more cost-effective than serverless functions.
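The break-even point in that comparison is simple to compute: divide the fixed monthly server cost by the per-invocation cost of the serverless equivalent. Both inputs below are illustrative placeholders, not real provider prices:

```python
def breakeven_requests_per_month(server_monthly_cost, cost_per_invocation):
    """Requests/month above which a fixed server beats per-call billing.

    Plug in your own provider's numbers; the defaults in the example
    call below are illustrative only.
    """
    return server_monthly_cost / cost_per_invocation

# e.g. a $50/month server vs. $0.0000025 per invocation
# (request fee plus compute for a short function):
threshold = breakeven_requests_per_month(50.0, 0.0000025)
```

Above the threshold, steady traffic makes the pre-provisioned server cheaper; below it, pay-per-use wins. Real comparisons should also fold in the operational cost of running the server.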

4. Complex and resource-intensive applications

Serverless functions have limitations on package size, memory usage, and execution time. If your application is complex and requires extensive resources, serverless may not be the best fit. Applications with large codebases, numerous dependencies, or resource-intensive tasks may exceed the limits imposed by serverless platforms. Traditional server-based architectures or containerization technologies like Kubernetes may be more suitable in such cases.

Serverless platforms restrict the package size of deployed functions to optimize performance and reduce cold start times. Consider a complex data processing pipeline that involves multiple stages, pulls in large libraries, and requires gigabytes of memory to process big datasets. Its resource requirements may exceed the limits of serverless platforms, making it better suited to a cluster of servers or a distributed processing framework.

5. Regulatory and compliance requirements

Some industries have strict regulatory and compliance requirements that may be challenging to meet with serverless architectures. For example, a healthcare application that processes patient data needs to comply with HIPAA regulations. While serverless platforms may provide HIPAA-compliant services, developers must ensure that their application code and data handling practices adhere to the required security and privacy standards, which can be complex and require specialized expertise.

Serverless platforms operate on a shared responsibility model: the cloud provider secures the underlying infrastructure, while developers are responsible for the security of their application code and data. Compliance with HIPAA, PCI DSS, or GDPR may require additional measures, such as data encryption, access controls, and auditing, which developers must implement and manage themselves.
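One small example of such an application-side control is pseudonymizing direct identifiers with a keyed hash before data leaves your trust boundary. This is a sketch of one technique, not a complete HIPAA or GDPR solution; the key handling here is deliberately simplified:

```python
import hashlib
import hmac

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed HMAC-SHA256 digest.

    Without the key, the token cannot be reversed or linked back to the
    patient; with it, the same id always maps to the same token, so
    records can still be joined.
    """
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()

# In production the key would live in a secrets manager and be rotated.
key = b"example-key-store-in-a-secrets-manager"
token = pseudonymize("patient-12345", key)
```

Controls like this live in your code, not the platform's, which is exactly the developer side of the shared responsibility model.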

Alternatives to Serverless Architecture

The most obvious alternative to serverless architecture is a traditional server-based architecture. Traditional server-based architecture involves deploying and running applications on a physical server or virtual servers you manage and maintain. This approach gives you complete control over the underlying infrastructure, including the operating system, runtime environment, and system resources.

Three-Tier Architecture

One typical pattern is the three-tier architecture, which consists of a presentation tier (frontend), an application tier (backend), and a data tier (database). Each tier runs on dedicated servers, and you are responsible for provisioning, scaling, and managing these servers. Based on your application's requirements, you can choose the appropriate server configurations, such as CPU, memory, and storage.

Hybrid Approaches

Hybrid approaches combine serverless architecture with other computing models to leverage the strengths of each. This allows you to use serverless for certain parts of your application while using traditional servers or containers for different components requiring more control or specific requirements.

For instance, you can use serverless functions for event-driven and scalable tasks, such as image processing or data transformations, while using containers or virtual machines for stateful components or long-running processes. This hybrid approach allows you to optimize costs, performance, and flexibility based on the specific needs of different parts of your application.
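The routing decision in a hybrid setup can be as simple as a dispatch rule: short, bursty, stateless work goes to functions, while long-running or hardware-bound work goes to containers or VMs. The threshold and labels below are illustrative:

```python
MAX_SERVERLESS_SECONDS = 15 * 60  # e.g. a Lambda-style execution cap

def route_task(estimated_seconds, needs_gpu=False):
    """Hybrid dispatch sketch: pick a compute target per task.

    Long-running or GPU-bound tasks go to a container fleet; everything
    else rides the serverless tier. Thresholds are illustrative.
    """
    if needs_gpu or estimated_seconds >= MAX_SERVERLESS_SECONDS:
        return "container"
    return "serverless"
```

In practice the "container" branch would enqueue the task for a worker fleet, but the shape of the decision is the same.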

Another example of a hybrid approach is using serverless functions for the frontend API layer while using managed services like Amazon RDS or Google Cloud SQL for the backend database. This combination allows you to benefit from the scalability and cost-effectiveness of serverless for the API layer while using a managed database service for persistent storage and transactional capabilities.

Choosing the Right Architecture

When deciding between serverless architecture and its alternatives, it's crucial to consider various factors to ensure you select the approach that best aligns with your application's needs and your organization's goals. Here are the key considerations:

  1. Application requirements and characteristics: Assess your application's specific requirements, such as scalability, performance, latency, and resource intensity. You should also consider the nature of your workloads, whether they are event-driven, long-running, or have predictable traffic patterns. Then, you can evaluate the need for real-time processing, data consistency, and transactional capabilities.
  2. Development team's expertise: Consider your development team's skills and expertise in working with different architectures and technologies. Assess their familiarity with serverless platforms, containers, and traditional server-based architectures. Then, determine if additional training or hiring may be necessary to adopt a particular approach effectively.
  3. Cost considerations: Evaluate the cost implications of each architecture based on your application's usage patterns and scale. Consider the pricing models of serverless platforms, which charge based on function invocations and resource consumption, and compare the costs of serverless services with the expenses of running and maintaining traditional servers or containers.
  4. Future scalability and flexibility: Assess your application's expected growth and evolution over time. Consider the scalability requirements and whether serverless architecture can accommodate future needs. Evaluate each approach's flexibility and portability, considering the potential for vendor lock-in and the ability to migrate if needed.

By carefully evaluating these factors and weighing the trade-offs, you can decide whether serverless architecture or its alternatives best suit your application's requirements and organizational goals. It's essential to conduct thorough research, engage in proof-of-concept testing, and consider the long-term implications of your architectural choice.

Remember: the right architecture depends on your specific context, and no one-size-fits-all solution exists. It's essential to regularly reassess your architecture as your application and business needs evolve and be open to adapting and refining your approach as necessary.

Choose Serverless When It Is Right For You

We've explored the world of serverless architecture, examining its benefits and limitations. Serverless architectures offer unparalleled scalability, cost-effectiveness, reduced operational overhead, and increased development speed. These advantages make serverless an attractive choice for many applications, especially those with variable and unpredictable workloads.

However, it's crucial to recognize that serverless is not a one-size-fits-all solution. The key takeaway is the importance of evaluating your individual project needs. Before jumping on the serverless bandwagon, take the time to assess your application's requirements, consider your team's expertise, evaluate the cost implications, and think about future scalability and flexibility. Conduct thorough research, conduct proof-of-concept testing, and weigh the trade-offs to determine if serverless aligns with your goals and constraints.

Serverless architecture is a powerful tool in your arsenal, but it's not the answer for everyone. Choose serverless when it aligns with your project's requirements and goals, but don't be afraid to explore alternative architectures when necessary. By carefully evaluating your options and making informed decisions, you can build applications that are scalable, cost-effective, and well suited to your specific needs.