Serverless Architecture Patterns in AWS
Serverless architecture is a way to build and run applications and services without having to manage infrastructure. The approach lets teams focus on delivering actual business value and forget about infrastructure management. Serverless architecture has many other advantages, and you will find plenty of blogs covering them.
The focus of this blog is to cover some of the Serverless architecture patterns that I have used in real-life projects. Let’s take a look at them —
Pattern 1 — This is the simplest architecture pattern, and the one you will come across the moment you search for Serverless architectures.
A backend service with AWS API Gateway acting as the proxy layer for Lambda-based business functions. API Gateway invokes the Lambda functions synchronously, and data is saved to or retrieved from AWS DynamoDB, a managed serverless NoSQL service from AWS.
API Gateway comes with a lot of additional features, like caching and rate limiting, which can be leveraged as per the business need.
A single Lambda function for all the business functionalities, or one Lambda per business functionality, is a question I am frequently asked. Well, to keep it simple and secure — follow the Single Responsibility Principle and have one Lambda function per business functionality. Club all related Lambda functions together for high cohesion and form a Microservice for the specific domain. And follow the Least Privilege principle — each Lambda function gets an execution role with only those permissions required to achieve its business functionality.
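To make the single-responsibility idea concrete, here is a minimal sketch of one such Lambda function handling exactly one business operation ("get order"). The table name `orders`, the key attribute `orderId`, and the injectable `table` parameter are assumptions for illustration — the execution role for this function would only need `dynamodb:GetItem` on that one table.

```python
import json

def build_key(event):
    """Extract the DynamoDB key from the API Gateway proxy event."""
    return {"orderId": event["pathParameters"]["orderId"]}

def lambda_handler(event, context, table=None):
    """Invoked synchronously by API Gateway (Lambda proxy integration).

    `table` is injectable for local testing; in AWS it would default to
    boto3.resource("dynamodb").Table("orders").
    """
    if table is None:
        import boto3
        table = boto3.resource("dynamodb").Table("orders")
    item = table.get_item(Key=build_key(event)).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}
```

Because the function does exactly one thing, both the code and its IAM policy stay small and auditable.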
Pattern 2 — If your product consists of multiple Microservices, with each service owned by a different team, how you deploy these services is a very important question that needs to be addressed.
Well, one of the approaches is depicted below —
Key Points —
- Each Team manages and owns end to end service deployment — API Gateway, Lambda functions and DynamoDB.
- Each service gets deployed in a different AWS account (managed by the service team). This inherently increases the TPS of the overall product, because the API Gateway and Lambda concurrency limits apply at the account level. These limits are, of course, soft limits and can be increased by raising a support case from the AWS Console if you plan to host the services in the same AWS account.
- Each service can have a custom domain attached to it. Something like — catalog.example.com OR order.example.com.
If you are planning to deploy the Microservices in a single account, you can also expose them behind a single domain name, mounted at different URL paths. For example — example.com/api/order for the Order service or example.com/api/catalog for the Catalog service. This convention is not possible with services hosted in different AWS accounts. OK…I am lying!!! It is possible, but in a different way — I will cover it in Pattern #5.
Pattern 3 — A standard architectural pattern for a product having both a backend and a frontend, with the frontend being a Single Page Application (SPA) built with React or Angular.
Key Points —
- The static SPA frontend is hosted in a private S3 bucket and proxied via the AWS CloudFront service. This allows you to give the web application a custom domain. Apart from that, the CDN capabilities of CloudFront can be leveraged for low-latency content delivery.
- The backend service is hosted using the API Gateway + Lambda + DynamoDB stack from Pattern #1.
Pattern 4 — An extension of Pattern #3. If your clients are geographically dispersed and you want more control over the distribution, you can configure your own CloudFront distribution as a proxy to a Regional API Gateway.
In contrast, an edge-optimized API Gateway is proxied via a CloudFront distribution that is managed by AWS, over which you don’t have any control.
Pattern 5 — One of the issues with Pattern #3 and Pattern #4 is that you have to handle CORS, which adds latency to every API call made from the browser (client) to the backend API. This additional latency comes from the preflight OPTIONS call the browser makes to the backend to check the CORS configuration — an extra browser-to-server round trip before the actual API call is invoked. To bypass it, the pattern below can be leveraged.
In this pattern, both the frontend and the backend APIs are proxied by a single CloudFront distribution and exposed under the same domain name. Both S3 and API Gateway (single or multiple) are configured as Origins, and Cache Behaviors are configured for each Origin. Something like —
And because the domain for accessing the frontend application and the backend APIs is the same, CORS does not come into play.
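The cache-behavior routing can be sketched as a simple path-pattern match (this is only an illustrative model, not CloudFront's exact matching algorithm; the origin names and the `api/*` pattern are assumptions):

```python
import fnmatch

# Ordered cache behaviors: first matching path pattern wins; anything that
# does not match falls through to the default (S3) origin.
BEHAVIORS = [
    ("api/*", "api-gateway-origin"),
]
DEFAULT_ORIGIN = "s3-frontend-origin"

def route(path):
    """Return the origin a request path would be forwarded to."""
    path = path.lstrip("/")
    for pattern, origin in BEHAVIORS:
        if fnmatch.fnmatch(path, pattern):
            return origin
    return DEFAULT_ORIGIN
```

Since both `/api/order` and `/index.html` are served from the same domain, the browser never needs a CORS preflight for the API calls.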
Pattern 6 — The Storage First pattern, where data is saved before it gets processed. API Gateway acts as an HTTP proxy and can save data directly to SQS, Kinesis or a similar service. The data is then processed by Lambda functions.
This pattern lets you absorb high incoming traffic even if the backend services are not able to scale.
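A minimal sketch of the consuming side, assuming an SQS queue triggers the Lambda function (the `process_order` logic and message shape are placeholders for illustration). The client's request has already been persisted by API Gateway, so a failure here only causes SQS redelivery, not a failed client call:

```python
import json

def process_order(order):
    """Placeholder business logic; replace with real processing."""
    return {"orderId": order["orderId"], "status": "PROCESSED"}

def lambda_handler(event, context):
    """Consume a batch of messages delivered by the SQS -> Lambda trigger."""
    results = []
    for record in event["Records"]:
        order = json.loads(record["body"])  # message body written via API Gateway
        results.append(process_order(order))
    return results
```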
Pattern 7 — This pattern is the most advanced version, securing both the APIs hosted by the backend service and the frontend content hosted in S3.
API Gateway can leverage an AWS Lambda Authorizer and/or the AWS Cognito service to secure the exposed API endpoints. AWS Lambda@Edge functions help secure the content exposed via CloudFront.
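For the API side, here is a minimal token-based Lambda Authorizer sketch. The token check is a deliberate placeholder (a real authorizer would validate a JWT, e.g. a Cognito-issued token); what matters is the IAM policy shape API Gateway expects back:

```python
# Assumption for illustration only — replace with real token validation.
VALID_TOKENS = {"allow-me"}

def build_policy(principal_id, effect, method_arn):
    """Return the IAM policy document API Gateway expects from an authorizer."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }

def lambda_handler(event, context):
    token = event.get("authorizationToken", "")
    effect = "Allow" if token in VALID_TOKENS else "Deny"
    return build_policy("user", effect, event["methodArn"])
```

API Gateway caches the returned policy for the configured TTL, so the authorizer does not run on every single request.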
AWS Lambda@Edge is a powerful construct which can be used to perform quite a few interesting tasks when combined with a CloudFront distribution. For example —
- Secure static site
- Enhanced Origin Failover
- Add Security Headers for the static site
- A/B Testing
- Progressive rollout for static site
I have not done a deep dive into each pattern, but I hope this blog gives you some idea of the various Serverless architecture patterns used for creating and deploying applications.
Do give a Clap if you liked it !!! And don’t forget to share with your friends.
Happy Blogging !!!