This blog gives you a high-level overview of exposing services running inside a private Kubernetes cluster on the AWS public cloud, using the managed EKS service, to the external world. You will get a taste of architecting a system under a given set of constraints. So, if you want to go beyond “Hello World” Kubernetes, continue reading!
A company has some services running in the AWS cloud and some in an on-premise data centre. The company is looking to close the on-premise data centre for cost reasons and has asked the application team to move all the services to AWS.
The blog does not focus on the actual migration planning. If you are looking for details around it, do take a look at this blog.
Mentioned below are some of the details which need to be considered to come up with the deployment architecture on AWS —
- The On-Premise services are running in a self-managed Kubernetes Cluster.
- The services to be moved to AWS need to be scalable, as the product has done very well in the market.
- There is a new requirement to impose rate limits on the APIs being served.
- Services already running on AWS leverage AWS API Gateway, which has AWS Web Application Firewall (WAF) integrated with it to improve the security posture of the services.
- Some of the team members already have expertise with AWS API Gateway.
- There is a mandate to reduce the operational cost of the Kubernetes cluster.
- The migration from on-premise to AWS should be done in less than 3 months, to avoid signing the data centre lease contract for one more year.
- For compliance reasons, the team wants to have control over the operating system on which the services run.
- It is acceptable to move SSL termination of external traffic from the individual services to the API Gateway.
Well, given the requirements and constraints above, the team comes up with the below architecture on AWS —
Key highlights of the proposed solution —
- Leverage the AWS-managed Elastic Kubernetes Service (EKS) with self-managed nodes. The managed control plane reduces operational cost, while self-managed nodes retain control over the operating system for the compliance requirement.
- EKS configured with the Cluster Autoscaler (scaling nodes) and the Horizontal Pod Autoscaler (scaling pods) handles the scaling needs of the system.
- To handle resiliency, an AWS Virtual Private Cloud (VPC) spans two Availability Zones, with two subnets per Availability Zone: one private subnet, called the Application Subnet, to host the EKS nodes, and a second private subnet, called the Data Subnet, to host the database. The public subnet is deliberately not covered here to avoid complexity.
- From a security perspective, the EKS compute nodes and the database all sit in private subnets and are accessible only via the API Gateway. This gives you more control to filter unwanted traffic right at the entry point.
- Use the NGINX Ingress Controller to expose the services running inside EKS. With this, an internal Network Load Balancer (NLB) is provisioned in the application private subnet; it routes traffic to the NGINX Ingress Controller, which in turn routes traffic to the services running inside EKS.
- External traffic is routed via the API Gateway, which sends it to the services hosted inside EKS through a VPC link (built on AWS PrivateLink) to the NLB.
- The rate limit is enforced by the API Gateway.
- SSL termination responsibility is removed from the individual services and moved to the API Gateway.
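To make the first highlight concrete, here is a minimal eksctl configuration sketch for an EKS cluster with a self-managed node group. The cluster name, region, AMI ID and sizes below are placeholder assumptions for illustration, not values from the actual migration:

```yaml
# Hypothetical eksctl ClusterConfig: managed control plane + self-managed nodes.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: product-cluster          # placeholder cluster name
  region: ap-south-1             # placeholder region
nodeGroups:                      # "nodeGroups" (not "managedNodeGroups") = self-managed nodes
  - name: app-nodes
    instanceType: m5.large
    minSize: 2
    maxSize: 10                  # headroom for the Cluster Autoscaler to scale into
    privateNetworking: true      # place the nodes in the private Application Subnet
    ami: ami-0123456789abcdef0   # custom AMI, giving OS-level control for compliance
```

Using `nodeGroups` rather than `managedNodeGroups` is what gives the team full control over the AMI, at the cost of owning node patching themselves.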
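The internal NLB in front of the NGINX Ingress Controller is requested through annotations on the controller's Service. A sketch using the AWS load-balancer Service annotations is shown below; the Service name, namespace and selector follow common community-chart defaults, so verify them against your installation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Ask AWS for a Network Load Balancer instead of the default Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Keep the NLB internal, reachable only through the API Gateway VPC link
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
```

Because the NLB is internal, nothing in the cluster is directly reachable from the internet; all external traffic must enter through the API Gateway.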
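Wiring the API Gateway to the internal NLB, and imposing the rate limit, can be sketched with the AWS CLI for an HTTP API. Every ID and ARN below is a placeholder, and the throttle numbers are illustrative:

```shell
# 1. Create a VPC link in the application private subnets (placeholder IDs)
aws apigatewayv2 create-vpc-link \
  --name eks-vpc-link \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --security-group-ids sg-cccc3333

# 2. Point an HTTP API integration at the NLB listener through the VPC link
aws apigatewayv2 create-integration \
  --api-id abc123 \
  --integration-type HTTP_PROXY \
  --integration-method ANY \
  --connection-type VPC_LINK \
  --connection-id vpclink-id \
  --integration-uri arn:aws:elasticloadbalancing:ap-south-1:111122223333:listener/net/my-nlb/0123abcd/4567efgh \
  --payload-format-version 1.0

# 3. Enforce the rate limit at the stage level
aws apigatewayv2 update-stage \
  --api-id abc123 \
  --stage-name '$default' \
  --default-route-settings ThrottlingBurstLimit=200,ThrottlingRateLimit=100
```

Note that for HTTP API private integrations the integration URI is the NLB listener ARN, not a URL; API Gateway resolves it through PrivateLink.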
For some of the internal concepts used in the architecture, do refer to the links below —
AWS PrivateLink - Amazon Web Services
Working with VPC links for HTTP APIs
Hope this blog gives you some insights on integrating AWS EKS and API Gateway.
Do give a few claps if you like this blog, and don’t forget to share it with your friends. Cheers!!