Deploy Code At Scale | AWS EC2 + Autoscaling + Load Balancer + CodeDeploy | DevOps With AWS Part 5

A common AWS use case:

You have a production-level web application running on n EC2 instances (1, 2, 3, ... n), the number of instances is managed by Auto Scaling based on resource usage (for example, average CPU and/or memory utilization), and traffic is distributed by an Application Load Balancer.

Now, suppose you have 100 instances, you have made a code change, and you want to update every instance. Doing that by hand across hundreds of instances is far too time-consuming and, at first glance, practically impossible. This is where AWS CodeDeploy comes in.

Using AWS CodeDeploy you can deploy your changes to any number of instances, and in the event of a failure you can roll back easily, either automatically or manually.
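
As a small illustration (CodeDeploy is covered in more detail below), here is a minimal boto3 sketch that enables automatic rollback on an existing deployment group; the application and group names are placeholders:

```python
# Minimal boto3 sketch: enable automatic rollback on an existing CodeDeploy
# deployment group. Application and group names below are placeholders.
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

codedeploy.update_deployment_group(
    applicationName="my-node-app",                 # hypothetical application name
    currentDeploymentGroupName="my-node-app-asg",  # hypothetical deployment group
    autoRollbackConfiguration={
        "enabled": True,
        # roll back when a deployment fails or a CloudWatch alarm fires
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
)
```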

For those who are new, here is a quick overview of each service we will use to achieve this:

AWS EC2:

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment.

Amazon EC2 offers the broadest and deepest compute platform, with a choice of processor, storage, networking, operating system, and purchase model. AWS offers the fastest processors in the cloud and is the only cloud with 400 Gbps Ethernet networking. It has the most powerful GPU instances for machine learning training and graphics workloads, as well as the lowest cost-per-inference instances in the cloud, and more SAP, HPC, machine learning, and Windows workloads run on AWS than on any other cloud.
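
As a quick illustration of provisioning capacity programmatically, here is a minimal boto3 sketch that launches a single EC2 instance; the AMI ID, key pair, and security group are placeholders you would replace with your own values:

```python
# Minimal boto3 sketch: launch one EC2 instance.
# The AMI ID, key pair, and security group below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                       # hypothetical key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # hypothetical security group
)

print(response["Instances"][0]["InstanceId"])    # ID of the new instance
```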

AWS Auto Scaling:

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, you can set up scaling for multiple resources across multiple services in minutes. AWS Auto Scaling provides a simple, powerful user interface that lets you build scaling plans for Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables, and Amazon Aurora Replicas.

AWS Auto Scaling makes scaling simple with recommendations that allow you to optimize performance, costs, or balance between them. If you’re already using Amazon EC2 Auto Scaling, you can now combine it with AWS Auto Scaling to scale additional resources for other AWS services. With AWS Auto Scaling, your applications always have the right resources at the right time.

There is no additional charge for AWS Auto Scaling. You pay only for the AWS resources needed to run your applications and Amazon CloudWatch monitoring fees. To get started, you can use the AWS Management Console, Command Line Interface (CLI), or SDK.
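
For the EC2 part of this setup, a common approach is a target tracking policy on the Auto Scaling group (this uses the Amazon EC2 Auto Scaling API). A minimal boto3 sketch, with a placeholder group name and an assumed 50% CPU target, might look like this:

```python
# Minimal boto3 sketch: attach a target tracking scaling policy to an
# existing Auto Scaling group. Group name and target value are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-node-app-asg",   # hypothetical Auto Scaling group
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # scale out/in to hold average CPU near 50%
    },
)
```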

AWS Elastic Load Balancing:

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers four types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant.

There are currently four types of load balancer available:

Application Load Balancer

Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Application Load Balancer routes traffic to targets within Amazon VPC based on the content of the request.
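
In this setup, the Application Load Balancer forwards HTTP traffic to a target group that the Auto Scaling group registers its instances into. A minimal boto3 sketch of that wiring, with placeholder subnet, security group, and VPC IDs and an assumed application port of 3000, might look like this:

```python
# Minimal boto3 sketch: create an Application Load Balancer, a target group,
# and an HTTP listener. Subnet, security group, and VPC IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

lb = elbv2.create_load_balancer(
    Name="my-node-app-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # hypothetical subnets
    SecurityGroups=["sg-0123456789abcdef0"],         # hypothetical security group
    Scheme="internet-facing",
    Type="application",
)

tg = elbv2.create_target_group(
    Name="my-node-app-tg",
    Protocol="HTTP",
    Port=3000,                      # assuming the Node.js app listens on 3000
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC
    HealthCheckPath="/",            # adjust to your app's health check route
)

elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```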

Network Load Balancer

Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Transport Layer Security (TLS) traffic where extreme performance is required. Network Load Balancer routes traffic to targets within Amazon VPC and is capable of handling millions of requests per second while maintaining ultra-low latencies.

Gateway Load Balancer

Gateway Load Balancer makes it easy to deploy, scale, and run third-party virtual networking appliances. Providing load balancing and auto-scaling for fleets of third-party appliances, Gateway Load Balancer is transparent to the source and destination of the traffic. This capability makes it well suited for working with third-party appliances for security, network analytics, and other use cases.

Classic Load Balancer (not recommended)

Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and the connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.

AWS CodeDeploy:

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations. The service scales to match your deployment needs.
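
To tie everything together, the deployment itself can be triggered from the console, CLI, or SDK. Below is a minimal boto3 sketch, with placeholder names, role ARN, and S3 location, that creates a CodeDeploy application, a deployment group attached to the Auto Scaling group, and a deployment from a zipped revision (the bundle must contain an appspec.yml):

```python
# Minimal boto3 sketch: CodeDeploy application, deployment group tied to an
# Auto Scaling group, and a deployment from an S3 revision.
# All names, ARNs, and bucket/key values are placeholders.
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

codedeploy.create_application(
    applicationName="my-node-app",
    computePlatform="Server",  # EC2/on-premises deployments
)

codedeploy.create_deployment_group(
    applicationName="my-node-app",
    deploymentGroupName="my-node-app-asg",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",  # hypothetical role
    autoScalingGroups=["my-node-app-asg"],   # instances are supplied by the ASG
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    autoRollbackConfiguration={"enabled": True, "events": ["DEPLOYMENT_FAILURE"]},
)

codedeploy.create_deployment(
    applicationName="my-node-app",
    deploymentGroupName="my-node-app-asg",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-deploy-bucket",      # hypothetical bucket
            "key": "my-node-app/app-v2.zip",   # zipped bundle containing appspec.yml
            "bundleType": "zip",
        },
    },
    description="Deploy latest application revision",
)
```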



Follow the attached video tutorial along with the steps below (a boto3 sketch covering steps 3 through 8 follows the list):

1) 0:11 Set Up EC2 and Install Basic Dependencies to Run the App

2) 5:58 Install & Set Up the NodeJS Application (you can set up your own application if you prefer)

3) 10:44 Create AMI from the newly created EC2 instance (So that it can download AWS CodeDeploy Agent from S3)

4) 12:14 Create new launch configuration (Launch Configuration will be used for Autoscaling)

5) 15:16 Create Autoscaling Group

6) 16:08 Create New Load Balancer + Target Group & Attach to the Autoscaling Group

7) 18:27 Configure Group Size & Scaling Policy while creating Autoscaling Group

8) 19:53 Change Health Check URL & Port in the Load Balancer's Target Group

9) 21:58 How to Debug If Any Issue Arises

10) 26:52 Create AWS CodeDeploy Application

11) 27:16 Create CodeDeploy Group

12) 31:35 Create Deployment
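
For reference, here is a minimal boto3 sketch of steps 3 through 8 above; every ID, name, and ARN is a placeholder for the values you create while following the video:

```python
# Minimal boto3 sketch of steps 3-8 above. All IDs, names, and ARNs are
# placeholders; adjust them to match your own environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/my-node-app-tg/abc123"  # hypothetical target group ARN
)

# 3) Create an AMI from the prepared instance and wait until it is available
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Name="my-node-app-ami",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 4) Create a launch configuration from that AMI
autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-node-app-lc",
    ImageId=image["ImageId"],
    InstanceType="t3.micro",
    SecurityGroups=["sg-0123456789abcdef0"],          # hypothetical security group
    IamInstanceProfile="my-ec2-codedeploy-profile",   # role with S3 read access for the agent
)

# 5-7) Create the Auto Scaling group attached to the ALB target group
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-node-app-asg",
    LaunchConfigurationName="my-node-app-lc",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnets
    TargetGroupARNs=[TARGET_GROUP_ARN],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)

# 8) Point the target group health check at the app's port and path
elbv2.modify_target_group(
    TargetGroupArn=TARGET_GROUP_ARN,
    HealthCheckPath="/",     # adjust to your app's health check route
    HealthCheckPort="3000",  # assuming the Node.js app listens on 3000
)
```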


Once the deployment completes, the latest changes will be running on all the EC2 instances in the Auto Scaling group.
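
If you prefer to watch the rollout from code rather than the console, here is a minimal boto3 sketch that waits for a deployment to finish, assuming a placeholder deployment ID returned by create_deployment:

```python
# Minimal boto3 sketch: wait for a CodeDeploy deployment to finish.
# The deployment ID is a placeholder returned by create_deployment.
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

deployment_id = "d-ABCDEF123"  # hypothetical deployment ID

# Block until the deployment succeeds (raises if it fails or times out)
codedeploy.get_waiter("deployment_successful").wait(deploymentId=deployment_id)

status = codedeploy.get_deployment(deploymentId=deployment_id)
print(status["deploymentInfo"]["status"])  # e.g. "Succeeded"
```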

I hope you will apply the knowledge from this article to your current and future projects and manage your code deployments more efficiently 👍

References: AWS Official Site Documentation

Also read (if you haven't already):

AWS CodeCommit | DevOps With AWS Part 1

AWS CodeBuild | DevOps With AWS Part 2

AWS CodeDeploy | DevOps With AWS Part 3

Securely Passing Environment Variables - CodeCommit + CodeBuild + CodeDeploy | DevOps With AWS Part 4

About the Author:


Sandip Das works as a Sr. Cloud Solutions Architect & DevOps Engineer for multiple tech product companies/start-ups, holds the AWS DevOps Engineer Professional certification, and carries the title of "AWS Container Hero".

He is always in "keep on learning" mode, enjoys sharing knowledge with others, and currently holds 5 AWS certifications. Sandip finds blogging a great way to share knowledge: he writes articles on LinkedIn about Cloud, DevOps, Programming, and more. He also creates video tutorials on his YouTube channel.



