Elastic Load Balancing (ELB) is the AWS service that automatically distributes and manages incoming application traffic across multiple Amazon EC2 instances. It helps you achieve fault tolerance in your applications, and it provides the amount of load balancing capacity needed to route application traffic seamlessly. It works with Amazon Virtual Private Cloud (Amazon VPC) to provide robust networking and security features.
Elastic Load Balancing provides two types of load balancers that both feature high availability, automatic scaling, and robust security.
- Classic Load Balancer: The Classic Load Balancer routes traffic based on either network-level or application-level information.
- Application Load Balancer: The Application Load Balancer operates at the application layer and allows you to define routing rules based on content across multiple services or containers running on one or more Amazon Elastic Compute Cloud (Amazon EC2) instances.
- Achieving Even Better Fault Tolerance for Your Applications
- DNS Failover for Elastic Load Balancing
- Auto Scaling with Elastic Load Balancing
- Using Elastic Load Balancing in your Amazon VPC (Source: https://aws.amazon.com/elasticloadbalancing/)
Task list for module 8:
- To create an Elastic Load Balancer under a security group for balancing HTTP port (80) traffic between the primary web server and the secondary (read-only) web server
- To test the load balancer performance
The architecture of the project:
Steps for the task: Creating an Elastic Load Balancer under a security group for balancing HTTP port (80) traffic between the primary web server and the secondary (read-only) web server
In this task, I will create a load balancer for HTTP port (80) traffic to distribute load between two live servers that are serving customers at the same time. It will automatically balance the load between the servers at any scale and reduce the pressure on any single EC2 instance. It is also able to detect the health of Amazon EC2 instances: when it detects an unhealthy instance, it stops routing traffic to that instance and instead spreads the load across the remaining healthy EC2 instances.
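The round-robin and health-check behaviour described above can be sketched as a small simulation. This is plain Python, not the AWS API, and the instance names are placeholders:

```python
# Minimal sketch of round-robin load balancing with health checks.
# A real ELB probes instances over HTTP/TCP; here "healthy" is just a flag.

class Instance:
    def __init__(self, name):
        self.name = name
        self.healthy = True

def route(instances, n_requests):
    """Distribute n_requests round-robin across the healthy instances only."""
    pool = [i for i in instances if i.healthy]
    return [pool[r % len(pool)].name for r in range(n_requests)]

web1 = Instance("WebServer01")
web2 = Instance("WebServer02")

# Both healthy: traffic alternates between the two servers.
print(route([web1, web2], 4))
# ['WebServer01', 'WebServer02', 'WebServer01', 'WebServer02']

# One instance fails its health check: all traffic goes to the survivor.
web2.healthy = False
print(route([web1, web2], 4))
# ['WebServer01', 'WebServer01', 'WebServer01', 'WebServer01']
```

This is exactly the fault-tolerance property the lab tests later: removing (or failing) one server leaves all traffic on the other.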
- Go to your Amazon EC2 dashboard from the Services menu in the management console. Select “Load Balancers” under “Load Balancing” in the left panel. Click “Create Load Balancer” and select “Classic Load Balancer”, which is best suited to making decisions at the transport layer.
- Now you will see the “Define Load Balancer” wizard. Give your load balancer a unique name. In this lab, Load Balancer name = DinoStoreLB, and Create LB inside = your VPC. You will also need to configure ports and protocols for your load balancer. By default, your load balancer serves a standard web server on port 80, but you can add more ports here to balance additional traffic.
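Conceptually, each Classic Load Balancer listener is just a mapping from a front-end (load balancer) port to a back-end (instance) port. The sketch below is a hand-rolled illustration of that idea, not the AWS API:

```python
# Hypothetical model of Classic Load Balancer listeners: each entry maps
# the port the load balancer accepts traffic on (lb_port) to the port it
# forwards to on the registered instances (instance_port).

listeners = [
    {"protocol": "HTTP", "lb_port": 80, "instance_port": 80},
    # Additional listeners could be added here, e.g. HTTPS on port 443.
]

def instance_port_for(lb_port, listeners):
    """Return the back-end instance port for traffic arriving on lb_port."""
    for listener in listeners:
        if listener["lb_port"] == lb_port:
            return listener["instance_port"]
    raise ValueError(f"no listener configured for port {lb_port}")

print(instance_port_for(80, listeners))  # 80
```

In this lab the mapping is the simplest possible one: port 80 in, port 80 out, on both web servers.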
- You will need to select a subnet for each Availability Zone where you want the load balancer to route traffic. Select at least two subnets in different Availability Zones to provide higher availability for your load balancer. This load balancer will work between WebServer and Webserver (AMI), so I chose the Availability Zones of those two servers.
- Click “Next: Assign Security Group“. Select the existing ‘WebRDPGroup’ security group that we configured for those servers. Both servers already allow HTTP port 80, so the load-balanced traffic can pass through this port.
- Select the running EC2 instances that you want to place behind the load balancer. In this case, choose WebserverRDp and Webserver02. Note: if the load balancer detects that one server is unhealthy, it will route traffic to the healthy server, so the system can tolerate the failure.
- Review and Create.
- A few minutes after you create the load balancer, its status will change to “InService” once it has finished registering the instances. That means your newly created load balancer is working properly across the two servers.
- In the description tab, you will notice this message: “Because the set of IP addresses associated with a LoadBalancer can change over time, you should never create an “A” record with any specific IP address. If you want to use a friendly DNS name for your load balancer instead of the name generated by the Elastic Load Balancing service, you should create a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone“
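The warning above can be illustrated with a toy DNS table (all names and addresses below are made up, and real resolution happens in DNS, not in Python). A CNAME points at the load balancer's name, so it stays correct when the underlying IPs change, while a hard-coded A record goes stale:

```python
# Toy DNS table showing why a CNAME to the load balancer's DNS name
# survives IP changes while a hard-coded A record does not.

dns = {
    "mylb.example.elb.amazonaws.com": ("A", ["203.0.113.10"]),
    "www.dinostore.example":          ("CNAME", "mylb.example.elb.amazonaws.com"),
    "stale.dinostore.example":        ("A", ["203.0.113.10"]),  # hard-coded IP
}

def resolve(name, dns):
    """Follow CNAME chains until an A record (a list of IPs) is reached."""
    rtype, value = dns[name]
    while rtype == "CNAME":
        rtype, value = dns[value]
    return value

# ELB silently replaces the load balancer's IP address over time:
dns["mylb.example.elb.amazonaws.com"] = ("A", ["198.51.100.20"])

print(resolve("www.dinostore.example", dns))    # ['198.51.100.20'] (still correct)
print(resolve("stale.dinostore.example", dns))  # ['203.0.113.10'] (now stale)
```

The CNAME keeps following the load balancer's own name, which is why AWS recommends it (or a Route 53 alias) over a fixed A record.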
Steps for the task: Testing the load balancer performance
In this task, I will test the load balancer's effect on my two servers. To check its behaviour, I will refresh the page repeatedly using the URL copied from the load balancer, and monitor whether the load balancer alternates between the instances and keeps serving the site correctly.
- Copy the load balancer DNS name and paste it into your local browser's address bar. Remember to add ‘/YOUR WEBSITE NAME/’ (e.g. /NET702.DinoStore/) to the load balancer URL.
- Now, to test the load balancer, repeatedly refresh the browser page from above and note that the internal IP address of the server changes as the load balancer swaps web servers (round robin).
The result of testing: You can now see two different private IP addresses appear as you refresh the page repeatedly. That means our load balancer is working correctly and distributing the website traffic between the two servers.
- Now try removing one of the instances from the load balancer, then refresh the browser page repeatedly. Notice that the internal IP address no longer changes, because you have taken one of the web servers out of the load balancing configuration.
- Next, add Webserver02, which you removed for the test, back into the load balancer configuration. Go to “Instance Edit” on the load balancer console, select the instance you want to add from the “Add and Remove Instance” wizard, and click Save. The instance will be added.
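The refresh-and-count check above can also be automated: poll the site repeatedly and count the distinct backend IPs seen. In the sketch below, the fetch function is a stand-in that alternates between two fake private IPs, in place of a real HTTP request to the load balancer's URL:

```python
import itertools

# Stand-in for an HTTP request to the load balancer URL; a real check
# would fetch the page and read the server's reported private IP.
fake_responses = itertools.cycle(["10.0.1.11", "10.0.2.22"])

def fetch_backend_ip():
    return next(fake_responses)

def distinct_backends(fetch, n_requests):
    """Poll n_requests times and return the set of backend IPs seen."""
    return {fetch() for _ in range(n_requests)}

seen = distinct_backends(fetch_backend_ip, 10)
print(sorted(seen))  # ['10.0.1.11', '10.0.2.22'] (both servers serving traffic)
```

With both instances registered you expect two distinct IPs; with one removed, the set collapses to a single address, which is exactly what the manual test showed.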
Congratulations! Your load balancer testing is successful. I completed this module without any difficulties or errors, and I hope you find it easy to deploy too. The load balancer will keep your system scalable, sustainable, and available to your customers by automatically distributing incoming application traffic between Amazon EC2 instances.
In the next module I will discuss “Enabling Auto Scaling to Handle Spikes and Troughs”.
Enjoy your website load 🙂