A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. This increases the fault tolerance of our applications. ELB detects unhealthy instances and routes traffic only to healthy instances.
Elastic Load Balancing supports the following types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers.
Network Load Balancer: A load balancer serves as the single point of contact for clients. The load balancer distributes incoming traffic across multiple targets, such as Amazon EC2 instances. This increases the availability of our application. We add one or more listeners to our load balancer. A listener checks for connection requests from clients, using the protocol and port that we configure, and forwards requests to a target group. Each target group routes requests to one or more registered targets, such as EC2 instances, using the TCP protocol and the port number that we specify.
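As a sketch of how this looks outside the console, the boto3 calls below create a TCP target group, register a target, and add a TCP listener to an existing Network Load Balancer; the ARNs, VPC ID, and instance ID are placeholders rather than values from this walkthrough.

```python
import boto3

elbv2 = boto3.client('elbv2')

# Target group for the NLB: TCP protocol and the port we choose (placeholder VPC ID).
tg = elbv2.create_target_group(
    Name='my-tcp-targets',
    Protocol='TCP',
    Port=80,
    VpcId='vpc-0123456789abcdef0',
    TargetType='instance',
)
tg_arn = tg['TargetGroups'][0]['TargetGroupArn']

# Register an EC2 instance as a target (placeholder instance ID).
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{'Id': 'i-0123456789abcdef0'}])

# Listener: checks for TCP connection requests on port 80 and forwards them to the target group.
elbv2.create_listener(
    LoadBalancerArn='arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/50dc6c495c0c9188',
    Protocol='TCP',
    Port=80,
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': tg_arn}],
)
```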
An Application Load Balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply, and then selects a target from the target group for the rule action. We can configure listener rules to route requests to different target groups based on the content of the application traffic.
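For instance, content-based routing can be expressed as a listener rule. The sketch below (with placeholder ARNs) forwards requests whose path matches /api/* to a separate target group, while all other requests fall through to the listener's default action.

```python
import boto3

elbv2 = boto3.client('elbv2')

# Listener rule, evaluated in priority order: path-based condition -> forward to the "api" target group.
elbv2.create_rule(
    ListenerArn='arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2',
    Priority=10,
    Conditions=[{'Field': 'path-pattern', 'Values': ['/api/*']}],
    Actions=[{
        'Type': 'forward',
        'TargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api-targets/73e2d6bc24d8a067',
    }],
)
```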
There is a key difference in how the load balancer types are configured. With Application Load Balancers and Network Load Balancers, we register targets in target groups, and route traffic to the target groups. With Classic Load Balancers, we register instances with the load balancer.
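The difference shows up directly in the APIs. As a rough illustration (names, ARNs, and instance IDs are placeholders), ALB/NLB targets are registered with a target group through the ELBv2 API, whereas Classic Load Balancer instances are registered with the load balancer itself through the older ELB API.

```python
import boto3

# ALB/NLB (ELBv2 API): targets are registered with a target group.
elbv2 = boto3.client('elbv2')
elbv2.register_targets(
    TargetGroupArn='arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-targets/73e2d6bc24d8a067',
    Targets=[{'Id': 'i-0123456789abcdef0'}],
)

# Classic Load Balancer (ELB API): instances are registered with the load balancer itself.
elb = boto3.client('elb')
elb.register_instances_with_load_balancer(
    LoadBalancerName='my-classic-lb',
    Instances=[{'InstanceId': 'i-0123456789abcdef0'}],
)
```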
Elastic Load Balancing with Amazon EC2: We can configure our load balancer to route traffic to our EC2 instances.
Elastic Load Balancing with Amazon EC2 Auto Scaling: If we enable Auto Scaling with Elastic Load Balancing, instances that are launched by Auto Scaling are automatically registered with the load balancer, and instances that are terminated by Auto Scaling are automatically de-registered from the load balancer.
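A minimal sketch of that wiring, assuming a launch template and a target group already exist (all names and ARNs below are placeholders): because the Auto Scaling group is created with TargetGroupARNs, every instance it launches is registered with the load balancer's target group automatically.

```python
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='my-asg',
    LaunchTemplate={'LaunchTemplateName': 'my-launch-template', 'Version': '$Latest'},
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    # Instances launched by this group are automatically registered with this target group
    # (and deregistered from it when they are terminated).
    TargetGroupARNs=['arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-targets/73e2d6bc24d8a067'],
    VPCZoneIdentifier='subnet-0123456789abcdef0,subnet-0fedcba9876543210',
)
```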
On the EC2 dashboard, click Instances in the left menu. We will be launching two new instances, so click Launch Instance.
1. On the next page, select Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type.
2. On the next page, select t2.micro and click Next: Configure Instance Details.
3. On the next page, enter 2 in Number of instances and attach the IAM role previously created. Click Next: Add Storage.
4. On the next page, keep the default storage settings and click Next: Add Tags.
5. On the next page, click Next: Configure Security Group.
6. On the next page, click Select an existing security group, select the default VPC security group, and click Review and Launch.
7. On the next page, click Launch. In the pop-up, choose to proceed without a key pair and continue.
8. On the next page, click View Instances.
While our instances are launching, we can give them names: hover the mouse pointer over the cell under the Name column of the first instance and name it InstanceA, then name the second instance InstanceB.
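The same launch can also be scripted. The boto3 sketch below starts two t2.micro instances and tags them InstanceA and InstanceB; the AMI ID, IAM instance profile name, and security group ID are placeholders for the ones used in this walkthrough.

```python
import boto3

ec2 = boto3.client('ec2')

# Launch two t2.micro instances (placeholder AMI, instance profile, and security group).
resp = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',                  # Amazon Linux AMI chosen in the console
    InstanceType='t2.micro',
    MinCount=2,
    MaxCount=2,
    IamInstanceProfile={'Name': 'my-instance-role'},  # IAM role created earlier
    SecurityGroupIds=['sg-0123456789abcdef0'],        # default VPC security group
)

# Name the instances, mirroring the console step of filling in the Name column.
instance_ids = [i['InstanceId'] for i in resp['Instances']]
for instance_id, name in zip(instance_ids, ['InstanceA', 'InstanceB']):
    ec2.create_tags(Resources=[instance_id], Tags=[{'Key': 'Name', 'Value': name}])

print('Launched:', instance_ids)
```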
In the left menu, click Load Balancers, then click Create Load Balancer. We will be using an Application Load Balancer, so click Create under it.
1. On the next page, enter a Name, select internet-facing as the Scheme, and leave the IP address type as ipv4. Keep the default listener settings (protocol: HTTP, port: 80). Under Availability Zones, keep the default VPC and select at least two Availability Zones. Click Next: Configure Security Settings.
2. Click Next: Configure Security Groups. On the next page, use the default VPC security group and click Next: Configure Routing.
3. On the next page, enter a Name for the target group, select Instance as the Target type, and keep the default values for protocol, port, and health checks. Click Next: Register Targets.
4. On the next page, select both instances in the bottom table and click Add to registered. Our instances now appear in the Registered targets table at the top. Click Next: Review.
5. On the next page, click Create.
The new load balancer will initially be in the provisioning state. After a while, refresh its status and its state will change to active. That's it: our load balancer is up and running.
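For reference, the same Application Load Balancer setup can be sketched in boto3 (the subnet, security group, VPC, and instance IDs below are placeholders): create the load balancer, create a target group, register both instances, add an HTTP listener, and wait for the state to become active.

```python
import boto3

elbv2 = boto3.client('elbv2')

# Internet-facing ALB spanning at least two Availability Zones (placeholder subnets and security group).
lb = elbv2.create_load_balancer(
    Name='my-alb',
    Subnets=['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],
    SecurityGroups=['sg-0123456789abcdef0'],
    Scheme='internet-facing',
    Type='application',
    IpAddressType='ipv4',
)
lb_arn = lb['LoadBalancers'][0]['LoadBalancerArn']

# Target group with Instance target type and default HTTP settings (placeholder VPC ID).
tg = elbv2.create_target_group(
    Name='my-alb-targets',
    Protocol='HTTP',
    Port=80,
    VpcId='vpc-0123456789abcdef0',
    TargetType='instance',
)
tg_arn = tg['TargetGroups'][0]['TargetGroupArn']

# Register InstanceA and InstanceB (placeholder instance IDs).
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{'Id': 'i-0123456789abcdef0'}, {'Id': 'i-0fedcba9876543210'}],
)

# HTTP listener on port 80 forwarding to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol='HTTP',
    Port=80,
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': tg_arn}],
)

# Block until the load balancer leaves "provisioning" and becomes "active".
elbv2.get_waiter('load_balancer_available').wait(LoadBalancerArns=[lb_arn])
print('Load balancer is active:', lb['LoadBalancers'][0]['DNSName'])
```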
If we host a website on multiple Amazon EC2 instances, we can distribute traffic to those instances by using Elastic Load Balancing (ELB). The ELB service automatically scales the load balancer as traffic to our website changes over time. ELB can also monitor the health of its registered instances and route domain traffic only to healthy instances.
To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias (A) record that points to our load balancer. An alias record is a Route 53 extension to DNS. It is similar to a CNAME record, but we can create an alias record both for the root domain, such as example.com, and for subdomains, such as www.example.com (we can create CNAME records only for subdomains).
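A sketch of that alias record with boto3 (the hosted zone ID, domain name, and load balancer name are placeholders): the alias target uses the load balancer's DNS name and its canonical hosted zone ID, both of which come from describe_load_balancers.

```python
import boto3

elbv2 = boto3.client('elbv2')
route53 = boto3.client('route53')

# Look up the load balancer's DNS name and canonical hosted zone ID (placeholder LB name).
lb = elbv2.describe_load_balancers(Names=['my-alb'])['LoadBalancers'][0]

# Create (or update) an alias A record for the root domain pointing at the load balancer.
route53.change_resource_record_sets(
    HostedZoneId='Z0123456789EXAMPLE',   # placeholder: our domain's hosted zone
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'example.com',
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': lb['CanonicalHostedZoneId'],  # the load balancer's zone, not ours
                    'DNSName': lb['DNSName'],
                    'EvaluateTargetHealth': False,
                },
            },
        }],
    },
)
```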
Route 53 performs three main functions: domain registration, DNS routing, and health checking.
Amazon CloudWatch monitors our AWS resources and the applications we run on AWS in real time. We can use CloudWatch to collect and track metrics, which are variables we can measure for our resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources we are monitoring, based on rules that we define.
We can use CloudWatch dashboards to view the metrics that we select.
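For example, a dashboard can also be created programmatically; in the sketch below (instance ID, region, and dashboard name are placeholders), put_dashboard creates a dashboard with a single widget charting average CPU utilization for one instance.

```python
import json
import boto3

cloudwatch = boto3.client('cloudwatch')

# One metric widget: average CPUUtilization for a single instance (placeholder ID and region).
dashboard_body = {
    'widgets': [{
        'type': 'metric',
        'x': 0, 'y': 0, 'width': 12, 'height': 6,
        'properties': {
            'metrics': [['AWS/EC2', 'CPUUtilization', 'InstanceId', 'i-0123456789abcdef0']],
            'period': 300,
            'stat': 'Average',
            'region': 'us-east-1',
            'title': 'InstanceA CPU utilization',
        },
    }],
}

cloudwatch.put_dashboard(DashboardName='my-dashboard', DashboardBody=json.dumps(dashboard_body))
```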
We can view the alarms in the CloudWatch console, or have an alarm trigger an action, such as sending an SNS message or stopping the instance.
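As a sketch of such an alarm with boto3 (the instance ID and SNS topic ARN are placeholders): the alarm fires when average CPU utilization stays above 80% for five minutes and publishes a notification to an SNS topic.

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='InstanceA-high-cpu',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder instance
    Statistic='Average',
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    # Action when the alarm fires: notify an SNS topic (an EC2 stop action could be used instead).
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:my-alarm-topic'],
)
```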
CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of our AWS account. With CloudTrail, we can log, continuously monitor, and retain account activity related to actions across our AWS infrastructure. CloudTrail provides event history of our AWS account activity, including actions taken through the AWS Management Console, AWS Software Development Kits (SDKs), command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
CloudTrail saves its logs in an S3 bucket as gzip archives.
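Besides reading the S3 log files, recent activity can be queried through the CloudTrail event history API; a small sketch (the event name filter is just an example):

```python
import boto3

cloudtrail = boto3.client('cloudtrail')

# Look up recent RunInstances events from the event history.
resp = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'RunInstances'}],
    MaxResults=10,
)

for event in resp['Events']:
    print(event['EventTime'], event['EventName'], event.get('Username'))
```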
SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. Publishers (e.g. CloudWatch) communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel. Subscribers (e.g. web servers, email addresses, SQS queues, Lambda functions) consume or receive the message or notification over one of the supported protocols (e.g. Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
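A minimal sketch of that flow with boto3 (the topic name and email address are placeholders): create a topic, subscribe an email endpoint, and publish a message to all subscribers.

```python
import boto3

sns = boto3.client('sns')

# Topic: the logical access point that publishers send messages to.
topic_arn = sns.create_topic(Name='my-notifications')['TopicArn']

# Subscriber: an email endpoint (the address must confirm the subscription before receiving messages).
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='user@example.com')

# Publisher: send a message to the topic; SNS delivers it to every confirmed subscriber.
sns.publish(TopicArn=topic_arn, Subject='Test notification', Message='Hello from SNS')
```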
We will now cover Amazon Simple Storage Service (S3).