Module 10: Configuring DNS with Route 53

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service provided by AWS. You can use Amazon Route 53 to register new domains, transfer existing domains, route traffic for your domains to your AWS and external resources, and monitor the health of your resources.

It translates names such as http://www.hello-world.com into IP addresses like 192.168.1.58, so users don’t need to remember IP addresses or public DNS names, which are usually very long, to connect to a website or computer. Route 53 is designed to give businesses and developers an extremely reliable and cost-effective way to route their users to Internet applications.

Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS. (Source: https://aws.amazon.com/route53/)
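The name-to-address translation described above can be tried with any DNS resolver. Here is a minimal sketch in Python; it resolves "localhost" so it needs no Internet access, but Route 53 answers the same kind of query for your public domain:

```python
import socket

# Resolve a hostname to an IPv4 address -- the core job DNS performs.
# A query for your own registered domain would instead be answered by
# its authoritative name servers (Route 53, once you delegate to it).
ip = socket.gethostbyname("localhost")
print(ip)
```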

Task list for module 10:  

  • To create a hosted zone
  • To register the Dinostore website with the Domain Name Service
  • To create a record set

The architecture of the project: diagram (2).png

Steps for the task: To create a hosted zone

A hosted zone is a collection of resource record sets for a specified domain. You create one hosted zone per domain, and Amazon Route 53 responds with information from your hosted zone whenever someone enters your domain name in a web browser. A hosted zone gives your own domain Amazon’s public DNS so that you can publish websites on the cloud platform.

  1.  Go to the AWS Route 53 console from the Services menu in the Amazon console. Select “DNS management”, then click “Create Hosted Zone“. 1.jpg
  2. Now provide your domain name. The name of the domain: Wildlife-dinostore.com (you can choose any suitable name for your domain). Select the domain type: Public Hosted Zone. Then click “Create”. Note that all record sets in the hosted zone include the name of the domain. 8.jpg
  3. When you click “Create”, you will see the name servers assigned to your domain by Route 53. You need to copy all of these and paste them into the free domain management console under nameservers. 9.jpg
  4. Now go to your domain host and customise the Nameservers section of your domain settings. Note: you must update the name server records either with the current DNS service or with the registrar for the domain, as applicable. 10.jpg
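The result you see after step 3 has roughly the shape below. This sketch (with made-up example values, not your real zone ID or name servers) shows pulling out the four name servers you paste into the registrar in step 4:

```python
# Example shape of a Route 53 "create hosted zone" result; the zone ID
# and name server hostnames here are placeholders, not real values.
response = {
    "HostedZone": {"Id": "/hostedzone/Z1EXAMPLE", "Name": "wildlife-dinostore.com."},
    "DelegationSet": {
        "NameServers": [
            "ns-2048.awsdns-64.com",
            "ns-2049.awsdns-65.net",
            "ns-2050.awsdns-66.org",
            "ns-2051.awsdns-67.co.uk",
        ]
    },
}

# These four entries are what goes into the registrar's
# "Nameservers" section.
name_servers = response["DelegationSet"]["NameServers"]
for ns in name_servers:
    print(ns)
```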

Step for the task: To register Dinostore website with the Domain Name Service

  1. Go to “https://my.freenom.com/clientarea.php?action=domaindetails” to register your free domain and customise your name servers with the hosted zone name servers. 12.jpg11

Steps for the task: To create a record set

Now you need to go back to the AWS Route 53 console, click “Go to Record Sets” and then “Create Record Set”. 6.jpg

  1. Provide the appropriate information to create the record set. Leave the record set name blank. Alias target = your ELB link, because all the instances are now registered under the “DinoStoreLB” load balancer. The routing policy stays as ‘Simple’ and Evaluate target health = No. Then click “Create”. 7.jpg13.jpg
  2. Congratulations! Your domain is ready to go. Open your local browser and replace your public DNS name with your newly registered domain name. Now you can see your website published under your new domain name. 14.jpg
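Step 1 above, expressed as the JSON change batch that Route 53 accepts through its ChangeResourceRecordSets API. This is a sketch: the ELB hosted zone ID and DNS name below are placeholders, so copy the real values from your load balancer’s description tab:

```python
import json

# An alias A record at the zone apex pointing at the ELB; a blank
# record set name in the console means "the domain itself". Both
# AliasTarget values below are placeholders.
change_batch = {
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "wildlife-dinostore.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z00000EXAMPLE",  # the ELB's hosted zone ID
                    "DNSName": "DinoStoreLB-123456789.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,  # "Evaluate target health = No"
                },
            },
        }
    ]
}
print(json.dumps(change_batch, indent=2))
```

An alias record (rather than a plain A record) is needed here because an ELB has no fixed IP address, only a DNS name.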

Recommendation: You may face some problems when opening your new domain. Visit the following website to get a free new domain: “https://my.freenom.com/clientarea.php“. Some domains are free for one year. Note that a new domain sometimes takes a long time to become active. Wait until it is active!

At the end of the modules, you can see my dinostore website operating successfully on the AWS cloud. I hope you are able to move your locally developed website into the cloud by following modules 1-10.

Welcome to Wild Life Dino-Store http://wildlife-dinostore.ml/net702dinostore/Default.aspx

Thank you 🙂


Module 9: Enabling Auto Scale to Handle Spikes and Troughs

Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define, such as user-defined policies, schedules, and health checks. You can define which EC2 instances you want to run with the help of Auto Scaling, and it will ensure that you are running your desired instances.

Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs. Auto Scaling is well suited both to applications that have stable demand patterns and to those that experience hourly, daily, or weekly variability in usage.

Features and Benefits

  • Maintain your Amazon EC2 instance availability and Automated Provisioning
  • Adjustable Capacity and Automatically Scale Your Amazon EC2 Fleet
  • Reusable Instance Templates                           (Source: https://aws.amazon.com/autoscaling/)

Task list for module 9:  

  • To enable an Auto Scaling group
  • To create a launch configuration for Auto Scaling
  • To create an Auto Scaling group
  • To check and monitor auto scaling on the dinostore website page
  • To test auto scaling by terminating instances

The architecture of the project: diagram1 (1).png

Steps for the task: To enable an Auto Scaling group

In this task, I will demonstrate how to create an Auto Scaling group and its configuration. You can use Auto Scaling to manage Amazon EC2 capacity automatically, maintain the right number of instances for your application, operate a healthy group of instances, and scale it according to your needs.

  1.  Go to the AWS EC2 dashboard from the Services menu in the AWS management console. Choose “Auto Scaling Groups” under Auto Scaling. 1.jpg

Click “Create Auto Scaling Group“:  Step 1: Create launch configuration and Step 2: Create Auto Scaling group.

Steps for the task:  Create launch configuration

An AMI is a template that contains the software configuration (operating system, application server, and applications) required to launch your instance. Whenever your system needs to scale out, Auto Scaling will fetch the AMI you specify here to create the new instance.

  1. In this section, you have to choose “My AMIs”. I am going to choose the AMI which I created during the module 6 deployment. 2.jpg
  2. Choose your “DinoStoreWebServer” AMI and click “Select” for further configuration. 3.jpg

3. On the configure details page, type the name of the launch configuration, “Scale-Web”, and choose the IAM role that we created before, “WebServerRole“. On the Add storage page, keep the default storage configuration as it is! 5.jpg6.jpg

4. Click “Next: Configure Security Group“. Select an existing security group = WebRDPGroup. Then Click “Review“.7.jpg8.jpg

5. Click “Create launch configuration” and select your existing key pair, or create a new one. In this case, I chose my existing key pair. 9.jpg
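The launch configuration built in steps 1-5 can be summarised as the following parameter set (names follow the Auto Scaling CreateLaunchConfiguration API; the AMI ID and key pair name are placeholders for your own):

```python
# Sketch of the "Scale-Web" launch configuration; ImageId and KeyName
# are placeholder values, not real identifiers.
launch_configuration = {
    "LaunchConfigurationName": "Scale-Web",
    "ImageId": "ami-0123456789abcdef0",     # your DinoStoreWebServer AMI
    "InstanceType": "t2.micro",
    "IamInstanceProfile": "WebServerRole",  # IAM role chosen in step 3
    "SecurityGroups": ["WebRDPGroup"],      # existing group from step 4
    "KeyName": "my-existing-key-pair",      # key pair selected in step 5
}
print(launch_configuration["LaunchConfigurationName"])
```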

Steps for the task: Create Auto Scaling group

In this section, I will create the Auto Scaling group. An Auto Scaling group is a collection of EC2 instances and the core of the Auto Scaling service. You create an Auto Scaling group by specifying the launch configuration you want to use for launching the instances and the number of instances your group must maintain at all times. You can also specify the Availability Zones in which you want the instances to be launched.

  1. Name the Auto Scaling group and provide your network details. Initially, we start with a group size of 2 instances.

14.jpg

2. Keep the group at its initial size. Note that you can use scaling policies if you have criteria to scale up or down beyond the two instances we have. 15.jpg16.jpg

3. Review and click “Create Auto Scaling Group”. 17.jpg20.jpg21.jpg

4.  Now go back to your AWS EC2 dashboard and check the newly created Auto Scaling information to watch the instances launch. Those two “ASG-WebServer” instances form the Auto Scaling group. 22.jpg

5. Now check your load balancer. You will find four instances there; previously there were two instances registered. All four are up and running and their status is “InService“. 23.jpg
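Putting the group settings from these steps together (parameter names follow the CreateAutoScalingGroup API; the Availability Zone values are placeholders for the zones you chose):

```python
# Sketch of the "ASG-WebServer" group created above; attaching it to
# "DinoStoreLB" is what registers its instances with the load balancer.
auto_scaling_group = {
    "AutoScalingGroupName": "ASG-WebServer",
    "LaunchConfigurationName": "Scale-Web",  # from the previous task
    "MinSize": 2,
    "MaxSize": 2,                            # kept at initial size (step 2)
    "DesiredCapacity": 2,
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],  # placeholder zones
    "LoadBalancerNames": ["DinoStoreLB"],
}
print(auto_scaling_group["AutoScalingGroupName"])
```

With Min = Max = Desired = 2, the group holds a steady two instances; scaling policies would widen that Min/Max range.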

Steps for the task: To check and monitor the auto scaling on dinostore website page

In this section, you will monitor the effect of auto scaling on the dinostore website pages. You will refresh the page again and again so that you can see four different private IP addresses while the website keeps running.

  1. Open up your local browser with the Dinostore website. Replace the URL with the load balancer URL. Keep refreshing the page and see the effect of auto scaling.

24.jpg25

26.jpg

27.jpg

Steps for the task: To test auto scaling by terminating instances

In this section, I will terminate my two original servers, WebServer and WebServer02 (Read-only). Those servers are not in the Auto Scaling group. After terminating the servers, I will check whether the website is still running!

  1. In the AWS EC2 Instances window, terminate the WebServer and Webserver0228.jpg29.jpg

The result of Servers termination:

After terminating the two original servers, WebServer and WebServer02, you will notice only two private IP addresses showing while refreshing the pages. 30.jpg31.jpg

2. Now we will test auto scaling: terminate one of the auto-scaled instances and watch what happens in both the EC2 instances window and the Auto Scaling groups window. You will notice that a new instance is created automatically after you delete one of the group’s servers. 32.jpg

The result of Auto Scale Instance termination:

You can see only one IP address after refreshing your website again and again. 33.jpg

And one Auto Scaling instance is automatically launched to replace it. 34.jpg

After the new auto-scaled instance has been created, if you refresh your website you will be able to see two private IP addresses again. 36.jpg37.jpg

If you check the load balancer console, you will notice that another new instance has been automatically registered with the status “InService“. 35.jpg

Auto Scaling will help you reduce the load during the rush hours of your business website, and your customers will experience better availability. Auto Scaling can dynamically increase and decrease capacity as needed. As you pay for the EC2 instances you use, you save money by launching instances when they are actually needed and terminating them when they aren’t.

It also provides better fault tolerance, because Auto Scaling can detect when an instance is unhealthy, terminate it, and launch a replacement automatically. Another benefit is that if one Availability Zone becomes unavailable, Auto Scaling can launch instances in another one to compensate.

Screencast link : https://www.youtube.com/watch?v=AkIOG5iK5ME 

In the next module I will discuss “Configuring DNS with Route 53”.

Enjoy your shopping 24/7 Service on Wildlife Dino-Store website! 

Thank you 🙂

Module 8: Using ELB to Scale Applications

Elastic Load Balancing (ELB) is the AWS scaling service that automatically distributes and manages incoming application traffic across multiple Amazon EC2 instances. It allows you to achieve fault tolerance in your applications, and it provides the required amount of load balancing capacity to route application traffic seamlessly as needed. It works with Amazon Virtual Private Cloud to provide robust networking and security features.

Elastic Load Balancing provides two types of load balancers that both feature high availability, automatic scaling, and robust security.

  • Classic Load Balancer: The Classic Load Balancer routes traffic based on either network- or application-level information.
  • Application Load Balancer: The Application Load Balancer operates at the application layer and allows you to define routing rules based on content across multiple services or containers running on one or more Amazon Elastic Compute Cloud (Amazon EC2) instances.

Use Cases:

  • Achieving Even Better Fault Tolerance for Your Applications
  • DNS Failover for Elastic Load Balancing
  • Auto Scaling with Elastic Load Balancing
  • Using Elastic Load Balancing in your Amazon VPC                              (Source: https://aws.amazon.com/elasticloadbalancing/)

Task list for module 8:  

  • To create an Elastic Load Balancer in a security group for balancing HTTP port (80) traffic between the primary WebServer and the secondary (read-only) WebServer
  • To test the load balancer performance

The architecture of the project: diagram2.png

Steps for the task: Creating an Elastic Load Balancer under security group for balancing HTTP port (80) traffic between primary Webserver and secondary (Read-only) WebServer

In this task, I will create a load balancer for HTTP port 80 to distribute load between two live servers that are serving customers at the same time. It will automatically balance the load between servers at any scale and reduce the pressure on any one EC2 instance. It is also able to detect the health of Amazon EC2 instances: when it detects an unhealthy instance, it stops routing traffic to it and instead spreads the load across the remaining healthy EC2 instances.

  1. Go to your Amazon EC2 dashboard from the Services menu in the management console. Select “Load Balancers” under “Load Balancing” in the left panel. Click “Create Load Balancer” and select “Classic Load Balancer”, which is best suited for making routing decisions at the transport layer. 1.jpg
  2. Now you will get the “Define Load Balancer” wizard. Give your load balancer a unique name. In this lab, load balancer name = DinoStoreLB, and Create LB inside = your VPC. You will also need to configure ports and protocols for your load balancer. By default, your load balancer listens for a standard web server on port 80, but you can add more ports here to balance. 2.jpg
  3. You will need to select a subnet for each Availability Zone where you wish traffic to be routed by your load balancer. Select at least two subnets in different Availability Zones to provide higher availability for your load balancer. This load balancer will work between WebServer and WebServer02 (the AMI-based instance), so I chose the Availability Zones of those two servers. 3.png
  4. Click “Next: Assign Security Group“. Select your existing ‘WebRDPGroup’ security group that we configured for those Servers. And those two servers allow HTTP port 80 so that load balancing traffic goes through this port number.4.jpg
  5. Select the running EC2 instances that you want to register with the load balancer. In this case, choose WebserverRDp and Webserver02. Note: if the load balancer detects that one server is unhealthy, it will pass traffic to the healthy server so the system can handle faults. 5.jpg6.jpg
  6. Review and Create. 78.jpg
  7. A few minutes after you create your load balancer, its status will be “InService” once the load balancer has finished registering the instances. That means your newly created load balancer is working properly between the two servers. 9.jpg
  8. In the description tab, you can notice a message: “Because the set of IP addresses associated with a LoadBalancer can change over time, you should never create an “A” record with any specific IP address. If you want to use a friendly DNS name for your load balancer instead of the name generated by the Elastic Load Balancing service, you should create a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone.” 10.jpg
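The wizard choices in steps 1-6 map onto this parameter set (names follow the Classic ELB CreateLoadBalancer API; the subnet and security-group IDs are placeholders for your own):

```python
# Sketch of the "DinoStoreLB" Classic Load Balancer; Subnets and
# SecurityGroups hold placeholder IDs, not real resources.
load_balancer = {
    "LoadBalancerName": "DinoStoreLB",
    "Listeners": [
        {
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,   # port 80 traffic in ...
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,       # ... forwarded to port 80 on the servers
        }
    ],
    "Subnets": ["subnet-aaa11111", "subnet-bbb22222"],  # two AZs (step 3)
    "SecurityGroups": ["sg-0abc12345example"],          # WebRDPGroup (step 4)
}
print(load_balancer["LoadBalancerName"])
```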

Steps for the task: testing the load balancer performance

In this task, I will test the effect of the load balancer on my two servers. To check the performance, I will refresh the pages again and again using the URL copied from the load balancer, and monitor whether, when one instance is under load, the other takes over and serves requests correctly.

  1. You need to copy the load balancer DNS name and copy it into your local browser tab. Remember to add ‘/YOUR WEBSITE NAME/’ (E.g. /NET702.DinoStore/) to the load balancer URL. 11.jpg
  2. Now for testing the load balancer, repeatedly refresh the browser page from above and note the internal IP address of the server change as the load balancer swaps web servers (round robin). 12.jpg

The result of testing: you can now see two different private IP addresses come up as you refresh the page again and again. That means our load balancer is working perfectly and distributing the website traffic between the two servers.
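A quick way to confirm what step 2 shows by eye: collect the page output from several refreshes and count the distinct private addresses. A sketch (the sample HTML snippets and 172.31.x.x addresses below are made up):

```python
import re

# Matches the private IPv4 ranges the page banner prints.
PRIVATE_IP = re.compile(r"\b(?:10|172|192)\.\d{1,3}\.\d{1,3}\.\d{1,3}\b")

def distinct_server_ips(pages):
    """Return the set of private IPs found across several page loads."""
    ips = set()
    for html in pages:
        ips.update(PRIVATE_IP.findall(html))
    return ips

# Simulated output of three refreshes through the load balancer.
samples = [
    "<p>Served from 172.31.5.14</p>",
    "<p>Served from 172.31.9.201</p>",
    "<p>Served from 172.31.5.14</p>",
]
print(distinct_server_ips(samples))  # two distinct addresses -> round robin works
```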

3. Now try removing one of the instances from the load balancer, then refresh the browser page repeatedly. Notice that the internal IP address doesn’t change anymore, because you took one of the web servers out of the load balancing configuration. 13.jpg14.jpg15.jpg

4. In this step, you need to add WebServer02, which you just removed for the test, back into the load balancer configuration. Go to “Instance Edit” in the load balancer console, select the instance you want to add in the “Add and Remove Instances” wizard, and click Save. The instance will be added. 16.jpg17.jpg

Congratulations! Your load balancer testing is successful. I completed this module without any difficulty or errors, and I hope you will find it easy to deploy. The load balancer will keep your system scalable, sustainable, and available to your customers by automatically distributing the incoming application traffic between Amazon EC2 instances.

In the next module I will discuss “Enabling Auto Scale to Handle Spikes and Troughs”.

Enjoy your website load 🙂

Module 7: Using Elastic IPs

An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. This IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.

It is a public IPv4 address, which is used to access AWS instances and resources from local machines via the Internet. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the Internet.

The basic characteristics of an Elastic IP address:

  • You need to first allocate one Elastic IP to your account, and then associate it with your instance or a network interface.
  • When you associate an Elastic IP address with an instance or its primary network interface, the instance’s public IPv4 address is released back into Amazon’s pool of public IPv4 addresses. You cannot reuse a public IPv4 address.
  • You can associate, disassociate, and reassociate your Elastic IP address with different resources in your AWS account.
  • An Elastic IP address is for use in a specific region only.
  • When you associate an Elastic IP address with an instance that previously had a public IPv4 address, the public DNS hostname of the instance changes to match the Elastic IP address.
  • While your instance is running, you are not charged for one Elastic IP address associated with the instance, but you are charged for any additional Elastic IP addresses associated with the instance.   (Source: http://docs.aws.amazon.com/) 

Task list for module 7: 

  • To allocate a new Elastic IP address and configure it for the dinostore website running on the primary web server
  • To reassociate the Elastic IP address with the replica server “WebServer02” and configure it for the dinostore website running on the secondary web server

The architecture of the project: diagram3.png

Steps for the task: To Allocate a new Elastic IP address and configure it for the dinostore website which is running on the primary web server

In this task, I will create a new Elastic IP address and associate it with my primary web server. Then I will check whether my website is running through this newly created Elastic IP address!

  1. First, you need to allocate a new address. Go to the EC2 dashboard from the Services menu in the AWS Management Console. Click “Allocate New Address” and, on the next page, click “Allocate”. An IP address will be allocated automatically. 1.jpg2.jpg34.jpg
  2. In this step, you will associate your newly created Elastic IP address with your instance “Web Server”. Go to the “Action” menu, select “Associate Address” and click it. 5
  3. Now associate your instance with the Elastic IP. Select Instance or Network Interface, choose the instance from the drop-down list along with its private IP, then click “Associate“. 6.jpg
  4. Your instance and private IP will be associated with that Elastic IP address. You can view it in the Elastic IPs panel of the EC2 dashboard. 7.jpg
  5. Now open the browser on your local machine and run your dinostore website. You can see it is running from the public DNS address. Now replace the public DNS address with the Elastic IP address. Note: you can see your website working with the new Elastic IP address, and notice that your private address is the same in both cases. 8.jpg9.jpg
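Steps 1-3 correspond to two EC2 API calls, allocate then associate; a sketch with boto3-style parameter names, where both IDs are placeholders for your own resources:

```python
# AllocateAddress: in a VPC the only required choice is the domain.
allocate_params = {"Domain": "vpc"}

# AssociateAddress: tie the allocation to the primary web server.
# Both IDs below are placeholders, not real identifiers.
associate_params = {
    "InstanceId": "i-0abc123def456example",  # your Web Server instance ID
    "AllocationId": "eipalloc-0123example",  # returned by AllocateAddress
}
print(associate_params["AllocationId"])
```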

Steps for the task: To reassociate the Elastic IP address with the replica server “WebServer02” and configure it for the dinostore website running on the secondary web server

In this task, I will disassociate that Elastic IP address and reassociate it with the AMI instance “WebServer02“. Then I will test the dinostore website to check whether it is working.

  1. Go to the Action menu from the Elastic IP console page. Choose “Disassociate Address” and click it. The Elastic IP address will be unmapped from the primary web server. 10.jpg11.jpg12.jpg
  2. Go to the Action menu again. Click “Associate“. This time you will associate it with your AMI instance “WebServer02“. 13.jpg1415.jpg
  3. Now open your website with the public DNS address and replace it with the Elastic IP, which is now associated with the AMI instance server. 16.jpg17.jpg

After completing all of the above tasks, you will successfully be able to host your website on an Elastic IP address. While implementing this module, I did not face any problems. I hope you will be able to deploy it smoothly.

Remember that when you allocate an Elastic IP address, it’s for use only in a VPC, and it remains associated if you stop an instance. By default, all AWS accounts are limited to 5 Elastic IP addresses per region, because public IPv4 Internet addresses are a scarce public resource.

In the next module I will discuss “Using ELB to Scale Applications”.

Thank you  🙂

Module 6: Creating and using AMIs

An Amazon Machine Image (AMI) is a template for the root volume of an instance, containing, for example, an operating system, an application server, and applications. It provides the information required to launch an instance, which is a virtual server in the cloud, including:

  • A block device mapping that specifies the volumes to attach to the instance when it’s launched

Benefits and features 

  • The benefit of an AMI is that you can launch as many instances as you need, from a single AMI or from as many different AMIs as you need.
  • You can define launch permissions that control which AWS accounts can use the AMI to launch instances.
  • You can also specify a block device mapping of the volumes to attach to the instance while it is launching.  (Source: docs.aws.amazon.com)

Task list for module 6: 

  • To create a new image (template) of the current Web Server instance
  • To launch EC2 AMIs of the Web Server and queue server instances
  • To configure the resources for the WebServer02 read replica of the original Web Server

The architecture of the project:diagram5.png

Steps for the task: To create a new image (template) of the current Web Server instance 

In this task, I will demonstrate how to create an AMI from the original server. You can launch as many instances as you want from your virtual machine template. An AMI helps you create a read-only replica server with the same configuration as the production server. It helps maintain load balancing and a high-performance execution environment for applications running on Amazon EC2, and it helps control failover and prevent data loss.

  1. To create an image file, go to the EC2 console from the Services menu in the AWS management tools. Right-click on your web server EC2 instance, select “Image” and click “Create Image“. 1.jpg
  2. Define the name and description for your new image. As this is your Dinostore Webserver AMI, give it a meaningful name so that you can easily identify which AMI it is! In my case, I gave the image name “DinoStoreWebServer” and the description “Image of DinoStore website VM”. Then click “Create Image”. Your AMI will be ready in a minute. 2.jpg3.jpg
  3. Now you need to create another AMI for your queue server. Follow the same procedure as for the web server AMI. In this case, image name: “DinoQueueServer”, and image description: “Image of DinoStore queue server VM”. 4.jpg5.jpg6.jpg
  4. Now go to the “IMAGES” section of the EC2 dashboard in the EC2 management console. Click “AMIs“. You can now view your two newly created images. 7.jpg8.jpg
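Steps 1-3 correspond to two CreateImage requests, one per server; a sketch with placeholder instance IDs:

```python
# One CreateImage request per server; the InstanceId values are
# placeholders, not real identifiers.
images = [
    {
        "InstanceId": "i-0web0000example",
        "Name": "DinoStoreWebServer",
        "Description": "Image of DinoStore website VM",
    },
    {
        "InstanceId": "i-0queue00example",
        "Name": "DinoQueueServer",
        "Description": "Image of DinoStore queue server VM",
    },
]
for image in images:
    print(image["Name"])
```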

Steps for the task: To launch the EC2 AMIs of Web Server and queue server instances

In these steps, we will see how to launch the EC2 AMIs to create the read-only replica servers. The configuration of the original servers and the replica servers will be the same.

  1. To launch an EC2 instance from the Web Server AMI, click on “DinostoreWebServer” and select “Launch“. 9.jpg
  2. Choose the instance type = General purpose, t2.micro, and click “Next: Configure Instance Details”. 10.jpg
  3. In the subnet section, if you want to spread your Web Server instances across Availability Zones, choose “No Preference”, or pick your own preference; in my case, I chose my own. Set IAM role = “WebServerRole”, which we made previously. 10.jpg
  4. Add storage; leave it at the default if you don’t want to change it. Then click “Add Tag“ and give it the name “WebServer02”. 11.jpg12.jpg
  5. Click “Next: Configure Security Group”. Select the existing security group WebRDPGroup. 13.jpg
  6. Click “Review and Launch“. After reviewing, click “Launch“. You will get the key pair wizard, where you can choose a new key pair or an existing one. In my case, I chose my existing key pair. 14.jpg15.jpg
  7. Click “Launch Instances“. Your instance will be ready in a minute. 16.jpg
  8. You can now view the instance in the EC2 instances dashboard. 17.jpg
  8. You can now view the instance from the EC2 instances Dashboard. 17.jpg

Steps for the task: To configure the resources for the WebServer02 read replica of the original Web Server

In this task, I will show how to configure the DNS string for the dinostore website on the replica server. You will see your Dinostore website served from the second instance, and you can also view the replica server’s IP address. The whole configuration will be a mirror of the production Web Server.

  1. Connect to your replica server “WebServer02” by clicking “Connect”. Give the right credentials to the remote connection. Note: you will not get the “Get Password” retrieval option here; you have to give your production web server’s password, because this is a replica of that server. 18.jpg19.jpg
  2. The website from the original Web Server: 20.jpg
  3. The website from the replica Web Server. Notice the private IP address at the top left of the webpage. The two pages show different private IP addresses, which means our website is running on both servers successfully. 21.jpg

This module is easy compared to our previous modules. I hope you will not face any problems while deploying your AMI instance for the replica server. A replica server is essential because it makes the system highly available to end users, and it also provides high performance by sharing the load.

In the next module I will discuss “Using Elastic IPs”.

Thank you  🙂

Module 5: Adding EC2 Virtual Machines and Deploying the WebApp

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. That means developers can increase or decrease capacity within minutes, and the application can automatically scale itself up and down depending on its needs, because it is controlled with web service APIs.

You pay only for the capacity that you actually use; as a result, Amazon EC2 changes the economics of computing. Amazon EC2 also provides developers with the tools to build failure-resilient applications and isolate them from common failure scenarios.

Task list for module 5: 

  • To create a policy for DynamoDB and SQS using IAM services, and a role for the EC2 instance
  • To create Amazon EC2 instances for the web server and queue server
  • To connect remotely to the Web Server and install IIS, ASP.NET 4.5 (including developer features), HTTP connectors, and the Windows Authentication role service
  • To publish the Dinostore application and move it from the local machine to the AWS cloud platform

The architecture of the project:diagram5

Steps for the task: To create a policy for DynamoDB and SQS using IAM services, and a role for the EC2 instance

In this task, you will create roles and policies for accessing Amazon services using Amazon IAM, so that applications running on EC2 instances don’t have to have credentials baked into the code.

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. IAM features to securely give applications that run on EC2 instances the credentials that they need in order to access other AWS resources, like S3 buckets and RDS or DynamoDB databases. You can grant other people permission to administer and use resources in your AWS account without having to share your password or access key.

  1. In the IAM dashboard, go to Policies and click “Create Policy“. 1.jpg
  2. Create a policy by selecting “Policy Generator“. You need to create a policy that allows the DynamoDB actions DeleteItem, DescribeTable, GetItem, PutItem, and UpdateItem. 2.jpg
  3. Add another statement for the Amazon SQS actions DeleteMessage, DeleteMessageBatch, GetQueueUrl, ReceiveMessage, SendMessage, and SendMessageBatch. 3.jpg
  4. After clicking Next Step, set the policy name on the policy review page. Click “Create Policy” to create the DynamoSqsPolicy. Note: a policy name must contain only alphanumeric characters and/or the following: +=,.@-_ 4.jpg5.jpg
  5. Now you need to create a role for Amazon EC2 Instance, and this role will be attached to the newly created policy. Go to the “Amazon IAM” dashboard and click “Roles“. Then click “Create new role“.6.jpg
  6. Click “Next Step“. Select Role type = Amazon EC2.7.jpg
  7. Now we will attach our newly created policy under the new role “WebServerRole“. Select the attach policy “DynamoSqsPolicy” and then click “Next Step”. 8.jpg
  8. Now review the policy and click “Create role”.9.jpg10.jpg
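The policy that the generator produces in steps 2-4 looks like the JSON below (standard IAM policy grammar; `Resource` is left as `"*"` here for brevity — scoping it to your table and queue ARNs is better practice):

```python
import json

# Sketch of the "DynamoSqsPolicy" document; "Resource": "*" is a
# simplifying assumption, not what a production policy should use.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:DescribeTable",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
            ],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": [
                "sqs:DeleteMessage",
                "sqs:DeleteMessageBatch",
                "sqs:GetQueueUrl",
                "sqs:ReceiveMessage",
                "sqs:SendMessage",
                "sqs:SendMessageBatch",
            ],
            "Resource": "*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Attaching this policy to the “WebServerRole” EC2 role is what lets the web and queue servers reach DynamoDB and SQS without credentials baked into the code.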

Steps for the task: To create Amazon Ec2 instance for web server and queue server

In this step, I will create the two new instances for the web server and queue server. I will configure the operating system Windows Server 2012 R2 for both instances.

  1. Go to EC2 from Services Menu in the Amazon Management Console. Click “Launch Instance”. Select “Microsoft Windows Server 2012 R2 Base” as Amazon Machine Image(AMI) and click on select.11.jpg12.jpg
  2. Put the IAM role that we made recently, “WebServerRole”, on this new EC2 instance while configuring it. 13.jpg
  3. In this step, you need to configure a security group for your EC2 instance of Web server which will allow or deny inbound and outbound traffic in your network.14.jpg
  4. Your new EC2 instance for the web server will be ready in a minute. Now you need to create another instance to handle your queue server. The queue server will run the order processor application, which will poll messages from the queue and store them in the database. 16.jpg
  5. To create a new queue server, follow the previous same steps of the EC2 instance that we created for the web server. The only difference is the security group. You need to configure new security group for the Queue Server because this security will handle a different kind of traffic for inbound and outbound.1718.jpg
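To make the security-group difference concrete, here is a toy sketch of the two inbound rule sets. The port choices are illustrative assumptions (RDP on 3389 for both machines, HTTP/HTTPS only on the web server; the queue server mostly makes outbound calls to SQS and MySQL), not a statement of what your groups must contain:

```python
# Hypothetical inbound rule sets for the two instances; adjust to your needs.
WEB_SERVER_INBOUND = {80, 443, 3389}   # HTTP, HTTPS, RDP
QUEUE_SERVER_INBOUND = {3389}          # RDP only

def allows(rules, port):
    """Return True if the inbound rule set permits traffic on the given port."""
    return port in rules

print(allows(WEB_SERVER_INBOUND, 80))    # web server accepts HTTP
print(allows(QUEUE_SERVER_INBOUND, 80))  # queue server does not
```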

Steps for the task: To connect remotely to the web server and install IIS, ASP.NET 4.5 (including development features), HTTP connectors, and the Windows Authentication role service

In this task, I will demonstrate how to log in to the EC2 web server from your local machine, and install the IIS, ASP.NET 4.5 (including development features), HTTP connector, and Windows Authentication role service components.

  1. From the EC2 dashboard, click Instances and select the “dinostore webserver” instance. Then click Connect. You will get the “Connect to your Instance” wizard. Click “Download Remote Desktop File”, and in “Get Password“, upload your key pair and click “Decrypt Password” to get the password for logging in to the web server machine.19.jpg
  2. Now log in to your web server instance via an RDP connection with the decrypted password. 71.jpg
  3. In this step, we will install Web Server (IIS), ASP.NET 4.5, and Windows Authentication using the Windows Server 2012 R2 “Add Roles and Features” wizard.2223.jpg24.jpg5354.jpg

Steps for the task: To publish the DinoStore application and move it from the local machine to the AWS cloud platform

In this task, I will show you how to publish the DinoStore project to a folder on the file system, copy the published files to the web server over RDP, and expose the local drive (via the RDP settings) where the published web files are stored.

  1. Go to Microsoft Visual Studio and open your DinoStore project. Right-click “NET701 Dinostore” in the Solution Explorer and select “Publish“.25.jpg26.jpg
  2. Now select the publish method “File system” and give a specific target location in the connection setup. Note: Make sure you expose the drive (via the RDP settings) where you stored the published web files.27.jpg28.jpg
  3. After your files are published, copy them to the web server instance.29.jpg
  4. IIS serves content from the wwwroot directory, so paste the published folder into “C:\inetpub\wwwroot” on the web server.30
  5. Go to Windows IIS. In IIS, right-click your newly copied folder under “C:\inetpub\wwwroot“ and select ‘Convert to Application’.31.jpg68.jpg
  6. Now delete the access key and secret key from the Web.config file in your source code; it is not a good idea to put access keys in code. If you do, anyone who obtains them can access your confidential data, so hard-coded credentials are a security vulnerability. 33.jpg
  7. Paste the following link into the browser on the web server: (http://169.254.169.254/latest/meta-data/iam/security-credentials/WebServerRole ). Notice whether your code is able to obtain temporary access keys automatically.35.jpg
  8. On your web server in the cloud, open IIS Manager. Highlight your website in the Connections pane and go into ‘Content View’. Then right-click ‘Default.aspx‘ and browse. Your website should now be running on your server.40.jpg41.jpg57.jpg
  9. Test the various aspects: inspect elements (they should show an S3 source); add an item to your cart (this uses DynamoDB).60.jpg61.jpg45.jpg
  10. Now we will test the website over the Internet. Copy the public DNS string of your web server and paste it into the browser on your local machine. Add ‘/YOUR WEBSITE NAME/’, e.g. /NET702.DinoStore/, to the generic URL and you should see your DinoStore website. Note the IP of the server at the top left – it matches the internal address of your web server instance.62.jpg63.jpg
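Step 7 above relies on the instance metadata service handing out temporary credentials for WebServerRole, which is why the keys can be removed from Web.config. A minimal sketch of reading them in Python: the field names AccessKeyId, SecretAccessKey, Token, and Expiration are the documented metadata response fields, while the fetch only works from inside the EC2 instance itself.

```python
import json
import urllib.request

METADATA_URL = ("http://169.254.169.254/latest/meta-data/"
                "iam/security-credentials/WebServerRole")

def parse_credentials(doc: str) -> dict:
    """Extract the temporary keys from the metadata service's JSON response."""
    data = json.loads(doc)
    return {
        "access_key": data["AccessKeyId"],
        "secret_key": data["SecretAccessKey"],
        "token": data["Token"],
        "expires": data["Expiration"],
    }

def fetch_credentials() -> dict:
    """Fetch and parse the role credentials; only callable on the instance."""
    with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
        return parse_credentials(resp.read().decode())
```

The AWS SDKs perform this lookup automatically when no keys are configured, which is what step 7 verifies.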

Now I will show how to move the queue server to AWS. First of all, publish the NET702 Order Processor project file from Visual Studio. The procedure is the same as for NET702 DinoStore.

  1. Go to Visual Studio and switch the build configuration to “Release” before publishing, from the toolbar at the top.46.jpg
  2. Now publish NET702.DinoStore.OrderProcessor by right-clicking it in the Solution Explorer.47.jpg51.jpg
  3. After publishing, copy the published files from your local machine to any location on the queue server. Then create a shortcut to setup.exe and copy the shortcut to the C:\ProgramData\Microsoft\Windows\Start Menu\Programs location on your queue server. Note: In general, this order processor would be set up as a Windows service, but in this case we are just running it as an application at startup.48.jpg49.jpg69.jpg
  4. In this step, run the application. In MySQL, connect to the AWS DinoStore database and check your orders table; it may still contain orders from earlier.55.jpg
  5. Now open your cloud website, add some dinosaurs to your cart, and check out with your card details. Note: Check whether your RDP session to the web server is still active. As you see in the following snapshot, one session is ready when I add the item to my cart.64.jpg
  6. If you check your queue application, you will see it fetch your request message, store the data in the items table of your MySQL database, and delete the message from the queue. You should see ‘Queue messages received: count is 1’ and then the ‘Queue message(s) deleted’ lines come up on the console.66.jpg
  7. Now you can see that the details of the recent sale (checkout) were written to the database automatically. Execute a SELECT query on the orders table of your MySQL database. 67

Congratulations! After completing this module, your locally developed website has been moved to the cloud. It is now publicly accessible from any machine in the world with an Internet connection.

Problem and solution:

Problem 1: If you come across the error shown in the following snapshot, follow these steps.36.jpg

Solutions: You need to install “ASP.NET 4.5” and “.NET Extensibility” via Add Roles and Features on your Windows Server 2012 R2 web server. You also need to activate session state in Internet Information Services (IIS) Manager; this resolves session-related errors when you run your website in the cloud for the first time.38.jpg

Problem 2: If your order processing application fails to pull messages from the queue and delete them, or you get the runtime error shown in the following screenshot, check the following solution.72.jpg

Solution: Check the inbound rules of the security groups for both the web server and the queue server. Define inbound rules that allow the traffic the two servers need to exchange with each other. 70.jpg

You can watch the LAB 5 screencast on YouTube.

In the next module I will discuss “Creating and Using AMIs”.

Thank you 🙂

Module 4: Configuring the System to Use Simple Queue Service

Amazon Simple Queue Service (SQS) is a reliable, scalable, fully managed message queuing service that enables reliable communication among distributed software components and microservices at any scale. It suits modern application design, in which each component performs a discrete function, improving reliability and scalability. SQS makes it simple and cost-effective to decouple and coordinate the components of a cloud application.

You can use Amazon SQS to send, store, and receive messages between application components at any volume and any level of throughput, without losing messages or requiring other services to be available.

SQS provides two types of queue: Standard and FIFO. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order they are sent, but with limited throughput.

The Benefit of SQS:  With SQS, there is no upfront cost, no need to acquire, install, and configure messaging software, and no time-consuming build-out and maintenance of supporting infrastructure.

SQS lets you decouple application components so that they run and fail independently, increasing the overall fault tolerance of the system.

Task list for the module 4: 

  • To create a queue in Amazon SQS
  • To test the queue by sending a message via Queue Actions
  • To edit and change the application source code
  • To add orders to the cart, check out, and test the queue from the SQS service
  • To add a new application to the project that can pull the orders from the queue

The architecture of the project:

diagram6

Steps for the task: Creating a queue in Amazon SQS

In this task, I will demonstrate how to create a queue using SQS in the Amazon cloud platform. The queue will hold order messages for the application system, decoupling the web application from the database.

  1. Go to Amazon Simple Queue Service (SQS) from the Services menu in the Amazon Management Console. Click “Get Started Now” to create a queue. Name the new queue “dinoorders“. Make sure you’ve chosen the right region.1.jpg
  2. Choose Standard queue and click “Configure Queue“. Leave the queue attributes at their default values. Then click “Create Queue“.2.jpg3.jpg
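Queue names like “dinoorders” have to follow the SQS naming rules: up to 80 characters; letters, digits, hyphens, and underscores only; and FIFO queue names must end in the .fifo suffix. A small sketch of that check:

```python
import re

def valid_queue_name(name: str, fifo: bool = False) -> bool:
    """Check a name against the SQS queue-naming rules:
    1-80 characters; letters, digits, hyphens, underscores;
    FIFO queue names must additionally end with '.fifo'."""
    if not 1 <= len(name) <= 80:
        return False
    if fifo:
        if not name.endswith(".fifo"):
            return False
        name = name[:-len(".fifo")]
    return re.fullmatch(r"[A-Za-z0-9_-]+", name) is not None

print(valid_queue_name("dinoorders"))   # the queue name used in this module
print(valid_queue_name("dino orders"))  # spaces are not allowed
```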

Steps for the task: Testing the queue by sending a message in Queue Actions

A queue has been created. Now I will show you how to test whether the newly created queue is working by sending a message.

  1. From the Queue Actions menu, select “Send a Message“.4.jpg5.jpg
  2. You will get the following wizard after sending your message.6.jpg
  3. Now refresh the queue screen and check that your message is available. I can see that one (1) message is available in my dinoorders queue.7.jpg
  4. View the received message by right-clicking the queue and selecting the View/Delete Messages option.8
  5. After clicking the View/Delete Messages option, you will get a wizard called “View/Delete Messages in dinoorders”. Now I will pull the message by clicking “Start Polling for Messages” on this screen.9.jpg
  6. Now you can view the exact message you sent previously. Polling fetches the messages that have been sent to the queue and are still in it.10.jpg
  7. After completing your test successfully, delete the test message and stop polling.11.jpg
  8. After deleting the message, you can see that no messages are available in the dinoorders queue.12.jpg
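The console test above exercises the same send → poll → delete cycle that the order processor will later perform in code. As a purely local illustration of those queue semantics (a plain in-memory stand-in, not the AWS API):

```python
from collections import deque

class ToyQueue:
    """A minimal in-memory stand-in mimicking the SQS send/poll/delete cycle."""

    def __init__(self):
        self._messages = deque()

    def send(self, body: str) -> None:
        """Append a message, as 'Send a Message' does in the console."""
        self._messages.append(body)

    def poll(self):
        """Return all visible messages without removing them,
        like 'Start Polling for Messages'."""
        return list(self._messages)

    def delete(self, body: str) -> None:
        """Explicitly remove a message, as in View/Delete Messages."""
        self._messages.remove(body)

q = ToyQueue()
q.send("hello dinoorders")    # step 1: send a test message
print(q.poll())               # steps 5-6: polling shows the message
q.delete("hello dinoorders")  # step 7: delete the test message
print(q.poll())               # step 8: the queue is empty again
```

Note that, as in SQS, polling does not remove a message; deletion is a separate, explicit step.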

Steps for the task: Editing and changing the application source code

In this task, I will make some changes to the application source code in NET702.DinoStore -> secure -> Checkout.aspx.cs. I will insert new code that places the order information into the queue, and remove the previously existing code, which does not scale because it inserts the order information directly into the database. The replacement code will help the system handle bursts.

  1. Open the NET702.DinoStore project in Microsoft Visual Studio and go to the Checkout.aspx.cs file. 13.jpg
  2. Now remove the code from ‘protected void btnPurchase_Click(object sender, EventArgs e)’ down to and including ‘Response.Redirect(“PaymentConfirmation.aspx”);’ (lines 66 to 126 inclusive) in Checkout.aspx.cs. 14.jpg
  3. Replace the deleted code with the following:

protected void btnPurchase_Click(object sender, EventArgs e)
{
    // get shopping cart
    ShoppingCart cart = Session.Contents["cart"] as ShoppingCart;
    // get user id
    MembershipUser user = Membership.GetUser();
    string userId = user.ProviderUserKey.ToString();
    // add payment info
    cart.CcType = ddlCcType.SelectedItem.Text;
    cart.CcNumber = txtCcNumber.Text;
    cart.CcExpiration = txtExpire.Text;
    // create message for queue
    using (AmazonSQSClient client = new AmazonSQSClient())
    {
        var jsoncart = Newtonsoft.Json.JsonConvert.SerializeObject(cart);
        SendMessageRequest request = new SendMessageRequest();
        request.QueueUrl = "YOUR QUEUE URL HERE";
        request.MessageBody = jsoncart;
        SendMessageResponse response = client.SendMessage(request);
    }
    // clear out cart
    cart.Items.Clear();
    Session["cart"] = cart;
    // send user to confirmation page
    Response.Redirect("PaymentConfirmation.aspx");
}

15.jpg
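The key idea in the replacement code is serialising the whole cart to JSON and sending that string as the message body. The same step, sketched in Python with hypothetical cart fields mirroring the C# properties:

```python
import json

def cart_message(cart: dict) -> str:
    """Serialise the shopping cart to a JSON string for the queue body,
    as JsonConvert.SerializeObject does in the C# code above."""
    return json.dumps(cart)

# Hypothetical cart contents for illustration only.
cart = {"CcType": "Visa", "CcNumber": "4111111111111111",
        "CcExpiration": "12/19", "Items": [{"name": "T-Rex", "qty": 1}]}
body = cart_message(cart)
print(body)
restored = json.loads(body)          # the order processor reverses this step
print(restored["Items"][0]["name"])  # -> T-Rex
```

Because the message body carries everything the order processor needs, the web tier no longer has to touch the database at checkout time.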

4. In this step, I will add the following new ‘using’ statements under the existing set:

using Amazon.SQS;
using Amazon.SQS.Model;
using Newtonsoft.Json;

These ‘using’ directives import the namespaces whose classes and functions the new code calls. 16.jpg

5. Now I will install a NuGet package in the references of the DinoStore project. The Json.NET package allows the shopping cart to be serialised as a JSON object and added to the queue; it provides the Newtonsoft.Json functions called from Checkout.aspx.cs.17.jpg

6. Search for the Json.NET package on the Browse tab, select Newtonsoft.Json, and install it.18.jpg19.jpg20.jpg

7. Now go to the web.config file to add your credentials, such as the region, access key, and secret key. You will get these credentials from your AWS account. Insert the following code after ‘ValidationSettings: UnobtrusiveValidationMode’:

21.jpg

Note: Remember that we will remove these credentials from our source code when we move to the cloud server. We will use AWS IAM roles instead, because it is not good practice to put credentials in code.
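The screenshot above shows the credential entries; with the classic AWS SDK for .NET they are appSettings keys along these lines. This is a hedged sketch only: the exact key names can vary by SDK version, and the values here are placeholders.

```xml
<appSettings>
  <add key="AWSRegion" value="YOUR_REGION" />
  <add key="AWSAccessKey" value="YOUR_ACCESS_KEY" />
  <add key="AWSSecretKey" value="YOUR_SECRET_KEY" />
</appSettings>
```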

Steps for the task: To add orders to the cart, check out, and test the queue from the SQS service

In this task, I will process an order by logging in with my user ID. I will add items to the cart, check out, and buy some dinosaurs.

  1. Log in to your website with your user ID. 22
  2. Now add some items to the cart, check out, and click “Buy” for the items.23.jpg24.jpg25.jpg

If you check the SQS dinoorders queue in your cloud system, you will see messages waiting in the queue.26.jpg27.jpg

Queuing the order requests helps balance the load from the web application side: SQS acts as a buffer and reduces real-time load on the database.

Adding and buying dinosaurs from the website shows that all the configuration and changes I’ve made so far are working properly.

Steps for the task: To add a new application to the project that can pull the orders from the queue

In this task, I will add an order processor project in Visual Studio. This application services the queue: whenever a request message arrives from the web application, it polls and processes the message automatically in the background. The benefit is that you do not have to go to the SQS console to handle the queue manually, which improves the system’s processing performance.

I will now make some changes to my application source code to demonstrate this process.

  1. To add NET702.DinoStore.OrderProcessor to your solution, go to the Solution Explorer, right-click, choose Add -> Existing Project, and give the location of the file. The location of my file is: (E:\NET702orginal\NET702.DinoStore.OrderProcessor)21.jpg
  2. Now insert your dinoorders queue URL into the NET702.DinoStore.OrderProcessor ‘Program.cs’ code, so that your application can establish the connection with the SQS service in the AWS cloud. You have to put this URL in two places in the code: “request.QueueUrl” and “batchRequest.QueueUrl”. 34.jpg
  3. Now open the App.config file of the OrderProcessor project from the Solution Explorer.5.jpg
  4. Add the AWSRegion, access key, and secret key values in the “add key” entries. Change the StoreSqlDb connection string to your “dinostoreinstance” endpoint, and add your DB username and password to that line.6.jpg
  5. After completing all those changes, right-click the NET702.DinoStore.OrderProcessor project and select ‘Set as Startup Project’.7.jpg
  6. Right-click NET702.DinoStore.OrderProcessor in the Solution Explorer, choose Properties -> Signing, and untick ‘Sign the ClickOnce manifests’.Inked9_LI
  7. To run the “NET702.DinoStore.OrderProcessor” project in Visual Studio, first build the solution; otherwise you will get an error that the program cannot find the .exe file. After “Build Solution” completes, click “Start” at the top of Visual Studio.11.jpg
  8. When you click “Start” in Visual Studio, the application runs as in the following screenshot. Note: I had already bought some dinosaurs from my website with my user credentials, which placed a purchase request message in the queue. The application shows that a request has been received with a total count, and the message is then automatically deleted from the queue. After deleting the queue message, it starts polling again to fetch the next message. quue app
  9. We can also view the order details in the database through MySQL Workbench. This confirms that the running application polled the message from the queue, stored the data in the database, and deleted the request message from the queue automatically.12.jpg

Problem and solution:

When you implement module 4, you may come across different kinds of problems. Here are the problems I faced, with solutions.

Problem 1: When you debug your code after editing the Checkout.aspx.cs file, you may see the following error: CS0103: The name ‘Newtonsoft’ does not exist in the current context.error-1.jpgerror-2.jpg

Solution: To solve this error, install “Newtonsoft.Json” from “Manage NuGet Packages“. After installing Newtonsoft.Json, the problem should be solved. If you still get the above error, go to “References“, right-click, and choose Add Reference; in the Reference Manager wizard, add Json.NET (.NET 4.0). Your problem will be solved.Untitled.jpgerror2-solve.jpg

Problem 2: If you get the following configuration error, follow the solution below.error.jpg

Solution: Install the “AWS Session Provider” package from “Manage NuGet Packages“. This resolves the session-related problems.error-solve.jpg

Problem 3: If you get the following error after “Build Solution” or while debugging your code, follow the solution below.8.jpg

Solutions: Open the OrderProcessor project properties, go to the Signing tab, and untick “Sign the ClickOnce manifests”. The above problem will be solved.

Inked9_LI.jpg

Problem 4: If you get a message during polling through your running application such as “dinostoremembershipdb.orders table not found” or “dinostoremembershipdb.items table doesn’t exist”, see the solution below.

Solution: Add the correct schema name before the table name in the INSERT query in Program.cs. The problem will be solved, and your running application will be able to fetch messages without throwing exceptions.polling error.jpg

Other errors you may encounter while configuring module 4:

Error 1: The type or namespace name ‘Amazon’ could not be found (are you missing a using directive or an assembly reference?)

Analysis: The compiler is unable to find the definition of “Amazon” in the available resources when building the project (NET702.DinoStore.OrderProcessor). The hint here is the name “Amazon”. In the previous lab (Lab 3), we were instructed to make sure that a package (the AWS SDK for .NET) was installed in our environment. Since AWS stands for Amazon Web Services, we can infer that the AWS SDK package must contain the definition of the namespace “Amazon”. Verify that the NET702.DinoStore.OrderProcessor project has the AWS SDK package by right-clicking it and choosing “Manage NuGet Packages”. In the newly opened tab, type AWSSDK to search for the package we are looking for. In my case, an older version of the package was installed but a newer version was already available; the namespace “Amazon” was not yet defined in the installed AWSSDK package, so we need to upgrade to the newer version. After installing the latest version of AWSSDK, build the project again and the Amazon-related error should be resolved.

Solution: Install the latest AWSSDK package. Note that this will solve similar errors related to this package.

Error 2: ‘ReceiveMessageRequest’ does not contain a definition for ‘AttributeName’ and no extension method ‘AttributeName’ accepting a first argument of type ‘ReceiveMessageRequest’ could be found (are you missing a using directive or an assembly reference?)

Analysis: As mentioned in the error description, ‘ReceiveMessageRequest’ does not contain a definition for ‘AttributeName’. So we need to inspect the definition of ‘ReceiveMessageRequest’ to verify whether it has a member named ‘AttributeName’. To get to the definition of ‘ReceiveMessageRequest’: 1. First, double-click the error at the bottom of Visual Studio. This will redirect you to the line where the error was detected. 2. Now right-click ‘ReceiveMessageRequest’ and select Go To Definition. In the definition, we can observe that the ‘ReceiveMessageRequest’ class has a member named ‘AttributeNames’, not ‘AttributeName’. This means that the code in Program.cs is incorrect.

Solution: Modify request.AttributeName to request.AttributeNames — note the trailing ‘s’.

Error 3: ‘ReceiveMessageResult’ does not contain a definition for ‘Message’ and no extension method ‘Message’ accepting a first argument of type ‘ReceiveMessageResult’ could be found (are you missing a using directive or an assembly reference?)

Solution: Modify response.ReceiveMessageResult.Message.Count to response.ReceiveMessageResult.Messages.Count — ‘Messages’ needs the trailing ‘s’.

In the next module I will discuss “Adding EC2 Virtual Machines and Deploying the WebApp”.

Thank you 🙂