Question : QuickTechie.com Inc is planning to set up a management network in its AWS VPC. It wants to secure the web server running on a single VPC instance so that it accepts both internet traffic and back-end management traffic. The QuickTechie admin wants the back-end management network interface to receive SSH traffic only from a selected IP range, while the internet-facing web server has an IP address that can receive traffic from all internet IPs. How can QuickTechie.com achieve this while running the web server on a single instance?
1. The organization should create two network interfaces with the same subnet and security group to assign separate IPs to each network interface.
2. The organization should create two network interfaces with separate subnets, so one instance can have two subnets and the respective security groups for controlled access.
3. The organization should launch an instance with two separate subnets using the same network interface, which allows it to have a separate CIDR as well as security groups.
4. It is not possible to have two IP addresses for a single instance.
Explanation: You can create a management network using network interfaces. In this scenario, the secondary network interface on the instance handles public-facing traffic and the primary network interface handles back-end management traffic; the primary interface is connected to a separate subnet in your VPC that has more restrictive access controls. The public-facing interface, which may or may not be behind a load balancer, has an associated security group that allows access to the server from the Internet (for example, allow TCP ports 80 and 443 from 0.0.0.0/0, or from the load balancer), while the private-facing interface has an associated security group allowing SSH access only from an allowed range of IP addresses, whether from within the VPC (such as a private subnet), from the Internet, or through a virtual private gateway.
To ensure failover capabilities, consider using a secondary private IP for incoming traffic on a network interface. In the event of an instance failure, you can move the interface and/or secondary private IP address to a standby instance.
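The effect of the two per-interface security groups can be illustrated with a small stdlib sketch. The interface names, rules, and CIDR ranges below are hypothetical placeholders (using documentation IP ranges), not values from the scenario:

```python
import ipaddress

# Hypothetical security-group rules for the two network interfaces.
# Public-facing interface: HTTP/HTTPS open to the whole Internet.
# Management interface: SSH allowed only from a selected admin range.
RULES = {
    "public-eni":     [("tcp", 80, "0.0.0.0/0"), ("tcp", 443, "0.0.0.0/0")],
    "management-eni": [("tcp", 22, "203.0.113.0/24")],  # example admin range
}

def allowed(interface, proto, port, source_ip):
    """Return True if any inbound rule on the interface admits this packet."""
    for r_proto, r_port, r_cidr in RULES[interface]:
        if (r_proto == proto and r_port == port
                and ipaddress.ip_address(source_ip) in ipaddress.ip_network(r_cidr)):
            return True
    return False

print(allowed("public-eni", "tcp", 80, "198.51.100.7"))      # True  - web open to all
print(allowed("management-eni", "tcp", 22, "203.0.113.10"))  # True  - SSH from admin range
print(allowed("management-eni", "tcp", 22, "198.51.100.7"))  # False - SSH from elsewhere
```

Because each network interface carries its own security group, the same instance can expose a public web endpoint while keeping SSH reachable only from the selected range, which is why option 2 satisfies the requirement.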
Question : HadoopExam.com has set up an application on AWS and wants to achieve scalability and HA for the application. The application should scale up and down when there is a higher / reduced load on it. Which of the below mentioned configurations is not required to be performed in this scenario?
1. Setup ELB with instances to distribute the load on the web server.
2. Setup a schedule to shut off the instance when the instance is not in use.
3. Setup bootstrapping to start the web and DB servers on instance boot.
4. Create an AMI of a running instance and configure that AMI with AutoScaling.
Explanation: AWS EC2 allows the user to launch On-Demand instances. AutoScaling offers automation which can scale resources up or down as per the configured policy. To set up AutoScaling, the organization must first create an AMI. The organization should set up bootstrapping with the AMI so that whenever the instance starts it will automatically start the web and DB servers. The organization should also set up ELB with instances to distribute the incoming load. AutoScaling should be configured to scale up and down based on the application load and not on a particular schedule, so a schedule to shut off instances is not required. Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic.
Available : Achieve higher levels of fault tolerance for your applications by using Elastic Load Balancing to automatically route traffic across multiple instances and multiple Availability Zones. Elastic Load Balancing ensures that only healthy Amazon EC2 instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. If all of your EC2 instances in one Availability Zone are unhealthy, and you have set up EC2 instances in multiple Availability Zones, Elastic Load Balancing will route traffic to your healthy EC2 instances in those other zones.
Elastic : Elastic Load Balancing automatically scales its request handling capacity to meet the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto Scaling to ensure that you have back-end capacity to meet varying levels of traffic without requiring manual intervention.
Secure : Elastic Load Balancing works with Amazon Virtual Private Cloud (VPC) to provide robust networking and security features. You can create an internal (non-internet-facing) load balancer to route traffic using private IP addresses within your virtual network. You can implement a multi-tiered architecture using internal and internet-facing load balancers to route traffic between application tiers. With this multi-tier architecture, your application infrastructure can use private IP addresses and security groups, allowing you to expose only the internet-facing tier with public IP addresses. Elastic Load Balancing provides integrated certificate management and SSL decryption, allowing you to centrally manage the SSL settings of the load balancer and offload CPU-intensive work from your instances.
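The load-based (rather than schedule-based) scaling decision described above can be sketched as a minimal stdlib function. The thresholds and capacity bounds are invented for illustration and loosely resemble a target-tracking style policy:

```python
# Minimal sketch of load-based scaling: add capacity under higher load,
# remove it under reduced load, and stay put within the target band.
# Thresholds, minimum, and maximum are hypothetical example values.
def desired_capacity(current, avg_cpu, scale_out_at=70.0, scale_in_at=30.0,
                     minimum=2, maximum=10):
    """Return the new instance count for the given average CPU utilization."""
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)   # scale up, capped at maximum
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)   # scale down, floored at minimum
    return current                         # load is in range: no change

print(desired_capacity(4, 85.0))  # 5 - scale up under higher load
print(desired_capacity(4, 10.0))  # 3 - scale down under reduced load
print(desired_capacity(4, 50.0))  # 4 - steady within the target band
```

Because the decision is driven entirely by measured load, no shutdown schedule enters the picture, which is why option 2 is the configuration that is not required.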
Question : You can use Amazon Route 53 health checking and DNS failover features to
1. enhance the availability of the applications running behind Elastic Load Balancers
2. run applications in multiple AWS regions and designate alternate load balancers for failover across regions
3. Both 1 and 2
4. None of the above
Explanation: You can use Amazon Route 53 health checking and DNS failover features to enhance the availability of the applications running behind Elastic Load Balancers. Route 53 will fail away from a load balancer if there are no healthy EC2 instances registered with the load balancer or if the load balancer itself is unhealthy.
Using Route 53 DNS failover, you can run applications in multiple AWS regions and designate alternate load balancers for failover across regions. In the event that your application is unresponsive, Route 53 will remove the unavailable load balancer endpoint from service and direct traffic to an alternate load balancer in another region.
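The failover behavior described above can be sketched with a small stdlib example. The endpoint names and health flags are hypothetical, invented for illustration:

```python
# Sketch of Route 53-style health checking and DNS failover across regions.
# Endpoint DNS names and health attributes below are hypothetical.
def lb_is_healthy(lb):
    # Route 53 fails away from a load balancer when the load balancer itself
    # is unhealthy or it has no healthy registered instances.
    return lb["reachable"] and lb["healthy_instances"] > 0

def resolve(primary, secondary):
    """Return the DNS name a failover record set would answer with."""
    return primary["dns"] if lb_is_healthy(primary) else secondary["dns"]

us = {"dns": "elb-us-east-1.example.com", "reachable": True, "healthy_instances": 3}
eu = {"dns": "elb-eu-west-1.example.com", "reachable": True, "healthy_instances": 2}

print(resolve(us, eu))            # elb-us-east-1.example.com (primary healthy)
us["healthy_instances"] = 0
print(resolve(us, eu))            # elb-eu-west-1.example.com (failed over)
```

Since both single-region availability (failing away from an unhealthy load balancer) and cross-region failover (directing traffic to an alternate load balancer) are covered, option 3 is correct.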