Question : A web company wants to integrate an external payment service into their highly available application deployed in a VPC. The application EC2 instances are behind a public-facing ELB, and Auto Scaling adds instances as traffic increases. Under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale to 3x that size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses is allowed at a time, and addresses can be added through an API. How should they architect their solution?
1. Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances.
2. Whitelist the VPC Internet Gateway public IP and route payment requests through the Internet Gateway.
3. [option text missing in the source] … through the ELB.
4. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API.
Correct Answer : 1. Explanation: At peak load the Auto Scaling group grows to 3 × 2 = 6 instances, but only 4 IP addresses may be whitelisted at a time, which rules out whitelisting per-instance public IPs (option 4). The ELB's IP addresses change over time, so option 3 is out as well. Once the VPC routes outbound traffic through the NAT instances, all outbound traffic is attributed to the Elastic IPs attached to them, and those Elastic IPs are fixed, so only two addresses ever need to be whitelisted.
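To make the 4-address constraint concrete, here is a minimal sketch of the whitelist-planning step. The payment provider's API itself is hypothetical (the question only says addresses "can be added through an API"), so only the pure planning logic is shown; the function names and the example Elastic IPs are illustrative assumptions.

```python
# Sketch of the whitelist-update logic for the NAT-based design (option 1).
# Only the planning step is shown; the actual provider API call is not modeled.

MAX_WHITELIST = 4  # the provider allows at most 4 addresses at a time

def plan_whitelist(current: set[str], desired: set[str], limit: int = MAX_WHITELIST):
    """Return (to_remove, to_add) so the whitelist converges on `desired`
    without ever exceeding `limit` entries."""
    if len(desired) > limit:
        raise ValueError(f"cannot whitelist {len(desired)} IPs; provider limit is {limit}")
    to_remove = current - desired
    to_add = desired - current
    return to_remove, to_add

# With two HA NAT instances, only their two Elastic IPs ever need whitelisting,
# no matter how far the Auto Scaling group grows (example addresses below):
nat_eips = {"203.0.113.10", "203.0.113.11"}
remove, add = plan_whitelist(current=set(), desired=nat_eips)
```

By contrast, option 4 would require one whitelist entry per instance, and six instances at peak cannot fit within the limit of four.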
Question : You are running a news website in the eu-west- region that updates every minutes. The website has a worldwide audience; it uses an Auto Scaling group behind an Elastic Load Balancer and an Amazon RDS database. Static content resides on Amazon S3 and is distributed through Amazon CloudFront. Your Auto Scaling group is set to trigger a scale-up event at 60% CPU utilization. You use an Amazon RDS extra large DB instance with 10,000 Provisioned IOPS; its CPU utilization is around 80%, while freeable memory is in the 2 GB range. Web analytics reports show that the average load time of your web pages is around 1.5 to 2 seconds, but your SEO consultant wants to bring the average load time down to under 0.5 seconds. How would you improve page load times for your users? (Choose 3 answers)
A. Lower the scale-up trigger of your Auto Scaling group to 30% so it scales more aggressively.
B. Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries.
C. Configure Amazon CloudFront dynamic content support to enable caching of re-usable content from your site.
D. Switch the Amazon RDS database to the high memory extra large instance type.
E. Set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region.
Explanation: ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.
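The pattern described above is the classic cache-aside (lazy loading) strategy. A minimal sketch follows, where a plain dict stands in for the ElastiCache cluster and a deliberately slow function stands in for the disk-based database; both names are illustrative, not a real AWS API.

```python
# Minimal cache-aside sketch: check the in-memory cache first, fall back to
# the database on a miss, then populate the cache for subsequent requests.
import time

cache: dict[str, str] = {}  # stand-in for an ElastiCache cluster

def slow_db_query(key: str) -> str:
    time.sleep(0.01)  # stands in for disk-based DB latency
    return f"row-for-{key}"

def get(key: str) -> str:
    if key in cache:              # cache hit: served from memory
        return cache[key]
    value = slow_db_query(key)    # cache miss: fall through to the database
    cache[key] = value            # populate so the next request is fast
    return value
```

Repeated reads of the same key never touch the database again, which is exactly how a caching layer relieves the overloaded RDS instance in the scenario above.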
Deliver Your Dynamic Content Globally With Amazon CloudFront
Amazon CloudFront offers a simple, cost-effective way to improve the performance, reliability, and global reach of your entire website for both static content and the dynamic portions of your site that change for each end user.
Amazon CloudFront works seamlessly with dynamic web applications running in Amazon EC2 or your origin running outside of AWS without any custom coding or proprietary configurations, making the service simple to deploy and manage. You can use a single Amazon CloudFront distribution to deliver your entire website, including both static and dynamic (or interactive) content. This means that you can continue to use a single domain name (e.g., www.example) for your entire website without the need to separate your static and dynamic content or manage multiple domain names on your website.
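The "single distribution for static and dynamic content" idea can be sketched with CloudFront-style path-pattern cache behaviors, where the first matching pattern decides how a request is cached. The patterns and TTL values below are illustrative assumptions, not pulled from a real distribution configuration.

```python
# Sketch of path-pattern cache behaviors on one distribution / one domain:
# static assets get a long TTL, dynamic paths are forwarded to the origin.
import fnmatch

# Ordered like CloudFront cache behaviors: first matching pattern wins,
# and the final "*" entry acts as the default behavior.
CACHE_BEHAVIORS = [
    ("/static/*", 86400),  # long TTL for static assets served from S3
    ("/api/*", 0),         # dynamic content: no caching, forward to origin
    ("*", 60),             # default: short TTL for semi-dynamic pages
]

def ttl_for(path: str) -> int:
    """Return the cache TTL (seconds) for a request path."""
    for pattern, ttl in CACHE_BEHAVIORS:
        if fnmatch.fnmatch(path, pattern):
            return ttl
    return 0
```

Because all behaviors hang off one distribution, the whole site keeps a single domain name while static and dynamic content are cached differently.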
Latency Routing Policy Use the latency routing policy when you have resources in multiple Amazon EC2 data centers that perform the same function and you want Amazon Route 53 to respond to DNS queries with the resources that provide the best latency. For example, you might have web servers for example.com in the Amazon EC2 data centers in Ireland and in Tokyo. When a user browses to example.com, Amazon Route 53 chooses to respond to the DNS query based on which data center gives your user the lowest latency.
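The selection step Route 53 performs can be modeled as picking the endpoint with the lowest measured latency. The latency figures below are made up for illustration; real measurements are gathered by Route 53 itself, not supplied by you.

```python
# Toy model of latency-based routing: given measured latencies from a user's
# resolver to each region, answer the DNS query with the lowest-latency region.

def pick_region(latencies_ms: dict[str, float]) -> str:
    """Return the region with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

# A user near Ireland sees a much lower latency to eu-west-1 than to Tokyo,
# so their DNS query resolves to the Ireland endpoint.
measured = {"eu-west-1": 35.0, "ap-northeast-1": 210.0}
best = pick_region(measured)
```

This is why option E (a second installation plus latency-based routing) reduces page load time for the site's worldwide audience.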
Question : Your team has a Tomcat-based Java application you need to deploy into development, test, and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools, and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following:
1. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets.
2. Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block.
3. [option text missing in the source] … connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
4. Create your RDS instance separately and pass its DNS name to your application's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.
Correct Answer : Explanation: Both the nightly data load and the reads by other teams' EC2 instances happen from hosts inside the same VPC, so only hosts within the VPC need access to the RDS instance.
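The options above contrast hard-coding the database IP against passing the RDS DNS name through an environment variable. A minimal sketch of the environment-variable approach follows; the variable name `DATABASE_URL`, the example RDS hostname, and the URL-style connection string are all illustrative assumptions, not an Elastic Beanstalk requirement.

```python
# Sketch: the app reads the RDS endpoint from an environment variable at
# startup instead of hard-coding an IP address in the code.
import os
from urllib.parse import urlparse

# In Elastic Beanstalk this would be set as an environment property;
# the hostname below is a made-up example of an RDS endpoint.
os.environ.setdefault(
    "DATABASE_URL",
    "mysql://app:secret@mydb.abc123.eu-west-1.rds.amazonaws.com:3306/prod",
)

def db_params() -> dict:
    """Parse the connection string into the parameters a DB driver needs."""
    url = urlparse(os.environ["DATABASE_URL"])
    return {
        "host": url.hostname,  # DNS name stays valid across RDS failovers
        "port": url.port,
        "user": url.username,
        "dbname": url.path.lstrip("/"),
    }
```

Using the DNS name matters because an RDS instance's underlying IP can change (for example after a failover), which is one reason option 2's hard-coded IP is a poor fit.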
[question text missing in the source]
1. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
2. Implement security groups and configure outbound rules to only permit traffic to software depots.
3. [option text missing in the source]
4. Implement network access control lists to allow specific destinations, with an implicit deny rule.
[question text missing in the source]
1. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
2. Use synchronous database master-slave replication between two Availability Zones.
3. [option text missing in the source]
4. Take 15-minute DB backups stored in Glacier, with transaction logs stored in S3 every 5 minutes.