Question : You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete, and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.
1. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.
2. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
3. (option text unavailable)
4. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts.
Explanation: You can use a role to delegate access to resources that are in different AWS accounts that you own, sharing resources in one account with users in a different account. By setting up cross-account access in this way, you don't need to create individual IAM users in each account, and users don't have to sign out of one account and sign in to another in order to access resources in different AWS accounts. The Dev and Test accounts each create a cross-account role with Admin permissions and grant the Master account permission to assume that role.
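As an illustration only, the sketch below (Python with boto3) shows how an administrator in the Master account could assume such a cross-account role and then act on resources in the Dev account. The role name, account ID, and instance ID are placeholders, not values from the question.

    import boto3

    # Assume the cross-account admin role created in the Dev account.
    # The role ARN is a placeholder; substitute the real Dev/Test
    # account ID and role name.
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/MasterAccountAdmin",
        RoleSessionName="master-admin-session",
    )
    creds = response["Credentials"]

    # Use the temporary credentials to act on resources in the Dev
    # account, e.g. stop an EC2 instance to stay within budget.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])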
Question : You've been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC. The configuration is as follows:
VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-2080c448
Subnets:
Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9
Route Tables:
rtb-218bc449
rtb-238bc44b
Associations:
subnet-258bc44d : rtb-218bc449
subnet-248bc44c : rtb-238bc44b
subnet-9189c6f9 : rtb-238bc44b
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet. Application and database servers cannot have direct access to the Internet. Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?
1. Create a bastion and NAT instance in subnet-248bc44c and add a route from rtb-238bc44b to subnet-258bc44d.
2. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.
3. (option text unavailable) ... subnet-248bc44c.
4. Create a bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance.
Explanation: This is a three-tier architecture. The NAT instance must be in the public subnet (the subnet whose route table points to the IGW), and the database and application tiers should be in private subnets.
So the NAT and bastion instances should be in the public subnet (subnet-258bc44d, associated with rtb-218bc449), which rules out options 1 and 2. The private route table then needs a route that sends Internet-bound traffic to the NAT instance so that the application and database servers can retrieve updates. An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic. An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IP addresses. Attach an Internet gateway to your VPC. Ensure that your subnet's route table points to the Internet gateway. Ensure that instances in your subnet have public IP addresses or Elastic IP addresses. Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.
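A minimal boto3 sketch of the routing change described above, assuming a NAT instance has already been launched in the public subnet; the instance ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2")

    # The public route table already sends 0.0.0.0/0 to the Internet
    # gateway. For the private route table (rtb-238bc44b), send
    # Internet-bound traffic through the NAT instance in the public
    # subnet instead. The NAT instance ID is a placeholder.
    ec2.create_route(
        RouteTableId="rtb-238bc44b",
        DestinationCidrBlock="0.0.0.0/0",
        InstanceId="i-0aaaabbbbccccdddd",
    )

    # A NAT instance must have source/destination checks disabled,
    # otherwise it drops traffic that is not addressed to it.
    ec2.modify_instance_attribute(
        InstanceId="i-0aaaabbbbccccdddd",
        SourceDestCheck={"Value": False},
    )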
Question : You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route 53 Latency-Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime, you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions, Route 53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)
A. Latency resource record sets cannot be used in combination with weighted resource record sets.
B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
D. One of the two working web servers in the other region did not pass its HTTP health check.
E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.
Explanation: To discover the availability of your EC2 instances, the load balancer periodically sends pings, attempts connections, or sends requests to test the EC2 instances. These tests are called health checks. The status of the instances that are healthy at the time of the health check is InService. The status of any instances that are unhealthy at the time of the health check is OutOfService. The load balancer performs health checks on all registered instances, whether the instance is in a healthy state or an unhealthy state.
The load balancer routes requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state.
The load balancer checks the health of the registered instances using either the default health check configuration provided by Elastic Load Balancing or a health check configuration that you configure.
If you have associated your Auto Scaling group with a load balancer, you can use the load balancer health check to determine the health state of instances in your Auto Scaling group. By default, an Auto Scaling group periodically determines the health state of each instance.
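For reference, a hedged sketch of configuring a Classic Load Balancer health check with boto3; the load balancer name and ping target are placeholders, not values from the question.

    import boto3

    elb = boto3.client("elb")  # Classic Load Balancer API

    elb.configure_health_check(
        LoadBalancerName="web-tier-elb",     # placeholder name
        HealthCheck={
            "Target": "HTTP:80/health",      # ping protocol, port, path
            "Interval": 30,                  # seconds between checks
            "Timeout": 5,                    # seconds to wait for a response
            "UnhealthyThreshold": 2,         # failures before OutOfService
            "HealthyThreshold": 3,           # successes before InService
        },
    )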
For both latency alias resource record sets, you set the value of Evaluate Target Health to Yes. You use the Evaluate Target Health setting for each latency alias resource record set to make Amazon Route 53 evaluate the health of the alias targets (the weighted resource record sets) and respond accordingly. Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region.
Amazon Route 53 selects a weighted resource record set based on weight. Evaluate Target Health is Yes for the latency alias resource record set, so Amazon Route 53 checks the health of the selected weighted resource record set.
The health check failed, so Amazon Route 53 chooses another weighted resource record set based on weight and checks its health. That resource record set also is unhealthy.
Amazon Route 53 backs out of that branch of the tree, looks for the latency alias resource record set with the next-best latency, and chooses the resource record set for ap-southeast-2.
Amazon Route 53 again selects a resource record set based on weight, and then checks the health of the selected resource record set. The health check passed, so Amazon Route 53 returns the applicable value in response to the query.
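The walkthrough above corresponds to latency alias records whose Evaluate Target Health flag is set to Yes. A sketch of creating one such record with boto3 follows; the hosted zone IDs, DNS names, and region are placeholders.

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLEZONE",            # placeholder zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1-latency",
                    "Region": "us-east-1",
                    "AliasTarget": {
                        "HostedZoneId": "Z2EXAMPLETARGET",   # placeholder
                        "DNSName": "us-east-1.weighted.example.com",
                        # Without this flag Route 53 keeps answering with
                        # this region even when its targets are unhealthy.
                        "EvaluateTargetHealth": True,
                    },
                },
            }]
        },
    )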
Question : (question text unavailable)
1. Web servers: store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
2. Web servers: store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
3. (option text unavailable) ... Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
4. Web servers: store read-only data on an EC2 NFS server and mount it on each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
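Option 1 above relies on copying read-only web content from S3 onto the instance at boot. A minimal sketch of that bootstrap step with boto3, assuming a hypothetical bucket, prefix, and document root:

    import os
    import boto3

    # Pull read-only web content from S3 at boot time (for example from
    # user data or an init script). Bucket, prefix, and destination are
    # placeholders.
    s3 = boto3.client("s3")
    bucket = "example-web-static-content"
    prefix = "site/"
    dest_root = "/var/www/html"

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):          # skip folder placeholder keys
                continue
            local_path = os.path.join(dest_root, os.path.relpath(key, prefix))
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            s3.download_file(bucket, key, local_path)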
Question : (question text unavailable)
1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance.
2. Ingest data into a DynamoDB table and move old data to a Redshift cluster.
3. (option text unavailable)
4. Keep the current architecture but upgrade RDS storage to 3 TB and 10K provisioned IOPS.
Question : (question text unavailable)
1. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result in a Multi-AZ Relational Database Service instance.
2. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user, and use IAM roles to gain permissions to a DynamoDB table to store the user's vote.
3. (option text unavailable) ... authenticate the user; the web servers will process the user's vote and store the result in a DynamoDB table, using IAM Roles for EC2 instances to gain permissions to the DynamoDB table.
4. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result in an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.
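Option 4 above decouples vote capture from persistence: the web tier enqueues votes in SQS and an application tier drains the queue into DynamoDB. A minimal sketch under those assumptions; the queue URL and table name are placeholders, and credentials are expected to come from the instances' IAM roles.

    import boto3

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/votes"  # placeholder
    sqs = boto3.client("sqs")
    votes_table = boto3.resource("dynamodb").Table("Votes")  # placeholder table

    # Web tier: enqueue a vote after authenticating the user.
    def record_vote(user_id, candidate):
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=f"{user_id}:{candidate}",
        )

    # Application tier: drain the queue and persist votes to DynamoDB.
    def process_votes():
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,   # long polling
        )
        for msg in resp.get("Messages", []):
            user_id, candidate = msg["Body"].split(":", 1)
            votes_table.put_item(Item={"UserId": user_id, "Candidate": candidate})
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])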