Question : You have a web application leveraging an Elastic Load Balancer (ELB) in front of web servers deployed using an Auto Scaling Group. Your database is running on Relational Database Service (RDS). The application serves out technical articles and responses to them; in general there are more views of an article than there are responses to it. On occasion, an article on the site becomes extremely popular, resulting in significant traffic increases that cause the site to go down. What could you do to help alleviate the pressure on the infrastructure while maintaining availability during these events? Choose 3 answers
A. Leverage CloudFront for the delivery of the articles. B. Add RDS read replicas for the read traffic going to your relational database. C. Leverage ElastiCache for caching the most frequently used data. D. Use SQS to queue up the requests for the technical posts and deliver them out of the queue. E. Use Route53 health checks to fail over to an S3 bucket for an error page.
Explanation: CloudFront: Amazon CloudFront is a content delivery web service. It integrates with other Amazon Web Services products to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no minimum usage commitments. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. The service simplifies and offloads the management, monitoring and operation of in-memory cache environments, enabling your engineering resources to focus on developing applications. Using Amazon ElastiCache, you can not only improve load and response times to user actions and queries, but also reduce the cost associated with scaling web applications. Amazon ElastiCache automates common administrative tasks required to operate a distributed cache environment. Using Amazon ElastiCache, you can add a caching layer to your application architecture in a matter of minutes via a few clicks of the AWS Management Console. Once a cache cluster is provisioned, Amazon ElastiCache automatically detects and replaces failed cache nodes, providing a resilient system that mitigates the risk of overloaded databases, which slow website and application load times. Through integration with Amazon CloudWatch monitoring, Amazon ElastiCache provides enhanced visibility into key performance metrics associated with your cache nodes. Amazon ElastiCache is protocol-compliant with Memcached and Redis, so code, applications, and popular tools that you use today with your existing Memcached or Redis environments will work seamlessly with the service. As with all Amazon Web Services, there are no up-front investments required, and you pay only for the resources you use. There are a variety of scenarios where deploying one or more Read Replicas for a given source DB instance might make sense. Common reasons for deploying a Read Replica include the following: (1) Scaling beyond the compute or I/O capacity of a single DB instance for read-heavy database workloads; this excess read traffic can be directed to one or more Read Replicas. (2) Serving read traffic while the source DB instance is unavailable; if your source DB instance cannot take I/O requests (for example, due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your Read Replicas. For this use case, keep in mind that the data on the Read Replica might be "stale" because the source DB instance is unavailable. (3) Business reporting or data warehousing scenarios where you might want business reporting queries to run against a Read Replica rather than your primary, production DB instance.
The question mentions RDS, so an answer that includes it as part of the solution makes sense. Also, Route53 does nothing to alleviate pressure on the infrastructure; it's for failover.
To me the key phrase is "alleviate the pressure". The system is failing because it cannot take the pressure. If one implements A, B and C, the chance of needing E is very slim, as all three will help to alleviate the pressure.
Also, it doesn't matter that the "speed" of ElastiCache and an RDS read replica differs; the criterion is alleviating pressure and adding redundancy, not speed of delivery.
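For reference, offloading the read-heavy article traffic could be done by adding an RDS read replica; a minimal boto3 sketch, where the instance identifiers and instance class are placeholders rather than anything stated in the question:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create a read replica of the primary article database to absorb read-heavy traffic.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="articles-db-replica-1",      # placeholder name
        SourceDBInstanceIdentifier="articles-db-primary",  # placeholder name
        DBInstanceClass="db.m5.large",
    )

The application would then send article reads (and ElastiCache misses) to the replica endpoint while writes continue to go to the primary instance.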
Question : The majority of your infrastructure is on premises and you have a small footprint on AWS. Your company has decided to roll out a new application that is heavily dependent on low-latency connectivity to LDAP for authentication. Your security policy requires minimal changes to the company's existing application user management processes. What option would you implement to successfully launch this application? 1. Create a second, independent LDAP server in AWS for your application to use for authentication 2. Establish a VPN connection so your applications can authenticate against your existing on-premises LDAP servers 3. Establish a VPN connection between your data center and AWS, create an LDAP replica on AWS, and configure your application to use the LDAP replica for authentication 4. Create a second LDAP domain on AWS, establish a VPN connection to establish a trust relationship between your new and existing domains, and use the new domain for authentication
Explanation: The issue here is that more than one option could technically work, but the key indicators are "low latency connectivity to LDAP for authentication" and "Your security policy requires minimal changes to the company's existing application user management processes."
There is no need for a new, separate domain or a trust; just an LDAP replica for authentication, with minimal change to the existing user management processes.
The answer is 3. A replica would allow for the authentication as requested. For those also looking at Azure, Microsoft recommends the same approach when extending AD.
Option 4 would require more administrative work for the sysadmins and opens a new level of security requirements, as you must establish trusts, password policies, and new or additional domain users.
Question : You need to design a VPC for a web application consisting of an Elastic Load Balancer (ELB), a fleet of web/application servers, and an RDS database. The entire infrastructure must be distributed over 2 Availability Zones. Which VPC configuration works while assuring the database is not available from the Internet? 1. One public subnet for ELB, one public subnet for the web servers, and one private subnet for the database 2. One public subnet for ELB, two private subnets for the web servers, two private subnets for RDS 3. Two public subnets for ELB, two private subnets for the web servers, and two private subnets for RDS 4. Two public subnets for ELB, two public subnets for the web servers, and two public subnets for RDS
Ans : 3 Exp : To span two Availability Zones, the ELB needs a public subnet in each AZ. The web/application servers sit behind the ELB, so they can live in private subnets (one per AZ) and still receive traffic from the load balancer. The RDS database belongs in private subnets in both AZs (an RDS DB subnet group requires subnets in at least two AZs); because the private route tables have no route to an internet gateway, the database is not reachable from the Internet. Option 1 does not span two AZs, option 2 gives the ELB only one public subnet, and option 4 exposes the database in public subnets.
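To illustrate that layout, a sketch of creating the six subnets with boto3; the VPC ID, CIDR blocks, and AZ names are assumed placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC

    # Two public subnets (ELB) and four private subnets (web tier + RDS), one of each per AZ.
    layout = [
        ("10.0.0.0/24",  "us-east-1a"), ("10.0.1.0/24",  "us-east-1b"),   # public, for the ELB
        ("10.0.10.0/24", "us-east-1a"), ("10.0.11.0/24", "us-east-1b"),   # private, web servers
        ("10.0.20.0/24", "us-east-1a"), ("10.0.21.0/24", "us-east-1b"),   # private, RDS subnet group
    ]
    for cidr, az in layout:
        ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)

Only the two public subnets would be associated with a route table containing a route to the internet gateway; the private route tables carry no such route, so the database stays unreachable from the internet.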
Question : A system admin wants to add more zones to the existing ELB. The system admin wants to perform this activity from the CLI. Which of the below mentioned commands helps the system admin add new zones to the existing ELB?
1. elb-enable-zones-for-lb 2. elb-add-zones-for-lb 3. Access Mostly Uused Products by 50000+ Subscribers 4. elb-configure-zones-for-lb Ans : 1 Exp : The user has created an Elastic Load Balancer with one availability zone and wants to add more zones to the existing ELB. The user can do this either from the console or from the CLI, using the elb-enable-zones-for-lb command.
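The elb-* commands belong to the legacy ELB command-line tools; the same operation through boto3 (load balancer name and zone are placeholders) would look like:

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")  # Classic Load Balancer API

    # Add an additional Availability Zone to an existing Classic ELB.
    elb.enable_availability_zones_for_load_balancer(
        LoadBalancerName="my-classic-elb",        # placeholder
        AvailabilityZones=["us-east-1c"],          # zone(s) to add
    )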
Question : An organization is planning to create a user with IAM. They are trying to understand the limitations of IAM so that they can plan accordingly. Which of the below mentioned statements is not true with respect to the limitations of IAM?
1. One IAM user can be a part of a maximum of 5 groups 2. The organization can create 100 groups per AWS account 3. Access Mostly Uused Products by 50000+ Subscribers 4. One AWS account can have 250 roles Ans : 1 Exp : AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. The default maximums for the IAM entities are: groups per AWS account: 100; users per AWS account: 5000; roles per AWS account: 250; groups per user: 10 (that is, one user can be a member of up to 10 groups). Statement 1 is therefore not true.
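These defaults can also be checked programmatically; a small boto3 sketch, where the values in the comments are simply the documented defaults quoted above and may differ for a given account:

    import boto3

    iam = boto3.client("iam")

    # Returns current usage and quota values for IAM entities in the account.
    summary = iam.get_account_summary()["SummaryMap"]
    print(summary.get("GroupsQuota"))         # e.g. 100 groups per account
    print(summary.get("GroupsPerUserQuota"))  # e.g. 10 groups per user
    print(summary.get("UsersQuota"))          # e.g. 5000 users per account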
Question : A user is planning to scale up an application at 8 AM and scale down by 7 PM daily using Auto Scaling. What should the user do in this case? 1. Setup the scaling policy to scale up and down based on the CloudWatch alarms 2. The user should increase the desired capacity at 8 AM and decrease it by 7 PM manually 3. Access Mostly Uused Products by 50000+ Subscribers 4. Setup scheduled actions to scale up or down at a specific time Ans : 4 Exp : Auto Scaling based on a schedule allows the user to scale the application in response to predictable load changes. To configure the Auto Scaling group to scale based on a schedule, the user needs to create scheduled actions. A scheduled action tells Auto Scaling to perform a scaling action at a certain time in the future.
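A sketch of the two scheduled actions with boto3; the group name, capacities, and exact times are assumptions based on the question:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Scale up every day at 08:00 UTC ...
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",            # placeholder
        ScheduledActionName="daily-scale-up",
        Recurrence="0 8 * * *",                    # cron expression: 8 AM every day
        DesiredCapacity=6,
    )
    # ... and back down every day at 19:00 UTC.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="daily-scale-down",
        Recurrence="0 19 * * *",                   # cron expression: 7 PM every day
        DesiredCapacity=2,
    )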
Question : A user has created a VPC with two subnets: one public and one private. The user is planning to run the patch update for the instances in the private subnet. How can the instances in the private subnet connect to the internet? 1. Use the internet gateway with a private IP 2. Allow outbound traffic in the security group for port 80 to allow internet updates 3. Access Mostly Uused Products by 50000+ Subscribers 4. Use NAT with an elastic IP
Ans : 4 Exp : A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create a subnet within the VPC and launch instances inside that subnet. If the user has created two subnets (one private and one public), he would need a Network Address Translation (NAT) instance with an elastic IP address. This enables the instances in the private subnet to send requests to the internet (for example, to perform software updates).
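A sketch of wiring the private subnet's route table to a NAT instance with boto3; all IDs are placeholders, and a managed NAT gateway would typically be used today, but the question predates it:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Associate an Elastic IP with the NAT instance sitting in the public subnet.
    allocation = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(InstanceId="i-0123456789abcdef0",           # placeholder NAT instance
                          AllocationId=allocation["AllocationId"])

    # Point the default route of the *private* subnet's route table at the NAT instance.
    ec2.create_route(RouteTableId="rtb-0123456789abcdef0",            # placeholder route table
                     DestinationCidrBlock="0.0.0.0/0",
                     InstanceId="i-0123456789abcdef0")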
Question : A user has configured an EC2 instance in the us-east-1a zone. The user has enabled detailed monitoring of the instance. The user is trying to get the data from CloudWatch using the CLI. Which of the below mentioned CloudWatch endpoint URLs should the user use? 1. monitoring.us-east-1.amazonaws.com 2. monitoring.us-east-1-a.amazonaws.com 3. Access Mostly Uused Products by 50000+ Subscribers 4. cloudwatch.us-east-1a.amazonaws.com
Ans : 1 Exp : The CloudWatch resources are always region specific and have region-specific endpoints. If the user is trying to access a metric in the us-east-1 region, the endpoint URL will be: monitoring.us-east-1.amazonaws.com
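With boto3 the regional endpoint is derived from the region name, which is what the question is really testing; a minimal sketch:

    import boto3

    # region_name="us-east-1" resolves to the endpoint monitoring.us-east-1.amazonaws.com;
    # there is no per-Availability-Zone endpoint such as us-east-1a.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    print(cloudwatch.meta.endpoint_url)   # https://monitoring.us-east-1.amazonaws.com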
Question : A user has configured ELB with Auto Scaling. The user suspended the Auto Scaling AddToLoadBalancer process (which adds instances to the load balancer) for a while. What will happen to the instances launched during the suspension period?
1. The instances will not be registered with ELB and the user has to manually register when the process is resumed 2. The instances will be registered with ELB only once the process has resumed 3. Access Mostly Uused Products by 50000+ Subscribers 4. It is not possible to suspend only the AddToLoadBalancer process
Ans : 1 Exp : Auto Scaling performs various processes, such as Launch, Terminate, AddToLoadBalancer, etc. The user can also suspend individual processes. The AddToLoadBalancer process type adds instances to the load balancer when the instances are launched. If this process is suspended, Auto Scaling will launch the instances but will not add them to the load balancer. When the user resumes this process, Auto Scaling will resume adding new instances launched after resumption to the load balancer. However, it will not add running instances that were launched while the process was suspended; those instances must be added manually.
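A sketch of suspending and resuming just that one process with boto3; the group name is a placeholder:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Stop Auto Scaling from registering newly launched instances with the load balancer ...
    autoscaling.suspend_processes(AutoScalingGroupName="web-asg",
                                  ScalingProcesses=["AddToLoadBalancer"])

    # ... and later resume it; instances launched in between still need manual registration.
    autoscaling.resume_processes(AutoScalingGroupName="web-asg",
                                 ScalingProcesses=["AddToLoadBalancer"])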
Question : A sys admin has enabled access logs on ELB. Which of the below mentioned activities is not captured by the log? 1. Response processing time 2. Front-end processing time 3. Access Mostly Uused Products by 50000+ Subscribers 4. Request processing time Ans : 2 Exp : Elastic Load Balancing access logs capture detailed information for all requests made to the load balancer. Each request will have details such as client IP, request path, ELB IP, time, and latencies. The latency information comprises the request processing time, the backend processing time, and the response processing time; there is no separate "front-end processing time" field.
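As a rough illustration of those fields, a sketch that pulls the three latency values out of a Classic ELB access-log line; the sample line is fabricated and the field positions assume the Classic ELB log format:

    # Classic ELB access-log fields (space separated):
    # timestamp elb client:port backend:port request_processing_time
    # backend_processing_time response_processing_time elb_status_code ...
    line = ('2015-05-13T23:39:43.945958Z my-elb 192.168.131.39:2817 10.0.0.1:80 '
            '0.000073 0.001048 0.000057 200 200 0 29 "GET http://example.com:80/ HTTP/1.1"')

    fields = line.split(" ")
    request_t, backend_t, response_t = (float(f) for f in fields[4:7])
    print(request_t, backend_t, response_t)   # the three latencies captured in the log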
Question : A user has moved an object to Glacier using the lifecycle rules. The user requests to restore the archive some months later. When the restore request is completed, the user accesses that archive. Which of the below mentioned statements is not true in this condition?
1. The archive will be available as an object for the duration specified by the user during the restoration request 2. The restored object's storage class will be RRS 3. Access Mostly Uused Products by 50000+ Subscribers 4. The user needs to pay storage for both RRS (restored) and Glacier (archive) rates
Exp : When the user creates an EBS volume and accesses it for the first time, he will encounter reduced IOPS due to the wiping or initialization of the block storage. To avoid this, as well as to achieve the best performance, it is required to pre-warm the EBS volume. For a volume created from a snapshot and attached to a Linux OS, the dd command pre-warms the existing data on the EBS volume and any restored snapshots of volumes that have previously been fully pre-warmed. This command maintains incremental snapshots; however, because the operation is read-only, it does not pre-warm unused space that has never been written to on the original volume. In the command "dd if=/dev/xvdf of=/dev/null bs=1M", the parameter "if" (input file) should be set to the drive that the user wishes to warm. The "of" (output file) parameter should be set to the Linux null virtual device, /dev/null. The "bs" parameter sets the block size of the read operation; for optimal performance, this should be set to 1 MB.
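Returning to the Glacier restore scenario above: the temporary restore is requested through the S3 API; a minimal boto3 sketch, where the bucket, key, and restore window are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to restore a Glacier-archived object and keep the temporary copy for 7 days.
    s3.restore_object(
        Bucket="policy-archive-bucket",          # placeholder
        Key="articles/2014/archive-stats.csv",   # placeholder
        RestoreRequest={"Days": 7},
    )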
Question : An organization has created IAM users. The organization wants to give them the same login ID but different passwords. How can the organization achieve this? 1. The organization should create a separate login ID but give the IAM users the same alias so that each one can login with their alias 2. The organization should create each user in a separate region so that they have their own URL to login 3. It is not possible; each IAM user within an AWS account must have a unique login ID 4. The organization should create various groups and add each user with the same login ID to different groups. The user can login with their own group ID Ans : 3 Exp : AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. Whenever the organization creates an IAM user, there must be a unique ID for each user; it is not possible to have the same login ID for multiple users. The names of users, groups, roles, and instance profiles must be alphanumeric, including the following common characters: plus (+), equal (=), comma (,), period (.), at (@), and dash (-).
Question : A user is planning to evaluate AWS for their internal use. The user does not want to incur any charge on his account during the evaluation. Which of the below mentioned AWS services would incur a charge if used?
Ans : 4 Exp : AWS offers a free usage tier for one year to help new AWS customers get started in the Cloud. The free tier can be used for anything that the user wants to run in the Cloud. It includes a handful of AWS services, such as 750 hours of free micro instances and 750 hours of ELB, 5 GB of AWS S3, and AWS EBS general purpose volumes up to 30 GB. Provisioned IOPS (PIOPS) volumes are not part of the free usage tier.
Question : A user has developed an application which is required to send data to a NoSQL database. The user wants to decouple the data sending such that the application keeps processing and sending data but does not wait for an acknowledgement from the DB. Which of the below mentioned services helps in this scenario? 1. AWS Simple Notification Service 2. AWS Simple Workflow 3. AWS Simple Queue Service 4. AWS Simple Query Service Ans : 3
Exp : Amazon Simple Queue Service (SQS) is a fast, reliable, scalable, and fully managed message queuing service. SQS provides a simple and cost-effective way to decouple the components of an application. In this case, the user can use AWS SQS to send messages which are received from the application and destined for the DB. The application can continue processing data without waiting for any acknowledgement from the DB. The user can use SQS to transmit any volume of data without losing messages or requiring other services to be always available.
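A minimal boto3 sketch of that decoupling, with the queue name and payload as placeholders: the application enqueues, and a separate worker drains the queue into the NoSQL store.

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = sqs.create_queue(QueueName="db-writes")["QueueUrl"]   # placeholder queue

    # Producer: fire-and-forget, no waiting on the database.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"item_id": 42, "payload": "example"}')

    # Consumer (runs elsewhere): pull messages and write them to the NoSQL database.
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10).get("Messages", [])
    for m in messages:
        # ... write m["Body"] to the database, then delete the message ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])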
Question : A root AWS account owner is trying to understand the various options to set permissions on AWS S3. Which of the below mentioned options is not a valid way to grant permission for S3? 1. User Access Policy 2. S3 Object Access Policy 3. Access Mostly Uused Products by 50000+ Subscribers 4. S3 ACL Ans : 2 Exp : Amazon S3 provides a set of operations to work with the Amazon S3 resources. Managing S3 resource access refers to granting others permissions to work with S3. There are three ways the root account owner can define access to S3: S3 ACL: the user can use ACLs to grant basic read/write permissions to other AWS accounts. S3 Bucket Policy: the policy is used to grant other AWS accounts or IAM users permissions for the bucket and the objects in it. User Access Policy: define an IAM user and assign him an IAM policy which grants him access to S3.
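Of the three valid mechanisms, a bucket policy is the one applied directly to the bucket; a sketch with boto3, where the bucket name and account ID are placeholders:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Grant another AWS account read access to objects in the bucket via a bucket policy.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},   # placeholder account
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",               # placeholder bucket
        }],
    }
    s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))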
Question : A sys admin has created a shopping cart application and hosted it on EC2. The EC2 instances are running behind an ELB. The admin wants to ensure that an end user's requests will always go to the EC2 instance where the user's session was created. How can the admin configure this? 1. Enable ELB cross zone load balancing 2. Enable ELB cookie setup 3. Enable ELB sticky sessions 4. Enable ELB connection draining
Ans : 3 Exp : Generally, AWS ELB distributes each request across the registered EC2 instances. The Elastic Load Balancer provides a feature called sticky sessions, which binds the user's session to a specific EC2 instance. If sticky sessions are enabled, the first request from the user will be routed to any of the EC2 instances, but from then on all requests from the same user will be routed to that same EC2 instance. This ensures that all requests coming from the user during the session will be sent to the same application instance.
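A sketch of enabling duration-based (ELB-generated cookie) stickiness on a Classic ELB with boto3; the names, port, and cookie duration are placeholders:

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")

    # Create a duration-based stickiness policy (cookie expires after 10 minutes) ...
    elb.create_lb_cookie_stickiness_policy(
        LoadBalancerName="shop-elb",              # placeholder
        PolicyName="shop-sticky-10min",
        CookieExpirationPeriod=600,
    )
    # ... and attach it to the HTTP listener so requests stick to one EC2 instance.
    elb.set_load_balancer_policies_of_listener(
        LoadBalancerName="shop-elb",
        LoadBalancerPort=80,
        PolicyNames=["shop-sticky-10min"],
    )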
Question : A user has configured ELB with three instances. The user wants to achieve high availability as well as redundancy for the ELB. Which of the below mentioned AWS services helps the user achieve this? 1. Route 53 2. AWS Mechanical Turk 3. Access Mostly Uused Products by 50000+ Subscribers 4. AWS EMR Ans : 1 Exp : The user can provide high availability and redundancy for applications running behind Elastic Load Balancer by enabling Amazon Route 53 Domain Name System (DNS) failover for the load balancers. Amazon Route 53 is a DNS service that provides reliable routing to the user's infrastructure.
Question : An organization has been using AWS for a few months. The finance team wants to visualize the pattern of AWS spending. Which of the below AWS tools will help with this requirement?
1. AWS Cost Manager 2. AWS Cost Explorer 3. Access Mostly Uused Products by 50000+ Subscribers 4. AWS Consolidated Billing Ans : 2 Exp : The AWS Billing and Cost Management console includes the Cost Explorer tool for viewing AWS cost data as a graph. There is no extra charge for this service. With Cost Explorer the user can filter graphs by resource tags or by AWS service. If the organization is using Consolidated Billing, Cost Explorer helps generate reports based on the linked accounts. This helps the organization identify areas that require further inquiry; the organization can view trends and use them to understand spend and to predict future costs.
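Cost Explorer data is also available programmatically; a sketch that groups spend by service, with the date range as a placeholder (note that, unlike the console, the Cost Explorer API carries a small per-request charge):

    import boto3

    ce = boto3.client("ce", region_name="us-east-1")

    # Monthly unblended cost, grouped by AWS service.
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},   # placeholder dates
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])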
Question : A user has launched an ELB which has instances registered with it. The user deletes the ELB by mistake. What will happen to the instances? 1. ELB will ask the user whether to delete the instances or not 2. Instances will be terminated 3. Access Mostly Uused Products by 50000+ Subscribers 4. Instances will keep running Ans : 4 Exp : When the user deletes the Elastic Load Balancer, all the registered instances will be deregistered. However, they will continue to run. The user will incur charges if he does not take any action on those instances.
Question : A user is planning to set up notifications on the RDS DB for a snapshot. Which of the below mentioned event categories is not supported by RDS for this snapshot source type? 1. Backup 2. Creation 3. Access Mostly Uused Products by 50000+ Subscribers 4. Restoration Ans : 1 Exp : Amazon RDS uses the Amazon Simple Notification Service to provide a notification when an Amazon RDS event occurs. The event categories for a snapshot source type are Creation, Deletion, and Restoration. Backup is an event category of the DB instance source type, not of the snapshot source type.
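The valid categories per source type can be listed directly, and a subscription created from them; a boto3 sketch where the subscription name and SNS topic ARN are placeholders:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # List the event categories RDS supports for the snapshot source type.
    cats = rds.describe_event_categories(SourceType="db-snapshot")
    print(cats["EventCategoriesMapList"])   # expect creation, deletion, restoration

    # Subscribe an SNS topic to snapshot events.
    rds.create_event_subscription(
        SubscriptionName="snapshot-events",                              # placeholder
        SnsTopicArn="arn:aws:sns:us-east-1:111122223333:rds-events",     # placeholder
        SourceType="db-snapshot",
        EventCategories=["creation", "deletion", "restoration"],
    )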
Question : A customer is using AWS for Dev and Test. The customer wants to set up the Dev environment with CloudFormation. Which of the below mentioned steps is not required while using CloudFormation?
1. Create a stack 2. Configure a service 3. Access Mostly Uused Products by 50000+ Subscribers 4. Provide the parameters configured as part of the template Ans : 2 Exp : AWS CloudFormation is an application management tool which provides application modelling, deployment, configuration, management and related activities. AWS CloudFormation introduces two concepts: the template and the stack. The template is a JSON-format, text-based file that describes all the AWS resources required to deploy and run an application. The stack is a collection of AWS resources which are created and managed as a single unit when AWS CloudFormation instantiates a template. While creating a stack, the user uploads the template and provides the data for the parameters, if required; there is no separate "configure a service" step.
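A sketch of the steps that are actually required, creating a stack and passing parameters, using boto3; the template file and parameter names are placeholders:

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    with open("dev-environment.template") as f:     # placeholder template file
        template_body = f.read()

    # Creating the stack uploads the template and supplies values for its parameters.
    cfn.create_stack(
        StackName="dev-environment",
        TemplateBody=template_body,
        Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t2.micro"}],  # placeholder
    )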
Question : A user has configured the AWS CloudWatch alarm for estimated usage charges in the US East region. Which of the below mentioned statements is not true with respect to the estimated charges?
1. It will store the estimated charges data of the last 14 days 2. It will include the estimated charges of every AWS service 3. Access Mostly Uused Products by 50000+ Subscribers 4. The metric data will show data specific to that region
1. RDS will have an internal IP which will redirect all requests to the new DB 2. RDS uses DNS to switch over to the standby replica for a seamless transition 3. Access Mostly Uused Products by 50000+ Subscribers 4. RDS will have both the DBs running independently and the user has to manually switch over Ans : 2 Exp : In the event of a planned or unplanned outage of a DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if the user has enabled Multi-AZ. The automatic failover mechanism simply changes the DNS record of the DB instance to point to the standby DB instance. As a result, the user will need to re-establish any existing connections to the DB instance; however, since the DNS name stays the same, the application can access the DB seamlessly.
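Because the failover is just a DNS change, an application only ever needs the instance's endpoint name; a short boto3 sketch, where the instance identifier is a placeholder:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # The Endpoint address is a DNS name; after a Multi-AZ failover it resolves to the
    # promoted standby, so connection strings do not need to change.
    db = rds.describe_db_instances(DBInstanceIdentifier="app-db")["DBInstances"][0]
    print(db["Endpoint"]["Address"], db["Endpoint"]["Port"])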
Question : An organization is generating digital policy files which are required by the admins for verification. Once the files are verified they may not be required in the future unless there is some compliance issue. If the organization wants to save them in a cost effective way, which is the best possible solution?
Ans : 4 Exp : Amazon S3 stores objects according to their storage class. There are three major storage classes: Standard, Reduced Redundancy Storage (RRS), and Glacier. Standard is the default S3 class and provides very high durability; however, its cost is a little higher. Reduced Redundancy is for less critical files. Glacier is for archival and for files which are accessed infrequently; it is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup, which makes it the most cost-effective choice here.
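A sketch of a lifecycle rule that archives the verified policy files to Glacier; the bucket name, prefix, and 30-day figure are assumptions, not from the question:

    import boto3

    s3 = boto3.client("s3")

    # Move objects under the policies/ prefix to the Glacier storage class after 30 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="digital-policy-files",                      # placeholder
        LifecycleConfiguration={"Rules": [{
            "ID": "archive-verified-policies",
            "Status": "Enabled",
            "Filter": {"Prefix": "policies/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]},
    )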
Question : A user has launched an EBS-backed instance. The user started the instance in the morning and, within the same hour while testing a script, stopped the instance twice and restarted it each time; he also rebooted the instance once in that hour. For how many instance hours will AWS charge the user? 1. 3 hours 2. 4 hours 3. Access Mostly Uused Products by 50000+ Subscribers 4. 1 hour Ans : 1 Exp : A user can stop/start or reboot an EC2 instance using the AWS console, the Amazon EC2 CLI, or the Amazon EC2 API. Rebooting an instance is equivalent to rebooting an operating system; when the instance is rebooted, AWS does not charge for an extra hour. When the user stops the instance, AWS does not charge the running cost but charges only the EBS storage cost, and each start begins a new billable instance hour. In this case the instance was started three times within the hour (the initial start plus two restarts after stopping), so the user is charged for 3 instance hours; the reboot adds nothing.
Question : A user has a weighing plant. The user measures the weight of some goods every few minutes and sends the data to AWS CloudWatch for monitoring and tracking. Which of the below mentioned parameters is mandatory for the user to include in the request list? 1. Value 2. Namespace 3. Access Mostly Uused Products by 50000+ Subscribers 4. Timezone
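For context, a PutMetricData call must always name a custom namespace along with the metric data; a boto3 sketch, where the namespace, metric name, and value are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Namespace is mandatory on every PutMetricData request; each datum needs a
    # MetricName and a Value (a timestamp is optional and defaults to "now").
    cloudwatch.put_metric_data(
        Namespace="WeighingPlant",                 # placeholder custom namespace
        MetricData=[{"MetricName": "GoodsWeight", "Value": 152.4}],
    )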