Question: A user has created an S3 bucket which is not publicly accessible. The bucket contains thirty objects, all of which are also private. If the user wants to make the objects public, how can he configure this with minimal effort? 1. Select all objects in the console and apply a single policy to mark them public 2. Write a program which programmatically makes all objects public using the S3 SDK 3. Define a bucket policy which marks all the objects in the bucket as public 4. Make the bucket ACL public so it also marks all objects as public
Answer: 3
Explanation: A system admin can grant permissions on S3 buckets or objects to any user, or make objects public, using a bucket policy or a user policy. Both use the same JSON-based access policy language. In general, if the user defines an ACL on the bucket, the objects in the bucket do not inherit it, and vice versa. A bucket policy, however, is defined at the bucket level and can make both the bucket and all of its objects public with a single policy applied to that bucket.
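A minimal sketch of such a bucket policy, built in Python (the bucket name `example-bucket` is a placeholder):

```python
import json

BUCKET = "example-bucket"  # placeholder bucket name

# A bucket policy granting anonymous read access to every object in the
# bucket. The "/*" suffix on the Resource ARN is what makes the policy
# apply to the objects rather than to the bucket itself.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForAllObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

policy_document = json.dumps(public_read_policy)
print(policy_document)
# With boto3, this single document could then be applied once to the bucket:
#   boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=policy_document)
```

Applying this one policy covers all thirty objects at once, which is why it takes less effort than per-object ACLs or a custom program.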
Question: A sys admin is maintaining an application on AWS. The application is installed on EC2, and the user has configured ELB and Auto Scaling. Anticipating a future increase in load, the user plans to launch new servers proactively so that they get registered with ELB. How can the user add these instances with Auto Scaling?
1. Increase the desired capacity of the Auto Scaling group 2. Increase the maximum limit of the Auto Scaling group 3. Access Mostly Uused Products by 50000+ Subscribers 4. Decrease the minimum limit of the Auto Scaling group
Answer: 1
Explanation: The user can increase the desired capacity of the Auto Scaling group, and Auto Scaling will launch new instances to meet the new capacity. The newly launched instances are registered with ELB if the Auto Scaling group is configured with ELB. If the user decreases the minimum size, instances may be removed from the Auto Scaling group. Increasing the maximum size does not add instances; it only raises the cap on how many instances the group may contain.
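The relationship between the minimum, desired, and maximum sizes can be illustrated with a small sketch. The clamping helper below is our own simplification of the constraint Auto Scaling enforces, not the actual implementation; the sizes used are illustrative:

```python
def set_desired_capacity(requested: int, min_size: int, max_size: int) -> int:
    """Return the capacity the group will actually target.

    Auto Scaling keeps the desired capacity within [min_size, max_size];
    this helper mimics that constraint for illustration.
    """
    return max(min_size, min(requested, max_size))

# Group configured with min=2, max=10, currently running 4 instances.
# Proactively raising the desired capacity to 6 launches 2 more instances,
# which Auto Scaling then registers with the attached ELB.
print(set_desired_capacity(6, min_size=2, max_size=10))   # 6

# A request above the maximum is clamped to the cap, which is why raising
# only the maximum (option 2) changes the cap but launches nothing.
print(set_desired_capacity(15, min_size=2, max_size=10))  # 10
```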
Question: Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?
1. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput. 2. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
Answer: 2
Explanation: ElastiCache and read replicas are good options for read-heavy workloads. DynamoDB also offers strong write performance, but there is no need for a NoSQL database here, so option 4 can be excluded as well. Queues are used to decouple message producers from message consumers, which is one way to architect for scale and reliability.
Let's say you've built a mobile voting app for a popular TV show, and 5 to 25 million viewers all vote at the same time (at the end of each performance). How are you going to handle that many votes in such a short space of time (say, 15 seconds)? You could build a significant web server tier and database back-end that could handle millions of messages per second, but that would be expensive, you'd have to pre-provision for the maximum expected workload, and it would not be resilient (for example, to database failure or throttling). If few people voted, you'd be overpaying for infrastructure; if voting went crazy, votes could be lost.
A better solution would use some queuing mechanism that decoupled the voting apps from your service where the vote queue was highly scalable so it could happily absorb 10 messages/sec or 10 million messages/sec. Then you would have an application tier pulling messages from that queue as fast as possible to tally the votes.
1. Keep moving all the log files generated on the ephemeral drive to the EBS volume for the audit trails. 2. Set up the EBS volume with the DeleteOnTermination flag set to False to ensure that the EBS volume survives instance termination. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Take a snapshot of the EBS volume at regular intervals for backup purposes.
1. Allow only IAM users to connect to the EC2 instances with their own secret access key. 2. Apply the latest OS patches and always keep the OS updated. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Create a procedure to revoke the access rights of individual users when they no longer need to connect to the EC2 instance for application configuration.
1. Each site cannot have an overlapping IP range and unique Autonomous System Numbers for each gateway. 2. Each site must have the same Autonomous System Numbers for each gateway and the IP address of each site should be within the VPC CIDR. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Each site should have the same Autonomous System Numbers and unique Border Gateway Protocol.