
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : A user has created an S3 bucket which is not publicly accessible. The bucket has thirty objects which are also private. If the user wants to
make the objects public, how can he configure this with minimal effort?
1. The user should select all objects from the console and apply a single policy to mark them public
2. The user can write a program which programmatically makes all objects public using the S3 SDK
3. The user should define a bucket policy on the bucket which marks all objects in the bucket as public
4. Make the bucket ACL public so that all objects are also marked public



Answer: 3

Explanation: A system admin can grant permissions on S3 objects or buckets to any user, or make the objects public, using a bucket policy or a user policy.
Both use the JSON-based access policy language. Generally, if the user defines an ACL on the bucket, the objects in the bucket do not inherit
it, and vice versa. A bucket policy, however, is defined at the bucket level and allows the objects as well as the bucket to be made public with a single
policy applied to that bucket.
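
For illustration, here is a minimal boto3 sketch of this approach; the bucket name is hypothetical, and the policy grants anonymous read access to every object in the bucket with a single call:

import json
import boto3

# Hypothetical bucket name; substitute your own.
BUCKET = "example-bucket"

# Bucket policy allowing anonymous read access to every object in the bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*"
    }]
}

s3 = boto3.client("s3")
# One call at the bucket level covers all thirty objects.
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(public_read_policy))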





Question : A sys admin is maintaining an application on AWS. The application is installed on EC2, and the user has configured ELB and Auto Scaling.
Considering a future load increase, the user is planning to launch new servers proactively so that they get registered with the ELB. How can the user
add these instances with Auto Scaling?

1. Increase the desired capacity of the Auto Scaling group
2. Increase the maximum limit of the Auto Scaling group
3. Access Mostly Uused Products by 50000+ Subscribers
4. Decrease the minimum limit of the Auto Scaling group

Answer: 1

Explanation: A user can increase the desired capacity of the Auto Scaling group, and Auto Scaling will launch new instances to match the new capacity. The
newly launched instances will be registered with the ELB if the Auto Scaling group is configured with the ELB. If the user decreases the minimum size,
instances will be removed from the Auto Scaling group. Increasing the maximum size will not add instances; it only sets the maximum instance cap.
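
As a rough sketch (assuming boto3 and a hypothetical Auto Scaling group name), raising the desired capacity looks like this:

import boto3

autoscaling = boto3.client("autoscaling")

# Raising the desired capacity makes Auto Scaling launch additional instances,
# which are then registered with the attached ELB automatically.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    DesiredCapacity=6,
    HonorCooldown=False
)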





Question : Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many
database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?

1. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
2. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.

Answer: 2

Explanation: ElastiCache and read replicas are good options for read-heavy workloads, not write-heavy ones. DynamoDB with provisioned write throughput would also perform well, but there is no stated need for a NoSQL database here, so option 4 can be excluded as well.
Queues are used to decouple message producers from message consumers. This is one way to architect for scale and reliability.

Let's say you've built a mobile voting app for a popular TV show and 5 to 25 million viewers are all voting at the same time (at the end of each performance). How are you going to
handle that many votes in such a short space of time (say, 15 seconds)? You could build a significant web server tier and database back end that could handle millions of messages per
second, but that would be expensive, you would have to pre-provision for the maximum expected workload, and it would not be resilient (for example, to database failure or throttling). If few
people voted, you would be overpaying for infrastructure; if voting went crazy, votes could be lost.

A better solution uses a queuing mechanism that decouples the voting apps from your service. The vote queue is highly scalable, so it can happily absorb 10
messages/sec or 10 million messages/sec. An application tier then pulls messages from that queue as fast as possible to tally the votes.
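
A minimal boto3 sketch of this pattern, with a hypothetical queue URL: the producer enqueues each write, and a worker drains the queue at its own pace and persists the data to the database:

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL; substitute the URL of your own queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/vote-queue"

# Producer side: the web tier enqueues the write instead of hitting the database directly.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"voter": "alice", "choice": 7}')

# Consumer side: a worker pulls messages in batches and writes them to the database.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20  # long polling
)
for message in response.get("Messages", []):
    # ... persist the vote/write to the database here ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])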





Related Questions


Question : QuickTechie.com is setting up a highly scalable application using Elastic Beanstalk and is using ELB and RDS within a VPC.
QuickTechie.com has public and private subnets within the cloud. Which of the below mentioned configurations will not work in this scenario?
1. The configuration must have two private subnets in separate AZs.
2. The configuration must have public and private subnets in the same AZ.
3. Access Mostly Uused Products by 50000+ Subscribers
4. It is recommended to set up RDS in a private subnet and ELB in a public subnet.




Question : QuickTechie.com has launched a large EC2 instance from an EBS-backed AMI with an additional ephemeral drive and wants to ensure that even during
an outage no critical data will be lost. Which of the below mentioned steps will not help QuickTechie achieve this goal?

1. Keep moving all the log files generated on the ephemeral drive to the EBS volume for audit trails.
2. Set up the EBS volume with the DeleteOnTermination flag set to False to ensure that the EBS volume survives instance termination.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Take a snapshot of the EBS volume at regular intervals for backup purpose.




Question : QuickTechie.com has hosted an application on EC2 instances. Multiple users will be connecting to the instances for setup and configuration of the
application. QuickTechie is planning to implement certain security best practices. Which of the below mentioned pointers will not help QuickTechie achieve a better security arrangement?

1. Allow only IAM users to connect with the EC2 instances with their own secret access key.
2. Apply the latest patch of OS and always keep it updated.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create a procedure to revoke the access rights of the individual user when they are not required to connect to EC2 instance anymore for the purpose of application configuration.




Question : QuickTechie.com has five branches across the globe (New York, Geneva, Hong Kong, Mumbai and London). They want to expand their data centers such that
their web server will be in AWS and each branch would have its own database in its local data center.
Based on the user login, the company wants to connect to the appropriate data center.
While designing this scenario with AWS VPN CloudHub, which of the below mentioned important factors should QuickTechie ensure?

1. Each site must not have an overlapping IP range and must use unique Autonomous System Numbers for each gateway.
2. Each site must have the same Autonomous System Numbers for each gateway and the IP address of each site should be within the VPC CIDR.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Each site should have the same Autonomous System Numbers and unique Border Gateway Protocol.




Question :

The following example shows a policy you could assign to Bob to allow him to manage his own access keys:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["iam:*AccessKey*"],
    "Resource": "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user/division_abc/subdivision_xyz/Bob"
  }]
}

What is the resource in this example?


1. iam
2. ACCOUNT-ID-WITHOUT-HYPHENS
3. Access Mostly Uused Products by 50000+ Subscribers
4. Resource is not correctly defined
5. Bob
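
As a side note, here is a minimal boto3 sketch of how a policy like the one above could be attached inline to the user; the policy name is hypothetical, and the account ID placeholder must be replaced with a real account ID:

import json
import boto3

iam = boto3.client("iam")

# Policy from the question above: Bob may manage only his own access keys.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["iam:*AccessKey*"],
        "Resource": "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user/division_abc/subdivision_xyz/Bob"
    }]
}

# Attach the policy inline to the user (policy name is illustrative).
iam.put_user_policy(
    UserName="Bob",
    PolicyName="ManageOwnAccessKeys",
    PolicyDocument=json.dumps(policy)
)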


Question :
Danial is the main administrator in HadoopExam Corp., and he decides to use paths to help delineate the
users in the company and set up a separate administrator group for each path-based division. Following is a subset
of the full list of paths he plans to use:
/marketing
/sales
/tech
/billing
/risk
Danial creates an administrator group for the marketing part of the company and calls it Marketing_Admin.
He assigns it the /marketing path. The group's ARN is arn:aws:iam::123456789012:group/marketing/Marketing_Admin.
Danial assigns the following policy to the Marketing_Admin group, which gives the group permission to use all
IAM actions on all groups and users in the /marketing path:
{ "Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iam:*",
"Resource": [
"arn:aws:iam::123456789012:group/marketing/*",
"arn:aws:iam::123456789012:user/marketing/*"
]
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::example_bucket/marketing/*"
},
{
"Effect": "Allow",
"Action": "s3:ListBucket*",
"Resource": "arn:aws:s3:::example_bucket",
"Condition": {"StringLike": {"s3:prefix": "marketing/*"}}
} ]}
The policy gives the Marketing_Admin group permission to perform
1. Any Amazon S3 actions on the objects in the portion of the corporate bucket dedicated to the marketing employees in the company
2. Only object addition on Amazon S3 in the portion of the corporate bucket dedicated to the marketing employees in the company
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above
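
For context, here is a minimal boto3 sketch of how such a path-scoped admin group might be created and given the inline policy above; the group name, path, and policy contents follow the scenario, while the inline policy name is hypothetical:

import json
import boto3

iam = boto3.client("iam")

# Create the marketing administrator group under the /marketing/ path, matching
# the ARN arn:aws:iam::123456789012:group/marketing/Marketing_Admin.
iam.create_group(GroupName="Marketing_Admin", Path="/marketing/")

# Inline policy from the scenario: all IAM actions on /marketing groups and users,
# plus S3 access limited to the marketing/ prefix of the corporate bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:*",
            "Resource": [
                "arn:aws:iam::123456789012:group/marketing/*",
                "arn:aws:iam::123456789012:user/marketing/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::example_bucket/marketing/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket*",
            "Resource": "arn:aws:s3:::example_bucket",
            "Condition": {"StringLike": {"s3:prefix": "marketing/*"}}
        }
    ]
}

iam.put_group_policy(
    GroupName="Marketing_Admin",
    PolicyName="MarketingAdminPolicy",  # hypothetical policy name
    PolicyDocument=json.dumps(policy)
)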