
AWS Certified Developer - Associate Questions and Answers (Dumps and Practice Questions)



Question : You are writing to a DynamoDB table and receive the following exception:
"ProvisionedThroughputExceededException", even though, according to your CloudWatch metrics
for the table, you are not exceeding your provisioned throughput.
What could be an explanation for this?
1. You haven't provisioned enough DynamoDB storage instances
2. You're exceeding your capacity on a particular Range Key
3. You're exceeding your capacity on a particular Hash Key
4. You're exceeding your capacity on a particular Sort Key
5. You haven't configured DynamoDB Auto Scaling triggers




Correct Answer : 3


Explanation: If you have checked in the AWS Management Console and verified that throttling events are occurring even when read capacity is well below provisioned capacity, the most likely explanation is that your hash keys are not evenly distributed. As your DynamoDB table grows in size and capacity, the DynamoDB service automatically splits your table into partitions. It then uses the hash key of each item to determine which partition to store the item in. In addition, your provisioned read capacity is split evenly among the partitions.

If you have a well-distributed hash key, this all works fine. But if your hash key is not well distributed it can cause all or most of your reads to come from a single partition. So, for example, if you had 10 partitions and you had a provisioned read capacity of 1000 on the table, each partition would have a read capacity of 100. If all of your reads are hitting one partition you will be throttled at 100 read units rather than 1000.

Unfortunately, the only way to really fix this problem is to choose a better-distributed hash key and rewrite the table with those hash key values.
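One common way to get a better-distributed hash key is write sharding. The following is a minimal sketch, assuming boto3 and a hypothetical GameScores table, in which a hot hash key gets a random suffix so that writes spread across several partitions instead of throttling a single one:

import random
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("GameScores")  # hypothetical table name

NUM_SHARDS = 10  # number of key suffixes to spread the load over

def put_score(game_id, player_id, score):
    # "game-1" becomes "game-1#0" .. "game-1#9", so items for a hot game
    # are distributed across partitions rather than hitting just one.
    sharded_key = f"{game_id}#{random.randint(0, NUM_SHARDS - 1)}"
    table.put_item(
        Item={
            "GameId": sharded_key,   # hash (partition) key
            "PlayerId": player_id,   # range (sort) key
            "Score": score,
        }
    )

The trade-off is on the read side: retrieving all items for a game then requires querying each suffixed key and merging the results.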






Question :

If you're executing code against AWS on an EC2 instance that is assigned an IAM role, which of the following is a true statement?
1. The code will assume the same permissions as the EC2 role
2. The code must have AWS access keys in order to execute
3. Only Python code can assume IAM roles
4. None of the above



Correct Answer : 1


Explanation:


IAM Roles for Amazon EC2

Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting them from other users. However, it's challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials.

We designed IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles as follows:

1. Create an IAM role.
2. Define which accounts or AWS services can assume the role.
3. Define which API actions and resources the application can use after assuming the role.
4. Specify the role when you launch your instances.
5. Have the application retrieve a set of temporary credentials and use them.

For example, you can use IAM roles to grant permissions to applications running on your instances that need to use a bucket in Amazon S3.
Note : Amazon EC2 uses an instance profile as a container for an IAM role. When you create an IAM role using the console, the console creates an instance profile automatically and gives it the same name as the role it corresponds to. If you use the AWS CLI, API, or an AWS SDK to create a role, you create the role and instance profile as separate actions, and you might give them different names. To launch an instance with an IAM role, you specify the name of its instance profile. When you launch an instance using the Amazon EC2 console, you can select a role to associate with the instance; however, the list that's displayed is actually a list of instance profile names. For more information, see Instance Profiles in the Using IAM guide.

You can specify permissions for IAM roles by creating a policy in JSON format. These are similar to the policies that you create for IAM users. If you make a change to a role, the change is propagated to all instances, simplifying credential management.

Note : You can't assign a role to an existing instance; you can only specify a role when you launch a new instance.
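As a minimal sketch of what this looks like in application code (assuming boto3 and a hypothetical bucket named my-app-bucket), the snippet below makes S3 calls with no access keys anywhere in code or configuration; on an instance launched with a role, the SDK fetches temporary credentials from the instance metadata service and refreshes them as they rotate:

import boto3

# No access key or secret key is configured anywhere: on an EC2 instance with an
# attached IAM role, boto3 retrieves temporary credentials automatically.
s3 = boto3.client("s3")

response = s3.list_objects_v2(Bucket="my-app-bucket")  # hypothetical bucket name
for obj in response.get("Contents", []):
    print(obj["Key"])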






Question : Company C is currently hosting their corporate site in an Amazon S3 bucket with Static
Website Hosting enabled. Currently, when visitors go to http://www.companyc.com the
index.html page is returned. Company C now would like a new page welcome.html to be
returned when a visitor enters http://www.companyc.com in the browser.
Which of the following steps will allow Company C to meet this requirement? Choose 2 answers

A. Upload an html page named welcome.html to their S3 bucket
B. Create a welcome subfolder in their S3 bucket
C. Set the Index Document property to welcome.html
D. Move the index.html page to a welcome subfolder
E. Set the Error Document property to welcome.html
1. A,B
2. B,C
3. C,E
4. B,D
5. A,C

Correct Answer : 5

Explanation: An index document is a webpage that is returned when a request is made to the root of a website or any subfolder. For example, if a user enters http://www.example.com in the browser, the user is not requesting any specific page. In that case, Amazon S3 serves up the index document, which is sometimes referred to as the default page.

When you configure your bucket as a website, you should provide the name of the index document. You must upload an object with this name and configure it to be publicly readable.
The trailing slash at the root-level URL is optional. For example, if you configure your website with index.html as the index document, either of the following two URLs will return index.html.

http://example-bucket.s3-website-region.amazonaws.com/
http://example-bucket.s3-website-region.amazonaws.com

For example, consider a bucket with three objects and the following key names:

sample1.jpg

photos/2006/Jan/sample2.jpg

photos/2006/Feb/sample3.jpg

Although these are stored with no physical hierarchical organization, you can infer the following logical folder structure from the key names.

sample1.jpg object is at the root of the bucket

sample2.jpg object is in the photos/2006/Jan subfolder, and

sample3.jpg object is in photos/2006/Feb subfolder.
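To tie this back to the question, here is a minimal sketch (assuming boto3 and a hypothetical bucket named www.companyc.com) of the two correct steps: uploading welcome.html and setting the Index Document property to it:

import boto3

s3 = boto3.client("s3")
bucket = "www.companyc.com"  # hypothetical bucket name

# Step A: upload the new page (it must also be publicly readable, for example
# via the same bucket policy that already serves index.html).
with open("welcome.html", "rb") as body:
    s3.put_object(Bucket=bucket, Key="welcome.html", Body=body, ContentType="text/html")

# Step C: set the Index Document property so requests to the root return welcome.html.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "welcome.html"}},
)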



Related Questions


Question : You are providing AWS consulting services for a company developing a new mobile
application that will be leveraging Amazon SNS Mobile Push for push notifications. In order
to send direct notification messages to individual devices, each device registration identifier
or token needs to be registered with SNS; however, the developers are not sure of the best
way to do this. You advise them to:
1. Bulk upload the device tokens contained in a CSV file via the AWS Management Console.
2. Let the push notification service (e.g. Amazon Device Messaging) handle the registration.
3. Implement a token vending service to handle the registration.
4. Call the CreatePlatformEndpoint API function to register multiple device tokens.
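If the intended answer is to call CreatePlatformEndpoint for each device token, a minimal sketch (assuming boto3 and a hypothetical platform application ARN) would look like this:

import boto3

sns = boto3.client("sns")

# Hypothetical platform application ARN created beforehand for the mobile app.
PLATFORM_APP_ARN = "arn:aws:sns:us-east-1:123456789012:app/GCM/MyMobileApp"

def register_device(device_token):
    # One call per device token; the returned EndpointArn is what you later
    # publish to when sending a direct push notification to that device.
    response = sns.create_platform_endpoint(
        PlatformApplicationArn=PLATFORM_APP_ARN,
        Token=device_token,
    )
    return response["EndpointArn"]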


Question : If you want to delay a message for some time in the queue, which is the best way?
1. Using a delay queue
2. Using a dead-letter queue
3. Using a normal queue but setting the DelaySeconds attribute on the message
4. 1 and 3
5. All 1,2 and 3
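Both delay mechanisms the options refer to appear in this minimal sketch, assuming boto3 and hypothetical queue names: a delay queue (the queue-level DelaySeconds attribute) and a per-message DelaySeconds parameter on a normal queue:

import boto3

sqs = boto3.client("sqs")

# Option 1: a delay queue - every message sent to it is hidden for 60 seconds.
delay_queue = sqs.create_queue(
    QueueName="orders-delay-queue",          # hypothetical queue name
    Attributes={"DelaySeconds": "60"},
)

# Option 3: a normal queue, delaying just this one message by 45 seconds.
normal_queue = sqs.create_queue(QueueName="orders-queue")
sqs.send_message(
    QueueUrl=normal_queue["QueueUrl"],
    MessageBody='{"orderId": 42}',
    DelaySeconds=45,
)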



Question : The total size of all the messages that you send in a single call to SendMessageBatch cannot exceed
1. 256 KB
2. 1024 KB
3. 64 KB
4. There is practically no size limit in case of batch
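For context on what "a single call to SendMessageBatch" covers, here is a minimal sketch assuming boto3 and a hypothetical queue URL; the combined size of all entries in the batch is what must stay within the limit, not each message individually:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"  # hypothetical

# Up to 10 entries per call; the total payload of all entries together is
# subject to the batch size limit.
sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[
        {"Id": "msg-1", "MessageBody": '{"orderId": 1}'},
        {"Id": "msg-2", "MessageBody": '{"orderId": 2}'},
        {"Id": "msg-3", "MessageBody": '{"orderId": 3}'},
    ],
)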


Question : Which of the following kinds of monitoring are possible with Amazon SQS and CloudWatch?

1. You can gain better insight into the performance of your Amazon SQS queues and applications
2. You can monitor the NumberOfEmptyReceives metric to make sure that your application isn't spending too much of its time polling for new messages
3. You can set an alarm to send you an email notification if a specified threshold is met for an Amazon SQS metric, such as NumberOfMessagesReceived
4. Only 2 and 3
5. All 1,2 and 3
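As an illustration of option 3, a minimal sketch (assuming boto3, a hypothetical queue name, and a hypothetical SNS topic that sends the email) that raises an alarm on the NumberOfMessagesReceived metric:

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-queue-high-receive-rate",   # hypothetical alarm name
    Namespace="AWS/SQS",
    MetricName="NumberOfMessagesReceived",
    Dimensions=[{"Name": "QueueName", "Value": "orders-queue"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    # Hypothetical SNS topic that notifies the on-call address when the alarm fires.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)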


Question : You have written an application that uses the Elastic Load Balancing service to spread
traffic to several web servers. Your users complain that they are sometimes forced to log in
again in the middle of using your application, after they have already logged in. This is not
behavior you have designed.
What is a possible solution to prevent this from happening?


1. Use instance memory to save session state.
2. Use instance storage to save session state.
3. Use EBS to save session state.
4. Use ElastiCache to save session state.
5. Use Glacier to save session state.
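If the intended answer is externalizing session state to ElastiCache, a minimal sketch (assuming the redis-py client and a hypothetical ElastiCache Redis endpoint) looks like this; any web server behind the load balancer can then read the same session, so users are not logged out when traffic shifts between instances:

import json
import redis

# Hypothetical ElastiCache (Redis) cluster endpoint.
cache = redis.Redis(host="sessions.abc123.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # Stored centrally, not in the memory or disk of a single web server.
    cache.setex(session_id, ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = cache.get(session_id)
    return json.loads(raw) if raw else None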


Question : Which of the following services would you use to capture "API calls made by or on behalf of Amazon SQS in your AWS account"?



1. CloudWatch
2. CloudTrail
3. SQS Monitoring tool
4. 1 and 2
5. Either of 1,2 and 3
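If the intended answer is CloudTrail, the service that records AWS API activity, a minimal sketch (assuming boto3) of looking up recent SQS API events would be:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Filter the trail for events emitted by the SQS service endpoint.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "sqs.amazonaws.com"}
    ],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventName"], event["EventTime"])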