
AWS Certified Developer - Associate Questions and Answers (Dumps and Practice Questions)



Question : Company B provides an online image recognition service and uses SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?

Hint: these are API calls; if you're using an SDK, the SDK might use a different method name, but it still executes the same API call below against AWS.
Tip: Understand why the answer is correct and what the other API calls do. You might see a question worded differently, with a different answer, but with the exact same answer options.


1. Set the imaging queue VisibilityTimeout attribute to 20 seconds
2. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds
3. Access Mostly Uused Products by 50000+ Subscribers
4. Set the DelaySeconds parameter of a message to 20 seconds

Correct Answer : 2

Enabling long polling reduces the number of empty responses from the SQS service. It also reduces the number of calls made to a queue, because each ReceiveMessage request stays connected until a message arrives or the wait time elapses. To enable long polling, set the ReceiveMessageWaitTimeSeconds attribute to a value greater than 0 (up to 20 seconds); if it is set to 0, short polling is used.
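The difference between short and long polling can be sketched without AWS at all. The following pure-Python simulation (using the standard library's queue module; the function name receive_message and its parameters are illustrative, not the real SQS API) shows why a wait time greater than 0 eliminates the tight-loop empty responses:

```python
import queue
import time

def receive_message(q, wait_time_seconds=0, poll_interval=0.05):
    """Simulate SQS ReceiveMessage: with wait_time_seconds == 0 (short
    polling) return immediately; with a value up to 20 (long polling),
    block until a message arrives or the wait time elapses."""
    deadline = time.monotonic() + wait_time_seconds
    while True:
        try:
            return q.get_nowait()
        except queue.Empty:
            if time.monotonic() >= deadline:
                return None  # empty response
            time.sleep(poll_interval)

# Short polling on an empty queue returns an empty response immediately,
# which is exactly the wasted call the question describes.
imaging_queue = queue.Queue()
assert receive_message(imaging_queue, wait_time_seconds=0) is None

# With long polling, a message that is already in (or arrives into) the
# queue during the wait is still received by the same call.
imaging_queue.put("image-job-1")
assert receive_message(imaging_queue, wait_time_seconds=20) == "image-job-1"
```

With a real SDK the same idea is a single parameter on the receive call (or the queue attribute above), not a hand-written loop.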






Question :

Which of the following statements about SQS is true?


1. Messages will be delivered exactly once and messages will be delivered in first-in, first-out order
2. Messages will be delivered exactly once and message delivery order is indeterminate
3. Access Mostly Uused Products by 50000+ Subscribers
4. Messages will be delivered one or more times and message delivery order is indeterminate

Correct Answer : 4

Explanation: SQS guarantees at-least-once delivery of each message. However, due to the highly available, distributed design of SQS, it cannot guarantee that duplicates will never be delivered. It is up to the application to handle duplicate messages.
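One common way to satisfy the "application must handle duplicates" requirement is to make the consumer idempotent. A minimal sketch (the wrapper name and message-ID scheme are made up for illustration; a real system would persist seen IDs, for example in a database, rather than keep them in memory):

```python
def make_idempotent_handler(handler):
    """Wrap a message handler so redelivered (duplicate) SQS messages,
    identified by their message ID, are processed only once."""
    seen_ids = set()
    def wrapped(message_id, body):
        if message_id in seen_ids:
            return False  # duplicate delivery: skip processing
        seen_ids.add(message_id)
        handler(body)
        return True
    return wrapped

processed = []
handle = make_idempotent_handler(processed.append)
assert handle("msg-1", "resize photo.jpg") is True
assert handle("msg-1", "resize photo.jpg") is False  # duplicate ignored
assert processed == ["resize photo.jpg"]
```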





Question : Amazon SQS max message size is ____.


1. 64KB
2. 128KB
3. Access Mostly Uused Products by 50000+ Subscribers
4. 16KB


Correct Answer : 1


Explanation: To configure the maximum message size, set the MaximumMessageSize attribute using the SetQueueAttributes action. This attribute specifies the limit on how many bytes an SQS message can contain. It can be set anywhere from 1,024 bytes (1 KB) up to 65,536 bytes (64 KB).
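A producer can guard against oversized messages before calling SendMessage. A small sketch, assuming the size limits quoted above (SQS counts the UTF-8 encoded bytes of the body, so multi-byte characters cost more than one byte each; the helper name is illustrative):

```python
def fits_queue_limit(body: str, maximum_message_size: int = 65536) -> bool:
    """Check a message body against the queue's MaximumMessageSize
    attribute, measuring the UTF-8 encoded size of the body."""
    return len(body.encode("utf-8")) <= maximum_message_size

assert fits_queue_limit("x" * 65536)           # exactly at the 64 KB cap
assert not fits_queue_limit("x" * 65537)       # one byte over
assert not fits_queue_limit("\u00e9" * 40000)  # 40,000 chars but 80,000 bytes
```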


Related Questions


Question :

You attempt to store an object in the US-STANDARD region in Amazon S3 and receive a confirmation
that it has been successfully stored. You then immediately make another API call and attempt to read this object.
S3 tells you that the object does not exist. What could explain this behavior?



1. US-STANDARD uses eventual consistency, so it can take time for an object to become readable in a bucket.
2. Objects in Amazon S3 do not become visible until they are replicated to a second region.
3. Access Mostly Uused Products by 50000+ Subscribers
4. You exceeded the bucket object limit and once this limit is raised the object will be visible.
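Eventual consistency, as described in option 1, is usually mitigated on the client side by retrying the read with backoff until the object becomes visible. A hedged sketch (the fetch callable stands in for an S3 GET; all names here are illustrative):

```python
import time

def read_with_retry(fetch, key, attempts=5, base_delay=0.1):
    """Retry a read until the eventually consistent store returns the
    object, backing off exponentially between attempts. `fetch` is any
    callable that returns the object, or None if it is not yet visible."""
    for attempt in range(attempts):
        obj = fetch(key)
        if obj is not None:
            return obj
        time.sleep(base_delay * (2 ** attempt))
    raise KeyError(f"{key} not visible after {attempts} attempts")

# Simulated store that only becomes consistent on the third read.
calls = {"n": 0}
def flaky_fetch(key):
    calls["n"] += 1
    return "photo-bytes" if calls["n"] >= 3 else None

assert read_with_retry(flaky_fetch, "photos/cat.jpg") == "photo-bytes"
assert calls["n"] == 3
```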


Question : When uploading an object, what request header can be explicitly specified in a request to
Amazon S3 to encrypt object data when saved on the server side?
1. x-amz-storage-class
2. Content-MD5
3. Access Mostly Uused Products by 50000+ Subscribers
4. x-amz-server-side-encryption

Ans : 4
Exp : Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3)

Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.

Amazon S3 supports bucket policies that you can use if you require server-side encryption for all objects that are stored in your bucket. For example, the following bucket policy denies upload object (s3:PutObject) permission to everyone if the request does not include the x-amz-server-side-encryption header requesting server-side encryption.

{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::YourBucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
Server-side encryption encrypts only the object data. Any object metadata is not encrypted.
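The effect of that Deny statement can be illustrated with a small simulation of its condition check (this mimics the policy's logic only; it is not how S3 evaluates policies internally, and the function name is made up):

```python
def allows_put(headers: dict) -> bool:
    """Mimic the Deny statement above: a PutObject request is rejected
    unless it carries x-amz-server-side-encryption: AES256. HTTP header
    names are case-insensitive, so normalize before comparing."""
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-amz-server-side-encryption") == "AES256"

assert allows_put({"x-amz-server-side-encryption": "AES256"})
assert allows_put({"X-Amz-Server-Side-Encryption": "AES256"})
assert not allows_put({"Content-MD5": "abc"})  # header missing: denied
assert not allows_put({"x-amz-server-side-encryption": "aws:kms"})
```

Note the last case: because the policy uses StringNotEquals against "AES256", any other value is also denied by this particular example policy.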


Question : Select the correct statement about DynamoDB?

1. DynamoDB restricts item access during reads
2. DynamoDB restricts item access during writes
3. Access Mostly Uused Products by 50000+ Subscribers
4. DynamoDB uses conditional writes for consistency




Question : Select the correct statements about DynamoDB?

1. DynamoDB does not support conditional writes
2. DynamoDB uses optimistic concurrency control
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above
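Both DynamoDB questions above revolve around conditional writes and optimistic concurrency control: items are never locked during reads or writes; instead, a write can be made conditional on what the writer last saw. A tiny in-memory sketch of the idea (this is not the DynamoDB API; a real conditional write uses a ConditionExpression on UpdateItem):

```python
class OptimisticTable:
    """In-memory sketch of DynamoDB-style conditional writes: an update
    succeeds only if the item's current version matches the one the
    caller read, so no read or write ever locks the item."""
    def __init__(self):
        self.items = {}  # key -> (version, value)

    def put(self, key, value):
        self.items[key] = (1, value)

    def get(self, key):
        return self.items[key]  # (version, value)

    def conditional_update(self, key, expected_version, new_value):
        version, _ = self.items[key]
        if version != expected_version:
            return False  # ConditionalCheckFailed: someone wrote first
        self.items[key] = (version + 1, new_value)
        return True

table = OptimisticTable()
table.put("img-1", "pending")
v, _ = table.get("img-1")
assert table.conditional_update("img-1", v, "processed") is True
# A second writer still holding the stale version fails instead of
# silently overwriting the first update.
assert table.conditional_update("img-1", v, "failed") is False
assert table.get("img-1") == (2, "processed")
```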


Question : What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

1. store backed instances can be stopped and restarted
2. Auto scaling requires using Amazon EBS - backed instances
3. Access Mostly Uused Products by 50000+ Subscribers
4. Virtual Private Cloud requires EBS backed instances




Question :

You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site.
At some point you find out that other sites have been linking to the photos on your site, causing loss to your business.
What is an effective method to mitigate this?


1. Use CloudFront distributions for static content.
2. Remove public read access and use signed URLs with expiry dates.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Store photos on an EBS volume of the web server.
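The signed-URL mitigation in option 2 works because the URL embeds an expiry time that the signature covers, so a copied link stops working once it expires. A self-contained sketch of the idea using a plain HMAC (illustrative only: this is not S3's actual Signature Version 4 presigning, and the secret key and parameter names are made up):

```python
import hashlib
import hmac

SECRET = b"demo-signing-key"  # placeholder; a real deployment keeps this private

def sign_url(path: str, expires_at: int) -> str:
    """Build a URL whose signature covers the object path and an
    expiry timestamp (seconds)."""
    sig = hmac.new(SECRET, f"{path}|{expires_at}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?Expires={expires_at}&Signature={sig}"

def is_valid(url: str, now: int) -> bool:
    """Accept only unexpired URLs whose signature matches the path."""
    path, query = url.split("?", 1)
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires_at = int(params["Expires"])
    expected = hmac.new(SECRET, f"{path}|{expires_at}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["Signature"]) and now < expires_at

url = sign_url("/photos/cat.jpg", expires_at=1000)
assert is_valid(url, now=999)            # before expiry: served
assert not is_valid(url, now=1001)       # after expiry: hotlink goes stale
assert not is_valid(url.replace("cat", "dog"), now=999)  # tampered path rejected
```

With public read access removed from the bucket, only URLs your site generates (and only until they expire) can fetch the photos.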



Question :

Your application is trying to upload a 6 GB file to Simple Storage Service and you receive a
"Your proposed upload exceeds the maximum allowed object size." error message. What is a possible solution for this?

1. Contact support to increase your object size limit
2. Use the large object upload API for this object
3. Access Mostly Uused Products by 50000+ Subscribers
4. None, Simple Storage Service objects are limited to 5 GB
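The standard fix for the 5 GB single-PUT limit is multipart upload: split the object into parts (at least 5 MB each, except the last), upload them independently, then complete the upload so S3 assembles them. A down-scaled sketch of the splitting step (pure Python; only the chunking is shown, with the payload shrunk from 6 GB to 6 MB so it runs quickly):

```python
import io

def split_into_parts(stream, part_size):
    """Split an upload into parts, mirroring how S3 multipart upload
    sidesteps the single-PUT size limit: each part is uploaded (and can
    be retried) independently, then the parts are combined."""
    parts = []
    while True:
        chunk = stream.read(part_size)
        if not chunk:
            break
        parts.append(chunk)
    return parts

# A 6 GB object scaled down: a 6 MB payload in 5 MB parts (5 MB is the
# real minimum part size for every part except the last).
payload = b"x" * (6 * 1024 * 1024)
parts = split_into_parts(io.BytesIO(payload), part_size=5 * 1024 * 1024)
assert len(parts) == 2
assert len(parts[0]) == 5 * 1024 * 1024
assert b"".join(parts) == payload
```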