Question : Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
Hint: These are API calls; if you're using an SDK, the SDK might use a different method name, but it still executes the same API call against AWS.
Tip: Understand why the correct answer is correct and what the other API calls do. You might see a question worded differently, with a different answer, but with exactly the same answer options.
1. Set the imaging queue VisibilityTimeout attribute to 20 seconds
2. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds
3.
4. Set the DelaySeconds parameter of a message to 20 seconds
Explanation: Enabling long polling reduces the number of empty responses (and false empty responses) returned by the SQS service. It also reduces the number of calls made to a queue, because a ReceiveMessage request stays connected to the queue until a message becomes available or the wait time expires. To enable long polling, set the ReceiveMessageWaitTimeSeconds attribute to a value greater than 0 (up to a maximum of 20 seconds). If it is set to 0, short polling is used.
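As a concrete illustration, here is a minimal sketch using the Python boto3 SDK (the queue URL is a placeholder) that enables long polling on the queue and then polls it; in boto3 the SetQueueAttributes and ReceiveMessage API calls are exposed as set_queue_attributes and receive_message.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue"  # placeholder

# Enable long polling on the queue itself (SetQueueAttributes API call).
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# ReceiveMessage now waits up to 20 seconds for a message to arrive instead of
# returning immediately with an empty response.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # per-call long polling; overrides the queue default
)

for message in response.get("Messages", []):
    print(message["Body"])
    # Delete the message once it has been processed successfully.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])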
Question : Which of the following statements about SQS is true?
1. Messages will be delivered exactly once and messages will be delivered in first-in, first-out order
2. Messages will be delivered exactly once and message delivery order is indeterminate
3.
4. Messages will be delivered one or more times and message delivery order is indeterminate
Explanation: SQS guarantees at-least-once delivery of each message. However, because of the highly available, distributed design of SQS, it cannot guarantee that duplicates will never be delivered. It is up to the application to handle duplicates, for example by processing messages idempotently.
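To make the at-least-once behaviour concrete, here is a minimal sketch of application-side deduplication using the Python boto3 SDK; the queue URL and the process() function are hypothetical, and the in-memory set stands in for a durable store (such as a DynamoDB table) that a real system would use.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue"  # placeholder
seen_message_ids = set()  # illustration only; a real system would persist this

def process(body):
    # Hypothetical business logic that must run at most once per message.
    print("processing:", body)

while True:
    response = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        # SQS may deliver the same message more than once, so skip known IDs.
        if message["MessageId"] not in seen_message_ids:
            process(message["Body"])
            seen_message_ids.add(message["MessageId"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])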
Explanation: To configure the maximum message size, set the MaximumMessageSize attribute using the SetQueueAttributes API call. This attribute specifies the limit on how many bytes an SQS message can contain and can be set anywhere from 1,024 bytes (1 KB) up to 65,536 bytes (64 KB).
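For example, a minimal sketch with the Python boto3 SDK (the queue URL is a placeholder) that caps messages at 64 KB through the SetQueueAttributes API call:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue"  # placeholder

# MaximumMessageSize is given in bytes and must be passed as a string.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MaximumMessageSize": "65536"},  # 64 KB
)
# SendMessage calls with a larger payload will be rejected with an error.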
1. US-STANDARD uses eventual consistency and it can take time for an object to be readable in a bucket.
2. Objects in Amazon S3 do not become visible until they are replicated to a second region.
3.
4. You exceeded the bucket object limit and once this limit is raised the object will be visible.
Ans : 4 Exp : Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3)
Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
Amazon S3 supports bucket policies that you can use if you require server-side encryption for all objects that are stored in your bucket. For example, the following bucket policy denies upload object (s3:PutObject) permission to everyone if the request does not include the x-amz-server-side-encryption header requesting server-side encryption.
{ "Version":"2012-10-17", "Id":"PutObjPolicy", "Statement":[{ "Sid":"DenyUnEncryptedObjectUploads", "Effect":"Deny", "Principal":"*", "Action":"s3:PutObject", "Resource":"arn:aws:s3:::YourBucket/*", "Condition":{ "StringNotEquals":{ "s3:x-amz-server-side-encryption":"AES256" } } } ] } Server-side encryption encrypts only the object data. Any object metadata is not encrypted.
Question : Select the correct statement about DynamoDB?
1. Store-backed instances can be stopped and restarted
2. Auto Scaling requires using Amazon EBS-backed instances
3.
4. Virtual Private Cloud requires EBS-backed instances
1. Use CloudFront distributions for static content.
2. Remove public read access and use signed URLs with expiry dates.
3.
4. Store photos on an EBS volume of the web server.
1. Contact support to increase your object size limit
2. Use the large object upload API for this object
3.
4. None, Simple Storage Service objects are limited to 5 GB