
AWS Certified Developer - Associate Questions and Answers (Dumps and Practice Questions)



Question : You have a big file stored in your local data center's Hadoop Distributed File System (HDFS). The file is 5 TB in size, and you want to transfer it to Amazon S3 as fast as possible. Which of the following options would help you accomplish this?

1. Upload this single file as it is; Amazon will automatically parallelize the upload.
2. Divide this file into 5 parts (1 TB each) and upload them in parallel.
3. Use the Multipart Upload API to upload the file in parts, in parallel.
4. The maximum S3 object size is 1 TB, so the file cannot be uploaded.


Correct Answer : 3


Explanation: Depending on the size of the data you are uploading, Amazon S3 offers the following options:

Upload objects in a single operation: With a single PUT operation, you can upload objects up to 5 GB in size.

Upload objects in parts: Using the Multipart Upload API, you can upload large objects, up to 5 TB.

The Multipart Upload API is designed to improve the upload experience for larger objects. You can upload an object in parts; these parts can be uploaded independently, in any order, and in parallel. You can use Multipart Upload for objects from 5 MB to 5 TB in size. For more information, see Uploading Objects Using Multipart Upload.

Amazon encourages S3 customers to use Multipart Upload for objects larger than 100 MB.

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes.
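
As an illustration of the parallel multipart approach, here is a minimal boto3 sketch using the SDK's managed transfer; the file path, bucket name, and tuning values are hypothetical examples, not part of the question:

import boto3
from boto3.s3.transfer import TransferConfig

# Use multipart for anything above 100 MB, in 100 MB parts,
# with up to 10 parts uploaded concurrently.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=10,
)

s3 = boto3.client("s3")
s3.upload_file("/data/big-file.bin", "my-bucket", "big-file.bin", Config=config)

Because the parts are independent, the SDK uploads them in parallel threads, which is what makes multipart the fast option for a multi-terabyte file.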





Question : Which of the following is a feature of IAM?

A. Central control of users and security credentials: You can control creation, rotation, and revocation of each user's AWS security credentials (such as access keys).

B. Central control of user access: You can control what data in the AWS system users can access and how they access it.

C. Shared AWS resources: Users can share data for collaborative projects.

D. Permissions based on organizational groups: You can restrict users' AWS access based on their job duties (for example, admin, developer, etc.) or departments. When users move inside the organization, you can easily update their AWS access to reflect the change in their role.

E. Central control of AWS resources: Your organization maintains central control of the AWS data the users create, with no breaks in continuity or lost data as users move around within or leave the organization.
1. A,B,C only
2. A,D,E Only
3. …
4. A,B,C,D Only
5. A,B,C,D,E All mentioned

Correct Answer : 5

IAM includes the following features:

Central control of users and security credentials: You can control creation, rotation, and revocation of each user's AWS security credentials (such as access keys).
Central control of user access: You can control what data in the AWS system users can access and how they access it.
Shared AWS resources: Users can share data for collaborative projects.
Permissions based on organizational groups: You can restrict users' AWS access based on their job duties (for example, admin, developer, etc.) or departments. When users move inside the organization, you can easily update their AWS access to reflect the change in their role.
Central control of AWS resources: Your organization maintains central control of the AWS data the users create, with no breaks in continuity or lost data as users move around within or leave the organization.
Control over resource creation: You can help make sure that users create AWS data only in sanctioned places.
Networking controls: You can help make sure that users can access AWS resources only from within the organization's corporate network, using SSL.
Single AWS bill: Your organization's AWS account gets a single AWS bill for all your users' AWS activity.
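
To make the group-based model in D concrete, here is a minimal boto3 sketch; the group name, user name, and attached managed policy are hypothetical examples:

import boto3

iam = boto3.client("iam")

# Permissions based on organizational groups: users inherit
# whatever policies the group carries.
iam.create_group(GroupName="developers")
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="developers", UserName="alice")

# Central credential control: create (and later rotate or revoke)
# access keys for each user.
key = iam.create_access_key(UserName="alice")["AccessKey"]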







Question : Will all Read Replicas of a source DB Instance be automatically deleted as soon as the source DB Instance is deleted?

1. Only for MySQL RDS types
2. No, for all RDS types
3. …
4. Yes, for all RDS types


Correct Answer : 2

A Read Replica stays active and continues accepting read traffic even after its corresponding source DB Instance has been deleted. If you want to delete the Read Replica as well, you must explicitly delete it using the DeleteDBInstance API or the AWS Management Console.
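
A minimal sketch of that explicit deletion with boto3 (the replica identifier is a hypothetical example):

import boto3

rds = boto3.client("rds")

# The Read Replica outlives its deleted source, so it must be
# removed explicitly, e.g. via the DeleteDBInstance API.
rds.delete_db_instance(
    DBInstanceIdentifier="mydb-read-replica-1",
    SkipFinalSnapshot=True,
)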



Related Questions


Question : What is the maximum number of S3 buckets allowed per AWS account?


1. 50
2. 150
3. 100
4. 1000




Question : Company C has recently launched an online commerce site for bicycles on AWS. They have a "Product" DynamoDB table that stores details for each bicycle, such as manufacturer, color, price, quantity, and size, to display in the online store. Due to customer demand, they want to include an image for each bicycle along with the existing details. Which approach below has the least impact on provisioned throughput for the "Product" table?

1. Serialize the image and store it in multiple DynamoDB tables
2. Create an "Images" DynamoDB table to store the Image with a foreign key constraint to the "Product" table
3. Add an image data type to the "Product" table to store the images in binary format
4. Store the images in Amazon S3 and add an S3 URL pointer to the "Product" table item for each image
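
Option 4 describes the common pattern of keeping large blobs in S3 and storing only a small pointer in DynamoDB, so item size and consumed write capacity stay minimal. A rough sketch, assuming a hypothetical bucket, key schema, and attribute names:

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("Product")

bucket, key = "companyc-product-images", "bike-123.jpg"
s3.upload_file("bike-123.jpg", bucket, key)

# Only the short URL string is written to the table item.
table.update_item(
    Key={"ProductId": "bike-123"},
    UpdateExpression="SET ImageUrl = :u",
    ExpressionAttributeValues={":u": f"https://{bucket}.s3.amazonaws.com/{key}"},
)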


Question : Your application is trying to upload a large file (over 5 GB) to Simple Storage Service and receives a "Your proposed upload exceeds the maximum allowed object size." error message. What is a possible solution for this?

1. Simple Storage Service objects are limited to 5 GB
2. Use the multipart upload API for this object
3. Use the large object upload API for this object
4. Contact support to increase your object size limit
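
For reference, the low-level multipart calls behind option 2 look roughly like this in boto3, assuming a hypothetical bucket, key, and local file; parts must be at least 5 MB (except the last):

import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "big-file.bin"
part_size = 100 * 1024 * 1024

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
with open("big-file.bin", "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(part_size)
        if not chunk:
            break
        resp = s3.upload_part(
            Bucket=bucket, Key=key, PartNumber=part_number,
            UploadId=mpu["UploadId"], Body=chunk,
        )
        parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
        part_number += 1

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)

The parts are uploaded sequentially here for brevity; in practice they can be sent in any order and in parallel.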




Question : You successfully upload an item to the US-STANDARD region. You then immediately make another API call to read the object and receive an HTTP 404 error. What is the most likely cause of this behavior?


1. US-STANDARD uses eventual consistency and it can take time for an object to be readable in a bucket

2. Objects in Amazon S3 do not become visible until they are replicated to a second region

3. US-STANDARD imposes a 1 second delay before new objects are readable

4. You exceeded the bucket object limit, and once this limit is raised the object will be visible





Question : If you set VisibilityTimeout=0 on an Amazon SQS queue message by calling ChangeMessageVisibility, then

1. Message will be immediately deleted from all the queues
2. Message will never be deleted from the queue until you change this value again
3. Message will remain in the queue, but no component can process it
4. Immediately makes the message visible to other components in the system to process
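
For context, ChangeMessageVisibility is called on a message you have already received; a minimal boto3 sketch (the queue name is hypothetical, and it assumes a message was actually returned):

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="orders")["QueueUrl"]

msg = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)["Messages"][0]

# Setting the timeout to 0 terminates this message's visibility
# timeout, so other consumers can receive it immediately.
sqs.change_message_visibility(
    QueueUrl=queue_url,
    ReceiptHandle=msg["ReceiptHandle"],
    VisibilityTimeout=0,
)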


Question : Which statements about DynamoDB are true? Choose 2 answers
A. DynamoDB uses a pessimistic locking model
B. DynamoDB uses optimistic concurrency control
C. DynamoDB uses conditional writes for consistency
D. DynamoDB restricts item access during reads
E. DynamoDB restricts item access during writes
1. A,B
2. B,C
3. C,D
4. D,E
5. A,E
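
DynamoDB's optimistic concurrency control is exposed through conditional writes: the write succeeds only if the item still looks the way the caller expects, and no lock is held in the meantime. A minimal sketch, assuming a hypothetical "Product" table and key schema:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Product")

try:
    table.update_item(
        Key={"ProductId": "bike-123"},
        UpdateExpression="SET Price = :new",
        # Optimistic check: only write if the price is unchanged
        # since we last read it.
        ConditionExpression="Price = :expected",
        ExpressionAttributeValues={":new": 275, ":expected": 300},
    )
except ClientError as e:
    if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("Item changed since it was read; re-read and retry")
    else:
        raise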