Question : You have a big file (5 TB) stored in your local data center's Hadoop Distributed File System (HDFS), and you want to transfer this file to Amazon S3 as fast as possible. Which of the following options would help you accomplish this?
1. Upload this single file as is and Amazon will automatically parallelize the upload
2. Divide this file into 5 parts (1 TB each) and upload them in parallel
3. Use the S3 Multipart Upload API
4. The maximum object size that can be uploaded to S3 is only 1 TB
Explanation: Depending on the size of the data you are uploading, Amazon S3 offers the following options:
Upload objects in a single operation: with a single PUT operation, you can upload objects up to 5 GB in size.
Upload objects in parts: using the Multipart Upload API, you can upload large objects, up to 5 TB.
The Multipart Upload API is designed to improve the upload experience for larger objects. You can upload objects in parts. These object parts can be uploaded independently, in any order, and in parallel. You can use a Multipart Upload for objects from 5 MB to 5 TB in size. For more information, see Uploading Objects Using Multipart Upload.
Amazon encourages Amazon S3 customers to use Multipart Upload for objects larger than 100 MB.
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes.
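To make the multipart mechanics concrete, here is a minimal sketch using the low-level Multipart Upload calls in boto3. The bucket name, key, and file path are hypothetical, and a real transfer of a 5 TB file would also spread the upload_part calls across threads or machines:

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-target-bucket", "exports/bigfile.dat"  # hypothetical names

    # 1. Start the multipart upload and get an upload ID.
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

    parts = []
    part_size = 100 * 1024 * 1024  # 100 MB parts (minimum part size is 5 MB)
    with open("/data/bigfile.dat", "rb") as f:  # hypothetical local copy
        part_number = 1
        while chunk := f.read(part_size):
            # 2. Parts are independent: they can be sent in any order, in parallel.
            resp = s3.upload_part(Bucket=bucket, Key=key, UploadId=upload_id,
                                  PartNumber=part_number, Body=chunk)
            parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
            part_number += 1

    # 3. Complete: S3 assembles the parts into a single object (up to 5 TB).
    s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})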
Question : Which of the following is a feature of IAM?
A. Central control of users and security credentials: you can control creation, rotation, and revocation of each user's AWS security credentials (such as access keys)
B. Central control of user access: you can control what data in the AWS system users can access and how they access it
C. Shared AWS resources: users can share data for collaborative projects
D. Permissions based on organizational groups: you can restrict users' AWS access based on their job duties (for example, admin, developer, etc.) or departments. When users move inside the organization, you can easily update their AWS access to reflect the change in their role
E. Central control of AWS resources: your organization maintains central control of the AWS data the users create, with no breaks in continuity or lost data as users move around within or leave the organization
1. A, B, C only
2. A, D, E only
4. A, B, C, D only
5. A, B, C, D, E (all of the above)
Explanation: IAM includes the following features:
- Central control of users and security credentials: you can control creation, rotation, and revocation of each user's AWS security credentials (such as access keys)
- Central control of user access: you can control what data in the AWS system users can access and how they access it
- Shared AWS resources: users can share data for collaborative projects
- Permissions based on organizational groups: you can restrict users' AWS access based on their job duties (for example, admin, developer, etc.) or departments. When users move inside the organization, you can easily update their AWS access to reflect the change in their role
- Central control of AWS resources: your organization maintains central control of the AWS data the users create, with no breaks in continuity or lost data as users move around within or leave the organization
- Control over resource creation: you can help make sure that users create AWS data only in sanctioned places
- Networking controls: you can help make sure that users can access AWS resources only from within the organization's corporate network, using SSL
- Single AWS bill: your organization's AWS account gets a single AWS bill for all your users' AWS activity
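As an illustration of the group-based permissions feature, the sketch below uses boto3 to create a group, attach an inline policy, and add a user; the group, user, and policy names are hypothetical. Moving a user to a different role then becomes a simple group-membership change:

    import json
    import boto3

    iam = boto3.client("iam")

    # Hypothetical group with an inline policy scoped to its job duties.
    iam.create_group(GroupName="Developers")
    iam.put_group_policy(
        GroupName="Developers",
        PolicyName="DevS3ReadOnly",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow",
                           "Action": ["s3:GetObject", "s3:ListBucket"],
                           "Resource": "*"}],
        }),
    )

    # Users inherit the group's permissions; a role change is a group change.
    iam.create_user(UserName="alice")
    iam.add_user_to_group(GroupName="Developers", UserName="alice")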
Question : Are all Read Replicas of a source DB Instance automatically deleted as soon as the source DB Instance is deleted?
Explanation: No. A Read Replica will stay active and continue accepting read traffic even after its corresponding source DB Instance has been deleted. If you want to delete the Read Replica in addition to the source DB Instance, you must explicitly delete the Read Replica using the DeleteDBInstance API or the AWS Management Console.
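A minimal boto3 sketch of that cleanup, with hypothetical instance identifiers (note that read replicas cannot take a final snapshot, so SkipFinalSnapshot must be true for them):

    import boto3

    rds = boto3.client("rds")

    # Deleting the source does NOT delete its replicas; remove each one explicitly.
    rds.delete_db_instance(DBInstanceIdentifier="mydb-replica-1",
                           SkipFinalSnapshot=True)

    # Then delete the source, optionally keeping a final snapshot.
    rds.delete_db_instance(DBInstanceIdentifier="mydb-source",
                           FinalDBSnapshotIdentifier="mydb-final-snapshot")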
1. Serialize the image and store it in multiple DynamoDB tables
2. Create an "Images" DynamoDB table to store the image with a foreign key constraint to the "Product" table
3. Add an image data type to the "Product" table to store the images in binary format
4. Store the images in Amazon S3 and add an S3 URL pointer to the "Product" table item for each image
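Option 4 describes the usual pattern: keep the large binary in S3 and store only a small pointer in the DynamoDB item. A sketch under assumed bucket, table, and key-schema names:

    import boto3

    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("Product")  # hypothetical table

    bucket = "product-images"  # hypothetical bucket
    s3.upload_file("widget.jpg", bucket, "images/widget-123.jpg")

    # The item carries only the URL pointer, not the binary image itself.
    table.put_item(Item={
        "ProductId": "widget-123",  # assumes ProductId is the partition key
        "ImageUrl": f"https://{bucket}.s3.amazonaws.com/images/widget-123.jpg",
    })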
1. Simple Storage Service objects are limited to 5 GB
2. Use the multipart upload API for this object
3. Use the large object upload API for this object
4. Contact support to increase your object size limit
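Option 2 is the multipart upload answer again: objects larger than 5 GB cannot be sent in a single PUT. In practice, boto3's managed transfer switches to multipart automatically once a file crosses the configured threshold; the file path and bucket below are hypothetical:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MB
        multipart_chunksize=64 * 1024 * 1024,   # 64 MB parts
        max_concurrency=10,                     # parts uploaded in parallel
    )
    s3.upload_file("/data/huge-object.bin", "my-bucket", "huge-object.bin",
                   Config=config)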
1. Message will be immediately deleted from all the queues
2. Message will never be deleted from the queue until you change this value again
3. Message will remain in the queue, but no component can process it
4. Immediately makes the message visible to other components in the system to process
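These options concern the SQS visibility timeout: setting a received message's visibility timeout to 0 (option 4) ends the current consumer's hold on it, so the message becomes visible to other components immediately. A sketch with a hypothetical queue URL:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        # Timeout 0 makes the message visible to other consumers right away
        # instead of after the queue's configured visibility timeout.
        sqs.change_message_visibility(QueueUrl=queue_url,
                                      ReceiptHandle=msg["ReceiptHandle"],
                                      VisibilityTimeout=0)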