Explanation: Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
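To illustrate, here is a minimal boto3 sketch of requesting SSE-S3 at upload time; the bucket name and object key are hypothetical:

import boto3

s3 = boto3.client("s3")

# Ask S3 to encrypt the object at rest with S3-managed keys (SSE-S3).
# "AES256" selects 256-bit AES; key management and rotation are handled by S3.
s3.put_object(
    Bucket="example-bucket",          # hypothetical bucket name
    Key="reports/2024/q1.pdf",        # hypothetical object key
    Body=b"...file contents...",
    ServerSideEncryption="AES256",    # SSE-S3
)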
Question: While setting up a second-tier read replica in RDS, you find a lag between the master and the second-tier replica. Why?
1. The second-tier read replica's configuration is wrong
2. The second-tier read replica's hardware configuration will always be of a lower grade than the master's
3. …
4. Because of the additional replication latency introduced as transactions are replicated from the master to the first-tier replica and then to the second-tier replica
5. Any of the above could be correct
Explanation: You can create a second-tier Read Replica from an existing first-tier Read Replica. By creating a second-tier Read Replica, you may be able to move some of the replication load from the master database instance to a first-tier Read Replica. Please note that a second-tier Read Replica may lag further behind the master because of the additional replication latency introduced as transactions are replicated from the master to the first-tier replica and then to the second-tier replica.
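For reference, a second-tier Read Replica is created the same way as a first-tier one, except that the source is an existing first-tier replica rather than the master. A minimal boto3 sketch, assuming hypothetical instance identifiers (second-tier replicas are supported for engines such as MySQL):

import boto3

rds = boto3.client("rds")

# Create a second-tier read replica by using an existing first-tier
# read replica (not the master) as the replication source.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-tier2",        # hypothetical new replica name
    SourceDBInstanceIdentifier="mydb-replica-tier1",  # hypothetical existing first-tier replica
)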
Question: How can Acmeshell use S3 to accomplish the following goal?
"As soon as their training video files reach six months of age, they should be moved to Glacier storage."
1. Write an AWS command line tool to back up the data and send it to Glacier after 6 months
2. Use S3 bucket policies to manage the data
3. …
4. Use bucket lifecycle policies and set the files to transition to Glacier storage after 6 months
Explanation: Lifecycle management defines how Amazon S3 manages objects during their lifetime. Some objects that you store in an Amazon S3 bucket might have a well-defined lifecycle:
If you are uploading periodic logs to your bucket, your application might need these logs for a week or a month after creation, and after that you might want to delete them.
Some documents are frequently accessed for a limited period of time. After that, you might not need real-time access to these objects, but your organization might require you to archive them for a longer period and then optionally delete them later. Digital media archives, financial and healthcare records, raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance are some kinds of data that you might upload to Amazon S3 primarily for archival purposes.
For such objects, you can define rules that identify the affected objects, a timeline, and specific actions you want Amazon S3 to perform on the objects.
Amazon S3 manages object lifetimes with a lifecycle configuration, which is assigned to a bucket and defines rules for individual objects. Each rule in a lifecycle configuration consists of the following:
An object key prefix that identifies one or more objects to which the rule applies.
An action or actions that you want Amazon S3 to perform on the specified objects.
A date or a time period, specified in days since object creation, when you want Amazon S3 to perform the specified action.
You can add these rules to your bucket using either the Amazon S3 console or programmatically.
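As a concrete example, here is a minimal boto3 sketch of a lifecycle rule matching the Glacier question above; the bucket name and key prefix are hypothetical, and 180 days is used to approximate 6 months:

import boto3

s3 = boto3.client("s3")

# Transition objects under the given prefix to Glacier 180 days
# (roughly 6 months) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="acmeshell-training-videos",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-training-videos-to-glacier",
                "Filter": {"Prefix": "training-videos/"},  # hypothetical key prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 180, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)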
1. As this is about stopping the primary DB instance in one of the Availability Zones, you need to raise a support ticket to stop the primary RDS instance.
2. You have to stop the secondary DB instance, as this does not require an AWS support ticket. You can stop it and test the given scenario.