Question : Which of the following statements are correct with regard to the Amazon Aurora database?
1. Amazon Aurora replicates each chunk of my database volume six ways across three Availability Zones.
2. Whatever storage you provision for an Aurora database, you will be charged three times that amount.
3. Amazon Aurora supports both MySQL and PostgreSQL
4. 1,3
5. 1,2,3
Correct Answer : 4 Explanation: Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora MySQL delivers up to five times the performance of MySQL without requiring changes to most MySQL applications; similarly, Amazon Aurora PostgreSQL delivers up to three times the performance of PostgreSQL. Amazon RDS manages your Amazon Aurora databases, handling time-consuming tasks such as provisioning, patching, backup, recovery, and failure detection and repair. You pay a simple monthly charge for each Amazon Aurora database instance you use; there are no upfront costs or long-term commitments. Amazon Aurora replicates each chunk of the data volume six times across three Availability Zones, but AWS does not charge you for six copies; it charges you for only one copy of the data.
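The durability-versus-billing point above can be sketched as a small calculation. This is an illustrative sketch only; the function names and the 100GB example are mine, not from AWS documentation:

```python
# Aurora keeps six physical copies of your data across three Availability
# Zones, but bills you for only one logical copy.

REPLICAS = 6  # copies Aurora maintains for durability

def physical_copies_gb(logical_gb: float) -> float:
    """Storage Aurora actually maintains across the three AZs."""
    return logical_gb * REPLICAS

def billed_gb(logical_gb: float) -> float:
    """Storage you are billed for: the single logical copy only."""
    return logical_gb

print(physical_copies_gb(100))  # 600 GB physically stored
print(billed_gb(100))           # 100 GB billed
```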
Question : You have provisioned a MySQL-based Aurora DB engine for the QuickTechie.com website, where the number of members is growing quite fast and you need roughly 10GB of additional storage every day. What would you do?
1. You will provision an extra 300GB at the start of every month.
2. Whatever storage you need must be provisioned in advance, because once the storage is provisioned, changing its size requires migrating the data.
3. Aurora DB will take care of this, growing storage to meet the 10GB daily need.
4. You can configure a feature in Aurora DB to add the desired storage every day, paying an extra charge for this capability.
Correct Answer : 3 Explanation: With this question AWS wants to check that you have basic knowledge of Aurora DB features. Aurora is a native AWS service, and practically AWS wants you to rely on its native services, so that your project is tightly coupled with AWS and migrating away would be difficult.
The minimum storage required for Aurora DB is 10GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 64 TB, in 10GB increments with no impact on database performance. There is no need to provision storage in advance.
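The automatic growth rule above can be sketched as a small calculation. This is an illustrative model, not AWS code; the 1 TB = 1,000 GB simplification and the function name are my assumptions:

```python
import math

MIN_GB = 10        # Aurora's minimum storage
MAX_GB = 64_000    # 64 TB ceiling (assuming 1 TB = 1,000 GB for simplicity)
STEP_GB = 10       # Aurora grows storage in 10GB increments

def allocated_storage_gb(used_gb: float) -> int:
    """Storage Aurora would have auto-provisioned for the given usage:
    rounded up to the next 10GB increment, between 10GB and 64TB."""
    needed = max(MIN_GB, math.ceil(used_gb / STEP_GB) * STEP_GB)
    return min(needed, MAX_GB)

# QuickTechie.com grows ~10GB/day; no manual provisioning is ever needed:
for used in (4, 14, 95, 101):
    print(used, "->", allocated_storage_gb(used))
```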
You can scale the compute resources allocated to your DB Instance in the AWS Management Console by selecting the desired DB Instance and clicking the Modify button. Memory and CPU resources are modified by changing your DB Instance class.
When you modify your DB Instance class, your requested changes will be applied during your specified maintenance window. Alternatively, you can use the "Apply Immediately" flag to apply your scaling requests immediately. Both of these options will have an availability impact for a few minutes as the scaling operation is performed. Bear in mind that any other pending system changes will also be applied.
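As a sketch of the scaling operation described above, the following builds the arguments for the RDS ModifyDBInstance call via boto3. The instance identifier and instance class are hypothetical examples, and the actual API call is commented out because it requires AWS credentials:

```python
def modify_instance_class_params(instance_id: str, new_class: str,
                                 apply_immediately: bool) -> dict:
    """Arguments for RDS ModifyDBInstance. With ApplyImmediately=False the
    change is deferred to the specified maintenance window."""
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": new_class,
        "ApplyImmediately": apply_immediately,
    }

params = modify_instance_class_params("quicktechie-aurora-1",  # hypothetical
                                      "db.r5.large", True)
print(params)

# Uncomment to run against a real account:
# import boto3
# boto3.client("rds").modify_db_instance(**params)
```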
Question : You have provisioned Aurora DB for one of your ecommerce websites, and you are required to back up your data regularly. The project testing and data analytics teams also need data that is as fresh as possible. Which of the following options is/are suitable for this requirement?
A. You have to configure a backup schedule for when your website usage is lowest.
B. You will create snapshots of your live DB.
C. The analytics team can fetch data directly from the live DB instance.
D. You don't have to configure a backup schedule.
E. You will not create snapshots of your live DB, because doing so would impact live database performance.
1. A,B
2. B,C
3. B,D
4. D,E
5. A,E
Correct Answer : 3 Explanation: What are the requirements in the given question? 1. Create a backup of the database. 2. Take DB snapshots at regular intervals, so that the analytics team can run analytics on them.
For Aurora DB you don't have to configure backups; they are automated. Automated backups are always enabled on Amazon Aurora DB instances, and backups do not impact database performance.
Yes, you can take snapshots of an Aurora DB, and there is no performance impact when taking snapshots. Note that restoring data from DB snapshots requires creating a new DB instance.
Amazon Aurora automatically maintains 6 copies of your data across 3 Availability Zones and will automatically attempt to recover your database in a healthy AZ with no data loss. In the unlikely event your data is unavailable within Amazon Aurora storage, you can restore from a DB Snapshot or perform a point-in-time restore operation to a new instance. Note that the latest restorable time for a point-in-time restore operation can be up to 5 minutes in the past.
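The point-in-time behavior above can be sketched as follows. The worst-case restore lag of 5 minutes is from the text; the cluster identifiers are hypothetical, and the real boto3 call is commented out because it needs AWS credentials:

```python
from datetime import datetime, timedelta, timezone

def latest_restorable_time(now: datetime, lag_minutes: int = 5) -> datetime:
    """Worst-case most-recent point you can restore to: up to 5 minutes
    behind the current time."""
    return now - timedelta(minutes=lag_minutes)

def restore_params(source_cluster: str, new_cluster: str,
                   restore_to: datetime) -> dict:
    """Arguments for RDS RestoreDBClusterToPointInTime. Restoring always
    creates a new cluster; it never overwrites the live one."""
    return {
        "DBClusterIdentifier": new_cluster,          # the NEW cluster
        "SourceDBClusterIdentifier": source_cluster,
        "RestoreToTime": restore_to,
    }

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(latest_restorable_time(now))  # 2024-01-01 11:55:00+00:00

# import boto3
# boto3.client("rds").restore_db_cluster_to_point_in_time(
#     **restore_params("ecommerce-aurora", "ecommerce-aurora-restored",
#                      latest_restorable_time(now)))
```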
4. EC2 instance Correct Answer : 4 Explanation: The question describes a scenario integrating various AWS services and asks which particular component can cause a performance issue. From the given options you need to select the service that is not auto-scaled unless you configure it to be.
SQS: It supports elastic load natively; you don't have to configure explicitly how much load it can or cannot support. DynamoDB: Again, you don't have to configure this component explicitly for scaling; AWS takes care of it. S3: Any amount of data is supported. Note that all three of the above are native services whose scaling is managed by AWS.
EC2: You have to provision capacity according to your need, and in the given scenario you have only one instance. You should use an Auto Scaling group to scale EC2 instances.
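To make the contrast concrete, the following sketches the arguments for EC2 Auto Scaling's CreateAutoScalingGroup, the piece of configuration that EC2 needs but SQS, DynamoDB, and S3 do not. The group name, launch template, and subnet IDs are hypothetical, and the boto3 call is commented out because it requires AWS credentials:

```python
def auto_scaling_group_params(name: str, template_name: str,
                              min_size: int, max_size: int,
                              desired: int, subnet_ids: list) -> dict:
    """Arguments for CreateAutoScalingGroup: the explicit capacity bounds
    you must configure for EC2, unlike the managed services."""
    return {
        "AutoScalingGroupName": name,
        "LaunchTemplate": {"LaunchTemplateName": template_name,
                           "Version": "$Latest"},
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": desired,
        # VPCZoneIdentifier is a comma-separated list of subnet IDs
        "VPCZoneIdentifier": ",".join(subnet_ids),
    }

params = auto_scaling_group_params("web-asg", "web-template",  # hypothetical
                                   1, 4, 2, ["subnet-aaaa", "subnet-bbbb"])
print(params)

# import boto3
# boto3.client("autoscaling").create_auto_scaling_group(**params)
```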
Question : You have thousands of text files generated at random times by your on-premises applications. You want to do some processing on these files, and the code is already written as standalone Java applications that take an input path, process each file, and write the result to an output location. You have already provisioned AWS services for various activities and are planning to migrate this file processing to AWS: as soon as a file is created, you will publish it to an S3 bucket, and your existing Java application will process the file and write the output to an S3 bucket. Which solution would you prefer for this requirement in AWS?
1. You will be using AWS EMR to process the files
2. You will provision 5 EC2 servers, so each can process 200 files at a time
3. You will be using AWS Lambda Service
4. You will be using AWS S3 Lifecycle configuration features
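Option 3 refers to AWS Lambda, which can be triggered directly by S3 object-creation events. A minimal sketch of such a handler follows; Lambda natively supports Java, so the existing applications could be packaged as the handler, but Python is used here for brevity. The bucket name and output prefix are hypothetical, and the actual file processing is stubbed out:

```python
import posixpath

OUTPUT_PREFIX = "processed/"  # hypothetical destination prefix

def handler(event, context=None):
    """Called once per S3 ObjectCreated event notification; returns the
    output keys that the real processing step would write back to S3."""
    outputs = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would download s3://bucket/key, run the existing file
        # processing, and upload the result; here we only compute the
        # destination key.
        outputs.append(OUTPUT_PREFIX + posixpath.basename(key))
    return outputs

# Simulate an S3 event locally (structure follows the S3 notification format):
fake_event = {"Records": [{"s3": {"bucket": {"name": "quicktechie-input"},
                                  "object": {"key": "incoming/a.txt"}}}]}
print(handler(fake_event))  # ['processed/a.txt']
```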