
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : Which of the following statements are correct with regard to the Amazon Aurora database?


1. Amazon Aurora replicates each chunk of your database volume six ways across three Availability Zones.

2. Whatever storage you provision for an Aurora database, you will be charged three times that amount.

3. Amazon Aurora supports both MySQL and PostgreSQL

4. 1,3

5. 1,2,3

Correct Answer : 4
Explanation: Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.
Amazon Aurora MySQL delivers up to five
times the performance of MySQL without requiring any changes to most MySQL applications, similarly Amazon Aurora PostgreSQL delivers up to three times the performance of PostgreSQL. Amazon RDS manages your Amazon
Aurora databases, handling time-consuming
tasks such as provisioning, patching, backup, recovery, failure detection and repair. You pay a simple monthly charge for each Amazon Aurora database instance you use. There are no upfront costs or long-term
commitments required.
Amazon Aurora replicates each chunk of your data volume six times across three Availability Zones, but AWS will not charge you for six copies; it charges you for only one copy of the data.
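The billing point above can be sketched as a tiny model. This is purely illustrative (the function name and figures are assumptions drawn from the explanation, not an AWS API): six physical copies are stored, two per Availability Zone across three AZs, but only one logical copy is billed.

```python
# Illustrative model: Aurora stores 6 physical copies of the data
# (2 per Availability Zone across 3 AZs) but bills for only 1 copy.

def aurora_storage_copies(logical_gb):
    """Return (billed_gb, physical_gb) for a given logical data size."""
    copies = 6  # two copies in each of three Availability Zones
    return logical_gb, logical_gb * copies

billed, physical = aurora_storage_copies(100)
print(billed, physical)  # 100 600
```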




Question : You have provisioned a MySQL-based Aurora DB engine for the QuickTechie.com website, where the number of website members is increasing quite fast and you need GBs of additional storage every day. What would you do?


1. You will provision extra 300GB at the start of every month.

2. Whatever storage you need, you have to provision in advance, because once you have provisioned the storage, changing the storage size will require migration of data.

3. Aurora DB will take care of this, growing storage automatically to meet the 10GB increase needed every day.

4. You can configure a feature in Aurora DB to get the desired storage increase every day, paying extra charges for this capability.


Correct Answer : 3
Explanation: With this question AWS wants to check that you have basic knowledge of Aurora DB features. It is their native service, and practically they want you to use it, so that your project is tightly coupled with AWS and migration away would be difficult.

The minimum storage required for Aurora DB is 10GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 64 TB, in 10GB increments with no impact on database performance. There is no
need to provision storage in advance.
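The auto-grow behaviour can be modelled in a few lines. This is a sketch of the numbers stated above (the helper name is an assumption, not an AWS API): storage starts at 10GB, grows in 10GB increments with usage, and caps at 64 TB.

```python
import math

INCREMENT_GB = 10
MAX_GB = 64 * 1024  # 64 TB expressed in GB

def aurora_allocated_gb(used_gb):
    """Smallest multiple of 10GB (at least 10GB) covering current usage."""
    needed = max(INCREMENT_GB, math.ceil(used_gb / INCREMENT_GB) * INCREMENT_GB)
    return min(needed, MAX_GB)

print(aurora_allocated_gb(3))      # 10
print(aurora_allocated_gb(101))    # 110
print(aurora_allocated_gb(70000))  # 65536 (capped at 64 TB)
```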

You can scale the compute resources allocated to your DB Instance in the AWS Management Console by selecting the desired DB Instance and clicking the Modify button. Memory and CPU resources are modified by changing
your DB Instance class.

When you modify your DB Instance class, your requested changes will be applied during your specified maintenance window. Alternatively, you can use the "Apply Immediately" flag to apply your scaling requests
immediately. Both of these options will have an availability impact for a few minutes as the scaling operation is performed. Bear in mind that any other pending system changes will also be applied.
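The same scaling operation can be done via the RDS API rather than the console. This is a hedged sketch: the instance identifier and class below are made-up examples, and the boto3 call is left commented out so the snippet stays self-contained.

```python
# Sketch: build the parameters for an RDS ModifyDBInstance call.
# "quicktechie-aurora-1" and "db.r5.xlarge" are illustrative values.

def modify_instance_request(instance_id, new_class, apply_immediately=False):
    """Build keyword arguments for the RDS ModifyDBInstance call."""
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": new_class,
        # False: apply during the maintenance window; True: apply now.
        # Either way, expect a brief availability impact while scaling.
        "ApplyImmediately": apply_immediately,
    }

params = modify_instance_request("quicktechie-aurora-1", "db.r5.xlarge", True)
# import boto3
# boto3.client("rds").modify_db_instance(**params)
print(params["ApplyImmediately"])  # True
```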




Question : You have provisioned Aurora DB for one of your e-commerce websites, and you are required to back up your data regularly. The project testing and data analytics teams also need data that is as recent as possible. Which of the following options is/are suitable for the given requirement?
A. You have to configure a backup schedule for the time when your website usage is lowest.
B. You will be creating snapshots of your live DB.
C. The analytics team can directly fetch the data from the live DB instance.
D. You don't have to configure a backup schedule.
E. You will not be creating snapshots of your live DB, because it impacts live database performance.
1. A,B
2. B,C
3. B,D
4. D,E
5. A,E

Correct Answer : 3
Explanation: What are the requirements in the given question?
1. Backups need to be created for the database.
2. DB snapshots are needed at regular intervals, so that the analytics team can run analytics on them.

For Aurora DB you don't have to configure backups; they are automated. Automated backups are always enabled on Amazon Aurora DB instances, and backups do not impact database performance.

Yes, you can take snapshots of the Aurora DB, and there is no performance impact when taking snapshots. Note that restoring data from DB snapshots requires creating a new DB instance.

Amazon Aurora automatically maintains 6 copies of your data across 3 Availability Zones and will automatically attempt to recover your database in a healthy AZ with no data loss. In the unlikely event your data is
unavailable within Amazon Aurora storage, you can restore from a DB Snapshot or perform a point-in-time restore operation to a new instance. Note that the latest restorable time for a point-in-time restore operation
can be up to 5 minutes in the past.
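A point-in-time restore request can be sketched to illustrate two facts from the explanation above: the restore always targets a NEW cluster, and the latest restorable time can trail "now" by up to 5 minutes. The cluster names are made-up examples, and the boto3 call is commented out so the snippet stays self-contained.

```python
from datetime import datetime, timedelta, timezone

# Sketch: build parameters for RDS RestoreDBClusterToPointInTime.
# "shop-aurora" / "shop-aurora-restored" are illustrative names.

def restore_request(source_cluster, new_cluster, restore_to):
    """Build keyword arguments for a point-in-time restore."""
    return {
        "SourceDBClusterIdentifier": source_cluster,
        "DBClusterIdentifier": new_cluster,  # must be a new cluster
        "RestoreToTime": restore_to,
    }

# The newest guaranteed-restorable moment may lag "now" by ~5 minutes:
latest_guaranteed = datetime.now(timezone.utc) - timedelta(minutes=5)
params = restore_request("shop-aurora", "shop-aurora-restored", latest_guaranteed)
# import boto3
# boto3.client("rds").restore_db_cluster_to_point_in_time(**params)
print(params["DBClusterIdentifier"])  # shop-aurora-restored
```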



Related Questions


Question : You have created an application for securities trading, and you are using Aurora MySQL DB. Until now you have been running your application and database without data encryption. After an audit by the security team, it is mandated that data in transit as well as at rest must be encrypted. Which of the following is correct for the given scenario?
A. You will enable encryption for new data
B. You will encrypt all the existing data
C. You cannot enable encryption for Amazon Aurora DB
D. You cannot enable encryption for already existing data in Aurora DB


1. A,B
2. B,C
3. C,D
4. A,D
5. B,D


Question : You already have an application on MySQL which was running in your own datacenter and has grown to several TB. You decided to move to AWS; your requirements are below:
- The initial capacity of the DB should be 7 TB
- The database will grow by approx. 7-9GB each day, and this needs to be supported
- Higher performance is required, hence you need 7 read replicas

Which of the following databases or storage services is a good fit for the given requirements?



1. Amazon Redshift

2. Amazon Oracle RDS

3. DynamoDB

4. ElastiCache

5. Amazon AuroraDB




Question : You have developed a website, QuickTechie.com, where users can appear for online certification preparation exams. All the questions and answers are stored in an RDBMS, including user profile data, and at any one time millions of users can appear for an exam, which requires almost 16,000 IOPS from the database. You have created your own database and installed it on an EBS volume. Which of the EBS storage types below will you select for the given requirement?

1. EBS Provisioned IOPS SSD

2. EBS Magnetic storage

3. EBS Throughput optimized HDD

4. EBS General purpose SSD

5. EBS HDD


Question : You have a web application which is deployed on an EC2 instance, and the database was created on an EBS volume. Day by day your application is becoming more popular; hence, it needs higher data storage as well as higher IOPS. For that you selected a Provisioned IOPS EBS volume and also increased the overall volume size, but you don't see a major improvement in overall performance. What further action will you take to improve the performance?


1. You will further increase the EBS volume size

2. You will further provision more IOPS

3. You have to create new EBS volume with the desired IOPS and Volume and migrate everything from existing EBS volume to new EBS volume

4. You will change the EC2 instance type


Question : Suppose you have provisioned an EBS volume with a certain number of IOPS; then which of the following statements is correct?
A. You can have up to 500 writes, each with a size of up to 256KB
B. You can have up to 250 writes, each with a size of up to 256KB
C. You can have up to 250 writes, each with a size of up to 512KB
D. You can have up to 500 writes, each with a size of up to 512KB
1. A,B
2. B,C
3. C,D
4. A,C
5. B,D
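The arithmetic behind options like these can be sketched as follows. Provisioned IOPS SSD volumes count each I/O of up to 256 KiB as one operation, so a volume provisioned with N IOPS can sustain roughly N writes per second of up to 256 KiB each. (The IOPS figure elided in the question is left as a parameter here; the function name is illustrative.)

```python
# Sketch: peak throughput of a Provisioned IOPS SSD volume when every
# I/O uses the full 256 KiB operation size.

IO_SIZE_KIB = 256

def max_throughput_mib_per_s(provisioned_iops):
    """Throughput in MiB/s if every I/O is a full 256 KiB."""
    return provisioned_iops * IO_SIZE_KIB / 1024

print(max_throughput_mib_per_s(500))  # 125.0
```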


Question : You are developing a web application hosted on an EC2 instance. This application allows users to appear for an online exam, and once they successfully clear the exam a certificate is issued. The application is designed so that the user logs in to appear for the exam, and the same application creates a certificate (PDF) file and saves it in an Amazon S3 bucket, so that the user can download it from the web application. As soon as the user completes the exam, a message is published to SQS with the user ID and the URL of the certificate file. An application on the EC2 instance reads this message, sends an email to the user with all the details, and also stores this data in DynamoDB, which is partitioned by a unique exam ID per attempt. Until the user appears for the exam, the entire session data is saved in ElastiCache. Now this web application has been subscribed to by a big organization which has 250K employees, and all employees have to appear for this exam. Which of the AWS resources involved do you see as a potential cause of a performance hit?


1. SQS Queue

2. DynamoDB

3. S3 bucket

4. EC2 instance
Correct Answer : 4
Explanation: The question presents a scenario integrating various AWS services and asks which particular component can cause a performance issue. From the given options you need to select the service which cannot auto-scale until you configure it to do so.

SQS: It supports elastic load; you don't have to explicitly configure how much load it can support.
DynamoDB: Again, you don't have to configure this component explicitly for scaling; AWS will take care of it.
S3 bucket: Any amount of data is supported.
All three of the above services are native, and their scaling is managed by AWS.

EC2 service: You have to provision capacity as per your need. As you can see in the given scenario, you have only one instance. You should use an Auto Scaling group for scaling EC2 instances.
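Fixing the EC2 bottleneck with an Auto Scaling group can be sketched as below. The group name, launch template ID, and sizes are made-up examples, and the boto3 call is commented out so the snippet stays self-contained.

```python
# Sketch: build parameters for an EC2 Auto Scaling CreateAutoScalingGroup
# call. "exam-web-asg" and "lt-0abc1234example" are illustrative values.

def asg_request(name, launch_template_id, min_size, max_size, desired):
    """Build keyword arguments for CreateAutoScalingGroup."""
    return {
        "AutoScalingGroupName": name,
        "LaunchTemplate": {"LaunchTemplateId": launch_template_id},
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": desired,
    }

params = asg_request("exam-web-asg", "lt-0abc1234example", 2, 20, 2)
# import boto3
# boto3.client("autoscaling").create_auto_scaling_group(**params)
print(params["MaxSize"])  # 20
```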


Question : You have a large number of text files generated at random times by your on-premise applications. You want to do some processing on these files, and the code is already written as standalone Java applications, which take an input path for the files, process them, and generate output files in an output location. You have already provisioned AWS services for various activities, and you are planning to migrate this file processing to AWS. Hence, as soon as a file is created you will publish it to an S3 bucket, and your existing Java application can process that file and generate the output in an S3 bucket. Which solution will you prefer for this requirement in AWS?


1. You will be using AWS EMR to process the files

2. You will provision 5 EC2 servers, so each can process 200 files at a time

3. You will be using AWS Lambda Service

4. You will be using AWS S3 Lifecycle configuration features

5. You will be using AWS Simple workflow service