Premium

AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : Your company runs an ecommerce business and has a partnership with a popular data analytics company that provides suggestions such as advertisement recommendations and product recommendations. They request that you share your data with them as fresh as possible. You are using Aurora DB as the backend database, and your partner also has AWS accounts and runs analytics on AWS infrastructure. Which of the following is the best and correct option?


1. You will give your partner access to one of the live replicas in an Availability Zone of their preference.

2. You will use automated snapshot creation and configure the snapshots to be shared with the partner's AWS account.

3. You will manually create a snapshot from the live Aurora DB and then share that snapshot with the partner.

4. You cannot share Aurora DB snapshots with any other AWS account.

5. You will create VPC peering between your VPC (where the Aurora DB is hosted) and the partner's VPC, so that the partner can directly access the snapshots.

Correct Answer : 3
Explanation: What does the question ask? How do you share the latest data with a partner who also has an AWS account, so that they can run analytics on it?

Remember: you should never give live data access to anybody, not even the developers, testers, or data analytics team of your own organization. Hence, options 1 and 5 are out.

Option 2: You can have automated snapshots created, but they cannot be automatically shared with any other AWS account ID. Hence, this option is also out.
Option 3: Yes, you can create a snapshot of your Aurora DB and share it with other account IDs (it can currently be shared with up to 20 account IDs). Hence, this is the correct option.

Option 4: You can share snapshots with other AWS accounts, so this statement is simply wrong.

Please note the following important points regarding Aurora DB:
- Automated backups are always enabled on Amazon Aurora DB instances. Backups do not impact database performance.
- There is no performance impact when taking snapshots. Note that restoring data from DB snapshots requires creating a new DB instance.
- Amazon Aurora automatically maintains 6 copies of your data across 3 Availability Zones and will automatically attempt to recover your database in a healthy AZ with no data loss.
- In the unlikely event your data is unavailable within Amazon Aurora storage, you can restore from a DB snapshot or perform a point-in-time restore operation to a new instance.
- The latest restorable time for a point-in-time restore operation can be up to 5 minutes in the past.
- You can choose to create a final DB snapshot when deleting your DB instance. If you do, you can use this snapshot to restore the deleted DB instance at a later date.
- Amazon Aurora retains this final user-created DB snapshot along with all other manually created DB snapshots after the DB instance is deleted. Only DB snapshots are retained after the DB instance is deleted (i.e., automated backups created for point-in-time restore are not kept).
- You can share a snapshot with a different AWS account, and the owner of the recipient account can use your snapshot to restore a DB that contains your data.
- You can even choose to make your snapshots public - that is, anybody can restore a DB containing your (public) data.
- You can use this feature to share data between your various environments (production, dev/test, staging, etc.) that have different AWS accounts, as well as keep backups of all your data secure in a separate account in case your main AWS account is ever compromised.
- There is no charge for sharing snapshots between accounts. However, you may be charged for the snapshots themselves, as well as for any databases you restore from shared snapshots.
- AWS does not support sharing automated DB snapshots. To share an automated snapshot, you must manually create a copy of the snapshot and then share the copy.
- You can share your Aurora snapshots in all AWS regions where Aurora is available.
- Very important: your shared Aurora snapshots will only be accessible by accounts in the same region as the account that shares them.
- You can share encrypted Aurora snapshots.
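The manual-snapshot sharing flow behind option 3 can be sketched as two boto3-style calls: create the snapshot, then grant the partner account restore permission. This is a minimal sketch; the parameter names follow the boto3 RDS client, while the cluster identifier, snapshot identifier, and partner account ID are hypothetical placeholders.

```python
# Sketch: share a manual Aurora DB cluster snapshot with a partner account.
# Identifiers and the partner account ID below are hypothetical placeholders.

# Step 1: create a manual snapshot of the live Aurora cluster.
create_snapshot_params = {
    "DBClusterIdentifier": "ecommerce-aurora-cluster",    # hypothetical cluster
    "DBClusterSnapshotIdentifier": "partner-share-snap",  # hypothetical snapshot name
}
# rds.create_db_cluster_snapshot(**create_snapshot_params)

# Step 2: grant restore permission on that snapshot to the partner's account.
share_params = {
    "DBClusterSnapshotIdentifier": "partner-share-snap",
    "AttributeName": "restore",       # "restore" is the attribute that controls sharing
    "ValuesToAdd": ["123456789012"],  # hypothetical partner AWS account ID
}
# rds.modify_db_cluster_snapshot_attribute(**share_params)
```

The partner can then copy or restore the shared snapshot in their own account; passing `"all"` in `ValuesToAdd` would instead make the snapshot public.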




Question : Which of the following are true for Amazon Aurora DB?
A. Aurora DB automatically divides the database into segments of 10 GB each.
B. It creates six copies of each segment across three Availability Zones.
C. If any data corruption occurs, you have to manually correct it in each replica.
D. You have to write a process that scans all the data blocks to find errors in them.
1. A,B
2. B,C
3. C,D
4. A,D
5. B,D

Correct Answer : 1
Explanation: Aurora DB is a managed service from AWS, hence most of these things are available out of the box and need not be done manually.
For Aurora DB:
- Yes, it divides the database into segments of 10 GB each, and each segment is copied six times across three Availability Zones.
- Aurora can tolerate the loss of up to two of the six copies without affecting write availability, and up to three copies without affecting read availability.
- Aurora DB is self-healing: data blocks are scanned automatically to find errors, and any storage corruption found is repaired automatically.
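The fault-tolerance figures above follow from Aurora's quorum model: with six copies, writes need a quorum of four and reads a quorum of three. A small sketch makes the arithmetic explicit:

```python
# Sketch of Aurora's 6-copy quorum model: a write quorum of 4/6 and a read
# quorum of 3/6 mean the storage layer tolerates losing 2 copies without
# affecting writes, and 3 copies without affecting reads.

TOTAL_COPIES = 6
WRITE_QUORUM = 4
READ_QUORUM = 3

def write_available(failed_copies: int) -> bool:
    """Writes succeed while at least WRITE_QUORUM healthy copies remain."""
    return TOTAL_COPIES - failed_copies >= WRITE_QUORUM

def read_available(failed_copies: int) -> bool:
    """Reads succeed while at least READ_QUORUM healthy copies remain."""
    return TOTAL_COPIES - failed_copies >= READ_QUORUM

print(write_available(2), read_available(3))  # True True
print(write_available(3), read_available(4))  # False False
```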





Question : In your application you are using Aurora MySQL DB as the backend database. The application is very critical and must not go down, so you have decided to replicate data across regions. Which of the following statements are correct for this scenario?

A. You cannot set up cross-region replication for Aurora MySQL DB.
B. You can set up cross-region replication for Aurora MySQL DB, but it will have a few milliseconds of lag behind the primary DB.
C. You cannot promote a cross-region replica to be the primary RDS for your application.
D. You can promote a cross-region Aurora replica to primary, but it will take a few minutes to do so.
1. A,B
2. B,C
3. C,D
4. A,D
5. B,D

Correct Answer : 5
Explanation: You can set up cross-region replication for Aurora MySQL DB. This replication is single-threaded and depends on many network factors, which can delay replication. You can promote a cross-region replica to be the primary DB; however, the promotion process will take a few minutes depending on your workload.

You can even set the priority that decides which replica is promoted in case of failure.
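The promotion and failover-priority steps described above can be sketched as two boto3-style calls. This is an illustrative sketch, not the full procedure: the parameter names follow the boto3 RDS client, while the cluster and instance identifiers are hypothetical placeholders.

```python
# Sketch: promote a cross-region Aurora replica cluster to standalone primary,
# and set which instance Aurora promotes first on an in-cluster failover.
# Identifiers below are hypothetical placeholders.

# Promote the replica cluster in the secondary region (takes a few minutes).
promote_params = {
    "DBClusterIdentifier": "aurora-replica-us-west-2",  # hypothetical replica cluster
}
# rds_west.promote_read_replica_db_cluster(**promote_params)

# Within a cluster, the promotion tier (0 = highest priority) decides which
# replica instance Aurora promotes first when the writer fails.
tier_params = {
    "DBInstanceIdentifier": "aurora-replica-instance-1",  # hypothetical instance
    "PromotionTier": 0,
    "ApplyImmediately": True,
}
# rds.modify_db_instance(**tier_params)
```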


Related Questions


Question : You have an independent .Net utility which takes input data from S3 files and generates output files and their metadata. After processing these files, the output files should be distributed to your company-owned Google Drive, Microsoft OneDrive, and an S3 bucket. At the same time, the generated metadata should be stored in DynamoDB. Which AWS service will help you accomplish this requirement?

1. AWS Elastic Beanstalk

2. AWS Container Service

3. AWS Simple Workflow Service

4. AWS Lambda


Question : You have a Java-based independent utility which processes files. These files are very critical, contain health information of various VIP patients, and the data must not be exposed; hence they are password protected and encrypted as well. However, you need to process these files before delivering them to the patients. Which of the following services will you use? It must be a cost-efficient solution.

A. AWS Lambda
B. AWS Key Management Service
C. AWS EC2
D. Amazon Elastic Beanstalk
E. Amazon Container Service
1. A,B
2. B,C
3. C,D
4. D,E
5. A,E


Question : You have developed a website which takes a collection of Word docs in a zip file, converts them to PDF files, and compresses them back into a zip file which can be downloaded by any public user. You are using AWS Lambda to process the uploaded zip files and convert the documents to PDF. Your site has suddenly become popular and more people have started using it. You have seen that out of 120 jobs submitted in a day, 20 were not completed and are in a timeout state. What could be the cause?

1. You can submit only 100 Lambda calls in a day, until the limit is increased by requesting AWS support.

2. AWS did not have enough capacity to run Lambda functions that particular day.

3. The zip files contain many files, so processing an uploaded zip file can take more than 5 minutes.

4. AWS found that your account has crossed the payment limit set by you.
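The timeout in option 3 refers to Lambda's per-invocation execution limit (5 minutes when such questions were written; the hard maximum today is 15 minutes). Raising the configured timeout can be sketched as a boto3-style call; the function name here is a hypothetical placeholder.

```python
# Sketch: raise a Lambda function's configured timeout toward the hard
# maximum of 900 seconds (15 minutes). The function name is hypothetical.

timeout_params = {
    "FunctionName": "zip-to-pdf-converter",  # hypothetical function name
    "Timeout": 900,                          # seconds; current hard maximum
}
# lambda_client.update_function_configuration(**timeout_params)
```

Jobs whose processing genuinely exceeds the maximum timeout need a different design, such as splitting the zip into per-document invocations.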


Question : Which of the following are correct triggering events for AWS Lambda?

A. A change in an AWS S3 bucket
B. An AWS Kinesis stream can publish messages to a Lambda function
C. AWS CloudTrail will publish the event directly to the Amazon Lambda function
D. AWS CloudTrail will log an event to S3, and the S3 bucket will notify Lambda
1. A,B
2. B,C
3. C,D
4. A,D
5. B,D
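Option A's trigger, an S3 bucket change invoking Lambda, can be sketched as an S3 event notification payload. The shape follows boto3's `put_bucket_notification_configuration`; the bucket name and function ARN are hypothetical placeholders.

```python
# Sketch: configure an S3 bucket to invoke a Lambda function whenever a new
# object is created. Bucket name and function ARN are hypothetical.

notification_config = {
    "LambdaFunctionConfigurations": [
        {
            # hypothetical function ARN
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:on-upload",
            "Events": ["s3:ObjectCreated:*"],  # fire on any object-created event
        }
    ]
}
# s3.put_bucket_notification_configuration(
#     Bucket="my-upload-bucket",  # hypothetical bucket
#     NotificationConfiguration=notification_config,
# )
```

The same notification mechanism is why option D works for CloudTrail: CloudTrail writes log files to an S3 bucket, and that bucket's object-created events notify Lambda.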


Question : You have a website with the domain name QuickTechie.com where students can appear for online exams; a score card and a certificate are issued after successfully completing an exam. You have two separate storage structures, one for score cards and one for certificates. After one year, the score card should be deleted and the certificate should be moved to RRS storage. Which of the below functionalities can help implement this requirement?

1. You should use S3 Lifecycle management

2. You will write an AWS Lambda function which runs once a week, checks the age of each object, and takes action accordingly.

3. You will have to set up bucket policies specifically for these two storage structures, one for score cards and another for certificates.

4. You will use Simple Workflow Service, where in one activity you check the age of certificates and move them to RRS, in another activity you check the age of score cards and delete any objects older than one year, and you schedule that workflow to run once a week.
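Option 1, S3 lifecycle management, can be sketched as the payload for boto3's `put_bucket_lifecycle_configuration`: one rule expires old score cards, another transitions old certificates. The prefixes and bucket name are hypothetical, and the transition below uses GLACIER purely for illustration, since the set of storage classes lifecycle rules may transition objects to has changed over time.

```python
# Sketch: an S3 lifecycle configuration with an expiration rule and a
# transition rule. Prefixes and bucket name are hypothetical; GLACIER is
# used as an illustrative transition target.

lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-scorecards",
            "Filter": {"Prefix": "scorecards/"},   # hypothetical prefix
            "Status": "Enabled",
            "Expiration": {"Days": 365},           # delete after one year
        },
        {
            "ID": "archive-certificates",
            "Filter": {"Prefix": "certificates/"},  # hypothetical prefix
            "Status": "Enabled",
            "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
        },
    ]
}
# s3.put_bucket_lifecycle_configuration(
#     Bucket="quicktechie-results",  # hypothetical bucket
#     LifecycleConfiguration=lifecycle_config,
# )
```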



Question : You have developed custom fonts for websites that want to protect themselves from copy-paste and not be easily scraped. You have hosted these fonts in a protected S3 bucket, and only paid subscribers can use the fonts. Any website that uses these fonts will have access to the S3 bucket for accessing them. What else does your client have to do?

1. Your client's website will download the fonts locally onto their web server every day to use them on the website

2. Your client's website will have to have cookie-enabled sessions for these fonts to be used

3. Your client would have an S3 bucket which has a connection with the original S3 bucket hosting the website fonts

4. You have to enable cross-origin resource sharing (CORS) for the bucket which is hosting these custom paid fonts
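Option 4's CORS setup can be sketched as the payload for boto3's `put_bucket_cors` on the fonts bucket, allowing subscriber sites to fetch the fonts cross-origin. The allowed origin and bucket name here are hypothetical placeholders.

```python
# Sketch: a CORS configuration for the S3 bucket hosting the paid fonts,
# permitting GET requests from a subscriber's site. Origin and bucket name
# are hypothetical.

cors_config = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://subscriber-site.example.com"],  # hypothetical
            "AllowedMethods": ["GET"],   # fonts only need to be fetched
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3600,       # cache preflight responses for an hour
        }
    ]
}
# s3.put_bucket_cors(
#     Bucket="paid-fonts-bucket",  # hypothetical bucket
#     CORSConfiguration=cors_config,
# )
```

Without a CORS rule like this, browsers block cross-origin font requests even when the requesting site has S3-level access to the bucket.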