
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : You are working with an investment bank that has recently announced a move to a hybrid cloud: AWS as the public cloud and the in-house datacenter as the private cloud. The company mandates multi-factor authentication for every account logging in to the AWS console.
Now one of the teams has started using DynamoDB, a NoSQL solution, from an application installed on on-premise Linux instances. During development and testing they have been using access keys and secret keys, which are stored locally on the same Linux host.
A member of your security team has raised a concern over storing these keys in a text file and using them this way, and has suggested you come up with a more secure and safe way for the Linux instances to interact with DynamoDB. Which of the following should you consider the safest approach?

A. Amazon can store keys in a more secure way: you will create an encrypted EBS volume and store the text file on that encrypted EBS volume.
B. You will enable encryption between DynamoDB and the application installed on the Linux instance, using secure certificates.
C. You will encrypt the text file and store it on the same instance; whenever you need to connect to DynamoDB, you decrypt the keys.
D. You will use the Amazon-provided KMS (Key Management Service).
E. You will leverage IAM Role functionality.
1. A,B
2. B,C
3. C,D
4. B,E
5. A,D

Correct Answer : 4
Explanation: Ans : B,E


Exp : This question has a latent security aspect. It focuses on access keys and secret keys. Since you have to select more than one answer, check which options are appropriate both for securing the data transfer and for making the connection with AWS services.

Access Keys: You should always avoid saving access keys on the same host your application runs on; it is not secure at all. In the exam the same question may appear with an AMI (instead of your datacenter, you launch an EC2 instance and deploy your application on it, which connects to DynamoDB); in that use case as well, you should not store these keys in a text file.

Can I store them in an S3 bucket? : What would be the point of storing access keys in an S3 bucket? No, that is not a secure approach either.

Why keys at all? : Whenever you see a question about credentials and one of the options mentions an IAM Role, consider it carefully. There is a good probability it is part of the correct answer, as in this question, because a role supplies temporary credentials so no long-lived keys need to sit on the host.

KMS: Key Management Service is for creating and managing the encryption keys used to encrypt your data; it is not a store for IAM access keys and secret keys.

Encryption: Yes, whenever data leaves one network (AWS) and travels to another (a host in your datacenter), you should have encryption enabled so that your data cannot be read or tampered with by a man-in-the-middle attack.
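The reason option E is the safest is that with an IAM role no keys ever touch disk: on an EC2 instance with a role attached, the SDK fetches short-lived credentials from the instance metadata service. The sketch below illustrates that mechanism under stated assumptions; the role name "DynamoDBAppRole" and the `parse_credentials` helper are illustrative, not part of the question, and the fetch only succeeds when actually run on an EC2 instance with a role.

```python
# Sketch of the mechanism behind IAM roles for EC2 (option E): instead of
# keys in a text file, the SDK asks the instance metadata service for
# short-lived, automatically rotated credentials.
import json
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"


def fetch_role_credentials(role_name):
    """Fetch temporary credentials; works only on an EC2 instance with a role."""
    with urllib.request.urlopen(METADATA + role_name, timeout=2) as resp:
        return parse_credentials(resp.read())


def parse_credentials(raw):
    """Pick out the fields an SDK needs from the metadata JSON document."""
    doc = json.loads(raw)
    return {
        "access_key": doc["AccessKeyId"],
        "secret_key": doc["SecretAccessKey"],
        "token": doc["Token"],         # session token, rotated automatically
        "expires": doc["Expiration"],  # SDK refreshes before this time
    }
```

In practice you never call this endpoint yourself: boto3 and the other AWS SDKs walk this credential chain automatically, so the application code contains no keys at all.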


Question : QuickTechie.com is a very popular website for certification exam preparation, providing online practice material. Only members of the website can attempt a practice paper, so you need to create a profile, and once you have created it you can start practicing. Session state, such as how many questions you have attempted and how many were right or wrong, is maintained using sessions. Once you close the session or finish the question paper, the history of your exam attempt is deleted. To maintain this history for the duration of an attempt, which of the following services can be used? Remember, it is a scalable website with 4 EC2 nodes across two Availability Zones.

A. Amazon S3
B. ElastiCache
C. DynamoDB
D. Amazon Simple Workflow Service (SWF)
E. Amazon Redshift
1. A,B
2. B,C
3. C,D
4. D,E
5. A,E

Correct Answer : 2
Explanation: In this question you need to understand the exact requirement. The website is already scalable (it uses 4 EC2 instances), and the given options are not about scalability. It is about managing the user session: until the user finishes the exam, session data must be stored somewhere. Even without AWS you would manage session-specific data either in an in-memory cache or in a database.

In AWS, however, you should prefer ElastiCache for caching session data, or, if you want to persist it and delete it later, you can use DynamoDB. DynamoDB is a very fast NoSQL database when used properly.

S3: No. S3 is object storage for files, images, and videos, so eliminate this option.
Redshift: It is a data warehouse solution, so it cannot be used here; eliminate this option as well.
SWF: The requirement has nothing to do with a workflow service; eliminate this option too. Hence, the remaining options (ElastiCache and DynamoDB) are the best fit.
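The session pattern described above can be sketched as a tiny store with a time-to-live, mimicking what ElastiCache (Redis/Memcached) provides. This in-process dict only illustrates the access pattern; in the real deployment all 4 EC2 nodes would talk to a shared ElastiCache endpoint (or a DynamoDB table with TTL), and the class and field names here are assumptions for illustration.

```python
import time


class SessionStore:
    """Toy session store with expiry, standing in for ElastiCache."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._sessions = {}

    def put(self, session_id, key, value):
        # Create the session with an expiry timestamp on first write.
        entry = self._sessions.setdefault(
            session_id, {"expires": time.time() + self.ttl, "data": {}}
        )
        entry["data"][key] = value

    def get(self, session_id, key):
        entry = self._sessions.get(session_id)
        if entry is None or time.time() > entry["expires"]:
            self._sessions.pop(session_id, None)  # mimic cache eviction
            return None
        return entry["data"].get(key)


# Track an exam attempt the way the question describes:
store = SessionStore(ttl_seconds=1800)
store.put("user42", "questions_attempted", 7)
store.put("user42", "correct", 5)
```

Once the attempt finishes, the entry simply expires, which matches the requirement that the history is deleted when the session ends.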


Question : You are working at AcmeShell Inc. Their accounting department submits taxes on a monthly basis for their employees as well as for the services the company provides to its clients, and they need to store all of these documented records, protected from deletion and any kind of data loss. Which of the following is the most suitable AWS solution?

1. You will create an EBS volume, attach it to one of the EC2 instances, and install the accounting application on it, which will encrypt the documents and store them there. You will also replicate the same EBS volume in another region using a sync process.

2. You should keep one copy on an EBS volume and another copy in the instance store of an EC2 instance.

3. You should use AWS Glacier storage service

4. You can use S3 storage service with versioning enabled

5. You should use DynamoDB, where you can store documents as well.

Correct Answer : 4
Explanation: What is the requirement in the question? Storing documents that should be protected from accidental deletion. All options are storage services.

S3: Simple Storage Service is used to store objects such as documents, images, and video files, so it could be a correct option. Is it durable? Yes: once you store an object, it remains until you delete it explicitly, and retrieval is fast. Can a document be accidentally deleted from an S3 bucket? If the user has permission to do so, yes, it can. But if you enable versioning, all versions of a document are saved by default. Hence this is the most suitable answer for the given requirement. Please note that the extra space taken by multiple versions of a document attracts charges: versioning itself is free, but the storage consumed by versioned documents is billed.

DynamoDB: Not at all good for document storage, so eliminate this. It is a NoSQL store for key-value data with fast retrieval.

Glacier: This is also object storage, but it is used for archival; data that is very infrequently accessed belongs there. You may get confused by this option, but to avoid accidental deletion, enabling versioning on the S3 bucket is the correct choice.

EBS: Elastic Block Store is block storage, but you would generally not use it for this requirement. EBS is suitable for running databases on it, e.g. when you want to install your own MySQL server. Also, to access data stored on an EBS volume, the volume always has to be attached to an EC2 instance.

Instance store: Anything you store in an instance store is deleted as soon as you terminate the instance, so this can never be a correct choice for the given requirement.
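Enabling versioning, per the correct option, is a one-call change via boto3. This is a hedged sketch: the bucket name `acmeshell-tax-records` is an illustrative assumption, and the `versioning_config` helper merely builds the configuration dict that `put_bucket_versioning` expects.

```python
def versioning_config(enabled=True):
    """Build the VersioningConfiguration dict for S3 put_bucket_versioning."""
    return {"Status": "Enabled" if enabled else "Suspended"}


# On a host with AWS credentials (ideally supplied by an IAM role), applying
# it would look like this:
#
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_versioning(
#       Bucket="acmeshell-tax-records",  # assumed bucket name
#       VersioningConfiguration=versioning_config(),
#   )
#
# After this, a DELETE only adds a delete marker; every prior version of a
# document remains retrievable (and billable, as the explanation notes).
```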



Related Questions


Question : Which of the following statements is correct with regard to the Amazon Aurora database?


1. Amazon Aurora replicates each chunk of my database volume six ways across three Availability Zones.

2. Whatever storage you provision for the Aurora database, you will be charged 3 times that amount.

3. Amazon Aurora supports both MySQL and PostgreSQL

4. 1,3

5. 1,2,3


Question : You have provisioned a MySQL-based Aurora DB engine for the QuickTechie.com website, where the number of website members is increasing quite fast and you need the storage to grow by about 10GB every day. What would you do?


1. You will provision an extra 300GB at the start of every month.

2. Whatever storage you need, you have to provision in advance, because once you have provisioned the storage, changing the storage size will require migrating the data.

3. Aurora DB will take care of this, automatically growing the storage in 10GB increments as needed.

4. You can configure a feature in Aurora DB to get the desired storage increase every day, paying extra charges for this capability.



Question : You have provisioned Aurora DB for one of your ecommerce websites, and you are required to back up your data regularly. The project testing and data analytics teams also need data that is as fresh as possible. Which of the following options is/are suitable for the given requirement?
A. You have to configure a backup schedule for the time when your website usage is lowest.
B. You will be creating snapshots of your live DB.
C. The analytics team can directly fetch data from the live DB instance.
D. You don't have to configure a backup schedule.
E. You will not be creating snapshots of your live DB, because that impacts live database performance.
1. A,B
2. B,C
3. B,D
4. D,E
5. A,E


Question : Your company is in the ecommerce business and has a partnership with a popular data analytics company, which provides suggestions such as advertisement recommendations, product recommendations, etc. Hence they request that you share your data with them, as fresh as possible. You are using Aurora DB as the backend database, and your partner also has an AWS account and runs its analytics on AWS infrastructure. Which of the following is the best and correct option?


1. You will give your partner access to one of the live replicas in an Availability Zone of their preference.

2. You will use automated snapshot creation and configure the snapshots to be shared with the partner's AWS account.

3. You will manually create a snapshot from the live Aurora DB and then share that snapshot with the partner.

4. You cannot share Aurora DB snapshots with any other AWS account.

5. You will create VPC peering between your VPC (where the Aurora DB is hosted) and the partner's VPC, so that the partner can directly access the snapshots.


Question : Which of the following is true for Amazon Aurora DB?
A. Aurora DB automatically divides the database into segments of 10GB each.
B. It will create six copies of each segment across three Availability Zones.
C. If any data corruption occurs, you have to manually correct it in each replica.
D. You have to write a process that scans all the data blocks to find errors in them.
1. A,B
2. B,C
3. C,D
4. A,D
5. B,D


Question : In your application you are using Aurora MySQL DB as the backend database. It is a very critical application and must not go down, so you decided to replicate the data across regions. Which of the following statements is correct for this scenario?

A. You cannot set up replication for Aurora MySQL DB across regions.
B. You can set up cross-region replication for Aurora MySQL DB, but it will have a lag of a few milliseconds behind the primary DB.
C. You cannot promote a cross-region replica to be the primary RDS for your application.
D. You can promote a cross-region Aurora DB to primary, but it will take a few minutes to do so.
1. A,B
2. B,C
3. C,D
4. A,D
5. B,D