
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : Which of the following encryption methods is used by Amazon S3 on the server side?
1. SSE-S3

2. AES-256


4. X.509

Correct Answer : 1 (SSE-S3)
Explanation: Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3)
employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a
master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption
Standard (AES-256), to encrypt your data.
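As a sketch of how this looks in practice (the bucket and key names here are made up; the parameter names follow the S3 PutObject API as exposed by SDKs such as boto3), the request for an SSE-S3 upload can be built as a plain dictionary, where the `ServerSideEncryption: AES256` entry is what enables SSE-S3:

```python
# Request parameters for an S3 PutObject call with SSE-S3 enabled.
# Bucket and key names are hypothetical; the parameter names follow
# the S3 PutObject API (as exposed by SDKs such as boto3).
put_object_params = {
    "Bucket": "example-bucket",          # hypothetical bucket name
    "Key": "reports/2024/summary.csv",   # hypothetical object key
    "Body": b"col1,col2\n1,2\n",
    "ServerSideEncryption": "AES256",    # SSE-S3: S3-managed keys, AES-256
}

# With boto3 this would be passed as:
#   boto3.client("s3").put_object(**put_object_params)
print(put_object_params["ServerSideEncryption"])
```

Note that the caller only requests SSE-S3; key generation, rotation of the master key, and the AES-256 encryption itself all happen on the Amazon S3 side.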




Question : While setting up a second-tier (2nd level) read replica in RDS, you find a lag between the master and the second-tier replica. Why?
1. The 2nd level read replica's configuration is wrong
2. The 2nd level read replica's hardware configuration will always be of a lower grade than the master's.
4. Because of additional replication latency introduced as transactions are replicated from the master to the first-level
replica and then to the second-level replica.
5. Any of the above could be correct


Correct Answer : 4

Explanation: You can create a second-tier Read Replica from an existing first-tier Read Replica.
By creating a second-tier Read Replica, you may be able to move some of the replication load from the master database instance to a first-tier Read
Replica. Please note that a second-tier Read Replica may lag further behind the master because of additional replication latency introduced as transactions
are replicated from the master to the first-tier replica and then to the second-tier replica.
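As a sketch (the instance identifiers are made up; the parameter names follow the RDS CreateDBInstanceReadReplica API as exposed by SDKs such as boto3), chaining a second-tier replica simply means pointing the second replica's source at the first-tier replica rather than at the master:

```python
# Parameters for two CreateDBInstanceReadReplica calls (as exposed by
# SDKs such as boto3's RDS client). Identifiers are hypothetical.
first_tier = {
    "DBInstanceIdentifier": "blog-replica-1",
    "SourceDBInstanceIdentifier": "blog-master",   # replicates from the master
}

second_tier = {
    "DBInstanceIdentifier": "blog-replica-2",
    # The second tier replicates from the first-tier replica, not the
    # master -- this extra hop is where the additional lag comes from.
    "SourceDBInstanceIdentifier": first_tier["DBInstanceIdentifier"],
}

# With boto3 each set of parameters would be applied as:
#   boto3.client("rds").create_db_instance_read_replica(**second_tier)
print(second_tier["SourceDBInstanceIdentifier"])
```

Every transaction thus travels master -> first tier -> second tier, so the second tier can only ever be as fresh as the first tier is at that moment.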







Question : How can Acmeshell use S3 to accomplish this goal?

"As soon as a training video file reaches 6 months of age, it should be moved to Glacier storage."
1. Write an AWS command-line tool to back up the data and send it to Glacier after 6 months
2. Use S3 bucket policies to manage the data
4. Use bucket lifecycle policies and set the files to transition to Glacier storage after 6 months



Correct Answer : 4


Explanation: Lifecycle management defines how Amazon S3 manages objects during their lifetime.
Some objects that you store in an Amazon S3 bucket might have a well-defined lifecycle:

If you are uploading periodic logs to your bucket, your application might need these logs for a week or a month after creation, and after that you might
want to delete them.

Some documents are frequently accessed for a limited period of time. After that, you might not need real-time access to these objects, but your
organization might require you to archive them for a longer period and then optionally delete them later. Digital media archives, financial and healthcare records,
raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance are some kinds of data that you might
upload to Amazon S3 primarily for archival purposes.

For such objects, you can define rules that identify the affected objects, a timeline, and specific actions you want Amazon S3 to perform on the objects.

Amazon S3 manages object lifetimes with a lifecycle configuration, which is assigned to a bucket and defines rules for individual objects. Each rule in a
lifecycle configuration consists of the following:

An object key prefix that identifies one or more objects to which the rule applies.

An action or actions that you want Amazon S3 to perform on the specified objects.

A date or a time period, specified in days since object creation, when you want Amazon S3 to perform the specified action.

You can add these rules to your bucket using either the Amazon S3 console or programmatically.
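A minimal sketch of such a rule for the scenario above (the prefix and rule ID are made up; the structure follows the S3 lifecycle configuration API, with 6 months approximated as 180 days):

```python
import json

# Lifecycle configuration that transitions objects under a prefix to
# Glacier after roughly 6 months (180 days). The prefix and rule ID
# are hypothetical; the structure follows the S3 lifecycle API, which
# combines a key-prefix filter, an action, and a time period in days.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-training-videos",          # hypothetical rule name
            "Filter": {"Prefix": "training-videos/"}, # objects the rule applies to
            "Status": "Enabled",
            "Transitions": [
                {"Days": 180, "StorageClass": "GLACIER"},  # action + timeline
            ],
        }
    ]
}

# With boto3 this would be applied programmatically as:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
print(json.dumps(lifecycle_config, indent=2))
```

Once the rule is attached to the bucket, S3 evaluates and applies it automatically; no custom backup tooling is needed, which is why the lifecycle option is preferred over a hand-rolled command-line tool.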




Related Questions


Question : You are working with an IT technology blogging company named QuickTechie Inc. They have a huge number of blogs on various technologies. Their
infrastructure is built on AWS, using EC2 with a MySQL database installed locally on the same EC2 instance. Your chief architect has informed you
that there are some performance issues with this architecture, and as you know, it is not a resilient solution. To solve this problem, you suggested using
RDS with a Multi-AZ deployment, which meets the resiliency requirement. Now, with regard to performance, you have been informed that 99% of the time the blogging
site is accessed for reading blogs and hardly 1% for writing and updating them. Which of the following options will help you increase
performance in the given scenario?


1. As you have a Multi-AZ deployment, start writing to the secondary copy whenever a blog is created or updated.

2. You should have a Multi-AZ deployment where all writes go to the primary database and reads come from the secondary database.


4. You should use a NoSQL database, as it is a good fit for blogging solutions.



Question : You are working in a bank, Arinika Inc. The bank has asked you to implement the best solution for commenting and messaging on their mobile
applications. You have used DynamoDB to store all these comments and messages. However, the bank now wants to do some analytics based on user transactions,
messages, and comments to analyze spending behavior. What is your solution for this requirement?


1. You should move these comments and messages to RDS and ask your analytics team to use that database for their analytical queries.

2. You should move the transaction data to Redshift and ask your analytics team to use that database for their analytical queries.


4. You should use RRS to store comments and messages and combine them with the already stored transactions to do the analytics.



Question : You have been working in a financial company, which creates indexes based on equity data. Your technical team is in the process of creating a
product based on newly acquired vendor data stored in Amazon RDS, where automated backup is enabled. During a production deployment you asked the DBA
team to create a table, with a DROP statement first and then CREATE TABLE. The table has now been dropped by the DBA and a new empty table has been created.
However, a few moments later you received a complaint that other applications were reading data from that table and are no longer getting that data. It is a
very critical production application. How will you and your DBA handle this scenario?


1. As you are using Amazon RDS, in a few moments you can restore the table that was deleted.

2. You will restore the entire database from a snapshot backup.


4. As Amazon RDS stores backup snapshots in Amazon Glacier, it will take 3-5 hrs to get the snapshot and recover the table.



Question : You are working as an AWS solution architect and you need a better solution for a mobile wallet to store all the transaction data.
Hence, you decided to use Amazon RDS, but you also need a highly resilient solution. Which of the following RDS database engines support
resiliency based on your requirement?


1. SQL Server, MySQL, Oracle

2. Oracle, Aurora, SQL Server


4. All database engines supported as part of RDS have Multi-AZ support



Question : You are developing a technology blogging website, where learners will create blogs on a weekly basis, while millions
of users will read them. You have decided to use Amazon RDS, but which DB engine would be a good fit for this requirement?



1. SQL Server, MySQL, Oracle

2. Oracle, Aurora, SQL Server


4. All database engines supported as part of RDS have Multi-AZ support

5. MySQL, MariaDB, Amazon Aurora and PostgreSQL




Question : You have been working with an e-commerce company, which has its data stored in an AWS RDS database. You are at the last stage of
productionizing this solution. For resiliency you have enabled a Multi-AZ deployment. Now your chief architect has asked whether you tested
failover or just assumed it will work. Certainly, you should not assume it will work; you have to test your solution for every scenario. What will you do to
test failover in the given scenario?


1. As this is about stopping the primary DB instance in one of the Availability Zones, you need to raise a support ticket to stop the
primary RDS instance.

2. You have to stop the secondary DB instance, as this does not require an AWS support ticket. You can stop it and test the given scenario.


4. This scenario practically cannot be tested, as it would require bringing down an entire data center.