
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question :
Your database engine needs to build indexes. You configure a read replica from the current production system
and start the index build on the read replica. What do you need to do once the indexes are complete?
1. Change the DNS to the read replica
2. Request AWS to change endpoint to read replica
4. None of the above



Explanation: There are a number of different uses for this feature. Here are some suggestions to get you started:

Perform DDL Operations: Table-level DDL operations such as adding columns or indexes can take a long time and can impose a performance penalty on your
master database instance. Here is another way to do it:

Execute the operations on a designated Read Replica and wait for them to complete.
Wait for the Read Replica to catch up with the master database instance.
Promote the Read Replica to a master.
Direct all database traffic to the newly promoted master.
Create additional Read Replicas for performance purposes as needed.
Terminate the original master and any remaining Read Replicas associated with it.
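The steps above can be sketched with the AWS CLI. The instance identifiers (`mydb`, `mydb-ddl`) are hypothetical, and the commands are wrapped in a dry-run helper that only echoes what it would run, so the sketch can be read without touching a live account:

```shell
#!/bin/sh
# Sketch of the DDL-on-a-replica workflow. "mydb" and "mydb-ddl" are
# hypothetical identifiers; run() echoes each command instead of executing it.
run() { echo "+ $*"; }

# 1. Create a designated Read Replica of the master.
run aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-ddl \
    --source-db-instance-identifier mydb

# 2. Wait until the replica is available, execute the DDL against it,
#    and let it catch up with the master (ReplicaLag near zero).
run aws rds wait db-instance-available --db-instance-identifier mydb-ddl

# 3. Promote the Read Replica to a standalone master.
run aws rds promote-read-replica --db-instance-identifier mydb-ddl
```

After promotion you would repoint application traffic (for example, a DNS CNAME) at the new master's endpoint and create fresh Read Replicas from it as needed.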
Shard a Table: Sharding involves splitting a table into smaller tables, often using a hashing algorithm on the table's primary key to partition the key space
across tables. You can move from a single-table model to a sharded model using Read Replicas and promotion as follows:

Create a Read Replica for each shard.
Wait for each of the new Read Replicas to become available.
Promote the Read Replicas to masters.
Direct database traffic to the new sharded masters.
On each shard, delete the rows that belong to the other shards.
Terminate the original master.
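The sharding procedure above can be sketched as follows. The table name `users`, the instance identifiers, the two-shard `MOD(id, 2)` scheme, and the endpoints are all illustrative assumptions; the dry-run wrapper only prints the commands:

```shell
#!/bin/sh
# Sketch: split table "users" across two shards via Read Replica promotion.
# All names and the MOD(id, 2) partitioning scheme are hypothetical.
run() { echo "+ $*"; }

# 1-2. Create one Read Replica per shard and wait for each to become available.
for shard in 0 1; do
    run aws rds create-db-instance-read-replica \
        --db-instance-identifier "mydb-shard$shard" \
        --source-db-instance-identifier mydb
    run aws rds wait db-instance-available --db-instance-identifier "mydb-shard$shard"
done

# 3-4. Promote each replica to a master, then direct traffic to the shards.
for shard in 0 1; do
    run aws rds promote-read-replica --db-instance-identifier "mydb-shard$shard"
done

# 5. On each shard, delete the rows that belong to the other shard:
#    shard N keeps only rows where MOD(id, 2) = N.
for shard in 0 1; do
    run mysql -h "mydb-shard$shard.example.com" \
        -e "DELETE FROM users WHERE MOD(id, 2) <> $shard"
done
```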
Implement Failure Recovery: Amazon RDS provides multiple options for data recovery during failures, including Multi-AZ deployments and Point-in-Time
Recovery. With the ability to promote, a Read Replica can be considered an additional data recovery scheme against failures. However, you will want to make
sure that you understand the ramifications of the asynchronous replication model and its limitations before electing to use this option as a recovery
mechanism. If your use case requires synchronous replication, automatic failure detection, and failover, we recommend you run your DB Instance as a Multi-AZ
deployment. If you do want to use a Read Replica as a data recovery mechanism, you would start by creating a Read Replica and then monitoring the master for
failures. In the event of a failure you would proceed as follows:

Promote the Read Replica.
Direct database traffic to the new master.
Create a replacement Read Replica.
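These recovery steps can be sketched with the AWS CLI. The identifiers (`mydb-standby`, the hosted zone ID, the change-batch file) are hypothetical, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch of the failure-recovery steps: promote the standby replica, repoint
# database traffic via DNS, and create a replacement replica. All identifiers
# are hypothetical; run() echoes each command instead of executing it.
run() { echo "+ $*"; }

# 1. Promote the Read Replica; replication stops and it becomes a standalone master.
run aws rds promote-read-replica --db-instance-identifier mydb-standby

# 2. Direct database traffic to the new master, e.g. by updating a Route 53 CNAME.
run aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
    --change-batch file://repoint-db-cname.json

# 3. Create a replacement Read Replica of the newly promoted master.
run aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-standby2 \
    --source-db-instance-identifier mydb-standby
```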





Question :
Which storage engine is required for MySQL read replicas?

1. InnoDB
2. MyISAM
4. Federated


Correct Answer : 1 (InnoDB)

Explanation: The Point-in-Time Restore and Snapshot Restore features of Amazon RDS for MySQL require a crash-recoverable storage engine and are supported for
the InnoDB storage engine only. While MySQL supports multiple storage engines with varying capabilities, not all of them are optimized for crash recovery and
data durability. For example, the MyISAM storage engine does not support reliable crash recovery and may result in lost or corrupt data when MySQL is
restarted after a crash, preventing Point-in-Time Restore or Snapshot Restore from working as intended.

Read Replicas require a transactional storage engine and are only supported for the InnoDB storage engine.

Non-transactional engines such as MyISAM might prevent Read Replicas from working as intended. However, if you still choose to use MyISAM with Read
Replicas, we advise you to watch the Amazon CloudWatch Replica Lag metric (available via the AWS Management Console or Amazon CloudWatch APIs) carefully and
recreate the Read Replica should it fall behind due to replication errors. The same considerations apply to the use of temporary tables and any other
non-transactional engines.
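Both checks can be sketched from the command line: a query against `information_schema` to find tables not on InnoDB, and a CloudWatch call for the `ReplicaLag` metric. The instance name, endpoint, and time window are hypothetical, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch: audit storage engines and watch replica lag. "mydb-replica" and its
# endpoint are hypothetical; run() echoes each command instead of executing it.
run() { echo "+ $*"; }

# List any user tables not using InnoDB (candidates for replication/restore problems).
run mysql -h mydb-replica.example.com -e \
    "SELECT table_schema, table_name, engine FROM information_schema.tables WHERE engine <> 'InnoDB' AND table_schema NOT IN ('mysql','information_schema','performance_schema')"

# Fetch the ReplicaLag metric (in seconds) for one hour, as 5-minute averages.
run aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS --metric-name ReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=mydb-replica \
    --start-time 2015-01-01T00:00:00Z --end-time 2015-01-01T01:00:00Z \
    --period 300 --statistics Average
```

If the reported lag keeps growing due to replication errors, the guidance above is to recreate the Read Replica.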





Question :
What would you need to edit to allow connections from your remote IP to your database instance?
1. VPC Network
2. Options Group
4. Security Group


Correct Answer : 4 (Security Group)

Explanation: DB Security Groups vs. VPC Security Groups

The following table shows the key differences between DB security groups and VPC security groups.

DB Security Group
Controls access to DB instances outside a VPC.
Uses Amazon RDS APIs or the Amazon RDS page of the AWS Management Console to create and manage groups or rules.
When you add a rule to a group, you do not need to specify a port number or protocol.
Groups allow access from EC2 security groups in your AWS account or other accounts.

VPC Security Group
Controls access to DB instances in a VPC.
Uses Amazon EC2 APIs or the Amazon VPC page of the AWS Management Console to create and manage groups or rules.
When you add a rule to a group, you should specify the protocol as TCP, and specify the same port number that you used to create the DB instances (or
options) you plan to add as members to the group.
Groups allow access from other VPC security groups in your VPC only.

Security Group Scenario


A common use of an RDS instance in a VPC is to share data with an application server running in an EC2 instance in the same VPC and that is accessed by a
client application outside the VPC. For this scenario, you would do the following to create the necessary instances and security groups. You can use the RDS
and VPC pages on the AWS Console or the RDS and EC2 APIs.

Create a VPC security group (for example, "sg-appsrv1") and define inbound rules that use as source the IP addresses of the client application. This
security group allows your client application to connect to EC2 instances in a VPC that uses this security group.

Create an EC2 instance for the application and add the EC2 instance to the VPC security group ("sg-appsrv1") you created in the previous step. The EC2
instance in the VPC shares the VPC security group with the DB instance.

Create a second VPC security group (for example, "sg-dbsrv1") and create a new rule by specifying the VPC security group you created in step 1
("sg-appsrv1") as the source.

Create a new DB instance and add the DB instance to the VPC security group ("sg-dbsrv1") you created in the previous step. When you create the instance, use
the same port number as the one specified for the VPC security group ("sg-dbsrv1") rule you created in step 3.
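The security-group steps above can be sketched with the AWS CLI. The group names, VPC ID, client CIDR, and ports are illustrative assumptions (in a non-default VPC you would pass the `GroupId` returned by `create-security-group` rather than a name); the commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch of the two-tier security group scenario. All names, IDs, CIDRs and
# ports are hypothetical; run() echoes each command instead of executing it.
run() { echo "+ $*"; }

# Step 1: app-tier group, inbound from the client application's IP range.
run aws ec2 create-security-group --group-name sg-appsrv1 \
    --description "app servers" --vpc-id vpc-12345678
run aws ec2 authorize-security-group-ingress --group-name sg-appsrv1 \
    --protocol tcp --port 443 --cidr 203.0.113.0/24

# Step 3: db-tier group, inbound only from the app-tier group, on the DB port
# (3306 here, matching the port the DB instance will be created with).
run aws ec2 create-security-group --group-name sg-dbsrv1 \
    --description "db servers" --vpc-id vpc-12345678
run aws ec2 authorize-security-group-ingress --group-name sg-dbsrv1 \
    --protocol tcp --port 3306 --source-group sg-appsrv1
```

Because the DB-tier rule names the app-tier group as its source, the client never reaches the database directly; only EC2 instances in sg-appsrv1 can connect on the DB port.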



Related Questions


Question : You are working with a healthcare IT organization which maintains the health records of many USA patients. You have two applications: one creates health records and stores them in an Amazon S3
bucket. These health records cannot be exposed to the public and need to be protected. The other application is a web application hosted on an EC2 instance that needs to read those sensitive documents; whenever users log in
on the website they can access and view their health records, and their family doctors can also view those documents. However, the audit security team has advised that these documents cannot be accessed over the public
network. What is the best solution for this problem?


1. You will create your custom VPC and attach internet gateway to this and from that gateway, you will access S3 buckets.

2. You will be using VPC peering

3. You will be installing storage gateway to access the data in S3 over the private network.

4. You will be creating a VPN connection, so that data can be accessed over the VPN tunnel

5. You will be using VPC endpoint to access the data from AWS S3


Question : You have a monthly batch job which analyzes the millions of files accumulated over the month, containing various patient health details, and recommends to each patient what he needs to do; hence you have
written a good amount of MapReduce code which can run on these files. The job needs to be executed once every 30 days using AWS EC2 instances and requires approx. 1000 vCPUs for approx. 3 hrs. to complete the
entire job. Which of the following approaches would you use?



1. You will request 9 EC2 on-demand instances with m5.24xlarge, which can deliver approx. 9X5X24 vCPU = 1080

2. You will request 9 EC2 spot instances with m5.24xlarge, which can deliver approx. 9X5X24 vCPU = 1080 at lower cost

3. You will request 1 EC2 spot instances with m5.24xlarge, which can deliver approx. 9X5X24 vCPU = 216 and run the job for 15 hours

4. You will be using EC2 Fleet to launch EC2 spot instances with m5.24xlarge and capacity would be 1000 vCPU



Question : You have been working with a healthcare IT company which manages patients on behalf of various hospitals. This data is very sensitive, and some research teams can run analytics on the data if permitted.
The data needs to be stored in an RDBMS. How would you make sure that the data stored in RDS is secure and cannot be reached through a network attack, while the research team can still access this data
from EC2 instances?


1. You will be having two VPC one for research team and another for RDS instance and make a connection between these two VPC using VPC peering.

2. You will be creating database user for research team so that only permitted users can access data from RDS instance

3. You will be defining security groups such that only data can be accessed from allowed networks.

4. You will be having VPN connection between EC2 instance and RDS instance.


Question : You have developed a Docker container and want to run an application using this container with pre-defined EC2 instance types, EBS volumes and ELB. However, you want fine-grained control over the
container you have created while deploying your application. Which of the following services would be more suitable for the given requirement?


1. Elastic Beanstalk

2. Amazon Cloudwatch

3. AWS Cloud Formation

4. Amazon EC2 Container Service (ECS)


Question : You have developed a mobile-based gaming application where various users can participate and maintain their scores. You want to show the top scorers for a particular game. Your application is very
popular and the top 1000 scorers keep changing (it's a leaderboard), while in total more than a million users play this game on a regular basis. Which of the following is the most suitable data store that
can return results as fast as needed?


1. Amazon ElastiCache using Memcache protocol

2. Amazon ElastiCache using Redis protocol

3. Amazon DynamoDB having index on user id and score together

4. Maintaining data in MySQL RDS and creating primary index on user id and secondary index on score

5. You will use Lambda function which will sort user score in every minute which will be stored in a text file


Question : You have created a Docker image for your application and leveraged AWS ECR (Elastic Container Registry). You created a private subnet and want to launch instances based on the Docker image you created and
registered with ECR, but you are not able to access that Docker image. What could be the reason?

1. You don't have proper IAM role to access this Docker image.

2. You don't have connectivity via internet between your VPC and ECR

3. Your Docker image could be corrupted

4. Datacenter where your Docker image is stored in ECR is down while you are trying to use it.