
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : When preparing for a compliance assessment of your system built inside of AWS, what are three best practices to prepare for an audit?

Choose 3 answers
A. Gather evidence of your IT operational controls
B. Request and obtain applicable third-party audited AWS compliance reports and certifications
C. Request and obtain a compliance and security tour of an AWS data center for a preassessment security review
D. Request and obtain approval from AWS to perform relevant network scans and in-depth penetration tests of your system's Instances and endpoints
E. Schedule meetings with AWS's third-party auditors to provide evidence of AWS compliance that maps to your control objectives



1. B,D,E
2. A,B,D
3.
4. C,D,E
5. A,B,C


Correct Answer :

Explanation:






Question :

You have an application running on Amazon Web Services. The application has 4 EC2 instances in Availability Zone us-east-1c.
You're using an Elastic Load Balancer to load balance traffic across your four instances.
What changes would you make to create a fault-tolerant architecture? Select the "best" possible answer.

1. Create EBS backups to ensure data is not lost.
2. Move all 4 instances to a different availability zone.
3.
4. None of the above

Correct Answer :

Explanation: Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon Elastic Compute Cloud (Amazon EC2) instances. You
can set up an elastic load balancer to load balance incoming application traffic across Amazon EC2 instances in a single Availability Zone or multiple
Availability Zones. Elastic Load Balancing enables you to achieve even greater fault tolerance in your applications, plus it seamlessly provides the amount
of load balancing capacity that is needed in response to incoming application traffic.

You can build fault tolerant applications by placing your Amazon EC2 instances in multiple Availability Zones. To achieve even more fault tolerance with less
manual intervention, you can use Elastic Load Balancing. When you place your compute instances behind an elastic load balancer, you improve fault tolerance
because the load balancer can automatically balance traffic across multiple instances and multiple Availability Zones. This ensures that only healthy EC2
instances receive traffic.

Elastic Load Balancing also detects the health of EC2 instances. When it detects unhealthy Amazon EC2 instances, it no longer routes traffic to them.
Instead, it spreads the load across the remaining healthy EC2 instances. If you have set up your EC2 instances in multiple Availability Zones, and all of
your EC2 instances in one Availability Zone become unhealthy, Elastic Load Balancing will route traffic to your healthy EC2 instances in those other zones.
When the unhealthy EC2 instances have been restored to a healthy state Elastic Load Balancing will resume load balancing to those instances. Additionally,
Elastic Load Balancing is itself a distributed system that is fault tolerant and actively monitored.

Elastic Load Balancing also offers integration with Auto Scaling, which ensures that you have the back-end capacity available to meet varying traffic levels.
Let's say that you want to make sure that the number of healthy EC2 instances behind an Elastic Load Balancer is never fewer than two. You can use Auto
Scaling to set these conditions, and when Auto Scaling detects that a condition has been met, it automatically adds the requisite number of EC2 instances to
your Auto Scaling group. Here's another example: if you want to add EC2 instances when the latency of any one of your instances exceeds 4 seconds over any
15-minute period, you can set that condition, and Auto Scaling will take the appropriate action on your EC2 instances, even when running behind an Elastic
Load Balancer. Auto Scaling works equally well for scaling EC2 instances whether or not you're using Elastic Load Balancing.

One of the major benefits of Elastic Load Balancing is that it abstracts out the complexity of managing, maintaining, and scaling load balancers. The service
is designed to automatically add and remove capacity as needed, without needing any manual intervention.
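The multi-AZ setup described above can be sketched as a set of request parameters. The dict below mirrors the keyword arguments for a boto3-style `create_load_balancer` call; the load balancer name and Availability Zone identifiers are illustrative placeholders, not taken from the question:

```python
# Sketch: parameters for an Elastic Load Balancer spanning two
# Availability Zones. In practice this dict would be passed to a boto3
# client, e.g. boto3.client("elb").create_load_balancer(**params).
# Names and zone identifiers here are illustrative placeholders.

def elb_params(name, zones):
    """Build create_load_balancer-style keyword arguments for the given AZs."""
    return {
        "LoadBalancerName": name,
        "Listeners": [{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        # Spanning more than one AZ is what provides the fault
        # tolerance discussed above: if one AZ becomes unhealthy,
        # traffic is routed to instances in the other zones.
        "AvailabilityZones": zones,
    }

params = elb_params("my-web-elb", ["us-east-1c", "us-east-1d"])
```

Registering the four existing instances plus instances in a second AZ behind this load balancer is what turns the single-AZ design in the question into a fault-tolerant one.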






Question : You are reviewing the Auto Scaling events for your application and you notice that your application is scaling up and down multiple times in
the same hour. What design choices could you make to optimize for cost while preserving elasticity?
1. Terminate your oldest instances first by modifying the Auto Scaling group termination policy.
2. Terminate your newest instances first by modifying the Auto Scaling group termination policy.
3.
4. Modify the Auto Scaling group cool-down timers and modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale down policy.

Correct Answer :

Explanation:
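Rapid scale-up/scale-down cycles within the same hour are usually damped by lengthening the Auto Scaling cool-down timer and the CloudWatch alarm period, as option 4 describes. A minimal sketch using boto3-style parameter names (the group name, cooldown value, and alarm thresholds are illustrative assumptions):

```python
# Sketch: lengthening the Auto Scaling cooldown and the CloudWatch
# alarm evaluation window to damp scaling "thrash". These dicts mirror
# the kwargs for boto3's autoscaling.update_auto_scaling_group(...)
# and cloudwatch.put_metric_alarm(...). Names/values are illustrative.

def cooldown_params(group_name, cooldown_seconds):
    return {
        "AutoScalingGroupName": group_name,
        # Time to wait after a scaling activity before another may start.
        "DefaultCooldown": cooldown_seconds,
    }

def scale_down_alarm_params(alarm_name, period_seconds, evaluation_periods):
    return {
        "AlarmName": alarm_name,
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Statistic": "Average",
        # A longer period and more evaluation periods make the alarm
        # react to sustained low load rather than brief dips.
        "Period": period_seconds,
        "EvaluationPeriods": evaluation_periods,
        "Threshold": 30.0,
        "ComparisonOperator": "LessThanThreshold",
    }

asg = cooldown_params("my-asg", 600)                       # 10-minute cooldown
alarm = scale_down_alarm_params("scale-down-cpu", 300, 4)  # 20 minutes of low CPU
```

This keeps elasticity (the group still scales) while avoiding the cost of instances that are launched and terminated within minutes of each other.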



Related Questions


Question : Amazon SNS can be used with other AWS services as well. Select the correct option.
1. Amazon SQS
2. Amazon EC2
3.
4. Only 1 and 2
5. All 1,2 and 3



Question : Select which of the following applies correctly to Amazon SNS topic security.
1. All API calls made to Amazon SNS are validated for the user's AWS ID and the signature
2. Topics can only be created by users with valid AWS IDs who have signed up for Amazon SNS
3.
4. Only 1 and 2 are correct
5. All 1,2 and 3 are correct




Question : Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table?
Assume that no security keys are allowed to be stored on the EC2 instance.

A. Create an IAM Role that allows write access to the DynamoDB table.
B. Add an IAM Role to a running EC2 instance.
C. Create an IAM User that allows write access to the DynamoDB table.
D. Add an IAM User to a running EC2 instance.
E. Launch an EC2 Instance with the IAM Role included in the launch configuration.

1. A,C
2. C,D
3.
4. A,E
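A minimal sketch of the two IAM policy documents behind an instance role that can write to DynamoDB: a trust policy letting EC2 assume the role, and a permissions policy granting write access to one table. The role/table names, account ID, and region are illustrative assumptions, not from the question:

```python
# Sketch: policy documents for an EC2 instance role with DynamoDB write
# access. These would be passed (as JSON) to boto3's
# iam.create_role(AssumeRolePolicyDocument=...) and
# iam.put_role_policy(PolicyDocument=...). All names are illustrative.

# Trust policy: lets EC2 assume the role, so the application receives
# temporary credentials and no keys are stored on the instance.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: write access limited to a single table.
write_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem",
                   "dynamodb:BatchWriteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
    }],
}
```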


Question : A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance and send real-time alerts to their
operations team. Which AWS services can accomplish this? Choose 2 answers

A. Amazon Simple Email Service
B. Amazon CloudWatch
C. Amazon Simple Queue Service
D. Amazon Route 53
E. Amazon Simple Notification Service

1. B,E
2. C,D
3.
4. A,E
Ans : 1 Exp : Amazon RDS provides several metrics that you can use to determine how your DB instance is performing. You can view the metrics in the RDS
console by selecting your DB instance and clicking Show Monitoring. You can also use Amazon CloudWatch to monitor these metrics. For more information, see
Viewing DB Instance Metrics.
" IOPS - the number of I/O operations completed per second. This metric is reported as the average IOPS for a given time interval. Amazon RDS reports read
and write IOPS separately on one minute intervals. Total IOPS is the sum of the read and write IOPS. Typical values for IOPS range from zero to tens of
thousands per second.
" Latency - the elapsed time between the submission of an I/O request and its completion. This metric is reported as the average latency for a given time
interval. Amazon RDS reports read and write latency separately on one minute intervals in units of seconds. Typical values for latency are in the millisecond
(ms); for example, Amazon RDS reports 2 ms as 0.002 seconds.
" Throughput - the number of bytes per second transferred to or from disk. This metric is reported as the average throughput for a given time interval.
Amazon RDS reports read and write throughput separately on one minute intervals using units of megabytes per second (MB/s). Typical values for throughput
range from zero to the I/O channel's maximum bandwidth.
" Queue Depth - the number of I/O requests in the queue waiting to be serviced. These are I/O requests that have been submitted by the application but have
not been sent to the device because the device is busy servicing other I/O requests. Time spent waiting in the queue is a component of Latency and Service
Time (not available as a metric). This metric is reported as the average queue depth for a given time interval. Amazon RDS reports queue depth in one minute
intervals. Typical values for queue depth range from zero to several hundred.
" Amazon CloudWatch uses Amazon Simple Notification Service (Amazon SNS) to send email. This section shows you how to create and subscribe to an Amazon

Simple Notification Service topic. When you create a CloudWatch alarm, you can add this Amazon SNS topic to send an email notification when the alarm changes
state.
" This scenario walks you through how to use the AWS Management Console or the command line tools to create an Amazon CloudWatch alarm that sends an Amazon
Simple Notification Service email message when the alarm changes state from OK to ALARM.
" In this scenario, you configure the alarm to change to the ALARM state when the average CPU use of an EC2 instance exceeds 70 percent for two consecutive
five-minute periods.
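The CloudWatch-plus-SNS pairing described above can be sketched as a single set of alarm parameters, here for the RDS ReadIOPS metric from the question. The dict mirrors boto3's `put_metric_alarm` kwargs; the DB instance identifier, topic ARN, and threshold are illustrative assumptions:

```python
# Sketch: a CloudWatch alarm on the RDS ReadIOPS metric that notifies
# an SNS topic (whose subscribers could be the operations team's email
# addresses). In practice this dict would be passed to
# boto3.client("cloudwatch").put_metric_alarm(**alarm).
# Identifier, ARN, and threshold values are illustrative.

def rds_iops_alarm(db_instance_id, topic_arn, threshold):
    return {
        "AlarmName": f"{db_instance_id}-read-iops-high",
        "Namespace": "AWS/RDS",              # RDS metrics live in this namespace
        "MetricName": "ReadIOPS",
        "Dimensions": [{"Name": "DBInstanceIdentifier",
                        "Value": db_instance_id}],
        "Statistic": "Average",
        "Period": 60,                        # RDS reports on one-minute intervals
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        # SNS delivers the real-time alert when the alarm fires.
        "AlarmActions": [topic_arn],
    }

alarm = rds_iops_alarm("mydb", "arn:aws:sns:us-east-1:123456789012:ops", 5000.0)
```

A second alarm built the same way over `WriteIOPS` covers the write side of the requirement.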




Question : Which set of Amazon S3 features helps to prevent and recover from accidental data loss?
1. Object lifecycle and service access logging
2. Object versioning and Multi-factor authentication
3.
4. Website hosting and Amazon S3 policies
Ans : 2 Exp : To ensure that data integrity is not compromised through deliberate or accidental modification, use resource
permissions to limit the scope of users who can modify the data. Even with resource permissions, accidental deletion by a privileged user is still a threat
(including a potential attack by a Trojan using the privileged user's credentials), which illustrates the importance of the principle of least privilege.
Perform data integrity checks, such as Message Authentication Codes (SHA-1/SHA-2), Hashed Message Authentication Codes (HMACs), digital signatures, or
authenticated encryption (AES-GCM), to detect data integrity compromise. If you detect data compromise, restore the data from backup or, in the case of
Amazon S3, from a previous object version.

Against accidental or malicious deletion, using the correct permissions and the rule of least privilege is the best protection. For
services such as Amazon S3, you can use MFA Delete to require multi-factor authentication to delete an object, limiting access to Amazon S3 objects to
privileged users. If you detect data compromise, restore the data from backup or, in the case of Amazon S3, from a previous object version.
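The versioning side of the answer can be sketched as the request body for a boto3-style `put_bucket_versioning` call (the bucket name is illustrative; enabling MFA Delete additionally requires the bucket owner's MFA device serial and token on the actual request):

```python
# Sketch: the VersioningConfiguration body for
#   s3.put_bucket_versioning(Bucket="my-bucket",
#                            VersioningConfiguration=cfg)
# Bucket name and the decision to enable MFA Delete are illustrative.

def versioning_config(enable_mfa_delete=False):
    config = {"Status": "Enabled"}  # keep prior object versions on overwrite/delete
    if enable_mfa_delete:
        # With MFA Delete on, permanently deleting an object version
        # requires multi-factor authentication.
        config["MFADelete"] = "Enabled"
    return config

cfg = versioning_config(enable_mfa_delete=True)
```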




Question : A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the
same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR
blocks. The company wants to push minor code releases from Dev to Prod to speed up
time to market. Which of the following options helps the company accomplish this?

1. Create a new peering connection between Prod and Dev along with appropriate routes.
2. Create a new entry to Prod in the Dev route table using the peering connection as the target.
3.
4. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.
Ans : 1
Exp : A VPC peering connection is a one-to-one relationship between two VPCs. You can create multiple VPC peering connections for each VPC that you own, but
transitive peering relationships are not supported: in a VPC peering connection, your VPC will not have access to any other VPCs that the peer VPC
may be peered with. This includes VPC peering connections that are established entirely within your own AWS account.
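Because peering is non-transitive, Dev needs its own peering connection to Prod plus a route on each side. A sketch of the request parameters for boto3-style `create_vpc_peering_connection` and `create_route` calls; all VPC IDs, route table IDs, and CIDR blocks are illustrative assumptions:

```python
# Sketch: peering Dev directly to Prod and routing Prod's CIDR through
# the new peering connection. These dicts mirror the kwargs for
#   ec2.create_vpc_peering_connection(...) and ec2.create_route(...).
# All identifiers and CIDR blocks below are illustrative.

peering_request = {
    "VpcId": "vpc-dev111",        # requester: Dev
    "PeerVpcId": "vpc-prod222",   # accepter: Prod (must accept the request)
}

# Route in Dev's route table sending Prod-bound traffic over the
# peering connection; a mirror-image route is needed in Prod's table.
dev_route = {
    "RouteTableId": "rtb-dev333",
    "DestinationCidrBlock": "10.2.0.0/16",       # Prod's CIDR block
    "VpcPeeringConnectionId": "pcx-abcd1234",
}
```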




Question : An Auto Scaling group spans 3 Availability Zones and currently has 4 running EC2 instances.
When Auto Scaling needs to terminate an EC2 instance by default, Auto Scaling will:

A. Allow at least five minutes for Windows/Linux shutdown scripts to complete, before terminating the instance.
B. Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected.
C. Send an SNS notification, if configured to do so.
D. Terminate an instance in the AZ which currently has 2 running EC2 instances.
E. Randomly select one of the 3 AZs, and then terminate an instance in that AZ

1. A,B
2. C,D
3.
4. A,E


Question : You have an environment that consists of a public subnet using Amazon VPC and three instances that are running in this subnet. These three
instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security
group configuration you used for the others, but find that this instance cannot be accessed from the Internet. What should you do to enable Internet access?
1. Deploy a NAT instance into the public subnet
2. Assign an Elastic IP address to the fourth instance.
3.
4. Modify the routing table for the public subnet.


Question : A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while
utilizing lower overall CPU resources for the web tier?
1. Amazon EBS volume
2. Amazon S3
3.
4. Amazon RDS instance

Ans : 2 Exp : Amazon S3 is the right choice, for several reasons:
- Amazon S3 is almost management-free, so there are no hassles with provisioning, scaling, etc.
- You will reduce the load on your EC2 servers.
- Storage is cheaper in S3 than on EC2 EBS volumes: in S3 you only pay for what you consume, while in EC2 you pay for the whole provisioned EBS volume
(so you may be paying for unused free space).
- You could eventually add a CloudFront distribution to bring the static content closer to your users wherever they are (http://aws.amazon.com/cloudfront).
In terms of costs:
- the data transfer from S3 to the Internet would be the same as you would pay on EC2
- you will probably reduce the cost of the storage
- you will have an additional cost for the number of requests made to your S3 files (http://aws.amazon.com/s3/#pricing)
- under high traffic loads, you will probably also need fewer EC2 instances/resources (this is not guaranteed, as it depends entirely on your app)
You will also have some added complexity when releasing a new version of the app, because besides deploying it to the EC2 instances, you will also have
to upload the new static file versions to S3. But you could automate this with a fairly simple script.
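Offloading a static asset to S3 amounts to an upload with the right metadata. A sketch of the parameters for a boto3-style `put_object` call (bucket name, key, and content are illustrative assumptions):

```python
# Sketch: uploading a static asset to S3 so the web tier no longer
# serves it. In practice the dict would be passed to
#   boto3.client("s3").put_object(**params).
# Bucket, key, and body are illustrative placeholders.

def static_asset_params(bucket, key, body, content_type):
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # A correct Content-Type lets browsers render the asset
        # directly from S3 (or from a CloudFront distribution in
        # front of the bucket).
        "ContentType": content_type,
    }

params = static_asset_params("my-static-assets", "css/site.css",
                             b"body { margin: 0; }", "text/css")
```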







Question : Which of the following notification endpoints or clients are supported by Amazon Simple
Notification Service? Choose 2 answers
A. Email
B. CloudFront distribution
C. File Transfer Protocol
D. Short Message Service
E. Simple Network Management Protocol



1. A,B
2. A,D
3.
4. B,E

Ans : 2
Exp : In order for customers to have broad flexibility of delivery mechanisms, Amazon SNS supports notifications over multiple transport protocols. Customers
can select one of the following transports as part of the subscription request:
- "HTTP", "HTTPS" - Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the
specified URL.
- "Email", "Email-JSON" - Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based
email.
- "SQS" - Users can specify an SQS queue as the endpoint; Amazon SNS will enqueue a notification message to the specified queue (which subscribers can then
process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.).
- "SMS" - Messages are sent to registered phone numbers as SMS text messages.

Q: Which types of endpoints support raw message delivery?
Raw message delivery support is available for endpoints of type SQS Queue and HTTP(S). Deliveries to Email and SMS endpoints behave the same regardless
of the "RawMessageDelivery" property.






Question : What is a placement group?
1. A collection of Auto Scaling groups in the same Region
2. Feature that enables EC2 instances to interact with each other via high bandwidth, low latency connections
3.
4. A collection of authorized CloudFront edge locations for a distribution