
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile
devices. The votes must be
collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?
1. Amazon DynamoDB
2. Amazon Redshift
4. Amazon Simple Queue Service


Correct Answer : 1 (Amazon DynamoDB)

Explanation: Amazon CloudWatch is an Amazon Web Services utility for monitoring components such as EC2 instances, EBS volumes, and the Elastic Load Balancer. For EC2 instances, you can monitor CPUUtilization, DiskReadBytes, DiskReadOps, DiskWriteBytes, NetworkIn, and NetworkOut. More often than not, end users want to monitor more parameters than the built-in ones, e.g. free memory or free swap. Amazon CloudWatch provides custom metrics to circumvent this limitation: you define a custom metric for your own needs and continuously feed it with data using a simple bash or Python script running in a loop. Let's take the example of a FreeMemory metric.
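A minimal sketch of such a feeder script in Python with boto3 follows (the Custom/System namespace, the psutil dependency, and the one-minute publish interval are illustrative assumptions, not values prescribed by CloudWatch):

import time

import boto3
import psutil  # third-party library, used here only to read free memory

cloudwatch = boto3.client('cloudwatch')  # region/credentials assumed to be configured

while True:
    free_mb = psutil.virtual_memory().available / (1024 * 1024)
    # Publish one data point for the custom FreeMemory metric.
    cloudwatch.put_metric_data(
        Namespace='Custom/System',
        MetricData=[{
            'MetricName': 'FreeMemory',
            'Unit': 'Megabytes',
            'Value': free_mb,
        }],
    )
    time.sleep(60)  # one data point per minute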
Deleting a custom metric: A custom metric cannot be explicitly deleted; if it remains unused for 2 weeks, it is deleted automatically.
Pricing: $0.50 per custom metric per month.
Summary : You can see how easy it is to add a custom metric. In this example we have shown how to add a FreeMemory metric; other useful metrics such as FreeSwap, ProcessAvailability, and DiskSpace can be added the same way. Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon Elastic Compute Cloud (Amazon EC2) instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.


You can configure alarm actions to stop, start, or terminate an Amazon EC2 instance when certain criteria are met. In addition, you can create alarms that initiate Auto Scaling and Amazon Simple Notification Service (Amazon SNS) actions on your behalf. The MemoryUtilization metric is a custom metric; in order to use it, you must install the Monitoring Scripts for Amazon EC2 Instances.
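As a hedged sketch of such an alarm action (the instance ID, region, threshold, and evaluation window below are placeholder assumptions), an alarm that stops an under-used instance could be created like this:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')  # placeholder region

cloudwatch.put_metric_alarm(
    AlarmName='stop-idle-instance',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder
    Statistic='Average',
    Period=300,                 # 5-minute periods
    EvaluationPeriods=6,        # six consecutive periods (30 minutes)
    Threshold=10.0,
    ComparisonOperator='LessThanOrEqualToThreshold',
    # Built-in alarm action that stops the instance; no SNS topic is required for it.
    AlarmActions=['arn:aws:automate:us-east-1:ec2:stop'],
)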




Question : Which services allow the customer to retain full administrative privileges of the underlying EC2 instances?

Choose 2 answers
A. Amazon Elastic MapReduce
B. Elastic Load Balancing
C. AWS Elastic Beanstalk
D. Amazon ElastiCache
E. Amazon Relational Database Service


1. A,B
2. A,D
4. C,D
5. A,C

Correct Answer : 5 (A, C)

Explanation: With Amazon Elastic MapReduce and AWS Elastic Beanstalk, the EC2 instances run in the customer's own account, so the customer retains full administrative (root) access to them; Elastic Load Balancing, ElastiCache, and RDS are managed services whose underlying instances are not directly accessible. For Elastic Beanstalk specifically, the root account has full access to all Elastic Beanstalk environments launched by any IAM user under that account. If you use the Elastic Beanstalk template to grant read-only access to an IAM user, that user will be able to view all applications, application versions, environments, and any associated resources in that account. If you use the Elastic Beanstalk template to grant full access to an IAM user, that user will be able to create, modify, and terminate any Elastic Beanstalk resources under that account.
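As a loose illustration in boto3 (the user name is a placeholder, and the managed read-only policy ARN shown is an assumption that should be verified in the IAM console), granting an IAM user read-only Elastic Beanstalk access might look like:

import boto3

iam = boto3.client('iam')

iam.attach_user_policy(
    UserName='eb-readonly-user',  # placeholder IAM user
    # Assumed managed policy ARN; confirm the exact name in the IAM console.
    PolicyArn='arn:aws:iam::aws:policy/AWSElasticBeanstalkReadOnlyAccess',
)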






Question : Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premise LDAP (Lightweight Directory Access
Protocol) directory service?

1. Use an IAM policy that references the LDAP account identifiers and the AWS credentials.
2. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.
4. Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated.
5. Use the LDAP credentials to restrict a group of users from launching specific EC2 instance types.

Correct Answer : 2
Exp: AWS Identity and Access Management (IAM) is a web service from Amazon Web Services (AWS) for managing users and user permissions in AWS. Outside the AWS cloud, administrators of corporate systems rely on the Lightweight Directory Access Protocol (LDAP) to manage identities. By using role-based access control (RBAC) and Security Assertion Markup Language (SAML) 2.0, corporate IT systems administrators can bridge the IAM and LDAP systems and simplify identity and permissions management across on-premises and cloud-based infrastructures.
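A hedged sketch of the federation hand-off with boto3 follows; the role and provider ARNs are placeholders, and the base64-encoded SAML assertion is assumed to have been obtained from the LDAP-backed identity provider (read from an environment variable here):

import os

import boto3

# Assertion obtained out-of-band from the corporate IdP (ADFS, Shibboleth, etc.).
saml_assertion = os.environ['SAML_ASSERTION']

sts = boto3.client('sts')
response = sts.assume_role_with_saml(
    RoleArn='arn:aws:iam::123456789012:role/LDAPFederatedRole',           # placeholder
    PrincipalArn='arn:aws:iam::123456789012:saml-provider/CorporateIdP',  # placeholder
    SAMLAssertion=saml_assertion,
)
# Temporary credentials (AccessKeyId, SecretAccessKey, SessionToken) for AWS calls.
credentials = response['Credentials']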




Related Questions


Question : Amazon SNS can be used with other AWS services as well; select the correct option.
1. Amazon SQS
2. Amazon EC2
4. Only 1 and 2
5. All 1,2 and 3



Question : Select which of the following applies correctly to Amazon SNS topic security.

1. All API calls made to Amazon SNS are validated for the user's AWS ID and signature
2. Topics can only be created by users with valid AWS IDs who have signed up for Amazon SNS
4. Only 1 and 2 are correct
5. All 1,2 and 3 are correct




Question : Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table?
Assume that no security keys are allowed to be stored on the EC2 instance. (A sketch of the role-based approach follows the answer options.)

A. Create an IAM Role that allows write access to the DynamoDB table.
B. Add an IAM Role to a running EC2 instance.
C. Create an IAM User that allows write access to the DynamoDB table.
D. Add an IAM User to a running EC2 instance.
E. Launch an EC2 Instance with the IAM Role included in the launch configuration.

1. A,C
2. C,D
4. A,E
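For context, here is a minimal sketch of the application code once the instance has been launched with an IAM role that allows dynamodb:PutItem; the region and the Votes table are placeholder assumptions, and no access keys appear anywhere because boto3 obtains temporary credentials from the instance profile:

import boto3

# No keys are configured: on an EC2 instance launched with an IAM role, boto3
# automatically retrieves temporary credentials from the instance profile.
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')  # placeholder region
table = dynamodb.Table('Votes')  # hypothetical table name

table.put_item(Item={
    'VoteId': 'vote-0001',
    'Candidate': 'contestant-7',
})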


Question : A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers

A. Amazon Simple Email Service
B. Amazon CloudWatch
C. Amazon Simple Queue Service
D. Amazon Route 53
E. Amazon Simple Notification Service

1. B,E
2. C,D
4. A,E
Ans : 1
Exp : Amazon RDS provides several metrics that you can use to determine how your DB instance is performing. You can view the metrics in the RDS console by selecting your DB instance and clicking Show Monitoring. You can also use Amazon CloudWatch to monitor these metrics; for more information, see Viewing DB Instance Metrics.
- IOPS - the number of I/O operations completed per second. This metric is reported as the average IOPS for a given time interval. Amazon RDS reports read and write IOPS separately at one-minute intervals. Total IOPS is the sum of the read and write IOPS. Typical values for IOPS range from zero to tens of thousands per second.
- Latency - the elapsed time between the submission of an I/O request and its completion. This metric is reported as the average latency for a given time interval. Amazon RDS reports read and write latency separately at one-minute intervals in units of seconds. Typical values for latency are in milliseconds (ms); for example, Amazon RDS reports 2 ms as 0.002 seconds.
- Throughput - the number of bytes per second transferred to or from disk. This metric is reported as the average throughput for a given time interval. Amazon RDS reports read and write throughput separately at one-minute intervals using units of megabytes per second (MB/s). Typical values for throughput range from zero to the I/O channel's maximum bandwidth.
- Queue Depth - the number of I/O requests in the queue waiting to be serviced. These are I/O requests that have been submitted by the application but have not been sent to the device because the device is busy servicing other I/O requests. Time spent waiting in the queue is a component of latency and service time (not available as a metric). This metric is reported as the average queue depth for a given time interval. Amazon RDS reports queue depth at one-minute intervals. Typical values for queue depth range from zero to several hundred.
- Amazon CloudWatch uses Amazon Simple Notification Service (Amazon SNS) to send email. This section shows you how to create and subscribe to an Amazon Simple Notification Service topic. When you create a CloudWatch alarm, you can add this Amazon SNS topic to send an email notification when the alarm changes state.
- This scenario walks you through how to use the AWS Management Console or the command line tools to create an Amazon CloudWatch alarm that sends an Amazon Simple Notification Service email message when the alarm changes state from OK to ALARM.
- In this scenario, you configure the alarm to change to the ALARM state when the average CPU use of an EC2 instance exceeds 70 percent for two consecutive five-minute periods.
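A minimal boto3 sketch of that pattern follows (topic name, e-mail address, and instance ID are placeholders); the same approach covers the RDS question above by using the AWS/RDS namespace with the ReadIOPS/WriteIOPS metrics and a DBInstanceIdentifier dimension:

import boto3

sns = boto3.client('sns')
cloudwatch = boto3.client('cloudwatch')

# 1. Create the notification topic and subscribe the operations team's address.
topic_arn = sns.create_topic(Name='ops-alerts')['TopicArn']
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='ops-team@example.com')

# 2. Alarm when average CPU exceeds 70% for two consecutive five-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName='high-cpu',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[topic_arn],  # notify the SNS topic when the alarm fires
)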




Question : Which set of Amazon S3 features helps to prevent and recover from accidental data loss?
1. Object lifecycle and service access logging
2. Object versioning and Multi-factor authentication
4. Website hosting and Amazon S3 policies
Ans : 2
Exp : Data integrity compromise - to ensure that data integrity is not compromised through deliberate or accidental modification, use resource permissions to limit the scope of users who can modify the data. Even with resource permissions, accidental deletion by a privileged user is still a threat (including a potential attack by a Trojan using the privileged user's credentials), which illustrates the importance of the principle of least privilege. Perform data integrity checks, such as message authentication codes (SHA-1/SHA-2), hashed message authentication codes (HMACs), digital signatures, or authenticated encryption (AES-GCM), to detect data integrity compromise. If you detect data compromise, restore the data from backup or, in the case of Amazon S3, from a previous object version.

Accidental deletion - using the correct permissions and the rule of least privilege is the best protection against accidental or malicious deletion. For services such as Amazon S3, you can use MFA Delete to require multi-factor authentication to delete an object, limiting deletion of Amazon S3 objects to privileged users. If you detect data compromise, restore the data from backup or, in the case of Amazon S3, from a previous object version.
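A minimal sketch of enabling these protections with boto3 (the bucket name, account ID, and MFA serial/token are placeholders; enabling MFA Delete must be done by the root account with its MFA device):

import boto3

s3 = boto3.client('s3')

# Turn on object versioning so overwritten or deleted objects can be recovered.
s3.put_bucket_versioning(
    Bucket='example-bucket',  # placeholder
    VersioningConfiguration={'Status': 'Enabled'},
)

# Optionally also require MFA for permanent deletes (run as the root account;
# the MFA string is "<device serial ARN> <current token code>").
s3.put_bucket_versioning(
    Bucket='example-bucket',
    MFA='arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456',  # placeholder
    VersioningConfiguration={'Status': 'Enabled', 'MFADelete': 'Enabled'},
)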




Question : A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the
same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR
blocks. The company wants to push minor code releases from Dev to Prod to speed up
time to market. Which of the following options helps the company accomplish this?

1. Create a new peering connection Between Prod and Dev along with appropriate routes.
2. Create a new entry to Prod in the Dev route table using the peering connection as the target.
4. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.
Ans : 1
Exp : A VPC peering connection is a one-to-one relationship between two VPCs. You can create multiple VPC peering connections for each VPC that you own, but transitive peering relationships are not supported: in a VPC peering connection, your VPC will not have access to any other VPCs that the peer VPC may be peered with. This includes VPC peering connections that are established entirely within your own AWS account.
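A hedged boto3 sketch of option 1 follows; the VPC IDs, route table IDs, and CIDR blocks are placeholders for the Dev and Prod VPCs:

import boto3

ec2 = boto3.client('ec2')

# Request and (since both VPCs are in the same account and region) accept the peering.
pcx = ec2.create_vpc_peering_connection(VpcId='vpc-11111111',       # Dev, placeholder
                                        PeerVpcId='vpc-22222222')   # Prod, placeholder
pcx_id = pcx['VpcPeeringConnection']['VpcPeeringConnectionId']
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Add routes in both directions that target the peering connection.
ec2.create_route(RouteTableId='rtb-11111111', DestinationCidrBlock='10.2.0.0/16',
                 VpcPeeringConnectionId=pcx_id)   # Dev -> Prod
ec2.create_route(RouteTableId='rtb-22222222', DestinationCidrBlock='10.0.0.0/16',
                 VpcPeeringConnectionId=pcx_id)   # Prod -> Dev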




Question : An Auto Scaling group spans 3 AZs and currently has 4 running EC2 instances.
When Auto Scaling needs to terminate an EC2 instance by default, Auto Scaling will:

A. Allow at least five minutes for Windows/Linux shutdown scripts to complete, before terminating the instance.
B. Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected.
C. Send an SNS notification, if configured to do so.
D. Terminate an instance in the AZ which currently has 2 running EC2 instances.
E. Randomly select one of the 3 AZs, and then terminate an instance in that AZ

1. A,B
2. C,D
4. A,E


Question : You have an environment that consists of a public subnet using Amazon VPC and 3 instances that are running in this subnet. These three instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the Internet. What should you do to enable Internet access?
1. Deploy a NAT instance into the public subnet
2. Assign an Elastic IP address to the fourth instance.
4. Modify the routing table for the public subnet.


Question : A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?
1. Amazon EBS volume
2. Amazon S3
4. Amazon RDS instance

Ans : 2
Exp : Serving static content from Amazon S3 is preferable for several reasons:
- Amazon S3 is almost management-free, so there are no hassles with provisioning, scaling, etc.
- You will reduce the load on your EC2 web servers.
- Storage is cheaper in S3 than on EC2 EBS volumes: in S3 you only pay for what you consume, whereas in EC2 you pay for the whole provisioned EBS volume (including the free space you are not using).
- You could eventually add a CloudFront distribution to bring the static content closer to your users wherever they are (http://aws.amazon.com/cloudfront).
In terms of costs:
- Data transfer from S3 to the Internet costs the same as it would from EC2.
- You will probably reduce the cost of storage.
- You will have an additional cost for the number of requests made to your S3 files (http://aws.amazon.com/s3/#pricing).
- Under high traffic loads, you will also probably need fewer EC2 instances/resources (this depends entirely on your application).
You will also have some added complexity when releasing a new version of the app, because besides deploying it to the EC2 instances, you will also have to upload the new static file versions to S3. But you can automate this with a pretty simple script.
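For example, a minimal sketch of that release step (bucket name, local paths, keys, and content types are placeholders):

import boto3

s3 = boto3.client('s3')

# Upload the new static file versions alongside the EC2 deployment.
for local_path, key, content_type in [
    ('build/css/site.css', 'static/css/site.css', 'text/css'),
    ('build/js/app.js', 'static/js/app.js', 'application/javascript'),
]:
    s3.upload_file(local_path, 'example-static-assets', key,
                   ExtraArgs={'ContentType': content_type})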







Question : Which of the following notification endpoints or clients are supported by Amazon Simple
Notification Service? Choose 2 answers
A. Email
B. CloudFront distribution
C. File Transfer Protocol
D. Short Message Service
E. Simple Network Management Protocol



1. A,B
2. A,D
4. B,E

Ans : 2
Exp : In order for customers to have broad flexibility of delivery mechanisms, Amazon SNS supports notifications over multiple transport protocols. Customers can select one of the following transports as part of the subscription request:
- "HTTP", "HTTPS" - subscribers specify a URL as part of the subscription registration; notifications are delivered through an HTTP POST to the specified URL.
- "Email", "Email-JSON" - messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
- "SQS" - users can specify an SQS queue as the endpoint; Amazon SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.).
- "SMS" - messages are sent to registered phone numbers as SMS text messages.

Q: Which types of endpoints support raw message delivery?
Raw message delivery support is added to endpoints of type SQS queue and HTTP(S). Deliveries to Email and SMS endpoints behave the same regardless of the "RawMessageDelivery" property.
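A minimal boto3 sketch of subscribing several endpoint types to one topic (the topic name, e-mail address, phone number, and queue ARN are placeholders):

import boto3

sns = boto3.client('sns')
topic_arn = sns.create_topic(Name='demo-notifications')['TopicArn']

sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='user@example.com')
sns.subscribe(TopicArn=topic_arn, Protocol='sms', Endpoint='+15555550123')
sns.subscribe(TopicArn=topic_arn, Protocol='sqs',
              Endpoint='arn:aws:sqs:us-east-1:123456789012:demo-queue')

# Every confirmed subscriber receives this message over its own transport.
sns.publish(TopicArn=topic_arn, Subject='Test', Message='Hello from Amazon SNS')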






Question : What is a placement group?
1. A collection of Auto Scaling groups in the same Region
2. A feature that enables EC2 instances to interact with each other via high-bandwidth, low-latency connections
4. A collection of authorized Cloud Front edge locations for a distribution