Question : A company needs to monitor the read and write IOPS metrics for their Amazon RDS MySQL instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers
A. Amazon Simple Email Service B. Amazon CloudWatch C. Amazon Simple Queue Service D. Amazon Route 53 E. Amazon Simple Notification Service
1. B,E 2. C,D 3. Access Mostly Uused Products by 50000+ Subscribers 4. A,E
Ans : 1
Exp : Amazon RDS provides several metrics that you can use to determine how your DB instance is performing. You can view the metrics in the RDS console by selecting your DB instance and clicking Show Monitoring. You can also use Amazon CloudWatch to monitor these metrics. For more information, see Viewing DB Instance Metrics.
- IOPS - the number of I/O operations completed per second. This metric is reported as the average IOPS for a given time interval. Amazon RDS reports read and write IOPS separately on one-minute intervals. Total IOPS is the sum of the read and write IOPS. Typical values for IOPS range from zero to tens of thousands per second.
- Latency - the elapsed time between the submission of an I/O request and its completion. This metric is reported as the average latency for a given time interval. Amazon RDS reports read and write latency separately on one-minute intervals in units of seconds. Typical values for latency are in milliseconds (ms); for example, Amazon RDS reports 2 ms as 0.002 seconds.
- Throughput - the number of bytes per second transferred to or from disk. This metric is reported as the average throughput for a given time interval. Amazon RDS reports read and write throughput separately on one-minute intervals using units of megabytes per second (MB/s). Typical values for throughput range from zero to the I/O channel's maximum bandwidth.
- Queue Depth - the number of I/O requests in the queue waiting to be serviced. These are I/O requests that have been submitted by the application but have not been sent to the device because the device is busy servicing other I/O requests. Time spent waiting in the queue is a component of latency and service time (not available as a metric). This metric is reported as the average queue depth for a given time interval. Amazon RDS reports queue depth in one-minute intervals. Typical values for queue depth range from zero to several hundred.
Amazon CloudWatch uses Amazon Simple Notification Service (Amazon SNS) to send email. This section shows you how to create and subscribe to an Amazon Simple Notification Service topic. When you create a CloudWatch alarm, you can add this Amazon SNS topic to send an email notification when the alarm changes state. This scenario walks you through how to use the AWS Management Console or the command line tools to create an Amazon CloudWatch alarm that sends an Amazon Simple Notification Service email message when the alarm changes state from OK to ALARM. In this scenario, you configure the alarm to change to the ALARM state when the average CPU use of an EC2 instance exceeds 70 percent for two consecutive five-minute periods.
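The CloudWatch-plus-SNS setup described above can be sketched in Python. This is only a sketch: the code builds the parameter dictionary you would pass to CloudWatch's `PutMetricAlarm` API; the DB instance identifier, topic ARN, and threshold are hypothetical placeholders, and the actual boto3 call is left commented out because it requires AWS credentials.

```python
# Sketch: build CloudWatch alarm parameters for the RDS WriteIOPS metric.
# The instance identifier, topic ARN, and threshold are placeholders.

def build_iops_alarm(db_instance_id, topic_arn, threshold):
    """Return kwargs for CloudWatch PutMetricAlarm on the WriteIOPS metric."""
    return {
        "AlarmName": f"{db_instance_id}-write-iops-high",
        "Namespace": "AWS/RDS",            # RDS publishes its metrics here
        "MetricName": "WriteIOPS",         # reported on one-minute intervals
        "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        "Statistic": "Average",
        "Period": 60,                      # one-minute granularity
        "EvaluationPeriods": 2,            # two consecutive periods in breach
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],       # SNS topic notified on ALARM state
    }

params = build_iops_alarm("mydb", "arn:aws:sns:us-east-1:123456789012:ops-alerts", 5000.0)
# With credentials configured you would then submit it:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**params)
print(params["MetricName"], "->", params["AlarmActions"][0])
```

A matching alarm on `ReadIOPS` would use the same shape with only `MetricName` changed.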
Question : Which set of Amazon S3 features helps to prevent and recover from accidental data loss?
1. Object lifecycle and service access logging 2. Object versioning and Multi-factor authentication 3. Access Mostly Uused Products by 50000+ Subscribers 4. Website hosting and Amazon S3 policies
Ans : 2
Exp : Data integrity compromise - To ensure that data integrity is not compromised through deliberate or accidental modification, use resource permissions to limit the scope of users who can modify the data. Even with resource permissions, accidental deletion by a privileged user is still a threat (including a potential attack by a Trojan using the privileged user's credentials), which illustrates the importance of the principle of least privilege. Perform data integrity checks, such as cryptographic hashes (SHA-1/SHA-2), Hashed Message Authentication Codes (HMACs), digital signatures, or authenticated encryption (AES-GCM), to detect data integrity compromise. If you detect data compromise, restore the data from backup or, in the case of Amazon S3, from a previous object version.
Accidental deletion - Using the correct permissions and the rule of least privilege is the best protection against accidental or malicious deletion. For services such as Amazon S3, you can use MFA Delete to require multi-factor authentication to delete an object, limiting deletion of Amazon S3 objects to privileged users.
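The integrity checks mentioned above can be illustrated locally with Python's standard library. This is a minimal sketch of an HMAC check, assuming a shared secret key; the key and payload below are made up for the example.

```python
import hashlib
import hmac

# Sketch of an HMAC-based integrity check, as described in the explanation.
# Key and payload are illustrative only.
key = b"shared-secret"
payload = b"object contents stored in S3"

# Compute the HMAC-SHA256 tag when the object is written...
tag = hmac.new(key, payload, hashlib.sha256).hexdigest()

# ...and recompute and compare it when the object is read back.
def verify(key, data, expected_tag):
    candidate = hmac.new(key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(candidate, expected_tag)  # constant-time compare

assert verify(key, payload, tag)                     # unmodified data passes
assert not verify(key, payload + b"tampered", tag)   # modification is detected
```

If a check fails, you would restore the object from backup or from a previous S3 object version, as the explanation says.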
Question : A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up time to market. Which of the following options helps the company accomplish this?
1. Create a new peering connection between Prod and Dev along with appropriate routes. 2. Create a new entry to Prod in the Dev route table using the peering connection as the target. 3. Access Mostly Uused Products by 50000+ Subscribers 4. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.
Ans : 1
Exp : A VPC peering connection is a one-to-one relationship between two VPCs. You can create multiple VPC peering connections for each VPC that you own, but transitive peering relationships are not supported: in a VPC peering connection, your VPC does not have access to any other VPCs that the peer VPC may be peered with, even when those connections are established entirely within your own AWS account. Because Test's peerings with Dev and Prod do not give Dev a route to Prod, the company must create a new, direct peering connection between Dev and Prod along with the appropriate routes.
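The non-transitivity rule above can be modeled in a few lines. This sketch treats peering connections as undirected edges and checks that traffic routes only over a direct peering, never through an intermediate VPC.

```python
# Sketch: VPC peering is non-transitive. Model peerings as undirected edges
# and check that a VPC can reach only VPCs it is *directly* peered with.

peerings = {("Test", "Prod"), ("Test", "Dev")}   # the scenario in the question

def can_route(a, b, peerings):
    """Traffic flows only over a direct peering connection (no transit)."""
    return (a, b) in peerings or (b, a) in peerings

assert can_route("Test", "Prod", peerings)       # direct peering: reachable
assert not can_route("Dev", "Prod", peerings)    # no transitive path via Test

# Option 1 from the question: add a direct Dev<->Prod peering connection.
peerings.add(("Dev", "Prod"))
assert can_route("Dev", "Prod", peerings)        # now directly reachable
```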
Question : An Auto Scaling group spans 3 AZs and currently has 4 running EC2 instances. When Auto Scaling needs to terminate an EC2 instance by default, Auto Scaling will: Choose 2 answers
A. Allow at least five minutes for Windows/Linux shutdown scripts to complete, before terminating the instance. B. Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected. C. Send an SNS notification, if configured to do so. D. Terminate an instance in the AZ which currently has 2 running EC2 instances. E. Randomly select one of the 3 AZs, and then terminate an instance in that AZ
Correct Answer : C,D
Exp : When you use Auto Scaling to scale your applications automatically, you want to know when Auto Scaling is launching or terminating the EC2 instances in your Auto Scaling group. You can configure your Auto Scaling group to send a notification whenever the group changes. If configured, the Auto Scaling group uses Amazon Simple Notification Service (Amazon SNS) to send the notifications. Amazon SNS coordinates and manages the delivery or sending of notifications to subscribing clients or endpoints. Amazon SNS can deliver notifications as HTTP or HTTPS POST, email (SMTP, either plain-text or in JSON format), or as a message posted to an Amazon SQS queue. For more information, see What Is Amazon SNS in the Amazon Simple Notification Service Developer Guide.
The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly. When using the default termination policy, Auto Scaling selects an instance to terminate as follows:
1. Auto Scaling determines whether there are instances in multiple Availability Zones. If so, it selects the Availability Zone with the most instances. If there is more than one Availability Zone with this number of instances, Auto Scaling selects the Availability Zone with the instances that use the oldest launch configuration.
2. Auto Scaling determines which instances in the selected Availability Zone use the oldest launch configuration. If there is one such instance, it terminates it.
3. Access Mostly Uused Products by 50000+ Subscribers (This helps you maximize the use of your EC2 instances while minimizing the number of hours you are billed for Amazon EC2 usage.) If there is one such instance, Auto Scaling terminates it.
4. If there is more than one instance closest to the next billing hour, Auto Scaling selects one of these instances at random.
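The selection steps above can be sketched as a small local simulation. This is a simplified model, not the actual Auto Scaling implementation: each instance carries an AZ, a launch-configuration age rank, and minutes until its next billing hour, all with made-up values.

```python
import random

# Simplified model of the default termination policy described above.
# All instance attributes are illustrative placeholders.
instances = [
    {"id": "i-1", "az": "a", "lc_age": 3, "mins_to_billing_hour": 40},
    {"id": "i-2", "az": "a", "lc_age": 1, "mins_to_billing_hour": 5},
    {"id": "i-3", "az": "b", "lc_age": 2, "mins_to_billing_hour": 20},
    {"id": "i-4", "az": "c", "lc_age": 2, "mins_to_billing_hour": 30},
]

def pick_victim(instances):
    # 1. Select the Availability Zone with the most instances.
    az_counts = {}
    for inst in instances:
        az_counts[inst["az"]] = az_counts.get(inst["az"], 0) + 1
    busiest = max(az_counts.values())
    pool = [i for i in instances if az_counts[i["az"]] == busiest]
    # 2. Narrow to instances using the oldest launch configuration.
    oldest = max(i["lc_age"] for i in pool)
    pool = [i for i in pool if i["lc_age"] == oldest]
    # 3. Narrow to instances closest to the next billing hour.
    closest = min(i["mins_to_billing_hour"] for i in pool)
    pool = [i for i in pool if i["mins_to_billing_hour"] == closest]
    # 4. Break any remaining tie at random.
    return random.choice(pool)["id"]

# AZ "a" has the most instances (2); of those, i-1 has the oldest launch config.
assert pick_victim(instances) == "i-1"
```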
Question : You have an environment that consists of a public subnet using Amazon VPC and three instances that are running in this subnet. These three instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the Internet. What should you do to enable Internet access? 1. Deploy a NAT instance into the public subnet 2. Assign an Elastic IP address to the fourth instance. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Modify the routing table for the public subnet.
Ans : 2
Exp : The subnet's route table and security group already permit Internet access, since the first three instances can reach the Internet. The fourth instance simply lacks a public address; assigning it an Elastic IP address gives it a reachable public IP without changing anything shared by the other instances.
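As a sketch of answer 2, the snippet below builds the parameters for EC2's `AssociateAddress` call. The instance id and allocation id are hypothetical placeholders, and the real boto3 calls are left commented out since they require AWS credentials.

```python
# Sketch: attach an Elastic IP to the fourth instance (answer 2 above).
# The instance id and allocation id are hypothetical placeholders.

def build_associate_request(instance_id, allocation_id):
    """Return kwargs for EC2 AssociateAddress (VPC Elastic IP)."""
    return {
        "InstanceId": instance_id,
        "AllocationId": allocation_id,  # id returned by AllocateAddress
    }

req = build_associate_request("i-0abc1234", "eipalloc-5678")
# With credentials configured you would call:
# import boto3
# ec2 = boto3.client("ec2")
# alloc = ec2.allocate_address(Domain="vpc")
# ec2.associate_address(**build_associate_request("i-0abc1234", alloc["AllocationId"]))
print(req["InstanceId"])
```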
Question : A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier? 1. Amazon EBS volume 2. Amazon S3 3. Access Mostly Uused Products by 50000+ Subscribers 4. Amazon RDS instance
Ans : 2
Exp : Serving static content from Amazon S3 is the right choice, for several reasons:
- Amazon S3 is almost management-free, so there are no hassles with provisioning, scaling, etc.
- You reduce load on the EC2 web servers.
- Storage is cheaper in S3 than on EC2 EBS volumes: in S3 you only pay for what you consume, while on EC2 you pay for the whole provisioned EBS volume (including any free space you are not using).
- You can eventually add a CloudFront distribution to bring the static content closer to your users wherever they are (http://aws.amazon.com/cloudfront).
In terms of costs:
- Data transfer from S3 to the Internet costs the same as it would from EC2.
- You will probably reduce the cost of storage.
- You will pay an additional cost for the number of requests made to your S3 files (http://aws.amazon.com/s3/#pricing).
- Under high traffic loads, you will also probably need fewer EC2 instances/resources (this is not guaranteed, as it depends entirely on your app).
You will also have some added complexity when releasing a new version of the app, because besides deploying it to the EC2 instances, you also have to upload the new static file versions to S3. This can be automated with a fairly simple script.
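The pay-for-what-you-consume point above can be made concrete with a back-of-the-envelope calculation. The per-GB prices and volume sizes below are assumed placeholders for illustration, not current AWS pricing.

```python
# Back-of-the-envelope storage cost comparison from the explanation above.
# The per-GB prices here are ASSUMED placeholders, not real AWS pricing.
S3_PRICE_PER_GB = 0.023    # assumed $/GB-month; billed on bytes actually stored
EBS_PRICE_PER_GB = 0.10    # assumed $/GB-month; billed on provisioned size

used_gb = 40               # static assets actually stored
provisioned_gb = 100       # EBS volume size you would have to provision

s3_cost = used_gb * S3_PRICE_PER_GB            # pay only for consumption
ebs_cost = provisioned_gb * EBS_PRICE_PER_GB   # pay for the whole volume

print(f"S3: ${s3_cost:.2f}/month  EBS: ${ebs_cost:.2f}/month")
```

The gap comes from the 60 GB of provisioned-but-unused EBS space you would still pay for.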
Question : Which of the following notification endpoints or clients are supported by Amazon Simple Notification Service? Choose 2 answers A. Email B. CloudFront distribution C. File Transfer Protocol D. Short Message Service E. Simple Network Management Protocol
Ans : 2
Exp : In order for customers to have broad flexibility of delivery mechanisms, Amazon SNS supports notifications over multiple transport protocols. Customers can select one of the following transports as part of the subscription request:
- "HTTP", "HTTPS" - Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
- "Email", "Email-JSON" - Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
- "SQS" - Users can specify an SQS queue as the endpoint; Amazon SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.).
- "SMS" - Messages are sent to registered phone numbers as SMS text messages.
Q: Which types of endpoints support raw message delivery? Raw message delivery is supported for endpoints of type SQS queue and HTTP(S). Deliveries to Email and SMS endpoints behave the same regardless of the "RawMessageDelivery" property.
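The raw-delivery rule above can be captured in a tiny predicate. This is a local model for illustration, not an SNS API call.

```python
# Sketch: model which SNS endpoint types honor the RawMessageDelivery
# subscription attribute, per the passage above.

RAW_DELIVERY_PROTOCOLS = {"sqs", "http", "https"}

def delivers_raw(protocol, raw_attribute_enabled):
    """Raw delivery only takes effect for SQS and HTTP(S) endpoints;
    Email and SMS deliveries behave the same either way."""
    return raw_attribute_enabled and protocol in RAW_DELIVERY_PROTOCOLS

assert delivers_raw("sqs", True)
assert not delivers_raw("email", True)   # attribute has no effect on email
assert not delivers_raw("sqs", False)    # attribute must be enabled
```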
Question : What is a placement group? 1. A collection of Auto Scaling groups in the same Region 2. A feature that enables EC2 instances to interact with each other via high bandwidth, low latency connections 3. Access Mostly Uused Products by 50000+ Subscribers 4. A collection of authorized CloudFront edge locations for a distribution
Ans : 2
Exp : A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking. For more information, see Enhanced Networking.
First, you create a placement group and then you launch multiple instances into the placement group. We recommend that you launch the number of instances that you need in the placement group in a single launch request and that you use the same instance type for all instances in the placement group. If you try to add more instances to the placement group later, or if you try to launch more than one instance type in the placement group, you increase your chances of getting an insufficient capacity error. If you stop an instance in a placement group and then start it again, it still runs in the placement group. However, the start fails if there isn't enough capacity for the instance. If you receive a capacity error when launching an instance in a placement group, stop and restart the instances in the placement group, and then try the launch again.
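The single-launch-request recommendation above can be sketched as follows. The code only builds the kwargs for EC2's `RunInstances` call; the AMI id, instance type, and group name are hypothetical placeholders, and the boto3 call is commented out since it needs credentials.

```python
# Sketch: launch every instance of a placement group in ONE request, with the
# same instance type, as the recommendation above describes.
# AMI id, instance type, and group name are hypothetical placeholders.

def build_launch_request(group_name, count, instance_type="c4.8xlarge",
                         ami="ami-12345678"):
    """Return kwargs for EC2 RunInstances targeting a placement group."""
    return {
        "ImageId": ami,
        "InstanceType": instance_type,        # same type for every instance
        "MinCount": count,                    # all instances in one request,
        "MaxCount": count,                    # reducing capacity-error risk
        "Placement": {"GroupName": group_name},
    }

req = build_launch_request("low-latency-group", 4)
# With credentials configured you would call:
# import boto3
# boto3.client("ec2").run_instances(**req)
print(req["Placement"]["GroupName"], req["MinCount"])
```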