Question : You have a web-style application with a stateless but CPU- and memory-intensive web tier running on a cc2.8xlarge EC2 instance inside a VPC. The instance, when under load, is having problems returning requests within the SLA as defined by your business. The application maintains its state in a DynamoDB table, but the data tier is properly provisioned and responses are consistently fast. How can you best resolve the issue of the application responses not meeting your SLA?
1. Add another cc2.8xlarge application instance, and put both behind an Elastic Load Balancer
2. Move the cc2.8xlarge to the same Availability Zone as the DynamoDB table
3. Access Mostly Uused Products by 50000+ Subscribers
4. Move the database from DynamoDB to RDS MySQL in a scale-out read-replica configuration
Question : If your EBS volume stays in the 'detaching' state, which API action can you use to force the detachment if the previous detachment attempt did not occur cleanly?
Explanation: Detaches an Amazon EBS volume from an instance. Make sure to unmount any file systems on the device within your operating system before detaching the volume. Failure to do so results in the volume being stuck in a busy state while detaching.
If an Amazon EBS volume is the root device of an instance, it can't be detached while the instance is in the running state. To detach the root volume, stop the instance first.
If the root volume is detached from an instance with an AWS Marketplace product code, then the AWS Marketplace product codes from that volume are no longer associated with the instance.
Force : Request Parameter
Forces detachment if the previous detachment attempt did not occur cleanly (logging into an instance, unmounting the volume, and detaching normally). This option can lead to data loss or a corrupted file system. Use this option only as a last resort to detach a volume from a failed instance. The instance won't have an opportunity to flush file system caches or file system metadata. If you use this option, you must perform file system check and repair procedures.
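The normal and forced detachment flows described above can be sketched with the AWS CLI (the `DetachVolume` API action); the volume ID below is a placeholder:

```shell
# Inside the instance, unmount any file systems on the device first
# (Linux example):
#   sudo umount /dev/xvdf

# Normal detachment:
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# Last resort, if the volume is stuck in the 'detaching' state
# (may cause data loss or a corrupted file system):
aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --force
```

After a forced detach, run file system check and repair procedures (for example, fsck) before reusing the volume.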
Ans : 4 Exp : Amazon CloudWatch uses Amazon Simple Notification Service (Amazon SNS) to send email. When you create a CloudWatch alarm, you can add this Amazon SNS topic to send an email notification when the alarm changes state.
You can create an alarm from the Alarms list in the Amazon CloudWatch console.
You can create an Amazon CloudWatch alarm that sends an Amazon Simple Notification Service email message when the alarm changes state from OK to ALARM. Use the Amazon CloudWatch console or the AWS Command Line Interface (CLI) to set up an Amazon Simple Notification Service notification and configure an alarm that monitors load balancer latency exceeding 100 ms.
You can also use the AWS Management Console or the command line tools to set up an Amazon Simple Notification Service notification and to configure an alarm that sends email when EBS exceeds 100 MB of throughput.
Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop or terminate your Amazon Elastic Compute Cloud (Amazon EC2) instances when you no longer need them to be running.
You can monitor your estimated Amazon Web Services (AWS) charges using Amazon CloudWatch. When you enable the monitoring of estimated charges for your AWS account, the estimated charges are calculated and sent several times daily to Amazon CloudWatch as metric data that is stored for 14 days.
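The latency alarm described above can be sketched with the AWS CLI; the SNS topic ARN and load balancer name are placeholders. Note that the ELB Latency metric is reported in seconds, so a 100 ms threshold is 0.1:

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name elb-high-latency \
  --namespace AWS/ELB \
  --metric-name Latency \
  --dimensions Name=LoadBalancerName,Value=my-load-balancer \
  --statistic Average \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 0.1 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:NotifyMe
```

When the alarm changes state from OK to ALARM, CloudWatch publishes to the SNS topic, which in turn sends the email notification.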
Question : By default, when an application checks the header of a request coming from an ELB, which IP address will it receive?
Ans : 1 Exp : The Proxy Protocol header helps you identify the IP address of a client when you use a load balancer configured for TCP/SSL connections. Because load balancers intercept traffic between clients and your back-end instances, the access logs from your back-end instance contain the IP address of the load balancer instead of the originating client. When Proxy Protocol is enabled, the load balancer adds a human-readable header that contains the connection information, such as the source IP address, destination IP address, and port numbers of the client. The header is then sent to the back-end instance as a part of the request. You can parse the first line of the request to retrieve your client's IP address and the port number.
If the client connects with IPv6, the address of the proxy in the header will be the public IPv6 address of your load balancer. This IPv6 address matches the IP address that is resolved from your load balancer's DNS name that is prefixed with either ipv6 or dualstack. If the client connects with IPv4, the address of the proxy in the header will be the private IPv4 address of the load balancer and will therefore not be resolvable through a DNS lookup outside the Amazon Elastic Compute Cloud (Amazon EC2) network.
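A minimal sketch of parsing the Proxy Protocol v1 header line described above; the function and field names are my own, and the addresses are example values:

```python
def parse_proxy_protocol_v1(header_line: str) -> dict:
    """Parse a Proxy Protocol v1 header line such as:
    'PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n'
    Fields: inet protocol, source (client) IP, destination IP,
    source port, destination port."""
    parts = header_line.strip().split(" ")
    if parts[0] != "PROXY" or len(parts) != 6:
        raise ValueError("not a Proxy Protocol v1 header")
    inet, src_ip, dst_ip, src_port, dst_port = parts[1:]
    return {
        "inet": inet,                  # TCP4 or TCP6
        "client_ip": src_ip,           # the originating client address
        "client_port": int(src_port),
        "dest_ip": dst_ip,
        "dest_port": int(dst_port),
    }

hdr = "PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n"
print(parse_proxy_protocol_v1(hdr)["client_ip"])  # the real client IP
```

Parsing this first line on the back-end instance recovers the client's IP address and port that would otherwise be hidden behind the load balancer's address.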
Question : You have a website that is getting more popular day by day, and traffic is increasing as well. Hence, you increased the number of EC2 instances to handle the heavy traffic and introduced an ELB in front of all the EC2 instances to balance the traffic. However, you see that your ELB is not accepting traffic. Why?
1. You forgot to configure the port on which incoming traffic will be accepted
2. You forgot to configure a Security Group for the ELB
4. It seems you are working with the default limit of 20 EC2 instances per account per region; you should raise a request with AWS to get more instances.
1. DB security groups
2. RDS security groups
3. Access Mostly Uused Products by 50000+ Subscribers
4. EC2 security groups
Ans : 2 Exp : Amazon RDS Security Groups: Security groups control the access that traffic has in and out of a DB instance. Three types of security groups are used with Amazon RDS: DB security groups, VPC security groups, and EC2 security groups. In simple terms, a DB security group controls access to a DB instance that is not in a VPC, a VPC security group controls access to a DB instance (or other AWS instances) inside a VPC, and an EC2 security group controls access to an EC2 instance.
By default, network access is turned off to a DB instance. You can specify rules in a security group that allow access from an IP address range, port, or EC2 security group. Once ingress rules are configured, the same rules apply to all DB instances that are associated with that security group. You can specify up to 20 rules in a security group.
Question : Which of the following volume event statuses shows that volume performance is severely impacted (for provisioned IOPS volumes only)?
Ans : 1 Exp : Monitoring Volume Events: When Amazon EBS determines that a volume's data is potentially inconsistent, it disables I/O to the volume from any attached EC2 instances by default. This causes the volume status check to fail, and creates a volume status event that indicates the cause of the failure.
To automatically enable I/O on a volume with potential data inconsistencies, change the setting of the AutoEnableIO volume attribute.
Each event includes a start time that indicates the time at which the event occurred, and a duration that indicates how long I/O for the volume was disabled. The end time is added to the event when I/O for the volume is enabled.
Volume status events include one of the following descriptions:
Awaiting Action: Enable IO : Volume data is potentially inconsistent. I/O is disabled for the volume until you explicitly enable it. The event description changes to IO Enabled after you explicitly enable I/O.
IO Enabled : I/O operations were explicitly enabled for this volume.
IO Auto-Enabled : I/O operations were automatically enabled on this volume after an event occurred. We recommend that you check for data inconsistencies before continuing to use the data.
Normal : For provisioned IOPS volumes only. Volume performance is as expected.
Degraded : For provisioned IOPS volumes only. Volume performance is below expectations.
Severely Degraded : For provisioned IOPS volumes only. Volume performance is well below expectations.
Stalled : For provisioned IOPS volumes only. Volume performance is severely impacted.
You can view events for your volumes using the Amazon EC2 console, the API, or the command line interface.
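The provisioned IOPS status descriptions above can be captured in a small lookup table; this is a sketch, and the dict and function names are my own:

```python
# Performance descriptions for provisioned IOPS volume status events,
# taken from the list above.
PIOPS_EVENT_STATUS = {
    "Normal": "performance is as expected",
    "Degraded": "performance is below expectations",
    "Severely Degraded": "performance is well below expectations",
    "Stalled": "performance is severely impacted",
}

def describe_piops_status(status: str) -> str:
    """Return the performance description for a provisioned IOPS
    volume status event, or raise for an unknown status."""
    try:
        return PIOPS_EVENT_STATUS[status]
    except KeyError:
        raise ValueError(f"unknown provisioned IOPS status: {status!r}")

print(describe_piops_status("Stalled"))
```

The "Stalled" entry is the one the question above asks about: the status indicating that volume performance is severely impacted.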
Question : Your website is running on multiple EC2 instances. Every day you want to take the application logs and aggregate all the logs in AWS, so that later the team can analyze the traffic, user behavior, attacks on the website, etc. Which of the below solutions will help you aggregate the logs in AWS?
1. You will write your own custom solution to read logs from each EC2 instance and store them in S3
2. You must configure Auto Scaling so that logs are automatically aggregated.
4. You can enable this at the VPC subnet level, so that all the instances in a subnet regularly send their logs to an S3 bucket defined by the admin.
Ans : 4 Exp : Amazon RDS provides two types of metrics that you can use to determine performance: disk metrics and database metrics.
Disk Metrics
•IOPS - the number of I/O operations completed per second. This metric is reported as the average IOPS for a given time interval. Amazon RDS reports read and write IOPS separately at one-minute intervals. Total IOPS is the sum of the read and write IOPS. Typical values for IOPS range from zero to tens of thousands per second.
•Latency - the elapsed time between the submission of an I/O request and its completion. This metric is reported as the average latency for a given time interval. Amazon RDS reports read and write latency separately at one-minute intervals, in units of seconds. Typical values for latency are in the millisecond (ms) range; for example, Amazon RDS reports 2 ms as 0.002 seconds.
•Throughput - the number of bytes per second transferred to or from disk. This metric is reported as the average throughput for a given time interval. Amazon RDS reports read and write throughput separately at one-minute intervals, using units of megabytes per second (MB/s). Typical values for throughput range from zero to the I/O channel's maximum bandwidth.
•Queue Depth - the number of I/O requests in the queue waiting to be serviced. These are I/O requests that have been submitted by the application but have not been sent to the device because the device is busy servicing other I/O requests. Time spent waiting in the queue is a component of latency and service time (not available as a metric). This metric is reported as the average queue depth for a given time interval. Amazon RDS reports queue depth at one-minute intervals. Typical values for queue depth range from zero to several hundred.
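The disk metrics above are related to one another: by Little's law (my own addition, not stated in the text), average queue depth ≈ IOPS × average latency. A quick sketch:

```python
def avg_queue_depth(iops: float, latency_s: float) -> float:
    """Estimate average queue depth from IOPS and average latency
    (in seconds) using Little's law: depth = arrival rate * time in system."""
    return iops * latency_s

# 1000 IOPS at the 2 ms latency mentioned above (reported as 0.002 s):
print(avg_queue_depth(1000, 0.002))  # 2.0
```

This is why a workload with high IOPS and low latency can still show a small queue depth, while the same IOPS with higher latency drives the queue depth up.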
Database Metrics
•Commit Latency - the elapsed time from submitting a commit request to receiving an acknowledgment. This metric is closely associated with the disk write latency metric; lower disk write latency can result in lower commit latency. CloudWatch metrics do not report this value.
•Transaction Rate - the number of transactions completed in a given time interval, typically expressed as TPM (Transactions per Minute) or TPS (Transactions per Second). Another commonly used term for transaction rate is database throughput, which should not be confused with the disk metric called throughput. The two metrics are not necessarily related; a database can have a high transaction rate and have little to no disk throughput if, for example, the workload consists of cached reads. CloudWatch metrics do not report this value.
Question : Select the correct prerequisite for creating a DB Instance within a VPC:
1. You should NOT allocate large CIDR blocks to each of your subnets so that there are spare IP addresses for Amazon RDS to use during maintenance activities.
2. You need to have a VPC set up with at least one subnet created in every Availability Zone in the Region in which you want to deploy your DB Instance.
3. Access Mostly Uused Products by 50000+ Subscribers
4. There should not be a DB Security Group defined for your VPC
Ans : 2 Exp : A DB subnet group is a collection of subnets (typically private) that you create for a VPC and that you then designate for your DB instances. A DB subnet group allows you to specify a particular VPC when you create DB instances using the CLI or API; if you use the Amazon RDS console, you can just select the VPC and subnets you want to use. Each DB subnet group must have at least one subnet in at least two Availability Zones in the region.
When you create a DB instance in a VPC, you must select a DB subnet group. Amazon RDS then uses that DB subnet group and your preferred Availability Zone to select a subnet and an IP address within that subnet. Amazon RDS creates and associates an Elastic Network Interface to your DB instance with that IP address. For Multi-AZ deployments, defining a subnet for two or more Availability Zones in a region allows Amazon RDS to create a new standby in another Availability Zone should the need arise. You need to do this even for Single-AZ deployments, just in case you want to convert them to Multi-AZ deployments at some point.
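The "at least one subnet in at least two Availability Zones" requirement above can be checked with a small helper; this is a sketch, and the function name and data shape are my own:

```python
def valid_db_subnet_group(subnets) -> bool:
    """subnets: list of (subnet_id, availability_zone) pairs.
    A DB subnet group must have at least one subnet in at least
    two Availability Zones in the region."""
    azs = {az for _subnet_id, az in subnets}
    return len(azs) >= 2

# Two subnets in two different AZs: a valid DB subnet group.
print(valid_db_subnet_group([("subnet-a", "us-east-1a"),
                             ("subnet-b", "us-east-1b")]))  # True

# Two subnets in the same AZ: not valid.
print(valid_db_subnet_group([("subnet-a", "us-east-1a"),
                             ("subnet-b", "us-east-1a")]))  # False
```

This mirrors why the requirement applies even to Single-AZ deployments: the spare AZ coverage is what lets Amazon RDS place a standby elsewhere if you later convert to Multi-AZ.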
Question : You have launched a very popular website using an Amazon EC2 instance and an ELB. The EC2 instance is placed behind the load balancer, and both the load balancer and the EC2 instance are open to accept HTTP traffic on port 80. As soon as a single client tries to access the website, how many connections will be established?