Question : Security groups control the traffic allowed in and out of a DB instance. Which of the following is NOT a type of security group used with Amazon RDS?
1. DB security groups
2. RDS security groups
3. VPC security groups
4. EC2 security groups
Ans : 2 Exp : Amazon RDS Security Groups: Security groups control the access that traffic has in and out of a DB instance. Three types of security groups are used with Amazon RDS: DB security groups, VPC security groups, and EC2 security groups. In simple terms, a DB security group controls access to a DB instance that is not in a VPC, a VPC security group controls access to a DB instance (or other AWS instances) inside a VPC, and an EC2 security group controls access to an EC2 instance.
By default, network access is turned off to a DB instance. You can specify rules in a security group that allow access from an IP address range, port, or EC2 security group. Once ingress rules are configured, the same rules apply to all DB instances that are associated with that security group. You can specify up to 20 rules in a security group.
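As a rough illustration, ingress rules like these can be added with the AWS SDK. The boto3 sketch below assumes a classic DB security group for the non-VPC case and a VPC security group for the VPC case; the group names, port, and CIDR range are placeholders, not values from the question.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Classic (non-VPC) DB security group: allow a client IP range to reach the DB instance.
    rds.authorize_db_security_group_ingress(
        DBSecurityGroupName="my-db-security-group",  # assumed group name
        CIDRIP="203.0.113.0/24",                     # assumed client address range
    )

    # VPC security group: allow inbound MySQL traffic (port 3306) from the same range.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",              # assumed VPC security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
        }],
    )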
Question : Which of the following volume event statuses indicates that volume performance is severely impacted (for provisioned IOPS volumes only)?
Ans : 1 Exp : Monitoring Volume Events - When Amazon EBS determines that a volume's data is potentially inconsistent, it disables I/O to the volume from any attached EC2 instances by default. This causes the volume status check to fail, and creates a volume status event that indicates the cause of the failure.
To automatically enable I/O on a volume with potential data inconsistencies, change the setting of the AutoEnableIO volume attribute.
Each event includes a start time that indicates the time at which the event occurred, and a duration that indicates how long I/O for the volume was disabled. The end time is added to the event when I/O for the volume is enabled.
Volume status events include one of the following descriptions:
Awaiting Action: Enable IO : Volume data is potentially inconsistent. I/O is disabled for the volume until you explicitly enable it. The event description changes to IO Enabled after you explicitly enable I/O.
IO Enabled : I/O operations were explicitly enabled for this volume.
IO Auto-Enabled : I/O operations were automatically enabled on this volume after an event occurred. We recommend that you check for data inconsistencies before continuing to use the data.
Normal : For provisioned IOPS volumes only. Volume performance is as expected.
Degraded : For provisioned IOPS volumes only. Volume performance is below expectations.
Severely Degraded : For provisioned IOPS volumes only. Volume performance is well below expectations.
Stalled : For provisioned IOPS volumes only. Volume performance is severely impacted.
You can view events for your volumes using the Amazon EC2 console, the API, or the command line interface.
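A minimal boto3 sketch of working with these status events; the volume ID and region below are assumptions for illustration.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    volume_id = "vol-0123456789abcdef0"  # assumed volume ID

    # List the current status checks and any events (e.g. "Awaiting Action: Enable IO").
    status = ec2.describe_volume_status(VolumeIds=[volume_id])
    for vol in status["VolumeStatuses"]:
        print(vol["VolumeStatus"]["Status"], vol.get("Events", []))

    # Opt the volume into automatic I/O re-enablement after a consistency event...
    ec2.modify_volume_attribute(VolumeId=volume_id, AutoEnableIO={"Value": True})

    # ...or explicitly re-enable I/O once you have checked the data yourself.
    ec2.enable_volume_io(VolumeId=volume_id)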
Question : Your website is running on multiple EC2 instances. Every day you want to collect the application logs and aggregate them in AWS, so that the team can later analyze traffic, user behavior, attacks on the website, and so on. Which of the below solutions will help you aggregate the logs in AWS?
1. You will write your own custom solution to read the logs from each EC2 instance and store them in S3.
2. You must configure Auto Scaling so that the logs are aggregated automatically.
4. You can enable this at the VPC subnet level, so that all the instances in a subnet regularly send their logs to an S3 bucket defined by the admin.
Correct Answer : Explanation : You have to use the CloudWatch Logs agent. First run the CloudWatch Logs agent installer, which installs the agent on each server; the agent then aggregates the logs from every EC2 instance on which it is installed.
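As a rough sketch of how the agent is typically set up, the classic awslogs agent reads an INI-style configuration file (commonly /etc/awslogs/awslogs.conf); the log file path, log group name, and stream name below are illustrative assumptions, not values from the question.

    [general]
    state_file = /var/lib/awslogs/agent-state

    # Ship the web server's access log; one log stream per EC2 instance.
    [/var/log/httpd/access_log]
    file = /var/log/httpd/access_log
    log_group_name = website-application-logs
    log_stream_name = {instance_id}
    datetime_format = %d/%b/%Y:%H:%M:%S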
Question : Amazon RDS provides which of the following types of metrics that you can use to determine performance?
Ans : 4 Exp : Amazon RDS provides two types of metrics that you can use to determine performance: disk metrics and database metrics.
Disk Metrics
•IOPS - the number of IO operations completed per second. This metric is reported as the average IOPS for a given time interval. Amazon RDS reports read and write IOPS separately on one minute intervals. Total IOPS is the sum of the read and write IOPS. Typical values for IOPS range from zero to tens of thousands per second.
•Latency - the elapsed time between the submission of an IO request and its completion. This metric is reported as the average latency for a given time interval. Amazon RDS reports read and write latency separately on one minute intervals in units of seconds. Typical values for latency are in the millisecond (ms) range; for example, Amazon RDS reports 2 ms as 0.002 seconds.
•Throughput - the number of bytes per second transferred to or from disk. This metric is reported as the average throughput for a given time interval. Amazon RDS reports read and write throughput separately on one minute intervals using units of megabytes per second (MB/s). Typical values for throughput range from zero to the IO channel's maximum bandwidth.
•Queue Depth - the number of IO requests in the queue waiting to be serviced. These are IO requests that have been submitted by the application but have not been sent to the device because the device is busy servicing other IO requests. Time spent waiting in the queue is a component of Latency and Service Time (not available as a metric). This metric is reported as the average queue depth for a given time interval. Amazon RDS reports queue depth in one minute intervals. Typical values for queue depth range from zero to several hundred.
Database Metrics
•Commit Latency - the elapsed time from submitting a commit request to receiving an acknowledgment. This metric is closely associated with disk metric write latency. Lower disk write latency can result in lower commit latency. CloudWatch metrics do not report this value.
•Transaction Rate - the number of transactions completed in a given time interval, typically expressed as TPM (Transactions per Minute) or TPS (Transactions per Second). Another commonly used term for transaction rate is database throughput, which should not be confused with the disk metric called throughput. The two metrics are not necessarily related; a database can have a high transaction rate and have little to no disk throughput if, for example, the workload consists of cached reads. CloudWatch metrics do not report this value.
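For example, disk metrics such as ReadIOPS can be pulled from CloudWatch with boto3. This is a minimal sketch; the DB instance identifier, region, and time window are assumptions for illustration.

    import datetime
    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=1)

    # Average read IOPS for the last hour, at the one-minute granularity RDS reports.
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="ReadIOPS",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydbinstance"}],  # assumed identifier
        StartTime=start,
        EndTime=end,
        Period=60,
        Statistics=["Average"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])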
Question : Select the correct prerequisite for creating a DB Instance within a VPC:
1. You should NOT allocate large CIDR blocks to each of your subnets so that there are spare IP addresses for Amazon RDS to use during maintenance activities.
2. You need to have a VPC set up with at least one subnet created in every Availability Zone of the Region in which you want to deploy your DB Instance.
4. There should not be a DB Security Group defined for your VPC.
Ans : 2 Exp : A DB subnet group is a collection of subnets (typically private) that you create for a VPC and that you then designate for your DB instances. A DB subnet group allows you to specify a particular VPC when you create DB instances using the CLI or API; if you use the Amazon RDS console, you can just select the VPC and subnets you want to use. Each DB subnet group must have at least one subnet in at least two Availability Zones in the region.
When you create a DB instance in a VPC, you must select a DB subnet group. Amazon RDS then uses that DB subnet group and your preferred Availability Zone to select a subnet and an IP address within that subnet. Amazon RDS creates an Elastic Network Interface with that IP address and associates it with your DB instance. For Multi-AZ deployments, defining a subnet in two or more Availability Zones in a region allows Amazon RDS to create a new standby in another Availability Zone should the need arise. You need to do this even for Single-AZ deployments, in case you want to convert them to Multi-AZ deployments at some point.
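A minimal boto3 sketch of this flow; the subnet group name, subnet IDs, engine, and instance settings are placeholders for illustration only.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # The DB subnet group must cover subnets in at least two Availability Zones.
    rds.create_db_subnet_group(
        DBSubnetGroupName="my-db-subnet-group",            # assumed name
        DBSubnetGroupDescription="Subnets for the app DB",
        SubnetIds=["subnet-0aaa11112222bbbb3", "subnet-0ccc44445555dddd6"],  # assumed subnets in two AZs
    )

    # The subnet group is then referenced when the DB instance is created inside the VPC.
    rds.create_db_instance(
        DBInstanceIdentifier="mydbinstance",
        DBInstanceClass="db.t3.micro",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        AllocatedStorage=20,
        DBSubnetGroupName="my-db-subnet-group",
        MultiAZ=False,
    )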
Question : You have launched a very popular website using an Amazon EC2 instance and an ELB. The EC2 instance is placed behind the load balancer, and both the load balancer and the EC2 instance accept HTTP traffic on port 80. As soon as a single client tries to access the website, how many connections will be established?
Correct Answer : Explanation : In the given scenario two connections will be created: one between the client and the ELB, and another between the ELB and the EC2 instance. Both use the same protocol and port.
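A minimal boto3 sketch of the listener setup behind this behavior, using the Classic Load Balancer API; the load balancer name, Availability Zone, and instance ID are assumptions for the example.

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")  # Classic Load Balancer API

    # Front-end (client -> ELB) and back-end (ELB -> instance) both use HTTP on port 80,
    # so one client request produces two separate connections: client->ELB and ELB->instance.
    elb.create_load_balancer(
        LoadBalancerName="my-web-elb",                       # assumed name
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        AvailabilityZones=["us-east-1a"],
    )
    elb.register_instances_with_load_balancer(
        LoadBalancerName="my-web-elb",
        Instances=[{"InstanceId": "i-0123456789abcdef0"}],   # assumed instance ID
    )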
Question : You are implementing a solution where, based on CPU utilization, you configure Auto Scaling and also send a notification to a mobile device through SNS. You want to analyze the CPU utilization and report it to the chief architect every month. But in the first month, as soon as you went to check, the metric was not available in CloudWatch to analyze. Why?
1. CloudWatch deletes the metric after 24 hours.
2. CloudWatch deletes the metric as soon as the notification is sent.
1. The instance is replaced automatically by the ELB.
2. The instance gets terminated automatically by the ELB.
4. The instance gets quarantined by the ELB for root cause analysis.
1. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
2. Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user.