Question : When you change a user's name or path in IAM, which of the following statements describe what happens after the change is applied?
1: Any policies attached to the user stay with the user under the new name. 2: The user stays in the same groups under the new name. 3: The unique ID for the user remains the same
1. 2 and 3 NOT 1 2. 1 and 3 NOT 2 3. 1 and 2 NOT 3 4. 1, 2 and 3
Correct Answer : 4 Exp : To change a user's name or path, you must use the IAM CLI or API. There is no option in the console to rename a user. To rename IAM users, use the following commands: CLI: aws iam update-user API: UpdateUser When you change a user's name or path, the following happens: Any policies attached to the user stay with the user under the new name. The user stays in the same groups under the new name. The unique ID for the user remains the same. Any resource or role policies that refer to the user as the principal (the user is being granted access) are automatically updated to use the new name or path. For example, any queue-based policies in Amazon SQS or resource-based policies in Amazon S3 are automatically updated to use the new name and path.
IAM does not automatically update policies that refer to the user as a resource to use the new name or path; you must manually do that. For example, imagine that user Bob has a policy attached to him that lets him manage his security credentials. If an administrator renames Bob to Robert, the administrator also needs to update that policy to change the resource from this:
arn:aws:iam::account-number-without-hyphens:user/division_abc/subdivision_xyz/Bob
to this:
arn:aws:iam::account-number-without-hyphens:user/division_abc/subdivision_xyz/Robert
This is also true if an administrator changes the path; the administrator needs to update the policy to reflect the new path for the user.
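For illustration, here is a minimal boto3 (Python) sketch of the rename plus the manual policy fix described above. The account number, path, and inline policy name are hypothetical, and the JSON rewrite is simplified, assuming the inline policy refers to the old user ARN as a plain string:

import json
import boto3

iam = boto3.client("iam")

old_name, new_name = "Bob", "Robert"
old_arn = "arn:aws:iam::123456789012:user/division_abc/subdivision_xyz/" + old_name
new_arn = "arn:aws:iam::123456789012:user/division_abc/subdivision_xyz/" + new_name

# Rename the user (CLI equivalent: aws iam update-user).
iam.update_user(UserName=old_name, NewUserName=new_name)

# Policies that name the user as a resource are NOT rewritten automatically,
# so patch the hypothetical inline policy that lets the user manage credentials.
policy = iam.get_user_policy(UserName=new_name, PolicyName="manage-own-credentials")
doc = json.loads(json.dumps(policy["PolicyDocument"]).replace(old_arn, new_arn))
iam.put_user_policy(
    UserName=new_name,
    PolicyName="manage-own-credentials",
    PolicyDocument=json.dumps(doc),
)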
Question : Select the correct statements that apply to S3 object metadata: 1. Object metadata is not encrypted 2. You should not store private data in object metadata 3. You should store private data in object metadata 4. Object metadata is optional
1. 1,2 2. 2,3 3. 3,4 4. 1,2,4 5. 1,3,4
Correct Answer : 4 Exp : Object Key and Metadata : Each Amazon S3 object has data, a key, and metadata. When you create an object, you specify the key name. This key name uniquely identifies the object in the bucket. For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a list of the objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.
Note : If you anticipate that your workload against Amazon S3 will exceed 100 requests per second, follow the Amazon S3 key naming guidelines for best performance.
In addition to the key, each Amazon S3 object has metadata. It is a set of name-value pairs. You can set object metadata at the time you upload the object. After you upload the object, you cannot modify its metadata. The only way to modify object metadata is to make a copy of the object and set the metadata on the copy. For more information, go to PUT Object - Copy in the Amazon Simple Storage Service API Reference. You can use the Amazon S3 management console to update object metadata, but internally it makes an object copy, replacing the existing object, to set the metadata. There are two kinds of metadata: system metadata and user-defined metadata.
Encryption provides added security for your object data stored in your buckets in Amazon S3. You can encrypt data on the client side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools. Optionally, you might want to use the server-side encryption feature, in which Amazon S3 encrypts your object data before saving it on disks in its data centers and decrypts it when you download the objects, freeing you from the tasks of managing encryption, encryption keys, and related tools. You can also use your own encryption keys with the Amazon S3 server-side encryption feature. Server-side encryption encrypts only the object data; object metadata is not encrypted. Instead of using Amazon S3's server-side encryption, you also have the option of encrypting your data before sending it to Amazon S3. You can build your own library that encrypts your object data on the client side before uploading it to Amazon S3. Optionally, you can use the AWS SDK for Java, which can automatically encrypt your data before uploading it to Amazon S3.
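A minimal boto3 (Python) sketch of the copy-based metadata update described above; the bucket, key, and metadata values are hypothetical:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "reports/summary.csv"

# S3 metadata cannot be edited in place; copy the object onto itself
# and supply the full replacement set of user-defined metadata.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    Metadata={"project": "analytics", "owner": "data-team"},
    MetadataDirective="REPLACE",
)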
Specifying Encryption Metadata Storage Location : When the Amazon S3 client (using the AmazonS3EncryptionClient class) encrypts data and uploads it to Amazon S3, the encrypted envelope symmetric key is also stored in S3. By default, the encrypted key is stored as user-defined object metadata. After you upload an encrypted object, you can view its properties and see the additional metadata name-value pairs related to encryption. For example, the key name x-amz-meta-x-amz-key and a key value equal to the envelope key are set on a client-side encrypted object uploaded to Amazon S3. Optionally, you can also choose to store encryption metadata as an instruction file stored at the same location as the encrypted file. The instruction file will have the same key name as the encrypted data file but with the extension ".instruction" appended. You should use an instruction file when the strength of your encryption key results in a symmetric key that is too big for the object metadata. Metadata should be less than 2 KB. Encryption metadata is either stored as object metadata or an instruction file, but not both.
Question : Which of the following statements is correct with regard to the AWS S3 consistency model?
1. AWS provides read-after-write consistency for PUTS
2. AWS provides read-after-write consistency for DELETES
3. AWS provides eventual consistency for overwrite PUTS and DELETES
4. 1,3
5. 2,3
Correct Answer : 4 Explanation: Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.
Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to find if the object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.
Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.
Updates to a single key are atomic. For example, if you PUT to an existing key, a subsequent read might return the old data or the updated data, but it will never return corrupted or partial data.
Amazon S3 achieves high availability by replicating data across multiple servers within Amazon's data centers. If a PUT request is successful, your data is safely stored. However, information about the changes must replicate across Amazon S3, which can take some time, and so you might observe the following behaviors:
A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the change is fully propagated, the object might not appear in the list.
A process replaces an existing object and immediately attempts to read it. Until the change is fully propagated, Amazon S3 might return the prior data.
A process deletes an existing object and immediately attempts to read it. Until the deletion is fully propagated, Amazon S3 might return the deleted data.
A process deletes an existing object and immediately lists keys within its bucket. Until the deletion is fully propagated, Amazon S3 might list the deleted object.
Note : Amazon S3 does not currently support object locking. If two PUT requests are simultaneously made to the same key, the request with the latest time stamp wins. If this is an issue, you will need to build an object-locking mechanism into your application. Updates are key-based; there is no way to make atomic updates across keys. For example, you cannot make the update of one key dependent on the update of another key unless you design this functionality into your application.
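To make the behaviour above concrete, here is a small boto3 (Python) sketch; the bucket and key names are hypothetical, and under the eventual-consistency rules described above the list call may briefly omit a brand-new key:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "logs/new-object.txt"

# PUT of a new object: a subsequent GET of the same key gets
# read-after-write consistency (unless the key was probed with GET/HEAD first).
s3.put_object(Bucket=bucket, Key=key, Body=b"hello")
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# Listing keys immediately afterwards is only eventually consistent,
# so the new key may not show up in the first response.
listed = [obj["Key"] for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", [])]
print(key in listed)  # may be False until the change propagates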
Ans : 4 Exp : Amazon CloudWatch uses Amazon Simple Notification Service (Amazon SNS) to send email. First you create and subscribe to an Amazon SNS topic; when you create a CloudWatch alarm, you can then add that SNS topic to send an email notification when the alarm changes state.
You can create an alarm from the Alarms list in the Amazon CloudWatch console.
You can create an Amazon CloudWatch alarm that sends an Amazon Simple Notification Service email message when the alarm changes state from OK to ALARM. You can use the Amazon CloudWatch console or the AWS command line interface (CLI) to set up an Amazon Simple Notification Service notification and configure an alarm that monitors load balancer latency exceeding 100 ms.
You can also use the AWS Management Console or the command line tools to set up an Amazon Simple Notification Service notification and to configure an alarm that sends email when EBS throughput exceeds 100 MB.
Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop or terminate your Amazon Elastic Compute Cloud (Amazon EC2) instances when you no longer need them to be running.
You can monitor your estimated Amazon Web Services (AWS) charges using Amazon CloudWatch. When you enable the monitoring of estimated charges for your AWS account, the estimated charges are calculated and sent several times daily to Amazon CloudWatch as metric data that is stored for 14 days.
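As an illustration of the load balancer latency alarm mentioned above, here is a hedged boto3 (Python) sketch; the alarm name, load balancer name, and SNS topic ARN are hypothetical:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average ELB latency exceeds 100 ms (0.1 s) for three
# consecutive one-minute periods, and notify an SNS topic by email.
cloudwatch.put_metric_alarm(
    AlarmName="elb-high-latency",
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-load-balancer"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=0.1,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-alarm-topic"],
)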
Question : By default, when an application checks the header of a request coming from an ELB, which IP address will it receive?
Ans : 1 Exp : The Proxy Protocol header helps you identify the IP address of a client when you use a load balancer configured for TCP/SSL connections. Because load balancers intercept traffic between clients and your back-end instances, the access logs from your back-end instance contain the IP address of the load balancer instead of the originating client. When Proxy Protocol is enabled, the load balancer adds a human-readable header that contains the connection information, such as the source IP address, destination IP address, and port numbers of the client. The header is then sent to the back-end instance as part of the request. You can parse the first line of the request to retrieve your client's IP address and the port number.
If the client connects with IPv6, the address of the proxy in the header will be the public IPv6 address of your load balancer. This IPv6 address matches the IP address that is resolved from your load balancer's DNS name that is prefixed with either ipv6 or dualstack. If the client connects with IPv4, the address of the proxy in the header will be the private IPv4 address of the load balancer and will therefore not be resolvable through a DNS lookup outside the Amazon Elastic Compute Cloud (Amazon EC2) network.
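A minimal Python sketch of parsing the Proxy Protocol v1 line described above on a back-end instance; the sample header line and addresses are hypothetical:

# Proxy Protocol v1 prepends one text line to the TCP stream, e.g.:
#   PROXY TCP4 <client-ip> <proxy-ip> <client-port> <proxy-port>\r\n
def parse_proxy_protocol(line: bytes):
    parts = line.decode("ascii").rstrip("\r\n").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a Proxy Protocol header")
    _, family, client_ip, proxy_ip, client_port, proxy_port = parts
    return client_ip, int(client_port)

print(parse_proxy_protocol(b"PROXY TCP4 198.51.100.22 10.0.1.15 35646 80\r\n"))
# -> ('198.51.100.22', 35646)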
Question : You have a website which is getting more popular day by day, and traffic is increasing as well. Hence, you add EC2 instances to handle the heavy traffic and introduce an ELB in front of all the EC2 instances to balance the traffic. However, you see that your ELB is not accepting traffic. Why?
1. You have forgotten to configure the port on which incoming traffic will be accepted
2. You have forgotten to configure Security Group for ELB
4. It seems you are working with the default limit of maximum EC2 instances per account per region, which is 20; you should raise an AWS request to get more instances.
1. DB security groups 2. RDS security groups 3. VPC security groups 4. EC2 security groups Ans : 2 Exp : Amazon RDS Security Groups : Security groups control the access that traffic has in and out of a DB instance. Three types of security groups are used with Amazon RDS: DB security groups, VPC security groups, and EC2 security groups. In simple terms, a DB security group controls access to a DB instance that is not in a VPC, a VPC security group controls access to a DB instance (or other AWS instances) inside a VPC, and an EC2 security group controls access to an EC2 instance.
By default, network access is turned off to a DB instance. You can specify rules in a security group that allow access from an IP address range, port, or EC2 security group. Once ingress rules are configured, the same rules apply to all DB instances that are associated with that security group. You can specify up to 20 rules in a security group.
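As an illustration of opening ingress to a database, here is a boto3 (Python) sketch that adds a MySQL-port rule to a VPC security group associated with a DB instance; the security group ID and CIDR block are hypothetical:

import boto3

ec2 = boto3.client("ec2")

# Allow inbound MySQL traffic (port 3306) from an office CIDR block
# on the VPC security group associated with the DB instance.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office network"}],
    }],
)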
Question : Which of the following volume status event descriptions shows that volume performance is severely impacted (for Provisioned IOPS volumes only)?
Ans : 1 Exp : Monitoring Volume Events : When Amazon EBS determines that a volume's data is potentially inconsistent, it disables I/O to the volume from any attached EC2 instances by default. This causes the volume status check to fail, and creates a volume status event that indicates the cause of the failure.
To automatically enable I/O on a volume with potential data inconsistencies, change the setting of the AutoEnableIO volume attribute.
Each event includes a start time that indicates the time at which the event occurred, and a duration that indicates how long I/O for the volume was disabled. The end time is added to the event when I/O for the volume is enabled.
Volume status events include one of the following descriptions:
Awaiting Action: Enable IO : Volume data is potentially inconsistent. I/O is disabled for the volume until you explicitly enable it. The event description changes to IO Enabled after you explicitly enable I/O.
IO Enabled : I/O operations were explicitly enabled for this volume.
IO Auto-Enabled : I/O operations were automatically enabled on this volume after an event occurred. We recommend that you check for data inconsistencies before continuing to use the data.
Normal : For Provisioned IOPS volumes only. Volume performance is as expected.
Degraded : For Provisioned IOPS volumes only. Volume performance is below expectations.
Severely Degraded : For Provisioned IOPS volumes only. Volume performance is well below expectations.
Stalled : For Provisioned IOPS volumes only. Volume performance is severely impacted.
You can view events for your volumes using the Amazon EC2 console, the API, or the command line interface.
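A boto3 (Python) sketch of checking volume status events and enabling automatic I/O re-enablement, as described above; the volume ID is hypothetical:

import boto3

ec2 = boto3.client("ec2")
volume_id = "vol-0123456789abcdef0"

# List any status events (for example, potential data inconsistency) for the volume.
status = ec2.describe_volume_status(VolumeIds=[volume_id])
for volume in status["VolumeStatuses"]:
    for event in volume.get("Events", []):
        print(event["EventType"], event["Description"])

# Opt the volume in to automatically re-enabling I/O after such events.
ec2.modify_volume_attribute(VolumeId=volume_id, AutoEnableIO={"Value": True})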
Question : Your website is running on multiple EC2 instances. Every day you want to pull the application logs and aggregate all the logs in AWS, so that the team can later analyze the traffic, user behavior, attacks on the website, and so on. Which of the below solutions will help you aggregate the logs in AWS? 1. You will write your own custom solution to read the logs from each EC2 instance and store them in S3
2. You have to configure Auto Scaling, which will automatically aggregate the logs.
4. You can enable this at the VPC subnet level, so that all the instances in a subnet regularly send their logs to an S3 bucket defined by the admin.
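Option 1 describes a custom aggregation approach. As a rough sketch under assumed names (the bucket, instance ID, and log path are hypothetical), each instance could periodically push its log file to S3 with boto3 (Python):

import datetime
import boto3

s3 = boto3.client("s3")

# Upload today's application log under a per-day, per-instance key prefix
# so downstream analysis jobs can read every instance's logs from one bucket.
instance_id = "i-0123456789abcdef0"
today = datetime.date.today().isoformat()
s3.upload_file(
    "/var/log/myapp/application.log",
    "my-log-aggregation-bucket",
    f"app-logs/{today}/{instance_id}.log",
)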
Ans : 4 Exp : Amazon RDS provides two types of metrics that you can use to determine performance: disk metrics and database metrics (a CloudWatch retrieval sketch follows the metric definitions below).
Disk Metrics •IOPS - the number of IO operations completed per second. This metric is reported as the average IOPS for a given time interval. Amazon RDS reports read and write IOPS separately on one minute intervals. Total IOPS is the sum of the read and write IOPS. Typical values for IOPS range from zero to tens of thousands per second.
•Latency - the elapsed time between the submission of an IO request and its completion. This metric is reported as the average latency for a given time interval. Amazon RDS reports read and write latency separately on one minute intervals in units of seconds. Typical values for latency are in the millisecond (ms) range; for example, Amazon RDS reports 2 ms as 0.002 seconds.
•Throughput - the number of bytes per second transferred to or from disk. This metric is reported as the average throughput for a given time interval. Amazon RDS reports read and write throughput separately on one minute intervals using units of megabytes per second (MB/s). Typical values for throughput range from zero to the IO channel's maximum bandwidth.
•Queue Depth - the number of IO requests in the queue waiting to be serviced. These are IO requests that have been submitted by the application but have not been sent to the device because the device is busy servicing other IO requests. Time spent waiting in the queue is a component of Latency and Service Time (not available as a metric). This metric is reported as the average queue depth for a given time interval. Amazon RDS reports queue depth in one minute intervals. Typical values for queue depth range from zero to several hundred.
Database Metrics •Commit Latency - the elapsed time from submitting a commit request to receiving an acknowledgment. This metric is closely associated with disk metric write latency. Lower disk write latency can result in lower commit latency. CloudWatch metrics do not report this value.
•Transaction Rate - the number of transactions completed in a given time interval, typically expressed as TPM (Transactions per Minute) or TPS (Transactions per Second). Another commonly used term for transaction rate is database throughput, which should not be confused with the disk metric called throughput. The two metrics are not necessarily related; a database can have a high transaction rate and have little to no disk throughput if, for example, the workload consists of cached reads. CloudWatch metrics do not report this value.
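As referenced above, here is a boto3 (Python) sketch that pulls one of the disk metrics (ReadIOPS) for an RDS instance from CloudWatch; the DB instance identifier is hypothetical:

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# Fetch per-minute average ReadIOPS for the last hour of an RDS instance.
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
    StartTime=start,
    EndTime=end,
    Period=60,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])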
Question : Select the correct prerequisite for creating a DB Instance within a VPC:
1. You should NOT allocate large CIDR blocks to each of your subnets so that there are spare IP addresses for Amazon RDS to use during maintenance activities.
2. You need to have a VPC set up with at least one subnet created in every Availability Zone in the Region you want to deploy your DB Instance.
3.
4. There should not be a DB Security Group defined for your VPC
Ans : 2 Exp : A DB subnet group is a collection of subnets (typically private) that you create for a VPC and that you then designate for your DB instances. A DB subnet group allows you to specify a particular VPC when you create DB instances using the CLI or API; if you use the Amazon RDS console, you can just select the VPC and subnets you want to use. Each DB subnet group must have at least one subnet in at least two Availability Zones in the region.
When you create a DB instance in a VPC, you must select a DB subnet group. Amazon RDS then uses that DB subnet group and your preferred Availability Zone to select a subnet and an IP address within that subnet. Amazon RDS creates and associates an Elastic Network Interface with your DB instance using that IP address. For Multi-AZ deployments, defining a subnet for two or more Availability Zones in a region allows Amazon RDS to create a new standby in another Availability Zone should the need arise. You need to do this even for Single-AZ deployments, just in case you want to convert them to Multi-AZ deployments at some point.
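A boto3 (Python) sketch of the setup described above: create a DB subnet group from subnets in two Availability Zones, then launch a DB instance into the VPC through it. All identifiers, engine settings, and credentials are hypothetical:

import boto3

rds = boto3.client("rds")

# A DB subnet group needs subnets in at least two Availability Zones.
rds.create_db_subnet_group(
    DBSubnetGroupName="my-db-subnet-group",
    DBSubnetGroupDescription="Private subnets for RDS",
    SubnetIds=["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],
)

# Launch the DB instance into the VPC via the subnet group.
rds.create_db_instance(
    DBInstanceIdentifier="my-db-instance",
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    AllocatedStorage=20,
    DBSubnetGroupName="my-db-subnet-group",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
)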
Question : You have launched a very popular website using an Amazon EC2 instance and an ELB. The EC2 instance is placed behind the load balancer, and both the load balancer and the EC2 instance are open to accept HTTP traffic on port 80. As soon as a single client tries to access the website, how many connections will be established?