
AWS Certified Developer - Associate Questions and Answers (Dumps and Practice Questions)



Question : You are creating a snapshot of an EBS volume. Which of the below statements is incorrect in relation to the creation of an EBS snapshot?
1. It is a point in time backup of the EBS volume
2. It is stored in the same AZ as the volume
4. It can be used to launch a new instance


Correct Answer : 2

Explanation: When you create an EBS volume, you can create it based on an existing snapshot. The new volume begins as an exact replica of the original volume that was used to create the snapshot. When you create a volume from an existing snapshot, it loads lazily in the background so that you can begin using it right away. If you access a piece of data that hasn't been loaded yet, the volume immediately downloads the requested data from Amazon S3, and then continues loading the rest of the volume's data in the background. For more information, see Creating an Amazon EBS Snapshot.

Snapshots of encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted. Your encrypted volumes and any associated snapshots always remain protected. For more information, see Amazon EBS Encryption.

You can share your unencrypted snapshots with specific AWS accounts, or make them public to share them with the entire AWS community. Users with access to your snapshots can create their own EBS volumes from them; this doesn't affect your snapshot. For more information about how to share snapshots, see Sharing an Amazon EBS Snapshot. Note that you can't share encrypted snapshots, because your volume encryption keys and master key are specific to your account. If you need to share your encrypted snapshot data, you can migrate the data to an unencrypted volume and then share a snapshot of that volume. For more information, see Migrating Data.
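As a rough illustration of the sharing model described above, the following boto3 sketch grants one account permission to create volumes from a snapshot, then makes the snapshot public; the snapshot ID and account ID are hypothetical placeholders, not values from the question.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Grant one specific account permission to create volumes from the snapshot.
    ec2.modify_snapshot_attribute(
        SnapshotId="snap-0123456789abcdef0",   # hypothetical snapshot ID
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=["111122223333"],              # hypothetical account ID
    )

    # Or make the (unencrypted) snapshot public for the whole AWS community.
    ec2.modify_snapshot_attribute(
        SnapshotId="snap-0123456789abcdef0",
        Attribute="createVolumePermission",
        OperationType="add",
        GroupNames=["all"],
    )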

Snapshots are constrained to the region in which they are created. After you have created a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, see Restoring an Amazon EBS Volume from a Snapshot. You can also copy snapshots across regions, making it easier to leverage multiple regions for geographical expansion, data center migration, and disaster recovery. You can copy any accessible snapshots that are in the available state. An EBS snapshot is a point-in-time backup of the EBS volume. It is incremental, but it is always specific to the region and never specific to a single AZ.
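The regional behaviour above can be walked through end to end in a short boto3 sketch: take a point-in-time snapshot, restore it into any AZ of the same region, and copy it to another region for disaster recovery. All IDs and region names below are assumptions for illustration.

    import boto3

    source = boto3.client("ec2", region_name="us-east-1")

    # Point-in-time, incremental backup of the volume; the snapshot is
    # regional (stored in S3), not tied to the volume's AZ.
    snap = source.create_snapshot(
        VolumeId="vol-0123456789abcdef0",      # hypothetical volume ID
        Description="point-in-time backup",
    )
    source.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Restore: a new volume can be created in any AZ of the same region.
    source.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone="us-east-1b",
    )

    # Cross-region copy for DR or migration; issued from the destination region.
    dest = boto3.client("ec2", region_name="eu-west-1")
    dest.copy_snapshot(
        SourceRegion="us-east-1",
        SourceSnapshotId=snap["SnapshotId"],
        Description="DR copy",
    )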





Question : A HadoopExam AWS Developer has to configure AutoScaling so that it scales up when the CPU utilization is above
70% and scales down when the CPU utilization is below 30%. How can the user configure AutoScaling for this condition?
1. Use AutoScaling by manually modifying the desired capacity during a condition
2. Use dynamic AutoScaling with a policy
4. Use AutoScaling with a schedule

Correct Answer : 2
Explanation: When you use Auto Scaling to scale on demand, you must define how you want to scale in response to changing conditions. For example, you have a web application that currently runs on two instances. You want to launch two additional instances when the load on the running instances reaches 70 percent, and then terminate the additional instances when the load goes down to 40 percent. You can configure your Auto Scaling group to scale up and then scale down automatically by specifying these conditions.

An Auto Scaling group uses a combination of policies and alarms to determine when the specified conditions for launching and terminating instances are met. An alarm is an object that watches a single metric (for example, the average CPU utilization of your EC2 instances in an Auto Scaling group) over a time period that you specify. When the value of the metric breaches the thresholds that you define, over a number of time periods that you specify, the alarm performs one or more actions; an action can be sending messages to Auto Scaling. A policy is a set of instructions that tells Auto Scaling how to respond to alarm messages. Along with creating a launch configuration and Auto Scaling group, you need to create the alarms and the scaling policies and associate them with your Auto Scaling group. When the alarm sends the message, Auto Scaling executes the associated policy on your Auto Scaling group to scale the group in (terminate instances) or scale the group out (launch instances).

To configure the scenario in this question, the user must set up policies that are triggered by CloudWatch alarms; the Auto Scaling group then scales up and down automatically based on the specified conditions.
When a scaling policy is executed, it changes the size of your Auto Scaling group by the amount specified in the policy. You can express the change to the current size as an absolute number, an increment, or as a percentage of the current group size. When the policy is executed, Auto Scaling uses both the current group capacity and the change specified in the policy to compute a new size for your Auto Scaling group. Auto Scaling then updates the current size, and this consequently affects the size of your group.
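As a minimal sketch of that wiring for the 70%/30% scenario, the boto3 calls below create a scale-out and a scale-in policy and attach a CloudWatch alarm to each. The group name "web-asg", the adjustment sizes, and the alarm periods are assumptions, not values from the question.

    import boto3

    asg = boto3.client("autoscaling")
    cw = boto3.client("cloudwatch")

    # One policy per direction, as recommended below: scale out and scale in.
    scale_out = asg.put_scaling_policy(
        AutoScalingGroupName="web-asg",        # hypothetical group name
        PolicyName="scale-out",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,                   # launch one instance
    )["PolicyARN"]

    scale_in = asg.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-in",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=-1,                  # terminate one instance
    )["PolicyARN"]

    def cpu_alarm(name, operator, threshold, action_arn):
        # Alarm on average CPU across the group; executes the policy on breach.
        cw.put_metric_alarm(
            AlarmName=name,
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
            Statistic="Average",
            Period=300,
            EvaluationPeriods=2,
            Threshold=threshold,
            ComparisonOperator=operator,
            AlarmActions=[action_arn],
        )

    cpu_alarm("cpu-high", "GreaterThanThreshold", 70, scale_out)
    cpu_alarm("cpu-low", "LessThanThreshold", 30, scale_in)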

A positive adjustment value increases the current capacity and a negative adjustment value decreases the current capacity. Auto Scaling does not scale above the maximum size or below the minimum size of the Auto Scaling group. We recommend that you create two policies for each scaling change that you want to perform: one policy for scaling out and another policy for scaling in. To create a scaling policy, you need to specify a name for the policy, the name of the Auto Scaling group to associate the policy with, the number of instances by which to scale, and the adjustment type. Auto Scaling supports the following adjustment types:

ChangeInCapacity: Increases or decreases the existing capacity. For example, the current capacity of your Auto Scaling group is set to three instances, and you then create a scaling policy on your Auto Scaling group, specify the type as ChangeInCapacity, and the adjustment as five. When the policy is executed, Auto Scaling adds five more instances to your Auto Scaling group. You then have eight running instances in your Auto Scaling group: current capacity (3) plus ChangeInCapacity (5) equals 8.

ExactCapacity: Changes the current capacity to the specified value. For example, if the current capacity is 5 instances and you create a scaling policy on your Auto Scaling group, specify the type as ExactCapacity and the adjustment as 3. When the policy is executed, your Auto Scaling group has three running instances. You'll get an error if you specify a negative adjustment value for the ExactCapacity adjustment type.

PercentChangeInCapacity: Increases or decreases the capacity by a percentage. For example, if the current capacity is 10 instances and you create a scaling policy on your Auto Scaling group, specify the type as PercentChangeInCapacity, and the adjustment as 10. When the policy is executed, your Auto Scaling group has eleven running instances, because 10 percent of 10 instances is 1 instance, and 10 instances plus 1 instance is 11 instances. Auto Scaling handles non-integer numbers returned by PercentChangeInCapacity as follows:
If the value is greater than 1, Auto Scaling rounds it to the lower value. For example, a return value of 12.7 is rounded to 12.
If the value is between 0 and 1, Auto Scaling rounds it to 1. For example, a return value of .67 is rounded to 1.
If the value is between 0 and -1, Auto Scaling rounds it to -1. For example, a return value of -.58 is rounded to -1.
If the value is less than -1, Auto Scaling rounds it to the higher value. For example, a return value of -6.67 is rounded to -6.
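The rounding rules above can be restated as a small Python sketch; the behaviour at exactly +1, 0, and -1 is an assumption based on the wording, not documented behaviour.

    import math

    def round_percent_change(value: float) -> int:
        # Rounding for PercentChangeInCapacity as described above: magnitudes
        # above 1 are truncated toward zero; fractional magnitudes between
        # 0 and 1 are pushed away from zero to +/-1.
        if value > 1:
            return math.floor(value)   # 12.7 -> 12
        if 0 < value <= 1:
            return 1                   # 0.67 -> 1
        if -1 <= value < 0:
            return -1                  # -0.58 -> -1
        if value < -1:
            return math.ceil(value)    # -6.67 -> -6
        return 0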





Question : Which of the below mentioned options can be a good use case for storing content in AWS RRS?
1. Storing image thumbnails
2. Storing mission-critical data files
4. Storing infrequently used log files

Correct Answer : 1
Explanation: Amazon S3 stores objects according to their storage class. It assigns the storage class to an object when it is written to Amazon S3. You can assign objects a specific storage class (standard or reduced redundancy) only when you write the objects to an Amazon S3 bucket or when you copy objects that are already stored in Amazon S3. Standard is the default storage class. For information about storage classes, see Object Key and Metadata.

In order to reduce storage costs, you can use reduced redundancy storage for noncritical, reproducible data at lower levels of redundancy than Amazon S3 provides with standard storage. The lower level of redundancy results in less durability and availability, but in many cases, the lower costs can make reduced redundancy storage an acceptable storage solution. For example, it can be a cost-effective solution for sharing media content that is durably stored elsewhere. It can also make sense if you are storing thumbnails and other resized images that can be easily reproduced from an original image.

Reduced redundancy storage is designed to provide 99.99% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.01% of objects. For example, if you store 10,000 objects using the RRS option, you can, on average, expect to lose a single object per year (0.01% of 10,000 objects).

Note : This annual loss represents an expected average and does not guarantee the loss of less than 0.01% of objects in a given year.

Reduced redundancy storage stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but it does not replicate objects as many times as Amazon S3 standard storage. In addition, reduced redundancy storage is designed to sustain the loss of data in a single facility.
If an object in reduced redundancy storage has been lost, Amazon S3 returns a 405 error on requests made to that object. Amazon S3 also offers notifications for reduced redundancy storage object loss: you can configure your bucket so that when Amazon S3 detects the loss of an RRS object, a notification is sent through Amazon Simple Notification Service (Amazon SNS). You can then replace the lost object. To enable notifications, you can use the Amazon S3 console to set the Notifications property of your bucket.

AWS RRS provides the same functionality as standard AWS S3 storage, but at a cheaper rate. It is ideally suited for non-mission-critical applications, such as files which can be reproduced.
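Putting the RRS pieces together, a boto3 sketch might upload a reproducible thumbnail with the REDUCED_REDUNDANCY storage class and subscribe the bucket to the s3:ReducedRedundancyLostObject event; the bucket name, object key, and SNS topic ARN are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Store a reproducible thumbnail under reduced redundancy; the storage
    # class is assigned at write time, standard being the default.
    s3.put_object(
        Bucket="example-media-bucket",         # hypothetical bucket
        Key="thumbnails/photo-123.jpg",
        Body=b"...thumbnail bytes...",         # stand-in for the image data
        StorageClass="REDUCED_REDUNDANCY",
    )

    # Ask S3 to publish an SNS notification if an RRS object is ever lost,
    # so the object can be regenerated and re-uploaded.
    s3.put_bucket_notification_configuration(
        Bucket="example-media-bucket",
        NotificationConfiguration={
            "TopicConfigurations": [{
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:rrs-lost",  # hypothetical
                "Events": ["s3:ReducedRedundancyLostObject"],
            }]
        },
    )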





Related Questions


Question : Being a HadoopExam.com AWS Developer you are defining a policy for an IAM user.
Which of the below mentioned elements will be used as part of the policy?
1. VersionManagement
2. PrincipalResource
4. Supported Data Types



Question : Being a HadoopExam.com AWS Developer you have hosted an application on EC2.
The application makes a call to RDS. How can the user ensure that access to the RDS DB is secure?
1. Access between RDS and EC2 is always secure
2. IAM role with RDS access and attach it to EC2
4. Allow the EC2 security group within the RDS security group to allow access from EC2



Question : Being a HadoopExam.com AWS Developer you have launched RDS with the Oracle DB. The instance size is GB.
The user has taken 2 snapshots of that DB. Will RDS charge the user for the snapshots?
1. No, provided the total snapshot size is less than 20 GB
2. Yes
4. No. Backup storage is always free


Question : Being a HadoopExam.com AWS Developer you are configuring MySQL RDS without PIOPS. What should be the minimum size of DB storage provided by the user?
1. 100GB
2. 50GB
4. 1TB


Question : Being a HadoopExam.com AWS Developer you are configuring an application with the PostgreSQL RDS instance
and do not want a mandatory maintenance window. How can the user configure this?
1. The user should skip the step to configure the maintenance window
2. The user should select the option as 'No' in the Management Options screen against the Maintenance window
4. The user should provide the same time for start and end so that it will not perform maintenance


Question : Being a HadoopExam.com AWS Developer you have configured RDS with MySQL and
have also set up a maintenance window of 12:00 AM with a duration of 1 hour every Sunday.
Now you want to set up automated backups for the same instance. What time should the user supply in this case?
1. 12:30 AM
2. 01:01 AM
4. 01:00 AM