
AWS Certified Developer - Associate Questions and Answers (Dumps and Practice Questions)



Question : What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

1. Instance-store backed instances can be stopped and restarted
2. Auto Scaling requires using Amazon EBS-backed instances
3. Access Mostly Uused Products by 50000+ Subscribers
4. Virtual Private Cloud requires EBS-backed instances



Correct Answer :

Explanation: You can stop and restart your instance if it has an Amazon EBS volume as its root device. The instance retains its instance ID, but some attributes (such as its public IP address) can change. When you stop an instance, AWS shuts it down. You are not charged hourly usage or data transfer fees for a stopped instance, but you are charged for the storage of any Amazon EBS volumes. Each time you start a stopped instance you are charged a full instance hour, even if you make this transition multiple times within a single hour.

While the instance is stopped, you can treat its root volume like any other volume and modify it (for example, to repair file system problems or update software): detach the volume from the stopped instance, attach it to a running instance, make your changes, detach it from the running instance, and then reattach it to the stopped instance. Make sure that you reattach it using the storage device name that is specified as the root device in the block device mapping for the instance.

If you decide that you no longer need an instance, you can terminate it. As soon as the state of an instance changes to shutting-down or terminated, you stop being charged for that instance.
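
To make the stop/start cycle concrete, here is a minimal boto3 sketch (the region and instance ID are hypothetical, and AWS credentials are assumed to be configured); an illustration only, not reference code from the exam:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical EBS-backed instance

# Stopping only works for EBS-backed instances; instance-store backed
# instances can only be rebooted or terminated.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# While stopped, the root volume could be detached, modified on another
# instance, and reattached under its original device name.

# Starting again keeps the instance ID, though attributes such as the
# public IP address may change.
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])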





Question :

You run an ad-supported photo-sharing website that uses S3 to serve photos to visitors of your site.
At some point you find out that other sites have been linking directly to the photos on your site, causing a loss of ad revenue.
What is an effective method to mitigate this?


1. Use CloudFront distributions for static content.
2. Remove public read access and use signed URLs with expiry dates.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Store photos on an EBS volume of the web server.


Correct Answer :

Explanation: This refers to signed URLs for private data stored on Amazon S3.

If files are publicly accessible, they can be accessed with a simple URL to the file:

e.g. http://s3.amazonaws.com/[bucket]/[key]

However, they can be set to private, in which case you need to provide a signed URL to access the file. This URL is created using your access key and secret key, and it is this URL that carries the expiry time, e.g.:

http://[bucket].s3.amazonaws.com/[key]?AWSAccessKeyId=[AWS_Public_Key]&Expires=1294766482&Signature=[generated_hash]

As per the question, for web graphics you might reuse the same generated URL with the expiry time set far into the future so that browsers can cache the file, whereas for file downloads you would probably create a new URL for each request, set to expire shortly afterwards (say, a day later) to protect your data.

This DOES NOT expire, delete, or remove your data stored on S3. It only affects the URL to the file, and you can generate as many URLs with different expiry dates as you require.

You can also invalidate all URLs pointing to an object on S3 by renaming, moving, or deleting the object. That becomes obvious once you understand that the URLs you make are just that: pointers to a file in your S3 account. Signing happens locally, so Amazon does not even know when you create a URL; you do not need an internet connection to create a signed URL.
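
For illustration, a minimal boto3 sketch that generates such a signed (presigned) URL; the bucket name, key, and expiry are hypothetical:

import boto3

s3 = boto3.client("s3")

# Generate a presigned GET URL for a private object, valid for one hour.
# Signing happens locally with your credentials; no API call is made.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-photo-bucket", "Key": "photos/cat.jpg"},
    ExpiresIn=3600,  # seconds until the URL expires
)
print(url)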




Question :

Your application is trying to upload a 6 GB file to Simple Storage Service and you receive a
"Your proposed upload exceeds the maximum allowed object size." error message. What is a possible solution for this?

1. Contact support to increase your object size limit
2. Use the large object upload API for this object
3. Access Mostly Uused Products by 50000+ Subscribers
4. None, Simple Storage Service objects are limited to 5 GB



Correct Answer :


Explanation: AWS S3 (Simple Storage Service) allows a maximum object size of 5 TB. However, a single PUT operation can upload at most 5 GB, so objects larger than 5 GB must be uploaded using the multipart upload API.
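
As a sketch of how this looks in practice with boto3 (the file path and bucket name are hypothetical): upload_file switches to the multipart upload API automatically once the file size crosses a configurable threshold, so a 6 GB upload succeeds without extra code.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Make the multipart behaviour explicit: split anything over 100 MB
# into 100 MB parts.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
)

s3.upload_file(
    Filename="/tmp/photos-archive.bin",  # hypothetical 6 GB file
    Bucket="my-photo-bucket",
    Key="archives/photos-archive.bin",
    Config=config,
)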


Related Questions


Question : QuickTechie.com is planning to host data with RDS. Which of the following databases is not supported by RDS?
1. PostgreSQL
2. Oracle
3. Access Mostly Uused Products by 50000+ Subscribers
4. MS SQL


Question : QuickTechie.com has created a blank EBS volume in the US-East-1 region but is unable to attach the volume to a running instance in the same region. What could be the possible reason for this?
1. The instance has enabled the volume attach protection
2. The instance is in a running state; it is required to stop the instance to attach a volume
3. Access Mostly Uused Products by 50000+ Subscribers
4. The AZ for the instance and volume are different



Question : A HadoopExam AWS developer attached an EBS volume to a running Linux instance as the "/dev/sdf" device.
The user is unable to see the attached device when he runs the command "df -h". What is the possible reason for this?
1. The volume is not attached as a root device
2. The volume is not mounted
3. Access Mostly Uused Products by 50000+ Subscribers
4. The volume is not formatted



Question : You are creating a snapshot of an EBS volume. Which of the following statements is incorrect in relation to the creation of an EBS snapshot?
1. It is a point-in-time backup of the EBS volume
2. It is stored in the same AZ as the volume
3. Access Mostly Uused Products by 50000+ Subscribers
4. It can be used to launch a new instance



Question : A HadoopExam AWS developer has to configure Auto Scaling so that it scales up when the CPU utilization is above
70% and scales down when the CPU utilization is below 30%. How can the user configure Auto Scaling for this condition?
1. Use Auto Scaling by manually modifying the desired capacity during a condition
2. Use dynamic Auto Scaling with a policy
3. Access Mostly Uused Products by 50000+ Subscribers
4. Use Auto Scaling with a schedule


Question : Which of the following options can be a good use case for storing content in AWS RRS (Reduced Redundancy Storage)?
1. Storing image thumbnails
2. Storing mission-critical data files
3. Access Mostly Uused Products by 50000+ Subscribers
4. Storing infrequently used log files