
AWS Certified Developer - Associate Questions and Answers (Dumps and Practice Questions)



Question : A root owner wants to set up MFA with a smartphone for each of his IAM users. What should he do to achieve this?
 :
1. It is not possible to have MFA with a smartphone
2. Create a policy which will allow each user to setup their smartphone for MFA
3. Access Mostly Uused Products by 50000+ Subscribers
4. The owner needs a setup for each user's smartphone

Correct Answer : Get Latest Questions and Answer :

Explanation: To configure MFA himself, the root owner would need physical access to each IAM user's MFA device (smartphone), which is cumbersome. The better option is to let users configure and manage their own virtual MFA devices. For that, the owner must grant users the permissions to perform the necessary IAM actions.
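For illustration only (not part of the original question set), a self-service MFA policy might look like the Python sketch below; the account ID, Sid and exact action list are assumptions and should be checked against the current IAM documentation.

import json

ACCOUNT_ID = "123456789012"   # hypothetical account ID, for illustration only

mfa_self_service_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowUsersToManageTheirOwnVirtualMFADevice",   # made-up Sid
        "Effect": "Allow",
        "Action": [
            "iam:CreateVirtualMFADevice",
            "iam:EnableMFADevice",
            "iam:ResyncMFADevice",
            "iam:ListMFADevices"
        ],
        "Resource": [
            "arn:aws:iam::%s:mfa/${aws:username}" % ACCOUNT_ID,
            "arn:aws:iam::%s:user/${aws:username}" % ACCOUNT_ID
        ]
    }]
}

# The policy variable ${aws:username} scopes each user to their own MFA device.
print(json.dumps(mfa_self_service_policy, indent=2))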







Question : Which of the below mentioned statements is false with respect to the public DNS of an EC2 instance?
 :
1. It is mapped to the primary network interface.
2. It can be used to access the instance from the internet.
3. Access Mostly Uused Products by 50000+ Subscribers
4. The public DNS cannot be changed by the user.

Correct Answer : Get Latest Questions and Answer :

Explanation: When a user launches an EC2 instance, AWS assigns it a public and a private DNS name. Both the private and public DNS names are mapped to the instance's IP addresses using NAT. An EBS-backed instance, if stopped and started, will receive a new public and private DNS. However, an instance store-backed instance will keep a single public DNS throughout its lifecycle, as it cannot be stopped and started.








Question : Being a HadoopExam.com AWS Developer, you are defining a policy for an IAM user.
The policy does not include the Version element. What will happen now?
 :
1. AWS will take the default "Version":"2008-10-17"
2. AWS will throw an error
3. Access Mostly Uused Products by 50000+ Subscribers
4. AWS will not take any version of the policy


Correct Answer : Get Latest Questions and Answer :

Explanation: The Version element specifies the IAM policy language version. Only the following values are allowed:
• 2012-10-17. This is the current version of the policy language, and you should use this version number for all new policies.
• 2008-10-17. This is an earlier version of the policy language. You might see this version on existing policies. Do not use it for new policies or for existing policies that are being updated.
If a Version element is not included, the value defaults to 2008-10-17.





Related Questions


Question : A unit of "read capacity" represents one strongly consistent read per second or two eventually consistent reads per second.

 :
1. True
2. False
Ans :1
Exp : By definition, one read capacity unit provides one strongly consistent read per second, or two eventually consistent reads per second, for items up to 4 KB in size.



Question : One unit of read capacity is ____ in size?
 :
1. 3 KB
2. 5 KB
3. Access Mostly Uused Products by 50000+ Subscribers
4. 4 KB
Ans : 4
Exp : One unit of read capacity is 4 KB and one unit of write capacity is 1 KB.
Specifying Read and Write Requirements for Tables : DynamoDB is built to support workloads of any scale with predictable, low latency response times.
To ensure high availability and low latency responses, DynamoDB requires that you specify your required read and write throughput values when you create a table. DynamoDB uses this information to reserve sufficient hardware resources and appropriately partitions your data over multiple servers to meet your throughput requirements. As your application data and access requirements change, you can easily increase or decrease your provisioned throughput using the DynamoDB console or the API.
DynamoDB allocates and reserves resources to handle your throughput requirements with sustained low latency and you pay for the hourly reservation of these resources. However, you pay as you grow and you can easily scale up or down your throughput requirements. For example, you might want to populate a new table with a large amount of data from an existing data store. In this case, you could create the table with a large write throughput setting, and after the initial data upload, you could reduce the write throughput and increase the read throughput to meet your application's requirements.
During the table creation, you specify your throughput requirements in terms of the following capacity units. You can also specify these units in an UpdateTable request to increase or decrease the provisioned throughput of an existing table:
Read capacity units - The number of strongly consistent reads per second of items up to 4 KB in size. For example, when you request 10 read capacity units, you are requesting a throughput of 10 strongly consistent reads per second of 4 KB items for that table. For eventually consistent reads, one read capacity unit provides two reads per second for items up to 4 KB. For more information about read consistency, see Data Read and Consistency Considerations.
Write capacity units - The number of 1 KB writes per second. For example, when you request 10 write capacity units, you are requesting a throughput of 10 writes per second of 1 KB items for that table.
DynamoDB uses these capacity units to provision sufficient resources to provide the requested throughput. When deciding the capacity units for your table, you must take the following into consideration:
Item size - DynamoDB allocates resources for your table according to the number of read or write capacity units that you specify. These capacity units are based on a data item size of 4 KB per read or 1 KB per write. For example, if the items in your table are 4 KB or smaller, each item read operation will consume one read capacity unit. If your items are larger than 4 KB, each read operation consumes additional capacity units, in which case you can perform fewer database read operations per second than the number of read capacity units you have provisioned. For example, if you request 10 read capacity units of throughput for a table, but your items are 8 KB in size, then you will get a maximum of 5 strongly consistent reads per second on that table.
Expected read and write request rates - You must also determine the expected number of read and write operations your application will perform against the table, per second. This, along with the estimated item size, helps you to determine the read and write capacity unit values.
Consistency - Read capacity units are based on strongly consistent read operations, which require more effort and consume twice as many database resources as eventually consistent reads. For example, a table that has 10 read capacity units of provisioned throughput would provide either 10 strongly consistent reads per second of 4 KB items, or 20 eventually consistent reads per second of the same items. Whether your application requires strongly or eventually consistent reads is a factor in determining how many read capacity units you need to provision for your table. By default, DynamoDB read operations are eventually consistent. Some of these operations allow you to specify strongly consistent reads.
Local secondary indexes : If you want to create one or more local secondary indexes on a table, you must do so at table creation time. DynamoDB automatically creates and maintains these indexes. Queries against indexes consume provisioned read throughput. If you write to a table, DynamoDB will automatically write data to the indexes when needed, to keep them synchronized with the table. The capacity units consumed by index operations are charged against the table's provisioned throughput. In other words, you only specify provisioned throughput settings for the table, not for each individual index on that table. For more information, see Provisioned Throughput Considerations for Local Secondary Indexes.
These factors help you to determine your application's throughput requirements that you provide when you create a table. You can monitor the performance using CloudWatch metrics, and even configure alarms to notify you in the event you reach certain threshold of consumed capacity units. The DynamoDB console provides several default metrics that you can review to monitor your table performance and adjust the throughput requirements as needed. For more information, go to DynamoDB Console. DynamoDB automatically distributes your data across table partitions, which are stored on multiple servers. For optimal throughput, you should distribute read requests as evenly as possible across these partitions. For example, you might provision a table with 1 million read capacity units per second. If you issue 1 million requests for a single item in the table, all of the read activity will be concentrated on a single partition. However, if you spread your requests across all of the items in the table, DynamoDB can access the table partitions in parallel, and allow you to reach your provisioned throughput goal for the table.
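The rounding rules described above can be captured in a small Python helper. This is only an illustration of the arithmetic, not AWS code; the function names are made up.

import math

def read_capacity_units(item_size_kb, reads_per_second, strongly_consistent=True):
    units_per_read = math.ceil(item_size_kb / 4.0)   # round item size up to 4 KB blocks
    units = units_per_read * reads_per_second
    if not strongly_consistent:
        units = math.ceil(units / 2.0)               # eventually consistent reads cost half
    return int(units)

def write_capacity_units(item_size_kb, writes_per_second):
    return int(math.ceil(item_size_kb / 1.0)) * writes_per_second   # 1 KB blocks per write

print(read_capacity_units(3, 90))    # 90  (3 KB rounds up to one 4 KB block)
print(read_capacity_units(6, 100))   # 200 (6 KB rounds up to two 4 KB blocks)
print(write_capacity_units(1, 5))    # 5

These calls mirror the worked examples in the capacity questions that follow.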


Question : In AWS, which security aspects are the customer's responsibility? Choose answers
A. Life-cycle management of IAM credentials
B. Decommissioning storage devices
C. Security Group and ACL (Access Control List) settings
D. Encryption of EBS (Elastic Block Storage) volumes
E. Controlling physical access to compute resources
F. Patch management on the EC2 instance's operating system
 :
1. A,B,C,F

2. C,D,E,F
3. Access Mostly Uused Products by 50000+ Subscribers
4. B,C,D,E
Ans : Everything inside the instance is your responsibility; Amazon support does not have access to your instance. Amazon AWS Data Protection with New EBS Encryption : Amazon AWS recently announced the encryption feature for newly created EBS volumes. Until now, we needed third-party security tools to encrypt data on EBS volumes. With Amazon EBS encryption, we can now create an encrypted EBS volume and attach it to a supported instance type. Data on the volume, disk I/O, and snapshots created from the volume are then all encrypted.
Introduction : Amazon AWS handles the key management for volume encryption. Each encrypted volume is created with a unique 256-bit key. All subsequent snapshots of the EBS volume, and volumes restored from those snapshots, use the same key. AWS provides this feature at no extra cost.
As of now, the EBS volume encryption feature is available for newer-generation instance types only, because encrypted communication between instances and EBS volumes adds extra overhead and increases packet size, so it requires the latest hardware and faster instance types.
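As a rough boto3 sketch (region, Availability Zone and size are placeholders, not values from the question), an encrypted volume can be created like this; AWS then manages the volume key as described above.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # hypothetical AZ
    Size=100,                        # GiB
    VolumeType="gp2",
    Encrypted=True                   # AWS manages the 256-bit key for this volume
)
print(volume["VolumeId"])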



Question :

If your table item's size is 3KB and you want to have 90 strongly consistent reads per second,
how many read capacity units will you need to provision on the table?


 :
1. 45
2. 90
3. Access Mostly Uused Products by 50000+ Subscribers
4. 10

Ans : 2
Exp : 90 (reads per second) x 1 (3 KB / 4 KB, rounded up to the nearest whole unit) = 90. A read capacity unit covers an item of up to 4 KB. To calculate the required throughput, take the item size, divide it by 4 KB (the size of one read capacity unit), round up to the nearest whole number, and multiply by the number of strongly consistent reads per second needed (90).


Question : Your items are 6KB in size and you want to have 100 strongly consistent reads per second. How many read capacity units do you need to provision?

 :
1. 100
2. 80
3. Access Mostly Uused Products by 50000+ Subscribers
4. 50
Ans : 3
Exp : 100 (reads per second) x 2 (6KB / 4KB = 1.5, rounded up to 2) = 200 read throughput capacity units. A unit of read capacity is 4KB in size. To calculate the number of required capacity units, take the item size (6KB), divide it by the size of a single unit of read throughput capacity (4KB), round up, and multiply that by the number of needed reads per second.




Question : If you have an item that is 4KB in size and you want to provision read capacity units for 100
requests per second, how many read capacity units do you need to provision?

 :
1. 100

2. 50
3. Access Mostly Uused Products by 50000+ Subscribers
4. 90
Ans : 1
Exp : 100 x (4/4) = 100


Question : You have items in your table that are 12KB in size and you want to have 10 strongly consistent
reads per second. How many read capacity units would you need to provision?


 :
1. 300
2. 30
3. Access Mostly Uused Products by 50000+ Subscribers

4. 1
Ans : 2
Exp : 10 x (12/4) = 30


Question :

You want 5 strongly consistent 1KB writes per second. How many units of throughput capacity do you need to provision

 :
1. 5
2. 10
3. Access Mostly Uused Products by 50000+ Subscribers
4. 9
Ans : 1
Exp : Writes are always strongly consistent. The throughput needed for 5 strongly consistent writes per second of 1KB items is 5 x 1 = 5 write capacity units.



Question : You have created a mobile application that relies on reading data from DynamoDB.
How could you give each mobile device permissions to read from DynamoDB?

 :
1. Add the username and password into the app code
2. Connect to an EC2 instance which will pull the data from DynamoDB securely
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create an IAM user
Ans : 3
Exp : It is bad practice to store any API credentials in a mobile application. Each mobile device should have its own permissions and access credentials for DynamoDB. To facilitate this, you can integrate federated user (Facebook, Google, Twitter, Amazon, etc.) credentials with IAM. After authenticating as a federated user, the user/app can then assume an IAM role with the proper read/write permissions to DynamoDB.
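A server-side sketch of this federated-access pattern using boto3 and STS is shown below; the role ARN and session name are made-up examples, and a real mobile app would typically obtain the token and credentials through the provider's or AWS's mobile SDKs rather than calling STS directly.

import boto3

sts = boto3.client("sts")

# Token obtained from the identity provider (e.g. a Facebook/Google/Amazon login);
# placeholder value, supplied in practice by the provider's SDK.
web_identity_token = "..."

creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/MobileDynamoDBReadRole",  # hypothetical role
    RoleSessionName="mobile-user-session",
    WebIdentityToken=web_identity_token
)["Credentials"]

# Temporary credentials scoped to the role's DynamoDB read permissions.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)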




Question : What type of block cipher does Amazon S3 offer for server side encryption?
 :
1. Triple DES
2. Advanced Encryption Standard
3. Access Mostly Uused Products by 50000+ Subscribers
4. RC5
Ans : 2
Exp : Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.

Amazon S3 supports bucket policies that you can use if you require server-side encryption for all objects that are stored in your bucket. For example, the following bucket policy denies upload object (s3:PutObject) permission to everyone if the request does not include the x-amz-server-side-encryption header requesting server-side encryption.
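As an illustration of such a bucket policy, here is a hedged boto3/Python sketch; the bucket name reuses the "hadoopexam" example from a later question, and the Sid is made up.

import json
import boto3

bucket = "hadoopexam"   # example bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedObjectUploads",   # made-up statement ID
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::%s/*" % bucket,
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        }
    }]
}

# Uploads without the x-amz-server-side-encryption header are denied once this is applied.
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))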



Question : Which API call would you use to query an item by its primary hash key?


 :
1. scan
2. query
3. Access Mostly Uused Products by 50000+ Subscribers
4. PutItem
Ans : 3
Exp : The GetItem operation returns a set of attributes for the item that matches the primary key. If there is no matching item, GetItem does not return any data. Common API calls can be found in the AWS Certified Developer notes as part of the course module. The link will take you to the AWS API documentation; it is suggested that you become familiar with the most common DynamoDB API calls.
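A minimal boto3 GetItem sketch follows; the table name "Stocks" and the hash key "Ticker" are invented for illustration.

import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.get_item(
    TableName="Stocks",                 # hypothetical table
    Key={"Ticker": {"S": "IBM"}},       # hypothetical hash key attribute
    ConsistentRead=True                 # request a strongly consistent read
)
item = response.get("Item")             # None when no item matches the key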



Question : Each AWS account can own how many buckets?

 :
1. 1000
2. 50
3. Access Mostly Uused Products by 50000+ Subscribers
4. 10
Ans : 3
Exp : AWS accounts are limited in the total number of buckets they can own. Since S3 uses a global namespace, the
limit is per account and not per region. The limit cannot be increased upon request to AWS.



Question : You're using CloudFormation templates to build out staging environments.
What section of the CloudFormation template would you edit in order to allow the user to specify the PEM key name at start time?

 :
1. Resources Section
2. Declaration Section
3. Access Mostly Uused Products by 50000+ Subscribers
4. Parameters Section

Ans : 4
Exp : The Parameters section in CloudFormation allows you to accept user input when starting the CloudFormation template, and to reference that input as a variable throughout the template. Other examples might include asking the user starting the template to provide domain admin passwords, instance size, PEM key, region, and other dynamic options.
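A minimal sketch of such a template fragment, written here as a Python dict mirroring the JSON template structure; the AMI ID and resource name are placeholders.

template_fragment = {
    "Parameters": {
        "KeyName": {
            "Type": "AWS::EC2::KeyPair::KeyName",
            "Description": "Name of an existing EC2 key pair to enable SSH access"
        }
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",        # hypothetical AMI ID
                "InstanceType": "t1.micro",
                "KeyName": {"Ref": "KeyName"}     # references the user-supplied parameter
            }
        }
    }
}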



Question : What is the default limit for CloudFormation templates per region
 :
1. 20
2. 10
3. Access Mostly Uused Products by 50000+ Subscribers
4. 40
Ans : 1
Exp : AWS Accounts are limited to 20 running CloudFormation templates PER REGION. If additional capacity is required you can contact AWS through a limit increase form and have the 20 stack limit increased.



Question : What would you set in your CloudFormation template to fire up different instance sizes based off of environment type?
i.e. (If this is for prod then use m1.large instead of t1.micro)

 :
1. mappings
2. resources
3. Access Mostly Uused Products by 50000+ Subscribers
4. conditions


Question :

Which of the following statements about SQS is true?


 :
1. Messages will be delivered exactly once and messages will be delivered in First in, First out order
2. Messages will be delivered exactly once and message delivery order is indeterminate
3. Access Mostly Uused Products by 50000+ Subscribers
4. Messages will be delivered one or more times and message delivery order is indeterminate
Ans : 4
Exp : Amazon SQS is engineered to provide "at least once" delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.
Amazon SQS does not guarantee FIFO access to messages in Amazon SQS queues, mainly because of the distributed nature of the Amazon SQS. If you require specific message ordering, you should design your application to handle it.





Question : You have designed an Android application which reads messages from an Amazon SQS queue.
However, you observe that you keep getting the same message again and again in your application.
What could be the main reason for this behaviour?

 :
1. Because Amazon SQS keeps 4 copies of each message in the queue by default to avoid message loss, you have to change the configuration so it will send only one copy
2. It is not possible, there is some problem in your Android application.
3. Access Mostly Uused Products by 50000+ Subscribers
4. When Amazon SQS returns a message it transmits 4 copies by default; as soon as you receive the first copy, your application should ignore the other copies.
Ans : 3

Exp : When Amazon SQS returns a message to you, that message stays in the queue, whether or not you actually received the message. You are responsible for deleting the message; the delete request acknowledges that you’re done processing the message. If you don’t delete the message, Amazon SQS will deliver it again on another receive request.
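A minimal receive-then-delete loop in boto3 might look like the following; the queue URL is a placeholder and process() is stand-in application logic.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/Stock"   # placeholder URL

def process(body):
    print("processing:", body)    # stand-in for real application logic

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    process(message["Body"])
    # Deleting acknowledges that processing finished; if this call is skipped,
    # SQS will deliver the message again once the visibility timeout expires.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])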




Question : You have written a Java application which reads daily stock prices from an Amazon SQS queue,
and you have also implemented delete-message functionality in your application to delete messages from the Amazon SQS queue.
You receive the IBM stock price from the queue, and your application then deletes this message from the queue.
Can you get the same message again from the queue after deletion?
 :
1. No, Never
2. Yes, always possible.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Yes, under rare circumstances you might receive a previously deleted message again, not always.

Ans : 4
Exp :

Yes, under rare circumstances you might receive a previously deleted message again. This can occur in the rare situation in which a DeleteMessage operation doesn’t delete all copies of a message because one of the servers in the distributed Amazon SQS system isn’t available at the time of the deletion. That message copy can then be delivered again. You should design your application so that no errors or inconsistencies occur if you receive a deleted message again.




Question :

In your Python application, you have issued a command to delete a message from an SQS queue and received a
"SUCCESS" response. For safety, you issue the same delete command again. What SQS response will you receive?




 :
1. No Message Found Exception
2. FAILED
3. Access Mostly Uused Products by 50000+ Subscribers
4. No response
Ans : 3
Exp : If you issue a DeleteMessage request on a previously deleted message SQS will return "success" response.




Question :

Can messages be shared between queues in different regions?




 :
1. No, Amazon SQS in each region is totally independent in message stores and queue names.
2. Yes, Amazon SQS queues in each region can be shared if the queue names are the same.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Yes, but you have to pay the fee as many times as the number of regions you use to share the message.
Ans : 1
Exp : No – Amazon SQS in each region is totally independent in message stores and queue names.




Question :

As a developer, you have created an Amazon SQS queue which you want to share with another team as well.
Which of the following is the correct way to share the queue?



 :
1. A developer associates an access policy statement (specifying the permissions being granted) with the queue to be shared.
2. You will use APIs to create and manage the access policy statements: AddPermission, RemovePermission, SetQueueAttributes and GetQueueAttributes
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1 and 2 are correct
5. All 1,2 and 3 are correct
Ans : 5
Exp : A developer associates an access policy statement (specifying the permissions being granted) with the queue to be shared. Amazon SQS provides APIs to create and manage the access policy statements: AddPermission, RemovePermission, SetQueueAttributes and GetQueueAttributes. Refer to the latest API specification for more details.
With the WSDL 2009 and later APIs, a developer can set an access policy that allows anonymous users access to a queue.
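A hedged boto3 sketch of granting another account access with AddPermission; the queue URL, label, and account ID are placeholders.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/Stock"   # placeholder URL

sqs.add_permission(
    QueueUrl=queue_url,
    Label="ShareWithOtherTeam",          # label names this policy statement
    AWSAccountIds=["210987654321"],      # hypothetical account ID of the other team
    Actions=["SendMessage", "ReceiveMessage"]
)
# sqs.remove_permission(QueueUrl=queue_url, Label="ShareWithOtherTeam") undoes the grant.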




Question :

You have designed a STOCK queue (SQS) which delivers the current and historical stock prices stored in it.
You wish to retrieve a 7-day-old stock price from the queue; you have retried many times but you are not getting it.
What could be the correct reason for this?


 :
1. Once a message is consumed, it cannot be retrieved again.
2. The message retention limit is 4 days by default; after that the message will be automatically deleted.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Both 1 and 2 are correct
Ans : 2
Exp : The SQS message retention period is configurable and can be set anywhere from 1 minute to 2 weeks. The default is 4 days and once the message retention limit is reached your messages will be automatically deleted. The option for longer message retention provides greater flexibility to allow for longer intervals between message production and consumption.
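For example, the retention period could be raised to the 14-day maximum with a boto3 call like the following (the queue URL is a placeholder).

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/Stock"   # placeholder URL

# Keep messages for 14 days (1209600 seconds) instead of the 4-day default.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MessageRetentionPeriod": "1209600"}
)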





Question :

Select the correct statement from below, for Amazon SQS queue


 :
1. A single queue may contain an unlimited number of messages, and you can create an unlimited number of queues.
2. A single queue may contain 65536 number of messages, and you can create 65536 number of queues.
3. Access Mostly Uused Products by 50000+ Subscribers
4. A single queue may contain 65536 number of messages, and you can create 1024 number of queues.

Ans : 1
Exp : A single queue may contain an unlimited number of messages, and you can create an unlimited number of queues.



Question : You are designing an SQS queue named "Stock". How do you secure the messages stored in the Stock queue
so that unauthorized users cannot access them? Select the correct answer from below.


 :
1. Amazon SQS uses proven cryptographic methods to authenticate your identity, through the use of your Access Key ID and request signature
2. Amazon SQS uses proven cryptographic methods to authenticate your identity, through the use of an X.509 certificate
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1 and 2
5. All 1,2 and 3 are correct
Ans : 4
Exp : Authentication mechanisms are provided to ensure that messages stored in Amazon SQS queues are secured against unauthorized access. Only the AWS account owners can access the queues they create.

Amazon SQS uses proven cryptographic methods to authenticate your identity, either through the use of your Access Key ID and request signature, or through the use of an X.509 certificate. For the details of how to use either of these authentication mechanisms with Amazon SQS





Question : You have designed an Amazon SQS queue, let's say named "Stock". Now, there are multiple consumers
of this queue, such as an Android app, a Java desktop app and a web service (reader apps). Is it possible that all the reader applications
can fetch the same message to get the stock price for the same ticker?


 :
1. No, once the message is consumed by one of the readers it is not available to others.
2. Yes, if you have configured the visibility timeout for the queue.
3. Access Mostly Uused Products by 50000+ Subscribers
Ans : 2
Exp : Every Amazon SQS queue has a configurable visibility timeout. For the designated amount of time after a message is read from a queue, it will not be visible to any other reader. As long as the amount of time that it takes to process the message is less than the visibility timeout, every message will be processed and deleted. In the event that the component processing the message fails or becomes unavailable, the message will again become visible to any component reading the queue once the visibility timeout ends. This allows you to have many components all reading messages from the same queue, with each working to process different messages.
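A short boto3 sketch of setting the visibility timeout when creating the queue; the 60-second value is just an example.

import boto3

sqs = boto3.client("sqs")

# Create the queue with a 60-second visibility timeout (example value).
queue_url = sqs.create_queue(
    QueueName="Stock",
    Attributes={"VisibilityTimeout": "60"}
)["QueueUrl"]

# The timeout can also be extended for a single in-flight message, e.g.:
# sqs.change_message_visibility(QueueUrl=queue_url,
#                               ReceiptHandle=receipt_handle,
#                               VisibilityTimeout=120)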


Question : What is the only "required" CloudFormation section in a template? This section is also where you specify what
AWS services are used by the template.


 :
1. resources
2. output
3. Access Mostly Uused Products by 50000+ Subscribers
4. properties
Ans : 1
Exp : The CloudFormation service is designed to launch and deploy AWS resources. Thus the only required section is the Resources section, which defines what AWS resources are to be launched during stack creation.



Question : By default, what event occurs if your CloudFormation receives an error during creation?


 :
1. DELETE_IN_PROGRESS
2. ROLLBACK_IN_PROGRESS
3. Access Mostly Uused Products by 50000+ Subscribers
4. DELETE_COMPLETE

Ans : 2
Exp : If you see that the stack has a status of ROLLBACK_IN_PROGRESS in the AWS CloudFormation console, the sample application failed to complete its creation and the stack is being rolled back. You can retry the sample application by going through the above steps again.


Question : Your app is using SQS to create distributed applications. Your messages need to contain more
information than the 256KB SQS message size limit allows. How could you solve this problem?


 :
1. Store the information in S3 and attach retrieval information to the message for the application to process
2. Use DynamoDB instead of SQS
3. Access Mostly Uused Products by 50000+ Subscribers
4. Contact Amazon and request an increase to the message size for your account
Ans : 1
Exp : SQS messages can contain up to 256KB of data. This data can include any information needed. In order to work around the limit issue the message can contain information on how to access the larger dataset from another service such as S3 or DynamoDB.
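A rough sketch of this pointer pattern in boto3; the bucket and queue names are placeholders.

import json
import uuid
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

bucket = "hadoopexam-large-payloads"   # hypothetical bucket
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/Stock"   # placeholder URL

def send_large_message(payload_bytes):
    # Store the full payload in S3 under a unique key.
    key = "payloads/%s" % uuid.uuid4()
    s3.put_object(Bucket=bucket, Key=key, Body=payload_bytes)
    # The SQS message only carries a pointer, staying well under the size limit.
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody=json.dumps({"bucket": bucket, "key": key}))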




Question : Setting the VisibilityTimeout = 0 has what effect on your message?


 :
1. Removes the message immediately upon receipt
2. Makes the message immediately available
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above
Ans : 2
Exp : VisibilityTimeout defines how long a message is INVISIBLE to other workers after being accessed by a worker. It is invisible so the worker who retrieved the message has the opportunity to process the message and remove it from the queue. If the worker does not successfully process the message, the VisibilityTimeout expires and the message is again available to be accessed by another worker. This ensures that if part of your application fails, the message is not lost.




Question : The default visibility timeout for a queue is ____ seconds.


 :
1. 10
2. 14
3. Access Mostly Uused Products by 50000+ Subscribers
4. 30
Ans : 4
Exp : The default visibility timeout for an SQS queue is 30 seconds.


Question : Your EC2 component receives a message from a message queue. The message then becomes invisible for the duration of the visibility timeout.
What API request must be called in order for the VisibilityTimeout not to make the message visible again?


 :
1. VisibilityTimeout
2. ReceiveMessage
3. Access Mostly Uused Products by 50000+ Subscribers
4. DeleteMessage
Ans : 4
Exp : The message will become visible again if the worker instance that is processing the data in the message does not delete the message after it has been successfully processed. This allows another worker to process the message again if the original worker fails to process it.




Question : An SWF workflow or task execution can live up to ______ long?

 :
1. 30 Days
2. 1 Year
3. Access Mostly Uused Products by 50000+ Subscribers
4. 30 Mins
Ans : 2
Exp : SQS messages live up to 14 days, BUT an SWF workflow or task execution can last up to 1 year.


Question : A corporate web application is deployed within an Amazon VPC, and is connected to the corporate data center via IPSec VPN.
The application must authenticate against the on-premise LDAP server. Once authenticated,
logged-in users can only access an S3 keyspace specific to the user.

1. The application authenticates against LDAP. The application then calls the IAM Security Service to login to IAM using the LDAP credentials.
The application can use the IAM temporary credentials to access the appropriate S3 bucket.

2. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user.
The application then calls the IAM Security Token Service to assume that IAM Role.

3. Access Mostly Uused Products by 50000+ Subscribers
and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to
get IAM federated user credentials with access to the appropriate

4. Develop an identity broker which authenticates against IAM Security Token Service to assume an IAM Role to get
temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the app




 :
1. 1,2
2. 2,3
3. Access Mostly Uused Products by 50000+ Subscribers
4. 4,1
Ans : 2
Exp : Identity Federation
Today we are enabling Identity Federation with IAM. This new capability allows existing identities (e.g. users) in your enterprise to access AWS APIs and resources using IAM’s fine-grained access controls, without the need to create an IAM user for each identity.

Applications can now request temporary security credentials that can be used to sign requests to AWS. The temporary security credentials are comprised of short lived (1-36 hour) access keys and session tokens associated with the keys. Your enterprise users (or, to be a bit more precise, the AWS-powered applications that they run) can use the access keys the same way as before, as long as they pass the token along in the calls that they make to the AWS APIs. The permissions associated with temporary security credentials are at most equal to those of the IAM user who issued them; you can further restrict them by specifying explicit permissions as part of the request to create them. Moreover, there is no limit on the number of temporary security credentials that can be issued.
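A minimal boto3 sketch of a broker requesting temporary credentials via AssumeRole; the role ARN and session name are placeholders, and the LDAP authentication step is assumed to have already happened.

import boto3

sts = boto3.client("sts")

# The broker authenticates the user against LDAP first (not shown), then
# requests short-lived credentials scoped to that user's S3 keyspace.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/CorpAppS3Access",   # hypothetical role
    RoleSessionName="ldap-user-jdoe",                           # hypothetical session name
    DurationSeconds=3600
)["Credentials"]

# creds now holds AccessKeyId, SecretAccessKey and SessionToken for the application to use.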



Question : You are working at HadoopExam Inc, and the name of the S3 bucket you have created is "hadoopexam". Now you have a folder
containing almost a thousand files. How can you upload this whole folder to the S3 bucket "hadoopexam"?
 :
1. Not supported to upload the complete folder.
2. The only option is to load the files one by one
3. Access Mostly Uused Products by 50000+ Subscribers
4. Files can always be uploaded only as a complete folder
Ans : 3
Exp : 1. In the Amazon S3 console, click the name of the bucket where you want to upload an object and then click Upload.

2.In the Upload - Select Files wizard, if you want to upload an entire folder, you must click Enable Enhanced Uploader to install the necessary Java applet. You only need to do this once per console session.

Note
If you are behind a firewall you will need to install your organization's supported proxy client in order for the Java applet to work.

3. Access Mostly Uused Products by 50000+ Subscribers
A file selection dialog box opens:
If you enabled the advanced uploader in step 2, you see a Java dialog box titled Select files and folders to upload.
If not, you see the File Upload dialog box associated with your operating system.

4.Select the file that you want to upload and then click Open.

5.Click Start Upload.
You can watch the progress of the upload from within the Transfer panel.
Tip
To hide the Transfer dialog box, click the Close button at top right in the Transfers panel. To open it again, click Transfers.
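Outside the console, the same result can be achieved programmatically; here is a hedged boto3 sketch that walks a local folder and uploads each file (the folder path and key layout are examples). The AWS CLI command "aws s3 cp /data/reports s3://hadoopexam/ --recursive" achieves the same thing.

import os
import boto3

s3 = boto3.client("s3")
bucket = "hadoopexam"
local_folder = "/data/reports"          # hypothetical local folder

for root, _dirs, files in os.walk(local_folder):
    for name in files:
        path = os.path.join(root, name)
        # Use the path relative to the folder as the object key.
        key = os.path.relpath(path, local_folder).replace(os.sep, "/")
        s3.upload_file(path, bucket, key)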




Question : By default, your Amazon S3 buckets and objects are ______

 :
1. public
2. private
3. Access Mostly Uused Products by 50000+ Subscribers
Ans : 2
Exp : By default, your Amazon S3 buckets and objects are private. To make an object viewable by using a URL, for example, https://s3.amazonaws.com/Bucket/Object, you must make the object publicly readable. Otherwise, you will need to create a signed URL that includes a signature with authentication information.
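A short boto3 sketch of generating such a signed URL; the bucket, key, and expiry are example values.

import boto3

s3 = boto3.client("s3")

# Generates a time-limited URL for a private object (names are examples).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "hadoopexam", "Key": "reports/q1.pdf"},
    ExpiresIn=3600   # seconds
)
print(url)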



Question : Select the correct statement which applies to "AWS Security: The Shared Responsibility Model"

 :
1. Amazon is completely responsible for the security of its infrastructure
2. Anything you put on the AWS infrastructure is your responsibility to secure
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1 and 2
5. 2 and 3

Ans : 5
Exp : In their AWS Data Security Center, Amazon claims that "The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today[5]." AWS Data Security conforms to best security practices and compliance standards:

•Electronic surveillance and multi-factor access control systems in AWS datacenters.
•Trained security personnel who authorize access to datacenters on a least privileged basis.
•Environmental systems designed to minimize the impact of disruptions to operations.
•Availability zones that enable you to operate in the face of natural disasters.
•An extensive network of security monitoring systems, providing (DDoS) protection and password brute-force detection.
•Many more features like built-in firewalls, private subnets, etc.
•Compliance with different security regulations like: SOC 1/SSAE 16/ISAE 3402, FISMA, DIACAP, FedRAMP, and other standards[6]
Amazon is responsible for the security of its infrastructure and they do a great job at it.

However, they clearly state on their website that anything you put on the AWS infrastructure is your responsibility to secure.

This is the shared responsibility model: they provide the infrastructure and secure it, you use the infrastructure and must secure that.

AWS Data Security: your part
It is your responsibility to secure your EC2 instances as well as anything you install on them. This is a lot of responsibility, but it is actually in your best interest. Because you control the security of your accounts and data, you can ensure that you are as safe in the cloud as you were in a physical data center. Perhaps even more important, you can ensure that you still own your data – even though you are housing it in public infrastructure.

Make sure you consider these factors as part of your part of the AWS Data Security Shared Responsibility Model:

•Updates and Patches: Make sure all software and operating systems on the instances you run in EC2, are updated regularly to eliminate security loopholes.
•Limit access to the root account: Instead of giving access to the root account, you can create groups with access to different AWS resources.
•Encrypt: Encrypt data at rest and in transit using the industry’s strongest encryption algorithms.
•Manage encryption keys: This is perhaps the most important aspect of your AWS Data Security Responsibility, so it gets its own section.






Question : Which of the following should you consider as your part of the AWS Data Security Shared Responsibility Model?


 :
1. Updates and Patches
2. Limit access to the root account
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1 and 2
5. 1,2 and 3

Ans : 4
Exp : In their AWS Data Security Center, Amazon claims that "The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today[5]." AWS Data Security conforms to best security practices and compliance standards:

•Electronic surveillance and multi-factor access control systems in AWS datacenters.
•Trained security personnel who authorize access to datacenters on a least privileged basis.
•Environmental systems designed to minimize the impact of disruptions to operations.
•Availability zones that enable you to operate in the face of natural disasters.
•An extensive network of security monitoring systems, providing (DDoS) protection and password brute-force detection.
•Many more features like built-in firewalls, private subnets, etc.
•Compliance with different security regulations like: SOC 1/SSAE 16/ISAE 3402, FISMA, DIACAP, FedRAMP, and other standards[6]
Amazon is responsible for the security of its infrastructure and they do a great job at it.

However, they clearly state on their website that anything you put on the AWS infrastructure is your responsibility to secure.

This is the shared responsibility model: they provide the infrastructure and secure it, you use the infrastructure and must secure that.

AWS Data Security: your part
It is your responsibility to secure your EC2 instances as well as anything you install on them. This is a lot of responsibility, but it is actually in your best interest. Because you control the security of your accounts and data, you can ensure that you are as safe in the cloud as you were in a physical data center. Perhaps even more important, you can ensure that you still own your data – even though you are housing it in public infrastructure.

Make sure you consider these factors as part of your part of the AWS Data Security Shared Responsibility Model:

•Updates and Patches: Make sure all software and operating systems on the instances you run in EC2, are updated regularly to eliminate security loopholes.
•Limit access to the root account: Instead of giving access to the root account, you can create groups with access to different AWS resources.
•Encrypt: Encrypt data at rest and in transit using the industry’s strongest encryption algorithms.
•Manage encryption keys: This is perhaps the most important aspect of your AWS Data Security Responsibility, so it gets its own section.





Question : Which of the following are the benefits of "Encryption Key Management for AWS Data Security"?
 :
1. No one (not even AWS) can access your data
2. You maintain compliance with regulations like HIPAA or PCI
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1 and 2
5. 1,2 and 3
Ans : 5
Exp : Amazon has done their part to protect their infrastructure and you have done your part to protect everything you have put on the infrastructure. The last (and, really, an ongoing) step is to properly manage your encryption keys.

By managing them yourself (not through a third party and not even through AWS), you retain control of your data. That is the most significant step you can take to ensure your ownership of your data.

For key management, we recommend split-key encryption and homomorphic key management: techniques which allow you to control your encryption keys so that even in the event of a breach:

1.No one (not even AWS) can access your data
2.You maintain compliance with regulations like HIPAA or PCI
3. Access Mostly Uused Products by 50000+ Subscribers






Question : The AWS platform is compliant with which of the following certifications?
 :
1. HIPAA
2. FISMA
3. Access Mostly Uused Products by 50000+ Subscribers
4. 2 and 3
5. 1,2 and 3
Ans : 4
Exp : AWS is compliant with various certifications and third-party attestations. These include:
• SAS70 Type II. This report includes detailed controls AWS operates along with an independent auditor opinion about the effective operation of those controls.
• PCI DSS Level 1. AWS has been independently validated to comply with the PCI Data Security Standard as a shared host service provider.
• ISO 27001. AWS has achieved ISO 27001 certification of the Information Security Management System (ISMS) covering infrastructure, data centers, and services.
• FISMA. AWS enables government agency customers to achieve and sustain compliance with the Federal Information Security Management Act (FISMA). AWS has been awarded an approval to operate at the FISMA-Low level. It has also completed the control implementation and successfully passed the independent security testing and evaluation required to operate at the FISMA-Moderate level. AWS is currently pursuing an approval to operate at the FISMA-Moderate level from government agencies.


Additionally, customers have built healthcare applications compliant with HIPAA’s Security and Privacy Rules on AWS.
Further information about these certifications and third-party attestations is available in the Risk and Compliance whitepaper available on the website: http://aws.amazon.com/security.


Question : ____________, i.e. the ability to deal with load variations by adding more resources during high load or
consolidating the tenants to fewer nodes when the load decreases, all in a live system
without service disruption, is therefore critical for these systems

 :
1. Elasticity
2. Vertical Scalability
3. Access Mostly Uused Products by 50000+ Subscribers
4. All of the above
Ans : 1
Exp : Scalability is a desirable property of a system, which indicates its ability to either handle growing amounts of work in a graceful manner or to improve throughput when additional resources (typically hardware) are added. A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system. Similarly, an algorithm is said to scale if it is suitably efficient and practical when applied to large situations (e.g. a large input data set or a large number of participating nodes in the case of a distributed system). If the algorithm fails to perform when the resources increase then it does not scale.

There are typically two ways in which a system can scale by adding hardware resources. The first approach is when the system scales vertically and is referred to as
scale-up. To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of processors or memory to a single computer. Such vertical scaling of existing systems also enables them to use virtualization technology more effectively, as it provides more resources for the hosted set of operating system and application modules to share.

The other approach of scaling a system is by adding hardware resources horizontally referred to as scale-out. To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application. An example might be scaling out from one web-server system to a system with three webservers.

Elasticity in the Cloud : One of the major factors for the success of the cloud as an IT infrastructure is its pay per use pricing model and elasticity. For a DBMS deployed on a pay-per-use cloud infrastructure, an added goal is to optimize the system’s operating cost. Elasticity, i.e. the ability to deal with load variations by adding more resources during high load or consolidating the tenants to fewer nodes when the load decreases, all in a live system without service disruption, is therefore critical for these systems

Even though elasticity is often associated with the scale of the system, a subtle difference exists between elasticity and scalability when used to express a system’s behavior. Scalability is a static property of the system that specifies its behavior on a static configuration. For instance, a system design might scale to hundreds or even to thousands of nodes. On the other hand, elasticity is dynamic property that allows the system’s scale to be increased on-demand while the system is operational. For instance, a system design is elastic if it can scale from 10 servers to 20 servers (or vice-versa) on-demand. A system can have any combination of these two properties.

Elasticity is a desirable and important property of large scale systems. For a system deployed on a pay-per-use cloud service, such as the Infrastructure as a Service (IaaS) abstraction, elasticity is critical to minimize operating cost while ensuring good performance during high loads. It allows consolidation of the system to consume fewer resources, and thus minimize the operating cost, during periods of low load, while allowing it to dynamically scale up its size as the load increases. On the other hand, enterprise infrastructures are often statically provisioned. Elasticity is also desirable in such scenarios where it allows for realizing energy efficiency. Even though the infrastructure is statically provisioned, significant savings can be achieved by consolidating the system in a way that some servers can be powered down, reducing the power usage and cooling costs. This, however, is an open research topic in its own merit, since powering down
random servers does not necessarily reduce energy usage. Careful planning is needed to select servers to power down such that entire racks and alleys in a data-center are powered down so that significant savings in cooling can be achieved. One must also consider the impact of powering down on availability. For instance, consolidating the system to a set of servers all within a single point of failure (for instance a switch or a power supply unit) can result in an entire service outage resulting from a single failure. Furthermore, bringing up powered down servers is more expensive, so the penalty for a miss-predicted power down operation is higher.




Question : Which of the following is possible in case of Elasticity


 :
1. In the elastic environment, the available resources match the "current demands" as closely as possible.
2. Elasticity adapts to both the "workload increase" as well as "workload decrease" by "provisioning and deprovisioning" resources in an "autonomic" manner.
3. Access Mostly Uused Products by 50000+ Subscribers
4. All of the above


Ans : 4
Exp : Scalability: In a scaling environment, the available resources may exceed to meet the "future demands".
Elasticity: In the elastic environment, the available resources match the "current demands" as closely as possible.

Scalability: Scalability adapts only to the "workload increase" by "provisioning" the resources in an "incremental" manner.
Elasticity: Elasticity adapts to both the "workload increase" as well as "workload decrease" by "provisioning and deprovisioning" resources in an "autonomic" manner.

Scalability: Increasing workload is served with increasing the power of a single computer resource or with increasing the power by a group of computer resources.
Elasticity: Varying workload is served with dynamic variations in the use of computer resources.

Scalability: Scalability enables a corporate to meet expected demands for services with "long-term, strategic needs".
Elasticity: Elasticity enables a corporate to meet unexpected changes in the demand for services with "short-term, tactical needs".

Scalability: It is "increasing" the capacity to serve an environment where workload is increasing.
This scalability could be "Scaling Up" or "Scaling Out".
(Example: Scaling Up - increasing the ability of an individual server; Scaling Out - increasing the ability by adding multiple servers alongside the individual server.)
Elasticity: It is the ability to "scale up or scale down" the capacity to serve at will.

Scalability: To use a simile, "scaling up" is an individual increasing her power to meet the increasing demands, and "scaling out" is building a team to meet the increasing demands.
Elasticity: To use a simile, a film actor increasing or reducing her body weight to meet differing needs of the film industry.





Question : Select the correct statement which applies to AAA (Authentication, Authorization and Accounting)?
 :
1. AAA are a set of primary concepts that aid in understanding computer and network security as well as access control
2. AAA concepts are used to protect property, data, and systems from intentional or even unintentional damage
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1 and 3
5. 1,2 and 3

Ans : 5
Exp : AAA stands for Authentication, Authorization and Accounting. AAA are a set of primary concepts that aid in understanding computer and network security as well as access control. These concepts are used daily to protect property, data, and systems from intentional or even unintentional damage. AAA is used to support the Confidentiality, Integrity, and Availability (CIA) security concept.


Question : Please map the following

1. Confidentiality
2. Integrity
3. Access Mostly Uused Products by 50000+ Subscribers

A. The term means that the data you need should always be available to you.
B. The term means a secret should stay secret.
C. The term means that the data being worked with is the correct data, which has not been tampered with or altered.

 :
1. 1-A,2-B, 3-C
2. 1-C,2-B, 3-A
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1-A,2-C, 3-B
Ans : 3
Exp : Confidentiality: The term confidentiality means that the data which is confidential should remain confidential. In other words, confidentiality means secret should stay secret.

Integrity: The term integrity means that the data being worked with is the correct data, which is not tampered or altered.

Availability: The term availability means that the data you need should always be available to you.

Authentication provides a way of identifying a user, typically requiring a Userid/Password combo before granting a session. Authentication process controls access by requiring valid user credentials. After the Authentication process is completed successfully, a user must be given authorization (permission) for carrying out tasks within the server. Authorization is the process that determines whether the user has the authority to carry out a specific task. Authorization controls access to the resources after the user has been authenticated. The last one is Accounting. Accounting keeps track of the activities the user has performed in the server.

Confidentiality, integrity, and availability (CIA) is a model designed to guide policies for information security within an organization. In this context, confidentiality is a set of rules that limits access to information, integrity is the assurance that the information is trustworthy and accurate, and availability is a guarantee of ready access to the information by authorized people. The model is sometimes known as the CIA triad.

Confidentiality prevents sensitive information from reaching the wrong people, while making sure that the right people can in fact get it. A good example is an account number or routing number when banking online. Data encryption is a common method of ensuring confidentiality. User IDs and passwords constitute a standard procedure; two-factor authentication is becoming the norm and biometric verification is an option as well. In addition, users can take precautions to minimize the number of places where the information appears, and the number of times it is actually transmitted to complete a required transaction.

Integrity involves maintaining the consistency, accuracy, and trustworthiness of data over its entire life cycle. Data must not be changed in transit, and steps must be taken to ensure that data cannot be altered by unauthorized people (for example, in a breach of confidentiality). In addition, some means must be in place to detect any changes in data that might occur as a result of non-human-caused events such as an electromagnetic pulse (EMP) or server crash. If an unexpected change occurs, a backup copy must be available to restore the affected data to its correct state.

Availability is best ensured by rigorously maintaining all hardware, performing hardware repairs immediately when needed, providing a certain measure of redundancy and failover, providing adequate communications bandwidth and preventing the occurrence of bottlenecks, implementing emergency backup power systems, keeping current with all necessary system upgrades, and guarding against malicious actions such as denial-of-service (DoS) attacks.




Question : Which of the following would be a valid reason why an Amazon EBS-backed instance might immediately terminate?

 :
1. You've reached your volume limit. For information about the volume limit, and to submit a request to increase your volume limit, see Request to Increase the Amazon EBS Volume Limit.
2. The AMI is missing a required part.
3. Access Mostly Uused Products by 50000+ Subscribers
4. All of the above


Ans : 4
Exp : After you launch an instance, we recommend that you check its status to confirm that it goes from the pending status to the running status, and not to the terminated status.

The following are a few reasons why an Amazon EBS-backed instance might immediately terminate:

•You've reached your volume limit. For information about the volume limit, and to submit a request to increase your volume limit, see Request to Increase the Amazon EBS Volume Limit.

•The AMI is missing a required part.

•The snapshot is corrupt.






Question : A company called Acmeshell has a backup policy stating that backups need to be easily available for 6 months and then be sent to long term archiving.
How can Acmeshell use S3 to accomplish this goal?
 :
1. Write an AWS command line tool to backup the data and send it to glacier after 6 months
2. Use S3 bucket policies to manage the data
3. Access Mostly Uused Products by 50000+ Subscribers
4. Use bucket Lifecycle policies and set the files to go to glacier storage after 6 months




Question :

You are an Amazon Web Services solution architect, and in your organization you have multiple processes that run asynchronously.
However, they have some dependencies on each other, and this requires that you coordinate the execution of multiple distributed components
and deal with the increased latencies and unreliability inherent in remote communication. Which of the following solutions best fits
this scenario?

  :
1. You will implement this with the help of message queues and databases, along with the logic to synchronize them.
2. You will use Amazon Simple Workflow (SWF)
3. Access Mostly Uused Products by 50000+ Subscribers
4. You will solve this problem using Amazon Simple Notification Service (Amazon SNS)



Question : How do you define the Activity Task, in the context of Amazon Simple Workflow

 :
1. It is a definition of the Activity
2. One invocation of an activity
3. Access Mostly Uused Products by 50000+ Subscribers
4. Collection of activity



Question : Which region does not support read-after-write consistency for objects on S3?

 :
1. US West Oregon
2. US West
3. Access Mostly Uused Products by 50000+ Subscribers
4. US Standard


Question :

What would be the best way to set permissions on an S3 bucket if you would like to deliver the content over the internet but only to your employees?
  :
1. Use S3 signed URLs through the API
2. Create an S3 account for every employee
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above