
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : You have a website that generates a huge volume of archival logs, and for regulatory reasons these logs must be retained. You have decided to
use Amazon Glacier to store these logs. Why?
1. Amazon Glacier is good for infrequently accessed data
2. Amazon Glacier is good for data archives
3. Access Mostly Uused Products by 50000+ Subscribers
4. Amazon Glacier is good for frequently accessed data

1. 1,3
2. 1,2
3. Access Mostly Uused Products by 50000+ Subscribers
4. 3,4
5. 1,2,3

Correct Answer : Get Latest Questions and Answers :
Explanation: Amazon Glacier is a secure, durable, and extremely low-cost storage service for data archiving and online backup. Customers can
reliably store large or small amounts of data for as little as $0.01 per gigabyte per month, a significant savings compared to on-premises solutions. To
keep costs low, Amazon Glacier is optimized for infrequently accessed data where a retrieval time of several hours is suitable.
Data is stored in Amazon Glacier in "archives." An archive can be any data such as a photo, video, or document. You can upload a single file as an
archive, or aggregate multiple files into a TAR or ZIP file and upload them as one archive.
A single archive can be as large as 40 terabytes. You can store an unlimited number of archives and an unlimited amount of data in Amazon Glacier. Each
archive is assigned a unique archive ID at the time of creation, and the content of the archive is immutable, meaning that after an archive is created it
cannot be updated.
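
For reference, a minimal sketch of uploading such a log bundle to Glacier with the boto3 SDK might look like the following; the region, vault name, and file name are assumptions for illustration, and the vault must already exist.

```python
# Minimal sketch: upload a zipped log bundle to Amazon Glacier with boto3.
# Assumes AWS credentials are configured and a vault named "log-archive"
# (hypothetical) already exists in the chosen region.
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

with open("weblogs-2015-06.zip", "rb") as archive_file:
    response = glacier.upload_archive(
        vaultName="log-archive",
        archiveDescription="Web server logs, June 2015",
        body=archive_file,
    )

# Keep the archive ID: it is the handle for retrieving or deleting the
# archive later, since archives are immutable once created.
print(response["archiveId"])
```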




Question : QuickTechie.com archives its logs every day, and for regulatory requirements the logs have to be kept somewhere. They have considered tape
libraries, but the initial cost involved is very high. In this case, which of the following is the better solution from AWS?

1. AWS S3
2. AWS RDS
3. Access Mostly Uused Products by 50000+ Subscribers
4. Any of the above

Ans : 3
Exp : On-premises or offsite tape libraries can lower storage costs but require large upfront investments and specialized maintenance. Amazon Glacier
has no upfront cost and eliminates the cost and burden of maintenance.



Question : Which of the following provides secure, durable, highly-scalable key-value object storage?
1. Amazon Simple Queue Service
2. Amazon Simple Workflow Service
3. Access Mostly Uused Products by 50000+ Subscribers
4. Amazon Simple Notification Service

Ans : 3
Exp : Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly-scalable object storage. Amazon S3 is
easy to use, with a simple web services interface to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the
storage you actually use. There is no minimum fee and no setup cost.
Amazon S3 can be used alone or together with other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Block Store (Amazon EBS),
and Amazon Glacier, as well as third party storage repositories and gateways. Amazon S3 provides cost-effective object storage for a wide variety of use
cases including cloud applications, content distribution, backup and archiving, disaster recovery, and big data analytics.

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Amazon S3 redundantly stores data in
multiple facilities and on multiple devices within each facility. To increase durability, Amazon S3 synchronously stores your data across multiple
facilities before confirming that the data has been successfully stored. In addition, Amazon S3 calculates checksums on all network traffic to detect
corruption of data packets when storing or retrieving data. Unlike traditional systems, which can require laborious data verification and manual repair,
Amazon S3 performs regular, systematic data integrity checks and is built to be automatically self-healing.
Amazon S3's standard storage is:
- Backed with the Amazon S3 Service Level Agreement for availability.
- Designed for 99.999999999% durability and 99.99% availability of objects over a given year.
- Designed to sustain the concurrent loss of data in two facilities.
Amazon S3 is storage for the Internet. It's a simple storage service that offers software developers a highly-scalable, reliable, and low-latency data
storage infrastructure at very low costs.
Amazon S3 provides a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.
Using this web service, developers can easily build applications that make use of Internet storage. Since Amazon S3 is highly scalable and you only pay
for what you use, developers can start small and grow their application as they wish, with no compromise on performance or reliability. It is designed
to be highly flexible: store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster
recovery; build a simple FTP application, or a sophisticated web application such as the Amazon.com retail web site. Amazon S3 frees developers to focus
on innovation, not figuring out how to store their data.

With Amazon S3's lifecycle policies, you can configure your objects to be archived to Amazon Glacier or deleted after a specific period of time. You can
use this policy-driven automation to quickly and easily reduce storage costs as well as save time. In each rule you can specify a prefix, a time period,
a transition to Amazon Glacier, and/or an expiration. For example, you could create a rule that archives all objects with the common prefix "logs/" 30
days from creation, and expires these objects after 365 days from creation. You can also create a separate rule that only expires all objects with the
prefix "backups/" 90 days from creation. Lifecycle policies apply to both existing and new S3 objects, ensuring that you can optimize storage and
maximize cost savings for all current data and any new data placed in S3 without time-consuming manual data review and migration. Within a lifecycle
rule, the prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to
a set of objects, specify their common prefix (e.g. "logs/"). You can specify a transition action to have your objects archived and an expiration action
to have your objects removed. For the time period, provide the date (e.g. January 31, 2013) or the number of days from the creation date (e.g. 30 days)
after which you want your objects to be archived or removed. You may create multiple rules for different prefixes.
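
As an illustration, the "logs/" and "backups/" rules described above could be expressed with boto3 roughly as follows; the bucket name is hypothetical.

```python
# Minimal sketch: lifecycle rules that archive "logs/" to Glacier after 30 days,
# expire them after 365 days, and expire "backups/" after 90 days.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="quicktechie-logs",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # Transition to Glacier 30 days after creation...
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                # ...and delete 365 days after creation.
                "Expiration": {"Days": 365},
            },
            {
                "ID": "expire-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
            },
        ]
    },
)
```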




Question : QuickTechie.com is a static website, but it contains many content links and files that visitors can download by clicking hyperlinks.
Which of the options below is the best solution to provide low-cost, highly available hosting that can scale automatically to meet traffic demands?
1. AWS S3
2. AWS RDS
3. Access Mostly Uused Products by 50000+ Subscribers
4. Any of the above

Ans : 1
Exp : You can host your entire static website on Amazon S3 for a low-cost, highly available hosting solution that can scale automatically to meet traffic
demands. With Amazon S3, you can reliably serve your traffic and handle unexpected peaks without worrying about scaling your infrastructure.
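
A minimal sketch of enabling static website hosting on an existing bucket with boto3 is shown below; the bucket name and document keys are assumptions, and the objects still need public-read access (for example via a bucket policy) before visitors can fetch them.

```python
# Minimal sketch: turn an existing S3 bucket into a static website endpoint.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="www.quicktechie.com",  # hypothetical bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
# The site is then served from the region-specific website endpoint, e.g.
# http://www.quicktechie.com.s3-website-us-east-1.amazonaws.com
```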




Question : PhotoAnalytics.com is a photo and video hosting website with millions of users. Which of the following is a good solution for storing big
data objects while reducing costs, scaling to meet demand, and increasing the speed of innovation?
1. AWS S3
2. AWS RDS
3. Access Mostly Uused Products by 50000+ Subscribers
4. Any of the above
Ans : 1
Exp : Whether you're storing pharmaceutical or financial data, or multimedia files such as photos and videos, Amazon S3 can be used as your big data
object store. Amazon Web Services offers a comprehensive portfolio of services to help you manage big data by reducing costs, scaling to meet demand, and
increasing the speed of innovation.

Backup and Archiving
Amazon S3 offers a highly durable, scalable, and secure solution for backing up and archiving your critical data. You can use Amazon S3's versioning
capability to provide even further protection for your stored data. You can also define lifecycle rules to archive sets of Amazon S3 objects to Amazon
Glacier, an extremely low-cost storage service.

Content Storage and Distribution
Amazon S3 provides highly durable and available storage for a variety of content. It allows you to offload your entire storage infrastructure into the
cloud, where you can take advantage of Amazon S3's scalability and pay-as-you-go pricing to handle your growing storage needs. You can distribute your
content directly from Amazon S3 or use Amazon S3 as an origin store for delivering content to your Amazon CloudFront edge locations.

Cloud-native Application Data
Amazon S3 provides high performance, highly available storage that makes it easy to scale and maintain cost-effective mobile and Internet-based apps that
run fast. With Amazon S3, you can add any amount of content and access it from anywhere, so you can deploy applications faster and reach more customers.

Disaster Recovery
Amazon S3's highly durable, secure, global infrastructure offers a robust disaster recovery solution designed to provide superior data protection.
Whether you're looking for disaster recovery in the cloud or from your corporate data center to Amazon S3, AWS has the right solution for you.



Question : AcmeArchive.com is a website offering file sharing and storage services like Google Drive and Dropbox. You also want to handle the case where, during a
desktop sync, a user accidentally deletes a file and later realizes it was important. Which of the following Simple Storage Service features will help recover the deleted file?

A. Versioning in S3
B. Secured signed URLs for S3 data access
C. Do not allow deleting objects from S3 (only soft delete is permitted)
D. S3 Reduced Redundancy Storage.
E. Amazon S3 event notifications

1. A, B, C
2. B, C, D
3. Access Mostly Uused Products by 50000+ Subscribers
4. A, E
5. A, D
Ans : 5
Exp : Versioning
Amazon S3 provides further protection with versioning capability. You can use versioning to preserve, retrieve, and restore every version of every object
stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests will
retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for
every version stored. You can configure lifecycle rules to automatically control the lifetime and cost of storing multiple versions.
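
A minimal sketch of this versioning workflow with boto3 is shown below; the bucket and key names are hypothetical. After a simple delete, S3 only adds a delete marker, so earlier versions remain retrievable by version ID.

```python
# Minimal sketch: enable versioning and read back an older version of an
# accidentally deleted object.
import boto3

s3 = boto3.client("s3")

# Enable versioning on the bucket (hypothetical name).
s3.put_bucket_versioning(
    Bucket="acmearchive-files",
    VersioningConfiguration={"Status": "Enabled"},
)

# After an accidental delete, list the surviving versions of the object...
versions = s3.list_object_versions(Bucket="acmearchive-files", Prefix="report.docx")
old_version_id = versions["Versions"][0]["VersionId"]

# ...and fetch a previous version explicitly by its version ID.
obj = s3.get_object(
    Bucket="acmearchive-files",
    Key="report.docx",
    VersionId=old_version_id,
)
print(obj["Body"].read()[:100])
```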
Reduced Redundancy Storage (RRS)
Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to reduce their costs by storing noncritical, reproducible data at
lower levels of redundancy than Amazon S3's standard storage. It provides a cost-effective, highly available solution for distributing or sharing content
that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The RRS option
stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate
objects as many times as standard Amazon S3 storage.

Reduced Redundancy Storage is:
- Backed with the Amazon S3 Service Level Agreement for availability.
- Designed to provide 99.99% durability and 99.99% availability of objects over a given year. This durability level corresponds to an average annual
expected loss of 0.01% of objects.
- Designed to sustain the loss of data in a single facility.




Question : Your Hadoop job should be triggered based on an event notification when a file is uploaded. Which of the following components can help
implement this in AWS?

1. S3
2. SQS
3. Access Mostly Uused Products by 50000+ Subscribers
4. EC2
5. IAM
6. CloudWatch Alarm

1. 1,2,3
2. 2,3,4
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1,2,3,6
5. 2,3,4,5
Ans : 1
Exp : Amazon S3 can send event notifications when objects are uploaded to Amazon S3. Amazon S3 event notifications can be delivered using Amazon SQS or
Amazon SNS, or sent directly to AWS Lambda, enabling you to trigger workflows, alerts, or other processing. For example, you could use Amazon S3 event
notifications to trigger transcoding of media files when they are uploaded, processing of data files when they become available, or synchronization of
Amazon S3 objects with other data stores.
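
One way to wire this up for the scenario above is to have S3 publish object-created events to an SQS queue that the Hadoop job polls. A minimal sketch with boto3 follows; the bucket name and queue ARN are hypothetical, and the queue's access policy must already allow S3 to send messages to it.

```python
# Minimal sketch: notify an SQS queue whenever a new object lands in the bucket,
# so a downstream Hadoop job can poll the queue and start processing.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="hadoop-input-logs",  # hypothetical bucket name
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                # Hypothetical queue ARN.
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:new-log-events",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```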





Question : The AchmePhoto.com website has millions of photos and a thumbnail for each photo. Thumbnails can easily be reproduced from the actual full
photo, and they take less space than the actual photo. Which of the following is the best solution for storing the thumbnails?


1. S3
2. RRS
3. Access Mostly Uused Products by 50000+ Subscribers
4. ElastiCache
5. Amazon Glacier

Ans : 2
Exp : Reduced Redundancy Storage (RRS)
Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to reduce their costs by storing noncritical, reproducible data at
lower levels of redundancy than Amazon S3's standard storage. It provides a cost-effective, highly available solution for distributing or sharing content
that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The RRS option
stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate
objects as many times as standard Amazon S3 storage.

Reduced Redundancy Storage is:
- Backed with the Amazon S3 Service Level Agreement for availability.
- Designed to provide 99.99% durability and 99.99% availability of objects over a given year. This durability level corresponds to an average annual
expected loss of 0.01% of objects.
- Designed to sustain the loss of data in a single facility.






Question : You run www.QuickTechie.com, which has a huge load, so you decided to use EC2 instances in two Availability Zones in two
regions, with 25 instances in each. However, while starting the servers, you were able to start only 20 servers in each zone, and 5 requests failed in each zone. Why?


1. There is a limit of 20 EC2 instances in each region; you can ask to increase this limit.
2. There is a limit of 20 EC2 instances in each Availability Zone; you can ask to increase this limit.
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above.

Correct Answer : Get Latest Questions and Answers :
Explanation: Unless otherwise noted, each limit is per region. You are limited to running up to 20 On-Demand Instances, purchasing 20 Reserved
Instances, and requesting 5 Spot Instances per region. New AWS accounts may start with limits that are lower than the limits described here. Certain
instance types are further limited per region.
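
If you want to confirm the current per-region limit before launching a large fleet, a minimal sketch with boto3 is shown below; the region is an assumption, and limit increases are requested through the AWS Support service-limit-increase form.

```python
# Minimal sketch: read the account's per-region instance limit.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

attrs = ec2.describe_account_attributes(AttributeNames=["max-instances"])
limit = attrs["AccountAttributes"][0]["AttributeValues"][0]["AttributeValue"]
print("Max instances allowed in this region:", limit)  # default is 20
```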


Related Questions


Question : You have deployed a two-tier application, with the web server in a public subnet and the DB server in a private subnet. You have also created a NAT instance in the public subnet,
attached an EIP to that NAT instance, and added a route for it. Now all the instances in the private subnet need to download OS patches from the internet and will access the internet via the NAT
instance. However, after this entire configuration, they are still not able to reach the internet. What could be the possible reason?

1. Keep the NAT instance in the same private subnet, so that instances in the private subnet can reach the NAT instance and the NAT instance can send traffic back to them.

2. Instances in private subnet can never access the internet. They have to be in public subnet.

3. Access Mostly Uused Products by 50000+ Subscribers

4. You would have to complete one more step on the NAT instance: disable the source/destination check on the NAT instance.
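
If the missing step is indeed the source/destination check described in option 4, disabling it with boto3 looks roughly like the sketch below; the instance ID is hypothetical. A NAT instance must have this check off so it can forward traffic on behalf of other instances.

```python
# Minimal sketch: disable the source/destination check on a NAT instance.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_instance_attribute(
    InstanceId="i-0abc1234nat",          # hypothetical NAT instance ID
    SourceDestCheck={"Value": False},    # required for NAT forwarding
)
```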



Question : You are building an automated transcription service in which Amazon EC2 worker
instances process an uploaded audio file and generate a text file. You must store both of
these files in the same durable storage until the text file is retrieved. You do not know what
the storage capacity requirements are. Which storage option is both cost-efficient and
scalable?
1. Multiple Amazon EBS volume with snapshots
2. A single Amazon Glacier vault
3. Access Mostly Uused Products by 50000+ Subscribers
4. Multiple instance stores

Ans : 3



Question : You want to install your own custom database on EC2 so that you can migrate your in-house MySQL DB to that EC2 instance. You have also attached an EIP and an
Elastic Block Store volume to that instance. After installing the required software, it is recommended to stop and start the instance again. You also have some licensed, versioned data on the
instance store. Given this, how will your entire configuration be impacted?

A. The EIP of the instance will be detached and you will have to attach it again after the restart, and it could be a different one.
B. All the data on the instance store will be lost.
C. You have to attach the EBS volume back to the instance after the restart.
D. The underlying host of the EC2 instance would be changed.
E. You have to re-create all the Security Groups and NACLs you previously created for this instance.

1. A,B
2. B,C
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,D
5. B,D


Question : In the event of a planned or unplanned outage of your primary DB instance,
Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled __________

1. More than One Read Replica
2. More Than one write Replica
3. Access Mostly Uused Products by 50000+ Subscribers
4. Multi Region Deployment


Question : Which of the following approaches provides the lowest cost for Amazon Elastic Block Store
snapshots while giving you the ability to fully restore data?

1. Maintain two snapshots: the original snapshot and the latest incremental snapshot.
2. Maintain a volume snapshot; subsequent snapshots will overwrite one another
3. Access Mostly Uused Products by 50000+ Subscribers
4. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier.
Ans : 1
Exp :


Question : You try to connect via SSH to a newly created Amazon EC2 instance and get one of the
following error messages:
"Network error: Connection timed out" or "Error connecting to [instance], reason: ->
Connection timed out: connect,"
You have confirmed that the network and security group rules are configured correctly and
the instance is passing status checks. What steps should you take to identify the source of
the behavior? Choose 2 answers
A. Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch.
B. Verify that your IAM user policy has permission to launch Amazon EC2 instances.
C. Verify that you are connecting with the appropriate user name for your AMI.
D. Verify that the Amazon EC2 Instance was launched with the proper IAM role.
E. Verify that your federation trust to AWS has been established.
1. A,B
2. A,C
3. Access Mostly Uused Products by 50000+ Subscribers
4. C,D
Ans : 2
Exp :



Question : In a VPC, network access control lists (ACLs) act as a firewall for associated subnets, controlling both inbound and outbound traffic at the ___________ level
1. Full VPC
2. Customer Gateway
3. Access Mostly Uused Products by 50000+ Subscribers
4. Subnet


Question : Which of the following is a wrong statement about the Local Secondary Index?


1. The key of a local secondary index consists of a hash key and a range key.
2. For each hash key, the total size of all indexed items must be 10 GB or less.
3. Access Mostly Uused Products by 50000+ Subscribers
4. When you query a local secondary index, you can choose either eventual consistency or strong consistency.
5. The hash key of the index is the same attribute as the hash key of the table. The range key can be any scalar table attribute.



Question :
A user has created multiple data points for the CloudWatch metrics with the dimensions
Box=UAT, App=Document and Box=UAT, App=Notes.
If the user queries CloudWatch with the dimension parameter as Server=Prod, what data will he get?


1. The last value of the email and SMS metric
2. It will not return any data as the dimension for Box=UAT does not exist
3. Access Mostly Uused Products by 50000+ Subscribers
4. All values specified for the dimension Box=UAT, App=Notes

Ans : 2
Exp : A dimension is a key value pair used to uniquely identify a metric. The user cannot get the CloudWatch metrics statistics
if he has not defined the right combination of dimensions for it. In this case the dimension combination is either
Box=UAT, App=Document or Box=UAT, App=Notes. Thus, if the user tries to get the data for a dimension with Box=UAT,
it will not return any statistics. This is because the combination is not right and no statistics are defined for the dimension Box=UAT.

Dimensions help you design a structure for your statistics plan. Because dimensions are part of the unique identifier for a metric,
whenever you add a unique name value pair to one of your metrics, you are creating a new metric.

CloudWatch treats each unique combination of dimensions as a separate metric.
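
To make this concrete, a minimal boto3 sketch is shown below; the namespace and metric name are hypothetical. Only a query whose dimensions exactly match a published combination returns data points.

```python
# Minimal sketch: statistics are returned only for an exact dimension combination.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="MyApp",                 # hypothetical namespace
    MetricName="Requests",             # hypothetical metric name
    Dimensions=[
        {"Name": "Box", "Value": "UAT"},
        {"Name": "App", "Value": "Notes"},
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])  # populated only for the exact combination above

# Querying with Dimensions=[{"Name": "Server", "Value": "Prod"}] (or Box=UAT
# alone) returns an empty Datapoints list, as the explanation states.
```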




Question :
For DynamoDB, which statements are correct?
1. By using a proxy, it is not possible for a developer to achieve item-level access control
2. By using FGAC, it is possible for a developer to achieve item-level access control
3. Access Mostly Uused Products by 50000+ Subscribers
4. By using a secret key, it is possible for a developer to achieve item-level access control


1. 1,2,3
2. 2,3,4
3. Access Mostly Uused Products by 50000+ Subscribers
4. 2,3,4
Ans : 1
Exp : Fine Grained Access Control (FGAC) gives a DynamoDB table owner a high degree of control over data in the table.
Specifically, the table owner can indicate who (caller) can access which items or attributes of the table and perform what actions (read / write capability).

To achieve this level of control without FGAC, a developer would have to choose from a few potentially onerous approaches. Some of these are:
1. Proxy: The application client sends a request to a brokering proxy that performs the authentication and authorization.
Such a solution increases the complexity of the system architecture and can result in a higher total cost of ownership (TCO).
2. Per Client Table: Every application client is assigned its own table. Since application clients access different tables,
they would be protected from one another. This could potentially require a developer to create millions of tables, thereby
making database management extremely painful.
3. Access Mostly Uused Products by 50000+ Subscribers
changing the token and handling its impact on the stored data. Here, the key of the items accessible by this client would contain the secret token.
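
For illustration, FGAC is expressed through IAM policy conditions on the DynamoDB item keys. The sketch below attaches a policy that restricts a user to items whose partition (hash) key equals the caller's federated user ID; the table name, user name, and the Login with Amazon policy variable are assumptions for this example.

```python
# Minimal sketch: an FGAC-style IAM policy using the dynamodb:LeadingKeys
# condition key to enforce item-level access control.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            # Hypothetical table ARN.
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserNotes",
            "Condition": {
                # Only items whose hash key matches the caller's identity
                # (assumes web identity federation via Login with Amazon).
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
                }
            },
        }
    ],
}

iam.put_user_policy(
    UserName="app-client",                    # hypothetical IAM user
    PolicyName="dynamodb-fgac-item-level",
    PolicyDocument=json.dumps(policy),
)
```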



Question : You try to enable lifecycle policies on one of the S3 buckets created by you,
but you are not able to do so on that particular bucket. What could be the reason?
1. Bucket is corrupted
2. Versioning is enabled on that bucket
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above