
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : As you know, www.HadoopExam.com provides online recorded training on various technologies, and you use www.HadoopExam.com as a portal to show
all the training videos. However, the videos are stored in S3. Now you want to make sure that these videos can only be embedded at www.HadoopExam.com and
cannot be played from any other domain. Which of the IAM components will you configure?

1. You will create an IAM user for each new subscriber on HadoopExam.com and give them access to those videos.

2. You will create an IAM group which has permission on these videos, and you will keep adding new users to that group.

3. You will create an IAM role and attach it to the EC2 instance on which www.HadoopExam.com is installed. This role should have permission on the videos stored in S3.

4. No, you don't have to do anything with IAM. Just manage members on your website.

Correct Answer : 3
Explanation: An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can
and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role
does not have long-term credentials (a password or access keys) associated with it. Instead, when a user or service assumes a role, temporary security
credentials are created dynamically and provided to it.

You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. For example, you might
want to grant users in your AWS account access to resources they don't usually have, or grant users in one AWS account access to resources in another
account. Or you might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app (where they can be difficult to rotate and where
users can potentially extract them). Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your
corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources.
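To make the correct answer concrete, below is a minimal sketch, assuming Python with boto3 and a hypothetical bucket and key, of code running on the www.HadoopExam.com EC2 instance. Because the attached IAM role supplies temporary credentials through the instance metadata service, no access keys appear in the code, and the portal can hand out short-lived pre-signed URLs so the videos play only through pages it serves.

import boto3

# No access keys anywhere in the code: boto3 automatically picks up the
# temporary credentials provided by the IAM role attached to this instance.
s3 = boto3.client("s3")

def video_url(key):
    # "hadoopexam-videos" and the 5-minute expiry are hypothetical values.
    # The pre-signed URL is generated server-side, so only pages served by
    # www.HadoopExam.com can embed a working, non-expired link.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "hadoopexam-videos", "Key": key},
        ExpiresIn=300,
    )

print(video_url("course1/lesson1.mp4"))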







Question : You have deployed the web application www.HadoopExam.com on AWS and want to configure which access is allowed and denied. Which of the following statements are correct?

1. If you use the default configuration, then all external access to the services will be denied.

2. You can override the default configuration so that external services may or may not access services in your AWS application, depending on the configuration.

3. If you don't configure any security, all external services can access your application, and it will be vulnerable to DDoS attacks.

4. 1,2

5. 2,3

Correct Answer : 4
Explanation: Your AWS account automatically has a default security group per VPC and per region. If you don't specify a
security group when you launch an instance, the instance is automatically associated with the default security group.

A default security group is named default, and it has an ID assigned by AWS. The following are the default rules for each default security group:

Allows all inbound traffic from other instances associated with the default security group (the security group specifies itself as a source security group in its inbound rules).
Allows all outbound traffic from the instance.

You can add or remove inbound rules for any default security group. You can add or remove outbound rules for any VPC default security group.

You can't delete a default security group. If you try to delete the EC2-Classic default security group, you'll get the following error:
Client.InvalidGroup.Reserved: The security group 'default' is reserved. If you try to delete a VPC default security group, you'll get the following
error:
Client.CannotDelete: the specified group: "sg-51530134" name: "default" cannot be deleted by a user.
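As an illustration of modifying (rather than deleting) a default security group, here is a minimal boto3 sketch; the group ID reuses the placeholder "sg-51530134" from the error message above and is hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Add an inbound rule to the default security group. You can change its
# rules freely, even though the group itself cannot be deleted.
ec2.authorize_security_group_ingress(
    GroupId="sg-51530134",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # allow HTTPS from anywhere
    }],
)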


Policy Evaluation Basics

When an AWS service receives a request, the request is first authenticated using information about the access key ID and signature. (A few services, like
Amazon S3, allow requests from anonymous users.) If the request passes authentication, AWS then determines whether the requester is authorized to
perform the action represented by the request.

Requests that are made using the credentials of the AWS account owner (the root credentials) for resources in that account are allowed. However, if the
request is made using the credentials of an IAM user, or if the request is signed using temporary credentials that are granted by AWS STS, AWS uses the
permissions defined in one or more IAM policies to determine whether the user's request is authorized.

Determining Whether a Request is Allowed or Denied

When a request is made, the AWS service decides whether a given request should be allowed or denied. The evaluation logic follows these rules:

By default, all requests are denied. (In general, requests made using the account credentials for resources in the account are always allowed.)
An explicit allow overrides this default.
An explicit deny overrides any allows.
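The three evaluation rules above can be expressed as a short function. This is only an illustrative Python sketch of the evaluation order, not AWS's actual implementation; the statement format and exact-match lookups are simplified (real IAM policies also support wildcards and conditions).

def evaluate(policy_statements, action, resource):
    """Toy model of IAM request evaluation for a single request."""
    allowed = False
    for stmt in policy_statements:
        if action in stmt["actions"] and resource in stmt["resources"]:
            if stmt["effect"] == "Deny":
                return "Deny"      # an explicit deny overrides any allow
            if stmt["effect"] == "Allow":
                allowed = True     # an explicit allow overrides the default
    return "Allow" if allowed else "Deny"  # by default, all requests are denied

stmts = [
    {"effect": "Allow", "actions": {"s3:GetObject"}, "resources": {"videos"}},
    {"effect": "Deny",  "actions": {"s3:GetObject"}, "resources": {"secrets"}},
]
print(evaluate(stmts, "s3:GetObject", "videos"))   # Allow (explicit allow)
print(evaluate(stmts, "s3:GetObject", "secrets"))  # Deny (explicit deny wins)
print(evaluate(stmts, "s3:PutObject", "videos"))   # Deny (default deny)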




Question : You want to run a MapReduce analytics algorithm on data stored in S3, and you decided to use the EMR service of AWS. Why?

A. AWS EMR is based on Hadoop and can run MapReduce jobs
B. AWS EMR has AWS's own MapReduce engine, which can run MapReduce jobs
C. AWS EMR is based on Apache Kafka, which can support MapReduce
D. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR
1. A,B
2. B,C
3. C,D
4. A,D
5. B,D

Correct Answer : 4
Explanation: Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark,
on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can
process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into
and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.

Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable
Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact
with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB.

Amazon EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine
learning, financial analysis, scientific simulation, and bioinformatics.
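For example, a cluster for such workloads could be launched with boto3 along the following lines; this is a sketch only, and the cluster name, instance types and counts, release label, and S3 log path are all hypothetical.

import boto3

emr = boto3.client("emr")

# Launch a small managed Hadoop/Spark cluster that terminates when idle.
response = emr.run_job_flow(
    Name="analytics-cluster",
    ReleaseLabel="emr-5.36.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://my-emr-logs/",  # hypothetical log bucket
)
print(response["JobFlowId"])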


Related Questions


Question : You have a website which has a huge archive of logs, and for regulatory reasons it is mandatory to keep them saved. You have decided to use Amazon
Glacier to store these logs. Why?
1. Amazon Glacier is good for infrequently accessed data
2. Amazon Glacier is good for data archives
3.
4. Amazon Glacier is good for frequently accessed data

1. 1,3
2. 1,2
3.
4. 3,4
5. 1,2,3


Question : QuickTechie.com has a mechanism for archiving everyday logs; however, for regulatory requirements they have to keep them somewhere. So they
have decided to use tape libraries, but the initial cost involved with this is very high. In this case, which is the better solution from AWS?

1. AWS S3
2. AWS RDS
3. AWS Glacier
4. Any of the above

Ans : 3
Exp : On-premises or offsite tape libraries can lower storage costs but require large upfront investments and specialized maintenance. Amazon Glacier
has no upfront cost and eliminates the cost and burden of maintenance.



Question : Which of the following provides secure, durable, highly-scalable key-value object storage?
1. Amazon Simple Queue Service
2. Amazon Simple Workflow Service
3. Amazon Simple Storage Service
4. Amazon Simple Notification Service

Ans : 3
Exp : Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly-scalable object storage. Amazon S3 is easy to use, with a simple web services interface to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage you actually use. There is no minimum fee and no setup cost.
Amazon S3 can be used alone or together with other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Block Store (Amazon EBS), and Amazon Glacier, as well as third party storage repositories and gateways. Amazon S3 provides cost-effective object storage for a wide variety of use cases including cloud applications, content distribution, backup and archiving, disaster recovery, and big data analytics.

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Amazon S3 redundantly stores data in
multiple facilities and on multiple devices within each facility. To increase durability, Amazon S3 synchronously stores your data across multiple
facilities before confirming that the data has been successfully stored. In addition, Amazon S3 calculates checksums on all network traffic to detect
corruption of data packets when storing or retrieving data. Unlike traditional systems, which can require laborious data verification and manual repair,
Amazon S3 performs regular, systematic data integrity checks and is built to be automatically self-healing.
Amazon S3's standard storage is:
- Backed with the Amazon S3 Service Level Agreement for availability.
- Designed for 99.999999999% durability and 99.99% availability of objects over a given year.
- Designed to sustain the concurrent loss of data in two facilities.
Amazon S3 is storage for the Internet. It's a simple storage service that offers software developers a highly-scalable, reliable, and low-latency data storage infrastructure at very low costs. Amazon S3 provides a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. Using this web service, developers can easily build applications that make use of Internet storage. Since Amazon S3 is highly scalable and you only pay for what you use, developers can start small and grow their application as they wish, with no compromise on performance or reliability. It is designed to be highly flexible: store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster recovery; build a simple FTP application, or a sophisticated web application such as the Amazon.com retail web site. Amazon S3 frees developers to focus on innovation, not figuring out how to store their data.

With Amazon S3's lifecycle policies, you can configure your objects to be archived to Amazon Glacier or deleted after a specific period of time. You can use this policy-driven automation to quickly and easily reduce storage costs as well as save time. In each rule you can specify a prefix, a time period, a transition to Amazon Glacier, and/or an expiration. For example, you could create a rule that archives all objects with the common prefix "logs/" 30 days from creation, and expires these objects after 365 days from creation. You can also create a separate rule that only expires all objects with the prefix "backups/" 90 days from creation. Lifecycle policies apply to both existing and new S3 objects, ensuring that you can optimize storage and maximize cost savings for all current data and any new data placed in S3 without time-consuming manual data review and migration. Within a lifecycle rule, the prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g. "logs/"). You can specify a transition action to have your objects archived and an expiration action to have your objects removed. For the time period, provide the date (e.g. January 31, 2013) or the number of days from creation date (e.g. 30 days) after which you want your objects to be archived or removed. You may create multiple rules for different prefixes.
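The example rule described above (archive "logs/" objects to Glacier 30 days after creation, expire them after 365 days) could be set up with boto3 roughly as follows; the bucket name and rule ID are hypothetical.

import boto3

s3 = boto3.client("s3")

# Implements the lifecycle rule from the example above: objects under
# "logs/" transition to Glacier after 30 days and are deleted after 365.
s3.put_bucket_lifecycle_configuration(
    Bucket="quicktechie-logs",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }],
    },
)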




Question : QuickTechie.com is a static website; however, there are a lot of content links and files which visitors can download by clicking hyperlinks.
Which of the below is the best solution to provide a low-cost, highly available hosting solution that can scale automatically to meet traffic demands?
1. AWS S3
2. AWS RDS
3.
4. Any of the above

Ans : 1
Exp : You can host your entire static website on Amazon S3 for a low-cost, highly available hosting solution that can scale automatically to meet traffic
demands. With Amazon S3, you can reliably serve your traffic and handle unexpected peaks without worrying about scaling your infrastructure.
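A sketch of enabling this with boto3 follows; the bucket name and document keys are hypothetical, and the bucket would additionally need a policy allowing public reads.

import boto3

s3 = boto3.client("s3")

# Turn an existing bucket into a static website endpoint.
s3.put_bucket_website(
    Bucket="quicktechie.com",  # hypothetical bucket named after the site
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)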




Question : PhotoAnalytics.com is a photo and video hosting website and they have millions of users. Which of the following is a good solution for
storing big data objects, reducing costs, scaling to meet demand, and increasing the speed of innovation?
1. AWS S3
2. AWS RDS
3.
4. Any of the above
Ans : 1
Exp : Whether you're storing pharmaceutical or financial data, or multimedia files such as photos and videos, Amazon S3 can be used as your big data
object store. Amazon Web Services offers a comprehensive portfolio of services to help you manage big data by reducing costs, scaling to meet demand, and
increasing the speed of innovation.

Backup and Archiving
Amazon S3 offers a highly durable, scalable, and secure solution for backing up and archiving your critical data. You can use Amazon S3's versioning
capability to provide even further protection for your stored data. You can also define lifecycle rules to archive sets of Amazon S3 objects to Amazon
Glacier, an extremely low-cost storage service.

Content Storage and Distribution
Amazon S3 provides highly durable and available storage for a variety of content. It allows you to offload your entire storage infrastructure into the
cloud, where you can take advantage of Amazon S3's scalability and pay-as-you-go pricing to handle your growing storage needs. You can distribute your
content directly from Amazon S3 or use Amazon S3 as an origin store for delivering content to your Amazon CloudFront edge locations.

Cloud-native Application Data
Amazon S3 provides high performance, highly available storage that makes it easy to scale and maintain cost-effective mobile and Internet-based apps that
run fast. With Amazon S3, you can add any amount of content and access it from anywhere, so you can deploy applications faster and reach more customers.

Disaster Recovery
Amazon S3's highly durable, secure, global infrastructure offers a robust disaster recovery solution designed to provide superior data protection. Whether you're looking for disaster recovery in the cloud or from your corporate data center to Amazon S3, AWS has the right solution for you.



Question : AcmeArchive.com is a website offering file sharing and storage services like Google Drive and Dropbox. Now you also want to cover the case
where, during sync-up from the desktop, you have by accident deleted one of the files and you realize that it was important to you. Which of the Simple Storage Service features will help you get the deleted file back?

A. Versioning in S3
B. Secured signed URLs for S3 data access
C. Don't allow deleting objects from S3 (only soft delete is permitted)
D. S3 Reduced Redundancy Storage.
E. Amazon S3 event notifications

1. A, B, C
2. B, C, D
3.
4. A, E
5. A, D
Ans : 5
Exp : Versioning
Amazon S3 provides further protection with versioning capability. You can use versioning to preserve, retrieve, and restore every version of every object
stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests will
retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for
every version stored. You can configure lifecycle rules to automatically control the lifetime and cost of storing multiple versions.
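A sketch of how this recovery could look with boto3 (bucket and key are hypothetical): versioning is enabled up front, and after an accidental delete the delete marker is removed so the previous version becomes current again.

import boto3

s3 = boto3.client("s3")
bucket, key = "acmearchive-files", "reports/q1.xlsx"  # hypothetical names

# 1. Enable versioning before any accidents happen.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# 2. After an accidental delete, the object is hidden behind a delete
#    marker; deleting that marker restores the most recent real version.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for marker in versions.get("DeleteMarkers", []):
    if marker["Key"] == key and marker["IsLatest"]:
        s3.delete_object(Bucket=bucket, Key=key, VersionId=marker["VersionId"])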
Reduced Redundancy Storage (RRS)
Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to reduce their costs by storing noncritical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. It provides a cost-effective, highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage.

Reduced Redundancy Storage is:
- Backed with the Amazon S3 Service Level Agreement for availability.
- Designed to provide 99.99% durability and 99.99% availability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.01% of objects.
- Designed to sustain the loss of data in a single facility.
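In practice, RRS is selected per object at upload time via the storage class. A minimal boto3 sketch, with a hypothetical bucket, key, and local file:

import boto3

s3 = boto3.client("s3")

# Store an easily reproducible thumbnail at reduced redundancy; the
# original full-size photo would stay in standard storage.
with open("thumb.jpg", "rb") as f:  # hypothetical local file
    s3.put_object(
        Bucket="achmephoto-thumbnails",
        Key="photos/12345/thumb.jpg",
        Body=f,
        StorageClass="REDUCED_REDUNDANCY",
    )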




Question : Your Hadoop job should be triggered based on an event notification of a file upload. Which of the following components can help
implement this in AWS?

1. S3
2. SQS
3.
4. EC2
5. IAM
6. CloudWatch Alarm

1. 1,2,3
2. 2,3,4
3.
4. 1,2,3,6
5. 2,3,4,5
Ans : 1
Exp : Amazon S3 can send event notifications when objects are uploaded to Amazon S3. Amazon S3 event notifications can be delivered using Amazon SQS or
Amazon SNS, or sent directly to AWS Lambda, enabling you to trigger workflows, alerts, or other processing. For example, you could use Amazon S3 event
notifications to trigger transcoding of media files when they are uploaded, processing of data files when they become available, or synchronization of
Amazon S3 objects with other data stores.
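For the scenario above, a sketch of wiring the bucket to an SQS queue with boto3; the bucket name and queue ARN are hypothetical, and the queue's access policy must separately allow S3 to send messages.

import boto3

s3 = boto3.client("s3")

# Send a message to SQS whenever any object is created in the bucket;
# a consumer can then kick off the Hadoop job for the new file.
s3.put_bucket_notification_configuration(
    Bucket="hadoop-input-data",  # hypothetical bucket
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:upload-events",
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)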





Question : AchmePhoto.com is a website with millions of photos and also a thumbnail for each photo. A thumbnail can easily be reproduced from the actual full
photo; however, thumbnails take less space than actual photos. Which of the following is the best solution for storing the thumbnails?


1. S3
2. RRS
3.
4. ElastiCache
5. Amazon Glacier


Question : You have www.QuickTechie.com, which has a huge load, hence you decided to use EC2 instances, with two availability zones in two
regions, each with 25 instances. However, while starting the servers you were able to start only 20 servers in each zone, and 5 requests failed in each zone. Why?


1. There is a limit of 20 ec2 instances in each region, you can ask to increase this limit.
2. There is a limit of 20 ec2 instances in each availability zone, you can ask to increase this limit.
3.
4. None of the above.


Question : You have reserved instances to run the www.QuickTechie.com website, but you see that the instances launched in one AZ are not performing
well and you decided to change the AZ of the instances. Please select the correct statement in this case.


1. If you change the Availability Zone of an RI, its capacity reservation will still apply to the original Availability Zone
2. If you modify the Network Platform of a RI, its capacity reservation no longer applies to the original Network Platform and starts
applying to usage with the new Network Platform
3.
4. None of the 1,2 and 3
5. Both 1 and 2
Ans : 2
Exp : Reserved Instances provide you with a capacity reservation, so you can have confidence that you will be able to launch the instances you have reserved when you need them. There are three RI payment options (No Upfront, Partial Upfront, All Upfront) that enable you to balance the amount you pay upfront with your effective hourly price.

Yes. You can request to modify active RIs that you own in one of the following ways:

Move RIs between Availability Zones within the same region
Change the Network Platform of your RIs between "EC2-VPC" and "EC2-Classic" (for EC2 Classic-enabled customers)
Change the instance type of your Linux/UNIX RIs to a larger or smaller size in the same family (e.g., convert 8 m1.smalls into 4 m1.mediums, or vice versa)

You can submit an RI modification request from the AWS Management Console or the ModifyReservedInstances API. Requests are processed as soon as possible, depending on available capacity. There is no additional cost for modifying your RI.

If you change the Availability Zone of an RI, its capacity reservation and pricing benefits no longer apply to the original Availability Zone and start applying to usage in the new Availability Zone. If you modify the Network Platform of an RI, its capacity reservation no longer applies to the original Network Platform and starts applying to usage with the new Network Platform. Pricing benefits continue to apply to both EC2-Classic and EC2-VPC instance usage matching the rest of the RI parameters.
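A sketch of such a modification request through the ModifyReservedInstances API via boto3; the RI ID and target Availability Zone are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Move a reserved instance to a different Availability Zone; the capacity
# reservation then applies to the new AZ, as described above.
ec2.modify_reserved_instances(
    ReservedInstancesIds=["11111111-2222-3333-4444-555555555555"],  # hypothetical
    TargetConfigurations=[{
        "AvailabilityZone": "us-east-1b",
        "InstanceCount": 1,
    }],
)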



Question : Your company is in the process of migrating from in-house infrastructure to AWS and is in the middle of that process. Some of the services have already been migrated to AWS, and users
want to access those services, hence you are using a VPN connection. Which of the following statements are correct?
A. VPG is the AWS side of a VPN connection
B. CGW is the customer side of a VPN connection
C. IGW is the server side of a VPN connection
D. Cygwin is the customer side of a VPN connection

1. A,B
2. B,C
3.
4. A,D
5. B,D


Question : You are working with a HealthCare IT company which helps hospitals create their infrastructure in AWS. You will be creating a VPC for each individual hospital.
After creating 5 VPCs, creating a new VPC failed. Why?


1. On each AWS account you can create at most 5 VPCs in total.

2. On each AWS account, you can create at most 5 VPCs per region.

3.

4. To have more than 5 VPCs, you have to make an upfront payment for each new VPC.



Question : You have deployed a real estate property listing website in AWS on an EC2 instance. It is working fine and getting popular day by day. Suddenly one day you see
traffic increase, and when your risk team investigated, they found the traffic is coming from IPs in another country that has no relation to the properties listed, which are in INDIA. It seems
they are trying to find which ports are open on the EC2 server. Which of the following will help you prevent this person from accessing EC2?


1. You will define a strict rule in the security group which will deny the traffic.

2. You will define a rule in the NACL which will deny the specific traffic.

3.

4. You will put an ELB in front of your EC2 instance.