
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : You're running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140 TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. Which backup solution would be most appropriate for this use case?
1. Use Storage Gateway and configure it to use Gateway Cached volumes.
2. Configure your backup software to use S3 as the target for your data backups.
3.
4. Use Storage Gateway and configure it to use Gateway Stored volumes.

Correct Answer :
Explanation: Anti-Patterns

Amazon S3 is optimal for storing numerous classes of information that are relatively static and benefit from its durability, availability, and elasticity features. However, in a number of situations Amazon S3 is not the optimal solution. Amazon S3 has the following anti-patterns:

- File system: Amazon S3 uses a flat namespace and isn't meant to serve as a standalone, POSIX-compliant file system. However, by using delimiters (commonly either the '/' or '\' character) you are able to construct your keys to emulate the hierarchical folder structure of a file system within a given bucket (see the sketch after this list).
- Structured data with query: Amazon S3 doesn't offer query capabilities; to retrieve a specific object you need to already know the bucket name and key. Thus, you can't use Amazon S3 as a database by itself. Instead, pair Amazon S3 with a database to index and query metadata about Amazon S3 buckets and objects.
- Rapidly changing data: Data that must be updated very frequently might be better served by a storage solution with lower read/write latencies, such as Amazon EBS volumes, Amazon RDS or other relational databases, or Amazon DynamoDB.
- Backup and archival storage: Data that requires long-term encrypted archival storage with infrequent read access may be stored more cost-effectively in Amazon Glacier.
- Dynamic website hosting: While Amazon S3 is ideal for websites with only static content, dynamic websites that depend on database interaction or use server-side scripting should be hosted on Amazon EC2.
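To make the file-system anti-pattern concrete, below is a minimal sketch (Python with boto3, assuming configured AWS credentials) of listing with a '/' delimiter so that shared key prefixes behave like folders. The bucket name and prefix are hypothetical placeholders.

# Minimal sketch: emulating a folder hierarchy in S3's flat namespace.
# The bucket "example-bucket" and prefix "backups/2015/" are hypothetical.
import boto3

s3 = boto3.client("s3")

# With Delimiter="/", keys such as "backups/2015/jan/db.bak" and
# "backups/2015/feb/db.bak" are grouped under the pseudo-folders
# "backups/2015/jan/" and "backups/2015/feb/".
response = s3.list_objects_v2(
    Bucket="example-bucket",
    Prefix="backups/2015/",
    Delimiter="/",
)

for prefix in response.get("CommonPrefixes", []):
    print("pseudo-folder:", prefix["Prefix"])
for obj in response.get("Contents", []):
    print("object:", obj["Key"])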
AWS Storage Gateway : AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. The service enables you to securely store data in the AWS cloud for scalable and cost-effective storage. AWS Storage Gateway supports industry-standard storage protocols that work with your existing applications. It provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of your data encrypted in Amazon S3. For disaster recovery scenarios, it can serve as a cloud-hosted solution, together with Amazon EC2, that mirrors your entire production environment.

AWS Storage Gateway's software appliance is available for download as a virtual machine (VM) image that you install on a host in your datacenter. Once you've installed your gateway and associated it with your AWS account through the activation process, you can use the AWS Management Console to create either gateway-cached or gateway-stored volumes that can be mounted as iSCSI devices by your on-premises applications.

Gateway-cached volumes allow you to utilize Amazon S3 for your primary data, while retaining some portion of it locally in a cache for frequently accessed data. These volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TB in size and mount them as iSCSI devices from your on-premises application servers. Data written to these volumes is stored in Amazon S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware.

Gateway-stored volumes store your primary data locally, while asynchronously backing up that data to AWS. These volumes provide your on-premises applications with low-latency access to their entire datasets, while providing durable, off-site backups. You can create storage volumes up to 1 TB in size and mount them as iSCSI devices from your on-premises application servers. Data written to your gateway-stored volumes is stored on your on-premises storage hardware, and asynchronously backed up to Amazon S3 in the form of Amazon EBS snapshots.
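As a rough illustration of the gateway-cached workflow described above, the following Python (boto3) sketch creates a cached volume on an already activated gateway; the gateway ARN, network interface IP, and target name are hypothetical placeholders, not values from this scenario.

# Minimal sketch: creating a gateway-cached volume with boto3, assuming
# the gateway appliance is already installed and activated. All
# identifiers below are hypothetical.
import boto3

sgw = boto3.client("storagegateway")

response = sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    VolumeSizeInBytes=32 * 1024**4,        # 32 TB, the per-volume cached maximum
    TargetName="backup-volume-1",          # becomes part of the iSCSI target name
    NetworkInterfaceId="10.0.0.25",        # IP of the gateway VM's network interface
    ClientToken="backup-volume-1-create",  # idempotency token
)

# The on-premises iSCSI initiator connects to this target ARN.
print(response["TargetARN"])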

Ideal Usage Patterns

Organizations are using AWS Storage Gateway to support a number of use cases. These include corporate file sharing, enabling existing on-premises backup applications to store primary backups on Amazon S3, disaster recovery, and data mirroring to cloud-based compute resources.

Q. How much volume data can I manage per gateway?

Each Gateway-Cached Volume gateway can support up to 20 volumes and a maximum of 150 TB of data.
Each Gateway-Stored Volume gateway can support up to 12 volumes and a maximum of 192 TB of data.





Question : An enterprise wants to use a third-party SaaS application. The SaaS application needs to be able to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require that any outside access to its environment conform to the principle of least privilege, and that there be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?

1. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
2. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider.
3. Create an IAM role for cross-account access, allow the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
4. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.


Correct Answer :

Explanation: At times, you will want to provide a third party with access to your AWS resources. A recommended best practice is to use IAM roles. If you haven't used roles before, they
provide a mechanism to grant access to your AWS resources without needing to share long-term credentials (for example, an IAM user's access key). Let's say you want to use an
offering from a member of the AWS Partner Network (APN) that monitors your AWS account and provides advice to optimize costs. In order to track your daily spending, the APN Partner
(Partner) will need access to your AWS resources. Though you could provide that Partner with the credentials of an IAM user, we highly recommend you use a role. You can learn more
about roles in the IAM user guide.

Providing access to third parties

When third parties require access to your organization's AWS resources, you can use roles to delegate access to them. For example, a third party might provide a service for managing
your AWS resources. With IAM roles, you can grant these third parties access to your AWS resources without sharing your AWS security credentials. Instead, they can assume a role that
you created to access your AWS resources.

Third parties must provide you the following information for you to create a role that they can assume:

The third party's AWS account ID that contains the IAM users that can use your role. You specify their AWS account ID as the principal when you define the trust policy for the role.

An external ID that the third party can use to associate you with your role. You specify the ID that is provided by the third party as a condition when you define the trust policy
for the role. For more information about the external ID, see About the External ID.

The permissions that the third party requires to work with your AWS resources. You specify these permissions when defining the role's permission policy. This policy defines what
actions they can take and what resources they can access.

After you create the role, you must share the role's Amazon Resource Name (ARN) with the third party. They require your role's ARN in order to use the role.
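As a rough sketch of these steps in Python with boto3, the following creates a role that only the partner's account can assume, and only when it presents the agreed external ID; the account ID, external ID, role name, and permissions are hypothetical placeholders.

# Minimal sketch: cross-account role with an external ID, assuming boto3
# credentials for the enterprise account. All identifiers are hypothetical.
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the partner account may assume the role, and only
# when it supplies the agreed-upon external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "saas-vendor-unique-id"}},
    }],
}

role = iam.create_role(
    RoleName="SaaSDiscoveryRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least privilege: allow only the EC2 discovery calls the vendor needs.
iam.put_role_policy(
    RoleName="SaaSDiscoveryRole",
    PolicyName="EC2DiscoveryOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:Describe*"],
            "Resource": "*",
        }],
    }),
)

# Share this ARN with the SaaS provider; it is what they assume.
print(role["Role"]["Arn"])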

Important
When you grant third parties access to your AWS resources, they can access any resource that you give them permissions to and their use of your resources is billed to you. Ensure
that you limit their use of your resources appropriately.




Question : Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)

1. Deploy an ElastiCache in-memory cache running in each Availability Zone
2. Implement sharding to distribute load to multiple RDS MySQL instances
3.
4. Add an RDS MySQL read replica in each Availability Zone
Choose the correct combination:
1. 1,2
2. 2,3
3.
4. 1,4

Correct Answer :

Explanation: Amazon RDS Read Replicas provide enhanced performance and durability for Database (DB) Instances. This replication feature makes it easy to elastically scale out beyond the
capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application
read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted, so that they become standalone DB Instances.
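As a rough illustration, a read replica can be created per Availability Zone with boto3; the instance identifiers and AZ names below are hypothetical placeholders.

# Minimal sketch: one read replica per Availability Zone used by the web
# tier, so each AZ can serve reads locally while writes go to the master.
# Identifiers and AZ names are hypothetical.
import boto3

rds = boto3.client("rds")

for az in ["us-east-1a", "us-east-1b"]:
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="social-mysql-replica-" + az,
        SourceDBInstanceIdentifier="social-mysql-master",
        AvailabilityZone=az,
    )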

We've been running read-replicas of our production databases for a couple of years now without any significant issues. All of our sales, marketing, etc. people who need the ability to run queries are provided access to the replica. It's worked quite well and has been stable for the most part. The production databases are locked down so that only our applications can connect to them, and the read-replicas are accessible only via SSL from our office. Setting up the security is pretty important, since you would be creating all the user accounts on the master database and they'd then get replicated to the read-replica.

I think we once saw a read-replica get into a bad state due to a hardware-related issue. The great thing about read-replicas, though, is that you can simply terminate one and create a new one any time you want or need to. As long as the new replica has the exact same instance name as the old one, its DNS, etc. will remain unchanged, so aside from being briefly unavailable, everything should be pretty much transparent to the end users. Once or twice we've also simply rebooted a stuck read-replica and it was able to eventually catch up on its own as well.

There's no way that data on the read-replica can be updated by any method other than processing commands sent from the master database. RDS simply won't allow you to run something
like an insert, update, etc. on a read-replica no matter what permissions the user has. So you don't need to worry about data changing on the read-replica causing things to get out
of sync with the master.

Occasionally the replica can get a bit behind the production database if somebody submits a long running query, but it typically catches back up fairly quickly once the query
completes. In all our production environments we have a few monitors set up to keep an eye on replication and to also check for long running queries. We make use of the
pmp-check-mysql-replication-delay command in the Percona Toolkit for MySQL to keep an eye on replication. It's run every few minutes via Nagios. We also have a custom script that's
run via cron that checks for long running queries. It basically parses the output of the "SHOW FULL PROCESSLIST" command and sends out an e-mail if a query has been running for a
long period of time along with the username of the person running it and the command to kill the query if we decide we need to.
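A minimal sketch of that kind of long-running-query check, in Python with PyMySQL rather than the custom cron script described above; the host, credentials, and 300-second threshold are hypothetical placeholders.

# Minimal sketch: flag long-running queries by parsing SHOW FULL
# PROCESSLIST, assuming a monitoring user with the PROCESS privilege.
import pymysql

THRESHOLD_SECONDS = 300  # hypothetical threshold

conn = pymysql.connect(host="replica.example.com", user="monitor", password="secret")
with conn.cursor() as cursor:
    cursor.execute("SHOW FULL PROCESSLIST")
    # Row layout: Id, User, Host, db, Command, Time, State, Info
    for row in cursor.fetchall():
        thread_id, user, host, db, command, time_s, state, info = row[:8]
        if command == "Query" and time_s > THRESHOLD_SECONDS and info:
            print(f"long-running query ({time_s}s) by {user}: {info}")
            print(f"to kill it: KILL {thread_id};")
conn.close()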

Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations. Applications needing a data structure server will find the Redis engine most useful.
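As a rough illustration of the cache-aside pattern this describes, here is a Python sketch using the redis-py client against a Redis endpoint; the endpoint, key scheme, 60-second TTL, and fetch_profile_from_mysql helper are hypothetical placeholders.

# Minimal cache-aside sketch with redis-py; all names are hypothetical.
import json
import redis

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def fetch_profile_from_mysql(user_id):
    # Placeholder for the I/O-intensive database query being cached.
    return {"id": user_id, "name": "example"}

def get_profile(user_id):
    key = "profile:%s" % user_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit: skip the database
    profile = fetch_profile_from_mysql(user_id)
    cache.setex(key, 60, json.dumps(profile))   # cache for 60 seconds
    return profile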




Related Questions


Question : QuickTechie.com is setting up a multi-site solution where the application runs on premises as well as on AWS to achieve the minimum RTO. They use Oracle as the backend database. Select the configuration which is not a requirement of the multi-site solution scenario.
1. Configure data replication based on RTO.
2. Setup a single DB instance which will be accessed by both sites.
3.
4. Setup a weighted DNS service like Route 53 to route traffic across sites.



Question : QuickTechie.com is hosting a scalable web application using AWS, and has configured an internet-facing ELB and Auto Scaling to make the application scalable. Which of the below mentioned statements is required to be followed when the application is planning to host a website on VPC?
1. The ELB can be in a public or a private subnet, but should have an ENI which is attached to an Elastic IP.
2. The ELB must not be in any subnet; instead it should face the internet directly.
3.
4. The ELB must be in a public subnet of the VPC to face the internet traffic.


Question : www.HadoopExam.com is planning to create a secure, scalable and HA system on the AWS VPC.
Which of the below mentioned configurations will not help HadoopExam to achieve their goals if they are planning to use the AWS VPC?
1. Setup CloudWatch which will monitor the AWS instances and trigger an alert to the Auto Scaling group when there is some odd behaviour.
2. Setup Auto Scaling with multiple public subnets in separate zones from the same VPC.
3.
4. Setup the internet facing ELB with VPC which is facing external traffic and has all instances registered with it.


Question : Which of the following tenancy attributes of a VPC ensures that all instances launched in the VPC run as single-tenancy instances?

1. default
2. dedicated
3.
4. None



Question : One of the AWS account owners faced a major challenge in June when his account was hacked and the hacker deleted all the data from his AWS account. This resulted in a major blow to the business. Which of the below mentioned steps may not help in preventing this action?
1. Take a backup of the critical data to an offsite / on-premises location.
2. Create an AMI and a snapshot of the data at regular intervals, and keep a copy in separate regions.
3.
4. Do not share the AWS access and secret access keys with others, and do not store them inside programs; instead, use IAM roles.




Question : QuickTechie.com is hosting a scalable "Polling of the new News" web application using AWS. The organization has configured ELB
and Auto Scaling to make the application scalable. Which of the below mentioned statements is not required to be
followed for ELB when the application is planning to host a web application on VPC?
1. Configure the security group rules and network ACLs to allow traffic to be routed between the subnets in the VPC.
2. The internet facing ELB should have a route table associated with the internet gateway.
3.
4. The ELB and all the instances should be in the same subnet.