
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : You are designing network connectivity for your fat client application. The application is designed for business travelers who must be able to connect
to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet. You do not want to publish the application on the Internet.

Which network design meets the above requirements while minimizing deployment and operational costs?

1. Implement AWS Direct Connect, and create a private interface to your VPC. Create a public subnet and place your application servers in it.
2. Implement Elastic Load Balancing with an SSL listener that terminates the back-end connection to the application.
3. [option text not available in the source]
4. Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your
VPC and place your application servers in it.

Correct Answer : 4
You do not want to publish the application on the Internet, so do not place the application servers in a public subnet. Amazon Virtual Private Cloud (Amazon VPC) provides customers with tremendous network
routing flexibility. The following describes how a customer can create a secure SSL tunnel (using OpenVPN) to connect multiple VPCs into a larger virtual private network that allows
instances in each VPC to seamlessly connect to each other using private IP addresses.
Internet Gateway (IGW)

The IGW is an egress point from a customer's VPC that allows public Elastic IP (EIP) addresses to be mapped to VPC instances. IGW provides public address mapping that allows VPN
instances in each VPC to communicate with each other. When communicating between VPCs in different AWS Regions, the Internet gateway routes the VPN connections over the Internet.
However, when communicating between VPCs in the same AWS Region, the IGW routes traffic directly between the VPCs using the AWS network.

SSL Connection

An OpenVPN SSL connection between two EC2 VPN instances is used to virtually connect the two VPC networks.
1. The SSL connections require each VPN instance to live in a public subnet and have an Elastic IP address.
2. VPN instances are a potential single point of failure. Please see the Appendix for a high-level HA design for this component.
3. [item not available in the source]
4. This guide assumes you already have two or more VPCs created. For instructions on creating VPCs, please see the Amazon Virtual Private Cloud Getting Started Guide.
5. In this scenario, AWS manages the IGW and the customer is responsible for managing their EC2 instances and the VPN connections.
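For concreteness, here is a minimal boto3 sketch of the layout the correct option describes: an SSL VPN instance (for example, OpenVPN) in a public subnet with an Elastic IP, and the application servers kept in a private subnet. The VPC ID, CIDR blocks, AMI, and security group IDs are placeholders, and installing and configuring the VPN software on the instance is a separate step not shown here.

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"   # placeholder VPC

# Public subnet for the SSL VPN instance; private subnet for the app servers.
public_subnet = ec2.create_subnet(VpcId=VPC_ID, CidrBlock="10.0.1.0/24")["Subnet"]
private_subnet = ec2.create_subnet(VpcId=VPC_ID, CidrBlock="10.0.2.0/24")["Subnet"]

# Launch the VPN instance in the public subnet (AMI and security group are placeholders).
vpn = ec2.run_instances(
    ImageId="ami-00000000000000000",
    InstanceType="t3.small",
    MinCount=1,
    MaxCount=1,
    SubnetId=public_subnet["SubnetId"],
    SecurityGroupIds=["sg-00000000000000000"],  # allow the VPN port, e.g. UDP 1194 or TCP 443
)["Instances"][0]

# Give the VPN endpoint a public Elastic IP so travelling users can reach it.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId=vpn["InstanceId"], AllocationId=eip["AllocationId"])

# Disable source/destination checking so the instance can forward VPN traffic.
ec2.modify_instance_attribute(
    InstanceId=vpn["InstanceId"], SourceDestCheck={"Value": False}
)

# Application servers are then launched with SubnetId=private_subnet["SubnetId"],
# so they are never exposed directly to the Internet.
```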







Question : You are building a website that will retrieve and display highly sensitive information to users. The amount of traffic the site will receive is known
and not expected to fluctuate. The site will leverage SSL to protect the communication between the clients and the web servers. Due to the nature of the site you
are very concerned about the security of your SSL private key and want to ensure that the key cannot be accidentally or intentionally moved outside your
environment. Additionally, while the data the site will display is stored on an encrypted EBS volume, you are also concerned that the web servers' logs might
contain some sensitive information; therefore, the logs must be stored so that they can only be decrypted by employees of your company. Which of these architectures
meets all of the requirements?

1. Use Elastic Load Balancing to distribute traffic to a set of web servers. To protect the SSL private key, upload the key to the load balancer and configure the load
balancer to offload the SSL traffic. Write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.

2. Use Elastic Load Balancing to distribute traffic to a set of web servers. Use TCP load balancing on the load balancer and configure your web servers to retrieve the
private key from a private Amazon S3 bucket on boot. Write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption.

3. Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the
SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption.

4. Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the
SSL transactions, and write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.


Correct Answer : 3

Explanation: Web server logs won't persist on an ephemeral volume, and a private key stored in Amazon S3 won't provide the required level of security. That leaves the option that performs the SSL transactions in AWS CloudHSM and writes the logs to a private Amazon S3 bucket with server-side encryption.
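As a small illustration of the logging half of that answer, the sketch below uses boto3 to write a web server log file to a private S3 bucket with server-side encryption. The bucket name, object key, and KMS key alias are placeholders; using a customer-managed KMS key (rather than the default S3-managed key) is one way to ensure that only your own employees can be granted decrypt permission through the key policy. The CloudHSM-backed SSL termination is configured on the load balancer or web servers and is not shown.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket and customer-managed KMS key; the key policy controls who may decrypt.
LOG_BUCKET = "example-company-web-logs"
KMS_KEY_ID = "alias/example-log-key"

with open("/var/log/httpd/access_log", "rb") as log_file:
    s3.put_object(
        Bucket=LOG_BUCKET,
        Key="web-01/2024-01-01/access_log",
        Body=log_file,
        ServerSideEncryption="aws:kms",   # server-side encryption with a KMS key
        SSEKMSKeyId=KMS_KEY_ID,
    )
```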








Question : Your company's on-premises content management system has the following architecture:
- Application Tier - Java code on a JBoss application server
- Database Tier - Oracle database regularly backed up to Amazon Simple Storage Service (S3) using the Oracle RMAN backup utility
- Static Content - stored on a 512 GB gateway-stored Storage Gateway volume attached to the application server via the iSCSI interface

Which AWS based disaster recovery strategy will give you the best RTO?


1. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Generate an EBS volume of static content from the Storage
Gateway and attach it to the JBoss EC2 server.

2. Deploy the Oracle database on RDS. Deploy the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon Glacier. Generate an EBS volume of static content
from the Storage Gateway and attach it to the JBoss EC2 server.

3. Deploy the Oracle database on RDS. Deploy the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content by attaching an AWS Storage
Gateway running on Amazon EC2 as an iSCSI volume to the JBoss EC2 server.
4. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content from an AWS Storage Gateway-VTL
running on Amazon EC2



Correct Answer : 3
Explanation: An Amazon RDS instance will be quicker to create than Oracle on EC2, which improves the RTO. Although an AWS Storage Gateway normally runs on premises, it can also be launched on an Amazon EC2 instance, allowing the static content to be reattached as an iSCSI volume during recovery.
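To make the first point concrete, the following boto3 sketch provisions an Oracle database on Amazon RDS, which is much faster than building and licensing an Oracle server on EC2 by hand. The identifier, instance class, credentials, and storage size are placeholders, and loading the backed-up data into the new instance is a separate step not shown here.

```python
import boto3

rds = boto3.client("rds")

# Placeholder identifiers and credentials; RDS handles the OS, Oracle install and patching.
rds.create_db_instance(
    DBInstanceIdentifier="cms-oracle-dr",
    Engine="oracle-se2",
    LicenseModel="license-included",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,              # GiB
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    MultiAZ=False,                     # a single-AZ restore target is usually enough for DR
)

# Wait until the instance is reachable before restoring data into it.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="cms-oracle-dr")
```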


Related Questions


Question : You are implementing AWS Direct Connect. You intend to use AWS public service endpoints, such as Amazon S3, across the AWS Direct Connect link. You want other Internet
traffic to use your existing link to an Internet Service Provider. What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?
1. Configure a public Interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default
route to AWS using BGP.
2. Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific
routes to your network in your VPC.
3. Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise specific routes for your network
to AWS.
4. Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.
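For reference, a public virtual interface on a Direct Connect connection can be created through the Direct Connect API; the boto3 sketch below shows the general shape of the call. The connection ID, VLAN, ASN, peer addresses, and advertised prefixes are placeholders, and the BGP advertisement of your own routes toward AWS is configured on your router, not through this API.

```python
import boto3

dx = boto3.client("directconnect")

# Placeholder connection and BGP details for a public virtual interface.
dx.create_public_virtual_interface(
    connectionId="dxcon-ffffffff",
    newPublicVirtualInterface={
        "virtualInterfaceName": "public-vif-s3",
        "vlan": 101,
        "asn": 65000,                          # your on-premises BGP ASN
        "authKey": "REPLACE_ME",
        "amazonAddress": "203.0.113.1/30",
        "customerAddress": "203.0.113.2/30",
        "addressFamily": "ipv4",
        # Public prefixes you will announce to AWS over this interface.
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/24"}],
    },
)
```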



Question : An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon
DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instance access to the DynamoDB tables without exposing API
credentials?
1. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the
application instances by referencing an instance profile.
2. Use the Parameters section in the CloudFormation template to have the user input the access and secret keys from an already created IAM user that has the permissions
required to read and write from the required DynamoDB table.
3. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the
instance profile property of the application instance.
4. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt
function to retrieve the access and secret keys, and pass them to the application instance through user data.
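To illustrate the role-plus-instance-profile mechanism that several of these options describe, here is a minimal CloudFormation template fragment written out as a Python dictionary so it can be printed with json.dumps. The table name, AMI, and logical resource names are placeholders; the point is that the instance receives temporary credentials through its instance profile rather than having access keys embedded in the template.

```python
import json

# Hypothetical template fragment: an IAM role, an instance profile that wraps it,
# and an EC2 instance that references the profile instead of embedding API keys.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "Policies": [{
                    "PolicyName": "DynamoDBReadWrite",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{
                            "Effect": "Allow",
                            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                       "dynamodb:UpdateItem", "dynamodb:Query"],
                            "Resource": {"Fn::Sub":
                                "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/AppTable"},
                        }],
                    },
                }],
            },
        },
        "AppInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "AppRole"}]},
        },
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-00000000000000000",
                "InstanceType": "t3.micro",
                "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
            },
        },
    },
}

print(json.dumps(template, indent=2))
```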


Question : Your company has an on-premises multi-tier PHP web application, which recently experienced downtime due to a large burst in web traffic caused by a company announcement.
Over the coming days you are expecting similar announcements to drive similar unpredictable bursts, and you are looking for ways to quickly improve your infrastructure's
ability to handle unexpected increases in traffic. The application currently consists of two tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, as
well as a database tier, which hosts a Linux server running a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability
of your application to handle the traffic bursts in the short timeframe required?
1. Offload traffic from the on-premises environment. Set up a CloudFront distribution and configure CloudFront to cache objects from a custom origin. Choose to customize your
object cache behavior, and select a TTL that objects should exist in cache.
2. Migrate to AWS. Use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group, which uses the imported AMI to scale the web
tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.
3. Failover environment: Create an S3 bucket and configure it for website hosting. Migrate your DNS to Amazon Route 53 using zone file import, and leverage Route 53 DNS
failover to failover to the S3 hosted website.
4. Hybrid environment. Create an AMI which can be used to launch web servers in EC2. Create an Auto Scaling group which uses the AMI to scale the web tier based on
incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
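Several of these options rely on the same building block: an Auto Scaling group launched from an AMI of the existing web server. The boto3 sketch below shows that piece in isolation; the AMI ID (assumed to come from VM Import/Export), security group, subnets, and target group ARN are placeholders, and the sketch is not meant to assert which scenario above is the correct answer.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configuration built from the imported web server AMI (placeholder IDs).
autoscaling.create_launch_configuration(
    LaunchConfigurationName="php-web-imported",
    ImageId="ami-00000000000000000",        # AMI produced by VM Import/Export
    InstanceType="m5.large",
    SecurityGroups=["sg-00000000000000000"],
)

# Auto Scaling group that can absorb traffic bursts behind a load balancer.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="php-web-asg",
    LaunchConfigurationName="php-web-imported",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-11111111,subnet-22222222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/php-web/0123456789abcdef"
    ],
)

# A target-tracking policy scales on average CPU as a rough proxy for incoming traffic.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="php-web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```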



Question : You're running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write
to POSIX-compatible block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data
while the backups are taking place. What backup solution would be most appropriate for this use case?
1. Use Storage Gateway and configure it to use Gateway Cached volumes.
2. Configure your backup software to use S3 as the target for your data backups.
3. [option text not available in the source]
4. Use Storage Gateway and configure it to use Gateway Stored volumes.
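Options 1 and 4 differ only in the Storage Gateway volume mode. As a rough sketch, assuming the standard CreateCachediSCSIVolume and CreateStorediSCSIVolume APIs and with all identifiers and sizes as placeholders, the boto3 calls below show how each kind of volume is created: cached volumes keep the full dataset in S3 with a local cache on premises, while stored volumes keep the full dataset locally and back it up to S3 asynchronously.

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

GATEWAY_ARN = "arn:aws:storagegateway:us-east-1:111111111111:gateway/sgw-12345678"  # placeholder

# Gateway-cached volume: primary data lives in S3, frequently used blocks are cached locally.
sgw.create_cached_iscsi_volume(
    GatewayARN=GATEWAY_ARN,
    VolumeSizeInBytes=10 * 1024**4,           # placeholder size (10 TiB)
    TargetName="backup-cached",
    NetworkInterfaceId="10.0.0.10",           # gateway VM network interface
    ClientToken=str(uuid.uuid4()),
)

# Gateway-stored volume: primary data stays on the local disk, snapshots go to S3.
sgw.create_stored_iscsi_volume(
    GatewayARN=GATEWAY_ARN,
    DiskId="pci-0000:03:00.0-scsi-0:0:0:0",   # placeholder local disk
    PreserveExistingData=False,
    TargetName="backup-stored",
    NetworkInterfaceId="10.0.0.10",
)
```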


Question : An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources
running within the enterprise's account. The enterprise has internal security policies that require that any outside access to their environment conform to the principle of least
privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of
these conditions?

1. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
2. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new
access and secret key for the user, and provide these credentials to the SaaS provider.
3. Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the
SaaS application.
4. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS
provider to use when launching their application instances.
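For background on the least-privilege, third-party-only controls this question asks about, the sketch below creates a cross-account IAM role whose trust policy trusts the SaaS provider's AWS account and requires an external ID (the mechanism AWS documents against the "confused deputy" problem), plus an inline policy limited to EC2 discovery calls. The account ID, external ID, and role name are placeholders; this illustrates the pattern rather than asserting which option is correct.

```python
import json
import boto3

iam = boto3.client("iam")

SAAS_ACCOUNT_ID = "222222222222"      # placeholder: the SaaS provider's AWS account
EXTERNAL_ID = "example-external-id"   # placeholder: unique value agreed with the provider

# Trust policy: only the SaaS provider's account, and only with the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SAAS_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam.create_role(
    RoleName="SaaSDiscoveryRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least-privilege permissions: read-only EC2 discovery calls only.
iam.put_role_policy(
    RoleName="SaaSDiscoveryRole",
    PolicyName="Ec2DiscoveryReadOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances", "ec2:DescribeVolumes", "ec2:DescribeTags"],
            "Resource": "*",
        }],
    }),
)
```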



Question : Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple
Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency
model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)

1. Deploy ElastiCache in-memory cache running in each availability zone
2. Implement sharding to distribute load to multiple RDS MySQL instances
3. [option text not available in the source]
4. Add an RDS MySQL read replica in each availability zone
1. 1,2
2. 2,3
3. [answer combination not available in the source]
4. 1,4
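The two read-offloading mechanisms mentioned in these options, an in-memory cache and RDS MySQL read replicas, can both be provisioned with a few API calls; the boto3 sketch below shows one of each. The cluster identifiers, node types, source instance, and Availability Zone are placeholders, and the sketch illustrates the mechanisms only, not which answer combination is correct.

```python
import boto3

elasticache = boto3.client("elasticache")
rds = boto3.client("rds")

# In-memory cache in a given Availability Zone to absorb the small, frequent reads.
elasticache.create_cache_cluster(
    CacheClusterId="social-cache-az-a",          # placeholder
    Engine="redis",
    CacheNodeType="cache.m5.large",
    NumCacheNodes=1,
    PreferredAvailabilityZone="us-east-1a",
)

# Read replica of the existing MySQL instance in the same Availability Zone.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="social-mysql-replica-az-a",   # placeholder
    SourceDBInstanceIdentifier="social-mysql-primary",  # placeholder existing instance
    DBInstanceClass="db.m5.xlarge",
    AvailabilityZone="us-east-1a",
)
```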